hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
623101e3c7c215abce7735b3565c0648a3b39c34 | 68 | py | Python | __init__.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 84 | 2019-06-11T08:14:52.000Z | 2022-02-17T13:58:20.000Z | __init__.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 16 | 2019-06-15T14:30:14.000Z | 2020-07-26T04:21:42.000Z | __init__.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 38 | 2019-06-11T11:44:12.000Z | 2021-11-20T20:55:17.000Z | from . import common
from . import models
from . import controllers
| 17 | 25 | 0.779412 | 9 | 68 | 5.888889 | 0.555556 | 0.566038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 68 | 3 | 26 | 22.666667 | 0.946429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
624d91dcfcaf93dc5b7a8824780a098e381009c4 | 71 | py | Python | netlas/__init__.py | netlas-io/netlas-python | cb33beccccc8b7bafb9fe32aaa21c3d8feb21b0c | [
"MIT"
] | 4 | 2021-06-22T12:22:22.000Z | 2022-03-21T15:55:03.000Z | netlas/__init__.py | netlas-io/netlas-python | cb33beccccc8b7bafb9fe32aaa21c3d8feb21b0c | [
"MIT"
] | null | null | null | netlas/__init__.py | netlas-io/netlas-python | cb33beccccc8b7bafb9fe32aaa21c3d8feb21b0c | [
"MIT"
] | null | null | null | from netlas.client import Netlas
from netlas.exception import APIError
| 23.666667 | 37 | 0.859155 | 10 | 71 | 6.1 | 0.6 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 38 | 35.5 | 0.968254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
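Several rows in this dump (vsf-odoo and netlas above, and SW, TedSeg, and peacock further down) are one- to three-line `__init__.py` files whose only job is re-exporting names. A minimal sketch of why, based on the netlas row; it assumes the `netlas` package is installed (`pip install netlas`):

```python
# The re-export in netlas/__init__.py (from netlas.client import Netlas)
# flattens the import path, so both of these resolve to the same class:
from netlas import Netlas         # short path, enabled by the re-export
from netlas.client import Netlas  # full path to the defining module
```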
65762b0141a955045b62e73dd3810bd8b8560502 | 10,085 | py | Python | wsbtrading/maths/tests/test_maths.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 14 | 2021-01-25T00:01:39.000Z | 2021-08-12T09:20:39.000Z | wsbtrading/maths/tests/test_maths.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 15 | 2021-01-24T20:18:13.000Z | 2021-02-04T21:54:27.000Z | wsbtrading/maths/tests/test_maths.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 3 | 2021-01-27T14:03:02.000Z | 2021-08-29T04:13:26.000Z | import unittest
import math
import pandas as pd
from pandas._testing import assert_frame_equal
from wsbtrading import maths
class TestDivision(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['timestamp', 'low', 'high', 'low_perc_high']
data = [
(1, 1, 1, 1),
(2, 1, 2, 0.5),
(3, 3, 9, 0.3333333333333333),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_divide_kernel(self):
"""Ensure we can divide properly."""
actual = maths.divide_kernel(numerator=1, denominator=1)
assert actual == 1
actual = maths.divide_kernel(numerator=3, denominator=9)
assert math.isclose(actual, 0.33333333333)
actual = maths.divide_kernel(numerator=3, denominator=0)
assert actual == 0
def test_divide(self):
"""Ensures we can correctly divide when using a pandas DF."""
mock_df = self.expected_df.drop('low_perc_high', axis=1)
actual = maths.divide(df=mock_df, numerator_col='low', denominator_col='high')
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestSma(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', '2sma']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 30, 25.0),
('2017-01-05', 42, 32, 40, 35.0),
('2017-01-06', 52, 45, 50, 45.0),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_sma(self):
"""Ensures we correctly calculate the simple moving average (SMA)."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.sma(df=mock_df, metric_col='Close', rolling_window=2)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestStandardDeviation(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', '2stddev']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 7.778175),
('2017-01-05', 42, 32, 40, 6.363961),
('2017-01-06', 52, 45, 51, 7.778175),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_rolling_stddev(self):
"""Ensures we correctly calculate the simple moving average (SMA)."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.rolling_stddev(df=mock_df, metric_col='Close', rolling_window=2)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestLowerBand(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'lower_band']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 9.943651),
('2017-01-05', 42, 32, 40, 22.772078),
('2017-01-06', 52, 45, 51, 29.943651),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_lower_band(self):
"""Ensures we correctly calculate the lower band."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.lower_band(df=mock_df, metric_col='Close', rolling_window=2).drop(['2sma', '2stddev'], axis=1)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestUpperBand(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'upper_band']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 41.056349),
('2017-01-05', 42, 32, 40, 48.227922),
('2017-01-06', 52, 45, 51, 61.056349),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_upper_band(self):
"""Ensures we correctly calculate the upper band."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.upper_band(df=mock_df, metric_col='Close', rolling_window=2).drop(['2sma', '2stddev'], axis=1)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestTrueRange(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'true_range']
data = [
('2017-01-03', 22, 20, 20, 2),
('2017-01-04', 32, 20, 31, 12),
('2017-01-05', 42, 32, 40, 10),
('2017-01-06', 52, 45, 51, 7),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_true_range(self):
"""Ensures we correctly calculate the true range."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.true_range(df=mock_df, low_col='Low', high_col='High')
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestAtr(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'ATR']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 7.0),
('2017-01-05', 42, 32, 40, 11.0),
('2017-01-06', 52, 45, 51, 8.5),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_avg_true_range(self):
"""Ensures we correctly calculate the average true range."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.avg_true_range(df=mock_df, low_col='Low', high_col='High', rolling_window=2)\
.drop(['true_range'], axis=1)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestLowerKeltner(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'lower_keltner']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 15.00),
('2017-01-05', 42, 32, 40, 19.00),
('2017-01-06', 52, 45, 51, 32.75),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_lower_keltner(self):
"""Ensures we correctly calculate the lower Keltner."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.lower_keltner(df=mock_df, metric_col='Close', low_col='Low', high_col='High', rolling_window=2) \
.drop(['2sma', 'true_range', 'ATR'], axis=1)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestUpperKeltner(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'upper_keltner']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 36.00),
('2017-01-05', 42, 32, 40, 52.00),
('2017-01-06', 52, 45, 51, 58.25),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_upper_keltner(self):
"""Ensures we correctly calculate the upper Keltner."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.upper_keltner(df=mock_df, metric_col='Close', low_col='Low', high_col='High', rolling_window=2) \
.drop(['2sma', 'true_range', 'ATR'], axis=1)
assert_frame_equal(actual, self.expected_df, check_dtype=True)
class TestIsInSqueeze(unittest.TestCase):
def setUp(self) -> None:
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'upper_keltner']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 36.00),
('2017-01-05', 42, 32, 40, 52.00),
('2017-01-06', 52, 45, 51, 58.25),
]
# yapf: enable
self.expected_df = pd.DataFrame(data=data, columns=schema)
def test_is_in_squeeze(self):
"""Ensures we correctly calculate when a stock is squeezing."""
mock_df = self.expected_df[['Date', 'High', 'Low', 'Close']]
actual = maths.is_in_squeeze(df=mock_df, metric_col='Close', low_col='Low', high_col='High', rolling_window=2)
self.assertFalse(actual)
import math
import pandas as pd
from pandas._testing import assert_frame_equal
from wsbtrading import maths
# yapf: disable
schema = ['Date', 'High', 'Low', 'Close', 'lower_keltner']
data = [
('2017-01-03', 22, 20, 20, None),
('2017-01-04', 32, 20, 31, 15.00),
('2017-01-05', 42, 32, 40, 19.00),
('2017-01-06', 52, 45, 51, 32.75),
]
# yapf: enable
expected_df = pd.DataFrame(data=data, columns=schema)
mock_df = expected_df[['Date', 'High', 'Low', 'Close']]
metric_col='Close'
low_col='Low'
high_col='High'
rolling_window=2
lower_band_df = maths.lower_band(df=mock_df, metric_col=metric_col, rolling_window=rolling_window)
upper_band_df = maths.upper_band(df=lower_band_df, metric_col=metric_col, rolling_window=rolling_window)
lower_keltner_df = maths.lower_keltner(df=upper_band_df, metric_col=metric_col, low_col=low_col, high_col=high_col,
rolling_window=rolling_window)
upper_keltner_df = maths.upper_keltner(df=lower_keltner_df, metric_col=metric_col, low_col=low_col, high_col=high_col,
rolling_window=rolling_window)
def is_squeeze(df):
return df['lower_band'].iloc[-3] > df['lower_keltner'].iloc[-3] and df['upper_band'].iloc[-3] < df['upper_keltner'].iloc[-3]
is_squeeze(df=upper_keltner_df)
actual = maths.is_in_squeeze(df=mock_df, metric_col='Close', low_col='Low', high_col='High', rolling_window=2)
actual
assert actual is False
| 35.636042 | 128 | 0.589588 | 1,366 | 10,085 | 4.189605 | 0.106149 | 0.041936 | 0.070942 | 0.055915 | 0.835576 | 0.821073 | 0.791368 | 0.73126 | 0.705399 | 0.680762 | 0 | 0.102337 | 0.249083 | 10,085 | 282 | 129 | 35.762411 | 0.653374 | 0.084482 | 0 | 0.480874 | 0 | 0 | 0.116985 | 0 | 0 | 0 | 0 | 0 | 0.087432 | 1 | 0.120219 | false | 0 | 0.04918 | 0.005464 | 0.229508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
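The file above ends in what looks like a pasted scratch session re-deriving the squeeze check that `TestIsInSqueeze` exercises. Below is a hedged, self-contained sketch (plain pandas, not the `wsbtrading.maths` API) of that check; the multipliers (2 for the Bollinger width, 1.5 for the Keltner width) and the simplified true range (high minus low) are inferred from the fixtures, and they reproduce the expected columns exactly, e.g. `lower_keltner == 32.75` on the last row:

```python
import pandas as pd

df = pd.DataFrame({
    'High':  [22, 32, 42, 52],
    'Low':   [20, 20, 32, 45],
    'Close': [20, 31, 40, 51],
})

window = 2
sma = df['Close'].rolling(window).mean()
stddev = df['Close'].rolling(window).std()   # sample stddev (ddof=1), as in the fixtures
lower_band = sma - 2 * stddev                # Bollinger bands
upper_band = sma + 2 * stddev
true_range = df['High'] - df['Low']          # simplified TR, matching TestTrueRange
atr = true_range.rolling(window).mean()      # average true range
lower_keltner = sma - 1.5 * atr              # Keltner channel
upper_keltner = sma + 1.5 * atr

# A "squeeze": the Bollinger bands sit inside the Keltner channel.
in_squeeze = (lower_band > lower_keltner) & (upper_band < upper_keltner)
print(bool(in_squeeze.iloc[-1]))  # False, matching test_is_in_squeeze
```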
65de24b9c57e6507853241eef4833d0e7b25366d | 25 | py | Python | tests/test_json.py | douglasdavis/intake-awkward | ef67b27bedb53cb76e1e61eb77416636e56bff2d | [
"BSD-3-Clause"
] | 3 | 2020-09-29T13:54:28.000Z | 2020-12-17T13:29:51.000Z | tests/test_json.py | douglasdavis/intake-awkward | ef67b27bedb53cb76e1e61eb77416636e56bff2d | [
"BSD-3-Clause"
] | 3 | 2020-04-08T10:45:35.000Z | 2020-08-15T16:05:41.000Z | tests/test_plugin.py | pytest-dev/pytest-plus | 21a0f26340870631fb706c5a16ff105529a40bd8 | [
"MIT"
] | null | null | null | def test_one():
pass
| 8.333333 | 15 | 0.6 | 4 | 25 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28 | 25 | 2 | 16 | 12.5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
65e753e8d3bbbd1f6e856dd3eefe869b2aff8511 | 376 | py | Python | locust/rpc/__init__.py | RollForReflex/locust | 4ffbe1579d0d9f2534f9da33a7558552c9e67076 | [
"MIT"
] | 1 | 2017-05-26T04:26:14.000Z | 2017-05-26T04:26:14.000Z | locust/rpc/__init__.py | agoragames/locust | 465100c903cc8558e408760f4e49792798dd7b16 | [
"MIT"
] | null | null | null | locust/rpc/__init__.py | agoragames/locust | 465100c903cc8558e408760f4e49792798dd7b16 | [
"MIT"
] | 1 | 2021-09-08T11:46:00.000Z | 2021-09-08T11:46:00.000Z | import warnings
try:
    from . import zmqrpc as rpc
except ImportError:
warnings.warn("WARNING: Using pure Python socket RPC implementation instead of zmq. If running in distributed mode, this could cause a performance decrease. We recommend you to install the pyzmq python package when running in distributed mode.")
    from . import socketrpc as rpc
from .protocol import Message
| 37.6 | 249 | 0.789894 | 54 | 376 | 5.5 | 0.777778 | 0.03367 | 0.13468 | 0.161616 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175532 | 376 | 9 | 250 | 41.777778 | 0.958065 | 0 | 0 | 0 | 0 | 0.142857 | 0.606383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
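The locust row above demonstrates the optional-dependency fallback pattern: try the fast implementation, warn and fall back otherwise. A generic, runnable sketch of the same pattern, using `ujson`/`json` as stand-ins so it works whether or not `ujson` is installed:

```python
import warnings

try:
    import ujson as json  # optional accelerated implementation
except ImportError:
    warnings.warn("ujson not installed; falling back to the slower stdlib "
                  "json module.")
    import json

print(json.dumps({"fallback": "works"}))
```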
02a73897a5b51e1c793e8239d6c54b417b4714a7 | 35 | py | Python | SW/__init__.py | ddvlamin/SW-transformation | d7c46c75df89140ce5498f9800c319be5a75f0bb | [
"MIT"
] | 1 | 2021-05-19T07:22:15.000Z | 2021-05-19T07:22:15.000Z | SW/__init__.py | ddvlamin/SW-transformation | d7c46c75df89140ce5498f9800c319be5a75f0bb | [
"MIT"
] | null | null | null | SW/__init__.py | ddvlamin/SW-transformation | d7c46c75df89140ce5498f9800c319be5a75f0bb | [
"MIT"
] | 3 | 2019-02-11T18:46:07.000Z | 2021-02-18T10:09:20.000Z | from .SW import SW_transformation
| 17.5 | 34 | 0.828571 | 5 | 35 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 1 | 35 | 35 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f30e9271927c154e0e53c1cb94c18bf30f3c4d60 | 66 | py | Python | TedSeg/__init__.py | tadeephuy/CoFo | 28461e923f112182887d66d1db499da7a2535b28 | [
"MIT"
] | 2 | 2022-02-15T07:58:29.000Z | 2022-02-25T10:08:59.000Z | TedSeg/__init__.py | tadeephuy/CoFo | 28461e923f112182887d66d1db499da7a2535b28 | [
"MIT"
] | null | null | null | TedSeg/__init__.py | tadeephuy/CoFo | 28461e923f112182887d66d1db499da7a2535b28 | [
"MIT"
] | null | null | null | from .dataset import *
from .learner import *
from .utils import * | 22 | 22 | 0.742424 | 9 | 66 | 5.444444 | 0.555556 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 66 | 3 | 23 | 22 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b82324a6db43a8b8604a05f79ff7c0bb562c4595 | 38 | py | Python | peacock/syntax/__init__.py | jeremyclewell/peacock | 74a8a5ec4cc8fd7ebcdcc9ece41cf2e4d66719d6 | [
"MIT"
] | null | null | null | peacock/syntax/__init__.py | jeremyclewell/peacock | 74a8a5ec4cc8fd7ebcdcc9ece41cf2e4d66719d6 | [
"MIT"
] | null | null | null | peacock/syntax/__init__.py | jeremyclewell/peacock | 74a8a5ec4cc8fd7ebcdcc9ece41cf2e4d66719d6 | [
"MIT"
] | null | null | null | from highlight import DataHighlighter
| 19 | 37 | 0.894737 | 4 | 38 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b838de5d7f2f474f917e18a309a9724840ecce57 | 78 | py | Python | CLITests.py | josephxsxn/alchemists_notepad | 8bf484d294a5b79ebeee2befbfaf1f32871b2882 | [
"Apache-2.0"
] | null | null | null | CLITests.py | josephxsxn/alchemists_notepad | 8bf484d294a5b79ebeee2befbfaf1f32871b2882 | [
"Apache-2.0"
] | 4 | 2015-12-29T19:20:29.000Z | 2015-12-30T19:44:14.000Z | CLITests.py | josephxsxn/alchemists_notepad | 8bf484d294a5b79ebeee2befbfaf1f32871b2882 | [
"Apache-2.0"
] | null | null | null | from UI.CLI import CLI
from UI.CLI import CLIOption
cli = CLI()
cli.shell()
| 11.142857 | 28 | 0.717949 | 14 | 78 | 4 | 0.428571 | 0.214286 | 0.321429 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 78 | 6 | 29 | 13 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b84288f2a18bf0140b8f75eccc2bf0b842e86baf | 63 | py | Python | HelloWorld.py | andryuha77/Emerging-Technologies-problem-Sheet1 | 036a132032a4dc3110e5ea78702e2c7a921b071d | [
"MIT"
] | null | null | null | HelloWorld.py | andryuha77/Emerging-Technologies-problem-Sheet1 | 036a132032a4dc3110e5ea78702e2c7a921b071d | [
"MIT"
] | null | null | null | HelloWorld.py | andryuha77/Emerging-Technologies-problem-Sheet1 | 036a132032a4dc3110e5ea78702e2c7a921b071d | [
"MIT"
] | null | null | null | # Print Hello world
# Date: 21/09/2017
print ("hello, world!")
| 15.75 | 23 | 0.666667 | 10 | 63 | 4.2 | 0.7 | 0.47619 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 0.15873 | 63 | 3 | 24 | 21 | 0.641509 | 0.539683 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b85041e2c03168b4da3a3be7995979be9edd9e63 | 174 | py | Python | ros/src/tl_detector/TrafficLightDetector.py | frobinet/cad-final-project | d5b10ba80580942103b201cd52be50d18138a79a | [
"MIT"
] | 2 | 2020-12-21T13:16:52.000Z | 2020-12-21T13:16:59.000Z | ros/src/tl_detector/TrafficLightDetector.py | frobinet/cad-final-project | d5b10ba80580942103b201cd52be50d18138a79a | [
"MIT"
] | null | null | null | ros/src/tl_detector/TrafficLightDetector.py | frobinet/cad-final-project | d5b10ba80580942103b201cd52be50d18138a79a | [
"MIT"
] | 2 | 2021-01-15T09:49:13.000Z | 2021-01-24T15:21:12.000Z | from styx_msgs.msg import TrafficLight
import cv2
class TrafficLightDetector():
def detect_state(self, camera_frame):
# TODO
return TrafficLight.UNKNOWN
| 21.75 | 41 | 0.735632 | 20 | 174 | 6.25 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.206897 | 174 | 7 | 42 | 24.857143 | 0.898551 | 0.022989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
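The `detect_state` stub above is left as a TODO that always returns `TrafficLight.UNKNOWN`. As a hedged illustration only (the thresholds and approach are assumptions, not this repository's actual method), one common way such a stub gets filled in is an HSV color mask over the camera frame:

```python
import numpy as np
import cv2

def looks_red(frame_bgr, min_pixels=50):
    """Crude red-light heuristic: count red-ish pixels after an HSV threshold."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Low-hue red band only; fuller detectors also check the ~170-180 hue band.
    mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
    return cv2.countNonZero(mask) >= min_pixels

print(looks_red(np.zeros((64, 64, 3), dtype=np.uint8)))  # False: all-black frame
```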
b89d04d1841eebea8b303e01fd250ed5ceaea40b | 15,315 | py | Python | tests/text/test_case.py | spack971/none | 6313dd7d7095b301e8d49a38d1b39c9080008ae0 | [
"MIT"
] | 1 | 2020-09-28T17:57:33.000Z | 2020-09-28T17:57:33.000Z | tests/text/test_case.py | spack971/none | 6313dd7d7095b301e8d49a38d1b39c9080008ae0 | [
"MIT"
] | 5 | 2020-09-02T15:30:39.000Z | 2020-10-15T09:52:35.000Z | tests/text/test_case.py | spack971/none | 6313dd7d7095b301e8d49a38d1b39c9080008ae0 | [
"MIT"
] | 1 | 2020-09-19T05:10:02.000Z | 2020-09-19T05:10:02.000Z | # tests/text/test_case.py
# =======================
#
# Copying
# -------
#
# Copyright (c) 2020 none authors and contributors.
#
# This file is part of the *none* project.
#
# None is a free software project. You can redistribute it and/or
# modify it following the terms of the MIT License.
#
# This software project is distributed *as is*, WITHOUT WARRANTY OF ANY
# KIND; including but not limited to the WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE and NONINFRINGEMENT.
#
# You should have received a copy of the MIT License along with
# *none*. If not, see <http://opensource.org/licenses/MIT>.
#
"""Test cases for the :mod:`none.text.case` module."""
import pytest
import none
# fmt: off
@pytest.mark.parametrize("string,expected", [
("", ""),
("4", "4"),
("4F", "4-f"),
("4f", "4-f"),
("4Foo", "4-foo"),
("4foo", "4-foo"),
("4FooB", "4-foo-b"),
("4fooB", "4-foo-b"),
("4FOOB", "4-foob"),
("4foob", "4-foob"),
("4fOOBar", "4-f-oo-bar"),
("4FoOBar", "4-fo-o-bar"),
("4foOBar", "4-fo-o-bar"),
("4FOOBar", "4-foo-bar"),
("4FooBAR", "4-foo-bar"),
("4FooBar", "4-foo-bar"),
("4fooBAR", "4-foo-bar"),
("4fooBar", "4-foo-bar"),
("F", "f"),
("f", "f"),
("F4", "f4"),
("f4", "f4"),
("Fo4OBar", "fo4-o-bar"),
("fo4OBar", "fo4-o-bar"),
("Foo", "foo"),
("foo", "foo"),
("Foo4", "foo4"),
("foo4", "foo4"),
("FOO4B", "foo4-b"),
("Foo4B", "foo4-b"),
("fOO4B", "f-oo4-b"),
("foo4B", "foo4-b"),
("fOO4Bar", "f-oo4-bar"),
("FoO4Bar", "fo-o4-bar"),
("foO4Bar", "fo-o4-bar"),
("FOO4Bar", "foo4-bar"),
("Foo4BAR", "foo4-bar"),
("Foo4Bar", "foo4-bar"),
("foo4BAR", "foo4-bar"),
("foo4Bar", "foo4-bar"),
("fOOB", "f-oob"),
("FooB", "foo-b"),
("fooB", "foo-b"),
("FOOB", "foob"),
("fOOB4", "f-oob4"),
("FooB4", "foo-b4"),
("fooB4", "foo-b4"),
("FOOB4", "foob4"),
("fOOBar", "f-oo-bar"),
("FoOBar", "fo-o-bar"),
("foOBar", "fo-o-bar"),
("FOOBar", "foo-bar"),
("FooBAR", "foo-bar"),
("FooBar", "foo-bar"),
("fooBAR", "foo-bar"),
("fooBar", "foo-bar"),
("fOOBar4", "f-oo-bar4"),
("FoOBar4", "fo-o-bar4"),
("foOBar4", "fo-o-bar4"),
("FOOBar4", "foo-bar4"),
("FooBAR4", "foo-bar4"),
("FooBar4", "foo-bar4"),
("fooBAR4", "foo-bar4"),
("fooBar4", "foo-bar4"),
])
# fmt: on
def test_camel2kebab_expected(string: str, expected: str):
"""Test :func:`none.text.case.camel2kebab` for expected output."""
assert none.text.case.camel2kebab(string) == expected
# fmt: off
@pytest.mark.parametrize("string,expected", [
("", ""),
("4", "4"),
("4F", "4_f"),
("4f", "4_f"),
("4Foo", "4_foo"),
("4foo", "4_foo"),
("4FooB", "4_foo_b"),
("4fooB", "4_foo_b"),
("4FOOB", "4_foob"),
("4foob", "4_foob"),
("4fOOBar", "4_f_oo_bar"),
("4FoOBar", "4_fo_o_bar"),
("4foOBar", "4_fo_o_bar"),
("4FOOBar", "4_foo_bar"),
("4FooBAR", "4_foo_bar"),
("4FooBar", "4_foo_bar"),
("4fooBAR", "4_foo_bar"),
("4fooBar", "4_foo_bar"),
("F", "f"),
("f", "f"),
("F4", "f4"),
("f4", "f4"),
("Fo4OBar", "fo4_o_bar"),
("fo4OBar", "fo4_o_bar"),
("Foo", "foo"),
("foo", "foo"),
("Foo4", "foo4"),
("foo4", "foo4"),
("FOO4B", "foo4_b"),
("Foo4B", "foo4_b"),
("fOO4B", "f_oo4_b"),
("foo4B", "foo4_b"),
("fOO4Bar", "f_oo4_bar"),
("FoO4Bar", "fo_o4_bar"),
("foO4Bar", "fo_o4_bar"),
("FOO4Bar", "foo4_bar"),
("Foo4BAR", "foo4_bar"),
("Foo4Bar", "foo4_bar"),
("foo4BAR", "foo4_bar"),
("foo4Bar", "foo4_bar"),
("fOOB", "f_oob"),
("FooB", "foo_b"),
("fooB", "foo_b"),
("FOOB", "foob"),
("fOOB4", "f_oob4"),
("FooB4", "foo_b4"),
("fooB4", "foo_b4"),
("FOOB4", "foob4"),
("fOOBar", "f_oo_bar"),
("FoOBar", "fo_o_bar"),
("foOBar", "fo_o_bar"),
("FOOBar", "foo_bar"),
("FooBAR", "foo_bar"),
("FooBar", "foo_bar"),
("fooBAR", "foo_bar"),
("fooBar", "foo_bar"),
("fOOBar4", "f_oo_bar4"),
("FoOBar4", "fo_o_bar4"),
("foOBar4", "fo_o_bar4"),
("FOOBar4", "foo_bar4"),
("FooBAR4", "foo_bar4"),
("FooBar4", "foo_bar4"),
("fooBAR4", "foo_bar4"),
("fooBar4", "foo_bar4"),
])
# fmt: on
def test_camel2snake_expected(string: str, expected: str):
"""Test :func:`none.text.case.camel2snake` for expected output."""
assert none.text.case.camel2snake(string) == expected
# fmt: off
@pytest.mark.parametrize("string,capitalize,expected", [
("", True, ""),
("4", True, "4"),
("4-f", True, "4F"),
("4-F", True, "4F"),
("4-fo-o-bar", True, "4FoOBar"),
("4-FO-O-BAR", True, "4FoOBar"),
("4-foo", True, "4Foo"),
("4-FOO", True, "4Foo"),
("4-foo-b", True, "4FooB"),
("4-FOO-B", True, "4FooB"),
("4-foo-bar", True, "4FooBar"),
("4-FOO-BAR", True, "4FooBar"),
("4-foob", True, "4Foob"),
("4-FOOB", True, "4Foob"),
("f", True, "F"),
("F", True, "F"),
("f4", True, "F4"),
("F4", True, "F4"),
("fo-o-bar", True, "FoOBar"),
("FO-O-BAR", True, "FoOBar"),
("fo-o-bar4", True, "FoOBar4"),
("FO-O-BAR4", True, "FoOBar4"),
("fo-o4-bar", True, "FoO4Bar"),
("FO-O4-BAR", True, "FoO4Bar"),
("fo4-o-bar", True, "Fo4OBar"),
("FO4-O-BAR", True, "Fo4OBar"),
("foo", True, "Foo"),
("FOO", True, "Foo"),
("foo-b", True, "FooB"),
("FOO-B", True, "FooB"),
("foo-b4", True, "FooB4"),
("FOO-B4", True, "FooB4"),
("foo-bar", True, "FooBar"),
("FOO-BAR", True, "FooBar"),
("foo-bar4", True, "FooBar4"),
("FOO-BAR4", True, "FooBar4"),
("foo4", True, "Foo4"),
("FOO4", True, "Foo4"),
("foo4-b", True, "Foo4B"),
("FOO4-b", True, "Foo4B"),
("foo4-bar", True, "Foo4Bar"),
("FOO4-bar", True, "Foo4Bar"),
("foob", True, "Foob"),
("FOOB", True, "Foob"),
("foob4", True, "Foob4"),
("FOOB4", True, "Foob4"),
("", False, ""),
("4", False, "4"),
("4-f", False, "4F"),
("4-F", False, "4F"),
("4-fo-o-bar", False, "4FoOBar"),
("4-FO-O-BAR", False, "4FoOBar"),
("4-foo", False, "4Foo"),
("4-FOO", False, "4Foo"),
("4-foo-b", False, "4FooB"),
("4-FOO-B", False, "4FooB"),
("4-foo-bar", False, "4FooBar"),
("4-FOO-BAR", False, "4FooBar"),
("4-foob", False, "4Foob"),
("4-FOOB", False, "4Foob"),
("f", False, "f"),
("F", False, "f"),
("f4", False, "f4"),
("F4", False, "f4"),
("fo-o-bar", False, "foOBar"),
("FO-O-BAR", False, "foOBar"),
("fo-o-bar4", False, "foOBar4"),
("FO-O-BAR4", False, "foOBar4"),
("fo-o4-bar", False, "foO4Bar"),
("FO-O4-BAR", False, "foO4Bar"),
("fo4-o-bar", False, "fo4OBar"),
("FO4-O-BAR", False, "fo4OBar"),
("foo", False, "foo"),
("FOO", False, "foo"),
("foo-b", False, "fooB"),
("FOO-B", False, "fooB"),
("foo-b4", False, "fooB4"),
("FOO-B4", False, "fooB4"),
("foo-bar", False, "fooBar"),
("FOO-BAR", False, "fooBar"),
("foo-bar4", False, "fooBar4"),
("FOO-BAR4", False, "fooBar4"),
("foo4", False, "foo4"),
("FOO4", False, "foo4"),
("foo4-b", False, "foo4B"),
("FOO4-b", False, "foo4B"),
("foo4-bar", False, "foo4Bar"),
("FOO4-bar", False, "foo4Bar"),
("foob", False, "foob"),
("FOOB", False, "foob"),
("foob4", False, "foob4"),
("FOOB4", False, "foob4"),
])
# fmt: on
def test_kebab2camel_expected(string: str, capitalize: bool, expected: str):
"""Test :func:`none.text.case.kebab2camel` for expected output."""
assert none.text.case.kebab2camel(string, capitalize=capitalize) == expected
# fmt: off
@pytest.mark.parametrize("string,expected", [
("", ""),
("4", "4"),
("4-f", "4_f"),
("4-F", "4_F"),
("4-fo-o-bar", "4_fo_o_bar"),
("4-FO-O-BAR", "4_FO_O_BAR"),
("4-foo", "4_foo"),
("4-FOO", "4_FOO"),
("4-foo-b", "4_foo_b"),
("4-FOO-B", "4_FOO_B"),
("4-foo-bar", "4_foo_bar"),
("4-FOO-BAR", "4_FOO_BAR"),
("4-foob", "4_foob"),
("4-FOOB", "4_FOOB"),
("f", "f"),
("F", "F"),
("f4", "f4"),
("F4", "F4"),
("fo-o-bar", "fo_o_bar"),
("FO-O-BAR", "FO_O_BAR"),
("fo-o-bar4", "fo_o_bar4"),
("FO-O-BAR4", "FO_O_BAR4"),
("fo-o4-bar", "fo_o4_bar"),
("FO-O4-BAR", "FO_O4_BAR"),
("fo4-o-bar", "fo4_o_bar"),
("FO4-O-BAR", "FO4_O_BAR"),
("foo", "foo"),
("FOO", "FOO"),
("foo-b", "foo_b"),
("FOO-B", "FOO_B"),
("foo-b4", "foo_b4"),
("FOO-B4", "FOO_B4"),
("foo-bar", "foo_bar"),
("FOO-BAR", "FOO_BAR"),
("foo-bar4", "foo_bar4"),
("FOO-BAR4", "FOO_BAR4"),
("foo4", "foo4"),
("FOO4", "FOO4"),
("foo4-b", "foo4_b"),
("FOO4-b", "FOO4_b"),
("foo4-bar", "foo4_bar"),
("FOO4-bar", "FOO4_bar"),
("foob", "foob"),
("FOOB", "FOOB"),
("foob4", "foob4"),
("FOOB4", "FOOB4"),
])
# fmt: on
def test_kebab2snake_expected(string: str, expected: str):
"""Test :func:`none.text.case.kebab2snake` for expected output."""
assert none.text.case.kebab2snake(string) == expected
# fmt: off
@pytest.mark.parametrize("string,capitalize,expected", [
("", True, ""),
("4", True, "4"),
("4_f", True, "4F"),
("4_F", True, "4F"),
("4_fo_o_bar", True, "4FoOBar"),
("4_FO_O_BAR", True, "4FoOBar"),
("4_foo", True, "4Foo"),
("4_FOO", True, "4Foo"),
("4_foo_b", True, "4FooB"),
("4_FOO_B", True, "4FooB"),
("4_foo_bar", True, "4FooBar"),
("4_FOO_BAR", True, "4FooBar"),
("4_foob", True, "4Foob"),
("4_FOOB", True, "4Foob"),
("f", True, "F"),
("F", True, "F"),
("f4", True, "F4"),
("F4", True, "F4"),
("fo_o_bar", True, "FoOBar"),
("FO_O_BAR", True, "FoOBar"),
("fo_o_bar4", True, "FoOBar4"),
("FO_O_BAR4", True, "FoOBar4"),
("fo_o4_bar", True, "FoO4Bar"),
("FO_O4_BAR", True, "FoO4Bar"),
("fo4_o_bar", True, "Fo4OBar"),
("FO4_O_BAR", True, "Fo4OBar"),
("foo", True, "Foo"),
("FOO", True, "Foo"),
("foo_b", True, "FooB"),
("FOO_B", True, "FooB"),
("foo_b4", True, "FooB4"),
("FOO_B4", True, "FooB4"),
("foo_bar", True, "FooBar"),
("FOO_BAR", True, "FooBar"),
("foo_bar4", True, "FooBar4"),
("FOO_BAR4", True, "FooBar4"),
("foo4", True, "Foo4"),
("FOO4", True, "Foo4"),
("foo4_b", True, "Foo4B"),
("FOO4_b", True, "Foo4B"),
("foo4_bar", True, "Foo4Bar"),
("FOO4_bar", True, "Foo4Bar"),
("foob", True, "Foob"),
("FOOB", True, "Foob"),
("foob4", True, "Foob4"),
("FOOB4", True, "Foob4"),
("", False, ""),
("4", False, "4"),
("4_f", False, "4F"),
("4_F", False, "4F"),
("4_fo_o_bar", False, "4FoOBar"),
("4_FO_O_BAR", False, "4FoOBar"),
("4_foo", False, "4Foo"),
("4_FOO", False, "4Foo"),
("4_foo_b", False, "4FooB"),
("4_FOO_B", False, "4FooB"),
("4_foo_bar", False, "4FooBar"),
("4_FOO_BAR", False, "4FooBar"),
("4_foob", False, "4Foob"),
("4_FOOB", False, "4Foob"),
("f", False, "f"),
("F", False, "f"),
("f4", False, "f4"),
("F4", False, "f4"),
("fo_o_bar", False, "foOBar"),
("FO_O_BAR", False, "foOBar"),
("fo_o_bar4", False, "foOBar4"),
("FO_O_BAR4", False, "foOBar4"),
("fo_o4_bar", False, "foO4Bar"),
("FO_O4_BAR", False, "foO4Bar"),
("fo4_o_bar", False, "fo4OBar"),
("FO4_O_BAR", False, "fo4OBar"),
("foo", False, "foo"),
("FOO", False, "foo"),
("foo_b", False, "fooB"),
("FOO_B", False, "fooB"),
("foo_b4", False, "fooB4"),
("FOO_B4", False, "fooB4"),
("foo_bar", False, "fooBar"),
("FOO_BAR", False, "fooBar"),
("foo_bar4", False, "fooBar4"),
("FOO_BAR4", False, "fooBar4"),
("foo4", False, "foo4"),
("FOO4", False, "foo4"),
("foo4_b", False, "foo4B"),
("FOO4_b", False, "foo4B"),
("foo4_bar", False, "foo4Bar"),
("FOO4_bar", False, "foo4Bar"),
("foob", False, "foob"),
("FOOB", False, "foob"),
("foob4", False, "foob4"),
("FOOB4", False, "foob4"),
])
# fmt: on
def test_snake2camel_expected(string: str, capitalize: bool, expected: str):
"""Test :func:`none.text.case.snake2camel` for expected output."""
assert none.text.case.snake2camel(string, capitalize=capitalize) == expected
# fmt: off
@pytest.mark.parametrize("string,expected", [
("", ""),
("4", "4"),
("4_f", "4-f"),
("4_F", "4-F"),
("4_fo_o_bar", "4-fo-o-bar"),
("4_FO_O_BAR", "4-FO-O-BAR"),
("4_foo", "4-foo"),
("4_FOO", "4-FOO"),
("4_foo_b", "4-foo-b"),
("4_FOO_B", "4-FOO-B"),
("4_foo_bar", "4-foo-bar"),
("4_FOO_BAR", "4-FOO-BAR"),
("4_foob", "4-foob"),
("4_FOOB", "4-FOOB"),
("f", "f"),
("F", "F"),
("f4", "f4"),
("F4", "F4"),
("fo_o_bar", "fo-o-bar"),
("FO_O_BAR", "FO-O-BAR"),
("fo_o_bar4", "fo-o-bar4"),
("FO_O_BAR4", "FO-O-BAR4"),
("fo_o4_bar", "fo-o4-bar"),
("FO_O4_BAR", "FO-O4-BAR"),
("fo4_o_bar", "fo4-o-bar"),
("FO4_O_BAR", "FO4-O-BAR"),
("foo", "foo"),
("FOO", "FOO"),
("foo_b", "foo-b"),
("FOO_B", "FOO-B"),
("foo_b4", "foo-b4"),
("FOO_B4", "FOO-B4"),
("foo_bar", "foo-bar"),
("FOO_BAR", "FOO-BAR"),
("foo_bar4", "foo-bar4"),
("FOO_BAR4", "FOO-BAR4"),
("foo4", "foo4"),
("FOO4", "FOO4"),
("foo4_b", "foo4-b"),
("FOO4_b", "FOO4-b"),
("foo4_bar", "foo4-bar"),
("FOO4_bar", "FOO4-bar"),
("foob", "foob"),
("FOOB", "FOOB"),
("foob4", "foob4"),
("FOOB4", "FOOB4"),
])
# fmt: on
def test_snake2kebab_expected(string: str, expected: str):
"""Test :func:`none.text.case.snake2kebab` for expected output."""
assert none.text.case.snake2kebab(string) == expected
| 31.708075 | 80 | 0.443944 | 1,826 | 15,315 | 3.569003 | 0.064622 | 0.040509 | 0.036827 | 0.021482 | 0.891515 | 0.891515 | 0.891515 | 0.859291 | 0.85837 | 0.85837 | 0 | 0.055904 | 0.292197 | 15,315 | 482 | 81 | 31.773859 | 0.545295 | 0.071825 | 0 | 0.27907 | 0 | 0 | 0.329234 | 0.003675 | 0 | 0 | 0 | 0 | 0.013953 | 1 | 0.013953 | false | 0 | 0.004651 | 0 | 0.018605 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
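The `none.text.case` tests above pin the conversion behavior down entirely through parametrized fixtures. As a sketch of the underlying idea, here is the classic two-pass regex conversion; this is not `none`'s implementation and does not reproduce every fixture (e.g. it misses the leading-digit splits such as `'4fooBar' -> '4_foo_bar'`), only the common cases:

```python
import re

def camel2snake(string: str) -> str:
    # Split an uppercase run from a following capitalized word: "FOOBar" -> "FOO_Bar".
    string = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', string)
    # Split a lowercase letter or digit from a following uppercase: "foo4B" -> "foo4_B".
    string = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', string)
    return string.lower()

# These agree with the fixtures above:
assert camel2snake('fooBar') == 'foo_bar'
assert camel2snake('FOOBar') == 'foo_bar'
assert camel2snake('foo4B') == 'foo4_b'
```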
b8ad030c1c0313d515820d0184310c2ef517dd19 | 133 | py | Python | xrudderai/token.py | justin-cotarla/x-rudder-ai | 60f27bcdd2154130d3228c73c839a213d7c8fa92 | [
"MIT"
] | null | null | null | xrudderai/token.py | justin-cotarla/x-rudder-ai | 60f27bcdd2154130d3228c73c839a213d7c8fa92 | [
"MIT"
] | null | null | null | xrudderai/token.py | justin-cotarla/x-rudder-ai | 60f27bcdd2154130d3228c73c839a213d7c8fa92 | [
"MIT"
] | null | null | null | class Token:
def __init__(self, player):
self.player = player
def __str__(self):
return self.player.symbol
| 16.625 | 33 | 0.62406 | 16 | 133 | 4.6875 | 0.5625 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 133 | 7 | 34 | 19 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
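A short usage sketch for the `Token` class above; `Player` is a hypothetical stand-in supplying the `.symbol` attribute that `Token.__str__` delegates to (the class is repeated so the snippet runs standalone):

```python
class Player:
    def __init__(self, symbol):
        self.symbol = symbol

class Token:
    def __init__(self, player):
        self.player = player

    def __str__(self):
        return self.player.symbol

print(Token(Player('X')))  # -> X
```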
b244b4df97daa9259c5b8d08d0f31e2679a9330f | 14,628 | py | Python | malaya/text/regex.py | ebiggerr/malaya | be757c793895522f80b929fe82353d90762f7fff | [
"MIT"
] | 88 | 2021-01-06T10:01:31.000Z | 2022-03-30T17:34:09.000Z | malaya/text/regex.py | zulkiflizaki/malaya | 2358081bfa43aad57d9415a99f64c68f615d0cc4 | [
"MIT"
] | 43 | 2021-01-14T02:44:41.000Z | 2022-03-31T19:47:42.000Z | malaya/text/regex.py | zulkiflizaki/malaya | 2358081bfa43aad57d9415a99f64c68f615d0cc4 | [
"MIT"
] | 38 | 2021-01-06T07:15:03.000Z | 2022-03-19T05:07:50.000Z | # this file to store possible regex
_ltr_emoticon = [
# optional hat
r'(?:(?<![a-zA-Z])[DPO]|(?<!\d)[03]|[|}><=])?',
# eyes
r'(?:(?<![a-zA-Z\(])[xXB](?![a-ce-oq-zA-CE-OQ-Z,\.\/])|(?<![:])[:=|](?![\.])|(?<![%#\d])[%#](?![%#\d])|(?<![\d\$])[$](?![\d\.,\$])|[;](?!\()|(?<![\d\(\-\+])8(?![\da-ce-zA-CE-Z\\/])|\*(?![\*\d,.]))',
# optional tears
r"(?:['\",])?",
# optional nose
r'(?:(?<![\w*])[oc](?![a-zA-Z])|(?:[-‑^]))?',
# mouth
r'(?:[(){}\[\]<>|/\\]+|[Þ×þ]|(?<!\d)[30](?!\d)|(?<![\d\*])[*,.@#&](?![\*\d,.])|(?<![\d\$])[$](?![\d\.,\$])|[DOosSJLxXpPbc](?![a-zA-Z]))',
]
_rtl_emoticon = [
r'(?<![\w])',
r'(?:[(){}\[\]<>|/\\]+|(?<![\d\.\,])[0](?![\d\.])|(?![\d\*,.@#&])[*,.@#&]|[$]|(?<![a-zA-Z])[DOosSxX])',
# mouth
r'(?:[-‑^])?', # optional nose
r"(?:['\",])?", # optional tears
r'(?:[xX]|[:=|]|[%#]|[$8](?![\d\.])|[;]|\*)', # eyes
r'(?:[O]|[0]|[|{><=])?', # optional hat
r'(?![a-zA-Z])',
]
_LTR_FACE = ''.join(_ltr_emoticon)
_RTL_FACE = ''.join(_rtl_emoticon)
_short_date = r'(?:\b(?<!\d\.)(?:(?:(?:[0123]?[0-9][\.\-\/])?[0123]?[0-9][\.\-\/][12][0-9]{3})|(?:[0123]?[0-9][\.\-\/][0123]?[0-9][\.\-\/][12]?[0-9]{2,3}))(?!\.\d)\b)'
_full_date_parts = [
# prefix
r'(?:(?<!:)\b\'?\d{1,4},? ?)',
r'\b(?:[Jj]an(?:uari)?|[Ff]eb(?:ruari)?|[Mm]a(?:c)?|[Aa]pr(?:il)?|[Mm]ei|[Jj]u(?:n)?|[Jj]ula(?:i)?|[Aa]ug(?:ust)?|[Oo]gos|[Ss]ept?(?:ember)?|[Oo]kt(?:ober)?|[Nn]ov(?:ember)?|[Dd]is(?:ember)?)\b',
r'(?:(?:,? ?\'?)?\d{1,4}(?:st|nd|rd|n?th)?\b(?:[,\/]? ?\'?\d{2,4}[a-zA-Z]*)?(?: ?- ?\d{2,4}[a-zA-Z]*)?(?!:\d{1,4})\b)',
]
_fd1 = '(?:{})'.format(
''.join(
[_full_date_parts[0] + '?', _full_date_parts[1], _full_date_parts[2]]
)
)
_fd2 = '(?:{})'.format(
''.join(
[_full_date_parts[0], _full_date_parts[1], _full_date_parts[2] + '?']
)
)
_date = '(?:' + '(?:' + _fd1 + '|' + _fd2 + ')' + '|' + _short_date + ')'
_time = r'(?:(?:\d+)?\.?\d+\s*(?:AM|PM|am|pm|a\.m\.|p\.m\.))|(?:(?:[0-2]?[0-9]|[2][0-3]):(?:[0-5][0-9])(?::(?:[0-5][0-9]))?(?: ?(?:AM|PM|am|pm|a\.m\.|p\.m\.))?)'
_today_time = '\\d+\\s*(?:pagi|pgi|morning|tengahari|tngahari|petang|ptg|malam)\\b|(?:pkul|pukul|kul)\\s*(?:\\s|\\d+)\\b'
_past_date_string = '(?:\\s|\\d+)\\s*(?:minggu|bulan|tahun|hari|thun|hri|mnggu|jam|minit|saat)\\s*(?:lalu|lepas|lps)\\b'
_now_date_string = '(?:sekarang|skrg|jam|tahun|thun|saat|minit) (?:ini|ni)\\b'
_yesterday_tomorrow_date_string = (
'(?:yesterday|semalam|kelmarin|smalam|esok|esk)\\b'
)
_future_date_string = '(?:dlm|dalam)\\s*\\d+(?:minggu|bulan|tahun|hari|thun|hri|mnggu|jam|minit|saat)\\b'
_depan_date_string = '(?:\\s|\\d+)\\s*(?:minggu|bulan|tahun|hari|thun|hri|mnggu|jam|minit|saat)\\s*(?:depan|dpan|dpn)\\b'
_number = r"\b\d+(?:[\.,']\d+)?\b"
_percentage = _number + '%'
_money = r"(?:(?:[$€£¢]|RM|rm)\s*\d+(?:[\.,']\d+)?(?:[MmKkBb](?:n|(?:il(?:lion)?))?)?)|(?:\d+(?:[\.,']\d+)?(?:[MmKkBb](?:n|(?:il(?:lion)?))?)?\s*(?:[$€£¢]|sen|ringgit|cent|penny))"
_temperature = "-?\\d+(?:[\\.,']\\d+)?\\s*(?:K|Kelvin|kelvin|Kvin|F|f|Farenheit|farenheit|C|c|Celcius|celcius|clcius|celsius)\\b"
_distance = "-?\\d+(?:[\\.,']\\d+)?\\s*(?:kaki|mtrs|metres|meters|feet|km|m|cm|feet|feets|miles|batu|inch|inches|feets)\\b"
_volume = "-?\\d+(?:[\\.,']\\d+)?\\s*(?:ml|ML|l|L|mililiter|Mililiter|millilitre|liter|litre|litres|liters|gallon|gallons|galon)\\b"
_duration = '\\d+\\s*(?:jam|minit|hari|minggu|tahun|hours|hour)\\b|(?:sejam|sehari|setahun|sesaat|seminit)\\b'
_weight = "\\d+(?:[\\.,']\\d+)?\\s*(?:kg|kilo|kilogram|g|gram|KG)\\b"
_left_datetime = '(%s) (%s)' % (_time, _date)
_right_datetime = '(%s) (%s)' % (_date, _time)
_left_datetodaytime = '(%s) (%s)' % (_today_time, _date)
_right_datetodaytime = '(%s) (%s)' % (_date, _today_time)
_left_yesterdaydatetime = '(%s) (%s)' % (_time, _yesterday_tomorrow_date_string)
_right_yesterdaydatetime = '(%s) (%s)' % (
_yesterday_tomorrow_date_string,
_time,
)
_left_yesterdaydatetodaytime = '(%s) (%s)' % (
_today_time,
_yesterday_tomorrow_date_string,
)
_right_yesterdaydatetodaytime = '(%s) (%s)' % (
_yesterday_tomorrow_date_string,
_today_time,
)
_expressions = {
'hashtag': r'\#\b[\w\-\_]+\b',
'cashtag': r'(?<![A-Z])\$[A-Z]+\b',
'tag': r'<[\/]?\w+[\/]?>',
'user': r'\@\w+',
'emphasis': r'(?:\*\b\w+\b\*)',
'censored': r'(?:\b\w+\*+\w+\b)',
'acronym': r'\b(?:[A-Z]\.)(?:[A-Z]\.)+(?:\.(?!\.))?(?:[A-Z]\b)?',
'elongated': r'\b[A-Za-z]*([a-zA-Z])\1\1[A-Za-z]*\b',
'rtl_face': _RTL_FACE,
'ltr_face': _LTR_FACE,
'eastern_emoticons': r'(?<![\w])(?:(?:[<>]?[\^;][\W_m][\;^][;<>]?)|(?:[^\s()]?m?[\(][\W_oTOJ]{1,3}[\s]?[\W_oTOJ]{1,3}[)]m?[^\s()]?)|(?:\*?[v>\-\/\\][o0O\_\.][v\-<\/\\]\*?)|(?:[oO0>][\-_\/oO\.\\]{1,2}[oO0>])|(?:\^\^))(?![\w])',
'rest_emoticons': r'(?<![A-Za-z0-9/()])(?:(?:\^5)|(?:\<3))(?![[A-Za-z0-9/()])',
# from https://github.com/mathiasbynens/emoji-regex/blob/master/text.js
'emoji': r'(?:\uD83C\uDFF4\uDB40\uDC67\uDB40\uDC62(?:\uDB40\uDC65\uDB40\uDC6E\uDB40\uDC67|\uDB40\uDC77\uDB40\uDC6C\uDB40\uDC73|\uDB40\uDC73\uDB40\uDC63\uDB40\uDC74)\uDB40\uDC7F|\uD83D\uDC69\u200D\uD83D\uDC69\u200D(?:\uD83D\uDC66\u200D\uD83D\uDC66|\uD83D\uDC67\u200D(?:\uD83D[\uDC66\uDC67]))|\uD83D\uDC68(?:\u200D(?:\u2764\uFE0F\u200D(?:\uD83D\uDC8B\u200D)?\uD83D\uDC68|(?:\uD83D[\uDC68\uDC69])\u200D(?:\uD83D\uDC66\u200D\uD83D\uDC66|\uD83D\uDC67\u200D(?:\uD83D[\uDC66\uDC67]))|\uD83D\uDC66\u200D\uD83D\uDC66|\uD83D\uDC67\u200D(?:\uD83D[\uDC66\uDC67])|[\u2695\u2696\u2708]\uFE0F|\uD83C[\uDF3E\uDF73\uDF93\uDFA4\uDFA8\uDFEB\uDFED]|\uD83D[\uDCBB\uDCBC\uDD27\uDD2C\uDE80\uDE92])|(?:\uD83C[\uDFFB-\uDFFF])\u200D[\u2695\u2696\u2708]\uFE0F|(?:\uD83C[\uDFFB-\uDFFF])\u200D(?:\uD83C[\uDF3E\uDF73\uDF93\uDFA4\uDFA8\uDFEB\uDFED]|\uD83D[\uDCBB\uDCBC\uDD27\uDD2C\uDE80\uDE92]))|\uD83D\uDC69\u200D(?:\u2764\uFE0F\u200D(?:\uD83D\uDC8B\u200D(?:\uD83D[\uDC68\uDC69])|\uD83D[\uDC68\uDC69])|\uD83C[\uDF3E\uDF73\uDF93\uDFA4\uDFA8\uDFEB\uDFED]|\uD83D[\uDCBB\uDCBC\uDD27\uDD2C\uDE80\uDE92])|\uD83D\uDC69\u200D\uD83D\uDC66\u200D\uD83D\uDC66|(?:\uD83D\uDC41\uFE0F\u200D\uD83D\uDDE8|\uD83D\uDC69(?:\uD83C[\uDFFB-\uDFFF])\u200D[\u2695\u2696\u2708]|(?:(?:\u26F9|\uD83C[\uDFCB\uDFCC]|\uD83D\uDD75)\uFE0F|\uD83D\uDC6F|\uD83E[\uDD3C\uDDDE\uDDDF])\u200D[\u2640\u2642]|(?:\u26F9|\uD83C[\uDFCB\uDFCC]|\uD83D\uDD75)(?:\uD83C[\uDFFB-\uDFFF])\u200D[\u2640\u2642]|(?:\uD83C[\uDFC3\uDFC4\uDFCA]|\uD83D[\uDC6E\uDC71\uDC73\uDC77\uDC81\uDC82\uDC86\uDC87\uDE45-\uDE47\uDE4B\uDE4D\uDE4E\uDEA3\uDEB4-\uDEB6]|\uD83E[\uDD26\uDD37-\uDD39\uDD3D\uDD3E\uDDD6-\uDDDD])(?:(?:\uD83C[\uDFFB-\uDFFF])\u200D[\u2640\u2642]|\u200D[\u2640\u2642])|\uD83D\uDC69\u200D[\u2695\u2696\u2708])\uFE0F|\uD83D\uDC69\u200D\uD83D\uDC67\u200D(?:\uD83D[\uDC66\uDC67])|\uD83D\uDC69\u200D\uD83D\uDC69\u200D(?:\uD83D[\uDC66\uDC67])|\uD83D\uDC68(?:\u200D(?:(?:\uD83D[\uDC68\uDC69])\u200D(?:\uD83D[\uDC66\uDC67])|\uD83D[\uDC66\uDC67])|\uD83C[\uDFFB-\uDFFF])|\uD83C\uDFF3\uFE0F\u200D\uD83C\uDF08|\uD83D\uDC69\u200D\uD83D\uDC67|\uD83D\uDC69(?:\uD83C[\uDFFB-\uDFFF])\u200D(?:\uD83C[\uDF3E\uDF73\uDF93\uDFA4\uDFA8\uDFEB\uDFED]|\uD83D[\uDCBB\uDCBC\uDD27\uDD2C\uDE80\uDE92])|\uD83D\uDC69\u200D\uD83D\uDC66|\uD83C\uDDF4\uD83C\uDDF2|\uD83C\uDDFD\uD83C\uDDF0|\uD83C\uDDF6\uD83C\uDDE6|\uD83D\uDC69(?:\uD83C[\uDFFB-\uDFFF])|\uD83C\uDDFC(?:\uD83C[\uDDEB\uDDF8])|\uD83C\uDDEB(?:\uD83C[\uDDEE-\uDDF0\uDDF2\uDDF4\uDDF7])|\uD83C\uDDE9(?:\uD83C[\uDDEA\uDDEC\uDDEF\uDDF0\uDDF2\uDDF4\uDDFF])|\uD83C\uDDE7(?:\uD83C[\uDDE6\uDDE7\uDDE9-\uDDEF\uDDF1-\uDDF4\uDDF6-\uDDF9\uDDFB\uDDFC\uDDFE\uDDFF])|\uD83C\uDDF1(?:\uD83C[\uDDE6-\uDDE8\uDDEE\uDDF0\uDDF7-\uDDFB\uDDFE])|\uD83C\uDDFE(?:\uD83C[\uDDEA\uDDF9])|\uD83C\uDDF9(?:\uD83C[\uDDE6\uDDE8\uDDE9\uDDEB-\uDDED\uDDEF-\uDDF4\uDDF7\uDDF9\uDDFB\uDDFC\uDDFF])|\uD83C\uDDF5(?:\uD83C[\uDDE6\uDDEA-\uDDED\uDDF0-\uDDF3\uDDF7-\uDDF9\uDDFC\uDDFE])|\uD83C\uDDEF(?:\uD83C[\uDDEA\uDDF2\uDDF4\uDDF5])|\uD83C\uDDED(?:\uD83C[\uDDF0\uDDF2\uDDF3\uDDF7\uDDF9\uDDFA])|\uD83C\uDDEE(?:\uD83C[\uDDE8-\uDDEA\uDDF1-\uDDF4\uDDF6-\uDDF9])|\uD83C\uDDFB(?:\uD83C[\uDDE6\uDDE8\uDDEA\uDDEC\uDDEE\uDDF3\uDDFA])|\uD83C\uDDEC(?:\uD83C[\uDDE6\uDDE7\uDDE9-\uDDEE\uDDF1-\uDDF3\uDDF5-\uDDFA\uDDFC\uDDFE])|\uD83C\uDDF7(?:\uD83C[\uDDEA\uDDF4\uDDF8\uDDFA\uDDFC])|\uD83C\uDDEA(?:\uD83C[\uDDE6\uDDE8\uDDEA\uDDEC\uDDED\uDDF7-\uDDFA])|\uD83C\uDDFA(?:\uD83C[\uDDE6\uDDEC\uDDF2\uDDF3\uDDF8\uDDFE\uDDFF])|\uD83C\uDDE8(?:\uD83C[\uDDE6\uDDE8\uDDE9\uDDEB-\uDDEE\uDDF0-\uDDF5\uDDF7\uDDFA-\uDDFF])|\uD83C\uDDE6(?:\uD83C[\uDDE8-\uDDEC\uDDEE\uDDF1\uDDF2\uDDF4\uDDF6-\uDDFA\uDDFC\uDDFD\uDDFF])|[#\*0-9]\uFE0F\u20E3|\uD83C\uDDF8(?:\uD83C[\uDDE6-\uDDEA\uDDEC-\uDDF4\uDDF7-\uDDF9\uDDFB\uDDFD-\uDDFF])|\uD83C\uDDFF(?:\uD83C[\uDDE6\uDDF2\uDDFC])|\uD83C\uDDF0(?:\uD83C[\uDDEA\uDDEC-\uDDEE\uDDF2\uDDF3\uDDF5\uDDF7\uDDFC\uDDFE\uDDFF])|\uD83C\uDDF3(?:\uD83C[\uDDE6\uDDE8\uDDEA-\uDDEC\uDDEE\uDDF1\uDDF4\uDDF5\uDDF7\uDDFA\uDDFF])|\uD83C\uDDF2(?:\uD83C[\uDDE6\uDDE8-\uDDED\uDDF0-\uDDFF])|(?:\uD83C[\uDFC3\uDFC4\uDFCA]|\uD83D[\uDC6E\uDC71\uDC73\uDC77\uDC81\uDC82\uDC86\uDC87\uDE45-\uDE47\uDE4B\uDE4D\uDE4E\uDEA3\uDEB4-\uDEB6]|\uD83E[\uDD26\uDD37-\uDD39\uDD3D\uDD3E\uDDD6-\uDDDD])(?:\uD83C[\uDFFB-\uDFFF])|(?:\u26F9|\uD83C[\uDFCB\uDFCC]|\uD83D\uDD75)(?:\uD83C[\uDFFB-\uDFFF])|(?:[\u261D\u270A-\u270D]|\uD83C[\uDF85\uDFC2\uDFC7]|\uD83D[\uDC42\uDC43\uDC46-\uDC50\uDC66\uDC67\uDC70\uDC72\uDC74-\uDC76\uDC78\uDC7C\uDC83\uDC85\uDCAA\uDD74\uDD7A\uDD90\uDD95\uDD96\uDE4C\uDE4F\uDEC0\uDECC]|\uD83E[\uDD18-\uDD1C\uDD1E\uDD1F\uDD30-\uDD36\uDDD1-\uDDD5])(?:\uD83C[\uDFFB-\uDFFF])|(?:[\u261D\u26F9\u270A-\u270D]|\uD83C[\uDF85\uDFC2-\uDFC4\uDFC7\uDFCA-\uDFCC]|\uD83D[\uDC42\uDC43\uDC46-\uDC50\uDC66-\uDC69\uDC6E\uDC70-\uDC78\uDC7C\uDC81-\uDC83\uDC85-\uDC87\uDCAA\uDD74\uDD75\uDD7A\uDD90\uDD95\uDD96\uDE45-\uDE47\uDE4B-\uDE4F\uDEA3\uDEB4-\uDEB6\uDEC0\uDECC]|\uD83E[\uDD18-\uDD1C\uDD1E\uDD1F\uDD26\uDD30-\uDD39\uDD3D\uDD3E\uDDD1-\uDDDD])(?:\uD83C[\uDFFB-\uDFFF])?|(?:[\u231A\u231B\u23E9-\u23EC\u23F0\u23F3\u25FD\u25FE\u2614\u2615\u2648-\u2653\u267F\u2693\u26A1\u26AA\u26AB\u26BD\u26BE\u26C4\u26C5\u26CE\u26D4\u26EA\u26F2\u26F3\u26F5\u26FA\u26FD\u2705\u270A\u270B\u2728\u274C\u274E\u2753-\u2755\u2757\u2795-\u2797\u27B0\u27BF\u2B1B\u2B1C\u2B50\u2B55]|\uD83C[\uDC04\uDCCF\uDD8E\uDD91-\uDD9A\uDDE6-\uDDFF\uDE01\uDE1A\uDE2F\uDE32-\uDE36\uDE38-\uDE3A\uDE50\uDE51\uDF00-\uDF20\uDF2D-\uDF35\uDF37-\uDF7C\uDF7E-\uDF93\uDFA0-\uDFCA\uDFCF-\uDFD3\uDFE0-\uDFF0\uDFF4\uDFF8-\uDFFF]|\uD83D[\uDC00-\uDC3E\uDC40\uDC42-\uDCFC\uDCFF-\uDD3D\uDD4B-\uDD4E\uDD50-\uDD67\uDD7A\uDD95\uDD96\uDDA4\uDDFB-\uDE4F\uDE80-\uDEC5\uDECC\uDED0-\uDED2\uDEEB\uDEEC\uDEF4-\uDEF8]|\uD83E[\uDD10-\uDD3A\uDD3C-\uDD3E\uDD40-\uDD45\uDD47-\uDD4C\uDD50-\uDD6B\uDD80-\uDD97\uDDC0\uDDD0-\uDDE6])|(?:[#\*0-9\xA9\xAE\u203C\u2049\u2122\u2139\u2194-\u2199\u21A9\u21AA\u231A\u231B\u2328\u23CF\u23E9-\u23F3\u23F8-\u23FA\u24C2\u25AA\u25AB\u25B6\u25C0\u25FB-\u25FE\u2600-\u2604\u260E\u2611\u2614\u2615\u2618\u261D\u2620\u2622\u2623\u2626\u262A\u262E\u262F\u2638-\u263A\u2640\u2642\u2648-\u2653\u2660\u2663\u2665\u2666\u2668\u267B\u267F\u2692-\u2697\u2699\u269B\u269C\u26A0\u26A1\u26AA\u26AB\u26B0\u26B1\u26BD\u26BE\u26C4\u26C5\u26C8\u26CE\u26CF\u26D1\u26D3\u26D4\u26E9\u26EA\u26F0-\u26F5\u26F7-\u26FA\u26FD\u2702\u2705\u2708-\u270D\u270F\u2712\u2714\u2716\u271D\u2721\u2728\u2733\u2734\u2744\u2747\u274C\u274E\u2753-\u2755\u2757\u2763\u2764\u2795-\u2797\u27A1\u27B0\u27BF\u2934\u2935\u2B05-\u2B07\u2B1B\u2B1C\u2B50\u2B55\u3030\u303D\u3297\u3299]|\uD83C[\uDC04\uDCCF\uDD70\uDD71\uDD7E\uDD7F\uDD8E\uDD91-\uDD9A\uDDE6-\uDDFF\uDE01\uDE02\uDE1A\uDE2F\uDE32-\uDE3A\uDE50\uDE51\uDF00-\uDF21\uDF24-\uDF93\uDF96\uDF97\uDF99-\uDF9B\uDF9E-\uDFF0\uDFF3-\uDFF5\uDFF7-\uDFFF]|\uD83D[\uDC00-\uDCFD\uDCFF-\uDD3D\uDD49-\uDD4E\uDD50-\uDD67\uDD6F\uDD70\uDD73-\uDD7A\uDD87\uDD8A-\uDD8D\uDD90\uDD95\uDD96\uDDA4\uDDA5\uDDA8\uDDB1\uDDB2\uDDBC\uDDC2-\uDDC4\uDDD1-\uDDD3\uDDDC-\uDDDE\uDDE1\uDDE3\uDDE8\uDDEF\uDDF3\uDDFA-\uDE4F\uDE80-\uDEC5\uDECB-\uDED2\uDEE0-\uDEE5\uDEE9\uDEEB\uDEEC\uDEF0\uDEF3-\uDEF8]|\uD83E[\uDD10-\uDD3A\uDD3C-\uDD3E\uDD40-\uDD45\uDD47-\uDD4C\uDD50-\uDD6B\uDD80-\uDD97\uDDC0\uDDD0-\uDDE6])\uFE0F?)',
# "EMOJI": r"(?:[\u00A9\u00AE\u203C\u2049\u2122\u2139\u2194-\u2199\u21A9-\u21AA\u231A-\u231B\u2328\u23CF\u23E9-\u23F3\u23F8-\u23FA\u24C2\u25AA-\u25AB\u25B6\u25C0\u25FB-\u25FE\u2600-\u2604\u260E\u2611\u2614-\u2615\u2618\u261D\u2620\u2622-\u2623\u2626\u262A\u262E-\u262F\u2638-\u263A\u2640\u2642\u2648-\u2653\u2660\u2663\u2665-\u2666\u2668\u267B\u267F\u2692-\u2697\u2699\u269B-\u269C\u26A0-\u26A1\u26AA-\u26AB\u26B0-\u26B1\u26BD-\u26BE\u26C4-\u26C5\u26C8\u26CE-\u26CF\u26D1\u26D3-\u26D4\u26E9-\u26EA\u26F0-\u26F5\u26F7-\u26FA\u26FD\u2702\u2705\u2708-\u270D\u270F\u2712\u2714\u2716\u271D\u2721\u2728\u2733-\u2734\u2744\u2747\u274C\u274E\u2753-\u2755\u2757\u2763-\u2764\u2795-\u2797\u27A1\u27B0\u27BF\u2934-\u2935\u2B05-\u2B07\u2B1B-\u2B1C\u2B50\u2B55\u3030\u303D\u3297\u3299]|(?:\uD83C[\uDC04\uDCCF\uDD70-\uDD71\uDD7E-\uDD7F\uDD8E\uDD91-\uDD9A\uDDE6-\uDDFF\uDE01-\uDE02\uDE1A\uDE2F\uDE32-\uDE3A\uDE50-\uDE51\uDF00-\uDF21\uDF24-\uDF93\uDF96-\uDF97\uDF99-\uDF9B\uDF9E-\uDFF0\uDFF3-\uDFF5\uDFF7-\uDFFF]|\uD83D[\uDC00-\uDCFD\uDCFF-\uDD3D\uDD49-\uDD4E\uDD50-\uDD67\uDD6F-\uDD70\uDD73-\uDD7A\uDD87\uDD8A-\uDD8D\uDD90\uDD95-\uDD96\uDDA4-\uDDA5\uDDA8\uDDB1-\uDDB2\uDDBC\uDDC2-\uDDC4\uDDD1-\uDDD3\uDDDC-\uDDDE\uDDE1\uDDE3\uDDE8\uDDEF\uDDF3\uDDFA-\uDE4F\uDE80-\uDEC5\uDECB-\uDED2\uDEE0-\uDEE5\uDEE9\uDEEB-\uDEEC\uDEF0\uDEF3-\uDEF6]|\uD83E[\uDD10-\uDD1E\uDD20-\uDD27\uDD30\uDD33-\uDD3A\uDD3C-\uDD3E\uDD40-\uDD45\uDD47-\uDD4B\uDD50-\uDD5E\uDD80-\uDD91\uDDC0]))",
'quotes': r'\"(\\.|[^\"]){2,}\"',
'percent': _percentage,
'repeat_puncts': r'([!?.]){2,}',
'money': _money,
'email': r'(?:^|(?<=[^\w@.)]))(?:[\w+-](?:\.(?!\.))?)*?[\w+-]@(?:\w-?)*?\w+(?:\.(?:[a-z]{2,})){1,3}(?:$|(?=\b))',
'phone': r'(?<![0-9])(?:\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}(?![0-9])',
# 'temperature': _temperature,
# 'distance': _distance,
'number': _number,
'allcaps': r'(?<![#@$])\b([A-Z][A-Z ]{1,}[A-Z])\b',
'url': r'(?:https?:\/\/(?:www\.|(?!www))[^\s\.]+\.[^\s]{2,}|www\.[^\s]+\.[^\s]{2,})',
'date': _date,
'time': _time,
# "CAMEL_SPLIT": '((?<=[a-z])[A-Z]|(?<!\A)[A-Z](?=[a-z]))',
# r'((?<=[a-z])[A-Z]|(?<=[A-Z][A-Z])[a-z]|(?<!^)(?<![A-Z])[A-Z](?=[a-z])|[0-9]+|(?<=[0-9\-\_])[A-Za-z]|[\-\_])',
'camel_split': r'((?<=[a-z])[A-Z]|(?<!^)[A-Z](?=[a-z])|[0-9]+|(?<=[0-9\-\_])[A-Za-z]|[\-\_])',
# REGEX_NORMALIZE_ELONG = '(.)\1+')
'normalize_elong': r'(.)\1{2,}',
'word': r'(?:[\w_]+)',
'hypen': r'\w+(?:-\w+)+',
'temperature': _temperature,
'distance': _distance,
'volume': _volume,
'duration': _duration,
'weight': _weight,
}
| 115.181102 | 7,104 | 0.625239 | 2,139 | 14,628 | 4.207106 | 0.28331 | 0.005556 | 0.005667 | 0.007112 | 0.532059 | 0.493722 | 0.435826 | 0.41249 | 0.403712 | 0.377709 | 0 | 0.174376 | 0.04382 | 14,628 | 126 | 7,105 | 116.095238 | 0.468363 | 0.13105 | 0 | 0.086538 | 0 | 0.221154 | 0.837917 | 0.790009 | 0.009615 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
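The malaya row above is essentially a table of raw regex strings. A hedged usage sketch (plain `re`, not malaya's public API) compiling two of the simpler entries and scanning a Malay sentence:

```python
import re

expressions = {
    'hashtag': r'\#\b[\w\-\_]+\b',
    'user': r'\@\w+',
}
compiled = {name: re.compile(pattern) for name, pattern in expressions.items()}

text = 'hantar kepada @alice pasal #CannabisSoils esok pagi'
for name, regex in compiled.items():
    print(name, regex.findall(text))
# hashtag ['#CannabisSoils']
# user ['@alice']
```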
b282fe1f03d8cdb71e0dfa4591d3c70dce50ad70 | 80,894 | py | Python | qiita_ware/test/test_ebi.py | smruthi98/qiita | 8514d316670919e074a927226f985b56dba815f6 | [
"BSD-3-Clause"
] | 2 | 2019-07-18T22:39:20.000Z | 2019-08-13T00:17:19.000Z | qiita_ware/test/test_ebi.py | smruthi98/qiita | 8514d316670919e074a927226f985b56dba815f6 | [
"BSD-3-Clause"
] | 3 | 2019-07-23T18:37:18.000Z | 2019-07-30T22:37:32.000Z | qiita_ware/test/test_ebi.py | smruthi98/qiita | 8514d316670919e074a927226f985b56dba815f6 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import division
# -----------------------------------------------------------------------------
# Copyright (c) 2014--, The Qiita Development Team.
#
# Distributed under the terms of the BSD 3-clause License.
#
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------
from os import remove
from os.path import join, isdir, exists
from shutil import rmtree
from tempfile import mkdtemp
from unittest import TestCase, main
from xml.etree import ElementTree as ET
from functools import partial
import pandas as pd
import warnings
from datetime import date
from skbio.util import safe_md5
from future.utils import viewitems
from h5py import File
from qiita_files.demux import to_hdf5
from qiita_ware.ebi import EBISubmission
from qiita_ware.exceptions import EBISubmissionError
from qiita_core.qiita_settings import qiita_config
from qiita_db.util import get_mountpoint
from qiita_db.study import Study, StudyPerson
from qiita_db.metadata_template.prep_template import PrepTemplate
from qiita_db.metadata_template.sample_template import SampleTemplate
from qiita_db.user import User
from qiita_db.artifact import Artifact
from qiita_db.software import Parameters, DefaultParameters
from qiita_core.util import qiita_test_checker
@qiita_test_checker()
class TestEBISubmission(TestCase):
def setUp(self):
self.files_to_remove = []
self.temp_dir = mkdtemp()
self.files_to_remove.append(self.temp_dir)
self.study_id = None
def tearDown(self):
if self.study_id and Study.exists("Test EBI study"):
study = Study(self.study_id)
for a in study.artifacts():
Artifact.delete(a.id)
for pt in study.prep_templates():
PrepTemplate.delete(pt.id)
SampleTemplate.delete(self.study_id)
Study.delete(self.study_id)
for f in self.files_to_remove:
if exists(f):
if isdir(f):
rmtree(f)
else:
remove(f)
def test_init(self):
artifact_id = 3
action = 'ADD'
e = EBISubmission(artifact_id, action)
self.files_to_remove.append(e.full_ebi_dir)
self.assertEqual(e.artifact_id, artifact_id)
self.assertEqual(e.study_title, 'Identification of the Microbiomes '
'for Cannabis Soils')
self.assertEqual(e.study_abstract,
('This is a preliminary study to examine the '
'microbiota associated with the Cannabis plant. '
'Soils samples from the bulk soil, soil associated '
'with the roots, and the rhizosphere were extracted '
'and the DNA sequenced. Roots from three '
'independent plants of different strains were '
'examined. These roots were obtained November 11, '
'2011 from plants that had been harvested in the '
'summer. Future studies will attempt to analyze the '
'soils and rhizospheres from the same location at '
'different time points in the plant lifecycle.'))
self.assertEqual(e.investigation_type, 'Metagenomics')
self.assertIsNone(e.new_investigation_type)
self.assertCountEqual(e.sample_template, e.samples)
self.assertCountEqual(e.publications, [
['10.100/123456', True], ['123456', False],
['10.100/7891011', True], ['7891011', False]])
self.assertEqual(e.action, action)
self.assertEqual(e.ascp_reply, join(e.full_ebi_dir, 'ascp_reply.txt'))
self.assertEqual(e.curl_reply, join(e.full_ebi_dir, 'curl_reply.xml'))
get_output_fp = partial(join, e.full_ebi_dir)
self.assertEqual(e.xml_dir, get_output_fp('xml_dir'))
self.assertIsNone(e.study_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.experiment_xml_fp)
self.assertIsNone(e.run_xml_fp)
self.assertIsNone(e.submission_xml_fp)
for sample in e.sample_template:
self.assertEqual(e.sample_template[sample], e.samples[sample])
self.assertEqual(e.prep_template[sample], e.samples_prep[sample])
self.assertEqual(e.sample_demux_fps[sample], get_output_fp(sample))
def test_get_study_alias(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
exp = '%s_sid_1' % qiita_config.ebi_organization_prefix
self.assertEqual(e._get_study_alias(), exp)
def test_get_sample_alias(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
exp = '%s_sid_1:foo' % qiita_config.ebi_organization_prefix
self.assertEqual(e._get_sample_alias('foo'), exp)
self.assertEqual(e._sample_aliases, {exp: 'foo'})
def test_get_experiment_alias(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
exp = '%s_ptid_1:foo' % qiita_config.ebi_organization_prefix
self.assertEqual(e._get_experiment_alias('foo'), exp)
self.assertEqual(e._experiment_aliases, {exp: 'foo'})
def test_get_submission_alias(self):
artifact_id = 3
e = EBISubmission(artifact_id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
obs = e._get_submission_alias()
exp = '%s_submission_%d' % (qiita_config.ebi_organization_prefix,
artifact_id)
self.assertEqual(obs, exp)
def test_get_run_alias(self):
artifact_id = 3
e = EBISubmission(artifact_id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
exp = '%s_ppdid_%d:foo' % (qiita_config.ebi_organization_prefix,
artifact_id)
self.assertEqual(e._get_run_alias('foo'), exp)
self.assertEqual(e._run_aliases, {exp: 'foo'})
def test_get_library_name(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
obs = e._get_library_name("nasty<business>")
exp = "nasty<business>"
self.assertEqual(obs, exp)
def test_add_dict_as_tags_and_values(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
elm = ET.Element('TESTING', {'foo': 'bar'})
e._add_dict_as_tags_and_values(elm, 'foo', {'x': 'y',
'>x': '<y',
'none': None})
obs = ET.tostring(elm)
exp = ''.join([v.strip() for v in ADDDICTTEST.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_study_xml(self):
submission = EBISubmission(3, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
obs = ET.tostring(submission.generate_study_xml())
exp = ''.join([l.strip() for l in STUDYXML.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_sample_xml(self):
submission = EBISubmission(3, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
samples = ['1.SKB2.640194', '1.SKB3.640195']
obs = ET.tostring(submission.generate_sample_xml(samples=samples))
exp = ''.join([l.strip() for l in SAMPLEXML.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
# removing samples so test text is easier to read
keys_to_del = ['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182',
'1.SKM8.640201', '1.SKM2.640199', '1.SKD2.640178',
'1.SKB7.640196', '1.SKD4.640185', '1.SKB8.640193',
'1.SKM3.640197', '1.SKD5.640186', '1.SKB1.640202',
'1.SKM1.640183', '1.SKD1.640179', '1.SKD3.640198',
'1.SKB5.640181', '1.SKB4.640189', '1.SKB9.640200',
'1.SKM9.640192', '1.SKD8.640184', '1.SKM5.640177',
'1.SKM7.640188', '1.SKD7.640191', '1.SKB6.640176',
'1.SKM4.640180']
for k in keys_to_del:
            del submission.samples[k]
            del submission.samples_prep[k]
obs = ET.tostring(submission.generate_sample_xml())
exp = ''.join([l.strip() for l in SAMPLEXML.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
obs = ET.tostring(submission.generate_sample_xml(samples=[]))
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_spot_descriptor(self):
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
elm = ET.Element('design', {'foo': 'bar'})
e._generate_spot_descriptor(elm, 'LS454')
exp = ''.join([l.strip() for l in GENSPOTDESC.splitlines()])
obs = ET.tostring(elm)
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_submission_xml(self):
submission = EBISubmission(3, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
submission.experiment_xml_fp = "/some/path/experiment.xml"
submission.run_xml_fp = "/some/path/run.xml"
obs = ET.tostring(
submission.generate_submission_xml(
submission_date=date(2015, 9, 3)))
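        # the HoldUntilDate in the expected XML (2016-09-02) is 365 days
        # after the submission_date passed above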
exp = SUBMISSIONXML % {
'submission_alias': submission._get_submission_alias(),
'center_name': qiita_config.ebi_center_name}
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
submission.study_xml_fp = "/some/path/study.xml"
submission.sample_xml_fp = "/some/path/sample.xml"
submission.experiment_xml_fp = "/some/path/experiment.xml"
submission.run_xml_fp = "/some/path/run.xml"
obs = ET.tostring(
submission.generate_submission_xml(
submission_date=date(2015, 9, 3)))
exp = SUBMISSIONXML_FULL % {
'submission_alias': submission._get_submission_alias(),
'center_name': qiita_config.ebi_center_name}
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
def test_write_xml_file(self):
element = ET.Element('TESTING', {'foo': 'bar'})
e = EBISubmission(3, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
e.write_xml_file(element, 'testfile')
self.files_to_remove.append('testfile')
obs = open('testfile').read()
exp = "<?xml version='1.0' encoding='UTF-8'?>\n<TESTING foo=\"bar\" />"
self.assertEqual(obs, exp)
def test_generate_curl_command(self):
submission = EBISubmission(3, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
test_ebi_seq_xfer_user = 'ebi_seq_xfer_user'
test_ebi_seq_xfer_pass = 'ebi_seq_xfer_pass'
test_ebi_dropbox_url = 'ebi_dropbox_url'
submission.study_xml_fp = "/some/path/study.xml"
submission.sample_xml_fp = "/some/path/sample.xml"
submission.experiment_xml_fp = "/some/path/experiment.xml"
submission.run_xml_fp = "/some/path/run.xml"
submission.submission_xml_fp = "/some/path/submission.xml"
obs = submission.generate_curl_command(test_ebi_seq_xfer_user,
test_ebi_seq_xfer_pass,
test_ebi_dropbox_url)
exp = ('curl -sS -k '
'-F "SUBMISSION=@/some/path/submission.xml" '
'-F "STUDY=@/some/path/study.xml" '
'-F "SAMPLE=@/some/path/sample.xml" '
'-F "RUN=@/some/path/run.xml" '
'-F "EXPERIMENT=@/some/path/experiment.xml" '
'"ebi_dropbox_url/?auth=ENA%20ebi_seq_xfer_user'
'%20ebi_seq_xfer_pass"')
self.assertEqual(obs, exp)
def write_demux_files(self, prep_template, sequences='FASTA-EXAMPLE'):
"""Writes a demux test file to avoid duplication of code"""
fna_fp = join(self.temp_dir, 'seqs.fna')
demux_fp = join(self.temp_dir, 'demux.seqs')
if sequences == 'FASTA-EXAMPLE':
with open(fna_fp, 'w') as f:
f.write(FASTA_EXAMPLE)
with File(demux_fp, "w") as f:
to_hdf5(fna_fp, f)
elif sequences == 'WRONG-SEQS':
with open(fna_fp, 'w') as f:
f.write('>a_1 X orig_bc=X new_bc=X bc_diffs=0\nCCC')
with File(demux_fp, "w") as f:
to_hdf5(fna_fp, f)
elif sequences == 'EMPTY':
with open(demux_fp, 'w') as f:
f.write("")
else:
            raise ValueError('Wrong sequences value: %s. Valid values: '
                             'FASTA-EXAMPLE, WRONG-SEQS, EMPTY' % sequences)
if prep_template.artifact is None:
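            # 6 is the id of the preprocessed_demux filepath_type (see the
            # note in generate_new_study_with_preprocessed_data below)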
artifact = Artifact.create(
[(demux_fp, 6)], "Demultiplexed", prep_template=prep_template)
else:
params = Parameters.from_default_params(
DefaultParameters(1),
{'input_data': prep_template.artifact.id})
artifact = Artifact.create(
[(demux_fp, 6)], "Demultiplexed",
parents=[prep_template.artifact],
processing_parameters=params)
return artifact
def generate_new_prep_template_and_write_demux_files(self,
valid_metadata=False):
"""Creates new prep-template/demux-file to avoid duplication of code"""
# creating prep template without required EBI submission columns
if not valid_metadata:
metadata_dict = {
'SKD6.640190': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project'},
'SKM6.640187': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Not valid'},
'SKD9.640182': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'primer': 'GTGCCAGCMGCCGCGGTAA',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401'}
}
investigation_type = None
else:
metadata_dict = {
'SKD6.640190': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'primer': 'GTGCCAGCMGCCGCGGTAA',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401'},
'SKM6.640187': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'primer': 'GTGCCAGCMGCCGCGGTAA',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'extra_value': 1.2},
'SKD9.640182': {'center_name': 'ANL', 'barcode': 'AAA',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'primer': 'GTGCCAGCMGCCGCGGTAA',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'extra_value': 'Unspecified'}
}
investigation_type = "Metagenomics"
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index',
dtype=str)
with warnings.catch_warnings(record=True):
pt = PrepTemplate.create(metadata, Study(1), "18S",
investigation_type=investigation_type)
artifact = self.write_demux_files(pt)
return artifact
def generate_new_study_with_preprocessed_data(self):
"""Creates a new study up to the processed data for testing"""
info = {
"timeseries_type_id": 1,
"metadata_complete": True,
"mixs_compliant": True,
"study_alias": "Test EBI",
"study_description": "Study for testing EBI",
"study_abstract": "Study for testing EBI",
"principal_investigator_id": StudyPerson(3),
"lab_person_id": StudyPerson(1)
}
study = Study.create(User('test@foo.bar'), "Test EBI study", info)
self.study_id = study.id
metadata_dict = {
'Sample1': {'collection_timestamp': '06/01/15 07:00:00',
'physical_specimen_location': 'location1',
'taxon_id': 9606,
'scientific_name': 'homo sapiens',
'Description': 'Test Sample 1'},
'Sample2': {'collection_timestamp': '06/02/15 07:00:00',
'physical_specimen_location': 'location1',
'taxon_id': 9606,
'scientific_name': 'homo sapiens',
'Description': 'Test Sample 2'},
'Sample3': {'collection_timestamp': '06/03/15 07:00:00',
'physical_specimen_location': 'location1',
'taxon_id': 9606,
'scientific_name': 'homo sapiens',
'Description': 'Test Sample 3'}
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index',
dtype=str)
with warnings.catch_warnings(record=True):
SampleTemplate.create(metadata, study)
metadata_dict = {
'Sample1': {'primer': 'GTGCCAGCMGCCGCGGTAA',
'barcode': 'CGTAGAGCTCTC',
'center_name': 'KnightLab',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'library_construction_protocol': 'Protocol ABC',
'experiment_design_description': "Random value 1"},
'Sample2': {'primer': 'GTGCCAGCMGCCGCGGTAA',
'barcode': 'CGTAGAGCTCTA',
'center_name': 'KnightLab',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'library_construction_protocol': 'Protocol ABC',
'experiment_design_description': "Random value 2"},
'Sample3': {'primer': 'GTGCCAGCMGCCGCGGTAA',
'barcode': 'CGTAGAGCTCTT',
'center_name': 'KnightLab',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'library_construction_protocol': 'Protocol ABC',
'experiment_design_description': "Random value 3"},
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index',
dtype=str)
with warnings.catch_warnings(record=True):
pt = PrepTemplate.create(metadata, study, "16S", 'Metagenomics')
fna_fp = join(self.temp_dir, 'seqs.fna')
demux_fp = join(self.temp_dir, 'demux.seqs')
with open(fna_fp, 'w') as f:
f.write(FASTA_EXAMPLE_2.format(study.id))
with File(demux_fp, 'w') as f:
to_hdf5(fna_fp, f)
# Magic number 6: the id of the preprocessed_demux filepath_type
artifact = Artifact.create(
[(demux_fp, 6)], "Demultiplexed", prep_template=pt)
return artifact
def test_init_exceptions(self):
# not a valid action
with self.assertRaises(EBISubmissionError):
EBISubmission(1, 'This is not a valid action')
# artifact can't be submitted
with self.assertRaises(EBISubmissionError):
EBISubmission(1, 'ADD')
# artifact has been already submitted
with self.assertRaises(EBISubmissionError):
EBISubmission(2, 'ADD')
artifact = self.generate_new_prep_template_and_write_demux_files()
        # should raise an error because required columns are missing;
        # artifact.prep_templates[0] is safe because there should only be one
exp_text = ("Errors found during EBI submission for study #1, "
"artifact #%d and prep template #%d:\nUnrecognized "
"investigation type: 'None'. This term is neither one of "
"the official terms nor one of the user-defined terms in "
"the ENA ontology.\nThese samples do not have a valid "
"platform (instrumet model wasn't checked): "
"1.SKD6.640190\nThese samples do not have a valid "
"instrument model: 1.SKM6.640187" % (
artifact.id, artifact.prep_templates[0].id))
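        # n.b. 'instrumet' above reproduces the library's error text verbatim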
with self.assertRaises(EBISubmissionError) as e:
EBISubmission(artifact.id, 'ADD')
self.assertEqual(exp_text, str(e.exception))
def test_prep_with_less_samples_than_sample_template(self):
        # the next line generates a valid prep template with fewer samples
        # than the sample template; we want to test that the EBISubmission
        # can still be generated
artifact = self.generate_new_prep_template_and_write_demux_files(True)
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
exp = ['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182']
self.assertCountEqual(exp, e.samples)
def test_generate_experiment_xml(self):
artifact = self.generate_new_study_with_preprocessed_data()
submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
obs = ET.tostring(submission.generate_experiment_xml())
exp = EXPERIMENTXML_NEWSTUDY % {
'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name,
'study_id': artifact.study.id,
'pt_id': artifact.prep_templates[0].id
}
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
submission = EBISubmission(3, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
samples = ['1.SKB2.640194', '1.SKB3.640195']
obs = ET.tostring(submission.generate_experiment_xml(samples=samples))
exp = EXPERIMENTXML
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
# removing samples so test text is easier to read
keys_to_del = ['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182',
'1.SKM8.640201', '1.SKM2.640199', '1.SKD2.640178',
'1.SKB7.640196', '1.SKD4.640185', '1.SKB8.640193',
'1.SKM3.640197', '1.SKD5.640186', '1.SKB1.640202',
'1.SKM1.640183', '1.SKD1.640179', '1.SKD3.640198',
'1.SKB5.640181', '1.SKB4.640189', '1.SKB9.640200',
'1.SKM9.640192', '1.SKD8.640184', '1.SKM5.640177',
'1.SKM7.640188', '1.SKD7.640191', '1.SKB6.640176',
'1.SKM4.640180']
for k in keys_to_del:
            del submission.samples[k]
            del submission.samples_prep[k]
obs = ET.tostring(submission.generate_experiment_xml())
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_run_xml(self):
artifact = self.generate_new_study_with_preprocessed_data()
submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(submission.full_ebi_dir)
submission.generate_demultiplexed_fastq(mtime=1)
obs = ET.tostring(submission.generate_run_xml())
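        # compute the md5 of each gzipped forward-read file so the expected
        # run XML below can embed the checksums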
md5_sums = {}
for s, fp in viewitems(submission.sample_demux_fps):
md5_sums[s] = safe_md5(
open(fp + submission.FWD_READ_SUFFIX, 'rb')).hexdigest()
exp = RUNXML_NEWSTUDY % {
'study_alias': submission._get_study_alias(),
'ebi_dir': submission.ebi_dir,
'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name,
'artifact_id': artifact.id,
'study_id': artifact.study.id,
'pt_id': artifact.prep_templates[0].id,
'sample_1': md5_sums['%d.Sample1' % self.study_id],
'sample_2': md5_sums['%d.Sample2' % self.study_id],
'sample_3': md5_sums['%d.Sample3' % self.study_id]
}
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
artifact = self.write_demux_files(PrepTemplate(1))
submission = EBISubmission(artifact.id, 'ADD')
# removing samples so test text is easier to read
keys_to_del = ['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182',
'1.SKM8.640201', '1.SKM2.640199']
for k in keys_to_del:
            del submission.samples[k]
            del submission.samples_prep[k]
submission.generate_demultiplexed_fastq(mtime=1)
self.files_to_remove.append(submission.full_ebi_dir)
obs = ET.tostring(submission.generate_run_xml())
exp = RUNXML % {
'study_alias': submission._get_study_alias(),
'ebi_dir': submission.ebi_dir,
'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name,
'artifact_id': artifact.id}
exp = ''.join([l.strip() for l in exp.splitlines()])
self.assertEqual(obs.decode('ascii'), exp)
def test_generate_xml_files(self):
artifact = self.generate_new_study_with_preprocessed_data()
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
e.generate_demultiplexed_fastq()
self.assertIsNone(e.run_xml_fp)
self.assertIsNone(e.experiment_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.study_xml_fp)
self.assertIsNone(e.submission_xml_fp)
e.generate_xml_files()
self.assertIsNotNone(e.run_xml_fp)
self.assertIsNotNone(e.experiment_xml_fp)
self.assertIsNotNone(e.sample_xml_fp)
self.assertIsNotNone(e.study_xml_fp)
self.assertIsNotNone(e.submission_xml_fp)
artifact = self.generate_new_prep_template_and_write_demux_files(True)
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
e.generate_demultiplexed_fastq()
self.assertIsNone(e.run_xml_fp)
self.assertIsNone(e.experiment_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.study_xml_fp)
self.assertIsNone(e.submission_xml_fp)
e.generate_xml_files()
self.assertIsNotNone(e.run_xml_fp)
self.assertIsNotNone(e.experiment_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.study_xml_fp)
self.assertIsNotNone(e.submission_xml_fp)
artifact = self.write_demux_files(PrepTemplate(1))
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
e.generate_demultiplexed_fastq()
self.assertIsNone(e.run_xml_fp)
self.assertIsNone(e.experiment_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.study_xml_fp)
self.assertIsNone(e.submission_xml_fp)
e.generate_xml_files()
self.assertIsNotNone(e.run_xml_fp)
self.assertIsNone(e.experiment_xml_fp)
self.assertIsNone(e.sample_xml_fp)
self.assertIsNone(e.study_xml_fp)
self.assertIsNotNone(e.submission_xml_fp)
def test_generate_demultiplexed_fastq_failure(self):
# generating demux file for testing
artifact = self.write_demux_files(PrepTemplate(1), 'EMPTY')
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(ebi_submission.full_ebi_dir)
with self.assertRaises(EBISubmissionError):
ebi_submission.generate_demultiplexed_fastq(rewrite_fastq=True)
artifact = self.write_demux_files(PrepTemplate(1), 'WRONG-SEQS')
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(ebi_submission.full_ebi_dir)
with self.assertRaises(EBISubmissionError):
ebi_submission.generate_demultiplexed_fastq()
def test_generate_demultiplexed_fastq(self):
# generating demux file for testing
exp_demux_samples = set(
['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182',
'1.SKB2.640194', '1.SKM8.640201', '1.SKM4.640180',
'1.SKM2.640199', '1.SKB3.640195', '1.SKB6.640176'])
artifact = self.write_demux_files(PrepTemplate(1))
        # verify that files are generated only for the samples that actually
        # have sequences
ebi_submission = EBISubmission(artifact.id, 'ADD')
        # pass rewrite_fastq=True: leftover files from previous runs may have
        # duplicated ids, and rewriting guarantees a clean starting state
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq(
rewrite_fastq=True)
self.files_to_remove.append(ebi_submission.full_ebi_dir)
self.assertCountEqual(obs_demux_samples, exp_demux_samples)
# testing that the samples/samples_prep and demux_samples are the same
self.assertCountEqual(obs_demux_samples, ebi_submission.samples.keys())
self.assertCountEqual(obs_demux_samples,
ebi_submission.samples_prep.keys())
        # if the previous block passed, the folder already exists; check that
        # the same files are reported and that non-fastq.gz files are ignored
ebi_submission = EBISubmission(artifact.id, 'ADD')
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq()
self.files_to_remove.append(ebi_submission.full_ebi_dir)
self.assertCountEqual(obs_demux_samples, exp_demux_samples)
# testing that the samples/samples_prep and demux_samples are the same
self.assertCountEqual(obs_demux_samples, ebi_submission.samples.keys())
self.assertCountEqual(obs_demux_samples,
ebi_submission.samples_prep.keys())
def _generate_per_sample_FASTQs(self, prep_template, sequences):
        # generate a per_sample_FASTQ artifact; the 'should_rename' suffix in
        # the filenames lets us verify that the correct names are used during
        # copy/gz generation
files = []
for sn, seqs in viewitems(sequences):
fn = join(self.temp_dir, sn + 'should_rename.fastq')
with open(fn, 'w') as fh:
fh.write(seqs)
files.append(fn)
self.files_to_remove.append(fn)
if prep_template.artifact is None:
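            # 1 is the filepath_type id for raw_forward_seqs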
artifact = Artifact.create(
[(fp, 1) for fp in files], "per_sample_FASTQ",
prep_template=prep_template)
else:
params = Parameters.from_default_params(
DefaultParameters(1),
{'input_data': prep_template.artifact.id})
artifact = Artifact.create(
# 1 is raw_forward_seqs
[(fp, 1) for fp in files], "per_sample_FASTQ",
parents=[prep_template.artifact],
processing_parameters=params)
return artifact
def test_generate_demultiplexed_per_sample_fastq(self):
# testing failure due to "extra" filepaths
artifact = self._generate_per_sample_FASTQs(
PrepTemplate(1), FASTQ_EXAMPLE)
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(ebi_submission.full_ebi_dir)
with self.assertRaises(EBISubmissionError):
ebi_submission.generate_demultiplexed_fastq()
# testing that we generate the correct samples
exp_samples = ['1.SKM4.640180', '1.SKB2.640194']
metadata_dict = {
'SKB2.640194': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'run_prefix': '1.SKB2.640194'},
'SKM4.640180': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'run_prefix': '1.SKM4.640180'}}
metadata = pd.DataFrame.from_dict(
metadata_dict, orient='index', dtype=str)
with warnings.catch_warnings(record=True):
pt = PrepTemplate.create(metadata, Study(1), "18S",
investigation_type="Metagenomics")
artifact = self._generate_per_sample_FASTQs(pt, FASTQ_EXAMPLE)
# this should fail due to missing columns
with self.assertRaises(EBISubmissionError) as err:
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.assertIn('Missing column in the prep template: barcode',
str(err.exception))
metadata_dict = {
'SKB2.640194': {'barcode': 'AAA', 'primer': 'CCCC'},
'SKM4.640180': {'barcode': 'CCC', 'primer': 'AAAA'}}
metadata = pd.DataFrame.from_dict(
metadata_dict, orient='index', dtype=str)
with warnings.catch_warnings(record=True):
pt.extend_and_update(metadata)
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(ebi_submission.full_ebi_dir)
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq()
self.assertCountEqual(obs_demux_samples, exp_samples)
self.assertCountEqual(ebi_submission.samples.keys(), exp_samples)
self.assertCountEqual(ebi_submission.samples_prep.keys(), exp_samples)
ebi_submission.generate_xml_files()
obs_run_xml = open(ebi_submission.run_xml_fp).read()
obs_experiment_xml = open(ebi_submission.experiment_xml_fp).read()
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_run_xml)
self.assertNotIn('1.SKB2.640194.R2.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_run_xml)
self.assertNotIn('1.SKM4.640180.R2.fastq.gz', obs_run_xml)
self.assertNotIn('PAIRED', obs_experiment_xml)
self.assertIn('SINGLE', obs_experiment_xml)
# generate_send_sequences_cmd returns a list of commands so joining
# for easier testing
obs_cmd = '|'.join(ebi_submission.generate_send_sequences_cmd())
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_cmd)
self.assertNotIn('1.SKB2.640194.R2.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_cmd)
self.assertNotIn('1.SKM4.640180.R2.fastq.gz', obs_cmd)
# at this point the full_ebi_dir has been created so we can test that
# the ADD actually works without rewriting the files
ebi_submission = EBISubmission(artifact.id, 'ADD')
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq()
self.assertCountEqual(obs_demux_samples, exp_samples)
self.assertCountEqual(ebi_submission.samples.keys(), exp_samples)
self.assertCountEqual(ebi_submission.samples_prep.keys(), exp_samples)
ebi_submission.generate_xml_files()
obs_run_xml = open(ebi_submission.run_xml_fp).read()
obs_experiment_xml = open(ebi_submission.experiment_xml_fp).read()
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_run_xml)
self.assertNotIn('1.SKB2.640194.R2.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_run_xml)
self.assertNotIn('1.SKM4.640180.R2.fastq.gz', obs_run_xml)
self.assertNotIn('PAIRED', obs_experiment_xml)
self.assertIn('SINGLE', obs_experiment_xml)
# generate_send_sequences_cmd returns a list of commands so joining
# for easier testing
obs_cmd = '|'.join(ebi_submission.generate_send_sequences_cmd())
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_cmd)
self.assertNotIn('1.SKB2.640194.R2.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_cmd)
self.assertNotIn('1.SKM4.640180.R2.fastq.gz', obs_cmd)
Artifact.delete(artifact.id)
PrepTemplate.delete(pt.id)
def test_generate_demultiplexed_per_sample_fastq_reverse(self):
metadata_dict = {
'SKB2.640194': {'barcode': 'AAA',
'primer': 'CCCC',
'center_name': 'ANL',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'run_prefix': '1.SKB2.640194'},
'SKM4.640180': {'barcode': 'CCC',
'primer': 'AAAA',
'center_name': 'ANL',
'center_project_name': 'Test Project',
'platform': 'Illumina',
'instrument_model': 'Illumina MiSeq',
'experiment_design_description':
'microbiome of soil and rhizosphere',
'library_construction_protocol':
'PMID: 22402401',
'run_prefix': '1.SKM4.640180'}}
metadata = pd.DataFrame.from_dict(
metadata_dict, orient='index', dtype=str)
with warnings.catch_warnings(record=True):
pt = PrepTemplate.create(metadata, Study(1), "18S",
investigation_type="Metagenomics")
filepaths = []
for sn in pt:
# 1 is forward, 2 is reverse
filepaths.append((join(self.temp_dir, sn + '_rename.R1.fastq'), 1))
filepaths.append((join(self.temp_dir, sn + '_rename.R2.fastq'), 2))
for fn, _ in filepaths:
with open(fn, 'w') as fh:
fh.write('some text')
self.files_to_remove.append(fn)
artifact = Artifact.create(
filepaths, "per_sample_FASTQ", prep_template=pt)
ebi_submission = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(ebi_submission.full_ebi_dir)
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq()
exp_samples = ['1.SKM4.640180', '1.SKB2.640194']
self.assertCountEqual(obs_demux_samples, exp_samples)
self.assertCountEqual(ebi_submission.samples.keys(), exp_samples)
self.assertCountEqual(ebi_submission.samples_prep.keys(), exp_samples)
ebi_submission.generate_xml_files()
obs_run_xml = open(ebi_submission.run_xml_fp).read()
obs_experiment_xml = open(ebi_submission.experiment_xml_fp).read()
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_run_xml)
self.assertIn('1.SKB2.640194.R2.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R2.fastq.gz', obs_run_xml)
self.assertIn('PAIRED', obs_experiment_xml)
self.assertNotIn('SINGLE', obs_experiment_xml)
# generate_send_sequences_cmd returns a list of commands so joining
# for easier testing
obs_cmd = '|'.join(ebi_submission.generate_send_sequences_cmd())
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_cmd)
self.assertIn('1.SKB2.640194.R2.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R2.fastq.gz', obs_cmd)
# now we have a full submission so let's test if a new one will create
# the correct values without rewriting the fastq files
ebi_submission = EBISubmission(artifact.id, 'ADD')
obs_demux_samples = ebi_submission.generate_demultiplexed_fastq()
exp_samples = ['1.SKM4.640180', '1.SKB2.640194']
self.assertCountEqual(obs_demux_samples, exp_samples)
self.assertCountEqual(ebi_submission.samples.keys(), exp_samples)
self.assertCountEqual(ebi_submission.samples_prep.keys(), exp_samples)
ebi_submission.generate_xml_files()
obs_run_xml = open(ebi_submission.run_xml_fp).read()
obs_experiment_xml = open(ebi_submission.experiment_xml_fp).read()
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_run_xml)
self.assertIn('1.SKB2.640194.R2.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_run_xml)
self.assertIn('1.SKM4.640180.R2.fastq.gz', obs_run_xml)
self.assertIn('PAIRED', obs_experiment_xml)
self.assertNotIn('SINGLE', obs_experiment_xml)
# generate_send_sequences_cmd returns a list of commands so joining
# for easier testing
obs_cmd = '|'.join(ebi_submission.generate_send_sequences_cmd())
self.assertIn('1.SKB2.640194.R1.fastq.gz', obs_cmd)
self.assertIn('1.SKB2.640194.R2.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R1.fastq.gz', obs_cmd)
self.assertIn('1.SKM4.640180.R2.fastq.gz', obs_cmd)
Artifact.delete(artifact.id)
PrepTemplate.delete(pt.id)
def test_generate_send_sequences_cmd(self):
artifact = self.write_demux_files(PrepTemplate(1))
e = EBISubmission(artifact.id, 'ADD')
e.generate_demultiplexed_fastq()
self.files_to_remove.append(e.full_ebi_dir)
e.generate_xml_files()
obs = e.generate_send_sequences_cmd()
_, base_fp = get_mountpoint("preprocessed_data")[0]
exp = ('ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKB2.640194.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKM4.640180.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKB3.640195.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKB6.640176.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKD6.640190.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKM6.640187.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKD9.640182.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKM8.640201.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/\n'
'ascp --ignore-host-key -d -QT -k2 '
'%(ebi_dir)s/1.SKM2.640199.R1.fastq.gz '
'Webin-41528@webin.ebi.ac.uk:./%(aid)d_ebi_submission/' % {
'ebi_dir': e.full_ebi_dir, 'aid': artifact.id}).split('\n')
self.assertCountEqual(obs, exp)
def test_parse_EBI_reply(self):
artifact = self.generate_new_study_with_preprocessed_data()
study_id = artifact.study.id
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
e.generate_demultiplexed_fastq(mtime=1)
e.generate_xml_files()
curl_result = CURL_RESULT_FULL.format(
qiita_config.ebi_organization_prefix, artifact.id, study_id,
artifact.prep_templates[0].id)
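        # parse_EBI_reply returns the study, sample, biosample, experiment
        # and run accessions, in that order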
stacc, saacc, bioacc, exacc, runacc = e.parse_EBI_reply(curl_result)
self.assertEqual(stacc, 'ERP000000')
study_id = artifact.study.id
exp_saacc = {'%d.Sample1' % study_id: 'ERS000000',
'%d.Sample2' % study_id: 'ERS000001',
'%d.Sample3' % study_id: 'ERS000002'}
self.assertEqual(saacc, exp_saacc)
exp_bioacc = {'%d.Sample1' % study_id: 'SAMEA0000000',
'%d.Sample2' % study_id: 'SAMEA0000001',
'%d.Sample3' % study_id: 'SAMEA0000002'}
self.assertEqual(bioacc, exp_bioacc)
exp_exacc = {'%d.Sample1' % study_id: 'ERX0000000',
'%d.Sample2' % study_id: 'ERX0000001',
'%d.Sample3' % study_id: 'ERX0000002'}
self.assertEqual(exacc, exp_exacc)
exp_runacc = {'%d.Sample1' % study_id: 'ERR0000000',
'%d.Sample2' % study_id: 'ERR0000001',
'%d.Sample3' % study_id: 'ERR0000002'}
self.assertEqual(runacc, exp_runacc)
artifact = self.write_demux_files(PrepTemplate(1))
e = EBISubmission(artifact.id, 'ADD')
self.files_to_remove.append(e.full_ebi_dir)
# removing samples so test text is easier to read
keys_to_del = ['1.SKD6.640190', '1.SKM6.640187', '1.SKD9.640182',
'1.SKM8.640201', '1.SKM2.640199', '1.SKB3.640195']
for k in keys_to_del:
            del e.samples[k]
            del e.samples_prep[k]
        # Generate the XML files so the aliases are generated
# and stored internally
e.generate_demultiplexed_fastq(mtime=1)
e.generate_xml_files()
curl_result = ""
with self.assertRaises(EBISubmissionError):
e.parse_EBI_reply(curl_result)
curl_result = 'success="true"'
with self.assertRaises(EBISubmissionError):
e.parse_EBI_reply(curl_result)
curl_result = ('some general text success="true" more text'
'<STUDY accession="staccession" some text> '
                       'some other text'
'<SUBMISSION accession="sbaccession" some text>'
'some final text')
with self.assertRaises(EBISubmissionError):
e.parse_EBI_reply(curl_result)
curl_result = CURL_RESULT_2_STUDY.format(
qiita_config.ebi_organization_prefix, artifact.id)
with self.assertRaises(EBISubmissionError):
e.parse_EBI_reply(curl_result)
curl_result = CURL_RESULT.format(qiita_config.ebi_organization_prefix,
artifact.id)
stacc, saacc, bioacc, exacc, runacc = e.parse_EBI_reply(curl_result)
self.assertEqual(stacc, None)
self.assertEqual(saacc, {})
self.assertEqual(bioacc, {})
self.assertEqual(exacc, {})
exp_runacc = {'1.SKB2.640194': 'ERR0000000',
'1.SKB6.640176': 'ERR0000001',
'1.SKM4.640180': 'ERR0000002'}
self.assertEqual(runacc, exp_runacc)
FASTA_EXAMPLE = """>1.SKB2.640194_1 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKB2.640194_2 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKB2.640194_3 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKM4.640180_4 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKM4.640180_5 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKB3.640195_6 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKB6.640176_7 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKD6.640190_8 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKM6.640187_9 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKD9.640182_10 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKM8.640201_11 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>1.SKM2.640199_12 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
"""
FASTQ_EXAMPLE = {
'1.SKB2.640194': """@1.SKB2.640194_1 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
+
~~~~~~~~~~~~
@1.SKB2.640194_2 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
+
~~~~~~~~~~~~
@1.SKB2.640194_3 X orig_bc=X new_bc=X bc_diffs=0
+
~~~~~~~~~~~~
CCACCCAGTAAC""",
'1.SKM4.640180': """@1.SKM4.640180_4 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
+
~~~~~~~~~~~~
>1.SKM4.640180_5 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
+
~~~~~~~~~~~~"""
}
FASTA_EXAMPLE_2 = """>{0}.Sample1_1 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample1_2 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample1_3 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample2_4 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample2_5 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample2_6 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample3_7 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample3_8 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
>{0}.Sample3_9 X orig_bc=X new_bc=X bc_diffs=0
CCACCCAGTAAC
"""
SAMPLEXML = """
<SAMPLE_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noName\
spaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.sample.xsd">
<SAMPLE accession="ERS000008" center_name="%(center_name)s">
<TITLE>1.SKB2.640194</TITLE>
<SAMPLE_NAME>
<TAXON_ID>410658</TAXON_ID>
<SCIENTIFIC_NAME>1118232</SCIENTIFIC_NAME>
</SAMPLE_NAME>
<DESCRIPTION>Cannabis Soil Microbiome</DESCRIPTION>
<SAMPLE_ATTRIBUTES>
<SAMPLE_ATTRIBUTE>
<TAG>altitude</TAG><VALUE>0</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>anonymized_name</TAG><VALUE>SKB2</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>assigned_from_geo</TAG><VALUE>n</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>collection_timestamp</TAG><VALUE>2011-11-11 13:00:00</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>common_name</TAG><VALUE>soil metagenome</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>country</TAG><VALUE>GAZ:United States of America</VALUE>
</SAMPLE_ATTRIBUTE><SAMPLE_ATTRIBUTE>
<TAG>depth</TAG><VALUE>0.15</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>description_duplicate</TAG><VALUE>Burmese bulk</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>dna_extracted</TAG><VALUE>true</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>elevation</TAG><VALUE>114</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_biome</TAG><VALUE>ENVO:Temperate grasslands, savannas, and \
shrubland biome</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_feature</TAG><VALUE>ENVO:plant-associated habitat</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_package</TAG><VALUE>soil</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>host_subject_id</TAG><VALUE>1001:B4</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>host_taxid</TAG><VALUE>3483</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>latitude</TAG><VALUE>35.2374368957</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>longitude</TAG><VALUE>68.5041623253</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>ph</TAG><VALUE>6.94</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>physical_specimen_location</TAG><VALUE>ANL</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>physical_specimen_remaining</TAG><VALUE>true</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>samp_salinity</TAG><VALUE>7.15</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>sample_type</TAG><VALUE>ENVO:soil</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>season_environment</TAG><VALUE>winter</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>temp</TAG><VALUE>15</VALUE></SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>texture</TAG><VALUE>64.6 sand, 17.6 silt, 17.8 clay</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>tot_nitro</TAG><VALUE>1.41</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>tot_org_carb</TAG><VALUE>5</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>water_content_soil</TAG><VALUE>0.164</VALUE>
</SAMPLE_ATTRIBUTE>
</SAMPLE_ATTRIBUTES>
</SAMPLE>
<SAMPLE accession="ERS000024" center_name="%(center_name)s">
<TITLE>1.SKB3.640195</TITLE>
<SAMPLE_NAME>
<TAXON_ID>410658</TAXON_ID>
<SCIENTIFIC_NAME>1118232</SCIENTIFIC_NAME>
</SAMPLE_NAME>
<DESCRIPTION>Cannabis Soil Microbiome</DESCRIPTION>
<SAMPLE_ATTRIBUTES>
<SAMPLE_ATTRIBUTE>
<TAG>altitude</TAG><VALUE>0</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>anonymized_name</TAG><VALUE>SKB3</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>assigned_from_geo</TAG><VALUE>n</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>collection_timestamp</TAG><VALUE>2011-11-11 13:00:00</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>common_name</TAG><VALUE>soil metagenome</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>country</TAG><VALUE>GAZ:United States of America</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>depth</TAG><VALUE>0.15</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>description_duplicate</TAG><VALUE>Burmese bulk</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>dna_extracted</TAG><VALUE>true</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>elevation</TAG><VALUE>114</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_biome</TAG><VALUE>ENVO:Temperate grasslands, savannas, and \
shrubland biome</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_feature</TAG><VALUE>ENVO:plant-associated habitat</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>env_package</TAG><VALUE>soil</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>host_subject_id</TAG><VALUE>1001:M6</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>host_taxid</TAG><VALUE>3483</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>latitude</TAG><VALUE>95.2060749748</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>longitude</TAG><VALUE>27.3592668624</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>ph</TAG><VALUE>6.94</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>physical_specimen_location</TAG><VALUE>ANL</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>physical_specimen_remaining</TAG><VALUE>true</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>samp_salinity</TAG><VALUE>7.15</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>sample_type</TAG><VALUE>ENVO:soil</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>season_environment</TAG><VALUE>winter</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>temp</TAG><VALUE>15</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>texture</TAG><VALUE>64.6 sand, 17.6 silt, 17.8 clay</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>tot_nitro</TAG><VALUE>1.41</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>tot_org_carb</TAG><VALUE>5</VALUE>
</SAMPLE_ATTRIBUTE>
<SAMPLE_ATTRIBUTE>
<TAG>water_content_soil</TAG><VALUE>0.164</VALUE>
</SAMPLE_ATTRIBUTE>
</SAMPLE_ATTRIBUTES>
</SAMPLE>
</SAMPLE_SET>
""" % {'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name}
STUDYXML = """
<STUDY_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noName\
spaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.study.xsd">
<STUDY alias="%(organization_prefix)s_sid_1" center_name="%(center_name)s">
<DESCRIPTOR>
<STUDY_TITLE>
Identification of the Microbiomes for Cannabis Soils
</STUDY_TITLE>
<STUDY_TYPE existing_study_type="Metagenomics" />
<STUDY_ABSTRACT>
This is a preliminary study to examine the microbiota associated with \
the Cannabis plant. Soils samples from the bulk soil, soil associated with \
the roots, and the rhizosphere were extracted and the DNA sequenced. Roots \
from three independent plants of different strains were examined. These roots \
were obtained November 11, 2011 from plants that had been harvested in the \
summer. Future studies will attempt to analyze the soils and rhizospheres \
from the same location at different time points in the plant lifecycle.
</STUDY_ABSTRACT>
</DESCRIPTOR>
<STUDY_LINKS>
<STUDY_LINK>
<XREF_LINK><DB>DOI</DB><ID>10.100/123456</ID></XREF_LINK>
</STUDY_LINK>
<STUDY_LINK>
<XREF_LINK><DB>PUBMED</DB><ID>123456</ID></XREF_LINK>
</STUDY_LINK>
<STUDY_LINK>
<XREF_LINK><DB>DOI</DB><ID>10.100/7891011</ID></XREF_LINK>
</STUDY_LINK>
<STUDY_LINK>
<XREF_LINK><DB>PUBMED</DB><ID>7891011</ID></XREF_LINK>
</STUDY_LINK>
</STUDY_LINKS>
</STUDY>
</STUDY_SET>
""" % {'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name}
EXPERIMENTXML_NEWSTUDY = """
<EXPERIMENT_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:no\
NamespaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.\
experiment.xsd">
<EXPERIMENT alias="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample1" center_name="%(center_name)s">
<TITLE>%(organization_prefix)s_ptid_%(pt_id)s:%(study_id)s.Sample1</TITLE>
<STUDY_REF refname="%(organization_prefix)s_sid_%(study_id)s" />
<DESIGN>
<DESIGN_DESCRIPTION>
Random value 1
</DESIGN_DESCRIPTION>
<SAMPLE_DESCRIPTOR refname="%(organization_prefix)s_sid_%(study_id)s:\
%(study_id)s.Sample1" />
<LIBRARY_DESCRIPTOR>
<LIBRARY_NAME>%(study_id)s.Sample1</LIBRARY_NAME>
<LIBRARY_SOURCE>METAGENOMIC</LIBRARY_SOURCE>
<LIBRARY_SELECTION>PCR</LIBRARY_SELECTION>
<LIBRARY_LAYOUT><SINGLE /></LIBRARY_LAYOUT>
<LIBRARY_CONSTRUCTION_PROTOCOL>Protocol ABC
</LIBRARY_CONSTRUCTION_PROTOCOL>
</LIBRARY_DESCRIPTOR>
</DESIGN>
<PLATFORM>
<ILLUMINA><INSTRUMENT_MODEL>Illumina MiSeq</INSTRUMENT_MODEL></ILLUMINA>
</PLATFORM>
<EXPERIMENT_ATTRIBUTES>
<EXPERIMENT_ATTRIBUTE>
<TAG>barcode</TAG><VALUE>CGTAGAGCTCTC</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_name</TAG><VALUE>KnightLab</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>primer</TAG><VALUE>GTGCCAGCMGCCGCGGTAA</VALUE>
</EXPERIMENT_ATTRIBUTE>
</EXPERIMENT_ATTRIBUTES>
</EXPERIMENT>
<EXPERIMENT alias="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample2" center_name="%(center_name)s">
<TITLE>%(organization_prefix)s_ptid_%(pt_id)s:%(study_id)s.Sample2</TITLE>
<STUDY_REF refname="%(organization_prefix)s_sid_%(study_id)s" />
<DESIGN>
<DESIGN_DESCRIPTION>
Random value 2
</DESIGN_DESCRIPTION>
<SAMPLE_DESCRIPTOR refname="%(organization_prefix)s_sid_%(study_id)s:\
%(study_id)s.Sample2" />
<LIBRARY_DESCRIPTOR>
<LIBRARY_NAME>%(study_id)s.Sample2</LIBRARY_NAME>
<LIBRARY_SOURCE>METAGENOMIC</LIBRARY_SOURCE>
<LIBRARY_SELECTION>PCR</LIBRARY_SELECTION>
<LIBRARY_LAYOUT><SINGLE /></LIBRARY_LAYOUT>
<LIBRARY_CONSTRUCTION_PROTOCOL>Protocol ABC
</LIBRARY_CONSTRUCTION_PROTOCOL>
</LIBRARY_DESCRIPTOR>
</DESIGN>
<PLATFORM>
<ILLUMINA><INSTRUMENT_MODEL>Illumina MiSeq</INSTRUMENT_MODEL></ILLUMINA>
</PLATFORM>
<EXPERIMENT_ATTRIBUTES>
<EXPERIMENT_ATTRIBUTE>
<TAG>barcode</TAG><VALUE>CGTAGAGCTCTA</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_name</TAG><VALUE>KnightLab</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>primer</TAG><VALUE>GTGCCAGCMGCCGCGGTAA</VALUE>
</EXPERIMENT_ATTRIBUTE>
</EXPERIMENT_ATTRIBUTES>
</EXPERIMENT>
<EXPERIMENT alias="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample3" center_name="%(center_name)s">
<TITLE>%(organization_prefix)s_ptid_%(pt_id)s:%(study_id)s.Sample3</TITLE>
<STUDY_REF refname="%(organization_prefix)s_sid_%(study_id)s" />
<DESIGN>
<DESIGN_DESCRIPTION>
Random value 3
</DESIGN_DESCRIPTION>
<SAMPLE_DESCRIPTOR refname="%(organization_prefix)s_sid_%(study_id)s:\
%(study_id)s.Sample3" />
<LIBRARY_DESCRIPTOR>
<LIBRARY_NAME>%(study_id)s.Sample3</LIBRARY_NAME>
<LIBRARY_SOURCE>METAGENOMIC</LIBRARY_SOURCE>
<LIBRARY_SELECTION>PCR</LIBRARY_SELECTION>
<LIBRARY_LAYOUT><SINGLE /></LIBRARY_LAYOUT>
<LIBRARY_CONSTRUCTION_PROTOCOL>Protocol ABC
</LIBRARY_CONSTRUCTION_PROTOCOL>
</LIBRARY_DESCRIPTOR>
</DESIGN>
<PLATFORM>
<ILLUMINA><INSTRUMENT_MODEL>Illumina MiSeq</INSTRUMENT_MODEL></ILLUMINA>
</PLATFORM>
<EXPERIMENT_ATTRIBUTES>
<EXPERIMENT_ATTRIBUTE>
<TAG>barcode</TAG><VALUE>CGTAGAGCTCTT</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_name</TAG><VALUE>KnightLab</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>primer</TAG><VALUE>GTGCCAGCMGCCGCGGTAA</VALUE>
</EXPERIMENT_ATTRIBUTE>
</EXPERIMENT_ATTRIBUTES>
</EXPERIMENT>
</EXPERIMENT_SET>
"""
EXPERIMENTXML = """
<EXPERIMENT_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:no\
NamespaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.\
experiment.xsd">
<EXPERIMENT alias="%(organization_prefix)s_ptid_1:1.SKB2.640194" \
center_name="%(center_name)s">
<TITLE>%(organization_prefix)s_ptid_1:1.SKB2.640194</TITLE>
<STUDY_REF accession="EBI123456-BB" />
<DESIGN>
<DESIGN_DESCRIPTION>
micro biome of soil and rhizosphere of cannabis plants from CA
</DESIGN_DESCRIPTION>
<SAMPLE_DESCRIPTOR accession="ERS000008" />
<LIBRARY_DESCRIPTOR>
<LIBRARY_NAME>1.SKB2.640194</LIBRARY_NAME>
<LIBRARY_SOURCE>METAGENOMIC</LIBRARY_SOURCE>
<LIBRARY_SELECTION>PCR</LIBRARY_SELECTION>
<LIBRARY_LAYOUT><SINGLE /></LIBRARY_LAYOUT>
<LIBRARY_CONSTRUCTION_PROTOCOL>This analysis was done as in Caporaso \
et al 2011 Genome research. The PCR primers (F515/R806) were developed \
against the V4 region of the 16S rRNA (both bacteria and archaea), which we \
determined would yield optimal community clustering with reads of this length \
using a procedure similar to that of ref. 15. [For reference, this primer \
pair amplifies the region 533_786 in the Escherichia coli strain 83972 \
sequence (greengenes accession no. prokMSA_id:470367).] The reverse PCR \
primer is barcoded with a 12-base error-correcting Golay code to facilitate \
multiplexing of up to 1,500 samples per lane, and both PCR primers contain \
sequencer adapter regions.
</LIBRARY_CONSTRUCTION_PROTOCOL>
</LIBRARY_DESCRIPTOR>
</DESIGN>
<PLATFORM>
<ILLUMINA><INSTRUMENT_MODEL>Illumina MiSeq</INSTRUMENT_MODEL></ILLUMINA>
</PLATFORM>
<EXPERIMENT_ATTRIBUTES>
<EXPERIMENT_ATTRIBUTE>
<TAG>barcode</TAG><VALUE>CGTAGAGCTCTC</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_name</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_project_name</TAG><VALUE>Unknown</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>emp_status</TAG><VALUE>EMP</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>experiment_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>experiment_title</TAG><VALUE>Cannabis Soil Microbiome</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>illumina_technology</TAG><VALUE>MiSeq</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>pcr_primers</TAG><VALUE>FWD:GTGCCAGCMGCCGCGGTAA; \
REV:GGACTACHVGGGTWTCTAAT</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>primer</TAG><VALUE>GTGCCAGCMGCCGCGGTAA</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_date</TAG><VALUE>8/1/12</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_prefix</TAG><VALUE>s_G1_L001_sequences</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>samp_size</TAG><VALUE>.25,g</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>sample_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>sequencing_meth</TAG><VALUE>Sequencing by synthesis</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>study_center</TAG><VALUE>CCME</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>target_gene</TAG><VALUE>16S rRNA</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>target_subfragment</TAG><VALUE>V4</VALUE>
</EXPERIMENT_ATTRIBUTE>
</EXPERIMENT_ATTRIBUTES>
</EXPERIMENT>
<EXPERIMENT alias="%(organization_prefix)s_ptid_1:1.SKB3.640195" \
center_name="%(center_name)s">
<TITLE>%(organization_prefix)s_ptid_1:1.SKB3.640195</TITLE>
<STUDY_REF accession="EBI123456-BB" />
<DESIGN>
<DESIGN_DESCRIPTION>
micro biome of soil and rhizosphere of cannabis plants from CA
</DESIGN_DESCRIPTION>
<SAMPLE_DESCRIPTOR accession="ERS000024" />
<LIBRARY_DESCRIPTOR>
<LIBRARY_NAME>1.SKB3.640195</LIBRARY_NAME>
<LIBRARY_SOURCE>METAGENOMIC</LIBRARY_SOURCE>
<LIBRARY_SELECTION>PCR</LIBRARY_SELECTION>
<LIBRARY_LAYOUT><SINGLE /></LIBRARY_LAYOUT>
<LIBRARY_CONSTRUCTION_PROTOCOL>This analysis was done as in Caporaso \
et al 2011 Genome research. The PCR primers (F515/R806) were developed \
against the V4 region of the 16S rRNA (both bacteria and archaea), which we \
determined would yield optimal community clustering with reads of this length \
using a procedure similar to that of ref. 15. [For reference, this primer \
pair amplifies the region 533_786 in the Escherichia coli strain 83972 \
sequence (greengenes accession no. prokMSA_id:470367).] The reverse PCR \
primer is barcoded with a 12-base error-correcting Golay code to facilitate \
multiplexing of up to 1,500 samples per lane, and both PCR primers contain \
sequencer adapter regions.
</LIBRARY_CONSTRUCTION_PROTOCOL>
</LIBRARY_DESCRIPTOR>
</DESIGN>
<PLATFORM>
<ILLUMINA><INSTRUMENT_MODEL>Illumina MiSeq</INSTRUMENT_MODEL></ILLUMINA>
</PLATFORM>
<EXPERIMENT_ATTRIBUTES>
<EXPERIMENT_ATTRIBUTE>
<TAG>barcode</TAG><VALUE>CCTCTGAGAGCT</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_name</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>center_project_name</TAG><VALUE>Unknown</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>emp_status</TAG><VALUE>EMP</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>experiment_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>experiment_title</TAG><VALUE>Cannabis Soil Microbiome</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>illumina_technology</TAG><VALUE>MiSeq</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>pcr_primers</TAG><VALUE>FWD:GTGCCAGCMGCCGCGGTAA; \
REV:GGACTACHVGGGTWTCTAAT</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>primer</TAG><VALUE>GTGCCAGCMGCCGCGGTAA</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_date</TAG><VALUE>8/1/12</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>run_prefix</TAG><VALUE>s_G1_L001_sequences</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>samp_size</TAG><VALUE>.25,g</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>sample_center</TAG><VALUE>ANL</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>sequencing_meth</TAG><VALUE>Sequencing by synthesis</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>study_center</TAG><VALUE>CCME</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>target_gene</TAG><VALUE>16S rRNA</VALUE>
</EXPERIMENT_ATTRIBUTE>
<EXPERIMENT_ATTRIBUTE>
<TAG>target_subfragment</TAG><VALUE>V4</VALUE>
</EXPERIMENT_ATTRIBUTE>
</EXPERIMENT_ATTRIBUTES>
</EXPERIMENT>
</EXPERIMENT_SET>
""" % {'organization_prefix': qiita_config.ebi_organization_prefix,
'center_name': qiita_config.ebi_center_name}
RUNXML = """
<RUN_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespace\
SchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.run.xsd">
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:1.SKB2.640194" \
center_name="%(center_name)s">
<EXPERIMENT_REF accession="ERX0000008" />
<DATA_BLOCK>
<FILES>
<FILE checksum="a32357beb845f5b598f1a712fb3b4c70" \
checksum_method="MD5" filename="%(ebi_dir)s/1.SKB2.640194.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:1.SKB3.640195" \
center_name="%(center_name)s">
<EXPERIMENT_REF accession="ERX0000024" />
<DATA_BLOCK>
<FILES>
<FILE checksum="deb905ced92812a65a2158fdcfd0f84d" \
checksum_method="MD5" filename="%(ebi_dir)s/1.SKB3.640195.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:1.SKB6.640176" \
center_name="%(center_name)s">
<EXPERIMENT_REF accession="ERX0000025" />
<DATA_BLOCK>
<FILES>
<FILE checksum="847ba142770397a2fae3a8acfbc70640" \
checksum_method="MD5" filename="%(ebi_dir)s/1.SKB6.640176.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:1.SKM4.640180" \
center_name="%(center_name)s">
<EXPERIMENT_REF accession="ERX0000004" />
<DATA_BLOCK>
<FILES>
<FILE checksum="0dc19bc7ad4ab613c3f738cc9eb57e2c" \
checksum_method="MD5" filename="%(ebi_dir)s/1.SKM4.640180.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
</RUN_SET>
"""
RUNXML_NEWSTUDY = """
<RUN_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespace\
SchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/SRA.run.xsd">
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:%(study_id)s.\
Sample1" center_name="%(center_name)s">
<EXPERIMENT_REF refname="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample1" />
<DATA_BLOCK>
<FILES>
<FILE checksum="%(sample_1)s" \
checksum_method="MD5" filename="%(ebi_dir)s/%(study_id)s.Sample1.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:%(study_id)s.\
Sample2" center_name="%(center_name)s">
<EXPERIMENT_REF refname="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample2" />
<DATA_BLOCK>
<FILES>
<FILE checksum="%(sample_2)s" \
checksum_method="MD5" filename="%(ebi_dir)s/%(study_id)s.Sample2.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
<RUN alias="%(organization_prefix)s_ppdid_%(artifact_id)s:%(study_id)s.\
Sample3" center_name="%(center_name)s">
<EXPERIMENT_REF refname="%(organization_prefix)s_ptid_%(pt_id)s:\
%(study_id)s.Sample3" />
<DATA_BLOCK>
<FILES>
<FILE checksum="%(sample_3)s" \
checksum_method="MD5" filename="%(ebi_dir)s/%(study_id)s.Sample3.R1.fastq.gz" \
filetype="fastq" quality_scoring_system="phred" />
</FILES>
</DATA_BLOCK>
</RUN>
</RUN_SET>
"""
SUBMISSIONXML_FULL = """
<SUBMISSION_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:no\
NamespaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/\
SRA.submission.xsd">
<SUBMISSION alias="%(submission_alias)s" center_name="%(center_name)s">
<ACTIONS>
<ACTION><ADD schema="study" source="study.xml" /></ACTION>
<ACTION><ADD schema="sample" source="sample.xml" /></ACTION>
<ACTION><ADD schema="experiment" source="experiment.xml" /></ACTION>
<ACTION><ADD schema="run" source="run.xml" /></ACTION>
<ACTION><HOLD HoldUntilDate="2016-09-02" /></ACTION>
</ACTIONS>
</SUBMISSION>
</SUBMISSION_SET>
"""
SUBMISSIONXML = """
<SUBMISSION_SET xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:no\
NamespaceSchemaLocation="ftp://ftp.sra.ebi.ac.uk/meta/xsd/sra_1_3/\
SRA.submission.xsd">
<SUBMISSION alias="%(submission_alias)s" center_name="%(center_name)s">
<ACTIONS>
<ACTION><ADD schema="experiment" source="experiment.xml" /></ACTION>
<ACTION><ADD schema="run" source="run.xml" /></ACTION>
<ACTION><HOLD HoldUntilDate="2016-09-02" /></ACTION>
</ACTIONS>
</SUBMISSION>
</SUBMISSION_SET>
"""
ADDDICTTEST = """<TESTING foo="bar">
<foo>
<TAG>>x</TAG>
<VALUE><y</VALUE>
</foo>
<foo>
<TAG>none</TAG>
<VALUE>Unknown</VALUE>
</foo>
<foo>
<TAG>x</TAG>
<VALUE>y</VALUE>
</foo>
</TESTING>
"""
GENSPOTDESC = """<design foo="bar">
<SPOT_DESCRIPTOR>
<SPOT_DECODE_SPEC />
<READ_SPEC>
<READ_INDEX>0</READ_INDEX>
<READ_CLASS>Application Read</READ_CLASS>
<READ_TYPE>Forward</READ_TYPE>
<BASE_COORD>1</BASE_COORD>
</READ_SPEC>
</SPOT_DESCRIPTOR>
</design>
"""
CURL_RESULT_FULL = """<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2015-09-20T23:27:01.924+01:00" \
submissionFile="submission.xml" success="true">
<EXPERIMENT accession="ERX0000000" alias="{0}_ptid_{3}:{2}.Sample1" \
status="PRIVATE"/>
<EXPERIMENT accession="ERX0000001" alias="{0}_ptid_{3}:{2}.Sample2" \
status="PRIVATE"/>
<EXPERIMENT accession="ERX0000002" alias="{0}_ptid_{3}:{2}.Sample3" \
status="PRIVATE"/>
<RUN accession="ERR0000000" alias="{0}_ppdid_{1}:{2}.Sample1" \
status="PRIVATE"/>
<RUN accession="ERR0000001" alias="{0}_ppdid_{1}:{2}.Sample2" \
status="PRIVATE"/>
<RUN accession="ERR0000002" alias="{0}_ppdid_{1}:{2}.Sample3" \
status="PRIVATE"/>
<SAMPLE accession="ERS000000" alias="{0}_sid_{2}:{2}.Sample1"
status="PRIVATE">
<EXT_ID accession="SAMEA0000000" type="biosample"/>
</SAMPLE>
<SAMPLE accession="ERS000001" alias="{0}_sid_{2}:{2}.Sample2"
status="PRIVATE">
<EXT_ID accession="SAMEA0000001" type="biosample"/>
</SAMPLE>
<SAMPLE accession="ERS000002" alias="{0}_sid_{2}:{2}.Sample3"
status="PRIVATE">
<EXT_ID accession="SAMEA0000002" type="biosample"/>
</SAMPLE>
<STUDY accession="ERP000000" alias="{0}_sid_{2}" status="PRIVATE" \
holdUntilDate="2016-09-19+01:00"/>
<SUBMISSION accession="ERA000000" alias="qiime_submission_570"/>
<MESSAGES>
<INFO> ADD action for the following XML: study.xml sample.xml \
experiment.xml run.xml </INFO>
</MESSAGES>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>HOLD</ACTIONS>
</RECEIPT>
"""
CURL_RESULT = """<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2015-09-20T23:27:01.924+01:00" \
submissionFile="submission.xml" success="true">
<RUN accession="ERR0000000" alias="{0}_ppdid_{1}:1.SKB2.640194" \
status="PRIVATE"/>
<RUN accession="ERR0000001" alias="{0}_ppdid_{1}:1.SKB6.640176" \
status="PRIVATE"/>
<RUN accession="ERR0000002" alias="{0}_ppdid_{1}:1.SKM4.640180" \
status="PRIVATE"/>
<SUBMISSION accession="ERA000000" alias="qiime_submission_570"/>
<MESSAGES>
<INFO> ADD action for the following XML: run.xml </INFO>
</MESSAGES>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>HOLD</ACTIONS>
</RECEIPT>
"""
CURL_RESULT_2_STUDY = """<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2015-09-20T23:27:01.924+01:00" \
submissionFile="submission.xml" success="true">
<EXPERIMENT accession="ERX0000000" alias="{0}_ptid_1:1.SKB2.640194" \
status="PRIVATE"/>
<RUN accession="ERR0000000" alias="{0}_ppdid_{1}:1.SKB2.640194" \
status="PRIVATE"/>
<SAMPLE accession="ERS000000" alias="{0}_sid_1:1.SKB2.640194"
status="PRIVATE">
<EXT_ID accession="SAMEA0000000" type="biosample"/>
</SAMPLE>
<STUDY accession="ERP000000" alias="{0}_sid_1" status="PRIVATE" \
holdUntilDate="2016-09-19+01:00"/>
<STUDY accession="ERP000000" alias="{0}_sid_2" status="PRIVATE" \
holdUntilDate="2016-09-19+01:00"/>
<SUBMISSION accession="ERA000000" alias="qiime_submission_570"/>
<MESSAGES>
<INFO> ADD action for the following XML: study.xml sample.xml \
experiment.xml run.xml </INFO>
</MESSAGES>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>ADD</ACTIONS>
<ACTIONS>HOLD</ACTIONS>
</RECEIPT>
"""
if __name__ == "__main__":
main()
| 43.351554 | 79 | 0.62865 | 9,602 | 80,894 | 5.062591 | 0.083941 | 0.03456 | 0.020736 | 0.029952 | 0.829072 | 0.807904 | 0.780626 | 0.765444 | 0.741787 | 0.708091 | 0 | 0.048702 | 0.24385 | 80,894 | 1,865 | 80 | 43.374799 | 0.746011 | 0.032944 | 0 | 0.689165 | 0 | 0.02013 | 0.516728 | 0.228055 | 0 | 0 | 0 | 0 | 0.092954 | 1 | 0.018354 | false | 0.001776 | 0.015394 | 0 | 0.036708 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a23c7e7eb8f8ed7eabab2736dfb7bc86c98f2dfb | 28 | py | Python | foood/__init__.py | starsparrow/foood | 2ee7666f0bda9b06c61a6c2d3da79af6c339ffc9 | [
"MIT"
] | 1 | 2021-03-19T00:54:42.000Z | 2021-03-19T00:54:42.000Z | foood/__init__.py | starsparrow/foood | 2ee7666f0bda9b06c61a6c2d3da79af6c339ffc9 | [
"MIT"
] | null | null | null | foood/__init__.py | starsparrow/foood | 2ee7666f0bda9b06c61a6c2d3da79af6c339ffc9 | [
"MIT"
] | null | null | null | from foood.foood import app
| 14 | 27 | 0.821429 | 5 | 28 | 4.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a261d6da88c98068b5e497cebff91fdc614a269a | 37 | py | Python | sequence/utils/__init__.py | ritchie46/clickstream | 79c669d0636521db2697e5fa583628d1920cc6c1 | [
"MIT"
] | 8 | 2020-01-27T14:43:02.000Z | 2021-08-24T19:26:30.000Z | sequence/utils/__init__.py | ritchie46/clickstream | 79c669d0636521db2697e5fa583628d1920cc6c1 | [
"MIT"
] | 4 | 2020-07-16T14:27:11.000Z | 2021-01-27T08:28:20.000Z | sequence/utils/__init__.py | ritchie46/clickstream | 79c669d0636521db2697e5fa583628d1920cc6c1 | [
"MIT"
] | 6 | 2020-07-16T12:14:49.000Z | 2021-05-17T08:18:42.000Z | from sequence.utils.general import *
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a26f1fc432a5e19f2b4242070c7bbfa54773c3fb | 344 | py | Python | cla_backend/apps/checker/forms.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 3 | 2019-10-02T15:31:03.000Z | 2022-01-13T10:15:53.000Z | cla_backend/apps/checker/forms.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 206 | 2015-01-02T16:50:11.000Z | 2022-02-16T20:16:05.000Z | cla_backend/apps/checker/forms.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 6 | 2015-03-23T23:08:42.000Z | 2022-02-15T17:04:44.000Z | from legalaid.forms import BaseCallMeBackForm
class WebCallMeBackForm(BaseCallMeBackForm):
    def __init__(self, *args, **kwargs):
        self.requires_action_at = kwargs.pop("requires_action_at")
        super(WebCallMeBackForm, self).__init__(*args, **kwargs)

    def get_requires_action_at(self):
        return self.requires_action_at
| 31.272727 | 66 | 0.744186 | 39 | 344 | 6.128205 | 0.487179 | 0.23431 | 0.267782 | 0.167364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159884 | 344 | 10 | 67 | 34.4 | 0.82699 | 0 | 0 | 0 | 0 | 0 | 0.052326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a27a86b6c46bb79051b81bf35d4f87d98b9a6349 | 1,310 | py | Python | ds_statsml/_alt_metrics.py | airportpeople/dstools | 00207fa8edd8695b308c4b4a6e022176357b1c83 | [
"MIT"
] | null | null | null | ds_statsml/_alt_metrics.py | airportpeople/dstools | 00207fa8edd8695b308c4b4a6e022176357b1c83 | [
"MIT"
] | null | null | null | ds_statsml/_alt_metrics.py | airportpeople/dstools | 00207fa8edd8695b308c4b4a6e022176357b1c83 | [
"MIT"
] | null | null | null | import numpy as np
from sklearn.metrics import mean_squared_error
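# Percentage-error style forecast metrics: plain (APE/MAPE/MdAPE), symmetric
# (sAPE/sMAPE), and arctan-bounded (AAPE/MAAPE, which stay finite when the
# actual value is zero), plus RMSE.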
def APE(actual, forecast):
    return np.abs((actual - forecast) / (actual))


def APE_inv(actual, forecast):
    return np.abs(actual / (actual - forecast))


def sAPE(actual, forecast):
    return np.abs(actual - forecast) / ((actual + forecast) / 2)


def sAPE2(actual, forecast):
    return np.abs((actual - forecast) * ((actual + forecast) / 2) ** 0.5)


def AAPE(actual, forecast):
    return np.arctan(APE(actual, forecast))


def AAPE_inv(actual, forecast):
    return np.arctan(APE_inv(actual, forecast))


def MAPE(actual, forecast):
    return np.mean(APE(actual, forecast))


def MdAPE(actual, forecast):
    return np.median(APE(actual, forecast))


def sMAPE(actual, forecast):
    return np.mean(sAPE(actual, forecast))


def sMAPE2(actual, forecast):
    return np.mean(sAPE2(actual, forecast))


def MAAPE(actual, forecast):
    return np.mean(AAPE(actual, forecast))


def MAAPE_inv(actual, forecast):
    return np.mean(AAPE_inv(actual, forecast))


def RMSE(actual, forecast):
    return np.sqrt(mean_squared_error(actual, forecast))
def median_within(a):
    '''
    Adjusted median to keep the value in the array
    :param a:
    :return:
    '''
    a_ = np.array(a).copy()
    a_.sort()
    return a_[len(a_) // 2] | 19.848485 | 73 | 0.677099 | 182 | 1,310 | 4.796703 | 0.247253 | 0.449026 | 0.297824 | 0.327606 | 0.450172 | 0.350515 | 0.175258 | 0.175258 | 0.123711 | 0.123711 | 0 | 0.007505 | 0.18626 | 1,310 | 66 | 74 | 19.848485 | 0.811445 | 0.049618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0.0625 | 0.40625 | 0.9375 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6
a290eb3699f22c432cb4e404f038d27b63908ea7 | 103 | py | Python | src/uri/basic/basic_1001.py | gabrielDpadua21/code-challenges | 0050bc9b358193aa6cacdda21e0670a9dc20450a | [
"MIT"
] | null | null | null | src/uri/basic/basic_1001.py | gabrielDpadua21/code-challenges | 0050bc9b358193aa6cacdda21e0670a9dc20450a | [
"MIT"
] | null | null | null | src/uri/basic/basic_1001.py | gabrielDpadua21/code-challenges | 0050bc9b358193aa6cacdda21e0670a9dc20450a | [
"MIT"
] | null | null | null | class SumNumbers:
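    # URI Online Judge problem 1001: format the sum as "X = <value>".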
    def solution(self, value1, value2):
return "X = " + str(value1 + value2) | 25.75 | 44 | 0.621359 | 12 | 103 | 5.333333 | 0.833333 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 0.252427 | 103 | 4 | 44 | 25.75 | 0.779221 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0c3b7a4322ea156977b4721db332d0015dd23d99 | 3,766 | py | Python | drift_qec/oneangleheuristic.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | drift_qec/oneangleheuristic.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | drift_qec/oneangleheuristic.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | # -*- coding: utf-8 -*-
from base import Parameter, Channel, Estimator, Report, Constant
import numpy as np
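# Antiderivatives of cos^2(x) and sin^2(x); used below to integrate the
# per-bin X/Z syndrome likelihoods over each angular interval.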
def _cos_partial(x):
    return (x/2.0 + 1/4.0 * np.sin(2.0*x))


def _sin_partial(x):
    return (x/2.0 - 1/4.0 * np.sin(2.0*x))
class ThetaFFT(Parameter):
    """An angle of descension from the |0> pole."""

    def __init__(self, max_time, grains, sigma, k):
        S = np.linspace(0, np.pi, grains+1)
        super(ThetaFFT, self).__init__(S, max_time, "Theta")
        start = np.random.rand()*2*np.pi
        drift = np.random.normal(0.0, sigma, max_time+1)
        drift = np.cumsum(drift)
        self.val = np.mod(start + drift, np.pi)
        self.k = k
    def update(self, s, time):
        w_x, w_z = np.sum(s[0]), np.sum(s[2])
        if (w_x > 0) | (w_z > 0):
            update = np.ones(len(self.M))
            if (w_x > 0):
                x_update = _cos_partial(self.S[1:] - self.hat[time]) \
                    - _cos_partial(self.S[:-1] - self.hat[time])
                update = update * (x_update ** (2 * w_x))
            if (w_z > 0):
                z_update = _sin_partial(self.S[1:] - self.hat[time]) \
                    - _sin_partial(self.S[:-1] - self.hat[time])
                update = update * (z_update ** (2 * w_z))
            self.p = self.p * update
        n = len(self.p)
        k = self.k
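        # Low-pass the posterior in Fourier space: keep only the k lowest
        # frequency modes so the bin distribution stays smooth as it drifts.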
        w = np.fft.fft(self.p)
        w = np.roll(w, n // 2)
        w[:n // 2 - k] = 0.0
        w[n // 2 + k:] = 0.0
        w = np.roll(w, n // 2)
        y = np.fft.ifft(w)
        self.p = y / np.sum(y)
        self.hat[time+1] = self.M[np.argmax(self.p)]
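# NOTE: ThetaNormals below currently mirrors ThetaFFT exactly; it reads like a
# placeholder for a normal-kernel smoother that was never swapped in.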
class ThetaNormals(Parameter):
    """An angle of descension from the |0> pole."""

    def __init__(self, max_time, grains, sigma, k):
        S = np.linspace(0, np.pi, grains+1)
        super(ThetaNormals, self).__init__(S, max_time, "Theta")
        start = np.random.rand()*2*np.pi
        drift = np.random.normal(0.0, sigma, max_time+1)
        drift = np.cumsum(drift)
        self.val = np.mod(start + drift, np.pi)
        self.k = k
    def update(self, s, time):
        w_x, w_z = np.sum(s[0]), np.sum(s[2])
        if (w_x > 0) | (w_z > 0):
            update = np.ones(len(self.M))
            if (w_x > 0):
                x_update = _cos_partial(self.S[1:] - self.hat[time]) \
                    - _cos_partial(self.S[:-1] - self.hat[time])
                update = update * (x_update ** (2 * w_x))
            if (w_z > 0):
                z_update = _sin_partial(self.S[1:] - self.hat[time]) \
                    - _sin_partial(self.S[:-1] - self.hat[time])
                update = update * (z_update ** (2 * w_z))
            self.p = self.p * update
        n = len(self.p)
        k = self.k
        w = np.fft.fft(self.p)
        w = np.roll(w, n // 2)
        w[:n // 2 - k] = 0.0
        w[n // 2 + k:] = 0.0
        w = np.roll(w, n // 2)
        y = np.fft.ifft(w)
        self.p = y / np.sum(y)
        self.hat[time+1] = self.M[np.argmax(self.p)]
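# Dephasing channel whose X/Z error split follows the angle-estimation error:
# px = p * cos^2(hat - val), pz = p * sin^2(hat - val), py = 0.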
class OneAngleDephasingChannel(Channel):
    def __init__(self, n, max_time):
        super(OneAngleDephasingChannel, self).__init__(n, max_time)

    def px(self, params, constants, time):
        theta = params["Theta"].hat[time] - params["Theta"].val[time]
        p = constants["p"]
        return p.val * (np.cos(theta) ** 2)

    def py(self, params, constants, time):
        return 0.0

    def pz(self, params, constants, time):
        theta = params["Theta"].hat[time] - params["Theta"].val[time]
        p = constants["p"]
        return p.val * (np.sin(theta) ** 2)
class OneAngleDephasingEstimator(Estimator):
    def __init__(self, params, constants):
        super(OneAngleDephasingEstimator, self).__init__(params, constants)
| 35.196262 | 75 | 0.509559 | 572 | 3,766 | 3.208042 | 0.132867 | 0.045777 | 0.059946 | 0.056676 | 0.785831 | 0.785831 | 0.785831 | 0.785831 | 0.785831 | 0.785831 | 0 | 0.029377 | 0.322092 | 3,766 | 106 | 76 | 35.528302 | 0.689385 | 0.027084 | 0 | 0.767442 | 0 | 0 | 0.00876 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.127907 | false | 0 | 0.023256 | 0.034884 | 0.255814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0c47875d06ddb5d52b4bd0ecb08174313b67dc07 | 46 | py | Python | example_SetlX_stat_code/stat_python_code/stat_fisher.py | leonmutschke/setlX | a10333405cba3d9d814d7de9e160561bd5fa4f76 | [
"BSD-3-Clause"
] | 28 | 2015-01-14T11:12:02.000Z | 2022-02-15T21:06:05.000Z | example_SetlX_stat_code/stat_python_code/stat_fisher.py | leonmutschke/setlX | a10333405cba3d9d814d7de9e160561bd5fa4f76 | [
"BSD-3-Clause"
] | 6 | 2016-08-01T14:21:37.000Z | 2018-06-03T17:15:00.000Z | example_SetlX_stat_code/stat_python_code/stat_fisher.py | leonmutschke/setlX | a10333405cba3d9d814d7de9e160561bd5fa4f76 | [
"BSD-3-Clause"
] | 18 | 2015-02-11T21:10:18.000Z | 2018-05-02T07:41:41.000Z | from scipy.stats import f
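# Density of the F distribution with dfn=5 and dfd=2, evaluated at x = 3.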
print(f.pdf(3,5,2))
| 15.333333 | 25 | 0.717391 | 11 | 46 | 3 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 0.108696 | 46 | 2 | 26 | 23 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
a74a06b0b46118761d28aff22093f3b07441a7d8 | 138 | py | Python | vennwedo/frontend/views.py | aniruddha2000/vennWedo | 5647bb82eda59678b88d9d6be0caca453f2a0589 | [
"Apache-2.0"
] | 1 | 2021-02-14T07:48:46.000Z | 2021-02-14T07:48:46.000Z | vennwedo/frontend/views.py | ArkaprabhaChakraborty/vennWedo | 7896e14a928a2f7e25531653c0ff45decaffbaa2 | [
"Apache-2.0"
] | null | null | null | vennwedo/frontend/views.py | ArkaprabhaChakraborty/vennWedo | 7896e14a928a2f7e25531653c0ff45decaffbaa2 | [
"Apache-2.0"
] | 2 | 2021-01-31T09:06:22.000Z | 2021-01-31T09:07:47.000Z | from django.shortcuts import render
def index(request, *args, **kwargs):
    return render(request, 'index.html')
# Create your views here.
| 23 | 37 | 0.746377 | 19 | 138 | 5.421053 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 138 | 5 | 38 | 27.6 | 0.858333 | 0.166667 | 0 | 0 | 0 | 0 | 0.088496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a7645cefdc6be86e01c217bff3a9d5e82012695e | 118 | py | Python | source/appModules/miranda64.py | SWEN-712/screen-reader-brandonp728 | e30c25ad2d10ce632fac0548696a61a872328f59 | [
"bzip2-1.0.6"
] | null | null | null | source/appModules/miranda64.py | SWEN-712/screen-reader-brandonp728 | e30c25ad2d10ce632fac0548696a61a872328f59 | [
"bzip2-1.0.6"
] | null | null | null | source/appModules/miranda64.py | SWEN-712/screen-reader-brandonp728 | e30c25ad2d10ce632fac0548696a61a872328f59 | [
"bzip2-1.0.6"
] | null | null | null | """App module for Miranda IM for Windows x64
This simply uses the miranda32 app module.
"""
from .miranda32 import *
| 19.666667 | 44 | 0.745763 | 18 | 118 | 4.888889 | 0.777778 | 0.204545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061856 | 0.177966 | 118 | 5 | 45 | 23.6 | 0.845361 | 0.711864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7bbadd3eceb95f1f6e638a60a262137cc414d10 | 7,815 | py | Python | tests/utility/test_dict_deep_update.py | bnaard/pycmdlineapp-groundwork | e78b29676e1594afcc2fba46478b938e64492d9e | [
"MIT"
] | null | null | null | tests/utility/test_dict_deep_update.py | bnaard/pycmdlineapp-groundwork | e78b29676e1594afcc2fba46478b938e64492d9e | [
"MIT"
] | null | null | null | tests/utility/test_dict_deep_update.py | bnaard/pycmdlineapp-groundwork | e78b29676e1594afcc2fba46478b938e64492d9e | [
"MIT"
] | null | null | null | import pytest
from hypothesis import given, strategies as st
from string import printable

from pycmdlineapp_groundwork.utility.dict_deep_update import (
    dict_deep_update,
    MAX_RECURSION_DEPTH,
)
key_strategies = (
    st.none() | st.booleans() | st.integers() | st.floats() | st.text(printable)
)

value_strategies = (
    st.none()
    | st.booleans()
    | st.integers()
    | st.floats()
    | st.text(printable)
    | st.lists(
        st.none() | st.booleans() | st.integers() | st.floats() | st.text(printable)
    )
    | st.dictionaries(
        st.text(printable),
        st.none() | st.booleans() | st.integers() | st.floats() | st.text(printable),
    )
)

value_no_lists_strategies = (
    st.none()
    | st.booleans()
    | st.integers()
    | st.floats()
    | st.text(printable)
)
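# A single composite draw yields the 11-tuple of keys/values that every test
# below unpacks, so all cases share the same generated inputs.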
@st.composite
def dict_deep_update_strategy(draw, keys=key_strategies, values=value_strategies,
                              values_no_lists=value_no_lists_strategies):
    k1 = draw(keys)
    v1 = draw(values)
    k2 = draw(keys)
    v2 = draw(values)
    k3 = draw(keys)
    v3 = draw(values)
    vp1 = draw(keys)
    vp2 = draw(keys)
    vp3 = draw(keys)
    vnl1 = draw(values_no_lists)
    vnl2 = draw(values_no_lists)
    return (k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2)
def _check_test_case_dict(test_case_dict):
    merged_dict = test_case_dict["target"]
    dict_deep_update(merged_dict, test_case_dict["source"])
    assert merged_dict == test_case_dict["result"]
def _inner_test_dict_deep_merge(test_case_dict):
    if test_case_dict["target"] is None or test_case_dict["source"] is None:
        with pytest.raises(ValueError):
            _check_test_case_dict(test_case_dict)
        return None
    if test_case_dict["should_raise_recursion_exception"] == True:
        with pytest.raises(RecursionError):
            _check_test_case_dict(test_case_dict)
        return None
    _check_test_case_dict(test_case_dict)
@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_001(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: vnl1},
        "source": {k3: vnl2},
        "result": {k1: v1, k2: vnl1, k3: vnl2},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_002(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: vp2},
        "source": {k2: vp3},
        "result": {k1: v1, k2: vp3},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_003(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: [vp2]},
        "source": {k2: [v2, vp3]},
        "result": {k1: v1, k2: [vp2, v2, vp3]},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_004(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: [vp2]},
        "source": {k2: vp3},
        "result": {k1: v1, k2: vp3},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_005(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: [v2, vp3]},
        "result": {k1: v1, k2: [v2, vp3]},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_006(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: vp3},
        "result": {k1: v1, k2: vp3},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_007(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: {k3: vp3}},
        "result": {k1: v1, k2: {k3: vp3}},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_008(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: {k2: v3}},
        "result": {k1: v1, k2: {k3: vp2, k2: v3}},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_009(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: {k2: v3}},
        "result": {k1: v1, k2: {k3: vp2, k2: v3}},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_010(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: {k3: vp2}},
        "source": {k2: {k2: v3}},
        "result": {k1: v1, k2: {k3: vp2, k2: v3}},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_011(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: vnl1},
        "source": {k2: vnl2},
        "result": {k1: v1, k2: vnl2},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_012(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    assert MAX_RECURSION_DEPTH == 8
    test_case_dict = {
        "should_raise_recursion_exception": False,
        "target": {k1: v1, k2: vnl2},
        "source": {k2: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: [v1, v2]}}}}}}}}}},
        "result": {k1: v1, k2: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: [v1, v2]}}}}}}}}}},
    }
    _inner_test_dict_deep_merge(test_case_dict)


@given(dict_keys_values=dict_deep_update_strategy())
def test_dict_deep_update_013(dict_keys_values):
    k1, v1, k2, v2, k3, v3, vp1, vp2, vp3, vnl1, vnl2 = dict_keys_values
    assert MAX_RECURSION_DEPTH == 8
    test_case_dict = {
        "should_raise_recursion_exception": True,
        "target": {k1: v1, k2: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: [v1, v3]}}}}}}}}}},
        "source": {k2: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: [v1, v2]}}}}}}}}}},
        "result": {k1: v1, k2: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: {k1: [v1, v2]}}}}}}}}}},
    }
    assert test_case_dict["should_raise_recursion_exception"] == True
    _inner_test_dict_deep_merge(test_case_dict)
| 32.974684 | 125 | 0.656814 | 1,154 | 7,815 | 4.090988 | 0.078856 | 0.038128 | 0.106757 | 0.050837 | 0.834993 | 0.810633 | 0.810633 | 0.80089 | 0.766575 | 0.742851 | 0 | 0.065535 | 0.193602 | 7,815 | 236 | 126 | 33.114407 | 0.683593 | 0 | 0 | 0.569231 | 0 | 0 | 0.095226 | 0.061436 | 0 | 0 | 0 | 0 | 0.020513 | 1 | 0.082051 | false | 0 | 0.035897 | 0 | 0.133333 | 0.035897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7d48e60b6687d48214bdf28ec07845fbe888f74 | 176 | py | Python | courses/python/cursoemvideo/exercicios/ex008.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | 3 | 2020-04-28T01:42:09.000Z | 2020-05-03T12:05:23.000Z | courses/python/cursoemvideo/exercicios/ex008.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | null | null | null | courses/python/cursoemvideo/exercicios/ex008.py | bdpcampos/public | dda57c265718f3e1cc0d6bce73f149051f5647ef | [
"MIT"
] | null | null | null | m = float(input('Enter the value in meters: '))
print('The value in meters is {:.2f}.\nThe value in centimeters is {:.2f}.\nThe value in millimeters is {:.2f}.'.format(m, m*100, m*1000)) | 58.666667 | 128 | 0.647727 | 33 | 176 | 3.454545 | 0.515152 | 0.245614 | 0.140351 | 0.245614 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.147727 | 176 | 3 | 128 | 58.666667 | 0.693333 | 0 | 0 | 0 | 0 | 0.5 | 0.677966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6
ce0482dadc870d8a997faea9d07b20eb16b98105 | 5,163 | py | Python | tests/se_test/test_wait_visibility.py | amplify-education/selen_kaa | 8f3f683d97f15be8bda050975c64c7883471e648 | [
"MIT"
] | 4 | 2019-11-10T19:17:15.000Z | 2021-07-20T12:41:25.000Z | tests/se_test/test_wait_visibility.py | amplify-education/selen_kaa | 8f3f683d97f15be8bda050975c64c7883471e648 | [
"MIT"
] | 3 | 2020-04-15T14:44:50.000Z | 2020-04-15T14:59:35.000Z | tests/se_test/test_wait_visibility.py | amplify-education/selen_kaa | 8f3f683d97f15be8bda050975c64c7883471e648 | [
"MIT"
] | 1 | 2022-01-24T10:05:29.000Z | 2022-01-24T10:05:29.000Z | import time
import pytest
from selenium.common.exceptions import TimeoutException
TIMEOUT_6_SEC = 6
TIMEOUT_BASE_ERR_MSG = "TimeoutException while waited {} second(s) for the element '{}' to {}."
def test_should_invisibility(app):
    index_page = app.goto_index_page()
    assert index_page.test_div.should.be_invisible(timeout=TIMEOUT_6_SEC)
    index_page.btn_show_div.click()
    assert index_page.test_div.should.be_visible(timeout=TIMEOUT_6_SEC)
    index_page.btn_hide_div.click()
    assert index_page.test_div.should.be_invisible(timeout=TIMEOUT_6_SEC)


def test_should_visible_web_element(app):
    """Verify be visible and be invisible work for a Selenium WebElement."""
    index_page = app.goto_index_page()
    assert app.wait.element_to_be_invisible(index_page.test_div.web_element, timeout=TIMEOUT_6_SEC)
    index_page.btn_show_div.click()
    assert app.wait.element_to_be_visible(index_page.test_div.web_element, timeout=TIMEOUT_6_SEC)
    index_page.btn_hide_div.click()
    assert app.wait.element_to_be_invisible(index_page.test_div.web_element, timeout=TIMEOUT_6_SEC)


def test_should_visible_by_selector(app):
    """Verify be visible and be invisible work when a css selector is passed."""
    index_page = app.goto_index_page()
    assert app.wait.element_to_be_invisible(index_page.test_div.selector, timeout=TIMEOUT_6_SEC)
    index_page.btn_show_div.click()
    assert app.wait.element_to_be_visible(index_page.test_div.selector, timeout=TIMEOUT_6_SEC)
    index_page.btn_hide_div.click()
    assert app.wait.element_to_be_invisible(index_page.test_div.selector, timeout=TIMEOUT_6_SEC)


def test_expect_invisibility(app):
    index_page = app.goto_index_page()
    assert index_page.test_div.expect.be_invisible(timeout=TIMEOUT_6_SEC)
    index_page.btn_show_div.click()
    assert index_page.test_div.expect.be_visible(timeout=TIMEOUT_6_SEC)
    index_page.btn_hide_div.click()
    assert index_page.test_div.expect.be_invisible(timeout=TIMEOUT_6_SEC)


def test_exception_on_visibility(app):
    index_page = app.goto_index_page()
    # raises(Error, match) param is too verbose for verification
    with pytest.raises(TimeoutException) as exc:
        index_page.test_div.should.be_visible(timeout=1)
    assert TIMEOUT_BASE_ERR_MSG.format(1, index_page.test_div.selector, 'be visible') == exc.value.msg


def test_exception_on_invisibility(app):
    index_page = app.goto_index_page()
    index_page.btn_show_div.click()
    time.sleep(6)
    with pytest.raises(TimeoutException) as exc:
        index_page.test_div.should.be_invisible(timeout=1)
    assert TIMEOUT_BASE_ERR_MSG.format(1, index_page.test_div.selector, 'disappear') in exc.value.msg


def test_expect_visibility_has_no_exception(app):
    index_page = app.goto_index_page()
    assert not index_page.test_div.expect.be_visible(timeout=1)


def test_expect_invisibility_has_no_exception(app):
    index_page = app.goto_index_page()
    index_page.btn_show_div.click()
    time.sleep(6)
    assert not index_page.test_div.expect.be_invisible(timeout=1)


def test_expect_no_exc_if_no_such_element(app):
    # no exception even if such element doesn't exist
    index_page = app.goto_index_page()
    assert index_page.no_such_element.expect.be_invisible(1)


def test_timeout_duration_on_expect_visibility(app):
    index_page = app.goto_index_page()
    start_t = time.time()
    assert not index_page.test_div.expect.be_visible(timeout=1)
    after_1_sec_time = time.time()
    assert all((after_1_sec_time - start_t >= 1, after_1_sec_time - start_t <= 2))
    start_t2 = time.time()
    assert not index_page.test_div.expect.be_visible(timeout=5)
    after_6_sec_time = time.time()
    assert all((after_6_sec_time - start_t2 >= 5, after_6_sec_time - start_t2 <= 6))


def test_timeout_duration_on_invisibility(app):
    index_page = app.goto_index_page()
    start_t = time.time()
    assert index_page.test_div.expect.be_invisible(timeout=1)
    after_1_sec_time = time.time()
    # it should be True faster than 1 second, as the condition is true from the beginning
    assert after_1_sec_time - start_t <= 1
    start_t2 = time.time()
    assert index_page.test_div.expect.be_invisible(timeout=5)
    after_6_sec_time = time.time()
    # it should be True faster than 5 seconds, as the condition is true from the beginning
    assert after_6_sec_time - start_t2 <= 1


def test_invisible_timeout_none_and_zero(app):
    index_page = app.goto_index_page()
    start_t = time.time()
    assert index_page.test_div.expect.be_invisible()
    assert index_page.test_div.expect.be_invisible(None)
    assert index_page.test_div.expect.be_invisible(0)
    after_time = time.time()
    # it should be faster than 1 second
    assert after_time - start_t <= 1


def test_visible_timeout_none_and_zero(app):
    index_page = app.goto_index_page()
    start_t = time.time()
    assert index_page.btn_show_div.expect.be_visible()
    assert index_page.btn_show_div.expect.be_visible(None)
    assert index_page.btn_show_div.expect.be_visible(0)
    after_time = time.time()
    # it should be faster than 1 second
    assert after_time - start_t <= 1
| 38.529851 | 100 | 0.768933 | 829 | 5,163 | 4.414958 | 0.117008 | 0.159836 | 0.088798 | 0.10929 | 0.85765 | 0.83388 | 0.813115 | 0.797268 | 0.712022 | 0.645628 | 0 | 0.012644 | 0.142165 | 5,163 | 133 | 101 | 38.819549 | 0.813728 | 0.092969 | 0 | 0.554348 | 0 | 0 | 0.01907 | 0 | 0 | 0 | 0 | 0 | 0.380435 | 1 | 0.141304 | false | 0 | 0.032609 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce330c1527c57640fae342a3f4e0ba775f9e8121 | 3,364 | py | Python | code/dataPrepare.py | yangyuan333/MvSMPLfitting | c6d92c0e50531c7b9480c57212b3e0e99249525f | [
"MIT"
] | null | null | null | code/dataPrepare.py | yangyuan333/MvSMPLfitting | c6d92c0e50531c7b9480c57212b3e0e99249525f | [
"MIT"
] | null | null | null | code/dataPrepare.py | yangyuan333/MvSMPLfitting | c6d92c0e50531c7b9480c57212b3e0e99249525f | [
"MIT"
] | null | null | null | import sys
sys.path.append('./')
import glob
import os
import shutil
import json
import numpy as np
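# Repackage the PROX quantitative dataset (RGB frames plus OpenPose keypoint
# json files) into a per-frame images/keypoints/Camera00 layout.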
## lsp
# def transProx2coco(js):
# idx = [0,16,15,18,17,5,2,6,3,7,4,12,9,13,10,14,11]
# return js[idx]
# if __name__ == '__main__':
# path = R'H:\YangYuan\ProjectData\HumanObject\dataset\PROX\prox_quantiative_dataset\keypoints'
# imgPath = R'H:\YangYuan\ProjectData\HumanObject\dataset\PROX\prox_quantiative_dataset\recordings'
# desPath = R'H:\YangYuan\Code\phy_program\MvSMPLfitting\dataprox'
# for seq in glob.glob(os.path.join(path,'*')):
# seqname = os.path.basename(seq)
# for frame in glob.glob(os.path.join(seq,'*')):
# framename = os.path.basename(frame)[:4]
# if not os.path.exists(os.path.join(imgPath,seqname,'img',os.path.basename(frame)[:-15]+'.jpg')):
# continue
# os.makedirs(
# os.path.join(desPath,'images',seqname+'_'+framename,'Camera00'),exist_ok=True
# )
# shutil.copyfile(
# os.path.join(imgPath,seqname,'img',os.path.basename(frame)[:-15]+'.jpg'),
# os.path.join(desPath,'images',seqname+'_'+framename,'Camera00','00001.jpg')
# )
# os.makedirs(
# os.path.join(desPath,'keypoints',seqname+'_'+framename,'Camera00'),exist_ok=True
# )
# with open(frame, 'rb') as file:
# data = json.load(file)
# js = np.array(data['people'][0]['pose_keypoints_2d']).reshape(-1,3)
# js = transProx2coco(js)
# temData = {}
# temData['people'] = []
# temData['people'].append(
# {'pose_keypoints_2d':list(js.reshape(-1))}
# )
# with open(os.path.join(desPath,'keypoints',seqname+'_'+framename,'Camera00','00001_keypoints.json'),'w',encoding="utf8") as file:
# json.dump(temData,file)
# coco25
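# Map OpenPose BODY_25 joints into 17-joint COCO order (nose, eyes, ears,
# then left/right shoulder-elbow-wrist and hip-knee-ankle).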
def transProx2coco(js):
    idx = [0,16,15,18,17,5,2,6,3,7,4,12,9,13,10,14,11]
    return js[idx]


if __name__ == '__main__':
    path = R'H:\YangYuan\ProjectData\HumanObject\dataset\PROX\prox_quantiative_dataset\keypoints'
    imgPath = R'H:\YangYuan\ProjectData\HumanObject\dataset\PROX\prox_quantiative_dataset\recordings'
    desPath = R'H:\YangYuan\Code\phy_program\MvSMPLfitting\dataproxTest'
    for seq in glob.glob(os.path.join(path,'*')):
        seqname = os.path.basename(seq)
        for frame in glob.glob(os.path.join(seq,'*')):
            framename = os.path.basename(frame)[:4]
            if not os.path.exists(os.path.join(imgPath,seqname,'img',os.path.basename(frame)[:-15]+'.jpg')):
                continue
            os.makedirs(
                os.path.join(desPath,'images',seqname+'_'+framename,'Camera00'),exist_ok=True
            )
            shutil.copyfile(
                os.path.join(imgPath,seqname,'img',os.path.basename(frame)[:-15]+'.jpg'),
                os.path.join(desPath,'images',seqname+'_'+framename,'Camera00','00001.jpg')
            )
            os.makedirs(
                os.path.join(desPath,'keypoints',seqname+'_'+framename,'Camera00'),exist_ok=True
            )
            shutil.copyfile(
                os.path.join(frame),
                os.path.join(desPath,'keypoints',seqname+'_'+framename,'Camera00','00001_keypoints.json')
) | 46.722222 | 143 | 0.58591 | 410 | 3,364 | 4.7 | 0.231707 | 0.084069 | 0.08822 | 0.070576 | 0.832382 | 0.832382 | 0.832382 | 0.832382 | 0.832382 | 0.832382 | 0 | 0.043137 | 0.241974 | 3,364 | 72 | 144 | 46.722222 | 0.712549 | 0.529132 | 0 | 0.117647 | 0 | 0 | 0.221719 | 0.143504 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.176471 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cbf083494ca790fcf00390f5968350fbf8da522a | 46 | py | Python | tests/pbraiders/pages/options/graphs/__init__.py | pbraiders/pomponne-test-bdd | 7f2973936318221f54e65e0f8bd839cad7216fa4 | [
"MIT"
] | 1 | 2021-03-30T14:41:29.000Z | 2021-03-30T14:41:29.000Z | tests/pbraiders/pages/options/graphs/__init__.py | pbraiders/pomponne-test-bdd | 7f2973936318221f54e65e0f8bd839cad7216fa4 | [
"MIT"
] | null | null | null | tests/pbraiders/pages/options/graphs/__init__.py | pbraiders/pomponne-test-bdd | 7f2973936318221f54e65e0f8bd839cad7216fa4 | [
"MIT"
] | null | null | null | # coding=utf-8
from .graphs import GraphsPage
| 15.333333 | 30 | 0.782609 | 7 | 46 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.130435 | 46 | 2 | 31 | 23 | 0.875 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cbfe179fa18408f42cb725a676113e0852a5f66b | 269 | py | Python | tests/test_hello_minitrade.py | janFiOpelito/minitradetest | 1befdf69a567d0712962181e089b8f9dbd205285 | [
"MIT"
] | null | null | null | tests/test_hello_minitrade.py | janFiOpelito/minitradetest | 1befdf69a567d0712962181e089b8f9dbd205285 | [
"MIT"
] | null | null | null | tests/test_hello_minitrade.py | janFiOpelito/minitradetest | 1befdf69a567d0712962181e089b8f9dbd205285 | [
"MIT"
] | null | null | null | """Unit tests for the mini-trade module."""
from minitrade import hellominitradedef


def test_function_1():
    assert hellominitradedef.hello_crypto_world() == "hello_crypto_world"


def test_function_2():
    assert hellominitradedef.add_crypto(3, 2) == 5
| 24.454545 | 72 | 0.769517 | 40 | 269 | 4.85 | 0.6 | 0.226804 | 0.154639 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021368 | 0.130112 | 269 | 10 | 73 | 26.9 | 0.807692 | 0.137546 | 0 | 0 | 0 | 0 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
023338e11d2fbf7de75abaa567284664b11209a8 | 21,219 | py | Python | saleor/graphql/payment/tests/mutations/test_transaction_create.py | DevPoke/saleor | ced3a2249a18031f9f593e71d1d18aa787ec1060 | [
"CC-BY-4.0"
] | null | null | null | saleor/graphql/payment/tests/mutations/test_transaction_create.py | DevPoke/saleor | ced3a2249a18031f9f593e71d1d18aa787ec1060 | [
"CC-BY-4.0"
] | null | null | null | saleor/graphql/payment/tests/mutations/test_transaction_create.py | DevPoke/saleor | ced3a2249a18031f9f593e71d1d18aa787ec1060 | [
"CC-BY-4.0"
] | null | null | null | from decimal import Decimal
import graphene
import pytest
from .....order import OrderEvents
from .....order.utils import update_order_authorize_data, update_order_charge_data
from .....payment import TransactionStatus
from .....payment.error_codes import TransactionCreateErrorCode
from .....payment.models import TransactionItem
from ....tests.utils import assert_no_permission, get_graphql_content
from ...enums import TransactionActionEnum, TransactionStatusEnum
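# GraphQL document shared by every test below; "id" may reference either an
# Order or a Checkout node.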
MUTATION_TRANSACTION_CREATE = """
mutation TransactionCreate(
$id: ID!,
$transaction_event: TransactionEventInput,
$transaction: TransactionCreateInput!
){
transactionCreate(
id: $id,
transactionEvent: $transaction_event,
transaction: $transaction
){
transaction{
id
actions
reference
type
status
modifiedAt
createdAt
authorizedAmount{
amount
currency
}
voidedAmount{
currency
amount
}
chargedAmount{
currency
amount
}
refundedAmount{
currency
amount
}
events{
status
reference
name
createdAt
}
}
errors{
field
message
code
}
}
}
"""
def test_transaction_create_for_order(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    metadata = {"key": "test-1", "value": "123"}
    private_metadata = {"key": "test-2", "value": "321"}
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "metadata": [metadata],
            "privateMetadata": [private_metadata],
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    transaction = order_with_lines.payment_transactions.first()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    assert data["actions"] == available_actions
    assert data["status"] == status
    assert data["reference"] == reference
    assert data["authorizedAmount"]["amount"] == authorized_value
    assert available_actions == list(map(str.upper, transaction.available_actions))
    assert status == transaction.status
    assert reference == transaction.reference
    assert authorized_value == transaction.authorized_value
    assert transaction.metadata == {metadata["key"]: metadata["value"]}
    assert transaction.private_metadata == {
        private_metadata["key"]: private_metadata["value"]
    }


def test_transaction_create_for_order_updates_order_total_authorized(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    previously_authorized_value = Decimal("90")
    old_transaction = order_with_lines.payment_transactions.create(
        authorized_value=previously_authorized_value, currency=order_with_lines.currency
    )
    update_order_authorize_data(order_with_lines)
    authorized_value = Decimal("10")
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": "Authorized for 10$",
            "type": "Credit Card",
            "reference": "PSP reference - 123",
            "availableActions": [],
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    order_with_lines.refresh_from_db()
    transaction = order_with_lines.payment_transactions.exclude(
        id=old_transaction.id
    ).last()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    assert data["authorizedAmount"]["amount"] == authorized_value
    assert (
        order_with_lines.total_authorized_amount
        == previously_authorized_value + authorized_value
    )
    assert authorized_value == transaction.authorized_value
def test_transaction_create_for_order_updates_order_total_charged(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    previously_charged_value = Decimal("90")
    old_transaction = order_with_lines.payment_transactions.create(
        charged_value=previously_charged_value, currency=order_with_lines.currency
    )
    update_order_charge_data(order_with_lines)
    charged_value = Decimal("10")
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": "Charged 10$",
            "type": "Credit Card",
            "reference": "PSP reference - 123",
            "availableActions": [],
            "amountCharged": {
                "amount": charged_value,
                "currency": "USD",
            },
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    order_with_lines.refresh_from_db()
    transaction = order_with_lines.payment_transactions.exclude(
        id=old_transaction.id
    ).last()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    assert data["chargedAmount"]["amount"] == charged_value
    assert (
        order_with_lines.total_charged_amount
        == previously_charged_value + charged_value
    )
    assert charged_value == transaction.charged_value


def test_transaction_create_for_checkout(
    checkout_with_items, permission_manage_payments, app_api_client
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    metadata = {"key": "test-1", "value": "123"}
    private_metadata = {"key": "test-2", "value": "321"}
    variables = {
        "id": graphene.Node.to_global_id("Checkout", checkout_with_items.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "metadata": [metadata],
            "privateMetadata": [private_metadata],
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    transaction = checkout_with_items.payment_transactions.first()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    assert data["actions"] == available_actions
    assert data["status"] == status
    assert data["reference"] == reference
    assert data["authorizedAmount"]["amount"] == authorized_value
    assert available_actions == list(map(str.upper, transaction.available_actions))
    assert status == transaction.status
    assert reference == transaction.reference
    assert authorized_value == transaction.authorized_value
    assert transaction.metadata == {metadata["key"]: metadata["value"]}
    assert transaction.private_metadata == {
        private_metadata["key"]: private_metadata["value"]
    }
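# The four money fields go through one code path, so the amount tests are
# parametrized over (GraphQL field, model column) pairs.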
@pytest.mark.parametrize(
    "amount_field_name, amount_db_field",
    [
        ("amountAuthorized", "authorized_value"),
        ("amountCharged", "charged_value"),
        ("amountVoided", "voided_value"),
        ("amountRefunded", "refunded_value"),
    ],
)
def test_transaction_create_calculate_amount(
    amount_field_name,
    amount_db_field,
    order_with_lines,
    permission_manage_payments,
    app_api_client,
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    expected_value = Decimal("10")
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": [],
            amount_field_name: {
                "amount": expected_value,
                "currency": "USD",
            },
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    transaction = TransactionItem.objects.first()
    get_graphql_content(response)
    assert getattr(transaction, amount_db_field) == expected_value


def test_transaction_create_multiple_amounts_provided(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    charged_value = Decimal("11")
    refunded_value = Decimal("12")
    voided_value = Decimal("13")
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "amountCharged": {
                "amount": charged_value,
                "currency": "USD",
            },
            "amountRefunded": {
                "amount": refunded_value,
                "currency": "USD",
            },
            "amountVoided": {
                "amount": voided_value,
                "currency": "USD",
            },
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    transaction = TransactionItem.objects.first()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    assert data["actions"] == available_actions
    assert data["status"] == status
    assert data["reference"] == reference
    assert data["authorizedAmount"]["amount"] == authorized_value
    assert data["chargedAmount"]["amount"] == charged_value
    assert data["refundedAmount"]["amount"] == refunded_value
    assert data["voidedAmount"]["amount"] == voided_value
    assert transaction.authorized_value == authorized_value
    assert transaction.charged_value == charged_value
    assert transaction.voided_value == voided_value
    assert transaction.refunded_value == refunded_value
def test_transaction_create_create_event_for_order(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    transaction_status = "PENDING"
    transaction_reference = "transaction reference"
    transaction_name = "Processing transaction"
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
        },
        "transaction_event": {
            "status": transaction_status,
            "reference": transaction_reference,
            "name": transaction_name,
        },
    }

    # when
    app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    event = order_with_lines.events.first()
    assert event.type == OrderEvents.TRANSACTION_EVENT
    assert event.parameters == {
        "message": transaction_name,
        "reference": transaction_reference,
        "status": transaction_status.lower(),
    }


def test_transaction_create_permission_denied_for_staff(
    order_with_lines, staff_api_client, permission_manage_payments
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    metadata = {"key": "test-1", "value": "123"}
    private_metadata = {"key": "test-2", "value": "321"}
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "metadata": [metadata],
            "privateMetadata": [private_metadata],
        },
    }

    # when
    response = staff_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    assert_no_permission(response)


def test_transaction_create_missing_app_permission(order_with_lines, app_api_client):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = [
        TransactionActionEnum.CHARGE.name,
        TransactionActionEnum.VOID.name,
    ]
    authorized_value = Decimal("10")
    metadata = {"key": "test-1", "value": "123"}
    private_metadata = {"key": "test-2", "value": "321"}
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "metadata": [metadata],
            "privateMetadata": [private_metadata],
        },
    }

    # when
    response = app_api_client.post_graphql(MUTATION_TRANSACTION_CREATE, variables)

    # then
    assert_no_permission(response)
@pytest.mark.parametrize(
    "amount_field_name, amount_db_field",
    [
        ("amountAuthorized", "authorized_value"),
        ("amountCharged", "charged_value"),
        ("amountVoided", "voided_value"),
        ("amountRefunded", "refunded_value"),
    ],
)
def test_transaction_create_incorrect_currency(
    amount_field_name,
    amount_db_field,
    order_with_lines,
    permission_manage_payments,
    app_api_client,
):
    # given
    status = "Authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    expected_value = Decimal("10")
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": [],
            amount_field_name: {
                "amount": expected_value,
                "currency": "PLN",
            },
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]
    assert data["errors"][0]["field"] == amount_field_name
    assert (
        data["errors"][0]["code"] == TransactionCreateErrorCode.INCORRECT_CURRENCY.name
    )


def test_creates_transaction_event_for_order(
    order_with_lines, permission_manage_payments, app_api_client
):
    # given
    status = "Failed authorized for 10$"
    type = "Credit Card"
    reference = "PSP reference - 123"
    available_actions = []
    authorized_value = Decimal("0")
    metadata = {"key": "test-1", "value": "123"}
    private_metadata = {"key": "test-2", "value": "321"}
    event_status = TransactionStatus.FAILURE
    event_reference = "PSP-ref"
    event_name = "Failed authorization"
    variables = {
        "id": graphene.Node.to_global_id("Order", order_with_lines.pk),
        "transaction": {
            "status": status,
            "type": type,
            "reference": reference,
            "availableActions": available_actions,
            "amountAuthorized": {
                "amount": authorized_value,
                "currency": "USD",
            },
            "metadata": [metadata],
            "privateMetadata": [private_metadata],
        },
        "transaction_event": {
            "status": TransactionStatusEnum.FAILURE.name,
            "reference": event_reference,
            "name": event_name,
        },
    }

    # when
    response = app_api_client.post_graphql(
        MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
    )

    # then
    transaction = order_with_lines.payment_transactions.first()
    content = get_graphql_content(response)
    data = content["data"]["transactionCreate"]["transaction"]
    events_data = data["events"]
    assert len(events_data) == 1
    event_data = events_data[0]
    assert event_data["name"] == event_name
    assert event_data["status"] == TransactionStatusEnum.FAILURE.name
    assert event_data["reference"] == event_reference

    assert transaction.events.count() == 1
    event = transaction.events.first()
    assert event.name == event_name
    assert event.status == event_status
    assert event.reference == event_reference
def test_creates_transaction_event_for_checkout(
checkout_with_items, permission_manage_payments, app_api_client
):
# given
status = "Authorized for 10$"
type = "Credit Card"
reference = "PSP reference - 123"
available_actions = [
TransactionActionEnum.CHARGE.name,
TransactionActionEnum.VOID.name,
]
authorized_value = Decimal("10")
metadata = {"key": "test-1", "value": "123"}
private_metadata = {"key": "test-2", "value": "321"}
event_status = TransactionStatus.FAILURE
event_reference = "PSP-ref"
event_name = "Failed authorization"
variables = {
"id": graphene.Node.to_global_id("Checkout", checkout_with_items.pk),
"transaction": {
"status": status,
"type": type,
"reference": reference,
"availableActions": available_actions,
"amountAuthorized": {
"amount": authorized_value,
"currency": "USD",
},
"metadata": [metadata],
"privateMetadata": [private_metadata],
},
"transaction_event": {
"status": TransactionStatusEnum.FAILURE.name,
"reference": event_reference,
"name": event_name,
},
}
# when
response = app_api_client.post_graphql(
MUTATION_TRANSACTION_CREATE, variables, permissions=[permission_manage_payments]
)
# then
transaction = checkout_with_items.payment_transactions.first()
content = get_graphql_content(response)
data = content["data"]["transactionCreate"]["transaction"]
events_data = data["events"]
assert len(events_data) == 1
event_data = events_data[0]
assert event_data["name"] == event_name
assert event_data["status"] == TransactionStatusEnum.FAILURE.name
assert event_data["reference"] == event_reference
assert transaction.events.count() == 1
event = transaction.events.first()
assert event.name == event_name
assert event.status == event_status
assert event.reference == event_reference
| 31.15859 | 88 | 0.619916 | 1,931 | 21,219 | 6.525634 | 0.076644 | 0.044044 | 0.038886 | 0.019602 | 0.829696 | 0.812951 | 0.78716 | 0.777557 | 0.770256 | 0.759464 | 0 | 0.009635 | 0.271219 | 21,219 | 680 | 89 | 31.204412 | 0.805225 | 0.009001 | 0 | 0.68662 | 0 | 0 | 0.204135 | 0.002144 | 0 | 0 | 0 | 0 | 0.107394 | 1 | 0.021127 | false | 0 | 0.017606 | 0 | 0.038732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a17fc07025061f8ac8765489243564db3ccf420 | 117 | py | Python | src/yolo_test.py | odigous-labs/video-summarization | c125bf9fa1016d76680d5e9389e4bdb0f83bc4fb | [
"MIT"
] | 1 | 2019-03-05T06:00:38.000Z | 2019-03-05T06:00:38.000Z | src/yolo_test.py | odigous-labs/video-summarization | c125bf9fa1016d76680d5e9389e4bdb0f83bc4fb | [
"MIT"
] | 2 | 2019-03-02T05:12:59.000Z | 2019-09-26T17:03:56.000Z | src/yolo_test.py | odigous-labs/video-summarization | c125bf9fa1016d76680d5e9389e4bdb0f83bc4fb | [
"MIT"
] | null | null | null | from yolo.model import YOLO
YOLO().predict("D:\Campus\FYP\\video-summarization\src\\test_data\generated_frames")
| 16.714286 | 84 | 0.769231 | 17 | 117 | 5.176471 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 117 | 6 | 85 | 19.5 | 0.814815 | 0 | 0 | 0 | 1 | 0 | 0.578947 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5a4248afa2bd0de8018657ff3447a40145e385c0 | 31 | py | Python | lldl/__init__.py | svenschultze/Lidar-Localization-DL | b30821767d1edacb9ff28f6575d6b20ac674d046 | [
"MIT"
] | 2 | 2021-10-24T01:05:22.000Z | 2022-01-03T10:52:45.000Z | lldl/__init__.py | svenschultze/Lidar-Localization-DL | b30821767d1edacb9ff28f6575d6b20ac674d046 | [
"MIT"
] | null | null | null | lldl/__init__.py | svenschultze/Lidar-Localization-DL | b30821767d1edacb9ff28f6575d6b20ac674d046 | [
"MIT"
] | null | null | null | from lldl import dataset, model | 31 | 31 | 0.83871 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a54624fca4655236fc65adf03e97dbb202623c4 | 174 | py | Python | amocrm_asterisk_ng/telephony/core/ami_manager/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | amocrm_asterisk_ng/telephony/core/ami_manager/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | amocrm_asterisk_ng/telephony/core/ami_manager/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | from .IAmiEventHandler import IAmiEventHandler
from .IAmiManager import IAmiManager
from .IAmiMessageConvertFunction import IAmiMessageConvertFunction
from .packets import *
| 34.8 | 66 | 0.87931 | 15 | 174 | 10.2 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091954 | 174 | 4 | 67 | 43.5 | 0.968354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a68fe5cb031ca3af5dca0fb8334d69d30fd82dc | 10,050 | py | Python | api/tests/test_views.py | gilleshenrard/ikoab_elise_restful | 51b702b2f08306841e86a75f6d864058055ed085 | [
"MIT"
] | null | null | null | api/tests/test_views.py | gilleshenrard/ikoab_elise_restful | 51b702b2f08306841e86a75f6d864058055ed085 | [
"MIT"
] | 1 | 2018-02-15T21:51:21.000Z | 2018-02-15T21:51:21.000Z | api/tests/test_views.py | gilleshenrard/ikoab_elise_restful | 51b702b2f08306841e86a75f6d864058055ed085 | [
"MIT"
] | null | null | null | import json
from rest_framework import status
from django.test import TestCase, Client
from django.core.urlresolvers import reverse, NoReverseMatch
from ..models import Person
from ..serializers import PersonSerializer
# initialize the APIClient app
client = Client()
class MethodsTest(TestCase):
"""Test module for request methods on API"""
def setUp(self):
self.jane = Person.objects.create(
firstname='Jane', lastname='Dean', country='US', email='test2@test.com', phone='+1123456789', occupation_field='Administration', occupation='Accountance', birthdate='1982-05-01', description='Smart girl')
Person.objects.create(
firstname='John', lastname='Doe', country='UK', email='test@test.com', phone='+44123456789', occupation_field='Diplomacy', occupation='Spy', birthdate='1963-05-01', description='Tall guy')
def test_invalid_methods(self):
#send delete on get_post_people
response = client.delete(reverse('get_post_people'))
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
#send put on get_post_people
response = client.put(reverse('get_post_people'))
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
#send post on get_delete_update_person
response = client.post(reverse('get_delete_update_person', kwargs={'fstname': self.jane.firstname}))
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
class GetAllPeopleTest(TestCase):
""" Test module for GET all people API """
def setUp(self):
Person.objects.create(
firstname='John', lastname='Doe', country='UK', email='test@test.com', phone='+44123456789', occupation_field='Diplomacy', occupation='Spy', birthdate='1963-05-01', description='Tall guy')
Person.objects.create(
firstname='Jane', lastname='Dean', country='US', email='test2@test.com', phone='+1123456789', occupation_field='Administration', occupation='Accountance', birthdate='1982-05-01', description='Smart girl')
Person.objects.create(
firstname='Jack', lastname='Dull', country='FR', email='test3@test.com', phone='+33123456789', occupation_field='Maintenance', occupation='Welder', birthdate='1973-05-01', description='Cool guy')
Person.objects.create(
firstname='Jim', lastname='Dane', country='ES', email='test4@test.com', phone='+3423456789', occupation_field='IT', occupation='Developer', birthdate='1989-05-01', description='Smart guy')
def test_get_all_people(self):
# get API response
response = client.get(reverse('get_post_people'))
# get data from db
people = Person.objects.all()
serializer = PersonSerializer(people, many=True)
self.assertEqual(response.data, serializer.data)
self.assertEqual(response.status_code, status.HTTP_200_OK)
class GetSinglepersonTest(TestCase):
""" Test module for GET single person API """
def setUp(self):
self.john = Person.objects.create(
firstname='John', lastname='Doe', country='UK', email='test@test.com', phone='+44123456789', occupation_field='Diplomacy', occupation='Spy', birthdate='1963-05-01', description='Tall guy')
self.jane = Person.objects.create(
firstname='Jane', lastname='Dean', country='US', email='test2@test.com', phone='+1123456789', occupation_field='Administration', occupation='Accountance', birthdate='1982-05-01', description='Smart girl')
self.jack = Person.objects.create(
firstname='Jack', lastname='Dull', country='FR', email='test3@test.com', phone='+33123456789', occupation_field='Maintenance', occupation='Welder', birthdate='1973-05-01', description='Cool guy')
self.jim = Person.objects.create(
firstname='Jim', lastname='Dane', country='ES', email='test4@test.com', phone='+3423456789', occupation_field='IT', occupation='Developer', birthdate='1989-05-01', description='Smart guy')
def test_get_valid_single_person(self):
response = client.get(
reverse('get_delete_update_person', kwargs={'fstname': self.john.firstname}))
        person = Person.objects.get(firstname=self.john.firstname)
serializer = PersonSerializer(person)
self.assertEqual(response.data, serializer.data)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_get_invalid_single_person(self):
response = client.get(
reverse('get_delete_update_person', kwargs={'fstname': 'test'}))
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_get_invalid_single_person_url_type(self):
with self.assertRaises(NoReverseMatch):
client.get(reverse('get_delete_update_person', kwargs={'fstname': 30}))
def test_get_invalid_single_person_url_length(self):
with self.assertRaises(NoReverseMatch):
client.get(reverse('get_delete_update_person', kwargs={'fstname': "abcdefghijklmnopqrstuvwxyzabcdefg"}))
class CreateNewPersonTest(TestCase):
""" Test module for inserting a new person """
def setUp(self):
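        # one valid payload plus two invalid variants (bad first name, malformed email)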
self.valid_payload = {
'firstname':'John',
'lastname':'Doe',
'country':'UK',
'email':'test@test.com',
'phone':'+44123456789',
'occupation_field':'Diplomacy',
'occupation':'Spy',
'birthdate':'1963-05-01',
'description':'Tall guy'
}
self.invalid_firstname_payload = {
'firstname':'@@@',
'lastname':'Doe',
'country':'UK',
'email':'test@test.com',
'phone':'+44123456789',
'occupation_field':'Diplomacy',
'occupation':'Spy',
'birthdate':'1963-05-01',
'description':'Tall guy'
}
self.invalid_email_payload = {
'firstname':'John',
'lastname':'Doe',
'country':'UK',
'email':'test',
'phone':'+44123456789',
'occupation_field':'Diplomacy',
'occupation':'Spy',
'birthdate':'1963-05-01',
'description':'Tall guy'
}
def test_create_valid_person(self):
response = client.post(
reverse('get_post_people'),
data=json.dumps(self.valid_payload),
content_type='application/json'
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_create_invalid_person_firstname(self):
response = client.post(
reverse('get_post_people'),
data=json.dumps(self.invalid_firstname_payload),
content_type='application/json'
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_invalid_person_email(self):
response = client.post(
reverse('get_post_people'),
data=json.dumps(self.invalid_email_payload),
content_type='application/json'
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
class UpdateSinglePersonTest(TestCase):
""" Test module for updating an existing person record """
def setUp(self):
self.john = Person.objects.create(
firstname='John', lastname='Doe', country='UK', email='test@test.com', phone='+44123456789', occupation_field='Diplomacy', occupation='Spy', birthdate='1963-05-01', description='Tall guy')
self.valid_payload = {
'firstname':'John-John',
'lastname':'Doe2',
'country':'UK2',
'email':'test2@test.com',
'phone':'+441234567892',
'occupation_field':'Diplomacy2',
'occupation':'Spy2',
'birthdate':'1963-05-02',
'description':'Tall guy2'
}
self.invalid_firstname_payload = {
'firstname':'@@@',
'lastname':'Doe',
'country':'UK',
'email':'test@test.com',
'phone':'+44123456789',
'occupation_field':'Diplomacy',
'occupation':'Spy',
'birthdate':'1963-05-01',
'description':'Tall guy'
}
def test_valid_update_person(self):
response = client.put(
reverse('get_delete_update_person', kwargs={'fstname': self.john.firstname}),
data=json.dumps(self.valid_payload),
content_type='application/json'
)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_invalid_update_person(self):
response = client.put(
reverse('get_delete_update_person', kwargs={'fstname': self.john.firstname}),
data=json.dumps(self.invalid_firstname_payload),
content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
class DeleteSinglePersonTest(TestCase):
""" Test module for deleting an existing person record """
def setUp(self):
self.john = Person.objects.create(
firstname='John', lastname='Doe', country='UK', email='test@test.com', phone='+44123456789', occupation_field='Diplomacy', occupation='Spy', birthdate='1963-05-01', description='Tall guy')
def test_valid_delete_person(self):
response = client.delete(
reverse('get_delete_update_person', kwargs={'fstname': self.john.firstname}))
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_invalid_delete_person(self):
response = client.delete(
reverse('get_delete_update_person', kwargs={'fstname': 'test'}))
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_invalid_delete_person_url(self):
with self.assertRaises(NoReverseMatch):
client.delete(reverse('get_delete_update_person', kwargs={'fstname': 30}))
| 46.313364 | 216 | 0.645373 | 1,093 | 10,050 | 5.758463 | 0.137237 | 0.040515 | 0.030505 | 0.059898 | 0.838259 | 0.799015 | 0.781061 | 0.772005 | 0.76279 | 0.745472 | 0 | 0.047686 | 0.217512 | 10,050 | 217 | 217 | 46.313364 | 0.75267 | 0.041294 | 0 | 0.635838 | 0 | 0 | 0.215946 | 0.028452 | 0 | 0 | 0 | 0 | 0.104046 | 1 | 0.115607 | false | 0 | 0.034682 | 0 | 0.184971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce5c37d9432bb1b5025c30debc484ad5093fe6ab | 24 | py | Python | extdocker/helpers/__init__.py | dtwardow/docker-py-helpers | 4d2592ca8103e940ff76ae91ccef8a937ce20ca1 | [
"MIT"
] | null | null | null | extdocker/helpers/__init__.py | dtwardow/docker-py-helpers | 4d2592ca8103e940ff76ae91ccef8a937ce20ca1 | [
"MIT"
] | null | null | null | extdocker/helpers/__init__.py | dtwardow/docker-py-helpers | 4d2592ca8103e940ff76ae91ccef8a937ce20ca1 | [
"MIT"
] | null | null | null | from .shhelper import *
| 12 | 23 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cea3870d00f18482501917a447cab1f077fabea7 | 119 | py | Python | tests/sound.py | shiburizu/Eddienput | b2ce192090ba658641383af84c7f3e09920a7d83 | [
"MIT"
] | 91 | 2021-04-05T21:48:35.000Z | 2022-03-09T20:45:12.000Z | tests/sound.py | shiburizu/Eddienput | b2ce192090ba658641383af84c7f3e09920a7d83 | [
"MIT"
] | 7 | 2021-04-08T04:47:29.000Z | 2021-12-09T18:30:38.000Z | tests/sound.py | shiburizu/Eddienput | b2ce192090ba658641383af84c7f3e09920a7d83 | [
"MIT"
] | 10 | 2021-04-06T10:35:24.000Z | 2022-02-07T14:23:14.000Z | from playsound import playsound
playsound('zapsplat_technology_electronic_device_high_pitched_beep_tone_003_54697.mp3') | 59.5 | 87 | 0.92437 | 16 | 119 | 6.3125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078261 | 0.033613 | 119 | 2 | 87 | 59.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.616667 | 0.616667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0c97f2a4c979c2b7a4591208dba162c713f5161e | 176 | py | Python | parsley/tests/admin.py | blueyed/Django-parsley | 0f82e3475ab7b9caae023627c6a2987103b76cd2 | [
"BSD-3-Clause"
] | 208 | 2015-01-05T16:53:10.000Z | 2022-03-13T21:55:32.000Z | parsley/tests/admin.py | blueyed/Django-parsley | 0f82e3475ab7b9caae023627c6a2987103b76cd2 | [
"BSD-3-Clause"
] | 46 | 2015-01-09T08:17:04.000Z | 2022-02-16T21:06:59.000Z | parsley/tests/admin.py | blueyed/Django-parsley | 0f82e3475ab7b9caae023627c6a2987103b76cd2 | [
"BSD-3-Clause"
] | 58 | 2015-01-09T14:16:40.000Z | 2022-01-30T22:10:34.000Z | from django.contrib import admin
from parsley.mixins import ParsleyAdminMixin
from .models import Student
class StudentAdmin(ParsleyAdminMixin, admin.ModelAdmin):
pass
| 17.6 | 56 | 0.818182 | 20 | 176 | 7.2 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 176 | 9 | 57 | 19.555556 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0b90a8101c4fc852ae90395617595a6c01fce3c3 | 115 | py | Python | scripts/cleaning/process.py | jim-schwoebel/voiceome | 9af84a8568a7a0630992ec56be03521ff9547c68 | [
"Apache-2.0"
] | 11 | 2021-08-17T00:46:38.000Z | 2022-02-18T20:29:11.000Z | scripts/cleaning/process.py | jim-schwoebel/voiceome | 9af84a8568a7a0630992ec56be03521ff9547c68 | [
"Apache-2.0"
] | 1 | 2021-06-16T20:56:42.000Z | 2021-06-16T20:56:42.000Z | scripts/cleaning/process.py | jim-schwoebel/voiceome | 9af84a8568a7a0630992ec56be03521ff9547c68 | [
"Apache-2.0"
] | 1 | 2021-08-31T00:42:01.000Z | 2021-08-31T00:42:01.000Z | import os
os.system('python3 copy2.py')
os.system('python3 edit_json.py')
os.system('python3 featurize_data2.py') | 23 | 39 | 0.765217 | 19 | 115 | 4.526316 | 0.526316 | 0.27907 | 0.523256 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04717 | 0.078261 | 115 | 5 | 39 | 23 | 0.764151 | 0 | 0 | 0 | 0 | 0 | 0.534483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b9ce430eb42a89e217035da0f54748837cbd9c4 | 67 | py | Python | AtCoder/ABC/000-159/ABC137_A.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | AtCoder/ABC/000-159/ABC137_A.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | AtCoder/ABC/000-159/ABC137_A.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | A, B = [int(n) for n in input().split()]
print(max(A+B, A-B, A*B))
| 22.333333 | 40 | 0.537313 | 17 | 67 | 2.117647 | 0.588235 | 0.222222 | 0.166667 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164179 | 67 | 2 | 41 | 33.5 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f0622214cb22b5d274669b48e706e71be10e8122 | 25 | py | Python | example2/polls/tests/__init__.py | beezz/django-admin2 | 4aec1a3836011cd46e5eb8b6375590bf5a76c044 | [
"BSD-3-Clause"
] | 1 | 2015-04-30T13:34:03.000Z | 2015-04-30T13:34:03.000Z | example2/polls/tests/__init__.py | taxido/django-admin2 | 6a6b3d5f790b8289b0dd0f9194d80799af8804dc | [
"BSD-3-Clause"
] | 1 | 2021-03-19T23:57:09.000Z | 2021-03-19T23:57:09.000Z | example2/polls/tests/__init__.py | RyanBalfanz/django-admin2 | e7f0611eea22370bb3418e25e9cd10ddbac4fd6d | [
"BSD-3-Clause"
] | null | null | null | from test_views import *
| 12.5 | 24 | 0.8 | 4 | 25 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f06de4e50517aec47aa61be5d4e6ca6d2123db60 | 95 | py | Python | HW8/Olenka_M/double_char.py | kolyasalubov/Lv-677.PythonCore | c9f9107c734a61e398154a90b8a3e249276c2704 | [
"MIT"
] | null | null | null | HW8/Olenka_M/double_char.py | kolyasalubov/Lv-677.PythonCore | c9f9107c734a61e398154a90b8a3e249276c2704 | [
"MIT"
] | null | null | null | HW8/Olenka_M/double_char.py | kolyasalubov/Lv-677.PythonCore | c9f9107c734a61e398154a90b8a3e249276c2704 | [
"MIT"
] | 6 | 2022-02-22T22:30:49.000Z | 2022-03-28T12:51:19.000Z | def double_char(s: str) -> str:
return "".join(l*2 for l in s)
print(double_char("Hello")) | 23.75 | 34 | 0.642105 | 18 | 95 | 3.277778 | 0.722222 | 0.338983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.168421 | 95 | 4 | 35 | 23.75 | 0.734177 | 0 | 0 | 0 | 0 | 0 | 0.052083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f07ac9f7450a99e054a6e735ba73e5935ee2400e | 158 | py | Python | django/contrib/formtools/exceptions.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | 285 | 2019-12-23T09:50:21.000Z | 2021-12-08T09:08:49.000Z | django/contrib/formtools/exceptions.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | 18 | 2015-01-14T07:51:48.000Z | 2021-10-14T01:19:26.000Z | django/contrib/formtools/exceptions.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | 70 | 2015-01-01T00:33:24.000Z | 2021-12-10T03:43:07.000Z | from django.core.exceptions import SuspiciousOperation
class WizardViewCookieModified(SuspiciousOperation):
"""Signature of cookie modified"""
pass
| 22.571429 | 54 | 0.797468 | 14 | 158 | 9 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132911 | 158 | 6 | 55 | 26.333333 | 0.919708 | 0.177215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b2ba15167c8f36e6b5c0c60bbcb1100d23786c80 | 34 | py | Python | uniflex_mininet/__init__.py | uniflex/uniflex-mininet | bcf06a1b0b8daeb71ff1cbaf373c5aa883a47f27 | [
"MIT"
] | null | null | null | uniflex_mininet/__init__.py | uniflex/uniflex-mininet | bcf06a1b0b8daeb71ff1cbaf373c5aa883a47f27 | [
"MIT"
] | null | null | null | uniflex_mininet/__init__.py | uniflex/uniflex-mininet | bcf06a1b0b8daeb71ff1cbaf373c5aa883a47f27 | [
"MIT"
] | null | null | null | from .uniflex_mn_wrapper import *
| 17 | 33 | 0.823529 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b2c0d4f77f940f9074096fe7aed9b9f65b813d16 | 24 | py | Python | swarm_ws/devel/lib/python2.7/dist-packages/swarm_center/srv/__init__.py | Wei-Fan/SJTU-swarm | 95b93fa020a5168f150c6430cbc25884e03c22db | [
"MIT"
] | 2 | 2019-03-23T03:03:13.000Z | 2019-10-07T08:21:46.000Z | swarm_ws/devel/lib/python2.7/dist-packages/swarm_center/srv/__init__.py | Wei-Fan/SJTU-swarm | 95b93fa020a5168f150c6430cbc25884e03c22db | [
"MIT"
] | null | null | null | swarm_ws/devel/lib/python2.7/dist-packages/swarm_center/srv/__init__.py | Wei-Fan/SJTU-swarm | 95b93fa020a5168f150c6430cbc25884e03c22db | [
"MIT"
] | null | null | null | from ._mCPPReq import *
| 12 | 23 | 0.75 | 3 | 24 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b2f908942ffee190d6d824f8c40c2264ed3a0b0b | 1,854 | py | Python | Nicolaus.py | phy234/FirstTurtleSteps | 68e88606ee96fa26e4f0cdfcc90b3b76e1c2a005 | [
"Apache-2.0"
] | null | null | null | Nicolaus.py | phy234/FirstTurtleSteps | 68e88606ee96fa26e4f0cdfcc90b3b76e1c2a005 | [
"Apache-2.0"
] | null | null | null | Nicolaus.py | phy234/FirstTurtleSteps | 68e88606ee96fa26e4f0cdfcc90b3b76e1c2a005 | [
"Apache-2.0"
] | null | null | null | # Autor: Alexander Herbrich
# Wann: 09.02.16
# Thema: Turtle "Haus vom Nicolaus"
from turtle import *
def nicolaus():
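    """Repeatedly draw the one-stroke 'Haus vom Nicolaus' house figure."""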
    i = 3
    x = 0
pencolor("white")
lt(180)
fd(220)
rt(90)
fd(60)
rt(90)
pencolor("red")
lt(90)
fd(140)
lt(30)
fd(140)
lt(120)
fd(140)
lt(120)
fd(140)
lt(225)
fd(198)
lt(135)
fd(140)
lt(135)
fd(198)
lt(135)
fd(140)
lt(90)
fd(140)
    while True:
x += 1
if (i - x == 0):
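            # every third house: reposition with the white (invisible) pen, then draw another red house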
pencolor("white")
rt(90)
fd(20)
rt(90)
fd(560)
lt(90)
fd(280)
lt(90)
pencolor("red")
lt(90)
fd(140)
lt(30)
fd(140)
lt(120)
fd(140)
lt(120)
fd(140)
lt(225)
fd(198)
lt(135)
fd(140)
lt(135)
fd(198)
lt(135)
fd(140)
lt(90)
fd(140)
x -= 3
else:
pencolor("green")
lt(90)
fd(140)
rt(30)
fd(140)
rt(120)
fd(140)
rt(120)
fd(140)
rt(225)
fd(198)
rt(135)
fd(140)
rt(135)
fd(198)
rt(135)
fd(140)
lt(90)
pencolor("red")
lt(90)
fd(140)
rt(30)
fd(140)
rt(120)
fd(140)
rt(120)
fd(140)
rt(225)
fd(198)
rt(135)
fd(140)
rt(135)
fd(198)
rt(135)
fd(140)
lt(90)
nicolaus()
| 17.490566 | 35 | 0.313376 | 212 | 1,854 | 2.740566 | 0.207547 | 0.223752 | 0.168675 | 0.092943 | 0.648881 | 0.648881 | 0.648881 | 0.648881 | 0.626506 | 0.626506 | 0 | 0.297264 | 0.566343 | 1,854 | 105 | 36 | 17.657143 | 0.425373 | 0.039914 | 0 | 0.818182 | 0 | 0 | 0.013514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010101 | false | 0 | 0.010101 | 0 | 0.020202 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
650981bda71ea16184973a1b0ffe8da40c9e3ac4 | 45 | py | Python | link_tools/tests/__init__.py | davpayne/satcom-linkbudgets | 86ed441eb762aabb5608098f7b26163b6c80f1ea | [
"MIT"
] | null | null | null | link_tools/tests/__init__.py | davpayne/satcom-linkbudgets | 86ed441eb762aabb5608098f7b26163b6c80f1ea | [
"MIT"
] | null | null | null | link_tools/tests/__init__.py | davpayne/satcom-linkbudgets | 86ed441eb762aabb5608098f7b26163b6c80f1ea | [
"MIT"
] | null | null | null | from .test_conversions import TestERPConvert
| 22.5 | 44 | 0.888889 | 5 | 45 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
652bdb415e9ebbe981a95f8c5bc3367c44c61680 | 11,160 | py | Python | _apicheck/test/rules/test_rules_spec.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | null | null | null | _apicheck/test/rules/test_rules_spec.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | 3 | 2022-02-07T03:37:37.000Z | 2022-03-02T03:38:13.000Z | _apicheck/test/rules/test_rules_spec.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | 1 | 2021-07-18T15:01:22.000Z | 2021-07-18T15:01:22.000Z | import os
import json
import pytest
from apicheck.core.generator import AbsentValue
from apicheck.core.rules import rules_processor
def _test_ruleset(rules, res_in, expected):
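    """Apply the rule set to res_in and check that all expected keys/values are present."""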
proc = rules_processor(rules)
res = proc(res_in)
assert res is not None
    if not isinstance(expected, dict):
        return None
for k, v in expected.items():
assert k in res
assert res[k] == v
def test_no_rules():
req = {}
_test_ruleset(None, req, req)
_test_ruleset({}, req, req)
def test_endpoint_not_found():
req = {}
rules = {
"/my/fine/endpoint": {}
}
_test_ruleset(rules, req, req)
def test_path_params():
res_in = {
"method": "post",
"path": "/my/entity/33456979458037",
"headers": {}
}
rules = {
"/by/entity/{id}": {
"pathParams": {
"id": "a09941c5a-51fe-4379-b6e3-e91b9788b4fb"
}
}
}
res_out = {
"method": "post",
"path": "/my/entity/a09941c5a-51fe-4379-b6e3-e91b9788b4fb",
"headers": {}
}
_test_ruleset(rules, res_in, res_out)
def test_query_params():
res_in = {
"method": "post",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
}
rules = {
"/my/great/endpoint": {
"queryParams": {
"id": "10"
}
}
}
res_out = {
"method": "post",
"path": "/my/great/endpoint?id=10",
"headers": {}
}
_test_ruleset(rules, res_in, res_out)
def test_query_params_generated():
res_in = {
"method": "post",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
}
rules = {
"/my/great/endpoint": {
"queryParams": {
"id": {
"type": "dictionary",
"values": [
"66"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/my/great/endpoint?id=66",
"headers": {}
}
_test_ruleset(rules, res_in, res_out)
def test_body():
res_in = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"first": "hello",
"then": "loren ipsum"
}
}
rules = {
"/my/great/endpoint": {
"body": {
"then": "world"
}
}
}
res_out = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"first": "hello",
"then": "world"
}
}
_test_ruleset(rules, res_in, res_out)
def test_body_generator():
res_in = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"first": "hello",
"then": "loren ipsum"
}
}
rules = {
"/my/great/endpoint": {
"body": {
"then": {
"type": "dictionary",
"values": [
"world"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"first": "hello",
"then": "world"
}
}
_test_ruleset(rules, res_in, res_out)
def test_method_filter():
res_in_s = [
{
"method": "get",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "post",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "put",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "delete",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
}
]
rules = {
"/my/great/endpoint": {
"queryParams": {
"id": {
"type": "dictionary",
"values": [
"66"
]
}
}
}
}
res_out_s = [
{
"method": "get",
"path": "/my/great/endpoint?id=66",
"headers": {}
},
{
"method": "post",
"path": "/my/great/endpoint?id=66",
"headers": {}
},
{
"method": "put",
"path": "/my/great/endpoint?id=66",
"headers": {}
},
{
"method": "delete",
"path": "/my/great/endpoint?id=66",
"headers": {}
}
]
for res_in, res_out in zip(res_in_s, res_out_s):
_test_ruleset(rules, res_in, res_out)
def test_method_get_filter():
res_in_s = [
{
"method": "get",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "post",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "put",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "delete",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
}
]
rules = {
"/my/great/endpoint": {
"method": "get",
"queryParams": {
"id": {
"type": "dictionary",
"values": [
"66"
]
}
}
}
}
res_out_s = [
{
"method": "get",
"path": "/my/great/endpoint?id=66",
"headers": {}
},
{
"method": "post",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "put",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
},
{
"method": "delete",
"path": "/my/great/endpoint?id=109497203948",
"headers": {}
}
]
for res_in, res_out in zip(res_in_s, res_out_s):
_test_ruleset(rules, res_in, res_out)
def test_override_body():
res_in = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"first": "hello",
"then": "loren ipsum"
}
}
rules = {
"/my/great/endpoint": {
"override": [
"body"
],
"body": {
"then": {
"type": "dictionary",
"values": [
"world"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/my/great/endpoint",
"headers": {},
"body": {
"then": "world"
}
}
_test_ruleset(rules, res_in, res_out)
def test_override_query_params():
res_in = {
"method": "get",
"path": "/my/great/endpoint?cool=no&id=33",
"headers": {},
}
rules = {
"/my/great/endpoint": {
"override": [
"queryParams"
],
"queryParams": {
"id": 21
}
}
}
res_out = {
"method": "get",
"path": "/my/great/endpoint?id=21",
"headers": {}
}
_test_ruleset(rules, res_in, res_out)
def test_custom_policy():
res_in = {
"method": "post",
"path": "/linode/instances/3850272634059/disks",
"headers": {},
"body": {
"path": "/tmp/example",
"stackscript_data": AbsentValue("No properties available")
}
}
rules = {
"/linode/instances/{linodeId}/disks": {
"pathParams": {
"linodeId": 500
},
"body": {
"stackscript_data": {
"type": "dictionary",
"values": [
"A"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {},
"body": {
"path": "/tmp/example",
"stackscript_data": "A"
}
}
_test_ruleset(rules, res_in, res_out)
def test_header_rule():
res_in = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "crazy value"
},
"body": {
"path": "/tmp/example"
}
}
rules = {
"/linode/instances/{linodeId}/disks": {
"headers": {
"SOME_HEADER": "correct"
}
}
}
res_out = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "correct"
},
"body": {
"path": "/tmp/example"
}
}
_test_ruleset(rules, res_in, res_out)
def test_header_generator_rule():
res_in = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "crazy value"
},
"body": {
"path": "/tmp/example"
}
}
rules = {
"/linode/instances/{linodeId}/disks": {
"headers": {
"SOME_HEADER": {
"type": "dictionary",
"values": [
"correct"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "correct"
},
"body": {
"path": "/tmp/example"
}
}
_test_ruleset(rules, res_in, res_out)
def test_header_override_rule():
res_in = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "crazy value",
"BORING": "header"
},
"body": {
"path": "/tmp/example"
}
}
rules = {
"/linode/instances/{linodeId}/disks": {
"override": [
"headers"
],
"headers": {
"SOME_HEADER": {
"type": "dictionary",
"values": [
"correct"
]
}
}
}
}
res_out = {
"method": "post",
"path": "/linode/instances/500/disks",
"headers": {
"SOME_HEADER": "correct"
},
"body": {
"path": "/tmp/example"
}
}
_test_ruleset(rules, res_in, res_out)
| 22.729124 | 70 | 0.389695 | 875 | 11,160 | 4.786286 | 0.116571 | 0.060172 | 0.12894 | 0.12703 | 0.83787 | 0.792264 | 0.780325 | 0.758118 | 0.722302 | 0.691261 | 0 | 0.045107 | 0.449731 | 11,160 | 490 | 71 | 22.77551 | 0.636867 | 0 | 0 | 0.587719 | 0 | 0 | 0.283961 | 0.101971 | 0 | 0 | 0 | 0 | 0.006579 | 1 | 0.035088 | false | 0 | 0.010965 | 0 | 0.048246 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65480e33102b390e6ef66a24bffe37a915b66f1b | 40 | py | Python | libs/__init__.py | IronWolf-K/nonebot_plugin_fr24 | f3752882598de54f41cd9b27456dd3b6f88971e2 | [
"MIT"
] | null | null | null | libs/__init__.py | IronWolf-K/nonebot_plugin_fr24 | f3752882598de54f41cd9b27456dd3b6f88971e2 | [
"MIT"
] | null | null | null | libs/__init__.py | IronWolf-K/nonebot_plugin_fr24 | f3752882598de54f41cd9b27456dd3b6f88971e2 | [
"MIT"
] | null | null | null | from . import info
from . import request | 20 | 21 | 0.775 | 6 | 40 | 5.166667 | 0.666667 | 0.645161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 2 | 21 | 20 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
33062dd6fc407378417f929a8a5860f840764ea5 | 488 | py | Python | resources/lib/soundcloud/api_interface.py | mr-pineapple-nz/kodi-addon-soundcloud | bea6161cc4e603260084755a845e3338091b0fec | [
"MIT"
] | 21 | 2019-04-13T12:06:19.000Z | 2022-03-15T00:04:43.000Z | resources/lib/soundcloud/api_interface.py | mr-pineapple-nz/kodi-addon-soundcloud | bea6161cc4e603260084755a845e3338091b0fec | [
"MIT"
] | 26 | 2019-04-20T17:13:27.000Z | 2022-03-15T09:54:36.000Z | resources/lib/soundcloud/api_interface.py | mr-pineapple-nz/kodi-addon-soundcloud | bea6161cc4e603260084755a845e3338091b0fec | [
"MIT"
] | 9 | 2019-04-21T10:42:52.000Z | 2021-06-16T05:27:27.000Z | from abc import ABCMeta, abstractmethod
class ApiInterface(metaclass=ABCMeta):
@abstractmethod
def search(self, query, kind): pass
@abstractmethod
def charts(self, filters): pass
@abstractmethod
def call(self, url): pass
@abstractmethod
def discover(self, selection): pass
@abstractmethod
def resolve_id(self, id): pass
@abstractmethod
def resolve_url(self, url): pass
@abstractmethod
def resolve_media_url(self, url): pass
| 19.52 | 42 | 0.696721 | 56 | 488 | 6 | 0.410714 | 0.354167 | 0.375 | 0.25 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219262 | 488 | 24 | 43 | 20.333333 | 0.88189 | 0 | 0 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0.4375 | 0.0625 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
334b01560693fa5c72fabd9d7ae310d352f626cb | 99 | py | Python | bidsify/__init__.py | NILAB-UvA/bidsify | f7103ecf987e491499a52a55b435d1771a917b36 | [
"BSD-3-Clause"
] | 11 | 2019-07-22T12:46:00.000Z | 2021-09-01T08:56:41.000Z | bidsify/__init__.py | lukassnoek/BidsConverter | 95db2a19178375ffb09b46367b174d3fe13524e6 | [
"BSD-3-Clause"
] | 12 | 2019-10-07T19:46:15.000Z | 2022-03-28T07:16:42.000Z | bidsify/__init__.py | lukassnoek/BidsConverter | 95db2a19178375ffb09b46367b174d3fe13524e6 | [
"BSD-3-Clause"
] | 2 | 2019-07-22T12:46:01.000Z | 2020-08-11T15:15:43.000Z | from __future__ import absolute_import, division, print_function
from .main import bidsify # noqa
| 33 | 64 | 0.828283 | 13 | 99 | 5.846154 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131313 | 99 | 2 | 65 | 49.5 | 0.883721 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
686474e8e1bde01295076ca7513fffd480513ded | 70 | py | Python | network/roca/modeling/retrieval_head/__init__.py | cangumeli/ROCA | 44a08859c12bf4fdac1b17d0bb6d7e0e9bc99bf3 | [
"MIT"
] | 15 | 2022-03-23T20:38:09.000Z | 2022-03-30T02:18:20.000Z | network/roca/modeling/retrieval_head/__init__.py | cangumeli/ROCA | 44a08859c12bf4fdac1b17d0bb6d7e0e9bc99bf3 | [
"MIT"
] | 1 | 2022-03-29T17:36:51.000Z | 2022-03-30T18:31:37.000Z | network/roca/modeling/retrieval_head/__init__.py | cangumeli/ROCA | 44a08859c12bf4fdac1b17d0bb6d7e0e9bc99bf3 | [
"MIT"
] | 1 | 2022-03-29T12:58:41.000Z | 2022-03-29T12:58:41.000Z | from roca.modeling.retrieval_head.retrieval_head import RetrievalHead
| 35 | 69 | 0.9 | 9 | 70 | 6.777778 | 0.777778 | 0.42623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 70 | 1 | 70 | 70 | 0.924242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
68a51b82fe0b27afdba07b327351f5f95654ca02 | 319 | py | Python | catalyst/contrib/models/cv/segmentation/decoder/__init__.py | gr33n-made/catalyst | bd413abc908ef7cbdeab42b0e805277a791e3ddb | [
"Apache-2.0"
] | 4 | 2019-12-14T07:27:09.000Z | 2021-03-23T14:34:37.000Z | catalyst/contrib/models/cv/segmentation/decoder/__init__.py | gr33n-made/catalyst | bd413abc908ef7cbdeab42b0e805277a791e3ddb | [
"Apache-2.0"
] | 1 | 2021-01-07T16:13:45.000Z | 2021-01-21T09:27:54.000Z | catalyst/contrib/models/cv/segmentation/decoder/__init__.py | gr33n-made/catalyst | bd413abc908ef7cbdeab42b0e805277a791e3ddb | [
"Apache-2.0"
] | 1 | 2020-12-02T18:42:31.000Z | 2020-12-02T18:42:31.000Z | # flake8: noqa
from catalyst.contrib.models.cv.segmentation.decoder.core import DecoderSpec
from catalyst.contrib.models.cv.segmentation.decoder.fpn import FPNDecoder
from catalyst.contrib.models.cv.segmentation.decoder.psp import PSPDecoder
from catalyst.contrib.models.cv.segmentation.decoder.unet import UNetDecoder
| 53.166667 | 76 | 0.858934 | 42 | 319 | 6.52381 | 0.428571 | 0.175182 | 0.277372 | 0.364964 | 0.671533 | 0.671533 | 0.671533 | 0 | 0 | 0 | 0 | 0.003333 | 0.059561 | 319 | 5 | 77 | 63.8 | 0.91 | 0.037618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d7ab95e9f864bb60b9147696a4c2759f566be6e5 | 968 | py | Python | iriusrisk-python-client-lib/test/test_control_command_standards.py | iriusrisk/iriusrisk-python-client-lib | 4912706cd1e5c0bc555dbc7da02fb64cbeab3b18 | [
"Apache-2.0"
] | null | null | null | iriusrisk-python-client-lib/test/test_control_command_standards.py | iriusrisk/iriusrisk-python-client-lib | 4912706cd1e5c0bc555dbc7da02fb64cbeab3b18 | [
"Apache-2.0"
] | null | null | null | iriusrisk-python-client-lib/test/test_control_command_standards.py | iriusrisk/iriusrisk-python-client-lib | 4912706cd1e5c0bc555dbc7da02fb64cbeab3b18 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
IriusRisk API
Products API # noqa: E501
OpenAPI spec version: 1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import iriusrisk_python_client_lib
from iriusrisk_python_client_lib.models.control_command_standards import ControlCommandStandards # noqa: E501
from iriusrisk_python_client_lib.rest import ApiException
class TestControlCommandStandards(unittest.TestCase):
"""ControlCommandStandards unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testControlCommandStandards(self):
"""Test ControlCommandStandards"""
# FIXME: construct object with mandatory attributes with example values
# model = iriusrisk_python_client_lib.models.control_command_standards.ControlCommandStandards() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 23.609756 | 118 | 0.737603 | 104 | 968 | 6.586538 | 0.557692 | 0.087591 | 0.122628 | 0.140146 | 0.20146 | 0.154745 | 0.154745 | 0.154745 | 0 | 0 | 0 | 0.014013 | 0.18905 | 968 | 40 | 119 | 24.2 | 0.858599 | 0.418388 | 0 | 0.214286 | 1 | 0 | 0.015355 | 0 | 0 | 0 | 0 | 0.025 | 0 | 1 | 0.214286 | false | 0.214286 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
d7cf411a5b579a786066b8740a6e1dba02f7ca3f | 285 | py | Python | src/voice/__init__.py | sribich/twitchTransFreeNext | 7a3cf83ef3d9fdd100ac7780ef37b86d477575be | [
"MIT"
] | null | null | null | src/voice/__init__.py | sribich/twitchTransFreeNext | 7a3cf83ef3d9fdd100ac7780ef37b86d477575be | [
"MIT"
] | null | null | null | src/voice/__init__.py | sribich/twitchTransFreeNext | 7a3cf83ef3d9fdd100ac7780ef37b86d477575be | [
"MIT"
] | null | null | null | class AudioSource(object):
def __init__(self):
raise NotImplementedError("method not implemented")
def __enter__(self):
raise NotImplementedError("method not implemented")
    def __exit__(self, exc_type, exc_value, traceback):
raise NotImplementedError("method not implemented") | 31.666667 | 59 | 0.701754 | 27 | 285 | 6.962963 | 0.481481 | 0.143617 | 0.446809 | 0.542553 | 0.797872 | 0.797872 | 0.542553 | 0 | 0 | 0 | 0 | 0 | 0.214035 | 285 | 9 | 60 | 31.666667 | 0.839286 | 0 | 0 | 0.428571 | 0 | 0 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d7fb7d2adfbd8af6f9103f605c99ae052dab4d07 | 147 | py | Python | comm/ntlmrelayx/servers/__init__.py | Ridter/GhostPotato | bdd395f08646ddd1eac17c0c5513e528d3703a44 | [
"MIT"
] | 53 | 2019-11-22T02:24:16.000Z | 2022-01-23T21:48:37.000Z | comm/ntlmrelayx/servers/__init__.py | Ridter/GhostPotato | bdd395f08646ddd1eac17c0c5513e528d3703a44 | [
"MIT"
] | null | null | null | comm/ntlmrelayx/servers/__init__.py | Ridter/GhostPotato | bdd395f08646ddd1eac17c0c5513e528d3703a44 | [
"MIT"
] | 8 | 2019-11-22T17:44:09.000Z | 2022-01-10T00:57:56.000Z | from comm.ntlmrelayx.servers.httprelayserver import HTTPRelayServer
from impacket.examples.ntlmrelayx.servers.smbrelayserver import SMBRelayServer
| 49 | 78 | 0.897959 | 15 | 147 | 8.8 | 0.6 | 0.257576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054422 | 147 | 2 | 79 | 73.5 | 0.94964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cc04658c5ac6f0041979c0f6017c797f93dd7fec | 146 | py | Python | src/deep_dialog/usersims/__init__.py | AtmaHou/UserSimulator | 0322b7c725f771ea1f5b517c72e15e3436cb8d22 | [
"MIT"
] | 3 | 2020-12-16T08:19:00.000Z | 2022-01-11T05:40:15.000Z | src/deep_dialog/usersims/__init__.py | AtmaHou/UserSimulator | 0322b7c725f771ea1f5b517c72e15e3436cb8d22 | [
"MIT"
] | null | null | null | src/deep_dialog/usersims/__init__.py | AtmaHou/UserSimulator | 0322b7c725f771ea1f5b517c72e15e3436cb8d22 | [
"MIT"
] | 1 | 2020-12-18T00:03:12.000Z | 2020-12-18T00:03:12.000Z | from .usersim_rule import *
from .action_generation import *
from .nn_models import *
from .usersim_supervise import *
from .prepare_data import * | 29.2 | 32 | 0.80137 | 20 | 146 | 5.6 | 0.55 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130137 | 146 | 5 | 33 | 29.2 | 0.88189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bc424f4b74ecec7d67662858d3d99895cae0370 | 420 | py | Python | pytracer/reflection/bdf/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 9 | 2017-11-20T18:17:27.000Z | 2022-01-27T23:00:31.000Z | pytracer/reflection/bdf/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 4 | 2021-06-08T19:03:51.000Z | 2022-03-11T23:18:44.000Z | pytracer/reflection/bdf/__init__.py | zjiayao/pyTracer | c2b4ef299ecbdca1c519059488f7cd2438943ee4 | [
"MIT"
] | 1 | 2017-11-20T22:48:01.000Z | 2017-11-20T22:48:01.000Z | """
__init__.py
pytracer.reflection.bdf package
Model bidirectional distribution functions.
- BDF
- BSDF
Created by Jiayao on Aug 2, 2017
"""
from __future__ import absolute_import
from pytracer.reflection.bdf.bdf import *
from pytracer.reflection.bdf.measured import *
from pytracer.reflection.bdf.fresnelblend import *
from pytracer.reflection.bdf.orennayar import *
from pytracer.reflection.bdf.torrance import * | 24.705882 | 50 | 0.807143 | 54 | 420 | 6.111111 | 0.481481 | 0.327273 | 0.381818 | 0.424242 | 0.469697 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013441 | 0.114286 | 420 | 17 | 51 | 24.705882 | 0.873656 | 0.361905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bd85bcc5aa22d59d86daa6a7a7b35b502e0e0c4 | 190 | py | Python | autofit/graphical/expectation_propagation/__init__.py | rhayes777/AutoFit | f5d769755b85a6188ec1736d0d754f27321c2f06 | [
"MIT"
] | null | null | null | autofit/graphical/expectation_propagation/__init__.py | rhayes777/AutoFit | f5d769755b85a6188ec1736d0d754f27321c2f06 | [
"MIT"
] | null | null | null | autofit/graphical/expectation_propagation/__init__.py | rhayes777/AutoFit | f5d769755b85a6188ec1736d0d754f27321c2f06 | [
"MIT"
] | null | null | null | from .ep_mean_field import EPMeanField
from .history import FactorHistory, EPHistory
from .optimiser import AbstractFactorOptimiser, EPOptimiser, StochasticEPOptimiser
| 38 | 57 | 0.873684 | 20 | 190 | 8.2 | 0.65 | 0.158537 | 0.231707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 190 | 4 | 58 | 47.5 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04259fa4e6805ba6dd56a208a3906366eae48497 | 175 | py | Python | benchmark/commitlintbasic/thrift/run.py | jviotti/binary-json-size-benchmark | a515dfd05736204fb36d3571a6a6b17e5f6e4916 | [
"Apache-2.0"
] | 2 | 2022-01-14T06:09:26.000Z | 2022-02-04T02:13:03.000Z | benchmark/commitlintbasic/thrift/run.py | jviotti/binary-json-size-benchmark | a515dfd05736204fb36d3571a6a6b17e5f6e4916 | [
"Apache-2.0"
] | null | null | null | benchmark/commitlintbasic/thrift/run.py | jviotti/binary-json-size-benchmark | a515dfd05736204fb36d3571a6a6b17e5f6e4916 | [
"Apache-2.0"
] | null | null | null | def encode(json, schema):
payload = schema.Main()
payload.defaultIgnores = json['defaultIgnores']
return payload
def decode(payload):
return payload.__dict__
| 21.875 | 51 | 0.714286 | 19 | 175 | 6.368421 | 0.526316 | 0.214876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182857 | 175 | 7 | 52 | 25 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f086f94a40a68ede726fc1e89a382be80d693f80 | 148 | py | Python | tests/java/org/python/indexer/data/pkg/misc/m3.py | jeff5/jython-whinchat | 65d8e5268189f8197295ff2d91be3decb1ee0081 | [
"CNRI-Jython"
] | 577 | 2020-06-04T16:34:44.000Z | 2022-03-31T11:46:07.000Z | tests/java/org/python/indexer/data/pkg/misc/m3.py | jeff5/jython-whinchat | 65d8e5268189f8197295ff2d91be3decb1ee0081 | [
"CNRI-Jython"
] | 174 | 2015-01-08T20:37:09.000Z | 2020-06-03T16:48:59.000Z | tests/java/org/python/indexer/data/pkg/misc/m3.py | jeff5/jython-whinchat | 65d8e5268189f8197295ff2d91be3decb1ee0081 | [
"CNRI-Jython"
] | 162 | 2015-02-07T02:14:38.000Z | 2020-05-30T16:42:03.000Z | # test in python shell with execfile after loading m2
# >>> execfile("/abs/path/to/this/m3.py")
import m2
print "m2.distutils: %s" % m2.distutils
| 21.142857 | 53 | 0.702703 | 24 | 148 | 4.333333 | 0.791667 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039683 | 0.148649 | 148 | 6 | 54 | 24.666667 | 0.785714 | 0.614865 | 0 | 0 | 0 | 0 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
0b0439baa4e7e49fea88466f1147d7a61df86236 | 10,059 | py | Python | esphome/components/dsmr/sensor.py | OttoWinter/esphomeyaml | 6a85259e4d6d1b0a0f819688b8e555efcb99ecb0 | [
"MIT"
] | 249 | 2018-04-07T12:04:11.000Z | 2019-01-25T01:11:34.000Z | esphome/components/dsmr/sensor.py | OttoWinter/esphomeyaml | 6a85259e4d6d1b0a0f819688b8e555efcb99ecb0 | [
"MIT"
] | 243 | 2018-04-11T16:37:11.000Z | 2019-01-25T16:50:37.000Z | esphome/components/dsmr/sensor.py | OttoWinter/esphomeyaml | 6a85259e4d6d1b0a0f819688b8e555efcb99ecb0 | [
"MIT"
] | 40 | 2018-04-10T05:50:14.000Z | 2019-01-25T15:20:36.000Z | import esphome.codegen as cg
import esphome.config_validation as cv
from esphome.components import sensor
from esphome.const import (
CONF_ID,
DEVICE_CLASS_CURRENT,
DEVICE_CLASS_ENERGY,
DEVICE_CLASS_GAS,
DEVICE_CLASS_POWER,
DEVICE_CLASS_VOLTAGE,
STATE_CLASS_MEASUREMENT,
STATE_CLASS_TOTAL_INCREASING,
UNIT_AMPERE,
UNIT_CUBIC_METER,
UNIT_KILOWATT,
UNIT_KILOWATT_HOURS,
UNIT_KILOVOLT_AMPS_REACTIVE_HOURS,
UNIT_KILOVOLT_AMPS_REACTIVE,
UNIT_VOLT,
)
from . import Dsmr, CONF_DSMR_ID
AUTO_LOAD = ["dsmr"]
CONFIG_SCHEMA = cv.Schema(
{
cv.GenerateID(CONF_DSMR_ID): cv.use_id(Dsmr),
cv.Optional("energy_delivered_lux"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("energy_delivered_tariff1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("energy_delivered_tariff2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("energy_returned_lux"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("energy_returned_tariff1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("energy_returned_tariff2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT_HOURS,
accuracy_decimals=3,
device_class=DEVICE_CLASS_ENERGY,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("total_imported_energy"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE_HOURS,
accuracy_decimals=3,
),
cv.Optional("total_exported_energy"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE_HOURS,
accuracy_decimals=3,
),
cv.Optional("power_delivered"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_returned"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_delivered"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_returned"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("electricity_threshold"): sensor.sensor_schema(
accuracy_decimals=3,
),
cv.Optional("electricity_switch_position"): sensor.sensor_schema(
accuracy_decimals=3,
),
cv.Optional("electricity_failures"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_long_failures"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_sags_l1"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_sags_l2"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_sags_l3"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_swells_l1"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_swells_l2"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("electricity_swells_l3"): sensor.sensor_schema(
accuracy_decimals=0,
),
cv.Optional("current_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_AMPERE,
accuracy_decimals=1,
device_class=DEVICE_CLASS_CURRENT,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("current_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_AMPERE,
accuracy_decimals=1,
device_class=DEVICE_CLASS_CURRENT,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("current_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_AMPERE,
accuracy_decimals=1,
device_class=DEVICE_CLASS_CURRENT,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_delivered_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_delivered_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_delivered_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_returned_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_returned_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("power_returned_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOWATT,
accuracy_decimals=3,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_delivered_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_delivered_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_delivered_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_returned_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_returned_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("reactive_power_returned_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_KILOVOLT_AMPS_REACTIVE,
accuracy_decimals=3,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("voltage_l1"): sensor.sensor_schema(
unit_of_measurement=UNIT_VOLT,
accuracy_decimals=1,
device_class=DEVICE_CLASS_VOLTAGE,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("voltage_l2"): sensor.sensor_schema(
unit_of_measurement=UNIT_VOLT,
accuracy_decimals=1,
device_class=DEVICE_CLASS_VOLTAGE,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("voltage_l3"): sensor.sensor_schema(
unit_of_measurement=UNIT_VOLT,
accuracy_decimals=1,
device_class=DEVICE_CLASS_VOLTAGE,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional("gas_delivered"): sensor.sensor_schema(
unit_of_measurement=UNIT_CUBIC_METER,
accuracy_decimals=3,
device_class=DEVICE_CLASS_GAS,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
cv.Optional("gas_delivered_be"): sensor.sensor_schema(
unit_of_measurement=UNIT_CUBIC_METER,
accuracy_decimals=3,
device_class=DEVICE_CLASS_GAS,
state_class=STATE_CLASS_TOTAL_INCREASING,
),
}
).extend(cv.COMPONENT_SCHEMA)
async def to_code(config):
hub = await cg.get_variable(config[CONF_DSMR_ID])
sensors = []
for key, conf in config.items():
if not isinstance(conf, dict):
continue
        id_ = conf[CONF_ID]
        if id_ and id_.type == sensor.Sensor:
sens = await sensor.new_sensor(conf)
cg.add(getattr(hub, f"set_{key}")(sens))
sensors.append(f"F({key})")
if sensors:
cg.add_define(
"DSMR_SENSOR_LIST(F, sep)", cg.RawExpression(" sep ".join(sensors))
)
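# Illustrative note (the sensor keys below are examples, not fixed output): with
# two sensors configured, the generated define expands roughly to
#   DSMR_SENSOR_LIST(F, sep) -> F(energy_delivered_tariff1) sep F(voltage_l1)
# which the C++ side of the component iterates to register each sensor setter.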
| 38.688462 | 79 | 0.650263 | 1,078 | 10,059 | 5.602041 | 0.093692 | 0.102666 | 0.125186 | 0.116576 | 0.868521 | 0.854281 | 0.854281 | 0.854281 | 0.847988 | 0.81901 | 0 | 0.009548 | 0.2712 | 10,059 | 259 | 80 | 38.837838 | 0.814214 | 0 | 0 | 0.666667 | 0 | 0 | 0.086191 | 0.04752 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b0a22e24ad17b6ede40c2ff7841c181e8c59505 | 36 | py | Python | software/jetson/jetson-containers/ArduCAM_USB_Camera_Shield/RaspberryPi/Python/External_trigger_demo/arducam_config_parser/__init__.py | abstractguy/TSO_project | 1130e6fb081d1486ff15339a9757c46a927a2965 | [
"BSD-2-Clause"
] | 1 | 2021-06-06T14:12:32.000Z | 2021-06-06T14:12:32.000Z | software/jetson/jetson-containers/ArduCAM_USB_Camera_Shield/Nvidia_Jetson/Python/Streaming_demo/arducam_config_parser/__init__.py | abstractguy/TSO_project | 1130e6fb081d1486ff15339a9757c46a927a2965 | [
"BSD-2-Clause"
] | 6 | 2021-04-06T12:35:34.000Z | 2022-03-12T00:58:16.000Z | software/jetson/jetson-containers/ArduCAM_USB_Camera_Shield/ROS/arducam_usb2_ros/src/arducam_config_parser/__init__.py | abstractguy/TSO_project | 1130e6fb081d1486ff15339a9757c46a927a2965 | [
"BSD-2-Clause"
] | 2 | 2020-03-05T00:09:48.000Z | 2021-06-03T20:06:03.000Z | from .arducam_config_parser import * | 36 | 36 | 0.861111 | 5 | 36 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b15752cd1632421a915fda459b6684bbe327895 | 3,530 | py | Python | symphoni-api/tests/test_playlist_api.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | null | null | null | symphoni-api/tests/test_playlist_api.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | null | null | null | symphoni-api/tests/test_playlist_api.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | null | null | null | import requests
import unittest
import json
from set_up_party import TestData
class TestPlaylistAPI(unittest.TestCase):
def test_get_party_playlist(self):
code = TestData.code
party = requests.get('http://localhost:5000/party/'+code+'/playlist')
        assert party.json()['playlist'] != []
def test_get_invalid_playlist(self):
code = TestData.code
party = requests.get('http://localhost:5000/party/NOTVALID/playlist/')
assert 404 == party.status_code
def test_put_song_in_playlist(self):
code = TestData.code
r = requests.get("http://localhost:5000/party/"+code+"/song?track=Everlong")
        header = {'Content-Type': 'application/json'}
data = {'song': r.json()['results'][0]}
r = requests.put("http://localhost:5000/party/"+code+"/playlist", headers=header, json=data)
assert r.json()['party_data']['playlist'][0] is not None
def test_put_invalid_code(self):
code = TestData.code
r = requests.get("http://localhost:5000/party/"+code+"/song?track=Everlong")
        header = {'Content-Type': 'application/json'}
data = {'song': r.json()['results'][0]}
party = requests.put('http://localhost:5000/party/NOTVALID/playlist/',headers=header,json=data)
assert 404 == party.status_code
def test_put_invalid_json(self):
code = TestData.code
header = {"Content-Type": "application/json"}
party = requests.put('http://localhost:5000/party/'+code+'/playlist/',headers=header,json={'song': 'test'} )
assert 404 == party.status_code
def test_delete_song_in_playlist(self):
code = TestData.code
r = requests.get("http://localhost:5000/party/"+code+"/song?track=Sorry")
        header = {'Content-Type': 'application/json'}
data = {'song': r.json()['results'][0]}
r = requests.put("http://localhost:5000/party/"+code+"/playlist", headers=header, json=data)
party = requests.delete('http://localhost:5000/party/'+code+'/playlist?track_uri=spotify:track:6rAXHPd18PZ6W8m9EectzH', headers=header)
assert 200 == party.status_code
def test_delete_invalid_query(self):
code = TestData.code
party = requests.delete('http://localhost:5000/party/'+code+'/playlist/?invalid_parameter=invalidsong')
assert 404 == party.status_code
def test_delete_invalid_song_in_playlist(self):
code = TestData.code
r = requests.get("http://localhost:5000/party/"+code+"/song?track=Sorry")
data = {'song': r.json()['results'][0]}
header = {"Content-Type": "application/json"}
party = requests.put('http://localhost:5000/party/'+code+'/playlist/',headers=header,json=data)
party = requests.delete('http://localhost:5000/party/'+code+'/playlist/?track_uri=invalidsong')
assert 404 == party.status_code
def test_delete_invalid_code(self):
party = requests.delete('http://localhost:5000/party/NOTVALID/playlist/')
assert "404" in party.text
def test_get_put(self):
code = TestData.code
party = requests.get("http://localhost:5000/party/"+code+"/song?track=Everlong")
        header = {'Content-Type': 'application/json'}
data = {'song': party.json()['results'][1]}
party = requests.put("http://localhost:5000/party/"+code+"/playlist", headers=header, json=data)
party = requests.get('http://localhost:5000/party/'+code+'/playlist')
assert 200 == party.status_code
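if __name__ == "__main__":
    # Convenience entry point; assumes the Symphoni API targeted by these
    # integration tests is already listening on localhost:5000.
    unittest.main()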
| 40.574713 | 143 | 0.648725 | 432 | 3,530 | 5.194444 | 0.127315 | 0.104278 | 0.136364 | 0.176471 | 0.837344 | 0.825312 | 0.787879 | 0.746881 | 0.672014 | 0.648396 | 0 | 0.037565 | 0.185552 | 3,530 | 86 | 144 | 41.046512 | 0.742957 | 0 | 0 | 0.539683 | 0 | 0 | 0.311898 | 0.036261 | 0 | 0 | 0 | 0 | 0.15873 | 1 | 0.15873 | false | 0 | 0.063492 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9be797d4024fc48d5033b77cf5fd9e54081ee860 | 153 | py | Python | network-testing/old-2018-stream/tests_2018/conftest.py | caputomarcos/network-programmability-stream | 95a988dd71879f90f348a30ddab4f0b6071fec8b | [
"MIT"
] | 120 | 2018-08-13T14:11:36.000Z | 2022-01-31T01:13:22.000Z | network-testing/old-2018-stream/tests_2018/conftest.py | caputomarcos/network-programmability-stream | 95a988dd71879f90f348a30ddab4f0b6071fec8b | [
"MIT"
] | 5 | 2019-11-28T05:21:43.000Z | 2021-04-10T21:50:07.000Z | network-testing/old-2018-stream/tests_2018/conftest.py | caputomarcos/network-programmability-stream | 95a988dd71879f90f348a30ddab4f0b6071fec8b | [
"MIT"
] | 43 | 2018-08-21T15:49:05.000Z | 2022-02-10T08:24:54.000Z | import pytest
from nornir import InitNornir
@pytest.fixture(scope="session", autouse=True)
def nr():
return InitNornir(config_file="config.yaml")
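# Hedged usage sketch (test name and assertion are illustrative): any test in
# this directory can request the session-scoped fixture by name, e.g.
#
#   def test_inventory_not_empty(nr):
#       assert len(nr.inventory.hosts) > 0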
| 17 | 48 | 0.75817 | 20 | 153 | 5.75 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124183 | 153 | 8 | 49 | 19.125 | 0.858209 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
502d24917d069dc9e91b02cde58cc79e7e36d36e | 57 | py | Python | HelloWorld/packages/ecommerce/shipping.py | lazywithcrazyness/PythonForBeginners | 7667d16e4f0fd0f4e659e65ab7482f5a40fa5c5c | [
"MIT"
] | null | null | null | HelloWorld/packages/ecommerce/shipping.py | lazywithcrazyness/PythonForBeginners | 7667d16e4f0fd0f4e659e65ab7482f5a40fa5c5c | [
"MIT"
] | null | null | null | HelloWorld/packages/ecommerce/shipping.py | lazywithcrazyness/PythonForBeginners | 7667d16e4f0fd0f4e659e65ab7482f5a40fa5c5c | [
"MIT"
] | null | null | null | def calc_shipping_cost():
print("cal_shipping_cost")
| 19 | 30 | 0.754386 | 8 | 57 | 4.875 | 0.75 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 57 | 2 | 31 | 28.5 | 0.78 | 0 | 0 | 0 | 0 | 0 | 0.298246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
503cb44ce2cfb4bf5e115019e0ef2d3fbef28da7 | 20,981 | py | Python | image_synthesis.py | pranshu-mandal/MKID_Data_Analysis_Pipeline | 58482266d3c932972e515f28dd5a73fdf217f2f2 | [
"Apache-2.0"
] | null | null | null | image_synthesis.py | pranshu-mandal/MKID_Data_Analysis_Pipeline | 58482266d3c932972e515f28dd5a73fdf217f2f2 | [
"Apache-2.0"
] | null | null | null | image_synthesis.py | pranshu-mandal/MKID_Data_Analysis_Pipeline | 58482266d3c932972e515f28dd5a73fdf217f2f2 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from astropy.wcs import WCS
import cygrid
# self made python files
import param
import imaging
Param = param.Param.copy()
class SingleMKIDmapper():
    def map(self, data, pixel, settings):
        """Validate the settings dict, printing an error message for anything missing, then map the given pixel."""
        if settings['path'] is None:
            print('ERROR: Path to the folder is not specified')
        if settings['position_log'] is None:
            print('ERROR: position_log is not specified')
        if settings['mkids_to_exclude'] is None:
            print('No MKIDs are being excluded')
        return self.work_fits(data, settings)
def work_fits(self, data, settings):
"""
redirects the data to the respective functions.
:param data: the 1D TOD of the given pixel
:param settings: has other arguments like antenna log and stuff.
:return: single pixel cygrid datacube.
"""
path_to_folder = settings["path"]
# filename = settings["filename"]
position_log = path_to_folder + settings["position_log"]
mkids_exclude = settings["mkids_to_exclude"] # mkids to exclude
mkid_end = settings["mkids_end"]
return self.run(f=data, position_log=position_log, settings=settings)
def run(self, f, position_log, settings):
"""
adjusts the cropping of the two values. calls imaging intensity, and calls write_map function.
:param f: the 1D tod from the work_fits.
:param position_log: the antenna log file name
:param settings: has other information regarding the scan.
:return: calls the write_map function.
"""
# Cropping the transition scan values
        manual_selection = settings["position_crop"]  # the indexes of the position log cropping
        offset = settings["intensity_offset"]  # the offset used to align the intensity data with the position log
maptype = settings["map_type"]
crop = []
sampling_rate = settings["sampling_rate"]
if settings["scan_size"] == "auto":
print("scan is auto")
crop = 0
elif len(settings["scan_size"]) == 2:
crop = settings["scan_size"]
else:
            print(('\x1b[0;31;40m'
                  + 'scan size must be a list of 2 parameters, X-scanwidth and Y-scanheight in arcsec. Could not crop.' +
                  '\n Uncropped data imported'
                  '\x1b[0m'))
        print('crop:', crop)
ra, dec, index_cropped = imaging.position_import(antlog=position_log, maptype=maptype,
manual_selection=manual_selection,
crop=crop, sampling_rate=sampling_rate)
intensity = imaging.intensity_import(f, index_cropped=index_cropped, offset=offset, normalize=True)
# if settings['plot_type'] == 'frequency_shift':
# pass
# elif settings['plot_type'] == 'tmb':
# intensity = imaging.antenna_temp(intensity, t_atm=settings['t_atm'])
if settings['show_scatter_plot'] == True:
plt.scatter(ra, dec, c=intensity)
plt.title('')
plt.show()
else:
pass
return self.write_map(intensity, ra, dec, settings)
def write_map(self, intensity, ra, dec, settings):
"""
uses all the information provided for intensity, and positions it returns the data cube from cygrid.
:param intensity: the cropped 1D TOD for the given pixel.
:param ra: ra position
:param dec: dec position.
:param settings: other information.
:return: the cygrid data cube.
"""
overall_maxra = np.amax(ra)
overall_minra = np.amin(ra)
overall_maxdec = np.amax(dec)
overall_mindec = np.amin(dec)
map_width, map_height = (overall_maxra - overall_minra), (
overall_maxdec - overall_mindec) # in RA-dec
# map_width, map_height = (overall_maxra - overall_minra) * 3600., (
# overall_maxdec - overall_mindec) * 3600. # in arcsecs
# print map_width, map_height
num_mkids = 101 # len(timewithra[0, :]) - 1
if settings["beamsize_fwhm"] == None:
beamsize_fwhm = 16. # arcsec;
else:
beamsize_fwhm = settings["beamsize_fwhm"]
'''IF THE gridder.grid() gives nan as output from the intensity, the BEAMSIZE / KERNELSIZE is too small
in the calculation.'''
target_header = imaging.setup_header((((overall_minra + (overall_maxra - overall_minra) / 2)),
(overall_mindec + (overall_maxdec - overall_mindec) / 2)),
((overall_maxdec - overall_mindec), (overall_maxdec - overall_mindec)),
beamsize_fwhm / 3600., ) # (overall_maxra - overall_minra) replaced with height to account for removing the transition positioning
target_header['NAXIS3'] = 1 # dummy spectral axis
target_wcs = WCS(target_header)
# print 'Header:', target_header
# Defining the gridder
gridder = cygrid.WcsGrid(target_header)
kernelsize_fwhm = 4.
kernelsize_fwhm /= 3600. # need to convert to degree
kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))
support_radius = 4. * kernelsize_sigma
healpix_reso = kernelsize_sigma / 2.
gridder.set_kernel(
'gauss1d',
(kernelsize_sigma,),
support_radius,
healpix_reso,
)
tmp_map = np.array(intensity)
tmp_map_f = tmp_map.flatten()
xcoords = np.array(ra)
ycoords = np.array(dec)
xcoords = np.array(xcoords)
ycoords = np.array(ycoords)
gridder.grid(xcoords.flatten(), ycoords.flatten(), tmp_map_f[:, None])
cygrid_cube = gridder.get_datacube().squeeze()
target_wcs = gridder.get_wcs()
return cygrid_cube, target_header
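# Hedged usage sketch (variable names are assumptions): given one MKID's
# cleaned time-ordered data `tod` and a `settings` dict shaped like param.Param,
#   cube, header = SingleMKIDmapper().work_fits(tod, settings)
# returns the gridded single-pixel map plus a FITS-style header for saving.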
class BeamCharcteristics():
"""
    This is similar to SingleMKIDmapper, but instead of returning the cygrid datacube, it estimates the beam (prints the fitted center and returns the average beam size) for a given MKID.
"""
    def map(self, data, settings):
        """Validate the settings dict, printing an error message for anything missing, then map the data."""
        if settings['path'] is None:
            print('ERROR: Path to the folder is not specified')
        if settings['position_log'] is None:
            print('ERROR: position_log is not specified')
        if settings['mkids_to_exclude'] is None:
            print('No MKIDs are being excluded')
        return self.work_fits(data, settings)
def work_fits(self, data, settings):
"""
redirects the data to the respective functions.
:param data: the 1D TOD of the given pixel
:param settings: has other arguments like antenna log and stuff.
:return: single pixel cygrid datacube.
"""
path_to_folder = settings["path"]
# filename = settings["filename"]
position_log = path_to_folder + settings["position_log"]
mkids_exclude = settings["mkids_to_exclude"] # mkids to exclude
mkid_end = settings["mkids_end"]
return self.run(f=data, position_log=position_log, settings=settings)
def run(self, f, position_log, settings):
"""
adjusts the cropping of the two values. calls imaging intensity, and calls write_map function.
:param f: the 1D tod from the work_fits.
:param position_log: the antenna log file name
:param settings: has other information regarding the scan.
:return: calls the write_map function.
"""
# Cropping the transition scan values
        manual_selection = settings["position_crop"]  # the indexes of the position log cropping
        offset = settings["intensity_offset"]  # the offset used to align the intensity data with the position log
crop = []
if settings["scan_size"] == "auto":
print("scan is auto")
crop = 0
elif len(settings["scan_size"]) == 2:
crop = settings["scan_size"]
else:
            print(('\x1b[0;31;40m'
                  + 'scan size must be a list of 2 parameters, X-scanwidth and Y-scanheight in arcsec. Could not crop.' +
                  '\n Uncropped data imported'
                  '\x1b[0m'))
        print('crop:', crop)
ra, dec, index_cropped = imaging.position_import(antlog=position_log, maptype=settings['map_type'], manual_selection=manual_selection,
crop=crop)
intensity = imaging.intensity_import(f, index_cropped=index_cropped, offset=offset, normalize=True)
if settings['show_scatter_plot'] == True:
plt.scatter(ra, dec, c=intensity)
plt.title('')
plt.show()
else:
pass
return self.write_map(intensity, ra, dec, settings)
def write_map(self, intensity, ra, dec, settings):
"""
uses all the information provided for intensity, and positions it returns the data cube from cygrid.
:param intensity: the cropped 1D TOD for the given pixel.
:param ra: ra position
:param dec: dec position.
:param settings: other information.
:return: the cygrid data cube.
"""
overall_maxra = np.amax(ra)
overall_minra = np.amin(ra)
overall_maxdec = np.amax(dec)
overall_mindec = np.amin(dec)
map_width, map_height = (overall_maxra - overall_minra), (
overall_maxdec - overall_mindec) # in RA-dec
# map_width, map_height = (overall_maxra - overall_minra) * 3600., (
# overall_maxdec - overall_mindec) * 3600. # in arcsecs
# print map_width, map_height
num_mkids = 101 # len(timewithra[0, :]) - 1
if settings["beamsize_fwhm"] == None:
beamsize_fwhm = 16. # arcsec;
else:
beamsize_fwhm = settings["beamsize_fwhm"]
'''IF THE gridder.grid() gives nan as output from the intensity, the BEAMSIZE / KERNELSIZE is too small
in the calculation.'''
target_header = imaging.setup_header((((overall_minra + (overall_maxra - overall_minra) / 2)),
(overall_mindec + (overall_maxdec - overall_mindec) / 2)),
((overall_maxdec - overall_mindec), (overall_maxdec - overall_mindec)),
beamsize_fwhm / 3600., ) # (overall_maxra - overall_minra) replaced with height to account for removing the transition positioning
target_header['NAXIS3'] = 1 # dummy spectral axis
target_wcs = WCS(target_header)
# print 'Header:', target_header
# Defining the gridder
gridder = cygrid.WcsGrid(target_header)
kernelsize_fwhm = 4.
kernelsize_fwhm /= 3600. # need to convert to degree
kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))
support_radius = 4. * kernelsize_sigma
healpix_reso = kernelsize_sigma / 2.
gridder.set_kernel(
'gauss1d',
(kernelsize_sigma,),
support_radius,
healpix_reso,
)
tmp_map = np.array(intensity)
tmp_map_f = tmp_map.flatten()
xcoords = np.array(ra)
ycoords = np.array(dec)
xcoords = np.array(xcoords)
ycoords = np.array(ycoords)
gridder.grid(xcoords.flatten(), ycoords.flatten(), tmp_map_f[:, None])
cygrid_cube = gridder.get_datacube().squeeze()
# Get the db conversion and then plot the contour
center = []
cygrid_cube = 10 * np.log10(cygrid_cube) # dB conversion
xaxmin = -120
xaxmax = 120
yaxmin = -120
yaxmax = 120
extent = (xaxmin, xaxmax, yaxmin, yaxmax)
levels = [-3] # defining the contour level
plt.imshow(cygrid_cube)
CS = plt.contour(cygrid_cube, levels, colors=('r', (1, 1, 0), '#afeeee', '0.5'), extent=extent) # , extent=extent)
dat0 = CS.allsegs[0][0]
CS_center = [np.mean(dat0[:, 0]), np.mean(dat0[:, 1])]
CS_beamsize = [np.amax(dat0[:, 0]) - np.amin(dat0[:, 0]), np.amax(dat0[:, 1]) - np.amin(dat0[:, 1])]
CS_beamsize_average = np.mean(CS_beamsize)
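        # Note: assuming the normalized map peaks at 0 dB, the -3 dB contour is
        # the half-power contour, so CS_beamsize_average is an FWHM estimate.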
# CS_plot = plt.text(np.mean(dat0[:, 0]), np.mean(dat0[:, 1]), "mkid {}, bw = {}".format(mkid_exclude[i] , CS_beamsize_average), size=6)
center.append([CS_center[0], CS_center[1], CS_beamsize_average])
# plt.show()
plt.close()
print("BEAM CENTER : ", center)
w = WCS(target_header, naxis=2)
world = w.wcs_pix2world(np.array([[center[0][0], center[0][1]]], dtype=np.float64), 0)
return CS_beamsize_average
#
# target_wcs = gridder.get_wcs()
# return cygrid_cube, target_wcs
class CompositeMKIDmapper():
def __init__(self):
self.freq=[]
self.roomtemp=[]
self.etamb = []
    def map(self, data, frequencies, eta_mb, mkid_ids, offsets, settings, mkid_end, roomtemp=282):
        """Validate the settings dict, printing an error message for anything missing, then build the composite map."""
        if settings['path'] is None:
            print('ERROR: Path to the folder is not specified')
        if settings['position_log'] is None:
            print('ERROR: position_log is not specified')
        if settings['mkids_to_exclude'] is None:
            print('No MKIDs are being excluded')
        return self.work_fits(data, frequencies, eta_mb, mkid_ids, offsets, settings, mkid_end, roomtemp)
def work_fits(self, data, frequencies, eta_mb, mkid_ids, offsets, settings, mkid_end, roomtemp=282):
"""
works with the offsets table to properly adjust the coordinates of the map.
:param data: the 2D cleaned data matrix.
:param mkid_ids: the list of MKID ids that are there in the 2d data in the same order.
:param offsets: the offset table which contains the MKID id, offset_x, offset_y relative to the central pixel.
        :param settings: contains other information like beam width, position log filename, etc.
:return: the composite cygrid data cube.
"""
self.freq = frequencies
self.roomtemp = roomtemp
self.etamb = eta_mb
path_to_folder = settings["path"]
filename = settings["filename"]
position_log = path_to_folder + settings["position_log"]
three_values = []
for i in range(mkid_end): #len(offsets)
for j, val in enumerate(mkid_ids):
if offsets[i][0] == val:
three_values.append(self.run(j, f=data[j], off_x = offsets[i][1], off_y = offsets[i][2],
position_log=position_log, settings=settings))
three_values = np.array(three_values)
intensity_composite = three_values[:, 0]
ra_composite = three_values[:, 1]
dec_composite = three_values[:, 2]
        if settings['show_scatter_plot'] == True:
            print(" You should be careful scatter plotting this many points in Python. It may crash; if you still want it, change the source code")
# plt.scatter(ra_composite, dec_composite, c=intensity_composite)
# plt.title('')
# plt.show()
else:
pass
return self.composite_write_map(intensity_composite, ra_composite, dec_composite, settings)
def run(self, j, f, off_x, off_y, position_log, settings):
# Cropping the transition scan values
manual_selection = settings["position_crop"]
intensity_offset = settings["intensity_offset"]
map_type = settings["map_type"]
crop = []
if settings["scan_size"] == "auto":
print("scan is auto")
crop = 0
elif len(settings["scan_size"]) == 2:
crop = settings["scan_size"]
else:
            print(('\x1b[0;31;40m'
                  + 'scan size must be a list of 2 parameters, X-scanwidth and Y-scanheight in arcsec. Could not crop.' +
                  '\n Uncropped data imported'
                  '\x1b[0m'))
        print('crop:', crop)
# ra, dec, index_cropped = imaging.position_import(antlog=position_log, manual_selection=manual_selection,
# crop=crop)
ra, dec, index_cropped = imaging.pos_import_offset(antlog=position_log, maptype= map_type, manual_selection=manual_selection, crop=crop, offset_x=off_x, offset_y=off_y)
# # position_offset = 3 #position_offset
# ra = ra - (off_x/3600.)
# dec = dec - (off_y/3600.)
intensity = np.array(imaging.intensity_import(f, index_cropped=index_cropped, offset=intensity_offset, normalize=False))
if settings['plot_type'] == 'tmb':
t_a = []
f_load_av = []
# for k in range(len(self.etamb)):
index = self.etamb[j][0]
# print(index)
l = int(index - 1)
f_load = np.mean([self.freq[l][-1], self.freq[l][-2]])
f_load_av.append(f_load)
if f_load != 0:
# print(j, f_load)
# print(k)
t_a.append(-self.roomtemp * (-intensity / f_load))
t_a = np.array(t_a)
plt.close()
plt.plot(t_a)
plt.show()
            t_mb = t_a / (self.etamb[j][1] / 100)
            intensity = t_mb
######################### Changed For Tastar calculation ################################
# f_load = 7.5
# t_atm = 300 #K
# intensity = t_atm * (intensity/f_load)
######################### Changed For Tastar calculation ################################
# plt.scatter(ra, dec, c=intensity)
# plt.title('')
# plt.show()
return intensity.flatten(), ra, dec
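    # Calibration note (hot-load method, as applied in the 'tmb' branch above):
    #   T_A = T_room * (df / f_load)  and  T_mb = T_A / (eta_mb / 100)
    # where f_load is the mean frequency shift measured on the ambient load and
    # eta_mb is the main-beam efficiency given in percent.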
def composite_write_map(self, intensity, ra, dec, settings):
"""
uses all the information provided for intensity, and positions it returns the data cube from cygrid.
:param intensity: the cropped 1D TOD for the given pixel.
:param ra: ra position
:param dec: dec position.
:param settings: other information.
:return: the cygrid data cube.
"""
overall_maxra = np.amax(ra)
overall_minra = np.amin(ra)
overall_maxdec = np.amax(dec)
overall_mindec = np.amin(dec)
map_width, map_height = (overall_maxra - overall_minra) * 3600., (
overall_maxdec - overall_mindec) * 3600. # in arcsecs
# print map_width, map_height
num_mkids = 101 # len(timewithra[0, :]) - 1
if settings["beamsize_fwhm"] == None:
beamsize_fwhm = 16. # arcsec;
else:
beamsize_fwhm = settings["beamsize_fwhm"]
'''IF THE gridder.grid() gives nan as output from the intensity, the BEAMSIZE / KERNELSIZE is too small
in the calculation.'''
target_header = imaging.setup_header((((overall_minra + (overall_maxra - overall_minra) / 2)),
(overall_mindec + (overall_maxdec - overall_mindec) / 2)),
((overall_maxdec - overall_mindec), (overall_maxdec - overall_mindec)),
beamsize_fwhm / 3600., ) # (overall_maxra - overall_minra) replaced with height to account for removing the transition positioning
target_header['NAXIS3'] = 1 # dummy spectral axis
target_wcs = WCS(target_header)
# print 'Header:', target_header
# Defining the gridder
gridder = cygrid.WcsGrid(target_header)
kernelsize_fwhm = 4.
kernelsize_fwhm /= 3600. # need to convert to degree
kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))
support_radius = 4. * kernelsize_sigma
healpix_reso = kernelsize_sigma / 2.
gridder.set_kernel(
'gauss1d',
(kernelsize_sigma,),
support_radius,
healpix_reso,
)
tmp_map = np.array(intensity)
tmp_map_f = tmp_map.flatten()
xcoords = np.array(ra)
ycoords = np.array(dec)
xcoords = np.array(xcoords)
ycoords = np.array(ycoords)
gridder.grid(xcoords.flatten(), ycoords.flatten(), tmp_map_f[:, None])
cygrid_cube = gridder.get_datacube().squeeze()
target_wcs = gridder.get_wcs()
return cygrid_cube, target_header
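# Gridding note: cygrid convolves the samples with a Gaussian kernel whose
# sigma is FWHM / sqrt(8 ln 2), as computed above. A 4" kernel against a 16"
# beam yields an effective resolution of sqrt(16**2 + 4**2) ~= 16.5", so the
# smoothing added by gridding is small.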
| 38.42674 | 177 | 0.586435 | 2,479 | 20,981 | 4.793062 | 0.132715 | 0.031476 | 0.023565 | 0.030635 | 0.764939 | 0.762919 | 0.748359 | 0.748359 | 0.734304 | 0.72614 | 0 | 0.015794 | 0.308946 | 20,981 | 546 | 178 | 38.42674 | 0.803711 | 0.255183 | 0 | 0.711409 | 0 | 0.013423 | 0.106302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043624 | false | 0.010067 | 0.050336 | 0 | 0.134228 | 0.067114 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acb53a60bab4fc982d4daa5548f034dd6e469827 | 6,670 | py | Python | lib/utilities.py | jcasmer/grow_control_backend- | 6a18a137e0a16138607413925727d7e5f8486777 | [
"BSD-3-Clause"
] | 1 | 2019-05-11T14:45:47.000Z | 2019-05-11T14:45:47.000Z | lib/utilities.py | jcasmer/grow_control_backend- | 6a18a137e0a16138607413925727d7e5f8486777 | [
"BSD-3-Clause"
] | 6 | 2021-03-18T20:45:02.000Z | 2021-09-22T17:41:38.000Z | lib/utilities.py | jcasmer/grow_control_backend- | 6a18a137e0a16138607413925727d7e5f8486777 | [
"BSD-3-Clause"
] | null | null | null | import pandas
import math
from django.conf import settings
from api.models import TypeDiagnostic
class Utilites():
    @staticmethod
    def get_oms_data(gender, char_type, data_length):
        '''
        Build the WHO (OMS) reference-curve data for the growth chart.
        '''
file_name = None
if gender == 'Femenino':
file_path = settings.FILE_GIRL_ROOT
elif gender == 'Masculino':
file_path = settings.FILE_BOY_ROOT
sheet = 0
# 1 == weight
if int(char_type) == 1 :
sheet = 0
sub = 7
file_to_read_imc = []
elif int(char_type) == 3:
sheet = 0
sub = 7
file_to_read_imc = pandas.read_excel(open(file_path, 'rb'), sheet_name=1)
        # 2 == height
elif int(char_type) == 2:
sheet = 1
file_to_read_imc = []
full_data = {}
label = []
data = []
try:
file_to_read = pandas.read_excel(open(file_path, 'rb'), sheet_name=sheet)
except:
pass
j = 0
        for i in range(0, data_length + 2, 2):
if len(file_to_read_imc) > 0 and i > len(file_to_read_imc):
max_imc = file_to_read_imc['SD0'][len(file_to_read_imc)]
if int(char_type) == 1 or int(char_type) ==3:
value = math.ceil( file_to_read['Day'][i] / sub )
else:
value = file_to_read['Day'][i]
            if value <= data_length:
if len(file_to_read_imc) > 0 :
imc = file_to_read['SD0'][i] / ((file_to_read_imc['SD0'][i] / 100) **2)
data.append({'y': imc , 'x': value })
else:
data.append({'y': file_to_read['SD0'][i], 'x': value })
            elif value > data_length:
if len(file_to_read_imc) > 0:
imc = file_to_read['SD0'][i] / ((file_to_read_imc['SD0'][i] / 100) ** 2 )
data.append({'y': imc , 'x': value })
else:
data.append({'y': file_to_read['SD0'][i], 'x':value })
break
full_data = {
# 'label': label,
'data': data
}
return full_data
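    # Hedged usage sketch (argument values are illustrative): the returned dict
    # is chart-ready, e.g.
    #   curve = Utilites.get_oms_data('Femenino', 1, 24)
    #   points = curve['data']  # list of {'x': time step, 'y': reference value}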
    @staticmethod
    def get_child_status(gender, char_type, child_detail, week):
        '''
        Classify the child's weight/height status against the WHO reference tables.
        '''
file_name = None
if gender == 'Femenino':
file_path = settings.FILE_GIRL_ROOT
elif gender == 'Masculino':
file_path = settings.FILE_BOY_ROOT
sheet = 0
# 1 == weight; 3 == IMC
if int(char_type) == 1 or int(char_type) == 3:
sheet = 0
        # 2 == height
elif int(char_type) == 2:
sheet = 1
week = math.ceil(week / 30)
try:
file_to_read = pandas.read_excel(open(file_path, 'rb'), sheet_name=sheet)
except:
pass
line_week = None
status = ''
for i in range(0, len(file_to_read['Day']) - 1):
try:
if week <= file_to_read['Day'][i]:
line_week = i
break
except:
continue
try:
if int(char_type) == 1 or int(char_type) == 3:
if child_detail.weight <= file_to_read['SD4neg'][line_week]:
status = 'Bajo Peso Severo'
elif file_to_read['SD4neg'][line_week] <= child_detail.weight and file_to_read['SD3neg'][line_week] >= child_detail.weight :
status = 'Bajo Peso'
elif file_to_read['SD3neg'][line_week] <= child_detail.weight and file_to_read['SD2neg'][line_week] >= child_detail.weight :
status = 'Bajo Peso'
elif file_to_read['SD2neg'][line_week] <= child_detail.weight and file_to_read['SD1neg'][line_week] >= child_detail.weight :
status = 'Peso Promedio'
elif file_to_read['SD1neg'][line_week] <= child_detail.weight and file_to_read['SD0'][line_week] >= child_detail.weight :
status = 'Peso Promedio'
elif file_to_read['SD0'][line_week] <= child_detail.weight and file_to_read['SD1'][line_week] >= child_detail.weight :
status = 'Peso Promedio'
elif file_to_read['SD1'][line_week] <= child_detail.weight and file_to_read['SD2'][line_week] >= child_detail.weight :
status = 'Posible Riesgo de Sobre Peso'
elif file_to_read['SD2'][line_week] <= child_detail.weight and file_to_read['SD3'][line_week] >= child_detail.weight :
status = 'Sobre Peso'
elif file_to_read['SD3'][line_week] <= child_detail.weight and file_to_read['SD4'][line_week] >= child_detail.weight :
status = 'Sobre Peso'
elif file_to_read['SD4'][line_week] < child_detail.weight :
status = 'Obeso'
elif int(char_type) == 2:
if child_detail.height <= file_to_read['SD3neg'][line_week]:
status = 'Baja Talla Severa'
                elif file_to_read['SD3neg'][line_week] <= child_detail.height and file_to_read['SD2neg'][line_week] >= child_detail.height :
status = 'Baja Talla'
elif file_to_read['SD2neg'][line_week] <= child_detail.height and file_to_read['SD1neg'][line_week] >= child_detail.height :
status = 'Talla Promedio'
elif file_to_read['SD1neg'][line_week] <= child_detail.height and file_to_read['SD0'][line_week] >= child_detail.height :
status = 'Talla Promedio'
elif file_to_read['SD0'][line_week] <= child_detail.height and file_to_read['SD1'][line_week] >= child_detail.height :
status = 'Talla Promedio'
elif file_to_read['SD1'][line_week] <= child_detail.height and file_to_read['SD2'][line_week] >= child_detail.height :
status = 'Talla Por Encima Del Promedio'
elif file_to_read['SD2'][line_week] <= child_detail.height and file_to_read['SD3'][line_week] >= child_detail.height :
status = 'Talla Por Encima Del Promedio '
elif file_to_read['SD3'][line_week] < child_detail.height :
status = 'Super Talla'
except Exception as e:
pass
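        # The labels mirror WHO z-score bands; e.g. a weight falling between
        # the SD2 and SD3 reference curves is classified as 'Sobre Peso' above.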
return status | 46.319444 | 142 | 0.52009 | 803 | 6,670 | 4.031133 | 0.139477 | 0.098239 | 0.163732 | 0.176089 | 0.808156 | 0.764597 | 0.732777 | 0.726908 | 0.712697 | 0.505715 | 0 | 0.019915 | 0.367616 | 6,670 | 144 | 143 | 46.319444 | 0.747511 | 0.021439 | 0 | 0.52459 | 0 | 0 | 0.077045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0.02459 | 0.032787 | 0 | 0.07377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
accbc1d25d5a96df410f7cf912c5b35808ff2e11 | 319 | py | Python | pages/themes/beginners/unicodeTopics/examples/i_in_cp1251.py | ProgressBG-Python-Course/ProgressBG-VC2-Python | 03b892a42ee1fad3d4f97e328e06a4b1573fd356 | [
"MIT"
] | null | null | null | pages/themes/beginners/unicodeTopics/examples/i_in_cp1251.py | ProgressBG-Python-Course/ProgressBG-VC2-Python | 03b892a42ee1fad3d4f97e328e06a4b1573fd356 | [
"MIT"
] | null | null | null | pages/themes/beginners/unicodeTopics/examples/i_in_cp1251.py | ProgressBG-Python-Course/ProgressBG-VC2-Python | 03b892a42ee1fad3d4f97e328e06a4b1573fd356 | [
"MIT"
] | null | null | null | t1 = 'А'.encode('cp1251').decode('cp1251')
t2 = 'З'.encode('cp1251').decode('cp1251')
t3 = 'Ж'.encode('cp1251').decode('cp1251')
t4 = 'И'.encode('cp1251').decode('cp1251')
t5 = 'Й'.encode('cp1251').decode('cp1251')
t6 = 'К'.encode('cp1251').decode('cp1251')
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
print(t6)
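# Note: the round trip is lossless because cp1251 maps each Cyrillic letter to
# a single byte, e.g. 'И'.encode('cp1251') == b'\xc8', which decodes back to 'И'.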
| 22.785714 | 42 | 0.639498 | 48 | 319 | 4.25 | 0.333333 | 0.352941 | 0.529412 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204082 | 0.07837 | 319 | 13 | 43 | 24.538462 | 0.489796 | 0 | 0 | 0 | 0 | 0 | 0.244514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
acf1ad4c8b45b059a0f8dd5718b01dd57b16e84c | 141 | py | Python | setup.py | iilei/identify | d82b4a26cfb2512d4bf4e1be4d18c0bba6887448 | [
"MIT"
] | 1 | 2020-11-20T12:22:25.000Z | 2020-11-20T12:22:25.000Z | setup.py | iilei/identify | d82b4a26cfb2512d4bf4e1be4d18c0bba6887448 | [
"MIT"
] | 3 | 2020-11-20T14:40:17.000Z | 2020-11-27T00:57:50.000Z | setup.py | iilei/identify | d82b4a26cfb2512d4bf4e1be4d18c0bba6887448 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
from setuptools import setup
setup()
| 20.142857 | 39 | 0.787234 | 18 | 141 | 5.611111 | 0.611111 | 0.19802 | 0.316832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0.134752 | 141 | 6 | 40 | 23.5 | 0.819672 | 0.148936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c583ac1768c90a31e73ae09ffd4ebecbf44c4710 | 9,321 | py | Python | asv_bench/benchmarks/io_sql.py | raspbian-packages/pandas | fb33806b5286deb327b2e0fa96aedf25a6ed563f | [
"PSF-2.0",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
] | null | null | null | asv_bench/benchmarks/io_sql.py | raspbian-packages/pandas | fb33806b5286deb327b2e0fa96aedf25a6ed563f | [
"PSF-2.0",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
] | null | null | null | asv_bench/benchmarks/io_sql.py | raspbian-packages/pandas | fb33806b5286deb327b2e0fa96aedf25a6ed563f | [
"PSF-2.0",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
] | null | null | null | import sqlalchemy
from .pandas_vb_common import *
import sqlite3
from sqlalchemy import create_engine
class sql_datetime_read_and_parse_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df['datetime_string'] = self.df['datetime'].map(str)
self.df.to_sql('test_type', self.engine, if_exists='replace')
self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
def time_sql_datetime_read_and_parse_sqlalchemy(self):
read_sql_table('test_type', self.engine, columns=['datetime_string'], parse_dates=['datetime_string'])
class sql_datetime_read_as_native_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df['datetime_string'] = self.df['datetime'].map(str)
self.df.to_sql('test_type', self.engine, if_exists='replace')
self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
def time_sql_datetime_read_as_native_sqlalchemy(self):
read_sql_table('test_type', self.engine, columns=['datetime'])
class sql_datetime_write_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df.loc[1000:3000, 'float'] = np.nan
def time_sql_datetime_write_sqlalchemy(self):
self.df[['datetime']].to_sql('test_datetime', self.engine, if_exists='replace')
class sql_float_read_query_fallback(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df['datetime_string'] = self.df['datetime'].map(str)
self.df.to_sql('test_type', self.engine, if_exists='replace')
self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
def time_sql_float_read_query_fallback(self):
read_sql_query('SELECT float FROM test_type', self.con)
class sql_float_read_query_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df['datetime_string'] = self.df['datetime'].map(str)
self.df.to_sql('test_type', self.engine, if_exists='replace')
self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
def time_sql_float_read_query_sqlalchemy(self):
read_sql_query('SELECT float FROM test_type', self.engine)
class sql_float_read_table_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df['datetime_string'] = self.df['datetime'].map(str)
self.df.to_sql('test_type', self.engine, if_exists='replace')
self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
def time_sql_float_read_table_sqlalchemy(self):
read_sql_table('test_type', self.engine, columns=['float'])
class sql_float_write_fallback(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df.loc[1000:3000, 'float'] = np.nan
def time_sql_float_write_fallback(self):
self.df[['float']].to_sql('test_float', self.con, if_exists='replace')
class sql_float_write_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df.loc[1000:3000, 'float'] = np.nan
def time_sql_float_write_sqlalchemy(self):
self.df[['float']].to_sql('test_float', self.engine, if_exists='replace')
class sql_read_query_fallback(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.index = tm.makeStringIndex(10000)
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
self.df.to_sql('test2', self.engine, if_exists='replace')
self.df.to_sql('test2', self.con, if_exists='replace')
def time_sql_read_query_fallback(self):
read_sql_query('SELECT * FROM test2', self.con)
class sql_read_query_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.index = tm.makeStringIndex(10000)
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
self.df.to_sql('test2', self.engine, if_exists='replace')
self.df.to_sql('test2', self.con, if_exists='replace')
def time_sql_read_query_sqlalchemy(self):
read_sql_query('SELECT * FROM test2', self.engine)
class sql_read_table_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.index = tm.makeStringIndex(10000)
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
self.df.to_sql('test2', self.engine, if_exists='replace')
self.df.to_sql('test2', self.con, if_exists='replace')
def time_sql_read_table_sqlalchemy(self):
read_sql_table('test2', self.engine)
class sql_string_write_fallback(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df.loc[1000:3000, 'float'] = np.nan
def time_sql_string_write_fallback(self):
self.df[['string']].to_sql('test_string', self.con, if_exists='replace')
class sql_string_write_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
self.df.loc[1000:3000, 'float'] = np.nan
def time_sql_string_write_sqlalchemy(self):
self.df[['string']].to_sql('test_string', self.engine, if_exists='replace')
class sql_write_fallback(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.index = tm.makeStringIndex(10000)
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
def time_sql_write_fallback(self):
self.df.to_sql('test1', self.con, if_exists='replace')
class sql_write_sqlalchemy(object):
goal_time = 0.2
def setup(self):
self.engine = create_engine('sqlite:///:memory:')
self.con = sqlite3.connect(':memory:')
self.index = tm.makeStringIndex(10000)
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
def time_sql_write_sqlalchemy(self):
self.df.to_sql('test1', self.engine, if_exists='replace')
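# asv convention note: each class above is one benchmark suite; setup() runs
# before the timed calls, only methods prefixed with time_ are measured, and
# goal_time steers how long asv samples each benchmark.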
| 43.152778 | 204 | 0.647785 | 1,258 | 9,321 | 4.598569 | 0.062798 | 0.05497 | 0.059637 | 0.038894 | 0.960415 | 0.933103 | 0.920657 | 0.876923 | 0.8586 | 0.835091 | 0 | 0.06947 | 0.17069 | 9,321 | 215 | 205 | 43.353488 | 0.678913 | 0 | 0 | 0.683871 | 0 | 0 | 0.175196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.193548 | false | 0 | 0.025806 | 0 | 0.412903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c59b4b2420d61abe80da33c99501a3483d076a25 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/distutils/command/egg_info.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/distutils/command/egg_info.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/distutils/command/egg_info.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/8b/e6/64/e2c7ed2b970c4154368ea4b14cca5523e818c97375ebea794ef9a35117 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c5e270a2ac6021eccfae8f7ce4739e9861389c76 | 141 | py | Python | pyperclip/main.py | jonathan-JIPSlok/DataScience | 44fa44fdef8d4bd4d60d94179ec2d7a78babcc06 | [
"MIT"
] | 1 | 2021-10-09T10:04:39.000Z | 2021-10-09T10:04:39.000Z | pyperclip/main.py | jonathan-JIPSlok/DataScience | 44fa44fdef8d4bd4d60d94179ec2d7a78babcc06 | [
"MIT"
] | null | null | null | pyperclip/main.py | jonathan-JIPSlok/DataScience | 44fa44fdef8d4bd4d60d94179ec2d7a78babcc06 | [
"MIT"
] | null | null | null | import pyperclip
pyperclip.copy("Hello Word") #manda para o clipboard ou seja ele copia
print(pyperclip.paste())  # reads from the clipboard, i.e. it pastes | 28.2 | 70 | 0.780142 | 23 | 141 | 4.782609 | 0.73913 | 0.2 | 0.272727 | 0.327273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 141 | 5 | 71 | 28.2 | 0.916667 | 0.510638 | 0 | 0 | 0 | 0 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c5ea6039b08e1a45c1058db7599ca563405f7fb4 | 11,450 | py | Python | datahub/sql/apis/routes/label_studio/storage_router.py | Chronicles-of-AI/osiris | c71b1324ed270caa3724c0a8c58c4883b28dc19c | [
"Apache-2.0"
] | 3 | 2021-08-03T08:13:40.000Z | 2022-02-23T04:27:30.000Z | datahub/sql/apis/routes/label_studio/storage_router.py | Chronicles-of-AI/osiris | c71b1324ed270caa3724c0a8c58c4883b28dc19c | [
"Apache-2.0"
] | null | null | null | datahub/sql/apis/routes/label_studio/storage_router.py | Chronicles-of-AI/osiris | c71b1324ed270caa3724c0a8c58c4883b28dc19c | [
"Apache-2.0"
] | null | null | null | from fastapi import APIRouter, Depends, HTTPException, status
from sql.apis.schemas.requests.label_studio.storage_request import (
CreateStorage,
Storage,
CreateGCSStorage,
)
from sql.apis.schemas.responses.label_studio.storage_response import (
CreateGCSStorageResponse,
CreateStorageResponse,
StorageDeleteResponse,
StorageResponse,
ListStoragesResponse,
)
from sql.controllers.label_studio.label_studio_controller import (
StorageController,
ProjectController,
)
from fastapi.security import OAuth2PasswordBearer
from commons.auth import decodeJWT
from sql import logger
logging = logger(__name__)
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="user/login")
storage_router = APIRouter()
@storage_router.post(
"/label_studio/create_s3_storage", response_model=CreateStorageResponse
)
async def create_s3_storage(
create_storage_request: CreateStorage, token: str = Depends(oauth2_scheme)
):
"""[API router to add S3 storage to Label studio project]
Args:
create_storage_request (CreateStorage): [Create storage request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[CreateStorageResponse]: [Create storage response]
"""
try:
logging.info("Calling /label_studio/create_s3_storage endpoint")
logging.debug(f"Request: {create_storage_request}")
if decodeJWT(token=token):
response = StorageController().create_s3_storage_controller(
create_storage_request
)
return CreateStorageResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/create_s3_storage endpoint: {error}")
raise error
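# Hedged usage sketch (the header value and body fields are illustrative; the
# real fields are defined by the CreateStorage schema imported above):
#   POST /label_studio/create_s3_storage
#   Authorization: Bearer <jwt obtained from user/login>
#   body: {"project": 1, "bucket": "annotation-data", ...}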
@storage_router.post("/label_studio/sync_s3_storage", response_model=StorageResponse)
async def sync_s3_storage(
sync_storage_request: Storage, token: str = Depends(oauth2_scheme)
):
"""[API router to sync data into label studio project]
Args:
sync_storage_request (Storage): [Sync storage request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[StorageResponse]: [Sync storage response]
"""
try:
logging.info("Calling /label_studio/sync_s3_storage endpoint")
logging.debug(f"Request: {sync_storage_request}")
if decodeJWT(token=token):
response = StorageController().sync_s3_storage_controller(
sync_storage_request
)
return StorageResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/sync_s3_storage endpoint: {error}")
raise error
@storage_router.delete(
"/label_studio/delete_s3_storage", response_model=StorageDeleteResponse
)
async def delete_s3_storage(
delete_storage_request: Storage, token: str = Depends(oauth2_scheme)
):
"""[API router to remove S3 storage from label studio project]
Args:
delete_storage_request (Storage): [Delete S3 storage request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[StorageDeleteResponse]: [Delete S3 storage from label studio project response]
"""
try:
logging.info("Calling /label_studio/delete_s3_storage endpoint")
logging.debug(f"Request: {delete_storage_request}")
if decodeJWT(token=token):
response = StorageController().delete_s3_storage_controller(
delete_storage_request
)
return StorageDeleteResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/delete_s3_storage endpoint: {error}")
raise error
@storage_router.get("/label_studio/list_projects")
async def list_projects(token: str = Depends(oauth2_scheme)):
"""[API Router to list Projects attached in your Annotation Project]
Args:
        token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
        error: [Exception in underlying controller]
Returns:
[type]: [List of Projects]
"""
try:
logging.info("Calling /label_studio/list_projects endpoint")
if decodeJWT(token=token):
response = ProjectController().list_projects_controller()
return response
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/list_projects endpoint: {error}")
raise error
@storage_router.get("/label_studio/list_storages", response_model=ListStoragesResponse)
async def list_storages(project_id: int, token: str = Depends(oauth2_scheme)):
"""[API Router to list Storages attached in your Annotation Project]
Args:
project_id (int): [Unique Identifier for the Annotation Project]
        token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
        error: [Exception in underlying controller]
Returns:
[type]: [List of Storages]
"""
try:
logging.info("Calling /label_studio/list_storages endpoint")
logging.debug(f"Request: {project_id}")
if decodeJWT(token=token):
response = StorageController().list_storages_controller(
project_id=project_id
)
return ListStoragesResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/list_storages endpoint: {error}")
raise error
@storage_router.post(
"/label_studio/create_gcs_storage", response_model=CreateGCSStorageResponse
)
async def create_gcs_storage(
create_storage_request: CreateGCSStorage, token: str = Depends(oauth2_scheme)
):
"""[API router to add GCS storage to label studio project]
Args:
create_storage_request (CreateGCSStorage): [Add GCS storage to label studio project request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[CreateGCSStorageResponse]: [Add GCS storage to label studio project response]
"""
try:
logging.info("Calling /label_studio/create_gcs_storage endpoint")
logging.debug(f"Request: {create_storage_request}")
if decodeJWT(token=token):
response = StorageController().create_gcs_storage_controller(
create_storage_request
)
return CreateGCSStorageResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/create_gcs_storage endpoint: {error}")
raise error
@storage_router.post(
"/label_studio/sync_gcs_storage", response_model=CreateGCSStorageResponse
)
async def sync_gcs_storage(
sync_storage_request: Storage, token: str = Depends(oauth2_scheme)
):
"""[API router to sync data into label studio project]
Args:
sync_storage_request (Storage): [Sync storage request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[CreateGCSStorageResponse]: [Sync storage response]
"""
try:
logging.info("Calling /label_studio/sync_gcs_storage endpoint")
logging.debug(f"Request: {sync_storage_request}")
if decodeJWT(token=token):
response = StorageController().sync_gcs_storage_controller(
sync_storage_request
)
return CreateGCSStorageResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/sync_gcs_storage endpoint: {error}")
raise error
@storage_router.delete(
"/label_studio/delete_gcs_storage", response_model=StorageDeleteResponse
)
async def delete_gcs_storage(
delete_storage_request: Storage, token: str = Depends(oauth2_scheme)
):
"""[API router to remove GCS storage from label studio project]
Args:
delete_storage_request (Storage): [Delete GCS storage request]
token (str, optional): [Bearer token for authentication]. Defaults to Depends(oauth2_scheme).
Raises:
HTTPException: [Unauthorized exception when invalid token is passed]
error: [Exception in underlying controller]
Returns:
[StorageDeleteResponse]: [Delete GCS storage from label studio project response]
"""
try:
logging.info("Calling /label_studio/delete_gcs_storage endpoint")
logging.debug(f"Request: {delete_storage_request}")
if decodeJWT(token=token):
response = StorageController().delete_gcs_storage_controller(
delete_storage_request
)
return StorageDeleteResponse(**response)
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token",
headers={"WWW-Authenticate": "Bearer"},
)
except Exception as error:
logging.error(f"Error in /label_studio/delete_gcs_storage endpoint: {error}")
raise error
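

# --- Illustrative client call (added sketch; the base URL and token below are
# assumptions, only the endpoint path comes from this router) ---
# import requests
# headers = {"Authorization": "Bearer <access-token>"}
# resp = requests.get("http://localhost:8000/label_studio/list_projects", headers=headers)
# resp.raise_for_status()
# print(resp.json())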
| 36.349206 | 101 | 0.670393 | 1,182 | 11,450 | 6.305415 | 0.094755 | 0.056085 | 0.040789 | 0.022541 | 0.837381 | 0.82128 | 0.777405 | 0.729371 | 0.718905 | 0.676104 | 0 | 0.007177 | 0.245502 | 11,450 | 314 | 102 | 36.464968 | 0.855539 | 0 | 0 | 0.530928 | 0 | 0 | 0.208174 | 0.109451 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.010309 | 0.036082 | 0 | 0.07732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a84553250c9833af105f0b08f99c09f348b6a3ff | 10,206 | py | Python | unit/test_check_cluster_issues.py | tarantool/ansible-cartridge | 14d86752d582f43f0d8efb27dbaa1175ff6e8ac2 | [
"BSD-2-Clause"
] | 17 | 2019-09-02T15:31:56.000Z | 2022-03-29T18:49:59.000Z | unit/test_check_cluster_issues.py | tarantool/ansible-cartridge | 14d86752d582f43f0d8efb27dbaa1175ff6e8ac2 | [
"BSD-2-Clause"
] | 171 | 2019-10-24T15:34:34.000Z | 2022-03-29T09:18:46.000Z | unit/test_check_cluster_issues.py | tarantool/ansible-cartridge | 14d86752d582f43f0d8efb27dbaa1175ff6e8ac2 | [
"BSD-2-Clause"
] | 14 | 2019-12-23T08:27:06.000Z | 2021-07-06T15:53:49.000Z | import sys
import unittest
from parameterized import parameterized
import module_utils.helpers as helpers
from unit.instance import Instance
sys.modules['ansible.module_utils.helpers'] = helpers
from library.cartridge_check_cluster_issues import check_cluster_issues
ISSUES_WARN_HEADER = '* Issues (warning): *********************************************'
ISSUES_CRIT_HEADER = '* Issues (critical): *********************************************'
ISSUES_OTHER_HEADER = '* Issues (other-level): *********************************************'
def call_check_cluster_issues(console_sock, show_issues=True, allow_warnings=False):
return check_cluster_issues({
'console_sock': console_sock,
'show_issues': show_issues,
'allow_warnings': allow_warnings,
})
def set_issues(instance, issues):
formatted_issues = [
{
'level': issue['level'],
'topic': 'some-topic',
'replicaset_uuid': 'some-replicaset-uuid',
'instance_uuid': 'some-instance-uuid',
'message': issue['message'],
}
for issue in issues
]
instance.set_variable('issues', formatted_issues)
def get_warnings(res):
res_json = res.get_exit_json()
return res_json.get('warnings')
class TestCheckClusterIssues(unittest.TestCase):
def setUp(self):
self.maxDiff = None
self.instance = Instance()
self.console_sock = self.instance.console_sock
self.cookie = self.instance.cluster_cookie
self.instance.start()
def test_no_issues(self):
res = call_check_cluster_issues(self.console_sock)
self.assertFalse(res.failed, res.msg)
@parameterized.expand([
[True],
[False],
])
def test_issues(self, show_issues):
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'critical', 'message': 'Some critical issue 1'},
{'level': 'warning', 'message': 'Some warning issue 2'},
{'level': 'critical', 'message': 'Some critical issue 3'},
{'level': 'other-level', 'message': 'Some other-level issue 4'},
{'level': 'warning', 'message': 'Some warning issue 5'},
{'level': 'critical', 'message': 'Some critical issue 6'},
])
res = call_check_cluster_issues(self.console_sock, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 6 issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_CRIT_HEADER,
'Some critical issue 1',
'Some critical issue 3',
'Some critical issue 6',
ISSUES_OTHER_HEADER,
'Some other-level issue 4',
ISSUES_WARN_HEADER,
'Some warning issue 2',
'Some warning issue 5',
])
# only critical
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'critical', 'message': 'Some critical issue 1'},
{'level': 'critical', 'message': 'Some critical issue 3'},
{'level': 'critical', 'message': 'Some critical issue 6'},
])
res = call_check_cluster_issues(self.console_sock, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 3 issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_CRIT_HEADER,
'Some critical issue 1',
'Some critical issue 3',
'Some critical issue 6',
])
# only warnings
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'warning', 'message': 'Some warning issue 2'},
{'level': 'warning', 'message': 'Some warning issue 5'},
])
res = call_check_cluster_issues(self.console_sock, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 2 issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_WARN_HEADER,
'Some warning issue 2',
'Some warning issue 5',
])
# only unknown
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'other-level', 'message': 'Some other-level issue 4'},
])
res = call_check_cluster_issues(self.console_sock, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 1 issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_OTHER_HEADER,
'Some other-level issue 4',
])
@parameterized.expand([
[True],
[False],
])
def test_warnings(self, show_issues):
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'critical', 'message': 'Some critical issue 1'},
{'level': 'warning', 'message': 'Some warning issue 2'},
{'level': 'critical', 'message': 'Some critical issue 3'},
{'level': 'other-level', 'message': 'Some other-level issue 4'},
{'level': 'warning', 'message': 'Some warning issue 5'},
{'level': 'critical', 'message': 'Some critical issue 6'},
])
res = call_check_cluster_issues(self.console_sock, allow_warnings=True, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 4 critical issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_CRIT_HEADER,
'Some critical issue 1',
'Some critical issue 3',
'Some critical issue 6',
ISSUES_OTHER_HEADER,
'Some other-level issue 4',
ISSUES_WARN_HEADER,
'Some warning issue 2',
'Some warning issue 5',
])
# only warnings
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'warning', 'message': 'Some warning issue 2'},
{'level': 'warning', 'message': 'Some warning issue 5'},
])
res = call_check_cluster_issues(self.console_sock, allow_warnings=True, show_issues=show_issues)
self.assertFalse(res.failed)
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_WARN_HEADER,
'Some warning issue 2',
'Some warning issue 5',
])
# only critical
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'critical', 'message': 'Some critical issue 1'},
{'level': 'critical', 'message': 'Some critical issue 3'},
{'level': 'critical', 'message': 'Some critical issue 6'},
])
res = call_check_cluster_issues(self.console_sock, allow_warnings=True, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 3 critical issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_CRIT_HEADER,
'Some critical issue 1',
'Some critical issue 3',
'Some critical issue 6',
])
# only unknown
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'other-level', 'message': 'Some other-level issue 4'},
])
res = call_check_cluster_issues(self.console_sock, allow_warnings=True, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 1 critical issues")
if not show_issues:
self.assertEqual(len(get_warnings(res)), 0)
else:
self.assertEqual(get_warnings(res), [
ISSUES_OTHER_HEADER,
'Some other-level issue 4',
])
@parameterized.expand([
[True],
[False],
])
def test_list_on_cluster_returns_error(self, show_issues):
helpers.WARNINGS = []
set_issues(self.instance, [
{'level': 'critical', 'message': 'Some critical issue 1'},
{'level': 'warning', 'message': 'Some warning issue 2'},
{'level': 'critical', 'message': 'Some critical issue 3'},
{'level': 'other-level', 'message': 'Some other-level issue 4'},
{'level': 'warning', 'message': 'Some warning issue 5'},
{'level': 'critical', 'message': 'Some critical issue 6'},
])
        self.instance.set_fail_on('issues_list_on_clister')  # (sic) spelling kept; presumably matches the mock's fail-on key
res = call_check_cluster_issues(self.console_sock, show_issues=show_issues)
self.assertTrue(res.failed)
self.assertEqual(res.msg, "Cluster has 6 issues")
warnings = get_warnings(res)
self.assertEqual(warnings[0], 'Received error on getting list of cluster issues: cartridge err')
if not show_issues:
self.assertEqual(len(warnings), 1)
else:
self.assertEqual(warnings[1:], [
ISSUES_CRIT_HEADER,
'Some critical issue 1',
'Some critical issue 3',
'Some critical issue 6',
ISSUES_OTHER_HEADER,
'Some other-level issue 4',
ISSUES_WARN_HEADER,
'Some warning issue 2',
'Some warning issue 5',
])
def tearDown(self):
self.instance.stop()
del self.instance
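

# Added convenience entry point so the module can be run directly
# (standard unittest idiom; an addition, not part of the original tests):
if __name__ == '__main__':
    unittest.main()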
| 35.4375 | 104 | 0.563492 | 1,079 | 10,206 | 5.15848 | 0.087118 | 0.070068 | 0.091628 | 0.064678 | 0.790155 | 0.779734 | 0.767876 | 0.761948 | 0.754761 | 0.754761 | 0 | 0.011172 | 0.307172 | 10,206 | 287 | 105 | 35.560976 | 0.775986 | 0.007937 | 0 | 0.748936 | 0 | 0 | 0.250148 | 0.018284 | 0 | 0 | 0 | 0 | 0.157447 | 1 | 0.038298 | false | 0 | 0.025532 | 0.004255 | 0.076596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a85b9d3bed035ecb38304b3459691af9ef35002f | 96 | py | Python | venv/lib/python3.8/site-packages/platformdirs/__main__.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/platformdirs/__main__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/platformdirs/__main__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/f2/ea/5f/6dbfe0b664fe9208423b4ddc552749f56d64320375b6c78faebe008484 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a8603a196a09b730a3f72796704e1762578f5e6c | 220 | py | Python | general/utilTF1/__init__.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | 14 | 2020-06-28T15:38:48.000Z | 2021-12-05T01:49:50.000Z | general/utilTF1/__init__.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | null | null | null | general/utilTF1/__init__.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | 3 | 2020-06-28T15:38:49.000Z | 2022-02-13T22:04:34.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from general.utilTF1.utils import *
from general.utilTF1.dataset import *
from general.utilTF1.models import *
| 31.428571 | 38 | 0.85 | 29 | 220 | 5.965517 | 0.413793 | 0.17341 | 0.277457 | 0.277457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015306 | 0.109091 | 220 | 6 | 39 | 36.666667 | 0.867347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.166667 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a89f2d81c2c117e46addf780d6175b5e99edea67 | 52 | py | Python | tests/test_nested_import/thp_io/pyotherside/nested/__init__.py | medxchange/pyotherside | 0414fb9cfa3176324774a0bde9e9cb14fc1d6f89 | [
"0BSD"
] | 291 | 2015-01-02T23:10:23.000Z | 2022-03-28T08:13:10.000Z | tests/test_nested_import/thp_io/pyotherside/nested/__init__.py | medxchange/pyotherside | 0414fb9cfa3176324774a0bde9e9cb14fc1d6f89 | [
"0BSD"
] | 89 | 2015-01-02T08:10:07.000Z | 2021-11-17T17:49:50.000Z | tests/test_nested_import/thp_io/pyotherside/nested/__init__.py | medxchange/pyotherside | 0414fb9cfa3176324774a0bde9e9cb14fc1d6f89 | [
"0BSD"
] | 46 | 2015-02-10T16:21:29.000Z | 2021-12-27T13:55:05.000Z | def info():
return 'This is the nested package'
| 17.333333 | 39 | 0.673077 | 8 | 52 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 52 | 2 | 40 | 26 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
765187babe6e89b313e198dab2d6ad4e0c12fff1 | 25 | py | Python | tracking perfomance UI/sample.py | marksikaundi/Web-Development-Students-Projects | 43a85c1cda407ae4903ef2e17d027ca3ac4aee8a | [
"MIT"
] | 2 | 2022-01-20T00:31:29.000Z | 2022-01-24T15:46:33.000Z | tracking perfomance UI/sample.py | marksikaundi/Web-Development-Students-Projects | 43a85c1cda407ae4903ef2e17d027ca3ac4aee8a | [
"MIT"
] | null | null | null | tracking perfomance UI/sample.py | marksikaundi/Web-Development-Students-Projects | 43a85c1cda407ae4903ef2e17d027ca3ac4aee8a | [
"MIT"
] | null | null | null | print("this just a demo") | 25 | 25 | 0.72 | 5 | 25 | 3.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
766a1629e318674a28faad84701ab1b261623815 | 27,556 | py | Python | test_backtest_pkg/test_portfolio.py | andyhu4023/backtest_pkg | 00f57244307a740245c6419c8a52cb07e80f171a | [
"MIT"
] | 3 | 2020-04-22T09:27:33.000Z | 2021-02-04T14:55:13.000Z | test_backtest_pkg/test_portfolio.py | andyhu4023/backtest_pkg | 00f57244307a740245c6419c8a52cb07e80f171a | [
"MIT"
] | 1 | 2021-02-04T14:54:06.000Z | 2021-02-04T14:54:06.000Z | test_backtest_pkg/test_portfolio.py | andyhu4023/backtest_pkg | 00f57244307a740245c6419c8a52cb07e80f171a | [
"MIT"
] | 1 | 2021-12-18T10:03:13.000Z | 2021-12-18T10:03:13.000Z | import unittest
from pandas.util.testing import assert_frame_equal, assert_series_equal
import backtest_pkg as bt
import pandas as pd
import numpy as np
from math import sqrt, log, sin, pi
from IPython import display
def cal_std(data):
if len(data)<=1:
return np.nan
data_mean = sum(data)/len(data)
data_var = sum((i-data_mean)**2 for i in data)/(len(data)-1)
return sqrt(data_var)
def cal_mean(data):
return sum(data)/len(data)
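# Added sanity-check examples for the helpers above (inputs assumed):
# cal_mean([1, 2, 3]) -> 2.0
# cal_std([1, 2, 3]) -> 1.0   (sample standard deviation, n - 1 in the denominator)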
class TestPortfolio(unittest.TestCase):
def setUp(self):
n = 10 # length of the period
price_dict = {
'Up trend': list(range(1, n+1)),
'Down trend': list(range(n, 0, -1)),
'Convex': list(1+(n/2)**2+ i*(i-n+1) for i in range(n)),
'Concave': list(1+i*(n-1-i) for i in range(n)),
'Sin': list(1+n*(1+sin(i/(n-1)*2*pi)) for i in range(n)),
}
adj_price_df = pd.DataFrame(price_dict, index=pd.date_range('2020-01-01', periods=n,freq='D'))
self.ticker = adj_price_df.columns
self.index = adj_price_df.index
self.price = adj_price_df
''' Price in values:
'Up trend': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'Down trend': [10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
'Convex': [26, 18, 12, 8, 6, 6, 8, 12, 18, 26],
'Concave': [1, 9, 15, 19, 21, 21, 19, 15, 9, 1],
'Sin': [11, 17.42, 20.84, 19.66, 14.42, 7.57, 2.33, 1.15, 4.57, 11]
'''
self.trading_status = pd.DataFrame(True, index=self.index, columns=self.ticker)
self.trading_status.iloc[:3, 0]=False
self.trading_status.iloc[3:6, 1]=False
self.trading_status.iloc[6:, 2]=False
# Equal weight portfolio:
self.weight = pd.DataFrame(1, index=self.index[[0, 5]], columns=self.ticker)
# Equal weight asset values:
self.asset_values_no_rebal = self.price.copy()
self.asset_values_no_rebal = self.asset_values_no_rebal.apply(lambda ts: ts/ts[0], axis=0)
self.asset_values_1_rebal = self.asset_values_no_rebal.copy()
rebal_value = self.asset_values_1_rebal.iloc[5, :].mean()
self.asset_values_1_rebal.iloc[5:,:] = self.asset_values_1_rebal.iloc[5:,:].apply(lambda ts: ts/ts[0]*rebal_value, axis=0)
# With trading status:
self.asset_values_no_rebal_tst = self.asset_values_no_rebal.copy()
self.asset_values_no_rebal_tst.iloc[:, 0] = 0
asset_values = self.asset_values_no_rebal.copy()
asset_values.iloc[:5, 0] = 0
rebal_value = asset_values.iloc[5, [2, 3, 4]].sum()/4
apply_col = [0, 2, 3, 4]
asset_values.iloc[5:,apply_col] = asset_values.iloc[5:,apply_col].apply(lambda ts: ts/ts[0]*rebal_value, axis=0)
self.asset_values_1_rebal_tst = asset_values
# Price weight portfolio:
self.share = pd.DataFrame(1, index=self.index[[0, 5]], columns=self.ticker)
# Price weight asset values:
self.asset_values_no_rebal_share = self.price.copy()
self.asset_values_1_rebal_share = self.price.copy()
# With trading status:
self.asset_values_no_rebal_share_tst = self.asset_values_no_rebal_share.copy()
self.asset_values_no_rebal_share_tst.iloc[:, 0] = 0
asset_values = self.asset_values_no_rebal_share.copy()
asset_values.iloc[:5, 0] = 0
adjust_factor = asset_values.iloc[5, [2, 3, 4]].sum()/asset_values.iloc[5, [0, 2, 3, 4]].sum()
apply_col = [0, 2, 3, 4]
asset_values.iloc[5:,apply_col] = asset_values.iloc[5:,apply_col].apply(lambda ts: ts*adjust_factor, axis=0)
self.asset_values_1_rebal_share_tst = asset_values
####################### Portfolio Construction ########################
def test_portfolio_set_price(self):
# Normal setting:
port = bt.portfolio(weight=self.weight)
port.set_price(price=self.price)
expect_status = pd.DataFrame(True, index=self.index, columns=self.ticker)
assert_frame_equal(port.price, self.price)
assert_frame_equal(port.trading_status, expect_status)
# Setting at initiation:
port = bt.portfolio(weight=self.weight, price=self.price)
expect_status = pd.DataFrame(True, index=self.index, columns=self.ticker)
assert_frame_equal(port.price, self.price)
assert_frame_equal(port.trading_status, expect_status)
# Price and trading status cannot be set directly
with self.assertRaises(AttributeError):
port.price = self.price
with self.assertRaises(AttributeError):
port.trading_status = self.trading_status
# Try masking out untradable prices:
price = self.price.where(self.trading_status, other=np.nan)
port = bt.portfolio(weight=self.weight)
port.set_price(price=price)
assert_frame_equal(port.price, price)
assert_frame_equal(port.trading_status, self.trading_status)
def test_portfolio_set_price_and_trading_status(self):
# Normal setting price and trading status:
port = bt.portfolio(weight=self.weight)
port.set_price(price=self.price, trading_status=self.trading_status)
assert_frame_equal(port.price, self.price)
assert_frame_equal(port.trading_status, self.trading_status)
# Setting at initiation:
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status)
assert_frame_equal(port.price, self.price)
assert_frame_equal(port.trading_status, self.trading_status)
# Independent NA prices and trading status:
price = self.price.copy()
price.iloc[:5, 4] = np.nan
expect_status = self.trading_status.copy()
expect_status.iloc[:5, 4]=False
port = bt.portfolio(weight=self.weight)
port.set_price(price=price, trading_status=self.trading_status)
assert_frame_equal(port.price, price)
assert_frame_equal(port.trading_status, expect_status)
# Out range trading status:
out_range_status = self.trading_status.copy()
out_range_status['Extra Ticker'] = True
out_range_status.loc[pd.to_datetime('2020-01-20'), :]=False
expect_status = self.trading_status
port = bt.portfolio(weight=self.weight)
port.set_price(price=self.price, trading_status=out_range_status)
assert_frame_equal(port.trading_status, expect_status)
def test_portfolio_weight(self):
# Noraml equal weigt:
port = bt.portfolio(weight=self.weight, price=self.price)
expect = pd.DataFrame(0.2, index=self.index[[0, 5]], columns=self.ticker)
assert_frame_equal(port.weight, expect)
# Weights of row sum==zeros:
weight = pd.DataFrame(0.2, index=self.index[[0, 5]], columns=self.ticker)
weight.iloc[1, :] = 0
port = bt.portfolio(weight=weight, price=self.price)
expect = weight.iloc[[0], :]
assert_frame_equal(port.weight, expect)
weight = pd.DataFrame(0.2, index=self.index[[0, 5]], columns=self.ticker)
weight.iloc[0, :] = 0
port = bt.portfolio(weight=weight, price=self.price)
expect = weight.iloc[[1], :]
assert_frame_equal(port.weight, expect)
# Out range weight:
out_range_weight=self.weight.copy()
out_range_weight['Extra Ticker']=1
out_range_weight.loc[pd.to_datetime('2020-01-20'), :]=1
port = bt.portfolio(weight=out_range_weight, price=self.price)
expect = pd.DataFrame(0.2, index=self.index[[0, 5]], columns=self.ticker)
assert_frame_equal(port.weight, expect)
# Weight cannot be set after initiation:
with self.assertRaises(AttributeError):
port.weight = self.weight
def test_portfolio_weight_with_trading_status(self):
# Weights on untradables:
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status)
expect = pd.DataFrame(0.25, index=self.index[[0, 5]], columns=self.ticker)
expect.iloc[0, 0]=0
expect.iloc[1, 1]=0
assert_frame_equal(port.weight, expect)
# Weights sum 0 from untradables:
weight = pd.DataFrame(0.25, index=self.index[[0, 5]], columns=self.ticker)
weight.iloc[0, 0]=0
weight.iloc[1, :] = [0, 1, 0, 0, 0]
port = bt.portfolio(weight=weight, price=self.price, trading_status=self.trading_status)
expect = weight.iloc[[0], :]
assert_frame_equal(port.weight, expect)
def test_portfolio_from_share(self):
# No trading status:
price_1 = [1, 10, 26, 1, 11]
price_2 = [6, 5, 6, 21, 1+10*(1+sin(5/9*2*pi))]
weight_value=[[i/sum(price_1) for i in price_1], [j/sum(price_2) for j in price_2]]
expect = pd.DataFrame(weight_value, index=self.index[[0, 5]], columns=self.ticker)
port = bt.portfolio(share=self.share, price=self.price)
assert_frame_equal(port.weight,expect)
# With trading status:
price_1 = [0, 10, 26, 1, 11]
price_2 = [6, 0, 6, 21, 1+10*(1+sin(5/9*2*pi))]
weight_value=[[i/sum(price_1) for i in price_1], [j/sum(price_2) for j in price_2]]
port = bt.portfolio(share=self.share, price=self.price, trading_status=self.trading_status)
expect = pd.DataFrame(weight_value, index=self.index[[0, 5]], columns=self.ticker)
assert_frame_equal(port.weight,expect)
def test_portfolio_end_date(self):
        # Default end date: last date of price
port = bt.portfolio(weight=self.weight, price=self.price)
self.assertEqual(port.end_date, self.index[-1])
# Set end date at initiation:
end_date = pd.to_datetime('2020-01-08')
port = bt.portfolio(weight=self.weight, price=self.price, end_date=end_date)
self.assertEqual(port.end_date, end_date)
# Change end date after initiation:
port = bt.portfolio(weight=self.weight, price=self.price)
self.assertEqual(port.end_date, self.index[-1])
port.end_date = end_date
self.assertEqual(port.end_date, end_date)
############################ Backtest calculations ############################
def test_portfolio_daily_ret(self):
price = pd.DataFrame(index=self.index)
price['Normal'] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # Normal case
price['Suspension'] = [1, 2, 3, np.nan, np.nan, np.nan, 4, 5, 6, 7] # Temporary suspension
price['Delisting'] = [1, 2, 3, 4, 5] + [np.nan]*5 # Delisting
price['Late'] = [np.nan]*5 + [1, 2, 3, 4, 5]# Late listing:
expect = pd.DataFrame(index=self.index)
expect['Normal'] = [np.nan]+[log((i+1)/i) for i in range(1,10)]
expect['Suspension'] = [np.nan]+[log(2/1), log(3/2)] + [0.]*3 + [log(4/3), log(5/4), log(6/5), log(7/6)]
expect['Delisting'] =[np.nan]+[log((i+1)/i) for i in range(1,5)] + [0.]*5
expect['Late'] = [np.nan]*6+[log((i+1)/i) for i in range(1,5)]
port = bt.portfolio(weight=self.weight, price=price)
assert_frame_equal(port.daily_ret, expect)
def test_portfolio_drift_weight(self):
# NO rebalance:
port = bt.portfolio(weight=self.weight.iloc[[0], :], price=self.price)
expect = self.asset_values_no_rebal.copy()
expect = expect.apply(lambda ts: ts/ts.sum(), axis=1)
assert_frame_equal(port.ex_weight, expect)
# 1 rebalance:
port = bt.portfolio(weight=self.weight, price=self.price)
expect = self.asset_values_1_rebal.copy()
expect = expect.apply(lambda ts: ts/ts.sum(), axis=1)
assert_frame_equal(port.ex_weight, expect)
def test_portfolio_performance_return(self):
# NO rebalance:
port = bt.portfolio(weight=self.weight.iloc[[0], :], price=self.price)
port_value = self.asset_values_no_rebal.sum(axis=1).values
expect_daily_ret = [0]+list(log(port_value[i+1]/port_value[i]) for i in range(len(self.index)-1))
expect_daily_ret = pd.Series(expect_daily_ret, index=self.index)
expect_total_ret = [0]+list(log(port_value[i+1]/port_value[0]) for i in range(len(self.index)-1))
expect_total_ret = pd.Series(expect_total_ret, index=self.index)
assert_series_equal(port.port_daily_ret, expect_daily_ret)
assert_series_equal(port.port_total_ret, expect_total_ret)
port_value_ts = self.asset_values_no_rebal.sum(axis=1)
assert_series_equal(port.port_total_value, port_value_ts/port_value_ts[0])
# 1 rebalance:
port = bt.portfolio(weight=self.weight, price=self.price)
port_value = self.asset_values_1_rebal.sum(axis=1).values
expect_daily_ret = [0]+list(log(port_value[i+1]/port_value[i]) for i in range(len(self.index)-1))
expect_daily_ret = pd.Series(expect_daily_ret, index=self.index)
expect_total_ret = [0]+list(log(port_value[i+1]/port_value[0]) for i in range(len(self.index)-1))
expect_total_ret = pd.Series(expect_total_ret, index=self.index)
assert_series_equal(port.port_daily_ret, expect_daily_ret)
assert_series_equal(port.port_total_ret, expect_total_ret)
port_value_ts = self.asset_values_1_rebal.sum(axis=1)
assert_series_equal(port.port_total_value, port_value_ts/port_value_ts[0])
def test_portfolio_backtest(self):
# Default setting, name = 'Portfolio'
port = bt.portfolio(weight=self.weight, price=self.price)
port_value_ts = self.asset_values_1_rebal.sum(axis=1)
port_value_ts = port_value_ts/port_value_ts[0]
assert_frame_equal(port.backtest(), port_value_ts.to_frame(name='Portfolio'))
# Setting portfolio name to 'Equal Weight'
name = 'Equal Weight'
port = bt.portfolio(weight=self.weight, price=self.price, name=name)
assert_frame_equal(port.backtest(), port_value_ts.to_frame(name=name))
def test_portfolio_drift_weight_with_trading_status(self):
# NO rebalance:
port = bt.portfolio(weight=self.weight.iloc[[0], :], price=self.price, trading_status=self.trading_status)
expect = self.asset_values_no_rebal_tst.apply(lambda ts: ts/ts.sum(), axis=1)
assert_frame_equal(port.ex_weight, expect)
# 1 rebalance:
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status)
expect = self.asset_values_1_rebal_tst.apply(lambda ts: ts/ts.sum(), axis=1)
assert_frame_equal(port.ex_weight, expect)
def test_portfolio_performance_return_with_trading_status(self):
# NO rebalance:
port = bt.portfolio(weight=self.weight.iloc[[0], :], price=self.price, trading_status=self.trading_status)
port_value = self.asset_values_no_rebal_tst.sum(axis=1).values
expect_daily_ret = [0]+list(log(port_value[i+1]/port_value[i]) for i in range(len(self.index)-1))
expect_daily_ret = pd.Series(expect_daily_ret, index=self.index)
expect_total_ret = [0]+list(log(port_value[i+1]/port_value[0]) for i in range(len(self.index)-1))
expect_total_ret = pd.Series(expect_total_ret, index=self.index)
assert_series_equal(port.port_daily_ret, expect_daily_ret)
assert_series_equal(port.port_total_ret, expect_total_ret)
port_value_ts = self.asset_values_no_rebal_tst.sum(axis=1)
assert_series_equal(port.port_total_value, port_value_ts/port_value_ts[0])
# 1 rebalance:
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status)
port_value = self.asset_values_1_rebal_tst.sum(axis=1).values
expect_daily_ret = [0]+list(log(port_value[i+1]/port_value[i]) for i in range(len(self.index)-1))
expect_daily_ret = pd.Series(expect_daily_ret, index=self.index)
expect_total_ret = [0]+list(log(port_value[i+1]/port_value[0]) for i in range(len(self.index)-1))
expect_total_ret = pd.Series(expect_total_ret, index=self.index)
assert_series_equal(port.port_daily_ret, expect_daily_ret)
assert_series_equal(port.port_total_ret, expect_total_ret)
port_value_ts = self.asset_values_1_rebal_tst.sum(axis=1)
assert_series_equal(port.port_total_value, port_value_ts/port_value_ts[0])
def test_portfolio_backtest_with_trading_status(self):
# Default setting, name = 'Portfolio'
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status)
port_value_ts = self.asset_values_1_rebal_tst.sum(axis=1)
port_value_ts = port_value_ts/port_value_ts[0]
assert_frame_equal(port.backtest(), port_value_ts.to_frame(name='Portfolio'))
# Setting portfolio name to 'Equal Weight'
name = 'Equal Weight'
port = bt.portfolio(weight=self.weight, price=self.price, trading_status=self.trading_status, name=name)
assert_frame_equal(port.backtest(), port_value_ts.to_frame(name=name))
##################### Portfolio with Benchmark ##############################
def test_portfolio_with_benchmark(self):
price_weight = self.price.iloc[[0, 5], :].copy()
price_weight = price_weight.apply(lambda ts: ts/ts.sum(), axis=1)
equal_weight = pd.DataFrame(0.2, index=self.index[[0, 5]], columns=self.ticker)
# Add benchmark at initiation:
price_weight_port = bt.portfolio(share=self.share, price=self.price, name='Price Weight', benchmark=self.weight, benchmark_name='Equal Weight')
assert_frame_equal(price_weight_port.weight, price_weight)
assert_frame_equal(price_weight_port.benchmark.weight, equal_weight)
self.assertEqual(price_weight_port.name, 'Price Weight')
self.assertEqual(price_weight_port.benchmark.name, 'Equal Weight')
# Set benchmark after initiation:
price_weight_port = bt.portfolio(share=self.share, price=self.price, name='Price Weight')
equal_weight_port = bt.portfolio(weight=self.weight, price=self.price, name='Equal Weight')
price_weight_port.set_benchmark(equal_weight_port)
assert_frame_equal(price_weight_port.weight, price_weight)
assert_frame_equal(price_weight_port.benchmark.weight, equal_weight)
self.assertEqual(price_weight_port.name, 'Price Weight')
self.assertEqual(price_weight_port.benchmark.name, 'Equal Weight')
def test_portfolio_backtest_with_benchmark(self):
# No trading status:
price_weight_port = bt.portfolio(share=self.share, price=self.price, name='Price Weight', benchmark=self.weight, benchmark_name='Equal Weight')
result = pd.DataFrame()
port_value_ts = self.asset_values_1_rebal_share.sum(axis=1)
bm_value_ts = self.asset_values_1_rebal.sum(axis=1)
result['Price Weight']= port_value_ts/port_value_ts[0]
result['Equal Weight'] = bm_value_ts/bm_value_ts[0]
result['Difference'] = result['Price Weight'] - result['Equal Weight']
assert_frame_equal(price_weight_port.backtest(), result)
# With trading status:
price_weight_port = bt.portfolio(share=self.share, price=self.price, trading_status=self.trading_status, name='Price Weight', benchmark=self.weight, benchmark_name='Equal Weight')
result = pd.DataFrame()
port_value_ts = self.asset_values_1_rebal_share_tst.sum(axis=1)
bm_value_ts = self.asset_values_1_rebal_tst.sum(axis=1)
result['Price Weight']= port_value_ts/port_value_ts[0]
result['Equal Weight'] = bm_value_ts/bm_value_ts[0]
result['Difference'] = result['Price Weight'] - result['Equal Weight']
assert_frame_equal(price_weight_port.backtest(), result)
#################### Portfolio Analytic Tools ####################
def test_portfolio_performance_metrics(self):
# No trading status:
price_weight_port = bt.portfolio(share=self.share, price=self.price, name='Price Weight')
daily_ret = price_weight_port.port_daily_ret
performance_df = pd.DataFrame(index=['Price Weight'])
performance_df['Return'] = daily_ret.sum()
performance_df['Volatility'] = daily_ret.std()*sqrt(len(daily_ret))
performance_df['Sharpe'] = performance_df['Return']/performance_df['Volatility']
hist_value = np.exp(daily_ret.cumsum())
previous_peak = hist_value.cummax()
performance_df['MaxDD'] = max(1 - hist_value/previous_peak)
assert_series_equal(price_weight_port.period_return, performance_df['Return'])
assert_series_equal(price_weight_port.period_volatility, performance_df['Volatility'])
assert_series_equal(price_weight_port.period_sharpe_ratio, performance_df['Sharpe'])
assert_series_equal(price_weight_port.period_maximum_drawdown, performance_df['MaxDD'])
        # With trading status:
price_weight_port = bt.portfolio(share=self.share, price=self.price, name='Price Weight', trading_status=self.trading_status)
daily_ret = price_weight_port.port_daily_ret
performance_df = pd.DataFrame(index=['Price Weight'])
performance_df['Return'] = daily_ret.sum()
performance_df['Volatility'] = daily_ret.std()*sqrt(len(daily_ret))
performance_df['Sharpe'] = performance_df['Return']/performance_df['Volatility']
hist_value = np.exp(daily_ret.cumsum())
previous_peak = hist_value.cummax()
performance_df['MaxDD'] = max(1 - hist_value/previous_peak)
assert_series_equal(price_weight_port.period_return, performance_df['Return'])
assert_series_equal(price_weight_port.period_volatility, performance_df['Volatility'])
assert_series_equal(price_weight_port.period_sharpe_ratio, performance_df['Sharpe'])
assert_series_equal(price_weight_port.period_maximum_drawdown, performance_df['MaxDD'])
def test_portfolio_performance_metrics_with_benchmark(self):
# No trading status:
price_weight_port = bt.portfolio(
share=self.share,
name='Price Weight',
benchmark=self.weight,
benchmark_name='Equal Weight',
price=self.price
)
performance_df = pd.DataFrame()
for i in ['Price Weight', 'Equal Weight', 'Active']:
if i == 'Price Weight':
daily_ret = price_weight_port.port_daily_ret
elif i == 'Equal Weight':
daily_ret = price_weight_port.benchmark.port_daily_ret
elif i == 'Active':
daily_ret = price_weight_port.port_daily_ret - price_weight_port.benchmark.port_daily_ret
performance_df.loc[i, 'Return'] = daily_ret.sum()
performance_df.loc[i, 'Volatility'] = daily_ret.std()*sqrt(len(daily_ret))
performance_df.loc[i, 'Sharpe'] = performance_df.loc[i, 'Return']/performance_df.loc[i, 'Volatility']
if i != 'Active':
hist_value = np.exp(daily_ret.cumsum())
else:
hist_value = price_weight_port.port_total_value - price_weight_port.benchmark.port_total_value
previous_peak = hist_value.cummax()
performance_df.loc[i, 'MaxDD'] = max(1 - hist_value/previous_peak)
assert_series_equal(price_weight_port.period_return, performance_df['Return'])
assert_series_equal(price_weight_port.period_volatility, performance_df['Volatility'])
assert_series_equal(price_weight_port.period_sharpe_ratio, performance_df['Sharpe'])
assert_series_equal(price_weight_port.period_maximum_drawdown, performance_df['MaxDD'])
        # With trading status:
price_weight_port = bt.portfolio(
share=self.share,
name='Price Weight',
benchmark=self.weight,
benchmark_name='Equal Weight',
price=self.price,
trading_status=self.trading_status
)
performance_df = pd.DataFrame()
for i in ['Price Weight', 'Equal Weight', 'Active']:
if i == 'Price Weight':
daily_ret = price_weight_port.port_daily_ret
elif i == 'Equal Weight':
daily_ret = price_weight_port.benchmark.port_daily_ret
elif i == 'Active':
daily_ret = price_weight_port.port_daily_ret - price_weight_port.benchmark.port_daily_ret
performance_df.loc[i, 'Return'] = daily_ret.sum()
performance_df.loc[i, 'Volatility'] = daily_ret.std()*sqrt(len(daily_ret))
performance_df.loc[i, 'Sharpe'] = performance_df.loc[i, 'Return']/performance_df.loc[i, 'Volatility']
if i != 'Active':
hist_value = np.exp(daily_ret.cumsum())
else:
hist_value = price_weight_port.port_total_value - price_weight_port.benchmark.port_total_value
previous_peak = hist_value.cummax()
performance_df.loc[i, 'MaxDD'] = max(1 - hist_value/previous_peak)
assert_series_equal(price_weight_port.period_return, performance_df['Return'])
assert_series_equal(price_weight_port.period_volatility, performance_df['Volatility'])
assert_series_equal(price_weight_port.period_sharpe_ratio, performance_df['Sharpe'])
assert_series_equal(price_weight_port.period_maximum_drawdown, performance_df['MaxDD'])
def test_portfolio_performance_plot(self):
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Portfolio without benchamrk:
price_weight_port = bt.portfolio(
share=self.share,
name='Price Weight',
price=self.price,
)
price_weight_port.backtest()
performance_plot = price_weight_port.performance_plot() # figure object
dates = performance_plot.axes[0].lines[0].get_xdata()
total_vales = performance_plot.axes[0].lines[0].get_ydata()
plot_ts = pd.Series(total_vales, index=dates)
expect_ts = self.asset_values_1_rebal_share.sum(axis=1)
expect_ts = expect_ts/expect_ts[0]
assert_series_equal(plot_ts, expect_ts)
# Portfolio with benchmark:
price_weight_port = bt.portfolio(
share=self.share,
name='Price Weight',
benchmark=self.weight,
benchmark_name='Equal Weight',
price=self.price
)
price_weight_port.backtest()
performance_plot = price_weight_port.performance_plot() # figure object
# Plot 1: portfolio values and benchmark values
dates = performance_plot.axes[0].lines[0].get_xdata()
port_vales = performance_plot.axes[0].lines[0].get_ydata()
bm_vales = performance_plot.axes[0].lines[1].get_ydata()
plot_port_ts = pd.Series(port_vales, index=dates)
plot_bm_ts = pd.Series(bm_vales, index=dates)
expect_port_ts = self.asset_values_1_rebal_share.sum(axis=1)
expect_port_ts = expect_port_ts/expect_port_ts[0]
expect_bm_ts = self.asset_values_1_rebal.sum(axis=1)
expect_bm_ts = expect_bm_ts/expect_bm_ts[0]
assert_series_equal(plot_port_ts, expect_port_ts)
assert_series_equal(plot_bm_ts, expect_bm_ts)
# Plot 2: Value differences
diff_values = performance_plot.axes[1].lines[0].get_ydata()
plot_diff_ts = pd.Series(diff_values, index = dates)
expect_diff_ts = expect_port_ts - expect_bm_ts
assert_series_equal(plot_diff_ts, expect_diff_ts)
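

# Added convenience entry point so the module can be run directly
# (standard unittest idiom; an addition, not part of the original tests):
if __name__ == '__main__':
    unittest.main()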
| 51.992453 | 187 | 0.667368 | 3,886 | 27,556 | 4.463201 | 0.056613 | 0.053275 | 0.049297 | 0.036324 | 0.844442 | 0.798086 | 0.771737 | 0.743427 | 0.724862 | 0.704566 | 0 | 0.022095 | 0.206706 | 27,556 | 529 | 188 | 52.090737 | 0.771317 | 0.054906 | 0 | 0.569652 | 0 | 0 | 0.039394 | 0 | 0 | 0 | 0 | 0 | 0.199005 | 1 | 0.052239 | false | 0 | 0.019901 | 0.002488 | 0.08209 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
76b3592450d345441aef9d9ab6fd2ab17db40ad2 | 186 | py | Python | TensorMap/tensor/Tensor.py | social-learning/pylearn | dfa751fe2db2859c83e8616f25ae5ba28293d6c2 | [
"MIT"
] | 2 | 2020-10-19T04:23:29.000Z | 2021-05-09T18:19:38.000Z | TensorMap/tensor/Tensor.py | social-learning/pylearn | dfa751fe2db2859c83e8616f25ae5ba28293d6c2 | [
"MIT"
] | null | null | null | TensorMap/tensor/Tensor.py | social-learning/pylearn | dfa751fe2db2859c83e8616f25ae5ba28293d6c2 | [
"MIT"
] | null | null | null | import numpy as np
def my_transpose(m:list) -> list:
return [[m[j][i] for j in range(len(m))] for i in range(len(m[0]))]
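# Added example (input assumed for illustration):
# my_transpose([[1, 2, 3], [4, 5, 6]]) -> [[1, 4], [2, 5], [3, 6]]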
def numpy_transpose(m: list) -> list:
    # transpose via numpy, then convert back to a plain list to match the annotation
    return np.array(m).T.tolist() | 26.571429 | 71 | 0.645161 | 37 | 186 | 3.189189 | 0.486486 | 0.169492 | 0.237288 | 0.305085 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.177419 | 186 | 7 | 72 | 26.571429 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6
4f46fa0cb6c3c8d07212ecacc9a4f621c1542f00 | 153 | py | Python | dynaconf/contrib/__init__.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | dynaconf/contrib/__init__.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | dynaconf/contrib/__init__.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | from dynaconf.contrib.flask_dynaconf import FlaskDynaconf, DynaconfConfig # noqa
from dynaconf.contrib.django_dynaconf_v2 import DjangoDynaconf # noqa
| 51 | 81 | 0.856209 | 18 | 153 | 7.111111 | 0.611111 | 0.1875 | 0.296875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.098039 | 153 | 2 | 82 | 76.5 | 0.92029 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f662085e537344b4b15c549f6e7be68776e6057 | 37 | py | Python | mogwai/optim/__init__.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 24 | 2020-11-20T19:10:23.000Z | 2022-03-13T13:26:56.000Z | mogwai/optim/__init__.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 10 | 2020-10-21T21:42:14.000Z | 2020-11-18T07:57:30.000Z | mogwai/optim/__init__.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 7 | 2020-12-27T00:44:18.000Z | 2021-11-07T05:16:49.000Z | from .gremlin_adam import GremlinAdam | 37 | 37 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f8a959c343f4a65cbdfd7d298ab0e7346aadc6f | 11,799 | py | Python | Tools/LyTestTools/ly_test_tools/o3de/ini_configuration_util.py | cypherdotXd/o3de | bb90c4ddfe2d495e9c00ebf1e2650c6d603a5676 | [
"Apache-2.0",
"MIT"
] | 11 | 2021-07-08T09:58:26.000Z | 2022-03-17T17:59:26.000Z | Tools/LyTestTools/ly_test_tools/o3de/ini_configuration_util.py | RoddieKieley/o3de | e804fd2a4241b039a42d9fa54eaae17dc94a7a92 | [
"Apache-2.0",
"MIT"
] | 29 | 2021-07-06T19:33:52.000Z | 2022-03-22T10:27:49.000Z | Tools/LyTestTools/ly_test_tools/o3de/ini_configuration_util.py | RoddieKieley/o3de | e804fd2a4241b039a42d9fa54eaae17dc94a7a92 | [
"Apache-2.0",
"MIT"
] | 4 | 2021-07-06T19:24:43.000Z | 2022-03-31T12:42:27.000Z | """
Copyright (c) Contributors to the Open 3D Engine Project.
For complete copyright and license terms please see the LICENSE at the root of this distribution.
SPDX-License-Identifier: Apache-2.0 OR MIT
Small library of functions to manipulate ini configuration files.
Please see the INI specification for in-depth explanations of parameter terms.
"""
import logging
from configparser import ConfigParser
logger = logging.getLogger(__name__)
def check_section_exists(file_location, section):
"""
    Searches an INI Configuration file for the existence of a section
:param file_location: The file to get a key value from
:param section: The section to find the key value
:return: The boolean value of whether or not the section exists
"""
config = ConfigParser()
config.read(file_location)
return config.has_section(section)
def check_key_exists(file_location, section, key):
"""
    Searches an INI Configuration file for the existence of a section & key
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
:return: The boolean value of whether or not the key exists
"""
config = ConfigParser()
config.read(file_location)
return config.has_option(section, key)
def get_string_value(file_location, section, key):
"""
Searches an INI Configuration file for a section & key and returns it as a string.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
:return: The string value retained in the key
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
return config.get(section, key)
else:
assert False, "Was unable to find the key. Please verify the existance of the section '{0}' and key '{1}"\
.format(section, key)
def get_boolean_value(file_location, section, key):
"""
Searches an INI Configuration file for a section & key and returns it as a string.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
:return: The boolean value retained in the key
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
return config.getboolean(section, key)
else:
assert False, "Was unable to find the key. Please verify the existance of the section '{0}' and key '{1}" \
.format(section, key)
def get_integral_value(file_location, section, key):
"""
    Searches an INI Configuration file for a section & key and returns it as an integer.
    :param file_location: The file to get a key value from
    :param section: The section to find the key value
    :param key: The key that can contain a value to retrieve
    :return: The integer value retained in the key
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
return config.getint(section, key)
else:
assert False, "Was unable to find the key. Please verify the existance of the section '{0}' and key '{1}" \
.format(section, key)
def get_float_value(file_location, section, key):
"""
    Searches an INI Configuration file for a section & key and returns it as a float.
    :param file_location: The file to get a key value from
    :param section: The section to find the key value
    :param key: The key that can contain a value to retrieve
    :return: The float value retained in the key
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
return config.getfloat(section, key)
else:
assert False, "Was unable to find the key. Please verify the existance of the section '{0}' and key '{1}" \
.format(section, key)
def check_string_value(file_location, section, key, expected):
"""
Compare the string contained in a key against expected.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
    :param expected: The expected value to compare to
:assert: If the values do not match
:return: None
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
actual = get_string_value(file_location, section, key)
        assert actual == expected, "The value of the '{0}' key in the '{1}' section was '{2}' " \
                                   "and did not match the expected value of {3}".format(key, section, actual, expected)
    else:
        assert False, "Was unable to find the key to do a comparison. " \
                      "Please verify the existence of the section '{0}' and key '{1}'" \
            .format(section, key)
def check_boolean_value(file_location, section, key, expected):
"""
Compare the boolean contained in a key against expected.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
    :param expected: The expected value to compare to
:assert: If the values do not match
:return: None
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
actual = get_boolean_value(file_location, section, key)
        assert actual == expected, "The value of the '{0}' key in the '{1}' section was '{2}' " \
                                   "and did not match the expected value of {3}".format(key, section, actual, expected)
    else:
        assert False, "Was unable to find the key to do a comparison. " \
                      "Please verify the existence of the section '{0}' and key '{1}'" \
            .format(section, key)
def check_integral_value(file_location, section, key, expected):
"""
Compare the integral contained in a key against expected.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
    :param expected: The expected value to compare to
:assert: If the values do not match
:return: None
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
actual = get_integral_value(file_location, section, key)
        assert actual == expected, "The value of the '{0}' key in the '{1}' section was '{2}' " \
                                   "and did not match the expected value of {3}".format(key, section, actual, expected)
    else:
        assert False, "Was unable to find the key to do a comparison. " \
                      "Please verify the existence of the section '{0}' and key '{1}'" \
            .format(section, key)
def check_float_value(file_location, section, key, expected):
"""
Compare the float contained in a key against expected.
:param file_location: The file to get a key value from
:param section: The section to find the key value
:param key: The key that can contain a value to retrieve
    :param expected: The expected value to compare to
:assert: If the values do not match
:return: None
"""
if check_key_exists(file_location, section, key):
config = ConfigParser()
config.read(file_location)
actual = get_float_value(file_location, section, key)
        assert actual == expected, "The value of the '{0}' key in the '{1}' section was '{2}' " \
                                   "and did not match the expected value of {3}".format(key, section, actual, expected)
    else:
        assert False, "Was unable to find the key to do a comparison. " \
                      "Please verify the existence of the section '{0}' and key '{1}'" \
            .format(section, key)
def add_section(file_location, section):
"""
Add section to the configuration file provided
:param file_location: The file to get a key value from
:param section: The section to add
    :assert: If the section does not exist in the file after attempting to add it
:return: None
"""
config = ConfigParser()
config.read(file_location)
config.add_section(section)
with open(file_location, 'w') as configfile:
config.write(configfile)
assert check_section_exists(file_location, section), \
"Section '{0}' failed to add to the configuration file '{1}'".format(section, file_location)
def add_key(file_location, section, key, value=''):
"""
Add key to the section in the configuration file provided
:param file_location: The file to get a key value from
    :param section: The section to add the key to
    :param key: The key to add
    :param value: The value to set the key to
    :assert: If the key does not exist in the file after attempting to add it
:return: None
"""
logger.debug("Section exists: {0}".format(check_section_exists(file_location, section)))
assert check_section_exists(file_location, section), \
"Cannot add a key to section '{0}' since it does not exist in configuration file '{1}'".format(section,
file_location)
config = ConfigParser()
config.read(file_location)
config.set(section, key, value)
with open(file_location, 'w') as configfile:
config.write(configfile)
assert check_key_exists(file_location, section, key), "Key '{0}' failed to add to the configuration file '{1}'"\
.format(key, file_location)
def delete_section(file_location, section):
"""
Delete section from the configuration file provided
:param file_location: The file to modify
:param section: The section to delete
    :assert: If the section still exists in the file after attempting to delete it
:return: None
"""
config = ConfigParser()
config.read(file_location)
config.remove_section(section)
with open(file_location, 'w') as configfile:
config.write(configfile)
assert not check_section_exists(file_location, section), \
"Section '{0}' still exists in the configuration file '{1}'".format(section, file_location)
def delete_key(file_location, section, key):
"""
Delete key from the section in the configuration file provided
:param file_location: The file to modify
    :param section: The section to delete the key from
    :param key: The key to delete
    :assert: If the key still exists in the file after attempting to delete it
:return: None
"""
logger.debug("Section exists: {0}".format(check_section_exists(file_location, section)))
assert check_section_exists(file_location, section), \
"Cannot add a key to section '{0}' since it does not exist in configuration file '{1}'".format(section, file)
config = ConfigParser()
config.read(file_location)
config.remove_option(section, key)
with open(file_location, 'w') as configfile:
config.write(configfile)
assert not check_key_exists(file_location, section, key), "Key '{0}' still exists in the configuration file '{1}'"\
.format(key, file_location)
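

# --- Illustrative round trip (added sketch; the file name, section, and key
# below are hypothetical) ---
# add_section('settings.ini', 'Graphics')
# add_key('settings.ini', 'Graphics', 'Fullscreen', 'True')
# check_boolean_value('settings.ini', 'Graphics', 'Fullscreen', True)
# delete_key('settings.ini', 'Graphics', 'Fullscreen')
# delete_section('settings.ini', 'Graphics')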
| 35.646526 | 119 | 0.668531 | 1,657 | 11,799 | 4.673506 | 0.076041 | 0.110021 | 0.083419 | 0.071023 | 0.912448 | 0.90328 | 0.889205 | 0.866219 | 0.81108 | 0.778926 | 0 | 0.005546 | 0.251208 | 11,799 | 330 | 120 | 35.754545 | 0.870968 | 0.374778 | 0 | 0.690476 | 0 | 0.031746 | 0.235857 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.111111 | false | 0 | 0.015873 | 0 | 0.174603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8c190620f198fa714ab6ce999ca7a5c053295fb3 | 1,344 | py | Python | lms/lms_app/views.py | Ney-Rocha/101018 | 25f50d99f875be61ba8b2d23ecb22a36c2475817 | [
"Apache-2.0"
] | null | null | null | lms/lms_app/views.py | Ney-Rocha/101018 | 25f50d99f875be61ba8b2d23ecb22a36c2475817 | [
"Apache-2.0"
] | null | null | null | lms/lms_app/views.py | Ney-Rocha/101018 | 25f50d99f875be61ba8b2d23ecb22a36c2475817 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
# Views created below.
def Home(request):
return render(request, 'login.html', {})
def CadastrarUsuario(request):
return render(request, 'cadastrarUsuario.html', {})
def CursoDisciplina(request):
return render(request, 'cursoDisciplina.html', {})
def ConsultarCadastro(request):
return render(request, 'consultarCadastro.html', {})
def Aluno(request):
return render(request, 'aluno.html', {})
def Atividade(request):
return render(request, 'Atividade.html', {})
def AtividadeVinculada(request):
return render(request, 'atividadeVinculada.html', {})
def Coordenador(request):
return render(request, 'coordenador.html', {})
def Curso(request):
return render(request, 'curso.html', {})
def Disciplina(request):
return render(request, 'disciplina.html', {})
def DisciplinaOfertada(request):
return render(request, 'disciplinaOfertada.html', {})
def Entrega(request):
return render(request, 'entrega.html', {})
def Mensagem(request):
return render(request, 'mensagem.html', {})
def Professor(request):
return render(request, 'professor.html', {})
def Usuario(request):
return render(request, 'usuario.html', {})
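# Hypothetical wiring sketch (not part of this file): these function-based
# views would typically be mapped in urls.py, for example:
#   from django.urls import path
#   from lms_app import views
#   urlpatterns = [
#       path('', views.Home, name='home'),
#       path('cadastrar/', views.CadastrarUsuario, name='cadastrar_usuario'),
#   ]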
| 25.846154 | 58 | 0.6875 | 141 | 1,344 | 6.539007 | 0.241135 | 0.211497 | 0.309111 | 0.422993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174851 | 1,344 | 51 | 59 | 26.352941 | 0.83138 | 0.017113 | 0 | 0 | 0 | 0 | 0.185331 | 0.070189 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0 | 0.090909 | 0.454545 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8c2e95fe850990432147248e5b95c1b5b802debf | 200 | py | Python | main/chapter_01_item_01.py | EverLookNeverSee/EfPy | 53d40d0207f2d2dbcae1de2e51172915e22b9155 | [
"MIT"
] | 1 | 2022-02-18T11:55:43.000Z | 2022-02-18T11:55:43.000Z | main/chapter_01_item_01.py | EverLookNeverSee/EffPy | 53d40d0207f2d2dbcae1de2e51172915e22b9155 | [
"MIT"
] | null | null | null | main/chapter_01_item_01.py | EverLookNeverSee/EffPy | 53d40d0207f2d2dbcae1de2e51172915e22b9155 | [
"MIT"
] | null | null | null | """
Chapter_01 - Item 01
Know which version of Python you are using
"""
from sys import version, version_info
print(f"sys.version: {version}")
print(f"sys.version_info: {version_info}")
| 20 | 50 | 0.695 | 30 | 200 | 4.5 | 0.566667 | 0.244444 | 0.133333 | 0.237037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02454 | 0.185 | 200 | 9 | 51 | 22.222222 | 0.803681 | 0.335 | 0 | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
8c34ccff37e777a019b8bd0fa3b4f4de4cbafb40 | 177 | py | Python | topics/zsh/install.py | serramatutu/dotfiles | 880a0450bb275f3e3c0cc9ebfb9f4fea53412e27 | [
"MIT"
] | 1 | 2020-04-12T05:04:36.000Z | 2020-04-12T05:04:36.000Z | topics/zsh/install.py | serramatutu/dotfiles | 880a0450bb275f3e3c0cc9ebfb9f4fea53412e27 | [
"MIT"
] | null | null | null | topics/zsh/install.py | serramatutu/dotfiles | 880a0450bb275f3e3c0cc9ebfb9f4fea53412e27 | [
"MIT"
] | null | null | null | from installer import apt, snap, sh
def requires():
return ['git']
def install():
apt('zsh')
sh('sh scripts/omz-install.sh', 'Oh My Zsh install')
snap('gotop') | 19.666667 | 56 | 0.621469 | 26 | 177 | 4.230769 | 0.653846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20904 | 177 | 9 | 57 | 19.666667 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.297753 | 0.123596 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | true | 0 | 0.142857 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4fc63cf1ac2c7154669073273de1cfcb46fbc0d2 | 32 | py | Python | pytrader/utils/__init__.py | wrieg123/pytrader | 9463f37d8eeb8d3347126f8041d288237eabdeee | [
"MIT"
] | 1 | 2021-03-30T21:01:15.000Z | 2021-03-30T21:01:15.000Z | pytrader/utils/__init__.py | wrieg123/pytrader | 9463f37d8eeb8d3347126f8041d288237eabdeee | [
"MIT"
] | null | null | null | pytrader/utils/__init__.py | wrieg123/pytrader | 9463f37d8eeb8d3347126f8041d288237eabdeee | [
"MIT"
] | 2 | 2021-11-09T23:05:45.000Z | 2022-01-03T11:23:40.000Z | from .svconfig import Connector
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4fe736d1dac4d8d53ae8738733c619351d917cf7 | 11,312 | py | Python | tests/test_entry_points.py | Seabreg/androguard | eef97351c661fecd1de05644943a862850aeb9af | [
"Apache-2.0"
] | null | null | null | tests/test_entry_points.py | Seabreg/androguard | eef97351c661fecd1de05644943a862850aeb9af | [
"Apache-2.0"
] | null | null | null | tests/test_entry_points.py | Seabreg/androguard | eef97351c661fecd1de05644943a862850aeb9af | [
"Apache-2.0"
] | 1 | 2019-12-18T02:26:50.000Z | 2019-12-18T02:26:50.000Z | # -*- coding: utf-8 -*-
# core modules
from pkg_resources import resource_filename
from tempfile import mkstemp, mkdtemp
import os
import shutil
import unittest
# 3rd party modules
from click.testing import CliRunner
# internal modules
from androguard.cli import entry_points
class EntryPointsTest(unittest.TestCase):
def test_entry_point_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point, ['--help'])
assert result.exit_code == 0
def test_axml_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point,
['axml', '--help'])
assert result.exit_code == 0
def test_axml_basic_call_by_positional_argument(self):
axml_path = resource_filename('androguard',
'../examples/axml/AndroidManifest.xml')
_, output_path = mkstemp(prefix='androguard_', suffix='decoded.txt')
runner = CliRunner()
arguments = ['axml', axml_path, '-o', output_path]
result = runner.invoke(entry_points.entry_point,
arguments)
assert result.exit_code == 0, arguments
os.remove(output_path)
def test_axml_basic_call_by_input_argument(self):
axml_path = resource_filename('androguard',
'../examples/axml/AndroidManifest.xml')
_, output_path = mkstemp(prefix='androguard_', suffix='decoded.txt')
runner = CliRunner()
arguments = ['axml', '-i', axml_path, '-o', output_path]
result = runner.invoke(entry_points.entry_point,
arguments)
assert result.exit_code == 0, arguments
os.remove(output_path)
def test_axml_error_call_two_arguments(self):
axml_path = resource_filename('androguard',
'../examples/axml/AndroidManifest.xml')
_, output_path = mkstemp(prefix='androguard_', suffix='decoded.txt')
runner = CliRunner()
arguments = ['axml', '-i', axml_path,
'-o', output_path,
axml_path]
result = runner.invoke(entry_points.entry_point,
arguments)
assert result.exit_code == 1, arguments
os.remove(output_path)
def test_axml_error_call_no_arguments(self):
_, output_path = mkstemp(prefix='androguard_', suffix='decoded.txt')
runner = CliRunner()
arguments = ['axml', '-o', output_path]
result = runner.invoke(entry_points.entry_point,
arguments)
assert result.exit_code == 1, arguments
os.remove(output_path)
def test_arsc_basic_call_positional_apk(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
    def test_arsc_error_filetype_dex(self):
runner = CliRunner()
dex_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'classes.dex')
arguments = ['arsc', dex_path]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
def test_arsc_basic_call_keyword(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', '-i', apk_path]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_arsc_basic_call_list_packages(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--list-packages']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_arsc_basic_call_list_locales(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--list-locales']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_arsc_basic_call_list_types(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--list-types']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_arsc_error_two_arguments(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '-i', apk_path]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
def test_arsc_basic_id_resolve(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--id', '7F030000']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_arsc_error_id_resolve(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--id', 'sdlkfjsdlkf']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
def test_arsc_error_id_not_resolve(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['arsc', apk_path, '--id', '12345678']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
def test_arsc_error_no_arguments(self):
runner = CliRunner()
arguments = ['arsc']
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
def test_arsc_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point, ['arsc', '--help'])
assert result.exit_code == 0
def test_cg_basic(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
arguments = ['--debug', 'cg', apk_path]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0
def test_cg_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point, ['cg', '--help'])
assert result.exit_code == 0
def test_decompile_basic_positional(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
output_dir = mkdtemp(prefix='androguard_test_')
result = runner.invoke(entry_points.entry_point,
['decompile', apk_path, '-o', output_dir])
assert result.exit_code == 0
# Cleanup:
if os.path.exists(output_dir) and os.path.isdir(output_dir):
shutil.rmtree(output_dir)
def test_decompile_basic_input(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
output_dir = mkdtemp(prefix='androguard_test_')
result = runner.invoke(entry_points.entry_point,
['decompile', '-i', apk_path, '-o', output_dir])
assert result.exit_code == 0
# Cleanup:
if os.path.exists(output_dir) and os.path.isdir(output_dir):
shutil.rmtree(output_dir)
def test_decompile_error_two_arguments(self):
runner = CliRunner()
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
output_dir = mkdtemp(prefix='androguard_test_')
arguments = ['decompile', '-i', apk_path, apk_path, '-o', output_dir]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
# Cleanup:
if os.path.exists(output_dir) and os.path.isdir(output_dir):
shutil.rmtree(output_dir)
def test_decompile_error_no_arguments(self):
runner = CliRunner()
output_dir = mkdtemp(prefix='androguard_test_')
arguments = ['decompile', '-o', output_dir]
result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 1, arguments
# Cleanup:
if os.path.exists(output_dir) and os.path.isdir(output_dir):
shutil.rmtree(output_dir)
def test_decompile_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point,
['decompile', '--help'])
assert result.exit_code == 0
def test_sign_basic(self):
apk_path = resource_filename('androguard',
'../examples/dalvik/test/bin/'
'Test-debug.apk')
runner = CliRunner()
arguments = ['sign', apk_path]
        result = runner.invoke(entry_points.entry_point, arguments)
assert result.exit_code == 0, arguments
def test_sign_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point, ['sign', '--help'])
assert result.exit_code == 0
def test_gui_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point,
['gui', '--help'])
assert result.exit_code == 0
def test_analyze_help(self):
runner = CliRunner()
result = runner.invoke(entry_points.entry_point,
['analyze', '--help'])
assert result.exit_code == 0
| 42.052045 | 79 | 0.569395 | 1,163 | 11,312 | 5.27945 | 0.084265 | 0.053746 | 0.085016 | 0.108632 | 0.9 | 0.896743 | 0.878339 | 0.878339 | 0.822964 | 0.803746 | 0 | 0.005987 | 0.32081 | 11,312 | 268 | 80 | 42.208955 | 0.79318 | 0.009282 | 0 | 0.685841 | 0 | 0 | 0.121908 | 0.047155 | 0 | 0 | 0 | 0 | 0.128319 | 1 | 0.128319 | false | 0 | 0.030973 | 0 | 0.163717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4fe7553aa3c57b0e9d18c16878324cadf6ffe741 | 106 | py | Python | pythonProject1/venv/Lib/site-packages/tkinter_fonts_viewer/__init__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | 3 | 2020-09-13T20:22:47.000Z | 2021-09-15T18:23:02.000Z | pythonProject1/venv/Lib/site-packages/tkinter_fonts_viewer/__init__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | null | null | null | pythonProject1/venv/Lib/site-packages/tkinter_fonts_viewer/__init__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | null | null | null | from .tkinter_fonts_viewer import static_file_path, fonts_type, viewer, FontsMonoCheck, TkinterFontsViewer | 106 | 106 | 0.886792 | 13 | 106 | 6.846154 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066038 | 106 | 1 | 106 | 106 | 0.89899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b0ac48e4989fbad00f1c265fc316f9a05273a97 | 306 | py | Python | src/classifier/__init__.py | William9923/IF4072-SentimentClassification | 5e22a6da418056955243c310bab0382e4683b781 | [
"MIT"
] | null | null | null | src/classifier/__init__.py | William9923/IF4072-SentimentClassification | 5e22a6da418056955243c310bab0382e4683b781 | [
"MIT"
] | null | null | null | src/classifier/__init__.py | William9923/IF4072-SentimentClassification | 5e22a6da418056955243c310bab0382e4683b781 | [
"MIT"
] | null | null | null | from src.classifier.impl.baseline import LSTMClf
from src.classifier.impl.bert import FineTuneBertClf
from src.classifier.impl.lgbm import LGBMClf
from src.classifier.impl.naive import NaiveBayesClf
from src.classifier.interface import IClassifier
from src.classifier.impl.roberta import FineTuneRobertaClf | 51 | 58 | 0.869281 | 41 | 306 | 6.487805 | 0.414634 | 0.157895 | 0.383459 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075163 | 306 | 6 | 58 | 51 | 0.939929 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b0dbff1101680ac735de0b170802cda86676939 | 13,513 | py | Python | tests/corpus/io/test_tatoeba.py | manuel3265/Audiomate | 76292142b585a75ec6fdf96ce1063e3d05ba3887 | [
"MIT"
] | null | null | null | tests/corpus/io/test_tatoeba.py | manuel3265/Audiomate | 76292142b585a75ec6fdf96ce1063e3d05ba3887 | [
"MIT"
] | null | null | null | tests/corpus/io/test_tatoeba.py | manuel3265/Audiomate | 76292142b585a75ec6fdf96ce1063e3d05ba3887 | [
"MIT"
] | null | null | null | import os
import pytest
import requests_mock
from audiomate import corpus
from audiomate import issuers
from audiomate.corpus import io
from audiomate.corpus.io import tatoeba
from tests import resources
from . import reader_test as rt
@pytest.fixture()
def sample_audio_list_path():
return resources.get_resource_path(['sample_corpora', 'tatoeba_download', 'sentences_with_audio.csv'])
@pytest.fixture()
def sample_sentence_list_path():
return resources.get_resource_path(['sample_corpora', 'tatoeba_download', 'sentences.csv'])
@pytest.fixture()
def sample_audio_list_tar_bz():
with open(resources.get_resource_path(['sample_corpora', 'tatoeba_download', 'sentences_with_audio.tar.bz2']),
'rb') as f:
return f.read()
@pytest.fixture()
def sample_sentence_list_tar_bz():
with open(resources.get_resource_path(['sample_corpora', 'tatoeba_download', 'sentences.tar.bz2']), 'rb') as f:
return f.read()
@pytest.fixture()
def sample_audio_content():
with open(resources.get_resource_path(['wav_files', 'wav_2.wav']), 'rb') as f:
return f.read()
class TestTatoebaDownloader:
def test_load_audio_list(self, sample_audio_list_path):
downloader = io.TatoebaDownloader()
entries = downloader._load_audio_list(sample_audio_list_path)
assert len(entries) == 5
assert entries['247'] == ['gretelen', 'CC BY-NC 4.0', None]
assert entries['1881'] == ['CK', 'CC BY-NC-ND 3.0', 'http://www.manythings.org/tatoeba']
assert entries['6286'] == ['Phoenix', 'CC BY-NC 4.0', None]
assert entries['2952354'] == ['pencil', 'CC BY-NC 4.0', None]
assert entries['6921520'] == ['CK', 'CC BY-NC-ND 3.0', 'http://www.manythings.org/tatoeba']
def test_load_audio_list_all(self, sample_audio_list_path):
downloader = io.TatoebaDownloader(include_empty_licence=True)
entries = downloader._load_audio_list(sample_audio_list_path)
assert len(entries) == 7
assert entries['141'] == ['BraveSentry', None, None]
assert entries['247'] == ['gretelen', 'CC BY-NC 4.0', None]
assert entries['1355'] == ['Nero', None, None]
assert entries['1881'] == ['CK', 'CC BY-NC-ND 3.0', 'http://www.manythings.org/tatoeba']
assert entries['6286'] == ['Phoenix', 'CC BY-NC 4.0', None]
assert entries['2952354'] == ['pencil', 'CC BY-NC 4.0', None]
assert entries['6921520'] == ['CK', 'CC BY-NC-ND 3.0', 'http://www.manythings.org/tatoeba']
def test_load_audio_list_filter_license(self, sample_audio_list_path):
downloader = io.TatoebaDownloader(include_licenses=['CC BY-NC 4.0'])
entries = downloader._load_audio_list(sample_audio_list_path)
assert len(entries) == 3
assert entries['247'] == ['gretelen', 'CC BY-NC 4.0', None]
assert entries['6286'] == ['Phoenix', 'CC BY-NC 4.0', None]
assert entries['2952354'] == ['pencil', 'CC BY-NC 4.0', None]
def test_load_sentence_list(self, sample_sentence_list_path):
downloader = io.TatoebaDownloader()
entries = downloader._load_sentence_list(sample_sentence_list_path)
assert len(entries) == 8
assert entries['141'] == ['eng', 'I want you to tell me why you did that.']
assert entries['247'] == ['fra', 'Comment ça, je suis trop vieille pour ce poste ?']
assert entries['511'] == ['deu', 'Wer will heiße Schokolade?']
assert entries['524'] == ['deu', 'Das ist zu teuer!']
assert entries['1355'] == ['epo', 'Mi panikis la homojn.']
assert entries['6286'] == ['deu', 'Ich denke, ich habe genug gehört.']
assert entries['299609'] == ['eng', 'He washes his car at least once a week.']
assert entries['6921520'] == ['ita', 'Ho una zia che abita a Osaka.']
def test_load_sentence_list_filter_languages(self, sample_sentence_list_path):
downloader = io.TatoebaDownloader(include_languages=['deu', 'eng'])
entries = downloader._load_sentence_list(sample_sentence_list_path)
assert len(entries) == 5
assert entries['141'] == ['eng', 'I want you to tell me why you did that.']
assert entries['511'] == ['deu', 'Wer will heiße Schokolade?']
assert entries['524'] == ['deu', 'Das ist zu teuer!']
assert entries['6286'] == ['deu', 'Ich denke, ich habe genug gehört.']
assert entries['299609'] == ['eng', 'He washes his car at least once a week.']
def test_download(self, sample_audio_list_tar_bz, sample_sentence_list_tar_bz, sample_audio_content, tmpdir):
downloader = io.TatoebaDownloader()
with requests_mock.Mocker() as mock:
mock.get(tatoeba.AUDIO_LIST_URL, content=sample_audio_list_tar_bz)
mock.get(tatoeba.SENTENCE_LIST_URL, content=sample_sentence_list_tar_bz)
mock.get('https://audio.tatoeba.org/sentences/eng/141.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/fra/247.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/epo/1355.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/deu/6286.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/ita/6921520.mp3', content=sample_audio_content)
downloader.download(tmpdir.strpath)
assert os.path.isfile(os.path.join(tmpdir.strpath, 'meta.txt'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'eng', '141.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'fra', '247.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'epo', '1355.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'deu', '6286.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'ita', '6921520.mp3'))
def test_download_with_empty_licenses(self, sample_audio_list_tar_bz, sample_sentence_list_tar_bz,
sample_audio_content, tmpdir):
downloader = io.TatoebaDownloader(include_empty_licence=True)
with requests_mock.Mocker() as mock:
mock.get(tatoeba.AUDIO_LIST_URL, content=sample_audio_list_tar_bz)
mock.get(tatoeba.SENTENCE_LIST_URL, content=sample_sentence_list_tar_bz)
mock.get('https://audio.tatoeba.org/sentences/eng/141.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/fra/247.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/epo/1355.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/deu/6286.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/ita/6921520.mp3', content=sample_audio_content)
downloader.download(tmpdir.strpath)
assert os.path.isfile(os.path.join(tmpdir.strpath, 'meta.txt'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'eng', '141.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'fra', '247.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'epo', '1355.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'deu', '6286.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'ita', '6921520.mp3'))
def test_download_with_filter_lang(self, sample_audio_list_tar_bz, sample_sentence_list_tar_bz,
sample_audio_content, tmpdir):
downloader = io.TatoebaDownloader(include_languages=['deu', 'eng'])
with requests_mock.Mocker() as mock:
mock.get(tatoeba.AUDIO_LIST_URL, content=sample_audio_list_tar_bz)
mock.get(tatoeba.SENTENCE_LIST_URL, content=sample_sentence_list_tar_bz)
mock.get('https://audio.tatoeba.org/sentences/eng/141.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/fra/247.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/epo/1355.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/deu/6286.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/ita/6921520.mp3', content=sample_audio_content)
downloader.download(tmpdir.strpath)
assert os.path.isfile(os.path.join(tmpdir.strpath, 'meta.txt'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'eng', '141.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'fra', '247.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'epo', '1355.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'deu', '6286.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'ita', '6921520.mp3'))
def test_download_with_filter_license(self, sample_audio_list_tar_bz, sample_sentence_list_tar_bz,
sample_audio_content, tmpdir):
downloader = io.TatoebaDownloader(include_licenses=['CC BY-NC-ND 3.0'])
with requests_mock.Mocker() as mock:
mock.get(tatoeba.AUDIO_LIST_URL, content=sample_audio_list_tar_bz)
mock.get(tatoeba.SENTENCE_LIST_URL, content=sample_sentence_list_tar_bz)
mock.get('https://audio.tatoeba.org/sentences/eng/141.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/fra/247.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/epo/1355.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/deu/6286.mp3', content=sample_audio_content)
mock.get('https://audio.tatoeba.org/sentences/ita/6921520.mp3', content=sample_audio_content)
downloader.download(tmpdir.strpath)
assert os.path.isfile(os.path.join(tmpdir.strpath, 'meta.txt'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'eng', '141.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'fra', '247.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'epo', '1355.mp3'))
assert not os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'deu', '6286.mp3'))
assert os.path.isfile(os.path.join(tmpdir.strpath, 'audio', 'ita', '6921520.mp3'))
class TestTatoebaReader:
SAMPLE_PATH = resources.sample_corpus_path('tatoeba')
FILE_TRACK_BASE_PATH = os.path.join(SAMPLE_PATH, 'audio')
EXPECTED_NUMBER_OF_TRACKS = 5
EXPECTED_TRACKS = [
rt.ExpFileTrack('141', os.path.join('eng', '141.mp3')),
rt.ExpFileTrack('247', os.path.join('fra', '247.mp3')),
rt.ExpFileTrack('1355', os.path.join('deu', '1355.mp3')),
rt.ExpFileTrack('1881', os.path.join('deu', '1881.mp3')),
rt.ExpFileTrack('6921520', os.path.join('eng', '6921520.mp3')),
]
EXPECTED_NUMBER_OF_ISSUERS = 4
EXPECTED_ISSUERS = [
rt.ExpSpeaker('BraveSentry', 1, issuers.Gender.UNKNOWN, issuers.AgeGroup.UNKNOWN, None),
rt.ExpSpeaker('gretelen', 1, issuers.Gender.UNKNOWN, issuers.AgeGroup.UNKNOWN, None),
rt.ExpSpeaker('Nero', 1, issuers.Gender.UNKNOWN, issuers.AgeGroup.UNKNOWN, None),
rt.ExpSpeaker('CK', 2, issuers.Gender.UNKNOWN, issuers.AgeGroup.UNKNOWN, None),
]
EXPECTED_NUMBER_OF_UTTERANCES = 5
EXPECTED_UTTERANCES = [
rt.ExpUtterance('141', '141', 'BraveSentry', 0, float('inf')),
rt.ExpUtterance('247', '247', 'gretelen', 0, float('inf')),
rt.ExpUtterance('1355', '1355', 'Nero', 0, float('inf')),
rt.ExpUtterance('1881', '1881', 'CK', 0, float('inf')),
rt.ExpUtterance('6921520', '6921520', 'CK', 0, float('inf')),
]
EXPECTED_LABEL_LISTS = {
'141': [rt.ExpLabelList(corpus.LL_WORD_TRANSCRIPT_RAW, 1)],
'247': [rt.ExpLabelList(corpus.LL_WORD_TRANSCRIPT_RAW, 1)],
'1355': [rt.ExpLabelList(corpus.LL_WORD_TRANSCRIPT_RAW, 1)],
'1881': [rt.ExpLabelList(corpus.LL_WORD_TRANSCRIPT_RAW, 1)],
'6921520': [rt.ExpLabelList(corpus.LL_WORD_TRANSCRIPT_RAW, 1)],
}
EXPECTED_LABELS = {
'141': [
rt.ExpLabel(
corpus.LL_WORD_TRANSCRIPT, 'I want you to tell me why you did that.',
0, float('inf')
)
],
'247': [
rt.ExpLabel(
corpus.LL_WORD_TRANSCRIPT, 'Comment ça, je suis trop vieille pour ce poste ?',
0, float('inf')
)
],
'1355': [
rt.ExpLabel(
corpus.LL_WORD_TRANSCRIPT, 'Wer will heiße Schokolade?',
0, float('inf')
)
],
'1881': [
rt.ExpLabel(
corpus.LL_WORD_TRANSCRIPT, 'Das ist zu teuer!',
0, float('inf')
)
],
'6921520': [
rt.ExpLabel(
corpus.LL_WORD_TRANSCRIPT, 'He washes his car at least once a week.',
0, float('inf')
)
],
}
EXPECTED_NUMBER_OF_SUBVIEWS = 0
def load(self):
return io.TatoebaReader().load(self.SAMPLE_PATH)
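# Direct usage sketch (illustrative; the directory path is hypothetical):
#   downloader = io.TatoebaDownloader(include_languages=['deu', 'eng'])
#   downloader.download('/tmp/tatoeba')
#   loaded_corpus = io.TatoebaReader().load('/tmp/tatoeba')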
| 47.91844 | 115 | 0.644861 | 1,764 | 13,513 | 4.763605 | 0.106576 | 0.038558 | 0.035702 | 0.039986 | 0.857313 | 0.836963 | 0.798405 | 0.78472 | 0.73807 | 0.711651 | 0 | 0.048774 | 0.206468 | 13,513 | 281 | 116 | 48.088968 | 0.734869 | 0 | 0 | 0.550926 | 0 | 0 | 0.21927 | 0.003848 | 0 | 0 | 0 | 0 | 0.263889 | 1 | 0.069444 | false | 0 | 0.041667 | 0.013889 | 0.199074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8b241633db6ac0f4af21c655bc5536c5eeb6fa81 | 46 | py | Python | trial.py | Fine-BITeam/Trial | 57df7567656805aae238e9b329e2c3caeb90e43c | [
"MIT"
] | null | null | null | trial.py | Fine-BITeam/Trial | 57df7567656805aae238e9b329e2c3caeb90e43c | [
"MIT"
] | null | null | null | trial.py | Fine-BITeam/Trial | 57df7567656805aae238e9b329e2c3caeb90e43c | [
"MIT"
] | null | null | null | print("Hello world")
print("Hello Fine Team")
| 15.333333 | 24 | 0.717391 | 7 | 46 | 4.714286 | 0.714286 | 0.606061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 2 | 25 | 23 | 0.804878 | 0 | 0 | 0 | 0 | 0 | 0.565217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8ca3925b9833c02e9f092c2e87720565052f5d47 | 194 | py | Python | hknweb/tutoring/admin.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | 3 | 2019-04-22T21:51:07.000Z | 2019-12-16T21:54:00.000Z | hknweb/tutoring/admin.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | null | null | null | hknweb/tutoring/admin.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Course, Slot, Tutor
admin.site.register(Course)
admin.site.register(Slot)
admin.site.register(Tutor)
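# Equivalent decorator-based registration (illustrative alternative, not in
# the original file):
#   @admin.register(Course)
#   class CourseAdmin(admin.ModelAdmin):
#       ...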
| 19.4 | 32 | 0.809278 | 29 | 194 | 5.413793 | 0.37931 | 0.191083 | 0.305732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108247 | 194 | 9 | 33 | 21.555556 | 0.907514 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8cb668b54a15b51d1a9b66441250afdeb562657a | 126 | py | Python | Account/Token_Gen.py | nishanthjoelmj/Project-Class-Room | 624cabaf27e296f0f57f90918a5218d896053cd8 | [
"MIT"
] | 1 | 2021-06-05T08:45:48.000Z | 2021-06-05T08:45:48.000Z | Account/Token_Gen.py | nishanthjoelmj/Project-Class-Room | 624cabaf27e296f0f57f90918a5218d896053cd8 | [
"MIT"
] | null | null | null | Account/Token_Gen.py | nishanthjoelmj/Project-Class-Room | 624cabaf27e296f0f57f90918a5218d896053cd8 | [
"MIT"
] | null | null | null | from django.contrib.auth.tokens import PasswordResetTokenGenerator
class Token_maker(PasswordResetTokenGenerator):
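    # Illustrative note (not in the original): subclasses usually override
    # _make_hash_value so tokens invalidate when relevant user state changes,
    # for example:
    #
    #     def _make_hash_value(self, user, timestamp):
    #         return str(user.pk) + str(timestamp) + str(user.is_active)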
pass | 31.5 | 66 | 0.849206 | 12 | 126 | 8.833333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103175 | 126 | 4 | 67 | 31.5 | 0.938053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
8cb7f2d322de8d4b99426d168d653808a965d7ec | 1,331 | py | Python | src/sage/combinat/path_tableaux/catalog.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 1,742 | 2015-01-04T07:06:13.000Z | 2022-03-30T11:32:52.000Z | src/sage/combinat/path_tableaux/catalog.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 66 | 2015-03-19T19:17:24.000Z | 2022-03-16T11:59:30.000Z | src/sage/combinat/path_tableaux/catalog.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 495 | 2015-01-10T10:23:18.000Z | 2022-03-24T22:06:11.000Z | r"""
Catalog of Path Tableaux
The ``path_tableaux`` object may be used to access examples of various path
tableau objects currently implemented in Sage. Using tab-completion on this
object is an easy way to discover and quickly create the path tableaux that
are available (as listed here).
Let ``<tab>`` indicate pressing the tab key. So begin by typing
``path_tableaux.<tab>`` to the see the currently implemented path tableaux.
- :class:`~sage.combinat.path_tableaux.path_tableau.CylindricalDiagram`
- :class:`~sage.combinat.path_tableaux.dyck_path.DyckPath`
- :class:`~sage.combinat.path_tableaux.dyck_path.DyckPaths`
- :class:`~sage.combinat.path_tableaux.frieze.FriezePattern`
- :class:`~sage.combinat.path_tableaux.frieze.FriezePatterns`
- :class:`~sage.combinat.path_tableaux.semistandard.SemistandardPathTableau`
- :class:`~sage.combinat.path_tableaux.semistandard.SemistandardPathTableaux`
"""
from sage.misc.lazy_import import lazy_import
lazy_import('sage.combinat.path_tableaux.path_tableau', ['CylindricalDiagram'])
lazy_import('sage.combinat.path_tableaux.dyck_path', ['DyckPath', 'DyckPaths'])
lazy_import('sage.combinat.path_tableaux.frieze', ['FriezePattern', 'FriezePatterns'])
lazy_import('sage.combinat.path_tableaux.semistandard', ['SemistandardPathTableau', 'SemistandardPathTableaux'])
del lazy_import
| 44.366667 | 111 | 0.802404 | 170 | 1,331 | 6.135294 | 0.382353 | 0.184084 | 0.168744 | 0.253116 | 0.539789 | 0.534995 | 0.2186 | 0 | 0 | 0 | 0 | 0 | 0.078137 | 1,331 | 29 | 112 | 45.896552 | 0.850041 | 0.673929 | 0 | 0 | 0 | 0 | 0.611765 | 0.465882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5067a1f26f1e8342bec1fa5ad3d587b262bd4836 | 32 | py | Python | tests/unittests/http_functions/no_return_returns/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | null | null | null | tests/unittests/http_functions/no_return_returns/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | null | null | null | tests/unittests/http_functions/no_return_returns/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | 1 | 2018-04-22T18:03:52.000Z | 2018-04-22T18:03:52.000Z | def main(req):
return 'ABC'
| 10.666667 | 16 | 0.59375 | 5 | 32 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 32 | 2 | 17 | 16 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
50753117a3c06725e30fe7e3473c7ed0106d6516 | 49 | py | Python | run.py | udayansawant/River-Game | bc4355b6be88b00bc5ce3a0609afded5b968ad60 | [
"MIT"
] | null | null | null | run.py | udayansawant/River-Game | bc4355b6be88b00bc5ce3a0609afded5b968ad60 | [
"MIT"
] | null | null | null | run.py | udayansawant/River-Game | bc4355b6be88b00bc5ce3a0609afded5b968ad60 | [
"MIT"
] | null | null | null | import game_functionality
game_functionality() | 16.333333 | 26 | 0.857143 | 5 | 49 | 8 | 0.6 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 3 | 27 | 16.333333 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
50f806af09c7d35745257cd456f44ee252d7a6e6 | 32 | py | Python | day3/modules/solutions/exercise2/mod3.py | austind/pyplus-ons | f0fcd6b2a980f75968ab54cd2ae39b42c1f68302 | [
"Apache-2.0"
] | null | null | null | day3/modules/solutions/exercise2/mod3.py | austind/pyplus-ons | f0fcd6b2a980f75968ab54cd2ae39b42c1f68302 | [
"Apache-2.0"
] | null | null | null | day3/modules/solutions/exercise2/mod3.py | austind/pyplus-ons | f0fcd6b2a980f75968ab54cd2ae39b42c1f68302 | [
"Apache-2.0"
] | 5 | 2019-11-19T18:41:41.000Z | 2020-06-18T14:58:09.000Z | def func3():
print("func3")
| 10.666667 | 18 | 0.5625 | 4 | 32 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0.21875 | 32 | 2 | 19 | 16 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0fc6796c08ede05c585020c1f75e4c334615476d | 105 | py | Python | src/__init__.py | kremayard/krema | f1272f9841367f190b093137ef3e66d870d915aa | [
"MIT"
] | 23 | 2021-07-19T14:22:43.000Z | 2022-01-13T02:35:43.000Z | src/__init__.py | kremayard/krema | f1272f9841367f190b093137ef3e66d870d915aa | [
"MIT"
] | 1 | 2021-07-23T15:09:37.000Z | 2021-07-23T15:12:10.000Z | src/__init__.py | kremayard/krema | f1272f9841367f190b093137ef3e66d870d915aa | [
"MIT"
] | 2 | 2021-07-26T11:09:08.000Z | 2021-08-28T12:12:19.000Z | from .models import *
from .utils import *
from .perms import *
from .embed import *
from .types import * | 21 | 21 | 0.72381 | 15 | 105 | 5.066667 | 0.466667 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180952 | 105 | 5 | 22 | 21 | 0.883721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |