hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8d4fa1bd6b56997aa2fdaca67c38d86af09a76b4 | 17,010 | py | Python | test_filament.py | grsmiley/filament | bd125d7b1460e6c715c30e4a501ef96185ad6d07 | [
"MIT"
] | null | null | null | test_filament.py | grsmiley/filament | bd125d7b1460e6c715c30e4a501ef96185ad6d07 | [
"MIT"
] | null | null | null | test_filament.py | grsmiley/filament | bd125d7b1460e6c715c30e4a501ef96185ad6d07 | [
"MIT"
] | null | null | null | from argparse import ArgumentError
from unittest import IsolatedAsyncioTestCase
from filament import AsyncInjector, Injector, BindingContext, BindingScope
class BindingContextCase(IsolatedAsyncioTestCase):
async def test_init(self):
bc = BindingContext()
async def test_init_singletons(self):
bc = BindingContext(singletons = {'x': 'y'})
result_map, result_scope = bc.get('x')
self.assertEqual(result_map, 'y')
self.assertEqual(result_scope, BindingScope.Singleton)
async def test_init_locals(self):
bc = BindingContext(locals = {'x': 'y'})
result_map, result_scope = bc.get('x')
self.assertEqual(result_map, 'y')
self.assertEqual(result_scope, BindingScope.Local)
async def test_init_transients(self):
bc = BindingContext(transients = {'x': 'y'})
result_map, result_scope = bc.get('x')
self.assertEqual(result_map, 'y')
self.assertEqual(result_scope, BindingScope.Transient)
async def test_str_must_be_explicit_singleton(self):
bc = BindingContext()
with self.assertRaises(ValueError):
bc.singleton('x')
bc.singleton('x', 'y')
async def test_str_must_be_explicit_local(self):
bc = BindingContext()
with self.assertRaises(ValueError):
bc.local('x')
bc.local('x', 'y')
async def test_str_must_be_explicit_transient(self):
bc = BindingContext()
with self.assertRaises(ValueError):
bc.transient('x')
bc.transient('x', 'y')
async def test_get(self):
bc = BindingContext(locals={'x':'y'})
not_found = object()
result = bc.get('y', not_found)
self.assertIs(result, not_found)
result = bc.get('x', not_found)
self.assertEqual(result, ('y', BindingScope.Local))
async def test_falsy_callable(self):
bc = BindingContext()
dict_ = dict()
bc.transient('test', dict_)
# Async
i = AsyncInjector(bc)
result = await i.resolve('test')
self.assertIs(result, dict_)
# Sync
i = Injector(bc)
result = i.resolve('test')
self.assertIs(result, dict_)
async def test_falsy_base_binding(self):
bc = BindingContext()
with self.assertRaises(ValueError):
bc.transient('', lambda:'test')
class Resolve(IsolatedAsyncioTestCase):
async def test_resolve_callable(self):
class A:
pass
# Async
i = AsyncInjector()
result = await i.resolve(A)
self.assertIsInstance(result, A)
# Sync
i = Injector()
result = i.resolve(A)
self.assertIsInstance(result, A)
async def test_resolve_noncallable(self):
class A:
pass
bc = BindingContext(transients={A:'y'})
# Async
i = AsyncInjector(context=bc)
result = await i.resolve(A)
self.assertEqual(result, 'y')
# Sync
i = Injector(context=bc)
result = i.resolve(A)
self.assertEqual(result, 'y')
async def test_resolve_string(self):
bc = BindingContext(transients={'x':'y'})
# Async
i = AsyncInjector(context=bc)
result = await i.resolve('x')
self.assertEqual(result, 'y')
# Sync
        i = Injector(context=bc)
        result = i.resolve('x')
self.assertEqual(result, 'y')
async def test_resolve_awaitable(self):
class A:
pass
async def B(a:A):
return ('_', a)
# Async only
i = AsyncInjector()
_, result = await i.resolve(B)
self.assertIsInstance(result, A)
async def test_resolve_recursive(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
class C:
def __init__(self, b:B):
self.b = b
# Async
i = AsyncInjector()
result = await i.resolve(C)
self.assertIsInstance(result, C)
self.assertIsInstance(result.b, B)
self.assertIsInstance(result.b.a, A)
# Sync
i = Injector()
result = i.resolve(C)
self.assertIsInstance(result, C)
self.assertIsInstance(result.b, B)
self.assertIsInstance(result.b.a, A)
async def test_scope_transient(self):
class A:
pass
class B:
def __init__(self, a1:A, a2:A):
self.a1 = a1
self.a2 = a2
bc = BindingContext()
bc.transient(A)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result.a1, A)
self.assertIsInstance(result.a2, A)
self.assertIsNot(result.a1, result.a2)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result.a1, A)
self.assertIsInstance(result.a2, A)
self.assertIsNot(result.a1, result.a2)
async def test_scope_local(self):
class A:
pass
class B:
def __init__(self, a1:A, a2:A):
self.a1 = a1
self.a2 = a2
bc = BindingContext()
bc.local(A)
# Async
i = AsyncInjector(bc)
result1 = await i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = await i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result2.a1, result2.a2)
self.assertIsNot(result1.a1, result2.a1)
self.assertIsNot(result1.a2, result2.a2)
# Sync
i = Injector(bc)
result1 = i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result2.a1, result2.a2)
self.assertIsNot(result1.a1, result2.a1)
self.assertIsNot(result1.a2, result2.a2)
async def test_scope_singleton(self):
class A:
pass
class B:
def __init__(self, a1:A, a2:A):
self.a1 = a1
self.a2 = a2
bc = BindingContext()
bc.singleton(A)
# Async
i = AsyncInjector(bc)
result1 = await i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = await i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result2.a1, result2.a2)
self.assertIs(result1.a1, result2.a1)
self.assertIs(result1.a2, result2.a2)
# Sync
i = Injector(bc)
result1 = i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result2.a1, result2.a2)
self.assertIs(result1.a1, result2.a1)
self.assertIs(result1.a2, result2.a2)
async def test_default_mapping(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
# Async
i = AsyncInjector()
result = await i.resolve(B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector()
result = i.resolve(B)
self.assertIsInstance(result.a, A)
async def test_implicit_mapping(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
bc = BindingContext()
bc.local(A)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result.a, A)
async def test_explicit_mapping(self):
class A:
pass
class A_:
pass
class B:
def __init__(self, a:A):
self.a = a
bc = BindingContext(locals={A:A_})
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result.a, A_)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result.a, A_)
async def test_resolution_order(self):
class A:
pass
class A_:
pass
class B:
def __init__(self, a:A_):
self.a = a
bc = BindingContext(locals={'a':A})
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result.a, A)
async def test_no_binding_for_positional(self):
class A:
def __init__(self, a):
self.a = a
# Async
i = AsyncInjector()
with self.assertRaises(TypeError):
result = await i.resolve(A)
# Sync
i = Injector()
with self.assertRaises(TypeError):
result = i.resolve(A)
async def test_default_scope(self):
class A:
pass
class B:
def __init__(self, a1:A, a2:A):
self.a1 = a1
self.a2 = a2
# Async
i = AsyncInjector(default_scope = BindingScope.Transient)
result = await i.resolve(B)
self.assertIsInstance(result.a1, A)
self.assertIsInstance(result.a2, A)
self.assertIsNot(result.a1, result.a2)
i = AsyncInjector(default_scope = BindingScope.Local)
result1 = await i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = await i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIsNot(result1.a1, result2.a1)
i = AsyncInjector(default_scope = BindingScope.Singleton)
result1 = await i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = await i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result1.a1, result2.a1)
# Sync
i = Injector(default_scope = BindingScope.Transient)
result = i.resolve(B)
self.assertIsInstance(result.a1, A)
self.assertIsInstance(result.a2, A)
self.assertIsNot(result.a1, result.a2)
i = Injector(default_scope = BindingScope.Local)
result1 = i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIsNot(result1.a1, result2.a1)
i = Injector(default_scope = BindingScope.Singleton)
result1 = i.resolve(B)
self.assertIsInstance(result1.a1, A)
self.assertIsInstance(result1.a2, A)
self.assertIs(result1.a1, result1.a2)
result2 = i.resolve(B)
self.assertIsInstance(result2.a1, A)
self.assertIsInstance(result2.a2, A)
self.assertIs(result1.a1, result2.a1)
async def test_duplicate_bindings(self):
bc1 = BindingContext(singletons={'x':'y'})
bc2 = BindingContext(locals={'x':'z'})
# Async
i = AsyncInjector(bc1)
with self.assertRaises(AssertionError):
result = await i.resolve('x', context=bc2)
# Sync
i = Injector(bc1)
with self.assertRaises(AssertionError):
result = i.resolve('x', context=bc2)
async def test_falsy_cache(self):
dict_ = {}
class A:
def __init__(self, a1:'test', a2:'test'): # noqa: F821
self.a1 = a1
self.a2 = a2
bc = BindingContext()
bc.local('test', lambda: {})
# Async
i = AsyncInjector(bc)
result = await i.resolve(A)
self.assertIs(result.a1, result.a2)
# Sync
i = Injector(bc)
result = i.resolve(A)
self.assertIs(result.a1, result.a2)
async def test_falsy_name_with_annotation(self):
dict_ = {}
class A:
pass
class B:
def __init__(self, test:A):
self.test = test
bc = BindingContext()
bc.transient('test', lambda: dict_)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIs(result.test, dict_)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIs(result.test, dict_)
class UseCases(IsolatedAsyncioTestCase):
async def test_string_injection(self):
class A:
pass
class B:
def __init__(self, a):
self.a = a
bc = BindingContext()
bc.transient('a', A)
bc.transient('b', B)
# Async
i = AsyncInjector(bc)
result = await i.resolve('b')
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector(bc)
result = i.resolve('b')
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
async def test_default_type_injection(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
# Async
i = AsyncInjector()
result = await i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector()
result = i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
async def test_implicit_type_injection(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
bc = BindingContext()
bc.transient(A)
bc.transient(B)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
async def test_explicit_type_injection(self):
class A:
pass
class B:
def __init__(self, a:A):
self.a = a
bc = BindingContext()
bc.transient(A, A)
bc.transient(B, B)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
async def test_mixed_injection(self):
class A:
pass
class B:
def __init__(self, a:A, my_value):
self.a = a
self.my_value = my_value
bc = BindingContext()
bc.transient(A)
bc.transient(B, B)
bc.transient('my_value', 7)
# Async
i = AsyncInjector(bc)
result = await i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
self.assertEqual(result.my_value, 7)
# Sync
i = Injector(bc)
result = i.resolve(B)
self.assertIsInstance(result, B)
self.assertIsInstance(result.a, A)
self.assertEqual(result.my_value, 7)
async def test_falsy_resolution(self):
dict_ = {}
class A:
def __init__(self, test):
self.test = test
bc = BindingContext()
bc.transient('test', dict_)
# Async
i = AsyncInjector(bc)
result = await i.resolve(A)
self.assertIs(result.test, dict_)
        # Sync
i = Injector(bc)
result = i.resolve(A)
self.assertIs(result.test, dict_) | 27.931034 | 74 | 0.556555 | 1,906 | 17,010 | 4.858867 | 0.050892 | 0.166289 | 0.115646 | 0.057553 | 0.864162 | 0.805637 | 0.780909 | 0.741173 | 0.688263 | 0.662671 | 0 | 0.021096 | 0.336743 | 17,010 | 609 | 75 | 27.931034 | 0.79977 | 0.016461 | 0 | 0.782222 | 0 | 0 | 0.005455 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.042222 | false | 0.044444 | 0.006667 | 0 | 0.144444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
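The three binding scopes these tests exercise (transient, local, singleton) can be modeled with a toy resolver. This is a sketch of the contract only, not the filament implementation; `ToyInjector` and its string-valued scopes are illustrative names:

```python
class ToyInjector:
    """Toy model of the scope semantics tested above (not filament itself)."""

    def __init__(self):
        self._singletons = {}  # shared across all resolve() calls

    def resolve(self, cls, scope):
        local_cache = {}  # lives only for the duration of this resolve() call

        def get(c):
            if scope == "transient":
                return c()  # always a fresh instance
            cache = self._singletons if scope == "singleton" else local_cache
            if c not in cache:
                cache[c] = c()
            return cache[c]

        # Resolve the same dependency twice, like B(a1: A, a2: A) above
        return get(cls), get(cls)


class A:
    pass


inj = ToyInjector()
t1, t2 = inj.resolve(A, "transient")
l1, l2 = inj.resolve(A, "local")
m1, _ = inj.resolve(A, "local")
s1, s2 = inj.resolve(A, "singleton")
s3, _ = inj.resolve(A, "singleton")
print(t1 is t2, l1 is l2, l1 is m1, s1 is s3)  # → False True False True
```

Transient gives a fresh instance per injection site, local shares within one `resolve()` call but not across calls, and singleton shares across calls — matching `test_scope_transient`, `test_scope_local`, and `test_scope_singleton`.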
8d5296a7b7c863ac3c85f8ae18c39290f7c3ba09 | 10,404 | py | Python | trading/strategies.py | taylormm/gdax-order-book | 9ddf295e54edb6a8cdda60f9cd60116cb4ff541d | [
"BSD-2-Clause"
] | 221 | 2015-10-13T17:10:14.000Z | 2022-03-24T10:34:03.000Z | trading/strategies.py | ahao1995/coinbase-exchange-order-book | 4eae6025e567dd50ba1ac2b9727e0e75b6bf2e40 | [
"BSD-2-Clause"
] | 3 | 2016-04-12T22:14:36.000Z | 2018-04-08T10:44:37.000Z | trading/strategies.py | ahao1995/coinbase-exchange-order-book | 4eae6025e567dd50ba1ac2b9727e0e75b6bf2e40 | [
"BSD-2-Clause"
] | 91 | 2015-10-13T21:43:24.000Z | 2022-03-09T18:18:37.000Z | from decimal import Decimal
from trading import file_logger
try:
import ujson as json
except ImportError:
import json
from pprint import pformat
import time
import requests
from trading.exchange import exchange_api_url, exchange_auth
def market_maker_strategy(open_orders, order_book, spreads):
time.sleep(10)
open_orders.get_open_orders()
open_orders.cancel_all()
while True:
time.sleep(0.005)
if order_book.asks.price_tree.min_key() - order_book.bids.price_tree.max_key() < 0:
file_logger.warn('Negative spread: {0}'.format(
order_book.asks.price_tree.min_key() - order_book.bids.price_tree.max_key()))
continue
if not open_orders.open_bid_order_id:
open_bid_price = order_book.asks.price_tree.min_key() - spreads.bid_spread - open_orders.open_bid_rejections
if 0.01 * float(open_bid_price) < float(open_orders.accounts['USD']['available']):
order = {'size': '0.01',
'price': str(open_bid_price),
'side': 'buy',
'product_id': 'BTC-USD',
'post_only': True}
response = requests.post(exchange_api_url + 'orders', json=order, auth=exchange_auth)
if 'status' in response.json() and response.json()['status'] == 'pending':
open_orders.open_bid_order_id = response.json()['id']
open_orders.open_bid_price = open_bid_price
open_orders.open_bid_rejections = Decimal('0.0')
file_logger.info('new bid @ {0}'.format(open_bid_price))
elif 'status' in response.json() and response.json()['status'] == 'rejected':
open_orders.open_bid_order_id = None
open_orders.open_bid_price = None
open_orders.open_bid_rejections += Decimal('0.04')
file_logger.warn('rejected: new bid @ {0}'.format(open_bid_price))
elif 'message' in response.json() and response.json()['message'] == 'Insufficient funds':
open_orders.open_bid_order_id = None
open_orders.open_bid_price = None
file_logger.warn('Insufficient USD')
else:
file_logger.error('Unhandled response: {0}'.format(pformat(response.json())))
continue
if not open_orders.open_ask_order_id:
open_ask_price = order_book.bids.price_tree.max_key() + spreads.ask_spread + open_orders.open_ask_rejections
if 0.01 < float(open_orders.accounts['BTC']['available']):
order = {'size': '0.01',
'price': str(open_ask_price),
'side': 'sell',
'product_id': 'BTC-USD',
'post_only': True}
response = requests.post(exchange_api_url + 'orders', json=order, auth=exchange_auth)
if 'status' in response.json() and response.json()['status'] == 'pending':
open_orders.open_ask_order_id = response.json()['id']
open_orders.open_ask_price = open_ask_price
file_logger.info('new ask @ {0}'.format(open_ask_price))
open_orders.open_ask_rejections = Decimal('0.0')
elif 'status' in response.json() and response.json()['status'] == 'rejected':
open_orders.open_ask_order_id = None
open_orders.open_ask_price = None
open_orders.open_ask_rejections += Decimal('0.04')
file_logger.warn('rejected: new ask @ {0}'.format(open_ask_price))
elif 'message' in response.json() and response.json()['message'] == 'Insufficient funds':
open_orders.open_ask_order_id = None
open_orders.open_ask_price = None
file_logger.warn('Insufficient BTC')
else:
file_logger.error('Unhandled response: {0}'.format(pformat(response.json())))
continue
if open_orders.open_bid_order_id and not open_orders.open_bid_cancelled:
bid_too_far_out = open_orders.open_bid_price < (order_book.asks.price_tree.min_key()
- spreads.bid_too_far_adjustment_spread)
bid_too_close = open_orders.open_bid_price > (order_book.bids.price_tree.max_key()
- spreads.bid_too_close_adjustment_spread)
cancel_bid = bid_too_far_out or bid_too_close
if cancel_bid:
if bid_too_far_out:
file_logger.info('CANCEL: open bid {0} too far from best ask: {1} spread: {2}'.format(
open_orders.open_bid_price,
order_book.asks.price_tree.min_key(),
open_orders.open_bid_price - order_book.asks.price_tree.min_key()))
if bid_too_close:
file_logger.info('CANCEL: open bid {0} too close to best bid: {1} spread: {2}'.format(
open_orders.open_bid_price,
order_book.bids.price_tree.max_key(),
open_orders.open_bid_price - order_book.bids.price_tree.max_key()))
open_orders.cancel('bid')
continue
if open_orders.open_ask_order_id and not open_orders.open_ask_cancelled:
ask_too_far_out = open_orders.open_ask_price > (order_book.bids.price_tree.max_key() +
spreads.ask_too_far_adjustment_spread)
ask_too_close = open_orders.open_ask_price < (order_book.asks.price_tree.min_key() -
spreads.ask_too_close_adjustment_spread)
cancel_ask = ask_too_far_out or ask_too_close
if cancel_ask:
if ask_too_far_out:
file_logger.info('CANCEL: open ask {0} too far from best bid: {1} spread: {2}'.format(
open_orders.open_ask_price,
order_book.bids.price_tree.max_key(),
open_orders.open_ask_price - order_book.bids.price_tree.max_key()))
if ask_too_close:
file_logger.info('CANCEL: open ask {0} too close to best ask: {1} spread: {2}'.format(
open_orders.open_ask_price,
order_book.asks.price_tree.min_key(),
open_orders.open_ask_price - order_book.asks.price_tree.min_key()))
open_orders.cancel('ask')
continue
def buyer_strategy(order_book, open_orders, spreads):
time.sleep(10)
while True:
time.sleep(0.001)
if not open_orders.open_bid_order_id:
open_bid_price = order_book.bids.price_tree.max_key() - spreads.bid_spread
if 0.01 * float(open_bid_price) < float(open_orders.accounts['USD']['available']):
order = {'size': '0.01',
'price': str(open_bid_price),
'side': 'buy',
'product_id': 'BTC-USD',
'post_only': True}
response = requests.post(exchange_api_url + 'orders', json=order, auth=exchange_auth)
try:
response = response.json()
except ValueError:
file_logger.error('Unhandled response: {0}'.format(pformat(response)))
if 'status' in response and response['status'] == 'pending':
open_orders.open_bid_order_id = response['id']
open_orders.open_bid_price = open_bid_price
open_orders.open_bid_rejections = Decimal('0.0')
file_logger.info('new bid @ {0}'.format(open_bid_price))
elif 'status' in response and response['status'] == 'rejected':
open_orders.open_bid_order_id = None
open_orders.open_bid_price = None
open_orders.open_bid_rejections += Decimal('0.04')
file_logger.warn('rejected: new bid @ {0}'.format(open_bid_price))
elif 'message' in response and response['message'] == 'Insufficient funds':
open_orders.open_bid_order_id = None
open_orders.open_bid_price = None
file_logger.warn('Insufficient USD')
elif 'message' in response and response['message'] == 'request timestamp expired':
open_orders.open_bid_order_id = None
open_orders.open_bid_price = None
file_logger.warn('Request timestamp expired')
else:
file_logger.error('Unhandled response: {0}'.format(pformat(response)))
continue
if open_orders.open_bid_order_id and not open_orders.open_bid_cancelled:
bid_too_far_out = open_orders.open_bid_price < (order_book.bids.price_tree.max_key()
- spreads.bid_too_far_adjustment_spread)
bid_too_close = open_orders.open_bid_price > (order_book.bids.price_tree.max_key()
- spreads.bid_too_close_adjustment_spread)
cancel_bid = bid_too_far_out or bid_too_close
if cancel_bid:
if bid_too_far_out:
file_logger.info('CANCEL: open bid {0} too far from best bid: {1} spread: {2}'.format(
open_orders.open_bid_price,
order_book.bids.price_tree.max_key(),
order_book.bids.price_tree.max_key() - open_orders.open_bid_price))
if bid_too_close:
file_logger.info('CANCEL: open bid {0} too close to best bid: {1} spread: {2}'.format(
open_orders.open_bid_price,
order_book.bids.price_tree.max_key(),
order_book.bids.price_tree.max_key() - open_orders.open_bid_price))
open_orders.cancel('bid')
continue
| 56.543478 | 120 | 0.56286 | 1,227 | 10,404 | 4.425428 | 0.08476 | 0.121547 | 0.144383 | 0.115838 | 0.87698 | 0.845672 | 0.815101 | 0.782136 | 0.756538 | 0.723573 | 0 | 0.01096 | 0.342272 | 10,404 | 183 | 121 | 56.852459 | 0.782552 | 0 | 0 | 0.634731 | 0 | 0.035928 | 0.10842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011976 | false | 0 | 0.053892 | 0 | 0.065868 | 0.005988 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
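Both strategies above build the same post-only limit-order payload inline before POSTing it to the exchange. Factored out, it looks like this — a sketch only; `make_post_only_order` is a hypothetical helper, not part of the module:

```python
from decimal import Decimal


def make_post_only_order(side: str, price: Decimal, size: str = "0.01") -> dict:
    # post_only=True asks the exchange to reject the order rather than
    # cross the book, which is why the strategies handle a "rejected"
    # status branch and track open_*_rejections.
    return {
        "size": size,
        "price": str(price),
        "side": side,
        "product_id": "BTC-USD",
        "post_only": True,
    }


print(make_post_only_order("buy", Decimal("101.57")))
```

The price is serialized with `str()` because the exchange API expects a decimal string, and `Decimal` avoids float rounding in the spread arithmetic.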
8d5eded15367b0145503fd6b69867e824fcc28a5 | 21 | py | Python | surfator/analysis/__init__.py | mir-group/surfator | 07ba060767ef04feec2e73ace193e77f6b9075f2 | [
"MIT"
] | 1 | 2020-06-05T10:11:21.000Z | 2020-06-05T10:11:21.000Z | surfator/analysis/__init__.py | mir-group/surfator | 07ba060767ef04feec2e73ace193e77f6b9075f2 | [
"MIT"
] | null | null | null | surfator/analysis/__init__.py | mir-group/surfator | 07ba060767ef04feec2e73ace193e77f6b9075f2 | [
"MIT"
] | null | null | null | from .coord import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d5fcdda7a9b098580dbdcfe16702084f5511bff | 37 | py | Python | cupy_alias/logic/ops.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 142 | 2018-06-07T07:43:10.000Z | 2021-10-30T21:06:32.000Z | cupy_alias/logic/ops.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 282 | 2018-06-07T08:35:03.000Z | 2021-03-31T03:14:32.000Z | cupy_alias/logic/ops.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 19 | 2018-06-19T11:07:53.000Z | 2021-05-13T20:57:04.000Z | from clpy.logic.ops import * # NOQA
| 18.5 | 36 | 0.702703 | 6 | 37 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 37 | 1 | 37 | 37 | 0.866667 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5d1cb5ba754e9d8d4a42d4121ad6742dfe1dabb | 161 | py | Python | app/main/__init__.py | scottfabini/flaskApp | 4ad332e5a5b41b1a0fc5e799e8ce96f50b83053b | [
"MIT"
] | null | null | null | app/main/__init__.py | scottfabini/flaskApp | 4ad332e5a5b41b1a0fc5e799e8ce96f50b83053b | [
"MIT"
] | null | null | null | app/main/__init__.py | scottfabini/flaskApp | 4ad332e5a5b41b1a0fc5e799e8ce96f50b83053b | [
"MIT"
] | null | null | null | from flask import Blueprint
main = Blueprint('main', __name__)
# this import is at the end due to an otherwise circular dependency
from . import views, errors
| 23 | 67 | 0.770186 | 24 | 161 | 5 | 0.791667 | 0.216667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 161 | 6 | 68 | 26.833333 | 0.902256 | 0.403727 | 0 | 0 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
5705277a4b81e3bc087113397eea566f356e24c5 | 6,690 | py | Python | btf_extractor/ubo2003.py | 2-propanol/BTF_extractor | 0ec5358504ab51aff6256b98f51d29e540012ce8 | [
"Zlib"
] | 1 | 2022-02-16T14:53:26.000Z | 2022-02-16T14:53:26.000Z | btf_extractor/ubo2003.py | 2-propanol/BTF_extractor | 0ec5358504ab51aff6256b98f51d29e540012ce8 | [
"Zlib"
] | 1 | 2021-02-05T10:04:20.000Z | 2021-04-11T13:45:01.000Z | btf_extractor/ubo2003.py | 2-propanol/BTF_extractor | 0ec5358504ab51aff6256b98f51d29e540012ce8 | [
"Zlib"
] | 1 | 2021-02-04T04:22:19.000Z | 2021-02-04T04:22:19.000Z | """BTFDBBのzipファイルをzipファイルのまま使用するためのライブラリ
BTFDBB UBO2003(*)形式, ATRIUM(**)形式のzipファイルを参照し、
・zipファイルに含まれる角度情報の取得
・「撮影条件の角度(tl, pl, tv, pv)」から
「画像の実体(ndarray形式(BGR, channels-last))」を取得
する関数を提供する
(*) http://cg.cs.uni-bonn.de/en/projects/btfdbb/download/ubo2003/
(**) http://cg.cs.uni-bonn.de/en/projects/btfdbb/download/atrium/
"""
from collections import Counter
from sys import stderr
from typing import Any, Tuple
from zipfile import ZipFile
import imageio
import numpy as np
from nptyping import NDArray
from simplejpeg import decode_jpeg
AnglesTuple = Tuple[int, int, int, int]
BGRImage = NDArray[(Any, Any, 3), np.uint8]
BGRImageHDR = NDArray[(Any, Any, 3), np.float32]
class Ubo2003:
"""BTFDBBのzipファイルから角度や画像を取り出す
角度は全て度数法(degree)を用いている。
zipファイルに含まれる角度情報の順番は保証せず、並べ替えもしない。
`angles_set`には`list`ではなく、順序の無い`set`を用いている。
画像の実体はopencvと互換性のあるndarray形式(BGR, channels-last)で出力する。
zipファイル要件:
f"tl{tl:03} pl{pl:03} tv{tv:03} pv{pv:03}.jpg"を格納している。
Attributes:
zip_filepath (str): コンストラクタに指定したzipファイルパス。
angles_set (set[tuple[int,int,int,int]]): zipファイルに含まれる画像の角度条件の集合。
Example:
>>> btf = Ubo2003("UBO_CORDUROY256.zip")
>>> angles_list = list(btf.angles_set)
>>> print(angles_list[0])
(0, 0, 0, 0)
>>> image = btf.angles_to_image(*angles_list[0])
>>> print(image.shape)
(256, 256, 3)
>>> print(image.dtype)
uint8
"""
def __init__(self, zip_filepath: str) -> None:
"""Specifies the zip file to use.
If the zip file contains duplicated angle conditions, the duplicates are
reported on stderr and a `RuntimeError` is raised.
"""
self.zip_filepath = zip_filepath
self.__z = ZipFile(zip_filepath)
# File paths are unique, so a set is sufficient for `filepath_set`
filepath_set = {path for path in self.__z.namelist() if path.endswith(".jpg")}
self.__angles_vs_filepath_dict = {
self._filename_to_angles(path): path for path in filepath_set
}
self.angles_set = frozenset(self.__angles_vs_filepath_dict.keys())
# If any angle condition is duplicated, find out which ones
if len(filepath_set) != len(self.angles_set):
angles_list = [self._filename_to_angles(path) for path in filepath_set]
angle_collection = Counter(angles_list)
for angles, counter in angle_collection.items():
if counter > 1:
print(
f"[BTF-Extractor] '{self.zip_filepath}' has "
+ f"{counter} files with condition {angles}.",
file=stderr,
)
raise RuntimeError(f"'{self.zip_filepath}' has duplicated conditions.")
@staticmethod
def _filename_to_angles(filename: str) -> AnglesTuple:
"""Extracts the tuple of angle `int`s (`tl`, `pl`, `tv`, `pv`) from a filename (or path)."""
# Count from the end of the string so the result does not depend on the path length
tl = int(filename[-25:-22])
pl = int(filename[-19:-16])
tv = int(filename[-13:-10])
pv = int(filename[-7:-4])
return (tl, pl, tv, pv)
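# Illustrative note (added sketch, not part of the original library): because the
# filename layout f"tl{tl:03} pl{pl:03} tv{tv:03} pv{pv:03}.jpg" is fixed-width,
# the negative slices above always land on the three digits of each field,
# regardless of any directory prefix, e.g.:
#   _filename_to_angles("some/dir/tl045 pl090 tv000 pv180.jpg") == (45, 90, 0, 180)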
def angles_to_image(self, tl: int, pl: int, tv: int, pv: int) -> BGRImage:
"""Returns the image for the angle condition (`tl`, `pl`, `tv`, `pv`) as an ndarray.
Raises `ValueError` if no image with that condition exists in the zip file.
"""
key = (tl, pl, tv, pv)
filepath = self.__angles_vs_filepath_dict.get(key)
if not filepath:
raise ValueError(
f"Condition {key} does not exist in '{self.zip_filepath}'."
)
with self.__z.open(filepath) as f:
return decode_jpeg(f.read(), colorspace="BGR")
class AtriumHdr:
"""Extracts angles and images from an ATRIUM (HDR) zip file.
All angles are expressed in degrees.
The order of the angle entries in the zip file is not guaranteed and is not
sorted; `angles_set` is therefore an unordered `set` rather than a `list`.
Images are returned as OpenCV-compatible ndarrays (BGR, channels-last).
Zip file requirements:
    Contains files named f"tl{tl:03} pl{pl:03} tv{tv:03} pv{pv:03}.hdr".
Attributes:
    zip_filepath (str): Path of the zip file passed to the constructor.
    angles_set (set[tuple[int,int,int,int]]): Set of angle conditions of the images in the zip file.
Example:
>>> btf = AtriumHdr("CEILING_HDR.zip")
>>> angles_list = list(btf.angles_set)
>>> print(angles_list[0])
(0, 0, 0, 0)
>>> image = btf.angles_to_image(*angles_list[0])
>>> print(image.shape)
(256, 256, 3)
>>> print(image.dtype)
float32
"""
FIXED_HEADER = "-Y 256 +X 256".encode()
def __init__(self, zip_filepath: str) -> None:
"""Specifies the zip file to use.
If the zip file contains duplicated angle conditions, the duplicates are
reported on stderr and a `RuntimeError` is raised.
"""
self.zip_filepath = zip_filepath
self.__z = ZipFile(zip_filepath)
# File paths are unique, so a set is sufficient for `filepath_set`
filepath_set = {path for path in self.__z.namelist() if path.endswith(".hdr")}
self.__angles_vs_filepath_dict = {
self._filename_to_angles(path): path for path in filepath_set
}
self.angles_set = frozenset(self.__angles_vs_filepath_dict.keys())
# If any angle condition is duplicated, find out which ones
if len(filepath_set) != len(self.angles_set):
angles_list = [self._filename_to_angles(path) for path in filepath_set]
angle_collection = Counter(angles_list)
for angles, counter in angle_collection.items():
if counter > 1:
print(
f"[BTF-Extractor] '{self.zip_filepath}' has "
+ f"{counter} files with condition {angles}.",
file=stderr,
)
raise RuntimeError(f"'{self.zip_filepath}' has duplicated conditions.")
@staticmethod
def _filename_to_angles(filename: str) -> AnglesTuple:
"""Extracts the tuple of angle `int`s (`tl`, `pl`, `tv`, `pv`) from a filename (or path)."""
# Count from the end of the string so the result does not depend on the path length
tl = int(filename[-25:-22])
pl = int(filename[-19:-16])
tv = int(filename[-13:-10])
pv = int(filename[-7:-4])
return (tl, pl, tv, pv)
def angles_to_image(self, tl: int, pl: int, tv: int, pv: int) -> BGRImageHDR:
"""Returns the image for the angle condition (`tl`, `pl`, `tv`, `pv`) as an ndarray.
Raises `ValueError` if no image with that condition exists in the zip file.
"""
key = (tl, pl, tv, pv)
filepath = self.__angles_vs_filepath_dict.get(key)
if not filepath:
raise ValueError(
f"Condition {key} does not exist in '{self.zip_filepath}'."
)
with self.__z.open(filepath) as f:
# "+X 256 -Y 256"を"-Y 256 +X 256"に書き換える
raw_bytes = f.read()
fixed_bytes = raw_bytes[:53] + self.FIXED_HEADER + raw_bytes[66:]
return imageio.imread(fixed_bytes, format="HDR-FI")[::-1]
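# Note on the header splice above (added sketch, inferred from the offsets used):
# raw_bytes[53:66] is assumed to hold the 13-byte Radiance resolution line
# "+X 256 -Y 256". FIXED_HEADER ("-Y 256 +X 256") has the same length, so the
# splice keeps the file size intact and lets imageio decode the file with the
# standard orientation; the trailing [::-1] then flips the rows back.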
| 34.132653 | 86 | 0.603139 | 782 | 6,690 | 4.987212 | 0.226343 | 0.045128 | 0.038462 | 0.018462 | 0.813077 | 0.800513 | 0.800513 | 0.800513 | 0.800513 | 0.800513 | 0 | 0.024878 | 0.266966 | 6,690 | 195 | 87 | 34.307692 | 0.769984 | 0.34843 | 0 | 0.674419 | 0 | 0 | 0.098522 | 0.031527 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.093023 | 0 | 0.244186 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
574c61fa8e87c91a0ae5e2ae46b3618b80628fb2 | 2,269 | py | Python | tests/data/test_clip.py | maki-nage/rxsci | 64c9956752cbdd4c65aa9f054b6b28318a056625 | [
"MIT"
] | 3 | 2021-05-03T13:40:46.000Z | 2022-03-06T07:59:30.000Z | tests/data/test_clip.py | maki-nage/rxsci | 64c9956752cbdd4c65aa9f054b6b28318a056625 | [
"MIT"
] | 9 | 2020-10-22T21:08:10.000Z | 2021-08-05T09:01:26.000Z | tests/data/test_clip.py | maki-nage/rxsci | 64c9956752cbdd4c65aa9f054b6b28318a056625 | [
"MIT"
] | 2 | 2021-01-05T16:48:54.000Z | 2021-08-07T12:51:01.000Z | import pytest
import rx
import rxsci as rs
def test_clip():
source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expected_result = [3, 3, 3, 4, 5, 6, 7, 8, 8, 8]
actual_result = []
rx.from_(source).pipe(
rs.data.clip(lower_bound=3, higher_bound=8)
).subscribe(on_next=actual_result.append)
assert actual_result == expected_result
def test_clip_lower_bound():
source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expected_result = [3, 3, 3, 4, 5, 6, 7, 8, 9, 10]
actual_result = []
rx.from_(source).pipe(
rs.data.clip(lower_bound=3)
).subscribe(on_next=actual_result.append)
assert actual_result == expected_result
def test_clip_higher_bound():
source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expected_result = [1, 2, 3, 4, 5, 6, 7, 8, 8, 8]
actual_result = []
rx.from_(source).pipe(
rs.data.clip(higher_bound=8)
).subscribe(on_next=actual_result.append)
assert actual_result == expected_result
def test_clip_no_bound():
source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
actual_result = []
rx.from_(source).pipe(
rs.data.clip()
).subscribe(on_next=actual_result.append)
assert actual_result == source
def test_clip_invalid_bound():
source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
with pytest.raises(ValueError):
rx.from_(source).pipe(
rs.data.clip(lower_bound=8, higher_bound=3)
).subscribe()
def test_clip_mux():
source = [
rs.OnCreateMux((1, None)),
rs.OnNextMux((1, None), 1),
rs.OnNextMux((1, None), 2),
rs.OnNextMux((1, None), 3),
rs.OnNextMux((1, None), 4),
rs.OnNextMux((1, None), 5),
rs.OnNextMux((1, None), 6),
rs.OnCompletedMux((1, None)),
]
actual_result = []
rx.from_(source).pipe(
rs.cast_as_mux_observable(),
rs.data.clip(lower_bound=2, higher_bound=5)
).subscribe(on_next=actual_result.append)
assert actual_result == [
rs.OnCreateMux((1, None)),
rs.OnNextMux((1, None), 2),
rs.OnNextMux((1, None), 2),
rs.OnNextMux((1, None), 3),
rs.OnNextMux((1, None), 4),
rs.OnNextMux((1, None), 5),
rs.OnNextMux((1, None), 5),
rs.OnCompletedMux((1, None)),
]
| 25.211111 | 55 | 0.57955 | 341 | 2,269 | 3.686217 | 0.134897 | 0.063644 | 0.114558 | 0.152745 | 0.821002 | 0.805091 | 0.802705 | 0.778839 | 0.735879 | 0.61257 | 0 | 0.072189 | 0.255178 | 2,269 | 89 | 56 | 25.494382 | 0.671598 | 0 | 0 | 0.575758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075758 | 1 | 0.090909 | false | 0 | 0.045455 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9393f6adcb4442497c57a1c84408087ba3169959 | 19,318 | py | Python | tests/test_model.py | kawaja/tpn-sdk | d81aa9cbbbd9751c198ca39460e0f5470487121b | [
"Apache-2.0"
] | null | null | null | tests/test_model.py | kawaja/tpn-sdk | d81aa9cbbbd9751c198ca39460e0f5470487121b | [
"Apache-2.0"
] | 31 | 2021-09-20T02:10:54.000Z | 2022-03-31T02:15:10.000Z | tests/test_model.py | kawaja/tpn-sdk | d81aa9cbbbd9751c198ca39460e0f5470487121b | [
"Apache-2.0"
] | null | null | null | from unittest.mock import MagicMock
import unittest
from telstra_pn.models import tpn_model
import telstra_pn
import telstra_pn.exceptions
class TestModelBasics(unittest.TestCase):
def setUp(self):
telstra_pn.__flags__['debug'] = True
telstra_pn.__flags__['debug_getattr'] = True
# No longer a fatal error to omit _get_data()
# def test_model_refresh_with_no_get_data(self):
# class Model(tpn_model.TPNModel):
# def _update_data(self, data):
# pass
# m = Model(MagicMock())
# m.refresh()
# self.assertFalse(m._check_method_overridden('_get_data'))
# No longer a fatal error to omit _update_data()
# def test_model_missing_update_data(self):
# class Model(tpn_model.TPNModel):
# def _get_data(self):
# pass
# m = Model(MagicMock())
# with self.assertRaisesRegex(NotImplementedError, ''):
# m._update_data(MagicMock())
# self.assertTrue(m._check_method_overridden('_get_data'))
def test_model_create(self):
class Model(tpn_model.TPNModel):
def _get_data(self):
pass
def _update_data(self, data):
pass
session_mock = MagicMock()
m = Model(session_mock)
self.assertEqual(m.session, session_mock)
self.assertEqual(m.data, {})
self.assertEqual(m._is_refreshing, False)
def test_model_create_with_missing_super(self):
class Model(tpn_model.TPNModel):
def __init__(self, parent, **data):
pass
def _get_data(self):
pass
def _update_data(self, data):
pass
with self.assertRaises(telstra_pn.exceptions.TPNLibraryInternalError):
session_mock = MagicMock()
m = Model(session_mock)
m.look_at_any_attribute
def test_model_create_with_data(self):
session_mock = MagicMock()
class Model(tpn_model.TPNModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self._update_data({'key1': 'data1', 'key2': 'data2'})
def _get_data(self) -> dict:
pass
def _update_data(self, data):
self.data = data
m = Model(session_mock)
self.assertEqual(m.session, session_mock)
self.assertEqual(m.data, {'key1': 'data1', 'key2': 'data2'}, m.data)
def test_model_create_with_refresh(self):
session_mock = MagicMock()
class Model(tpn_model.TPNModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self._url_path = 'none'
self.refresh()
def _get_data(self) -> dict:
return {'key1': 'data1', 'key2': 'data2'}
def _update_data(self, data):
self.data = data
m = Model(session_mock)
self.assertEqual(m.session, session_mock)
self.assertEqual(m.data, {'key1': 'data1', 'key2': 'data2'}, m.data)
class TestListModelBasics(unittest.TestCase):
def setUp(self):
telstra_pn.__flags__['debug'] = True
telstra_pn.__flags__['debug_getattr'] = True
# No longer a fatal error to omit _update_data()
# def test_list_model_missing_update_data(self):
# with self.assertRaisesRegex(NotImplementedError, ''):
# class ListModel(tpn_model.TPNListModel):
# def _get_data(self):
# pass
# ListModel(MagicMock())._update_data(MagicMock())
def test_list_model_missing_primary_key(self):
class ListModel(tpn_model.TPNListModel):
def _get_data(self):
pass
def _update_data(self, data):
pass
with self.assertRaisesRegex(
telstra_pn.exceptions.TPNLibraryInternalError,
'has not implemented required attribute "_primary_key"'):
ListModel(MagicMock()).additem(1)
def test_list_model_create(self):
class ListModel(tpn_model.TPNListModel):
def _get_data(self):
pass
def _update_data(self, data):
pass
session_mock = MagicMock()
m = ListModel(session_mock)
self.assertEqual(m.session, session_mock)
self.assertEqual(m.data, {})
self.assertEqual(m.all, {})
self.assertEqual(m._is_refreshing, False)
def test_list_model_create_with_missing_super(self):
class ListModel(tpn_model.TPNListModel):
def __init__(self, parent, **data):
pass
def _get_data(self):
pass
def _update_data(self, data):
pass
with self.assertRaises(telstra_pn.exceptions.TPNLibraryInternalError):
session_mock = MagicMock()
m = ListModel(session_mock)
m.look_at_any_attribute
def test_list_model_create_with_refresh(self):
session_mock = MagicMock()
class Model(tpn_model.TPNModel):
def __init__(self, session, **data):
super().__init__(session)
self._update_data(data)
def _update_data(self, data):
self.data = data
def display(self):
return 'child'
class ListModel(tpn_model.TPNListModel):
def __init__(self, session):
super().__init__(session)
self._primary_key = 'key1'
self._refkeys = ['key1']
self._url_path = 'none'
self.refresh()
def _get_data(self) -> list:
return [{'key1': 'data1', 'key2': 'data2'}]
def _update_data(self, data):
self.data = {**self.data, 'list': data}
for item in data:
self.additem(Model(self, **item))
ml = ListModel(session_mock)
self.assertEqual(ml.session, session_mock)
self.assertEqual(len(ml), 1)
self.assertEqual(len(ml.all), 1)
self.assertEqual(ml['data1'].key1, 'data1', ml.all)
self.assertEqual(ml['data1'].key2, 'data2', ml.all)
class TestModelBehaviour(unittest.TestCase):
getdata_mock = MagicMock()
def setUp(self):
telstra_pn.__flags__['debug'] = True
telstra_pn.__flags__['debug_getattr'] = True
session_mock = MagicMock()
TestModelBehaviour.getdata_mock.reset_mock()
TestModelBehaviour.getdata_mock.return_value = {
'key1': 'data1',
'key2': 'data2'
}
class Model(tpn_model.TPNModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self._url_path = 'none'
self.refresh()
def _update_data(self, data: dict):
self.data = data
self._update_keys(data)
def _get_data(self) -> dict:
return {**TestModelBehaviour.getdata_mock()}
def display(self):
return 'child'
self.m = Model(session_mock)
return super().setUp()
def test_model_gettattr_exists(self):
self.assertEqual(self.m.key1, 'data1')
def test_model_refresh_changed_attributes(self):
self.assertEqual(self.m.key1, 'data1')
self.assertEqual(len(self.m.data), 2)
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 1)
TestModelBehaviour.getdata_mock.return_value = {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}
self.m.refresh()
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 2)
self.assertEqual(self.m.key1, 'data+1')
self.assertEqual(self.m.key2, 'data+2')
self.assertEqual(self.m.key3, 'data+3')
self.assertEqual(len(self.m.data), 3)
def test_model_force_refresh_missing_attribute(self):
self.m.refresh_if_null = ['key3']
with self.assertRaises(telstra_pn.exceptions.TPNLogicalError):
self.m.missing_attribute
def test_model_force_refresh_changed_attributes(self):
self.assertEqual(self.m.key1, 'data1')
self.assertEqual(len(self.m.data), 2)
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 1)
TestModelBehaviour.getdata_mock.return_value = {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}
self.m.refresh_if_null = ['key3']
self.assertEqual(self.m.key3, 'data+3')
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 2)
self.assertEqual(self.m.key1, 'data+1')
self.assertEqual(self.m.key2, 'data+2')
self.assertEqual(len(self.m.data), 3)
def test_repr_single(self):
self.assertEqual(str(self.m), 'child')
def test_repr_parent(self):
session_mock = MagicMock()
self.assertEqual(str(self.m), 'child')
class ModelParent(tpn_model.TPNModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self.refresh()
def _get_data(self) -> dict:
return TestModelBehaviour.getdata_mock()
def _update_data(self, data):
self.data = data
def display(self):
return 'parent'
n = ModelParent(session_mock)
self.m.parent = n
self.assertEqual(str(n), 'parent')
self.assertEqual(str(self.m), 'parent / child')
class TestListModelBehaviour(unittest.TestCase):
getdata_mock = MagicMock()
def setUp(self):
telstra_pn.__flags__['debug'] = True
telstra_pn.__flags__['debug_getattr'] = True
session_mock = MagicMock()
TestModelBehaviour.getdata_mock.reset_mock()
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}]
class Model(tpn_model.TPNModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self._update_data(data)
def _update_data(self, data):
self.data = data
self.extradata = 'extra'
def display(self):
return 'child'
class ListModel(tpn_model.TPNListModel):
def __init__(self, parent, **data):
super().__init__(session_mock)
self._primary_key = 'key1'
self._refkeys = ['key1', 'key2']
self._url_path = 'none'
self.refresh()
def _update_data(self, data):
self.data = {**self.data, 'list': data}
for item in data:
self.additem(Model(self, **item))
def _get_data(self) -> dict:
return TestModelBehaviour.getdata_mock()
def display(self):
return f'list of {len(self.all)} item(s)'
self.ml = ListModel(session_mock)
return super().setUp()
def test_list_model_contains_exists(self):
self.assertIn('data1', self.ml)
self.assertIn('data2', self.ml)
def test_list_model_contains_missing(self):
self.assertNotIn('data3', self.ml)
def test_list_model_get_exists(self):
self.assertEqual(self.ml['data1'].key2, 'data2')
self.assertEqual(self.ml['data2'].key1, 'data1')
def test_list_model_get_missing(self):
self.assertIsNone(self.ml['data3'])
def test_reset(self):
self.assertEqual(len(self.ml), 1)
self.ml.reset()
self.assertEqual(len(self.ml), 0)
def test_list_model_create_with_missing_primary_key(self):
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}, {
'key2': 'data+2',
'key3': 'data+3'
}]
with self.assertRaisesRegex(
telstra_pn.exceptions.TPNLogicalError,
'attempted to add item.*does not contain primary key'):
self.ml.refresh()
def test_list_model_lookup_with_missing_refkey_contains(self):
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}, {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}]
self.ml.refresh()
self.ml._refkeys = ['key3']
with self.assertRaisesRegex(ValueError, 'refkey key3 missing'):
'data+3' in self.ml
def test_list_model_lookup_with_missing_refkey_get(self):
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}, {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}]
self.ml.refresh()
self.ml._refkeys = ['key3']
with self.assertRaisesRegex(ValueError, 'refkey key3 missing'):
self.ml['data+3']
def test_list_model_lookup_attribute_missing_contains(self):
self.ml._refkeys = ['nothere']
with self.assertRaisesRegex(ValueError, 'refkey nothere missing'):
'extra' in self.ml
def test_list_model_lookup_attribute_missing_get(self):
self.ml._refkeys = ['nothere']
with self.assertRaisesRegex(ValueError, 'refkey nothere missing'):
self.ml['extra']
def test_list_model_lookup_attribute_exists_contains(self):
self.ml._refkeys = ['extradata']
self.assertIn('extra', self.ml)
def test_list_model_lookup_attribute_exists_get(self):
self.ml._refkeys = ['extradata']
self.assertEqual(self.ml['extra'].key1, 'data1')
def test_list_model_no_refkeys_contains(self):
self.assertIn('data1', self.ml)
del self.ml._refkeys
self.assertNotIn('data1', self.ml)
def test_list_model_no_refkeys_get(self):
self.assertEqual(self.ml['data1'].key1, 'data1')
del self.ml._refkeys
self.assertIsNone(self.ml['data1'])
def test_list_model_refresh_changed_attributes(self):
self.assertIn('data1', self.ml)
self.assertIn('data1', self.ml)
self.assertNotIn('data+1', self.ml)
self.assertNotIn('data+2', self.ml)
self.assertNotIn('data+3', self.ml)
self.assertEqual(len(self.ml), 1)
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 1)
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}, {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}]
self.ml.refresh()
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 2)
self.assertIn('data1', self.ml)
self.assertIn('data1', self.ml)
self.assertIn('data+1', self.ml)
self.assertIn('data+2', self.ml)
self.assertNotIn('data+3', self.ml)
self.assertEqual(len(self.ml), 2)
def test_list_iteration(self):
self.assertEqual(TestModelBehaviour.getdata_mock.call_count, 1)
TestModelBehaviour.getdata_mock.return_value = [{
'key1': 'data1',
'key2': 'data2'
}, {
'key1': 'data+1',
'key2': 'data+2',
'key3': 'data+3'
}]
self.ml.refresh()
for item in self.ml:
self.assertEqual(item.extradata, 'extra')
def test_repr_single(self):
self.assertEqual(str(self.ml), 'list of 1 item(s)')
class TestModelSubclassMixin(unittest.TestCase):
def setUp(self):
telstra_pn.__flags__['debug'] = True
telstra_pn.__flags__['debug_getattr'] = True
def test_mixin_create(self):
class Model(tpn_model.TPNModel, tpn_model.TPNModelSubclassesMixin):
def __init__(self, parent, **data):
self.type = data['type']
def _get_data(self):
pass
def _update_data(self, data):
pass
class Subclass1(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == 'subclass1'
class Subclass2(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == 'subclass2'
session_mock = MagicMock()
m1 = Model(session_mock, type='subclass1')
m2 = Model(session_mock, type='subclass2')
self.assertIsInstance(m1, Subclass1, type(m1))
self.assertIsInstance(m2, Subclass2, type(m2))
def test_mixin_missing_is_a(self):
class Model(tpn_model.TPNModel, tpn_model.TPNModelSubclassesMixin):
def __init__(self, parent, **data):
self.type = data['type']
def _get_data(self):
pass
def _update_data(self, data):
pass
class Subclass1(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == 'subclass1'
class Subclass2(Model):
pass
session_mock = MagicMock()
with self.assertRaisesRegex(
telstra_pn.exceptions.TPNLibraryInternalError,
'not implemented the required "_is_a.*" static method'):
Model(session_mock, type='subclass1')
def test_mixin_no_matching_subclasses(self):
class Model(tpn_model.TPNModel, tpn_model.TPNModelSubclassesMixin):
def __init__(self, parent, **data):
self.type = data['type']
def _get_data(self):
pass
def _update_data(self, data):
pass
class Subclass1(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == parent.look_for_class
class Subclass2(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == parent.look_for_class
session_mock = MagicMock(look_for_class='subclass1')
m1 = Model(session_mock, type='subclass2')
self.assertIsInstance(m1, Model, type(m1))
def test_mixin_two_matching_subclasses(self):
class Model(tpn_model.TPNModel, tpn_model.TPNModelSubclassesMixin):
def __init__(self, parent, **data):
self.type = data['type']
def _get_data(self):
pass
def _update_data(self, data):
pass
class Subclass1(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == parent.look_for_class
class Subclass2(Model):
@staticmethod
def _is_a(data, parent):
return data['type'] == parent.look_for_class
session_mock = MagicMock(look_for_class='subclass1')
with self.assertRaisesRegex(
telstra_pn.exceptions.TPNLibraryInternalError,
'Could not determine unique .* type .found 2 potentials.'):
Model(session_mock, type='subclass1')
| 32.797963 | 78 | 0.578683 | 2,088 | 19,318 | 5.071839 | 0.074234 | 0.043815 | 0.030595 | 0.028706 | 0.836733 | 0.77932 | 0.725496 | 0.667328 | 0.627384 | 0.603683 | 0 | 0.015089 | 0.307019 | 19,318 | 588 | 79 | 32.853742 | 0.775977 | 0.049332 | 0 | 0.740909 | 0 | 0 | 0.074318 | 0 | 0 | 0 | 0 | 0 | 0.186364 | 1 | 0.222727 | false | 0.05 | 0.011364 | 0.038636 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9399bbdaa4f6588bf5195b13b398bc3316a10ca0 | 123 | py | Python | CheckingScripts/CheckItem03.py | WrongGoodBye/GeneralAlarmSystem | de1df41910aeee8f2a96124845bef4906394083a | [
"MIT"
] | null | null | null | CheckingScripts/CheckItem03.py | WrongGoodBye/GeneralAlarmSystem | de1df41910aeee8f2a96124845bef4906394083a | [
"MIT"
] | null | null | null | CheckingScripts/CheckItem03.py | WrongGoodBye/GeneralAlarmSystem | de1df41910aeee8f2a96124845bef4906394083a | [
"MIT"
] | null | null | null | import os
import sys
import numpy as np
def get_status():
# This is an example check: it always returns the BAD status code
return 2
| 15.375 | 50 | 0.715447 | 21 | 123 | 4.142857 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010753 | 0.243902 | 123 | 7 | 51 | 17.571429 | 0.924731 | 0.357724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
93c2a93891cb6dd6306193d690b02be30c25a679 | 1,200 | py | Python | integration_tests/strax_tests/test_externalapi_endpoints.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 8 | 2021-06-30T20:44:22.000Z | 2021-12-07T14:42:22.000Z | integration_tests/strax_tests/test_externalapi_endpoints.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 2 | 2021-07-01T11:50:18.000Z | 2022-01-25T18:39:49.000Z | integration_tests/strax_tests/test_externalapi_endpoints.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 4 | 2021-07-01T04:36:42.000Z | 2021-09-17T10:54:19.000Z | import pytest
from pystratis.nodes import StraxNode
from pystratis.core.types import Money
@pytest.mark.integration_test
@pytest.mark.strax_integration_test
def test_estimate_conversion_gas(strax_hot_node: StraxNode):
response = strax_hot_node.externalapi.estimate_conversion_gas()
assert isinstance(response, int)
@pytest.mark.integration_test
@pytest.mark.strax_integration_test
def test_estimate_conversion_fee(strax_hot_node: StraxNode):
response = strax_hot_node.externalapi.estimate_conversion_fee()
assert isinstance(response, Money)
@pytest.mark.integration_test
@pytest.mark.strax_integration_test
def test_gasprice(strax_hot_node: StraxNode):
response = strax_hot_node.externalapi.gas_price()
assert isinstance(response, int)
@pytest.mark.integration_test
@pytest.mark.strax_integration_test
def test_stratis_price(strax_hot_node: StraxNode):
response = strax_hot_node.externalapi.stratis_price()
assert isinstance(response, Money)
@pytest.mark.integration_test
@pytest.mark.strax_integration_test
def test_ethereum_price(strax_hot_node: StraxNode):
response = strax_hot_node.externalapi.ethereum_price()
assert isinstance(response, Money)
| 30.769231 | 67 | 0.826667 | 156 | 1,200 | 6.019231 | 0.185897 | 0.106496 | 0.127796 | 0.13312 | 0.86049 | 0.818956 | 0.818956 | 0.818956 | 0.818956 | 0.763578 | 0 | 0 | 0.095833 | 1,200 | 38 | 68 | 31.578947 | 0.865438 | 0 | 0 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 1 | 0.178571 | false | 0 | 0.107143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9e268e8fdba020a006d3a1c720db6a69f1738d9e | 48 | py | Python | main.py | tmlee/pypi-api | cc98961438c68b7f3503d01ecff78aea04eda7a6 | [
"MIT"
] | null | null | null | main.py | tmlee/pypi-api | cc98961438c68b7f3503d01ecff78aea04eda7a6 | [
"MIT"
] | null | null | null | main.py | tmlee/pypi-api | cc98961438c68b7f3503d01ecff78aea04eda7a6 | [
"MIT"
] | null | null | null | from pypiapi import fetch
print(fetch("django"))
1956d5a6c88df82651c971d976f134a04ad8c08b | 108 | py | Python | srcs/parser/tokens/semicolon_token.py | pomponchik/computor_v2 | 742b3f3b47c8d46806b2f733b4ec07ae63a23f00 | [
"MIT"
] | null | null | null | srcs/parser/tokens/semicolon_token.py | pomponchik/computor_v2 | 742b3f3b47c8d46806b2f733b4ec07ae63a23f00 | [
"MIT"
] | null | null | null | srcs/parser/tokens/semicolon_token.py | pomponchik/computor_v2 | 742b3f3b47c8d46806b2f733b4ec07ae63a23f00 | [
"MIT"
] | null | null | null | from srcs.parser.tokens.abstract_token import AbstractToken
class SemicolonToken(AbstractToken):
pass
| 18 | 59 | 0.824074 | 12 | 108 | 7.333333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12037 | 108 | 5 | 60 | 21.6 | 0.926316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
197af2188fdad9acd994a652ac356b8ab6446917 | 3,905 | py | Python | admin/handler/companyHandler.py | xin1195/smart | 11815b8a63f2459300e8aaad82b539cfef8a7546 | [
"Apache-2.0"
] | 1 | 2016-05-09T12:29:47.000Z | 2016-05-09T12:29:47.000Z | admin/handler/companyHandler.py | xin1195/smartSearch | 11815b8a63f2459300e8aaad82b539cfef8a7546 | [
"Apache-2.0"
] | null | null | null | admin/handler/companyHandler.py | xin1195/smartSearch | 11815b8a63f2459300e8aaad82b539cfef8a7546 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# _*_coding:utf-8_*_
import traceback
import tornado.web
from tornado import gen
from admin.handler.baseHandler import BaseHandler
from setting import logger
class AdminCompanyHandler(BaseHandler):
@tornado.web.authenticated
@gen.coroutine
def get(self, *args, **kwargs):
res_msg = ""
companys = []
num = int(self.get_argument("num", 15))
page = int(self.get_argument("page", 1))
total_count = 0
try:
query = {}
show = {"_id": 0}
cursor = self.db.bijia_company.find(query, show)
while (yield cursor.fetch_next):
company = cursor.next_object()
companys.append(company)
total_count = yield self.db.bijia_company.find().count()
except Exception:
logger.error(traceback.format_exc())
self.render("admin/company_list.html", companys=companys, res_msg=res_msg, total_count=total_count, page=page, num=num)
class AdminCompanyAddHandler(BaseHandler):
@tornado.web.authenticated
@gen.coroutine
def get(self, *args, **kwargs):
res_msg = ""
company = {}
self.render("admin/company_add.html", res_msg=res_msg, form_action="/admin/company/add", company=company)
@gen.coroutine
def post(self, *args, **kwargs):
name = self.get_argument("name", "")
code = self.get_argument("code", "")
address = self.get_argument("address", "")
email = self.get_argument("email", "")
tell_phone = self.get_argument("tell_phone", "")
website = self.get_argument("website", "")
try:
company_dict = {
"name": name,
"code": code,
"address": address,
"email": email,
"tell_phone": tell_phone,
"website": website,
}
query = {"code": code}
yield self.db.bijia_company.update(query, company_dict, upsert=True)
except Exception:
logger.error(traceback.format_exc())
self.redirect("/admin/company")
class AdminCompanyUpdateHandler(BaseHandler):
@tornado.web.authenticated
@gen.coroutine
def get(self, *args, **kwargs):
res_msg = ""
company = {}
try:
code = self.get_argument("code", "")
query = {"code": code}
show = {"_id": 0}
company = yield self.db.bijia_company.find_one(query, show)
except Exception:
logger.error(traceback.format_exc())
self.render("admin/company_add.html", res_msg=res_msg, form_action="/admin/company/add", company=company)
@gen.coroutine
def post(self, *args, **kwargs):
name = self.get_argument("name", "")
code = self.get_argument("code", "")
address = self.get_argument("address", "")
email = self.get_argument("email", "")
tell_phone = self.get_argument("tell_phone", "")
website = self.get_argument("website", "")
try:
company_dict = {
"name": name,
"code": code,
"address": address,
"email": email,
"tell_phone": tell_phone,
"website": website,
}
query = {"code": code}
yield self.db.bijia_company.update(query, {"$set": company_dict}, upsert=True)
except Exception:
logger.error(traceback.format_exc())
self.redirect("/admin/company")
class AdminCompanyDeleteHandler(BaseHandler):
@tornado.web.authenticated
@gen.coroutine
def get(self, *args, **kwargs):
try:
code = self.get_argument("code", "")
query = {"code": code}
self.db.bijia_company.remove(query)
except Exception:
logger.error(traceback.format_exc())
self.redirect("/admin/company")
| 33.663793 | 128 | 0.566453 | 412 | 3,905 | 5.208738 | 0.201456 | 0.05219 | 0.111836 | 0.050326 | 0.739515 | 0.729264 | 0.704101 | 0.704101 | 0.704101 | 0.667754 | 0 | 0.00292 | 0.298335 | 3,905 | 115 | 129 | 33.956522 | 0.780292 | 0.010243 | 0 | 0.742574 | 0 | 0 | 0.086461 | 0.017344 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059406 | false | 0 | 0.049505 | 0 | 0.148515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1988b2a9279600cef4470c1392fba66e03cca78b | 45 | py | Python | src/importchecker/tests/fixture/absimportattrassignment.py | zopefoundation/importchecker | f30daa3cf90506c90e56104e29599c275fa14d1a | [
"ZPL-2.1"
] | 4 | 2015-06-08T18:11:35.000Z | 2021-03-24T13:36:52.000Z | src/importchecker/tests/fixture/absimportattrassignment.py | zopefoundation/importchecker | f30daa3cf90506c90e56104e29599c275fa14d1a | [
"ZPL-2.1"
] | 6 | 2017-03-01T04:50:44.000Z | 2020-12-07T08:31:26.000Z | src/importchecker/tests/fixture/absimportattrassignment.py | zopefoundation/importchecker | f30daa3cf90506c90e56104e29599c275fa14d1a | [
"ZPL-2.1"
] | 2 | 2018-02-16T12:26:09.000Z | 2018-02-19T13:13:40.000Z | import sys.stderr
sys.stderr.__foo__ = 'bar'
| 15 | 26 | 0.755556 | 7 | 45 | 4.285714 | 0.714286 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 2 | 27 | 22.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
19a371e8b746b22cf7bdb1a8431f6636b995c36f | 29 | py | Python | fuzzing/__init__.py | jrs1061/wheatley | bd1143413495ef317970b9c6cedbc4903fdbf7a9 | [
"MIT"
] | 14 | 2020-08-16T21:41:13.000Z | 2021-07-13T01:15:01.000Z | fuzzing/__init__.py | jrs1061/wheatley | bd1143413495ef317970b9c6cedbc4903fdbf7a9 | [
"MIT"
] | 121 | 2020-08-13T16:54:46.000Z | 2021-09-17T10:32:04.000Z | fuzzing/__init__.py | Kneasle/wheatley | 9141bf8511dce737208731e55bfe138d48845319 | [
"MIT"
] | 10 | 2020-12-20T03:52:47.000Z | 2021-11-22T14:46:15.000Z | from .run_fuzzing import run
| 14.5 | 28 | 0.827586 | 5 | 29 | 4.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5feff8caafb772def6692d73cd6bad1a3e9670e0 | 152 | py | Python | tests/conftest.py | lRomul/rosny | 63e581be8144082065ce26a7de9b3c6e164b77e6 | [
"MIT"
] | 6 | 2021-06-17T13:17:56.000Z | 2021-07-17T11:54:28.000Z | tests/conftest.py | lRomul/rosny | 63e581be8144082065ce26a7de9b3c6e164b77e6 | [
"MIT"
] | null | null | null | tests/conftest.py | lRomul/rosny | 63e581be8144082065ce26a7de9b3c6e164b77e6 | [
"MIT"
] | 1 | 2021-08-03T23:21:25.000Z | 2021-08-03T23:21:25.000Z | import pytest
from rosny.timing import LoopTimeMeter
@pytest.fixture(scope='function')
def time_meter() -> LoopTimeMeter:
    return LoopTimeMeter()
| 16.888889 | 38 | 0.769737 | 17 | 152 | 6.823529 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 152 | 8 | 39 | 19 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
273d0d12d868c60b31fbb01baca532af56683f21 | 32 | py | Python | breviar/core/providers/__init__.py | hairygeek/breviar | b563f3887c367ce6e859b69cfb2513081335969b | [
"Apache-2.0"
] | 1 | 2020-04-13T12:14:50.000Z | 2020-04-13T12:14:50.000Z | breviar/core/providers/__init__.py | hairygeek/breviar | b563f3887c367ce6e859b69cfb2513081335969b | [
"Apache-2.0"
] | null | null | null | breviar/core/providers/__init__.py | hairygeek/breviar | b563f3887c367ce6e859b69cfb2513081335969b | [
"Apache-2.0"
] | null | null | null | from .bitly import BitlyProvider | 32 | 32 | 0.875 | 4 | 32 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
276803421aa369f97e613640959980f3549fa9d3 | 223 | bzl | Python | go_third_party.bzl | timoth-y/iot-blockchain-contracts | 74bbae7f2f04dbe3584e9c8ae5be1a49e77eea95 | [
"Apache-2.0"
] | 5 | 2021-03-31T08:13:35.000Z | 2021-12-16T14:57:07.000Z | go_third_party.bzl | timoth-y/iot-blockchain-contracts | 74bbae7f2f04dbe3584e9c8ae5be1a49e77eea95 | [
"Apache-2.0"
] | 1 | 2021-04-11T01:06:29.000Z | 2021-04-11T01:06:29.000Z | go_third_party.bzl | timoth-y/iot-blockchain-contracts | 74bbae7f2f04dbe3584e9c8ae5be1a49e77eea95 | [
"Apache-2.0"
] | 3 | 2021-09-19T18:01:14.000Z | 2021-12-16T14:56:57.000Z | load("@bazel_gazelle//:deps.bzl", "go_repository")
# Generate Go dependencies macro with:
# gazelle update-repos --from_file=go.mod -index=false -to_macro=go_third_party.bzl%go_dependencies
def go_dependencies():
    pass
| 31.857143 | 99 | 0.7713 | 33 | 223 | 4.969697 | 0.69697 | 0.256098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098655 | 223 | 6 | 100 | 37.166667 | 0.81592 | 0.600897 | 0 | 0 | 1 | 0 | 0.44186 | 0.290698 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
27902b382a7a35a403ffa908613c91af828502f9 | 115 | py | Python | archive/MCQ/models/__init__.py | dreinq/DeepQ | abb6d8b492f802fefbc0095e8719377dc708069c | [
"Apache-2.0"
] | null | null | null | archive/MCQ/models/__init__.py | dreinq/DeepQ | abb6d8b492f802fefbc0095e8719377dc708069c | [
"Apache-2.0"
] | null | null | null | archive/MCQ/models/__init__.py | dreinq/DeepQ | abb6d8b492f802fefbc0095e8719377dc708069c | [
"Apache-2.0"
] | 1 | 2020-11-23T09:13:58.000Z | 2020-11-23T09:13:58.000Z | from .Transformer import PolicyTransformer, ValueTransformer
from .Estimator import PolicyEstimator, ValueEstimator | 57.5 | 60 | 0.886957 | 10 | 115 | 10.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078261 | 115 | 2 | 61 | 57.5 | 0.962264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
279c61e5f47e4891ac331c055aae003b903c3c1c | 40 | py | Python | carla_utils/benchmark/__init__.py | IamWangYunKai/DG-TrajGen | 0a8aab7e1c05111a5afe43d53801c55942e9ff56 | [
"MIT"
] | 31 | 2021-09-15T00:43:43.000Z | 2022-03-27T22:57:21.000Z | carla_utils/benchmark/__init__.py | zhangdongkun98/carla-utils | a370db53589841c8cffe95c8df43dfc036176431 | [
"MIT"
] | 1 | 2021-12-09T03:08:13.000Z | 2021-12-15T07:08:31.000Z | carla_utils/benchmark/__init__.py | zhangdongkun98/carla-utils | a370db53589841c8cffe95c8df43dfc036176431 | [
"MIT"
] | 2 | 2021-11-26T05:45:18.000Z | 2022-01-19T12:46:41.000Z |
from .forward_agent import ForwardAgent | 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 2 | 39 | 20 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27a759103a24d75c3b74d0bf3972a96046305f2e | 96 | py | Python | venv/lib/python3.8/site-packages/virtualenv/discovery/windows/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/virtualenv/discovery/windows/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/virtualenv/discovery/windows/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/f4/b8/d8/4e388fca0704466594ba0c8772fe631e66527c73b71f646fc3322f42f5 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7e0bada78ff76f47388fd0623b614d07a2bcc653 | 27,069 | py | Python | tests/test_kamstrup.py | dstrigl/amshan | 9c2d54e9fd1ce0fd844c30c83349d959ac551fd5 | [
"MIT"
] | 3 | 2021-12-28T22:57:54.000Z | 2022-02-26T20:13:09.000Z | tests/test_kamstrup.py | dstrigl/amshan | 9c2d54e9fd1ce0fd844c30c83349d959ac551fd5 | [
"MIT"
] | 7 | 2022-01-18T10:31:08.000Z | 2022-03-15T07:16:36.000Z | tests/test_kamstrup.py | dstrigl/amshan | 9c2d54e9fd1ce0fd844c30c83349d959ac551fd5 | [
"MIT"
] | 2 | 2021-12-29T09:18:35.000Z | 2022-03-12T16:53:45.000Z | """Kamstrup tests."""
# pylint: disable = no-self-use
from __future__ import annotations
from datetime import datetime
from pprint import pprint
import construct
from han import kamstrup
from tests.assert_utils import (
assert_apdu,
assert_obis_element,
)
# Kamstrup example 1: 10 seconds list, three-phases, four-quadrants
no_list_1_three_phase = bytes.fromhex(
(
"E6 E7 00" # LLC: dsap, ssap, control
"0F" # APDU: tag
"00000000" # APDU: LongInvokeIdAndPriority
"0C07D0010106162100FF800001" # APDU: DateTime
"0219" # structure of 0x19 elements
"0A0E 4B616D73747275705F5630303031" # visible_string
"0906 0101000005FF 0A10 35373036353637303030303030303030" # octet_string (obis) + visible_string
"0906 0101600101FF 0A12 303030303030303030303030303030303030" # octet_string (obis) + visible_string
"0906 0101010700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101020700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101030700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101040700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 01011F0700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101330700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101470700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101200700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0101340700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0101480700FF 12 0000" # octet_string (obis) + long_unsigned
).replace(" ", "")
)
# Kamstrup example 3: 1 hour list, single-phase, one-quadrant
no_list_2_single_phase = bytes.fromhex(
(
"E6 E7 00" # LLC: dsap, ssap, control
"0F" # APDU: tag
"00000000" # APDU: LongInvokeIdAndPriority
"0C07E1081003100005FF800000" # APDU: DateTime
"020F" # structure of 0x0f elements
"0A0E 4B616D73747275705F5630303031" # visible_string
"0906 0101000005FF 0A10 35373036353637303030303030303030"
"0906 0101600101FF 0A12 303030303030303030303030303030303030"
"0906 0101010700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 01011F0700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101200700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0001010000FF 090C 07E1081003100005FF800000" # octet_string (obis) + octet_string
"0906 0101010800FF 0600000000" # octet_string + double_long_unsigned
).replace(" ", "")
)
# Kamstrup example 2: 1 hour list, three-phases, four-quadrants
no_list_2_three_phase = bytes.fromhex(
(
"E6 E7 00" # LLC: dsap, ssap, control
"0F" # APDU: tag
"00000000" # APDU: LongInvokeIdAndPriority
"0C07E1081003100005FF800000" # APDU: DateTime
"0223" # structure of 0x23 elements
"0A0E 4B616D73747275705F5630303031" # visible_string
"0906 0101000005FF 0A10 35373036353637303030303030303030"
"0906 0101600101FF 0A12 303030303030303030303030303030303030"
"0906 0101010700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101020700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101030700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101040700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 01011F0700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101330700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101470700FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101200700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0101340700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0101480700FF 12 0000" # octet_string (obis) + long_unsigned
"0906 0001010000FF 090C 07E1081003100005FF800000" # octet_string (obis) + octet_string
"0906 0101010800FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101020800FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101030800FF 06 00000000" # octet_string (obis) + double_long_unsigned
"0906 0101040800FF 06 00000000" # octet_string (obis) + double_long_unsigned
).replace(" ", "")
)
no_list_1_single_phase_real_sample = bytes.fromhex(
(
"e6 e7 00"
"0f"
"00000000"
"0c07e60111010c2c28ff800000"
"0219"
"0a0e 4b616d73747275705f5630303031" # OBIS List version identifier
"0906 0101000005ff 0a10 35373035373035373035373035373032" # 1.1.0.0.5.255 (GS1 number)
"0906 0101600101ff 0a12 36383631313131424e323432313031303430" # 1.1.96.1.1.255 (Meter type)
"0906 0101010700ff 0600000768" # 1.1.1.7.0.255 (P14)
"0906 0101020700ff 0600000000" # 1.1.2.7.0.255 (P23)
"0906 0101030700ff 0600000000" # 1.1.3.7.0.255 (Q12)
"0906 0101040700ff 06000001ed" # 1.1.4.7.0.255 (Q34)
"0906 01011f0700ff 0600000380" # 1.1.31.7.0.255 (IL1)
"00000000"
"0906 0101200700ff 1200e1" # 1.1.32.7.0.255 (UL1)
"00000000"
).replace(" ", "")
)
no_list_2_single_phase_real_sample = bytes.fromhex(
(
"e6e700"
"0f"
"00000000"
"0c07e50b1803000019ff800000"
"0223"
"0a0e 4b616d73747275705f5630303031"
"0906 0101000005ff 0a10 35373035373035373035373035373032"
"0906 0101600101ff 0a12 36383631313131424e323432313031303430"
"0906 0101010700ff 06 00002742"
"0906 0101020700ff 06 00000000"
"0906 0101030700ff 06 00000000"
"0906 0101040700ff 06 00000117"
"0906 01011f0700ff 06 000011a000000000"
"0906 0101200700ff 12 00df00000000"
"0906 0001010000ff 090c 07e50b1803000019ff800000"
"0906 0101010800ff 06 00762ee2"
"0906 0101020800ff 06 00000000"
"0906 0101030800ff 06 000035a3"
"0906 0101040800ff 06 00116b53"
).replace(" ", "")
)
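Each sample frame carries a DLMS/COSEM date-time: tag byte `0x0C` followed by twelve octets. A minimal sketch of decoding the first eight octets of that field (an illustration of the layout, not the library's actual parser; the trailing hundredths/deviation/status octets are ignored here):

```python
from datetime import datetime


def parse_dlms_datetime(octets: bytes) -> datetime:
    # COSEM date-time layout: year (2 bytes, big-endian), month,
    # day of month, day of week, hour, minute, second.
    year = int.from_bytes(octets[0:2], "big")
    month, day, _weekday, hour, minute, second = octets[2:8]
    return datetime(year, month, day, hour, minute, second)


print(parse_dlms_datetime(bytes.fromhex("07E6011801123A32")))  # -> 2022-01-24 18:58:50
```

So `0C07E1081003100005FF800000` in the constructed lists decodes to 2017-08-16 16:00:05, matching the expected datetimes asserted in the tests.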
# 7E A0 E2 2B 21 13 23 9A E6 E7 00 0F 00 00 00 00 0C 07 E6 01 18 01 12 3A 32 FF 80 00 00 02 19 0A 0E 4B 61 6D 73 74 72 75 70 5F 56 30 30 30 31 09 06 01 01 00 00 05 FF 0A 10 35 37 30 36 35 36 37 33 32 36 35 39 30 34 30 37 09 06 01 01 60 01 01 FF 0A 12 36 38 34 31 31 33 38 42 4E 32 34 35 31 30 31 30 39 30 09 06 01 01 01 07 00 FF 06 00 00 03 3A 09 06 01 01 02 07 00 FF 06 00 00 00 00 09 06 01 01 03 07 00 FF 06 00 00 00 68 09 06 01 01 04 07 00 FF 06 00 00 00 B0 09 06 01 01 1F 07 00 FF 06 00 00 00 ED 09 06 01 01 33 07 00 FF 06 00 00 00 59 09 06 01 01 47 07 00 FF 06 00 00 00 4B 09 06 01 01 20 07 00 FF 12 00 E8 09 06 01 01 34 07 00 FF 12 00 E9 09 06 01 01 48 07 00 FF 12 00 EC 84 46 7E
# 7EA0E22B2113239AE6E7000F000000000C07E6011801123A32FF80000002190A0E4B616D73747275705F563030303109060101000005FF0A103537303635363733323635393034303709060101600101FF0A1236383431313338424E32343531303130393009060101010700FF060000033A09060101020700FF060000000009060101030700FF060000006809060101040700FF06000000B0090601011F0700FF06000000ED09060101330700FF060000005909060101470700FF060000004B09060101200700FF1200E809060101340700FF1200E909060101480700FF1200EC84467E
se_list_real_sample = bytes.fromhex(
(
"E6E700"
"0F"
"00000000"
"0C07E6011801123A32FF800000"
"0219"
"0A0E 4B616D73747275705F5630303031"
"0906 0101000005FF"
"0A10 35373035373035373035373035373037 0906 0101600101FF"
"0A12 36383431313338424E323435313031303930 09060101010700FF060000033A"
"0906 0101020700FF 0600000000"
"0906 0101030700FF 0600000068"
"0906 0101040700FF 06000000B0"
"0906 01011F0700FF 06000000ED"
"0906 0101330700FF 0600000059"
"0906 0101470700FF 060000004B"
"0906 0101200700FF 1200E8"
"0906 0101340700FF 1200E9"
"0906 0101480700FF 1200EC"
).replace(" ", "")
)
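The six-byte OBIS identifiers embedded in these samples (e.g. `0101200700FF`) map directly to the dotted codes used in the comments and assertions. A small helper showing the conversion (an illustration, not part of the han package):

```python
def obis_to_dotted(hex_str: str) -> str:
    """Convert a six-byte OBIS octet string to dotted notation,
    one decimal group per byte, e.g. "0101200700FF" -> "1.1.32.7.0.255"."""
    return ".".join(str(b) for b in bytes.fromhex(hex_str))


print(obis_to_dotted("0101200700FF"))  # -> 1.1.32.7.0.255 (UL1)
```

This is why `0x20` in the raw frame shows up as group `32`, and the trailing `FF` as `255`, in the expected OBIS strings below.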
class TestParseKamstrup:
    """Test parse Kamstrup frames."""

    def test_parse_se_list_three_phase_real_sample(self):
        """Parse SE list."""
        parsed = kamstrup.LlcPdu.parse(se_list_real_sample)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2022, 1, 24, 18, 58, 50))
        assert isinstance(parsed.information.notification_body, construct.Container)
        assert parsed.information.notification_body.length == 0x19
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[0],
            None,
            "visible_string",
            "Kamstrup_V0001",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[1],
            "1.1.0.0.5.255",  # GS1 number
            "visible_string",
            "5705705705705707",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[2],
            "1.1.96.1.1.255",  # Meter type
            "visible_string",
            "6841138BN245101090",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[3],
            "1.1.1.7.0.255",  # P14
            "double_long_unsigned",
            826,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[4],
            "1.1.2.7.0.255",  # P23
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[5],
            "1.1.3.7.0.255",  # Q12
            "double_long_unsigned",
            104,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[6],
            "1.1.4.7.0.255",  # Q34
            "double_long_unsigned",
            176,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[7],
            "1.1.31.7.0.255",  # IL1
            "double_long_unsigned",
            237,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[8],
            "1.1.51.7.0.255",  # IL2
            "double_long_unsigned",
            89,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[9],
            "1.1.71.7.0.255",  # IL3
            "double_long_unsigned",
            75,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[10],
            "1.1.32.7.0.255",  # UL1
            "long_unsigned",
            232,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[11],
            "1.1.52.7.0.255",  # UL2
            "long_unsigned",
            233,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[12],
            "1.1.72.7.0.255",  # UL3
            "long_unsigned",
            236,
        )
    def test_parse_no_list_1_single_phase_real_sample(self):
        """Parse single phase NO list number 1."""
        parsed = kamstrup.LlcPdu.parse(no_list_1_single_phase_real_sample)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2022, 1, 17, 12, 44, 40))
        assert isinstance(parsed.information.notification_body, construct.Container)
        assert parsed.information.notification_body.length == 0x19
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[0],
            None,
            "visible_string",
            "Kamstrup_V0001",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[1],
            "1.1.0.0.5.255",  # GS1 number
            "visible_string",
            "5705705705705702",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[2],
            "1.1.96.1.1.255",  # Meter type
            "visible_string",
            "6861111BN242101040",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[3],
            "1.1.1.7.0.255",  # P14
            "double_long_unsigned",
            1896,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[4],
            "1.1.2.7.0.255",  # P23
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[5],
            "1.1.3.7.0.255",  # Q12
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[6],
            "1.1.4.7.0.255",  # Q34
            "double_long_unsigned",
            493,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[7],
            "1.1.31.7.0.255",  # IL1
            "double_long_unsigned",
            896,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[8],
            "1.1.32.7.0.255",  # UL1
            "long_unsigned",
            225,
        )
    def test_parse_no_list_2_single_phase_real_sample(self):
        """Parse single phase NO list number 2."""
        parsed = kamstrup.LlcPdu.parse(no_list_2_single_phase_real_sample)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2021, 11, 24, 0, 0, 25))
        assert isinstance(parsed.information.notification_body, construct.Container)
        assert parsed.information.notification_body.length == 35
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[0],
            None,
            "visible_string",
            "Kamstrup_V0001",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[1],
            "1.1.0.0.5.255",  # GS1 number
            "visible_string",
            "5705705705705702",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[2],
            "1.1.96.1.1.255",  # Meter type
            "visible_string",
            "6861111BN242101040",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[3],
            "1.1.1.7.0.255",  # P14
            "double_long_unsigned",
            10050,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[4],
            "1.1.2.7.0.255",  # P23
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[5],
            "1.1.3.7.0.255",  # Q12
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[6],
            "1.1.4.7.0.255",  # Q34
            "double_long_unsigned",
            279,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[7],
            "1.1.31.7.0.255",  # IL1
            "double_long_unsigned",
            4512,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[8],
            "1.1.32.7.0.255",  # UL1
            "long_unsigned",
            223,
        )

        rtc = parsed.information.notification_body.list_items[9]
        assert isinstance(rtc, construct.Container)
        assert rtc.obis == "0.1.1.0.0.255"  # RTC
        assert rtc.value_type == "octet_string"
        assert rtc.value.datetime == datetime(2021, 11, 24, 0, 0, 25)

        assert_obis_element(
            parsed.information.notification_body.list_items[10],
            "1.1.1.8.0.255",  # A14
            "double_long_unsigned",
            7745250,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[11],
            "1.1.2.8.0.255",  # A23
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[12],
            "1.1.3.8.0.255",  # R12
            "double_long_unsigned",
            13731,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[13],
            "1.1.4.8.0.255",  # R34
            "double_long_unsigned",
            1141587,
        )
    def test_parse_no_list_1_three_phase(self):
        """Parse three phase NO list number 1."""
        parsed = kamstrup.LlcPdu.parse(no_list_1_three_phase)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2000, 1, 1, 22, 33, 0))
        assert parsed.information.notification_body.length == 0x19
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[0],
            None,
            "visible_string",
            "Kamstrup_V0001",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[1],
            "1.1.0.0.5.255",  # GS1 number
            "visible_string",
            "5706567000000000",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[2],
            "1.1.96.1.1.255",  # Meter type
            "visible_string",
            "000000000000000000",
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[3],
            "1.1.1.7.0.255",  # P14
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[4],
            "1.1.2.7.0.255",  # P23
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[5],
            "1.1.3.7.0.255",  # Q12
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[6],
            "1.1.4.7.0.255",  # Q34
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[7],
            "1.1.31.7.0.255",  # IL1
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[8],
            "1.1.51.7.0.255",  # IL2
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[9],
            "1.1.71.7.0.255",  # IL3
            "double_long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[10],
            "1.1.32.7.0.255",  # UL1
            "long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[11],
            "1.1.52.7.0.255",  # UL2
            "long_unsigned",
            0,
        )
        assert_obis_element(
            parsed.information.notification_body.list_items[12],
            "1.1.72.7.0.255",  # UL3
            "long_unsigned",
            0,
        )
    def test_parse_no_list_2_single_phase(self):
        """Parse single phase NO list number 2."""
        parsed = kamstrup.LlcPdu.parse(no_list_2_single_phase)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2017, 8, 16, 16, 0, 5))
        assert parsed.information.notification_body.length == 15
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )

    def test_parse_no_list_2_three_phase(self):
        """Parse three phase NO list number 2."""
        parsed = kamstrup.LlcPdu.parse(no_list_2_three_phase)
        print(parsed)

        assert_apdu(parsed, 0, datetime(2017, 8, 16, 16, 0, 5))
        assert parsed.information.notification_body.length == 35
        assert isinstance(
            parsed.information.notification_body.list_items, construct.ListContainer
        )
class TestDecodeKamstrup:
    """Test decode Kamstrup frames."""

    def test_decode_se_list_three_phase_real_sample(self):
        """Decode three phase SE list (real sample)."""
        decoded = kamstrup.decode_frame_content(se_list_real_sample)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 15
        assert decoded["active_power_export"] == 0
        assert decoded["active_power_import"] == 826
        assert decoded["current_l1"] == 2.37
        assert decoded["current_l2"] == 0.89
        assert decoded["current_l3"] == 0.75
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2022, 1, 24, 18, 58, 50)
        assert decoded["meter_id"] == "5705705705705707"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "6841138BN245101090"
        assert decoded["reactive_power_export"] == 176
        assert decoded["reactive_power_import"] == 104
        assert decoded["voltage_l1"] == 232
        assert decoded["voltage_l2"] == 233
        assert decoded["voltage_l3"] == 236
    def test_decode_frame_no_list_1_single_phase_real_sample(self):
        """Decode single phase NO list number 1 (real sample)."""
        decoded = kamstrup.decode_frame_content(no_list_1_single_phase_real_sample)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 11
        assert decoded["active_power_export"] == 0
        assert decoded["active_power_import"] == 1896
        assert decoded["current_l1"] == 8.96
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2022, 1, 17, 12, 44, 40)
        assert decoded["meter_id"] == "5705705705705702"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "6861111BN242101040"
        assert decoded["reactive_power_export"] == 493
        assert decoded["reactive_power_import"] == 0
        assert decoded["voltage_l1"] == 225
    def test_decode_frame_no_list_2_single_phase_real_sample(self):
        """Decode single phase NO list number 2 (real sample)."""
        decoded = kamstrup.decode_frame_content(no_list_2_single_phase_real_sample)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 15
        assert decoded["active_power_export"] == 0
        assert decoded["active_power_export_total"] == 0
        assert decoded["active_power_import"] == 10050
        assert decoded["active_power_import_total"] == 77452500
        assert decoded["current_l1"] == 45.12
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2021, 11, 24, 0, 0, 25)
        assert decoded["meter_id"] == "5705705705705702"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "6861111BN242101040"
        assert decoded["reactive_power_export"] == 279
        assert decoded["reactive_power_export_total"] == 11415870
        assert decoded["reactive_power_import"] == 0
        assert decoded["reactive_power_import_total"] == 137310
        assert decoded["voltage_l1"] == 223
    def test_decode_frame_no_list_1_three_phase(self):
        """Decode three phase no list number 1."""
        decoded = kamstrup.decode_frame_content(no_list_1_three_phase)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 15
        assert decoded["active_power_export"] == 0
        assert decoded["active_power_import"] == 0
        assert decoded["current_l1"] == 0.0
        assert decoded["current_l2"] == 0.0
        assert decoded["current_l3"] == 0.0
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2000, 1, 1, 22, 33)
        assert decoded["meter_id"] == "5706567000000000"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "000000000000000000"
        assert decoded["reactive_power_export"] == 0
        assert decoded["reactive_power_import"] == 0
        assert decoded["voltage_l1"] == 0
        assert decoded["voltage_l2"] == 0
        assert decoded["voltage_l3"] == 0
    def test_decode_frame_no_list_2_three_phase(self):
        """Decode three phase no list number 2."""
        decoded = kamstrup.decode_frame_content(no_list_2_three_phase)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 19
        assert decoded["active_power_export"] == 0
        assert decoded["active_power_export_total"] == 0
        assert decoded["active_power_import"] == 0
        assert decoded["active_power_import_total"] == 0
        assert decoded["current_l1"] == 0
        assert decoded["current_l2"] == 0
        assert decoded["current_l3"] == 0
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2017, 8, 16, 16, 0, 5)
        assert decoded["meter_id"] == "5706567000000000"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "000000000000000000"
        assert decoded["reactive_power_export"] == 0
        assert decoded["reactive_power_export_total"] == 0
        assert decoded["reactive_power_import"] == 0
        assert decoded["reactive_power_import_total"] == 0
        assert decoded["voltage_l1"] == 0
        assert decoded["voltage_l2"] == 0
        assert decoded["voltage_l3"] == 0
    def test_decode_frame_no_list_2_single_phase(self):
        """Decode single phase no list number 2."""
        decoded = kamstrup.decode_frame_content(no_list_2_single_phase)
        pprint(decoded)

        assert isinstance(decoded, dict)
        assert len(decoded) == 9
        assert decoded["active_power_import"] == 0
        assert decoded["active_power_import_total"] == 0
        assert decoded["current_l1"] == 0.0
        assert decoded["list_ver_id"] == "Kamstrup_V0001"
        assert decoded["meter_datetime"] == datetime(2017, 8, 16, 16, 0, 5)
        assert decoded["meter_id"] == "5706567000000000"
        assert decoded["meter_manufacturer"] == "Kamstrup"
        assert decoded["meter_type"] == "000000000000000000"
        assert decoded["voltage_l1"] == 0
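Comparing raw register values with the decoded expectations above suggests the meter reports line currents in hundredths of an ampere (raw `0x380` = 896 decodes to `current_l1 == 8.96`). A sketch of that scaling; the factor is inferred from the expected values in these tests, not taken from a specification:

```python
def scale_current(raw: int) -> float:
    # Raw current register appears to be centi-amperes in these samples.
    return raw / 100


print(scale_current(0x380))  # raw 896 -> 8.96, matching current_l1 above
```

Voltages, by contrast, are asserted at their raw integer values (e.g. `voltage_l1 == 225`), so no scaling is applied to them.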
| 40.951589 | 685 | 0.621671 | 3,074 | 27,069 | 5.24203 | 0.100195 | 0.067767 | 0.115179 | 0.131066 | 0.799926 | 0.775227 | 0.755368 | 0.719809 | 0.707646 | 0.669108 | 0 | 0.235303 | 0.281724 | 27,069 | 660 | 686 | 41.013636 | 0.593478 | 0.146219 | 0 | 0.643333 | 0 | 0 | 0.268124 | 0.055396 | 0 | 0 | 0.000523 | 0 | 0.286667 | 1 | 0.02 | false | 0 | 0.036667 | 0 | 0.06 | 0.021667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd77c70a192cbaa712ba8532a1b3859358e30237 | 193 | py | Python | modules/hbnative/config.py | LinoBigatti/godot | 4229e850f439879cfe201e25c1ea99c4c6ab8162 | [
"MIT",
"Apache-2.0",
"CC-BY-4.0",
"Unlicense"
] | 15 | 2020-12-20T19:24:59.000Z | 2022-03-25T19:41:12.000Z | modules/hbnative/config.py | EIRTeam/godot-angle | 4229e850f439879cfe201e25c1ea99c4c6ab8162 | [
"MIT",
"Apache-2.0",
"CC-BY-4.0",
"Unlicense"
] | null | null | null | modules/hbnative/config.py | EIRTeam/godot-angle | 4229e850f439879cfe201e25c1ea99c4c6ab8162 | [
"MIT",
"Apache-2.0",
"CC-BY-4.0",
"Unlicense"
] | 1 | 2022-01-16T14:18:33.000Z | 2022-01-16T14:18:33.000Z | # config.py
def configure(env):
    env.Prepend(CFLAGS=["-std=c++17"])
    env.Prepend(CXXFLAGS=["-std=gnu++14"])


def can_build(env, platform):
    return True


def configure(env):
    pass
| 14.846154 | 42 | 0.637306 | 28 | 193 | 4.357143 | 0.678571 | 0.196721 | 0.245902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025157 | 0.176166 | 193 | 12 | 43 | 16.083333 | 0.742138 | 0.046632 | 0 | 0.285714 | 0 | 0 | 0.120879 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.142857 | 0 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
fd9ad82eb8a6fe39e46d309034b8c301e8b006fa | 25 | py | Python | Distances/__init__.py | L-F-A/Machine-Learning | b9472544e06fc91606c0d1a609c23e22ba30cf18 | [
"MIT"
] | null | null | null | Distances/__init__.py | L-F-A/Machine-Learning | b9472544e06fc91606c0d1a609c23e22ba30cf18 | [
"MIT"
] | null | null | null | Distances/__init__.py | L-F-A/Machine-Learning | b9472544e06fc91606c0d1a609c23e22ba30cf18 | [
"MIT"
] | null | null | null | from .Calc_dist import *
| 12.5 | 24 | 0.76 | 4 | 25 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fdee79df52e156917d7eda970305846d32c0d4bd | 143 | py | Python | src/spaceone/inventory/connector/__init__.py | jihyungSong/plugin-azure-power-state | d66bd5dfafa01659c877da11c0d18de6e55cb5ab | [
"Apache-2.0"
] | null | null | null | src/spaceone/inventory/connector/__init__.py | jihyungSong/plugin-azure-power-state | d66bd5dfafa01659c877da11c0d18de6e55cb5ab | [
"Apache-2.0"
] | null | null | null | src/spaceone/inventory/connector/__init__.py | jihyungSong/plugin-azure-power-state | d66bd5dfafa01659c877da11c0d18de6e55cb5ab | [
"Apache-2.0"
] | null | null | null | from spaceone.inventory.libs.connector import AzureConnector
from spaceone.inventory.connector.virtual_machine import VirtualMachineConnector
| 35.75 | 80 | 0.895105 | 15 | 143 | 8.466667 | 0.666667 | 0.188976 | 0.330709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062937 | 143 | 3 | 81 | 47.666667 | 0.947761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8bee6cc520c92e2de7a252c189dd7e9013072cc8 | 38 | py | Python | tonotopy/stim/__init__.py | thomasbazeille/public_protocols | 8d8dd051eda7eec2b8358dae42ab363b7d83e1d0 | [
"BSD-3-Clause"
] | 3 | 2019-09-19T13:06:59.000Z | 2021-07-03T18:09:32.000Z | tonotopy/stim/__init__.py | thomasbazeille/public_protocols | 8d8dd051eda7eec2b8358dae42ab363b7d83e1d0 | [
"BSD-3-Clause"
] | 2 | 2017-11-30T19:32:24.000Z | 2020-09-03T19:40:13.000Z | tonotopy/stim/__init__.py | thomasbazeille/public_protocols | 8d8dd051eda7eec2b8358dae42ab363b7d83e1d0 | [
"BSD-3-Clause"
] | 3 | 2019-09-19T13:07:10.000Z | 2021-01-14T16:07:16.000Z | from .create_prerandom_orders import * | 38 | 38 | 0.868421 | 5 | 38 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8bf9c6fa7afc81e7943422325abde7876c6724d9 | 38 | py | Python | sentence_transformers/cross_encoder/__init__.py | faezakamran/sentence-transformers | 2158fff3aa96651b10fe367c41fdd5008a33c5c6 | [
"Apache-2.0"
] | 7,566 | 2019-07-25T07:45:17.000Z | 2022-03-31T22:15:35.000Z | sentence_transformers/cross_encoder/__init__.py | faezakamran/sentence-transformers | 2158fff3aa96651b10fe367c41fdd5008a33c5c6 | [
"Apache-2.0"
] | 1,444 | 2019-07-25T11:53:48.000Z | 2022-03-31T15:13:32.000Z | sentence_transformers/cross_encoder/__init__.py | faezakamran/sentence-transformers | 2158fff3aa96651b10fe367c41fdd5008a33c5c6 | [
"Apache-2.0"
] | 1,567 | 2019-07-26T15:19:28.000Z | 2022-03-31T19:57:35.000Z | from .CrossEncoder import CrossEncoder | 38 | 38 | 0.894737 | 4 | 38 | 8.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3054a6225ef5234211d92eef91a89ec10e53123 | 30 | py | Python | tapiriik/services/DecathlonCoach/__init__.py | decathloncoach/tapiriik | dd9b62ed22f63ace8bb2e0cdfc4be741006c31d1 | [
"Apache-2.0"
] | 14 | 2018-04-16T14:39:45.000Z | 2021-08-18T06:22:31.000Z | tapiriik/services/DecathlonCoach/__init__.py | filin2009/exercisync | f0995a6fb11efd1a662d1e5fb8be02b53b053548 | [
"Apache-2.0"
] | 14 | 2018-04-29T21:14:30.000Z | 2021-02-25T07:17:25.000Z | tapiriik/services/DecathlonCoach/__init__.py | filin2009/exercisync | f0995a6fb11efd1a662d1e5fb8be02b53b053548 | [
"Apache-2.0"
] | 8 | 2018-04-16T14:39:56.000Z | 2021-02-25T07:12:57.000Z | from .decathloncoach import *
| 15 | 29 | 0.8 | 3 | 30 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e30657caf28fbf96dc5c3bd59c42f93e4b70f899 | 22 | py | Python | __init__.py | yombo/module-nest | cfcf7fe923c4459bece34b3f228aee62469fb736 | [
"MIT"
] | null | null | null | __init__.py | yombo/module-nest | cfcf7fe923c4459bece34b3f228aee62469fb736 | [
"MIT"
] | null | null | null | __init__.py | yombo/module-nest | cfcf7fe923c4459bece34b3f228aee62469fb736 | [
"MIT"
] | null | null | null | from .nest import Nest | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e34c60a14978031b56d21796afdac9c90a8acde6 | 1,267 | py | Python | datawarehouse/edw_migrations/versions/32a5c13c6efd_factrequestdetailsaddcols.py | bcgov/foi-reporting | 25856ce87b668df964ddd16ac7459fae4aa6a7c5 | [
"Apache-2.0"
] | null | null | null | datawarehouse/edw_migrations/versions/32a5c13c6efd_factrequestdetailsaddcols.py | bcgov/foi-reporting | 25856ce87b668df964ddd16ac7459fae4aa6a7c5 | [
"Apache-2.0"
] | 3 | 2022-01-05T18:01:41.000Z | 2022-02-08T21:51:32.000Z | datawarehouse/edw_migrations/versions/32a5c13c6efd_factrequestdetailsaddcols.py | bcgov/foi-reporting | 25856ce87b668df964ddd16ac7459fae4aa6a7c5 | [
"Apache-2.0"
] | null | null | null | """factRequestDetailsaddcols
Revision ID: 32a5c13c6efd
Revises: 7ba38127b650
Create Date: 2022-02-28 19:57:56.182275
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '32a5c13c6efd'
down_revision = '7ba38127b650'
branch_labels = None
depends_on = None
def upgrade():
    op.add_column('factRequestDetails', sa.Column('shipaddressid', sa.Integer()))
    op.add_column('factRequestDetails', sa.Column('billaddressid', sa.Integer()))
    op.add_column('factRequestDetails', sa.Column('originalreceiveddate', sa.DateTime()))
    op.add_column('factRequestDetails', sa.Column('coordinatednrresponsereqd', sa.VARCHAR(length=1000)))
    op.add_column('factRequestDetails', sa.Column('applicantfilereference', sa.VARCHAR(length=1000)))


def downgrade():
    # Reverse the upgrade by dropping the columns it added.
    op.drop_column('factRequestDetails', 'applicantfilereference')
    op.drop_column('factRequestDetails', 'coordinatednrresponsereqd')
    op.drop_column('factRequestDetails', 'originalreceiveddate')
    op.drop_column('factRequestDetails', 'billaddressid')
    op.drop_column('factRequestDetails', 'shipaddressid')
| 38.393939 | 104 | 0.760063 | 141 | 1,267 | 6.737589 | 0.333333 | 0.052632 | 0.115789 | 0.305263 | 0.724211 | 0.724211 | 0.724211 | 0.724211 | 0.724211 | 0.724211 | 0 | 0.057491 | 0.093923 | 1,267 | 32 | 105 | 39.59375 | 0.770035 | 0.121547 | 0 | 0.555556 | 0 | 0 | 0.352941 | 0.085068 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e37f43903cda9589858f713fa4929a392ff1feea | 154 | py | Python | framework/QSSF Service/estimators/__init__.py | S-Lab-System-Group/HeliosArtifact | ba73838935bd345ea8575b7bcfcc99e5df690c01 | [
"MIT"
] | 5 | 2021-09-20T13:59:55.000Z | 2022-03-16T12:48:08.000Z | framework/QSSF Service/estimators/__init__.py | S-Lab-System-Group/HeliosArtifact | ba73838935bd345ea8575b7bcfcc99e5df690c01 | [
"MIT"
] | null | null | null | framework/QSSF Service/estimators/__init__.py | S-Lab-System-Group/HeliosArtifact | ba73838935bd345ea8575b7bcfcc99e5df690c01 | [
"MIT"
] | 1 | 2021-10-15T11:39:11.000Z | 2021-10-15T11:39:11.000Z | from .estimator import NaiveEstimator
from .estimator import LGBEstimator
from .estimator import CombinedEstimator
from .estimator import PhillyEstimator
| 30.8 | 40 | 0.87013 | 16 | 154 | 8.375 | 0.4375 | 0.38806 | 0.567164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 154 | 4 | 41 | 38.5 | 0.971014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e3892d262ffb84d24cc6092ae5e565f7a2d72678 | 38 | py | Python | abelfunctions/riemann_theta/deprecated/utilities/__init__.py | RParini/abelfunctions | 9263cd865e33cbd3ff4ffc1d59c14d77ff27f155 | [
"MIT"
] | null | null | null | abelfunctions/riemann_theta/deprecated/utilities/__init__.py | RParini/abelfunctions | 9263cd865e33cbd3ff4ffc1d59c14d77ff27f155 | [
"MIT"
] | 2 | 2017-10-26T20:34:33.000Z | 2017-10-31T09:50:37.000Z | abelfunctions/riemann_theta/deprecated/utilities/__init__.py | RParini/abelfunctions | 9263cd865e33cbd3ff4ffc1d59c14d77ff27f155 | [
"MIT"
] | 1 | 2017-10-26T18:43:48.000Z | 2017-10-26T18:43:48.000Z | # utilitiies
from qflll import qflll
| 9.5 | 23 | 0.789474 | 5 | 38 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 3 | 24 | 12.666667 | 0.967742 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b8083e9fe98f364efc9f5fce0a18419a545ce79 | 612 | py | Python | octicons16px/mute.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/mute.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/mute.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_MUTE = """
<svg class="octicon octicon-mute" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M8 2.75a.75.75 0 00-1.238-.57L3.472 5H1.75A1.75 1.75 0 000 6.75v2.5C0 10.216.784 11 1.75 11h1.723l3.289 2.82A.75.75 0 008 13.25V2.75zM4.238 6.32L6.5 4.38v7.24L4.238 9.68a.75.75 0 00-.488-.18h-2a.25.25 0 01-.25-.25v-2.5a.25.25 0 01.25-.25h2a.75.75 0 00.488-.18zm7.042-1.1a.75.75 0 10-1.06 1.06L11.94 8l-1.72 1.72a.75.75 0 101.06 1.06L13 9.06l1.72 1.72a.75.75 0 101.06-1.06L14.06 8l1.72-1.72a.75.75 0 00-1.06-1.06L13 6.94l-1.72-1.72z"></path></svg>
"""
| 122.4 | 587 | 0.671569 | 161 | 612 | 2.546584 | 0.484472 | 0.065854 | 0.097561 | 0.068293 | 0.229268 | 0.109756 | 0.082927 | 0.082927 | 0.082927 | 0 | 0 | 0.490942 | 0.098039 | 612 | 4 | 588 | 153 | 0.251812 | 0 | 0 | 0 | 0 | 0.333333 | 0.963993 | 0.166939 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8b95588c94b340654376de05f4b89517f49f8279 | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/installation/operations/update.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/installation/operations/update.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/installation/operations/update.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/77/e0/52/b54670375af0599d8e419de18c87b9f7cd4e0cb1a4730836446c253e62 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.46875 | 0 | 96 | 1 | 96 | 96 | 0.427083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
47c3894c32658c059b0c93c8cdd50d6e831253bf | 57 | py | Python | 02_conditionals/Example.py | felipemlrt/begginer-python-exercises | 2a4627bb59f0834c2ea2d33b8ce18cb0f22709dc | [
"MIT"
] | null | null | null | 02_conditionals/Example.py | felipemlrt/begginer-python-exercises | 2a4627bb59f0834c2ea2d33b8ce18cb0f22709dc | [
"MIT"
] | null | null | null | 02_conditionals/Example.py | felipemlrt/begginer-python-exercises | 2a4627bb59f0834c2ea2d33b8ce18cb0f22709dc | [
"MIT"
] | 1 | 2020-10-01T06:55:59.000Z | 2020-10-01T06:55:59.000Z | if x == 2:
print("x is 2!")
else:
print("x is not 2!")
| 11.4 | 21 | 0.508772 | 13 | 57 | 2.230769 | 0.538462 | 0.413793 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 0.245614 | 57 | 4 | 22 | 14.25 | 0.604651 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7be1159c3b368584ecfbb183eaa9b94ef8deb9a5 | 21 | py | Python | api/src/models/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | null | null | null | api/src/models/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | null | null | null | api/src/models/__init__.py | re3turn/twitter-crawling | a5b4075cda9d2bdca2cd9891c8d609627feb83e4 | [
"MIT"
] | null | null | null | from . import entity
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d00e35bdf1340d4b7c098f3dfeb59a396fa8466c | 173 | py | Python | elecsec_web/__init__.py | thealau/498-audit | 9d8008c1c4b5a62a09879b6bce42ac426621d3ba | [
"MIT"
] | 1 | 2018-12-19T22:18:06.000Z | 2018-12-19T22:18:06.000Z | elecsec_web/__init__.py | thealau/498-audit | 9d8008c1c4b5a62a09879b6bce42ac426621d3ba | [
"MIT"
] | null | null | null | elecsec_web/__init__.py | thealau/498-audit | 9d8008c1c4b5a62a09879b6bce42ac426621d3ba | [
"MIT"
] | null | null | null | """Elecsec package initializer."""
import flask
app = flask.Flask(__name__)
app.config.from_object('elecsec_web.config')
import elecsec_web.views
import elecsec_web.model | 19.222222 | 44 | 0.797688 | 24 | 173 | 5.416667 | 0.541667 | 0.230769 | 0.246154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086705 | 173 | 9 | 45 | 19.222222 | 0.822785 | 0.16185 | 0 | 0 | 0 | 0 | 0.128571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d03c0d9225e100da6f8e6c8aa92b9a422dbaa856 | 18 | py | Python | src/nalu/__init__.py | RufaelDev/nalu | b6eae89e439fba023bbb008df3bfab2d23bfdb9b | [
"Apache-2.0"
] | 1 | 2021-11-17T20:46:52.000Z | 2021-11-17T20:46:52.000Z | src/nalu/__init__.py | RufaelDev/nalu | b6eae89e439fba023bbb008df3bfab2d23bfdb9b | [
"Apache-2.0"
] | 1 | 2021-11-17T21:38:43.000Z | 2021-11-24T19:44:32.000Z | src/nalu/__init__.py | RufaelDev/nalu | b6eae89e439fba023bbb008df3bfab2d23bfdb9b | [
"Apache-2.0"
] | null | null | null | from . import nalu | 18 | 18 | 0.777778 | 3 | 18 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d06aa6586ed5361eec8833cfe46418789940c95a | 3,170 | py | Python | test/inspections/test_intersectional_histogram_for_columns.py | JinyangLi01/mlinspect | 87725d599db63b89907bc71071d6de66b40a33b7 | [
"Apache-2.0"
] | 40 | 2020-10-20T15:56:35.000Z | 2022-02-22T14:48:09.000Z | test/inspections/test_intersectional_histogram_for_columns.py | JinyangLi01/mlinspect | 87725d599db63b89907bc71071d6de66b40a33b7 | [
"Apache-2.0"
] | 55 | 2020-10-21T15:37:44.000Z | 2022-02-10T02:44:18.000Z | test/inspections/test_intersectional_histogram_for_columns.py | JinyangLi01/mlinspect | 87725d599db63b89907bc71071d6de66b40a33b7 | [
"Apache-2.0"
] | 9 | 2021-01-15T15:53:25.000Z | 2022-03-31T23:42:12.000Z | """
Tests whether IntersectionalHistogramForColumns works
"""
from inspect import cleandoc
from testfixtures import compare
from mlinspect._pipeline_inspector import PipelineInspector
from mlinspect.inspections import IntersectionalHistogramForColumns
def test_histogram_merge():
"""
Tests whether IntersectionalHistogramForColumns works for joins
"""
test_code = cleandoc("""
import pandas as pd
df_a = pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_b', 'cat_b'],
'B': [1, 2, 4, 5, 7],
'C': [True, False, True, True, True]})
df_b = pd.DataFrame({'B': [1, 2, 3, 4, 5], 'D': [1, 5, 4, 11, None]})
df_merged = df_a.merge(df_b, on='B')
""")
inspector_result = PipelineInspector \
.on_pipeline_from_string(test_code) \
.add_required_inspection(IntersectionalHistogramForColumns(["A", "C"])) \
.execute()
inspection_results = list(inspector_result.dag_node_to_inspection_results.values())
histogram_output = inspection_results[0][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {('cat_a', True): 2, ('cat_b', False): 1, ('cat_b', True): 2}
compare(histogram_output, expected_histogram)
histogram_output = inspection_results[1][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {(None, None): 5}
compare(histogram_output, expected_histogram)
histogram_output = inspection_results[2][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {('cat_a', True): 2, ('cat_b', False): 1, ('cat_b', True): 1}
compare(histogram_output, expected_histogram)
def test_histogram_projection():
"""
Tests whether IntersectionalHistogramForColumns works for projections
"""
test_code = cleandoc("""
import pandas as pd
pandas_df = pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_c', 'cat_b'],
'B': [1, 2, 4, 5, 7], 'C': [True, False, True, True, True]})
pandas_df = pandas_df[['B', 'C']]
pandas_df = pandas_df[['C']]
""")
inspector_result = PipelineInspector \
.on_pipeline_from_string(test_code) \
.add_required_inspection(IntersectionalHistogramForColumns(["A", "C"])) \
.execute()
inspection_results = list(inspector_result.dag_node_to_inspection_results.values())
histogram_output = inspection_results[0][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {('cat_a', True): 2, ('cat_b', False): 1, ('cat_c', True): 1, ('cat_b', True): 1}
compare(histogram_output, expected_histogram)
histogram_output = inspection_results[1][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {('cat_a', True): 2, ('cat_b', False): 1, ('cat_c', True): 1, ('cat_b', True): 1}
compare(histogram_output, expected_histogram)
histogram_output = inspection_results[2][IntersectionalHistogramForColumns(["A", "C"])]
expected_histogram = {('cat_a', True): 2, ('cat_b', False): 1, ('cat_c', True): 1, ('cat_b', True): 1}
compare(histogram_output, expected_histogram)
| 42.266667 | 106 | 0.654574 | 366 | 3,170 | 5.39071 | 0.166667 | 0.030411 | 0.141916 | 0.097314 | 0.801318 | 0.747086 | 0.747086 | 0.714648 | 0.714648 | 0.714648 | 0 | 0.017564 | 0.191798 | 3,170 | 74 | 107 | 42.837838 | 0.752537 | 0.058991 | 0 | 0.632653 | 0 | 0.081633 | 0.259083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.122449 | 0 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d08e58427b9a0452fa48f40b211e39bb1b59da4a | 77 | py | Python | BlackBoxAuditing/find_contexts/__init__.py | alatenko/BlackBoxAuditing | b06c4faed5591cd7088475b2a203127bc5820483 | [
"Apache-2.0"
] | 125 | 2016-06-15T17:46:55.000Z | 2022-02-19T14:44:49.000Z | BlackBoxAuditing/find_contexts/__init__.py | alatenko/BlackBoxAuditing | b06c4faed5591cd7088475b2a203127bc5820483 | [
"Apache-2.0"
] | 12 | 2016-12-13T14:27:13.000Z | 2019-10-02T11:50:08.000Z | BlackBoxAuditing/find_contexts/__init__.py | alatenko/BlackBoxAuditing | b06c4faed5591cd7088475b2a203127bc5820483 | [
"Apache-2.0"
] | 32 | 2016-09-07T02:48:07.000Z | 2021-12-20T18:36:41.000Z | from .context_finder import context_finder
from .load_audit_data import load
| 25.666667 | 42 | 0.87013 | 12 | 77 | 5.25 | 0.583333 | 0.412698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 77 | 2 | 43 | 38.5 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d09d42d196136488b64b6849bf5bb25a081f1310 | 2,712 | py | Python | wqxlib/wqx_v3_0/__init__.py | FlippingBinary/wqxlib-python | 5aa1d41384928f1faca47d5984485e2efa93174c | [
"MIT"
] | null | null | null | wqxlib/wqx_v3_0/__init__.py | FlippingBinary/wqxlib-python | 5aa1d41384928f1faca47d5984485e2efa93174c | [
"MIT"
] | null | null | null | wqxlib/wqx_v3_0/__init__.py | FlippingBinary/wqxlib-python | 5aa1d41384928f1faca47d5984485e2efa93174c | [
"MIT"
] | null | null | null | from .Activity import * # noqa: F401,F403
from .ActivityDescription import * # noqa: F401,F403
from .ActivityGroup import * # noqa: F401,F403
from .ActivityLocation import * # noqa: F401,F403
from .ActivityMetric import * # noqa: F401,F403
from .ActivityMetricType import * # noqa: F401,F403
from .AlternateMonitoringLocationIdentity import * # noqa: F401,F403
from .AquiferInformation import * # noqa: F401,F403
from .AttachedBinaryObject import * # noqa: F401,F403
from .BibliographicReference import * # noqa: F401,F403
from .BiologicalActivityDescription import * # noqa: F401,F403
from .BiologicalHabitatCollectionInformation import * # noqa: F401,F403
from .BiologicalHabitatIndex import * # noqa: F401,F403
from .CollectionEffort import * # noqa: F401,F403
from .ComparableAnalyticalMethod import * # noqa: F401,F403
from .DataQualityIndicator import * # noqa: F401,F403
from .DetectionQuantitationLimit import * # noqa: F401,F403
from .ElectronicAddress import * # noqa: F401,F403
from .Entity_Update_Identifiers import * # noqa: F401,F403
from .FrequencyClassInformation import * # noqa: F401,F403
from .HorizontalAccuracyMeasure import * # noqa: F401,F403
from .IndexType import * # noqa: F401,F403
from .LabSamplePreparation import * # noqa: F401,F403
from .Measure import * # noqa: F401,F403
from .MeasureCompact import * # noqa: F401,F403
from .MonitoringLocation import * # noqa: F401,F403
from .MonitoringLocationGeospatial import * # noqa: F401,F403
from .MonitoringLocationIdentity import * # noqa: F401,F403
from .NetInformation import * # noqa: F401,F403
from .Organization import * # noqa: F401,F403
from .Organization_Delete import * # noqa: F401,F403
from .OrganizationAddress import * # noqa: F401,F403
from .OrganizationDescription import * # noqa: F401,F403
from .Project import * # noqa: F401,F403
from .ProjectMonitoringLocationWeighting import * # noqa: F401,F403
from .ReferenceMethod import * # noqa: F401,F403
from .Result import * # noqa: F401,F403
from .ResultAnalyticalMethod import * # noqa: F401,F403
from .ResultDescription import * # noqa: F401,F403
from .ResultLabInformation import * # noqa: F401,F403
from .SampleDescription import * # noqa: F401,F403
from .SamplePreparation import * # noqa: F401,F403
from .SimpleContent import * # noqa: F401,F403
from .TaxonomicDetails import * # noqa: F401,F403
from .Telephonic import * # noqa: F401,F403
from .WellInformation import * # noqa: F401,F403
from .WQX import * # noqa: F401,F403
from .WQX_Delete import * # noqa: F401,F403
from .WQX_Update_Identifiers import * # noqa: F401,F403
from .WQXTime import * # noqa: F401,F403
| 53.176471 | 73 | 0.741888 | 306 | 2,712 | 6.555556 | 0.179739 | 0.249252 | 0.348953 | 0.448654 | 0.57677 | 0.127119 | 0.038883 | 0 | 0 | 0 | 0 | 0.132626 | 0.165929 | 2,712 | 50 | 74 | 54.24 | 0.7542 | 0.294617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0b519a9426996703ae8fc34f80c0a22efc173f6 | 2,704 | py | Python | tests/network_test.py | undark-lab/swyft | 50aa524e2f3a2b3d1354543178ff72bc7f055a35 | [
"MIT"
] | 104 | 2020-11-26T09:46:03.000Z | 2022-03-18T06:22:03.000Z | tests/network_test.py | cweniger/swyft | 2c0ed514622a37e8ec4e406b99a8327ecafb7ab4 | [
"MIT"
] | 83 | 2021-03-02T15:54:26.000Z | 2022-03-10T08:09:05.000Z | tests/network_test.py | undark-lab/swyft | 50aa524e2f3a2b3d1354543178ff72bc7f055a35 | [
"MIT"
] | 10 | 2021-02-04T14:27:36.000Z | 2022-03-31T17:39:34.000Z | from itertools import product
import pytest
import torch
from swyft.networks.standardization import OnlineStandardizingLayer
class TestStandardizationLayer:
bss = [1, 128]
shapes = [(1,), (5,), (10, 5, 2, 1), (1, 1, 1)]
means = [2, 139]
stds = [1, 80]
stables = [True, False]
@pytest.mark.parametrize(
"bs, shape, mean, std, stable", product(bss, shapes, means, stds, stables)
)
def test_online_standardization_layer_update(self, bs, shape, mean, std, stable):
osl = OnlineStandardizingLayer(shape, stable=stable)
osl.train()
old_stats = osl.n.clone(), osl.mean.clone(), osl.var.clone(), osl.std.clone()
data = torch.randn(bs, *shape) * std + mean
_ = osl(data)
new_stats = osl.n.clone(), osl.mean.clone(), osl.var.clone(), osl.std.clone()
assert old_stats != new_stats
@pytest.mark.parametrize(
"bs, shape, mean, std, stable", product(bss, shapes, means, stds, stables)
)
def test_online_standardization_layer_mean(self, bs, shape, mean, std, stable):
torch.manual_seed(0)
osl = OnlineStandardizingLayer(shape, stable=stable)
osl.train()
data = torch.randn(bs, *shape) * std + mean
_ = osl(data)
assert torch.allclose(osl.mean, data.mean(0))
@pytest.mark.parametrize(
"bs, shape, mean, std, stable", product(bss, shapes, means, stds, stables)
)
def test_online_standardization_layer_std(self, bs, shape, mean, std, stable):
torch.manual_seed(0)
osl = OnlineStandardizingLayer(shape, stable=stable)
osl.train()
data = torch.randn(bs, *shape) * std + mean
_ = osl(data)
if torch.isnan(data.std(0)).all():
replacement_std = torch.sqrt(torch.ones_like(osl.std) * osl.epsilon)
assert torch.allclose(osl.std, replacement_std)
else:
assert torch.allclose(osl.std, data.std(0))
@pytest.mark.parametrize(
"bs, shape, mean, std, stable", product(bss, shapes, means, stds, stables)
)
def test_online_standardization_layer_std_average(
self, bs, shape, mean, std, stable
):
torch.manual_seed(0)
osl = OnlineStandardizingLayer(shape, stable=stable, use_average_std=True)
osl.train()
data = torch.randn(bs, *shape) * std + mean
_ = osl(data)
if torch.isnan(data.std(0)).all():
replacement_std = torch.sqrt(torch.ones_like(osl.std) * osl.epsilon)
assert torch.allclose(osl.std, replacement_std.mean())
else:
assert torch.allclose(osl.std, data.std(0).mean())
if __name__ == "__main__":
pass
| 31.44186 | 86 | 0.620932 | 338 | 2,704 | 4.83432 | 0.201183 | 0.051408 | 0.053856 | 0.068543 | 0.808446 | 0.808446 | 0.793758 | 0.761934 | 0.761934 | 0.695838 | 0 | 0.014223 | 0.245932 | 2,704 | 85 | 87 | 31.811765 | 0.787151 | 0 | 0 | 0.507937 | 0 | 0 | 0.044379 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.063492 | false | 0.015873 | 0.063492 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d0d9537ac1d24e0081b216d34b95d2c1631f01b2 | 245 | py | Python | timemachines/skaters/sk/allskskaters.py | iklasky/timemachines | 1820fa9453d31d4daaeff75274a935c7455febe3 | [
"MIT"
] | 253 | 2021-01-08T17:33:30.000Z | 2022-03-21T17:32:36.000Z | timemachines/skaters/sk/allskskaters.py | iklasky/timemachines | 1820fa9453d31d4daaeff75274a935c7455febe3 | [
"MIT"
] | 65 | 2021-01-20T16:43:35.000Z | 2022-03-30T19:07:22.000Z | timemachines/skaters/sk/allskskaters.py | iklasky/timemachines | 1820fa9453d31d4daaeff75274a935c7455febe3 | [
"MIT"
] | 28 | 2021-02-04T14:58:30.000Z | 2022-01-17T04:35:17.000Z | from timemachines.skaters.sk.sktheta import SK_THETA_SKATERS
from timemachines.skaters.sk.skautoarima import SK_AA_SKATERS
from timemachines.skaters.sk.skautoets import SK_AE_SKATERS
SK_SKATERS = SK_THETA_SKATERS + SK_AA_SKATERS + SK_AE_SKATERS
| 49 | 61 | 0.873469 | 38 | 245 | 5.289474 | 0.289474 | 0.313433 | 0.343284 | 0.373134 | 0.318408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077551 | 245 | 4 | 62 | 61.25 | 0.889381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
efbfae89cfcc5866583dfb5b68637e4b471e392a | 9,517 | py | Python | src/piscola/extinction_correction.py | temuller/PISCoLA | e380603155991c267c26c4c93dfd650b9777b6b9 | [
"MIT"
] | 1 | 2019-09-18T11:30:46.000Z | 2019-09-18T11:30:46.000Z | src/piscola/extinction_correction.py | temuller/PISCoLA | e380603155991c267c26c4c93dfd650b9777b6b9 | [
"MIT"
] | 1 | 2020-12-05T21:04:40.000Z | 2021-03-25T11:10:57.000Z | src/piscola/extinction_correction.py | temuller/PISCoLA | e380603155991c267c26c4c93dfd650b9777b6b9 | [
"MIT"
] | 2 | 2020-11-21T20:21:02.000Z | 2021-07-22T16:10:18.000Z | import piscola
from .filter_utils import integrate_filter
import extinction
import sfdmap
import matplotlib.pyplot as plt
import numpy as np
import os
def redden(wave, flux, ra, dec, scaling=0.86, reddening_law='fitzpatrick99', dustmaps_dir=None, r_v=3.1, ebv=None):
"""Reddens the given spectrum, given a right ascension and declination or :math:`E(B-V)`.
Parameters
----------
wave : array
Wavelength values.
flux : array
Flux density values.
ra : float
Right ascension.
dec : float
Declination in degrees.
scaling: float, default ``0.86``
Calibration of the Milky Way dust maps. Either ``0.86``
for the Schlafly & Finkbeiner (2011) recalibration or ``1.0`` for the original
dust map of Schlegel, Finkbeiner & Davis (1998).
reddening_law: str, default ``fitzpatrick99``
Reddening law. The options are: ``ccm89`` (Cardelli, Clayton & Mathis 1989), ``odonnell94`` (O’Donnell 1994),
``fitzpatrick99`` (Fitzpatrick 1999), ``calzetti00`` (Calzetti 2000) and ``fm07`` (Fitzpatrick & Massa 2007 with
:math:`R_V` = 3.1).
dustmaps_dir : str, default ``None``
Directory where the dust maps of Schlegel, Finkbeiner & Davis (1998) are found.
r_v : float, default ``3.1``
Total-to-selective extinction ratio (:math:`R_V`).
ebv : float, default ``None``
Colour excess (:math:`E(B-V)`). If given, this is used instead of the dust map value.
Returns
-------
redden_flux : array
Reddened flux values.
"""
pisco_path = piscola.__path__[0]
if dustmaps_dir is None:
dustmaps_dir = os.path.join(pisco_path, 'sfddata-master')
if ebv is None:
m = sfdmap.SFDMap(mapdir=dustmaps_dir, scaling=scaling)
ebv = m.ebv(ra, dec) # RA and DEC in degrees
a_v = r_v*ebv
rl_list = ['ccm89', 'odonnell94', 'fitzpatrick99', 'calzetti00', 'fm07']
assert reddening_law in rl_list, f'Choose one of the available reddening laws: {rl_list}'
if reddening_law == 'ccm89':
ext = extinction.ccm89(wave, a_v, r_v)
elif reddening_law == 'odonnell94':
ext = extinction.odonnell94(wave, a_v, r_v)
elif reddening_law == 'fitzpatrick99':
ext = extinction.fitzpatrick99(wave, a_v, r_v)
elif reddening_law == 'calzetti00':
ext = extinction.calzetti00(wave, a_v, r_v)
elif reddening_law == 'fm07':
ext = extinction.fm07(wave, a_v)
redden_flux = extinction.apply(ext, flux)
return redden_flux
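For reference, `extinction.apply` dims a flux by `ext` magnitudes via the standard relation F_obs = F · 10^(−0.4·A), and `extinction.remove` inverts it (which `deredden` below relies on). A minimal pure-Python sketch of that relation, independent of the `extinction` package:

```python
def apply_extinction(ext_mag, flux):
    # Dim each flux value by ext_mag magnitudes: F_obs = F * 10 ** (-0.4 * A)
    return [f * 10 ** (-0.4 * a) for a, f in zip(ext_mag, flux)]

def remove_extinction(ext_mag, flux):
    # Inverse operation: recover the intrinsic flux
    return [f * 10 ** (0.4 * a) for a, f in zip(ext_mag, flux)]

flux = [100.0, 100.0]
ext = [0.5, 1.0]  # extinction in magnitudes at each wavelength (illustrative)
reddened = apply_extinction(ext, flux)
restored = remove_extinction(ext, reddened)
```

Applying and then removing the same extinction recovers the input flux exactly, which is why `redden` and `deredden` are inverses of each other.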
def deredden(wave, flux, ra, dec, scaling=0.86, reddening_law='fitzpatrick99', dustmaps_dir=None, r_v=3.1, ebv=None):
"""Dereddens the given spectrum, given a right ascension and declination or :math:`E(B-V)`.
Parameters
----------
wave : array
Wavelength values.
flux : array
Flux density values.
ra : float
Right ascension in degrees.
dec : float
Declination in degrees.
scaling: float, default ``0.86``
Calibration of the Milky Way dust maps. Either ``0.86``
for the Schlafly & Finkbeiner (2011) recalibration or ``1.0`` for the original
dust map of Schlegel, Finkbeiner & Davis (1998).
reddening_law: str, default ``fitzpatrick99``
Reddening law. The options are: ``ccm89`` (Cardelli, Clayton & Mathis 1989), ``odonnell94`` (O’Donnell 1994),
``fitzpatrick99`` (Fitzpatrick 1999), ``calzetti00`` (Calzetti 2000) and ``fm07`` (Fitzpatrick & Massa 2007 with
:math:`R_V` = 3.1).
dustmaps_dir : str, default ``None``
Directory where the dust maps of Schlegel, Finkbeiner & Davis (1998) are found.
r_v : float, default ``3.1``
Total-to-selective extinction ratio (:math:`R_V`).
ebv : float, default ``None``
Colour excess (:math:`E(B-V)`). If given, this is used instead of the dust map value.
Returns
-------
deredden_flux : array
Dereddened flux values.
"""
pisco_path = piscola.__path__[0]
if dustmaps_dir is None:
dustmaps_dir = os.path.join(pisco_path, 'sfddata-master')
if ebv is None:
m = sfdmap.SFDMap(mapdir=dustmaps_dir, scaling=scaling)
ebv = m.ebv(ra, dec) # RA and DEC in degrees
a_v = r_v*ebv
rl_list = ['ccm89', 'odonnell94', 'fitzpatrick99', 'calzetti00', 'fm07']
assert reddening_law in rl_list, f'Choose one of the available reddening laws: {rl_list}'
if reddening_law == 'ccm89':
ext = extinction.ccm89(wave, a_v, r_v)
elif reddening_law == 'odonnell94':
ext = extinction.odonnell94(wave, a_v, r_v)
elif reddening_law == 'fitzpatrick99':
ext = extinction.fitzpatrick99(wave, a_v, r_v)
elif reddening_law == 'calzetti00':
ext = extinction.calzetti00(wave, a_v, r_v)
elif reddening_law == 'fm07':
ext = extinction.fm07(wave, a_v)
deredden_flux = extinction.remove(ext, flux)
return deredden_flux
def calculate_ebv(ra, dec, scaling=0.86, dustmaps_dir=None):
"""Calculates Milky Way colour excess, :math:`E(B-V)`, for a given right ascension and declination.
Parameters
----------
ra : float
Right ascension in degrees.
dec : float
Declination in degrees.
scaling: float, default ``0.86``
Calibration of the Milky Way dust maps. Either ``0.86``
for the Schlafly & Finkbeiner (2011) recalibration or ``1.0`` for the original
dust map of Schlegel, Finkbeiner & Davis (1998).
dustmaps_dir : str, default ``None``
Directory where the dust maps of Schlegel, Finkbeiner & Davis (1998) are found.
Returns
-------
ebv : float
Reddening value, :math:`E(B-V)`.
"""
pisco_path = piscola.__path__[0]
if dustmaps_dir is None:
dustmaps_dir = os.path.join(pisco_path, 'sfddata-master')
m = sfdmap.SFDMap(mapdir=dustmaps_dir, scaling=scaling)
ebv = m.ebv(ra, dec) # RA and DEC in degrees
return ebv
def extinction_filter(filter_wave, filter_response, ra, dec, scaling=0.86, reddening_law='fitzpatrick99', dustmaps_dir=None, r_v=3.1, ebv=None):
"""Estimate the extinction for a given filter, given a right ascension and declination or :math:`E(B-V)`.
Parameters
----------
filter_wave : array
Filter's wavelength range.
filter_response : array
Filter's response function.
ra : float
Right ascension in degrees.
dec : float
Declination in degrees.
scaling: float, default ``0.86``
Calibration of the Milky Way dust maps. Either ``0.86``
for the Schlafly & Finkbeiner (2011) recalibration or ``1.0`` for the original
dust map of Schlegel, Finkbeiner & Davis (1998).
reddening_law: str, default ``fitzpatrick99``
Reddening law. The options are: ``ccm89`` (Cardelli, Clayton & Mathis 1989), ``odonnell94`` (O’Donnell 1994),
``fitzpatrick99`` (Fitzpatrick 1999), ``calzetti00`` (Calzetti 2000) and ``fm07`` (Fitzpatrick & Massa 2007 with
:math:`R_V` = 3.1).
dustmaps_dir : str, default ``None``
Directory where the dust maps of Schlegel, Finkbeiner & Davis (1998) are found.
r_v : float, default ``3.1``
Total-to-selective extinction ratio (:math:`R_V`).
ebv : float, default ``None``
Colour excess (:math:`E(B-V)`). If given, this is used instead of the dust map value.
Returns
-------
A : float
Extinction value in magnitudes.
"""
flux = 100
deredden_flux = deredden(filter_wave, flux, ra, dec, scaling, reddening_law, dustmaps_dir, r_v, ebv)
f1 = integrate_filter(filter_wave, flux, filter_wave, filter_response)
f2 = integrate_filter(filter_wave, deredden_flux, filter_wave, filter_response)
A = -2.5*np.log10(f1/f2)
return A
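The band extinction above comes from the ratio of the reddened to the dereddened band-integrated flux via A = −2.5·log10(f1/f2). A small worked example of that last step, with purely illustrative flux values (not real filter outputs):

```python
import math

f1 = 100.0  # band-integrated reddened flux (illustrative)
f2 = 120.0  # band-integrated dereddened flux (illustrative)
A = -2.5 * math.log10(f1 / f2)  # extinction in magnitudes, ~0.198 here
```

A dereddened flux larger than the reddened one yields a positive extinction in magnitudes, as expected.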
def extinction_curve(ra, dec, scaling=0.86, reddening_law='fitzpatrick99', dustmaps_dir=None, r_v=3.1, ebv=None):
"""Plots the extinction curve for a given right ascension and declination or :math:`E(B-V)`.
Parameters
----------
ra : float
Right ascension in degrees.
dec : float
Declination in degrees.
scaling: float, default ``0.86``
Calibration of the Milky Way dust maps. Either ``0.86``
for the Schlafly & Finkbeiner (2011) recalibration or ``1.0`` for the original
dust map of Schlegel, Finkbeiner & Davis (1998).
reddening_law: str, default ``fitzpatrick99``
Reddening law. The options are: ``ccm89`` (Cardelli, Clayton & Mathis 1989), ``odonnell94`` (O’Donnell 1994),
``fitzpatrick99`` (Fitzpatrick 1999), ``calzetti00`` (Calzetti 2000) and ``fm07`` (Fitzpatrick & Massa 2007 with
:math:`R_V` = 3.1).
dustmaps_dir : str, default ``None``
Directory where the dust maps of Schlegel, Finkbeiner & Davis (1998) are found.
r_v : float, default ``3.1``
Total-to-selective extinction ratio (:math:`R_V`).
ebv : float, default ``None``
Colour excess (:math:`E(B-V)`). If given, this is used instead of the dust map value.
"""
flux = 100
wave = np.arange(1000, 25001) # in Angstroms
deredden_flux = deredden(wave, flux, ra, dec, scaling, reddening_law, dustmaps_dir, r_v, ebv)
ff = 1 - flux/deredden_flux
f, ax = plt.subplots(figsize=(8,6))
ax.plot(wave, ff)
ax.set_xlabel(r'wavelength ($\AA$)', fontsize=18)
ax.set_ylabel('fraction of extinction', fontsize=18)
ax.set_title(r'Extinction curve', fontsize=18)
ax.xaxis.set_tick_params(labelsize=15)
ax.yaxis.set_tick_params(labelsize=15)
plt.show()
| 37.031128 | 144 | 0.646212 | 1,289 | 9,517 | 4.660978 | 0.137316 | 0.009321 | 0.009987 | 0.011651 | 0.83006 | 0.810253 | 0.799601 | 0.791944 | 0.783622 | 0.783622 | 0 | 0.051072 | 0.230535 | 9,517 | 256 | 145 | 37.175781 | 0.769357 | 0.555532 | 0 | 0.576923 | 0 | 0 | 0.114874 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 1 | 0.064103 | false | 0 | 0.089744 | 0 | 0.205128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
560144f8aa953769b218b91981bdda5295ecfa3f | 143 | py | Python | unicode-characters.py | eltechno/python_course | f74abac7df3f9f41864afd06479389260c29ea3a | [
"MIT"
] | 4 | 2019-05-04T00:33:25.000Z | 2021-05-29T20:37:59.000Z | unicode-characters.py | eltechno/python_course | f74abac7df3f9f41864afd06479389260c29ea3a | [
"MIT"
] | null | null | null | unicode-characters.py | eltechno/python_course | f74abac7df3f9f41864afd06479389260c29ea3a | [
"MIT"
] | 3 | 2020-05-05T13:14:28.000Z | 2022-02-03T16:18:37.000Z | #www.unicode.org/charts
print("You Rolled \u2680")
print('Discard \U0001F000')
#print('Discard \N{MAHJONG TILE RED DRAGON}')
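The commented-out line is worth noting: `\N` escapes require curly braces around the character name, not parentheses. A corrected, runnable version:

```python
# \N{...} looks up a character by its Unicode name;
# MAHJONG TILE RED DRAGON is code point U+1F004
red_dragon = '\N{MAHJONG TILE RED DRAGON}'
print('Discard', red_dragon)
```

Using named escapes makes the intent clearer than raw `\u`/`\U` code points like those on the lines above.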
| 17.875 | 60 | 0.713287 | 21 | 143 | 4.857143 | 0.857143 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 0.097902 | 143 | 7 | 61 | 20.428571 | 0.697674 | 0.566434 | 0 | 0 | 0 | 0 | 0.603448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4bff69377eaff929c5f29ab97c5df203ab0c06f5 | 2,032 | py | Python | plugin/strelka.py | pasha00000/mbplugin | e919c7c899495ae7f4b04fc2568991bb717bed9a | [
"MIT"
] | null | null | null | plugin/strelka.py | pasha00000/mbplugin | e919c7c899495ae7f4b04fc2568991bb717bed9a | [
"MIT"
] | null | null | null | plugin/strelka.py | pasha00000/mbplugin | e919c7c899495ae7f4b04fc2568991bb717bed9a | [
"MIT"
] | null | null | null | # -*- coding: utf8 -*-
''' Author: ArtyLa '''
import os, sys, re, logging
import store
icon = '789C73F235636100033320D600620128666450804840E591C1FFFFFFA98E3FBC7ECA706CCB6C86E53D290CCBBB9319564FCC6638BF7F25C3AF9FDF09EA3DB2713A43758008434F9A31C3A619650C7B977732ACEC4D67688BD760688E5262B87E6A274EBDDBE6D5313446C8335C3FBD0343EECF9F5F0CBB97B63154F8F0339C3FB00A43FEF685030C659E3C0C8F6E9C06F3FFFEFDC3F0E7F72F0C75C7B7CE6588099AC9F0EEDD3F14F1F90DA10C1BA617C3F99FDEBD60985EEA0E0E0B6475FFFEFD6748CBFBCC306B01222CBE7E7ACB50E6C5CBF0F4EE2514B5170EAC66E84CD265B87BF1108AF8BACD3F1962523FC1F90FAF9F62A8F217069AFD17C3BD2F1E5C63E8CD300187E3FF7F1037DFB8F587C1D9EF03C3AF5F1035203F57FA0962F5EF87D74F182617D8336C98560C0E1390D8F59B7F185CFC11FABF7D7AC7500E74FFA39B6750F4DE3ABF8FA133598FE1D2E17528E26B37FD64884DFB8422B6B0399261DD940238FFF3FB570C1372AC185E3CBC86117EC9399F18E62FFD8122FEECDE657018DC01C62338BE817EF9F9FD0B867FF6AFEA65C808AB6378FFE11F86DC1E6018D58548315C38B806C84795FFF3FB27C3B6F975E0B479E3CC2E9C69F0E4F6790C75A1D20C5D29FA0C9B6757311C5A3F8561FDB42270DA6D8D53677870F538C13CF0FDCB0786D3BB16312C698B659855E5CBB0069C7F5681DD40EDBC0A00F3E78F1E'
def get_card_info(cardnum):
# suppress complaints about certificate errors (verify=False below)
baseurl = 'https://strelkacard.ru'
url = f'https://strelkacard.ru/api/cards/status/?cardnum={cardnum}&cardtypeid=3ae427a1-0f17-4524-acb1-a3f50090a8f3'
session = store.Session()
session.disable_warnings() # pylint: disable=no-member
response2 = session.get(url, headers={'Referer': baseurl + '/'}, verify=False)
return response2.json()
def get_balance(login, password, storename=None, **kwargs):
result = {}
card_info = get_card_info(login)
result['Balance'] = card_info['balance']/100.
if card_info.get('emergencyblocked', False):
result['BlockStatus'] = 'Emergency Blocked'
if card_info.get('cardblocked', False):
result['BlockStatus'] = 'Card Blocked'
return result
if __name__ == '__main__':
print('This is module strelka')
| 63.5 | 1,040 | 0.829724 | 117 | 2,032 | 14.25641 | 0.623932 | 0.028777 | 0.019784 | 0.015588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.355349 | 0.098425 | 2,032 | 31 | 1,041 | 65.548387 | 0.555131 | 0.049705 | 0 | 0 | 0 | 0.047619 | 0.680592 | 0.544685 | 0 | 1 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.047619 | 0.095238 | 0 | 0.285714 | 0.047619 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ef25739a352ecbca38cfa9d53cd8a25845ac5335 | 71 | py | Python | digsby/src/msn/SOAP/MSNStorageService/__init__.py | ifwe/digsby | f5fe00244744aa131e07f09348d10563f3d8fa99 | [
"Python-2.0"
] | 35 | 2015-08-15T14:32:38.000Z | 2021-12-09T16:21:26.000Z | digsby/src/msn/SOAP/MSNStorageService/__init__.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | [
"Python-2.0"
] | 4 | 2015-09-12T10:42:57.000Z | 2017-02-27T04:05:51.000Z | digsby/src/msn/SOAP/MSNStorageService/__init__.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | [
"Python-2.0"
] | 15 | 2015-07-10T23:58:07.000Z | 2022-01-23T22:16:33.000Z | from StorageService_types import *
from StorageService_client import *
| 23.666667 | 35 | 0.859155 | 8 | 71 | 7.375 | 0.625 | 0.610169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 36 | 35.5 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
324d58a0b550bcdbd0b1dde3b56c3600a2d869b8 | 45 | py | Python | shotbot/__init__.py | fyafighter/shotbot | d95041e86a64330bfb0b6439cd773ddf0db29acd | [
"Apache-2.0"
] | 1 | 2020-08-28T02:36:08.000Z | 2020-08-28T02:36:08.000Z | shotbot/__init__.py | fyafighter/shotbot | d95041e86a64330bfb0b6439cd773ddf0db29acd | [
"Apache-2.0"
] | 1 | 2020-09-09T21:56:39.000Z | 2020-09-09T21:56:39.000Z | shotbot/__init__.py | fyafighter/shotbot | d95041e86a64330bfb0b6439cd773ddf0db29acd | [
"Apache-2.0"
] | null | null | null | from .relay import Relay
from .bot import Bot | 22.5 | 24 | 0.8 | 8 | 45 | 4.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 25 | 22.5 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3254b006ad95808b4137014e0a72aac0aa6a61c9 | 17,094 | py | Python | brown_tatum_rapm/brown_tatum_lineup_combos.py | airalcorn2/NBA_Tutorials | 85179300864d2c97a727a7b2a5ee46d250bdac20 | [
"MIT"
] | 123 | 2019-01-02T20:51:19.000Z | 2022-03-16T23:37:17.000Z | brown_tatum_rapm/brown_tatum_lineup_combos.py | nicholasrios/NBA_Tutorials | 85179300864d2c97a727a7b2a5ee46d250bdac20 | [
"MIT"
] | 8 | 2019-07-31T14:32:46.000Z | 2022-02-14T00:16:15.000Z | brown_tatum_rapm/brown_tatum_lineup_combos.py | nicholasrios/NBA_Tutorials | 85179300864d2c97a727a7b2a5ee46d250bdac20 | [
"MIT"
] | 38 | 2019-05-15T23:26:16.000Z | 2022-01-03T05:35:13.000Z | import pandas as pd
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
celtics_team_id = 1610612738
jaylen_brown = 202683
jayson_tatum = 1629057
data = pd.read_csv('data/possessions_19_20.csv')
data = data[(data['offenseTeamId1'] == celtics_team_id) | (data['defenseTeamId2'] == celtics_team_id)]
season_stats = pd.read_csv('data/season_stats.csv')
starters = {}
end_of_bench = {}
player_names = {}
def select_starters(team):
starters[team['TEAM_ID'].values[0]] = list(team.nlargest(5, 'MIN')['PLAYER_ID'].values)
end_of_bench[team['TEAM_ID'].values[0]] = list(team.nsmallest(7, 'MIN')['PLAYER_ID'].values)
def build_player_names(row):
player_names[str(row['PLAYER_ID'])] = row['PLAYER_NAME'].split(' ', 1)[-1]
season_stats[['PLAYER_ID', 'TEAM_ID', 'MIN']].groupby(by='TEAM_ID').apply(select_starters)
season_stats.apply(build_player_names, axis=1)
print(starters[celtics_team_id])
def count_players(row, side, team_key, pool):
"""Count how many of the five on-court players appear in the given pool."""
team = row[team_key]
return sum(row[f'{side}Player{n}Id'] in pool[team] for n in range(1, 6))
def count_offensive_starters(row):
return count_players(row, 'offense', 'offenseTeamId1', starters)
def count_offensive_eob(row):
return count_players(row, 'offense', 'offenseTeamId1', end_of_bench)
def count_defensive_starters(row):
return count_players(row, 'defense', 'defenseTeamId2', starters)
def count_defensive_eob(row):
return count_players(row, 'defense', 'defenseTeamId2', end_of_bench)
def calculate_ppp(frame):
frame['PPP'] = frame['points'] / frame['possessions']
def calculate_eppp(frame):
frame['EPPP'] = frame['expectedPoints'] / frame['possessions']
def calculate_possession_percent(frame):
frame['possession%'] = frame['possessions'] / frame['possessions'].sum()
def calculate_stats(frame):
calculate_ppp(frame)
calculate_eppp(frame)
calculate_possession_percent(frame)
return frame.fillna(0.0)
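The helpers above compute points per possession (PPP), expected PPP, and each row's share of total possessions. Stripped of pandas, the same arithmetic looks like this (numbers are illustrative only):

```python
points = [110.0, 95.0]
expected_points = [108.0, 99.0]
possessions = [100.0, 98.0]

# PPP and expected PPP are simple per-row ratios
ppp = [p / n for p, n in zip(points, possessions)]
eppp = [e / n for e, n in zip(expected_points, possessions)]

# possession% normalizes each row by the total, so the shares sum to 1
total = sum(possessions)
poss_pct = [n / total for n in possessions]
```

The pandas versions do the same thing column-wise, with `fillna(0.0)` guarding against division by zero producing NaNs.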
data['OFFENSIVE_STARTERS'] = data.apply(count_offensive_starters, axis=1)
data['DEFENSIVE_STARTERS'] = data.apply(count_defensive_starters, axis=1)
data['OFFENSIVE_END_OF_BENCH'] = data.apply(count_offensive_eob, axis=1)
data['DEFENSIVE_END_OF_BENCH'] = data.apply(count_defensive_eob, axis=1)
celtics_o = data[data['offenseTeamId1'] == celtics_team_id]
every_possession_with_brown_not_tatum = celtics_o[
((celtics_o['offensePlayer1Id'] == jaylen_brown) |
(celtics_o['offensePlayer2Id'] == jaylen_brown) |
(celtics_o['offensePlayer3Id'] == jaylen_brown) |
(celtics_o['offensePlayer4Id'] == jaylen_brown) |
(celtics_o['offensePlayer5Id'] == jaylen_brown)) &
((celtics_o['offensePlayer1Id'] != jayson_tatum) &
(celtics_o['offensePlayer2Id'] != jayson_tatum) &
(celtics_o['offensePlayer3Id'] != jayson_tatum) &
(celtics_o['offensePlayer4Id'] != jayson_tatum) &
(celtics_o['offensePlayer5Id'] != jayson_tatum))
]
every_possession_with_tatum_not_brown = celtics_o[
((celtics_o['offensePlayer1Id'] == jayson_tatum) |
(celtics_o['offensePlayer2Id'] == jayson_tatum) |
(celtics_o['offensePlayer3Id'] == jayson_tatum) |
(celtics_o['offensePlayer4Id'] == jayson_tatum) |
(celtics_o['offensePlayer5Id'] == jayson_tatum)) &
((celtics_o['offensePlayer1Id'] != jaylen_brown) &
(celtics_o['offensePlayer2Id'] != jaylen_brown) &
(celtics_o['offensePlayer3Id'] != jaylen_brown) &
(celtics_o['offensePlayer4Id'] != jaylen_brown) &
(celtics_o['offensePlayer5Id'] != jaylen_brown))
]
###################
# SIMILAR LINEUPS #
###################
print('OFFENSE STATS')
brown_no_tatum_stats = every_possession_with_brown_not_tatum[['points', 'expectedPoints', 'fieldGoalAttempts', 'fieldGoals', 'threePtAttempts', 'threePtMade', 'freeThrowAttempts', 'freeThrowsMade', 'possessions', 'seconds']].sum()
brown_no_tatum_stats['OFFENSIVE_STARTERS_AVG'] = every_possession_with_brown_not_tatum['OFFENSIVE_STARTERS'].mean()
brown_no_tatum_stats['DEFENSIVE_STARTERS_AVG'] = every_possession_with_brown_not_tatum['DEFENSIVE_STARTERS'].mean()
brown_no_tatum_stats['OFFENSIVE_EOB_AVG'] = every_possession_with_brown_not_tatum['OFFENSIVE_END_OF_BENCH'].mean()
brown_no_tatum_stats['DEFENSIVE_EOB_AVG'] = every_possession_with_brown_not_tatum['DEFENSIVE_END_OF_BENCH'].mean()
calculate_ppp(brown_no_tatum_stats)
calculate_eppp(brown_no_tatum_stats)
tatum_no_brown_stats = every_possession_with_tatum_not_brown[['points', 'expectedPoints', 'fieldGoalAttempts', 'fieldGoals', 'threePtAttempts', 'threePtMade', 'freeThrowAttempts', 'freeThrowsMade', 'possessions', 'seconds']].sum()
tatum_no_brown_stats['OFFENSIVE_STARTERS_AVG'] = every_possession_with_tatum_not_brown['OFFENSIVE_STARTERS'].mean()
tatum_no_brown_stats['DEFENSIVE_STARTERS_AVG'] = every_possession_with_tatum_not_brown['DEFENSIVE_STARTERS'].mean()
tatum_no_brown_stats['OFFENSIVE_EOB_AVG'] = every_possession_with_tatum_not_brown['OFFENSIVE_END_OF_BENCH'].mean()
tatum_no_brown_stats['DEFENSIVE_EOB_AVG'] = every_possession_with_tatum_not_brown['DEFENSIVE_END_OF_BENCH'].mean()
calculate_ppp(tatum_no_brown_stats)
calculate_eppp(tatum_no_brown_stats)
print('Stats with Brown and Not Tatum:')
print(brown_no_tatum_stats.round(2))
print('Stats with Tatum and Not Brown:')
print(tatum_no_brown_stats.round(2))
def calculate_key(row, player):
players = [row['offensePlayer1Id'], row['offensePlayer2Id'], row['offensePlayer3Id'], row['offensePlayer4Id'], row['offensePlayer5Id']]
others = [player_names[str(int(p))] for p in players if p != player]
return '-'.join(others)
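`calculate_key` labels each lineup by the surnames of the teammates sharing the floor with the player of interest. A standalone illustration with a hypothetical id-to-name mapping (only Brown's and Tatum's ids come from the script; the third entry is made up):

```python
player_names = {'202683': 'Brown', '1629057': 'Tatum', '201950': 'Walker'}  # hypothetical subset
players = [202683, 1629057, 201950]
player = 202683  # exclude Brown himself from his own lineup key
key = '-'.join(player_names[str(int(p))] for p in players if p != player)
```

Because the excluded player differs between the Brown and Tatum frames, identical supporting casts produce identical keys, which is what makes the later `merge` on `new_key` meaningful.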
every_possession_with_brown_not_tatum['new_key'] = every_possession_with_brown_not_tatum.apply(calculate_key, args=(jaylen_brown,), axis=1)
stints_brown_not_tatum = every_possession_with_brown_not_tatum[['new_key','points', 'expectedPoints', 'possessions', 'seconds']].groupby(by='new_key').sum().reset_index()
calculate_ppp(stints_brown_not_tatum)
calculate_eppp(stints_brown_not_tatum)
every_possession_with_tatum_not_brown['new_key'] = every_possession_with_tatum_not_brown.apply(calculate_key, args=(jayson_tatum,), axis=1)
stints_tatum_not_brown = every_possession_with_tatum_not_brown[['new_key','points', 'expectedPoints', 'possessions', 'seconds']].groupby(by='new_key').sum().reset_index()
calculate_ppp(stints_tatum_not_brown)
calculate_eppp(stints_tatum_not_brown)
joined = stints_brown_not_tatum.merge(stints_tatum_not_brown, how='inner', on='new_key', suffixes=('_brown', '_tatum'))
joined = joined.fillna(0.0)
print('OFFENSE STATS')
print(joined[['new_key', 'possessions_brown', 'PPP_brown', 'EPPP_brown', 'possessions_tatum', 'PPP_tatum', 'EPPP_tatum']].round(2))
###########################
# STARTERS & END OF BENCH #
###########################
print('OFFENSE STATS')
print('\n Offense and Defense \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2))
print('\n Offense Only \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['OFFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['OFFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['OFFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2))
print('\n Defense Only \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['DEFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['DEFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['DEFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2))
###########
# DEFENSE #
###########
print('\n\n\n\n')
print('DEFENSE STATS')
celtics_d = data[data['defenseTeamId2'] == celtics_team_id]
every_possession_with_brown_not_tatum = celtics_d[
((celtics_d['defensePlayer1Id'] == jaylen_brown) |
(celtics_d['defensePlayer2Id'] == jaylen_brown) |
(celtics_d['defensePlayer3Id'] == jaylen_brown) |
(celtics_d['defensePlayer4Id'] == jaylen_brown) |
(celtics_d['defensePlayer5Id'] == jaylen_brown)) &
((celtics_d['defensePlayer1Id'] != jayson_tatum) &
(celtics_d['defensePlayer2Id'] != jayson_tatum) &
(celtics_d['defensePlayer3Id'] != jayson_tatum) &
(celtics_d['defensePlayer4Id'] != jayson_tatum) &
(celtics_d['defensePlayer5Id'] != jayson_tatum))
]
every_possession_with_tatum_not_brown = celtics_d[
((celtics_d['defensePlayer1Id'] == jayson_tatum) |
(celtics_d['defensePlayer2Id'] == jayson_tatum) |
(celtics_d['defensePlayer3Id'] == jayson_tatum) |
(celtics_d['defensePlayer4Id'] == jayson_tatum) |
(celtics_d['defensePlayer5Id'] == jayson_tatum)) &
((celtics_d['defensePlayer1Id'] != jaylen_brown) &
(celtics_d['defensePlayer2Id'] != jaylen_brown) &
(celtics_d['defensePlayer3Id'] != jaylen_brown) &
(celtics_d['defensePlayer4Id'] != jaylen_brown) &
(celtics_d['defensePlayer5Id'] != jaylen_brown))
]
###################
# SIMILAR LINEUPS #
###################
brown_no_tatum_stats = every_possession_with_brown_not_tatum[['points', 'expectedPoints', 'fieldGoalAttempts', 'fieldGoals', 'threePtAttempts', 'threePtMade', 'freeThrowAttempts', 'freeThrowsMade', 'possessions', 'seconds']].sum()
brown_no_tatum_stats['OFFENSIVE_STARTERS_AVG'] = every_possession_with_brown_not_tatum['OFFENSIVE_STARTERS'].mean()
brown_no_tatum_stats['DEFENSIVE_STARTERS_AVG'] = every_possession_with_brown_not_tatum['DEFENSIVE_STARTERS'].mean()
brown_no_tatum_stats['OFFENSIVE_EOB_AVG'] = every_possession_with_brown_not_tatum['OFFENSIVE_END_OF_BENCH'].mean()
brown_no_tatum_stats['DEFENSIVE_EOB_AVG'] = every_possession_with_brown_not_tatum['DEFENSIVE_END_OF_BENCH'].mean()
calculate_ppp(brown_no_tatum_stats)
calculate_eppp(brown_no_tatum_stats)
tatum_no_brown_stats = every_possession_with_tatum_not_brown[['points', 'expectedPoints', 'fieldGoalAttempts', 'fieldGoals', 'threePtAttempts', 'threePtMade', 'freeThrowAttempts', 'freeThrowsMade', 'possessions', 'seconds']].sum()
tatum_no_brown_stats['OFFENSIVE_STARTERS_AVG'] = every_possession_with_tatum_not_brown['OFFENSIVE_STARTERS'].mean()
tatum_no_brown_stats['DEFENSIVE_STARTERS_AVG'] = every_possession_with_tatum_not_brown['DEFENSIVE_STARTERS'].mean()
tatum_no_brown_stats['OFFENSIVE_EOB_AVG'] = every_possession_with_tatum_not_brown['OFFENSIVE_END_OF_BENCH'].mean()
tatum_no_brown_stats['DEFENSIVE_EOB_AVG'] = every_possession_with_tatum_not_brown['DEFENSIVE_END_OF_BENCH'].mean()
calculate_ppp(tatum_no_brown_stats)
calculate_eppp(tatum_no_brown_stats)
print('Stats with Brown and Not Tatum:')
print(brown_no_tatum_stats.round(2))
print('Stats with Tatum and Not Brown:')
print(tatum_no_brown_stats.round(2))
def calculate_key(row, player):
players = [row['defensePlayer1Id'], row['defensePlayer2Id'], row['defensePlayer3Id'], row['defensePlayer4Id'], row['defensePlayer5Id']]
others = [player_names[str(int(p))] for p in players if p != player]
return '-'.join(others)
every_possession_with_brown_not_tatum['new_key'] = every_possession_with_brown_not_tatum.apply(calculate_key, args=(jaylen_brown,), axis=1)
stints_brown_not_tatum = every_possession_with_brown_not_tatum[['new_key','points', 'expectedPoints', 'possessions', 'seconds']].groupby(by='new_key').sum().reset_index()
calculate_ppp(stints_brown_not_tatum)
calculate_eppp(stints_brown_not_tatum)
every_possession_with_tatum_not_brown['new_key'] = every_possession_with_tatum_not_brown.apply(calculate_key, args=(jayson_tatum,), axis=1)
stints_tatum_not_brown = every_possession_with_tatum_not_brown[['new_key','points', 'expectedPoints', 'possessions', 'seconds']].groupby(by='new_key').sum().reset_index()
calculate_ppp(stints_tatum_not_brown)
calculate_eppp(stints_tatum_not_brown)
joined = stints_brown_not_tatum.merge(stints_tatum_not_brown, how='inner', on='new_key', suffixes=('_brown', '_tatum'))
joined = joined.fillna(0.0)
print('DEFENSE STATS')
print(joined[['new_key', 'possessions_brown', 'PPP_brown', 'EPPP_brown', 'possessions_tatum', 'PPP_tatum', 'EPPP_tatum']].round(2))
###########################
# STARTERS & END OF BENCH #
###########################
print('DEFENSE STATS')
print('\n Offense and Defense \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['OFFENSIVE_STARTERS', 'DEFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2))
print('\n Offense Only \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['OFFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['OFFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['OFFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['OFFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2))
print('\n Defense Only \n')
brown_no_tatun_with_starters = every_possession_with_brown_not_tatum[['DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['DEFENSIVE_STARTERS']).sum().reset_index()
brown_no_tatun_with_starters = calculate_stats(brown_no_tatun_with_starters)
tatum_no_brown_with_starters = every_possession_with_tatum_not_brown[['DEFENSIVE_STARTERS', 'points', 'expectedPoints', 'possessions']].groupby(by=['DEFENSIVE_STARTERS']).sum().reset_index()
tatum_no_brown_with_starters = calculate_stats(tatum_no_brown_with_starters)
joined = brown_no_tatun_with_starters.merge(tatum_no_brown_with_starters, on=['DEFENSIVE_STARTERS'], suffixes=('_brown', '_tatum'))
print(joined.round(2)) | 44.515625 | 234 | 0.75547 | 2,176 | 17,094 | 5.491268 | 0.059743 | 0.060256 | 0.076324 | 0.048205 | 0.886016 | 0.873546 | 0.864759 | 0.811783 | 0.794711 | 0.782325 | 0 | 0.01095 | 0.09711 | 17,094 | 384 | 235 | 44.515625 | 0.76325 | 0.005324 | 0 | 0.569231 | 0 | 0 | 0.27511 | 0.026391 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046154 | false | 0 | 0.003846 | 0 | 0.076923 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
087c33db49339ce08b15e0289cd01b84fbbf1dc2 | 5,733 | py | Python | tests/test_meter.py | OKDaG/smeterd | e8307b7cf79684c2b0eb548905f1c2f337812df5 | [
"MIT"
] | 34 | 2015-05-10T20:32:43.000Z | 2022-01-27T23:41:58.000Z | tests/test_meter.py | OKDaG/smeterd | e8307b7cf79684c2b0eb548905f1c2f337812df5 | [
"MIT"
] | 21 | 2015-07-14T11:40:27.000Z | 2022-02-23T16:42:03.000Z | tests/test_meter.py | OKDaG/smeterd | e8307b7cf79684c2b0eb548905f1c2f337812df5 | [
"MIT"
] | 17 | 2015-04-20T09:10:27.000Z | 2020-05-31T20:02:23.000Z | import unittest
from smeterd.meter import SmartMeter
from smeterd.meter import SmartMeterError
from tests import BROKEN_PACKET
from tests import LONG_BROKEN_PACKET
from tests import NORMAL_PACKET
from tests import NORMAL_PACKET_1003
from unittest import mock
class MeterTestCase(unittest.TestCase):
    def test_meter_typeerror(self):
        with self.assertRaises(TypeError):
            SmartMeter()

    def test_meter_tty_not_available(self):
        with self.assertRaises(SmartMeterError):
            SmartMeter('/dev/ttyUSB0')

    def test_meter_tty_not_available_again(self):
        with self.assertRaises(SmartMeterError):
            SmartMeter('/dev/ttyUSB0').read_one_packet()

    @mock.patch('serial.Serial')
    def test_meter_connect_twice(self, mocked_serial):
        mocked_serial.return_value.isOpen.side_effect = [True, True, True, True, True]
        meter = SmartMeter('/dev/ttyUSB0')
        self.assertTrue(meter.connected())
        meter.connect()
        mocked_serial.return_value.open.assert_not_called()
        self.assertTrue(meter.connected())
        meter.connect()
        mocked_serial.return_value.open.assert_not_called()
        self.assertTrue(meter.connected())

    @mock.patch('serial.Serial')
    def test_meter_disconnect_twice(self, mocked_serial):
        mocked_serial.return_value.isOpen.side_effect = [True, True, False, False, False]
        meter = SmartMeter('/dev/ttyUSB0')
        self.assertTrue(meter.connected())
        meter.disconnect()
        mocked_serial.return_value.close.assert_called()
        mocked_serial.return_value.close.reset_mock()
        self.assertFalse(meter.connected())
        meter.disconnect()
        mocked_serial.return_value.close.assert_not_called()
        self.assertFalse(meter.connected())

    @mock.patch('serial.Serial')
    def test_meter_disconnect_and_connect(self, mocked_serial):
        mocked_serial.return_value.isOpen.side_effect = [True, True, False, False, True]
        meter = SmartMeter('/dev/ttyUSB0')
        self.assertTrue(meter.connected())
        mocked_serial.return_value.isOpen.assert_called()
        meter.disconnect()
        mocked_serial.return_value.isOpen.assert_called()
        self.assertFalse(meter.connected())
        mocked_serial.return_value.isOpen.assert_called()
        meter.connect()
        mocked_serial.return_value.isOpen.assert_called()
        self.assertTrue(meter.connected())
        mocked_serial.return_value.isOpen.assert_called()

    @mock.patch('serial.Serial', **{'return_value.name': '/dev/ttyUSB0'})
    def test_meter_ok(self, mocked_serial):
        mocked_serial.return_value.isOpen.return_value = True
        meter = SmartMeter('/dev/ttyUSB0')
        self.assertTrue(meter.connected())
        self.assertEqual(meter.port, '/dev/ttyUSB0')

    @mock.patch('serial.Serial')
    def test_meter_read_one_packet(self, mocked_serial):
        mocked_serial.return_value.readline.side_effect = NORMAL_PACKET.splitlines(True)
        p = SmartMeter('/dev/ttyUSB0').read_one_packet()
        self.assertEqual(p['header'], '/ISk5\2ME382-1004')
        self.assertEqual(p['kwh']['eid'], 'XXXXXXXXXXXXXXMYSERIALXXXXXXXXXXXXXX')
        self.assertEqual(p['kwh']['low']['consumed'], 608.400)
        self.assertEqual(p['kwh']['high']['consumed'], 490.342)
        self.assertEqual(p['kwh']['low']['produced'], 0.001)
        self.assertEqual(p['kwh']['high']['produced'], 0)
        self.assertEqual(p['kwh']['tariff'], 1)
        self.assertEqual(p['kwh']['current_consumed'], 1.51)
        self.assertEqual(p['kwh']['current_produced'], 0)
        self.assertEqual(p['kwh']['treshold'], 999)
        self.assertEqual(p['kwh']['switch'], 1)
        self.assertEqual(p['msg']['code'], None)
        self.assertEqual(p['msg']['text'], None)
        self.assertEqual(p['gas']['device_type'], 3)
        self.assertEqual(p['gas']['eid'], '3238303131303031323332313337343132')
        self.assertEqual(p['gas']['total'], 947.680)
        self.assertEqual(p['gas']['valve'], 1)
        self.assertEqual(p._datagram, NORMAL_PACKET)

    @mock.patch('serial.Serial')
    def test_meter_read_one_packet_1003(self, mocked_serial):
        mocked_serial.return_value.readline.side_effect = NORMAL_PACKET_1003.splitlines(True)
        p = SmartMeter('/dev/ttyUSB0').read_one_packet()
        self.assertEqual(p['header'], '/ISk5\2ME382-1003')
        self.assertEqual(p['kwh']['eid'], '5A424556303035303933313937373132')
        self.assertEqual(p['kwh']['low']['consumed'], 608.400)
        self.assertEqual(p['kwh']['high']['consumed'], 490.342)
        self.assertEqual(p['kwh']['low']['produced'], 0.001)
        self.assertEqual(p['kwh']['high']['produced'], 0)
        self.assertEqual(p['kwh']['tariff'], 1)
        self.assertEqual(p['kwh']['current_consumed'], 1.51)
        self.assertEqual(p['kwh']['current_produced'], 0)
        self.assertEqual(p['kwh']['treshold'], 999)
        self.assertEqual(p['kwh']['switch'], 1)
        self.assertEqual(p['msg']['code'], None)
        self.assertEqual(p['msg']['text'], None)
        self.assertEqual(p._datagram, NORMAL_PACKET_1003)

    @mock.patch('serial.Serial')
    def test_meter_read_broken_packet(self, mocked_serial):
        mocked_serial.return_value.readline.side_effect = BROKEN_PACKET.splitlines(True)
        with self.assertRaises(SmartMeterError):
            SmartMeter('/dev/ttyUSB0').read_one_packet()

    @mock.patch('serial.Serial')
    def test_meter_read_long_broken_packet(self, mocked_serial):
        mocked_serial.return_value.readline.side_effect = LONG_BROKEN_PACKET.splitlines(True)
        with self.assertRaises(SmartMeterError):
            SmartMeter('/dev/ttyUSB0').read_one_packet()
| 45.141732 | 93 | 0.678179 | 679 | 5,733 | 5.521355 | 0.145803 | 0.132035 | 0.13657 | 0.10136 | 0.840491 | 0.798079 | 0.7498 | 0.7498 | 0.688184 | 0.644439 | 0 | 0.03619 | 0.175824 | 5,733 | 126 | 94 | 45.5 | 0.757249 | 0 | 0 | 0.585586 | 0 | 0 | 0.128903 | 0.017792 | 0 | 0 | 0 | 0 | 0.513514 | 1 | 0.099099 | false | 0 | 0.072072 | 0 | 0.18018 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0882d5e02720c0d8aa8fe3eae89845bfe8f6ebb2 | 151 | py | Python | ansys/mapdl/core/_commands/graphics_/__init__.py | da1910/pymapdl | 305b70b30e61a78011e974ff4cb409ee21f89e13 | [
"MIT"
] | 194 | 2016-10-21T08:46:41.000Z | 2021-01-06T20:39:23.000Z | ansys/mapdl/core/_commands/graphics_/__init__.py | da1910/pymapdl | 305b70b30e61a78011e974ff4cb409ee21f89e13 | [
"MIT"
] | 463 | 2021-01-12T14:07:38.000Z | 2022-03-31T22:42:25.000Z | ansys/mapdl/core/_commands/graphics_/__init__.py | da1910/pymapdl | 305b70b30e61a78011e974ff4cb409ee21f89e13 | [
"MIT"
] | 66 | 2016-11-21T04:26:08.000Z | 2020-12-28T09:27:27.000Z | from . import annotation
from . import graphs
from . import labeling
from . import scaling
from . import setup
from . import style
from . import views
| 18.875 | 24 | 0.768212 | 21 | 151 | 5.52381 | 0.428571 | 0.603448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18543 | 151 | 7 | 25 | 21.571429 | 0.943089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
089dbaa2b6f400eefbf64e1162acf5b055d11860 | 28 | py | Python | humble/eclipse-cyclonedds-report/perf_tool/perf_tool/__init__.py | kydos/TSC-RMW-Reports | b9fa8afe0aef554f00e6708f524e27cfdea88232 | [
"MIT"
] | 15 | 2020-12-30T02:31:42.000Z | 2022-02-09T12:07:16.000Z | humble/eclipse-cyclonedds-report/perf_tool/perf_tool/__init__.py | kydos/TSC-RMW-Reports | b9fa8afe0aef554f00e6708f524e27cfdea88232 | [
"MIT"
] | 10 | 2021-08-16T14:46:26.000Z | 2022-01-19T20:05:09.000Z | humble/eclipse-cyclonedds-report/perf_tool/perf_tool/__init__.py | kydos/TSC-RMW-Reports | b9fa8afe0aef554f00e6708f524e27cfdea88232 | [
"MIT"
] | 7 | 2021-03-15T15:17:24.000Z | 2022-03-13T08:00:02.000Z | from .__main__ import main
| 14 | 27 | 0.785714 | 4 | 28 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 1 | 28 | 28 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
089ebe1455e12be04387aaa20abf4a2bfcd12880 | 38 | py | Python | strategypy/bots/unittest_moveup.py | davide-ceretti/strategypy | 37df9569e3a9fc8a0f1487a29a7897db6363c42e | [
"MIT"
] | 8 | 2015-03-03T17:40:41.000Z | 2020-11-08T19:02:23.000Z | strategypy/bots/unittest_moveup.py | davide-ceretti/strategypy | 37df9569e3a9fc8a0f1487a29a7897db6363c42e | [
"MIT"
] | 19 | 2015-01-14T12:07:05.000Z | 2015-03-19T11:53:11.000Z | strategypy/bots/unittest_moveup.py | davide-ceretti/strategypy | 37df9569e3a9fc8a0f1487a29a7897db6363c42e | [
"MIT"
] | 6 | 2015-03-16T18:17:06.000Z | 2021-11-04T23:44:47.000Z | def action(ctx):
    return 'move up'
| 12.666667 | 20 | 0.631579 | 6 | 38 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 38 | 2 | 21 | 19 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
08b6b2c19f06050dbf38276d5198c92f337d9c2c | 45 | py | Python | test_util.py | timothyhalim/Render-Manager | b919a6a2290c25fe7799d661fa7839f99bf0a5cc | [
"MIT"
] | null | null | null | test_util.py | timothyhalim/Render-Manager | b919a6a2290c25fe7799d661fa7839f99bf0a5cc | [
"MIT"
] | null | null | null | test_util.py | timothyhalim/Render-Manager | b919a6a2290c25fe7799d661fa7839f99bf0a5cc | [
"MIT"
] | null | null | null | from db import Utils
print(Utils.get_host()) | 15 | 23 | 0.777778 | 8 | 45 | 4.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 3 | 23 | 15 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
08e346537e676a1a202edc034b01f2c85b4524d1 | 81 | py | Python | __main__.py | TomasJani/news_scraper | b473d45f0d1b09c0aed78a17a70ae1e3b7683272 | [
"MIT"
] | null | null | null | __main__.py | TomasJani/news_scraper | b473d45f0d1b09c0aed78a17a70ae1e3b7683272 | [
"MIT"
] | 1 | 2020-05-16T21:11:02.000Z | 2020-05-16T21:11:02.000Z | __main__.py | TomasJani/news_scraper | b473d45f0d1b09c0aed78a17a70ae1e3b7683272 | [
"MIT"
] | null | null | null | from news_scraper.execute import set_and_scrape, start
set_and_scrape()
start()
| 16.2 | 54 | 0.82716 | 13 | 81 | 4.769231 | 0.692308 | 0.193548 | 0.387097 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098765 | 81 | 4 | 55 | 20.25 | 0.849315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3eafc9415b20ece2f7027e62443bab45b68dfe94 | 33 | py | Python | robustbench/model_zoo/__init__.py | flymin/robustbench | c51d44e5c9f9425d0a2146dbfd5c54d86ea11609 | [
"Apache-2.0"
] | 299 | 2020-10-20T02:24:18.000Z | 2022-03-29T08:21:08.000Z | robustbench/model_zoo/__init__.py | flymin/robustbench | c51d44e5c9f9425d0a2146dbfd5c54d86ea11609 | [
"Apache-2.0"
] | 46 | 2020-11-05T17:16:18.000Z | 2022-03-24T03:42:31.000Z | robustbench/model_zoo/__init__.py | flymin/robustbench | c51d44e5c9f9425d0a2146dbfd5c54d86ea11609 | [
"Apache-2.0"
] | 45 | 2020-10-22T03:47:08.000Z | 2022-03-27T20:33:26.000Z | from .models import model_dicts
| 11 | 31 | 0.818182 | 5 | 33 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 2 | 32 | 16.5 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ebdc2be97a60935b2656fda2c7de934c02b9800 | 59,198 | py | Python | server/apps/physicaldevice/tests/test_device_api.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | server/apps/physicaldevice/tests/test_device_api.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | server/apps/physicaldevice/tests/test_device_api.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | import json
import pytz
from django.contrib.auth import get_user_model
from django.utils.dateparse import parse_datetime
from rest_framework import status
from rest_framework.reverse import reverse
from rest_framework.test import APITestCase
from apps.datablock.models import DataBlock
from apps.project.models import Project
from apps.sensorgraph.models import SensorGraph
from apps.stream.models import StreamId, StreamVariable
from apps.streamdata.models import StreamData
from apps.streamer.models import Streamer, StreamerReport
from apps.streamevent.models import StreamEventData
from apps.streamfilter.dynamodb import DynamoFilterLogModel, create_filter_log, create_filter_log_table_if_needed
from apps.streamfilter.models import State, StateTransition, StreamFilter, StreamFilterAction, StreamFilterTrigger
from apps.streamfilter.serializers import StreamFilterSerializer
from apps.streamnote.models import StreamNote
from apps.utils.test_util import TestMixin
from apps.utils.timezone_utils import str_utc
from ..models import *
user_model = get_user_model()
class DeviceAPITestCase(TestMixin, APITestCase):

    def setUp(self):
        self.assertEqual(Device.objects.count(), 0)
        self.usersTestSetup()
        self.orgTestSetup()
        self.deviceTemplateTestSetup()

    def tearDown(self):
        GenericProperty.objects.all().delete()
        DeviceStatus.objects.all().delete()
        StreamData.objects.all().delete()
        StreamId.objects.all().delete()
        StreamVariable.objects.all().delete()
        Device.objects.all().delete()
        self.deviceTemplateTestTearDown()
        self.orgTestTearDown()
        self.userTestTearDown()
        self.projectTestTearDown()
    def testDeleteDevice(self):
        """
        Ensure delete operations are protected
        """
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1',
                                           template=self.dt1, created_by=self.u1, claimed_by=self.u2)
        url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})

        resp = self.client.delete(url1)
        self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)

        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        self.assertEqual(Device.objects.count(), 1)
        resp = self.client.delete(url1)
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        self.assertEqual(Device.objects.count(), 1)
        self.client.logout()

        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.delete(url1)
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        self.assertEqual(Device.objects.count(), 1)
        resp = self.client.delete(url1 + '?staff=1')
        self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT)
        self.assertEqual(Device.objects.count(), 0)
        self.client.logout()
    def testGetDevice(self):
        """
        Ensure we can call GET on the device API.
        """
        url = reverse('device-list')

        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)

        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 0)

        pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
        pd2 = Device.objects.create_device(project=self.p2, label='d2', template=self.dt1, created_by=self.u3)
        detail_url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})
        detail_url2 = reverse('device-detail', kwargs={'slug': str(pd2.slug)})

        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 0)

        resp = self.client.get(url + '?staff=1', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 2)

        # Staff can retrieve any record
        resp = self.client.get(detail_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        resp = self.client.get(detail_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        resp = self.client.get(detail_url1 + '?staff=1', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['id'], pd1.id)
        self.assertEqual(deserialized['label'], str(pd1.label))
        self.assertEqual(deserialized['slug'], str(pd1.slug))
        self.client.logout()

        # Org members only see their own org's devices
        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)
        resp = self.client.get(detail_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['id'], pd1.id)
        resp = self.client.get(detail_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        self.client.logout()

        # Other users don't have access
        ok = self.client.login(email='user3@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)
        resp = self.client.get(detail_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        resp = self.client.get(detail_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        self.client.logout()
    def testGetBlock(self):
        """
        Ensure we can call GET on a DataBlock
        """
        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
        pd2 = Device.objects.create_device(project=self.p1, label='d2', template=self.dt1, created_by=self.u2)
        db1 = DataBlock.objects.create(org=self.o2, title='test1', device=pd1, block=1, created_by=self.u2)
        db2 = DataBlock.objects.create(org=self.o2, title='test2', device=pd2, block=1, created_by=self.u2)
        pd2.active = False
        pd2.save()
        block_url1 = reverse('device-detail', kwargs={'slug': str(db1.slug)})
        block_url2 = reverse('device-detail', kwargs={'slug': str(db2.slug)})

        # Staff can retrieve any record
        resp = self.client.get(block_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        resp = self.client.get(block_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        self.client.logout()

        # Org members have access to their own org's blocks
        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.get(block_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['id'], pd1.id)
        resp = self.client.get(block_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['id'], pd2.id)
        self.client.logout()

        # Other users don't have access
        ok = self.client.login(email='user3@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.get(block_url1, format='json')
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        resp = self.client.get(block_url2, format='json')
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        self.client.logout()
    def testGetDeviceWithFilter(self):
        """
        Ensure we can call GET with filters
        """
        url = reverse('device-list') + '?staff=1'
        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)

        sg1 = SensorGraph.objects.create(name='SG 1',
                                         major_version=1,
                                         created_by=self.u1, org=self.o1)
        sg2 = SensorGraph.objects.create(name='SG 2',
                                         major_version=1,
                                         created_by=self.u1, org=self.o1)
        pd1 = Device.objects.create_device(project=self.p1, label='d1', sg=sg1, template=self.dt1, created_by=self.u1)
        pd2 = Device.objects.create_device(project=self.p2, label='d2', sg=sg1, template=self.dt1, created_by=self.u1)
        pd3 = Device.objects.create_device(project=None, label='Unclaimed', sg=sg1, template=self.dt1, created_by=self.u1)
        pd4 = Device.objects.create_device(project=self.p2, label='d2', sg=sg2, template=self.dt2,
                                           external_id='abc', created_by=self.u1)

        resp = self.client.get(url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 4)

        resp = self.client.get(url + '&project={}'.format(str(self.p1.id)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)

        resp = self.client.get(url + '&project={}'.format(str(self.p2.id)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 2)

        resp = self.client.get(url + '&project={}'.format(self.p2.slug), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 2)

        resp = self.client.get(url + '&org__slug={}'.format(str(self.p2.org.slug)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 2)

        resp = self.client.get(url + '&sg={}'.format(str(sg1.slug)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 3)

        resp = self.client.get(url + '&external_id=abc', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)
        self.assertEqual(deserialized['results'][0]['external_id'], 'abc')

        resp = self.client.get(url + '&sg={0}&project={1}'.format(str(sg1.slug), str(self.p1.id)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)

        resp = self.client.get(url + '&dt={}'.format(str(self.dt2.slug)), format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)

        resp = self.client.get(url + '&claimed=True', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 3)

        self.client.logout()
    def testGeInactiveDevices(self):
        url = reverse('device-list')
        sg1 = SensorGraph.objects.create(name='SG 1', major_version=1,
                                         created_by=self.u1, org=self.o1)
        sg2 = SensorGraph.objects.create(name='SG 2',
                                         major_version=1,
                                         created_by=self.u1, org=self.o1)
        pd1 = Device.objects.create_device(project=self.p1, label='d1', sg=sg1, template=self.dt1,
                                           created_by=self.u1)
        pd2 = Device.objects.create_device(project=self.p2, label='d2', sg=sg1, template=self.dt1,
                                           created_by=self.u1)
        pd3 = Device.objects.create_device(project=None, label='Unclaimed', sg=sg1, template=self.dt1,
                                           created_by=self.u1)
        pd4 = Device.objects.create_device(project=self.p2, label='d2', sg=sg2, template=self.dt2, active=False,
                                           external_id='abc', created_by=self.u1)

        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        staff_url = url + '?staff=1'
        resp = self.client.get(staff_url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 4)
        self.client.logout()

        self.p2.org.register_user(self.u2)
        self.assertTrue(self.p2.org.has_access(self.u2))
        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        project_url = url + '?project={}'.format(self.p2.slug)
        resp = self.client.get(project_url, format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)

        resp = self.client.get(project_url + '&all=0', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 1)

        resp = self.client.get(project_url + '&all=1', format='json')
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['count'], 2)

        self.client.logout()
    def testGetDeviceWithSlug(self):
        """
        Ensure we can GET a device using any equivalent form of its slug.
        """
        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)

        for ok_slug in ['d--0001', 'd--0000-0001', 'd--0000-0000-0000-0001']:
            url = reverse('device-detail', kwargs={'slug': ok_slug})
            resp = self.client.get(url, format='json')
            self.assertEqual(resp.status_code, status.HTTP_200_OK)
            deserialized = json.loads(resp.content.decode())
            self.assertEqual(deserialized['slug'], str(pd1.slug))

        for fail_slug in ['0001', str(pd1.id)]:
            url = reverse('device-detail', kwargs={'slug': fail_slug})
            resp = self.client.get(url, format='json')
            self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
    def testPatchDevice(self):
        """
        Ensure we can PATCH a device via the device API.
        """
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
        url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})
        payload = {
            'label': 'd2',
            'lat': 40.741895,
            'lon': -73.989308
        }

        resp = self.client.put(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)

        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.label, payload['label'])
        self.assertEqual(float(pd1.lat), payload['lat'])
        self.assertEqual(float(pd1.lon), payload['lon'])
        self.client.logout()

        ok = self.client.login(email='user3@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        self.p1.org.register_user(self.u3, role='r1')
        self.assertFalse(self.p1.org.has_permission(self.u3, 'can_modify_device'))
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['detail'], 'User is not allowed to modify device')
        self.client.logout()
    def testPatchDeviceState(self):
        """
        Ensure we can PATCH the device state.
        """
        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
        url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})
        payload = {
            'state': 'N0'
        }

        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.state, 'N0')
        self.assertFalse(pd1.active)
        self.client.logout()
    def testPatchDeviceTemplateAndSG(self):
        """
        Ensure that users cannot change template, but can change sg
        """
        sg2 = SensorGraph.objects.create(name='SG 2', major_version=1, app_tag=1030,
                                         created_by=self.u1, org=self.o1)
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
        url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})
        template_payload = {
            'template': self.dt2.slug,
            'sg': sg2.slug
        }

        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)

        # Not allowed for users
        resp = self.client.patch(url1, data=template_payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.template.slug, self.dt1.slug)
        self.assertEqual(pd1.sg.slug, sg2.slug)
        resp = self.client.patch(url1 + '?staff=1', data=template_payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.template.slug, self.dt1.slug)
        self.client.logout()

        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        pd1.template = self.dt1
        pd1.sg = sg2
        pd1.save()
        resp = self.client.patch(url1, data=template_payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.template.slug, self.dt1.slug)
        self.assertEqual(pd1.sg.slug, sg2.slug)
        # ?staff=1 must be passed to force the proper serializer
        resp = self.client.patch(url1 + '?staff=1', data=template_payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertEqual(pd1.template.slug, template_payload['template'])
        self.assertEqual(pd1.sg.slug, sg2.slug)
        self.client.logout()
    def testStaffPatchDevice(self):
        """
        Ensure protected device fields can only be patched by staff with ?staff=1.
        """
        old_org = self.p1.org
        old_sg = SensorGraph.objects.create(name='SG 1', created_by=self.u1, org=self.o1)
        pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', sg=old_sg,
                                           template=self.dt1, created_by=self.u1, claimed_by=self.u2)
        url1 = reverse('device-detail', kwargs={'slug': str(pd1.slug)})
        payload = {
            'active': False,
            'label': 'Disabled Device'
        }

        ok = self.client.login(email='user2@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        pd1 = Device.objects.get(id=1)
        self.assertFalse(pd1.active)

        old_template = pd1.template
        new_template = DeviceTemplate.objects.create(external_sku='New Template', org=self.o1,
                                                     released_on=datetime.datetime.utcnow(),
                                                     created_by=self.u1)
        new_sg = SensorGraph.objects.create(name='SG 2', created_by=self.u1, org=self.o1)
        new_org = self.o3
        payload = {
            'org': new_org.slug,
            'template': new_template.slug,
            'sg': new_sg.slug,
            'claimed_by': self.u3.slug
        }
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['template'], old_template.slug)
        self.assertEqual(deserialized['sg'], new_sg.slug)
        self.assertEqual(deserialized['org'], old_org.slug)
        self.assertEqual(deserialized['claimed_by'], self.u2.slug)

        resp = self.client.patch(url1 + '?staff=1', data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['template'], old_template.slug)
        self.assertEqual(deserialized['org'], old_org.slug)
        self.assertEqual(deserialized['claimed_by'], self.u2.slug)
        self.client.logout()

        ok = self.client.login(email='user1@foo.com', password='pass')
        self.assertTrue(ok)
        resp = self.client.patch(url1, data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['template'], old_template.slug)
        self.assertEqual(deserialized['org'], old_org.slug)
        self.assertEqual(deserialized['claimed_by'], self.u2.slug)

        resp = self.client.patch(url1 + '?staff=1', data=payload)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        deserialized = json.loads(resp.content.decode())
        self.assertEqual(deserialized['template'], new_template.slug)
        self.assertEqual(deserialized['org'], old_org.slug)
        self.assertEqual(deserialized['claimed_by'], self.u3.slug)
        self.client.logout()
def testClaimable(self):
pd1 = Device.objects.create_device(project=None, label='d1', template=self.dt1, created_by=self.u2)
pd2 = Device.objects.create_device(project=self.p2, label='d2', template=self.dt1, created_by=self.u3)
pd3 = Device.objects.create_device(project=None, label='d3', template=self.dt1, created_by=self.u3, active=False)
url1 = reverse('device-claimable')
payload = {
'slugs':[pd1.slug, pd2.slug, pd3.slug]
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['count'], 3)
results = deserialized['results']
self.assertEqual(results[0]['slug'], pd1.slug)
self.assertEqual(results[0]['claimable'], True)
self.assertEqual(results[1]['slug'], pd2.slug)
self.assertEqual(results[1]['claimable'], False)
self.assertEqual(results[2]['slug'], pd3.slug)
self.assertEqual(results[2]['claimable'], True)
payload = {
'slugs':[pd1.slug, 'abc']
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['count'], 1)
results = deserialized['results']
self.assertEqual(results[0]['slug'], pd1.slug)
self.assertEqual(results[0]['claimable'], True)
def testClaimSuccess(self):
pd1 = Device.objects.create_device(project=None, label='d1', template=self.dt1, created_by=self.u2)
pd2 = Device.objects.create_device(project=None, label='d2', template=self.dt1, created_by=self.u2, active=False)
pd3 = Device.objects.create_device(project=None, label='d3', template=self.dt1, created_by=self.u2)
url1 = reverse('device-claim')
# Try with project slug
payload = {
'device':pd1.slug,
'project':self.p1.slug
}
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['device'], pd1.slug)
self.assertEqual(deserialized['project'], self.p1.slug)
self.assertEqual(deserialized['project_id'], str(self.p1.id))
self.assertEqual(deserialized['claimed'], True)
# Try with project ID
payload = {
'device': pd2.slug,
'project': str(self.p1.id)
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['device'], pd2.slug)
self.assertEqual(deserialized['project'], self.p1.slug)
self.assertEqual(deserialized['project_id'], str(self.p1.id))
self.assertEqual(deserialized['claimed'], True)
# Try with bad device
payload = {
'device': 'foo',
'project': str(self.p1.id)
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
# Try with bad project
payload = {
'device': pd3.slug,
'project': 'foo'
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def testClaimFail(self):
pd1 = Device.objects.create_device(project=None, label='d1', template=self.dt1, created_by=self.u2)
pd2 = Device.objects.create_device(project=self.p2, label='d2', template=self.dt1, created_by=self.u3)
url1 = reverse('device-claim')
payload = {
'device':pd1.slug,
'project':self.p1.slug
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'No claim permissions')
self.p1.org.register_user(self.u3, role='r1')
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'No claim permissions')
payload = {
'device':pd2.slug,
'project':self.p1.slug
}
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'Device is not claimable')
def testPatchUnclaimDevice(self):
pd1 = Device.objects.create_device(id=1, project=None, label='d1', template=self.dt1, created_by=self.u2)
streamer = Streamer.objects.create(device=pd1, index=1, created_by=self.u1 )
StreamerReport.objects.create(streamer=streamer, actual_first_id=11, actual_last_id=20, created_by=self.u1 )
StreamerReport.objects.create(streamer=streamer, actual_first_id=21, actual_last_id=30, created_by=self.u1 )
v1 = StreamVariable.objects.create_variable(
name='Var X', project=self.p1, created_by=self.u3, lid=5,
)
v2 = StreamVariable.objects.create_variable(
name='Var Y', project=self.p1, created_by=self.u3, lid=6,
)
claim_url = reverse('device-claim')
unclaim_url = reverse('device-unclaim', kwargs={'slug': str(pd1.slug)})
# Claim the device first
claim_payload = {
'device':pd1.slug,
'project':self.p1.slug
}
unclaim_payload = {
'clean_streams': True
}
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(unclaim_url, data=unclaim_payload)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
resp = self.client.post(claim_url, data=claim_payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
pd1 = Device.objects.get(slug=pd1.slug)
StreamId.objects.create_after_new_device(pd1)
self.assertEqual(StreamId.objects.count(), 2)
s1 = StreamId.objects.filter(variable=v1).first()
s2 = StreamId.objects.filter(variable=v2).first()
self.assertIsNotNone(s1)
self.assertIsNotNone(s2)
StreamData.objects.create(
stream_slug=s1.slug,
type='Num',
timestamp=timezone.now(),
int_value=5
)
StreamData.objects.create(
stream_slug=s1.slug,
type='Num',
timestamp=timezone.now(),
int_value=6
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=s2.slug,
streamer_local_id=7
)
self.assertEqual(s1.get_data_count(), 2)
self.assertEqual(s2.get_event_count(), 1)
self.assertEqual(StreamData.objects.count(), 2)
self.assertEqual(StreamEventData.objects.count(), 1)
# Create Device properties
GenericProperty.objects.create_int_property(slug=pd1.slug,
created_by=self.u1,
name='prop1', value=4)
GenericProperty.objects.create_str_property(slug=pd1.slug,
created_by=self.u1,
name='prop2', value='4')
GenericProperty.objects.create_bool_property(slug=pd1.slug,
created_by=self.u1,
name='prop3', value=True)
self.assertEqual(GenericProperty.objects.object_properties_qs(pd1).count(), 3)
# Now try to unclaim
# create filter logs
with self.settings(USE_DYNAMODB_FILTERLOG_DB=True):
if DynamoFilterLogModel.exists():
DynamoFilterLogModel.delete_table()
create_filter_log_table_if_needed()
for stream in pd1.streamids.all():
create_filter_log(stream.slug, datetime.datetime.utcnow(), "src", "dst", "trigger")
self.assertEqual(DynamoFilterLogModel.count(), pd1.streamids.count())
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(unclaim_url, data=unclaim_payload)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'You do not have permission to perform this action.')
self.p1.org.register_user(self.u3, role='r1')
resp = self.client.post(unclaim_url, data=unclaim_payload)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'No unclaim permissions')
self.client.logout()
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
self.assertEqual(StreamId.objects.count(), 2)
self.assertEqual(pd1.streamids.count(), 2)
self.assertEqual(Streamer.objects.count(), 1)
self.assertEqual(StreamerReport.objects.count(), 2)
resp = self.client.post(unclaim_url, data=unclaim_payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
pd1 = Device.objects.get(id=1)
self.assertEqual(pd1.label, 'Device (0001)')
self.assertEqual(pd1.lat, None)
self.assertEqual(pd1.lon, None)
self.assertEqual(pd1.project, None)
self.assertEqual(pd1.org, None)
self.assertEqual(Streamer.objects.count(), 0)
self.assertEqual(StreamerReport.objects.count(), 0)
self.assertEqual(StreamId.objects.count(), 0)
self.assertEqual(pd1.streamids.count(), 0)
self.assertEqual(StreamData.objects.count(), 0)
self.assertEqual(StreamEventData.objects.count(), 0)
self.assertEqual(GenericProperty.objects.object_properties_qs(pd1).count(), 0)
self.assertEqual(DynamoFilterLogModel.count(), 0)
self.client.logout()
def testGetProperties(self):
GenericProperty.objects.create_int_property(slug='d--0000-0000-0000-0100',
created_by=self.u1,
name='prop1', value=4)
GenericProperty.objects.create_str_property(slug='d--0000-0000-0000-0100',
created_by=self.u1, is_system=True,
name='prop2', value='4')
GenericProperty.objects.create_bool_property(slug='d--0000-0000-0000-0002',
created_by=self.u1,
name='prop3', value=True)
d1 = Device.objects.create(id=0x100, project=self.p1, org=self.p1.org, template=self.dt1, created_by=self.u2)
url = reverse('device-properties', kwargs={'slug': str(d1.slug)})
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
ok = self.client.login(email='user1@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertEqual(len(deserialized), 2)
self.assertEqual(deserialized[0]['name'], 'prop1')
self.assertEqual(deserialized[0]['type'], 'int')
self.assertEqual(deserialized[0]['value'], 4)
self.assertEqual(deserialized[0]['is_system'], False)
self.assertEqual(deserialized[1]['name'], 'prop2')
self.assertEqual(deserialized[1]['is_system'], True)
self.assertEqual(deserialized[1]['type'], 'str')
self.assertEqual(deserialized[1]['value'], '4')
self.client.logout()
# Staff has access to all
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertEqual(len(deserialized), 2)
self.client.logout()
# Other Users don't have access
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
self.p1.org.register_user(self.u3, role='r1')
self.assertFalse(self.p1.org.has_permission(self.u3, 'can_read_device_properties'))
resp = self.client.get(url)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['detail'], 'User has no access to read properties')
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
self.client.logout()
def testPostProperties(self):
d1 = Device.objects.create(id=0x100, project=self.p1, org=self.p1.org, template=self.dt1, created_by=self.u2)
url = reverse('device-new-property', kwargs={'slug': str(d1.slug)})
payload = {
'name': 'NewProp1'
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
ok = self.client.login(email='user1@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
payload['int_value'] = 5
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
payload = {
'name': 'NewProp2',
'str_value': '6'
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
payload = {
'name': 'NewProp3',
'is_system': True,
'bool_value': True
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
qs = d1.get_properties_qs()
self.assertEqual(qs.count(), 3)
self.assertEqual(qs[0].value, 5)
self.assertFalse(qs[0].is_system)
self.assertEqual(qs[1].value, '6')
self.assertFalse(qs[1].is_system)
self.assertEqual(qs[2].value, True)
self.assertTrue(qs[2].is_system)
payload = {
'name': 'NewProp4',
'int_value': 7,
'bool_value': True
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
payload = {
'name': 'NewProp5',
'int_value': 7,
'str_value': 'Foo',
'bool_value': True
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
self.client.logout()
# Staff has access to all
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
payload = {
'name': 'NewProp4',
'bool_value': True
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
qs = d1.get_properties_qs()
self.assertEqual(qs.count(), 4)
p4 = qs.get(name='NewProp4')
self.assertTrue(p4.value)
payload = {
'name': 'NewProp4',
'bool_value': False
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
qs = d1.get_properties_qs()
self.assertEqual(qs.count(), 4)
p4 = qs.get(name='NewProp4')
self.assertFalse(p4.value)
self.client.logout()
# Other Users don't have access
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
payload = {
'name': 'NewProp5',
'bool_value': True
}
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
self.client.logout()
def testPostPropertiesNoString(self):
d1 = Device.objects.create(id=0x100, project=self.p1, template=self.dt1, created_by=self.u2)
url = reverse('device-new-property', kwargs={'slug': str(d1.slug)})
# Test that a whitespace-only string value is stored as an empty string
payload = {
'name': 'NewProp1',
'str_value': ' '
}
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url, data=payload, format='json')
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
qs = d1.get_properties_qs()
self.assertEqual(qs.count(), 1)
p5 = qs.get(name='NewProp1')
self.assertEqual(p5.value, '')
self.client.logout()
def testDeviceHealthSettingAPI(self):
pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
pd2 = Device.objects.create_device(project=None, label='d2', template=self.dt1, created_by=self.u3)
pd1.get_or_create_status()
pd2.get_or_create_status()
self.assertEqual(DeviceStatus.objects.count(), 2)
url = reverse("device-health", kwargs={"slug": pd1.slug})
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
# Disable the health check and set period and recipients via PATCH
payload = {
'health_check_enabled': False,
'health_check_period': 900,
'notification_recipients': ['org:all']
}
resp = self.client.patch(url, data=json.dumps(payload), content_type="application/json")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
st1 = pd1.get_or_create_status()
self.assertFalse(st1.health_check_enabled)
self.assertEqual(st1.health_check_period, 900)
# update
payload = {
'health_check_enabled': True,
'health_check_period': 1800,
'notification_recipients': ['org:admin']
}
resp = self.client.patch(url, data=json.dumps(payload), content_type="application/json")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
st1 = pd1.get_or_create_status()
self.assertTrue(st1.health_check_enabled)
self.assertEqual(st1.health_check_period, 1800)
self.client.logout()
def testGetDeviceStatus(self):
pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
st1 = pd1.get_or_create_status()
self.assertEqual(DeviceStatus.objects.count(), 1)
streamer = Streamer.objects.create(device=pd1,
index=1,
created_by=self.u2)
url = reverse("device-health", kwargs={"slug": pd1.slug})
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
# When the device has no report and none exists in the DynamoDB table
resp = self.client.get(url, format="json")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertIsNone(deserialized['last_report_ts'])
self.assertEqual(deserialized['alert'], 'DSBL')
st1.health_check_enabled = True
st1.health_check_period = 3600
st1.notification_recipients = ['org:admin']
st1.last_report_ts = timezone.now() - datetime.timedelta(seconds=3601)
st1.save()
resp = self.client.get(url, format="json")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertIsNotNone(deserialized['last_report_ts'])
self.assertEqual(deserialized['alert'], 'FAIL')
self.client.logout()
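# --- illustrative sketch (not part of the original suite) ----------------
# The 'DSBL' and 'FAIL' alerts asserted above follow from the health-check
# settings: a disabled check reports 'DSBL', and a missing or stale
# last_report_ts (older than health_check_period) reports 'FAIL'. A minimal
# standalone reduction of that assumed server-side rule:
def health_alert(last_report_ts, period_seconds, enabled, now):
    """Hypothetical alert rule mirroring the behavior the test exercises."""
    if not enabled:
        return 'DSBL'
    if last_report_ts is None or (now - last_report_ts).total_seconds() > period_seconds:
        return 'FAIL'
    return 'OK'
# e.g. health_alert(now - datetime.timedelta(seconds=3601), 3600, True, now) -> 'FAIL'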
def testGetDeviceFilterLog(self):
with self.settings(USE_DYNAMODB_FILTERLOG_DB=True):
if DynamoFilterLogModel.exists():
DynamoFilterLogModel.delete_table()
create_filter_log_table_if_needed()
pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
v1 = StreamVariable.objects.create_variable(
name='Var A', project=self.p1, created_by=self.u2, lid=0x5001,
)
s1 = StreamId.objects.create_stream(
project=self.p1, variable=v1, device=pd1, created_by=self.u2
)
f = StreamFilter.objects.create_filter_from_streamid(name='Filter test',
input_stream=s1,
created_by=self.u2)
state1 = State.objects.create(label="state1", filter=f, created_by=self.u2)
state2 = State.objects.create(label="state2", filter=f, created_by=self.u2)
a = StreamFilterAction.objects.create(
type="eml",created_by=self.u2, on='exit', state=state1
)
transition = StateTransition.objects.create(src=state1, dst=state2, filter=f, created_by=self.u2)
t = StreamFilterTrigger.objects.create(operator='gt', threshold=10, created_by=self.u2, filter=f, transition=transition)
t0 = parse_datetime('2017-01-10T10:00:00Z')
url = reverse("device-filterlog", kwargs={"slug": pd1.slug})
serializer = StreamFilterSerializer(f)
f_data = serializer.data
self.assertEqual(DynamoFilterLogModel.count(), 0)
log_id = create_filter_log(s1.slug, t0, state1.label, state2.label, f_data['transitions'][0]['triggers'])
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
self.assertEqual(DynamoFilterLogModel.count(str(log_id)), 1)
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertEqual(len(deserialized['device_filter_logs']), 1)
log = deserialized['device_filter_logs'][0]
self.assertEqual(log['uuid'], str(log_id))
self.assertEqual(log['target_slug'], s1.slug)
self.assertEqual(log['timestamp'], '2017-01-10T10:00:00Z')
self.assertEqual(log['src'], "state1")
self.assertEqual(log['dst'], "state2")
self.assertEqual(len(log['triggers']), 1)
self.assertEqual(log['triggers'][0]['operator'], "gt")
self.assertEqual(log['triggers'][0]['threshold'], 10)
DynamoFilterLogModel.delete_table()
self.client.logout()
def testDeviceUpgrade(self):
pd1 = Device.objects.create_device(project=None, label='d1', template=self.dt1, created_by=self.u2)
url1 = reverse('device-upgrade', kwargs={"slug": pd1.slug})
payload = {
'firmware': 'v2.9.1'
}
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
self.assertEqual(StreamNote.objects.filter(target_slug=pd1.slug).count(), 0)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['slug'], pd1.slug)
self.assertEqual(StreamNote.objects.filter(target_slug=pd1.slug).count(), 1)
note = StreamNote.objects.filter(target_slug=pd1.slug).first()
self.assertEqual(note.note, 'Device {} firmware upgraded to {}'.format(
pd1.slug, payload['firmware'])
)
self.client.logout()
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url1, data=payload)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
self.client.logout()
def testDevicePermission(self):
"""
Ensure device reset and trim endpoints enforce org membership permissions.
"""
pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
ok = self.client.login(email='user3@foo.com', password='pass')
self.assertTrue(ok)
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
ts_now0 = parse_datetime('2018-01-02T23:31:36Z')
payload_trim = {
'start': str_utc(ts_now0 + datetime.timedelta(seconds=101)),
'end': str_utc(ts_now0 + datetime.timedelta(seconds=201)),
}
url_trim = reverse('device-trim', kwargs={'slug': pd1.slug})
resp = self.client.post(url_reset, data={})
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
resp = self.client.post(url_trim, data=payload_trim)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
membership = self.p1.org.register_user(self.u3, role='m1')
membership.permissions['can_modify_device'] = False
membership.permissions['can_reset_device'] = False
membership.save()
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
resp = self.client.post(url_reset, data={})
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
resp = self.client.post(url_trim, data=payload_trim)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
membership.permissions['can_modify_device'] = True
membership.permissions['can_reset_device'] = True
membership.save()
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
resp = self.client.post(url_reset, data={})
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertTrue('pid' in deserialized)
self.assertEqual(deserialized['pid'], 'pid:000000')
resp = self.client.post(url_trim, data=payload_trim)
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
self.client.logout()
def testDeviceFullReset(self):
"""
Ensure we can do a full reset
"""
pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
resp = self.client.post(url_reset, data={ 'full': True})
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertTrue('pid' in deserialized)
self.assertEqual(deserialized['pid'], 'pid:000000')
self.client.logout()
def testDeviceResetNoProperties(self):
"""
Ensure we can do a reset without deleting properties
"""
pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
GenericProperty.objects.create_int_property(slug=pd1.slug,
created_by=self.u1,
name='prop1', value=4)
self.assertEqual(GenericProperty.objects.count(), 1)
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
resp = self.client.post(url_reset, data={ 'include_properties': False})
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertTrue('pid' in deserialized)
self.assertEqual(deserialized['pid'], 'pid:000000')
self.assertTrue(GenericProperty.objects.count() == 1)
self.client.logout()
def testDeviceResetNoNotes(self):
"""
Ensure we can do a reset without deleting notes and locations
"""
pd1 = Device.objects.create_device(id=1, project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
GenericProperty.objects.create_int_property(slug=pd1.slug,
created_by=self.u1,
name='prop1', value=4)
self.assertEqual(GenericProperty.objects.count(), 1)
StreamNote.objects.create(
target_slug=pd1.slug,
timestamp=timezone.now(),
created_by=self.u1,
note='Note 1'
)
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
url_reset = reverse('device-reset', kwargs={'slug': str(pd1.slug)})
resp = self.client.post(url_reset, data={ 'include_notes_and_locations': False})
self.assertEqual(resp.status_code, status.HTTP_202_ACCEPTED)
deserialized = json.loads(resp.content.decode())
self.assertTrue('pid' in deserialized)
self.assertEqual(deserialized['pid'], 'pid:000000')
self.assertEqual(GenericProperty.objects.count(), 0)
self.assertEqual(StreamNote.objects.count(), 2)
self.client.logout()
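# --- illustrative sketch (not part of the original suite) ----------------
# Each reset test above repeats the same three checks: a 202 status, a JSON
# body, and a 'pid' key. A small hypothetical helper could fold that pattern
# into one call; it is sketched on plain values so it needs no TestCase:
def accepted_pid(status_code, body_bytes):
    """Return the 'pid' from a 202 response body, asserting the pattern above."""
    assert status_code == 202
    payload = json.loads(body_bytes.decode())
    assert 'pid' in payload
    return payload['pid']
# e.g. accepted_pid(resp.status_code, resp.content) would return 'pid:000000'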
def testPostBusy(self):
d1 = Device.objects.create(id=0x100, project=self.p1, org=self.p1.org, state='B1',
template=self.dt1, created_by=self.u2)
url_reset = reverse('device-reset', kwargs={'slug': str(d1.slug)})
ts_now0 = parse_datetime('2018-01-02T23:31:36Z')
payload_trim = {
'start': str_utc(ts_now0 + datetime.timedelta(seconds=101)),
'end': str_utc(ts_now0 + datetime.timedelta(seconds=201)),
}
url_trim = reverse('device-trim', kwargs={'slug': d1.slug})
ok = self.client.login(email='user1@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.post(url_reset, data={})
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
resp = self.client.post(url_trim, data=payload_trim)
self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)
def testGetDeviceExtraInfo(self):
"""
Ensure we can call GET on the device extra-info API.
"""
pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
url = reverse('device-extra', kwargs={'slug': pd1.slug})
ok = self.client.login(email='user2@foo.com', password='pass')
self.assertTrue(ok)
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertEqual(deserialized['stream_counts'], {})
v1 = StreamVariable.objects.create_variable(
name='Var X', project=self.p1, created_by=self.u3, lid=5,
)
v2 = StreamVariable.objects.create_variable(
name='Var Y', project=self.p1, created_by=self.u3, lid=6,
)
StreamId.objects.create_after_new_device(pd1)
self.assertEqual(StreamId.objects.count(), 2)
s1 = StreamId.objects.filter(variable=v1).first()
s2 = StreamId.objects.filter(variable=v2).first()
self.assertIsNotNone(s1)
self.assertIsNotNone(s2)
StreamData.objects.create(
stream_slug=s1.slug,
type='Num',
timestamp=timezone.now(),
int_value=5
)
StreamData.objects.create(
stream_slug=s1.slug,
type='Num',
timestamp=timezone.now(),
int_value=6
)
StreamData.objects.create(
stream_slug=s2.slug,
type='Num',
timestamp=timezone.now(),
int_value=7
)
StreamData.objects.create(
stream_slug=s1.slug,
type='Num',
timestamp=timezone.now(),
int_value=9
)
system_stream_slug = 's--{}--{}--5800'.format(pd1.project.formatted_gid, pd1.formatted_gid)
StreamData.objects.create(
stream_slug=system_stream_slug,
type='Num',
timestamp=timezone.now(),
int_value=8
)
resp = self.client.get(url, format='json')
self.assertEqual(resp.status_code, status.HTTP_200_OK)
deserialized = json.loads(resp.content.decode())
self.assertEqual(len(deserialized['stream_counts'].keys()), 3)
self.assertTrue(s1.slug in deserialized['stream_counts'])
self.assertTrue(s2.slug in deserialized['stream_counts'])
self.assertTrue(system_stream_slug in deserialized['stream_counts'])
self.assertEqual(deserialized['stream_counts'][s1.slug]['data_cnt'], 3)
self.assertEqual(deserialized['stream_counts'][s2.slug]['data_cnt'], 1)
self.assertEqual(deserialized['stream_counts'][system_stream_slug]['data_cnt'], 1)
self.assertTrue(deserialized['stream_counts'][s1.slug]['has_streamid'])
self.assertTrue(deserialized['stream_counts'][s2.slug]['has_streamid'])
self.assertFalse(deserialized['stream_counts'][system_stream_slug]['has_streamid'])
self.client.logout()
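# --- illustrative sketch (not part of the original suite) ----------------
# The system stream slug in the test above is assembled inline as
# 's--<project gid>--<device gid>--<variable lid>'. A tiny hypothetical
# helper making that assumed layout explicit (it mirrors the .format() call
# in the test and is not an API of the project under test):
def build_stream_slug(project_gid, device_gid, variable_lid):
    """Compose a stream slug from its three formatted-gid components."""
    return 's--{}--{}--{}'.format(project_gid, device_gid, variable_lid)
# e.g. build_stream_slug(pd1.project.formatted_gid, pd1.formatted_gid, '5800')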
from scipy import stats
import numpy as np
from typing import Union
# __________________________________________________________________________________________________________________
def _nan_proof_corr(x, y, stat_func, result_index: int, round_flag: bool):
    """Run a scipy correlation over the rows where both series are non-null.

    Returns NaN when fewer than 3 overlapping observations remain, since the
    statistic is not meaningful (and scipy may raise) below that."""
    pair_frame = pd.DataFrame({'x': x, 'y': y}).dropna()
    if pair_frame.shape[0] < 3:
        return np.nan
    return_val = stat_func(pair_frame['x'], pair_frame['y'])[result_index]
    return round(return_val, 4) if round_flag else return_val


def pearson_nan_proof(x, y, round_flag: bool = False):
    return _nan_proof_corr(x, y, stats.pearsonr, 0, round_flag)


def spearman_nan_proof(x, y, round_flag: bool = False):
    return _nan_proof_corr(x, y, stats.spearmanr, 0, round_flag)


def p_value_pearson_nan_proof(x, y, round_flag: bool = False):
    return _nan_proof_corr(x, y, stats.pearsonr, 1, round_flag)


def p_value_spearman_nan_proof(x, y, round_flag: bool = False):
    return _nan_proof_corr(x, y, stats.spearmanr, 1, round_flag)
# __________________________________________________________________________________________________________________
def target_correlations(df: pd.DataFrame, _method: str):
    # choose the target feature based on the correlation method
    if _method == 'PBC':
        target_feature = 'Class'
    else:
        target_feature = 'Glucose'
    # compute the correlation matrix and the matching p-value matrix
    if _method == 'Spearman':
        cmtx = df.corr(method='spearman')
        pmtx = df.corr(method=p_value_spearman_nan_proof)
    else:
        cmtx = df.corr(method='pearson')
        pmtx = df.corr(method=p_value_pearson_nan_proof)
    return cmtx[target_feature], pmtx[target_feature]
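To show how these NaN-proof helpers behave, here is a small self-contained check (the function body is re-stated from the module above so the snippet runs on its own):

```python
import numpy as np
import pandas as pd
from scipy import stats

# Re-stated from the module above so the snippet is self-contained.
def pearson_nan_proof(x, y, round_flag=False):
    pair_frame = pd.DataFrame({'x': x, 'y': y}).dropna()
    if pair_frame.shape[0] < 3:
        return np.nan
    r = stats.pearsonr(pair_frame['x'], pair_frame['y'])[0]
    return round(r, 4) if round_flag else r

# A perfectly linear pair with one missing value: the NaN row is
# dropped, so the coefficient is still 1.0.
x = [1.0, 2.0, np.nan, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]
print(pearson_nan_proof(x, y, round_flag=True))  # 1.0

# Fewer than 3 overlapping non-null rows: the function returns NaN
# instead of raising, which is what makes it safe inside df.corr().
print(pearson_nan_proof([1.0, np.nan, np.nan], [2.0, 4.0, 6.0]))  # nan
```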
import __init__
from ..models import oracle
import unittest
import numpy as np
import numpy.testing as np_test

from pgmpy.inference import VariableElimination
from pgmpy.inference import BeliefPropagation
from pgmpy.models import BayesianModel
from pgmpy.models import JunctionTree
from pgmpy.factors import TabularCPD
from pgmpy.factors import Factor


class TestVariableElimination(unittest.TestCase):
    def setUp(self):
        self.bayesian_model = BayesianModel([('A', 'J'), ('R', 'J'), ('J', 'Q'),
                                             ('J', 'L'), ('G', 'L')])
        cpd_a = TabularCPD('A', 2, [[0.2], [0.8]])
        cpd_r = TabularCPD('R', 2, [[0.4], [0.6]])
        cpd_j = TabularCPD('J', 2,
                           [[0.9, 0.6, 0.7, 0.1],
                            [0.1, 0.4, 0.3, 0.9]],
                           ['R', 'A'], [2, 2])
        cpd_q = TabularCPD('Q', 2,
                           [[0.9, 0.2],
                            [0.1, 0.8]],
                           ['J'], [2])
        cpd_l = TabularCPD('L', 2,
                           [[0.9, 0.45, 0.8, 0.1],
                            [0.1, 0.55, 0.2, 0.9]],
                           ['G', 'J'], [2, 2])
        cpd_g = TabularCPD('G', 2, [[0.6], [0.4]])
        self.bayesian_model.add_cpds(cpd_a, cpd_g, cpd_j, cpd_l, cpd_q, cpd_r)
        self.bayesian_inference = VariableElimination(self.bayesian_model)

    # All the values used for comparison in the tests below were
    # found using SAMIAM (assuming that it is correct ;))
    def test_query_single_variable(self):
        query_result = self.bayesian_inference.query(['J'])
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.416, 0.584]))

    def test_query_multiple_variable(self):
        query_result = self.bayesian_inference.query(['Q', 'J'])
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.416, 0.584]))
        np_test.assert_array_almost_equal(query_result['Q'].values,
                                          np.array([0.4912, 0.5088]))

    def test_query_single_variable_with_evidence(self):
        query_result = self.bayesian_inference.query(variables=['J'],
                                                     evidence={'A': 0, 'R': 1})
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.60, 0.40]))

    def test_query_multiple_variable_with_evidence(self):
        query_result = self.bayesian_inference.query(variables=['J', 'Q'],
                                                     evidence={'A': 0, 'R': 0,
                                                               'G': 0, 'L': 1})
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.818182, 0.181818]))
        np_test.assert_array_almost_equal(query_result['Q'].values,
                                          np.array([0.772727, 0.227273]))

    def test_max_marginal(self):
        np_test.assert_almost_equal(self.bayesian_inference.max_marginal(), 0.1659, decimal=4)

    def test_max_marginal_var(self):
        np_test.assert_almost_equal(self.bayesian_inference.max_marginal(['G']), 0.5714, decimal=4)

    def test_max_marginal_var1(self):
        np_test.assert_almost_equal(self.bayesian_inference.max_marginal(['G', 'R']),
                                    0.4055, decimal=4)

    def test_max_marginal_var2(self):
        np_test.assert_almost_equal(self.bayesian_inference.max_marginal(['G', 'R', 'A']),
                                    0.3260, decimal=4)

    def test_map_query(self):
        map_query = self.bayesian_inference.map_query()
        self.assertDictEqual(map_query, {'A': 1, 'R': 1, 'J': 1, 'Q': 1, 'G': 0,
                                         'L': 0})

    def test_map_query_with_evidence(self):
        map_query = self.bayesian_inference.map_query(['A', 'R', 'L'],
                                                      {'J': 0, 'Q': 1, 'G': 0})
        self.assertDictEqual(map_query, {'A': 1, 'R': 0, 'L': 0})

    def test_induced_graph(self):
        induced_graph = self.bayesian_inference.induced_graph(['G', 'Q', 'A', 'J', 'L', 'R'])
        result_edges = sorted([sorted(x) for x in induced_graph.edges()])
        self.assertEqual([['A', 'J'], ['A', 'R'], ['G', 'J'], ['G', 'L'],
                          ['J', 'L'], ['J', 'Q'], ['J', 'R'], ['L', 'R']],
                         result_edges)

    def test_induced_width(self):
        result_width = self.bayesian_inference.induced_width(['G', 'Q', 'A', 'J', 'L', 'R'])
        self.assertEqual(2, result_width)

    def tearDown(self):
        del self.bayesian_inference
        del self.bayesian_model


class TestBeliefPropagation(unittest.TestCase):
    def setUp(self):
        self.junction_tree = JunctionTree([(('A', 'B'), ('B', 'C')),
                                           (('B', 'C'), ('C', 'D'))])
        phi1 = Factor(['A', 'B'], [2, 3], range(6))
        phi2 = Factor(['B', 'C'], [3, 2], range(6))
        phi3 = Factor(['C', 'D'], [2, 2], range(4))
        self.junction_tree.add_factors(phi1, phi2, phi3)

        self.bayesian_model = BayesianModel([('A', 'J'), ('R', 'J'), ('J', 'Q'),
                                             ('J', 'L'), ('G', 'L')])
        cpd_a = TabularCPD('A', 2, [[0.2], [0.8]])
        cpd_r = TabularCPD('R', 2, [[0.4], [0.6]])
        cpd_j = TabularCPD('J', 2,
                           [[0.9, 0.6, 0.7, 0.1],
                            [0.1, 0.4, 0.3, 0.9]],
                           ['R', 'A'], [2, 2])
        cpd_q = TabularCPD('Q', 2,
                           [[0.9, 0.2],
                            [0.1, 0.8]],
                           ['J'], [2])
        cpd_l = TabularCPD('L', 2,
                           [[0.9, 0.45, 0.8, 0.1],
                            [0.1, 0.55, 0.2, 0.9]],
                           ['G', 'J'], [2, 2])
        cpd_g = TabularCPD('G', 2, [[0.6], [0.4]])
        self.bayesian_model.add_cpds(cpd_a, cpd_g, cpd_j, cpd_l, cpd_q, cpd_r)

    def test_calibrate_clique_belief(self):
        belief_propagation = BeliefPropagation(self.junction_tree)
        belief_propagation.calibrate()
        clique_belief = belief_propagation.get_clique_beliefs()

        phi1 = Factor(['A', 'B'], [2, 3], range(6))
        phi2 = Factor(['B', 'C'], [3, 2], range(6))
        phi3 = Factor(['C', 'D'], [2, 2], range(4))

        b_A_B = phi1 * (phi3.marginalize('D', inplace=False) * phi2).marginalize('C', inplace=False)
        b_B_C = phi2 * (phi1.marginalize('A', inplace=False) * phi3.marginalize('D', inplace=False))
        b_C_D = phi3 * (phi1.marginalize('A', inplace=False) * phi2).marginalize('B', inplace=False)

        np_test.assert_array_almost_equal(clique_belief[('A', 'B')].values, b_A_B.values)
        np_test.assert_array_almost_equal(clique_belief[('B', 'C')].values, b_B_C.values)
        np_test.assert_array_almost_equal(clique_belief[('C', 'D')].values, b_C_D.values)

    def test_calibrate_sepset_belief(self):
        belief_propagation = BeliefPropagation(self.junction_tree)
        belief_propagation.calibrate()
        sepset_belief = belief_propagation.get_sepset_beliefs()

        phi1 = Factor(['A', 'B'], [2, 3], range(6))
        phi2 = Factor(['B', 'C'], [3, 2], range(6))
        phi3 = Factor(['C', 'D'], [2, 2], range(4))

        b_B = (phi1 * (phi3.marginalize('D', inplace=False) *
                       phi2).marginalize('C', inplace=False)).marginalize('A', inplace=False)
        b_C = (phi2 * (phi1.marginalize('A', inplace=False) *
                       phi3.marginalize('D', inplace=False))).marginalize('B', inplace=False)

        np_test.assert_array_almost_equal(sepset_belief[frozenset((('A', 'B'), ('B', 'C')))].values, b_B.values)
        np_test.assert_array_almost_equal(sepset_belief[frozenset((('B', 'C'), ('C', 'D')))].values, b_C.values)

    def test_max_calibrate_clique_belief(self):
        belief_propagation = BeliefPropagation(self.junction_tree)
        belief_propagation.max_calibrate()
        clique_belief = belief_propagation.get_clique_beliefs()

        phi1 = Factor(['A', 'B'], [2, 3], range(6))
        phi2 = Factor(['B', 'C'], [3, 2], range(6))
        phi3 = Factor(['C', 'D'], [2, 2], range(4))

        b_A_B = phi1 * (phi3.maximize('D', inplace=False) * phi2).maximize('C', inplace=False)
        b_B_C = phi2 * (phi1.maximize('A', inplace=False) * phi3.maximize('D', inplace=False))
        b_C_D = phi3 * (phi1.maximize('A', inplace=False) * phi2).maximize('B', inplace=False)

        np_test.assert_array_almost_equal(clique_belief[('A', 'B')].values, b_A_B.values)
        np_test.assert_array_almost_equal(clique_belief[('B', 'C')].values, b_B_C.values)
        np_test.assert_array_almost_equal(clique_belief[('C', 'D')].values, b_C_D.values)

    def test_max_calibrate_sepset_belief(self):
        belief_propagation = BeliefPropagation(self.junction_tree)
        belief_propagation.max_calibrate()
        sepset_belief = belief_propagation.get_sepset_beliefs()

        phi1 = Factor(['A', 'B'], [2, 3], range(6))
        phi2 = Factor(['B', 'C'], [3, 2], range(6))
        phi3 = Factor(['C', 'D'], [2, 2], range(4))

        b_B = (phi1 * (phi3.maximize('D', inplace=False) *
                       phi2).maximize('C', inplace=False)).maximize('A', inplace=False)
        b_C = (phi2 * (phi1.maximize('A', inplace=False) *
                       phi3.maximize('D', inplace=False))).maximize('B', inplace=False)

        np_test.assert_array_almost_equal(sepset_belief[frozenset((('A', 'B'), ('B', 'C')))].values, b_B.values)
        np_test.assert_array_almost_equal(sepset_belief[frozenset((('B', 'C'), ('C', 'D')))].values, b_C.values)

    # All the values used for comparison in the tests below were
    # found using SAMIAM (assuming that it is correct ;))
    def test_query_single_variable(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        query_result = belief_propagation.query(['J'])
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.416, 0.584]))

    def test_query_multiple_variable(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        query_result = belief_propagation.query(['Q', 'J'])
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.416, 0.584]))
        np_test.assert_array_almost_equal(query_result['Q'].values,
                                          np.array([0.4912, 0.5088]))

    def test_query_single_variable_with_evidence(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        query_result = belief_propagation.query(variables=['J'],
                                                evidence={'A': 0, 'R': 1})
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.60, 0.40]))

    def test_query_multiple_variable_with_evidence(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        query_result = belief_propagation.query(variables=['J', 'Q'],
                                                evidence={'A': 0, 'R': 0,
                                                          'G': 0, 'L': 1})
        np_test.assert_array_almost_equal(query_result['J'].values,
                                          np.array([0.818182, 0.181818]))
        np_test.assert_array_almost_equal(query_result['Q'].values,
                                          np.array([0.772727, 0.227273]))

    def test_map_query(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        map_query = belief_propagation.map_query()
        self.assertDictEqual(map_query, {'A': 1, 'R': 1, 'J': 1, 'Q': 1, 'G': 0,
                                         'L': 0})

    def test_map_query_with_evidence(self):
        belief_propagation = BeliefPropagation(self.bayesian_model)
        map_query = belief_propagation.map_query(['A', 'R', 'L'],
                                                 {'J': 0, 'Q': 1, 'G': 0})
        self.assertDictEqual(map_query, {'A': 1, 'R': 0, 'L': 0})

    def tearDown(self):
        del self.junction_tree
        del self.bayesian_model
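The expected marginal P(J) = [0.416, 0.584] asserted in these tests can be reproduced by brute-force enumeration over the parents A and R. The (a, r) → CPD-column mapping below is inferred so that the hand computation matches the tested value; it is an illustration, not a statement about pgmpy's internal column layout:

```python
import itertools

# Priors and the CPD of J from setUp, written as plain dicts.
p_a = [0.2, 0.8]
p_r = [0.4, 0.6]
p_j_given = {  # (a, r) -> [P(J=0 | a, r), P(J=1 | a, r)]
    (0, 0): [0.9, 0.1], (0, 1): [0.6, 0.4],
    (1, 0): [0.7, 0.3], (1, 1): [0.1, 0.9],
}

# Marginalize out A and R by summing over all joint assignments.
p_j = [0.0, 0.0]
for a, r in itertools.product(range(2), range(2)):
    for j in range(2):
        p_j[j] += p_a[a] * p_r[r] * p_j_given[(a, r)][j]

print([round(v, 3) for v in p_j])  # [0.416, 0.584]
```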
DRB1_1367_9 = {
    0: {'A': -999.0, 'E': -999.0, 'D': -999.0, 'G': -999.0, 'F': -0.004754, 'I': -0.99525, 'H': -999.0, 'K': -999.0, 'M': -0.99525, 'L': -0.99525, 'N': -999.0, 'Q': -999.0, 'P': -999.0, 'S': -999.0, 'R': -999.0, 'T': -999.0, 'W': -0.004754, 'V': -0.99525, 'Y': -0.004754},
    1: {'A': 0.0, 'E': 0.1, 'D': -1.3, 'G': 0.5, 'F': 0.8, 'I': 1.1, 'H': 0.8, 'K': 1.1, 'M': 1.1, 'L': 1.0, 'N': 0.8, 'Q': 1.2, 'P': -0.5, 'S': -0.3, 'R': 2.2, 'T': 0.0, 'W': -0.1, 'V': 2.1, 'Y': 0.9},
    2: {'A': 0.0, 'E': -1.2, 'D': -1.3, 'G': 0.2, 'F': 0.8, 'I': 1.5, 'H': 0.2, 'K': 0.0, 'M': 1.4, 'L': 1.0, 'N': 0.5, 'Q': 0.0, 'P': 0.3, 'S': 0.2, 'R': 0.7, 'T': 0.0, 'W': 0.0, 'V': 0.5, 'Y': 0.8},
    3: {'A': 0.0, 'E': -1.4299, 'D': -1.6154, 'G': -1.3514, 'F': 0.61964, 'I': -0.21609, 'H': 1.115, 'K': 0.43588, 'M': 0.79098, 'L': 0.282, 'N': -0.023648, 'Q': 0.22998, 'P': -1.4805, 'S': -0.69332, 'R': 0.23013, 'T': -0.88127, 'W': 0.76658, 'V': -0.66199, 'Y': 0.056088},
    4: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    5: {'A': 0.0, 'E': -1.6245, 'D': -2.1015, 'G': -0.87712, 'F': -1.3294, 'I': 0.5177, 'H': -0.39434, 'K': 0.46119, 'M': -0.94245, 'L': -0.093265, 'N': -0.1681, 'Q': -0.57202, 'P': 0.38533, 'S': 0.14989, 'R': 0.23318, 'T': 1.0456, 'W': -1.3072, 'V': 1.1282, 'Y': -1.4177},
    6: {'A': 0.0, 'E': -1.0686, 'D': -1.7567, 'G': -1.643, 'F': 0.47283, 'I': 0.024902, 'H': 0.31817, 'K': -0.06064, 'M': 0.44581, 'L': 0.70203, 'N': 0.27251, 'Q': 0.21386, 'P': -0.68055, 'S': -0.64527, 'R': 1.1984, 'T': -0.45262, 'W': 0.70859, 'V': 0.0096356, 'Y': 0.24125},
    7: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    8: {'A': 0.0, 'E': -0.54182, 'D': -0.78869, 'G': 0.1478, 'F': 0.55352, 'I': 0.43948, 'H': -0.38613, 'K': -0.2285, 'M': 0.82817, 'L': -0.20101, 'N': -0.73258, 'Q': -0.073797, 'P': -0.48481, 'S': 1.0175, 'R': 0.22077, 'T': -0.6178, 'W': -0.99494, 'V': 0.11956, 'Y': 0.066112},
}
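A position-specific scoring matrix like DRB1_1367_9 is typically applied by summing the per-position score of each residue of a peptide. The sketch below uses a made-up 3-position matrix; both the values and the `score_peptide` helper are illustrative assumptions, not part of epytope's API:

```python
# Hypothetical 3-position matrix in the same shape as DRB1_1367_9 above;
# the values are made up for illustration only.
toy_pssm = {
    0: {'F': -0.004754, 'I': -0.99525, 'A': -999.0},
    1: {'R': 2.2, 'V': 2.1, 'A': 0.0},
    2: {'I': 1.5, 'M': 1.4, 'A': 0.0},
}

def score_peptide(peptide, pssm):
    """Sum the position-specific score of each residue; large negative
    entries such as -999.0 effectively forbid a residue at a position."""
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

print(round(score_peptide('FRI', toy_pssm), 4))  # 3.6952
```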
import unittest
import sys
sys.path.append('.')

import pycoind

import binascii


class TestScriptTransactions(unittest.TestCase):

    class MockTransaction(pycoind.blockchain.transaction.Transaction):
        '''Script only requires the wrapper to include the raw Txn. Many
        operations will otherwise fail on this object, so do not try this
        in your day-to-day life. Badness may ensue.'''

        def __init__(self, txn_bytes):
            (dummy, self._transaction) = pycoind.protocol.Txn.parse(txn_bytes)

    def setUp(self):
        pass

    def check_transaction(self, txn_bytes, pk_script, input_index):
        txn = self.MockTransaction(txn_bytes)
        txio = pycoind.script.Script(txn)
        valid = txio.verify_input(input_index, pk_script)
        self.assertTrue(valid)
        # with self.assertRaises(exception):
        #     self.assertRaises(exception, script.script, broken

    def test_hashall(self):
        # Block: bitcoin@40094
        # Txn: 854a3aaaee36be32e441cc8f86890b7eebe4710f30003dfff362cc2d90d163d4
txn_bytes = binascii.unhexlify(b"010000004b0697346921ea5687bced8310faa75d95e531540e7c809444473f1a2f2db4ff610000000049483045022029f00165de21b00dd70fb883f3fe1c22a757b33db230bab2893c5ab4ecf60b6b022100c648aaf0538c3db5adf5f3c00fbf2d47fa894b6072c8b7c26c37167d7754a94401ffffffff0959ccc39b6791cbc632212ecf3b76cf8b49ebd05f936d2c00fae9947013544500000000494830450220659e8a8c1b0738750f612bf03f2c04933841cbd150fc3d3a07f815d492cc7fd6022100b4906c00755d30d99b6263d2ee8927152ba701565de49f26f9d5e7581b2a80a701ffffffff0b774ca6b9479f048aaaf02fc431794d51e294c53e24721c32d2d714080490f6000000004a493046022100cde005d2f60eea1e5ff2247a9b1ce0b70176e2e67a01af88432fcffc58eced39022100d3201d7afa2a114226096eacb85c51f38fb53255b079f61f4062d769901fdc8601ffffffff0cc8cd5a205da590f8b47a68840f15e23ee6825601a78faacb7a1d7c06c0995d00000000484730440220529a790e3dc0832b5f06ec5a942d8dc060cb3017fd632979a7b5a80806f6e08702201b3bf209083ac0e575efbf46547c59cf725a79ca6aa0696a9764b9453f5dcc4501ffffffff0fae5b65a35ab5aa4e80dc2a510fb75ce3626935a56ddea60f5d7faa817e8201000000004847304402206d7b259609efe99d9ff2257a18d86d8bd2eea1210b71753777cdd344ce27690d02205f35e088fdd43383744f441c6e59c2089f9203dda3633dfaf9b9a35404c3a68301ffffffff1262f363537de6721216cd3eb02176e0b9fb5abff7b1ff22f50bbaae1a809da7000000004948304502207d7374a25a7db60dce4d557f0af9d6232ed036737a3a2081eef856c2b74da6f1022100a4c412d7cf44e8181a0bdaa4bc6eef73f8d2c754066dc8be5766f37f7323a37d01ffffffff13c20a55b8ceeebfc1c4512e15aad65cd71f82ffaaa606286273862301d1c6af000000004948304502205cfc865f2737d1f05687b21bde1c08e71a926ac50853733fbe0122b1600172fc022100bdf4b75d13426efd2ef1d16780dd8036578d74f3f003d84e80ff76b8a9ed266601ffffffff150685bdd265bb33204db07aea51d37b3997231414b0db2939cbde46ef43d343000000004a493046022100e933d0b486b4ddbf87503314170708d07650b92dea7d87d32229239d911e49b9022100fac091b42ff57f801b2817be61003597f034bbdb7e79ca6e3bfd0332342bf94701ffffffff1615e7d54f56dc9a3ea4aa9cfaa212dc5568a54803aa69c0c8f37d84bf3e62360000000048473044022054dced6116c86719ea2fbebbef1a4548736628a279fe1e813
bd1d0a073ef3d9802206b515e10d04597bc68b8fbd01f23c37c8c858f85db1f47e1229a2b6cbae3fb9901ffffffff1714df487cea7d9501d3e3761481bdc1a0032e442c7ddf2a98f70bfb9e7d93280000000048473044022071e62832a33825ed04abac116745012e498e9cc36928a3af7c1543acccfb2c4a02203f156f293928968b40c84ca158ceda0cec2a2437892dda47bcd6b12e660482d101ffffffff1a7f8205088860312ca9e6de5f9b764272df936e77cefdd53cf4370987b16d37000000004948304502203356dd6dc4c77c6959a520fbdcd4786407b043a9b782187a83388806e4267e0f0221008659ec4f9489fa17b539155a9d9bcd8fd88715e080d150673a160cc0798462a101ffffffff1c8599c4a634d8d2c0eb2fc23224a7c996210324791710ac499990227203869f0000000048473044022023a5d48b8e5ab938f66c2aed02a39af402d49a29c491d802832c6fa08058ef2502205900b8ec5e7eb2eafff392b71bc8945dd96710b24baa4f51735e42faeae0019801ffffffff1e34818b090787139970f9fb9bc8febfc6df665b53e29abcf46b262b2c5fc9b3000000004847304402207b3a47ed4fccea6d0110d8626d54aa54e8104bdd05cb3845c46b45ee9572fde602201e2cbaf237d993b6dd84329591ca4fb666272ec93ece5e81940c3914b976a8d201ffffffff1e7c41586649d6c7960a09d85d414a183ae44a39b7dae477e78f7107b21e3813000000004a49304602210080b44eac9f15997e14ee13142570b3b912ebd662b19b1c12d6b599ff168c4a6702210091c54bd572e94663561a4349cb67c9c4735e417210acd822394516738c61e89801ffffffff224da22f0ab20f05b8c1ba24f66e0bb1ec639eeb56edd71c4585faa034cabc740000000049483045022100af331b3e2459137ec5329d8fec95a19aa0afb0b837dc58b7c81841ee9ac37924022006a46bb4013b2436497d99b84f17ee117d186a1760cc968e3b68237aed6c8c2801ffffffff230378c6d546117caa4dee9d5857af137f1f3cac445230aa245d70f962cb1b7e000000004948304502204a93ecc8bbbc64a83fcd39e480de58b0518a66ab0f5127a90d36798f3cdacf5f022100b057b3e19b6e994eb9d6a66ecc3d3a57a45572f3bb44df6378129c876a7e371701ffffffff23277aba7937f6e3b1c37b5b092102ba13490f2282515d12beb47c490c7f114100000000494830450221009b1b22670807ea036c8af1470c670cb3e65696594316c62358e48c5d3da9d8bf02207752757d4fb446e1176cb930ec5c356d6f3cde6ad6b47f698cfc2c98a96575fc01ffffffff24b1b0fea16958138bf0b76bb5770084f2a29d726754fa07c1cc16435d3b015a000000004a4930460221008
cc044b6ea4d16cadc940d34fdd69cc2720173f3616147302812c72b73e9c232022100978258049c8debef255a7bb481c528346335889529e74c4b6fb635a3ae0f1a7d01ffffffff29ffd3ffdfd02849ac49cbab295a1309031c4a01bec9b8c1c5f2cdea0c5fcbde000000004948304502207286a6115e2932927994a21688023823a34aca2a2246cec12d9ba9880c2a004f022100fae35b479ade02c46bbcacd2c3f1395f1ef80d46e9f4c7601eb97adec765b33b01ffffffff38ed25a059795d975c80b1fb6d5a26002ae05074f1a90809df8dfdb8c585ecce000000004847304402201a687e423c03dff911d920d55c332dd61769f098526bb3430813f3ea01d5592902200b4e36e0f8292b6a626abbeb68d7e0c4599f81e29970056e044f8ef3269e51b701ffffffff3b9c2dc9eafdc3baf34a2161d7a1d199c8a4b3792d7175f8218f6b38d061e76c000000004847304402205c8cff4dc2725dfd160fb8ddf2c3955255dd29a4ce5491a40e0a29a547f22d3d022029ebe5350fcac49d85de0589a7aef38e113929fb0c9cab09d36a33ac12f4ea4101ffffffff3e5881e77873572ff51eeebee7460313a7ebe3c3080018b0d7db94a13e4efa8f0000000049483045022100ee24c665bc4b162aad8c78d49ca3295cb67b8c67dd0f6781fcb8788612a3d11902207d96be246124f4fa8e57343b284b9c6e525d8621dde432218741ce3771c6ced801ffffffff5268883eaad594a0f2f87698c15681acf6eae8c995062abc7c54b4b3547edd18000000004847304402203386f8417f126cb7ac5d4c5917ec7573efb8f5be52d62fbe62eac0c9285486b902206e404c7023dea0152734ce921c3488e13cc696b5ee520e9f12ce1bd5eb9d794201ffffffff552d3e1ca375f0b6e67e8b9a978cd09c2616db6728f861419dacaa5efef4e2f1000000004a493046022100c208f4047779d21da90256cecbb73d7c8a3c2543d1863bfdf766c30320b8be900221009e8a48541306eec6ba25801bee60707f866114b6e3567b7f0346b2e2c2e45efd01ffffffff5d8ec526e652f0fd4f918a998f5e78cd5c98d87a3f2291fef891314e34a5152a000000004948304502203a07181ea3555b1f78a7ad8388ee8bc2a8af73e5054a7bcb2785f15629129abd0221008f28019dbfe0f76d10ae15fdbcfc43b3ec7b4a54e8d1583324dec989ac89101701ffffffff5e523d3835e1fcb23934311e30d3dd081828e9fdf685f6b3664548de9e346f4f0000000049483045022100a899427352aaec1dd8f37096a4b36382653ced3b1ab7ab4aaf57b067fab537fa022049ae60d4f117f1a7962ba3fdcef24c2803cd8601085187bd2c7b4ff5302bfb2e01ffffffff5f57a24c69abda8a11c042baa47df6ba1bed6
7dbf87078344320d7364188af290000000048473044022017b7f977d071b38a2d135539fbbd110ecd5a465e56c0c7f7e7385e260dc40cdc02206421f73b4a7af046c824634c1976ffbac50c298e867f51f107f2262f7f42d8e401ffffffff609bdc14e7033ebc3c18e4beabb8a056e9ed5baa42519a64508e4e1ab95fe6c9000000004a49304602210098ad4f910cd06511452333425aa3e38e2c8b453aa69af29da110aa050ee9f61d022100eda9f658e850880e7f88622a046c50a1b5e012c2b26fa26b0775af5fcaac2e2a01ffffffff60dc32e2b33afd5aeb0a9b0f965f845228cf76c447117a398a794074f336b440000000004847304402201c999aed9203a7e15082b9372108d22ebff32c83c368ec326172c52a8c4707a902207d8d72a2cdda850ea357d6dd5fa92e4e972316ae4a59ffa395c39183310fc34f01ffffffff65443e8163ea3be8d0b76fa674062072ca72b954bf291a20c66567d299477710000000004847304402203e3f54ac274ac5a9e148d184137b16e35e6a132efb61351f2acc1e9c9bc8a02802207cfd397b8bfe0891f7f780e22cf7edfb713eb9573f4f5b7bb7087d6ed9fb278d01ffffffff7470e7e35cf5cf28a0a243c9ba533b4c3aee28b6edd4dabd7e90c19fa6e0686c000000004847304402206efcff007e153c690fa1895a3fdef4affe76217b81e9877fe4de237d0e04eaf5022049ad7cb8e28fac7fae0c00345da3025cb02cc404ba27a1f9b3680c03decae38d01ffffffff74e4172f9dcb6648d525274f9f097ad3ce84cb0f7ec6b4ee7483386832b40bfe000000004a493046022100ffb8e9d9944e49612b10156c8cea119a4fda907d7d45660dd40287b83d42be0e022100e2dc38c6b3c88d212c6eb27ab40d566f8729681219119893f47850c24633d97601ffffffff81d0094f9df66a4ec7b161e03ca3ad2d14360c7ec45b8f1c87ccee1c0104adc100000000494830450221009e592fd8af5a6f788719a3c2177ad16b6ee64cde235447612a67271f6737ea430220344be58237e10a89ab3ac202cd56d33cfe249ff672edf7a2a5ec1bf010155b6201ffffffff88c7e32c75414002706325021ed94993374f315143cc5075c486825cd72a3944000000004a493046022100dad49cee9d079c3b1dd2ac669dfddeaccc95110e080eefc22f70e4ca6da5162a022100fa5cf1a9a1ae4ac7a3ef62e6cf8603d04060d67193f255c90b3da74054c0386101ffffffff8fd07c10670ea209e4033c25ceffb507a1d7ee99a5785f89c471fdc235503839000000004a493046022100c63eff2fc4e1e24465e60a4ffa0e3a9d61b609b6b00ac76f96233cbc01a4a83d022100b19bb63c043aad0ad19334f4a34e9bddcc2b7d444285872e2701611a91b
9f8ca01ffffffff975ebe7a655827d8427975bfb858764586cbae2c3db49bc027a2d58db4ef6151000000004a493046022100f9e47edd2594528be0d37ce4cbdb978e3cbe112545e7b6d5241af4a8ad36518b0221008caf6e40abc9178c1bf74211a3acc815fa16eb854ad799835372ae21e0614e5101ffffffff9d836ba5bf3703ad883ef96e6f22d067471666b181c37dc3f6eac0df0482324f000000004948304502204b7c09f999bb9714c9e4e376c8954d4e6ccf7f98287079dc6acea0545c7ca432022100f19aacd6890899b229c92bb4c4c1984046c24ba501dfe6d121d192d11a05977801ffffffffa399eb1e8c3fa73097e83588027cad6471981ba32e81f1a3a3929c96fabf4e08000000004847304402203a771bffcee86ab4ef90c0160ec86cff8f5f597e2b6cd316ddaa41e3a7c1c78102205b998f2a37d4bdebee58defcf278cd4f8ed8acf50a452e930fd7f862cd7e6a9601ffffffffa5b4f53976b48b1438861f302132f7c569afad08a3602d2956b751c8c9ee53580000000049483045022006cb474bdb4214b4ddcee2a78f27f9e5abb6fea446fc784c066a0a0f27ea6366022100af047920e9b66b0436e3c21d2892dbbb8539ad3c42707133cf40a8a664d2dc9101ffffffffaa24b709aac5db1487ecf706f815d2520be4d434b4c77eb8b2ac372e6ce38a6a00000000494830450221009440f5d9551cb5844dfe49c1e21a195ea040552fdcc47ca710cb2a81ba52233d02200bfccc8dd3b9306eb677c7315be283f8192661eb2923488e195076c7a67e744801ffffffffb2cb8c8211359d959e0f5480307c46931c5dd387b84aa03910c979d2cc1fb13e00000000494830450221008584a5fb61da4046b51e06d7ef064038cab8d1ad72edac28c2ca0f2a2bce95df02200eb0b5c98acfda989add184e63bbb37c3c0a3492c23a1aa177e7e72034cc0efc01ffffffffb352d4eca00bf730e38ee04a4d82d3a87c9405e2a31eff01963a07e124a816670000000049483045022100d4cd7d516a453b05d8a4f1fde0e373793421b424dfe8730f256ec8d5af70937b02201787384b54e38344f63294013ad7478d9b757666af539a2afd5cb63003382b1e01ffffffffb5d24fa6b63e49a222dac88759cf30769ae1f68e8d2bfa50fafb70ac7090c10f0000000049483045022100fc255ef1172f1ed7ba81c96550634896f92a8714ba16d371006677f4a335a65d02205bcc5eeef0e9f6b419cc21e63a9eff215492f5445e6b4561e51a408f91ff840201ffffffffbc2dfc8ca5dbd986ee885e22e2da0d34e3937b041dc728aea6bac81e6166fe550000000048473044022007334226a2f4cc2a2124285113105fc0616a4c0e6f2a55d7cfdc68c5cce12e3d022058b43bd4f
21f0b3976b32e51f37a7fb2614058f4247c455dcdf7f590f7abc3b101ffffffffc1a24496db8f100f6b40a830bdba2fa6d3c664801c5b91869e703ce6650c07bb000000004a4930460221009741b37c820bbb728d51f702033d7667da8c96b305720197b475d5c003a32342022100eb592e6dc24cc6b23679136011562439e8ebb65fc8f36952b79dac8829394b3e01ffffffffc4f3e8da7dbbdd8a6055257ed5dddc7c65ee31006504e3262816ed6edb57894f00000000494830450221008a0ec8f35757a172d3d65a213e463952e02c69f6ee03252eb709a28b8aa1b5ac022072f76264153818eee0268e568762b395b1fd9276c9f4574226bd918b30f95e9a01ffffffffcc004cbaf93d38f4e9dd8273522bdf0fd4a0c13c67ac7e29009a0ec16d554d7a000000004a493046022100d848fb8db19ba3e9106366963ee29fb21b054203af60be82236d996c357d8b9e022100c1f79a46e93bc7c449b3faa78ec5bc05ed91afab1ab07921c762a45d8e1b7cf201ffffffffcf86ae619331bc893f565aaf7bc35c70b4215ca23ea1fc8cae54584475c6eb8d000000004a493046022100e703652bb44d8808c8697dc570f3d34fe571fdc04d9bc18777689051ad95acb1022100a403b82ef246c30e0610f6eaa6438ab3c546adb45ababba6924e698147c02a5901ffffffffd1027667d0e6cf32be29adcb3565038f8e663843b1e118ae757eb7bcc665f4c1000000004a4930460221009afcb71dd4cc6cb33b759567fe1a153f061b47d031dcbbc871aa4790c1c78977022100f202070472a3e9d74e7f982c6a2c60a1a93ae15f646f0e33109d10a87f79091b01ffffffffd15d885ccc7031a94ff615b0bc868816dbc026d7288169da1e2c08c1c752c8100000000049483045022004674d8e6f3859139c5cf257baf53d82c5bb4679a15276f47e6ef848f25276e0022100c8586f460c1099dcc103366824e0c6e13d346cb9d36761a67ed17776def6ba6301ffffffffd5a2c839dd5bcfe59772662b1f5d18658d1f2b8dc63727b6e9391354f8ffec230000000049483045022100905112701ab2b32ced284a9f738b15e52635dbd7f64f0ddde57e2b13643b3f3202200eae12a4bba497ecdb094c595214206be8d388d6fef45b611f981102cc92712601ffffffffd9ba934e1557347b14973762fcdd6b66c34fb72323d528fc4b74b3b07ad6eca0000000004a493046022100d0eccedd5755339cf551a92115cb1a09faeaaf17a6b00feb3a5dddd1158720d3022100c5c4c405c3a86434531a90b1b56518dd7ef98d6279ebabb064ee1d3955d9ad0301ffffffffe32a19ac5b70771ec2ea2c0a7bbf2fbb5cd242b6b2cf81fb6995a601c7e5a3a8000000004948304502201cc079a72ea06c0a0
c2c922f0fe53063353be9875bf97e2e382052ca81833b70022100d8f2453b88247cfb866181dc508f0c4777c748921e814bd837ddde1a51b19d3201ffffffffebdb08e2129635b2fc22c723d1734039325db7c197a04da7d0fe438349b7d1b200000000494830450220654832a5845596290c0c85aa3bcb569b0ef20f61639e445d5030f7b66faa9f6b022100f8bebd344c4c919fd4bbcaca09e0f31f61219ba2945823e340b6677018e7aaed01fffffffff52a2399f4ec01410d66ee3e162325483d3150120b3d6159aaeddcf8d6754f7c0000000049483045022100cb363d4b4825576118dab26ab4898277cd69fc34a7b625a164713dc7eb15789a02201a290c6ccf26cc5b6c45fe183eeec8d995e2cdd9cb07c287d58635e14d623d6b01fffffffff55feaacf0a4ee51ea68fc17c9e218d8e506a21b3d9e3d3371ed0030a9bbb4ed0000000048473044022068f08b70a3cc18a5ecb35e33829a9014f18dfa9d30862f2bb9c7643f05b8bdc602207c8d29d727f63defeb6a117f4ec8da1c5daa6e0ebd107c0dc98c48aa8ba47e0801fffffffff6fa045a3056f9faf35cbc8181f340355a5ef86c7e422aa7b338aa27dee6818400000000484730440220300025c6bffe6e06115cf35f887cecdad49d98e4846499d4fdc86ae36e5a5c230220768f31bf5a4769a4bffed77963c13b92cd6d593f79213d4c38b963365b75c85501ffffffff1ce5cc90ab1a2d501565d806f17d4effecca94be89d1995aa34948216f69050a0000000048473044022004f4ba08c81653d860ccc1895b1c88cc2430dd49e6f2c3589f879325862adf3e022036e56c6da1d81eaead5ad4638173e3eb02cb3e07b51059cd7b435fef7514878e01ffffffffce2397dc5bb87a83dda56245e2ef4067238c7035871c83c82947edc6896ab90d000000004847304402206d0deb5fc7da46cc1dfa4e6f2b3a71b00992129f13ee773a43ebf18f982f164102205303232e7a5185618081b48fd726d663338b3e99cd231dc33bc454920773be4501ffffffff272a100f2b66b9bfea6dbc51d42857242e2b205bc25db0771d7d87f296320e45000000004a4930460221009e6e9055e3d9bd42ea9eb18bcd59657faf788e2590084e6d896895667573d911022100ee01743b821a2f5f480a15bbbc2b1b11dc72089db25e9c0fe69c91bf55943c7701ffffffff0e574f847913bea618afe92da0f562e3da5f40ae6cec4551fc306bebfa42ca9b000000004948304502210091f51bb7b4f52ee835bad4bf4c6c65521a496e61c2e1247a618a0594b51176a702204947b4de85ae4865bd005873de8d016d801beb5f944d83b73c7b84570512b2d301ffffffffc2f70bf9367276d712ddfa669a6b218d959aa1cedc332ccd87276cc
1a7fcb20e000000004a4930460221008248753a8703d1d769b1c1fc1ce699d47ef8a9f43d7e68752859da3f1f4a7857022100910b0215890c30d99099efcc5fdf1f785c3e91b50fd1b4d61660b84aa196530301ffffffff7d680d42fc9b7f7df4c791ab2f289526293e25396ba6ce80d04960152dc7588a0000000048473044022054fee6c1e7d34209f91d15db34f7b5f5e47738facc3903326e162abfbaccc8c202201411aca9ea7d0533dc6a3100e1294156330b2c1056e429397593a2b2be19680301fffffffffde96321d1c586f9c63890b2da765bd4ee608e4cc4082dad47b27392a878fc7a0000000049483045022100b4290552c040f88409a97b7d50d16a4b7ba75971bdd98423d30ef33ddbe7c40902201df1526fdead6c2a9ab8bab70677c61bb69e140cd31ea5105e13526b9b9dd6ed01ffffffffaa78ddd4007309b0253339bdf087741bd1e08f933b653c7ae2f8e633f32aecd40000000049483045022100f8a5b4878e36093844408e25702b308ae06da4e4e01d5115a947dde01a97402102205d42fbb36dd19c0a1218ec829095824043c068d0e03aefaadc578529a70b7a7001ffffffff82736f7cf3e775d8b6fba88706c72abf2a4d27a089a47ad8cb44d7f9d3c2608a00000000484730440220094482402c276cf81f830dc4b6c252c3c635f139be7e5107db0c6cd26e46f8ff02206e358d0d949e8056e62ac8cf5466c504c40bb04ec893406eca408556a77028f601ffffffff3b5f01d2069b524c0c65099eba52f63a7778da51571fb6775a4e27745884a7b00000000048473044022003f926d29b95a7f5ba6fc50ce1a7696c0f6a294bbf87d0684117a3b766da5aa602203363f399e10d3e1bd286ba19745ce27914f9831777388756ca5bdb29caca5b1001ffffffff42ed235dad96ed2e0507431b71901f4ad105570a9777ecc2839be89ca309d0bf000000004a4930460221009d177b68ea537dd4dddce265880ec2c37787ac03571ff573e361d0f46f6820e9022100bbc16b7c1f8cf0be9033e5788f75a4517e395a2f302f5a84cf33a6e56248b93001fffffffff2980bc4f40e5a3192b67880e5104221e4b30b8a135612fcc540c3524534247e000000004948304502205f369bf979158971eb48e4884558b63810b0baebfb3ce0658dbc4daa00530b08022100a56d896b97865df29b23b1c71e09d31b12a6b54f7fe735a9895ac86c1ade3f6c01ffffffff8a228564c029d6e08ea5bff6af2d9ed92a03e6c2a135918ea0e7fda2f8b416800000000049483045022100d0fd3f441eccf5519f096d64fd14a8131b2a74b4e573a1d57acbe7e73bca867f02201339ef61fdb1568d36ffc5a9dbe235352d38caef66a0f8a6f5beac52903b821f01ffffffff6862d
8d5b4b35d447f803b785b5d26bc74800d8a91578d62a6c6ac53b4d87a82000000008c493046022100a21330d1468cb7c375db7eacdb78220e3ad8f8371ade6e94971fa7bb8eb577bf0221008a374a8178796d66d3e5da96c337e31dcc2e197510eaabdd8171dae2af6c78d20141047ac1764167aeab394574228624dcc2f0ab92cd44fe41284c0ef98c8aff9347497b5b0fd7ea2bed4390c86e75d511f693fa80433a740d5b0af1a033beab9e2f01ffffffff74c819b75683c8898eda4c91a02b07252bf7bb033f752a97dc18cbf5a1812acc00000000494830450221008693406bc557f41b1893e082b4b411b435c381d300fc32511aeae4cc30ae254b02207962c7167fbc68453622b9f1b9788eb1e5c4c18c65340aed79a52d9c0e034f0601ffffffffbb2c5676f8e8eb73173c1688848103bbfd88eecb311a93d0a7f4aee617ee5440000000004a493046022100de027616bba340c89877aac986442f432b80378093f8ef9ed09b7870ca2c5a05022100da4b67ed58599c37723208e2fdc2a943047d4f80e6640ed2cfe21f7b183328a301ffffffffd9225f02a6b437045b0fd39cb502e7ac5a5b3746c474aac547e689d5a90003ef000000004948304502202b5871cf7791118d2cb47b9d01a8d074898d1fae361ed591767991135ec9535d022100a9130365c88915a80ab0d6d94c20c9dcfd2d2fb03f54747b862d21d7dd16d04401ffffffffc55abf692b6e503515dea8f5116b874bc12559b329a3a74cf1f5b3588844f3eb000000004847304402205d0627ddc2c5b05261b280e8584dd6f6d37f81aba62acd1a0a37421d24f1c55102207e9f414af5b2eae955e138ea2487a9541cab277b5b49e0c4d556a6ecee89c1e401ffffffff02c08a5064000000004341047a4bb59f6f17b823ac377b11eb4361c6b213574609d576bdf2aab2add4b5de06e56654441e8420f09866105a7761e727ea0ac6c6b8685c608415cccb2ac3e284ac40395552840000001976a914a8b1a799c3bd3f303945e43132c52b02629254f188ac00000000")
pk_script = binascii.unhexlify(b"76a9143f320f852a51643d3ffbaa1f49bfe521dd97764a88ac")
input_index = 70
self.check_transaction(txn_bytes, pk_script, input_index)
def test_hashall_anyonecanpay(self):
# Block: bitcoin@276459
# Txn: 33854f625c90e3287eae951103489a2449f91bfe039aa4d4c810bd66450edbf1
txn_bytes = '0100000023384f3a49bb74f3ad78aba69c1daee52224ea37fbd663d5e709858ebb1a2bc4cc6d0000008b48304502210096aa7b8a2d6b05ec3f3f1073b9b485b5f9660ccb9929c45ce932c1963ef065cb0220764409a26766fcdcfaf100400633377eabebbc7d9ca9424cf9a274786e5d36c0014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff171954b763766e92eca51ef0d165afeb28a22b911827bdfd805d1b83fcb0568b3a0100008b483045022100f743a42402a24f1b63748fc02725fd0db976708c18cb5d415f5c2ae1539c6bc802206d67fef858f7640637fe33a01ebdc2dd2e4f4cec2a996d962551feed03f814ce014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffb7a0e9efc462e1048e0c2ae01131ef386315f85e5809e2763cd8a422be6584fee00000008a47304402202678eb664db637e1c341dfcf4d6749dc098a863f9b2f9ecfb2e520892adff088022035ae54315eccd68eb618b4e1610cb27aa227dfa4158a9935947ccdae3128f946014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffa13636766f691aec363cbe802c0c5be14096452e37e2866110ce2643ded06dfae60000008b483045022100fc78e22d4e6ae0545e4aa4572bf84b00d2ca5303932b69b88d45508643612f580220410a75bc8523c7a47c92b955d5377f9f1e039b335fa93f68b3b4469514667472014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffdc7b320b9e39e54b0e5802f9fdd73fe5f19b50213f787b78642240330af5d959190100008a473044022051a6fb9d05af0be75c5a885908500ffb103de6fce55b9df2d62f92ecc32482f302202eb90cf1fc7ec490bc3cd01f7d5cc7466a3fcb26680394f019a92a0c3a6fa5be014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff7a3d6a0860a8b448952114ab5517dce85deaa0bd7f77c1a29691acda659fe1c5400100008b483045022100ccfbbe13c42ac8dc45cf23094c83fb78986ffb8f372d5006806bcde0129d6ba702204f71bf18c9355c92e77c5a2c651
dc81d568acdca16bd1e51abb50b6364ce7d17014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffdd3121c2dcdec6c158a20adc7b1b47bba4641701bb07732fd803bbc5a2474cfa010000008b483045022100e4e5862d540115c3d6370d784bb76b5fbdd7211a7b3aadaa4bc4e9b54296876402203f7b3ff39a0f23cad44f2fb7e609793b961bb74323eeebb336c979d999b5a6a8014104904f151123490efe4dc531fc73beb279c042bf78984dfb43050fcd2f40f42553217e7d8462a8f421c8cde940fd7911feb49a54f73ea21ef5afa108268443e6aeffffffff2950101deb0355deab28e7cbe737519474f5a8b3ef400c9119312b55f79567e82a0100008b4830450221009061f8527b52879179b4ee0d3a9a03c51f279b4a717cbf9acf889b46d0af23a4022030f7db8c9eaf837d4e0bdcaa113bbe5481ac65cddbcf608061232462d3347ff8014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff2ee22f3d8d359348619561cf608ef2d5f21490360cd7f7b6f70ef26bc3483fe93e0100008a47304402204ad6c78afa48234cf950f8a50a021449d148bba494c04e1d86be93551f080dd002204636b40d25c212cd68ffc04657f499b135c1aecf13bdc3ac7d9fb1661c8ffb0b014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffd76e1b49c0c38085a0d36597e8dc3270c75d4ad1b8a22473058e578e3bb680dec80000008a47304402201b3dab4452541618a03eab206c206e1d3a9f83f305ed5a56dd585cec2b5b308802207cd4cdc1628239e4e40b66fb3803b5cd71322927eacd4f59909924edf4f36615014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff22d828ba390833f66eabd5307e212ec4d3531daf602c0866ec3f446cb234d562280100008b483045022100d006a46c39b3b12d4020508b12cd520e385a90a30d20c07312f260bf7216e3b202200ffa078d632ad2032d7789a03501413e8179d205fc72f0af40db407d3b9700ef014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff2dda665c3be2cc19635d915e8
36884116a0b277843687f7b0f247ae46a5216381d0100008b483045022100b65798d10e911c01d4e2955a1cd7488db2fb92764021915c84d4b5050a5e85f402205c9c469dc17dde84b6f3a8f987abf8cbf67569e9cb113553c7c2653127370d6b014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffed7e4f17204eb20e38d69444e56324dfff9edd04b8200225d4c8f71dd99a2ff0000000008b483045022100b7fa240aa9a693e55ecaba4aa0ccbc124176357570f1d52ef47ed1bf46092fe602204c6be9d94a58326fab63cecf248f36045ff63efe33567bf436ec1b8557720b65814104d34775baab521d7ba2bd43997312d5f663633484ae1a4d84246866b7088297715a049e2288ae16f168809d36e2da1162f03412bf23aa5f949f235eb2e7141783ffffffffbce9df80ed7de31b5bc2b1f749767c849988bd4083fe927e66d4193bb486dafd0c0100008b483045022100ac6c13d112aba01518122d2745b5d406779f054f9988bcd7f1567373d91881c8022034d48e8b6bbd08e2a061218d826fd0560155684b925b418470d6a9b3a3fa4ffe014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffc11769e8e8f482ecd6e7744a88cfb26a46a22ff3b18d13749d7480786daf019b920000008b483045022100e23f9976fd2601095da04aaef6ac59629ef0ef2a9c2e346884a2456d5e07f79802206f7d82da0f73503221d7d0b0b0206879bdea89d2180fb76baf7344f7371d933a014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff5ca118782acbacbd58da948d63b78cd70dcdb156fee355603789aba82b5a65401d0100008a473044022007b0c16bd2f265b5a65377ed35afecf13b16fea13e8e0b11a3f9d06d418a8ba602205f61fe09891f2a159ecb9319ca454526dccc9edb151d24a3e32c6eac0defc005014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff09bfd8808d260efdf42953c0b27ab550d56f226c589647553c2b7690a0330a1a330100008a47304402202d493e7bef94c170d8c2b525d8c23f68daaa01321cd798d9a739745de6d1f633022006810997ea02ecff8c9368e93c632772e64ed9a567d394ea0bf1e7529fac738a014104f13c4
a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff4e765a426bf9f70e732e0e1f75e1ac19ce0dca17833c95662b13942ad8f2d28c080100008b483045022100dafe9c8315d097566330ae126d922ae3b842a17a7da896f8ee252ad118631c21022007303b4d022303243b69d5884f8395a775f9f61c0b37a2f967dc3899921c48bc014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff50b39340dbb1a7edf60bb308bf04a700c27e4ba83f48651a14e161005699d931e50000008b483045022100eb53ddeab50996df90f5a9d1f4655b293e12e440074ccaa067da4f9ffd38cb3d02202201c85c1847a8cb63f130823a2ad82c4c4840456a031ab318cc7014ec46cea4014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff5d07e5a0e623f45ef188c90d3f9152f262b4e01140e16523dab43e5367f68882170100008b4830450221008ee70db54199e3a0ab17bbbe5d28e8d16b2898c57e4b9562fd0428a1a5c5da1002205922c21d5bb24e8a91a0c6d7a1d4f4d835192b4defcf232fb8ae3b9d23a094df014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff6ef05c1435ab14db3a0a0112568fbd6bd1aeb7fcf1295a564271842815c9a889130100008b483045022100c298aabc11dfb978114d745038df9f8a3f62f14e68b8f8c772fb811925d11e9502207f08435c5d895aa497ef743690f95ffa1b147ec94675a6e5bf018863d4435cbd014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffd71344b43e3eead56ac66a47587c19715e28937fc94a864780c8e5dd24d20918950000008b483045022100d922d8f24620b45edeca522af63ebf4d8f0266cd2ceed2166ce43c0c144b1153022016400f9e39562bc4ddb5bbbd3ea21e1000c22f52bb0733d7bbe4a9a597eb981e014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffc450500fa1dc368f51741f9e213100e3df38e2380c0aa6f007ddc280ce74ccbef1000
0008b483045022100c3c2fc171c6781dddbc4a43bfa74761fb1e7442e202e66dfd169aa8f2c8ab5700220194c9103b2df0f7f81a92be75f33d6a07ec036468090a889ba0445c379698e3f014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff2a7b20fd487d2363675441f698f13fd25cee4f45250817eadc977ee99f3dfd0ec70000008b483045022100bd545605ccd412e57250547608a338c24c315158b98ba69183981a17f318bcc702203fa5bbe81733053c2292569050741daacac4eb81b81a1adb94089260dde9478f014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff26a5cb6d551a4969e605c79ea83fb7d9059e55caba675a9d72f5f800483ff23f180100008b48304502210082f145b30c2b9f617b93e0072c276d53123d7d65932d793ea5dccc09690e7e9302206c98c0b9e44c058fccfa35d1983f5d46d150193a817264894a26360ecb80b07c014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bfffffffffb0dd94fe877ddfa5c15420015efe36cd4df55de6327653ea5fbf1b90f7f905df50000008a4730440220021d5becb48bb1a0381d75b06413b19bf20ebc95aab0091d9848d4aebde68d50022047ce17f1dd0c6394fb8ccd406f4aaafda0532cbb3eb55b37cbb8286e8abda89f014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffffeef671f5e0fc4cdac0eb07f313ccc63b323c23b939fb2418a0692506d359012f0c0100008b483045022100b62f77008ac2ec1f738aa9fd3f1c6d13a9bc82a9573a2e055bb600a779aa4147022057a815080f11cf729eff6f22e68ba5618cfad08b0002244b8dd042e074892835014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff5c98c3c94aaa0f5efcdba720d6b53f7c06941a47eed7244c3ddc04185f59e386190100008a47304402207dbdf6c78c5486d0393e8cdc787d225046671919167171a7ca77850f1c47f2a20220534c4d41618c76696630747c54f7dda0a5be43498ac8d6cc4389a1f5798f8ffb014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f
089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bfffffffff1f345a155d43ee9457d2c062ee3ddd9a87c1c67bd0cef22fc7bcb4557e54b90ae0000008a47304402206b34ddddf7d5e8a7ba07e1b2a18058dd55363f3b16a9193888ef23ced24231f702207e1bcbed7f04e15e1c710ba10c27fde26989128240e8383df10b768c58291548014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff998a67d72bbd6b544175bb34f8fd79111bd29fc431b11dd8a42fb428ff5696b3150100008b4830450221008b79d6512b6cce25d260aa47157d1fc83fab1efb63a2b9c16a67580ea6aa0a8302205501e4239bb5ec12565f285ffd513992dec721eeb31806c43f314815286e32d3014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff60f12e280a483110122ff06987fb0116ffb3243c86a61b1c3d4b916c0d9c2ac0040000006a47304402201f25366ab7e9459bb62311702cf6fde2d7bc3b22f904c30b66282026c67297e602207c2398e6745cf1fe244967e86a54340a4fd681c8f5eb2d39e45cbfb4dae368d90121028ef2f0e363a6b16b9a98512c1d72e180c96bacd75dca438944a67cad64f3dfe1ffffffff914bb057318afb09fdeadad6e80243a5601e3a4f61eae33ae473975326a2569a370100008a473044022041fa6fdf9a58e9a29183d59fc83a140a17d5445dcbeb9448211c5ea60368ebca02200848d5ae101513704603ceca1ac31654a38632fd1a7fb36381153de7b6f6b33d014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff51e54c30f6adb6b481127d90a5a2efa058ca3a11fa0c5cc2ae2435f895ea73b8f70000008b483045022100fbee9d43477b20c8673bd486ea5635aa662b458b79044401267225b32a12eb13022000c9cc7471f6803f0df46cfe2e85f8f2d8fae7c8762fc185a277e718b3d6b471014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff405c78a00c18d2377f5fda6788cd2981009045cc139aaa582b0742c0d6f78a313c0100008a473044022050eec797e48442371f41bbb996bbd36e1abd5ee417f19e525ec74c97538288040220534689c7fbe2573da8aec3d57b7e919
3f19298a49339eb59329b912d35833b38014104f13c4a1e02b1862a13db8836b4b1a00098cca1314f591f19f089b8017a51cf76541c0573685fa9f2a4aad3c8b8d33f24c1d95cfbc56b8e7a1b388b1a905dbf6bffffffff3b1d3a805d143955abb290c8f0e1bcdf4015727882f8ac8a1af5b68fb762d4c3000000006a47304402207185c0a3e44bbddd92d519420476e1149cbba10b1bf806d15b659727f64cf43d02201e9ec57ed2961a9cc7ba150dcdc3a2f3fd9725619045d7b3c26e729775df7aec0121032c7de9fa436f37613fd8c02a9fcf1b2f3a9d31ecb1d68c64d4afac9928d3c45dffffffff0600e1f5050000000017a91454259fdb77a61c1f9b053fffdd2885ad1b98d91287e00f9700000000001976a9147452497125562a2990474b80e730a75afabc785388ac00e1f505000000001976a9143b8278cda896d032f956ac4196595001b7151bd588ac79180c00000000001976a9143da03c286f5ab3fd4558d67e6dffb884e62758d588ac14eca300000000001976a914409b1a1a50bcdb4485a8700f4312276a470b43cb88ac8a485b02000000001976a91463f38e97c536b5a49dd0222329a797590e4d83af88ac00000000'.decode('hex')
pk_script = binascii.unhexlify(b'76a9149bc0bbdd3024da4d0c38ed1aecf5c68dd1d3fa1288ac')
input_index = 12
self.check_transaction(txn_bytes, pk_script, input_index)
def test_hashnone(self):
# Block: bitcoin@
# Txn:
txn_bytes = binascii.unhexlify(b'')
pk_script = binascii.unhexlify(b'')
input_index = 3
#self.check_transaction(txn_bytes, pk_script, input_index)
def test_hashnone_anyonecanpay(self):
# Block: bitcoin@
# Txn:
txn_bytes = binascii.unhexlify(b'')
pk_script = binascii.unhexlify(b'')
input_index = 3
#self.check_transaction(txn_bytes, pk_script, input_index)
def test_hashsingle(self):
# Block: bitcoin@238797
# Txn: afd9c17f8913577ec3509520bd6e5d63e9c0fd2a5f70c787993b097ba6ca9fae
txn_bytes = binascii.unhexlify(b"010000000370ac0a1ae588aaf284c308d67ca92c69a39e2db81337e563bf40c59da0a5cf63000000006a4730440220360d20baff382059040ba9be98947fd678fb08aab2bb0c172efa996fd8ece9b702201b4fb0de67f015c90e7ac8a193aeab486a1f587e0f54d0fb9552ef7f5ce6caec032103579ca2e6d107522f012cd00b52b9a65fb46f0c57b9b8b6e377c48f526a44741affffffff7d815b6447e35fbea097e00e028fb7dfbad4f3f0987b4734676c84f3fcd0e804010000006b483045022100c714310be1e3a9ff1c5f7cacc65c2d8e781fc3a88ceb063c6153bf950650802102200b2d0979c76e12bb480da635f192cc8dc6f905380dd4ac1ff35a4f68f462fffd032103579ca2e6d107522f012cd00b52b9a65fb46f0c57b9b8b6e377c48f526a44741affffffff3f1f097333e4d46d51f5e77b53264db8f7f5d2e18217e1099957d0f5af7713ee010000006c493046022100b663499ef73273a3788dea342717c2640ac43c5a1cf862c9e09b206fcb3f6bb8022100b09972e75972d9148f2bdd462e5cb69b57c1214b88fc55ca638676c07cfc10d8032103579ca2e6d107522f012cd00b52b9a65fb46f0c57b9b8b6e377c48f526a44741affffffff0380841e00000000001976a914bfb282c70c4191f45b5a6665cad1682f2c9cfdfb88ac80841e00000000001976a9149857cc07bed33a5cf12b9c5e0500b675d500c81188ace0fd1c00000000001976a91443c52850606c872403c0601e69fa34b26f62db4a88ac00000000")
pk_script = binascii.unhexlify(b"76a914dcf72c4fd02f5a987cf9b02f2fabfcac3341a87d88ac")
input_index = 1
self.check_transaction(txn_bytes, pk_script, input_index)
def test_hashsingle_anyonecanpay(self):
# Block: bitcoin@
# Txn:
txn_bytes = binascii.unhexlify(b'')
pk_script = binascii.unhexlify(b'')
input_index = 3
#self.check_transaction(txn_bytes, pk_script, input_index)
"""
def check_script(self, script, output):
# for each output, add the literal and do checkverify
if output:
for o in reversed(output):
if isinstance(o, int):
# o is signed, so reserve an extra bit for the sign when sizing the encoding
length = (o.bit_length()+7+1)//8
script += length.to_bytes(1, 'big') + o.to_bytes(length, 'big', signed=True)
elif isinstance(o, bytes):
script += len(o).to_bytes(1, 'big') + o
else:
raise TypeError('unsupported output type: %r' % (o,))
script += pycoind.script.opcodes.OP_EQUALVERIFY.to_bytes(1, 'big')
# make sure the stack depth is 0 and return true
script += (
pycoind.script.opcodes.OP_DEPTH.to_bytes(1, 'big') +
b'\x00' +
pycoind.script.opcodes.OP_EQUALVERIFY.to_bytes(1, 'big') +
pycoind.script.opcodes.OP_TRUE.to_bytes(1, 'big')
)
# run the script
result = pycoind.script.Script.process(b'', script, None, None)
# check the output (None indicates expected failure)
if output is None:
self.assertFalse(result)
else:
self.assertTrue(result)
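The push encoding these tests build by hand — one length byte followed by a big-endian, sign-aware integer, with one extra bit reserved for the sign — can be sketched standalone. `encode_push` below is a hypothetical helper name for illustration, not part of pycoind:

```python
def encode_push(value):
    """Encode an integer the way these tests do: one length byte, then
    the big-endian signed encoding of `value`.

    The +1 in the length formula reserves room for the sign bit, so
    e.g. 128 takes two bytes (0x00 0x80) instead of overflowing."""
    length = (value.bit_length() + 7 + 1) // 8
    return length.to_bytes(1, 'big') + value.to_bytes(length, 'big', signed=True)

# 127 fits in one byte; 128 needs a second byte for the sign bit
assert encode_push(127) == b'\x01\x7f'
assert encode_push(128) == b'\x02\x00\x80'
assert encode_push(-1) == b'\x01\xff'
```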
def test_flow_control_ops(self):
pass
def test_literal_ops(self):
# Literals are also tested in hash tests
tests = [
# https://github.com/bitcoin/bitcoin/blob/master/src/test/script_tests.cpp#L110
((1, 0x5a), 0x5a),
((pycoind.script.opcodes.OP_PUSHDATA1, 1, 0x5a), 0x5a),
((pycoind.script.opcodes.OP_PUSHDATA2, 1, 0, 0x5a), 0x5a),
((pycoind.script.opcodes.OP_PUSHDATA4, 1, 0, 0, 0, 0x5a), 0x5a),
]
for (inputs, output) in tests:
script = b''.join(i.to_bytes((i.bit_length()+7+1)//8, 'big') for i in inputs)
self.check_script(script, [output])
def test_stack_ops(self):
xTop = 19
(x, x0, x1, x2, x3, x4, x5, x6) = (41, 43, 47, 53, 59, 61, 67, 71)
tests = [
('OP_IFDUP', [0], [0],),
('OP_IFDUP', [1], [1, 1],),
('OP_DEPTH', [], [0]),
('OP_DEPTH', [1,1], [1,1,2]),
('OP_DROP', [x], []),
('OP_DROP', [], None),
('OP_DUP', [x], [x, x]),
('OP_DUP', [x1, x], [x1, x, x]),
('OP_DUP', [], None),
('OP_NIP', [x1,x2], [x2]),
('OP_NIP', [x,x2,x3], [x,x3]),
('OP_NIP', [x1], None),
('OP_NIP', [], None),
('OP_OVER', [x1,x2], [x1,x2,x1]),
('OP_OVER', [x1], None),
('OP_OVER', [], None),
('OP_PICK', [x5, x4, x3, x2, x1, x0, 5], [x5, x4, x3, x2, x1, x0, x5]),
('OP_PICK', [x5, x4, x3, x2, x1, x0, 2], [x5, x4, x3, x2, x1, x0, x2]),
# @TODO: pick error cases
('OP_ROLL', [x5, x4, x3, x2, x1, x0, 5], [x4, x3, x2, x1, x0, x5]),
('OP_ROLL', [x5, x4, x3, x2, x1, x0, 2], [x5, x4, x3, x1, x0, x2]),
# @TODO: roll error cases (not enough arguments to start, len < n)
('OP_ROT', [x1, x2, x3], [x2, x3, x1]),
('OP_ROT', [x, x1, x2, x3], [x, x2, x3, x1]),
# @TODO: rot error cases (ditto)
('OP_SWAP', [x1, x2], [x2, x1]),
('OP_SWAP', [x, x1, x2], [x, x2, x1]),
('OP_SWAP', [x1], None),
('OP_SWAP', [], None),
('OP_TUCK', [x1, x2], [x2, x1, x2]),
('OP_TUCK', [x, x1, x2], [x, x2, x1, x2]),
# @TODO: tuck error cases
('OP_2DROP', [x1, x2], []),
('OP_2DROP', [x, x1, x2], [x]),
('OP_2DROP', [x1], None),
('OP_2DROP', [], None),
('OP_2DUP', [x1, x2], [x1, x2, x1, x2]),
('OP_2DUP', [x, x1, x2], [x, x1, x2, x1, x2]),
('OP_2DUP', [x1], None),
('OP_2DUP', [], None),
('OP_3DUP', [x1, x2, x3], [x1, x2, x3, x1, x2, x3]),
('OP_3DUP', [x, x1, x2, x3], [x, x1, x2, x3, x1, x2, x3]),
('OP_3DUP', [x1, x2], None),
('OP_3DUP', [x1], None),
('OP_3DUP', [], None),
('OP_2OVER', [x1, x2, x3, x4], [x1, x2, x3, x4, x1, x2]),
('OP_2OVER', [x, x1, x2, x3, x4], [x, x1, x2, x3, x4, x1, x2]),
# @TODO: 2over error cases
('OP_2ROT', [x1, x2, x3, x4, x5, x6], [x3, x4, x5, x6, x1, x2]),
('OP_2ROT', [x, x1, x2, x3, x4, x5, x6], [x, x3, x4, x5, x6, x1, x2]),
# @TODO: 2rot error cases
('OP_2SWAP', [x1, x2, x3, x4], [x3, x4, x1, x2]),
('OP_2SWAP', [x, x1, x2, x3, x4], [x, x3, x4, x1, x2]),
# @TODO: 2swap error cases
]
for opname, inputs, outputs in tests:
# TODO: Check the +1 patch to get the byte length
# length(x1) + x1 [+ length(x1) + x2 ...] + opcode
script = b''.join(int(i.bit_length()//8+1).to_bytes(1, 'big') + i.to_bytes(i.bit_length()//8+1, 'big') for i in inputs) + \
pycoind.script.opcodes.get_opcode(opname).to_bytes(1, 'big')
self.check_script(script, outputs)
def test_splice_ops(self):
tests = [
([b'hello'], [b'hello', 5]),
([b'hello', b'world!'], [b'hello', b'world!', 6]),
([], None),
]
for (inputs, outputs) in tests:
script = b''.join((len(i).to_bytes(1, 'big') + i) for i in inputs) + pycoind.script.opcodes.OP_SIZE.to_bytes(1, 'big')
self.check_script(script, outputs)
def test_logic_ops(self):
tests = [
('OP_EQUAL', 1, 1, 1),
('OP_EQUAL', 1, 0, 0),
('OP_EQUAL', 345, 345, 1),
('OP_EQUAL', 1200, 400, 0),
('OP_EQUAL', 988, -988, 0),
('OP_EQUAL', -650, -650, 1)
]
for opname, a, b, output in tests:
a_len = a.bit_length()//8 + 1
b_len = b.bit_length()//8 + 1
# length(a) + a + length(b) + b + opcode
script = a_len.to_bytes(1, 'big') + a.to_bytes(a_len, 'big', signed=True) + \
b_len.to_bytes(1, 'big') + b.to_bytes(b_len, 'big', signed=True) + \
pycoind.script.opcodes.get_opcode(opname).to_bytes(1, 'big')
self.check_script(script, [output])
def test_math_overflow_ops(self):
pass
def test_unary_math_ops(self):
tests = [
('OP_1ADD', -2, -1),
('OP_1ADD', -1, 0),
('OP_1ADD', 0, 1),
('OP_1ADD', 1, 2),
('OP_1SUB', -1, -2),
('OP_1SUB', 0, -1),
('OP_1SUB', 1, 0),
('OP_1SUB', 2, 1),
('OP_NEGATE', 0, 0),
('OP_NEGATE', 1, -1),
('OP_NEGATE', -1, 1),
('OP_ABS', 0, 0),
('OP_ABS', 1, 1),
('OP_ABS', -1, 1),
('OP_NOT', 0, 1),
('OP_NOT', 1, 0),
('OP_NOT', -1, 0),
('OP_NOT', 200, 0),
('OP_0NOTEQUAL', 0, 0),
('OP_0NOTEQUAL', 1, 1),
('OP_0NOTEQUAL', -1, 1),
]
for opname, input, output in tests:
len_in = input.bit_length()//8 + 1
# length(input) + input + opcode
script = len_in.to_bytes(1, 'big') + input.to_bytes(len_in, 'big', signed=True) +\
pycoind.script.opcodes.get_opcode(opname).to_bytes(1, 'big')
self.check_script(script, [output])
def test_binary_math_ops(self):
tests = [
('OP_ADD', 1, 2, 3),
('OP_ADD', 1, -1, 0),
('OP_ADD', 1, -2, -1),
('OP_ADD', 1, 127, 128),
('OP_ADD', 129, 129, 258),
('OP_ADD', -1000, -1000, -2000),
('OP_SUB', 1, 1, 0),
('OP_SUB', 5, 1, 4),
('OP_SUB', 1, 5, -4),
('OP_BOOLAND', 0, 0, 0),
('OP_BOOLAND', 0, 1, 0),
('OP_BOOLAND', 1, 0, 0),
('OP_BOOLAND', 1, 1, 1),
('OP_BOOLOR', 0, 0, 0),
('OP_BOOLOR', 0, 1, 1),
('OP_BOOLOR', 1, 0, 1),
('OP_BOOLOR', 1, 1, 1),
('OP_NUMEQUAL', 1, 1, 1),
('OP_NUMEQUAL', 1, 5, 0),
('OP_NUMNOTEQUAL', 1, 1, 0),
('OP_NUMNOTEQUAL', 5, 1, 1),
('OP_LESSTHAN', 5, 6, 1),
('OP_LESSTHAN', 5, 5, 0),
('OP_LESSTHAN', 5, 4, 0),
('OP_LESSTHANOREQUAL', 5, 6, 1),
('OP_LESSTHANOREQUAL', 5, 5, 1),
('OP_LESSTHANOREQUAL', 5, 4, 0),
('OP_GREATERTHAN', 5, 6, 0),
('OP_GREATERTHAN', 5, 5, 0),
('OP_GREATERTHAN', 5, 4, 1),
('OP_GREATERTHANOREQUAL', 5, 6, 0),
('OP_GREATERTHANOREQUAL', 5, 5, 1),
('OP_GREATERTHANOREQUAL', 5, 4, 1),
('OP_MIN', 5, 6, 5),
('OP_MIN', 6, 5, 5),
('OP_MIN', 5, -6, -6),
('OP_MAX', 5, 6, 6),
('OP_MAX', 6, 5, 6),
('OP_MAX', -5, -6, -5),
]
for opname, a, b, output in tests:
# a and b are signed, so reserve an extra bit for the sign when sizing
a_len = (a.bit_length()+7+1)//8
b_len = (b.bit_length()+7+1)//8
#print(a_len, a)
script = a_len.to_bytes(1, 'big') + a.to_bytes(a_len, 'big', signed=True) + \
b_len.to_bytes(1, 'big') + b.to_bytes(b_len, 'big', signed=True) + \
pycoind.script.opcodes.get_opcode(opname).to_bytes(1, 'big')
self.check_script(script, [output])
def test_multisig(self):
pass
# test 1/3 work for 2/3
# test 2/3 work for 2/3
# test 3/3 work for 2/3
# 1 of 2 multi-sig, where one of the public keys is gibberish
# txid: 7aa7f9172660e38236b3bb97830c0b79a6e843ae83145d8707b9b8f249e7c470
def test_ternary_math_ops(self):
tests = [
(pycoind.script.opcodes.OP_WITHIN, [2, 1, 3], [1]),
(pycoind.script.opcodes.OP_WITHIN, [1, 1, 3], [1]),
(pycoind.script.opcodes.OP_WITHIN, [3, 1, 3], [0]),
(pycoind.script.opcodes.OP_WITHIN, [0, -5, 5], [1]),
(pycoind.script.opcodes.OP_WITHIN, [0, 5, -5], [0]),
(pycoind.script.opcodes.OP_WITHIN, [10, 12], None),
(pycoind.script.opcodes.OP_WITHIN, [12, 10], None),
(pycoind.script.opcodes.OP_WITHIN, [12], None),
(pycoind.script.opcodes.OP_WITHIN, [], None),
]
for (opname, inputs, outputs) in tests:
# length(x1) + x1 [+ length(x1) + x2 ...] + opcode
script = b''.join(int(i.bit_length()//8+1).to_bytes(1, 'big') + i.to_bytes(i.bit_length()//8+1, 'big', signed=True) for i in inputs)+\
opname.to_bytes(1, 'big')
self.check_script(script, outputs)
def test_crypto_ops(self):
tests = [
# http://en.wikipedia.org/wiki/RIPEMD
#('OP_RIPEMD160', b'The quick brown fox jumps over the lazy dog', b'37f332f68db77bd9d7edd4969571ad671cf9dd3b'),
#('OP_RIPEMD160', b'', b'9c1185a5c5e9fc54612808977ee8f548b2258d31'),
# http://en.wikipedia.org/wiki/SHA_1
#('OP_SHA1', b'The quick brown fox jumps over the lazy dog', b'2fd4e1c67a2d28fced849ee1bb76e7391b93eb12'),
#('OP_SHA1', b'', b'da39a3ee5e6b4b0d3255bfef95601890afd80709'),
# http://en.wikipedia.org/wiki/Sha256
('OP_SHA256', b'The quick brown fox jumps over the lazy dog', b'd7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592'),
#('OP_SHA256', b'', b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'),
# https://blockgeeks.com/wp-content/uploads/2017/06/address.pdf
('OP_HASH160', b'0450863AD64A87AE8A2FE83C1AF1A8403CB53F53E486D8511DAD8A04887E5B23522CD470243453A299FA9E77237716103ABC11A1DF38855ED6F2EE187E9C582BA6', b'010966776006953D5567439E5E39F86A0D273BEE'),
# https://blockgeeks.com/wp-content/uploads/2017/06/address.pdf
#('OP_HASH256', b'00010966776006953D5567439E5E39F86A0D273BEE', b'D61967F63C7DD183914A4AE452C9F6AD5D462CE3D277798075B107615C1A8A30'),
]
for opname, input, output in tests:
in_len = len(input)
if in_len <= 75:
script = len(input).to_bytes(1, 'big')
else:
byte_len = (in_len.bit_length() + 7) // 8
# https://en.bitcoin.it/wiki/Script [PUSHDATA1, PUSHDATA2, PUSHDATA4]
pushdata_opcode = [b'\x4c', b'\x4d', b'\x4e']
script = pushdata_opcode[byte_len-1] + in_len.to_bytes(byte_len, 'little')
script += input + pycoind.script.opcodes.get_opcode(opname).to_bytes(1, 'big')
self.check_script(script, [output])
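The direct-push/PUSHDATA branching used above can be sketched on its own. `push_opcode` is a hypothetical helper for illustration (opcodes 0x4c/0x4d/0x4e from the Bitcoin Script wiki), not part of pycoind:

```python
def push_opcode(data):
    """Prefix `data` with the appropriate Bitcoin Script push opcode."""
    n = len(data)
    if n <= 75:                        # direct push: the opcode IS the length
        return n.to_bytes(1, 'big') + data
    if n <= 0xff:                      # OP_PUSHDATA1: one length byte follows
        return b'\x4c' + n.to_bytes(1, 'little') + data
    if n <= 0xffff:                    # OP_PUSHDATA2: two little-endian length bytes
        return b'\x4d' + n.to_bytes(2, 'little') + data
    return b'\x4e' + n.to_bytes(4, 'little') + data  # OP_PUSHDATA4

assert push_opcode(b'a' * 75)[0] == 75
assert push_opcode(b'a' * 76)[:2] == b'\x4c\x4c'
assert push_opcode(b'a' * 300)[:3] == b'\x4d\x2c\x01'
```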
def test_reserved_ops(self):
pass
suite = unittest.TestLoader().loadTestsFromTestCase(TestScriptTransactions)
unittest.TextTestRunner(verbosity=2).run(suite)
| 111.258216 | 17,509 | 0.838657 | 1,994 | 47,396 | 19.772818 | 0.15998 | 0.004362 | 0.01116 | 0.005859 | 0.085551 | 0.070409 | 0.058615 | 0.051817 | 0.044716 | 0.042484 | 0 | 0.512896 | 0.106718 | 47,396 | 425 | 17,510 | 111.52 | 0.418348 | 0.753861 | 0 | 0.15 | 0 | 0 | 0.123021 | 0.026117 | 0 | 1 | 0.002814 | 0.002353 | 0.008333 | 1 | 0.054167 | false | 0.016667 | 0.016667 | 0 | 0.075 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
defcf0212c47c2116033e07ed45369bb9cc62a87 | 137 | py | Python | pretalx_view_as_raw_table/views/helloworld.py | s-light/pretalx-view_as_raw_table | bc36b8a9dfb4730d94f7d1f22459c93df918fb82 | [
"Apache-2.0"
] | null | null | null | pretalx_view_as_raw_table/views/helloworld.py | s-light/pretalx-view_as_raw_table | bc36b8a9dfb4730d94f7d1f22459c93df918fb82 | [
"Apache-2.0"
] | null | null | null | pretalx_view_as_raw_table/views/helloworld.py | s-light/pretalx-view_as_raw_table | bc36b8a9dfb4730d94f7d1f22459c93df918fb82 | [
"Apache-2.0"
] | null | null | null | """."""
from django.http import HttpResponse
def world(request, event):
"""Hello World."""
return HttpResponse('Hello World')
| 15.222222 | 38 | 0.656934 | 15 | 137 | 6 | 0.733333 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175182 | 137 | 8 | 39 | 17.125 | 0.79646 | 0.10219 | 0 | 0 | 0 | 0 | 0.098214 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d13a8d4a50efc72a16fd4c044c9794b1b0d11da | 67 | py | Python | wtflabel/__init__.py | greentfrapp/wtflabel | f46be903e935e8fc6a8c268c688179d8fb9880c9 | [
"Apache-2.0"
] | null | null | null | wtflabel/__init__.py | greentfrapp/wtflabel | f46be903e935e8fc6a8c268c688179d8fb9880c9 | [
"Apache-2.0"
] | 2 | 2021-06-08T21:55:31.000Z | 2021-09-08T02:15:56.000Z | wtflabel/__init__.py | greentfrapp/wtflabel | f46be903e935e8fc6a8c268c688179d8fb9880c9 | [
"Apache-2.0"
] | null | null | null | import wtflabel.utils
import wtflabel.models
import wtflabel.train
| 16.75 | 22 | 0.865672 | 9 | 67 | 6.444444 | 0.555556 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089552 | 67 | 3 | 23 | 22.333333 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d3249134d503ab35df1016307a26b724f5cc11a | 49 | py | Python | lib/Actions/Action.py | BlackChaosNL/Discounter-CLI | 12e10ff8dfcbf51dec5761827655b88290ad0979 | [
"MIT"
] | null | null | null | lib/Actions/Action.py | BlackChaosNL/Discounter-CLI | 12e10ff8dfcbf51dec5761827655b88290ad0979 | [
"MIT"
] | null | null | null | lib/Actions/Action.py | BlackChaosNL/Discounter-CLI | 12e10ff8dfcbf51dec5761827655b88290ad0979 | [
"MIT"
] | null | null | null | class Action:
def Run(self, window):
pass
| 12.25 | 24 | 0.632653 | 7 | 49 | 4.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 49 | 3 | 25 | 16.333333 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
c22ae2466deb91016bb0ea43a52a3fd84f71e6ea | 141 | py | Python | src/attrbench/metrics/result/__init__.py | zoeparman/benchmark | 96331b7fa0db84f5f422b52cae2211b41bbd15ce | [
"MIT"
] | null | null | null | src/attrbench/metrics/result/__init__.py | zoeparman/benchmark | 96331b7fa0db84f5f422b52cae2211b41bbd15ce | [
"MIT"
] | 7 | 2020-03-02T13:03:50.000Z | 2022-03-12T00:16:20.000Z | src/attrbench/metrics/result/__init__.py | zoeparman/benchmark | 96331b7fa0db84f5f422b52cae2211b41bbd15ce | [
"MIT"
] | null | null | null | from .metric_result import BasicMetricResult, AbstractMetricResult
from .masker_activation_metric_result import MaskerActivationMetricResult
| 47 | 73 | 0.914894 | 13 | 141 | 9.615385 | 0.692308 | 0.192 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 141 | 2 | 74 | 70.5 | 0.94697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dfb6554f7c180f13cd795b0b6a54e9bc3ebb5dd9 | 61 | py | Python | netodesys/tests/systems/__init__.py | spcornelius/netodesys | 9283f39d7a24591221a7f45434794dbb0e2b3a39 | [
"MIT"
] | null | null | null | netodesys/tests/systems/__init__.py | spcornelius/netodesys | 9283f39d7a24591221a7f45434794dbb0e2b3a39 | [
"MIT"
] | null | null | null | netodesys/tests/systems/__init__.py | spcornelius/netodesys | 9283f39d7a24591221a7f45434794dbb0e2b3a39 | [
"MIT"
] | null | null | null | from .kuramoto import *
from .lv import *
from .sis import *
| 15.25 | 23 | 0.704918 | 9 | 61 | 4.777778 | 0.555556 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 3 | 24 | 20.333333 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dfbe7e26c19d0d97b6b90d1c22cdf997a5d75064 | 18,577 | py | Python | scenes/tutorial.py | JoshuaSkelly/TroubleInCloudLand | 2642124f7549c91b89060d424524f69bb7edc169 | [
"MIT"
] | 2 | 2017-04-13T09:59:10.000Z | 2017-04-13T21:07:22.000Z | scenes/tutorial.py | JoshuaSkelly/TroubleInCloudLand | 2642124f7549c91b89060d424524f69bb7edc169 | [
"MIT"
] | 2 | 2017-04-14T15:33:19.000Z | 2017-04-21T19:55:01.000Z | scenes/tutorial.py | JoshuaSkelly/TroubleInCloudLand | 2642124f7549c91b89060d424524f69bb7edc169 | [
"MIT"
] | 4 | 2019-02-12T05:48:17.000Z | 2020-10-15T23:12:45.000Z | import enemies
from ui import text, infobubble
from utils import utility, vector
from utils.settings import *
WORLD_NAME = 0
PLAYER = 1
GROUP_LIST = 2
class Tutorial(object):
def __init__(self, tutorial_world_tuple):
self.timer = 0
self.boss_fight = False
self.time_after_boss = 0
self.boss_dead = False
self.moono_dead = True
self.mass_attack = False
self.default_spawn_rate = 0
self.moono_spawn_rate = 0
self.force_drop = 0
self.level = 0
self.world_name = tutorial_world_tuple[WORLD_NAME]
self.player = tutorial_world_tuple[PLAYER]
group_list = tutorial_world_tuple[GROUP_LIST]
self.text_group = group_list[TEXT_GROUP]
self.enemy_group = group_list[ENEMY_GROUP]
self.boss_group = group_list[BOSS_GROUP]
self.group_list = group_list
self.current_step = 0
self.new_moono = enemies.moono.Moono(self.player, self.group_list)
def update(self):
if self.boss_dead:
self.time_after_boss += 1
if self.default_spawn_rate:
if not self.moono_spawn_rate:
self.moono_spawn_rate = self.default_spawn_rate
self.spawn_moono()
self.moono_spawn_rate -= 1
if self.new_moono.health <= 0:
self.moono_dead = True
if self.current_step == 0:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Welcome to the tutorial!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.current_step += 1
self.timer = 0
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 1:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Use your mouse to move.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 5 * FRAMES_PER_SECOND and self.current_step == 2:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Notice the stars that you shoot?').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 3:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'When moving you shoot stars.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 4:
self.spawn_moono('balloon')
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "Oh no! It's a Moono!").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 5:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Attack with your stars!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 6:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Moonos hurt you when they touch you.').image
self.timer -= 1
self.moono_dead = False
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 7:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Did you get the Balloon?').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 8:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Collect balloons for bonus points!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 9:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Balloons always help you get more points.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 10:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Points are important!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 11:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Every 50,000 points you get an extra life!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 12:
self.spawn_baake()
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "Oh bother! It's Baake.").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 13:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "Baakes don't hurt you...").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 14:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, '...but they do love to get in the way!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 15:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "It's usually best to stay away from Baake.").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 16:
self.spawn_moono('reflect')
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Blast, another Moono!').image
self.moono_dead = False
self.timer -= 1
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 17:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Did you pick up that powerup gem?').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 18:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Gems change how you attack.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 19:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "These effects don't last long...").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 20:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, '...but they do stack!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 21:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'So pick up as many as you can!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 22:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'It is important to make use of gems.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 23:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'They can really help you out!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 24:
self.default_spawn_rate = 2.5 * FRAMES_PER_SECOND
self.mass_attack = True
self.timer = 0
self.current_step += 1
if self.current_step == 25:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "They're attacking en masse!").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 26:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Try moving slowly towards them.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 27:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'You can better control your shots.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 28:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'This can really increase your chances!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 3 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 4 * FRAMES_PER_SECOND and self.current_step == 29:
self.spawn_boss()
self.boss_fight = True
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "Oh no! It's a boss!").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 30:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Kill the Moonos that spawn.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer >= 3 * FRAMES_PER_SECOND and self.current_step == 31:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Some of them will drop a nova.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer == 3 * FRAMES_PER_SECOND and self.current_step == 32:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Use the nova when the boss is near.').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.timer == 3 * FRAMES_PER_SECOND and self.current_step == 33:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Only novas can hurt bosses!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
self.timer = 0
self.current_step += 1
if self.time_after_boss == 1 * FRAMES_PER_SECOND:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, 'Congratulations!').image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
if self.time_after_boss == 3 * FRAMES_PER_SECOND:
temp_image = text.TextSurface(FONT_PATH, 30, FONT_COLOR, "Looks like you're ready for a real challenge!").image
help_bubble = infobubble.InfoBubble(temp_image, self.player, 2 * FRAMES_PER_SECOND)
help_bubble.set_offset(vector.Vector2d(0.0, -100.0))
self.text_group.add(help_bubble)
if self.moono_dead or self.mass_attack:
self.timer += 1
if self.time_after_boss == 5 * FRAMES_PER_SECOND:
utility.fade_music()
return True
def spawn_baake(self):
self.enemy_group.add(enemies.baake.Baake())
def spawn_moono(self, force_drops=None):
self.new_moono = enemies.moono.Moono(self.player, self.group_list)
if force_drops:
self.new_moono.boss_fight = True
if force_drops == 'reflect':
self.new_moono.drop_reflect = True
if force_drops == 'balloon':
self.new_moono.drop_balloon = True
elif self.boss_fight:
self.force_drop += 1
self.new_moono.boss_fight = True
if self.force_drop > 4:
self.new_moono.drop_item = True
self.force_drop = 0
self.enemy_group.add(self.new_moono)
def spawn_boss(self):
self.boss_group.add(enemies.boss.BossTut(self, self.player, self.group_list))
| 49.80429 | 123 | 0.633095 | 2,546 | 18,577 | 4.371563 | 0.085232 | 0.09434 | 0.095687 | 0.075472 | 0.807817 | 0.796047 | 0.793082 | 0.785265 | 0.785265 | 0.697394 | 0 | 0.039471 | 0.267643 | 18,577 | 372 | 124 | 49.938172 | 0.778611 | 0 | 0 | 0.586751 | 0 | 0 | 0.056737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015773 | false | 0 | 0.018927 | 0 | 0.041009 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dfe19056bb11d5fd663d18d823a1d63fb68ef21a | 32 | py | Python | menpo3d/correspond/__init__.py | nontas/menpo3d | f29324b12a147f5b716ae5c3048d2c6b7a298752 | [
"BSD-3-Clause"
] | 3 | 2021-10-03T19:49:04.000Z | 2022-02-11T10:48:05.000Z | menpo3d/correspond/__init__.py | nontas/menpo3d | f29324b12a147f5b716ae5c3048d2c6b7a298752 | [
"BSD-3-Clause"
] | null | null | null | menpo3d/correspond/__init__.py | nontas/menpo3d | f29324b12a147f5b716ae5c3048d2c6b7a298752 | [
"BSD-3-Clause"
] | 1 | 2021-12-21T01:13:24.000Z | 2021-12-21T01:13:24.000Z | from .nicp import non_rigid_icp
| 16 | 31 | 0.84375 | 6 | 32 | 4.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dffb43d9116b7f4e4573aa42c2fb998cffadff30 | 26 | py | Python | lib/mpl_toolkits/mplot3d/__init__.py | pierre-haessig/matplotlib | 0d945044ca3fbf98cad55912584ef80911f330c6 | [
"MIT",
"PSF-2.0",
"BSD-3-Clause"
] | 35 | 2015-10-23T08:15:36.000Z | 2022-02-03T10:17:15.000Z | lib/mpl_toolkits/mplot3d/__init__.py | pierre-haessig/matplotlib | 0d945044ca3fbf98cad55912584ef80911f330c6 | [
"MIT",
"PSF-2.0",
"BSD-3-Clause"
] | 3 | 2015-09-17T16:27:45.000Z | 2018-07-31T05:59:33.000Z | lib/mpl_toolkits/mplot3d/__init__.py | pierre-haessig/matplotlib | 0d945044ca3fbf98cad55912584ef80911f330c6 | [
"MIT",
"PSF-2.0",
"BSD-3-Clause"
] | 25 | 2016-01-18T12:19:11.000Z | 2021-12-11T15:45:17.000Z | from axes3d import Axes3D
| 13 | 25 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a026a61dfa14d03bbbd76ef1aa73d1d09d2578ca | 315 | py | Python | exchange_cash/exchange.py | gringonivoli/tdd-own-stuff | 906e4b48b0cddc98a9c9ab220af98c363a2a94f8 | [
"MIT"
] | null | null | null | exchange_cash/exchange.py | gringonivoli/tdd-own-stuff | 906e4b48b0cddc98a9c9ab220af98c363a2a94f8 | [
"MIT"
] | null | null | null | exchange_cash/exchange.py | gringonivoli/tdd-own-stuff | 906e4b48b0cddc98a9c9ab220af98c363a2a94f8 | [
"MIT"
] | null | null | null | from abc import ABCMeta, abstractmethod
class Exchange(metaclass=ABCMeta):
@abstractmethod
def rate(self, origin_currency, target_currency):
pass
class ExchangeFake(Exchange):
def rate(self, origin_currency, target_currency):
return 150 if origin_currency != target_currency else 1
| 22.5 | 63 | 0.742857 | 37 | 315 | 6.162162 | 0.567568 | 0.184211 | 0.263158 | 0.368421 | 0.342105 | 0.342105 | 0.342105 | 0 | 0 | 0 | 0 | 0.015686 | 0.190476 | 315 | 13 | 64 | 24.230769 | 0.878431 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.125 | 0.125 | 0.125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
a02aac0c85a854023d31ad0d9737e09bf0e5173c | 126 | py | Python | src/go.py | kamuridesu/kamubuilder | 8afe33c359baf93d005bc55585f544e1d5837110 | [
"MIT"
] | null | null | null | src/go.py | kamuridesu/kamubuilder | 8afe33c359baf93d005bc55585f544e1d5837110 | [
"MIT"
] | 1 | 2021-09-23T12:45:58.000Z | 2021-09-23T14:31:02.000Z | src/go.py | kamuridesu/kamubuilder | 8afe33c359baf93d005bc55585f544e1d5837110 | [
"MIT"
] | null | null | null | from src.parser import popen
def go(path: str, filename: str, *args) -> int:
return popen("go", "run", *args, filename)
| 21 | 47 | 0.650794 | 19 | 126 | 4.315789 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18254 | 126 | 5 | 48 | 25.2 | 0.796117 | 0 | 0 | 0 | 0 | 0 | 0.039683 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a036466a94437472aeb223b5e1cf7bf98f4a38b0 | 8,488 | py | Python | examples/mixing_sweep/plot_projections.py | SamuelBrand1/covid-19-in-households-public | a0740d85f8f9fb1ae67dbd9c5a92f1085e4d9ea1 | [
"Apache-2.0"
] | 4 | 2020-04-17T13:19:43.000Z | 2021-12-02T19:56:27.000Z | examples/mixing_sweep/plot_projections.py | SamuelBrand1/covid-19-in-households-public | a0740d85f8f9fb1ae67dbd9c5a92f1085e4d9ea1 | [
"Apache-2.0"
] | 6 | 2020-06-16T17:06:52.000Z | 2021-02-08T18:32:39.000Z | examples/mixing_sweep/plot_projections.py | SamuelBrand1/covid-19-in-households-public | a0740d85f8f9fb1ae67dbd9c5a92f1085e4d9ea1 | [
"Apache-2.0"
] | 3 | 2020-05-12T12:09:48.000Z | 2021-06-07T09:16:09.000Z | '''This plots the bubble results
'''
from pickle import load
from numpy import arange, sum, where, zeros
from matplotlib.pyplot import close, subplots
# DataObject is not referenced directly, but the class must be importable
# for pickle to reconstruct the stored results objects
from examples.temp_bubbles.common import DataObject
def plot_bars(max_hh_size, household_population, results):
R_probs = []
SAP = zeros((max_hh_size,))
for hh_size in range(1,max_hh_size+1):
R_probs.append(zeros((hh_size+1,)))
for R in range(hh_size+1):
this_hh_range = where(
household_population.states.sum(axis=1)==hh_size)
this_R_range = where(
(household_population.states.sum(axis=1)==hh_size) &
(household_population.states[:,4::5].sum(axis=1)==R))[0]
R_probs[hh_size-1][R] = sum(results.H[this_R_range,-1]) / sum(results.H[this_hh_range,-1])
SAP[hh_size-1] = sum(arange(0,hh_size,1)*R_probs[hh_size-1][1:]/sum(R_probs[hh_size-1][1:]))/hh_size
return R_probs, SAP
labels = ['Baseline',
'External reduction only',
'Internal reduction only',
'Both external and internal reduction']
linestyle_str = ['solid', 'dotted', 'dashed', 'dashdot']
linecol_str = ['b', 'r', 'orange']
fig_pro, ax_pro = subplots()
fig_SAP, ax_SAP = subplots()
with open('mix_sweep_results_AR0.45_intred0.0_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
with open('mix_sweep_results_AR0.45_intred0.0_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
extred_3week = where(results.t>31)[0][0]
Rext_3week = results.I[extred_3week]
with open('mix_sweep_results_AR0.45_intred0.5_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
with open('mix_sweep_results_AR0.45_intred0.25_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
bothred_3week = where(results.t>31)[0][0]
Rboth_3week = results.I[bothred_3week]
print('At 3 week mark, reduction from doing internal as well is', 100 * (Rext_3week - Rboth_3week) / Rext_3week)
ax_pro.legend(loc='upper left')
ax_pro.set_xlim([0,90])
ax_pro.set_xlabel('Time in days')
ax_pro.set_ylabel('Percentage prevalence')
ax_pro.set_title('High baseline within-household transmission')
fig_pro.savefig('projections_AR_045.png', bbox_inches='tight', dpi=300)
ax_SAP.legend(loc='upper right')
ax_SAP.set_xlim([1,6])
ax_SAP.set_xlabel('Household size')
ax_SAP.set_ylabel('Estimated attack ratio')
ax_SAP.set_title('High baseline within-household transmission')
fig_SAP.savefig('SAP_AR_045.png', bbox_inches='tight', dpi=300)
close()
fig_pro, ax_pro = subplots()
fig_SAP, ax_SAP = subplots()
with open('mix_sweep_results_AR0.3_intred0.0_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
with open('mix_sweep_results_AR0.3_intred0.0_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
extred_3week = where(results.t>31)[0][0]
Rext_3week = results.I[extred_3week]
with open('mix_sweep_results_AR0.3_intred0.5_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
with open('mix_sweep_results_AR0.3_intred0.25_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
bothred_3week = where(results.t>31)[0][0]
Rboth_3week = results.I[bothred_3week]
print('At 3 week mark, reduction from doing internal as well is', 100 * (Rext_3week - Rboth_3week) / Rext_3week)
#ax_pro.legend()
ax_pro.set_xlim([0,90])
ax_pro.set_xlabel('Time in days')
ax_pro.set_ylabel('Percentage prevalence')
ax_pro.set_title('Medium baseline within-household transmission')
fig_pro.savefig('projections_AR_03.png', bbox_inches='tight', dpi=300)
#ax_SAP.legend()
ax_SAP.set_xlim([1,6])
ax_SAP.set_xlabel('Household size')
ax_SAP.set_ylabel('Estimated attack ratio')
ax_SAP.set_title('Medium baseline within-household transmission')
fig_SAP.savefig('SAP_AR_03.png', bbox_inches='tight', dpi=300)
close()
fig_pro, ax_pro = subplots()
fig_SAP, ax_SAP = subplots()
with open('mix_sweep_results_AR0.15_intred0.0_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[0],color=linecol_str[0],linestyle=linestyle_str[0])
with open('mix_sweep_results_AR0.15_intred0.0_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[1],color=linecol_str[1],linestyle=linestyle_str[1])
extred_3week = where(results.t>31)[0][0]
Rext_3week = results.I[extred_3week]
with open('mix_sweep_results_AR0.15_intred0.5_extred0.0.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[2],color=linecol_str[1],linestyle=linestyle_str[2])
with open('mix_sweep_results_AR0.15_intred0.25_extred0.25.pkl', 'rb') as f:
AR_now, household_population, results = load(f)
ax_pro.plot(results.t-10,100*results.I,label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
R_probs, SAP = plot_bars(6, household_population, results)
ax_SAP.plot(range(1,7), SAP, label=labels[3],color=linecol_str[2],linestyle=linestyle_str[3])
bothred_3week = where(results.t>31)[0][0]
Rboth_3week = results.I[bothred_3week]
print('At 3 week mark, reduction from doing internal as well is', 100 * (Rext_3week - Rboth_3week) / Rext_3week)
#ax_pro.legend()
ax_pro.set_xlim([0,90])
ax_pro.set_xlabel('Time in days')
ax_pro.set_ylabel('Percentage prevalence')
ax_pro.set_title('Low baseline within-household transmission')
fig_pro.savefig('projections_AR_015.png', bbox_inches='tight', dpi=300)
#ax_SAP.legend()
ax_SAP.set_xlim([1,6])
ax_SAP.set_xlabel('Household size')
ax_SAP.set_ylabel('Estimated attack ratio')
ax_SAP.set_title('Low baseline within-household transmission')
fig_SAP.savefig('SAP_AR_015.png', bbox_inches='tight', dpi=300)
close()
| 45.148936 | 108 | 0.758483 | 1,468 | 8,488 | 4.144414 | 0.101499 | 0.025477 | 0.106838 | 0.031558 | 0.888725 | 0.8833 | 0.878698 | 0.877876 | 0.865385 | 0.803419 | 0 | 0.050394 | 0.088242 | 8,488 | 187 | 109 | 45.390374 | 0.735754 | 0.010603 | 0 | 0.638889 | 0 | 0 | 0.180734 | 0.077015 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006944 | false | 0 | 0.041667 | 0 | 0.055556 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a0626ff84e95ccdee54c00472ae35cf513d1ee8a | 524 | py | Python | find-kth-bit-in-nth-binary-string/test_find_kth_bit_in_nth_binary_string.py | joaojunior/hackerrank | a5ee0449e791535930b8659dfb7dddcf9e1237de | [
"MIT"
] | null | null | null | find-kth-bit-in-nth-binary-string/test_find_kth_bit_in_nth_binary_string.py | joaojunior/hackerrank | a5ee0449e791535930b8659dfb7dddcf9e1237de | [
"MIT"
] | null | null | null | find-kth-bit-in-nth-binary-string/test_find_kth_bit_in_nth_binary_string.py | joaojunior/hackerrank | a5ee0449e791535930b8659dfb7dddcf9e1237de | [
"MIT"
] | 1 | 2019-06-19T00:51:02.000Z | 2019-06-19T00:51:02.000Z | from find_kth_bit_in_nth_binary_string import Solution
def test_example_1():
n = 3
k = 1
expected = '0'
assert expected == Solution().find_kth_bit(n, k)
def test_example_2():
n = 4
k = 11
expected = '1'
assert expected == Solution().find_kth_bit(n, k)
def test_example_3():
n = 1
k = 1
expected = '0'
assert expected == Solution().find_kth_bit(n, k)
def test_example_4():
n = 2
k = 3
expected = '1'
assert expected == Solution().find_kth_bit(n, k)
| 15.411765 | 54 | 0.603053 | 82 | 524 | 3.585366 | 0.280488 | 0.119048 | 0.170068 | 0.353742 | 0.741497 | 0.741497 | 0.741497 | 0.741497 | 0.741497 | 0.741497 | 0 | 0.044737 | 0.274809 | 524 | 33 | 55 | 15.878788 | 0.728947 | 0 | 0 | 0.47619 | 0 | 0 | 0.007634 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.190476 | false | 0 | 0.047619 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a06cec02fee8b800a3ef62c5ec8601963b971718 | 27 | py | Python | users/models/__init__.py | mrearsbig/store | f311c48f8e79f6d6fb7bf2c8c9a0b65d1b271ff0 | [
"MIT"
] | 1 | 2021-11-26T21:39:52.000Z | 2021-11-26T21:39:52.000Z | users/models/__init__.py | mrearsbig/users | 88a8365c9b7cb397e6c8f9c2647c321742de9d4c | [
"MIT"
] | null | null | null | users/models/__init__.py | mrearsbig/users | 88a8365c9b7cb397e6c8f9c2647c321742de9d4c | [
"MIT"
] | null | null | null | from .usermodel import User | 27 | 27 | 0.851852 | 4 | 27 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
260f8a5da53c3f4839fae5d8b9bb4278c438a266 | 184 | py | Python | src/hatch/env/collectors/plugin/hooks.py | daobook/hatch | 1cf39ad1a11ce90bc77fb7fdc4b9202433509179 | [
"MIT"
] | null | null | null | src/hatch/env/collectors/plugin/hooks.py | daobook/hatch | 1cf39ad1a11ce90bc77fb7fdc4b9202433509179 | [
"MIT"
] | null | null | null | src/hatch/env/collectors/plugin/hooks.py | daobook/hatch | 1cf39ad1a11ce90bc77fb7fdc4b9202433509179 | [
"MIT"
] | null | null | null | from hatchling.plugin import hookimpl
from ..default import DefaultEnvironmentCollector
@hookimpl
def hatch_register_environment_collector():
return DefaultEnvironmentCollector
| 20.444444 | 49 | 0.853261 | 17 | 184 | 9.058824 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 184 | 8 | 50 | 23 | 0.939024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
26422315f6dc02b1972c0874fbf87e166e5bd256 | 239 | py | Python | hmm/hidden_markov_model.py | dkohlsdorf/hidden_markov_models | 1060d4fb6b6fcf85c4140be1caec1531d51cacf8 | [
"MIT"
] | null | null | null | hmm/hidden_markov_model.py | dkohlsdorf/hidden_markov_models | 1060d4fb6b6fcf85c4140be1caec1531d51cacf8 | [
"MIT"
] | null | null | null | hmm/hidden_markov_model.py | dkohlsdorf/hidden_markov_models | 1060d4fb6b6fcf85c4140be1caec1531d51cacf8 | [
"MIT"
] | null | null | null | class HiddenMarkovModel:
def __init__(self, transitions, observations):
self.transitions = transitions
self.observations = observations
@property
def n_states(self):
return self.transitions.n_states
| 23.9 | 50 | 0.694561 | 23 | 239 | 6.956522 | 0.478261 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238494 | 239 | 9 | 51 | 26.555556 | 0.879121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
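The `HiddenMarkovModel` in the record above is a thin container that proxies `n_states` from its transitions object. A self-contained usage sketch (the class is restated here so the snippet runs on its own; the `Transitions` namedtuple is a hypothetical stand-in for whatever transition object the project actually uses):

```python
from collections import namedtuple


class HiddenMarkovModel:
    def __init__(self, transitions, observations):
        self.transitions = transitions
        self.observations = observations

    @property
    def n_states(self):
        # Delegates to the transitions object; any object exposing n_states works
        return self.transitions.n_states


# Hypothetical stand-in for the real transitions type
Transitions = namedtuple("Transitions", ["n_states", "matrix"])

hmm = HiddenMarkovModel(Transitions(n_states=3, matrix=None), observations=None)
```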
266ea2a7708c7d0015e2afda25d0211fd71c2986 | 75 | py | Python | examples/taylorgreenvortices/src/python/rodney/__init__.py | barbagroup/petibm-examples | 794de3613967c14750c750aed386602c988cff05 | [
"BSD-3-Clause"
] | 2 | 2020-08-08T13:37:32.000Z | 2021-12-01T03:22:32.000Z | examples/taylorgreenvortices/src/python/rodney/__init__.py | barbagroup/petibm-examples | 794de3613967c14750c750aed386602c988cff05 | [
"BSD-3-Clause"
] | null | null | null | examples/taylorgreenvortices/src/python/rodney/__init__.py | barbagroup/petibm-examples | 794de3613967c14750c750aed386602c988cff05 | [
"BSD-3-Clause"
] | 2 | 2019-12-22T08:49:01.000Z | 2021-12-01T03:22:44.000Z | from .misc import *
from .plot import *
from .taylorgreenvortices import *
| 18.75 | 34 | 0.76 | 9 | 75 | 6.333333 | 0.555556 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 75 | 3 | 35 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2670662bd4076f4d1c02df91dfbcebd6c2d3b877 | 1,575 | py | Python | app/entity/journal_entry_entity.py | JunFuruya/Ratsnake | ff7fe7f6b56663978d2ee2d94df5d88786a0eaa7 | [
"MIT"
] | null | null | null | app/entity/journal_entry_entity.py | JunFuruya/Ratsnake | ff7fe7f6b56663978d2ee2d94df5d88786a0eaa7 | [
"MIT"
] | 5 | 2018-05-17T04:03:39.000Z | 2021-09-08T01:03:17.000Z | app/entity/journal_entry_entity.py | JunFuruya/Hideout | ff7fe7f6b56663978d2ee2d94df5d88786a0eaa7 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from app.entity.base_web_entity import BaseWebEntity
class JournalEntryEntity(BaseWebEntity):
__journal_entry_id = ''
__user_id = ''
__account_title_id = ''
__journal_entry_transaction_date = ''
    __journal_entry_note = ''
    __error_message = ''
def set_error_message(self, error_message):
self.__error_message = error_message
return self
def get_error_message(self):
return self.__error_message
def set_journal_entry_id(self, journal_entry_id):
self.__journal_entry_id = journal_entry_id
return self
def get_journal_entry_id(self):
return self.__journal_entry_id
def set_user_id(self, user_id):
        self.__user_id = user_id
return self
def get_user_id(self):
return self.__user_id
def set_account_title_id(self, account_title_id):
self.__account_title_id = account_title_id
return self
def get_account_title_id(self):
return self.__account_title_id
def set_journal_entry_transaction_date(self, journal_entry_transaction_date):
self.__journal_entry_transaction_date = journal_entry_transaction_date
return self
def get_journal_entry_transaction_date(self):
return self.__journal_entry_transaction_date
def set_journal_entry_note(self, journal_entry_note):
self.__journal_entry_note = journal_entry_note
return self
def get_journal_entry_note(self):
return self.__journal_entry_note
# TODO to array | 29.166667 | 81 | 0.697778 | 200 | 1,575 | 4.885 | 0.155 | 0.257932 | 0.14739 | 0.193449 | 0.610031 | 0.388946 | 0.250768 | 0.091095 | 0.091095 | 0 | 0 | 0.000841 | 0.245079 | 1,575 | 54 | 82 | 29.166667 | 0.820858 | 0.022222 | 0 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0 | 1 | 0.324324 | false | 0 | 0.027027 | 0.162162 | 0.837838 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
26867363be1ef27ca40cb4ade1716cacde82219d | 7,560 | py | Python | Sklearn_PyTorch/random_forest.py | ValentinFigue/Sklearn_PyTorch | 1b56a43e41de331ecdf73d08418f75bb34c9fa06 | [
"BSD-2-Clause"
] | 25 | 2019-11-13T08:27:17.000Z | 2022-02-22T19:12:05.000Z | Sklearn_PyTorch/random_forest.py | nidhaloff/Sklearn_PyTorch | 1b56a43e41de331ecdf73d08418f75bb34c9fa06 | [
"BSD-2-Clause"
] | 3 | 2020-02-06T09:03:45.000Z | 2021-02-04T15:04:54.000Z | Sklearn_PyTorch/random_forest.py | nidhaloff/Sklearn_PyTorch | 1b56a43e41de331ecdf73d08418f75bb34c9fa06 | [
"BSD-2-Clause"
] | 5 | 2019-12-12T08:43:30.000Z | 2022-01-02T22:14:06.000Z | # -*- coding: utf-8 -*-
import torch
from .binary_tree import TorchDecisionTreeClassifier, TorchDecisionTreeRegressor
from .utils import sample_vectors, sample_dimensions
class TorchRandomForestClassifier(torch.nn.Module):
"""
    Torch random forest object used to solve a classification problem. This object implements the fitting and prediction
    functions, which can be used with torch tensors. The random forest is based on
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeClassifier` which are built during the :func:`fit` and called
recursively during the :func:`predict`.
Args:
nb_trees (:class:`int`): Number of :class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeClassifier` used to fit the
classification problem.
nb_samples (:class:`int`): Number of vector samples used to fit each
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeClassifier`.
max_depth (:class:`int`): The maximum depth which corresponds to the maximum successive number of
:class:`DecisionNode`.
        bootstrap (:class:`bool`): If set to true, a sample of the dimensions of the input vectors is taken during the
fitting and the prediction.
"""
    def __init__(self, nb_trees, nb_samples, max_depth=-1, bootstrap=True):
        super(TorchRandomForestClassifier, self).__init__()
        self.trees = []
self.trees_features = []
self.nb_trees = nb_trees
self.nb_samples = nb_samples
self.max_depth = max_depth
self.bootstrap = bootstrap
def fit(self, vectors, labels):
"""
Function which must be used after the initialisation to fit the random forest and build the successive
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeClassifier` to solve a specific classification problem.
Args:
vectors(:class:`torch.FloatTensor`): Vectors tensor used to fit the random forest. It represents the data
and must correspond to the following shape (num_vectors, num_dimensions).
        labels (:class:`torch.LongTensor`): Labels tensor used to fit the random forest. It represents the labels
associated to each vectors and must correspond to the following shape (num_vectors).
"""
for _ in range(self.nb_trees):
tree = TorchDecisionTreeClassifier(self.max_depth)
list_features = sample_dimensions(vectors)
self.trees_features.append(list_features)
if self.bootstrap:
sampled_vectors, sample_labels = sample_vectors(vectors, labels, self.nb_samples)
sampled_featured_vectors = torch.index_select(sampled_vectors, 1, list_features)
tree.fit(sampled_featured_vectors, sample_labels)
else:
sampled_featured_vectors = torch.index_select(vectors, 1, list_features)
tree.fit(sampled_featured_vectors, labels)
self.trees.append(tree)
def predict(self, vector):
"""
    Function which must be used after the fitting of the random forest. It recursively calls the different
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeClassifier` to classify the vector.
Args:
vector(:class:`torch.FloatTensor`): Vectors tensor which must be classified. It represents the data
and must correspond to the following shape (num_dimensions).
Returns:
:class:`torch.LongTensor`: Tensor which corresponds to the label predicted by the random forest.
"""
predictions = []
for tree, index_features in zip(self.trees, self.trees_features):
sampled_vector = torch.index_select(vector, 0, index_features)
predictions.append(tree.predict(sampled_vector))
return max(set(predictions), key=predictions.count)
class TorchRandomForestRegressor(torch.nn.Module):
"""
    Torch random forest object used to solve a regression problem. This object implements the fitting and prediction
    functions, which can be used with torch tensors. The random forest is based on
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeRegressor` which are built during the :func:`fit` and called
recursively during the :func:`predict`.
Args:
nb_trees (:class:`int`): Number of :class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeRegressor` used to fit the
classification problem.
nb_samples (:class:`int`): Number of vector samples used to fit each
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeRegressor`.
max_depth (:class:`int`): The maximum depth which corresponds to the maximum successive number of
:class:`Sklearn_PyTorch.decision_node.DecisionNode`.
        bootstrap (:class:`bool`): If set to true, a sample of the dimensions of the input vectors is taken during the
fitting and the prediction.
"""
    def __init__(self, nb_trees, nb_samples, max_depth=-1, bootstrap=True):
        super(TorchRandomForestRegressor, self).__init__()
        self.trees = []
self.trees_features = []
self.nb_trees = nb_trees
self.nb_samples = nb_samples
self.max_depth = max_depth
self.bootstrap = bootstrap
def fit(self, vectors, values):
"""
Function which must be used after the initialisation to fit the random forest and build the successive
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeRegressor` to solve a specific classification problem.
Args:
vectors(:class:`torch.FloatTensor`): Vectors tensor used to fit the decision tree. It represents the data
and must correspond to the following shape (num_vectors, num_dimensions_vectors).
values(:class:`torch.FloatTensor`): Values tensor used to fit the decision tree. It represents the values
associated to each vectors and must correspond to the following shape (num_vectors,
num_dimensions_values).
"""
for _ in range(self.nb_trees):
tree = TorchDecisionTreeRegressor(self.max_depth)
list_features = sample_dimensions(vectors)
self.trees_features.append(list_features)
if self.bootstrap:
sampled_vectors, sample_labels = sample_vectors(vectors, values, self.nb_samples)
sampled_featured_vectors = torch.index_select(sampled_vectors, 1, list_features)
tree.fit(sampled_featured_vectors, sample_labels)
else:
sampled_featured_vectors = torch.index_select(vectors, 1, list_features)
tree.fit(sampled_featured_vectors, values)
self.trees.append(tree)
def predict(self, vector):
"""
    Function which must be used after the fitting of the random forest. It recursively calls the different
:class:`Sklearn_PyTorch.binary_tree.TorchDecisionTreeRegressor` to regress the vector.
Args:
vector(:class:`torch.FloatTensor`): Vectors tensor which must be regressed. It represents the data
and must correspond to the following shape (num_dimensions).
Returns:
:class:`torch.FloatTensor`: Tensor which corresponds to the value regressed by the random forest.
"""
predictions_sum = 0
for tree, index_features in zip(self.trees, self.trees_features):
sampled_vector = torch.index_select(vector, 0, index_features)
predictions_sum += tree.predict(sampled_vector)
return predictions_sum/len(self.trees)
| 50.066225 | 123 | 0.68664 | 905 | 7,560 | 5.58674 | 0.143646 | 0.023141 | 0.041337 | 0.049446 | 0.886472 | 0.847706 | 0.816258 | 0.784612 | 0.784612 | 0.784612 | 0 | 0.001745 | 0.241931 | 7,560 | 150 | 124 | 50.4 | 0.880475 | 0.549868 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.22807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
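The `sample_vectors` and `sample_dimensions` helpers imported from `.utils` in the record above are not part of this dump. A hedged plain-Python sketch of what such bootstrap helpers might do (the sqrt-of-dimensions feature heuristic is an assumption, not taken from the source, which would operate on torch tensors rather than lists):

```python
import random


def sample_vectors(vectors, labels, nb_samples):
    """Bootstrap sample: draw nb_samples rows with replacement."""
    idx = [random.randrange(len(vectors)) for _ in range(nb_samples)]
    return [vectors[i] for i in idx], [labels[i] for i in idx]


def sample_dimensions(vectors):
    """Pick a random subset of feature indices (sqrt-of-dims heuristic)."""
    n_dims = len(vectors[0])
    k = max(1, int(n_dims ** 0.5))
    return sorted(random.sample(range(n_dims), k))
```

Each tree then trains on its own row sample and column subset, which is what decorrelates the trees in a random forest.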
26871611b17c9d4c88b3ff84554f28af0ec55e1f | 108 | py | Python | ambra_sdk/service/entrypoints/help.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/help.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/help.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | from ambra_sdk.service.entrypoints.generated.help import Help as GHelp
class Help(GHelp):
"""Help."""
| 18 | 70 | 0.731481 | 15 | 108 | 5.2 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 108 | 5 | 71 | 21.6 | 0.83871 | 0.046296 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cd3156be1989b9055c6d018b82be4527c2024a00 | 22 | py | Python | yushu_test/a.py | yushuyang/pytorch-pretrained-BERT | 238f8a693949044c3f530075a57446aff3e5f1ac | [
"Apache-2.0"
] | null | null | null | yushu_test/a.py | yushuyang/pytorch-pretrained-BERT | 238f8a693949044c3f530075a57446aff3e5f1ac | [
"Apache-2.0"
] | null | null | null | yushu_test/a.py | yushuyang/pytorch-pretrained-BERT | 238f8a693949044c3f530075a57446aff3e5f1ac | [
"Apache-2.0"
] | null | null | null | import sys
print("a")
| 7.333333 | 10 | 0.681818 | 4 | 22 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 2 | 11 | 11 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
cd3594e750f981a2d0ffa133c26dc3166940450b | 31,842 | py | Python | wiki_to_md/wiki2gfm_test.py | fanghuaqi/support-tools | 8ac29fcfacaf09abab2d76c372146fa29a9c22ec | [
"Apache-2.0"
] | 1 | 2019-09-03T19:14:53.000Z | 2019-09-03T19:14:53.000Z | wiki_to_md/wiki2gfm_test.py | fanghuaqi/support-tools | 8ac29fcfacaf09abab2d76c372146fa29a9c22ec | [
"Apache-2.0"
] | null | null | null | wiki_to_md/wiki2gfm_test.py | fanghuaqi/support-tools | 8ac29fcfacaf09abab2d76c372146fa29a9c22ec | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for wiki2gfm."""
import codecs
import StringIO
import unittest
from impl import converter
from impl import formatting_handler
from impl import pragma_handler
class BaseTest(unittest.TestCase):
"""Base test for wiki2gfm tests."""
def setUp(self):
"""Create a base test."""
self.warnings = []
self.output = StringIO.StringIO()
self.pragma_handler = pragma_handler.PragmaHandler(self._TrackWarning)
self.formatting_handler = formatting_handler.FormattingHandler(
self._TrackWarning,
project="test",
issue_map={123: "https://github.com/abcxyz/test/issues/789"},
symmetric_headers=False)
self.converter = converter.Converter(
self.pragma_handler,
self.formatting_handler,
self._TrackWarning,
project="test",
wikipages=["TestPage"])
def assertOutput(self, expected_output):
"""Assert that specific output was written.
Args:
expected_output: The expected value of the output.
"""
self.assertEquals(expected_output, self.output.getvalue())
def assertNoOutput(self, expected_output):
self.assertNotEqual(expected_output, self.output.getvalue())
def assertWarning(self, warning_contents, occurrences=1):
"""Assert that a warning was issued containing the given contents.
This searches all tracked warnings for the contents.
Args:
warning_contents: Text that the warning was expected to contain.
occurrences: The number of occurrences of the warning contents.
"""
occurrences_found = 0
for warning in self.warnings:
if warning_contents in warning[1]:
occurrences_found += 1
if occurrences_found != occurrences:
self.fail("Failed to find '{0}' in {1} warnings (found it in {2})."
.format(warning_contents, occurrences, occurrences_found))
def assertNoWarnings(self):
"""Assert that no warnings were issued."""
self.assertListEqual([], self.warnings)
def _TrackWarning(self, input_line, message):
"""Track a warning by storing it in memory.
Args:
input_line: Line the warning was issued on.
message: The warning message.
"""
self.warnings.append((input_line, message))
class TestPragmaHandler(BaseTest):
"""Tests the pragma handler."""
def testSummaryPragmaGivesWarning(self):
self.pragma_handler.HandlePragma(1, self.output, "summary", "abc")
self.assertWarning("summary")
def testSidebarPragmaGivesWarning(self):
self.pragma_handler.HandlePragma(1, self.output, "sidebar", "abc")
self.assertWarning("sidebar")
def testUnknownPragmaGivesWarning(self):
self.pragma_handler.HandlePragma(1, self.output, "fail!", "abc")
self.assertWarning("fail!")
class TestFormattingHandler(BaseTest):
"""Tests the formatting handler."""
def testHandleHeaderOpen(self):
self.formatting_handler.HandleHeaderOpen(1, self.output, 3)
self.assertOutput("### ")
self.assertNoWarnings()
def testHandleHeaderOpenInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleHeaderOpen(1, self.output, 3)
self.assertOutput("<h3>")
self.assertNoWarnings()
def testHandleHeaderClose(self):
self.formatting_handler.HandleHeaderClose(1, self.output, 3)
self.assertOutput("") # No header closing markup by default.
self.assertNoWarnings()
def testHandleHeaderCloseInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleHeaderClose(1, self.output, 3)
self.assertOutput("</h3>")
self.assertNoWarnings()
def testHandleHeaderCloseSymmetric(self):
self.formatting_handler._symmetric_headers = True
self.formatting_handler.HandleHeaderClose(1, self.output, 3)
self.assertOutput(" ###")
self.assertNoWarnings()
def testHandleHeaderCloseSymmetricInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler._symmetric_headers = True
self.formatting_handler.HandleHeaderClose(1, self.output, 3)
self.assertOutput("</h3>")
self.assertNoWarnings()
def testHandleHRule(self):
self.formatting_handler.HandleHRule(1, self.output)
self.assertOutput("\n---\n")
self.assertNoWarnings()
def testHandleHRuleInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleHRule(1, self.output)
self.assertOutput("<hr />")
self.assertNoWarnings()
def testHandleCodeBlockOpen(self):
self.formatting_handler.HandleCodeBlockOpen(1, self.output, None)
self.assertOutput("```\n")
self.assertNoWarnings()
def testHandleCodeBlockOpenInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleCodeBlockOpen(1, self.output, None)
self.assertOutput("<pre><code>")
self.assertWarning("Code markup was used")
def testHandleCodeBlockOpenWithLanguage(self):
self.formatting_handler.HandleCodeBlockOpen(1, self.output, "idris")
self.assertOutput("```idris\n")
self.assertNoWarnings()
def testHandleCodeBlockOpenWithLanguageInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleCodeBlockOpen(1, self.output, "idris")
self.assertOutput("<pre><code>")
self.assertWarning("Code markup was used")
def testHandleCodeBlockClose(self):
self.formatting_handler.HandleCodeBlockClose(1, self.output)
self.assertOutput("```")
self.assertNoWarnings()
def testHandleCodeBlockCloseInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleCodeBlockClose(1, self.output)
self.assertOutput("</code></pre>")
self.assertNoWarnings()
def testHandleNumericList(self):
self.formatting_handler.HandleNumericListOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleNumericListOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleNumericListOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleNumericListOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput(" 1. a\n 1. b\n 1. c\n 1. d\n")
self.assertNoWarnings()
def testHandleNumericListInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleNumericListOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleNumericListOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleNumericListOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleNumericListOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput("<ol><li>a\n</li><li>b\n<ol><li>c\n</li></ol></li>"
"<li>d\n</li></ol>")
self.assertWarning("Numeric list markup was used", occurrences=2)
def testHandleBulletList(self):
self.formatting_handler.HandleBulletListOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleBulletListOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleBulletListOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleBulletListOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput(" * a\n * b\n * c\n * d\n")
self.assertNoWarnings()
def testHandleBulletListInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleBulletListOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleBulletListOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleBulletListOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleBulletListOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput("<ul><li>a\n</li><li>b\n<ul><li>c\n</li></ul></li>"
"<li>d\n</li></ul>")
self.assertWarning("Bulleted list markup was used", occurrences=2)
def testHandleBlockQuote(self):
self.formatting_handler.HandleBlockQuoteOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleBlockQuoteOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleBlockQuoteOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleBlockQuoteOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput("> a\n> b\n> > c\n\n> d\n")
self.assertNoWarnings()
def testHandleBlockQuoteInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleBlockQuoteOpen(1, self.output, 1)
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleBlockQuoteOpen(2, self.output, 1)
self.formatting_handler.HandleText(2, self.output, "b\n")
self.formatting_handler.HandleBlockQuoteOpen(3, self.output, 2)
self.formatting_handler.HandleText(3, self.output, "c\n")
self.formatting_handler.HandleListClose(4, self.output) # Closing 2.
self.formatting_handler.HandleBlockQuoteOpen(4, self.output, 1)
self.formatting_handler.HandleText(4, self.output, "d\n")
self.formatting_handler.HandleListClose(5, self.output) # Closing 1.
self.assertOutput("<blockquote>a\nb<br>\n<blockquote>c\n</blockquote>"
"d\n</blockquote>")
self.assertWarning("Blockquote markup was used", occurrences=2)
def testHandleParagraphBreak(self):
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleParagraphBreak(2, self.output)
self.formatting_handler.HandleText(3, self.output, "b\n")
self.assertOutput("a\n\nb\n")
self.assertNoWarnings()
def testHandleParagraphBreakInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleText(1, self.output, "a\n")
self.formatting_handler.HandleParagraphBreak(2, self.output)
self.formatting_handler.HandleText(3, self.output, "b\n")
self.assertOutput("a\n<br>\nb<br>\n")
self.assertNoWarnings()
def testHandleBold(self):
self.formatting_handler.HandleBoldOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleBoldClose(3, self.output)
self.assertOutput("**xyz**")
self.assertNoWarnings()
def testHandleBoldInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleBoldOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleBoldClose(3, self.output)
self.assertOutput("<b>xyz</b>")
self.assertWarning("Bold markup was used")
def testHandleItalic(self):
self.formatting_handler.HandleItalicOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleItalicClose(3, self.output)
self.assertOutput("_xyz_")
self.assertNoWarnings()
def testHandleItalicInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleItalicOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleItalicClose(3, self.output)
self.assertOutput("<i>xyz</i>")
self.assertWarning("Italic markup was used")
def testHandleStrikethrough(self):
self.formatting_handler.HandleStrikethroughOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleStrikethroughClose(3, self.output)
self.assertOutput("~~xyz~~")
self.assertNoWarnings()
def testHandleStrikethroughInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleStrikethroughOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleStrikethroughClose(3, self.output)
self.assertOutput("<del>xyz</del>")
self.assertWarning("Strikethrough markup was used")
def testHandleSuperscript(self):
self.formatting_handler.HandleSuperscript(1, self.output, "xyz")
self.assertOutput("<sup>xyz</sup>")
self.assertNoWarnings()
def testHandleSuperscriptInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleSuperscript(1, self.output, "xyz")
self.assertOutput("<sup>xyz</sup>")
self.assertNoWarnings()
def testHandleSubscript(self):
self.formatting_handler.HandleSubscript(1, self.output, "xyz")
self.assertOutput("<sub>xyz</sub>")
self.assertNoWarnings()
def testHandleSubscriptInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleSubscript(1, self.output, "xyz")
self.assertOutput("<sub>xyz</sub>")
self.assertNoWarnings()
def testHandleInlineCode(self):
self.formatting_handler.HandleInlineCode(1, self.output, "xyz")
self.assertOutput("` xyz `")
self.assertNoWarnings()
def testHandleInlineCodeInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleInlineCode(1, self.output, "xyz")
self.assertOutput("<code>xyz</code>")
self.assertNoWarnings()
# Table handling is tested in the Converter tests,
# as the interactions are multiple and handled there.
def testHandleLink(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", None)
self.assertOutput("http://example.com")
self.assertNoWarnings()
def testHandleLinkInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", None)
self.assertOutput("<a href='http://example.com'>http://example.com</a>")
self.assertWarning("Link markup was used")
def testHandleLinkWithDescription(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", "Description")
self.assertOutput("[Description](http://example.com)")
self.assertNoWarnings()
def testHandleLinkWithDescriptionInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", "Description")
self.assertOutput("<a href='http://example.com'>Description</a>")
self.assertWarning("Link markup was used")
def testHandleLinkWithImageDescription(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", "http://example.com/a.png")
self.assertOutput("[](http://example.com)")
self.assertNoWarnings()
def testHandleLinkWithImageDescriptionInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com", "http://example.com/a.png")
self.assertOutput("<a href='http://example.com'>"
"<img src='http://example.com/a.png' /></a>")
self.assertWarning("Link markup was used")
def testHandleImageLink(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", None)
self.assertOutput("")
self.assertNoWarnings()
def testHandleImageLinkInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", None)
self.assertOutput("<img src='http://example.com/a.png' />")
self.assertWarning("Link markup was used")
def testHandleImageLinkWithDescription(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", "Description")
self.assertOutput("[Description](http://example.com/a.png)")
self.assertNoWarnings()
def testHandleImageLinkWithDescriptionInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", "Description")
self.assertOutput("<a href='http://example.com/a.png'>Description</a>")
self.assertWarning("Link markup was used")
def testHandleImageLinkWithImageDescription(self):
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", "http://example.com/b.png")
self.assertOutput("]"
"(http://example.com/a.png)")
self.assertNoWarnings()
def testHandleImageLinkWithImageDescriptionInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleLink(
1, self.output, "http://example.com/a.png", "http://example.com/b.png")
self.assertOutput("<a href='http://example.com/a.png'>"
"<img src='http://example.com/b.png' /></a>")
self.assertWarning("Link markup was used")
def testHandleWiki(self):
self.formatting_handler.HandleWiki(1, self.output, "TestPage", "Test Page")
self.assertOutput("[Test Page](TestPage.md)")
self.assertNoWarnings()
def testHandleWikiInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleWiki(1, self.output, "TestPage", "Test Page")
self.assertOutput("<a href='TestPage.md'>Test Page</a>")
self.assertWarning("Link markup was used")
def testHandleIssue(self):
self.formatting_handler.HandleIssue(1, self.output, "issue ", 123)
self.assertOutput("[issue 789](https://github.com/abcxyz/test/issues/789)")
self.assertWarning("Issue 123 was auto-linked")
self.assertWarning("In the output, it has been linked to the "
"migrated issue on GitHub: 789.")
def testHandleIssueInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleIssue(1, self.output, "issue ", 123)
self.assertOutput("<a href='https://github.com/abcxyz/test/issues/789'>"
"issue 789</a>")
self.assertWarning("Link markup was used")
self.assertWarning("Issue 123 was auto-linked")
self.assertWarning("In the output, it has been linked to the "
"migrated issue on GitHub: 789.")
def testHandleIssueNotInMap(self):
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("[issue 456](https://code.google.com/p/"
"test/issues/detail?id=456)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, it was not found in the issue migration map")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code issue page")
def testHandleIssueNotInMapInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("<a href='https://code.google.com/p/"
"test/issues/detail?id=456'>issue 456</a>")
self.assertWarning("Link markup was used")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, it was not found in the issue migration map")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code issue page")
def testHandleIssueNoMap(self):
self.formatting_handler._issue_map = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("[issue 456](https://code.google.com/p/"
"test/issues/detail?id=456)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, no issue migration map was specified")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code issue page")
def testHandleIssueNoMapInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler._issue_map = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("<a href='https://code.google.com/p/"
"test/issues/detail?id=456'>issue 456</a>")
self.assertWarning("Link markup was used")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, no issue migration map was specified")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code issue page")
def testHandleIssueNotInMapNoProject(self):
self.formatting_handler._project = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("issue 456 (on Google Code)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, it was not found in the issue migration map")
self.assertWarning("Additionally, because no project name was specified "
"the issue could not be linked to the original Google "
"Code issue page.")
self.assertWarning("The auto-link has been removed")
def testHandleIssueNotInMapNoProjectInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler._project = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("issue 456 (on Google Code)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, it was not found in the issue migration map")
self.assertWarning("Additionally, because no project name was specified "
"the issue could not be linked to the original Google "
"Code issue page.")
self.assertWarning("The auto-link has been removed")
def testHandleIssueNoMapNoProject(self):
self.formatting_handler._issue_map = None
self.formatting_handler._project = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("issue 456 (on Google Code)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, no issue migration map was specified")
self.assertWarning("Additionally, because no project name was specified "
"the issue could not be linked to the original Google "
"Code issue page.")
self.assertWarning("The auto-link has been removed")
def testHandleIssueNoMapNoProjectInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler._issue_map = None
self.formatting_handler._project = None
self.formatting_handler.HandleIssue(1, self.output, "issue ", 456)
self.assertOutput("issue 456 (on Google Code)")
self.assertWarning("Issue 456 was auto-linked")
self.assertWarning("However, no issue migration map was specified")
self.assertWarning("Additionally, because no project name was specified "
"the issue could not be linked to the original Google "
"Code issue page.")
self.assertWarning("The auto-link has been removed")
def testHandleRevision(self):
self.formatting_handler.HandleRevision(1, self.output, "revision ", 7)
self.assertOutput("[revision 7](https://code.google.com/p/"
"test/source/detail?r=7)")
self.assertWarning("Revision 7 was auto-linked")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code source page")
def testHandleRevisionInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleRevision(1, self.output, "revision ", 7)
self.assertOutput("<a href='https://code.google.com/p/"
"test/source/detail?r=7'>revision 7</a>")
self.assertWarning("Link markup was used")
self.assertWarning("Revision 7 was auto-linked")
self.assertWarning("As a placeholder, the text has been modified to "
"link to the original Google Code source page")
def testHandleRevisionNoProject(self):
self.formatting_handler._project = None
self.formatting_handler.HandleRevision(1, self.output, "revision ", 7)
self.assertOutput("revision 7 (on Google Code)")
self.assertWarning("Revision 7 was auto-linked")
self.assertWarning("Additionally, because no project name was specified "
"the revision could not be linked to the original "
"Google Code source page.")
self.assertWarning("The auto-link has been removed")
def testHandleRevisionNoProjectInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler._project = None
self.formatting_handler.HandleRevision(1, self.output, "revision ", 7)
self.assertOutput("revision 7 (on Google Code)")
self.assertWarning("Revision 7 was auto-linked")
self.assertWarning("Additionally, because no project name was specified "
"the revision could not be linked to the original "
"Google Code source page.")
self.assertWarning("The auto-link has been removed")
def testHandleInHtml(self):
self.formatting_handler.HandleHtmlOpen(
1, self.output, "tag", {"a": "1", "b": "2"}, False)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleHtmlClose(3, self.output, "tag")
self.assertOutput("<tag a='1' b='2'>xyz</tag>")
self.assertNoWarnings()
def testHandleHtmlInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleHtmlOpen(
1, self.output, "tag", {"a": "1", "b": "2"}, False)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleHtmlClose(3, self.output, "tag")
self.assertOutput("<tag a='1' b='2'>xyz</tag>")
self.assertNoWarnings()
def testHandleInHtmlSelfClose(self):
self.formatting_handler.HandleHtmlOpen(
1, self.output, "tag", {"a": "1", "b": "2"}, True)
self.assertOutput("<tag a='1' b='2' />")
self.assertNoWarnings()
def testHandleHtmlSelfCloseInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleHtmlOpen(
1, self.output, "tag", {"a": "1", "b": "2"}, True)
self.assertOutput("<tag a='1' b='2' />")
self.assertNoWarnings()
def testHandleGPlus(self):
self.formatting_handler.HandleGPlusOpen(1, self.output, None)
self.formatting_handler.HandleGPlusClose(1, self.output)
self.assertNoOutput("(TODO: Link to Google+ page.)")
self.assertWarning("A Google+ +1 button was embedded on this page")
def testHandleGPlusInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleGPlusOpen(1, self.output, None)
self.formatting_handler.HandleGPlusClose(1, self.output)
self.assertNoOutput("(TODO: Link to Google+ page.)")
self.assertWarning("A Google+ +1 button was embedded on this page")
def testHandleComment(self):
self.formatting_handler.HandleCommentOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleCommentClose(3, self.output)
self.assertOutput("<a href='Hidden comment: xyz'></a>")
self.assertWarning("A comment was used in the wiki file")
def testHandleCommentInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleCommentOpen(1, self.output)
self.formatting_handler.HandleText(2, self.output, "xyz")
self.formatting_handler.HandleCommentClose(3, self.output)
self.assertOutput("<a href='Hidden comment: xyz'></a>")
self.assertWarning("A comment was used in the wiki file")
def testHandleVideo(self):
self.formatting_handler.HandleVideoOpen(
1, self.output, "FiARsQSlzDc", 320, 240)
self.formatting_handler.HandleVideoClose(1, self.output)
self.assertOutput("<a href='http://www.youtube.com/watch?"
"feature=player_embedded&v=FiARsQSlzDc' target='_blank'>"
"<img src='http://img.youtube.com/vi/FiARsQSlzDc/0.jpg' "
"width='320' height=240 /></a>")
self.assertWarning("GFM does not support embedding the YouTube player")
def testHandleVideoInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleVideoOpen(
1, self.output, "FiARsQSlzDc", 320, 240)
self.formatting_handler.HandleVideoClose(1, self.output)
self.assertOutput("<a href='http://www.youtube.com/watch?"
"feature=player_embedded&v=FiARsQSlzDc' target='_blank'>"
"<img src='http://img.youtube.com/vi/FiARsQSlzDc/0.jpg' "
"width='320' height=240 /></a>")
self.assertWarning("GFM does not support embedding the YouTube player")
def testHandleText(self):
self.formatting_handler.HandleText(1, self.output, "xyz")
self.assertOutput("xyz")
self.assertNoWarnings()
def testHandleTextInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleText(1, self.output, "xyz")
self.assertOutput("xyz")
self.assertNoWarnings()
def testHandleEscapedText(self):
self.formatting_handler.HandleEscapedText(1, self.output, "**_xyz_** <a>")
self.assertOutput("\\*\\*\\_xyz\\_\\*\\* <a>")
self.assertNoWarnings()
def testHandleEscapedTextInHtml(self):
self.formatting_handler._in_html = 1
self.formatting_handler.HandleEscapedText(1, self.output, "**_xyz_** <a>")
self.assertOutput("**_xyz_** <a>")
self.assertNoWarnings()
class TestConverter(BaseTest):
"""Tests the converter."""
def testExamplePage(self):
with codecs.open("example.wiki", "rU", "utf-8") as example_input:
with codecs.open("example.md", "rU", "utf-8") as example_output:
self.converter.Convert(example_input, self.output)
self.assertOutput(example_output.read())
if __name__ == "__main__":
unittest.main()

# File: tests/nodes/styles/test_styles.py
# Repo: Hengle/Houdini-Toolbox (MIT)
"""Tests for ht.nodes.styles.styles module."""
# =============================================================================
# IMPORTS
# =============================================================================
# Third Party
import pytest
# Houdini Toolbox
from ht.nodes.styles import styles
# Houdini
import hou
# =============================================================================
# FIXTURES
# =============================================================================
@pytest.fixture
def init_constant_rule(mocker):
"""Fixture to initialize a style constant."""
mocker.patch.object(styles.ConstantRule, "__init__", lambda x, y, z: None)
def _create():
return styles.ConstantRule(None, None)
return _create
@pytest.fixture
def init_style_constant(mocker):
"""Fixture to initialize a style constant."""
mocker.patch.object(styles.StyleConstant, "__init__", lambda x, y, z, u, v: None)
def _create():
return styles.StyleConstant(None, None, None, None)
return _create
@pytest.fixture
def init_style_rule(mocker):
"""Fixture to initialize a style rule."""
mocker.patch.object(styles.StyleRule, "__init__", lambda x, y, z, u, v: None)
def _create():
return styles.StyleRule(None, None, None, None)
return _create
# =============================================================================
# TESTS
# =============================================================================
class Test_StyleConstant:
"""Test ht.nodes.styles.styles.StyleConstant."""
def test___init__(self, mocker):
"""Test object initialization."""
mock_name = mocker.MagicMock(spec=str)
mock_color = mocker.MagicMock(spec=hou.Color)
mock_color_type = mocker.MagicMock(spec=str)
mock_shape = mocker.MagicMock(spec=str)
mock_file_path = mocker.MagicMock(spec=str)
constant = styles.StyleConstant(
mock_name, mock_color, mock_color_type, mock_shape, mock_file_path
)
assert constant._name == mock_name
assert constant._color == mock_color
assert constant._color_type == mock_color_type
assert constant._shape == mock_shape
assert constant._file_path == mock_file_path
def test___eq__(self, init_style_constant, mocker):
"""Test the equality operator."""
mock_name_prop = mocker.patch.object(
styles.StyleConstant, "name", new_callable=mocker.PropertyMock
)
mock_name_prop.return_value = "name1"
constant = init_style_constant()
mock_constant = mocker.MagicMock(spec=styles.StyleConstant)
mock_constant.name = "name2"
assert constant != mock_constant
mock_name_prop.return_value = "name"
mock_constant.name = "name"
assert constant == mock_constant
result = constant.__eq__(mocker.MagicMock())
assert result == NotImplemented
def test___hash__(self, init_style_constant, mocker):
"""Test the hash operator."""
mocker.patch.object(
styles.StyleConstant, "name", new_callable=mocker.PropertyMock
)
constant = init_style_constant()
result = constant.__hash__()
assert result == hash(constant.name)
assert hash(constant) == hash(constant.name)
# ne
def test___ne__(self, init_style_constant, mocker):
"""Test the __ne__ operator."""
mock_eq = mocker.patch.object(styles.StyleConstant, "__eq__")
constant = init_style_constant()
mock_constant = mocker.MagicMock(spec=styles.StyleConstant)
result = constant.__ne__(mock_constant)
assert result != mock_eq.return_value
def test___ne___different_type(self, init_style_constant, mocker):
"""Test the __ne__ operator when the other item isn't a StyleConstant."""
constant = init_style_constant()
result = constant.__ne__(mocker.MagicMock())
assert result == NotImplemented
# Properties
def test_color(self, init_style_constant, mocker):
"""Test the 'color' property."""
mock_value = mocker.MagicMock(spec=hou.Color)
constant = init_style_constant()
constant._color = mock_value
assert constant.color == mock_value
def test_color_type(self, init_style_constant, mocker):
"""Test the 'color_type' property."""
value1 = mocker.MagicMock(spec=str)
constant = init_style_constant()
constant._color_type = value1
assert constant.color_type == value1
def test_file_path(self, init_style_constant, mocker):
"""Test the 'file_path' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_style_constant()
constant._file_path = mock_value
assert constant.file_path == mock_value
def test_name(self, init_style_constant, mocker):
"""Test the 'name' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_style_constant()
constant._name = mock_value
assert constant.name == mock_value
def test_shape(self, init_style_constant, mocker):
"""Test the 'shape' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_style_constant()
constant._shape = mock_value
assert constant.shape == mock_value
# Methods
# apply_to_node
def test_apply_to_node__both(self, init_style_constant, mocker):
"""Test applying everything to a node."""
mock_color_prop = mocker.patch.object(
styles.StyleConstant, "color", new_callable=mocker.PropertyMock
)
mock_shape_prop = mocker.patch.object(
styles.StyleConstant, "shape", new_callable=mocker.PropertyMock
)
mock_node = mocker.MagicMock(spec=hou.Node)
constant = init_style_constant()
constant.apply_to_node(mock_node)
mock_node.setColor.assert_called_with(mock_color_prop.return_value)
mock_node.setUserData.assert_called_with(
"nodeshape", mock_shape_prop.return_value
)
def test_apply_to_node__none(self, init_style_constant, mocker):
"""Test applying to a node when no values will be set."""
mocker.patch.object(
styles.StyleConstant,
"color",
new_callable=mocker.PropertyMock(return_value=None),
)
mocker.patch.object(
styles.StyleConstant,
"shape",
new_callable=mocker.PropertyMock(return_value=None),
)
mock_node = mocker.MagicMock(spec=hou.Node)
constant = init_style_constant()
constant.apply_to_node(mock_node)
mock_node.setColor.assert_not_called()
mock_node.setUserData.assert_not_called()
class Test_StyleRule:
"""Test ht.nodes.styles.styles.StyleRule."""
def test___init__(self, mocker):
"""Test the constructor."""
mock_name = mocker.MagicMock(spec=str)
mock_color = mocker.MagicMock(spec=hou.Color)
mock_color_type = mocker.MagicMock(spec=str)
mock_shape = mocker.MagicMock(spec=str)
mock_file_path = mocker.MagicMock(spec=str)
rule = styles.StyleRule(
mock_name, mock_color, mock_color_type, mock_shape, mock_file_path
)
assert rule._name == mock_name
assert rule._color == mock_color
assert rule._color_type == mock_color_type
assert rule._shape == mock_shape
assert rule._file_path == mock_file_path
def test___eq__(self, init_style_rule, mocker):
"""Test the equality operator."""
mocker.patch.object(
styles.StyleRule,
"name",
new_callable=mocker.PropertyMock(return_value="name"),
)
rule = init_style_rule()
mock_rule = mocker.MagicMock(spec=styles.StyleRule)
mock_rule.name = "different_name"
assert rule != mock_rule
mock_rule.name = "name"
assert rule == mock_rule
result = rule.__eq__(mocker.MagicMock())
assert result == NotImplemented
def test___hash__(self, init_style_rule, mocker):
"""Test the hash operator."""
mocker.patch.object(
styles.StyleRule,
"name",
new_callable=mocker.PropertyMock(return_value="name"),
)
rule = init_style_rule()
result = rule.__hash__()
assert result == hash(rule.name)
assert hash(rule) == hash(rule.name)
# ne
def test___ne__(self, init_style_rule, mocker):
"""Test the __ne__ operator."""
mock_eq = mocker.patch.object(styles.StyleRule, "__eq__")
rule = init_style_rule()
mock_rule = mocker.MagicMock(spec=styles.StyleRule)
result = rule.__ne__(mock_rule)
assert result != mock_eq.return_value
def test___ne___different_type(self, init_style_rule, mocker):
"""Test the __ne__ operator when the other item isn't a StyleConstant."""
mocker.patch.object(styles.StyleRule, "__eq__")
rule = init_style_rule()
result = rule.__ne__(mocker.MagicMock())
assert result == NotImplemented
def test___str__(self, init_style_rule, mocker):
"""Test converting the object to a string."""
mock_typed = mocker.patch.object(styles.StyleRule, "_get_typed_color_value")
mock_typed.return_value = (0.46700000762939453, 1.0, 0.5)
rule = init_style_rule()
assert str(rule) == "(0.467, 1, 0.5)"
# Properties
def test_color(self, init_style_rule, mocker):
"""Test the 'color' property."""
mock_value = mocker.MagicMock(spec=hou.Color)
rule = init_style_rule()
rule._color = mock_value
assert rule.color == mock_value
def test_color_type(self, init_style_rule, mocker):
"""Test the 'color_type' property."""
mock_value = mocker.MagicMock(spec=str)
rule = init_style_rule()
rule._color_type = mock_value
assert rule.color_type == mock_value
def test_file_path(self, init_style_rule, mocker):
"""Test the 'file_path' property."""
mock_value = mocker.MagicMock(spec=str)
rule = init_style_rule()
rule._file_path = mock_value
assert rule.file_path == mock_value
def test_name(self, init_style_rule, mocker):
"""Test the 'name' property."""
mock_value = mocker.MagicMock(spec=str)
rule = init_style_rule()
rule._name = mock_value
assert rule.name == mock_value
def test_shape(self, init_style_rule, mocker):
"""Test the 'shape' property."""
mock_value = mocker.MagicMock(spec=str)
rule = init_style_rule()
rule._shape = mock_value
assert rule.shape == mock_value
# Methods
# _get_typed_color_value
def test__get_typed_color_value(self, init_style_rule, mocker):
"""Test getting a typed color value."""
mock_color_prop = mocker.patch.object(
styles.StyleRule, "color", new_callable=mocker.PropertyMock
)
mocker.patch.object(
styles.StyleRule,
"color_type",
new_callable=mocker.PropertyMock(return_value="HSV"),
)
mock_color = mocker.MagicMock(spec=hou.Color)
mock_color_prop.return_value = mock_color
rule = init_style_rule()
result = rule._get_typed_color_value()
assert result == mock_color.hsv.return_value
# apply_to_node
def test_apply_to_node__both(self, init_style_rule, mocker):
"""Test applying everything to a node."""
mock_color_prop = mocker.patch.object(
styles.StyleRule, "color", new_callable=mocker.PropertyMock
)
mock_shape_prop = mocker.patch.object(
styles.StyleRule, "shape", new_callable=mocker.PropertyMock
)
mock_node = mocker.MagicMock(spec=hou.Node)
rule = init_style_rule()
rule.apply_to_node(mock_node)
mock_node.setColor.assert_called_with(mock_color_prop.return_value)
mock_node.setUserData.assert_called_with(
"nodeshape", mock_shape_prop.return_value
)
def test_apply_to_node__none(self, init_style_rule, mocker):
"""Test applying to a node when no values will be set."""
mocker.patch.object(
styles.StyleRule,
"color",
new_callable=mocker.PropertyMock(return_value=None),
)
mocker.patch.object(
styles.StyleRule,
"shape",
new_callable=mocker.PropertyMock(return_value=None),
)
mock_node = mocker.MagicMock(spec=hou.Node)
rule = init_style_rule()
rule.apply_to_node(mock_node)
mock_node.setColor.assert_not_called()
mock_node.setUserData.assert_not_called()
class Test_ConstantRule:
"""Test ht.nodes.styles.styles.ConstantRule."""
def test___init__(self, mocker):
"""Test the constructor."""
mock_name = mocker.MagicMock(spec=str)
mock_constant_name = mocker.MagicMock(spec=str)
mock_file_path = mocker.MagicMock(spec=str)
rule = styles.ConstantRule(mock_name, mock_constant_name, mock_file_path)
assert rule._name == mock_name
assert rule._constant_name == mock_constant_name
assert rule._file_path == mock_file_path
def test___eq__(self, init_constant_rule, mocker):
"""Test the equality operator."""
mock_name_prop = mocker.patch.object(
styles.ConstantRule, "name", new_callable=mocker.PropertyMock
)
constant = init_constant_rule()
mock_name_prop.return_value = "name1"
mock_constant = mocker.MagicMock(spec=styles.ConstantRule)
mock_constant.name = "name2"
assert constant != mock_constant
mock_name_prop.return_value = "name"
mock_constant.name = "name"
assert constant == mock_constant
result = constant.__eq__(mocker.MagicMock())
assert result == NotImplemented
def test___hash__(self, init_constant_rule, mocker):
"""Test the hash operator."""
mocker.patch.object(
styles.ConstantRule, "name", new_callable=mocker.PropertyMock
)
mocker.patch.object(
styles.ConstantRule, "constant_name", new_callable=mocker.PropertyMock
)
constant = init_constant_rule()
result = constant.__hash__()
assert result == hash((constant.constant_name, constant.name))
assert hash(constant) == hash((constant.constant_name, constant.name))
# ne
def test___ne__(self, init_constant_rule, mocker):
"""Test the ne operator."""
mock_eq = mocker.patch.object(styles.ConstantRule, "__eq__")
constant = init_constant_rule()
mock_constant = mocker.MagicMock(spec=styles.ConstantRule)
result = constant.__ne__(mock_constant)
assert result != mock_eq.return_value
def test___ne___different_type(self, init_constant_rule, mocker):
"""Test the ne operator when the other item isn't a StyleConstant."""
mocker.patch.object(styles.ConstantRule, "__eq__")
constant = init_constant_rule()
result = constant.__ne__(mocker.MagicMock())
assert result == NotImplemented
# Properties
def test_constant_name(self, init_constant_rule, mocker):
"""Test the 'constant_name' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_constant_rule()
constant._constant_name = mock_value
assert constant.constant_name == mock_value
def test_file_path(self, init_constant_rule, mocker):
"""Test the 'file_path' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_constant_rule()
constant._file_path = mock_value
assert constant.file_path == mock_value
def test_name(self, init_constant_rule, mocker):
"""Test the 'name' property."""
mock_value = mocker.MagicMock(spec=str)
constant = init_constant_rule()
constant._name = mock_value
assert constant.name == mock_value

# File: nicemessages/__init__.py
# Repo: hmallen/nicemessages (MIT)
from .nicemessages import NiceMessages

# File: zonys/__main__.py
# Repo: zonys/zonys (BSD-2-Clause)
import zonys.cli
zonys.cli.main()

# File: driverapp/__init__.py
# Repo: manisharmagarg/nessus-driver (Apache-2.0)
from driverapp.settings import *
from driverapp import urls, models

# File: tests/deephub/trainer/test_cli.py
# Repo: deeplab-ai/deephub (Apache-2.0)
from unittest import mock
import click.testing
from deephub.trainer.__main__ import cli
class TestTrainerCLI:
def _invoke_cli(self, *args):
runner = click.testing.CliRunner()
return runner.invoke(cli, args=args)
def test_train_unknown(self):
result = self._invoke_cli('train', 'unknown')
assert result.exit_code != 0
assert "Error: Cannot find variant with name 'unknown'" in result.output
def test_train_packaged_variant(self, datadir):
variants_dir = datadir / 'config' / 'variants'
with mock.patch('deephub.trainer.__main__.load_variant') as mocked_load_variant, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
mocked_pd = mock.MagicMock()
mocked_load_variant.return_value = mocked_pd
result = self._invoke_cli('train', 'varianta', '-d', variants_dir)
# Check that objects instantiated properly
mocked_load_variant.assert_called_once_with(variant_name='varianta', variants_dir=variants_dir)
mocked_trainer.assert_called_once_with(requested_gpu_devices=())
            # Check that training was called correctly
mocked_pd.train.assert_called_once()
assert result.exit_code == 0
assert 'finished' in result.output
def test_train_packaged_with_user_definitions(self, datadir):
variants_dir = datadir / 'config' / 'variants'
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
# Test unknown
result = self._invoke_cli('train', 'unknown', '-d', variants_dir)
assert result.exit_code != 0
assert "Error: Cannot find variant with name 'unknown'" in result.output
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
result = self._invoke_cli('train', 'varianta', '-d', variants_dir)
mocked_pd.assert_called_once()
assert mocked_pd.mock_calls[0][1][0] == 'varianta'
assert result.exit_code == 0
assert 'finished' in result.output
def test_train_packaged_with_multiple_user_definitions(self, datadir):
variants_dir = datadir / 'config' / 'variants'
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
# Test unknown
result = self._invoke_cli('train', 'unknown', '-d', variants_dir)
assert result.exit_code != 0
assert "Error: Cannot find variant with name 'unknown'" in result.output
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
            # Test known variant 'variantc'
result = self._invoke_cli('train', 'variantc', '-d', variants_dir)
mocked_pd.assert_called_once()
assert mocked_pd.mock_calls[0][1][0] == 'variantc'
assert result.exit_code == 0
assert 'finished' in result.output
def test_override_configuration_with_literals(self, datadir):
variants_dir = datadir / 'config' / 'variants'
# Check that VariantDefinition.set was used
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
            # Override configuration with literal values
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-p', 'model.variable', '15',
'-p', 'yet.another', 'mitsos')
mocked_pd.assert_called_once()
assert len(mocked_pd().set.mock_calls) == 2
mocked_pd().set.assert_has_calls([mock.call('model.variable', 15),
mock.call('yet.another', 'mitsos')])
assert result.exit_code == 0
assert 'finished' in result.output
def test_override_configuration_with_expr(self, datadir):
variants_dir = datadir / 'config' / 'variants'
# Check that VariantDefinition.set was used
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
            # Override configuration with Python expressions
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-p', 'mixed.list', '[12, "mixed", 3.0]',
'-p', 'math.expression', '12/3.0')
mocked_pd.assert_called_once()
assert len(mocked_pd().set.mock_calls) == 2
mocked_pd().set.assert_has_calls([mock.call('mixed.list', [12, "mixed", 3.0]),
mock.call('math.expression', 4.0)])
assert result.exit_code == 0
assert 'finished' in result.output
def test_override_configuration_with_str(self, datadir):
variants_dir = datadir / 'config' / 'variants'
        # VariantDefinition.set checks are disabled below until -P is implemented
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
            # Override configuration with raw strings (-P)
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-P', 'mixed.list', '[12, "mixed", 3.0]',
'-P', 'math.expression', '12/3.0')
# mocked_pd.assert_called_once()
# assert len(mocked_pd().set.mock_calls) == 2
# mocked_pd().set.assert_has_calls([mock.call('mixed.list', [12, "mixed", 3.0]),
# mock.call('math.expression', 4.0)])
assert result.exit_code != 0
assert 'implemented' in result.output
def test_no_gpu_configuration(self, datadir):
variants_dir = datadir / 'config' / 'variants'
        # Check the Trainer GPU configuration when no device is given
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir)
mocked_trainer.assert_called_once_with(requested_gpu_devices=())
assert result.exit_code == 0
def test_single_gpu_configuration(self, datadir):
variants_dir = datadir / 'config' / 'variants'
        # Check the Trainer GPU configuration for a single device
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-g', '0')
mocked_trainer.assert_called_once_with(requested_gpu_devices=(0,))
assert result.exit_code == 0
def test_multi_gpu_configuration(self, datadir):
variants_dir = datadir / 'config' / 'variants'
        # Check the Trainer GPU configuration for multiple devices
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-g', '1', '-g', '0')
mocked_trainer.assert_called_once_with(requested_gpu_devices=(1, 0))
assert result.exit_code == 0
def test_gpu_wrong_id(self, datadir):
variants_dir = datadir / 'config' / 'variants'
        # Check that an invalid GPU id is rejected
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'-g', 'GPU:0')
assert result.exit_code != 0
assert "Invalid value for " in result.output
def test_warm_start_invocation(self, datadir, tmpdir):
variants_dir = datadir / 'config' / 'variants'
# Check that VariantDefinition.set was used
with mock.patch('deephub.variants.io.VariantDefinition') as mocked_pd, \
mock.patch('deephub.trainer.__main__.Trainer') as mocked_trainer:
            # Pass warm-start options and check they are forwarded via set()
result = self._invoke_cli('train', 'varianta',
'-d', variants_dir,
'--warm-start-checkpoint', str(tmpdir),
'--warm-start-vars', "myscope1")
mocked_pd.assert_called_once()
assert len(mocked_pd().set.mock_calls) == 2
mocked_pd().set.assert_has_calls([mock.call('train.warm_start_check_point', str(tmpdir)),
mock.call('train.warm_start_variables_regex', "myscope1")])
assert result.exit_code == 0
assert 'finished' in result.output
| 49.680412 | 107 | 0.58425 | 1,047 | 9,638 | 5.104107 | 0.110793 | 0.046407 | 0.077844 | 0.049775 | 0.858533 | 0.841879 | 0.836078 | 0.821856 | 0.811003 | 0.7936 | 0 | 0.009717 | 0.305976 | 9,638 | 193 | 108 | 49.937824 | 0.789206 | 0.076053 | 0 | 0.637681 | 0 | 0 | 0.206124 | 0.110323 | 0 | 0 | 0 | 0 | 0.318841 | 1 | 0.094203 | false | 0 | 0.021739 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e428f1d03b60fcf4ac9de49bd5b9a028ccc57c80 | 42 | py | Python | xinyu/python/common/tools/__init__.py | xzhuah/codingDimension | 9b90b93a3a3b8afee28e3a2a571050ca3f86f066 | [
"Apache-2.0"
] | 1 | 2020-11-06T20:39:11.000Z | 2020-11-06T20:39:11.000Z | xinyu/python/common/tools/__init__.py | xzhuah/codingDimension | 9b90b93a3a3b8afee28e3a2a571050ca3f86f066 | [
"Apache-2.0"
] | 1 | 2021-08-28T02:29:51.000Z | 2021-08-28T02:29:51.000Z | xinyu/python/common/tools/__init__.py | xzhuah/codingDimension | 9b90b93a3a3b8afee28e3a2a571050ca3f86f066 | [
"Apache-2.0"
] | null | null | null | # Created by Xinyu Zhu on 2021/8/29, 0:11
| 21 | 41 | 0.690476 | 10 | 42 | 2.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 0.190476 | 42 | 1 | 42 | 42 | 0.558824 | 0.928571 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e497dceb3c3925eba4b67b71f0697f7501ce3a61 | 4,421 | py | Python | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/profile.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2017-03-28T06:41:51.000Z | 2017-03-28T06:41:51.000Z | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/profile.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | null | null | null | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/profile.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2016-12-13T21:08:58.000Z | 2016-12-13T21:08:58.000Z | # -*- coding: utf-8 -*-
from urlparse import urlparse
from Axon.Component import component
from Axon.Ipc import shutdownMicroprocess, producerFinished
from Kamaelia.Protocol.HTTP.HTTPClient import SimpleHTTPClient
from microblog.jabber.client import Client
from microblog.profile.manager import ProfileManager
__all__ = ['ProfileHandler', 'NewProfileHandler']
class NewProfileHandler(component):
Inboxes = {"inbox" : "",
"control" : "Shutdown the client stream",
"_response": ""}
Outboxes = {"outbox" : "",
"signal" : "Shutdown signal",
"log" : "log",
"_request": ""}
def __init__(self, base_dir, atompub):
super(NewProfileHandler, self).__init__()
self.base_dir = base_dir
self.atompub = atompub
def initComponents(self):
self.client = SimpleHTTPClient()
self.addChildren(self.client)
self.link((self, '_request'), (self.client, 'inbox'))
self.link((self.client, 'outbox'), (self, '_response'))
self.client.activate()
def main(self):
yield self.initComponents()
while 1:
if self.dataReady("control"):
mes = self.recv("control")
if isinstance(mes, shutdownMicroprocess) or \
isinstance(mes, producerFinished):
self.send(producerFinished(), "signal")
break
if self.dataReady("inbox"):
feed = self.recv('inbox')
for entry in feed.entries:
for link in entry.links:
if link.rel == 'edit-media' and link.type == 'application/xml':
profile_name = urlparse(link.href)[2].rsplit('/', -1)[-1]
pwd = ProfileManager.set_profile_password(self.base_dir, profile_name)
profile = ProfileManager.load_profile(self.base_dir, self.atompub, profile_name)
Client.register_jabber_user(self.atompub, profile_name.lower(), pwd, profile)
params = {'url': link.href, 'method': 'DELETE'}
self.send(params, '_request')
continue
if not self.anyReady():
self.pause()
yield 1
class ProfileHandler(component):
Inboxes = {"inbox" : "",
"control" : "Shutdown the client stream",
"_response": ""}
Outboxes = {"outbox" : "",
"signal" : "Shutdown signal",
"log" : "log",
"_request": ""}
def __init__(self, base_dir, atompub):
super(ProfileHandler, self).__init__()
self.base_dir = base_dir
self.atompub = atompub
def initComponents(self):
self.client = SimpleHTTPClient()
self.addChildren(self.client)
self.link((self, '_request'), (self.client, 'inbox'))
self.link((self.client, 'outbox'), (self, '_response'))
self.client.activate()
def main(self):
yield self.initComponents()
while 1:
if self.dataReady("control"):
mes = self.recv("control")
if isinstance(mes, shutdownMicroprocess) or \
isinstance(mes, producerFinished):
self.send(producerFinished(), "signal")
break
if self.dataReady("inbox"):
feed = self.recv('inbox')
for entry in feed.entries:
for link in entry.links:
if link.rel == 'edit-media' and link.type == 'application/xml':
profile_name = urlparse(link.href)[2].rsplit('/', -1)[-1]
pwd = ProfileManager.get_profile_password(self.base_dir, profile_name)
profile = ProfileManager.load_profile(self.base_dir, self.atompub, profile_name)
if not Client.get_status(profile_name):
Client.connect_jabber_user(self.atompub, profile_name.lower(), pwd, profile)
continue
if not self.anyReady():
self.pause()
yield 1
| 37.151261 | 108 | 0.521601 | 399 | 4,421 | 5.631579 | 0.240602 | 0.031153 | 0.039163 | 0.026702 | 0.785937 | 0.785937 | 0.785937 | 0.785937 | 0.785937 | 0.7085 | 0 | 0.003931 | 0.367112 | 4,421 | 118 | 109 | 37.466102 | 0.799142 | 0.00475 | 0 | 0.8 | 0 | 0 | 0.090516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.022222 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e4e16d9b3330d9d783c4bdc131a29f02e7f0652a | 78 | py | Python | acq4/util/configfile.py | aleonlein/acq4 | 4b1fcb9ad2c5e8d4595a2b9cf99d50ece0c0f555 | [
"MIT"
] | 1 | 2020-06-04T17:04:53.000Z | 2020-06-04T17:04:53.000Z | acq4/util/configfile.py | aleonlein/acq4 | 4b1fcb9ad2c5e8d4595a2b9cf99d50ece0c0f555 | [
"MIT"
] | 24 | 2016-09-27T17:25:24.000Z | 2017-03-02T21:00:11.000Z | acq4/util/configfile.py | sensapex/acq4 | 9561ba73caff42c609bd02270527858433862ad8 | [
"MIT"
] | 4 | 2016-10-19T06:39:36.000Z | 2019-09-30T21:06:45.000Z | from __future__ import print_function
from acq4.pyqtgraph.configfile import *
| 26 | 39 | 0.858974 | 10 | 78 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.102564 | 78 | 2 | 40 | 39 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
e4f1929e64994ee1d3521d8e27e361e4a7e5c60b | 6,894 | py | Python | bot.py | BubblegumDiscord/PoppingWelcomer | 2572d8798367ab8c2106bc1d8384b576e4059103 | [
"MIT"
] | null | null | null | bot.py | BubblegumDiscord/PoppingWelcomer | 2572d8798367ab8c2106bc1d8384b576e4059103 | [
"MIT"
] | null | null | null | bot.py | BubblegumDiscord/PoppingWelcomer | 2572d8798367ab8c2106bc1d8384b576e4059103 | [
"MIT"
] | null | null | null | import config
from discord.ext import commands
from PIL import Image, ImageDraw, ImageFont, ImageChops
from io import BytesIO, StringIO
import discord, aiohttp, datetime
bot = commands.Bot(command_prefix="-", description="Give new members a bubbly start to their experience!")
def tint_image(image, tint_color):
return ImageChops.multiply(image, Image.new('RGBA', image.size, tint_color))
def tag(u):
return u.name + "#" + u.discriminator
@bot.event
async def on_ready():
print("Logged in")
print(bot.user.name)
print(bot.user.id)
print("--------")
@bot.command(pass_context=True)
async def dump(ctx, *, vc:discord.Channel):
if str(vc.type) != "voice":
print(type(vc.type))
return await bot.say("Please provide a voice channel, not a text channel.")
    mems = [m for m in vc.server.members if m.voice.voice_channel == vc]
await bot.send_message(ctx.message.channel, embed=discord.Embed(
title=":book: Members currently in " + vc.name,
description="```" + "".join([tag(m) + "\n" for m in mems]) + "```",
colour=0x0a96de,
timestamp=datetime.datetime.now()
))
@bot.event
async def on_member_join(mem):
im = Image.open("./banner2.png")
draw = ImageDraw.Draw(im, mode='RGBA')
text = mem.display_name
fsiz = 48
font=ImageFont.truetype("./Starfish.ttf", fsiz)
while draw.textsize(text, font=font)[0] > 430:
fsiz -= 1
font=ImageFont.truetype("./Starfish.ttf", fsiz)
tx, ty = draw.textsize(text, font=font)
x = 250 - tx//2
y = round(158 * 1.8) - ty//2
#shadowcolor = (100, 100, 100)
#shadowcolor = (255,255,255)
fillcolor = (165, 214, 254)
shadowcolor = (105, 154, 194)
a = "center"
draw.text((x-1, y-1), text, font=font, fill=shadowcolor, align=a)
draw.text((x+1, y+1), text, font=font, fill=shadowcolor, align=a)
draw.text((x+1, y-1), text, font=font, fill=shadowcolor, align=a)
draw.text((x-1, y+1), text, font=font, fill=shadowcolor, align=a)
draw.text((x, y), text, font=font, fill=fillcolor, align=a)
avatar_im = None
url = mem.avatar_url
if not url:
url = mem.default_avatar_url
    retries = 3  # assumed retry count: the original referenced 'retries' without ever initializing it
    while True:
async with aiohttp.ClientSession(loop=bot.loop) as aiosession:
with aiohttp.Timeout(10):
async with aiosession.get(url) as resp:
avatar_im = BytesIO(await resp.read())
if avatar_im.getbuffer().nbytes > 0 or retries == 0:
await aiosession.close()
break
retries -= 1
print('0 nbytes image found. Retries left: {}'.format(retries+1))
ava_sqdim = 78
resize = (ava_sqdim, ava_sqdim)
avatar_im = Image.open(avatar_im).convert("RGBA")
avatar_im = avatar_im.resize(resize, Image.ANTIALIAS)
avatar_im.putalpha(avatar_im.split()[3])
is_square = False
if not is_square:
mask = Image.new('L', resize, 0)
maskDraw = ImageDraw.Draw(mask)
maskDraw.ellipse((0, 0) + resize, fill=255)
mask = mask.resize(avatar_im.size, Image.ANTIALIAS)
avatar_im.putalpha(mask)
img_center_x = (im.width // 2)
img_center_y = (im.height // 2)
offset_x = 109
offset_y = 36
img_offset_x = img_center_x + offset_x
img_offset_y = img_center_y + offset_y
ava_right = img_offset_x + avatar_im.width//2
ava_bottom = img_offset_y + avatar_im.height//2
ava_left = img_offset_x - avatar_im.width//2
ava_top = img_offset_y - avatar_im.height//2
avatar_im = tint_image(avatar_im, (255, 255, 255, 80))
im.paste(avatar_im, box=(ava_left, ava_top, ava_right, ava_bottom), mask=avatar_im)
temp = BytesIO()
im.save(temp, format="png")
temp.seek(0)
    await bot.send_file(mem.server.default_channel, temp, content="Give a popping welcome to " + mem.display_name + " :candy:", filename="welcome.png")
@bot.command(pass_context=True)
async def whalecum(ctx, mem: discord.Member = None):
mem = mem if mem else ctx.message.author
im = Image.open("./banner2.png")
draw = ImageDraw.Draw(im, mode='RGBA')
text = mem.display_name
fsiz = 48
font=ImageFont.truetype("./Starfish.ttf", fsiz)
while draw.textsize(text, font=font)[0] > 430:
fsiz -= 1
font=ImageFont.truetype("./Starfish.ttf", fsiz)
tx, ty = draw.textsize(text, font=font)
x = 250 - tx//2
y = round(158 * 1.8) - ty//2
#shadowcolor = (100, 100, 100)
#shadowcolor = (255,255,255)
fillcolor = (165, 214, 254)
shadowcolor = (105, 154, 194)
a = "center"
draw.text((x-1, y-1), text, font=font, fill=shadowcolor, align=a)
draw.text((x+1, y+1), text, font=font, fill=shadowcolor, align=a)
draw.text((x+1, y-1), text, font=font, fill=shadowcolor, align=a)
draw.text((x-1, y+1), text, font=font, fill=shadowcolor, align=a)
draw.text((x, y), text, font=font, fill=fillcolor, align=a)
avatar_im = None
url = mem.avatar_url
if not url:
url = mem.default_avatar_url
    retries = 3  # assumed retry count: the original referenced 'retries' without ever initializing it
    while True:
async with aiohttp.ClientSession(loop=bot.loop) as aiosession:
with aiohttp.Timeout(10):
async with aiosession.get(url) as resp:
avatar_im = BytesIO(await resp.read())
if avatar_im.getbuffer().nbytes > 0 or retries == 0:
await aiosession.close()
break
retries -= 1
print('0 nbytes image found. Retries left: {}'.format(retries+1))
ava_sqdim = 78
resize = (ava_sqdim, ava_sqdim)
avatar_im = Image.open(avatar_im).convert("RGBA")
avatar_im = avatar_im.resize(resize, Image.ANTIALIAS)
avatar_im.putalpha(avatar_im.split()[3])
is_square = False
if not is_square:
mask = Image.new('L', resize, 0)
maskDraw = ImageDraw.Draw(mask)
maskDraw.ellipse((0, 0) + resize, fill=255)
mask = mask.resize(avatar_im.size, Image.ANTIALIAS)
avatar_im.putalpha(mask)
img_center_x = (im.width // 2)
img_center_y = (im.height // 2)
offset_x = 109
offset_y = 36
img_offset_x = img_center_x + offset_x
img_offset_y = img_center_y + offset_y
ava_right = img_offset_x + avatar_im.width//2
ava_bottom = img_offset_y + avatar_im.height//2
ava_left = img_offset_x - avatar_im.width//2
ava_top = img_offset_y - avatar_im.height//2
avatar_im = tint_image(avatar_im, (255, 255, 255, 80))
im.paste(avatar_im, box=(ava_left, ava_top, ava_right, ava_bottom), mask=avatar_im)
temp = BytesIO()
im.save(temp, format="png")
#im.show()
temp.seek(0)
await bot.upload(temp, content="Give a popping welcome to " + mem.display_name + " :candy:", filename="welcome.png")
bot.run(config.token)
| 33.960591 | 151 | 0.6217 | 993 | 6,894 | 4.178248 | 0.192346 | 0.073271 | 0.040492 | 0.038564 | 0.79706 | 0.780188 | 0.780188 | 0.765004 | 0.765004 | 0.765004 | 0 | 0.039388 | 0.241369 | 6,894 | 202 | 152 | 34.128713 | 0.75392 | 0.017551 | 0 | 0.794872 | 0 | 0 | 0.066637 | 0 | 0.00641 | 0 | 0.001182 | 0 | 0 | 1 | 0.012821 | false | 0.012821 | 0.032051 | 0.012821 | 0.064103 | 0.044872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
902711ea614b16802a41fad9814ab482df22f902 | 34 | py | Python | middlewares/__init__.py | nasimdjanovich/async_telegram_bot | 250a36d4382aa51e2f02d5a1de0326890d8f973a | [
"MIT"
] | 3 | 2022-02-17T13:01:11.000Z | 2022-02-17T15:45:03.000Z | middlewares/__init__.py | turdibek-jumabaev/AsyncTelebot-shablon | f25d28389326ba13a1747a6534c9643c49feeeca | [
"MIT"
] | null | null | null | middlewares/__init__.py | turdibek-jumabaev/AsyncTelebot-shablon | f25d28389326ba13a1747a6534c9643c49feeeca | [
"MIT"
] | null | null | null | from . import middleware_antiflood | 34 | 34 | 0.882353 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9043da99b9a70198289795b169f5ae5c3b401bd5 | 56 | py | Python | graphgallery/nn/models/dgl/ala_gnn/__init__.py | TobiasSchmidtDE/GraphGallery | e627e4f454e0ce3813171305a524f5190a6e6f45 | [
"MIT"
] | null | null | null | graphgallery/nn/models/dgl/ala_gnn/__init__.py | TobiasSchmidtDE/GraphGallery | e627e4f454e0ce3813171305a524f5190a6e6f45 | [
"MIT"
] | null | null | null | graphgallery/nn/models/dgl/ala_gnn/__init__.py | TobiasSchmidtDE/GraphGallery | e627e4f454e0ce3813171305a524f5190a6e6f45 | [
"MIT"
] | null | null | null | from .ala_gcn import ALaGCN
from .ala_gat import ALaGAT | 28 | 28 | 0.821429 | 10 | 56 | 4.4 | 0.7 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 29 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f52a4b3dd8c494146e88c416aa02cee90fa86fc | 133 | py | Python | menu/forms.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null | menu/forms.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null | menu/forms.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null | import pdb
from django import forms
from django.contrib.auth.models import User
from django.utils.translation import ugettext as _
| 19 | 50 | 0.827068 | 20 | 133 | 5.45 | 0.65 | 0.275229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135338 | 133 | 6 | 51 | 22.166667 | 0.947826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f6e6d5770899dacfc8b906c054a6c6bac636b93 | 38 | py | Python | examples/importing_modules/py/magic_dust/hearts.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 24 | 2021-11-10T06:36:17.000Z | 2022-02-08T15:16:10.000Z | examples/importing_modules/py/magic_dust/hearts.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 9 | 2022-01-28T05:57:08.000Z | 2022-03-23T05:59:21.000Z | examples/importing_modules/py/magic_dust/hearts.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 1 | 2022-01-28T05:25:19.000Z | 2022-01-28T05:25:19.000Z | def hearts():
    return 'hearts...'
| 12.666667 | 23 | 0.552632 | 4 | 38 | 5.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 2 | 24 | 19 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
39792f67dc30c078d31978730d3006e7e118bc47 | 205 | py | Python | src/app/cv/engine.py | smidm/openpose-api | 0871441bea0c7ca7f8ec7e817757c0d197f6cfa8 | [
"MIT"
] | null | null | null | src/app/cv/engine.py | smidm/openpose-api | 0871441bea0c7ca7f8ec7e817757c0d197f6cfa8 | [
"MIT"
] | null | null | null | src/app/cv/engine.py | smidm/openpose-api | 0871441bea0c7ca7f8ec7e817757c0d197f6cfa8 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import List
from app.schemas.pose import Person
class Engine(ABC):
@abstractmethod
def infer_image(self, image) -> List[Person]:
return []
| 20.5 | 49 | 0.717073 | 27 | 205 | 5.407407 | 0.62963 | 0.232877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 205 | 9 | 50 | 22.777778 | 0.890244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0.142857 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
39a2c783c65158d293ce7f4189251c7183187687 | 380 | py | Python | neko/Common/DataStructures/OLE1/__init__.py | mebuis/neko | c76eacb60c3a3f6adfb6a7a6fd7f61640be2c00d | [
"Apache-2.0"
] | 1 | 2018-12-07T02:05:16.000Z | 2018-12-07T02:05:16.000Z | neko/Common/DataStructures/OLE1/__init__.py | mebuis/neko | c76eacb60c3a3f6adfb6a7a6fd7f61640be2c00d | [
"Apache-2.0"
] | null | null | null | neko/Common/DataStructures/OLE1/__init__.py | mebuis/neko | c76eacb60c3a3f6adfb6a7a6fd7f61640be2c00d | [
"Apache-2.0"
] | null | null | null | # -*- encoding: UTF-8 -*-
from .CountOfCharactersPrefixedStringTypes import CountOfCharactersPrefixedAnsiString, CountOfCharactersPrefixedUnicodeString
from .LengthPrefixedByteArray import LengthPrefixedByteArray
from .LengthPrefixedStringTypes import LengthPrefixedAnsiString, LengthPrefixedUnicodeString
from .OLE1Object import OLE1Object
from .OLE1Package import OLE1Package
| 42.222222 | 125 | 0.881579 | 25 | 380 | 13.4 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014245 | 0.076316 | 380 | 8 | 126 | 47.5 | 0.940171 | 0.060526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8432369ab4ea1586856dd950ae673584d0a782e1 | 2,611 | py | Python | PythonExercicios/ex059.py | Caio-Moretti/115.Exercicios-Python | 7e66fb1f44ea3eb4ade63f37d843242ac42ade84 | [
"MIT"
] | null | null | null | PythonExercicios/ex059.py | Caio-Moretti/115.Exercicios-Python | 7e66fb1f44ea3eb4ade63f37d843242ac42ade84 | [
"MIT"
] | null | null | null | PythonExercicios/ex059.py | Caio-Moretti/115.Exercicios-Python | 7e66fb1f44ea3eb4ade63f37d843242ac42ade84 | [
"MIT"
] | null | null | null | n1 = int(input('Digite um número: '))
n2 = int(input('Digite outro número: '))
opt = int(input('Agora selecione a próxima operação:'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
while opt != 5:
if opt == 1:
print('{} + {} = {}'.format(n1, n2, n1 + n2))
opt = int(input('E agora, o que quer fazer?'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
elif opt == 2:
print('{} x {} = {}'.format(n1, n2, n1 * n2))
opt = int(input('E agora, o que quer fazer?'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
elif opt == 3:
if n1 == n2:
print('{} e {} são iguais.'.format(n1, n2))
elif n1 > n2:
print('{} é maior que {}'.format(n1, n2))
else:
print('{} é maior que {}'.format(n2, n1))
opt = int(input('E agora, o que quer fazer?'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
elif opt == 4:
n1 = int(input('Digite um número: '))
n2 = int(input('Digite outro número: '))
opt = int(input('E agora, o que quer fazer?'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
elif opt == 5:
print('Saindo do programa.')
else:
print('Número inválido, tente novamente.')
opt = int(input('O que quer fazer?'
'\n[1] Somar'
'\n[2] Multiplicar'
'\n[3] Mostrar o maior'
'\n[4] Novos números'
'\n[5] Sair do programa.'
'\n RESPOSTA: '))
print('Programa encerrado. ')
| 39.560606 | 55 | 0.378399 | 279 | 2,611 | 3.541219 | 0.179211 | 0.080972 | 0.066802 | 0.048583 | 0.795547 | 0.755061 | 0.755061 | 0.755061 | 0.755061 | 0.755061 | 0 | 0.042491 | 0.477212 | 2,611 | 65 | 56 | 40.169231 | 0.681319 | 0 | 0 | 0.71875 | 0 | 0 | 0.385676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |