hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fa15bb67b93340d194098e6afb7420d913b7f2ad | 4,015 | py | Python | unittests/utility/test_utils.py | rpratap-bot/cephci | 113ce2afc6bc22a66cce1fede21d4c53834a61f4 | [
"MIT"
] | null | null | null | unittests/utility/test_utils.py | rpratap-bot/cephci | 113ce2afc6bc22a66cce1fede21d4c53834a61f4 | [
"MIT"
] | null | null | null | unittests/utility/test_utils.py | rpratap-bot/cephci | 113ce2afc6bc22a66cce1fede21d4c53834a61f4 | [
"MIT"
] | null | null | null | import os
import pytest
from utility.utils import custom_ceph_config
suite_config = {'global': {'osd_pool_default_pg_num': 64,
'osd_default_pool_size': 2,
'osd_pool_default_pgp_num': 64,
'mon_max_pg_per_osd': 1024,
'osd_objectstore': 'bluestore'}}
cli_config = ['osd_pool_default_pg_num=128',
'osd_default_pool_size=2',
'osd_pool_default_pgp_num=128',
'mon_max_pg_per_osd=1024']
@pytest.fixture
def config_file(fixtures_dir):
return os.path.join(fixtures_dir, 'custom_ceph_config.yaml')
def test_custom_ceph_config_no_values():
expected = {}
result = custom_ceph_config(None, None, None)
assert result == expected
def test_custom_ceph_config_suite_only():
result = custom_ceph_config(suite_config, None, None)
assert result == suite_config
def test_custom_ceph_config_cli_only():
expected = {'global': {'osd_pool_default_pg_num': '128',
'osd_default_pool_size': '2',
'osd_pool_default_pgp_num': '128',
'mon_max_pg_per_osd': '1024'}}
result = custom_ceph_config(None, cli_config, None)
assert result == expected
def test_custom_ceph_config_file_only(config_file):
expected = {'global': {'osd_pool_default_pg_num': 64,
'osd_default_pool_size': 2,
'osd_pool_default_pgp_num': 64,
'mon_max_pg_per_osd': 2048,
'osd_journal_size': 10000},
'mon': {'mon_osd_full_ratio': .80,
'mon_osd_nearfull_ratio': .70}}
result = custom_ceph_config(None, None, config_file)
assert result == expected
def test_custom_ceph_config_suite_and_cli():
expected = {'global': {'osd_pool_default_pg_num': '128',
'osd_default_pool_size': '2',
'osd_pool_default_pgp_num': '128',
'mon_max_pg_per_osd': '1024',
'osd_objectstore': 'bluestore'}}
result = custom_ceph_config(suite_config, cli_config, None)
assert result == expected
def test_custom_ceph_config_suite_and_file(config_file):
expected = {'global': {'osd_pool_default_pg_num': 64,
'osd_default_pool_size': 2,
'osd_pool_default_pgp_num': 64,
'mon_max_pg_per_osd': 2048,
'osd_objectstore': 'bluestore',
'osd_journal_size': 10000},
'mon': {'mon_osd_full_ratio': .80,
'mon_osd_nearfull_ratio': .70}}
result = custom_ceph_config(suite_config, None, config_file)
assert result == expected
def test_custom_ceph_config_cli_and_file(config_file):
expected = {'global': {'osd_pool_default_pg_num': '128',
'osd_default_pool_size': '2',
'osd_pool_default_pgp_num': '128',
'mon_max_pg_per_osd': '1024',
'osd_journal_size': 10000},
'mon': {'mon_osd_full_ratio': .80,
'mon_osd_nearfull_ratio': .70}}
result = custom_ceph_config(None, cli_config, config_file)
assert result == expected
def test_custom_ceph_config_all(config_file):
expected = {'global': {'osd_pool_default_pg_num': '128',
'osd_default_pool_size': '2',
'osd_pool_default_pgp_num': '128',
'mon_max_pg_per_osd': '1024',
'osd_objectstore': 'bluestore',
'osd_journal_size': 10000},
'mon': {'mon_osd_full_ratio': .80,
'mon_osd_nearfull_ratio': .70}}
result = custom_ceph_config(suite_config, cli_config, config_file)
assert result == expected
| 39.752475 | 70 | 0.56812 | 461 | 4,015 | 4.427332 | 0.119306 | 0.088192 | 0.141107 | 0.082313 | 0.902009 | 0.877511 | 0.848604 | 0.798628 | 0.77805 | 0.764821 | 0 | 0.043882 | 0.330262 | 4,015 | 100 | 71 | 40.15 | 0.715136 | 0 | 0 | 0.607595 | 0 | 0 | 0.284932 | 0.171357 | 0 | 0 | 0 | 0 | 0.101266 | 1 | 0.113924 | false | 0 | 0.037975 | 0.012658 | 0.164557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa174b1b9b726a8e138611adc50fbede0dcf70e8 | 96 | py | Python | venv/lib/python3.8/site-packages/clikit/utils/command.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/clikit/utils/command.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/clikit/utils/command.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/2b/35/8c/aa657928ee14a7f7bd76e20e4fb17d02a241d5018ec7159fc1ba17c650 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa2433dc6b1cd7ca424f2da130da1c4295fbbaf1 | 154 | py | Python | blog/admin.py | alcibiadesBustillo/twitter_clone | 6e61a06484800d687384cee289daff5d8bfff330 | [
"MIT"
] | null | null | null | blog/admin.py | alcibiadesBustillo/twitter_clone | 6e61a06484800d687384cee289daff5d8bfff330 | [
"MIT"
] | null | null | null | blog/admin.py | alcibiadesBustillo/twitter_clone | 6e61a06484800d687384cee289daff5d8bfff330 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Post
# Register your models here.
@admin.register(Post)
class PostAdmin(admin.ModelAdmin):
pass
| 19.25 | 34 | 0.779221 | 21 | 154 | 5.714286 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 154 | 7 | 35 | 22 | 0.909091 | 0.168831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fa6eaabf46b74547c222e05c763a44240ec1d3f4 | 45 | py | Python | 2.Python/m2.py | sgeek28/Data-Science | ea0bfd6eeb78f534ab89fc9d4c306adb0087e07e | [
"MIT"
] | null | null | null | 2.Python/m2.py | sgeek28/Data-Science | ea0bfd6eeb78f534ab89fc9d4c306adb0087e07e | [
"MIT"
] | null | null | null | 2.Python/m2.py | sgeek28/Data-Science | ea0bfd6eeb78f534ab89fc9d4c306adb0087e07e | [
"MIT"
] | null | null | null | import m1
print("M2 name is %s" % __name__)
| 11.25 | 33 | 0.666667 | 8 | 45 | 3.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.155556 | 45 | 3 | 34 | 15 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0.288889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
d7009732b9dab86d2fdae57bce3b257877dd38d2 | 42 | py | Python | src/Driver/__init__.py | TTTPOB/UnifiedMessageRelay | 7cba8febee4e6d834a15176779c54019c769dc96 | [
"MIT"
] | null | null | null | src/Driver/__init__.py | TTTPOB/UnifiedMessageRelay | 7cba8febee4e6d834a15176779c54019c769dc96 | [
"MIT"
] | null | null | null | src/Driver/__init__.py | TTTPOB/UnifiedMessageRelay | 7cba8febee4e6d834a15176779c54019c769dc96 | [
"MIT"
] | null | null | null | from . import QQ, Telegram, Line, Discord
| 21 | 41 | 0.738095 | 6 | 42 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 1 | 42 | 42 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d70c2245993f9b335f496218760144f60c24002e | 11,243 | py | Python | UserNameListGenerator.py | 3xxu5/UserNameListGenerator | d3afac14a91621bf02413316c3a9e8b90dda6e65 | [
"Unlicense"
] | 3 | 2020-03-22T19:53:31.000Z | 2020-07-16T11:51:16.000Z | UserNameListGenerator.py | dpdug4n/UserNameListGenerator | d3afac14a91621bf02413316c3a9e8b90dda6e65 | [
"Unlicense"
] | null | null | null | UserNameListGenerator.py | dpdug4n/UserNameListGenerator | d3afac14a91621bf02413316c3a9e8b90dda6e65 | [
"Unlicense"
] | null | null | null | #UserName List Generator - D.Patrick Dugan
#There probably is a more pythonic way to do this...
import argparse
import sys
class UserNameGen(object):
def __init__(self, username, usersfile, outfile, numappend, email):
self.username = username
self.usersfile = usersfile
self.outfile = outfile
self.numappend = numappend
self.email = email
#string functions here
def FirstLast(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+lastName[0]+lastName[1:]
def FirstDotLast(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+'.'+lastName[0]+lastName[1:]
def First_Last(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+'_'+lastName[0]+lastName[1:]
def FLast(self, firstName, lastName):
return firstName[0]+lastName[0]+lastName[1:]
def FDotLast(self, firstName, lastName):
return firstName[0]+'.'+lastName[0]+lastName[1:]
def F_Last(self, firstName, lastName):
return firstName[0]+'_'+lastName[0]+lastName[1:]
def FirstL(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+lastName[0]
def FirstDotL(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+'.'+lastName[0]
def First_L(self, firstName, lastName):
return firstName[0].upper()+firstName[1:]+'_'+lastName[0]
def FirLas(self, firstName, lastName):
return firstName[0].upper()+firstName[1:3]+lastName[0].upper()+lastName[1:3]
def FirDotLas(self, firstName, lastName):
return firstName[0].upper()+firstName[1:3]+'.'+lastName[0].upper()+lastName[1:3]
def Fir_Las(self, firstName, lastName):
return firstName[0].upper()+firstName[1:3]+'_'+lastName[0].upper()+lastName[1:3]
def FiLast(self, firstName, lastName):
return firstName[0].upper()+firstName[1:2]+lastName[0].upper()+lastName[1:]
def FiDotLast(self, firstName, lastName):
return firstName[0].upper()+firstName[1:2]+'.'+lastName[0].upper()+lastName[1:]
def Fi_Last(self, firstName, lastName):
return firstName[0].upper()+firstName[1:2]+'_'+lastName[0].upper()+lastName[1:]
def LastFirst(self, firstName, lastName):
return lastName[0].upper()+lastName[1:]+firstName[0].upper()+firstName[1:]
def LastDotFirst(self, firstName, lastName):
return lastName[0].upper()+lastName[1:]+'.'+firstName[0].upper()+firstName[1:]
def Last_First(self, firstName, lastName):
return lastName[0].upper()+lastName[1:]+'_'+firstName[0].upper()+firstName[1:]
def LasFir(self, firstName, lastName):
return lastName[0].upper()+lastName[1:3]+firstName[0].upper()+firstName[1:3]
def LasDotFir(self, firstName, lastName):
return lastName[0].upper()+lastName[1:3]+'.'+firstName[0].upper()+firstName[1:3]
def Las_Fir(self, firstName, lastName):
return lastName[0].upper()+lastName[1:3]+'_'+firstName[0].upper()+firstName[1:3]
#same methods but all lowercase
def firstlast(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+lastName[0].lower()+lastName[1:]
def firstDotlast(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+'.'+lastName[0].lower()+lastName[1:]
def first_last(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+'_'+lastName[0].lower()+lastName[1:]
def flast(self, firstName, lastName):
        return firstName[0].lower()+lastName[0].lower()+lastName[1:]
def fDotlast(self, firstName, lastName):
return firstName[0].lower()+'.'+lastName[0].lower()+lastName[1:]
def f_last(self, firstName, lastName):
return firstName[0].lower()+'_'+lastName[0].lower()+lastName[1:]
def firstl(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+lastName[0].lower()
def firstDotl(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+'.'+lastName[0].lower()
def first_l(self, firstName, lastName):
return firstName[0].lower()+firstName[1:]+'_'+lastName[0].lower()
def firlas(self, firstName, lastName):
return firstName[0].lower()+firstName[1:3]+lastName[0].lower()+lastName[1:3]
def firDotlas(self, firstName, lastName):
return firstName[0].lower()+firstName[1:3]+'.'+lastName[0].lower()+lastName[1:3]
def fir_las(self, firstName, lastName):
return firstName[0].lower()+firstName[1:3]+'_'+lastName[0].lower()+lastName[1:3]
def filast(self, firstName, lastName):
return firstName[0].lower()+firstName[1:2]+lastName[0].lower()+lastName[1:]
def fiDotlast(self, firstName, lastName):
return firstName[0].lower()+firstName[1:2]+'.'+lastName[0].lower()+lastName[1:]
def fi_last(self, firstName, lastName):
return firstName[0].lower()+firstName[1:2]+'_'+lastName[0].lower()+lastName[1:]
def lastfirst(self, firstName, lastName):
        return lastName[0].lower()+lastName[1:]+firstName[0].lower()+firstName[1:]
def lastDotfirst(self, firstName, lastName):
return lastName[0].lower()+lastName[1:]+'.'+firstName[0].lower()+firstName[1:]
def last_first(self, firstName, lastName):
return lastName[0].lower()+lastName[1:]+'_'+firstName[0].lower()+firstName[1:]
def lasfir(self, firstName, lastName):
return lastName[0].lower()+lastName[1:3]+firstName[0].lower()+firstName[1:3]
def lasDotfir(self, firstName, lastName):
return lastName[0].lower()+lastName[1:3]+'.'+firstName[0].lower()+firstName[1:3]
def las_fir(self, firstName, lastName):
return lastName[0].lower()+lastName[1:3]+'_'+firstName[0].lower()+firstName[1:3]
@staticmethod
def outputUserName(lineEntry, f=None):
if f is None:
print(lineEntry)
else:
f.write(lineEntry + '\n')
def run(self):
usernames = [self.username]
self.generate_user_names(usernames)
def load_users_file(self):
with open(self.usersfile) as names:
usernames = [line.strip() for line in names]
self.generate_user_names(usernames)
def generate_user_names(self, usernames):
# if self.numappend is False:
functions = [
self.FirstLast, self.FirstDotLast, self.First_Last,
self.FLast, self.FDotLast, self.F_Last,
self.FirstL, self.FirstDotL, self.First_L,
self.FirLas, self.FirDotLas, self.Fir_Las,
self.FiLast, self.FiDotLast, self.Fi_Last,
self.LastFirst, self.LastDotFirst, self.Last_First,
self.LasFir, self.LasDotFir, self.Las_Fir,
#lowercase functions
self.firstlast, self.firstDotlast, self.first_last,
self.flast, self.fDotlast, self.f_last,
self.firstl, self.firstDotl, self.first_l,
self.firlas, self.firDotlas, self.fir_las,
self.filast, self.fiDotlast, self.fi_last,
self.lastfirst, self.lastDotfirst, self.last_first,
self.lasfir, self.lasDotfir, self.las_fir,
]
if self.outfile is not None:
f = open(self.outfile, 'w+')
else:
f = None
for name in usernames:
try:
firstName, lastName = name.split()
for fn in functions:
lineEntry = fn(firstName, lastName)
if self.numappend:
if self.email:
self.outputUserName(lineEntry+self.email, f)
numrange = self.numappend.split(',')
for num in range(int(numrange[0]), int(numrange[1])+1):
if self.email:
self.outputUserName(lineEntry+str(num)+self.email, f)
else:
self.outputUserName(lineEntry+str(num), f)
elif self.email:
self.outputUserName(lineEntry+self.email, f)
else:
self.outputUserName(lineEntry, f)
except Exception as e:
print(e)
continue
if f is not None:
f.close()
if __name__ == '__main__':
parser = argparse.ArgumentParser(add_help= True, description= "Generates a list of usernames based off of standard naming conventions.")
parser.add_argument('-u','--username', help="Name of the user to enumerate. 'First Last' format")
parser.add_argument('-U','--usersfile', help="File with names to generate list in 'First Last' format")
parser.add_argument('-o','--outfile', action='store', help= "File to save generated usernames in.")
parser.add_argument('-n', help='Adds number range to every naming convention. Must be in "x,y" format.')
parser.add_argument('-e', '--email', help="Appends '@domain.com' to all generated usernames")
if len(sys.argv)==1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
try:
executer = UserNameGen(args.username, args.usersfile, args.outfile, args.n, args.email)
if executer.usersfile is not None:
executer.load_users_file()
elif executer.username is not None:
executer.run()
else:
parser.print_help()
except Exception as e:
print(e) | 55.935323 | 145 | 0.524593 | 1,155 | 11,243 | 5.04329 | 0.12987 | 0.128412 | 0.151416 | 0.194678 | 0.748498 | 0.746438 | 0.709013 | 0.696652 | 0.671588 | 0.669528 | 0 | 0.026043 | 0.347683 | 11,243 | 201 | 146 | 55.935323 | 0.768203 | 0.016899 | 0 | 0.111111 | 0 | 0 | 0.038993 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.274854 | false | 0 | 0.011696 | 0.245614 | 0.538012 | 0.02924 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
d73f78a323cd7bf4904483ca801f6f6015ed4d1c | 8,511 | py | Python | autolens/simulator/simulator.py | harshitjindal/PyAutoLens | f1d3f08f12a61f6634e1b7a0ccf8f5cfe0252035 | [
"MIT"
] | 1 | 2020-04-06T20:07:56.000Z | 2020-04-06T20:07:56.000Z | autolens/simulator/simulator.py | harshitjindal/PyAutoLens | f1d3f08f12a61f6634e1b7a0ccf8f5cfe0252035 | [
"MIT"
] | null | null | null | autolens/simulator/simulator.py | harshitjindal/PyAutoLens | f1d3f08f12a61f6634e1b7a0ccf8f5cfe0252035 | [
"MIT"
] | null | null | null | from autoarray.structures import grids
from autoarray.simulator import simulator
from autolens.lens import ray_tracing
class ImagingSimulator(simulator.ImagingSimulator):
def __init__(
self,
shape_2d,
pixel_scales,
sub_size,
psf,
exposure_time,
background_level,
add_noise=True,
noise_if_add_noise_false=0.1,
noise_seed=-1,
origin=(0.0, 0.0),
):
        """A class representing an Imaging observation, using the shape of the image, the pixel scale,
psf, exposure time, etc.
Parameters
----------
shape_2d : (int, int)
            The shape of the observation. Note that we do not simulate a full Imaging frame (e.g. 2000 x 2000 pixels for \
Hubble imaging), but instead just a cut-out around the strong lens.
pixel_scales : float
The size of each pixel in arc seconds.
psf : PSF
An arrays describing the PSF kernel of the image.
exposure_time : float
The exposure time of an observation using this data_type.
background_level : float
            The level of the background sky of an observation using this data_type.
"""
super(ImagingSimulator, self).__init__(
shape_2d=shape_2d,
pixel_scales=pixel_scales,
sub_size=sub_size,
psf=psf,
exposure_time=exposure_time,
background_level=background_level,
add_noise=add_noise,
noise_if_add_noise_false=noise_if_add_noise_false,
noise_seed=noise_seed,
origin=origin,
)
def from_tracer(self, tracer, name=None):
"""
Create a realistic simulated image by applying effects to a plain simulated image.
Parameters
----------
name
image : ndarray
The image before simulating (e.g. the lens and source galaxies before optics blurring and Imaging read-out).
pixel_scales: float
The scale of each pixel in arc seconds
exposure_time_map : ndarray
An arrays representing the effective exposure time of each pixel.
psf: PSF
An arrays describing the PSF the simulated image is blurred with.
background_sky_map : ndarray
The value of background sky in every image pixel (electrons per second).
add_noise: Bool
If True poisson noise_maps is simulated and added to the image, based on the total counts in each image
pixel
noise_seed: int
A seed for random noise_maps generation
"""
image = tracer.padded_profile_image_from_grid_and_psf_shape(
grid=self.grid, psf_shape_2d=self.psf.shape_2d
)
return self.from_image(image=image.in_1d_binned, name=name)
def from_galaxies(self, galaxies):
"""Simulate imaging data for this data_type, as follows:
1) Setup the image-plane grid of the Imaging arrays, which defines the coordinates used for the ray-tracing.
2) Use this grid and the lens and source galaxies to setup a tracer, which generates the image of \
the simulated imaging data.
3) Simulate the imaging data, using a special image which ensures edge-effects don't
        degrade simulation of the telescope optics (e.g. the PSF convolution).
4) Plot the image using Matplotlib, if the plot_imaging bool is True.
5) Output the dataset to .fits format if a dataset_path and data_name are specified. Otherwise, return the simulated \
imaging data_type instance."""
tracer = ray_tracing.Tracer.from_galaxies(galaxies=galaxies)
return self.from_tracer(tracer=tracer)
def from_deflections_and_galaxies(self, deflections, galaxies):
grid = grids.Grid.uniform(
shape_2d=deflections.shape_2d,
pixel_scales=deflections.pixel_scales,
sub_size=1,
)
deflected_grid = grid - deflections.in_1d_binned
image = sum(
map(lambda g: g.profile_image_from_grid(grid=deflected_grid), galaxies)
)
return self.from_image(image=image)
class InterferometerSimulator(simulator.InterferometerSimulator):
def __init__(
self,
real_space_shape_2d,
real_space_pixel_scales,
uv_wavelengths,
sub_size,
exposure_time,
background_level,
primary_beam=None,
noise_sigma=0.1,
noise_if_add_noise_false=0.1,
noise_seed=-1,
origin=(0.0, 0.0),
):
        """A class representing an Imaging observation, using the shape of the image, the pixel scale,
psf, exposure time, etc.
Parameters
----------
shape_2d : (int, int)
            The shape of the observation. Note that we do not simulate a full Imaging frame (e.g. 2000 x 2000 pixels for \
Hubble imaging), but instead just a cut-out around the strong lens.
pixel_scales : float
The size of each pixel in arc seconds.
psf : PSF
An arrays describing the PSF kernel of the image.
exposure_time : float
The exposure time of an observation using this data_type.
background_level : float
            The level of the background sky of an observation using this data_type.
"""
super(InterferometerSimulator, self).__init__(
real_space_shape_2d=real_space_shape_2d,
real_space_pixel_scales=real_space_pixel_scales,
uv_wavelengths=uv_wavelengths,
sub_size=sub_size,
exposure_time=exposure_time,
background_level=background_level,
primary_beam=primary_beam,
noise_sigma=noise_sigma,
noise_if_add_noise_false=noise_if_add_noise_false,
noise_seed=noise_seed,
origin=origin,
)
def from_tracer(self, tracer):
"""
Create a realistic simulated image by applying effects to a plain simulated image.
Parameters
----------
name
image : ndarray
The image before simulating (e.g. the lens and source galaxies before optics blurring and Imaging read-out).
pixel_scales: float
The scale of each pixel in arc seconds
exposure_time_map : ndarray
An arrays representing the effective exposure time of each pixel.
psf: PSF
An arrays describing the PSF the simulated image is blurred with.
background_sky_map : ndarray
The value of background sky in every image pixel (electrons per second).
add_noise: Bool
If True poisson noise_maps is simulated and added to the image, based on the total counts in each image
pixel
noise_seed: int
A seed for random noise_maps generation
"""
image = tracer.profile_image_from_grid(grid=self.grid)
return self.from_real_space_image(real_space_image=image.in_1d_binned)
def from_galaxies(self, galaxies):
"""Simulate imaging data for this data_type, as follows:
1) Setup the image-plane grid of the Imaging arrays, which defines the coordinates used for the ray-tracing.
2) Use this grid and the lens and source galaxies to setup a tracer, which generates the image of \
the simulated imaging data.
3) Simulate the imaging data, using a special image which ensures edge-effects don't
degrade simulator of the telescope optics (e.g. the PSF convolution).
4) Plot the image using Matplotlib, if the plot_imaging bool is True.
5) Output the dataset to .fits format if a dataset_path and data_name are specified. Otherwise, return the simulated \
imaging data_type instance."""
tracer = ray_tracing.Tracer.from_galaxies(galaxies=galaxies)
return self.from_tracer(tracer=tracer)
def from_deflections_and_galaxies(self, deflections, galaxies):
grid = grids.Grid.uniform(
shape_2d=deflections.shape_2d,
pixel_scales=deflections.pixel_scales,
sub_size=1,
)
deflected_grid = grid - deflections.in_1d_binned
image = sum(
map(lambda g: g.profile_image_from_grid(grid=deflected_grid), galaxies)
)
return self.from_real_space_image(real_space_image=image)
| 37.328947 | 126 | 0.646693 | 1,106 | 8,511 | 4.786618 | 0.159132 | 0.036267 | 0.011334 | 0.017 | 0.876842 | 0.86853 | 0.848508 | 0.848508 | 0.814507 | 0.814507 | 0 | 0.010354 | 0.29644 | 8,511 | 227 | 127 | 37.493392 | 0.873747 | 0.502526 | 0 | 0.55914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086022 | false | 0 | 0.032258 | 0 | 0.204301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d750e0d331763b0421a82014fd9ea2b97c4249c8 | 5,451 | py | Python | tests/download_helpers/github.py | everestmz/l2tdevtools | e416061d4765bb698fdca258b400b2a44dabc21d | [
"Apache-2.0"
] | null | null | null | tests/download_helpers/github.py | everestmz/l2tdevtools | e416061d4765bb698fdca258b400b2a44dabc21d | [
"Apache-2.0"
] | null | null | null | tests/download_helpers/github.py | everestmz/l2tdevtools | e416061d4765bb698fdca258b400b2a44dabc21d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Tests for the download helper object implementations."""
from __future__ import unicode_literals
import os
import unittest
from l2tdevtools.download_helpers import github
from tests import test_lib
@unittest.skipIf(
os.environ.get('TRAVIS_OS_NAME') == 'osx',
'TLS 1.2 not supported by macOS on Travis')
class DocoptGitHubReleasesDownloadHelperTest(test_lib.BaseTestCase):
"""Tests for the docopt GitHub releases download helper."""
_DOWNLOAD_URL = 'https://github.com/docopt/docopt/releases'
_PROJECT_ORGANIZATION = 'docopt'
_PROJECT_NAME = 'docopt'
_PROJECT_VERSION = '0.6.2'
def testGetLatestVersion(self):
"""Tests the GetLatestVersion functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
latest_version = download_helper.GetLatestVersion(self._PROJECT_NAME, None)
self.assertEqual(latest_version, self._PROJECT_VERSION)
def testGetDownloadURL(self):
"""Tests the GetDownloadURL functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
download_url = download_helper.GetDownloadURL(
self._PROJECT_NAME, self._PROJECT_VERSION)
expected_download_url = (
'https://github.com/{0:s}/{1:s}/archive/{2:s}.tar.gz').format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME,
self._PROJECT_VERSION)
self.assertEqual(download_url, expected_download_url)
def testGetProjectIdentifier(self):
"""Tests the GetProjectIdentifier functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
project_identifier = download_helper.GetProjectIdentifier()
expected_project_identifier = 'com.github.{0:s}.{1:s}'.format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME)
self.assertEqual(project_identifier, expected_project_identifier)
@unittest.skipIf(
os.environ.get('TRAVIS_OS_NAME') == 'osx',
'TLS 1.2 not supported by macOS on Travis')
class LibyalGitHubReleasesDownloadHelperTest(test_lib.BaseTestCase):
"""Tests for the libyal GitHub releases download helper."""
_DOWNLOAD_URL = 'https://github.com/libyal/libevt/releases'
_PROJECT_ORGANIZATION = 'libyal'
_PROJECT_NAME = 'libevt'
_PROJECT_STATUS = 'alpha'
_PROJECT_VERSION = '20180317'
def testGetLatestVersion(self):
"""Tests the GetLatestVersion functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
latest_version = download_helper.GetLatestVersion(self._PROJECT_NAME, None)
self.assertEqual(latest_version, self._PROJECT_VERSION)
def testGetDownloadURL(self):
"""Tests the GetDownloadURL functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
download_url = download_helper.GetDownloadURL(
self._PROJECT_NAME, self._PROJECT_VERSION)
expected_download_url = (
'https://github.com/{0:s}/{1:s}/releases/download/{3:s}/'
'{1:s}-{2:s}-{3:s}.tar.gz').format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME,
self._PROJECT_STATUS, self._PROJECT_VERSION)
self.assertEqual(download_url, expected_download_url)
def testGetProjectIdentifier(self):
"""Tests the GetProjectIdentifier functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
project_identifier = download_helper.GetProjectIdentifier()
expected_project_identifier = 'com.github.{0:s}.{1:s}'.format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME)
self.assertEqual(project_identifier, expected_project_identifier)
@unittest.skipIf(
os.environ.get('TRAVIS_OS_NAME') == 'osx',
'TLS 1.2 not supported by macOS on Travis')
class Log2TimelineGitHubReleasesDownloadHelperTest(test_lib.BaseTestCase):
"""Tests for the log2timeline GitHub releases download helper."""
_DOWNLOAD_URL = 'https://github.com/log2timeline/dfvfs/releases'
_PROJECT_ORGANIZATION = 'log2timeline'
_PROJECT_NAME = 'dfvfs'
# Hard-coded version to check parsing of GitHub page.
_PROJECT_VERSION = '20180703'
def testGetLatestVersion(self):
"""Tests the GetLatestVersion functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
latest_version = download_helper.GetLatestVersion(self._PROJECT_NAME, None)
self.assertEqual(latest_version, self._PROJECT_VERSION)
def testGetDownloadURL(self):
"""Tests the GetDownloadURL functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
download_url = download_helper.GetDownloadURL(
self._PROJECT_NAME, self._PROJECT_VERSION)
expected_download_url = (
'https://github.com/{0:s}/{1:s}/releases/download/{2:s}/'
'{1:s}-{2:s}.tar.gz').format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME,
self._PROJECT_VERSION)
self.assertEqual(download_url, expected_download_url)
def testGetProjectIdentifier(self):
"""Tests the GetProjectIdentifier functions."""
download_helper = github.GitHubReleasesDownloadHelper(self._DOWNLOAD_URL)
project_identifier = download_helper.GetProjectIdentifier()
expected_project_identifier = 'com.github.{0:s}.{1:s}'.format(
self._PROJECT_ORGANIZATION, self._PROJECT_NAME)
self.assertEqual(project_identifier, expected_project_identifier)
if __name__ == '__main__':
unittest.main()
| 34.283019 | 79 | 0.749587 | 599 | 5,451 | 6.51419 | 0.155259 | 0.078934 | 0.04613 | 0.066889 | 0.823936 | 0.821886 | 0.798821 | 0.798821 | 0.798821 | 0.758073 | 0 | 0.010918 | 0.143093 | 5,451 | 158 | 80 | 34.5 | 0.824449 | 0.121813 | 0 | 0.681319 | 0 | 0.032967 | 0.136258 | 0.019072 | 0 | 0 | 0 | 0 | 0.098901 | 1 | 0.098901 | false | 0 | 0.054945 | 0 | 0.32967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
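The test classes above exercise three GitHub release URL layouts: a plain source archive (docopt), an uploaded asset with the project status embedded in the filename (libyal), and an uploaded asset without a status tag (log2timeline). A minimal stdlib sketch of the two format strings checked most directly above, with the helper names chosen here for illustration:

```python
def archive_url(organization, name, version):
    # docopt-style: source archive of the tagged version
    return 'https://github.com/{0:s}/{1:s}/archive/{2:s}.tar.gz'.format(
        organization, name, version)


def libyal_asset_url(organization, name, status, version):
    # libyal-style: uploaded release asset, status embedded in the filename
    return ('https://github.com/{0:s}/{1:s}/releases/download/{3:s}/'
            '{1:s}-{2:s}-{3:s}.tar.gz').format(
                organization, name, status, version)


print(archive_url('docopt', 'docopt', '0.6.2'))
# https://github.com/docopt/docopt/archive/0.6.2.tar.gz
print(libyal_asset_url('libyal', 'libevt', 'alpha', '20180317'))
# https://github.com/libyal/libevt/releases/download/20180317/libevt-alpha-20180317.tar.gz
```

The log2timeline variant tested above is the libyal pattern with the `-{status}` segment dropped from the filename.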
ad20bbfe62a33e23ef0434af176a8ce1aaef65d5 | 157 | py | Python | av_bakery/av_bakery/doctype/avb_customer/test_avb_customer.py | yuvabedev/AV-Bakery-APP | 2a58a6283f9e3c11fa91706b82594a6f4d9cef94 | [
"MIT"
] | null | null | null | av_bakery/av_bakery/doctype/avb_customer/test_avb_customer.py | yuvabedev/AV-Bakery-APP | 2a58a6283f9e3c11fa91706b82594a6f4d9cef94 | [
"MIT"
] | null | null | null | av_bakery/av_bakery/doctype/avb_customer/test_avb_customer.py | yuvabedev/AV-Bakery-APP | 2a58a6283f9e3c11fa91706b82594a6f4d9cef94 | [
"MIT"
] | null | null | null | # Copyright (c) 2022, mariya@yuvabe.com and Contributors
# See license.txt
# import frappe
import unittest
class TestAVBCustomer(unittest.TestCase):
pass
| 17.444444 | 56 | 0.783439 | 20 | 157 | 6.15 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.133758 | 157 | 8 | 57 | 19.625 | 0.875 | 0.535032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ad71c5b9cfc506f170cc060787584b2fadde0160 | 5,458 | py | Python | tests/test_integration.py | repole/drowsy | 1360068c52d4ef4fbb7bcb2db2e4a5ce9c3c7294 | [
"MIT"
] | 4 | 2016-06-16T20:16:38.000Z | 2020-08-18T19:51:40.000Z | tests/test_integration.py | repole/drowsy | 1360068c52d4ef4fbb7bcb2db2e4a5ce9c3c7294 | [
"MIT"
] | 2 | 2017-10-05T11:14:39.000Z | 2018-04-15T00:54:35.000Z | tests/test_integration.py | repole/drowsy | 1360068c52d4ef4fbb7bcb2db2e4a5ce9c3c7294 | [
"MIT"
] | null | null | null | """
tests.test_integration
~~~~~~~~~~~~~~~~~~~~~~
Integration tests for Drowsy.
"""
# :copyright: (c) 2016-2020 by Nicholas Repole and contributors.
# See AUTHORS for more details.
# :license: MIT - See LICENSE for more details.
from drowsy.parser import ModelQueryParamParser
from .base import DrowsyDatabaseTests
from .resources import *
class TestDrowsyIntegration(DrowsyDatabaseTests):
"""General purpose drowsy integration tests."""
@staticmethod
def test_offset(db_session):
"""Make sure providing an offset query_param works."""
query_params = {"offset": "1"}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
offset_limit_info = parser.parse_offset_limit(page_max_size=30)
offset = offset_limit_info.offset
limit = offset_limit_info.limit
result = album_resource.get_collection(
filters=parser.parse_filters(album_resource.model),
sorts=parser.parse_sorts(),
limit=limit,
offset=offset
)
assert result[0]["album_id"] == 2
@staticmethod
def test_limit(db_session):
"""Make sure providing a limit query_param works."""
query_params = {"limit": "1"}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
offset_limit_info = parser.parse_offset_limit(page_max_size=30)
offset = offset_limit_info.offset
limit = offset_limit_info.limit
result = album_resource.get_collection(
filters=parser.parse_filters(album_resource.model),
sorts=parser.parse_sorts(),
limit=limit,
offset=offset
)
assert len(result) == 1
@staticmethod
def test_get_resources_ordered(db_session):
"""Test simple get_resources sort functionality."""
query_params = {
"sort": "-album_id,title"
}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
result = album_resource.get_collection(
filters=parser.parse_filters(album_resource.model),
sorts=parser.parse_sorts()
)
assert len(result) == 347
assert result[0]["album_id"] == 347
@staticmethod
def test_get_first_page(db_session):
"""Test that we can get the first page of a set of objects."""
query_params = {
"sort": "album_id"
}
album_resource = AlbumResource(session=db_session)
parser = ModelQueryParamParser(query_params)
offset_limit_info = parser.parse_offset_limit(page_max_size=30)
offset = offset_limit_info.offset
limit = offset_limit_info.limit
result = album_resource.get_collection(
filters=parser.parse_filters(album_resource.model),
sorts=parser.parse_sorts(),
limit=limit,
offset=offset
)
assert len(result) == 30
assert result[0]["album_id"] == 1
@staticmethod
def test_get_second_page(db_session):
"""Test that we can get the second page of a set of objects."""
query_params = {
"sort": "album_id",
"page": "2"
}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
offset_limit_info = parser.parse_offset_limit(page_max_size=30)
offset = offset_limit_info.offset
limit = offset_limit_info.limit
result = album_resource.get_collection(
filters=parser.parse_filters(album_resource.model),
sorts=parser.parse_sorts(),
limit=limit,
offset=offset
)
assert len(result) == 30
assert result[0]["album_id"] == 31
@staticmethod
def test_subresource_nested_query(db_session):
"""Ensure a simple subresource query works."""
query_params = {
"tracks._subquery_.track_id-gte": 5,
"tracks.playlists._subquery_.playlist_id-lte": 5
}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
result = album_resource.get_collection(
subfilters=parser.parse_subfilters(),
embeds=parser.parse_embeds()
)
success = False
for album in result:
if album["album_id"] == 3:
assert len(album["tracks"]) == 1
assert album["tracks"][0]["track_id"] == 5
success = True
assert success
@staticmethod
def test_subresource_simple_query(db_session):
"""Ensure a simple subresource query works."""
query_params = {
"tracks._subquery_.track_id-gte": 5,
"tracks.playlists._subquery_.playlist_id-lte": 5
}
parser = ModelQueryParamParser(query_params)
album_resource = AlbumResource(session=db_session)
result = album_resource.get_collection(
subfilters=parser.parse_subfilters(),
embeds=parser.parse_embeds()
)
success = False
for album in result:
if album["album_id"] == 3:
assert len(album["tracks"]) == 1
assert album["tracks"][0]["track_id"] == 5
success = True
assert success
| 36.145695 | 71 | 0.622023 | 586 | 5,458 | 5.53413 | 0.174061 | 0.067838 | 0.055504 | 0.082023 | 0.798335 | 0.734813 | 0.721862 | 0.721862 | 0.721862 | 0.702128 | 0 | 0.012765 | 0.282338 | 5,458 | 150 | 72 | 36.386667 | 0.815165 | 0.111579 | 0 | 0.704918 | 0 | 0 | 0.061638 | 0.030506 | 0 | 0 | 0 | 0 | 0.114754 | 1 | 0.057377 | false | 0 | 0.02459 | 0 | 0.090164 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
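Across the paging tests above, `offset`, `limit`, and `page` interact in a fixed way: an explicit offset wins, otherwise `page` is translated into an offset of `(page - 1) * limit`, and the limit is capped at `page_max_size`. A hypothetical pure-Python mirror of that arithmetic (the real parsing lives in `drowsy.parser.ModelQueryParamParser.parse_offset_limit`; this sketch only reproduces the behavior the assertions rely on):

```python
def parse_offset_limit(query_params, page_max_size):
    # Hypothetical mirror of the paging rules assumed by the tests above.
    limit = min(int(query_params.get("limit", page_max_size)), page_max_size)
    if "offset" in query_params:
        offset = int(query_params["offset"])
    else:
        page = int(query_params.get("page", 1))
        offset = (page - 1) * limit
    return offset, limit


print(parse_offset_limit({"page": "2"}, 30))   # second page of 30 -> (30, 30)
print(parse_offset_limit({"limit": "1"}, 30))  # first item only -> (0, 1)
print(parse_offset_limit({"offset": "1"}, 30))  # skip one row -> (1, 30)
```

This matches the assertions above: offset 1 makes `result[0]["album_id"] == 2`, limit 1 yields one row, and page 2 starts at `album_id` 31.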
ad871c79a8a17859a411087a2f543210fb62f593 | 95 | py | Python | unit_test.py | Woufeil/MyPingSweeper | 8095b3257c74befb23dc0c52a8b602cbabd3e718 | [
"MIT"
] | null | null | null | unit_test.py | Woufeil/MyPingSweeper | 8095b3257c74befb23dc0c52a8b602cbabd3e718 | [
"MIT"
] | null | null | null | unit_test.py | Woufeil/MyPingSweeper | 8095b3257c74befb23dc0c52a8b602cbabd3e718 | [
"MIT"
] | null | null | null | import mypingsweeper as script
def test_main():
assert script.main("191.168.1.0/27") == 0
| 19 | 45 | 0.694737 | 16 | 95 | 4.0625 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1375 | 0.157895 | 95 | 4 | 46 | 23.75 | 0.675 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8e39a76c04f465dd001650fbbd73b4b0b65ae3a | 9,997 | py | Python | models/denoising/rednet.py | smikhai1/deep-image-denoising | 157180fc378d580e29c14885f91ed80d23074897 | [
"MIT"
] | 2 | 2021-03-11T13:07:06.000Z | 2022-02-17T12:40:47.000Z | models/denoising/rednet.py | smikhai1/deep-image-denoising | 157180fc378d580e29c14885f91ed80d23074897 | [
"MIT"
] | null | null | null | models/denoising/rednet.py | smikhai1/deep-image-denoising | 157180fc378d580e29c14885f91ed80d23074897 | [
"MIT"
] | null | null | null | import torch.nn as nn
import torch.nn.functional as F
class RED_Net_30(nn.Module):
"""
This baseline is a 30-layered residual encoder-decoder neural network
with symmetric skip-connections (step 2) between convolutional and
deconvolutional layers, ReLU activations, filters of constant size 3x3, a
constant number of channels (128) in the activations of each layer,
padding = 1, stride = 1, and no max-pooling.
"""
def __init__(self):
super(RED_Net_30, self).__init__()
self.conv_1 = nn.Sequential(
nn.Conv2d(1, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_2 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_3 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_4 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_5 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_6 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_7 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_8 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_9 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_10 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_11 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_12 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_13 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_14 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.conv_15 = nn.Sequential(
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_1 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_2 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_3 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_4 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_5 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_6 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_7 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_8 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_9 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_10 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_11 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_12 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_13 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_14 = nn.Sequential(
nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
nn.ReLU()
)
self.deconv_15 = nn.Sequential(
nn.ConvTranspose2d(128, 1, 3, stride=1, padding=1),
nn.ReLU()
)
def forward(self, X):
X = self.conv_1(X)
X_2 = self.conv_2(X)
X = self.conv_3(X_2)
X_4 = self.conv_4(X)
X = self.conv_5(X_4)
X_6 = self.conv_6(X)
X = self.conv_7(X_6)
X_8 = self.conv_8(X)
X = self.conv_9(X_8)
X_10 = self.conv_10(X)
X = self.conv_11(X_10)
X_12 = self.conv_12(X)
X = self.conv_13(X_12)
X_14 = self.conv_14(X)
X = self.conv_15(X_14)
X = self.deconv_1(X)
X = self.deconv_2(F.relu(X + X_14))
X = self.deconv_3(X)
X = self.deconv_4(F.relu(X + X_12))
X = self.deconv_5(X)
X = self.deconv_6(F.relu(X + X_10))
X = self.deconv_7(X)
X = self.deconv_8(F.relu(X + X_8))
X = self.deconv_9(X)
X = self.deconv_10(F.relu(X + X_6))
X = self.deconv_11(X)
X = self.deconv_12(F.relu(X + X_4))
X = self.deconv_13(X)
X = self.deconv_14(F.relu(X + X_2))
X = self.deconv_15(X)
return X
class RED_Net_20(nn.Module):
"""
This baseline is a 20-layered residual encoder-decoder neural network
with symmetric skip-connections (step 2) between convolutional and
deconvolutional layers, ReLU activations with batch normalization, filters
of constant size 3x3, a constant number of channels (64) in the activations
of each layer, padding = 1, stride = 1, no max-pooling, and a sigmoid
output layer.
"""
def __init__(self):
super(RED_Net_20, self).__init__()
self.conv_1 = nn.Sequential(
nn.Conv2d(1, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_2 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_3 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_4 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_5 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_6 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_7 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_8 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_9 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.conv_10 = nn.Sequential(
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_1 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_2 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_3 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_4 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_5 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_6 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_7 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_8 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_9 = nn.Sequential(
nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(True)
)
self.deconv_10 = nn.Sequential(
nn.ConvTranspose2d(64, 1, 3, stride=1, padding=1),
nn.BatchNorm2d(1),
nn.Sigmoid()
)
def forward(self, X):
X = self.conv_1(X)
X = self.conv_2(X)
X_3 = self.conv_3(X)
X = self.conv_4(X_3)
X_5 = self.conv_5(X)
X = self.conv_6(X_5)
X_7 = self.conv_7(X)
X = self.conv_8(X_7)
X_9 = self.conv_9(X)
X = self.conv_10(X_9)
X = self.deconv_1(X)
X = self.deconv_2(F.relu(X + X_9))
X = self.deconv_3(X)
X = self.deconv_4(F.relu(X + X_7))
X = self.deconv_5(X)
X = self.deconv_6(F.relu(X + X_5))
X = self.deconv_7(X)
X = self.deconv_8(F.relu(X + X_3))
X = self.deconv_9(X)
X = self.deconv_10(X)
return X | 26.44709 | 83 | 0.505652 | 1,340 | 9,997 | 3.655224 | 0.059701 | 0.033687 | 0.142915 | 0.153124 | 0.901797 | 0.879747 | 0.879747 | 0.85443 | 0.85443 | 0.843814 | 0 | 0.111828 | 0.364009 | 9,997 | 378 | 84 | 26.44709 | 0.65854 | 0.062919 | 0 | 0.609929 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014184 | false | 0 | 0.007092 | 0 | 0.035461 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
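Both `forward` methods above wire the even-numbered deconvolutional layers to earlier convolutional outputs in a mirror-symmetric pattern: with n convolutional layers, deconv layer d receives the output of conv layer n + 1 - d. A stdlib sketch of that pairing (n = 15 reproduces the residual additions in `RED_Net_30.forward`, n = 10 those in `RED_Net_20.forward`):

```python
def skip_pairs(n):
    # (deconv index, conv index) pairs for the residual additions in forward()
    return [(d, n + 1 - d) for d in range(2, n, 2)]


print(skip_pairs(15))  # [(2, 14), (4, 12), (6, 10), (8, 8), (10, 6), (12, 4), (14, 2)]
print(skip_pairs(10))  # [(2, 9), (4, 7), (6, 5), (8, 3)]
```

For example, the pair (8, 8) for n = 15 corresponds to `self.deconv_8(F.relu(X + X_8))` in `RED_Net_30.forward`.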
d152eb5edf56e20b76acf2c0f0ab42fbb0121142 | 11,652 | py | Python | tests/commands/test_build_full_response.py | atviriduomenys/spinta | 77a10e201f8cdc63143fce7996fd0898acb1ff58 | [
"MIT"
] | 2 | 2019-03-14T06:41:14.000Z | 2019-03-26T11:48:14.000Z | tests/commands/test_build_full_response.py | sirex/spinta | 77a10e201f8cdc63143fce7996fd0898acb1ff58 | [
"MIT"
] | 44 | 2019-04-05T15:52:45.000Z | 2022-03-30T07:41:33.000Z | tests/commands/test_build_full_response.py | sirex/spinta | 77a10e201f8cdc63143fce7996fd0898acb1ff58 | [
"MIT"
] | 1 | 2019-04-01T09:54:27.000Z | 2019-04-01T09:54:27.000Z | from spinta import commands
from spinta.commands import build_full_response
from spinta.components import Model
def create_model(context, schema):
manifest = context.get('store').manifest
data = {
'type': 'model',
'name': 'model',
**schema,
}
model = Model()
model.eid = '9244f3a6-a672-4aac-bb1c-831646264a51'
commands.load(context, model, data, manifest)
commands.link(context, model)
return model
def test_scalar_with_empty_saved(context):
model = create_model(context, {
'properties': {
'scalar': {'type': 'string'},
}
})
patch = {'scalar': '42'}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'scalar': '42',
}
def test_scalar_overwrite_saved(context):
model = create_model(context, {
'properties': {
'scalar': {'type': 'string'},
}
})
patch = {'scalar': '42'}
saved = {'scalar': '13'}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'scalar': '42',
}
def test_empty_scalar_with_non_empty_saved(context):
model = create_model(context, {
'properties': {
'scalar': {'type': 'string'},
}
})
patch = {}
saved = {'scalar': '42'}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {'scalar': '42'}
def test_obj_empty_patch_and_saved(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
# test empty patch and empty saved
patch = {}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {'obj': {'foo': None}}
def test_obj_empty_saved_and_empty_obj(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
patch = {'obj': {}}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': None}
}
def test_obj_with_default(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string', 'default': 'def_val'},
},
},
},
})
patch = {'obj': {}}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': 'def_val'}
}
def test_obj_with_default_and_non_empty_saved(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string', 'default': 'def_val'},
},
},
},
})
patch = {'obj': {}}
saved = {'obj': {'foo': 'non_default'}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
# as nothing has changed, return the already saved value
assert full_patch == {
'obj': {'foo': 'non_default'}
}
def test_obj_non_empty_patch(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
patch = {'obj': {'foo': '42'}}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': '42'}
}
def test_obj_non_empty_patch_and_saved(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
patch = {'obj': {'foo': '42'}}
saved = {'obj': {'foo': '13'}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': '42'}
}
def test_obj_empty_patch_and_non_empty_saved(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
patch = {}
saved = {'obj': {'foo': '13'}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {'obj': {'foo': '13'}}
def test_obj_null_patch_and_non_empty_saved(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
})
patch = {'obj': {'foo': None}}
saved = {'obj': {'foo': '13'}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': None}
}
def test_nested_obj(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
'sub': {
'type': 'object',
'properties': {
'foos': {'type': 'string'},
},
},
},
},
},
})
patch = {'obj': {'foo': None, 'sub': {'foos': '420'}}}
saved = {'obj': {'foo': '13'}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {'foo': None, 'sub': {'foos': '420'}}
}
def test_complex_obj(context):
model = create_model(context, {
'properties': {
'obj': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
'bar': {'type': 'string', 'default': 'default'},
'sub': {
'type': 'object',
'properties': {
'foos': {'type': 'string'},
'bars': {'type': 'string'},
},
}
},
},
},
})
# test that missing values from patch are still filled with defaults
patch = {'obj': {
'foo': '42',
'sub': {
'foos': '420',
}
}}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {
'foo': '42',
'bar': 'default',
'sub': {
'foos': '420',
'bars': None,
}
}
}
# test that missing obj values are filled in from saved values
patch = {'obj': {
'foo': '42',
'sub': {
'foos': '420',
}
}}
saved = {'obj': {
'foo': '00',
'bar': None, # XXX: if value is null, should it be converted to default
'sub': {
'foos': '000',
'bars': 'abc',
}
}}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'obj': {
'foo': '42',
'bar': None,
'sub': {
'foos': '420',
'bars': 'abc',
}
}
}
def test_array_empty_patch_and_saved(context):
model = create_model(context, {
'properties': {
'list': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
},
})
patch = {}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {'list': None}
def test_array_empty_saved(context):
model = create_model(context, {
'properties': {
'list': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
},
})
patch = {'list': []}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'list': [],
}
def test_array_non_empty_patch_and_empty_saved(context):
model = create_model(context, {
'properties': {
'list': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
},
})
patch = {'list': [{
'foo': '42',
}]}
saved = {}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'list': [{
'foo': '42',
}]
}
def test_array_non_empty_patch_and_non_empty_saved(context):
model = create_model(context, {
'properties': {
'list': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
},
})
patch = {'list': []}
saved = {'list': [{
'foo': '42',
}]}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'list': []
}
def test_array_overwrite_saved(context):
model = create_model(context, {
'properties': {
'list': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'foo': {'type': 'string'},
},
},
},
},
})
patch = {'list': [{
'foo': '49',
}]}
saved = {'list': [{
'foo': '42',
}]}
full_patch = build_full_response(
context, model,
patch=patch,
saved=saved
)
assert full_patch == {
'list': [{
'foo': '49',
}]
}
| 22.625243 | 80 | 0.412805 | 939 | 11,652 | 4.920128 | 0.090522 | 0.101299 | 0.073593 | 0.074026 | 0.829437 | 0.812987 | 0.805195 | 0.774242 | 0.759307 | 0.758658 | 0 | 0.014724 | 0.434604 | 11,652 | 514 | 81 | 22.669261 | 0.686551 | 0.023172 | 0 | 0.663717 | 0 | 0 | 0.130362 | 0.003165 | 0 | 0 | 0 | 0 | 0.042035 | 1 | 0.042035 | false | 0 | 0.006637 | 0 | 0.050885 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
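The expectations in the tests above reduce to a precedence rule per declared property: a value present in the patch (even an explicit None) wins over the saved value, the saved value wins over the schema default, and a missing default falls back to None. A hypothetical flat-object sketch of that rule (the real `build_full_response` additionally recurses into nested objects and arrays):

```python
def merge_properties(properties, patch, saved):
    # Precedence: patch value > saved value > declared default > None
    full = {}
    for name, schema in properties.items():
        if name in patch:
            full[name] = patch[name]
        elif name in saved:
            full[name] = saved[name]
        else:
            full[name] = schema.get('default')
    return full


props = {'foo': {'type': 'string', 'default': 'def_val'}}
print(merge_properties(props, {}, {}))                       # {'foo': 'def_val'}
print(merge_properties(props, {}, {'foo': 'x'}))             # {'foo': 'x'}
print(merge_properties(props, {'foo': None}, {'foo': 'x'}))  # {'foo': None}
```

The three calls mirror `test_obj_with_default`, `test_obj_with_default_and_non_empty_saved`, and `test_obj_null_patch_and_non_empty_saved` above.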
0f1e2b57ee82dac64aedaa3b7958c8ba1af6d299 | 41 | py | Python | seabed/__init__.py | noaa-ocs-modeling/seabed | 95c9a2fb8a2fe678d783a134af69be6873d6a595 | [
"CC0-1.0"
] | null | null | null | seabed/__init__.py | noaa-ocs-modeling/seabed | 95c9a2fb8a2fe678d783a134af69be6873d6a595 | [
"CC0-1.0"
] | null | null | null | seabed/__init__.py | noaa-ocs-modeling/seabed | 95c9a2fb8a2fe678d783a134af69be6873d6a595 | [
"CC0-1.0"
] | null | null | null | from .ngdc import NGDCSeabedDescriptions
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0f2e1bf683130047e30f9689dd6d4ee63ac073ef | 21 | py | Python | codes/nlper/modules/__init__.py | zhaoxiongjun/NLPer-Arsenal | c62cc155de5f471a522a26199893348abd9184f4 | [
"MIT"
] | 839 | 2020-07-13T00:53:01.000Z | 2022-03-31T15:09:38.000Z | codes/nlper/modules/__init__.py | 760bdteam/NLPer-Arsenal | 17f34ec68c83babf8c3e5959fed14b9f251f869f | [
"MIT"
] | 8 | 2021-07-16T01:31:50.000Z | 2022-03-11T12:14:22.000Z | codes/nlper/modules/__init__.py | 760bdteam/NLPer-Arsenal | 17f34ec68c83babf8c3e5959fed14b9f251f869f | [
"MIT"
] | 108 | 2021-04-26T01:01:13.000Z | 2022-03-26T08:47:04.000Z | from .mlp import MLP
| 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0f2ed8dbdd75c324a2270542392afc144cea3b11 | 174 | py | Python | wtu/task/__init__.py | ag-sc/WTU | 17d28696ba0a9391bf6f29dde91d2201c74ac292 | [
"Apache-2.0"
] | 2 | 2018-07-30T15:00:38.000Z | 2021-05-21T08:38:17.000Z | wtu/task/__init__.py | ag-sc/WTU | 17d28696ba0a9391bf6f29dde91d2201c74ac292 | [
"Apache-2.0"
] | null | null | null | wtu/task/__init__.py | ag-sc/WTU | 17d28696ba0a9391bf6f29dde91d2201c74ac292 | [
"Apache-2.0"
] | null | null | null | from abc import ABCMeta, abstractmethod
from wtu.table import Table
class Task(metaclass=ABCMeta):
@abstractmethod
def run(self, table: Table) -> bool:
pass
| 21.75 | 40 | 0.712644 | 22 | 174 | 5.636364 | 0.681818 | 0.33871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 174 | 7 | 41 | 24.857143 | 0.898551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0f43da08fb7568b744096fc683d41d7f52de795e | 12,134 | py | Python | analyze_foldamers/tests/test_angle_distributions.py | shirtsgroup/analyze_foldamers | 17a7b948d1d0d4fbfb1d84d58753289404fb99a9 | [
"MIT"
] | null | null | null | analyze_foldamers/tests/test_angle_distributions.py | shirtsgroup/analyze_foldamers | 17a7b948d1d0d4fbfb1d84d58753289404fb99a9 | [
"MIT"
] | 33 | 2020-08-05T23:00:56.000Z | 2022-03-21T22:37:03.000Z | analyze_foldamers/tests/test_angle_distributions.py | shirtsgroup/analyze_foldamers | 17a7b948d1d0d4fbfb1d84d58753289404fb99a9 | [
"MIT"
] | null | null | null | """
Unit and regression test for the analyze_foldamers package.
"""
# Import package, test suite, and other packages as needed
import analyze_foldamers
import pytest
import sys
import os
import pickle
from cg_openmm.cg_model.cgmodel import CGModel
from analyze_foldamers.parameters.bond_distributions import *
from analyze_foldamers.parameters.angle_distributions import *
def test_analyze_foldamers_imported():
"""Sample test, will always pass so long as import statement worked"""
assert "analyze_foldamers" in sys.modules
current_path = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(current_path, 'test_data')
def test_bond_dist_calc_pdb(tmpdir):
"""Test bond distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.pdb")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
bond_hist_data = calc_bond_length_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/bond_hist_pdb.pdf",
)
assert os.path.isfile(f"{output_directory}/bond_hist_pdb.pdf")
def test_angle_dist_calc_pdb(tmpdir):
"""Test angle distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.pdb")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
angle_hist_data = calc_bond_angle_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/angle_hist_pdb.pdf",
)
assert os.path.isfile(f"{output_directory}/angle_hist_pdb.pdf")
def test_torsion_dist_calc_pdb(tmpdir):
"""Test torsion distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.pdb")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
torsion_hist_data = calc_torsion_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/torsion_hist_pdb.pdf",
)
assert os.path.isfile(f"{output_directory}/torsion_hist_pdb.pdf")
def test_bond_dist_calc_dcd(tmpdir):
"""Test bond distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory dcd file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
bond_hist_data = calc_bond_length_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/bond_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/bond_hist_dcd.pdf")
def test_bond_dist_calc_dcd_multi(tmpdir):
"""Test bond distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in trajectory dcd files:
traj_file_list = []
for i in range(3):
traj_file_list.append(os.path.join(data_path, f"replica_{i+1}.dcd"))
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
bond_hist_data = calc_bond_length_distribution(
cgmodel,
traj_file_list,
plotfile=f"{output_directory}/bond_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/bond_hist_dcd.pdf")
def test_angle_dist_calc_dcd(tmpdir):
"""Test angle distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory dcd file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
angle_hist_data = calc_bond_angle_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/angle_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/angle_hist_dcd.pdf")
def test_angle_dist_calc_dcd_multi(tmpdir):
"""Test angle distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in trajectory dcd files:
traj_file_list = []
for i in range(3):
traj_file_list.append(os.path.join(data_path, f"replica_{i+1}.dcd"))
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
angle_hist_data = calc_bond_angle_distribution(
cgmodel,
traj_file_list,
plotfile=f"{output_directory}/angle_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/angle_hist_dcd.pdf")
def test_torsion_dist_calc_dcd(tmpdir):
"""Test torsion distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory dcd file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
torsion_hist_data = calc_torsion_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/torsion_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/torsion_hist_dcd.pdf")
def test_torsion_dist_calc_dcd_multi(tmpdir):
"""Test torsion distribution calculator"""
output_directory = tmpdir.mkdir("output")
# Load in trajectory dcd files:
traj_file_list = []
for i in range(3):
traj_file_list.append(os.path.join(data_path, f"replica_{i+1}.dcd"))
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
torsion_hist_data = calc_torsion_distribution(
cgmodel,
traj_file_list,
plotfile=f"{output_directory}/torsion_hist_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/torsion_hist_dcd.pdf")
def test_ramachandran_calc_pdb(tmpdir):
"""Test ramachandran calculation/plotting"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.pdb")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
rama_hist, xedges, yedges = calc_ramachandran(
cgmodel,
traj_file,
plotfile=f"{output_directory}/ramachandran_pdb.pdf",
)
assert os.path.isfile(f"{output_directory}/ramachandran_pdb.pdf")
# Fit ramachandran data to 2d Gaussian:
param_opt, param_cov = fit_ramachandran_data(rama_hist, xedges, yedges)
def test_ramachandran_calc_dcd(tmpdir):
"""Test ramachandran calculation/plotting"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
rama_hist, xedges, yedges = calc_ramachandran(
cgmodel,
traj_file,
plotfile=f"{output_directory}/ramachandran_dcd.pdf",
)
assert os.path.isfile(f"{output_directory}/ramachandran_dcd.pdf")
# Fit ramachandran data to 2d Gaussian:
param_opt, param_cov = fit_ramachandran_data(rama_hist, xedges, yedges)
def test_2d_dist_bond_bond_dcd(tmpdir):
"""Test general 2d histogram - bond-bond correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/bond_bond_2d.pdf",
xvar_name='bb_bb',
yvar_name='bb_sc',
)
assert os.path.isfile(f"{output_directory}/bond_bond_2d.pdf")
def test_2d_dist_bond_angle_dcd(tmpdir):
"""Test general 2d histogram - bond-angle correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/bond_angle_2d.pdf",
xvar_name='bb_bb',
yvar_name='bb_bb_sc',
)
assert os.path.isfile(f"{output_directory}/bond_angle_2d.pdf")
def test_2d_dist_bond_torsion_dcd(tmpdir):
"""Test general 2d histogram - bond-torsion correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/bond_torsion_2d.pdf",
xvar_name='bb_bb',
yvar_name='bb_bb_bb_sc',
)
assert os.path.isfile(f"{output_directory}/bond_torsion_2d.pdf")
def test_2d_dist_angle_angle_dcd(tmpdir):
"""Test general 2d histogram - angle-angle correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/angle_angle_2d.pdf",
xvar_name='bb_bb_bb',
yvar_name='sc_bb_bb',
)
assert os.path.isfile(f"{output_directory}/angle_angle_2d.pdf")
def test_2d_dist_angle_torsion_dcd(tmpdir):
"""Test general 2d histogram - angle-torsion correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/angle_torsion_2d.pdf",
xvar_name='bb_bb_sc',
yvar_name='bb_bb_bb_sc',
)
assert os.path.isfile(f"{output_directory}/angle_torsion_2d.pdf")
def test_2d_dist_torsion_torsion_dcd(tmpdir):
"""Test general 2d histogram - torsion-torsion correlation"""
output_directory = tmpdir.mkdir("output")
# Load in a trajectory pdb file:
traj_file = os.path.join(data_path, "replica_1.dcd")
# Load in a CGModel:
cgmodel_path = os.path.join(data_path, "stored_cgmodel.pkl")
cgmodel = pickle.load(open(cgmodel_path, "rb"))
hist_out, xedges, yedges = calc_2d_distribution(
cgmodel,
traj_file,
plotfile=f"{output_directory}/torsion_torsion_2d.pdf",
xvar_name='bb_bb_bb_sc',
yvar_name='sc_bb_bb_sc',
)
assert os.path.isfile(f"{output_directory}/torsion_torsion_2d.pdf")
| 30.718987 | 79 | 0.672243 | 1,631 | 12,134 | 4.714286 | 0.063765 | 0.042138 | 0.04552 | 0.061907 | 0.927039 | 0.918455 | 0.910261 | 0.841072 | 0.834439 | 0.806347 | 0 | 0.005471 | 0.216664 | 12,134 | 395 | 80 | 30.718987 | 0.803472 | 0.149745 | 0 | 0.657534 | 0 | 0 | 0.203947 | 0.125687 | 0 | 0 | 0 | 0 | 0.082192 | 1 | 0.082192 | false | 0 | 0.041096 | 0 | 0.123288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0f9066f3e6deaa306f78ab0f73c61b11930f29bb | 7,088 | py | Python | president/tests/gameplay/test_settings.py | BrandonShute/President | abd35471e4a861d25bcf153660a947536c69738b | [
"MIT"
] | null | null | null | president/tests/gameplay/test_settings.py | BrandonShute/President | abd35471e4a861d25bcf153660a947536c69738b | [
"MIT"
] | null | null | null | president/tests/gameplay/test_settings.py | BrandonShute/President | abd35471e4a861d25bcf153660a947536c69738b | [
"MIT"
] | null | null | null | from unittest import TestCase
from gameplay.settings import game_settings, TRUMP_CARD_SHORT_NAME
from gameplay.settings import START_CARD_SUIT, START_CARD_SHORT_NAME
from gameplay.president_card import PresidentCard
from gameplay.card_constants import SPADES, CLUBS, HEARTS, DIAMONDS
class SettingsTest(TestCase):
def test_when_import_game_settings_then_class_properties_are_not_null(self):
self.assertIsNotNone(game_settings.trump_card_rank)
self.assertIsNotNone(game_settings.card_rankings)
def test_current_play_has_same_number_as_last_play_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, '4'),
PresidentCard(SPADES, '4'),
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
]
game_settings.validate_cards(cards, last_played)
def test_when_current_play_is_not_trump_but_different_number_than_last_play_then_throw_exception(self):
cards = [
PresidentCard(HEARTS, '4'),
PresidentCard(DIAMONDS, '4'),
PresidentCard(SPADES, '4'),
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
]
with self.assertRaises(Exception) as context:
game_settings.validate_cards(cards, last_played)
expected_error = 'Expected 2 cards but 3 were played.'
self.assertEqual(expected_error, str(context.exception))
def test_when_current_play_is_trump_and_one_less_than_last_played_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, TRUMP_CARD_SHORT_NAME)
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
]
game_settings.validate_cards(cards, last_played)
def test_when_last_play_is_one_card_and_one_trump_card_is_played_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, TRUMP_CARD_SHORT_NAME),
]
last_played = [
PresidentCard(HEARTS, '3'),
]
game_settings.validate_cards(cards, last_played)
def test_when_last_play_is_more_than_one_card_and_same_number_of_trumps_played_then_throw_exception(self):
cards = [
PresidentCard(HEARTS, TRUMP_CARD_SHORT_NAME),
PresidentCard(DIAMONDS, TRUMP_CARD_SHORT_NAME),
]
last_played = [
PresidentCard(CLUBS, '3'),
PresidentCard(DIAMONDS, '3'),
]
with self.assertRaises(Exception) as context:
game_settings.validate_cards(cards, last_played)
expected_error = 'When playing trump cards, you must play one less than previous play.'
self.assertEqual(expected_error, str(context.exception))
def test_when_last_play_is_three_card_and_current_play_is_one_trump_then_throw_exception(self):
cards = [
PresidentCard(HEARTS, TRUMP_CARD_SHORT_NAME)
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
PresidentCard(DIAMONDS, '3'),
]
with self.assertRaises(Exception) as context:
game_settings.validate_cards(cards, last_played)
expected_error = 'When playing trump cards, you must play one less than previous play.'
self.assertEqual(expected_error, str(context.exception))
def test_when_playing_one_card_on_the_first_play_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, '3'),
]
last_played = None
game_settings.validate_cards(cards, last_played)
def test_when_playing_multiple_cards_on_the_first_play_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, '3'),
PresidentCard(DIAMONDS, '3'),
]
last_played = None
game_settings.validate_cards(cards, last_played)
def test_when_new_play_has_lower_rank_than_last_play_then_throw_exception(self):
cards = [
PresidentCard(HEARTS, '4'),
]
last_played = [
PresidentCard(HEARTS, '5'),
]
with self.assertRaises(Exception) as context:
game_settings.validate_cards(cards, last_played)
expected_error = 'Cannot play a lower card than the last player.'
self.assertEqual(expected_error, str(context.exception))
def test_when_multiple_cards_are_played_and_they_are_not_the_same_rank_then_throw_exception(self):
cards = [
PresidentCard(HEARTS, '4'),
PresidentCard(DIAMONDS, '5'),
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
]
with self.assertRaises(Exception) as context:
game_settings.validate_cards(cards, last_played)
expected_error = 'When playing multiple cards they must be the same rank.'
self.assertEqual(expected_error, str(context.exception))
def test_when_multiple_cards_are_played_and_they_are_the_same_rank_then_do_not_throw_exception(self):
cards = [
PresidentCard(HEARTS, '4'),
PresidentCard(DIAMONDS, '4'),
]
last_played = [
PresidentCard(HEARTS, '3'),
PresidentCard(CLUBS, '3'),
]
game_settings.validate_cards(cards, last_played)
def test_when_passed_start_card_then_is_start_card_returns_true(self):
card = PresidentCard(START_CARD_SUIT, START_CARD_SHORT_NAME)
result = game_settings.is_start_card(card)
self.assertEqual(True, result)
def test_when_passed_card_with_different_suit_then_is_start_card_returns_false(self):
card = PresidentCard(HEARTS, START_CARD_SHORT_NAME)
result = game_settings.is_start_card(card)
self.assertEqual(False, result)
def test_when_passed_card_with_different_rank_then_is_start_card_returns_false(self):
card = PresidentCard(START_CARD_SUIT, '7')
result = game_settings.is_start_card(card)
self.assertEqual(False, result)
def test_when_passed_trump_card_then_is_trump_card_returns_true(self):
card = PresidentCard(HEARTS, TRUMP_CARD_SHORT_NAME)
result = game_settings.is_trump_card(card)
self.assertEqual(True, result)
def test_when_passed_card_other_than_trump_rank_then_is_trump_card_returns_false(self):
card = PresidentCard(HEARTS, '7')
result = game_settings.is_trump_card(card)
self.assertEqual(False, result)
def test_when_sort_multiple_president_cards_then_should_sort_by(self):
card = PresidentCard(START_CARD_SUIT, '7')
result = game_settings.is_start_card(card)
self.assertEqual(False, result)
def test_when_validate_cards_for_no_cards_played_then_do_not_throw_exception(self):
cards = []
last_played = [
PresidentCard(HEARTS, '3'),
]
game_settings.validate_cards(cards, last_played)
| 34.407767 | 110 | 0.680023 | 838 | 7,088 | 5.307876 | 0.116945 | 0.056205 | 0.044514 | 0.06205 | 0.805306 | 0.782824 | 0.75607 | 0.725944 | 0.710656 | 0.672437 | 0 | 0.006701 | 0.242099 | 7,088 | 205 | 111 | 34.57561 | 0.821296 | 0 | 0 | 0.614379 | 0 | 0 | 0.043178 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.124183 | false | 0.03268 | 0.039216 | 0 | 0.169935 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0f9444609cf8d59976ce2cb7f684308de6d3dfe4 | 44,010 | py | Python | cryptoapis/api/informative_api.py | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | 5 | 2021-05-17T04:45:03.000Z | 2022-03-23T12:51:46.000Z | cryptoapis/api/informative_api.py | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | null | null | null | cryptoapis/api/informative_api.py | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | 2 | 2021-06-02T07:32:26.000Z | 2022-02-12T02:36:23.000Z | """
CryptoAPIs
Crypto APIs 2.0 is a complex and innovative infrastructure layer that radically simplifies the development of any Blockchain and Crypto related applications. Organized around REST, Crypto APIs 2.0 can assist both novice Bitcoin/Ethereum enthusiasts and crypto experts with the development of their blockchain applications. Crypto APIs 2.0 provides unified endpoints and data, raw data, automatic tokens and coins forwardings, callback functionalities, and much more. # noqa: E501
The version of the OpenAPI document: 2.0.0
Contact: developers@cryptoapis.io
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from cryptoapis.api_client import ApiClient, Endpoint as _Endpoint
from cryptoapis.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from cryptoapis.model.get_transaction_request_details_r import GetTransactionRequestDetailsR
from cryptoapis.model.get_wallet_asset_details_r import GetWalletAssetDetailsR
from cryptoapis.model.get_wallet_transaction_details_by_transaction_idr import GetWalletTransactionDetailsByTransactionIDR
from cryptoapis.model.inline_response40034 import InlineResponse40034
from cryptoapis.model.inline_response40035 import InlineResponse40035
from cryptoapis.model.inline_response40041 import InlineResponse40041
from cryptoapis.model.inline_response40045 import InlineResponse40045
from cryptoapis.model.inline_response40046 import InlineResponse40046
from cryptoapis.model.inline_response4007 import InlineResponse4007
from cryptoapis.model.inline_response40134 import InlineResponse40134
from cryptoapis.model.inline_response40135 import InlineResponse40135
from cryptoapis.model.inline_response40141 import InlineResponse40141
from cryptoapis.model.inline_response40145 import InlineResponse40145
from cryptoapis.model.inline_response40146 import InlineResponse40146
from cryptoapis.model.inline_response4017 import InlineResponse4017
from cryptoapis.model.inline_response402 import InlineResponse402
from cryptoapis.model.inline_response40334 import InlineResponse40334
from cryptoapis.model.inline_response40335 import InlineResponse40335
from cryptoapis.model.inline_response40341 import InlineResponse40341
from cryptoapis.model.inline_response40345 import InlineResponse40345
from cryptoapis.model.inline_response40346 import InlineResponse40346
from cryptoapis.model.inline_response4037 import InlineResponse4037
from cryptoapis.model.inline_response4041 import InlineResponse4041
from cryptoapis.model.inline_response409 import InlineResponse409
from cryptoapis.model.inline_response415 import InlineResponse415
from cryptoapis.model.inline_response422 import InlineResponse422
from cryptoapis.model.inline_response429 import InlineResponse429
from cryptoapis.model.inline_response500 import InlineResponse500
from cryptoapis.model.list_deposit_addresses_r import ListDepositAddressesR
from cryptoapis.model.list_supported_tokens_r import ListSupportedTokensR
from cryptoapis.model.list_wallet_transactions_r import ListWalletTransactionsR
class InformativeApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.get_transaction_request_details_endpoint = _Endpoint(
settings={
'response_type': (GetTransactionRequestDetailsR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/transactionRequests/{transactionRequestId}',
'operation_id': 'get_transaction_request_details',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'transaction_request_id',
'context',
],
'required': [
'transaction_request_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'transaction_request_id':
(str,),
'context':
(str,),
},
'attribute_map': {
'transaction_request_id': 'transactionRequestId',
'context': 'context',
},
'location_map': {
'transaction_request_id': 'path',
'context': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_wallet_asset_details_endpoint = _Endpoint(
settings={
'response_type': (GetWalletAssetDetailsR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/wallets/{walletId}/{blockchain}/{network}',
'operation_id': 'get_wallet_asset_details',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'blockchain',
'network',
'wallet_id',
'context',
],
'required': [
'blockchain',
'network',
'wallet_id',
],
'nullable': [
],
'enum': [
'blockchain',
'network',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('blockchain',): {
"BITCOIN": "bitcoin",
"BITCOIN-CASH": "bitcoin-cash",
"LITECOIN": "litecoin",
"DOGECOIN": "dogecoin",
"DASH": "dash",
"ETHEREUM": "ethereum",
"ETHEREUM-CLASSIC": "ethereum-classic",
"ZCASH": "zcash",
"BINANCE-SMART-CHAIN": "binance-smart-chain"
},
('network',): {
"MAINNET": "mainnet",
"TESTNET": "testnet",
"ROPSTEN": "ropsten",
"MORDOR": "mordor"
},
},
'openapi_types': {
'blockchain':
(str,),
'network':
(str,),
'wallet_id':
(str,),
'context':
(str,),
},
'attribute_map': {
'blockchain': 'blockchain',
'network': 'network',
'wallet_id': 'walletId',
'context': 'context',
},
'location_map': {
'blockchain': 'path',
'network': 'path',
'wallet_id': 'path',
'context': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_wallet_transaction_details_by_transaction_id_endpoint = _Endpoint(
settings={
'response_type': (GetWalletTransactionDetailsByTransactionIDR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/wallets/{blockchain}/{network}/transactions/{transactionId}',
'operation_id': 'get_wallet_transaction_details_by_transaction_id',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'blockchain',
'network',
'transaction_id',
'context',
],
'required': [
'blockchain',
'network',
'transaction_id',
],
'nullable': [
],
'enum': [
'blockchain',
'network',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('blockchain',): {
"BITCOIN": "bitcoin",
"ETHEREUM": "ethereum",
"LITECOIN": "litecoin",
"BITCOIN-CASH": "bitcoin-cash",
"ETHEREUM-CLASSIC": "ethereum-classic",
"DOGECOIN": "dogecoin",
"DASH": "dash",
"ZCASH": "zcash",
"BINANCE-SMART-CHAIN": "binance-smart-chain"
},
('network',): {
"MAINNET": "mainnet",
"TESTNET": "testnet",
"ROPSTEN": "ropsten",
"MORDOR": "mordor"
},
},
'openapi_types': {
'blockchain':
(str,),
'network':
(str,),
'transaction_id':
(str,),
'context':
(str,),
},
'attribute_map': {
'blockchain': 'blockchain',
'network': 'network',
'transaction_id': 'transactionId',
'context': 'context',
},
'location_map': {
'blockchain': 'path',
'network': 'path',
'transaction_id': 'path',
'context': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_deposit_addresses_endpoint = _Endpoint(
settings={
'response_type': (ListDepositAddressesR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/wallets/{walletId}/{blockchain}/{network}/addresses',
'operation_id': 'list_deposit_addresses',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'blockchain',
'network',
'wallet_id',
'context',
],
'required': [
'blockchain',
'network',
'wallet_id',
],
'nullable': [
],
'enum': [
'blockchain',
'network',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('blockchain',): {
"BITCOIN": "bitcoin",
"BITCOIN-CASH": "bitcoin-cash",
"LITECOIN": "litecoin",
"DOGECOIN": "dogecoin",
"DASH": "dash",
"ETHEREUM": "ethereum",
"ETHEREUM-CLASSIC": "ethereum-classic",
"ZCASH": "zcash",
"BINANCE-SMART-CHAIN": "binance-smart-chain"
},
('network',): {
"MAINNET": "mainnet",
"TESTNET": "testnet",
"ROPSTEN": "ropsten",
"MORDOR": "mordor"
},
},
'openapi_types': {
'blockchain':
(str,),
'network':
(str,),
'wallet_id':
(str,),
'context':
(str,),
},
'attribute_map': {
'blockchain': 'blockchain',
'network': 'network',
'wallet_id': 'walletId',
'context': 'context',
},
'location_map': {
'blockchain': 'path',
'network': 'path',
'wallet_id': 'path',
'context': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_supported_tokens_endpoint = _Endpoint(
settings={
'response_type': (ListSupportedTokensR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/info/{blockchain}/{network}/supported-tokens',
'operation_id': 'list_supported_tokens',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'blockchain',
'network',
'context',
'limit',
'offset',
],
'required': [
'blockchain',
'network',
],
'nullable': [
],
'enum': [
'blockchain',
'network',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('blockchain',): {
"ETHEREUM": "ethereum",
"ETHEREUM-CLASSIC": "ethereum-classic",
"BINANCE-SMART-CHAIN": "binance-smart-chain"
},
('network',): {
"MAINNET": "mainnet",
"TESTNET": "testnet",
"ROPSTEN": "ropsten",
"MORDOR": "mordor"
},
},
'openapi_types': {
'blockchain':
(str,),
'network':
(str,),
'context':
(str,),
'limit':
(int,),
'offset':
(int,),
},
'attribute_map': {
'blockchain': 'blockchain',
'network': 'network',
'context': 'context',
'limit': 'limit',
'offset': 'offset',
},
'location_map': {
'blockchain': 'path',
'network': 'path',
'context': 'query',
'limit': 'query',
'offset': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_wallet_transactions_endpoint = _Endpoint(
settings={
'response_type': (ListWalletTransactionsR,),
'auth': [
'ApiKey'
],
'endpoint_path': '/wallet-as-a-service/wallets/{walletId}/{blockchain}/{network}/transactions',
'operation_id': 'list_wallet_transactions',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'blockchain',
'network',
'wallet_id',
'context',
'limit',
'offset',
],
'required': [
'blockchain',
'network',
'wallet_id',
],
'nullable': [
],
'enum': [
'blockchain',
'network',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('blockchain',): {
"BITCOIN": "bitcoin",
"BITCOIN-CASH": "bitcoin-cash",
"LITECOIN": "litecoin",
"DOGECOIN": "dogecoin",
"DASH": "dash",
"ETHEREUM": "ethereum",
"ETHEREUM-CLASSIC": "ethereum-classic",
"ZCASH": "zcash",
"BINANCE-SMART-CHAIN": "binance-smart-chain"
},
('network',): {
"MAINNET": "mainnet",
"TESTNET": "testnet",
"ROPSTEN": "ropsten",
"MORDOR": "mordor"
},
},
'openapi_types': {
'blockchain':
(str,),
'network':
(str,),
'wallet_id':
(str,),
'context':
(str,),
'limit':
(int,),
'offset':
(int,),
},
'attribute_map': {
'blockchain': 'blockchain',
'network': 'network',
'wallet_id': 'walletId',
'context': 'context',
'limit': 'limit',
'offset': 'offset',
},
'location_map': {
'blockchain': 'path',
'network': 'path',
'wallet_id': 'path',
'context': 'query',
'limit': 'query',
'offset': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
def get_transaction_request_details(
self,
transaction_request_id,
**kwargs
):
"""Get Transaction Request Details # noqa: E501
Through this endpoint customers can obtain details on transaction request. {note}This regards **transaction requests**, which is not to be confused with **transactions**. Transaction requests may not be approved due to any reason, hence a transaction may not occur.{/note} # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_transaction_request_details(transaction_request_id, async_req=True)
>>> result = thread.get()
Args:
transaction_request_id (str): Represents the unique ID of the transaction request.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user.. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
GetTransactionRequestDetailsR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['transaction_request_id'] = \
transaction_request_id
return self.get_transaction_request_details_endpoint.call_with_http_info(**kwargs)
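The option-defaulting boilerplate repeated in every method above (each `kwargs['x'] = kwargs.get('x', default)` pair) can be sketched as one standalone helper. This is illustrative only and not part of the generated client; `apply_call_defaults` and the defaults table are hypothetical names:

```python
# Hypothetical sketch of the per-call option handling used by the generated
# methods: every option falls back to a library-wide default unless the
# caller supplied it explicitly.
_CALL_OPTION_DEFAULTS = {
    'async_req': False,
    '_return_http_data_only': True,
    '_preload_content': True,
    '_request_timeout': None,
    '_check_input_type': True,
    '_check_return_type': True,
    '_content_type': None,
    '_host_index': None,
}

def apply_call_defaults(kwargs):
    """Fill in any call option the caller did not supply, in place."""
    for name, default in _CALL_OPTION_DEFAULTS.items():
        kwargs.setdefault(name, default)
    return kwargs
```

Calling `apply_call_defaults({'async_req': True})` keeps the caller's override while filling in the remaining defaults, which is exactly the effect of the repeated `kwargs.get(...)` blocks.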
def get_wallet_asset_details(
self,
blockchain,
network,
wallet_id,
**kwargs
):
"""Get Wallet Asset Details # noqa: E501
Through this endpoint customers can obtain details about a specific Wallet/Vault. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_wallet_asset_details(blockchain, network, wallet_id, async_req=True)
>>> result = thread.get()
Args:
blockchain (str): Represents the specific blockchain protocol name, e.g. Ethereum, Bitcoin, etc.
network (str): Represents the name of the blockchain network used; blockchain networks are usually identical in technology and software, but they differ in data, e.g. "mainnet" is the live network with actual data, while networks like "testnet" and "ropsten" are test networks.
wallet_id (str): Defines the unique ID of the Wallet.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user. [optional]
_return_http_data_only (bool): return the response data without the
HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding the response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies whether type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies whether type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
GetWalletAssetDetailsR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['blockchain'] = \
blockchain
kwargs['network'] = \
network
kwargs['wallet_id'] = \
wallet_id
return self.get_wallet_asset_details_endpoint.call_with_http_info(**kwargs)
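The `_request_timeout` option documented above accepts either a single number (total budget) or a `(connection, read)` pair. A minimal, standalone sketch of normalizing that value into the explicit tuple shape (the helper name is hypothetical, not part of the generated client):

```python
# Illustrative sketch: normalize the `_request_timeout` value described in
# the docstrings into an explicit (connect, read) pair, or None when unset.
def normalize_request_timeout(timeout):
    """Return a (connect, read) tuple, or None when no timeout is set."""
    if timeout is None:
        return None
    if isinstance(timeout, (int, float)):
        # A single number is the total budget, applied to both phases.
        return (timeout, timeout)
    if isinstance(timeout, tuple) and len(timeout) == 2:
        return timeout
    raise TypeError("timeout must be a number or a (connect, read) tuple")
```

For example, `normalize_request_timeout(5)` yields `(5, 5)`, while `normalize_request_timeout((3.1, 27))` is passed through unchanged.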
def get_wallet_transaction_details_by_transaction_id(
self,
blockchain,
network,
transaction_id,
**kwargs
):
"""Get Wallet Transaction Details By Transaction ID # noqa: E501
Through this endpoint users can obtain Wallet transaction information by providing a `transactionId`. Customers can receive information only for a transaction that has been made from their own wallet. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_wallet_transaction_details_by_transaction_id(blockchain, network, transaction_id, async_req=True)
>>> result = thread.get()
Args:
blockchain (str): Represents the specific blockchain protocol name, e.g. Ethereum, Bitcoin, etc.
network (str): Represents the name of the blockchain network used; blockchain networks are usually identical in technology and software, but they differ in data, e.g. "mainnet" is the live network with actual data, while networks like "testnet" and "ropsten" are test networks.
transaction_id (str): Represents the unique identifier of a transaction, i.e. it could be `transactionId` in UTXO-based protocols like Bitcoin, and transaction `hash` in Ethereum blockchain.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user. [optional]
_return_http_data_only (bool): return the response data without the
HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding the response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies whether type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies whether type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
GetWalletTransactionDetailsByTransactionIDR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['blockchain'] = \
blockchain
kwargs['network'] = \
network
kwargs['transaction_id'] = \
transaction_id
return self.get_wallet_transaction_details_by_transaction_id_endpoint.call_with_http_info(**kwargs)
def list_deposit_addresses(
self,
blockchain,
network,
wallet_id,
**kwargs
):
"""List Deposit Addresses # noqa: E501
Through this endpoint customers can pull a list of Deposit/Receiving Addresses they have already generated. Please note that listing data of the same type will apply pagination to the results. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_deposit_addresses(blockchain, network, wallet_id, async_req=True)
>>> result = thread.get()
Args:
blockchain (str): Represents the specific blockchain protocol name, e.g. Ethereum, Bitcoin, etc.
network (str): Represents the name of the blockchain network used; blockchain networks are usually identical in technology and software, but they differ in data, e.g. "mainnet" is the live network with actual data, while networks like "testnet" and "ropsten" are test networks.
wallet_id (str): Represents the unique ID of the specific Wallet.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user. [optional]
_return_http_data_only (bool): return the response data without the
HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding the response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies whether type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies whether type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ListDepositAddressesR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['blockchain'] = \
blockchain
kwargs['network'] = \
network
kwargs['wallet_id'] = \
wallet_id
return self.list_deposit_addresses_endpoint.call_with_http_info(**kwargs)
def list_supported_tokens(
self,
blockchain,
network,
**kwargs
):
"""List Supported Tokens # noqa: E501
Through this endpoint customers can obtain information on multiple tokens at once. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_supported_tokens(blockchain, network, async_req=True)
>>> result = thread.get()
Args:
blockchain (str): Represents the specific blockchain protocol name, e.g. Ethereum, Bitcoin, etc.
network (str): Represents the name of the blockchain network used; blockchain networks are usually identical in technology and software, but they differ in data, e.g. "mainnet" is the live network with actual data, while networks like "testnet" and "ropsten" are test networks.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user. [optional]
limit (int): Defines how many items should be returned in the response on a per-page basis. [optional] if omitted the server will use the default value of 50
offset (int): The starting index of the response items, i.e. where the response should start listing the returned items. [optional] if omitted the server will use the default value of 0
_return_http_data_only (bool): return the response data without the
HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding the response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies whether type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies whether type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ListSupportedTokensR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['blockchain'] = \
blockchain
kwargs['network'] = \
network
return self.list_supported_tokens_endpoint.call_with_http_info(**kwargs)
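The `limit`/`offset` parameters documented above (server defaults of 50 and 0) drive classic offset pagination. A standalone sketch of walking every page; `fetch_page` is a hypothetical stand-in for one call to a list endpoint, not an API of this client:

```python
# Illustrative sketch of offset pagination as described by the
# limit/offset docstring parameters: advance `offset` by `limit`
# until a short (or empty) page signals the end of the collection.
def iterate_all(fetch_page, limit=50):
    """Yield every item across pages returned by `fetch_page(limit, offset)`."""
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page
        if len(page) < limit:
            break  # a short page means there is nothing further to fetch
        offset += limit
```

With a `fetch_page` that slices a 7-item list and `limit=3`, this yields pages of sizes 3, 3, and 1, covering all items exactly once.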
def list_wallet_transactions(
self,
blockchain,
network,
wallet_id,
**kwargs
):
"""List Wallet Transactions # noqa: E501
Through this endpoint customers can list Transactions from and to their Wallet. The data returned will include `transactionId`, `direction` of the transaction - incoming or outgoing, `amount` and more. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_wallet_transactions(blockchain, network, wallet_id, async_req=True)
>>> result = thread.get()
Args:
blockchain (str): Represents the specific blockchain protocol name, e.g. Ethereum, Bitcoin, etc.
network (str): Represents the name of the blockchain network used; blockchain networks are usually identical in technology and software, but they differ in data, e.g. "mainnet" is the live network with actual data, while networks like "testnet" and "ropsten" are test networks.
wallet_id (str): Represents the unique ID of the specific Wallet.
Keyword Args:
context (str): In batch situations the user can use the context to correlate responses with requests. This property is present regardless of whether the response was successful or returned as an error. `context` is specified by the user. [optional]
limit (int): Defines how many items should be returned in the response on a per-page basis. [optional] if omitted the server will use the default value of 50
offset (int): The starting index of the response items, i.e. where the response should start listing the returned items. [optional] if omitted the server will use the default value of 0
_return_http_data_only (bool): return the response data without the
HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding the response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies whether type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies whether type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ListWalletTransactionsR
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['blockchain'] = \
blockchain
kwargs['network'] = \
network
kwargs['wallet_id'] = \
wallet_id
return self.list_wallet_transactions_endpoint.call_with_http_info(**kwargs)
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google import auth
from google.api_core import client_options
from google.api_core import exceptions
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.auth import credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.pubsublite_v1.services.admin_service import AdminServiceAsyncClient
from google.cloud.pubsublite_v1.services.admin_service import AdminServiceClient
from google.cloud.pubsublite_v1.services.admin_service import pagers
from google.cloud.pubsublite_v1.services.admin_service import transports
from google.cloud.pubsublite_v1.types import admin
from google.cloud.pubsublite_v1.types import common
from google.oauth2 import service_account
from google.protobuf import duration_pb2 as duration # type: ignore
from google.protobuf import field_mask_pb2 as field_mask # type: ignore
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return (
"foo.googleapis.com"
if ("localhost" in client.DEFAULT_ENDPOINT)
else client.DEFAULT_ENDPOINT
)
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert AdminServiceClient._get_default_mtls_endpoint(None) is None
assert (
AdminServiceClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint
)
assert (
AdminServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
== api_mtls_endpoint
)
assert (
AdminServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
== sandbox_mtls_endpoint
)
assert (
AdminServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
== sandbox_mtls_endpoint
)
assert AdminServiceClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi
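The transformation this test exercises can be sketched as a standalone function. This is a hedged re-implementation for illustration, assuming the client inserts ".mtls" after the service name for `*.googleapis.com` hosts (sandbox or not) and leaves anything already mTLS, or any non-Google endpoint, untouched:

```python
import re

# Illustrative re-implementation of the endpoint rewrite the test above
# asserts: example.googleapis.com -> example.mtls.googleapis.com, sandbox
# hosts gain ".mtls" before ".sandbox", and other endpoints pass through.
def get_default_mtls_endpoint(api_endpoint):
    if not api_endpoint:
        return api_endpoint
    pattern = re.compile(
        r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?"
        r"(?P<suffix>\.googleapis\.com)?"
    )
    _, mtls, sandbox, suffix = pattern.match(api_endpoint).groups()
    if mtls or not suffix:
        # Already an mTLS endpoint, or not a googleapis.com host.
        return api_endpoint
    if sandbox:
        return api_endpoint.replace(
            "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
        )
    return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
```

Tracing the test's fixtures through this sketch reproduces each expected value, including the pass-through for `api.example.com` and `None`.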
@pytest.mark.parametrize("client_class", [AdminServiceClient, AdminServiceAsyncClient,])
def test_admin_service_client_from_service_account_info(client_class):
creds = credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_info"
) as factory:
factory.return_value = creds
info = {"valid": True}
client = client_class.from_service_account_info(info)
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "pubsublite.googleapis.com:443"
@pytest.mark.parametrize("client_class", [AdminServiceClient, AdminServiceAsyncClient,])
def test_admin_service_client_from_service_account_file(client_class):
creds = credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
client = client_class.from_service_account_json("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "pubsublite.googleapis.com:443"
def test_admin_service_client_get_transport_class():
transport = AdminServiceClient.get_transport_class()
available_transports = [
transports.AdminServiceGrpcTransport,
]
assert transport in available_transports
transport = AdminServiceClient.get_transport_class("grpc")
assert transport == transports.AdminServiceGrpcTransport
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AdminServiceClient, transports.AdminServiceGrpcTransport, "grpc"),
(
AdminServiceAsyncClient,
transports.AdminServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
@mock.patch.object(
AdminServiceClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AdminServiceClient)
)
@mock.patch.object(
AdminServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(AdminServiceAsyncClient),
)
def test_admin_service_client_client_options(
client_class, transport_class, transport_name
):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(AdminServiceClient, "get_transport_class") as gtc:
transport = transport_class(credentials=credentials.AnonymousCredentials())
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(AdminServiceClient, "get_transport_class") as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name,use_client_cert_env",
[
(AdminServiceClient, transports.AdminServiceGrpcTransport, "grpc", "true"),
(
AdminServiceAsyncClient,
transports.AdminServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"true",
),
(AdminServiceClient, transports.AdminServiceGrpcTransport, "grpc", "false"),
(
AdminServiceAsyncClient,
transports.AdminServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"false",
),
],
)
@mock.patch.object(
AdminServiceClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AdminServiceClient)
)
@mock.patch.object(
AdminServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(AdminServiceAsyncClient),
)
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_admin_service_client_mtls_env_auto(
client_class, transport_class, transport_name, use_client_cert_env
):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
options = client_options.ClientOptions(
client_cert_source=client_cert_source_callback
)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
if use_client_cert_env == "false":
expected_client_cert_source = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_client_cert_source = client_cert_source_callback
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=True,
):
with mock.patch(
"google.auth.transport.mtls.default_client_cert_source",
return_value=client_cert_source_callback,
):
if use_client_cert_env == "false":
expected_host = client.DEFAULT_ENDPOINT
expected_client_cert_source = None
else:
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_client_cert_source = client_cert_source_callback
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=False,
):
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AdminServiceClient, transports.AdminServiceGrpcTransport, "grpc"),
(
AdminServiceAsyncClient,
transports.AdminServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_admin_service_client_client_options_scopes(
client_class, transport_class, transport_name
):
# Check the case scopes are provided.
options = client_options.ClientOptions(scopes=["1", "2"],)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AdminServiceClient, transports.AdminServiceGrpcTransport, "grpc"),
(
AdminServiceAsyncClient,
transports.AdminServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_admin_service_client_client_options_credentials_file(
client_class, transport_class, transport_name
):
# Check the case credentials file is provided.
options = client_options.ClientOptions(credentials_file="credentials.json")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
def test_admin_service_client_client_options_from_dict():
with mock.patch(
"google.cloud.pubsublite_v1.services.admin_service.transports.AdminServiceGrpcTransport.__init__"
) as grpc_transport:
grpc_transport.return_value = None
client = AdminServiceClient(client_options={"api_endpoint": "squid.clam.whelk"})
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
def test_create_topic(transport: str = "grpc", request_type=admin.CreateTopicRequest):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic(name="name_value",)
response = client.create_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
def test_create_topic_from_dict():
test_create_topic(request_type=dict)
def test_create_topic_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
client.create_topic()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateTopicRequest()
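The tests above repeatedly unpack `call.mock_calls[0]` to recover the request that was passed to the mocked stub. A standalone sketch of that idiom, using only `unittest.mock` (the `stub` name and arguments here are illustrative, not part of this suite):

```python
from unittest import mock

# A bare Mock records every invocation made on it.
stub = mock.Mock()
stub("request-object", metadata=[("k", "v")])

# Each entry in mock_calls is a (name, args, kwargs) triple; the tests
# above discard name and kwargs with `_, args, _ = call.mock_calls[0]`.
name, args, kwargs = stub.mock_calls[0]
assert args[0] == "request-object"
assert kwargs["metadata"] == [("k", "v")]
```

This is why `args[0] == admin.CreateTopicRequest()` checks the wire-level request object: it is the first positional argument the client handed to the transport stub.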
@pytest.mark.asyncio
async def test_create_topic_async(
transport: str = "grpc_asyncio", request_type=admin.CreateTopicRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Topic(name="name_value",)
)
response = await client.create_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
@pytest.mark.asyncio
async def test_create_topic_async_from_dict():
await test_create_topic_async(request_type=dict)
def test_create_topic_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.CreateTopicRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
call.return_value = common.Topic()
client.create_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_topic_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.CreateTopicRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
await client.create_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_topic_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_topic(
parent="parent_value",
topic=common.Topic(name="name_value"),
topic_id="topic_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].topic == common.Topic(name="name_value")
assert args[0].topic_id == "topic_id_value"
def test_create_topic_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_topic(
admin.CreateTopicRequest(),
parent="parent_value",
topic=common.Topic(name="name_value"),
topic_id="topic_id_value",
)
@pytest.mark.asyncio
async def test_create_topic_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_topic(
parent="parent_value",
topic=common.Topic(name="name_value"),
topic_id="topic_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].topic == common.Topic(name="name_value")
assert args[0].topic_id == "topic_id_value"
@pytest.mark.asyncio
async def test_create_topic_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_topic(
admin.CreateTopicRequest(),
parent="parent_value",
topic=common.Topic(name="name_value"),
topic_id="topic_id_value",
)
def test_get_topic(transport: str = "grpc", request_type=admin.GetTopicRequest):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic(name="name_value",)
response = client.get_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
def test_get_topic_from_dict():
test_get_topic(request_type=dict)
def test_get_topic_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
client.get_topic()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicRequest()
@pytest.mark.asyncio
async def test_get_topic_async(
transport: str = "grpc_asyncio", request_type=admin.GetTopicRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Topic(name="name_value",)
)
response = await client.get_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
@pytest.mark.asyncio
async def test_get_topic_async_from_dict():
await test_get_topic_async(request_type=dict)
def test_get_topic_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetTopicRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
call.return_value = common.Topic()
client.get_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_topic_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetTopicRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
await client.get_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_topic_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_topic(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_topic_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_topic(
admin.GetTopicRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_topic_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_topic(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_topic_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_topic(
admin.GetTopicRequest(), name="name_value",
)
def test_get_topic_partitions(
transport: str = "grpc", request_type=admin.GetTopicPartitionsRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.TopicPartitions(partition_count=1634,)
response = client.get_topic_partitions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicPartitionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, admin.TopicPartitions)
assert response.partition_count == 1634
def test_get_topic_partitions_from_dict():
test_get_topic_partitions(request_type=dict)
def test_get_topic_partitions_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
client.get_topic_partitions()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicPartitionsRequest()
@pytest.mark.asyncio
async def test_get_topic_partitions_async(
transport: str = "grpc_asyncio", request_type=admin.GetTopicPartitionsRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.TopicPartitions(partition_count=1634,)
)
response = await client.get_topic_partitions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetTopicPartitionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, admin.TopicPartitions)
assert response.partition_count == 1634
@pytest.mark.asyncio
async def test_get_topic_partitions_async_from_dict():
await test_get_topic_partitions_async(request_type=dict)
def test_get_topic_partitions_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetTopicPartitionsRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
call.return_value = admin.TopicPartitions()
client.get_topic_partitions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_topic_partitions_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetTopicPartitionsRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.TopicPartitions()
)
await client.get_topic_partitions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_topic_partitions_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.TopicPartitions()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_topic_partitions(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_topic_partitions_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_topic_partitions(
admin.GetTopicPartitionsRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_topic_partitions_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_topic_partitions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.TopicPartitions()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_topic_partitions(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_topic_partitions_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_topic_partitions(
admin.GetTopicPartitionsRequest(), name="name_value",
)
def test_list_topics(transport: str = "grpc", request_type=admin.ListTopicsRequest):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListTopicsResponse(
next_page_token="next_page_token_value",
)
response = client.list_topics(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTopicsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_topics_from_dict():
test_list_topics(request_type=dict)
def test_list_topics_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
client.list_topics()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicsRequest()
@pytest.mark.asyncio
async def test_list_topics_async(
transport: str = "grpc_asyncio", request_type=admin.ListTopicsRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListTopicsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_topics(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTopicsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_topics_async_from_dict():
await test_list_topics_async(request_type=dict)
def test_list_topics_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListTopicsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
call.return_value = admin.ListTopicsResponse()
client.list_topics(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_topics_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListTopicsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListTopicsResponse()
)
await client.list_topics(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_topics_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListTopicsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_topics(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_topics_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_topics(
admin.ListTopicsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_topics_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListTopicsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_topics(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_topics_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_topics(
admin.ListTopicsRequest(), parent="parent_value",
)
def test_list_topics_pager():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicsResponse(
topics=[common.Topic(), common.Topic(), common.Topic(),],
next_page_token="abc",
),
admin.ListTopicsResponse(topics=[], next_page_token="def",),
admin.ListTopicsResponse(topics=[common.Topic(),], next_page_token="ghi",),
admin.ListTopicsResponse(topics=[common.Topic(), common.Topic(),],),
RuntimeError,
)
metadata = (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_topics(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, common.Topic) for i in results)
def test_list_topics_pages():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_topics), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicsResponse(
topics=[common.Topic(), common.Topic(), common.Topic(),],
next_page_token="abc",
),
admin.ListTopicsResponse(topics=[], next_page_token="def",),
admin.ListTopicsResponse(topics=[common.Topic(),], next_page_token="ghi",),
admin.ListTopicsResponse(topics=[common.Topic(), common.Topic(),],),
RuntimeError,
)
pages = list(client.list_topics(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
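The pager tests above feed the mocked stub a fixed series of responses and check that iteration flattens 3 + 0 + 1 + 2 = 6 topics across pages, with an empty `next_page_token` marking the last page. A standalone model of that token walk (plain dicts stand in for `ListTopicsResponse`; nothing here is library API):

```python
# Simulated paged responses: items plus a continuation token per page.
# An empty token signals the final page, mirroring the series above.
pages = [
    {"topics": ["t1", "t2", "t3"], "next_page_token": "abc"},
    {"topics": [], "next_page_token": "def"},
    {"topics": ["t4"], "next_page_token": "ghi"},
    {"topics": ["t5", "t6"], "next_page_token": ""},
]

def iter_items(pages):
    # Flatten items across pages, stopping once the token is empty --
    # the same contract the ListTopicsPager iterates against.
    for page in pages:
        yield from page["topics"]
        if not page["next_page_token"]:
            break

results = list(iter_items(pages))
assert len(results) == 6
```

The `RuntimeError` at the end of each `side_effect` tuple in the tests is a tripwire: a correct pager stops after the empty token, so the extra call that would raise it is never made.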
@pytest.mark.asyncio
async def test_list_topics_async_pager():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topics), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicsResponse(
topics=[common.Topic(), common.Topic(), common.Topic(),],
next_page_token="abc",
),
admin.ListTopicsResponse(topics=[], next_page_token="def",),
admin.ListTopicsResponse(topics=[common.Topic(),], next_page_token="ghi",),
admin.ListTopicsResponse(topics=[common.Topic(), common.Topic(),],),
RuntimeError,
)
async_pager = await client.list_topics(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, common.Topic) for i in responses)
@pytest.mark.asyncio
async def test_list_topics_async_pages():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topics), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicsResponse(
topics=[common.Topic(), common.Topic(), common.Topic(),],
next_page_token="abc",
),
admin.ListTopicsResponse(topics=[], next_page_token="def",),
admin.ListTopicsResponse(topics=[common.Topic(),], next_page_token="ghi",),
admin.ListTopicsResponse(topics=[common.Topic(), common.Topic(),],),
RuntimeError,
)
pages = []
async for page_ in (await client.list_topics(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_update_topic(transport: str = "grpc", request_type=admin.UpdateTopicRequest):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic(name="name_value",)
response = client.update_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
def test_update_topic_from_dict():
test_update_topic(request_type=dict)
def test_update_topic_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
client.update_topic()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateTopicRequest()
@pytest.mark.asyncio
async def test_update_topic_async(
transport: str = "grpc_asyncio", request_type=admin.UpdateTopicRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Topic(name="name_value",)
)
response = await client.update_topic(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateTopicRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Topic)
assert response.name == "name_value"
@pytest.mark.asyncio
async def test_update_topic_async_from_dict():
await test_update_topic_async(request_type=dict)
def test_update_topic_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.UpdateTopicRequest()
request.topic.name = "topic.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
call.return_value = common.Topic()
client.update_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "topic.name=topic.name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_topic_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.UpdateTopicRequest()
request.topic.name = "topic.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
await client.update_topic(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "topic.name=topic.name/value",) in kw["metadata"]
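# The field-header tests in this module assert that request fields bound into
# the HTTP/1.1 URI are mirrored into gRPC metadata under the
# "x-goog-request-params" key. The encoding those assertions expect can be
# sketched as below; the helper name `_routing_metadata` is illustrative only,
# and the real routing-header helper may additionally percent-encode values.
def _routing_metadata(params):
    # ``params`` maps a request field path to its value, e.g.
    # {"topic.name": "topic.name/value"}; pairs are joined with "&".
    value = "&".join("{}={}".format(key, val) for key, val in params.items())
    return ("x-goog-request-params", value)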
def test_update_topic_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Topic()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_topic(
topic=common.Topic(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].topic == common.Topic(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
def test_update_topic_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_topic(
admin.UpdateTopicRequest(),
topic=common.Topic(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_topic_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_topic), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Topic())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_topic(
topic=common.Topic(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].topic == common.Topic(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
@pytest.mark.asyncio
async def test_update_topic_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_topic(
admin.UpdateTopicRequest(),
topic=common.Topic(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
def test_delete_topic(transport: str = "grpc", request_type=admin.DeleteTopicRequest):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.delete_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteTopicRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_delete_topic_from_dict():
test_delete_topic(request_type=dict)
def test_delete_topic_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
client.delete_topic()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteTopicRequest()
@pytest.mark.asyncio
async def test_delete_topic_async(
transport: str = "grpc_asyncio", request_type=admin.DeleteTopicRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.delete_topic(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteTopicRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_delete_topic_async_from_dict():
await test_delete_topic_async(request_type=dict)
def test_delete_topic_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.DeleteTopicRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
call.return_value = None
client.delete_topic(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_topic_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.DeleteTopicRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.delete_topic(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_topic_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_topic(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_delete_topic_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_topic(
admin.DeleteTopicRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_topic_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_topic), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_topic(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_delete_topic_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_topic(
admin.DeleteTopicRequest(), name="name_value",
)
def test_list_topic_subscriptions(
transport: str = "grpc", request_type=admin.ListTopicSubscriptionsRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListTopicSubscriptionsResponse(
subscriptions=["subscriptions_value"],
next_page_token="next_page_token_value",
)
response = client.list_topic_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicSubscriptionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTopicSubscriptionsPager)
assert response.subscriptions == ["subscriptions_value"]
assert response.next_page_token == "next_page_token_value"
def test_list_topic_subscriptions_from_dict():
test_list_topic_subscriptions(request_type=dict)
def test_list_topic_subscriptions_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
client.list_topic_subscriptions()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicSubscriptionsRequest()
@pytest.mark.asyncio
async def test_list_topic_subscriptions_async(
transport: str = "grpc_asyncio", request_type=admin.ListTopicSubscriptionsRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListTopicSubscriptionsResponse(
subscriptions=["subscriptions_value"],
next_page_token="next_page_token_value",
)
)
response = await client.list_topic_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListTopicSubscriptionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTopicSubscriptionsAsyncPager)
assert response.subscriptions == ["subscriptions_value"]
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_topic_subscriptions_async_from_dict():
await test_list_topic_subscriptions_async(request_type=dict)
def test_list_topic_subscriptions_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListTopicSubscriptionsRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
call.return_value = admin.ListTopicSubscriptionsResponse()
client.list_topic_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_topic_subscriptions_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListTopicSubscriptionsRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListTopicSubscriptionsResponse()
)
await client.list_topic_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_list_topic_subscriptions_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListTopicSubscriptionsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_topic_subscriptions(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_list_topic_subscriptions_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_topic_subscriptions(
admin.ListTopicSubscriptionsRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_list_topic_subscriptions_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            admin.ListTopicSubscriptionsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_topic_subscriptions(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_list_topic_subscriptions_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_topic_subscriptions(
admin.ListTopicSubscriptionsRequest(), name="name_value",
)
def test_list_topic_subscriptions_pager():
    client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(), str(), str(),], next_page_token="abc",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[], next_page_token="def",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(),], next_page_token="ghi",
),
admin.ListTopicSubscriptionsResponse(subscriptions=[str(), str(),],),
RuntimeError,
)
        metadata = (gapic_v1.routing_header.to_grpc_metadata((("name", ""),)),)
pager = client.list_topic_subscriptions(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, str) for i in results)
def test_list_topic_subscriptions_pages():
    client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(), str(), str(),], next_page_token="abc",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[], next_page_token="def",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(),], next_page_token="ghi",
),
admin.ListTopicSubscriptionsResponse(subscriptions=[str(), str(),],),
RuntimeError,
)
pages = list(client.list_topic_subscriptions(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
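# The pager tests above drive a token-based page loop: each response carries a
# next_page_token, and iteration stops when the token is empty. A minimal
# sketch of that loop, with `fetch` standing in for the underlying RPC (both
# names are illustrative, not part of the generated client):
def _iterate_pages(fetch):
    token = ""
    while True:
        page = fetch(page_token=token)
        yield page
        token = page["next_page_token"]
        if not token:
            # An empty token marks the final page, as in the "" case above.
            break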
@pytest.mark.asyncio
async def test_list_topic_subscriptions_async_pager():
    client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(), str(), str(),], next_page_token="abc",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[], next_page_token="def",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(),], next_page_token="ghi",
),
admin.ListTopicSubscriptionsResponse(subscriptions=[str(), str(),],),
RuntimeError,
)
async_pager = await client.list_topic_subscriptions(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, str) for i in responses)
@pytest.mark.asyncio
async def test_list_topic_subscriptions_async_pages():
    client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_topic_subscriptions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(), str(), str(),], next_page_token="abc",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[], next_page_token="def",
),
admin.ListTopicSubscriptionsResponse(
subscriptions=[str(),], next_page_token="ghi",
),
admin.ListTopicSubscriptionsResponse(subscriptions=[str(), str(),],),
RuntimeError,
)
pages = []
async for page_ in (await client.list_topic_subscriptions(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_create_subscription(
transport: str = "grpc", request_type=admin.CreateSubscriptionRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription(name="name_value", topic="topic_value",)
response = client.create_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
def test_create_subscription_from_dict():
test_create_subscription(request_type=dict)
def test_create_subscription_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
client.create_subscription()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateSubscriptionRequest()
@pytest.mark.asyncio
async def test_create_subscription_async(
transport: str = "grpc_asyncio", request_type=admin.CreateSubscriptionRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Subscription(name="name_value", topic="topic_value",)
)
response = await client.create_subscription(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.CreateSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
@pytest.mark.asyncio
async def test_create_subscription_async_from_dict():
await test_create_subscription_async(request_type=dict)
def test_create_subscription_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.CreateSubscriptionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
call.return_value = common.Subscription()
client.create_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_subscription_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.CreateSubscriptionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
await client.create_subscription(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_subscription_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_subscription(
parent="parent_value",
subscription=common.Subscription(name="name_value"),
subscription_id="subscription_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].subscription == common.Subscription(name="name_value")
assert args[0].subscription_id == "subscription_id_value"
def test_create_subscription_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_subscription(
admin.CreateSubscriptionRequest(),
parent="parent_value",
subscription=common.Subscription(name="name_value"),
subscription_id="subscription_id_value",
)
@pytest.mark.asyncio
async def test_create_subscription_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_subscription(
parent="parent_value",
subscription=common.Subscription(name="name_value"),
subscription_id="subscription_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].subscription == common.Subscription(name="name_value")
assert args[0].subscription_id == "subscription_id_value"
@pytest.mark.asyncio
async def test_create_subscription_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_subscription(
admin.CreateSubscriptionRequest(),
parent="parent_value",
subscription=common.Subscription(name="name_value"),
subscription_id="subscription_id_value",
)
def test_get_subscription(
transport: str = "grpc", request_type=admin.GetSubscriptionRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription(name="name_value", topic="topic_value",)
response = client.get_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
def test_get_subscription_from_dict():
test_get_subscription(request_type=dict)
def test_get_subscription_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
client.get_subscription()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetSubscriptionRequest()
@pytest.mark.asyncio
async def test_get_subscription_async(
transport: str = "grpc_asyncio", request_type=admin.GetSubscriptionRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Subscription(name="name_value", topic="topic_value",)
)
response = await client.get_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.GetSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
@pytest.mark.asyncio
async def test_get_subscription_async_from_dict():
await test_get_subscription_async(request_type=dict)
def test_get_subscription_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetSubscriptionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
call.return_value = common.Subscription()
client.get_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_subscription_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.GetSubscriptionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
await client.get_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
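# The field-header tests above assert on "x-goog-request-params" values such as
# "name=name/value". As a hedged, standalone illustration (not generated code,
# and a simplification of gapic_v1.routing_header.to_grpc_metadata), the header
# value is essentially the request field paths joined with their values:

```python
def to_request_params(fields):
    """Hypothetical helper: join (key, value) pairs into a routing-header value."""
    return "&".join("{}={}".format(key, value) for key, value in fields)


# Matches the values asserted in the field-header tests.
assert to_request_params([("name", "name/value")]) == "name=name/value"
assert (
    to_request_params([("subscription.name", "subscription.name/value")])
    == "subscription.name=subscription.name/value"
)
```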
def test_get_subscription_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_subscription(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_subscription_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_subscription(
admin.GetSubscriptionRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_subscription_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_subscription), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_subscription(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_subscription_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_subscription(
admin.GetSubscriptionRequest(), name="name_value",
)
def test_list_subscriptions(
transport: str = "grpc", request_type=admin.ListSubscriptionsRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListSubscriptionsResponse(
next_page_token="next_page_token_value",
)
response = client.list_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListSubscriptionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListSubscriptionsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_subscriptions_from_dict():
test_list_subscriptions(request_type=dict)
def test_list_subscriptions_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
client.list_subscriptions()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListSubscriptionsRequest()
@pytest.mark.asyncio
async def test_list_subscriptions_async(
transport: str = "grpc_asyncio", request_type=admin.ListSubscriptionsRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListSubscriptionsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.ListSubscriptionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListSubscriptionsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_subscriptions_async_from_dict():
await test_list_subscriptions_async(request_type=dict)
def test_list_subscriptions_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListSubscriptionsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
call.return_value = admin.ListSubscriptionsResponse()
client.list_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_subscriptions_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.ListSubscriptionsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListSubscriptionsResponse()
)
await client.list_subscriptions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_subscriptions_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = admin.ListSubscriptionsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_subscriptions(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_subscriptions_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_subscriptions(
admin.ListSubscriptionsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_subscriptions_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
admin.ListSubscriptionsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_subscriptions(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_subscriptions_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_subscriptions(
admin.ListSubscriptionsRequest(), parent="parent_value",
)
def test_list_subscriptions_pager():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListSubscriptionsResponse(
subscriptions=[
common.Subscription(),
common.Subscription(),
common.Subscription(),
],
next_page_token="abc",
),
admin.ListSubscriptionsResponse(subscriptions=[], next_page_token="def",),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(),], next_page_token="ghi",
),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(), common.Subscription(),],
),
RuntimeError,
)
metadata = (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_subscriptions(request={})
assert pager._metadata == metadata
results = list(pager)
assert len(results) == 6
assert all(isinstance(i, common.Subscription) for i in results)
def test_list_subscriptions_pages():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListSubscriptionsResponse(
subscriptions=[
common.Subscription(),
common.Subscription(),
common.Subscription(),
],
next_page_token="abc",
),
admin.ListSubscriptionsResponse(subscriptions=[], next_page_token="def",),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(),], next_page_token="ghi",
),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(), common.Subscription(),],
),
RuntimeError,
)
pages = list(client.list_subscriptions(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_subscriptions_async_pager():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListSubscriptionsResponse(
subscriptions=[
common.Subscription(),
common.Subscription(),
common.Subscription(),
],
next_page_token="abc",
),
admin.ListSubscriptionsResponse(subscriptions=[], next_page_token="def",),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(),], next_page_token="ghi",
),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(), common.Subscription(),],
),
RuntimeError,
)
async_pager = await client.list_subscriptions(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, common.Subscription) for i in responses)
@pytest.mark.asyncio
async def test_list_subscriptions_async_pages():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_subscriptions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
admin.ListSubscriptionsResponse(
subscriptions=[
common.Subscription(),
common.Subscription(),
common.Subscription(),
],
next_page_token="abc",
),
admin.ListSubscriptionsResponse(subscriptions=[], next_page_token="def",),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(),], next_page_token="ghi",
),
admin.ListSubscriptionsResponse(
subscriptions=[common.Subscription(), common.Subscription(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_subscriptions(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
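# The pager tests above rely on the paging loop stopping once next_page_token
# is empty, which is why the trailing RuntimeError side effect is never raised.
# A hedged sketch of that loop (hypothetical code, not the pagers module):

```python
def iterate_pages(fetch, token=""):
    """Hypothetical paging loop: fetch pages until next_page_token is empty."""
    while True:
        page = fetch(token)
        yield page
        token = page.get("next_page_token", "")
        if not token:
            break


# Mirror the four-page fixture used in the tests above.
pages = iter([
    {"items": [1, 2, 3], "next_page_token": "abc"},
    {"items": [], "next_page_token": "def"},
    {"items": [4], "next_page_token": "ghi"},
    {"items": [5, 6]},  # no token: iteration stops before any sentinel error
])
fetched = list(iterate_pages(lambda t: next(pages)))
assert [p.get("next_page_token", "") for p in fetched] == ["abc", "def", "ghi", ""]
assert sum(len(p["items"]) for p in fetched) == 6
```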
def test_update_subscription(
transport: str = "grpc", request_type=admin.UpdateSubscriptionRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription(name="name_value", topic="topic_value",)
response = client.update_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
def test_update_subscription_from_dict():
test_update_subscription(request_type=dict)
def test_update_subscription_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
client.update_subscription()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateSubscriptionRequest()
@pytest.mark.asyncio
async def test_update_subscription_async(
transport: str = "grpc_asyncio", request_type=admin.UpdateSubscriptionRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
common.Subscription(name="name_value", topic="topic_value",)
)
response = await client.update_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.UpdateSubscriptionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, common.Subscription)
assert response.name == "name_value"
assert response.topic == "topic_value"
@pytest.mark.asyncio
async def test_update_subscription_async_from_dict():
await test_update_subscription_async(request_type=dict)
def test_update_subscription_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.UpdateSubscriptionRequest()
request.subscription.name = "subscription.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
call.return_value = common.Subscription()
client.update_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"subscription.name=subscription.name/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_subscription_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.UpdateSubscriptionRequest()
request.subscription.name = "subscription.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
await client.update_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"subscription.name=subscription.name/value",
) in kw["metadata"]
def test_update_subscription_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = common.Subscription()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_subscription(
subscription=common.Subscription(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].subscription == common.Subscription(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
def test_update_subscription_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_subscription(
admin.UpdateSubscriptionRequest(),
subscription=common.Subscription(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_subscription_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(common.Subscription())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_subscription(
subscription=common.Subscription(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].subscription == common.Subscription(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
@pytest.mark.asyncio
async def test_update_subscription_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_subscription(
admin.UpdateSubscriptionRequest(),
subscription=common.Subscription(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
def test_delete_subscription(
transport: str = "grpc", request_type=admin.DeleteSubscriptionRequest
):
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.delete_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteSubscriptionRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_delete_subscription_from_dict():
test_delete_subscription(request_type=dict)
def test_delete_subscription_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AdminServiceClient(
credentials=credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
client.delete_subscription()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteSubscriptionRequest()
@pytest.mark.asyncio
async def test_delete_subscription_async(
transport: str = "grpc_asyncio", request_type=admin.DeleteSubscriptionRequest
):
client = AdminServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.delete_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == admin.DeleteSubscriptionRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_delete_subscription_async_from_dict():
await test_delete_subscription_async(request_type=dict)
def test_delete_subscription_field_headers():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.DeleteSubscriptionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
call.return_value = None
client.delete_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_subscription_field_headers_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = admin.DeleteSubscriptionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.delete_subscription(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_subscription_flattened():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_subscription(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_delete_subscription_flattened_error():
client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_subscription(
admin.DeleteSubscriptionRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_subscription_flattened_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_subscription), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_subscription(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_delete_subscription_flattened_error_async():
client = AdminServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_subscription(
admin.DeleteSubscriptionRequest(), name="name_value",
)
def test_credentials_transport_error():
    # It is an error to provide credentials and a transport instance.
    transport = transports.AdminServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = AdminServiceClient(
            credentials=credentials.AnonymousCredentials(), transport=transport,
        )

    # It is an error to provide a credentials file and a transport instance.
    transport = transports.AdminServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = AdminServiceClient(
            client_options={"credentials_file": "credentials.json"},
            transport=transport,
        )

    # It is an error to provide scopes and a transport instance.
    transport = transports.AdminServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = AdminServiceClient(
            client_options={"scopes": ["1", "2"]}, transport=transport,
        )


def test_transport_instance():
    # A client may be instantiated with a custom transport instance.
    transport = transports.AdminServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    client = AdminServiceClient(transport=transport)
    assert client.transport is transport


def test_transport_get_channel():
    # A client may be instantiated with a custom transport instance.
    transport = transports.AdminServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    channel = transport.grpc_channel
    assert channel

    transport = transports.AdminServiceGrpcAsyncIOTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    channel = transport.grpc_channel
    assert channel


@pytest.mark.parametrize(
    "transport_class",
    [
        transports.AdminServiceGrpcTransport,
        transports.AdminServiceGrpcAsyncIOTransport,
    ],
)
def test_transport_adc(transport_class):
    # Test default credentials are used if not provided.
    with mock.patch.object(auth, "default") as adc:
        adc.return_value = (credentials.AnonymousCredentials(), None)
        transport_class()
        adc.assert_called_once()


def test_transport_grpc_default():
    # A client should use the gRPC transport by default.
    client = AdminServiceClient(credentials=credentials.AnonymousCredentials(),)
    assert isinstance(client.transport, transports.AdminServiceGrpcTransport,)
def test_admin_service_base_transport_error():
    # Passing both a credentials object and credentials_file should raise an error
    with pytest.raises(exceptions.DuplicateCredentialArgs):
        transport = transports.AdminServiceTransport(
            credentials=credentials.AnonymousCredentials(),
            credentials_file="credentials.json",
        )


def test_admin_service_base_transport():
    # Instantiate the base transport.
    with mock.patch(
        "google.cloud.pubsublite_v1.services.admin_service.transports.AdminServiceTransport.__init__"
    ) as Transport:
        Transport.return_value = None
        transport = transports.AdminServiceTransport(
            credentials=credentials.AnonymousCredentials(),
        )

    # Every method on the transport should just blindly
    # raise NotImplementedError.
    methods = (
        "create_topic",
        "get_topic",
        "get_topic_partitions",
        "list_topics",
        "update_topic",
        "delete_topic",
        "list_topic_subscriptions",
        "create_subscription",
        "get_subscription",
        "list_subscriptions",
        "update_subscription",
        "delete_subscription",
    )
    for method in methods:
        with pytest.raises(NotImplementedError):
            getattr(transport, method)(request=object())


def test_admin_service_base_transport_with_credentials_file():
    # Instantiate the base transport with a credentials file
    with mock.patch.object(
        auth, "load_credentials_from_file"
    ) as load_creds, mock.patch(
        "google.cloud.pubsublite_v1.services.admin_service.transports.AdminServiceTransport._prep_wrapped_messages"
    ) as Transport:
        Transport.return_value = None
        load_creds.return_value = (credentials.AnonymousCredentials(), None)
        transport = transports.AdminServiceTransport(
            credentials_file="credentials.json", quota_project_id="octopus",
        )
        load_creds.assert_called_once_with(
            "credentials.json",
            scopes=("https://www.googleapis.com/auth/cloud-platform",),
            quota_project_id="octopus",
        )


def test_admin_service_base_transport_with_adc():
    # Test the default credentials are used if credentials and credentials_file are None.
    with mock.patch.object(auth, "default") as adc, mock.patch(
        "google.cloud.pubsublite_v1.services.admin_service.transports.AdminServiceTransport._prep_wrapped_messages"
    ) as Transport:
        Transport.return_value = None
        adc.return_value = (credentials.AnonymousCredentials(), None)
        transport = transports.AdminServiceTransport()
        adc.assert_called_once()


def test_admin_service_auth_adc():
    # If no credentials are provided, we should use ADC credentials.
    with mock.patch.object(auth, "default") as adc:
        adc.return_value = (credentials.AnonymousCredentials(), None)
        AdminServiceClient()
        adc.assert_called_once_with(
            scopes=("https://www.googleapis.com/auth/cloud-platform",),
            quota_project_id=None,
        )


def test_admin_service_transport_auth_adc():
    # If credentials and host are not provided, the transport class should use
    # ADC credentials.
    with mock.patch.object(auth, "default") as adc:
        adc.return_value = (credentials.AnonymousCredentials(), None)
        transports.AdminServiceGrpcTransport(
            host="squid.clam.whelk", quota_project_id="octopus"
        )
        adc.assert_called_once_with(
            scopes=("https://www.googleapis.com/auth/cloud-platform",),
            quota_project_id="octopus",
        )
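The base-transport test above relies on a common GAPIC convention: the abstract transport stubs every RPC with a method that raises `NotImplementedError` until a concrete transport overrides it. A toy sketch of that convention (class and method bodies here are illustrative only, not the real GAPIC classes):

```python
class BaseTransport:
    # Each RPC on the abstract transport raises until a concrete
    # transport (e.g. a gRPC one) overrides it.
    def create_topic(self, request):
        raise NotImplementedError()

    def delete_topic(self, request):
        raise NotImplementedError()


transport = BaseTransport()
for method in ("create_topic", "delete_topic"):
    try:
        getattr(transport, method)(request=object())
    except NotImplementedError:
        pass  # expected for every method on the base class
    else:
        raise AssertionError(f"{method} unexpectedly implemented")
```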
@pytest.mark.parametrize(
    "transport_class",
    [transports.AdminServiceGrpcTransport, transports.AdminServiceGrpcAsyncIOTransport],
)
def test_admin_service_grpc_transport_client_cert_source_for_mtls(transport_class):
    cred = credentials.AnonymousCredentials()

    # Check ssl_channel_credentials is used if provided.
    with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
        mock_ssl_channel_creds = mock.Mock()
        transport_class(
            host="squid.clam.whelk",
            credentials=cred,
            ssl_channel_credentials=mock_ssl_channel_creds,
        )
        mock_create_channel.assert_called_once_with(
            "squid.clam.whelk:443",
            credentials=cred,
            credentials_file=None,
            scopes=("https://www.googleapis.com/auth/cloud-platform",),
            ssl_credentials=mock_ssl_channel_creds,
            quota_project_id=None,
            options=[
                ("grpc.max_send_message_length", -1),
                ("grpc.max_receive_message_length", -1),
            ],
        )

    # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
    # is used.
    with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
        with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
            transport_class(
                credentials=cred,
                client_cert_source_for_mtls=client_cert_source_callback,
            )
            expected_cert, expected_key = client_cert_source_callback()
            mock_ssl_cred.assert_called_once_with(
                certificate_chain=expected_cert, private_key=expected_key
            )


def test_admin_service_host_no_port():
    client = AdminServiceClient(
        credentials=credentials.AnonymousCredentials(),
        client_options=client_options.ClientOptions(
            api_endpoint="pubsublite.googleapis.com"
        ),
    )
    assert client.transport._host == "pubsublite.googleapis.com:443"


def test_admin_service_host_with_port():
    client = AdminServiceClient(
        credentials=credentials.AnonymousCredentials(),
        client_options=client_options.ClientOptions(
            api_endpoint="pubsublite.googleapis.com:8000"
        ),
    )
    assert client.transport._host == "pubsublite.googleapis.com:8000"


def test_admin_service_grpc_transport_channel():
    channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())

    # Check that channel is used if provided.
    transport = transports.AdminServiceGrpcTransport(
        host="squid.clam.whelk", channel=channel,
    )
    assert transport.grpc_channel == channel
    assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None


def test_admin_service_grpc_asyncio_transport_channel():
    channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())

    # Check that channel is used if provided.
    transport = transports.AdminServiceGrpcAsyncIOTransport(
        host="squid.clam.whelk", channel=channel,
    )
    assert transport.grpc_channel == channel
    assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
    "transport_class",
    [transports.AdminServiceGrpcTransport, transports.AdminServiceGrpcAsyncIOTransport],
)
def test_admin_service_transport_channel_mtls_with_client_cert_source(transport_class):
    with mock.patch(
        "grpc.ssl_channel_credentials", autospec=True
    ) as grpc_ssl_channel_cred:
        with mock.patch.object(
            transport_class, "create_channel"
        ) as grpc_create_channel:
            mock_ssl_cred = mock.Mock()
            grpc_ssl_channel_cred.return_value = mock_ssl_cred

            mock_grpc_channel = mock.Mock()
            grpc_create_channel.return_value = mock_grpc_channel

            cred = credentials.AnonymousCredentials()
            with pytest.warns(DeprecationWarning):
                with mock.patch.object(auth, "default") as adc:
                    adc.return_value = (cred, None)
                    transport = transport_class(
                        host="squid.clam.whelk",
                        api_mtls_endpoint="mtls.squid.clam.whelk",
                        client_cert_source=client_cert_source_callback,
                    )
                    adc.assert_called_once()

            grpc_ssl_channel_cred.assert_called_once_with(
                certificate_chain=b"cert bytes", private_key=b"key bytes"
            )
            grpc_create_channel.assert_called_once_with(
                "mtls.squid.clam.whelk:443",
                credentials=cred,
                credentials_file=None,
                scopes=("https://www.googleapis.com/auth/cloud-platform",),
                ssl_credentials=mock_ssl_cred,
                quota_project_id=None,
                options=[
                    ("grpc.max_send_message_length", -1),
                    ("grpc.max_receive_message_length", -1),
                ],
            )
            assert transport.grpc_channel == mock_grpc_channel
            assert transport._ssl_channel_credentials == mock_ssl_cred


# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
    "transport_class",
    [transports.AdminServiceGrpcTransport, transports.AdminServiceGrpcAsyncIOTransport],
)
def test_admin_service_transport_channel_mtls_with_adc(transport_class):
    mock_ssl_cred = mock.Mock()
    with mock.patch.multiple(
        "google.auth.transport.grpc.SslCredentials",
        __init__=mock.Mock(return_value=None),
        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
    ):
        with mock.patch.object(
            transport_class, "create_channel"
        ) as grpc_create_channel:
            mock_grpc_channel = mock.Mock()
            grpc_create_channel.return_value = mock_grpc_channel
            mock_cred = mock.Mock()

            with pytest.warns(DeprecationWarning):
                transport = transport_class(
                    host="squid.clam.whelk",
                    credentials=mock_cred,
                    api_mtls_endpoint="mtls.squid.clam.whelk",
                    client_cert_source=None,
                )

            grpc_create_channel.assert_called_once_with(
                "mtls.squid.clam.whelk:443",
                credentials=mock_cred,
                credentials_file=None,
                scopes=("https://www.googleapis.com/auth/cloud-platform",),
                ssl_credentials=mock_ssl_cred,
                quota_project_id=None,
                options=[
                    ("grpc.max_send_message_length", -1),
                    ("grpc.max_receive_message_length", -1),
                ],
            )
            assert transport.grpc_channel == mock_grpc_channel
def test_subscription_path():
    project = "squid"
    location = "clam"
    subscription = "whelk"
    expected = "projects/{project}/locations/{location}/subscriptions/{subscription}".format(
        project=project, location=location, subscription=subscription,
    )
    actual = AdminServiceClient.subscription_path(project, location, subscription)
    assert expected == actual


def test_parse_subscription_path():
    expected = {
        "project": "octopus",
        "location": "oyster",
        "subscription": "nudibranch",
    }
    path = AdminServiceClient.subscription_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_subscription_path(path)
    assert expected == actual


def test_topic_path():
    project = "cuttlefish"
    location = "mussel"
    topic = "winkle"
    expected = "projects/{project}/locations/{location}/topics/{topic}".format(
        project=project, location=location, topic=topic,
    )
    actual = AdminServiceClient.topic_path(project, location, topic)
    assert expected == actual


def test_parse_topic_path():
    expected = {
        "project": "nautilus",
        "location": "scallop",
        "topic": "abalone",
    }
    path = AdminServiceClient.topic_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_topic_path(path)
    assert expected == actual
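The path tests above check a round trip: a template builds the resource name and a parser recovers the components. A standalone sketch of how such a `topic_path`/`parse_topic_path` pair typically works (these two functions are illustrative re-implementations, not the generated client's code):

```python
import re


def topic_path(project, location, topic):
    # Mirrors the template asserted in the tests above.
    return f"projects/{project}/locations/{location}/topics/{topic}"


def parse_topic_path(path):
    # Named groups recover each path component; empty dict on mismatch.
    m = re.match(
        r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/topics/(?P<topic>.+?)$",
        path,
    )
    return m.groupdict() if m else {}


expected = {"project": "nautilus", "location": "scallop", "topic": "abalone"}
assert parse_topic_path(topic_path(**expected)) == expected  # round-trips
```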
def test_common_billing_account_path():
    billing_account = "squid"
    expected = "billingAccounts/{billing_account}".format(
        billing_account=billing_account,
    )
    actual = AdminServiceClient.common_billing_account_path(billing_account)
    assert expected == actual


def test_parse_common_billing_account_path():
    expected = {
        "billing_account": "clam",
    }
    path = AdminServiceClient.common_billing_account_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_common_billing_account_path(path)
    assert expected == actual


def test_common_folder_path():
    folder = "whelk"
    expected = "folders/{folder}".format(folder=folder,)
    actual = AdminServiceClient.common_folder_path(folder)
    assert expected == actual


def test_parse_common_folder_path():
    expected = {
        "folder": "octopus",
    }
    path = AdminServiceClient.common_folder_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_common_folder_path(path)
    assert expected == actual


def test_common_organization_path():
    organization = "oyster"
    expected = "organizations/{organization}".format(organization=organization,)
    actual = AdminServiceClient.common_organization_path(organization)
    assert expected == actual


def test_parse_common_organization_path():
    expected = {
        "organization": "nudibranch",
    }
    path = AdminServiceClient.common_organization_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_common_organization_path(path)
    assert expected == actual


def test_common_project_path():
    project = "cuttlefish"
    expected = "projects/{project}".format(project=project,)
    actual = AdminServiceClient.common_project_path(project)
    assert expected == actual


def test_parse_common_project_path():
    expected = {
        "project": "mussel",
    }
    path = AdminServiceClient.common_project_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_common_project_path(path)
    assert expected == actual


def test_common_location_path():
    project = "winkle"
    location = "nautilus"
    expected = "projects/{project}/locations/{location}".format(
        project=project, location=location,
    )
    actual = AdminServiceClient.common_location_path(project, location)
    assert expected == actual


def test_parse_common_location_path():
    expected = {
        "project": "scallop",
        "location": "abalone",
    }
    path = AdminServiceClient.common_location_path(**expected)

    # Check that the path construction is reversible.
    actual = AdminServiceClient.parse_common_location_path(path)
    assert expected == actual


def test_client_withDEFAULT_CLIENT_INFO():
    client_info = gapic_v1.client_info.ClientInfo()

    with mock.patch.object(
        transports.AdminServiceTransport, "_prep_wrapped_messages"
    ) as prep:
        client = AdminServiceClient(
            credentials=credentials.AnonymousCredentials(), client_info=client_info,
        )
        prep.assert_called_once_with(client_info)

    with mock.patch.object(
        transports.AdminServiceTransport, "_prep_wrapped_messages"
    ) as prep:
        transport_class = AdminServiceClient.get_transport_class()
        transport = transport_class(
            credentials=credentials.AnonymousCredentials(), client_info=client_info,
        )
        prep.assert_called_once_with(client_info)
| 37.010894 | 115 | 0.688647 | 16,921 | 146,082 | 5.739614 | 0.024585 | 0.013406 | 0.024094 | 0.023672 | 0.934102 | 0.907177 | 0.886615 | 0.860348 | 0.844265 | 0.823198 | 0 | 0.00358 | 0.225476 | 146,082 | 3,946 | 116 | 37.020274 | 0.854796 | 0.221027 | 0 | 0.69033 | 0 | 0 | 0.074166 | 0.027216 | 0 | 0 | 0 | 0 | 0.142799 | 1 | 0.049776 | false | 0 | 0.009792 | 0.000816 | 0.060384 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0e54aa2cc5108bf97327827024ef82726428cac9 | 86 | py | Python | configur/__init__.py | StudioTrackr/configur | 6e7505a812d830955f2ef65fc14b650b10487e37 | [
"MIT"
] | null | null | null | configur/__init__.py | StudioTrackr/configur | 6e7505a812d830955f2ef65fc14b650b10487e37 | [
"MIT"
] | null | null | null | configur/__init__.py | StudioTrackr/configur | 6e7505a812d830955f2ef65fc14b650b10487e37 | [
"MIT"
] | null | null | null | from configur.config import Settings
from configur.logging_config import init_logging
| 28.666667 | 48 | 0.883721 | 12 | 86 | 6.166667 | 0.583333 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 2 | 49 | 43 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e72de8675c3e4801f9d7ef0a6dfea644f9f921f | 132 | py | Python | Chapter01/manage.py | allainwonderland/Mastering-Flask-Web-Development-Second-Edition | f8963b73747d98d040f1d753e0f85b6ce6adf2fe | [
"MIT"
] | null | null | null | Chapter01/manage.py | allainwonderland/Mastering-Flask-Web-Development-Second-Edition | f8963b73747d98d040f1d753e0f85b6ce6adf2fe | [
"MIT"
] | null | null | null | Chapter01/manage.py | allainwonderland/Mastering-Flask-Web-Development-Second-Edition | f8963b73747d98d040f1d753e0f85b6ce6adf2fe | [
"MIT"
] | null | null | null | from main import app, db, User
@app.shell_context_processor
def make_shell_context():
    return dict(app=app, db=db, User=User)
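A `shell_context_processor` callback is simply registered and later called by Flask, which merges its returned dict into the interactive shell's namespace. A framework-free sketch of that mechanism (the `registered` list and decorator here are a hypothetical stand-in for Flask's internals):

```python
# Minimal stand-in for Flask's shell-context machinery.
registered = []


def shell_context_processor(func):
    # Flask stores the callback and invokes it when `flask shell` starts.
    registered.append(func)
    return func


@shell_context_processor
def make_shell_context():
    # Plain strings stand in for the real app/db/model objects.
    return dict(app="app", db="db", User="User")


# At shell startup, every processor's dict is merged into the namespace.
namespace = {}
for processor in registered:
    namespace.update(processor())

assert namespace["User"] == "User"
```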
| 16.5 | 42 | 0.742424 | 22 | 132 | 4.272727 | 0.590909 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 132 | 7 | 43 | 18.857143 | 0.839286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7ec762e199a3cf05d8de4f80904464fb2feca1df | 541 | py | Python | geokit/test/TEST_ALL.py | PyPSA/geokit | 4c6c98fc8fbae11f44fea46bcdf6299cbf65e0e1 | [
"MIT"
] | 1 | 2021-03-12T15:52:49.000Z | 2021-03-12T15:52:49.000Z | geokit/test/TEST_ALL.py | r-beer/geokit-1 | 950d86467a44b97b49df3837dd5a2fff59383c35 | [
"MIT"
] | null | null | null | geokit/test/TEST_ALL.py | r-beer/geokit-1 | 950d86467a44b97b49df3837dd5a2fff59383c35 | [
"MIT"
] | 1 | 2020-08-06T17:25:35.000Z | 2020-08-06T17:25:35.000Z | import os
print("\nTESTING: util...")
os.system("python test_util.py")
print("\nTESTING: srs...")
os.system("python test_srs.py")
print("\nTESTING: geom...")
os.system("python test_geom.py")
print("\nTESTING: raster...")
os.system("python test_raster.py")
print("\nTESTING: vector...")
os.system("python test_vector.py")
print("\nTESTING: extent...")
os.system("python test_extent.py")
print("\nTESTING: regionmask...")
os.system("python test_regionmask.py")
print("\nTESTING: algorithms...")
os.system("python test_algorithms.py")
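The runner above shells out with `os.system`, which ignores the active interpreter and discards exit codes. A hedged alternative sketch using `subprocess.run` and `sys.executable` (the `run_test` helper is an assumption, not part of geokit):

```python
import subprocess
import sys


def run_test(script):
    """Run a single test script and return True on a clean exit."""
    print(f"\nTESTING: {script}...")
    # sys.executable guarantees the same interpreter that runs this file.
    return subprocess.run([sys.executable, script]).returncode == 0


# Smoke-check the mechanism with an inline script instead of a real test file:
ok = subprocess.run([sys.executable, "-c", "print('ok')"]).returncode == 0
assert ok
```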
| 20.037037 | 38 | 0.698706 | 74 | 541 | 5 | 0.216216 | 0.281081 | 0.302703 | 0.389189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079482 | 541 | 26 | 39 | 20.807692 | 0.742972 | 0 | 0 | 0 | 0 | 0 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0.470588 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7ecbfdb337ebe4111b61e3e30088974c98dad157 | 2,312 | py | Python | tests/unit/app/api/representations/test_json.py | datphan/moviecrab | e3bcff700b994388f1ded68d268a960b10d57a81 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/app/api/representations/test_json.py | datphan/moviecrab | e3bcff700b994388f1ded68d268a960b10d57a81 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/app/api/representations/test_json.py | datphan/moviecrab | e3bcff700b994388f1ded68d268a960b10d57a81 | [
"BSD-3-Clause"
] | null | null | null | from mock import patch, MagicMock
from nose.plugins.attrib import attr
from tests.unit import UnitTestCase
class JsonTestCase(UnitTestCase):
    @attr('smoke')
    @patch('app.api.representations.json.current_app')
    @patch('app.api.representations.json.make_response')
    def test_output_json(self, mock_make_response, mock_current_app):
        settings = {}
        mock_current_app.config.get.return_value = settings
        mock_current_app.debug = False
        mock_output = MagicMock()
        mock_make_response.return_value = mock_output

        from app.api.representations.json import output_json
        data = {
            'hello': 'world'
        }
        code = 200
        headers = {
            'X-APPLICATION': 'Flask'
        }
        output = output_json(data, code, headers)

        self.assertEqual(settings, {})
        self.assertEqual(output, mock_output)
        mock_output.headers.__setitem__.called_once_with('Content-Type', 'application/json')
        mock_output.headers.extend.assert_called_once_with(headers)
        self.assertEqual(mock_make_response.call_count, 1)
        mock_current_app.config.get.assert_called_once_with('RESTFUL_JSON', {})

    @patch('app.api.representations.json.current_app')
    @patch('app.api.representations.json.make_response')
    def test_output_json_debug(self, mock_make_response, mock_current_app):
        settings = {}
        mock_current_app.config.get.return_value = settings
        mock_current_app.debug = True
        mock_output = MagicMock()
        mock_make_response.return_value = mock_output

        from app.api.representations.json import output_json
        data = {
            'hello': 'world'
        }
        code = 200
        headers = {
            'X-APPLICATION': 'Flask'
        }
        output = output_json(data, code, headers)

        self.assertEqual(settings.get('indent'), 4)
        self.assertEqual(settings.get('sort_keys'), True)
        self.assertEqual(output, mock_output)
        mock_output.headers.__setitem__.called_once_with('Content-Type', 'application/json')
        mock_output.headers.extend.assert_called_once_with(headers)
        self.assertEqual(mock_make_response.call_count, 1)
        mock_current_app.config.get.assert_called_once_with('RESTFUL_JSON', {})
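These tests pin down the contract of `output_json`: in debug mode it defaults `indent` to 4 and `sort_keys` to True before serializing. A standalone sketch of that contract (the real helper lives in `app.api.representations.json` and wraps Flask's `make_response`; this version is an assumption that returns a plain tuple instead):

```python
import json


def output_json(data, code, headers=None, debug=False, settings=None):
    # Sketch of the behaviour exercised above: debug mode mutates the
    # serialization settings, then the payload is dumped with them.
    settings = dict(settings or {})
    if debug:
        settings.setdefault("indent", 4)
        settings.setdefault("sort_keys", True)
    body = json.dumps(data, **settings) + "\n"
    return body, code, headers or {}


body, code, headers = output_json({"hello": "world"}, 200, debug=True)
assert code == 200
assert '"hello"' in body
```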
| 31.671233 | 92 | 0.67301 | 270 | 2,312 | 5.451852 | 0.225926 | 0.067935 | 0.076087 | 0.101902 | 0.870924 | 0.870924 | 0.870924 | 0.870924 | 0.870924 | 0.870924 | 0 | 0.005028 | 0.225779 | 2,312 | 72 | 93 | 32.111111 | 0.817318 | 0 | 0 | 0.692308 | 0 | 0 | 0.138408 | 0.070934 | 0 | 0 | 0 | 0 | 0.211538 | 1 | 0.038462 | false | 0 | 0.096154 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d1ce0218b47c6a9a831d814d0cad8d398d8504d | 35 | py | Python | horoscope/__init__.py | tapasweni-pathak/pyhoroscope | bd619b28756b416869d0f616395bd5b1f5daf855 | [
"MIT"
] | 10 | 2015-10-18T14:44:52.000Z | 2018-01-20T18:41:47.000Z | horoscope/__init__.py | tapasweni-pathak/pyhoroscope | bd619b28756b416869d0f616395bd5b1f5daf855 | [
"MIT"
] | 17 | 2015-08-11T02:40:59.000Z | 2017-07-06T01:21:43.000Z | horoscope/__init__.py | tapasweni-pathak/pyhoroscope | bd619b28756b416869d0f616395bd5b1f5daf855 | [
"MIT"
] | 6 | 2015-07-11T15:18:29.000Z | 2018-08-17T18:40:19.000Z | from pyhoroscope import Horoscope
| 11.666667 | 33 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 2 | 34 | 17.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
adabbe2e8b8359428d0bfe6a770398718c652949 | 142 | py | Python | evidentialdl/layers/__init__.py | Tuttusa/EvidentialDL | 7813c2705784bfeee21d25643259fd28d75b5f95 | [
"MIT"
] | 2 | 2022-03-19T06:22:10.000Z | 2022-03-22T00:51:24.000Z | evidentialdl/layers/__init__.py | Tuttusa/EvidentialDL | 7813c2705784bfeee21d25643259fd28d75b5f95 | [
"MIT"
] | null | null | null | evidentialdl/layers/__init__.py | Tuttusa/EvidentialDL | 7813c2705784bfeee21d25643259fd28d75b5f95 | [
"MIT"
] | null | null | null | from evidentialdl.layers.continuous import DenseNormalGamma, DenseNormal
from evidentialdl.layers.discrete import DenseDirichlet, DenseSigmoid | 71 | 72 | 0.894366 | 14 | 142 | 9.071429 | 0.714286 | 0.251969 | 0.346457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06338 | 142 | 2 | 73 | 71 | 0.954887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bc146d99cadd651d322871d70951658a1880bcab | 53 | py | Python | gitlab_timetracking/__init__.py | cutec-chris/gitlab-timetracking | 8f96fadb5baee8a80bb309ca61a7d06ba4f5a2be | [
"MIT"
] | null | null | null | gitlab_timetracking/__init__.py | cutec-chris/gitlab-timetracking | 8f96fadb5baee8a80bb309ca61a7d06ba4f5a2be | [
"MIT"
] | null | null | null | gitlab_timetracking/__init__.py | cutec-chris/gitlab-timetracking | 8f96fadb5baee8a80bb309ca61a7d06ba4f5a2be | [
"MIT"
] | null | null | null | from gitlab_timetracking.gitlab_timetracking import * | 53 | 53 | 0.90566 | 6 | 53 | 7.666667 | 0.666667 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 53 | 1 | 53 | 53 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc28e65e27114fc8ca8f3921b99c5b25d51ed28f | 422 | py | Python | notebooks/experiments/src/image_demos/__init__.py | fronovics/AI_playground | ac302c0694fa2182af343c257b28a033bc4cf5b9 | [
"Apache-2.0"
] | null | null | null | notebooks/experiments/src/image_demos/__init__.py | fronovics/AI_playground | ac302c0694fa2182af343c257b28a033bc4cf5b9 | [
"Apache-2.0"
] | null | null | null | notebooks/experiments/src/image_demos/__init__.py | fronovics/AI_playground | ac302c0694fa2182af343c257b28a033bc4cf5b9 | [
"Apache-2.0"
] | null | null | null | from notebooks.experiments.src.image_demos.demo_1a import VideoRecognitionDashboard
from notebooks.experiments.src.image_demos.demo_1b import VideoTakeDashboard
from notebooks.experiments.src.image_demos.demo_2 import TwoClassCameraDashboard
from notebooks.experiments.src.image_demos.simple_binary_classifier import SimpleBinaryClassifier
from notebooks.experiments.src.image_demos.image_collection import ImageCollector
| 70.333333 | 97 | 0.905213 | 51 | 422 | 7.27451 | 0.392157 | 0.175202 | 0.32345 | 0.363881 | 0.530997 | 0.530997 | 0.331536 | 0 | 0 | 0 | 0 | 0.007463 | 0.047393 | 422 | 5 | 98 | 84.4 | 0.915423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
70e00da97deb84f8e4a8bff847348036927e3fbe | 1,209 | py | Python | test_python_toolbox/test_read_write_lock/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 119 | 2015-02-05T17:59:47.000Z | 2022-02-21T22:43:40.000Z | test_python_toolbox/test_read_write_lock/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 4 | 2019-04-24T14:01:14.000Z | 2020-05-21T12:03:29.000Z | test_python_toolbox/test_read_write_lock/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 14 | 2015-03-30T06:30:42.000Z | 2021-12-24T23:45:11.000Z | # Copyright 2009-2017 Ram Rachum.
# This program is distributed under the MIT license.
from python_toolbox.locking import ReadWriteLock
def test():
    '''Test entering, exiting and re-entering a `ReadWriteLock`.'''
    read_write_lock = ReadWriteLock()

    with read_write_lock.read:
        pass

    with read_write_lock.write:
        pass

    with read_write_lock.read as enter_return_value:
        assert enter_return_value is read_write_lock

    with read_write_lock.read:
        with read_write_lock.read:
            with read_write_lock.read:
                with read_write_lock.read:
                    with read_write_lock.write:
                        with read_write_lock.write:
                            with read_write_lock.write:
                                with read_write_lock.write:
                                    pass

    with read_write_lock.write:
        with read_write_lock.write:
            with read_write_lock.write:
                with read_write_lock.write:
                    with read_write_lock.read:
                        with read_write_lock.read:
                            with read_write_lock.read:
                                with read_write_lock.read:
                                    pass
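The nested `with` blocks above exercise reentrancy: the same thread may re-acquire the lock it already holds without deadlocking, and `__enter__` hands back the lock object. A toy context manager showing just that behaviour (the `ReentrantBlock` class is a simplified stand-in; the real `ReadWriteLock` additionally distinguishes readers from writers):

```python
import threading


class ReentrantBlock:
    """Toy lock wrapper: nested `with` entries from one thread must not
    deadlock, and entering returns the object itself."""

    def __init__(self):
        self._lock = threading.RLock()  # reentrant, unlike a plain Lock

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, *exc):
        self._lock.release()
        return False


block = ReentrantBlock()
with block as entered:
    assert entered is block
    with block:  # re-entering from the same thread is fine
        with block:
            pass
```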
| 32.675676 | 59 | 0.554177 | 136 | 1,209 | 4.580882 | 0.227941 | 0.303371 | 0.438202 | 0.518459 | 0.680578 | 0.674157 | 0.627608 | 0.627608 | 0.627608 | 0.627608 | 0 | 0.011019 | 0.399504 | 1,209 | 36 | 60 | 33.583333 | 0.847107 | 0.067825 | 0 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.037037 | false | 0.148148 | 0.037037 | 0 | 0.074074 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
cb26753c8d8d6f07a7ad7f71f5b9b394c74aedf3 | 108 | py | Python | day03/test7.py | liuweidongg/dejin | 12240b9d27347d6e041338869591aa7133bf80cd | [
"Apache-2.0"
] | 1 | 2018-09-27T02:01:38.000Z | 2018-09-27T02:01:38.000Z | day03/test7.py | liuweidongg/dejin | 12240b9d27347d6e041338869591aa7133bf80cd | [
"Apache-2.0"
] | null | null | null | day03/test7.py | liuweidongg/dejin | 12240b9d27347d6e041338869591aa7133bf80cd | [
"Apache-2.0"
] | null | null | null | n = 50000.0
s = 0
a = 0
while n <= 50000.0 and n >= 2:
    a = (1 / n) + (1 / (n - 1))
    s = a + s
    n = n - 1
print(s)
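The loop accumulates the series s = Σ from n = 2 to 50000 of (1/n + 1/(n-1)); a quick cross-check of the same value with an equivalent `sum()` expression (purely illustrative):

```python
# Rebuild the loop's result, then compare against an equivalent sum().
s, n = 0.0, 50000.0
while n >= 2:
    s += (1 / n) + (1 / (n - 1))
    n -= 1

total = sum(1 / n + 1 / (n - 1) for n in range(2, 50001))
assert abs(s - total) < 1e-6  # same series, summed in opposite orders
```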
| 12 | 26 | 0.425926 | 28 | 108 | 1.642857 | 0.357143 | 0.130435 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.260274 | 0.324074 | 108 | 8 | 27 | 13.5 | 0.369863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cb308023311eb5f86ecd8068785364d3dd3083af | 10,252 | py | Python | tests/test_selenium/test_token_correct.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | 16 | 2018-11-16T13:48:20.000Z | 2020-11-13T21:28:06.000Z | tests/test_selenium/test_token_correct.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | 179 | 2018-11-16T12:43:05.000Z | 2022-03-31T08:52:22.000Z | tests/test_selenium/test_token_correct.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | 21 | 2019-02-17T15:56:29.000Z | 2022-03-28T09:27:57.000Z | from tests.test_selenium.base import TokenCorrectBase, TokenCorrect2CorporaBase
import selenium
class TestTokenCorrectWauchierCorpus(TokenCorrectBase):
    def test_edit_token_lemma_with_allowed_values(self):
        """ [Wauchier] Test the edition of a token lemma with allowed values"""
        # Try first with an edit that would work
        self.addCorpus(with_token=True, with_allowed_lemma=True, tokens_up_to=24)
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value("un", id_row="1")
        self.assert_token_has_values(token, lemma="un")
        self.assert_saved(row)

        # Try with an unallowed lemma
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value("WRONG", id_row="2")
        self.assert_token_has_values(token, lemma="saint")
        self.assert_invalid_value(row, "lemma")

        # Try with a POS update but keeping the lemma
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value("ADJqua", value_type="POS", id_row="3")
        self.assert_token_has_values(token, lemma="martin", POS="ADJqua")
        self.assert_saved(row)

    def test_edit_token_lemma_with_allowed_values_autocomplete(self):
        """ [Wauchier] Test the edition of a token with the use of autocompletion"""
        # Try first with an edit that would work
        self.addCorpus(with_token=True, with_allowed_lemma=True, tokens_up_to=24)
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value(
            "d", id_row="1",
            autocomplete_selector=".autocomplete-suggestion[data-val='devoir']"
        )
        self.assert_token_has_values(token, lemma="devoir", POS="PRE")
        self.assert_saved(row)

    def test_edit_POS(self):
        """ [Wauchier] Edit POS of a token """
        self.addCorpus(with_token=True, with_allowed_lemma=True, tokens_up_to=24)
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value(
            "ADJqua", id_row="1", value_type="POS"
        )
        self.assert_token_has_values(token, lemma="de", POS="ADJqua")
        self.assert_saved(row)

    def test_edit_morph(self):
        """ [Wauchier] Edit morph of a token """
        self.addCorpus(with_token=True, with_allowed_lemma=True, tokens_up_to=24)
        self.driver.refresh()
        token, status_text, row = self.edith_nth_row_value(
            "_", id_row="1", value_type="morph"
        )
        self.assert_saved(row)
        self.assert_token_has_values(token, lemma="de", POS="PRE", morph="_")

        # Try with an unallowed morph
        token, status_text, row = self.edith_nth_row_value(
            "Not Allowed", id_row="2", value_type="morph"
        )
        self.assert_saved(row)
        self.assert_token_has_values(token, lemma="saint", POS="ADJqua", morph="Not Allowed")

    def test_edit_morph_with_allowed(self):
        """ [Wauchier] Edit morph of a token with allowed values as control"""
        self.addCorpus(with_token=True, with_allowed_lemma=True, with_allowed_morph=True, tokens_up_to=24)
        self.driver.get(self.get_server_url())
        token, status_text, row = self.edith_nth_row_value(
            "_", id_row="1", value_type="morph"
        )
        self.assert_saved(row)
        self.assert_token_has_values(token, lemma="de", POS="PRE", morph="_")

        # Try with an unallowed morph
        token, status_text, row = self.edith_nth_row_value(
            "Not Allowed", id_row="2", value_type="morph"
        )
        self.assert_invalid_value(row, "morph")
        self.assert_token_has_values(token, lemma="saint", POS="ADJqua", morph="None")

        # With auto complete
        token, status_text, row = self.edith_nth_row_value(
            "masc sing", id_row="3", corpus_id="1", value_type="morph",
            autocomplete_selector=".autocomplete-suggestion[data-val='NOMB.=s|GENRE=m|CAS=n']"
        )
        self.assert_saved(row)
        self.assert_token_has_values(token, lemma="martin", POS="NOMpro", morph="NOMB.=s|GENRE=m|CAS=n")

        # With auto complete based on value and not label
        token, status_text, row = self.edith_nth_row_value(
            "NOMB.=s GENRE=m", id_row="4", corpus_id="1", value_type="morph",
            autocomplete_selector=".autocomplete-suggestion[data-val='NOMB.=s|GENRE=m|CAS=n']"
        )
        self.assert_saved(row)
        self.assert_token_has_values(token, lemma="mout", POS="ADVgen", morph="NOMB.=s|GENRE=m|CAS=n")
class TestTokensCorrectFloovant(TokenCorrectBase):
CORPUS = "floovant"
CORPUS_ID = "2"
def test_edit_POS(self):
""" [Floovant] Edit POS of a token """
self.addCorpus(with_token=True, with_allowed_lemma=True)
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value(
"ADJqua", id_row="1", value_type="POS"
)
self.assert_saved(row)
self.assert_token_has_values(token, lemma="seignor", POS="ADJqua")
def test_edit_token_lemma_with_allowed_values_lemma_pos(self):
""" [Floovant] Test the edition of a token with allowed lemma and POS"""
        # Try first with an edit that would work
self.addCorpus(with_token=True, with_allowed_lemma=True, with_allowed_pos=True)
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("estoire1", id_row="1")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="estoire1")
# Try with an unallowed lemma
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("WRONG", id_row="2")
self.assert_invalid_value(row, "lemma")
# It should not be changed
self.assert_token_has_values(token, lemma="or4")
# Try with a POS update but keeping the lemma
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("ADJqua", value_type="POS", id_row="3")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="escouter", POS="ADJqua")
def test_edit_token_morph_with_allowed_values_lemma(self):
""" [Floovant] Test the edition of a token's morph with allowed_lemma """
        # Try first with an edit that would work
self.addCorpus(with_token=True, with_allowed_lemma=True)
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("SomeMorph", id_row="1", value_type="morph")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="seignor", morph="SomeMorph")
def test_edit_token_morph(self):
""" [Floovant] Test the edition of a token's morph"""
        # Try first with an edit that would work
self.addCorpus(with_token=True)
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("SomeMorph", id_row="1", value_type="morph")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="seignor", morph="SomeMorph")
def test_edit_token_with_same_value(self):
""" [Floovant] Test the edition of a token's lemma with the same value"""
        # Try first with an edit that would work
self.addCorpus(with_token=True)
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("seignor", id_row="1", value_type="lemma")
self.assert_unchanged(row)
self.assertNotIn("table-changed", row.get_attribute("class"))
class TestTokensEditTwoCorpora(TokenCorrect2CorporaBase):
CORPUS = "wauchier"
CORPUS_ID = "1"
def test_edit_token_lemma_with_allowed_values_lemma_pos(self):
""" [TwoCorpora] Test the edition of a token """
# Try first with an edit that would work
self.addCorpus(with_token=True, with_allowed_lemma=True, with_allowed_pos=True, tokens_up_to=24)
token, status_text, row = self.edith_nth_row_value("saint", id_row="1")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="saint")
        # Try with a lemma that is only allowed in the second corpus
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("seignor", id_row="2")
self.assert_invalid_value(row, "lemma")
# Should not be changed
self.assert_token_has_values(token, lemma="saint")
# Try with a POS update but keeping the lemma
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value("ADJqua", value_type="POS", id_row="3")
self.assert_saved(row)
self.assert_token_has_values(token, lemma="martin", POS="ADJqua")
def test_edit_token_lemma_with_typeahead_click(self):
""" [TwoCorpora] Test the edition of a token using typeahead"""
# Try first with an edit that would work
self.addCorpus(with_token=True, with_allowed_lemma=True, tokens_up_to=24)
token, status_text, row = self.edith_nth_row_value(
"s", id_row="1", corpus_id="1",
autocomplete_selector=".autocomplete-suggestion[data-val='saint']"
)
self.assert_saved(row)
self.assert_token_has_values(token, lemma="saint")
self.assertEqual(
row.find_elements_by_tag_name("b")[0].text,
token.form,
"Bold should be used to highlight in-context word"
)
# Try with an allowed lemma from the second corpus
self.driver.refresh()
token, status_text, row = self.edith_nth_row_value(
"s", id_row=str(self.first_token_id(2)+1), corpus_id="2",
autocomplete_selector=".autocomplete-suggestion[data-val='seignor']"
)
self.assert_saved(row)
self.assert_token_has_values(token, lemma="seignor")
        # Try with a lemma from the first corpus, which should not be suggested in the second
self.driver.refresh()
with self.assertRaises(
(selenium.common.exceptions.NoSuchElementException,
selenium.common.exceptions.TimeoutException)
):
_ = self.edith_nth_row_value(
"s", id_row=str(self.first_token_id(2)+1), corpus_id="2",
autocomplete_selector=".autocomplete-suggestion[data-val='saint']"
)
# bin/sticks/polysticks-123-4x4-clipped-corners-2.py (tiwo/puzzler)
#!/usr/bin/env python
# $Id$
"""132 solutions"""
import puzzler
from puzzler.puzzles.polysticks123 import Polysticks123_4x4ClippedCorners2
puzzler.run(Polysticks123_4x4ClippedCorners2)
# seaborn_altair/__init__.py (Kitware/seaborn_altair)
from .axisgrid import *
from .categorical import *
from .regression import *
from .relational import *
# svnTree/test/__init__.py (Nizarazo/instl)
from .test_SVNTree import TestSVNTree
from .test_SVNItem import TestSVNItem
# tests/unit/dataactvalidator/test_fabs41_detached_award_financial_assistance_1.py (dael-victoria-reyes/data-act-broker-backend)
from tests.unit.dataactcore.factories.staging import DetachedAwardFinancialAssistanceFactory
from tests.unit.dataactvalidator.utils import number_of_errors, query_columns
from dataactcore.models.domainModels import CityCode
_FILE = 'fabs41_detached_award_financial_assistance_1'
def test_column_headers(database):
expected_subset = {"row_number", "place_of_performance_code"}
actual = set(query_columns(_FILE, database))
assert expected_subset == actual
def test_success(database):
""" For PrimaryPlaceOfPerformanceCode XX##### or XX####R, where PrimaryPlaceOfPerformanceZIP+4 is blank or
"city-wide": city code ##### or ####R must be valid and exist in the provided state.
"""
city_code = CityCode(city_code="10987", state_code="NY")
city_code_2 = CityCode(city_code="1098R", state_code="NY")
det_award_1 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="NY*****",
place_of_performance_zip4a="2")
det_award_2 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="NY**123",
place_of_performance_zip4a="1")
det_award_3 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="NY**123",
place_of_performance_zip4a=None)
det_award_4 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="ny10986",
place_of_performance_zip4a="12345")
det_award_5 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Na10987",
place_of_performance_zip4a="12345-6789")
det_award_6 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Ny10987",
place_of_performance_zip4a=None)
det_award_7 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Ny10987",
place_of_performance_zip4a='')
det_award_8 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Ny10987",
place_of_performance_zip4a="city-wide")
# Testing with R ending
det_award_9 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Ny1098R",
place_of_performance_zip4a="city-wide")
det_award_10 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="Ny1098R",
place_of_performance_zip4a=None)
errors = number_of_errors(_FILE, database, models=[det_award_1, det_award_2, det_award_3, det_award_4, det_award_5,
det_award_6, det_award_7, det_award_8, det_award_9, det_award_10,
city_code, city_code_2])
assert errors == 0
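# A minimal, illustrative sketch (not part of the validator under test) of the
# PrimaryPlaceOfPerformanceCode shape described in the docstring above: a
# two-letter state code followed by a city code of the form ##### or ####R.
# The helper name is hypothetical.
import re

_PPOP_RE = re.compile(r'^[A-Za-z]{2}(\d{5}|\d{4}[Rr])$')

def _city_code_part(ppop_code):
    """Return the city-code portion ('#####' or '####R'), or None if malformed."""
    match = _PPOP_RE.match(ppop_code)
    return ppop_code[2:].upper() if match else None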
def test_failure(database):
""" Test failure for PrimaryPlaceOfPerformanceCode XX##### or XX####R, where PrimaryPlaceOfPerformanceZIP+4 is
blank or "city-wide": city code ##### or ####R must be valid and exist in the provided state.
"""
city_code = CityCode(city_code="10987", state_code="NY")
city_code_2 = CityCode(city_code="1098R", state_code="NY")
det_award_1 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="ny10986",
place_of_performance_zip4a=None)
det_award_2 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="NY10986",
place_of_performance_zip4a='')
det_award_3 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="na10987",
place_of_performance_zip4a=None)
det_award_4 = DetachedAwardFinancialAssistanceFactory(place_of_performance_code="na1098R",
place_of_performance_zip4a=None)
errors = number_of_errors(_FILE, database, models=[det_award_1, det_award_2, det_award_3, det_award_4, city_code,
city_code_2])
assert errors == 4
# test/v2/test_h_infinity.py (wavestate/wavestate-iirrational)
# -*- coding: utf-8 -*-
"""
"""
from __future__ import division, print_function, unicode_literals
import pytest
import numpy as np
import os.path as path
from wavestate.iirrational import v2
from wavestate.iirrational.v2 import testing
from wavestate.iirrational.testing.utilities import (
sign_validate_and_plot_hint,
)
try:
from IIRrational_test_data import IIRrational_data_dev
except ImportError:
module_import_skip = True
else:
module_import_skip = False
@pytest.mark.skipif(module_import_skip, reason="cannot import IIRrational_test_data")
def test_simple1(request, browser, plotsections, plot_verbosity):
data_name = "PUM_long"
data = IIRrational_data_dev(
data_name,
instance_num=1,
set_num=1,
)
out = v2.data2filter(
xfer=data.data,
F_Hz=data.F_Hz,
SNR=data.SNR,
order_initial=10,
delay_s=0,
F_nyquist_Hz=None,
poles=(-100, -50),
zeros=(-10, -50),
mode="fit",
delay_s_max=0.01,
h_infinity=0.99,
h_infinity_deweight=0.1,
# hints = [
# testing.validate_plot_log(__file__, request),
# ],
)
print(out.as_scipy_signal_ZPKsw())
print(out.as_foton_ZPKsf())
print(out.as_foton_str_ZPKsf())
print(out.as_foton_ZPKsn())
print(out.as_foton_str_ZPKsn())
return
@pytest.mark.skipif(module_import_skip, reason="cannot import IIRrational_test_data")
def test_simple1_vec(request, browser, plotsections, plot_verbosity):
data_name = "PUM_long"
data = IIRrational_data_dev(
data_name,
instance_num=1,
set_num=1,
)
out = v2.data2filter(
xfer=data.data,
F_Hz=data.F_Hz,
SNR=data.SNR,
order_initial=10,
delay_s=0,
F_nyquist_Hz=None,
poles=(-100, -50),
zeros=(-10, -50),
mode="fit",
delay_s_max=0.01,
h_infinity=abs(1 / (1 + 1j * data.F_Hz / 1)),
# hints = [
# testing.validate_plot_log(__file__, request),
# ],
)
print(out.as_scipy_signal_ZPKsw())
print(out.as_foton_ZPKsf())
print(out.as_foton_str_ZPKsf())
print(out.as_foton_ZPKsn())
print(out.as_foton_str_ZPKsn())
return
@pytest.mark.skipif(module_import_skip, reason="cannot import IIRrational_test_data")
def test_simple1_callable(request, browser, plotsections, plot_verbosity):
data_name = "PUM_long"
data = IIRrational_data_dev(
data_name,
instance_num=1,
set_num=1,
)
out = v2.data2filter(
xfer=data.data,
F_Hz=data.F_Hz,
SNR=data.SNR,
order_initial=10,
delay_s=0,
F_nyquist_Hz=None,
poles=(-100, -50),
zeros=(-10, -50),
mode="fit",
delay_s_max=0.01,
h_infinity=lambda L: abs(1 / (1 + 1j * np.linspace(0, 100, L))),
# hints = [
# testing.validate_plot_log(__file__, request),
# ],
)
print(out.as_scipy_signal_ZPKsw())
print(out.as_foton_ZPKsf())
print(out.as_foton_str_ZPKsf())
print(out.as_foton_ZPKsn())
print(out.as_foton_str_ZPKsn())
return
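# The three tests above exercise the three accepted forms of the h_infinity
# argument: a scalar, a per-frequency vector, and a callable taking the number
# of frequency points. A numpy-only sketch of the callable form (the frequency
# grid 0..100 here is illustrative, not taken from the data set):
h_infinity_weight = lambda L: abs(1 / (1 + 1j * np.linspace(0, 100, L)))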
# dixi/__init__.py (tvogels/dixi)
from .dixi import Dixi
# Ex00.py (ErickTeixeira777/Python-3)
print('\033[36m{:=^37}\033[m'.format('Code-Save'))
print('\033[31m{:=^37}\033[m'.format('Hello World'))
print('\033[0;37;40mHello, World!\033[m')
'''List of background colors
Transparent = '\033[0;30;40m
Red         = '\033[0;30;41m
Green       = '\033[0;30;42m
Yellow      = '\033[0;30;43m
Blue        = '\033[0;30;44m
Purple      = '\033[0;30;45m
Light blue  = '\033[0;30;46m
White       = '\033[0;30;47m'''
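# The sequences above follow the pattern '\033[<style>;<foreground>;<background>m'.
# A small helper (not part of the original exercise; the name is hypothetical)
# that builds such a sequence and resets the style afterwards:
def ansi(text, style=0, fg=37, bg=40):
    """Wrap text in an ANSI escape sequence and reset the style at the end."""
    return '\033[{};{};{}m{}\033[m'.format(style, fg, bg, text)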
# interaction_prediction/dataset.py (david9dragon9/AIR)
import tensorflow as tf
import numpy as np
i_scenario_features = {
'scenario/id':
tf.io.FixedLenFeature([1], tf.string, default_value = None)
}
i_state_features = {
'state/id':
tf.io.FixedLenFeature([2], tf.float32, default_value=None),
'state/type':
tf.io.FixedLenFeature([2], tf.float32, default_value=None),
'state/is_sdc':
tf.io.FixedLenFeature([2], tf.int64, default_value=None),
'state/tracks_to_predict':
tf.io.FixedLenFeature([2], tf.int64, default_value=None),
'state/current/bbox_yaw':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/height':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/length':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/timestamp_micros':
tf.io.FixedLenFeature([2, 1], tf.int64, default_value=None),
'state/current/valid':
tf.io.FixedLenFeature([2, 1], tf.int64, default_value=None),
'state/current/vel_yaw':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/velocity_x':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/velocity_y':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/width':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/x':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/y':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/current/z':
tf.io.FixedLenFeature([2, 1], tf.float32, default_value=None),
'state/future/bbox_yaw':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/height':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/length':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/timestamp_micros':
tf.io.FixedLenFeature([2, 80], tf.int64, default_value=None),
'state/future/valid':
tf.io.FixedLenFeature([2, 80], tf.int64, default_value=None),
'state/future/vel_yaw':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/velocity_x':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/velocity_y':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/width':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/x':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/y':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/future/z':
tf.io.FixedLenFeature([2, 80], tf.float32, default_value=None),
'state/past/bbox_yaw':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/height':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/length':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/timestamp_micros':
tf.io.FixedLenFeature([2, 10], tf.int64, default_value=None),
'state/past/valid':
tf.io.FixedLenFeature([2, 10], tf.int64, default_value=None),
'state/past/vel_yaw':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/velocity_x':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/velocity_y':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/width':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/x':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/y':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
'state/past/z':
tf.io.FixedLenFeature([2, 10], tf.float32, default_value=None),
}
i_features = {
'image0/encoded':
tf.io.FixedLenFeature([1], tf.string, default_value=None),
'image1/encoded':
tf.io.FixedLenFeature([1], tf.string, default_value=None),
}
i_features_description = {}
i_features_description.update(i_scenario_features)
i_features_description.update(i_state_features)
i_features_description.update(i_features)
def _parse_no_swap(value):
decoded_example = tf.io.parse_single_example(value, i_features_description)
return parse_example_no_swap(decoded_example)
def parse_example_no_swap(decoded_example):
scenario_id = decoded_example['scenario/id'] # [1]
object_id = decoded_example['state/id'] # [2]
x = decoded_example['state/current/x']
y = decoded_example['state/current/y']
yaw = decoded_example['state/current/bbox_yaw']
past_states = tf.stack([
decoded_example['state/past/x'],
decoded_example['state/past/y'],
decoded_example['state/past/length'],
decoded_example['state/past/width'],
decoded_example['state/past/bbox_yaw'],
decoded_example['state/past/velocity_x'],
decoded_example['state/past/velocity_y']
], -1) # (2, 10, 7)
cur_states = tf.stack([
decoded_example['state/current/x'],
decoded_example['state/current/y'],
decoded_example['state/current/length'],
decoded_example['state/current/width'],
decoded_example['state/current/bbox_yaw'],
decoded_example['state/current/velocity_x'],
decoded_example['state/current/velocity_y']
], -1) # (2, 1, 7)
input_states = tf.concat([past_states, cur_states], 1)[..., :7] # (2, 11, 7)
future_states = tf.stack([
decoded_example['state/future/x'],
decoded_example['state/future/y'],
decoded_example['state/future/length'],
decoded_example['state/future/width'],
decoded_example['state/future/bbox_yaw'],
decoded_example['state/future/velocity_x'],
decoded_example['state/future/velocity_y']
], -1) # (2, 80, 7)
gt_future_states = tf.concat([past_states, cur_states, future_states], 1) # (2, 91, 7)
past_is_valid = decoded_example['state/past/valid'] > 0 # (2, 10)
current_is_valid = decoded_example['state/current/valid'] > 0 # (2,1)
future_is_valid = decoded_example['state/future/valid'] > 0 # (2, 80)
gt_future_is_valid = tf.concat(
[past_is_valid, current_is_valid, future_is_valid], 1) # (2, 91)
    encoded_0 = decoded_example['image0/encoded'][0]  # JPEG bytes; decodes to (224, 448, 3)
    encoded_1 = decoded_example['image1/encoded'][0]  # JPEG bytes; decodes to (224, 448, 3)
is_sdc = decoded_example['state/is_sdc']
object_type = decoded_example['state/type']
image_0 = tf.image.decode_jpeg(encoded_0)
image_1 = tf.image.decode_jpeg(encoded_1)
inputs = {
'is_sdc': is_sdc,
'gt_future_states': gt_future_states,# (2, 91, 7)
'gt_future_is_valid': gt_future_is_valid,# (2, 91)
'past_states': past_states,# (2, 10, 7)
'object_type': object_type,# (2, )
'x': x,# (2,)
'y': y,# (2, )
'yaw':yaw,# (2, )
'image_0': image_0,
'image_1': image_1,
'scenario_id':scenario_id,
'object_id': object_id
}
return inputs
def _parse(value):
decoded_example = tf.io.parse_single_example(value, i_features_description)
return parse_example(decoded_example)
def parse_example(decoded_example):
scenario_id = decoded_example['scenario/id'] # [1]
object_id = decoded_example['state/id'] # [2]
x = decoded_example['state/current/x']
y = decoded_example['state/current/y']
yaw = decoded_example['state/current/bbox_yaw']
past_states = tf.stack([
decoded_example['state/past/x'],
decoded_example['state/past/y'],
decoded_example['state/past/length'],
decoded_example['state/past/width'],
decoded_example['state/past/bbox_yaw'],
decoded_example['state/past/velocity_x'],
decoded_example['state/past/velocity_y']
], -1) # (2, 10, 7)
cur_states = tf.stack([
decoded_example['state/current/x'],
decoded_example['state/current/y'],
decoded_example['state/current/length'],
decoded_example['state/current/width'],
decoded_example['state/current/bbox_yaw'],
decoded_example['state/current/velocity_x'],
decoded_example['state/current/velocity_y']
], -1) # (2, 1, 7)
input_states = tf.concat([past_states, cur_states], 1)[..., :7] # (2, 11, 7)
future_states = tf.stack([
decoded_example['state/future/x'],
decoded_example['state/future/y'],
decoded_example['state/future/length'],
decoded_example['state/future/width'],
decoded_example['state/future/bbox_yaw'],
decoded_example['state/future/velocity_x'],
decoded_example['state/future/velocity_y']
], -1) # (2, 80, 7)
gt_future_states = tf.concat([past_states, cur_states, future_states], 1) # (2, 91, 7)
past_is_valid = decoded_example['state/past/valid'] > 0 # (2, 10)
current_is_valid = decoded_example['state/current/valid'] > 0 # (2,1)
future_is_valid = decoded_example['state/future/valid'] > 0 # (2, 80)
gt_future_is_valid = tf.concat(
[past_is_valid, current_is_valid, future_is_valid], 1) # (2, 91)
    encoded_0 = decoded_example['image0/encoded'][0]  # JPEG bytes; decodes to (224, 448, 3)
    encoded_1 = decoded_example['image1/encoded'][0]  # JPEG bytes; decodes to (224, 448, 3)
is_sdc = decoded_example['state/is_sdc']
object_type = decoded_example['state/type']
swap = tf.cast(object_type[0] > object_type[1], tf.int32)
indices = swap * tf.constant([1, 0]) + (1-swap) * tf.constant([0, 1])
orig_image_0 = tf.image.decode_jpeg(encoded_0)
orig_image_1 = tf.image.decode_jpeg(encoded_1)
swap_uint8 = tf.cast(swap, tf.uint8)
image_0 = swap_uint8*orig_image_1 + (1-swap_uint8)*orig_image_0
image_1 = swap_uint8*orig_image_0 + (1-swap_uint8)*orig_image_1
inputs = {
'is_sdc': tf.gather(is_sdc, indices, axis=0),
'gt_future_states': tf.gather(gt_future_states, indices, axis=0),# (2, 91, 7)
'gt_future_is_valid': tf.gather(gt_future_is_valid, indices, axis=0),# (2, 91)
'past_states':tf.gather(past_states, indices, axis=0),# (2, 10, 7)
'object_type': tf.gather(object_type, indices, axis=0),# (2, )
'x':tf.gather(x, indices, axis=0),# (2,)
'y':tf.gather(y, indices, axis=0),# (2, )
'yaw':tf.gather(yaw, indices, axis=0),# (2, )
'image_0': image_0,
'image_1': image_1,
'scenario_id':scenario_id,
'object_id': tf.gather(object_id, indices, axis=0),
'indices': indices
}
return inputs
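# The swap in parse_example orders the two agents so the lower object-type code
# comes first, using arithmetic on a 0/1 flag instead of a branch (which keeps
# the logic expressible as tensor ops). A plain-Python sketch of the same index
# trick, for readability only (the helper name is hypothetical):
def _order_pair(a, b, key):
    swap = int(key(a) > key(b))    # 1 when the pair must be exchanged
    indices = [swap, 1 - swap]     # same as swap*[1, 0] + (1 - swap)*[0, 1]
    pair = [a, b]
    return [pair[i] for i in indices], indices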
def _parse_without_image(value):
decoded_example = tf.io.parse_single_example(value, i_features_description)
scenario_id = decoded_example['scenario/id'] # [1]
object_id = decoded_example['state/id'] # [2]
x = decoded_example['state/current/x']
y = decoded_example['state/current/y']
yaw = decoded_example['state/current/bbox_yaw']
past_states = tf.stack([
decoded_example['state/past/x'],
decoded_example['state/past/y'],
decoded_example['state/past/length'],
decoded_example['state/past/width'],
decoded_example['state/past/bbox_yaw'],
decoded_example['state/past/velocity_x'],
decoded_example['state/past/velocity_y']
], -1) # (2, 10, 7)
cur_states = tf.stack([
decoded_example['state/current/x'],
decoded_example['state/current/y'],
decoded_example['state/current/length'],
decoded_example['state/current/width'],
decoded_example['state/current/bbox_yaw'],
decoded_example['state/current/velocity_x'],
decoded_example['state/current/velocity_y']
], -1) # (2, 1, 7)
input_states = tf.concat([past_states, cur_states], 1)[..., :7] # (2, 11, 7)
future_states = tf.stack([
decoded_example['state/future/x'],
decoded_example['state/future/y'],
decoded_example['state/future/length'],
decoded_example['state/future/width'],
decoded_example['state/future/bbox_yaw'],
decoded_example['state/future/velocity_x'],
decoded_example['state/future/velocity_y']
], -1) # (2, 80, 7)
gt_future_states = tf.concat([past_states, cur_states, future_states], 1) # (2, 91, 7)
past_is_valid = decoded_example['state/past/valid'] > 0 # (2, 10)
current_is_valid = decoded_example['state/current/valid'] > 0 # (2,1)
future_is_valid = decoded_example['state/future/valid'] > 0 # (2, 80)
gt_future_is_valid = tf.concat(
[past_is_valid, current_is_valid, future_is_valid], 1) # (2, 91)
inputs = {
'is_sdc': decoded_example['state/is_sdc'],
'gt_future_states': gt_future_states, # (2, 91, 7)
'gt_future_is_valid': gt_future_is_valid, # (2, 91)
'past_states':past_states, # (2, 10, 7)
'object_type': decoded_example['state/type'], # (2, )
'x':x, # (2,)
'y':y, # (2, )
'yaw':yaw, # (2, )
'scenario_id':scenario_id,
'object_id': object_id}
return inputs
def get_dataset(file_pattern, batch_size=16, shuffle=True):
file_dataset = tf.data.Dataset.list_files(file_pattern, shuffle=shuffle)
dataset = file_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4), cycle_length=8 if shuffle else 1)\
.map(_parse, num_parallel_calls=8).batch(batch_size)
return dataset
ot_feature_desc = {
'state/type':
tf.io.FixedLenFeature([2], tf.float32, default_value=None),
}
def _cyclist_only(data):
example = tf.io.parse_single_example(data, ot_feature_desc)
return tf.reduce_max(example['state/type']) == 3.
def get_cyclist_dataset(file_pattern, batch_size=32, shuffle=True):
file_dataset = tf.data.Dataset.list_files(file_pattern, shuffle=shuffle)
dataset = file_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4).filter(lambda y: _cyclist_only(y)), cycle_length=8 if shuffle else 1)\
.map(_parse, num_parallel_calls=8)
return dataset.batch(batch_size)
def _ped_only(data):
example = tf.io.parse_single_example(data, ot_feature_desc)
return tf.reduce_max(example['state/type']) == 2.
def get_ped_dataset(file_pattern, batch_size=32, shuffle=True):
file_dataset = tf.data.Dataset.list_files(file_pattern, shuffle=shuffle)
dataset = file_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4).filter(lambda y: _ped_only(y)), cycle_length=8 if shuffle else 1)\
.map(_parse, num_parallel_calls=8)
return dataset.batch(batch_size)
def _veh_only(data):
example = tf.io.parse_single_example(data, ot_feature_desc)
return tf.reduce_max(example['state/type']) == 1.
def get_veh_dataset(file_pattern, batch_size=32, shuffle=True):
file_dataset = tf.data.Dataset.list_files(file_pattern, shuffle=shuffle)
dataset = file_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4).filter(lambda y: _veh_only(y)), cycle_length=8 if shuffle else 1)\
.map(_parse, num_parallel_calls=8)
return dataset.batch(batch_size)
def get_interaction_eval_dataset(data_type, eval_file_pattern, batch_size=32):
if data_type == "cyclist":
dataset = get_cyclist_dataset(eval_file_pattern, batch_size, shuffle=False)
elif data_type == "ped":
dataset = get_ped_dataset(eval_file_pattern, batch_size, shuffle=False)
elif data_type == "veh":
dataset = get_veh_dataset(eval_file_pattern, batch_size, shuffle=False)
else:
dataset = get_dataset(eval_file_pattern, batch_size, shuffle=False)
return dataset
def get_dataset_for_clustering(file_pattern):
file_dataset = tf.data.Dataset.list_files(file_pattern)
dataset = file_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4), cycle_length=8)\
.map(_parse_without_image, num_parallel_calls=8).batch(16)
return dataset
def _object_type_only(data, object_type):
example = tf.io.parse_single_example(data, ot_feature_desc)
return tf.reduce_max(example['state/type']) == object_type
def get_extended_dataset(train_file_pattern, validation_file_pattern, object_type):
train_file_dataset = tf.data.Dataset.list_files(train_file_pattern)
if validation_file_pattern:
validation_file_dataset = tf.data.Dataset.list_files(validation_file_pattern)
combined_dataset = train_file_dataset.concatenate(validation_file_dataset).shuffle(1100)
else:
combined_dataset = train_file_dataset
dataset = combined_dataset\
.interleave(lambda x: tf.data.TFRecordDataset(x).prefetch(4).filter(lambda y: _object_type_only(y, object_type)), cycle_length=8)\
.map(_parse, num_parallel_calls=8)
return dataset
def combine_datasets(veh_dataset, ped_dataset, cyc_dataset, veh_val, ped_val, cyc_val):
veh_dataset = veh_dataset.repeat()
ped_dataset = ped_dataset.repeat()
cyc_dataset = cyc_dataset.repeat()
dataset = tf.data.experimental.sample_from_datasets([veh_dataset, ped_dataset, cyc_dataset], [veh_val, ped_val, cyc_val])
return dataset
def get_weighted_dataset(train_file_pattern, validation_file_pattern, veh_val, ped_val, cyc_val, batch_size=16):
veh_dataset = get_extended_dataset(train_file_pattern, validation_file_pattern, object_type=1)
ped_dataset = get_extended_dataset(train_file_pattern, validation_file_pattern, object_type=2)
cyc_dataset = get_extended_dataset(train_file_pattern, validation_file_pattern, object_type=3)
return combine_datasets(veh_dataset, ped_dataset, cyc_dataset, veh_val, ped_val, cyc_val).batch(batch_size) | 42.57767 | 133 | 0.688576 | 2,473 | 17,542 | 4.615447 | 0.05742 | 0.127563 | 0.149816 | 0.071842 | 0.88847 | 0.852287 | 0.831873 | 0.814526 | 0.792798 | 0.788943 | 0 | 0.033556 | 0.162467 | 17,542 | 412 | 134 | 42.57767 | 0.74333 | 0.027477 | 0 | 0.619835 | 0 | 0 | 0.164451 | 0.053641 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049587 | false | 0 | 0.00551 | 0 | 0.104683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
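The class-balancing that `get_weighted_dataset` delegates to `tf.data.experimental.sample_from_datasets` can be sketched in plain Python. This is an illustrative stand-in, not the TF API; `sample_from_streams` and the class labels are invented for the example:

```python
import random
from itertools import cycle


def sample_from_streams(streams, weights, n, seed=0):
    # Pick a stream for each draw with the given weights, mirroring how
    # sample_from_datasets mixes the repeated per-class datasets.
    rng = random.Random(seed)
    iterators = [cycle(stream) for stream in streams]  # .repeat() equivalent
    picks = rng.choices(range(len(streams)), weights=weights, k=n)
    return [next(iterators[i]) for i in picks]


mixed = sample_from_streams([['veh'], ['ped'], ['cyc']], [0.5, 0.3, 0.2], 1000)
```

With weights 0.5/0.3/0.2 the mixed stream is dominated by vehicle samples; in practice the weights are chosen to boost rare classes such as cyclists.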


# File: src/biblishelf_main/utils/type/loader.py (repo: x007007007/biblishelf, license: MIT)
def load_all_file_analyzer():
    pass


# File: deelight/exceptions.py (repo: codingjoe/yeelight-night-shift, license: Apache-2.0)
from yeelib.exceptions import YeelightError


class CommandError(YeelightError):
    pass


# File: python/src/main/python/pyalink/alink/tests/common/types/file_system/test_s3.py
# (repo: wenwei8268/Alink, license: Apache-2.0)
import unittest
import pandas as pd
import pytest

from pyalink.alink import *


class TestS3FileSystem(unittest.TestCase):
    @pytest.mark.skip()
    def test_hadoop_s3(self):
        data = [
            ["0L", "1L", 0.6],
            ["2L", "2L", 0.8],
            ["2L", "4L", 0.6],
            ["3L", "1L", 0.6],
            ["3L", "2L", 0.3],
            ["3L", "4L", 0.4],
        ]
        source = BatchOperator.fromDataframe(pd.DataFrame.from_records(data), "uid string, iid string, label float")
        fs = S3HadoopFileSystem("1.11.788", "http://xxx:9000", "test", "xxx", "xxx", True)
        print(fs.listFiles("/"))

        source.link(
            AkSinkBatchOp()
            .setFilePath(FilePath("/tmp/test_alink_file_sink", fs))
            .setOverwriteSink(True)
            .setNumFiles(2)
        )
        BatchOperator.execute()

        AkSourceBatchOp() \
            .setFilePath(FilePath("/tmp/test_alink_file_sink", fs)) \
            .print()

    @pytest.mark.skip()
    def test_presto_s3(self):
        data = [
            ["0L", "1L", 0.6],
            ["2L", "2L", 0.8],
            ["2L", "4L", 0.6],
            ["3L", "1L", 0.6],
            ["3L", "2L", 0.3],
            ["3L", "4L", 0.4],
        ]
        source = BatchOperator.fromDataframe(pd.DataFrame.from_records(data), "uid string, iid string, label float")
        fs = S3PrestoFileSystem("1.11.788", "http://xxx:9000", "test", "xxx", "xxx", True)
        print(fs.listFiles("/"))

        source.link(
            AkSinkBatchOp()
            .setFilePath(FilePath("/tmp/test_alink_file_sink2", fs))
            .setOverwriteSink(True)
            .setNumFiles(2)
        )
        BatchOperator.execute()

        AkSourceBatchOp() \
            .setFilePath(FilePath("/tmp/test_alink_file_sink2", fs)) \
            .print()


# File: OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/EGL/__init__.py
# (repo: JE-Chen/je_old_repo, license: MIT)
"""OpenGL.EGL the portable interface to GL environments"""
from OpenGL.raw.EGL._types import *
from OpenGL.raw.EGL._errors import EGLError
from OpenGL.EGL.VERSION.EGL_1_0 import *
from OpenGL.EGL.VERSION.EGL_1_1 import *
from OpenGL.EGL.VERSION.EGL_1_2 import *
from OpenGL.EGL.VERSION.EGL_1_3 import *
from OpenGL.EGL.VERSION.EGL_1_4 import *
from OpenGL.EGL.VERSION.EGL_1_5 import *


# File: Contents/Libraries/Shared/rebulk/test/test_processors.py
# (repo: jippo015/Sub-Zero.bundle, license: MIT)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# pylint: disable=no-self-use, pointless-statement, missing-docstring, no-member, len-as-condition
from ..pattern import StringPattern, RePattern
from ..processors import ConflictSolver
from ..rules import execute_rule
from ..match import Matches


def test_conflict_1():
    input_string = "abcdefghijklmnopqrstuvwxyz"
    pattern = StringPattern("ijklmn", "kl", "abcdef", "ab", "ef", "yz")
    matches = Matches(pattern.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    values = [x.value for x in matches]
    assert values == ["ijklmn", "abcdef", "yz"]


def test_conflict_2():
    input_string = "abcdefghijklmnopqrstuvwxyz"
    pattern = StringPattern("ijklmn", "jklmnopqrst")
    matches = Matches(pattern.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    values = [x.value for x in matches]
    assert values == ["jklmnopqrst"]


def test_conflict_3():
    input_string = "abcdefghijklmnopqrstuvwxyz"
    pattern = StringPattern("ijklmnopqrst", "jklmnopqrst")
    matches = Matches(pattern.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    values = [x.value for x in matches]
    assert values == ["ijklmnopqrst"]


def test_conflict_4():
    input_string = "123456789"
    pattern = StringPattern("123", "456789")
    matches = Matches(pattern.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    values = [x.value for x in matches]
    assert values == ["123", "456789"]


def test_conflict_5():
    input_string = "123456789"
    pattern = StringPattern("123456", "789")
    matches = Matches(pattern.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    values = [x.value for x in matches]
    assert values == ["123456", "789"]


def test_prefer_longer_parent():
    input_string = "xxx.1x02.xxx"
    re1 = RePattern("([0-9]+)x([0-9]+)", name='prefer', children=True, formatter=int)
    re2 = RePattern("x([0-9]+)", name='skip', children=True)
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 2
    assert matches[0].value == 1
    assert matches[1].value == 2


def test_conflict_solver_1():
    input_string = "123456789"
    re1 = StringPattern("2345678", conflict_solver=lambda match, conflicting: '__default__')
    re2 = StringPattern("34567")
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "2345678"


def test_conflict_solver_2():
    input_string = "123456789"
    re1 = StringPattern("2345678", conflict_solver=lambda match, conflicting: '__default__')
    re2 = StringPattern("34567", conflict_solver=lambda match, conflicting: conflicting)
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "34567"


def test_conflict_solver_3():
    input_string = "123456789"
    re1 = StringPattern("2345678", conflict_solver=lambda match, conflicting: match)
    re2 = StringPattern("34567")
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "34567"


def test_conflict_solver_4():
    input_string = "123456789"
    re1 = StringPattern("2345678")
    re2 = StringPattern("34567", conflict_solver=lambda match, conflicting: conflicting)
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "34567"


def test_conflict_solver_5():
    input_string = "123456789"
    re1 = StringPattern("2345678", conflict_solver=lambda match, conflicting: conflicting)
    re2 = StringPattern("34567")
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "2345678"


def test_conflict_solver_6():
    input_string = "123456789"
    re1 = StringPattern("2345678")
    re2 = StringPattern("34567", conflict_solver=lambda match, conflicting: conflicting)
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "34567"


def test_conflict_solver_7():
    input_string = "102"
    re1 = StringPattern("102")
    re2 = StringPattern("02")
    matches = Matches(re2.matches(input_string))
    matches.extend(re1.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 1
    assert matches[0].value == "102"


def test_unresolved():
    input_string = "123456789"
    re1 = StringPattern("23456")
    re2 = StringPattern("34567")
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 2

    re1 = StringPattern("34567")
    re2 = StringPattern("2345678", conflict_solver=lambda match, conflicting: None)
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 2

    re1 = StringPattern("34567", conflict_solver=lambda match, conflicting: None)
    re2 = StringPattern("2345678")
    matches = Matches(re1.matches(input_string))
    matches.extend(re2.matches(input_string))
    execute_rule(ConflictSolver(), matches, None)
    assert len(matches) == 2
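The longest-match-wins behaviour these tests exercise can be reproduced without rebulk. The sketch below is a simplified stand-in for `ConflictSolver`'s default rule (function names are illustrative, and ties are broken by insertion order rather than rebulk's ordering):

```python
def find_matches(input_string, patterns):
    # Collect (start, end, value) spans for every literal pattern occurrence.
    matches = []
    for pattern in patterns:
        start = input_string.find(pattern)
        while start != -1:
            matches.append((start, start + len(pattern), pattern))
            start = input_string.find(pattern, start + 1)
    return matches


def resolve_conflicts(matches):
    # Default rule: when two spans overlap, the longer one wins.
    kept = []
    for match in sorted(matches, key=lambda m: m[1] - m[0], reverse=True):
        if all(match[1] <= k[0] or match[0] >= k[1] for k in kept):
            kept.append(match)
    return [m[2] for m in sorted(kept, key=lambda m: m[0])]
```

On `"abcdefghijklmnopqrstuvwxyz"` with the patterns from `test_conflict_1`, only `"abcdef"`, `"ijklmn"` and `"yz"` survive (here in positional order), matching the expected values.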


# File: django_evolution/tests/test_database_state.py
# (repo: kamrankalantarli/django-evolution, license: BSD-3-Clause)
from __future__ import unicode_literals
from django.test.testcases import TestCase

from django_evolution.db.state import DatabaseState, IndexState
from django_evolution.errors import DatabaseStateError


class DatabaseStateTests(TestCase):
    """Testing django_evolution.db.state.DatabaseState."""

    def test_initial_state(self):
        """Testing DatabaseState with scan=True"""
        database_state = DatabaseState(db_name='default')

        # Check that a few known tables are in the list, to make sure
        # the scan worked.
        for table_name in ('django_content_type',
                           'django_evolution',
                           'django_project_version'):
            self.assertTrue(database_state.has_table(table_name))

        # Check the Evolution model.
        indexes = [
            (index_state.columns, index_state.unique)
            for index_state in database_state.iter_indexes('django_evolution')
        ]
        self.assertIn((['version_id'], False), indexes)

    def test_clone(self):
        """Testing DatabaseState.clone"""
        database_state = DatabaseState(db_name='default')
        cloned_state = database_state.clone()

        self.assertEqual(cloned_state.db_name, database_state.db_name)
        self.assertEqual(cloned_state._tables, database_state._tables)

    def test_add_table(self):
        """Testing DatabaseState.add_table"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')

        self.assertEqual(database_state._tables['my_test_table'], {
            'indexes': {},
        })

    def test_has_table(self):
        """Testing DatabaseState.has_table"""
        database_state = DatabaseState(db_name='default', scan=False)
        self.assertFalse(database_state.has_table('my_test_table'))

        database_state.add_table('my_test_table')
        self.assertTrue(database_state.has_table('my_test_table'))

    def test_add_index(self):
        """Testing DatabaseState.add_index"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        self.assertEqual(
            database_state._tables['my_test_table']['indexes'],
            {
                'my_index': IndexState(name='my_index',
                                       columns=['col1', 'col2'],
                                       unique=True),
            })

    def test_add_index_with_untracked_table(self):
        """Testing DatabaseState.add_index with untracked table"""
        database_state = DatabaseState(db_name='default', scan=False)

        expected_message = (
            'Unable to add index "my_index" to table "my_test_table". The '
            'table is not being tracked in the database state.'
        )

        with self.assertRaisesMessage(DatabaseStateError, expected_message):
            database_state.add_index(table_name='my_test_table',
                                     index_name='my_index',
                                     columns=['col1', 'col2'],
                                     unique=True)

    def test_add_index_with_existing_index(self):
        """Testing DatabaseState.add_index with existing index"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='existing_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        expected_message = (
            'Unable to add index "existing_index" to table "my_test_table". '
            'This index already exists.'
        )

        with self.assertRaisesMessage(DatabaseStateError, expected_message):
            database_state.add_index(table_name='my_test_table',
                                     index_name='existing_index',
                                     columns=['col1', 'col2'],
                                     unique=True)

        # It's fine if it has a new name.
        database_state.add_index(table_name='my_test_table',
                                 index_name='new_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

    def test_remove_index(self):
        """Testing DatabaseState.remove_index"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)
        database_state.remove_index(table_name='my_test_table',
                                    index_name='my_index',
                                    unique=True)

        self.assertEqual(database_state._tables['my_test_table']['indexes'],
                         {})

    def test_remove_index_with_untracked_table(self):
        """Testing DatabaseState.remove_index with untracked table"""
        database_state = DatabaseState(db_name='default', scan=False)

        expected_message = (
            'Unable to remove index "my_index" from table "my_test_table". '
            'The table is not being tracked in the database state.'
        )

        with self.assertRaisesMessage(DatabaseStateError, expected_message):
            database_state.remove_index(table_name='my_test_table',
                                        index_name='my_index',
                                        unique=True)

    def test_remove_index_with_invalid_index_name(self):
        """Testing DatabaseState.remove_index with invalid index name"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')

        expected_message = (
            'Unable to remove index "my_index" from table "my_test_table". '
            'The index could not be found.'
        )

        with self.assertRaisesMessage(DatabaseStateError, expected_message):
            database_state.remove_index(table_name='my_test_table',
                                        index_name='my_index',
                                        unique=True)

    def test_remove_index_with_invalid_index_type(self):
        """Testing DatabaseState.remove_index with invalid index type"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        expected_message = (
            'Unable to remove index "my_index" from table "my_test_table". '
            'The specified index type (unique=False) does not match the '
            'existing type (unique=True).'
        )

        with self.assertRaisesMessage(DatabaseStateError, expected_message):
            database_state.remove_index(table_name='my_test_table',
                                        index_name='my_index',
                                        unique=False)

    def test_get_index(self):
        """Testing DatabaseState.get_index"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        self.assertEqual(
            database_state.get_index(table_name='my_test_table',
                                     index_name='my_index'),
            IndexState(name='my_index',
                       columns=['col1', 'col2'],
                       unique=True))

    def test_get_index_with_invalid_name(self):
        """Testing DatabaseState.get_index with invalid name"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')

        self.assertIsNone(database_state.get_index(table_name='my_test_table',
                                                   index_name='my_index'))

    def test_find_index(self):
        """Testing DatabaseState.find_index"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        index = database_state.find_index(table_name='my_test_table',
                                          columns=['col1', 'col2'],
                                          unique=True)
        self.assertEqual(index,
                         IndexState(name='my_index',
                                    columns=['col1', 'col2'],
                                    unique=True))
    def test_find_index_with_not_found(self):
        """Testing DatabaseState.find_index with index not found"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')

        index = database_state.find_index(table_name='my_test_table',
                                          columns=['col1', 'col2'],
                                          unique=True)
        self.assertIsNone(index)
    def test_clear_indexes(self):
        """Testing DatabaseState.clear_indexes"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index',
                                 columns=['col1', 'col2'],
                                 unique=True)

        # Clearing must leave the table with no tracked indexes.
        database_state.clear_indexes('my_test_table')

        self.assertEqual(database_state._tables['my_test_table']['indexes'],
                         {})
    def test_iter_indexes(self):
        """Testing DatabaseState.iter_indexes"""
        database_state = DatabaseState(db_name='default', scan=False)
        database_state.add_table('my_test_table')
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index1',
                                 columns=['col1', 'col2'],
                                 unique=True)
        database_state.add_index(table_name='my_test_table',
                                 index_name='my_index2',
                                 columns=['col3'])

        indexes = list(database_state.iter_indexes('my_test_table'))
        self.assertEqual(
            indexes,
            [
                IndexState(name='my_index1',
                           columns=['col1', 'col2'],
                           unique=True),
                IndexState(name='my_index2',
                           columns=['col3'],
                           unique=False),
            ])

    def test_rescan_indexes(self):
        """Testing DatabaseState.rescan_indexes"""
        database_state = DatabaseState(db_name='default')

        # Check that a few known tables are in the list, to make sure
        # the scan worked.
        for table_name in ('django_content_type',
                           'django_evolution',
                           'django_project_version'):
            self.assertTrue(database_state.has_table(table_name))

        # Check the Evolution model.
        indexes = [
            (index_state.columns, index_state.unique)
            for index_state in database_state.iter_indexes('django_evolution')
        ]
        self.assertIn((['version_id'], False), indexes)
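The index bookkeeping these tests describe can be sketched with a plain dictionary. `IndexTable` below is an illustrative stand-in, not django_evolution's implementation:

```python
class IndexTable:
    """Minimal per-table index registry (hypothetical example class)."""

    def __init__(self):
        self.indexes = {}

    def add(self, name, columns, unique=False):
        # Mirror add_index: refuse to re-register an existing name.
        if name in self.indexes:
            raise ValueError('index "%s" already exists' % name)
        self.indexes[name] = (list(columns), unique)

    def remove(self, name, unique):
        # Mirror remove_index: both the name and the index type must match.
        if name not in self.indexes:
            raise ValueError('index "%s" could not be found' % name)
        if self.indexes[name][1] != unique:
            raise ValueError('index type (unique=%s) does not match' % unique)
        del self.indexes[name]
```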


# File: src/spaceone/identity/model/__init__.py (repo: choonho/identity, license: Apache-2.0)
from spaceone.identity.model.api_key_model import APIKey
from spaceone.identity.model.domain_model import Domain
from spaceone.identity.model.domain_owner_model import DomainOwner
from spaceone.identity.model.provider_model import Provider
from spaceone.identity.model.service_account_model import ServiceAccount
from spaceone.identity.model.domain_secret_model import DomainSecret
from spaceone.identity.model.policy_model import Policy
from spaceone.identity.model.project_group_model import ProjectGroup, ProjectGroupMemberMap
from spaceone.identity.model.project_model import Project, ProjectMemberMap
from spaceone.identity.model.role_model import Role
from spaceone.identity.model.user_model import User


# File: budgetsupervisor/utils.py (repo: ltowarek/budget-supervisor, license: MIT)
from urllib.parse import urlparse
from django.http import HttpResponseRedirect


def get_url_path(response: HttpResponseRedirect) -> str:
    return urlparse(response.url).path.rstrip("/") + "/"
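A quick sketch of the normalization `get_url_path` performs, using only the standard library (the helper name `normalize_path` is illustrative):

```python
from urllib.parse import urlparse


def normalize_path(url):
    # Strip any trailing slash from the URL path, then append exactly one,
    # so "/accounts/login" and "/accounts/login/" normalize identically.
    return urlparse(url).path.rstrip("/") + "/"
```

Both forms map to `"/accounts/login/"`, and a bare domain maps to `"/"`.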


# File: gltflib/utils/__init__.py (repo: Onboard-Team/gltflib, license: MIT)
from .data_utils import padbytes
from .file_utils import create_parent_dirs
from .json_utils import del_none


# File: mftools/fingerprint/generate_features.py
# (repo: mikedwhite/microstructural-fingerprinting-tools, license: BSD-3-Clause)
import cv2
import numpy as np
import torch.nn as nn
from PIL import Image
from skimage import img_as_ubyte, color
from torchvision import models, transforms
def generate_feature_sift(image):
r"""Extract scale invariant feature transform (SIFT) features from a single image.
Parameters
----------
image : ndarray
Image data.
Returns
-------
xfeat : ndarray
Array of features. Has shape :math:`(N, 128)`, where :math:`N` is the number of features extracted.
"""
sift = cv2.SIFT_create()
_, xfeat = sift.detectAndCompute(image, None)
return xfeat
def generate_feature_surf(image):
r"""Extract speeded-up robust features (SURF) from a single image.
Parameters
----------
image : ndarray
Image data.
Returns
-------
xfeat : ndarray
Array of features. Has shape :math:`(N, 64)`, where :math:`N` is the number of features extracted.
"""
    surf = cv2.xfeatures2d.SURF_create()  # requires an opencv-contrib build with nonfree modules enabled
_, xfeat = surf.detectAndCompute(image, None)
return xfeat
def generate_feature_cnn_flatten(image, cnn='alexnet'):
r"""Generate feature vector from final convolution layer by flattening.
Parameters
----------
image : ndarray
Image data.
cnn : str, optional
'alexnet' (default)
Generates CNN features from AlexNet architecture.
'vgg'
Generates CNN features from VGG architecture.
Returns
-------
xfeat : ndarray
Array of features. Has shape :math:`(N, )`, where :math:`N` is the number of voxels in the final convolution
output.
"""
if cnn == 'alexnet':
model = models.alexnet(pretrained=True)
elif cnn == 'vgg':
model = models.vgg19(pretrained=True)
    else:
        raise ValueError('cnn must be either `alexnet` or `vgg`')
new_classifier = nn.Sequential(*list(model.features.children())[:-1])
model = new_classifier
preprocess = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(Image.fromarray(color.gray2rgb(img_as_ubyte(image / np.max(image)))))
input_batch = input_tensor.unsqueeze(0)
output = model(input_batch).detach().numpy()
xfeat = output.flatten()
return xfeat
def generate_feature_cnn_maxpool(image, cnn='alexnet'):
    r"""Generate a single feature vector from the final convolution layer of a CNN by applying max pooling.
Parameters
----------
image : ndarray
Image data.
cnn : str, optional
'alexnet' (default)
Generates CNN features from AlexNet architecture.
'vgg'
Generates CNN features from VGG architecture.
Returns
-------
xfeat : ndarray
Array of features. Has shape :math:`(d, )`, where :math:`d` is length of each feature output from the final
convolution layer.
"""
if cnn == 'alexnet':
model = models.alexnet(pretrained=True)
elif cnn == 'vgg':
model = models.vgg19(pretrained=True)
    else:
        raise ValueError('cnn must be either `alexnet` or `vgg`')
new_classifier = nn.Sequential(*list(model.features.children())[:-1])
model = new_classifier
preprocess = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(Image.fromarray(color.gray2rgb(img_as_ubyte(image / np.max(image)))))
input_batch = input_tensor.unsqueeze(0)
output = model(input_batch).detach().numpy()
xfeat = np.amax(output, axis=(2, 3))[0, :]
return xfeat
def generate_feature_cnn_featdict(image, cnn='alexnet'):
r"""Generate dictionary of features from CNN output.
Parameters
----------
image : ndarray
Image data.
cnn : str, optional
'alexnet' (default)
Generates CNN features from AlexNet architecture.
'vgg'
Generates CNN features from VGG architecture.
Returns
-------
xfeat : ndarray
Array of features. Has shape :math:`(N, d)`, where :math:`N` is the number of features in the final convolution
        output and :math:`d` is the dimension of each feature.
"""
if cnn == 'alexnet':
model = models.alexnet(pretrained=True)
elif cnn == 'vgg':
model = models.vgg19(pretrained=True)
    else:
        raise ValueError('cnn must be either `alexnet` or `vgg`')
new_classifier = nn.Sequential(*list(model.features.children())[:-1])
model = new_classifier
preprocess = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(Image.fromarray(color.gray2rgb(img_as_ubyte(image / np.max(image)))))
input_batch = input_tensor.unsqueeze(0)
output = model(input_batch).detach().numpy()
xfeat = np.reshape(output, (output.shape[1], -1))
xfeat = np.transpose(xfeat)
return xfeat
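The `(N, d)` reshape at the end of `generate_feature_cnn_featdict` can be checked in isolation. A small sketch with a made-up activation map (random data standing in for a real CNN output; assumes only NumPy is available):

```python
import numpy as np

# Made-up final-convolution output: batch of 1, 256 channels, 6x6 spatial grid.
output = np.random.rand(1, 256, 6, 6)

# Same reshape/transpose as in generate_feature_cnn_featdict: every spatial
# location becomes one d-dimensional feature vector (N = 6*6 = 36, d = 256).
xfeat = np.transpose(np.reshape(output, (output.shape[1], -1)))
print(xfeat.shape)  # (36, 256)
```

Row `j` of `xfeat` is the channel vector at spatial location `j`, i.e. `xfeat[0]` equals `output[0, :, 0, 0]`.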
| 28.445055 | 119 | 0.629901 | 630 | 5,177 | 5.107937 | 0.209524 | 0.027968 | 0.03729 | 0.044748 | 0.802051 | 0.762896 | 0.718459 | 0.711311 | 0.668117 | 0.643257 | 0 | 0.028169 | 0.245702 | 5,177 | 181 | 120 | 28.60221 | 0.795903 | 0.370678 | 0 | 0.670886 | 1 | 0 | 0.054454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063291 | false | 0 | 0.075949 | 0 | 0.240506 | 0.037975 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa4f5bd3fdd91ce4b63500d4e98dca4a1da41b7a | 307 | py | Python | httpcore/_compat.py | abranjith/httpcore | de629324b1e2b375fe1d6c61b0daf8d9430559ca | [
"BSD-3-Clause"
] | 1 | 2022-01-30T18:55:08.000Z | 2022-01-30T18:55:08.000Z | httpcore/_compat.py | abranjith/httpcore | de629324b1e2b375fe1d6c61b0daf8d9430559ca | [
"BSD-3-Clause"
] | null | null | null | httpcore/_compat.py | abranjith/httpcore | de629324b1e2b375fe1d6c61b0daf8d9430559ca | [
"BSD-3-Clause"
] | null | null | null | # `contextlib.asynccontextmanager` exists from Python 3.7 onwards.
# For 3.6 we require the `async_generator` package for a backported version.
try:
from contextlib import asynccontextmanager # type: ignore
except ImportError:
from async_generator import asynccontextmanager # type: ignore # noqa
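The try/except import above is a common backport pattern: prefer the stdlib name, fall back to a third-party package that provides the same API under the same name. A sketch of the same pattern with a different pair of names (`functools.cached_property`, stdlib since 3.8, and the `cached-property` backport package; the fallback branch only runs on older interpreters):

```python
try:
    from functools import cached_property  # Python 3.8+
except ImportError:  # pragma: no cover
    from cached_property import cached_property  # backport for older interpreters

class Circle:
    def __init__(self, r):
        self.r = r

    @cached_property
    def area(self):
        return 3.14159 * self.r ** 2

c = Circle(2)
print(c.area)  # computed on first access, cached on the instance afterwards
```

Because both branches bind the same name, the rest of the module is oblivious to which implementation was imported, which is the whole point of the pattern.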
| 43.857143 | 76 | 0.785016 | 38 | 307 | 6.289474 | 0.684211 | 0.117155 | 0.242678 | 0.292887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015444 | 0.156352 | 307 | 6 | 77 | 51.166667 | 0.907336 | 0.557003 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aa68b87648174208eb713daa0fafe91d7245753b | 44 | py | Python | src/hub/dataload/sources/snpedia/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 39 | 2017-07-01T22:34:39.000Z | 2022-03-15T22:25:59.000Z | src/hub/dataload/sources/snpedia/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 105 | 2017-06-28T17:26:06.000Z | 2022-03-17T17:49:53.000Z | src/hub/dataload/sources/snpedia/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 15 | 2015-10-15T20:46:50.000Z | 2021-07-12T19:17:49.000Z | from .snpedia_upload import SnpediaUploader
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aac3c947b47f5799fa94d1260331ba99cf674270 | 8,764 | py | Python | tests/test_param_check.py | pengyan510/torchtest | f84e4a7f1c3e0cda2430ba09880af4a964b1d3ba | [
"MIT"
] | 24 | 2021-06-09T16:12:45.000Z | 2022-03-08T17:50:47.000Z | tests/test_param_check.py | pengyan510/torchtest | f84e4a7f1c3e0cda2430ba09880af4a964b1d3ba | [
"MIT"
] | 1 | 2021-11-19T09:17:30.000Z | 2021-11-19T09:17:30.000Z | tests/test_param_check.py | pengyan510/torchtest | f84e4a7f1c3e0cda2430ba09880af4a964b1d3ba | [
"MIT"
] | 1 | 2021-06-11T05:23:33.000Z | 2021-06-11T05:23:33.000Z | import pytest
import torcheck
def test_module_changing_check_with_changing_model(
changing_model_optimizer, changing_model, dataloader, run_training
):
torcheck.register(changing_model_optimizer)
torcheck.add_module_changing_check(changing_model, module_name="NeuralNet")
run_training(changing_model, dataloader, changing_model_optimizer)
def test_module_changing_check_with_unchanging_model(
unchanging_model_optimizer, unchanging_model, dataloader, run_training
):
torcheck.register(unchanging_model_optimizer)
torcheck.add_module_changing_check(unchanging_model, module_name="NeuralNet")
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight should change\.\n"
r".*fc1.bias should change"
),
):
run_training(unchanging_model, dataloader, unchanging_model_optimizer)
def test_module_unchanging_check_with_changing_model(
changing_model_optimizer, changing_model, dataloader, run_training
):
torcheck.register(changing_model_optimizer)
torcheck.add_module_unchanging_check(changing_model, module_name="NeuralNet")
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight should not change\."
r"(.|\n)*fc2\.weight should not change"
),
):
run_training(changing_model, dataloader, changing_model_optimizer)
def test_module_unchanging_check_with_unchanging_model(
unchanging_model_optimizer, unchanging_model, dataloader, run_training
):
torcheck.register(unchanging_model_optimizer)
torcheck.add_module_unchanging_check(
unchanging_model.fc1, module_name="First Layer"
)
run_training(unchanging_model, dataloader, unchanging_model_optimizer)
def test_tensor_changing_check_with_changing_model(
changing_model_optimizer, changing_model, dataloader, run_training
):
torcheck.register(changing_model_optimizer)
torcheck.add_tensor_changing_check(
changing_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
changing_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
changing_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
changing_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
run_training(changing_model, dataloader, changing_model_optimizer)
def test_tensor_changing_check_with_unchanging_model(
unchanging_model_optimizer, unchanging_model, dataloader, run_training
):
torcheck.register(unchanging_model_optimizer)
torcheck.add_tensor_changing_check(
unchanging_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
unchanging_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
unchanging_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_changing_check(
unchanging_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight should change\.\n"
r".*fc1.bias should change"
),
):
run_training(unchanging_model, dataloader, unchanging_model_optimizer)
def test_tensor_unchanging_check_with_changing_model(
changing_model_optimizer, changing_model, dataloader, run_training
):
torcheck.register(changing_model_optimizer)
torcheck.add_tensor_unchanging_check(
changing_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_unchanging_check(
changing_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight should not change\.\n"
r".*fc1.bias should not change"
),
):
run_training(changing_model, dataloader, changing_model_optimizer)
def test_tensor_unchanging_check_with_unchanging_model(
unchanging_model_optimizer, unchanging_model, dataloader, run_training
):
torcheck.register(unchanging_model_optimizer)
torcheck.add_tensor_unchanging_check(
unchanging_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_unchanging_check(
unchanging_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
run_training(unchanging_model, dataloader, unchanging_model_optimizer)
def test_tensor_nan_check_with_nan_model(
nan_model_optimizer, nan_model, dataloader, run_training
):
torcheck.register(nan_model_optimizer)
torcheck.add_tensor_nan_check(
nan_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nan_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nan_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nan_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight contains NaN\.\n"
r".*fc1.bias contains NaN\.\n.*fc2.weight contains NaN\.\n"
r".*fc2.bias contains NaN"
),
):
run_training(nan_model, dataloader, nan_model_optimizer)
def test_tensor_nan_check_with_nonan_model(
nonan_model_optimizer, nonan_model, dataloader, run_training
):
torcheck.register(nonan_model_optimizer)
torcheck.add_tensor_nan_check(
nonan_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nonan_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nonan_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_nan_check(
nonan_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
run_training(nonan_model, dataloader, nonan_model_optimizer)
def _test_tensor_inf_check_with_inf_model(
inf_model_optimizer, inf_model, dataloader, run_training
):
"""TODO: design a test case with inf gradient values"""
torcheck.register(inf_model_optimizer)
torcheck.add_tensor_inf_check(
inf_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
inf_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
inf_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
inf_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
with pytest.raises(
RuntimeError,
match=(
r"Module NeuralNet's fc1\.weight contains inf\.\n"
r".*fc1.bias contains inf\.\n.*fc2.weight contains inf\.\n"
r".*fc2.bias contains inf"
),
):
run_training(inf_model, dataloader, inf_model_optimizer)
def test_tensor_inf_check_with_noinf_model(
noinf_model_optimizer, noinf_model, dataloader, run_training
):
torcheck.register(noinf_model_optimizer)
torcheck.add_tensor_inf_check(
noinf_model.fc1.weight, tensor_name="fc1.weight", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
noinf_model.fc1.bias, tensor_name="fc1.bias", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
noinf_model.fc2.weight, tensor_name="fc2.weight", module_name="NeuralNet"
)
torcheck.add_tensor_inf_check(
noinf_model.fc2.bias, tensor_name="fc2.bias", module_name="NeuralNet"
)
run_training(noinf_model, dataloader, noinf_model_optimizer)
def test_tensor_multiple_check_with_correct_model(
correct_model_optimizer, correct_model, dataloader, run_training
):
torcheck.register(correct_model_optimizer)
torcheck.add_tensor(
correct_model.fc1.weight,
tensor_name="fc1.weight",
module_name="NeuralNet",
changing=True,
check_nan=True,
check_inf=True,
)
torcheck.add_tensor(
correct_model.fc1.bias,
tensor_name="fc1.bias",
module_name="NeuralNet",
changing=True,
check_nan=True,
check_inf=True,
)
run_training(correct_model, dataloader, correct_model_optimizer)
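torcheck's changing/unchanging checks hook into the registered optimizer, but stripped of PyTorch the core idea is just a before/after parameter comparison. A framework-free sketch of that idea (toy numbers; not torcheck's actual implementation):

```python
# Snapshot the parameters before an update step, compare afterwards.
def params_changed(before, after):
    return any(b != a for b, a in zip(before, after))

params = [0.5, -1.2, 3.0]
snapshot = list(params)                  # take the "before" copy
params = [p - 0.1 * p for p in params]   # one toy update step
print(params_changed(snapshot, params))  # True: training is updating the weights
```

A frozen layer (or a detached graph) would make `params_changed` return `False`, which is what the `module_changing_check` tests above assert via `pytest.raises`.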
| 35.771429 | 86 | 0.728206 | 1,064 | 8,764 | 5.613722 | 0.046053 | 0.091411 | 0.104972 | 0.090407 | 0.909426 | 0.881467 | 0.838105 | 0.819354 | 0.771304 | 0.771304 | 0 | 0.010717 | 0.180169 | 8,764 | 244 | 87 | 35.918033 | 0.820598 | 0.005591 | 0 | 0.539171 | 0 | 0 | 0.130899 | 0 | 0 | 0 | 0 | 0.004098 | 0 | 1 | 0.059908 | false | 0 | 0.009217 | 0 | 0.069124 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ae7c519067038f0ae5b50a687621d95bae2ab9b | 158 | py | Python | tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial1_Solution_80f8ef7f.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 2 | 2021-05-12T02:19:05.000Z | 2021-05-12T13:49:29.000Z | tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial1_Solution_80f8ef7f.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 1 | 2021-06-16T05:41:08.000Z | 2021-06-16T05:41:08.000Z | tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial1_Solution_80f8ef7f.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 1 | 2021-11-26T17:23:48.000Z | 2021-11-26T17:23:48.000Z | """
Students should observe exponential growth to positive values, exponential
growth to negative values, stable oscillations, and decay to the origin.
"""; | 26.333333 | 74 | 0.78481 | 20 | 158 | 6.2 | 0.75 | 0.274194 | 0.306452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14557 | 158 | 6 | 75 | 26.333333 | 0.918519 | 0.93038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
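The solution text describes the regimes of a continuous 2-D linear system; a scalar discrete-time analogue `x_{t+1} = a * x_t` (with `a` playing the eigenvalue role) is enough to illustrate growth, decay, and sign-flipping oscillation in a few lines:

```python
# Pure-Python sketch of the qualitative regimes described above.
def simulate(a, x0=1.0, steps=10):
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

growth = simulate(1.5)   # |a| > 1: exponential growth away from the origin
decay = simulate(0.5)    # |a| < 1: decay toward the origin
flip = simulate(-0.5)    # a < 0: sign alternates each step while decaying

print(growth[-1] > growth[0], abs(decay[-1]) < abs(decay[0]), flip[1] < 0 < flip[2])
```

Stable oscillation in the tutorial's 2-D setting corresponds to complex eigenvalues with zero real part, which this scalar sketch can only hint at via the alternating-sign case.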
2d9eb845ed2097cb994c85998ac8420c7899f73d | 4,704 | py | Python | db/insert_blob_img.py | inf6150-doublej/backend | 7734bce3b531213f62eb04c2f89cecde490702b8 | [
"MIT"
] | null | null | null | db/insert_blob_img.py | inf6150-doublej/backend | 7734bce3b531213f62eb04c2f89cecde490702b8 | [
"MIT"
] | 27 | 2019-05-10T00:01:39.000Z | 2019-06-08T19:09:23.000Z | db/insert_blob_img.py | inf6150-doublej/backend | 7734bce3b531213f62eb04c2f89cecde490702b8 | [
"MIT"
] | null | null | null | import datetime
import sqlite3
import os
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
DB_PATH = os.path.join(ROOT_DIR, 'db.db')
con = sqlite3.connect(DB_PATH)
cur = con.cursor()
now = datetime.datetime.now()
with open(ROOT_DIR + '/img/bear.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'ju'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/squirel.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'frank'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/panda.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'luce'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/zebra.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'renee'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/elephant.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'michelle'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
with open(ROOT_DIR + '/img/cat1.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'claude'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat2.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'flavie'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat3.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'machtelle'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat4.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'mutante'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat5.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'vincent'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat6.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'etienne'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/cat7.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'elie'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/dog1.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'auguste'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/dog2.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'gloria'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/dog3.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'marc'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
with open(ROOT_DIR + '/img/dog4.jpeg', "rb") as input_file:
imagedata = input_file.read()
pic_id = 'david'
img_data = sqlite3.Binary(imagedata)
cur.execute('INSERT INTO Pictures'
'(pic_id, img_data)'
' VALUES (?,?)', (pic_id, img_data))
con.commit()
con.close()
| 30.153846 | 63 | 0.637755 | 660 | 4,704 | 4.315152 | 0.104545 | 0.08427 | 0.089888 | 0.134831 | 0.88448 | 0.87184 | 0.87184 | 0.87184 | 0.87184 | 0.87184 | 0 | 0.007666 | 0.195791 | 4,704 | 155 | 64 | 30.348387 | 0.745176 | 0 | 0 | 0.698529 | 0 | 0 | 0.250425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022059 | 0 | 0.022059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
937dbde8d9f31c8f22f5758973ff6bd3fc6b2c65 | 348 | py | Python | v1.0.0.test/toontown/uberdog/DistributedCpuInfoMgrUD.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-01T15:46:43.000Z | 2021-07-23T16:26:48.000Z | v1.0.0.test/toontown/uberdog/DistributedCpuInfoMgrUD.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 1 | 2019-06-29T03:40:05.000Z | 2021-06-13T01:15:16.000Z | v1.0.0.test/toontown/uberdog/DistributedCpuInfoMgrUD.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-28T21:18:46.000Z | 2021-02-25T06:37:25.000Z | from direct.directnotify import DirectNotifyGlobal
from direct.distributed.DistributedObjectGlobalUD import DistributedObjectGlobalUD
class DistributedCpuInfoMgrUD(DistributedObjectGlobalUD):
notify = DirectNotifyGlobal.directNotify.newCategory('DistributedCpuInfoMgrUD')
def setCpuInfoToUd(self, todo0, todo1, todo2, todo3):
pass | 43.5 | 83 | 0.836207 | 27 | 348 | 10.777778 | 0.703704 | 0.068729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012862 | 0.106322 | 348 | 8 | 84 | 43.5 | 0.92283 | 0 | 0 | 0 | 0 | 0 | 0.065903 | 0.065903 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fa8f8ccca1bb40ff65f60ad9a4752e0361b314aa | 134 | py | Python | msgtracker/__init__.py | mpillar/msg-tracker | 16edb9d555795d0eec625dd954e14f914cbbbe2b | [
"MIT"
] | null | null | null | msgtracker/__init__.py | mpillar/msg-tracker | 16edb9d555795d0eec625dd954e14f914cbbbe2b | [
"MIT"
] | null | null | null | msgtracker/__init__.py | mpillar/msg-tracker | 16edb9d555795d0eec625dd954e14f914cbbbe2b | [
"MIT"
] | null | null | null | from . import backend
from . import constants
from . import helper
from . import endpoints
from . import algorithm
from . import test
| 19.142857 | 23 | 0.776119 | 18 | 134 | 5.777778 | 0.444444 | 0.576923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 134 | 6 | 24 | 22.333333 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
facd7591fa42c7b457d148f2b47518b7b0696dc9 | 229 | py | Python | nomenklatura/__init__.py | ouseful-backup/nomenklatura | 086852e849c90ada59bc39a844479f805bae2279 | [
"MIT"
] | null | null | null | nomenklatura/__init__.py | ouseful-backup/nomenklatura | 086852e849c90ada59bc39a844479f805bae2279 | [
"MIT"
] | null | null | null | nomenklatura/__init__.py | ouseful-backup/nomenklatura | 086852e849c90ada59bc39a844479f805bae2279 | [
"MIT"
] | null | null | null | # shut up useless SA warning:
import warnings
warnings.filterwarnings('ignore', 'Unicode type received non-unicode bind param value.')
from sqlalchemy.exc import SAWarning
warnings.filterwarnings('ignore', category=SAWarning)
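`filterwarnings` matches on message pattern and/or category, and each call is inserted at the front of the filter list, so it is consulted before earlier entries. A stdlib-only sketch with a made-up warning class standing in for `SAWarning`:

```python
import warnings

class MyLibWarning(UserWarning):
    """Made-up stand-in for a library warning category such as SAWarning."""

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")                           # baseline: record everything
    warnings.filterwarnings("ignore", category=MyLibWarning)  # checked first
    warnings.warn("noisy library chatter", MyLibWarning)      # suppressed
    warnings.warn("a real problem")                           # plain UserWarning: still recorded

print([str(w.message) for w in caught])  # ['a real problem']
```

Filtering by category, as done above for `SAWarning`, is usually safer than filtering by message text, since the message can change between library versions.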
| 32.714286 | 88 | 0.803493 | 28 | 229 | 6.571429 | 0.75 | 0.23913 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104803 | 229 | 6 | 89 | 38.166667 | 0.897561 | 0.117904 | 0 | 0 | 0 | 0 | 0.316583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
879afc490dc48f3bbf725a63781a7684eb643711 | 31 | py | Python | grsnp/__init__.py | mdozmorov/genome_runner | 1fd77dd8e0bb7333e2d8e0d299d020bc8a3e36a1 | [
"AFL-3.0"
] | 11 | 2016-12-12T08:40:14.000Z | 2020-05-11T22:42:28.000Z | grsnp/__init__.py | mdozmorov/genome_runner | 1fd77dd8e0bb7333e2d8e0d299d020bc8a3e36a1 | [
"AFL-3.0"
] | 66 | 2015-07-10T12:08:34.000Z | 2016-07-02T01:56:07.000Z | grsnp/__init__.py | mdozmorov/genomerunner_web | c6e0be91c3cf8eb8443093573034f9fcd40dbe22 | [
"AFL-3.0"
] | 5 | 2016-07-08T16:57:35.000Z | 2020-01-06T20:07:49.000Z | import hypergeom4, grsnp, path
| 15.5 | 30 | 0.806452 | 4 | 31 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
879f69f6a99f029fde277d0d49ca64607243256c | 94 | py | Python | tests/__init__.py | akshaybabloo/SpikesData | aa44342845871f310d6ecb522e9f2b88e749aa98 | [
"MIT"
] | null | null | null | tests/__init__.py | akshaybabloo/SpikesData | aa44342845871f310d6ecb522e9f2b88e749aa98 | [
"MIT"
] | null | null | null | tests/__init__.py | akshaybabloo/SpikesData | aa44342845871f310d6ecb522e9f2b88e749aa98 | [
"MIT"
] | null | null | null | from numpy.testing import run_module_suite
if __name__ == '__main__':
run_module_suite()
| 18.8 | 42 | 0.765957 | 13 | 94 | 4.615385 | 0.769231 | 0.3 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 94 | 4 | 43 | 23.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
879f98b32e3759d809452ce0f41733d293d3b8fa | 81 | py | Python | toai/metrics/__init__.py | mokahaiku/toai | 76ce097ee6785586e33b63acdfe06467fef4c36c | [
"MIT"
] | 9 | 2019-07-31T16:57:26.000Z | 2019-11-27T22:24:44.000Z | toai/metrics/__init__.py | mokahaiku/toai | 76ce097ee6785586e33b63acdfe06467fef4c36c | [
"MIT"
] | 8 | 2019-10-14T15:48:09.000Z | 2019-11-24T10:24:13.000Z | toai/metrics/__init__.py | tribeofai/toai | 76ce097ee6785586e33b63acdfe06467fef4c36c | [
"MIT"
] | 5 | 2019-07-31T16:57:14.000Z | 2019-11-22T13:00:33.000Z | # pylama:ignore=W0611
from .error_rate import error_rate
from .rmse import rmse
| 16.2 | 34 | 0.802469 | 13 | 81 | 4.846154 | 0.615385 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.135802 | 81 | 4 | 35 | 20.25 | 0.842857 | 0.234568 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87bd4d7802edd86d94c98dacb749c0c5af4decac | 21 | py | Python | utils/__init__.py | chucoding/notion2github | 4dd34c641a21c59a6fed7bbfd050beb9be174c16 | [
"MIT"
] | 1 | 2022-03-20T07:05:05.000Z | 2022-03-20T07:05:05.000Z | utils/__init__.py | chucoding/notion2github | 4dd34c641a21c59a6fed7bbfd050beb9be174c16 | [
"MIT"
] | null | null | null | utils/__init__.py | chucoding/notion2github | 4dd34c641a21c59a6fed7bbfd050beb9be174c16 | [
"MIT"
] | null | null | null | from .logMng import * | 21 | 21 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35947f81ab83122003a939989c7b2cc3256ba43b | 89 | py | Python | unit-testing-for-data-science-in-python/1. Unit testing basics/preprocessing_helpers.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-05-08T11:09:27.000Z | 2021-05-08T11:09:27.000Z | unit-testing-for-data-science-in-python/1. Unit testing basics/preprocessing_helpers.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2022-03-12T15:42:14.000Z | 2022-03-12T15:42:14.000Z | unit-testing-for-data-science-in-python/1. Unit testing basics/preprocessing_helpers.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-04-30T18:24:19.000Z | 2021-04-30T18:24:19.000Z | def convert_to_int(number_with_commas):
return number_with_commas.replace(",", "")
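Given the file's place in a unit-testing lesson, a helper like this (stripping thousands separators) is typically exercised with a small pytest-style test. A hypothetical sketch, with a local stand-in function so it runs on its own:

```python
# Hypothetical pytest-style test for a comma-stripping helper; strip_commas
# below is a local stand-in so the sketch is self-contained.
def strip_commas(number_with_commas):
    return number_with_commas.replace(",", "")

def test_strip_commas():
    assert strip_commas("1,234") == "1234"
    assert strip_commas("1,234,567") == "1234567"
    assert strip_commas("42") == "42"  # no separators: unchanged

test_strip_commas()  # pytest would collect this by its test_ name; run directly here
print("ok")
```

Covering the no-separator case alongside the happy path is the kind of boundary-value thinking the surrounding course material emphasizes.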
| 29.666667 | 47 | 0.741573 | 12 | 89 | 5 | 0.75 | 0.333333 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123596 | 89 | 2 | 48 | 44.5 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
35cb51c60b407daf78b05db119597ec7f78252d3 | 153 | py | Python | authors/apps/articles/admin.py | hoslack/jua-kali_Backend | e0e92aa0287c4a17b303fdde941f457b28c51223 | [
"BSD-3-Clause"
] | 2 | 2018-07-14T21:31:21.000Z | 2018-08-05T23:46:06.000Z | authors/apps/articles/admin.py | andela/ah-scorpion | 6b6b31ca0a42dc577e9740d54b8c579de1a603fe | [
"BSD-3-Clause"
] | 36 | 2018-07-25T11:46:06.000Z | 2021-06-10T20:33:42.000Z | authors/apps/articles/admin.py | hoslack/jua-kali_Backend | e0e92aa0287c4a17b303fdde941f457b28c51223 | [
"BSD-3-Clause"
] | 3 | 2018-09-04T17:43:53.000Z | 2018-11-18T10:14:59.000Z | from django.contrib import admin
from .models import Article
class AuthorAdmin(admin.ModelAdmin):
pass
admin.site.register(Article, AuthorAdmin)
| 15.3 | 41 | 0.79085 | 19 | 153 | 6.368421 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 153 | 9 | 42 | 17 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ea2ae949e9a6912c734f32554bf5d377bf58f69f | 159 | py | Python | jupyter_extension_example/__init__.py | killf/jupyter_extension_example | 7c217814a399e1a5a1f756743341f3ba835c8f78 | [
"MIT"
] | null | null | null | jupyter_extension_example/__init__.py | killf/jupyter_extension_example | 7c217814a399e1a5a1f756743341f3ba835c8f78 | [
"MIT"
] | null | null | null | jupyter_extension_example/__init__.py | killf/jupyter_extension_example | 7c217814a399e1a5a1f756743341f3ba835c8f78 | [
"MIT"
] | null | null | null | from .application import SimpleApp
def _jupyter_server_extension_paths():
return [{"module": "jupyter_extension_example.application", "app": SimpleApp}]
| 26.5 | 82 | 0.779874 | 17 | 159 | 6.941176 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106918 | 159 | 5 | 83 | 31.8 | 0.830986 | 0 | 0 | 0 | 0 | 0 | 0.289308 | 0.232704 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
ea4438d9cee6f0f244decb3d3054ad8fd55d6244 | 22,311 | py | Python | tests/infer_for_schema/test_len_on_properties.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 5 | 2021-12-29T12:55:34.000Z | 2022-03-01T17:57:21.000Z | tests/infer_for_schema/test_len_on_properties.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 10 | 2021-12-29T02:15:55.000Z | 2022-03-09T11:04:22.000Z | tests/infer_for_schema/test_len_on_properties.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 2 | 2021-12-29T01:42:12.000Z | 2022-02-15T13:46:33.000Z | # pylint: disable=missing-docstring
import textwrap
import unittest
from typing import Optional, MutableMapping
import tests.common
import tests.infer_for_schema.common
from aas_core_codegen import infer_for_schema, intermediate
class Test_expected(unittest.TestCase):
def test_no_constraints(self) -> None:
source = textwrap.dedent(
"""\
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={},
patterns_by_property={})"""
),
text,
)
def test_min_value_constant_left(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: 10 < len(self.some_property))
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=11,
max_value=None)},
patterns_by_property={})"""
),
text,
)
def test_min_value_constant_right(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 10)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=11,
max_value=None)},
patterns_by_property={})"""
),
text,
)
def test_max_value_constant_right(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) < 10)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=None,
max_value=9)},
patterns_by_property={})"""
),
text,
)
def test_max_value_constant_left(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: 10 > len(self.some_property))
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=None,
max_value=9)},
patterns_by_property={})"""
),
text,
)
def test_max_value_constant_right_and_not_required(self) -> None:
source = textwrap.dedent(
"""\
@invariant(
lambda self:
not (self.some_property is not None)
or len(self.some_property) <= 128
)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=None,
max_value=128)},
patterns_by_property={})"""
),
text,
)
def test_exact_value_constant_left(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: 10 == len(self.some_property))
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=10,
max_value=10)},
patterns_by_property={})"""
),
text,
)
def test_exact_value_constant_right(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) == 10)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=10,
max_value=10)},
patterns_by_property={})"""
),
text,
)
def test_conditioned_on_property(self) -> None:
source = textwrap.dedent(
"""\
@invariant(
lambda self:
not (self.some_property is not None)
or len(self.some_property) == 10
)
class Something:
some_property: Optional[str]
def __init__(self, some_property: Optional[str] = None) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=10,
max_value=10)},
patterns_by_property={})"""
),
text,
)
def test_no_inheritance_by_default(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 3)
class Parent:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
@invariant(lambda self: len(self.some_property) > 5)
class Something(Parent):
def __init__(self, some_property: str) -> None:
Parent.__init__(
self,
some_property=some_property
)
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
# NOTE (mristin, 2022-01-02):
# We infer only the constraints as specified in the class itself, and
# ignore the constraints of the ancestors in *this particular kind of
# inference*.
#
# This is necessary as we want to use these constraints to generate schemas
# whereas it is the job of the schema engine to stack the constraints together.
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=6,
max_value=None)},
patterns_by_property={})"""
),
text,
)
class Test_unexpected(unittest.TestCase):
def test_conflicting_min_and_max(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 10)
@invariant(lambda self: len(self.some_property) < 3)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
symbol_table,
_,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls(
source=source
)
_, error = infer_for_schema.infer_constraints_by_class(
symbol_table=symbol_table
)
assert error is not None
self.assertEqual(
"The property some_property has conflicting invariants on the length: "
"the minimum length, 11, contradicts the maximum length 2.",
tests.common.most_underlying_messages(error),
)
def test_conflicting_min_and_exact(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 10)
@invariant(lambda self: len(self.some_property) == 3)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
symbol_table,
_,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls(
source=source
)
_, error = infer_for_schema.infer_constraints_by_class(
symbol_table=symbol_table
)
assert error is not None
self.assertEqual(
"The property some_property has conflicting invariants on the length: "
"the minimum length, 11, contradicts the exactly expected length 3.",
tests.common.most_underlying_messages(error),
)
def test_conflicting_max_and_exact(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) < 10)
@invariant(lambda self: len(self.some_property) == 30)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
symbol_table,
_,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls(
source=source
)
_, error = infer_for_schema.infer_constraints_by_class(
symbol_table=symbol_table
)
assert error is not None
self.assertEqual(
"The property some_property has conflicting invariants on the length: "
"the maximum length, 9, contradicts the exactly expected length 30.",
tests.common.most_underlying_messages(error),
)
class Test_stacking(unittest.TestCase):
def test_no_inheritance_involved(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) < 10)
class Something:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
# NOTE (mristin, 2022-05-18):
# This definition here is necessary for mypy.
constraints_by_class: Optional[
MutableMapping[
intermediate.ClassUnion, infer_for_schema.ConstraintsByProperty
]
] = None
(
symbol_table,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_class, error = infer_for_schema.merge_constraints_with_ancestors(
symbol_table=symbol_table, constraints_by_class=constraints_by_class
)
assert error is None, tests.common.most_underlying_messages(error)
assert constraints_by_class is not None
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=None,
max_value=9)},
patterns_by_property={})"""
),
text,
)
def test_inheritance_from_parent_with_no_patterns_of_own(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 3)
class Parent:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
class Something(Parent):
def __init__(self, some_property: str) -> None:
Parent.__init__(
self,
some_property=some_property
)
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
# NOTE (mristin, 2022-05-18):
# This definition here is necessary for mypy.
constraints_by_class: Optional[
MutableMapping[
intermediate.ClassUnion, infer_for_schema.ConstraintsByProperty
]
] = None
(
symbol_table,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_class, error = infer_for_schema.merge_constraints_with_ancestors(
symbol_table=symbol_table, constraints_by_class=constraints_by_class
)
assert error is None, tests.common.most_underlying_messages(error)
assert constraints_by_class is not None
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=4,
max_value=None)},
patterns_by_property={})"""
),
text,
)
def test_merge_with_parent(self) -> None:
source = textwrap.dedent(
"""\
@invariant(lambda self: len(self.some_property) > 3)
class Parent:
some_property: str
def __init__(self, some_property: str) -> None:
self.some_property = some_property
@invariant(lambda self: len(self.some_property) < 10)
class Something(Parent):
def __init__(self, some_property: str) -> None:
Parent.__init__(
self,
some_property=some_property
)
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
# NOTE (mristin, 2022-05-18):
# This definition here is necessary for mypy.
constraints_by_class: Optional[
MutableMapping[
intermediate.ClassUnion, infer_for_schema.ConstraintsByProperty
]
] = None
(
symbol_table,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_class, error = infer_for_schema.merge_constraints_with_ancestors(
symbol_table=symbol_table, constraints_by_class=constraints_by_class
)
assert error is None, tests.common.most_underlying_messages(error)
assert constraints_by_class is not None
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={
'some_property': LenConstraint(
min_value=4,
max_value=9)},
patterns_by_property={})"""
),
text,
)
if __name__ == "__main__":
unittest.main()
| 30.816298 | 107 | 0.541392 | 2,041 | 22,311 | 5.439 | 0.069574 | 0.118908 | 0.086479 | 0.039636 | 0.935231 | 0.918476 | 0.912711 | 0.907576 | 0.907576 | 0.885326 | 0 | 0.007431 | 0.384788 | 22,311 | 723 | 108 | 30.858921 | 0.801326 | 0.025862 | 0 | 0.674051 | 0 | 0 | 0.034648 | 0 | 0 | 0 | 0 | 0 | 0.079114 | 1 | 0.050633 | false | 0 | 0.018987 | 0 | 0.079114 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ea819fefe410b0149492b8de71cb3d547614cc7f | 5,150 | py | Python | hail/python/test/hail/experimental/test_dnd_array.py | sigmarkarl/hail | 11b7c22342a945c61b24c5f8babf4ab411d3d2f1 | [
"MIT"
] | 2 | 2020-12-15T21:20:24.000Z | 2020-12-21T19:46:26.000Z | hail/python/test/hail/experimental/test_dnd_array.py | Dania-Abuhijleh/hail | a187dc0867801ca1eee774588fe58604a133a0d9 | [
"MIT"
] | 2 | 2016-11-17T03:06:10.000Z | 2017-12-05T19:00:24.000Z | hail/python/test/hail/experimental/test_dnd_array.py | Dania-Abuhijleh/hail | a187dc0867801ca1eee774588fe58604a133a0d9 | [
"MIT"
] | 2 | 2020-07-28T18:55:19.000Z | 2020-10-19T16:43:03.000Z | import numpy as np
import hail as hl
from hail.utils import new_temp_file
from ..helpers import startTestHailContext, stopTestHailContext
setUpModule = startTestHailContext
tearDownModule = stopTestHailContext
def test_range_collect():
n_variants = 10
n_samples = 10
block_size = 3
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
a = np.array(mt.x.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
def test_range_matmul():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect().reshape(n_variants, n_variants)
a = np.array(mt.x.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
def test_small_collect():
n_variants = 10
n_samples = 10
block_size = 3
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
def test_medium_collect():
n_variants = 100
n_samples = 100
block_size = 32
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
def test_small_matmul():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect().reshape(n_variants, n_variants)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
def test_medium_matmul():
n_variants = 100
n_samples = 100
block_size = 32
n_blocks = 16
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect().reshape(n_variants, n_variants)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
def test_matmul_via_inner_product():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
prod = (da @ da.T).checkpoint(new_temp_file())
assert prod._force_count_blocks() == n_blocks
prod_result = prod.collect().reshape(n_variants, n_variants)
ip_result = da.inner_product(da.T,
lambda l, r: l * r,
lambda l, r: l + r,
hl.float(0.0),
lambda prod: hl.agg.sum(prod)
).collect().reshape(n_variants, n_variants)
assert np.array_equal(prod_result, ip_result)
def test_king_homo_estimator():
hl.set_global_seed(1)
mt = hl.balding_nichols_model(2, 5, 5)
mt = mt.select_entries(genotype_score=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'genotype_score', block_size=3)
def sqr(x):
return x * x
score_difference = da.T.inner_product(
da,
lambda l, r: sqr(l - r),
lambda l, r: l + r,
hl.float(0),
hl.agg.sum
).checkpoint(new_temp_file())
assert np.array_equal(
score_difference.collect(),
np.array([[0., 6., 4., 2., 4.],
[6., 0., 6., 4., 6.],
[4., 6., 0., 6., 0.],
[2., 4., 6., 0., 6.],
[4., 6., 0., 6., 0.]]))
| 32.1875 | 74 | 0.620583 | 754 | 5,150 | 3.964191 | 0.122016 | 0.102375 | 0.073603 | 0.073938 | 0.813985 | 0.786885 | 0.784209 | 0.755437 | 0.744731 | 0.721981 | 0 | 0.02246 | 0.256505 | 5,150 | 159 | 75 | 32.389937 | 0.758161 | 0 | 0 | 0.655738 | 0 | 0 | 0.007961 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 1 | 0.07377 | false | 0 | 0.032787 | 0.008197 | 0.114754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5763fbd66438f819e66ced8023bfcfc641fcbfec | 136 | py | Python | Python/S10430.py | irostub/Beakjoon-Problem-Solving | 3a230cbd16ade4ed7cc1da7f36085853d69d673d | [
"Beerware"
] | null | null | null | Python/S10430.py | irostub/Beakjoon-Problem-Solving | 3a230cbd16ade4ed7cc1da7f36085853d69d673d | [
"Beerware"
] | null | null | null | Python/S10430.py | irostub/Beakjoon-Problem-Solving | 3a230cbd16ade4ed7cc1da7f36085853d69d673d | [
"Beerware"
] | null | null | null | A, B, C = map(int, input().split())
print((A + B) % C)
print(((A % C) + (B % C)) % C)
print((A * B) % C)
print(((A % C) * (B % C)) % C)
| 22.666667 | 35 | 0.404412 | 27 | 136 | 2.037037 | 0.296296 | 0.181818 | 0.163636 | 0.290909 | 0.654545 | 0.654545 | 0.654545 | 0.654545 | 0.654545 | 0.654545 | 0 | 0 | 0.25 | 136 | 5 | 36 | 27.2 | 0.539216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.8 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
579d829b322f7c2c222606d468fb26efab542653 | 219 | py | Python | chainer_mask_rcnn/datasets/voc/__init__.py | m3at/chainer-mask-rcnn | fa491663675cdc97974008becc99454d5e6e1d09 | [
"MIT"
] | 61 | 2018-04-04T07:09:32.000Z | 2021-11-12T19:54:23.000Z | chainer_mask_rcnn/datasets/voc/__init__.py | Swall0w/chainer-mask-rcnn | 83366fc77e52aa6a29cfac4caa697d8b45dcffc6 | [
"MIT"
] | 15 | 2018-04-10T10:48:47.000Z | 2021-05-20T10:00:42.000Z | chainer_mask_rcnn/datasets/voc/__init__.py | Swall0w/chainer-mask-rcnn | 83366fc77e52aa6a29cfac4caa697d8b45dcffc6 | [
"MIT"
] | 18 | 2018-07-06T10:13:56.000Z | 2022-03-02T12:25:31.000Z | # flake8: noqa
from .sbd import SBDInstanceSeg
from .sbd import SBDInstanceSegmentationDataset
from .voc import VOC2012InstanceSeg
from .voc import VOC2012InstanceSegmentationDataset
from .voc import VOCInstanceSegBase
| 31.285714 | 51 | 0.863014 | 22 | 219 | 8.590909 | 0.5 | 0.111111 | 0.206349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045918 | 0.105023 | 219 | 6 | 52 | 36.5 | 0.918367 | 0.054795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57bc11fdb8416846cd917001fc20ece76d90aae5 | 111 | py | Python | lightning_transformers/task/nlp/reaction/datasets/__init__.py | zhaisilong/lightning-transformers | cd6843b6caa8279df86bb5e808dfccc79ca9c3d2 | [
"Apache-2.0"
] | null | null | null | lightning_transformers/task/nlp/reaction/datasets/__init__.py | zhaisilong/lightning-transformers | cd6843b6caa8279df86bb5e808dfccc79ca9c3d2 | [
"Apache-2.0"
] | null | null | null | lightning_transformers/task/nlp/reaction/datasets/__init__.py | zhaisilong/lightning-transformers | cd6843b6caa8279df86bb5e808dfccc79ca9c3d2 | [
"Apache-2.0"
] | null | null | null | from lightning_transformers.task.nlp.translation.datasets.smi import SMILESTranslationDataModule # noqa: F401
| 55.5 | 110 | 0.864865 | 12 | 111 | 7.916667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029126 | 0.072072 | 111 | 1 | 111 | 111 | 0.893204 | 0.09009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57e48ba8c21c287a7669fea6db5a2180893ba033 | 102 | py | Python | python-api-challenge/Solutions to Homework/api_keys.py | samstruthers35/python-api-challenge | e248dc1ed0b15ad727e48e1fc0682c2335b0114c | [
"ADSL"
] | null | null | null | python-api-challenge/Solutions to Homework/api_keys.py | samstruthers35/python-api-challenge | e248dc1ed0b15ad727e48e1fc0682c2335b0114c | [
"ADSL"
] | null | null | null | python-api-challenge/Solutions to Homework/api_keys.py | samstruthers35/python-api-challenge | e248dc1ed0b15ad727e48e1fc0682c2335b0114c | [
"ADSL"
] | null | null | null | weather_api_key = "6d13e35324673617dbbc9a5c9ad44c2a"
g_key = "AIzaSyAJzbJZHeQUGmjKvFbd3SJhYUjTYgG8P3Y" | 51 | 52 | 0.892157 | 7 | 102 | 12.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237113 | 0.04902 | 102 | 2 | 53 | 51 | 0.670103 | 0 | 0 | 0 | 0 | 0 | 0.68932 | 0.68932 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57f5f89e329c4e61a27a12bb839ade682a5843f0 | 263 | py | Python | model/score.py | rbzargon/py_flashcards | 69a154ac747484ed508565aa4cfb9871b68f6c7a | [
"Apache-2.0"
] | null | null | null | model/score.py | rbzargon/py_flashcards | 69a154ac747484ed508565aa4cfb9871b68f6c7a | [
"Apache-2.0"
] | null | null | null | model/score.py | rbzargon/py_flashcards | 69a154ac747484ed508565aa4cfb9871b68f6c7a | [
"Apache-2.0"
] | null | null | null |
class Score:
def __init__(self, maximum: int):
super().__init__()
        self._score = 0
        self._maximum = maximum
    @property
    def score(self):
        return self._score
    @property
    def maximum(self):
        return self._maximum
| 15.470588 | 37 | 0.570342 | 29 | 263 | 4.896552 | 0.37931 | 0.232394 | 0.197183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0.334601 | 263 | 16 | 38 | 16.4375 | 0.805714 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0.181818 | 0.545455 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
17ab307b840a2a012ea46c19f7cca7e2822413d4 | 37 | py | Python | tests/fixtures.py | brianbruggeman/rl | 6dd8a53da07697ffc87e62aa397be7b3b08f0aa0 | [
"MIT"
] | null | null | null | tests/fixtures.py | brianbruggeman/rl | 6dd8a53da07697ffc87e62aa397be7b3b08f0aa0 | [
"MIT"
] | null | null | null | tests/fixtures.py | brianbruggeman/rl | 6dd8a53da07697ffc87e62aa397be7b3b08f0aa0 | [
"MIT"
] | null | null | null | # TODO: Add fixtures here as needed.
| 18.5 | 36 | 0.72973 | 6 | 37 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 37 | 1 | 37 | 37 | 0.9 | 0.918919 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
17af38da6e0ff1e8b254a4d72b1232cf67eb210b | 175 | py | Python | bulbea/app/config/server.py | saimohithnaag/StockPredictor | 4caba8f042f1d87ec0b41ec8e14d3a458a7409a4 | [
"Apache-2.0"
] | 1,761 | 2017-03-09T08:51:28.000Z | 2022-03-27T18:15:06.000Z | bulbea/app/config/server.py | saimohithnaag/StockPredictor | 4caba8f042f1d87ec0b41ec8e14d3a458a7409a4 | [
"Apache-2.0"
] | 38 | 2017-03-11T11:51:16.000Z | 2021-06-27T15:00:07.000Z | bulbea/app/config/server.py | saimohithnaag/StockPredictor | 4caba8f042f1d87ec0b41ec8e14d3a458a7409a4 | [
"Apache-2.0"
] | 511 | 2017-03-12T03:49:26.000Z | 2022-03-15T23:05:49.000Z | # imports - compatibility packages
from __future__ import absolute_import
# module imports
from bulbea.app.config import BaseConfig
class ServerConfig(BaseConfig):
pass
| 19.444444 | 40 | 0.811429 | 20 | 175 | 6.85 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 175 | 8 | 41 | 21.875 | 0.913333 | 0.268571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
17d254b9c2762b119b4c763ed5a2d4ec0cfeed74 | 31 | py | Python | rlagent/__init__.py | YunjaeChoi/rlagent | 41062fc1beaa2d5a0765bb782e1a55d1962ab058 | [
"MIT"
] | null | null | null | rlagent/__init__.py | YunjaeChoi/rlagent | 41062fc1beaa2d5a0765bb782e1a55d1962ab058 | [
"MIT"
] | null | null | null | rlagent/__init__.py | YunjaeChoi/rlagent | 41062fc1beaa2d5a0765bb782e1a55d1962ab058 | [
"MIT"
] | null | null | null | from rlagent.core import Agent
| 15.5 | 30 | 0.83871 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aa2e042d0e793dbebdae8c76b56e742befe01df5 | 41 | py | Python | aniapython.py | azawada123/gitCDV-1 | fbb25b8cf0dcd50bd76c8a3f451fbdf53c90cae5 | [
"MIT"
] | null | null | null | aniapython.py | azawada123/gitCDV-1 | fbb25b8cf0dcd50bd76c8a3f451fbdf53c90cae5 | [
"MIT"
] | 2 | 2019-10-06T12:58:42.000Z | 2019-10-06T13:35:30.000Z | aniapython.py | azawada123/gitCDV-1 | fbb25b8cf0dcd50bd76c8a3f451fbdf53c90cae5 | [
"MIT"
] | 22 | 2019-10-06T12:46:04.000Z | 2019-10-06T12:48:33.000Z | print("Hello World from feature branch")
| 20.5 | 40 | 0.780488 | 6 | 41 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0.756098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
aa3730c47e5b655169b49df530388def38025e5f | 126 | py | Python | tests/test_canvasutils.py | rtaph/canvasutils | 0515723bfa295d9627e21a15923b0eb5d5a31fef | [
"BSD-3-Clause"
] | null | null | null | tests/test_canvasutils.py | rtaph/canvasutils | 0515723bfa295d9627e21a15923b0eb5d5a31fef | [
"BSD-3-Clause"
] | null | null | null | tests/test_canvasutils.py | rtaph/canvasutils | 0515723bfa295d9627e21a15923b0eb5d5a31fef | [
"BSD-3-Clause"
] | null | null | null | from canvasutils import __version__
def test_version():
    assert __version__ == "0.4.0"
# I need to write tests here...
| 14 | 35 | 0.698413 | 18 | 126 | 4.388889 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.198413 | 126 | 8 | 36 | 15.75 | 0.752475 | 0.230159 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4c5303f452a5da855cbf86c4b44bbcc095b8303 | 41 | py | Python | autopipe/exceptions/__init__.py | AnonymusRaccoon/AutoPipe | e75a5891828c0d937f0e2b0ea38ae33f7758512a | [
"MIT"
] | null | null | null | autopipe/exceptions/__init__.py | AnonymusRaccoon/AutoPipe | e75a5891828c0d937f0e2b0ea38ae33f7758512a | [
"MIT"
] | null | null | null | autopipe/exceptions/__init__.py | AnonymusRaccoon/AutoPipe | e75a5891828c0d937f0e2b0ea38ae33f7758512a | [
"MIT"
] | null | null | null | from .ArgumentError import ArgumentError
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4cd5f2a35ced4df7d04d805c12845b83ffa41d2 | 70 | py | Python | scripts/engine/project.py | pebbie/BIBINT | c00d11c228c8da2ae2a88cd616147da54309ebaa | [
"MIT"
] | 1 | 2022-02-06T17:09:44.000Z | 2022-02-06T17:09:44.000Z | scripts/engine/project.py | pebbie/BIBINT | c00d11c228c8da2ae2a88cd616147da54309ebaa | [
"MIT"
] | null | null | null | scripts/engine/project.py | pebbie/BIBINT | c00d11c228c8da2ae2a88cd616147da54309ebaa | [
"MIT"
] | null | null | null | import ConfigParser
def load_project(project_configuration):
    pass | 17.5 | 40 | 0.828571 | 8 | 70 | 7 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128571 | 70 | 4 | 41 | 17.5 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
35005be899586ab5db7afe9d9a8078f35e0d23d3 | 84 | py | Python | afs/lla/VolServerLLAError.py | chanke/afspy | 525e7b3b53e58be515f11b83cc59ddb0765ef8e5 | [
"BSD-2-Clause"
] | null | null | null | afs/lla/VolServerLLAError.py | chanke/afspy | 525e7b3b53e58be515f11b83cc59ddb0765ef8e5 | [
"BSD-2-Clause"
] | null | null | null | afs/lla/VolServerLLAError.py | chanke/afspy | 525e7b3b53e58be515f11b83cc59ddb0765ef8e5 | [
"BSD-2-Clause"
] | null | null | null | from afs.util.AFSError import AFSError
class VolServerLLAError(AFSError):
    pass
| 16.8 | 38 | 0.797619 | 10 | 84 | 6.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 84 | 4 | 39 | 21 | 0.930556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
103c03880b829a583d2f21e31d44336be8866ba2 | 47 | py | Python | Example-University-System/university_manager.py | s-c-23/Elements-of-Software-Design | 4a29b6e864b792f7dc3bafd25b13e9abd8d79798 | [
"MIT"
] | null | null | null | Example-University-System/university_manager.py | s-c-23/Elements-of-Software-Design | 4a29b6e864b792f7dc3bafd25b13e9abd8d79798 | [
"MIT"
] | null | null | null | Example-University-System/university_manager.py | s-c-23/Elements-of-Software-Design | 4a29b6e864b792f7dc3bafd25b13e9abd8d79798 | [
"MIT"
] | null | null | null | from people import *
from course import *
| 5.875 | 20 | 0.680851 | 6 | 47 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276596 | 47 | 7 | 21 | 6.714286 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
106011d01a8d066ce2557a12bf203132fb657a4f | 41 | py | Python | demo/code/compile.py | mverleg/notex_package | 1a918233a7c18434dd9f8b2b3b3aa058f86d86d8 | [
"MIT",
"Unlicense",
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | demo/code/compile.py | mverleg/notex_package | 1a918233a7c18434dd9f8b2b3b3aa058f86d86d8 | [
"MIT",
"Unlicense",
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | demo/code/compile.py | mverleg/notex_package | 1a918233a7c18434dd9f8b2b3b3aa058f86d86d8 | [
"MIT",
"Unlicense",
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null |
def demo_compile(soup):
    return soup
| 5.857143 | 23 | 0.707317 | 6 | 41 | 4.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219512 | 41 | 6 | 24 | 6.833333 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
108f5c9ccf8428730035dd4b0be28ea8d046efac | 77 | py | Python | stonehenge/server/__init__.py | RobertTownley/stonehenge | 376b8e1501dd12ac1bcec5de680a5b521b0d949c | [
"MIT"
] | 1 | 2018-09-07T14:15:31.000Z | 2018-09-07T14:15:31.000Z | stonehenge/server/__init__.py | RobertTownley/stonehenge | 376b8e1501dd12ac1bcec5de680a5b521b0d949c | [
"MIT"
] | 5 | 2018-09-06T01:48:12.000Z | 2021-05-08T10:47:00.000Z | stonehenge/server/__init__.py | RobertTownley/stonehenge | 376b8e1501dd12ac1bcec5de680a5b521b0d949c | [
"MIT"
] | null | null | null | from stonehenge.modules import Module
class ServerModule(Module):
    pass
| 12.833333 | 37 | 0.779221 | 9 | 77 | 6.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168831 | 77 | 5 | 38 | 15.4 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
10949c45725f93d4bd0378555385ba9f8077999c | 8,587 | py | Python | openstack_dashboard/dashboards/admin/flavors/tests.py | Mirantis/openstack-horizon | 4d8e331635a1d9ed9ea1ae574a79a370265aef0b | [
"Apache-2.0"
] | 3 | 2017-02-13T15:11:01.000Z | 2021-07-28T08:28:09.000Z | openstack_dashboard/dashboards/admin/flavors/tests.py | Mirantis/openstack-horizon | 4d8e331635a1d9ed9ea1ae574a79a370265aef0b | [
"Apache-2.0"
] | null | null | null | openstack_dashboard/dashboards/admin/flavors/tests.py | Mirantis/openstack-horizon | 4d8e331635a1d9ed9ea1ae574a79a370265aef0b | [
"Apache-2.0"
] | 2 | 2018-08-29T10:56:04.000Z | 2019-11-11T11:45:20.000Z | from django import http
from django.core.urlresolvers import reverse

from mox import IsA

from openstack_dashboard import api
from openstack_dashboard.test import helpers as test

from novaclient.v1_1 import flavors


class FlavorsTests(test.BaseAdminViewTests):
    @test.create_stubs({api.nova: ('flavor_list', 'flavor_create'), })
    def test_create_new_flavor_when_none_exist(self):
        flavor = self.flavors.first()
        eph = getattr(flavor, 'OS-FLV-EXT-DATA:ephemeral')

        # no pre-existing flavors
        api.nova.flavor_create(IsA(http.HttpRequest),
                               flavor.name,
                               flavor.ram,
                               flavor.vcpus,
                               flavor.disk,
                               swap=flavor.swap,
                               ephemeral=eph).AndReturn(flavor)
        api.nova.flavor_list(IsA(http.HttpRequest))
        self.mox.ReplayAll()

        url = reverse('horizon:admin:flavors:create')
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTemplateUsed(resp, "admin/flavors/create.html")

        data = {'name': flavor.name,
                'vcpus': flavor.vcpus,
                'memory_mb': flavor.ram,
                'disk_gb': flavor.disk,
                'swap_mb': flavor.swap,
                'eph_gb': eph}
        resp = self.client.post(url, data)
        self.assertRedirectsNoFollow(resp,
                                     reverse("horizon:admin:flavors:index"))

    # keeping the 2 edit tests separate to aid debug breaks
    @test.create_stubs({api.nova: ('flavor_list',
                                   'flavor_create',
                                   'flavor_delete',
                                   'flavor_get_extras',
                                   'flavor_get'), })
    def test_edit_flavor(self):
        flavor = self.flavors.first()  # has no extra specs
        eph = getattr(flavor, 'OS-FLV-EXT-DATA:ephemeral')
        extra_specs = getattr(flavor, 'extra_specs')
        new_flavor = flavors.Flavor(flavors.FlavorManager(None),
                                    {'id':
                                     "cccccccc-cccc-cccc-cccc-cccccccccccc",
                                     'name': flavor.name,
                                     'vcpus': flavor.vcpus + 1,
                                     'disk': flavor.disk,
                                     'ram': flavor.ram,
                                     'swap': 0,
                                     'OS-FLV-EXT-DATA:ephemeral': eph,
                                     'extra_specs': extra_specs})

        # GET
        api.nova.flavor_get(IsA(http.HttpRequest), flavor.id).AndReturn(flavor)

        # POST
        api.nova.flavor_list(IsA(http.HttpRequest))
        api.nova.flavor_get(IsA(http.HttpRequest), flavor.id).AndReturn(flavor)
        api.nova.flavor_get_extras(IsA(http.HttpRequest), flavor.id, raw=True)\
            .AndReturn(extra_specs)
        api.nova.flavor_delete(IsA(http.HttpRequest), flavor.id)
        api.nova.flavor_create(IsA(http.HttpRequest),
                               new_flavor.name,
                               new_flavor.ram,
                               new_flavor.vcpus,
                               new_flavor.disk,
                               swap=flavor.swap,
                               ephemeral=eph).AndReturn(new_flavor)
        self.mox.ReplayAll()

        # get_test
        url = reverse('horizon:admin:flavors:edit', args=[flavor.id])
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTemplateUsed(resp, "admin/flavors/edit.html")

        # post test
        data = {'flavor_id': flavor.id,
                'name': flavor.name,
                'vcpus': flavor.vcpus + 1,
                'memory_mb': flavor.ram,
                'disk_gb': flavor.disk,
                'swap_mb': flavor.swap,
                'eph_gb': eph}
        resp = self.client.post(url, data)
        self.assertNoFormErrors(resp)
        self.assertMessageCount(success=1)
        self.assertRedirectsNoFollow(resp,
                                     reverse("horizon:admin:flavors:index"))

    @test.create_stubs({api.nova: ('flavor_list',
                                   'flavor_create',
                                   'flavor_delete',
                                   'flavor_get_extras',
                                   'flavor_extra_set',
                                   'flavor_get'), })
    def test_edit_flavor_with_extra_specs(self):
        flavor = self.flavors.list()[1]  # the second element has extra specs
        eph = getattr(flavor, 'OS-FLV-EXT-DATA:ephemeral')
        extra_specs = getattr(flavor, 'extra_specs')
        new_vcpus = flavor.vcpus + 1
        new_flavor = flavors.Flavor(flavors.FlavorManager(None),
                                    {'id':
                                     "cccccccc-cccc-cccc-cccc-cccccccccccc",
                                     'name': flavor.name,
                                     'vcpus': new_vcpus,
                                     'disk': flavor.disk,
                                     'ram': flavor.ram,
                                     'swap': flavor.swap,
                                     'OS-FLV-EXT-DATA:ephemeral': eph,
                                     'extra_specs': extra_specs})

        # GET
        api.nova.flavor_get(IsA(http.HttpRequest), flavor.id).AndReturn(flavor)

        # POST
        api.nova.flavor_list(IsA(http.HttpRequest))
        api.nova.flavor_get(IsA(http.HttpRequest), flavor.id).AndReturn(flavor)
        api.nova.flavor_get_extras(IsA(http.HttpRequest), flavor.id, raw=True)\
            .AndReturn(extra_specs)
        api.nova.flavor_delete(IsA(http.HttpRequest), flavor.id)
        api.nova.flavor_create(IsA(http.HttpRequest),
                               flavor.name,
                               flavor.ram,
                               new_vcpus,
                               flavor.disk,
                               swap=flavor.swap,
                               ephemeral=eph).AndReturn(new_flavor)
        api.nova.flavor_extra_set(IsA(http.HttpRequest),
                                  new_flavor.id,
                                  extra_specs)
        self.mox.ReplayAll()

        # get_test
        url = reverse('horizon:admin:flavors:edit', args=[flavor.id])
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTemplateUsed(resp, "admin/flavors/edit.html")

        # post test
        data = {'flavor_id': flavor.id,
                'name': flavor.name,
                'vcpus': new_vcpus,
                'memory_mb': flavor.ram,
                'disk_gb': flavor.disk,
                'swap_mb': flavor.swap,
                'eph_gb': eph}
        resp = self.client.post(url, data)
        self.assertNoFormErrors(resp)
        self.assertMessageCount(success=1)
        self.assertRedirectsNoFollow(resp,
                                     reverse("horizon:admin:flavors:index"))

    @test.create_stubs({api.nova: ('flavor_list',
                                   'flavor_get'), })
    def test_edit_flavor_set_existing_name(self):
        flavor_a = self.flavors.list()[0]
        flavor_b = self.flavors.list()[1]
        eph = getattr(flavor_a, 'OS-FLV-EXT-DATA:ephemeral')

        # GET
        api.nova.flavor_get(IsA(http.HttpRequest),
                            flavor_a.id).AndReturn(flavor_a)

        # POST
        api.nova.flavor_list(IsA(http.HttpRequest)) \
            .AndReturn(self.flavors.list())
        api.nova.flavor_get(IsA(http.HttpRequest),
                            flavor_a.id).AndReturn(flavor_a)
        self.mox.ReplayAll()

        # get_test
        url = reverse('horizon:admin:flavors:edit', args=[flavor_a.id])
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTemplateUsed(resp, "admin/flavors/edit.html")

        # post test
        data = {'flavor_id': flavor_a.id,
                'name': flavor_b.name,
                'vcpus': flavor_a.vcpus + 1,
                'memory_mb': flavor_a.ram,
                'disk_gb': flavor_a.disk,
                'swap_mb': flavor_a.swap,
                'eph_gb': eph}
        resp = self.client.post(url, data)
        self.assertFormErrors(resp, 1, 'The name "m1.massive" '
                              'is already used by another flavor.')
| 43.588832 | 79 | 0.504716 | 852 | 8,587 | 4.93662 | 0.131455 | 0.036614 | 0.067998 | 0.068474 | 0.815977 | 0.776985 | 0.752734 | 0.724441 | 0.692582 | 0.668331 | 0 | 0.005141 | 0.388378 | 8,587 | 196 | 80 | 43.811224 | 0.795697 | 0.024805 | 0 | 0.723926 | 0 | 0 | 0.122652 | 0.062822 | 0 | 0 | 0 | 0 | 0.09816 | 1 | 0.02454 | false | 0 | 0.03681 | 0 | 0.067485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10ac7cbe619024238618a3ad09a214f8eaac65dc | 3,848 | py | Python | tests/common/output/test_sarif_report.py | graybrandonpfg/checkov | 3081a8560f6369465314ee8f4ac8a8ec01649d68 | [
"Apache-2.0"
] | null | null | null | tests/common/output/test_sarif_report.py | graybrandonpfg/checkov | 3081a8560f6369465314ee8f4ac8a8ec01649d68 | [
"Apache-2.0"
] | null | null | null | tests/common/output/test_sarif_report.py | graybrandonpfg/checkov | 3081a8560f6369465314ee8f4ac8a8ec01649d68 | [
"Apache-2.0"
] | null | null | null | import unittest
import os
import json
import jsonschema
import urllib.request

from checkov.common.models.enums import CheckResult
from checkov.common.output.report import Report
from checkov.common.output.record import Record


class TestSarifReport(unittest.TestCase):
    def test_valid_passing_valid_testcases(self):
        record1 = Record(
            check_id="CKV_AWS_21",
            check_name="Some Check",
            check_result={"result": CheckResult.FAILED},
            code_block=None,
            file_path="./s3.tf",
            file_line_range=[1, 3],
            resource="aws_s3_bucket.operations",
            evaluations=None,
            check_class=None,
            file_abs_path=",.",
            entity_tags={"tag1": "value1"},
        )
        record2 = Record(
            check_id="CKV_AWS_3",
            check_name="Ensure all data stored in the EBS is securely encrypted",
            check_result={"result": CheckResult.FAILED},
            code_block=None,
            file_path="./ec2.tf",
            file_line_range=[1, 3],
            resource="aws_ebs_volume.web_host_storage",
            evaluations=None,
            check_class=None,
            file_abs_path=",.",
            entity_tags={"tag1": "value1"},
        )

        r = Report("terraform")
        r.add_record(record=record1)
        r.add_record(record=record2)
        ts = r.get_test_suites()
        json_structure = r.get_sarif_json()
        print(json.dumps(json_structure))
        self.assertEqual(
            None,
            jsonschema.validate(instance=json_structure, schema=get_sarif_schema()),
        )

    def test_multiples_of_same_rule_do_not_break_schema(self):
        record1 = Record(
            check_id="CKV_AWS_21",
            check_name="Some Check",
            check_result={"result": CheckResult.FAILED},
            code_block=None,
            file_path="./s3.tf",
            file_line_range=[1, 3],
            resource="aws_s3_bucket.operations",
            evaluations=None,
            check_class=None,
            file_abs_path=",.",
            entity_tags={"tag1": "value1"},
        )
        record2 = Record(
            check_id="CKV_AWS_3",
            check_name="Ensure all data stored in the EBS is securely encrypted",
            check_result={"result": CheckResult.FAILED},
            code_block=None,
            file_path="./ec2.tf",
            file_line_range=[1, 3],
            resource="aws_ebs_volume.web_host_storage",
            evaluations=None,
            check_class=None,
            file_abs_path=",.",
            entity_tags={"tag1": "value1"},
        )
        record3 = Record(
            check_id="CKV_AWS_3",
            check_name="Ensure all data stored in the EBS is securely encrypted",
            check_result={"result": CheckResult.FAILED},
            code_block=None,
            file_path="./ec2.tf",
            file_line_range=[7, 10],
            resource="aws_ebs_volume.web_host_storage",
            evaluations=None,
            check_class=None,
            file_abs_path=",.",
            entity_tags={"tag1": "value1"},
        )

        r = Report("terraform")
        r.add_record(record=record1)
        r.add_record(record=record2)
        r.add_record(record=record3)
        json_structure = r.get_sarif_json()
        print(json.dumps(json_structure))
        self.assertEqual(
            None,
            jsonschema.validate(instance=json_structure, schema=get_sarif_schema()),
        )


def get_sarif_schema():
    file_name, headers = urllib.request.urlretrieve(
        "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json"
    )
    with open(file_name, "r") as file:
        schema = json.load(file)
    return schema


if __name__ == "__main__":
    unittest.main()
| 32.336134 | 104 | 0.58316 | 435 | 3,848 | 4.868966 | 0.264368 | 0.037771 | 0.030689 | 0.037771 | 0.737488 | 0.737488 | 0.737488 | 0.737488 | 0.737488 | 0.737488 | 0 | 0.017957 | 0.305353 | 3,848 | 118 | 105 | 32.610169 | 0.774411 | 0 | 0 | 0.704762 | 0 | 0.009524 | 0.161642 | 0.036642 | 0 | 0 | 0 | 0 | 0.019048 | 1 | 0.028571 | false | 0.009524 | 0.07619 | 0 | 0.12381 | 0.019048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52c338027d8db5563504a731a668f53048ef5172 | 3,568 | py | Python | forge_api_client/versions.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 1 | 2019-07-02T08:32:22.000Z | 2019-07-02T08:32:22.000Z | forge_api_client/versions.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | null | null | null | forge_api_client/versions.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 2 | 2019-07-04T05:13:42.000Z | 2020-05-09T22:15:05.000Z | from .utils import get_request, post_request, patch_request, authorized
class Versions:

    @authorized
    def getVersion(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionDownloadFormats(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/downloadFormats' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionDownloads(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/downloads' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionItem(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/item' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionRefs(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/refs' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionLinks(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/links' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def getVersionRelationshipsRefs(self, project_id, version_id):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/relationships/refs' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token)
        }
        return get_request(url, headers)

    @authorized
    def patchVersion(self, project_id, version_id, body):
        url = self.api_url + '/data/v1/projects/%s/versions/%s' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token),
            'Content-Type': 'application/vnd.api+json'
        }
        data = body
        return patch_request(url, data, headers)

    @authorized
    def postVersion(self, project_id, body):
        url = self.api_url + '/data/v1/projects/%s/versions' % (project_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token),
            'Content-Type': 'application/vnd.api+json'
        }
        data = body
        return post_request(url, data, headers)

    @authorized
    def postVersionRelationshipsRefs(self, project_id, version_id, body):
        url = self.api_url + '/data/v1/projects/%s/versions/%s/relationships/refs' % (project_id, version_id)
        headers = {
            'Authorization': '%s %s' % (self.token_type, self.access_token),
            'Content-Type': 'application/vnd.api+json'
        }
        data = body
        return post_request(url, data, headers)
| 29.00813 | 109 | 0.612948 | 416 | 3,568 | 5.0625 | 0.115385 | 0.08547 | 0.136752 | 0.153846 | 0.850427 | 0.850427 | 0.82811 | 0.82811 | 0.82811 | 0.82811 | 0 | 0.003765 | 0.255605 | 3,568 | 122 | 110 | 29.245902 | 0.789157 | 0 | 0 | 0.628205 | 0 | 0 | 0.191984 | 0.131446 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128205 | false | 0 | 0.012821 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52f30a30a3430473874d130ca8c5d8641129665a | 39 | py | Python | update/__main__.py | itemmanager/bungieapi | 0c4326f88ea0f28a1dcab683dc08c8d21c940fc1 | [
"MIT"
] | 5 | 2022-01-06T21:05:53.000Z | 2022-02-12T19:58:11.000Z | update/__main__.py | itemmanager/bungieapi | 0c4326f88ea0f28a1dcab683dc08c8d21c940fc1 | [
"MIT"
] | 8 | 2021-12-25T02:40:56.000Z | 2022-03-28T03:31:41.000Z | update/__main__.py | itemmanager/bungieapi | 0c4326f88ea0f28a1dcab683dc08c8d21c940fc1 | [
"MIT"
] | 1 | 2022-01-30T23:53:25.000Z | 2022-01-30T23:53:25.000Z | from .cli import generate
generate()
| 7.8 | 25 | 0.74359 | 5 | 39 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 4 | 26 | 9.75 | 0.90625 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5e0aa5a7d5eddf99f706a755767166ccd3cc53ea | 3,315 | py | Python | tests/test_calculate.py | ayeshasilvia/price_calculator | 3b0773cbda115d1a30941cb5b269d76d8df1f9c9 | [
"MIT"
] | null | null | null | tests/test_calculate.py | ayeshasilvia/price_calculator | 3b0773cbda115d1a30941cb5b269d76d8df1f9c9 | [
"MIT"
] | null | null | null | tests/test_calculate.py | ayeshasilvia/price_calculator | 3b0773cbda115d1a30941cb5b269d76d8df1f9c9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
import pytest
import os
from src.price_calculator.calculate import read_files, check_options, get_base_price, calculate
__author__ = "Ayesha Mosaddeque"
__copyright__ = "Ayesha Mosaddeque"
__license__ = "mit"
def test_read_files():
assert read_files(os.path.abspath('tests/samples/cart-4560.json'),
os.path.abspath('tests/samples/base-prices.json')) == 4560
assert read_files(os.path.abspath('tests/samples/cart-9363.json'),
os.path.abspath('tests/samples/base-prices.json')) == 9363
assert read_files(os.path.abspath('tests/samples/cart-9500.json'),
os.path.abspath('tests/samples/base-prices.json')) == 9500
assert read_files(os.path.abspath('tests/samples/cart-11356.json'),
os.path.abspath('tests/samples/base-prices.json')) == 11356
assert read_files(os.path.abspath('tests/samples/cart-9363-non-existing-product.json'),
os.path.abspath('tests/samples/base-prices.json')) == 9363
assert read_files(os.path.abspath('tests/samples/cart-9363-unknown-option.json'),
os.path.abspath('tests/samples/base-prices.json')) == 9363
with pytest.raises(AssertionError):
assert read_files(os.path.abspath('tests/samples/cart-9363-unknown-option.json'),
os.path.abspath('tests/samples/base-prices.json')) == 9369
def test_check_options_returns_true():
cart_options = {
"size": "small",
"colour": "dark",
"print-location": "front"
}
baseprice_options = {
"colour": ["white", "dark"],
"size": ["small", "medium"]
}
assert check_options(cart_options, baseprice_options)
def test_check_options_returns_false():
cart_options = {
"size": "xl",
"colour": "dark",
"print-location": "front"
}
baseprice_options = {
"colour": ["white", "dark"],
"size": ["small", "medium"]
}
assert not check_options(cart_options, baseprice_options)
def test_check_options_for_empty_options_at_cart():
cart_options = {}
baseprice_options = {
"colour": ["white", "dark"],
"size": ["small", "medium"]
}
assert check_options(cart_options, baseprice_options)
def test_check_options_for_empty_options_at_baseprice():
cart_options = {
"size": "xl",
"colour": "dark",
"print-location": "front"
}
baseprice_options = {
}
assert check_options(cart_options, baseprice_options)
def test_get_base_price_returns_price():
with open(os.path.abspath('tests/samples/base-prices.json'), 'r') as f:
base_price_dict = json.load(f)
cart_options = {
"size": "xl",
"colour": "dark",
"print-location": "front"
}
assert get_base_price('hoodie', cart_options, base_price_dict) == 4368
def test_get_base_price_returns_zero():
with open(os.path.abspath('tests/samples/base-prices.json'), 'r') as f:
base_price_dict = json.load(f)
cart_options = {
"size": "xl",
"colour": "pink",
"print-location": "front"
}
assert get_base_price('hoodie', cart_options, base_price_dict) == 0
def test_calculate():
assert calculate(3800, 20, 1) == 4560
| 31.571429 | 95 | 0.638009 | 405 | 3,315 | 4.987654 | 0.202469 | 0.047525 | 0.10297 | 0.142574 | 0.801485 | 0.785149 | 0.762871 | 0.762871 | 0.762871 | 0.627723 | 0 | 0.028736 | 0.21267 | 3,315 | 104 | 96 | 31.875 | 0.745211 | 0.01267 | 0 | 0.493827 | 0 | 0 | 0.255885 | 0.158361 | 0 | 0 | 0 | 0 | 0.185185 | 1 | 0.098765 | false | 0 | 0.049383 | 0 | 0.148148 | 0.061728 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eae54459cee5602a223babdc6ac5858f18588ee7 | 31 | py | Python | app/api/v1/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 1 | 2017-07-19T14:17:22.000Z | 2017-07-19T14:17:22.000Z | app/api/v1/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 2 | 2017-02-07T05:33:21.000Z | 2017-02-14T02:03:28.000Z | app/api/v1/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 2 | 2017-01-08T08:28:38.000Z | 2021-07-20T07:44:36.000Z | from .users import UserResource | 31 | 31 | 0.870968 | 4 | 31 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eaee2287027fce5d45ecad25e510bc7a1c234461 | 34 | py | Python | ddlogger/__init__.py | kkew3/dot-dot-logger | 31620e2137c5a2dbdcf7bb46dd91555c51b64c81 | [
"MIT"
] | null | null | null | ddlogger/__init__.py | kkew3/dot-dot-logger | 31620e2137c5a2dbdcf7bb46dd91555c51b64c81 | [
"MIT"
] | null | null | null | ddlogger/__init__.py | kkew3/dot-dot-logger | 31620e2137c5a2dbdcf7bb46dd91555c51b64c81 | [
"MIT"
] | null | null | null | from ddlogger import DotDotLogger
| 17 | 33 | 0.882353 | 4 | 34 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eaf01e1d18a4774a07317336c4dbfdf1d34de834 | 9,569 | py | Python | authors/apps/articles/tests/test_comments.py | andela/ah-backend-prime | 0708463d4565a4977a5a5dcb839f1dfed52fdc90 | [
"BSD-3-Clause"
] | 1 | 2019-09-19T14:30:05.000Z | 2019-09-19T14:30:05.000Z | authors/apps/articles/tests/test_comments.py | e-ian/authors-haven-frontend | 05829c8088ca49ef2cf0863dc87ec55b44b13534 | [
"BSD-3-Clause"
] | 22 | 2019-03-25T16:10:53.000Z | 2022-03-11T23:44:21.000Z | authors/apps/articles/tests/test_comments.py | e-ian/authors-haven-frontend | 05829c8088ca49ef2cf0863dc87ec55b44b13534 | [
"BSD-3-Clause"
] | 6 | 2019-03-25T09:39:39.000Z | 2021-03-11T23:54:12.000Z | import json
from rest_framework.test import APIClient, APITestCase
from .base import BaseTest
from rest_framework import status, response
from django.urls import reverse
from .base import ArticlesBaseTest
from .test_data import (
VALID_ARTICLE,
VALID_COMMENT,
VALID_COMMENT_2
)
from authors.apps.authentication.tests.test_data import (
VALID_USER_DATA,
VALID_LOGIN_DATA,
)
from authors.apps.profiles.tests.test_data import VALID_USER_DATA_2
class CommentCreateTest(ArticlesBaseTest):
"""
Test Commentin on articles
"""
def test_user_can_comment_on_article_data(self):
"""
User can create comment
"""
token1 = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token1)
response = self.create_comment(
token=token1,
parentId=0,
slug=response.data['article']['slug']
)
self.assertEqual(
response.data['comment']['body'],
VALID_COMMENT['body']
)
self.assertEqual(
response.status_code,
status.HTTP_201_CREATED
)
def test_user_can_get_a_comment(self):
"""
User can get a comment
"""
token1 = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token1)
response = self.create_comment(
token=token1,
parentId=0,
slug=response.data['article']['slug']
)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.get(
get_comment_url
)
self.assertEqual(
response.data['body'],
VALID_COMMENT['body']
)
self.assertEqual(
response.status_code,
status.HTTP_200_OK
)
def test_user_can_reply_to_comment(self):
"""
User can reply to a comment
"""
token1 = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token1)
response = self.create_comment(
token=token1,
parentId=0,
slug=response.data['article']['slug']
)
response = self.create_comment(
token=token1,
parentId=response.data['comment']['id'],
slug=response.data['comment']['article']['slug']
)
self.assertEqual(
response.status_code,
status.HTTP_201_CREATED
)
self.assertEqual(
response.data['comment']['body'],
VALID_COMMENT['body']
)
def test_user_can_get_comments_for_an_article(self):
"""
user can get comments for an article
"""
token1 = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token1)
response = self.create_comment(
token=token1,
parentId=0,
slug=response.data['article']['slug']
)
get_comment_url = reverse('comments', kwargs={
'slug': response.data['comment']['article']['slug'],
'id': 0
})
response = self.client.get(
get_comment_url
)
self.assertEqual(
response.status_code,
status.HTTP_200_OK
)
def test_user_can_delete_their_comment(self):
"""
user can delete comments for an article
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.delete(
get_comment_url,
HTTP_AUTHORIZATION=token
)
self.assertEqual(
response.status_code,
status.HTTP_200_OK
)
def test_invalid_user_cannot_delete_a_comment(self):
"""
a user who is not the author cannot delete another user's comment
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
token = self.create_user(VALID_USER_DATA_2)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.delete(
get_comment_url,
HTTP_AUTHORIZATION=token
)
self.assertEqual(
response.status_code,
status.HTTP_403_FORBIDDEN
)
def test_user_can_update_their_comment(self):
"""
user can update their comments for an article
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.put(
get_comment_url,
HTTP_AUTHORIZATION=token,
data=VALID_COMMENT_2,
format='json'
)
self.assertEqual(
response.status_code,
status.HTTP_200_OK
)
self.assertIn(
'version',
response.data['commentHistory'],
)
def test_user_can_update_their_comment_with_same_data(self):
"""
updating a comment with unchanged data is rejected
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.put(
get_comment_url,
HTTP_AUTHORIZATION=token,
data=VALID_COMMENT,
format='json'
)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertEqual(
response.data['message'],
'No changes made to the comment'
)
def test_users_can_track_edit_history(self):
"""
User can view the edit history from a comment
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
update_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.put(
update_comment_url,
HTTP_AUTHORIZATION=token,
data=VALID_COMMENT_2,
format='json'
)
get_comment_url = reverse(
'crud-comment',
kwargs={'id':response.data['id']}
)
token2 = self.create_user(VALID_USER_DATA_2)
response = self.client.get(
get_comment_url,
HTTP_AUTHORIZATION=token2
)
self.assertEqual(
response.status_code,
status.HTTP_200_OK
)
self.assertIn(
'version',
response.data['commentHistory'],
)
def test_invalid_user_cannot_update_comment(self):
"""
a user who is not the author cannot update another user's comment
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
token = self.create_user(VALID_USER_DATA_2)
get_comment_url = reverse('crud-comment', kwargs={
'id': response.data['comment']['id']
})
response = self.client.put(
get_comment_url,
HTTP_AUTHORIZATION=token,
data=VALID_COMMENT_2,
format='json'
)
self.assertEqual(
response.status_code,
status.HTTP_403_FORBIDDEN
)
def test_get_comments_of_nonexistent_article(self):
"""
requesting comments for a non-existent article returns 404
"""
token = self.create_user(VALID_USER_DATA)
response = self.create_article(VALID_ARTICLE, token)
response = self.create_comment(
token=token,
parentId=0,
slug=response.data['article']['slug']
)
token = self.create_user(VALID_USER_DATA_2)
get_comment_url = reverse('comments', kwargs={
'slug': 'random-non-existent-article-0x3',
'id': 0
})
response = self.client.get(
get_comment_url,
HTTP_AUTHORIZATION=token,
)
self.assertEqual(
response.status_code,
status.HTTP_404_NOT_FOUND
) | 29.17378 | 67 | 0.564113 | 981 | 9,569 | 5.24159 | 0.100917 | 0.073901 | 0.080513 | 0.055426 | 0.849086 | 0.834889 | 0.818942 | 0.765461 | 0.765461 | 0.710618 | 0 | 0.011365 | 0.337966 | 9,569 | 328 | 68 | 29.17378 | 0.800316 | 0.047027 | 0 | 0.67433 | 0 | 0 | 0.060445 | 0.003502 | 0 | 0 | 0.000339 | 0 | 0.065134 | 1 | 0.042146 | false | 0 | 0.034483 | 0 | 0.08046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d81f2262959956bea008b69793b51952b4b3e0b3 | 134 | py | Python | code/1007.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | code/1007.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | code/1007.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | # [Basics - Output] Printing 07 (explanation)
# minso.jeong@daum.net
'''
Problem link: https://www.codeup.kr/problem.php?id=1007
'''
print('"C:\Download\hello.cpp"') | 19.142857 | 48 | 0.649254 | 22 | 134 | 3.954545 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.08209 | 134 | 7 | 49 | 19.142857 | 0.658537 | 0.664179 | 0 | 0 | 0 | 0 | 0.621622 | 0.621622 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d820238df5a63041ec4a118603d01c05f21d8ea7 | 123 | py | Python | self_supervised/__init__.py | 36000/myow | 73f2b430d284bcd6bb0c6faaac30576e84393ff1 | [
"Apache-2.0"
] | 19 | 2021-02-22T15:51:30.000Z | 2022-01-23T12:25:30.000Z | self_supervised/__init__.py | 36000/myow | 73f2b430d284bcd6bb0c6faaac30576e84393ff1 | [
"Apache-2.0"
] | 1 | 2022-01-19T16:21:51.000Z | 2022-01-19T16:21:51.000Z | self_supervised/__init__.py | 36000/myow | 73f2b430d284bcd6bb0c6faaac30576e84393ff1 | [
"Apache-2.0"
] | 2 | 2021-07-28T21:19:07.000Z | 2022-01-26T23:10:56.000Z | from . import data, transforms
from . import model, optimizer, trainer, loss
from . import tensorboard
from . import utils
| 24.6 | 45 | 0.772358 | 16 | 123 | 5.9375 | 0.625 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162602 | 123 | 4 | 46 | 30.75 | 0.92233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d83332ec12fd27b8284305f900864e8565581901 | 38 | py | Python | creacion_de_aplicaciones/pyqt5_iris/app/gui/__init__.py | soytupadrrre/Master_Python_Eip | c4774209d7dd15584233fe5d4cc01b1434c9316b | [
"MIT"
] | null | null | null | creacion_de_aplicaciones/pyqt5_iris/app/gui/__init__.py | soytupadrrre/Master_Python_Eip | c4774209d7dd15584233fe5d4cc01b1434c9316b | [
"MIT"
] | null | null | null | creacion_de_aplicaciones/pyqt5_iris/app/gui/__init__.py | soytupadrrre/Master_Python_Eip | c4774209d7dd15584233fe5d4cc01b1434c9316b | [
"MIT"
] | null | null | null | from app.gui.home import Ui_MainWindow | 38 | 38 | 0.868421 | 7 | 38 | 4.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc24117c3addc5bfa4374db72d97499544e36999 | 39 | py | Python | venv/Lib/site-packages/IPython/kernel/restarter.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | venv/Lib/site-packages/IPython/kernel/restarter.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | venv/Lib/site-packages/IPython/kernel/restarter.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | from jupyter_client.restarter import *
| 19.5 | 38 | 0.846154 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc391bcb36c653049c076a4517a847fc4cf64e7f | 79 | py | Python | h1st/h1st/model_repository/__init__.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | 2 | 2020-08-21T07:49:08.000Z | 2020-08-21T07:49:13.000Z | h1st/h1st/model_repository/__init__.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | null | null | null | h1st/h1st/model_repository/__init__.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | null | null | null | from h1st.model_repository.model_repository import ModelRepository, ModelSerDe
| 39.5 | 78 | 0.898734 | 9 | 79 | 7.666667 | 0.777778 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.063291 | 79 | 1 | 79 | 79 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc5b1079f6f7da6115a683ff6f3bbe717a9a8a84 | 420 | py | Python | delira/data_loading/sampler/__init__.py | gedoensmax/delira | 545e2ccbe56ed382d300cf3d00317e9a0e3ab5f6 | [
"BSD-2-Clause"
] | null | null | null | delira/data_loading/sampler/__init__.py | gedoensmax/delira | 545e2ccbe56ed382d300cf3d00317e9a0e3ab5f6 | [
"BSD-2-Clause"
] | null | null | null | delira/data_loading/sampler/__init__.py | gedoensmax/delira | 545e2ccbe56ed382d300cf3d00317e9a0e3ab5f6 | [
"BSD-2-Clause"
] | null | null | null | from delira.data_loading.sampler.abstract import AbstractSampler
from delira.data_loading.sampler.batch import BatchSampler
from delira.data_loading.sampler.random import RandomSampler, \
RandomSamplerNoReplacement, RandomSamplerWithReplacement
from delira.data_loading.sampler.sequential import SequentialSampler
from delira.data_loading.sampler.weighted import WeightedRandomSampler, \
PrevalenceRandomSampler
| 52.5 | 73 | 0.87381 | 43 | 420 | 8.418605 | 0.44186 | 0.138122 | 0.19337 | 0.290055 | 0.38674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078571 | 420 | 7 | 74 | 60 | 0.935401 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc6ea56134f47a2d8d89e119047ce37c62492e43 | 114 | py | Python | nncompress/__init__.py | GabrielMajeri/neuralcompressor | ab293a87abc1d9173553ce0388a88d3ebcf028c7 | [
"MIT"
] | 84 | 2018-03-17T13:35:31.000Z | 2022-03-17T08:43:58.000Z | nncompress/__init__.py | GabrielMajeri/neuralcompressor | ab293a87abc1d9173553ce0388a88d3ebcf028c7 | [
"MIT"
] | 9 | 2018-04-06T01:15:10.000Z | 2020-02-24T13:34:39.000Z | nncompress/__init__.py | GabrielMajeri/neuralcompressor | ab293a87abc1d9173553ce0388a88d3ebcf028c7 | [
"MIT"
] | 23 | 2018-03-31T01:36:41.000Z | 2020-09-25T03:58:47.000Z | from __future__ import absolute_import, division, print_function
from .embed_compress import EmbeddingCompressor
| 28.5 | 64 | 0.877193 | 13 | 114 | 7.153846 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 3 | 65 | 38 | 0.902913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
dc7b52f4f1436ad0d70f342b11bb3f4ff43941e1 | 237 | py | Python | app/ml/objects/classification/enum.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | 1 | 2021-05-04T08:46:19.000Z | 2021-05-04T08:46:19.000Z | app/ml/objects/classification/enum.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | null | null | null | app/ml/objects/classification/enum.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | 1 | 2022-01-28T21:21:32.000Z | 2022-01-28T21:21:32.000Z | from enum import Enum
class Classifier(str, Enum):
KNEIGHBORS_CLASSIFIER = "KNEIGHBORS_CLASSIFIER"
MLP_CLASSIFIER = "MLP_CLASSIFIER"
RANDOM_FOREST_CLASSIFIER = "RANDOM_FOREST_CLASSIFIER"
SVC_CLASSIFIER = "SVC_CLASSIFIER" | 33.857143 | 57 | 0.78481 | 26 | 237 | 6.769231 | 0.423077 | 0.227273 | 0.261364 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147679 | 237 | 7 | 58 | 33.857143 | 0.871287 | 0 | 0 | 0 | 0 | 0 | 0.306723 | 0.189076 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
dc7f87a795e52252ea4c10ed91f4ba38e6dd2cf4 | 54,704 | py | Python | vaccine/tests/test_vaccine_reg_whatsapp.py | praekeltfoundation/vaccine-eligibility | 041010cbb14a12854a9644d97e56b63ba62cd32e | [
"BSD-3-Clause"
] | null | null | null | vaccine/tests/test_vaccine_reg_whatsapp.py | praekeltfoundation/vaccine-eligibility | 041010cbb14a12854a9644d97e56b63ba62cd32e | [
"BSD-3-Clause"
] | 6 | 2021-04-09T11:09:23.000Z | 2022-03-29T08:35:25.000Z | vaccine/tests/test_vaccine_reg_whatsapp.py | praekeltfoundation/vaccine-eligibility | 041010cbb14a12854a9644d97e56b63ba62cd32e | [
"BSD-3-Clause"
] | null | null | null | import gzip
from datetime import date
from unittest import mock
import pytest
from sanic import Sanic, response
from vaccine.data.medscheme import config as m_config
from vaccine.data.suburbs import config as s_config
from vaccine.models import Message
from vaccine.testing import AppTester
from vaccine.vaccine_reg_whatsapp import Application, config
@pytest.fixture
def tester():
return AppTester(Application)
@pytest.fixture
async def evds_mock(sanic_client):
Sanic.test_mode = True
app = Sanic("mock_turn")
app.requests = []
app.errors = 0
app.errormax = 0
@app.route("/api/private/evds-sa/person/8/record", methods=["POST"])
def submit_record(request):
app.requests.append(request)
if app.errormax:
if app.errors < app.errormax:
app.errors += 1
return response.json({}, status=500)
return response.json({}, status=200)
@app.route("/api/private/evds-sa/person/8/lookup/medscheme/1", methods=["GET"])
def get_medschemes(request):
with gzip.open("vaccine/data/medscheme.json.gz") as f:
return response.raw(f.read(), content_type="application/json")
@app.route("/api/private/evds-sa/person/8/lookup/location/1", methods=["GET"])
def get_suburbs(request):
with gzip.open("vaccine/data/suburbs.json.gz") as f:
return response.raw(f.read(), content_type="application/json")
client = await sanic_client(app)
url = config.EVDS_URL
username = config.EVDS_USERNAME
password = config.EVDS_PASSWORD
s_config.EVDS_URL = (
m_config.EVDS_URL
) = config.EVDS_URL = f"http://{client.host}:{client.port}"
s_config.EVDS_USERNAME = m_config.EVDS_USERNAME = config.EVDS_USERNAME = "test"
s_config.EVDS_PASSWORD = m_config.EVDS_PASSWORD = config.EVDS_PASSWORD = "test"
yield client
s_config.EVDS_URL = m_config.EVDS_URL = config.EVDS_URL = url
s_config.EVDS_USERNAME = m_config.EVDS_USERNAME = config.EVDS_USERNAME = username
s_config.EVDS_PASSWORD = m_config.EVDS_PASSWORD = config.EVDS_PASSWORD = password
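# The evds_mock fixture above injects transient upstream failures through the
# errors/errormax counters so that retry behaviour can be exercised. Stripped of
# Sanic, the counting pattern is just this (an illustrative stand-alone sketch,
# not the fixture itself):

```python
class FlakyEndpoint:
    """Return a 500 status for the first `errormax` calls, then 200."""

    def __init__(self, errormax=0):
        self.errormax = errormax  # how many calls should fail first
        self.errors = 0           # failures produced so far
        self.requests = []        # every request is still recorded

    def __call__(self, request):
        self.requests.append(request)
        if self.errors < self.errormax:
            self.errors += 1
            return 500
        return 200
```

With `errormax=0` (the fixture's default) every call succeeds immediately, which is why only tests that explicitly set `errormax` see the error path.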
@pytest.fixture
async def eventstore_mock(sanic_client):
Sanic.test_mode = True
app = Sanic("mock_eventstore")
app.requests = []
@app.route("/v2/vaccineregistration/", methods=["POST"])
def valid_registration(request):
app.requests.append(request)
return response.json({})
client = await sanic_client(app)
url = config.VACREG_EVENTSTORE_URL
config.VACREG_EVENTSTORE_URL = f"http://{client.host}:{client.port}"
yield client
config.VACREG_EVENTSTORE_URL = url
@pytest.mark.asyncio
async def test_language(tester: AppTester):
"""
Should set the user language on selection
"""
assert tester.user.lang is None
await tester.user_input(session=Message.SESSION_EVENT.NEW)
tester.assert_state("state_language")
tester.assert_num_messages(1)
await tester.user_input("2")
assert tester.user.lang == "zul"
tester.assert_state("state_age_gate")
tester.assert_message(
"\n".join(
[
"*INGXOXO EPHEPHILE YOKUBHALISELA UKUGOMA*🔐 ",
" ",
"Uyemukelwa kwi-Portal esemthethweni ye-COVID-19 Vaccination "
"Self-registration evela eMnyangweni Wezempilo Kazwelonke. Ukubhalisa "
"kuzothatha cishe imizuzu engu-5. Sicela ube *nenombolo* yakho ye-ID , "
"yePhasipothi, yemvume yababaleki noma yemvume yokufuna ukukhoseliswa "
"esandleni. Uma uneMedical Aid, sizocela nenombolo yakho "
"ye-Medical Aid.",
" ",
"*Qaphela:* ",
"- Ukugoma kwenziwa ngokuzithandela futhi akudingeki ukuthi ukhokhe ",
"- Akukho okuzokhokhwa ngokuhlanganyela / intela ezodingeka uma "
"ungaphansi kweMedical Aid ",
"- Wonke umuntu obhalisile uzonikezwa umuthi wokugoma ",
"- Sizohamba ngamaqembu eminyeka ngokushesha okukhulu ",
" ",
"Ukubhalisa okwamanje kuvulelwe abaneminyaka engu-60 nangaphezulu "
"kuphela. Uneminyaka engu-60 noma ngaphezulu? ",
"",
"1. Yebo, ngineminyaka engu-60 noma ngaphezulu",
"2. Cha",
]
)
)
@pytest.mark.asyncio
async def test_age_gate(tester: AppTester):
"""
Should ask the user if they're over 60
"""
tester.setup_state("state_language")
await tester.user_input("1")
tester.assert_state("state_age_gate")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_age_gate_error(tester: AppTester):
"""
Should show the error message on incorrect input
"""
tester.setup_state("state_age_gate")
await tester.user_input("invalid")
tester.assert_state("state_age_gate")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_under_age_notification(tester: AppTester):
"""
Should ask the user if they want a notification when it opens up
"""
tester.setup_state("state_age_gate")
await tester.user_input("no")
tester.assert_state("state_under_age_notification")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_under_age_notification_error(tester: AppTester):
"""
Should show the error message on incorrect input
"""
tester.setup_state("state_under_age_notification")
await tester.user_input("invalid")
tester.assert_state("state_under_age_notification")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_under_age_notification_confirm(tester: AppTester):
"""
Should confirm the selection and end the session
"""
tester.setup_state("state_under_age_notification")
await tester.user_input("yes")
tester.assert_message(
"Thank you for confirming", session=Message.SESSION_EVENT.CLOSE
)
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_identification_type(tester: AppTester):
tester.setup_state("state_age_gate")
await tester.user_input("1")
tester.assert_state("state_terms_and_conditions")
tester.assert_num_messages(2)
@pytest.mark.asyncio
async def test_identification_type_invalid(tester: AppTester):
tester.setup_state("state_identification_type")
await tester.user_input("invalid")
tester.assert_state("state_identification_type")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_identification_number(tester: AppTester):
tester.setup_state("state_identification_type")
await tester.user_input("rsa id number")
tester.assert_state("state_identification_number")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_identification_number_invalid(tester: AppTester):
tester.setup_state("state_identification_number")
tester.setup_answer("state_identification_type", "rsa_id")
await tester.user_input("9001010001089")
tester.assert_state("state_identification_number")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_identification_number_said_on_passport(tester: AppTester):
tester.setup_state("state_identification_number")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("9001010001088")
tester.assert_state("state_check_id_number")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_check_id_number(tester: AppTester):
tester.setup_state("state_identification_number")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("9001010001088")
tester.assert_state("state_check_id_number")
tester.assert_num_messages(1)
await tester.user_input("yes")
tester.assert_answer("state_identification_type", "rsa_id")
tester.assert_state("state_first_name")
@pytest.mark.asyncio
async def test_check_id_number_restart(tester: AppTester):
tester.setup_state("state_identification_number")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("9001010001088")
tester.assert_num_messages(1)
tester.assert_state("state_check_id_number")
await tester.user_input("no")
tester.assert_num_messages(1)
tester.assert_state("state_identification_type")
@pytest.mark.asyncio
async def test_passport_country(tester: AppTester):
tester.setup_state("state_identification_number")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("A1234567890")
tester.assert_state("state_passport_country")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_passport_country_search(tester: AppTester):
tester.setup_state("state_passport_country")
await tester.user_input("cote d'ivory")
tester.assert_state("state_passport_country_list")
tester.assert_answer("state_passport_country", "cote d'ivory")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"Please confirm your passport's COUNTRY of origin.",
"",
"REPLY with a NUMBER from the list below:",
"1. Republic of Côte d'Ivoire",
"2. British Indian Ocean Territory",
"3. Plurinational State of Bolivia",
"4. Other",
]
)
)
@pytest.mark.asyncio
async def test_passport_country_search_other(tester: AppTester):
tester.setup_state("state_passport_country_list")
tester.setup_answer("state_passport_country", "CI")
await tester.user_input("other")
tester.assert_num_messages(1)
tester.assert_state("state_passport_country")
@pytest.mark.asyncio
async def test_passport_country_search_list_invalid(tester: AppTester):
tester.setup_state("state_passport_country_list")
tester.setup_answer("state_passport_country", "Côte d'Ivoire")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_passport_country_list")
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_said_date_and_sex_extraction(get_today, tester: AppTester, evds_mock):
get_today.return_value = date(2100, 1, 1)
tester.setup_state("state_confirm_profile")
tester.setup_answer("state_first_name", "test")
tester.setup_answer("state_surname", "name")
tester.setup_answer("state_identification_type", "rsa_id")
tester.setup_answer("state_identification_number", "9001010001088")
await tester.user_input("correct")
tester.assert_state("state_province_id")
tester.assert_answer("state_dob_year", "1990")
tester.assert_answer("state_dob_month", "1")
tester.assert_answer("state_dob_day", "1")
tester.assert_answer("state_gender", "Female")
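# The ID fixtures such as 9001010001088 are 13-digit South African ID numbers:
# the first six digits encode the birth date (YYMMDD), digits 7-10 encode sex
# (below 5000 is female), and the whole number carries a Luhn check digit. The
# extraction this test relies on can be sketched as follows (a simplified
# illustration — the application's actual parser may differ):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def parse_said(said: str):
    """Extract (year, month, day, gender) from a 13-digit SA ID number.

    The two-digit year is ambiguous (e.g. 1990 vs 2090); callers resolve it
    against today's date, which is what the ambiguous-year test exercises.
    """
    assert len(said) == 13 and said.isdigit() and luhn_valid(said)
    year, month, day = int(said[0:2]), int(said[2:4]), int(said[4:6])
    gender = "Female" if int(said[6:10]) < 5000 else "Male"
    return year, month, day, gender
```

This also explains the earlier invalid-input fixture: 9001010001089 differs only in its check digit and fails the Luhn test.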
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_said_date_extraction_ambiguous(get_today, tester: AppTester, evds_mock):
get_today.return_value = date(2010, 1, 1)
tester.setup_state("state_confirm_profile")
tester.setup_answer("state_first_name", "test")
tester.setup_answer("state_surname", "name")
tester.setup_answer("state_identification_type", "rsa_id")
tester.setup_answer("state_identification_number", "0001010001087")
await tester.user_input("correct")
tester.assert_state("state_dob_year")
tester.assert_no_answer("state_dob_year")
await tester.user_input("1900")
tester.assert_state("state_province_id")
tester.assert_answer("state_dob_month", "1")
tester.assert_answer("state_dob_day", "1")
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_gender(get_today, tester: AppTester):
get_today.return_value = date(2120, 1, 1)
tester.setup_state("state_dob_day")
tester.setup_answer("state_dob_year", "1990")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("1")
tester.assert_num_messages(1)
tester.assert_state("state_gender")
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_gender_invalid(get_today, tester: AppTester):
get_today.return_value = date(2120, 1, 1)
tester.setup_state("state_gender")
tester.setup_answer("state_dob_year", "1990")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_gender")
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_too_young(get_today, tester: AppTester):
get_today.return_value = date(2020, 1, 1)
tester.setup_state("state_dob_year")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("1990")
tester.assert_num_messages(1)
tester.assert_state("state_under_age_notification")
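# The age-gate tests mock get_today and compare it against the captured date of
# birth (here, born 1990 with "today" mocked to 2020 means age 30, under the
# 60+ gate). A standard whole-years age calculation (an illustrative sketch;
# the app's own helper may differ) is:

```python
from datetime import date


def age_on(today: date, dob: date) -> int:
    """Age in completed years on `today`, accounting for the birthday."""
    # The boolean subtracts 1 if this year's birthday hasn't happened yet.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
```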
@pytest.mark.asyncio
async def test_confirm_profile(tester: AppTester):
tester.setup_answer("state_identification_number", "123456")
tester.setup_answer("state_first_name", "test$ first ")
tester.setup_state("state_surname")
await tester.user_input("test$ surname")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please confirm the following:",
"",
"test first test surname",
"123456",
"",
"1. Correct",
"2. Wrong",
]
)
)
tester.assert_state("state_confirm_profile")
@pytest.mark.asyncio
async def test_dob_year(tester: AppTester):
tester.setup_answer("state_identification_number", "123456")
tester.setup_answer("state_first_name", "test$ first ")
tester.setup_answer("state_surname", "test$ surname ")
tester.setup_answer("state_identification_type", "passport")
tester.setup_state("state_confirm_profile")
await tester.user_input("correct")
tester.assert_state("state_dob_year")
@pytest.mark.asyncio
async def test_dob_year_invalid(tester: AppTester):
tester.setup_state("state_dob_year")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("invalid")
tester.assert_state("state_dob_year")
tester.assert_message(
"\n".join(["⚠️ Please TYPE in only the YEAR you were born.", "Example _1980_"])
)
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_dob_year_not_match_id(get_today, tester: AppTester):
get_today.return_value = date(2010, 1, 1)
tester.setup_state("state_dob_year")
tester.setup_answer("state_identification_number", "9001010001088")
tester.setup_answer("state_identification_type", "rsa_id")
await tester.user_input("1991")
tester.assert_state("state_dob_year")
tester.assert_message(
"The YEAR you have given does not match the YEAR of your ID number. Please "
"try again"
)
@pytest.mark.asyncio
async def test_dob_month(tester: AppTester):
tester.setup_state("state_dob_year")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("1990")
tester.assert_num_messages(1)
tester.assert_state("state_dob_month")
@pytest.mark.asyncio
async def test_dob_month_error(tester: AppTester):
tester.setup_state("state_dob_month")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_dob_month")
@pytest.mark.asyncio
async def test_dob_day(tester: AppTester):
tester.setup_state("state_dob_month")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("january")
tester.assert_num_messages(1)
tester.assert_state("state_dob_day")
@pytest.mark.asyncio
async def test_dob_day_invalid(tester: AppTester):
tester.setup_state("state_dob_day")
tester.setup_answer("state_dob_year", "1990")
tester.setup_answer("state_dob_month", "2")
tester.setup_answer("state_identification_type", "passport")
await tester.user_input("29")
tester.assert_num_messages(1)
tester.assert_state("state_dob_day")
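# The rejected input here — day 29 for February 1990, a non-leap year — implies
# the day validator checks for real calendar dates, which datetime.date gives
# for free. A sketch of the assumed check:

```python
from datetime import date


def valid_dob(year: int, month: int, day: int) -> bool:
    """True if year/month/day form a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False
```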
@pytest.mark.asyncio
async def test_first_name(tester: AppTester):
tester.setup_state("state_passport_country_list")
tester.setup_answer("state_passport_country", "south africa")
await tester.user_input("1")
tester.assert_num_messages(1)
tester.assert_state("state_first_name")
@pytest.mark.asyncio
async def test_first_name_invalid(tester: AppTester):
tester.setup_state("state_first_name")
await tester.user_input("")
tester.assert_num_messages(1)
tester.assert_state("state_first_name")
@pytest.mark.asyncio
async def test_surname(tester: AppTester):
tester.setup_state("state_first_name")
await tester.user_input("firstname")
tester.assert_num_messages(1)
tester.assert_state("state_surname")
tester.assert_answer("state_first_name", "firstname")
@pytest.mark.asyncio
@mock.patch("vaccine.utils.get_today")
async def test_skip_dob_and_gender(get_today, evds_mock, tester: AppTester):
get_today.return_value = date(2120, 1, 1)
tester.setup_state("state_confirm_profile")
tester.setup_answer("state_identification_type", "passport")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_year", "1990")
tester.setup_answer("state_gender", "male")
tester.setup_answer("state_first_name", "first")
tester.setup_answer("state_surname", "surname")
tester.setup_answer("state_identification_number", "123456")
await tester.user_input("correct")
tester.assert_num_messages(1)
tester.assert_state("state_province_id")
@pytest.mark.asyncio
async def test_state_after_terms_and_conditions(tester: AppTester):
tester.setup_state("state_terms_and_conditions")
await tester.user_input("accept")
tester.assert_num_messages(1)
tester.assert_state("state_identification_type")
@pytest.mark.asyncio
async def test_province_invalid(evds_mock, tester: AppTester):
tester.setup_state("state_province_id")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_province_id")
@pytest.mark.asyncio
async def test_suburb_search(evds_mock, tester: AppTester):
tester.setup_state("state_province_id")
await tester.user_input("9")
tester.assert_num_messages(1)
tester.assert_state("state_suburb_search")
tester.assert_answer("state_province_id", "western cape")
@pytest.mark.asyncio
async def test_suburb(evds_mock, tester: AppTester):
tester.setup_state("state_suburb_search")
tester.setup_answer("state_province_id", "western cape")
await tester.user_input("tableview")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please REPLY with a NUMBER to confirm your location:",
"",
"1. Table View, Blouberg",
"2. Other",
]
)
)
tester.assert_state("state_suburb")
@pytest.mark.asyncio
async def test_province_no_results(evds_mock, tester: AppTester):
tester.setup_state("state_suburb_search")
tester.setup_answer("state_province_id", "western cape")
await tester.user_input("invalid")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"⚠️ Your suburb could not be found. Please try again by selecting your "
"province",
"",
"Select your province",
"1. Eastern Cape",
"2. Free State",
"3. Gauteng",
"4. Kwazulu-natal",
"5. Limpopo",
"6. Mpumalanga",
"7. North West",
"8. Northern Cape",
"9. Western Cape",
]
)
)
tester.assert_state("state_province_no_results")
@pytest.mark.asyncio
async def test_suburb_search_no_results(evds_mock, tester: AppTester):
tester.setup_state("state_province_no_results")
await tester.user_input("western cape")
tester.assert_num_messages(1)
tester.assert_state("state_suburb_search")
tester.assert_answer("state_province_id", "western cape")
@pytest.mark.asyncio
async def test_municipality(evds_mock, tester: AppTester):
tester.setup_state("state_suburb_search")
tester.setup_answer("state_province_id", "eastern cape")
await tester.user_input("mandela")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please confirm the MUNICIPALITY for the suburb you have given:",
"",
"1. Buffalo City",
"2. Enoch Mgijima",
"3. Great Kei",
"4. King Sabata Dalindyebo",
"5. Nelson Mandela Bay",
"6. Raymond Mhlaba",
"7. Umzimvubu",
"8. Mbizana",
"9. Mnquma",
"10. Other",
]
)
)
tester.assert_state("state_municipality")
@pytest.mark.asyncio
async def test_municipality_other(evds_mock, tester: AppTester):
tester.setup_state("state_municipality")
tester.setup_answer("state_province_id", "eastern cape")
tester.setup_answer("state_suburb_search", "mandela")
await tester.user_input("other")
tester.assert_state("state_suburb_search")
@pytest.mark.asyncio
async def test_municipality_plumstead(evds_mock, tester: AppTester):
tester.setup_state("state_suburb_search")
tester.setup_answer("state_province_id", "western cape")
await tester.user_input("plumstead")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please REPLY with a NUMBER to confirm your location:",
"",
"1. Plumstead, Cape Town",
"2. Other",
]
)
)
tester.assert_state("state_suburb")
@pytest.mark.asyncio
async def test_suburb_with_municipality(evds_mock, tester: AppTester):
tester.setup_state("state_municipality")
tester.setup_answer("state_province_id", "eastern cape")
tester.setup_answer("state_suburb_search", "mandela")
await tester.user_input("Buffalo City")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please REPLY with a NUMBER to confirm your location:",
"",
"1. Mandela Park, Mandela Park",
"2. Other",
]
)
)
tester.assert_state("state_suburb")
@pytest.mark.asyncio
async def test_suburb_error(evds_mock, tester: AppTester):
tester.setup_state("state_suburb")
tester.setup_answer("state_province_id", "western cape")
tester.setup_answer("state_suburb_search", "tableview")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_suburb")
@pytest.mark.asyncio
async def test_suburb_other(evds_mock, tester: AppTester):
tester.setup_state("state_suburb")
tester.setup_answer("state_province_id", "western cape")
tester.setup_answer("state_suburb_search", "tableview")
await tester.user_input("other")
tester.assert_num_messages(1)
tester.assert_state("state_province_id")
@pytest.mark.asyncio
async def test_suburb_value(evds_mock, tester: AppTester):
tester.setup_state("state_suburb")
tester.setup_answer("state_province_id", "western cape")
tester.setup_answer("state_suburb_search", "tableview")
await tester.user_input("1")
tester.assert_num_messages(1)
tester.assert_state("state_self_registration")
@pytest.mark.asyncio
async def test_self_registration(evds_mock, tester: AppTester):
tester.setup_state("state_suburb")
tester.setup_answer("state_province_id", "western cape")
tester.setup_answer("state_suburb_search", "tableview")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"We will use your mobile phone number to send you notifications and "
"updates via WhatsApp and/or SMS about getting vaccinated.",
"",
"Can we use 082 000 1001?",
"1. Yes",
"2. No",
]
)
)
tester.assert_state("state_self_registration")
@pytest.mark.asyncio
async def test_self_registration_invalid(tester: AppTester):
tester.setup_state("state_self_registration")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_self_registration")
@pytest.mark.asyncio
async def test_phone_number(tester: AppTester):
tester.setup_state("state_self_registration")
await tester.user_input("no")
tester.assert_num_messages(1)
tester.assert_state("state_phone_number")
@pytest.mark.asyncio
async def test_phone_number_invalid(tester: AppTester):
tester.setup_state("state_phone_number")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_phone_number")
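The phone-number tests above, together with the success-payload assertions later in this file, imply that a locally formatted number like "082 000 1001" is normalised to the MSISDN "27820001001". A minimal sketch of that normalisation (the helper name and the pure-digit approach are assumptions for illustration, not the application's actual implementation):

```python
def normalise_msisdn(raw, country_code="27"):
    # Hypothetical sketch: keep only digits, then swap a leading national
    # "0" for the country code, mirroring what the expected payloads imply.
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("0"):
        digits = country_code + digits[1:]
    return digits

print(normalise_msisdn("082 000 1001"))  # → 27820001001
```

A number already in international form (e.g. "27820001001") passes through unchanged, which matches the international-phone-number test later in this file.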
@pytest.mark.asyncio
async def test_vaccination_time(tester: AppTester):
tester.setup_state("state_medical_aid_number")
await tester.user_input("A1234567890")
tester.assert_num_messages(1)
tester.assert_state("state_vaccination_time")
@pytest.mark.asyncio
async def test_vaccination_time_invalid(tester: AppTester):
tester.setup_state("state_vaccination_time")
await tester.user_input("invalid")
tester.assert_num_messages(1)
tester.assert_state("state_vaccination_time")
@pytest.mark.asyncio
async def test_medical_aid_search(tester: AppTester):
tester.setup_state("state_medical_aid")
await tester.user_input("yes")
tester.assert_num_messages(1)
tester.assert_state("state_medical_aid_search")
@pytest.mark.asyncio
async def test_medical_aid_list_1(evds_mock, tester: AppTester):
tester.setup_state("state_medical_aid_search")
await tester.user_input("discovery")
tester.assert_state("state_medical_aid_list")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please confirm your Medical Aid Provider. REPLY with a NUMBER from "
"the list below:",
"",
"1. Discovery Health Medical Scheme",
"2. Aeci Medical Aid Society",
"3. BMW Employees Medical Aid Society",
"4. None of these",
]
)
)
@pytest.mark.asyncio
async def test_medical_aid_list_2(evds_mock, tester: AppTester):
tester.setup_state("state_medical_aid_search")
await tester.user_input("tsogo sun")
tester.assert_state("state_medical_aid_list")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please confirm your Medical Aid Provider. REPLY with a NUMBER from "
"the list below:",
"",
"1. Tsogo Sun Group Medical Scheme",
"2. Golden Arrows Employees Medical Benefit Fund",
"3. Engen Medical Benefit Fund",
"4. None of these",
]
)
)
@pytest.mark.asyncio
async def test_medical_aid_list_3(evds_mock, tester: AppTester):
tester.setup_state("state_medical_aid_search")
await tester.user_input("de beers")
tester.assert_state("state_medical_aid_list")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Please confirm your Medical Aid Provider. REPLY with a NUMBER from "
"the list below:",
"",
"1. De Beers Benefit Society",
"2. BMW Employees Medical Aid Society",
"3. Government Employees Medical Scheme (GEMS)",
"4. None of these",
]
)
)
@pytest.mark.asyncio
async def test_medical_aid_list_other(evds_mock, tester: AppTester):
tester.setup_state("state_medical_aid_list")
tester.setup_answer("state_medical_aid_search", "discovery")
await tester.user_input("4")
tester.assert_state("state_medical_aid")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_medical_aid_number(evds_mock, tester: AppTester):
tester.setup_state("state_medical_aid_list")
tester.setup_answer("state_medical_aid_search", "discovery")
await tester.user_input("1")
tester.assert_state("state_medical_aid_number")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_medical_aid(tester: AppTester):
tester.setup_state("state_email_address")
await tester.user_input("test@example.org")
tester.assert_state("state_medical_aid")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_medical_aid_invalid(tester: AppTester):
tester.setup_state("state_medical_aid")
await tester.user_input("invalid")
tester.assert_state("state_medical_aid")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_email_address(tester: AppTester):
tester.setup_state("state_self_registration")
await tester.user_input("yes")
tester.assert_state("state_email_address")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_email_skip(tester: AppTester):
tester.setup_state("state_email_address")
await tester.user_input("skip")
tester.assert_state("state_medical_aid")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_email_invalid(tester: AppTester):
tester.setup_state("state_email_address")
await tester.user_input("invalid@")
tester.assert_state("state_email_address")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_email_invalid_2(tester: AppTester):
tester.setup_state("state_email_address")
await tester.user_input("invalid")
tester.assert_state("state_email_address")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_terms_and_conditions(tester: AppTester):
tester.setup_state("state_age_gate")
await tester.user_input("1")
tester.assert_state("state_terms_and_conditions")
tester.assert_num_messages(2)
assert "document" in tester.application.messages[0].helper_metadata
@pytest.mark.asyncio
async def test_terms_and_conditions_invalid(tester: AppTester):
tester.setup_state("state_terms_and_conditions")
await tester.user_input("invalid")
tester.assert_state("state_terms_and_conditions")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_terms_and_conditions_summary(tester: AppTester):
tester.setup_state("state_terms_and_conditions")
await tester.user_input("read summary")
tester.assert_state("state_terms_and_conditions_summary")
tester.assert_num_messages(1)
@pytest.mark.asyncio
async def test_no_terms(tester: AppTester):
tester.setup_state("state_terms_and_conditions_summary")
await tester.user_input("no")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Thank you. If you change your mind, type *REGISTER* to restart your "
"registration session",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
@pytest.mark.asyncio
async def test_state_success(evds_mock, eventstore_mock, tester: AppTester):
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", " test \nsurname$")
tester.setup_answer("state_first_name", " test first %name")
tester.setup_answer("state_identification_type", "rsa_id")
tester.setup_answer("state_identification_number", " 6001 010001081 ")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
[requests] = evds_mock.app.requests
assert requests.json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"iDNumber": "6001010001081",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
[requests] = eventstore_mock.app.requests
assert requests.json == {
"msisdn": "+27820001001",
"source": "WhatsApp 27820001002",
"gender": "Other",
"first_name": "test first name",
"last_name": "test surname",
"date_of_birth": "1960-01-01",
"preferred_time": "morning",
"preferred_date": "weekday",
"preferred_location_id": "d114778e-c590-4a08-894e-0ddaefc5759e",
"preferred_location_name": "Diep River",
"id_number": "6001010001081",
"data": {},
}
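The test above feeds a padded RSA ID (" 6001 010001081 ") and expects the normalised "6001010001081" in both outgoing payloads. A hedged sketch of that normalisation, plus the standard Luhn checksum that South African ID numbers carry (the helper names are hypothetical; only the digit-stripping behaviour is evidenced by the assertions above):

```python
def normalise_rsa_id(raw):
    # Strip everything except digits, as the expected payloads imply.
    return "".join(ch for ch in raw if ch.isdigit())

def luhn_valid(number):
    # Standard Luhn checksum; SA ID numbers use it for their check digit.
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

rsa_id = normalise_rsa_id(" 6001 010001081 ")
print(rsa_id, luhn_valid(rsa_id))  # → 6001010001081 True
```

Note the test fixture is internally consistent: the ID's first six digits (600101) encode the 1960-01-01 date of birth set up above.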
@pytest.mark.asyncio
async def test_state_success_international_phonenumber(
evds_mock, eventstore_mock, tester: AppTester
):
tester.setup_state("state_vaccination_time")
tester.setup_user_address("32470001001")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "rsa_id")
tester.setup_answer("state_identification_number", "6001010001081")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
[requests] = evds_mock.app.requests
assert requests.json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "32470001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"iDNumber": "6001010001081",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
[requests] = eventstore_mock.app.requests
assert requests.json == {
"msisdn": "+32470001001",
"source": "WhatsApp 27820001002",
"gender": "Other",
"first_name": "test first name",
"last_name": "test surname",
"date_of_birth": "1960-01-01",
"preferred_time": "morning",
"preferred_date": "weekday",
"preferred_location_id": "d114778e-c590-4a08-894e-0ddaefc5759e",
"preferred_location_name": "Diep River",
"id_number": "6001010001081",
"data": {},
}
@pytest.mark.asyncio
async def test_state_success_passport(evds_mock, eventstore_mock, tester: AppTester):
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "passport")
tester.setup_answer("state_identification_number", "A1234567890")
tester.setup_answer("state_passport_country", "south africa")
tester.setup_answer("state_passport_country_list", "ZA")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
[requests] = evds_mock.app.requests
assert requests.json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"passportNumber": "A1234567890",
"passportCountry": "ZA",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
[requests] = eventstore_mock.app.requests
assert requests.json == {
"msisdn": "+27820001001",
"source": "WhatsApp 27820001002",
"gender": "Other",
"first_name": "test first name",
"last_name": "test surname",
"date_of_birth": "1960-01-01",
"preferred_time": "morning",
"preferred_date": "weekday",
"preferred_location_id": "d114778e-c590-4a08-894e-0ddaefc5759e",
"preferred_location_name": "Diep River",
"passport_number": "A1234567890",
"passport_country": "ZA",
"data": {},
}
@pytest.mark.asyncio
async def test_state_success_asylum_seeker(
evds_mock, eventstore_mock, tester: AppTester
):
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "asylum_seeker")
tester.setup_answer("state_identification_number", "A1234567890")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
[requests] = evds_mock.app.requests
assert requests.json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"refugeeNumber": "A1234567890",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
[requests] = eventstore_mock.app.requests
assert requests.json == {
"msisdn": "+27820001001",
"source": "WhatsApp 27820001002",
"gender": "Other",
"first_name": "test first name",
"last_name": "test surname",
"date_of_birth": "1960-01-01",
"preferred_time": "morning",
"preferred_date": "weekday",
"preferred_location_id": "d114778e-c590-4a08-894e-0ddaefc5759e",
"preferred_location_name": "Diep River",
"asylum_seeker_number": "A1234567890",
"data": {},
}
@pytest.mark.asyncio
async def test_state_success_refugee(evds_mock, eventstore_mock, tester: AppTester):
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "refugee")
tester.setup_answer("state_identification_number", "A1234567890")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
[requests] = evds_mock.app.requests
assert requests.json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"refugeeNumber": "A1234567890",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
[requests] = eventstore_mock.app.requests
assert requests.json == {
"msisdn": "+27820001001",
"source": "WhatsApp 27820001002",
"gender": "Other",
"first_name": "test first name",
"last_name": "test surname",
"date_of_birth": "1960-01-01",
"preferred_time": "morning",
"preferred_date": "weekday",
"preferred_location_id": "d114778e-c590-4a08-894e-0ddaefc5759e",
"preferred_location_name": "Diep River",
"refugee_number": "A1234567890",
"data": {},
}
@pytest.mark.asyncio
async def test_state_success_temporary_failure(evds_mock, tester: AppTester):
evds_mock.app.errormax = 1
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "passport")
tester.setup_answer("state_identification_number", "A1234567890")
tester.setup_answer("state_passport_country", "south africa")
tester.setup_answer("state_passport_country_list", "ZA")
tester.setup_answer("state_medical_aid", "state_vaccination_time")
tester.setup_answer("state_email_address", "test@example.org")
await tester.user_input("1")
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"Congratulations! You successfully registered with the National "
"Department of Health to get a COVID-19 vaccine.",
"",
"Look out for messages from this number (060 012 3456) on WhatsApp OR "
"on SMS/email. We will update you with important information about "
"your appointment and what to expect.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
requests = evds_mock.app.requests
assert len(requests) == 2
assert requests[-1].json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"passportNumber": "A1234567890",
"passportCountry": "ZA",
"emailAddress": "test@example.org",
"medicalAidMember": False,
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
@pytest.mark.asyncio
async def test_state_error(evds_mock, tester: AppTester):
evds_mock.app.errormax = 3
tester.setup_state("state_vaccination_time")
tester.setup_answer("state_dob_year", "1960")
tester.setup_answer("state_dob_month", "1")
tester.setup_answer("state_dob_day", "1")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_suburb", "d114778e-c590-4a08-894e-0ddaefc5759e")
tester.setup_answer("state_province_id", "e32298eb-17b4-471e-8d9b-ba093c6afc7c")
tester.setup_answer("state_gender", "Other")
tester.setup_answer("state_surname", "test surname")
tester.setup_answer("state_first_name", "test first name")
tester.setup_answer("state_identification_type", "refugee")
tester.setup_answer("state_identification_number", "6001010001081")
tester.setup_answer("state_medical_aid", "state_medical_aid_search")
tester.setup_answer("state_medical_aid_search", "discovery")
tester.setup_answer(
"state_medical_aid_list", "971672ba-bb31-4fca-945a-7c530b8b5558"
)
tester.setup_answer("state_medical_aid_number", "M1234567890")
tester.setup_answer("state_vaccination_time", "weekday_morning")
tester.setup_answer("state_email_address", "SKIP")
await tester.user_input("1")
tester.assert_message(
"Something went wrong with your registration session. Your registration was "
"not able to be processed. Please try again later",
session=Message.SESSION_EVENT.CLOSE,
)
requests = evds_mock.app.requests
assert len(requests) == 3
assert requests[-1].json == {
"gender": "Other",
"surname": "test surname",
"firstName": "test first name",
"dateOfBirth": "1960-01-01",
"mobileNumber": "27820001001",
"preferredVaccineScheduleTimeOfDay": "morning",
"preferredVaccineScheduleTimeOfWeek": "weekday",
"preferredVaccineLocation": {
"value": "d114778e-c590-4a08-894e-0ddaefc5759e",
"text": "Diep River",
},
"termsAndConditionsAccepted": True,
"refugeeNumber": "6001010001081",
"medicalAidMember": True,
"medicalAidScheme": {
"text": "Discovery Health Medical Scheme",
"value": "971672ba-bb31-4fca-945a-7c530b8b5558",
},
"medicalAidSchemeNumber": "M1234567890",
"sourceId": "aeb8444d-cfa4-4c52-bfaf-eed1495124b7",
}
@pytest.mark.asyncio
async def test_timeout(tester: AppTester):
tester.setup_state("state_passport_country")
await tester.user_input(session=Message.SESSION_EVENT.CLOSE)
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"We haven’t heard from you in a while!",
"",
"The registration session has timed out due to inactivity. You will "
"need to start again. Just TYPE the word REGISTER.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
@pytest.mark.asyncio
async def test_throttle(tester: AppTester):
throttle = config.THROTTLE_PERCENTAGE
config.THROTTLE_PERCENTAGE = 100.0
await tester.user_input()
tester.assert_message(
"\n".join(
[
"*VACCINE REGISTRATION SECURE CHAT* 🔐",
"",
"⚠️ We are currently experiencing high volumes of registrations.",
"",
"Your registration is important! Please try again in 15 minutes.",
"",
"-----",
"📌 Reply *0* to return to the main *MENU*",
]
),
session=Message.SESSION_EVENT.CLOSE,
)
config.THROTTLE_PERCENTAGE = throttle
@pytest.mark.asyncio
async def test_exit_keywords(tester: AppTester):
tester.setup_state("state_terms_and_conditions_summary")
await tester.user_input("menu")
tester.assert_message("", session=Message.SESSION_EVENT.CLOSE)
assert tester.application.messages[0].helper_metadata["automation_handle"] is True
tester.assert_state(None)
assert tester.user.answers == {}
@pytest.mark.asyncio
async def test_uncaught_exception(tester: AppTester):
tester.setup_state("state_vaccination_time")
await tester.user_input("1")
tester.assert_message(
"Something went wrong. Please try again later.",
session=Message.SESSION_EVENT.CLOSE,
)
assert tester.user.answers == {}
tester.assert_state(None)
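Several success-path tests above submit names containing stray whitespace and symbols (" test \nsurname$", " test first %name") and assert that the payload carries the cleaned forms ("test surname", "test first name"). A hypothetical sketch of sanitisation consistent with those assertions (the function name and exact character class are assumptions, not the application's code):

```python
import re

def clean_name(raw):
    # Drop characters outside letters and whitespace, then collapse runs of
    # whitespace (including newlines) to single spaces and trim the ends.
    letters_only = re.sub(r"[^a-zA-Z\s]", "", raw)
    return re.sub(r"\s+", " ", letters_only).strip()

print(clean_name(" test \nsurname$"))   # → test surname
print(clean_name(" test first %name"))  # → test first name
```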
from csv import QUOTE_ALL
from csv import writer
from logging import DEBUG
from logging import getLogger
from os.path import basename
from os.path import normpath
from os.path import sep
from re import sub
from sqlite_dissect.constants import ILLEGAL_XML_CHARACTER_PATTERN
from sqlite_dissect.constants import LOGGER_NAME
from sqlite_dissect.constants import MASTER_SCHEMA_ROW_TYPE
from sqlite_dissect.constants import PAGE_TYPE
from sqlite_dissect.constants import UTF_8
from sqlite_dissect.exception import ExportError
from sqlite_dissect.file.database.utilities import aggregate_leaf_cells
"""
csv_export.py
This script holds the objects used for exporting results of the SQLite carving framework to csv files.
This script holds the following object(s):
VersionCsvExporter(object)
CommitCsvExporter(object)
"""
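The exporters in this module configure `csv.writer` with comma delimiters, double-quote quote characters, and `QUOTE_ALL`, so every field is quoted in the output. A small standalone illustration of that configuration (the buffer and row values here are made up for demonstration):

```python
import csv
import io

# Same writer configuration as the exporters below, against an in-memory
# buffer instead of a file handle.
buffer = io.StringIO()
csv_writer = csv.writer(buffer, delimiter=",", quotechar="\"", quoting=csv.QUOTE_ALL)
csv_writer.writerow(["sqlite_master", 1, None])
print(buffer.getvalue().strip())  # → "sqlite_master","1",""
```

With `QUOTE_ALL`, non-string values are stringified and quoted, and `None` becomes an empty (but still quoted) field.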
class VersionCsvExporter(object):
@staticmethod
def write_version(csv_file_name, export_directory, version, master_schema_entry_carved_records=None):
logger = getLogger(LOGGER_NAME)
if not master_schema_entry_carved_records:
master_schema_entry_carved_records = {}
for master_schema_entry in version.master_schema.master_schema_entries:
"""
Here we only care about the master schema entries that have a root page number since ones that either
do not have a root page number or have a root page number of 0 do not have correlating b-trees in the
SQLite file and are instead either trigger types, view types, or special cases of table types such as
virtual tables.
"""
if master_schema_entry.root_page_number:
                fixed_file_name = basename(normpath(csv_file_name))
                fixed_master_schema_name = sub(" ", "_", master_schema_entry.name)
                # Use a per-entry file name rather than reassigning csv_file_name,
                # otherwise later iterations would derive their names from a
                # previously generated output path instead of the original base name.
                entry_csv_file_name = export_directory + sep + fixed_file_name + "-" + fixed_master_schema_name + ".csv"
                logger.info("Writing CSV file: {}.".format(entry_csv_file_name))
                with open(entry_csv_file_name, "wb") as csv_file_handle:
csv_writer = writer(csv_file_handle, delimiter=',', quotechar="\"", quoting=QUOTE_ALL)
b_tree_root_page = version.get_b_tree_root_page(master_schema_entry.root_page_number)
"""
Retrieve the carved records for this particular master schema entry.
"""
carved_cells = []
if master_schema_entry.name in master_schema_entry_carved_records:
carved_cells = master_schema_entry_carved_records[master_schema_entry.name]
"""
Below we have to account for how the pages are stored.
For the table master schema entry row type:
1.) If the table is not a "without rowid" table, it will be stored on a table b-tree page with
row ids.
2.) If the table is a "without rowid" table, it will be stored on an index b-tree page with no
row ids.
For the index master schema entry row type:
1.) It will be stored on an index b-tree page with no row ids.
Different functions are created to write records for both table and index b-tree pages. Keep in
mind that a table master schema row type may be stored on an index b-tree page depending on whether it is
specified as a "without rowid" table. All index master schema row types are stored on index
b-tree pages.
"""
if master_schema_entry.row_type == MASTER_SCHEMA_ROW_TYPE.TABLE:
if not master_schema_entry.without_row_id:
VersionCsvExporter._write_b_tree_table_leaf_records(csv_writer, version,
master_schema_entry,
b_tree_root_page, carved_cells)
else:
VersionCsvExporter._write_b_tree_index_leaf_records(csv_writer, version,
master_schema_entry,
b_tree_root_page, carved_cells)
elif master_schema_entry.row_type == MASTER_SCHEMA_ROW_TYPE.INDEX:
VersionCsvExporter._write_b_tree_index_leaf_records(csv_writer, version, master_schema_entry,
b_tree_root_page, carved_cells)
else:
log_message = "Invalid master schema entry row type: {} found for csv export on master " \
"schema entry name: {} table name: {} sql: {}."
log_message = log_message.format(master_schema_entry.row_type, master_schema_entry.name,
master_schema_entry.table_name, master_schema_entry.sql)
logger.warning(log_message)
raise ExportError(log_message)
@staticmethod
def _write_b_tree_index_leaf_records(csv_writer, version, master_schema_entry, b_tree_root_page, carved_cells):
"""
This function will write the list of cells sent in to the csv file specified, including the metadata regarding
the file type, page type, and operation.
Note: The types of the data in the values can prove to be an issue here. We want to write the value out as
a string similarly as the text and csv outputs do for example even though it may contain invalid
characters. When data is sent into the openpyxl library to be written to the xml xlsx, if it is a
string, it is encoded into the default encoding and then checked for xml illegal characters that may
pose an issue when written to the xml. In order to properly check the values and write them accordingly
through the openpyxl library we address the following use cases for the value in order:
1.) If the value is None, we replace the value with the string "NULL". This might be replaced by
leaving it None but issues can be seen when carving cells where the value is None not because it
was NULL originally in the database, but because it was unable to be parsed out when it may have
actually had a value (when it was truncated). Distinction is needed between these two use cases.
2.) If the value is a bytearray (most likely originally a blob object) or a string value, we want to
write the value as a string. However, in order to do this for blob objects or strings that may
have a few bad characters in them from carving, we need to do our due diligence and make sure
there are no bad unicode characters and no xml illegal characters that may cause issues with
writing to the xlsx. In order to do this we do the following:
a.) We first convert the value to string if the affinity was not text, otherwise we decode
the value in the database text encoding. When we decode using the database text encoding,
we specify to "replace" characters it does not recognize in order to compensate for carved
rows.
b.) We then test encoding it to UTF-8.
i.) If the value successfully encodes as UTF-8 we set that as the value.
ii.) If the value throws an exception encoding, we have illegal unicode characters in the
string that need to be addressed. In order to escape these, we decode the string
as UTF-8 using the "replace" method to replace any illegal unicode characters
with '\ufffd' and set this back as the value after encoding again.
c.) After we have successfully set the value back to a UTF-8 compliant value, we need to check
the value for xml illegal characters. If any of these xml illegal characters are found,
they are replaced with a space. This behaviour may be different from how values are output
into text or csv since this is being written to xml and additional rules apply for certain
characters. This may need to be revisited to iron out any differences
between the xlsx output and text/csv output in reference to xml illegal characters.
d.) After all the illegal characters are removed, due to the way openpyxl determines data types
of particular cells, if a cell starts with "=", it is determined to be a formula and set as
that in the data type field for that cell. This causes issues when opening the file in excel.
Microsoft Excel recommends prefacing the string with a single quote character, however,
this only seems to be within Excel itself. You can specify the data type of the cell in
openpyxl, but not in the write-only mode that is being used here. In order to work around
this, we check if the first character of a string or bytearray is a "=" character and preface
that string with a space. There may be better ways to handle this such as not using the
write-only mode.
Note: In addition to the "=" character, the "-" character has similar issues in Excel.
However, openpyxl explicitly checks for the "=" character being the first character
and sets that cell to a formula, but does not handle the use case of a cell starting
with the "-" character, so this use case is ignored.
3.) If the value does not fall in one of the above use cases, we leave it as is and write it to the
xlsx without any modifications.
Note: It was noticed that blob objects are typically detected as isinstance of str here and strings are
bytearray objects. Why exactly blob objects are coming out as str objects needs to be
investigated.
Note: Comparison should be done on how other applications handle different database text encodings in
reference to their output.
Note: The decoding of the value in the database text encoding should only specify replace on a carved entry.
:param csv_writer:
:param version:
:param master_schema_entry:
:param b_tree_root_page:
:param carved_cells:
:return:
"""
logger = getLogger(LOGGER_NAME)
number_of_cells, cells = aggregate_leaf_cells(b_tree_root_page)
if logger.isEnabledFor(DEBUG):
master_schema_entry_string = "The {} b-tree page with {} row type and name: {} with sql: {} " \
"has {} intact rows:"
master_schema_entry_string = master_schema_entry_string.format(b_tree_root_page.page_type,
master_schema_entry.row_type,
master_schema_entry.name,
master_schema_entry.sql, number_of_cells)
logger.debug(master_schema_entry_string)
"""
Note: The index master schema entries are currently not fully parsed and therefore we do not have column
definitions in order to derive the column names from.
"""
column_headers = []
column_headers.extend(["File Source", "Version", "Page Version", "Cell Source", "Page Number", "Location",
"Carved", "Status", "File Offset"])
logger.debug("Column Headers: {}".format(" , ".join(column_headers)))
csv_writer.writerow(column_headers)
for cell in cells.values():
cell_record_column_values = []
for record_column in cell.payload.record_columns:
serial_type = record_column.serial_type
# Per the SQLite record format, odd serial types >= 13 denote text values (even types >= 12 are blobs).
text_affinity = serial_type >= 13 and serial_type % 2 == 1
value = record_column.value
if value is None:
pass
elif isinstance(value, (bytearray, str)):
value = value.decode(version.database_text_encoding, "replace") if text_affinity else str(value)
try:
value = value.encode(UTF_8)
except UnicodeDecodeError:
value = value.decode(UTF_8, "replace").encode(UTF_8)
value = ILLEGAL_XML_CHARACTER_PATTERN.sub(" ", value)
if value.startswith("="):
value = ' ' + value
value = str(value)
cell_record_column_values.append(value)
row = [version.file_type, cell.version_number, cell.page_version_number, cell.source, cell.page_number,
cell.location, False, "Complete", cell.file_offset]
row.extend(cell_record_column_values)
csv_writer.writerow(row)
if logger.isEnabledFor(DEBUG):
for cell in cells.values():
cell_record_column_values = [str(record_column.value) if record_column.value is not None else "NULL"
for record_column in cell.payload.record_columns]
log_message = "File source: {} version: {} page version: {} cell source: {} page: {} located: {} " \
"carved: {} status: {} at file offset: {}: "
log_message = log_message.format(version.file_type, cell.version_number, cell.page_version_number,
cell.source, cell.page_number, cell.location, False,
"Complete", cell.file_offset)
log_message += "(" + ", ".join(cell_record_column_values) + ")"
logger.debug(log_message)
VersionCsvExporter._write_b_tree_table_master_schema_carved_records(csv_writer, version, carved_cells, False)
@staticmethod
def _write_b_tree_table_leaf_records(csv_writer, version, master_schema_entry, b_tree_root_page, carved_cells):
"""
This function will write the list of cells sent in to the csv file specified, including the metadata regarding
the file type, page type, and operation.
Note: The types of the data in the values can prove to be an issue here. We want to write the value out as
a string similarly as the text and csv outputs do for example even though it may contain invalid
characters. When data is sent into the openpyxl library to be written to the xml xlsx, if it is a
string, it is encoded into the default encoding and then checked for xml illegal characters that may
pose an issue when written to the xml. In order to properly check the values and write them accordingly
through the openpyxl library we address the following use cases for the value in order:
1.) If the value is None, we replace the value with the string "NULL". This might be replaced by
leaving it None but issues can be seen when carving cells where the value is None not because it
was NULL originally in the database, but because it was unable to be parsed out when it may have
actually had a value (when it was truncated). Distinction is needed between these two use cases.
2.) If the value is a bytearray (most likely originally a blob object) or a string value, we want to
write the value as a string. However, in order to do this for blob objects or strings that may
have a few bad characters in them from carving, we need to do our due diligence and make sure
there are no bad unicode characters and no xml illegal characters that may cause issues with
writing to the xlsx. In order to do this we do the following:
a.) We first convert the value to string if the affinity was not text, otherwise we decode
the value in the database text encoding. When we decode using the database text encoding,
we specify to "replace" characters it does not recognize in order to compensate for carved
rows.
b.) We then test encoding it to UTF-8.
i.) If the value successfully encodes as UTF-8 we set that as the value.
ii.) If the value throws an exception encoding, we have illegal unicode characters in the
string that need to be addressed. In order to escape these, we decode the string
as UTF-8 using the "replace" method to replace any illegal unicode characters
with '\ufffd' and set this back as the value after encoding again.
c.) After we have successfully set the value back to a UTF-8 compliant value, we need to check
the value for xml illegal characters. If any of these xml illegal characters are found,
they are replaced with a space. This behaviour may be different from how values are output
into text or csv since this is being written to xml and additional rules apply for certain
characters. This may need to be revisited to iron out any differences
between the xlsx output and text/csv output in reference to xml illegal characters.
d.) After all the illegal characters are removed, due to the way openpyxl determines data types
of particular cells, if a cell starts with "=", it is determined to be a formula and set as
that in the data type field for that cell. This causes issues when opening the file in excel.
Microsoft Excel recommends prefacing the string with a single quote character, however,
this only seems to be within Excel itself. You can specify the data type of the cell in
openpyxl, but not in the write-only mode that is being used here. In order to work around
this, we check if the first character of a string or bytearray is a "=" character and preface
that string with a space. There may be better ways to handle this such as not using the
write-only mode.
Note: In addition to the "=" character, the "-" character has similar issues in Excel.
However, openpyxl explicitly checks for the "=" character being the first character
and sets that cell to a formula, but does not handle the use case of a cell starting
with the "-" character, so this use case is ignored.
3.) If the value does not fall in one of the above use cases, we leave it as is and write it to the
xlsx without any modifications.
Note: It was noticed that blob objects are typically detected as isinstance of str here and strings are
bytearray objects. Why exactly blob objects are coming out as str objects needs to be
investigated.
Note: Comparison should be done on how other applications handle different database text encodings in
reference to their output.
Note: The decoding of the value in the database text encoding should only specify replace on a carved entry.
:param csv_writer:
:param version:
:param master_schema_entry:
:param b_tree_root_page:
:param carved_cells:
:return:
"""
logger = getLogger(LOGGER_NAME)
number_of_cells, cells = aggregate_leaf_cells(b_tree_root_page)
if logger.isEnabledFor(DEBUG):
master_schema_entry_string = "The {} b-tree page with {} row type and name: {} with sql: {} " \
"has {} intact rows:"
master_schema_entry_string = master_schema_entry_string.format(b_tree_root_page.page_type,
master_schema_entry.row_type,
master_schema_entry.name,
master_schema_entry.sql, number_of_cells)
logger.debug(master_schema_entry_string)
column_headers = []
column_headers.extend(["File Source", "Version", "Page Version", "Cell Source", "Page Number", "Location",
"Carved", "Status", "File Offset", "Row ID"])
column_headers.extend([column_definition.column_name
for column_definition in master_schema_entry.column_definitions])
logger.debug("Column Headers: {}".format(" , ".join(column_headers)))
csv_writer.writerow(column_headers)
sorted_cells = sorted(cells.values(), key=lambda b_tree_cell: b_tree_cell.row_id)
for cell in sorted_cells:
cell_record_column_values = []
for record_column in cell.payload.record_columns:
serial_type = record_column.serial_type
# Per the SQLite record format, odd serial types >= 13 denote text values (even types >= 12 are blobs).
text_affinity = serial_type >= 13 and serial_type % 2 == 1
value = record_column.value
if value is None:
pass
elif isinstance(value, (bytearray, str)):
value = value.decode(version.database_text_encoding, "replace") if text_affinity else str(value)
try:
value = value.encode(UTF_8)
except UnicodeDecodeError:
value = value.decode(UTF_8, "replace").encode(UTF_8)
value = ILLEGAL_XML_CHARACTER_PATTERN.sub(" ", value)
if value.startswith("="):
value = ' ' + value
value = str(value)
cell_record_column_values.append(value)
row = [version.file_type, cell.version_number, cell.page_version_number, cell.source, cell.page_number,
cell.location, False, "Complete", cell.file_offset, cell.row_id]
row.extend(cell_record_column_values)
csv_writer.writerow(row)
if logger.isEnabledFor(DEBUG):
for cell in sorted_cells:
cell_record_column_values = [str(record_column.value) if record_column.value is not None else "NULL"
for record_column in cell.payload.record_columns]
log_message = "File source: {} version: {} page version: {} cell source: {} page: {} location: {} " \
"carved: {} status: {} at file offset: {} for row id: {}: "
log_message = log_message.format(version.file_type, cell.version_number, cell.page_version_number,
cell.source, cell.page_number, cell.location, False, "Complete",
cell.file_offset, cell.row_id)
log_message += "(" + ", ".join(cell_record_column_values) + ")"
logger.debug(log_message)
VersionCsvExporter._write_b_tree_table_master_schema_carved_records(csv_writer, version, carved_cells, True)
@staticmethod
def _write_b_tree_table_master_schema_carved_records(csv_writer, version, carved_cells, has_row_ids):
logger = getLogger(LOGGER_NAME)
for carved_cell in carved_cells:
cell_record_column_values = []
for record_column in carved_cell.payload.record_columns:
serial_type = record_column.serial_type
# Per the SQLite record format, odd serial types >= 13 denote text values (even types >= 12 are blobs).
text_affinity = serial_type >= 13 and serial_type % 2 == 1
value = record_column.value
if value is None:
pass
elif isinstance(value, (bytearray, str)):
value = value.decode(version.database_text_encoding, "replace") if text_affinity else str(value)
try:
value = value.encode(UTF_8)
except UnicodeDecodeError:
value = value.decode(UTF_8, "replace").encode(UTF_8)
value = ILLEGAL_XML_CHARACTER_PATTERN.sub(" ", value)
if value.startswith("="):
value = ' ' + value
value = str(value)
cell_record_column_values.append(value)
row = [version.file_type, carved_cell.version_number, carved_cell.page_version_number,
carved_cell.source, carved_cell.page_number, carved_cell.location, True, "Unknown",
carved_cell.file_offset]
if has_row_ids:
row.append("")
row.extend(cell_record_column_values)
csv_writer.writerow(row)
if logger.isEnabledFor(DEBUG):
for carved_cell in carved_cells:
cell_record_column_values = [str(record_column.value) if record_column.value is not None else "NULL"
for record_column in carved_cell.payload.record_columns]
log_message = "File source: {} version: {} page version: {} cell source: {} page: {} location: {} " \
"carved: {} status: {} at file offset: {}"
log_message = log_message.format(version.file_type, carved_cell.version_number,
carved_cell.page_version_number, carved_cell.source,
carved_cell.page_number, carved_cell.location, True,
"Unknown", carved_cell.file_offset)
if has_row_ids:
log_message += " for row id: {}:".format("")
log_message += "(" + ", ".join(cell_record_column_values) + ")"
logger.debug(log_message)
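The value-sanitization pipeline repeated in the methods above (decode in the database text encoding, scrub xml illegal characters, and escape a leading "=" so spreadsheet software does not treat the cell as a formula) can be sketched as a standalone Python 3 helper. `ILLEGAL_XML_PATTERN` and `sanitize_value` below are hypothetical stand-ins for illustration only, not part of the framework:

```python
import re

# Hypothetical stand-in for the framework's ILLEGAL_XML_CHARACTER_PATTERN constant.
ILLEGAL_XML_PATTERN = re.compile(u"[\x00-\x08\x0b\x0c\x0e-\x1f]")


def sanitize_value(value, text_affinity, database_text_encoding="utf-8"):
    """Decode, scrub, and formula-escape a record column value for export."""
    if value is None:
        return None
    if isinstance(value, (bytes, bytearray)):
        # "replace" compensates for bad characters introduced by carved rows.
        value = bytes(value).decode(database_text_encoding if text_affinity else "utf-8", "replace")
    # Replace characters that are illegal in xml with a space.
    value = ILLEGAL_XML_PATTERN.sub(" ", value)
    # Prefix a space so spreadsheet software does not interpret the cell as a formula.
    if value.startswith("="):
        value = " " + value
    return value
```

Note this sketch targets Python 3 bytes semantics, whereas the surrounding code was written against Python 2 str/bytearray behavior.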
class CommitCsvExporter(object):
def __init__(self, export_directory, file_name_prefix=""):
self._export_directory = export_directory
self._file_name_prefix = file_name_prefix
self._csv_file_names = {}
@property
def csv_file_names(self):
return self._csv_file_names
def write_commit(self, master_schema_entry, commit):
"""
Note: This function only writes the commit record if the commit record was updated.
:param master_schema_entry:
:param commit:
"""
if not commit.updated:
return
logger = getLogger(LOGGER_NAME)
mode = "ab"
csv_file_name = self._csv_file_names.get(commit.name)
write_headers = False
if not csv_file_name:
mode = "wb"
commit_name = sub(" ", "_", commit.name)
csv_file_name = os.path.join(self._export_directory, (self._file_name_prefix + "-" + commit_name + ".csv"))
self._csv_file_names[commit.name] = csv_file_name
write_headers = True
with open(csv_file_name, mode) as csv_file_handle:
csv_writer = writer(csv_file_handle, delimiter=',', quotechar="\"", quoting=QUOTE_ALL)
"""
Below we have to account for how the pages are stored.
For the table master schema entry row type:
1.) If the table is not a "without rowid" table, it will be stored on a table b-tree page with
row ids.
2.) If the table is a "without rowid" table, it will be stored on an index b-tree page with no
row ids.
For the index master schema entry row type:
1.) It will be stored on an index b-tree page with no row ids.
The commit object handles this by having a page type to make this distinction easier. Therefore, we only
need to check on the page type here.
"""
column_headers = []
if write_headers:
column_headers.extend(["File Source", "Version", "Page Version", "Cell Source", "Page Number",
"Location", "Operation", "File Offset"])
if commit.page_type == PAGE_TYPE.B_TREE_INDEX_LEAF:
"""
Note: The index master schema entries are currently not fully parsed and therefore we do not have
column definitions in order to derive the column names from.
"""
csv_writer.writerow(column_headers)
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, commit.added_cells.values(), "Added")
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, commit.updated_cells.values(), "Updated")
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, commit.deleted_cells.values(), "Deleted")
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, commit.carved_cells.values(), "Carved")
elif commit.page_type == PAGE_TYPE.B_TREE_TABLE_LEAF or commit.page_type == PAGE_TYPE.B_TREE_TABLE_INTERIOR:
if write_headers:
column_headers.append("Row ID")
column_headers.extend([column_definition.column_name
for column_definition in master_schema_entry.column_definitions])
csv_writer.writerow(column_headers)
# Sort the added, updated, and deleted cells by the row id
sorted_added_cells = sorted(commit.added_cells.values(), key=lambda b_tree_cell: b_tree_cell.row_id)
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, sorted_added_cells, "Added")
sorted_updated_cells = sorted(commit.updated_cells.values(), key=lambda b_tree_cell: b_tree_cell.row_id)
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, sorted_updated_cells, "Updated")
sorted_deleted_cells = sorted(commit.deleted_cells.values(), key=lambda b_tree_cell: b_tree_cell.row_id)
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, sorted_deleted_cells, "Deleted")
# We will not sort the carved cells since row ids are not deterministic even if parsed
CommitCsvExporter._write_cells(csv_writer, commit.file_type, commit.database_text_encoding,
commit.page_type, commit.carved_cells.values(), "Carved")
else:
log_message = "Invalid commit page type: {} found for csv export on master " \
"schema entry name: {} while writing to csv file name: {}."
log_message = log_message.format(commit.page_type, commit.name, csv_file_name)
logger.warning(log_message)
raise ExportError(log_message)
@staticmethod
def _write_cells(csv_writer, file_type, database_text_encoding, page_type, cells, operation):
"""
This function will write the list of cells sent in to the csv file specified, including the metadata regarding
the file type, page type, and operation.
Note: The types of the data in the values can prove to be an issue here. We want to write the value out as
a string similarly as the text and csv outputs do for example even though it may contain invalid
characters. When data is sent into the openpyxl library to be written to the xml xlsx, if it is a
string, it is encoded into the default encoding and then checked for xml illegal characters that may
pose an issue when written to the xml. In order to properly check the values and write them accordingly
through the openpyxl library we address the following use cases for the value in order:
1.) If the value is a bytearray (most likely originally a blob object) or a string value, we want to
write the value as a string. However, in order to do this for blob objects or strings that may
have a few bad characters in them from carving, we need to do our due diligence and make sure
there are no bad unicode characters and no xml illegal characters that may cause issues with
writing to the xlsx. In order to do this we do the following:
a.) We first convert the value to string if the affinity was not text, otherwise we decode
the value in the database text encoding. When we decode using the database text encoding,
we specify to "replace" characters it does not recognize in order to compensate for carved
rows.
b.) We then test encoding it to UTF-8.
i.) If the value successfully encodes as UTF-8 we set that as the value.
ii.) If the value throws an exception encoding, we have illegal unicode characters in the
string that need to be addressed. In order to escape these, we decode the string
as UTF-8 using the "replace" method to replace any illegal unicode characters
with '\ufffd' and set this back as the value after encoding again.
c.) After we have successfully set the value back to a UTF-8 compliant value, we need to check
the value for xml illegal characters. If any of these xml illegal characters are found,
they are replaced with a space. This behaviour may be different from how values are output
into text or csv since this is being written to xml and additional rules apply for certain
characters. This may need to be revisited to iron out any differences
between the xlsx output and text/csv output in reference to xml illegal characters.
d.) After all the illegal characters are removed, due to the way openpyxl determines data types
of particular cells, if a cell starts with "=", it is determined to be a formula and set as
that in the data type field for that cell. This causes issues when opening the file in excel.
Microsoft Excel recommends prefacing the string with a single quote character, however,
this only seems to be within Excel itself. You can specify the data type of the cell in
openpyxl, but not in the write-only mode that is being used here. In order to work around
this, we check if the first character of a string or bytearray is a "=" character and preface
that string with a space. There may be better ways to handle this such as not using the
write-only mode.
Note: In addition to the "=" character, the "-" character has similar issues in Excel.
However, openpyxl explicitly checks for the "=" character being the first character
and sets that cell to a formula, but does not handle the use case of a cell starting
with the "-" character, so this use case is ignored.
2.) If the value does not fall in one of the above use cases, we leave it as is and write it to the
xlsx without any modifications.
Note: If the value is None, we leave it as None. We used to update the None value with the string "NULL"
since issues could be seen when carving cells where the value is None not because it was NULL originally
in the database, but because it was unable to be parsed out when it may have actually had a value (when
it was truncated). Distinction is needed between these two use cases.
Note: It was noticed that blob objects are typically detected as isinstance of str here and strings are
bytearray objects. Why exactly blob objects are coming out as str objects needs to be
investigated.
Note: Comparison should be done on how other applications handle different database text encodings in
reference to their output.
Note: The decoding of the value in the database text encoding should only specify replace on a carved entry.
:param csv_writer:
:param file_type:
:param database_text_encoding:
:param page_type:
:param cells:
:param operation:
:return:
"""
for cell in cells:
cell_record_column_values = []
for record_column in cell.payload.record_columns:
serial_type = record_column.serial_type
# Per the SQLite record format, odd serial types >= 13 denote text values (even types >= 12 are blobs).
text_affinity = serial_type >= 13 and serial_type % 2 == 1
value = record_column.value
if value is None:
pass
elif isinstance(value, (bytearray, str)):
value = value.decode(database_text_encoding, "replace") if text_affinity else str(value)
try:
value = value.encode(UTF_8)
except UnicodeDecodeError:
value = value.decode(UTF_8, "replace").encode(UTF_8)
value = ILLEGAL_XML_CHARACTER_PATTERN.sub(" ", value)
if value.startswith("="):
value = ' ' + value
value = str(value)
cell_record_column_values.append(value)
row = [file_type, cell.version_number, cell.page_version_number, cell.source, cell.page_number,
cell.location, operation, cell.file_offset]
if page_type == PAGE_TYPE.B_TREE_TABLE_LEAF:
row.append(cell.row_id)
row.extend(cell_record_column_values)
csv_writer.writerow(row)
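As a quick illustration of the csv conventions both exporters use (a ',' delimiter, a '"' quote character, and QUOTE_ALL so every field is quoted), here is a minimal Python 3 sketch with made-up row values mirroring the exporters' metadata layout; the original code opens files in binary mode per Python 2, while this sketch writes to an in-memory text buffer:

```python
import csv
import io

# QUOTE_ALL quotes every field, including numeric ones, after they are
# converted to strings by the csv writer.
buffer = io.StringIO()
csv_writer = csv.writer(buffer, delimiter=",", quotechar="\"", quoting=csv.QUOTE_ALL)
# Hypothetical metadata values in the exporters' column order.
csv_writer.writerow(["database", 1, 1, "B_TREE_TABLE_LEAF", 2, "Added", 1024])
print(buffer.getvalue().strip())
# -> "database","1","1","B_TREE_TABLE_LEAF","2","Added","1024"
```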