hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
07f7074cdeb71dc8f8e68c52c48c0098165e24e9 | 19,768 | py | Python | tests/wallet/did_wallet/test_did.py | Pipscoin-Network/pipscoin-blockchain | f400d26956881eb319786230506bb441f76f64d9 | [
"Apache-2.0"
] | 8 | 2021-08-29T15:13:45.000Z | 2022-03-30T17:23:04.000Z | tests/wallet/did_wallet/test_did.py | Pipscoin-Network/pipscoin-blockchain | f400d26956881eb319786230506bb441f76f64d9 | [
"Apache-2.0"
] | 28 | 2021-08-29T02:08:07.000Z | 2022-03-24T23:32:00.000Z | tests/wallet/did_wallet/test_did.py | Pipscoin-Network/pipscoin-blockchain | f400d26956881eb319786230506bb441f76f64d9 | [
"Apache-2.0"
] | 4 | 2021-08-29T12:59:05.000Z | 2022-03-15T08:38:29.000Z | import asyncio
import pytest
from pipscoin.simulator.simulator_protocol import FarmNewBlockProtocol
from pipscoin.types.peer_info import PeerInfo
from pipscoin.util.ints import uint16, uint32, uint64
from tests.setup_nodes import setup_simulators_and_wallets
from pipscoin.wallet.did_wallet.did_wallet import DIDWallet
from pipscoin.types.blockchain_format.program import Program
from blspy import AugSchemeMPL
from pipscoin.types.spend_bundle import SpendBundle
from pipscoin.consensus.block_rewards import calculate_pool_reward, calculate_base_farmer_reward
from tests.time_out_assert import time_out_assert


@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop


class TestDIDWallet:
    @pytest.fixture(scope="function")
    async def wallet_node(self):
        async for _ in setup_simulators_and_wallets(1, 1, {}):
            yield _

    @pytest.fixture(scope="function")
    async def two_wallet_nodes(self):
        async for _ in setup_simulators_and_wallets(1, 2, {}):
            yield _

    @pytest.fixture(scope="function")
    async def three_wallet_nodes(self):
        async for _ in setup_simulators_and_wallets(1, 3, {}):
            yield _

    @pytest.fixture(scope="function")
    async def two_wallet_nodes_five_freeze(self):
        async for _ in setup_simulators_and_wallets(1, 2, {}):
            yield _

    @pytest.fixture(scope="function")
    async def three_sim_two_wallets(self):
        async for _ in setup_simulators_and_wallets(3, 2, {}):
            yield _

    @pytest.mark.asyncio
    async def test_creation_from_backup_file(self, three_wallet_nodes):
        num_blocks = 5
        full_nodes, wallets = three_wallet_nodes
        full_node_api = full_nodes[0]
        full_node_server = full_node_api.server
        wallet_node_0, server_0 = wallets[0]
        wallet_node_1, server_1 = wallets[1]
        wallet_node_2, server_2 = wallets[2]
        wallet_0 = wallet_node_0.wallet_state_manager.main_wallet
        wallet_1 = wallet_node_1.wallet_state_manager.main_wallet
        wallet_2 = wallet_node_2.wallet_state_manager.main_wallet
        ph = await wallet_0.get_new_puzzlehash()
        ph1 = await wallet_1.get_new_puzzlehash()
        ph2 = await wallet_2.get_new_puzzlehash()
        await server_0.start_client(PeerInfo("localhost", uint16(full_node_server._port)), None)
        await server_1.start_client(PeerInfo("localhost", uint16(full_node_server._port)), None)
        await server_2.start_client(PeerInfo("localhost", uint16(full_node_server._port)), None)
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        funds = sum(
            [
                calculate_pool_reward(uint32(i)) + calculate_base_farmer_reward(uint32(i))
                for i in range(1, num_blocks - 1)
            ]
        )
        await time_out_assert(10, wallet_0.get_unconfirmed_balance, funds)
        await time_out_assert(10, wallet_0.get_confirmed_balance, funds)
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph1))
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        # Wallet1 sets up DIDWallet1 without any backup set
        async with wallet_node_0.wallet_state_manager.lock:
            did_wallet_0: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node_0.wallet_state_manager, wallet_0, uint64(101)
            )
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_0.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet_0.get_unconfirmed_balance, 101)
        await time_out_assert(15, did_wallet_0.get_pending_change_balance, 0)
        # Wallet1 sets up DIDWallet_1 with DIDWallet_0 as backup
        backup_ids = [bytes.fromhex(did_wallet_0.get_my_DID())]
        async with wallet_node_1.wallet_state_manager.lock:
            did_wallet_1: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node_1.wallet_state_manager, wallet_1, uint64(201), backup_ids
            )
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_1.get_confirmed_balance, 201)
        await time_out_assert(15, did_wallet_1.get_unconfirmed_balance, 201)
        await time_out_assert(15, did_wallet_1.get_pending_change_balance, 0)
        filename = "test.backup"
        did_wallet_1.create_backup(filename)
        # Wallet2 recovers DIDWallet2 to a new set of keys
        async with wallet_node_2.wallet_state_manager.lock:
            did_wallet_2 = await DIDWallet.create_new_did_wallet_from_recovery(
                wallet_node_2.wallet_state_manager, wallet_2, filename
            )
        coins = await did_wallet_1.select_coins(1)
        coin = coins.copy().pop()
        assert did_wallet_2.did_info.temp_coin == coin
        newpuzhash = await did_wallet_2.get_new_inner_hash()
        pubkey = bytes(
            (await did_wallet_2.wallet_state_manager.get_unused_derivation_record(did_wallet_2.wallet_info.id)).pubkey
        )
        message_spend_bundle = await did_wallet_0.create_attestment(
            did_wallet_2.did_info.temp_coin.name(), newpuzhash, pubkey, "test.attest"
        )
        print(f"pubkey: {pubkey}")
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        (
            test_info_list,
            test_message_spend_bundle,
        ) = await did_wallet_2.load_attest_files_for_recovery_spend(["test.attest"])
        assert message_spend_bundle == test_message_spend_bundle
        await did_wallet_2.recovery_spend(
            did_wallet_2.did_info.temp_coin,
            newpuzhash,
            test_info_list,
            pubkey,
            test_message_spend_bundle,
        )
        print(f"pubkey: {did_wallet_2}")
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(45, did_wallet_2.get_confirmed_balance, 201)
        await time_out_assert(45, did_wallet_2.get_unconfirmed_balance, 201)
        some_ph = 32 * b"\2"
        await did_wallet_2.create_exit_spend(some_ph)
        for i in range(1, num_blocks):
            await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph))

        async def get_coins_with_ph():
            coins = await full_node_api.full_node.coin_store.get_coin_records_by_puzzle_hash(True, some_ph)
            if len(coins) == 1:
                return True
            return False

        await time_out_assert(15, get_coins_with_ph, True)
        await time_out_assert(45, did_wallet_2.get_confirmed_balance, 0)
        await time_out_assert(45, did_wallet_2.get_unconfirmed_balance, 0)

    @pytest.mark.asyncio
    async def test_did_recovery_with_multiple_backup_dids(self, two_wallet_nodes):
        num_blocks = 5
        full_nodes, wallets = two_wallet_nodes
        full_node_1 = full_nodes[0]
        server_1 = full_node_1.server
        wallet_node, server_2 = wallets[0]
        wallet_node_2, server_3 = wallets[1]
        wallet = wallet_node.wallet_state_manager.main_wallet
        wallet2 = wallet_node_2.wallet_state_manager.main_wallet
        ph = await wallet.get_new_puzzlehash()
        await server_2.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        await server_3.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        funds = sum(
            [
                calculate_pool_reward(uint32(i)) + calculate_base_farmer_reward(uint32(i))
                for i in range(1, num_blocks - 1)
            ]
        )
        await time_out_assert(15, wallet.get_confirmed_balance, funds)
        async with wallet_node.wallet_state_manager.lock:
            did_wallet: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node.wallet_state_manager, wallet, uint64(101)
            )
        ph = await wallet2.get_new_puzzlehash()
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet.get_unconfirmed_balance, 101)
        recovery_list = [bytes.fromhex(did_wallet.get_my_DID())]
        async with wallet_node_2.wallet_state_manager.lock:
            did_wallet_2: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node_2.wallet_state_manager, wallet2, uint64(101), recovery_list
            )
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_2.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet_2.get_unconfirmed_balance, 101)
        assert did_wallet_2.did_info.backup_ids == recovery_list
        recovery_list.append(bytes.fromhex(did_wallet_2.get_my_DID()))
        async with wallet_node_2.wallet_state_manager.lock:
            did_wallet_3: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node_2.wallet_state_manager, wallet2, uint64(201), recovery_list
            )
        ph2 = await wallet.get_new_puzzlehash()
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        assert did_wallet_3.did_info.backup_ids == recovery_list
        await time_out_assert(15, did_wallet_3.get_confirmed_balance, 201)
        await time_out_assert(15, did_wallet_3.get_unconfirmed_balance, 201)
        coins = await did_wallet_3.select_coins(1)
        coin = coins.pop()
        filename = "test.backup"
        did_wallet_3.create_backup(filename)
        async with wallet_node.wallet_state_manager.lock:
            did_wallet_4 = await DIDWallet.create_new_did_wallet_from_recovery(
                wallet_node.wallet_state_manager,
                wallet,
                filename,
            )
        pubkey = (
            await did_wallet_4.wallet_state_manager.get_unused_derivation_record(did_wallet_2.wallet_info.id)
        ).pubkey
        new_ph = await did_wallet_4.get_new_inner_hash()
        message_spend_bundle = await did_wallet.create_attestment(coin.name(), new_ph, pubkey, "test1.attest")
        message_spend_bundle2 = await did_wallet_2.create_attestment(coin.name(), new_ph, pubkey, "test2.attest")
        message_spend_bundle = message_spend_bundle.aggregate([message_spend_bundle, message_spend_bundle2])
        (
            test_info_list,
            test_message_spend_bundle,
        ) = await did_wallet_4.load_attest_files_for_recovery_spend(["test1.attest", "test2.attest"])
        assert message_spend_bundle == test_message_spend_bundle
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        await did_wallet_4.recovery_spend(coin, new_ph, test_info_list, pubkey, message_spend_bundle)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        await time_out_assert(15, did_wallet_4.get_confirmed_balance, 201)
        await time_out_assert(15, did_wallet_4.get_unconfirmed_balance, 201)
        await time_out_assert(15, did_wallet_3.get_confirmed_balance, 0)
        await time_out_assert(15, did_wallet_3.get_unconfirmed_balance, 0)

    @pytest.mark.asyncio
    async def test_did_recovery_with_empty_set(self, two_wallet_nodes):
        num_blocks = 5
        full_nodes, wallets = two_wallet_nodes
        full_node_1 = full_nodes[0]
        server_1 = full_node_1.server
        wallet_node, server_2 = wallets[0]
        wallet_node_2, server_3 = wallets[1]
        wallet = wallet_node.wallet_state_manager.main_wallet
        ph = await wallet.get_new_puzzlehash()
        await server_2.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        await server_3.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        funds = sum(
            [
                calculate_pool_reward(uint32(i)) + calculate_base_farmer_reward(uint32(i))
                for i in range(1, num_blocks - 1)
            ]
        )
        await time_out_assert(15, wallet.get_confirmed_balance, funds)
        async with wallet_node.wallet_state_manager.lock:
            did_wallet: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node.wallet_state_manager, wallet, uint64(101)
            )
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet.get_unconfirmed_balance, 101)
        coins = await did_wallet.select_coins(1)
        coin = coins.pop()
        info = Program.to([])
        pubkey = (await did_wallet.wallet_state_manager.get_unused_derivation_record(did_wallet.wallet_info.id)).pubkey
        spend_bundle = await did_wallet.recovery_spend(
            coin, ph, info, pubkey, SpendBundle([], AugSchemeMPL.aggregate([]))
        )
        additions = spend_bundle.additions()
        assert additions == []

    @pytest.mark.asyncio
    async def test_did_attest_after_recovery(self, two_wallet_nodes):
        num_blocks = 5
        full_nodes, wallets = two_wallet_nodes
        full_node_1 = full_nodes[0]
        server_1 = full_node_1.server
        wallet_node, server_2 = wallets[0]
        wallet_node_2, server_3 = wallets[1]
        wallet = wallet_node.wallet_state_manager.main_wallet
        wallet2 = wallet_node_2.wallet_state_manager.main_wallet
        ph = await wallet.get_new_puzzlehash()
        await server_2.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        await server_3.start_client(PeerInfo("localhost", uint16(server_1._port)), None)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        funds = sum(
            [
                calculate_pool_reward(uint32(i)) + calculate_base_farmer_reward(uint32(i))
                for i in range(1, num_blocks - 1)
            ]
        )
        await time_out_assert(15, wallet.get_confirmed_balance, funds)
        async with wallet_node.wallet_state_manager.lock:
            did_wallet: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node.wallet_state_manager, wallet, uint64(101)
            )
        ph2 = await wallet2.get_new_puzzlehash()
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        await time_out_assert(15, did_wallet.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet.get_unconfirmed_balance, 101)
        recovery_list = [bytes.fromhex(did_wallet.get_my_DID())]
        async with wallet_node_2.wallet_state_manager.lock:
            did_wallet_2: DIDWallet = await DIDWallet.create_new_did_wallet(
                wallet_node_2.wallet_state_manager, wallet2, uint64(101), recovery_list
            )
        ph = await wallet.get_new_puzzlehash()
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_2.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet_2.get_unconfirmed_balance, 101)
        assert did_wallet_2.did_info.backup_ids == recovery_list
        # Update coin with new ID info
        recovery_list = [bytes.fromhex(did_wallet_2.get_my_DID())]
        await did_wallet.update_recovery_list(recovery_list, uint64(1))
        assert did_wallet.did_info.backup_ids == recovery_list
        await did_wallet.create_update_spend()
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        await time_out_assert(15, did_wallet.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet.get_unconfirmed_balance, 101)
        # DID Wallet 2 recovers into DID Wallet 3 with new innerpuz
        filename = "test.backup"
        did_wallet_2.create_backup(filename)
        async with wallet_node.wallet_state_manager.lock:
            did_wallet_3 = await DIDWallet.create_new_did_wallet_from_recovery(
                wallet_node.wallet_state_manager,
                wallet,
                filename,
            )
        new_ph = await did_wallet_3.get_new_inner_hash()
        coins = await did_wallet_2.select_coins(1)
        coin = coins.pop()
        pubkey = (
            await did_wallet_3.wallet_state_manager.get_unused_derivation_record(did_wallet_3.wallet_info.id)
        ).pubkey
        message_spend_bundle = await did_wallet.create_attestment(coin.name(), new_ph, pubkey, "test.attest")
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        (
            info,
            message_spend_bundle,
        ) = await did_wallet_3.load_attest_files_for_recovery_spend(["test.attest"])
        await did_wallet_3.recovery_spend(coin, new_ph, info, pubkey, message_spend_bundle)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_3.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet_3.get_unconfirmed_balance, 101)
        # DID Wallet 1 recovery spends into DID Wallet 4
        filename = "test.backup"
        did_wallet.create_backup(filename)
        async with wallet_node_2.wallet_state_manager.lock:
            did_wallet_4 = await DIDWallet.create_new_did_wallet_from_recovery(
                wallet_node_2.wallet_state_manager,
                wallet2,
                filename,
            )
        coins = await did_wallet.select_coins(1)
        coin = coins.pop()
        new_ph = await did_wallet_4.get_new_inner_hash()
        pubkey = (
            await did_wallet_4.wallet_state_manager.get_unused_derivation_record(did_wallet_4.wallet_info.id)
        ).pubkey
        await did_wallet_3.create_attestment(coin.name(), new_ph, pubkey, "test.attest")
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph2))
        (
            test_info_list,
            test_message_spend_bundle,
        ) = await did_wallet_4.load_attest_files_for_recovery_spend(["test.attest"])
        await did_wallet_4.recovery_spend(coin, new_ph, test_info_list, pubkey, test_message_spend_bundle)
        for i in range(1, num_blocks):
            await full_node_1.farm_new_transaction_block(FarmNewBlockProtocol(ph))
        await time_out_assert(15, did_wallet_4.get_confirmed_balance, 101)
        await time_out_assert(15, did_wallet_4.get_unconfirmed_balance, 101)
        await time_out_assert(15, did_wallet.get_confirmed_balance, 0)
        await time_out_assert(15, did_wallet.get_unconfirmed_balance, 0)
| 43.54185 | 119 | 0.693697 | 2,661 | 19,768 | 4.738068 | 0.069899 | 0.083518 | 0.043306 | 0.057107 | 0.834312 | 0.793702 | 0.762849 | 0.742306 | 0.721209 | 0.695035 | 0 | 0.033768 | 0.231485 | 19,768 | 453 | 120 | 43.637969 | 0.796143 | 0.014518 | 0 | 0.509695 | 0 | 0 | 0.016689 | 0 | 0 | 0 | 0 | 0 | 0.135734 | 1 | 0.00277 | false | 0 | 0.033241 | 0 | 0.044321 | 0.00554 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
58035491bc00e4d9d39a232517f5918fdf8632a6 | 1,228 | py | Python | catkin_ws/build/srrg2_laser_slam_2d/catkin_generated/pkg.installspace.context.pc.py | laaners/progetto-labiagi_pick_e_delivery | 3453bfbc1dd7562c78ba06c0f79b069b0a952c0e | [
"MIT"
] | null | null | null | catkin_ws/build/srrg2_laser_slam_2d/catkin_generated/pkg.installspace.context.pc.py | laaners/progetto-labiagi_pick_e_delivery | 3453bfbc1dd7562c78ba06c0f79b069b0a952c0e | [
"MIT"
] | null | null | null | catkin_ws/build/srrg2_laser_slam_2d/catkin_generated/pkg.installspace.context.pc.py | laaners/progetto-labiagi_pick_e_delivery | 3453bfbc1dd7562c78ba06c0f79b069b0a952c0e | [
"MIT"
] | null | null | null | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "${prefix}/include;/usr/include/QGLViewer".split(';') if "${prefix}/include;/usr/include/QGLViewer" != "" else []
PROJECT_CATKIN_DEPENDS = "srrg2_slam_interfaces;srrg2_core;srrg2_core_ros;srrg2_solver;srrg2_qgl_viewport;sensor_msgs;tf;srrg_cmake_modules".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "-lsrrg2_laser_slam_2d_library;-lsrrg2_laser_slam_2d_registration_library;-lsrrg2_laser_slam_2d_sensor_processing_library;-lsrrg2_laser_slam_2d_mapping_library;/usr/lib/x86_64-linux-gnu/libQGLViewer-qt5.so;/usr/lib/x86_64-linux-gnu/libglut.so;/usr/lib/x86_64-linux-gnu/libXmu.so;/usr/lib/x86_64-linux-gnu/libXi.so".split(';') if "-lsrrg2_laser_slam_2d_library;-lsrrg2_laser_slam_2d_registration_library;-lsrrg2_laser_slam_2d_sensor_processing_library;-lsrrg2_laser_slam_2d_mapping_library;/usr/lib/x86_64-linux-gnu/libQGLViewer-qt5.so;/usr/lib/x86_64-linux-gnu/libglut.so;/usr/lib/x86_64-linux-gnu/libXmu.so;/usr/lib/x86_64-linux-gnu/libXi.so" != "" else []
PROJECT_NAME = "srrg2_laser_slam_2d"
PROJECT_SPACE_DIR = "/home/alessiohu/Desktop/progetto-labiagi/catkin_ws/install"
PROJECT_VERSION = "0.1.0"
| 136.444444 | 692 | 0.824104 | 197 | 1,228 | 4.736041 | 0.345178 | 0.086817 | 0.106109 | 0.145766 | 0.600214 | 0.531618 | 0.531618 | 0.531618 | 0.531618 | 0.531618 | 0 | 0.050463 | 0.031759 | 1,228 | 8 | 693 | 153.5 | 0.73423 | 0.043974 | 0 | 0 | 1 | 0.285714 | 0.770478 | 0.746587 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed1e4983fa5b2a9b26dc28d44f6324434665dd7d | 31 | py | Python | src/model/__init__.py | etiennelndr/predict_faces | c3eb9b4c8fa51f5b7facf8d10df679ae26043ebe | [
"MIT"
] | 1 | 2019-08-28T15:56:23.000Z | 2019-08-28T15:56:23.000Z | src/model/__init__.py | etiennelndr/predict_faces | c3eb9b4c8fa51f5b7facf8d10df679ae26043ebe | [
"MIT"
] | null | null | null | src/model/__init__.py | etiennelndr/predict_faces | c3eb9b4c8fa51f5b7facf8d10df679ae26043ebe | [
"MIT"
] | null | null | null | from .model import PredictFace
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed288ab715e02db1db605b9f66dc2110cb72739c | 49 | py | Python | String/validador_url.py | YanMCoutinho/TIL-Python | 8c1836c4a9d5aed8ab17e48a64bf0e5c0764470b | [
"MIT"
] | null | null | null | String/validador_url.py | YanMCoutinho/TIL-Python | 8c1836c4a9d5aed8ab17e48a64bf0e5c0764470b | [
"MIT"
] | null | null | null | String/validador_url.py | YanMCoutinho/TIL-Python | 8c1836c4a9d5aed8ab17e48a64bf0e5c0764470b | [
"MIT"
] | null | null | null | # https://www.bytebank.com.br/cambio
import re
| 9.8 | 36 | 0.714286 | 8 | 49 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 49 | 4 | 37 | 12.25 | 0.813953 | 0.693878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed330816b435daf07486cba70399789809b47613 | 797 | py | Python | src/sage/combinat/crystals/catalog_elementary_crystals.py | defeo/sage | d8822036a9843bd4d75845024072515ede56bcb9 | [
"BSL-1.0"
] | 2 | 2018-06-30T01:37:35.000Z | 2018-06-30T01:37:39.000Z | src/sage/combinat/crystals/catalog_elementary_crystals.py | boothby/sage | 1b1e6f608d1ef8ee664bb19e991efbbc68cbd51f | [
"BSL-1.0"
] | null | null | null | src/sage/combinat/crystals/catalog_elementary_crystals.py | boothby/sage | 1b1e6f608d1ef8ee664bb19e991efbbc68cbd51f | [
"BSL-1.0"
] | null | null | null | """
Catalog Of Elementary Crystals
See :mod:`~sage.combinat.crystals.elementary_crystals`.
* :class:`Component <sage.combinat.crystals.elementary_crystals.ComponentCrystal>`
* :class:`Elementary <sage.combinat.crystals.elementary_crystals.ElementaryCrystal>`
or :class:`B <sage.combinat.crystals.elementary_crystals.ElementaryCrystal>`
* :class:`R <sage.combinat.crystals.elementary_crystals.RCrystal>`
* :class:`T <sage.combinat.crystals.elementary_crystals.TCrystal>`
"""
from __future__ import absolute_import
from .elementary_crystals import TCrystal as T
from .elementary_crystals import RCrystal as R
from .elementary_crystals import ElementaryCrystal as Elementary
from .elementary_crystals import ElementaryCrystal as B
from .elementary_crystals import ComponentCrystal as Component
| 39.85 | 84 | 0.828105 | 92 | 797 | 7 | 0.25 | 0.335404 | 0.186335 | 0.279503 | 0.552795 | 0.31677 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079046 | 797 | 19 | 85 | 41.947368 | 0.877384 | 0.588457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed522bb4ecc4b635a0f16672f35a8b9fa78e04fa | 109 | py | Python | python/learn/base/system/test_system.py | qrsforever/workspace | 53c7ce7ca7da62c9fbb3d991ae9e4e34d07ece5f | [
"MIT"
] | 2 | 2017-06-07T03:20:42.000Z | 2020-01-07T09:14:26.000Z | python/learn/base/system/test_system.py | qrsforever/workspace | 53c7ce7ca7da62c9fbb3d991ae9e4e34d07ece5f | [
"MIT"
] | null | null | null | python/learn/base/system/test_system.py | qrsforever/workspace | 53c7ce7ca7da62c9fbb3d991ae9e4e34d07ece5f | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
print(os.system('ls'))
print(os.system('ps aux'))
| 9.083333 | 26 | 0.587156 | 17 | 109 | 3.764706 | 0.764706 | 0.21875 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.155963 | 109 | 11 | 27 | 9.909091 | 0.673913 | 0.357798 | 0 | 0 | 0 | 0 | 0.119403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
ed5438cfc699fda6aced3fcdcbcf6c31f9547588 | 28,832 | py | Python | cottonformation/res/robomaker.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | cottonformation/res/robomaker.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | cottonformation/res/robomaker.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing

from ..core.model import (
    Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta

#--- Property declaration ---

@attr.s
class SimulationApplicationSimulationSoftwareSuite(Property):
    """
    AWS Object Type = "AWS::RoboMaker::SimulationApplication.SimulationSoftwareSuite"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-simulationsoftwaresuite.html

    Property Document:

    - ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-simulationsoftwaresuite.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite-name
    - ``rp_Version``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-simulationsoftwaresuite.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite-version
    """
    AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplication.SimulationSoftwareSuite"

    rp_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-simulationsoftwaresuite.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite-name"""

    rp_Version: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Version"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-simulationsoftwaresuite.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite-version"""


@attr.s
class SimulationApplicationRobotSoftwareSuite(Property):
    """
    AWS Object Type = "AWS::RoboMaker::SimulationApplication.RobotSoftwareSuite"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-robotsoftwaresuite.html

    Property Document:

    - ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-robotsoftwaresuite.html#cfn-robomaker-simulationapplication-robotsoftwaresuite-name
    - ``rp_Version``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-robotsoftwaresuite.html#cfn-robomaker-simulationapplication-robotsoftwaresuite-version
    """
    AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplication.RobotSoftwareSuite"

    rp_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-robotsoftwaresuite.html#cfn-robomaker-simulationapplication-robotsoftwaresuite-name"""

    rp_Version: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Version"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-robotsoftwaresuite.html#cfn-robomaker-simulationapplication-robotsoftwaresuite-version"""


@attr.s
class SimulationApplicationSourceConfig(Property):
    """
    AWS Object Type = "AWS::RoboMaker::SimulationApplication.SourceConfig"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html
Property Document:
- ``rp_Architecture``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-architecture
- ``rp_S3Bucket``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-s3bucket
- ``rp_S3Key``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-s3key
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplication.SourceConfig"
rp_Architecture: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Architecture"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-architecture"""
rp_S3Bucket: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "S3Bucket"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-s3bucket"""
rp_S3Key: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "S3Key"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-sourceconfig.html#cfn-robomaker-simulationapplication-sourceconfig-s3key"""
@attr.s
class RobotApplicationRobotSoftwareSuite(Property):
"""
AWS Object Type = "AWS::RoboMaker::RobotApplication.RobotSoftwareSuite"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-robotsoftwaresuite.html
Property Document:
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-robotsoftwaresuite.html#cfn-robomaker-robotapplication-robotsoftwaresuite-name
- ``rp_Version``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-robotsoftwaresuite.html#cfn-robomaker-robotapplication-robotsoftwaresuite-version
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::RobotApplication.RobotSoftwareSuite"
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-robotsoftwaresuite.html#cfn-robomaker-robotapplication-robotsoftwaresuite-name"""
rp_Version: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Version"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-robotsoftwaresuite.html#cfn-robomaker-robotapplication-robotsoftwaresuite-version"""
@attr.s
class SimulationApplicationRenderingEngine(Property):
"""
AWS Object Type = "AWS::RoboMaker::SimulationApplication.RenderingEngine"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-renderingengine.html
Property Document:
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-renderingengine.html#cfn-robomaker-simulationapplication-renderingengine-name
- ``rp_Version``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-renderingengine.html#cfn-robomaker-simulationapplication-renderingengine-version
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplication.RenderingEngine"
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-renderingengine.html#cfn-robomaker-simulationapplication-renderingengine-name"""
rp_Version: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Version"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-simulationapplication-renderingengine.html#cfn-robomaker-simulationapplication-renderingengine-version"""
@attr.s
class RobotApplicationSourceConfig(Property):
"""
AWS Object Type = "AWS::RoboMaker::RobotApplication.SourceConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html
Property Document:
- ``rp_Architecture``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-architecture
- ``rp_S3Bucket``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-s3bucket
- ``rp_S3Key``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-s3key
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::RobotApplication.SourceConfig"
rp_Architecture: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Architecture"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-architecture"""
rp_S3Bucket: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "S3Bucket"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-s3bucket"""
rp_S3Key: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "S3Key"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-robomaker-robotapplication-sourceconfig.html#cfn-robomaker-robotapplication-sourceconfig-s3key"""
#--- Resource declaration ---
@attr.s
class SimulationApplication(Resource):
"""
AWS Object Type = "AWS::RoboMaker::SimulationApplication"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html
Property Document:
- ``rp_RenderingEngine``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-renderingengine
- ``rp_RobotSoftwareSuite``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-robotsoftwaresuite
- ``rp_SimulationSoftwareSuite``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite
- ``rp_Sources``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-sources
- ``p_CurrentRevisionId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-currentrevisionid
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-tags
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplication"
rp_RenderingEngine: typing.Union['SimulationApplicationRenderingEngine', dict] = attr.ib(
default=None,
converter=SimulationApplicationRenderingEngine.from_dict,
validator=attr.validators.instance_of(SimulationApplicationRenderingEngine),
metadata={AttrMeta.PROPERTY_NAME: "RenderingEngine"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-renderingengine"""
rp_RobotSoftwareSuite: typing.Union['SimulationApplicationRobotSoftwareSuite', dict] = attr.ib(
default=None,
converter=SimulationApplicationRobotSoftwareSuite.from_dict,
validator=attr.validators.instance_of(SimulationApplicationRobotSoftwareSuite),
metadata={AttrMeta.PROPERTY_NAME: "RobotSoftwareSuite"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-robotsoftwaresuite"""
rp_SimulationSoftwareSuite: typing.Union['SimulationApplicationSimulationSoftwareSuite', dict] = attr.ib(
default=None,
converter=SimulationApplicationSimulationSoftwareSuite.from_dict,
validator=attr.validators.instance_of(SimulationApplicationSimulationSoftwareSuite),
metadata={AttrMeta.PROPERTY_NAME: "SimulationSoftwareSuite"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-simulationsoftwaresuite"""
rp_Sources: typing.List[typing.Union['SimulationApplicationSourceConfig', dict]] = attr.ib(
default=None,
converter=SimulationApplicationSourceConfig.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(SimulationApplicationSourceConfig), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Sources"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-sources"""
p_CurrentRevisionId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentRevisionId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-currentrevisionid"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-name"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#cfn-robomaker-simulationapplication-tags"""
@property
def rv_CurrentRevisionId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#aws-resource-robomaker-simulationapplication-return-values"""
return GetAtt(resource=self, attr_name="CurrentRevisionId")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplication.html#aws-resource-robomaker-simulationapplication-return-values"""
return GetAtt(resource=self, attr_name="Arn")
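For orientation, a ``SimulationApplication`` resource like the one above renders to a CloudFormation template fragment along these lines (the logical ID ``MySimApp`` and all values are illustrative, not taken from this module):

```json
{
  "MySimApp": {
    "Type": "AWS::RoboMaker::SimulationApplication",
    "Properties": {
      "RenderingEngine": {"Name": "OGRE", "Version": "1.x"},
      "RobotSoftwareSuite": {"Name": "ROS", "Version": "Melodic"},
      "SimulationSoftwareSuite": {"Name": "Gazebo", "Version": "9"},
      "Sources": [
        {"Architecture": "X86_64", "S3Bucket": "my-bucket", "S3Key": "sim-app.tar.gz"}
      ]
    }
  }
}
```

Referencing the resource's ``rv_Arn`` property elsewhere in a template produces the intrinsic ``{"Fn::GetAtt": ["MySimApp", "Arn"]}``, which is what the ``GetAtt`` return values above represent.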
@attr.s
class SimulationApplicationVersion(Resource):
"""
AWS Object Type = "AWS::RoboMaker::SimulationApplicationVersion"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplicationversion.html
Property Document:
- ``rp_Application``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplicationversion.html#cfn-robomaker-simulationapplicationversion-application
- ``p_CurrentRevisionId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplicationversion.html#cfn-robomaker-simulationapplicationversion-currentrevisionid
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::SimulationApplicationVersion"
rp_Application: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Application"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplicationversion.html#cfn-robomaker-simulationapplicationversion-application"""
p_CurrentRevisionId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentRevisionId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-simulationapplicationversion.html#cfn-robomaker-simulationapplicationversion-currentrevisionid"""
@attr.s
class RobotApplication(Resource):
"""
AWS Object Type = "AWS::RoboMaker::RobotApplication"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html
Property Document:
- ``rp_RobotSoftwareSuite``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-robotsoftwaresuite
- ``rp_Sources``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-sources
- ``p_CurrentRevisionId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-currentrevisionid
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-tags
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::RobotApplication"
rp_RobotSoftwareSuite: typing.Union['RobotApplicationRobotSoftwareSuite', dict] = attr.ib(
default=None,
converter=RobotApplicationRobotSoftwareSuite.from_dict,
validator=attr.validators.instance_of(RobotApplicationRobotSoftwareSuite),
metadata={AttrMeta.PROPERTY_NAME: "RobotSoftwareSuite"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-robotsoftwaresuite"""
rp_Sources: typing.List[typing.Union['RobotApplicationSourceConfig', dict]] = attr.ib(
default=None,
converter=RobotApplicationSourceConfig.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(RobotApplicationSourceConfig), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Sources"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-sources"""
p_CurrentRevisionId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentRevisionId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-currentrevisionid"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-name"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#cfn-robomaker-robotapplication-tags"""
@property
def rv_CurrentRevisionId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#aws-resource-robomaker-robotapplication-return-values"""
return GetAtt(resource=self, attr_name="CurrentRevisionId")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplication.html#aws-resource-robomaker-robotapplication-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@attr.s
class Fleet(Resource):
"""
AWS Object Type = "AWS::RoboMaker::Fleet"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html
Property Document:
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html#cfn-robomaker-fleet-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html#cfn-robomaker-fleet-tags
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::Fleet"
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html#cfn-robomaker-fleet-name"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html#cfn-robomaker-fleet-tags"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-fleet.html#aws-resource-robomaker-fleet-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@attr.s
class RobotApplicationVersion(Resource):
"""
AWS Object Type = "AWS::RoboMaker::RobotApplicationVersion"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplicationversion.html
Property Document:
- ``rp_Application``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplicationversion.html#cfn-robomaker-robotapplicationversion-application
- ``p_CurrentRevisionId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplicationversion.html#cfn-robomaker-robotapplicationversion-currentrevisionid
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::RobotApplicationVersion"
rp_Application: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Application"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplicationversion.html#cfn-robomaker-robotapplicationversion-application"""
p_CurrentRevisionId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentRevisionId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robotapplicationversion.html#cfn-robomaker-robotapplicationversion-currentrevisionid"""
@attr.s
class Robot(Resource):
"""
AWS Object Type = "AWS::RoboMaker::Robot"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html
Property Document:
- ``rp_Architecture``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-architecture
- ``rp_GreengrassGroupId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-greengrassgroupid
- ``p_Fleet``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-fleet
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-tags
"""
AWS_OBJECT_TYPE = "AWS::RoboMaker::Robot"
rp_Architecture: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Architecture"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-architecture"""
rp_GreengrassGroupId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "GreengrassGroupId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-greengrassgroupid"""
p_Fleet: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Fleet"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-fleet"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-name"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-robomaker-robot.html#cfn-robomaker-robot-tags"""
ed8252345d7a301503e3306fb07b8e17208bc551 | 18,793 | py | Python | api_1.3/containerd/services/content/v1/content_pb2_grpc.py | siemens/pycontainerd | 9b1184ecbcc91144ad6903403818b5b8989a32f3 | ["Apache-2.0"] | 24 | 2019-12-16T12:38:51.000Z | 2022-02-16T18:44:20.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from containerd.services.content.v1 import content_pb2 as containerd_dot_services_dot_content_dot_v1_dot_content__pb2
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
class ContentStub(object):
"""Content provides access to a content addressable storage system.
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Info = channel.unary_unary(
'/containerd.services.content.v1.Content/Info',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoResponse.FromString,
)
self.Update = channel.unary_unary(
'/containerd.services.content.v1.Content/Update',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateResponse.FromString,
)
self.List = channel.unary_stream(
'/containerd.services.content.v1.Content/List',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentResponse.FromString,
)
self.Delete = channel.unary_unary(
'/containerd.services.content.v1.Content/Delete',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.DeleteContentRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.Read = channel.unary_stream(
'/containerd.services.content.v1.Content/Read',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentResponse.FromString,
)
self.Status = channel.unary_unary(
'/containerd.services.content.v1.Content/Status',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusResponse.FromString,
)
self.ListStatuses = channel.unary_unary(
'/containerd.services.content.v1.Content/ListStatuses',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesResponse.FromString,
)
self.Write = channel.stream_stream(
'/containerd.services.content.v1.Content/Write',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentResponse.FromString,
)
self.Abort = channel.unary_unary(
'/containerd.services.content.v1.Content/Abort',
request_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.AbortRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
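The stub binds each RPC name to a callable obtained from the channel at construction time; calling a stub method then goes through that channel callable. A grpc-free sketch of the binding pattern (the ``Channel`` stand-in and its return value are illustrative; a real ``grpc.Channel`` sends serialized bytes over HTTP/2):

```python
class Channel:
    """Stand-in for grpc.Channel: hands back a callable for a method path."""
    def unary_unary(self, path, request_serializer=None, response_deserializer=None):
        def call(request):
            # a real channel would serialize the request, invoke the RPC,
            # and deserialize the response; here we just echo the routing
            return (path, request)
        return call

class ContentStub:
    """Mirrors the generated stub: bind RPC callables in __init__."""
    def __init__(self, channel):
        self.Info = channel.unary_unary(
            '/containerd.services.content.v1.Content/Info')

stub = ContentStub(Channel())
print(stub.Info("sha256:abc"))
# ('/containerd.services.content.v1.Content/Info', 'sha256:abc')
```

Because binding happens once per stub, each subsequent call is a plain attribute access plus one function call on the channel-provided callable.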
class ContentServicer(object):
"""Content provides access to a content addressable storage system.
"""
def Info(self, request, context):
"""Info returns information about a committed object.
This call can be used for getting the size of content and checking for
existence.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Update(self, request, context):
"""Update updates content metadata.
This call can be used to manage the mutable content labels. The
immutable metadata such as digest, size, and committed at cannot
be updated.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def List(self, request, context):
"""List streams the entire set of content as Info objects and closes the
stream.
Typically, this will yield a large response, chunked into messages.
Clients should make provisions to ensure they can handle the entire data
set.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Delete(self, request, context):
"""Delete will delete the referenced object.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Read(self, request, context):
"""Read allows one to read an object based on the offset into the content.
The requested data may be returned in one or more messages.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Status(self, request, context):
"""Status returns the status for a single reference.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListStatuses(self, request, context):
"""ListStatuses returns the status of ongoing object ingestions, started via
Write.
Only those matching the regular expression will be provided in the
response. If the provided regular expression is empty, all ingestions
will be provided.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Write(self, request_iterator, context):
"""Write begins or resumes writes to a resource identified by a unique ref.
Only one active stream may exist at a time for each ref.
Once a write stream has started, it may only write to a single ref, thus
once a stream is started, the ref may be omitted on subsequent writes.
For any write transaction represented by a ref, only a single write may
be made to a given offset. If overlapping writes occur, it is an error.
        Writes should be sequential; an implementation may return an error if
        sequential writes are required and a non-sequential write is attempted.
If expected_digest is set and already part of the content store, the
write will fail.
When completed, the commit flag should be set to true. If expected size
or digest is set, the content will be validated against those values.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Abort(self, request, context):
"""Abort cancels the ongoing write named in the request. Any resources
associated with the write will be collected.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_ContentServicer_to_server(servicer, server):
rpc_method_handlers = {
'Info': grpc.unary_unary_rpc_method_handler(
servicer.Info,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoResponse.SerializeToString,
),
'Update': grpc.unary_unary_rpc_method_handler(
servicer.Update,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateResponse.SerializeToString,
),
'List': grpc.unary_stream_rpc_method_handler(
servicer.List,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentResponse.SerializeToString,
),
'Delete': grpc.unary_unary_rpc_method_handler(
servicer.Delete,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.DeleteContentRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'Read': grpc.unary_stream_rpc_method_handler(
servicer.Read,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentResponse.SerializeToString,
),
'Status': grpc.unary_unary_rpc_method_handler(
servicer.Status,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusResponse.SerializeToString,
),
'ListStatuses': grpc.unary_unary_rpc_method_handler(
servicer.ListStatuses,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesResponse.SerializeToString,
),
'Write': grpc.stream_stream_rpc_method_handler(
servicer.Write,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentRequest.FromString,
response_serializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentResponse.SerializeToString,
),
'Abort': grpc.unary_unary_rpc_method_handler(
servicer.Abort,
request_deserializer=containerd_dot_services_dot_content_dot_v1_dot_content__pb2.AbortRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'containerd.services.content.v1.Content', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
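The Write docstring above requires that writes to a ref be sequential, with at most one write per offset and no overlaps. A minimal pure-Python sketch of that bookkeeping (a hypothetical `RefWriter` helper, not part of the generated API; digest and commit validation are omitted) might look like:

```python
class RefWriter:
    """Tracks sequential, non-overlapping writes to a single ref,
    mirroring the constraints described in the Write docstring."""

    def __init__(self, ref):
        self.ref = ref
        self.offset = 0          # next expected write offset
        self.chunks = []

    def write(self, offset, data):
        # Only a single write may be made to a given offset, and writes
        # must be sequential: reject anything but the next expected offset.
        if offset != self.offset:
            raise ValueError(
                f"non-sequential write at {offset}, expected {self.offset}")
        self.chunks.append(data)
        self.offset += len(data)
        return self.offset       # new end offset of the ingested data

w = RefWriter("layer-sha256:abc")
w.write(0, b"hello ")
w.write(6, b"world")
```

A real servicer would additionally validate `expected_size`/`expected_digest` when the commit flag is set; this sketch only models the offset discipline.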
# This class is part of an EXPERIMENTAL API.
class Content(object):
"""Content provides access to a content addressable storage system.
"""
@staticmethod
def Info(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/Info',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.InfoResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Update(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/Update',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.UpdateResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def List(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/containerd.services.content.v1.Content/List',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListContentResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Delete(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/Delete',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.DeleteContentRequest.SerializeToString,
google_dot_protobuf_dot_empty__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Read(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/containerd.services.content.v1.Content/Read',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ReadContentResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Status(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/Status',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.StatusResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ListStatuses(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/ListStatuses',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.ListStatusesResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Write(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/containerd.services.content.v1.Content/Write',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentRequest.SerializeToString,
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.WriteContentResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Abort(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/containerd.services.content.v1.Content/Abort',
containerd_dot_services_dot_content_dot_v1_dot_content__pb2.AbortRequest.SerializeToString,
google_dot_protobuf_dot_empty__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
from testflows.core import *
from window_functions.requirements import *
from window_functions.tests.common import *
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MissingFrameExtent_Error("1.0")
)
def missing_frame_extent(self):
"""Check that when range frame has missing frame extent then an error is returned.
"""
exitcode, message = syntax_error()
self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE) FROM numbers(1,3)",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_InvalidFrameExtent_Error("1.0")
)
def invalid_frame_extent(self):
"""Check that when range frame has invalid frame extent then an error is returned.
"""
exitcode, message = syntax_error()
self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE '1') FROM numbers(1,3)",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers("1.0"),
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithoutOrderBy("1.0")
)
def start_current_row_without_order_by(self):
"""Check range current row frame without order by and
that the peers of the current row are rows that have values in the same order bucket.
In this case without order by clause all rows are the peers of the current row.
"""
expected = convert_output("""
empno | salary | sum
--------+--------+--------
1 | 5000 | 47100
2 | 3900 | 47100
3 | 4800 | 47100
4 | 4800 | 47100
5 | 3500 | 47100
7 | 4200 | 47100
8 | 6000 | 47100
9 | 4500 | 47100
10 | 5200 | 47100
11 | 5200 | 47100
""")
execute_query(
"SELECT * FROM (SELECT empno, salary, sum(salary) OVER (RANGE CURRENT ROW) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers("1.0"),
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithOrderBy("1.0")
)
def start_current_row_with_order_by(self):
"""Check range current row frame with order by and that the peers of the current row
are rows that have values in the same order bucket.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 14600
2 | personnel | 3900 | 7400
3 | sales | 4800 | 14600
4 | sales | 4800 | 14600
5 | personnel | 3500 | 7400
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE CURRENT ROW) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
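With ORDER BY depname RANGE CURRENT ROW, each row's frame is exactly its set of peers, i.e. all rows that share the same depname. A pure-Python model of this (illustrative only; `range_current_row_sums` is not a helper in this suite) reproduces the sums in the expected output above:

```python
def range_current_row_sums(rows):
    """For each (key, value) row, sum the values of all rows whose
    ORDER BY key equals the current row's key (its peers)."""
    totals = {}
    for key, value in rows:
        totals[key] = totals.get(key, 0) + value
    return [totals[key] for key, _ in rows]

# (depname, salary) rows from empsalary, ordered by empno
empsalary = [("sales", 5000), ("personnel", 3900), ("sales", 4800),
             ("sales", 4800), ("personnel", 3500), ("develop", 4200),
             ("develop", 6000), ("develop", 4500), ("develop", 5200),
             ("develop", 5200)]
sums = range_current_row_sums(empsalary)
```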
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedFollowing_Error("1.0")
)
def start_unbounded_following_error(self):
"""Check range current row frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE UNBOUNDED FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE UNBOUNDED FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithoutOrderBy("1.0")
)
def start_unbounded_preceding_without_order_by(self):
"""Check range unbounded preceding frame without order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE UNBOUNDED PRECEDING) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithOrderBy("1.0")
)
def start_unbounded_preceding_with_order_by(self):
"""Check range unbounded preceding frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 47100
2 | personnel | 3900 | 32500
3 | sales | 4800 | 47100
4 | sales | 4800 | 47100
5 | personnel | 3500 | 32500
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE UNBOUNDED PRECEDING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithoutOrderBy_Error("1.0")
)
def start_expr_following_without_order_by_error(self):
"""Check range expr following frame without order by returns an error.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE 1 FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithOrderBy_Error("1.0")
)
def start_expr_following_with_order_by_error(self):
"""Check range expr following frame with order by returns an error.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE 1 FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithOrderBy("1.0")
)
def start_expr_preceding_with_order_by(self):
"""Check range expr preceding frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 5000
2 | personnel | 3900 | 3900
3 | sales | 4800 | 9600
4 | sales | 4800 | 9600
5 | personnel | 3500 | 3500
7 | develop | 4200 | 4200
8 | develop | 6000 | 6000
9 | develop | 4500 | 4500
10 | develop | 5200 | 10400
11 | develop | 5200 | 10400
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE 1 PRECEDING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
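With ORDER BY salary RANGE 1 PRECEDING, each row's frame covers the rows whose salary lies in [salary - 1, salary], which also pulls in the current row's peers. A pure-Python sketch over the same salaries (hypothetical helper, for illustration) matches the expected sums:

```python
def range_expr_preceding_sums(values, offset):
    """Frame for value v: all rows w with v - offset <= w <= v."""
    return [sum(w for w in values if v - offset <= w <= v) for v in values]

# salaries from empsalary, ordered by empno
salaries = [5000, 3900, 4800, 4800, 3500, 4200, 6000, 4500, 5200, 5200]
sums = range_expr_preceding_sums(salaries, 1)
```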
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_OrderByNonNumericalColumn_Error("1.0")
)
def start_expr_preceding_order_by_non_numerical_column_error(self):
"""Check range expr preceding frame with order by non-numerical column returns an error.
"""
exitcode, message = frame_range_offset_error()
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithoutOrderBy_Error("1.0")
)
def start_expr_preceding_without_order_by_error(self):
"""Check range expr preceding frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_CurrentRow("1.0")
)
def between_current_row_and_current_row(self):
"""Check range between current row and current row frame with or without order by.
"""
with Example("without order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno",
expected=expected
)
with Example("with order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+------
7 | develop | 4200 | 4200
8 | develop | 6000 | 6000
9 | develop | 4500 | 4500
10 | develop | 5200 | 5200
11 | develop | 5200 | 5200
""")
execute_query(
"SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop'",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedPreceding_Error("1.0")
)
def between_current_row_and_unbounded_preceding_error(self):
"""Check range between current row and unbounded preceding frame with or without order by returns an error.
"""
exitcode, message = frame_end_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedFollowing("1.0")
)
def between_current_row_and_unbounded_following(self):
"""Check range between current row and unbounded following frame with or without order by.
"""
with Example("without order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno",
expected=expected
)
with Example("with order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
7 | develop | 4200 | 25100
8 | develop | 6000 | 20900
9 | develop | 4500 | 14900
10 | develop | 5200 | 10400
11 | develop | 5200 | 5200
""")
execute_query(
"SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS sum FROM empsalary WHERE depname = 'develop'",
expected=expected
)
with Example("with order by from tenk1"):
expected = convert_output("""
sum | unique1 | four
-----+---------+------
45 | 0 | 0
33 | 1 | 1
18 | 2 | 2
10 | 3 | 3
45 | 4 | 0
33 | 5 | 1
18 | 6 | 2
10 | 7 | 3
45 | 8 | 0
33 | 9 | 1
""")
execute_query(
"SELECT * FROM (SELECT sum(unique1) over (order by four range between current row and unbounded following) AS sum,"
"unique1, four "
"FROM tenk1 WHERE unique1 < 10) ORDER BY unique1",
expected=expected
)
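In the tenk1 example above, the frame for each row spans its peers (rows with the same `four` value) plus every row later in the ORDER BY order, which is why all rows with the same `four` get the same sum. A pure-Python sketch (illustrative helper, not part of this suite) over the same (four, unique1) pairs:

```python
def suffix_sums_with_peers(rows):
    """Frame for (key, value): all rows whose ORDER BY key is >= the
    current row's key, i.e. the current row, its peers, and followers."""
    return [sum(v for k, v in rows if k >= key) for key, _ in rows]

# (four, unique1) for unique1 < 10, ordered by unique1
tenk1 = [(0, 0), (1, 1), (2, 2), (3, 3), (0, 4),
         (1, 5), (2, 6), (3, 7), (0, 8), (1, 9)]
sums = suffix_sums_with_peers(tenk1)
```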
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithoutOrderBy_Error("1.0")
)
def between_current_row_and_expr_following_without_order_by_error(self):
"""Check range between current row and expr following frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3)",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithOrderBy("1.0")
)
def between_current_row_and_expr_following_with_order_by(self):
"""Check range between current row and expr following frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 8900
2 | personnel | 3900 | 8700
3 | sales | 4800 | 9600
4 | sales | 4800 | 8300
5 | personnel | 3500 | 3500
7 | develop | 4200 | 10200
8 | develop | 6000 | 10500
9 | develop | 4500 | 9700
10 | develop | 5200 | 10400
11 | develop | 5200 | 5200
""")
execute_query(
"SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) AS sum FROM empsalary",
expected=expected
)
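Because RANGE bounds are value-based rather than row-based, the frame for ORDER BY empno RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING covers empno values in [empno, empno + 1], so empno 5 (whose successor empno 6 does not exist) frames only itself. A pure-Python sketch (hypothetical helper) over the same data:

```python
def range_current_to_following_sums(rows, offset):
    """Frame for (key, value): rows whose key lies in [key, key + offset]."""
    return [sum(v for k, v in rows if key <= k <= key + offset)
            for key, _ in rows]

# (empno, salary) rows; note the gap at empno 6
empsalary = [(1, 5000), (2, 3900), (3, 4800), (4, 4800), (5, 3500),
             (7, 4200), (8, 6000), (9, 4500), (10, 5200), (11, 5200)]
sums = range_current_to_following_sums(empsalary, 1)
```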
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprPreceding_Error("1.0")
)
def between_current_row_and_expr_preceding_error(self):
"""Check range between current row and expr preceding frame with or without order by returns an error.
"""
exitcode, message = window_frame_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_CurrentRow("1.0")
)
def between_unbounded_preceding_and_current_row(self):
"""Check range between unbounded preceding and current row frame with and without order by.
"""
with Example("with order by"):
expected = convert_output("""
four | ten | sum | last_value
------+-----+-----+------------
0 | 0 | 0 | 0
0 | 2 | 2 | 2
0 | 4 | 6 | 4
0 | 6 | 12 | 6
0 | 8 | 20 | 8
1 | 1 | 1 | 1
1 | 3 | 4 | 3
1 | 5 | 9 | 5
1 | 7 | 16 | 7
1 | 9 | 25 | 9
2 | 0 | 0 | 0
2 | 2 | 2 | 2
2 | 4 | 6 | 4
2 | 6 | 12 | 6
2 | 8 | 20 | 8
3 | 1 | 1 | 1
3 | 3 | 4 | 3
3 | 5 | 9 | 5
3 | 7 | 16 | 7
3 | 9 | 25 | 9
""")
execute_query(
"SELECT four, ten,"
"sum(ten) over (partition by four order by ten range between unbounded preceding and current row) AS sum,"
"last_value(ten) over (partition by four order by ten range between unbounded preceding and current row) AS last_value "
"FROM (select distinct ten, four from tenk1)",
expected=expected
)
with Example("without order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
7 | develop | 4200 | 25100
8 | develop | 6000 | 25100
9 | develop | 4500 | 25100
10 | develop | 5200 | 25100
11 | develop | 5200 | 25100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno",
expected=expected
)
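For RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW the frame is a cumulative prefix that also includes the current row's peers. A pure-Python sketch (illustrative only; the `ten` values are distinct within each `four` partition, so peers do not change the result here) reproduces the four = 0 column above:

```python
def cumulative_sums_with_peers(values):
    """Frame for value v: all rows w with w <= v (peers included)."""
    return [sum(w for w in values if w <= v) for v in values]

tens = [0, 2, 4, 6, 8]   # distinct ten values for one four partition
sums = cumulative_sums_with_peers(tens)
```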
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedPreceding_Error("1.0")
)
def between_unbounded_preceding_and_unbounded_preceding_error(self):
"""Check range between unbounded preceding and unbounded preceding frame with or without order by returns an error.
"""
exitcode, message = frame_end_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedFollowing("1.0")
)
def between_unbounded_preceding_and_unbounded_following(self):
"""Check range between unbounded preceding and unbounded following range with and without order by.
"""
with Example("with order by"):
expected = convert_output("""
four | ten | sum | last_value
------+-----+-----+------------
0 | 0 | 20 | 8
0 | 2 | 20 | 8
0 | 4 | 20 | 8
0 | 6 | 20 | 8
0 | 8 | 20 | 8
1 | 1 | 25 | 9
1 | 3 | 25 | 9
1 | 5 | 25 | 9
1 | 7 | 25 | 9
1 | 9 | 25 | 9
2 | 0 | 20 | 8
2 | 2 | 20 | 8
2 | 4 | 20 | 8
2 | 6 | 20 | 8
2 | 8 | 20 | 8
3 | 1 | 25 | 9
3 | 3 | 25 | 9
3 | 5 | 25 | 9
3 | 7 | 25 | 9
3 | 9 | 25 | 9
""")
execute_query(
"SELECT four, ten, "
"sum(ten) over (partition by four order by ten range between unbounded preceding and unbounded following) AS sum, "
"last_value(ten) over (partition by four order by ten range between unbounded preceding and unbounded following) AS last_value "
"FROM (select distinct ten, four from tenk1)",
expected=expected
)
with Example("without order by"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 47100
2 | personnel | 3900 | 47100
3 | sales | 4800 | 47100
4 | sales | 4800 | 47100
5 | personnel | 3500 | 47100
7 | develop | 4200 | 47100
8 | develop | 6000 | 47100
9 | develop | 4500 | 47100
10 | develop | 5200 | 47100
11 | develop | 5200 | 47100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithoutOrderBy_Error("1.0")
)
def between_unbounded_preceding_and_expr_following_without_order_by_error(self):
"""Check range between unbounded preceding and expr following frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithoutOrderBy_Error("1.0")
)
def between_unbounded_preceding_and_expr_preceding_without_order_by_error(self):
"""Check range between unbounded preceding and expr preceding frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithOrderBy("1.0")
)
def between_unbounded_preceding_and_expr_following_with_order_by(self):
"""Check range between unbounded preceding and expr following frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 41100
2 | personnel | 3900 | 11600
3 | sales | 4800 | 41100
4 | sales | 4800 | 41100
5 | personnel | 3500 | 7400
7 | develop | 4200 | 16100
8 | develop | 6000 | 47100
9 | develop | 4500 | 30700
10 | develop | 5200 | 41100
11 | develop | 5200 | 41100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND 500 FOLLOWING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
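Here the frame end is value-based: for each row the frame covers every row whose salary is at most salary + 500. A pure-Python sketch (hypothetical helper) over the empsalary salaries matches the expected sums:

```python
def range_upper_bound_sums(values, offset):
    """Frame for value v: all rows w with w <= v + offset."""
    return [sum(w for w in values if w <= v + offset) for v in values]

# salaries from empsalary, ordered by empno
salaries = [5000, 3900, 4800, 4800, 3500, 4200, 6000, 4500, 5200, 5200]
sums = range_upper_bound_sums(salaries, 500)
```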
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithOrderBy("1.0")
)
def between_unbounded_preceding_and_expr_preceding_with_order_by(self):
"""Check range between unbounded preceding and expr preceding frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 16100
2 | personnel | 3900 | 0
3 | sales | 4800 | 11600
4 | sales | 4800 | 11600
5 | personnel | 3500 | 0
7 | develop | 4200 | 3500
8 | develop | 6000 | 41100
9 | develop | 4500 | 7400
10 | develop | 5200 | 16100
11 | develop | 5200 | 16100
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND 500 PRECEDING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
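With an expr preceding frame end, the frame stops short of the current row: it covers every row whose salary is at most salary - 500, which can be empty (sum 0). A pure-Python sketch (hypothetical helper) over the same salaries:

```python
def range_preceding_upper_bound_sums(values, offset):
    """Frame for value v: all rows w with w <= v - offset."""
    return [sum(w for w in values if w <= v - offset) for v in values]

# salaries from empsalary, ordered by empno
salaries = [5000, 3900, 4800, 4800, 3500, 4200, 6000, 4500, 5200, 5200]
sums = range_preceding_upper_bound_sums(salaries, 500)
```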
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_CurrentRow_Error("1.0")
)
def between_unbounded_following_and_current_row_error(self):
"""Check range between unbounded following and current row frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedFollowing_Error("1.0")
)
def between_unbounded_following_and_unbounded_following_error(self):
"""Check range between unbounded following and unbounded following frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedPreceding_Error("1.0")
)
def between_unbounded_following_and_unbounded_preceding_error(self):
"""Check range between unbounded following and unbounded preceding frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprPreceding_Error("1.0")
)
def between_unbounded_following_and_expr_preceding_error(self):
"""Check range between unbounded following and expr preceding frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprFollowing_Error("1.0")
)
def between_unbounded_following_and_expr_following_error(self):
"""Check range between unbounded following and expr following frame with or without order by returns an error.
"""
exitcode, message = frame_start_error()
with Example("without order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
with Example("with order by"):
self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 FOLLOWING) AS sum FROM empsalary",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithoutOrderBy_Error("1.0")
)
def between_expr_preceding_and_current_row_without_order_by_error(self):
"""Check range between expr preceding and current row frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithoutOrderBy_Error("1.0")
)
def between_expr_preceding_and_unbounded_following_without_order_by_error(self):
"""Check range between expr preceding and unbounded following frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithoutOrderBy_Error("1.0")
)
def between_expr_preceding_and_expr_following_without_order_by_error(self):
"""Check range between expr preceding and expr following frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithoutOrderBy_Error("1.0")
)
def between_expr_preceding_and_expr_preceding_without_order_by_error(self):
"""Check range between expr preceding and expr preceding frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedPreceding_Error("1.0")
)
def between_expr_preceding_and_unbounded_preceding_error(self):
"""Check range between expr preceding and unbounded preceding frame with or without order by returns an error.
"""
exitcode, message = frame_end_unbounded_preceding_error()
with Example("without order by"):
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
with Example("with order by"):
        self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithOrderBy("1.0")
)
def between_expr_preceding_and_current_row_with_order_by(self):
"""Check range between expr preceding and current row frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 5000
2 | personnel | 3900 | 8900
3 | sales | 4800 | 13700
4 | sales | 4800 | 18500
5 | personnel | 3500 | 22000
7 | develop | 4200 | 26200
8 | develop | 6000 | 32200
9 | develop | 4500 | 36700
10 | develop | 5200 | 41900
11 | develop | 5200 | 47100
""")
execute_query(
"SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN 500 PRECEDING AND CURRENT ROW) AS sum FROM empsalary",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithOrderBy("1.0")
)
def between_expr_preceding_and_unbounded_following_with_order_by(self):
"""Check range between expr preceding and unbounded following frame with order by.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 35500
2 | personnel | 3900 | 47100
3 | sales | 4800 | 35500
4 | sales | 4800 | 35500
5 | personnel | 3500 | 47100
7 | develop | 4200 | 43600
8 | develop | 6000 | 6000
9 | develop | 4500 | 39700
10 | develop | 5200 | 31000
11 | develop | 5200 | 31000
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithOrderBy("1.0")
)
def between_expr_preceding_and_expr_following_with_order_by(self):
"""Check range between expr preceding and expr following frame with order by.
"""
with Example("empsalary"):
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 29500
2 | personnel | 3900 | 11600
3 | sales | 4800 | 29500
4 | sales | 4800 | 29500
5 | personnel | 3500 | 7400
7 | develop | 4200 | 12600
8 | develop | 6000 | 6000
9 | develop | 4500 | 23300
10 | develop | 5200 | 25000
11 | develop | 5200 | 25000
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 PRECEDING AND 500 FOLLOWING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
with Example("tenk1"):
expected = convert_output("""
sum | unique1 | four
-----+---------+------
4 | 0 | 0
12 | 4 | 0
12 | 8 | 0
6 | 1 | 1
15 | 5 | 1
14 | 9 | 1
8 | 2 | 2
8 | 6 | 2
10 | 3 | 3
10 | 7 | 3
""")
execute_query(
"SELECT sum(unique1) over (partition by four order by unique1 range between 5 preceding and 6 following) AS sum, "
"unique1, four "
"FROM tenk1 WHERE unique1 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy("1.0")
)
def between_expr_preceding_and_expr_preceding_with_order_by(self):
    """Check range between expr preceding and expr preceding frame with order by.
"""
with Example("order by asc"):
expected = convert_output("""
sum | unique1 | four
-----+---------+------
0 | 0 | 0
0 | 4 | 0
0 | 8 | 0
12 | 1 | 1
12 | 5 | 1
12 | 9 | 1
27 | 2 | 2
27 | 6 | 2
23 | 3 | 3
23 | 7 | 3
""")
execute_query(
"SELECT * FROM (SELECT sum(unique1) over (order by four range between 2 preceding and 1 preceding) AS sum, "
"unique1, four "
"FROM tenk1 WHERE unique1 < 10) ORDER BY four, unique1",
expected=expected
)
with Example("order by desc"):
expected = convert_output("""
sum | unique1 | four
-----+---------+------
23 | 0 | 0
23 | 4 | 0
23 | 8 | 0
18 | 1 | 1
18 | 5 | 1
18 | 9 | 1
10 | 2 | 2
10 | 6 | 2
0 | 3 | 3
0 | 7 | 3
""")
execute_query(
"SELECT * FROM (SELECT sum(unique1) over (order by four desc range between 2 preceding and 1 preceding) AS sum, "
"unique1, four "
"FROM tenk1 WHERE unique1 < 10) ORDER BY four, unique1",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy_Error("1.0")
)
def between_expr_preceding_and_expr_preceding_with_order_by_error(self):
    """Check range between expr preceding and expr preceding frame with order by returns an error
    when the frame end is before the frame start.
"""
exitcode, message = frame_start_error()
    self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithoutOrderBy_Error("1.0")
)
def between_expr_following_and_current_row_without_order_by_error(self):
"""Check range between expr following and current row frame without order by returns an error.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithoutOrderBy_Error("1.0")
)
def between_expr_following_and_unbounded_following_without_order_by_error(self):
"""Check range between expr following and unbounded following frame without order by returns an error.
"""
exitcode, message = frame_requires_order_by_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithoutOrderBy_Error("1.0")
)
def between_expr_following_and_expr_following_without_order_by_error(self):
"""Check range between expr following and expr following frame without order by returns an error.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithoutOrderBy_Error("1.0")
)
def between_expr_following_and_expr_preceding_without_order_by_error(self):
"""Check range between expr following and expr preceding frame without order by returns an error.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedPreceding_Error("1.0")
)
def between_expr_following_and_unbounded_preceding_error(self):
"""Check range between expr following and unbounded preceding frame with or without order by returns an error.
"""
exitcode, message = frame_end_unbounded_preceding_error()
with Example("without order by"):
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
with Example("with order by"):
        self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithOrderBy_Error("1.0")
)
def between_expr_following_and_current_row_with_order_by_error(self):
    """Check range between expr following and current row frame with order by returns an error
    when expr is greater than 0.
"""
exitcode, message = window_frame_error()
self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_Error("1.0")
)
def between_expr_following_and_expr_preceding_error(self):
    """Check range between expr following and expr preceding frame returns an error
    when either expr is not 0.
"""
exitcode, message = frame_start_error()
with Example("1 following 0 preceding"):
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
    with Example("0 following 1 preceding"):
self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy_Error("1.0")
)
def between_expr_following_and_expr_following_with_order_by_error(self):
    """Check range between expr following and expr following frame with order by returns an error
    when the expr for the frame end is less than the expr for the frame start.
"""
exitcode, message = frame_start_error()
self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))",
exitcode=exitcode, message=message)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_ZeroSpecialCase("1.0")
)
def between_expr_following_and_current_row_zero_special_case(self):
    """Check range between expr following and current row frame for the special case when expr is 0.
It is expected to work.
"""
with When("I use it with order by"):
expected = convert_output("""
number | sum
---------+------
1 | 2
1 | 2
2 | 2
3 | 3
""")
execute_query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) AS sum FROM values('number Int8', (1),(1),(2),(3))",
expected=expected
)
with And("I use it without order by"):
expected = convert_output("""
number | sum
---------+------
1 | 7
1 | 7
2 | 7
3 | 7
""")
execute_query(
"SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) AS sum FROM values('number Int8', (1),(1),(2),(3))",
expected=expected
)
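The zero special case above works because with a `RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW` frame and an `ORDER BY`, the frame collapses to the peer group of the current row (all rows with an equal `ORDER BY` value). A minimal pure-Python sketch of that semantics; the helper name `peer_group_sum` is hypothetical and is not used by any scenario:

```python
def peer_group_sum(values):
    """For each value, in sorted order, sum its peer group (rows with an equal value)."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    # each row's frame is exactly its peers, i.e. value * multiplicity
    return [v * counts[v] for v in sorted(values)]
```

For the values used above, `peer_group_sum([1, 1, 2, 3])` gives `[2, 2, 2, 3]`, matching the expected output of the "with order by" example.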
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithOrderBy("1.0")
)
def between_expr_following_and_unbounded_following_with_order_by(self):
    """Check range between expr following and unbounded following frame with order by.
"""
expected = convert_output("""
number | sum
---------+------
1 | 5
1 | 5
2 | 3
3 | 0
""")
execute_query(
"SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM values('number Int8', (1),(1),(2),(3))",
expected=expected
)
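The expected sums above follow directly from the RANGE frame definition: with a frame from `expr FOLLOWING` to `UNBOUNDED FOLLOWING`, each row sums every value greater than or equal to the current value plus `expr`. A hedged pure-Python sketch (the helper name `following_sum` is hypothetical):

```python
def following_sum(values, start):
    """For each value, in sorted order, sum all values >= current value + start."""
    values = sorted(values)
    return [sum(v for v in values if v >= x + start) for x in values]
```

For the values used above, `following_sum([1, 1, 2, 3], 1)` gives `[5, 5, 3, 0]`, matching the expected output.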
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithOrderBy_ZeroSpecialCase("1.0")
)
def between_expr_following_and_expr_preceding_with_order_by_zero_special_case(self):
    """Check range between expr following and expr preceding frame for the special case when expr is 0.
It is expected to work.
"""
expected = convert_output("""
number | sum
---------+------
1 | 2
1 | 2
2 | 2
3 | 3
""")
execute_query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) AS sum FROM values('number Int8', (1),(1),(2),(3))",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy("1.0")
)
def between_expr_following_and_expr_following_with_order_by(self):
"""Check range between expr following and expr following frame with order by when frame start
is before frame end.
"""
expected = convert_output("""
empno | depname | salary | sum
--------+-----------+--------+---------
1 | sales | 5000 | 6000
2 | personnel | 3900 | 14100
3 | sales | 4800 | 0
4 | sales | 4800 | 0
5 | personnel | 3500 | 8700
7 | develop | 4200 | 25000
8 | develop | 6000 | 0
9 | develop | 4500 | 15400
10 | develop | 5200 | 6000
11 | develop | 5200 | 6000
""")
execute_query(
"SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 FOLLOWING AND 1000 FOLLOWING) AS sum FROM empsalary) ORDER BY empno",
expected=expected
)
@TestScenario
def between_unbounded_preceding_and_current_row_with_expressions_in_order_by_and_aggregate(self):
    """Check range between unbounded preceding and current row with
    expressions used in the order by clause and aggregate functions.
"""
expected = convert_output("""
four | two | sum | last_value
------+-----+-----+------------
0 | 0 | 0 | 0
0 | 0 | 0 | 0
0 | 1 | 2 | 1
0 | 1 | 2 | 1
0 | 2 | 4 | 2
1 | 0 | 0 | 0
1 | 0 | 0 | 0
1 | 1 | 2 | 1
1 | 1 | 2 | 1
1 | 2 | 4 | 2
2 | 0 | 0 | 0
2 | 0 | 0 | 0
2 | 1 | 2 | 1
2 | 1 | 2 | 1
2 | 2 | 4 | 2
3 | 0 | 0 | 0
3 | 0 | 0 | 0
3 | 1 | 2 | 1
3 | 1 | 2 | 1
3 | 2 | 4 | 2
""")
execute_query(
"SELECT four, toInt8(ten/4) as two, "
"sum(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) range between unbounded preceding and current row) AS sum, "
"last_value(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) range between unbounded preceding and current row) AS last_value "
"FROM (select distinct ten, four from tenk1)",
expected=expected
)
@TestScenario
def between_current_row_and_unbounded_following_modifying_named_window(self):
"""Check range between current row and unbounded following when
modifying named window.
"""
expected = convert_output("""
sum | unique1 | four
-----+---------+------
45 | 0 | 0
45 | 8 | 0
45 | 4 | 0
33 | 5 | 1
33 | 9 | 1
33 | 1 | 1
18 | 6 | 2
18 | 2 | 2
10 | 3 | 3
10 | 7 | 3
""")
execute_query(
"SELECT * FROM (SELECT sum(unique1) over (w range between current row and unbounded following) AS sum,"
"unique1, four "
"FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four)) ORDER BY unique1",
expected=expected
)
@TestScenario
def between_current_row_and_unbounded_following_in_named_window(self):
"""Check range between current row and unbounded following in named window.
"""
expected = convert_output("""
first_value | last_value | unique1 | four
-------------+------------+---------+------
0 | 9 | 0 | 0
1 | 9 | 1 | 1
2 | 9 | 2 | 2
3 | 9 | 3 | 3
4 | 9 | 4 | 0
5 | 9 | 5 | 1
6 | 9 | 6 | 2
7 | 9 | 7 | 3
8 | 9 | 8 | 0
9 | 9 | 9 | 1
""")
execute_query(
"SELECT first_value(unique1) over w AS first_value, "
"last_value(unique1) over w AS last_value, unique1, four "
"FROM tenk1 WHERE unique1 < 10 "
"WINDOW w AS (order by unique1 range between current row and unbounded following)",
expected=expected
)
@TestScenario
def between_expr_preceding_and_expr_following_with_partition_by_two_columns(self):
    """Check range between expr following and expr following frame with partition
    by two int value columns.
"""
expected = convert_output("""
f1 | sum
----+-----
1 | 0
2 | 0
""")
execute_query(
"""
select f1, sum(f1) over (partition by f1, f2 order by f2
range between 1 following and 2 following) AS sum
from t1 where f1 = f2
""",
expected=expected
)
@TestScenario
def between_expr_preceding_and_expr_following_with_partition_by_same_column_twice(self):
    """Check range between expr preceding and expr preceding frame with partition
    by the same column twice.
"""
expected = convert_output("""
f1 | sum
----+-----
1 | 0
2 | 0
""")
execute_query(
"""
select * from (select f1, sum(f1) over (partition by f1, f1 order by f2
range between 2 preceding and 1 preceding) AS sum
from t1 where f1 = f2) order by f1, sum
""",
expected=expected
)
@TestScenario
def between_expr_preceding_and_expr_following_with_partition_and_order_by(self):
"""Check range between expr preceding and expr following frame used
with partition by and order by clauses.
"""
expected = convert_output("""
f1 | sum
----+-----
1 | 1
2 | 2
""")
execute_query(
"""
select f1, sum(f1) over (partition by f1 order by f2
range between 1 preceding and 1 following) AS sum
from t1 where f1 = f2
""",
expected=expected
)
@TestScenario
def order_by_decimal(self):
"""Check using range with order by decimal column.
"""
expected = convert_output("""
id | f_numeric | first_value | last_value
----+-----------+-------------+------------
0 | -1000 | 0 | 0
1 | -3 | 1 | 1
2 | -1 | 2 | 3
3 | 0 | 2 | 4
4 | 1.1 | 4 | 6
5 | 1.12 | 4 | 6
6 | 2 | 4 | 6
7 | 100 | 7 | 7
8 | 1000 | 8 | 8
9 | 0 | 9 | 9
""")
execute_query(
"""
select id, f_numeric, first_value(id) over w AS first_value, last_value(id) over w AS last_value
from numerics
window w as (order by f_numeric range between
1 preceding and 1 following)
""",
expected=expected
)
@TestScenario
def order_by_float(self):
"""Check using range with order by float column.
"""
expected = convert_output("""
id | f_float4 | first_value | last_value
----+-----------+-------------+------------
0 | -inf | 0 | 0
1 | -3 | 1 | 1
2 | -1 | 2 | 3
3 | 0 | 2 | 3
4 | 1.1 | 4 | 6
5 | 1.12 | 4 | 6
6 | 2 | 4 | 6
7 | 100 | 7 | 7
8 | inf | 8 | 8
9 | nan | 8 | 8
""")
execute_query(
"""
select id, f_float4, first_value(id) over w AS first_value, last_value(id) over w AS last_value
from numerics
window w as (order by f_float4 range between
1 preceding and 1 following)
""",
expected=expected
)
@TestScenario
def with_nulls(self):
"""Check using range frame over window with nulls.
"""
expected = convert_output("""
x | y | first_value | last_value
---+----+-------------+------------
\\N | 42 | 42 | 43
\\N | 43 | 42 | 43
1 | 1 | 1 | 3
2 | 2 | 1 | 4
3 | 3 | 1 | 5
4 | 4 | 2 | 5
5 | 5 | 3 | 5
""")
execute_query(
"""
select x, y,
first_value(y) over w AS first_value,
last_value(y) over w AS last_value
from
(select number as x, x as y from numbers(1,5)
union all select null, 42
union all select null, 43)
window w as
(order by x asc nulls first range between 2 preceding and 2 following)
""",
expected=expected
)
@TestFeature
@Name("range frame")
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame("1.0"),
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_IntAndUInt("1.0")
)
def feature(self):
"""Check defining range frame.
"""
for scenario in loads(current_module(), Scenario):
Scenario(run=scenario, flags=TE)
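For reference, the `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` semantics exercised throughout this feature can be reproduced in pure Python: the frame of a row with `ORDER BY` value `x` contains every value `v` with `x - expr <= v <= x`, peers of the current row included. Illustrative only; the helper name `range_preceding_sum` is hypothetical:

```python
def range_preceding_sum(values, preceding):
    """For each value, in sorted order, sum all values within `preceding` below it."""
    values = sorted(values)
    return [sum(v for v in values if x - preceding <= v <= x) for x in values]
```

For example, `range_preceding_sum([1, 1, 2, 3], 1)` gives `[2, 2, 4, 5]`: each row includes its peers plus any value at most 1 below it.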
92fd382d04b29c39f43781639c8e900086a7f407 | 1,980 | py | Python | models.py | MoonHyuk/BOJ-statistics | 016a51e0b336fa4bf533ff13ba6401e5465226b6 | [
"MIT"
] | 62 | 2017-07-23T11:50:23.000Z | 2021-01-16T09:50:58.000Z | models.py | MoonHyuk/BOJ-statistics | 016a51e0b336fa4bf533ff13ba6401e5465226b6 | [
"MIT"
] | 12 | 2017-08-21T01:46:40.000Z | 2019-04-12T11:33:05.000Z | models.py | MoonHyuk/BOJ-statistics | 016a51e0b336fa4bf533ff13ba6401e5465226b6 | [
"MIT"
] | 17 | 2017-09-22T12:16:09.000Z | 2020-03-18T05:39:26.000Z | import json
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import JSON
db = SQLAlchemy()
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
boj_id = db.Column(db.String(20), unique=True, nullable=False)
intro = db.Column(db.String(100), default="")
tobcoder_id = db.Column(db.String(20), default="")
tobcoder_rating = db.Column(db.Integer, default=0)
codeforce_id = db.Column(db.String(20), default="")
codeforce_rating = db.Column(db.Integer, default=0)
update_time = db.Column(db.DateTime)
solved_num = db.Column(db.Integer, default=0)
class Submission(db.Model):
id = db.Column(db.Integer, primary_key=True)
submit_id = db.Column(db.Integer, unique=True, nullable=False)
problem_id = db.Column(db.Integer, nullable=False)
problem_name = db.Column(db.String, nullable=False)
boj_id = db.Column(db.String(20), db.ForeignKey("user.boj_id"), nullable=False)
result = db.Column(db.Integer, nullable=False)
language = db.Column(db.String(20), nullable=False)
memory = db.Column(db.Integer, nullable=False)
time = db.Column(db.Integer, nullable=False)
code_length = db.Column(db.Integer, nullable=False)
datetime = db.Column(db.DateTime, nullable=False)
class Ranking(db.Model):
id = db.Column(db.Integer, primary_key=True)
boj_id = db.Column(db.String(20), nullable=False)
ranking = db.Column(JSON)
class AcceptedSubmission(db.Model):
id = db.Column(db.Integer, primary_key=True)
submit_id = db.Column(db.Integer, unique=True, nullable=False)
problem_id = db.Column(db.Integer, nullable=False)
boj_id = db.Column(db.String(20), db.ForeignKey("user.boj_id"), nullable=False)
language = db.Column(db.String(20), nullable=False)
memory = db.Column(db.Integer, nullable=False)
time = db.Column(db.Integer, nullable=False)
code_length = db.Column(db.Integer, nullable=False)
datetime = db.Column(db.DateTime, nullable=False)
| 38.823529 | 83 | 0.711111 | 291 | 1,980 | 4.756014 | 0.154639 | 0.184971 | 0.223988 | 0.221098 | 0.770954 | 0.770954 | 0.740607 | 0.647399 | 0.647399 | 0.647399 | 0 | 0.013018 | 0.146465 | 1,980 | 50 | 84 | 39.6 | 0.805917 | 0 | 0 | 0.5 | 0 | 0 | 0.011111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.075 | 0 | 0.975 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
92fd418fa23db74c111465e83a5fcf4d099cd233 | 49,456 | py | Python | matdgl/data/crystalgraph.py | huzongxiang/CrystalNetwork | a434f76fa4347d42b3c905852ce265cd0bcefca3 | [
"BSD-2-Clause"
] | 6 | 2022-03-30T13:47:03.000Z | 2022-03-31T09:27:46.000Z | matdgl/data/crystalgraph.py | huzongxiang/CrystalNetwork | a434f76fa4347d42b3c905852ce265cd0bcefca3 | [
"BSD-2-Clause"
] | null | null | null | matdgl/data/crystalgraph.py | huzongxiang/CrystalNetwork | a434f76fa4347d42b3c905852ce265cd0bcefca3 | [
"BSD-2-Clause"
] | 2 | 2022-03-30T20:53:11.000Z | 2022-03-31T22:20:05.000Z | # -*- coding: utf-8 -*-
"""
Created on Wed Jun 9 11:39:28 2021
@author: huzongxiang
"""
import time
import json
import numpy as np
from tqdm import tqdm
from pathlib import Path
from operator import itemgetter
from multiprocessing import Pool
import tensorflow as tf
from tensorflow.keras.utils import Sequence
from tensorflow.keras.utils import to_categorical
from matdgl.utils import Features
from typing import Union, Dict, List, Set
from .embedding import Mendeleev_property, GaussianDistance, MultiPropertyFeatures, Embedding_edges
from pymatgen.core import Structure
from pymatgen.analysis.local_env import NearNeighbors, VoronoiNN
from matdgl.utils.get_nn import get_nn_info
from matdgl.utils import get_space_group_number
ModulePath = Path(__file__).parent.absolute()
class LabelledCrystalGraphBase():
def __init__(self, strategy: Union[None, NearNeighbors]=VoronoiNN(cutoff=18.0)
):
"""
Parameters
----------
strategy : Union[str, NearNeighbors], optional
DESCRIPTION. The default is 'VoronoiNN'.
Raises
------
RuntimeError
DESCRIPTION.
Returns
-------
None.
"""
if isinstance(strategy, NearNeighbors):
self.strategy = strategy
self.properties = None
with open(Path(ModulePath/"mendeleev.json"),'r') as f:
self.properties = json.load(f)
if self.properties is None:
self.properties = Mendeleev_property.get_mendeleev_properties()
def get_graph(self, structure: Structure) -> Dict:
"""
Parameters
----------
structure : pymatgen.core.structure.Structure
Feed with pymatgen.core.structure.Structure and produce the graph.
Raises
------
RuntimeError
DESCRIPTION.
Returns
-------
Dict
            Labelled graph:
            state_attributes are global attributes (here, the symmetry),
            oxide_states follow the order of the atoms,
            atoms are the nodes of the graph,
            bonds are the distances between connected nodes,
            images give the direction of each bond along a, b, c,
            pair_indices are the node indices of the edges, including self-loop edges,
            lattice holds the a, b, c vectors of the structure,
            cart_coords are the Cartesian coordinates.
"""
lattice = np.array(structure.as_dict()['lattice']['matrix'])
cart_coords = structure.cart_coords
space_group_number = get_space_group_number(structure)
state_attributes = np.array([space_group_number],dtype="int32")
node1 = []
node2 = []
bonds = []
images = []
for atom, neighbors in enumerate(self.strategy.get_all_nn_info(structure)):
node1.extend([atom] * (len(neighbors) + 1))
node2.append(atom)
bonds.append(0.0)
images.append((0,0,0))
for neighbor in neighbors:
node2.append(neighbor["site_index"])
bonds.append(neighbor["weight"])
images.append(neighbor['image'])
atoms = self.get_Z_number(structure)
pair_indices = list(zip(node1,node2))
if np.size(np.unique(node1)) < len(atoms):
raise RuntimeError("Isolated atoms found in the structure")
return {Features.atom: atoms, Features.bond: bonds, Features.state: state_attributes,
Features.pair_indices: pair_indices, Features.image: images,
Features.lattice: lattice, Features.cart_coords: cart_coords}
@staticmethod
def get_Z_number(structure: Structure) -> List:
"""
Parameters
----------
        structure : Structure
            Get atomic numbers from pymatgen.core.structure.Structure.
            The structure.atomic_numbers property returns the same values.
Returns
-------
List
DESCRIPTION.
"""
return np.array([i.specie.Z for i in structure], dtype="int32")
def _local_coordinates(self, graph: Dict) -> List:
"""
Parameters
----------
graph : Dict
DESCRIPTION.
Returns
-------
TYPE
            a series of polar vectors used to build the local coordinate frame.
"""
pair_indices = graph[Features.pair_indices]
images = graph[Features.image]
lattice = graph[Features.lattice]
a, b, c = lattice[0], lattice[1], lattice[2]
cart_coords = graph[Features.cart_coords]
local_env = []
for idx, pair_indice in enumerate(pair_indices):
image = images[idx]
            node_send, node_receive = pair_indice[1], pair_indice[0]
            polar = cart_coords[node_receive] - cart_coords[node_send] - \
                    a*image[0] - b*image[1] - c*image[2]
            vertical = np.array([polar[1], -polar[0], 0.0])
            local_env.append(np.array([polar[0], polar[1], polar[2], vertical[0], vertical[1], vertical[2]]))
return local_env
def graph_to_input(self, graph: Dict) -> List[np.ndarray]:
"""
Parameters
----------
graph : Dict
DESCRIPTION.
Returns
-------
list
DESCRIPTION.
"""
atom_num_pairs = [[graph[Features.atom][pair[0]], graph[Features.atom][pair[1]]] for pair in graph[Features.pair_indices]]
distance_features = Embedding_edges(converter=GaussianDistance()).embedding(graph[Features.bond])
multi_properties = Embedding_edges(converter=MultiPropertyFeatures(self.properties)).embedding(atom_num_pairs)
bond_features = np.concatenate([distance_features, multi_properties], axis=1)
local_env = self._local_coordinates(graph)
return [
np.array(graph[Features.atom], dtype=np.int32),
np.array(bond_features),
np.array(graph[Features.state], dtype=np.int32),
np.array(graph[Features.pair_indices], dtype=np.int32),
np.array(local_env),
]
def structure_to_input(self, structure: Structure) -> List:
"""
Parameters
----------
structure : pymatgen.core.structure.Structure
DESCRIPTION.
Returns
-------
List
DESCRIPTION.
"""
graph = self.get_graph(structure)
return self.graph_to_input(graph)
def ragged_inputs_from_strcutre_list(self, structure_list: List) -> Set:
"""
Parameters
----------
structure_list : List
a list of pymatgen.core.Structure
            To stay consistent with the semantic space of atom2vector, structures
            should be fetched from the Materials Project via MPRester.
Returns
-------
Set
DESCRIPTION.
"""
# Initialize graphs
atom_features_list = []
bond_features_list = []
state_attrs_list = []
pair_indices_list = []
local_env_list = []
for structure in tqdm(structure_list, desc="Generating labelled multi-property graphs"):
atom_features, bond_features, state_attrs, pair_indices, local_env = self.structure_to_input(structure)
atom_features_list.append(atom_features)
bond_features_list.append(bond_features)
state_attrs_list.append(state_attrs)
pair_indices_list.append(pair_indices)
local_env_list.append(local_env)
return (
tf.ragged.constant(atom_features_list, dtype=tf.int32),
tf.ragged.constant(bond_features_list, dtype=tf.float32),
tf.ragged.constant(local_env_list, dtype=tf.float32),
tf.ragged.constant(state_attrs_list, dtype=tf.int32),
tf.ragged.constant(pair_indices_list, dtype=tf.int64),
)
def inputs_from_strcutre_list(self, structure_list: List) -> Set:
"""
Parameters
----------
structure_list : List
a list of pymatgen.core.Structure
            To stay consistent with the semantic space of atom2vector, structures
            should be fetched from the Materials Project via MPRester.
Returns
-------
Set
DESCRIPTION.
"""
# Initialize graphs
atom_features_list = []
bond_features_list = []
state_attrs_list = []
pair_indices_list = []
local_env_list = []
        for structure in tqdm(structure_list, desc="Generating labelled directional spherical harmonic graphs"):
atom_features, bond_features, state_attrs, pair_indices, local_env = self.structure_to_input(structure)
atom_features_list.append(atom_features)
bond_features_list.append(bond_features)
state_attrs_list.append(state_attrs)
pair_indices_list.append(pair_indices)
local_env_list.append(local_env)
return (
atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
)
def graphs_from_strcutre_list(self, structure_list: List) -> List:
"""
Parameters
----------
structure_list : List
a list of pymatgen.core.Structure
            To stay consistent with the semantic space of atom2vector, structures
            should be fetched from the Materials Project via MPRester.
Returns
-------
List
DESCRIPTION.
"""
# Initialize graphs
graphs = []
        for structure in tqdm(structure_list, desc="Generating labelled directional spherical harmonic graphs"):
graphs.append(self.structure_to_input(structure))
return graphs
class LabelledCrystalGraph(LabelledCrystalGraphBase):
def __init__(self, cutoff=3.0, mendeleev=False):
self.cutoff = cutoff
self.mendeleev = mendeleev
if self.mendeleev:
with open(Path(ModulePath/"mendeleev.json"),'r') as f:
self.properties = json.load(f)
def graph_to_input(self, graph: Dict) -> List[np.ndarray]:
"""
Parameters
----------
graph : Dict
DESCRIPTION.
Returns
-------
list
DESCRIPTION.
"""
if self.mendeleev:
atom_num_pairs = [[graph[Features.atom][pair[0]], graph[Features.atom][pair[1]]] for pair in graph[Features.pair_indices]]
distance_features = Embedding_edges(converter=GaussianDistance(n=57)).embedding(graph[Features.bond])
multi_properties = Embedding_edges(converter=MultiPropertyFeatures(self.properties)).embedding(atom_num_pairs)
bond_features = np.concatenate([distance_features, multi_properties], axis=1)
local_env = self._local_coordinates(graph)
else:
distance_features = Embedding_edges(converter=GaussianDistance()).embedding(graph[Features.bond])
local_env = self._local_coordinates(graph)
bond_features = distance_features
return [
np.array(graph[Features.atom], dtype=np.int32),
np.array(bond_features),
np.array(graph[Features.state], dtype=np.int32),
np.array(graph[Features.pair_indices], dtype=np.int32),
np.array(local_env),
]
def get_graph(self, structure: Structure) -> Dict:
"""
Parameters
----------
structure : Structure
DESCRIPTION.
space_group_number : int
DESCRIPTION.
Raises
------
RuntimeError
DESCRIPTION.
Returns
-------
dict
DESCRIPTION.
"""
lattice = np.array(structure.as_dict()['lattice']['matrix'], dtype=np.float32)
cart_coords = structure.cart_coords.astype(np.float32)
space_group_number = get_space_group_number(structure) - 1
state_attributes = np.array([space_group_number], dtype="int32")
center_indices, neighbor_indices, images, bonds = get_nn_info(structure, cutoff=self.cutoff)
atoms = self.get_Z_number(structure)
pair_indices = np.concatenate([center_indices, neighbor_indices], axis=0).reshape(2, -1).transpose()
return {Features.atom: atoms, Features.bond: bonds, Features.state: state_attributes,
Features.pair_indices: pair_indices, Features.image: images,
Features.lattice: lattice, Features.cart_coords: cart_coords}
def _local_coordinates(self, graph: Dict) -> np.ndarray:
"""
calculate local environment using numpy array only.
Parameters
----------
graph : Dict
DESCRIPTION.
Returns
-------
TYPE
            a series of polar vectors used to build the local coordinate frame.
"""
pair_indices = graph[Features.pair_indices]
images = graph[Features.image]
lattice = graph[Features.lattice]
a, b, c = lattice[0], lattice[1], lattice[2]
cart_coords = graph[Features.cart_coords]
        it1 = itemgetter(pair_indices[:, 0])
        it2 = itemgetter(pair_indices[:, 1])
        receive = it1(cart_coords)
        send = it2(cart_coords)
        polar = receive - send - np.expand_dims(images[:, 0], axis=-1)*a - np.expand_dims(images[:, 1], axis=-1)*b - np.expand_dims(images[:, 2], axis=-1)*c
        # Third component is zero so the normal lies in the ab-plane, matching
        # the per-site implementation in LabelledCrystalGraphBase.
        zeros = np.zeros_like(np.expand_dims(polar[:, 1], axis=-1))
        vertical = np.concatenate([np.expand_dims(polar[:, 1], axis=-1), -np.expand_dims(polar[:, 0], axis=-1), zeros], axis=-1)
        local_env = np.concatenate([polar, vertical], axis=-1)
        return local_env
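A minimal, self-contained sketch (with hypothetical numbers, not part of the original module) of the periodic-image correction performed above: the bond vector from the sending to the receiving site is shifted by the lattice translation selected by the image indices.

```python
import numpy as np

def polar_sketch(receive, send, image, a, b, c):
    # Bond vector corrected by the periodic image: subtract the lattice
    # translation image[0]*a + image[1]*b + image[2]*c.
    return receive - send - a * image[0] - b * image[1] - c * image[2]
```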
def inputs_from_strcutre_list(self, structure_list: List) -> Set:
"""
Parameters
----------
structure_list : List
a list of pymatgen.core.Structure
            To stay consistent with the semantic space of atom2vector, structures
            should be fetched from the Materials Project via MPRester.
Returns
-------
Set
DESCRIPTION.
"""
# Initialize graphs
start = time.time()
pool = Pool()
graphs = pool.map(self.structure_to_input, structure_list)
pool.close()
pool.join()
atom_features_list = []
bond_features_list = []
state_attrs_list = []
pair_indices_list = []
local_env_list = []
for graph in graphs:
atom_features, bond_features, state_attrs, pair_indices, local_env = graph
atom_features_list.append(atom_features)
bond_features_list.append(bond_features)
state_attrs_list.append(state_attrs)
pair_indices_list.append(pair_indices)
local_env_list.append(local_env)
end = time.time()
run_time = end - start
print('run time: {:.2f} s'.format(run_time))
return (
atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
)
class LabelledCrystalGraphMasking(LabelledCrystalGraph):
def __init__(self, masking_percent=0.15, masking=0, cutoff=3.0, mendeleev=False):
super().__init__(cutoff=cutoff, mendeleev=mendeleev)
self.masking_percent = masking_percent
self.masking = masking
def get_graph(self, structure: Structure) -> Dict:
"""
Parameters
----------
structure : Structure
DESCRIPTION.
space_group_number : int
DESCRIPTION.
Raises
------
RuntimeError
DESCRIPTION.
Returns
-------
dict
DESCRIPTION.
"""
lattice = np.array(structure.as_dict()['lattice']['matrix'], dtype=np.float32)
cart_coords = structure.cart_coords.astype(np.float32)
space_group_number = get_space_group_number(structure) - 1
state_attributes = np.array([space_group_number], dtype="int32")
center_indices, neighbor_indices, images, bonds = get_nn_info(structure, cutoff=self.cutoff)
atoms = self.get_Z_number(structure)
pair_indices = np.concatenate([center_indices, neighbor_indices], axis=0).reshape(2, -1).transpose()
# masking atoms
atoms, masking_indices, masking_node_labels = self._random_masking(atoms)
return {Features.atom: atoms, Features.bond: bonds, Features.state: state_attributes,
Features.pair_indices: pair_indices, Features.image: images,
Features.lattice: lattice, Features.cart_coords: cart_coords,
Features.masking_indices: masking_indices, Features.masking_node_labels: masking_node_labels}
def _random_masking(self, features: np.array) -> np.array:
"""
Random select atoms masking and return label for masking atoms.
----------
x_batch : TYPE
DESCRIPTION.
Returns
-------
x_batch : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
"""
masking_indices = np.random.choice(np.arange(features.size), replace=False,
size=int(np.ceil(features.size * self.masking_percent)))
masking_node_labels = features[masking_indices]
features[masking_indices] = self.masking
return features, masking_indices, masking_node_labels
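A self-contained sketch of the masking step above (not part of the original module): pick a random subset of node features, remember their original values, and overwrite them with a masking token.

```python
import numpy as np

def masking_sketch(features, masking_percent=0.15, masking=0):
    # Number of atoms to mask, rounded up so at least one atom is masked.
    n_mask = int(np.ceil(features.size * masking_percent))
    indices = np.random.choice(np.arange(features.size), replace=False, size=n_mask)
    labels = features[indices]       # original atomic numbers
    features[indices] = masking      # replace with the masking token
    return features, indices, labels
```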
def graph_to_input(self, graph: Dict) -> List[np.ndarray]:
"""
Parameters
----------
graph : Dict
DESCRIPTION.
Returns
-------
list
DESCRIPTION.
"""
if self.mendeleev:
atom_num_pairs = [[graph[Features.atom][pair[0]], graph[Features.atom][pair[1]]] for pair in graph[Features.pair_indices]]
distance_features = Embedding_edges(converter=GaussianDistance(n=57)).embedding(graph[Features.bond])
multi_properties = Embedding_edges(converter=MultiPropertyFeatures(self.properties)).embedding(atom_num_pairs)
bond_features = np.concatenate([distance_features, multi_properties], axis=1)
local_env = self._local_coordinates(graph)
else:
distance_features = Embedding_edges(converter=GaussianDistance()).embedding(graph[Features.bond])
local_env = self._local_coordinates(graph)
bond_features = distance_features
return [
np.array(graph[Features.atom], dtype=np.int32),
np.array(bond_features, dtype=np.float32),
np.array(graph[Features.state], dtype=np.int32),
np.array(graph[Features.pair_indices], dtype=np.int32),
            np.array(local_env, dtype=np.float32),
np.array(graph[Features.masking_indices], dtype=np.int32),
np.array(graph[Features.masking_node_labels], dtype=np.int32),
]
def inputs_from_strcutre_list(self, structure_list: List) -> Set:
"""
Parameters
----------
structure_list : List
a list of pymatgen.core.Structure
            To stay consistent with the semantic space of atom2vector, structures
            should be fetched from the Materials Project via MPRester.
Returns
-------
Set
DESCRIPTION.
"""
# Initialize graphs
start = time.time()
pool = Pool()
graphs = pool.map(self.structure_to_input, structure_list)
pool.close()
pool.join()
atom_features_list = []
bond_features_list = []
state_attrs_list = []
pair_indices_list = []
local_env_list = []
masking_indices_list = []
masking_node_labels_list = []
for graph in graphs:
atom_features, bond_features, state_attrs, pair_indices, local_env, masking_indices, masking_node_labels = graph
atom_features_list.append(atom_features)
bond_features_list.append(bond_features)
state_attrs_list.append(state_attrs)
pair_indices_list.append(pair_indices)
local_env_list.append(local_env)
masking_indices_list.append(masking_indices)
masking_node_labels_list.append(masking_node_labels)
end = time.time()
run_time = end - start
print('run time: {:.2f} s'.format(run_time))
return (
tf.ragged.constant(atom_features_list, dtype=tf.int64),
tf.ragged.constant(bond_features_list, dtype=tf.float32),
tf.ragged.constant(local_env_list, dtype=tf.float64),
tf.ragged.constant(state_attrs_list, dtype=tf.int64),
tf.ragged.constant(pair_indices_list, dtype=tf.int64),
tf.ragged.constant(masking_indices_list, dtype=tf.int64),
            tf.ragged.constant(masking_node_labels_list, dtype=tf.int64),
)
class GraphBatchGeneratorSequence(Sequence):
def __init__(self, atom_features_list: List[np.ndarray],
bond_features_list: List[np.ndarray],
local_env_list: List[np.ndarray],
state_attrs_list: List[np.ndarray],
pair_indices_list: List[np.ndarray],
labels: Union[List, None]=None,
task_type: Union[str, None]=None,
batch_size: int=32,
is_shuffle: bool=False):
"""
Parameters
----------
X_tensor : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
batch_size : TYPE, optional
DESCRIPTION. The default is 32.
is_shuffle : TYPE, optional
DESCRIPTION. The default is False.
Returns
-------
None.
"""
self.task_type = task_type
self.data_size = len(atom_features_list)
self.batch_size = batch_size
self.total_index = np.arange(self.data_size)
self.atom_features_list = atom_features_list
self.bond_features_list = bond_features_list
self.local_env_list = local_env_list
self.state_attrs_list = state_attrs_list
self.pair_indices_list = pair_indices_list
self.labels = labels
if is_shuffle:
shuffle = itemgetter(np.random.permutation(self.total_index))
self.total_index = shuffle(self.total_index)
def __len__(self) -> int:
return int(np.ceil(self.data_size / self.batch_size))
def on_epoch_end(self):
"""
code to be executed on epoch end
"""
self.total_index = np.random.permutation(self.total_index)
def __getitem__(self, index: int) -> tuple:
batch_index = self.total_index[index * self.batch_size : (index + 1) * self.batch_size]
get = itemgetter(*batch_index)
atom_features_list = get(self.atom_features_list)
bond_features_list = get(self.bond_features_list)
local_env_list = get(self.local_env_list)
state_attrs_list = get(self.state_attrs_list)
pair_indices_list = get(self.pair_indices_list)
inputs_batch = (atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
)
x_batch = self._merge_batch(inputs_batch)
if self.labels is None:
return (x_batch, )
y_batch = np.array(get(self.labels))
return x_batch, (y_batch)
def _merge_batch(self, x_batch: tuple) -> tuple:
"""
        Merge a batch of graphs into one disconnected global graph. Only atom
        indices are reindexed; the features themselves are unchanged and are
        simply concatenated along the first dimension of the global graph.
        Indices in pair_indices are shifted by the cumulative number of atoms
        in the preceding graphs, and each atom is tagged with the index of the
        structure it belongs to.
Parameters
----------
x_batch : TYPE
DESCRIPTION.
Returns
-------
atom_features : TYPE
DESCRIPTION.
bond_features : TYPE
DESCRIPTION.
state_attributes : TYPE
DESCRIPTION.
pair_indices : TYPE
DESCRIPTION.
atom_partition_indices: TYPE
DESCRIPTION.
bond_partition_indices: TYPE
DESCRIPTION.
"""
atom_features, bond_features, local_env, state_attrs, pair_indices = x_batch
# Obtain number of atoms and bonds for each graph
# allocate graph (structure) indice for atom and bond in global graph
num_atoms_per_graph = []
atom_graph_indices = []
for i, atoms in enumerate(atom_features):
num = len(atoms)
num_atoms_per_graph.append(num)
atom_graph_indices += [i] * num
atom_graph_indices = np.array(atom_graph_indices)
num_bonds_per_graph = []
bond_graph_indices = []
for i, bonds in enumerate(bond_features):
num = len(bonds)
num_bonds_per_graph.append(num)
bond_graph_indices += [i] * num
bond_graph_indices = np.array(bond_graph_indices)
        # `increment` is the cumulative atom count of the preceding graphs and
        # is used to reindex atom indices in the global graph; it is added to
        # the pair indices of every graph except the first, whose atom indices
        # are unchanged. Each cumulative count is repeated num_bonds times so
        # that every entry of pair_indices receives its graph's offset, and
        # num_bonds zeros are padded at the front for the first graph.
increment = np.cumsum(num_atoms_per_graph[:-1])
increment = np.pad(
np.repeat(increment, num_bonds_per_graph[1:]), [(num_bonds_per_graph[0], 0)])
pair_indices_per_graph = np.concatenate(pair_indices, axis=0)
pair_indices = pair_indices_per_graph + np.expand_dims(increment, axis=-1)
atom_features = np.concatenate(atom_features, axis=0)
bond_features = np.concatenate(bond_features, axis=0)
state_attrs = np.concatenate(state_attrs, axis=0)
# Local spherical theta phi used for EdgeNetworks, the same as NodeNetworks.
local_env = np.concatenate(local_env, axis=0)
return (atom_features, bond_features, local_env, state_attrs, pair_indices, atom_graph_indices,
bond_graph_indices, pair_indices_per_graph)
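A self-contained sketch (not part of the original module) of the reindexing done in `_merge_batch`: pair indices of each subgraph are shifted by the cumulative atom count of the preceding graphs so they address atoms in the merged global graph.

```python
import numpy as np

def reindex_sketch(pair_indices_list, num_atoms_per_graph):
    num_bonds_per_graph = [len(p) for p in pair_indices_list]
    # Cumulative atom count of the preceding graphs, one offset per graph.
    increment = np.cumsum(num_atoms_per_graph[:-1])
    # Repeat each offset once per bond of its graph; the first graph keeps
    # its indices, so num_bonds zeros are padded at the front.
    increment = np.pad(np.repeat(increment, num_bonds_per_graph[1:]),
                       [(num_bonds_per_graph[0], 0)])
    pairs = np.concatenate(pair_indices_list, axis=0)
    return pairs + np.expand_dims(increment, axis=-1)
```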
class GraphBatchGeneratorMasking(GraphBatchGeneratorSequence):
def __init__(self,
atom_features_list: List[np.ndarray],
bond_features_list: List[np.ndarray],
local_env_list: List[np.ndarray],
state_attrs_list: List[np.ndarray],
pair_indices_list: List[np.ndarray],
labels: Union[List, None]=None,
task_type: Union[str, None]=None,
batch_size: int=32,
is_shuffle: bool=False,
masking_percent: float=0.15,
masking: int=0):
"""
Parameters
----------
X_tensor : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
batch_size : TYPE, optional
DESCRIPTION. The default is 32.
is_shuffle : TYPE, optional
DESCRIPTION. The default is False.
Returns
-------
None.
"""
self.masking_percent = masking_percent
self.masking = masking
super().__init__(atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
labels,
task_type,
batch_size,
is_shuffle)
def _random_masking(self, feature_list: List[np.array]) -> tuple:
"""
        Randomly select atoms to mask and return labels for the masked atoms.
----------
x_batch : TYPE
DESCRIPTION.
Returns
-------
x_batch : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
"""
masking_indices = []
masking_node_labels = []
for features in feature_list:
indices = np.random.choice(np.arange(features.size), replace=False,
size=int(features.size * self.masking_percent))
masking_indices.append(indices)
masking_node_labels.append(features[indices])
features[indices] = self.masking
masking_node_labels = to_categorical(np.concatenate(masking_node_labels, axis=0), 119)
return feature_list, masking_indices, masking_node_labels
def __getitem__(self, index: int) -> tuple:
batch_index = self.total_index[index * self.batch_size : (index + 1) * self.batch_size]
get = itemgetter(*batch_index)
atom_features_list = get(self.atom_features_list)
bond_features_list = get(self.bond_features_list)
local_env_list = get(self.local_env_list)
state_attrs_list = get(self.state_attrs_list)
pair_indices_list = get(self.pair_indices_list)
# random masking atoms of a graph in the batch
atom_features_list, masking_indices_list, masking_node_labels = self._random_masking(atom_features_list)
inputs_batch = (atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
masking_indices_list,
)
x_batch = self._merge_batch(inputs_batch)
return x_batch, (masking_node_labels)
def _merge_batch(self, x_batch: tuple) -> tuple:
"""
        Merge a batch of graphs into one disconnected global graph. Only atom
        indices are reindexed; the features themselves are unchanged and are
        simply concatenated along the first dimension of the global graph.
        Indices in pair_indices are shifted by the cumulative number of atoms
        in the preceding graphs, and each atom is tagged with the index of the
        structure it belongs to.
Parameters
----------
x_batch : TYPE
DESCRIPTION.
Returns
-------
atom_features : TYPE
DESCRIPTION.
bond_features : TYPE
DESCRIPTION.
state_attributes : TYPE
DESCRIPTION.
pair_indices : TYPE
DESCRIPTION.
atom_partition_indices: TYPE
DESCRIPTION.
bond_partition_indices: TYPE
DESCRIPTION.
"""
atom_features, bond_features, local_env, state_attrs, pair_indices, masking_indices = x_batch
# Obtain number of atoms and bonds for each graph
# allocate graph (structure) indice for atom and bond in global graph
num_atoms_per_graph = []
atom_graph_indices = []
for i, atoms in enumerate(atom_features):
num = len(atoms)
num_atoms_per_graph.append(num)
atom_graph_indices += [i] * num
atom_graph_indices = np.array(atom_graph_indices)
num_bonds_per_graph = []
bond_graph_indices = []
for i, bonds in enumerate(bond_features):
num = len(bonds)
num_bonds_per_graph.append(num)
bond_graph_indices += [i] * num
bond_graph_indices = np.array(bond_graph_indices)
num_masking_per_graph = []
masking_graph_indices = []
for i, indices in enumerate(masking_indices):
num = len(indices)
num_masking_per_graph.append(num)
masking_graph_indices += [i] * num
masking_graph_indices = np.array(masking_graph_indices)
        # `increment` is the cumulative atom count of the preceding graphs and
        # is used to reindex atom indices in the global graph; it is added to
        # the pair indices of every graph except the first, whose atom indices
        # are unchanged. Each cumulative count is repeated num_bonds times so
        # that every entry of pair_indices receives its graph's offset, and
        # num_bonds zeros are padded at the front for the first graph.
increment = np.cumsum(num_atoms_per_graph[:-1])
increment = np.pad(
np.repeat(increment, num_bonds_per_graph[1:]), [(num_bonds_per_graph[0], 0)])
pair_indices_per_graph = np.concatenate(pair_indices, axis=0)
pair_indices = pair_indices_per_graph + np.expand_dims(increment, axis=-1)
atom_features = np.concatenate(atom_features, axis=0)
bond_features = np.concatenate(bond_features, axis=0)
state_attrs = np.concatenate(state_attrs, axis=0)
masking_indices = np.concatenate(masking_indices, axis=0)
# Local spherical (theta, phi) coordinates used by EdgeNetworks, in the same way as by NodeNetworks.
local_env = np.concatenate(local_env, axis=0)
return (atom_features, bond_features, local_env, state_attrs, pair_indices, atom_graph_indices,
bond_graph_indices, pair_indices_per_graph, masking_indices, masking_graph_indices)
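The offset bookkeeping above can be illustrated with a minimal, pure-Python sketch; the graph sizes and pair indices below are made up for illustration and are not part of the package:

```python
# Two hypothetical graphs: 3 atoms / 2 bonds and 2 atoms / 1 bond.
num_atoms_per_graph = [3, 2]
num_bonds_per_graph = [2, 1]
pair_indices = [[0, 1], [1, 2],  # graph 0, local atom indices
                [0, 1]]          # graph 1, local atom indices

# The offset of each graph is the cumulative atom count of the graphs
# before it, repeated once per bond so it lines up with pair_indices.
offsets, total = [], 0
for n_atoms, n_bonds in zip(num_atoms_per_graph, num_bonds_per_graph):
    offsets += [total] * n_bonds
    total += n_atoms

global_pairs = [[i + off, j + off]
                for (i, j), off in zip(pair_indices, offsets)]
print(global_pairs)  # [[0, 1], [1, 2], [3, 4]]
```

This mirrors what `np.cumsum` + `np.repeat` + `np.pad` compute above, one bond at a time.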
class GraphBatchGeneratorFromGraphs(GraphBatchGeneratorSequence):
def __init__(self, graphs: List, labels: Union[List, None], task_type, batch_size=32):
self.graphs = graphs
self.labels = labels
self.task_type = task_type
self.batch_size = batch_size
self.data_size = len(graphs)
def __getitem__(self, index: int) -> tuple:
structure_batch = self.graphs[index * self.batch_size : (index + 1) * self.batch_size]
graph_batch = self._inputs_from_graphs(structure_batch)
x_batch = self._merge_batch(graph_batch)
if self.labels is None:
return (x_batch, )
y_batch = np.array(self.labels[index * self.batch_size : (index + 1) * self.batch_size])
return x_batch, y_batch
def _inputs_from_graphs(self, graphs_list: List) -> tuple:
"""
Parameters
----------
graphs_list : List
a list of precomputed graph tuples of the form (atom features,
bond features, state attributes, pair indices, local environment).
In order to stay within the semantic space of atom2vector, MPRester
should be used to get the structures from the Materials Project.
Returns
-------
tuple
Lists of atom features, bond features, local environments, state
attributes and pair indices, one entry per graph.
"""
# Initialize graphs
atom_features_list = []
bond_features_list = []
state_attrs_list = []
pair_indices_list = []
local_env_list = []
for s in graphs_list:
atom_features, bond_features, state_attrs, pair_indices, local_env = s
atom_features_list.append(atom_features)
bond_features_list.append(bond_features)
state_attrs_list.append(state_attrs)
pair_indices_list.append(pair_indices)
local_env_list.append(local_env)
return (
atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
)
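`__getitem__` above selects the i-th batch with a plain slice; a small pure-Python sketch of that slicing (the data and batch size are illustrative):

```python
data = list(range(10))      # hypothetical dataset of 10 graphs
batch_size = 3
n_batches = (len(data) + batch_size - 1) // batch_size  # ceiling division
batches = [data[i * batch_size:(i + 1) * batch_size]
           for i in range(n_batches)]
print(batches)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Note the final batch is simply shorter when the dataset size is not a multiple of the batch size.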
class GraphBatchGeneratorBase:
def __init__(self, task_type: Union[str, None]=None, batch_size: int=32, is_shuffle: bool=False):
"""
Parameters
----------
X_tensor : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
batch_size : TYPE, optional
DESCRIPTION. The default is 64.
is_shuffle : TYPE, optional
DESCRIPTION. The default is False.
Returns
-------
None.
"""
self.batch_size = batch_size
self.task_type = task_type
self.is_shuffle = is_shuffle
def _merge_batch(self, x_batch, y_batch):
"""
Merging a batch of graphs into a disconnected graph should reindex atoms only
features of graphs desn't be changed only merge them to one dimension of globl graph.
reindex indices in pair_indices by adding increment of number of atoms in the batch
atom marked with structure indice also need to be tell in globl graph.
Parameters
----------
x_batch : TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
Returns
-------
atom_features : TYPE
DESCRIPTION.
bond_features : TYPE
DESCRIPTION.
state_attributes : TYPE
DESCRIPTION.
pair_indices : TYPE
DESCRIPTION.
atom_partition_indices: TYPE
DESCRIPTION.
bond_partition_indices: TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
"""
atom_features, bond_features, local_env, state_attrs, pair_indices = x_batch
# Obtain number of atoms and bonds for each graph
num_atoms_per_graph = atom_features.row_lengths()
num_bonds_per_graph = bond_features.row_lengths()
# max_num_atoms = tf.reduce_max(num_atoms_per_graph)
# get the adjacency matrix for each graph
# adj_matrixes = self.adjacent_matrix_batch(max_num_atoms, pair_indices)
# allocate a graph (structure) index to every atom and bond in the global graph
graph_indices = tf.range(len(num_atoms_per_graph))
atom_graph_indices = tf.repeat(graph_indices, num_atoms_per_graph)
bond_graph_indices = tf.repeat(graph_indices, num_bonds_per_graph)
# `increment` holds the cumulative atom count of the preceding graphs; it is
# added to the pair indices of every graph except the first in order to
# reindex atoms into the global graph (the first subgraph keeps its original
# atom indices). Each cumulative count is repeated num_bonds times so that it
# aligns with pair_indices, and because the offset of the first graph is zero
# the array is padded with num_bonds_per_graph[0] zeros at the front.
increment = tf.cumsum(num_atoms_per_graph[:-1])
increment = tf.pad(
tf.repeat(increment, num_bonds_per_graph[1:]), [(num_bonds_per_graph[0], 0)])
pair_indices_per_graph = pair_indices.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
pair_indices = pair_indices_per_graph + tf.expand_dims(increment, axis=-1)
atom_features = atom_features.merge_dims(outer_axis=0, inner_axis=1)
bond_features = bond_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
state_attrs = state_attrs.merge_dims(outer_axis=0, inner_axis=1)
# Local spherical (theta, phi) coordinates used by EdgeNetworks, in the same way as by NodeNetworks.
# num_edges_per_graph = local_env.row_lengths()
# edge_graph_indices = tf.repeat(graph_indices, num_edges_per_graph)
# increment_edges = tf.cumsum(num_bonds_per_graph[:-1])
# increment_edges = tf.pad(
# tf.repeat(increment_edges, num_edges_per_graph[1:]), [(num_edges_per_graph[0], 0)])
local_env = local_env.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
return (atom_features, bond_features, local_env, state_attrs, pair_indices, atom_graph_indices,
bond_graph_indices, pair_indices_per_graph), y_batch
def __call__(self, X_tensor, y):
"""
Returns
-------
TYPE
partition datas into batches, and using merge_batch track graphs to a global graph.
"""
self.dataset = tf.data.Dataset.from_tensor_slices((X_tensor, y))
if self.is_shuffle:
self.dataset = self.dataset.shuffle(1024)
return self.dataset.batch(self.batch_size).map(self._merge_batch, -1).prefetch(-1)
class GraphBatchGenerator(GraphBatchGeneratorBase):
def _merge_batch(self, x_batch, y_batch):
"""
Merging a batch of graphs into a disconnected graph should reindex atoms only
features of graphs desn't be changed only merge them to one dimension of globl graph.
reindex indices in pair_indices by adding increment of number of atoms in the batch
atom marked with structure indice also need to be tell in globl graph.
Parameters
----------
x_batch : TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
Returns
-------
atom_features : TYPE
DESCRIPTION.
bond_features : TYPE
DESCRIPTION.
state_attributes : TYPE
DESCRIPTION.
pair_indices : TYPE
DESCRIPTION.
atom_partition_indices: TYPE
DESCRIPTION.
bond_partition_indices: TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
"""
atom_features, bond_features, local_env, state_attrs, pair_indices = x_batch
# Obtain number of atoms and bonds for each graph
num_atoms_per_graph = [len(i) for i in atom_features]
num_bonds_per_graph = [len(i) for i in bond_features]
# max_num_atoms = tf.reduce_max(num_atoms_per_graph)
# allocate a graph (structure) index to every atom and bond in the global graph
graph_indices = np.arange(len(num_atoms_per_graph))
atom_graph_indices = np.repeat(graph_indices, num_atoms_per_graph)
bond_graph_indices = np.repeat(graph_indices, num_bonds_per_graph)
# `increment` holds the cumulative atom count of the preceding graphs; it is
# added to the pair indices of every graph except the first in order to
# reindex atoms into the global graph (the first subgraph keeps its original
# atom indices). Each cumulative count is repeated num_bonds times so that it
# aligns with pair_indices, and because the offset of the first graph is zero
# the array is padded with num_bonds_per_graph[0] zeros at the front.
increment = np.cumsum(num_atoms_per_graph[:-1])
increment = np.pad(
np.repeat(increment, num_bonds_per_graph[1:]), [(num_bonds_per_graph[0], 0)])
pair_indices_per_graph = np.concatenate(pair_indices, axis=0)
pair_indices = pair_indices_per_graph + np.expand_dims(increment, axis=-1)
atom_features = np.concatenate(atom_features, axis=0)
bond_features = np.concatenate(bond_features, axis=0)
state_attrs = np.concatenate(state_attrs, axis=0)
# Local spherical (theta, phi) coordinates used by EdgeNetworks, in the same way as by NodeNetworks.
local_env = np.concatenate(local_env, axis=0)
return (atom_features, bond_features, local_env, state_attrs, pair_indices, atom_graph_indices,
bond_graph_indices, pair_indices_per_graph), y_batch
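The `np.repeat` call that tags every atom with the index of its graph behaves like this pure-Python equivalent (the per-graph sizes are illustrative):

```python
num_atoms_per_graph = [3, 2, 4]  # hypothetical batch of three graphs
atom_graph_indices = [g for g, n in enumerate(num_atoms_per_graph)
                      for _ in range(n)]
print(atom_graph_indices)  # [0, 0, 0, 1, 1, 2, 2, 2, 2]
```

Each atom of graph `g` receives the value `g`, which is what the segment-style pooling layers later use to partition atoms back into structures.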
class GraphBatchGeneratorDist(GraphBatchGeneratorBase):
def __init__(self, task_type: Union[str, None]=None, batch_size: int=32, is_shuffle: bool=False):
"""
Parameters
----------
X_tensor : TYPE
DESCRIPTION.
y_label : TYPE
DESCRIPTION.
batch_size : TYPE, optional
DESCRIPTION. The default is 32.
is_shuffle : TYPE, optional
DESCRIPTION. The default is False.
Returns
-------
None.
"""
super().__init__(task_type=task_type, batch_size=batch_size, is_shuffle=is_shuffle)
def _merge_batch(self, x_batch, y_batch):
"""
Merging a batch of graphs into a disconnected graph should reindex atoms only
features of graphs desn't be changed only merge them to one dimension of globl graph.
reindex indices in pair_indices by adding increment of number of atoms in the batch
atom marked with structure indice also need to be tell in globl graph.
Parameters
----------
x_batch : TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
Returns
-------
atom_features : TYPE
DESCRIPTION.
bond_features : TYPE
DESCRIPTION.
state_attributes : TYPE
DESCRIPTION.
pair_indices : TYPE
DESCRIPTION.
atom_partition_indices: TYPE
DESCRIPTION.
bond_partition_indices: TYPE
DESCRIPTION.
y_batch : TYPE
DESCRIPTION.
"""
atom_features, bond_features, local_env, state_attrs, pair_indices, masking_indices = x_batch
# Obtain number of atoms and bonds for each graph
num_atoms_per_graph = atom_features.row_lengths()
num_bonds_per_graph = bond_features.row_lengths()
num_masking_per_graph = masking_indices.row_lengths()
# max_num_atoms = tf.reduce_max(num_atoms_per_graph)
# get the adjacency matrix for each graph
# adj_matrixes = self.adjacent_matrix_batch(max_num_atoms, pair_indices)
# allocate a graph (structure) index to every atom and bond in the global graph
graph_indices = tf.range(len(num_atoms_per_graph))
atom_graph_indices = tf.repeat(graph_indices, num_atoms_per_graph)
bond_graph_indices = tf.repeat(graph_indices, num_bonds_per_graph)
masking_graph_indices = tf.repeat(graph_indices, num_masking_per_graph)
# `increment` holds the cumulative atom count of the preceding graphs; it is
# added to the pair indices of every graph except the first in order to
# reindex atoms into the global graph (the first subgraph keeps its original
# atom indices). Each cumulative count is repeated num_bonds times so that it
# aligns with pair_indices, and because the offset of the first graph is zero
# the array is padded with num_bonds_per_graph[0] zeros at the front.
increment = tf.cumsum(num_atoms_per_graph[:-1])
increment = tf.pad(
tf.repeat(increment, num_bonds_per_graph[1:]), [(num_bonds_per_graph[0], 0)])
pair_indices_per_graph = pair_indices.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
pair_indices = pair_indices_per_graph + tf.expand_dims(increment, axis=-1)
atom_features = atom_features.merge_dims(outer_axis=0, inner_axis=1)
bond_features = bond_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
state_attrs = state_attrs.merge_dims(outer_axis=0, inner_axis=1)
masking_indices = masking_indices.merge_dims(outer_axis=0, inner_axis=1)
# Local spherical (theta, phi) coordinates used by EdgeNetworks, in the same way as by NodeNetworks.
local_env = local_env.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
y_batch = y_batch.merge_dims(outer_axis=0, inner_axis=1)
return (atom_features, bond_features, local_env, state_attrs, pair_indices, atom_graph_indices,
bond_graph_indices, pair_indices_per_graph, masking_indices, masking_graph_indices), y_batch
def __call__(self,
atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
masking_indices_list,
labels,
):
X_tensor = (atom_features_list,
bond_features_list,
local_env_list,
state_attrs_list,
pair_indices_list,
masking_indices_list,
)
return super().__call__(X_tensor, labels) | 36.852459 | 152 | 0.616487 | 5,739 | 49,456 | 5.045653 | 0.060464 | 0.047864 | 0.018786 | 0.012156 | 0.823531 | 0.800083 | 0.779017 | 0.769486 | 0.754394 | 0.733191 | 0 | 0.008353 | 0.297982 | 49,456 | 1,342 | 153 | 36.852459 | 0.825686 | 0.27481 | 0 | 0.642487 | 0 | 0 | 0.010591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06563 | false | 0 | 0.029361 | 0.001727 | 0.164076 | 0.003454 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13251e7bf4831e7b427ebc46a55e9dfec47a524e | 2,551 | py | Python | ipam/client/abstractipam.py | achamo/ipam-client | c0d6ffa535c8f3c0d56b9d78a1a5a73b890f5fbb | [
"Apache-2.0"
] | null | null | null | ipam/client/abstractipam.py | achamo/ipam-client | c0d6ffa535c8f3c0d56b9d78a1a5a73b890f5fbb | [
"Apache-2.0"
] | null | null | null | ipam/client/abstractipam.py | achamo/ipam-client | c0d6ffa535c8f3c0d56b9d78a1a5a73b890f5fbb | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from abc import ABCMeta, abstractmethod
class AbstractIPAM(metaclass=ABCMeta):  # Python-3 syntax; __metaclass__ is ignored in Python 3
@abstractmethod
def add_ip(self, ipaddr, dnsname, description, mac=None):
raise NotImplementedError()
@abstractmethod
def add_next_ip(self, subnet, dnsname, description, mac=None):
raise NotImplementedError()
@abstractmethod
def get_next_free_ip(self, subnet):
raise NotImplementedError()
@abstractmethod
def add_top_level_subnet(self, subnet, description):
raise NotImplementedError()
@abstractmethod
def add_next_subnet(self, parent_subnet, prefixlen, description):
raise NotImplementedError()
@abstractmethod
def delete_subnet(self, subnet, empty_subnet):
raise NotImplementedError()
@abstractmethod
def get_ip(self, ip):
raise NotImplementedError()
@abstractmethod
def get_hostname_by_ip(self, ip):
raise NotImplementedError()
@abstractmethod
def get_description_by_ip(self, ip):
raise NotImplementedError()
@abstractmethod
def get_mac_by_ip(self, ip):
raise NotImplementedError()
@abstractmethod
def get_ip_interface_list_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_ip_interface_list_by_subnet_name(self, subnet_name):
raise NotImplementedError()
@abstractmethod
def get_ip_interface_by_subnet_name(self, subnet_name):
raise NotImplementedError()
@abstractmethod
def get_ip_interface_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_ip_list_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_ip_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_ip_list_by_mac(self, mac):
raise NotImplementedError()
@abstractmethod
def get_ip_by_mac(self, mac):
raise NotImplementedError()
@abstractmethod
def get_subnet_list_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_subnet_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_subnet_with_ips(self, subnet):
raise NotImplementedError()
@abstractmethod
def get_num_ips_by_desc(self, description):
raise NotImplementedError()
@abstractmethod
def get_num_subnets_by_desc(self, description):
raise NotImplementedError()
| 25.51 | 69 | 0.71031 | 264 | 2,551 | 6.55303 | 0.162879 | 0.226012 | 0.483237 | 0.521387 | 0.851445 | 0.811561 | 0.657225 | 0.656069 | 0.549711 | 0.34104 | 0 | 0 | 0.218738 | 2,551 | 99 | 70 | 25.767677 | 0.868038 | 0.00784 | 0 | 0.638889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.319444 | false | 0 | 0.013889 | 0 | 0.361111 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1349f0428834bdfe216f43da8e1f3a02770e7144 | 102 | py | Python | tests/instrumentation/utils/stringify.py | rlnsanz/inspectional-rara-parakeet | 2c7919ed432616ec016a5afcd6718d16fa65e8af | [
"Apache-2.0"
] | null | null | null | tests/instrumentation/utils/stringify.py | rlnsanz/inspectional-rara-parakeet | 2c7919ed432616ec016a5afcd6718d16fa65e8af | [
"Apache-2.0"
] | null | null | null | tests/instrumentation/utils/stringify.py | rlnsanz/inspectional-rara-parakeet | 2c7919ed432616ec016a5afcd6718d16fa65e8af | [
"Apache-2.0"
] | 1 | 2021-06-25T16:06:59.000Z | 2021-06-25T16:06:59.000Z | from gadget.instrumentation.utils.stringify import vertical_prefix_string, print_ssa
import unittest
| 25.5 | 84 | 0.882353 | 13 | 102 | 6.692308 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 102 | 3 | 85 | 34 | 0.925532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
1352f22f3165aa5fcb62f46c93bf6c67ca9dece9 | 6,394 | py | Python | tests/test_interactive.py | aureliojargas/git-revise | 6f9b02afb46d17fd52bcdd8599026bf167f74628 | [
"MIT"
] | null | null | null | tests/test_interactive.py | aureliojargas/git-revise | 6f9b02afb46d17fd52bcdd8599026bf167f74628 | [
"MIT"
] | null | null | null | tests/test_interactive.py | aureliojargas/git-revise | 6f9b02afb46d17fd52bcdd8599026bf167f74628 | [
"MIT"
] | null | null | null | # pylint: skip-file
import textwrap
def interactive_reorder_helper(repo, bash, main, fake_editor, cwd):
bash(
"""
echo "hello, world" > file1
git add file1
git commit -m "commit one"
echo "second file" > file2
git add file2
git commit -m "commit two"
echo "new line!" >> file1
git add file1
git commit -m "commit three"
"""
)
prev = repo.get_commit("HEAD")
prev_u = prev.parent()
prev_uu = prev_u.parent()
def editor(inq, outq):
in_todo = inq.get()
expected = textwrap.dedent(
f"""\
pick {prev.parent().oid.short()} commit two
pick {prev.oid.short()} commit three
"""
).encode()
assert in_todo.startswith(expected)
outq.put(
textwrap.dedent(
f"""\
pick {prev.oid.short()} commit three
pick {prev.parent().oid.short()} commit two
"""
).encode()
)
with fake_editor(editor):
main(["-i", "HEAD~~"], cwd=cwd)
curr = repo.get_commit("HEAD")
curr_u = curr.parent()
curr_uu = curr_u.parent()
assert curr != prev
assert curr.tree() == prev.tree()
assert curr_u.message == prev.message
assert curr.message == prev_u.message
assert curr_uu == prev_uu
assert b"file2" in prev_u.tree().entries
assert b"file2" not in curr_u.tree().entries
assert prev_u.tree().entries[b"file2"] == curr.tree().entries[b"file2"]
assert prev_u.tree().entries[b"file1"] == curr_uu.tree().entries[b"file1"]
assert prev.tree().entries[b"file1"] == curr_u.tree().entries[b"file1"]
def test_interactive_reorder(repo, bash, main, fake_editor):
interactive_reorder_helper(repo, bash, main, fake_editor, cwd=repo.workdir)
def test_interactive_reorder_subdir(repo, bash, main, fake_editor):
bash("mkdir subdir")
interactive_reorder_helper(
repo, bash, main, fake_editor, cwd=repo.workdir / "subdir"
)
def test_interactive_fixup(repo, bash, main, fake_editor):
bash(
"""
echo "hello, world" > file1
git add file1
git commit -m "commit one"
echo "second file" > file2
git add file2
git commit -m "commit two"
echo "new line!" >> file1
git add file1
git commit -m "commit three"
echo "extra" >> file3
git add file3
"""
)
prev = repo.get_commit("HEAD")
prev_u = prev.parent()
prev_uu = prev_u.parent()
index_tree = repo.index.tree()
def editor(inq, outq):
in_todo = inq.get()
# Get the index tree to check it
index = repo.index.commit()
expected = textwrap.dedent(
f"""\
pick {prev.parent().oid.short()} commit two
pick {prev.oid.short()} commit three
index {index.oid.short()} <git index>
"""
).encode()
assert in_todo.startswith(expected)
outq.put(
textwrap.dedent(
f"""\
pick {prev.oid.short()} commit three
fixup {index.oid.short()} <git index>
pick {prev.parent().oid.short()} commit two
"""
).encode()
)
with fake_editor(editor):
main(["-i", "HEAD~~"])
curr = repo.get_commit("HEAD")
curr_u = curr.parent()
curr_uu = curr_u.parent()
assert curr != prev
assert curr.tree() == index_tree
assert curr_u.message == prev.message
assert curr.message == prev_u.message
assert curr_uu == prev_uu
assert b"file2" in prev_u.tree().entries
assert b"file2" not in curr_u.tree().entries
assert b"file3" not in prev.tree().entries
assert b"file3" not in prev_u.tree().entries
assert b"file3" not in prev_uu.tree().entries
assert b"file3" in curr.tree().entries
assert b"file3" in curr_u.tree().entries
assert b"file3" not in curr_uu.tree().entries
assert curr.tree().entries[b"file3"].blob().body == b"extra\n"
assert curr_u.tree().entries[b"file3"].blob().body == b"extra\n"
assert prev_u.tree().entries[b"file2"] == curr.tree().entries[b"file2"]
assert prev_u.tree().entries[b"file1"] == curr_uu.tree().entries[b"file1"]
assert prev.tree().entries[b"file1"] == curr_u.tree().entries[b"file1"]
def test_interactive_reword(repo, bash, main, fake_editor):
bash(
"""
echo "hello, world" > file1
git add file1
git commit -m "commit one" -m "extended1"
echo "second file" > file2
git add file2
git commit -m "commit two" -m "extended2"
echo "new line!" >> file1
git add file1
git commit -m "commit three" -m "extended3"
"""
)
prev = repo.get_commit("HEAD")
prev_u = prev.parent()
prev_uu = prev_u.parent()
def editor(inq, outq):
in_todo = inq.get()
expected = textwrap.dedent(
f"""\
++ pick {prev.parent().oid.short()}
commit two
extended2
++ pick {prev.oid.short()}
commit three
extended3
"""
).encode()
assert in_todo.startswith(expected)
outq.put(
textwrap.dedent(
f"""\
++ pick {prev.oid.short()}
updated commit three
extended3 updated
++ pick {prev.parent().oid.short()}
updated commit two
extended2 updated
"""
).encode()
)
with fake_editor(editor):
main(["-ie", "HEAD~~"])
curr = repo.get_commit("HEAD")
curr_u = curr.parent()
curr_uu = curr_u.parent()
assert curr != prev
assert curr.tree() == prev.tree()
assert curr_u.message == b"updated commit three\n\nextended3 updated\n"
assert curr.message == b"updated commit two\n\nextended2 updated\n"
assert curr_uu == prev_uu
assert b"file2" in prev_u.tree().entries
assert b"file2" not in curr_u.tree().entries
assert prev_u.tree().entries[b"file2"] == curr.tree().entries[b"file2"]
assert prev_u.tree().entries[b"file1"] == curr_uu.tree().entries[b"file1"]
assert prev.tree().entries[b"file1"] == curr_u.tree().entries[b"file1"]
| 27.324786 | 79 | 0.562402 | 815 | 6,394 | 4.300614 | 0.099387 | 0.100428 | 0.068474 | 0.058203 | 0.867903 | 0.84194 | 0.81826 | 0.803424 | 0.786305 | 0.761769 | 0 | 0.013662 | 0.301689 | 6,394 | 233 | 80 | 27.44206 | 0.771333 | 0.007507 | 0 | 0.697183 | 0 | 0 | 0.257649 | 0.030224 | 0 | 0 | 0 | 0 | 0.288732 | 1 | 0.056338 | false | 0 | 0.007042 | 0 | 0.06338 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
136b8cc14b3b43380fa91a6628b75498a755a92a | 11,280 | py | Python | tests/test_contexts.py | LaudateCorpus1/Bella-5 | 7de51ff4914bdefbcf05e490b85517c5fb014595 | [
"MIT"
] | 22 | 2018-06-16T02:03:44.000Z | 2022-01-04T19:06:12.000Z | tests/test_contexts.py | LaudateCorpus1/Bella-5 | 7de51ff4914bdefbcf05e490b85517c5fb014595 | [
"MIT"
] | 3 | 2018-06-21T11:01:28.000Z | 2018-11-29T20:32:22.000Z | tests/test_contexts.py | LaudateCorpus1/Bella-5 | 7de51ff4914bdefbcf05e490b85517c5fb014595 | [
"MIT"
] | 2 | 2019-11-12T18:02:15.000Z | 2021-11-25T12:15:02.000Z | '''
Unit test suite for the :py:mod:`bella.contexts` module.
'''
from unittest import TestCase
#from tdparse.contexts import right_context
#from tdparse.contexts import left_context
#from tdparse.contexts import target_context
#from tdparse.contexts import full_context
from bella.contexts import context
class TestContexts(TestCase):
'''
Unit tests for :py:func:`bella.contexts.context`.
'''
single_context = [{'text':'This is a fake news article that is to represent a Tweet!!!!',
'target':'news article',
'spans':[[15, 27]]},
{'text':'I had a great day however I did not get much work done',
'target':'day',
'spans':[[14, 17]]},
{'text':'I cycled in today and it was ok as it was not raining.',
'target':'cycled',
'spans':[[2, 8]]}]
multi_contexts = [{'text':'This is a fake news article that is to represent a '\
'Tweet!!!! and it was an awful News Article I think.',
'target':'news article',
'spans':[[15, 27], [81, 93]]},
{'text':'I had a great Day however I did not get much '\
'work done in the day',
'target':'day',
'spans':[[14, 17], [62, 65]]}]
def test_context(self):
'''
Tests :py:func:`bella.contexts._context`
'''
with self.assertRaises(ValueError, msg='Should only accept left, right '\
'or target context words for parameters'):
context(self.single_context[0], 'itself')
def test_left_context(self):
'''
Tests :py:func:`bella.contexts.left_context`
'''
single_left = [['This is a fake '], ['I had a great '], ['I ']]
for index, test_context in enumerate(self.single_context):
test_text = test_context['text']
test_target = test_context['target']
correct_context = single_left[index]
left_string = context(test_context, 'left', inc_target=False)
msg = 'Cannot get the left context of target {} text {} which should be {}'\
' and not {}'.format(test_target, test_text, correct_context, left_string)
self.assertEqual(correct_context, left_string, msg=msg)
# Handle including targets
single_left = [['This is a fake news article'], ['I had a great day'],
['I cycled']]
for index, test_context in enumerate(self.single_context):
test_text = test_context['text']
test_target = test_context['target']
correct_context = single_left[index]
left_string = context(test_context, 'left', inc_target=True)
msg = 'Cannot get the left context of target {} text {} including the '\
'target which should be {} and not {}'\
.format(test_target, test_text, correct_context, left_string)
self.assertEqual(correct_context, left_string, msg=msg)
multi_left = [['This is a fake ', 'This is a fake news article that is to'\
' represent a Tweet!!!! and it was an awful '],
['I had a great ', 'I had a great Day however I did not get '\
'much work done in the ']]
for index, test_context in enumerate(self.multi_contexts):
test_text = test_context['text']
test_target = test_context['target']
correct_context = multi_left[index]
left_string = context(test_context, 'left', inc_target=False)
msg = 'Cannot get the left context of target {} text {} which should be {}'\
' and not {}'.format(test_target, test_text, correct_context, left_string)
self.assertEqual(correct_context, left_string, msg=msg)
# Handle including targets
multi_left = [['This is a fake news article', 'This is a fake news article '\
'that is to represent a Tweet!!!! and it was an awful News Article'],
['I had a great Day', 'I had a great Day however I did not get '\
'much work done in the day']]
for index, test_context in enumerate(self.multi_contexts):
test_text = test_context['text']
test_target = test_context['target']
correct_context = multi_left[index]
left_string = context(test_context, 'left', inc_target=True)
msg = 'Cannot get the left context of target {} text {} including the '\
'target which should be {} and not {}'\
.format(test_target, test_text, correct_context, left_string)
self.assertEqual(correct_context, left_string, msg=msg)
def test_right_context(self):
'''
Tests :py:func:`bella.contexts.right_context`
'''
single_right = [[' that is to represent a Tweet!!!!'],
[' however I did not get much work done'],
[' in today and it was ok as it was not raining.']]
for index, test_context in enumerate(self.single_context):
test_text = test_context['text']
test_target = test_context['target']
correct_context = single_right[index]
right_string = context(test_context, 'right', inc_target=False)
msg = 'Cannot get the right context of target {} text {} '\
'which should be {} and not {}'\
.format(test_target, test_text, correct_context, right_string)
self.assertEqual(correct_context, right_string, msg=msg)
# Handle including targets
single_right = [['news article that is to represent a Tweet!!!!'],
['day however I did not get much work done'],
['cycled in today and it was ok as it was not raining.']]
for index, test_context in enumerate(self.single_context):
test_text = test_context['text']
test_target = test_context['target']
correct_context = single_right[index]
right_string = context(test_context, 'right', inc_target=True)
msg = 'Cannot get the right context of target {} text {} including the '\
'target which should be {} and not {}'\
.format(test_target, test_text, correct_context, right_string)
self.assertEqual(correct_context, right_string, msg=msg)
multi_right = [[' that is to represent a Tweet!!!! and it was an awful News'\
' Article I think.', ' I think.'],
[' however I did not get much work done in the day', '']]
for index, test_context in enumerate(self.multi_contexts):
test_text = test_context['text']
test_target = test_context['target']
correct_context = multi_right[index]
right_string = context(test_context, 'right', inc_target=False)
msg = 'Cannot get the right context of target {} text {} which should be {}'\
' and not {}'\
.format(test_target, test_text, correct_context, right_string)
self.assertEqual(correct_context, right_string, msg=msg)
# Handle including targets
multi_right = [['news article that is to represent a Tweet!!!! and it was'\
' an awful News Article I think.', 'News Article I think.'],
['Day however I did not get much work done in the day', 'day']]
for index, test_context in enumerate(self.multi_contexts):
test_text = test_context['text']
test_target = test_context['target']
correct_context = multi_right[index]
right_string = context(test_context, 'right', inc_target=True)
msg = 'Cannot get the right context of target {} text {} including the '\
'target which should be {} and not {}'\
.format(test_target, test_text, correct_context, right_string)
self.assertEqual(correct_context, right_string, msg=msg)
def test_target_context(self):
'''
Tests :py:func:`bella.contexts.target_context`
'''
single_targets = [['news article'], ['day'], ['cycled']]
for index, test_context in enumerate(self.single_context):
test_text = test_context['text']
            correct_target = single_targets[index]
            target_string = context(test_context, 'target')
            msg = 'Cannot get the target for text {}, target found {} correct {}'\
                  .format(test_text, target_string, correct_target)
            self.assertEqual(correct_target, target_string, msg=msg)

        multi_targets = [['news article', 'News Article'], ['Day', 'day']]
        for index, test_context in enumerate(self.multi_contexts):
            test_text = test_context['text']
            correct_targets = multi_targets[index]
            target_strings = context(test_context, 'target')
            msg = 'Cannot get the targets for text {}, targets found {} correct {}'\
                  .format(test_text, target_strings, correct_targets)
            self.assertEqual(correct_targets, target_strings, msg=msg)

    def test_full_context(self):
        '''
        Tests :py:func:`bella.contexts.full_context`
        '''
        single_targets = [['This is a fake news article that is to represent a Tweet!!!!'],
                          ['I had a great day however I did not get much work done'],
                          ['I cycled in today and it was ok as it was not raining.']]
        multi_targets = [['This is a fake news article that is to represent a '
                          'Tweet!!!! and it was an awful News Article I think.',
                          'This is a fake news article that is to represent a '
                          'Tweet!!!! and it was an awful News Article I think.'],
                         ['I had a great Day however I did not get much '
                          'work done in the day',
                          'I had a great Day however I did not get much '
                          'work done in the day']]
        for index, test_context in enumerate(self.single_context):
            test_text = test_context['text']
            correct_target = single_targets[index]
            target_string = context(test_context, 'full')
            msg = 'Cannot get the target for text {}, target found {} correct {}'\
                  .format(test_text, target_string, correct_target)
            self.assertEqual(correct_target, target_string, msg=msg)
        for index, test_context in enumerate(self.multi_contexts):
            test_text = test_context['text']
            correct_targets = multi_targets[index]
            target_strings = context(test_context, 'full')
            msg = 'Cannot get the targets for text {}, targets found {} correct {}'\
                  .format(test_text, target_strings, correct_targets)
            self.assertEqual(correct_targets, target_strings, msg=msg)

# tests/__init__.py from stefan-wolfsheimer/Z80-ASM (MIT)
import sys
from os.path import abspath
from os.path import dirname
from os.path import join
sys.path.insert(0, abspath(join(dirname(abspath(__file__)), "..")))
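
The nested `abspath`/`dirname`/`join` calls above prepend the repository root to `sys.path`. A minimal equivalent sketch using `pathlib` (not part of the repo, shown only for comparison):

```python
import sys
from pathlib import Path

# Resolve the directory containing this file, take its parent (the repo
# root), and prepend it to sys.path so sibling packages import without
# being installed.
repo_root = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(repo_root))
```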

# udapter/__init__.py from ahmetustun/udapter (MIT)
from udapter.dataset_readers import *
from udapter.udapter_models import *
from udapter.udify_models import *
from udapter.modules import *
from udapter.optimizers import *
from udapter.predictors import *
from udapter import *

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# https://github.com/cquickstad/parallel_proc_runner
# Copyright 2018 Chad Quickstad
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import threading
from time import sleep
from parallel_proc_runner_base import DummyRunner
class BaseRunnerTest(unittest.TestCase):
    def setUp(self):
        self.event_to_wait_for = threading.Event()
        self.job_mocking_event = threading.Event()
        # Class Under Test
        self.runner = DummyRunner('base runner')
        self.start_callback_called = False
        self.stop_callback_called = False
        self.stop_callback_result = None
        self.stop_callback_output = None

    def mock_start_callback(self, name):
        # Name should be passed to the callback
        self.assertEqual(self.runner.name, name)
        # Indicate that the callback indeed happened
        self.start_callback_called = True

    def mock_stop_callback(self, name, result, output):
        # Name should be passed to the callback
        self.assertEqual(self.runner.name, name)
        self.stop_callback_result = result
        self.stop_callback_output = output
        self.stop_callback_called = True
        # Check that stop event is not triggered yet
        self.assertFalse(self.runner.stop_event.is_set())

    def test_that_run_waits_for_start_gating_event(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.start_callback_called)
        self.assertFalse(self.runner.running)
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertTrue(self.runner.running)
        self.job_mocking_event.set()

    def test_that_callbacks_can_be_none(self):
        self.runner.set_start_gating_event(None)
        self.runner.set_start_callback(None)
        self.runner.set_stop_callback(None)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        self.job_mocking_event.set()

    def test_that_run_does_not_wait_when_start_gating_event_is_none(self):
        self.runner.set_start_gating_event(None)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertTrue(self.runner.running)
        self.job_mocking_event.set()

    def test_that_job_runs_after_start_callback(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertTrue(self.runner.running)
        self.assertFalse(self.runner.job_ran)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.runner.job_ran)

    def test_that_args_are_passed(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event,
                             some_arg="Bla Bla")
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertFalse(self.runner.job_ran)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertEqual("Bla Bla", self.runner.setup_kwargs['some_arg'])
        self.assertTrue(self.runner.job_ran)

    def test_that_stop_callback_is_called_after_job_ran(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.stop_callback_called)
        self.assertTrue(self.runner.running)
        self.runner.set_result(0)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.stop_callback_called)
        self.assertFalse(self.runner.running)
        self.assertEqual("Success", self.stop_callback_result)
        self.assertEqual("Output from base runner", self.stop_callback_output)

    def test_that_stop_callback_reports_failure(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.runner.running)
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.stop_callback_called)
        self.assertTrue(self.runner.running)
        self.runner.set_result(1)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.stop_callback_called)
        self.assertFalse(self.runner.running)
        self.assertEqual("FAIL (1)", self.stop_callback_result)
        self.assertEqual("Output from base runner", self.stop_callback_output)

    def test_that_stop_event_is_triggered_on_success(self):
        self.runner.set_start_gating_event(None)
        self.runner.set_start_callback(None)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.runner.set_result(0)
        self.assertFalse(self.runner.stop_event.is_set())
        self.job_mocking_event.set()
        sleep(0.1)  # Let thread have a chance to go
        # Check in stop callback that it is not yet set
        self.assertTrue(self.runner.stop_event.is_set())

    def test_that_stop_event_is_triggered_on_failure(self):
        self.runner.set_start_gating_event(None)
        self.runner.set_start_callback(None)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.runner.set_result(1)
        self.assertFalse(self.runner.stop_event.is_set())
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        # Check in stop callback that it is not yet set
        self.assertTrue(self.runner.stop_event.is_set())

    def test_that_runner_can_run_again(self):
        self.runner.set_start_gating_event(self.event_to_wait_for)
        self.runner.set_start_callback(self.mock_start_callback)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        # Run for the first time
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.start_callback_called)
        self.assertFalse(self.runner.running)
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertTrue(self.runner.running)
        self.assertFalse(self.stop_callback_called)
        self.runner.set_result(1)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.stop_callback_called)
        self.assertFalse(self.runner.running)
        self.assertEqual("FAIL (1)", self.stop_callback_result)
        self.assertEqual("Output from base runner", self.stop_callback_output)
        # Reset the test events for running again
        self.event_to_wait_for.clear()
        self.job_mocking_event.clear()
        # Reset the indicators of the runner running
        self.start_callback_called = False
        self.stop_callback_called = False
        self.stop_callback_result = None
        self.stop_callback_output = None
        # Run again
        self.runner.start()
        sleep(0.01)  # Let thread have a chance to go
        self.assertFalse(self.start_callback_called)
        self.assertFalse(self.runner.running)
        self.assertEqual(self.runner.result_message, "")
        self.assertFalse(self.runner.stop_event.is_set())
        self.event_to_wait_for.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.start_callback_called)
        self.assertTrue(self.runner.running)
        self.assertFalse(self.stop_callback_called)
        self.runner.set_result(1)
        self.job_mocking_event.set()
        sleep(0.01)  # Let thread have a chance to go
        self.assertTrue(self.stop_callback_called)
        self.assertFalse(self.runner.running)
        self.assertEqual("FAIL (1)", self.stop_callback_result)
        self.assertEqual("Output from base runner", self.stop_callback_output)

    def test_that_stop_event_is_triggered_and_there_is_a_failure_result_on_exception(self):
        self.runner.set_start_gating_event(None)
        self.runner.set_start_callback(None)
        self.runner.set_stop_callback(self.mock_stop_callback)
        self.runner.set_args(job_mocking_event=self.job_mocking_event)
        self.runner.set_result(0)  # Make sure that it's the exception that causes the fail result
        self.runner.name = None  # This will cause an exception in the job() method of TestableBaseRunner
        self.job_mocking_event.set()  # No threads in this test, so trigger this ahead of time
        # So that we can test the exception, don't launch a thread. Call run() directly.
        self.runner.run()
        # Check in stop callback that it is not yet set
        self.assertTrue(self.runner.stop_event.is_set())
        self.assertEqual("FAIL (Exception)", self.stop_callback_result)
        self.assertEqual("\nTypeError: must be str, not NoneType", self.stop_callback_output)
        self.assertFalse(self.runner.running)


if __name__ == '__main__':
    unittest.main()
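
The gating pattern these tests exercise, a worker thread that blocks on an `Event` before doing its job, can be sketched with the stdlib alone. The names below (`gated_job`, `results`) are illustrative only and not part of the library:

```python
import threading

results = []

def gated_job(start_gate: threading.Event, done: threading.Event) -> None:
    # Block until the gate is opened, mirroring a start-gating event.
    start_gate.wait()
    results.append("ran")
    done.set()

gate = threading.Event()
done = threading.Event()
worker = threading.Thread(target=gated_job, args=(gate, done))
worker.start()
assert not done.is_set()  # the gate is still closed, so the job has not run
gate.set()                # open the gate; the worker proceeds
done.wait(timeout=1)
worker.join(timeout=1)
```

The tests above poll with `sleep(0.01)` instead of `done.wait()` because they also need to observe intermediate state (e.g. `runner.running`) while the thread is mid-flight.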

# transintentlation/__init__.py from jpmondet/transintentlation (MIT)
""" Transintentlation init module """
from transintentlation.config_v2 import Configuring
from transintentlation.compare_v2 import Comparing
from transintentlation.translate import Translate

# example.py from JackDrinkwater/machine_learning_lessons (MIT)
import sys
print(sys.path)

# src/widgets/py_button/__init__.py from t0a5ted/qutetodo (MIT)
from .button import PyButton

# tests/python/spec/test_stl_reset.py from sguysc/rtamt (BSD-3-Clause)
import unittest
import math
from rtamt.spec.stl.discrete_time.specification import STLDiscreteTimeSpecification
class TestSTLReset(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        super(TestSTLReset, self).__init__(*args, **kwargs)

    def test_constant(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('out', 'float')
        spec.spec = 'out = 5'
        spec.parse()
        out = spec.update(0, [])
        self.assertEqual(5, out, 'Constant reset assertion')
        out = spec.update(0, [])
        self.assertEqual(5, out, 'Constant reset assertion')
        spec.reset()
        out = spec.update(0, [])
        self.assertEqual(5, out, 'Constant reset assertion')

    def test_variable(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(1.1, out, 'Variable reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(2, out, 'Variable reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3]])
        self.assertEqual(3.3, out, 'Variable reset assertion')

    def test_abs(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = abs(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(1.1, out, 'Abs reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(2, out, 'Abs reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3.3]])
        self.assertEqual(3.3, out, 'Abs reset assertion')

    def test_sqrt(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = sqrt(abs(req))'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(math.sqrt(1.1), out, 'Sqrt reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(math.sqrt(2), out, 'Sqrt reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3.3]])
        self.assertEqual(math.sqrt(3.3), out, 'Sqrt reset assertion')

    def test_exp(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = exp(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(math.exp(1.1), out, 'Exp reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(math.exp(2), out, 'Exp reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3.3]])
        self.assertEqual(math.exp(-3.3), out, 'Exp reset assertion')

    def test_pow(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = pow(2,req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(math.pow(2, 1.1), out, 'Pow reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(math.pow(2, 2), out, 'Pow reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3.3]])
        self.assertEqual(math.pow(2, -3.3), out, 'Pow reset assertion')

    def test_addition(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req + gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1 + 2.2, out, 'Addition reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2 - 1, out, 'Addition reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3 + 4.3, out, 'Addition reset assertion')

    def test_subtraction(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req - gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1 - 2.2, out, 'Subtraction reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2 + 1, out, 'Subtraction reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3 - 4.3, out, 'Subtraction reset assertion')

    def test_multiplication(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req * gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1 * 2.2, out, 'Multiplication reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2 * -1, out, 'Multiplication reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3 * 4.3, out, 'Multiplication reset assertion')

    def test_division(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req / gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1 / 2.2, out, 'Division reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2 / -1, out, 'Division reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3 / 4.3, out, 'Division reset assertion')

    def test_predicate_leq(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req <= gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2 - 1.1, out, 'Predicate <= reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-1 - 2, out, 'Predicate <= reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3 - 3.3, out, 'Predicate <= reset assertion')

    def test_predicate_less(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req < gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2 - 1.1, out, 'Predicate < reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-1 - 2, out, 'Predicate < reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3 - 3.3, out, 'Predicate < reset assertion')

    def test_predicate_geq(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req >= gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(-2.2 + 1.1, out, 'Predicate >= reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(1 + 2, out, 'Predicate >= reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(-4.3 + 3.3, out, 'Predicate >= reset assertion')

    def test_predicate_greater(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req > gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(-2.2 + 1.1, out, 'Predicate > reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(1 + 2, out, 'Predicate > reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(-4.3 + 3.3, out, 'Predicate > reset assertion')

    def test_predicate_eq(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req == gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(-(2.2 - 1.1), out, 'Predicate == reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-(1 + 2), out, 'Predicate == reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(-4.3 + 3.3, out, 'Predicate == reset assertion')

    def test_predicate_neq(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req !== gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2 - 1.1, out, 'Predicate !== reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(1 + 2, out, 'Predicate !== reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3 - 3.3, out, 'Predicate !== reset assertion')

    def test_not(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = not(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(-1.1, out, 'Negation reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(-2, out, 'Negation reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3.3]])
        self.assertEqual(3.3, out, 'Negation reset assertion')

    def test_conjunction(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req and gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1, out, 'And reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-1, out, 'And reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3, out, 'And reset assertion')

    def test_disjunction(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req or gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2, out, 'Or reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2, out, 'Or reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3, out, 'Or reset assertion')

    def test_implication(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req implies gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2, out, 'Implies reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-1, out, 'Implies reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3, out, 'Implies reset assertion')

    def test_iff(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req iff gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(1.1 - 2.2, out, 'Iff reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(-1 - 2, out, 'Iff reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(3.3 - 4.3, out, 'Iff reset assertion')

    def test_xor(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req xor gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(-(1.1 - 2.2), out, 'Xor reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(1 + 2, out, 'Xor reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 4.3]])
        self.assertEqual(4.3 - 3.3, out, 'Xor reset assertion')

    def test_rise(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = rise(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(1.1, out, 'Rise reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(-1.1, out, 'Rise reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 4.3]])
        self.assertEqual(4.3, out, 'Rise reset assertion')

    def test_fall(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = fall(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(-1.1, out, 'Fall reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(-2, out, 'Fall reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3]])
        self.assertEqual(3, out, 'Fall reset assertion')

    def test_prev(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = prev(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(float("inf"), out, 'Prev reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(1.1, out, 'Prev reset assertion')
        spec.reset()
        out = spec.update(0, [['req', -3]])
        self.assertEqual(float("inf"), out, 'Prev reset assertion')

    def test_once(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = once(req)'
        spec.parse()
        out = spec.update(0, [['req', 5]])
        self.assertEqual(5, out, 'Once reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(5, out, 'Once reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 4.3]])
        self.assertEqual(4.3, out, 'Once reset assertion')

    def test_historically(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = historically(req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(1.1, out, 'Historically reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(1.1, out, 'Historically reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 4.3]])
        self.assertEqual(4.3, out, 'Historically reset assertion')

    # def test_eventually(self):
    #     spec = STLDiscreteTimeSpecification()
    #     spec.declare_var('req', 'float')
    #     spec.declare_var('out', 'float')
    #     spec.spec = 'out = eventually(req)'
    #     spec.parse()
    #
    #     out = spec.update(0, [['req', 5]])
    #     self.assertEqual(5, out, 'Eventually reset assertion')
    #
    #     out = spec.update(1, [['req', 2]])
    #     self.assertEqual(5, out, 'Eventually reset assertion')
    #
    #     spec.reset()
    #
    #     out = spec.update(0, [['req', 4.3]])
    #     self.assertEqual(4.3, out, 'Eventually reset assertion')

    # def test_always(self):
    #     spec = STLDiscreteTimeSpecification()
    #     spec.declare_var('req', 'float')
    #     spec.declare_var('out', 'float')
    #     spec.spec = 'out = always(req)'
    #     spec.parse()
    #
    #     out = spec.update(0, [['req', 1.1]])
    #     self.assertEqual(1.1, out, 'Always reset assertion')
    #
    #     out = spec.update(1, [['req', 2]])
    #     self.assertEqual(1.1, out, 'Always reset assertion')
    #
    #     spec.reset()
    #
    #     out = spec.update(0, [['req', 4.3]])
    #     self.assertEqual(4.3, out, 'Always reset assertion')

    def test_since(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('gnt', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = req since gnt'
        spec.parse()
        out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
        self.assertEqual(2.2, out, 'Since reset assertion')
        out = spec.update(1, [['req', 2], ['gnt', -1]])
        self.assertEqual(2, out, 'Since reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 3.3], ['gnt', 1.6]])
        self.assertEqual(1.6, out, 'Since reset assertion')

    def test_once_0_1(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = once[0:1](req)'
        spec.parse()
        out = spec.update(0, [['req', 5]])
        self.assertEqual(5, out, 'Once [0,1] reset assertion')
        out = spec.update(1, [['req', 4.8]])
        self.assertEqual(5, out, 'Once [0,1] reset assertion')
        spec.reset()
        out = spec.update(0, [['req', 4.3]])
        self.assertEqual(4.3, out, 'Once [0,1] reset assertion')

    def test_historically_0_1(self):
        spec = STLDiscreteTimeSpecification()
        spec.declare_var('req', 'float')
        spec.declare_var('out', 'float')
        spec.spec = 'out = historically[0:1](req)'
        spec.parse()
        out = spec.update(0, [['req', 1.1]])
        self.assertEqual(1.1, out, 'Historically [0,1] reset assertion')
        out = spec.update(1, [['req', 2]])
        self.assertEqual(1.1, out, 'Historically [0,1] reset assertion')
        spec.reset()
out = spec.update(0, [['req', 4.3]])
self.assertEqual(4.3, out, 'Historically [0,1] reset assertion')
def test_since_0_1(self):
spec = STLDiscreteTimeSpecification()
spec.declare_var('req', 'float')
spec.declare_var('gnt', 'float')
spec.declare_var('out', 'float')
spec.spec = 'out = req since[0:1] gnt'
spec.parse()
out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
self.assertEqual(2.2, out, 'Since [0:1] reset assertion')
out = spec.update(1, [['req', 2], ['gnt', -1]])
self.assertEqual(2, out, 'Since [0:1] reset assertion')
spec.reset()
out = spec.update(0, [['req', 3.3], ['gnt', 1.6]])
self.assertEqual(1.6, out, 'Since [0:1] reset assertion')
def test_precedes_0_1(self):
spec = STLDiscreteTimeSpecification()
spec.declare_var('req', 'float')
spec.declare_var('gnt', 'float')
spec.declare_var('out', 'float')
spec.spec = 'out = req until[0:1] gnt'
spec.parse()
spec.pastify()
out = spec.update(0, [['req', 1.1], ['gnt', 2.2]])
self.assertEqual(2.2, out, 'Precedes [0:1] reset assertion')
out = spec.update(1, [['req', 2], ['gnt', -1]])
self.assertEqual(2.2, out, 'Precedes [0:1] reset assertion')
spec.reset()
out = spec.update(0, [['req', 3.3], ['gnt', 1.6]])
self.assertEqual(1.6, out, 'Precedes [0:1] reset assertion')
if __name__ == '__main__':
unittest.main() | 33.131988 | 83 | 0.546281 | 2,699 | 21,337 | 4.26306 | 0.030382 | 0.062055 | 0.115244 | 0.083956 | 0.916131 | 0.898662 | 0.883713 | 0.876934 | 0.861898 | 0.857031 | 0 | 0.039816 | 0.265501 | 21,337 | 644 | 84 | 33.131988 | 0.694359 | 0.04757 | 0 | 0.647059 | 0 | 0 | 0.193612 | 0.001084 | 0 | 0 | 0 | 0 | 0.217195 | 1 | 0.074661 | false | 0 | 0.006787 | 0 | 0.08371 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |


# File: todo.py (superiorkid/todo)
from app import app, db
from app.models import Todo


@app.shell_context_processor
def make_shell_context():
    return {'db': db, 'Todo': Todo}


# File: testing/test_outcomes.py (blueyed/pytest)
from _pytest.outcomes import OutcomeException


def test_OutcomeException():
    assert repr(OutcomeException()) == "<OutcomeException msg=None>"
    assert repr(OutcomeException(msg="msg")) == "<OutcomeException msg='msg'>"
    assert repr(OutcomeException(msg="msg\nline2")) == "<OutcomeException msg='msg...'>"


# File: django_xsede_warehouse/resource_v3/migrations/0002_auto_20200828_2125.py (XSEDE)
# Generated by Django 2.2.9 on 2020-08-28 21:25
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('resource_v3', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='resourcev3',
            name='Audience',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='Description',
            field=models.CharField(blank=True, max_length=24000, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='EndDateTime',
            field=models.DateTimeField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='Keywords',
            field=models.CharField(blank=True, max_length=1000, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='LocalID',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='ProviderID',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='ShortDescription',
            field=models.CharField(blank=True, max_length=1200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='StartDateTime',
            field=models.DateTimeField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3',
            name='Topics',
            field=models.CharField(blank=True, max_length=1000, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3local',
            name='CatalogMetaURL',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3local',
            name='EntityJSON',
            field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3local',
            name='LocalID',
            field=models.CharField(blank=True, db_index=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3local',
            name='LocalType',
            field=models.CharField(blank=True, max_length=32, null=True),
        ),
        migrations.AlterField(
            model_name='resourcev3local',
            name='LocalURL',
            field=models.CharField(blank=True, max_length=200, null=True),
        ),
    ]


# File: dbug12/__init__.py (mlndz28/d-bug12)
from .debugger import Debugger


# File: tests/unit/plugins/lookup/test_manifold.py (ansible community.general 2.4.0)
# (c) 2018, Arigato Machine Inc.
# (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible_collections.community.general.tests.unit.compat import unittest
from ansible_collections.community.general.tests.unit.compat.mock import patch, call
from ansible.errors import AnsibleError
from ansible.module_utils.urls import ConnectionError, SSLValidationError
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils import six
from ansible_collections.community.general.plugins.lookup.manifold import ManifoldApiClient, LookupModule, ApiError

import json

API_FIXTURES = {
    'https://api.marketplace.manifold.co/v1/resources': [
        {"body": {"label": "resource-1", "name": "Resource 1"}, "id": "rid-1"},
        {"body": {"label": "resource-2", "name": "Resource 2"}, "id": "rid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?label=resource-1': [
        {"body": {"label": "resource-1", "name": "Resource 1"}, "id": "rid-1"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?label=resource-2': [
        {"body": {"label": "resource-2", "name": "Resource 2"}, "id": "rid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1': [
        {"body": {"label": "resource-1", "name": "Resource 1"}, "id": "rid-1"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?project_id=pid-1': [
        {"body": {"label": "resource-2", "name": "Resource 2"}, "id": "rid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?project_id=pid-2': [
        {"body": {"label": "resource-1", "name": "Resource 1"}, "id": "rid-1"},
        {"body": {"label": "resource-3", "name": "Resource 3"}, "id": "rid-3"},
    ],
    'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1': [
        {"body": {"label": "resource-1", "name": "Resource 1"}, "id": "rid-1"},
    ],
    'https://api.marketplace.manifold.co/v1/projects': [
        {"body": {"label": "project-1", "name": "Project 1"}, "id": "pid-1"},
        {"body": {"label": "project-2", "name": "Project 2"}, "id": "pid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/projects?label=project-2': [
        {"body": {"label": "project-2", "name": "Project 2"}, "id": "pid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-1': [
        {"body": {"resource_id": "rid-1",
                  "values": {"RESOURCE_TOKEN_1": "token-1", "RESOURCE_TOKEN_2": "token-2"}},
         "id": "cid-1"},
    ],
    'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-2': [
        {"body": {"resource_id": "rid-2",
                  "values": {"RESOURCE_TOKEN_3": "token-3", "RESOURCE_TOKEN_4": "token-4"}},
         "id": "cid-2"},
    ],
    'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-3': [
        {"body": {"resource_id": "rid-3",
                  "values": {"RESOURCE_TOKEN_1": "token-5", "RESOURCE_TOKEN_2": "token-6"}},
         "id": "cid-3"},
    ],
    'https://api.identity.manifold.co/v1/teams': [
        {"id": "tid-1", "body": {"name": "Team 1", "label": "team-1"}},
        {"id": "tid-2", "body": {"name": "Team 2", "label": "team-2"}},
    ],
}


def mock_fixture(open_url_mock, fixture=None, data=None, headers=None):
    if not headers:
        headers = {}

    if fixture:
        data = json.dumps(API_FIXTURES[fixture])
        if 'content-type' not in headers:
            headers['content-type'] = 'application/json'

    open_url_mock.return_value.read.return_value = data
    open_url_mock.return_value.headers = headers


class TestManifoldApiClient(unittest.TestCase):

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_sends_default_headers(self, open_url_mock):
        mock_fixture(open_url_mock, data='hello')
        client = ManifoldApiClient('token-123')
        client.request('test', 'endpoint')
        open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_decodes_json(self, open_url_mock):
        mock_fixture(open_url_mock, fixture='https://api.marketplace.manifold.co/v1/resources')
        client = ManifoldApiClient('token-123')
        self.assertIsInstance(client.request('marketplace', 'resources'), list)

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_streams_text(self, open_url_mock):
        mock_fixture(open_url_mock, data='hello', headers={'content-type': "text/plain"})
        client = ManifoldApiClient('token-123')
        self.assertEqual('hello', client.request('test', 'endpoint'))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_processes_parameterized_headers(self, open_url_mock):
        mock_fixture(open_url_mock, data='hello')
        client = ManifoldApiClient('token-123')
        client.request('test', 'endpoint', headers={'X-HEADER': 'MANIFOLD'})
        open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123',
                                                  'X-HEADER': 'MANIFOLD'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_passes_arbitrary_parameters(self, open_url_mock):
        mock_fixture(open_url_mock, data='hello')
        client = ManifoldApiClient('token-123')
        client.request('test', 'endpoint', use_proxy=False, timeout=5)
        open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0',
                                         use_proxy=False, timeout=5)

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_raises_on_incorrect_json(self, open_url_mock):
        mock_fixture(open_url_mock, data='noJson', headers={'content-type': "application/json"})
        client = ManifoldApiClient('token-123')
        with self.assertRaises(ApiError) as context:
            client.request('test', 'endpoint')
        self.assertEqual('JSON response can\'t be parsed while requesting https://api.test.manifold.co/v1/endpoint:\n'
                         'noJson',
                         str(context.exception))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_raises_on_status_500(self, open_url_mock):
        open_url_mock.side_effect = HTTPError('https://api.test.manifold.co/v1/endpoint',
                                              500, 'Server error', {}, six.StringIO('ERROR'))
        client = ManifoldApiClient('token-123')
        with self.assertRaises(ApiError) as context:
            client.request('test', 'endpoint')
        self.assertEqual('Server returned: HTTP Error 500: Server error while requesting '
                         'https://api.test.manifold.co/v1/endpoint:\nERROR',
                         str(context.exception))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_raises_on_bad_url(self, open_url_mock):
        open_url_mock.side_effect = URLError('URL is invalid')
        client = ManifoldApiClient('token-123')
        with self.assertRaises(ApiError) as context:
            client.request('test', 'endpoint')
        self.assertEqual('Failed lookup url for https://api.test.manifold.co/v1/endpoint : '
                         '<urlopen error URL is invalid>',
                         str(context.exception))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_raises_on_ssl_error(self, open_url_mock):
        open_url_mock.side_effect = SSLValidationError('SSL Error')
        client = ManifoldApiClient('token-123')
        with self.assertRaises(ApiError) as context:
            client.request('test', 'endpoint')
        self.assertEqual('Error validating the server\'s certificate for https://api.test.manifold.co/v1/endpoint: '
                         'SSL Error',
                         str(context.exception))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_request_raises_on_connection_error(self, open_url_mock):
        open_url_mock.side_effect = ConnectionError('Unknown connection error')
        client = ManifoldApiClient('token-123')
        with self.assertRaises(ApiError) as context:
            client.request('test', 'endpoint')
        self.assertEqual('Error connecting to https://api.test.manifold.co/v1/endpoint: Unknown connection error',
                         str(context.exception))

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_resources_get_all(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/resources'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_resources())
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_resources_filter_label(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/resources?label=resource-1'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_resources(label='resource-1'))
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_resources_filter_team_and_project(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_resources(team_id='tid-1', project_id='pid-1'))
        args, kwargs = open_url_mock.call_args
        url_called = args[0]
        # Dict order is not guaranteed, so an url may have querystring parameters order randomized
        self.assertIn('team_id=tid-1', url_called)
        self.assertIn('project_id=pid-1', url_called)

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_teams_get_all(self, open_url_mock):
        url = 'https://api.identity.manifold.co/v1/teams'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_teams())
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_teams_filter_label(self, open_url_mock):
        url = 'https://api.identity.manifold.co/v1/teams'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url][1:2], client.get_teams(label='team-2'))
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_projects_get_all(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/projects'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_projects())
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_projects_filter_label(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/projects?label=project-2'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_projects(label='project-2'))
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')

    @patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
    def test_get_credentials(self, open_url_mock):
        url = 'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-1'
        mock_fixture(open_url_mock, fixture=url)
        client = ManifoldApiClient('token-123')
        self.assertListEqual(API_FIXTURES[url], client.get_credentials(resource_id='rid-1'))
        open_url_mock.assert_called_with(url,
                                         headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
                                         http_agent='python-manifold-ansible-1.0.0')
class TestLookupModule(unittest.TestCase):
def setUp(self):
self.lookup = LookupModule()
self.lookup._load_name = "manifold"
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_all(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2',
'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_one_resource(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?label=resource-2']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run(['resource-2'], api_token='token-123'))
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None, label='resource-2')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_two_resources(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2',
'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run(['resource-1', 'resource-2'], api_token='token-123'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.display')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_resources_with_same_credential_names(self, client_mock, display_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-5',
'RESOURCE_TOKEN_2': 'token-6'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?project_id=pid-2']
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects?label=project-2']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-2'))
client_mock.assert_called_with('token-123')
display_mock.warning.assert_has_calls([
call("'RESOURCE_TOKEN_1' with label 'resource-1' was replaced by resource data with label 'resource-3'"),
call("'RESOURCE_TOKEN_2' with label 'resource-1' was replaced by resource data with label 'resource-3'")],
any_order=True
)
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-2')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_team(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?team_id=tid-1']
client_mock.return_value.get_teams.return_value = API_FIXTURES['https://api.identity.manifold.co/v1/teams'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', team='team-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id='tid-1', project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_project(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?project_id=pid-1']
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-1')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_team_and_project(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1']
client_mock.return_value.get_teams.return_value = API_FIXTURES['https://api.identity.manifold.co/v1/teams'][0:1]
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-1')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_team_doesnt_exist(self, client_mock):
client_mock.return_value.get_teams.return_value = []
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123', team='no-team')
self.assertEqual("Team 'no-team' does not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_project_doesnt_exist(self, client_mock):
client_mock.return_value.get_projects.return_value = []
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123', project='no-project')
self.assertEqual("Project 'no-project' does not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_resource_doesnt_exist(self, client_mock):
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
with self.assertRaises(AnsibleError) as context:
self.lookup.run(['resource-1', 'no-resource-1', 'no-resource-2'], api_token='token-123')
self.assertEqual("Resource(s) no-resource-1, no-resource-2 do not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_catch_api_error(self, client_mock):
client_mock.side_effect = ApiError('Generic error')
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123')
self.assertEqual("API Error: Generic error",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_catch_unhandled_exception(self, client_mock):
client_mock.side_effect = Exception('Unknown error')
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123')
self.assertTrue('Exception: Unknown error' in str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.os.getenv')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_falls_back_to_env_var(self, client_mock, getenv_mock):
getenv_mock.return_value = 'token-321'
client_mock.return_value.get_resources.return_value = []
client_mock.return_value.get_credentials.return_value = []
self.lookup.run([])
getenv_mock.assert_called_with('MANIFOLD_API_TOKEN')
client_mock.assert_called_with('token-321')
@patch('ansible_collections.community.general.plugins.lookup.manifold.os.getenv')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_falls_raises_on_no_token(self, client_mock, getenv_mock):
getenv_mock.return_value = None
client_mock.return_value.get_resources.return_value = []
client_mock.return_value.get_credentials.return_value = []
with self.assertRaises(AnsibleError) as context:
self.lookup.run([])
self.assertEqual('API token is required. Please set api_token parameter or MANIFOLD_API_TOKEN env var',
str(context.exception))
# File: tase/db/graph_models/__init__.py (soran-ghaderi/Chromusic_search_engine, Apache-2.0)
from . import edges, vertices
# File: boogio/test/test_aws_reporter.py (osgirl/boogio, Apache-2.0)
# ----------------------------------------------------------------------------
# Copyright (C) 2017 Verizon. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ----------------------------------------------------------------------------
'''Test cases for the aws_reporter.py module.'''
import json
import os
import tempfile
import unittest
import xlsxwriter
import boogio.aws_reporter as aws_reporter
import boogio.aws_surveyor as aws_surveyor
import boogio.report_definitions
import boogio.utensils.tabulizer
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestReportDefinition(unittest.TestCase):
'''
Test cases for aws_reporter.ReportDefinition.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def setUp(self):
pass
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
cls.sample_name = 'Sample Reporter'
cls.sample_entity_type = 'eip'
cls.sample_prune_specs = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.sample_prune_specs_no_path_to_none = [
{'path': 'meta.profile_name'},
{'path': 'meta.region_name'},
]
cls.sample_prune_specs_varied_path_to_none = [
{'path': 'meta.profile_name', 'path_to_none': False},
{'path': 'meta.region_name'},
]
cls.sample_default_column_order = ['meta.profile_name']
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_init_minimal(self):
'''
Tests of initialization of ReportDefinition instances.
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type
)
self.assertIsNotNone(definition)
self.assertEqual(definition.name, self.sample_name)
self.assertEqual(definition.entity_type, self.sample_entity_type)
self.assertEqual(definition.prune_specs, [])
self.assertEqual(definition.default_column_order, None)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_init_all(self):
'''
Tests of initialization of ReportDefinition instances.
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order
)
self.assertIsNotNone(definition)
self.assertEqual(definition.name, self.sample_name)
self.assertEqual(definition.entity_type, self.sample_entity_type)
self.assertEqual(definition.prune_specs, self.sample_prune_specs)
self.assertEqual(
definition.default_column_order,
self.sample_default_column_order
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_init_path_to_none(self):
'''
Test handling of path_to_none in ReportDefinition prune_specs.
'''
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order
)
self.assertTrue(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(path_to_none_values, [False])
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order,
default_path_to_none=True
)
self.assertTrue(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(path_to_none_values, [False])
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs_no_path_to_none,
default_column_order=self.sample_default_column_order,
default_path_to_none=True
)
self.assertTrue(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(
list(set(path_to_none_values)),
[True]
)
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs_no_path_to_none,
default_column_order=self.sample_default_column_order,
default_path_to_none=False
)
self.assertFalse(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(
list(set(path_to_none_values)),
[False]
)
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs_varied_path_to_none,
default_column_order=self.sample_default_column_order,
default_path_to_none=False
)
self.assertFalse(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(
list(set(path_to_none_values)),
[False]
)
# - - - - - - - - - - - -
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs_varied_path_to_none,
default_column_order=self.sample_default_column_order,
default_path_to_none=True
)
self.assertTrue(definition.default_path_to_none)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(
list(set(path_to_none_values)),
[True, False]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_assign_prune_specs(self):
'''
Test assigning ReportDefinition prune_specs.
In particular, setting the path_to_none.
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order,
default_path_to_none=False
)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(path_to_none_values, [False])
definition.prune_specs = []
self.assertEqual(definition.prune_specs, [])
definition.prune_specs = (
self.sample_prune_specs_no_path_to_none
)
path_to_none_values = [
p['path_to_none'] for p in definition.prune_specs
]
self.assertItemsEqual(
list(set(path_to_none_values)),
[False]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_copy(self):
'''
Tests of copying ReportDefinition instances.
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order
)
definition2 = definition.copy()
self.assertEqual(
definition2.name,
definition.name
)
self.assertEqual(
definition2.entity_type,
definition.entity_type
)
self.assertEqual(
definition2.prune_specs,
definition.prune_specs
)
self.assertEqual(
definition2.default_column_order,
definition.default_column_order
)
definition3 = definition2.copy()
definition3.name = 'Changed name'
definition3.entity_type = 'foo'
definition3.prune_specs.append({'path': 'dot.dot.dot'})
definition3.default_column_order.append('dot.dot')
self.assertEqual(
definition2.name,
definition.name
)
self.assertEqual(
definition2.entity_type,
definition.entity_type
)
self.assertEqual(
definition2.prune_specs,
definition.prune_specs
)
self.assertEqual(
definition2.default_column_order,
definition.default_column_order
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_extract_from_flat(self):
'''
Tests for ReportDefinition.extract_from().
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order
)
surveyor = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
surveyor.survey('eip')
informers = surveyor.informers()
report = definition.extract_from(
informers
)
self.assertEqual(type(report), list)
self.assertNotEqual(report, [])
self.assertEqual(type(report[0]), dict)
self.assertEqual(
set([len(r.items()) for r in report]),
set([1])
)
self.assertEqual(
set([r.items()[0][0] for r in report]),
set(['meta.profile_name'])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_report_definition_extract_from_nested(self):
'''
Tests for ReportDefinition.extract_from().
'''
definition = aws_reporter.ReportDefinition(
name=self.sample_name,
entity_type=self.sample_entity_type,
prune_specs=self.sample_prune_specs,
default_column_order=self.sample_default_column_order
)
surveyor = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
surveyor.survey('eip')
informers = surveyor.informers()
report = definition.extract_from(
informers,
flat=False
)
self.assertEqual(type(report), list)
self.assertNotEqual(report, [])
self.assertEqual(type(report[0]), dict)
self.assertEqual(
set([len(r.items()) for r in report]),
set([1])
)
self.assertEqual(
set([r.items()[0][0] for r in report]),
set(['meta'])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterInit(unittest.TestCase):
'''
Test cases for AWSReporter initialization.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
cls.sample_name = 'Sample Reporter'
cls.sample_entity_type = 'eip'
cls.sample_prune_specs = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.sample_default_column_order = ['meta.profile_name']
cls.report_definition = aws_reporter.ReportDefinition(
name=cls.sample_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs,
default_column_order=cls.sample_default_column_order
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_init_no_reports(self):
'''
Tests of initialization of AWSReporter instances without
assignment of report definitions.
'''
reporter = aws_reporter.AWSReporter()
self.assertIsNotNone(reporter)
self.assertEqual(reporter.report_definitions(), [])
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_init_with_reports(self):
'''
Tests of initialization of AWSReporter instances with
assignment of report definitions.
'''
reporter = aws_reporter.AWSReporter(
packaged_report_definitions=True
)
self.assertIsNotNone(reporter)
self.assertNotEqual(reporter.report_definitions(), [])
# We'll refer to this in later asserts.
packaged_report_count = len(reporter.report_definitions())
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
self.assertIsNotNone(reporter)
self.assertEqual(len(reporter.report_definitions()), 1)
self.assertEqual(
reporter.report_definitions()[0].name,
self.sample_name
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition],
packaged_report_definitions=True
)
self.assertIsNotNone(reporter)
self.assertEqual(
len(reporter.report_definitions()),
1 + packaged_report_count
)
self.assertTrue(
self.sample_name in [
r.name for r in reporter.report_definitions()
],
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterGeneralMethods(unittest.TestCase):
'''
Test cases for AWSReporter methods.
'''
# pylint: disable=invalid-name
# pylint: disable=protected-access
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
cls.sample_entity_type = 'eip'
# These will be assigned.
cls.sample_name_a1 = 'Sample Reporter a1'
cls.report_definition_a1 = aws_reporter.ReportDefinition(
name=cls.sample_name_a1,
entity_type=cls.sample_entity_type,
)
cls.sample_name_a2 = 'Sample Reporter a2'
cls.report_definition_a2 = aws_reporter.ReportDefinition(
name=cls.sample_name_a2,
entity_type=cls.sample_entity_type,
)
# These will be passed.
cls.sample_name_p1 = 'Sample Reporter p1'
cls.report_definition_p1 = aws_reporter.ReportDefinition(
name=cls.sample_name_p1,
entity_type=cls.sample_entity_type,
)
cls.sample_name_p2 = 'Sample Reporter p2'
cls.report_definition_p2 = aws_reporter.ReportDefinition(
name=cls.sample_name_p2,
entity_type=cls.sample_entity_type,
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_combined_report_definitions(self):
'''
Test AWSReporter._combined_report_definitions.
'''
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter()
self.assertItemsEqual(
reporter._combined_report_definitions(),
[]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1]
),
[]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_p1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1],
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_p1
]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition_a1]
)
self.assertItemsEqual(
reporter._combined_report_definitions(),
[
self.report_definition_a1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1]
),
[
self.report_definition_a1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_a1,
self.report_definition_p1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1],
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_a1,
self.report_definition_p1
]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_a1,
self.report_definition_a2
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(),
[
self.report_definition_a1,
self.report_definition_a2
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1]
),
[
self.report_definition_a1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_a1,
self.report_definition_a2,
self.report_definition_p1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1],
report_definitions=[self.report_definition_p1]
),
[
self.report_definition_a1,
self.report_definition_p1
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[self.sample_name_a1],
report_definitions=[
self.report_definition_p1,
self.report_definition_p2
]
),
[
self.report_definition_a1,
self.report_definition_p1,
self.report_definition_p2
]
)
self.assertItemsEqual(
reporter._combined_report_definitions(
report_names=[
self.sample_name_a1,
self.sample_name_a2
],
report_definitions=[
self.report_definition_p1,
self.report_definition_p2
]
),
[
self.report_definition_a1,
self.report_definition_a2,
self.report_definition_p1,
self.report_definition_p2
]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_definitions(self):
'''
Test AWSReporter.report_definitions().
'''
reporter = aws_reporter.AWSReporter()
self.assertItemsEqual(
reporter.report_definitions(),
[]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_a1,
self.report_definition_a2,
]
)
self.assertItemsEqual(
reporter.report_definitions(),
[self.report_definition_a1, self.report_definition_a2]
)
self.assertItemsEqual(
reporter.report_definitions(self.sample_name_a1),
[self.report_definition_a1]
)
self.assertItemsEqual(
reporter.report_definitions(
self.sample_name_a1,
self.sample_name_a2
),
[self.report_definition_a1, self.report_definition_a2]
)
self.assertEqual(
len(reporter.report_definitions(
self.sample_name_a1,
self.sample_name_a1
)),
1
)
self.assertItemsEqual(
reporter.report_definitions(
self.sample_name_a1,
self.sample_name_a1
),
[self.report_definition_a1]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_names(self):
'''
Test AWSReporter.report_names().
'''
reporter = aws_reporter.AWSReporter()
self.assertItemsEqual(
reporter.report_names(),
[]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter()
self.assertItemsEqual(
reporter.report_names(self.report_definition_p1),
[
self.sample_name_p1,
]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_a1,
self.report_definition_a2,
]
)
self.assertItemsEqual(
reporter.report_names(),
[
self.sample_name_a1,
self.sample_name_a2,
]
)
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_a1,
self.report_definition_a2,
]
)
self.assertItemsEqual(
reporter.report_names(self.report_definition_p1),
[
self.sample_name_a1,
self.sample_name_a2,
self.sample_name_p1,
]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterAssignReports(unittest.TestCase):
'''
Test cases for AWSReporter report assignment methods.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
cls.sample_name = 'Sample Reporter'
cls.sample_entity_type = 'eip'
cls.sample_prune_specs = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.sample_default_column_order = ['meta.profile_name']
cls.report_definition = aws_reporter.ReportDefinition(
name=cls.sample_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs,
default_column_order=cls.sample_default_column_order
)
cls.sample_name_2 = 'Sample Reporter 2'
cls.report_definition_2 = aws_reporter.ReportDefinition(
name=cls.sample_name_2,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs,
default_column_order=cls.sample_default_column_order
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_report_definitions_single(self):
'''
Tests of the AWSReporter add_report_definitions() method,
adding one definition at a time.
'''
reporter = aws_reporter.AWSReporter()
self.assertEqual(reporter.report_definitions(), [])
reporter.add_report_definitions([self.report_definition])
self.assertEqual(len(reporter.report_definitions()), 1)
self.assertEqual(
reporter.report_definitions()[0].name,
self.sample_name
)
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition])
reporter.add_report_definitions([self.report_definition_2])
self.assertEqual(len(reporter.report_definitions()), 2)
self.assertEqual(
set([r.name for r in reporter.report_definitions()]),
set([self.sample_name, self.sample_name_2])
)
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition])
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition_2])
with self.assertRaises(NameError):
reporter.add_report_definitions(
[self.report_definition, self.report_definition_2]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_report_definitions_multiple(self):
'''
Tests of the AWSReporter add_report_definitions() method,
adding multiple definitions at a time.
'''
reporter = aws_reporter.AWSReporter()
self.assertEqual(reporter.report_definitions(), [])
reporter.add_report_definitions(
[self.report_definition, self.report_definition_2]
)
self.assertEqual(len(reporter.report_definitions()), 2)
self.assertEqual(
set([r.name for r in reporter.report_definitions()]),
set([self.sample_name, self.sample_name_2])
)
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition])
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition_2])
with self.assertRaises(NameError):
reporter.add_report_definitions(
[self.report_definition, self.report_definition_2]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_packaged_report_definitions(self):
'''
Tests of the AWSReporter add_packaged_report_definitions()
method.
'''
reporter = aws_reporter.AWSReporter()
self.assertEqual(reporter.report_definitions(), [])
reporter.add_packaged_report_definitions(
[boogio.report_definitions]
)
self.assertNotEqual(reporter.report_definitions(), [])
self.assertIn(
'EC2Instances',
[r.name for r in reporter.report_definitions()]
)
with self.assertRaises(NameError):
reporter.add_packaged_report_definitions(
[boogio.report_definitions]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_multiple_report_definitions(self):
'''
Tests of adding AWSReporter report definitions from both
packages and individually.
'''
reporter = aws_reporter.AWSReporter()
self.assertEqual(reporter.report_definitions(), [])
reporter.add_packaged_report_definitions(
[boogio.report_definitions]
)
packaged_report_count = len(reporter.report_definitions())
reporter.add_report_definitions(
[self.report_definition, self.report_definition_2]
)
self.assertEqual(
len(reporter.report_definitions()),
packaged_report_count + 2
)
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition])
with self.assertRaises(NameError):
reporter.add_report_definitions([self.report_definition_2])
with self.assertRaises(NameError):
reporter.add_report_definitions(
[self.report_definition, self.report_definition_2]
)
with self.assertRaises(NameError):
reporter.add_packaged_report_definitions(
[boogio.report_definitions]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReportErrors(unittest.TestCase):
'''
Test cases for AWSReporter.report() method exceptions.
This is the "singular" method for one report definition.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
# Define a reporter to use in cases.
cls.sample_name = 'Sample Reporter'
cls.sample_entity_type = 'eip'
cls.sample_prune_specs = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.sample_default_column_order = ['meta.profile_name']
cls.report_definition = aws_reporter.ReportDefinition(
name=cls.sample_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs,
default_column_order=cls.sample_default_column_order
)
# Define a surveyor and informers to use in cases.
cls.surveyor = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor.survey('eip')
cls.informer = cls.surveyor.informers()[0]
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_errors(self):
'''
Tests of the AWSReporter.report() method with improper
signature, raising exceptions.
'''
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.report()
# - - - - - - - - - - - -
# No informer or surveyor; reports assigned.
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
with self.assertRaises(TypeError):
reporter.report()
# - - - - - - - - - - - -
# No informer or surveyor; reports passed to report().
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.report(
report_definition=self.report_definition
)
# - - - - - - - - - - - -
# Surveyor used; no reports assigned or selected.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.report(surveyors=[self.surveyor])
# - - - - - - - - - - - -
# Informer used; no reports assigned or selected.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.report(informers=[self.informer])
# - - - - - - - - - - - -
# No reports assigned; report name is missing.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(IndexError):
reporter.report(
informers=[self.informer],
report_name='ceci_nest_pas_une_nome'
)
# - - - - - - - - - - - -
# Reports assigned; report name is missing.
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
with self.assertRaises(IndexError):
reporter.report(
informers=[self.informer],
report_name='ceci_nest_pas_une_nome'
)
# - - - - - - - - - - - -
# Both report name and report definition provided.
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
with self.assertRaises(TypeError):
reporter.report(
informers=[self.informer],
report_definition=self.report_definition,
report_name=self.report_definition.name
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReportsErrors(unittest.TestCase):
'''
Test cases for AWSReporter.reports() method exceptions.
This is the "plural" method that calls AWSReporter.report()
multiple times.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
# Define a reporter to use in cases.
cls.sample_name = 'Sample Reporter'
cls.sample_entity_type = 'eip'
cls.sample_prune_specs = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.sample_default_column_order = ['meta.profile_name']
cls.report_definition = aws_reporter.ReportDefinition(
name=cls.sample_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs,
default_column_order=cls.sample_default_column_order
)
# Define a surveyor and informers to use in cases.
cls.surveyor = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor.survey('eip')
cls.informer = cls.surveyor.informers()[0]
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_errors(self):
'''
Tests of the AWSReporter.reports() method with improper
signature, raising exceptions.
'''
# - - - - - - - - - - - -
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.reports()
# - - - - - - - - - - - -
# No informer or surveyor; reports assigned.
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
with self.assertRaises(TypeError):
reporter.reports()
# - - - - - - - - - - - -
# No informer or surveyor; reports passed to report().
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.reports(
report_definitions=[self.report_definition]
)
# - - - - - - - - - - - -
# Surveyor used; no reports assigned or selected.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.reports(surveyors=[self.surveyor])
# - - - - - - - - - - - -
# Informer used; no reports assigned or selected.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(TypeError):
reporter.reports(informers=[self.informer])
# - - - - - - - - - - - -
# No reports assigned; report name is missing.
reporter = aws_reporter.AWSReporter()
with self.assertRaises(IndexError):
reporter.reports(
informers=[self.informer],
report_definitions=[self.report_definition],
report_names=['ceci_nest_pas_une_nome']
)
# - - - - - - - - - - - -
# Reports assigned; report name is missing.
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition]
)
with self.assertRaises(IndexError):
reporter.reports(
informers=[self.informer],
report_names=['ceci_nest_pas_une_nome']
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReport(unittest.TestCase):
'''
Test cases for the AWSReporter.report() method.
This is the "singular" method for one report definition.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
# Define reporters to use in cases.
cls.sample_entity_type = 'eip'
# - - - - - - - - - - - -
cls.sample_name_profile_name = 'Sample Reporter profile_name'
cls.sample_prune_specs_profile_name = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.report_definition_profile_name = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_profile_name,
)
# - - - - - - - - - - - -
cls.sample_name_region_name = 'Sample Reporter region_name'
cls.sample_prune_specs_region_name = [
{'path': 'meta.region_name', 'path_to_none': False}
]
cls.report_definition_region_name = aws_reporter.ReportDefinition(
name=cls.sample_name_region_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_region_name,
)
# - - - - - - - - - - - -
cls.sample_entity_type_2 = 'vpc'
cls.sample_name_profile_name_2 = 'Sample Reporter profile_name 2'
cls.sample_prune_specs_profile_name_2 = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.report_definition_profile_name_2 = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name_2,
entity_type=cls.sample_entity_type_2,
prune_specs=cls.sample_prune_specs_profile_name_2,
)
# - - - - - - - - - - - -
# Define a surveyor and informers to use in cases.
cls.surveyor_eip = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip.survey('eip')
cls.informers = cls.surveyor_eip.informers()
# An alias for consistency & clarity when needed.
cls.eip_informers = cls.informers
# - - - - - - - - - - - -
# Define additional surveyors and informers to use in cases.
cls.surveyor_vpc = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_vpc.survey('vpc')
cls.vpc_informers = cls.surveyor_vpc.informers()
# - - - - - - - - - - - -
cls.surveyor_eip_vpc = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip_vpc.survey('eip', 'vpc')
cls.eip_vpc_informers = cls.surveyor_eip_vpc.informers()
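The prune specs above name values with dotted paths such as `meta.profile_name`. A minimal sketch of how such a path addresses a nested dict (the entity layout here is an assumption for illustration, not real informer data):

```python
# Illustration only: resolve a dotted 'path' against nested dicts.
# The sample entity layout is assumed, not taken from a real informer.
entity = {'meta': {'profile_name': 'default', 'region_name': 'us-east-1'}}

def fetch(data, path):
    '''Walk a dotted path like "meta.profile_name" into nested dicts.'''
    for key in path.split('.'):
        data = data[key]
    return data

assert fetch(entity, 'meta.profile_name') == 'default'
```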
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_no_informers(self):
'''
Tests of the AWSReporter.report() method when the list
of informers to be checked is empty. Check both with ``flat``
set to ``True`` and to ``False``.
'''
reporter = aws_reporter.AWSReporter()
report = reporter.report(
informers=[],
report_definition=self.report_definition_profile_name,
flat=True
)
# List is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertEqual(report, [])
report = reporter.report(
informers=[],
report_definition=self.report_definition_profile_name,
flat=False
)
# List is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertEqual(report, [])
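For a non-empty report, the docstrings above distinguish flat and nested output. A hedged sketch of the two shapes (the key layout is an assumption for illustration, not the real `extract_from()` output):

```python
# Assumed shapes only; real report rows come from extract_from().
nested = [{'meta': {'profile_name': 'default'}}]   # flat=False: nested dicts
flat = [{'meta.profile_name': 'default'}]          # flat=True: dotted keys
assert isinstance(nested[0]['meta'], dict)
assert list(flat[0]) == ['meta.profile_name']
```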
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_flat_named_with_surveyors(self):
'''
Tests of the AWSReporter.report() method when report() is
passed report names and surveyors. Here we set flat to True.
'''
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_profile_name,
self.report_definition_region_name
]
)
report = reporter.report(
surveyors=[self.surveyor_eip],
report_name=self.sample_name_region_name,
flat=True
)
# List is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_nested_passed_with_informers(self):
'''
Tests of the AWSReporter.report() method when report() is
passed report definitions and informers. Here we set flat to
False.
'''
reporter = aws_reporter.AWSReporter()
report = reporter.report(
informers=self.informers,
report_definition=self.report_definition_profile_name,
flat=False
)
# List is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_passed_with_surveyors(self):
'''
Tests of the AWSReporter.report() method when report() is
passed report definitions and surveyors. Here we set flat to
True.
'''
reporter = aws_reporter.AWSReporter()
report = reporter.report(
surveyors=[self.surveyor_eip],
report_definition=self.report_definition_profile_name,
flat=True
)
# List is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_report_multiple_informer_types(self):
'''
Tests of the AWSReporter.report() method when report() is
passed multiple informer types.
'''
reporter = aws_reporter.AWSReporter()
# report_definition_region_name should extract region name
# from eip resources only.
report_eip_vpc_informers_eip_region_name = reporter.report(
informers=self.eip_vpc_informers,
report_definition=self.report_definition_region_name,
flat=True
)
report_eip_informers_eip_region_name = reporter.report(
informers=self.eip_informers,
report_definition=self.report_definition_region_name,
flat=True
)
self.assertEqual(
len(report_eip_vpc_informers_eip_region_name),
len(report_eip_informers_eip_region_name)
)
# - - - - - - - - - - - -
# report_definition_profile_name should extract profile name
# from eip resources only.
report_eip_vpc_informers_eip_profile_name = reporter.report(
informers=self.eip_vpc_informers,
report_definition=self.report_definition_profile_name,
flat=True
)
report_eip_informers_eip_profile_name = reporter.report(
informers=self.eip_informers,
report_definition=self.report_definition_profile_name,
flat=True
)
self.assertEqual(
len(report_eip_vpc_informers_eip_profile_name),
len(report_eip_informers_eip_profile_name)
)
# - - - - - - - - - - - -
# report_definition_profile_name_2 should extract profile name
# from vpc resources only.
report_eip_vpc_informers_vpc_profile_name = reporter.report(
informers=self.eip_vpc_informers,
report_definition=self.report_definition_profile_name_2,
flat=True
)
report_vpc_informers_vpc_profile_name = reporter.report(
informers=self.vpc_informers,
report_definition=self.report_definition_profile_name_2,
flat=True
)
self.assertEqual(
len(report_eip_vpc_informers_vpc_profile_name),
len(report_vpc_informers_vpc_profile_name)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReports(unittest.TestCase):
'''
Test cases for the AWSReporter.reports() method.
This is the "plural" method that calls AWSReporter.report()
multiple times.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
# Define report definitions to use in cases.
cls.sample_entity_type = 'eip'
cls.sample_name_profile_name = 'Sample Reporter profile_name'
cls.sample_prune_specs_profile_name = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.report_definition_profile_name = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_profile_name,
)
cls.sample_name_region_name = 'Sample Reporter region_name'
cls.sample_prune_specs_region_name = [
{'path': 'meta.region_name', 'path_to_none': False}
]
cls.report_definition_region_name = aws_reporter.ReportDefinition(
name=cls.sample_name_region_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_region_name,
)
# Define a surveyor and informers to use in cases.
cls.surveyor_eip = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip.survey('eip')
cls.informers = cls.surveyor_eip.informers()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_no_informers(self):
'''
Tests of the AWSReporter.reports() method when the list
of informers to be checked is empty. Check both with ``flat``
set to ``True`` and to ``False``.
'''
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition_profile_name]
)
reports = reporter.reports(
informers=[],
flat=True
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertEqual(report, [])
reports = reporter.reports(
informers=[],
flat=False
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertEqual(report, [])
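The assertions above check the mapping that `reports()` returns: one key per report definition name, each value a report list. A small sketch of that shape (the name and rows are assumptions for illustration):

```python
# Assumed data; only the dict-of-lists shape mirrors the assertions above.
reports = {
    'Sample Reporter profile_name': [
        {'meta.profile_name': 'default'},
        {'meta.profile_name': 'default'},
    ],
}
assert isinstance(reports, dict) and len(reports) == 1
report = list(reports.values())[0]
assert isinstance(report, list) and isinstance(report[0], dict)
```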
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_all_assigned_with_informers(self):
'''
Tests of the AWSReporter.reports() method when report
definitions are assigned at instantiation and reports() is
passed informers. Here we set flat to True.
'''
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition_profile_name]
)
self.assertNotEqual(self.informers, [])
reports = reporter.reports(
informers=self.informers,
flat=True
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertNotEqual(reports, {})
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_all_assigned_with_surveyors(self):
'''
Tests of the AWSReporter.reports() method when report
definitions are assigned at instantiation and reports() is
passed surveyors. Here we set flat to False.
'''
reporter = aws_reporter.AWSReporter(
report_definitions=[self.report_definition_profile_name]
)
reports = reporter.reports(
surveyors=[self.surveyor_eip],
flat=False
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_named_with_informers(self):
'''
Tests of the AWSReporter.reports() method when reports() is
passed report names and informers. Here we set flat to True.
'''
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_profile_name,
self.report_definition_region_name
]
)
reports = reporter.reports(
informers=self.informers,
report_names=[self.sample_name_region_name],
flat=True
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_region_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_named_and_passed_with_surveyors(self):
'''
Tests of the AWSReporter.reports() method when report
definitions are both assigned/named and passed in and
reports() is passed surveyors. Here we set flat to True.
'''
sample_name_public_ip = 'Sample Reporter PublicIp'
sample_prune_specs_public_ip = [
{'path': 'PublicIp', 'path_to_none': True}
]
report_definition_public_ip = aws_reporter.ReportDefinition(
name=sample_name_public_ip,
entity_type=self.sample_entity_type,
prune_specs=sample_prune_specs_public_ip,
)
reporter = aws_reporter.AWSReporter(
report_definitions=[
self.report_definition_profile_name,
report_definition_public_ip
]
)
reports = reporter.reports(
surveyors=[self.surveyor_eip],
report_names=[
self.sample_name_profile_name,
],
report_definitions=[self.report_definition_region_name],
flat=True
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 2)
self.assertItemsEqual(
reports.keys(), [
self.sample_name_profile_name,
self.report_definition_region_name.name
]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_passed_with_informers(self):
'''
Tests of the AWSReporter.reports() method when reports() is
passed report definitions and informers. Here we set flat to
False.
'''
reporter = aws_reporter.AWSReporter()
reports = reporter.reports(
informers=self.informers,
report_definitions=[self.report_definition_profile_name],
flat=False
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_reports_passed_with_surveyors(self):
'''
Tests of the AWSReporter.reports() method when reports() is
passed report definitions and surveyors. Here we set flat to
True.
'''
reporter = aws_reporter.AWSReporter()
reports = reporter.reports(
surveyors=[self.surveyor_eip],
report_definitions=[self.report_definition_profile_name],
flat=True
)
# Dict items are the result of each report definition.
self.assertIsNotNone(reports)
self.assertTrue(isinstance(reports, dict))
self.assertEqual(len(reports), 1)
self.assertItemsEqual(
reports.keys(), [self.report_definition_profile_name.name]
)
report = reports.values()[0]
# The report is the result of calling extract_from().
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReportFormats(unittest.TestCase):
'''
Test cases for AWSReporter report methods.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
# - - - - - - - - - - - - - - - - - - - -
# Define report definitions to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.sample_entity_type = 'eip'
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_profile_name = 'Sample Reporter profile_name'
cls.sample_prune_specs_profile_name = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.report_definition_profile_name = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_profile_name,
)
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_region_name = 'Sample Reporter region_name'
cls.sample_prune_specs_region_name = [
{'path': 'meta.region_name', 'path_to_none': False}
]
cls.report_definition_region_name = aws_reporter.ReportDefinition(
name=cls.sample_name_region_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_region_name,
)
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_public_ip = 'Sample Reporter PublicIp'
cls.sample_prune_specs_public_ip = [
{'path': 'PublicIp', 'path_to_none': False}
]
cls.report_definition_public_ip = aws_reporter.ReportDefinition(
name=cls.sample_name_public_ip,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_public_ip,
)
# - - - - - - - - - - - - - - - - - - - -
# Define a surveyor and informers to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.surveyor_eip = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip.survey('eip')
cls.informers = cls.surveyor_eip.informers()
# - - - - - - - - - - - - - - - - - - - -
# Define reporters to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_reporter = aws_reporter.AWSReporter(
report_definitions=[
cls.report_definition_profile_name,
]
)
cls.single_definition_report_name = (
cls.report_definition_profile_name.name
)
cls.triple_definition_reporter = aws_reporter.AWSReporter(
report_definitions=[
cls.report_definition_profile_name,
cls.report_definition_region_name,
cls.report_definition_public_ip
]
)
cls.triple_definition_report_names = [
d.name
for d in cls.triple_definition_reporter.report_definitions()
]
# - - - - - - - - - - - - - - - - - - - -
# Run their basic reports so we can compare to formatted
# report results.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_report_flat = (
cls.single_definition_reporter.reports(
informers=cls.informers,
flat=True
)
)
cls.single_definition_report_nested = (
cls.single_definition_reporter.reports(
informers=cls.informers,
flat=False
)
)
# - - - - - - - - - - - -
cls.triple_definition_report_flat = (
cls.triple_definition_reporter.reports(
informers=cls.informers,
flat=True
)
)
cls.triple_definition_report_nested = (
cls.triple_definition_reporter.reports(
informers=cls.informers,
flat=False
)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_csv_list(self):
'''Tests for the aws_reporter csv_list() method. '''
report_name = self.single_definition_report_name
report = (
self.single_definition_reporter.csv_list(
informers=self.informers,
report_name=report_name
)
)
self.assertTrue(isinstance(report, list))
self.assertEqual(
len(report),
1 + len(self.single_definition_report_flat[report_name])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_tsv_list(self):
'''Tests for the aws_reporter tsv_list() method. '''
report_name = self.single_definition_report_name
report = (
self.single_definition_reporter.tsv_list(
informers=self.informers,
report_name=report_name
)
)
self.assertTrue(isinstance(report, list))
self.assertEqual(
len(report),
1 + len(self.single_definition_report_flat[report_name])
)
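Both list tests rely on the same arithmetic: the separated-value list is the flat report plus one header row. Sketched here with assumed rows and an assumed column name:

```python
# Assumed rows and column name; only the 1 + len(...) relation matters.
flat_report = [{'profile_name': 'default'}, {'profile_name': 'dev'}]
columns = sorted(flat_report[0])
sv_list = [','.join(columns)]                            # header row
sv_list += [','.join(str(row[c]) for c in columns) for row in flat_report]
assert len(sv_list) == 1 + len(flat_report)
```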
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_sv_list(self):
'''Tests for the aws_reporter sv_list() method. '''
# TODO: Document, add include_headers, columns and
# placeholder arguments to pass to tabulizer.
# - - - - - - - - - - - -
report_name = self.single_definition_report_name
report = (
self.single_definition_reporter.sv_list(
informers=self.informers,
report_name=report_name,
separator='XXX',
# include_headers=True
)
)
self.assertTrue(isinstance(report, list))
# This adds a header list.
self.assertEqual(
len(report),
1 + len(self.single_definition_report_flat[report_name])
)
# TODO: Move to plural version, if it gets created.
# # - - - - - - - - - - - -
# report = (
# self.triple_definition_reporter.sv_list(
# informers=self.informers,
# separator='XXX',
# # include_headers=True
# )
# )
# self.assertTrue(isinstance(report, list))
# # This adds a header list.
# self.assertEqual(
# len(report),
# 3 + reduce(
# lambda x, y: x + y,
# [len(d) for d in self.triple_definition_report_flat]
# )
# )
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_json_dumps(self):
'''Tests for the aws_reporter json_dumps() method. '''
# - - - - - - - - - - - -
# Flat.
# - - - - - - - - - - - -
report_name = self.single_definition_report_name
report_json = (
self.single_definition_reporter.json_dumps(
informers=self.informers,
report_name=report_name,
flat=True
)
)
report = json.loads(report_json)
self.assertTrue(isinstance(report, list))
self.assertEqual(
len(report),
len(self.single_definition_report_flat[report_name])
)
# - - - - - - - - - - - -
# Not flat.
# - - - - - - - - - - - -
report_name = self.single_definition_report_name
report_json = (
self.single_definition_reporter.json_dumps(
informers=self.informers,
report_name=report_name,
flat=False
)
)
report = json.loads(report_json)
self.assertTrue(isinstance(report, list))
self.assertNotEqual(report, [])
self.assertTrue(isinstance(report[0], dict))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_tabulizer(self):
'''Tests for the aws_reporter tabulizer() method. '''
# TODO: Add support for headers, columns arguments.
report_name = self.single_definition_report_name
report = self.single_definition_reporter.tabulizer(
informers=self.informers,
report_name=report_name
)
self.assertTrue(
isinstance(report, boogio.utensils.tabulizer.Tabulizer)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_tabulizers(self):
'''Tests for the aws_reporter tabulizers() method.
This is the plural method, which calls the singular tabulizer
method for multiple report definitions.
'''
report_names = self.triple_definition_report_names
reports = self.triple_definition_reporter.tabulizers(
informers=self.informers,
)
self.assertItemsEqual(
reports.keys(),
report_names
)
for report_name in report_names:
self.assertTrue(
isinstance(
reports[report_name],
boogio.utensils.tabulizer.Tabulizer
)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReportExcelMethods(unittest.TestCase):
'''
Test cases for AWSReporter report excel interaction methods.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
'''Class setup. Doh.
'''
# Get a temp directory for output.
cls.tmpdir = tempfile.mkdtemp()
# - - - - - - - - - - - - - - - - - - - -
# Define report definitions to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.sample_entity_type = 'eip'
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_profile_name = 'Sample Reporter profile_name'
cls.sample_prune_specs_profile_name = [
{'path': 'meta.profile_name', 'path_to_none': False}
]
cls.report_definition_profile_name = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_profile_name,
)
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_region_name = 'Sample Reporter region_name'
cls.sample_prune_specs_region_name = [
{'path': 'meta.region_name', 'path_to_none': False}
]
cls.report_definition_region_name = aws_reporter.ReportDefinition(
name=cls.sample_name_region_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_region_name,
)
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_public_ip = 'Sample Reporter PublicIp'
cls.sample_prune_specs_public_ip = [
{'path': 'PublicIp', 'path_to_none': False}
]
cls.report_definition_public_ip = aws_reporter.ReportDefinition(
name=cls.sample_name_public_ip,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_public_ip,
)
# - - - - - - - - - - - - - - - - - - - -
# Define a surveyor and informers to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.surveyor_eip = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip.survey('eip')
cls.informers = cls.surveyor_eip.informers()
# - - - - - - - - - - - - - - - - - - - -
# Define reporters to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_reporter = aws_reporter.AWSReporter(
report_definitions=[
cls.report_definition_profile_name,
]
)
cls.single_definition_report_name = (
cls.report_definition_profile_name.name
)
cls.triple_definition_reporter = aws_reporter.AWSReporter(
report_definitions=[
cls.report_definition_profile_name,
cls.report_definition_region_name,
cls.report_definition_public_ip
]
)
cls.triple_definition_report_names = [
d.name
for d in cls.triple_definition_reporter.report_definitions()
]
# - - - - - - - - - - - - - - - - - - - -
# Run their basic reports so we can compare to formatted
# report results.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_report_flat = (
cls.single_definition_reporter.reports(
informers=cls.informers,
flat=True
)
)
cls.triple_definition_report_flat = (
cls.triple_definition_reporter.reports(
informers=cls.informers,
flat=True
)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def tearDownClass(cls):
'''
Remove test files and temp directory.
'''
for root, dirs, files in os.walk(cls.tmpdir, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
os.rmdir(cls.tmpdir)
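The manual bottom-up walk in tearDownClass is equivalent to the stdlib `shutil.rmtree()`; shown here on a scratch directory:

```python
import os
import shutil
import tempfile

# Create a scratch tree with one file, then remove it in a single call
# instead of walking it bottom-up by hand.
scratch = tempfile.mkdtemp()
open(os.path.join(scratch, 'leftover.txt'), 'w').close()
shutil.rmtree(scratch)
assert not os.path.exists(scratch)
```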
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_worksheets_single(self):
'''Tests for the aws_reporter add_worksheets() method. '''
workbook_path = os.path.join(self.tmpdir, 'test_add_worksheets.xlsx')
workbook = xlsxwriter.Workbook(
workbook_path,
{'strings_to_numbers': True}
)
# worksheet_1 = workbook.add_worksheet()
# worksheet_2 = workbook.add_worksheet()
report_name = self.single_definition_report_name
# - - - - - - - - - - - - - - - - - - - -
# Error case testing.
# - - - - - - - - - - - - - - - - - - - -
# with self.assertRaises(ValueError):
# self.triple_definition_reporter.add_worksheets(
# worksheets=[worksheet_1],
# informers=self.informers,
# report_names=self.triple_definition_report_names
# )
# with self.assertRaises(ValueError):
# self.single_definition_reporter.add_worksheets(
# worksheets=[worksheet_1, worksheet_2],
# informers=self.informers,
# report_names=[report_name]
# )
# - - - - - - - - - - - - - - - - - - - -
# Test populating worksheets.
# - - - - - - - - - - - - - - - - - - - -
self.single_definition_reporter.add_worksheets(
workbook=workbook,
informers=self.informers,
report_names=[report_name]
)
self.assertEqual(len(workbook.worksheets()), 1)
worksheet = workbook.worksheets()[0]
# The worksheet has one extra row, for the headers, while the
# report list's length is one more than the last index, so
# these are equal.
self.assertEqual(
worksheet.dim_rowmax,
len(self.single_definition_report_flat[report_name])
)
# This doesn't add a column, the way headers add a row, so the
# dim_colmax value is one less than the length.
self.assertEqual(
worksheet.dim_colmax,
len(self.single_definition_report_flat[report_name][0]) - 1
)
self.assertEqual(worksheet.name, report_name)
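The two assertions above hinge on zero-based index arithmetic: the header adds a row but not a column. With assumed stand-in lists instead of real xlsxwriter objects:

```python
# Stand-in lists; dim_rowmax/dim_colmax are modeled as the zero-based
# indexes of the last row and column written.
header = ['profile_name']
data_rows = [['default'], ['default'], ['default']]  # the flat report
sheet_rows = [header] + data_rows
dim_rowmax = len(sheet_rows) - 1      # zero-based last row index
assert dim_rowmax == len(data_rows)   # header adds exactly one row
dim_colmax = len(header) - 1          # zero-based last column index
assert dim_colmax == len(data_rows[0]) - 1
```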
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_add_worksheets_triple(self):
'''Tests for the aws_reporter add_worksheets() method. '''
workbook_path = os.path.join(self.tmpdir, 'test_add_worksheets.xlsx')
workbook = xlsxwriter.Workbook(
workbook_path,
{'strings_to_numbers': True}
)
worksheet_1 = workbook.add_worksheet()
worksheet_2 = workbook.add_worksheet()
report_names = self.triple_definition_report_names
self.triple_definition_reporter.add_worksheets(
workbook=workbook,
informers=self.informers,
report_names=report_names
)
self.assertEqual(len(workbook.worksheets()), 5)
worksheets = workbook.worksheets()
worksheet_names = [s.name for s in worksheets]
self.assertItemsEqual(
worksheet_names,
report_names + [worksheet_1.name, worksheet_2.name]
)
for report_name in report_names:
worksheet = [w for w in worksheets if w.name == report_name][0]
# report_definition = [
# d for d in
# self.triple_definition_reporter.report_definitions()
# if d.name == report_name
# ][0]
# The worksheet has one extra row, for the headers, while the
# report list's length is one more than the last index, so
# these are equal.
self.assertEqual(
worksheet.dim_rowmax,
len(self.triple_definition_report_flat[report_name])
)
# This doesn't add a column, the way headers add a row, so the
# dim_colmax value is one less than the length.
self.assertEqual(
worksheet.dim_colmax,
len(self.triple_definition_report_flat[report_name][0]) - 1
)
self.assertEqual(worksheet.name, report_name)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_write_workbook(self):
'''Tests for the aws_reporter write_workbook() method. '''
workbook_path = os.path.join(self.tmpdir, 'test_write_workbook.xlsx')
report_name = self.single_definition_report_name
self.single_definition_reporter.write_workbook(
output_path=workbook_path,
informers=self.informers,
report_names=[report_name]
)
# This is a pretty minimal test, since xlsxwriter doesn't
# provide a "read in workbook" method.
self.assertTrue(os.path.exists(workbook_path))
statinfo = os.stat(workbook_path)
self.assertTrue(statinfo.st_size > 0)
# Replace the workbook with an empty file.
os.remove(workbook_path)
self.assertFalse(os.path.exists(workbook_path))
with open(workbook_path, 'w') as _:
pass
self.assertTrue(os.path.exists(workbook_path))
statinfo = os.stat(workbook_path)
self.assertTrue(statinfo.st_size == 0)
with self.assertRaises(ValueError):
self.single_definition_reporter.write_workbook(
output_path=workbook_path,
informers=self.informers,
report_names=[report_name]
)
self.single_definition_reporter.write_workbook(
output_path=workbook_path,
informers=self.informers,
report_names=[report_name],
overwrite=True
)
# Make sure we updated the file.
statinfo = os.stat(workbook_path)
self.assertTrue(statinfo.st_size > 0)
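The test above exercises an overwrite guard: writing over an existing file raises ValueError unless `overwrite=True`. A minimal sketch of that guard (the guard logic here is an assumption for illustration, not the real write_workbook implementation):

```python
import os
import tempfile

def write_guarded(path, overwrite=False):
    '''Refuse to clobber an existing file unless overwrite is set.'''
    if os.path.exists(path) and not overwrite:
        raise ValueError('refusing to overwrite %s' % path)
    with open(path, 'w') as handle:
        handle.write('data')

handle_fd, path = tempfile.mkstemp()
os.close(handle_fd)
try:
    write_guarded(path)                # existing file: should raise
    raised = False
except ValueError:
    raised = True
assert raised
write_guarded(path, overwrite=True)    # explicit overwrite succeeds
os.remove(path)
```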
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class TestAWSReporterReportWriteToFile(unittest.TestCase):
'''
Test cases for AWSReporter report file output methods.
'''
# pylint: disable=invalid-name
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def setUpClass(cls):
'''Class setup. Doh.
'''
# Get a temp directory for output.
cls.tmpdir = tempfile.mkdtemp()
# - - - - - - - - - - - - - - - - - - - -
# Define report definitions to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.sample_entity_type = 'eip'
# - - - - - - - - - - - - - - - - - - - -
cls.sample_name_profile_name = 'Sample Reporter profile_name'
cls.sample_prune_specs_profile_name = [
{'path': 'meta.profile_name', 'path_to_none': False},
{'path': 'meta.region_name', 'path_to_none': False},
{'path': 'PublicIp', 'path_to_none': False}
]
cls.report_definition_profile_name = aws_reporter.ReportDefinition(
name=cls.sample_name_profile_name,
entity_type=cls.sample_entity_type,
prune_specs=cls.sample_prune_specs_profile_name,
)
# # - - - - - - - - - - - - - - - - - - - -
# cls.sample_name_region_name = 'Sample Reporter region_name'
# cls.sample_prune_specs_region_name = [
# {'path': 'meta.region_name', 'path_to_none': False}
# ]
# cls.report_definition_region_name = aws_reporter.ReportDefinition(
# name=cls.sample_name_region_name,
# entity_type=cls.sample_entity_type,
# prune_specs=cls.sample_prune_specs_region_name,
# )
# # - - - - - - - - - - - - - - - - - - - -
# cls.sample_name_public_ip = 'Sample Reporter PublicIp'
# cls.sample_prune_specs_public_ip = [
# {'path': 'PublicIp', 'path_to_none': False}
# ]
# cls.report_definition_public_ip = aws_reporter.ReportDefinition(
# name=cls.sample_name_public_ip,
# entity_type=cls.sample_entity_type,
# prune_specs=cls.sample_prune_specs_public_ip,
# )
# - - - - - - - - - - - - - - - - - - - -
# Define a surveyor and informers to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.surveyor_eip = aws_surveyor.AWSSurveyor(
profiles=['default'], regions=['us-east-1']
)
cls.surveyor_eip.survey('eip')
cls.informers = cls.surveyor_eip.informers()
# - - - - - - - - - - - - - - - - - - - -
# Define reporters to use in cases.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_reporter = aws_reporter.AWSReporter(
report_definitions=[
cls.report_definition_profile_name,
]
)
cls.single_definition_report_name = (
cls.report_definition_profile_name.name
)
# cls.triple_definition_reporter = aws_reporter.AWSReporter(
# report_definitions=[
# cls.report_definition_profile_name,
# cls.report_definition_region_name,
# cls.report_definition_public_ip
# ]
# )
# cls.triple_definition_report_names = [
# d.name
# for d in cls.triple_definition_reporter.report_definitions()
# ]
# - - - - - - - - - - - - - - - - - - - -
# Run their basic reports so we can compare to formatted
# report results.
# - - - - - - - - - - - - - - - - - - - -
cls.single_definition_report_flat = (
cls.single_definition_reporter.reports(
informers=cls.informers,
flat=True
)
)
cls.single_definition_report_nested = (
cls.single_definition_reporter.reports(
informers=cls.informers,
flat=False
)
)
# cls.triple_definition_report_flat = (
# cls.triple_definition_reporter.reports(
# informers=cls.informers,
# flat=True
# )
# )
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@classmethod
def tearDownClass(cls):
'''
Remove test files and temp directory.
'''
for root, dirs, files in os.walk(cls.tmpdir, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
os.rmdir(cls.tmpdir)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_write_sv(self):
'''Tests for the aws_reporter write_sv() method.'''
sv_file_base = 'test_write_sv'
sv_file_name = '.'.join([sv_file_base, 'sv'])
sv_path = os.path.join(self.tmpdir, sv_file_name)
current_separator = 'XXX'
report_name = self.single_definition_report_name
self.single_definition_reporter.write_sv(
separator=current_separator,
output_path=sv_path,
informers=self.informers,
report_name=report_name
)
self.assertTrue(os.path.exists(sv_path))
statinfo = os.stat(sv_path)
self.assertTrue(statinfo.st_size > 0)
with open(sv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
self.assertEqual(
len(lines[0].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
self.assertEqual(
len(lines[-1].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
# Replace the file with an empty file.
os.remove(sv_path)
self.assertFalse(os.path.exists(sv_path))
with open(sv_path, 'w') as _:
pass
self.assertTrue(os.path.exists(sv_path))
statinfo = os.stat(sv_path)
self.assertTrue(statinfo.st_size == 0)
with self.assertRaises(ValueError):
self.single_definition_reporter.write_sv(
separator=current_separator,
output_path=sv_path,
informers=self.informers,
report_name=report_name
)
self.single_definition_reporter.write_sv(
separator=current_separator,
output_path=sv_path,
informers=self.informers,
report_name=report_name,
overwrite=True
)
# Make sure we updated the file.
statinfo = os.stat(sv_path)
self.assertTrue(statinfo.st_size > 0)
with open(sv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_write_csv(self):
'''Tests for the aws_reporter write_csv() method.'''
csv_file_base = 'test_write_csv'
csv_file_name = '.'.join([csv_file_base, 'csv'])
csv_path = os.path.join(self.tmpdir, csv_file_name)
current_separator = ','
report_name = self.single_definition_report_name
self.single_definition_reporter.write_csv(
output_path=csv_path,
informers=self.informers,
report_name=report_name
)
self.assertTrue(os.path.exists(csv_path))
statinfo = os.stat(csv_path)
self.assertTrue(statinfo.st_size > 0)
with open(csv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
self.assertEqual(
len(lines[0].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
self.assertEqual(
len(lines[-1].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
# Replace the file with an empty file.
os.remove(csv_path)
self.assertFalse(os.path.exists(csv_path))
with open(csv_path, 'w') as _:
pass
self.assertTrue(os.path.exists(csv_path))
statinfo = os.stat(csv_path)
self.assertTrue(statinfo.st_size == 0)
with self.assertRaises(ValueError):
self.single_definition_reporter.write_csv(
output_path=csv_path,
informers=self.informers,
report_name=report_name
)
self.single_definition_reporter.write_csv(
output_path=csv_path,
informers=self.informers,
report_name=report_name,
overwrite=True
)
# Make sure we updated the file.
statinfo = os.stat(csv_path)
self.assertTrue(statinfo.st_size > 0)
with open(csv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_write_tsv(self):
'''Tests for the aws_reporter write_tsv() method.'''
tsv_file_base = 'test_write_tsv'
tsv_file_name = '.'.join([tsv_file_base, 'tsv'])
tsv_path = os.path.join(self.tmpdir, tsv_file_name)
current_separator = '\t'
report_name = self.single_definition_report_name
self.single_definition_reporter.write_tsv(
output_path=tsv_path,
informers=self.informers,
report_name=report_name
)
self.assertTrue(os.path.exists(tsv_path))
statinfo = os.stat(tsv_path)
self.assertTrue(statinfo.st_size > 0)
with open(tsv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
self.assertEqual(
len(lines[0].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
self.assertEqual(
len(lines[-1].split(current_separator)),
len(self.sample_prune_specs_profile_name)
)
# Replace the file with an empty file.
os.remove(tsv_path)
self.assertFalse(os.path.exists(tsv_path))
with open(tsv_path, 'w') as _:
pass
self.assertTrue(os.path.exists(tsv_path))
statinfo = os.stat(tsv_path)
self.assertTrue(statinfo.st_size == 0)
with self.assertRaises(ValueError):
self.single_definition_reporter.write_tsv(
output_path=tsv_path,
informers=self.informers,
report_name=report_name
)
self.single_definition_reporter.write_tsv(
output_path=tsv_path,
informers=self.informers,
report_name=report_name,
overwrite=True
)
# Make sure we updated the file.
statinfo = os.stat(tsv_path)
self.assertTrue(statinfo.st_size > 0)
with open(tsv_path, 'r') as fptr:
    lines = fptr.readlines()
# There's a header row.
self.assertEqual(
len(lines),
1 + len(self.single_definition_report_flat[report_name])
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def test_aws_reporter_write_json(self):
'''Tests for the aws_reporter write_json() method.'''
json_file_base = 'test_write_json'
json_file_name = '.'.join([json_file_base, 'json'])
json_path = os.path.join(self.tmpdir, json_file_name)
report_name = self.single_definition_report_name
report_data = self.single_definition_report_flat[
self.single_definition_report_name
]
self.single_definition_reporter.write_json(
output_path=json_path,
informers=self.informers,
report_name=report_name
)
self.assertTrue(os.path.exists(json_path))
statinfo = os.stat(json_path)
self.assertTrue(statinfo.st_size > 0)
with open(json_path, 'r') as fptr:
reloaded_1 = json.load(fptr)
# Replace the file with an empty file.
os.remove(json_path)
self.assertFalse(os.path.exists(json_path))
with open(json_path, 'w') as _:
pass
self.assertTrue(os.path.exists(json_path))
statinfo = os.stat(json_path)
self.assertTrue(statinfo.st_size == 0)
with self.assertRaises(ValueError):
self.single_definition_reporter.write_json(
output_path=json_path,
informers=self.informers,
report_name=report_name
)
self.single_definition_reporter.write_json(
output_path=json_path,
informers=self.informers,
report_name=report_name,
overwrite=True
)
# Make sure we updated the file.
statinfo = os.stat(json_path)
self.assertTrue(statinfo.st_size > 0)
with open(json_path, 'r') as fptr:
reloaded_2 = json.load(fptr)
self.assertEqual(
type(reloaded_1),
type(report_data)
)
self.assertEqual(reloaded_1, report_data)
self.assertEqual(reloaded_1, reloaded_2)
if __name__ == '__main__':
unittest.main()
from .check_values import *
from .plot_utils import cv_plot, plot_results, plot_latent_label, plot_latent_train_test
# -*- coding: utf-8 -*-
import csv
import sys
import time
import random
import copy
import math
import os
sys.path.append(os.getcwd())
import json
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from mpl_toolkits.mplot3d import Axes3D
from glob import glob
import skimage.io as skimage
from skimage import transform as tr
import skimage.morphology as mor
from argparse import ArgumentParser
from datetime import datetime
import pytz
#plt.style.use('ggplot')
class GraphDraw:
def __init__(self, opbase, file_list_born, file_list_abort):
self.save_dir = opbase
# self.scale = 0.8 * 0.8 * 2.0
self.scale = 1.0 * 1.0 * 1.0
self.fn_born = file_list_born
self.fn_abort = file_list_abort
self.density = 0
self.roi_pixel_num = 0
self.label = ['born', 'abort']
self.color = ['royalblue', 'tomato'] # born, abort
self.time_max = 360
self.label_size = 20
self.figsize = (8, 6)
self.ticks_size = 20
def graph_draw_number(self, Time, Count_born, Count_abort):
# Count
label = []
plt.figure(figsize=self.figsize)
for num in range(len(Count_born)):
plt.plot(Time[:len(Count_born[num])], Count_born[num], color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(Count_abort)):
plt.plot(Time[:len(Count_abort[num])], Count_abort[num], color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Number of Nuclei', size=self.label_size)
# plt.legend(self.label)
if Time[-1] != 0:
plt.xlim([0.0, round(Time[-1], 1)])
plt.savefig(os.path.join(self.save_dir, 'number.pdf'))
plt.figure(figsize=(10,6))
plt.plot(1, 1, color=self.color[0], label=self.label[0])
plt.plot(1, 1, color=self.color[1], label=self.label[1])
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0, fontsize=16)
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'legend.pdf'), bbox_inches='tight')
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(Count_born)):
plt.plot(Time[:len(Count_born[num])], Count_born[num], color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(Count_born[num][:tp_min])
for num in range(len(Count_abort)):
plt.plot(Time[:len(Count_abort[num])], Count_abort[num], color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(Count_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Number of Nuclei', size=self.label_size)
# plt.legend(self.label)
if Time[-1] != 0:
#plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.ylim([0, 40])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_number.pdf'), bbox_inches='tight')
def graph_draw_volume(self, Time, MeanVol_born, StdVol_born, MeanVol_abort, StdVol_abort):
# Volume Mean & SD
plt.figure(figsize=self.figsize)
for num in range(len(MeanVol_born)):
plt.plot(Time[:len(MeanVol_born[num])], np.array(MeanVol_born[num]) * self.scale, color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanVol_abort)):
plt.plot(Time[:len(MeanVol_abort[num])], np.array(MeanVol_abort[num]) * self.scale, color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Volume [$\mu m^{3}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'volume_mean.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanVol_born)):
plt.plot(Time[:len(MeanVol_born[num])], np.array(MeanVol_born[num]) * self.scale, color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(MeanVol_born[num][:tp_min])
for num in range(len(MeanVol_abort)):
plt.plot(Time[:len(MeanVol_abort[num])], np.array(MeanVol_abort[num]) * self.scale, color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(MeanVol_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Volume [$\mu m^{3}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.gca().get_yaxis().set_major_formatter(ticker.FuncFormatter(lambda v,p: f'{int(v):,d}'))
plt.savefig(os.path.join(self.save_dir, 'mean_volume_mean.pdf'), bbox_inches='tight')
plt.figure(figsize=self.figsize)
for num in range(len(StdVol_born)):
plt.plot(Time[:len(StdVol_born[num])], np.array(StdVol_born[num]) * self.scale, color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(StdVol_abort)):
plt.plot(Time[:len(StdVol_abort[num])], np.array(StdVol_abort[num]) * self.scale, color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Volume (standard deviation) [$\mu m^{3}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'volume_std.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(StdVol_born)):
plt.plot(Time[:len(StdVol_born[num])], np.array(StdVol_born[num]) * self.scale, color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(StdVol_born[num][:tp_min])
for num in range(len(StdVol_abort)):
plt.plot(Time[:len(StdVol_abort[num])], np.array(StdVol_abort[num]) * self.scale, color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(StdVol_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Volume (standard deviation) [$\mu m^{3}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.gca().get_yaxis().set_major_formatter(ticker.FuncFormatter(lambda v,p: f'{int(v):,d}'))
plt.savefig(os.path.join(self.save_dir, 'mean_volume_std.pdf'), bbox_inches='tight')
def graph_draw_surface(self, Time, MeanArea_born, StdArea_born, MeanArea_abort, StdArea_abort):
# Surface Mean & SD
plt.figure(figsize=self.figsize)
for num in range(len(MeanArea_born)):
plt.plot(Time[:len(MeanArea_born[num])], np.array(MeanArea_born[num]) * self.scale, color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanArea_abort)):
plt.plot(Time[:len(MeanArea_abort[num])], np.array(MeanArea_abort[num]) * self.scale, color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Surface Area [$\mu m^{2}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'surface_area_mean.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanArea_born)):
plt.plot(Time[:len(MeanArea_born[num])], np.array(MeanArea_born[num]) * self.scale, color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(MeanArea_born[num][:tp_min])
for num in range(len(MeanArea_abort)):
plt.plot(Time[:len(MeanArea_abort[num])], np.array(MeanArea_abort[num]) * self.scale, color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(MeanArea_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Surface Area [$\mu m^{2}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.gca().get_yaxis().set_major_formatter(ticker.FuncFormatter(lambda v,p: f'{int(v):,d}'))
plt.savefig(os.path.join(self.save_dir, 'mean_surface_area_mean.pdf'), bbox_inches='tight')
plt.figure(figsize=self.figsize)
for num in range(len(MeanArea_born)):
plt.plot(Time[:len(StdArea_born[num])], np.array(StdArea_born[num]) * self.scale, color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanArea_abort)):
plt.plot(Time[:len(StdArea_abort[num])], np.array(StdArea_abort[num]) * self.scale, color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Surface Area (standard deviation) [$\mu m^{2}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'surface_area_std.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanArea_born)):
plt.plot(Time[:len(StdArea_born[num])], np.array(StdArea_born[num]) * self.scale, color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(StdArea_born[num][:tp_min])
for num in range(len(MeanArea_abort)):
plt.plot(Time[:len(StdArea_abort[num])], np.array(StdArea_abort[num]) * self.scale, color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(StdArea_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Surface Area (standard deviation) [$\mu m^{2}$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.gca().get_yaxis().set_major_formatter(ticker.FuncFormatter(lambda v,p: f'{int(v):,d}'))
plt.savefig(os.path.join(self.save_dir, 'mean_surface_area_std.pdf'), bbox_inches='tight')
def graph_draw_aspect_ratio(self, Time, MeanAsp_born, StdAsp_born, MeanAsp_abort, StdAsp_abort):
# Aspect Ratio Mean & SD
plt.figure(figsize=self.figsize)
for num in range(len(MeanAsp_born)):
plt.plot(Time[:len(MeanAsp_born[num])], np.array(MeanAsp_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanAsp_abort)):
plt.plot(Time[:len(MeanAsp_abort[num])], np.array(MeanAsp_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Aspect Ratio', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'aspect_ratio_mean.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanAsp_born)):
plt.plot(Time[:len(MeanAsp_born[num])], np.array(MeanAsp_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(MeanAsp_born[num][:tp_min])
for num in range(len(MeanAsp_abort)):
plt.plot(Time[:len(MeanAsp_abort[num])], np.array(MeanAsp_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(MeanAsp_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Aspect Ratio', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_aspect_ratio_mean.pdf'), bbox_inches='tight')
plt.figure(figsize=self.figsize)
for num in range(len(StdAsp_born)):
plt.plot(Time[:len(StdAsp_born[num])], np.array(StdAsp_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(StdAsp_abort)):
plt.plot(Time[:len(StdAsp_abort[num])], np.array(StdAsp_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Aspect Ratio (standard deviation)', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'aspect_ratio_std.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(StdAsp_born)):
plt.plot(Time[:len(StdAsp_born[num])], np.array(StdAsp_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(StdAsp_born[num][:tp_min])
for num in range(len(StdAsp_abort)):
plt.plot(Time[:len(StdAsp_abort[num])], np.array(StdAsp_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(StdAsp_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Aspect Ratio (standard deviation)', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_aspect_ratio_std.pdf'), bbox_inches='tight')
def graph_draw_solidity(self, Time, MeanSol_born, StdSol_born, MeanSol_abort, StdSol_abort):
# Solidity Mean & SD
plt.figure(figsize=self.figsize)
for num in range(len(MeanSol_born)):
plt.plot(Time[:len(MeanSol_born[num])], np.array(MeanSol_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanSol_abort)):
plt.plot(Time[:len(MeanSol_abort[num])], np.array(MeanSol_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Solidity', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'solidity_mean.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanSol_born)):
plt.plot(Time[:len(MeanSol_born[num])], np.array(MeanSol_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(MeanSol_born[num][:tp_min])
for num in range(len(MeanSol_abort)):
plt.plot(Time[:len(MeanSol_abort[num])], np.array(MeanSol_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(MeanSol_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Solidity', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_solidity_mean.pdf'), bbox_inches='tight')
plt.figure(figsize=self.figsize)
for num in range(len(StdSol_born)):
plt.plot(Time[:len(StdSol_born[num])], np.array(StdSol_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(StdSol_abort)):
plt.plot(Time[:len(StdSol_abort[num])], np.array(StdSol_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Solidity (standard deviation)', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'solidity_std.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(StdSol_born)):
plt.plot(Time[:len(StdSol_born[num])], np.array(StdSol_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(StdSol_born[num][:tp_min])
for num in range(len(StdSol_abort)):
plt.plot(Time[:len(StdSol_abort[num])], np.array(StdSol_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(StdSol_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Solidity (standard deviation)', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_solidity_std.pdf'), bbox_inches='tight')
def graph_draw_centroid(self, Time, MeanCen_born, StdCen_born, MeanCen_abort, StdCen_abort):
# Centroid Mean & SD
plt.figure(figsize=self.figsize)
for num in range(len(MeanCen_born)):
plt.plot(Time[:len(MeanCen_born[num])], np.array(MeanCen_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(MeanCen_abort)):
plt.plot(Time[:len(MeanCen_abort[num])], np.array(MeanCen_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Centroid [$\mu m$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'centroid_mean.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(MeanCen_born)):
plt.plot(Time[:len(MeanCen_born[num])], np.array(MeanCen_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(MeanCen_born[num][:tp_min])
for num in range(len(MeanCen_abort)):
plt.plot(Time[:len(MeanCen_abort[num])], np.array(MeanCen_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(MeanCen_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Centroid [$\mu m$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_centroid_mean.pdf'), bbox_inches='tight')
plt.figure(figsize=self.figsize)
for num in range(len(StdCen_born)):
plt.plot(Time[:len(StdCen_born[num])], np.array(StdCen_born[num]), color=self.color[0], alpha=0.8, linewidth=1.0, label=self.label[0])
for num in range(len(StdCen_abort)):
plt.plot(Time[:len(StdCen_abort[num])], np.array(StdCen_abort[num]), color=self.color[1], alpha=0.8, linewidth=1.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel(r'Centroid (standard deviation) [$\mu m$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'centroid_std.pdf'))
# mean plot
tp_min = 488
plt.figure(figsize=self.figsize)
born_mean, abort_mean = [], []
for num in range(len(StdCen_born)):
plt.plot(Time[:len(StdCen_born[num])], np.array(StdCen_born[num]), color=self.color[0], alpha=0.2, linewidth=1.0, label=self.label[0])
born_mean.append(StdCen_born[num][:tp_min])
for num in range(len(StdCen_abort)):
plt.plot(Time[:len(StdCen_abort[num])], np.array(StdCen_abort[num]), color=self.color[1], alpha=0.2, linewidth=1.0, label=self.label[1])
abort_mean.append(StdCen_abort[num][:tp_min])
plt.plot(Time[:tp_min], np.mean(born_mean, axis=0), color=self.color[0], alpha=1.0, linewidth=2.0, label=self.label[0])
plt.plot(Time[:tp_min], np.mean(abort_mean, axis=0), color=self.color[1], alpha=1.0, linewidth=2.0, label=self.label[1])
plt.xlabel('Time [day]', size=self.label_size)
plt.ylabel('Centroid (standard deviation) [$\mu m$]', size=self.label_size)
if Time[-1] != 0:
# plt.xlim([0.0, round(Time[-1], 1)])
plt.xlim([0.0, Time[self.time_max]])
plt.xticks(size=self.ticks_size)
plt.yticks(size=self.ticks_size)
plt.savefig(os.path.join(self.save_dir, 'mean_centroid_std.pdf'), bbox_inches='tight')
if __name__ == '__main__':
    ap = ArgumentParser(description='python graph_draw.py')
    ap.add_argument('--root', '-r', nargs='?', default='/Users/tokkuman/git-tokkuman/embryo_classification/datasets', help='Specify root path')
    ap.add_argument('--save_dir', '-o', nargs='?', default='results/figures_criteria_all', help='Specify output directory for created figures')
    # ap.add_argument('--label', '-l', nargs='?', default='born', help='Specify label class (born or abort)')
    args = ap.parse_args()
    argvs = sys.argv
    # Make Directory
    current_datetime = datetime.now(pytz.timezone('Asia/Tokyo')).strftime('%Y%m%d_%H%M%S')
    save_dir = '{0}_{1}'.format(args.save_dir, current_datetime)
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(args.root, 'labels', '{}.txt'.format('born')), 'r') as f:
        file_list_born = np.sort([line.rstrip() for line in f])
    with open(os.path.join(args.root, 'labels', '{}.txt'.format('abort')), 'r') as f:
        file_list_abort = np.sort([line.rstrip() for line in f])
    # born
    number_born = []
    volume_mean_born, volume_sd_born = [], []
    surface_mean_born, surface_sd_born = [], []
    aspect_ratio_mean_born, aspect_ratio_sd_born = [], []
    solidity_mean_born, solidity_sd_born = [], []
    centroid_mean_born, centroid_sd_born = [], []
    for fl in file_list_born:
        file_name = os.path.join(args.root, 'input', fl, 'criteria.json')
        print('read: {}'.format(file_name))
        with open(file_name, 'r') as f:
            criteria_value = json.load(f)
        criteria_list = criteria_value.keys()
        if 'number' in criteria_list:
            number_born.append(criteria_value['number'])
        if 'volume_mean' in criteria_list:
            volume_mean_born.append(criteria_value['volume_mean'])
        if 'volume_sd' in criteria_list:
            volume_sd_born.append(criteria_value['volume_sd'])
        if 'surface_mean' in criteria_list:
            surface_mean_born.append(criteria_value['surface_mean'])
        if 'surface_sd' in criteria_list:
            surface_sd_born.append(criteria_value['surface_sd'])
        if 'aspect_ratio_mean' in criteria_list:
            aspect_ratio_mean_born.append(criteria_value['aspect_ratio_mean'])
        if 'aspect_ratio_sd' in criteria_list:
            aspect_ratio_sd_born.append(criteria_value['aspect_ratio_sd'])
        if 'solidity_mean' in criteria_list:
            solidity_mean_born.append(criteria_value['solidity_mean'])
        if 'solidity_sd' in criteria_list:
            solidity_sd_born.append(criteria_value['solidity_sd'])
        if 'centroid_mean' in criteria_list:
            centroid_mean_born.append(criteria_value['centroid_mean'])
        if 'centroid_sd' in criteria_list:
            centroid_sd_born.append(criteria_value['centroid_sd'])
    # abort
    number_abort = []
    volume_mean_abort, volume_sd_abort = [], []
    surface_mean_abort, surface_sd_abort = [], []
    aspect_ratio_mean_abort, aspect_ratio_sd_abort = [], []
    solidity_mean_abort, solidity_sd_abort = [], []
    centroid_mean_abort, centroid_sd_abort = [], []
    for fl in file_list_abort:
        file_name = os.path.join(args.root, 'input', fl, 'criteria.json')
        print('read: {}'.format(file_name))
        with open(file_name, 'r') as f:
            criteria_value = json.load(f)
        criteria_list = criteria_value.keys()
        if 'number' in criteria_list:
            number_abort.append(criteria_value['number'])
        if 'volume_mean' in criteria_list:
            volume_mean_abort.append(criteria_value['volume_mean'])
        if 'volume_sd' in criteria_list:
            volume_sd_abort.append(criteria_value['volume_sd'])
        if 'surface_mean' in criteria_list:
            surface_mean_abort.append(criteria_value['surface_mean'])
        if 'surface_sd' in criteria_list:
            surface_sd_abort.append(criteria_value['surface_sd'])
        if 'aspect_ratio_mean' in criteria_list:
            aspect_ratio_mean_abort.append(criteria_value['aspect_ratio_mean'])
        if 'aspect_ratio_sd' in criteria_list:
            aspect_ratio_sd_abort.append(criteria_value['aspect_ratio_sd'])
        if 'solidity_mean' in criteria_list:
            solidity_mean_abort.append(criteria_value['solidity_mean'])
        if 'solidity_sd' in criteria_list:
            solidity_sd_abort.append(criteria_value['solidity_sd'])
        if 'centroid_mean' in criteria_list:
            centroid_mean_abort.append(criteria_value['centroid_mean'])
        if 'centroid_sd' in criteria_list:
            centroid_sd_abort.append(criteria_value['centroid_sd'])
    # Time Scale
    dt = 10 / float(60 * 24)  # one 10-minute time point expressed in days
    count_max = 0
    for i in range(len(number_born)):
        count_max = np.max([len(number_born[i]), count_max])
    time_point = [dt * x for x in range(count_max)]
    gd = GraphDraw(save_dir, file_list_born, file_list_abort)
    gd.graph_draw_number(time_point, number_born, number_abort)
    gd.graph_draw_volume(time_point, volume_mean_born, volume_sd_born, volume_mean_abort, volume_sd_abort)
    gd.graph_draw_surface(time_point, surface_mean_born, surface_sd_born, surface_mean_abort, surface_sd_abort)
    gd.graph_draw_aspect_ratio(time_point, aspect_ratio_mean_born, aspect_ratio_sd_born, aspect_ratio_mean_abort, aspect_ratio_sd_abort)
    gd.graph_draw_solidity(time_point, solidity_mean_born, solidity_sd_born, solidity_mean_abort, solidity_sd_abort)
    gd.graph_draw_centroid(time_point, centroid_mean_born, centroid_sd_born, centroid_mean_abort, centroid_sd_abort)
| 55.746946 | 165 | 0.638325 | 4,950 | 31,943 | 3.950505 | 0.046263 | 0.053388 | 0.048683 | 0.051394 | 0.888878 | 0.875224 | 0.847609 | 0.842905 | 0.824495 | 0.818972 | 0 | 0.026812 | 0.195536 | 31,943 | 572 | 166 | 55.844406 | 0.734171 | 0.038788 | 0 | 0.584388 | 0 | 0 | 0.072062 | 0.010341 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014768 | false | 0 | 0.040084 | 0 | 0.056962 | 0.004219 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
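The born and abort loops in the script above are verbatim duplicates of each other. A hedged refactoring sketch follows; the helper name `load_criteria` and its signature are my own, not from the original, but the `criteria.json` layout it assumes mirrors the script:

```python
import json
import os

def load_criteria(root, file_list, keys):
    # Collect, per criterion key, one series per embryo listed in file_list.
    # Factors out the duplicated born/abort loops in graph_draw.py above.
    collected = {k: [] for k in keys}
    for fl in file_list:
        with open(os.path.join(root, 'input', fl, 'criteria.json'), 'r') as f:
            criteria_value = json.load(f)
        for k in keys:
            if k in criteria_value:
                collected[k].append(criteria_value[k])
    return collected
```

Called once per cohort, e.g. `born = load_criteria(args.root, file_list_born, ['number', 'volume_mean', 'volume_sd'])`, this removes roughly sixty duplicated lines while keeping the "skip missing keys" behavior of the original.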
16164a5ff76443a3d50e107954d959689cbdcb77 | 44 | py | Python | src/api_utilities/abstract_classes/__init__.py | tomfran/lastfm-project | a4acd177d69235e49653f103a897a18c37c3d230 | [
"MIT"
] | 1 | 2021-07-21T16:51:12.000Z | 2021-07-21T16:51:12.000Z | src/api_utilities/abstract_classes/__init__.py | tomfran/lastfm-project | a4acd177d69235e49653f103a897a18c37c3d230 | [
"MIT"
] | null | null | null | src/api_utilities/abstract_classes/__init__.py | tomfran/lastfm-project | a4acd177d69235e49653f103a897a18c37c3d230 | [
"MIT"
] | null | null | null | from .abstract_source import AbstractSource
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
162e9d26ed229b1f44ffdd921cbacddd8372ecd6 | 80 | py | Python | packages/watchmen-rest-dqc/src/watchmen_rest_dqc/util/__init__.py | Indexical-Metrics-Measure-Advisory/watchmen | c54ec54d9f91034a38e51fd339ba66453d2c7a6d | [
"MIT"
] | null | null | null | packages/watchmen-rest-dqc/src/watchmen_rest_dqc/util/__init__.py | Indexical-Metrics-Measure-Advisory/watchmen | c54ec54d9f91034a38e51fd339ba66453d2c7a6d | [
"MIT"
] | null | null | null | packages/watchmen-rest-dqc/src/watchmen_rest_dqc/util/__init__.py | Indexical-Metrics-Measure-Advisory/watchmen | c54ec54d9f91034a38e51fd339ba66453d2c7a6d | [
"MIT"
] | null | null | null | from .trans import trans, trans_readonly, trans_with_fail_over, trans_with_tail
| 40 | 79 | 0.8625 | 13 | 80 | 4.846154 | 0.615385 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0875 | 80 | 1 | 80 | 80 | 0.863014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
165dc49dd574d456f3b954e96b28d1c9468cf6cd | 186 | py | Python | venv3/lib/python3.6/site-packages/netmiko/avaya/__init__.py | brightmaraba/devnet-work | 7582055083b634601d0add20d112b2a92f9a77b2 | [
"MIT"
] | null | null | null | venv3/lib/python3.6/site-packages/netmiko/avaya/__init__.py | brightmaraba/devnet-work | 7582055083b634601d0add20d112b2a92f9a77b2 | [
"MIT"
] | null | null | null | venv3/lib/python3.6/site-packages/netmiko/avaya/__init__.py | brightmaraba/devnet-work | 7582055083b634601d0add20d112b2a92f9a77b2 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from netmiko.avaya.avaya_vsp_ssh import AvayaVspSSH
from netmiko.avaya.avaya_ers_ssh import AvayaErsSSH
__all__ = ["AvayaVspSSH", "AvayaErsSSH"]
| 31 | 51 | 0.844086 | 24 | 186 | 6 | 0.541667 | 0.152778 | 0.222222 | 0.291667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091398 | 186 | 5 | 52 | 37.2 | 0.852071 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
167a480eab5058c0353fe2c1ff770de7a47e90bd | 40 | py | Python | bot264/__init__.py | mwenclubhouse/python-queueup-bot | c6301d69814d2c9597ff6fda75d2244d5119b6af | [
"MIT"
] | null | null | null | bot264/__init__.py | mwenclubhouse/python-queueup-bot | c6301d69814d2c9597ff6fda75d2244d5119b6af | [
"MIT"
] | 1 | 2021-04-17T00:23:32.000Z | 2021-04-17T00:23:32.000Z | bot264/__init__.py | mwenclubhouse/python-queueup-bot | c6301d69814d2c9597ff6fda75d2244d5119b6af | [
"MIT"
] | 2 | 2021-04-04T15:39:38.000Z | 2021-04-16T03:20:36.000Z | from .discord_config import run_discord
| 20 | 39 | 0.875 | 6 | 40 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
168f976a9fca0423ee656555fc765e25ff1ca87e | 121 | py | Python | scripts/url_utils.py | altova/SECDB | 9ed89664f7eeebac0a747a8ebb09e7a2dae935c3 | [
"Apache-2.0"
] | 94 | 2015-10-31T18:33:38.000Z | 2022-03-17T06:16:33.000Z | scripts/url_utils.py | altova/SECDB | 9ed89664f7eeebac0a747a8ebb09e7a2dae935c3 | [
"Apache-2.0"
] | 14 | 2016-01-14T06:57:19.000Z | 2021-01-20T17:33:10.000Z | scripts/url_utils.py | altova/SECDB | 9ed89664f7eeebac0a747a8ebb09e7a2dae935c3 | [
"Apache-2.0"
] | 39 | 2015-12-17T13:01:10.000Z | 2021-09-17T16:24:28.000Z | import urllib.request
def mk_req(url):
    return urllib.request.Request(url, headers={"User-Agent": "Altova/1.0"})
| 24.2 | 78 | 0.694215 | 18 | 121 | 4.611111 | 0.777778 | 0.313253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.140496 | 121 | 4 | 79 | 30.25 | 0.778846 | 0 | 0 | 0 | 0 | 0 | 0.165289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
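A quick usage sketch of the `mk_req` helper above (the example URL is illustrative, not from the original). Note that `urllib.request.Request` normalizes header names with `str.capitalize`, so the agent string is stored under `User-agent`:

```python
import urllib.request

def mk_req(url):
    # Same helper as url_utils.py above: a Request with a custom User-Agent.
    return urllib.request.Request(url, headers={"User-Agent": "Altova/1.0"})

# Building the Request does not open a network connection; it just carries
# the URL and headers until it is passed to urlopen.
req = mk_req("https://www.sec.gov/cgi-bin/browse-edgar")
print(req.full_url)
print(req.get_header("User-agent"))
```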
16cf54677ad989aab90b3e4a1395976a853777da | 31 | py | Python | backend/saas_framework/users/models.py | snarayanank2/django-workspaces | 46ef92a4caa95eee617a24ead284e533422afca0 | [
"MIT"
] | 1 | 2021-01-27T17:51:58.000Z | 2021-01-27T17:51:58.000Z | backend/saas_framework/users/models.py | snarayanank2/django-workspaces | 46ef92a4caa95eee617a24ead284e533422afca0 | [
"MIT"
] | 6 | 2021-03-30T13:51:35.000Z | 2022-03-02T09:24:07.000Z | backend/saas_framework/users/models.py | snarayanank2/django-workspaces | 46ef92a4caa95eee617a24ead284e533422afca0 | [
"MIT"
] | 1 | 2022-03-18T08:43:17.000Z | 2022-03-18T08:43:17.000Z | # TODO - use custom user model
| 15.5 | 30 | 0.709677 | 5 | 31 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225806 | 31 | 1 | 31 | 31 | 0.916667 | 0.903226 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
16df6e99db48b743b08fed1c35f661347f82ff9c | 187 | py | Python | slot_racer/game/state/__init__.py | mgreenw/slot-racer | ccb456cf489616e14d95c34c7398fb3e04307b02 | [
"MIT"
] | 1 | 2018-12-08T03:18:00.000Z | 2018-12-08T03:18:00.000Z | slot_racer/game/state/__init__.py | mgreenw/slot-racer | ccb456cf489616e14d95c34c7398fb3e04307b02 | [
"MIT"
] | null | null | null | slot_racer/game/state/__init__.py | mgreenw/slot-racer | ccb456cf489616e14d95c34c7398fb3e04307b02 | [
"MIT"
] | null | null | null | """Module containing definitions of data structures we will use to model our
game state"""
# state module should provide access to all the definitions in state.py
from .state import *
| 23.375 | 76 | 0.770053 | 29 | 187 | 4.965517 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 187 | 7 | 77 | 26.714286 | 0.935065 | 0.828877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc5c6d8afd4f3a9fd1c83047a6487ae4fd97a7de | 16,768 | py | Python | datadotworld/client/_swagger/apis/sparql_api.py | DanialBetres/data.world-py | 0e3acf2be9a07c5ab62ecac9289eb662088d54c7 | [
"Apache-2.0"
] | 99 | 2017-01-23T16:24:18.000Z | 2022-03-30T22:51:58.000Z | datadotworld/client/_swagger/apis/sparql_api.py | DanialBetres/data.world-py | 0e3acf2be9a07c5ab62ecac9289eb662088d54c7 | [
"Apache-2.0"
] | 77 | 2017-01-26T04:33:06.000Z | 2022-03-11T09:39:50.000Z | datadotworld/client/_swagger/apis/sparql_api.py | DanialBetres/data.world-py | 0e3acf2be9a07c5ab62ecac9289eb662088d54c7 | [
"Apache-2.0"
] | 29 | 2017-01-25T16:55:23.000Z | 2022-01-31T01:44:15.000Z | # coding: utf-8
"""
data.world API
data.world is designed for data and the people who work with data. From professional projects to open data, data.world helps you host and share your data, collaborate with your team, and capture context and conclusions as you work. Using this API users are able to easily access data and manage their data projects regardless of language or tool of preference. Check out our [documentation](https://dwapi.apidocs.io) for tips on how to get started, tutorials and to interact with the API right within your browser.
OpenAPI spec version: 0.14.1
Contact: help@data.world
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class SparqlApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def sparql_get(self, owner, id, query, **kwargs):
"""
SPARQL query (via GET)
This endpoint executes SPARQL queries against a dataset or data project. SPARQL results are available in a variety of formats. By default, `application/sparql-results+json` will be returned. Set the `Accept` header to one of the following values in accordance with your preference: - `application/sparql-results+xml` - `application/sparql-results+json` - `application/rdf+json` - `application/rdf+xml` - `text/csv` - `text/tab-separated-values` New to SPARQL? Check out data.world’s [SPARQL tutorial](https://docs.data.world/tutorials/sparql/).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sparql_get(owner, id, query, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)
:param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)
:param str query: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.sparql_get_with_http_info(owner, id, query, **kwargs)
else:
(data) = self.sparql_get_with_http_info(owner, id, query, **kwargs)
return data
def sparql_get_with_http_info(self, owner, id, query, **kwargs):
"""
SPARQL query (via GET)
This endpoint executes SPARQL queries against a dataset or data project. SPARQL results are available in a variety of formats. By default, `application/sparql-results+json` will be returned. Set the `Accept` header to one of the following values in accordance with your preference: - `application/sparql-results+xml` - `application/sparql-results+json` - `application/rdf+json` - `application/rdf+xml` - `text/csv` - `text/tab-separated-values` New to SPARQL? Check out data.world’s [SPARQL tutorial](https://docs.data.world/tutorials/sparql/).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sparql_get_with_http_info(owner, id, query, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)
:param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)
:param str query: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['owner', 'id', 'query']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method sparql_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'owner' is set
if ('owner' not in params) or (params['owner'] is None):
raise ValueError("Missing the required parameter `owner` when calling `sparql_get`")
# verify the required parameter 'id' is set
if ('id' not in params) or (params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `sparql_get`")
# verify the required parameter 'query' is set
if ('query' not in params) or (params['query'] is None):
raise ValueError("Missing the required parameter `query` when calling `sparql_get`")
if 'owner' in params and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', params['owner']):
raise ValueError("Invalid value for parameter `owner` when calling `sparql_get`, must conform to the pattern `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`")
if 'id' in params and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', params['id']):
raise ValueError("Invalid value for parameter `id` when calling `sparql_get`, must conform to the pattern `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`")
collection_formats = {}
path_params = {}
if 'owner' in params:
path_params['owner'] = params['owner']
if 'id' in params:
path_params['id'] = params['id']
query_params = []
if 'query' in params:
query_params.append(('query', params['query']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/sparql-results+json', 'application/sparql-results+xml', 'application/rdf+json', 'application/rdf+xml', 'text/tab-separated-values', 'text/csv'])
# Authentication setting
auth_settings = ['token']
return self.api_client.call_api('/sparql/{owner}/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def sparql_post(self, owner, id, query, **kwargs):
"""
SPARQL query
This endpoint executes SPARQL queries against a dataset or data project. SPARQL results are available in a variety of formats. By default, `application/sparql-results+json` will be returned. Set the `Accept` header to one of the following values in accordance with your preference: - `application/sparql-results+xml` - `application/sparql-results+json` - `application/rdf+json` - `application/rdf+xml` - `text/csv` - `text/tab-separated-values` New to SPARQL? Check out data.world's [SPARQL tutorial](https://docs.data.world/tutorials/sparql/).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sparql_post(owner, id, query, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)
:param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)
:param str query: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.sparql_post_with_http_info(owner, id, query, **kwargs)
else:
(data) = self.sparql_post_with_http_info(owner, id, query, **kwargs)
return data
def sparql_post_with_http_info(self, owner, id, query, **kwargs):
"""
SPARQL query
This endpoint executes SPARQL queries against a dataset or data project. SPARQL results are available in a variety of formats. By default, `application/sparql-results+json` will be returned. Set the `Accept` header to one of the following values in accordance with your preference: - `application/sparql-results+xml` - `application/sparql-results+json` - `application/rdf+json` - `application/rdf+xml` - `text/csv` - `text/tab-separated-values` New to SPARQL? Check out data.world's [SPARQL tutorial](https://docs.data.world/tutorials/sparql/).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sparql_post_with_http_info(owner, id, query, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str owner: User name and unique identifier of the creator of a dataset or project. For example, in the URL: [https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), jonloyens is the unique identifier of the owner. (required)
:param str id: Dataset unique identifier. For example, in the URL:[https://data.world/jonloyens/an-intro-to-dataworld-dataset](https://data.world/jonloyens/an-intro-to-dataworld-dataset), an-intro-to-dataworld-dataset is the unique identifier of the dataset. (required)
:param str query: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['owner', 'id', 'query']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method sparql_post" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'owner' is set
if ('owner' not in params) or (params['owner'] is None):
raise ValueError("Missing the required parameter `owner` when calling `sparql_post`")
# verify the required parameter 'id' is set
if ('id' not in params) or (params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `sparql_post`")
# verify the required parameter 'query' is set
if ('query' not in params) or (params['query'] is None):
raise ValueError("Missing the required parameter `query` when calling `sparql_post`")
if 'owner' in params and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', params['owner']):
raise ValueError("Invalid value for parameter `owner` when calling `sparql_post`, must conform to the pattern `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`")
if 'id' in params and not re.search('[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]', params['id']):
raise ValueError("Invalid value for parameter `id` when calling `sparql_post`, must conform to the pattern `/[a-z0-9](?:-(?!-)|[a-z0-9])+[a-z0-9]/`")
collection_formats = {}
path_params = {}
if 'owner' in params:
path_params['owner'] = params['owner']
if 'id' in params:
path_params['id'] = params['id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'query' in params:
form_params.append(('query', params['query']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/sparql-results+json', 'application/sparql-results+xml', 'application/rdf+json', 'application/rdf+xml', 'text/tab-separated-values', 'text/csv'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['token']
return self.api_client.call_api('/sparql/{owner}/{id}', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 58.629371 | 557 | 0.625358 | 2,064 | 16,768 | 4.979167 | 0.117733 | 0.024521 | 0.009341 | 0.03503 | 0.897733 | 0.884499 | 0.884499 | 0.870293 | 0.868931 | 0.868931 | 0 | 0.004469 | 0.266042 | 16,768 | 285 | 558 | 58.835088 | 0.830584 | 0.473998 | 0 | 0.697183 | 0 | 0.028169 | 0.251441 | 0.078621 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035211 | false | 0 | 0.049296 | 0 | 0.133803 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
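Independent of the generated swagger client above, the request that `sparql_get` issues can be sketched by hand. This is a minimal sketch under stated assumptions: the `api.data.world/v0` base URL and the Bearer-token scheme come from the public data.world docs, not from this file, and `build_sparql_get` is a hypothetical helper name:

```python
def build_sparql_get(owner, dataset_id, query, token):
    # Assemble the pieces of GET /sparql/{owner}/{id} that the generated
    # client sends: the URL path, auth/accept headers, and the query param.
    url = "https://api.data.world/v0/sparql/{}/{}".format(owner, dataset_id)
    headers = {
        "Authorization": "Bearer {}".format(token),
        "Accept": "application/sparql-results+json",
    }
    params = {"query": query}
    return url, headers, params

url, headers, params = build_sparql_get(
    "jonloyens", "an-intro-to-dataworld-dataset",
    "SELECT * WHERE {?s ?p ?o} LIMIT 10", "TOKEN")
```

These three values map directly onto any HTTP client call, e.g. `requests.get(url, headers=headers, params=params)`.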
bc6b631caee5e6d6c578e98657985b884925b503 | 47 | py | Python | envisage/developer/ui/api.py | robmcmullen/envisage | 57338fcb0ea69c75bc3c86de18a5967d8e78c6c1 | [
"BSD-3-Clause"
] | null | null | null | envisage/developer/ui/api.py | robmcmullen/envisage | 57338fcb0ea69c75bc3c86de18a5967d8e78c6c1 | [
"BSD-3-Clause"
] | 1 | 2017-05-22T21:15:22.000Z | 2017-05-22T21:15:22.000Z | envisage/developer/ui/api.py | robmcmullen/envisage | 57338fcb0ea69c75bc3c86de18a5967d8e78c6c1 | [
"BSD-3-Clause"
] | 1 | 2019-10-01T07:03:58.000Z | 2019-10-01T07:03:58.000Z | from .view.plugin_browser import browse_plugin
| 23.5 | 46 | 0.87234 | 7 | 47 | 5.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc6f4d230111b15c392ac6ddc8c328bcc59f24ee | 36 | py | Python | Amplo/API/__init__.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 5 | 2022-01-07T13:34:37.000Z | 2022-03-17T06:40:28.000Z | Amplo/API/__init__.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 5 | 2022-03-22T13:42:22.000Z | 2022-03-31T16:20:44.000Z | Amplo/API/__init__.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 1 | 2021-12-17T22:41:11.000Z | 2021-12-17T22:41:11.000Z | from Amplo.API.interface import API
| 18 | 35 | 0.833333 | 6 | 36 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc84cbc828d0ffb0f27b3dc0c01f456b5213cbc2 | 31 | py | Python | recfreq/__init__.py | lcarsos/recency-frequency | 075b2d63c915adc1060a5bf5de923c6e6e486397 | [
"MIT"
] | null | null | null | recfreq/__init__.py | lcarsos/recency-frequency | 075b2d63c915adc1060a5bf5de923c6e6e486397 | [
"MIT"
] | null | null | null | recfreq/__init__.py | lcarsos/recency-frequency | 075b2d63c915adc1060a5bf5de923c6e6e486397 | [
"MIT"
] | null | null | null | from .main import main as init
| 15.5 | 30 | 0.774194 | 6 | 31 | 4 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 31 | 1 | 31 | 31 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcca7b5e19e96a2dbcaad105a96b9d553f0374eb | 53 | py | Python | testprojects/src/python/interpreter_selection/resolver_blacklist_testing/main.py | jakubbujny/pants | e7fe73eaa3bc196d6d976e9f362bf60b69da17b3 | [
"Apache-2.0"
] | null | null | null | testprojects/src/python/interpreter_selection/resolver_blacklist_testing/main.py | jakubbujny/pants | e7fe73eaa3bc196d6d976e9f362bf60b69da17b3 | [
"Apache-2.0"
] | null | null | null | testprojects/src/python/interpreter_selection/resolver_blacklist_testing/main.py | jakubbujny/pants | e7fe73eaa3bc196d6d976e9f362bf60b69da17b3 | [
"Apache-2.0"
] | null | null | null | import jupyter
print(jupyter)
print('Successful.')
| 8.833333 | 20 | 0.754717 | 6 | 53 | 6.666667 | 0.666667 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 53 | 5 | 21 | 10.6 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
bce5fb8a72638d88651d4fd92f868470ea6f90a2 | 150 | py | Python | CodeWars/Python/6 kyu/Find The Parity Outlier/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | CodeWars/Python/6 kyu/Find The Parity Outlier/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | CodeWars/Python/6 kyu/Find The Parity Outlier/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | def find_outlier(integers):
    ls = [x % 2 == 0 for x in integers]
    return integers[ls.index(True)] if sum(ls) == 1 else integers[ls.index(False)] | 50 | 82 | 0.66 | 26 | 150 | 3.769231 | 0.692308 | 0.306122 | 0.306122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02459 | 0.186667 | 150 | 3 | 82 | 50 | 0.778689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
d5bdb3df46f8a4cd72ede38ae222834855c905c9 | 32 | py | Python | gqltst/__init__.py | pyatka/gqltst | 4c824df7e62b2759c6a06e5bc0c3752019907f47 | [
"MIT"
] | null | null | null | gqltst/__init__.py | pyatka/gqltst | 4c824df7e62b2759c6a06e5bc0c3752019907f47 | [
"MIT"
] | 3 | 2018-10-31T07:59:55.000Z | 2018-11-01T14:34:48.000Z | gqltst/__init__.py | pyatka/gqltst | 4c824df7e62b2759c6a06e5bc0c3752019907f47 | [
"MIT"
] | null | null | null | from gqltst.schema import Schema | 32 | 32 | 0.875 | 5 | 32 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5cce972b05a20fa94525fabbbf0886d67a12ddb | 15,611 | py | Python | auto_process_ngs/test/qc/test_cellranger.py | fls-bioinformatics-core/auto_process_ngs | 1f07a08e14f118e6a61d3f37130515efc6049dd7 | [
"AFL-3.0"
] | 5 | 2017-01-31T21:37:09.000Z | 2022-03-17T19:26:29.000Z | auto_process_ngs/test/qc/test_cellranger.py | fls-bioinformatics-core/auto_process_ngs | 1f07a08e14f118e6a61d3f37130515efc6049dd7 | [
"AFL-3.0"
] | 294 | 2015-08-14T09:00:30.000Z | 2022-03-18T10:17:05.000Z | auto_process_ngs/test/qc/test_cellranger.py | fls-bioinformatics-core/auto_process_ngs | 1f07a08e14f118e6a61d3f37130515efc6049dd7 | [
"AFL-3.0"
] | 7 | 2017-11-23T07:52:21.000Z | 2020-07-15T10:12:05.000Z | #######################################################################
# Unit tests for qc/cellranger.py
#######################################################################
import unittest
import os
import shutil
import tempfile
from auto_process_ngs.mock import MockAnalysisProject
from auto_process_ngs.mock import UpdateAnalysisProject
from auto_process_ngs.analysis import AnalysisProject
from auto_process_ngs.tenx_genomics_utils import CellrangerMultiConfigCsv
from auto_process_ngs.tenx_genomics_utils import MultiplexSummary
from auto_process_ngs.qc.cellranger import CellrangerCount
from auto_process_ngs.qc.cellranger import CellrangerMulti
class TestCellrangerCount(unittest.TestCase):
    def setUp(self):
        # Create a temp working dir
        self.dirn = tempfile.mkdtemp(suffix='TestCellrangerCount')
        # Make mock analysis project
        p = MockAnalysisProject("PJB",("PJB1_S1_R1_001.fastq.gz",
                                       "PJB1_S1_R2_001.fastq.gz",
                                       "PJB2_S2_R1_001.fastq.gz",
                                       "PJB2_S2_R2_001.fastq.gz",),
                                metadata={ 'Organism': 'Human',
                                           'Single cell platform':
                                           "10xGenomics Chromium 3'v3" })
        p.create(top_dir=self.dirn)
        self.project = AnalysisProject("PJB",os.path.join(self.dirn,"PJB"))
    def tearDown(self):
        # Remove the temporary test directory
        shutil.rmtree(self.dirn)
    def test_cellrangercount_501(self):
        """
        CellrangerCount: check outputs from cellranger count (v5.0.1)
        """
        # Add cellranger count outputs
        UpdateAnalysisProject(self.project).add_cellranger_count_outputs()
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cmdline = "/path/to/cellranger count --id PJB1 --fastqs /path/to/PJB/fastqs --sample PJB1 --transcriptome /data/refdata-gex-GRCh38-2020-A --chemistry auto --r1-length=26 --jobmode=local --localcores=16 --localmem=48 --maxjobs=1 --jobinterval=100"
        with open(os.path.join(count_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_count = CellrangerCount(count_dir)
        self.assertEqual(cellranger_count.dir,count_dir)
        self.assertEqual(cellranger_count.sample_name,"PJB1")
        self.assertEqual(cellranger_count.metrics_csv,
                         os.path.join(count_dir,"outs","metrics_summary.csv"))
        self.assertEqual(cellranger_count.web_summary,
                         os.path.join(count_dir,"outs","web_summary.html"))
        self.assertEqual(cellranger_count.cmdline_file,
                         os.path.join(count_dir,"_cmdline"))
        self.assertEqual(cellranger_count.cmdline,cmdline)
        self.assertEqual(cellranger_count.version,None)
        self.assertEqual(cellranger_count.reference_data,
                         "/data/refdata-gex-GRCh38-2020-A")
        self.assertEqual(cellranger_count.cellranger_exe,
                         "/path/to/cellranger")
        self.assertEqual(cellranger_count.pipeline_name,"cellranger")
    def test_cellrangercount_cellranger_310(self):
        """
        CellrangerCount: check outputs from cellranger count (v3.1.0)
        """
        # Add cellranger count outputs
        UpdateAnalysisProject(self.project).add_cellranger_count_outputs()
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cmdline = "/path/to/cellranger-cs/3.1.0/bin/count --id PJB1 --fastqs /path/to/PJB/fastqs --sample PJB1 --transcriptome /data/refdata-cellranger-GRCh38-1.2.0 --chemistry auto --jobmode=local --localcores=16 --localmem=48 --maxjobs=1 --jobinterval=100"
        with open(os.path.join(count_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_count = CellrangerCount(count_dir)
        self.assertEqual(cellranger_count.dir,count_dir)
        self.assertEqual(cellranger_count.sample_name,"PJB1")
        self.assertEqual(cellranger_count.metrics_csv,
                         os.path.join(count_dir,"outs","metrics_summary.csv"))
        self.assertEqual(cellranger_count.web_summary,
                         os.path.join(count_dir,"outs","web_summary.html"))
        self.assertEqual(cellranger_count.cmdline_file,
                         os.path.join(count_dir,"_cmdline"))
        self.assertEqual(cellranger_count.cmdline,cmdline)
        self.assertEqual(cellranger_count.version,None)
        self.assertEqual(cellranger_count.reference_data,
                         "/data/refdata-cellranger-GRCh38-1.2.0")
        self.assertEqual(cellranger_count.cellranger_exe,
                         "/path/to/cellranger-cs/3.1.0/bin/count")
        self.assertEqual(cellranger_count.pipeline_name,"cellranger")
    def test_cellrangercount_cellranger_atac_120(self):
        """
        CellrangerCount: check outputs from cellranger-atac count (v1.2.0)
        """
        # Add cellranger count outputs
        UpdateAnalysisProject(self.project).add_cellranger_count_outputs(
            cellranger='cellranger-atac')
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cmdline = "/path/to/cellranger-atac-cs/1.2.0/bin/count --id PJB1 --fastqs /path/to/PJB/fastqs --sample PJB1 --reference /data/refdata-cellranger-atac-GRCh38-1.2.0 --jobmode=local --localcores=16 --localmem=128 --maxjobs=48 --jobinterval=100"
        with open(os.path.join(count_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_count = CellrangerCount(count_dir)
        self.assertEqual(cellranger_count.dir,count_dir)
        self.assertEqual(cellranger_count.sample_name,"PJB1")
        self.assertEqual(cellranger_count.metrics_csv,
                         os.path.join(count_dir,"outs","summary.csv"))
        self.assertEqual(cellranger_count.web_summary,
                         os.path.join(count_dir,"outs","web_summary.html"))
        self.assertEqual(cellranger_count.cmdline_file,
                         os.path.join(count_dir,"_cmdline"))
        self.assertEqual(cellranger_count.cmdline,cmdline)
        self.assertEqual(cellranger_count.version,None)
        self.assertEqual(cellranger_count.reference_data,
                         "/data/refdata-cellranger-atac-GRCh38-1.2.0")
        self.assertEqual(cellranger_count.cellranger_exe,
                         "/path/to/cellranger-atac-cs/1.2.0/bin/count")
        self.assertEqual(cellranger_count.pipeline_name,"cellranger-atac")
    def test_cellrangercount_cellranger_arc_120(self):
        """
        CellrangerCount: check outputs from cellranger-arc count (v1.0.0)
        """
        # Add cellranger count outputs
        UpdateAnalysisProject(self.project).add_cellranger_count_outputs(
            cellranger='cellranger-atac')
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cmdline = "/path/to/cellranger-arc count --id PJB1 --fastqs /path/to/PJB/fastqs --sample PJB1 --reference /data/refdata-cellranger-arc-GRCh38-2020-A --libraries /path/to/libraries.csv --jobmode=local --localcores=16 --localmem=128 --maxjobs=48 --jobinterval=100"
        with open(os.path.join(count_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_count = CellrangerCount(count_dir)
        self.assertEqual(cellranger_count.dir,count_dir)
        self.assertEqual(cellranger_count.sample_name,"PJB1")
        self.assertEqual(cellranger_count.metrics_csv,
                         os.path.join(count_dir,"outs","summary.csv"))
        self.assertEqual(cellranger_count.web_summary,
                         os.path.join(count_dir,"outs","web_summary.html"))
        self.assertEqual(cellranger_count.cmdline_file,
                         os.path.join(count_dir,"_cmdline"))
        self.assertEqual(cellranger_count.cmdline,cmdline)
        self.assertEqual(cellranger_count.version,None)
        self.assertEqual(cellranger_count.reference_data,
                         "/data/refdata-cellranger-arc-GRCh38-2020-A")
        self.assertEqual(cellranger_count.cellranger_exe,
                         "/path/to/cellranger-arc")
        self.assertEqual(cellranger_count.pipeline_name,"cellranger-arc")
    def test_cellrangercount_with_data(self):
        """
        CellrangerCount: check outputs when data are supplied
        """
        # Add cellranger count outputs
        UpdateAnalysisProject(self.project).add_cellranger_count_outputs()
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cmdline = "/path/to/cellranger count --id PJB1 --fastqs /path/to/PJB/fastqs --sample PJB1 --transcriptome /data/refdata-gex-GRCh38-2020-A --chemistry auto --r1-length=26 --jobmode=local --localcores=16 --localmem=48 --maxjobs=1 --jobinterval=100"
        with open(os.path.join(count_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_count = CellrangerCount(
            count_dir,
            cellranger_exe="/alt/path/to/cellranger",
            version="5.0.1",
            reference_data="/alt/data/refdata-gex-GRCh38-2020-A")
        self.assertEqual(cellranger_count.dir,count_dir)
        self.assertEqual(cellranger_count.sample_name,"PJB1")
        self.assertEqual(cellranger_count.metrics_csv,
                         os.path.join(count_dir,"outs","metrics_summary.csv"))
        self.assertEqual(cellranger_count.web_summary,
                         os.path.join(count_dir,"outs","web_summary.html"))
        self.assertEqual(cellranger_count.cmdline_file,
                         os.path.join(count_dir,"_cmdline"))
        self.assertEqual(cellranger_count.cmdline,cmdline)
        self.assertEqual(cellranger_count.version,"5.0.1")
        self.assertEqual(cellranger_count.reference_data,
                         "/alt/data/refdata-gex-GRCh38-2020-A")
        self.assertEqual(cellranger_count.cellranger_exe,
                         "/alt/path/to/cellranger")
        self.assertEqual(cellranger_count.pipeline_name,"cellranger")
    def test_cellrangercount_missing_directory(self):
        """
        CellrangerCount: handle missing directory
        """
        # Do tests
        count_dir = os.path.join(self.project.qc_dir,"cellranger_count","PJB1")
        cellranger_count = CellrangerCount(count_dir)
        self.assertRaises(OSError,
                          getattr,cellranger_count,'dir')
        self.assertEqual(cellranger_count.sample_name,None)
        self.assertRaises(OSError,
                          getattr,cellranger_count,'metrics_csv')
        self.assertRaises(OSError,
                          getattr,cellranger_count,'web_summary')
        self.assertEqual(cellranger_count.cmdline_file,None)
        self.assertEqual(cellranger_count.cmdline,None)
        self.assertEqual(cellranger_count.version,None)
        self.assertEqual(cellranger_count.reference_data,None)
        self.assertEqual(cellranger_count.cellranger_exe,None)
        self.assertEqual(cellranger_count.pipeline_name,None)

class TestCellrangerMulti(unittest.TestCase):
    def setUp(self):
        # Create a temp working dir
        self.dirn = tempfile.mkdtemp(suffix='TestCellrangerMulti')
        # Make mock analysis project
        p = MockAnalysisProject("PJB",("PJB1_GEX_S1_R1_001.fastq.gz",
                                       "PJB1_GEX_S1_R2_001.fastq.gz",
                                       "PJB2_MC_S2_R1_001.fastq.gz",
                                       "PJB2_MC_S2_R2_001.fastq.gz",),
                                metadata={ 'Organism': 'Human',
                                           'Single cell platform':
                                           "10xGenomics Chromium 3'v3" })
        p.create(top_dir=self.dirn)
        self.project = AnalysisProject("PJB",os.path.join(self.dirn,"PJB"))
    def tearDown(self):
        # Remove the temporary test directory
        shutil.rmtree(self.dirn)
    def test_cellrangermulti(self):
        """
        CellrangerMulti: check outputs from cellranger multi
        """
        # Add config.csv file
        config_csv = os.path.join(self.project.dirn,
                                  "10x_multi_config.csv")
        with open(config_csv,'wt') as fp:
            fp.write("""[gene-expression]
reference,/data/refdata-cellranger-gex-GRCh38-2020-A
[libraries]
fastq_id,fastqs,lanes,physical_library_id,feature_types,subsample_rate
PJB1_GEX,/data/runs/fastqs_gex,any,PJB1,gene expression,
PJB2_MC,/data/runs/fastqs_mc,any,PJB2,Multiplexing Capture,
[samples]
sample_id,cmo_ids,description
PBA,CMO301,PBA
PBB,CMO302,PBB
""")
        # Add cellranger multi outputs
        UpdateAnalysisProject(self.project).add_cellranger_multi_outputs(
            config_csv)
        # Do tests
        multi_dir = os.path.join(self.project.qc_dir,"cellranger_multi")
        cmdline = "/path/to/cellranger count --id PJB --csv %s --jobmode=local --localcores=16 --localmem=48 --maxjobs=1 --jobinterval=100" % config_csv
        with open(os.path.join(multi_dir,"_cmdline"),'wt') as fp:
            fp.write("%s\n" % cmdline)
        cellranger_multi = CellrangerMulti(multi_dir)
        self.assertEqual(cellranger_multi.dir,multi_dir)
        self.assertEqual(cellranger_multi.sample_names,["PBA","PBB"])
        self.assertEqual(cellranger_multi.metrics_csv('PBA'),
                         os.path.join(multi_dir,
                                      "outs",
                                      "per_sample_outs",
                                      "PBA",
                                      "metrics_summary.csv"))
        self.assertEqual(cellranger_multi.metrics_csv('PBB'),
                         os.path.join(multi_dir,
                                      "outs",
                                      "per_sample_outs",
                                      "PBB",
                                      "metrics_summary.csv"))
        self.assertTrue(isinstance(cellranger_multi.metrics('PBA'),
                                   MultiplexSummary))
        self.assertTrue(isinstance(cellranger_multi.metrics('PBB'),
                                   MultiplexSummary))
        self.assertEqual(cellranger_multi.web_summary('PBA'),
                         os.path.join(multi_dir,
                                      "outs",
                                      "per_sample_outs",
                                      "PBA",
                                      "web_summary.html"))
        self.assertEqual(cellranger_multi.web_summary('PBB'),
                         os.path.join(multi_dir,
                                      "outs",
                                      "per_sample_outs",
                                      "PBB",
                                      "web_summary.html"))
        self.assertEqual(cellranger_multi.cmdline_file,
                         os.path.join(multi_dir,"_cmdline"))
        self.assertEqual(cellranger_multi.cmdline,cmdline)
        self.assertEqual(cellranger_multi.version,None)
        self.assertEqual(cellranger_multi.reference_data,
                         "/data/refdata-cellranger-gex-GRCh38-2020-A")
        self.assertEqual(cellranger_multi.cellranger_exe,
                         "/path/to/cellranger")
        self.assertEqual(cellranger_multi.pipeline_name,"cellranger")
| 53.462329 | 270 | 0.615335 | 1,688 | 15,611 | 5.495853 | 0.10545 | 0.14067 | 0.185944 | 0.184327 | 0.868815 | 0.844131 | 0.740218 | 0.682117 | 0.649671 | 0.644605 | 0 | 0.023002 | 0.267568 | 15,611 | 291 | 271 | 53.646048 | 0.788351 | 0.055922 | 0 | 0.590517 | 0 | 0.025862 | 0.228528 | 0.080894 | 0 | 0 | 0 | 0 | 0.318966 | 1 | 0.047414 | false | 0 | 0.047414 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d5d6cba94c657c80625861ec5a2125916c36c1d2 | 14,200 | py | Python | internal/buildscripts/packaging/tests/installer_test.py | slernersplunk/splunk-otel-collector | f922c6b63cf27998a7334397507777a001271559 | [
"Apache-2.0"
] | null | null | null | internal/buildscripts/packaging/tests/installer_test.py | slernersplunk/splunk-otel-collector | f922c6b63cf27998a7334397507777a001271559 | [
"Apache-2.0"
] | 335 | 2021-04-22T07:50:56.000Z | 2022-03-31T00:13:23.000Z | internal/buildscripts/packaging/tests/installer_test.py | slernersplunk/splunk-otel-collector | f922c6b63cf27998a7334397507777a001271559 | [
"Apache-2.0"
] | 1 | 2021-08-19T11:20:54.000Z | 2021-08-19T11:20:54.000Z | # Copyright Splunk Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
import pytest
from tests.helpers.util import (
    copy_file_into_container,
    run_container_cmd,
    run_distro_container,
    service_is_running,
    wait_for,
    DEB_DISTROS,
    REPO_DIR,
    RPM_DISTROS,
    SERVICE_NAME,
    SERVICE_OWNER,
    TESTS_DIR,
)

INSTALLER_PATH = REPO_DIR / "internal" / "buildscripts" / "packaging" / "installer" / "install.sh"

# Override default test parameters with the following env vars
STAGE = os.environ.get("STAGE", "release")
VERSIONS = os.environ.get("VERSIONS", "latest").split(",")

SPLUNK_ENV_PATH = "/etc/otel/collector/splunk-otel-collector.conf"
OLD_SPLUNK_ENV_PATH = "/etc/otel/collector/splunk_env"
AGENT_CONFIG_PATH = "/etc/otel/collector/agent_config.yaml"
GATEWAY_CONFIG_PATH = "/etc/otel/collector/gateway_config.yaml"
OLD_CONFIG_PATH = "/etc/otel/collector/splunk_config_linux.yaml"
TOTAL_MEMORY = "256"
BALLAST = "128"


@pytest.mark.installer
@pytest.mark.parametrize(
    "distro",
    [pytest.param(distro, marks=pytest.mark.deb) for distro in DEB_DISTROS]
    + [pytest.param(distro, marks=pytest.mark.rpm) for distro in RPM_DISTROS],
)
@pytest.mark.parametrize("version", VERSIONS)
@pytest.mark.parametrize("mode", ["agent", "gateway"])
def test_installer_mode(distro, version, mode):
    install_cmd = f"sh -x /test/install.sh -- testing123 --realm us0 --memory {TOTAL_MEMORY} --mode {mode}"
    if version != "latest":
        install_cmd = f"{install_cmd} --collector-version {version.lstrip('v')}"
    if STAGE != "release":
        assert STAGE in ("test", "beta"), f"Unsupported stage '{STAGE}'!"
        install_cmd = f"{install_cmd} --{STAGE}"
    print(f"Testing installation on {distro} from {STAGE} stage ...")
    with run_distro_container(distro) as container:
        # run installer script
        copy_file_into_container(container, INSTALLER_PATH, "/test/install.sh")
        try:
            run_container_cmd(container, install_cmd, env={"VERIFY_ACCESS_TOKEN": "false"})
            time.sleep(5)
            config_path = AGENT_CONFIG_PATH if mode == "agent" else GATEWAY_CONFIG_PATH
            if container.exec_run(f"test -f {OLD_CONFIG_PATH}").exit_code == 0:
                config_path = OLD_CONFIG_PATH
            elif mode == "gateway" and container.exec_run(f"test -f {GATEWAY_CONFIG_PATH}").exit_code != 0:
                config_path = AGENT_CONFIG_PATH
            # verify env file created with configured parameters
            splunk_env_path = SPLUNK_ENV_PATH
            if container.exec_run(f"test -f {OLD_SPLUNK_ENV_PATH}").exit_code == 0:
                splunk_env_path = OLD_SPLUNK_ENV_PATH
            run_container_cmd(container, f"grep '^SPLUNK_CONFIG={config_path}$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_ACCESS_TOKEN=testing123$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_REALM=us0$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_MEMORY_TOTAL_MIB={TOTAL_MEMORY}$' {splunk_env_path}")
            # verify collector service status
            assert wait_for(lambda: service_is_running(container, service_owner=SERVICE_OWNER))
            # the td-agent service should only be running when installing
            # collector packages that have our custom fluent config
            if container.exec_run("test -f /etc/otel/collector/fluentd/fluent.conf").exit_code == 0:
                assert container.exec_run("systemctl status td-agent").exit_code == 0
            else:
                assert container.exec_run("systemctl status td-agent").exit_code != 0
            # test support bundle script
            if container.exec_run("test -f /etc/otel/collector/splunk-support-bundle.sh").exit_code == 0:
                run_container_cmd(container, "/etc/otel/collector/splunk-support-bundle.sh -t /tmp/splunk-support-bundle")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/config/agent_config.yaml")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/logs/splunk-otel-collector.log")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/logs/splunk-otel-collector.txt")
                if container.exec_run("test -f /etc/otel/collector/fluentd/fluent.conf").exit_code == 0:
                    run_container_cmd(container, "test -f /tmp/splunk-support-bundle/logs/td-agent.log")
                    run_container_cmd(container, "test -f /tmp/splunk-support-bundle/logs/td-agent.txt")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/metrics/collector-metrics.txt")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/metrics/df.txt")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/metrics/free.txt")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/metrics/top.txt")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle/zpages/tracez.html")
                run_container_cmd(container, "test -f /tmp/splunk-support-bundle.tar.gz")
            run_container_cmd(container, "sh -x /test/install.sh --uninstall")
        finally:
            run_container_cmd(container, "journalctl -u td-agent --no-pager")
            if container.exec_run("test -f /var/log/td-agent/td-agent.log").exit_code == 0:
                run_container_cmd(container, "cat /var/log/td-agent/td-agent.log")
            run_container_cmd(container, f"journalctl -u {SERVICE_NAME} --no-pager")


@pytest.mark.installer
@pytest.mark.parametrize(
    "distro",
    [pytest.param(distro, marks=pytest.mark.deb) for distro in DEB_DISTROS]
    + [pytest.param(distro, marks=pytest.mark.rpm) for distro in RPM_DISTROS],
)
@pytest.mark.parametrize("version", VERSIONS)
def test_installer_ballast(distro, version):
    install_cmd = f"sh -x /test/install.sh -- testing123 --realm us0 --ballast {BALLAST}"
    if version != "latest":
        install_cmd = f"{install_cmd} --collector-version {version.lstrip('v')}"
    if STAGE != "release":
        assert STAGE in ("test", "beta"), f"Unsupported stage '{STAGE}'!"
        install_cmd = f"{install_cmd} --{STAGE}"
    print(f"Testing installation on {distro} from {STAGE} stage ...")
    with run_distro_container(distro) as container:
        # run installer script
        copy_file_into_container(container, INSTALLER_PATH, "/test/install.sh")
        try:
            run_container_cmd(container, install_cmd, env={"VERIFY_ACCESS_TOKEN": "false"})
            time.sleep(5)
            config_path = AGENT_CONFIG_PATH
            if container.exec_run(f"test -f {OLD_CONFIG_PATH}").exit_code == 0:
                config_path = OLD_CONFIG_PATH
            splunk_env_path = SPLUNK_ENV_PATH
            if container.exec_run(f"test -f {OLD_SPLUNK_ENV_PATH}").exit_code == 0:
                splunk_env_path = OLD_SPLUNK_ENV_PATH
            # verify env file created with configured parameters
            run_container_cmd(container, f"grep '^SPLUNK_CONFIG={config_path}$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_ACCESS_TOKEN=testing123$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_REALM=us0$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_BALLAST_SIZE_MIB={BALLAST}$' {splunk_env_path}")
            # verify collector service status
            assert wait_for(lambda: service_is_running(container, service_owner=SERVICE_OWNER))
            # the td-agent service should only be running when installing
            # collector packages that have our custom fluent config
            if container.exec_run("test -f /etc/otel/collector/fluentd/fluent.conf").exit_code == 0:
                assert container.exec_run("systemctl status td-agent").exit_code == 0
            else:
                assert container.exec_run("systemctl status td-agent").exit_code != 0
            run_container_cmd(container, "sh -x /test/install.sh --uninstall")
        finally:
            run_container_cmd(container, "journalctl -u td-agent --no-pager")
            if container.exec_run("test -f /var/log/td-agent/td-agent.log").exit_code == 0:
                run_container_cmd(container, "cat /var/log/td-agent/td-agent.log")
            run_container_cmd(container, f"journalctl -u {SERVICE_NAME} --no-pager")


@pytest.mark.installer
@pytest.mark.parametrize(
    "distro",
    [pytest.param(distro, marks=pytest.mark.deb) for distro in DEB_DISTROS]
    + [pytest.param(distro, marks=pytest.mark.rpm) for distro in RPM_DISTROS],
)
@pytest.mark.parametrize("version", VERSIONS)
def test_installer_service_owner(distro, version):
    service_owner = "test-user"
    install_cmd = f"sh -x /test/install.sh -- testing123 --realm us0 --memory {TOTAL_MEMORY}"
    install_cmd = f"{install_cmd} --service-user {service_owner} --service-group {service_owner}"
    if version != "latest":
        install_cmd = f"{install_cmd} --collector-version {version.lstrip('v')}"
    if STAGE != "release":
        assert STAGE in ("test", "beta"), f"Unsupported stage '{STAGE}'!"
        install_cmd = f"{install_cmd} --{STAGE}"
    print(f"Testing installation on {distro} from {STAGE} stage ...")
    with run_distro_container(distro) as container:
        copy_file_into_container(container, INSTALLER_PATH, "/test/install.sh")
        try:
            # run installer script
            run_container_cmd(container, install_cmd, env={"VERIFY_ACCESS_TOKEN": "false"})
            time.sleep(5)
            config_path = AGENT_CONFIG_PATH
            if container.exec_run(f"test -f {OLD_CONFIG_PATH}").exit_code == 0:
                config_path = OLD_CONFIG_PATH
            splunk_env_path = SPLUNK_ENV_PATH
            if container.exec_run(f"test -f {OLD_SPLUNK_ENV_PATH}").exit_code == 0:
                splunk_env_path = OLD_SPLUNK_ENV_PATH
            # verify env file created with configured parameters
            run_container_cmd(container, f"grep '^SPLUNK_CONFIG={config_path}$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_ACCESS_TOKEN=testing123$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_REALM=us0$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_MEMORY_TOTAL_MIB={TOTAL_MEMORY}$' {splunk_env_path}")
            # verify collector service status
            assert wait_for(lambda: service_is_running(container, service_owner=service_owner))
            # the td-agent service should only be running when installing
            # collector packages that have our custom fluent config
            if container.exec_run("test -f /etc/otel/collector/fluentd/fluent.conf").exit_code == 0:
                assert container.exec_run("systemctl status td-agent").exit_code == 0
            else:
                assert container.exec_run("systemctl status td-agent").exit_code != 0
        finally:
            run_container_cmd(container, "journalctl -u td-agent --no-pager")
            run_container_cmd(container, f"journalctl -u {SERVICE_NAME} --no-pager")


@pytest.mark.installer
@pytest.mark.parametrize(
    "distro",
    [pytest.param(distro, marks=pytest.mark.deb) for distro in DEB_DISTROS]
    + [pytest.param(distro, marks=pytest.mark.rpm) for distro in RPM_DISTROS],
)
@pytest.mark.parametrize("version", VERSIONS)
def test_installer_without_fluentd(distro, version):
    install_cmd = f"sh -x /test/install.sh -- testing123 --realm us0 --memory {TOTAL_MEMORY} --without-fluentd"
    if version != "latest":
        install_cmd = f"{install_cmd} --collector-version {version.lstrip('v')}"
    if STAGE != "release":
        assert STAGE in ("test", "beta"), f"Unsupported stage '{STAGE}'!"
        install_cmd = f"{install_cmd} --{STAGE}"
    print(f"Testing installation on {distro} from {STAGE} stage ...")
    with run_distro_container(distro) as container:
        copy_file_into_container(container, INSTALLER_PATH, "/test/install.sh")
        try:
            # run installer script
            run_container_cmd(container, install_cmd, env={"VERIFY_ACCESS_TOKEN": "false"})
            time.sleep(5)
            config_path = AGENT_CONFIG_PATH
            if container.exec_run(f"test -f {OLD_CONFIG_PATH}").exit_code == 0:
                config_path = OLD_CONFIG_PATH
            splunk_env_path = SPLUNK_ENV_PATH
            if container.exec_run(f"test -f {OLD_SPLUNK_ENV_PATH}").exit_code == 0:
                splunk_env_path = OLD_SPLUNK_ENV_PATH
            # verify env file created with configured parameters
            run_container_cmd(container, f"grep '^SPLUNK_CONFIG={config_path}$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_ACCESS_TOKEN=testing123$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_REALM=us0$' {splunk_env_path}")
            run_container_cmd(container, f"grep '^SPLUNK_MEMORY_TOTAL_MIB={TOTAL_MEMORY}$' {splunk_env_path}")
            # verify collector service status
            assert wait_for(lambda: service_is_running(container, service_owner=SERVICE_OWNER))
            if distro in DEB_DISTROS:
                assert container.exec_run("dpkg -s td-agent").exit_code != 0
            else:
                assert container.exec_run("rpm -q td-agent").exit_code != 0
            run_container_cmd(container, "sh -x /test/install.sh --uninstall")
        finally:
            run_container_cmd(container, f"journalctl -u {SERVICE_NAME} --no-pager")
| 48.29932 | 122 | 0.671479 | 1,856 | 14,200 | 4.904095 | 0.109914 | 0.059328 | 0.07416 | 0.116018 | 0.848715 | 0.83597 | 0.833553 | 0.812679 | 0.807405 | 0.80312 | 0 | 0.006255 | 0.211901 | 14,200 | 293 | 123 | 48.464164 | 0.807077 | 0.097817 | 0 | 0.699507 | 0 | 0.024631 | 0.34116 | 0.133913 | 0 | 0 | 0 | 0 | 0.078818 | 1 | 0.019704 | false | 0 | 0.019704 | 0 | 0.039409 | 0.019704 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
910889646a629ac08a62bc4b2878960574cb138d | 110 | py | Python | py_date.py | cgyqu/python_learning | 55c8df4a963c40ace050d3454b72538190cb0517 | [
"Apache-2.0"
] | null | null | null | py_date.py | cgyqu/python_learning | 55c8df4a963c40ace050d3454b72538190cb0517 | [
"Apache-2.0"
] | null | null | null | py_date.py | cgyqu/python_learning | 55c8df4a963c40ace050d3454b72538190cb0517 | [
"Apache-2.0"
] | null | null | null | import datetime
print(datetime.date.today())
print(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')) | 22 | 63 | 0.681818 | 18 | 110 | 4.166667 | 0.722222 | 0.346667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 110 | 5 | 63 | 22 | 0.721154 | 0 | 0 | 0 | 0 | 0 | 0.18018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9133515d96738d8476ab7f2d0ae8bdacbfb3e915 | 9,979 | py | Python | tests/integration_tests/test_examples.py | maliesa96/garage | 6ba6cf3a4fbe231418d34f432a610f67a7187e6b | [
"MIT"
] | null | null | null | tests/integration_tests/test_examples.py | maliesa96/garage | 6ba6cf3a4fbe231418d34f432a610f67a7187e6b | [
"MIT"
] | null | null | null | tests/integration_tests/test_examples.py | maliesa96/garage | 6ba6cf3a4fbe231418d34f432a610f67a7187e6b | [
"MIT"
] | null | null | null | """This is an integration test to make sure scripts from examples/
work when executing `./examples/**/*.py`.
"""
import os
import pathlib
import subprocess
import pytest
EXAMPLES_ROOT_DIR = pathlib.Path('examples/')
NON_ALGO_EXAMPLES = [
    EXAMPLES_ROOT_DIR / 'torch/resume_training.py',
    EXAMPLES_ROOT_DIR / 'tf/resume_training.py',
    EXAMPLES_ROOT_DIR / 'sim_policy.py',
    EXAMPLES_ROOT_DIR / 'step_env.py',
    EXAMPLES_ROOT_DIR / 'step_dm_control_env.py',
]

# yapf: disable
LONG_RUNNING_EXAMPLES = [
    EXAMPLES_ROOT_DIR / 'tf/ppo_memorize_digits.py',
    EXAMPLES_ROOT_DIR / 'tf/dqn_pong.py',
    EXAMPLES_ROOT_DIR / 'tf/trpo_cubecrash.py',
    EXAMPLES_ROOT_DIR / 'torch/maml_ppo_half_cheetah_dir.py',
    EXAMPLES_ROOT_DIR / 'torch/maml_trpo_half_cheetah_dir.py',
    EXAMPLES_ROOT_DIR / 'torch/maml_vpg_half_cheetah_dir.py',
    EXAMPLES_ROOT_DIR / 'torch/maml_trpo_ml10.py',
    EXAMPLES_ROOT_DIR / 'torch/pearl_half_cheetah_vel.py',
    EXAMPLES_ROOT_DIR / 'torch/pearl_ml1_push.py',
    EXAMPLES_ROOT_DIR / 'torch/pearl_ml10.py',
    EXAMPLES_ROOT_DIR / 'torch/pearl_ml45.py',
    EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml1.py',
    EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml10.py',
    EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml10_meta_test.py',
    EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml45.py',
]
# yapf: enable


def enumerate_algo_examples():
    """Return a list of paths for all algo examples

    Returns:
        List[str]: list of path strings

    """
    exclude = NON_ALGO_EXAMPLES + LONG_RUNNING_EXAMPLES
    all_examples = EXAMPLES_ROOT_DIR.glob('**/*.py')
    return [str(e) for e in all_examples if e not in exclude]


@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(70)
@pytest.mark.parametrize('filepath', enumerate_algo_examples())
def test_algo_examples(filepath):
    """Test algo examples.

    Args:
        filepath (str): path string of example

    """
    if filepath == str(EXAMPLES_ROOT_DIR / 'tf/her_ddpg_fetchreach.py'):
        pytest.skip('Temporarily skipped because it is broken')
    env = os.environ.copy()
    env['GARAGE_EXAMPLE_TEST_N_EPOCHS'] = '1'
    # Don't use check=True, since that causes subprocess to throw an error
    # in case of failure before the assertion is evaluated
    assert subprocess.run([filepath], check=False, env=env).returncode == 0


@pytest.mark.no_cover
@pytest.mark.timeout(180)
def test_dqn_pong():
    """Test tf/dqn_pong.py with reduced replay buffer size for reduced memory
    consumption.
    """
    env = os.environ.copy()
    env['GARAGE_EXAMPLE_TEST_N_EPOCHS'] = '1'
    assert subprocess.run(
        [str(EXAMPLES_ROOT_DIR / 'tf/dqn_pong.py'), '--buffer_size', '5'],
        check=False,
        env=env).returncode == 0


@pytest.mark.no_cover
@pytest.mark.timeout(30)
def test_ppo_memorize_digits():
    """Test tf/ppo_memorize_digits.py with reduced batch size for reduced
    memory consumption.
    """
    env = os.environ.copy()
    env['GARAGE_EXAMPLE_TEST_N_EPOCHS'] = '1'
    command = [
str(EXAMPLES_ROOT_DIR / 'tf/ppo_memorize_digits.py'), '--batch_size',
'4'
]
assert subprocess.run(command, check=False, env=env).returncode == 0
@pytest.mark.no_cover
@pytest.mark.timeout(40)
def test_trpo_cubecrash():
"""Test tf/trpo_cubecrash.py with reduced batch size for reduced memory
consumption.
"""
env = os.environ.copy()
env['GARAGE_EXAMPLE_TEST_N_EPOCHS'] = '1'
assert subprocess.run(
[str(EXAMPLES_ROOT_DIR / 'tf/trpo_cubecrash.py'), '--batch_size', '4'],
check=False,
env=env).returncode == 0
@pytest.mark.no_cover
@pytest.mark.timeout(10)
def test_step_env():
"""Test step_env.py."""
assert subprocess.run(
[EXAMPLES_ROOT_DIR / 'step_env.py', '--n_steps', '1'],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(20)
def test_step_dm_control_env():
"""Test step_dm_control_env.py."""
assert subprocess.run(
[EXAMPLES_ROOT_DIR / 'step_dm_control_env.py', '--n_steps', '1'],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(20)
def test_maml_halfcheetah():
"""Test maml_trpo_half_cheetah_dir.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/maml_trpo_half_cheetah_dir.py', '--epochs',
'1', '--rollouts_per_task', '1', '--meta_batch_size', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(60)
def test_pearl_half_cheetah_vel():
"""Test pearl_half_cheetah_vel.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/pearl_half_cheetah_vel.py', '--num_epochs',
'1', '--num_train_tasks', '5', '--num_test_tasks', '1',
'--encoder_hidden_size', '2', '--net_size', '2',
'--num_steps_per_epoch', '5', '--num_initial_steps', '5',
'--num_steps_prior', '1', '--num_extra_rl_steps_posterior', '1',
'--batch_size', '4', '--embedding_batch_size', '2',
'--embedding_mini_batch_size', '2', '--max_path_length', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(60)
def test_pearl_ml1_push():
"""Test pearl_ml1_push.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/pearl_ml1_push.py', '--num_epochs', '1',
'--num_train_tasks', '5', '--num_test_tasks', '1',
'--encoder_hidden_size', '2', '--net_size', '2',
'--num_steps_per_epoch', '5', '--num_initial_steps', '5',
'--num_steps_prior', '1', '--num_extra_rl_steps_posterior', '1',
'--batch_size', '4', '--embedding_batch_size', '2',
'--embedding_mini_batch_size', '2', '--max_path_length', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
def test_pearl_ml10():
"""Test pearl_ml10.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/pearl_ml10.py', '--num_epochs', '1',
'--num_train_tasks', '1', '--num_test_tasks', '1',
'--encoder_hidden_size', '1', '--net_size', '2',
'--num_steps_per_epoch', '2', '--num_initial_steps', '2',
'--num_steps_prior', '1', '--num_extra_rl_steps_posterior', '1',
'--batch_size', '2', '--embedding_batch_size', '1',
'--embedding_mini_batch_size', '1', '--max_path_length', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
def test_pearl_ml45():
"""Test pearl_ml45.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/pearl_ml45.py', '--num_epochs', '1',
'--num_train_tasks', '1', '--num_test_tasks', '1',
'--encoder_hidden_size', '1', '--net_size', '2',
'--num_steps_per_epoch', '2', '--num_initial_steps', '2',
'--num_steps_prior', '1', '--num_extra_rl_steps_posterior', '1',
'--batch_size', '2', '--embedding_batch_size', '1',
'--embedding_mini_batch_size', '1', '--max_path_length', '1'
],
check=False).returncode == 0
@pytest.mark.nightly
@pytest.mark.no_cover
@pytest.mark.timeout(120)
def test_maml_ml10():
"""Test maml_trpo_ml10.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/maml_trpo_ml10.py', '--epochs', '1',
'--rollouts_per_task', '1', '--meta_batch_size', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(30)
def test_maml_trpo():
"""Test maml_trpo_half_cheetah_dir.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/maml_trpo_half_cheetah_dir.py', '--epochs',
'1', '--rollouts_per_task', '1', '--meta_batch_size', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(30)
def test_maml_ppo():
"""Test maml_ppo_half_cheetah_dir.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/maml_ppo_half_cheetah_dir.py', '--epochs',
'1', '--rollouts_per_task', '1', '--meta_batch_size', '1'
],
check=False).returncode == 0
@pytest.mark.mujoco
@pytest.mark.no_cover
@pytest.mark.timeout(30)
def test_maml_vpg():
"""Test maml_vpg_half_cheetah_dir.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'torch/maml_vpg_half_cheetah_dir.py', '--epochs',
'1', '--rollouts_per_task', '1', '--meta_batch_size', '1'
],
check=False).returncode == 0
@pytest.mark.nightly
@pytest.mark.no_cover
@pytest.mark.timeout(80)
def test_rl2_ml1():
"""Test rl2_ppo_ml1.py."""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml1.py', '--n_epochs', '1',
'--episode_per_task', '1', '--meta_batch_size', '10'
],
check=False).returncode == 0
@pytest.mark.nightly
@pytest.mark.no_cover
@pytest.mark.timeout(120)
def test_rl2_ppo_ml1():
"""Test rl2_ppo_ml1.py."""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml1.py', '--n_epochs', '1',
'--episode_per_task', '1', '--meta_batch_size', '10'
],
check=False).returncode == 0
@pytest.mark.nightly
@pytest.mark.no_cover
@pytest.mark.timeout(200)
def test_rl2_ml10():
"""Test rl2_ppo_ml10.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml10.py', '--n_epochs', '1',
'--episode_per_task', '1', '--meta_batch_size', '10'
],
check=False).returncode == 0
@pytest.mark.nightly
@pytest.mark.no_cover
@pytest.mark.timeout(200)
def test_rl2_ml10_meta_test():
"""Test rl2_ppo_ml10_meta_test.py"""
assert subprocess.run([
EXAMPLES_ROOT_DIR / 'tf/rl2_ppo_ml10_meta_test.py', '--n_epochs', '1',
'--episode_per_task', '1', '--meta_batch_size', '10'
],
check=False).returncode == 0
| 32.504886 | 79 | 0.641046 | 1,365 | 9,979 | 4.348718 | 0.123077 | 0.087601 | 0.103605 | 0.054414 | 0.808288 | 0.788747 | 0.768868 | 0.739387 | 0.701314 | 0.645216 | 0 | 0.027331 | 0.197014 | 9,979 | 306 | 80 | 32.611111 | 0.713466 | 0.106524 | 0 | 0.625571 | 0 | 0 | 0.295509 | 0.138384 | 0 | 0 | 0 | 0 | 0.086758 | 1 | 0.091324 | false | 0 | 0.018265 | 0 | 0.114155 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e674b3cd8dd243b06a1ac611cf031122aba0c903 | 15,096 | py | Python | test/secure_config_params_test.py | kkellerlbl/catalog | 2c14ad4d940ee432e8a9de6b269a3d1aeabe8e86 | [
"MIT"
] | null | null | null | test/secure_config_params_test.py | kkellerlbl/catalog | 2c14ad4d940ee432e8a9de6b269a3d1aeabe8e86 | [
"MIT"
] | null | null | null | test/secure_config_params_test.py | kkellerlbl/catalog | 2c14ad4d940ee432e8a9de6b269a3d1aeabe8e86 | [
"MIT"
] | null | null | null | import unittest
from catalog_test_util import CatalogTestUtil
from biokbase.catalog.Impl import Catalog
class HiddenConfigParamsTest(unittest.TestCase):
# assumes no client groups exist
def test_permissions(self):
anonCtx = self.cUtil.anonymous_ctx()
userCtx = self.cUtil.user_ctx()
# set_secure_config_params
with self.assertRaises(ValueError) as e:
self.catalog.set_secure_config_params(anonCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
with self.assertRaises(ValueError) as e:
self.catalog.set_secure_config_params(userCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
# remove_secure_config_params
with self.assertRaises(ValueError) as e:
self.catalog.remove_secure_config_params(anonCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
with self.assertRaises(ValueError) as e:
self.catalog.remove_secure_config_params(userCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
# get_secure_config_params
with self.assertRaises(ValueError) as e:
self.catalog.get_secure_config_params(anonCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
with self.assertRaises(ValueError) as e:
self.catalog.get_secure_config_params(userCtx, {})
self.assertEqual(str(e.exception), 'You do not have permission to work with hidden ' +
                                       'configuration parameters.')
def test_errors(self):
adminCtx = self.cUtil.admin_ctx()
with self.assertRaises(ValueError) as e:
self.catalog.set_secure_config_params(adminCtx, {})
self.assertEqual(str(e.exception),
                         'data parameter field is required')
with self.assertRaises(ValueError) as e:
self.catalog.set_secure_config_params(adminCtx, {'data': "test"})
self.assertEqual(str(e.exception),
                         'data parameter field must be a list')
with self.assertRaises(ValueError) as e:
self.catalog.remove_secure_config_params(adminCtx, {})
self.assertEqual(str(e.exception),
                         'data parameter field is required')
with self.assertRaises(ValueError) as e:
self.catalog.remove_secure_config_params(adminCtx, {'data': "test"})
self.assertEqual(str(e.exception),
                         'data parameter field must be a list')
with self.assertRaises(ValueError) as e:
self.catalog.get_secure_config_params(adminCtx, {})
self.assertEqual(str(e.exception),
                         'module_name parameter field is required')
with self.assertRaises(ValueError) as e:
self.catalog.get_secure_config_params(adminCtx, {'module_name': [1, 2, 3]})
self.assertEqual(str(e.exception),
                         'module_name parameter field must be a string')
with self.assertRaises(ValueError) as e:
self.catalog.get_secure_config_params(adminCtx, {'module_name': 'abc',
'version': [1, 2, 3]})
self.assertEqual(str(e.exception),
                         'version parameter field must be a string')
def test_no_data(self):
adminCtx = self.cUtil.admin_ctx()
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test0',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 0)
def test_set_parameters(self):
adminCtx = self.cUtil.admin_ctx()
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test1',
'param_name': 'param0',
'param_value': 'value0'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test1',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 1)
self.assertEqual(params[0]['module_name'], 'Test1')
self.assertEqual(params[0]['param_name'], 'param0')
self.assertEqual(params[0]['param_value'], 'value0')
self.assertEqual(params[0]['version'], '')
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test1',
'param_name': 'param0',
'param_value': 'value1'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'Test1',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 1)
self.assertEqual(params[0]['param_value'], 'value1')
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test1',
'param_name': 'param2',
'param_value': 'value2'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test1',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 2)
def test_remove_parameters(self):
adminCtx = self.cUtil.admin_ctx()
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test2',
'param_name': 'param0',
'param_value': 'value0'},
{'module_name': 'Test2',
'param_name': 'param1',
'param_value': 'value1'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test2',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 2)
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': 'Test2',
'param_name': 'param1'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test2',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 1)
self.assertEqual(params[0]['param_name'], 'param0')
self.assertEqual(params[0]['param_value'], 'value0')
def test_versions(self):
adminCtx = self.cUtil.admin_ctx()
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test3',
'param_name': 'param0',
'param_value': 'value0'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test3',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 1)
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': 'Test3',
'param_name': 'param0',
'version': 'special_version',
'param_value': 'value1'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test3',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 2)
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': 'Test3',
'param_name': 'param0'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test3',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 1)
self.assertEqual(params[0]['param_name'], 'param0')
self.assertEqual(params[0]['param_value'], 'value1')
self.assertEqual(params[0]['version'], 'special_version')
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': 'Test3',
'param_name': 'param0',
'version': 'special_version'}]})
params = self.catalog.get_secure_config_params(adminCtx, {'module_name': 'test3',
'load_all_versions': 1})[0]
self.assertEqual(len(params), 0)
def test_module_versions(self):
adminCtx = self.cUtil.admin_ctx()
module_name = 'onerepotest'
version_tag = 'release'
mv = self.catalog.get_module_version(adminCtx, {'module_name': module_name,
'version': version_tag})[0]
git_commit_hash = mv['git_commit_hash']
semantic_version = mv['version']
mv2 = self.catalog.get_module_version(adminCtx, {'module_name': module_name,
'version': semantic_version})[0]
garbage = 'garbage'
param_name = 'param0'
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'param_value': 'value0'},
{'module_name': module_name,
'param_name': param_name,
'version': garbage,
'param_value': 'value1'}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value0')
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': garbage}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value0')
self.check_secure_param_value(module_name, git_commit_hash, 'param0', 'value0')
self.check_secure_param_value(module_name, semantic_version, 'param0', 'value0')
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': version_tag,
'param_value': 'value1'}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value1')
self.check_secure_param_value(module_name, git_commit_hash, 'param0', 'value1')
self.check_secure_param_value(module_name, semantic_version, 'param0', 'value1')
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': version_tag}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value0')
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': git_commit_hash,
'param_value': 'value2'}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value2')
self.check_secure_param_value(module_name, git_commit_hash, 'param0', 'value2')
self.check_secure_param_value(module_name, semantic_version, 'param0', 'value2')
self.catalog.remove_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': git_commit_hash}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value0')
self.catalog.set_secure_config_params(adminCtx, {'data': [{'module_name': module_name,
'param_name': param_name,
'version': semantic_version,
'param_value': 'value3'}]})
self.check_secure_param_value(module_name, version_tag, 'param0', 'value3')
self.check_secure_param_value(module_name, git_commit_hash, 'param0', 'value3')
self.check_secure_param_value(module_name, semantic_version, 'param0', 'value3')
def check_secure_param_value(self, module_name, version, param_name, param_value):
params = self.catalog.get_secure_config_params(self.cUtil.admin_ctx(),
{'module_name': module_name,
'version': version})[0]
self.assertEqual(len(params), 1)
self.assertEqual(params[0]['param_name'], param_name)
self.assertEqual(params[0]['param_value'], param_value)
@classmethod
def setUpClass(cls):
print('++++++++++++ RUNNING secure_config_params.py +++++++++++')
cls.cUtil = CatalogTestUtil('.') # TODO: pass in test directory from outside
cls.cUtil.setUp()
cls.catalog = Catalog(cls.cUtil.getCatalogConfig())
print('ready')
@classmethod
def tearDownClass(cls):
cls.cUtil.tearDown()
| 55.094891 | 102 | 0.506492 | 1,372 | 15,096 | 5.297376 | 0.084548 | 0.088057 | 0.108971 | 0.118052 | 0.880572 | 0.870941 | 0.835718 | 0.824298 | 0.81508 | 0.796918 | 0 | 0.014491 | 0.387454 | 15,096 | 273 | 103 | 55.296703 | 0.771493 | 0.009936 | 0 | 0.592417 | 0 | 0 | 0.163832 | 0.00154 | 0 | 0 | 0 | 0.003663 | 0.232227 | 1 | 0.047393 | false | 0 | 0.014218 | 0 | 0.066351 | 0.009479 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6b779c436160a4a42c4c95974d219dcf32aaa1b | 22 | py | Python | utils/utils/database/__init__.py | koursaros-ai/microservices | 9613595ba62d00cb918feafa329834634bb76dc4 | [
"MIT"
] | 13 | 2019-11-26T04:24:02.000Z | 2021-09-29T04:22:40.000Z | utils/utils/database/__init__.py | koursaros-ai/koursaros | 9613595ba62d00cb918feafa329834634bb76dc4 | [
"MIT"
] | null | null | null | utils/utils/database/__init__.py | koursaros-ai/koursaros | 9613595ba62d00cb918feafa329834634bb76dc4 | [
"MIT"
] | null | null | null | from .psql import *
| 5.5 | 19 | 0.636364 | 3 | 22 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 22 | 3 | 20 | 7.333333 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6d9ddf6dcff294ee28ed786041e7edc394805eb | 102 | py | Python | Language Skills/Python/Unit 10 Advanced Topics in Python/01 Advanced Topics in Python/List Slicing/7-List Slicing Syntax.py | WarHatch/Codecademy-Exercise-Answers | 1fe3684d7edfa712747bce8e595e89409446eb94 | [
"MIT"
] | 346 | 2016-02-22T20:21:10.000Z | 2022-01-27T20:55:53.000Z | Language Skills/Python/Unit 10/1-Advanced Topics in Python/List Slicing/7-List Slicing Syntax_.py | vpstudios/Codecademy-Exercise-Answers | ebd0ee8197a8001465636f52c69592ea6745aa0c | [
"MIT"
] | 55 | 2016-04-07T13:58:44.000Z | 2020-06-25T12:20:24.000Z | Language Skills/Python/Unit 10/1-Advanced Topics in Python/List Slicing/7-List Slicing Syntax_.py | vpstudios/Codecademy-Exercise-Answers | ebd0ee8197a8001465636f52c69592ea6745aa0c | [
"MIT"
] | 477 | 2016-02-21T06:17:02.000Z | 2021-12-22T10:08:01.000Z | l = [i ** 2 for i in range(1, 11)]
# Should be [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
print(l[2:9:2])
| 20.4 | 50 | 0.509804 | 26 | 102 | 2 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.328947 | 0.254902 | 102 | 4 | 51 | 25.5 | 0.355263 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fc5a078afc7ee35ba2916d09e2575a0394813e8a | 13,067 | py | Python | sktime/dists_kernels/_base.py | biologioholic/sktime | 9d0391a04b11d22bd783b452f01aa5b4529b41a2 | [
"BSD-3-Clause"
] | 1 | 2021-12-22T02:45:39.000Z | 2021-12-22T02:45:39.000Z | sktime/dists_kernels/_base.py | biologioholic/sktime | 9d0391a04b11d22bd783b452f01aa5b4529b41a2 | [
"BSD-3-Clause"
] | null | null | null | sktime/dists_kernels/_base.py | biologioholic/sktime | 9d0391a04b11d22bd783b452f01aa5b4529b41a2 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
"""
Base class templates for distances or kernels between time series, and for tabular data.
templates in this module:
BasePairwiseTransformer - distances/kernels for tabular data
BasePairwiseTransformerPanel - distances/kernels for time series
Interface specifications below.
---
class name: BasePairwiseTransformer
Scitype defining methods:
computing distance/kernel matrix (shorthand) - __call__(self, X, X2=X)
computing distance/kernel matrix - transform(self, X, X2=X)
Inspection methods:
hyper-parameter inspection - get_params()
---
class name: BasePairwiseTransformerPanel
Scitype defining methods:
computing distance/kernel matrix (shorthand) - __call__(self, X, X2=X)
computing distance/kernel matrix - transform(self, X, X2=X)
Inspection methods:
hyper-parameter inspection - get_params()
"""
__author__ = ["fkiraly"]
from sktime.base import BaseEstimator
from sktime.datatypes import check_is_scitype, convert_to
from sktime.datatypes._series_as_panel import convert_Series_to_Panel
class BasePairwiseTransformer(BaseEstimator):
"""Base pairwise transformer for tabular or series data template class.
The base pairwise transformer specifies the methods and method
signatures that all pairwise transformers have to implement.
Specific implementations of these methods is deferred to concrete classes.
"""
# default tag values - these typically make the "safest" assumption
_tags = {
"symmetric": False, # is the transformer symmetric, i.e., t(x,y)=t(y,x) always?
"X_inner_mtype": "numpy2D", # which mtype is used internally in _transform?
"fit_is_empty": True, # is "fit" empty? Yes, for all pairwise transforms
}
def __init__(self):
super(BasePairwiseTransformer, self).__init__()
def __call__(self, X, X2=None):
"""Compute distance/kernel matrix, call shorthand.
Behaviour: returns pairwise distance/kernel matrix
between samples in X and X2
if X2 is not passed, is equal to X
alias for transform
Parameters
----------
X: pd.DataFrame of length n, or 2D np.array with n rows
X2: pd.DataFrame of length m, or 2D np.array with m rows, optional
default X2 = X
Returns
-------
distmat: np.array of shape [n, m]
(i,j)-th entry contains distance/kernel between X.iloc[i] and X2.iloc[j]
"""
# no input checks or input logic here, these are done in transform
# this just defines __call__ as an alias for transform
return self.transform(X=X, X2=X2)
def transform(self, X, X2=None):
"""Compute distance/kernel matrix.
Behaviour: returns pairwise distance/kernel matrix
between samples in X and X2 (equal to X if not passed)
Parameters
----------
X: pd.DataFrame of length n, or 2D np.array with n rows
X2: pd.DataFrame of length m, or 2D np.array with m rows, optional
default X2 = X
Returns
-------
distmat: np.array of shape [n, m]
(i,j)-th entry contains distance/kernel between X.iloc[i] and X2.iloc[j]
"""
X = self._pairwise_table_x_check(X)
if X2 is None:
X2 = X
else:
X2 = self._pairwise_table_x_check(X2, var_name="X2")
return self._transform(X=X, X2=X2)
def _transform(self, X, X2=None):
"""Compute distance/kernel matrix.
private _transform containing core logic, called from transform
Behaviour: returns pairwise distance/kernel matrix
between samples in X and X2 (equal to X if not passed)
Parameters
----------
X: pd.DataFrame of length n, or 2D np.array with n rows
X2: pd.DataFrame of length m, or 2D np.array with m rows, optional
default X2 = X
Returns
-------
distmat: np.array of shape [n, m]
(i,j)-th entry contains distance/kernel between X.iloc[i] and X2.iloc[j]
"""
raise NotImplementedError
def fit(self, X=None, X2=None):
"""Fit method for interface compatibility (no logic inside)."""
# no fitting logic, but in case fit is called or expected
self.reset()
self._is_fitted = True
return self
def _pairwise_table_x_check(self, X, var_name="X"):
"""Check and coerce input data.
Method used to check the input and convert Table input
to internally used format, as defined in X_inner_mtype tag
Parameters
----------
X: pd.DataFrame, pd.Series, numpy 1D or 2D, list of dicts
sktime data container compliant with the Table scitype
The value to be checked and coerced
var_name: str, variable name to print in error messages
Returns
-------
X: Panel data container of a supported format in X_inner_mtype
usually a 2D np.ndarray or a pd.DataFrame, unless overridden
"""
X_valid = check_is_scitype(X, "Table", return_metadata=False, var_name=var_name)
if not X_valid:
msg = (
"X and X2 must be in an sktime compatible format, of scitype Table, "
"for instance a pandas.DataFrame or a 2D numpy.ndarray. "
"See the data format tutorial examples/AA_datatypes_and_datasets.ipynb"
)
raise TypeError(msg)
X_inner_mtype = self.get_tag("X_inner_mtype")
X_coerced = convert_to(X, to_type=X_inner_mtype, as_scitype="Table")
return X_coerced
class BasePairwiseTransformerPanel(BaseEstimator):
"""Base pairwise transformer for panel data template class.
The base pairwise transformer specifies the methods and method
signatures that all pairwise transformers have to implement.
Specific implementations of these methods is deferred to concrete classes.
"""
# default tag values - these typically make the "safest" assumption
_tags = {
"symmetric": False, # is the transformer symmetric, i.e., t(x,y)=t(y,x) always?
"X_inner_mtype": "df-list", # which mtype is used internally in _transform?
"fit_is_empty": True, # is "fit" empty? Yes, for all pairwise transforms
}
def __init__(self):
super(BasePairwiseTransformerPanel, self).__init__()
def __call__(self, X, X2=None):
"""Compute distance/kernel matrix, call shorthand.
Behaviour: returns pairwise distance/kernel matrix
between samples in X and X2 (equal to X if not passed)
Parameters
----------
X : Series or Panel, any supported mtype, of n instances
Data to transform, of python type as follows:
Series: pd.Series, pd.DataFrame, or np.ndarray (1D or 2D)
Panel: pd.DataFrame with 2-level MultiIndex, list of pd.DataFrame,
nested pd.DataFrame, or pd.DataFrame in long/wide format
subject to sktime mtype format specifications, for further details see
examples/AA_datatypes_and_datasets.ipynb
X2 : Series or Panel, any supported mtype, of m instances
optional, default: X = X2
Data to transform, of python type as follows:
Series: pd.Series, pd.DataFrame, or np.ndarray (1D or 2D)
Panel: pd.DataFrame with 2-level MultiIndex, list of pd.DataFrame,
nested pd.DataFrame, or pd.DataFrame in long/wide format
subject to sktime mtype format specifications, for further details see
examples/AA_datatypes_and_datasets.ipynb
X and X2 need not have the same mtype
Returns
-------
distmat: np.array of shape [n, m]
(i,j)-th entry contains distance/kernel between X[i] and X2[j]
"""
# no input checks or input logic here, these are done in transform
# this just defines __call__ as an alias for transform
return self.transform(X=X, X2=X2)
def transform(self, X, X2=None):
"""Compute distance/kernel matrix.
Behaviour: returns pairwise distance/kernel matrix
between samples in X and X2 (equal to X if not passed)
Parameters
----------
X : Series or Panel, any supported mtype, of n instances
Data to transform, of python type as follows:
Series: pd.Series, pd.DataFrame, or np.ndarray (1D or 2D)
Panel: pd.DataFrame with 2-level MultiIndex, list of pd.DataFrame,
nested pd.DataFrame, or pd.DataFrame in long/wide format
subject to sktime mtype format specifications, for further details see
examples/AA_datatypes_and_datasets.ipynb
X2 : Series or Panel, any supported mtype, of m instances
optional, default: X = X2
Data to transform, of python type as follows:
Series: pd.Series, pd.DataFrame, or np.ndarray (1D or 2D)
            Panel: pd.DataFrame with 2-level MultiIndex, list of pd.DataFrame,
                nested pd.DataFrame, or pd.DataFrame in long/wide format
            subject to sktime mtype format specifications, for further details see
                examples/AA_datatypes_and_datasets.ipynb
            X and X2 need not have the same mtype

        Returns
        -------
        distmat: np.array of shape [n, m]
            (i,j)-th entry contains distance/kernel between X[i] and X2[j]
        """
        X = self._pairwise_panel_x_check(X)

        if X2 is None:
            X2 = X
        else:
            X2 = self._pairwise_panel_x_check(X2, var_name="X2")

        return self._transform(X=X, X2=X2)

    def _transform(self, X, X2=None):
        """Compute distance/kernel matrix.

        Private _transform containing core logic, called from transform.

        Behaviour: returns pairwise distance/kernel matrix
        between samples in X and X2 (equal to X if not passed).

        Parameters
        ----------
        X : guaranteed to be Series or Panel of mtype X_inner_mtype, n instances
            if X_inner_mtype is list, _transform must support all types in it
            Data to be transformed
        X2 : guaranteed to be Series or Panel of mtype X_inner_mtype, m instances
            if X_inner_mtype is list, _transform must support all types in it
            Data to be transformed
            default X2 = X

        Returns
        -------
        distmat: np.array of shape [n, m]
            (i,j)-th entry contains distance/kernel between X[i] and X2[j]
        """
        raise NotImplementedError

    def fit(self, X=None, X2=None):
        """Fit method for interface compatibility (no logic inside)."""
        # no fitting logic, but reset state in case fit is called or expected
        self.reset()
        self._is_fitted = True
        return self

    def _pairwise_panel_x_check(self, X, var_name="X"):
        """Check and coerce input data.

        Method used to check the input and convert Series/Panel input
        to the internally used format, as defined in the X_inner_mtype tag.

        Parameters
        ----------
        X : list of pd.DataFrame, numpy array of pd.DataFrame, or 3D numpy array
            sktime data container compliant with the Series or Panel scitype
            The value to be checked
        var_name : str, variable name to print in error messages

        Returns
        -------
        X : Panel data container of a supported format in X_inner_mtype
            usually df-list, list of pd.DataFrame, unless overridden
        """
        check_res = check_is_scitype(
            X, ["Series", "Panel"], return_metadata=True, var_name=var_name
        )
        X_valid = check_res[0]
        metadata = check_res[2]
        X_scitype = metadata["scitype"]

        if not X_valid:
            msg = (
                "X and X2 must be in an sktime compatible format, "
                "of scitype Series or Panel, "
                "for instance a pandas.DataFrame with sktime compatible time indices, "
                "or with MultiIndex and lowest level a sktime compatible time index. "
                "See the data format tutorial examples/AA_datatypes_and_datasets.ipynb"
            )
            raise TypeError(msg)

        # if the input is a single series, convert it to a Panel
        if X_scitype == "Series":
            X = convert_Series_to_Panel(X)
        # can't be anything else if check_is_scitype is working properly
        elif X_scitype != "Panel":
            raise RuntimeError("Unexpected error in check_is_scitype, check validity")

        X_inner_mtype = self.get_tag("X_inner_mtype")
        X_coerced = convert_to(X, to_type=X_inner_mtype, as_scitype="Panel")

        return X_coerced
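The transform contract above (an [n, m] matrix whose (i, j)-th entry is the distance between X[i] and X2[j], with X2 defaulting to X) can be illustrated without sktime's checking and conversion machinery. The sketch below is a hypothetical stand-in, not sktime code: the class name is invented, panels are plain lists of equal-length sequences, and the matrix comes back as a list of lists rather than an np.array.

```python
import math


class FlatEuclideanPanelDistance:
    """Toy pairwise panel transformer: each sample is a flat numeric
    sequence; entry (i, j) is the Euclidean distance between X[i] and X2[j]."""

    def transform(self, X, X2=None):
        # mirror the interface above: X2 defaults to X
        if X2 is None:
            X2 = X
        return self._transform(X, X2)

    def _transform(self, X, X2):
        # (i, j)-th entry contains the distance between X[i] and X2[j]
        return [[math.dist(xi, xj) for xj in X2] for xi in X]


panel = [[0.0, 0.0], [3.0, 4.0]]
distmat = FlatEuclideanPanelDistance().transform(panel)
print(distmat)  # [[0.0, 5.0], [5.0, 0.0]]
```

A real subclass would additionally run the input checks and mtype coercion shown above before computing the matrix; the public `transform` / private `_transform` split is the template-method pattern the base class relies on.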
fc64273ce7e9ce845292b54068aaa6c78e57efc3 | 56 | py | Python | welding/__init__.py | Swall0w/welding_inference | 6cb0f720ee4b8480f599b4fa3e0e199845c8197b | ["MIT"]
from welding import convert
from welding import replace
fc6c8851e41bd63fb5b91a6a9275b9a835ce9bdc | 1,118 | py | Python | test/game/test_timer.py | IanDCarroll/Trivvy | 2aaa68301e4dd1daaf717d98bb468cc65c8f373a | ["MIT"]
import unittest
from src.game.timer import Timer as Subject


class TimerTestCase(unittest.TestCase):
    def test_timer_max_for_returns_the_number_of_times_questioner_should_iterate(self):
        setting_key = 'max_for_doesnt_care_about_specifics'
        times_per_second = 120
        seconds_to_wait = 2
        tempo = 1 / times_per_second
        settings = {
            setting_key: seconds_to_wait
        }
        subject = Subject(tempo, settings)
        actual = subject.max_for(setting_key)
        expected = times_per_second * seconds_to_wait
        self.assertEqual(actual, expected)

    def test_timer_max_for_returns_a_different_number_of_times_questioner_should_iterate(self):
        setting_key = 'max_for_doesnt_care_about_specifics'
        times_per_second = 1000
        seconds_to_wait = 8
        tempo = 1 / times_per_second
        settings = {
            setting_key: seconds_to_wait
        }
        subject = Subject(tempo, settings)
        actual = subject.max_for(setting_key)
        expected = times_per_second * seconds_to_wait
        self.assertEqual(actual, expected)
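The Timer class under test lives in `src/game/timer.py` and is not shown here. The following is a minimal sketch reconstructed only from the assertions above (tempo as the per-iteration wait in seconds, settings mapping keys to seconds to wait), not the project's actual implementation; whether the real class rounds or truncates is an assumption.

```python
class Timer:
    """Hypothetical reconstruction of the Timer exercised by the tests."""

    def __init__(self, tempo, settings):
        self.tempo = tempo          # seconds between iterations
        self.settings = settings    # setting key -> seconds to wait

    def max_for(self, setting_key):
        # iterations needed to cover the configured wait:
        # seconds_to_wait / tempo == times_per_second * seconds_to_wait
        return round(self.settings[setting_key] / self.tempo)


timer = Timer(1 / 120, {'question_pause': 2})
print(timer.max_for('question_pause'))  # 240
```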
5da176fbc8b3b775b77c86ca4df33c3a14640eb5 | 22,411 | py | Python | tests/test_procedures.py | hafeezibbad/telegram-bot | 0cbc35005ea5d076a8b3a243d794889532e69c4c | ["Apache-2.0"]
"""
Module containing test cases for procedures used in the Rest and Web APIs.
"""
import unittest
from datetime import datetime, timedelta

from mongoengine import Q
from telegram.bot import Bot

from botapp import create_app
from botapp.models import MyBot, Message
from botapp.api_helpers import procedures
from helper import CONSTANTS


class ProceduresTest(unittest.TestCase):
    def setUp(self):
        self.app = create_app('testing')
        self.app_context = self.app.app_context()
        self.app_context.push()

    def tearDown(self):
        # Drop all collections
        MyBot.drop_collection()
        Message.drop_collection()
        self.app_context.pop()
    def test_add_bot_procedure_with_no_inputs(self):
        with self.assertRaises(ValueError) as e:
            procedures.add_bot()
        self.assertEqual(str(e.exception),
                         'No/Bad token(String expected) provided to add new'
                         ' bot.')

    def test_add_bot_with_invalid_token_type(self):
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=1)
        self.assertEqual(str(e.exception),
                         'Invalid token:{tokn} used for adding live '
                         'bot.'.format(tokn=1))

    def test_add_testbot_valid_token(self):
        status = procedures.add_bot(token='dummy_token', testing=True)
        self.assertIsNotNone(status[0])
        self.assertFalse(status[1])
        self.assertTrue('testbot-' in status[0])
        bot = MyBot.objects(username=status[0]).first()
        self.assertIsNotNone(bot)
        self.assertTrue(bot.test_bot)
        self.assertEqual(bot.first_name, 'test')
        self.assertEqual(bot.last_name, 'bot')

    def test_add_livebot_with_valid_token(self):
        status = procedures.add_bot(token=CONSTANTS.LIVE_BOTS.get(1))
        # Get bot information from Telegram API.
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        self.assertIsNotNone(status[0])
        self.assertEqual(status[0], bot.username)
        self.assertTrue(status[1])
        mybot = MyBot.objects(username=status[0]).first()
        self.assertIsNotNone(mybot)
        self.assertFalse(mybot.test_bot)
        self.assertEqual(mybot.first_name, bot.first_name)
        self.assertEqual(mybot.last_name, bot.last_name)
        self.assertEqual(mybot.username, bot.username)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(mybot.bot_id), 1)

    def test_add_livebot_with_invalid_token(self):
        bad_token = 'dummy-token'
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=bad_token)
        self.assertEqual(str(e.exception),
                         'Invalid token:{tokn} used for adding live '
                         'bot.'.format(tokn=bad_token))

    def test_add_testbot_with_duplicate_token(self):
        bad_token = 'dummy-token'
        # Add a test bot with bad token.
        MyBot(token=bad_token, test_bot=True).save()
        self.assertIsNotNone(MyBot.objects(token=bad_token).first())
        self.assertEqual(MyBot.objects.count(), 1)
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=bad_token, testing=True)
        self.assertEqual(str(e.exception),
                         'Bot with given token{tokn} is already present in '
                         'database.'.format(tokn=bad_token))

    def test_add_livebot_with_duplicate_token(self):
        live_token = CONSTANTS.LIVE_BOTS.get(1)
        # Add a live bot with valid token.
        MyBot(token=live_token).save()
        self.assertIsNotNone(MyBot.objects(token=live_token).first())
        self.assertEqual(MyBot.objects.count(), 1)
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=live_token)
        self.assertEqual(str(e.exception),
                         'Bot with given token{tokn} is already present in '
                         'database.'.format(tokn=live_token))

    def test_add_testbot_with_duplicate_live_token(self):
        live_token = CONSTANTS.LIVE_BOTS.get(1)
        # Add a live bot with valid token.
        MyBot(token=live_token).save()
        self.assertIsNotNone(MyBot.objects(token=live_token).first())
        self.assertEqual(MyBot.objects.count(), 1)
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=live_token, testing=True)
        self.assertEqual(str(e.exception),
                         'Bot with given token{tokn} is already present in '
                         'database.'.format(tokn=live_token))

    def test_add_livebot_with_duplicate_bad_token(self):
        bad_token = 'dummy-token'
        # Add a test bot with the bad token.
        MyBot(token=bad_token, test_bot=True).save()
        self.assertIsNotNone(MyBot.objects(token=bad_token).first())
        self.assertEqual(MyBot.objects.count(), 1)
        with self.assertRaises(ValueError) as e:
            procedures.add_bot(token=bad_token)
        self.assertEqual(str(e.exception),
                         'Bot with given token{tokn} is already present in '
                         'database.'.format(tokn=bad_token))
    def test_start_bot_with_no_inputs(self):
        with self.assertRaises(ValueError) as e:
            procedures.start_bot()
        self.assertEqual(str(e.exception),
                         'No botid/username provided with start bot '
                         'request.')

    def test_start_bot_with_invalid_botid(self):
        with self.assertRaises(ValueError) as e:
            procedures.start_bot(botid='abc')
        self.assertEqual(str(e.exception),
                         'Integer value expected for botid in start bot '
                         'request.')

    def test_start_bot_with_invalid_username(self):
        with self.assertRaises(ValueError) as e:
            procedures.start_bot(username=1234)
        self.assertEqual(str(e.exception),
                         'String value expected for username in start bot '
                         'request.')

    def test_start_bot_for_non_existing_botid(self):
        self.assertEqual(procedures.start_bot(botid=12345), -1)

    def test_start_bot_for_non_existing_username(self):
        self.assertEqual(procedures.start_bot(username='unknown-username'), -1)

    def test_start_bot_for_non_existing_botid_username(self):
        self.assertEqual(procedures.start_bot(botid=1234,
                                              username='unknown-username'), -1)

    def test_start_bot_for_test_bot(self):
        bot = MyBot(token='dummy-token', test_bot=True).save()
        self.assertIsNotNone(bot)
        self.assertEqual(procedures.start_bot(botid=bot.bot_id), -2)

    def test_start_livebot_with_valid_botid(self):
        bot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1)).save()
        self.assertIsNotNone(bot)
        self.assertEqual(procedures.start_bot(botid=bot.bot_id), 1)
        bot = MyBot.objects(bot_id=bot.bot_id).first()
        self.assertTrue(bot.state)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(botid=bot.bot_id), 1)
        bot = MyBot.objects(bot_id=bot.bot_id).first()
        self.assertFalse(bot.state)

    def test_start_livebot_with_valid_username(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(username=str(mybot.username)), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertTrue(mybot.state)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_start_livebot_with_valid_botid_username(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id,
                                              username=str(mybot.username)), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertTrue(mybot.state)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_start_livebot_with_invalid_botid_valid_username(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=12345,
                                              username=str(mybot.username)), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertTrue(mybot.state)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_start_livebot_with_valid_botid_invalid_username(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id,
                                              username='abcde'), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertTrue(mybot.state)
        # Otherwise unittests don't end.
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_start_livebot_with_bad_token(self):
        bot = MyBot(token='dummy-token').save()
        self.assertIsNotNone(bot)
        with self.assertRaises(ValueError) as e:
            procedures.start_bot(botid=bot.bot_id)
        self.assertEqual(str(e.exception),
                         'Bot:{username} registered with bad token can not '
                         'be started.'.format(username=bot.username))
    def test_stop_bot_with_no_inputs(self):
        with self.assertRaises(ValueError) as e:
            procedures.stop_bot()
        self.assertEqual(str(e.exception),
                         'No botid/username provided with stop bot '
                         'request.')

    def test_stop_bot_with_invalid_botid(self):
        with self.assertRaises(ValueError) as e:
            procedures.stop_bot(botid='abc')
        self.assertEqual(str(e.exception),
                         'Integer value expected for botid in stop bot '
                         'request.')

    def test_stop_bot_with_invalid_username(self):
        with self.assertRaises(ValueError) as e:
            procedures.stop_bot(username=1234)
        self.assertEqual(str(e.exception),
                         'String value expected for username in stop bot '
                         'request.')

    def test_stop_bot_for_non_existing_botid(self):
        self.assertEqual(procedures.stop_bot(botid=12345), -1)

    def test_stop_bot_for_non_existing_username(self):
        self.assertEqual(procedures.stop_bot(username='unknown-username'), -1)

    def test_stop_bot_for_non_existing_botid_username(self):
        self.assertEqual(procedures.stop_bot(botid=1234,
                                             username='unknown-username'), -1)

    def test_stop_bot_for_test_bot(self):
        bot = MyBot(token='dummy-token', test_bot=True).save()
        self.assertIsNotNone(bot)
        self.assertEqual(procedures.stop_bot(botid=bot.bot_id), -2)

    def test_stop_bot_never_running_live_bot(self):
        bot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1)).save()
        self.assertIsNotNone(bot)
        self.assertEqual(procedures.stop_bot(botid=bot.bot_id), -2)

    def test_stopbot_previously_running_now_stopped_live_bot(self):
        bot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1)).save()
        self.assertIsNotNone(bot)
        self.assertEqual(procedures.start_bot(botid=bot.bot_id), 1)
        self.assertEqual(procedures.stop_bot(botid=bot.bot_id), 1)
        bot = MyBot.objects(token=bot.token).first()
        self.assertFalse(bot.state)
        self.assertEqual(procedures.stop_bot(botid=bot.bot_id), -2)

    def test_stopbot_valid_running_bot_using_valid_username(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id), 1)
        self.assertEqual(procedures.stop_bot(username=str(mybot.username)), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_stopbot_valid_running_bot_using_valid_botid(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id), 1)
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_stopbot_valid_running_bot_using_valid_username_invalid_botid(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id), 1)
        self.assertEqual(procedures.stop_bot(botid=12345,
                                             username=str(mybot.username)), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)

    def test_stopbot_valid_running_bot_using_invalid_username_valid_botid(self):
        bot = Bot(token=CONSTANTS.LIVE_BOTS.get(1)).get_me()
        mybot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), bot_id=bot.id,
                      username=bot.username, first_name=bot.first_name,
                      last_name=bot.last_name).save()
        self.assertIsNotNone(mybot)
        self.assertEqual(procedures.start_bot(botid=mybot.bot_id), 1)
        assert isinstance(mybot, MyBot)
        self.assertEqual(procedures.stop_bot(botid=mybot.bot_id,
                                             username='abcde'), 1)
        mybot = MyBot.objects(bot_id=mybot.bot_id).first()
        self.assertFalse(mybot.state)
    def test_start_stop_all_with_valid_bots(self):
        bot = MyBot(token=CONSTANTS.LIVE_BOTS.get(1)).save()
        self.assertIsNotNone(bot)
        started = procedures.start_all()
        self.assertTrue(bot.bot_id in started)
        self.assertEqual(len(started),
                         MyBot.objects(test_bot=False).count())
        stopped = procedures.stop_all()
        self.assertEqual(len(stopped), len(started))
        self.assertTrue(bot.bot_id in stopped)

    def test_start_stop_all_with_test_bots(self):
        bot = MyBot(token='dummy-token', test_bot=True).save()
        self.assertIsNotNone(bot)
        started = procedures.start_all()
        self.assertTrue(bot.bot_id not in started)
        self.assertEqual(len(started),
                         MyBot.objects(test_bot=False).count())
        stopped = procedures.stop_all()
        self.assertTrue(bot.bot_id not in stopped)

    def test_start_stop_all_with_test_and_live_bots(self):
        bot1 = MyBot(token='dummy-token', test_bot=True, username='test').save()
        bot2 = MyBot(token=CONSTANTS.LIVE_BOTS.get(1), username='live').save()
        self.assertIsNotNone(bot1)
        self.assertIsNotNone(bot2)
        started = procedures.start_all()
        self.assertTrue(bot2.bot_id in started)
        self.assertTrue(bot1.bot_id not in started)
        self.assertEqual(len(started),
                         MyBot.objects(test_bot=False).count())
        stopped = procedures.stop_all()
        self.assertTrue(bot2.bot_id in stopped)
    def test_filter_messages_by_time(self):
        # Add dummy messages
        Message.generate_fake(10)
        # Add 2 legit messages
        Message(date=datetime.now() - timedelta(minutes=30)).save()
        Message(date=datetime.now() - timedelta(minutes=60)).save()
        # Get messages
        msgs = procedures.filter_messages(time_min=90)
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_botid(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add 2 legit messages
        Message(bot_id=1234).save()
        Message(bot_id=1234).save()
        # Get messages
        msgs = procedures.filter_messages(botid=1234)
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_sender_username(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add 2 legit messages
        Message(sender_username='tester').save()
        Message(sender_username='Tester').save()
        # Get messages
        msgs = procedures.filter_messages(username='tester')
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_sender_text(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add 2 legit messages
        Message(text_content='text-12345').save()
        Message(text_content='TEXT-abcde').save()
        # Get messages
        msgs = procedures.filter_messages(text='text')
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_sender_firstname(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add 2 legit messages
        Message(sender_firstname='tom-hanks', sender_lastname='john').save()
        Message(sender_firstname='tom-cruise', sender_lastname='doe').save()
        # Get messages
        msgs = procedures.filter_messages(name='tom')
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_sender_lastname(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add 2 legit messages
        Message(sender_firstname='doe', sender_lastname='john').save()
        Message(sender_firstname='angel', sender_lastname='johnny').save()
        # Get messages
        msgs = procedures.filter_messages(name='john')
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_sender_firstname_lastname(self):
        # Add dummy messages
        Message.generate_fake(10)
        # Remove any message with (possibly) matching names.
        Message.objects(Q(sender_firstname__icontains='john') |
                        Q(sender_lastname__icontains='john')).delete()
        # Add 2 legit messages
        Message(sender_firstname='doe', sender_lastname='john').save()
        Message(sender_firstname='johnathen', sender_lastname='angel').save()
        # Get messages
        msgs = procedures.filter_messages(name='john')
        self.assertEqual(len(msgs), 2)

    def test_filter_messages_by_all_criteria(self):
        # Add dummy messages
        Message.generate_fake(5)
        # Add partially matching messages.
        Message(date=datetime.now() - timedelta(minutes=30),  # Un-match time.
                sender_username='tester1',
                sender_firstname='test',
                sender_lastname='bot',
                text_content='testmessage',
                bot_id=12345).save()
        Message(date=datetime.now() - timedelta(minutes=10),
                sender_username='tester2',  # Non-matching sender-username.
                sender_firstname='test',
                sender_lastname='bot',
                text_content='testmessage',
                bot_id=12345).save()
        Message(date=datetime.now() - timedelta(minutes=10),
                sender_username='tester1',
                sender_firstname='abc',  # Non-matching first-name, last-name
                sender_lastname='def',
                text_content='testmessage',
                bot_id=12345).save()
        Message(date=datetime.now() - timedelta(minutes=10),
                sender_username='tester1',
                sender_firstname='test',
                sender_lastname='bot',
                text_content='message',  # Non-matching text content
                bot_id=12345).save()
        Message(date=datetime.now() - timedelta(minutes=10),
                sender_username='Tester1',
                sender_firstname='Test',
                sender_lastname='Bot',
                text_content='testmessage',
                bot_id=11111).save()  # Non-matching botid
        # Add expected message.
        Message(date=datetime.now() - timedelta(minutes=10),
                sender_username='tester1',
                sender_firstname='test',
                sender_lastname='bot',
                text_content='testmessage',
                bot_id=12345).save()
        # Get messages
        msgs = procedures.filter_messages(botid=12345, time_min=15, text='test',
                                          username='tester1', name='test')
        self.assertEqual(len(msgs), 1)
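The filtering semantics these tests pin down (case-insensitive matching: substring for text, exact for username, name checked against both first and last name, plus a time window and a bot id) can be sketched without mongoengine. The stand-in below operates on plain dicts and is reconstructed from the assertions above, not the project's actual query code:

```python
from datetime import datetime, timedelta


def filter_messages(messages, botid=None, time_min=None, text=None,
                    username=None, name=None):
    """Keep messages that satisfy every criterion that was supplied."""
    now = datetime.now()
    result = []
    for m in messages:
        if botid is not None and m.get('bot_id') != botid:
            continue
        if time_min is not None and m['date'] < now - timedelta(minutes=time_min):
            continue
        if text is not None and text.lower() not in m.get('text_content', '').lower():
            continue
        if username is not None and m.get('sender_username', '').lower() != username.lower():
            continue
        if name is not None and not (name.lower() in m.get('sender_firstname', '').lower()
                                     or name.lower() in m.get('sender_lastname', '').lower()):
            continue
        result.append(m)
    return result


msgs = [
    {'date': datetime.now() - timedelta(minutes=10), 'bot_id': 12345,
     'sender_username': 'Tester1', 'sender_firstname': 'test',
     'sender_lastname': 'bot', 'text_content': 'testmessage'},
    {'date': datetime.now() - timedelta(minutes=30), 'bot_id': 12345,  # too old
     'sender_username': 'tester1', 'sender_firstname': 'test',
     'sender_lastname': 'bot', 'text_content': 'testmessage'},
]
hits = filter_messages(msgs, botid=12345, time_min=15, text='test',
                       username='tester1', name='test')
print(len(hits))  # 1
```

In the real module these criteria would compose as mongoengine `Q` filters (e.g. `icontains` for text and name, `iexact` for username), matching the queries the tests above exercise.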
f8dd4d8d624a509785b3b9c8b7585f482172bde2 | 35 | py | Python | src/reforemast/__init__.py | e4r7hbug/reforemast | a4499922d4702f82b4f2034219278e7738c1eb12 | ["Apache-2.0"]
from .reforemast import Reforemast
5d3f1be0d33b904cf58a12b5975e7b50f136c1e5 | 101 | py | Python | works_on/admin.py | Sudani-Coder/morsalHR | 9febdcd93763da8cb10eaa1860ce1465d5f2173f | ["MIT"]
from django.contrib import admin
from works_on.models import works_on
admin.site.register(works_on)
5d61779d45bf71e1ae78b9eb26075e266eeaca70 | 25,353 | bzl | Python | rust/cargo/crates.bzl | justinwp/rules_proto | 76e30bc0ad6c2f4150f40e593db83eedeb069f1e | ["Apache-2.0"]
"""
cargo-raze crate workspace functions
DO NOT EDIT! Replaced on runs of cargo-raze
"""
def raze_fetch_remote_crates():
    native.new_http_archive(
        name = "raze__arrayvec__0_4_7",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/arrayvec/arrayvec-0.4.7.crate",
        type = "tar.gz",
        strip_prefix = "arrayvec-0.4.7",
        build_file = str(Label("//rust/cargo/remote:arrayvec-0.4.7.BUILD")),
    )

    native.new_http_archive(
        name = "raze__base64__0_9_3",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/base64/base64-0.9.3.crate",
        type = "tar.gz",
        strip_prefix = "base64-0.9.3",
        build_file = str(Label("//rust/cargo/remote:base64-0.9.3.BUILD")),
    )

    native.new_http_archive(
        name = "raze__bitflags__1_0_4",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/bitflags/bitflags-1.0.4.crate",
        type = "tar.gz",
        strip_prefix = "bitflags-1.0.4",
        build_file = str(Label("//rust/cargo/remote:bitflags-1.0.4.BUILD")),
    )

    native.new_http_archive(
        name = "raze__byteorder__1_2_6",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/byteorder/byteorder-1.2.6.crate",
        type = "tar.gz",
        strip_prefix = "byteorder-1.2.6",
        build_file = str(Label("//rust/cargo/remote:byteorder-1.2.6.BUILD")),
    )

    native.new_http_archive(
        name = "raze__bytes__0_4_10",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/bytes/bytes-0.4.10.crate",
        type = "tar.gz",
        strip_prefix = "bytes-0.4.10",
        build_file = str(Label("//rust/cargo/remote:bytes-0.4.10.BUILD")),
    )

    native.new_http_archive(
        name = "raze__cfg_if__0_1_5",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/cfg-if/cfg-if-0.1.5.crate",
        type = "tar.gz",
        strip_prefix = "cfg-if-0.1.5",
        build_file = str(Label("//rust/cargo/remote:cfg-if-0.1.5.BUILD")),
    )

    native.new_http_archive(
        name = "raze__cloudabi__0_0_3",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/cloudabi/cloudabi-0.0.3.crate",
        type = "tar.gz",
        strip_prefix = "cloudabi-0.0.3",
        build_file = str(Label("//rust/cargo/remote:cloudabi-0.0.3.BUILD")),
    )

    native.new_http_archive(
        name = "raze__crossbeam_deque__0_6_1",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-deque/crossbeam-deque-0.6.1.crate",
        type = "tar.gz",
        strip_prefix = "crossbeam-deque-0.6.1",
        build_file = str(Label("//rust/cargo/remote:crossbeam-deque-0.6.1.BUILD")),
    )

    native.new_http_archive(
        name = "raze__crossbeam_epoch__0_5_2",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-epoch/crossbeam-epoch-0.5.2.crate",
        type = "tar.gz",
        strip_prefix = "crossbeam-epoch-0.5.2",
        build_file = str(Label("//rust/cargo/remote:crossbeam-epoch-0.5.2.BUILD")),
    )

    native.new_http_archive(
        name = "raze__crossbeam_utils__0_5_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-utils/crossbeam-utils-0.5.0.crate",
        type = "tar.gz",
        strip_prefix = "crossbeam-utils-0.5.0",
        build_file = str(Label("//rust/cargo/remote:crossbeam-utils-0.5.0.BUILD")),
    )
    native.new_http_archive(
        name = "raze__fuchsia_zircon__0_3_3",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/fuchsia-zircon/fuchsia-zircon-0.3.3.crate",
        type = "tar.gz",
        strip_prefix = "fuchsia-zircon-0.3.3",
        build_file = str(Label("//rust/cargo/remote:fuchsia-zircon-0.3.3.BUILD")),
    )

    native.new_http_archive(
        name = "raze__fuchsia_zircon_sys__0_3_3",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/fuchsia-zircon-sys/fuchsia-zircon-sys-0.3.3.crate",
        type = "tar.gz",
        strip_prefix = "fuchsia-zircon-sys-0.3.3",
        build_file = str(Label("//rust/cargo/remote:fuchsia-zircon-sys-0.3.3.BUILD")),
    )

    native.new_http_archive(
        name = "raze__futures__0_1_24",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/futures/futures-0.1.24.crate",
        type = "tar.gz",
        strip_prefix = "futures-0.1.24",
        build_file = str(Label("//rust/cargo/remote:futures-0.1.24.BUILD")),
    )

    native.new_http_archive(
        name = "raze__futures_cpupool__0_1_8",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/futures-cpupool/futures-cpupool-0.1.8.crate",
        type = "tar.gz",
        strip_prefix = "futures-cpupool-0.1.8",
        build_file = str(Label("//rust/cargo/remote:futures-cpupool-0.1.8.BUILD")),
    )

    native.new_http_archive(
        name = "raze__grpc__0_4_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/grpc/grpc-0.4.0.crate",
        type = "tar.gz",
        strip_prefix = "grpc-0.4.0",
        build_file = str(Label("//rust/cargo/remote:grpc-0.4.0.BUILD")),
    )

    native.new_http_archive(
        name = "raze__grpc_compiler__0_4_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/grpc-compiler/grpc-compiler-0.4.0.crate",
        type = "tar.gz",
        strip_prefix = "grpc-compiler-0.4.0",
        build_file = str(Label("//rust/cargo/remote:grpc-compiler-0.4.0.BUILD")),
    )

    native.new_http_archive(
        name = "raze__httpbis__0_6_1",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/httpbis/httpbis-0.6.1.crate",
        type = "tar.gz",
        strip_prefix = "httpbis-0.6.1",
        build_file = str(Label("//rust/cargo/remote:httpbis-0.6.1.BUILD")),
    )

    native.new_http_archive(
        name = "raze__iovec__0_1_2",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/iovec/iovec-0.1.2.crate",
        type = "tar.gz",
        strip_prefix = "iovec-0.1.2",
        build_file = str(Label("//rust/cargo/remote:iovec-0.1.2.BUILD")),
    )

    native.new_http_archive(
        name = "raze__kernel32_sys__0_2_2",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/kernel32-sys/kernel32-sys-0.2.2.crate",
        type = "tar.gz",
        strip_prefix = "kernel32-sys-0.2.2",
        build_file = str(Label("//rust/cargo/remote:kernel32-sys-0.2.2.BUILD")),
    )

    native.new_http_archive(
        name = "raze__lazy_static__1_1_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/lazy_static/lazy_static-1.1.0.crate",
        type = "tar.gz",
        strip_prefix = "lazy_static-1.1.0",
        build_file = str(Label("//rust/cargo/remote:lazy_static-1.1.0.BUILD")),
    )

    native.new_http_archive(
        name = "raze__lazycell__1_2_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/lazycell/lazycell-1.2.0.crate",
        type = "tar.gz",
        strip_prefix = "lazycell-1.2.0",
        build_file = str(Label("//rust/cargo/remote:lazycell-1.2.0.BUILD")),
    )
    native.new_http_archive(
        name = "raze__libc__0_2_43",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/libc/libc-0.2.43.crate",
        type = "tar.gz",
        strip_prefix = "libc-0.2.43",
        build_file = str(Label("//rust/cargo/remote:libc-0.2.43.BUILD")),
    )

    native.new_http_archive(
        name = "raze__lock_api__0_1_3",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/lock_api/lock_api-0.1.3.crate",
        type = "tar.gz",
        strip_prefix = "lock_api-0.1.3",
        build_file = str(Label("//rust/cargo/remote:lock_api-0.1.3.BUILD")),
    )

    native.new_http_archive(
        name = "raze__log__0_3_9",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/log/log-0.3.9.crate",
        type = "tar.gz",
        strip_prefix = "log-0.3.9",
        build_file = str(Label("//rust/cargo/remote:log-0.3.9.BUILD")),
    )

    native.new_http_archive(
        name = "raze__log__0_4_5",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/log/log-0.4.5.crate",
        type = "tar.gz",
        strip_prefix = "log-0.4.5",
        build_file = str(Label("//rust/cargo/remote:log-0.4.5.BUILD")),
    )

    native.new_http_archive(
        name = "raze__memoffset__0_2_1",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/memoffset/memoffset-0.2.1.crate",
        type = "tar.gz",
        strip_prefix = "memoffset-0.2.1",
        build_file = str(Label("//rust/cargo/remote:memoffset-0.2.1.BUILD")),
    )

    native.new_http_archive(
        name = "raze__mio__0_6_16",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/mio/mio-0.6.16.crate",
        type = "tar.gz",
        strip_prefix = "mio-0.6.16",
        build_file = str(Label("//rust/cargo/remote:mio-0.6.16.BUILD")),
    )

    native.new_http_archive(
        name = "raze__mio_uds__0_6_7",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/mio-uds/mio-uds-0.6.7.crate",
        type = "tar.gz",
        strip_prefix = "mio-uds-0.6.7",
        build_file = str(Label("//rust/cargo/remote:mio-uds-0.6.7.BUILD")),
    )

    native.new_http_archive(
        name = "raze__miow__0_2_1",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/miow/miow-0.2.1.crate",
        type = "tar.gz",
        strip_prefix = "miow-0.2.1",
        build_file = str(Label("//rust/cargo/remote:miow-0.2.1.BUILD")),
    )

    native.new_http_archive(
        name = "raze__net2__0_2_33",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/net2/net2-0.2.33.crate",
        type = "tar.gz",
        strip_prefix = "net2-0.2.33",
        build_file = str(Label("//rust/cargo/remote:net2-0.2.33.BUILD")),
    )

    native.new_http_archive(
        name = "raze__nodrop__0_1_12",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/nodrop/nodrop-0.1.12.crate",
        type = "tar.gz",
        strip_prefix = "nodrop-0.1.12",
        build_file = str(Label("//rust/cargo/remote:nodrop-0.1.12.BUILD")),
    )

    native.new_http_archive(
        name = "raze__num_cpus__1_8_0",
        url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/num_cpus/num_cpus-1.8.0.crate",
type = "tar.gz",
strip_prefix = "num_cpus-1.8.0",
build_file = str(Label("//rust/cargo/remote:num_cpus-1.8.0.BUILD")),
)
native.new_http_archive(
name = "raze__owning_ref__0_3_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/owning_ref/owning_ref-0.3.3.crate",
type = "tar.gz",
strip_prefix = "owning_ref-0.3.3",
build_file = str(Label("//rust/cargo/remote:owning_ref-0.3.3.BUILD")),
)
native.new_http_archive(
name = "raze__parking_lot__0_6_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/parking_lot/parking_lot-0.6.4.crate",
type = "tar.gz",
strip_prefix = "parking_lot-0.6.4",
build_file = str(Label("//rust/cargo/remote:parking_lot-0.6.4.BUILD")),
)
native.new_http_archive(
name = "raze__parking_lot_core__0_3_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/parking_lot_core/parking_lot_core-0.3.1.crate",
type = "tar.gz",
strip_prefix = "parking_lot_core-0.3.1",
build_file = str(Label("//rust/cargo/remote:parking_lot_core-0.3.1.BUILD")),
)
native.new_http_archive(
name = "raze__protobuf__1_6_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/protobuf/protobuf-1.6.0.crate",
type = "tar.gz",
strip_prefix = "protobuf-1.6.0",
build_file = str(Label("//rust/cargo/remote:protobuf-1.6.0.BUILD")),
)
native.new_http_archive(
name = "raze__protobuf_codegen__1_6_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/protobuf-codegen/protobuf-codegen-1.6.0.crate",
type = "tar.gz",
strip_prefix = "protobuf-codegen-1.6.0",
build_file = str(Label("//rust/cargo/remote:protobuf-codegen-1.6.0.BUILD")),
)
native.new_http_archive(
name = "raze__rand__0_5_5",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rand/rand-0.5.5.crate",
type = "tar.gz",
strip_prefix = "rand-0.5.5",
build_file = str(Label("//rust/cargo/remote:rand-0.5.5.BUILD")),
)
native.new_http_archive(
name = "raze__rand_core__0_2_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rand_core/rand_core-0.2.1.crate",
type = "tar.gz",
strip_prefix = "rand_core-0.2.1",
build_file = str(Label("//rust/cargo/remote:rand_core-0.2.1.BUILD")),
)
native.new_http_archive(
name = "raze__rustc_version__0_2_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rustc_version/rustc_version-0.2.3.crate",
type = "tar.gz",
strip_prefix = "rustc_version-0.2.3",
build_file = str(Label("//rust/cargo/remote:rustc_version-0.2.3.BUILD")),
)
native.new_http_archive(
name = "raze__safemem__0_3_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/safemem/safemem-0.3.0.crate",
type = "tar.gz",
strip_prefix = "safemem-0.3.0",
build_file = str(Label("//rust/cargo/remote:safemem-0.3.0.BUILD")),
)
native.new_http_archive(
name = "raze__scoped_tls__0_1_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/scoped-tls/scoped-tls-0.1.2.crate",
type = "tar.gz",
strip_prefix = "scoped-tls-0.1.2",
build_file = str(Label("//rust/cargo/remote:scoped-tls-0.1.2.BUILD")),
)
native.new_http_archive(
name = "raze__scopeguard__0_3_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/scopeguard/scopeguard-0.3.3.crate",
type = "tar.gz",
strip_prefix = "scopeguard-0.3.3",
build_file = str(Label("//rust/cargo/remote:scopeguard-0.3.3.BUILD")),
)
native.new_http_archive(
name = "raze__semver__0_9_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/semver/semver-0.9.0.crate",
type = "tar.gz",
strip_prefix = "semver-0.9.0",
build_file = str(Label("//rust/cargo/remote:semver-0.9.0.BUILD")),
)
native.new_http_archive(
name = "raze__semver_parser__0_7_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/semver-parser/semver-parser-0.7.0.crate",
type = "tar.gz",
strip_prefix = "semver-parser-0.7.0",
build_file = str(Label("//rust/cargo/remote:semver-parser-0.7.0.BUILD")),
)
native.new_http_archive(
name = "raze__slab__0_3_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/slab/slab-0.3.0.crate",
type = "tar.gz",
strip_prefix = "slab-0.3.0",
build_file = str(Label("//rust/cargo/remote:slab-0.3.0.BUILD")),
)
native.new_http_archive(
name = "raze__slab__0_4_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/slab/slab-0.4.1.crate",
type = "tar.gz",
strip_prefix = "slab-0.4.1",
build_file = str(Label("//rust/cargo/remote:slab-0.4.1.BUILD")),
)
native.new_http_archive(
name = "raze__smallvec__0_6_5",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/smallvec/smallvec-0.6.5.crate",
type = "tar.gz",
strip_prefix = "smallvec-0.6.5",
build_file = str(Label("//rust/cargo/remote:smallvec-0.6.5.BUILD")),
)
native.new_http_archive(
name = "raze__stable_deref_trait__1_1_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/stable_deref_trait/stable_deref_trait-1.1.1.crate",
type = "tar.gz",
strip_prefix = "stable_deref_trait-1.1.1",
build_file = str(Label("//rust/cargo/remote:stable_deref_trait-1.1.1.BUILD")),
)
native.new_http_archive(
name = "raze__tls_api__0_1_20",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tls-api/tls-api-0.1.20.crate",
type = "tar.gz",
strip_prefix = "tls-api-0.1.20",
build_file = str(Label("//rust/cargo/remote:tls-api-0.1.20.BUILD")),
)
native.new_http_archive(
name = "raze__tls_api_stub__0_1_20",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tls-api-stub/tls-api-stub-0.1.20.crate",
type = "tar.gz",
strip_prefix = "tls-api-stub-0.1.20",
build_file = str(Label("//rust/cargo/remote:tls-api-stub-0.1.20.BUILD")),
)
native.new_http_archive(
name = "raze__tokio__0_1_8",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio/tokio-0.1.8.crate",
type = "tar.gz",
strip_prefix = "tokio-0.1.8",
build_file = str(Label("//rust/cargo/remote:tokio-0.1.8.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_codec__0_1_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-codec/tokio-codec-0.1.0.crate",
type = "tar.gz",
strip_prefix = "tokio-codec-0.1.0",
build_file = str(Label("//rust/cargo/remote:tokio-codec-0.1.0.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_core__0_1_17",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-core/tokio-core-0.1.17.crate",
type = "tar.gz",
strip_prefix = "tokio-core-0.1.17",
build_file = str(Label("//rust/cargo/remote:tokio-core-0.1.17.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_current_thread__0_1_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-current-thread/tokio-current-thread-0.1.1.crate",
type = "tar.gz",
strip_prefix = "tokio-current-thread-0.1.1",
build_file = str(Label("//rust/cargo/remote:tokio-current-thread-0.1.1.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_executor__0_1_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-executor/tokio-executor-0.1.4.crate",
type = "tar.gz",
strip_prefix = "tokio-executor-0.1.4",
build_file = str(Label("//rust/cargo/remote:tokio-executor-0.1.4.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_fs__0_1_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-fs/tokio-fs-0.1.3.crate",
type = "tar.gz",
strip_prefix = "tokio-fs-0.1.3",
build_file = str(Label("//rust/cargo/remote:tokio-fs-0.1.3.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_io__0_1_8",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-io/tokio-io-0.1.8.crate",
type = "tar.gz",
strip_prefix = "tokio-io-0.1.8",
build_file = str(Label("//rust/cargo/remote:tokio-io-0.1.8.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_reactor__0_1_5",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-reactor/tokio-reactor-0.1.5.crate",
type = "tar.gz",
strip_prefix = "tokio-reactor-0.1.5",
build_file = str(Label("//rust/cargo/remote:tokio-reactor-0.1.5.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_tcp__0_1_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-tcp/tokio-tcp-0.1.1.crate",
type = "tar.gz",
strip_prefix = "tokio-tcp-0.1.1",
build_file = str(Label("//rust/cargo/remote:tokio-tcp-0.1.1.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_threadpool__0_1_6",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-threadpool/tokio-threadpool-0.1.6.crate",
type = "tar.gz",
strip_prefix = "tokio-threadpool-0.1.6",
build_file = str(Label("//rust/cargo/remote:tokio-threadpool-0.1.6.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_timer__0_1_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-timer/tokio-timer-0.1.2.crate",
type = "tar.gz",
strip_prefix = "tokio-timer-0.1.2",
build_file = str(Label("//rust/cargo/remote:tokio-timer-0.1.2.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_timer__0_2_6",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-timer/tokio-timer-0.2.6.crate",
type = "tar.gz",
strip_prefix = "tokio-timer-0.2.6",
build_file = str(Label("//rust/cargo/remote:tokio-timer-0.2.6.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_tls_api__0_1_20",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-tls-api/tokio-tls-api-0.1.20.crate",
type = "tar.gz",
strip_prefix = "tokio-tls-api-0.1.20",
build_file = str(Label("//rust/cargo/remote:tokio-tls-api-0.1.20.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_udp__0_1_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-udp/tokio-udp-0.1.2.crate",
type = "tar.gz",
strip_prefix = "tokio-udp-0.1.2",
build_file = str(Label("//rust/cargo/remote:tokio-udp-0.1.2.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_uds__0_1_7",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-uds/tokio-uds-0.1.7.crate",
type = "tar.gz",
strip_prefix = "tokio-uds-0.1.7",
build_file = str(Label("//rust/cargo/remote:tokio-uds-0.1.7.BUILD")),
)
native.new_http_archive(
name = "raze__tokio_uds__0_2_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tokio-uds/tokio-uds-0.2.1.crate",
type = "tar.gz",
strip_prefix = "tokio-uds-0.2.1",
build_file = str(Label("//rust/cargo/remote:tokio-uds-0.2.1.BUILD")),
)
native.new_http_archive(
name = "raze__unix_socket__0_5_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/unix_socket/unix_socket-0.5.0.crate",
type = "tar.gz",
strip_prefix = "unix_socket-0.5.0",
build_file = str(Label("//rust/cargo/remote:unix_socket-0.5.0.BUILD")),
)
native.new_http_archive(
name = "raze__unreachable__1_0_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/unreachable/unreachable-1.0.0.crate",
type = "tar.gz",
strip_prefix = "unreachable-1.0.0",
build_file = str(Label("//rust/cargo/remote:unreachable-1.0.0.BUILD")),
)
native.new_http_archive(
name = "raze__version_check__0_1_5",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/version_check/version_check-0.1.5.crate",
type = "tar.gz",
strip_prefix = "version_check-0.1.5",
build_file = str(Label("//rust/cargo/remote:version_check-0.1.5.BUILD")),
)
native.new_http_archive(
name = "raze__void__1_0_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/void/void-1.0.2.crate",
type = "tar.gz",
strip_prefix = "void-1.0.2",
build_file = str(Label("//rust/cargo/remote:void-1.0.2.BUILD")),
)
native.new_http_archive(
name = "raze__winapi__0_2_8",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi/winapi-0.2.8.crate",
type = "tar.gz",
strip_prefix = "winapi-0.2.8",
build_file = str(Label("//rust/cargo/remote:winapi-0.2.8.BUILD")),
)
native.new_http_archive(
name = "raze__winapi__0_3_6",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi/winapi-0.3.6.crate",
type = "tar.gz",
strip_prefix = "winapi-0.3.6",
build_file = str(Label("//rust/cargo/remote:winapi-0.3.6.BUILD")),
)
native.new_http_archive(
name = "raze__winapi_build__0_1_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi-build/winapi-build-0.1.1.crate",
type = "tar.gz",
strip_prefix = "winapi-build-0.1.1",
build_file = str(Label("//rust/cargo/remote:winapi-build-0.1.1.BUILD")),
)
native.new_http_archive(
name = "raze__winapi_i686_pc_windows_gnu__0_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi-i686-pc-windows-gnu/winapi-i686-pc-windows-gnu-0.4.0.crate",
type = "tar.gz",
strip_prefix = "winapi-i686-pc-windows-gnu-0.4.0",
build_file = str(Label("//rust/cargo/remote:winapi-i686-pc-windows-gnu-0.4.0.BUILD")),
)
native.new_http_archive(
name = "raze__winapi_x86_64_pc_windows_gnu__0_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi-x86_64-pc-windows-gnu/winapi-x86_64-pc-windows-gnu-0.4.0.crate",
type = "tar.gz",
strip_prefix = "winapi-x86_64-pc-windows-gnu-0.4.0",
build_file = str(Label("//rust/cargo/remote:winapi-x86_64-pc-windows-gnu-0.4.0.BUILD")),
)
native.new_http_archive(
name = "raze__ws2_32_sys__0_2_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/ws2_32-sys/ws2_32-sys-0.2.1.crate",
type = "tar.gz",
strip_prefix = "ws2_32-sys-0.2.1",
build_file = str(Label("//rust/cargo/remote:ws2_32-sys-0.2.1.BUILD")),
)
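The stanzas above appear to be cargo-raze-generated repository definitions. Note that `native.new_http_archive` was deprecated and later removed in newer Bazel releases; migrating this file would mean loading `http_archive` from `@bazel_tools` instead. A minimal sketch of the equivalent definition for one crate, reusing the same name, URL, and prefix as the stanza above (assumption: the rest of the generated file is unchanged):

```python
# Sketch only: migrating one generated stanza off the removed native rule.
# Requires this load() at the top of the .bzl file:
#   load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# http_archive accepts the same url/type/strip_prefix/build_file attributes
# as new_http_archive, so the generated fields carry over directly.
http_archive(
    name = "raze__ws2_32_sys__0_2_1",
    url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/ws2_32-sys/ws2_32-sys-0.2.1.crate",
    type = "tar.gz",
    strip_prefix = "ws2_32-sys-0.2.1",
    build_file = str(Label("//rust/cargo/remote:ws2_32-sys-0.2.1.BUILD")),
)
```

Regenerating the file with a current cargo-raze version would normally produce this form automatically, so hand-editing each stanza should only be needed as a stopgap.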
"MIT"
] | null | null | null | app/app/calc.py | ofvera/recipe-app-api | 329bd1f84f13a0cf3c81389d7429112d153e39cc | [
"MIT"
] | null | null | null | app/app/calc.py | ofvera/recipe-app-api | 329bd1f84f13a0cf3c81389d7429112d153e39cc | [
"MIT"
] | null | null | null | def add(x, y):
return x + y
def substract(x, y):
return y - x
| 10.285714 | 20 | 0.527778 | 14 | 72 | 2.714286 | 0.428571 | 0.157895 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 72 | 6 | 21 | 12 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
dbaab160a5103b0971decb0a6481b789f5718547 | 117 | py | Python | models/__init__.py | james-simon/dnn_mode_connectivity | 59dcf7c1486a154c88d63283c2a68825ff524776 | [
"BSD-2-Clause"
] | 1 | 2020-09-06T09:42:24.000Z | 2020-09-06T09:42:24.000Z | models/__init__.py | james-simon/dnn_mode_connectivity | 59dcf7c1486a154c88d63283c2a68825ff524776 | [
"BSD-2-Clause"
] | null | null | null | models/__init__.py | james-simon/dnn_mode_connectivity | 59dcf7c1486a154c88d63283c2a68825ff524776 | [
"BSD-2-Clause"
] | null | null | null | from .convfc import *
from .vgg import *
from .preresnet import *
from .wide_resnet import *
from .onelayer import *
| 19.5 | 26 | 0.74359 | 16 | 117 | 5.375 | 0.5 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17094 | 117 | 5 | 27 | 23.4 | 0.886598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbd542fa28756be51e17cc64e14408e05df6bcbe | 7,669 | py | Python | src/openprocurement/tender/openua/tests/contract_blanks.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 10 | 2020-02-18T01:56:21.000Z | 2022-03-28T00:32:57.000Z | src/openprocurement/tender/openua/tests/contract_blanks.py | quintagroup/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 26 | 2018-07-16T09:30:44.000Z | 2021-02-02T17:51:30.000Z | src/openprocurement/tender/openua/tests/contract_blanks.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 15 | 2019-08-08T10:50:47.000Z | 2022-02-05T14:13:36.000Z | # -*- coding: utf-8 -*-
from datetime import timedelta
from openprocurement.api.utils import get_now
# TenderContractResourceTest
def create_tender_contract(self):
auth = self.app.authorization
self.app.authorization = ("Basic", ("token", ""))
response = self.app.post_json(
"/tenders/{}/contracts".format(self.tender_id),
{"data": {"title": "contract title", "description": "contract description", "awardID": self.award_id}},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
contract = response.json["data"]
self.assertIn("id", contract)
self.assertIn(contract["id"], response.headers["Location"])
self.set_status("unsuccessful")
response = self.app.post_json(
"/tenders/{}/contracts".format(self.tender_id),
{"data": {"title": "contract title", "description": "contract description", "awardID": self.award_id}},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"], "Can't add contract in current (unsuccessful) tender status"
)
self.app.authorization = auth
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"status": "active"}},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"], "Can't update contract in current (unsuccessful) tender status"
)
def patch_tender_contract_datesigned(self):
response = self.app.get("/tenders/{}/contracts".format(self.tender_id))
contract = response.json["data"][0]
self.set_status("complete", {"status": "active.awarded"})
tender = self.db.get(self.tender_id)
for i in tender.get("awards", []):
i["complaintPeriod"]["endDate"] = i["complaintPeriod"]["startDate"]
self.db.save(tender)
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"value": {"amountNet": contract["value"]["amount"] - 1}}},
)
self.assertEqual(response.status, "200 OK")
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"status": "active"}},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["data"]["status"], "active")
self.assertIn("dateSigned", response.json["data"].keys())
def patch_tender_contract(self):
response = self.app.get("/tenders/{}/contracts".format(self.tender_id))
contract = response.json["data"][0]
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"status": "active"}},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertIn("Can't sign contract before stand-still period end (", response.json["errors"][0]["description"])
self.set_status("complete", {"status": "active.awarded"})
tender = self.db.get(self.tender_id)
for i in tender.get("awards", []):
i["complaintPeriod"]["endDate"] = i["complaintPeriod"]["startDate"]
self.db.save(tender)
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"value": {"amountNet": contract["value"]["amount"] - 1}}},
)
self.assertEqual(response.status, "200 OK")
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"dateSigned": i["complaintPeriod"]["endDate"]}},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(
response.json["errors"],
[
{
"description": [
"Contract signature date should be after award complaint period end date ({})".format(
i["complaintPeriod"]["endDate"]
)
],
"location": "body",
"name": "dateSigned",
}
],
)
    one_hour_in_future = (get_now() + timedelta(hours=1)).isoformat()
    response = self.app.patch_json(
        "/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
        {"data": {"dateSigned": one_hour_in_future}},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(
response.json["errors"],
[
{
"description": ["Contract signature date can't be in the future"],
"location": "body",
"name": "dateSigned",
}
],
)
custom_signature_date = get_now().isoformat()
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"dateSigned": custom_signature_date}},
)
self.assertEqual(response.status, "200 OK")
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"status": "active"}},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["data"]["status"], "active")
response = self.app.patch_json(
"/tenders/{}/contracts/{}?acc_token={}".format(self.tender_id, contract["id"], self.tender_token),
{"data": {"status": "pending"}},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"], "Can't update contract in current (complete) tender status"
)
response = self.app.patch_json(
"/tenders/{}/contracts/some_id?acc_token={}".format(self.tender_id, self.tender_token),
{"data": {"status": "active"}},
status=404,
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "contract_id"}]
)
response = self.app.patch_json("/tenders/some_id/contracts/some_id", {"data": {"status": "active"}}, status=404)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "tender_id"}]
)
response = self.app.get("/tenders/{}/contracts/{}".format(self.tender_id, contract["id"]))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["data"]["status"], "active")
| 40.363158 | 116 | 0.632547 | 839 | 7,669 | 5.669845 | 0.131108 | 0.11667 | 0.178894 | 0.060542 | 0.838343 | 0.817532 | 0.792306 | 0.773176 | 0.769603 | 0.769603 | 0 | 0.012685 | 0.187899 | 7,669 | 189 | 117 | 40.57672 | 0.751124 | 0.006259 | 0 | 0.61875 | 0 | 0 | 0.286558 | 0.072723 | 0 | 0 | 0 | 0 | 0.25625 | 1 | 0.01875 | false | 0 | 0.0125 | 0 | 0.03125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
915816ae3b5d356d2e923b84296f7d23083aa056 | 201 | py | Python | src/core/role.py | kk-r/urunner | 2683d0165af7ddeeb9ed3796f00627de17cbc7e7 | [
"MIT"
] | null | null | null | src/core/role.py | kk-r/urunner | 2683d0165af7ddeeb9ed3796f00627de17cbc7e7 | [
"MIT"
] | null | null | null | src/core/role.py | kk-r/urunner | 2683d0165af7ddeeb9ed3796f00627de17cbc7e7 | [
"MIT"
] | null | null | null | import os
from enum import Enum
class ROLE(Enum):
ADMIN: str = os.getenv('ADMIN', 'ADMINISTRATOR')
BASIC: str = os.getenv('BASIC', 'BASIC')
MANAGER: str = os.getenv('MANAGER', 'MANAGER')
| 22.333333 | 52 | 0.651741 | 27 | 201 | 4.851852 | 0.444444 | 0.114504 | 0.251908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18408 | 201 | 8 | 53 | 25.125 | 0.79878 | 0 | 0 | 0 | 0 | 0 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
916f35248846161ab36f147f3d8e583d02cb34ba | 80 | py | Python | modelmaker/config/__init__.py | wangjm12138/modelmaker | aa42ce9d504cc13a636b0c9f4ac49b71538c7cda | [
"MIT"
] | null | null | null | modelmaker/config/__init__.py | wangjm12138/modelmaker | aa42ce9d504cc13a636b0c9f4ac49b71538c7cda | [
"MIT"
] | null | null | null | modelmaker/config/__init__.py | wangjm12138/modelmaker | aa42ce9d504cc13a636b0c9f4ac49b71538c7cda | [
"MIT"
] | null | null | null | from .config_exception import ConfigException
from .config import create_client
| 26.666667 | 45 | 0.875 | 10 | 80 | 6.8 | 0.7 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 2 | 46 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
918e669c8180c947f6aacfedfad89ced6227b32e | 53 | py | Python | dashboard/templatetags/__init__.py | PyFlux/PyFlux | 8abae10261e276bf4942aed8d54ef3b5498754ca | [
"Apache-2.0"
] | null | null | null | dashboard/templatetags/__init__.py | PyFlux/PyFlux | 8abae10261e276bf4942aed8d54ef3b5498754ca | [
"Apache-2.0"
] | 10 | 2020-03-24T17:09:56.000Z | 2021-12-13T20:00:15.000Z | dashboard/templatetags/__init__.py | PyFlux/PyFlux-Django-Html | 8abae10261e276bf4942aed8d54ef3b5498754ca | [
"Apache-2.0"
] | null | null | null | from .toolbar_tag import *
from .sidebar_tag import * | 26.5 | 26 | 0.792453 | 8 | 53 | 5 | 0.625 | 0.45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132075 | 53 | 2 | 27 | 26.5 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
918ee35dfedec05abf4ddfd93bcd41ace0be14bf | 191 | py | Python | asf_search/search/__init__.py | gitter-badger/Discovery-asf_search | 35d0e796a6c926b3188a4aed1685358b0e9a8142 | [
"BSD-3-Clause"
] | 57 | 2021-03-02T18:16:01.000Z | 2022-03-30T09:35:01.000Z | asf_search/search/__init__.py | gitter-badger/Discovery-asf_search | 35d0e796a6c926b3188a4aed1685358b0e9a8142 | [
"BSD-3-Clause"
] | 14 | 2021-05-18T15:32:57.000Z | 2022-03-07T23:22:20.000Z | asf_search/search/__init__.py | gitter-badger/Discovery-asf_search | 35d0e796a6c926b3188a4aed1685358b0e9a8142 | [
"BSD-3-Clause"
] | 16 | 2021-03-30T00:56:17.000Z | 2022-03-30T09:35:09.000Z | from .search import search
from .granule_search import granule_search
from .product_search import product_search
from .geo_search import geo_search
from .baseline_search import stack_from_id
| 31.833333 | 42 | 0.86911 | 29 | 191 | 5.413793 | 0.310345 | 0.382166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104712 | 191 | 5 | 43 | 38.2 | 0.918129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91cc95f33a1ffdb4edf310212e1ce2071638443c | 46 | py | Python | measurator/__init__.py | ahitrin-attic/measurator-proto | b7abaf5943826a909c31697ee307d95ad3f4f909 | [
"MIT"
] | null | null | null | measurator/__init__.py | ahitrin-attic/measurator-proto | b7abaf5943826a909c31697ee307d95ad3f4f909 | [
"MIT"
] | 1 | 2021-04-21T10:13:48.000Z | 2021-04-21T10:13:48.000Z | measurator/__init__.py | ahitrin/measurator | b7abaf5943826a909c31697ee307d95ad3f4f909 | [
"MIT"
] | null | null | null | from measurator.main import run_main, migrate
| 23 | 45 | 0.847826 | 7 | 46 | 5.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91e45f391431e8e3a3ad1eae4a65f3737506af0f | 6,551 | py | Python | src/projects/workers/grayscale_image.py | firewut/data-transform-pipelines-api | c62a7aa5fd57102fa67cf715dc78c3365b739925 | [
"MIT"
] | 2 | 2019-01-09T07:42:17.000Z | 2021-08-25T02:43:47.000Z | src/projects/workers/grayscale_image.py | firewut/data-transform-pipelines-api | c62a7aa5fd57102fa67cf715dc78c3365b739925 | [
"MIT"
] | null | null | null | src/projects/workers/grayscale_image.py | firewut/data-transform-pipelines-api | c62a7aa5fd57102fa67cf715dc78c3365b739925 | [
"MIT"
] | null | null | null | import base64
import io
import os
from django.conf import settings
from PIL import Image
from core.utils import random_uuid4
from projects.workers.base import Worker
from projects.workers.exceptions import WorkerNoInputException
class GrayscaleImage(Worker):
id = 'grayscale_image'
name = 'grayscale_image'
    image = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOEAAADhCAMAAAAJbSJIAAAAhFBMVEX///8AAACWlpb6+vr19fWoqKj4+PhFRUVpaWnn5+fKysqbm5vW1tbk5OTFxcWQkJDw8PC9vb1zc3M3NzeDg4PQ0NC1tbWioqLc3NwtLS2EhIRRUVGtra1kZGSzs7NdXV0kJCQcHBx7e3tCQkISEhIODg4xMTFVVVVxcXEZGRk8PDwhISGdx6ozAAAPE0lEQVR4nO1d52LyvA5uSdiZzLDXW7ru//4OlBZrOXHa2IHv8PyEEKTY1ihgsq+kmHW3gzi/nTalArofwaK1rCQeHIRK8pIYamkVqTfdtBPA99ybL4Df4gb57CdJNVfmi1uBWZpmafccL92sycDGTlkSbAZJUUJmkwJ21t7+vU3jN211yA7irTAhpLEorR4yp28rn3hFLcvnXKk8Dl/wC+8HZQKO0mmICfQdHzNZuKYz0ARC1pJaAiG+Fdkim7ZmPR2zxKGTedkG0oKkaXGvBoWjCjI9ypx4v+MQR10DZxtKCfEixDbPBKKMi84FpvzjdpPKgcKStAc3cFvcL5E3ARLh6UVeBQajZwAmhmUUaBhwjqQXIOmw4lQktmW2axXKeBam8EvUEs3ZjRIrkFbE4Rm8M+i3Wx+GK37+85meE6lK1BLoQXlgPbeQxIi607qGlRBYYbmRpphGr/u0OVVKfcFuKBQrwWylSifILkGiUSF8v5cH3IGUykmqFA/LC0UBBUfUJjj4a4KomDICJu2zj/oz+apUkPoKeBUQpwNXoR4jMj8Y7VFqVPzol7O8SAVKggD0iP8As1RFK1gP0GsKOsBW8jRZy9/m2WFGsLxgKktGgpkBnHpjPhBVlsULYzfKGqOqk5BqAgi6eFSQ/E2tjIkkqFGdCb1IWQGB2NVpyHseoL2ADl05CiQlSG0P02HJUIqMDqcpzIF4RBCcVAojuYoWj6E9qekqcBn6E+XsKQh/D9oEaCBRZMXL0I8B8kIvvMZ2jXYxV2thjA0iTWfI81x7Q2TboTsXXAnn7dd6ITZZtRMszCpkpCCSwJmNbCugIwhiqcxZUHE50tQbAb/RtvSsVHQF8IhhFMRURrIF+D6KOELWRdtou0fOjTG1mhEaLXhEMKwC2a9yFFs0a1IfwkLY3T7odpWT6+BcdkcfA6HA/k71FCLZhXJNWiiJBVPT9j1LbftQdMAhxAKAZ8wmohoeZJcg2a6PFT9enj29+qBf4N2AS42uJxQKP4Kb0TqGlRBcces2ek1fwPk2sFsQXkvvB6RoyjMwV6cTNGW4OP/6bcrVAnAzsDVBuM1KAgiJpAW2E8QI8OzRX35u2LAxQENJvh4C6+H3CCyP5iTIdILhE1Ovl8tgKuAPg+aEzhSyBVCGXF3A+mh4k5i46zrGQoGNQFDddRcjucociEkmeJNfAbHR1UFMCYv4GMYWkGbCNMGZEeRoSTJFItDVy7LMaCFEM4s0IQFqVPk8OA8Q1W5Z7zCWN9F4YbLKgElA54Jmh84hNBToJATnUaGg2eqYNttxQL8PYygoSb9OG5Mo+AsFxoprRZ4iOgUlRtQ7EEeq+RZwgZtiYS2AoWjWAUayDidoU+42gQ+LkhRz0DGBHI2uHpKmxOcb68AbALIKnQt9jpR0TihRUiaiHbOdzXB2PMaz0x0ZxhCwEWLDCzKNUgb2If7PQgglvpx65FZTxYcKZj0oRI3yTWIl3QCELFdSHnf9Pz7hXKGaCaiaYiziVrOIqWDIsTHWlxDa9ihjbIhbK/qGEH49L8soGnT5wXfsScMqg+auz+zBhVHANTteZKad31ecPwKTuAnMPvCleP3erapAUMQPvncxBziNLssrFaY9gUWKcNZFqSxyKaTenaIghjs+ORTH9Fm7WZeyuxQiBJieDHmfWs6GAokTo0nMoID+aEnNNqBSxeaGZwP19U+A9Zdhs8UWOknVUtrjlDlGI026yJyBSXCDtfD8o+rCjUxAcyykB2tx4xiKVDn2aaQIRKHEemxg9/Udm6gJoEQdiszSKE5HELEy9SzCL0sbcoBthlHy4/CgEOIzAw/TMo+8t4MZvrAmYowIUar2rmrz3JDM/lMEgk08wNfoZzJ9RxN83eLleFQ9NuEtX249rEuOEp7W+puyOUB+4uG0Gmn87jw7KlyOzpgaA19OuyzcEqsye3kECZ+AgKEniBGQJyGQ9pCOOmIQXs2lw6qIwP0o47+csffI3fH0RWHkvUuwIxff4nSQnfHJ4gHhp+3wA1WbVS5LLm3SlnmqzuE1Td3Z3Py3HWfhtfnm/QaKsso10avGIzrNIVheYUq5IMq+Mbz0aT5wyeVUtFj2uj6x6yCJOa6dzME3wNZKh1XnuE7o4QW21VOgVOITs7f9rZQViMoh3FZiNDOiJtkLQAnOgVHMX89jTIJq3J+Mfs3R9wMqpO1Cx1wUHb9XO99SZLAitj+TuDSgIS0yYtZk7MtLFECu1rhSy0c/Jt5mvInwGTbjA/yjqXm6TUV/GpzgCmVm5gbzlHTINj7NE30z1B27JzpgtT3o/CnlQDUlswrP+fHYnyxCmHORhoQI27iGZCpvZcIEYMSmbnKg32cVriZpMCylQo4G+YrUWl47MEy1Kz4pxUAPNJ58dUQM+MnAuOJvdwVYBGKc3opvhhhbOITg+ab/sxjNxth1f+VblR93RVcEBXsCXESkyrnVCQuR5YbnfoFbwBytQyVc/pFePGR0x75rd9nDnPnhoFSAvyCdl5rTUXv+dgff49wN2jKs9UJD6wqvL+pHHg6KncUk/nrpUKNwMlJMyokXU2y8pSQPM9aoo3kW+2csIj4IMJDXPKxNks9FFJuc2FoAr7Bdliq8TEot3hbaMOWg4qahh8t4ReTshMNRjf26zHaCtrQognQbUGxgbwj+i2m3lBFu2w3Z7g/O/v5fNC2vUZAOmqVKyW7OF/mvesDzfofp5lqz5SD+p3F4jauMrGX9YzbFm05SLnt5fj4HHXprIbIIoeiDGrZhM0ckFYXz2o4YWHCK/4KgPO2RQjDPR96rzSwFhkre2orvViYKHiaTbYyVNUsZOl1zKAqu8t1Sda2rij2yw7bBkqFNb2AQFWjrKwEwHE7KhvkiGCFblMMdHlqpirsfkQ4FF9bHirbrqvvGJSFbTxk0PVo4e6GUE7fQnSo+EM3lLMIFRZb8EjK3db4Wui1TQ1VxFbjq3iUu7Cwi+TaonQsvtYarM7S671XxddaA64KV4zrvWvbwvEEKRQLN7/e21m/lYBrnm8j0b4FZ6HOebWxVK43d9VwJUD5ZBvts1db6qjVQ4JahgUdZr+C8of1vdvsKoIVtk2VZmt7M6Ty91aYfbUVuzZjqhhTK3QiSIBt3N4AYIOCnZ2/Qme5WygFLZ0voJILN80QFKBwYamhBkxTK2/ZKQCYo9aeMKBL3e+Ah0cvWVslPQdPUQdYErIYcoD+BMdniaAtlxZLzXCmOFURVS2t7uiCG2R0tafqgc/OtbsBHx/X4GhvFenks9yYiE/Rmdl3/S16KqL1oJi2RsaBxUwjXLKNcQ5OZOMH8n+8xY3qEa8OwrGrTo6cMz3OygYcvQfI+Nj6yuHmBX9PmmOX7ePV4RER/J2gDuDmZOcfeM5nat7mTTsIit+YXCFWzlYgRE9/SkvFmNdzWtkJCXfIlWM7ssGMloAXLEerMi8ZNseuPehHtZ0C9cADDzzwwAMY/wNj1K8i7cB0qAAAAABJRU5ErkJggg=='
    description = 'Convert an image to grayscale'
schema = {
"type": "object",
"properties": {
"in": {
"type": [
"file",
"string"
],
"description": "object to make a template from"
},
"out": {
"type": "file",
"description": "output data"
}
}
}
    def process(self, data):
        # Image.open raises on failure rather than returning None, so the
        # missing-input check must happen before we try to open the data.
        if data is None:
            raise WorkerNoInputException(
                'File Object or Base64 String Input required'
            )
        image = Image.open(data).convert('LA')
        _file = self.request_file()
        image.save(_file.path, 'png')
        image.close()
        return _file
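The conversion step above can be exercised in isolation with a minimal sketch. This mirrors only the Pillow logic of `process` (the `Worker` base class and `request_file` are not available here, so a `BytesIO` buffer and the hypothetical helper name `to_grayscale_png` stand in for the real plumbing):

```python
import io

from PIL import Image


def to_grayscale_png(data):
    # Mirror of GrayscaleImage.process without the Worker plumbing:
    # 'LA' keeps luminance plus the alpha channel.
    if data is None:
        raise ValueError('File Object or Base64 String Input required')
    image = Image.open(data).convert('LA')
    out = io.BytesIO()
    image.save(out, 'png')
    image.close()
    out.seek(0)
    return out


# Round-trip a small solid-red image through the converter.
src = io.BytesIO()
Image.new('RGB', (4, 4), (255, 0, 0)).save(src, 'png')
src.seek(0)
result = Image.open(to_grayscale_png(src))
print(result.mode)
```

Saving an `'LA'` image as PNG produces a grayscale-plus-alpha file, so reopening the output yields mode `'LA'` rather than `'RGB'`.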
| 139.382979 | 5,452 | 0.89452 | 267 | 6,551 | 21.921348 | 0.820225 | 0.0041 | 0.006492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128322 | 0.063807 | 6,551 | 46 | 5,453 | 142.413043 | 0.826023 | 0 | 0 | 0 | 0 | 0.025641 | 0.862464 | 0.830102 | 0 | 1 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.205128 | 0 | 0.410256 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37e97c2f4b653a8b2374fd7484c769a6fe7ac797 | 18,073 | py | Python | omaha_server/omaha/tests/test_tasks.py | dentalwings/omaha-server | 3d8e18c8f4aac4eb16445c0f3160ed1fc2fc8de5 | [
"Apache-2.0"
] | 2 | 2019-06-13T20:47:18.000Z | 2022-03-31T03:14:54.000Z | omaha_server/omaha/tests/test_tasks.py | dentalwings/omaha-server | 3d8e18c8f4aac4eb16445c0f3160ed1fc2fc8de5 | [
"Apache-2.0"
] | 1 | 2020-02-26T20:03:27.000Z | 2020-02-26T20:03:27.000Z | omaha_server/omaha/tests/test_tasks.py | dentalwings/omaha-server | 3d8e18c8f4aac4eb16445c0f3160ed1fc2fc8de5 | [
"Apache-2.0"
] | null | null | null | # coding: utf8
"""
This software is licensed under the Apache 2 license, quoted below.
Copyright 2014 Crystalnix Limited
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
"""
import os
import uuid
from django.test import TestCase
from mock import patch
from freezegun import freeze_time
from crash.models import Crash, Symbols
from crash.factories import CrashFactory, SymbolsFactory
from feedback.models import Feedback
from feedback.factories import FeedbackFactory
from omaha.dynamic_preferences_registry import global_preferences_manager as gpm
from omaha_server.utils import is_private, storage_with_spaces_instance
from omaha.models import Version
from omaha.factories import VersionFactory
from omaha.tasks import (
auto_delete_duplicate_crashes,
auto_delete_older_than,
auto_delete_size_is_exceeded,
deferred_manual_cleanup,
auto_delete_dangling_files
)
from omaha_server.utils import add_extra_to_log_message
from sparkle.models import SparkleVersion
from sparkle.factories import SparkleVersionFactory
class DuplicatedCrashesTest(TestCase):
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_crashes(self, mocked_get_logger):
gpm['Crash__duplicate_number'] = 2
crashes = CrashFactory.create_batch(10, signature='test')
deleted_crash = crashes[7]
self.assertEqual(Crash.objects.all().count(), 10)
extra_meta = dict(count=8, reason='duplicated', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Crash', size='0 bytes')
log_extra_msg = add_extra_to_log_message('Automatic cleanup', extra=extra_meta)
extra = dict(Crash_id=deleted_crash.id, element_created=deleted_crash.created.strftime("%d. %B %Y %I:%M%p"),
signature=deleted_crash.signature, userid=deleted_crash.userid, appid=deleted_crash.appid,
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Automatic cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
auto_delete_duplicate_crashes()
self.assertEqual(mocked_logger.info.call_count, 10)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
class OldObjectsTest(TestCase):
@patch('logging.getLogger')
@is_private()
def test_crashes(self, mocked_get_logger):
gpm['Crash__limit_storage_days'] = 2
with freeze_time("2012-12-21 12:00:00"):
crashes = CrashFactory.create_batch(10, signature='test')
deleted_crash = crashes[-1]
self.assertEqual(Crash.objects.all().count(), 10)
extra_meta = dict(count=10, reason='old', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Crash', size='0 bytes')
log_extra_msg = add_extra_to_log_message('Automatic cleanup', extra=extra_meta)
extra = dict(Crash_id=deleted_crash.id, element_created=deleted_crash.created.strftime("%d. %B %Y %I:%M%p"),
signature=deleted_crash.signature, userid=deleted_crash.userid, appid=deleted_crash.appid,
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Automatic cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
auto_delete_older_than()
self.assertEqual(mocked_logger.info.call_count, 11)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@patch('logging.getLogger')
@is_private()
def test_feedbacks(self, mocked_get_logger):
gpm['Feedback__limit_storage_days'] = 2
with freeze_time("2012-12-21 12:00:00"):
feedbacks = FeedbackFactory.create_batch(10)
deleted_feedback = feedbacks[-1]
self.assertEqual(Feedback.objects.all().count(), 10)
extra_meta = dict(count=10, reason='old', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Feedback', size='0 bytes')
log_extra_msg = add_extra_to_log_message('Automatic cleanup', extra=extra_meta)
extra = dict(Feedback_id=deleted_feedback.id, element_created=deleted_feedback.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Automatic cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
auto_delete_older_than()
self.assertEqual(mocked_logger.info.call_count, 11)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
class SizeExceedTest(TestCase):
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_crashes(self, mocked_get_logger):
gpm['Crash__limit_size'] = 1
crash_size = 10*1024*1023
crashes = CrashFactory.create_batch(200, archive_size=crash_size, minidump_size=0)
deleted_crash = crashes[97]
self.assertEqual(Crash.objects.all().count(), 200)
extra_meta = dict(count=98, reason='size_is_exceeded', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Crash', size='979.0 MB')
log_extra_msg = add_extra_to_log_message('Automatic cleanup', extra=extra_meta)
extra = dict(Crash_id=deleted_crash.id, element_created=deleted_crash.created.strftime("%d. %B %Y %I:%M%p"),
signature=deleted_crash.signature, userid=deleted_crash.userid, appid=deleted_crash.appid,
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Automatic cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
auto_delete_size_is_exceeded()
self.assertEqual(mocked_logger.info.call_count, 99)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_feedbacks(self, mocked_get_logger):
gpm['Feedback__limit_size'] = 1
feedback_size = 10*1024*1023
feedbacks = FeedbackFactory.create_batch(200, screenshot_size=feedback_size, system_logs_size=0, attached_file_size=0,
blackbox_size=0)
deleted_feedback = feedbacks[97]
self.assertEqual(Feedback.objects.all().count(), 200)
extra_meta = dict(count=98, reason='size_is_exceeded', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Feedback', size='979.0 MB')
log_extra_msg = add_extra_to_log_message('Automatic cleanup', extra=extra_meta)
extra = dict(Feedback_id=deleted_feedback.id, element_created=deleted_feedback.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Automatic cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
auto_delete_size_is_exceeded()
self.assertEqual(mocked_logger.info.call_count, 99)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
class ManualCleanupTest(TestCase):
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_crashes(self, mocked_get_logger):
gpm['Crash__duplicate_number'] = 2
crashes = CrashFactory.create_batch(10, signature='test')
deleted_crash = crashes[7]
self.assertEqual(Crash.objects.count(), 10)
extra_meta = dict(count=8, reason='manual', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Crash', limit_duplicated=2, limit_size=None, limit_days=None, size='0 bytes')
log_extra_msg = add_extra_to_log_message('Manual cleanup', extra=extra_meta)
extra = dict(Crash_id=deleted_crash.id, element_created=deleted_crash.created.strftime("%d. %B %Y %I:%M%p"),
signature=deleted_crash.signature, userid=deleted_crash.userid, appid=deleted_crash.appid,
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Manual cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
deferred_manual_cleanup(['crash', 'Crash'], limit_duplicated=2)
self.assertEqual(mocked_logger.info.call_count, 10)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_feedbacks(self, mocked_get_logger):
gpm['Feedback__limit_size'] = 1
feedback_size = 100*1024*1023
feedbacks = FeedbackFactory.create_batch(20, screenshot_size=feedback_size, system_logs_size=0, attached_file_size=0,
blackbox_size=0)
deleted_feedback = feedbacks[7]
self.assertEqual(Feedback.objects.count(), 20)
extra_meta = dict(count=10, reason='manual', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Feedback', limit_duplicated=None, limit_size=1, limit_days=None, size='999.0 MB')
log_extra_msg = add_extra_to_log_message('Manual cleanup', extra=extra_meta)
extra = dict(Feedback_id=deleted_feedback.id, element_created=deleted_feedback.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Manual cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
deferred_manual_cleanup(['feedback', 'Feedback'], limit_size=1)
self.assertEqual(mocked_logger.info.call_count, 11)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_symbols(self, mocked_get_logger):
storage_with_spaces_instance._setup()
gpm['Feedback__limit_size'] = 1
symbols_size = 100*1024*1023
symbols = SymbolsFactory.create_batch(20, file_size=symbols_size)
deleted_symbols = symbols[7]
self.assertEqual(Symbols.objects.count(), 20)
extra_meta = dict(count=10, reason='manual', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Symbols', limit_duplicated=None, limit_size=1, limit_days=None, size='999.0 MB')
log_extra_msg = add_extra_to_log_message('Manual cleanup', extra=extra_meta)
extra = dict(Symbols_id=deleted_symbols.id, element_created=deleted_symbols.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Manual cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
deferred_manual_cleanup(['crash', 'Symbols'], limit_size=1)
self.assertEqual(mocked_logger.info.call_count, 11)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_omaha_versions(self, mocked_get_logger):
gpm['Version__limit_size'] = 1
version_size = 1000*1024*1023
versions = VersionFactory.create_batch(2, file_size=version_size)
deleted_version = versions[0]
self.assertEqual(Version.objects.count(), 2)
extra_meta = dict(count=1, reason='manual', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='Version', limit_duplicated=None, limit_size=1, limit_days=None, size='999.0 MB')
log_extra_msg = add_extra_to_log_message('Manual cleanup', extra=extra_meta)
extra = dict(Version_id=deleted_version.id, element_created=deleted_version.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Manual cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
deferred_manual_cleanup(['omaha', 'Version'], limit_size=1)
self.assertEqual(mocked_logger.info.call_count, 2)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
@freeze_time("2012-12-21 12:00:00")
@patch('logging.getLogger')
@is_private()
def test_sparkle_versions(self, mocked_get_logger):
gpm['SparkleVersion__limit_size'] = 1
version_size = 1000*1024*1023
versions = SparkleVersionFactory.create_batch(2, file_size=version_size)
deleted_version = versions[0]
self.assertEqual(SparkleVersion.objects.count(), 2)
extra_meta = dict(count=1, reason='manual', meta=True, log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00',
model='SparkleVersion', limit_duplicated=None, limit_size=1, limit_days=None, size='999.0 MB')
log_extra_msg = add_extra_to_log_message('Manual cleanup', extra=extra_meta)
extra = dict(SparkleVersion_id=deleted_version.id, element_created=deleted_version.created.strftime("%d. %B %Y %I:%M%p"),
log_id='36446dc3-ae7c-42ad-ae4e-6a826dcf0a00')
log_msg = add_extra_to_log_message('Manual cleanup element', extra=extra)
mocked_logger = mocked_get_logger.return_value
with patch('uuid.uuid4') as mocked_uuid4:
mocked_uuid4.side_effect = (uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x) for x in range(100))
deferred_manual_cleanup(['sparkle', 'SparkleVersion'], limit_size=1)
self.assertEqual(mocked_logger.info.call_count, 2)
mocked_logger.info.assert_any_call(log_extra_msg)
mocked_logger.info.assert_any_call(log_msg)
class DeleteDanglingTest(TestCase):
@patch('omaha.limitation.raven.captureMessage')
@patch('logging.getLogger')
@patch('omaha.tasks.handle_dangling_files')
def test_dangling_delete_db(self, mock_obj, mocked_get_logger, mocked_raven):
mocked_logger = mocked_get_logger.return_value
mock_obj.return_value = {
'mark': 'db',
'status': 'Send notifications',
'data': [],
'count': 0,
'cleaned_space': 0
}
auto_delete_dangling_files()
self.assertEqual(mocked_logger.info.call_count, 5)
self.assertEqual(mocked_raven.call_count, 5)
log_msg = 'Dangling files detected in db [%d], files path: %s' % (
mock_obj.return_value['count'], mock_obj.return_value['data']
)
mocked_logger.info.assert_any_call(log_msg)
@patch('omaha.limitation.raven.captureMessage')
@patch('logging.getLogger')
@patch('omaha.tasks.handle_dangling_files')
def test_dangling_delete_s3(self, mock_obj, mocked_get_logger, mocked_get_raven):
mocked_logger = mocked_get_logger.return_value
file_path = os.path.abspath('crash/tests/testdata/7b05e196-7e23-416b-bd13-99287924e214.dmp')
mock_obj.return_value = {
'mark': 's3',
'status': 'Delete files',
'data': ['minidump_archive%s' % file_path],
'count': 1,
'cleaned_space': 100
}
auto_delete_dangling_files()
self.assertEqual(mocked_logger.info.call_count, 5)
self.assertEqual(mocked_get_raven.call_count, 5)
log_msg = 'Dangling files deleted from s3 [%d], files path: %s' % (
mock_obj.return_value['count'], mock_obj.return_value['data']
)
mocked_logger.info.assert_any_call(log_msg)
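The tests above repeatedly patch `uuid.uuid4` with a generator `side_effect` so that every `log_id` is deterministic and can be asserted against. A minimal standalone sketch of that pattern (the helper `make_log_ids` is illustrative only, and `unittest.mock` is assumed here; the suite itself may import `patch` from the standalone `mock` package):

```python
# Deterministic-UUID sketch: patch uuid.uuid4 with a generator so every
# "generated" UUID is predictable, mirroring the log_id assertions above.
import uuid
from unittest.mock import patch

def make_log_ids(n):
    """Return the first n UUIDs produced under a patched uuid.uuid4."""
    with patch('uuid.uuid4') as mocked_uuid4:
        # Same generator shape as the tests above: a fixed prefix plus a
        # two-digit decimal counter, yielding up to 100 ordered UUIDs.
        mocked_uuid4.side_effect = (
            uuid.UUID('36446dc3-ae7c-42ad-ae4e-6a826dcf0a%02d' % x)
            for x in range(100))
        return [uuid.uuid4() for _ in range(n)]
```

Because the mock is installed on the `uuid` module itself, any code that calls `uuid.uuid4()` inside the `with` block consumes the generator in order.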
# ===========================================================================
# tests/oldtests/annotation_hg18_megatest.py (repo: ctb/pygr, BSD-3-Clause)
# ===========================================================================
import ConfigParser, sys, os, string
from pygr.mapping import Collection
import pygr.Data
try:
import hashlib
except ImportError:
import md5 as hashlib
config = ConfigParser.ConfigParser({'testOutputBaseDir' : '.', 'smallSampleKey': ''})
config.read([ os.path.join(os.path.expanduser('~'), '.pygrrc'), os.path.join(os.path.expanduser('~'), 'pygr.cfg'), '.pygrrc', 'pygr.cfg' ])
msaDir = config.get('megatests_hg18', 'msaDir')
seqDir = config.get('megatests_hg18', 'seqDir')
smallSampleKey = config.get('megatests_hg18', 'smallSampleKey')
testInputDB = config.get('megatests', 'testInputDB')
testInputDir = config.get('megatests', 'testInputDir')
testOutputBaseDir = config.get('megatests', 'testOutputBaseDir')
if smallSampleKey:
smallSamplePostfix = '_' + smallSampleKey
else:
smallSamplePostfix = ''
## msaDir CONTAINS PRE-BUILT NLMSA
## seqDir CONTAINS GENOME ASSEMBLIES AND THEIR SEQDB FILES
## TEST INPUT/OUPTUT FOR COMPARISON, THESE FILES SHOULD BE IN THIS DIRECTORY
## exonAnnotFileName = 'Annotation_ConservedElement_Exons_hg18.txt'
## intronAnnotFileName = 'Annotation_ConservedElement_Introns_hg18.txt'
## stopAnnotFileName = 'Annotation_ConservedElement_Stop_hg18.txt'
## testDir = os.path.join(testOutputBaseDir, 'TEST_' + ''.join(tmpList)) SHOULD BE DELETED IF YOU WANT TO RUN IN '.'
# DICTIONARY OF DOC STRINGS FOR THE SEQDBs
docStringDict = {
'anoCar1':' Lizard Genome (January 2007)',
'bosTau3':'Cow Genome (August 2006)',
'canFam2':'Dog Genome (May 2005)',
'cavPor2':'Guinea Pig (October 2005)',
'danRer4':'Zebrafish Genome (March 2006)',
'dasNov1':'Armadillo Genome (May 2005)',
'echTel1':'Tenrec Genome (July 2005)',
'eriEur1':'European Hedgehog (January 2006)',
'equCab1':'Horse Genome (January 2007)',
'felCat3':'Cat Genome (March 2006)',
'fr2':'Fugu Genome (October 2004)',
'galGal3':'Chicken Genome (May 2006)',
'gasAcu1':'Stickleback Genome (February 2006)',
'hg18':'Human Genome (May 2006)',
'loxAfr1':'Elephant Genome (May 2005)',
'mm8':'Mouse Genome (March 2006)',
'monDom4':'Opossum Genome (January 2006)',
'ornAna1':'Platypus Genome (March 2007)',
'oryCun1':'Rabbit Genome (May 2005)',
'oryLat1':'Medaka Genome (April 2006)',
'otoGar1':'Bushbaby Genome (December 2006)',
'panTro2':'Chimpanzee Genome (March 2006)',
'rheMac2':'Rhesus Genome (January 2006)',
'rn4':'Rat Genome (November 2004)',
'sorAra1':'Shrew (January 2006)',
'tetNig1':'Tetraodon Genome (February 2004)',
'tupBel1':'Tree Shrew (December 2006)',
'xenTro2':'X. tropicalis Genome (August 2005)'
}
# GENOME ASSEMBLY LIST FOR HG18 MULTIZ28WAY
msaSpeciesList = ['anoCar1', 'bosTau3', 'canFam2', 'cavPor2', 'danRer4', 'dasNov1', 'echTel1', \
'equCab1', 'eriEur1', 'felCat3', 'fr2', 'galGal3', 'gasAcu1', 'hg18', 'loxAfr1', \
'mm8', 'monDom4', 'ornAna1', 'oryCun1', 'oryLat1', 'otoGar1', 'panTro2', 'rheMac2', \
'rn4', 'sorAra1', 'tetNig1', 'tupBel1', 'xenTro2']
class PygrBuildNLMSAMegabase(object):
    'restrict the megatest to an initially empty directory; a large amount of disk space is needed to run it'
def __init__(self, testDir = None):
import random
tmpList = [c for c in 'PygrBuildNLMSAMegabase']
random.shuffle(tmpList)
        if testDir is None: # NOT SPECIFIED, CREATE A RANDOM TEST DIRECTORY UNDER testOutputBaseDir
            testDir = os.path.join(testOutputBaseDir, 'TEST_' + ''.join(tmpList))
try:
os.mkdir(testDir)
testDir = os.path.realpath(testDir)
except:
raise IOError
self.path = testDir
try:
tmpFileName = os.path.join(testDir, 'DELETE_THIS_TEMP_FILE')
open(tmpFileName, 'w').write('A'*1024*1024) # WRITE 1MB FILE FOR TESTING
except:
raise IOError
pygr.Data.update(self.path)
from pygr import seqdb
for orgstr in msaSpeciesList:
genome = seqdb.BlastDB(os.path.join(seqDir, orgstr))
genome.__doc__ = docStringDict[orgstr]
pygr.Data.addResource('TEST.Seq.Genome.' + orgstr, genome)
pygr.Data.save()
def copyFile(self, filename): # COPY A FILE INTO TEST DIRECTORY
newname = os.path.join(self.path, os.path.basename(filename))
open(newname, 'w').write(open(filename, 'r').read())
return newname
def teardown(self):
'delete the temporary directory and files'
for dirpath, subdirs, files in os.walk(self.path, topdown = False): # SHOULD BE DELETED BOTTOM-UP FASHION
# THIS PART MAY NOT WORK IN NFS MOUNTED DIRECTORY DUE TO .nfsXXXXXXXXX CREATION
# IN NFS MOUNTED DIRECTORY, IT CANNOT BE DELETED UNTIL CLOSING PYGRDATA
for filename in files:
os.remove(os.path.join(dirpath, filename))
os.rmdir(dirpath)
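The Build_Test class below verifies its regenerated annotation files by comparing whole-file MD5 digests instead of diffing contents line by line. A stdlib-only sketch of that check (the helper name `same_md5` is hypothetical, not part of pygr):

```python
# Whole-file MD5 comparison: hash both files and compare digests, as the
# megatest does in place of a line-by-line comparison of large outputs.
import hashlib

def same_md5(path_a, path_b):
    """True when both files hash to identical MD5 digests."""
    md5a = hashlib.md5()
    md5a.update(open(path_a, 'rb').read())
    md5b = hashlib.md5()
    md5b.update(open(path_b, 'rb').read())
    return md5a.digest() == md5b.digest()
```

Reading each file in one `read()` call matches the test's approach; for very large outputs, hashing in fixed-size chunks would bound memory use.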
class Build_Test(PygrBuildNLMSAMegabase):
def seqdb_test(self): # CHECK PYGR.DATA CONTENTS
l = pygr.Data.dir('TEST')
preList = ['TEST.Seq.Genome.' + orgstr for orgstr in msaSpeciesList]
assert l == preList
def collectionannot_test(self): # BUILD ANNOTATION DB FROM FILE
from pygr import seqdb, cnestedlist, sqlgraph
hg18 = pygr.Data.getResource('TEST.Seq.Genome.hg18')
# BUILD ANNOTATION DATABASE FOR REFSEQ EXONS
exon_slices = Collection(filename = os.path.join(self.path, 'refGene_exonAnnot_hg18.cdb'), \
intKeys = True, mode = 'c', writeback = False) # ONLY C
exon_db = seqdb.AnnotationDB(exon_slices, hg18,
sliceAttrDict = dict(id = 0, exon_id = 1, orientation = 2,
gene_id = 3, start = 4, stop = 5))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_exonAnnot_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for lines in open(os.path.join(testInputDir, 'refGene_exonAnnot%s_hg18.txt' % smallSamplePostfix), 'r').xreadlines():
row = [x for x in lines.split('\t')] # CONVERT TO LIST SO MUTABLE
row[1] = int(row[1]) # CONVERT FROM STRING TO INTEGER
exon_slices[row[1]] = row
exon = exon_db[row[1]] # GET THE ANNOTATION OBJECT FOR THIS EXON
msa.addAnnotation(exon) # SAVE IT TO GENOME MAPPING
exon_db.clear_cache() # not really necessary; cache should autoGC
exon_slices.close() # SHELVE SHOULD BE EXPLICITLY CLOSED IN ORDER TO SAVE CURRENT CONTENTS
msa.build() # FINALIZE GENOME ALIGNMENT INDEXES
exon_db.__doc__ = 'Exon Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.hg18.exons', exon_db)
msa.__doc__ = 'NLMSA Exon for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.hg18.exons', msa)
exon_schema = pygr.Data.ManyToManyRelation(hg18, exon_db, bindAttrs = ('exon1',))
exon_schema.__doc__ = 'Exon Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.hg18.exons', exon_schema)
# BUILD ANNOTATION DATABASE FOR REFSEQ SPLICES
splice_slices = Collection(filename = os.path.join(self.path, 'refGene_spliceAnnot_hg18.cdb'), \
intKeys = True, mode = 'c', writeback = False) # ONLY C
splice_db = seqdb.AnnotationDB(splice_slices, hg18,
sliceAttrDict = dict(id = 0, splice_id = 1, orientation = 2,
gene_id = 3, start = 4, stop = 5))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_spliceAnnot_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for lines in open(os.path.join(testInputDir, 'refGene_spliceAnnot%s_hg18.txt' % smallSamplePostfix), 'r').xreadlines():
row = [x for x in lines.split('\t')] # CONVERT TO LIST SO MUTABLE
row[1] = int(row[1]) # CONVERT FROM STRING TO INTEGER
splice_slices[row[1]] = row
            splice = splice_db[row[1]] # GET THE ANNOTATION OBJECT FOR THIS SPLICE
msa.addAnnotation(splice) # SAVE IT TO GENOME MAPPING
splice_db.clear_cache() # not really necessary; cache should autoGC
splice_slices.close() # SHELVE SHOULD BE EXPLICITLY CLOSED IN ORDER TO SAVE CURRENT CONTENTS
msa.build() # FINALIZE GENOME ALIGNMENT INDEXES
splice_db.__doc__ = 'Splice Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.hg18.splices', splice_db)
msa.__doc__ = 'NLMSA Splice for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.hg18.splices', msa)
splice_schema = pygr.Data.ManyToManyRelation(hg18, splice_db, bindAttrs = ('splice1',))
splice_schema.__doc__ = 'Splice Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.hg18.splices', splice_schema)
        # BUILD ANNOTATION DATABASE FOR REFSEQ CDS REGIONS
cds_slices = Collection(filename = os.path.join(self.path, 'refGene_cdsAnnot_hg18.cdb'), \
intKeys = True, mode = 'c', writeback = False) # ONLY C
cds_db = seqdb.AnnotationDB(cds_slices, hg18,
sliceAttrDict = dict(id = 0, cds_id = 1, orientation = 2,
gene_id = 3, start = 4, stop = 5))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_cdsAnnot_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for lines in open(os.path.join(testInputDir, 'refGene_cdsAnnot%s_hg18.txt' % smallSamplePostfix), 'r').xreadlines():
row = [x for x in lines.split('\t')] # CONVERT TO LIST SO MUTABLE
row[1] = int(row[1]) # CONVERT FROM STRING TO INTEGER
cds_slices[row[1]] = row
            cds = cds_db[row[1]] # GET THE ANNOTATION OBJECT FOR THIS CDS REGION
msa.addAnnotation(cds) # SAVE IT TO GENOME MAPPING
cds_db.clear_cache() # not really necessary; cache should autoGC
cds_slices.close() # SHELVE SHOULD BE EXPLICITLY CLOSED IN ORDER TO SAVE CURRENT CONTENTS
msa.build() # FINALIZE GENOME ALIGNMENT INDEXES
cds_db.__doc__ = 'CDS Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.hg18.cdss', cds_db)
msa.__doc__ = 'NLMSA CDS for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.hg18.cdss', msa)
cds_schema = pygr.Data.ManyToManyRelation(hg18, cds_db, bindAttrs = ('cds1',))
cds_schema.__doc__ = 'CDS Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.hg18.cdss', cds_schema)
# BUILD ANNOTATION DATABASE FOR MOST CONSERVED ELEMENTS FROM UCSC
ucsc_slices = Collection(filename = os.path.join(self.path, 'phastConsElements28way_hg18.cdb'), \
intKeys = True, mode = 'c', writeback = False) # ONLY C
ucsc_db = seqdb.AnnotationDB(ucsc_slices, hg18,
sliceAttrDict = dict(id = 0, ucsc_id = 1, orientation = 2,
gene_id = 3, start = 4, stop = 5))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'phastConsElements28way_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for lines in open(os.path.join(testInputDir, 'phastConsElements28way%s_hg18.txt' % smallSamplePostfix), 'r').xreadlines():
row = [x for x in lines.split('\t')] # CONVERT TO LIST SO MUTABLE
row[1] = int(row[1]) # CONVERT FROM STRING TO INTEGER
ucsc_slices[row[1]] = row
            ucsc = ucsc_db[row[1]] # GET THE ANNOTATION OBJECT FOR THIS CONSERVED ELEMENT
msa.addAnnotation(ucsc) # SAVE IT TO GENOME MAPPING
ucsc_db.clear_cache() # not really necessary; cache should autoGC
ucsc_slices.close() # SHELVE SHOULD BE EXPLICITLY CLOSED IN ORDER TO SAVE CURRENT CONTENTS
msa.build() # FINALIZE GENOME ALIGNMENT INDEXES
ucsc_db.__doc__ = 'Most Conserved Elements for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.hg18.mostconserved', ucsc_db)
msa.__doc__ = 'NLMSA for Most Conserved Elements for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.NLMSA.hg18.mostconserved', msa)
ucsc_schema = pygr.Data.ManyToManyRelation(hg18, ucsc_db, bindAttrs = ('element1',))
ucsc_schema.__doc__ = 'Schema for UCSC Most Conserved Elements for hg18'
pygr.Data.addSchema('TEST.Annotation.UCSC.NLMSA.hg18.mostconserved', ucsc_schema)
# BUILD ANNOTATION DATABASE FOR SNP126 FROM UCSC
snp_slices = Collection(filename = os.path.join(self.path, 'snp126_hg18.cdb'), \
intKeys = True, protocol = 2, mode = 'c', writeback = False) # ONLY C
snp_db = seqdb.AnnotationDB(snp_slices, hg18,
sliceAttrDict = dict(id = 0, snp_id = 1, orientation = 2, gene_id = 3, start = 4,
stop = 5, score = 6, ref_NCBI = 7, ref_UCSC = 8, observed = 9,
molType = 10, myClass = 11, myValid = 12, avHet = 13, avHetSE = 14,
myFunc = 15, locType = 16, myWeight = 17))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'snp126_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for lines in open(os.path.join(testInputDir, 'snp126%s_hg18.txt' % smallSamplePostfix), 'r').xreadlines():
row = [x for x in lines.split('\t')] # CONVERT TO LIST SO MUTABLE
row[1] = int(row[1]) # CONVERT FROM STRING TO INTEGER
snp_slices[row[1]] = row
            snp = snp_db[row[1]] # GET THE ANNOTATION OBJECT FOR THIS SNP
msa.addAnnotation(snp) # SAVE IT TO GENOME MAPPING
snp_db.clear_cache() # not really necessary; cache should autoGC
snp_slices.close() # SHELVE SHOULD BE EXPLICITLY CLOSED IN ORDER TO SAVE CURRENT CONTENTS
msa.build() # FINALIZE GENOME ALIGNMENT INDEXES
snp_db.__doc__ = 'SNP126 for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.hg18.snp126', snp_db)
msa.__doc__ = 'NLMSA for SNP126 for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.NLMSA.hg18.snp126', msa)
snp_schema = pygr.Data.ManyToManyRelation(hg18, snp_db, bindAttrs = ('snp1',))
snp_schema.__doc__ = 'Schema for UCSC SNP126 for hg18'
pygr.Data.addSchema('TEST.Annotation.UCSC.NLMSA.hg18.snp126', snp_schema)
pygr.Data.save()
pygr.Data.clear_cache()
# QUERY TO EXON AND SPLICES ANNOTATION DATABASE
hg18 = pygr.Data.getResource('TEST.Seq.Genome.hg18')
exonmsa = pygr.Data.getResource('TEST.Annotation.NLMSA.hg18.exons')
splicemsa = pygr.Data.getResource('TEST.Annotation.NLMSA.hg18.splices')
conservedmsa = pygr.Data.getResource('TEST.Annotation.UCSC.NLMSA.hg18.mostconserved')
snpmsa = pygr.Data.getResource('TEST.Annotation.UCSC.NLMSA.hg18.snp126')
cdsmsa = pygr.Data.getResource('TEST.Annotation.NLMSA.hg18.cdss')
exons = pygr.Data.getResource('TEST.Annotation.hg18.exons')
splices = pygr.Data.getResource('TEST.Annotation.hg18.splices')
mostconserved = pygr.Data.getResource('TEST.Annotation.UCSC.hg18.mostconserved')
snp126 = pygr.Data.getResource('TEST.Annotation.UCSC.hg18.snp126')
cdss = pygr.Data.getResource('TEST.Annotation.hg18.cdss')
# OPEN hg18_MULTIZ28WAY NLMSA
msa = cnestedlist.NLMSA(os.path.join(msaDir, 'hg18_multiz28way'), 'r', trypath = [seqDir])
exonAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Exons%s_hg18.txt' % smallSamplePostfix)
intronAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Introns%s_hg18.txt' % smallSamplePostfix)
stopAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Stop%s_hg18.txt' % smallSamplePostfix)
newexonAnnotFileName = os.path.join(self.path, 'new_Exons_hg18.txt')
newintronAnnotFileName = os.path.join(self.path, 'new_Introns_hg18.txt')
newstopAnnotFileName = os.path.join(self.path, 'new_stop_hg18.txt')
tmpexonAnnotFileName = self.copyFile(exonAnnotFileName)
tmpintronAnnotFileName = self.copyFile(intronAnnotFileName)
tmpstopAnnotFileName = self.copyFile(stopAnnotFileName)
if smallSampleKey:
chrList = [ smallSampleKey ]
else:
chrList = hg18.seqLenDict.keys()
chrList.sort()
outfile = open(newexonAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# EXON ANNOTATION DATABASE
try:
ex1 = exonmsa[slice]
except:
continue
else:
exlist1 = [(ix.exon_id, ix) for ix in ex1.keys()]
exlist1.sort()
for ixx, exon in exlist1:
saveList = []
tmp = exon.sequence
tmpexon = exons[exon.exon_id]
tmpslice = tmpexon.sequence # FOR REAL EXON COORDINATE
wlist1 = 'EXON', chrid, tmpexon.exon_id, tmpexon.gene_id, tmpslice.start, tmpslice.stop
try:
out1 = conservedmsa[tmp]
except KeyError:
pass
else:
elementlist = [(ix.ucsc_id, ix) for ix in out1.keys()]
elementlist.sort()
for iyy, element in elementlist:
if element.stop - element.start < 100: continue
score = int(string.split(element.gene_id, '=')[1])
if score < 100: continue
tmp2 = element.sequence
tmpelement = mostconserved[element.ucsc_id]
tmpslice2 = tmpelement.sequence # FOR REAL ELEMENT COORDINATE
wlist2 = wlist1 + (tmpelement.ucsc_id, tmpelement.gene_id, tmpslice2.start, tmpslice2.stop)
slicestart, sliceend = max(tmp.start, tmp2.start), min(tmp.stop, tmp2.stop)
if slicestart < 0 or sliceend < 0: sys.exit('wrong query')
tmp1 = msa.seqDict['hg18.' + chrid][slicestart:sliceend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start < 100: continue
palign, pident = e.pAligned(), e.pIdentity()
if palign < 0.8 or pident < 0.8: continue
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpexonAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newexonAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING EACH CONTENTS
outfile = open(newintronAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# SPLICE ANNOTATION DATABASE
try:
sp1 = splicemsa[slice]
except:
continue
else:
splist1 = [(ix.splice_id, ix) for ix in sp1.keys()]
splist1.sort()
for ixx, splice in splist1:
saveList = []
tmp = splice.sequence
tmpsplice = splices[splice.splice_id]
tmpslice = tmpsplice.sequence # FOR REAL EXON COORDINATE
wlist1 = 'INTRON', chrid, tmpsplice.splice_id, tmpsplice.gene_id, tmpslice.start, tmpslice.stop
try:
out1 = conservedmsa[tmp]
except KeyError:
pass
else:
elementlist = [(ix.ucsc_id, ix) for ix in out1.keys()]
elementlist.sort()
for iyy, element in elementlist:
if element.stop - element.start < 100: continue
score = int(string.split(element.gene_id, '=')[1])
if score < 100: continue
tmp2 = element.sequence
tmpelement = mostconserved[element.ucsc_id]
tmpslice2 = tmpelement.sequence # FOR REAL ELEMENT COORDINATE
wlist2 = wlist1 + (tmpelement.ucsc_id, tmpelement.gene_id, tmpslice2.start, tmpslice2.stop)
slicestart, sliceend = max(tmp.start, tmp2.start), min(tmp.stop, tmp2.stop)
if slicestart < 0 or sliceend < 0: sys.exit('wrong query')
tmp1 = msa.seqDict['hg18.' + chrid][slicestart:sliceend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start < 100: continue
palign, pident = e.pAligned(), e.pIdentity()
if palign < 0.8 or pident < 0.8: continue
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
# SNP IN SPLICE SITES
saveList = []
gt = tmpslice[:2]
ag = tmpslice[-2:]
try:
gtout = snpmsa[gt]
agout = snpmsa[ag]
except KeyError:
pass
else:
gtlist = gtout.keys()
aglist = agout.keys()
for snp in gtlist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = ('SNP5', chrid, tmpsplice.gene_id, gt.start, gt.stop, str(gt)) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
tmp1 = msa.seqDict['hg18.' + chrid][abs(gt.start):abs(gt.stop)]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 2 or dest.stop - dest.start != 2: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
for snp in aglist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = ('SNP3', chrid, tmpsplice.gene_id, ag.start, ag.stop, str(ag)) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
tmp1 = msa.seqDict['hg18.' + chrid][abs(ag.start):abs(ag.stop)]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 2 or dest.stop - dest.start != 2: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpintronAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newintronAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING EACH CONTENTS
outfile = open(newstopAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# STOP ANNOTATION DATABASE
try:
cds1 = cdsmsa[slice]
except:
continue
else:
cdslist1 = [(ix.cds_id, ix) for ix in cds1.keys()]
cdslist1.sort()
for ixx, cds in cdslist1:
saveList = []
tmp = cds.sequence
tmpcds = cdss[cds.cds_id]
tmpslice = tmpcds.sequence # FOR REAL EXON COORDINATE
wlist1 = 'STOP', chrid, tmpcds.cds_id, tmpcds.gene_id, tmpslice.start, tmpslice.stop
if tmpslice.start < 0:
stopstart, stopend = -tmpslice.stop, -tmpslice.start
stop = -hg18[chrid][stopstart:stopstart+3]
else:
stopstart, stopend = tmpslice.start, tmpslice.stop
stop = hg18[chrid][stopend-3:stopend]
if str(stop).upper() not in ('TAA', 'TAG', 'TGA'): continue
try:
snp1 = snpmsa[stop]
except KeyError:
pass
else:
snplist = [(ix.snp_id, ix) for ix in snp1.keys()]
snplist.sort()
for iyy, snp in snplist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = wlist1 + (str(stop), stop.start, stop.stop) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
if tmpslice.start < 0:
tmp1 = -msa.seqDict['hg18.' + chrid][stopstart:stopstart+3]
else:
tmp1 = msa.seqDict['hg18.' + chrid][stopend-3:stopend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 3 or dest.stop - dest.start != 3: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
if str(dest).upper() not in ('TAA', 'TAG', 'TGA'): nonstr = 'NONSENSE'
else: nonstr = 'STOP'
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident, nonstr)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpstopAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newstopAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING EACH CONTENTS
def mysqlannot_test(self): # BUILD ANNOTATION DB FROM MYSQL
from pygr import seqdb, cnestedlist, sqlgraph
hg18 = pygr.Data.getResource('TEST.Seq.Genome.hg18')
# BUILD ANNOTATION DATABASE FOR REFSEQ EXONS: MYSQL VERSION
exon_slices = sqlgraph.SQLTableClustered('%s.pygr_refGene_exonAnnot%s_hg18' % ( testInputDB, smallSamplePostfix ),
clusterKey = 'chromosome', maxCache = 0)
exon_db = seqdb.AnnotationDB(exon_slices, hg18, sliceAttrDict = dict(id = 'chromosome', \
gene_id = 'name', exon_id = 'exon_id'))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_exonAnnot_SQL_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for id in exon_db:
msa.addAnnotation(exon_db[id])
exon_db.clear_cache() # not really necessary; cache should autoGC
exon_slices.clear_cache()
msa.build()
exon_db.__doc__ = 'SQL Exon Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.SQL.hg18.exons', exon_db)
msa.__doc__ = 'SQL NLMSA Exon for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.SQL.hg18.exons', msa)
exon_schema = pygr.Data.ManyToManyRelation(hg18, exon_db, bindAttrs = ('exon2',))
exon_schema.__doc__ = 'SQL Exon Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.SQL.hg18.exons', exon_schema)
# BUILD ANNOTATION DATABASE FOR REFSEQ SPLICES: MYSQL VERSION
splice_slices = sqlgraph.SQLTableClustered('%s.pygr_refGene_spliceAnnot%s_hg18' % ( testInputDB, smallSamplePostfix ),
clusterKey = 'chromosome', maxCache = 0)
splice_db = seqdb.AnnotationDB(splice_slices, hg18, sliceAttrDict = dict(id = 'chromosome', \
gene_id = 'name', splice_id = 'splice_id'))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_spliceAnnot_SQL_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for id in splice_db:
msa.addAnnotation(splice_db[id])
splice_db.clear_cache() # not really necessary; cache should autoGC
splice_slices.clear_cache()
msa.build()
splice_db.__doc__ = 'SQL Splice Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.SQL.hg18.splices', splice_db)
msa.__doc__ = 'SQL NLMSA Splice for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.SQL.hg18.splices', msa)
splice_schema = pygr.Data.ManyToManyRelation(hg18, splice_db, bindAttrs = ('splice2',))
splice_schema.__doc__ = 'SQL Splice Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.SQL.hg18.splices', splice_schema)
# BUILD ANNOTATION DATABASE FOR REFSEQ EXONS: MYSQL VERSION
cds_slices = sqlgraph.SQLTableClustered('%s.pygr_refGene_cdsAnnot%s_hg18' % ( testInputDB, smallSamplePostfix ),
clusterKey = 'chromosome', maxCache = 0)
cds_db = seqdb.AnnotationDB(cds_slices, hg18, sliceAttrDict = dict(id = 'chromosome', \
gene_id = 'name', cds_id = 'cds_id'))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'refGene_cdsAnnot_SQL_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for id in cds_db:
msa.addAnnotation(cds_db[id])
cds_db.clear_cache() # not really necessary; cache should autoGC
cds_slices.clear_cache()
msa.build()
cds_db.__doc__ = 'SQL CDS Annotation Database for hg18'
pygr.Data.addResource('TEST.Annotation.SQL.hg18.cdss', cds_db)
msa.__doc__ = 'SQL NLMSA CDS for hg18'
pygr.Data.addResource('TEST.Annotation.NLMSA.SQL.hg18.cdss', msa)
cds_schema = pygr.Data.ManyToManyRelation(hg18, cds_db, bindAttrs = ('cds2',))
cds_schema.__doc__ = 'SQL CDS Schema for hg18'
pygr.Data.addSchema('TEST.Annotation.NLMSA.SQL.hg18.cdss', cds_schema)
# BUILD ANNOTATION DATABASE FOR MOST CONSERVED ELEMENTS FROM UCSC: MYSQL VERSION
ucsc_slices = sqlgraph.SQLTableClustered('%s.pygr_phastConsElements28way%s_hg18' % ( testInputDB, smallSamplePostfix ),
clusterKey = 'chromosome', maxCache = 0)
ucsc_db = seqdb.AnnotationDB(ucsc_slices, hg18, sliceAttrDict = dict(id = 'chromosome', \
gene_id = 'name', ucsc_id = 'ucsc_id'))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'phastConsElements28way_SQL_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for id in ucsc_db:
msa.addAnnotation(ucsc_db[id])
ucsc_db.clear_cache() # not really necessary; cache should autoGC
ucsc_slices.clear_cache()
msa.build()
ucsc_db.__doc__ = 'SQL Most Conserved Elements for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.SQL.hg18.mostconserved', ucsc_db)
msa.__doc__ = 'SQL NLMSA for Most Conserved Elements for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.NLMSA.SQL.hg18.mostconserved', msa)
ucsc_schema = pygr.Data.ManyToManyRelation(hg18, ucsc_db, bindAttrs = ('element2',))
ucsc_schema.__doc__ = 'SQL Schema for UCSC Most Conserved Elements for hg18'
pygr.Data.addSchema('TEST.Annotation.UCSC.NLMSA.SQL.hg18.mostconserved', ucsc_schema)
# BUILD ANNOTATION DATABASE FOR SNP126 FROM UCSC: MYSQL VERSION
snp_slices = sqlgraph.SQLTableClustered('%s.pygr_snp126%s_hg18' % ( testInputDB, smallSamplePostfix ),
clusterKey = 'clusterKey', maxCache = 0)
snp_db = seqdb.AnnotationDB(snp_slices, hg18, sliceAttrDict = dict(id = 'chromosome', gene_id = 'name',
snp_id = 'snp_id', score = 'score', ref_NCBI = 'ref_NCBI', ref_UCSC = 'ref_UCSC',
observed = 'observed', molType = 'molType', myClass = 'myClass', myValid = 'myValid',
avHet = 'avHet', avHetSE = 'avHetSE', myFunc = 'myFunc', locType = 'locType',
myWeight = 'myWeight'))
msa = cnestedlist.NLMSA(os.path.join(self.path, 'snp126_SQL_hg18'), 'w', \
pairwiseMode = True, bidirectional = False)
for id in snp_db:
msa.addAnnotation(snp_db[id])
snp_db.clear_cache() # not really necessary; cache should autoGC
snp_slices.clear_cache()
msa.build()
snp_db.__doc__ = 'SQL SNP126 for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.SQL.hg18.snp126', snp_db)
msa.__doc__ = 'SQL NLMSA for SNP126 for hg18'
pygr.Data.addResource('TEST.Annotation.UCSC.NLMSA.SQL.hg18.snp126', msa)
snp_schema = pygr.Data.ManyToManyRelation(hg18, snp_db, bindAttrs = ('snp2',))
snp_schema.__doc__ = 'SQL Schema for UCSC SNP126 for hg18'
pygr.Data.addSchema('TEST.Annotation.UCSC.NLMSA.SQL.hg18.snp126', snp_schema)
pygr.Data.save()
pygr.Data.clear_cache()
# QUERY TO EXON AND SPLICES ANNOTATION DATABASE
hg18 = pygr.Data.getResource('TEST.Seq.Genome.hg18')
exonmsa = pygr.Data.getResource('TEST.Annotation.NLMSA.SQL.hg18.exons')
splicemsa = pygr.Data.getResource('TEST.Annotation.NLMSA.SQL.hg18.splices')
conservedmsa = pygr.Data.getResource('TEST.Annotation.UCSC.NLMSA.SQL.hg18.mostconserved')
snpmsa = pygr.Data.getResource('TEST.Annotation.UCSC.NLMSA.SQL.hg18.snp126')
cdsmsa = pygr.Data.getResource('TEST.Annotation.NLMSA.SQL.hg18.cdss')
exons = pygr.Data.getResource('TEST.Annotation.SQL.hg18.exons')
splices = pygr.Data.getResource('TEST.Annotation.SQL.hg18.splices')
mostconserved = pygr.Data.getResource('TEST.Annotation.UCSC.SQL.hg18.mostconserved')
snp126 = pygr.Data.getResource('TEST.Annotation.UCSC.SQL.hg18.snp126')
cdss = pygr.Data.getResource('TEST.Annotation.SQL.hg18.cdss')
# OPEN hg18_MULTIZ28WAY NLMSA
msa = cnestedlist.NLMSA(os.path.join(msaDir, 'hg18_multiz28way'), 'r', trypath = [seqDir])
exonAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Exons%s_hg18.txt' % smallSamplePostfix)
intronAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Introns%s_hg18.txt' % smallSamplePostfix)
stopAnnotFileName = os.path.join(testInputDir, 'Annotation_ConservedElement_Stop%s_hg18.txt' % smallSamplePostfix)
newexonAnnotFileName = os.path.join(self.path, 'new_Exons_hg18.txt')
newintronAnnotFileName = os.path.join(self.path, 'new_Introns_hg18.txt')
newstopAnnotFileName = os.path.join(self.path, 'new_stop_hg18.txt')
tmpexonAnnotFileName = self.copyFile(exonAnnotFileName)
tmpintronAnnotFileName = self.copyFile(intronAnnotFileName)
tmpstopAnnotFileName = self.copyFile(stopAnnotFileName)
if smallSampleKey:
chrList = [ smallSampleKey ]
else:
chrList = hg18.seqLenDict.keys()
chrList.sort()
outfile = open(newexonAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# EXON ANNOTATION DATABASE
try:
ex1 = exonmsa[slice]
except:
continue
else:
exlist1 = [(ix.exon_id, ix) for ix in ex1.keys()]
exlist1.sort()
for ixx, exon in exlist1:
saveList = []
tmp = exon.sequence
tmpexon = exons[exon.exon_id]
tmpslice = tmpexon.sequence # FOR REAL EXON COORDINATE
wlist1 = 'EXON', chrid, tmpexon.exon_id, tmpexon.gene_id, tmpslice.start, tmpslice.stop
try:
out1 = conservedmsa[tmp]
except KeyError:
pass
else:
elementlist = [(ix.ucsc_id, ix) for ix in out1.keys()]
elementlist.sort()
for iyy, element in elementlist:
if element.stop - element.start < 100: continue
score = int(string.split(element.gene_id, '=')[1])
if score < 100: continue
tmp2 = element.sequence
tmpelement = mostconserved[element.ucsc_id]
tmpslice2 = tmpelement.sequence # FOR REAL ELEMENT COORDINATE
wlist2 = wlist1 + (tmpelement.ucsc_id, tmpelement.gene_id, tmpslice2.start, tmpslice2.stop)
slicestart, sliceend = max(tmp.start, tmp2.start), min(tmp.stop, tmp2.stop)
if slicestart < 0 or sliceend < 0: sys.exit('wrong query')
tmp1 = msa.seqDict['hg18.' + chrid][slicestart:sliceend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start < 100: continue
palign, pident = e.pAligned(), e.pIdentity()
if palign < 0.8 or pident < 0.8: continue
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpexonAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newexonAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING FULL FILE CONTENTS
outfile = open(newintronAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# SPLICE ANNOTATION DATABASE
try:
sp1 = splicemsa[slice]
except:
continue
else:
splist1 = [(ix.splice_id, ix) for ix in sp1.keys()]
splist1.sort()
for ixx, splice in splist1:
saveList = []
tmp = splice.sequence
tmpsplice = splices[splice.splice_id]
tmpslice = tmpsplice.sequence # FOR REAL EXON COORDINATE
wlist1 = 'INTRON', chrid, tmpsplice.splice_id, tmpsplice.gene_id, tmpslice.start, tmpslice.stop
try:
out1 = conservedmsa[tmp]
except KeyError:
pass
else:
elementlist = [(ix.ucsc_id, ix) for ix in out1.keys()]
elementlist.sort()
for iyy, element in elementlist:
if element.stop - element.start < 100: continue
score = int(string.split(element.gene_id, '=')[1])
if score < 100: continue
tmp2 = element.sequence
tmpelement = mostconserved[element.ucsc_id]
tmpslice2 = tmpelement.sequence # FOR REAL ELEMENT COORDINATE
wlist2 = wlist1 + (tmpelement.ucsc_id, tmpelement.gene_id, tmpslice2.start, tmpslice2.stop)
slicestart, sliceend = max(tmp.start, tmp2.start), min(tmp.stop, tmp2.stop)
if slicestart < 0 or sliceend < 0: sys.exit('wrong query')
tmp1 = msa.seqDict['hg18.' + chrid][slicestart:sliceend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start < 100: continue
palign, pident = e.pAligned(), e.pIdentity()
if palign < 0.8 or pident < 0.8: continue
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
# SNP IN SPLICE SITES
saveList = []
gt = tmpslice[:2]
ag = tmpslice[-2:]
try:
gtout = snpmsa[gt]
agout = snpmsa[ag]
except KeyError:
pass
else:
gtlist = gtout.keys()
aglist = agout.keys()
for snp in gtlist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = ('SNP5', chrid, tmpsplice.gene_id, gt.start, gt.stop, str(gt)) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
tmp1 = msa.seqDict['hg18.' + chrid][abs(gt.start):abs(gt.stop)]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 2 or dest.stop - dest.start != 2: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
for snp in aglist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = ('SNP3', chrid, tmpsplice.gene_id, ag.start, ag.stop, str(ag)) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
tmp1 = msa.seqDict['hg18.' + chrid][abs(ag.start):abs(ag.stop)]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 2 or dest.stop - dest.start != 2: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpintronAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newintronAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING FULL FILE CONTENTS
outfile = open(newstopAnnotFileName, 'w')
for chrid in chrList:
slice = hg18[chrid]
# STOP ANNOTATION DATABASE
try:
cds1 = cdsmsa[slice]
except:
continue
else:
cdslist1 = [(ix.cds_id, ix) for ix in cds1.keys()]
cdslist1.sort()
for ixx, cds in cdslist1:
saveList = []
tmp = cds.sequence
tmpcds = cdss[cds.cds_id]
tmpslice = tmpcds.sequence # FOR REAL EXON COORDINATE
wlist1 = 'STOP', chrid, tmpcds.cds_id, tmpcds.gene_id, tmpslice.start, tmpslice.stop
if tmpslice.start < 0:
stopstart, stopend = -tmpslice.stop, -tmpslice.start
stop = -hg18[chrid][stopstart:stopstart+3]
else:
stopstart, stopend = tmpslice.start, tmpslice.stop
stop = hg18[chrid][stopend-3:stopend]
if str(stop).upper() not in ('TAA', 'TAG', 'TGA'): continue
try:
snp1 = snpmsa[stop]
except KeyError:
pass
else:
snplist = [(ix.snp_id, ix) for ix in snp1.keys()]
snplist.sort()
for iyy, snp in snplist:
tmpsnp = snp.sequence
annsnp = snp126[snp.snp_id]
wlist2 = wlist1 + (str(stop), stop.start, stop.stop) \
+ (annsnp.snp_id, tmpsnp.start, tmpsnp.stop, \
str(tmpsnp), annsnp.gene_id, annsnp.ref_NCBI, annsnp.ref_UCSC, \
annsnp.observed, annsnp.molType, \
annsnp.myClass, annsnp.myValid)
if tmpslice.start < 0:
tmp1 = -msa.seqDict['hg18.' + chrid][stopstart:stopstart+3]
else:
tmp1 = msa.seqDict['hg18.' + chrid][stopend-3:stopend]
edges = msa[tmp1].edges()
for src, dest, e in edges:
if src.stop - src.start != 3 or dest.stop - dest.start != 3: continue
palign, pident = e.pAligned(), e.pIdentity()
palign, pident = '%.2f' % palign, '%.2f' % pident
if str(dest).upper() not in ('TAA', 'TAG', 'TGA'): nonstr = 'NONSENSE'
else: nonstr = 'STOP'
wlist3 = wlist2 + ((~msa.seqDict)[src], str(src), src.start, src.stop, \
(~msa.seqDict)[dest], \
str(dest), dest.start, dest.stop, palign, pident, nonstr)
saveList.append('\t'.join(map(str, wlist3)) + '\n')
saveList.sort()
for saveline in saveList:
outfile.write(saveline)
outfile.close()
md5old = hashlib.md5()
md5old.update(open(tmpstopAnnotFileName, 'r').read())
md5new = hashlib.md5()
md5new.update(open(newstopAnnotFileName, 'r').read())
assert md5old.digest() == md5new.digest() # MD5 COMPARISON INSTEAD OF COMPARING FULL FILE CONTENTS
# osprofiler/drivers/__init__.py -- Carthaca/osprofiler (Apache-2.0)
from osprofiler.drivers import base  # noqa
from osprofiler.drivers import ceilometer # noqa
from osprofiler.drivers import elasticsearch_driver # noqa
from osprofiler.drivers import jaeger # noqa
from osprofiler.drivers import loginsight # noqa
from osprofiler.drivers import messaging # noqa
from osprofiler.drivers import mongodb # noqa
from osprofiler.drivers import redis_driver # noqa
# allocator/test.py -- HParker/vliw-scheduler-suite (MIT)
import hw3
def test_bb(expected, allocator):
if len(expected) != len(allocator.bb):
print("Basic Blocks Failed", list(map(lambda b: b[1], allocator.bb)), "\n ", expected)
return
for index, bb in enumerate(allocator.bb):
if bb[1] != expected[index]:
print("Basic Blocks Failed", list(map(lambda b: b[1], allocator.bb)), "\n ", expected)
return
def test_cfg(expected, allocator):
expected.sort()
allocator.cfg.sort()
if len(expected) != len(allocator.cfg):
print("CFG Failed", allocator.cfg, "\n ", expected)
return
for index, edge in enumerate(allocator.cfg):
if edge != expected[index]:
print("CFG Failed", allocator.cfg, "\n ", expected)
return
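`test_bb` and `test_cfg` check the block numbering and edge list produced by `hw3.Allocator`. A minimal leader-based sketch of the numbering convention these tests expect (labels map to `'None'`, real blocks count from 1); this is an illustration of the classic leader rule, not the hw3 implementation:

```python
def basic_blocks(instrs):
    """Number instructions by basic block: a new block starts at any
    label (potential branch target) and at the instruction after a
    transfer of control; label lines themselves get 'None'."""
    transfers = ('goto', 'brgt', 'brgeq', 'return')
    leaders = set()
    after_transfer = True  # the first instruction always starts a block
    for i, ins in enumerate(instrs):
        if ins.endswith(':') or after_transfer:
            leaders.add(i)
        after_transfer = ins.split(',')[0].strip() in transfers
    block = 0
    numbering = []
    for i, ins in enumerate(instrs):
        if i in leaders:
            block += 1
        numbering.append('None' if ins.endswith(':') else str(block))
    return numbering
```

Running this over the IR from `test_three_variables` reproduces that test's `expected_block_numbers`.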
def test_defs(expected, allocator):
if len(expected) != len(allocator.defs):
print("Defs Failed", list(map(lambda b: b[1], allocator.defs)), "\n ", expected)
return
for index, d in enumerate(allocator.defs):
if d[1] != expected[index]:
print("Defs Failed", list(map(lambda b: b[1], allocator.defs)), "\n ", expected)
return
def test_uses(expected, allocator):
if len(expected) != len(allocator.uses):
print("Uses Failed", list(map(lambda b: b[1], allocator.uses)), "\n ", expected)
return
for index, uses in enumerate(allocator.uses):
a = uses[1].copy()
b = expected[index].copy()
a.sort()
b.sort()
if a != b:
print("Uses Failed", list(map(lambda b: b[1], allocator.uses)), "\n ", expected)
return
def test_livein(expected, allocator):
if len(expected) != len(allocator.live_in):
print("Live In Failed", list(map(lambda b: b[1], allocator.live_in)), "\n ", expected)
return
for index, live in enumerate(allocator.live_in):
a = live[1].copy()
b = expected[index].copy()
a.sort()
b.sort()
if a != b:
print("Live In Failed", list(map(lambda b: b[1], allocator.live_in)), "\n ", expected)
return
def test_liveout(expected, allocator):
if len(expected) != len(allocator.live_out):
print("Live Out Failed", list(map(lambda b: b[1], allocator.live_out)), "\n ", expected)
return
for index, live in enumerate(allocator.live_out):
a = live[1].copy()
b = expected[index].copy()
a.sort()
b.sort()
if a != b:
print("Live Out Failed", list(map(lambda b: b[1], allocator.live_out)), "\n ", expected)
return
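The live-in/live-out helpers above check the result of the standard backward dataflow. A small fixed-point solver illustrating the equations `live_in[i] = use[i] | (live_out[i] - defs[i])` and `live_out[i] = union of live_in[s]` over successors `s` (per-instruction granularity; illustrative, not the hw3 code):

```python
def liveness(n, succ, use, defs):
    """Backward liveness to a fixed point over n program points.
    succ[i] lists successor indices; use[i]/defs[i] are sets of names."""
    live_in = [set() for _ in range(n)]
    live_out = [set() for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for i in reversed(range(n)):  # backward order converges faster
            out = set().union(*[live_in[s] for s in succ[i]])
            inn = use[i] | (out - defs[i])
            if out != live_out[i] or inn != live_in[i]:
                live_out[i], live_in[i] = out, inn
                changed = True
    return live_in, live_out
```

On a straight-line def-x / def-y / use-both fragment this yields exactly the kind of sets the expectations in these tests encode.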
def test_web(expected, allocator):
if len(expected) != len(allocator.webs):
print("Web Failed", allocator.webs, "\n ", expected)
return
for index, web_key in enumerate(allocator.webs):
a = allocator.webs[web_key].copy()
b = expected[web_key].copy()
a.sort()
b.sort()
if a != b:
print("Web Failed", allocator.webs, "\n ", expected)
return
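Webs merge the def-use chains of one variable that share a use; chains with disjoint uses become separately colorable names like `x1` and `x2`. A toy sketch of that merge (the `(def_line, [use_lines])` input format is hypothetical, and the allocator's webs also include live-through lines, which this sketch omits):

```python
def build_webs(var, du_chains):
    """du_chains: list of (def_line, [use_lines]) for one variable.
    Chains that share a use are merged into one web; disjoint chains
    become separate webs named var1, var2, ..."""
    webs = []
    for def_line, uses in du_chains:
        chain = {def_line} | set(uses)
        for web in webs:
            if web & set(uses):  # a shared use joins the chains
                web |= chain
                break
        else:
            webs.append(chain)
    return dict(('%s%d' % (var, i + 1), sorted(web))
                for i, web in enumerate(webs))
```

Two defs of `x` whose uses never meet (as in `non_overlapping_left_and_right`) come out as `x1` and `x2`, while defs reaching a common use (as on the two sides of a diamond) collapse into a single web.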
def test_ig(expected, allocator):
if len(expected) != len(allocator.ig):
print("IG Failed", allocator.ig, "\n ", expected)
return
for index, edge in enumerate(allocator.ig):
if edge != expected[index]:
print("IG Failed", allocator.ig, "\n ", expected)
return
def test_colored_graph(expected, allocator):
if len(expected) != len(allocator.coloring):
print("Coloring Failed", allocator.coloring, "\n ", expected)
return
for index, color in enumerate(allocator.coloring):
if allocator.coloring[color] != expected[color]:
print("Coloring Failed", allocator.coloring, "\n ", expected)
return
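The coloring checks above expect string register names; conceptually the allocator approximates greedy graph coloring, where each web takes the lowest color not used by an interference-graph neighbor. A minimal sketch (integer colors for simplicity; illustrative, not the hw3 coloring phase):

```python
def greedy_color(nodes, edges):
    """Give each node the lowest positive color not already taken by a
    colored neighbor (the classic k-coloring heuristic)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    coloring = {}
    for n in nodes:
        taken = {coloring[m] for m in adj[n] if m in coloring}
        color = 1
        while color in taken:
            color += 1
        coloring[n] = color
    return coloring
```

On the fully connected triangle from `test_three_variables` this needs three colors, while a path `x1 -- y1 -- z1` lets `x1` and `z1` share a color, mirroring `ig_partial_overlap`.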
# Simple sanity check. If this is failing, fix it first.
def test_three_variables():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, y, x,",
"assign, z, 12.0,",
"add, w, x, y,",
"add, w, z, y,",
"goto, End, ,",
"End:",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ["None", "1", "1", "1", "1", "1", "1", "None", "2"]
test_bb(expected_block_numbers, allocator)
expected_cfg = ["1 -> 2"]
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', 'y', 'z', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[],[],['x'],[],['x', 'y'], ['z','y'],[],[],[]]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], ['x', 'y'], ['x', 'y', 'z'], ['y', 'z'], [], [], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x', 'y'], ['x', 'y', 'z'], ['y', 'z'], [], [], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '3', '4'], 'y1': ['2', '3', '4', '5'], 'z1': ['3', '4', '5']}
test_web(expected_web, allocator)
expected_ig = ['x1 -- y1', 'x1 -- z1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'z1': '3', 'y1': '2', 'x1': '1'}
test_colored_graph(expected_colors, allocator)
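For the program above, the interference edges follow directly from the defs and live-out sets: a web interferes with everything live across its definition point. A sketch of that derivation (unsuffixed variable names; the suite's web names add a numeric suffix like `x1`):

```python
def interference_edges(defs, live_out):
    """A variable interferes with every other variable live across its
    definition; collect undirected edges as sorted 'a -- b' strings."""
    edges = set()
    for d, out in zip(defs, live_out):
        if not d:
            continue
        for v in out:
            if v != d:
                edges.add(tuple(sorted((d, v))))
    return sorted('%s -- %s' % pair for pair in edges)
```

Feeding in the defs and live-out sets from `test_three_variables` reproduces that test's `expected_ig` triangle.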
# x is used only along the fallthrough path after the branch.
def only_fallthrough_usage():
ir = [
"Main:",
"assign, x, 10.0,",
"brgt, x, 10.0, Jump,",
"add, w, x, 10.0,",
"return, , ,",
"Jump:",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '2', '2', 'None', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 2', '1 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], ['x'], ['x'], [], [], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], ['x'], [], [], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x'], [], [], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '3']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
def only_branch_usage():
ir = [
"Main:",
"assign, x, 10.0,",
"brgt, x, 10.0, Jump,",
"return, , ,",
"Jump:",
"add, w, x, 10.0,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '2', 'None', '3', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 2', '1 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], ['x'], [], [], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], [], [], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x'], [], [], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '5']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
# Diamond-shaped graph where all uses of the variables defined in the top node occur in the bottom node.
def test_unused_diamond_sides():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, y, x,",
"assign, z, 12.0,",
"brgt, x, y, Left,",
"assign, w, 13.0,", # Right
"assign, w, 13.0,",
"goto, End, ,",
"Left:",
"assign, w, 13.0,", # Left
"assign, w, 13.0,",
"goto, End, ,",
"End:",
"add, v, z, z,",
"add, v, y, y,",
"add, v, x, x,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', '2', '2', '2', 'None', '3', '3', '3', 'None', '4', '4', '4', '4']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 3', '1 -> 2', '2 -> 4', '3 -> 4']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', 'y', 'z', '', '', '', '', '', '', '', '', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], ['x'], [], ['x', 'y'], [], [], [], [], [], [], [], [], ['z'], ['y'], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], ['x', 'y'], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], [], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], [], ['x', 'y', 'z'], ['x', 'y'], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x', 'y'], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], [], ['x', 'y', 'z'], ['x', 'y', 'z'], ['x', 'y', 'z'], [], ['x', 'y'], ['x'], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '3', '4', '9', '10', '11', '13', '14', '15'], 'y1': ['2', '3', '4', '9', '10', '11', '13', '14'], 'z1': ['3', '4', '9', '10', '11', '13']}
test_web(expected_web, allocator)
expected_ig = ['x1 -- y1', 'x1 -- z1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1', 'y1': '2', 'z1': '3'}
test_colored_graph(expected_colors, allocator)
# Two unconnected webs for the same variable:
# x is defined at the top node and used on the left side;
# the right side redefines x, and that definition is used in the bottom node.
def non_overlapping_left_and_right():
ir = [
"Main:",
"assign, x, 10.0,",
"brgt, x, 10.0, Left,",
"assign, x, 13.0,", # Right
"goto, End, ,",
"Left:",
"add, x, x, 13.0,", # Left
"return, , ,",
"End:",
"add, x, x, x,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '2', '2', 'None', '3', '3', 'None', '4', '4']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 2', '1 -> 3', '2 -> 4']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', '', 'x', '', '', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], ['x'], [], [], [], ['x'], [], [], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], [], ['x'], [], ['x'], [], [], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x'], ['x'], ['x'], [], [], [], [], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '6'], 'x2': ['3', '4', '9']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1', 'x2': '1'}
test_colored_graph(expected_colors, allocator)
# test that a loop will keep a use alive through the end of the block
# even if the block after it does not use it.
def loop_extra_liveness():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, w, 3.0,",
"Loop:",
"assign, p, 2.0,",
"assign, p, 2.0,",
"add, w, w, x,",
"assign, p, 2.0,",
"assign, p, 2.0,",
"brgeq, w, 100.0, Loop,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', 'None', '2', '2', '2', '2', '2', '2', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 2', '2 -> 2', '2 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', 'w', '', '', '', 'w', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], [], [], ['w', 'x'], [], [], ['w'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], [], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['w', 'x'], [], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], ['w', 'x'], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '4', '5', '6'], 'w1': ['2', '4', '5', '6', '7', '8', '9']}
test_web(expected_web, allocator)
expected_ig = ['w1 -- x1']
test_ig(expected_ig, allocator)
expected_colors = {'x1': '2', 'w1': '1'}
test_colored_graph(expected_colors, allocator)
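The back edge `2 -> 2` is why liveness must iterate to a fixed point: the first backward pass seeds the loop body's live-in, and a second pass propagates it around the cycle before anything stabilizes. A block-level sketch that counts passes (a rough, illustrative model of this test's loop, not the hw3 analysis):

```python
def liveness_rounds(n, succ, use, defs):
    """Backward liveness as usual, but counting passes until nothing
    changes; a CFG back edge forces at least two."""
    live_in = [set() for _ in range(n)]
    rounds = 0
    changed = True
    while changed:
        changed = False
        rounds += 1
        for i in reversed(range(n)):
            out = set().union(*[live_in[s] for s in succ[i]])
            inn = use[i] | (out - defs[i])
            if inn != live_in[i]:
                live_in[i] = inn
                changed = True
    return live_in, rounds
```

Modeling the blocks as: entry defines x and w; the loop body uses w and x, redefines w, and can branch to itself; the exit uses w.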
# Make sure that unreachable uses behave the same as if they were absent
# from the program: the NeverHit block is never entered, so the uses
# inside it do not extend any live range.
def unreachable_uses():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, y, 10.0,",
"assign, z, 10.0,",
"goto, End, ,",
"NeverHit:",
"add, p, y, x,",
"add, p, z, x,",
"return, , ,",
"End:",
"add, p, x, x,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', 'None', '2', '2', '2', 'None', '3', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', '', '', '', '', '', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], [], [], [], [], [], [], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], ['x'], ['x'], [], [], [], [], [], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x'], ['x'], ['x'], [], [], [], [], [], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '3', '4', '10']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
# The jump target and the fallthrough block are the same; only one CFG edge should result.
def jump_fallthrough_match():
ir = [
"Main:",
"assign, x, 10.0,",
"goto, Loop, ,",
"Loop:",
"add, x, x, 1,",
"brgeq, x, 100.0, Loop,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', 'None', '2', '2', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 2', '2 -> 2', '2 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', '', '', 'x', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], ['x'], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], [], ['x'], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x'], [], ['x'], ['x'], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '4', '5']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
def only_used_after_redefinition():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, p, 0.0",
"assign, x, 10.0,",
"add, p, x, 1,",
"add, p, x, 1,",
"add, p, x, 1,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', '1', '1', '1']
test_bb(expected_block_numbers, allocator)
expected_cfg = []
test_cfg(expected_cfg, allocator)
expected_defs = ['', '', '', 'x', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], ['x'], ['x'], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], [], [], ['x'], ['x'], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], [], [], ['x'], ['x'], ['x'], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['3', '4', '5', '6']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
def only_used_after_redefinition_jumps():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, p, 0.0",
"assign, x, 10.0,",
"goto, after, ,",
"add, p, x, 1,",
"add, p, x, 1,",
"add, p, x, 1,",
"after:",
"add, p, x, 1,",
"add, p, x, 1,",
"add, p, x, 1,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', '2', '2', '2', 'None', '3', '3', '3', '3']
test_bb(expected_block_numbers, allocator)
expected_cfg = ['1 -> 3', '2 -> 3']
test_cfg(expected_cfg, allocator)
expected_defs = ['', '', '', 'x', '', '', '', '', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], [], [], [], [], [], ['x'], ['x'], ['x'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], [], [], ['x'], ['x'], ['x'], ['x'], [], ['x'], ['x'], ['x'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], [], [], ['x'], ['x'], ['x'], ['x'], ['x'], [], ['x'], ['x'], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['3', '4', '9', '10', '11']}
test_web(expected_web, allocator)
expected_ig = []
test_ig(expected_ig, allocator)
expected_colors = {'x1': '1'}
test_colored_graph(expected_colors, allocator)
def ig_partial_overlap():
ir = [
"Main:",
"assign, x, 10.0,",
"assign, y, 0.0",
"add, p, x, 10.0,",
"assign, z, 10.0,",
"add, p, y, 10.0,",
"add, p, z, 10.0,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', '1', '1', '1']
test_bb(expected_block_numbers, allocator)
expected_cfg = []
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'x', 'y', '', 'z', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], ['x'], [], ['y'], ['z'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['x'], ['x', 'y'], ['y'], ['y', 'z'], ['z'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['x'], ['x', 'y'], ['y'], ['y', 'z'], ['z'], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'x1': ['1', '2', '3'], 'y1': ['2', '3', '4', '5'], 'z1': ['4', '5', '6']}
test_web(expected_web, allocator)
expected_ig = ['x1 -- y1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'z1': '1', 'y1': '2', 'x1': '1'} # x and z share a color since they don't overlap!
test_colored_graph(expected_colors, allocator)
def spills_first_if_usage_ties():
ir = [
"Main:",
"assign, w, 10.0,",
"assign, x, 10.0,",
"assign, y, 0.0,",
"assign, z, 0.0,",
"add, p, w, 10.0,",
"add, p, x, 10.0,",
"add, p, y, 10.0,",
"add, p, z, 10.0,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_block_numbers = ['None', '1', '1', '1', '1', '1', '1', '1', '1', '1']
test_bb(expected_block_numbers, allocator)
expected_cfg = []
test_cfg(expected_cfg, allocator)
expected_defs = ['', 'w', 'x', 'y', 'z', '', '', '', '', '']
test_defs(expected_defs, allocator)
expected_uses = [[], [], [], [], [], ['w'], ['x'], ['y'], ['z'], []]
test_uses(expected_uses, allocator)
expected_live_in = [[], [], ['w'], ['w', 'x'], ['w', 'x', 'y'], ['w', 'x', 'y', 'z'], ['x', 'y', 'z'], ['y', 'z'], ['z'], []]
test_livein(expected_live_in, allocator)
expected_live_out = [[], ['w'], ['w', 'x'], ['w', 'x', 'y'], ['w', 'x', 'y', 'z'], ['x', 'y', 'z'], ['y', 'z'], ['z'], [], []]
test_liveout(expected_live_out, allocator)
expected_web = {'w1': ['1', '2', '3', '4', '5'], 'x1': ['2', '3', '4', '5', '6'], 'y1': ['3', '4', '5', '6', '7'], 'z1': ['4', '5', '6', '7', '8']}
test_web(expected_web, allocator)
expected_ig = ['w1 -- x1', 'w1 -- y1', 'w1 -- z1', 'x1 -- y1', 'x1 -- z1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'z1': 'spill', 'y1': '3', 'x1': '2', 'w1': '1'}
test_colored_graph(expected_colors, allocator)
def spills_least_defs():
ir = [
"Main:",
"assign, w, 10.0,",
"assign, w, w,",
"assign, x, 10.0,",
"assign, x, x,",
"assign, y, 0.0,",
"assign, y, y,",
"assign, z, 0.0,",
"add, p, w, 10.0,",
"add, p, x, 10.0,",
"add, p, y, 10.0,",
"add, p, z, 10.0,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_ig = ['w1 -- x1', 'w1 -- y1', 'w1 -- z1', 'x1 -- y1', 'x1 -- z1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'z1': 'spill', 'y1': '3', 'x1': '2', 'w1': '1'}
test_colored_graph(expected_colors, allocator)
def spills_least_uses():
ir = [
"Main:",
"assign, w, 10.0,",
"assign, x, 10.0,",
"assign, y, 0.0,",
"assign, z, 0.0,",
"add, p, w, 10.0,",
"add, p, w, 10.0,",
"add, p, x, 10.0,",
"add, p, x, 10.0,",
"add, p, y, 10.0,",
"add, p, z, 10.0,",
"add, p, z, 10.0,",
"return, , ,"
]
allocator = hw3.Allocator(ir)
allocator.apply_functions()
expected_ig = ['w1 -- x1', 'w1 -- y1', 'w1 -- z1', 'x1 -- y1', 'x1 -- z1', 'y1 -- z1']
test_ig(expected_ig, allocator)
expected_colors = {'z1': '3', 'y1': 'spill', 'x1': '2', 'w1': '1'}
test_colored_graph(expected_colors, allocator)
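# The three spill tests above are consistent with a simple cost heuristic:
# when a color cannot be found, spill the web with the fewest defs plus uses.
# A sketch of that selection (how hw3 breaks exact ties is not modeled here;
# min() just keeps the first minimal entry):

```python
def pick_spill(webs):
    """Choose a web to spill: lowest cost = num_defs + num_uses.

    webs maps web name -> (num_defs, num_uses).
    """
    return min(webs, key=lambda w: webs[w][0] + webs[w][1])

# spills_least_defs above: z has one def, the others two
spilled = pick_spill({'w1': (2, 1), 'x1': (2, 1), 'y1': (2, 1), 'z1': (1, 1)})
```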
print("three variables.")
test_three_variables()
print("only fallthrough use.")
only_fallthrough_usage()
print("only branch usage.")
only_branch_usage()
print("diamond sides.")
test_unused_diamond_sides()
print("non-overlapping left and right.")
non_overlapping_left_and_right()
print("loop extra liveness.")
loop_extra_liveness()
print("jump fallthrough match.")
jump_fallthrough_match()
print("unreachable uses.")
unreachable_uses()
print("only used after redefine.")
only_used_after_redefinition()
print("only use after redefinition and jump.")
only_used_after_redefinition_jumps()
print("IG partial overlap.")
ig_partial_overlap()
print("Spills first if usages tie.")
spills_first_if_usage_ties()
print("spills least defs.")
spills_least_defs()
print("spill least uses.")
spills_least_uses()
# test unreachable uses and defs do not contribute to costs.
# zipping.py (Ichinga-Samuel/Python-Buffet, CC0-1.0)
import zipfile
import os
# bitswap/table/__init__.py (VladislavSufyanov/py-bitswap, MIT)
from .hash_funcs import HASH_TABLE
# model.py (escolmebartlebooth/usdc-t1-p3-behavioural-cloning, MIT)
""" behavioural cloning model """
# imports
import cv2
import csv
import random
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from keras.layers import Activation, Dense, Flatten
from keras.layers import Lambda, Cropping2D, Dropout
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.models import Sequential
# from keras.utils.visualize_util import plot
import numpy as np
# global file locations
FILE_DIR = "usdc-t1-p3-data/data/"
# FILE_DIR = "usdc-t1-p3-data/"
DATA_FILE = "driving_log.csv"
CORRECTED_PATH = FILE_DIR + "IMG/"
# MODEL_TO_USE = nvidia_2
# to arrive at correct data size after image augmentation
SAMPLES_FACTOR = 3
# number of epochs
NB_EPOCHS = 2
# set to "w" if the csv stores Windows-style image paths (\), "l" for Linux-style (/)
FILE_FROM = "l"
# FILE_FROM = "w"
def read_data_from_file():
""" read driving-log rows from the csv file
(skipping the header line) and split them
into training and validation sets
"""
data_list = []
with open(FILE_DIR+DATA_FILE, 'rt') as f:
# ignore first line if header
img_data = csv.reader(f)
firstline = 0
for line in img_data:
if firstline == 0:
firstline = 1
else:
data_list.append(line)
print(len(data_list))
train_data, validation_data = train_test_split(data_list, test_size=0.2)
return train_data, validation_data
def generate_data(X, file_from="l", batch_size=32):
"""
generator function for training and validation data
"""
sample_size = len(X)
# run forever...
while 1:
# shuffle the data
shuffle(X)
# generate a sample batch
for offset in range(0, sample_size, batch_size):
# slice off the next batch
batch_samples = X[offset:offset+batch_size]
# placeholders for the images and angles
features = []
measurements = []
# loop the batch
for item in batch_samples:
# add centre, left and right images and adjust steering
for i in range(3):
# check whether data from windows or linux
if file_from == "w":
features.append(cv2.imread(CORRECTED_PATH +
item[i].split("\\")[-1]))
else:
features.append(cv2.imread(CORRECTED_PATH +
item[i].split("/")[-1]))
if i == 0:
correction_factor = 0
elif i == 1:
correction_factor = 0.25
else:
correction_factor = -0.25
angle = float(item[3])
measurements.append(angle+correction_factor)
# now build augmented images
aug_features, aug_measurements = [], []
for feature, measurement in zip(features, measurements):
aug_features.append(feature)
aug_measurements.append(measurement)
# now also add a flipped image
aug_features.append(cv2.flip(feature, 1))
aug_measurements.append(measurement*-1.0)
yield shuffle(np.array(aug_features),
np.array(aug_measurements))
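# generate_data doubles the batch by horizontally flipping each frame and
# negating its steering angle. The core transform can be sketched without
# OpenCV, since cv2.flip(image, 1) is just a left-right mirror of each row
# (the helper below is illustrative, not part of this file's pipeline):

```python
def augment(image, angle):
    """Mirror the frame left-right and negate the steering angle."""
    flipped = [row[::-1] for row in image]  # same effect as cv2.flip(image, 1)
    return flipped, -angle

# tiny 2x3 stand-in for a 160x320 camera frame
frame = [[1, 2, 3],
         [4, 5, 6]]
flipped, angle = augment(frame, 0.25)
```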
def generate_data_2(X, file_from="l", batch_size=32):
"""
generator function for training and validation data
this adds more non-zero-angle images
"""
sample_size = len(X)
# run forever...
while 1:
# shuffle the data
shuffle(X)
# generate a sample batch
for offset in range(0, sample_size, batch_size):
# slice off the next batch
batch_samples = X[offset:offset+batch_size]
# placeholders for the images and angles
features = []
measurements = []
# loop the batch
for item in batch_samples:
# add centre image for zero angle
angle = float(item[3])
ch = random.choice([0, 1, 2])
if angle == 0:
if file_from == "w":
features.append(cv2.imread(CORRECTED_PATH +
item[ch].split("\\")[-1]))
else:
features.append(cv2.imread(CORRECTED_PATH +
item[ch].split("/")[-1]))
if ch == 0: measurements.append(angle)
if ch == 1: measurements.append(angle+0.25)
if ch == 2: measurements.append(angle-0.25)
else:
# add and augment non-zero
# left Image
if file_from == "w":
features.append(cv2.imread(CORRECTED_PATH +
item[1].split("\\")[-1]))
else:
features.append(cv2.imread(CORRECTED_PATH +
item[1].split("/")[-1]))
measurements.append(angle+0.25)
# right image
if file_from == "w":
features.append(cv2.imread(CORRECTED_PATH +
item[2].split("\\")[-1]))
else:
features.append(cv2.imread(CORRECTED_PATH +
item[2].split("/")[-1]))
measurements.append(angle-0.25)
# now build augmented images
aug_features, aug_measurements = [], []
for feature, measurement in zip(features, measurements):
aug_features.append(feature)
aug_measurements.append(measurement)
# now also add a flipped image for non zero angles
if measurement != 0:
aug_features.append(cv2.flip(feature, 1))
aug_measurements.append(measurement*-1.0)
yield shuffle(np.array(aug_features),
np.array(aug_measurements))
def training_model(X_train, X_valid):
"""
function to train model
args: training and validation data files
"""
# create data generators
batch_size = 32
X_gen_train = generate_data(X_train, FILE_FROM, batch_size=batch_size)
X_gen_valid = generate_data(X_valid, FILE_FROM, batch_size=batch_size)
# create model
model = Sequential()
model.add(Lambda(lambda x: x/255.0 - 0.5,
input_shape=(160, 320, 3)))
model.add(Cropping2D(cropping=((70, 25), (0, 0))))
model.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(48, 5, 5, border_mode='same', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))
model.add(Convolution2D(64, 3, 3, border_mode='same', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(100))
model.add(Dropout(0.5))
model.add(Dense(50))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Dropout(0.5))
model.add(Dense(1))
# plot(model, to_file='examples/model.png')
model.compile(loss='mse', optimizer='adam')
history = model.fit_generator(X_gen_train,
samples_per_epoch=len(X_train) * SAMPLES_FACTOR,
nb_epoch=NB_EPOCHS, validation_data=X_gen_valid,
nb_val_samples=len(X_valid) * SAMPLES_FACTOR)
model.save("model.h5")
for item in history.history.keys():
print("key val: {0} is {1}".format(item,
history.history[item]))
def training_model_2(X_train, X_valid):
"""
function to train model
args: training and validation data files
"""
# create data generators
batch_size = 32
X_gen_train = generate_data_2(X_train, FILE_FROM, batch_size=batch_size)
X_gen_valid = generate_data_2(X_valid, FILE_FROM, batch_size=batch_size)
# create model
model = Sequential()
model.add(Lambda(lambda x: x/255.0 - 0.5,
input_shape=(160, 320, 3)))
model.add(Cropping2D(cropping=((70, 25), (0, 0))))
model.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(48, 5, 5, border_mode='same', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))
model.add(Convolution2D(64, 3, 3, border_mode='same', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='valid', subsample=(1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(100))
#model.add(Dropout(0.5))
model.add(Dense(50))
#model.add(Dropout(0.5))
model.add(Dense(10))
#model.add(Dropout(0.5))
model.add(Dense(1))
# plot(model, to_file='examples/model.png')
model.compile(loss='mse', optimizer='adam')
history = model.fit_generator(X_gen_train,
samples_per_epoch=len(X_train) * SAMPLES_FACTOR,
nb_epoch=NB_EPOCHS, validation_data=X_gen_valid,
nb_val_samples=len(X_valid) * SAMPLES_FACTOR)
model.save("model.h5")
for item in history.history.keys():
print("key val: {0} is {1}".format(item,
history.history[item]))
if __name__ == "__main__":
train_data, validation_data = read_data_from_file()
training_model_2(train_data, validation_data)
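# The first two layers of both models normalize pixels into [-0.5, 0.5] and
# crop 70 rows off the top and 25 off the bottom, so the convolutional stack
# sees 65x320x3 input. The same arithmetic in plain Python (a sketch of what
# the Lambda and Cropping2D layers compute, not Keras itself):

```python
def normalize(pixel):
    """Lambda layer: scale a 0..255 pixel value into -0.5..0.5."""
    return pixel / 255.0 - 0.5

def cropped_shape(height, width, top, bottom):
    """Output height/width of Cropping2D(cropping=((top, bottom), (0, 0)))."""
    return (height - top - bottom, width)

# the (160, 320) camera frame cropped as in training_model
shape = cropped_shape(160, 320, 70, 25)
```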
# pydra/engine/tests/test_shelltask_inputspec.py (htwangtw/pydra, Apache-2.0)
import attr
import typing as ty
from pathlib import Path
import pytest
from ..task import ShellCommandTask
from ..specs import (
ShellOutSpec,
ShellSpec,
SpecInfo,
File,
MultiInputObj,
MultiInputFile,
MultiOutputFile,
)
from .utils import use_validator
from ..core import Workflow
from ..submitter import Submitter
def test_shell_cmd_execargs_1():
# separate command into exec + args
shelly = ShellCommandTask(executable="executable", args="arg")
assert shelly.cmdline == "executable arg"
def test_shell_cmd_execargs_2():
# separate command into exec + args
shelly = ShellCommandTask(executable=["cmd_1", "cmd_2"], args="arg")
assert shelly.cmdline == "cmd_1 cmd_2 arg"
def test_shell_cmd_inputs_1():
"""additional input with provided position"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inp1", "argstr": ""},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", args="arg", inpA="inp1", input_spec=my_input_spec
)
assert shelly.cmdline == "executable inp1 arg"
def test_shell_cmd_inputs_1a():
"""additional input without provided position"""
my_input_spec = SpecInfo(
name="Input",
fields=[
("inpA", attr.ib(type=str, metadata={"help_string": "inpA", "argstr": ""}))
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", args="arg", inpA="inpNone1", input_spec=my_input_spec
)
# inpA should be the first argument after the executable
assert shelly.cmdline == "executable inpNone1 arg"
def test_shell_cmd_inputs_1b():
"""additional input with negative position"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": -1, "help_string": "inpA", "argstr": ""},
),
)
],
bases=(ShellSpec,),
)
# separate command into exec + args
shelly = ShellCommandTask(
executable="executable", args="arg", inpA="inp-1", input_spec=my_input_spec
)
# inp1 should be last before arg
assert shelly.cmdline == "executable inp-1 arg"
def test_shell_cmd_inputs_1_st():
"""additional input with provided position, checking cmdline when splitter"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inp1", "argstr": ""},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
name="shelly",
executable="executable",
args="arg",
inpA=["inp1", "inp2"],
input_spec=my_input_spec,
).split("inpA")
# cmdline should be a list
assert shelly.cmdline[0] == "executable inp1 arg"
assert shelly.cmdline[1] == "executable inp2 arg"
def test_shell_cmd_inputs_2():
"""additional inputs with provided positions"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 2, "help_string": "inpA", "argstr": ""},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpN", "argstr": ""},
),
),
],
bases=(ShellSpec,),
)
# separate command into exec + args
shelly = ShellCommandTask(
executable="executable", inpB="inp1", inpA="inp2", input_spec=my_input_spec
)
assert shelly.cmdline == "executable inp1 inp2"
def test_shell_cmd_inputs_2a():
"""additional inputs without provided positions"""
my_input_spec = SpecInfo(
name="Input",
fields=[
("inpA", attr.ib(type=str, metadata={"help_string": "inpA", "argstr": ""})),
("inpB", attr.ib(type=str, metadata={"help_string": "inpB", "argstr": ""})),
],
bases=(ShellSpec,),
)
# separate command into exec + args
shelly = ShellCommandTask(
executable="executable",
inpA="inpNone1",
inpB="inpNone2",
input_spec=my_input_spec,
)
# position taken from the order in input spec
assert shelly.cmdline == "executable inpNone1 inpNone2"
def test_shell_cmd_inputs_2_err():
"""additional inputs with provided positions (exception due to the duplication)"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpA", "argstr": ""},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpB", "argstr": ""},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA="inp1", inpB="inp2", input_spec=my_input_spec
)
with pytest.raises(Exception) as e:
shelly.cmdline
assert "1 is already used" in str(e.value)
def test_shell_cmd_inputs_2_noerr():
"""additional inputs with provided positions
(duplication of the position doesn't lead to an error, since only one field has a value)
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpA", "argstr": ""},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpB", "argstr": ""},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA="inp1", input_spec=my_input_spec
)
shelly.cmdline
def test_shell_cmd_inputs_3():
"""additional inputs: positive pos, negative pos and no pos"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpA", "argstr": ""},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={"position": -1, "help_string": "inpB", "argstr": ""},
),
),
("inpC", attr.ib(type=str, metadata={"help_string": "inpC", "argstr": ""})),
],
bases=(ShellSpec,),
)
# separate command into exec + args
shelly = ShellCommandTask(
executable="executable",
inpA="inp1",
inpB="inp-1",
inpC="inpNone",
input_spec=my_input_spec,
)
# input without a position should come between the positive and negative positions
assert shelly.cmdline == "executable inp1 inpNone inp-1"
def test_shell_cmd_inputs_argstr_1():
"""additional string inputs with argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpA", "argstr": "-v"},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA="inp1", input_spec=my_input_spec
)
# flag used before inp1
assert shelly.cmdline == "executable -v inp1"
def test_shell_cmd_inputs_argstr_2():
"""additional bool inputs with argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=bool,
metadata={"position": 1, "help_string": "inpA", "argstr": "-v"},
),
)
],
bases=(ShellSpec,),
)
# separate command into exec + args
shelly = ShellCommandTask(
executable="executable", args="arg", inpA=True, input_spec=my_input_spec
)
# a flag is used without any additional argument
assert shelly.cmdline == "executable -v arg"
def test_shell_cmd_inputs_list_1():
"""providing list as an additional input, no sep, no argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=ty.List[str],
metadata={"position": 2, "help_string": "inpA", "argstr": ""},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["el_1", "el_2", "el_3"], input_spec=my_input_spec
)
# multiple elements
assert shelly.cmdline == "executable el_1 el_2 el_3"
def test_shell_cmd_inputs_list_2():
"""providing list as an additional input, no sep, but argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=ty.List[str],
metadata={"position": 2, "help_string": "inpA", "argstr": "-v"},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["el_1", "el_2", "el_3"], input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v el_1 el_2 el_3"
def test_shell_cmd_inputs_list_3():
"""providing list as an additional input, no sep, argstr with ..."""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=ty.List[str],
metadata={"position": 2, "help_string": "inpA", "argstr": "-v..."},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["el_1", "el_2", "el_3"], input_spec=my_input_spec
)
# a flag is repeated
assert shelly.cmdline == "executable -v el_1 -v el_2 -v el_3"
def test_shell_cmd_inputs_list_sep_1():
"""providing list as an additional input:, sep, no argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa", "bbb", "ccc"], input_spec=my_input_spec
)
# separated by commas
assert shelly.cmdline == "executable aaa,bbb,ccc"
def test_shell_cmd_inputs_list_sep_2():
"""providing list as an additional input:, sep, and argstr"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa", "bbb", "ccc"], input_spec=my_input_spec
)
# a flag is used once
assert shelly.cmdline == "executable -v aaa,bbb,ccc"
def test_shell_cmd_inputs_list_sep_2a():
"""providing list as an additional input:, sep, and argstr with f-string"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v {inpA}",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa", "bbb", "ccc"], input_spec=my_input_spec
)
# a flag is used once
assert shelly.cmdline == "executable -v aaa,bbb,ccc"
def test_shell_cmd_inputs_list_sep_3():
"""providing list as an additional input:, sep, argstr with ..."""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v...",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa", "bbb", "ccc"], input_spec=my_input_spec
)
# a flag is repeated
assert shelly.cmdline == "executable -v aaa, -v bbb, -v ccc"
def test_shell_cmd_inputs_list_sep_3a():
"""providing list as an additional input:, sep, argstr with ... and f-string"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v {inpA}...",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa", "bbb", "ccc"], input_spec=my_input_spec
)
# a flag is repeated
assert shelly.cmdline == "executable -v aaa, -v bbb, -v ccc"
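# The sep/argstr interplay exercised above can be summarized: a plain argstr
# emits the flag once before the sep-joined list, while a trailing "..."
# repeats the flag for every element. A rough pure-Python model that
# reproduces the strings these tests assert (an illustration only, not
# pydra's actual formatting code):

```python
def render_list(values, argstr="", sep=","):
    """Mimic how a list input is rendered on the command line."""
    if argstr.endswith("..."):
        # the flag is repeated for every element
        flag = argstr[:-3]
        return (sep + " ").join(f"{flag} {v}" for v in values)
    joined = sep.join(values)
    # the flag (if any) appears once, before the joined values
    return f"{argstr} {joined}" if argstr else joined
```

# e.g. render_list(["aaa", "bbb", "ccc"], argstr="-v...") reproduces the
# "-v aaa, -v bbb, -v ccc" tail asserted in test_shell_cmd_inputs_list_sep_3.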
def test_shell_cmd_inputs_sep_4():
"""providing 1-el list as an additional input:, sep, argstr with ...,"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v...",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["aaa"], input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v aaa"
def test_shell_cmd_inputs_sep_4a():
"""providing str instead of list as an additional input:, sep, argstr with ..."""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"sep": ",",
"argstr": "-v...",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA="aaa", input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v aaa"
def test_shell_cmd_inputs_format_1():
"""additional inputs with argstr that has string formatting"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "-v {inpA}",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA="aaa", input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v aaa"
def test_shell_cmd_inputs_format_2():
"""additional inputs with argstr that has string formatting and ..."""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "-v {inpA}...",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=["el_1", "el_2"], input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v el_1 -v el_2"
def test_shell_cmd_inputs_format_3():
"""adding float formatting for argstr with input field"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=float,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "-v {inpA:.5f}",
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", inpA=0.007, input_spec=my_input_spec
)
assert shelly.cmdline == "executable -v 0.00700"
def test_shell_cmd_inputs_mandatory_1():
"""additional inputs with mandatory=True"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(executable="executable", input_spec=my_input_spec)
with pytest.raises(Exception) as e:
shelly.cmdline
assert "mandatory" in str(e.value)
def test_shell_cmd_inputs_not_given_1():
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"arg1",
attr.ib(
type=MultiInputObj,
metadata={
"argstr": "--arg1",
"help_string": "Command line argument 1",
},
),
),
(
"arg2",
attr.ib(
type=MultiInputObj,
metadata={
"argstr": "--arg2",
"help_string": "Command line argument 2",
},
),
),
(
"arg3",
attr.ib(
type=File,
metadata={
"argstr": "--arg3",
"help_string": "Command line argument 3",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
name="shelly", executable="executable", input_spec=my_input_spec
)
shelly.inputs.arg2 = "argument2"
assert shelly.cmdline == "executable --arg2 argument2"
def test_shell_cmd_inputs_template_1():
"""additional inputs, one uses output_file_template (and argstr)"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA"
)
# outA has argstr in the metadata fields, so it's a part of the command line
# the full path will be used in the command line
assert shelly.cmdline == f"executable inpA -o {str(shelly.output_dir / 'inpA_out')}"
# checking if outA in the output fields
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA"]
def test_shell_cmd_inputs_template_1a():
"""additional inputs, one uses output_file_template (without argstr)"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"help_string": "outA",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA"
)
# outA has no argstr in metadata, so it's not a part of the command line
assert shelly.cmdline == "executable inpA"
# TODO: after deciding how we use requires/templates
def test_shell_cmd_inputs_template_2():
"""additional inputs, one uses output_file_template (and argstr, but input not provided)"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpB",
attr.ib(
type=str,
metadata={"position": 1, "help_string": "inpB", "argstr": ""},
),
),
(
"outB",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outB",
"argstr": "-o",
"output_file_template": "{inpB}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(executable="executable", input_spec=my_input_spec)
# inpB not in the inputs, so no outB in the command line
assert shelly.cmdline == "executable"
# checking if outB in the output fields
assert shelly.output_names == ["return_code", "stdout", "stderr", "outB"]
def test_shell_cmd_inputs_template_3():
"""additional inputs with output_file_template and an additional
read-only fields that combine two outputs together in the command line
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "inpB",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"help_string": "outA",
"output_file_template": "{inpA}_out",
},
),
),
(
"outB",
attr.ib(
type=str,
metadata={
"help_string": "outB",
"output_file_template": "{inpB}_out",
},
),
),
(
"outAB",
attr.ib(
type=str,
metadata={
"position": -1,
"help_string": "outAB",
"argstr": "-o {outA} {outB}",
"readonly": True,
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", inpB="inpB"
)
# using syntax from the outAB field
assert (
shelly.cmdline
== f"executable inpA inpB -o {str(shelly.output_dir / 'inpA_out')} {str(shelly.output_dir / 'inpB_out')}"
)
# checking if outA and outB in the output fields (outAB should not be)
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA", "outB"]
def test_shell_cmd_inputs_template_3a():
"""additional inputs with output_file_template and an additional
read-only fields that combine two outputs together in the command line
testing a different order within the input spec
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "inpB",
"argstr": "",
"mandatory": True,
},
),
),
(
"outAB",
attr.ib(
type=str,
metadata={
"position": -1,
"help_string": "outAB",
"argstr": "-o {outA} {outB}",
"readonly": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"help_string": "outA",
"output_file_template": "{inpA}_out",
},
),
),
(
"outB",
attr.ib(
type=str,
metadata={
"help_string": "outB",
"output_file_template": "{inpB}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", inpB="inpB"
)
# using syntax from the outAB field
assert (
shelly.cmdline
== f"executable inpA inpB -o {str(shelly.output_dir / 'inpA_out')} {str(shelly.output_dir / 'inpB_out')}"
)
# checking if outA and outB in the output fields (outAB should not be)
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA", "outB"]
# TODO: after deciding how we use requires/templates
def test_shell_cmd_inputs_template_4():
"""additional inputs with output_file_template and an additional
read-only fields that combine two outputs together in the command line
one output_file_template can't be resolved - no inpB is provided
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpB",
attr.ib(
type=str,
metadata={"position": 2, "help_string": "inpB", "argstr": ""},
),
),
(
"outAB",
attr.ib(
type=str,
metadata={
"position": -1,
"help_string": "outAB",
"argstr": "-o {outA} {outB}",
"readonly": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"help_string": "outA",
"output_file_template": "{inpA}_out",
},
),
),
(
"outB",
attr.ib(
type=str,
metadata={
"help_string": "outB",
"output_file_template": "{inpB}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA"
)
# inpB is not provided so outB not in the command line
assert shelly.cmdline == f"executable inpA -o {str(shelly.output_dir / 'inpA_out')}"
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA", "outB"]
def test_shell_cmd_inputs_template_5_ex():
"""checking if the exception is raised for read-only fields when input is set"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"outAB",
attr.ib(
type=str,
metadata={
"position": -1,
"help_string": "outAB",
"argstr": "-o",
"readonly": True,
},
),
)
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, outAB="outAB"
)
with pytest.raises(Exception) as e:
shelly.cmdline
assert "read only" in str(e.value)
def test_shell_cmd_inputs_template_6():
"""additional inputs with output_file_template that has type ty.Union[str, bool]
no default is set, so if nothing is provided as an input, the output is used
whenever the template can be formatted
(the same way as for templates that has type=str)
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=ty.Union[str, bool],
metadata={
"position": 2,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
# no input for outA (and no default value), so the output is created whenever the
# template can be formatted (the same way as for templates that has type=str)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA"
)
assert shelly.cmdline == f"executable inpA -o {str(shelly.output_dir / 'inpA_out')}"
# a string is provided for outA, so this should be used as the outA value
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA="outA"
)
assert shelly.cmdline == "executable inpA -o outA"
# True is provided for outA, so the formatted template should be used as outA value
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA=True
)
assert shelly.cmdline == f"executable inpA -o {str(shelly.output_dir / 'inpA_out')}"
# False is provided for outA, so the outA shouldn't be used
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA=False
)
assert shelly.cmdline == "executable inpA"
def test_shell_cmd_inputs_template_6a():
"""additional inputs with output_file_template that has type ty.Union[str, bool]
and default is set to False,
so if nothing is provided as an input, the output is not used
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=ty.Union[str, bool],
default=False,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
# no input for outA, but default is False, so the outA shouldn't be used
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA"
)
assert shelly.cmdline == "executable inpA"
# a string is provided for outA, so this should be used as the outA value
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA="outA"
)
assert shelly.cmdline == "executable inpA -o outA"
# True is provided for outA, so the formatted template should be used as outA value
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA=True
)
assert shelly.cmdline == f"executable inpA -o {str(shelly.output_dir / 'inpA_out')}"
# False is provided for outA, so the outA shouldn't be used
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA="inpA", outA=False
)
assert shelly.cmdline == "executable inpA"
def test_shell_cmd_inputs_template_7(tmpdir):
"""additional inputs uses output_file_template with a suffix (no extension)
no keep_extension is used
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("a_file.txt")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file
)
    # outA should be formatted in a way that the .txt extension goes to the end
assert (
shelly.cmdline
== f"executable {tmpdir.join('a_file.txt')} {str(shelly.output_dir / 'a_file_out.txt')}"
)
def test_shell_cmd_inputs_template_7a(tmpdir):
"""additional inputs uses output_file_template with a suffix (no extension)
keep_extension is True (as default)
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "",
"keep_extension": True,
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("a_file.txt")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file
)
    # outA should be formatted in a way that the .txt extension goes to the end
assert (
shelly.cmdline
== f"executable {tmpdir.join('a_file.txt')} {str(shelly.output_dir / 'a_file_out.txt')}"
)
def test_shell_cmd_inputs_template_7b(tmpdir):
"""additional inputs uses output_file_template with a suffix (no extension)
keep extension is False (so the extension is removed when creating the output)
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "",
"keep_extension": False,
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("a_file.txt")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file
)
    # keep_extension is False, so the .txt extension is removed when creating outA
assert (
shelly.cmdline
== f"executable {tmpdir.join('a_file.txt')} {str(shelly.output_dir / 'a_file_out')}"
)
def test_shell_cmd_inputs_template_8(tmpdir):
"""additional inputs uses output_file_template with a suffix and an extension"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "",
"output_file_template": "{inpA}_out.txt",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("a_file.t")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file
)
# outA should be formatted in a way that inpA extension is removed and the template extension is used
assert (
shelly.cmdline
== f"executable {tmpdir.join('a_file.t')} {str(shelly.output_dir / 'a_file_out.txt')}"
)
def test_shell_cmd_inputs_template_9(tmpdir):
"""additional inputs, one uses output_file_template with two fields:
one File and one ints - the output should be recreated from the template
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpInt",
attr.ib(
type=int,
metadata={
"position": 2,
"help_string": "inp int",
"argstr": "-i",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 3,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_{inpInt}_out.txt",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("inpA.t")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file, inpInt=3
)
assert (
shelly.cmdline
== f"executable {tmpdir.join('inpA.t')} -i 3 -o {str(shelly.output_dir / 'inpA_3_out.txt')}"
)
# checking if outA in the output fields
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA"]
def test_shell_cmd_inputs_template_9a(tmpdir):
"""additional inputs, one uses output_file_template with two fields:
one file and one string without extension - should be fine
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpStr",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "inp str",
"argstr": "-i",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 3,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_{inpStr}_out.txt",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("inpA.t")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=inpA_file, inpStr="hola"
)
assert (
shelly.cmdline
== f"executable {tmpdir.join('inpA.t')} -i hola -o {str(shelly.output_dir / 'inpA_hola_out.txt')}"
)
# checking if outA in the output fields
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA"]
def test_shell_cmd_inputs_template_9b_err(tmpdir):
"""output_file_template with two fields that are both Files,
an exception should be raised
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpFile",
attr.ib(
type=File,
metadata={
"position": 2,
"help_string": "inp file",
"argstr": "-i",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 3,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_{inpFile}_out.txt",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("inpA.t")
inpA_file.write("content")
inpFile_file = tmpdir.join("inpFile.t")
inpFile_file.write("content")
shelly = ShellCommandTask(
executable="executable",
input_spec=my_input_spec,
inpA=inpA_file,
inpFile=inpFile_file,
)
# the template has two files so the exception should be raised
with pytest.raises(Exception, match="can't have multiple paths"):
shelly.cmdline
def test_shell_cmd_inputs_template_9c_err(tmpdir):
"""output_file_template with two fields: a file and a string with extension,
that should be used as an additional file and the exception should be raised
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=File,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"inpStr",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "inp str with extension",
"argstr": "-i",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 3,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_{inpStr}_out.txt",
},
),
),
],
bases=(ShellSpec,),
)
inpA_file = tmpdir.join("inpA.t")
inpA_file.write("content")
shelly = ShellCommandTask(
executable="executable",
input_spec=my_input_spec,
inpA=inpA_file,
inpStr="hola.txt",
)
    # inpStr has an extension, so it should be treated as a second file in the template formatting
    # and the exception should be raised
with pytest.raises(Exception, match="can't have multiple paths"):
shelly.cmdline
def test_shell_cmd_inputs_template_10():
"""output_file_template uses a float field with formatting"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=float,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "{inpA:.1f}",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "file_{inpA:.1f}_out",
},
),
),
],
bases=(ShellSpec,),
)
shelly = ShellCommandTask(
executable="executable", input_spec=my_input_spec, inpA=3.3456
)
# outA has argstr in the metadata fields, so it's a part of the command line
    # the full path will be used in the command line
assert (
shelly.cmdline == f"executable 3.3 -o {str(shelly.output_dir / 'file_3.3_out')}"
)
# checking if outA in the output fields
assert shelly.output_names == ["return_code", "stdout", "stderr", "outA"]
def test_shell_cmd_inputs_template_11():
input_fields = [
(
"inputFiles",
attr.ib(
type=MultiInputFile,
metadata={
"argstr": "--inputFiles ...",
"help_string": "The list of input image files to be segmented.",
},
),
)
]
output_fields = [
(
"outputFiles",
attr.ib(
type=MultiOutputFile,
metadata={
"help_string": "Corrected Output Images: should specify the same number of images as inputVolume, if only one element is given, then it is used as a file pattern where %s is replaced by the imageVolumeType, and %d by the index list location.",
"output_file_template": "{inputFiles}",
},
),
)
]
input_spec = SpecInfo(name="Input", fields=input_fields, bases=(ShellSpec,))
output_spec = SpecInfo(name="Output", fields=output_fields, bases=(ShellOutSpec,))
task = ShellCommandTask(
name="echoMultiple",
executable="echo",
input_spec=input_spec,
output_spec=output_spec,
)
wf = Workflow(name="wf", input_spec=["inputFiles"], inputFiles=["test1", "test2"])
task.inputs.inputFiles = wf.lzin.inputFiles
wf.add(task)
wf.set_output([("out", wf.echoMultiple.lzout.outputFiles)])
with Submitter(plugin="cf") as sub:
sub(wf)
result = wf.result()
for out_file in result.output.out:
assert out_file.name == "test1" or out_file.name == "test2"
def test_shell_cmd_inputs_template_1_st():
"""additional inputs, one uses output_file_template (and argstr)
testing cmdline when splitter defined
"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"inpA",
attr.ib(
type=str,
metadata={
"position": 1,
"help_string": "inpA",
"argstr": "",
"mandatory": True,
},
),
),
(
"outA",
attr.ib(
type=str,
metadata={
"position": 2,
"help_string": "outA",
"argstr": "-o",
"output_file_template": "{inpA}_out",
},
),
),
],
bases=(ShellSpec,),
)
inpA = ["inpA_1", "inpA_2"]
shelly = ShellCommandTask(
name="f",
executable="executable",
input_spec=my_input_spec,
inpA=inpA,
).split("inpA")
cmdline_list = shelly.cmdline
assert len(cmdline_list) == 2
for i in range(2):
path_out = Path(shelly.output_dir[i]) / f"{inpA[i]}_out"
assert cmdline_list[i] == f"executable {inpA[i]} -o {str(path_out)}"
# TODO: after deciding how we use requires/templates
def test_shell_cmd_inputs_di(tmpdir, use_validator):
"""example from #279"""
my_input_spec = SpecInfo(
name="Input",
fields=[
(
"image_dimensionality",
attr.ib(
type=int,
metadata={
"help_string": """
2/3/4
This option forces the image to be treated as a specified-dimensional image.
If not specified, the program tries to infer the dimensionality from
the input image.
""",
"allowed_values": [2, 3, 4],
"argstr": "-d",
},
),
),
(
"inputImageFilename",
attr.ib(
type=File,
metadata={
"help_string": "A scalar image is expected as input for noise correction.",
"argstr": "-i",
"mandatory": True,
},
),
),
(
"noise_model",
attr.ib(
type=str,
metadata={
"help_string": """
Rician/(Gaussian)
Employ a Rician or Gaussian noise model.
""",
"allowed_values": ["Rician", "Gaussian"],
"argstr": "-n",
},
),
),
(
"maskImageFilename",
attr.ib(
type=str,
metadata={
"help_string": "If a mask image is specified, denoising is only performed in the mask region.",
"argstr": "-x",
},
),
),
(
"shrink_factor",
attr.ib(
type=int,
default=1,
metadata={
"help_string": """
(1)/2/3/...
Running noise correction on large images can be time consuming.
To lessen computation time, the input image can be resampled.
The shrink factor, specified as a single integer, describes this
resampling. Shrink factor = 1 is the default.
""",
"argstr": "-s",
},
),
),
(
"patch_radius",
attr.ib(
type=int,
default=1,
metadata={
"help_string": "Patch radius. Default = 1x1x1",
"argstr": "-p",
},
),
),
(
"search_radius",
attr.ib(
type=int,
default=2,
metadata={
"help_string": "Search radius. Default = 2x2x2.",
"argstr": "-r",
},
),
),
(
"correctedImage",
attr.ib(
type=str,
metadata={
"help_string": """
The output consists of the noise corrected version of the input image.
Optionally, one can also output the estimated noise image.
""",
"output_file_template": "{inputImageFilename}_out",
},
),
),
(
"noiseImage",
attr.ib(
type=ty.Union[str, bool],
default=False,
metadata={
"help_string": """
The output consists of the noise corrected version of the input image.
Optionally, one can also output the estimated noise image.
""",
"output_file_template": "{inputImageFilename}_noise",
},
),
),
(
"output",
attr.ib(
type=str,
metadata={
"help_string": "Combined output",
"argstr": "-o [{correctedImage}, {noiseImage}]",
"position": -1,
"readonly": True,
},
),
),
(
"version",
attr.ib(
type=bool,
default=False,
metadata={
"help_string": "Get Version Information.",
"argstr": "--version",
},
),
),
(
"verbose",
attr.ib(
type=int,
default=0,
metadata={"help_string": "(0)/1. Verbose output. ", "argstr": "-v"},
),
),
(
"help_short",
attr.ib(
type=bool,
default=False,
metadata={
"help_string": "Print the help menu (short version)",
"argstr": "-h",
},
),
),
(
"help",
attr.ib(
type=int,
metadata={
"help_string": "Print the help menu.",
"argstr": "--help",
},
),
),
],
bases=(ShellSpec,),
)
my_input_file = tmpdir.join("a_file.ext")
my_input_file.write("content")
# no input provided
shelly = ShellCommandTask(executable="DenoiseImage", input_spec=my_input_spec)
with pytest.raises(Exception) as e:
shelly.cmdline
assert "mandatory" in str(e.value)
# input file name, noiseImage is not set, so using default value False
shelly = ShellCommandTask(
executable="DenoiseImage",
inputImageFilename=my_input_file,
input_spec=my_input_spec,
)
assert (
shelly.cmdline
== f"DenoiseImage -i {tmpdir.join('a_file.ext')} -s 1 -p 1 -r 2 -o [{str(shelly.output_dir / 'a_file_out.ext')}]"
)
    # input file name, noiseImage is set to True, so the template is used in the output
shelly = ShellCommandTask(
executable="DenoiseImage",
inputImageFilename=my_input_file,
input_spec=my_input_spec,
noiseImage=True,
)
assert (
shelly.cmdline == f"DenoiseImage -i {tmpdir.join('a_file.ext')} -s 1 -p 1 -r 2 "
f"-o [{str(shelly.output_dir / 'a_file_out.ext')}, {str(shelly.output_dir / 'a_file_noise.ext')}]"
)
# input file name and help_short
shelly = ShellCommandTask(
executable="DenoiseImage",
inputImageFilename=my_input_file,
help_short=True,
input_spec=my_input_spec,
)
assert (
shelly.cmdline
== f"DenoiseImage -i {tmpdir.join('a_file.ext')} -s 1 -p 1 -r 2 -h -o [{str(shelly.output_dir / 'a_file_out.ext')}]"
)
assert shelly.output_names == [
"return_code",
"stdout",
"stderr",
"correctedImage",
"noiseImage",
]
# adding image_dimensionality that has allowed_values [2, 3, 4]
shelly = ShellCommandTask(
executable="DenoiseImage",
inputImageFilename=my_input_file,
input_spec=my_input_spec,
image_dimensionality=2,
)
assert (
shelly.cmdline
== f"DenoiseImage -d 2 -i {tmpdir.join('a_file.ext')} -s 1 -p 1 -r 2 -o [{str(shelly.output_dir / 'a_file_out.ext')}]"
)
# adding image_dimensionality that has allowed_values [2, 3, 4] and providing 5 - exception should be raised
with pytest.raises(ValueError) as excinfo:
shelly = ShellCommandTask(
executable="DenoiseImage",
inputImageFilename=my_input_file,
input_spec=my_input_spec,
image_dimensionality=5,
)
assert "value of image_dimensionality" in str(excinfo.value)
# embedcreator/sending.py (from Kuro-Rui/flare-cogs, MIT license)
import traceback
from io import BytesIO
from typing import Optional
import discord
from redbot.core import commands
from .abc import MixinMeta
from .embedmixin import embed
class EmbedSending(MixinMeta):
@embed.command(name="file")
@commands.bot_has_permissions(embed_links=True)
async def embed_file(self, ctx, channel: Optional[discord.TextChannel] = None):
"""Send an embed from a json file."""
channel = channel or ctx.channel
if not channel.permissions_for(ctx.me).send_messages:
return await ctx.send(f"I do not have permission to send messages in {channel}.")
if not channel.permissions_for(ctx.author).send_messages:
return await ctx.send(f"You do not have permission to send messages in {channel}.")
if not ctx.message.attachments:
return await ctx.send("You need to upload a file for this command to work")
with BytesIO() as fp:
await ctx.message.attachments[0].save(fp)
data = fp.read().decode("utf-8")
await self.build_embed(ctx, data=data, channel=channel)
@embed.command(name="json")
@commands.bot_has_permissions(embed_links=True)
async def embed_json(self, ctx, *, raw_json: str):
"""Send an embed from directly pasting json."""
channel = ctx.channel
raw_json = self.cleanup_code(raw_json)
if not channel.permissions_for(ctx.me).send_messages:
return await ctx.send(f"I do not have permission to send messages in {channel}.")
if not channel.permissions_for(ctx.author).send_messages:
return await ctx.send(f"You do not have permission to send messages in {channel}.")
await self.build_embed(ctx, data=raw_json, channel=channel)
@embed.command()
@commands.bot_has_permissions(embed_links=True)
async def send(self, ctx, channel: Optional[discord.TextChannel] = None, *, name: str):
"""Send a saved embed."""
channel = channel or ctx.channel
embeds_stored = await self.config.guild(ctx.guild).embeds()
if name not in embeds_stored:
return await ctx.send("This embed doesn't exist in this guild.")
data = embeds_stored[name]["data"]
await self.build_embed(ctx, data=data, channel=channel)
@embed.command()
@commands.bot_has_permissions(embed_links=True)
async def edit(self, ctx, message: discord.Message, *, name: str):
"""Edit a bot sent message with a new embed.
        The message can be given by its message ID.
        Messages in other channels must follow the ChannelID-MessageID format."""
if message.guild != ctx.guild:
return await ctx.send("I can only edit messages in this server.")
if message.author != ctx.guild.me:
return await ctx.send("I cannot edit messages that are not sent by me.")
embeds_stored = await self.config.guild(ctx.guild).embeds()
if name not in embeds_stored:
return await ctx.send("This embed doesn't exist.")
data = embeds_stored[name]["data"]
embed, content = await self.validate_data(ctx, data=data)
if not embed:
return
try:
await message.edit(content=content, embed=embed)
await ctx.tick()
except discord.errors.HTTPException as error:
err = "\n".join(traceback.format_exception_only(type(error), error))
em = discord.Embed(
title="Parsing Error",
description=f"The following is an extract of the error:\n```py\n{err}``` \nValidate your input by using any available embed generator online.",
colour=discord.Color.red(),
)
await ctx.send(embed=em)
@embed.command(name="editjson", aliases=["edit-json", "editraw"])
@commands.bot_has_permissions(embed_links=True)
async def edit_json(self, ctx, message: discord.Message, *, raw_json: str):
"""Edit a bot sent message with a new embed from JSON.
Message format is in messageID format.
Messages in other channels must follow ChannelID-MessageID format.
To add content, add a "content" entry to the json."""
if message.guild != ctx.guild:
return await ctx.send("I can only edit messages in this server.")
if message.author != ctx.guild.me:
return await ctx.send("I cannot edit messages that are not sent by me.")
data = self.cleanup_code(raw_json)
embed, content = await self.validate_data(ctx, data=data)
if not embed:
return
try:
await message.edit(content=content, embed=embed)
await ctx.tick()
except discord.errors.HTTPException as error:
err = "\n".join(traceback.format_exception_only(type(error), error))
em = discord.Embed(
title="Parsing Error",
description=f"The following is an extract of the error:\n```py\n{err}``` \nValidate your input by using any available embed generator online.",
colour=discord.Color.red(),
)
await ctx.send(embed=em)
f41cdee0a997483f62dcc59e9f63cfead28b18b4 | 41 | py | Python | ctypes_generation/definitions/winerror_template.py | IMULMUL/PythonForWindows | 61e027a678d5b87aa64fcf8a37a6661a86236589 | [
"BSD-3-Clause"
] | 479 | 2016-01-08T00:53:34.000Z | 2022-03-22T10:28:19.000Z | ctypes_generation/definitions/winerror_template.py | IMULMUL/PythonForWindows | 61e027a678d5b87aa64fcf8a37a6661a86236589 | [
"BSD-3-Clause"
] | 38 | 2017-12-29T17:09:04.000Z | 2022-01-31T08:27:47.000Z | ctypes_generation/definitions/winerror_template.py | IMULMUL/PythonForWindows | 61e027a678d5b87aa64fcf8a37a6661a86236589 | [
"BSD-3-Clause"
] | 103 | 2016-01-10T01:32:17.000Z | 2021-12-24T17:21:06.000Z | from .flag import make_flag, FlagMapper
| 13.666667 | 39 | 0.804878 | 6 | 41 | 5.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 2 | 40 | 20.5 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f449f5adfa9f17f206166be7c75ccd6e725e379a | 4,443 | py | Python | SafeGraph/0.pull_up_pois.py | yeabinmoon/economics | 53bfc51f2227755948ac937c3e763b747d3aedec | [
"MIT"
] | null | null | null | SafeGraph/0.pull_up_pois.py | yeabinmoon/economics | 53bfc51f2227755948ac937c3e763b747d3aedec | [
"MIT"
] | null | null | null | SafeGraph/0.pull_up_pois.py | yeabinmoon/economics | 53bfc51f2227755948ac937c3e763b747d3aedec | [
"MIT"
] | null | null | null | """
Created on Jan 14 2021
@author: yeabinmoon
Using monthly visit patterns prior to pandemic (- 2020.02) label which race dominates it
0. Process monthly patterns to see what race distribution of POI
outputs: monthly files in /Users/yeabinmoon/Documents/JMP/data/SafeGraph/race_visitor/monthly
"""
import pandas as pd
import time
from safegraph_py_functions import safegraph_py_functions as sgpy
years = ['2018', '2019']
months = pd.date_range(start='2018-01-01', end='2018-12-31',freq= 'M')
months = list(months.strftime('%m'))
list_files = ['patterns-part1.csv.gz','patterns-part2.csv.gz',
'patterns-part3.csv.gz','patterns-part4.csv.gz']
demo = pd.read_csv('/Users/yeabinmoon/Dropbox (UH-ECON)/Research/JMP/data/open_census/race_cbg.csv',
index_col = 0, dtype = {'poi_cbg':str})
year = years[0]
month = months[0]
files = list_files[0]
for year in years:
for month in months:
start_time_month = time.time()
temp = pd.DataFrame()
for files in list_files:
temp_df = pd.read_csv('/Volumes/LaCie/cg-data/Pattern_1/'+year+'/'+month+'/'+files,
usecols = ['safegraph_place_id','visitor_home_cbgs'],
compression = 'gzip', dtype = {'poi_cbg':str})
temp = pd.concat([temp, temp_df], axis = 0, ignore_index=True)
temp = sgpy.unpack_json_and_merge_fast(temp,json_column = 'visitor_home_cbgs',chunk_n = 1000)
        temp.drop(columns = ['visitor_home_cbgs'], inplace = True)
temp.rename(columns = {'visitor_home_cbgs_key':'poi_cbg'}, inplace = True)
temp = temp.merge(demo, how = 'left', on = 'poi_cbg')
temp.loc[:,'white'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'white']
temp.loc[:,'black'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'black']
temp.loc[:,'asian'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'asian']
temp.loc[:,'hispanic'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'hispanic']
        temp = temp.groupby('safegraph_place_id')[['visitor_home_cbgs_value', 'white', 'black', 'asian', 'hispanic']].sum()
temp.loc[:,'visitor_home_cbgs_value'] = temp.loc[:,'visitor_home_cbgs_value'].apply(pd.to_numeric, downcast = 'integer')
temp.iloc[:,1:] = temp.iloc[:,1:].apply(pd.to_numeric, downcast = 'float')
temp.to_pickle('/Users/yeabinmoon/Documents/JMP/data/SafeGraph/race_visitor/monthly/'+year+'-'+month+'.pickle.gz',
compression = 'gzip')
print("Done", year+'-'+month)
print("%f seconds" % (time.time() - start_time_month))
years = ['2020']
months = ['01','02']
for year in years:
for month in months:
start_time_month = time.time()
temp = pd.DataFrame()
for files in list_files:
temp_df = pd.read_csv('/Volumes/LaCie/cg-data/Pattern/'+year+'/'+month+'/'+files,
usecols = ['safegraph_place_id','visitor_home_cbgs'],
compression = 'gzip', dtype = {'poi_cbg':str})
temp = pd.concat([temp, temp_df], axis = 0, ignore_index=True)
temp = sgpy.unpack_json_and_merge_fast(temp,json_column = 'visitor_home_cbgs',chunk_n = 1000)
        temp.drop(columns = ['visitor_home_cbgs'], inplace = True)
temp.rename(columns = {'visitor_home_cbgs_key':'poi_cbg'}, inplace = True)
temp = temp.merge(demo, how = 'left', on = 'poi_cbg')
temp.loc[:,'white'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'white']
temp.loc[:,'black'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'black']
temp.loc[:,'asian'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'asian']
temp.loc[:,'hispanic'] = temp.loc[:,'visitor_home_cbgs_value'] * temp.loc[:,'hispanic']
        temp = temp.groupby('safegraph_place_id')[['visitor_home_cbgs_value', 'white', 'black', 'asian', 'hispanic']].sum()
temp.loc[:,'visitor_home_cbgs_value'] = temp.loc[:,'visitor_home_cbgs_value'].apply(pd.to_numeric, downcast = 'integer')
temp.iloc[:,1:] = temp.iloc[:,1:].apply(pd.to_numeric, downcast = 'float')
temp.to_pickle('/Users/yeabinmoon/Documents/JMP/data/SafeGraph/race_visitor/monthly/'+year+'-'+month+'.pickle.gz',
compression = 'gzip')
print("Done", year+'-'+month)
print("%f seconds" % (time.time() - start_time_month))
| 47.265957 | 128 | 0.630205 | 591 | 4,443 | 4.524535 | 0.23181 | 0.073298 | 0.123411 | 0.104712 | 0.782349 | 0.782349 | 0.782349 | 0.782349 | 0.782349 | 0.760658 | 0 | 0.01901 | 0.194913 | 4,443 | 93 | 129 | 47.774194 | 0.728543 | 0.066172 | 0 | 0.730159 | 0 | 0.015873 | 0.297101 | 0.175121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
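The loop body in the script above merges each month's visitor counts with CBG-level race shares, weights the counts, and aggregates per POI. A minimal self-contained sketch of that step (column names `visits`, `poi_cbg`, and `safegraph_place_id` follow the script; the tiny frames are illustrative):

```python
import pandas as pd

def summarise_race_visits(patterns: pd.DataFrame, demo: pd.DataFrame) -> pd.DataFrame:
    """Weight each home-CBG visit count by that CBG's race shares,
    then aggregate per POI (mirrors the loop body in the script above)."""
    merged = patterns.merge(demo, how="left", on="poi_cbg")
    for race in ["white", "black", "asian", "hispanic"]:
        merged[race] = merged["visits"] * merged[race]
    return merged.groupby("safegraph_place_id")[
        ["visits", "white", "black", "asian", "hispanic"]
    ].sum()

# illustrative data: one POI visited from two home CBGs
patterns = pd.DataFrame({
    "safegraph_place_id": ["a", "a"],
    "poi_cbg": ["x", "y"],
    "visits": [10, 4],
})
demo = pd.DataFrame({
    "poi_cbg": ["x", "y"],
    "white": [0.5, 0.25],
    "black": [0.2, 0.25],
    "asian": [0.1, 0.25],
    "hispanic": [0.2, 0.25],
})
result = summarise_race_visits(patterns, demo)
```

Note that selecting the aggregated columns with a list (double brackets) avoids the tuple-indexing form after `groupby`, which newer pandas versions reject.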
f4622082a300712690fa3a459c901f2eedd4d8b6 | 376 | py | Python | python/testData/inspections/PyCompatibilityInspection/expressionInDecorators.py | leonardosnt/intellij-community | 7e58970e1043b9e600e1149dc8b227974cec9777 | [
"Apache-2.0"
] | 1 | 2020-08-04T08:23:50.000Z | 2020-08-04T08:23:50.000Z | python/testData/inspections/PyCompatibilityInspection/expressionInDecorators.py | leonardosnt/intellij-community | 7e58970e1043b9e600e1149dc8b227974cec9777 | [
"Apache-2.0"
] | 1 | 2020-07-30T19:04:47.000Z | 2020-07-30T19:04:47.000Z | python/testData/inspections/PyCompatibilityInspection/expressionInDecorators.py | bradleesand/intellij-community | 750ff9c10333c9c1278c00dbe8d88c877b1b9749 | [
"Apache-2.0"
] | null | null | null | <warning descr="Python version 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 do not support arbitrary expressions as a decorator">@x[0][1]</warning>
@my_decorator
def say_whee():
print("Whee!")
<warning descr="Python version 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 do not support arbitrary expressions as a decorator">@foo[0].wrapper</warning>
@foo.bar()
def say_whee():
print("Whee!") | 41.777778 | 143 | 0.670213 | 74 | 376 | 3.364865 | 0.405405 | 0.032129 | 0.144578 | 0.200803 | 0.819277 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0 | 0.095679 | 0.138298 | 376 | 9 | 144 | 41.777778 | 0.67284 | 0 | 0 | 0.5 | 0 | 0.25 | 0.557029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f474d7fc91c25e8ed6dd773d9a096fc0984db9cd | 30 | py | Python | __init__.py | rafaol/Stein-Variational-Gradient-Descent | ae3a6004b68ac9b81bbb5e9e4584f31f1e14de22 | [
"MIT"
] | null | null | null | __init__.py | rafaol/Stein-Variational-Gradient-Descent | ae3a6004b68ac9b81bbb5e9e4584f31f1e14de22 | [
"MIT"
] | null | null | null | __init__.py | rafaol/Stein-Variational-Gradient-Descent | ae3a6004b68ac9b81bbb5e9e4584f31f1e14de22 | [
"MIT"
] | null | null | null | from .python.svgd import SVGD
| 15 | 29 | 0.8 | 5 | 30 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f475ecfaffb63d1513f76c461d4686835983e3ff | 17,407 | py | Python | tests/app/questionnaire/test_answer_store_updater.py | uk-gov-mirror/ONSdigital.eq-survey-runner | b3a67a82347d024177f7fa6bf05499f47ece7ea5 | [
"MIT"
] | 27 | 2015-10-02T17:27:54.000Z | 2021-04-05T12:39:16.000Z | tests/app/questionnaire/test_answer_store_updater.py | uk-gov-mirror/ONSdigital.eq-survey-runner | b3a67a82347d024177f7fa6bf05499f47ece7ea5 | [
"MIT"
] | 1,836 | 2015-09-16T09:59:03.000Z | 2022-03-30T14:27:06.000Z | tests/app/questionnaire/test_answer_store_updater.py | uk-gov-mirror/ONSdigital.eq-survey-runner | b3a67a82347d024177f7fa6bf05499f47ece7ea5 | [
"MIT"
] | 20 | 2016-09-09T16:56:12.000Z | 2021-11-12T06:09:27.000Z | import unittest
from unittest import mock
from unittest.mock import call, MagicMock
from app.data_model.answer_store import Answer, AnswerStore
from app.data_model.questionnaire_store import QuestionnaireStore
from app.forms.questionnaire_form import QuestionnaireForm
from app.questionnaire.answer_store_updater import AnswerStoreUpdater
from app.questionnaire.location import Location
from app.questionnaire.questionnaire_schema import QuestionnaireSchema
class TestAnswerStoreUpdater(unittest.TestCase):
def setUp(self):
super().setUp()
self.location = Location('group_foo', 0, 'block_bar')
self.schema = MagicMock(spec=QuestionnaireSchema)
self.answer_store = MagicMock(spec=AnswerStore)
self.questionnaire_store = MagicMock(
spec=QuestionnaireStore,
completed_blocks=[],
answer_store=self.answer_store
)
self.answer_store_updater = AnswerStoreUpdater(self.location, self.schema, self.questionnaire_store)
self.schema.location_requires_group_instance.return_value = False
def test_save_answers_with_answer_data(self):
self.location.block_id = 'household-composition'
self.schema.get_group_dependencies.return_value = None
self.schema.get_answer_ids_for_block.return_value = ['first-name', 'middle-names', 'last-name']
answers = [
Answer(
group_instance=0,
group_instance_id='group-0',
answer_id='first-name',
answer_instance=0,
value='Joe'
), Answer(
group_instance=0,
group_instance_id='group-0',
answer_id='middle-names',
answer_instance=0,
value=''
), Answer(
group_instance=0,
group_instance_id='group-0',
answer_id='last-name',
answer_instance=0,
value='Bloggs'
), Answer(
group_instance=0,
group_instance_id='group-1',
answer_id='first-name',
answer_instance=1,
value='Bob'
), Answer(
group_instance=0,
group_instance_id='group-1',
answer_id='middle-names',
answer_instance=1,
value=''
), Answer(
group_instance=0,
group_instance_id='group-1',
answer_id='last-name',
answer_instance=1,
value='Seymour'
)
]
form = MagicMock()
form.serialise.return_value = answers
self.answer_store_updater.save_answers(form)
assert self.questionnaire_store.completed_blocks == [self.location]
assert len(answers) == self.answer_store.add_or_update.call_count
# answers should be passed straight through as Answer objects
answer_calls = list(map(mock.call, answers))
assert answer_calls in self.answer_store.add_or_update.call_args_list
def test_save_answers_with_form_data(self):
answer_id = 'answer'
answer_value = '1000'
self.schema.get_answer_ids_for_block.return_value = [answer_id]
self.schema.get_group_dependencies.return_value = None
form = MagicMock(spec=QuestionnaireForm, data={answer_id: answer_value})
self.answer_store_updater.save_answers(form)
assert self.questionnaire_store.completed_blocks == [self.location]
assert self.answer_store.add_or_update.call_count == 1
created_answer = self.answer_store.add_or_update.call_args[0][0]
assert created_answer.__dict__ == {
'group_instance': 0,
'group_instance_id': None,
'answer_id': answer_id,
'answer_instance': 0,
'value': answer_value
}
def test_save_answers_stores_specific_group(self):
answer_id = 'answer'
answer_value = '1000'
self.location.group_instance = 1
self.schema.get_answer_ids_for_block.return_value = [answer_id]
self.schema.get_group_dependencies.return_value = None
form = MagicMock(spec=QuestionnaireForm, data={answer_id: answer_value})
self.answer_store_updater.save_answers(form)
assert self.questionnaire_store.completed_blocks == [self.location]
assert self.answer_store.add_or_update.call_count == 1
created_answer = self.answer_store.add_or_update.call_args[0][0]
assert created_answer.__dict__ == {
'group_instance': self.location.group_instance,
'group_instance_id': None,
'answer_id': answer_id,
'answer_instance': 0,
'value': answer_value
}
def test_save_answers_data_with_default_value(self):
answer_id = 'answer'
default_value = 0
self.schema.get_answer_ids_for_block.return_value = [answer_id]
self.schema.get_answer.return_value = {'default': default_value}
# No answer given so will use schema defined default
form_data = {
answer_id: None
}
form = MagicMock(spec=QuestionnaireForm, data=form_data)
self.answer_store_updater.save_answers(form)
assert self.questionnaire_store.completed_blocks == [self.location]
assert self.answer_store.add_or_update.call_count == 1
created_answer = self.answer_store.add_or_update.call_args[0][0]
assert created_answer.__dict__ == {
'group_instance': 0,
'group_instance_id': None,
'answer_id': answer_id,
'answer_instance': 0,
'value': default_value
}
def test_remove_empty_household_members_from_answer_store(self):
empty_household_answers = [
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
},
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
}
]
self.schema.get_answer_ids_for_block.return_value = ['first-name', 'middle-names', 'last-name']
self.answer_store.filter.return_value = iter(empty_household_answers)
self.answer_store_updater.remove_empty_household_members()
remove_answer_calls = [call(answer_ids=['first-name', 'middle-names', 'last-name'], answer_instance=0),
call(answer_ids=['first-name', 'middle-names', 'last-name'], answer_instance=1)]
# both instances of the answer should be removed
assert remove_answer_calls in self.answer_store.remove.call_args_list
assert self.answer_store.remove.call_count == 2
def test_remove_empty_household_members_values_entered_are_stored(self):
household_answers = [
# Answered
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'Joe'
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'Bloggs'
},
# Unanswered
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
}
]
self.schema.get_answer_ids_for_block.return_value = ['first-name', 'middle-names', 'last-name']
self.answer_store.filter.return_value = iter(household_answers)
self.answer_store_updater.remove_empty_household_members()
# only the second instance of the answer should be removed
assert self.answer_store.remove.call_count == 1
remove_answer_calls = [call(answer_ids=['first-name', 'middle-names', 'last-name'], answer_instance=1)]
assert remove_answer_calls in self.answer_store.remove.call_args_list
def test_remove_empty_household_members_partial_answers_are_stored(self):
self.location.block_id = 'household-composition'
self.schema.get_group_dependencies.return_value = None
household_answers = [
# Answered
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'Joe'
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'J'
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'Bloggs'
},
# Partially answered
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 1,
'value': 'Last name only'
},
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 2,
'value': 'First name only'
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 2,
'value': ''
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 2,
'value': ''
}
]
self.answer_store.filter.return_value = iter(household_answers)
self.schema.get_answer_ids_for_block.return_value = ['first-name', 'middle-names', 'last-name']
self.answer_store_updater.remove_empty_household_members()
# no answers should be removed
assert self.answer_store.remove.called is False
def test_remove_empty_household_members_middle_name_only_not_stored(self):
household_answer = [
{
'answer_id': 'first-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
},
{
'answer_id': 'middle-names',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': 'should not be saved'
},
{
'answer_id': 'last-name',
'group_instance_id': None,
'group_instance': 0,
'answer_instance': 0,
'value': ''
}
]
self.schema.get_answer_ids_for_block.return_value = ['first-name', 'middle-names', 'last-name']
self.answer_store.filter.return_value = iter(household_answer)
self.answer_store_updater.remove_empty_household_members()
# partial answer should be removed
assert self.answer_store.remove.call_count == 1
remove_answer_calls = [call(answer_ids=['first-name', 'middle-names', 'last-name'], answer_instance=0)]
assert remove_answer_calls in self.answer_store.remove.call_args_list
def test_save_answers_removes_completed_block_for_dependencies(self):
parent_id, dependent_answer_id = 'parent_answer', 'dependent_answer'
self.location = parent_location = Location('group', 0, 'min-block')
dependent_location = Location('group', 0, 'dependent-block')
self.questionnaire_store.completed_blocks = [parent_location, dependent_location]
self.schema.get_answer_ids_for_block.return_value = [parent_id]
self.schema.answer_dependencies = {parent_id: [dependent_answer_id]}
self.schema.get_block.return_value = {'id': dependent_location.block_id, 'parent_id': dependent_location.group_id}
# rotate the hash every time get_hash() is called to simulate the stored answer changing
self.answer_store.get_hash.side_effect = ['first_hash', 'second_hash']
form = MagicMock(spec=QuestionnaireForm, data={parent_id: '10'})
self.schema.get_group_dependencies.return_value = None
self.answer_store_updater.save_answers(form)
assert self.answer_store.add_or_update.call_count == 1
assert self.answer_store.remove.called is False
self.questionnaire_store.remove_completed_blocks.assert_called_with(location=dependent_location)
created_answer = self.answer_store.add_or_update.call_args[0][0]
assert created_answer.__dict__ == {
'group_instance': 0,
'group_instance_id': None,
'answer_id': parent_id,
'answer_instance': 0,
'value': '10'
}
def test_save_answers_removes_completed_block_for_dependencies_repeating_on_non_repeating_answer(self):
"""
Tests that all dependent completed blocks are removed across all repeating groups when
parent answer is not in a repeating group
"""
parent_id, dependent_answer_id = 'parent_answer', 'dependent_answer'
self.location = parent_location = Location('group', 0, 'min-block')
dependent_location = Location('group', 0, 'dependent-block')
self.questionnaire_store.completed_blocks = [parent_location, dependent_location]
self.schema.get_answer_ids_for_block.return_value = [parent_id]
self.schema.answer_dependencies = {parent_id: [dependent_answer_id]}
self.schema.get_block.return_value = {'id': dependent_location.block_id, 'parent_id': dependent_location.group_id}
# the dependent answer is in a repeating group, the parent is not
self.schema.answer_is_in_repeating_group = lambda _answer_id: _answer_id == dependent_answer_id
# rotate the hash every time get_hash() is called to simulate the stored answer changing
self.answer_store.get_hash.side_effect = ['first_hash', 'second_hash']
form = MagicMock(spec=QuestionnaireForm, data={parent_id: '10'})
self.answer_store_updater.save_answers(form)
self.questionnaire_store.remove_completed_blocks.assert_called_with(
group_id=dependent_location.group_id,
block_id=dependent_location.block_id
)
assert self.answer_store.add_or_update.call_count == 1
assert self.answer_store.remove.called is False
created_answer = self.answer_store.add_or_update.call_args[0][0]
assert created_answer.__dict__ == {
'group_instance': 0,
'group_instance_id': None,
'answer_id': parent_id,
'answer_instance': 0,
'value': '10'
}
| 37.115139 | 122 | 0.573907 | 1,817 | 17,407 | 5.148046 | 0.080352 | 0.101454 | 0.064144 | 0.058905 | 0.814732 | 0.796985 | 0.761386 | 0.748343 | 0.71873 | 0.68591 | 0 | 0.010257 | 0.327914 | 17,407 | 468 | 123 | 37.194444 | 0.789298 | 0.039754 | 0 | 0.657068 | 0 | 0 | 0.164837 | 0.002519 | 0 | 0 | 0 | 0 | 0.070681 | 1 | 0.028796 | false | 0 | 0.026178 | 0 | 0.057592 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
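The tests above lean on `MagicMock(spec=...)` so that attribute typos fail fast while call bookkeeping (`call_count`, `call_args`) still works. A minimal stdlib-only sketch of the pattern (the `AnswerStore` stand-in here is illustrative, not the real class):

```python
from unittest.mock import MagicMock

class AnswerStore:
    """Illustrative stand-in with the one method the sketch exercises."""
    def add_or_update(self, answer):
        ...

store = MagicMock(spec=AnswerStore)
store.add_or_update("answer-1")

# call bookkeeping works exactly as in the tests above
count = store.add_or_update.call_count            # 1
first_arg = store.add_or_update.call_args[0][0]   # "answer-1"

# a spec'd mock rejects attributes the real class lacks
try:
    store.no_such_method
    raised = False
except AttributeError:
    raised = True
```

Using `spec=` keeps the tests honest: if the real `AnswerStore` API changes, mocked accesses to removed methods raise instead of silently passing.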
f481cf66e1702f3ad941f3d1ea993d72bcbc6f88 | 34 | py | Python | octopus/platforms/ETH/utils/__init__.py | ZarvisD/octopus | 3e238721fccfec69a69a1635b8a0dc485e525e69 | [
"MIT"
] | 2 | 2019-01-19T07:12:02.000Z | 2021-08-14T13:23:37.000Z | octopus/platforms/ETH/utils/__init__.py | ZarvisD/octopus | 3e238721fccfec69a69a1635b8a0dc485e525e69 | [
"MIT"
] | null | null | null | octopus/platforms/ETH/utils/__init__.py | ZarvisD/octopus | 3e238721fccfec69a69a1635b8a0dc485e525e69 | [
"MIT"
] | 1 | 2019-01-19T07:12:05.000Z | 2019-01-19T07:12:05.000Z | from . import disassembler_helper
| 17 | 33 | 0.852941 | 4 | 34 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
be44f6f96a0f666edbe3b76e32ca5cd56e8bd190 | 86,079 | py | Python | pyquil/tests/test_operator_estimation.py | oliverdutton/pyquil | 027a3f6aecbd8206baf39189a0183ad0f85c262b | [
"Apache-2.0"
] | null | null | null | pyquil/tests/test_operator_estimation.py | oliverdutton/pyquil | 027a3f6aecbd8206baf39189a0183ad0f85c262b | [
"Apache-2.0"
] | null | null | null | pyquil/tests/test_operator_estimation.py | oliverdutton/pyquil | 027a3f6aecbd8206baf39189a0183ad0f85c262b | [
"Apache-2.0"
] | null | null | null | import functools
import itertools
import random
from math import pi
from unittest.mock import Mock
import numpy as np
import functools
from operator import mul
import numpy as np
import pytest
from pyquil.quilbase import Pragma
from pyquil import Program, get_qc
from pyquil.gates import *
from pyquil.api import WavefunctionSimulator, QVMConnection
from pyquil.operator_estimation import ExperimentSetting, TomographyExperiment, to_json, read_json, \
group_experiments, ExperimentResult, measure_observables, SIC0, SIC1, SIC2, SIC3, \
plusX, minusX, plusY, minusY, plusZ, minusZ, _one_q_sic_prep, \
_max_tpb_overlap, _max_weight_operator, _max_weight_state, _max_tpb_overlap, \
TensorProductState, zeros_state, \
group_experiments, group_experiments_greedy, ExperimentResult, measure_observables, \
_ops_bool_to_prog, _stats_from_measurements, \
ratio_variance, _calibration_program, \
_pauli_to_product_state
from pyquil.paulis import sI, sX, sY, sZ, PauliSum, PauliTerm
def _generate_random_states(n_qubits, n_terms):
oneq_states = [SIC0, SIC1, SIC2, SIC3, plusX, minusX, plusY, minusY, plusZ, minusZ]
all_s_inds = np.random.randint(len(oneq_states), size=(n_terms, n_qubits))
states = []
for s_inds in all_s_inds:
state = functools.reduce(mul, (oneq_states[pi](i) for i, pi in enumerate(s_inds)),
TensorProductState([]))
states += [state]
return states
def _generate_random_paulis(n_qubits, n_terms):
paulis = [sI, sX, sY, sZ]
all_op_inds = np.random.randint(len(paulis), size=(n_terms, n_qubits))
operators = []
for op_inds in all_op_inds:
op = functools.reduce(mul, (paulis[pi](i) for i, pi in enumerate(op_inds)), sI(0))
op *= np.random.uniform(-1, 1)
operators += [op]
return operators
def test_experiment_setting():
in_states = _generate_random_states(n_qubits=4, n_terms=7)
out_ops = _generate_random_paulis(n_qubits=4, n_terms=7)
for ist, oop in zip(in_states, out_ops):
expt = ExperimentSetting(ist, oop)
assert str(expt) == expt.serializable()
expt2 = ExperimentSetting.from_str(str(expt))
assert expt == expt2
assert expt2.in_state == ist
assert expt2.out_operator == oop
@pytest.mark.filterwarnings("ignore:ExperimentSetting")
def test_setting_no_in_back_compat():
out_ops = _generate_random_paulis(n_qubits=4, n_terms=7)
for oop in out_ops:
expt = ExperimentSetting(TensorProductState(), oop)
expt2 = ExperimentSetting.from_str(str(expt))
assert expt == expt2
assert expt2.in_operator == sI()
assert expt2.out_operator == oop
@pytest.mark.filterwarnings("ignore:ExperimentSetting")
def test_setting_no_in():
out_ops = _generate_random_paulis(n_qubits=4, n_terms=7)
for oop in out_ops:
expt = ExperimentSetting(zeros_state(oop.get_qubits()), oop)
expt2 = ExperimentSetting.from_str(str(expt))
assert expt == expt2
assert expt2.in_operator == functools.reduce(mul, [sZ(q) for q in oop.get_qubits()], sI())
assert expt2.out_operator == oop
def test_tomo_experiment():
expts = [
ExperimentSetting(TensorProductState(), sX(0) * sY(1)),
ExperimentSetting(plusZ(0), sZ(0)),
]
suite = TomographyExperiment(
settings=expts,
program=Program(X(0), Y(1))
)
assert len(suite) == 2
for e1, e2 in zip(expts, suite):
# experiment suite puts in groups of length 1
assert len(e2) == 1
e2 = e2[0]
assert e1 == e2
prog_str = str(suite).splitlines()[0]
assert prog_str == 'X 0; Y 1'
def test_tomo_experiment_pre_grouped():
expts = [
[ExperimentSetting(TensorProductState(), sX(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sX(1))],
[ExperimentSetting(TensorProductState(), sZ(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sZ(1))],
]
suite = TomographyExperiment(
settings=expts,
program=Program(X(0), Y(1))
)
assert len(suite) == 2 # number of groups
for es1, es2 in zip(expts, suite):
for e1, e2 in zip(es1, es2):
assert e1 == e2
prog_str = str(suite).splitlines()[0]
assert prog_str == 'X 0; Y 1'
def test_tomo_experiment_empty():
suite = TomographyExperiment([], program=Program(X(0)))
assert len(suite) == 0
assert str(suite.program) == 'X 0\n'
def test_experiment_deser(tmpdir):
expts = [
[ExperimentSetting(TensorProductState(), sX(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sX(1))],
[ExperimentSetting(TensorProductState(), sZ(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sZ(1))],
]
suite = TomographyExperiment(
settings=expts,
program=Program(X(0), Y(1))
)
to_json(f'{tmpdir}/suite.json', suite)
suite2 = read_json(f'{tmpdir}/suite.json')
assert suite == suite2
@pytest.fixture(params=['clique-removal', 'greedy'])
def grouping_method(request):
return request.param
def test_expt_settings_share_ntpb():
expts = [[ExperimentSetting(zeros_state([0, 1]), sX(0) * sI(1)), ExperimentSetting(zeros_state([0, 1]), sI(0) * sX(1))],
[ExperimentSetting(zeros_state([0, 1]), sZ(0) * sI(1)), ExperimentSetting(zeros_state([0, 1]), sI(0) * sZ(1))]]
for group in expts:
for e1, e2 in itertools.combinations(group, 2):
assert _max_weight_state([e1.in_state, e2.in_state]) is not None
assert _max_weight_operator([e1.out_operator, e2.out_operator]) is not None
def test_group_experiments(grouping_method):
expts = [ # cf above, I removed the inner nesting. Still grouped visually
ExperimentSetting(TensorProductState(), sX(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sX(1)),
ExperimentSetting(TensorProductState(), sZ(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sZ(1)),
]
suite = TomographyExperiment(expts, Program())
grouped_suite = group_experiments(suite, method=grouping_method)
assert len(suite) == 4
assert len(grouped_suite) == 2
def test_experiment_result_compat():
er = ExperimentResult(
setting=ExperimentSetting(plusX(0), sZ(0)),
expectation=0.9,
std_err=0.05,
total_counts=100,
)
assert str(er) == 'X0_0→(1+0j)*Z0: 0.9 +- 0.05'
def test_experiment_result():
er = ExperimentResult(
setting=ExperimentSetting(plusX(0), sZ(0)),
expectation=0.9,
std_err=0.05,
total_counts=100,
)
assert str(er) == 'X0_0→(1+0j)*Z0: 0.9 +- 0.05'
def test_measure_observables(forest):
expts = [
ExperimentSetting(TensorProductState(), o1 * o2)
for o1, o2 in itertools.product([sI(0), sX(0), sY(0), sZ(0)], [sI(1), sX(1), sY(1), sZ(1)])
]
suite = TomographyExperiment(expts, program=Program(X(0), CNOT(0, 1)))
assert len(suite) == 4 * 4
gsuite = group_experiments(suite)
assert len(gsuite) == 3 * 3 # can get all the terms with I for free in this case
qc = get_qc('2q-qvm')
for res in measure_observables(qc, gsuite, n_shots=2000):
if res.setting.out_operator in [sI(), sZ(0), sZ(1), sZ(0) * sZ(1)]:
assert np.abs(res.expectation) > 0.9
else:
assert np.abs(res.expectation) < 0.1
def _random_2q_programs(n_progs=3):
"""Generate random programs that consist of single qubit rotations, a CZ, and single
qubit rotations.
"""
r = random.Random(52)
def RI(qubit, angle):
# throw away angle so we can randomly choose the identity
return I(qubit)
def _random_1q_gate(qubit):
return r.choice([RI, RX, RY, RZ])(qubit=qubit, angle=r.uniform(0, 2 * pi))
for _ in range(n_progs):
prog = Program()
prog += _random_1q_gate(0)
prog += _random_1q_gate(1)
prog += CZ(0, 1)
prog += _random_1q_gate(0)
prog += _random_1q_gate(1)
yield prog
@pytest.mark.slow
def test_measure_observables_many_progs(forest):
expts = [
ExperimentSetting(TensorProductState(), o1 * o2)
for o1, o2 in itertools.product([sI(0), sX(0), sY(0), sZ(0)], [sI(1), sX(1), sY(1), sZ(1)])
]
qc = get_qc('2q-qvm')
qc.qam.random_seed = 0
for prog in _random_2q_programs():
suite = TomographyExperiment(expts, program=prog)
assert len(suite) == 4 * 4
gsuite = group_experiments(suite)
assert len(gsuite) == 3 * 3 # can get all the terms with I for free in this case
wfn = WavefunctionSimulator()
wfn_exps = {}
for expt in expts:
wfn_exps[expt] = wfn.expectation(gsuite.program, PauliSum([expt.out_operator]))
for res in measure_observables(qc, gsuite):
np.testing.assert_allclose(wfn_exps[res.setting], res.expectation, atol=2e-2)
def test_append():
expts = [
[ExperimentSetting(TensorProductState(), sX(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sX(1))],
[ExperimentSetting(TensorProductState(), sZ(0) * sI(1)), ExperimentSetting(TensorProductState(), sI(0) * sZ(1))],
]
suite = TomographyExperiment(
settings=expts,
program=Program(X(0), Y(1))
)
suite.append(ExperimentSetting(TensorProductState(), sY(0) * sX(1)))
assert (len(str(suite))) > 0
def test_no_complex_coeffs(forest):
qc = get_qc('2q-qvm')
suite = TomographyExperiment([ExperimentSetting(TensorProductState(), 1.j * sY(0))], program=Program(X(0)))
with pytest.raises(ValueError):
res = list(measure_observables(qc, suite, n_shots=2000))


def test_max_weight_operator_1():
    pauli_terms = [sZ(0),
                   sX(1) * sZ(0),
                   sY(2) * sX(1)]
    assert _max_weight_operator(pauli_terms) == sY(2) * sX(1) * sZ(0)


def test_max_weight_operator_2():
    pauli_terms = [sZ(0),
                   sX(1) * sZ(0),
                   sY(2) * sX(1),
                   sZ(5) * sI(3)]
    assert _max_weight_operator(pauli_terms) == sZ(5) * sY(2) * sX(1) * sZ(0)


def test_max_weight_operator_3():
    pauli_terms = [sZ(0) * sX(5),
                   sX(1) * sZ(0),
                   sY(2) * sX(1),
                   sZ(5) * sI(3)]
    assert _max_weight_operator(pauli_terms) is None


def test_max_weight_operator_misc():
    assert _max_weight_operator([sZ(0), sZ(0) * sZ(1)]) is not None
    assert _max_weight_operator([sX(5), sZ(4)]) is not None
    assert _max_weight_operator([sX(0), sY(0) * sZ(2)]) is None

    x_term = sX(0) * sX(1)
    z1_term = sZ(1)
    z0_term = sZ(0)
    z0z1_term = sZ(0) * sZ(1)
    assert _max_weight_operator([x_term, z1_term]) is None
    assert _max_weight_operator([z0z1_term, x_term]) is None
    assert _max_weight_operator([z1_term, z0_term]) is not None
    assert _max_weight_operator([z0z1_term, z0_term]) is not None
    assert _max_weight_operator([z0z1_term, z1_term]) is not None
    assert _max_weight_operator([z0z1_term, sI(1)]) is not None
    assert _max_weight_operator([z0z1_term, sI(2)]) is not None
    assert _max_weight_operator([z0z1_term, sX(5) * sZ(7)]) is not None

    xxxx_terms = sX(1) * sX(2) + sX(2) + sX(3) * sX(4) + sX(4) + \
        sX(1) * sX(3) * sX(4) + sX(1) * sX(4) + sX(1) * sX(2) * sX(3)
    true_term = sX(1) * sX(2) * sX(3) * sX(4)
    assert _max_weight_operator(xxxx_terms.terms) == true_term

    zzzz_terms = sZ(1) * sZ(2) + sZ(3) * sZ(4) + \
        sZ(1) * sZ(3) + sZ(1) * sZ(3) * sZ(4)
    assert _max_weight_operator(zzzz_terms.terms) == sZ(1) * sZ(2) * \
        sZ(3) * sZ(4)

    pauli_terms = [sZ(0), sX(1) * sZ(0), sY(2) * sX(1), sZ(5) * sI(3)]
    assert _max_weight_operator(pauli_terms) == sZ(5) * sY(2) * sX(1) * sZ(0)


def test_max_weight_operator_4():
    # this last example illustrates that a pair of commuting operators
    # need not be diagonal in the same tpb
    assert _max_weight_operator([sX(1) * sZ(0), sZ(1) * sX(0)]) is None


def test_max_weight_state_1():
    states = [plusX(0) * plusZ(1),
              plusX(0),
              plusZ(1),
              ]
    assert _max_weight_state(states) == states[0]


def test_max_weight_state_2():
    states = [plusX(1) * plusZ(0),
              plusX(0),
              plusZ(1),
              ]
    assert _max_weight_state(states) is None


def test_max_weight_state_3():
    states = [plusX(0) * minusZ(1),
              plusX(0),
              minusZ(1),
              ]
    assert _max_weight_state(states) == states[0]


def test_max_weight_state_4():
    states = [plusX(1) * minusZ(0),
              plusX(0),
              minusZ(1),
              ]
    assert _max_weight_state(states) is None


def test_max_tpb_overlap_1():
    tomo_expt_settings = [ExperimentSetting(plusZ(1) * plusX(0), sY(2) * sY(1)),
                          ExperimentSetting(plusX(2) * plusZ(1), sY(2) * sZ(0))]
    tomo_expt_program = Program(H(0), H(1), H(2))
    tomo_expt = TomographyExperiment(tomo_expt_settings, tomo_expt_program)
    expected_dict = {
        ExperimentSetting(plusX(0) * plusZ(1) * plusX(2), sZ(0) * sY(1) * sY(2)): [
            ExperimentSetting(plusZ(1) * plusX(0), sY(2) * sY(1)),
            ExperimentSetting(plusX(2) * plusZ(1), sY(2) * sZ(0))
        ]
    }
    assert expected_dict == _max_tpb_overlap(tomo_expt)


def test_max_tpb_overlap_2():
    expt_setting = ExperimentSetting(_pauli_to_product_state(PauliTerm.from_compact_str('(1+0j)*Z7Y8Z1Y4Z2Y5Y0X6')),
                                     PauliTerm.from_compact_str('(1+0j)*Z4X8Y5X3Y7Y1'))
    p = Program(H(0), H(1), H(2))
    tomo_expt = TomographyExperiment([expt_setting], p)
    expected_dict = {expt_setting: [expt_setting]}
    assert expected_dict == _max_tpb_overlap(tomo_expt)


def test_max_tpb_overlap_3():
    # add another ExperimentSetting to the above
    expt_setting = ExperimentSetting(_pauli_to_product_state(PauliTerm.from_compact_str('(1+0j)*Z7Y8Z1Y4Z2Y5Y0X6')),
                                     PauliTerm.from_compact_str('(1+0j)*Z4X8Y5X3Y7Y1'))
    expt_setting2 = ExperimentSetting(plusZ(7), sY(1))
    p = Program(H(0), H(1), H(2))
    tomo_expt2 = TomographyExperiment([expt_setting, expt_setting2], p)
    expected_dict2 = {expt_setting: [expt_setting, expt_setting2]}
    assert expected_dict2 == _max_tpb_overlap(tomo_expt2)


def test_group_experiments_greedy():
    ungrouped_tomo_expt = TomographyExperiment(
        [[ExperimentSetting(_pauli_to_product_state(PauliTerm.from_compact_str('(1+0j)*Z7Y8Z1Y4Z2Y5Y0X6')),
                            PauliTerm.from_compact_str('(1+0j)*Z4X8Y5X3Y7Y1'))],
         [ExperimentSetting(plusZ(7), sY(1))]], program=Program(H(0), H(1), H(2)))
    grouped_tomo_expt = group_experiments(ungrouped_tomo_expt, method='greedy')
    expected_grouped_tomo_expt = TomographyExperiment(
        [[
            ExperimentSetting(TensorProductState.from_str('Z0_7 * Y0_8 * Z0_1 * Y0_4 * '
                                                          'Z0_2 * Y0_5 * Y0_0 * X0_6'),
                              PauliTerm.from_compact_str('(1+0j)*Z4X8Y5X3Y7Y1')),
            ExperimentSetting(plusZ(7), sY(1))
        ]],
        program=Program(H(0), H(1), H(2)))
    assert grouped_tomo_expt == expected_grouped_tomo_expt


def test_expt_settings_diagonal_in_tpb():
    def _expt_settings_diagonal_in_tpb(es1: ExperimentSetting, es2: ExperimentSetting):
        """
        Extends the concept of being diagonal in the same tpb to ExperimentSettings, by
        determining if the pairs of in_states and out_operators are separately diagonal
        in the same tpb
        """
        max_weight_in = _max_weight_state([es1.in_state, es2.in_state])
        max_weight_out = _max_weight_operator([es1.out_operator, es2.out_operator])
        return max_weight_in is not None and max_weight_out is not None

    expt_setting1 = ExperimentSetting(plusZ(1) * plusX(0), sY(1) * sZ(0))
    expt_setting2 = ExperimentSetting(plusY(2) * plusZ(1), sZ(2) * sY(1))
    assert _expt_settings_diagonal_in_tpb(expt_setting1, expt_setting2)

    expt_setting3 = ExperimentSetting(plusX(2) * plusZ(1), sZ(2) * sY(1))
    expt_setting4 = ExperimentSetting(plusY(2) * plusZ(1), sX(2) * sY(1))
    assert not _expt_settings_diagonal_in_tpb(expt_setting2, expt_setting3)
    assert not _expt_settings_diagonal_in_tpb(expt_setting2, expt_setting4)


def test_identity(forest):
    qc = get_qc('2q-qvm')
    suite = TomographyExperiment([ExperimentSetting(plusZ(0), 0.123 * sI(0))],
                                 program=Program(X(0)))
    result = list(measure_observables(qc, suite))[0]
    assert result.expectation == 0.123


def test_sic_process_tomo(forest):
    qc = get_qc('2q-qvm')
    process = Program(X(0))
    settings = []
    for in_state in [SIC0, SIC1, SIC2, SIC3]:
        for out_op in [sI, sX, sY, sZ]:
            settings += [ExperimentSetting(
                in_state=in_state(q=0),
                out_operator=out_op(q=0)
            )]
    experiment = TomographyExperiment(settings=settings, program=process)
    results = list(measure_observables(qc, experiment))
    assert len(results) == 4 * 4


def test_measure_observables_symmetrize(forest):
    """
    Symmetrization alone should not change the outcome on the QVM
    """
    expts = [
        ExperimentSetting(TensorProductState(), o1 * o2)
        for o1, o2 in itertools.product([sI(0), sX(0), sY(0), sZ(0)], [sI(1), sX(1), sY(1), sZ(1)])
    ]
    suite = TomographyExperiment(expts, program=Program(X(0), CNOT(0, 1)))
    assert len(suite) == 4 * 4
    gsuite = group_experiments(suite)
    assert len(gsuite) == 3 * 3  # can get all the terms with I for free in this case

    qc = get_qc('2q-qvm')
    for res in measure_observables(qc, gsuite, calibrate_readout=None):
        if res.setting.out_operator in [sI(), sZ(0), sZ(1), sZ(0) * sZ(1)]:
            assert np.abs(res.expectation) > 0.9
        else:
            assert np.abs(res.expectation) < 0.1


def test_measure_observables_symmetrize_calibrate(forest):
    """
    Symmetrization + calibration should not change the outcome on the QVM
    """
    expts = [
        ExperimentSetting(TensorProductState(), o1 * o2)
        for o1, o2 in itertools.product([sI(0), sX(0), sY(0), sZ(0)], [sI(1), sX(1), sY(1), sZ(1)])
    ]
    suite = TomographyExperiment(expts, program=Program(X(0), CNOT(0, 1)))
    assert len(suite) == 4 * 4
    gsuite = group_experiments(suite)
    assert len(gsuite) == 3 * 3  # can get all the terms with I for free in this case

    qc = get_qc('2q-qvm')
    for res in measure_observables(qc, gsuite):
        if res.setting.out_operator in [sI(), sZ(0), sZ(1), sZ(0) * sZ(1)]:
            assert np.abs(res.expectation) > 0.9
        else:
            assert np.abs(res.expectation) < 0.1


def test_measure_observables_zero_expectation(forest):
    """
    Testing case when expectation value of observable should be close to zero
    """
    qc = get_qc('2q-qvm')
    exptsetting = ExperimentSetting(plusZ(0), sX(0))
    suite = TomographyExperiment([exptsetting],
                                 program=Program(I(0)))
    result = list(measure_observables(qc, suite))[0]
    np.testing.assert_almost_equal(result.expectation, 0.0, decimal=1)


def test_measure_observables_no_symm_calibr_raises_error(forest):
    qc = get_qc('2q-qvm')
    exptsetting = ExperimentSetting(plusZ(0), sX(0))
    suite = TomographyExperiment([exptsetting],
                                 program=Program(I(0)))
    with pytest.raises(ValueError):
        result = list(measure_observables(qc, suite, symmetrize_readout=None,
                                          calibrate_readout='plus-eig'))


def test_ops_bool_to_prog():
    qubits = [0, 2, 3]
    ops_strings = list(itertools.product([0, 1], repeat=len(qubits)))
    d_expected = {(0, 0, 0): '', (0, 0, 1): 'X 3\n', (0, 1, 0): 'X 2\n', (0, 1, 1): 'X 2\nX 3\n',
                  (1, 0, 0): 'X 0\n', (1, 0, 1): 'X 0\nX 3\n', (1, 1, 0): 'X 0\nX 2\n',
                  (1, 1, 1): 'X 0\nX 2\nX 3\n'}
    for op_str in ops_strings:
        p = _ops_bool_to_prog(op_str, qubits)
        assert str(p) == d_expected[op_str]


def test_stats_from_measurements():
    bs_results = np.array([[0, 1] * 10])
    d_qub_idx = {0: 0, 1: 1}
    setting = ExperimentSetting(TensorProductState(), sZ(0) * sX(1))
    n_shots = 2000
    obs_mean, obs_var = _stats_from_measurements(bs_results, d_qub_idx, setting, n_shots)
    assert obs_mean == -1.0
    assert obs_var == 0.0


def test_ratio_variance_float():
    a, b, var_a, var_b = 1.0, 2.0, 0.1, 0.05
    ab_ratio_var = ratio_variance(a, var_a, b, var_b)
    assert ab_ratio_var == 0.028125


def test_ratio_variance_numerator_zero():
    # denominator can't be zero, but numerator can be
    a, b, var_a, var_b = 0.0, 2.0, 0.1, 0.05
    ab_ratio_var = ratio_variance(a, var_a, b, var_b)
    assert ab_ratio_var == 0.025


def test_ratio_variance_array():
    a = np.array([1.0, 10.0, 100.0])
    b = np.array([2.0, 20.0, 200.0])
    var_a = np.array([0.1, 1.0, 10.0])
    var_b = np.array([0.05, 0.5, 5.0])
    ab_ratio_var = ratio_variance(a, var_a, b, var_b)
    np.testing.assert_allclose(ab_ratio_var, np.array([0.028125, 0.0028125, 0.00028125]))
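

# The values asserted above follow from first-order (delta-method) error
# propagation for a ratio of uncorrelated random variables. A minimal sketch of
# that formula, assuming this is what `ratio_variance` implements; the helper
# below is illustrative only and not part of pyquil:
def _ratio_variance_sketch(a, var_a, b, var_b):
    # var(a / b) ~= var_a / b**2 + a**2 * var_b / b**4; this form remains
    # well-defined when the numerator a is zero (the denominator b cannot be)
    return var_a / b ** 2 + a ** 2 * var_b / b ** 4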


def test_measure_observables_uncalibrated_asymmetric_readout(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.90, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    expt_list = [expt1, expt2, expt3]
    tomo_expt = TomographyExperiment(settings=expt_list * runs, program=p)
    expected_expectation_z_basis = 2 * p00 - 1

    expect_arr = np.zeros(runs * len(expt_list))
    for idx, res in enumerate(measure_observables(qc, tomo_expt, n_shots=2000,
                                                  symmetrize_readout=None,
                                                  calibrate_readout=None)):
        expect_arr[idx] = res.expectation

    assert np.isclose(np.mean(expect_arr[::3]), expected_expectation_z_basis, atol=2e-2)
    assert np.isclose(np.mean(expect_arr[1::3]), expected_expectation_z_basis, atol=2e-2)
    assert np.isclose(np.mean(expect_arr[2::3]), expected_expectation_z_basis, atol=2e-2)


def test_measure_observables_uncalibrated_symmetric_readout(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.90, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    expt_list = [expt1, expt2, expt3]
    tomo_expt = TomographyExperiment(settings=expt_list * runs, program=p)
    expected_symm_error = (p00 + p11) / 2
    expected_expectation_z_basis = expected_symm_error * (1) + (1 - expected_symm_error) * (-1)

    uncalibr_e = np.zeros(runs * len(expt_list))
    for idx, res in enumerate(measure_observables(qc, tomo_expt, n_shots=2000,
                                                  calibrate_readout=None)):
        uncalibr_e[idx] = res.expectation

    assert np.isclose(np.mean(uncalibr_e[::3]), expected_expectation_z_basis, atol=2e-2)
    assert np.isclose(np.mean(uncalibr_e[1::3]), expected_expectation_z_basis, atol=2e-2)
    assert np.isclose(np.mean(uncalibr_e[2::3]), expected_expectation_z_basis, atol=2e-2)


def test_measure_observables_calibrated_symmetric_readout(forest, use_seed):
    # expecting the result +1 for calibrated readout
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p.define_noisy_readout(0, p00=0.99, p11=0.80)
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=p)

    expectations = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000))
        expectations.append([res.expectation for res in expt_results])
    expectations = np.array(expectations)
    results = np.mean(expectations, axis=0)
    np.testing.assert_allclose(results, 1.0, atol=2e-2)


def test_measure_observables_result_zero_symmetrization_calibration(forest, use_seed):
    # expecting expectation value to be 0 with symmetrization/calibration
    qc = get_qc('9q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sZ(0))
    expt2 = ExperimentSetting(TensorProductState(minusZ(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(minusY(0)), sX(0))
    expt_settings = [expt1, expt2, expt3]
    p = Program()
    p00, p11 = 0.99, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=expt_settings, program=p)

    expectations = []
    raw_expectations = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000))
        expectations.append([res.expectation for res in expt_results])
        raw_expectations.append([res.raw_expectation for res in expt_results])
    expectations = np.array(expectations)
    raw_expectations = np.array(raw_expectations)
    results = np.mean(expectations, axis=0)
    raw_results = np.mean(raw_expectations)
    np.testing.assert_allclose(results, 0.0, atol=2e-2)
    np.testing.assert_allclose(raw_results, 0.0, atol=2e-2)


def test_measure_observables_result_zero_no_noisy_readout(forest, use_seed):
    # expecting expectation value to be 0 with no symmetrization/calibration
    # and no noisy readout
    qc = get_qc('9q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sZ(0))
    expt2 = ExperimentSetting(TensorProductState(minusZ(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusY(0)), sX(0))
    expt_settings = [expt1, expt2, expt3]
    p = Program()
    tomo_expt = TomographyExperiment(settings=expt_settings, program=p)

    expectations = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000,
                                                symmetrize_readout=None,
                                                calibrate_readout=None))
        expectations.append([res.expectation for res in expt_results])
    expectations = np.array(expectations)
    results = np.mean(expectations, axis=0)
    np.testing.assert_allclose(results, 0.0, atol=2e-2)


def test_measure_observables_result_zero_no_symm_calibr(forest, use_seed):
    # expecting a nonzero expectation value without symmetrization/calibration,
    # since the asymmetric readout error biases the raw estimate
    qc = get_qc('9q-qvm')
    if use_seed:
        qc.qam.random_seed = 3
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sZ(0))
    expt2 = ExperimentSetting(TensorProductState(minusZ(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(minusY(0)), sX(0))
    expt_settings = [expt1, expt2, expt3]
    p = Program()
    p00, p11 = 0.99, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=expt_settings, program=p)

    expectations = []
    expected_result = (p00 * 0.5 + (1 - p11) * 0.5) - ((1 - p00) * 0.5 + p11 * 0.5)
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000,
                                                symmetrize_readout=None,
                                                calibrate_readout=None))
        expectations.append([res.expectation for res in expt_results])
    expectations = np.array(expectations)
    results = np.mean(expectations, axis=0)
    np.testing.assert_allclose(results, expected_result, atol=2e-2)


def test_measure_observables_2q_readout_error_one_measured(forest, use_seed):
    # 2q readout errors, but only 1 qubit measured
    qc = get_qc('9q-qvm')
    if use_seed:
        qc.qam.random_seed = 3
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    qubs = [0, 1]
    expt = ExperimentSetting(TensorProductState(plusZ(qubs[0]) * plusZ(qubs[1])), sZ(qubs[0]))
    p = Program()
    p.define_noisy_readout(0, 0.999, 0.85)
    p.define_noisy_readout(1, 0.999, 0.75)
    tomo_experiment = TomographyExperiment(settings=[expt] * runs, program=p)

    raw_e = np.zeros(runs)
    obs_e = np.zeros(runs)
    cal_e = np.zeros(runs)
    for idx, res in enumerate(measure_observables(qc, tomo_experiment, n_shots=5000)):
        raw_e[idx] = res.raw_expectation
        obs_e[idx] = res.expectation
        cal_e[idx] = res.calibration_expectation

    assert np.isclose(np.mean(raw_e), 0.849, atol=2e-2)
    assert np.isclose(np.mean(obs_e), 1.0, atol=2e-2)
    assert np.isclose(np.mean(cal_e), 0.849, atol=2e-2)


def test_measure_observables_inherit_noise_errors(forest):
    qc = get_qc('3q-qvm')
    # specify simplest experiments
    expt1 = ExperimentSetting(TensorProductState(), sZ(0))
    expt2 = ExperimentSetting(TensorProductState(), sZ(1))
    expt3 = ExperimentSetting(TensorProductState(), sZ(2))
    # specify a Program with multiple sources of noise
    p = Program(X(0), Y(1), H(2))
    # defining several bit-flip channels
    kraus_ops_X = [np.sqrt(1 - 0.3) * np.array([[1, 0], [0, 1]]),
                   np.sqrt(0.3) * np.array([[0, 1], [1, 0]])]
    kraus_ops_Y = [np.sqrt(1 - 0.2) * np.array([[1, 0], [0, 1]]),
                   np.sqrt(0.2) * np.array([[0, 1], [1, 0]])]
    kraus_ops_H = [np.sqrt(1 - 0.1) * np.array([[1, 0], [0, 1]]),
                   np.sqrt(0.1) * np.array([[0, 1], [1, 0]])]
    # replacing all the gates with bit-flip channels
    p.define_noisy_gate("X", [0], kraus_ops_X)
    p.define_noisy_gate("Y", [1], kraus_ops_Y)
    p.define_noisy_gate("H", [2], kraus_ops_H)
    # defining readout errors
    p.define_noisy_readout(0, 0.99, 0.80)
    p.define_noisy_readout(1, 0.95, 0.85)
    p.define_noisy_readout(2, 0.97, 0.78)
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=p)

    # each calibration program should inherit every noise definition above
    calibr_prog1 = _calibration_program(qc, tomo_expt, expt1)
    calibr_prog2 = _calibration_program(qc, tomo_expt, expt2)
    calibr_prog3 = _calibration_program(qc, tomo_expt, expt3)
    expected_prog = '''PRAGMA READOUT-POVM 0 "(0.99 0.19999999999999996 0.010000000000000009 0.8)"
PRAGMA READOUT-POVM 1 "(0.95 0.15000000000000002 0.050000000000000044 0.85)"
PRAGMA READOUT-POVM 2 "(0.97 0.21999999999999997 0.030000000000000027 0.78)"
PRAGMA ADD-KRAUS X 0 "(0.8366600265340756 0.0 0.0 0.8366600265340756)"
PRAGMA ADD-KRAUS X 0 "(0.0 0.5477225575051661 0.5477225575051661 0.0)"
PRAGMA ADD-KRAUS Y 1 "(0.8944271909999159 0.0 0.0 0.8944271909999159)"
PRAGMA ADD-KRAUS Y 1 "(0.0 0.4472135954999579 0.4472135954999579 0.0)"
PRAGMA ADD-KRAUS H 2 "(0.9486832980505138 0.0 0.0 0.9486832980505138)"
PRAGMA ADD-KRAUS H 2 "(0.0 0.31622776601683794 0.31622776601683794 0.0)"
'''
    assert calibr_prog1.out() == Program(expected_prog).out()
    assert calibr_prog2.out() == Program(expected_prog).out()
    assert calibr_prog3.out() == Program(expected_prog).out()
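

def test_bit_flip_kraus_ops_trace_preserving():
    # Hedged sanity check (added, not part of the original suite): the
    # bit-flip Kraus operators used above should satisfy the completeness
    # relation sum_k K_k^dagger K_k = (1 - prob) * I + prob * I = I, which is
    # what makes the channel trace-preserving.
    prob = 0.3  # mirrors kraus_ops_X above; any probability works
    kraus_ops = [np.sqrt(1 - prob) * np.eye(2),
                 np.sqrt(prob) * np.array([[0, 1], [1, 0]])]
    completeness = sum(k.conj().T @ k for k in kraus_ops)
    assert np.allclose(completeness, np.eye(2))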


def test_expectations_sic0(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(SIC0(0), sX(0))
    expt2 = ExperimentSetting(SIC0(0), sY(0))
    expt3 = ExperimentSetting(SIC0(0), sZ(0))
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=Program())

    results_unavged = []
    for _ in range(num_simulations):
        measured_results = []
        for res in measure_observables(qc, tomo_expt, n_shots=2000):
            measured_results.append(res.expectation)
        results_unavged.append(measured_results)
    results_unavged = np.array(results_unavged)
    results = np.mean(results_unavged, axis=0)
    expected_results = np.array([0, 0, 1])
    np.testing.assert_allclose(results, expected_results, atol=2e-2)


def test_expectations_sic1(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(SIC1(0), sX(0))
    expt2 = ExperimentSetting(SIC1(0), sY(0))
    expt3 = ExperimentSetting(SIC1(0), sZ(0))
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=Program())

    results_unavged = []
    for _ in range(num_simulations):
        measured_results = []
        for res in measure_observables(qc, tomo_expt, n_shots=2000):
            measured_results.append(res.expectation)
        results_unavged.append(measured_results)
    results_unavged = np.array(results_unavged)
    results = np.mean(results_unavged, axis=0)
    expected_results = np.array([2 * np.sqrt(2) / 3, 0, -1 / 3])
    np.testing.assert_allclose(results, expected_results, atol=2e-2)


def test_expectations_sic2(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(SIC2(0), sX(0))
    expt2 = ExperimentSetting(SIC2(0), sY(0))
    expt3 = ExperimentSetting(SIC2(0), sZ(0))
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=Program())

    results_unavged = []
    for _ in range(num_simulations):
        measured_results = []
        for res in measure_observables(qc, tomo_expt, n_shots=2000):
            measured_results.append(res.expectation)
        results_unavged.append(measured_results)
    results_unavged = np.array(results_unavged)
    results = np.mean(results_unavged, axis=0)
    expected_results = np.array([(2 * np.sqrt(2) / 3) * np.cos(2 * np.pi / 3),
                                 -(2 * np.sqrt(2) / 3) * np.sin(2 * np.pi / 3),
                                 -1 / 3])
    np.testing.assert_allclose(results, expected_results, atol=2e-2)


def test_expectations_sic3(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt1 = ExperimentSetting(SIC3(0), sX(0))
    expt2 = ExperimentSetting(SIC3(0), sY(0))
    expt3 = ExperimentSetting(SIC3(0), sZ(0))
    tomo_expt = TomographyExperiment(settings=[expt1, expt2, expt3], program=Program())

    results_unavged = []
    for _ in range(num_simulations):
        measured_results = []
        for res in measure_observables(qc, tomo_expt, n_shots=2000):
            measured_results.append(res.expectation)
        results_unavged.append(measured_results)
    results_unavged = np.array(results_unavged)
    results = np.mean(results_unavged, axis=0)
    expected_results = np.array([(2 * np.sqrt(2) / 3) * np.cos(2 * np.pi / 3),
                                 (2 * np.sqrt(2) / 3) * np.sin(2 * np.pi / 3),
                                 -1 / 3])
    np.testing.assert_allclose(results, expected_results, atol=2e-2)


def test_sic_conditions(forest):
    """
    Test that the SIC states indeed yield SIC-POVMs
    """
    wfn_sim = WavefunctionSimulator()

    # condition (i) -- the sum of all projectors equals the identity times the
    # dimensionality (here, 2 * I)
    result = np.zeros((2, 2))
    for i in range(4):
        if i == 0:
            amps = np.array([1, 0])
        else:
            sic = _one_q_sic_prep(i, 0)
            wfn = wfn_sim.wavefunction(sic)
            amps = wfn.amplitudes
        proj = np.outer(amps, amps.conj())
        result = np.add(result, proj)
    np.testing.assert_allclose(result / 2, np.eye(2), atol=2e-2)

    # condition (ii) -- tr(proj_a . proj_b) = 1 / 3, for a != b
    for comb in itertools.combinations([0, 1, 2, 3], 2):
        if comb[0] == 0:
            sic_a = Program(I(0))
        else:
            sic_a = _one_q_sic_prep(comb[0], 0)
        sic_b = _one_q_sic_prep(comb[1], 0)
        wfn_a = wfn_sim.wavefunction(sic_a)
        wfn_b = wfn_sim.wavefunction(sic_b)
        amps_a = wfn_a.amplitudes
        amps_b = wfn_b.amplitudes
        proj_a = np.outer(amps_a, amps_a.conj())
        proj_b = np.outer(amps_b, amps_b.conj())
        assert np.isclose(np.trace(proj_a.dot(proj_b)), 1 / 3)
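

def test_sic_state_pairwise_overlap_analytic():
    # Hedged companion check (added, not from the original suite): the
    # standard closed-form single-qubit SIC vectors -- |0> together with
    # (1 / sqrt(3)) |0> + sqrt(2 / 3) e^{2 pi i k / 3} |1> for k = 0, 1, 2 --
    # satisfy |<psi_a|psi_b>|^2 = 1 / 3 for all a != b, matching condition
    # (ii) above without invoking the simulator.
    sics = [np.array([1.0, 0.0])] + [
        np.array([1 / np.sqrt(3), np.sqrt(2 / 3) * np.exp(2j * np.pi * k / 3)])
        for k in range(3)
    ]
    for a, b in itertools.combinations(range(4), 2):
        assert np.isclose(np.abs(np.vdot(sics[a], sics[b])) ** 2, 1 / 3)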


def test_measure_observables_grouped_expts(forest, use_seed):
    qc = get_qc('3q-qvm')
    if use_seed:
        num_simulations = 1
        qc.qam.random_seed = 4
    else:
        num_simulations = 100
    # this test explicitly exercises the list-of-lists-of-ExperimentSettings
    # form of TomographyExperiment, by creating settings in different groups
    expt1_group1 = ExperimentSetting(SIC1(0) * plusX(1), sZ(0) * sX(1))
    expt2_group1 = ExperimentSetting(plusX(1) * minusY(2), sX(1) * sY(2))
    expts_group1 = [expt1_group1, expt2_group1]
    expt1_group2 = ExperimentSetting(plusX(0) * SIC0(1), sX(0) * sZ(1))
    expt2_group2 = ExperimentSetting(SIC0(1) * minusY(2), sZ(1) * sY(2))
    expt3_group2 = ExperimentSetting(plusX(0) * minusY(2), sX(0) * sY(2))
    expts_group2 = [expt1_group2, expt2_group2, expt3_group2]
    # create a list-of-lists-of-ExperimentSettings
    expt_settings = [expts_group1, expts_group2]
    # and use this to create a TomographyExperiment suite
    tomo_expt = TomographyExperiment(settings=expt_settings, program=Program())

    results_unavged = []
    for _ in range(num_simulations):
        measured_results = []
        for res in measure_observables(qc, tomo_expt, n_shots=2000):
            measured_results.append(res.expectation)
        results_unavged.append(measured_results)
    results_unavged = np.array(results_unavged)
    results = np.mean(results_unavged, axis=0)
    expected_results = np.array([-1 / 3, -1, 1, -1, -1])
    np.testing.assert_allclose(results, expected_results, atol=2e-2)


def _point_channel_fidelity_estimate(v, dim=2):
    """
    :param v: array of expectation values
    :param dim: dimensionality of the Hilbert space
    """
    return (1.0 + np.sum(v) + dim) / (dim * (dim + 1))
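

def test_point_channel_fidelity_estimate_identity():
    # Quick spot check (added) of the estimator above: for a perfect identity
    # channel in d = 2, the three Pauli expectations are all +1 and the
    # identity term contributes 1, so (1 + sum(v) + d) / (d * (d + 1))
    # = (1 + 3 + 2) / 6 = 1.
    v, dim = np.array([1.0, 1.0, 1.0]), 2
    assert np.isclose((1.0 + np.sum(v) + dim) / (dim * (dim + 1)), 1.0)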


def test_bit_flip_channel_fidelity(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        np.random.seed(0)
        qc.qam.random_seed = 0
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy bit-flip channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # the bit-flip channel is composed of two Kraus operations --
    # applying the X gate with probability `prob`, and applying the identity
    # gate with probability `1 - prob`
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[0, 1], [1, 0]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = 1 - (2 / 3) * prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)


def test_dephasing_channel_fidelity(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy dephasing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the dephasing channel
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = 1 - (2 / 3) * prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)


def test_depolarizing_channel_fidelity(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy depolarizing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the depolarizing channel
    kraus_ops = [np.sqrt(3 * prob + 1) / 2 * np.array([[1, 0], [0, 1]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, 1], [1, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, -1j], [1j, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = (1 + prob) / 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)
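

def test_depolarizing_kraus_ops_trace_preserving():
    # Hedged sanity check (added) on the depolarizing Kraus convention used
    # above: sum_k K_k^dagger K_k = ((3p + 1) / 4) I + 3 * ((1 - p) / 4) I = I
    # for any probability p, so the channel is trace-preserving.
    prob = 0.37  # arbitrary illustrative value
    kraus_ops = [np.sqrt(3 * prob + 1) / 2 * np.eye(2),
                 np.sqrt(1 - prob) / 2 * np.array([[0, 1], [1, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, -1j], [1j, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[1, 0], [0, -1]])]
    completeness = sum(k.conj().T @ k for k in kraus_ops)
    assert np.allclose(completeness, np.eye(2))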


def test_unitary_channel_fidelity(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare unitary channel as an RY rotation program for some random angle
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RY(theta, 0))
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = (1 / 6) * ((2 * np.cos(theta / 2)) ** 2 + 2)
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)
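

def test_ry_average_fidelity_endpoints():
    # Quick check (added) of the closed form used above, which matches the
    # standard average-gate-fidelity expression (|Tr U|**2 + d) / (d * (d + 1))
    # with Tr RY(theta) = 2 cos(theta / 2) and d = 2: at theta = 0 the channel
    # is the identity (fidelity 1), and at theta = pi it drops to 1 / 3.
    def fid(theta):
        return (1 / 6) * ((2 * np.cos(theta / 2)) ** 2 + 2)

    assert np.isclose(fid(0.0), 1.0)
    assert np.isclose(fid(np.pi), 1 / 3)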


def test_bit_flip_channel_fidelity_readout_error(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy bit-flip channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # the bit-flip channel is composed of two Kraus operations --
    # applying the X gate with probability `prob`, and applying the identity
    # gate with probability `1 - prob`
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[0, 1], [1, 0]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # add some readout error
    p.define_noisy_readout(0, 0.95, 0.82)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = 1 - (2 / 3) * prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)


def test_dephasing_channel_fidelity_readout_error(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the fidelity
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy dephasing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the dephasing channel
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # add some readout error
    p.define_noisy_readout(0, 0.95, 0.82)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)

    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = 1 - (2 / 3) * prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_depolarizing_channel_fidelity_readout_error(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the
    estimated channel fidelity against the expected value.
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare noisy depolarizing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the depolarizing channel
    kraus_ops = [np.sqrt(3 * prob + 1) / 2 * np.array([[1, 0], [0, 1]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, 1], [1, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, -1j], [1j, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # add some readout error
    p.define_noisy_readout(0, 0.95, 0.82)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = (1 + prob) / 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_unitary_channel_fidelity_readout_error(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the
    estimated channel fidelity against the expected value.
    """
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt_list = [expt1, expt2, expt3]
    # prepare unitary channel as an RY rotation program for some random angle
    theta = np.random.uniform(0.0, 2 * np.pi)
    # unitary (RY) channel
    p = Program(RY(theta, 0))
    # add some readout error
    p.define_noisy_readout(0, 0.95, 0.82)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results)
    # how close is this channel to the identity operator?
    expected_fidelity = (1 / 6) * ((2 * np.cos(theta / 2)) ** 2 + 2)
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_2q_unitary_channel_fidelity_readout_error(forest, use_seed):
    """
    We use Eqn (5) of https://arxiv.org/abs/quant-ph/0701138 to compare the
    estimated channel fidelity against the expected value.

    This tests whether our dimensionality factors are correct, even in the
    presence of readout errors.
    """
    qc = get_qc('2q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment settings
    expt1 = ExperimentSetting(TensorProductState(plusX(0)), sX(0))
    expt2 = ExperimentSetting(TensorProductState(plusY(0)), sY(0))
    expt3 = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    expt4 = ExperimentSetting(TensorProductState(plusX(1)), sX(1))
    expt5 = ExperimentSetting(TensorProductState(plusX(0) * plusX(1)), sX(0) * sX(1))
    expt6 = ExperimentSetting(TensorProductState(plusY(0) * plusX(1)), sY(0) * sX(1))
    expt7 = ExperimentSetting(TensorProductState(plusZ(0) * plusX(1)), sZ(0) * sX(1))
    expt8 = ExperimentSetting(TensorProductState(plusY(1)), sY(1))
    expt9 = ExperimentSetting(TensorProductState(plusX(0) * plusY(1)), sX(0) * sY(1))
    expt10 = ExperimentSetting(TensorProductState(plusY(0) * plusY(1)), sY(0) * sY(1))
    expt11 = ExperimentSetting(TensorProductState(plusZ(0) * plusY(1)), sZ(0) * sY(1))
    expt12 = ExperimentSetting(TensorProductState(plusZ(1)), sZ(1))
    expt13 = ExperimentSetting(TensorProductState(plusX(0) * plusZ(1)), sX(0) * sZ(1))
    expt14 = ExperimentSetting(TensorProductState(plusY(0) * plusZ(1)), sY(0) * sZ(1))
    expt15 = ExperimentSetting(TensorProductState(plusZ(0) * plusZ(1)), sZ(0) * sZ(1))
    expt_list = [expt1, expt2, expt3, expt4, expt5, expt6, expt7, expt8,
                 expt9, expt10, expt11, expt12, expt13, expt14, expt15]
    # prepare unitary channel as an RY rotation program for some random angles
    theta1, theta2 = np.random.uniform(0.0, 2 * np.pi, size=2)
    # unitary (RY) channel
    p = Program(RY(theta1, 0), RY(theta2, 1))
    # add some readout error
    p.define_noisy_readout(0, 0.95, 0.82)
    p.define_noisy_readout(1, 0.99, 0.73)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=expt_list, program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=5000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_channel_fidelity_estimate(results, dim=4)
    # how close is this channel to the identity operator?
    expected_fidelity = (1 / 5) * ((2 * np.cos(theta1 / 2) * np.cos(theta2 / 2)) ** 2 + 1)
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_measure_1q_observable_raw_expectation(forest, use_seed):
    # test that we get the correct raw expectation in terms of readout errors
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    expt = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.99, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    raw_expectations = []
    for _ in range(num_expts):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000))
        raw_expectations.append([res.raw_expectation for res in expt_results])
    raw_expectations = np.array(raw_expectations)
    result = np.mean(raw_expectations, axis=0)
    # calculate expected raw_expectation
    eps_not = (p00 + p11) / 2
    eps = 1 - eps_not
    expected_result = 1 - 2 * eps
    np.testing.assert_allclose(result, expected_result, atol=2e-2)

def test_measure_1q_observable_raw_variance(forest, use_seed):
    # test that we get the correct raw std_err in terms of readout errors
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    expt = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.99, 0.80
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 2000
    raw_std_errs = []
    for _ in range(num_expts):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        raw_std_errs.append([res.raw_std_err for res in expt_results])
    raw_std_errs = np.array(raw_std_errs)
    result = np.mean(raw_std_errs, axis=0)
    # calculate expected raw_std_err
    eps_not = (p00 + p11) / 2
    eps = 1 - eps_not
    expected_result = np.sqrt((1 - (1 - 2 * eps) ** 2) / num_shots)
    np.testing.assert_allclose(result, expected_result, atol=2e-2)

def test_measure_1q_observable_calibration_expectation(forest, use_seed):
    # test that we get the correct calibration expectation in terms of readout errors
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    expt = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.93, 0.77
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    calibration_expectations = []
    for _ in range(num_expts):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=2000))
        calibration_expectations.append([res.calibration_expectation for res in expt_results])
    calibration_expectations = np.array(calibration_expectations)
    result = np.mean(calibration_expectations, axis=0)
    # calculate expected calibration_expectation
    eps_not = (p00 + p11) / 2
    eps = 1 - eps_not
    expected_result = 1 - 2 * eps
    np.testing.assert_allclose(result, expected_result, atol=2e-2)

def test_measure_1q_observable_calibration_variance(forest, use_seed):
    # test that we get the correct calibration std_err in terms of readout errors
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    expt = ExperimentSetting(TensorProductState(plusZ(0)), sZ(0))
    p = Program()
    p00, p11 = 0.93, 0.77
    p.define_noisy_readout(0, p00=p00, p11=p11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 2000
    raw_std_errs = []
    for _ in range(num_expts):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        raw_std_errs.append([res.raw_std_err for res in expt_results])
    raw_std_errs = np.array(raw_std_errs)
    result = np.mean(raw_std_errs, axis=0)
    # calculate expected std_err
    eps_not = (p00 + p11) / 2
    eps = 1 - eps_not
    expected_result = np.sqrt((1 - (1 - 2 * eps) ** 2) / num_shots)
    np.testing.assert_allclose(result, expected_result, atol=2e-2)

def test_uncalibrated_asymmetric_readout_nontrivial_1q_state(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # pick some random value for the RX rotation
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0))
    # pick some random (but sufficiently large) asymmetric readout errors
    p00, p11 = np.random.uniform(0.7, 0.99, size=2)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    expt_list = [expt]
    tomo_expt = TomographyExperiment(settings=expt_list * runs, program=p)
    # calculate expected expectation value
    amp_sqr0 = (np.cos(theta / 2)) ** 2
    amp_sqr1 = (np.sin(theta / 2)) ** 2
    expected_expectation = (p00 * amp_sqr0 + (1 - p11) * amp_sqr1) - \
                           ((1 - p00) * amp_sqr0 + p11 * amp_sqr1)
    expect_arr = np.zeros(runs * len(expt_list))
    for idx, res in enumerate(measure_observables(qc, tomo_expt, n_shots=2000,
                                                  symmetrize_readout=None,
                                                  calibrate_readout=None)):
        expect_arr[idx] = res.expectation
    assert np.isclose(np.mean(expect_arr), expected_expectation, atol=2e-2)

def test_uncalibrated_symmetric_readout_nontrivial_1q_state(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # pick some random value for the RX rotation
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0))
    # pick some random (but sufficiently large) asymmetric readout errors
    p00, p11 = np.random.uniform(0.7, 0.99, size=2)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    expt_list = [expt]
    tomo_expt = TomographyExperiment(settings=expt_list * runs, program=p)
    # calculate expected expectation value
    amp_sqr0 = (np.cos(theta / 2)) ** 2
    amp_sqr1 = (np.sin(theta / 2)) ** 2
    symm_prob = (p00 + p11) / 2
    expected_expectation = (symm_prob * amp_sqr0 + (1 - symm_prob) * amp_sqr1) - \
                           ((1 - symm_prob) * amp_sqr0 + symm_prob * amp_sqr1)
    expect_arr = np.zeros(runs * len(expt_list))
    for idx, res in enumerate(measure_observables(qc, tomo_expt, n_shots=2000,
                                                  symmetrize_readout='exhaustive',
                                                  calibrate_readout=None)):
        expect_arr[idx] = res.expectation
    assert np.isclose(np.mean(expect_arr), expected_expectation, atol=2e-2)

def test_calibrated_symmetric_readout_nontrivial_1q_state(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        runs = 1
    else:
        runs = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # pick some random value for the RX rotation
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0))
    # pick some random (but sufficiently large) asymmetric readout errors
    p00, p11 = np.random.uniform(0.7, 0.99, size=2)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    expt_list = [expt]
    tomo_expt = TomographyExperiment(settings=expt_list * runs, program=p)
    # calculate expected expectation value
    amp_sqr0 = (np.cos(theta / 2)) ** 2
    amp_sqr1 = (np.sin(theta / 2)) ** 2
    expected_expectation = amp_sqr0 - amp_sqr1
    expect_arr = np.zeros(runs * len(expt_list))
    for idx, res in enumerate(measure_observables(qc, tomo_expt, n_shots=2000,
                                                  symmetrize_readout='exhaustive',
                                                  calibrate_readout='plus-eig')):
        expect_arr[idx] = res.expectation
    assert np.isclose(np.mean(expect_arr), expected_expectation, atol=2e-2)

def test_measure_2q_observable_raw_statistics(forest, use_seed):
    '''Test that we get correct exhaustively symmetrized statistics
    in terms of readout errors.

    Note: this only tests exhaustive symmetrization in the presence
    of uncorrelated errors.
    '''
    qc = get_qc('2q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0) * sZ(1))
    p = Program()
    p00, p11 = 0.99, 0.80
    q00, q11 = 0.93, 0.76
    p.define_noisy_readout(0, p00=p00, p11=p11)
    p.define_noisy_readout(1, p00=q00, p11=q11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 5000
    raw_expectations = []
    raw_std_errs = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        raw_expectations.append([res.raw_expectation for res in expt_results])
        raw_std_errs.append([res.raw_std_err for res in expt_results])
    raw_expectations = np.array(raw_expectations)
    raw_std_errs = np.array(raw_std_errs)
    result_expectation = np.mean(raw_expectations, axis=0)
    result_std_err = np.mean(raw_std_errs, axis=0)
    # calculate relevant conditional probabilities, given the |00> state
    # notation used: pijmn means p(ij|mn)
    p0000 = (p00 + p11) * (q00 + q11) / 4
    p0100 = (p00 + p11) * (2 - q00 - q11) / 4
    p1000 = (q00 + q11) * (2 - p00 - p11) / 4
    p1100 = (2 - p00 - p11) * (2 - q00 - q11) / 4
    # calculate the expectation value of Z^{\otimes 2}
    z_expectation = (p0000 + p1100) - (p0100 + p1000)
    # calculate the standard deviation of the mean
    simulated_std_err = np.sqrt((1 - z_expectation ** 2) / num_shots)
    # compare against simulated results
    np.testing.assert_allclose(result_expectation, z_expectation, atol=2e-2)
    np.testing.assert_allclose(result_std_err, simulated_std_err, atol=2e-2)

def test_raw_statistics_2q_nontrivial_nonentangled_state(forest, use_seed):
    '''Test that we get correct exhaustively symmetrized statistics
    in terms of readout errors, even for non-trivial 2q non-entangled states.

    Note: this only tests exhaustive symmetrization in the presence
    of uncorrelated errors.
    '''
    qc = get_qc('2q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0) * sZ(1))
    theta1, theta2 = np.random.uniform(0.0, 2 * np.pi, size=2)
    p = Program(RX(theta1, 0), RX(theta2, 1))
    p00, p11, q00, q11 = np.random.uniform(0.70, 0.99, size=4)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    p.define_noisy_readout(1, p00=q00, p11=q11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 5000
    raw_expectations = []
    raw_std_errs = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        raw_expectations.append([res.raw_expectation for res in expt_results])
        raw_std_errs.append([res.raw_std_err for res in expt_results])
    raw_expectations = np.array(raw_expectations)
    raw_std_errs = np.array(raw_std_errs)
    result_expectation = np.mean(raw_expectations, axis=0)
    result_std_err = np.mean(raw_std_errs, axis=0)
    # calculate relevant conditional probabilities, given the |00> state
    # notation used: pijmn means p(ij|mn)
    p0000 = (p00 + p11) * (q00 + q11) / 4
    p0100 = (p00 + p11) * (2 - q00 - q11) / 4
    p1000 = (q00 + q11) * (2 - p00 - p11) / 4
    p1100 = (2 - p00 - p11) * (2 - q00 - q11) / 4
    # calculate relevant conditional probabilities, given the |01> state
    p0001 = p0100
    p0101 = p0000
    p1001 = (2 - p00 - p11) * (2 - q00 - q11) / 4
    p1101 = (2 - p00 - p11) * (q00 + q11) / 4
    # calculate relevant conditional probabilities, given the |10> state
    p0010 = p1000
    p0110 = p1001
    p1010 = p0000
    p1110 = (p00 + p11) * (2 - q00 - q11) / 4
    # calculate relevant conditional probabilities, given the |11> state
    p0011 = p1100
    p0111 = p1101
    p1011 = p1110
    p1111 = p0000
    # calculate the squared amplitudes of the pure state
    alph00 = (np.cos(theta1 / 2) * np.cos(theta2 / 2)) ** 2
    alph01 = (np.cos(theta1 / 2) * np.sin(theta2 / 2)) ** 2
    alph10 = (np.sin(theta1 / 2) * np.cos(theta2 / 2)) ** 2
    alph11 = (np.sin(theta1 / 2) * np.sin(theta2 / 2)) ** 2
    # calculate the probabilities of the various bitstrings
    pr00 = p0000 * alph00 + p0001 * alph01 + p0010 * alph10 + p0011 * alph11
    pr01 = p0100 * alph00 + p0101 * alph01 + p0110 * alph10 + p0111 * alph11
    pr10 = p1000 * alph00 + p1001 * alph01 + p1010 * alph10 + p1011 * alph11
    pr11 = p1100 * alph00 + p1101 * alph01 + p1110 * alph10 + p1111 * alph11
    # calculate the Z^{\otimes 2} expectation and the error of the mean
    z_expectation = (pr00 + pr11) - (pr01 + pr10)
    simulated_std_err = np.sqrt((1 - z_expectation ** 2) / num_shots)
    # compare against simulated results
    np.testing.assert_allclose(result_expectation, z_expectation, atol=2e-2)
    np.testing.assert_allclose(result_std_err, simulated_std_err, atol=2e-2)
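# A standalone sanity sketch (not part of the original suite): the conditional
# probabilities above factor into a tensor product of two single-qubit confusion
# matrices after symmetrization (e.g. p0000 = (1 - e0) * (1 - e1)), so the four
# bitstring probabilities must sum to 1 for any readout fidelities and angles.
# All numeric values below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p00, p11, q00, q11 = rng.uniform(0.70, 0.99, size=4)
theta1, theta2 = rng.uniform(0.0, 2 * np.pi, size=2)
# symmetrized single-qubit flip probabilities
e0 = (2 - p00 - p11) / 2  # qubit 0
e1 = (2 - q00 - q11) / 2  # qubit 1
# populations of RX(theta1) x RX(theta2) |00> in the order |00>, |01>, |10>, |11>
a = np.array([np.cos(theta1 / 2) ** 2, np.sin(theta1 / 2) ** 2])
b = np.array([np.cos(theta2 / 2) ** 2, np.sin(theta2 / 2) ** 2])
pops = np.outer(a, b).ravel()
# single-qubit readout confusion matrices and their tensor product
c0 = np.array([[1 - e0, e0], [e0, 1 - e0]])
c1 = np.array([[1 - e1, e1], [e1, 1 - e1]])
probs = np.kron(c0, c1) @ pops
assert np.isclose(probs.sum(), 1.0)
```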

def test_raw_statistics_2q_nontrivial_entangled_state(forest, use_seed):
    '''Test that we get correct exhaustively symmetrized statistics
    in terms of readout errors, even for non-trivial 2q entangled states.

    Note: this only tests exhaustive symmetrization in the presence
    of uncorrelated errors.
    '''
    qc = get_qc('2q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_simulations = 1
    else:
        num_simulations = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0) * sZ(1))
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0), CNOT(0, 1))
    p00, p11, q00, q11 = np.random.uniform(0.70, 0.99, size=4)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    p.define_noisy_readout(1, p00=q00, p11=q11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 5000
    raw_expectations = []
    raw_std_errs = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        raw_expectations.append([res.raw_expectation for res in expt_results])
        raw_std_errs.append([res.raw_std_err for res in expt_results])
    raw_expectations = np.array(raw_expectations)
    raw_std_errs = np.array(raw_std_errs)
    result_expectation = np.mean(raw_expectations, axis=0)
    result_std_err = np.mean(raw_std_errs, axis=0)
    # calculate relevant conditional probabilities, given the |00> state
    # notation used: pijmn means p(ij|mn)
    p0000 = (p00 + p11) * (q00 + q11) / 4
    p0100 = (p00 + p11) * (2 - q00 - q11) / 4
    p1000 = (q00 + q11) * (2 - p00 - p11) / 4
    p1100 = (2 - p00 - p11) * (2 - q00 - q11) / 4
    # calculate relevant conditional probabilities, given the |11> state
    p0011 = p1100
    p0111 = (2 - p00 - p11) * (q00 + q11) / 4
    p1011 = (p00 + p11) * (2 - q00 - q11) / 4
    p1111 = p0000
    # calculate the squared amplitudes of the pure state
    alph00 = (np.cos(theta / 2)) ** 2
    alph11 = (np.sin(theta / 2)) ** 2
    # calculate the probabilities of the various bitstrings
    pr00 = p0000 * alph00 + p0011 * alph11
    pr01 = p0100 * alph00 + p0111 * alph11
    pr10 = p1000 * alph00 + p1011 * alph11
    pr11 = p1100 * alph00 + p1111 * alph11
    # calculate the Z^{\otimes 2} expectation and the error of the mean
    z_expectation = (pr00 + pr11) - (pr01 + pr10)
    simulated_std_err = np.sqrt((1 - z_expectation ** 2) / num_shots)
    # compare against simulated results
    np.testing.assert_allclose(result_expectation, z_expectation, atol=2e-2)
    np.testing.assert_allclose(result_std_err, simulated_std_err, atol=2e-2)

@pytest.mark.flaky(reruns=1)
def test_corrected_statistics_2q_nontrivial_nonentangled_state(forest, use_seed):
    '''Test that we can successfully correct the observed statistics
    in the presence of readout errors, even for 2q non-trivial but
    non-entangled states.

    Note: this only tests exhaustive symmetrization in the presence
    of uncorrelated errors.
    '''
    qc = get_qc('2q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(13)
        num_simulations = 1
    else:
        num_simulations = 100
    expt = ExperimentSetting(TensorProductState(), sZ(0) * sZ(1))
    theta1, theta2 = np.random.uniform(0.0, 2 * np.pi, size=2)
    p = Program(RX(theta1, 0), RX(theta2, 1))
    p00, p11, q00, q11 = np.random.uniform(0.70, 0.99, size=4)
    p.define_noisy_readout(0, p00=p00, p11=p11)
    p.define_noisy_readout(1, p00=q00, p11=q11)
    tomo_expt = TomographyExperiment(settings=[expt], program=p)
    num_shots = 5000
    expectations = []
    std_errs = []
    for _ in range(num_simulations):
        expt_results = list(measure_observables(qc, tomo_expt, n_shots=num_shots))
        expectations.append([res.expectation for res in expt_results])
        std_errs.append([res.std_err for res in expt_results])
    expectations = np.array(expectations)
    std_errs = np.array(std_errs)
    result_expectation = np.mean(expectations, axis=0)
    result_std_err = np.mean(std_errs, axis=0)
    # calculate the squared amplitudes of the pure state
    alph00 = (np.cos(theta1 / 2) * np.cos(theta2 / 2)) ** 2
    alph01 = (np.cos(theta1 / 2) * np.sin(theta2 / 2)) ** 2
    alph10 = (np.sin(theta1 / 2) * np.cos(theta2 / 2)) ** 2
    alph11 = (np.sin(theta1 / 2) * np.sin(theta2 / 2)) ** 2
    # calculate the Z^{\otimes 2} expectation and the error of the mean
    expected_expectation = (alph00 + alph11) - (alph01 + alph10)
    expected_std_err = np.sqrt(np.var(expectations))
    # compare against simulated results
    np.testing.assert_allclose(result_expectation, expected_expectation, atol=2e-2)
    np.testing.assert_allclose(result_std_err, expected_std_err, atol=2e-2)

def _point_state_fidelity_estimate(v, dim=2):
    """
    :param v: array of expectation values
    :param dim: dimensionality of the Hilbert space
    """
    return (1.0 + np.sum(v)) / dim

def test_bit_flip_state_fidelity(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy bit-flip channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # the bit-flip channel is composed of two Kraus operations:
    # applying the X gate with probability `prob`, and applying the identity gate
    # with probability `1 - prob`
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[0, 1], [1, 0]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = 1 - prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_dephasing_state_fidelity(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy dephasing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the dephasing channel
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = 1
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_depolarizing_state_fidelity(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy depolarizing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the depolarizing channel
    kraus_ops = [np.sqrt(3 * prob + 1) / 2 * np.array([[1, 0], [0, 1]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, 1], [1, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, -1j], [1j, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = (1 + prob) / 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_unitary_state_fidelity(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # rotate the |0> state by some random angle about the X axis
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0))
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is this state to |0>?
    expected_fidelity = (np.cos(theta / 2)) ** 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_bit_flip_state_fidelity_readout_error(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy bit-flip channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # the bit-flip channel is composed of two Kraus operations:
    # applying the X gate with probability `prob`, and applying the identity gate
    # with probability `1 - prob`
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[0, 1], [1, 0]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    p.define_noisy_readout(0, 0.95, 0.76)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = 1 - prob
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_dephasing_state_fidelity_readout_error(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy dephasing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the dephasing channel
    kraus_ops = [np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]]),
                 np.sqrt(prob) * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    p.define_noisy_readout(0, 0.95, 0.76)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = 1
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_depolarizing_state_fidelity_readout_error(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # prepare noisy depolarizing channel as program for some random value of probability
    prob = np.random.uniform(0.1, 0.5)
    # Kraus operators for the depolarizing channel
    kraus_ops = [np.sqrt(3 * prob + 1) / 2 * np.array([[1, 0], [0, 1]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, 1], [1, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[0, -1j], [1j, 0]]),
                 np.sqrt(1 - prob) / 2 * np.array([[1, 0], [0, -1]])]
    p = Program(Pragma("PRESERVE_BLOCK"), I(0), Pragma("END_PRESERVE_BLOCK"))
    p.define_noisy_gate("I", [0], kraus_ops)
    p.define_noisy_readout(0, 0.95, 0.76)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is the mixed state to |0>?
    expected_fidelity = (1 + prob) / 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)

def test_unitary_state_fidelity_readout_error(forest, use_seed):
    qc = get_qc('1q-qvm')
    if use_seed:
        qc.qam.random_seed = 0
        np.random.seed(0)
        num_expts = 1
    else:
        num_expts = 100
    # prepare experiment setting
    expt = ExperimentSetting(TensorProductState(), sZ(0))
    # rotate the |0> state by some random angle about the X axis
    theta = np.random.uniform(0.0, 2 * np.pi)
    p = Program(RX(theta, 0))
    p.define_noisy_readout(0, 0.95, 0.76)
    # prepare TomographyExperiment
    process_exp = TomographyExperiment(settings=[expt], program=p)
    # list to store experiment results
    expts = []
    for _ in range(num_expts):
        expt_results = []
        for res in measure_observables(qc, process_exp, n_shots=2000):
            expt_results.append(res.expectation)
        expts.append(expt_results)
    expts = np.array(expts)
    results = np.mean(expts, axis=0)
    estimated_fidelity = _point_state_fidelity_estimate(results)
    # how close is this state to |0>?
    expected_fidelity = (np.cos(theta / 2)) ** 2
    np.testing.assert_allclose(expected_fidelity, estimated_fidelity, atol=2e-2)