hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9f8b6c3fb3ea7a7f3f7103b5d427ef7958286de7 | 9 | py | Python | .history/ClassFiles/Python301/ObjectOrientatedProgramming/ErrorCatching/ErrorsExceptions_20210216162852.py | minefarmer/CompletePython | 6de46e7ee29d9e4eaada60352c193f552afd6f15 | [
"Unlicense"
] | null | null | null | .history/ClassFiles/Python301/ObjectOrientatedProgramming/ErrorCatching/ErrorsExceptions_20210216162852.py | minefarmer/CompletePython | 6de46e7ee29d9e4eaada60352c193f552afd6f15 | [
"Unlicense"
] | null | null | null | .history/ClassFiles/Python301/ObjectOrientatedProgramming/ErrorCatching/ErrorsExceptions_20210216162852.py | minefarmer/CompletePython | 6de46e7ee29d9e4eaada60352c193f552afd6f15 | [
"Unlicense"
] | null | null | null | try:
| 4.5 | 4 | 0.333333 | 1 | 9 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.555556 | 9 | 2 | 5 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f8c30defd5fac4771ba01e052e000cd59ac7496 | 21,228 | py | Python | src/diem/testing/suites/offchainv2/test_receive_payment.py | isabella232/client-sdk-python | 2cbeb77eadc16a300b0026df513aef84152a8f94 | [
"Apache-2.0"
] | null | null | null | src/diem/testing/suites/offchainv2/test_receive_payment.py | isabella232/client-sdk-python | 2cbeb77eadc16a300b0026df513aef84152a8f94 | [
"Apache-2.0"
] | 1 | 2021-06-01T11:49:47.000Z | 2021-06-01T11:49:47.000Z | src/diem/testing/suites/offchainv2/test_receive_payment.py | isabella232/client-sdk-python | 2cbeb77eadc16a300b0026df513aef84152a8f94 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) The Diem Core Contributors
# SPDX-License-Identifier: Apache-2.0
from diem.testing.miniwallet import RestClient, AccountResource, Transaction, AppConfig
from diem import offchain, jsonrpc, stdlib, utils, txnmetadata, identifier
from typing import Optional, List
from ..conftest import wait_for, wait_for_balance, wait_for_event, wait_for_payment_transaction_complete
import pytest, json
def test_receive_payment_with_travel_rule_metadata_and_valid_reference_id(
stub_client: RestClient,
target_client: RestClient,
currency: str,
travel_rule_threshold: int,
) -> None:
"""
Test Plan:
    1. Generate a valid account identifier from the receiver account as payee.
    2. Send a payment meeting the travel rule threshold to the account identifier.
    3. Wait for the transaction to be executed successfully.
    4. Assert the receiver account received the funds.
"""
amount = travel_rule_threshold
sender_account = stub_client.create_account(
balances={currency: amount}, kyc_data=target_client.get_kyc_sample().minimum
)
receiver_account = target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum)
try:
account_identifier = receiver_account.generate_account_identifier()
pay = sender_account.send_payment(currency, travel_rule_threshold, payee=account_identifier)
wait_for_payment_transaction_complete(sender_account, pay.id)
wait_for_balance(receiver_account, currency, travel_rule_threshold)
finally:
receiver_account.log_events()
sender_account.log_events()
@pytest.mark.parametrize( # pyre-ignore
"invalid_ref_id", [None, "", "ref_id_is_not_uuid", "6cd81d79-f041-4f28-867f-e4d54950833e"]
)
def test_receive_payment_with_travel_rule_metadata_and_invalid_reference_id(
stub_client: RestClient,
target_client: RestClient,
currency: str,
hrp: str,
stub_config: AppConfig,
diem_client: jsonrpc.Client,
stub_wallet_pending_income_account: AccountResource,
invalid_ref_id: Optional[str],
travel_rule_threshold: int,
) -> None:
"""
    There is no way to create travel rule metadata with an invalid reference id when the payment
    amount meets the travel rule threshold, because the metadata signature is verified by the
    transaction script.
    Also, if a metadata signature is provided, the transaction script verifies it regardless of
    whether the amount meets the travel rule threshold, so there is no need to test the
    invalid-metadata-signature case.
    This test bypasses the transaction script validation by sending a payment amount under the
    travel rule threshold without a metadata signature; the receiver should handle it properly and refund.
Test Plan:
    1. Generate a valid account identifier from the receiver account as payee.
    2. Submit a payment transaction under the travel rule threshold from the sender to the receiver on-chain account.
    3. Wait for the transaction to be executed successfully.
    4. Assert the payment is eventually refunded.
    Note: the refund payment will be received by the pending income account of the MiniWallet Stub, because
    no account owns the original invalid payment transaction sent by the test.
"""
amount = travel_rule_threshold
sender_account = stub_client.create_account(
balances={currency: amount}, kyc_data=target_client.get_kyc_sample().minimum
)
receiver_account = target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum)
try:
receiver_account_identifier = receiver_account.generate_account_identifier()
receiver_account_address = identifier.decode_account_address(receiver_account_identifier, hrp)
sender_account_identifier = sender_account.generate_account_identifier()
sender_address = identifier.decode_account_address(sender_account_identifier, hrp)
metadata, _ = txnmetadata.travel_rule(invalid_ref_id, sender_address, amount) # pyre-ignore
original_payment_txn: jsonrpc.Transaction = stub_config.account.submit_and_wait_for_txn(
diem_client,
stdlib.encode_peer_to_peer_with_metadata_script(
currency=utils.currency_code(currency),
                amount=amount // 1000,  # integer division: the script expects an int amount, kept under the travel rule threshold
payee=receiver_account_address,
metadata=metadata,
metadata_signature=b"",
),
)
wait_for_event(
stub_wallet_pending_income_account,
"created_transaction",
status=Transaction.Status.completed,
refund_diem_txn_version=original_payment_txn.version,
)
assert receiver_account.balance(currency) == 0
finally:
receiver_account.log_events()
sender_account.log_events()
def test_receive_payment_meets_travel_rule_threshold_both_kyc_data_evaluations_are_accepted(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with minimum valid kyc data and enough balance in the stub wallet application.
2. Create receiver account with minimum valid kyc data with 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SEND", "READY"]
    5. Expect the payment to succeed; the receiver account balance increases by the amount sent, and the sender account balance decreases by the amount sent.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().minimum,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum),
payment_command_states=["S_INIT", "R_SEND", "READY"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_rejected_by_the_receiver(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
hrp: str,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be rejected by the target wallet application in the stub wallet application.
2. Create receiver account with minimum valid kyc data and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().reject,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum),
payment_command_states=["S_INIT", "R_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_receiver_kyc_data_is_rejected_by_the_sender(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with minimum valid kyc data and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be rejected by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SEND", "S_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().minimum,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().reject),
payment_command_states=["S_INIT", "R_SEND", "S_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_soft_match_then_accepted_after_reviewing_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be soft matched by the target wallet application and enough balance in the stub wallet application.
2. Create receiver account with minimum valid kyc data and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "READY"]
    5. Expect the payment to succeed; the receiver account balance increases by the amount sent, and the sender account balance decreases by the amount sent.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_match,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum),
payment_command_states=["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "READY"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_receiver_kyc_data_is_soft_match_then_accepted_after_reviewing_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with minimum valid kyc data and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be soft matched by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SEND", "S_SOFT", "R_SOFT_SEND", "READY"]
    5. Expect the payment to succeed; the receiver account balance increases by the amount sent, and the sender account balance decreases by the amount sent.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().minimum,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().soft_match),
payment_command_states=["S_INIT", "R_SEND", "S_SOFT", "R_SOFT_SEND", "READY"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_soft_match_then_rejected_after_reviewing_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be soft matched and then rejected by the target wallet application in the stub wallet application.
2. Create receiver account with minimum valid kyc data and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_reject,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum),
payment_command_states=["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_receiver_kyc_data_is_soft_match_then_rejected_after_reviewing_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with minimum valid kyc data and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be soft matched and then rejected by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SEND", "S_SOFT", "R_SOFT_SEND", "S_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().minimum,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().soft_reject),
payment_command_states=["S_INIT", "R_SEND", "S_SOFT", "R_SOFT_SEND", "S_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_soft_match_then_receiver_aborts_for_sending_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
    1. Create sender account with kyc data that will be soft matched by the target wallet application and enough balance in the stub wallet application.
    2. Create receiver account with minimum valid kyc data and 0 balance in the target wallet application.
    3. Set up the stub wallet application to abort the payment command when the receiver requests additional KYC data (soft match).
    4. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    5. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_ABORT"]
    6. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_match,
reject_additional_kyc_data_request=True,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().minimum),
payment_command_states=["S_INIT", "R_SOFT", "S_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_and_receiver_kyc_data_are_soft_match_then_accepted_after_reviewing_additional_kyc_data(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be soft matched and then accepted by the target wallet application and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be soft matched and then accepted by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_SOFT", "R_SOFT_SEND", "READY"]
    5. Expect the payment to succeed; the receiver account balance increases by the amount sent, and the sender account balance decreases by the amount sent.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_match,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().soft_match),
payment_command_states=["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_SOFT", "R_SOFT_SEND", "READY"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_soft_match_and_accepted_receiver_kyc_data_is_rejected(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be soft matched and then accepted by the target wallet application and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be rejected by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_match,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().reject),
payment_command_states=["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def test_receive_payment_meets_travel_rule_threshold_sender_kyc_data_is_soft_match_and_accepted_receiver_kyc_data_is_soft_match_and_rejected(
currency: str,
travel_rule_threshold: int,
target_client: RestClient,
stub_client: RestClient,
) -> None:
"""
Test Plan:
1. Create sender account with kyc data that will be soft matched and then accepted by the target wallet application and enough balance in the stub wallet application.
2. Create receiver account with kyc data that will be soft matched and then rejected by the stub wallet application and 0 balance in the target wallet application.
    3. Send payment from sender account to receiver account; amount is equal to the travel rule threshold.
    4. Wait for stub wallet application account events to include payment command states: ["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_SOFT", "R_SOFT_SEND", "S_ABORT"]
    5. Expect the sender and receiver account balances to be unchanged.
"""
receive_payment_meets_travel_rule_threshold(
sender=stub_client.create_account(
balances={currency: travel_rule_threshold},
kyc_data=target_client.get_kyc_sample().soft_match,
),
receiver=target_client.create_account(kyc_data=stub_client.get_kyc_sample().soft_reject),
payment_command_states=["S_INIT", "R_SOFT", "S_SOFT_SEND", "R_SEND", "S_SOFT", "R_SOFT_SEND", "S_ABORT"],
currency=currency,
amount=travel_rule_threshold,
)
def receive_payment_meets_travel_rule_threshold(
sender: AccountResource,
receiver: AccountResource,
payment_command_states: List[str],
currency: str,
amount: int,
) -> None:
sender_initial = sender.balance(currency)
receiver_initial = receiver.balance(currency)
payee = receiver.generate_account_identifier()
sender.send_payment(currency, amount, payee)
def match_exchange_states() -> None:
states = []
for e in sender.events():
if e.type in ["created_payment_command", "updated_payment_command"]:
payment_object = json.loads(e.data)["payment_object"]
payment = offchain.from_dict(payment_object, offchain.PaymentObject)
states.append(offchain.payment_state.MACHINE.match_state(payment).id)
assert states == payment_command_states
wait_for(match_exchange_states)
if payment_command_states[-1] == "READY":
wait_for_balance(sender, currency, sender_initial - amount)
wait_for_balance(receiver, currency, receiver_initial + amount)
else:
wait_for_balance(sender, currency, sender_initial)
wait_for_balance(receiver, currency, receiver_initial)
| 46.654945 | 170 | 0.742321 | 2,815 | 21,228 | 5.298046 | 0.076732 | 0.054982 | 0.09937 | 0.03138 | 0.811519 | 0.790063 | 0.783827 | 0.7617 | 0.758683 | 0.738836 | 0 | 0.006157 | 0.188996 | 21,228 | 454 | 171 | 46.757709 | 0.86013 | 0.384162 | 0 | 0.627737 | 0 | 0 | 0.039281 | 0.00652 | 0 | 0 | 0 | 0 | 0.007299 | 1 | 0.054745 | false | 0 | 0.018248 | 0 | 0.072993 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c8553c4c038e931d468bd5a062559455561665c | 22 | py | Python | Uncommon/Python/DataStructures/Heaps/__init__.py | MattiKemp/Data-Structures-And-Algorithms | 37a4eb4f092f5a058643ef5ac302fe16d97f84dc | [
"Unlicense"
] | null | null | null | Uncommon/Python/DataStructures/Heaps/__init__.py | MattiKemp/Data-Structures-And-Algorithms | 37a4eb4f092f5a058643ef5ac302fe16d97f84dc | [
"Unlicense"
] | null | null | null | Uncommon/Python/DataStructures/Heaps/__init__.py | MattiKemp/Data-Structures-And-Algorithms | 37a4eb4f092f5a058643ef5ac302fe16d97f84dc | [
"Unlicense"
] | null | null | null | from . import MaxHeap
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ca2b339c8c91583e4aec5fc04fb6bd537db5fb1 | 47 | py | Python | tests/roots/test-ext-autodoc/target/canonical/__init__.py | samdoran/sphinx | 4c91c038b220d07bbdfe0c1680af42fe897f342c | [
"BSD-2-Clause"
] | 4,973 | 2015-01-03T15:44:00.000Z | 2022-03-31T03:11:51.000Z | tests/roots/test-ext-autodoc/target/canonical/__init__.py | samdoran/sphinx | 4c91c038b220d07bbdfe0c1680af42fe897f342c | [
"BSD-2-Clause"
] | 7,850 | 2015-01-02T08:09:25.000Z | 2022-03-31T18:57:40.000Z | tests/roots/test-ext-autodoc/target/canonical/__init__.py | samdoran/sphinx | 4c91c038b220d07bbdfe0c1680af42fe897f342c | [
"BSD-2-Clause"
] | 2,179 | 2015-01-03T15:26:53.000Z | 2022-03-31T12:22:44.000Z | from target.canonical.original import Bar, Foo
| 23.5 | 46 | 0.829787 | 7 | 47 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 47 | 1 | 47 | 47 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cb27c30ea2ef1fc9472bcaa7713000f3e9afe9a | 14,879 | py | Python | evalml/tests/component_tests/test_delayed_features_transformer.py | sharshofski/evalml | f13dcd969e86b72ba01ca520247a16850030dcb0 | [
"BSD-3-Clause"
] | null | null | null | evalml/tests/component_tests/test_delayed_features_transformer.py | sharshofski/evalml | f13dcd969e86b72ba01ca520247a16850030dcb0 | [
"BSD-3-Clause"
] | null | null | null | evalml/tests/component_tests/test_delayed_features_transformer.py | sharshofski/evalml | f13dcd969e86b72ba01ca520247a16850030dcb0 | [
"BSD-3-Clause"
] | null | null | null | import pandas as pd
import pytest
from pandas.testing import assert_frame_equal
from evalml.pipelines import DelayedFeatureTransformer
@pytest.fixture
def delayed_features_data():
X = pd.DataFrame({"feature": range(1, 32)})
y = pd.Series(range(1, 32))
return X, y
def test_delayed_features_transformer_init():
delayed_features = DelayedFeatureTransformer(max_delay=4, delay_features=True, delay_target=False,
random_state=1)
assert delayed_features.parameters == {"max_delay": 4, "delay_features": True, "delay_target": False,
"gap": 1}
def encode_y_as_string(y):
y_answer = y.astype(int) - 1
y = y.map(lambda val: str(val).zfill(2))
return y, y_answer
def encode_X_as_string(X):
X_answer = X.astype(int) - 1
# So that the encoder encodes the values in ascending order. This makes it easier to
# specify the answer for each unit test
X.feature = pd.Categorical(X.feature.map(lambda val: str(val).zfill(2)))
return X, X_answer
def encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str):
y_answer = y
if encode_y_as_str:
y, y_answer = encode_y_as_string(y)
X_answer = X
if encode_X_as_str:
X, X_answer = encode_X_as_string(X)
return X, X_answer, y, y_answer
@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_delayed_feature_extractor_maxdelay3_gap1(encode_X_as_str, encode_y_as_str, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    answer = pd.DataFrame({"feature": X.feature,
                           "feature_delay_1": X_answer.feature.shift(1),
                           "feature_delay_2": X_answer.feature.shift(2),
                           "feature_delay_3": X_answer.feature.shift(3),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           "target_delay_2": y_answer.shift(2),
                           "target_delay_3": y_answer.shift(3)})
    if not encode_X_as_str:
        answer["feature"] = X.feature.astype("Int64")
    if not encode_y_as_str:
        answer["target_delay_0"] = y_answer.astype("Int64")
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=3, gap=1).fit_transform(X=X, y=y).to_dataframe())
    answer_only_y = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                                  "target_delay_1": y_answer.shift(1),
                                  "target_delay_2": y_answer.shift(2),
                                  "target_delay_3": y_answer.shift(3)})
    assert_frame_equal(answer_only_y, DelayedFeatureTransformer(max_delay=3, gap=1).fit_transform(X=None, y=y).to_dataframe())


@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_delayed_feature_extractor_maxdelay5_gap1(encode_X_as_str, encode_y_as_str, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    answer = pd.DataFrame({"feature": X.feature,
                           "feature_delay_1": X_answer.feature.shift(1),
                           "feature_delay_2": X_answer.feature.shift(2),
                           "feature_delay_3": X_answer.feature.shift(3),
                           "feature_delay_4": X_answer.feature.shift(4),
                           "feature_delay_5": X_answer.feature.shift(5),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           "target_delay_2": y_answer.shift(2),
                           "target_delay_3": y_answer.shift(3),
                           "target_delay_4": y_answer.shift(4),
                           "target_delay_5": y_answer.shift(5)})
    if not encode_X_as_str:
        answer["feature"] = X.feature.astype("Int64")
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=5, gap=1).fit_transform(X, y).to_dataframe())
    answer_only_y = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                                  "target_delay_1": y_answer.shift(1),
                                  "target_delay_2": y_answer.shift(2),
                                  "target_delay_3": y_answer.shift(3),
                                  "target_delay_4": y_answer.shift(4),
                                  "target_delay_5": y_answer.shift(5)})
    assert_frame_equal(answer_only_y, DelayedFeatureTransformer(max_delay=5, gap=1).fit_transform(X=None, y=y).to_dataframe())
@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_delayed_feature_extractor_maxdelay3_gap7(encode_X_as_str, encode_y_as_str, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    answer = pd.DataFrame({"feature": X.feature,
                           "feature_delay_1": X_answer.feature.shift(1),
                           "feature_delay_2": X_answer.feature.shift(2),
                           "feature_delay_3": X_answer.feature.shift(3),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           "target_delay_2": y_answer.shift(2),
                           "target_delay_3": y_answer.shift(3)})
    if not encode_X_as_str:
        answer["feature"] = X.feature.astype("Int64")
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X, y).to_dataframe())
    answer_only_y = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                                  "target_delay_1": y_answer.shift(1),
                                  "target_delay_2": y_answer.shift(2),
                                  "target_delay_3": y_answer.shift(3)})
    assert_frame_equal(answer_only_y, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X=None, y=y).to_dataframe())


@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_delayed_feature_extractor_numpy(encode_X_as_str, encode_y_as_str, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    X_np = X.values
    y_np = y.values
    answer = pd.DataFrame({0: X.feature,
                           "0_delay_1": X_answer.feature.shift(1),
                           "0_delay_2": X_answer.feature.shift(2),
                           "0_delay_3": X_answer.feature.shift(3),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           "target_delay_2": y_answer.shift(2),
                           "target_delay_3": y_answer.shift(3)})
    if not encode_X_as_str:
        answer[0] = X.feature.astype("Int64")
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X_np, y_np).to_dataframe())
    answer_only_y = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                                  "target_delay_1": y_answer.shift(1),
                                  "target_delay_2": y_answer.shift(2),
                                  "target_delay_3": y_answer.shift(3)})
    assert_frame_equal(answer_only_y, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X=None, y=y_np).to_dataframe())
@pytest.mark.parametrize("delay_features,delay_target", [(False, True), (True, False), (False, False)])
@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_lagged_feature_extractor_delay_features_delay_target(encode_y_as_str, encode_X_as_str, delay_features,
                                                              delay_target, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    all_delays = pd.DataFrame({"feature": X.feature,
                               "feature_delay_1": X_answer.feature.shift(1),
                               "feature_delay_2": X_answer.feature.shift(2),
                               "feature_delay_3": X_answer.feature.shift(3),
                               "target_delay_0": y_answer.astype("Int64"),
                               "target_delay_1": y_answer.shift(1),
                               "target_delay_2": y_answer.shift(2),
                               "target_delay_3": y_answer.shift(3)})
    if not encode_X_as_str:
        all_delays["feature"] = X.feature.astype("Int64")
    if not delay_features:
        all_delays = all_delays.drop(columns=[c for c in all_delays.columns if "feature_" in c])
    if not delay_target:
        all_delays = all_delays.drop(columns=[c for c in all_delays.columns if "target" in c])
    transformer = DelayedFeatureTransformer(max_delay=3, gap=1,
                                            delay_features=delay_features, delay_target=delay_target)
    assert_frame_equal(all_delays, transformer.fit_transform(X, y).to_dataframe())


@pytest.mark.parametrize("delay_features,delay_target", [(False, True), (True, False), (False, False)])
@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_lagged_feature_extractor_delay_target(encode_y_as_str, encode_X_as_str, delay_features,
                                               delay_target, delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    answer = pd.DataFrame()
    if delay_target:
        answer = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                               "target_delay_1": y_answer.shift(1),
                               "target_delay_2": y_answer.shift(2),
                               "target_delay_3": y_answer.shift(3)})
    transformer = DelayedFeatureTransformer(max_delay=3, gap=1,
                                            delay_features=delay_features, delay_target=delay_target)
    assert_frame_equal(answer, transformer.fit_transform(None, y).to_dataframe())


@pytest.mark.parametrize("gap", [0, 1, 7])
def test_target_delay_when_gap_is_0(gap, delayed_features_data):
    X, y = delayed_features_data
    expected = pd.DataFrame({"feature": X.feature.astype("Int64"),
                             "feature_delay_1": X.feature.shift(1),
                             "target_delay_0": y.astype("Int64"),
                             "target_delay_1": y.shift(1)})
    if gap == 0:
        expected = expected.drop(columns=["target_delay_0"])
    transformer = DelayedFeatureTransformer(max_delay=1, gap=gap)
    assert_frame_equal(expected, transformer.fit_transform(X, y).to_dataframe())
    expected = pd.DataFrame({"target_delay_0": y.astype("Int64"),
                             "target_delay_1": y.shift(1)})
    if gap == 0:
        expected = expected.drop(columns=["target_delay_0"])
    assert_frame_equal(expected, transformer.fit_transform(None, y).to_dataframe())
@pytest.mark.parametrize('data_type', ['ww', 'pd'])
@pytest.mark.parametrize('encode_X_as_str', [True, False])
@pytest.mark.parametrize('encode_y_as_str', [True, False])
def test_delay_feature_transformer_supports_custom_index(encode_X_as_str, encode_y_as_str, data_type, make_data_type,
                                                         delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, encode_X_as_str, encode_y_as_str)
    X.index = pd.RangeIndex(50, 81)
    X_answer.index = pd.RangeIndex(50, 81)
    y.index = pd.RangeIndex(50, 81)
    y_answer.index = pd.RangeIndex(50, 81)
    answer = pd.DataFrame({"feature": X.feature,
                           "feature_delay_1": X_answer.feature.shift(1),
                           "feature_delay_2": X_answer.feature.shift(2),
                           "feature_delay_3": X_answer.feature.shift(3),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           "target_delay_2": y_answer.shift(2),
                           "target_delay_3": y_answer.shift(3)}, index=pd.RangeIndex(50, 81))
    if not encode_X_as_str:
        answer["feature"] = X.feature.astype("Int64")
    X = make_data_type(data_type, X)
    y = make_data_type(data_type, y)
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X, y).to_dataframe())
    answer_only_y = pd.DataFrame({"target_delay_0": y_answer.astype("Int64"),
                                  "target_delay_1": y_answer.shift(1),
                                  "target_delay_2": y_answer.shift(2),
                                  "target_delay_3": y_answer.shift(3)}, index=pd.RangeIndex(50, 81))
    assert_frame_equal(answer_only_y, DelayedFeatureTransformer(max_delay=3, gap=7).fit_transform(X=None, y=y).to_dataframe())


def test_delay_feature_transformer_multiple_categorical_columns(delayed_features_data):
    X, y = delayed_features_data
    X, X_answer, y, y_answer = encode_X_y_as_strings(X, y, True, True)
    X['feature_2'] = pd.Categorical(["a"] * 10 + ['aa'] * 10 + ['aaa'] * 10 + ['aaaa'])
    X_answer['feature_2'] = pd.Series([0] * 10 + [1] * 10 + [2] * 10 + [3])
    answer = pd.DataFrame({"feature": X.feature,
                           'feature_2': X.feature_2,
                           "feature_delay_1": X_answer.feature.shift(1),
                           "feature_2_delay_1": X_answer.feature_2.shift(1),
                           "target_delay_0": y_answer.astype("Int64"),
                           "target_delay_1": y_answer.shift(1),
                           })
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=1, gap=11).fit_transform(X, y).to_dataframe())


def test_delay_feature_transformer_y_is_none(delayed_features_data):
    X, _ = delayed_features_data
    answer = pd.DataFrame({"feature": X.feature.astype("Int64"),
                           "feature_delay_1": X.feature.shift(1),
                           })
    assert_frame_equal(answer, DelayedFeatureTransformer(max_delay=1, gap=11).fit_transform(X, y=None).to_dataframe())
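To make the asserted column layout concrete without the library under test, here is a hypothetical re-creation of the naming scheme (a sketch only, not the actual DelayedFeatureTransformer; the gap-0 rule mirrors test_target_delay_when_gap_is_0 above):

```python
import pandas as pd

def delayed_features(X, y, max_delay, gap):
    """Hypothetical sketch of the column layout the tests above assert."""
    out = {}
    if X is not None:
        for col in X.columns:
            out[col] = X[col]
            for d in range(1, max_delay + 1):
                out[f"{col}_delay_{d}"] = X[col].shift(d)
    if y is not None:
        # target_delay_0 is only produced when there is a nonzero gap
        start = 0 if gap > 0 else 1
        for d in range(start, max_delay + 1):
            out[f"target_delay_{d}"] = y.shift(d)
    return pd.DataFrame(out)

demo = delayed_features(pd.DataFrame({"feature": [1, 2, 3]}),
                        pd.Series([10, 20, 30]), max_delay=1, gap=1)
# demo.columns: ['feature', 'feature_delay_1', 'target_delay_0', 'target_delay_1']
```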

# === file: tests/test_geometry.py | repo: samuelpeet/conehead | license: MIT ===
import pytest
import numpy as np
from conehead.source import Source
from conehead.geometry import (
Transform, line_block_plane_collision,
line_calc_limit_plane_collision, isocentre_plane_position
)
class TestGeometry:
def test_beam_to_global_G0_C0(self):
""" Test at G0, C0 """
source = Source("varian_clinac_6MV")
source.gantry(0)
source.collimator(0)
beam_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
global_coords = transform.beam_to_global(beam_coords)
correct = np.array([.1, .2, 100.3])
np.testing.assert_array_almost_equal(correct, global_coords, decimal=5)
def test_beam_to_global_G90_C0(self):
""" Test at G90, C0 """
source = Source("varian_clinac_6MV")
source.gantry(90)
source.collimator(0)
beam_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
global_coords = transform.beam_to_global(beam_coords)
correct = np.array([100.3, .2, -.1])
np.testing.assert_array_almost_equal(correct, global_coords, decimal=5)
def test_beam_to_global_G270_C0(self):
""" Test at G270, C0 """
source = Source("varian_clinac_6MV")
source.gantry(270)
source.collimator(0)
beam_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
global_coords = transform.beam_to_global(beam_coords)
correct = np.array([-100.3, .2, .1])
np.testing.assert_array_almost_equal(correct, global_coords, decimal=5)
def test_beam_to_global_G0_C90(self):
""" Test at G0, C90 """
source = Source("varian_clinac_6MV")
source.gantry(0)
source.collimator(90)
beam_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
global_coords = transform.beam_to_global(beam_coords)
correct = np.array([-.2, .1, 100.3])
np.testing.assert_array_almost_equal(correct, global_coords, decimal=5)
def test_beam_to_global_G270_C270(self):
""" Test at G270, C270 """
source = Source("varian_clinac_6MV")
source.gantry(270)
source.collimator(270)
beam_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
global_coords = transform.beam_to_global(beam_coords)
correct = np.array([-100.3, -.1, .2])
np.testing.assert_array_almost_equal(correct, global_coords, decimal=5)
def test_global_to_beam_G0_C0(self):
""" Test at G0, C0 """
source = Source("varian_clinac_6MV")
source.gantry(0)
source.collimator(0)
global_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
beam_coords = transform.global_to_beam(global_coords)
correct = np.array([.1, .2, -99.7])
np.testing.assert_array_almost_equal(correct, beam_coords, decimal=5)
def test_global_to_beam_G90_C0(self):
""" Test at G90, C0 """
source = Source("varian_clinac_6MV")
source.gantry(90)
source.collimator(0)
global_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
beam_coords = transform.global_to_beam(global_coords)
correct = np.array([-.3, .2, -99.9])
np.testing.assert_array_almost_equal(correct, beam_coords, decimal=5)
def test_global_to_beam_G270_C0(self):
""" Test at G270, C0 """
source = Source("varian_clinac_6MV")
source.gantry(270)
source.collimator(0)
global_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
beam_coords = transform.global_to_beam(global_coords)
correct = np.array([.3, .2, -100.1])
np.testing.assert_array_almost_equal(correct, beam_coords, decimal=5)
def test_global_to_beam_G0_C90(self):
""" Test at G0, C90 """
source = Source("varian_clinac_6MV")
source.gantry(0)
source.collimator(90)
global_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
beam_coords = transform.global_to_beam(global_coords)
correct = np.array([.2, -.1, -99.7])
np.testing.assert_array_almost_equal(correct, beam_coords, decimal=5)
def test_global_to_beam_G270_C270(self):
""" Test at G270, C270 """
source = Source("varian_clinac_6MV")
source.gantry(270)
source.collimator(270)
global_coords = np.array([.1, .2, .3])
transform = Transform(source.position, source.rotation)
beam_coords = transform.global_to_beam(global_coords)
correct = np.array([-.2, .3, -100.1])
np.testing.assert_array_almost_equal(correct, beam_coords, decimal=5)
def test_line_block_plane_collision(self):
ray_direction = np.array([0, 0, -1])
point = line_block_plane_collision(ray_direction)
correct = np.array([0, 0, -100])
np.testing.assert_array_almost_equal(correct, point)
def test_line_block_plane_collision_parallel(self):
with pytest.raises(RuntimeError):
ray_direction = np.array([1, 0, 0])
line_block_plane_collision(ray_direction)
def test_line_calc_limit_plane_collision(self):
ray_direction = np.array([0, 0, -1])
plane_point = np.array([0, 0, -20])
point = line_calc_limit_plane_collision(ray_direction, plane_point)
correct = np.array([0, 0, -20])
np.testing.assert_array_almost_equal(correct, point)
def test_line_calc_limit_plane_collision_parallel(self):
with pytest.raises(RuntimeError):
ray_direction = np.array([1, 0, 0])
plane_point = np.array([0, 0, -20])
line_calc_limit_plane_collision(ray_direction, plane_point)
def test_isocentre_plane_position(self):
position = np.array([10.0, 20.0, 50.0])
position_iso = isocentre_plane_position(position, 100.0)
correct = np.array([20.0, 40.0])
        np.testing.assert_array_almost_equal(correct, position_iso)
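The collision helpers exercised above are instances of the standard parametric line-plane intersection. A minimal sketch consistent with the expected values in these tests (assuming a ray cast from the origin and the block plane at z = -100; this is not conehead's actual implementation):

```python
import numpy as np

def line_plane_collision(ray_direction, plane_point, plane_normal, ray_origin=None, eps=1e-9):
    # Solve for t so that origin + t * direction lies on the plane:
    #   t = n . (p0 - origin) / (n . direction)
    # A ray parallel to the plane has n . direction == 0 and no solution.
    if ray_origin is None:
        ray_origin = np.zeros(3)
    denom = np.dot(plane_normal, ray_direction)
    if abs(denom) < eps:
        raise RuntimeError("Ray is parallel to the plane")
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * np.asarray(ray_direction)

# A ray pointing straight down from the origin hits the z = -100 plane
# at (0, 0, -100), matching test_line_block_plane_collision above.
point = line_plane_collision(np.array([0, 0, -1]),
                             plane_point=np.array([0, 0, -100]),
                             plane_normal=np.array([0, 0, 1]))
```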

# === file: mlbriefcase/clarifai/__init__.py | repo: microsoft/Briefcase | license: MIT ===
from .clarifai import *

# === file: python_print_testFile.py | repo: PCSailor/python_openpyxl_dcflog | license: MIT ===
import os
# Raw string avoids '\U' and '\t' being treated as escape sequences; os.startfile is Windows-only.
os.startfile(r'C:\Users\pc\Desktop\testFile.txt', 'print')

# === file: k2/python/__init__.py | repo: open-speech/sequeender | license: MIT ===
from k2.python import host
from k2.python import k2
__all__ = ['host', 'k2']

# === file: smartsim/database/__init__.py | repo: billschereriii/SmartSim | license: BSD-2-Clause ===
from .orchestrator import Orchestrator
# deprecated classes
from .orchestrator import (
PBSOrchestrator,
CobaltOrchestrator,
SlurmOrchestrator,
LSFOrchestrator
)

# === file: sdk/python/pulumi_google_native/policysimulator/v1beta1/__init__.py | repo: AaronFriel/pulumi-google-native | license: Apache-2.0 ===
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from ... import _utilities
import typing
# Export this package's modules as members:
from ._enums import *
from .folder_replay import *
from .get_folder_replay import *
from .get_organization_replay import *
from .get_replay import *
from .organization_replay import *
from .replay import *
from ._inputs import *
from . import outputs

# === file: Main.py | repo: joshuakristanto/Tutorial-1 | license: MIT ===
print("BRANCH2")
print("BRANCH1")
print("BRANCH1")
print("BRANCH3")

# === file: tests/pybind11/foo.py | repo: Erotemic/misc | license: Apache-2.0 ===
print('Importing {}'.format(__file__))
def bar():
import ctypes
print('ctypes = {!r}'.format(ctypes))
print('hi from a pure python module')

# === file: tests/conftest.py | repo: anthonycorletti/python-project-template | license: MIT ===
import pytest
@pytest.fixture(scope="session", autouse=True)
def _session() -> None:
pass
@pytest.fixture(scope="module", autouse=True)
def _module() -> None:
pass

# === file: utils/__init__.py | repo: harryefstra/-multi-UAV-simulator | license: MIT ===
from .rotationConversion import *
from .stateConversions import *
from .mixer import *
from .display import *
from .animation import *
from .pf_plot import *
from .quaternionFunctions import *

# === file: bmi/__init__.py | repo: romenrg/body-mass-index | license: MIT ===
from bmi.bmi import Bmi

# === file: mac/google-cloud-sdk/lib/googlecloudsdk/third_party/apis/datacatalog/v1beta1/datacatalog_v1beta1_client.py | repo: bopopescu/cndw | license: Apache-2.0 ===
"""Generated client library for datacatalog version v1beta1."""
# NOTE: This file is autogenerated and should not be edited by hand.
from apitools.base.py import base_api
from googlecloudsdk.third_party.apis.datacatalog.v1beta1 import datacatalog_v1beta1_messages as messages
class DatacatalogV1beta1(base_api.BaseApiClient):
"""Generated client library for service datacatalog version v1beta1."""
MESSAGES_MODULE = messages
BASE_URL = u'https://datacatalog.googleapis.com/'
_PACKAGE = u'datacatalog'
_SCOPES = [u'https://www.googleapis.com/auth/cloud-platform']
_VERSION = u'v1beta1'
_CLIENT_ID = '1042881264118.apps.googleusercontent.com'
_CLIENT_SECRET = 'x_Tw5K8nnjoRAqULM9PFAC2b'
_USER_AGENT = 'x_Tw5K8nnjoRAqULM9PFAC2b'
_CLIENT_CLASS_NAME = u'DatacatalogV1beta1'
_URL_VERSION = u'v1beta1'
_API_KEY = None
def __init__(self, url='', credentials=None,
get_credentials=True, http=None, model=None,
log_request=False, log_response=False,
credentials_args=None, default_global_params=None,
additional_http_headers=None, response_encoding=None):
"""Create a new datacatalog handle."""
url = url or self.BASE_URL
super(DatacatalogV1beta1, self).__init__(
url, credentials=credentials,
get_credentials=get_credentials, http=http, model=model,
log_request=log_request, log_response=log_response,
credentials_args=credentials_args,
default_global_params=default_global_params,
additional_http_headers=additional_http_headers,
response_encoding=response_encoding)
self.catalog = self.CatalogService(self)
self.entries = self.EntriesService(self)
self.projects_locations_entryGroups_entries_tags = self.ProjectsLocationsEntryGroupsEntriesTagsService(self)
self.projects_locations_entryGroups_entries = self.ProjectsLocationsEntryGroupsEntriesService(self)
self.projects_locations_entryGroups = self.ProjectsLocationsEntryGroupsService(self)
self.projects_locations_tagTemplates_fields = self.ProjectsLocationsTagTemplatesFieldsService(self)
self.projects_locations_tagTemplates = self.ProjectsLocationsTagTemplatesService(self)
self.projects_locations_taxonomies_policyTags = self.ProjectsLocationsTaxonomiesPolicyTagsService(self)
self.projects_locations_taxonomies = self.ProjectsLocationsTaxonomiesService(self)
self.projects_locations = self.ProjectsLocationsService(self)
self.projects = self.ProjectsService(self)
class CatalogService(base_api.BaseApiService):
"""Service class for the catalog resource."""
_NAME = u'catalog'
def __init__(self, client):
super(DatacatalogV1beta1.CatalogService, self).__init__(client)
self._upload_configs = {
}
def Search(self, request, global_params=None):
r"""Searches Data Catalog for multiple resources like entries, tags that.
match a query.
This is a custom method
(https://cloud.google.com/apis/design/custom_methods) and does not return
the complete resource, only the resource identifier and high level
      fields. Clients can subsequently call `Get` methods.
Note that searches do not have full recall. There may be results that match
your query but are not returned, even in subsequent pages of results. These
missing results may vary across repeated calls to search. Do not rely on
this method if you need to guarantee full recall.
See [Data Catalog Search
Syntax](/data-catalog/docs/how-to/search-reference) for more information.
Args:
request: (GoogleCloudDatacatalogV1beta1SearchCatalogRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1SearchCatalogResponse) The response message.
"""
config = self.GetMethodConfig('Search')
return self._RunMethod(
config, request, global_params=global_params)
Search.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'POST',
method_id=u'datacatalog.catalog.search',
ordered_params=[],
path_params=[],
query_params=[],
relative_path=u'v1beta1/catalog:search',
request_field='<request>',
request_type_name=u'GoogleCloudDatacatalogV1beta1SearchCatalogRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1SearchCatalogResponse',
supports_download=False,
)
class EntriesService(base_api.BaseApiService):
"""Service class for the entries resource."""
_NAME = u'entries'
def __init__(self, client):
super(DatacatalogV1beta1.EntriesService, self).__init__(client)
self._upload_configs = {
}
def Lookup(self, request, global_params=None):
r"""Get an entry by target resource name. This method allows clients to use.
the resource name from the source Google Cloud Platform service to get the
Data Catalog Entry.
Args:
request: (DatacatalogEntriesLookupRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Entry) The response message.
"""
config = self.GetMethodConfig('Lookup')
return self._RunMethod(
config, request, global_params=global_params)
Lookup.method_config = lambda: base_api.ApiMethodInfo(
http_method=u'GET',
method_id=u'datacatalog.entries.lookup',
ordered_params=[],
path_params=[],
query_params=[u'linkedResource', u'sqlResource'],
relative_path=u'v1beta1/entries:lookup',
request_field='',
request_type_name=u'DatacatalogEntriesLookupRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Entry',
supports_download=False,
)
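Lookup takes no path parameters; the target is identified by one of the `linkedResource` or `sqlResource` query parameters listed in the config above. A minimal sketch of how such a request URL is assembled (the `lookup_url` helper is hypothetical; the real client builds the URL inside `_RunMethod`):

```python
import urllib.parse

def lookup_url(base, linked_resource=None, sql_resource=None):
    """Build a v1beta1/entries:lookup URL from one of the two query params."""
    # Exactly one of linked_resource / sql_resource should be provided.
    params = {}
    if linked_resource is not None:
        params["linkedResource"] = linked_resource
    if sql_resource is not None:
        params["sqlResource"] = sql_resource
    return base + "v1beta1/entries:lookup?" + urllib.parse.urlencode(params)
```

For example, a BigQuery table would be looked up through a `//bigquery.googleapis.com/...` linked-resource name.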
class ProjectsLocationsEntryGroupsEntriesTagsService(base_api.BaseApiService):
"""Service class for the projects_locations_entryGroups_entries_tags resource."""
_NAME = u'projects_locations_entryGroups_entries_tags'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsEntryGroupsEntriesTagsService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Creates a tag on an Entry.
Note: The project identified by the `parent` parameter for the
[tag](/data-catalog/docs/reference/rest/v1beta1/projects.locations.entryGroups.entries.tags/create#path-parameters)
and the
[tag
template](/data-catalog/docs/reference/rest/v1beta1/projects.locations.tagTemplates/create#path-parameters)
used to create the tag must be from the same organization.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesTagsCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Tag) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}/tags',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.entries.tags.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[],
relative_path=u'v1beta1/{+parent}/tags',
request_field=u'googleCloudDatacatalogV1beta1Tag',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesTagsCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Tag',
supports_download=False,
)
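The `relative_path` templates in these configs use RFC 6570 reserved expansion: `{+parent}` substitutes the `parent` path parameter without percent-encoding its slashes, so a full resource name expands into the URL path. A minimal sketch of that expansion (`expand_path` is a hypothetical helper, not the apitools implementation):

```python
import re
import urllib.parse

def expand_path(template, **params):
    """Minimal RFC 6570 {+var} (reserved) expansion: '/' and ':' in the
    value are kept as-is rather than percent-encoded."""
    def repl(match):
        return urllib.parse.quote(params[match.group(1)], safe="/:")
    return re.sub(r"\{\+(\w+)\}", repl, template)
```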
def Delete(self, request, global_params=None):
r"""Deletes a tag.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesTagsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}/tags/{tagsId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.entryGroups.entries.tags.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesTagsDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def List(self, request, global_params=None):
r"""Lists the tags on an Entry.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesTagsListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1ListTagsResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}/tags',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.entryGroups.entries.tags.list',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'pageSize', u'pageToken'],
relative_path=u'v1beta1/{+parent}/tags',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesTagsListRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1ListTagsResponse',
supports_download=False,
)
def Patch(self, request, global_params=None):
r"""Updates an existing tag.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesTagsPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Tag) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}/tags/{tagsId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.entryGroups.entries.tags.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1Tag',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesTagsPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Tag',
supports_download=False,
)
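Patch honors the `updateMask` query parameter: only fields named in the mask are overwritten, and every other field on the stored tag is preserved. A toy illustration of that semantics on plain dicts (`apply_update_mask` is hypothetical, not part of this client):

```python
def apply_update_mask(current, patch, update_mask):
    """Return a copy of `current` with only the masked fields taken from `patch`."""
    updated = dict(current)
    for field in update_mask.split(","):
        field = field.strip()
        if field in patch:
            updated[field] = patch[field]
    return updated
```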
class ProjectsLocationsEntryGroupsEntriesService(base_api.BaseApiService):
"""Service class for the projects_locations_entryGroups_entries resource."""
_NAME = u'projects_locations_entryGroups_entries'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsEntryGroupsEntriesService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Alpha feature.
Creates an entry. Currently only entries of 'FILESET' type can be created.
The user should enable the Data Catalog API in the project identified by
the `parent` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Entry) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.entries.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'entryId'],
relative_path=u'v1beta1/{+parent}/entries',
request_field=u'googleCloudDatacatalogV1beta1Entry',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Entry',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Alpha feature.
Deletes an existing entry. Only entries created through the
CreateEntry method can be deleted.
The user should enable the Data Catalog API in the project identified by
the `name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.entryGroups.entries.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Get(self, request, global_params=None):
r"""Gets an entry.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Entry) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.entryGroups.entries.get',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesGetRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Entry',
supports_download=False,
)
def GetIamPolicy(self, request, global_params=None):
r"""Gets the access control policy for a resource. A `NOT_FOUND` error.
is returned if the resource does not exist. An empty policy is returned
if the resource exists but does not have a policy set on it.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.getIamPolicy` to get policies on tag
templates.
- `datacatalog.entries.getIamPolicy` to get policies on entries.
- `datacatalog.entryGroups.getIamPolicy` to get policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesGetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('GetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
GetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}:getIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.entries.getIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:getIamPolicy',
request_field=u'getIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesGetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
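The contract the GetIamPolicy docstring describes — a `NOT_FOUND` error for a missing resource, an empty policy for an existing resource with none set — can be sketched like this (all names below are hypothetical and purely illustrative):

```python
class NotFoundError(Exception):
    """Stands in for the service's NOT_FOUND error."""

def get_iam_policy(existing_resources, policies, resource):
    # A missing resource is an error ...
    if resource not in existing_resources:
        raise NotFoundError(resource)
    # ... while an existing resource with no policy yields an empty policy.
    return policies.get(resource, {"bindings": []})
```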
def Patch(self, request, global_params=None):
r"""Updates an existing entry.
The user should enable the Data Catalog API in the project identified by
the `entry.name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Entry) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.entryGroups.entries.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1Entry',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Entry',
supports_download=False,
)
def SetIamPolicy(self, request, global_params=None):
r"""Sets the access control policy for a resource. Replaces any existing.
policy.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.setIamPolicy` to set policies on tag
templates.
- `datacatalog.entries.setIamPolicy` to set policies on entries.
- `datacatalog.entryGroups.setIamPolicy` to set policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesSetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('SetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
SetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}:setIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.entries.setIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:setIamPolicy',
request_field=u'setIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesSetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def TestIamPermissions(self, request, global_params=None):
r"""Returns the caller's permissions on a resource.
If the resource does not exist, an empty set of permissions is returned
(We don't return a `NOT_FOUND` error).
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
A caller is not required to have Google IAM permission to make this
request.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsEntriesTestIamPermissionsRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(TestIamPermissionsResponse) The response message.
"""
config = self.GetMethodConfig('TestIamPermissions')
return self._RunMethod(
config, request, global_params=global_params)
TestIamPermissions.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}/entries/{entriesId}:testIamPermissions',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.entries.testIamPermissions',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:testIamPermissions',
request_field=u'testIamPermissionsRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsEntriesTestIamPermissionsRequest',
response_type_name=u'TestIamPermissionsResponse',
supports_download=False,
)
class ProjectsLocationsEntryGroupsService(base_api.BaseApiService):
"""Service class for the projects_locations_entryGroups resource."""
_NAME = u'projects_locations_entryGroups'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsEntryGroupsService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Alpha feature.
Creates an EntryGroup.
The user should enable the Data Catalog API in the project identified by
the `parent` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsEntryGroupsCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1EntryGroup) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'entryGroupId'],
relative_path=u'v1beta1/{+parent}/entryGroups',
request_field=u'googleCloudDatacatalogV1beta1EntryGroup',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1EntryGroup',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Alpha feature.
Deletes an EntryGroup. Only entry groups that do not contain entries can be
deleted. The user should enable the Data Catalog API in the project
identified by the `name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsEntryGroupsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.entryGroups.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Get(self, request, global_params=None):
r"""Alpha feature.
Gets an EntryGroup.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1EntryGroup) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.entryGroups.get',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'readMask'],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsGetRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1EntryGroup',
supports_download=False,
)
def GetIamPolicy(self, request, global_params=None):
r"""Gets the access control policy for a resource. A `NOT_FOUND` error.
is returned if the resource does not exist. An empty policy is returned
if the resource exists but does not have a policy set on it.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.getIamPolicy` to get policies on tag
templates.
- `datacatalog.entries.getIamPolicy` to get policies on entries.
- `datacatalog.entryGroups.getIamPolicy` to get policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsGetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('GetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
GetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}:getIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.getIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:getIamPolicy',
request_field=u'getIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsGetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def SetIamPolicy(self, request, global_params=None):
r"""Sets the access control policy for a resource. Replaces any existing.
policy.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.setIamPolicy` to set policies on tag
templates.
- `datacatalog.entries.setIamPolicy` to set policies on entries.
- `datacatalog.entryGroups.setIamPolicy` to set policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsSetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('SetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
SetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}:setIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.setIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:setIamPolicy',
request_field=u'setIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsSetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def TestIamPermissions(self, request, global_params=None):
r"""Returns the caller's permissions on a resource.
If the resource does not exist, an empty set of permissions is returned
(a `NOT_FOUND` error is not returned).
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note: this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub or any external Google Cloud Platform resources synced to Cloud
Data Catalog.
A caller is not required to have Google IAM permission to make this
request.
Args:
request: (DatacatalogProjectsLocationsEntryGroupsTestIamPermissionsRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(TestIamPermissionsResponse) The response message.
"""
config = self.GetMethodConfig('TestIamPermissions')
return self._RunMethod(
config, request, global_params=global_params)
TestIamPermissions.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/entryGroups/{entryGroupsId}:testIamPermissions',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.entryGroups.testIamPermissions',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:testIamPermissions',
request_field=u'testIamPermissionsRequest',
request_type_name=u'DatacatalogProjectsLocationsEntryGroupsTestIamPermissionsRequest',
response_type_name=u'TestIamPermissionsResponse',
supports_download=False,
)
class ProjectsLocationsTagTemplatesFieldsService(base_api.BaseApiService):
"""Service class for the projects_locations_tagTemplates_fields resource."""
_NAME = u'projects_locations_tagTemplates_fields'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsTagTemplatesFieldsService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Creates a field in a tag template. The user should enable the Data Catalog.
API in the project identified by the `parent` parameter (see
[Data Catalog Resource
Project](/data-catalog/docs/concepts/resource-project) for more
information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesFieldsCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplateField) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}/fields',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.fields.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'tagTemplateFieldId'],
relative_path=u'v1beta1/{+parent}/fields',
request_field=u'googleCloudDatacatalogV1beta1TagTemplateField',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesFieldsCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplateField',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Deletes a field in a tag template and all uses of that field.
The user should enable the Data Catalog API in the project identified by
the `name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesFieldsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}/fields/{fieldsId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.tagTemplates.fields.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'force'],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesFieldsDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Patch(self, request, global_params=None):
r"""Updates a field in a tag template. This method cannot be used to update the.
field type. The user should enable the Data Catalog API in the project
identified by the `name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesFieldsPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplateField) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}/fields/{fieldsId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.tagTemplates.fields.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1TagTemplateField',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesFieldsPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplateField',
supports_download=False,
)
def Rename(self, request, global_params=None):
r"""Renames a field in a tag template. The user should enable the Data Catalog.
API in the project identified by the `name` parameter (see [Data Catalog
Resource Project](/data-catalog/docs/concepts/resource-project) for more
information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesFieldsRenameRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplateField) The response message.
"""
config = self.GetMethodConfig('Rename')
return self._RunMethod(
config, request, global_params=global_params)
Rename.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}/fields/{fieldsId}:rename',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.fields.rename',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}:rename',
request_field=u'googleCloudDatacatalogV1beta1RenameTagTemplateFieldRequest',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesFieldsRenameRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplateField',
supports_download=False,
)
class ProjectsLocationsTagTemplatesService(base_api.BaseApiService):
"""Service class for the projects_locations_tagTemplates resource."""
_NAME = u'projects_locations_tagTemplates'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsTagTemplatesService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Creates a tag template. The user should enable the Data Catalog API in.
the project identified by the `parent` parameter (see [Data Catalog
Resource Project](/data-catalog/docs/concepts/resource-project) for more
information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplate) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'tagTemplateId'],
relative_path=u'v1beta1/{+parent}/tagTemplates',
request_field=u'googleCloudDatacatalogV1beta1TagTemplate',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplate',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Deletes a tag template and all tags using the template.
The user should enable the Data Catalog API in the project identified by
the `name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.tagTemplates.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'force'],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Get(self, request, global_params=None):
r"""Gets a tag template.
Args:
request: (DatacatalogProjectsLocationsTagTemplatesGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplate) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.tagTemplates.get',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesGetRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplate',
supports_download=False,
)
def GetIamPolicy(self, request, global_params=None):
r"""Gets the access control policy for a resource. A `NOT_FOUND` error
is returned if the resource does not exist. An empty policy is returned
if the resource exists but does not have a policy set on it.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note, this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub and any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.getIamPolicy` to get policies on tag
templates.
- `datacatalog.entries.getIamPolicy` to get policies on entries.
- `datacatalog.entryGroups.getIamPolicy` to get policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsTagTemplatesGetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('GetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
GetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}:getIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.getIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:getIamPolicy',
request_field=u'getIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesGetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def Patch(self, request, global_params=None):
r"""Updates a tag template. This method cannot be used to update the fields of
a template. The tag template fields are represented as separate resources
and should be updated using their own create/update/delete methods.
The user should enable the Data Catalog API in the project identified by
the `tag_template.name` parameter (see [Data Catalog Resource Project]
(/data-catalog/docs/concepts/resource-project) for more information).
Args:
request: (DatacatalogProjectsLocationsTagTemplatesPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1TagTemplate) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.tagTemplates.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1TagTemplate',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1TagTemplate',
supports_download=False,
)
def SetIamPolicy(self, request, global_params=None):
r"""Sets the access control policy for a resource. Replaces any existing
policy.
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note, this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub and any external Google Cloud Platform resources synced to Cloud
Data Catalog.
Callers must have the following Google IAM permissions:
- `datacatalog.tagTemplates.setIamPolicy` to set policies on tag
templates.
- `datacatalog.entries.setIamPolicy` to set policies on entries.
- `datacatalog.entryGroups.setIamPolicy` to set policies on entry groups.
Args:
request: (DatacatalogProjectsLocationsTagTemplatesSetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('SetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
SetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}:setIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.setIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:setIamPolicy',
request_field=u'setIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesSetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def TestIamPermissions(self, request, global_params=None):
r"""Returns the caller's permissions on a resource.
If the resource does not exist, an empty set of permissions is returned
(we do not return a `NOT_FOUND` error).
Supported resources are:
- Tag templates.
- Entries.
- Entry groups.
Note, this method cannot be used to manage policies for BigQuery, Cloud
Pub/Sub and any external Google Cloud Platform resources synced to Cloud
Data Catalog.
A caller is not required to have Google IAM permission to make this
request.
Args:
request: (DatacatalogProjectsLocationsTagTemplatesTestIamPermissionsRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(TestIamPermissionsResponse) The response message.
"""
config = self.GetMethodConfig('TestIamPermissions')
return self._RunMethod(
config, request, global_params=global_params)
TestIamPermissions.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/tagTemplates/{tagTemplatesId}:testIamPermissions',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.tagTemplates.testIamPermissions',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:testIamPermissions',
request_field=u'testIamPermissionsRequest',
request_type_name=u'DatacatalogProjectsLocationsTagTemplatesTestIamPermissionsRequest',
response_type_name=u'TestIamPermissionsResponse',
supports_download=False,
)
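Each method_config above routes requests through a `relative_path` template such as `u'v1beta1/{+resource}:getIamPolicy'`, where `{+resource}` is an RFC 6570-style reserved expansion that a full resource name (slashes included) is substituted into. The real expansion happens inside apitools' `base_api`; the following is only an illustrative sketch with a hypothetical `expand_path` helper:

```python
# Hypothetical helper illustrating {+var} reserved expansion as used in the
# relative_path templates above; not part of apitools itself.
import re

def expand_path(template, params):
    # Replace each {+name} placeholder with its value, leaving the
    # value's slashes intact (reserved expansion).
    def repl(match):
        return params[match.group(1)]
    return re.sub(r'\{\+(\w+)\}', repl, template)

resource = 'projects/p1/locations/us-central1/tagTemplates/t1'
print(expand_path('v1beta1/{+resource}:getIamPolicy', {'resource': resource}))
# v1beta1/projects/p1/locations/us-central1/tagTemplates/t1:getIamPolicy
```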
class ProjectsLocationsTaxonomiesPolicyTagsService(base_api.BaseApiService):
"""Service class for the projects_locations_taxonomies_policyTags resource."""
_NAME = u'projects_locations_taxonomies_policyTags'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsTaxonomiesPolicyTagsService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Creates a policy tag in the specified taxonomy.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1PolicyTag) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[],
relative_path=u'v1beta1/{+parent}/policyTags',
request_field=u'googleCloudDatacatalogV1beta1PolicyTag',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1PolicyTag',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Deletes a policy tag. Also deletes all of its descendant policy tags.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Get(self, request, global_params=None):
r"""Gets a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1PolicyTag) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.get',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsGetRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1PolicyTag',
supports_download=False,
)
def GetIamPolicy(self, request, global_params=None):
r"""Gets the IAM policy for a taxonomy or a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsGetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('GetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
GetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}:getIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.getIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:getIamPolicy',
request_field=u'getIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsGetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def List(self, request, global_params=None):
r"""Lists all policy tags in a taxonomy.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1ListPolicyTagsResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.list',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'pageSize', u'pageToken'],
relative_path=u'v1beta1/{+parent}/policyTags',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsListRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1ListPolicyTagsResponse',
supports_download=False,
)
def Patch(self, request, global_params=None):
r"""Updates a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1PolicyTag) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1PolicyTag',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1PolicyTag',
supports_download=False,
)
def SetIamPolicy(self, request, global_params=None):
r"""Sets the IAM policy for a taxonomy or a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsSetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('SetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
SetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}:setIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.setIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:setIamPolicy',
request_field=u'setIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsSetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def TestIamPermissions(self, request, global_params=None):
r"""Returns the permissions that a caller has on the specified taxonomy or
policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPolicyTagsTestIamPermissionsRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(TestIamPermissionsResponse) The response message.
"""
config = self.GetMethodConfig('TestIamPermissions')
return self._RunMethod(
config, request, global_params=global_params)
TestIamPermissions.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}/policyTags/{policyTagsId}:testIamPermissions',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.policyTags.testIamPermissions',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:testIamPermissions',
request_field=u'testIamPermissionsRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPolicyTagsTestIamPermissionsRequest',
response_type_name=u'TestIamPermissionsResponse',
supports_download=False,
)
class ProjectsLocationsTaxonomiesService(base_api.BaseApiService):
"""Service class for the projects_locations_taxonomies resource."""
_NAME = u'projects_locations_taxonomies'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsTaxonomiesService, self).__init__(client)
self._upload_configs = {
}
def Create(self, request, global_params=None):
r"""Creates a taxonomy in the specified project.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesCreateRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Taxonomy) The response message.
"""
config = self.GetMethodConfig('Create')
return self._RunMethod(
config, request, global_params=global_params)
Create.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.create',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[],
relative_path=u'v1beta1/{+parent}/taxonomies',
request_field=u'googleCloudDatacatalogV1beta1Taxonomy',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesCreateRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Taxonomy',
supports_download=False,
)
def Delete(self, request, global_params=None):
r"""Deletes a taxonomy. This operation will also delete all
policy tags in this taxonomy along with their associated policies.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesDeleteRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Empty) The response message.
"""
config = self.GetMethodConfig('Delete')
return self._RunMethod(
config, request, global_params=global_params)
Delete.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}',
http_method=u'DELETE',
method_id=u'datacatalog.projects.locations.taxonomies.delete',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesDeleteRequest',
response_type_name=u'Empty',
supports_download=False,
)
def Export(self, request, global_params=None):
r"""Exports all taxonomies and their policy tags in a project.
This method generates SerializedTaxonomy protos with nested policy tags
that can be used as an input for future ImportTaxonomies calls.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesExportRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1ExportTaxonomiesResponse) The response message.
"""
config = self.GetMethodConfig('Export')
return self._RunMethod(
config, request, global_params=global_params)
Export.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies:export',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.taxonomies.export',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'serializedTaxonomies', u'taxonomies'],
relative_path=u'v1beta1/{+parent}/taxonomies:export',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesExportRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1ExportTaxonomiesResponse',
supports_download=False,
)
def Get(self, request, global_params=None):
r"""Gets a taxonomy.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesGetRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Taxonomy) The response message.
"""
config = self.GetMethodConfig('Get')
return self._RunMethod(
config, request, global_params=global_params)
Get.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.taxonomies.get',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[],
relative_path=u'v1beta1/{+name}',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesGetRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Taxonomy',
supports_download=False,
)
def GetIamPolicy(self, request, global_params=None):
r"""Gets the IAM policy for a taxonomy or a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesGetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('GetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
GetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}:getIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.getIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:getIamPolicy',
request_field=u'getIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesGetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def Import(self, request, global_params=None):
r"""Imports all taxonomies and their policy tags to a project as new
taxonomies.
This method provides bulk taxonomy / policy tag creation using a nested
proto structure.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesImportRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1ImportTaxonomiesResponse) The response message.
"""
config = self.GetMethodConfig('Import')
return self._RunMethod(
config, request, global_params=global_params)
Import.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies:import',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.import',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[],
relative_path=u'v1beta1/{+parent}/taxonomies:import',
request_field=u'googleCloudDatacatalogV1beta1ImportTaxonomiesRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesImportRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1ImportTaxonomiesResponse',
supports_download=False,
)
def List(self, request, global_params=None):
r"""Lists all taxonomies in a project in a particular location that the caller
has permission to view.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesListRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1ListTaxonomiesResponse) The response message.
"""
config = self.GetMethodConfig('List')
return self._RunMethod(
config, request, global_params=global_params)
List.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies',
http_method=u'GET',
method_id=u'datacatalog.projects.locations.taxonomies.list',
ordered_params=[u'parent'],
path_params=[u'parent'],
query_params=[u'pageSize', u'pageToken'],
relative_path=u'v1beta1/{+parent}/taxonomies',
request_field='',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesListRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1ListTaxonomiesResponse',
supports_download=False,
)
def Patch(self, request, global_params=None):
r"""Updates a taxonomy.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesPatchRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(GoogleCloudDatacatalogV1beta1Taxonomy) The response message.
"""
config = self.GetMethodConfig('Patch')
return self._RunMethod(
config, request, global_params=global_params)
Patch.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}',
http_method=u'PATCH',
method_id=u'datacatalog.projects.locations.taxonomies.patch',
ordered_params=[u'name'],
path_params=[u'name'],
query_params=[u'updateMask'],
relative_path=u'v1beta1/{+name}',
request_field=u'googleCloudDatacatalogV1beta1Taxonomy',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesPatchRequest',
response_type_name=u'GoogleCloudDatacatalogV1beta1Taxonomy',
supports_download=False,
)
def SetIamPolicy(self, request, global_params=None):
r"""Sets the IAM policy for a taxonomy or a policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesSetIamPolicyRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(Policy) The response message.
"""
config = self.GetMethodConfig('SetIamPolicy')
return self._RunMethod(
config, request, global_params=global_params)
SetIamPolicy.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}:setIamPolicy',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.setIamPolicy',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:setIamPolicy',
request_field=u'setIamPolicyRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesSetIamPolicyRequest',
response_type_name=u'Policy',
supports_download=False,
)
def TestIamPermissions(self, request, global_params=None):
r"""Returns the permissions that a caller has on the specified taxonomy or
policy tag.
Args:
request: (DatacatalogProjectsLocationsTaxonomiesTestIamPermissionsRequest) input message
global_params: (StandardQueryParameters, default: None) global arguments
Returns:
(TestIamPermissionsResponse) The response message.
"""
config = self.GetMethodConfig('TestIamPermissions')
return self._RunMethod(
config, request, global_params=global_params)
TestIamPermissions.method_config = lambda: base_api.ApiMethodInfo(
flat_path=u'v1beta1/projects/{projectsId}/locations/{locationsId}/taxonomies/{taxonomiesId}:testIamPermissions',
http_method=u'POST',
method_id=u'datacatalog.projects.locations.taxonomies.testIamPermissions',
ordered_params=[u'resource'],
path_params=[u'resource'],
query_params=[],
relative_path=u'v1beta1/{+resource}:testIamPermissions',
request_field=u'testIamPermissionsRequest',
request_type_name=u'DatacatalogProjectsLocationsTaxonomiesTestIamPermissionsRequest',
response_type_name=u'TestIamPermissionsResponse',
supports_download=False,
)
class ProjectsLocationsService(base_api.BaseApiService):
"""Service class for the projects_locations resource."""
_NAME = u'projects_locations'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsLocationsService, self).__init__(client)
self._upload_configs = {
}
class ProjectsService(base_api.BaseApiService):
"""Service class for the projects resource."""
_NAME = u'projects'
def __init__(self, client):
super(DatacatalogV1beta1.ProjectsService, self).__init__(client)
self._upload_configs = {
}
e2c78ae147c149861a036c85e70bc05441180034 | 9974 | py | Python | test.py | michalc/lowhaio-redirect | 3fdbf84153dba1d4403dc5ea47af08628cd50b89 | ["MIT"]
import unittest
from aiodnsresolver import (
Resolver,
IPv4AddressExpiresAt,
)
from aiohttp import (
web,
)
from lowhaio import (
Pool,
buffered,
)
from lowhaio_redirect import (
HttpTooManyRedirects,
redirectable,
)
def async_test(func):
def wrapper(*args, **kwargs):
future = func(*args, **kwargs)
loop = asyncio.get_event_loop()
loop.run_until_complete(future)
return wrapper
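The `async_test` decorator above lets coroutine test methods run under the synchronous `unittest` runner by driving them to completion on the event loop. A standalone demonstration of the same pattern (using a fresh loop per call, a variant of what the decorator above does with `get_event_loop`):

```python
# Standalone sketch of the async_test pattern: wrap a coroutine function so
# it can be called like an ordinary synchronous function.
import asyncio

def async_test(func):
    def wrapper(*args, **kwargs):
        # Drive the wrapped coroutine to completion on a fresh event loop.
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(func(*args, **kwargs))
        finally:
            loop.close()
    return wrapper

@async_test
async def double(x):
    await asyncio.sleep(0)  # yield to the loop once
    return x * 2

print(double(21))  # 42
```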
class TestIntegration(unittest.TestCase):
def add_async_cleanup(self, coroutine, *args):
loop = asyncio.get_event_loop()
self.addCleanup(loop.run_until_complete, coroutine(*args))
@async_test
async def test_get_301(self):
async def handle_get_a(_):
return web.Response(
status=301,
headers={
'location': '/b'
},
)
async def handle_get_b(_):
return web.Response(body=b'def')
app = web.Application()
app.add_routes([
web.get('/a', handle_get_a),
web.get('/b', handle_get_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
redirectable_request = redirectable(request)
response_status, _, response_body = await redirectable_request(
b'GET', 'http://localhost:8080/a',
)
response_body_buffered = await buffered(response_body)
self.assertEqual(response_body_buffered, b'def')
self.assertEqual(response_status, b'200')
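The redirect-following behaviour this test exercises can be sketched with a toy wrapper around a fake request function. The names below are hypothetical and the sketch only handles 301 responses; the real `redirectable` in lowhaio_redirect also rewrites the method per status code, filters headers across hosts, and raises `HttpTooManyRedirects`:

```python
# Toy sketch of a redirect-following wrapper, for illustration only.
import asyncio

def follow_redirects(request, max_redirects=10):
    async def wrapped(method, url):
        for _ in range(max_redirects + 1):
            status, headers, body = await request(method, url)
            if status != 301:
                return status, headers, body
            # Re-issue the request against the Location header.
            url = dict(headers)['location']
        raise Exception('Too many redirects')
    return wrapped

async def fake_request(method, url):
    # /a redirects to /b, which returns a body.
    if url.endswith('/a'):
        return 301, (('location', url[:-2] + '/b'),), b''
    return 200, (), b'def'

status, _, body = asyncio.run(follow_redirects(fake_request)('GET', 'http://x/a'))
print(status, body)  # 200 b'def'
```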
@async_test
async def test_post_301(self):
async def handle_post_a(_):
return web.Response(
status=301,
headers={
'location': '/b'
},
)
async def handle_get_b(_):
return web.Response(body=b'def')
app = web.Application()
app.add_routes([
web.post('/a', handle_post_a),
web.get('/b', handle_get_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
async def data():
yield b'a'
yield b'b'
yield b'c'
redirectable_request = redirectable(request)
response_status, _, response_body = await redirectable_request(
b'POST', 'http://localhost:8080/a',
body=data,
headers=((b'content-length', b'3'),),
)
response_body_buffered = await buffered(response_body)
self.assertEqual(response_body_buffered, b'def')
self.assertEqual(response_status, b'200')
@async_test
async def test_post_307(self):
body_b = None
async def handle_post_a(request):
await request.content.read()
return web.Response(
status=307,
headers={
'location': '/b'
},
)
async def handle_post_b(request):
nonlocal body_b
body_b = await request.content.read()
return web.Response(body=b'def')
app = web.Application()
app.add_routes([
web.post('/a', handle_post_a),
web.post('/b', handle_post_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
async def data():
yield b'a'
yield b'b'
yield b'c'
redirectable_request = redirectable(request)
response_status, _, response_body = await redirectable_request(
b'POST', 'http://localhost:8080/a',
headers=((b'content-length', b'3'),),
body=data,
)
response_body_buffered = await buffered(response_body)
self.assertEqual(body_b, b'abc')
self.assertEqual(response_body_buffered, b'def')
self.assertEqual(response_status, b'200')
@async_test
async def test_post_307_chain(self):
body_c = None
async def handle_post_a(request):
await request.content.read()
return web.Response(
status=307,
headers={
'location': '/b'
},
)
async def handle_post_b(request):
await request.content.read()
return web.Response(
status=307,
headers={
'location': '/c'
},
)
async def handle_post_c(request):
nonlocal body_c
body_c = await request.content.read()
return web.Response(body=b'def')
app = web.Application()
app.add_routes([
web.post('/a', handle_post_a),
web.post('/b', handle_post_b),
web.post('/c', handle_post_c),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
async def data():
yield b'a'
yield b'b'
yield b'c'
redirectable_request = redirectable(request)
response_status, _, response_body = await redirectable_request(
b'POST', 'http://localhost:8080/a',
headers=((b'content-length', b'3'),),
body=data,
)
response_body_buffered = await buffered(response_body)
self.assertEqual(body_c, b'abc')
self.assertEqual(response_body_buffered, b'def')
self.assertEqual(response_status, b'200')
@async_test
async def test_get_301_too_many_redirects(self):
async def handle_get_a(_):
return web.Response(
status=301,
headers={
'location': '/b'
},
)
async def handle_get_b(_):
return web.Response(
status=301,
headers={
'location': '/a'
},
)
app = web.Application()
app.add_routes([
web.get('/a', handle_get_a),
web.get('/b', handle_get_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
redirectable_request = redirectable(request)
with self.assertRaises(HttpTooManyRedirects):
await redirectable_request(
b'GET', 'http://localhost:8080/a',
)
@async_test
async def test_get_301_same_domain_auth_preserved(self):
auth_b = None
async def handle_get_a(_):
return web.Response(
status=301,
headers={
'location': '/b'
},
)
async def handle_get_b(request):
nonlocal auth_b
auth_b = request.headers['authorization']
return web.Response()
app = web.Application()
app.add_routes([
web.get('/a', handle_get_a),
web.get('/b', handle_get_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
request, close = Pool()
self.add_async_cleanup(close)
redirectable_request = redirectable(request)
_, _, body = await redirectable_request(
b'GET', 'http://localhost:8080/a',
headers=((b'Authorization', b'the-key'),)
)
await buffered(body)
self.assertEqual(auth_b, 'the-key')
@async_test
async def test_get_301_different_domain_auth_lost(self):
auth_b = None
async def handle_get_a(_):
return web.Response(
status=301,
headers={
'location': 'http://anotherhost.com:8080/b'
},
)
async def handle_get_b(request):
nonlocal auth_b
auth_b = request.headers.get('authorization', None)
return web.Response(body=b'def')
app = web.Application()
app.add_routes([
web.get('/a', handle_get_a),
web.get('/b', handle_get_b),
])
runner = web.AppRunner(app)
await runner.setup()
self.add_async_cleanup(runner.cleanup)
site = web.TCPSite(runner, '0.0.0.0', 8080)
await site.start()
def get_dns_resolver():
async def get_host(_, __, ___):
return IPv4AddressExpiresAt('127.0.0.1', expires_at=0)
return Resolver(
get_host=get_host,
)
request, close = Pool(get_dns_resolver=get_dns_resolver)
self.add_async_cleanup(close)
redirectable_request = redirectable(request)
_, _, body = await redirectable_request(
b'GET', 'http://localhost:8080/a',
headers=((b'authorization', b'the-key'),)
)
response = await buffered(body)
self.assertEqual(auth_b, None)
self.assertEqual(response, b'def')
| 28.66092 | 71 | 0.541307 | 1,083 | 9,974 | 4.76916 | 0.092336 | 0.040271 | 0.043562 | 0.0515 | 0.83001 | 0.81607 | 0.811617 | 0.775799 | 0.775799 | 0.766893 | 0 | 0.024619 | 0.348406 | 9,974 | 347 | 72 | 28.743516 | 0.770118 | 0 | 0 | 0.689655 | 0 | 0 | 0.056647 | 0 | 0 | 0 | 0 | 0 | 0.048276 | 1 | 0.013793 | false | 0 | 0.02069 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2c8e839d7c911d0ceaa9769dd82f80d3808501a | 110 | py | Python | spikelearn/measures/__init__.py | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | spikelearn/measures/__init__.py | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | spikelearn/measures/__init__.py | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | from .univariate import bracketing, unit_similarity_evolution, ramping_trajectory, ramping_p, cohen_d, dprime
| 55 | 109 | 0.863636 | 14 | 110 | 6.428571 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081818 | 110 | 1 | 110 | 110 | 0.891089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2dbf602e11cf7df1307a3437c35cf8de7e7316d | 281 | py | Python | venv/lib/python3.9/site-packages/google/longrunning/operations_grpc_pb2.py | qarik-hanrattyjen/apache-airflow-backport-providers-google-2021.3.3 | 630dcef73e6a258b6e9a52f934e2dd912ce741f8 | [
"Apache-2.0"
] | 5 | 2020-06-24T13:10:33.000Z | 2021-02-19T09:28:11.000Z | venv/lib/python3.9/site-packages/google/longrunning/operations_grpc_pb2.py | qarik-hanrattyjen/apache-airflow-backport-providers-google-2021.3.3 | 630dcef73e6a258b6e9a52f934e2dd912ce741f8 | [
"Apache-2.0"
] | 38 | 2020-05-15T23:28:00.000Z | 2022-03-22T16:52:08.000Z | venv/lib/python3.9/site-packages/google/longrunning/operations_grpc_pb2.py | qarik-hanrattyjen/apache-airflow-backport-providers-google-2021.3.3 | 630dcef73e6a258b6e9a52f934e2dd912ce741f8 | [
"Apache-2.0"
] | 10 | 2020-04-26T09:58:30.000Z | 2022-03-18T21:45:49.000Z | # This module is provided for backwards compatibility with
# googleapis-common-protos <= 1.52.0, where this import path contained
# all of the message and gRPC definitions.
from google.longrunning.operations_proto_pb2 import *
from google.longrunning.operations_pb2_grpc import *
| 40.142857 | 70 | 0.814947 | 40 | 281 | 5.625 | 0.775 | 0.088889 | 0.186667 | 0.275556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.124555 | 281 | 6 | 71 | 46.833333 | 0.890244 | 0.590747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2df9ff096d04b48b52b94ca4d249f433fda5ca2 | 44 | py | Python | app/models/__init__.py | QUDUSKUNLE/OPEN-API | 7bdf31af5fa99c0b054f5dd9dff478c8900d3cac | [
"MIT"
] | null | null | null | app/models/__init__.py | QUDUSKUNLE/OPEN-API | 7bdf31af5fa99c0b054f5dd9dff478c8900d3cac | [
"MIT"
] | 3 | 2020-02-11T23:16:23.000Z | 2021-06-10T21:33:02.000Z | app/models/__init__.py | QUDUSKUNLE/OPEN-API | 7bdf31af5fa99c0b054f5dd9dff478c8900d3cac | [
"MIT"
] | null | null | null | from .articles import *
from .user import *
| 14.666667 | 23 | 0.727273 | 6 | 44 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 24 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2fd42527597235445d280a87c7d8433f7b1afbe | 71 | py | Python | boardgames/app/search/__init__.py | codingblocks/-search-driven-apps | a133e57352394b46b0794c4b8fbd62ec31821844 | [
"MIT"
] | 5 | 2018-06-18T21:38:21.000Z | 2018-09-26T15:00:57.000Z | boardgames/app/search/__init__.py | codingblocks/-search-driven-apps | a133e57352394b46b0794c4b8fbd62ec31821844 | [
"MIT"
] | 9 | 2018-05-11T19:19:09.000Z | 2018-05-20T21:34:52.000Z | boardgames/app/search/__init__.py | codingblocks/-search-driven-apps | a133e57352394b46b0794c4b8fbd62ec31821844 | [
"MIT"
] | 4 | 2018-05-09T02:52:37.000Z | 2018-06-11T15:59:58.000Z | from import_converter import Import_Converter
from search import Search | 35.5 | 45 | 0.901408 | 10 | 71 | 6.2 | 0.4 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098592 | 71 | 2 | 46 | 35.5 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
39107e5fa79e4657c1c8860f216d73211167717a | 4,439 | py | Python | tests/infrastructure/api/views/unit/test_exchange_rate_views.py | sdediego/forex-django-clean-architecture | 915a8d844a8db5a40c726fe4cf9f6d50f7c95275 | [
"MIT"
] | 8 | 2021-11-09T16:43:38.000Z | 2022-03-25T16:04:26.000Z | tests/infrastructure/api/views/unit/test_exchange_rate_views.py | sdediego/forex-django-clean-architecture | 915a8d844a8db5a40c726fe4cf9f6d50f7c95275 | [
"MIT"
] | null | null | null | tests/infrastructure/api/views/unit/test_exchange_rate_views.py | sdediego/forex-django-clean-architecture | 915a8d844a8db5a40c726fe4cf9f6d50f7c95275 | [
"MIT"
] | 2 | 2021-11-16T21:17:31.000Z | 2022-02-11T11:15:29.000Z | # coding: utf-8
import datetime
import random
from http import HTTPStatus
from unittest.mock import Mock
from django.test.client import RequestFactory
import pytest
from src.infrastructure.api.views.exchange_rate import (
CurrencyViewSet, CurrencyExchangeRateViewSet)
from tests.fixtures import currency, exchange_rate
@pytest.mark.unit
def test_currency_viewset_get(currency):
viewset = CurrencyViewSet()
viewset.viewset_factory = Mock()
viewset.viewset_factory.create.return_value = Mock()
viewset.viewset_factory.create.return_value.get.return_value = (
vars(currency),
HTTPStatus.OK.value
)
response = viewset.get(RequestFactory(), currency.code)
assert hasattr(response, 'status_code')
assert response.status_code == HTTPStatus.OK.value
assert hasattr(response, 'data')
assert isinstance(response.data, dict)
@pytest.mark.unit
def test_currency_viewset_list(currency):
viewset = CurrencyViewSet()
viewset.viewset_factory = Mock()
viewset.viewset_factory.create.return_value = Mock()
viewset.viewset_factory.create.return_value.list.return_value = (
[vars(currency) for _ in range(random.randint(1, 10))],
HTTPStatus.OK.value
)
response = viewset.list(RequestFactory(), currency.code)
assert hasattr(response, 'status_code')
assert response.status_code == HTTPStatus.OK.value
assert hasattr(response, 'data')
assert isinstance(response.data, list)
@pytest.mark.unit
def test_currency_exchange_rate_viewset_convert(exchange_rate):
viewset = CurrencyExchangeRateViewSet()
viewset.viewset_factory = Mock()
viewset.viewset_factory.create.return_value = Mock()
viewset.viewset_factory.create.return_value.convert.return_value = (
{
'exchanged_currency': exchange_rate.exchanged_currency,
'exchanged_amount': round(random.uniform(10, 100), 2),
'rate_value': round(random.uniform(0.5, 1.5), 6)
},
HTTPStatus.OK.value
)
request = RequestFactory()
request.query_params = {
'source_currency': exchange_rate.source_currency,
'exchanged_currency': exchange_rate.exchanged_currency,
'amount': round(random.uniform(10, 100), 2)
}
response = viewset.convert(request)
assert hasattr(response, 'status_code')
assert response.status_code == HTTPStatus.OK.value
assert hasattr(response, 'data')
assert isinstance(response.data, dict)
@pytest.mark.unit
def test_currency_exchange_rate_viewset_list(exchange_rate):
series_length = random.randint(1, 10)
viewset = CurrencyExchangeRateViewSet()
viewset.viewset_factory = Mock()
viewset.viewset_factory.create.return_value = Mock()
viewset.viewset_factory.create.return_value.list.return_value = (
[exchange_rate for _ in range(series_length)],
HTTPStatus.OK.value
)
request = RequestFactory()
request.query_params = {
'source_currency': exchange_rate.source_currency,
'date_from': (
datetime.date.today() + datetime.timedelta(days=-series_length)
).strftime('%Y-%m-%d'),
'date_to': datetime.date.today().strftime('%Y-%m-%d'),
}
response = viewset.list(request)
assert hasattr(response, 'status_code')
assert response.status_code == HTTPStatus.OK.value
assert hasattr(response, 'data')
assert isinstance(response.data, list)
@pytest.mark.unit
def test_currency_exchange_rate_viewset_calculate_twr(exchange_rate):
viewset = CurrencyExchangeRateViewSet()
viewset.viewset_factory = Mock()
viewset.viewset_factory.create.return_value = Mock()
viewset.viewset_factory.create.return_value.calculate_twr.return_value = (
{'time_weighted_rate': round(random.uniform(0.5, 1.5), 6)},
HTTPStatus.OK.value
)
request = RequestFactory()
request.query_params = {
'source_currency': exchange_rate.source_currency,
'exchanged_currency': exchange_rate.exchanged_currency,
'date_from': (
datetime.date.today() + datetime.timedelta(days=-5)
).strftime('%Y-%m-%d'),
'date_to': datetime.date.today().strftime('%Y-%m-%d'),
}
response = viewset.calculate_twr(request)
assert hasattr(response, 'status_code')
assert response.status_code == HTTPStatus.OK.value
assert hasattr(response, 'data')
assert isinstance(response.data, dict)
| 36.089431 | 78 | 0.711421 | 508 | 4,439 | 6.011811 | 0.161417 | 0.058939 | 0.103143 | 0.08186 | 0.813032 | 0.792076 | 0.777014 | 0.743287 | 0.743287 | 0.706942 | 0 | 0.008208 | 0.176616 | 4,439 | 122 | 79 | 36.385246 | 0.82736 | 0.002929 | 0 | 0.607477 | 0 | 0 | 0.065099 | 0 | 0 | 0 | 0 | 0 | 0.186916 | 1 | 0.046729 | false | 0 | 0.074766 | 0 | 0.121495 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1abbfb923b83352ce0bcf595d173f0880e36e2ff | 116 | py | Python | apps/galleryapp/admin.py | itsMagondu/MaMaSe | 0287e092121155314e76124425ef26bb4154847f | [
"Apache-2.0"
] | 3 | 2016-03-08T15:15:00.000Z | 2020-03-05T05:32:19.000Z | apps/galleryapp/admin.py | itsMagondu/MaMaSe | 0287e092121155314e76124425ef26bb4154847f | [
"Apache-2.0"
] | 65 | 2015-09-25T13:32:12.000Z | 2022-03-11T23:22:12.000Z | apps/galleryapp/admin.py | itsMagondu/MaMaSe | 0287e092121155314e76124425ef26bb4154847f | [
"Apache-2.0"
] | 2 | 2017-05-16T07:56:10.000Z | 2020-06-06T06:01:31.000Z | from django.contrib import admin
from models import *
admin.site.register(GalleryApp)
admin.site.register(ImageApp) | 23.2 | 32 | 0.827586 | 16 | 116 | 6 | 0.625 | 0.229167 | 0.354167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 116 | 5 | 33 | 23.2 | 0.90566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
46f167daa0095c51d03f07e6ca5eaf924ed84cd2 | 19,831 | py | Python | tests/pyxelrest_service/test_pyxelrest.py | Colin-b/pyxelrest | 5c8db40d1537d0f9c29acd928ec9519b6bb557ec | [
"MIT"
] | 7 | 2018-12-07T10:08:53.000Z | 2021-03-24T07:52:36.000Z | tests/pyxelrest_service/test_pyxelrest.py | Colin-b/pyxelrest | 5c8db40d1537d0f9c29acd928ec9519b6bb557ec | [
"MIT"
] | 76 | 2018-12-07T10:29:48.000Z | 2021-11-17T00:54:24.000Z | tests/pyxelrest_service/test_pyxelrest.py | Colin-b/pyxelrest | 5c8db40d1537d0f9c29acd928ec9519b6bb557ec | [
"MIT"
] | null | null | null | from requests import PreparedRequest
from responses import RequestsMock
from tests import loader
def _get_request(responses: RequestsMock, url: str) -> PreparedRequest:
for call in responses.calls:
if call.request.url == url:
# Pop out verified request (to be able to check multiple requests)
responses.calls._calls.remove(call)
return call.request
def test_get_custom_url_sync(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.GET,
url="http://localhost:8958/async/status",
json={},
match_querystring=True,
)
assert (
generated_functions.vba_pyxelrest_get_url(
"http://localhost:8958/async/status",
extra_headers=[
["X-Custom-Header1", "custom1"],
["X-Custom-Header2", "custom2"],
],
)
== [[""]]
)
headers = _get_request(responses, "http://localhost:8958/async/status").headers
assert headers["X-Custom-Header1"] == "custom1"
assert headers["X-Custom-Header2"] == "custom2"
def test_get_custom_url(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.GET,
url="http://localhost:8958/async/status",
json={},
match_querystring=True,
)
assert (
generated_functions.pyxelrest_get_url(
"http://localhost:8958/async/status",
extra_headers=[
["X-Custom-Header1", "custom1"],
["X-Custom-Header2", "custom2"],
],
)
== [[""]]
)
headers = _get_request(responses, "http://localhost:8958/async/status").headers
assert headers["X-Custom-Header1"] == "custom1"
assert headers["X-Custom-Header2"] == "custom2"
def test_delete_custom_url_sync(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.DELETE,
url="http://localhost:8958/unlisted",
json={},
match_querystring=True,
)
assert (
generated_functions.vba_pyxelrest_delete_url(
"http://localhost:8958/unlisted",
extra_headers=[
["X-Custom-Header1", "custom1"],
["X-Custom-Header2", "custom2"],
],
)
== [[""]]
)
headers = _get_request(responses, "http://localhost:8958/unlisted").headers
assert headers["X-Custom-Header1"] == "custom1"
assert headers["X-Custom-Header2"] == "custom2"
def test_delete_custom_url(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.DELETE,
url="http://localhost:8958/unlisted",
json={},
match_querystring=True,
)
assert (
generated_functions.pyxelrest_delete_url(
"http://localhost:8958/unlisted",
extra_headers=[
["X-Custom-Header1", "custom1"],
["X-Custom-Header2", "custom2"],
],
)
== [[""]]
)
headers = _get_request(responses, "http://localhost:8958/unlisted").headers
assert headers["X-Custom-Header1"] == "custom1"
assert headers["X-Custom-Header2"] == "custom2"
def test_post_custom_url_dict(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.POST,
url="http://localhost:8958/dict",
json={},
match_querystring=True,
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert request.body == b'{"key1": "value1", "key2": 1, "key3": "value3"}'
def test_post_custom_url_dict_list_sync(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.POST,
url="http://localhost:8958/dict",
json={},
match_querystring=True,
)
assert (
generated_functions.vba_pyxelrest_post_url(
"http://localhost:8958/dict",
[
["key1", "key2", "key3"],
["value1", 1, "value3"],
["other1", 2, "other3"],
],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict_list",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert (
request.body
== b'[{"key1": "value1", "key2": 1, "key3": "value3"}, {"key1": "other1", "key2": 2, "key3": "other3"}]'
)
def test_post_custom_url_dict_list(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.POST,
url="http://localhost:8958/dict",
json={},
match_querystring=True,
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[
["key1", "key2", "key3"],
["value1", 1, "value3"],
["other1", 2, "other3"],
],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict_list",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert (
request.body
== b'[{"key1": "value1", "key2": 1, "key3": "value3"}, {"key1": "other1", "key2": 2, "key3": "other3"}]'
)
def test_put_custom_url_dict_list(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.PUT, url="http://localhost:8958/dict", json={}, match_querystring=True
)
assert (
generated_functions.pyxelrest_put_url(
"http://localhost:8958/dict",
[
["key1", "key2", "key3"],
["value1", 1, "value3"],
["other1", 2, "other3"],
],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict_list",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert (
request.body
== b'[{"key1": "value1", "key2": 1, "key3": "value3"}, {"key1": "other1", "key2": 2, "key3": "other3"}]'
)
def test_put_custom_url_dict(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.PUT, url="http://localhost:8958/dict", json={}, match_querystring=True
)
assert (
generated_functions.pyxelrest_put_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert request.body == b'{"key1": "value1", "key2": 1, "key3": "value3"}'
def test_put_custom_url_dict_sync(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.PUT, url="http://localhost:8958/dict", json={}, match_querystring=True
)
assert (
generated_functions.vba_pyxelrest_put_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
extra_headers=[["Content-Type", "application/json"]],
parse_body_as="dict",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert request.body == b'{"key1": "value1", "key2": 1, "key3": "value3"}'
def test_post_invalid_parse_body_as_date(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
parse_body_as="invalid",
)
== ['parse_body_as value "invalid" should be dict or dict_list.']
)
def test_post_invalid_wait_for_status(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
wait_for_status="invalid",
)
== ['wait_for_status value "invalid" must be an integer.']
)
def test_post_negative_wait_for_status(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
wait_for_status=-1,
)
== ['wait_for_status value "-1" must be superior or equals to 0.']
)
def test_post_invalid_check_interval(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
check_interval="invalid",
)
== ['check_interval value "invalid" must be an integer.']
)
def test_post_negative_check_interval(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
check_interval=-1,
)
== ['check_interval value "-1" must be superior or equals to 0.']
)
def test_post_invalid_url(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
-1,
[["key1", "key2", "key3"], ["value1", 1, "value3"]],
)
== ['url value "-1" must be formatted as text.']
)
def test_get_wait_for_status(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
responses.add(
responses.GET,
url="http://localhost:8958/test",
json=["should not be returned"],
status=200,
match_querystring=True,
)
responses.add(
responses.GET,
url="http://localhost:8958/test",
status=303,
adding_headers={"location": "http://localhost:8958/test2"},
match_querystring=True,
)
responses.add(
responses.GET,
url="http://localhost:8958/test2",
json=["should be returned"],
status=200,
match_querystring=True,
)
assert generated_functions.pyxelrest_get_url(
"http://localhost:8958/test", wait_for_status=303, check_interval=1
) == [["should be returned"]]
def test_get_check_interval(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
responses.add(
responses.GET,
url="http://localhost:8958/test",
status=303,
adding_headers={"location": "http://localhost:8958/test2"},
match_querystring=True,
)
responses.add(
responses.GET,
url="http://localhost:8958/test2",
json={},
status=200,
match_querystring=True,
)
assert (
generated_functions.pyxelrest_get_url(
"http://localhost:8958/test",
wait_for_status=303,
check_interval=1,
)
== [[""]]
)
def test_post_invalid_dict_only_header(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"]],
parse_body_as="dict",
)
== ["There should be only two rows. Header and values."]
)
def test_post_invalid_dict_too_many_rows(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2"], ["value1", "value2"], ["value10", "value20"]],
parse_body_as="dict",
)
== ["There should be only two rows. Header and values."]
)
def test_post_invalid_dict_list_only_header(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
}
}
},
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
[["key1", "key2", "key3"]],
parse_body_as="dict_list",
)
== ["There should be at least two rows. Header and first dictionary values."]
)
def test_post_body_as_is(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
responses.add(
responses.POST,
url="http://localhost:8958/dict",
json={},
match_querystring=True,
)
assert (
generated_functions.pyxelrest_post_url(
"http://localhost:8958/dict",
"Content of the body",
)
== [[""]]
)
request = _get_request(responses, "http://localhost:8958/dict")
assert request.headers["Content-Type"] == "application/json"
assert request.body == b'"Content of the body"'
def test_invalid_security_definitions(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
assert (
generated_functions.pyxelrest_get_url(
"http://localhost:8958/dict",
security_definitions="invalid",
)
== [
"security_definitions value \"invalid\" (<class 'str'> type) must be a list."
]
)
def test_incomplete_security_definitions(responses: RequestsMock, tmpdir):
generated_functions = loader.load(
tmpdir,
{
"pyxelrest": {
"formulas": {
"dynamic_array": {"lock_excel": True},
"vba_compatible": {},
}
}
},
)
assert generated_functions.pyxelrest_get_url(
"http://localhost:8958/dict",
security_definitions=["invalid"],
) == [
"security_definitions value should contains at least two rows. Header and values."
]
| 27.466759 | 112 | 0.50527 | 1,712 | 19,831 | 5.638435 | 0.076519 | 0.07003 | 0.091578 | 0.080804 | 0.923547 | 0.918885 | 0.909458 | 0.904278 | 0.904278 | 0.903657 | 0 | 0.033022 | 0.352529 | 19,831 | 721 | 113 | 27.504854 | 0.718769 | 0.003227 | 0 | 0.596184 | 0 | 0.004769 | 0.250645 | 0 | 0 | 0 | 0 | 0 | 0.073132 | 1 | 0.039746 | false | 0 | 0.004769 | 0 | 0.046105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
46f47c2e1b90ecd236bdec1fad5d9a15f4d472cd | 7,619 | py | Python | guiauto/gui/controlR2W.py | saasaa831/guidesktop | 68abe5e896c4d29cf12898abd3b27c60553a3948 | [
"Apache-2.0"
] | null | null | null | guiauto/gui/controlR2W.py | saasaa831/guidesktop | 68abe5e896c4d29cf12898abd3b27c60553a3948 | [
"Apache-2.0"
] | null | null | null | guiauto/gui/controlR2W.py | saasaa831/guidesktop | 68abe5e896c4d29cf12898abd3b27c60553a3948 | [
"Apache-2.0"
] | null | null | null | from guiauto.gui.base_test import BaseTest
from guiauto.util.guiutils import guiHelper
from guiautomation import guiautomation as gauto
import logging
logger = logging.getLogger(__name__)
class TextControl(BaseTest):
def __init__(self, driver, parent_handle):
super().__init__(driver, parent_handle)
self.guihelper=guiHelper(driver, parent_handle)
winHandle = self.guihelper.window_handle_title()
self.window_title=self.guihelper.window_title_exists(winHandle)
def get_control_list_name(self, elementx, cp=None, wintitle=None):
        if wintitle:
            geln = self.guihelper.control_element_details(gauto.TextControl, wintitle=wintitle)
        else:
            geln = self.guihelper.control_element_details(gauto.TextControl)
        if not cp:
            return self.guihelper.get_locator_element(elementx, geln[0])
        return self.guihelper.get_cp_locator_element(elementx, geln[0])
def getText(self, elementx):
        logger.info('Get Text <' + elementx + '>')
        getcontrolList = self.get_control_list_name(elementx)
        # logger.info('Text:' + str(getcontrolList))
        return self.guihelper.find_gui_element(getcontrolList[0], getcontrolList[1]).text
def get_text_value(self, elementx):
getcontrolList = self.guihelper.window_opened_by_name(self.window_title)
textgauto = gauto.TextControl(searchFromControl=getcontrolList, Name=elementx)
result = gauto.WaitForExist(textgauto, 30)
        if result:
            return textgauto
        # control not found within the timeout; implicitly returns None
        logger.info('control not found')
def click_action_text(self, elementx, wintitle=None):
getcontrolList = self.get_control_list_name(elementx, wintitle=wintitle)
buttonClick = self.guihelper.find_gui_element(getcontrolList[0], getcontrolList[1])
buttonClick.click()


class WindowControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)

    def Window_Name_Open(self, winName):
        mmcWindow = gauto.WindowControl(Name=winName)
        # logger.info(mmcWindow)
        return mmcWindow


class RadioButtonControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class ScrollBarControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class SemanticZoomControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class SeparatorControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class SliderControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class SpinnerControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class SplitButtonControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class StatusBarControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TabControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TabItemControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TableControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class ThumbControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TitleBarControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class ToolBarControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class ToolTipControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TreeControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)


class TreeItemControl(BaseTest):
    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        winHandle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(winHandle)
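Every control class above repeats the same four-line `__init__` verbatim. As a sketch only (assuming `BaseTest` and `guiHelper` behave as in this module; the stubs below stand in for them so the example runs standalone), the boilerplate could be hoisted into one shared base so each wrapper becomes a one-line subclass:

```python
class BaseTest:
    def __init__(self, driver, parent_handle):
        self.driver = driver
        self.parent_handle = parent_handle


class guiHelper:
    # Stub with the two calls the constructors above rely on.
    def __init__(self, driver, parent_handle):
        self.driver = driver
        self.parent_handle = parent_handle

    def window_handle_title(self):
        return "hwnd-1"  # stubbed window handle

    def window_title_exists(self, win_handle):
        return "title-of-" + win_handle  # stubbed title lookup


class WindowScopedControl(BaseTest):
    """Shared constructor for all the *Control wrappers."""

    def __init__(self, driver, parent_handle):
        super().__init__(driver, parent_handle)
        self.guihelper = guiHelper(driver, parent_handle)
        win_handle = self.guihelper.window_handle_title()
        self.window_title = self.guihelper.window_title_exists(win_handle)


# Each concrete wrapper now only adds behaviour, not boilerplate.
class TreeControl(WindowScopedControl):
    pass


class TabControl(WindowScopedControl):
    pass
```

With this layout, adding a new control type is a two-line class definition, and a fix to the window lookup lands in one place instead of eighteen.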


# ======================================================================
# python/eet/transformers/__init__.py  (NetEase-FuXi/EET, Apache-2.0)
# ======================================================================
from .modeling_bert import *
from .modeling_gpt2 import *


# ======================================================================
# tests/client/test_client.py  (pronovic/vplan, Apache-2.0)
# ======================================================================
# -*- coding: utf-8 -*-
# vim: set ft=python ts=4 sw=4 expandtab:
import json
from unittest.mock import MagicMock, patch

import pytest
from click import ClickException
from requests import HTTPError, Timeout

from vplan.client.client import (
    _raise_for_status,
    create_or_replace_account,
    create_plan,
    delete_account,
    delete_plan,
    refresh_plan,
    retrieve_account,
    retrieve_all_plans,
    retrieve_health,
    retrieve_plan,
    retrieve_plan_status,
    retrieve_version,
    toggle_device,
    toggle_group,
    turn_off_device,
    turn_off_group,
    turn_on_device,
    turn_on_group,
    update_plan,
    update_plan_status,
)
from vplan.interface import Account, Health, Plan, PlanSchema, Status, Version


def _response(model=None, data=None, status_code=None):
    """Build a mocked response for use with the requests library."""
    response = MagicMock()
    if model:
        response.text = model.json()
    if data:
        response.text = json.dumps(data)
    if status_code:
        response.status_code = status_code
    response.raise_for_status = MagicMock()
    return response


class TestUtil:
    def test_raise_for_status(self):
        response = MagicMock()
        response.raise_for_status = MagicMock()
        response.raise_for_status.side_effect = HTTPError("hello")
        with pytest.raises(ClickException, match="^hello"):
            _raise_for_status(response)


@patch("vplan.client.client.api_url", new_callable=MagicMock(return_value=MagicMock(return_value="http://whatever")))
class TestHealthAndVersion:
    @patch("vplan.client.client.requests.get")
    def test_retrieve_health_error(self, requests_get, _api_url):
        response = _response()
        response.raise_for_status.side_effect = HTTPError("error")
        requests_get.side_effect = [response]
        assert retrieve_health() is False
        requests_get.assert_called_once_with(url="http://whatever/health", timeout=1)

    @patch("vplan.client.client.requests.get")
    def test_retrieve_health_timeout(self, requests_get, _api_url):
        response = _response()
        response.raise_for_status.side_effect = Timeout("error")
        requests_get.side_effect = [response]
        assert retrieve_health() is False
        requests_get.assert_called_once_with(url="http://whatever/health", timeout=1)

    @patch("vplan.client.client.requests.get")
    def test_retrieve_health_healthy(self, requests_get, _api_url):
        response = _response(model=Health())
        requests_get.side_effect = [response]
        assert retrieve_health() is True
        requests_get.assert_called_once_with(url="http://whatever/health", timeout=1)

    @patch("vplan.client.client.requests.get")
    def test_retrieve_version_error(self, requests_get, _api_url):
        response = _response()
        response.raise_for_status.side_effect = HTTPError("error")
        requests_get.side_effect = [response]
        result = retrieve_version()
        assert result is None
        requests_get.assert_called_once_with(url="http://whatever/version", timeout=1)

    @patch("vplan.client.client.requests.get")
    def test_retrieve_version_timeout(self, requests_get, _api_url):
        response = _response()
        response.raise_for_status.side_effect = Timeout("error")
        requests_get.side_effect = [response]
        result = retrieve_version()
        assert result is None
        requests_get.assert_called_once_with(url="http://whatever/version", timeout=1)

    @patch("vplan.client.client.requests.get")
    def test_retrieve_version_healthy(self, requests_get, _api_url):
        version = Version(package="a", api="b")
        response = _response(model=version)
        requests_get.side_effect = [response]
        result = retrieve_version()
        assert result == version
        requests_get.assert_called_once_with(url="http://whatever/version", timeout=1)


@patch("vplan.client.client._raise_for_status")
@patch("vplan.client.client.api_url", new_callable=MagicMock(return_value=MagicMock(return_value="http://whatever")))
class TestAccount:
    @patch("vplan.client.client.requests.get")
    def test_retrieve_account_not_found(self, requests_get, _api_url, raise_for_status):
        response = _response(status_code=404)
        requests_get.side_effect = [response]
        result = retrieve_account()
        assert result is None
        raise_for_status.assert_not_called()
        requests_get.assert_called_once_with(url="http://whatever/account")

    @patch("vplan.client.client.requests.get")
    def test_retrieve_account_found(self, requests_get, _api_url, raise_for_status):
        account = Account(pat_token="token")
        response = _response(model=account)
        requests_get.side_effect = [response]
        result = retrieve_account()
        assert result == account
        raise_for_status.assert_called_once_with(response)
        requests_get.assert_called_once_with(url="http://whatever/account")

    @patch("vplan.client.client.requests.post")
    def test_create_or_replace_account(self, requests_post, _api_url, raise_for_status):
        account = Account(pat_token="token")
        response = _response()
        requests_post.side_effect = [response]
        create_or_replace_account(account)
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/account", data=account.json())

    @patch("vplan.client.client.requests.delete")
    def test_delete_account(self, requests_delete, _api_url, raise_for_status):
        response = _response()
        requests_delete.side_effect = [response]
        delete_account()
        raise_for_status.assert_called_once_with(response)
        requests_delete.assert_called_once_with(url="http://whatever/account")


@patch("vplan.client.client._raise_for_status")
@patch("vplan.client.client.api_url", new_callable=MagicMock(return_value=MagicMock(return_value="http://whatever")))
class TestPlan:
    @patch("vplan.client.client.requests.get")
    def test_retrieve_all_plans(self, requests_get, _api_url, raise_for_status):
        plans = ["one", "two"]
        response = _response(data=plans)
        requests_get.side_effect = [response]
        result = retrieve_all_plans()
        assert result == plans
        raise_for_status.assert_called_once_with(response)
        requests_get.assert_called_once_with(url="http://whatever/plan")

    @patch("vplan.client.client.requests.get")
    def test_retrieve_plan_not_found(self, requests_get, _api_url, raise_for_status):
        response = _response(status_code=404)
        requests_get.side_effect = [response]
        result = retrieve_plan("xxx")
        assert result is None
        raise_for_status.assert_not_called()
        requests_get.assert_called_once_with(url="http://whatever/plan/xxx")

    @patch("vplan.client.client.requests.get")
    def test_retrieve_plan_found(self, requests_get, _api_url, raise_for_status):
        schema = PlanSchema(version="1.0.0", plan=Plan(name="name", location="location", refresh_time="00:30"))
        response = _response(model=schema)
        requests_get.side_effect = [response]
        result = retrieve_plan("xxx")
        assert result == schema
        raise_for_status.assert_called_once_with(response)
        requests_get.assert_called_once_with(url="http://whatever/plan/xxx")

    @patch("vplan.client.client.requests.post")
    def test_create_plan(self, requests_post, _api_url, raise_for_status):
        schema = PlanSchema(version="1.0.0", plan=Plan(name="name", location="location", refresh_time="00:30"))
        response = _response()
        requests_post.side_effect = [response]
        create_plan(schema)
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan", data=schema.json())

    @patch("vplan.client.client.requests.put")
    def test_update_plan(self, requests_put, _api_url, raise_for_status):
        schema = PlanSchema(version="1.0.0", plan=Plan(name="name", location="location", refresh_time="00:30"))
        response = _response()
        requests_put.side_effect = [response]
        update_plan(schema)
        raise_for_status.assert_called_once_with(response)
        requests_put.assert_called_once_with(url="http://whatever/plan", data=schema.json())

    @patch("vplan.client.client.requests.delete")
    def test_delete_plan(self, requests_delete, _api_url, raise_for_status):
        response = _response()
        requests_delete.side_effect = [response]
        delete_plan("xxx")
        raise_for_status.assert_called_once_with(response)
        requests_delete.assert_called_once_with(url="http://whatever/plan/xxx")

    @patch("vplan.client.client.requests.get")
    def test_retrieve_plan_status_not_found(self, requests_get, _api_url, raise_for_status):
        response = _response(status_code=404)
        requests_get.side_effect = [response]
        result = retrieve_plan_status("xxx")
        assert result is None
        raise_for_status.assert_not_called()
        requests_get.assert_called_once_with(url="http://whatever/plan/xxx/status")

    @patch("vplan.client.client.requests.get")
    def test_retrieve_plan_status_found(self, requests_get, _api_url, raise_for_status):
        status = Status(enabled=False)
        response = _response(model=status)
        requests_get.side_effect = [response]
        result = retrieve_plan_status("xxx")
        assert result == status
        raise_for_status.assert_called_once_with(response)
        requests_get.assert_called_once_with(url="http://whatever/plan/xxx/status")

    @patch("vplan.client.client.requests.put")
    def test_update_plan_status(self, requests_put, _api_url, raise_for_status):
        status = Status(enabled=False)
        response = _response()
        requests_put.side_effect = [response]
        update_plan_status("xxx", status)
        raise_for_status.assert_called_once_with(response)
        requests_put.assert_called_once_with(url="http://whatever/plan/xxx/status", data=status.json())

    @patch("vplan.client.client.requests.post")
    def test_refresh_plan(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        refresh_plan("xxx")
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/refresh")

    @patch("vplan.client.client.requests.post")
    def test_toggle_group(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        toggle_group("xxx", "yyy", 2, 5)
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/test/group/yyy", params={"toggles": 2, "delay_sec": 5})

    @patch("vplan.client.client.requests.post")
    def test_toggle_device(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        toggle_device("xxx", "yyy", "zzz", 2, 5)
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(
            url="http://whatever/plan/xxx/test/device/yyy/zzz", params={"toggles": 2, "delay_sec": 5}
        )

    @patch("vplan.client.client.requests.post")
    def test_turn_on_group(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        turn_on_group("xxx", "yyy")
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/on/group/yyy")

    @patch("vplan.client.client.requests.post")
    def test_turn_on_device(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        turn_on_device("xxx", "yyy", "zzz")
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/on/device/yyy/zzz")

    @patch("vplan.client.client.requests.post")
    def test_turn_off_group(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        turn_off_group("xxx", "yyy")
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/off/group/yyy")

    @patch("vplan.client.client.requests.post")
    def test_turn_off_device(self, requests_post, _api_url, raise_for_status):
        response = _response()
        requests_post.side_effect = [response]
        turn_off_device("xxx", "yyy", "zzz")
        raise_for_status.assert_called_once_with(response)
        requests_post.assert_called_once_with(url="http://whatever/plan/xxx/off/device/yyy/zzz")


# ======================================================================
# log/views.py  (mushilianmeng/auto_ops_cmdb, Apache-2.0)
# ======================================================================
from django.http import HttpResponse
from log.models import alarm


def logs(request):
    return HttpResponse(alarm.objects.get(id=request.GET["id"]))
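Note that `alarm.objects.get(...)` raises `DoesNotExist` for an unknown id (and a missing `id` key raises before that), which surfaces to the client as a 500. Django's usual remedy is `django.shortcuts.get_object_or_404`; the framework-free sketch below (every name in it is hypothetical, not this project's code) shows the same translate-a-miss-into-404 shape:

```python
from http import HTTPStatus


class NotFound(Exception):
    """Stands in for django.http.Http404."""


ALARMS = {"1": "disk full", "2": "cpu high"}  # hypothetical alarm store


def get_or_404(store, key):
    # Mirrors the intent of django.shortcuts.get_object_or_404: turn a
    # lookup miss into a dedicated "not found" signal rather than letting
    # the raw exception bubble up as a server error.
    try:
        return store[key]
    except KeyError:
        raise NotFound(key) from None


def logs_view(params):
    # params plays the role of request.GET
    try:
        return HTTPStatus.OK, get_or_404(ALARMS, params.get("id"))
    except NotFound:
        return HTTPStatus.NOT_FOUND, ""
```

In the real view this collapses to a one-liner: `return HttpResponse(get_object_or_404(alarm, id=request.GET.get("id")))`.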


# ======================================================================
# pysondb/__init__.py  (Asher-MS/pysonDB, MIT)
# ======================================================================
from .db import getDb


# ======================================================================
# utils/test_spectrum.py  (johnboyington/nebp, MIT)
# ======================================================================
from spectrum import Spectrum


# ======================================================================
# buildscripts/tests/test_evergreen_task_timeout.py  (benety/mongo, Apache-2.0)
# ======================================================================
"""Unit tests for the evergreen_task_timeout script."""
import unittest
from datetime import timedelta
from unittest.mock import MagicMock

import buildscripts.evergreen_task_timeout as under_test
from buildscripts.ciconfig.evergreen import EvergreenProjectConfig
from buildscripts.timeouts.timeout_service import TimeoutService

# pylint: disable=missing-docstring,no-self-use,invalid-name,protected-access


class TestTimeoutOverride(unittest.TestCase):
    def test_exec_timeout_should_be_settable(self):
        timeout_override = under_test.TimeoutOverride(task="my task", exec_timeout=42)
        timeout = timeout_override.get_exec_timeout()
        self.assertIsNotNone(timeout)
        self.assertEqual(42 * 60, timeout.total_seconds())

    def test_exec_timeout_should_default_to_none(self):
        timeout_override = under_test.TimeoutOverride(task="my task")
        timeout = timeout_override.get_exec_timeout()
        self.assertIsNone(timeout)

    def test_idle_timeout_should_be_settable(self):
        timeout_override = under_test.TimeoutOverride(task="my task", idle_timeout=42)
        timeout = timeout_override.get_idle_timeout()
        self.assertIsNotNone(timeout)
        self.assertEqual(42 * 60, timeout.total_seconds())

    def test_idle_timeout_should_default_to_none(self):
        timeout_override = under_test.TimeoutOverride(task="my task")
        timeout = timeout_override.get_idle_timeout()
        self.assertIsNone(timeout)


class TestTimeoutOverrides(unittest.TestCase):
    def test_looking_up_a_non_existing_override_should_return_none(self):
        timeout_overrides = under_test.TimeoutOverrides(overrides={})
        self.assertIsNone(timeout_overrides.lookup_exec_override("bv", "task"))
        self.assertIsNone(timeout_overrides.lookup_idle_override("bv", "task"))

    def test_looking_up_a_duplicate_override_should_raise_error(self):
        timeout_overrides = under_test.TimeoutOverrides(
            overrides={
                "bv": [{
                    "task": "task_name",
                    "exec_timeout": 42,
                    "idle_timeout": 10,
                }, {
                    "task": "task_name",
                    "exec_timeout": 314,
                    "idle_timeout": 20,
                }]
            })

        with self.assertRaises(ValueError):
            self.assertIsNone(timeout_overrides.lookup_exec_override("bv", "task_name"))

        with self.assertRaises(ValueError):
            self.assertIsNone(timeout_overrides.lookup_idle_override("bv", "task_name"))

    def test_looking_up_an_exec_override_should_work(self):
        timeout_overrides = under_test.TimeoutOverrides(
            overrides={
                "bv": [
                    {
                        "task": "another_task",
                        "exec_timeout": 314,
                        "idle_timeout": 20,
                    },
                    {
                        "task": "task_name",
                        "exec_timeout": 42,
                    },
                ]
            })

        self.assertEqual(42 * 60,
                         timeout_overrides.lookup_exec_override("bv", "task_name").total_seconds())

    def test_looking_up_an_idle_override_should_work(self):
        timeout_overrides = under_test.TimeoutOverrides(
            overrides={
                "bv": [
                    {
                        "task": "another_task",
                        "exec_timeout": 314,
                        "idle_timeout": 20,
                    },
                    {
                        "task": "task_name",
                        "idle_timeout": 10,
                    },
                ]
            })

        self.assertEqual(10 * 60,
                         timeout_overrides.lookup_idle_override("bv", "task_name").total_seconds())


class TestDetermineExecTimeout(unittest.TestCase):
    def _validate_exec_timeout(self, idle_timeout, exec_timeout, historic_timeout, evg_alias,
                               build_variant, timeout_override, expected_timeout):
        task_name = "task_name"
        variant = build_variant
        overrides = {}
        if timeout_override is not None:
            overrides[variant] = [{"task": task_name, "exec_timeout": timeout_override}]
        mock_timeout_overrides = under_test.TimeoutOverrides(overrides=overrides)

        orchestrator = under_test.TaskTimeoutOrchestrator(
            timeout_service=MagicMock(spec_set=TimeoutService),
            timeout_overrides=mock_timeout_overrides,
            evg_project_config=MagicMock(spec_set=EvergreenProjectConfig))

        actual_timeout = orchestrator.determine_exec_timeout(
            task_name, variant, idle_timeout, exec_timeout, evg_alias, historic_timeout)

        self.assertEqual(actual_timeout, expected_timeout)

    def test_timeout_used_if_specified(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=timedelta(seconds=42),
                                    historic_timeout=None, evg_alias=None, build_variant="variant",
                                    timeout_override=None, expected_timeout=timedelta(seconds=42))

    def test_default_is_returned_with_no_timeout(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None, historic_timeout=None,
                                    evg_alias=None, build_variant="variant", timeout_override=None,
                                    expected_timeout=under_test.DEFAULT_NON_REQUIRED_BUILD_TIMEOUT)

    def test_default_is_returned_with_timeout_at_zero(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=timedelta(seconds=0),
                                    historic_timeout=None, evg_alias=None, build_variant="variant",
                                    timeout_override=None,
                                    expected_timeout=under_test.DEFAULT_NON_REQUIRED_BUILD_TIMEOUT)

    def test_default_required_returned_on_required_variants(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None, historic_timeout=None,
                                    evg_alias=None, build_variant="variant-required",
                                    timeout_override=None,
                                    expected_timeout=under_test.DEFAULT_REQUIRED_BUILD_TIMEOUT)

    def test_override_on_required_should_use_override(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None, historic_timeout=None,
                                    evg_alias=None, build_variant="variant-required",
                                    timeout_override=3 * 60,
                                    expected_timeout=timedelta(minutes=3 * 60))

    def test_task_specific_timeout(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=timedelta(seconds=0),
                                    historic_timeout=None, evg_alias=None, build_variant="variant",
                                    timeout_override=60, expected_timeout=timedelta(minutes=60))

    def test_commit_queue_items_use_commit_queue_timeout(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None, historic_timeout=None,
                                    evg_alias=under_test.COMMIT_QUEUE_ALIAS,
                                    build_variant="variant", timeout_override=None,
                                    expected_timeout=under_test.COMMIT_QUEUE_TIMEOUT)

    def test_use_idle_timeout_if_greater_than_exec_timeout(self):
        self._validate_exec_timeout(
            idle_timeout=timedelta(hours=2), exec_timeout=timedelta(minutes=10),
            historic_timeout=None, evg_alias=None, build_variant="variant", timeout_override=None,
            expected_timeout=timedelta(hours=2))

    def test_historic_timeout_should_be_used_if_given(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None,
                                    historic_timeout=timedelta(minutes=15), evg_alias=None,
                                    build_variant="variant", timeout_override=None,
                                    expected_timeout=timedelta(minutes=15))

    def test_commit_queue_should_override_historic_timeouts(self):
        self._validate_exec_timeout(
            idle_timeout=None, exec_timeout=None, historic_timeout=timedelta(minutes=15),
            evg_alias=under_test.COMMIT_QUEUE_ALIAS, build_variant="variant", timeout_override=None,
            expected_timeout=under_test.COMMIT_QUEUE_TIMEOUT)

    def test_override_should_override_historic_timeouts(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None,
                                    historic_timeout=timedelta(minutes=15), evg_alias=None,
                                    build_variant="variant", timeout_override=33,
                                    expected_timeout=timedelta(minutes=33))

    def test_historic_timeout_should_not_be_overridden_by_required_bv(self):
        self._validate_exec_timeout(idle_timeout=None, exec_timeout=None,
                                    historic_timeout=timedelta(minutes=15), evg_alias=None,
                                    build_variant="variant-required", timeout_override=None,
                                    expected_timeout=timedelta(minutes=15))

    def test_historic_timeout_should_not_increase_required_bv_timeout(self):
        self._validate_exec_timeout(
            idle_timeout=None, exec_timeout=None,
            historic_timeout=under_test.DEFAULT_REQUIRED_BUILD_TIMEOUT + timedelta(minutes=30),
            evg_alias=None, build_variant="variant-required", timeout_override=None,
            expected_timeout=under_test.DEFAULT_REQUIRED_BUILD_TIMEOUT)


class TestDetermineIdleTimeout(unittest.TestCase):
    def _validate_idle_timeout(self, idle_timeout, historic_timeout, build_variant,
                               timeout_override, expected_timeout):
        task_name = "task_name"
        overrides = {}
        if timeout_override is not None:
            overrides[build_variant] = [{"task": task_name, "idle_timeout": timeout_override}]
        mock_timeout_overrides = under_test.TimeoutOverrides(overrides=overrides)

        orchestrator = under_test.TaskTimeoutOrchestrator(
            timeout_service=MagicMock(spec_set=TimeoutService),
            timeout_overrides=mock_timeout_overrides,
            evg_project_config=MagicMock(spec_set=EvergreenProjectConfig))

        actual_timeout = orchestrator.determine_idle_timeout(task_name, build_variant, idle_timeout,
                                                             historic_timeout)

        self.assertEqual(actual_timeout, expected_timeout)

    def test_timeout_used_if_specified(self):
        self._validate_idle_timeout(
            idle_timeout=timedelta(seconds=42),
            historic_timeout=None,
            build_variant="variant",
            timeout_override=None,
            expected_timeout=timedelta(seconds=42),
        )

    def test_default_is_returned_with_no_timeout(self):
        self._validate_idle_timeout(
            idle_timeout=None,
            historic_timeout=None,
            build_variant="variant",
            timeout_override=None,
            expected_timeout=None,
        )

    def test_task_specific_timeout(self):
        self._validate_idle_timeout(
            idle_timeout=None,
            historic_timeout=None,
            build_variant="variant",
            timeout_override=60,
            expected_timeout=timedelta(minutes=60),
        )

    def test_historic_timeout_should_be_used_if_given(self):
        self._validate_idle_timeout(idle_timeout=None, historic_timeout=timedelta(minutes=15),
                                    build_variant="variant", timeout_override=None,
                                    expected_timeout=timedelta(minutes=15))

    def test_override_should_override_historic_timeout(self):
        self._validate_idle_timeout(idle_timeout=None, historic_timeout=timedelta(minutes=15),
                                    build_variant="variant", timeout_override=30,
                                    expected_timeout=timedelta(minutes=30))


# ======================================================================
# plugins/bot_joinGroup.py  (AlinJiong/OPQ-SetuBot, MIT)
# ======================================================================
# import base64
# import io
# import random
# import time
# from threading import Lock
# from typing import Tuple
# from botoy import Action, EventMsg, GroupMsg
# from botoy.collection import MsgTypes
# from botoy.decorators import ignore_botself, these_msgtypes
# from botoy.parser import event as ep
# from PIL import Image, ImageDraw, ImageFont
# __doc__ = "Group-join verification code (CAPTCHA)"
# # userID_groupID : code
# new_users = {}
# wait_time = 10 # minute
# lock = Lock()
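# The commented-out plugin above sketches a join-group CAPTCHA flow: a pending
# code is stored per "userID_groupID" key in new_users and expires after
# wait_time minutes. A minimal sketch of that bookkeeping follows; the helper
# names register_new_user/verify, the testable `now` parameter, and the 4-digit
# code are assumptions for illustration (the real plugin drives this from botoy
# event handlers and guards new_users with the Lock above).

```python
import random
import string
import time

new_users = {}          # "userID_groupID" -> (code, deadline)
WAIT_TIME = 10 * 60     # seconds; mirrors the plugin's wait_time of 10 minutes


def register_new_user(user_id, group_id, now=None):
    # Issue a fresh 4-digit code for this user/group pair and record when
    # the verification window closes.
    now = time.time() if now is None else now
    code = "".join(random.choices(string.digits, k=4))
    new_users[f"{user_id}_{group_id}"] = (code, now + WAIT_TIME)
    return code


def verify(user_id, group_id, answer, now=None):
    # Accept the answer only if a code is pending, unexpired, and matches;
    # a successful verification consumes the pending entry.
    now = time.time() if now is None else now
    entry = new_users.get(f"{user_id}_{group_id}")
    if entry is None:
        return False
    code, deadline = entry
    if now > deadline or answer != code:
        return False
    del new_users[f"{user_id}_{group_id}"]
    return True
```

# In the real plugin the registration would run on the group-join event and
# verify() on each message from the pending member, kicking them once the
# deadline passes unanswered.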
# font = ImageFont.truetype(
# io.BytesIO(
# base64.b64decode(
# b"""
# AAEAAAARAQAABABgTFRTSCweGugAAALYAAAA4E9TLzKReZY1AAADuAAAAE5WRE1Y9szGVAAABAgAABdwY21hcFLCvLoAARY4AAACkGN2dCAvC1C2AAAbeAAABmpmcGdtYyYJaQAAIeQAAAUxZ2x5ZonmgaIAACcYAAC+5GhkbXhGelUIAADl/AAAFQhoZWFkr+llKgAA+wQAAAA2aGhlYQ9MBeUAAPs8AAAAJGhtdHjZTGA6AAD7YAAAA3BrZXJuB6AG7wAA/tAAAAJkbG9jYZXMaS4AAAEcAAABvG1heHAE7ALOAAEBNAAAACBuYW1lCdjVwAABAVQAAAj9cG9zdNQ2gekAAQpUAAAB7HByZXA5zDDPAAEMQAAACfcAAAAzADMAMwAzAKEAvgG7AsUDkgRoBJ8E4QUmBdEGLQZyBpcGuAbqB0EHowhjCRoJ3AqQCz8LuQxfDPkNNw2QDdoOJw50DwYP/hCNEUYRyRJHEtETQBQTFJ0VEhV6FkAWnheZGIkY8BlwGhsa5xuqHDgc5R1THi8e9R+5IDYgqiDNIUIhjSGoIcki3CN5I/8klCWpJnknYCgfKLUpjiqPKtssGizuLW0uGy7xL3owizF/Mkoy5DPINO815DaFN1Q3gThOOKI4xTjvOQY5Hzk6OVo5fTmaObQ5zzn0Og86KTpCOlo6djqWOrQ60DrsOwY7ITs5O1E7aTuEO507tjvPO+o8AzwdPJY82z2sPmY/Jj9aP71AZUFYQhJCn0LDQxND5USORRdF0kZqRv1HPUhhSUVJz0pJSodLFktzS9lMLExKTGtMh01WTihORU5hTsBPLE9jT6JQHVA8UFhRClEwUVZR+FIRUk9SuVPuVBFULFRIVGJUfVSaVLVU1VTyVQxVKFVCVVtVd1WYVdtWCVZdVq9W+VcpV0NXXlelWDdY7FkFWR5ZklojWr1bB1uPXDxdGl4BXzhfUl9xAAAAAADcBwEBARkSCAj9HP0GlAgHEgYICAgICAgICAgICAgSBgcHBxwsLi4uLggcBy5IFi4c/S4kDiQuKiEPHg8mFw4GCP8VCAYaJxUcCBIcHCknEkQwHCcnJxQvFyoqHjAuEQjpF0MGBgYIBgcGCAgICAgICAgICAgICAgIBwcHBwcHBwcHBwgSlwgIv1UHEREQBg4BB1IIHggIFysMHgcICBYBBgYHDhlVZAgIswgICAgIBgYICAgIuQYIBggICAgICAcHBwYGBikGBgYPBggI6QYhCAgIKwcGBgYICAgIBgAAA9QCvAAFAAAFmgUzAB4BGwWaBTMAWgPRAGYCEggFAgsHBAICAgkCBAAAAAAAAAAAAAAAAAAAAABNb25vACEAICIZBdP+UQEzBz4BsgAAAAAAAQAEAQQDAwEFAwMBAgEBAQAAAAAeBfILxhGaAPgI/wAIAAj//gAJAAn//gAKAAn//gALAAv//gAMAAz//QANAA3//QAOAA3//QAPAA7//QAQAA///QARAA///AASABD//AATABL/+wAUABP/+wAVABP/+wAWABT/+wAXABX/+wAYABb/+gAZABf/+wAaABj/+gAbABr/+gAcABr/+gAdABv/+QAeABv/+QAfABz/+QAgAB3/+QAhAB//+QAiAB//+AAjACD/+AAkACH/+AAlACL/+AAmACL/9wAnACP/9wAoACX/9wApACb/9wAqACb/9gArACf/9gAsACj/9gAtACn/9gAuACr/9gAvACz/9gAwAC3/9gAxAC3/9QAyAC7/9QAzAC//9AA0AC//9AA1ADD/9AA2ADH/9AA3ADP/9AA4ADT/8wA5ADT/8wA6ADX/8wA7ADb/8wA8ADf/8wA9ADf/8wA+ADn/8gA/ADr/8gBAADv/8QBBADz/8QBCADz/8QBDAD3/8QBEAD7/8QBFAED/8QBGAED/8ABHAEH/8ABIAEL/8ABJAEP/8ABKAET/7wBLAET/7wBMAEb/7wBNAEf/7wBOAEj/7wBPAEj/7g
BQAEn/7gBRAEr/7gBSAEv/7gBTAEz/7gBUAE3/7QBVAE7/7QBWAE//7QBXAFD/7ABYAFD/7ABZAFH/7ABaAFL/7ABbAFT/7ABcAFX/7ABdAFX/7ABeAFb/6wBfAFf/6wBgAFj/6gBhAFj/6gBiAFr/6gBjAFv/6gBkAFz/6gBlAF3/6gBmAF3/6QBnAF//6QBoAGD/6QBpAGH/6ABqAGH/6QBrAGL/6ABsAGT/6ABtAGX/5wBuAGb/5wBvAGb/5wBwAGf/5wBxAGj/5wByAGn/5wBzAGr/5wB0AGv/5wB1AGz/5gB2AG3/5gB3AG7/5QB4AG7/5QB5AG//5QB6AHH/5QB7AHL/5QB8AHL/5AB9AHP/5AB+AHT/5AB/AHX/5ACAAHf/5ACBAHf/5ACCAHj/4wCDAHn/4wCEAHr/4gCFAHr/4gCGAHv/4gCHAH3/4gCIAH7/4gCJAH//4gCKAH//4gCLAID/4QCMAIH/4QCNAIL/4ACOAIP/4ACPAIT/4ACQAIX/4QCRAIb/4QCSAIf/4ACTAIf/4ACUAIj/4ACVAIr/4ACWAIv/3wCXAIv/4ACYAIz/3wCZAI3/3wCaAI7/3gCbAI//3gCcAJD/3gCdAJH/3gCeAJL/3gCfAJP/3gCgAJP/3gChAJT/3QCiAJX/3gCjAJf/3QCkAJj/3QClAJj/3QCmAJn/3QCnAJr/3QCoAJv/2wCpAJv/3ACqAJ3/3ACrAJ7/3ACsAJ//3ACtAKD/2wCuAKD/2wCvAKH/2wCwAKL/2wCxAKT/2gCyAKT/2gCzAKX/2gC0AKb/2gC1AKf/2gC2AKj/2QC3AKj/2QC4AKr/2QC5AKv/2QC6AKz/2AC7AKz/2AC8AK3/2AC9AK7/2AC+AK//2AC/ALH/2ADAALH/1wDBALL/1wDCALP/1wDDALT/1wDEALT/1gDFALX/1gDGALf/1gDHALj/1QDIALn/1QDJALn/1QDKALr/1QDLALv/1QDMALz/1QDNAL3/1QDOAL7/1ADPAL//1ADQAMD/1ADRAMH/0wDSAMH/0wDTAMP/0wDUAMT/0wDVAMX/0wDWAMX/0gDXAMb/0gDYAMf/0gDZAMj/0gDaAMr/0gDbAMr/0QDcAMv/0QDdAMz/0QDeAM3/0ADfAM3/0ADgAM7/0ADhAND/0ADiANH/0ADjANL/0ADkANL/0ADlANP/zwDmANT/zwDnANX/zgDoANb/zgDpANf/zgDqANj/zgDrANn/zgDsANr/zgDtANr/zQDuANv/zQDvAN3/zQDwAN7/zQDxAN7/zQDyAN//zADzAOD/zAD0AOH/ywD1AOL/ywD2AOP/ywD3AOT/ywD4AOX/ywD5AOb/ywD6AOb/ywD7AOf/ywD8AOj/ygD9AOr/ygD+AOv/yQD/AOv/yQD4CP8ACAAI//4ACQAJ//4ACgAJ//4ACwAL//4ADAAM//0ADQAN//0ADgAN//0ADwAO//0AEAAP//0AEQAP//wAEgAQ//wAEwAS//sAFAAT//sAFQAT//sAFgAU//sAFwAV//sAGAAW//oAGQAX//sAGgAY//oAGwAa//oAHAAa//oAHQAb//kAHgAb//kAHwAc//kAIAAd//kAIQAf//kAIgAf//gAIwAg//gAJAAh//gAJQAi//gAJgAi//cAJwAj//cAKAAl//cAKQAm//cAKgAm//YAKwAn//YALAAo//YALQAp//YALgAq//YALwAs//YAMAAt//YAMQAt//UAMgAu//UAMwAv//QANAAv//QANQAw//QANgAx//QANwAz//QAOAA0//MAOQA0//MAOgA1//MAOwA2//MAPAA3//MAPQA3//MAPgA5//IAPwA6//IAQAA7//EAQQA8//EAQgA8//EAQwA9//EARAA+//EARQBA//EARgBA//AARwBB//AASABC//AASQBD//AASgBE/+8ASwBE/+8ATABG/+8ATQBH/+8ATgBI/+8ATwBI/+4AUABJ/+4AUQ
BK/+4AUgBM/+4AUwBN/+4AVABN/+0AVQBO/+0AVgBP/+0AVwBR/+wAWABR/+wAWQBS/+wAWgBT/+wAWwBU/+wAXABV/+wAXQBV/+wAXgBX/+sAXwBY/+sAYABZ/+oAYQBZ/+oAYgBa/+oAYwBb/+oAZABc/+oAZQBe/+oAZgBe/+kAZwBf/+kAaABg/+kAaQBh/+kAagBh/+kAawBi/+gAbABk/+gAbQBl/+cAbgBm/+cAbwBm/+cAcABn/+cAcQBo/+cAcgBp/+cAcwBq/+cAdABr/+cAdQBs/+YAdgBt/+YAdwBu/+UAeABu/+UAeQBw/+UAegBx/+UAewBy/+UAfABy/+QAfQBz/+QAfgB0/+QAfwB1/+QAgAB3/+QAgQB3/+QAggB4/+MAgwB5/+MAhAB6/+IAhQB6/+IAhgB7/+IAhwB9/+IAiAB+/+IAiQB//+IAigB//+IAiwCA/+EAjACB/+EAjQCC/+AAjgCD/+AAjwCE/+AAkACF/+EAkQCG/+EAkgCH/+AAkwCH/+AAlACI/+AAlQCK/+AAlgCL/98AlwCL/+AAmACM/98AmQCN/98AmgCO/94AmwCP/94AnACQ/94AnQCR/94AngCS/94AnwCT/94AoACT/94AoQCU/90AogCV/94AowCX/90ApACY/90ApQCY/90ApgCZ/90ApwCa/90AqACb/9sAqQCb/9wAqgCd/9wAqwCe/9wArACf/9wArQCg/9sArgCg/9sArwCh/9sAsACi/9sAsQCk/9oAsgCk/9oAswCl/9oAtACm/9oAtQCn/9oAtgCo/9kAtwCo/9kAuACq/9kAuQCr/9kAugCs/9gAuwCs/9gAvACt/9gAvQCu/9gAvgCv/9gAvwCx/9gAwACx/9cAwQCy/9cAwgCz/9cAwwC0/9cAxAC0/9YAxQC1/9YAxgC3/9YAxwC4/9UAyAC5/9UAyQC5/9UAygC6/9UAywC7/9UAzAC8/9UAzQC9/9UAzgC+/9QAzwC//9QA0ADA/9QA0QDB/9MA0gDB/9MA0wDD/9MA1ADE/9MA1QDF/9MA1gDF/9IA1wDG/9IA2ADH/9IA2QDI/9IA2gDK/9IA2wDK/9EA3ADL/9EA3QDM/9EA3gDN/9AA3wDN/9AA4ADO/9AA4QDQ/9AA4gDR/9AA4wDS/9AA5ADS/9AA5QDT/88A5gDU/88A5wDV/84A6ADW/84A6QDX/84A6gDY/84A6wDZ/84A7ADa/84A7QDa/80A7gDb/80A7wDd/80A8ADe/80A8QDe/80A8gDf/8wA8wDg/8wA9ADh/8sA9QDi/8sA9gDj/8sA9wDk/8sA+ADl/8sA+QDm/8sA+gDm/8sA+wDn/8sA/ADo/8oA/QDq/8oA/gDr/8kA/wDr/8kA+Aj/AAgACP/+AAkACf/+AAoACf/+AAsAC//+AAwADP/9AA0ADf/9AA4ADf/9AA8ADv/9ABAAD//9ABEAD//8ABIAEP/8ABMAEv/7ABQAE//7ABUAE//7ABYAFP/7ABcAFf/7ABgAFv/6ABkAF//7ABoAGP/6ABsAGv/6ABwAGv/5AB0AG//5AB4AG//5AB8AHP/5ACAAHf/5ACEAH//5ACIAH//4ACMAIP/4ACQAIf/4ACUAIv/4ACYAIv/3ACcAI//3ACgAJf/3ACkAJv/3ACoAJv/2ACsAJ//2ACwAKP/2AC0AKf/2AC4AKv/2AC8ALP/2ADAALf/2ADEALf/1ADIALv/1ADMAL//0ADQAL//0ADUAMP/0ADYAMf/0ADcAM//0ADgANP/zADkANP/zADoANf/zADsANv/zADwAN//zAD0AN//zAD4AOf/yAD8AOv/yAEAAO//xAEEAPP/xAEIAPP/xAEMAPf/xAEQAPv/xAEUAQP/xAEYAQP/wAEcAQf/wAEgAQv/wAEkARP/wAEoARf/vAEsARf/vAEwARv/vAE0AR//vAE4ASP/vAE8ASP/uAFAASv/uAFEAS//uAFIATP
/uAFMATf/uAFQATf/tAFUATv/tAFYAT//tAFcAUf/sAFgAUf/sAFkAUv/sAFoAU//sAFsAVP/sAFwAVf/sAF0AVf/sAF4AV//rAF8AWP/rAGAAWf/qAGEAWf/qAGIAWv/qAGMAW//qAGQAXP/qAGUAXv/qAGYAXv/pAGcAX//pAGgAYP/pAGkAYf/pAGoAYf/pAGsAYv/oAGwAZP/oAG0AZf/nAG4AZv/nAG8AZv/nAHAAZ//nAHEAaP/nAHIAaf/nAHMAav/nAHQAa//nAHUAbP/mAHYAbf/mAHcAbv/lAHgAbv/lAHkAcP/lAHoAcf/lAHsAcv/lAHwAcv/kAH0Ac//kAH4AdP/kAH8Adf/kAIAAd//kAIEAd//kAIIAeP/jAIMAef/jAIQAev/iAIUAev/iAIYAe//iAIcAff/iAIgAfv/iAIkAf//iAIoAf//iAIsAgP/hAIwAgf/hAI0Agv/gAI4Ag//gAI8AhP/gAJAAhf/hAJEAhv/hAJIAh//gAJMAh//gAJQAiP/gAJUAiv/gAJYAi//fAJcAi//gAJgAjP/fAJkAjf/fAJoAjv/eAJsAj//eAJwAkP/eAJ0Akf/eAJ4Akv/eAJ8Ak//eAKAAk//eAKEAlP/dAKIAlf/eAKMAl//dAKQAmP/dAKUAmP/dAKYAmf/dAKcAmv/dAKgAm//bAKkAm//cAKoAnf/cAKsAnv/cAKwAn//cAK0AoP/bAK4AoP/bAK8Aof/bALAAov/bALEApP/aALIApP/aALMApf/aALQApv/aALUAp//aALYAqP/ZALcAqP/ZALgAqv/ZALkAq//ZALoArP/YALsArP/YALwArf/YAL0Arv/YAL4Ar//YAL8Asf/YAMAAsf/XAMEAsv/XAMIAs//XAMMAtP/XAMQAtP/WAMUAtf/WAMYAt//WAMcAuP/VAMgAuf/VAMkAuf/VAMoAuv/VAMsAu//VAMwAvP/VAM0Avf/VAM4Avv/UAM8Av//UANAAwP/UANEAwf/TANIAwf/TANMAw//TANQAxP/TANUAxf/TANYAxf/SANcAxv/SANgAx//SANkAyP/SANoAyv/SANsAyv/RANwAy//RAN0AzP/RAN4Azf/QAN8Azf/QAOAAzv/QAOEA0P/QAOIA0f/QAOMA0v/QAOQA0v/QAOUA0//PAOYA1P/PAOcA1f/OAOgA1v/OAOkA1//OAOoA2P/OAOsA2f/OAOwA2v/OAO0A2v/NAO4A2//NAO8A3f/NAPAA3v/NAPEA3v/NAPIA3//MAPMA4P/MAPQA4f/LAPUA4v/LAPYA4//LAPcA5P/LAPgA5f/LAPkA5v/LAPoA5v/LAPsA5//LAPwA6P/KAP0A6v/KAP4A6//JAP8A6//JAPgI/wAIAAj//gAJAAn//gAKAAn//gALAAv//gAMAAz//QANAA3//QAOAA3//QAPAA7//QAQAA///QARAA///AASABH//AATABL/+wAUABP/+wAVABP/+wAWABT/+wAXABX/+wAYABb/+gAZABf/+wAaABj/+gAbABr/+gAcABr/+gAdABv/+QAeABv/+QAfABz/+QAgAB3/+QAhAB//+QAiAB//+AAjACD/+AAkACH/+AAlACL/+AAmACL/9wAnACP/9wAoACX/9wApACb/9wAqACb/9gArACf/9gAsACj/9gAtACn/9gAuACr/9gAvACz/9gAwAC3/9gAxAC3/9QAyAC7/9QAzAC//9AA0AC//9AA1ADD/9AA2ADH/9AA3ADP/9AA4ADT/8wA5ADT/8wA6ADX/8wA7ADb/8wA8ADf/8wA9ADf/8wA+ADn/8gA/ADr/8gBAADv/8QBBADz/8QBCADz/8QBDAD3/8QBEAD7/8QBFAED/8QBGAED/8ABHAEH/8ABIAEL/8ABJAEP/8ABKAET/7wBLAET/7wBMAEb/7wBNAEf/7wBOAEj/7wBPAEj/7gBQAEn/7gBRAEr/7gBSAEv/7gBTAEz/7g
BUAE3/7QBVAE7/7QBWAE//7QBXAFD/7ABYAFD/7ABZAFH/7ABaAFL/7ABbAFT/7ABcAFX/7ABdAFX/7ABeAFb/6wBfAFf/6wBgAFj/6gBhAFj/6gBiAFr/6gBjAFv/6gBkAFz/6gBlAF3/6gBmAF3/6QBnAF7/6QBoAF//6QBpAGH/6ABqAGH/6QBrAGL/6ABsAGP/6ABtAGT/5wBuAGX/5wBvAGX/5wBwAGb/5wBxAGj/5wByAGn/5wBzAGn/5wB0AGr/5gB1AGv/5gB2AGz/5gB3AG3/5QB4AG7/5QB5AG//5QB6AHD/5QB7AHH/5QB8AHH/5AB9AHL/5AB+AHP/5AB/AHX/5ACAAHb/5ACBAHb/5ACCAHf/4wCDAHj/4wCEAHn/4gCFAHn/4gCGAHv/4gCHAHz/4gCIAH3/4gCJAH//4QCKAH//4QCLAID/4QCMAIH/4QCNAIL/4ACOAIP/4ACPAIT/4ACQAIX/4ACRAIb/4ACSAIf/4ACTAIf/3wCUAIj/3wCVAIr/3wCWAIv/3wCXAIv/4ACYAIz/3wCZAI3/3wCaAI7/3gCbAI//3gCcAJD/3gCdAJH/3gCeAJL/3gCfAJP/3gCgAJP/3gChAJT/3QCiAJX/3gCjAJf/3QCkAJj/3QClAJj/3QCmAJn/3QCnAJr/3QCoAJv/2wCpAJv/3ACqAJ3/3ACrAJ7/3ACsAJ//3ACtAKD/2wCuAKD/2wCvAKH/2wCwAKL/2wCxAKT/2gCyAKT/2gCzAKX/2gC0AKb/2gC1AKf/2gC2AKj/2QC3AKj/2QC4AKr/2QC5AKv/2QC6AKz/2AC7AKz/2AC8AK3/2AC9AK7/2AC+AK//2AC/ALH/2ADAALH/1wDBALL/1wDCALP/1wDDALT/1wDEALT/1gDFALX/1gDGALf/1gDHALj/1QDIALn/1QDJALn/1QDKALr/1QDLALv/1QDMALz/1QDNAL3/1QDOAL7/1ADPAL//1ADQAMD/1ADRAMH/0wDSAMH/0wDTAMP/0wDUAMT/0wDVAMX/0wDWAMX/0gDXAMb/0gDYAMf/0gDZAMj/0gDaAMr/0gDbAMr/0QDcAMv/0QDdAMz/0QDeAM3/0ADfAM3/0ADgAM7/0ADhAND/0ADiANH/0ADjANL/0ADkANL/0ADlANP/zwDmANT/zwDnANX/zgDoANb/zgDpANf/zgDqANj/zgDrANn/zgDsANr/zgDtANr/zQDuANv/zQDvAN3/zQDwAN7/zQDxAN7/zQDyAN//zADzAOD/zAD0AOH/ywD1AOL/ywD2AOP/ywD3AOT/ywD4AOX/ywD5AOb/ywD6AOb/ywD7AOf/ywD8AOj/ygD9AOr/ygD+AOv/yQD/AOv/yQAABboAGwW6ABsFpgAbBCYAGwAA/+UAAP/lAAD/5f5r/+UFugAb/mz/5QLnAAABHAAAARwAAAAAAAAAAAB6ANYBJwEYAPUBEgCvAR0AygC0ANgBKgB8AM0BZAAWABcA/AIkACABBQAGABgAVACqAMoBBwBZALP/6QCoAFcA7AQBAJEA4AEsAFYAzAEOAAMAVQCdAE4BFf+rAOsBAv/gABcAOgBQAJABFAV2BdgBggAFAQMChf8vAA0EAQCDABQAPgCcANMBfAm1/9UANwC9BEz/8QCYARgCKgAOAHAA5gDwAScBLQI4Al3/bQBhAH8AwQEGA0MFk/8sAG4A/AOG/6P/6QAHAFMAVQBfAH4AlwDrAT4BwAKvBWQAHAA/AEgASgBdAG0ApgCtAmYF8AABAAIAJgBsAKgAxwDoAa0B2wPoA/kECAReBIwFJf4/AA4AIgAzADgAVwBfAGIAcwCMAJgAvgEAAR8BUgGZBTL9gQAWACAAJgAxADgAgACCAIkAswEAAQ4BEQEVAVYBnQJ+As8C7gSpBdj/zwAmADQAdgB+AIMAwQDFAOsA8gEGAS4BMAGCAbkB0QIBAnkC+gMg/wD/vAAoAEcAUgBcAHcAgQCQAJkAsg
C8AMwBwQJNA0MDdwOwBOsE+/7EAAwAWgBiAHsAswDJANUA1gESARwCJgLsAyEDhQOjA8wD9QP5BBMEgwT7/uAAIgAwADEATABMAFMAXQBzAHoAhwCOAKEAqwC2ALoAwQDQANEA2wDlARUBOAFrAXwBngG7AfYB+gIhAiICPAJvApUCsAK6AuIDFgNRA1QDcQOWA5oDxgPTBBEEQgRLBJ0EtgTaBi0G6Adh/qX+4/9O/1j/gf+S/7v/wv/T/+4ACQANACIAIwAsAGkAbABxAHcAfwCMAK4AvgC+AMgA1wDZANoA3ADlAPUBAAEMARsBNQFKAVMBVQFsAXIBjgGPAZQBmAHFAc4CCgIRAhUCTwJQAmcChgLIAs8DOQM7A7gEKAQyBEUEWgRrBHQEhgUyBTIFTQWMBagFqgWrBfAF/AYSBqoIAAjM/Sr93v5o/nb+3f8K/w7/Hv8w/2n/9QAFAB4AOABhAGcAhwCbAKEApACmAKwAwADEAMwA0QDUANkA3ADdAN4A3wDlAPMA/AEUARYBGAEYARsBLAE+AU4BagF4AYEBggGYAZsBowG2AbgBvAHDAc0B0AHRAdIB2AHhAeIB6gHuAfACAwIZAh8CIwIrAl8CaAJ/An8ChgKTApkCmgLKAs8CzwLQAtYC6ALtAxADIgMvAzgDOAM8A0EDQgOKA6sD0AQVBBcEQgRPBHUEegSdBKYEwATBBNEE4wUABRAFEwUkBSwFSwWLBcAFxwXwBfwGDgYYBiYGbgaDBoQGpQa4BwQHFgc2B4IHiQebB6EH1AgUCCMIoAi7ARsBKAEZARoAAAAAAAAAAAAAAAABWAHGAK8DTAFZAYcBVAENAYoBWQAUAisAoQRxAkoEnAKPAioCqwAAAAAGOQSwAAAAAAOpAI4DAQIPBJADkQGUALUBAQA7AIQBPgB3AZIAjADGAXAA2wAsAJoDvwP+AkYBAAMAAaQBOAD2A88AAP/VAcMBNAExAUcASgLIBM4Fx1yHAiQCXgHZBF4GCQTGAJMCuwRgBDUEAQG2AXkBAACIA40ANgDuA3MD5ADMAW0EkAC+AXwBBAA9AjsA9AEEANYBDAEQASUCLgA/AUkDGQFQA2YBGgEbAXkBAADVAG4AaQKxAj8AxAGUAmsDKAF7ATIA9QD+ALMFwwCZBVIE1f9OBLX/IAD+AHoAAAAAAKoCLgCwAAABjANgBCn/V/73AYcDGALBAyYCPQHyBGECaf6uAU8BNALlAzEBcwJ0AfsBswEoAKYAygJNAkEBGgKkAA0A9QDsANwA/ADxALIEkwON/94Dq/5RArwAJAVcANIA8AEGAFECugHzANMAqgC+An8CCADYAa0ENgC1A24A8ANgArgC/QH3AvcAngCuAWQArwInAdsCQADtBl8E4AHlbw4BHgNmAG0AAgClAAYAYgPuAEH/4QAfAXT/z/+/ARsCTwK6AIkA8QXDAm8AkgB7AL4AmQB+AJgAYQDzAGwADAF5AAUADgAOALMAoQDyAFMAFwADAAUABwBuAHcAmgBKALoAcwDVAF0A6AChANwA9gB/AJkA2wIBAFAGnAAAQEA/Pj08Ozo5ODc2NTQzMjEwLy4tLCsqKSgnJiUkIyIhIB8eHRwbGhkYFxYVFBMSERAPDg0MCwoJCAcGBQQDAgEALEUjRmAgsCZgsAQmI0hILSxFI0YjYSCwJmGwBCYjSEgtLEUjRmCwIGEgsEZgsAQmI0hILSxFI0YjYbAgYCCwJmGwIGGwBCYjSEgtLEUjRmCwQGEgsGZgsAQmI0hILSxFI0YjYbBAYCCwJmGwQGGwBCYjSEgtLAEQIDwAPC0sIEUjILDNRCMguAFaUVgjILCNRCNZILDtUVgjILBNRCNZILAEJlFYIyCwDUQjWSEhLSwgIEUYaEQgsAFgIEWwRnZoikVgRC0sAbELCkMjQ2UKLSwAsQoLQyNDCy0sALAXI3CxARc+AbAXI3CxAhdFOrECAAgNLSxFsBojREWwGSNELS
wgRbADJUVhZLBQUVhFRBshIVktLLABQ2MjYrAAI0KwDystLCBFsABDYEQtLAGwBkOwB0NlCi0sIGmwQGGwAIsgsSzAioy4EABiYCsMZCNkYVxYsANhWS0sRbARK7AXI0SwF3rkGC0sRbARK7AXI0QtLLASQ1iHRbARK7AXI0SwF3rkGwOKRRhpILAXI0SKiocgsMBRWLARK7AXI0SwF3rkGyGwF3rkWVkYLSwtLLACJUZgikawQGGMSC0sARgvLSwgsAMlRbAZI0RFsBojREVlI0UgsAMlYGogsAkjQiNoimpgYSCwGoqwAFJ5IbIaGkC5/+AAGkUgilRYIyGwPxsjWWFEHLEUAIpSebMZQCAZRSCKVFgjIbA/GyNZYUQtLLEQEUMjQwstLLEOD0MjQwstLLEMDUMjQwstLLEMDUMjQ2ULLSyxDg9DI0NlCy0ssRARQyNDZQstLEtSWEVEGyEhWS0sASCwAyUjSbBAYLAgYyCwAFJYI7ACJTgjsAIlZTgAimM4GyEhISEhWQEtLEuwZFFYRWmwCUNgihA6GyEhIVktLAGwBSUQIyCK9QCwAWAj7ewtLAGwBSUQIyCK9QCwAWEj7ewtLAGwBiUQ9QDt7C0sILABYAEQIDwAPC0sILABYQEQIDwAPC0ssCsrsCoqLSwAsAdDsAZDCy0sPrAqKi0sNS0sdrgCViNwECC4AlZFILAAUFiwAWFZOi8YLSwhIQxkI2SLuEAAYi0sIbCAUVgMZCNki7ggAGIbsgBALytZsAJgLSwhsMBRWAxkI2SLuBVVYhuyAIAvK1mwAmAtLAxkI2SLuEAAYmAjIS0stAABAAAAFbAIJrAIJrAIJrAIJg8QFhNFaDqwARYtLLQAAQAAABWwCCawCCawCCawCCYPEBYTRWhlOrABFi0sS1MjS1FaWCBFimBEGyEhWS0sS1RYIEWKYEQbISFZLSxLUyNLUVpYOBshIVktLEtUWDgbISFZLSwBS1MjS1FasAIlsAQlsAYlSSNFGGlSWliwAiWwAiWwBSVGI0VpYEhZISEhLSywE0NYAxsCWS0ssBNDWAIbA1ktLEtUsBJDXFpYOBshIVktLLASQ1xYDLAEJbAEJQYMZCNkYWSwA1FYsAQlsAQlASBGsBBgSCBGsBBgSFkKISEbISFZLSywEkNcWAywBCWwBCUGDGQjZGFkuAcIUViwBCWwBCUBIEa4//BgSCBGuP/wYEhZCiEhGyEhWS0sS1MjS1FaWLA6KxshIVktLEtTI0tRWliwOysbISFZLSxLUyNLUVqwEkNcWlg4GyEhWS0sDIoDS1SwBCYCS1RaiooKsBJDXFpYOBshIVktAAAAAAIBAAAABQAFAAADAAcAPLQCAbcGB7gC9UAYAAUEtwMACgcEtwEAGQgGBbcCA7AJv94YKxD2PP08ThD0PE39PAA/PP08EPw8/TwxMCERIRElIREhAQAEAPwgA8D8QAUA+wAgBMAAAAIAfgAAAtMFugAFAAkApkAVAEAyGT8BQDIZPwcQGx80BhAbHzQHuP/IsxIVNAa4/8hAJhIVNAQLgBFkNiALSAWwC8AL0AsFAAEBBQcICAQCBgkJAwAPAQEBuAMqQAoGBAMABwZJCQoAuwMgAAgAAQFRswkEcAO4AsBACwhJIAkBCfYKr7oYKxD2Xe307RDkEOQAP/08PzwQ9nE8hwV9EMQOxIcFEDwOxDEwAXFdKysrKysAKysBIxMTIQMBIQMhAaKeWkgBLU3+MgEcOv7kAXUC7AFZ/o38zv7rAP//ATYDsQQOBboAJgAKAAABBwAKAWwAAAAgswEBBgC4AZNAEUgnMAMwCXANgA0EAAECCAApACsBXSsAAgAS/+cEWwXTABsAHwGNQN9AAUACZgNmBGYcZh2bBJwIlxGWGpscrACrA6gHqQqsGawaqxurHdgI2QzQE9AU1hbWF9Ya1hvoBOgfHYYW2QjZDNYV1hkFAAEUGwgDAhMbCAQFEBsIBwYPGwgKBg
8aCQsGDxcMDgYPFg0RBRAWDRICExYNFQEUFg0YARQXDBkBFBoJHAUQGgkdAhMaCR4CExcMHwUQFwwFBgETFCEPCAkaGkAbCBQbGwgMDRYWQBcMFBcXDBAFQA8GYQ0NDAwJCSgIABMCQBQBYRsaGhcXKBYKBgbaCQkAAPoJCQnYCQ0NuAFRtw8JDxRvFAEUuAMyQAwJF58XrxcCF9gJGxu4AVG3CR8BATABAQG4Av6zIF1nGCtOEPRdcit6TfABGC8revBdARgvK3rxXQEYLzwrehDwARgvK3rwARgvK3rhARgvK3oQ4QEYLwA/KzwQPBA8APQ8/Tw/KzwQPBA8ABD0PP08hwUuK4d9xIcuGCuHfcQBERI5ORI5OQ8PDw8PDw8PDw8PDw8PDw8xMAFxXRMjNTMTIzUhEzMDMxMzAzMVIwMzFSEDIxMjAyMBAzMTmoi0PPABHU/gT91N6FCKtzzz/uBP303eT+IBmDveOgFt3AEn3QGG/noBhv563f7Z3P56AYb+egOJ/tkBJwADAFr/NQScBikAIQAnAC4Bd0BnKQkgG0YMqSjEGuoXBgoTChQMFw0lBCYGJQclKCYpMwY6GDgjMylXDVkeaAtvMJkHmgiaF5oYmhmaIpoj2BcUCBkIIkYfSioEGCkSDwYjACkfKBIYGR8oKSAgEQAGBw8iIyEhEAO2BL0BrQApACEBoQApAtZADB8Vtg8WAZ8WARZlI7oC1gARAo+1DwUoHw0RuAGct08QXxDfEAMQuwLbAAoAIAGct0AhoCHfIQMhvQLaABwAFQLXABb/wEAOMjUvFj8W0BbvFv8WBRa4Am+1ICwwLAIsuALXQA4wHFAckBygHLAcwBwGHLgBTEAKYDABMC8mPyYCJr0C1wAKAAMC1wAEAtlADVAKYAqwCgMwCkAKAgq8AtgALwD0AtAAGCsQ9F1d9O0Q7V0QXfZd7V30XSvtEP1d7RD0Xe0APys/AOz9/V1x5BDt5BD95AcFEDwOPAU8Djw8PAcFEDwOPAU8Djw8PCsREjkRORI5EjkxMAFxXQBxXQUmJic3FhcTJiY1NDY2Mxc3MwcWFhcHJicDFhYVFAAjByMTEyYGFRQTAzY2NTQmAa2PuAz9E3Jcn5N112A1Eo8WgZwN9hJPU7Kb/uLoJY/qTFVxxlRxezgKI+WsDaw0AbpG3Xhsy3EBVmcguI8Nbyz+ckjXisD+87AEYQFrAW1LcP56/m8JelQ3UAAABQC6/8EG6gXTAAwAGQAdACoANwDyQCc5HDkdZAJpCGgcaB1kIGkmYDlzGnMbehx5HYUahRuXGhBJHAEdGhq4At5ADhscFBsbHBobHB0EOTgQuALdsgrHFrgC3UALAx0cHAMBGxpFKDS4At2yIccuuALdtigLORcXGiS4AtxAEw8xMDHAMdAxBGAxcDGwMQMxsCu4AtxACxAesB4CgB4BHrAGuALcQBcwEwEPEwEwE8AT0BMDYBNwE7ATAxOwDbgC3EAJQAABABk4r2MYK04Q9HFN/fZdcXFy/fZdcf32XXH9TkVlROYAP03t/e0Q9Dw/PBA8EO397QEREhc5hy4rfRDEMTABcV0TNBIzMhYVFAYGIyImNxQWMzI2NTQmIyIGBhMjATMBNBIzMhYVFAYGIyImNxQWMzI2NTQmIyIGBrrOuX6XZ7JufJnTLCZIYS4lKkM7WOcExub9pM65fpdnsm58mdIsJ0hhLiUqQzwD878BIaCPdu5vnG8zL9l5LDFFyfumBhL7Er8BIqCPdu9unG8zL9h5LTFFyQADAKr/3gWmBdMAJQAxADoA9kB0IDwBLws/CmsyZjl4C3AkdiamJKMn8zkKCwoGIioILQomJUkITwoHJR4lHyQgISJmBGwVaTl3BIYEhguJGIkZniSmBLQExATKFfw5ElA3ARUyFzkmIAMwCiQqBwYEAQAFPB0GCg05MiYkIBUEAQAJNyAtAS24AydACRoBNycNCwcLKrgDHbN/HQEduAF4th
AyMLAwAjC4AthAGEAXgBcCF0AQETQPFwGvF/8XAhfgoDQBNLgBs7UQGTuSZxgrThD0Tf1d9l1xK3HtXRD9Xe0APz/tP+1xERc5Ejk5ARESFzkSOTkRFzkSOTkAXTEwAV0AcV0BcQEXBgYHFhcHJiYnBgYjIiQ1NDY3NjcmNTQ2MzIWFRQGBxYXFhc2ATY3NjU0JiMiBhUUAwYVFBYzMjcmBNfPHV0oOVm1ImkkSNp+5P78Z1I7hUHhuJ65n7Y5VTMfM/6FdjAiNzA8UF/GdmJ3fpcCfnw4gyU9S7wSViY5TPGvb79EMUeNaZrSp3Bf3VxgfEwhMwHaPD0sLSk1U0NA/mVjolx2U7gAAAEBNgOxAqIFugAFAElAKCADJQQwAzQEVASHA7MDswSzBcIDwgTTBAwFBAMDAgcBAQXaAgAH5wG4AyC1ABkG1ugYK04Q9E309AA//TkBERI5FzIxMAFdARM3IQcDATYFNQEyNWgDsQER+Pj+7wAAAQCH/lEDfAXTABAAQUAXLwBXAogJAwkQARIAgg4B5wQK+gAJAQm4ASVACQ5XXwQBBBkRhLkBGgAYK04Q9F1N7fRx7RDkEOQAPz8xMAFdASMmAjU0Ejc2NzMGAgIVFBIB68NST4OHVbzav89mLf5RvwGHo8wBm9WI1eD+dv4w3pb+5AAAAf9g/lECVQXTABAASEALIAABgBIBCRIBEBG4AWJACQCCDgr6DwkBCbgBJUAMAecOVz8ETwQCBBoSuAG2sUoYK04Q9l1N7eT0ce0Q5OYAPz8xMAFxXRMzFhIVFAIHBgcjNhISNTQC8cNST4OHVrvavs9mLAXTvv54osz+ZdaI1eABiAHR35cBGwAAAQAcAxgC8QXTAB4A7UBVWgybDAJPHQEOQCEjNCcMJA8nECoRKBwmHjUMNw43DzwRORw2HnochhiJGZQLkgyfE58UmBnYGNkc6gP5HBgJEBkQKRBEAUkZShtKHHoM+Qz5DQoYBboBEAARAvu2DAwJChtrAbgBT7IcRQC9ASsADgAWAx8ACQKoshVFCrgDEEAQDw4dDgEFDBEdGA4PBxYJHLgBnUAJnxv/GwIbqBYAuAGdQBSQAfABAgGoCRWoFsYKqAkZH691GCtOEPRN5P3kEPRd7RD0Xe0REhc5AD8vEDz05PzlEP3k9OYREjkv5f08MTABcV0rAF0BchMnNjc2NyYnJic3FhcmNTMUBzY3NjcXBgcXFhcHJwb1l0hOHwgZdlUbO4NnGLIbFEJaTDVvkngkFZmHPQMYdVFKHggEHRUKsDVAo2dJwwgfKR21GRiHKRpl32wAAQBVANMEVgTUAAsAhUAUXwBfCwIIAwkCUAUBBWEwA0ADAgO4AlBAFAJhPwBPAAIAUAgBCGEGMApACgIKuAJQQCIFUAEBAWFQAmACsAIDDwJgAqACsAIEPwJPAp8C3wL/AgUCuP/AsxMZNAK4AvyzDEZKGCsQ9itdcXL0XTz9XTzkXQAvXfT9XeRdEDwQPDEwAF0lESERIREhESERIREB1f6AAYABAAGB/n/TAX0BBwF9/oP++f6DAAEAFf7CAbMBFQAKAFpAJIsDiweADAMKCAGKAwEIBwkGBgUCAQoGvQUJBYcASQoKAQkBCbgBUUAQCgFJCqggADAAAgCuC0ZzGCsQ9l3k7RDkcQA/7eQ8EO0REjkSORESOTkxMF0BcV0TIQcGBgc3NjY3I5IBITAptZAcSE8ThAEV5calA4UQU1YAAAEATwGGArYCmwADAC5AHQBJTwMBAwKoAAEBQAEBAWoDqL8A7wACAPYERiwYKxD2XeT9XXHkAC9d7TEwEyEDIYkCLTr90wKb/usAAQBaAAABsAEVAAMAJUAXAEkDCgKoAUkDqCAAMAC/AAMArgRGcxgrEPZd5P3kAD/9MTATIQMhlAEcOv7kARX+6wAAAf+n/+cDRQXTAAMARkARRAFEAgICAQADAAoC+HABAQG4AdtADAP4QABAFhg0zwABAL
gBmrIEIMW5AV4AGCsaGRD9XSsaGO30Xe0APzw/PDEwAV0HATMBWQLWyP0rGQXs+hQAAAIAhP/nBJIFwAAOACAAPEAJ6Bb4FgIpAgEbuAEcsgUFErgBHEAPDA0YaQgaIg9pABkhvJcYK04Q9E3tThD2Te0AP+0/7TEwAF0BXRM0NxIAMzISFRQCACMiAiUUFjMyNzYTNjU0JiMiBwYDBoRLXAEgsLLlp/7ksrLnARpMPFA+U0AzTTxMO1c/NgHd3eMBFQEO/v724f4L/vUBArp2YEpjASPnh25hSWr+9ecAAAEA8wAABBYFwwAJAJNAG0sATQlfAF8BXwkFLAM5AzYFNgZWA/kJBgQFBbgBX7cGBxQGBgcHALoCbgABAxdACwMDBAUFBgwHBhADQQsBogAEAWYABwKNAAUBsAAGAnUAAQJxQA8QAAE/AE8AYAADABkKBhe4AxSx8xgrK04Q9F1xTeT27fT97SsQPAA/PD88EPT9OYcFLit9EMQxMAFdAF0TEyQ3MwEhEwYG8zYBc8uv/sv+39Va6wNKAQCi1/o9A/g6YAABAHwAAASRBcAAHwEWQEyaEwESLw8vEC0XOhdPD08QTxRPFU8WVxOUE6MTsxO6GLUfwArAC8AT0xPZGOAT5RblF/QT9Rb1FxoPEQ8SDxMPFAQwDkAOwA7QDgQOuAFqQBERAR0ADzAOQA6wDsAO0A4FDrgBXrYQEQxfAAEAugJyAB0BHLIEBRC6AnEADwKPtxppjwf/BwIHuAIqtiEAaSABAQG8Am8AEQFOACADFLGXGCsQ9vRd7RD1ce305AA//eRdPzz9XTwREjkBEO1dMTABcV1DXFi5AA3/4LISORK4//iyETkJuP/AshE5Crj/wLIROQu4/8CyETkMuP/AshE5Dbj/wLIROQ64/8CyETkTuP/4sQw5ACsBKysrKysrKytZAF0BJTY2MzIWFRQGBwYEBgchAyE+Ajc2NzY2NTQmIyIGAh3+5yL5u83qSFI7/vRXKAH8NvyFDmOUxZQnRjRXQ0RmA/EqzNneq1evYET5WzP++mzDqLuOK05yLkpcZwABAGj/5wR7BcAALADpQCogGEQGYxuEIgQvGPsIAiAIVhppHHYrmgIFDAoNDxkVGAAEASMmEg0HDA+4ARxAHBAKAQovCk8K4ArwCgQKGAABAWABcAGAAfABBAG6AykABAEcQA8qDQ8YAW8YfxiPGP8YBBi6AykAFQEcsh0FB7gCibcwJk8mAiaBErgCibYgGi5ADAEMuAGXQA2QGKAYAhhpkBmwGQIZuAGhQAuQAQEBaQAZLfTzGCtOEPRN7V30Xf1d5l1OEPZN7fRd7QA//eRdcT/95F1xETldL3HtARESORESOQAREjkREjkRMxEzMTABXQBdAV0TJRYWMzI2NTQmIyIHNxYzMjY1NCYjIgYHJTY3NjMyFhUUBgcWFhUUBwYjIiZoARMRWUpigWhYFhgyDw5zeFFCPmId/voyWYfFytKGdV1Yc534uvEBfyF1WoRnWmkE7wJ1ZEdSW240oleC4Z1yuCsxnFWiiLrVAAACADgAAAR7BboACgANAUNAWxI8AkwCWQJoAqoCuwIGKANTA2UCbwNgDXsDiQObA6YCpgipDLgDtwTEAskD2gPXBNcIEgkFCQxJBUkLSAwFAgINAQsMCg0GCAQJAQcFBAkNBgAMCgEHAgMCAQO4AVdACQwNFAwMDQQJCbgBX0AMCgwUCgoMAwoHCQYCuAMYQBAGQA3QDQIN0wcBAQMJCgwMuAFqtQQDBAwQCb0BsAAKAAQBHwAMAapAFC8KPwqfCgN/Cr8K7woDXwqPCgIKuALNtw0GGg9gAQEBuAKHtVANsA0CDbgCbbIOChe4AZmx8xgrKxD0Xe1dThDmTRD2XV1x/e0Q7Ss8AD885T88EjkvPP1dPOYBERI5ETmHLit9EMSHLhgrCH0QxA8PDw8AERI5ATkxMAFxXQBdQ1xYuQAN/+BAChA5AxAROQ
IQETkBKysrWQEhNwEzAzMHIwMhExMBAnn9vzIDGfjBsjOyP/7ycl/+fQEr8AOf/GX0/tUCHwHJ/jcAAQCC/+cEnwWmACEA+0BDE0ALORRACzkvDy8QPA87EE0PTRBeD1oQbw7kCPYV9hYMIAEgBCgQKBQgID0QPxE/FJcQmBGaFOgR4xLlGvUVDxQVFbgBr0AWEBEUEBAREA8VDBgAUAFgAXAB4AEEAboBrgAGARy1Hw1vDwEPuAKPs+AMAQy4AVpAEBgTIBQ/FE8UvxTPFN8UBhS4AV6zEhEECbgCibIbthJBDgJxABMBlwAjABQC2QAVAiQADwGqABECjQAQAtlACwFpDwABABkivJcYK04Q9HFN7fTk7f3kEPbk9O0APzz9XTwv/V3kXT/99l05ERI5ETmHDi4rBX0QxDEwAV0AXSsrEyUGFRQWMzI2NTQmIyIGBycTIQMhBzY2MzIWFRQCBCMiJoIBFwFbSWanXk44aTHs3wLhNv4ZQypUKajYl/70k7vxAaQZFApmZsOsZmg0NRIC7f764hMT4r6Y/tme8gAAAgCm/+YEmwXAABoAKADbQF4IIAUnAjUINSdCCEInZghoJ3YIhgiWCJsWpgj2CPoa+SMOBggBKhk3IEYHWBZmCGkSaid2CIQIlQilCNYn4wboFuYnDwAEAQgiG1AlASVlTwoBCh8BAV8BnwHfAQMBugGhAAQBHLIYBR64ARxADxENLwEBAWl/AI8A4AADALgCcUAPImkPDQFwDYAN7w3/DQQNuAFMQAwqG2lAFYAVApAVARW8As0AKQGgAtAAGCsQ9l1x7RD2XXHt9F3tXQA/7T/95F1yL13tXQEREjkAERI5MTABXQBxXQFxAQUmJiMiBwYHNjMyFhUUAgYjIiYmNRAAITIWARQWMzI3NjU0JiMiBgYEm/7zCEM5STlRMV1oqtya5oB7xWsBXAEdpML9NVxEVDxRWkE3cD8EVBZXRjpTuT3oyJ/+5YJ77MYBiQIktPyjdG5OabFybVO1AAABANQAAATSBaYADwCwQEIPQAs5AEALOfgEAUAAQAFHB0YMRg5aBVgGVw9mBGYHdgV5DXkOeQ+FBYsOiQ+ZBJkPqQ/IDcgP1wfXDBgLDwEPCQO4AaJAEAIPIAA/AE8AvwDPAN8ABgC4AV62AgEECQoMA7gCcbMCGhEJvAGwAAoCcQABAnFADD8ATwBfAKAABAAZELoBpgGbABgrThD0XU3k9O1OEPZN5AA/PD88/V08EP0BETkxMAFxXQBdKysTEyEHBgICBwYHITY3NhI31DcDxytc3r08UxT+7xJDUvauBJ0BCcxK/tf+kaXkb27A7AG4ywAAAwCH/+YEiQXAABkAJQAxALhAPzoNPA5WEqcM1isFKw0gEiATPQUzEjYTSBJVDlkSdg6GDogkmAiYDLkk6CL4IhENAAENAB0vDQogABoD4C8BL7gBWkAKMB0BHR0j7ykBKbgBWrUUDeAjASO4AVpAGQcFLGlvEAEQgSBpHwoBChozGmkgAzADAgO6Am8AJgKJtRcZMrzzGCtOEPRN7fRd7U4Q9nFN7fRd7QA/7V0//V0ROS9d7V0BERI5ERI5ABESOTkxMAFxXQBdASYmNTQ2NjMyFhUUBgcWFhUUBwYhIiY1NDYTFBYzMjY1NCYjIgYDFBYzMjY1NCYjIgYBuE1NbNeIxNx8fFpZapr+9cLspPpUQ011VkNPcYBaSG95XUhaiwMnM4VRXsJwzJxuozY3nWWmiMTlsZPoASZFVXVOR1h0/QZSXsRzSl+mAAACAIL/5wR3BcAAGAAmALZAUjoIOiVPCGsIeQiICJkIlBTyAvkI9Bj1IQwoByYXOR5UFGkIZxF6CIkImgipCNkH2RXZJeYD6QfpCOYY6SUSfgd6CAIABAEIIBlfIwEjZXAKAQq4AxW1UAGQAQIBugGhAAQBHLIWDRy4ARxAHxAFGWkTGiggaUANUA1gDZANBA2BIAEBAWkAGSe88x
grThD0Te1d9F3tThD2Te0AP+0//fRd9nHtXQEREjkAERI5MTABcV0AXRMlFhYzMjc2NwYjIiY1NAAzMhIREAAhIiYBNCYjIgcGFRQWMzI2NoIBDQhDOUk4UjBdZ6rcARrpztr+o/7jp8gC1VxEVDxRWkE3cD8BUhZWRjlTuTznyOABXP7U/wD+df3evwNSdG1OabFybVS1AAIAkAAAAokEJgADAAcAT0AUgAkBAEkDBEkHCgKoAUkAqKADAQO4AU9AGwQGqAVJBKgPBwEgBzAHwAfQB/8HBQcZCGhjGCtOEPRdcU30/eQQ9F30/eQAP+0v7TEwAV0BIQMhAyEDIQFuARs6/uVqARw6/uQEJv7q/gX+6wACAFP+wgKPBCYAAwAOAHJAI4kHAQwLDQoKCQYFDgBJAw0OCr0JhwRJDgoCqAFJAKigAwEDuAFPtgVJBAENAQ24AVFAEA6oBUkgBDAEYARwBO8EBQS4AVizD2hnGCsQ9l3t9ORxEO30XfT95AA/7fTtEDwv7RESORI5ERI5OTEwAV0BIQMhAyEHBgYHNzY2NyMBcwEcOv7kaQEhMCm1kBxITxOEBCb+6v4F5calA4UQU1YAAQBfAKcETAUBAAYAaUAS2AHXAgIHBUYFhgUDAAQGAwgDuAF7QBWAAAFvAOAAAgADAAMABgYEBFADAQO4Av9AFgJwBXCAAQGQAaABwAHgAQQBkgeqShgrThD0XXFN/f39XTwQPBA8EDwAP11x7QERFzkxMHEAXSUBNQERAQEETPwTA+39QwK9pwG18gGz/uP+9P7qAAIAVQF0BFYEMgADAAcAbrYHPwRPBAIEuwJQAAYABQMntgM/AE8AAgC4AlBAGAIQASABAiABQAG/AdAB8AEFAQcGBgMDArgC/bUJBAUFAQC4/8CzExk0ALgC/LMIRkoYKxD2Kzw8EDwQ9jwQPBA8AC9dcjz9XTz2PP1dPDEwExEhEQERIRFVBAH7/wQBAzABAv7+/kQBA/79AAABAF8ApgRNBP8ABgBxQAzXBQEIAgEEAwEDBwS4AXtAE4AAAW8A4AACAAMAAQEDA18EAQS4Av9AEwVwAnAPBoAGAi8GUAbABuAGBAa4Av9ADSAAkACgAAMAkgeqShgrThD0XU39XXH9/f1dPBA8EDwAP11x7QESFzkxMAFxAF03EQEBEQEVXwK+/UID7qYBGwEUAREBGf5N8AAAAgD8AAAE8gXTABkAHQC8QDACBQYLBQwOEASPDAEqFyAfNhBDFkkXVQVWFlcXeQNwC4gDohYMDgovDQEN2Z8KAQq4ARy2EQEADwEBAbgDKkARGkkdChAHkAcCIAeQB6AHAwe4AlCzIBQBFLgDGrQgHwEfALgCUEANLwFfAQIBLRtJGqgdDbgCULUvDl8OAg64AU+1HEkgHQEduAFisx6/LBgrEPZd7fRd7RD07fRd7RBx9F3tXXIAP/32cTw/7XLtXRE5MTABXQBdAXEBITY2NzY2NTQmIyIGByU2JDMyFhUUBgcGBgUhAyECvP7+FH+/oTliZXCOHv75MwEk0dr0aOmSRP7RARw6/uQBbobFpY1aKj5TeoYuw+LZl1ivwXpgrP7rAAIAPf5RB8YF1AA9AE4A+kBFPxY/FzAiQiJGOkY9fRl1IggpGykmJDktRSlGKkk6JjsyOEY6R1kmWj1ZSGlIdiZ2SBAAUAEmJygoJSNIJBgYQQAjJEsFuAEWszvBADO4ARayDQFLuAEWtiEHJAYACiu7ARYAFQBBARZADxoaFQol5ygk5yhJjxgBGLoB2AAwARayEagAuAEWQAkAAQFwAYABAgG4AwBAD/BQAVA+VwAdAe8dAR1hN7gBFrMJhE9QuAIOsyFPZxgrK070Te30XXHtThBd9l1xTe30/fRd7eQQ5AA/PBDtEO0/Pz/tP+0Q9O0REjkREjkBERI5OYcOfRDExDEwAXFdAF0lMwYHBiEgJAIREAAkITIEEhUUBwIhIiYnBiMiJjU0Nz
YzMhc3IQMGFRQWMzI3NhI1NAAhIAQCFRQSBDMyJAEUFjMyNzY3NjY1NCYjIgYGBvHVZM/t/qv+tv4g6gEKAc4BKfwBiM+dxf7MU1QOdpqm24Sg+rJVGQEIlw4XEDBMZn7+nP7B/vH+h7/VAYH47wFY/DBnTDkyJiU1TWlQVpJKE8tzhN8BswEAARkB5fPE/pfW/87+/Do4cuW+68LsiG/9M0QUGRk6TAEAifYBS93+b9nT/qafhQIgf3gcFCs96mVxeYX2AAAC/+kAAAVjBboABwAKAONATUAMASAMcAwCTwx5BL8M0AzwDAUmAyoEKAlWA1YKmASXCgcJCQAMSQBLBkoIQAwGAAgJBwEKCQkCBQYGIgcJFAcHCQQDAyICCRQCAgkJuAJZsgQICrgCyEAOAAEBAwUEAgYHBwIDCAa4AoJACyAHfwcCUAffBwIHuwEVAAkAAwJZty8CcALgAgMCuAK2QAoACUAJcAnQCQQJvAEyAAsCsgK3ABgrGRD0cfZdGO0ZEPRdcRjtAD88PBA8PzwSOS88/TwQ7YcFLit9EMSHLhgrfRDEBxA8PIfEPDEwAXFdXXEBcgEhAyEBIRMhAwMBBBT9u7D+ygM9AU/u/uJWVf6eAUT+vAW6+kYCOAJO/bIAAwBSAAAFrgW6ABIAHgAoAPFAOyAqAVYQASsLJwwwKlcTVx9gC2QMlwanAKgBpx+gKsAq0CrwKg8TKB8eCxQnCwgYHh8fHwABFAAAARQTuALIQBcnzygBKCAo3ygCLyhPKLAowCgEKB8dHrgCyLQCAQIgH7gCyEAnEgAIAQAQGCggCAFvCAEIyyQoQCoBIA4wDmAOnw6/DgUOXnAqASoeugKSAB8CJEARwADQAAIwAEAAUAADADwpABe6AqoCVwAYKysQ9l1x/eQQXfZdcu30XXHtKxA8AD88/Tw/PP08ETldcS9xPP08hwUuK30QxAEREjkAERI5BwU8PDEwAV0AXQFxMwEhMhceAhUUBgcWFhUUBgQjAzMyNjY1NCYnJiMjAyEyNjY1NCYjIVIBMwH1o0Fslk6IjnF/lv745c/mm4hEQDshfurOASO3fUtsfv6YBboKD1qWW3qzLh+fZITtaANqLGQ5OEgMBvwnL2w8SV8AAAEAwv/mBfcF0wAbAKJAKk8ATwFADkAPBCsJKworFEgFTw9HFEoYWQJaA1oMlgkLAQAZDk8PXw8CD7oBrQASAstACgsDLwA/AEAAAwC6Aa0AGQLLQDAECQAobwG/AQJPAV8BnwEDAYEPKA7LQB0B4B0BIB1AHVAdYB0EHRYoQAdQB2AHAwe4ApqzHDkyGCsQ9nHtEHFdcvTt9F1d7QA//eRdP/30XTkREjkxMAFdAF0BBQYAIyAAETQSJDMyBBcFJiYjIgYCFRQWMzI2BHoBMFn+nO/+8/7RywFo1O4BJhr+3xiJcn3Zgql+dcMCEC73/vsBRAE0+gGf3P/cHIBzmP69obDAmAACAFkAAAXMBboAEQAfAJVAIoUYATcANRJLF1cSVx+6FbkXyhXKFgkfEhIfAAEUAAABExK4Asi0EQAIHh+4AshAFQIBAgEAEBkoChpAIQEgIQG/IQEhH7wCkgASAiQAAQJEQBHAANAAAjAAQABQAAMAPCAAF7gCqrEyGCsrEPZdceT95E4QXXFy9k3tKxA8AD88/Tw/PP08hwUuK30QxDEwAV0AXTMBITIXHgQVFAIHBgcGIyczMjY3NhI1NCYnJiMjWQEzAYulLluPdVUutYlppl7DppmmlT5ZemVKNIWvBboFCThljrlu7f6RbFQoFuwpOFEBDrecnRoSAAABAFQAAAXFBboACwDeQCcgCiALNwA2CUACQANXBFcJnwOfB58LCwUICQQECQkfAAEUAAABBgW4AshADwcILwhPCLAIwAgECAkDBLgCyLQCAQIKCbgCyLYLAAgBABAHugJCAAb/wLI2NQa6AkIACwJCtUAKYAoCCroCkgADAkJAD1ACAS
8CUAICnwIBAhoNBLwCkgAJAiQAAQJEQBHAANAAAjAAQABQAAMAPAwAF7gCqrFsGCsrEPZdceT95E4Q9l1dck3k9F3k9CvkKxA8AD88/Tw/PP08ETldLzz9PIcFLit9EMQHPDwxMAFdMwEhByEDIQchAyEHVAEyBD8z/O9GAvcz/QlcA1U0Bbr1/rP1/nL1AAABAFAAAAWFBboACQCvQBdXBFcI1wkDCAUECQQJCR8AARQAAAEGBbgCyLUHCAgJAwS4AshACgIBAgkACAEAEAe4AkK20AYBsAYBBroCQwADAkJAEJACAcACAQIaIAsBUAsBCwS8AV0ACQJZAAECREARwADQAAIwAEAAUAADADwKABe6AqoBFAAYKysQ9l1x5P3kThBdcfZdck3k9F1x5CsQPAA/PD88/TwROS88/TyHBS4rfRDEBzw8MTABXTMBIQchAyEHIQNQATMEAjT9LEoCxjP9OoIFuvX+oPX9kAABALX/5wZJBdMAIQEyuQAg/8CyEzkhuP/AQF0TOSgLOQNJA1gLhx6XHqQeoB8IvwKzHgIHAgEsAicNLh4pHzoGOA84EDkfUAFcD1wQnwGfHp4fnyCkAKcCqwTWAtYf+gEVERUSIB8fIgIBFAICAR8gAgUcIK8hASG4AshACwEgAAEAUAABABIcuALLQAkFCF8SAV8SARK6Aa0AFQLLQAkOAwIiKAECECG4AkJAFQBPAAFQAJ8A3wD/AAQAGR/LjyABILgCJLVQAI8BAgG4AkJAEhIoQBEBEaUjGShACVAJYAkDCbgCmrYiAhcCIgo5uQKuABgrKxA8KxD2ce0Q9XHt9F39XfQROV1xL+QrEDwrEMAAP/3kXXE//RE5XS9dPP1dPBESORI5hw4uKwV9EMQAERI5MTABXQFxAHFdACsrASEDBgQjICcmETQ3NgAzMgQXBSYmIyIGAhUUFjMyNjc3IQOEApCEdP6vr/7wkcZVZgF4/v4BMzL+5yWlfZL0iqiiYcxKLf6KAw/9i0toe6gBP9bE7AEE7OIgfH6Y/sK8ubE3JtoAAAEAWQAABh4FugALANlAK1cCVwZXB1YKUA0FAAcICAsBBgUFAgkKCh8LCBQLCwgFAgIfAwQUAwMEAAG4AshAGQcGMAYBBgQKCwsCAwgJCAgFBAIICwQDEAi+AkQACwAEAkQAAwAKAllAEl8LzwvfCwPfCwGfC98L/wsDC7gBbLRADQENArgCWUATwAPQAwIwA0ADUAMDAzwMCxcDF7gCqrF6GCsrKxD2XXHtEHL0XXFy7RDkEOQrEDwQPAA/PDwQPD88PBA8EjldLzz9PIcFLit9EMSHLhgrfRDEBxA8PAcQPDwxMAFdASEDIQEhAyETIQEhBEf9yYn+0gEzAS12Ajd3AS3+zv7SAo/9cQW6/coCNvpGAAEARwAAAqcFugADAMy5AAX/wLNDRzQFuP/Asz9ANAW4/8CzPD00Bbj/wLMzNDQFuP/Asy8wNAW4/8BAPSctNJAFAUAFVwJQBWAFfwWgBccC5wLgBfAFChAFXwWfBbAF0AUFQAUBAgMDHwABFAAAAQIBAgMACAEAEAK9AkQAAwJZACAAAQJEQCBwAJAAoACwAAQwAEAAYAADEAAB4AABYACgAAIA4QQAF7gCqrFsGCsrEPZdXXFychnkGv3kKxA8ABg/PD88hwUuK30QxDEwAXJxXXIrKysrKyszASEBRwEzAS3+zgW6+kYAAAEAO//nBMwFugAUAIZAJjgDOBIwFmgCaBR4AHcUyBLAFv8GCgoQCwoNCBQAAB8BAhQBAQILugGnABACy0AMBQkBAAIBFSgCARACuAKCs3AUARS4AkZAEA0ogAgBCF4VARcBFQqngxgrKxA8KxD2Xf32Xe0rEDwrEMAAPzw//eaHBS4rDn0QxAEREjkAERI5MTABXQEhAwIGIyImNTQ3JQYVFBYzMjc2NwOkASi5SvD509IFARcGUFN0LyMyBbr8iP6f+sKnIyseLB
1GSkQ08QABAFEAAAZpBboACwFOQFUSKwcrCFgBVgVWBmkEaQV2BoYGiQiHCYgKnwSWBpkHmQi5BcYH2gjWCdAN6gjgDfkIGAsEBQcADUkEgA3QBQaQBQFlBowDjAqnBrUGBQQDRgNGBAMIvAIkAAcCLAAEApNAVQUaQA0BIA0BUA0BDQoKCQMCAwQCCwYGBwkKCQgKBQILCx8AARQAAAEIBwYGHwkIFAkGBQkIAwYBCgAJBgkICgMLAgkJAQcICAsACAUEBAIBAgEAEAK8AkQACwJZAAECREARwADQAAIwAEAAUAADADwMABe6AqoBFAAYKysQ9l1x5P3kKxA8AD88PBA8Pzw8EDwSOS8BERI5ORI5OQAREjkSOTmHCC4rhwV9xIcuGCt9EMQHCBA8CDwHCBA8CDwBGE4QXXFy9k3t9O0xMABxXQFycV1DXFhADAkgDTkEIA85BCAQOQErKwArWTMBIQMBIQEBIQEFA1EBMgEuggKmAZT9bgHj/qv+lv7jXgW6/ZECb/3B/IUCuvn+PwABAFwAAASnBboABQCaQC4gBwElACUDIAQgBTYANgM9BEAFVgNTBFAFbwRwBHAFpwMPAgMDHwABFAAAAQQDuALIQAoFAAgCAQIBABAFuAJCQBBgBJ8EzwTfBAQEGlAHAQcCvAFdAAMCJAABAkRADDAAQABQAAMAPAYAF7oCqgJXABgrKxD2XeT95E4QXfZdTeQrEDwAPzw/PP08hwUuK30QxDEwAV0BcTMBIQMhB1wBMwEt/wLqMwW6+zv1AAEAUwAABwcFugAMAbVAsA4GCQcJCEUARQNGCm4IdgaFAIoBhAbwCQwwDkAO0A4DSgdGDH8HkAe/DAUnAiAOOQk4DE8GTAdJCEcMWAlYDG8GbAdoDGAOeQB/Bn8Hewh7CXwMjwaNB48IiQmICowMnQafB58ImQmXCpsMtAC7CLUJtwzFAM8IxwrGC8gM5gDqCOgM8A4tDAJmCAcHIQAMFAAADAkKCiILDBQLCwwFBAQiAwIUAwMCAgEDDAsCAxAHvQKEAAAADAKCAAkCRLQOFxcaCrgCgUAiXwvfCwJ/C/8LAg8LAe8L/wsCnwsBjwsBfwsBbwsBTwsBC7oBsgAAAbJAEj8BXwHfAQMPAX8BAo8B/wECAb0BsgADAAUCRAADAoFAL48EvwT/BAMwBEAEUARvBH8EnwQGBBkNCxcDFw0NBAwMDQIIAgUCCwgBCAQIOHoYKwA/Pz8/PxESOS8SOS8rK04Q9F1xTe3kEPZdcXL+9l1dXV1dXXFxcv1ORWVE5k307RDtKxA8EDwBERI5hy4rfRDEhy4YK30QxIcELhgrBX0QxDEwAXYvGS8BS7ALUVizDgkNBBA8EDxZAV0AXQFycSEhAwMhASETASEBIQEDn/7iNen+8AEyAa4tAfQBs/7N/u0BIQTJ+zcFuvv9BAP6RgTAAAABAFwAAAYaBboACQGrQI8SLAJkAmoHlgKaB+4H8AcH3wIBJQApASgCOAJHA0kGUAtoAWcDdgJ6B4YCiAeXAJkBlgKYBZoHqACmBKwHqAioCbwAuQe5CLoJxwDIAccDxwTKBcYG2gHrBvcA9gL8ByaACwEJAAAiAQgUAQEIBgMDIgQFFAQEBQedAAKdBQkICAYFAgQDAwEACAgBBQQQALgBskAoAUBQNQFASEk0AUA8NY8BzwHfAQMPAS8BXwEDzwHfAQKfAd8B/wEDAbgBbbRACwELA7gCgUATwATQBAIwBEAEUAQDBDwKARcEF7gCqrGDGCsrKxD2XXHtEHL0XXFycisrK/YrEDwQPAA/PDwQPD88PBA8EO0Q7YcFLit9EMSHLhgrfRDEMTAAS7AZU0uwGVFaWLkAB//OsQIyODhZAEuwJVNLsCdRWli5AAf/zrECMjg4WQBLsC5TS7AuUVpYuQAH/7qxAkY4OFkAS7AyU0uwMlFaWLkAB/+6sQJGODhZAXFdAHJdQ1xYuQAH//BAFws5AhgUOQ
EQFDkFGBQ5BSAQOQI2EDkHuP/KsRA5ASsrKysrKwArWSEhAQMhASEBEyEE6P7n/nTO/ucBMgEaAY3NARgD2fwnBbr8KwPVAAACALP/5gZGBdMAEAAdAGJAESgEJgxoFskExgsFCQQFCwIUuALLsg0JG7gCy0AjBgMYKOAJAQkaQB9gH6AfAx8RKKAAAUAAUABgAAPgAPAAAgC4ApqzHjkyGCsQ9nJxce1OEHH2XU3tAD/tP+0xMAFxXRM0NzYSJDMgABEUAgQjIiQCJRQWMzI2EjU0JiMiALMnM9EBPssBEAFP2v6K7M3+6nQBKriWeuGTupDd/usCP4KQwQEcpf6u/uno/k7qugETkpfMoQFHm63F/mQAAAIAUwAABZ4FugAPABoAqUApKBNXD1kTVxpvHNkT3xzpEwg1GtAcAjkaARAPABoaAAAfAQIUAQECERC4Asi1Dg8PARkauALIQBsDAgIAAQgCARAVKFAH4AcCB4EvHEAcYBwDHBq8AV0AAAJZAAICREARwAHQAQIwAUABUAEDATwbARe4AqqxZBgrKxD2XXHk/eROEF32XU3tKxA8AD88Pzz9PBE5Lzz9PIcFLit9EMQHPDwxMF0BXV0hIQEhMhYWFRQOAgcGIyM3MzI2NjU0JiYjIwGB/tIBMwJUn7lsVn2QekfDwTJd76BcL1WT2AW6S614b9J+QREK8zyETzVEHwACALP/PQZFBdMAFAAuAL5ARDkENgtpEGkWeRDJCMYi+xH5EgkKCAEmDyQRJRQoKjoBSgFbAWkAeQCtAbwBzQHcAA0sKBUAEBIGEy0tKBUAEBMGEiwtvQJEACMAEwMsACMCy7IDAhu4Asu2CgMsLBggErgCQ0AhGCjgDQENGkAwYDCgMAMwICjgBvAGAqAGAUAGUAZgBgMGuAKasy85MhgrThD0cXFyTe1OEHH2XU3t5BESOS8AP+0/7eQQ5AEREhc5ABESFzkxMABdAXFdJQYGIyAAETQSJDMgABEUAgcWFwcmAzYSNTQmIyIOAhUUFjMyNzY2NTQnJic3FgQ2RpZQ/vT+tc4Bd+8BEAFOs5Zgao6MXVtvupBjqI1ZvZMpLRAHHj9HbocdGxwBTwEO3QHE7/6v/ujW/nh6Xjy7RAHiXAEkga7FW6z+f6vFBgMGBggcPBqoRAACAFoAAAXuBboAFgAhAShAbBIKEAwRDBMqGrMJBSoNJho7Cz0NOBE5Gk8KSQ1JEFcAVwFXAlcMVyGDCYwRjxKaDZkQpw25CrkLuQ25EMkLyw3JENsN2hDZEeUK7xPpGvkQ+xojFhchACEAAB8BAhQBAQIYIBcwF0AXkBcEF7j/wLITORe4Asq1FRYWASAhuALIQA0DAgIODw8AAQgCARAPuAJZQBuwDgEOmBwoIAfAB9AHA78HAQfLQCMBICMBIyG8AV0AAAJZAAICREARwAHQAQIwAUABUAEDATwiARe4AqqxZBgrKxD2XXHk/eQQcXL0XXHt9F3tKxA8AD88PBA8Pzz9PBE5Lzz9K108hwUuK30QxAc8PDEwAV0BcUNcWEAQGkASORlAEjkYQBI5F0ASOQArKysrWSEhASEyFhYVFAYHFhcWEyEmAyYnJiMjNzMyNjY1NCcmIyEBiP7SATMCi6i7c+DjOjNlfP68J3I+Ritrey6g859aQSiF/rQFukW4g7vzHTRVrP7GfAEHjjEd3Dp8RFAoGAABAH7/5wVpBdMAKgD7QGNCB0oQSRFNG2kNaBBtE28dYymQAZ8InxafF5AckR6qG7kbygzAEcgbwigVACwBRgxJIUsjA0YJQAtOHk8fTyCALAY2Ik0LTw1GHkAfQyFAIqQeyBPFJdoe2iAMAAUBFW8WARa4AVlAD68ZvxnPGQMfGS8ZPxkDGbj/wLITORm4Asu1EgPPAQEBuAFZQBMFQBM5oAWwBcAFAxAFIAUwBQMFuALLQBQnCQgoJPUWKBUaLBwobw8Bnw8BD7gCLEALASggAAEAGSvOMh
grThD0XU3t9F1x7U4Q9k3t9O0AP/1xXSvkXT/9K3Fd9F05ERI5MTAAXQFxAHEBcl0TJRYXFjMyNjU0JyYkJiY1NCQzIAQXBSYmIyIGFRQXFhcWFxYVFAAhIiQmggEfBilCspSENib+q51aAQz+AQIBGw7+3wuCf31rMTGo/UVn/s3+7r3+63IB2w+JME5qS0EtIZRjnmq59PLJDWlwWEM/KSpJbURkncL+7X/iAAABAPYAAAWsBboABwD0QE1wCQE/Az8ERwBHAUgCTwRABmcAZwF3AIQAhQGJAosHmAKfBJ8FpwCjAqMHtAKzB8kCyAfZAtAE0AXZB+gCHQcAAB8BAhQBAQIAAQgGA7gCyEAJBQQCAgEQBbkEuP/AskQ1A7j/wLJENQW4/8CyRDUGuP/AQBFENSAGcAagBgPABuAG8AYDBrgCZ7QgCQEJALgCWUAeAQS5EAMBMAOgAwJwA+ADAgNAAwEDoAEBQAHgAQIBuAJGtAgJAReGuQEYABgrKysQ5l1xATlxL11xcuQQ7RBx9F1xKysrK+QrEDwAPzz9PD88hwUuK30QxDEwAV0BcSEhASE3IQchAtL+0gEA/lIzBIMz/lgExfX1AAEAu//nBh8FugAaAPlAWykCKgwpDykaOgA6ATkNOQ5KAEoBSg1KDlgMWRpQHG8AbwFrAmYQaxp5AnkaiACIAZgMmRWYGrcPHAIDBAUFARoZGBcWFgAODw8fDA0UDAwNAQUFHxYAFBYWAAi4AstAEhMJDg0NAQACFgwbKA0MABYQD7gCWUAWEAxADN8MAxAMzwzfDAOgDLAM/wwDDLgCtkAfBSjAFtAWAjAWQBZQFmAWBBY8GwkMFxYXDBYbCjnfGCsrEDw8KysrEPZdcQH99l1xcu0rEDwQPCsQwMAAPzw8EDw/7YcOLisFfRDEhw4uGCsFfRDEhw4QxMTExIcOEMTExDEwAV0BIQMHBhUUFjMyNjY3EyEDBgIEIyAkNTQ3NjcBhwEtpikEf3ltjk4oqAEtpzWK/ujW/wD+8AcEHwW6/OTEGRZXcVKiwgMh/N79/vSo+8UpMCCWAAABAOgAAAZZBboABgC0QDM/CFkAWgFaBF8IZwCVAJkBmQW/CAoQCCAITwjeAQQABgYiBQQUBQUEAQICHwMEFAMDBAS4AiRACwEGBQUDAgIAAQgGuAKCQA8QBV8FoAUDIAWQBcAFAwW7AsMABAACAlm1LwNfAwIDuAKSQA0QBCAEUASgBATgBAEEvAHaAAcAhgMdABgrGRD0XXH0XRjtGRD0XXEY7QA/PD88PBA8EO2HBS4rfRDEhy4YK30QxDEwAXFdISEBIRMBIQMx/r7++QEtugJfASsFuvurBFUAAQDwAAAIiwW6AAwBeEB2EtYH2goCcwfSBwIvBS8ONQA1Az8ORQNKCE8OVgNfBV8GXwdeCF8OaABtBW0GbQd1A4IAhAOLCJ0FtQDGAOUD4AT0ABwAAAYDCgcLCAAM0gDQCAcHCgIDBQQIBwchAwIUAwMCAAEBAwMECAwLCwkJCAgGBgUCDLgCgrUAC3ALAgu4ArSzCosJALgCWrMBiwIEuAJaQAsDBR8GWh8HjwcCB7gBbUAKCB8DjwMCXwMBA7gBbbII4wm8AuYAAgCGAx0AGCsv7e3kXXEQ9HH07RDuEPTuEPT0cu0APzwQPBA8EDwQPD88EDwQPIcuKwV9EMQAERIXOTEwAXFdAHEAXUNcWLkAB//gsgs5Crj/0LILOQK4/+iyDTkHuP/osg05B7j/8LYMOQsYEjkAuP/wshI5DLj/4LISOQW4//C2EjkIEBI5A7j/4LIROQC4/+CyETkAuP/gQA4QOQggEDkIGA85CBgWOQErKysrKysrKysrKwArKysrK1khIQMBIQMhEwEhEwEhBcb+zyH96v7JNwEnDQH3AUciAeMBJAQw+9AFuvv+BAL8CQP3AAAB/8IAAAZFBboAFQEyQIUSABdoAmgOwBcEaQIBiwkBIBY4Aj
QDNAQ2BTkRaQRmEHQAewt7DHQVigSKBY0PqQKrD6oRuAKwF8gAxArEC8UMyhTKFRoqAikPKRFfFwQTAA0DEA4ADQQPCQQPDAECAQwDEAQDEBAfDwQUDw8EDA0AAB8BDBQBAQwNDAwEAwIPEBAAAQgQuAIks58PAQ+6AkQADAKWQBUADWANwA0DQA1wDe8NAw0anxcBFwS8AlkAAwGxAAAClkATzwHfAQI/AWABjwGfAa8BBQEZFroBRQMdABgrThD0XXFN7fTtThBd9l1xTe30Xe0APzw8EDw/PDwQPIcFLiuHfcSHLhgrh33EDw8PDzEwAV1dAF0AcQFxQ1xYuQAL/9iyEDkMuP/YQAoQORUoEDkAKBA5ASsrKytZISEBASETFhcWFzY3EyEBASEnJicGBwFI/noCh/6UATuPB1sGB3pT5wF7/WEBaf6sblYcJ6IC5gLU/u0PvAoRlV8BBfz8/Urdr0w7uQABAOsAAAZHBboADQFJQIoSSAJsAn4CigmICrkCtgi6DdoK7AIKIAAgASACIQ04AjcFOQ1KCEcNVw1pCGoLYA91B3MIdQl5C3APnQicC6YCpg3IAsYK1gPQDxoGAgYJBgoEDRUNRgMGB8ACDQAAHwECFAEBAgcLBwQLHwwNFAwMDQcEBwsEHwMCFAMDAgwLCwQDAgABCAIBEAu4AiRAEVAM0AzgDAMwDEAMYAxwDAQMuAFdtA8H6gIEuAJZsz8DAQO6AbEAAAJZQBtgAQEQASABUAEDkAHQAeABAzABUAFgAXABBAG4AuCyDgEXugLfAtIAGCsrEPZdXXFx7fRd7RDtEPRdXe0rEDwAPzw/PDwQPIcFLisIfRDEhwUuGCsIfRDEhwUuGCsOfRDEABgv7TEwAXFdAF1DXFhAFAwQDzkNGA45DRgPOQ0gDTkCIAs5ACsrKysBK1khIRMBIRMWFzY3NxMhAQM1/tN3/mwBQbZBFiUxi88BXv1gAjsDf/5ilUJBScIBKfxlAAEAMwAABVgFugANAMG5AAH/+EBBCzkHBjtfAlAHAigBmwECWAcBLwMvBEAOXwNfBFcGVwd5AXUGhwelAsYC9wINDwIAB14BXgIEAQcGAAMCAQYCAwK7AsgABAAGAsizBQIMC7gCyLQNkAEBAbgCyLIACA29AkIADAJDAAUAAgKWtQbLBRoPBL0CQgADAVkAAAAHApa3AcsAGQ6nbBgrThD0TfTtEPTkThD2TfTtEPTkAD/tXTz9PD/tPP08ARc4FzgxMAFxXQByXQFyKwArMzcBITchBwE2MzI3IQczMANM/VUzBCEs/KuIFWCpAVoz5wPe9eb8HAMC9QABABT+bAODBboABwC+QC8mBzYHRgJGBkYHVgJWB2YGdgZ2B4cGhgefAJ8BnwWXB68BoASnBrYGFAQDAQIAB7gCf7MCKAUGuAJ/QAwoAxACEgSoBQGoAAC4AamzBwkFBbgBqUAeBgkGBwcfAgMUAgIDBwdwAgkGBvoJAwMJAgIZCAlduQEaABgrK3pOEPABGC8rPAEvK3pN4QEYLyt6EOEBGC+HLit9EMQrehDgARgvK3oQ8AEYL+QQ5AA/Pyv9ADwrEP0APBA8EDwBXQUHIQEhByMBAiot/hcBhwHoLtv+1b7WB07f+mcAAQCg/+cCTAXTAAMAKkAJAgEAAwAKAfgCuAMasgP4ALwBIAAEAGgBlAAYKxD27fbtAD88PzwxMAUDMxMBg+PJ4xkF7PoUAAH/jv5sAwAFugAHAL9AKSoCLAc4A0kCSAZoBngGiQeQAZcDnwSgAagCrwSoBqgH2AcRBAMBAgAHuAJ/swIoBQa4An9ADigDEgIQCI0EqAUBqAAAuAGpswcJBQW4AalAIQYJBgcHHwIDFAICAwcHcAIJBgb6AwlPAgECAgkDA9gICbgBGrFdGCsrehDwARgvKzwBL10rehDhARgvK3oQ4QEYL4cuK30QxCt6EOABGC8rehDwARgv5BDk7QA/Pyv9ADwrEP
0APBA8EDwBXRM3IQEhNzMB6C0B6/55/hUv3QErBOTW+LLgBZgAAAEAcwK0BDgF0wAGAGxAFTQAOgNgCIAImwGUApsElAawCAkEBrgBLbUFcEABAwK6AygAAQMoQBkFIANJIAQwBAIE+AUASS8GPwYCBvhgBQEFugMIAAcDB7FnGCsZEPRd9F0Y7RkQ9F0Y7RoZEO3tABg/Gu39PDEwAV0TATMBIQMDcwF43wFu/uTGxQK0Ax/84QHp/hcAAAH/7f5rBH3/IQADABy5AAEBFkALAA8CRQUATgQ6LBgrEOUQ5AA/7TEwAzUhFRMEkP5rtrYAAAEBEgSwAqYF2wADACa2AgNwAQADArgBVEAKA/gBSQAZBPtnGCtOEPRN7fTtAD88/TwxMAEhEyMBEgEUgMIF2/7VAAIAXP/nBEQEPwAjADEBjUBkEgQDBiAAMwMLEkYaRBtJJ4gaiSe5Gska2xrbJ/YmCxYZAS8z1gPoL+gw4DP8K/Uv9TAILQAkGToSWRJpEmgZexm5ALca2gLVGNYZ6BrqJ+soD1gZATEkHAMeDgcRAQAh/yQBJLgBFkAaHDAcQBxwHNAc4BwFHCEODwoDLRMtAvAtAS24/8CzMhk/Lbj/wLMoFD8tuP/AsyMSPy24/8BAFh4PPy0nFAsAAB8AAl8AbwDQAPAABAC4AyRAKSFAMhk/IUAoFD8hQCMSPyFAHg8/DyEB/yEBISoEBwdEHx4BEB4gHgIeuAJFQBYMUhEzAA8/DwJwD8APAiAPMA//DwMPuAI/tjMAJA8BAQG4AkVAGCpEF0A/NV8XATAXQBfPF98XBBcZMkLKGCtOEPRdcStN7fRx7RD0XXFy9O38cXLtAD/9XXErKysr5F1xP+0rKysrXXE/PBE5XS/tcRESOQEREjkSFzkxMABdXQFdAHJxAXFDXFhAEAFACjkAQAo5AUAJOQBACTkAKysrK1kBJTY2MzIWFRQGBwYVFBchJicGBiMiJjU0Njc2NzY1NCYjIgYBBgcGBwYVFBYzMjY2NwHn/ugw78XNxBEzKhj+6REEP6RThKy9881FEkpJTVkBARor2EIvSDtBczkWAvkYjqCldzBs5r5MRFM6PkZLrYiYthMRGDwkLj4//rwHBhoyJD0yRT9jaQACAEr/5wTQBboAEQAfANNAbUkQASYAJAsiDCEQIRIhEyAUJBdBElcDqADoAQzQIQFEBJkIAgMQERAPEQIQFQ4DHAYDEAISEhECAiABABQBAQACAQAcKgYHFScOCxEACgEAEBkkCUAqMDSvCc8JAq8JAV8Jjwn/CQMgCfAJAgm4AUu0ICEBIQK4AiCyEXcBuAKXt08AAQCOIAAXuAG5sU0YKysQ9l3k/eQQcfZdcXFyK+0rEDwAPzw/7T/tPzyHBS4rfRDEARg5LxE5OQAREjkREjmHCH0QxA7EMTAAXQFxXQBxMwEhAzY2MzIWFRQCBgYjIicHExQWMzI2NjU0JiMiBwZKATIBIGdPiVGoyl+fr2DkayV+eVJIg1d1U2xOawW6/hY8M+HXkf7vq1PIrwGxa4Zo5HRxgWSJAAEAe//nBIQEPwAbAK5AOBsbASoMWQJZA2sCawN7AnsDegVwHYgbrwCpAqoDrBqpG7IBtQK3EbAdEysKOAJIAgMBGQAOEg8AuAIiQAsZJwQLHw8Bvw8BD7gCIkA3EicLBwAkAT8BbwECDwEfAQL/AQEvAT8BTwGPAZ8BvwEGAQ9EIA4BDhodFiQgB/AHAgflHClWGCsQ9l3tThD2XU3tOV1xcnIv7QA//eRdcj/95BESORESOTEwAV1dAXIBBQYEIyImNTQSJDMyFhcFJiYjIgYGFRQWMzI2AyIBF0X+8rLL7pABIq284A7+7wpZSFOQTVxFRYABkC26wvLUrQE0scSgHVlUfPdsXmZkAAACAHn/5wVZBboADwAcAMpAjyUbTwNDDE8UShhKHHgcBzsKOhg6G1gVUB6AHqYAqhbHAOcA6A73APMD+Q
oOKgooFXcScB4EFQMgHgINAgEBDg0aCwITBQIXDQMOAQAPDyAOARQODgEAAQoTJwULGicLBw4PAA4QAFIBkAEBIAFwAYABA6ABsAECIAEwAVABAwEeECQgCPAIAgjlHQEXKc0YKysQ9l3tEjldXXFyL+0rPAA/PD/tP+0/PIcFLit9EMQBERIXOQAREjkREjmHDhA8xDEwAXFdXQBdISE3BgYjIiY1NAAzMhcTIQEUFjMyNjY1NCYjIgIEJ/7wGE6XX6XNASDt1mtzAR/8N3NUTodRfFF+onNLQeHc/wGcqgIl+/ZygGvcZXOO/ugAAgB3/+cEcAQ/ABcAIAHAQIkSiwmIDYQO7QkEEwgQCRYKAxwIGgofCxgRGRUTHxIgWQgIihCHH70Jtg7MCdsJ4wgHCwcKEhABEAIQAxAEEB4QH0QfRSAKVg1mDXoHeBJ4H4kHlSCjCaUNvAe/CM8I2QjQCtoL6QfvCPoH+QjyFPsc9B8WKxGIEgJXH2cfogoDIAgYvyABjyABILgC70AXAAEQAVABAlABASABAQEGEh0nEwcJzAi4/8CzJSg0CLj/wEARGR40UAgBTwgBIAjQCPAIAwi4AkdAKxIGJwwLCHwJzB0YjRgCHRgBGEQgFgEWgCIgJIMBAQEkIA/wDwIP5SEpVhgrEPZd7XHtEPZd7XJx9O0AP/1DXFi5AAb/wLMyGT8GuP/Asy0XPwa4/8CzKBQ/Brj/wLMjEj8GuP/Ash4PPysrKysrWfRdcXIrK+Q/7UNcWEAZHUAyGT8dQC0XPx1AKBQ/HUAjEj8dQB4PPysrKysrWRE5XXFyLzz9XXI8ARI5MTABXV1dcQBdAXIAcnFDXFi5ABD/+LIKOQm4/8CyCjkIuP/Asgo5Cbj/wLIJOQi4/8CyCTkJuP/Asg45CLj/wLEOOQArKysrKysrWQEhBhUUFjMyNwUGBiMiADU0NzYhMhYVFCU2NTQmIyIGBwRc/TABeVePUAEBS/uc1v70eaUBMsPm/voBallZjRkBuhEJaoKUK5ubAQ/f2qrm8dloXBMKdnaGgwABAG4AAAPEBdMAGAFJQJYmAiYFIAklFjYJNRVGAkYFZwNqDHgJegyMDIAarwCtAasMsACwB7kJwADIAscDxwTIBcAH0ADYAtcD2AXQB9gJ4ADmA+YE4Af/AfAHJiQQNRVGAkMQRRUFEBEPEwFfEwH/EwETQCMSPxMnDgEYAgMXAgMDGAgFBAQJERAAFwMDIAQJFAQECQEGQB4PPw8GAf8GAQYnAAe4/8BAGSMSPwAHAQcGAwQKCQQQABABrxDQEOAQAxC4AkeyAMwBugJJAAICIrUDUgQHzAa6AiIABQIiQBc/BF8EAi8EAQ8EkAQCIATvBAIEexkEF7oCuQLFABgrKxD2XXFxcvT05BD95PT05F1xKxA8AD88P3ErPP1dcSs8hwUuKw59EMQBERI5BxA8PAcQPAc8PAAYP/0rXV1xMjlLsCVTS7A9UVpYsREoADhZMTABcV0BByMDIRMjNzM3Njc2NjMyFwcmIyIHBgcHAzYsy7L+4bKgLKAXHBkih22EkjhnQDEYEBEQBCbS/KwDVNJthi9ASzDNIx8UVUsAAAIAQP5RBPsEPwAlADQBMUB/IDYBJhcoMDEAMwE2I04NSyJOKocXgy8KGQ8mAQInKyguOBM4GTkrOi5HG0YeSCxYGFA2dyhwNoQehyiKL4A2rS6oL8cb8DYVLRMpMgIYDAsLGQEFAAwYDC0YAxkLGhsbIAsZFAsLGRsLKQ4aGQYwKhUHKScOChAAIAACIAABALoDJAAF/8CzKBQ/Bbj/wEAOHg8/BSohDws1KBkLEBq7AiIAGwAZAkpAHBtBAAsvC5ALAw8LIAvACwMgC1ALoAu/C/ALBQu4AcVAFzYBJACOJiQgEfARAhHlNQsXCzUKjDAYKysQPCsQ9l3t9u0Q/V1xcu3kEOQrEDwrEMA//Ssr5F1xP+0/7T88ERI5OYcOLisFfR
DEARESFzkAOTkREjmHDhA8xDEwAV1dAHEAXQFxFwUUFhYzMjc2NzY3NwYjIiY1NBI2MzIWFzchAw4EIyImNTQBFBYzMjY2NTQmIyIGBwZDAS8eQzpbNykaEBkKlZ+cyIj2gGqwMSgBDbgtPlB6pmHh5gFVbVJLjUt6UE2BKiA4LC4vISQcPieDMnzc1LsBOKBwZbz8i9W2bEkgmrMTAhh5f3LSYnGKen9iAAEAVgAABLoFugAaASFAwSAcARkFAa8JqhLcCdcM1xDYEewSBxQEASUBJQwlDyQaNgE4AjYINgw2DzYaRghGDFcEVwxXGnkMeg+FCqcBqAmnDKcPqRGpEqcatQq2DLYPxwDHAcgCyQnHDckRyRLXANgM1w3bENoR2xLqEOoRKwQaAAADEA8ODhELDA0KGAMACg0NIA4RFA4OEQMAACABAhQBAQIDAgAUKgcHDQ4OAQAKEQ4CARANUpAOASAOzw7vDgMPDi8OPw7/DgQgDn0OAg64AjJADABSTwEBAY4bDhcBF7gBubHKGCsrKxD2Xe30XXFxcu0rEDwQPAA/PDwQPD/tPzyHBS4rfRDEhy4YKw59EMQBERI5hw7ExIcOEMTEBw4QPDwxMAFdAHJdAXJxISEBIQM2NjMyFhUUBwMhEzY1NCYjIgcGBwYHAXX+4QEzAR9uYahggZYcgv7hhRVCOUhCVi0ZKQW6/fZMQ5R4QIb9kwJ5Zx01QDJCYDXEAAACAFIAAAKjBboAAwAHAP9AgiAJSAOQCaAJ0AnwCQYWARgDAg8CDwMCqAevCcgExwbHB9cG7QLtA+gECSMEIwUvCTYGNgdWBVcGgAmWBpYHpQalB+gDDQgCCAMACQOwArADwALAA9AC0APvAu8DCAIFBgYBAwQHBwACBQEGAwQABwEGBiAHABQHBwACkAOgA7ADAwO4/8CzFxk0A7gDJUAOBAEAAAQFBgYHCgAHEAG4ApeyBlIAuAKXQA8HQD81kAcBTwcBB44IBxe4ArmxzRgrKxD2XXEr5P3kKxA8AD88Pzw/PBD2K3E8hwUuK30QxAEREjk5ERI5OQcQPDwHEDw8MTAAXQFxXQFdAHEBcnEBIQMhByEDIQGEAR82/uEeAR/e/uEFuv78kPvaAAAC/yD+UQKmBboAAwASAWdArxYBFgIaEBkRBE0JfQnAA8cR0ALQA+8C7wMIsAKwA8ACAw8CDwOgAqADsAKwAwYvFDYGMAwwDTYSRQdCEWcGeRGVBZoQqASoBagRqBKvFM8M6ALpA+gE6BL9EBaHBYkRgBQDRwZXBFcFAwgDDBEaECAURwakAqQDoBS0ArYD0BQLAgUGBgEDBBISAAIFAQYDBAASAQYGIBIAFBISABEMEgwNCgIAA5ADAvADAeADAQO4AyVACQQBAAAEBQYND7j/wEAbExc0EA8BgA8BIA8wD0APAw98Cg8SEygAEhABuAInsgZSALgCJ0AKEg3MLwwBPwwBDLgCIEAP/xIBTxIBEo4TCRIXEhMKuAKhsc0YKysQPCsrEPZdcQH0XXHkEOT95CsQPCsQwAA//V1xcisyPzw/PBD2XV1xPBESOQEREjmHDi4rBX0QxAEREjk5ERI5OQcQPDwHEDw8MTABcV1dXQBxXV0BcgEhAyEHIQMCBwYjIic3FjMyNhMBhgEgNv7gHgEgvVA5Vr52Yi1FLzpGOQW6/vyQ/H3+f1J/IO0RWgEPAAEATQAABOoFugALAcZAsRJIAUkDSARJCG0EjQSHCwdHA0cJZQZ3A5MDlQamA7QDxQPVA9oJ2grjA+cJ+gn6ChAGAwkJCQpDA3IDggPbCdsKCBUDnAkCJQAnBiMHIwgvDTYGMQcxCEYGQwdBCFcDVgpmBWYGagdoCIYGgA2WA5kHmQinAKgBqAanCrcDuAfHA8gHyAjXANoB2ALZA9kE1wXVBtkH2QjWC9AN6AHnA+UG6QfpCPcF9wb5CPYJ9wo0BLgCdkBZBQYGBwkKCQgKBQoKBQMDBA
ILCgkGAwQECAILCyAAARQAAAEDBAMCBDcFChQFBQoJCAkKCCAHBhQHBwYGBAYHCQsHCAgLAAoFBAYCAQABABAIUh8HAZ8HAQe4AkhADh8FAY8F/wUCrwW/BQIFuAJFtk8N3w0CDQK4ApdACgtSTwABAI4MABe6AbkBRAAYKysQ9l395BBd9F1xcvRdce0rEDwAPzw/PD88PBA8ARE5ETkAETmHBS4rCH0QxIcFLhgrCH0QxIcFLhgrfRDEABESFzkHCDwIPIcIEMQIxAEYEO0xMAFdAHJxXQFxQ1xYuQAD/+BAEgw5BxAQOQQQEDkIEA85BBAPOQErKysrACtZMwEhAwEhARMhAwcDTQEzAR+gAXYBdf5V4/7hksJDBbr9BQFn/n79XAHrrP7BAAABAFAAAAKhBboAAwB7QBMABSAFkAWgBdAF4AXwBQewBQEFuP/AQD1DNS8FVwOABa8FxwDIAccD2AHXA+gB9wMLAgMDIAABFAAAAQIBAAMACAEQA1KwAOAAApAAAU8AAQCOBAAXuAG5sc0YKysQ9l1xcu0rPAA/PD88hwUuK30QxDEwAV0BK3JxMwEhAVABMwEe/s4FuvpGAAEASQAABvMEPwArAfKxEi24/8CyQjUtuP/AQP8/NVAtYC0CJgiuDK0VqSHcDNgP1xQHLy1QLWAtwC3gLQUlDyUSJRslHSYpJis3ADkHNg82EjcbNilFC0cPRxJHG0ceVgtXD1cbhwKFC4kggC2pAKgMpg+mEqkVphuqIKcrry21C7MNtg+2G7YesC3IAMkMxxDJFcYcxh3JIccqxyvYD9cQ2RPaFdcc2iDbIdcq6ADnEOkV5xzrIOkh5yr5APgMQYYBzy0Cby0BAgIDKSoqAQ4PEBANExIRERQHIwInKQMqAQ0QECARFBQRERQbHBwgHR4UHR0eASoqICsAFCsrABARERwdHSorChcqCgojKgQHAQAGFBEeHQArEBBAClIvEf8RAs8RARG4Ar1ACxxSLx3/HQLPHQEduwK9ACsAAQJIsipSALgCSkAUMCsBYCsBTyuQKwIrjiwRFx0XKxe4ArmxyhgrKysrEPZdcXLk/eQQ9F1x7fRdce0rEDwQPBA8AD88P+08EO0/PDwQPDwQPIcFLit9EMSHLhgrDn0QxIcFLhgrDn0QxAEREhc5ABI5hw4QxMSHDhDExIcOEMQIxDEwAV1dXXEAXQFyKytDXFi5AA//6LIQORS4/+iyEDkNuP/oshA5Grj/8LIQORu4//C1EDkgGBA5ASsrKysrK1kBIQc2MzIWFzY2MzIWFRQHAyETNjU0JiMiBwYHAyETNjU0JiMiBgYHBgcDIQEnAQ4bnq54ghI3zm9/jhyI/uGIGTU1a1Q9K2X+4YYXOTIvZEsbDBtm/uEEJoGaZFhQbIhwN4f9dwKJehAsM3FRzv4eAoJvISo2OGJNJH/+GAAAAQBWAAAEuwQ/ABkBT0CiEggGIBtQCAMZAgGuENwH6QfpEAQlACQKJg0lFzcKNw02FDYXRgamCqYNphWyCLYKtAu0DLcNyAfHC8oQxxbHGNsQ6ADoD+oQGtgK1wvZDgOqDwGsB6kJqQ4DeQd7D3kQA0YKRw1XCgMCFxgYAQ4NDAwPCQoLCBUBGAgLCyAMDxQMDA8BGBggGQAUGRkACwwMGBkKEioFBwEABg8MABkQC1IMuP/AQBxHNZAMAe8M/wwCIAzPDAIPDC8MPwwDIAx/DAIMuAIyQAwYUk8ZARmOGgwXGRe4ArmxTRgrKysQ9l3t9F1xcXFyK+0rEDwQPAA/PD/tPzw8EDyHBS4rfRDEhy4YKw59EMQBERI5hw7ExIcOEMTEhw4QxDwxMAFdXV1dXV0AXQFycUNcWLkACP/wshA5D7j/2LIQOQq4//CyEDkNuP/wsRA5ASsrKytZASEHNjYzMhYVFAcDIRM2NTQmIyIGBwYHAyEBNAEQHGaxYoOXIX3+4X4cQTo/miwgKFv+4QQmilhLlnw4nP
2nAluHGjc/alc+v/5MAAIAfP/nBMsEPwAMABoAnkAu0Bz7GfkaA8sZyhoCKQIoEkcSSRlQHGYEeBmIGfAcCRBAMhk/EEAoFD8QJwMHF7j/wLMyGT8XuP/AQCwoFD8XJwkLDSQGQCowNAZARzUGQEU1rwbPBgJfBo8Grwb/BgQgBlAG8AYDBrgBS0ARIBwBHBQkIADwAAIA5RspTRgrEPZd7RBx9l1xcisrK+0AP+0rKz/tKysxMAFdAXJxExAAITIAFRAAISImJgE0JiMiBgYVFBYzMjc2fAFNAQ/rAQj+tv7rluhyAzJ3Xl2QUHxedlByAbkBJwFf/v3b/v7+iHjcAUtme3PbXHKGZY8AAv/1/msE2AQ/ABAAHgD1QIlNDUgSRhZFHGYSBTQLRw9QIGYScCCYAJsPmxCoAKYPtg+zEskAxw7HEOcPEIwPjBACJAokFgIIAAkDDxNmEdAgBQIRDg8BAhsBDhQMDhECAxABDw8gEAAUEBAAAQAGGyoFBxQnDAsPEA8YJK8IzwgCCEAqMDSPCP8IAl8IrwjPCAMgCFAIcAgDCLgBS7cgIAGAIAEgAbgCl7IPUgC4ApdADgAQAdAQASAQgBACEBkfuAGWsVYYK04Q9F1xck3k/eQQXXH2XXFxK3LtAD88P+0/7T88hwUuK30QxAERFzkAERI5ERI5hw7ExDwxMAFxXV1dAF0BIQc2NjMyFhUQBwYjIicDIQEUFjMyNjY1NCYjIgYGAScBEBdXmFeny6+WzdNsc/7hAd95UkeEV3JYUodKBCZtSD7m5P7hxqmr/dkDWnyJZ+9md3914wAAAgB6/msFAAQ/ABAAHgFEuQAP//CyCTkWuP/4QJ4JOQQgDzkJFikWICBIBFgEZQJlA9AgCCcBJwIpBCkMKRYpHTcQNBY/IEkETRl/An8DfAR5BXwIfAl8EngXcCCOAosDjQSJBY0IjAmNEogXgCCdApsDnASYBZ0WpwKnA7cCtwPHAscD5wIpLQQnEC8WPhAEEBAPGBcEAwMABBQHBBgQAwADAQICIAMAFAMDAAIDDxQnBwsbJw4HAQAGAbsClwACAAD/wEAXQzXgAAEAAGAAwAADQACgAAJrAOAAAgC4AoVALAJSA18DAQ8DAZADwAPQAwMgA5ADoAOwAwQDIB8RRAAKASAK8AoCCuUfKTAYKxD2XXLtERI5XXFxci/t5F1xcnIrEOQAPzw/7T/tPzyHBS4rfRDEARESFzkAERI5hw4QxMTECMQxMABdAV0BcQErACsrASEBIRMGBiMiJjU0EjYzMhcBFBYzMjY2NTQmIyIGBgP4AQj+zv7hZ0yGVanMivWP5mf9unRSSYhVelBCiVcEJvpFAe8+NeLYqwFEr8D+J2t+Y+Rpd4do7wABAEIAAAPLBD8ADwDVQFclAiUKNAnHA9sM7AwGUgMBJQAmDS8RNw1LB1cLXxFvB3AGhwHIAMcNxw/XDecNDw0DFgKbB5cOBFUHlgcCAgIDDQ4BBgIJBAINAQ4OIA8AFA8PAA4PCgm4Aa5ACQQHAQAGAA8QBrj/wEAJPTVQBgEQBgEGuAJFQBEgBwFgB4AHAgeOrxEBEQ5SALgCSrdPDwEPjhAPF7oCuQEWABgrKxD2XeTtEF32XXLkcXIrKxA8AD88P+0/PIcFLit9EMQBOTkAERI5OYcOxAjEMTABcnFdAHJdASEHNjMyFwcmIyIGBgcDIQEgAQwrlq8+R24nLEqZVipG/uEEJs7nH+sOcLvL/rMAAQAt/+cEagQ/ACwBl7kAI//gQAkeDz8YIB4PPyW4//iyCTkjuP/4QIQLORAuIC4wLkAuUC4FNgtHDHcj2w3VJegNBgcjBSUWIxMlJCVGCkUjBwYCCRrQLgMgLjQiNiVFIkclQC5dAFsOVBVQFlIXVyRUJl8sUC53K4Yrmg6qFaAroCywLssVyxbAK8Qs+RobDg0MCwoJICEiIyQlJg0UKgAEARcfGAFfGG8YAhi4AyRAEhIcKh
QHEAEBUAFgAaABwAEEAbgDHEAPEgQqKgsYRC8XPxdPFwMXuAJctQgk0CcBJ7j/wEAjCzUgJzAnQCcDJ44uH0QQASQvAAFfAG8AAlAArwDPAO8ABAC4AiJADE8QAQ8Q0BACIBABELgBpLMtQlYYKxD2XXFy9F1xcu0Q7RD2XStx7fRd7QA//UNcWLkABP/AszIZPwS4/8CzLRc/BLj/wLMoFD8EuP/AsiMSPwArKysrWeRdcT/9Q1xYQBQcQDIZPxxALRc/HEAoFD8cQCMSPwArKysrWfRdcTkREjkREhc5MTABXXEAcV0BcgArKysrEyUWFjMyNzY1NCcmJyQnJjU0NzYhMhYXBSYnJiMiBhUUFxYXFhcWFRQGIyImLQEWJW5fYjspFRZh/vw+YV6DAQLN0hz+9xUvQFpaTygZiNJHZP7n5vsBKSxYSSweKx0XFiBWMk5+flt/loAuOh0nPCcoGhAoPTtTdZfeqQAAAQCa/+cDIAWWABkBkkDKTwOPA98DA98DARcEAQEBDwc2BHsEhQSECIUZkgiUC9sECigDfwOKA54DyAPGBNkD5AQIJQIlAyAEIAUhBiEHIggmCSAQJBEtFSYYIhlFBEcJWANYCVkVWBh4CXkYgAaAB4AQgBGAG5MElgiWCaAEpAilCaYYpBmwAbsHtwi/EcQBxwTEBsQQ0wHWBN8H2QjfEdkZ3xvgAeQE4QboCOAQ6hnwBjgJCwsEGBYWAxkDFgAHCAQLAAcFBAsBBgIDFgEGCQsEDgoOERATA7gCSEApBAQLBAMLIBYDFBYWAwcPAAH/AAEAJwYBBgAQAfAQARAnEwsRzLAQARC4AkdADQbMB5AHAf8HAbAHAQe7AlwAGwAIAkVACQsBzJ8AAQAzGbgCRUARCySQFrAWwBYDIBbAFvQWAxa6AkwAGgEVsc0YKxD2XXHt5PRy5BDkehD4XXFyGC/09HHkAD/tXXE/PP1dcTyHDi4rCH0QxAAYL+QREjk5KxESOTkPDw8Phw59EMSHDhDEMTABXQBdAXEBcgBycRM3MzclAzMHIwMGFRQWMzI3BwYjIiY1NDcTmiyMIwFJTa8ssF0ZKjcTTC1KTpiKJVkDUdWqxv6Q1f5DeREhJQfVD3VoMbEBqwABAJD/5wT0BCYAGgE6QGgCCBA5Owk9FE0JTRSkBQUTChELAggYGAAZGCgAKBggHHAcoBzQHAkAHAElDCUROBpIGlUOVBFmDWYOZBF3AnUOdBGGEcQFyBDGF9YC3xbXGvAcFA4NExITFBIPAwQFAxATBxUTDg8SEb7/+gAO//oABP/zABj/8EBCERAQIA8SFA8PEgEEBCAYABQYGAAAAQEPEAYREgoHKhULGBsoDxIAGBARd38SzxICHxI/Ek8SAw8SASASfxLvEgMSuAHYQCEEJAAYAZAYoBiwGMAY0BgFIBgwGAIYCRIXGBcYGwpRVhgrKxA8KysrL11xcgHt9F1xcXHtKxA8EDwrEMAAP+0/PD88PBA8hw4uKwV9EMSHLhgrfRDEATg4ODgREjk5ABESOREXOYcIEMQOxMQxMAFdAXJxAHJdASsBIQMGFRQWMzI+Ajc2NxMhAyE3BiMiJjU0NwEuAR+DGEQ1JkpOPhsTFmoBH97+9B6wz4OWIgQm/YtxHC5CHTtMPi5oAfr72pCpln43pAAAAQCZAAAE8gQmAAsA/rkAC//AshI5ALj/yEBVEjlSBgEADV4HVQoDJgAjASsFMwFJAEMBVgBUAWQBcwF5CIMBngCbAZ8CnQOZCZkKmAuQDakAsA3JCsgL0A0ZAA0TAXYAeQuZAJoBqgCrAckB2gEKBrgCdkAaAQkIBwYKBAUGBgMACgYBAwsKCgMCBgABCgq4AvhADh8LPwtfCwP/CwEgCwELuAJFQBNQDWANcA0DDQJSHwOQAwLMAwEDuAJFt1AGYAZwBgMGvAHPAAwCvwFBABgrGRD0cfRdcRjtEHH0XXFy7QA/PD88PBA8ARI5ERI5hw
59EMTEhw48xMQAGBDtMTABcV0BcgByASsrISMDIRMWFzY2NwEhAmX31QEaSx8GCHsOAQgBNgQm/jG8PhHhFwHAAAABAJMAAAa5BCYADAGJQKMSAAw7AAw7AAw7AAw7RAcBKAAvBCwGLQcqCCQMIA41ADYEMQU5CjgLPw5FAEsLWwdRCFAJVAprAmkDagdpCn8Eegd4CH8OhAOJBIYHnACcBZwGmQuoBqkLoA6wALMBtwe7CrkLygTOBccGyQrGDNwC3AXcBtwI3AnkCjUBAAMBCwULBgAKFgEZCyYANgBCAEUGRAdKC0AOiQuDDNMKEQIFBAcKuAJ2QBcBAQAAAwQKCQgIBgsMDAYFBggJAwcCALsCJwABAAwC+LaACwHgCwELuAIgQBcKAFJvAcAB4AEDAczfCgEKm18JbwkCCb4CSAACAAQBowAHAAUCokAKXwZvBn8G7wYEBrgCXEAaAAeABwIgBzAHQAdwB5AH4AcGB9cgAj8CAgK4AvmzDVHNGCsQ9F39XXH0Xe0Q7RDtXe1x9F3tEPRdce0Q7RESOREzAD88PBA8EDwQPD88PBA8EP08ERI5MTABcV0AcSsrKytDXFi5AAr/4EAKFzkIGBI5BxASOQErKytZISEDASEDMxMBIRMBIQSo/vI0/rz+9oX/SgFLAQUkAUkBIAKk/VwEJv04Asj9OALIAAAB/9MAAATmBCYACwIMQNkSVAABAA0QAhADEwYTByANUA1gDQgKAwUFCwswDUANXABaAV8IWAlWCwo8A0gDSAZmAGcDigCfAJoDkwYJKQMtBC0FJgckCC0KIA02AjYDNgg2CTANSwBJAUgCTgNLBEsFTQZIB04JWQBbA1kJawBpAWIDYARkBmMLdQR4CXoKdwuIAIMBhwmfAJQBmAmYCpwLkA3mAywDBgYCAwMEBgcGBQcCBgYHCQoJCAoFAwMCAAsAAQsECQkKAAEACwEIAAkGAwQBCAcHBQUEBgoLCwEBAgoJ5AaECgEKugJaAAsCR0AKIAABAOQDhAEBAbgCWrQChAgBCLgCWkAMYAegB7AHwAfgBwUHuAIntgbpA4QFAQW8AloABAJKAAMBy0AgHwIBDwIfAl8CAyACXwJvAn8CjwKvAr8CzwLfAu8CCgK6AQoADAEVsc0YKxD9XXFy5PTtXRD99F3tXRDtXRD9XfTtXRDtAD88EDwQPD88EDwQPBIXOQcIEDwIPIcIEDwIfcSHCBDECDyHCBA8CMSHEMQxMAFdAF0BcQFyAHFDXFhAEQsQEjkDEBI5BEASOQMQEzkGuP/Qtgs5ACALOQq4//C2ETkDCBE5Bbj/4LIROQC4/+KyEjkAuP/yshE5CLj/2LYROQMYEDkFuP/AQAoPOQsYDzkDGA85ASsrKysrKysrKysAKysBKysrK1kBASEBASETASEBASECO/74/qAB6/7yATGbAQ0BXf4TAQz+zwEq/tYCIAIG/tEBL/3Z/gEAAAEADf5RBPcEJgAUAZ1AphIAFkYIhQjgFvAIBR8UQBYCJQIlBCYJNQI1AzgIOQlGAkQDRQRLE1YCUQNYB1kIUBZqAWoJbxR2AnYDdwV5E4cCgAOGBY4ThRSTApMDkgSXBpEIowKlA6oHtQK0A7QEtgW1BsYCxQPFBMYFxQbXAtUD1QTlAuUD5QTmC+gT9gX0BvcLOT8JhxOQBAMGBQQHAgMEBAEJdxAPEg0J5IAEAQR3FAoJCAi4AV9AJwcEFAcHBAQBBAcBIgAUFAAAFAgHBwEABhInDQ8PM0AQgBACcBABELgCU0AYFAhBgAcBIAcwB0AHUAeQB6AH4AfwBwgHuAI/QCkEAFLgAfABAgF0BOkwFEAUgBSQFATgFAEQFJAU0BQDIBRgFMAU0BQEFLoC7gAVAsexrRgrEP1dcXFy/fRd7RD0XXHtEPZdceQAP+0/PDwQPIcFLisIfRDEhwUuGCsOfRDEABg/7XHtERI5MgHthw59EMTEhw7ExDEwAF0BXXJxQ1
xYuQAG//CyEjkHuP/gthI5BSgPOQm4/+C1DzkUEA85ASsrKysrWRMhExYXNjcTIQEOAiMiJzcWMzI3qwEeSBsDOGL+ATD9cFFdg1xbchk0MYRVBCb978xSo7UB1/tykXRCINYPyAABACIAAAQmBCYACQETtQYgHg8/Abj/4EBZHg8/BQEFAnAGcAcEkgGgAQImAkwATwFPA08ETwdPCE8JXwJQB1ALaAJmB2ALcAuNAI8BjgKKB5gHqAKoB78AvwK/A7AHvwnMAMsDxAbAB8sJ0AbQB+4AIwe4//pACQIGAwJAHg8/Arj/0LUXOQInBAa4/9C3FzkGJwUGCAe4/8BAFh4PPwcoFzkHJwkBJwAKCcwfCD8IAgi7AkUABQAC/8CyDjkCuAJ2QA0GMwUaCwTMjwOfAwIDuAJHtQAHUA45B7gCdkALAcw/AAEAGQqmMBgrThD0cU307SsQ9F3kThD2TfTtKxD0ceQAP+08/SsrPD/tKzz9Kys8ATg4MTABXQBdAXEAKyszNwEhNyEHASEHIikCRv4lLwNBI/21Af8xwgKE4Kv9begAAQBS/lED6AXTAC8BDkBBNRcBIhk5FTsoRhlJKFQXVSxqCWQXZCx5CXUZiQmEGZANnyWpFbABtQO5FMQDyRTWFhcJCCQZPxY/F0ovBRMYAQG6AtcAAAEQthgYJE8OAQ64Ate1DRFAJAEkuALXQA0lExghHRstJagPJAEkuP/AswsNNCS4AU+zGw6oJLj/wLMREzQNuP/AswsNNA24AsC3LxOfE68TAxO4AlBADwb4LQGoAOcvG58brxsDG7gCUEAKLagvIJ8gryADILgCULNAKgEquP/AthETNCAqASq8ARcAMABGARoAGCsQ9l0rce1d/O1d9OQQ9P1d9Csr5BD0K3HkERI5EjkAP+1dP+1dETkv/e0xMAFycV0AcRM3PgM3PgI3NjMzByIGBwYHBgcGBgcWFhUUBwYGFRQWFjMHIyImJjU0EjU0JlI0SVVILBsmTm1fP4I0M3JKEg4fOBMkYlk7OAwhER04YTQ1iohKRksBmvAEJVl2gLeaVRoR7x0eFX7eMFdtMidvWiNIw0gYGyYS8C1oUkMBPzVXTwAAAQCw/lEBjwXTAAMAP7YCBZIPZDYDuAINQBABAAUXFxoCAgNAAAABkgQFvAGrACEAkgFAABgrK070PE0Q/TxOEEVlROYAP03tMTABKxMRMxGw3/5RB4L4fgAAAf9O/lEC5AXTAC0BDEBLAwUBJhItFzkUORs1JUkURiVXAlkTWRRfG1YlXyliBWIGaRRrKXcDdgV2BnkWeymGAoQFhAaJFospnwqZEZYUkCKmFL8A1hTQLyMtuALXsy8AAQC4ARC2FRUhQAsBC7gC17UKE08hASG4AtdAEiIRFR0aGCoiqAAhASFACw00IbgBT0AJGAuoCkALDTQKuALAtyAQkBCgEAMQuAJQQA8E+CoAqC3nIBiQGKAYAxi4AlBACiqoIB2QHaAdAx24AlBAE18nfyefJ68nzycFJ0ALDTQnri+4Ap+xYxgrEPYrXe1d/O1d9OQQ9P1d9CvkEPQrceQREjkSOQA/7V0//V0ROS/9Xe0xMAFdAXEBDgICBgYHBiMjNzI2NzY3Njc2NjcmJjU0NzY2NTQmJiM3MzIWFhUUAhUUFhcCsFlnSkhObV5AgjU0bE4UDh83ESNsUkIyDCASGTlkNDWKiEpGS2ABmgRAj/6lmlUaEvAeHxV73CxXeyoxZlkkRsBPGxsiEe8tZ1FE/sE1V1AFAAABAEMCBwRpA5wAFgBcsQwYugL9ABIBLEAPNiIALQw2ADQCOgw5DgYAuAMRshXBBbgDFrMMEMEKuAMStAwgDAEMuAF4s+AAAQC8AwkAFwBPAUQAGCtOEPRdTfVdAC/07RD0/eQxMABdASsTETY3NjMyFhcWMzI3EQYGIyImJyYjIkNHVD1UP3SXZDqPgzCnTTNnZZpYjAIHAQNNIxoePimN/v
IyVBotQ////+kAAAVyBwICJgAkAAABBwCOAfcBRgArQBIDAgASAVASYBJwEoAS8BIFEgS4ATa1SCsCAwIRuQI1ACkAKwErXXE1NQD////o/+0FYgbVAiYAJP/tAQcAxwHZAM0AOUAuAwIPFx8XLxdvFwQAF2AXjxefF68X/xcGcBeAF68XvxfwFwUXBG5IKwIDAhoCKQArAStdcXI1NQD//wDC/l0F9wXTAiYAJgAAAQcAyAGIAAYAFLMBASM0uAKDtkgnAQErCCkAKwEr//8AVAAABcUHEwImACgAAAEHAI0B6gE7ABezAQEMAbgCSbRIJwEBD7kCNQApACsBKwD//wBcAAAGGgb+AiYAMQAAAQcAxgHSAU8AG7UBPwsBCwW4AQq0SCsBAQu5AjUAKQArAStdNQD//wCz/+YGRgcCAiYAMgAAAQcAjgHlAUYAJkAXAwIQJVAlAjAlcCXgJQMlBqtIKwIDAiS5AjUAKQArAStdcTU1//8Au//nBh8HAgImADgAAAEHAI4BuAFGACtAEgIBPyL/IgI/Ip8iryK/IgQiALgCorVIKwECAh25AjUAKQArAStdcTU1AP//AFz/5wSKBdgCJgBEAAABBwCNAQ8AAAAfQBECLzIBfzKPMgIyBEFIKwIBNbkCNgApACsBK11xNQD//wBc/+cERAXbAiYARAAAAQcAQwD2AAAAGUAMAvAzATMEQUgrAgE1uQI2ACkAKwErXTUA//8AXP/nBEQF2gImAEQAAAEHAMUBBgAAABtADgJwMtAyAjIEY0grAgEzuQI2ACkAKwErXTUA//8AXP/nBG4FvAImAEQAAAEHAI4A8wAAADBAIAMCkDmgOQIAORA5IDkDIDlgObA58DkEOQSmSCsCAwI4uQI2ACkAKwErXXFxNTX//wBc/+cEZgWvAiYARAAAAQcAxgD4AAAAG7UC4DMBMwS4/vq0SCsCATO5AjYAKQArAStdNQD//wBc/+cERAYIAiYARAAAAQcAxwDmAAAAGbQCAwI+BLj/8rVIJwIDAju5AjYAKQArASsA//8Ae/5fBIQEPwImAEYAAAEHAMgBBQAIABi1AX8jASMZuP/ntkgrAQErCikAKwErXTX//wB3/+cEfwXYAiYASAAAAQcAjQEEAAAAFUAKAgEhEyJIJwIBJLkCNgApACsBKwD//wB3/+cEcAXbAiYASAAAAQcAQwDqAAAAHUAPAjAiAS8iASITIkgrAgEkuQI2ACkAKwErXXE1AP//AHf/5wRwBdoCJgBIAAABBwDFAQcAAAAlQBcCICEBUCFgIZAhoCGwIQUhE09IKwIBIrkCNgApACsBK11xNQD//wB3/+cEcAW8AiYASAAAAQcAjgDPAAAAIkATAwLQKAHgKPAoAigTjUgrAgMCJ7kCNgApACsBK11xNTX//wBSAAADTgXYAiYAxAAAAQYAjdMAAB9AEQFvBAEvBD8EAgQAukgrAQEHuQI2ACkAKwErXXE1AP//AFIAAALHBdsCJgDEAAABBgBDIQAAH0ASAS8FcAWABfAFBAUA1UgrAQEHuQI2ACkAKwErXTUA//8AUgAAAx4F2gImAMQAAAEGAMX8AAAbQA4BLwTQBAIEANtIKwEBBbkCNgApACsBK101AP//AFIAAANXBbwCJgDEAAABBgCO3AAAHrYCAXALAQsAuAEStUgrAQICCrkCNgApACsBK101Nf//AFYAAAS7Ba8CJgBRAAABBwDGASMAAAAVQAkBGwCdSCsBARu5AjYAKQArASs1AP//AHz/5wTLBdgCJgBSAAABBwCNAM8AAAAVQAoCARsDQEgnAgEeuQI2ACkAKwErAP//AHz/5wTLBdsCJgBSAAABBwBDARkAAAAVQAoCARwDP0gnAgEeuQI2ACkAKwErAP//AHz/5wTLBdoCJgBSAAABBwDFATYAAAAbQA4CcBuAGwIbA1lIKwIBHLkCNgApACsBK101AP//AHz/5wTLBbwCJgBSAAABBw
COAP4AAAAXQAwCAwIiA4xIJwIDAiG5AjYAKQArASsA//8AfP/nBMsFrwImAFIAAAEHAMYA+gAAABeyAhwDuP8AtEgrAgEcuQI2ACkAKwErNQD//wCQ/+cE9AXYAiYAWAAAAQcAjQD+AAAAF7MBARsAuAIZtEgnAQEeuQI2ACkAKwErAP//AJD/5wT0BdsCJgBYAAABBwBDATwAAAAbtQGAHAEcALgCDLRIKwEBHrkCNgApACsBK101AP//AJD/5wT0BdoCJgBYAAABBwDFAUEAAAAXswEBGwC4AiK0SCcBARy5AjYAKQArASsA//8AkP/nBPQFvAImAFgAAAEHAI4BOQAAABqzAgEiAbgBNbVIKwECAiG5AjYAKQArASs1NQABAK3+ogTBBaYACwC5QDooBCcKZwZmCXgGhgEGIAEgAkABQAJfAV8CBgAECwgBAwQLBwIGBQoHAgkFCggBCAEBKAIHFAICBwECuAIvQDoLLwQ/BE8EnwQEBCEKBWEHBwgAC7IACrIJBWEGBGEDCQkBAAAIAQYGAgMDBwIIYQEoB2EgAgEC/AySuQENABgrEPZd5P3kERI5LxE5LxESOS8ROS8Q5BDkEOQQ5AA/PBD0PP1dPPY8hwUuK30QxA8PDw8xMABdAV0BAyETITchEyEDIQcDLPb+8vb+jy8BcVIBDlIBZi8DPvtkBJzfAYn+d98AAAIAVgNVAtQF0wALABcAPrOAGQEJvALBAA8BxAAVAsGyAwEGvALBABIBxAAMAsFAC18AbwACABkYm7oYK04Q9F1N7fTtAD/t9O0xMAFdEzQ2MzIWFRQGIyImNxQWMzI2NTQmIyIGVruEhLu7hIS7plo/P1paPz9aBJSFuruEhLu7hD9aWj8/WloAAAIAeP51BIAFtgAgACcBF0B3LAUvDSkSLxcvGCkiLyk/ED8RdQl3C3UMeg17F3ckiReIIoYklgufGKAZpBr5DRcJBgkSBxMIIgAjACQGWRGYDJUXAwkSAQwSEx4fHwsACSEiIAoKICDPHwsUHx8LDxIQDAkiBxMhGQMYAB4VHB8hKSZwGIAYAhi7AkgAFQAQAiJAPiInBwoABwcJBx4LFSccCx8OKRcXGhAPAVAPYA+AD5APoA+wD9AP8A8IDyQQjhkkIBgBGCYkIAMBA+UoKVYYKxD2Xe0vXe32/V1yTkVlROYAPz9N7T8/Pz8Q/e0Q5F0BERI5OQAREjk5ERc5ERI5ORE5OYcOLisOfRDEBwU8DjwFPA48BwUQPDwOPDwxMABxXQFxXSUmJjU0EiQzMhcTFwMWFhcHJicBFjMyNjcFBgQjIicDJxMBDgIVFAFbZn2YASChHiKbeZNngAf8Cjv+8QsFRHwtAQhA/vOuLDSbevIBDGCUSRIv2Ja0ATWnAwF6Mf6ZKahyGl81/WsBYHApuMcI/oYwAk4CiwaK8WlsAAABACr/2wTiBdMAMQDbQBkfHRcZJyAdJAgJDBEUEwMSABcvAw8CEzAquAJQQBEpMEASMQ8xATEFJH8JjwkCCboCkgAMARyyBQEZuAKStS8nASeCHbgBHEAUJAspCxOoEsEPMKgxMecCH4Ig5wm6AlAACAMoQBAzD1cvAgECwSktKuAyRqkYKxD25PRd7RD97fTkehD4GC/kEPTkAD8/7fRd7T/95F0REjlxLzztEO0QPAEREhc5ERc5ABESORESORESOREzMTAAS7ALU0uwE1FaWLkAKP+wOFkAS7AXU0uwGlFaWLEfPDhZASY1NAAzMhYXBSYmIyIGFRQXMwcjBgYHNjMyFxYzMjcXBgcGIyIkIyIHAzY2NzY3IzcBTREBGtq+5w3+/xFhSlyFD/0vwwZcckpKJjxWMm91VlcxQURi/vFWiZ0eZJUgGAnmLwNJVk7RARXWthlrXJeHMlzgcbFnHgsQNfQkCw5KVgEDK5pFMk/gAAACACz+TgR7BdMALwBBAL1ALVkhATYmNTo5QFoPWRBpKZkTlimWK9kJCg
YjKkF7EAMYGRwAAQQoODAQBBUtGbgCqEAkHCcVAQH4BCctEDAfEjk4KAM9ByABAQF8AIINIB8BH3wvEgESuAEJQCYgMwEzfA2uQi8HAQd8ICoBKo09LxkBGXwYLz0BPXwYqCXgQ2gsGCsQ9uTtXRDtXRD9Xe1dEPb9Xf1d7V0Q9O1dERIXORESOTkAL/3kP/3kERIXORESORESOTEwAXFdAF0XJRYWMzI2NTQmJyYmNTQ2NyY1NDYzMhYXBSYmIyIGFRQWFxYWFRQGBxYVFAYjIiYBBgYVFBcWFxYXNjc2NTQmJyYsAQUbaE1ERjfpfGSGhjzKyLXEFf79Bk9BPUcwfNZkhIBU2cC89QG/TzciFpZpCEctHTBUuVc7bltALyJX4HahXHavSGFeiseqqxlYUEY4KVJwwLJaaLpAdWqHwsAEJTFVMTAvHZFlCBs3Iy0mVVG0AAABAEIBrAKRA/sACwA6swANYyG4ASyyNgANuAL9QBEQIDYJcABwBnADA3AAcAlwBrgBoLMMT2MYKxD2/f39AC/t/e0xMAErKwEUBiMiJjU0NjMyFgKRrXp7ra17eq0C1Hutrnp6rawAAAH//v5tBGkFugASAIS1LwwBAAEQuAHstALtCw0SuAGstwwLAAKfAQEBuAGsQA8SADAAAQATFA0MEZ8QARC4AaxACw8PXw5vDgIvDgEOuAFPQA0MqRRvBQEvBQEFRhMUuAF8syE6ZxgrK07kXXIQ9k30XXI8EP1yPBA8ERI5cS88/XI8AD88/TwQ/eQ5OTEwAV0BIxEmJjU0NjY3NjMhESMRIxEjAnvzvsxSi144owJVce2Q/m0EFw7YsmmvZhQM/vv5uAZIAAEASP/nBKcF0wA1ALVAMg0SASUCKAQnFiU1RwBIK1YAVgFYG50bnRyrJcoDDQkbgBmAHAM1AAAgAQIUAQECGxgcuAMmQA8fMScGAQEKHycYCwIBEBy4AkhADht7ACgkjw4BQA7gDgIOuAJKQBguJAmPIgEiJAmVDxQfFAIUQBEXNCAUARS4AnuyNwEXugGvAUQAGCsrEPZdK3Hk7XEQ7fRdce0Q9uwrEDwAP+0/P+0Q5BI5hwUuKw59EMQxMAFxXQBxISETPgIzMhYVFA4CFRQXHgIVFAYGIyImJzcWFjMyNjU0JicmJjU0Njc2NjU0JiMiBwYHAWf+4ckqX+mzubgwkCEMFJQsXapqbsM8zB04JTREGzRaNi9bLxc1Llw8KS0Dwca+jqB3OWuqPRgZGSbGbT9ar19tVnMqIUQzITxCcHcxLmx6PzQaKTNaPdMAAAT/9//cBfMF2AAPAB8AOABDAQ5AS0VACxE0OQI0EjQWOxo6HksuSzRbLls0ZhJmFmgaaR5rM3szmjCpMMY01DT5MxRGJoo0Ajw0KwM4Mj4xHCg0MjEDOCs3PEM6ISIQJLgBEUAMD0MBv0P/QwJDnQA3uAERt2A4jziwOAM4uAJYQBM6OggQhQADGIUICRyFMASABAIEuAEPQB2ARQFFDz5PPgJvPn8+/z4DPl+AKAFQKI8ovygDKLgB1kASIEMAOEA4AmA4cDjwOAM4XyEguAJSshSFDLgBD7NEfzIYK04Q9E3t9Dz9XXE8EP1dce1dcU4QXfZxTe0AP+0/7RI5L+1d7RD9XXHtETk5ERI5EjkRFzkBERI5EjkRFzkxMAFxXSsBMgQSFRQCBCMiJAI1NBIkFyIEAhUUEgQzMiQSNTQCJAERMzIXHgIVFAYHFhYXFhcXIycmJiMjEREzMjY2NTQmJiMjAvXFAWrPy/6VyMj+lcvPAWrGnv7ep6MBJKChASOkp/7d/gel6BxSWz1zaCUoIwkzYMxERVpFK0OPRSgnSI1DBdjF/pDJyP6Vy8sBa8jJAXDFlp7+2KKh/tykpAEkoaIBKJ77/gMtAgczaUBYfQ8OIS4MV6SEhUX+sgHOFjcjIjUXAAP/9//cBfMF2AAPAB8AOgC3QE48QAsRNDkCMBIwFj8aPx
5mEmYWZxdoHmYzZjebJpopDS4vMiEgIOAg8CADIL04hSSyCC8v7y//LwMvvTKFK7IQhQABGIUICyCFISEvhS64AXRACScchTAEgAQCBLgBD7eAPAE8QDUBNbgCqEATACdQJ4AnA1AnoCf/JwMnYRSFDLgBD7M7OiwYKxD17fRdce1xThBd9nFN7RD97TwQ7QA/7T/t9P3kXRD0/eRdORESOTEwAV0rATIEEhUUAgQjIiQCNTQSJBciBAIVFBIEMzIkEjU0AiQTFwYGIyImNTQ2NjMyFhcHJiYjIgYVFBYzMjYC9cUBas/L/pXIyP6Vy88BasWe/t2mowEkoKABJKOm/t0coSaweqvYYLVwe6UuohxZPVt0clBDYgXYxv6Rycj+lcvLAWvIyQFvxpee/tmioP7co6MBJKChASie/Sc2fobgxYHKZHV+JklCiZKSik8AAAIA2AKHBwUFugAHABQAykBUBgoJDAJJE28LehOJC4kQiROZEJkTCDAWSQtLElAWZhFoEmAWcwp8DHYReBKECosMlQqaDA8LERIPDgcABBQTBAIUCO0JAgW9BA0MDAoKCQkEAA0OuAGgthB/D8APAg+4ARe3Ea5/EsASAhK6ARcAFAGgsggICb4BDwAFAlQABwGgAAACVLQCYAMBA7gCwLMVm94YKxD2XTz0/fT2PBD99l329l08/TwAPzwQPBA8EDwQ/TwQ/TwREjkSFzkBERI5MTABXQBdAXEBESM1IRUjESERIRMTIREjEQMjAxEB1f0CmfYBWQEDlZYBA56xlrAChwKljo79WwMz/cwCNPzNAo39cwKN/XMAAAEBeASsA3sF2AADACu2AgNwAQADArgBVLcDAXAA+AMZBLoBtwEaABgrThD0TfTtEO0APzz9PDEwASEBIwJJATL+vL8F2P7UAAIArQTHA3sFvAADAAcAekAhTwmQAJABkASQBaAAoAGgBKAFsACwAbAEsAUNBgcHAgIDuAJQQAkABQQEAQEAAwW7AlAABAAGAlCyBKgHvgEXAAIAAQJQAAAAAgJQtQCoAxkIkrkBGgAYK04Q9E3k7RDtEPbk7RDtAD88EDwQPBD9PBA8EDwxMAFdEzMHIyUzByPg9jP2Adj2M/YFvPX19QAC/74AAAh5BboADwATAU5AViQMNgQ4BzgRNhNKAE8HTAhPEV4AWwhcEFsRnwafCp8OqwCpEbsEuwe7ERUHBwwREhEPEwECAhIAEBERDwsIBwwHDAwiDxEUDw8RBAMDIgISFAICEgkIuALIQBAKCy8LTwuwC8ALBAsHDBATuALIQAoAAS8BAQEEAw0MuALIQAoODw8CAwgHBgYSuALItgUEAhEPEAq8AkIACQJCAA4CQkANvw3PDQKPDZ8Nrw0DDboCSAAGAkKzBRoVB70CkgAMAiQADwADAlmzLwIBArgC6LVgEnASAhK6ApoAEQFds58PAQ+4AueyFA8XuAFFsWwYKysQ9l30/l30Xe0Q/eROEPZN9PRdXeT05CsQPAA/PP08EDw/PDwQPP08ERI5XS88/TwREjldLzz9PIcFLit9EMSHLhgrfRDEBzw8BxA8PAcQPDwCCBA8CDwxMAFdASEDIQEhByEDIQchAyEHIRMTIwEDrf4X0P7KA1QFZzP9MkYCsjT9T1MDBzP74IB/s/6mAW/+kQW69f6x9f509QJkAmH9nwADAJ//hgZLBiIAFQAeACcAyUBIWR9ZIAIlCzUMPyg/KWoAZwpoFWcXCAsfIBMUFAoIFxYAFRUJGAcZBiESIhEnDCYNHgEdAhWYFPUgIjAiQCKQIgQiLhEJCpgJuAMeQA4vGT8ZTxmfGQQZLgYDCbgBEUALClomKD8NAQ0aKRS4ARFADRVaHSg/AgECGSg5MhgrThD0XU3tGfQY7U4Q9l1N7Rn0GO0AP+1dGfQY7D/tXRn0GOwBERI5ORESOTkAERI5ORESOTkHEA48PDw8BxAOPDw8PDEwAV0AXSUmNTQSJDMyFz
cXBxYVFAIEIyInBycBASYjIgYCFRQBARYzMjYSNTQBNYrSAXbqzoqMipCD4f6N4saDk40BWwKGT2t96IsDG/2CVGF24Zmsp+fqAcTrWahyraD67v5O4lCxcgGhAwc3pv68nl8CIP0AN6ABSKhRAAIAMgAABDMFZQALAA8AzUAmMA0wDkANQA4EMAowC0AKQAsEDA0AAg4PEQhQBgEGYQgwA0ADAgO4AlC2CVACAQJhC7sDKgAOAA0CULMPDAoLuAJQQBYABAUBUAABAGECCVAIAQhhMAZABgIGuAJQQAxQBQEFYQMDAgINDQy4/8CyPzUMuP/AQBo6NRAMUAwCAAwBLwxPDPAMAwzlEBE+IU9jGCsr9l1xcisrPBA8EDwQ9F39XfRdPBD0XTwQPBDtAD88/Tz29F08/V085F0BERI5ORESOTkxMAFdAF0BESERIREhESERIREBESERAbL+gAGAAQABgf5//YAEAQFkAX0BBwF9/oP++f6D/pwBB/75AAABADEAAAVVBboAGgERQH4qCSMTPRRECkQLRQxlBWUXCDoJOQ4+Dz4TBAQHCAgDDA4OCxAREQ4VGBkZFAIDGQEGBQQYAQYXBBgAFhoDGQAWFBUcCAcEAwQbGBkcDgYWCAcoExISwhEOFBERDgkKCsILDhQLCw4GAQHCABYUAAAWCgsbERwODgoIGQNlGAS4Aw5AFRUVB2UICBQUABIREQsLCgABKAAKDrgB2LUbFhAAF0+5ARwAGCsrKzwBGRD0ABg/KzwAPzwQPBA8EjkvPBD9PBD2PP08ERI5ARESORI5OYcuK30QxIcuGCt9EMSHLhgrfRDEKxESOTkBERI5ORIXOREzMw8PDw8HEDw8hxAOxIcQDsQHEAU8PDEwAV1dISETITchNyE3IQMhExYTNjc3IQEhByEHIQchAof+5Tr+iygBdRv+iygBEMwBJEcbNHl8fwFH/jEBFyj+jxoBcCj+kAEWwX/BAqP+/WX+9fG/w/1dwX/BAAH/tP5lBK0EJgAYANhAYk4VAScBLxo2BjcLNg5GBEcKRg5DF0QYQBlgGoQOiBiAGpgY0BrwGhIQGkAaAgsQDw8MFgIDBRcXARYZEBoFDygQCA4AGBggFwEUFxcBDwwMIA0OFA0NDg0MDAEBKAAGDgoWuAJHQBoIJxMLGCgXDgUFGhcZKAEXDQ4QFxcOFxcZCrgBRrFZGCsrEDwrKysQPBA8KxDAARI5LwA/KzwAP+3kPz8rPBA8EDyHBS4rfRDEhy4YK30QxAAREjkrOQEREjkSOYcQDsQ8xDyHEA7ExDEwAXFdAF0TIQMHBhUUFjMyNjcTIQMhNwYGIyImJwMh6wEfZhoGX0VPfCl1ARzg/wAVT1YyQl0rbv7hBCb+FHgpJlJkfcACLPvaY1ErPkz99AACAKgC5wNMBdMAHwAoAKpALQsQ0AECOQIBCRUJIgko2wHbAtgWBg8mDQ8g3yACIL0ZGQQmP18RAREtIA0BDbgBdEA4BNAAAQAt7wH/AQKfAd8BAgGo3x4BHj8EASAZDQMbDwwKANABqBQH0NAbARuCCtCADwFwD5APAg+4Aam3I9AUrimSqRgrEPb99F1d7fRx7RD07REzERIXOQA//XH0XV3kcRD9cfRy7RI5L+1xERI5MTABcV0AcQEnNjYzMhYVFAIVFBcjJicGIyImNTQ2NzY3NjU0JiMiEwYGFRQWMzI2AavAKqh/jIRTFcAIA1tqYnRdWzriDiouWnijQSojNkoE5RpmbnFVRv7BKTUzFhk/c1hQbhcOFzUcGSH+5xMtJBwnRQACAJQC5gNRBdMADAAYADG5ABACwbIKxxa4AsGyAwETvALCAAYByQANAsK1ABkZaKkYK04Q9E399O0AP+397TEwEzQSMzIWFRQGBiMiJjcUFjMyNjU0JiMiBpTSr42vYbNzjKrKPDFSaDsxUmkEGqsBDq2GYOB6q3E1PsNnNj2+AAMAP//nBu0EPwAsADUAQA
GCQHUQBhAHAlYDpAWjB6IIyQjbCNYh6QzpQP0G/zsLHwYcBx8IAwoGABmJB40IhSkFIAYmByAPJhMsHjk4SjhaElg4ajh/EncTdCR/LX8wfzV5OI8tjzWbOO0T7TjwBxcCBwQmiRSEKgQfGx4HBAYWGAsDAW82ATa4ARZAGhaAFgEwFkAWcBbQFuAWBRY9GzVPLQFwLQEtuALvQBwAAAFfAW8BAgEEMiooKCIAHh8eAl8ebx7QHgMeuAJFQBQPGwEbKiIHAAbQBgLQBuAG8AYDBrgCR0A7BCcJCQM9AfA9AT0nDQsQNQE1MwFENoA2AaA24DYCNi86BnwHzB8vAY0vAS9EICsBK4BfQtBCAkIeRB+4AkVAFDpEDxBfEAIwEM8Q3xADEBlBQsoYK04Q9F1xTe307RBd9F3tcXL07RESOV1xL/3kcgA/7V1xPBD95F1xP/1x5F1xEDwQ7RE5XS88EP1dcTwREjldcS/9XQERFzkAERI5ERI5MTAAcV0BcXJdAHIBIRYWMzI3BQIhICcGIyImNTQ2Njc2NzY1NCYjIgYHJTY2MzIXFhc2MzIWFRQlNjU0JiMiBgcFBAcGFRQWMzI3Ngba/SQEf1qQTQECrP7C/uZyuP+atHOvo9Q8CE5TS1MW/u0m88WQVz4mitfD7P77AWtXXZkc/tr+4lk2Sj5eSWEBuHSGiin+yPj4rYp3nj0PExchGEdJOkIchagqHj2F8tVhUBQNcniIg3IfPSU8MUZHXAAAAwBs/5UE1QSSABQAHwAoAT9AgyMAJgEpCyAULxUpGSsfICE2AT8KOQs/FTkfMSFPFUofZgBpCmYVaSCHCpcKmCGqAKkEqgm5FbkgxxXZIPkVHzkSRiJKJVAqcCqWGpklrA7JIeAq8CoLQBpLHkAq0CoEECoBAyAhCgsLAgAVHw0MDAEiCSMIFhQXEygEJwUeDh0PDHQLuAMmtiMnCAsCdAG4AyZAMhcnEwcB5AIzjycBJyRABQEgBVAFcAW/Bc8F3wUGBRoqC+QMM4AdAR0kIA8BDxkpKU0YuAEChStOEPRdTe1xGfQY7U4Q9l1xTe1xGfQY7QA/7Rn0GOw/7Rn0GOwBERI5ORESOTkAERI5ORESOTkHEA48PDw8BxAOPDw8PDEwS7ALU0uwDVFaWEAMFQEgHwEhFQEgIQEfABAXPBAXPAEQFzwQFzxZAXJxXQBdATcXBxYVEAAhIicHJzcmNTQ3NiEyByYjIgYGBwYVFBcBARYzMjY2NTQD6nxvem3+tv7snHd/bHptfqEBPqUtNkc+ZF8fLA8CFf5GN0ZdmFkEApBejn3J/v/+iECSYI2Bw+Wx4/IiM3BObmcvLgGM/gAfdediLgAAAgA3/lMELQQmAAMAHQCqQCAkECARAgAEAAUCIhsgHzgUSRp2B38PhgePDwgSDhHZDrgBHLMVDwQFuAMqtABJAwYRuAJQQAogEm8SnxK/EgQSuAFPtAMBSQAEuAJQQBggBQEFLQCoAkkDGh8fCwGvCwEvC58LAgu4AlBACl8YfxiPGJ8YBBi8AnEAHgCMAT8AGCsQ9l3tXV1yThD2Te309F3tEO0Q9F3tAD/99jw/7e0ROTEwAV0AcV0BIRMhASEGBgcGBhUUFjMyNjcFBgQjIiY1NDY3NjYDjf7kOgEc/qYBAhR/v6E5YmVwjh8BBjL+29Ha9GjpkkQDEAEW/pKGxaaMWyo+UnqFLsLi2ZZZrsJ6YAAAAgAY/mwCcAQmAAMACQC/QBoKBA8FCAYIBxcGFwkoBCcGJgkJAwuAEx02ALj/6LMbHzQBuP/oQC0bHzQAOBIVNAE4EhU0JgUoBjgGeAWZBpkJqAaoCQgJAQICCAYAAwMHBCAFAQW4AXpAEQgIBwEAAgBJAwYBqAIAqAMEuwMgAAIABQFRswMIcAe4AsBACwJJnwMBAxoLXWMYK04Q9l1N7fTtEOQQ5BDkEOQAP+08EDwvPBD9XTyHBRA8Dn3EhwUQxA7EMTABXS
srKysrAXEBIRMhATMDAyETAjb+5DoBHP7cnlxI/tJNAxABFv6L/RP+qAFyAAEAVQF4BFYEMgAFAFq5AAUBK7cDAT8CTwICArgCUEAKBAMGAD8BTwECAbgCUEAZBWAEASAEQASgBMAE4AQFBFsCgAMBnwMBA7gC/LMGkmcYKxD2XXE8/V1xPP1dPAA/PP1dPBDtMTABESERIREDVf0ABAEBeAG5AQH9RgAB/+z+UQR2BdMAJACoQFuRHAEoFSMYKCQ1F0gVBRIVFhERFhYgIwMUIyMDLw4/Dk8Onw6vDgUOfAkBCxQvAD8ATwCfAK8ABQBgARMSEgICIAEBASMcIB8wH0AfkB+gHwUffBoPHDMcMyUDuAJIsyAjASO4AiazIAEBAbwCUwAlAa8BQgAYKxDmXfZd5BDk5AA/7V0vLy9dPBA8EDwQ/V08Lz/tXYcOLit9EMSHBcQ8MTABXQBdATczNzY3Njc2MzIXByYjIgYHBzMHIwMOAiMiJzcWMzI3NjcTARQkyhAcFCA4SoeHhDZeQDY0EQvQJc2tFEGCcpx8OW06PhkUEZYDU9NZmStCIiwp4hwwVjjT/C53dEUo3RcmHWgDaQAAAgBsAEcEfgPRAAUACwB7QE9XA1cJ4QbgCeALBQULAQgIB+AKAQr6C40GcAAJQAmACdAJBNAJ8AkCCfYABPoFjQBJAwfSAAhACIAIAwiuAdLQAgECso8DAS8DcAOAAwMDuAMGsw2qLBgrEPZdcfRx/fZx7RDt/e0Q9l1x7f3tXQAvPBA8LzwxMAFdAQEzARMjAQEzARMjAgUBhfT+nn/O/Z8BnPH+l4TJAikBqP5C/jQB4QGp/kT+MgAAAgAuAEcEQAPRAAUACwCNQGcvDVgDWAlQDXANgA3NAcwCzAPADQoIAAgGjwePCATGAwEBCAgHBQsH0g8ITwgCCK4BCvoLjQZwDwlPCY8J3wkE3wnvCf8JAwn2AAHSArIDBPoFjQBJQAOAA9ADA0ADUAMCA8QMT2MYKxD0XXHt/e0Q9O0Q9l1x7f3tEPZx7QAvPC88EDwxMABdAXFdAQEjAQMzAQEjAQMzAqf+e/QBYX7OAmH+Y/ABaYXKAfD+VwG/Acv+IP5WAb0BzQAAAwC+AAAHQgEVAAMABwALAGhALDAAMAEwAjADMAQwBTAGMAcwCDAJMAowCwwISQsKBEkHCgBJAwoKqAlJC6gIuAEgtgaoBUkHqAS4ASC2AqgBSQOoALwCVAAMAK8BQAAYKxD05P3k9uT95Pbk/eQAP+0/7T/tMTABXRMhAyEBIQMhASEDIfgBHDr+5ALRARw6/uQC0gEbOv7lARX+6wEV/usBFf7r////6QAABWMHEQImACQAAAEHAEMB5AE2ACFAEwIADAFwDIAM8AwDDATZSCsCAQ65AjUAKQArAStdcTUA////6QAABWMG/gImACQAAAEHAMYBwAFPAChAEQIADHAMAlAMcAyADPAMBAwEuP+ItEgrAgEMuQI1ACkAKwErXXE1//8As//mBkYG/gImADIAAAEHAMYB5AFPAB23AoAfkB8CHwa4/vm0SCsCAR+5AjUAKQArAStdNQAAAgCN/+cIoQXTABkAKgENQDI2GFwNAj8rTyxZIJ0DlwidIwYEBwgIAxkMCwsAGBkDAAwNCCMACwMICCILABQLCwAFBLgCyEAKBgdPBwEHAAsCA7gCyLUBAQACCQi4AsiyCgsduALLsg8JJrgCy7UWAwALEAO8ApIACAIkAAACREAKCyALMAsCCwoaBkEJAkIABQJCAAoCQgAJApIAAgJCQBcBGiwaKEASUBJgEqASBOAS8BICPxIBErgCmrMrCxfOuQEUABgrK04Q9F1ycU3tThD2TeT05PTkERI5XS/k/eQrEDwAP+0/7RA8/Tw/PBD9PBESOV0vPP08hwUuK30QxAEREjkAEjk5ERI5OQcQDjw8BxAFPDwxMAFdAF0BIQchAyEHIQMhByE3BgYjIgAREBIkMz
IWFwEUFjMyNjY3NjU0JiMiBgcGBOgDuTP9ZUQChzP9eFQC3jP8AxlAlWnb/tjIAWGwc7ww/NalhF+PXyU0lHZ+0UI0Bbr1/rn1/mz1dkpFAT8BGAEBAbvZZVj9B5esWaGa1YKCmrbJogAAAwB3/+cHjQQ/AB8AKAA1AP9AUSAGIAd/IH8oiR2PII8o4wYITyBPKAIvFDcHNQhmB3YDeQV4HHskigXtBe8G6wf9BfkU8BwPCgWJB4kIiwoEGTIWBwQGCwssDhkLAS8ocCABILgC70AWAAABUAEBXwFvAQIBBCUnGxsyJxYHBrj/wEARJSg0UAYBAAbwBgLQBvAGAga4AkdAMQQnCQksJw4LKDMBRC8wL6AvAi8iKQZ8B8wiRCAeAR6A8DcBNykkIBLwEgIS5TYpVhgrEPZd7RBd9l3t9O0REjldL/3kAD/tPBD95F1xcis/7TwQ7RE5XXIvPBD9XTwBERI5OQAREjk5ERI5ERI5MTABcV0AcV0BIQYWMzI3FwIhIicGBiMiJiY1NBIkMzIWFzYzMhYVFCU2NTQmIyIGBwUUFjMyEjU0JiMiBwYHdP00AnhYlE3/q/7J7IZPzH6V5XiOARPIdbtFq9zJ6P73AWlWWIYj/NB5XZCoeV16TXEBt3yHlS3+y59PUHTjh6MBL6hKSpTx3GheFA9zdHuPyWF7AQejeIJhjwAAAf/8AaoEbwJ8AAMAILMBmgACuAF8swAZBAW4AXyzITpnGCsrTvRN7QAv7TEwAzUhFQQEcwGq0tIAAAEAAAGqCAACfAADAB+zAZoAArgBirMAAAQFuAGKsyFdZxgrKzwQ7QAv7TEwETUhFQgAAarS0gAAAgEAA3cEgQXLAAoAFQBqQBF1B4YOhhIDBwgGCQBJCgtJFbsDJwARAAoDJ0AJBr0FBRG9EAAJuAFRtAqoAEkBuwEXAAsAFAFRQA0VqAtwkAwBDBkWv3UYK04Q9F1N/fTkEPb99PQAP+08EP3mEPbtEO0REjk5MTABXQEhNzY2NwcGBgczASE3NjY3BwYGBzMEBP7fLyq1kBxIThWE/eT+3jAqtJAcSE4UhAN35celA4URUlf+6+XHpQOFEVJXAAIBBwNmBIgFugAKABUAg0ArExIUEREQDQwVCAcJBgYFAgEKEb0QFBCHC0nvFQEVBr0FCQWHAEnvCgEKFLgBUbYVDHAVqAsJuwFRAAoACwEXtwFJCqggAAEAuAEgsxa/dRgrEPZd5P3mEOQQ5O0Q5AAvXe3kPBDtL13t5DwQ7RESORI5ERI5ORESORI5ERI5OTEwASEHBgYHNzY2NyMBIQcGBgc3NjY3IwGEASEwKbWQHEhOFIQCHQEiMCq1jxtITxSEBbrlx6UDhRBTVgEW5celA4UQU1YAAAEA3wN3An0FywAKAD1ACoUDhQWFBwMASQq4Aye0Br0FAAm4AVG3CgzgCqgAcAG8AVgACwD7AQ0AGCsQ9v3k5hDkAD/99u0xMAFdASE3NjY3BwYGBzMCAf7eMCq0kBxIThSEA3flx6UDhRFSVwAAAQD9A2YCmwW6AAoATkAbCgIKAwIIBwkGvQUJBYcKSQAABi0FqAoCqAEJuAFRQAkKAUkKqCAAAQC4ASCzC7+pGCsQ9l3k7RDkEOQQ9PQAP/3kPBDtETk5MTABcQEhBwYGBzc2NjcjAXoBITAptZAcSE4UhAW65celA4UQU1YAAwAxALkEMgTuAAMABwALALlAJwAEAAUCPwQ/BU8ETwVQBlAHbwRvBQggATABQAEDAUkA9jAFQAUCBboCUAAEAw5ACiAJMAlACQMJSQi4/8BART81CEA6NRAIXwiQCN8IBA8IQAiACN8IBD8ITwhvCH8IkAgFCCADMANAAwMDSQAAIAswC0ALAwtJPwhPCKAI4AjwCAUIB7gDAkALDQXwBAEE5QxPYxgrEPZdPBDmL13tXTwQ7V0AL11xcisr/V32/V327V0xMAFdcQERIREBESERAREhEQGlAR
n9cwQB/XMBGQPVARn+5/57AQf++f5pARn+5wD//wAN/lEE9wW8AiYAXAAAAQcAjgDgAAAAI0AKAgE/HAFQHAEcAbgBarVIKwECAhu5AjYAKQArAStdcTU1AP//AOsAAAZHBwICJgA8AAABBwCOAZwBRgAetgIBrxUBFQO4Axa1SCsBAgIUuQI1ACkAKwErXTU1AAIALQDOBD8E4gAjAC8A0bkAHAEJtRv3HhX3FLsBCQASAAoBCbUJ9wwD9wK4AQm1ABOCC4IquAMKQAoSqAyoYA8BD+4kuAMKQAwdggCoAYIeqAAhASG4Ao1ACTAUghyCGB73HbsBCQAbAAEBCbUA9wMM9wu7AQkACQATAQm2EvcVqBuoJ7oDCgAYATBACQqCAoIJqAOoLbgDCrNgBgEGuALOsoRzGCsAP13t5OTk5P3t5PTt7RDt7RDt7RDt7RDk5AFOEPRxTeTk5OTt/V3k5O3k5BDt7RDt7RDt7RDt7TEwEyc3FzY2MzIWFzcXBxYWFRQGBxcHJwYGIyImJwcnNyYmNTQ2FxQWMzI2NTQmIyIGsYSmiDVtODluNIanhBwcHByGp4gybTs4bDSCqoIcHRvWd1NUdnZUVHYDrYOyiB0cHB2GqYUzbTo5bTSFq4gcHRwcg6iENG05OWukVHZ2VFR2dgAAAQB6AEcDBwPRAAUAJ0AWAQUB0gJhAwT6BY0AcCADAQOwB4SpGCsQ9l3t/e0Q9O0ALy8xMBMBMwETI3oBnPH+l4TJAigBqf5E/jIAAAEAFABHAo0D0QAFACZADgEFBPoFjQMB0gKyAEkDuAEgswZdYxgrEPbt9O0Q/e0ALy8xMAEBIwEDMwKN/nv0AWF+zgHw/lcBvwHLAAAB//7+ogTMBaYAEwDxQEQnAi8JLwwnEgQADBMQBQEJAhAFBAgDEAUHCAMPBgoJAg8GCwwTDwYODRIPBhENEhAFAiAJMAlACZAJBAkhCAgDsgUFBrgCXkBQEy8MPwxPDJ8MBAwhEg2yEA8AA7IEArIBE7IAErIRDWEODGELCWEKCGEHEREFAAAFAQEFBAQQBQ4OBgsLBgoKBgcHDwYQYQUoD2EgBoAGAga4AWCzFE+pGCsQ9l3k/eQREjkvETkvETkvETkvERI5LxE5LxE5LxE5LxDkEOQQ5BDkEOQQ5BDkEOQAPzz0PP1dPPY8EPQ8EP1dPA8PDw8PDw8PMTABXQEDIQchAyETITchEyE3IRMhAyEHAzmLAWUv/ptH/vFI/o0vAXKL/o4uAXNIAQ5IAWUuA3D9Z9/+qgFW3wKZ3wFX/qnfAAABAJMCPQGsA1YAAwAWQAsBSQACSQCuBGhzGCsQ9u0AL+0xMBMRIRGTARkCPQEZ/ucAAAEAFf7CAbMBFQAKAE1AGooDAQgHCQYGBQIBCga9BQkFhwBJCgoBCQEJuAFRQBAKAUkKqCAAMAACAK4LRnMYKxD2XeTtEORxAD/t5DwQ7RESORI5ERI5OTEwXRMhBwYGBzc2NjcjkgEhMCm1kBxITxOEARXlxqUDhRBTVgACAAf+wgOIARUACgAVAIBAJxMSFBEREA0MFQgHCQYGBQIBChG9EBQQhwtJFQoGvQUJBYcASQoKFLgBUbYVDHAVqAsJuwFRAAoACwEXQA0BSQqoIAAwAAIA9hZGuQFDABgrEPZd5P3mEOQQ5O0Q5AA/7eQ8EO0/7eQ8EO0REjkSORESOTkREjkSORESOTkxMBMhBwYGBzc2NjcjASEHBgYHNzY2NyOEASEwKbWQHEhPE4QCHQEiMCq1jxtITxSEARXlxqUDhRBTVgEV5calA4UQU1YAAAcAi//GCCwF0wAOAB4AIgAxAEEAUABgAVJAP2kDYB9gIGAhYCJpJmlFcB9wIHAhcCKHH4IhgiKTH5IgkSGQIpBiyR/JIMkhFg9iH2IvYp9ir2K/Ys9iByEgILgC3kASHyIUHx8iHyAhIgRiYSAfRS89uALdsigoXLgC3bNHx04SuALdsgzHGr
gC3bYFIiEhBQE1uwLdAC8AVALdQAlOTi8LYhcXGkq4AtxAC5BZARBZIFnAWQNZuP/Asw8SNFm6AawAUQLcskJQK7gC3EALkDoBEDogOsA6Azq4/8CzDxI0OroBrAAyAtyyI+AIuALcQAuQFwEQFyAXwBcDF7j/wLMPEjQXugGsAA8C3LdwAJAAwAADALj/wEAJGR00ABlhaLoYK04Q9CtxTf32K3Fy/fb99itxcv32/fYrcXL9TkVlROYAPzxNEO0Q7T88EDwQ7f3tEP3tPBDtEPQ8ARESFzmHLit9EMQxMAFxXRM0PgIzMhYVFAIGIyImNxQWMzI3NjY1NCYjIgcGBhMjATMBND4CMzIWFRQCBiMiJjcUFjMyNzY2NTQmIyIHBgYFND4CMzIWFRQCBiMiJjcUFjMyNzY2NTQmIyIHBgaLNmN6UnOJY61iaYa6JB8kHCQ9KRwiHSg4QMYDtM392TZjelJyimOtYmqFuSQgJBslPCgdIR0oOQH5NmN6UnKKY61iaoW5JR8kGyU9KR0hHSg5A+xAzZNHkoJp/vyAkH04Kx0n0kEpLSAtzfu3Bg37EkDNk0eSgmn+/ICQfTgrHSfSQSktIC3NI0DNk0eSgmj++4CQfTgrHSfSQSktIC3N////6QAABWMHPgImACQAAAEHAMUB5AFkACtAHAJQC2ALcAvACwSwC8ALAnALgAsCCwT/SCsCAQy5AjUAKQArAStdXXE1AP//AFQAAAXFBz4CJgAoAAABBwDFAcMBZAAbtQGgDAEMAbgCX7RIKwEBDbkCNQApACsBK101AP///+kAAAV9BxMCJgAkAAABBwCNAgIBOwAdQA8C4AsBUAsBCwT9SCsCAQ65AjUAKQArAStxXTUA//8AVAAABcUHAgImACgAAAEHAI4BxwFGABm0AQICEwG4ApS1SCcBAgISuQI1ACkAKwErAP//AFQAAAXFBxECJgAoAAABBwBDAfsBNgAbtQFwDQENAbgCSbRIKwEBD7kCNQApACsBK101AP//AEcAAAOcBxMCJgAsAAABBwCNACEBOwAfQBEBHwQBfwSPBAIEAa5IKwEBB7kCNQApACsBK11xNQD//wBHAAADkQc+AiYALAAAAQcAxQBvAWQAG0AOAXAEgAQCBAHQSCsBAQW5AjUAKQArAStdNQD//wBHAAADnwbVAiYALAAAAQcAjgAkARkAJUAMAgEgCwEvC58LAgsBuAEZtUgrAQICCrkCNQApACsBK11xNTUA//8ARwAAAxAHEQImACwAAAEHAEMAagE2AB9AEQFQBQFwBYAFAgUBzkgrAQEHuQI1ACkAKwErXXI1AP//ALP/5gZGBxMCJgAyAAABBwCNAj4BOwAZQAwCjx4BHgY8SCsCASG5AjUAKQArAStdNQD//wCz/+YGRgc+AiYAMgAAAQcAxQI2AWQAHUAQAoAekB7QHgMeBlJIKwIBH7kCNQApACsBK101AP//ALP/5gZGBxECJgAyAAABBwBDAfEBNgAZQAwCcB8BHwZCSCsCASG5AjUAKQArAStdNQD//wC7/+cGHwcTAiYAOAAAAQcAjQGhATsAF7MBARsAuAJjtEgnAQEeuQI1ACkAKwErAP//ALv/5wYfBz4CJgA4AAABBwDFAeoBZAAdtwFgG4AbAhsAuAKZtEgrAQEcuQI1ACkAKwErXTUA//8Au//nBh8HEQImADgAAAEHAEMBzwE2AChAEQEAHAEvHGAccByAHJ8cBRwAuAJstEgrAQEeuQI1ACkAKwErXXE1AAEAUgAAAk8EJgADAGhAMAAFJwIgBTAFoAXQBfAFByUAJQEvBTcCgAWvBQYBAgIgAwAUAwMAAwIKAQAGAAMQAbgCSrICUgC4Akq3TwMBA44EAxe4ArmxMBgrKxD2XeT95CsQPAA/PD88hwUuK30QxDEwAV0BcQEhAyEBMAEf3v7hBCb72gAAAQBzBK0DIgXaAAYANkAeJwE2AQIWBCUEAgYFAQACAX
AEAwMFSQCzAhkHhHUYK04Q9E397QA/PO0BERI5ETkxMAFxXQEHIwEhEyMCD7DsASkBDHrGBV+yAS3+0wABAL4EvQNuBa8AFQBgQBBXEWcRdhGMB4ARmweREQcJuAMsQBMQABBrIAEwAaABsAHAAQUB0wwTuAMsQAkFDQxFBQMMvQ24ASW1AL0BGRavuQGjABgrThD0Tf307QA/9DwQ7RD9XeY8EP0xMAFdASM2NzYzMhcWMzI2NzMGBiMiJiMiBgE8fhslNlA0hk4dFyAOgBNxRi3GGx4sBL16MUYxHB0xe3VQIwACATQEhgK2BggACwAXAFi1TwmPCQIJuAL6tUAPgA8CD7gDErVPFY8VAhW4Avq3AwNABoAGAga4AvpADU8SjxICEq5ADIAMAgy4Avq1ABkY1iwYK04Q9E39cfZx7XEAP/1x9nHtcTEwATQ2MzIWFRQGIyImNxQWMzI2NTQmIyIGATRxUFBxcVBQcW0xIyMxMSMjMQVHUHFxUFBxcVAjMTEjIzIyAAEADv5XAiP/5gAXAEdAC1AZAQwNCg8AApgWuAFntAqYDwsSugEWAAcBWEAQDAGoAOcNLb8MAQyuGF1zGCsQ9l3k9OQQ9u0AP+397TkREjkyMTABXRM3FzI3NjY1NCYjIgc3NjMyFhUUBwYjIg4wOVVOMSI1PyA6Ikg7bnFYeuI//lt0ARgPKRYdKQljEmBFVD5YAAABAOAErQOPBdoABgA5QBwoATgBAhoEKQQCAQIGAAUDBHAGAwKzAEkFGQebuQEcABgrThD0Tf3tAD/9PAEREjkROTEwAXFdATczASEDMwHzsOz+1/70esYFKLL+0wEtAP//AH7/5wVpByQCJgA2AAABBwDJAYgBSgAZQAwBbysBKxJNSCsBAS+5AjUAKQArAStdNQD//wAt/+cEagXaAiYAVgAAAQcAyQDZAAAAG0AOAVAtny0CLRQdSCsBATG5AjYAKQArAStdNQAAAgCw/lEBjwXTAAMABwBmQAmQCaAJsAkDBgm4AqizC1o2A70BLQABAAYBLQAHAg1AFwEAAQAJFxcaAgIDAwcHBkAEBQUAGQgJvAGrACEAkgFAABgrK070PBA8Tf08EDwQPE4QRWVE5hA8AD9N/e0Q7TEwKwFdExEzEQMRMxGw39/fAr0DFvzq+5QDF/zpAAIASwAABc0FugAUACYAqkAfKAA/JD8ltAC1FQUEAQAABSMmFSIiFRUfAAUUAAAFFrsCyAAAACICyEAaBSWWAugkAy8DnwO/AwMDAAUCAAgFABAlyyS4Aum3A8sCWgAiTAW6AkQAFQIkQBnAANAAAjAAQABQAAMAPCccKA4aKAAXODIYKytOEPZN7RD2XXHt9O0Q9PT95CsQPAA/PxI5XS88/eYQ7RDthwUuK30QxAc8PAcQPDwxMAFdMxMjNzMTITIXHgQVFAIGBwYjJzMyNjc2EjU0JicmIyMDIQchWYmXJpaKAYamLluPdFYutO2mYMilmKaVPlp5ZUk1hapWATkm/scCgbIChwUJOGWOuW7u/pfCKRjsKThSAQ23nJ0aEv5rsQACAHz/5gTdBboAHwAsANpATS8tLy5QLmkftgTpHPAuBycDJRMlFDcDUARnA5UCmhyZHQkGGxwcBQMeHgAdHQQeGwADJhkGAwEECSAZIx4bBgMEARUFUBxQHQIcBQQcuAFNQCsdBBQdHQQcHRwdJgQgBS4JBR0VBAEAIycVBiknDQsmJBEZLSAkUAnwCQIJuALHQAtwLgHALgEuKU0YAIUrTRBdcfZdTe1OEPRN7QA/7T/tPzwSOTkBERI5EjkROTkIhy4rCH0QXcQAERIXORI5ARESFzkSFzkHEAg8DjwHEA48PDEwAF0BXQEhFhc3FwcWEhUQBwYhIicmNTQ3NjMyFxYXJicHJzcmEzQmIyICFRQWMzI2NgKwAQ8aGcohs2JLiKf+0dp3rIqh/V9INz0jRPIj4Cq2eV6YqXdhWphOBbolJk
tjQ7H+67H+/8TyYYru7a7MIRlCoH1XY1BD/PlrfP7rjGl6ftcA//8A6wAABkcHEwImADwAAAEHAI0ByQE7ABezAQEOA7gCw7RIJwEBEbkCNQApACsBKwD//wAN/lEE9wXYAiYAXAAAAQcAjQCwAAAAF7MBARUAuAJOtEgnAQEYuQI2ACkAKwErAAACAFMAAAVjBboAEwAeAIVAIDkXSQNJBEkXWASUArgJBwQeFBMAAwMAAB8BAhQBAQIeuALIt08EjwSQBAMEugHVABMCyEAXFUAVvxUCFQMAAwICAQAIARcZKAsaIAC4Alm3ARkfARc4MhgrK04Q9E3tThD2Te0rAD88PzwREjldL+30Xe2HBS4rfRDEh8Q8PDwxMAFdISEBIQMhMhceAhUUDgIHBiMjNzMyNjY1NCYmIyMBgf7SATgBLTsBIYU3SnVKVn6Re0XCwjRc76BcMk2Y1gW6/ugKDlmiYWnTf0ERCfI9hU8xShsAAv/t/msE0gW6AA8AHQC9QGoxAjgNMRtGAqgMBTUCWABZAlAfaA+JDooPmA8IKg04D2oNAxAfAQ0CEA4OAQIXEAIEGgAPDyAOARQODgEBKAAAGioEBxMnCwsPKA4OFyRfBwEQBy8HPwd/B48HBVAHAQfcHxABXxCvEAIQuAFrQAoOHigBEA4XDh4KuAG5sU0YKysQPCsrPCsQwAH0XXH0XXFy7QA/KzwAP+0/7T8rPIcFLit9EMQAERI5ARESOYcQDsTEPDEwAXJdXQBdASEDNjMyFhUQBwYjIicDIQEUFjMyNjY1NCYjIgYGAXsBH2aRm6fLr5XO02x1/uEB4XlSR4RXcldThksFuv4VcObj/uDGqav92QNafIlm8GZ3f3TjAAABAG0A6wQ9BLsACwDxQBcNDQECCQAFBwMICwYEAwgABQoCCQsGBbsDIAAJAAYDILMICwYDuwMgAAsAAgMgQAkACQIFAAAXSAC4AlBADAsGFAsLBgMICBdICLgCUEAJCQIUCQkCAEUIuALkswkFRQO4AuSyBkUCuAEtQAkLRT8JbwkCCQBBDgMgAAIBLQAFAyAACQLkAAgDIAAGAS0AIAADAyC3IAuAC6ALAwu5AwMADBkQ9l0Y5hoZ/Rj0GfQY5hn9GPQAL13kGf0Y5hn9GOQZEP0Y9ocOLisOd/V9EMSHDi4YKw539X0QxBgBCBDgEOAIEOAQ4A8PDw8BRn1qSBMBATcBARcBAQcBAW8BLv7QugEwAS61/tIBMbr+z/7SAaQBLgEvuv7RAS62/tL+z7oBMf7SAAEA6wLXAuUFzgAKAGS3JgcmCAIGBwe4AaNACwgJFAgICQUGBwkAvAKcAAECqAAIAuuzBgUBBrsBWwAHAAkBUbUIAZcA9ge6AaMACAFiswubcxgrEPbt9uQQ5BDkAD887fTtOQEREjmHLit9EMQxMAFdEzc2NzY3MwMjEwbrJ3s0ZkR6vMR7dgR1mzAbNj39CQHsOAABAKgC1wMrBcwAGgC3QEDlEQEQEVARAgQRATYJNwpGCUYKVglUCl8NXw5mBWUKYA1gDmoThgWOEo4TghqlCaUKtwm1CrYRFgEABBEODw0MugLsAA8C67M/AAEAugJQABgBE7IEAQy4AhdAFT8PTw9fD48PBA+CAQ4tUA0BDagHALgBF7UBat8VARW4ARhADSAHMAeABwMHGhySdRgrThD2XU3tcf3tEPRd5BD0Xe0AP+3tXf39PBA8MxESOTEwAV0AcXJdASc2NjMyFhUUBgcGByEHITY2NzY2NTQmIyIGAbW9FpV5hItcqRo0ARwn/dsNW6yELyolJzQEyhdzeHpXPox6EjCeV36KaEYYHSgyAAEAnALLAx4FzAAwAMxAKSABAT8KPww/Ddcb6R0FJichEiYnDwoAAQQaFg0PDAp/GQEZSBbgDwEPuAETQAoKzwoBCh4uAasEvAETAC4C6wAWARNADB4BJicnGhkNAxYPALgBF0AKAAHQAQIBUAwtDb
gBT7QHgBkBGbgBF7WwGgEaahK4ARiyIagHuAEYQAsgK+ArAiuNMpJ1GCsQ/V3t9O39Xe1dEPT09nHtABESFzkSOTk/7f395BESOV0v7V0Q5F0RMxEzETkREjkREjk5ARESOTkxMAFdAF0TNxYWMzI2NTQmIyIHNxYzMjY1NCcmIyIGByc2NzYzMhYVFAYHBgcHFhcWFRQGIyImnLoKLSo4QTMwER0mEgw+PBEWJCMzEbQiL1WCeoUmHikHAQQfKLOLeo8DsBI9KjwqJS4FlgI1KBwPEyw5Hl0qTnZSK00TGQYHBhwlSFuedgADAK7/wwavBcwACwAPACoBLUBHJQYlCFQZVhpmFWUaYB1gHm0jiiKlGaUaDD8QPxFPGkkiaQB5AHUidSOJAJYhpSG2IcUh5SEOFSFQIQIVITQhlSGlIQQGBwe4AaNAGAgJFAgICQEABQgJAQAREBQFBgcRHh8dHL4C7AAfAusAEAJQACgBE7cUDA0NDw4AALwCnAABAqgACALrswYFAQ+/Au0ADgIvAA0C7QAMABwCF0ASPx9PH18fjx8EH4IRHi0dqBcQuAEXshFqJbgBGLMXGiwGuwFbAAcACQFRtQgBlwD2B7oBowAIAWKzK5tnGCsQ9u325BDkEOROEPZN7f3tEPTkEPRd7RkvGO327QA/PO307T88Pzwv7e39/TwQPDMBERI5ABESORESORESOTmHBS4rfRDEMTAAcXJdAV0TNzY3NjczAyMTBwYDIwEzASc2NjMyFhUUBgcGByEHITY2NzY2NTQmIyIG6ycGCcmBe73EexRiDq4FPq7+n74XlHqEi12oGjQBHCf92wxcrIMwKyQnNARzmwMDRnL9CgHrCS/7OgYJ+/0Xc3h6Vz6MehMvnlZ/iWlGGB0nMQAEAMz/wwa4BcwACgAOABkAHAFfQEEmByYI9g/2FPYX9hoGBQ8FFAUXBRoZGiUPJRQlFwgaGxkcFRcTGBAWFBMYHBUPGxkQFhscGxkczxESFBEREhMYGLgBo0AJGRsUGRkbBgcHuAGjQB4ICRQICAkWGBURHBAFBgcJAAERRRUc5hYQEBkb0hK4AjtAChgZDAsMDQ4NAAC8ApwAAQKoAAgC67MGBQEOvALtAA0CLwAMAu1AEwuvEAFfEJ8Q3xADLxA/EE8QAxC4AVuyHHQYuAGjsxkSkRO4AVu3IBvQG+AbAxu4Ajq3kBmgGdAZAxm4AnazFRoeBrsBWwAHAAkBUbUIAZcA9ge6AaMACAFisx2bZxgrEPbt9uQQ5BDkThD2Tf1d/V397RDt9u1dXV0ZLxjt9u0APzzt9O0/PD88Pzz95RI5Lzz9POQREjkBERI5ERI5ERI5hy4rfRDEhy4YK30QxIcuGCsIfRDEDw8PDzEwAXFdEzc2NzY3MwMjEwYTIwEzASE3ATMDMwcjByMTNwfrJ3s0ZkR6vMR7dhCuBT6u/qn+oCMB2qdyaSVpJa1LPOMEc5swGzc8/QoB6zj7OgYJ+p6NAdH+LoyUASDj4wAEAJv/wwbQBcwAMAA0AD8AQgGhQDwhAfQ3Atcb1hzYHekd+TcFBjUHOgY9B0AEQEE/Qjs9OT42PDo5PkI7NUE/NjxBQkE/Qs83OBQ3Nzg5Pj64AaNAIz9BFD8/QSYnIRImJw8KAAEEGhkWPD47NzY3lTtC5jw2Nj9BugFbADgCO0AUPj8MMTINNDMADQ8MCn8ZARlIFg+4ARNACgrPCgEKHi4BqwS8ARMALgLrABYBE7IeATS8Au0AMwI9ADIC7UATMa82AV82nzbfNgMvNj82TzYDNrgBW7JCdD69AaMAPwA4AaMAOQFbQAsAQQEgQdBB4EEDQbgCOreQP6A/0D8DP7gCdrM7GkQAuAEXtwABAQFQDC0NuAFPtAeAGQEZuAEXtbAaARpqErgBGLIhqAe6ARgAKwK6s0NRTRgrEPTt9O39Xe1dEPT09nHtThD2Tf1d/V1x/e0Q7fbtXV1dGS8Y7fbtAD/t/f
3kERI5XS/tEORdETMRMz88Pzw/PP3lEjkvPP085AESORESOQAREjkREjkREjk5ARESOTmHLit9EMSHLhgrCH0QxA8PDw8xMAFxXQBdEzcWFjMyNjU0JiMiBzcWMzI2NTQnJiMiBgcnNjc2MzIWFRQGBwYHBxYXFhUUBiMiJhMjATMBITcBMwMzByMHIxM3B5u6Ci0qOEEzMBEdJhIMPjwRFiQjMxG0Ii9VgnqFJh4pBwEEHyizi3qP5q4FPq7+kf6gIwHapnFoJWklrEo94wOwEj0qPColLgWWAjUoHA8TLDkeXSpOdlIrTRMZBgcGHCVIW552/IIGCfqejQHR/i6MlAEg4+MAAf/tBhAEfQbHAAMAGrMBIwACugJ3AAACZ7MEfzIYKxD17QAv7TEwAzUhFRMEkAYQt7cAAAECEAJSA2UDaAADACFACgBJAwKoAUkDqAC6Ai4ABAEmsegYKxD25P3kAC/tMTABIQMhAkoBGzn+5ANo/uoAAAAAABgAAADgCwsIAAMDAwUGBggIAwQEBAYDBAMDBgYGBgYGBgYGBgQEBgYGBgoICAgIBwcJCAMGCAcLCAkICQgHBwgHCwgHBwQDBAYGBAcHBgcGBAcHBAQHBAoHBwcHBQUEBwYIBQYGBAMEBggICAcICQgHBwcHBwcGBgYGBgQEBAQHBwcHBwcHBwcHBgQGBgYEBgcJCQsEBAsJBwYHBAQKBwYEBgYGBwsICAkLCgYLBgYDAwYGBwYEBAYDAwYKCAcIBwcDAwMDCQkJCAgIBAQEBAQEBwUDCAcHBgcHBgQEBAkJCQYEAAAMDAkAAwMEBgcHCwkDBAQFBwMEAwMHBwcHBwcHBwcHBAQHBwcHCwkICQkIBwkJAwYJBwsJCQgJCQgHCQkMCAgHBAMEBwcEBwcHBwcEBwcDAwcDCwcHBwgFBwQHBwoGBwYFAwUHCQkJCAkJCQcHBwcHBwcHBwcHAwMDAwcHBwcHBwcHBwcHBQcHBwQHBwoKCwQEDAkHBwcEBAsHBwQHBwcHDAkJCQsLBwwGBgMDBwcIBwQEBwMDBgoJCAkICAMDAwMJCQkJCQkDBAQEBAQIBwMJBwgHCAcHBAQECgoKBwQAAA0OCgAEBAQGBwcLCQMEBAUIBAQEBAcHBwcHBwcHBwcEBAgICAgMCgkJCQkICgkDBwkIDAkKCAoJCQgJCQ0JCQkEBAQIBwQHCAcIBwQICAQECAQMCAgICAUGBAgHCgcHBwUDBQgKCgkJCQoJBwcHBwcHBwcHBwcEBAQECAgICAgICAgICAcFBwcHBQcICwsNBAUNCggHCAUFDAgIBAgHBwcNCgoKDgwHDQcHAwQHBwkHBAQHBAQHCwoJCgkJAwMDAwoKCgkJCQQEBAQEBAkGAwkICQcJCAgEBAQLCwsHBAAADxELAAQEBQcICA0LAwUFBgkEBQQECAgICAgICAgICAUFCQkJCQ8LCwsLCgkMCwMICwkNCwwKDAsKCQsKDgoJCQUEBAgIBQgJCAkIBQkJBAQIBA4JCQkJBggFCQgMCAgHBgQGCQsLCwoLDAsICAgICAgICAgICAQEBAQJCQkJCQkJCQkJCAYICAgFCAkMDA4FBQ8MCQgIBgUNCQkFCQgICA8LCwwPDggPCAgEBAgICQgFBQgEBAgRCwoLCgoDAwMDDAwMCwsLBAUFBQUFCggECwkJCAoJCQUFBQ0NDQgFAAAQEQwABAQFBwkJDgwDBQYGCQQFBAQJCQkJCQkJCQkJBQUJCQkKEAwLDAwLCgwMBQkMCg4MDAsMDAsKDAsPCwoKBQQECQkFCQoICQkFCQoEBAkEDgoKCQkGCQUKCQ0JCQcGBAYJDAwMCwwMDAkJCQkJCQgJCQkJBAQEBAoKCgoKCgoKCgoJBgkJCQUJCg0NEAUFEAwJCQkGBg4JCgUJCQkIEAwMDBAPCRAICAQECQkKCQUFCQQECBEMCwwLCwUFBQUMDAwMDAwEBQUFBQULCQQMCgoJCwoJBQUFDQ
0NCQUAABERDQAFBQUHCQkPDAMGBgcKBAYFBQkJCQkJCQkJCQkFBgoKCgoRDAwMDAsKDQwFCQwKDwwNCw0MCwoMCxALCwoGBQYJCQYJCgkJCQUJCgQECAQOCgoJCQcJBQoJDQkJCQcEBwoMDAwLDA0MCQkJCQkJCQkJCQkEBAQECgoKCgoKCgoKCgkGCQkJBQkKDQ0RBgYRDQoJCQYGDwoKBgoJCQgRDAwNERAJEQkJBAUJCQsJBgYJBQUJEQwLDAsLBQUFBQ0NDQwMDAQGBgYGBgsJBAwKCwkLCgoGBgYODg4JBgAAExQOAAUFBgkLCxAOBAYHBwsFBgUFCwsLCwsLCwsLCwYGCwsLDBMODQ4ODQwPDgQLDgwSDg8NDw4NDA4NEg0NDAYFBwsLBgsMCwwLBgwMBgYLBhIMDAwMCAsGDAsOCwsKBwYHCw4ODg0ODw4LCwsLCwsLCwsLCwYGBgYMDAwMDAwMDAwMCwgLCwsHCwwODhMGBhMPCwsLBwcRDAwFCwsLCxMODg8TEgsTCgoFBQoLDQsGBgsFBQoUDg0ODQ0EBAQEDw8PDg4OBgYGBgYGDQsEDgwNCw0MCwYGBhAQEAsGAAAVFRAABgYGCgwMEw8FBwcIDAYHBgYMDAwMDAwMDAwMBwcMDAwNFQ8ODw8ODRAPBgsPDRIPEA4QDw4NDw4UDg4NBwYHDAwHDA0MDQwHDQ0GBgwGFA0NDQ0IDAcNDBAMDAsIBggMDw8PDg8QDwwMDAwMDAwMDAwMBgYGBg0NDQ0NDQ0NDQ0MCAwMDAcMDQ8PFQcHFRAMDAwICBMNDQYMDAwLFQ8PEBUUDBULCwUGDAwODAcHDAYGCxQPDg8ODgYGBgYQEBAPDw8GBwcHBwcODAYPDA4MDg0MBwcHEhISDAcAABgYEgAHBwcLDQ0UEQUICAkOBwgHBw0NDQ0NDQ0NDQ0ICA4ODg8XEBARERAPExEGDREPFBETEBMREA8REBcQEA8IBwgODQgNDw0PDQgODwcHDQcVDw4PDwkNCA4NEw0NDAkHCQ4QEBEQERMRDQ0NDQ0NDQ0NDQ0HBwcHDw4ODg4ODg4ODg0KDQ0NCA0PEhIYCAgYEw0NDQkJFQ4PBw4NDQ0YEBATGBYNGAwMBgcNDRANCAgNBwcMFxAQEBAQBgYGBhMTExEREQcICAgICBANBxEOEA0QDw4ICAgUFBQNCAAAGxsUAAgICQ0PDxgTBgkKCxAICQgIDw8PDw8PDw8PDwkJEBAQEBoTExMTEhAVFAgPExAYFBUSFRMSEBQRGRISEQkICRAPCQ8QDxAPCRAQBwcPBxkQEBAQCw8JEA8VDw8OCwcLEBMTExIUFRQPDw8PDw8PDw8PDwcHBwcQEBAQEBAQEBAQDwsQDw8JDxEUFBsJCRsVDw8QCgoYEREIEA8PDxsTExUbGQ8bDg4ICA8PEg8JCQ8ICA4bExITEhIICAgIFRUVFBQUBwkJCQkJEg8HFBESDxIREAkJCRcXFw8JAAAdHhYACAgKDhAQGhUGCgoLEQgKCAgQEBAQEBAQEBAQCgoRERESHBQVFBUTEhcVCBAVEhgVFhMWFRMSFRQbExMSCggLERAKEBIQEhAKEhIICBAIGhISEhILEAoSEBYPEA8LBwsRFBQUExUWFRAQEBAQEBAQEBAQCAgICBISEhISEhISEhIQDBEQEAoQEhUVHQoKHRcQEBALCxoREgkREBAQHRQUFh0bEB4PDwgIEBATEAoKEAgIDx0UExQTEwgICAgWFhYVFRUICgoKCgoTEAgVEhMQExIRCgoKGBgYEAoAACAhGAAJCQsPEhIcFwcLCwwTCQsJCRISEhISEhISEhILCxMTExQfFhcXFxUUGRcJEhcUGxcZFRkXFRMXFR4VFRQLCQsTEgsSFBIUEgsUFAkJEgkcFBQUFAwSCxMRGRISEAwJDBMWFhcVFxkXEhISEhISEhISEhIJCQkJFBQUFBQUExMTExINEhISCxIUGBggCwsgGRISEg
wMHBQUCxMSEhIgFhYZIB4SIRAQCAkSEhUSCwsSCQkQIRYVFhUVCQkJCRkZGRcXFwkLCwsLCxUSCBcTFRIVFBMLCwsbGxsSCwAAISIZAAkJCxASEhwYBwsLDRMJCwkJEhISEhISEhISEgsLExMTFCAYGBgYFhQaGAkSFxQdFxoWGhgVFBgWHxYWFAsJDBMSCxIUEhQSCxQUCQkSCR0UFBQUDRILFBIaEhIRDQkNExgYGBYXGhgSEhISEhISEhISEgkJCQkUFBQUFBQUFBQUEg0TEhIMEhQYGCELCyEaExITDAwdFBQLExISEiEYGBohHxIiEREKCRISFhILCxIJCREiGBYYFhYJCQkJGhoaGBgYCQsLCwsLFRIJGBQWEhYUEwsLCxwcHBILAAAlJhwACgoMEhUVIRsIDA0OFgoMCgoVFRUVFRUVFRUVDAwWFhYXJBsbGxsZFx0bChUbFx4bHRkdGxkXGxkjGBkXDAoMFhUMFRcVFxUMFxcKChUKIBcXFxcOFQwXFR0VFRMOCg4VGxsbGRsdGxUVFRUVFRUVFRUVCgoKChcXFxcXFxcXFxcVDxUVFQwUFxsbJQwMJR0VFRUODiEXFwwWFRUVJRsbHSUjFCYTEwoKFBUZFQwMFQoKEyUbGRsZGQoKCgodHR0bGxsKDAwMDAwZFQobFxkVGRcWDAwMHx8fFAwAACosIAAMDA4UFxckHgkODxAZDA4MDBcXFxcXFxcXFxcODhkZGRoqHh4eHhwaIR4LFx4aJB4hHCEeHBoeHCgcHBoODA8ZFw4XGhcaFw4aGgwMFwwlGhoaGhAWDhoXIRcXFRAMEBkeHh4cHiEeFxcXFxcXFxcXFxcMDAwMGhoaGhoaGhoaGhcRGBcXDhcaHx8qDg4qIRcXGBAPJRkaDhkXFxcqHh4hKigXKxUVCwwXFxwXDg4XDAwVLB4cHhwcCwsLCyEhIR4eHgwODg4ODhwWCx4aHBccGRkODg4jIyMXDgAALi8jAA0NDxYaGichCg8QEhsNDw0NGhoaGhoaGhoaGg8PGxsbHC0hISEhHxwkIQ0aIRwnISQfJCEfHCEfKx8fHA8NDxsaDxocGhwaDxwcDQ0aDSgcHBwcEhkPHBokGRoXEg0SGyEhIR8hJCEaGhoaGhoaGhoaGg0NDQ0cHBwcHBwcHBwcGhIaGhoPGRwiIi4PDy4kGRobEREpHBwPGxoaGi4hISQuKxkvFxcMDRkaHxoPDxoNDRcsIR8hHx8NDQ0NJCQkISEhDQ8PDw8PHxkNIRwfGh8cGw8PDyYmJhkPAAAyMyYADg4RGBwcLCQLERETHQ4RDg4cHBwcHBwcHBwcEREdHR0fMSQkJCQhHyckDRwkHyokJyEnJCEfJCEvISEfEQ4RHRwRHB8cHxwRHx8ODhwOLB8fHx8THBEfHCccHBkTDRMdJCQkISQnJBwcHBwcHBwcHBwcDg4ODh8fHx8fHx8fHx8cFB0cHBEcHyUlMhERMicbHB0TEiwfHxEdHBwcMiQkJzIvHDMZGQ0OGxwhHBERHA4OGTIkISQhIQ0NDQ0nJyckJCQOEREREREhHA0kHyEcIR8dERERKioqHBEAADY3KQAPDxIaHh4vJwwSEhUgDxIPDx4eHh4eHh4eHh4SEiAgICE1JycnJyQhKicPHichLCcqJConJCEnJDMkJCESDxIgHhIeIR4hHhIhIQ8PHg8wISEhIRUeEiEeKh4eGxUQFSAnJyckJyonHh4eHh4eHh4eHh4PDw8PISEhISEhISEhIR4WHh4eER4hKCg2EhI2Kh4eHxQUMCEhEiAeHh42JycqNjMeNxsbDg8eHiQeEhIeDw8bNickJyQkDw8PDyoqKicnJw8SEhISEiQeDychJB4kISASEhItLS0eEgAAOjssABAQExwgIDAqDRMUFyIQExAQICAgICAgICAgIBMTIiIiIzkqKioqJyMtKhAgKiMxKi0nLSonIyonNycnIxMQFCIgEyAjICMgEyMjEBAgED
QjIyMjFyATIyAtICAdFxAXIioqKicqLSogICAgICAgICAgIBAQEBAjIyMjIyMjIyMjIBchICAUICMrKzoTEzotICAhFRU0IyMTIiAgIDoqKi06NyA7HR0PECAgJyATEyAQEB02KicqJycQEBAQLS0tKioqEBMTExMTJyAQKiMnICcjIhMTEzAwMCATAABDRjIAExMWICUlPDAPFhYaJxMWExMlJSUlJSUlJSUlFhYnJycpQTAwMDAtKTQwEiUwKTcwNC00MC0pMC0/LS0pFhMXJyUWJSklKSUWKSkTEyUSPCkpKSkaJRYpJTQlJSIaExonMDAwLTA0MCUlJSUlJSUlJSUlExMTEykpKSkpKSkpKSklGyYlJRYlKTExQxYWQzQlJScZGDwpKRYnJSUlQzAwNEM/JUQiIhETJSUtJRYWJRMTIkYwLTAtLRISEhI0NDQwMDATFhYWFhYtJRIwKS0lLSknFhYWODg4JRYAAEtQOAAVFRkkKipENhEZGB0sFRkVFSoqKioqKioqKioZGSwsLC5KNjY2NjIuOjYVKjYuPTY6Mjo2Mi42MkcyMi4ZFRosKhkqLiouKhkuLhUVKhVDLi4uLh0qGS4qOioqJh0UHSw2NjYyNjo2KioqKioqKioqKioVFRUVLi4uLi4uLi4uLioeKyoqGCkuNzdLGRlLOioqKxwbQy4uGSwqKipLNjY6S0cpTCYmExUpKjIqGRkqFRUmUDYyNjIyFRUVFTo6OjY2NhUZGRkZGTIqFDYuMioyLiwZGRk/Pz8pGQAAU1o+ABcXHCcuLks8EhwbIDAXHBcXLi4uLi4uLi4uLhwcMDAwM1E8PDw8NzNBPBcuPDNDPEE3QTw3Mzw3Tjc3MxwXHDAuHC4zLjMuHDMzFxcuF0ozMzMzIC4cMy5BLi4qIBcgMDw8PDc8QTwuLi4uLi4uLi4uLhcXFxczMzMzMzMzMzMzLiEvLi4bLjM9PVMcHFNBLi4wHx5KMzMcMC4uLlM8PEFTTi5UKioWFy4uNy4cHC4XFypaPDc8NzcXFxcXQUFBPDw8FxwcHBwcNy4XPDM3LjczMBwcHEVFRS4cAABcXkUAGhofLDMzUkIVHx4kNhofGhozMzMzMzMzMzMzHx82NjY4WkJCQkI9OEhCGjNCOE1CSD1IQj04Qj1XPT04HxohNjMfMzgzODMfODgaGjMaUjg4ODgkMx84M0gzMy4kGiQ2QkJCPUJIQjMzMzMzMzMzMzMzGhoaGjg4ODg4ODg4ODgzJTUzMx8zOEREXB8fXEgzMzUiIlI4OB82MzMzXEJCSFxXM14uLhkaMzM9Mx8fMxoaLl5CPUI9PRoaGhpISEhCQkIaHx8fHx89MxpCOD0zPTg2Hx8fTU1NMx8AAGRoSwAcHCEvODhaSBYhISc6HCEcHDg4ODg4ODg4ODghITo6Oj1iSEhISEM9TkgcOEg9U0hOQ05IQz1IQ15DQz0hHCI6OCE4PTg9OCE9PRwcOBxZPT09PSc4IT04Tjg4MicbJzpISEhDSE5IODg4ODg4ODg4ODgcHBwcPT09PT09PT09PTgoOjg4Ijc9SkpkISFkTjc4OiUlWT09ITo4ODhkSEhOZF43ZjIyGxw3OEM4ISE4HBwyaEhDSENDHBwcHE5OTkhISBwhISEhIUM4HEg9QzhDPTohISFTU1M3IQAAAAEAAAABAAB76wYhXw889QAZCAAAAAAAo1G/NgAAAAClp1to/yD+TgihBz4AAwALAAEAAAAAAAAAAQAABz7+TgBDCAD/IP6aCKEAIQAHAAAAAAAAAAAAAAAAANwGAAEAAAAAAAI5AAACOQAAAqoAfgPLAS8EcwBiBHMAWgcdALoFxwCqAecBNgKqAIcCqv9gAx0AyQSsAKUCOQAVAqoATwI5AFoCOf+nBHMAhARzAPMEcwB8BHMAaARzADgEcwCCBHMApgRzANQEcwCHBHMAggKqAJACqgBTBKwArwSsAKUErACvBOMA/AfNAIUFx/
/pBccAUgXHAMIFxwBZBVYAVATjAFAGOQC1BccAWQI5AEcEcwA7BccAUQTjAFwGqgBTBccAXAY5ALMFVgBTBjkAswXHAFoFVgB+BOMA9gXHALsFVgDoB40A8AVW/8IFVgDrBOMAMwKqABQCOQCgAqr/jgSsANcEc//tAqoBEgRzAFwE4wBKBHMAewTjAHkEcwB3AqoAbgTjAEAE4wBWAjkAUgI5/yAEcwBNAjkAUAcdAEkE4wBWBOMAfATj//UE4wB6Ax0AQgRzAC0CqgCaBOMAkARzAJkGOQCTBHP/0wRzAA0EAAAiAx0AVgI9ALADHf9SBKwAiQXH/+0Fx//pBccAwgVWAFQFxwBcBjkAswXHALsEcwBcBHMAXARzAFwEcwBcBHMARARzAFwEcwB6BHMAdwRzAHcEcwB3BHMAdwI5AFICOQBSAjkAUgI5AFIE4wBWBOMAfATjAHwE4wB8BOMAfATjAHwE4wCQBOMAkATjAJAE4wCQBHMArQMzAOEEcwB4BHMAKgRzACwCzQCmBHMAWgTjAEgF5QBZBeUAWQgAASgCqgF4AqoArQgA/74GOQCfBGQAggRzADEEnP+0AvYAqALsAJQHHQA/BOMAbATjADcCqgAYBKwApQRz/+wEcwCoBHMALggAAL4Fx//pBcf/6QY5ALMIAACNB40AdwRz//wIAAAABAABAAQAAQcCOQDfAjkA/QRkAIIEcwANBVYA6wRzAIcCqgB6AqoAFARz//4COQCTAjkAFQQAAAcIAACLBcf/8gVWAFQFx//pBVYAVAVWAFQCOQBHAjkARwI5AEcCOQBHBjkAswY5ALMGOQCzBccAuwXHALsFxwC7AjkAUgKqAHMCqgC+AqoBNAKqAA4CqgDgBVYAfgRzAC0CPQCwBccASwTjAHwFVgDrBHMADQVWAFME4//tBKwAvQKqAOsCqgCoAqoAnAasAK4GrADMBqwAmwRrAIwCqgIQAAAAAQAAAmAAAQBjAYAABgDSAAMAJP+0AAMAPP/bABQAFP9oACQAA/+0ACQAN/9oACQAOf9oACQAOv+PACQAPP9oACQAqf+PACkAD/8dACkAEf8dACkAJP+PAC8AA//bAC8AN/9oAC8AOf+PAC8AOv+PAC8APP9oAC8Aqf9oADMAA/+0ADMAD/74ADMAEf74ADMAJP9oADUAN//bADUAOv/bADUAPP/bADcAD/9oADcAEP+PADcAEf9oADcAHf9oADcAHv9oADcAJP9oADcAMv/bADcARP+0ADcARv+0ADcASP+0ADcATP/bADcAUv+0ADcAVf/bADcAVv+0ADcAWP/bADcAWv+0ADcAXP+0ADkAD/9EADkAEP+0ADkAEf9EADkAHf+0ADkAHv+0ADkAJP9oADkARP+0ADkASP+0ADkATP+0ADkAUv+0ADkAVf/bADkAWP/bADkAXP/bADoAD/9oADoAEP+0ADoAEf9oADoAHf+0ADoAHv+0ADoAJP+PADoARP/bADoASP/bADoATP/uADoAUv/bADoAVf/bADoAWP/bADoAXP/bADwAA//bADwAD/9EADwAEP9oADwAEf9EADwAHf+PADwAHv+PADwAJP9oADwARP+0ADwASP+0ADwATP+0ADwAUv+0ADwAU/+0ADwAVP+0ADwAWP+0ADwAWf+0AEkASf/bAEkAqQAlAFUAD/+PAFUAEf+PAFUAqQBMAFkAD/+PAFkAEf+PAFoAD/+0AFoAEf+0AFwAD/+0AFwAEf+0AKgAqP+0AKkAA/+0AKkAVv/bAKkAVwAlAKkAqf+0AAEAAADcAGEABwBKAAQAAgAQABQAQAAAA7ICDAACAAEAAAAoAeYAAQAAAAAAAAB/AAAAAQAAAAAAAQAFAH8AAQAAAAAAAgALAIQAAQAAAAAAAwAvAI8AAQAAAAAABAARAL4AAQAAAAAABQASAM8AAQAAAAAABgASAOEAAQAAAAAABwBiAPMAAwABBAYAAgAUAVUAAwABBAYABA
AgAWkAAwABBAcAAgAWAYkAAwABBAcABAAiAZ8AAwABBAkAAAD+AcEAAwABBAkAAQAKAr8AAwABBAkAAgAWAskAAwABBAkAAwBeAt8AAwABBAkABAAiAz0AAwABBAkABQAkA18AAwABBAkABgAkA4MAAwABBAkABwDEA6cAAwABBAoAAgAeBGsAAwABBAoABAAqBIkAAwABBAsAAgAkBLMAAwABBAsABAAwBNcAAwABBAwAAgAaBQcAAwABBAwABAAmBSEAAwABBBAAAgAiBUcAAwABBBAABAAuBWkAAwABBBMAAgAWBZcAAwABBBMABAAiBa0AAwABBBQAAgAcBc8AAwABBBQABAAoBesAAwABBB0AAgAUBhMAAwABBB0ABAAgBicAAwABCBYAAgAeBkcAAwABCBYABAAqBmUAAwABDAoAAgAeBo8AAwABDAoABAAqBq0AAwABDAwAAgAaBtcAAwABDAwABAAmBvFUeXBlZmFjZSCpIFRoZSBNb25vdHlwZSBDb3Jwb3JhdGlvbiBwbGMuIERhdGEgqSBUaGUgTW9ub3R5cGUgQ29ycG9yYXRpb24gcGxjL1R5cGUgU29sdXRpb25zIEluYy4gMTk5MC0xOTkyLiBBbGwgUmlnaHRzIFJlc2VydmVkQXJpYWxCb2xkIEl0YWxpY01vbm90eXBlOkFyaWFsIEJvbGQgSXRhbGljOnZlcnNpb24gMShNaWNyb3NvZnQpQXJpYWwgQm9sZCBJdGFsaWNNUyBjb3JlIGZvbnQ6djEuMDBBcmlhbC1Cb2xkSXRhbGljTVRBcmlhbKggVHJhZGVtYXJrIG9mIFRoZSBNb25vdHlwZSBDb3Jwb3JhdGlvbiBwbGMgcmVnaXN0ZXJlZCBpbiB0aGUgVVMgUGF0ICYgVE0gT2ZmLiBhbmQgZWxzZXdoZXJlLgBmAGUAZAAgAGsAdQByAHMAaQB2AEEAcgBpAGEAbAAgAGYAZQBkACAAawB1AHIAcwBpAHYARgBlAHQAdAAgAEsAdQByAHMAaQB2AEEAcgBpAGEAbAAgAEYAZQB0AHQAIABLAHUAcgBzAGkAdgBUAHkAcABlAGYAYQBjAGUAIACpACAAVABoAGUAIABNAG8AbgBvAHQAeQBwAGUAIABDAG8AcgBwAG8AcgBhAHQAaQBvAG4AIABwAGwAYwAuACAARABhAHQAYQAgAKkAIABUAGgAZQAgAE0AbwBuAG8AdAB5AHAAZQAgAEMAbwByAHAAbwByAGEAdABpAG8AbgAgAHAAbABjAC8AVAB5AHAAZQAgAFMAbwBsAHUAdABpAG8AbgBzACAASQBuAGMALgAgADEAOQA5ADAALQAxADkAOQAyAC4AIABBAGwAbAAgAFIAaQBnAGgAdABzACAAUgBlAHMAZQByAHYAZQBkAEEAcgBpAGEAbABCAG8AbABkACAASQB0AGEAbABpAGMATQBvAG4AbwB0AHkAcABlADoAQQByAGkAYQBsACAAQgBvAGwAZAAgAEkAdABhAGwAaQBjADoAdgBlAHIAcwBpAG8AbgAgADEAKABNAGkAYwByAG8AcwBvAGYAdAApAEEAcgBpAGEAbAAgAEIAbwBsAGQAIABJAHQAYQBsAGkAYwBNAFMAIABjAG8AcgBlACAAZgBvAG4AdAA6AHYAMQAuADAAMABBAHIAaQBhAGwALQBCAG8AbABkAEkAdABhAGwAaQBjAE0AVABBAHIAaQBhAGwArgAgAFQAcgBhAGQAZQBtAGEAcgBrACAAbwBmACAAVABoAGUAIABNAG8AbgBvAHQAeQBwAGUAIABDAG8AcgBwAG8AcgBhAHQAaQBvAG4AIABwAGwAYwAgAHIAZQBnAGkAcwB0AGUAcgBlAGQAIABpAG4AIAB0AGgAZQAgAFUAUwAgAFAAYQB0ACAAJgAgAFQATQAgAE8AZgBmAC4AIABhAG4AZAAgAGUAbABzAGUAdwBoAGUAcgBlAC4ATgBlAG
cAcgBpAHQAYQAgAEMAdQByAHMAaQB2AGEAQQByAGkAYQBsACAATgBlAGcAcgBpAHQAYQAgAEMAdQByAHMAaQB2AGEATABpAGgAYQB2AG8AaQB0AHUAIABLAHUAcgBzAGkAdgBvAGkAQQByAGkAYQBsACAATABpAGgAYQB2AG8AaQB0AHUAIABLAHUAcgBzAGkAdgBvAGkARwByAGEAcwAgAEkAdABhAGwAaQBxAHUAZQBBAHIAaQBhAGwAIABHAHIAYQBzACAASQB0AGEAbABpAHEAdQBlAEcAcgBhAHMAcwBlAHQAdABvACAAQwBvAHIAcwBpAHYAbwBBAHIAaQBhAGwAIABHAHIAYQBzAHMAZQB0AHQAbwAgAEMAbwByAHMAaQB2AG8AVgBlAHQAIABDAHUAcgBzAGkAZQBmAEEAcgBpAGEAbAAgAFYAZQB0ACAAQwB1AHIAcwBpAGUAZgBIAGEAbAB2AGYAZQB0ACAASwB1AHIAcwBpAHYAQQByAGkAYQBsACAASABhAGwAdgBmAGUAdAAgAEsAdQByAHMAaQB2AEYAZQB0ACAASwB1AHIAcwBpAHYAQQByAGkAYQBsACAARgBlAHQAIABLAHUAcgBzAGkAdgBOAGUAZwByAGkAdABvACAASQB0AOEAbABpAGMAbwBBAHIAaQBhAGwAIABOAGUAZwByAGkAdABvACAASQB0AOEAbABpAGMAbwBOAGUAZwByAGkAdABhACAAQwB1AHIAcwBpAHYAYQBBAHIAaQBhAGwAIABOAGUAZwByAGkAdABhACAAQwB1AHIAcwBpAHYAYQBHAHIAYQBzACAASQB0AGEAbABpAHEAdQBlAEEAcgBpAGEAbAAgAEcAcgBhAHMAIABJAHQAYQBsAGkAcQB1AGUAAAAAAgAA//QAAP8nANcAAAAAAAAAAAAAAAAAAAAAAAAAAADcAAAAAAAAAAMABAAFAAYABwAIAAkACgALAAwADQAOAA8AEAARABIAEwAUABUAFgAXABgAGQAaABsAHAAdAB4AHwAgACEAIgAjACQAJQAmACcAKAApACoAKwAsAC0ALgAvADAAMQAyADMANAA1ADYANwA4ADkAOgA7ADwAPQA+AD8AQABBAEIAQwBEAEUARgBHAEgASQBKAEsATABNAE4ATwBQAFEAUgBTAFQAVQBWAFcAWABZAFoAWwBcAF0AXgBfAGAAYQBiAGMAZABlAGYAZwBoAGkAagBrAGwAbQBuAG8AcABxAHIAcwB0AHUAdgB3AHgAeQB6AHsAfAB9AH4AfwCAAIEAggCDAIQAhQCGAIcAiACJAIoAiwCMAI0AjgCQAJEAkwCWAJcAnQCeAKAAoQCiAKMApACmAKkAqgCrAK0ArgCvALAAsQCyALMAtAC1ALYAtwC4ALoAuwC9AL4AvwDCAQIAxADFAMYAxwDIAMkAygDLAMwAzQDOAM8A0ADRANMA1ADVANYA1wDYANkA3QDeAOEA5ADlAOgA6QDqAOsA7ADtAO4A8ADxAPIA8wD0APUA9gDaAMMOcGVyaW9kY2VudGVyZWQAAAC6ABL/wALcskBBMrn/wALcsjk8Mrn/wALeszxBMtRBPQLeAAEAMALcAEAC3ABQAtwAYALcANAC3ADgAtwA8ALcAAcAAALcAJAC3ACgAtwAsALcAAQAkALLAAEAkALIAAEAQALLAAEAQALIAAEAMALLAAEAMALIAAEAIALLAAEAIALIAAEAQAKlAAECpQB2AJACpACgAqQAAgKkQCVf4AbgBwK/Br8HAq8GrwcCnwafBwJPBk8HAg8GDwcCrwavBwIPQTMCNgBPAjYAjwI2AAMAzwI2AAEArwI2AAEALwI2AD8CNgBfAjYAfwI2AO8CNgAFABACNQB/AjUAAgAPAjUALwI1ANACNQADAH8CNQABAD8CNQBPAjUAAgI3AjcCNgI2AjUCNf/AAsyyITQyuf/AAsuyITQyuf/AAsqyITQyuf/AAsmyIT
Qyuf/AAsiyITQyuP/As20aPDK4/8Cz6Ro1Mrn/wAFbsho1Mrj/wLN8GjUyuP/As3YaNTK4/8CzYBo1Mrj/wLMuGjUyuP/Asio0M7j/wLIqMzO4/8CyKjIzuP/AsioxM7j/wLIqMDO4/8CyKi8zuP/AsioqM7j/wLIqKTO4/8CyKigzuP/AsiohM7j/wLIqFzO4/8CyKhYzuP/AsioVM7j/wLIqFDO4/8CyKhMzuP/AsioSM7j/wLIqDTO4/8CyKgwzuP/AsioLM7j/wLMqGjUyuP/Asic0M7j/wLInMzO4/8CyJzIzuP/AsicxM7j/wLInMDO4/8CyJy8zuP/AsicqM7j/wLInKTO4/8CyJygzuP/AsichM7j/wLInFzO4/8CyJxYzuP/AsicVM7j/wLInFDO4/8CyJxMzuP/AsicSM7j/wLInDTO4/8CyJwwzuP/AsicLM7j/wLMnGjUyuP/AsyEaNTK4AsyyJDUfuALLsiQ1H7gCyrIkNR+4AsmyJDUfuALIQAskNR9tJDwf6SQ1H7gBW0AjJDUffCQ1H3YkNR9gJDUfLiQ1HyokNR8nJDUfISQ1H48/PB+4ARm2JDwf9yQ1H7gBs7IkNR+4AauyJDUfuAFWsiQ1H7gBVbIkNR+4ARtATSQ1H/okNR/qJDUf0iQ1H3ckNR9uJDUfVyQ1H0wkNR9DJDUfPSQ1HzUkNR8BABLgAfABAhJwAYABkAEDAQEACQECAAgAFxcAAAASEQhAuwIWAAAACQKJsmkTH7gBtbIoZx9BFQG0ACgEAQAfAbMBXwQBAB8BsABpBAEAHwGrACcBJQAfAaoAJwFWAB8BorIqnh+4AZ+yKjIfuAGdsiopH7gBZbIoHR+4AWSyKCAfuAFjsigwH7gBYbIoQR+4AVuyJ54fQQkBVwAnCAEAHwFWACoBmgAfAVWyKokfuAFUsiqJH7gBU7IqQx+4AR+yKCAfuAEesiiTH0ELAR0AaQKrAB8BGwAnAqsAHwEZACoCq7If+ie4BAGyH/knuAKrth/3Kk8f6iq4CAG2H+kqeR/VKLgCAUAPH9QuzR/SIc0fwygvH8JpuAKrQAsfwGnNH74qTx+xJLgEAbIfmiq4AVZACx+ZKjgfkSo1H3wuuAQBQAsfdy7NH3Yqqx9wKLgCq7Ifbx+4BAGyH24huAGaQAsfbSeTH2UqgR9gJ7gBmrYfXyoqH1cuuAElsh9SabgCAbIfTC64AVa2H0shzR9JabgCq0ALH0cqKx9Eac0fQyq4CAGyH0EouAQBsh9AJ7gBAUAbHz0h5B87KjgfNy67HzUqOx8xLuQfIypFHyJpuAFWth9VDQkNCZC4ASNANgeQ3QeQcgeQVQeQNAeQLweQKweQJgeQJQeQHgeQHQcUCBIIEAgOCAwICggICAYIBAgCCAAIFLj/4EAsAAABABQGEAAAAQAGBAAAAQAEEAAAAQAQAgAAAQACAAAAAQAAAgEIAgBKABKwEwNLAktTQrA4K0u4CABSsDcrS7AIUFtYsQEBjlmwOCtLsMBjAEtiILD2UyO4AQpRWrAFI0IBsBJLAEtUQhiwAoi4AQBUWLgBGbEBAY6FG7ASQ1i5AAEBGYWNG7kAAQEZhY1ZWUNYugCfAhYAAXNZABZ2Pxg/Ej4ROUZEPhE5RkQ+ETlGRD4ROUZEPhE5RmBEPhE5RmBEKysrKysrKysrKysYKysrKysrKysrKyuwNytLUHm5AB8BqLMHHzYHKytLU3m5AJABqLMHkDYHKysYHbCWS1NYsKodWbAyS1NYsP8dWUuwiVMgXFi5AhgCFkVEuQIXAhZFRFlYuQSzAhhFUli5AhgEs0RZWUu4AZpTIFxYuQAgAhhFRLkAJAIYRURZWLkOCAAgRVJYuQAgDghEWVlLuAKrUyBcWLkAHwIXRUS5ACgCF0VEWVi5GKUAH0VSWLkAHxilRFlZS7gEAVMgXFixaSBFRLEgIEVEWVi5IwAAaUVSWLkAaSMARFlZS7gEAVMgXF
i5AV8AJEVEsSQkRURZWLkjoAFfRVJYuQFfI6BEWVlLsCtTIFxYsScnRUSxLidFRFlYuQEcACdFUli5ACcBHERZWUuwNVMgXFixJydFRLEhJ0VEWVi5AV8AJ0VSWLkAJwFfRFlZS7CMUyBcWLEnJ0VEsSonRURZWLkDqgAnRVJYuQAnA6pEWVkrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrK2VCRWlTQgFLUFixCABCWUNcWLEIAEJZFhBwPrASQ1i5DRk+pRu6ANID6wALK1mwCiNCsAsjQgA/PxgrEDwBL11DXFiyfwEBXVldQ1xYsu8BAV1ZBgywBiNCsAcjQrASQ1i5OyEYfhu6BAABqAALK1mwDCNCsA0jQrASQ1i5LUEtQRu6BAAEAAALK1mwDiNCsA8jQrASQ1i5GH47IRu6AagEAAALK1mwECNCsBEjQgCwNysrKysrKysrKysrKysrKysrKysrKysrKysrKysrACsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysAGEVpREVpREVpRHNzdHVzc3N0c3R0dHR0c0VEc0VEdABLsCpTS7A4UVpYsQcHRbBAYERZAEuwLlNLsDhRWlixAwNFsEBgRLEJCUWwQGFEWXNzc3Nzc3NzsDcrdHV0KysrQ1xYQDFAKjQzQCozM0AqMjNAKjEzQCowM0AqLzNAJzQzQCczM0AnMjNAJzEzQCcwM0AnLzOgQQoCywABAKACyAABAJ8CywABAJ8CyEAzAUAqKjNAKikzQCcqM0AnKTNAKhIzQCcSM0AqKDNAJygzQCohM0AqHTUyQCcdNTJAJyEzACsrKysrKysrKysrK3Nzc3MrKysrKysrKysrKytZAAAAAAIAAQAAAAAAFAADAAEAAAEaAAABBgAAAQAAAAAAAAABAgAAAAIAAAAAAAAAAAAAAAAAAAABAAADBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhIiMkJSYnKCkqKywtLi8wMTIzNDU2Nzg5Ojs8PT4/QEFCQ0RFRkdISUpLTE1OT1BRUlNUVVZXWFlaW1xdXl9gYQBiY2RlZmdoaWprbG1ub3BxcnN0dXZ3eHl6e3x9fn+AgYKDhIWGh4iJiouMjY4Aj5AAkQAAkpMAAAAAAJSVAJaXmJmaAJsAAJydngKfoKGio6SlpqeoqaoAq6wAra6vAACwsbKztLW2t7i5uru8vb6/AMDBwsPExcYAAADHyAAAyQAEAXIAAAAkACAABAAEAH4A/wFTAWEBeAGSAsYC3CAUIBogHiAiICYgMCA6ISIiGf//AAAAIACgAVIBYAF4AZICxgLcIBMgGCAcICAgJiAwIDkhIiIZ////4wAA/1D/av80/wn9//3q4JEAAAAAAADgeOCE4HXfat6YAAEAAAAiAAAAAAAAAAAAAAAAAAAA0gDWANoAAAAAAAAAAAAAAAAAAwCZAIQAhQCtAJIAzACGAI4AiwCUAJwAmgAQAIoA2gCDAJEA1QDWAI0AkwCIANsAyADUAJUAnQDYANcA2QCYAJ8AtwC1AKAAYgBjAI8AZAC5AGUAtgC4AL0AugC7ALwAzQBmAMAAvgC/AKEAZwDTAJAAwwDBAMIAaADPANEAiQBqAGkAawBtAGwAbgCWAG8AcQBwAHIAcwB1AHQAdgB3AM4AeAB6AHkAewB9AHwAqgCXAH8AfgCAAIEA0ADSAKsAqACpALIApgCnALMAggCwAIclugAA
# """
# )
# ),
# size=60,
# )
# def genCapcha() -> Tuple[int, str]:
#     """Generate a random 6-digit code; return it with a base64-encoded PNG."""
#     code = random.randint(111111, 999999)
#     img = Image.new("RGB", size=(300, 120), color="#ffffff")
#     draw = ImageDraw.Draw(img)
#     draw.text((20, 45), text=str(code), font=font, fill="#000000")
#     buf = io.BytesIO()
#     img.save(buf, format="PNG")
#     return code, base64.b64encode(buf.getvalue()).decode()
# @ignore_botself
# @these_msgtypes(MsgTypes.TextMsg)
# def receive_group_msg(ctx: GroupMsg):
# userKey = "{}_{}".format(ctx.FromUserId, ctx.FromGroupId)
# if userKey not in new_users:
# return
# content = ctx.Content
# with lock:
# if content.isdigit() and int(content) == new_users[userKey]:
# new_users.pop(userKey)
# Action(ctx.CurrentQQ).sendGroupText(
# ctx.FromGroupId,
# content="验证成功, 成为正式群员, 欢迎你!",
# atUser=ctx.FromUserId,
# )
# return
# if content == "看不清":
# code, capcha = genCapcha()
# with lock:
# new_users[userKey] = code
# Action(ctx.CurrentQQ).sendGroupPic(
# ctx.FromGroupId,
# picBase64Buf=capcha,
# content="验证码已刷新,请尽快验证! 时间不多啦! \n(验证成功才会回复提示! 发送 看不清 可刷新验证码)",
# atUser=ctx.FromUserId,
# )
# def receive_events(ctx: EventMsg):
# join_data = ep.group_join(ctx)
# if join_data is None:
# return
# userKey = "{}_{}".format(join_data.UserID, ctx.FromUin)
# code, capcha = genCapcha()
# with lock:
# new_users[userKey] = code
# Action(ctx.CurrentQQ).sendGroupPic(
# ctx.FromUin,
# picBase64Buf=capcha,
# content=f"请在{wait_time}分钟内发送验证码数字!否则将踢出本群!\n(验证成功才会回复提示!发送 看不清 可刷新验证码)",
# atUser=join_data.UserID,
# )
# time.sleep(wait_time * 60)
# if userKey in new_users:
# Action(ctx.CurrentQQ).sendGroupText(
# ctx.FromUin,
# content=f"由于你({join_data.UserName})在进群后{wait_time}分钟内未成功验证, 即将踢出本群!\n如果没踢出去,说明本机器人不是管理员,请自行退群或联系群主管理员办理退群手续!谢谢~~",
# atUser=join_data.UserID,
# )
# time.sleep(1)
# Action(ctx.CurrentQQ).driveUserAway(ctx.FromUin, join_data.UserID)
# with lock:
# new_users.pop(userKey)
| 1,018.329897 | 95,842 | 0.955203 | 2,991 | 98,778 | 31.534269 | 0.832497 | 0.000679 | 0.000954 | 0.000509 | 0.032804 | 0.03134 | 0.030195 | 0.030195 | 0.030195 | 0.030195 | 0 | 0.088143 | 0.008453 | 98,778 | 96 | 95,843 | 1,028.9375 | 0.874856 | 0.998117 | 0 | null | 0 | null | 0 | 0 | null | 1 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
379a6f76becec44526c908fbdfc6b798179d6f46 | 71 | py | Python | src/models/park_lot/__init__.py | TDSVirtru/parkinglot | 3895b4019ad70a1613e30483e98ac823e5cc8d64 | [
"MIT"
] | null | null | null | src/models/park_lot/__init__.py | TDSVirtru/parkinglot | 3895b4019ad70a1613e30483e98ac823e5cc8d64 | [
"MIT"
] | null | null | null | src/models/park_lot/__init__.py | TDSVirtru/parkinglot | 3895b4019ad70a1613e30483e98ac823e5cc8d64 | [
"MIT"
] | null | null | null | """The park lot model."""
from .park_lot import ParkLot # noqa: F401
| 17.75 | 43 | 0.676056 | 11 | 71 | 4.272727 | 0.818182 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.183099 | 71 | 3 | 44 | 23.666667 | 0.758621 | 0.43662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37aa8e1708bfae769317648f2e36abc121d99458 | 244 | py | Python | pyleecan/Methods/Slot/SlotW27/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 95 | 2019-01-23T04:19:45.000Z | 2022-03-17T18:22:10.000Z | pyleecan/Methods/Slot/SlotW27/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 366 | 2019-02-20T07:15:08.000Z | 2022-03-31T13:37:23.000Z | pyleecan/Methods/Slot/SlotW27/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 74 | 2019-01-24T01:47:31.000Z | 2022-02-25T05:44:42.000Z | from ....Methods.Slot.Slot import SlotCheckError
class S27_W01CheckError(SlotCheckError):
    """ """
    pass


class S27_W12CheckError(SlotCheckError):
    """ """
    pass


class S27_W03CheckError(SlotCheckError):
    """ """
    pass
| 12.2 | 48 | 0.651639 | 21 | 244 | 7.428571 | 0.52381 | 0.153846 | 0.294872 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.213115 | 244 | 19 | 49 | 12.842105 | 0.75 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.428571 | 0.142857 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
808a43258a183e8275622f447b21361ddeda18bb | 28 | py | Python | venv/Lib/site-packages/mpl_toolkits/mplot3d/__init__.py | arnoyu-hub/COMP0016miemie | 59af664dcf190eab4f93cefb8471908717415fea | [
"MIT"
] | 353 | 2020-12-10T10:47:17.000Z | 2022-03-31T23:08:29.000Z | venv/Lib/site-packages/mpl_toolkits/mplot3d/__init__.py | arnoyu-hub/COMP0016miemie | 59af664dcf190eab4f93cefb8471908717415fea | [
"MIT"
] | 80 | 2020-12-10T09:54:22.000Z | 2022-03-30T22:08:45.000Z | venv/Lib/site-packages/mpl_toolkits/mplot3d/__init__.py | arnoyu-hub/COMP0016miemie | 59af664dcf190eab4f93cefb8471908717415fea | [
"MIT"
] | 63 | 2020-12-10T17:10:34.000Z | 2022-03-28T16:27:07.000Z | from .axes3d import Axes3D
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 0.178571 | 28 | 1 | 28 | 28 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
80cab195cd345cd0a043cdb7923f616349f72633 | 191 | py | Python | tests/init_test.py | bertrandchenal/tanker | b955311dc8f05f8bb3c0b391e169974e5c6a11b2 | [
"0BSD"
] | 1 | 2019-11-12T08:35:10.000Z | 2019-11-12T08:35:10.000Z | tests/init_test.py | bertrandchenal/tanker | b955311dc8f05f8bb3c0b391e169974e5c6a11b2 | [
"0BSD"
] | 1 | 2019-11-20T09:00:33.000Z | 2019-11-20T09:00:33.000Z | tests/init_test.py | bertrandchenal/tanker | b955311dc8f05f8bb3c0b391e169974e5c6a11b2 | [
"0BSD"
] | 1 | 2019-11-19T21:53:16.000Z | 2019-11-19T21:53:16.000Z | from tanker import create_tables
from .base_test import session, members
def test_create_tables(session):
    # Call create_tables a second time, this should be harmless
    create_tables()
| 27.285714 | 63 | 0.790576 | 28 | 191 | 5.178571 | 0.642857 | 0.331034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162304 | 191 | 6 | 64 | 31.833333 | 0.90625 | 0.298429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
03ea9b27863f9e412d564d71e832f2e0a9b64ed9 | 152 | py | Python | api/src/main.py | ian0cordova/dungeon-suite | c32ce668b022c8f21e830f88b0c4d64aa01ecfaf | [
"MIT"
] | null | null | null | api/src/main.py | ian0cordova/dungeon-suite | c32ce668b022c8f21e830f88b0c4d64aa01ecfaf | [
"MIT"
] | 1 | 2021-07-27T15:36:08.000Z | 2021-07-27T16:49:04.000Z | api/src/main.py | ian0cordova/dungeon-suite | c32ce668b022c8f21e830f88b0c4d64aa01ecfaf | [
"MIT"
] | null | null | null | from app import app
from models.names import Names
import views.names
if __name__ == '__main__':
    # Names.create_table(True)
    app.run(debug=True) | 21.714286 | 30 | 0.736842 | 23 | 152 | 4.478261 | 0.608696 | 0.213592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164474 | 152 | 7 | 31 | 21.714286 | 0.811024 | 0.157895 | 0 | 0 | 0 | 0 | 0.062992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6
2089b09f505151a2fba9b986823c96d3f1683d25 | 40 | py | Python | Ann.py | Blocktoaster/Anuta | 0bf60ae286d8779cb46d744b96cf3bb8d6fc322c | [
"CC0-1.0"
] | null | null | null | Ann.py | Blocktoaster/Anuta | 0bf60ae286d8779cb46d744b96cf3bb8d6fc322c | [
"CC0-1.0"
] | null | null | null | Ann.py | Blocktoaster/Anuta | 0bf60ae286d8779cb46d744b96cf3bb8d6fc322c | [
"CC0-1.0"
] | null | null | null | print("привет")
KZKZKZKZKZKZKEZQZQZQZQZ
| 13.333333 | 23 | 0.85 | 3 | 40 | 11.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 40 | 2 | 24 | 20 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
20acc33caf972abcd4b05dc852e2c092f2d14358 | 102 | py | Python | tests/test_settings/example_dummy_registrant.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | 2 | 2021-11-10T22:38:49.000Z | 2021-12-03T08:09:01.000Z | tests/test_settings/example_dummy_registrant.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | null | null | null | tests/test_settings/example_dummy_registrant.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | null | null | null | from tests.test_settings.dummy_registry import dummy_register
dummy_register('dummy', 'MODULE_NAME')
| 25.5 | 61 | 0.843137 | 14 | 102 | 5.785714 | 0.714286 | 0.320988 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068627 | 102 | 3 | 62 | 34 | 0.852632 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
20b20920fe9b7630a848860aebfd8f5461fd5a4d | 195 | py | Python | blocks/postgres/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | blocks/postgres/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | blocks/postgres/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | from blocks.db.types import Row, Query, Table
from blocks.postgres.app import PostgresApp
from blocks.postgres.sources import PostgresReader
from blocks.postgres.processors import PostgresWriter
| 39 | 53 | 0.85641 | 26 | 195 | 6.423077 | 0.576923 | 0.239521 | 0.323353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 195 | 4 | 54 | 48.75 | 0.943503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4582fd4d46e565bd1b0c505181cd299996c24c07 | 11,669 | py | Python | checkpoint_tests.py | tianhuil/ediblepickle | f80038370d88ed3fdc421b695dc54daee15fa843 | [
"Apache-2.0"
] | null | null | null | checkpoint_tests.py | tianhuil/ediblepickle | f80038370d88ed3fdc421b695dc54daee15fa843 | [
"Apache-2.0"
] | null | null | null | checkpoint_tests.py | tianhuil/ediblepickle | f80038370d88ed3fdc421b695dc54daee15fa843 | [
"Apache-2.0"
] | null | null | null | import logging
import os
from tempfile import gettempdir
from time import sleep
from nose.tools import timed
from ediblepickle import checkpoint
from string import Template
__author__ = 'pavan'
SLEEP_TIME = 5
def save_ints(integers, f):
for i in integers:
f.write('%d' %i)
f.write('\n')
def load_ints(f):
return [int(x) for x in f.read().split('\n') if x != '']
# SECTION 1: TEMPLATE KEY TESTING

# Create a scenario where the checkpoint is first created. This is achieved by setting refresh=True.
@checkpoint(key=Template('n{0}_start${start}_stride${stride}.txt'), pickler=save_ints, unpickler=load_ints, refresh=True)
def expensive_function_creates_checkpoint(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the checkpoint is loaded after creation. This is not truly achievable, since the @checkpoint
# will be created if it doesn't exist. We are relying on the sequence of tests here. However, refresh must be set to
# False, since we don't want to recreate the file if it exists. The first test creates it and the timed second test
# uses this function to load it.
@checkpoint(key=Template('n{0}_start${start}_stride${stride}.txt'), pickler=save_ints, unpickler=load_ints,
            refresh=False)
def expensive_function_loads_checkpoint(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_template_key_checkpoint_creation():
    key = Template('n{0}_start${start}_stride${stride}.txt')
    out_file = os.path.join(gettempdir(), key.substitute(start=str(10), stride=str(2)).format(100))

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_template_key_checkpoint_loading():
    result = expensive_function_loads_checkpoint(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
# SECTION 2: STRING KEY TESTING

# Create a scenario where the @checkpoint is loaded after creation; use the string filename.
@checkpoint(key='test_file.txt', pickler=save_ints, unpickler=load_ints, refresh=False)
def expensive_function_loads_checkpoint_str(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the @checkpoint is first created as 'test_file.txt'.
@checkpoint(key='test_file.txt', pickler=save_ints, unpickler=load_ints, refresh=True)
def expensive_function_creates_checkpoint_str(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_string_key_checkpoint_creation():
    key = 'test_file.txt'
    out_file = os.path.join(gettempdir(), key)

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint_str(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_string_key_checkpoint_loading():
    result = expensive_function_loads_checkpoint_str(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
# SECTION 3: LAMBDA KEY TESTING

# Create a scenario where the @checkpoint is loaded after creation; use a lambda to build the filename.
@checkpoint(key=lambda args, kwargs: 'lambda_n%d_start%d_stride%d.txt' % (args[0], kwargs['start'], kwargs['stride']),
            pickler=save_ints, unpickler=load_ints, refresh=False)
def expensive_function_loads_checkpoint_lambda(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the @checkpoint is first created.
@checkpoint(key=lambda args, kwargs: 'lambda_n%d_start%d_stride%d.txt' % (args[0], kwargs['start'], kwargs['stride']),
            pickler=save_ints, unpickler=load_ints, refresh=False)
def expensive_function_creates_checkpoint_lambda(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_lambda_key_checkpoint_creation():
    key_func = lambda args, kwargs: 'lambda_n%d_start%d_stride%d.txt' % (args[0], kwargs['start'], kwargs['stride'])
    out_file = os.path.join(gettempdir(), key_func((100,), dict(start=10, stride=2)))

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint_lambda(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_lambda_key_checkpoint_loading():
    result = expensive_function_loads_checkpoint_lambda(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
# SECTION 4: FUNCTION NAMER TESTING (same as lambda, but just for the heck of it let's test it)
def key_maker(args, kwargs):  # remember: no *s here.
    return 'key_maker_n%d_start%d_stride%d.txt' % (args[0], kwargs['start'], kwargs['stride'])


# Create a scenario where the @checkpoint is loaded after creation; use the key_maker function to build the filename.
@checkpoint(key=key_maker, pickler=save_ints, unpickler=load_ints, refresh=False)
def expensive_function_loads_checkpoint_key_maker(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the @checkpoint is first created.
@checkpoint(key=key_maker, pickler=save_ints, unpickler=load_ints, refresh=False)
def expensive_function_creates_checkpoint_key_maker(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_key_maker_key_checkpoint_creation():
    out_file = os.path.join(gettempdir(), key_maker((100,), dict(start=10, stride=2)))

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint_key_maker(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_key_maker_key_checkpoint_loading():
    result = expensive_function_loads_checkpoint_key_maker(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
# SECTION 5: REFRESH USING LAMBDA TESTING

# Create a scenario where the @checkpoint is loaded after creation; use the string filename; refresh is a lambda object.
@checkpoint(key='test_file.txt', pickler=save_ints, unpickler=load_ints, refresh=lambda: 0)
def expensive_function_loads_checkpoint_str_refresh_lambda(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the @checkpoint is first created as 'test_file.txt'.
@checkpoint(key='test_file.txt', pickler=save_ints, unpickler=load_ints, refresh=lambda: 1)
def expensive_function_creates_checkpoint_str_refresh_lambda(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_string_key_checkpoint_creation_refresh_lambda():
    key = 'test_file.txt'
    out_file = os.path.join(gettempdir(), key)

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint_str_refresh_lambda(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_string_key_checkpoint_loading_refresh_lambda():
    result = expensive_function_loads_checkpoint_str_refresh_lambda(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
# SECTION 6: GZIP STRING KEY TESTING

# Create a scenario where the @checkpoint is loaded after creation; use the string filename.
@checkpoint(key='test_file.txt.gz', gzip=True, refresh=False)
def expensive_function_loads_checkpoint_gzip(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


# Create a scenario where the @checkpoint is first created as 'test_file.txt.gz'.
@checkpoint(key='test_file.txt.gz', gzip=True, refresh=True)
def expensive_function_creates_checkpoint_gzip(n, start=0, stride=1):
    sleep(SLEEP_TIME)
    return range(start, n, stride)


def test_gzip_checkpoint_creation():
    key = 'test_file.txt.gz'
    out_file = os.path.join(gettempdir(), key)

    # make sure you delete the file first
    try:
        os.remove(out_file)
        assert (not os.path.exists(out_file))
    except OSError:  # No such file or directory
        logging.info('File not found or deleted. Good. Starting the test...')  # we are good; ignore the exception.

    # call the function that creates the file
    result = expensive_function_creates_checkpoint_gzip(100, start=10, stride=2)
    logging.info(out_file)

    # make sure the file is created.
    assert (os.path.exists(out_file))


# This test must run in much less than 5 seconds, although there is a sleep in the loads function,
# because we are just loading. Let's time it to 1 second.
@timed(1)
def test_gzip_checkpoint_loading():
    result = expensive_function_loads_checkpoint_gzip(100, start=10, stride=2)
    assert (result == range(10, 100, 2))  # Make sure what's loaded is what should be loaded.
| 39.422297 | 121 | 0.729368 | 1,810 | 11,669 | 4.553591 | 0.09337 | 0.025479 | 0.022082 | 0.023781 | 0.903664 | 0.89869 | 0.895292 | 0.879641 | 0.844213 | 0.825285 | 0 | 0.019125 | 0.171052 | 11,669 | 295 | 122 | 39.555932 | 0.832937 | 0.327192 | 0 | 0.6125 | 0 | 0 | 0.10054 | 0.035356 | 0 | 0 | 0 | 0 | 0.1125 | 1 | 0.16875 | false | 0 | 0.04375 | 0.0125 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4599b05b8ea7512fff8975951a6fa7d0bdc93021 | 21,977 | py | Python | lib/rucio/tests/test_filter_engine.py | rcarpa/rucio | 226a9f8efb0dfe09b52607cd2333e184ab88d105 | [
"Apache-2.0"
] | 2 | 2020-02-18T22:34:24.000Z | 2022-03-09T16:26:18.000Z | lib/rucio/tests/test_filter_engine.py | Frederick9050/rucio | d59c237f533f40116026dc9f347f4fc1297f1ff0 | [
"Apache-2.0"
] | null | null | null | lib/rucio/tests/test_filter_engine.py | Frederick9050/rucio | d59c237f533f40116026dc9f347f4fc1297f1ff0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2013-2020 CERN for the benefit of the ATLAS collaboration.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors:
# - Gabriele Gaetano Fronzé <gfronze@cern.ch>, 2020
# - Rob Barnsley <rob.barnsley@skao.int>, 2021
#
# PY3K COMPATIBLE
import operator
from datetime import datetime, timedelta
import unittest
from rucio.common.exception import DuplicateCriteriaInDIDFilter
from rucio.common.config import config_get_bool
from rucio.common.types import InternalAccount, InternalScope
from rucio.common.utils import generate_uuid
from rucio.core.did import add_did
from rucio.core.did_meta_plugins import set_metadata
from rucio.db.sqla import models
from rucio.db.sqla.session import read_session
from rucio.core.did_meta_plugins.filter_engine import FilterEngine
from rucio.tests.common_server import get_vo
class TestFilterEngineDummy(unittest.TestCase):
    def test_InputSanitisation(self):
        filters = FilterEngine(' TestKeyword1 = True , TestKeyword2 = 0; 1 < TestKeyword4 <= 2', strict_coerce=False).filters
        filters_expected = [[('TestKeyword1', operator.eq, 1),
                             ('TestKeyword2', operator.eq, 0)],
                            [('TestKeyword4', operator.gt, 1),
                             ('TestKeyword4', operator.le, 2)]]
        self.assertEqual(filters, filters_expected)

        with self.assertRaises(ValueError):
            FilterEngine('did_type >= 1', strict_coerce=False)
        with self.assertRaises(ValueError):
            FilterEngine('name >= 1', strict_coerce=False)
        with self.assertRaises(ValueError):
            FilterEngine('length >= test', strict_coerce=False)
        with self.assertRaises(ValueError):
            FilterEngine('name >= *', strict_coerce=False)

    def test_OperatorsEqualNotEqual(self):
        self.assertTrue(FilterEngine('True = True', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('True != False', strict_coerce=False).evaluate())

    def test_OneSidedInequality(self):
        self.assertTrue(FilterEngine('1 < 2', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('1 <= 1', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('1 >= 1', strict_coerce=False).evaluate())

    def test_CompoundInequality(self):
        self.assertTrue(FilterEngine('3 > 2 > 1', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2 > 3', strict_coerce=False).evaluate())
        with self.assertRaises(DuplicateCriteriaInDIDFilter):
            FilterEngine('1 < 2 > 3', strict_coerce=False)

    def test_AndGroups(self):
        self.assertTrue(FilterEngine('True = True, False = False', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('True = True, False = True', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('3 > 2, 2 > 1', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2, 2 > 1', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2, 2 > 3', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2, 4 > 3 > 2', strict_coerce=False).evaluate())

    def test_OrGroups(self):
        self.assertTrue(FilterEngine('True = True; True = True', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('True = True; True = False', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('True = False; False = True', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('3 > 2; 2 > 1', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('1 > 2; 2 > 1', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2; 2 > 3', strict_coerce=False).evaluate())
        self.assertTrue(FilterEngine('1 > 2; 4 > 3 > 2', strict_coerce=False).evaluate())

    def test_AndOrGroups(self):
        self.assertTrue(FilterEngine('1 > 2, 4 > 3 > 2; True=True', strict_coerce=False).evaluate())
        self.assertFalse(FilterEngine('1 > 2, 4 > 3 > 2; True=False', strict_coerce=False).evaluate())

    def test_BackwardsCompatibilityCreatedAfter(self):
        test_expressions = {
            "created_after=1900-01-01 00:00:00": [[('created_at', operator.ge, datetime(1900, 1, 1, 0, 0))]],
            "created_after=1900-01-01T00:00:00": [[('created_at', operator.ge, datetime(1900, 1, 1, 0, 0))]],
            "created_after=1900-01-01 00:00:00.000Z": [[('created_at', operator.ge, datetime(1900, 1, 1, 0, 0))]],
            "created_after=1900-01-01T00:00:00.000Z": [[('created_at', operator.ge, datetime(1900, 1, 1, 0, 0))]]
        }
        for input_datetime_expression, filters_expected in test_expressions.items():
            filters = FilterEngine(input_datetime_expression, strict_coerce=False).filters
            self.assertEqual(filters, filters_expected)

    def test_BackwardsCompatibilityCreatedBefore(self):
        test_expressions = {
            "created_before=1900-01-01 00:00:00": [[('created_at', operator.le, datetime(1900, 1, 1, 0, 0))]],
            "created_before=1900-01-01T00:00:00": [[('created_at', operator.le, datetime(1900, 1, 1, 0, 0))]],
            "created_before=1900-01-01 00:00:00.000Z": [[('created_at', operator.le, datetime(1900, 1, 1, 0, 0))]],
            "created_before=1900-01-01T00:00:00.000Z": [[('created_at', operator.le, datetime(1900, 1, 1, 0, 0))]]
        }
        for input_datetime_expression, filters_expected in test_expressions.items():
            filters = FilterEngine(input_datetime_expression, strict_coerce=False).filters
            self.assertEqual(filters, filters_expected)

    def test_BackwardsCompatibilityLength(self):
        test_expressions = {
            'length > 0': [[('length', operator.gt, 0)]],
            'length < 0': [[('length', operator.lt, 0)]],
            'length >= 0': [[('length', operator.ge, 0)]],
            'length <= 0': [[('length', operator.le, 0)]],
            'length == 0': [[('length', operator.eq, 0)]]
        }
        for input_length_expression, filters_expected in test_expressions.items():
            filters = FilterEngine(input_length_expression, strict_coerce=False).filters
            self.assertEqual(filters, filters_expected)
class TestFilterEngineReal(unittest.TestCase):
    def setUp(self):
        if config_get_bool('common', 'multi_vo', raise_exception=False, default=False):
            self.vo = {'vo': get_vo()}
        else:
            self.vo = {}

        self.tmp_scope = InternalScope('mock', **self.vo)
        self.root = InternalAccount('root', **self.vo)

    def _create_tmp_DID(self, type='DATASET'):
        did_name = 'fe_test_did_%s' % generate_uuid()
        add_did(scope=self.tmp_scope, name=did_name, did_type='DATASET', account=self.root)
        return did_name

    @read_session
    def test_OperatorsEqualNotEqual(self, session=None):
        did_name1 = self._create_tmp_DID()
        did_name2 = self._create_tmp_DID()
        did_name3 = self._create_tmp_DID()
        set_metadata(scope=self.tmp_scope, name=did_name1, key='run_number', value=1)
        set_metadata(scope=self.tmp_scope, name=did_name2, key='run_number', value=2)

        dids = []
        q = FilterEngine('run_number=1', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 1)

        dids = []
        q = FilterEngine('run_number!=1', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 2)  # 2 and 3 (NULL is counted in not-equals)

    @read_session
    def test_OneSidedInequality(self, session=None):
        did_name = self._create_tmp_DID()
        set_metadata(scope=self.tmp_scope, name=did_name, key='run_number', value=1)

        dids = []
        q = FilterEngine('run_number > 0', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

        dids = []
        q = FilterEngine('run_number < 2', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

        dids = []
        q = FilterEngine('run_number < 0', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertNotEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

        dids = []
        q = FilterEngine('run_number > 2', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertNotEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

    @read_session
    def test_CompoundInequality(self, session=None):
        did_name = self._create_tmp_DID()
        set_metadata(scope=self.tmp_scope, name=did_name, key='run_number', value=1)

        dids = []
        q = FilterEngine('0 < run_number < 2', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

        dids = []
        q = FilterEngine('0 < run_number <= 1', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

        dids = []
        q = FilterEngine('0 <= run_number < 1', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
        dids += [did for did in q.yield_per(5)]
        dids = set(dids)
        self.assertNotEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)

    @read_session
    def test_AndGroups(self, session=None):
        did_name1 = self._create_tmp_DID()
        did_name2 = self._create_tmp_DID()
        did_name3 = self._create_tmp_DID()
        set_metadata(scope=self.tmp_scope, name=did_name1, key='run_number', value='1')
        set_metadata(scope=self.tmp_scope, name=did_name2, key='project', value="test")
        set_metadata(scope=self.tmp_scope, name=did_name3, key='run_number', value='1')
        set_metadata(scope=self.tmp_scope, name=did_name3, key='project', value="test")

        dids = []
        q = FilterEngine('run_number = 1, project = test', model_class=models.DataIdentifier).create_sqla_query(
            additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 1) # 3
dids = []
q = FilterEngine('run_number = 1, project != test', model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 1) # 1
@read_session
def test_OrGroups(self, session=None):
did_name1 = self._create_tmp_DID()
did_name2 = self._create_tmp_DID()
did_name3 = self._create_tmp_DID()
set_metadata(scope=self.tmp_scope, name=did_name1, key='run_number', value='1')
set_metadata(scope=self.tmp_scope, name=did_name2, key='project', value="test")
set_metadata(scope=self.tmp_scope, name=did_name3, key='run_number', value='1')
set_metadata(scope=self.tmp_scope, name=did_name3, key='project', value="test")
dids = []
q = FilterEngine('run_number = 1; project = test', model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 3) # 1, 2, 3
dids = []
q = FilterEngine('run_number = 1; project != test', model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 2) # 1, 3
dids = []
q = FilterEngine('run_number = 0; run_number = 1', model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 2) # 1, 3
dids = []
q = FilterEngine('run_number = 0; run_number = 3', model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 0) # none match
dids = []
q = FilterEngine('name = {}; name = {}; name = {}'.format(did_name1, did_name2, did_name3), model_class=models.DataIdentifier).create_sqla_query(
additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 3) # 1, 2, 3
@read_session
def test_AndOrGroups(self, session=None):
did_name1 = self._create_tmp_DID()
did_name2 = self._create_tmp_DID()
did_name3 = self._create_tmp_DID()
set_metadata(scope=self.tmp_scope, name=did_name1, key='run_number', value='1')
set_metadata(scope=self.tmp_scope, name=did_name2, key='project', value="test")
set_metadata(scope=self.tmp_scope, name=did_name3, key='run_number', value='1')
set_metadata(scope=self.tmp_scope, name=did_name3, key='project', value="test")
dids = []
q = FilterEngine('run_number = 1, project != test; project = test', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 3) # 1, 2, 3
dids = []
q = FilterEngine('run_number = 1, project = test; run_number != 1', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3), dids)).count(True), 2) # 2, 3
@read_session
def test_BackwardsCompatibilityCreatedAfter(self, session=None):
before = datetime.strftime(datetime.now() - timedelta(seconds=1), "%Y-%m-%dT%H:%M:%S.%fZ") # w/ -1s buffer
did_name = self._create_tmp_DID()
dids = []
q = FilterEngine('created_after={}'.format(before), model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
@read_session
def test_BackwardsCompatibilityCreatedBefore(self, session=None):
did_name = self._create_tmp_DID()
after = datetime.strftime(datetime.now() + timedelta(seconds=1), "%Y-%m-%dT%H:%M:%S.%fZ") # w/ +1s buffer
dids = []
q = FilterEngine('created_before={}'.format(after), model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
@read_session
def test_BackwardsCompatibilityLength(self, session=None):
did_name = self._create_tmp_DID()
set_metadata(scope=self.tmp_scope, name=did_name, key='length', value='10')
dids = []
q = FilterEngine('length >= 10', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
dids = []
q = FilterEngine('length > 9', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
dids = []
q = FilterEngine('length <= 10', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
dids = []
q = FilterEngine('length < 11', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name == did_name, dids)).count(True), 1)
@read_session
def test_Wildcards(self, session=None):
did_name1 = self._create_tmp_DID()
did_name2 = self._create_tmp_DID()
did_name3 = self._create_tmp_DID()
did_name4 = self._create_tmp_DID()
did_name5 = self._create_tmp_DID()
set_metadata(scope=self.tmp_scope, name=did_name1, key='project', value="test1")
set_metadata(scope=self.tmp_scope, name=did_name2, key='project', value="test2")
set_metadata(scope=self.tmp_scope, name=did_name3, key='project', value="anothertest1")
set_metadata(scope=self.tmp_scope, name=did_name4, key='project', value="anothertest2")
dids = []
q = FilterEngine('project = test*', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3, did_name4, did_name5), dids)).count(True), 2) # 1, 2
dids = []
q = FilterEngine('project = *test*', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3, did_name4, did_name5), dids)).count(True), 4) # 1, 2, 3, 4
dids = []
q = FilterEngine('project != *anothertest*', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3, did_name4, did_name5), dids)).count(True), 3) # 3, 4, 5 (NULL counted in not equals)
dids = []
q = FilterEngine('project != *test*', model_class=models.DataIdentifier).create_sqla_query(additional_model_attributes=[models.DataIdentifier.name])
dids += [did for did in q.yield_per(5)]
dids = set(dids)
self.assertEqual(list(map(lambda did: did.name in (did_name1, did_name2, did_name3, did_name4, did_name5), dids)).count(True), 1) # 5 (NULL counted in not equals)
if __name__ == '__main__':
unittest.main()
# -*- coding: utf-8 -*-
"""
Created on Thu Apr 29 09:40:12 2021
@author: Ryan Kaufman
A function module that assists in loading pulse sequences into the AWG
and streamlines Alazar acquisition.
"""
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.patches import Ellipse
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from matplotlib.colors import Normalize as Norm
from plottr.data.datadict_storage import all_datadicts_from_hdf5
from scipy.signal import butter, lfilter, sosfilt
from scipy.stats import tstd
from scipy.special import gamma
# plotting variance in the data wrt time
gfit = lambda x, s, m, A: A*np.exp(-0.5*((x-m)/s)**2)
def pfit(x, m, A, scale):
return A*np.power(m, x*scale)*np.exp(-m)/gamma(x*scale)
def custom_var(data_slice, debug = False, timestamp = 2000, trace = 0, method = 'gauss', title = '', fit = 0):
'''
We don't fully trust numpy's var because, by eye, the variance doesn't look like it
changes that much. This function instead takes an array that is [nrecords x 1], builds
a histogram, and fits that histogram to a Gaussian or Poisson distribution to extract
the variance, which is what our eyes are doing.
'''
plt.figure()
plt.title(f"trace {trace} time {timestamp}ns {title} distribution over records")
h1, bins = np.histogram(data_slice, bins = 100, density = True)
plt.plot(bins[:-1], h1, '.', label = 'data')
if fit:
if method == 'gauss':
popt, pcov = curve_fit(gfit, bins[:-1], h1, maxfev = 10000, p0 = [np.max(np.abs(bins))/10, bins[50], 150])
elif method == 'poisson':
popt, pcov = curve_fit(pfit, bins[:-1], h1)
if fit and debug:  # the fit curves below need popt, which only exists when fit is set
if method == 'gauss':
plt.plot(bins[:-1], gfit(bins[:-1], *popt), label = f'{method} fit')
elif method == 'poisson':
plt.plot(bins[:-1], pfit(bins[:-1], *popt), label = f'{method} fit')
#have to be careful here, because this fit is to a scalable x-axis
#so in terms of the real voltage, the mean is actually popt[0]/popt[-1]
#the second parameter A should be irrelevant
popt[0] = popt[0]/popt[-1]
plt.title(f"trace {trace} time {timestamp}ns {title} distribution over records")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=5)
if fit:
    return np.abs(popt[0])
return None  # no fit requested, so no fitted-variance estimate
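# A minimal, self-contained sketch (not part of this module) of the histogram-fit
# variance extraction that custom_var performs; `gauss` mirrors the module-level
# `gfit` lambda, on synthetic data with a known sigma:

```python
import numpy as np
from scipy.optimize import curve_fit

# same Gaussian model as the module-level gfit lambda
def gauss(x, s, m, A):
    return A * np.exp(-0.5 * ((x - m) / s) ** 2)

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=2.0, size=20000)  # true sigma = 2.0

# histogram the samples and fit the Gaussian to the bin centers
counts, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(gauss, centers, counts, p0=[1.0, 0.0, 0.2], maxfev=10000)
sigma_fit = abs(popt[0])  # fitted standard deviation, close to 2.0
```

# Using bin centers (rather than the left edges used above) avoids a small
# systematic offset of half a bin width in the fitted mean.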
def plot_custom_stats_from_filepath(filepath, timeslice = 100, trace = 0, debug = 0, fit = 0):
dicts = all_datadicts_from_hdf5(filepath)['data']
# print(dicts)
data_name_list = [x for x in dicts.keys() if x[0] == 'I' or x[0] == 'Q']
time = np.unique(dicts['time']['values'])
time_num = np.size(time)
rec_num = np.size(dicts['time']['values'])//time_num
I_list = [name for name in data_name_list if name[0]=='I']
Q_list = [name for name in data_name_list if name[0]=='Q']
print(I_list)
Pvar_arr = []
Pvar_fit_arr = []
Pavg_arr = []
for i, (I_name, Q_name) in enumerate(zip(I_list, Q_list)):
if i == trace:
print(f"Looking at the {i, I_name, Q_name} trace")
Idata = dicts[I_name]['values'].reshape(rec_num, time_num)
Qdata = dicts[Q_name]['values'].reshape(rec_num, time_num)
Pavg = np.average(np.sqrt(Idata**2+Qdata**2), axis = 0)
Pvar = tstd(np.sqrt(Idata**2+Qdata**2), axis = 0)**2
Pvar_arr.append(Pvar[timeslice])
Pavg_arr.append(Pavg[timeslice])
Pvar_fit = custom_var(np.sqrt(Idata**2+Qdata**2)[:, timeslice], debug = debug, timestamp = timeslice*20, trace = I_name[-1], method = 'poisson', title = 'Power', fit = fit)
Pvar_fit_arr.append(Pvar_fit)
Ivar_fit = custom_var(Idata[:, timeslice], debug = debug, timestamp = timeslice*20, trace = I_name[-1], method = 'gauss', title = "I", fit = fit)
Qvar_fit = custom_var(Qdata[:, timeslice], debug = debug, timestamp = timeslice*20, trace = I_name[-1], method = 'gauss', title = 'Q', fit = fit)
# if debug:
# plt.figure()
# plt.plot(Pavg)
# plt.title("DEBUG: Average vs. sample_num")
return np.array(Pvar_arr), np.array(Pvar_fit_arr), np.array(Pavg_arr), Ivar_fit, Qvar_fit
def plot_stats_from_filepath(filepath, plt_avg = 0, plt_var = 1, vscale = 100, plot = 0):
dicts = all_datadicts_from_hdf5(filepath)['data']
data_name_list = [x for x in dicts.keys() if x[0] == 'I' or x[0] == 'Q']
time = np.unique(dicts['time']['values'])
time_num = np.size(time)
rec_num = np.size(dicts['time']['values'])//time_num
variance_dict = {}
if plot:
fig, axs = plt.subplots(3,2, figsize = (16,12))
fig.suptitle(filepath.split('\\')[-1])
I_list = [name for name in data_name_list if name[0]=='I']
Q_list = [name for name in data_name_list if name[0]=='Q']
titles = ["G", "E", "F"]
Pvar_arr = []
Pavg_arr = []
for i, (I_name, Q_name) in enumerate(zip(I_list, Q_list)):
Idata = dicts[I_name]['values'].reshape(rec_num, time_num)
Qdata = dicts[Q_name]['values'].reshape(rec_num, time_num)
I_at_rec0 = Idata[:, 30]
I_at_rec1 = Idata[:, 100]
if plot:
plt.figure()
print("first val: ", I_at_rec0[0])
print("max val: ", np.max(I_at_rec0))
plt.plot(I_at_rec0-np.average(I_at_rec0), '.', label = 'time 30*20ns')
plt.plot(I_at_rec1-np.average(I_at_rec1), '.', label = 'time 100*20ns')
plt.title(titles[i])
plt.legend()
print("variance at first value: ", np.var(I_at_rec0-np.average(I_at_rec0)))
print("variance at second value: ", np.var(I_at_rec1-np.average(I_at_rec1)))
plt.figure()
h1 = np.histogram(I_at_rec0-np.average(I_at_rec0), bins = 50, density = True)
h2 = np.histogram(I_at_rec1-np.average(I_at_rec1), bins = 50, density = True)
plt.plot(h1[1][:-1], h1[0])
plt.plot(h2[1][:-1], h2[0])
Pavg = np.average(np.sqrt(Idata**2+Qdata**2), axis = 0)
Pvar = tstd(np.sqrt(Idata**2+Qdata**2), axis = 0)**2
Pvar_arr.append(Pvar)
Pavg_arr.append(Pavg)
for name in I_list:
data = dicts[name]['values'].reshape(rec_num, time_num)
avg = np.average(data, axis = 0)
var = np.var(data-np.average(data, axis = 0), axis = 0)
var_coherent = var/np.sqrt(np.abs(avg))
variance_dict[name] = var
if plot:
axs[0, 0].fill_between(time, np.average(data, axis = 0)-vscale*var, np.average(data, axis = 0)+vscale*var, label = name)
axs[0,0].plot(time, np.average(data, axis = 0))
axs[1, 0].plot(time, var, label = name)
if plot:
axs[0,0].set_title("Averages")
axs[1,0].set_title("Variances")
axs[0,0].grid()
axs[1,0].grid()
axs[0,0].legend()
axs[1,0].legend()
for name in Q_list:
data = dicts[name]['values'].reshape(rec_num, time_num)
var = np.var(data, axis = 0)
variance_dict[name] = var
var_coherent = var/np.sqrt(np.abs(avg))
if plot:
axs[0, 1].fill_between(time, np.average(data, axis = 0)-vscale*var, np.average(data, axis = 0)+vscale*var, label = name)
axs[0,1].plot(time, np.average(data, axis = 0))
axs[1, 1].plot(time, var, label = name)
if plot:
axs[2,0].plot(time, Pavg)
axs[2,1].plot(time, Pvar)
axs[0,1].set_title("Averages")
axs[1,1].set_title("Variances")
axs[0,1].grid()
axs[1,1].grid()
axs[0,1].legend()
axs[1,1].legend()
fig.tight_layout()
return zip(np.array(Pvar_arr), np.array(Pavg_arr))
def Process_One_Acquisition_3_state(name, time_vals, sI_c1, sI_c2, sI_c3, sQ_c1 ,sQ_c2, sQ_c3, figscale = 1, hist_scale = 200, odd_only = False, even_only = False, plot = False, lpf = True, lpf_wc = 1e6, fit = False, hist_y_scale = 10, boxcar = False, bc_window = [50, 150], record_track = False, rec_start = 0, rec_stop = 7860, debug = False, tstart_index = 0, tstop_index = -1, guess = 0, rec_skip = 5):
fs = 1/np.diff(time_vals)[0]
print('\n\n\nsampling rate: ', fs)
sI_c1 = sI_c1[rec_start:rec_stop:rec_skip].copy()
sI_c2 = sI_c2[rec_start:rec_stop:rec_skip].copy()
sI_c3 = sI_c3[rec_start:rec_stop:rec_skip].copy()
sQ_c1 = sQ_c1[rec_start:rec_stop:rec_skip].copy()
sQ_c2 = sQ_c2[rec_start:rec_stop:rec_skip].copy()
sQ_c3 = sQ_c3[rec_start:rec_stop:rec_skip].copy()
if lpf:
sI_c1_classify = np.empty(np.shape(sI_c1))
sI_c2_classify = np.empty(np.shape(sI_c1))
sI_c3_classify = np.empty(np.shape(sI_c1))
sQ_c1_classify = np.empty(np.shape(sI_c1))
sQ_c2_classify = np.empty(np.shape(sI_c1))
sQ_c3_classify = np.empty(np.shape(sI_c1))
for i, (rec1, rec2, rec3, rec4, rec5, rec6) in enumerate(zip(sI_c1, sI_c2, sI_c3, sQ_c1, sQ_c2, sQ_c3)):
sI_c1_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec1)
sI_c2_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec2)
sI_c3_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec3)
sQ_c1_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec4)
sQ_c2_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec5)
sQ_c3_classify[i] = lfilter(*butter(10, lpf_wc/1e9, fs=fs, btype='low', analog=False), rec6)
else:
sI_c1_classify = sI_c1
sI_c2_classify = sI_c2
sI_c3_classify = sI_c3
sQ_c1_classify = sQ_c1
sQ_c2_classify = sQ_c2
sQ_c3_classify = sQ_c3
sI_c1 = sI_c1_classify.copy()
sI_c2 = sI_c2_classify.copy()
sI_c3 = sI_c3_classify.copy()
sQ_c1 = sQ_c1_classify.copy()
sQ_c2 = sQ_c2_classify.copy()
sQ_c3 = sQ_c3_classify.copy()
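# The per-record low-pass step above uses a 10th-order Butterworth via
# butter + lfilter in transfer-function form. A standalone sketch of the same
# idea (toy 20 ns sampling and 1 MHz cutoff, not data from this pipeline),
# using the second-order-sections form, which is numerically safer at order 10:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 0.05           # samples per ns for 20 ns sampling, as fs = 1/np.diff(time_vals)[0]
cutoff = 1e6 / 1e9  # 1 MHz cutoff expressed in GHz, matching lpf_wc/1e9

t = np.arange(0, 200000, 20.0)                  # time axis in ns
slow = np.sin(2 * np.pi * 1e-4 * t)             # 0.1 MHz tone, inside the passband
fast = 0.5 * np.sin(2 * np.pi * 5e-3 * t)       # 5 MHz tone, well above the cutoff
record = slow + fast

# second-order sections avoid the coefficient-precision issues that can make
# a 10th-order (b, a) filter unstable at low normalized cutoffs
sos = butter(10, cutoff, fs=fs, btype='low', output='sos')
filtered = sosfilt(sos, record)  # ~= slow, with a short start-up transient
```

# After the transient, `filtered` retains the 0.1 MHz component and strongly
# suppresses the 5 MHz one.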
if boxcar:
WF = np.zeros(np.size(time_vals))
WF[bc_window[0]:bc_window[1]] = 1
Sge_I = Sge_Q = Sgf_I = Sgf_Q = Sef_I = Sef_Q = WF#/(bc_window[1]-bc_window[0])
else:
tfilt = np.ones(np.size(np.unique(time_vals)))
tfilt[0:tstart_index] = 0
tfilt[tstop_index:-1] = 0
#weight functions denoted by Sij for telling trace i from trace j
Sge_I, Sge_Q = [(np.average(sI_c1, axis = 0)-np.average(sI_c2, axis = 0))*tfilt, (np.average(sQ_c1, axis = 0)-np.average(sQ_c2, axis = 0))*tfilt]
Sgf_I, Sgf_Q = [(np.average(sI_c1, axis = 0)-np.average(sI_c3, axis = 0))*tfilt, (np.average(sQ_c1, axis = 0)-np.average(sQ_c3, axis = 0))*tfilt]
Sef_I, Sef_Q = [(np.average(sI_c2, axis = 0)-np.average(sI_c3, axis = 0))*tfilt, (np.average(sQ_c2, axis = 0)-np.average(sQ_c3, axis = 0))*tfilt]
# if lpf:
# Sge = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sge)
# Sgf = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sgf)
# Sef = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sef)
# normalizing weight functions
# Sge_I /= np.sum(np.linalg.norm([np.abs(Sge_I), np.abs(Sge_Q)]))
# Sge_Q /= np.linalg.norm([np.abs(Sge_I), np.abs(Sge_Q)])
# Sef_I /= np.linalg.norm([np.abs(Sef_I), np.abs(Sef_Q)])
# Sef_Q /= np.linalg.norm([np.abs(Sef_I), np.abs(Sef_Q)])
# Sgf_I /= np.linalg.norm([np.abs(Sgf_I), np.abs(Sgf_Q)])
# Sgf_Q /= np.linalg.norm([np.abs(Sgf_I), np.abs(Sgf_Q)])
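# A toy sketch (synthetic traces, not pipeline data) of why the
# difference-of-means weight functions above separate two states: projecting
# single shots onto (mean_A - mean_B) maximally splits the two clusters.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)
mean_g = np.exp(-t / 30.0)         # toy average record for state G
mean_e = 0.3 * np.exp(-t / 30.0)   # toy average record for state E

# difference-of-means weight function, as in Sge_I above
w = mean_g - mean_e

# noisy single-shot records for each state
shots_g = mean_g + 0.05 * rng.standard_normal((200, 100))
shots_e = mean_e + 0.05 * rng.standard_normal((200, 100))

# weighted projection of each record onto the weight function
proj_g = shots_g @ w
proj_e = shots_e @ w  # the two projection clouds are well separated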
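# A toy sketch (synthetic traces, not pipeline data) of why the
# difference-of-means weight functions above separate two states: projecting
# single shots onto (mean_A - mean_B) maximally splits the two clusters.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)
mean_g = np.exp(-t / 30.0)         # toy average record for state G
mean_e = 0.3 * np.exp(-t / 30.0)   # toy average record for state E

# difference-of-means weight function, as in Sge_I above
w = mean_g - mean_e

# noisy single-shot records for each state
shots_g = mean_g + 0.05 * rng.standard_normal((200, 100))
shots_e = mean_e + 0.05 * rng.standard_normal((200, 100))

# weighted projection of each record onto the weight function
proj_g = shots_g @ w
proj_e = shots_e @ w  # the two projection clouds are well separated
```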
sI_c1_avg = np.average(sI_c1, axis = 0)
sI_c2_avg = np.average(sI_c2, axis = 0)
sI_c3_avg = np.average(sI_c3, axis = 0)
sQ_c1_avg = np.average(sQ_c1, axis = 0)
sQ_c2_avg = np.average(sQ_c2, axis = 0)
sQ_c3_avg = np.average(sQ_c3, axis = 0)
if plot:
fig = plt.figure(1, figsize = tuple(np.array([12,8])*figscale))
fig.suptitle(name, fontsize = 20)
ax1 = fig.add_subplot(221)
ax1.set_title("I average")
ax1.set_ylabel("Voltage (mV)")
ax1.set_xlabel("Time (ns)")
ax1.plot(time_vals, np.average(sI_c1, axis = 0)*1000, label = 'G_records')
ax1.plot(time_vals,np.average(sI_c2, axis = 0)*1000, label = 'E_records')
ax1.plot(time_vals,np.average(sI_c3, axis = 0)*1000, label = 'F_records')
ax1.grid()
# ax1.set_aspect(1)
ax2 = fig.add_subplot(222)
ax2.set_title("Q average")
ax2.set_ylabel("Voltage (mV)")
ax2.set_xlabel("Time (ns)")
ax2.plot(time_vals,np.average(sQ_c1, axis = 0)*1000, label = 'G records')
ax2.plot(time_vals,np.average(sQ_c2, axis = 0)*1000, label = 'E records')
ax2.plot(time_vals,np.average(sQ_c3, axis = 0)*1000, label = 'F records')
ax2.grid()
# ax2.set_aspect(1)
ax2.legend(bbox_to_anchor=(1.05, 1.0),
loc='upper left')
ax3 = fig.add_subplot(223)
ax3.set_title("Trajectories")
ax3.set_ylabel("Q Voltage (mV)")
ax3.set_xlabel("I Voltage (mV)")
ax3.set_aspect(1)
ax3.plot(np.average(sI_c1, axis = 0)*1000, np.average(sQ_c1, axis = 0)*1000)
ax3.plot(np.average(sI_c2, axis = 0)*1000,np.average(sQ_c2, axis = 0)*1000)
ax3.plot(np.average(sI_c3, axis = 0)*1000,np.average(sQ_c3, axis = 0)*1000)
ax3.grid()
ax4 = fig.add_subplot(224)
ax4.set_title("Weight Functions")
ax4.plot(Sge_I, label = 'Wge_I')
ax4.plot(Sge_Q, label = 'Wge_Q')
ax4.plot(Sgf_I, label = 'Wgf_I')
ax4.plot(Sgf_Q, label = 'Wgf_Q')
ax4.plot(Sef_I, label = 'Wef_I')
ax4.plot(Sef_Q, label = 'Wef_Q')
ax4.legend(bbox_to_anchor=(1.05, 1.0),
loc='upper left')
ax4.grid()
fig.tight_layout(h_pad = 1, w_pad = 1.5)
fig2 = plt.figure(2, figsize = (12,8))
ax11 = fig2.add_subplot(331)
ax11.set_title("GE - G")
ax12 = fig2.add_subplot(332)
ax12.set_title("GE - E")
ax13 = fig2.add_subplot(333)
ax13.set_title("GE - F")
ax21 = fig2.add_subplot(334)
ax21.set_title("GF - G")
ax22 = fig2.add_subplot(335)
ax22.set_title("GF - E")
ax23 = fig2.add_subplot(336)
ax23.set_title("GF - F")
ax31 = fig2.add_subplot(337)
ax31.set_title("EF - G")
ax32 = fig2.add_subplot(338)
ax32.set_title("EF - E")
ax33 = fig2.add_subplot(339)
ax33.set_title("EF - F")
ax11.grid()
ax12.grid()
ax13.grid()
ax21.grid()
ax22.grid()
ax23.grid()
ax31.grid()
ax32.grid()
ax33.grid()
fig2.tight_layout(h_pad = 1, w_pad = 1)
else:
fig2 = None
ax11 = ax12 = ax13 = ax21 = ax22 = ax23 = ax31 = ax32 = ax33 = None
#using GE weights:
if hist_scale is None:
    hist_scale = np.max(np.abs([sI_c1_avg, sQ_c1_avg]))*1.2
    hist_scale1 = hist_scale
    hist_scale2 = hist_scale
    hist_scale3 = hist_scale
else:
hist_scale1 = hist_scale
hist_scale2 = hist_scale
hist_scale3 = hist_scale
# hist_scale2 = np.max(np.abs([sI_c2_avg, sQ_c2_avg]))*1.2
# hist_scale3 = np.max(np.abs([sI_c3_avg, sQ_c3_avg]))*1.2
#GE weights
bins_GE_G, h_GE_G, I_GE_G_pts, Q_GE_G_pts = weighted_histogram(Sge_I, Sge_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax11, scale = hist_scale1, record_track = record_track)
bins_GE_E, h_GE_E, I_GE_E_pts, Q_GE_E_pts = weighted_histogram(Sge_I, Sge_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax12, scale = hist_scale2, record_track = record_track)
bins_GE_F, h_GE_F, I_GE_F_pts, Q_GE_F_pts = weighted_histogram(Sge_I, Sge_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax13, scale = hist_scale3, record_track = record_track)
#
#GF weights:
bins_GF_G, h_GF_G, I_GF_G_pts, Q_GF_G_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax21, scale = hist_scale1, record_track = False)
bins_GF_E, h_GF_E, I_GF_E_pts, Q_GF_E_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax22, scale = hist_scale2, record_track = False)
bins_GF_F, h_GF_F, I_GF_F_pts, Q_GF_F_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax23, scale = hist_scale3, record_track = False)
#EF weights:
bins_EF_G, h_EF_G, I_EF_G_pts, Q_EF_G_pts = weighted_histogram(Sef_I, Sef_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax31, scale = hist_scale1, record_track = False)
bins_EF_E, h_EF_E, I_EF_E_pts, Q_EF_E_pts = weighted_histogram(Sef_I, Sef_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax32, scale = hist_scale2, record_track = False)
bins_EF_F, h_EF_F, I_EF_F_pts, Q_EF_F_pts = weighted_histogram(Sef_I, Sef_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax33, scale = hist_scale3, record_track = False)
if plot and not fit:
fig3, axs = plt.subplots(3, 1, figsize = (6,12))
magma = cm.get_cmap('magma', 256)
newcolors = magma(np.linspace(0, 1, 256))
gray = np.array([0.1, 0.1, 0.1, 0.1])
newcolors[128-5: 128+5] = gray
newcmp = ListedColormap(newcolors)
ax1 = axs[0]
ax2 = axs[1]
ax3 = axs[2]
ax1.set_title("Sge - inputs G and E")
ax1.pcolormesh(bins_GE_G, bins_GE_G, h_GE_G+h_GE_E)
ax1.set_aspect(1)
ax2.set_title("Sgf - inputs G and F")
ax2.pcolormesh(bins_GF_G, bins_GF_F, h_GF_G+h_GF_F)
ax2.set_aspect(1)
ax3.set_title("Sef - inputs E and F")
ax3.pcolormesh(bins_EF_E, bins_EF_F, h_EF_E+h_EF_F)
ax3.set_aspect(1)
fig3.tight_layout()
if fit:
I_G = sI_c1
Q_G = sQ_c1
I_E = sI_c2
Q_E = sQ_c2
I_F = sI_c3
Q_F = sQ_c3
I_G_avg = np.average(I_G, axis = 0)
I_E_avg = np.average(I_E, axis = 0)
I_F_avg = np.average(I_F, axis = 0)
Q_G_avg = np.average(Q_G, axis = 0)
Q_E_avg = np.average(Q_E, axis = 0)
Q_F_avg = np.average(Q_F, axis = 0)
if guess:
guessParams = []
for i, [wfs, avgs] in enumerate([
[[Sge_I, Sge_Q], [I_G_avg, Q_G_avg, I_E_avg, Q_E_avg]],
[[Sgf_I, Sgf_Q], [I_G_avg, Q_G_avg, I_F_avg, Q_F_avg]],
[[Sef_I, Sef_Q], [I_E_avg, Q_E_avg, I_F_avg, Q_F_avg]],
]):
for j in range(2):
A_x0Guess = np.dot(avgs[2*j+0], wfs[0])+np.dot(avgs[2*j+1], wfs[1])
A_y0Guess = np.dot(avgs[2*j+1], wfs[0])-np.dot(avgs[2*j+0], wfs[1])
A_ampGuess = np.average([np.max(h_GE_G), np.max(h_GF_G), np.max(h_EF_F)])
A_sxGuess = hist_scale/8
# A_thetaGuess = np.average(np.angle(A_x0Guess+1j*A_y0Guess))
A_thetaGuess = 0
guessParams.append([A_ampGuess, A_y0Guess, A_x0Guess, A_sxGuess, A_thetaGuess])
print(["amp", "Y0", 'X0', 'Sigma_x', 'Theta'])
print(guessParams)
print(np.shape(guessParams))
else:
guessParams = [None, None, None, None, None, None]
########
max_fev = 10000
line_ind = 0
GE_G_fit = fit_2D_Gaussian('GE_G_fit', bins_GE_G, h_GE_G,
guessParams[0],
# None,
max_fev = max_fev,
contour_line = line_ind)
GE_G_fit_h = Gaussian_2D(np.meshgrid(bins_GE_G[:-1], bins_GE_G[:-1]), *GE_G_fit.info_dict['popt'])
print(GE_G_fit.info_dict['popt'])
GE_G_fit_h_norm = np.copy(GE_G_fit_h/np.sum(GE_G_fit_h))
########
GE_E_fit = fit_2D_Gaussian('GE_E_fit', bins_GE_E, h_GE_E,
guessParams[1],
# None,
max_fev = max_fev,
contour_line = line_ind)
GE_E_fit_h = Gaussian_2D(np.meshgrid(bins_GE_E[:-1], bins_GE_E[:-1]), *GE_E_fit.info_dict['popt'])
print(GE_E_fit.info_dict['popt'])
GE_E_fit_h_norm = np.copy(GE_E_fit_h/np.sum(GE_E_fit_h))
########
GF_G_fit = fit_2D_Gaussian('GF_G_fit', bins_GF_G, h_GF_G,
guessParams[2],
# None,
max_fev = max_fev,
contour_line = line_ind)
GF_G_fit_h = Gaussian_2D(np.meshgrid(bins_GF_G[:-1], bins_GF_G[:-1]), *GF_G_fit.info_dict['popt'])
print(GF_G_fit.info_dict['popt'])
GF_G_fit_h_norm = np.copy(GF_G_fit_h/np.sum(GF_G_fit_h))
GF_F_fit = fit_2D_Gaussian('GF_F_fit', bins_GF_F, h_GF_F,
guessParams[3],
# None,
max_fev = max_fev,
contour_line = line_ind)
print(GF_F_fit.info_dict['popt'])
GF_F_fit_h = Gaussian_2D(np.meshgrid(bins_GF_F[:-1], bins_GF_F[:-1]), *GF_F_fit.info_dict['popt'])
GF_F_fit_h_norm = np.copy(GF_F_fit_h/np.sum(GF_F_fit_h))
EF_E_fit = fit_2D_Gaussian('EF_E_fit', bins_EF_E, h_EF_E,
guessParams[4],
# None,
max_fev = max_fev,
contour_line = line_ind)
print(EF_E_fit.info_dict['popt'])
EF_E_fit_h = Gaussian_2D(np.meshgrid(bins_EF_E[:-1], bins_EF_E[:-1]), *EF_E_fit.info_dict['popt'])
EF_E_fit_h_norm = np.copy(EF_E_fit_h/np.sum(EF_E_fit_h))
EF_F_fit = fit_2D_Gaussian('EF_F_fit', bins_EF_F, h_EF_F,
guessParams[5],
# None,
max_fev = max_fev,
contour_line = line_ind)
print(EF_F_fit.info_dict['popt'])
EF_F_fit_h = Gaussian_2D(np.meshgrid(bins_EF_F[:-1], bins_EF_F[:-1]), *EF_F_fit.info_dict['popt'])
EF_F_fit_h_norm = np.copy(EF_F_fit_h/np.sum(EF_F_fit_h))
GE_is_G = hist_discriminant(GE_G_fit_h, GE_E_fit_h)
GE_is_E = np.logical_not(GE_is_G)
GF_is_G = hist_discriminant(GF_G_fit_h, GF_F_fit_h)
GF_is_F = np.logical_not(GF_is_G)
EF_is_E = hist_discriminant(EF_E_fit_h, EF_F_fit_h)
EF_is_F = np.logical_not(EF_is_E)
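# A toy sketch of the discriminant idea used above, assuming hist_discriminant
# (defined elsewhere in this module) simply compares the two fitted surfaces
# pointwise; `hist_discriminant_sketch` is a hypothetical stand-in:

```python
import numpy as np

# hypothetical stand-in: a bin belongs to state A wherever the fitted
# state-A surface dominates the state-B surface
def hist_discriminant_sketch(h_a, h_b):
    return h_a > h_b

x = np.linspace(-1, 1, 5)
xx, yy = np.meshgrid(x, x)
h_g = np.exp(-((xx + 0.5) ** 2 + yy ** 2))  # blob centred at I = -0.5
h_e = np.exp(-((xx - 0.5) ** 2 + yy ** 2))  # blob centred at I = +0.5
is_g = hist_discriminant_sketch(h_g, h_e)   # True on the I < 0 half-plane
```

# Records are then classified by looking up their weighted-projection bin in
# this boolean mask, as in the loop below.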
if plot:
fig3, axs = plt.subplots(2, 3, figsize = (12,8))
magma = cm.get_cmap('magma', 256)
newcolors = magma(np.linspace(0, 1, 256))
gray = np.array([0.1, 0.1, 0.1, 0.1])
newcolors[128-5: 128+5] = gray
newcmp = ListedColormap(newcolors)
ax1 = axs[0,0]
ax2 = axs[0,1]
ax3 = axs[0,2]
ax1.set_title("Sge - inputs G and E")
ax1.pcolormesh(bins_GE_G, bins_GE_G, h_GE_G+h_GE_E)
ax2.set_title("Sgf - inputs G and F")
ax2.pcolormesh(bins_GF_G, bins_GF_F, h_GF_G+h_GF_F)
ax3.set_title("Sef - inputs E and F")
ax3.pcolormesh(bins_EF_E, bins_EF_F, h_EF_E+h_EF_F)
#*(GE_is_G-1/2)
scale = np.max((GE_G_fit_h+GE_E_fit_h))
pc1 = axs[1,0].pcolormesh(bins_GE_G, bins_GE_G, (GE_G_fit_h+GE_E_fit_h)*(GE_is_G-1/2)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
plt.colorbar(pc1, ax = axs[1,0],fraction=0.046, pad=0.04)
GE_G_fit.plot_on_ax(axs[1,0])
axs[1,0].add_patch(GE_G_fit.sigma_contour())
GE_E_fit.plot_on_ax(axs[1,0])
axs[1,0].add_patch(GE_E_fit.sigma_contour())
scale = np.max((GF_G_fit_h+GF_F_fit_h))
pc2 = axs[1,1].pcolormesh(bins_GE_G, bins_GE_G, (GF_is_G-1/2)*(GF_G_fit_h+GF_F_fit_h)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
plt.colorbar(pc2, ax = axs[1,1],fraction=0.046, pad=0.04)
GF_G_fit.plot_on_ax(axs[1,1])
axs[1,1].add_patch(GF_G_fit.sigma_contour())
GF_F_fit.plot_on_ax(axs[1,1])
axs[1,1].add_patch(GF_F_fit.sigma_contour())
scale = np.max((EF_E_fit_h+EF_F_fit_h))
pc3 = axs[1,2].pcolormesh(bins_GE_G, bins_GE_G, (EF_is_E-1/2)*(EF_E_fit_h+EF_F_fit_h)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
plt.colorbar(pc1, ax = axs[1,2],fraction=0.046, pad=0.04)
EF_E_fit.plot_on_ax(axs[1,2])
axs[1,2].add_patch(EF_E_fit.sigma_contour())
EF_F_fit.plot_on_ax(axs[1,2])
axs[1,2].add_patch(EF_F_fit.sigma_contour())
fig3.tight_layout(h_pad = 0.1, w_pad = 1)
for ax in np.array(axs).flatten():
ax.set_aspect(1)
ax.grid()
#classify the records - done for each weight function
results = []
GE_results = []
GF_results = []
EF_results = []
all_I = np.vstack((sI_c1_classify, sI_c2_classify, sI_c3_classify))
all_Q = np.vstack((sQ_c1_classify, sQ_c2_classify, sQ_c3_classify))
# print("all_I shape: ", np.shape(all_I))
# print(np.shape(list(zip(sI_c1, sQ_c1))))
for record in list(zip(all_I, all_Q)):
It, Qt = record[0], record[1]
#GE weights
ge_I = np.dot(Sge_I, It)+np.dot(Sge_Q, Qt)
ge_Q = np.dot(Sge_I, Qt)-np.dot(Sge_Q, It)
Iloc = np.digitize(ge_I, bins_GE_G)
Qloc = np.digitize(ge_Q, bins_GE_G)
if Iloc >= 99: Iloc = 98 #clamp out-of-range points into the last bin (num_bins = 100)
if Qloc >= 99: Qloc = 98
#if 1 it's G
Sge_result = GE_is_G[Iloc, Qloc]
#GF weights
gf_I = np.dot(Sgf_I, It)+np.dot(Sgf_Q, Qt)
gf_Q = np.dot(Sgf_I, Qt)-np.dot(Sgf_Q, It)
Iloc = np.digitize(gf_I, bins_GF_G)
Qloc = np.digitize(gf_Q, bins_GF_G)
if Iloc >= 99: Iloc = 98
if Qloc >= 99: Qloc = 98
#if 1 it's G
Sgf_result = GF_is_G[Iloc, Qloc]
#EF weights
ef_I = np.dot(Sef_I, It)+np.dot(Sef_Q, Qt)
ef_Q = np.dot(Sef_I, Qt)-np.dot(Sef_Q, It)
Iloc = np.digitize(ef_I, bins_EF_E)
Qloc = np.digitize(ef_Q, bins_EF_E)#edge-shifting
if Iloc >= 99: Iloc = 98
if Qloc >= 99: Qloc = 98
#if 1 it's E
Sef_result = EF_is_E[Iloc, Qloc]
# print(Sge_result)
# print(Sgf_result)
if Sge_result*Sgf_result:
result = 1 #G
elif not Sge_result and Sef_result:
result = 2 #E
elif not Sef_result and not Sgf_result:
result = 3 #F
else:
result = 4 #Null
results.append(result)
GE_results.append(Sge_result)
GF_results.append(Sgf_result)
EF_results.append(Sef_result)
results = np.array(results)
#rescale so G-> 1, E-> 2, F -> 3
GE_results = np.logical_not(np.array(GE_results))+1
GF_results = np.logical_not(np.array(GF_results))*2+1
EF_results = np.logical_not(np.array(EF_results))+2
div1 = np.shape(sI_c1_classify)[0]
numRecords = 3*div1
# print(div1)
correct_classifications = np.append(np.append(np.ones(div1), 2*np.ones(div1)), 3*np.ones(div1))
numberNull = np.sum(results == 4)
fidelity = np.round(np.sum(correct_classifications==results)/numRecords, 3)
if plot:
fig, ax = plt.subplots(5,1, figsize = (4, 8))
viridisBig = cm.get_cmap('viridis', 512)
_cmap = ListedColormap(viridisBig(np.linspace(0, 1, 256)))
scale = Norm(vmin = 1, vmax = 4)
ax[0].set_title("Correct classifications")
ax[0].imshow([correct_classifications, correct_classifications], interpolation = 'none', cmap = _cmap, norm = scale)
ax[1].set_title("GE classifications")
ax[1].imshow([GE_results,GE_results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[2].set_title("GF classifications")
ax[2].imshow([GF_results,GF_results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[3].set_title("EF classifications")
ax[3].imshow([EF_results,EF_results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[4].set_title("Final classifications")
ax[4].get_yaxis().set_ticks([])
ax[4].set_label("Record number")
ax[4].imshow([results, results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[4].set_aspect(1000)
for axi in ax:
axi.get_yaxis().set_ticks([])
axi.set_aspect(1000)
# ax[2].imshow([right, right], interpolation = 'none')
# ax[2].set_aspect(1000)
fig.tight_layout(h_pad = 1, w_pad = 1)
if debug:
print("checking sum: ", np.sum(correct_classifications[2*div1:-1]==results[2*div1:-1]))
print("Number of Null results: ", numberNull)
print("Sge Imbar/sigma: ", np.linalg.norm(GE_G_fit.center_vec()-GE_E_fit.center_vec())/GE_G_fit.info_dict['sigma_x'])
print("Sgf Imbar/sigma: ", np.linalg.norm(GF_G_fit.center_vec()-GF_F_fit.center_vec())/GF_G_fit.info_dict['sigma_x'])
print("Sef Imbar/sigma: ", np.linalg.norm(EF_E_fit.center_vec()-EF_F_fit.center_vec())/EF_E_fit.info_dict['sigma_x'])
G_fidelity = np.round(np.sum(correct_classifications[0:div1]==results[0:div1])/div1, 3)
E_fidelity = np.round(np.sum(correct_classifications[div1:2*div1]==results[div1:2*div1])/div1, 3)
F_fidelity = np.round(np.sum(correct_classifications[2*div1:-1]==results[2*div1:-1])/div1, 3)
return G_fidelity, E_fidelity, F_fidelity, fidelity, numberNull
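The per-record classification above demodulates each (I, Q) record against the weight functions and looks the result up in a precomputed discriminant histogram, clamping the `np.digitize` output into the last valid bin. A minimal self-contained sketch of those two steps (the names `demod_point` and `clamped_bin` are illustrative, not part of this module):

```python
import numpy as np

def demod_point(S_I, S_Q, It, Qt):
    # project one record onto the weight functions, as in the classification loop
    ge_I = np.dot(S_I, It) + np.dot(S_Q, Qt)
    ge_Q = np.dot(S_I, Qt) - np.dot(S_Q, It)
    return ge_I, ge_Q

def clamped_bin(value, bins):
    # np.digitize can return len(bins) for points past the last edge;
    # clamp so the index is always valid for a (len(bins)-1)-wide histogram
    return min(np.digitize(value, bins), len(bins) - 2)

bins = np.linspace(-1, 1, 100)
print(demod_point(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                  np.array([2.0, 3.0]), np.array([4.0, 5.0])))  # (7.0, 1.0)
print(clamped_bin(5.0, bins))  # 98
```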
def Process_One_Acquisition_2_state(name, time_vals, sI_c1, sI_c2, sQ_c1 ,sQ_c2, hist_scale = 200, odd_only = False, even_only = False, plot = False, lpf = True, lpf_wc = 50e6, fit = False, hist_y_scale = 10, boxcar = False, bc_window = [50, 150], record_track = False, numRecordsUsed = 7860, debug = False):
sI_c1_classify = sI_c1
sI_c2_classify = sI_c2
# sI_c3_classify = sI_c3
sQ_c1_classify = sQ_c1
sQ_c2_classify = sQ_c2
# sQ_c3_classify = sQ_c3
sI_c1 = sI_c1[0:numRecordsUsed//3].copy()
sI_c2 = sI_c2[0:numRecordsUsed//3].copy()
# sI_c3 = sI_c3[0:numRecordsUsed//3].copy()
sQ_c1 = sQ_c1[0:numRecordsUsed//3].copy()
sQ_c2 = sQ_c2[0:numRecordsUsed//3].copy()
# sQ_c3 = sQ_c3[0:numRecordsUsed//3].copy()
if boxcar:
WF = np.zeros(np.size(time_vals))
WF[bc_window[0]:bc_window[1]] = 1
#the classification below expects Sge_I/Sge_Q; apply the boxcar on I only (assumed convention)
Sge_I = WF
Sge_Q = np.zeros_like(WF)
else:
#weight functions denoted by Sij for telling trace i from trace j
Sge_I, Sge_Q = [(np.average(sI_c1, axis = 0)-np.average(sI_c2, axis = 0)), (np.average(sQ_c1, axis = 0)-np.average(sQ_c2, axis = 0))]
# Sgf_I, Sgf_Q = [(np.average(sI_c1, axis = 0)-np.average(sI_c3, axis = 0)), (np.average(sQ_c1, axis = 0)-np.average(sQ_c3, axis = 0))]
# Sef_I, Sef_Q = [(np.average(sI_c2, axis = 0)-np.average(sI_c3, axis = 0)), (np.average(sQ_c2, axis = 0)-np.average(sQ_c3, axis = 0))]
if lpf:
Sge_I = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sge_I)
# Sgf = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sgf)
# Sef = sosfilt(butter(10, lpf_wc, fs = 1e9/20, output = 'sos'), Sef)
#normalizing weight functions
# Sge_I /= np.linalg.norm([np.abs(Sge_I), np.abs(Sge_Q)])
# Sge_Q /= np.linalg.norm([np.abs(Sge_I), np.abs(Sge_Q)])
# Sef_I /= np.linalg.norm([np.abs(Sef_I), np.abs(Sef_Q)])
# Sef_Q /= np.linalg.norm([np.abs(Sef_I), np.abs(Sef_Q)])
# Sgf_I /= np.linalg.norm([np.abs(Sgf_I), np.abs(Sgf_Q)])
# Sgf_Q /= np.linalg.norm([np.abs(Sgf_I), np.abs(Sgf_Q)])
sI_c1_avg = np.average(sI_c1, axis = 0)
sI_c2_avg = np.average(sI_c2, axis = 0)
# sI_c3_avg = np.average(sI_c3, axis = 0)
sQ_c1_avg = np.average(sQ_c1, axis = 0)
sQ_c2_avg = np.average(sQ_c2, axis = 0)
# sQ_c3_avg = np.average(sQ_c3, axis = 0)
if plot:
fig = plt.figure(1, figsize = (12,8))
fig.suptitle(name, fontsize = 20)
ax1 = fig.add_subplot(221)
ax1.set_title("I average")
ax1.set_ylabel("Voltage (mV)")
ax1.set_xlabel("Time (ns)")
ax1.plot(time_vals, np.average(sI_c1, axis = 0)*1000, label = 'G_records')
ax1.plot(time_vals,np.average(sI_c2, axis = 0)*1000, label = 'E_records')
# ax1.plot(time_vals,np.average(sI_c3, axis = 0)*1000, label = 'F_records')
ax1.grid()
# ax1.set_aspect(1)
ax1.legend(loc = 'upper right')
ax2 = fig.add_subplot(222)
ax2.set_title("Q average")
ax2.set_ylabel("Voltage (mV)")
ax2.set_xlabel("Time (ns)")
ax2.plot(time_vals,np.average(sQ_c1, axis = 0)*1000, label = 'G records')
ax2.plot(time_vals,np.average(sQ_c2, axis = 0)*1000, label = 'E records')
# ax2.plot(time_vals,np.average(sQ_c3, axis = 0)*1000, label = 'F records')
ax2.grid()
# ax2.set_aspect(1)
ax2.legend(loc = 'upper right')
ax3 = fig.add_subplot(223)
ax3.set_title("Trajectories")
ax3.set_ylabel("I Voltage (mV)")
ax3.set_xlabel("Q Voltage (mV)")
ax3.set_aspect(1)
ax3.plot(np.average(sI_c1, axis = 0)*1000, np.average(sQ_c1, axis = 0)*1000)
ax3.plot(np.average(sI_c2, axis = 0)*1000,np.average(sQ_c2, axis = 0)*1000)
# ax3.plot(np.average(sI_c3, axis = 0)*1000,np.average(sQ_c3, axis = 0)*1000)
ax3.grid()
ax4 = fig.add_subplot(224)
ax4.set_title("Weight Functions")
ax4.plot(Sge_I, label = 'Wge_I')
ax4.plot(Sge_Q, label = 'Wge_Q')
# ax4.plot(Sgf_I, label = 'Wgf')
# ax4.plot(Sgf_Q, label = 'Wgf')
# ax4.plot(Sef_I, label = 'Wef')
# ax4.plot(Sef_Q, label = 'Wef')
ax4.legend()
ax4.grid()
fig.tight_layout(h_pad = 1, w_pad = 1.5)
fig01 = plt.figure(10, figsize = (12,8))
fig01.suptitle(name, fontsize = 20)
ax1 = fig01.add_subplot(111)
ax1.set_title("Magnitude Difference between G and E")
ax1.set_ylabel("Voltage (mV)")
ax1.set_xlabel("Time (ns)")
ax1.plot(time_vals, np.sqrt(sI_c1_avg**2+sQ_c1_avg**2)*1000 - np.sqrt(sI_c2_avg**2+sQ_c2_avg**2)*1000, label = 'G_records-E_records')
ax1.grid()
fig2 = plt.figure(2, figsize = (12,8))
ax11 = fig2.add_subplot(331)
ax11.set_title("GE - G")
ax12 = fig2.add_subplot(332)
ax12.set_title("GE - E")
# ax13 = fig2.add_subplot(333)
# ax13.set_title("GE - F")
# ax21 = fig2.add_subplot(334)
# ax21.set_title("GF - G")
# ax22 = fig2.add_subplot(335)
# ax22.set_title("GF - E")
# ax23 = fig2.add_subplot(336)
# ax23.set_title("GF - F")
# ax31 = fig2.add_subplot(337)
# ax31.set_title("EF - G")
# ax32 = fig2.add_subplot(338)
# ax32.set_title("EF - E")
# ax33 = fig2.add_subplot(339)
# ax33.set_title("EF - F")
ax11.grid()
ax12.grid()
# ax13.grid()
# ax21.grid()
# ax22.grid()
# ax23.grid()
# ax31.grid()
# ax32.grid()
# ax33.grid()
fig2.tight_layout(h_pad = 1, w_pad = 1)
else:
fig2 = None
ax11 = ax12 = ax13 = ax21 = ax22 = ax23 = ax31 = ax32 = ax33 = None
#using GE weights:
if hist_scale is None:
hist_scale = np.max(np.abs([sI_c1_avg, sQ_c1_avg]))*1.2
hist_scale1 = hist_scale2 = hist_scale3 = hist_scale
else:
hist_scale1 = hist_scale2 = hist_scale3 = hist_scale
# hist_scale2 = np.max(np.abs([sI_c2_avg, sQ_c2_avg]))*1.2
# hist_scale3 = np.max(np.abs([sI_c3_avg, sQ_c3_avg]))*1.2
#GE weights
bins_GE_G, h_GE_G, I_GE_G_pts, Q_GE_G_pts = weighted_histogram(Sge_I, Sge_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax11, scale = hist_scale1, record_track = record_track)
bins_GE_E, h_GE_E, I_GE_E_pts, Q_GE_E_pts = weighted_histogram(Sge_I, Sge_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax12, scale = hist_scale2, record_track = record_track)
# bins_GE_F, h_GE_F, I_GE_F_pts, Q_GE_F_pts = weighted_histogram(Sge_I, Sge_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax13, scale = hist_scale3, record_track = record_track)
#
#GF weights:
# bins_GF_G, h_GF_G, I_GF_G_pts, Q_GF_G_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax21, scale = hist_scale1, record_track = False)
# bins_GF_E, h_GF_E, I_GF_E_pts, Q_GF_E_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax22, scale = hist_scale2, record_track = False)
# bins_GF_F, h_GF_F, I_GF_F_pts, Q_GF_F_pts = weighted_histogram(Sgf_I, Sgf_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax23, scale = hist_scale3, record_track = False)
#EF weights:
# bins_EF_G, h_EF_G, I_EF_G_pts, Q_EF_G_pts = weighted_histogram(Sef_I, Sef_Q, sI_c1, sQ_c1, plot = plot, fig = fig2, ax = ax31, scale = hist_scale1, record_track = False)
# bins_EF_E, h_EF_E, I_EF_E_pts, Q_EF_E_pts = weighted_histogram(Sef_I, Sef_Q, sI_c2, sQ_c2, plot = plot, fig = fig2, ax = ax32, scale = hist_scale2, record_track = False)
# bins_EF_F, h_EF_F, I_EF_F_pts, Q_EF_F_pts = weighted_histogram(Sef_I, Sef_Q, sI_c3, sQ_c3, plot = plot, fig = fig2, ax = ax33, scale = hist_scale3, record_track = False)
if fit:
I_G = sI_c1
Q_G = sQ_c1
I_E = sI_c2
Q_E = sQ_c2
# I_F = sI_c3
# Q_F = sQ_c3
I_G_avg = np.average(I_G, axis = 0)
I_E_avg = np.average(I_E, axis = 0)
# I_F_avg = np.average(I_F, axis = 0)
Q_G_avg = np.average(Q_G, axis = 0)
Q_E_avg = np.average(Q_E, axis = 0)
# Q_F_avg = np.average(Q_F, axis = 0)
G_x0Guess = np.dot(I_G_avg, Sge_I)+np.dot(Q_G_avg, Sge_Q)
#y0 guess follows the same demodulation convention as the classification loop
G_y0Guess = np.dot(Q_G_avg, Sge_I)-np.dot(I_G_avg, Sge_Q)
G_ampGuess = np.average(np.sqrt(I_G_avg**2+Q_G_avg**2))
G_sxGuess = hist_scale/2
G_syGuess = hist_scale/2
G_thetaGuess = np.average(np.angle(I_G_avg+1j*Q_G_avg))
G_offsetGuess = 0
E_x0Guess = np.dot(I_E_avg, Sge_I)+np.dot(Q_E_avg, Sge_Q)
E_y0Guess = np.dot(Q_E_avg, Sge_I)-np.dot(I_E_avg, Sge_Q)
E_ampGuess = np.average(np.sqrt(I_E_avg**2+Q_E_avg**2))
E_sxGuess = hist_scale/2
E_syGuess = hist_scale/2
E_thetaGuess = np.average(np.angle(I_E_avg+1j*Q_E_avg))
E_offsetGuess = 0
# F_x0Guess = np.max(I_F_avg)
# F_y0Guess = np.max(Q_F_avg)
# F_ampGuess = np.average(np.sqrt(I_F_avg**2+Q_F_avg**2))
# F_sxGuess = hist_scale/2
# F_syGuess = hist_scale/2
# F_thetaGuess = np.average(np.angle(I_F_avg+1j*Q_F_avg))
# F_offsetGuess = 0
guessParams = [[G_ampGuess, G_x0Guess, G_y0Guess, G_sxGuess, G_thetaGuess],
[E_ampGuess, E_x0Guess, E_y0Guess, E_sxGuess, E_thetaGuess],
]
if debug:
print("fitting guess parameters: ", guessParams)
########
max_fev = 10000
line_ind = 0
GE_G_fit = fit_2D_Gaussian('GE_G_fit', bins_GE_G, h_GE_G,
# guessParams[0],
None,
max_fev = max_fev,
contour_line = line_ind)
GE_G_fit_h = Gaussian_2D(np.meshgrid(bins_GE_G[:-1], bins_GE_G[:-1]), *GE_G_fit.info_dict['popt'])
GE_G_fit_h_norm = np.copy(GE_G_fit_h/np.sum(GE_G_fit_h))
########
GE_E_fit = fit_2D_Gaussian('GE_E_fit', bins_GE_E, h_GE_E,
# guessParams[1],
None,
max_fev = max_fev,
contour_line = line_ind)
GE_E_fit_h = Gaussian_2D(np.meshgrid(bins_GE_E[:-1], bins_GE_E[:-1]), *GE_E_fit.info_dict['popt'])
GE_E_fit_h_norm = np.copy(GE_E_fit_h/np.sum(GE_E_fit_h))
########
# GF_G_fit = fit_2D_Gaussian('GF_G_fit', bins_GF_G, h_GF_G,
# # guessParams[0],
# None,
# max_fev = max_fev,
# contour_line = line_ind)
# GF_G_fit_h = Gaussian_2D(np.meshgrid(bins_GF_G[:-1], bins_GF_G[:-1]), *GF_G_fit.info_dict['popt'])
# GF_G_fit_h_norm = np.copy(GF_G_fit_h/np.sum(GF_G_fit_h))
# GF_F_fit = fit_2D_Gaussian('GF_F_fit', bins_GF_F, h_GF_F,
# # guessParams[2],
# None,
# max_fev = max_fev,
# contour_line = line_ind)
# GF_F_fit_h = Gaussian_2D(np.meshgrid(bins_GF_F[:-1], bins_GF_F[:-1]), *GF_F_fit.info_dict['popt'])
# GF_F_fit_h_norm = np.copy(GF_F_fit_h/np.sum(GF_F_fit_h))
# EF_E_fit = fit_2D_Gaussian('EF_E_fit', bins_EF_E, h_EF_E,
# # guessParams[2],
# None,
# max_fev = max_fev,
# contour_line = line_ind)
# EF_E_fit_h = Gaussian_2D(np.meshgrid(bins_EF_E[:-1], bins_EF_E[:-1]), *EF_E_fit.info_dict['popt'])
# EF_E_fit_h_norm = np.copy(EF_E_fit_h/np.sum(EF_E_fit_h))
# EF_F_fit = fit_2D_Gaussian('EF_F_fit', bins_EF_F, h_EF_F,
# # guessParams[2],
# None,
# max_fev = max_fev,
# contour_line = line_ind)
# EF_F_fit_h = Gaussian_2D(np.meshgrid(bins_EF_F[:-1], bins_EF_F[:-1]), *EF_F_fit.info_dict['popt'])
# EF_F_fit_h_norm = np.copy(EF_F_fit_h/np.sum(EF_F_fit_h))
GE_is_G = hist_discriminant(GE_G_fit_h, GE_E_fit_h)
GE_is_E = np.logical_not(GE_is_G)
# GF_is_G = hist_discriminant(GF_G_fit_h, GF_F_fit_h)
# GF_is_F = np.logical_not(GF_is_G)
# EF_is_E = hist_discriminant(EF_E_fit_h, EF_F_fit_h)
# EF_is_F = np.logical_not(EF_is_E)
if plot:
fig3, axs = plt.subplots(2, 3, figsize = (12,8))
viridis = cm.get_cmap('magma', 256)
newcolors = viridis(np.linspace(0, 1, 256))
gray = np.array([0.1, 0.1, 0.1, 0.1])
newcolors[128-5: 128+5] = gray
newcmp = ListedColormap(newcolors)
ax1 = axs[0,0]
ax2 = axs[0,1]
ax3 = axs[0,2]
ax1.set_title("Sge - inputs G and E")
ax1.pcolormesh(bins_GE_G, bins_GE_G, h_GE_G+h_GE_E)
ax2.set_title("Sgf - inputs G and F")
# ax2.pcolormesh(bins_GF_G, bins_GF_F, h_GF_G+h_GF_F)
ax3.set_title("Sef - inputs E and F")
# ax3.pcolormesh(bins_EF_E, bins_EF_F, h_EF_E+h_EF_F)
#*(GE_is_G-1/2)
scale = np.max((GE_G_fit_h+GE_E_fit_h))
pc1 = axs[1,0].pcolormesh(bins_GE_G, bins_GE_G, (GE_G_fit_h+GE_E_fit_h)*(GE_is_G-1/2)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
plt.colorbar(pc1, ax = axs[1,0],fraction=0.046, pad=0.04)
GE_G_fit.plot_on_ax(axs[1,0])
axs[1,0].add_patch(GE_G_fit.sigma_contour())
GE_E_fit.plot_on_ax(axs[1,0])
axs[1,0].add_patch(GE_E_fit.sigma_contour())
# scale = np.max((GF_G_fit_h+GF_F_fit_h))
# pc2 = axs[1,1].pcolormesh(bins_GE_G, bins_GE_G, (GF_is_G-1/2)*(GF_G_fit_h+GF_F_fit_h)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
# plt.colorbar(pc1, ax = axs[1,1],fraction=0.046, pad=0.04)
# GF_G_fit.plot_on_ax(axs[1,1])
# axs[1,1].add_patch(GF_G_fit.sigma_contour())
# GF_F_fit.plot_on_ax(axs[1,1])
# axs[1,1].add_patch(GF_F_fit.sigma_contour())
# scale = np.max((EF_E_fit_h+EF_F_fit_h))
# pc3 = axs[1,2].pcolormesh(bins_GE_G, bins_GE_G, (EF_is_E-1/2)*(EF_E_fit_h+EF_F_fit_h)/scale*5, cmap = newcmp, vmin = -1, vmax = 1)
# plt.colorbar(pc1, ax = axs[1,2],fraction=0.046, pad=0.04)
# EF_E_fit.plot_on_ax(axs[1,2])
# axs[1,2].add_patch(EF_E_fit.sigma_contour())
# EF_F_fit.plot_on_ax(axs[1,2])
# axs[1,2].add_patch(EF_F_fit.sigma_contour())
fig3.tight_layout(h_pad = 0.1, w_pad = 1)
for ax in np.array(axs).flatten():
ax.set_aspect(1)
ax.grid()
#classify the records - done for each weight function
results = []
GE_results = []
GF_results = []
EF_results = []
all_I = np.vstack((sI_c1_classify, sI_c2_classify))
all_Q = np.vstack((sQ_c1_classify, sQ_c2_classify))
# print("all_I shape: ", np.shape(all_I))
# print(np.shape(list(zip(sI_c1, sQ_c1))))
for record in list(zip(all_I, all_Q)):
It, Qt = record[0], record[1]
#GE weights
ge_I = np.dot(Sge_I, It)+np.dot(Sge_Q, Qt)
ge_Q = np.dot(Sge_I, Qt)-np.dot(Sge_Q, It)
Iloc = np.digitize(ge_I, bins_GE_G)
Qloc = np.digitize(ge_Q, bins_GE_G)
if Iloc >= 99: Iloc = 98
if Qloc >= 99: Qloc = 98
#if 1 it's G
Sge_result = GE_is_G[Iloc, Qloc]
#GF weights
# gf_I = np.dot(Sgf_I, It)+np.dot(Sgf_Q, Qt)
# gf_Q = np.dot(Sgf_I, Qt)-np.dot(Sgf_Q, It)
# Iloc = np.digitize(gf_I, bins_GF_G)
# Qloc = np.digitize(gf_Q, bins_GF_G)
# if Iloc >= 99: Iloc = 98
# if Qloc >= 99: Qloc = 98
# #if 1 it's G
# Sgf_result = GF_is_G[Iloc, Qloc]
# #EF weights
# ef_I = np.dot(Sef_I, It)+np.dot(Sef_Q, Qt)
# ef_Q = np.dot(Sef_I, Qt)-np.dot(Sef_Q, It)
# Iloc = np.digitize(ef_I, bins_EF_E)
# Qloc = np.digitize(ef_Q, bins_EF_E)#edge-shifting
# if Iloc >= 99: Iloc = 98
# if Qloc >= 99: Qloc = 98
#if 1 it's E
# Sef_result = EF_is_E[Iloc, Qloc]
# print(Sge_result)
# print(Sgf_result)
if Sge_result:
result = 1 #G
else:
result = 2 #E
results.append(result)
GE_results.append(Sge_result)
# GF_results.append(Sgf_result)
# EF_results.append(Sef_result)
results = np.array(results)
#rescale so G-> 1, E-> 2
GE_results = np.logical_not(np.array(GE_results))+1
# GF_results = np.logical_not(np.array(GF_results))*2+1
# EF_results = np.logical_not(np.array(EF_results))+2
div1 = np.shape(sI_c1_classify)[0]
numRecords = 2*div1
# print(div1)
correct_classifications = np.append(np.ones(div1), 2*np.ones(div1))
numberNull = np.sum(results == 4) #always 0 here: the two-state classifier never returns Null
fidelity = np.round(np.sum(correct_classifications==results)/numRecords, 3)
if plot:
fig, ax = plt.subplots(5,1, figsize = (4, 8))
viridisBig = cm.get_cmap('viridis', 512)
_cmap = ListedColormap(viridisBig(np.linspace(0, 1, 256)))
scale = Norm(vmin = 1, vmax = 4)
ax[0].set_title("Correct classifications")
ax[0].imshow([correct_classifications, correct_classifications], interpolation = 'none', cmap = _cmap, norm = scale)
ax[1].set_title("GE classifications")
ax[1].imshow([GE_results,GE_results], interpolation = 'none', cmap = _cmap, norm = scale)
# ax[2].set_title("GF classifications")
# ax[2].imshow([GF_results,GF_results], interpolation = 'none', cmap = _cmap, norm = scale)
# ax[3].set_title("EF classifications")
# ax[3].imshow([EF_results,EF_results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[4].set_title("Final classifications")
ax[4].get_yaxis().set_ticks([])
ax[4].set_label("Record number")
ax[4].imshow([results, results], interpolation = 'none', cmap = _cmap, norm = scale)
ax[4].set_aspect(1000)
for axi in ax:
axi.get_yaxis().set_ticks([])
axi.set_aspect(1000)
# ax[2].imshow([right, right], interpolation = 'none')
# ax[2].set_aspect(1000)
fig.tight_layout(h_pad = 1, w_pad = 1)
if debug:
print("checking sum: ", np.sum(correct_classifications[div1:]==results[div1:]))
print("Number of Null results: ", numberNull)
print("Sge Imbar/sigma: ", np.linalg.norm(GE_G_fit.center_vec()-GE_E_fit.center_vec())/GE_G_fit.info_dict['sigma_x'])
# print("Sgf Imbar/sigma: ", np.linalg.norm(GF_G_fit.center_vec()-GF_F_fit.center_vec())/GF_G_fit.info_dict['sigma_x'])
# print("Sef Imbar/sigma: ", np.linalg.norm(EF_E_fit.center_vec()-EF_F_fit.center_vec())/EF_E_fit.info_dict['sigma_x'])
G_fidelity = np.round(np.sum(correct_classifications[0:div1]==results[0:div1])/div1, 3)
E_fidelity = np.round(np.sum(correct_classifications[div1:2*div1]==results[div1:2*div1])/div1, 3)
# F_fidelity = np.round(np.sum(correct_classifications[2*div1:-1]==results[2*div1:-1])/div1, 3)
return G_fidelity, E_fidelity, 0, fidelity, 0 #F fidelity and null count are placeholders in the two-state case
def boxcar_histogram(fig, ax,start_pt, stop_pt, sI, sQ, Ioffset = 0, Qoffset = 0, scale = 1, num_bins = 100):
I_bground = Ioffset
Q_bground = Qoffset
# print(I_bground, Q_bground)
I_pts = []
Q_pts = []
for I_row, Q_row in zip(sI, sQ):
I_pts.append(np.average(I_row[start_pt:stop_pt]-I_bground))
Q_pts.append(np.average(Q_row[start_pt:stop_pt]-Q_bground))
# plt.imshow(np.histogram2d(np.array(I_pts), np.array(Q_pts))[0])
divider = make_axes_locatable(ax)
ax.set_aspect(1)
bins = np.linspace(-1,1, num_bins)*scale
(h, xedges, yedges, im) = ax.hist2d(I_pts, Q_pts, bins = [bins, bins])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax = cax, orientation = 'vertical')
# ax.hexbin(I_pts, Q_pts, extent = np.array([-1,1,-1,1])*scale)
# ax.set_xticks(np.array([-100,-75,-50,-25,0,25,50,75,100])*scale/100)
# ax.set_yticks(np.array([-100,-75,-50,-25,0,25,50,75,100])*scale/100)
ax.grid()
return bins, h
def weighted_histogram_mpl(weight_function_arr_I, weight_function_arr_Q, sI, sQ, scale = 1, num_bins = 100, record_track = False, plot = False, fig = None, ax = None):
I_pts = []
Q_pts = []
# print("size check: ", np.shape(sI))
# print("weights: ", np.shape(weight_function_arr))
for I_row, Q_row in zip(sI, sQ):
I_pts.append(np.dot(I_row, weight_function_arr_I)+np.dot(Q_row, weight_function_arr_Q))
Q_pts.append(np.dot(Q_row, weight_function_arr_I)-np.dot(I_row, weight_function_arr_Q))
# plt.imshow(np.histogram2d(np.array(I_pts), np.array(Q_pts))[0])
bins = np.linspace(-1,1, num_bins)*scale
(h, xedges, yedges, im) = ax.hist2d(I_pts, Q_pts, bins = [bins, bins])
if plot:
divider = make_axes_locatable(ax)
ax.set_aspect(1)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax = cax, orientation = 'vertical')
if record_track:
fig2, ax2 = plt.subplots()
ax2.set_title("Record Tracking: Demodulated signals")
ax2.set_xlabel("time (~us)")
ax2.set_ylabel(r"$\phi(t)$")
unwrapped_phases = np.mod(np.unwrap(np.arctan(np.array(I_pts[0:500])/np.array(Q_pts[0:500])), period = np.pi), 2*np.pi)
ax2.plot(np.arange(500)*500, unwrapped_phases, '.', label = "phi(t)")
print("Average phase difference between records: ", np.average(np.diff(unwrapped_phases))/np.pi*180, ' degrees')
# ax2.hlines(-12*np.pi, 0, 20000)
return bins, h, I_pts, Q_pts
def weighted_histogram(weight_function_arr_I, weight_function_arr_Q, sI, sQ, scale = 1, num_bins = 100, record_track = False, plot = False, fig = None, ax = None):
I_pts = []
Q_pts = []
# print("size check: ", np.shape(sI))
# print("weights: ", np.shape(weight_function_arr))
for I_row, Q_row in zip(sI, sQ):
I_pts.append(np.dot(I_row, weight_function_arr_I)+np.dot(Q_row, weight_function_arr_Q))
Q_pts.append(np.dot(Q_row, weight_function_arr_I)-np.dot(I_row, weight_function_arr_Q))
# I_pts.append(np.dot(I_row, weight_function_arr_I))
# Q_pts.append(np.dot(Q_row, weight_function_arr_Q))
# plt.imshow(np.histogram2d(np.array(I_pts), np.array(Q_pts))[0])
bins = np.linspace(-1,1, num_bins)*scale
(h, xedges, yedges) = np.histogram2d(I_pts, Q_pts, bins = [bins, bins], density = False)
if plot:
im = ax.pcolormesh(bins, bins, h)
divider = make_axes_locatable(ax)
ax.set_aspect(1)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax = cax, orientation = 'vertical')
if record_track:
fig2, ax2 = plt.subplots()
ax2.set_title("Record Tracking: Demodulated signals")
ax2.set_xlabel("time (~us)")
ax2.set_ylabel(r"$\phi(t)$")
unwrapped_phases = np.mod(np.unwrap(np.arctan(np.array(sI[0:500, 100])/np.array(sQ[0:500, 100])), period = np.pi), 2*np.pi)
ax2.plot(np.arange(500)*500, unwrapped_phases, '.', label = "phi(t)")
print("Average phase difference between records: ", np.average(np.diff(unwrapped_phases))/np.pi*180, ' degrees')
# ax2.hlines(-12*np.pi, 0, 20000)
return bins, h, I_pts, Q_pts
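The dot products inside `weighted_histogram` collapse each time-domain record to a single point in the weighted I/Q plane; records from well-separated states then land in distinct clusters. A toy sketch on synthetic data (the boxcar-on-I weight and cluster parameters are illustrative assumptions, not taken from this module):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200                      # samples per record (illustrative)
S_I = np.ones(T) / T         # toy boxcar weight applied on I only
recs_G = rng.normal(0.0, 0.1, (50, T))   # synthetic "G" records
recs_E = rng.normal(1.0, 0.1, (50, T))   # synthetic "E" records
# same per-record dot product that weighted_histogram performs
pts_G = recs_G @ S_I
pts_E = recs_E @ S_I
# the weighted projection separates the two synthetic states cleanly
print(pts_E.mean() - pts_G.mean())
```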
'''
def Gaussian_2D(M,amplitude, xo, yo, sigma_x, sigma_y, theta):
x, y = M
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
g = amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
+ c*((y-yo)**2)))
return g
'''
def Gaussian_2D(M,amplitude, xo, yo, sigma):
theta = 0
x, y = M
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma**2) + (np.sin(theta)**2)/(2*sigma**2)
b = -(np.sin(2*theta))/(4*sigma**2) + (np.sin(2*theta))/(4*sigma**2)
c = (np.sin(theta)**2)/(2*sigma**2) + (np.cos(theta)**2)/(2*sigma**2)
g = amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
+ c*((y-yo)**2)))
return g
def Gaussian_2D_tilted(M,amplitude, xo, yo, sigma, theta = 0):
x, y = M
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma**2) + (np.sin(theta)**2)/(2*sigma**2)
b = -(np.sin(2*theta))/(4*sigma**2) + (np.sin(2*theta))/(4*sigma**2)
c = (np.sin(theta)**2)/(2*sigma**2) + (np.cos(theta)**2)/(2*sigma**2)
g = amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
+ c*((y-yo)**2)))
return g
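With `theta = 0` the model above reduces to a symmetric Gaussian, `amp * exp(-((x-x0)^2 + (y-y0)^2) / (2*sigma^2))`, which `fit_2D_Gaussian` fits over the raveled bin grid. A minimal recovery check on noiseless synthetic data (the local `gauss2d` mirrors that reduced form; parameter values are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(M, amplitude, x0, y0, sigma):
    # symmetric (theta = 0) form of the Gaussian_2D model above
    x, y = M
    return amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

bins = np.linspace(-1, 1, 50)
X, Y = np.meshgrid(bins, bins)
xdata = np.vstack((X.ravel(), Y.ravel()))          # same raveled-grid layout as fit_2D_Gaussian
ydata = gauss2d(xdata, 2.0, 0.2, -0.3, 0.15)       # noiseless synthetic histogram
popt, _ = curve_fit(gauss2d, xdata, ydata, p0=[1.0, 0.0, 0.0, 0.5])
print(popt)  # should recover amplitude, x0, y0 and |sigma|
```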
class Gaussian_info:
def __init__(self):
self.info_dict = {}
def print_info(self):
for key, val in self.info_dict.items():
if key == 'popt':
pass
elif key == 'pcov':
pass
else:
print(key, ': ', val)
def __sub__(self, other_GC):
sub_class = Gaussian_info()
for key, val in self.info_dict.items():
# print(key, val)
if type(val) == np.float64:
sub_class.info_dict[key] = val - other_GC.info_dict[key]
else:
sub_class.info_dict[key] = None
return sub_class
def center_vec(self):
return np.array([self.info_dict['x0'], self.info_dict['y0']])
def plot_on_ax(self, ax, displacement = np.array([0,0]), color = 'white'):
ax.annotate("", xy=self.center_vec(), xytext=(0,0), arrowprops=dict(arrowstyle = '->', lw = 3, color = color))
def plot_array(self):
return Gaussian_2D(self.info_dict['canvas'], *self.info_dict['popt'])
def sigma_contour(self):
x0, y0 = self.center_vec()
sx = self.info_dict['sigma_x']
sy = self.info_dict['sigma_y']
# angle = self.info_dict['theta']
angle = 0
return Ellipse((x0, y0), sx, sy, angle = angle/(2*np.pi)*360,
fill = False,
ls = '--',
color = 'red',
lw = 2)
def fit_2D_Gaussian(name,
bins,
h_arr,
guessParams,
max_fev = 10000,
contour_line = 0,
debug = False):
if debug:
print("fitting with maxfev = ", max_fev)
X, Y = np.meshgrid(bins[0:-1], bins[0:-1])
xdata, ydata= np.vstack((X.ravel(), Y.ravel())), h_arr.ravel()
# print('xdata_shape: ', np.shape(xdata))
# print("y shape: ",np.shape(ydata))
#,amplitude, xo, yo, sigma_x, sigma_y, theta
bounds = ([0,np.min(bins), np.min(bins), 0],
[10*np.max(h_arr), np.max(bins), np.max(bins), np.max(bins)])
# print(bounds)
popt, pcov = curve_fit(Gaussian_2D, xdata, ydata, p0 = guessParams, maxfev = max_fev, bounds = bounds)
GC = Gaussian_info()
GC.info_dict['name'] = name
GC.info_dict['canvas'] = xdata
GC.info_dict['amplitude'] = popt[0]
GC.info_dict['x0'] = popt[1]
GC.info_dict['y0'] = popt[2]
GC.info_dict['sigma_x'] = np.abs(popt[3])
GC.info_dict['sigma_y'] = np.abs(popt[3])
# GC.info_dict['theta'] = popt[4]
GC.info_dict['popt'] = popt
GC.info_dict['pcov'] = pcov
# GC.info_dict['contour'] = get_contour_line(X, Y, Gaussian_2D(xdata, *popt).reshape(resh_size), contour_line = contour_line)
return GC
def extract_3pulse_phase_differences_from_filepath(datapath, numRecords = 3840*2, window = [0, -1], bc_window = [50, 150], scale = 2):
dd = all_datadicts_from_hdf5(datapath)['data']
offset = window[0]
rtrim = window[-1]
time_unit = dd['time']['unit']
I_offset, Q_offset = 0,0
# print(np.size(np.unique(dd['time']['values'])))
time_vals = dd['time']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))
rec_unit = dd['record_num']['unit']
rec_num = dd['record_num']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))
I_G = dd['I_G']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
I_E = dd['I_E']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
I_F = dd['I_F']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
Q_G = dd['Q_G']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
Q_E = dd['Q_E']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
Q_F = dd['Q_F']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
#averages
I_G_avg = np.average(I_G, axis = 0)
I_E_avg = np.average(I_E, axis = 0)
I_F_avg = np.average(I_F, axis = 0)
Q_G_avg = np.average(Q_G, axis = 0)
Q_E_avg = np.average(Q_E, axis = 0)
Q_F_avg = np.average(Q_F, axis = 0)
WF = np.zeros(np.size(time_vals[0]))
WF[bc_window[0]:bc_window[1]] = 1
Sge = Sgf = Sef = WF
fig2, ax11 = plt.subplots()
#weighted_histogram takes the I/Q weight arrays first; the boxcar is applied on I only (assumed convention)
bins_GE_G, h_GE_G, I_pts, Q_pts = weighted_histogram(Sge, np.zeros_like(Sge), I_G, Q_G, plot = True, fig = fig2, ax = ax11, scale = scale, record_track = True)
fig2, ax2 = plt.subplots()
ax2.set_title("Record Tracking")
ax2.set_xlabel("time (~us)")
ax2.set_ylabel(r"$\phi(t)$")
unwrapped_phases = np.unwrap(np.arctan(np.array(I_pts[0:500])/np.array(Q_pts[0:500])), period = np.pi)
ax2.plot(np.arange(500)*500, unwrapped_phases, '.', label = "phi(t)")
print("Average phase difference between records: ", np.average(np.diff(unwrapped_phases))/np.pi*180, ' degrees')
ax2.hlines(-12*np.pi, 0, 20000)
# ax2.set_aspect(1)
# ax2.plot(Q_pts[0:500], '.', label = "Q")
ax2.grid()
return np.average(np.diff(unwrapped_phases))/np.pi*180
def extract_3pulse_noise_from_filepath(datapath, numRecords = 3840*2, window = [0, -1]):
dd = all_datadicts_from_hdf5(datapath)['data']
offset = window[0]
rtrim = window[-1]
time_unit = dd['time']['unit']
I_offset, Q_offset = 0,0
# print(np.size(np.unique(dd['time']['values'])))
time_vals = dd['time']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))
rec_unit = dd['record_num']['unit']
rec_num = dd['record_num']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))
I_G = dd['I_G']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
I_E = dd['I_E']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
I_F = dd['I_F']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-I_offset
Q_G = dd['Q_G']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
Q_E = dd['Q_E']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
Q_F = dd['Q_F']['values'].reshape((numRecords//3, np.size(dd['time']['values'])//(numRecords//3)))-Q_offset
#averages
I_G_avg = np.average(I_G, axis = 0)
I_E_avg = np.average(I_E, axis = 0)
I_F_avg = np.average(I_F, axis = 0)
Q_G_avg = np.average(Q_G, axis = 0)
Q_E_avg = np.average(Q_E, axis = 0)
Q_F_avg = np.average(Q_F, axis = 0)
print(np.shape(I_G))
print(np.shape(I_G_avg))
print(np.average(I_G_avg))
return [np.sqrt(np.var(np.sqrt((I_G[:, offset: rtrim]-I_G_avg[offset:rtrim])**2+(Q_G[:, offset: rtrim]-Q_G_avg[offset:rtrim])**2))),
np.sqrt(np.var(np.sqrt((I_E[:, offset: rtrim]-I_E_avg[offset:rtrim])**2+(Q_E[:, offset: rtrim]-Q_E_avg[offset:rtrim])**2))),
np.sqrt(np.var(np.sqrt((I_F[:, offset: rtrim]-I_F_avg[offset:rtrim])**2+(Q_F[:, offset: rtrim]-Q_F_avg[offset:rtrim])**2)))]
def extract_3pulse_pwr_from_filepath(datapath, numRecords=3840*2, window=[0, -1]):
    dd = all_datadicts_from_hdf5(datapath)['data']
    offset = window[0]
    rtrim = window[-1]
    time_unit = dd['time']['unit']
    I_offset, Q_offset = 0, 0
    # print(np.size(np.unique(dd['time']['values'])))
    n_recs = numRecords//3
    rec_len = np.size(dd['time']['values'])//n_recs
    time_vals = dd['time']['values'].reshape((n_recs, rec_len))
    rec_unit = dd['record_num']['unit']
    rec_num = dd['record_num']['values'].reshape((n_recs, rec_len))
    I_G = dd['I_G']['values'].reshape((n_recs, rec_len)) - I_offset
    I_E = dd['I_E']['values'].reshape((n_recs, rec_len)) - I_offset
    I_F = dd['I_F']['values'].reshape((n_recs, rec_len)) - I_offset
    Q_G = dd['Q_G']['values'].reshape((n_recs, rec_len)) - Q_offset
    Q_E = dd['Q_E']['values'].reshape((n_recs, rec_len)) - Q_offset
    Q_F = dd['Q_F']['values'].reshape((n_recs, rec_len)) - Q_offset
    # averages over records
    I_G_avg = np.average(I_G, axis=0)
    I_E_avg = np.average(I_E, axis=0)
    I_F_avg = np.average(I_F, axis=0)
    Q_G_avg = np.average(Q_G, axis=0)
    Q_E_avg = np.average(Q_E, axis=0)
    Q_F_avg = np.average(Q_F, axis=0)
    # mean magnitude of the averaged quadratures, one value per state
    return (np.average(np.sqrt(I_G_avg**2 + Q_G_avg**2)[offset:rtrim]),
            np.average(np.sqrt(I_E_avg**2 + Q_E_avg**2)[offset:rtrim]),
            np.average(np.sqrt(I_F_avg**2 + Q_F_avg**2)[offset:rtrim]))
def extract_3pulse_histogram_from_filepath(datapath, plot=False, hist_scale=None, numRecords=3840*2,
                                           rec_start=0, rec_stop=-1, IQ_offset=(0, 0), fit=False,
                                           lpf=True, lpf_wc=50e6, boxcar=False, bc_window=[50, 150],
                                           record_track=True, tuneup_plots=True, debug=False,
                                           tstart_index=0, tstop_index=-1, phase_correction_rate=0,
                                           figscale=1, guess=0, rec_skip=5):
    I_offset, Q_offset = IQ_offset
    dd = all_datadicts_from_hdf5(datapath)['data']
    if debug:
        print("dd keys", dd.keys())
    time_unit = dd['time']['unit']
    # print(np.size(np.unique(dd['time']['values'])))
    n_recs = numRecords//3
    rec_len = np.size(dd['time']['values'])//n_recs
    time_vals = dd['time']['values'].reshape((n_recs, rec_len))
    rec_unit = dd['record_num']['unit']
    rec_num = dd['record_num']['values'].reshape((n_recs, rec_len))
    I_G = dd['I_G']['values'].reshape((n_recs, rec_len)) - I_offset
    I_E = dd['I_E']['values'].reshape((n_recs, rec_len)) - I_offset
    I_F = dd['I_F']['values'].reshape((n_recs, rec_len)) - I_offset
    Q_G = dd['Q_G']['values'].reshape((n_recs, rec_len)) - Q_offset
    Q_E = dd['Q_E']['values'].reshape((n_recs, rec_len)) - Q_offset
    Q_F = dd['Q_F']['values'].reshape((n_recs, rec_len)) - Q_offset
    # correct a rotating generator phase; both components must be built from the
    # un-rotated values, which the tuple assignment guarantees (RHS evaluated first)
    pcr = phase_correction_rate
    C = np.cos(pcr*rec_num)
    S = np.sin(pcr*rec_num)
    I_G, Q_G = I_G*C - Q_G*S, Q_G*C + I_G*S
    I_E, Q_E = I_E*C - Q_E*S, Q_E*C + I_E*S
    I_F, Q_F = I_F*C - Q_F*S, Q_F*C + I_F*S
    return Process_One_Acquisition_3_state(datapath.split('/')[-1].split('\\')[-1], time_vals[0],
                                           I_G, I_E, I_F, Q_G, Q_E, Q_F, hist_scale=hist_scale,
                                           plot=plot, fit=fit, lpf=lpf, lpf_wc=lpf_wc,
                                           boxcar=boxcar, bc_window=bc_window,
                                           record_track=record_track, rec_start=rec_start,
                                           rec_stop=rec_stop, debug=debug,
                                           tstart_index=tstart_index, tstop_index=tstop_index,
                                           figscale=figscale, guess=guess, rec_skip=rec_skip)
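The phase correction above rotates each record's (I, Q) pair by an angle proportional to its record number. A minimal numpy sketch (with a made-up angle) shows the rotation; both components must come from the un-rotated values, which a single tuple assignment guarantees:

```python
import numpy as np

theta = 0.3                          # illustrative phase-correction angle
C, S = np.cos(theta), np.sin(theta)
I, Q = np.array([1.0]), np.array([0.0])
# the RHS is evaluated in full before binding, so Q is built from the original I
I, Q = I*C - Q*S, Q*C + I*S
print(float(I[0]), float(Q[0]))      # (cos 0.3, sin 0.3)
```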
def extract_2pulse_histogram_from_filepath(datapath, plot=False, hist_scale=None, numRecords=3840*2,
                                           numRecordsUsed=3840*2, IQ_offset=(0, 0), fit=False,
                                           lpf=True, lpf_wc=50e6, boxcar=False, bc_window=[50, 150],
                                           record_track=True, tuneup_plots=True, debug=False):
    I_offset, Q_offset = IQ_offset
    dd = all_datadicts_from_hdf5(datapath)['data']
    if debug:
        print("dd keys", dd.keys())
    time_unit = dd['time']['unit']
    # print(np.size(np.unique(dd['time']['values'])))
    n_recs = numRecords//2
    rec_len = np.size(dd['time']['values'])//n_recs
    time_vals = dd['time']['values'].reshape((n_recs, rec_len))
    print("Number of unique time values: %d" % np.size(np.unique(time_vals)))
    rec_unit = dd['record_num']['unit']
    rec_num = dd['record_num']['values'].reshape((n_recs, rec_len))
    print("Number of unique records: %d" % np.size(np.unique(rec_num)))
    print("User input of record number: %d" % numRecords)
    I_G = dd['I_plus']['values'].reshape((n_recs, rec_len)) - I_offset
    I_E = dd['I_minus']['values'].reshape((n_recs, rec_len)) - I_offset
    Q_G = dd['Q_plus']['values'].reshape((n_recs, rec_len)) - Q_offset
    Q_E = dd['Q_minus']['values'].reshape((n_recs, rec_len)) - Q_offset
    # averages over records
    I_G_avg = np.average(I_G, axis=0)
    I_E_avg = np.average(I_E, axis=0)
    Q_G_avg = np.average(Q_G, axis=0)
    Q_E_avg = np.average(Q_E, axis=0)
    return Process_One_Acquisition_2_state(datapath.split('/')[-1].split('\\')[-1], time_vals[0],
                                           I_G, I_E, Q_G, Q_E, hist_scale=hist_scale, plot=plot,
                                           fit=fit, lpf=lpf, lpf_wc=lpf_wc, boxcar=boxcar,
                                           bc_window=bc_window, record_track=record_track,
                                           numRecordsUsed=numRecordsUsed, debug=debug)
def hist_discriminant(h1, h2):
    # True (1) where h1 dominates, False (0) where h2 does
    return (h1 - h2) > 0
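A quick usage sketch for the discriminant on two toy 2-D histograms (the counts are invented; the function is restated so the snippet is self-contained):

```python
import numpy as np

def hist_discriminant(h1, h2):
    # True (1) where h1 dominates, False (0) where h2 does
    return (h1 - h2) > 0

h1 = np.array([[5, 1], [0, 2]])   # invented bin counts
h2 = np.array([[1, 1], [3, 2]])
mask = hist_discriminant(h1, h2)
print(mask.astype(int))
```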
| 45.981599 | 445 | 0.562035 | 11,288 | 72,467 | 3.343551 | 0.051736 | 0.031 | 0.015897 | 0.012718 | 0.826798 | 0.804197 | 0.792963 | 0.782338 | 0.755657 | 0.746092 | 0 | 0.048532 | 0.283467 | 72,467 | 1,575 | 446 | 46.010794 | 0.678325 | 0.165455 | 0 | 0.581653 | 0 | 0 | 0.055894 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025202 | false | 0.002016 | 0.013105 | 0.004032 | 0.061492 | 0.039315 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
45dfafc2ac8c87871264dc5c2a306ccdcaed221d | 9,516 | py | Python | test/hlt/pytest/python/com/huawei/iotplatform/client/invokeapi/DeviceGroupManagement.py | yuanyi-thu/AIOT- | 27f67d98324593c4c6c66bbd5e2a4aa7b9a4ac1e | [
"BSD-3-Clause"
] | 128 | 2018-10-29T04:11:47.000Z | 2022-03-07T02:19:14.000Z | test/hlt/pytest/python/com/huawei/iotplatform/client/invokeapi/DeviceGroupManagement.py | yuanyi-thu/AIOT- | 27f67d98324593c4c6c66bbd5e2a4aa7b9a4ac1e | [
"BSD-3-Clause"
] | 40 | 2018-11-02T00:40:48.000Z | 2021-12-07T09:33:56.000Z | test/hlt/pytest/python/com/huawei/iotplatform/client/invokeapi/DeviceGroupManagement.py | yuanyi-thu/AIOT- | 27f67d98324593c4c6c66bbd5e2a4aa7b9a4ac1e | [
"BSD-3-Clause"
] | 118 | 2018-10-29T08:43:57.000Z | 2022-01-07T06:49:25.000Z | import json
import logging
from com.huawei.iotplatform.client.NorthApiClient import NorthApiClient
from com.huawei.iotplatform.constant.RestConstant import RestConstant
from com.huawei.iotplatform.utils.DictUtil import DictUtil
from com.huawei.iotplatform.utils.LogUtil import Log
class DeviceGroupManagement(object):
    log = Log()
    log.setLogConfig()

    def createDeviceGroup(self, cdgInDTO, accessToken):
        cdgInDTO = DictUtil.dto2dict(cdgInDTO)
        authUrl = RestConstant.CREATE_DEVICE_GROUP
        payload = json.dumps(cdgInDTO)
        logging.info(cdgInDTO)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPPOST, authUrl, payload, accessToken)

    def deleteDeviceGroup(self, devGroupId, accessAppId, accessToken):
        authUrl = RestConstant.DELETE_DEVICE_GROUP + devGroupId
        if accessAppId is not None:
            authUrl += "?accessAppId=" + accessAppId
        logging.info(devGroupId)
        logging.info(accessAppId)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPDELETE, authUrl, None, accessToken)

    def modifyDeviceGroup(self, mdgInDTO, devGroupId, accessAppId, accessToken):
        mdgInDTO = DictUtil.dto2dict(mdgInDTO)
        authUrl = RestConstant.MODIFY_DEVICE_GROUP + devGroupId
        if accessAppId is not None:
            authUrl += "?accessAppId=" + accessAppId
        payload = json.dumps(mdgInDTO)
        logging.info(mdgInDTO)
        logging.info(devGroupId)
        logging.info(accessAppId)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPPUT, authUrl, payload, accessToken)

    def queryDeviceGroups(self, qdgInDTO, accessToken):
        qdgInDTO = DictUtil.dto2dict(qdgInDTO)
        authUrl = RestConstant.QUERY_DEVICE_GROUPS
        for key in qdgInDTO.keys():
            if qdgInDTO[key] is not None:
                authUrl += "&" + key + "=" + qdgInDTO[key]
        logging.info(qdgInDTO)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPGET, authUrl, None, accessToken)

    def querySingleDeviceGroup(self, devGroupId, accessAppId, accessToken):
        authUrl = RestConstant.QUERY_SINGLE_DEVICE_GROUP + devGroupId
        if accessAppId is not None:
            authUrl += "?accessAppId=" + accessAppId
        logging.info(devGroupId)
        logging.info(accessAppId)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPGET, authUrl, None, accessToken)

    def queryDeviceGroupMembers(self, qdgmInDTO, accessToken):
        qdgmInDTO = DictUtil.dto2dict(qdgmInDTO)
        authUrl = RestConstant.QUERY_DEVICE_GROUP_MEMBERS + qdgmInDTO['devGroupId']
        for key in qdgmInDTO.keys():
            if key == 'devGroupId':
                continue  # already part of the path
            if qdgmInDTO[key] is not None:
                authUrl += "&" + key + "=" + qdgmInDTO[key]
        logging.info(qdgmInDTO)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPGET, authUrl, None, accessToken)

    def addDevicesToGroup(self, dgwdlDTO, accessAppId, accessToken):
        dgwdlDTO = DictUtil.dto2dict(dgwdlDTO)
        authUrl = RestConstant.ADD_DEVICES_TO_GROUP
        if accessAppId is not None:
            authUrl += "?accessAppId=" + accessAppId
        payload = json.dumps(dgwdlDTO)
        logging.info(dgwdlDTO)
        logging.info(accessAppId)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPPOST, authUrl, payload, accessToken)

    def deleteDevicesFromGroup(self, dgwdlDTO, accessAppId, accessToken):
        dgwdlDTO = DictUtil.dto2dict(dgwdlDTO)
        authUrl = RestConstant.DELETE_DEVICES_FROM_GROUP
        if accessAppId is not None:
            authUrl += "?accessAppId=" + accessAppId
        payload = json.dumps(dgwdlDTO)
        logging.info(dgwdlDTO)
        logging.info(accessAppId)
        logging.info(accessToken)
        return NorthApiClient.invokeAPI(RestConstant.HTTPPOST, authUrl, payload, accessToken)
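The query-string loop in `queryDeviceGroups` can be exercised in isolation. The sketch below uses a plain dict and a placeholder base URL (the real bases live in `RestConstant`); note the loop appends "&" even for the first parameter, presumably relying on the constant ending with a separator:

```python
def build_query_url(base, params):
    # append every non-None parameter, mirroring the loop in queryDeviceGroups
    url = base
    for key, value in params.items():
        if value is not None:
            url += "&" + key + "=" + str(value)
    return url

# placeholder base URL, for illustration only
url = build_query_url("https://host/iocm/app/devGroupMgmt/v1.1.0/devGroups?",
                      {"accessAppId": "app1", "pageNo": 0, "name": None})
print(url)  # ...devGroups?&accessAppId=app1&pageNo=0
```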
| 50.084211 | 110 | 0.643338 | 793 | 9,516 | 7.649433 | 0.102144 | 0.039894 | 0.076492 | 0.108144 | 0.854599 | 0.829871 | 0.798714 | 0.798714 | 0.74184 | 0.705407 | 0 | 0.001804 | 0.242539 | 9,516 | 189 | 111 | 50.349206 | 0.839761 | 0.513241 | 0 | 0.347826 | 0 | 0 | 0.019682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115942 | false | 0 | 0.086957 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
afdcbbfc35c496d4505ac4f1de4bc9f5eeb19ee6 | 199 | py | Python | rltf/__init__.py | psFournier/rltf | aae5451415dc18deda3c0c84580df42a12dc3843 | [
"MIT"
] | 90 | 2018-05-02T17:15:52.000Z | 2021-11-09T08:53:44.000Z | rltf/__init__.py | arita37/rltf | d56714494f73e53ed4b41d6376d942332b406885 | [
"MIT"
] | 1 | 2019-10-01T11:41:53.000Z | 2019-12-08T15:38:53.000Z | rltf/__init__.py | arita37/rltf | d56714494f73e53ed4b41d6376d942332b406885 | [
"MIT"
] | 25 | 2018-01-14T16:56:44.000Z | 2021-11-09T08:53:48.000Z | from rltf import agents
from rltf import cmdutils
from rltf import envs
from rltf import exploration
from rltf import memory
from rltf import models
from rltf import schedules
from rltf import utils
| 22.111111 | 28 | 0.839196 | 32 | 199 | 5.21875 | 0.34375 | 0.383234 | 0.670659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160804 | 199 | 8 | 29 | 24.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aff39500cccd84e5ac53cd8eda69c81c9aecb21a | 3,235 | py | Python | tests/test_pillar.py | Jabb0/FastFlow3D | cdc2a547268b85d0c851cf87786d80fcde4e8487 | [
"MIT"
] | 6 | 2021-10-14T03:30:32.000Z | 2022-03-25T07:16:03.000Z | tests/test_pillar.py | Jabb0/FastFlow3D | cdc2a547268b85d0c851cf87786d80fcde4e8487 | [
"MIT"
] | 2 | 2021-10-08T09:06:24.000Z | 2022-03-26T10:37:22.000Z | tests/test_pillar.py | Jabb0/FastFlow3D | cdc2a547268b85d0c851cf87786d80fcde4e8487 | [
"MIT"
] | null | null | null | import torch
from networks.pillarFeatureNetScatter import PillarFeatureNetScatter
def test_scatter_grid_representation():
n_points = 5
batch_size = 1
n_features = 64
n_pillars_x = 3
n_pillars_y = 3
x = torch.ones(size=(batch_size, n_points, n_features))
indices = torch.tensor([[0, 0, 0, 0, 0]]) # tensor of shape (batch_size, n_points)
indices = indices.unsqueeze(-1).expand(-1, -1, n_features)
pfns = PillarFeatureNetScatter(n_pillars_x=n_pillars_x, n_pillars_y=n_pillars_y)
output = pfns(x=x, indices=indices)
true_output = torch.zeros(size=(batch_size, n_features, n_pillars_x, n_pillars_y))
# all points are at (0, 0), so all point features are added at this position
true_output[0, :, 0, 0] = torch.full(size=(n_features, ), fill_value=5)
assert output.shape == torch.Size((batch_size, n_features, 3, 3))
assert torch.allclose(output, true_output)
    n_points = 5
    batch_size = 1
n_features = 64
n_pillars_x = 5
n_pillars_y = 5
x = torch.ones(size=(batch_size, n_points, n_features))
indices = torch.tensor([[1, 2, 3, 3, 1]]) # tensor of shape (batch_size, n_points)
indices = indices.unsqueeze(-1).expand(-1, -1, n_features)
pfns = PillarFeatureNetScatter(n_pillars_x=n_pillars_x, n_pillars_y=n_pillars_y)
output = pfns(x=x, indices=indices)
true_output = torch.zeros(size=(batch_size, n_features, n_pillars_x, n_pillars_y))
    # points land in flattened pillars 1, 2 and 3; duplicate indices accumulate
true_output[0, :, 0, 1] = torch.full(size=(n_features, ), fill_value=2)
true_output[0, :, 0, 2] = torch.full(size=(n_features, ), fill_value=1)
true_output[0, :, 0, 3] = torch.full(size=(n_features, ), fill_value=2)
assert output.shape == torch.Size((batch_size, n_features, 5, 5))
assert torch.allclose(output, true_output)
    n_points = 5
    batch_size = 2
n_features = 64
n_pillars_x = 5
n_pillars_y = 5
x = torch.ones(size=(batch_size, n_points, n_features))
indices = torch.tensor([[1, 2, 3, 3, 1],
[1, 2, 6, 3, 1]]) # tensor of shape (batch_size, n_points)
indices = indices.unsqueeze(-1).expand(-1, -1, n_features)
pfns = PillarFeatureNetScatter(n_pillars_x=n_pillars_x, n_pillars_y=n_pillars_y)
output = pfns(x=x, indices=indices)
true_output = torch.zeros(size=(batch_size, n_features, n_pillars_x, n_pillars_y))
    # record 0 hits pillars 1, 2, 3; record 1 also hits flattened index 6, i.e. cell (1, 1)
true_output[0, :, 0, 1] = torch.full(size=(n_features, ), fill_value=2)
true_output[0, :, 0, 2] = torch.full(size=(n_features, ), fill_value=1)
true_output[0, :, 0, 3] = torch.full(size=(n_features, ), fill_value=2)
true_output[1, :, 0, 1] = torch.full(size=(n_features, ), fill_value=2)
true_output[1, :, 0, 2] = torch.full(size=(n_features, ), fill_value=1)
true_output[1, :, 0, 3] = torch.full(size=(n_features, ), fill_value=1)
true_output[1, :, 1, 1] = torch.full(size=(n_features, ), fill_value=1)
assert output.shape == torch.Size((batch_size, n_features, 5, 5))
assert torch.allclose(output, true_output)
if __name__ == '__main__':
test_scatter_grid_representation()
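The accumulation these tests check — point features scatter-added into a flattened pillar grid, with duplicate indices summed — can be reproduced without torch using `np.add.at` (numpy stands in for the scatter module here; shapes mirror the `[1, 2, 3, 3, 1]` case):

```python
import numpy as np

n_pillars_x, n_pillars_y, n_features = 5, 5, 4
points = np.ones((5, n_features))        # 5 points with all-ones features
indices = np.array([1, 2, 3, 3, 1])      # flattened pillar index per point

grid = np.zeros((n_features, n_pillars_x * n_pillars_y))
np.add.at(grid, (slice(None), indices), points.T)   # duplicates are summed
grid = grid.reshape(n_features, n_pillars_x, n_pillars_y)
print(grid[0, 0, 1], grid[0, 0, 2], grid[0, 0, 3])  # 2.0 1.0 2.0
```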
| 44.315068 | 87 | 0.676662 | 521 | 3,235 | 3.932822 | 0.101727 | 0.114202 | 0.107857 | 0.075159 | 0.908248 | 0.908248 | 0.898487 | 0.883358 | 0.867252 | 0.844802 | 0 | 0.039848 | 0.185471 | 3,235 | 72 | 88 | 44.930556 | 0.737761 | 0.10541 | 0 | 0.660714 | 0 | 0 | 0.002771 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 1 | 0.017857 | false | 0 | 0.035714 | 0 | 0.053571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b311ed75298ab8980d87627007450adc6e917325 | 96 | py | Python | venv/lib/python3.8/site-packages/pip/_internal/index/__init__.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_internal/index/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_internal/index/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/be/9b/7e/25e4d979f87c6be142db665e0525c555bb817174868882e141925a3694 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.479167 | 0 | 96 | 1 | 96 | 96 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b320a21a305c79aa980534d9ecf51541798fc8cd | 28 | py | Python | tweezers/plot/__init__.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | tweezers/plot/__init__.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | tweezers/plot/__init__.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | from .utils import peekPlot
| 14 | 27 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6416aff064346c54ddadf29722a0e231b76ef710 | 43 | py | Python | stdpycompat/jsoncompat.py | mlockett42/pypddemo | 52523472c0b78e33ab0e985840b8c62cb5b2767b | [
"Apache-2.0"
] | null | null | null | stdpycompat/jsoncompat.py | mlockett42/pypddemo | 52523472c0b78e33ab0e985840b8c62cb5b2767b | [
"Apache-2.0"
] | null | null | null | stdpycompat/jsoncompat.py | mlockett42/pypddemo | 52523472c0b78e33ab0e985840b8c62cb5b2767b | [
"Apache-2.0"
] | null | null | null | from json import JSONEncoder, JSONDecoder
| 14.333333 | 41 | 0.837209 | 5 | 43 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 2 | 42 | 21.5 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64358bdfec50716591699f2ccaa32bd900a8fa28 | 96 | py | Python | venv/lib/python3.8/site-packages/cachy/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/cachy/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/cachy/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/cb/8d/48/995f394c99713d4918ef0358800846d95404a39fe0ff4dd66dccd9e7f1 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ff2e45546c693d1f2582aef8f5deb8af2985c0aa | 1,780 | py | Python | pysot/models/GhostNet/Ghost_block.py | linjiangxiaoxian/ACSiamRPN | dba3298cdb4ca0bcd9b335cc9d507932fc1e3c78 | [
"Apache-2.0"
] | null | null | null | pysot/models/GhostNet/Ghost_block.py | linjiangxiaoxian/ACSiamRPN | dba3298cdb4ca0bcd9b335cc9d507932fc1e3c78 | [
"Apache-2.0"
] | 1 | 2022-03-06T07:14:21.000Z | 2022-03-06T07:14:21.000Z | pysot/models/GhostNet/Ghost_block.py | linjiangxiaoxian/ACSiamRPN | dba3298cdb4ca0bcd9b335cc9d507932fc1e3c78 | [
"Apache-2.0"
] | 4 | 2021-03-08T12:24:34.000Z | 2021-08-11T02:39:47.000Z | import torch
import torch.nn as nn
import math
class GhostModule_have_SK(nn.Module):
    def __init__(self, inp=256, oup=256, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=True):
super(GhostModule_have_SK, self).__init__()
self.oup = oup
init_channels = math.ceil(self.oup / ratio)
new_channels = init_channels*(ratio-1)
self.cheap_operation = nn.Sequential(
nn.Conv2d(init_channels, new_channels, dw_size, 1, dw_size//2, groups=init_channels, bias=False),
nn.BatchNorm2d(new_channels),
nn.ReLU(inplace=True) if relu else nn.Sequential(),
)
def forward(self, x):
x1 = x
x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        return out[:, :self.oup, :, :]
class GhostModule(nn.Module):
    def __init__(self, inp=256, oup=256, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=True):
super(GhostModule, self).__init__()
self.oup = oup
init_channels = math.ceil(self.oup / ratio)
new_channels = init_channels*(ratio-1)
self.primary_conv = nn.Sequential(
nn.Conv2d(inp, init_channels, kernel_size, stride, kernel_size//2, bias=False),
nn.BatchNorm2d(init_channels),
nn.ReLU(inplace=True) if relu else nn.Sequential(),
)
self.cheap_operation = nn.Sequential(
nn.Conv2d(init_channels, new_channels, dw_size, 1, dw_size//2, groups=init_channels, bias=False),
nn.BatchNorm2d(new_channels),
nn.ReLU(inplace=True) if relu else nn.Sequential(),
)
def forward(self, x):
x1 = self.primary_conv(x)
x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
return out[:,:self.oup,:,:] | 37.87234 | 109 | 0.623034 | 247 | 1,780 | 4.279352 | 0.206478 | 0.113529 | 0.068117 | 0.056764 | 0.81457 | 0.81457 | 0.81457 | 0.81457 | 0.81457 | 0.81457 | 0 | 0.033582 | 0.247191 | 1,780 | 47 | 110 | 37.87234 | 0.755224 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.075 | 0 | 0.275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
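The point of the Ghost module is parameter savings: a cheap depthwise conv generates half the output channels. A back-of-the-envelope count under the defaults above (inp=oup=256, ratio=2, dw_size=3); the `conv2d_params` helper is ad hoc, not part of the module:

```python
import math

def conv2d_params(c_in, c_out, k, groups=1):
    # weight count of nn.Conv2d(c_in, c_out, k, groups=groups, bias=False)
    return c_out * (c_in // groups) * k * k

inp, oup, ratio, dw_size = 256, 256, 2, 3
init_channels = math.ceil(oup / ratio)        # 128 "intrinsic" channels
new_channels = init_channels * (ratio - 1)    # 128 "ghost" channels

plain = conv2d_params(inp, oup, 1)            # ordinary 1x1 conv
ghost = (conv2d_params(inp, init_channels, 1)
         + conv2d_params(init_channels, new_channels, dw_size, groups=init_channels))
print(plain, ghost)  # 65536 33920
```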
ff3ae28f655735313a63e84d7ab67335c25c57bf | 48 | py | Python | tasks/__init__.py | martinwhl/informer-lightning | 5a1851577b7e450492cd1fcdc1ecdc5d1b7e8c16 | [
"Apache-2.0"
] | 13 | 2021-04-12T16:37:51.000Z | 2022-03-30T01:58:31.000Z | tasks/__init__.py | martinwhl/informer-lightning | 5a1851577b7e450492cd1fcdc1ecdc5d1b7e8c16 | [
"Apache-2.0"
] | 1 | 2022-01-10T19:53:31.000Z | 2022-01-16T17:31:39.000Z | tasks/__init__.py | martinwhl/informer-lightning | 5a1851577b7e450492cd1fcdc1ecdc5d1b7e8c16 | [
"Apache-2.0"
] | 3 | 2021-05-18T15:51:18.000Z | 2021-11-06T04:37:52.000Z | from tasks.forecast import InformerForecastTask
| 24 | 47 | 0.895833 | 5 | 48 | 8.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.977273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff955cc1a9fe047347b2c1c79452a5e7101ad03d | 955 | py | Python | tests/unit/utils/markers/test_skip_if_not_root.py | ScriptAutomate/pytest-salt-factories | 192e15a7e93eec694f59099021a4d4268a1ab1ea | [
"Apache-2.0"
] | null | null | null | tests/unit/utils/markers/test_skip_if_not_root.py | ScriptAutomate/pytest-salt-factories | 192e15a7e93eec694f59099021a4d4268a1ab1ea | [
"Apache-2.0"
] | null | null | null | tests/unit/utils/markers/test_skip_if_not_root.py | ScriptAutomate/pytest-salt-factories | 192e15a7e93eec694f59099021a4d4268a1ab1ea | [
"Apache-2.0"
] | null | null | null | """
tests.unit.utils.markers.test_skip_if_not_root
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Test the "skip_if_not_root" marker helper
"""
import sys
from unittest import mock
import saltfactories.utils.markers
def test_when_root():
if sys.platform.startswith("win"):
with mock.patch("salt.utils.win_functions.is_admin", return_value=True):
assert saltfactories.utils.markers.skip_if_not_root() is None
else:
with mock.patch("os.getuid", return_value=0):
assert saltfactories.utils.markers.skip_if_not_root() is None
def test_when_not_root():
if sys.platform.startswith("win"):
with mock.patch("salt.utils.win_functions.is_admin", return_value=False):
assert saltfactories.utils.markers.skip_if_not_root() is not None
else:
with mock.patch("os.getuid", return_value=1):
assert saltfactories.utils.markers.skip_if_not_root() is not None
| 32.931034 | 81 | 0.672251 | 130 | 955 | 4.692308 | 0.307692 | 0.080328 | 0.088525 | 0.127869 | 0.72459 | 0.72459 | 0.72459 | 0.72459 | 0.72459 | 0.606557 | 0 | 0.002551 | 0.179058 | 955 | 28 | 82 | 34.107143 | 0.77551 | 0.142408 | 0 | 0.470588 | 0 | 0 | 0.112641 | 0.082603 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.117647 | true | 0 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
440d03490b92091da831df9dd8a868de64fc822e | 19 | py | Python | easy_model_zoo/test/__init__.py | SharifElfouly/easy-model-zoo | e1726ea18eb5b64d98ab91a72ec07b29c8c38650 | [
"MIT"
] | 4 | 2020-08-19T14:18:28.000Z | 2021-06-02T08:12:14.000Z | easy_model_zoo/test/__init__.py | SharifElfouly/easy-model-zoo | e1726ea18eb5b64d98ab91a72ec07b29c8c38650 | [
"MIT"
] | null | null | null | easy_model_zoo/test/__init__.py | SharifElfouly/easy-model-zoo | e1726ea18eb5b64d98ab91a72ec07b29c8c38650 | [
"MIT"
] | null | null | null | from .t import TEST | 19 | 19 | 0.789474 | 4 | 19 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4435fe79e4fde04ddddae720cb93b4ece025210e | 189 | py | Python | xam/feature_extraction/__init__.py | topolphukhanh/xam | 3fa958ba8b0c8e8e266cac9997b7a7d0c309f55c | [
"MIT"
] | null | null | null | xam/feature_extraction/__init__.py | topolphukhanh/xam | 3fa958ba8b0c8e8e266cac9997b7a7d0c309f55c | [
"MIT"
] | null | null | null | xam/feature_extraction/__init__.py | topolphukhanh/xam | 3fa958ba8b0c8e8e266cac9997b7a7d0c309f55c | [
"MIT"
] | null | null | null | from .smooth_target_encoding import SmoothTargetEncoder
from .combinations import FeatureCombiner
from .cycle import CycleTransformer
from .k_fold_target_encoding import KFoldTargetEncoder
| 37.8 | 55 | 0.89418 | 21 | 189 | 7.809524 | 0.619048 | 0.170732 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084656 | 189 | 4 | 56 | 47.25 | 0.947977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4480de14afe1de53e1eeadcbf4d09e1f99469385 | 172 | py | Python | datasets/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | 9 | 2020-03-26T16:38:30.000Z | 2021-11-06T03:55:36.000Z | datasets/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | 9 | 2020-03-28T21:50:47.000Z | 2020-04-15T14:26:12.000Z | datasets/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | 10 | 2020-03-26T17:07:07.000Z | 2022-02-18T08:47:05.000Z | from .chest_xray_pneumonia_dataset import ChestXRayPneumoniaDataset
from .covid_chestxray_dataset import COVIDChestXRayDataset
from .nih_cx38_dataset import NIHCX38Dataset
| 43 | 67 | 0.912791 | 19 | 172 | 7.894737 | 0.684211 | 0.26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.069767 | 172 | 3 | 68 | 57.333333 | 0.9125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9237550d9f275096b72ec282548f60e02c21cb89 | 41 | py | Python | akaocr/engine/__init__.py | qai-research/Efficient_Text_Detection | e5cfe51148cc4fbf4c4f3afede040e4ebd624e8b | [
"MIT"
] | 2 | 2021-04-28T04:13:09.000Z | 2021-06-05T04:11:11.000Z | akaocr/engine/__init__.py | qai-research/Efficient_Text_Detection | e5cfe51148cc4fbf4c4f3afede040e4ebd624e8b | [
"MIT"
] | 2 | 2021-05-06T13:49:52.000Z | 2021-05-14T08:45:13.000Z | akaocr/engine/__init__.py | qai-research/Efficient_Text_Detection | e5cfe51148cc4fbf4c4f3afede040e4ebd624e8b | [
"MIT"
] | null | null | null | from .trainer.train_base import Trainer
| 20.5 | 40 | 0.829268 | 6 | 41 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
925bc226e1e64ffe42ef72fcf08242ff2d560e95 | 5,874 | py | Python | test-framework/test-suites/integration/tests/add/test_add_host_alias.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | null | null | null | test-framework/test-suites/integration/tests/add/test_add_host_alias.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | null | null | null | test-framework/test-suites/integration/tests/add/test_add_host_alias.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | null | null | null | from textwrap import dedent
import pytest
import json
class TestAddHostAlias:
# split possible?
def test_to_multiple_interfaces_across_multiple_hosts(self, host, revert_etc, test_file):
result = host.run(f'stack load hostfile file={test_file("add/add_host_alias_hostfile.csv")}')
assert result.rc == 0
result = host.run('stack add host alias backend-0-0 alias=test0-eth0 interface=eth0')
assert result.rc == 0
# one alias in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_one_alias.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
result = host.run('stack add host alias backend-0-0 alias=test0-eth1 interface=eth1')
assert result.rc == 0
result = host.run('stack add host alias backend-0-1 alias=test1-eth0 interface=eth0')
assert result.rc == 0
result = host.run('stack add host alias backend-0-1 alias=test1-eth1 interface=eth1')
assert result.rc == 0
# four aliases in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_four_aliases.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
def test_add_numeric_alias(self, host, add_host_with_interface):
# add numeric alias (invalid)
result = host.run('stack add host alias backend-0-0 alias=42 interface=eth0')
assert result.rc != 0
# no aliases in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
assert result.stdout.strip() == ''
def test_add_duplicate_alias_same_host_interface(self, host, add_host_with_interface, test_file):
result = host.run('stack add host alias backend-0-0 alias=test0-eth0 interface=eth0')
assert result.rc == 0
# add same alias again (invalid)
result = host.run('stack add host alias backend-0-0 alias=test0-eth0 interface=eth0')
assert result.rc != 0
# one alias in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_one_alias.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
def test_add_duplicate_alias_same_host(self, host, revert_etc, test_file):
result = host.run(f'stack load hostfile file={test_file("add/add_host_alias_hostfile.csv")}')
assert result.rc == 0
result = host.run('stack add host alias backend-0-0 alias=test interface=eth0')
assert result.rc == 0
result = host.run('stack add host alias backend-0-0 alias=test interface=eth1')
assert result.rc == 0
# both aliases in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_two_aliases_same_name.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
def test_add_duplicate_alias_different_host(self, host, revert_etc, test_file):
result = host.run(f'stack load hostfile file={test_file("add/add_host_alias_hostfile.csv")}')
assert result.rc == 0
result = host.run('stack add host alias backend-0-0 alias=test0-eth0 interface=eth0')
assert result.rc == 0
# add same alias to different host (invalid)
result = host.run('stack add host alias backend-0-1 alias=test0-eth0 interface=eth0')
assert result.rc != 0
# one alias in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_one_alias.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
def test_add_multiple_aliases_same_host_interface(self, host, add_host_with_interface, test_file):
result = host.run('stack add host alias backend-0-0 alias=test0-eth0 interface=eth0')
assert result.rc == 0
result = host.run('stack add host alias backend-0-0 alias=2-test0-eth0 interface=eth0')
assert result.rc == 0
# both aliases in list
result = host.run('stack list host alias output-format=json')
assert result.rc == 0
with open(test_file('add/add_host_alias_multiple_aliases_same_host_interface.json')) as output:
expected_output = output.read()
assert json.loads(result.stdout) == json.loads(expected_output)
def test_no_host(self, host):
result = host.run('stack add host alias')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "host" argument is required
{host} {alias=string} {interface=string}
''')
def test_no_matching_hosts(self, host):
result = host.run('stack add host alias a:test')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "host" argument is required
{host} {alias=string} {interface=string}
''')
def test_multiple_hosts(self, host, add_host):
result = host.run('stack add host alias frontend-0-0 backend-0-0')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "host" argument must be unique
{host} {alias=string} {interface=string}
''')
def test_hostname_in_use(self, host, add_host):
result = host.run('stack add host alias frontend-0-0 alias=backend-0-0 interface=eth0')
assert result.rc == 255
assert result.stderr == 'error - hostname already in use\n'
def test_invalid_alias(self, host, add_host):
result = host.run('stack add host alias frontend-0-0 alias=127.0.0.1 interface=eth0')
assert result.rc == 255
assert result.stderr == 'error - aliases cannot be an IP address\n'
def test_invalid_interface(self, host, add_host):
result = host.run('stack add host alias frontend-0-0 alias=foo interface=eth7')
assert result.rc == 255
assert result.stderr == 'error - interface does not exist\n'
| 38.644737 | 99 | 0.733061 | 925 | 5,874 | 4.52 | 0.102703 | 0.081799 | 0.09017 | 0.111935 | 0.894762 | 0.878976 | 0.864626 | 0.837359 | 0.822531 | 0.804353 | 0 | 0.024775 | 0.14794 | 5,874 | 151 | 100 | 38.900662 | 0.810589 | 0.043071 | 0 | 0.648148 | 0 | 0.009259 | 0.392048 | 0.070066 | 0 | 0 | 0 | 0 | 0.388889 | 1 | 0.111111 | false | 0 | 0.027778 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
928f4b513c2f03d305adb1909e6223dd7108ff32 | 14,217 | py | Python | PA_DPC/dpc/dataset_3d_infer_pred_error.py | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 44 | 2021-03-24T07:10:57.000Z | 2022-03-12T11:49:14.000Z | PA_DPC/dpc/dataset_3d_infer_pred_error.py | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 2 | 2021-05-26T09:31:55.000Z | 2021-08-11T11:47:38.000Z | PA_DPC/dpc/dataset_3d_infer_pred_error.py | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 6 | 2021-04-07T00:51:51.000Z | 2022-01-12T01:54:41.000Z | import torch
from torch.utils import data
from torchvision import transforms
import os
import sys
import time
import pickle
import glob
import csv
import ipdb
import pandas as pd
import numpy as np
import cv2
sys.path.append('../utils')
from augmentation import *
from tqdm import tqdm
from joblib import Parallel, delayed
def pil_loader(path):
with open(path, 'rb') as f:
with Image.open(f) as img:
return img.convert('RGB')
def pil_loader_flow(path_x, path_y):
with open(path_x, 'rb') as f:
with Image.open(f) as img:
x = np.asarray(img)
x = np.round((x-127.5)*127.5/20+127.5)
x = (np.minimum(np.maximum(x, 0.0), 255.0))
with open(path_y, 'rb') as f:
with Image.open(f) as img:
y = np.asarray(img)
y = np.round((y-127.5)*127.5/20+127.5)
y = (np.minimum(np.maximum(y, 0.0), 255.0))
img = 128*np.ones((x.shape[0], x.shape[1],3), dtype=np.uint8)
img[:, :, 0] = x
img[:, :, 1] = y
return Image.fromarray(img).convert('RGB')
class TAPOS_instances_3d(data.Dataset):
def __init__(self,
mode='val',
transform=None,
seq_len=10,
num_seq=5,
downsample=3,
epsilon=5,
unit_test=False,
modality='rgb',
pkl_folder_name='',
big=False,
return_label=False,
pred_task='',
pred_step=3):
self.mode = mode
self.transform = transform
self.seq_len = seq_len
self.num_seq = num_seq
self.downsample = downsample
self.epsilon = epsilon
self.unit_test = unit_test
self.return_label = return_label
self.pred_step = pred_step
self.pred_task = pred_task
self.modality = modality
self.pkl_folder_name = pkl_folder_name
if big: print('Using TAPOS (224x224)')
else: print('Using TAPOS (112x112) ')
# splits
if big:
if mode == 'train':
raise ValueError('train mode NOT IMPLEMENTED')
elif (mode == 'val') or (mode == 'test'):
split = '../../data/exp_TAPOS/val_set.csv'
video_info = pd.read_csv(split, header=None)
else: raise ValueError('wrong mode')
else: # small
raise ValueError('NOT IMPLEMENTED')
drop_idx = []
print('filter out too short videos ...')
for idx, row in tqdm(video_info.iterrows(), total=len(video_info)):
vpath, vlen = row
if vlen-self.num_seq*self.seq_len*self.downsample <= 10:
drop_idx.append(idx)
self.video_info = video_info.drop(drop_idx, axis=0)
print('#videos kept: ' + str(len(self.video_info)))
        if mode == 'val': self.video_info = self.video_info.sample(frac=1, random_state=666)  # shuffle all rows with a fixed seed
if self.unit_test: self.video_info = self.video_info.sample(32, random_state=666) # sample a few videos for unittest
# shuffle not necessary because use RandomSampler
print('construct windows of stride 1 from each video ...')
self.window_info = pd.DataFrame(columns=['vpath', 'vlen', 'window_idx',
'all_window_seq_idx_block',
'current_frame_idx'])
pkl_dir = self.pkl_folder_name
if not os.path.exists(pkl_dir): os.makedirs(pkl_dir)
if self.mode == 'train':
raise ValueError('NOT IMPLEMENTED')
else:
pkl_name = pkl_dir+"/window_lists.rbg.pkl"
if self.modality == 'flow':
pkl_name = pkl_dir+"/window_lists.flow.pkl"
if os.path.exists(pkl_name):
print("skip constructing windows... use pre-computed one...")
self.window_info = pickle.load(open(pkl_name, "rb"))
else:
# stride per sampled frame
for _, (vpath, vlen) in tqdm(self.video_info.iterrows(), total=len(self.video_info)):
if vlen-self.num_seq*self.seq_len*self.downsample <= 0:
print("vlen: "+str(vlen))
print("self.num_seq*self.seq_len*self.downsample: "+str(self.num_seq*self.seq_len*self.downsample))
continue
n_window = int(vlen/self.downsample) - (1+1)*self.seq_len + 1
all_window_seq_idx_block = np.zeros((n_window, self.num_seq, self.seq_len))
for window_idx in range(n_window):
start_zeropad = self.num_seq - 1 - self.pred_step
window_start_frame_idx = window_idx*self.downsample - start_zeropad*self.seq_len*self.downsample
seq_idx = np.expand_dims(np.arange(self.num_seq), -1)*self.downsample*self.seq_len + window_start_frame_idx
tmp = seq_idx + np.expand_dims(np.arange(self.seq_len),0)*self.downsample
#ipdb.set_trace()
tmp[tmp<0] = 0
tmp[tmp>vlen-1] = vlen-1
all_window_seq_idx_block[window_idx] = tmp
self.window_info.loc[len(self.window_info)] = [vpath, vlen, window_idx,
all_window_seq_idx_block[window_idx],
all_window_seq_idx_block[window_idx][self.num_seq-self.pred_step-1][0]]
print("len(self.window_info): "+str(len(self.window_info)))
pickle.dump(self.window_info, open(pkl_name, "wb"))
def __getitem__(self, index):
vpath, vlen, window_idx, idx_block, current_frame_idx = self.window_info.iloc[index]
if idx_block is None: print(vpath)
n_window = vlen-self.num_seq*self.seq_len*self.downsample
assert idx_block.shape == (self.num_seq, self.seq_len)
idx_block = idx_block.reshape(self.num_seq*self.seq_len)
if self.modality == 'flow':
seq = [pil_loader_flow(os.path.join(vpath, 'flow_x_%05d.jpg' % (i+1)),os.path.join(vpath, 'flow_y_%05d.jpg' % (i+1))) for i in idx_block]
if self.modality == 'rgb':
seq = [pil_loader(os.path.join(vpath, 'image_%05d.jpg' % (i+1))) for i in idx_block]
t_seq = self.transform(seq) # apply same transform
(C, H, W) = t_seq[0].size()
t_seq = torch.stack(t_seq, 0)
t_seq = t_seq.view(self.num_seq, self.seq_len, C, H, W).transpose(1,2)
videoid = vpath.split('/')[-2] + '_' + vpath.split('/')[-1]
return t_seq, videoid, vlen, window_idx, current_frame_idx
def __len__(self):
return len(self.window_info)
class Kinetics400_full_3d(data.Dataset):
def __init__(self,
mode='val',
transform=None,
seq_len=10,
num_seq=5,
downsample=3,
epsilon=5,
unit_test=False,
big=False,
return_label=False,
modality='rgb',
pkl_folder_name='',
pred_task='',
pred_step=3):
self.mode = mode
self.transform = transform
self.seq_len = seq_len
self.num_seq = num_seq
self.downsample = downsample
self.epsilon = epsilon
self.unit_test = unit_test
self.return_label = return_label
self.pred_step = pred_step
self.modality = modality
self.pkl_folder_name = pkl_folder_name
self.pred_task = pred_task
if big: print('Using Kinetics400 GEBD data (224x224)')
else: print('Using Kinetics400 GEBD data (112x112)')
# get action list
self.action_dict_encode = {}
self.action_dict_decode = {}
action_file = os.path.join('../../data/exp_k400/', 'classInd.txt')
action_df = pd.read_csv(action_file, sep=',', header=None)
for _, row in action_df.iterrows():
act_id, act_name = row
act_id = int(act_id) - 1 # let id start from 0
self.action_dict_decode[act_id] = act_name
self.action_dict_encode[act_name] = act_id
# splits
if big:
if mode == 'train':
raise ValueError('train mode NOT IMPLEMENTED')
elif (mode == 'val') or (mode == 'test'):
split = '../../data/exp_k400/val_set.csv'
video_info = pd.read_csv(split, header=None)
else: raise ValueError('wrong mode')
else:
raise ValueError('NOT IMPLEMENTED')
drop_idx = []
print('filter out too short videos ...')
for idx, row in tqdm(video_info.iterrows(), total=len(video_info)):
vpath, vlen = row
if vlen-self.num_seq*self.seq_len*self.downsample <= 10:
drop_idx.append(idx)
self.video_info = video_info.drop(drop_idx, axis=0)
print('#videos kept: ' + str(len(self.video_info)))
if mode == 'val': self.video_info = self.video_info.sample(frac=1, random_state=666)#
if self.unit_test: self.video_info = self.video_info.sample(32, random_state=666) # sample a few videos for unittest
# shuffle not necessary because use RandomSampler
print('construct windows of stride 1 from each video ...')
self.window_info = pd.DataFrame(columns=['vpath', 'vlen', 'window_idx',
'all_window_seq_idx_block',
'current_frame_idx'])
pkl_dir = self.pkl_folder_name
if not os.path.exists(pkl_dir):
os.makedirs(pkl_dir)
if self.mode == 'train':
raise ValueError('NOT IMPLEMENTED')
else:
pkl_name = pkl_dir+"/window_lists.pkl"
if os.path.exists(pkl_name):
print("skip constructing windows... use pre-computed one...")
self.window_info = pickle.load(open(pkl_name, "rb"))
else:
# stride per sampled frame
ct_window_info_slice = 0
                v_idx_startfrom1 = 0  # counter for showing progress
for _, (vpath, vlen) in tqdm(self.video_info.iterrows(), total=len(self.video_info)):
v_idx_startfrom1 += 1
if v_idx_startfrom1==1000:
v_idx_startfrom1 = 0
pickle.dump(self.window_info, open(pkl_name+str(ct_window_info_slice), "wb"))
print("+1 to ct_window_info_slice: "+str(ct_window_info_slice))
ct_window_info_slice += 1
self.window_info = pd.DataFrame(columns=['vpath', 'vlen', 'window_idx',
'all_window_seq_idx_block',
'current_frame_idx'])
if vlen-self.num_seq*self.seq_len*self.downsample <= 0:
print("vlen: "+str(vlen))
print("self.num_seq*self.seq_len*self.downsample: "+str(self.num_seq*self.seq_len*self.downsample))
continue
n_window = int(vlen/self.downsample) - (1+1)*self.seq_len + 1
all_window_seq_idx_block = np.zeros((n_window, self.num_seq, self.seq_len))
for window_idx in range(n_window):
start_zeropad = self.num_seq - 1 - self.pred_step
window_start_frame_idx = window_idx*self.downsample - start_zeropad*self.seq_len*self.downsample
seq_idx = np.expand_dims(np.arange(self.num_seq), -1)*self.downsample*self.seq_len + window_start_frame_idx
tmp = seq_idx + np.expand_dims(np.arange(self.seq_len),0)*self.downsample
tmp[tmp<0] = 0
tmp[tmp>vlen-1] = vlen-1
all_window_seq_idx_block[window_idx] = tmp
self.window_info.loc[len(self.window_info)] = [vpath, vlen, window_idx,
all_window_seq_idx_block[window_idx],
all_window_seq_idx_block[window_idx][self.num_seq-self.pred_step-1][0]]
# handling memory constraint
# merge the splits generated before
merge = []
for i in range(ct_window_info_slice):
with open(pkl_name+str(i), "rb") as f:
                        w = pickle.load(f, encoding='latin1')
merge.append(w)
os.remove(pkl_name+str(i))
merge.append(self.window_info)
self.window_info = pd.concat(merge, ignore_index=True)
print("len(self.window_info): "+str(len(self.window_info)))
pickle.dump(self.window_info, open(pkl_name, "wb"))
def __getitem__(self, index):
vpath, vlen, window_idx, idx_block, current_frame_idx = self.window_info.iloc[index]
if idx_block is None: print(vpath)
n_window = vlen-self.num_seq*self.seq_len*self.downsample
assert idx_block.shape == (self.num_seq, self.seq_len)
idx_block = idx_block.reshape(self.num_seq*self.seq_len)
# FIXME
seq = [pil_loader(os.path.join(vpath, 'image_%05d.jpg' % (i+1))) for i in idx_block]
t_seq = self.transform(seq) # apply same transform
(C, H, W) = t_seq[0].size()
t_seq = torch.stack(t_seq, 0)
t_seq = t_seq.view(self.num_seq, self.seq_len, C, H, W).transpose(1,2)
videoid = vpath.split('/')[-1][:11]
return t_seq, videoid, vlen, window_idx, current_frame_idx
def __len__(self):
return len(self.window_info)
def encode_action(self, action_name):
'''give action name, return category'''
return self.action_dict_encode[action_name]
def decode_action(self, action_code):
'''give action code, return action name'''
return self.action_dict_decode[action_code]
| 44.152174 | 149 | 0.5613 | 1,843 | 14,217 | 4.091698 | 0.137276 | 0.025461 | 0.03713 | 0.03713 | 0.787959 | 0.752553 | 0.742342 | 0.738629 | 0.733988 | 0.721522 | 0 | 0.02079 | 0.326722 | 14,217 | 321 | 150 | 44.28972 | 0.767029 | 0.036295 | 0 | 0.708955 | 0 | 0 | 0.088734 | 0.023921 | 0 | 0 | 0 | 0.003115 | 0.007463 | 1 | 0.037313 | false | 0 | 0.059701 | 0.007463 | 0.134328 | 0.078358 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2bd3179fe5a5013b261edc1297b28a46794e1e40 | 817 | py | Python | dp_tornado/helper/web/http/post.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 18 | 2015-04-07T14:28:39.000Z | 2020-02-08T14:03:38.000Z | dp_tornado/helper/web/http/post.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 7 | 2016-10-05T05:14:06.000Z | 2021-05-20T02:07:22.000Z | dp_tornado/helper/web/http/post.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 11 | 2015-12-15T09:49:39.000Z | 2021-09-06T18:38:21.000Z | # -*- coding: utf-8 -*-
from dp_tornado.engine.helper import Helper as dpHelper
class PostHelper(dpHelper):
def raw(self, url, data=None, json=None, **kwargs):
return self.helper.web.http.request(req_type='post', res_type='raw', url=url, data=data, json=json, **kwargs)
def json(self, url, data=None, json=None, **kwargs):
return self.helper.web.http.request(req_type='post', res_type='json', url=url, data=data, json=json, **kwargs)
def text(self, url, data=None, json=None, **kwargs):
return self.helper.web.http.request(req_type='post', res_type='text', url=url, data=data, json=json, **kwargs)
def html(self, url, data=None, json=None, **kwargs):
return self.helper.web.http.request(req_type='post', res_type='html', url=url, data=data, json=json, **kwargs)
| 43 | 118 | 0.674419 | 127 | 817 | 4.267717 | 0.244094 | 0.103321 | 0.081181 | 0.110701 | 0.791513 | 0.791513 | 0.791513 | 0.739852 | 0.568266 | 0.568266 | 0 | 0.001435 | 0.146879 | 817 | 18 | 119 | 45.388889 | 0.776184 | 0.025704 | 0 | 0 | 0 | 0 | 0.039043 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.1 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a63f61f6682baf6108f345fab4251e09e31ad35a | 80 | py | Python | tests/conftest.py | KiriLev/albu_scheduler | edb70f3d9c90761570744f35aed61c35cf121316 | [
"MIT"
] | 17 | 2021-05-03T08:24:21.000Z | 2021-08-04T15:19:06.000Z | tests/conftest.py | KiriLev/albu_scheduler | edb70f3d9c90761570744f35aed61c35cf121316 | [
"MIT"
] | null | null | null | tests/conftest.py | KiriLev/albu_scheduler | edb70f3d9c90761570744f35aed61c35cf121316 | [
"MIT"
] | 1 | 2021-08-04T13:46:54.000Z | 2021-08-04T13:46:54.000Z | import pytest
@pytest.fixture(scope="module")
def image():
return "IMAGE"
| 11.428571 | 31 | 0.6875 | 10 | 80 | 5.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1625 | 80 | 6 | 32 | 13.333333 | 0.820896 | 0 | 0 | 0 | 0 | 0 | 0.1375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a668f9a26edac00e7a8d89ec8946cf3d328e14dc | 43 | py | Python | cride/rides/models/__init__.py | AlexisLoya/cride-django | 04a8617093bea5de07aa6398d650116e2e6683ab | [
"MIT"
] | null | null | null | cride/rides/models/__init__.py | AlexisLoya/cride-django | 04a8617093bea5de07aa6398d650116e2e6683ab | [
"MIT"
] | 3 | 2021-05-24T18:17:14.000Z | 2021-05-24T18:18:44.000Z | cride/rides/models/__init__.py | AlexisLoya/cride-django | 04a8617093bea5de07aa6398d650116e2e6683ab | [
"MIT"
] | null | null | null | from .ride import *
from .ratings import *
| 14.333333 | 22 | 0.72093 | 6 | 43 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 2 | 23 | 21.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4716b4abc9a15d2f849ee207dc33ded68c5a7c07 | 70 | py | Python | Solution/api_keys.py | acanales92/Python-API-Challenge | a9bc02edacbeb14474398ae3f15d48821fe3764d | [
"ADSL"
] | null | null | null | Solution/api_keys.py | acanales92/Python-API-Challenge | a9bc02edacbeb14474398ae3f15d48821fe3764d | [
"ADSL"
] | null | null | null | Solution/api_keys.py | acanales92/Python-API-Challenge | a9bc02edacbeb14474398ae3f15d48821fe3764d | [
"ADSL"
] | null | null | null | # OpenWeatherMap API Key
api_key = "9f47e495713e79f40dcf331c6b90a28a"
| 23.333333 | 44 | 0.842857 | 6 | 70 | 9.666667 | 0.666667 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0.1 | 70 | 2 | 45 | 35 | 0.587302 | 0.314286 | 0 | 0 | 0 | 0 | 0.695652 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b30e6e80553d8ee461f2e0b73b1785576d4d03a | 660 | py | Python | tests/test_error_handlers.py | NYU-DevOps-2022/orders | d87aedec541d32cd9e8c1f341043f02a09dd27c3 | [
"Apache-2.0"
] | 1 | 2022-03-06T19:44:41.000Z | 2022-03-06T19:44:41.000Z | tests/test_error_handlers.py | NYU-DevOps-2022/orders | d87aedec541d32cd9e8c1f341043f02a09dd27c3 | [
"Apache-2.0"
] | 23 | 2022-02-18T17:23:20.000Z | 2022-03-31T21:09:23.000Z | tests/test_error_handlers.py | NYU-DevOps-2022/orders | d87aedec541d32cd9e8c1f341043f02a09dd27c3 | [
"Apache-2.0"
] | 1 | 2022-03-06T02:54:48.000Z | 2022-03-06T02:54:48.000Z | import unittest
# import service.error_handlers
# class TestErrorHandlers(unittest.TestCase):
# def test_request_validation_error(self):
# self.assertEqual(tuple, type(request_validation_error("blah")))
# def test_not_found(self):
# self.assertEqual(tuple, type(not_found("blah")))
# def test_method_not_supported(self):
# self.assertEqual(tuple, type(method_not_supported("blah")))
# def test_mediatype_not_supported(self):
# self.assertEqual(tuple, type(mediatype_not_supported("blah")))
# def test_internal_server_error(self):
# self.assertEqual(tuple, type(internal_server_error("blah")))
| 31.428571 | 73 | 0.716667 | 78 | 660 | 5.75641 | 0.307692 | 0.077951 | 0.211581 | 0.267261 | 0.489978 | 0.325167 | 0.178174 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 660 | 20 | 74 | 33 | 0.809009 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b4a64b69f8e739d232adc5dd3eab253db3562ce | 13,041 | py | Python | moonv4/tests/scenario/session_large.py | hashnfv/hashnfv-moon | daaba34fa2ed4426bc0fde359e54a5e1b872208c | [
"Apache-2.0"
] | null | null | null | moonv4/tests/scenario/session_large.py | hashnfv/hashnfv-moon | daaba34fa2ed4426bc0fde359e54a5e1b872208c | [
"Apache-2.0"
] | null | null | null | moonv4/tests/scenario/session_large.py | hashnfv/hashnfv-moon | daaba34fa2ed4426bc0fde359e54a5e1b872208c | [
"Apache-2.0"
] | null | null | null |
pdp_name = "pdp1"
policy_name = "Session policy example"
model_name = "Session"
policy_genre = "session"
subjects = {
"user0": "",
"user1": "",
"user2": "",
"user3": "",
"user4": "",
"user5": "",
"user6": "",
"user7": "",
"user8": "",
"user9": "",
}
objects = {"admin": "", "employee": "", "dev1": "", "dev2": "", }
actions = {"activate": "", "deactivate": ""}
subject_categories = {"subjectid": "", }
object_categories = {"role": "", }
action_categories = {"session-action": "", }
subject_data = {"subjectid": {
"user0": "",
"user1": "",
"user2": "",
"user3": "",
"user4": "",
"user5": "",
"user6": "",
"user7": "",
"user8": "",
"user9": "",
}}
object_data = {"role": {
"admin": "",
"employee": "",
"dev1": "",
"dev2": "",
"*": ""
}}
action_data = {"session-action": {"activate": "", "deactivate": "", "*": ""}}
subject_assignments = {
"user0": ({"subjectid": "user0"}, ),
"user1": ({"subjectid": "user1"}, ),
"user2": ({"subjectid": "user2"}, ),
"user3": ({"subjectid": "user3"}, ),
"user4": ({"subjectid": "user4"}, ),
"user5": ({"subjectid": "user5"}, ),
"user6": ({"subjectid": "user6"}, ),
"user7": ({"subjectid": "user7"}, ),
"user8": ({"subjectid": "user8"}, ),
"user9": ({"subjectid": "user9"}, ),
}
object_assignments = {"admin": ({"role": "admin"}, {"role": "*"}),
"employee": ({"role": "employee"}, {"role": "*"}),
"dev1": ({"role": "employee"}, {"role": "dev1"}, {"role": "*"}),
"dev2": ({"role": "employee"}, {"role": "dev2"}, {"role": "*"}),
}
action_assignments = {"activate": ({"session-action": "activate"}, {"session-action": "*"}, ),
"deactivate": ({"session-action": "deactivate"}, {"session-action": "*"}, )
}
meta_rule = {
"session": {"id": "", "value": ("subjectid", "role", "session-action")},
}
rules = {
"session": (
{
"rule": ("user0", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user1", "employee", "*"),
"instructions": (
{
"update": {
"operation": "delete",
"target": "rbac:role:employee" # delete the role employee from the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user2", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user2", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user2", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user3", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user3", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user3", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user4", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user4", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user4", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user5", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user5", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user5", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user6", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user6", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user6", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user7", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user7", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user7", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user8", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user8", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user8", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user9", "employee", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user9", "dev1", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
{
"rule": ("user9", "dev2", "*"),
"instructions": (
{
"update": {
"operation": "add",
"target": "rbac:role:admin" # add the role admin to the current user
}
},
{"chain": {"name": "rbac"}} # chain with the meta_rule named rbac
)
},
)
}
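The per-user rule entries above are identical except for the (user, role) pair in `"rule"`. If this mapping is built in Python, the whole block could be generated instead of written out by hand. A sketch, using a hypothetical `make_rules` helper; the key names and instruction payloads are taken verbatim from the entries above:

```python
from itertools import product

def make_rules(users, roles):
    """Build one rule entry per (user, role) pair, mirroring the entries above."""
    instructions = (
        {"update": {"operation": "add", "target": "rbac:role:admin"}},
        {"chain": {"name": "rbac"}},
    )
    return tuple(
        {"rule": (user, role, "*"), "instructions": instructions}
        for user, role in product(users, roles)
    )

# same ordering as the literal entries: all roles for user2, then user3, ...
rules = make_rules(["user2", "user3"], ["employee", "dev1", "dev2"])
```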
# -*- mode: python; coding: utf-8 -*-
# Copyright (c) 2018 Radio Astronomy Software Group
# Licensed under the 2-clause BSD License
"""Tests for uvdata object.
"""
from __future__ import absolute_import, division, print_function
import pytest
import os
import copy
import itertools
import numpy as np
from astropy.time import Time
from astropy.coordinates import Angle
from astropy.utils import iers
from pyuvdata import UVData, UVCal
import pyuvdata.utils as uvutils
import pyuvdata.tests as uvtest
from pyuvdata.data import DATA_PATH
from collections import Counter
@pytest.fixture(scope='function')
def uvdata_props():
required_parameters = ['_data_array', '_nsample_array',
'_flag_array', '_Ntimes', '_Nbls',
'_Nblts', '_Nfreqs', '_Npols', '_Nspws',
'_uvw_array', '_time_array', '_ant_1_array',
'_ant_2_array', '_lst_array',
'_baseline_array', '_freq_array',
'_polarization_array', '_spw_array',
'_integration_time', '_channel_width',
'_object_name', '_telescope_name',
'_instrument', '_telescope_location',
'_history', '_vis_units', '_Nants_data',
'_Nants_telescope', '_antenna_names',
'_antenna_numbers', '_phase_type']
required_properties = ['data_array', 'nsample_array',
'flag_array', 'Ntimes', 'Nbls',
'Nblts', 'Nfreqs', 'Npols', 'Nspws',
'uvw_array', 'time_array', 'ant_1_array',
'ant_2_array', 'lst_array',
'baseline_array', 'freq_array',
'polarization_array', 'spw_array',
'integration_time', 'channel_width',
'object_name', 'telescope_name',
'instrument', 'telescope_location',
'history', 'vis_units', 'Nants_data',
'Nants_telescope', 'antenna_names',
'antenna_numbers', 'phase_type']
extra_parameters = ['_extra_keywords', '_antenna_positions',
'_x_orientation', '_antenna_diameters',
'_blt_order',
'_gst0', '_rdate', '_earth_omega', '_dut1',
'_timesys', '_uvplane_reference_time',
'_phase_center_ra', '_phase_center_dec',
'_phase_center_epoch', '_phase_center_frame',
'_eq_coeffs', '_eq_coeffs_convention']
extra_properties = ['extra_keywords', 'antenna_positions',
'x_orientation', 'antenna_diameters', 'blt_order', 'gst0',
'rdate', 'earth_omega', 'dut1', 'timesys',
'uvplane_reference_time',
'phase_center_ra', 'phase_center_dec',
'phase_center_epoch', 'phase_center_frame',
'eq_coeffs', 'eq_coeffs_convention']
other_properties = ['telescope_location_lat_lon_alt',
'telescope_location_lat_lon_alt_degrees',
'phase_center_ra_degrees', 'phase_center_dec_degrees',
'pyuvdata_version_str']
uv_object = UVData()
class DataHolder():
def __init__(self, uv_object, required_parameters, required_properties,
extra_parameters, extra_properties, other_properties):
self.uv_object = uv_object
self.required_parameters = required_parameters
self.required_properties = required_properties
self.extra_parameters = extra_parameters
self.extra_properties = extra_properties
self.other_properties = other_properties
uvdata_props = DataHolder(uv_object, required_parameters, required_properties,
extra_parameters, extra_properties, other_properties)
# yields the data we need but will continue to the del call after tests
yield uvdata_props
# some post-test object cleanup
    del uvdata_props
return
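The fixtures in this file use pytest's setup / yield / teardown pattern: code before the `yield` runs before each test, the yielded object is handed to the test, and code after the `yield` runs as cleanup. Under the hood pytest drives the fixture as a plain generator, roughly like this pytest-free sketch (the `yield_fixture` name is hypothetical):

```python
def yield_fixture():
    resource = {"ready": True}   # setup
    yield resource               # the test body runs while we are suspended here
    resource["ready"] = False    # teardown

gen = yield_fixture()
res = next(gen)                  # run setup, receive the yielded resource
# ... a test would use `res` here ...
try:
    next(gen)                    # resume: runs the teardown, then StopIteration
except StopIteration:
    pass
```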
@pytest.fixture(scope="function")
def resample_in_time_file():
# read in test file for the resampling in time functions
uv_object = UVData()
testfile = os.path.join(DATA_PATH, "zen.2458661.23480.HH.uvh5")
uv_object.read(testfile)
yield uv_object
# cleanup
del uv_object
return
@pytest.fixture(scope="function")
def bda_test_file():
# read in test file for BDA-like data
uv_object = UVData()
testfile = os.path.join(DATA_PATH, "simulated_bda_file.uvh5")
uv_object.read(testfile)
yield uv_object
# cleanup
del uv_object
return
def test_parameter_iter(uvdata_props):
"Test expected parameters."
    params = []
    for prop in uvdata_props.uv_object:
        params.append(prop)
    for a in uvdata_props.required_parameters + uvdata_props.extra_parameters:
        assert a in params, 'expected attribute ' + a + ' not returned in object iterator'
def test_required_parameter_iter(uvdata_props):
"Test expected required parameters."
# at first it's a metadata_only object, so need to modify required_parameters
required = []
for prop in uvdata_props.uv_object.required():
required.append(prop)
expected_required = copy.copy(uvdata_props.required_parameters)
expected_required.remove('_data_array')
expected_required.remove('_nsample_array')
expected_required.remove('_flag_array')
for a in expected_required:
assert a in required, 'expected attribute ' + a + ' not returned in required iterator'
uvdata_props.uv_object.data_array = 1
uvdata_props.uv_object.nsample_array = 1
uvdata_props.uv_object.flag_array = 1
required = []
for prop in uvdata_props.uv_object.required():
required.append(prop)
for a in uvdata_props.required_parameters:
assert a in required, 'expected attribute ' + a + ' not returned in required iterator'
def test_extra_parameter_iter(uvdata_props):
"Test expected optional parameters."
extra = []
for prop in uvdata_props.uv_object.extra():
extra.append(prop)
for a in uvdata_props.extra_parameters:
assert a in extra, 'expected attribute ' + a + ' not returned in extra iterator'
def test_unexpected_parameters(uvdata_props):
"Test for extra parameters."
expected_parameters = uvdata_props.required_parameters + uvdata_props.extra_parameters
attributes = [i for i in uvdata_props.uv_object.__dict__.keys() if i[0] == '_']
for a in attributes:
assert a in expected_parameters, 'unexpected parameter ' + a + ' found in UVData'
def test_unexpected_attributes(uvdata_props):
"Test for extra attributes."
expected_attributes = uvdata_props.required_properties + \
uvdata_props.extra_properties + uvdata_props.other_properties
attributes = [i for i in uvdata_props.uv_object.__dict__.keys() if i[0] != '_']
for a in attributes:
assert a in expected_attributes, 'unexpected attribute ' + a + ' found in UVData'
def test_properties(uvdata_props):
"Test that properties can be get and set properly."
prop_dict = dict(list(zip(uvdata_props.required_properties + uvdata_props.extra_properties,
uvdata_props.required_parameters + uvdata_props.extra_parameters)))
for k, v in prop_dict.items():
rand_num = np.random.rand()
setattr(uvdata_props.uv_object, k, rand_num)
this_param = getattr(uvdata_props.uv_object, v)
try:
assert rand_num == this_param.value
except AssertionError:
print('setting {prop_name} to a random number failed'.format(prop_name=k))
raise
@pytest.fixture(scope='function')
def uvdata_data():
uv_object = UVData()
testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uvtest.checkWarnings(uv_object.read_uvfits, [testfile],
message='Telescope EVLA is not')
class DataHolder():
def __init__(self, uv_object):
self.uv_object = uv_object
self.uv_object2 = copy.deepcopy(uv_object)
uvdata_data = DataHolder(uv_object)
# yields the data we need but will continue to the del call after tests
yield uvdata_data
# some post-test object cleanup
    del uvdata_data
return
def test_metadata_only_property(uvdata_data):
uvdata_data.uv_object.data_array = None
assert uvdata_data.uv_object.metadata_only is False
pytest.raises(ValueError, uvdata_data.uv_object.check)
uvdata_data.uv_object.flag_array = None
assert uvdata_data.uv_object.metadata_only is False
pytest.raises(ValueError, uvdata_data.uv_object.check)
uvdata_data.uv_object.nsample_array = None
assert uvdata_data.uv_object.metadata_only is True
def test_equality(uvdata_data):
"""Basic equality test."""
assert uvdata_data.uv_object == uvdata_data.uv_object
@pytest.mark.filterwarnings("ignore:Telescope location derived from obs")
def test_check(uvdata_data):
"""Test simple check function."""
assert uvdata_data.uv_object.check()
# Check variety of special cases
uvdata_data.uv_object.Nants_data += 1
pytest.raises(ValueError, uvdata_data.uv_object.check)
uvdata_data.uv_object.Nants_data -= 1
uvdata_data.uv_object.Nbls += 1
pytest.raises(ValueError, uvdata_data.uv_object.check)
uvdata_data.uv_object.Nbls -= 1
uvdata_data.uv_object.Ntimes += 1
pytest.raises(ValueError, uvdata_data.uv_object.check)
uvdata_data.uv_object.Ntimes -= 1
# Check case where all data is autocorrelations
# Currently only test files that have autos are fhd files
testdir = os.path.join(DATA_PATH, 'fhd_vis_data/')
file_list = [testdir + '1061316296_flags.sav',
testdir + '1061316296_vis_XX.sav',
testdir + '1061316296_params.sav',
testdir + '1061316296_layout.sav',
testdir + '1061316296_settings.txt']
uvdata_data.uv_object.read_fhd(file_list)
uvdata_data.uv_object.select(blt_inds=np.where(uvdata_data.uv_object.ant_1_array
== uvdata_data.uv_object.ant_2_array)[0])
assert uvdata_data.uv_object.check()
# test auto and cross corr uvw_array
uvd = UVData()
uvd.read_miriad(os.path.join(DATA_PATH, "zen.2457698.40355.xx.HH.uvcA"))
autos = np.isclose(uvd.ant_1_array - uvd.ant_2_array, 0.0)
auto_inds = np.where(autos)[0]
cross_inds = np.where(~autos)[0]
# make auto have non-zero uvw coords, assert ValueError
uvd.uvw_array[auto_inds[0], 0] = 0.1
pytest.raises(ValueError, uvd.check)
# make cross have |uvw| zero, assert ValueError
uvd.read_miriad(os.path.join(DATA_PATH, "zen.2457698.40355.xx.HH.uvcA"))
uvd.uvw_array[cross_inds[0]][:] = 0.0
pytest.raises(ValueError, uvd.check)
def test_nants_data_telescope_larger(uvdata_data):
# make sure it's okay for Nants_telescope to be strictly greater than Nants_data
uvdata_data.uv_object.Nants_telescope += 1
# add dummy information for "new antenna" to pass object check
uvdata_data.uv_object.antenna_names = np.concatenate(
(uvdata_data.uv_object.antenna_names, ["dummy_ant"]))
uvdata_data.uv_object.antenna_numbers = np.concatenate(
(uvdata_data.uv_object.antenna_numbers, [20]))
uvdata_data.uv_object.antenna_positions = np.concatenate(
(uvdata_data.uv_object.antenna_positions, np.zeros((1, 3))), axis=0)
assert uvdata_data.uv_object.check()
def test_ant1_array_not_in_antnums(uvdata_data):
# make sure an error is raised if antennas in ant_1_array not in antenna_numbers
# remove antennas from antenna_names & antenna_numbers by hand
uvdata_data.uv_object.antenna_names = uvdata_data.uv_object.antenna_names[1:]
uvdata_data.uv_object.antenna_numbers = uvdata_data.uv_object.antenna_numbers[1:]
uvdata_data.uv_object.antenna_positions = uvdata_data.uv_object.antenna_positions[1:, :]
uvdata_data.uv_object.Nants_telescope = uvdata_data.uv_object.antenna_numbers.size
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object.check()
assert str(cm.value).startswith('All antennas in ant_1_array must be in antenna_numbers')
def test_ant2_array_not_in_antnums(uvdata_data):
# make sure an error is raised if antennas in ant_2_array not in antenna_numbers
# remove antennas from antenna_names & antenna_numbers by hand
uvdata_data.uv_object.antenna_names = uvdata_data.uv_object.antenna_names[:-1]
uvdata_data.uv_object.antenna_numbers = uvdata_data.uv_object.antenna_numbers[:-1]
uvdata_data.uv_object.antenna_positions = uvdata_data.uv_object.antenna_positions[:-1, :]
uvdata_data.uv_object.Nants_telescope = uvdata_data.uv_object.antenna_numbers.size
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object.check()
assert str(cm.value).startswith('All antennas in ant_2_array must be in antenna_numbers')
def test_converttofiletype(uvdata_data):
fhd_obj = uvdata_data.uv_object._convert_to_filetype('fhd')
uvdata_data.uv_object._convert_from_filetype(fhd_obj)
assert uvdata_data.uv_object == uvdata_data.uv_object2
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object._convert_to_filetype('foo')
assert str(cm.value).startswith("filetype must be uvfits, miriad, fhd, or uvh5")
@pytest.fixture(scope='function')
def uvdata_baseline():
uv_object = UVData()
uv_object.Nants_telescope = 128
uv_object2 = UVData()
uv_object2.Nants_telescope = 2049
class DataHolder():
def __init__(self, uv_object, uv_object2):
self.uv_object = uv_object
self.uv_object2 = uv_object2
uvdata_baseline = DataHolder(uv_object, uv_object2)
# yields the data we need but will continue to the del call after tests
yield uvdata_baseline
# Post test clean-up
    del uvdata_baseline
return
def test_baseline_to_antnums(uvdata_baseline):
"""Test baseline to antnum conversion for 256 & larger conventions."""
assert uvdata_baseline.uv_object.baseline_to_antnums(67585) == (0, 0)
with pytest.raises(Exception) as cm:
uvdata_baseline.uv_object2.baseline_to_antnums(67585)
assert str(cm.value).startswith('error Nants={Nants}>2048'
' not supported'.format(Nants=uvdata_baseline.uv_object2.Nants_telescope))
ant_pairs = [(10, 20), (280, 310)]
for pair in ant_pairs:
if np.max(np.array(pair)) < 255:
bl = uvdata_baseline.uv_object.antnums_to_baseline(
pair[0], pair[1], attempt256=True)
ant_pair_out = uvdata_baseline.uv_object.baseline_to_antnums(bl)
assert pair == ant_pair_out
bl = uvdata_baseline.uv_object.antnums_to_baseline(
pair[0], pair[1], attempt256=False)
ant_pair_out = uvdata_baseline.uv_object.baseline_to_antnums(bl)
assert pair == ant_pair_out
def test_baseline_to_antnums_vectorized(uvdata_baseline):
"""Test vectorized antnum to baseline conversion."""
ant_1 = [10, 280]
ant_2 = [20, 310]
baseline_array = uvdata_baseline.uv_object.antnums_to_baseline(ant_1, ant_2)
assert np.array_equal(baseline_array, [88085, 641335])
ant_1_out, ant_2_out = uvdata_baseline.uv_object.baseline_to_antnums(baseline_array.tolist())
assert np.array_equal(ant_1, ant_1_out)
assert np.array_equal(ant_2, ant_2_out)
def test_antnums_to_baselines(uvdata_baseline):
"""Test antums to baseline conversion for 256 & larger conventions."""
assert uvdata_baseline.uv_object.antnums_to_baseline(0, 0) == 67585
assert uvdata_baseline.uv_object.antnums_to_baseline(257, 256) == 594177
assert uvdata_baseline.uv_object.baseline_to_antnums(594177) == (257, 256)
# Check attempt256
assert uvdata_baseline.uv_object.antnums_to_baseline(0, 0, attempt256=True) == 257
assert uvdata_baseline.uv_object.antnums_to_baseline(257, 256) == 594177
uvtest.checkWarnings(uvdata_baseline.uv_object.antnums_to_baseline, [257, 256],
{'attempt256': True}, message='found > 256 antennas')
pytest.raises(Exception, uvdata_baseline.uv_object2.antnums_to_baseline, 0, 0)
# check a len-1 array returns as an array
ant1 = np.array([1])
ant2 = np.array([2])
assert isinstance(uvdata_baseline.uv_object.antnums_to_baseline(ant1, ant2), np.ndarray)
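The baseline numbers asserted above follow two packing conventions: the AIPS-style 256-antenna convention packs a pair as 256*(ant1+1) + (ant2+1), and the extended convention as 2048*(ant1+1) + (ant2+1) + 2**16, offset so the two schemes never collide. A minimal sketch of the arithmetic implied by the asserted values (not pyuvdata's actual implementation, which lives in its utils module):

```python
def antnums_to_baseline(ant1, ant2, attempt256=False):
    # 256-antenna (AIPS) convention: only valid when both numbers fit below 256
    if attempt256 and ant1 < 255 and ant2 < 255:
        return 256 * (ant1 + 1) + (ant2 + 1)
    # extended convention: offset by 2**16 so it never overlaps the 256 scheme
    return 2048 * (ant1 + 1) + (ant2 + 1) + 2 ** 16

def baseline_to_antnums(baseline):
    # invert the extended convention used above
    ant2 = (baseline - 2 ** 16) % 2048 - 1
    ant1 = (baseline - 2 ** 16 - (ant2 + 1)) // 2048 - 1
    return ant1, ant2
```

With these definitions the constants in the tests above fall out directly: (0, 0) maps to 67585 (or 257 under attempt256) and (257, 256) maps to 594177.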
def test_known_telescopes():
"""Test known_telescopes method returns expected results."""
uv_object = UVData()
known_telescopes = ['PAPER', 'HERA', 'MWA']
    # call np.sort(...).tolist() because list.sort() sorts in place and returns None;
    # an earlier version of this test compared None == None, which always passed
assert np.sort(known_telescopes).tolist() == np.sort(uv_object.known_telescopes()).tolist()
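The comment above is worth spelling out: `list.sort()` sorts in place and returns None, so comparing the return values of two in-place sorts compares None == None and always succeeds, regardless of the list contents. Returning a new sorted list (via `sorted()` or `np.sort(...).tolist()`) gives a real comparison:

```python
a = ['PAPER', 'HERA', 'MWA']
b = ['MWA', 'HERA', 'SKA']

# BUG pattern: list.sort() returns None, so this is True for ANY two lists
always_true = (a.sort() == b.sort())

# correct: sorted() returns a new list, so the comparison is meaningful
really_equal = (sorted(a) == sorted(b))
```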
@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_HERA_diameters():
miriad_file = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
uv_in = UVData()
uv_in.read_miriad(miriad_file)
uv_in.telescope_name = 'HERA'
uvtest.checkWarnings(uv_in.set_telescope_params, message='antenna_diameters '
'is not set. Using known values for HERA.')
assert uv_in.telescope_name == 'HERA'
assert uv_in.antenna_diameters is not None
uv_in.check()
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_generic_read():
uv_in = UVData()
uvfits_file = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_in.read(uvfits_file, read_data=False)
unique_times = np.unique(uv_in.time_array)
pytest.raises(ValueError, uv_in.read, uvfits_file, times=unique_times[0:2],
time_range=[unique_times[0], unique_times[1]])
pytest.raises(ValueError, uv_in.read, uvfits_file,
antenna_nums=uv_in.antenna_numbers[0],
antenna_names=uv_in.antenna_names[1])
pytest.raises(ValueError, uv_in.read, 'foo')
@pytest.fixture
def uv_phase_and_raw():
testfile = os.path.join(DATA_PATH, 'zen.2458661.23480.HH.uvh5')
UV_raw = UVData()
# Note the RA/DEC values in the raw file were calculated from the lat/long
# in the file, which don't agree with our known_telescopes.
# So for this test we use the lat/lon in the file.
UV_raw.read_uvh5(testfile)
# uvtest.checkWarnings(UV_raw.read_miriad, [testfile], {'correct_lat_lon': False},
# message='Altitude is not present in file and latitude and '
# 'longitude values do not match')
UV_phase = UVData()
UV_phase.read_uvh5(testfile)
yield UV_phase, UV_raw
del UV_phase, UV_raw
return
@pytest.mark.parametrize(
"phase_kwargs",
[
{"ra": 0., "dec": 0., "epoch": "J2000"},
{"ra": Angle('5d').rad, "dec": Angle('30d').rad, "phase_frame": "gcrs"},
{"ra": Angle('180d').rad, "dec": Angle('90d'),
"epoch": Time('2010-01-01T00:00:00', format='isot', scale='utc')
},
]
)
def test_phase_unphaseHERA(uv_phase_and_raw, phase_kwargs):
"""
Read in drift data, phase to an RA/DEC, unphase and check for object equality.
"""
UV_phase, UV_raw = uv_phase_and_raw
UV_phase.phase(**phase_kwargs)
UV_phase.unphase_to_drift()
# check that phase + unphase gets back to raw
assert UV_raw == UV_phase
def test_phase_unphaseHERA_one_bl(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check that phase + unphase work with one baseline
UV_raw_small = UV_raw.select(blt_inds=[0], inplace=False)
UV_phase_small = copy.deepcopy(UV_raw_small)
UV_phase_small.phase(Angle('23h').rad, Angle('15d').rad)
UV_phase_small.unphase_to_drift()
assert UV_raw_small == UV_phase_small
def test_phase_unphaseHERA_antpos(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check that they match if you phase & unphase using antenna locations
# first replace the uvws with the right values
antenna_enu = uvutils.ENU_from_ECEF((UV_raw.antenna_positions + UV_raw.telescope_location),
*UV_raw.telescope_location_lat_lon_alt)
uvw_calc = np.zeros_like(UV_raw.uvw_array)
unique_times, unique_inds = np.unique(UV_raw.time_array, return_index=True)
for ind, jd in enumerate(unique_times):
inds = np.where(UV_raw.time_array == jd)[0]
for bl_ind in inds:
ant1_index = np.where(UV_raw.antenna_numbers == UV_raw.ant_1_array[bl_ind])[0][0]
ant2_index = np.where(UV_raw.antenna_numbers == UV_raw.ant_2_array[bl_ind])[0][0]
uvw_calc[bl_ind, :] = antenna_enu[ant2_index, :] - antenna_enu[ant1_index, :]
UV_raw_new = copy.deepcopy(UV_raw)
UV_raw_new.uvw_array = uvw_calc
UV_phase.phase(0., 0., epoch="J2000", use_ant_pos=True)
UV_phase2 = copy.deepcopy(UV_raw_new)
UV_phase2.phase(0., 0., epoch="J2000")
# The uvw's only agree to ~1mm. should they be better?
assert np.allclose(UV_phase2.uvw_array, UV_phase.uvw_array, atol=1e-3)
# the data array are just multiplied by the w's for phasing, so a difference
# at the 1e-3 level makes the data array different at that level too.
# -> change the tolerance on data_array for this test
UV_phase2._data_array.tols = (0, 1e-3 * np.amax(np.abs(UV_phase2.data_array)))
assert UV_phase2 == UV_phase
# check that phase + unphase gets back to raw using antpos
UV_phase.unphase_to_drift(use_ant_pos=True)
assert UV_raw_new == UV_phase
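The loop above recomputes each drift-scan uvw coordinate as the ENU position of the second antenna minus that of the first. A toy sketch of that relationship; the array names mirror the test, but the positions and antenna pairings are made up:

```python
import numpy as np

# hypothetical ENU antenna positions (east, north, up) in metres
antenna_enu = np.array([[0.0, 0.0, 0.0],
                        [14.6, 0.0, 0.0],
                        [7.3, 12.6, 0.0]])
ant_1_array = np.array([0, 0, 1])
ant_2_array = np.array([1, 2, 2])

# drift-scan uvw for each baseline: position of ant2 minus position of ant1
uvw_calc = antenna_enu[ant_2_array] - antenna_enu[ant_1_array]
```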
def test_phase_unphaseHERA_zenith_timestamp(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check that phasing to zenith with one timestamp has small changes
# (it won't be identical because of precession/nutation changing the coordinate axes)
# use gcrs rather than icrs to reduce differences (don't include abberation)
UV_raw_small = UV_raw.select(times=UV_raw.time_array[0], inplace=False)
UV_phase_simple_small = copy.deepcopy(UV_raw_small)
UV_phase_simple_small.phase_to_time(time=Time(UV_raw.time_array[0], format='jd'),
phase_frame='gcrs')
# it's unclear to me how close this should be...
assert np.allclose(UV_phase_simple_small.uvw_array, UV_raw_small.uvw_array, atol=1e-1)
def test_phase_to_time_jd_input(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
UV_phase.phase_to_time(UV_raw.time_array[0])
UV_phase.unphase_to_drift()
assert UV_phase == UV_raw
def test_phase_to_time_error(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check error if not passing a Time object to phase_to_time
with pytest.raises(TypeError) as cm:
UV_phase.phase_to_time('foo')
assert str(cm.value).startswith("time must be an astropy.time.Time object")
def test_unphase_drift_data_error(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check error if not passing a Time object to phase_to_time
with pytest.raises(ValueError) as cm:
UV_phase.unphase_to_drift()
assert str(cm.value).startswith("The data is already drift scanning;")
@pytest.mark.parametrize(
"phase_func,phase_kwargs,err_msg",
[("unphase_to_drift", {},
"The phasing type of the data is unknown. Set the phase_type"),
("phase", {"ra": 0, "dec": 0, "epoch": "J2000"},
"The phasing type of the data is unknown. Set the phase_type"),
("phase_to_time", {"time": 0},
"The phasing type of the data is unknown. Set the phase_type")
]
)
def test_unknown_phase_unphaseHERA_errors(
uv_phase_and_raw, phase_func, phase_kwargs, err_msg
):
UV_phase, UV_raw = uv_phase_and_raw
    # Set the phase type to unknown so that every phasing call raises an error.
    UV_raw.set_unknown_phase_type()
    # if this is phase_to_time, the parametrized "time" value is an index into
    # time_array; replace it with the actual time. This is a little hacky, but
    # we cannot access UV_raw.time_array inside the parametrize decorator.
    if phase_func == "phase_to_time":
        phase_kwargs["time"] = UV_raw.time_array[phase_kwargs["time"]]
with pytest.raises(ValueError) as cm:
getattr(UV_raw, phase_func)(**phase_kwargs)
assert str(cm.value).startswith(err_msg)
@pytest.mark.parametrize(
"phase_func,phase_kwargs,err_msg",
[("phase", {"ra": 0, "dec": 0, "epoch": "J2000"},
"The data is already phased;"),
("phase_to_time", {"time": 0},
"The data is already phased;")
]
)
def test_phase_rephaseHERA_errors(
uv_phase_and_raw, phase_func, phase_kwargs, err_msg
):
UV_phase, UV_raw = uv_phase_and_raw
    # Phase the data first so that attempting to re-phase raises an error.
    UV_raw.phase(0., 0., epoch="J2000")
    # if this is phase_to_time, the parametrized "time" value is an index into
    # time_array; replace it with the actual time. This is a little hacky, but
    # we cannot access UV_raw.time_array inside the parametrize decorator.
    if phase_func == "phase_to_time":
        phase_kwargs["time"] = UV_raw.time_array[phase_kwargs["time"]]
with pytest.raises(ValueError) as cm:
getattr(UV_raw, phase_func)(**phase_kwargs)
assert str(cm.value).startswith(err_msg)
def test_phase_unphaseHERA_bad_frame(uv_phase_and_raw):
UV_phase, UV_raw = uv_phase_and_raw
# check errors when trying to phase to an unsupported frame
with pytest.raises(ValueError) as cm:
UV_phase.phase(0., 0., epoch="J2000", phase_frame='cirs')
assert str(cm.value).startswith("phase_frame can only be set to icrs or gcrs.")
def test_phasing():
""" Use MWA files phased to 2 different places to test phasing. """
file1 = os.path.join(DATA_PATH, '1133866760.uvfits')
file2 = os.path.join(DATA_PATH, '1133866760_rephase.uvfits')
uvd1 = UVData()
uvd2 = UVData()
uvd1.read_uvfits(file1)
uvd2.read_uvfits(file2)
uvd1_drift = copy.deepcopy(uvd1)
uvd1_drift.unphase_to_drift(phase_frame='gcrs')
uvd1_drift_antpos = copy.deepcopy(uvd1)
uvd1_drift_antpos.unphase_to_drift(phase_frame='gcrs', use_ant_pos=True)
uvd2_drift = copy.deepcopy(uvd2)
uvd2_drift.unphase_to_drift(phase_frame='gcrs')
uvd2_drift_antpos = copy.deepcopy(uvd2)
uvd2_drift_antpos.unphase_to_drift(phase_frame='gcrs', use_ant_pos=True)
# the tolerances here are empirical -- based on what was seen in the external
# phasing test. See the phasing memo in docs/references for details
assert np.allclose(uvd1_drift.uvw_array, uvd2_drift.uvw_array, atol=2e-2)
assert np.allclose(uvd1_drift_antpos.uvw_array, uvd2_drift_antpos.uvw_array)
uvd2_rephase = copy.deepcopy(uvd2_drift)
uvd2_rephase.phase(uvd1.phase_center_ra,
uvd1.phase_center_dec,
uvd1.phase_center_epoch,
phase_frame='gcrs')
uvd2_rephase_antpos = copy.deepcopy(uvd2_drift_antpos)
uvd2_rephase_antpos.phase(uvd1.phase_center_ra,
uvd1.phase_center_dec,
uvd1.phase_center_epoch,
phase_frame='gcrs',
use_ant_pos=True)
# the tolerances here are empirical -- based on what was seen in the external
# phasing test. See the phasing memo in docs/references for details
assert np.allclose(uvd1.uvw_array, uvd2_rephase.uvw_array, atol=2e-2)
assert np.allclose(uvd1.uvw_array, uvd2_rephase_antpos.uvw_array, atol=5e-3)
# rephase the drift objects to the original pointing and verify that they match
uvd1_drift.phase(uvd1.phase_center_ra, uvd1.phase_center_dec,
uvd1.phase_center_epoch, phase_frame='gcrs')
uvd1_drift_antpos.phase(uvd1.phase_center_ra, uvd1.phase_center_dec,
uvd1.phase_center_epoch, phase_frame='gcrs',
use_ant_pos=True)
# the tolerances here are empirical -- caused by one unphase/phase cycle.
# the antpos-based phasing differences are based on what was seen in the external
# phasing test. See the phasing memo in docs/references for details
assert np.allclose(uvd1.uvw_array, uvd1_drift.uvw_array, atol=1e-4)
assert np.allclose(uvd1.uvw_array, uvd1_drift_antpos.uvw_array, atol=5e-3)
uvd2_drift.phase(uvd2.phase_center_ra, uvd2.phase_center_dec,
uvd2.phase_center_epoch, phase_frame='gcrs')
uvd2_drift_antpos.phase(uvd2.phase_center_ra, uvd2.phase_center_dec,
uvd2.phase_center_epoch, phase_frame='gcrs',
use_ant_pos=True)
# the tolerances here are empirical -- caused by one unphase/phase cycle.
# the antpos-based phasing differences are based on what was seen in the external
# phasing test. See the phasing memo in docs/references for details
assert np.allclose(uvd2.uvw_array, uvd2_drift.uvw_array, atol=1e-4)
assert np.allclose(uvd2.uvw_array, uvd2_drift_antpos.uvw_array, atol=2e-2)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_set_phase_unknown():
uv_object = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_object.read_uvfits(testfile)
uv_object.set_unknown_phase_type()
assert uv_object.phase_type == 'unknown'
assert not uv_object._phase_center_epoch.required
assert not uv_object._phase_center_ra.required
assert not uv_object._phase_center_dec.required
assert uv_object.check()


@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_select_blts():
    uv_object = UVData()
    testfile = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
    uv_object.read_miriad(testfile)
    old_history = uv_object.history
    blt_inds = np.array([172, 182, 132, 227, 144, 44, 16, 104, 385, 134, 326, 140, 116,
                         218, 178, 391, 111, 276, 274, 308, 38, 64, 317, 76, 239, 246,
                         34, 39, 83, 184, 208, 60, 374, 295, 118, 337, 261, 21, 375,
                         396, 355, 187, 95, 122, 186, 113, 260, 264, 156, 13, 228, 291,
                         302, 72, 137, 216, 299, 341, 207, 256, 223, 250, 268, 147, 73,
                         32, 142, 383, 221, 203, 258, 286, 324, 265, 170, 236, 8, 275,
                         304, 117, 29, 167, 15, 388, 171, 82, 322, 248, 160, 85, 66,
                         46, 272, 328, 323, 152, 200, 119, 359, 23, 363, 56, 219, 257,
                         11, 307, 336, 289, 136, 98, 37, 163, 158, 80, 125, 40, 298,
                         75, 320, 74, 57, 346, 121, 129, 332, 238, 93, 18, 330, 339,
                         381, 234, 176, 22, 379, 199, 266, 100, 90, 292, 205, 58, 222,
                         350, 109, 273, 191, 368, 88, 101, 65, 155, 2, 296, 306, 398,
                         369, 378, 254, 67, 249, 102, 348, 392, 20, 28, 169, 262, 269,
                         287, 86, 300, 143, 177, 42, 290, 284, 123, 189, 175, 97, 340,
                         242, 342, 331, 282, 235, 344, 63, 115, 78, 30, 226, 157, 133,
                         71, 35, 212, 333])
    selected_data = uv_object.data_array[np.sort(blt_inds), :, :, :]

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(blt_inds=blt_inds)
    assert len(blt_inds) == uv_object2.Nblts

    # verify that histories are different
    assert not uvutils._check_histories(old_history, uv_object2.history)
    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baseline-times using pyuvdata.',
                                    uv_object2.history)
    assert np.all(selected_data == uv_object2.data_array)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(blt_inds=blt_inds[np.newaxis, :])
    assert len(blt_inds) == uv_object2.Nblts
    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baseline-times using pyuvdata.',
                                    uv_object2.history)
    assert np.all(selected_data == uv_object2.data_array)

    # check that just doing the metadata works properly
    uv_object3 = copy.deepcopy(uv_object)
    uv_object3.data_array = None
    uv_object3.flag_array = None
    uv_object3.nsample_array = None
    assert uv_object3.metadata_only is True
    uv_object4 = uv_object3.select(blt_inds=blt_inds, inplace=False)
    for param in uv_object4:
        param_name = getattr(uv_object4, param).name
        if param_name not in ['data_array', 'flag_array', 'nsample_array']:
            assert getattr(uv_object4, param) == getattr(uv_object2, param)
        else:
            assert getattr(uv_object4, param_name) is None

    # also check with inplace=True
    uv_object3.select(blt_inds=blt_inds)
    assert uv_object3 == uv_object4

    # check for warnings & errors with the metadata_only keyword
    uv_object3 = copy.deepcopy(uv_object)
    with pytest.raises(ValueError) as cm:
        uvtest.checkWarnings(uv_object3.select,
                             func_kwargs={'blt_inds': blt_inds, 'metadata_only': True},
                             message='The metadata_only option has been replaced',
                             category=DeprecationWarning)
    assert str(cm.value).startswith('The metadata_only option can only be True')

    # check for errors associated with out of bounds indices
    pytest.raises(ValueError, uv_object.select, blt_inds=np.arange(-10, -5))
    pytest.raises(ValueError, uv_object.select,
                  blt_inds=np.arange(uv_object.Nblts + 1, uv_object.Nblts + 10))
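A sanity check on the indexing pattern used above, as a standalone sketch independent of pyuvdata (`data` and `blt_inds` here are hypothetical stand-ins): `select(blt_inds=...)` keeps rows in their original order, which is why the expected data is built with `np.sort(blt_inds)` rather than the unsorted index array.

```python
import numpy as np

# Hypothetical stand-in for a (Nblts, ...) data array.
data = np.arange(10) * 10
blt_inds = np.array([7, 2, 5])

# Indexing with the sorted indices preserves the original row order,
# matching what an order-preserving select() would return.
expected = data[np.sort(blt_inds)]
print(expected.tolist())  # [20, 50, 70]
```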


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_antennas():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    unique_ants = np.unique(
        uv_object.ant_1_array.tolist() + uv_object.ant_2_array.tolist())
    ants_to_keep = np.array([0, 19, 11, 24, 3, 23, 1, 20, 21])

    blts_select = [(a1 in ants_to_keep) & (a2 in ants_to_keep) for (a1, a2) in
                   zip(uv_object.ant_1_array, uv_object.ant_2_array)]
    Nblts_selected = np.sum(blts_select)

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(antenna_nums=ants_to_keep)

    assert len(ants_to_keep) == uv_object2.Nants_data
    assert Nblts_selected == uv_object2.Nblts
    for ant in ants_to_keep:
        assert ant in uv_object2.ant_1_array or ant in uv_object2.ant_2_array
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in ants_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific antennas using pyuvdata.',
                                    uv_object2.history)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(antenna_nums=ants_to_keep[np.newaxis, :])

    assert len(ants_to_keep) == uv_object2.Nants_data
    assert Nblts_selected == uv_object2.Nblts
    for ant in ants_to_keep:
        assert ant in uv_object2.ant_1_array or ant in uv_object2.ant_2_array
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in ants_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific antennas using pyuvdata.',
                                    uv_object2.history)

    # now test using antenna_names to specify antennas to keep
    uv_object3 = copy.deepcopy(uv_object)
    ants_to_keep = np.array(sorted(list(ants_to_keep)))
    ant_names = []
    for a in ants_to_keep:
        ind = np.where(uv_object3.antenna_numbers == a)[0][0]
        ant_names.append(uv_object3.antenna_names[ind])

    uv_object3.select(antenna_names=ant_names)
    assert uv_object2 == uv_object3

    # check that it also works with higher dimension array
    uv_object3 = copy.deepcopy(uv_object)
    ants_to_keep = np.array(sorted(list(ants_to_keep)))
    ant_names = []
    for a in ants_to_keep:
        ind = np.where(uv_object3.antenna_numbers == a)[0][0]
        ant_names.append(uv_object3.antenna_names[ind])

    uv_object3.select(antenna_names=[ant_names])
    assert uv_object2 == uv_object3

    # test removing metadata associated with antennas that are no longer present
    # also add (different) antenna_diameters to test downselection
    uv_object.antenna_diameters = 1. * np.ones((uv_object.Nants_telescope,), dtype=float)
    for i in range(uv_object.Nants_telescope):
        uv_object.antenna_diameters[i] += i

    uv_object4 = copy.deepcopy(uv_object)
    uv_object4.select(antenna_nums=ants_to_keep, keep_all_metadata=False)
    assert uv_object4.Nants_telescope == 9
    assert set(uv_object4.antenna_numbers) == set(ants_to_keep)
    for a in ants_to_keep:
        idx1 = uv_object.antenna_numbers.tolist().index(a)
        idx2 = uv_object4.antenna_numbers.tolist().index(a)
        assert uv_object.antenna_names[idx1] == uv_object4.antenna_names[idx2]
        assert np.allclose(uv_object.antenna_positions[idx1, :],
                           uv_object4.antenna_positions[idx2, :])
        assert uv_object.antenna_diameters[idx1] == uv_object4.antenna_diameters[idx2]

    # remove antenna_diameters from object
    uv_object.antenna_diameters = None

    # check for errors associated with antennas not included in data,
    # bad names, or providing both numbers and names
    pytest.raises(ValueError, uv_object.select,
                  antenna_nums=np.max(unique_ants) + np.arange(1, 3))
    pytest.raises(ValueError, uv_object.select, antenna_names='test1')
    pytest.raises(ValueError, uv_object.select,
                  antenna_nums=ants_to_keep, antenna_names=ant_names)


def sort_bl(p):
    """Sort a tuple that starts with a pair of antennas, and may have stuff after."""
    if p[1] >= p[0]:
        return p
    return (p[1], p[0]) + p[2:]
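A quick illustration of the helper's behavior (standalone copy of `sort_bl` for reference): the antenna pair is put in ascending order, and any trailing elements such as a polarization string pass through unchanged.

```python
def sort_bl(p):
    """Sort a tuple that starts with a pair of antennas, and may have stuff after."""
    if p[1] >= p[0]:
        return p
    return (p[1], p[0]) + p[2:]

print(sort_bl((6, 0)))        # (0, 6)
print(sort_bl((0, 6)))        # (0, 6)
print(sort_bl((6, 0, 'RR')))  # (0, 6, 'RR')
```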


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_bls():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    first_ants = [6, 2, 7, 2, 21, 27, 8]
    second_ants = [0, 20, 8, 1, 2, 3, 22]
    new_unique_ants = np.unique(first_ants + second_ants)
    ant_pairs_to_keep = list(zip(first_ants, second_ants))
    sorted_pairs_to_keep = [sort_bl(p) for p in ant_pairs_to_keep]

    blts_select = [sort_bl((a1, a2)) in sorted_pairs_to_keep for (a1, a2) in
                   zip(uv_object.ant_1_array, uv_object.ant_2_array)]
    Nblts_selected = np.sum(blts_select)

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(bls=ant_pairs_to_keep)
    sorted_pairs_object2 = [sort_bl(p) for p in zip(
        uv_object2.ant_1_array, uv_object2.ant_2_array)]

    assert len(new_unique_ants) == uv_object2.Nants_data
    assert Nblts_selected == uv_object2.Nblts
    for ant in new_unique_ants:
        assert ant in uv_object2.ant_1_array or ant in uv_object2.ant_2_array
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in new_unique_ants
    for pair in sorted_pairs_to_keep:
        assert pair in sorted_pairs_object2
    for pair in sorted_pairs_object2:
        assert pair in sorted_pairs_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baselines using pyuvdata.',
                                    uv_object2.history)

    # check select with polarizations
    first_ants = [6, 2, 7, 2, 21, 27, 8]
    second_ants = [0, 20, 8, 1, 2, 3, 22]
    pols = ['RR', 'RR', 'RR', 'RR', 'RR', 'RR', 'RR']
    new_unique_ants = np.unique(first_ants + second_ants)
    bls_to_keep = list(zip(first_ants, second_ants, pols))
    sorted_bls_to_keep = [sort_bl(p) for p in bls_to_keep]

    blts_select = [sort_bl((a1, a2, 'RR')) in sorted_bls_to_keep for (a1, a2) in
                   zip(uv_object.ant_1_array, uv_object.ant_2_array)]
    Nblts_selected = np.sum(blts_select)

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(bls=bls_to_keep)
    sorted_pairs_object2 = [sort_bl(p) + ('RR',) for p in zip(
        uv_object2.ant_1_array, uv_object2.ant_2_array)]

    assert len(new_unique_ants) == uv_object2.Nants_data
    assert Nblts_selected == uv_object2.Nblts
    for ant in new_unique_ants:
        assert ant in uv_object2.ant_1_array or ant in uv_object2.ant_2_array
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in new_unique_ants
    for bl in sorted_bls_to_keep:
        assert bl in sorted_pairs_object2
    for bl in sorted_pairs_object2:
        assert bl in sorted_bls_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baselines, polarizations using pyuvdata.',
                                    uv_object2.history)

    # check that you can use numpy integers without errors
    first_ants = list(map(np.int32, [6, 2, 7, 2, 21, 27, 8]))
    second_ants = list(map(np.int32, [0, 20, 8, 1, 2, 3, 22]))
    ant_pairs_to_keep = list(zip(first_ants, second_ants))

    uv_object2 = uv_object.select(bls=ant_pairs_to_keep, inplace=False)
    sorted_pairs_object2 = [sort_bl(p) for p in zip(
        uv_object2.ant_1_array, uv_object2.ant_2_array)]

    assert len(new_unique_ants) == uv_object2.Nants_data
    assert Nblts_selected == uv_object2.Nblts
    for ant in new_unique_ants:
        assert ant in uv_object2.ant_1_array or ant in uv_object2.ant_2_array
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in new_unique_ants
    for pair in sorted_pairs_to_keep:
        assert pair in sorted_pairs_object2
    for pair in sorted_pairs_object2:
        assert pair in sorted_pairs_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baselines using pyuvdata.',
                                    uv_object2.history)

    # check that you can specify a single pair without errors
    uv_object2.select(bls=(0, 6))
    sorted_pairs_object2 = [sort_bl(p) for p in zip(
        uv_object2.ant_1_array, uv_object2.ant_2_array)]
    assert list(set(sorted_pairs_object2)) == [(0, 6)]

    # check for errors associated with antenna pairs not included in data and bad inputs
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=list(zip(first_ants, second_ants)) + [0, 6])
    assert str(cm.value).startswith('bls must be a list of tuples of antenna numbers')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=[(uv_object.antenna_names[0], uv_object.antenna_names[1])])
    assert str(cm.value).startswith('bls must be a list of tuples of antenna numbers')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=(5, 1))
    assert str(cm.value).startswith('Antenna number 5 is not present in the '
                                    'ant_1_array or ant_2_array')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=(0, 5))
    assert str(cm.value).startswith('Antenna number 5 is not present in the '
                                    'ant_1_array or ant_2_array')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=(27, 27))
    assert str(cm.value).startswith('Antenna pair (27, 27) does not have any data')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=(6, 0, 'RR'), polarizations='RR')
    assert str(cm.value).startswith('Cannot provide length-3 tuples and also '
                                    'specify polarizations.')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=(6, 0, 8))
    assert str(cm.value).startswith('The third element in each bl must be a '
                                    'polarization string')
    with pytest.raises(ValueError) as cm:
        uv_object.select(bls=[])
    assert str(cm.value).startswith('bls must be a list of tuples of antenna numbers')


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_times():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    unique_times = np.unique(uv_object.time_array)
    times_to_keep = unique_times[[0, 3, 5, 6, 7, 10, 14]]

    Nblts_selected = np.sum([t in times_to_keep for t in uv_object.time_array])

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(times=times_to_keep)

    assert len(times_to_keep) == uv_object2.Ntimes
    assert Nblts_selected == uv_object2.Nblts
    for t in times_to_keep:
        assert t in uv_object2.time_array
    for t in np.unique(uv_object2.time_array):
        assert t in times_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific times using pyuvdata.',
                                    uv_object2.history)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(times=times_to_keep[np.newaxis, :])

    assert len(times_to_keep) == uv_object2.Ntimes
    assert Nblts_selected == uv_object2.Nblts
    for t in times_to_keep:
        assert t in uv_object2.time_array
    for t in np.unique(uv_object2.time_array):
        assert t in times_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific times using pyuvdata.',
                                    uv_object2.history)

    # check for errors associated with times not included in data
    pytest.raises(ValueError, uv_object.select,
                  times=[np.min(unique_times) - uv_object.integration_time[0]])


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_frequencies():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    freqs_to_keep = uv_object.freq_array[0, np.arange(12, 22)]

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(frequencies=freqs_to_keep)

    assert len(freqs_to_keep) == uv_object2.Nfreqs
    for f in freqs_to_keep:
        assert f in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in freqs_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata.',
                                    uv_object2.history)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(frequencies=freqs_to_keep[np.newaxis, :])

    assert len(freqs_to_keep) == uv_object2.Nfreqs
    for f in freqs_to_keep:
        assert f in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in freqs_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata.',
                                    uv_object2.history)

    # check that selecting one frequency works
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(frequencies=freqs_to_keep[0])
    assert 1 == uv_object2.Nfreqs
    assert freqs_to_keep[0] in uv_object2.freq_array
    for f in uv_object2.freq_array:
        assert f in [freqs_to_keep[0]]

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata.',
                                    uv_object2.history)

    # check for errors associated with frequencies not included in data
    pytest.raises(ValueError, uv_object.select,
                  frequencies=[np.max(uv_object.freq_array) + uv_object.channel_width])

    # check for warnings and errors associated with unevenly spaced
    # or non-contiguous frequencies
    uv_object2 = copy.deepcopy(uv_object)
    uvtest.checkWarnings(uv_object2.select, [],
                         {'frequencies': uv_object2.freq_array[0, [0, 5, 6]]},
                         message='Selected frequencies are not evenly spaced')
    write_file_uvfits = os.path.join(DATA_PATH, 'test/select_test.uvfits')
    write_file_miriad = os.path.join(DATA_PATH, 'test/select_test.uv')
    pytest.raises(ValueError, uv_object2.write_uvfits, write_file_uvfits)
    pytest.raises(ValueError, uv_object2.write_miriad, write_file_miriad)

    uv_object2 = copy.deepcopy(uv_object)
    uvtest.checkWarnings(uv_object2.select, [],
                         {'frequencies': uv_object2.freq_array[0, [0, 2, 4]]},
                         message='Selected frequencies are not contiguous')
    pytest.raises(ValueError, uv_object2.write_uvfits, write_file_uvfits)
    pytest.raises(ValueError, uv_object2.write_miriad, write_file_miriad)
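The even-spacing warning exercised above comes down to a `np.diff` check on the selected frequencies; a minimal sketch of that logic (an assumption for illustration, not pyuvdata's exact implementation):

```python
import numpy as np

def is_evenly_spaced(values):
    """Return True if consecutive differences are all numerically equal."""
    diffs = np.diff(values)
    return diffs.size == 0 or bool(np.allclose(diffs, diffs[0]))

print(is_evenly_spaced([100e6, 105e6, 110e6]))  # True
print(is_evenly_spaced([100e6, 105e6, 115e6]))  # False
```

Channels [0, 5, 6] above fail this check (gaps of 5 then 1 channel), which is what triggers the warning and the subsequent uvfits/miriad write errors.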


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_freq_chans():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    chans_to_keep = np.arange(12, 22)

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(freq_chans=chans_to_keep)

    assert len(chans_to_keep) == uv_object2.Nfreqs
    for chan in chans_to_keep:
        assert uv_object.freq_array[0, chan] in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in uv_object.freq_array[0, chans_to_keep]

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata.',
                                    uv_object2.history)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(freq_chans=chans_to_keep[np.newaxis, :])

    assert len(chans_to_keep) == uv_object2.Nfreqs
    for chan in chans_to_keep:
        assert uv_object.freq_array[0, chan] in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in uv_object.freq_array[0, chans_to_keep]

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata.',
                                    uv_object2.history)

    # Test selecting both channels and frequencies
    freqs_to_keep = uv_object.freq_array[0, np.arange(20, 30)]  # overlaps with chans
    all_chans_to_keep = np.arange(12, 30)

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(frequencies=freqs_to_keep, freq_chans=chans_to_keep)

    assert len(all_chans_to_keep) == uv_object2.Nfreqs
    for chan in all_chans_to_keep:
        assert uv_object.freq_array[0, chan] in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in uv_object.freq_array[0, all_chans_to_keep]


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_polarizations():
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    pols_to_keep = [-1, -2]

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(polarizations=pols_to_keep)

    assert len(pols_to_keep) == uv_object2.Npols
    for p in pols_to_keep:
        assert p in uv_object2.polarization_array
    for p in np.unique(uv_object2.polarization_array):
        assert p in pols_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific polarizations using pyuvdata.',
                                    uv_object2.history)

    # check that it also works with higher dimension array
    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(polarizations=[pols_to_keep])

    assert len(pols_to_keep) == uv_object2.Npols
    for p in pols_to_keep:
        assert p in uv_object2.polarization_array
    for p in np.unique(uv_object2.polarization_array):
        assert p in pols_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific polarizations using pyuvdata.',
                                    uv_object2.history)

    # check for errors associated with polarizations not included in data
    pytest.raises(ValueError, uv_object2.select, polarizations=[-3, -4])

    # check for warnings and errors associated with unevenly spaced polarizations
    uvtest.checkWarnings(uv_object.select, [],
                         {'polarizations': uv_object.polarization_array[[0, 1, 3]]},
                         message='Selected polarization values are not evenly spaced')
    write_file_uvfits = os.path.join(DATA_PATH, 'test/select_test.uvfits')
    pytest.raises(ValueError, uv_object.write_uvfits, write_file_uvfits)


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select():
    # now test selecting along all axes at once
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history

    blt_inds = np.array([1057, 461, 1090, 354, 528, 654, 882, 775, 369, 906, 748,
                         875, 296, 773, 554, 395, 1003, 476, 762, 976, 1285, 874,
                         717, 383, 1281, 924, 264, 1163, 297, 857, 1258, 1000, 180,
                         1303, 1139, 393, 42, 135, 789, 713, 527, 1218, 576, 100,
                         1311, 4, 653, 724, 591, 889, 36, 1033, 113, 479, 322,
                         118, 898, 1263, 477, 96, 935, 238, 195, 531, 124, 198,
                         992, 1131, 305, 154, 961, 6, 1175, 76, 663, 82, 637,
                         288, 1152, 845, 1290, 379, 1225, 1240, 733, 1172, 937, 1325,
                         817, 416, 261, 1316, 957, 723, 215, 237, 270, 1309, 208,
                         17, 1028, 895, 574, 166, 784, 834, 732, 1022, 1068, 1207,
                         356, 474, 313, 137, 172, 181, 925, 201, 190, 1277, 1044,
                         1242, 702, 567, 557, 1032, 1352, 504, 545, 422, 179, 780,
                         280, 890, 774, 884])

    ants_to_keep = np.array([11, 6, 20, 26, 2, 27, 7, 14])
    ant_pairs_to_keep = [(2, 11), (20, 26), (6, 7), (3, 27), (14, 6)]
    sorted_pairs_to_keep = [sort_bl(p) for p in ant_pairs_to_keep]

    freqs_to_keep = uv_object.freq_array[0, np.arange(31, 39)]

    unique_times = np.unique(uv_object.time_array)
    times_to_keep = unique_times[[0, 2, 6, 8, 10, 13, 14]]

    pols_to_keep = [-1, -3]

    # Independently count blts that should be selected
    blts_blt_select = [i in blt_inds for i in np.arange(uv_object.Nblts)]
    blts_ant_select = [(a1 in ants_to_keep) & (a2 in ants_to_keep) for (a1, a2) in
                       zip(uv_object.ant_1_array, uv_object.ant_2_array)]
    blts_pair_select = [sort_bl((a1, a2)) in sorted_pairs_to_keep for (a1, a2) in
                        zip(uv_object.ant_1_array, uv_object.ant_2_array)]
    blts_time_select = [t in times_to_keep for t in uv_object.time_array]
    Nblts_select = np.sum([bi & (ai & pi) & ti for (bi, ai, pi, ti) in
                           zip(blts_blt_select, blts_ant_select, blts_pair_select,
                               blts_time_select)])

    uv_object2 = copy.deepcopy(uv_object)
    uv_object2.select(blt_inds=blt_inds, antenna_nums=ants_to_keep,
                      bls=ant_pairs_to_keep, frequencies=freqs_to_keep,
                      times=times_to_keep, polarizations=pols_to_keep)

    assert Nblts_select == uv_object2.Nblts
    for ant in np.unique(uv_object2.ant_1_array.tolist() + uv_object2.ant_2_array.tolist()):
        assert ant in ants_to_keep

    assert len(freqs_to_keep) == uv_object2.Nfreqs
    for f in freqs_to_keep:
        assert f in uv_object2.freq_array
    for f in np.unique(uv_object2.freq_array):
        assert f in freqs_to_keep

    for t in np.unique(uv_object2.time_array):
        assert t in times_to_keep

    assert len(pols_to_keep) == uv_object2.Npols
    for p in pols_to_keep:
        assert p in uv_object2.polarization_array
    for p in np.unique(uv_object2.polarization_array):
        assert p in pols_to_keep

    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific baseline-times, antennas, '
                                    'baselines, times, frequencies, '
                                    'polarizations using pyuvdata.',
                                    uv_object2.history)

    # test that a ValueError is raised if the selection eliminates all blts
    pytest.raises(ValueError, uv_object.select,
                  times=unique_times[0], antenna_nums=1)


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_not_inplace():
    # Test non-inplace select
    uv_object = UVData()
    testfile = os.path.join(
        DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)
    old_history = uv_object.history
    uv1 = uv_object.select(freq_chans=np.arange(32), inplace=False)
    uv1 += uv_object.select(freq_chans=np.arange(32, 64), inplace=False)
    assert uvutils._check_histories(old_history + ' Downselected to '
                                    'specific frequencies using pyuvdata. '
                                    'Combined data along frequency axis '
                                    'using pyuvdata.', uv1.history)

    uv1.history = old_history
    assert uv1 == uv_object


@pytest.mark.filterwarnings("ignore:The default for the `center` keyword")
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_conjugate_bls():
    uv1 = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv1.read_uvfits(testfile)

    # file comes in with ant1 < ant2
    assert np.min(uv1.ant_2_array - uv1.ant_1_array) >= 0

    # check everything is swapped & conjugated when going to ant2 < ant1
    uv2 = copy.deepcopy(uv1)
    uv2.conjugate_bls(convention='ant2<ant1')
    assert np.min(uv2.ant_1_array - uv2.ant_2_array) >= 0

    assert np.allclose(uv1.ant_1_array, uv2.ant_2_array)
    assert np.allclose(uv1.ant_2_array, uv2.ant_1_array)
    assert np.allclose(uv1.uvw_array, -1 * uv2.uvw_array,
                       rtol=uv1._uvw_array.tols[0], atol=uv1._uvw_array.tols[1])

    # complicated because of the polarization swaps
    # polarization_array = [-1 -2 -3 -4]
    assert np.allclose(uv1.data_array[:, :, :, :2],
                       np.conj(uv2.data_array[:, :, :, :2]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[:, :, :, 2],
                       np.conj(uv2.data_array[:, :, :, 3]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[:, :, :, 3],
                       np.conj(uv2.data_array[:, :, :, 2]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])

    # check everything returned to original values with original convention
    uv2.conjugate_bls(convention='ant1<ant2')
    assert uv1 == uv2

    # conjugate a particular set of blts
    blts_to_conjugate = np.arange(uv2.Nblts // 2)
    blts_not_conjugated = np.arange(uv2.Nblts // 2, uv2.Nblts)
    uv2.conjugate_bls(convention=blts_to_conjugate)

    assert np.allclose(uv1.ant_1_array[blts_to_conjugate], uv2.ant_2_array[blts_to_conjugate])
    assert np.allclose(uv1.ant_2_array[blts_to_conjugate], uv2.ant_1_array[blts_to_conjugate])
    assert np.allclose(uv1.ant_1_array[blts_not_conjugated], uv2.ant_1_array[blts_not_conjugated])
    assert np.allclose(uv1.ant_2_array[blts_not_conjugated], uv2.ant_2_array[blts_not_conjugated])

    assert np.allclose(uv1.uvw_array[blts_to_conjugate],
                       -1 * uv2.uvw_array[blts_to_conjugate],
                       rtol=uv1._uvw_array.tols[0], atol=uv1._uvw_array.tols[1])
    assert np.allclose(uv1.uvw_array[blts_not_conjugated],
                       uv2.uvw_array[blts_not_conjugated],
                       rtol=uv1._uvw_array.tols[0], atol=uv1._uvw_array.tols[1])

    # complicated because of the polarization swaps
    # polarization_array = [-1 -2 -3 -4]
    assert np.allclose(uv1.data_array[blts_to_conjugate, :, :, :2],
                       np.conj(uv2.data_array[blts_to_conjugate, :, :, :2]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[blts_not_conjugated, :, :, :2],
                       uv2.data_array[blts_not_conjugated, :, :, :2],
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[blts_to_conjugate, :, :, 2],
                       np.conj(uv2.data_array[blts_to_conjugate, :, :, 3]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[blts_not_conjugated, :, :, 2],
                       uv2.data_array[blts_not_conjugated, :, :, 2],
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[blts_to_conjugate, :, :, 3],
                       np.conj(uv2.data_array[blts_to_conjugate, :, :, 2]),
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])
    assert np.allclose(uv1.data_array[blts_not_conjugated, :, :, 3],
                       uv2.data_array[blts_not_conjugated, :, :, 3],
                       rtol=uv1._data_array.tols[0], atol=uv1._data_array.tols[1])

    # check uv half-plane conventions
    uv2.conjugate_bls(convention='u<0', use_enu=False)
    assert np.max(uv2.uvw_array[:, 0]) <= 0
    uv2.conjugate_bls(convention='u>0', use_enu=False)
    assert np.min(uv2.uvw_array[:, 0]) >= 0
    uv2.conjugate_bls(convention='v<0', use_enu=False)
    assert np.max(uv2.uvw_array[:, 1]) <= 0
    uv2.conjugate_bls(convention='v>0', use_enu=False)
    assert np.min(uv2.uvw_array[:, 1]) >= 0

    # unphase to drift to test using ENU positions
    uv2.unphase_to_drift(use_ant_pos=True)
    uv2.conjugate_bls(convention='u<0')
    assert np.max(uv2.uvw_array[:, 0]) <= 0
    uv2.conjugate_bls(convention='u>0')
    assert np.min(uv2.uvw_array[:, 0]) >= 0
    uv2.conjugate_bls(convention='v<0')
    assert np.max(uv2.uvw_array[:, 1]) <= 0
    uv2.conjugate_bls(convention='v>0')
    assert np.min(uv2.uvw_array[:, 1]) >= 0

    # test errors
    with pytest.raises(ValueError) as cm:
        uv2.conjugate_bls(convention='foo')
    assert str(cm.value).startswith('convention must be one of')
    with pytest.raises(ValueError) as cm:
        uv2.conjugate_bls(convention=np.arange(5) - 1)
    assert str(cm.value).startswith('If convention is an index array')
    with pytest.raises(ValueError) as cm:
        uv2.conjugate_bls(convention=[uv2.Nblts])
    assert str(cm.value).startswith('If convention is an index array')
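The half-plane conventions checked above amount to flipping every baseline whose u (or v) coordinate has the wrong sign: the uvw vector is negated and the visibility conjugated. A minimal numpy sketch of the 'u>0' case (assumed logic for illustration, not pyuvdata's implementation; `uvw` and `vis` are hypothetical toy arrays):

```python
import numpy as np

# Two toy baselines: one already with u > 0, one with u < 0.
uvw = np.array([[1.0, 2.0, 0.0],
                [-3.0, 1.0, 0.0]])
vis = np.array([1 + 1j, 2 - 2j])

# Baselines with u < 0 get their uvws negated and visibilities conjugated,
# which leaves the measured sky information unchanged.
flip = uvw[:, 0] < 0
uvw[flip] *= -1
vis[flip] = np.conj(vis[flip])

print(uvw[:, 0].min() >= 0)  # True
```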


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_reorder_pols():
    # Test function to fix polarization order
    uv1 = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv1.read_uvfits(testfile)
    uv2 = copy.deepcopy(uv1)

    # reorder uv2 manually
    order = [1, 3, 2, 0]
    uv2.polarization_array = uv2.polarization_array[order]
    uv2.data_array = uv2.data_array[:, :, :, order]
    uv2.nsample_array = uv2.nsample_array[:, :, :, order]
    uv2.flag_array = uv2.flag_array[:, :, :, order]
    uv1.reorder_pols(order=order)
    assert uv1 == uv2

    # restore original order
    uv1.read_uvfits(testfile)
    uv2.reorder_pols()
    assert uv1 == uv2

    uv1.reorder_pols(order='AIPS')
    # check that we have AIPS ordering
    aips_pols = np.array([-1, -2, -3, -4]).astype(int)
    assert np.all(uv1.polarization_array == aips_pols)

    uv2 = copy.deepcopy(uv1)
    uv2.reorder_pols(order='CASA')
    # check that we have CASA ordering
    casa_pols = np.array([-1, -3, -4, -2]).astype(int)
    assert np.all(uv2.polarization_array == casa_pols)
    order = np.array([0, 2, 3, 1])
    assert np.all(uv2.data_array == uv1.data_array[:, :, :, order])
    assert np.all(uv2.flag_array == uv1.flag_array[:, :, :, order])

    uv2.reorder_pols(order='AIPS')
    # check that we have AIPS ordering again
    assert uv1 == uv2

    # check error on unknown order
    pytest.raises(ValueError, uv2.reorder_pols, order='foo')

    # check error if order is an index array of the wrong length
    with pytest.raises(ValueError) as cm:
        uv2.reorder_pols(order=[3, 2, 1])
    assert str(cm.value).startswith('If order is an index array, it must')

    # check deprecation warning for order_pols
    uvtest.checkWarnings(uv2.order_pols, [], {'order': 'AIPS'},
                         message=('order_pols method will be deprecated in '
                                  'favor of reorder_pols'),
                         category=DeprecationWarning)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_reorder_blts():
uv1 = UVData()
testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv1.read_uvfits(testfile)
# test default reordering in detail
uv2 = copy.deepcopy(uv1)
uv2.reorder_blts()
assert(uv2.blt_order == ('time', 'baseline'))
assert(np.min(np.diff(uv2.time_array)) >= 0)
for this_time in np.unique(uv2.time_array):
bls_2 = uv2.baseline_array[np.where(uv2.time_array == this_time)]
bls_1 = uv1.baseline_array[np.where(uv2.time_array == this_time)]
assert(bls_1.shape == bls_2.shape)
assert(np.min(np.diff(bls_2)) >= 0)
bl_inds = [np.where(bls_1 == bl)[0][0] for bl in bls_2]
assert(np.allclose(bls_1[bl_inds], bls_2))
uvw_1 = uv1.uvw_array[np.where(uv2.time_array == this_time)[0], :]
uvw_2 = uv2.uvw_array[np.where(uv2.time_array == this_time)[0], :]
assert(uvw_1.shape == uvw_2.shape)
assert(np.allclose(uvw_1[bl_inds, :], uvw_2))
data_1 = uv1.data_array[np.where(uv2.time_array == this_time)[0], :, :, :]
data_2 = uv2.data_array[np.where(uv2.time_array == this_time)[0], :, :, :]
assert(data_1.shape == data_2.shape)
assert(np.allclose(data_1[bl_inds, :, :, :], data_2))
# check that ordering by time, ant1 is identical to time, baseline
uv3 = copy.deepcopy(uv1)
uv3.reorder_blts(order='time', minor_order='ant1')
assert(uv3.blt_order == ('time', 'ant1'))
assert(np.min(np.diff(uv3.time_array)) >= 0)
uv3.blt_order = uv2.blt_order
assert(uv2 == uv3)
uv3.reorder_blts(order='time', minor_order='ant2')
assert(uv3.blt_order == ('time', 'ant2'))
assert(np.min(np.diff(uv3.time_array)) >= 0)
# check that loopback works
uv3.reorder_blts()
assert(uv2 == uv3)
# sort with a specified index array
new_order = np.lexsort((uv3.baseline_array, uv3.time_array))
uv3.reorder_blts(order=new_order)
assert(uv3.blt_order is None)
assert(np.min(np.diff(uv3.time_array)) >= 0)
uv3.blt_order = ('time', 'baseline')
assert(uv2 == uv3)
# test sensible defaulting if minor order = major order
uv3.reorder_blts(order='time', minor_order='time')
assert(uv2 == uv3)
# test all combinations of major, minor order
uv3.reorder_blts(order='baseline')
assert(uv3.blt_order == ('baseline', 'time'))
assert(np.min(np.diff(uv3.baseline_array)) >= 0)
uv3.reorder_blts(order='ant1')
assert(uv3.blt_order == ('ant1', 'ant2'))
assert(np.min(np.diff(uv3.ant_1_array)) >= 0)
uv3.reorder_blts(order='ant1', minor_order='time')
assert(uv3.blt_order == ('ant1', 'time'))
assert(np.min(np.diff(uv3.ant_1_array)) >= 0)
uv3.reorder_blts(order='ant1', minor_order='baseline')
assert(uv3.blt_order == ('ant1', 'baseline'))
assert(np.min(np.diff(uv3.ant_1_array)) >= 0)
uv3.reorder_blts(order='ant2')
assert(uv3.blt_order == ('ant2', 'ant1'))
assert(np.min(np.diff(uv3.ant_2_array)) >= 0)
uv3.reorder_blts(order='ant2', minor_order='time')
assert(uv3.blt_order == ('ant2', 'time'))
assert(np.min(np.diff(uv3.ant_2_array)) >= 0)
uv3.reorder_blts(order='ant2', minor_order='baseline')
assert(uv3.blt_order == ('ant2', 'baseline'))
assert(np.min(np.diff(uv3.ant_2_array)) >= 0)
uv3.reorder_blts(order='bda')
assert(uv3.blt_order == ('bda',))
assert(np.min(np.diff(uv3.integration_time)) >= 0)
assert(np.min(np.diff(uv3.baseline_array)) >= 0)
# test doing conjugation along with a reorder
# the file is already conjugated this way, so should be equal
uv3.reorder_blts(order='time', conj_convention='ant1<ant2')
assert(uv2 == uv3)
# test errors
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order='foo')
assert str(cm.value).startswith('order must be one of')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order=np.arange(5))
assert str(cm.value).startswith('If order is an index array, it must')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order=np.arange(5, dtype=float))
assert str(cm.value).startswith('If order is an index array, it must')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order=np.arange(uv3.Nblts), minor_order='time')
assert str(cm.value).startswith('Minor order cannot be set if order is an index array')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order='bda', minor_order='time')
assert str(cm.value).startswith('minor_order cannot be specified if order is')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order='baseline', minor_order='ant1')
assert str(cm.value).startswith('minor_order conflicts with order')
with pytest.raises(ValueError) as cm:
uv3.reorder_blts(order='time', minor_order='foo')
assert str(cm.value).startswith('minor_order can only be one of')
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_add():
uv_full = UVData()
testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_full.read_uvfits(testfile)
# Add frequencies
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
uv1 += uv2
# Check history is correct, before replacing and doing a full object check
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific frequencies using pyuvdata. '
'Combined data along frequency axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add frequencies - out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
uv2 += uv1
uv2.history = uv_full.history
assert uv2 == uv_full
# Add polarizations
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add polarizations - out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv2 += uv1
uv2.history = uv_full.history
assert uv2 == uv_full
# Add times
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add baselines
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
ant_list = list(range(15)) # Roughly half the antennas in the data
# All blts where ant_1 is in list
ind1 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] in ant_list]
ind2 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] not in ant_list]
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific baseline-times using pyuvdata. '
'Combined data along baseline-time axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add baselines - out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv3 = copy.deepcopy(uv_full)
ants = uv_full.get_ants()
ants1 = ants[0:6]
ants2 = ants[6:12]
ants3 = ants[12:]
# All blts where ant_1 is in list
ind1 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] in ants1]
ind2 = [i for i in range(uv2.Nblts) if uv2.ant_1_array[i] in ants2]
ind3 = [i for i in range(uv3.Nblts) if uv3.ant_1_array[i] in ants3]
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv3.select(blt_inds=ind3)
uv3.data_array = uv3.data_array[-1::-1, :, :, :]
uv3.nsample_array = uv3.nsample_array[-1::-1, :, :, :]
uv3.flag_array = uv3.flag_array[-1::-1, :, :, :]
uv3.uvw_array = uv3.uvw_array[-1::-1, :]
uv3.time_array = uv3.time_array[-1::-1]
uv3.lst_array = uv3.lst_array[-1::-1]
uv3.ant_1_array = uv3.ant_1_array[-1::-1]
uv3.ant_2_array = uv3.ant_2_array[-1::-1]
uv3.baseline_array = uv3.baseline_array[-1::-1]
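The reversal above uses the slice `[-1::-1]` on every blt-axis array; a quick sketch with a toy array showing this is just a first-axis reversal, equivalent to `[::-1]`:

```python
import numpy as np

# a[-1::-1] is the same as a[::-1]: it reverses along the first axis only,
# leaving the remaining axes untouched.
a = np.arange(12).reshape(3, 4)
rev = a[-1::-1, :]
print(np.array_equal(rev, a[::-1]))  # True
print(rev[0].tolist())               # last row of a comes first
```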
uv1 += uv3
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific baseline-times using pyuvdata. '
'Combined data along baseline-time axis '
'using pyuvdata. Combined data along '
'baseline-time axis using pyuvdata.',
uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add multiple axes
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv_ref = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2],
polarizations=uv1.polarization_array[0:2])
uv2.select(times=times[len(times) // 2:],
polarizations=uv2.polarization_array[2:4])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times, polarizations using '
'pyuvdata. Combined data along '
'baseline-time, polarization axis '
'using pyuvdata.', uv1.history)
blt_ind1 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[0:len(times) // 2]])
blt_ind2 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[len(times) // 2:]])
# Zero out missing data in reference object
uv_ref.data_array[blt_ind1, :, :, 2:] = 0.0
uv_ref.nsample_array[blt_ind1, :, :, 2:] = 0.0
uv_ref.flag_array[blt_ind1, :, :, 2:] = True
uv_ref.data_array[blt_ind2, :, :, 0:2] = 0.0
uv_ref.nsample_array[blt_ind2, :, :, 0:2] = 0.0
uv_ref.flag_array[blt_ind2, :, :, 0:2] = True
uv1.history = uv_full.history
assert uv1 == uv_ref
# Another combo
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv_ref = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2], freq_chans=np.arange(0, 32))
uv2.select(times=times[len(times) // 2:], freq_chans=np.arange(32, 64))
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times, frequencies using '
'pyuvdata. Combined data along '
'baseline-time, frequency axis using '
'pyuvdata.', uv1.history)
blt_ind1 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[0:len(times) // 2]])
blt_ind2 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[len(times) // 2:]])
# Zero out missing data in reference object
uv_ref.data_array[blt_ind1, :, 32:, :] = 0.0
uv_ref.nsample_array[blt_ind1, :, 32:, :] = 0.0
uv_ref.flag_array[blt_ind1, :, 32:, :] = True
uv_ref.data_array[blt_ind2, :, 0:32, :] = 0.0
uv_ref.nsample_array[blt_ind2, :, 0:32, :] = 0.0
uv_ref.flag_array[blt_ind2, :, 0:32, :] = True
uv1.history = uv_full.history
assert uv1 == uv_ref
# Add without inplace
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1 = uv1 + uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Check warnings
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(33, 64))
uvtest.checkWarnings(uv1.__add__, [uv2],
message='Combined frequencies are not evenly spaced')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=[0])
uv2.select(freq_chans=[3])
uvtest.checkWarnings(uv1.__iadd__, [uv2],
message='Combined frequencies are not contiguous')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=[0])
uv2.select(freq_chans=[1])
uv2.freq_array += uv2._channel_width.tols[1] / 2.
uvtest.checkWarnings(uv1.__iadd__, [uv2],
nwarnings=0)
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[3])
uvtest.checkWarnings(uv1.__iadd__, [uv2],
message='Combined polarizations are not evenly spaced')
# Combining histories
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv2.history += ' testing the history. AIPS WTSCAL = 1.0'
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization '
'axis using pyuvdata. testing the history.',
uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# test add of autocorr-only and crosscorr-only objects
uv_full = UVData()
uv_full.read_miriad(os.path.join(DATA_PATH, 'zen.2457698.40355.xx.HH.uvcA'))
bls = uv_full.get_antpairs()
autos = [bl for bl in bls if bl[0] == bl[1]]
cross = sorted(set(bls) - set(autos))
uv_auto = uv_full.select(bls=autos, inplace=False)
uv_cross = uv_full.select(bls=cross, inplace=False)
uv1 = uv_auto + uv_cross
assert uv1.Nbls == uv_auto.Nbls + uv_cross.Nbls
uv2 = uv_cross + uv_auto
assert uv2.Nbls == uv_auto.Nbls + uv_cross.Nbls
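The auto/cross partition above can be illustrated with made-up antpairs (toy data, not from the file): autocorrelations pair an antenna with itself, everything else is a cross.

```python
# Minimal sketch of the auto/cross split, using hypothetical antpairs.
bls = [(0, 0), (0, 1), (1, 1), (1, 2)]
autos = [bl for bl in bls if bl[0] == bl[1]]
cross = sorted(set(bls) - set(autos))
print(autos)  # [(0, 0), (1, 1)]
print(cross)  # [(0, 1), (1, 2)]
```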
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_add_drift():
uv_full = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_full.read_uvfits(testfile)
uvtest.checkWarnings(uv_full.unphase_to_drift, category=DeprecationWarning,
message='The xyz array in ENU_from_ECEF is being '
'interpreted as (Npts, 3)')
# Add frequencies
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
uv1 += uv2
# Check history is correct, before replacing and doing a full object check
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific frequencies using pyuvdata. '
'Combined data along frequency '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add polarizations
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add times
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add baselines
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
ant_list = list(range(15)) # Roughly half the antennas in the data
# All blts where ant_1 is in list
ind1 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] in ant_list]
ind2 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] not in ant_list]
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific baseline-times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add multiple axes
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv_ref = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2],
polarizations=uv1.polarization_array[0:2])
uv2.select(times=times[len(times) // 2:],
polarizations=uv2.polarization_array[2:4])
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times, polarizations using '
'pyuvdata. Combined data along '
'baseline-time, polarization '
'axis using pyuvdata.', uv1.history)
blt_ind1 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[0:len(times) // 2]])
blt_ind2 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[len(times) // 2:]])
# Zero out missing data in reference object
uv_ref.data_array[blt_ind1, :, :, 2:] = 0.0
uv_ref.nsample_array[blt_ind1, :, :, 2:] = 0.0
uv_ref.flag_array[blt_ind1, :, :, 2:] = True
uv_ref.data_array[blt_ind2, :, :, 0:2] = 0.0
uv_ref.nsample_array[blt_ind2, :, :, 0:2] = 0.0
uv_ref.flag_array[blt_ind2, :, :, 0:2] = True
uv1.history = uv_full.history
assert uv1 == uv_ref
# Another combo
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv_ref = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2], freq_chans=np.arange(0, 32))
uv2.select(times=times[len(times) // 2:], freq_chans=np.arange(32, 64))
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times, frequencies using '
'pyuvdata. Combined data along '
'baseline-time, frequency '
'axis using pyuvdata.', uv1.history)
blt_ind1 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[0:len(times) // 2]])
blt_ind2 = np.array([ind for ind in range(uv_full.Nblts) if
uv_full.time_array[ind] in times[len(times) // 2:]])
# Zero out missing data in reference object
uv_ref.data_array[blt_ind1, :, 32:, :] = 0.0
uv_ref.nsample_array[blt_ind1, :, 32:, :] = 0.0
uv_ref.flag_array[blt_ind1, :, 32:, :] = True
uv_ref.data_array[blt_ind2, :, 0:32, :] = 0.0
uv_ref.nsample_array[blt_ind2, :, 0:32, :] = 0.0
uv_ref.flag_array[blt_ind2, :, 0:32, :] = True
uv1.history = uv_full.history
assert uv1 == uv_ref
# Add without inplace
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1 = uv1 + uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Check warnings
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(33, 64))
uvtest.checkWarnings(uv1.__add__, [uv2],
message='Combined frequencies are not evenly spaced')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=[0])
uv2.select(freq_chans=[3])
uvtest.checkWarnings(uv1.__iadd__, [uv2],
message='Combined frequencies are not contiguous')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[3])
uvtest.checkWarnings(uv1.__iadd__, [uv2],
message='Combined polarizations are not evenly spaced')
# Combining histories
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv2.history += ' testing the history. AIPS WTSCAL = 1.0'
uv1 += uv2
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization '
'axis using pyuvdata. testing the history.',
uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_break_add():
# Test failure modes of add function
uv_full = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_full.read_uvfits(testfile)
# Wrong class
uv1 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
pytest.raises(ValueError, uv1.__iadd__, np.zeros(5))
# One phased, one not
uv2 = copy.deepcopy(uv_full)
uvtest.checkWarnings(uv2.unphase_to_drift, category=DeprecationWarning,
message='The xyz array in ENU_from_ECEF is being '
'interpreted as (Npts, 3)')
pytest.raises(ValueError, uv1.__iadd__, uv2)
# Different units
uv2 = copy.deepcopy(uv_full)
uv2.select(freq_chans=np.arange(32, 64))
uv2.vis_units = "Jy"
pytest.raises(ValueError, uv1.__iadd__, uv2)
# Overlapping data
uv2 = copy.deepcopy(uv_full)
pytest.raises(ValueError, uv1.__iadd__, uv2)
# Different integration_time
uv2 = copy.deepcopy(uv_full)
uv2.select(freq_chans=np.arange(32, 64))
uv2.integration_time *= 2
pytest.raises(ValueError, uv1.__iadd__, uv2)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_fast_concat():
uv_full = UVData()
testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_full.read_uvfits(testfile)
# Add frequencies
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
uv1.fast_concat(uv2, 'freq', inplace=True)
# Check history is correct, before replacing and doing a full object check
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific frequencies using pyuvdata. '
'Combined data along frequency axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add frequencies - out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
uvtest.checkWarnings(uv2.fast_concat, [uv1, 'freq'], {'inplace': True},
message='Combined frequencies are not evenly spaced')
assert uv2.Nfreqs == uv_full.Nfreqs
assert uv2._freq_array != uv_full._freq_array
assert uv2._data_array != uv_full._data_array
# reorder frequencies and test that they are equal
index_array = np.argsort(uv2.freq_array[0, :])
uv2.freq_array = uv2.freq_array[:, index_array]
uv2.data_array = uv2.data_array[:, :, index_array, :]
uv2.nsample_array = uv2.nsample_array[:, :, index_array, :]
uv2.flag_array = uv2.flag_array[:, :, index_array, :]
uv2.history = uv_full.history
assert uv2._freq_array == uv_full._freq_array
assert uv2 == uv_full
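The reordering just done follows a general numpy pattern worth spelling out: `argsort` yields the index array that sorts the frequency axis, and applying that same index array to every array sharing the axis keeps data aligned with frequency. A toy sketch (made-up shapes, not real pyuvdata arrays):

```python
import numpy as np

# argsort-based axis reordering: sort freq_array and apply the SAME index
# array to data on the matching axis so values stay aligned.
freq_array = np.array([[150e6, 100e6, 125e6]])   # toy shape (1, Nfreqs)
data = np.array([[10, 20, 30]])                  # toy data on the freq axis
index_array = np.argsort(freq_array[0, :])
freq_sorted = freq_array[:, index_array]
data_sorted = data[:, index_array]
print(index_array.tolist())        # [1, 2, 0]
print(freq_sorted[0].tolist())     # ascending frequencies
print(data_sorted[0].tolist())     # [20, 30, 10]
```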
# Add polarizations
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv1.fast_concat(uv2, 'polarization', inplace=True)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add polarizations - out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uvtest.checkWarnings(uv2.fast_concat, [uv1, 'polarization'], {'inplace': True},
message='Combined polarizations are not evenly spaced')
assert uv2._polarization_array != uv_full._polarization_array
assert uv2._data_array != uv_full._data_array
# reorder pols
uv2.reorder_pols()
uv2.history = uv_full.history
assert uv2 == uv_full
# Add times
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1.fast_concat(uv2, 'blt', inplace=True)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add baselines
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
# divide in half to keep in order
ind1 = np.arange(uv1.Nblts // 2)
ind2 = np.arange(uv1.Nblts // 2, uv1.Nblts)
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv1.fast_concat(uv2, 'blt', inplace=True)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific baseline-times using pyuvdata. '
'Combined data along baseline-time axis '
'using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Add baselines out of order
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv2.fast_concat(uv1, 'blt', inplace=True)
# test freq & pol arrays equal
assert uv2._freq_array == uv_full._freq_array
assert uv2._polarization_array == uv_full._polarization_array
# test Nblt length arrays not equal but same shape
assert uv2._ant_1_array != uv_full._ant_1_array
assert uv2.ant_1_array.shape == uv_full.ant_1_array.shape
assert uv2._ant_2_array != uv_full._ant_2_array
assert uv2.ant_2_array.shape == uv_full.ant_2_array.shape
assert uv2._uvw_array != uv_full._uvw_array
assert uv2.uvw_array.shape == uv_full.uvw_array.shape
assert uv2._time_array != uv_full._time_array
assert uv2.time_array.shape == uv_full.time_array.shape
assert uv2._baseline_array != uv_full._baseline_array
assert uv2.baseline_array.shape == uv_full.baseline_array.shape
assert uv2._data_array != uv_full._data_array
assert uv2.data_array.shape == uv_full.data_array.shape
# reorder blts to enable comparison
uv2.reorder_blts()
assert uv2.blt_order == ('time', 'baseline')
uv2.blt_order = None
uv2.history = uv_full.history
assert uv2 == uv_full
# add baselines such that Nants_data needs to change
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
ant_list = list(range(15)) # Roughly half the antennas in the data
# All blts where ant_1 is in list
ind1 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] in ant_list]
ind2 = [i for i in range(uv1.Nblts) if uv1.ant_1_array[i] not in ant_list]
uv1.select(blt_inds=ind1)
uv2.select(blt_inds=ind2)
uv2.fast_concat(uv1, 'blt', inplace=True)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific baseline-times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv2.history)
# test freq & pol arrays equal
assert uv2._freq_array == uv_full._freq_array
assert uv2._polarization_array == uv_full._polarization_array
# test Nblt length arrays not equal but same shape
assert uv2._ant_1_array != uv_full._ant_1_array
assert uv2.ant_1_array.shape == uv_full.ant_1_array.shape
assert uv2._ant_2_array != uv_full._ant_2_array
assert uv2.ant_2_array.shape == uv_full.ant_2_array.shape
assert uv2._uvw_array != uv_full._uvw_array
assert uv2.uvw_array.shape == uv_full.uvw_array.shape
assert uv2._time_array != uv_full._time_array
assert uv2.time_array.shape == uv_full.time_array.shape
assert uv2._baseline_array != uv_full._baseline_array
assert uv2.baseline_array.shape == uv_full.baseline_array.shape
assert uv2._data_array != uv_full._data_array
assert uv2.data_array.shape == uv_full.data_array.shape
# reorder blts to enable comparison
uv2.reorder_blts()
assert uv2.blt_order == ('time', 'baseline')
uv2.blt_order = None
uv2.history = uv_full.history
assert uv2 == uv_full
# Add multiple axes
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2],
polarizations=uv1.polarization_array[0:2])
uv2.select(times=times[len(times) // 2:],
polarizations=uv2.polarization_array[2:4])
pytest.raises(ValueError, uv1.fast_concat, uv2, 'blt', inplace=True)
# Another combo
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2], freq_chans=np.arange(0, 32))
uv2.select(times=times[len(times) // 2:], freq_chans=np.arange(32, 64))
pytest.raises(ValueError, uv1.fast_concat, uv2, 'blt', inplace=True)
# Add without inplace
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
times = np.unique(uv_full.time_array)
uv1.select(times=times[0:len(times) // 2])
uv2.select(times=times[len(times) // 2:])
uv1 = uv1.fast_concat(uv2, 'blt', inplace=False)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific times using pyuvdata. '
'Combined data along baseline-time '
'axis using pyuvdata.', uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# Check warnings
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(33, 64))
uvtest.checkWarnings(uv1.fast_concat, [uv2, 'freq'],
message='Combined frequencies are not evenly spaced')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=[0])
uv2.select(freq_chans=[3])
uvtest.checkWarnings(uv1.fast_concat, [uv2, 'freq'],
message='Combined frequencies are not contiguous')
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=[0])
uv2.select(freq_chans=[1])
uv2.freq_array += uv2._channel_width.tols[1] / 2.
uvtest.checkWarnings(uv1.fast_concat, [uv2, 'freq'],
nwarnings=0)
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[3])
uvtest.checkWarnings(uv1.fast_concat, [uv2, 'polarization'],
message='Combined polarizations are not evenly spaced')
# Combining histories
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(polarizations=uv1.polarization_array[0:2])
uv2.select(polarizations=uv2.polarization_array[2:4])
uv2.history += ' testing the history. AIPS WTSCAL = 1.0'
uv1.fast_concat(uv2, 'polarization', inplace=True)
assert uvutils._check_histories(uv_full.history + ' Downselected to '
'specific polarizations using pyuvdata. '
'Combined data along polarization '
'axis using pyuvdata. testing the history.',
uv1.history)
uv1.history = uv_full.history
assert uv1 == uv_full
# test add of autocorr-only and crosscorr-only objects
uv_full = UVData()
uv_full.read_miriad(os.path.join(DATA_PATH, 'zen.2457698.40355.xx.HH.uvcA'))
bls = uv_full.get_antpairs()
autos = [bl for bl in bls if bl[0] == bl[1]]
cross = sorted(set(bls) - set(autos))
uv_auto = uv_full.select(bls=autos, inplace=False)
uv_cross = uv_full.select(bls=cross, inplace=False)
uv1 = uv_auto.fast_concat(uv_cross, 'blt')
assert uv1.Nbls == uv_auto.Nbls + uv_cross.Nbls
uv2 = uv_cross.fast_concat(uv_auto, 'blt')
assert uv2.Nbls == uv_auto.Nbls + uv_cross.Nbls
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_fast_concat_errors():
uv_full = UVData()
testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv_full.read_uvfits(testfile)
uv1 = copy.deepcopy(uv_full)
uv2 = copy.deepcopy(uv_full)
uv1.select(freq_chans=np.arange(0, 32))
uv2.select(freq_chans=np.arange(32, 64))
pytest.raises(ValueError, uv1.fast_concat, uv2, 'foo', inplace=True)
cal = UVCal()
pytest.raises(ValueError, uv1.fast_concat, cal, 'freq', inplace=True)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds():
# Test function to interpret key as antpair, pol
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# Get an antpair/pol combo
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pol = uv.polarization_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
ind1, ind2, indp = uv._key2inds((ant1, ant2, pol))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal([0], indp[0])
# Any of these inputs can also be wrapped in an outer tuple, so each form is checked twice.
ind1, ind2, indp = uv._key2inds(((ant1, ant2, pol),))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal([0], indp[0])
# Combo with pol as string
ind1, ind2, indp = uv._key2inds((ant1, ant2, uvutils.polnum2str(pol)))
assert np.array_equal([0], indp[0])
ind1, ind2, indp = uv._key2inds(((ant1, ant2, uvutils.polnum2str(pol)),))
assert np.array_equal([0], indp[0])
# Check conjugation
ind1, ind2, indp = uv._key2inds((ant2, ant1, pol))
assert np.array_equal(bltind, ind2)
assert np.array_equal(np.array([]), ind1)
assert np.array_equal([0], indp[1])
# Conjugation with pol as string
ind1, ind2, indp = uv._key2inds((ant2, ant1, uvutils.polnum2str(pol)))
assert np.array_equal(bltind, ind2)
assert np.array_equal(np.array([]), ind1)
assert np.array_equal([0], indp[1])
assert np.array_equal([], indp[0])
# Antpair only
ind1, ind2, indp = uv._key2inds((ant1, ant2))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.arange(uv.Npols), indp[0])
ind1, ind2, indp = uv._key2inds(((ant1, ant2)))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.arange(uv.Npols), indp[0])
# Baseline number only
ind1, ind2, indp = uv._key2inds(uv.antnums_to_baseline(ant1, ant2))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.arange(uv.Npols), indp[0])
ind1, ind2, indp = uv._key2inds((uv.antnums_to_baseline(ant1, ant2),))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.arange(uv.Npols), indp[0])
# Pol number only
ind1, ind2, indp = uv._key2inds(pol)
assert np.array_equal(np.arange(uv.Nblts), ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([0]), indp[0])
ind1, ind2, indp = uv._key2inds((pol))
assert np.array_equal(np.arange(uv.Nblts), ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([0]), indp[0])
# Pol string only
ind1, ind2, indp = uv._key2inds('LL')
assert np.array_equal(np.arange(uv.Nblts), ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([1]), indp[0])
ind1, ind2, indp = uv._key2inds(('LL'))
assert np.array_equal(np.arange(uv.Nblts), ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([1]), indp[0])
# Test invalid keys
pytest.raises(KeyError, uv._key2inds, 'I') # pol str not in data
pytest.raises(KeyError, uv._key2inds, -8) # pol num not in data
pytest.raises(KeyError, uv._key2inds, 6) # bl num not in data
pytest.raises(KeyError, uv._key2inds, (1, 1)) # ant pair not in data
pytest.raises(KeyError, uv._key2inds, (1, 1, 'rr')) # ant pair not in data
pytest.raises(KeyError, uv._key2inds, (0, 1, 'xx')) # pol not in data
# Test autos are handled correctly
uv.ant_2_array[0] = uv.ant_1_array[0]
ind1, ind2, indp = uv._key2inds((ant1, ant1, pol))
assert np.array_equal(ind1, [0])
assert np.array_equal(ind2, [])
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
ind1, ind2, indp = uv._key2inds((ant2, ant1))
# Pols in data are 'rr', 'll', 'rl', 'lr'
# So conjugated order should be [0, 1, 3, 2]
assert np.array_equal(bltind, ind2)
assert np.array_equal(np.array([]), ind1)
assert np.array_equal(np.array([]), indp[0])
assert np.array_equal([0, 1, 3, 2], indp[1])
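The expected `[0, 1, 3, 2]` order follows from how conjugating a baseline swaps the two feed letters: 'rl' maps to 'lr' and back, while 'rr' and 'll' map to themselves. A toy lookup mirroring that logic (the real mapping is computed inside `UVData._key2inds`):

```python
# Sketch of the conjugated polarization ordering for pols ['rr', 'll', 'rl', 'lr'].
pols = ['rr', 'll', 'rl', 'lr']
conj_pols = [p[::-1] for p in pols]          # swap feeds: ['rr', 'll', 'lr', 'rl']
order = [pols.index(p) for p in conj_pols]   # where each conjugate lives in the data
print(order)  # [0, 1, 3, 2]
```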
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols_fringe():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
uv.select(polarizations=['rl'])
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
# Conjugate one instance of this baseline (swap ant1 and ant2).
uv.ant_1_array[0] = ant2
uv.ant_2_array[0] = ant1
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
ind1, ind2, indp = uv._key2inds((ant1, ant2))
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([0]), indp[0])
assert np.array_equal(np.array([]), indp[1])
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols_bl_fringe():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
uv.select(polarizations=['rl'])
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
# Swap the antenna order for one instance of this baseline.
uv.ant_1_array[0] = ant2
uv.ant_2_array[0] = ant1
uv.baseline_array[0] = uvutils.antnums_to_baseline(ant2, ant1, uv.Nants_telescope)
bl = uvutils.antnums_to_baseline(ant1, ant2, uv.Nants_telescope)
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
ind1, ind2, indp = uv._key2inds(bl)
assert np.array_equal(bltind, ind1)
assert np.array_equal(np.array([]), ind2)
assert np.array_equal(np.array([0]), indp[0])
assert np.array_equal(np.array([]), indp[1])
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols_missing_data():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
uv.select(polarizations=['rl'])
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pytest.raises(KeyError, uv._key2inds, (ant2, ant1))
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols_bls():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
bl = uvutils.antnums_to_baseline(ant2, ant1, uv.Nants_telescope)
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
ind1, ind2, indp = uv._key2inds(bl)
# Pols in data are 'rr', 'll', 'rl', 'lr'
# So conjugated order should be [0, 1, 3, 2]
assert np.array_equal(bltind, ind2)
assert np.array_equal(np.array([]), ind1)
assert np.array_equal(np.array([]), indp[0])
assert np.array_equal([0, 1, 3, 2], indp[1])
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_key2inds_conj_all_pols_missing_data_bls():
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
uv.select(polarizations=['rl'])
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
bl = uvutils.antnums_to_baseline(ant2, ant1, uv.Nants_telescope)
pytest.raises(KeyError, uv._key2inds, bl)
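# The baseline numbers built by uvutils.antnums_to_baseline above follow a
# MIRIAD-style integer packing. A minimal sketch, assuming the 2048-antenna
# encoding (the real pyuvdata function also supports an older 256-antenna
# scheme and other edge cases):

```python
def antnums_to_bl(ant1, ant2):
    # Pack 1-indexed antenna numbers into one integer, offset by 2**16
    # to distinguish this scheme from the older 256-antenna packing.
    return 2048 * (ant1 + 1) + (ant2 + 1) + 2 ** 16


def bl_to_antnums(bl):
    # Invert the packing above.
    bl -= 2 ** 16
    return bl // 2048 - 1, bl % 2048 - 1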
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_smart_slicing():
# Test function to slice data
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# ind1 reg, ind2 empty, pol reg
ind1 = 10 * np.arange(9)
ind2 = []
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []))
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
assert not d.flags.writeable
# Ensure a view was returned
uv.data_array[ind1[1], 0, 0, indp[0]] = 5.43
assert d[1, 0, 0] == uv.data_array[ind1[1], 0, 0, indp[0]]
# force copy
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []), force_copy=True)
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
assert d.flags.writeable
# Ensure a copy was returned
uv.data_array[ind1[1], 0, 0, indp[0]] = 4.3
assert d[1, 0, 0] != uv.data_array[ind1[1], 0, 0, indp[0]]
# ind1 reg, ind2 empty, pol not reg
ind1 = 10 * np.arange(9)
ind2 = []
indp = [0, 1, 3]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []))
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
assert not d.flags.writeable
# Ensure a copy was returned
uv.data_array[ind1[1], 0, 0, indp[0]] = 1.2
assert d[1, 0, 0] != uv.data_array[ind1[1], 0, 0, indp[0]]
# ind1 not reg, ind2 empty, pol reg
ind1 = [0, 4, 5]
ind2 = []
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []))
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
assert not d.flags.writeable
# Ensure a copy was returned
uv.data_array[ind1[1], 0, 0, indp[0]] = 8.2
assert d[1, 0, 0] != uv.data_array[ind1[1], 0, 0, indp[0]]
# ind1 not reg, ind2 empty, pol not reg
ind1 = [0, 4, 5]
ind2 = []
indp = [0, 1, 3]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []))
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
assert not d.flags.writeable
# Ensure a copy was returned
uv.data_array[ind1[1], 0, 0, indp[0]] = 3.4
assert d[1, 0, 0] != uv.data_array[ind1[1], 0, 0, indp[0]]
# ind1 empty, ind2 reg, pol reg
# Note conjugation test ensures the result is a copy, not a view.
ind1 = []
ind2 = 10 * np.arange(9)
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, ([], indp))
dcheck = uv.data_array[ind2, :, :, :]
dcheck = np.squeeze(np.conj(dcheck[:, :, :, indp]))
assert np.all(d == dcheck)
# ind1 empty, ind2 reg, pol not reg
ind1 = []
ind2 = 10 * np.arange(9)
indp = [0, 1, 3]
d = uv._smart_slicing(uv.data_array, ind1, ind2, ([], indp))
dcheck = uv.data_array[ind2, :, :, :]
dcheck = np.squeeze(np.conj(dcheck[:, :, :, indp]))
assert np.all(d == dcheck)
# ind1 empty, ind2 not reg, pol reg
ind1 = []
ind2 = [1, 4, 5, 10]
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, ([], indp))
dcheck = uv.data_array[ind2, :, :, :]
dcheck = np.squeeze(np.conj(dcheck[:, :, :, indp]))
assert np.all(d == dcheck)
# ind1 empty, ind2 not reg, pol not reg
ind1 = []
ind2 = [1, 4, 5, 10]
indp = [0, 1, 3]
d = uv._smart_slicing(uv.data_array, ind1, ind2, ([], indp))
dcheck = uv.data_array[ind2, :, :, :]
dcheck = np.squeeze(np.conj(dcheck[:, :, :, indp]))
assert np.all(d == dcheck)
# ind1, ind2 not empty, pol reg
ind1 = np.arange(20)
ind2 = np.arange(30, 40)
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, indp))
dcheck = np.append(uv.data_array[ind1, :, :, :],
np.conj(uv.data_array[ind2, :, :, :]), axis=0)
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
# ind1, ind2 not empty, pol not reg
ind1 = np.arange(20)
ind2 = np.arange(30, 40)
indp = [0, 1, 3]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, indp))
dcheck = np.append(uv.data_array[ind1, :, :, :],
np.conj(uv.data_array[ind2, :, :, :]), axis=0)
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
# test single element
ind1 = [45]
ind2 = []
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []))
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp], axis=1)
assert np.all(d == dcheck)
# test single element
ind1 = []
ind2 = [45]
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, ([], indp))
assert np.all(d == np.conj(dcheck))
# Full squeeze
ind1 = [45]
ind2 = []
indp = [0, 1]
d = uv._smart_slicing(uv.data_array, ind1, ind2, (indp, []), squeeze='full')
dcheck = uv.data_array[ind1, :, :, :]
dcheck = np.squeeze(dcheck[:, :, :, indp])
assert np.all(d == dcheck)
# Test invalid squeeze
pytest.raises(ValueError, uv._smart_slicing, uv.data_array, ind1, ind2,
(indp, []), squeeze='notasqueeze')
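# The view-vs-copy behavior exercised above mirrors a general numpy rule:
# basic (slice) indexing returns a view on the parent array, while fancy
# (integer-array) indexing returns an independent copy. A self-contained
# illustration:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
view = arr[1:, :]       # basic (slice) indexing -> a view sharing memory
copy = arr[[1, 2], :]   # fancy (integer-array) indexing -> an independent copy
arr[1, 0] = 99
view_sees_update = bool(view[0, 0] == 99)
copy_sees_update = bool(copy[0, 0] == 99)
```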
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_data():
# Test get_data function for easy access to data
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# Get an antpair/pol combo
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pol = uv.polarization_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
dcheck = np.squeeze(uv.data_array[bltind, :, :, 0])
d = uv.get_data(ant1, ant2, pol)
assert np.all(dcheck == d)
d = uv.get_data(ant1, ant2, uvutils.polnum2str(pol))
assert np.all(dcheck == d)
d = uv.get_data((ant1, ant2, pol))
assert np.all(dcheck == d)
with pytest.raises(ValueError) as cm:
uv.get_data((ant1, ant2, pol), (ant1, ant2, pol))
assert str(cm.value).startswith('no more than 3 key values can be passed')
# Check conjugation
d = uv.get_data(ant2, ant1, pol)
assert np.all(dcheck == np.conj(d))
# Check cross pol conjugation
d = uv.get_data(ant2, ant1, uv.polarization_array[2])
d1 = uv.get_data(ant1, ant2, uv.polarization_array[3])
assert np.all(d == np.conj(d1))
# Antpair only
dcheck = np.squeeze(uv.data_array[bltind, :, :, :])
d = uv.get_data(ant1, ant2)
assert np.all(dcheck == d)
# Pol number only
dcheck = np.squeeze(uv.data_array[:, :, :, 0])
d = uv.get_data(pol)
assert np.all(dcheck == d)
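# The extraction pattern tested here (select baseline-times, then squeeze out
# singleton axes) can be sketched with toy arrays shaped like pyuvdata's
# (Nblts, Nspws, Nfreqs, Npols) data array; the array contents below are
# made up for illustration:

```python
import numpy as np

data = np.arange(24).reshape(4, 1, 3, 2)  # (Nblts, Nspws, Nfreqs, Npols)
ant_1 = np.array([0, 0, 1, 0])
ant_2 = np.array([1, 1, 2, 1])
# Baseline-time indices for the (0, 1) pair, then a time x freq waterfall
# for polarization index 0.
bltind = np.nonzero((ant_1 == 0) & (ant_2 == 1))[0]
waterfall = np.squeeze(data[bltind, :, :, 0])
```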
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_flags():
# Test function for easy access to flags
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# Get an antpair/pol combo
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pol = uv.polarization_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
dcheck = np.squeeze(uv.flag_array[bltind, :, :, 0])
d = uv.get_flags(ant1, ant2, pol)
assert np.all(dcheck == d)
d = uv.get_flags(ant1, ant2, uvutils.polnum2str(pol))
assert np.all(dcheck == d)
d = uv.get_flags((ant1, ant2, pol))
assert np.all(dcheck == d)
with pytest.raises(ValueError) as cm:
uv.get_flags((ant1, ant2, pol), (ant1, ant2, pol))
assert str(cm.value).startswith('no more than 3 key values can be passed')
# Check conjugation
d = uv.get_flags(ant2, ant1, pol)
assert np.all(dcheck == d)
assert d.dtype == np.bool_
# Antpair only
dcheck = np.squeeze(uv.flag_array[bltind, :, :, :])
d = uv.get_flags(ant1, ant2)
assert np.all(dcheck == d)
# Pol number only
dcheck = np.squeeze(uv.flag_array[:, :, :, 0])
d = uv.get_flags(pol)
assert np.all(dcheck == d)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_nsamples():
# Test function for easy access to nsample array
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# Get an antpair/pol combo
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pol = uv.polarization_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
dcheck = np.squeeze(uv.nsample_array[bltind, :, :, 0])
d = uv.get_nsamples(ant1, ant2, pol)
assert np.all(dcheck == d)
d = uv.get_nsamples(ant1, ant2, uvutils.polnum2str(pol))
assert np.all(dcheck == d)
d = uv.get_nsamples((ant1, ant2, pol))
assert np.all(dcheck == d)
with pytest.raises(ValueError) as cm:
uv.get_nsamples((ant1, ant2, pol), (ant1, ant2, pol))
assert str(cm.value).startswith('no more than 3 key values can be passed')
# Check conjugation
d = uv.get_nsamples(ant2, ant1, pol)
assert np.all(dcheck == d)
# Antpair only
dcheck = np.squeeze(uv.nsample_array[bltind, :, :, :])
d = uv.get_nsamples(ant1, ant2)
assert np.all(dcheck == d)
# Pol number only
dcheck = np.squeeze(uv.nsample_array[:, :, :, 0])
d = uv.get_nsamples(pol)
assert np.all(dcheck == d)
@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_antpair2ind():
# Test for baseline-time axis indexer
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
uv.read_miriad(testfile)
# get indices
inds = uv.antpair2ind(0, 1, ordered=False)
np.testing.assert_array_equal(inds, np.array([1, 22, 43, 64, 85, 106, 127, 148, 169,
190, 211, 232, 253, 274, 295, 316,
337, 358, 379]))
assert np.issubdtype(inds.dtype, np.integer)
# conjugate (and use key rather than arg expansion)
inds2 = uv.antpair2ind((1, 0), ordered=False)
np.testing.assert_array_equal(inds, inds2)
# test ordered
inds3 = uv.antpair2ind(1, 0, ordered=True)
assert inds3.size == 0
inds3 = uv.antpair2ind(0, 1, ordered=True)
np.testing.assert_array_equal(inds, inds3)
# test autos w/ and w/o ordered
inds4 = uv.antpair2ind(0, 0, ordered=True)
inds5 = uv.antpair2ind(0, 0, ordered=False)
np.testing.assert_array_equal(inds4, inds5)
# test exceptions
pytest.raises(ValueError, uv.antpair2ind, 1)
pytest.raises(ValueError, uv.antpair2ind, 'bar', 'foo')
pytest.raises(ValueError, uv.antpair2ind, 0, 1, 'foo')
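# antpair2ind is essentially a masked lookup on the baseline-time axis. A
# minimal sketch with a hypothetical helper (not pyuvdata's implementation):

```python
import numpy as np

def antpair2ind_sketch(ant_1_array, ant_2_array, ant1, ant2, ordered=True):
    # Indices on the baseline-time axis where (ant1, ant2) occurs; with
    # ordered=False the conjugate ordering (ant2, ant1) also matches.
    mask = (ant_1_array == ant1) & (ant_2_array == ant2)
    if not ordered:
        mask |= (ant_1_array == ant2) & (ant_2_array == ant1)
    return np.nonzero(mask)[0]
```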
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_times():
# Test function for easy access to times, to work in conjunction with get_data
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# Get an antpair/pol combo (pol shouldn't actually affect the result)
ant1 = uv.ant_1_array[0]
ant2 = uv.ant_2_array[0]
pol = uv.polarization_array[0]
bltind = np.where((uv.ant_1_array == ant1) & (uv.ant_2_array == ant2))[0]
dcheck = uv.time_array[bltind]
d = uv.get_times(ant1, ant2, pol)
assert np.all(dcheck == d)
d = uv.get_times(ant1, ant2, uvutils.polnum2str(pol))
assert np.all(dcheck == d)
d = uv.get_times((ant1, ant2, pol))
assert np.all(dcheck == d)
with pytest.raises(ValueError) as cm:
uv.get_times((ant1, ant2, pol), (ant1, ant2, pol))
assert str(cm.value).startswith('no more than 3 key values can be passed')
# Check conjugation
d = uv.get_times(ant2, ant1, pol)
assert np.all(dcheck == d)
# Antpair only
d = uv.get_times(ant1, ant2)
assert np.all(dcheck == d)
# Pol number only
d = uv.get_times(pol)
assert np.all(d == uv.time_array)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_antpairpol_iter():
# Test generator
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
pol_dict = {uvutils.polnum2str(uv.polarization_array[i]): i for i in range(uv.Npols)}
keys = []
pols = set()
bls = set()
for key, d in uv.antpairpol_iter():
keys += key
bl = uv.antnums_to_baseline(key[0], key[1])
blind = np.where(uv.baseline_array == bl)[0]
bls.add(bl)
pols.add(key[2])
dcheck = np.squeeze(uv.data_array[blind, :, :, pol_dict[key[2]]])
assert np.all(dcheck == d)
assert len(bls) == len(uv.get_baseline_nums())
assert len(pols) == uv.Npols
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_ants():
# Test function to get unique antennas in data
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
ants = uv.get_ants()
for ant in ants:
assert (ant in uv.ant_1_array) or (ant in uv.ant_2_array)
for ant in uv.ant_1_array:
assert ant in ants
for ant in uv.ant_2_array:
assert ant in ants
def test_get_ENU_antpos():
uvd = UVData()
uvd.read_miriad(os.path.join(DATA_PATH, "zen.2457698.40355.xx.HH.uvcA"))
# no center, no pick data ants
antpos, ants = uvd.get_ENU_antpos(center=False, pick_data_ants=False)
assert len(ants) == 113
assert np.isclose(antpos[0, 0], 19.340211050751535)
assert ants[0] == 0
# test default behavior
antpos2, ants = uvtest.checkWarnings(uvd.get_ENU_antpos, category=DeprecationWarning,
message='The default for the `center` '
'keyword has changed')
assert np.all(antpos == antpos2)
# center
antpos, ants = uvd.get_ENU_antpos(center=True, pick_data_ants=False)
assert np.isclose(antpos[0, 0], 22.472442651767714)
# pick data ants
antpos, ants = uvd.get_ENU_antpos(center=True, pick_data_ants=True)
assert ants[0] == 9
assert np.isclose(antpos[0, 0], -0.0026981323386223721)
@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_telescope_loc_XYZ_check():
# test that improper telescope locations can still be read
miriad_file = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
uv = UVData()
uv.read(miriad_file)
uv.telescope_location = uvutils.XYZ_from_LatLonAlt(*uv.telescope_location)
fname = DATA_PATH + "/test/test.uv"
uv.write_miriad(fname, run_check=False, check_extra=False, clobber=True)
# try to read file without checks (passing is implicit)
uv.read(fname, run_check=False)
# try to read with checks: assert it fails
pytest.raises(ValueError, uv.read, fname)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_pols():
# Test function to get unique polarizations in string format
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
pols = uv.get_pols()
pols_data = ['rr', 'll', 'lr', 'rl']
assert sorted(pols) == sorted(pols_data)
@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_get_pols_x_orientation():
miriad_file = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
uv_in = UVData()
uv_in.read(miriad_file)
uv_in.x_orientation = 'east'
pols = uv_in.get_pols()
pols_data = ['en']
assert pols == pols_data
uv_in.x_orientation = 'north'
pols = uv_in.get_pols()
pols_data = ['ne']
assert pols == pols_data
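# The relabeling tested above maps physical x/y feeds to east/north pol
# strings once x_orientation is known. A toy sketch of that mapping
# (illustrative only, not pyuvdata's conversion code):

```python
def xy_to_en(pol, x_orientation):
    # 'east' means the physical x feed points east (x -> 'e', y -> 'n');
    # 'north' flips the assignment.
    table = {'east': {'x': 'e', 'y': 'n'}, 'north': {'x': 'n', 'y': 'e'}}
    return ''.join(table[x_orientation][c] for c in pol.lower())
```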
@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_deprecated_x_orientation():
miriad_file = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
uv_in = UVData()
uv_in.read(miriad_file)
uv_in.x_orientation = 'e'
uvtest.checkWarnings(uv_in.check, category=DeprecationWarning,
message=['x_orientation e is not one of [east, north], '
'converting to "east".'])
uv_in.x_orientation = 'N'
uvtest.checkWarnings(uv_in.check, category=DeprecationWarning,
message=['x_orientation N is not one of [east, north], '
'converting to "north".'])
uv_in.x_orientation = 'foo'
pytest.raises(ValueError, uvtest.checkWarnings, uv_in.check,
category=DeprecationWarning,
message=['x_orientation foo is not one of [east, north], '
'cannot be converted.'])
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_get_feedpols():
# Test function to get unique antenna feed polarizations in data, in string format
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
pols = uv.get_feedpols()
pols_data = ['r', 'l']
assert sorted(pols) == sorted(pols_data)
# Test break when pseudo-Stokes visibilities are present
uv.polarization_array[0] = 1 # pseudo-Stokes I
pytest.raises(ValueError, uv.get_feedpols)
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_parse_ants():
# Test function to get correct antenna pairs and polarizations
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
# All baselines
ant_str = 'all'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
assert isinstance(ant_pairs_nums, type(None))
assert isinstance(polarizations, type(None))
# Auto correlations
ant_str = 'auto'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
assert Counter(ant_pairs_nums) == Counter([])
assert isinstance(polarizations, type(None))
# Cross correlations
ant_str = 'cross'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
assert Counter(uv.get_antpairs()) == Counter(ant_pairs_nums)
assert isinstance(polarizations, type(None))
# pseudo-Stokes params
ant_str = 'pI,pq,pU,pv'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
pols_expected = [4, 3, 2, 1]
assert isinstance(ant_pairs_nums, type(None))
assert Counter(polarizations) == Counter(pols_expected)
# Unparsable string
ant_str = 'none'
pytest.raises(ValueError, uv.parse_ants, ant_str)
# Single antenna number
ant_str = '0'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(0, 1), (0, 2), (0, 3), (0, 6), (0, 7), (0, 8),
(0, 11), (0, 14), (0, 18), (0, 19), (0, 20),
(0, 21), (0, 22), (0, 23), (0, 24), (0, 26),
(0, 27)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Single antenna number not in the data
ant_str = '10'
ant_pairs_nums, polarizations = uvtest.checkWarnings(uv.parse_ants,
[ant_str], {},
nwarnings=1,
message='Warning: Antenna')
assert isinstance(ant_pairs_nums, type(None))
assert isinstance(polarizations, type(None))
# Single antenna number with polarization, both not in the data
ant_str = '10x'
ant_pairs_nums, polarizations = uvtest.checkWarnings(uv.parse_ants,
[ant_str], {},
nwarnings=2,
message=['Warning: Antenna', 'Warning: Polarization'])
assert isinstance(ant_pairs_nums, type(None))
assert isinstance(polarizations, type(None))
# Multiple antenna numbers as list
ant_str = '22,26'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(0, 22), (0, 26), (1, 22), (1, 26), (2, 22), (2, 26),
(3, 22), (3, 26), (6, 22), (6, 26), (7, 22),
(7, 26), (8, 22), (8, 26), (11, 22), (11, 26),
(14, 22), (14, 26), (18, 22), (18, 26),
(19, 22), (19, 26), (20, 22), (20, 26),
(21, 22), (21, 26), (22, 23), (22, 24),
(22, 26), (22, 27), (23, 26), (24, 26),
(26, 27)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Single baseline
ant_str = '1_3'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 3)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Single baseline with polarization
ant_str = '1l_3r'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 3)]
pols_expected = [-4]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Single baseline with single polarization in first entry
ant_str = '1l_3,2x_3'
ant_pairs_nums, polarizations = uvtest.checkWarnings(uv.parse_ants,
[ant_str], {},
nwarnings=1,
message='Warning: Polarization')
ant_pairs_expected = [(1, 3), (2, 3)]
pols_expected = [-2, -4]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Single baseline with single polarization in last entry
ant_str = '1_3l,2_3x'
ant_pairs_nums, polarizations = uvtest.checkWarnings(uv.parse_ants,
[ant_str], {},
nwarnings=1,
message='Warning: Polarization')
ant_pairs_expected = [(1, 3), (2, 3)]
pols_expected = [-2, -3]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Multiple baselines as list
ant_str = '1_2,1_3,1_11'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 2), (1, 3), (1, 11)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Multiple baselines with polarizations as list
ant_str = '1r_2l,1l_3l,1r_11r'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 2), (1, 3), (1, 11)]
pols_expected = [-1, -2, -3]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Specific baselines with parenthesis
ant_str = '(1,3)_11'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 11), (3, 11)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Specific baselines with parenthesis
ant_str = '1_(3,11)'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 3), (1, 11)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Antenna numbers with polarizations
ant_str = '(1l,2r)_(3l,6r)'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 3), (1, 6), (2, 3), (2, 6)]
pols_expected = [-1, -2, -3, -4]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Antenna numbers with - for avoidance
ant_str = '1_(-3,11)'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 11)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Remove specific antenna number
ant_str = '1,-3'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(0, 1), (1, 2), (1, 6), (1, 7), (1, 8), (1, 11),
(1, 14), (1, 18), (1, 19), (1, 20), (1, 21),
(1, 22), (1, 23), (1, 24), (1, 26), (1, 27)]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Remove specific baseline (same expected antenna pairs as above example)
ant_str = '1,-1_3'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Antenna numbers with polarizations and - for avoidance
ant_str = '1l_(-3r,11l)'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 11)]
pols_expected = [-2]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Antenna numbers and pseudo-Stokes parameters
ant_str = '(1l,2r)_(3l,6r),pI,pq'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 3), (1, 6), (2, 3), (2, 6)]
pols_expected = [2, 1, -1, -2, -3, -4]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Multiple baselines with multiple polarizations, one pol to be removed
ant_str = '1l_2,1l_3,-1l_3r'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = [(1, 2), (1, 3)]
pols_expected = [-2]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Multiple baselines with multiple polarizations, one pol (not in data) to be removed
ant_str = '1l_2,1l_3,-1x_3y'
ant_pairs_nums, polarizations = uvtest.checkWarnings(uv.parse_ants,
[ant_str], {},
nwarnings=1,
message='Warning: Polarization')
ant_pairs_expected = [(1, 2), (1, 3)]
pols_expected = [-2, -4]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Test print toggle on single baseline with polarization
ant_str = '1l_2l'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str, print_toggle=True)
ant_pairs_expected = [(1, 2)]
pols_expected = [-2]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert Counter(polarizations) == Counter(pols_expected)
# Test ant_str='auto' on file with auto correlations
uv = UVData()
testfile = os.path.join(DATA_PATH, 'zen.2457698.40355.xx.HH.uvcA')
uv.read(testfile)
ant_str = 'auto'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_nums = [9, 10, 20, 22, 31, 43, 53, 64, 65, 72, 80, 81, 88, 89, 96, 97,
104, 105, 112]
ant_pairs_autos = [(ant_i, ant_i) for ant_i in ant_nums]
assert Counter(ant_pairs_nums) == Counter(ant_pairs_autos)
assert isinstance(polarizations, type(None))
# Test cross correlation extraction on data with auto + cross
ant_str = 'cross'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_cross = list(itertools.combinations(ant_nums, 2))
assert Counter(ant_pairs_nums) == Counter(ant_pairs_cross)
assert isinstance(polarizations, type(None))
# Remove only polarization of single baseline
ant_str = 'all,-9x_10x'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = ant_pairs_autos + ant_pairs_cross
ant_pairs_expected.remove((9, 10))
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
# Test appending all to beginning of strings that start with -
ant_str = '-9'
ant_pairs_nums, polarizations = uv.parse_ants(ant_str)
ant_pairs_expected = ant_pairs_autos + ant_pairs_cross
for ant_i in ant_nums:
ant_pairs_expected.remove((9, ant_i))
assert Counter(ant_pairs_nums) == Counter(ant_pairs_expected)
assert isinstance(polarizations, type(None))
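# The simplest ant_str forms split on ',' and then '_'. A toy parser for
# plain 'a_b' baseline lists (no polarizations, negation, or parentheses,
# unlike the real parse_ants):

```python
def parse_simple_bls(ant_str):
    # Split a comma-separated list of 'a_b' tokens into antenna pairs.
    pairs = []
    for token in ant_str.split(','):
        a, b = token.split('_')
        pairs.append((int(a), int(b)))
    return pairs
```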
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_select_with_ant_str():
# Test select function with ant_str argument
uv = UVData()
testfile = os.path.join(
DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
uv.read_uvfits(testfile)
inplace = False
# Check error thrown if ant_str is passed with antenna_nums,
# antenna_names, bls, or polarizations
pytest.raises(ValueError, uv.select,
ant_str='',
antenna_nums=[],
inplace=inplace)
pytest.raises(ValueError, uv.select,
ant_str='',
antenna_names=[],
inplace=inplace)
pytest.raises(ValueError, uv.select,
ant_str='',
bls=[],
inplace=inplace)
pytest.raises(ValueError, uv.select,
ant_str='',
polarizations=[],
inplace=inplace)
# All baselines
ant_str = 'all'
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(uv.get_antpairs())
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Auto correlations
ant_str = 'auto'
pytest.raises(ValueError, uv.select, ant_str=ant_str, inplace=inplace)
# No auto correlations in this data
# Cross correlations
ant_str = 'cross'
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(uv.get_antpairs())
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# All baselines in data are cross correlations
# pseudo-Stokes params
ant_str = 'pI,pq,pU,pv'
pytest.raises(ValueError, uv.select, ant_str=ant_str, inplace=inplace)
# Unparsable string
ant_str = 'none'
pytest.raises(ValueError, uv.select, ant_str=ant_str, inplace=inplace)
# Single antenna number
ant_str = '0'
ant_pairs = [(0, 1), (0, 2), (0, 3), (0, 6), (0, 7), (0, 8), (0, 11),
(0, 14), (0, 18), (0, 19), (0, 20), (0, 21), (0, 22),
(0, 23), (0, 24), (0, 26), (0, 27)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Single antenna number not present in data
ant_str = '10'
uv2 = uvtest.checkWarnings(uv.select, [], {'ant_str': ant_str, 'inplace': inplace},
nwarnings=1, message='Warning: Antenna')
# Multiple antenna numbers as list
ant_str = '22,26'
ant_pairs = [(0, 22), (0, 26), (1, 22), (1, 26), (2, 22), (2, 26),
(3, 22), (3, 26), (6, 22), (6, 26), (7, 22),
(7, 26), (8, 22), (8, 26), (11, 22), (11, 26),
(14, 22), (14, 26), (18, 22), (18, 26), (19, 22),
(19, 26), (20, 22), (20, 26), (21, 22), (21, 26),
(22, 23), (22, 24), (22, 26), (22, 27), (23, 26),
(24, 26), (26, 27)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Single baseline
ant_str = '1_3'
ant_pairs = [(1, 3)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Single baseline with polarization
ant_str = '1l_3r'
ant_pairs = [(1, 3)]
pols = ['lr']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Single baseline with single polarization in first entry
ant_str = '1l_3,2x_3'
# x,y pols not present in data
uv2 = uvtest.checkWarnings(uv.select, [],
{'ant_str': ant_str, 'inplace': inplace},
nwarnings=1, message='Warning: Polarization')
# with polarizations in data
ant_str = '1l_3,2_3'
ant_pairs = [(1, 3), (2, 3)]
pols = ['ll', 'lr']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Single baseline with single polarization in last entry
ant_str = '1_3l,2_3x'
# x,y pols not present in data
uv2 = uvtest.checkWarnings(uv.select, [],
{'ant_str': ant_str, 'inplace': inplace},
nwarnings=1, message='Warning: Polarization')
# with polarizations in data
ant_str = '1_3l,2_3'
ant_pairs = [(1, 3), (2, 3)]
pols = ['ll', 'rl']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Multiple baselines as list
ant_str = '1_2,1_3,1_10'
# Antenna number 10 not in data
uv2 = uvtest.checkWarnings(uv.select, [],
{'ant_str': ant_str, 'inplace': inplace},
nwarnings=1, message='Warning: Antenna')
ant_pairs = [(1, 2), (1, 3)]
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Multiple baselines with polarizations as list
ant_str = '1r_2l,1l_3l,1r_11r'
ant_pairs = [(1, 2), (1, 3), (1, 11)]
pols = ['rr', 'll', 'rl']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Specific baselines with parenthesis
ant_str = '(1,3)_11'
ant_pairs = [(1, 11), (3, 11)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Specific baselines with parenthesis
ant_str = '1_(3,11)'
ant_pairs = [(1, 3), (1, 11)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Antenna numbers with polarizations
ant_str = '(1l,2r)_(3l,6r)'
ant_pairs = [(1, 3), (1, 6), (2, 3), (2, 6)]
pols = ['rr', 'll', 'rl', 'lr']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Antenna numbers with - for avoidance
ant_str = '1_(-3,11)'
ant_pairs = [(1, 11)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
ant_str = '(-1,3)_11'
ant_pairs = [(3, 11)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Remove specific antenna number
ant_str = '1,-3'
ant_pairs = [(0, 1), (1, 2), (1, 6), (1, 7), (1, 8), (1, 11),
(1, 14), (1, 18), (1, 19), (1, 20), (1, 21),
(1, 22), (1, 23), (1, 24), (1, 26), (1, 27)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Remove specific baseline
ant_str = '1,-1_3'
ant_pairs = [(0, 1), (1, 2), (1, 6), (1, 7), (1, 8), (1, 11),
(1, 14), (1, 18), (1, 19), (1, 20), (1, 21),
(1, 22), (1, 23), (1, 24), (1, 26), (1, 27)]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Antenna numbers with polarizations and - for avoidance
ant_str = '1l_(-3r,11l)'
ant_pairs = [(1, 11)]
pols = ['ll']
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(pols)
# Test pseudo-Stokes params with select
ant_str = 'pi,pQ'
pols = ['pQ', 'pI']
uv.polarization_array = np.array([4, 3, 2, 1])
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(uv.get_antpairs())
assert Counter(uv2.get_pols()) == Counter(pols)
# Test ant_str = 'auto' on file with auto correlations
uv = UVData()
testfile = os.path.join(DATA_PATH, 'zen.2457698.40355.xx.HH.uvcA')
uv.read(testfile)
ant_str = 'auto'
ant_nums = [9, 10, 20, 22, 31, 43, 53, 64, 65, 72, 80, 81, 88, 89, 96, 97,
104, 105, 112]
ant_pairs_autos = [(ant_i, ant_i) for ant_i in ant_nums]
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs_autos)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Test cross correlation extraction on data with auto + cross
ant_str = 'cross'
ant_pairs_cross = list(itertools.combinations(ant_nums, 2))
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs_cross)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Remove only polarization of single baseline
ant_str = 'all,-9x_10x'
ant_pairs = ant_pairs_autos + ant_pairs_cross
ant_pairs.remove((9, 10))
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
# Test appending all to beginning of strings that start with -
ant_str = '-9'
ant_pairs = ant_pairs_autos + ant_pairs_cross
for ant_i in ant_nums:
ant_pairs.remove((9, ant_i))
uv2 = uv.select(ant_str=ant_str, inplace=inplace)
assert Counter(uv2.get_antpairs()) == Counter(ant_pairs)
assert Counter(uv2.get_pols()) == Counter(uv.get_pols())
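
# The ant_str selection checks above compare antenna-pair and polarization lists
# order-insensitively via collections.Counter. A minimal standalone sketch of that
# comparison idiom (the pair values below are made up, not taken from any data file):

```python
from collections import Counter

# Counter equality ignores ordering but respects multiplicity.
expected_pairs = [(1, 3), (2, 3)]
observed_pairs = [(2, 3), (1, 3)]
assert Counter(observed_pairs) == Counter(expected_pairs)

# A plain set comparison would silently hide duplicated entries:
assert Counter([(1, 3), (1, 3)]) != Counter([(1, 3)])
assert set([(1, 3), (1, 3)]) == set([(1, 3)])
```

# This is why the tests use Counter rather than set: a selection that returned a
# baseline twice would still pass a set-based comparison.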


def test_set_uvws_from_antenna_pos():
    # Test set_uvws_from_antenna_positions function with phased data
    uv_object = UVData()
    testfile = os.path.join(DATA_PATH, '1133866760.uvfits')
    uv_object.read_uvfits(testfile)
    orig_uvw_array = np.copy(uv_object.uvw_array)

    with pytest.raises(ValueError) as cm:
        uv_object.set_uvws_from_antenna_positions()
    assert str(cm.value).startswith("UVW calculation requires unphased data.")

    with pytest.raises(ValueError) as cm:
        uvtest.checkWarnings(
            uv_object.set_uvws_from_antenna_positions,
            [True, "xyz"],
            message="Data will be unphased"
        )
    assert str(cm.value).startswith("Invalid parameter orig_phase_frame.")

    with pytest.raises(ValueError) as cm:
        uvtest.checkWarnings(
            uv_object.set_uvws_from_antenna_positions,
            [True, "gcrs", "xyz"],
            message="Data will be unphased"
        )
    assert str(cm.value).startswith("Invalid parameter output_phase_frame.")

    uvtest.checkWarnings(
        uv_object.set_uvws_from_antenna_positions,
        [True, 'gcrs', 'gcrs'],
        message='Data will be unphased'
    )
    max_diff = np.amax(np.absolute(np.subtract(orig_uvw_array,
                                               uv_object.uvw_array)))
    assert np.isclose(max_diff, 0., atol=2)


def test_deprecated_redundancy_funcs():
    uv0 = UVData()
    uv0.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))
    redant_gps, centers, lengths = uvtest.checkWarnings(
        uv0.get_antenna_redundancies,
        func_kwargs={'include_autos': False, 'conjugate_bls': True},
        category=DeprecationWarning, nwarnings=2,
        message=['UVData.get_antenna_redundancies has been replaced',
                 'The default for the `center` keyword'])
    redbl_gps, centers, lengths, _ = uvtest.checkWarnings(
        uv0.get_baseline_redundancies, category=DeprecationWarning,
        message='UVData.get_baseline_redundancies has been replaced')

    red_gps_new, _, _ = uv0.get_redundancies(include_autos=False, use_antpos=True)
    assert red_gps_new == redant_gps


@pytest.mark.filterwarnings("ignore:The default for the `center` keyword")
def test_get_antenna_redundancies():
    uv0 = UVData()
    uv0.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))

    old_bl_array = np.copy(uv0.baseline_array)
    red_gps, centers, lengths = uv0.get_redundancies(
        use_antpos=True, include_autos=False, conjugate_bls=True)
    # new and old baseline numbers are not the same (different conjugation)
    assert not np.allclose(uv0.baseline_array, old_bl_array)

    # assert all baselines are in the data (because it's conjugated to match)
    for i, gp in enumerate(red_gps):
        for bl in gp:
            assert bl in uv0.baseline_array

    # conjugate data differently
    uv0.conjugate_bls(convention='ant1<ant2')
    new_red_gps, new_centers, new_lengths, conjs = uv0.get_redundancies(
        use_antpos=True, include_autos=False, include_conjugates=True)
    assert conjs is None

    apos, anums = uv0.get_ENU_antpos()
    new_red_gps, new_centers, new_lengths = uvutils.get_antenna_redundancies(
        anums, apos, include_autos=False)

    # all redundancy info is the same
    assert red_gps == new_red_gps
    assert np.allclose(centers, new_centers)
    assert np.allclose(lengths, new_lengths)


@pytest.mark.filterwarnings("ignore:The default for the `center` keyword")
def test_redundancy_contract_expand():
    # Test that a UVData object can be reduced to one baseline from each redundant
    # group and restored to its original form.
    uv0 = UVData()
    uv0.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))

    # Fails at lower precision because some baselines fall into multiple redundant groups
    tol = 0.02

    # Assign identical data to each redundant group:
    red_gps, centers, lengths = uv0.get_redundancies(tol=tol, use_antpos=True, conjugate_bls=True)
    for i, gp in enumerate(red_gps):
        for bl in gp:
            inds = np.where(bl == uv0.baseline_array)
            uv0.data_array[inds] *= 0
            uv0.data_array[inds] += complex(i)

    uv2 = uv0.compress_by_redundancy(tol=tol, inplace=False)

    # Compare in-place to separated compression.
    uv3 = copy.deepcopy(uv0)
    uv3.compress_by_redundancy(tol=tol)
    assert uv2 == uv3

    # check inflating gets back to the original
    uvtest.checkWarnings(
        uv2.inflate_by_redundancy,
        [tol],
        nwarnings=3,
        category=[DeprecationWarning, DeprecationWarning, UserWarning],
        message=['The default for the `center` keyword',
                 'The default for the `center` keyword',
                 'Missing some redundant groups. Filling in available data.']
    )
    uv2.history = uv0.history
    # Inflation changes the baseline ordering into the order of the redundant groups;
    # reorder bls for comparison.
    uv0.reorder_blts(conj_convention='u>0')
    uv2.reorder_blts(conj_convention='u>0')
    uv2._uvw_array.tols = [0, tol]
    assert uv2 == uv0

    uv3 = uv2.compress_by_redundancy(tol=tol, inplace=False)
    uvtest.checkWarnings(
        uv3.inflate_by_redundancy,
        [tol],
        nwarnings=3,
        category=[DeprecationWarning, DeprecationWarning, UserWarning],
        message=['The default for the `center` keyword',
                 'The default for the `center` keyword',
                 'Missing some redundant groups. Filling in available data.']
    )
    # Confirm that we get the same result looping inflate -> compress -> inflate.
    uv3.reorder_blts(conj_convention='u>0')
    uv2.reorder_blts(conj_convention='u>0')
    uv2.history = uv3.history
    assert uv2 == uv3


@pytest.mark.filterwarnings("ignore:The default for the `center` keyword")
@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_redundancy_contract_expand_nblts_not_nbls_times_ntimes():
    uv0 = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv0.read_uvfits(testfile)

    # check that Nblts != Nbls * Ntimes for this file
    assert uv0.Nblts != uv0.Nbls * uv0.Ntimes

    tol = 1.0

    # Assign identical data to each redundant group:
    red_gps, centers, lengths = uv0.get_redundancies(tol=tol, use_antpos=True, conjugate_bls=True)
    for i, gp in enumerate(red_gps):
        for bl in gp:
            inds = np.where(bl == uv0.baseline_array)
            uv0.data_array[inds, ...] *= 0
            uv0.data_array[inds, ...] += complex(i)

    uv2 = uv0.compress_by_redundancy(tol=tol, inplace=False)

    # check inflating gets back to the original
    uvtest.checkWarnings(uv2.inflate_by_redundancy, [tol],
                         nwarnings=3,
                         category=[DeprecationWarning, DeprecationWarning, UserWarning],
                         message=['The default for the `center` keyword'] * 2
                         + ['Missing some redundant groups. Filling in available data.'])
    uv2.history = uv0.history
    # Inflation changes the baseline ordering into the order of the redundant groups;
    # reorder bls for comparison.
    uv0.reorder_blts()
    uv2.reorder_blts()
    uv2._uvw_array.tols = [0, tol]

    blt_inds = []
    missing_inds = []
    for bl, t in zip(uv0.baseline_array, uv0.time_array):
        if (bl, t) in zip(uv2.baseline_array, uv2.time_array):
            this_ind = np.where((uv2.baseline_array == bl) & (uv2.time_array == t))[0]
            blt_inds.append(this_ind[0])
        else:
            # this blt is missing because of the compress_by_redundancy step
            missing_inds.append(np.where((uv0.baseline_array == bl) & (uv0.time_array == t))[0])

    uv3 = uv2.select(blt_inds=blt_inds, inplace=False)

    orig_inds_keep = list(np.arange(uv0.Nblts))
    for ind in missing_inds:
        orig_inds_keep.remove(ind)
    uv1 = uv0.select(blt_inds=orig_inds_keep, inplace=False)
    assert uv3 == uv1


@pytest.mark.filterwarnings("ignore:The default for the `center` keyword")
def test_compress_redundancy_metadata_only():
    uv0 = UVData()
    uv0.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))
    tol = 0.01

    # Assign identical data to each redundant group:
    red_gps, centers, lengths = uv0.get_redundancies(tol=tol, use_antpos=True, conjugate_bls=True)
    for i, gp in enumerate(red_gps):
        for bl in gp:
            inds = np.where(bl == uv0.baseline_array)
            uv0.data_array[inds] *= 0
            uv0.data_array[inds] += complex(i)

    uv2 = copy.deepcopy(uv0)
    uv2.data_array = None
    uv2.flag_array = None
    uv2.nsample_array = None
    uv2.compress_by_redundancy(tol=tol, inplace=True)

    # check for deprecation warning with metadata_only keyword
    uv1 = copy.deepcopy(uv0)
    uv1.data_array = None
    uv1.flag_array = None
    uv1.nsample_array = None
    uvtest.checkWarnings(uv1.compress_by_redundancy,
                         func_kwargs={'tol': tol, 'inplace': True,
                                      'metadata_only': True},
                         category=DeprecationWarning,
                         message='The metadata_only option has been replaced')
    assert uv1 == uv2

    uv0.compress_by_redundancy(tol=tol)
    uv0.data_array = None
    uv0.flag_array = None
    uv0.nsample_array = None
    assert uv0 == uv2


def test_redundancy_missing_groups():
    # Check that trying to inflate a compressed UVData that is missing redundant
    # groups raises the right warnings and fills in only the data that are available.
    uv0 = UVData()
    uv0.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))
    tol = 0.02
    Nselect = 19

    uv0.compress_by_redundancy(tol=tol)
    fname = 'temp_hera19_missingreds.uvfits'

    bls = np.unique(uv0.baseline_array)[:Nselect]  # first nineteen baseline groups
    uv0.select(bls=[uv0.baseline_to_antnums(bl) for bl in bls])
    uv0.write_uvfits(fname)
    uv1 = UVData()
    uv1.read_uvfits(fname)
    os.remove(fname)
    assert uv0 == uv1  # Check that writing compressed files causes no issues.

    uvtest.checkWarnings(
        uv1.inflate_by_redundancy,
        [tol],
        nwarnings=3,
        category=[DeprecationWarning, DeprecationWarning, UserWarning],
        message=['The default for the `center` keyword',
                 'The default for the `center` keyword',
                 'Missing some redundant groups. Filling in available data.']
    )

    uv2 = uv1.compress_by_redundancy(tol=tol, inplace=False)
    assert np.unique(uv2.baseline_array).size == Nselect


def test_quick_redundant_vs_redundant_test_array():
    """Verify the quick redundancy calc returns the same groups as a known array."""
    uv = UVData()
    uv.read_uvfits(os.path.join(DATA_PATH, 'fewant_randsrc_airybeam_Nsrc100_10MHz.uvfits'))
    uv.select(times=uv.time_array[0])
    uv.unphase_to_drift()
    uvtest.checkWarnings(uv.conjugate_bls, func_kwargs={'convention': 'u>0', 'use_enu': True},
                         message=['The default for the `center`'],
                         nwarnings=1, category=DeprecationWarning)
    tol = 0.05

    # a quick and dirty redundancy calculation
    unique_bls, baseline_inds = np.unique(uv.baseline_array, return_index=True)
    uvw_vectors = np.take(uv.uvw_array, baseline_inds, axis=0)
    uvw_diffs = np.expand_dims(uvw_vectors, axis=0) - np.expand_dims(uvw_vectors, axis=1)
    uvw_diffs = np.linalg.norm(uvw_diffs, axis=2)

    reds = np.where(uvw_diffs < tol, unique_bls, 0)
    reds = np.ma.masked_where(reds == 0, reds)
    groups = []
    for bl in reds:
        grp = []
        grp.extend(bl.compressed())
        for other_bls in reds:
            if set(bl.compressed()).issubset(other_bls.compressed()):
                grp.extend(other_bls.compressed())
        grp = np.unique(grp).tolist()
        groups.append(grp)

    # pad the groups with -1 so they can be uniquified as rows of an array
    pad = len(max(groups, key=len))
    groups = np.array([i + [-1] * (pad - len(i)) for i in groups])
    groups = np.unique(groups, axis=0)
    groups = [[bl for bl in grp if bl != -1] for grp in groups]
    groups.sort(key=len)

    redundant_groups, centers, lengths, conj_inds = uv.get_redundancies(
        tol=tol, include_conjugates=True)
    redundant_groups.sort(key=len)
    assert groups == redundant_groups
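
# The "quick and dirty" redundancy calculation above can be demonstrated without any
# data file. This standalone sketch uses made-up baseline numbers and uvw vectors
# (not values from any pyuvdata test file) to show the mask-and-group idiom:

```python
import numpy as np

# Four hypothetical baselines: the first two are redundant with each other,
# and so are the last two (within tol).
unique_bls = np.array([11, 12, 13, 14])
uvw_vectors = np.array([
    [10.00, 0.0, 0.0],
    [10.01, 0.0, 0.0],
    [0.0, 20.00, 0.0],
    [0.0, 20.02, 0.0],
])
tol = 0.05

# Pairwise separations between baseline vectors (shape (4, 4)).
uvw_diffs = np.linalg.norm(
    uvw_vectors[np.newaxis, :, :] - uvw_vectors[:, np.newaxis, :], axis=2
)

# Row i lists the baseline numbers within tol of baseline i; zeros are masked out.
reds = np.where(uvw_diffs < tol, unique_bls, 0)
reds = np.ma.masked_where(reds == 0, reds)

# Each row's unmasked entries form that baseline's redundant group.
groups = [sorted(row.compressed().tolist()) for row in reds]

# Deduplicate the per-row groups.
unique_groups = sorted({tuple(g) for g in groups})
```

# With these inputs, unique_groups contains one entry per redundant set:
# (11, 12) and (13, 14).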


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_redundancy_finder_when_nblts_not_nbls_times_ntimes():
    """Test the redundancy finder functions when Nblts != Nbls * Ntimes."""
    tol = 1  # meter
    uv = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv.read_uvfits(testfile)
    uvtest.checkWarnings(uv.conjugate_bls, func_kwargs={'convention': 'u>0', 'use_enu': True},
                         message=['The default for the `center`'],
                         nwarnings=1, category=DeprecationWarning)
    # check that Nblts != Nbls * Ntimes
    assert uv.Nblts != uv.Nbls * uv.Ntimes

    # a quick and dirty redundancy calculation
    unique_bls, baseline_inds = np.unique(uv.baseline_array, return_index=True)
    uvw_vectors = np.take(uv.uvw_array, baseline_inds, axis=0)
    uvw_diffs = np.expand_dims(uvw_vectors, axis=0) - np.expand_dims(uvw_vectors, axis=1)
    uvw_diffs = np.linalg.norm(uvw_diffs, axis=2)

    reds = np.where(uvw_diffs < tol, unique_bls, 0)
    reds = np.ma.masked_where(reds == 0, reds)
    groups = []
    for bl in reds:
        grp = []
        grp.extend(bl.compressed())
        for other_bls in reds:
            if set(bl.compressed()).issubset(other_bls.compressed()):
                grp.extend(other_bls.compressed())
        grp = np.unique(grp).tolist()
        groups.append(grp)

    # pad the groups with -1 so they can be uniquified as rows of an array
    pad = len(max(groups, key=len))
    groups = np.array([i + [-1] * (pad - len(i)) for i in groups])
    groups = np.unique(groups, axis=0)
    groups = [[bl for bl in grp if bl != -1] for grp in groups]
    groups.sort(key=len)

    redundant_groups, centers, lengths, conj_inds = uv.get_redundancies(
        tol=tol, include_conjugates=True)
    redundant_groups.sort(key=len)
    assert groups == redundant_groups


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_overlapping_data_add():
    # read in test data
    uv = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv.read_uvfits(testfile)

    # slice into four objects
    blts1 = np.arange(500)
    blts2 = np.arange(500, 1360)
    uv1 = uv.select(polarizations=[-1, -2], blt_inds=blts1, inplace=False)
    uv2 = uv.select(polarizations=[-3, -4], blt_inds=blts1, inplace=False)
    uv3 = uv.select(polarizations=[-1, -2], blt_inds=blts2, inplace=False)
    uv4 = uv.select(polarizations=[-3, -4], blt_inds=blts2, inplace=False)

    # combine and check for equality
    uvfull = uv1 + uv2
    uvfull += uv3
    uvfull += uv4
    extra_history = ("Downselected to specific baseline-times, polarizations using pyuvdata. "
                     "Combined data along polarization axis using pyuvdata. Combined data along "
                     "baseline-time axis using pyuvdata. Overwrote invalid data using pyuvdata.")
    assert uvutils._check_histories(uvfull.history, uv.history + extra_history)
    uvfull.history = uv.history  # make histories match
    assert uv == uvfull

    # check combination not-in-place
    uvfull = uv1 + uv2
    uvfull += uv3
    uvfull = uvfull + uv4
    uvfull.history = uv.history  # make histories match
    assert uv == uvfull

    # test raising an error for adding objects incorrectly (i.e., having the object
    # with data to be overwritten come second)
    uvfull = uv1 + uv2
    uvfull += uv3
    pytest.raises(ValueError, uv4.__iadd__, uvfull)
    pytest.raises(ValueError, uv4.__add__, uvfull)

    # write individual objects out, and make sure that we can read in the list
    uv1_out = os.path.join(DATA_PATH, "uv1.uvfits")
    uv1.write_uvfits(uv1_out)
    uv2_out = os.path.join(DATA_PATH, "uv2.uvfits")
    uv2.write_uvfits(uv2_out)
    uv3_out = os.path.join(DATA_PATH, "uv3.uvfits")
    uv3.write_uvfits(uv3_out)
    uv4_out = os.path.join(DATA_PATH, "uv4.uvfits")
    uv4.write_uvfits(uv4_out)

    uvfull = UVData()
    uvfull.read([uv1_out, uv2_out, uv3_out, uv4_out])
    assert uvutils._check_histories(uvfull.history, uv.history + extra_history)
    uvfull.history = uv.history  # make histories match
    assert uvfull == uv

    # clean up after ourselves
    os.remove(uv1_out)
    os.remove(uv2_out)
    os.remove(uv3_out)
    os.remove(uv4_out)

    return
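
# The slice-and-recombine pattern exercised above (split along the polarization
# axis and the baseline-time axis, then add the pieces back together) can be
# sketched with a plain numpy array. The tile names and shapes below are toy
# values, not pyuvdata objects:

```python
import numpy as np

# Toy 2-D "data array": rows play the role of baseline-times, columns of polarizations.
full = np.arange(24).reshape(6, 4)

# Split into four tiles, mirroring uv1..uv4 above.
tl, tr = full[:3, :2], full[:3, 2:]
bl, br = full[3:, :2], full[3:, 2:]

# Recombine: first along the "polarization" axis, then along the "blt" axis.
top = np.concatenate([tl, tr], axis=1)
bottom = np.concatenate([bl, br], axis=1)
rebuilt = np.concatenate([top, bottom], axis=0)

assert np.array_equal(rebuilt, full)
```

# UVData.__add__ does the equivalent bookkeeping (plus metadata and history
# handling) when the operands share one axis and differ along the other.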


@pytest.mark.filterwarnings("ignore:Altitude is not present in Miriad file")
def test_lsts_from_time_with_only_unique():
    """Test `set_lsts_from_time_array` with only unique values is identical to full array."""
    miriad_file = os.path.join(DATA_PATH, 'zen.2456865.60537.xy.uvcRREAA')
    uv = UVData()
    uv.read_miriad(miriad_file)
    lat, lon, alt = uv.telescope_location_lat_lon_alt_degrees
    # calculate the lsts for all elements in the time array
    full_lsts = uvutils.get_lst_for_time(uv.time_array, lat, lon, alt)
    # use `set_lsts_from_time_array` to set uv.lst_array using only unique values
    uv.set_lsts_from_time_array()
    assert np.array_equal(full_lsts, uv.lst_array)


@pytest.mark.filterwarnings("ignore:Telescope EVLA is not")
def test_copy():
    """Test the copy method"""
    uv_object = UVData()
    testfile = os.path.join(DATA_PATH, 'day2_TDEM0003_10s_norx_1src_1spw.uvfits')
    uv_object.read_uvfits(testfile)

    uv_object_copy = uv_object.copy()
    assert uv_object_copy == uv_object

    uv_object_copy = uv_object.copy(metadata_only=True)
    assert uv_object_copy.metadata_only

    for name in uv_object._data_params:
        setattr(uv_object, name, None)
    assert uv_object_copy == uv_object

    uv_object_copy = uv_object.copy()
    assert uv_object_copy == uv_object

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time(resample_in_time_file):
    """Test the upsample_in_time method"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) / 2.0
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline")

    assert np.allclose(uv_object.integration_time, max_integration_time)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be the same
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0], out_wf[0, 0, 0])

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_with_flags(resample_in_time_file):
    """Test the upsample_in_time method with flags"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) / 2.0

    # add a flag and upsample
    inds01 = uv_object.antpair2ind(0, 1)
    uv_object.flag_array[inds01[0], 0, 0, 0] = True
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline")

    # data and nsamples should be changed as normal, but flagged
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0], out_wf[0, 0, 0])
    out_flags = uv_object.get_flags(0, 1)
    assert np.all(out_flags[:2, 0, 0])
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_noninteger_resampling(resample_in_time_file):
    """Test the upsample_in_time method with a non-integer resampling factor"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) * 0.75
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline")

    # the non-integer factor rounds up to 2, so each sample is split in half
    assert np.allclose(uv_object.integration_time, max_integration_time * 0.5 / 0.75)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be the same
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0], out_wf[0, 0, 0])

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


def test_upsample_in_time_errors(resample_in_time_file):
    """Test errors and warnings raised by upsample_in_time"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # test using a too-small integration time
    max_integration_time = 1e-3 * np.amin(uv_object.integration_time)
    with pytest.raises(ValueError) as cm:
        uv_object.upsample_in_time(max_integration_time)
    assert str(cm.value).startswith("Decreasing the integration time by more than")

    # catch a warning for doing no work
    uv_object2 = uv_object.copy()
    max_integration_time = 2 * np.amax(uv_object.integration_time)
    uvtest.checkWarnings(uv_object.upsample_in_time, [max_integration_time],
                         message="All values in integration_time array are already shorter")
    assert uv_object == uv_object2

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_summing_correlator_mode(resample_in_time_file):
    """Test the upsample_in_time method with summing correlator mode"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) / 2.0
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline",
                               summing_correlator_mode=True)

    assert np.allclose(uv_object.integration_time, max_integration_time)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be half the input
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0] / 2, out_wf[0, 0, 0])

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_summing_correlator_mode_with_flags(resample_in_time_file):
    """Test the upsample_in_time method with summing correlator mode and flags"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # add a flag and upsample
    inds01 = uv_object.antpair2ind(0, 1)
    uv_object.flag_array[inds01[0], 0, 0, 0] = True
    max_integration_time = np.amin(uv_object.integration_time) / 2.0
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline",
                               summing_correlator_mode=True)

    # data and nsamples should be changed as normal, but flagged
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0] / 2, out_wf[0, 0, 0])
    out_flags = uv_object.get_flags(0, 1)
    assert np.all(out_flags[:2, 0, 0])
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_summing_correlator_mode_nonint_resampling(resample_in_time_file):
    """Test the upsample_in_time method with summing correlator mode
    and non-integer resampling
    """
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time by a non-integer resampling factor
    max_integration_time = np.amin(uv_object.integration_time) * 0.75
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline",
                               summing_correlator_mode=True)

    assert np.allclose(uv_object.integration_time, max_integration_time * 0.5 / 0.75)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be half the input
    out_wf = uv_object.get_data(0, 1)
    assert np.isclose(init_wf[0, 0, 0] / 2, out_wf[0, 0, 0])

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_partial_upsample_in_time(resample_in_time_file):
    """Test the upsample_in_time method with non-uniform upsampling"""
    uv_object = resample_in_time_file
    uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))

    # change a whole baseline's integration time
    bl_inds = uv_object.antpair2ind(0, 1)
    uv_object.integration_time[bl_inds] = uv_object.integration_time[0] / 2.0

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_wf_01 = uv_object.get_data(0, 1)
    init_wf_02 = uv_object.get_data(0, 2)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns_01 = uv_object.get_nsamples(0, 1)
    init_ns_02 = uv_object.get_nsamples(0, 2)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time)
    uv_object.upsample_in_time(max_integration_time, blt_order="baseline")

    assert np.allclose(uv_object.integration_time, max_integration_time)
    # output data should be the same
    out_wf_01 = uv_object.get_data(0, 1)
    out_wf_02 = uv_object.get_data(0, 2)
    assert np.all(init_wf_01 == out_wf_01)
    assert np.isclose(init_wf_02[0, 0, 0], out_wf_02[0, 0, 0])
    assert init_wf_02.size * 2 == out_wf_02.size

    # this should be true because there are no flags
    out_ns_01 = uv_object.get_nsamples(0, 1)
    out_ns_02 = uv_object.get_nsamples(0, 2)
    assert np.allclose(out_ns_01, init_ns_01)
    assert np.isclose(init_ns_02[0, 0, 0], out_ns_02[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_drift(resample_in_time_file):
    """Test the upsample_in_time method on drift mode data"""
    uv_object = resample_in_time_file

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) / 2.0
    uv_object.upsample_in_time(
        max_integration_time, blt_order="baseline", allow_drift=True
    )

    assert np.allclose(uv_object.integration_time, max_integration_time)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be the same
    out_wf = uv_object.get_data(0, 1)
    # we need a "large" tolerance given the "large" data
    new_tol = 1e-2 * np.amax(np.abs(uv_object.data_array))
    assert np.isclose(init_wf[0, 0, 0], out_wf[0, 0, 0], atol=new_tol)

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return


@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_in_time_drift_no_phasing(resample_in_time_file):
    """Test the upsample_in_time method on drift mode data without phasing"""
    uv_object = resample_in_time_file

    # reorder to make sure we get the right value later
    uv_object.reorder_blts(order="baseline")

    # save some values for later
    init_data_size = uv_object.data_array.size
    init_wf = uv_object.get_data(0, 1)
    # check that there are no flags
    assert np.nonzero(uv_object.flag_array)[0].size == 0
    init_ns = uv_object.get_nsamples(0, 1)

    # change the target integration time
    max_integration_time = np.amin(uv_object.integration_time) / 2.0
    # upsample with allow_drift=False
    uv_object.upsample_in_time(
        max_integration_time, blt_order="baseline", allow_drift=False
    )

    assert np.allclose(uv_object.integration_time, max_integration_time)
    # we should double the size of the data arrays
    assert uv_object.data_array.size == 2 * init_data_size
    # output data should be similar, but somewhat different because of the phasing
    out_wf = uv_object.get_data(0, 1)
    # we need a "large" tolerance given the "large" data
    new_tol = 1e-2 * np.amax(np.abs(uv_object.data_array))
    assert np.isclose(init_wf[0, 0, 0], out_wf[0, 0, 0], atol=new_tol)

    # this should be true because there are no flags
    out_ns = uv_object.get_nsamples(0, 1)
    assert np.isclose(init_ns[0, 0, 0], out_ns[0, 0, 0])

    return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time(resample_in_time_file):
"""Test the downsample_in_time method"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time")
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
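The unflagged downsample expectation above (each output bin is the plain mean of its two inputs, for data and nsamples alike) can be written as a minimal standalone sketch; `_pairwise_average` is a hypothetical helper, not the library routine.

```python
import numpy as np

def _pairwise_average(arr):
    """Average consecutive pairs along axis 0, halving its length."""
    assert arr.shape[0] % 2 == 0
    return (arr[0::2] + arr[1::2]) / 2.0

data = np.arange(8, dtype=float)  # stand-in for 8 integrations
out = _pairwise_average(data)
# out[0] is (data[0] + data[1]) / 2 == 0.5, and out is half the length
```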
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_partial_flags(resample_in_time_file):
"""Test the downsample_in_time method with partial flagging"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
# add flags and try again. With one of the 2 inputs flagged, the data should
# just be the unflagged value and nsample should be half the unflagged one
# and the output should not be flagged.
inds01 = uv_object.antpair2ind(0, 1)
uv_object.flag_array[inds01[0], 0, 0, 0] = True
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time")
out_wf = uv_object.get_data(0, 1)
assert np.isclose(init_wf[1, 0, 0], out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that there are still no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_totally_flagged(resample_in_time_file):
"""Test the downsample_in_time method with totally flagged integrations"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
# add more flags and try again. When all the input points are flagged,
# data and nsample should have the same results as no flags but the output
# should be flagged
inds01 = uv_object.antpair2ind(0, 1)
uv_object.flag_array[inds01[:2], 0, 0, 0] = True
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time")
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that the new sample is flagged
out_flag = uv_object.get_flags(0, 1)
assert out_flag[0, 0, 0]
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_uneven_samples(resample_in_time_file):
"""Test the downsample_in_time method with uneven downsampling"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
# test again with a downsample factor that doesn't go evenly into the number of samples
min_integration_time = original_int_time * 3.0
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time", keep_ragged=True)
# Only some baselines have a number of times divisible by 3, so the output
# integration time is not uniform. For this test case, we'll have *either* the
# original integration time (ragged leftovers) or three times it (full groups).
assert np.all(
np.logical_or(
np.isclose(uv_object.integration_time, original_int_time),
np.isclose(uv_object.integration_time, min_integration_time)
)
)
# as usual, the new data should be the average of the input data (3 points now)
out_wf = uv_object.get_data(0, 1)
assert np.isclose(np.mean(init_wf[0:3, 0, 0]), out_wf[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_uneven_samples_discard_ragged(resample_in_time_file):
"""Test the downsample_in_time method with uneven downsampling and
discarding the ragged samples.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
# test again with a downsample factor that doesn't go evenly into the number of samples
min_integration_time = original_int_time * 3.0
# test again with keep_ragged=False
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time", keep_ragged=False)
# make sure integration time is correct
# in this case, all integration times should be the target one
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
# as usual, the new data should be the average of the input data
out_wf = uv_object.get_data(0, 1)
assert np.isclose(np.mean(init_wf[0:3, 0, 0]), out_wf[0, 0, 0])
return
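The keep_ragged behavior exercised by the last two tests can be sketched with plain index grouping (an illustrative model, not pyuvdata internals): a factor-3 downsample of 10 times leaves a one-sample remainder that is either kept at its shorter length or discarded.

```python
def _group_times(n_times, factor, keep_ragged):
    """Partition time indices into groups of `factor`; the trailing ragged
    group is kept or dropped depending on keep_ragged."""
    groups = [list(range(i, min(i + factor, n_times)))
              for i in range(0, n_times, factor)]
    if not keep_ragged and len(groups[-1]) < factor:
        groups = groups[:-1]
    return groups

kept = _group_times(10, 3, keep_ragged=True)      # trailing [9] survives
dropped = _group_times(10, 3, keep_ragged=False)  # trailing [9] is discarded
```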
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_summing_correlator_mode(resample_in_time_file):
"""Test the downsample_in_time method with summing correlator mode"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time", summing_correlator_mode=True)
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be the sum
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]), out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
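The contrast this test checks, namely that summing_correlator_mode sums the data but still averages nsamples, can be captured in a few lines. `_combine` is a hypothetical sketch of the assumed semantics, not the library code:

```python
import numpy as np

def _combine(data, nsample, summing_correlator_mode=False):
    """Collapse one group of integrations along axis 0."""
    if summing_correlator_mode:
        out_data = data.sum(axis=0)
    else:
        out_data = data.mean(axis=0)
    return out_data, nsample.mean(axis=0)

d = np.array([2.0, 4.0])
n = np.array([1.0, 1.0])
avg, _ = _combine(d, n)                                       # data averaged: 3.0
total, out_ns = _combine(d, n, summing_correlator_mode=True)  # data summed: 6.0
```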
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_summing_correlator_mode_partial_flags(
resample_in_time_file
):
"""Test the downsample_in_time method with summing correlator mode and
partial flags
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
# add flags and try again. With one of the 2 inputs flagged, the data should
# just be the unflagged value and nsample should be half the unflagged one
# and the output should not be flagged.
inds01 = uv_object.antpair2ind(0, 1)
uv_object.flag_array[inds01[0], 0, 0, 0] = True
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time", summing_correlator_mode=True)
out_wf = uv_object.get_data(0, 1)
assert np.isclose(init_wf[1, 0, 0], out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that there are still no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_summing_correlator_mode_totally_flagged(
resample_in_time_file
):
"""Test the downsample_in_time method with summing correlator mode and
totally flagged integrations.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
# add more flags and try again. When all the input points are flagged,
# data and nsample should have the same results as no flags but the output
# should be flagged
inds01 = uv_object.antpair2ind(0, 1)
uv_object.flag_array[inds01[:2], 0, 0, 0] = True
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
minor_order="time", summing_correlator_mode=True)
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]), out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that the new sample is flagged
out_flag = uv_object.get_flags(0, 1)
assert out_flag[0, 0, 0]
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_summing_correlator_mode_uneven_samples(
resample_in_time_file
):
"""Test the downsample_in_time method with summing correlator mode and
uneven samples.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# test again with a downsample factor that doesn't go evenly into the number of samples
min_integration_time = original_int_time * 3.0
uv_object.downsample_in_time(
min_integration_time,
blt_order="baseline",
minor_order="time",
keep_ragged=True,
summing_correlator_mode=True,
)
# Only some baselines have a number of times divisible by 3, so the output
# integration time is not uniform. For this test case, we'll have *either* the
# original integration time (ragged leftovers) or three times it (full groups).
assert np.all(
np.logical_or(
np.isclose(uv_object.integration_time, original_int_time),
np.isclose(uv_object.integration_time, min_integration_time)
)
)
# in summing mode, the new data should be the sum of the input data (3 points now)
out_wf = uv_object.get_data(0, 1)
assert np.isclose(np.sum(init_wf[0:3, 0, 0]), out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose(np.mean(init_ns[0:3, 0, 0]), out_ns[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_summing_correlator_mode_uneven_samples_drop_ragged(
resample_in_time_file
):
"""Test the downsample_in_time method with summing correlator mode and
uneven samples, dropping ragged ones.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# test again with keep_ragged=False
min_integration_time = original_int_time * 3.0
uv_object.downsample_in_time(
min_integration_time,
blt_order="baseline",
minor_order="time",
keep_ragged=False,
summing_correlator_mode=True,
)
# make sure integration time is correct
# in this case, all integration times should be the target one
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
# in summing mode, the new data should be the sum of the input data
out_wf = uv_object.get_data(0, 1)
assert np.isclose(np.sum(init_wf[0:3, 0, 0]), out_wf[0, 0, 0])
# make sure nsamples is correct
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose(np.mean(init_ns[0:3, 0, 0]), out_ns[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_partial_downsample_in_time(resample_in_time_file):
"""Test the downsample_in_time method without uniform downsampling"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# change a whole baseline's integration time
bl_inds = uv_object.antpair2ind(0, 1)
uv_object.integration_time[bl_inds] = uv_object.integration_time[0] * 2.0
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline")
# save some values for later
init_wf_01 = uv_object.get_data(0, 1)
init_wf_02 = uv_object.get_data(0, 2)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns_01 = uv_object.get_nsamples(0, 1)
init_ns_02 = uv_object.get_nsamples(0, 2)
# change the target integration time
min_integration_time = np.amax(uv_object.integration_time)
uv_object.downsample_in_time(min_integration_time, blt_order="baseline")
# Should have all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
# baseline (0, 1) data should be unchanged; (0, 2) should be averaged
out_wf_01 = uv_object.get_data(0, 1)
out_wf_02 = uv_object.get_data(0, 2)
assert np.all(init_wf_01 == out_wf_01)
assert np.isclose((init_wf_02[0, 0, 0] + init_wf_02[1, 0, 0]) / 2.,
out_wf_02[0, 0, 0])
# this should be true because there are no flags
out_ns_01 = uv_object.get_nsamples(0, 1)
out_ns_02 = uv_object.get_nsamples(0, 2)
assert np.allclose(out_ns_01, init_ns_01)
assert np.isclose((init_ns_02[0, 0, 0] + init_ns_02[1, 0, 0]) / 2.0,
out_ns_02[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_drift(resample_in_time_file):
"""Test the downsample_in_time method on drift mode data"""
uv_object = resample_in_time_file
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
uv_object.downsample_in_time(min_integration_time, blt_order="baseline",
allow_drift=True)
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_downsample_in_time_drift_no_phasing(resample_in_time_file):
"""Test the downsample_in_time method on drift mode data without phasing"""
uv_object = resample_in_time_file
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
original_int_time = np.amax(uv_object.integration_time)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the target integration time
min_integration_time = original_int_time * 2.0
# try again with allow_drift=False
uv_object.downsample_in_time(
min_integration_time, blt_order="baseline", allow_drift=False,
)
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be similar to the average, but somewhat different
# because of the phasing
out_wf = uv_object.get_data(0, 1)
new_tol = 5e-2 * np.amax(np.abs(uv_object.data_array))
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2.,
out_wf[0, 0, 0], atol=new_tol)
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
return
def test_downsample_in_time_errors(resample_in_time_file):
"""Test various errors and warnings are raised"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# raise an error for a too-large integration time
max_integration_time = 1e3 * np.amax(uv_object.integration_time)
with pytest.raises(ValueError) as cm:
uv_object.downsample_in_time(max_integration_time)
assert str(cm.value).startswith("Increasing the integration time by more than")
# catch a warning for doing no work
uv_object2 = uv_object.copy()
max_integration_time = 0.5 * np.amin(uv_object.integration_time)
uvtest.checkWarnings(uv_object.downsample_in_time, [max_integration_time],
message="All values in the integration_time array are "
"already longer")
assert uv_object == uv_object2
del uv_object2
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# make a gap in the times to check a warning about that
inds01 = uv_object.antpair2ind(0, 1)
initial_int_time = uv_object.integration_time[inds01[0]]
# time array is in jd, integration time is in sec
uv_object.time_array[inds01[-1]] += initial_int_time / (24 * 3600)
uv_object.Ntimes += 1
min_integration_time = 2 * np.amin(uv_object.integration_time)
uvtest.checkWarnings(uv_object.downsample_in_time, [min_integration_time],
message=["There is a gap in the times of baseline (0, 1)"])
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
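The unit bookkeeping used when nudging time_array above is worth making explicit: time_array is in Julian days while integration_time is in seconds, so shifting a timestamp by one integration divides by 86400. Plain arithmetic, no pyuvdata involved:

```python
SECONDS_PER_DAY = 24 * 3600  # 86400

int_time_sec = 2.0
int_time_days = int_time_sec / SECONDS_PER_DAY

# shifting a Julian-date timestamp forward by one integration
t0_jd = 2457000.0
t1_jd = t0_jd + int_time_days
```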
def test_downsample_in_time_int_time_mismatch_warning(resample_in_time_file):
"""Test warning in downsample_in_time about mismatch between integration
times and the time between integrations.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_data_size = uv_object.data_array.size
init_wf = uv_object.get_data(0, 1)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# change the integration times to catch a warning about integration times
# not matching the time delta between integrations
uv_object.integration_time *= 0.5
min_integration_time = 2 * np.amin(uv_object.integration_time)
uvtest.checkWarnings(uv_object.downsample_in_time, [min_integration_time],
message=["The time difference between integrations is "
"not the same"],
nwarnings=10)
# Should have half the size of the data array and all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
assert uv_object.data_array.size * 2 == init_data_size
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
def test_downsample_in_time_varying_integration_time(resample_in_time_file):
"""Test downsample_in_time handling of file with integration time changing
within a baseline
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# test handling (& warnings) with varying integration time in a baseline
# First, change both integration time & time array to match
inds01 = uv_object.antpair2ind(0, 1)
initial_int_time = uv_object.integration_time[inds01[0]]
# time array is in jd, integration time is in sec
uv_object.time_array[inds01[-2]] += (initial_int_time / 2) / (24 * 3600)
uv_object.time_array[inds01[-1]] += (3 * initial_int_time / 2) / (24 * 3600)
uv_object.integration_time[inds01[-2:]] += initial_int_time
uv_object.Ntimes = np.unique(uv_object.time_array).size
min_integration_time = 2 * np.amin(uv_object.integration_time)
uvtest.checkWarnings(uv_object.downsample_in_time, [min_integration_time],
nwarnings=0)
# Should have all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
def test_downsample_in_time_varying_integration_time_warning(resample_in_time_file):
"""Test downsample_in_time handling of file with integration time changing
within a baseline, but without adjusting the time_array so there is a mismatch.
"""
uv_object = resample_in_time_file
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
# save some values for later
init_wf = uv_object.get_data(0, 1)
# check that there are no flags
assert np.nonzero(uv_object.flag_array)[0].size == 0
init_ns = uv_object.get_nsamples(0, 1)
# Next, change just integration time, so time array doesn't match
inds01 = uv_object.antpair2ind(0, 1)
initial_int_time = uv_object.integration_time[inds01[0]]
uv_object.integration_time[inds01[-2:]] += initial_int_time
min_integration_time = 2 * np.amin(uv_object.integration_time)
uvtest.checkWarnings(uv_object.downsample_in_time, [min_integration_time],
message="The time difference between integrations is "
"different than")
# Should have all the new integration time
# (for this file with 20 integrations and a factor of 2 downsampling)
assert np.all(np.isclose(uv_object.integration_time, min_integration_time))
# output data should be the average
out_wf = uv_object.get_data(0, 1)
assert np.isclose((init_wf[0, 0, 0] + init_wf[1, 0, 0]) / 2., out_wf[0, 0, 0])
# this should be true because there are no flags
out_ns = uv_object.get_nsamples(0, 1)
assert np.isclose((init_ns[0, 0, 0] + init_ns[1, 0, 0]) / 2., out_ns[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
@pytest.mark.filterwarnings("ignore:Data will be unphased and rephased")
def test_upsample_downsample_in_time(resample_in_time_file):
"""Test round trip works"""
uv_object = resample_in_time_file
# set uvws from antenna positions so they'll agree later.
# the fact that this is required is a bit concerning, it means that
# our calculated uvws from the antenna positions do not match what's in the file
uv_object.set_uvws_from_antenna_positions()
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
uv_object2 = uv_object.copy()
max_integration_time = np.amin(uv_object.integration_time) / 2.0
uv_object.upsample_in_time(max_integration_time, blt_order="baseline")
assert np.amax(uv_object.integration_time) <= max_integration_time
new_Nblts = uv_object.Nblts
# check that calling upsample again with the same max_integration_time
# gives warning and does nothing
uvtest.checkWarnings(uv_object.upsample_in_time, func_args=[max_integration_time],
func_kwargs={'blt_order': "baseline"},
message='All values in the integration_time array are '
'already longer')
assert uv_object.Nblts == new_Nblts
# check that calling upsample again with almost the same max_integration_time
# gives warning and does nothing
small_number = 0.9 * uv_object._integration_time.tols[1]
uvtest.checkWarnings(uv_object.upsample_in_time,
func_args=[max_integration_time - small_number],
func_kwargs={'blt_order': "baseline"},
message='All values in the integration_time array are '
'already longer')
assert uv_object.Nblts == new_Nblts
uv_object.downsample_in_time(np.amin(uv_object2.integration_time), blt_order="baseline")
# increase tolerance on LST if iers.conf.auto_max_age is set to None, as we
# do in testing if the iers url is down. See conftest.py for more info.
if iers.conf.auto_max_age is None:
uv_object._lst_array.tols = (0, 1e-4)
# make sure that history is correct
assert "Upsampled data to 0.939524 second integration time using pyuvdata." in uv_object.history
assert "Downsampled data to 1.879048 second integration time using pyuvdata." in uv_object.history
# overwrite history and check for equality
uv_object.history = uv_object2.history
assert uv_object == uv_object2
# check that calling downsample again with the same min_integration_time
# gives warning and does nothing
uvtest.checkWarnings(uv_object.downsample_in_time,
func_args=[np.amin(uv_object2.integration_time)],
func_kwargs={'blt_order': "baseline"},
message='All values in the integration_time array are '
'already shorter')
assert uv_object.Nblts == uv_object2.Nblts
# check that calling downsample again with almost the same min_integration_time
# gives warning and does nothing
uvtest.checkWarnings(uv_object.downsample_in_time,
func_args=[np.amin(uv_object2.integration_time) + small_number],
func_kwargs={'blt_order': "baseline"},
message='All values in the integration_time array are '
'already shorter')
assert uv_object.Nblts == uv_object2.Nblts
return
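The round-trip invariant this test verifies has a simple toy analogue (a sketch under the assumption that upsampling duplicates samples and downsampling averages them back; not the library's uvw/phase handling): the composition is the identity on noise-free data.

```python
import numpy as np

def _upsample(arr, factor):
    """Duplicate each sample `factor` times."""
    return np.repeat(arr, factor)

def _downsample(arr, factor):
    """Average consecutive runs of `factor` samples."""
    return arr.reshape(-1, factor).mean(axis=1)

orig = np.array([1.0, 2.0, 3.0])
round_tripped = _downsample(_upsample(orig, 2), 2)
# round_tripped equals orig exactly
```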
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
@pytest.mark.filterwarnings("ignore:Data will be unphased and rephased")
@pytest.mark.filterwarnings("ignore:There is a gap in the times of baseline")
def test_upsample_downsample_in_time_odd_resample(resample_in_time_file):
"""Test round trip works with odd resampling"""
uv_object = resample_in_time_file
# set uvws from antenna positions so they'll agree later.
# the fact that this is required is a bit concerning, it means that
# our calculated uvws from the antenna positions do not match what's in the file
uv_object.set_uvws_from_antenna_positions()
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
uv_object2 = uv_object.copy()
# try again with a resampling factor of 3 (test odd numbers)
max_integration_time = np.amin(uv_object.integration_time) / 3.0
uv_object.upsample_in_time(max_integration_time, blt_order="baseline")
assert np.amax(uv_object.integration_time) <= max_integration_time
uv_object.downsample_in_time(np.amin(uv_object2.integration_time), blt_order="baseline")
# increase tolerance on LST if iers.conf.auto_max_age is set to None, as we
# do in testing if the iers url is down. See conftest.py for more info.
if iers.conf.auto_max_age is None:
uv_object._lst_array.tols = (0, 1e-4)
# make sure that history is correct
assert "Upsampled data to 0.626349 second integration time using pyuvdata." in uv_object.history
assert "Downsampled data to 1.879048 second integration time using pyuvdata." in uv_object.history
# overwrite history and check for equality
uv_object.history = uv_object2.history
assert uv_object == uv_object2
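The 0.626349 s figure in the history check is just the original 1.879048 s integration split three ways; upsampling by an odd factor repeats each integration that many times. A minimal arithmetic sketch (illustrative only, not pyuvdata's implementation):

```python
import numpy as np

factor = 3
t_int = 1.879048                 # original integration time in seconds
upsampled_t = t_int / factor     # matches the "0.626349 second" history entry

data = np.array([1.0, 2.0])
up = np.repeat(data, factor)     # each integration split into 3 equal pieces
```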
@pytest.mark.filterwarnings("ignore:The xyz array in ENU_from_ECEF")
@pytest.mark.filterwarnings("ignore:The enu array in ECEF_from_ENU")
def test_upsample_downsample_in_time_metadata_only(resample_in_time_file):
"""Test round trip works with metadata-only objects"""
uv_object = resample_in_time_file
# drop the data arrays
uv_object.data_array = None
uv_object.flag_array = None
uv_object.nsample_array = None
# set uvws from antenna positions so they'll agree later.
    # the fact that this is required is a bit concerning: it means that
    # our calculated uvws from the antenna positions do not match what's in the file
uv_object.set_uvws_from_antenna_positions()
uv_object.phase_to_time(Time(uv_object.time_array[0], format="jd"))
# reorder to make sure we get the right value later
uv_object.reorder_blts(order="baseline", minor_order="time")
uv_object2 = uv_object.copy()
max_integration_time = np.amin(uv_object.integration_time) / 2.0
uv_object.upsample_in_time(max_integration_time, blt_order="baseline")
assert np.amax(uv_object.integration_time) <= max_integration_time
uv_object.downsample_in_time(np.amin(uv_object2.integration_time), blt_order="baseline")
# increase tolerance on LST if iers.conf.auto_max_age is set to None, as we
# do in testing if the iers url is down. See conftest.py for more info.
if iers.conf.auto_max_age is None:
uv_object._lst_array.tols = (0, 1e-4)
# make sure that history is correct
assert "Upsampled data to 0.939524 second integration time using pyuvdata." in uv_object.history
assert "Downsampled data to 1.879048 second integration time using pyuvdata." in uv_object.history
# overwrite history and check for equality
uv_object.history = uv_object2.history
assert uv_object == uv_object2
@pytest.mark.filterwarnings("ignore:Telescope mock-HERA is not in known_telescopes")
@pytest.mark.filterwarnings("ignore:There is a gap in the times of baseline")
def test_resample_in_time(bda_test_file):
"""Test the resample_in_time method"""
# Note this file has slight variations in the delta t between integrations
# that causes our gap test to issue a warning, but the variations are small
# We aren't worried about them, so we filter those warnings
uv_object = bda_test_file
# save some initial info
# 2s integration time
init_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
init_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
init_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
init_data_136_137 = uv_object.get_data((136, 137))
uv_object.resample_in_time(8)
# Should have all the target integration time
assert np.all(np.isclose(uv_object.integration_time, 8))
# 2s integration time
out_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
out_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
out_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
out_data_136_137 = uv_object.get_data((136, 137))
# check array sizes make sense
assert out_data_1_136.size * 4 == init_data_1_136.size
assert out_data_1_137.size * 2 == init_data_1_137.size
assert out_data_1_138.size == init_data_1_138.size
assert out_data_136_137.size / 2 == init_data_136_137.size
# check some values
assert np.isclose(np.mean(init_data_1_136[0:4, 0, 0]), out_data_1_136[0, 0, 0])
assert np.isclose(np.mean(init_data_1_137[0:2, 0, 0]), out_data_1_137[0, 0, 0])
assert np.isclose(init_data_1_138[0, 0, 0], out_data_1_138[0, 0, 0])
assert np.isclose(init_data_136_137[0, 0, 0], out_data_136_137[0, 0, 0])
return
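The size and mean checks above follow from plain block averaging: resampling 2 s data to 8 s averages four consecutive integrations per baseline. A minimal NumPy sketch of that downsampling step (a hypothetical helper, not pyuvdata's actual method):

```python
import numpy as np

def block_average(data, factor):
    """Downsample along the time axis by averaging `factor` consecutive samples."""
    ntimes = data.shape[0] // factor          # drop any ragged tail
    trimmed = data[:ntimes * factor]
    return trimmed.reshape(ntimes, factor, *data.shape[1:]).mean(axis=1)

data = np.arange(8.0).reshape(8, 1)           # eight 2 s integrations
avg = block_average(data, 4)                  # two 8 s integrations
```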
@pytest.mark.filterwarnings("ignore:Telescope mock-HERA is not in known_telescopes")
@pytest.mark.filterwarnings("ignore:There is a gap in the times of baseline")
def test_resample_in_time_downsample_only(bda_test_file):
"""Test resample_in_time with downsampling only"""
# Note this file has slight variations in the delta t between integrations
# that causes our gap test to issue a warning, but the variations are small
# We aren't worried about them, so we filter those warnings
uv_object = bda_test_file
# save some initial info
# 2s integration time
init_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
init_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
init_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
init_data_136_137 = uv_object.get_data((136, 137))
# resample again, with only_downsample set
uv_object.resample_in_time(8, only_downsample=True)
# Should have all less than or equal to the target integration time
assert np.all(
np.logical_or(
np.isclose(uv_object.integration_time, 8),
np.isclose(uv_object.integration_time, 16)
)
)
# 2s integration time
out_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
out_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
out_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
out_data_136_137 = uv_object.get_data((136, 137))
# check array sizes make sense
assert out_data_1_136.size * 4 == init_data_1_136.size
assert out_data_1_137.size * 2 == init_data_1_137.size
assert out_data_1_138.size == init_data_1_138.size
assert out_data_136_137.size == init_data_136_137.size
# check some values
assert np.isclose(np.mean(init_data_1_136[0:4, 0, 0]), out_data_1_136[0, 0, 0])
assert np.isclose(np.mean(init_data_1_137[0:2, 0, 0]), out_data_1_137[0, 0, 0])
assert np.isclose(init_data_1_138[0, 0, 0], out_data_1_138[0, 0, 0])
assert np.isclose(init_data_136_137[0, 0, 0], out_data_136_137[0, 0, 0])
return
@pytest.mark.filterwarnings("ignore:Telescope mock-HERA is not in known_telescopes")
@pytest.mark.filterwarnings("ignore:There is a gap in the times of baseline")
def test_resample_in_time_only_upsample(bda_test_file):
"""Test resample_in_time with only upsampling"""
# Note this file has slight variations in the delta t between integrations
# that causes our gap test to issue a warning, but the variations are small
# We aren't worried about them, so we filter those warnings
uv_object = bda_test_file
# save some initial info
# 2s integration time
init_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
init_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
init_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
init_data_136_137 = uv_object.get_data((136, 137))
# again, with only_upsample set
uv_object.resample_in_time(8, only_upsample=True)
# Should have all greater than or equal to the target integration time
assert np.all(
np.logical_or(
np.logical_or(
np.isclose(uv_object.integration_time, 2.),
np.isclose(uv_object.integration_time, 4.)),
np.isclose(uv_object.integration_time, 8.)
)
)
# 2s integration time
out_data_1_136 = uv_object.get_data((1, 136))
# 4s integration time
out_data_1_137 = uv_object.get_data((1, 137))
# 8s integration time
out_data_1_138 = uv_object.get_data((1, 138))
# 16s integration time
out_data_136_137 = uv_object.get_data((136, 137))
# check array sizes make sense
assert out_data_1_136.size == init_data_1_136.size
assert out_data_1_137.size == init_data_1_137.size
assert out_data_1_138.size == init_data_1_138.size
assert out_data_136_137.size / 2 == init_data_136_137.size
# check some values
assert np.isclose(init_data_1_136[0, 0, 0], out_data_1_136[0, 0, 0])
assert np.isclose(init_data_1_137[0, 0, 0], out_data_1_137[0, 0, 0])
assert np.isclose(init_data_1_138[0, 0, 0], out_data_1_138[0, 0, 0])
assert np.isclose(init_data_136_137[0, 0, 0], out_data_136_137[0, 0, 0])
return
def test_remove_eq_coeffs_divide(uvdata_data):
"""Test using the remove_eq_coeffs method with divide convention."""
# give eq_coeffs to the object
    eq_coeffs = np.empty(
        (uvdata_data.uv_object.Nants_telescope, uvdata_data.uv_object.Nfreqs),
        dtype=float,  # np.float is deprecated in favor of the builtin float
    )
for i, ant in enumerate(uvdata_data.uv_object.antenna_numbers):
eq_coeffs[i, :] = ant + 1
uvdata_data.uv_object.eq_coeffs = eq_coeffs
uvdata_data.uv_object.eq_coeffs_convention = "divide"
uvdata_data.uv_object.remove_eq_coeffs()
# make sure the right coefficients were removed
for key in uvdata_data.uv_object.get_antpairs():
eq1 = key[0] + 1
eq2 = key[1] + 1
blt_inds = uvdata_data.uv_object.antpair2ind(key)
norm_data = uvdata_data.uv_object.data_array[blt_inds, 0, :, :]
unnorm_data = uvdata_data.uv_object2.data_array[blt_inds, 0, :, :]
assert np.allclose(norm_data, unnorm_data / (eq1 * eq2))
return
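The expected values here come from per-antenna gain removal: under the divide convention, the visibility for antenna pair (i, j) is divided by the product of the two antennas' coefficients. A toy sketch of that arithmetic (illustrative names, not the pyuvdata API):

```python
import numpy as np

coeffs = np.array([2.0, 3.0])   # per-antenna eq_coeffs (frequency-independent here)
raw = np.array([12.0])          # visibility for antenna pair (0, 1)

# divide convention: remove the product of the two antennas' coefficients
corrected = raw / (coeffs[0] * coeffs[1])
```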
def test_remove_eq_coeffs_multiply(uvdata_data):
"""Test using the remove_eq_coeffs method with multiply convention."""
# give eq_coeffs to the object
    eq_coeffs = np.empty(
        (uvdata_data.uv_object.Nants_telescope, uvdata_data.uv_object.Nfreqs),
        dtype=float,  # np.float is deprecated in favor of the builtin float
    )
for i, ant in enumerate(uvdata_data.uv_object.antenna_numbers):
eq_coeffs[i, :] = ant + 1
uvdata_data.uv_object.eq_coeffs = eq_coeffs
uvdata_data.uv_object.eq_coeffs_convention = "multiply"
uvdata_data.uv_object.remove_eq_coeffs()
# make sure the right coefficients were removed
for key in uvdata_data.uv_object.get_antpairs():
eq1 = key[0] + 1
eq2 = key[1] + 1
blt_inds = uvdata_data.uv_object.antpair2ind(key)
norm_data = uvdata_data.uv_object.data_array[blt_inds, 0, :, :]
unnorm_data = uvdata_data.uv_object2.data_array[blt_inds, 0, :, :]
assert np.allclose(norm_data, unnorm_data * (eq1 * eq2))
return
def test_remove_eq_coeffs_errors(uvdata_data):
"""Test errors raised by remove_eq_coeffs method."""
# raise error when eq_coeffs are not defined
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object.remove_eq_coeffs()
assert str(cm.value).startswith("The eq_coeffs attribute must be defined")
# raise error when eq_coeffs are defined but not eq_coeffs_convention
uvdata_data.uv_object.eq_coeffs = np.ones(
(uvdata_data.uv_object.Nants_telescope, uvdata_data.uv_object.Nfreqs)
)
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object.remove_eq_coeffs()
assert str(cm.value).startswith("The eq_coeffs_convention attribute must be defined")
# raise error when convention is not a valid choice
uvdata_data.uv_object.eq_coeffs_convention = "foo"
with pytest.raises(ValueError) as cm:
uvdata_data.uv_object.remove_eq_coeffs()
assert str(cm.value).startswith("Got unknown convention foo. Must be one of")
return
from django.db import models
# Create your models here.
class TCydept(models.Model):
id = models.IntegerField(default=1, db_column='ID', primary_key=True)
id_parent = models.IntegerField(
default=1, null=True, db_column='ID_Parent')
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32)
timeupdate = models.IntegerField(
default=1, null=True, db_column='TimeUpdate')
idmanager = models.IntegerField(
default=1, null=True, db_column='IdManager')
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
    back_up1 = models.CharField(
        default='1', null=True, max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cydept'
class TCyuser(models.Model):
id = models.IntegerField(default=1, db_column='ID', primary_key=True)
nocard = models.CharField(default='1', null=True,
db_column='Nocard', max_length=32)
nouser = models.CharField(default='1', null=True,
db_column='NoUser', max_length=32)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32)
psw = models.CharField(default='1', null=True,
db_column='Psw', max_length=32)
    deptid = models.ForeignKey(
        TCydept, to_field="id", on_delete=models.CASCADE, db_column='Deptid', related_name='related_to_department')
    sex = models.SmallIntegerField(default=1, null=True, db_column='Sex')
    # sex may only be 0 (female) or 1 (male)
    attr = models.SmallIntegerField(default=1, null=True, db_column='Attr')
    # attr: user-management permission; 0 = normal user, 1 = administrator,
    # 2 = super administrator (may manage administrators)
    attrjf = models.SmallIntegerField(default=1, null=True, db_column='AttrJf')
    # attrjf: machine-room permission; 0 = normal user, 1 = administrator,
    # 2 = super administrator (may manage administrators)
    yue = models.IntegerField(default=1, null=True, db_column='Yue')
    # user balance 1, in cents (default)
    yue2 = models.IntegerField(default=1, null=True, db_column='Yue2')
    # user balance 2, in cents (extension for special needs)
email = models.EmailField(default=None, null=True,
db_column='Email', max_length=254)
phone = models.IntegerField(default=1, null=True, db_column='Phone')
timeupdate = models.IntegerField(
default=1, null=True, db_column='TimeUpdate')
idmanager = models.IntegerField(default=1, null=True,
db_column='IdManager', blank=True)
localid = models.CharField(
default='1', null=True, db_column='LocalID', max_length=1024)
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
imark = models.IntegerField(default=1, null=True, db_column='IMark')
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cyuser'
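The integer conventions for sex and the two permission fields are easy to misuse as bare literals; they could be gathered into small enums (a hypothetical helper, not part of the original schema):

```python
from enum import IntEnum

class Sex(IntEnum):
    """Values stored in TCyuser.sex."""
    FEMALE = 0
    MALE = 1

class AdminLevel(IntEnum):
    """Values stored in TCyuser.attr and TCyuser.attrjf."""
    NORMAL = 0        # regular user
    ADMIN = 1         # administrator
    SUPER_ADMIN = 2   # may manage administrators
```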
class TCylocation(models.Model):
id = models.IntegerField(default=1, db_column='ID',
blank=True, primary_key=True)
id_parent = models.IntegerField(default=1, null=True,
db_column='ID_Parent', blank=True)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32, blank=True)
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=1024, blank=True)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='location_related_to_user', null=True
    )
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cylocation'
class TCycurricula(models.Model):
id = models.IntegerField(default=1, db_column='ID', primary_key=True)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32, blank=True)
timebegin = models.IntegerField(default=1, null=True,
db_column='TimeBegin', blank=True)
timeend = models.IntegerField(
default=1, null=True, db_column='TimeEnd', blank=True)
    id_location = models.ForeignKey(
        TCylocation, to_field="id", on_delete=models.CASCADE, db_column='ID_Location', related_name='curricula_related_to_location'
    )
    id_speaker = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='ID_Speaker', related_name='curricula_related_to_user_speaker'
    )
attr = models.SmallIntegerField(
default=1, null=True, db_column='Attr', blank=True)
    # attr (required): 1 = lab session, 2 = regular class, 3 = lecture attendance.
    # Lab type: an odd-numbered swipe assigns a seat (and records the seat number),
    # an even-numbered swipe logs the user off. Class and lecture attendance types:
    # a swipe records the card reader's ID.
charge = models.SmallIntegerField(
default=1, null=True, db_column='Charge', blank=True)
    # charge (required): 0 = free, 1 = paid, 2 = open
pwaccess = models.SmallIntegerField(
default=1, null=True, db_column='PwAccess', blank=True)
    # pwaccess: 0 = no seat assignment, 1 = assign a seat on card swipe (the system picks the seat)
pwcontinuous = models.SmallIntegerField(default=1, null=True,
db_column='PwContinuous', blank=True)
    # pwcontinuous: 0 = sequential seat assignment, 1 = random seat assignment
pwdirection = models.SmallIntegerField(default=1, null=True,
db_column='PwDirection', blank=True)
    # pwdirection: 0 = ascending seat order, 1 = descending seat order (ignored when random assignment is set)
dooropen = models.SmallIntegerField(
default=1, null=True, db_column='DoorOpen', blank=True)
    # dooropen: whether a matching user's swipe opens the door; 0 = open, 1 = do not open
timebegincheckbegin = models.IntegerField(default=1, null=True,
db_column='TimeBeginCheckBegin', blank=True)
    # 0 means not in effect
timebegincheckend = models.IntegerField(default=1, null=True,
db_column='TimeBeginCheckEnd', blank=True)
    # 0 means not in effect
timeendcheckbegin = models.IntegerField(default=1, null=True,
db_column='TimeEndCheckBegin', blank=True)
    # 0 means not in effect
timeendcheckend = models.IntegerField(default=1, null=True,
db_column='TimeEndCheckEnd', blank=True)
    # 0 means not in effect
rangeusers = models.CharField(default='1', null=True,
db_column='RangeUsers', max_length=4096, blank=True)
listdepts = models.CharField(default='1', null=True,
db_column='ListDepts', max_length=1024, blank=True)
rangeequs = models.CharField(default='1', null=True,
db_column='RangeEqus', max_length=1024, blank=True)
listplaces = models.CharField(default='1', null=True,
db_column='ListPlaces', max_length=1024, blank=True)
mapuser2equ = models.CharField(default='1', null=True,
db_column='MapUser2Equ', max_length=1024, blank=True)
aboutspeaker = models.CharField(default='1', null=True,
db_column='AboutSpeaker', max_length=1024, blank=True)
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=1024, blank=True)
    # rem: course ID from the academic affairs system
timeupdate = models.IntegerField(default=1, null=True, db_column='TimeUpdate',
blank=True)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='curricula_related_to_user', null=True
    )
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
    back_up1 = models.CharField(default='1', null=True,
                                max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cycurricula'
class TCyplan(models.Model):
id = models.IntegerField(default=1, db_column='ID',
primary_key=True, blank=True)
    id_curricula = models.ForeignKey(
TCycurricula, to_field="id", on_delete=models.CASCADE,
db_column='ID_Curricula', related_name='id_curricula'
)
timebegin = models.IntegerField(default=1, null=True,
db_column='TimeBegin', blank=True
)
timeend = models.IntegerField(
default=1, null=True, db_column='TimeEnd', blank=True)
    id_location = models.ForeignKey(
TCylocation, to_field="id", on_delete=models.CASCADE, db_column='ID_Location', related_name='plan_related_to_location'
)
    id_speaker = models.ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='ID_Speaker', related_name='plan_related_to_user_speaker'
)
attr = models.SmallIntegerField(
default=1, null=True, db_column='Attr', blank=True)
charge = models.SmallIntegerField(
default=1, null=True, db_column='Charge', blank=True)
pwaccess = models.SmallIntegerField(
default=1, null=True, db_column='PwAccess', blank=True)
pwcontinuous = models.SmallIntegerField(default=1, null=True,
db_column='PwContinuous', blank=True)
pwdirection = models.SmallIntegerField(default=1, null=True,
db_column='PwDirection', blank=True)
dooropen = models.SmallIntegerField(
default=1, null=True, db_column='DoorOpen', blank=True)
timebegincheckbegin = models.IntegerField(default=1, null=True, db_column='TimeBeginCheckBegin',
blank=True)
    timebegincheckend = models.IntegerField(default=1, null=True,
                                            db_column='TimeBeginCheckEnd', blank=True)
timeendcheckbegin = models.IntegerField(default=1, null=True,
db_column='TimeEndCheckBegin', blank=True)
timeendcheckend = models.IntegerField(default=1, null=True,
db_column='TimeEndCheckEnd', blank=True)
rangeusers = models.CharField(default='1', null=True,
db_column='RangeUsers', max_length=4096, blank=True)
listdepts = models.CharField(default='1', null=True,
db_column='ListDepts', max_length=1024, blank=True)
rangeequs = models.CharField(default='1', null=True,
db_column='RangeEqus', max_length=1024, blank=True)
listplaces = models.CharField(default='1', null=True,
db_column='ListPlaces', max_length=1024, blank=True)
mapuser2equ = models.CharField(default='1', null=True,
db_column='MapUser2Equ', max_length=1024, blank=True)
aboutspeaker = models.CharField(default='1', null=True,
db_column='AboutSpeaker', max_length=1024, blank=True)
    rem = models.CharField(default='1', null=True,
                           db_column='Rem', max_length=1024, blank=True)  # course ID from the academic affairs system
timeupdate = models.IntegerField(default=1, null=True, db_column='TimeUpdate',
blank=True)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='plan_related_to_user', null=True)
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up3 = models.IntegerField(default=1, null=True, blank=True)
back_up4 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cyplan'
class TCyequipment(models.Model):
id = models.IntegerField(default=1, db_column='ID',
blank=True, primary_key=True)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32, blank=True)
    id_location = models.ForeignKey(
        TCylocation, to_field="id", on_delete=models.CASCADE, db_column='ID_Location', related_name='equipment_related_to_location')
id_location_sn = models.SmallIntegerField(default=1, null=True,
db_column='ID_Location_SN', blank=True)
id_ip = models.CharField(default='1', null=True,
db_column='ID_IP', max_length=16, blank=True)
mac = models.CharField(default='1', null=True,
db_column='MAC', max_length=24, blank=True)
state = models.SmallIntegerField(
default=1, null=True, db_column='State', blank=True)
    # state: device status; 0 = idle, 1 = faulty, 2 = other, 3 = in use, 4 = open
login = models.SmallIntegerField(
default=1, null=True, db_column='Login', blank=True)
    # login: login status; 0 = not logged in, 1 = logged in
link = models.SmallIntegerField(
default=1, null=True, db_column='Link', blank=True)
    # link: network status; 0 = offline, 1 = online
    class_field = models.SmallIntegerField(
        default=1, null=True, db_column='Class', blank=True)  # renamed because 'class' is a Python reserved word
    # class_field: 0 = PC device, 2 = card-swipe device, 11 = server
dx = models.IntegerField(default=1, null=True, db_column='Dx', blank=True)
dy = models.IntegerField(default=1, null=True, db_column='Dy', blank=True)
    id_user = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='ID_User', related_name='equipment_related_to_user_use')
id_plan = models.IntegerField(
default=1, null=True, db_column='ID_Plan', blank=True)
itimebegin = models.IntegerField(default=1, null=True,
db_column='iTimeBegin', blank=True
)
itimelogin = models.IntegerField(default=1, null=True,
db_column='iTimeLogin', blank=True
)
whitelist = models.CharField(default='1', null=True,
db_column='WhiteList', max_length=1024, blank=True)
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=1024, blank=True)
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='equipment_related_to_user', null=True
    )
portlisten = models.IntegerField(default=1, null=True,
db_column='PortListen', blank=True
)
type_field = models.IntegerField(
default=1, null=True, db_column='Type', blank=True)
timedelay = models.IntegerField(default=1, null=True,
db_column='TimeDelay', blank=True)
keycancel = models.SmallIntegerField(default=1, null=True,
db_column='KeyCancel', blank=True)
keyOk = models.SmallIntegerField(
default=1, null=True, db_column='KeyOk', blank=True)
keydel = models.SmallIntegerField(
default=1, null=True, db_column='KeyDel', blank=True)
keyf1 = models.SmallIntegerField(
default=1, null=True, db_column='KeyF1', blank=True)
onall = models.SmallIntegerField(
default=1, null=True, db_column='OnAll', blank=True)
rangeequs = models.CharField(default='1', null=True,
db_column='RangeEqus', max_length=64, blank=True)
listplaces = models.CharField(default='1', null=True,
db_column='ListPlaces', max_length=64, blank=True)
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cyequipment'
class TCyTableInfo(models.Model):
id = models.IntegerField(default=1, db_column='ID', primary_key=True)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=50)
nametable = models.CharField(
default='1', null=True, db_column='NameTable', max_length=50)
timeupdate = models.IntegerField(
default=1, null=True, db_column='TimeUpdate')
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='kaoqin_related_to_user', null=True
    )
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
back_up1 = models.CharField(
default='1', null=True, max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True)
class Meta:
db_table = 't_cytableinfo'
class TCylocationex(models.Model):
id_location = models.OneToOneField(
TCylocation, to_field="id", on_delete=models.CASCADE, primary_key=True, db_column='ID_Location', related_name='locationex_related_to_location')
attr = models.SmallIntegerField(
default=1, null=True, db_column='Attr', blank=True)
datebegin = models.IntegerField(default=1, null=True,
db_column='DateBegin', blank=True)
dateend = models.IntegerField(
default=1, null=True, db_column='DateEnd', blank=True)
moderun = models.IntegerField(
default=1, null=True, db_column='ModeRun', blank=True)
modeshangji = models.IntegerField(default=1, null=True,
db_column='ModeShangJi', blank=True)
enabledelaycharged = models.IntegerField(default=1, null=True,
db_column='EnableDelayCharged', blank=True)
delaycharged = models.IntegerField(default=1, null=True,
db_column='DelayCharged', blank=True)
enablelimityue_sj = models.IntegerField(default=1, null=True,
db_column='EnableLimitYuE_SJ', blank=True)
limityue_sj = models.IntegerField(default=1, null=True,
db_column='LimitYuE_SJ', blank=True)
enablelimityue_xj = models.IntegerField(default=1, null=True,
db_column='EnableLimitYuE_XJ', blank=True)
limityue_xj = models.IntegerField(default=1, null=True,
db_column='LimitYuE_XJ', blank=True)
enablelimittime_xj = models.IntegerField(default=1, null=True,
db_column='EnableLimitTime_XJ', blank=True)
limittime_xj = models.IntegerField(default=1, null=True,
db_column='LimitTime_XJ', blank=True)
enablewarnyue = models.IntegerField(default=1, null=True,
db_column='EnableWarnYuE', blank=True)
warnyue = models.IntegerField(
default=1, null=True, db_column='WarnYuE', blank=True)
enablewarntime = models.IntegerField(default=1, null=True,
db_column='EnableWarnTime', blank=True)
warntime = models.IntegerField(
default=1, null=True, db_column='WarnTime', blank=True)
enablemincost = models.IntegerField(default=1, null=True,
db_column='EnableMinCost', blank=True)
mincost = models.IntegerField(
default=1, null=True, db_column='MinCost', blank=True)
price = models.IntegerField(
default=1, null=True, db_column='Price', blank=True)
priceminute = models.IntegerField(default=1, null=True,
db_column='PriceMinute', blank=True)
getequname = models.IntegerField(default=1, null=True,
db_column='GetEquName', blank=True
)
getequip = models.IntegerField(
default=1, null=True, db_column='GetEquIp', blank=True)
getequmac = models.IntegerField(default=1, null=True,
db_column='GetEquMac', blank=True)
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='locationex_related_to_user', null=True
    )
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cylocationex'
class TCymmx(models.Model):
id = models.IntegerField(default=1, db_column='ID',
blank=True, primary_key=True)
id_data = models.IntegerField(
default=1, null=True, db_column='ID_Data', blank=True)
id_type = models.SmallIntegerField(
default=1, null=True, db_column='ID_Type', blank=True)
    # id_type: media type; 1 = PPT
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
idmanager = models.ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='mmx_related_to_user', null=True
)
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cymmx'
class TCymmxdata(models.Model):
id = models.OneToOneField(
TCymmx, to_field="id", on_delete=models.CASCADE, primary_key=True, db_column='ID', related_name='mmxdata_related_to_mmx'
)
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
    idmanager = models.ForeignKey(
        TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='mmxdata_related_to_user', null=True
    )
data = models.TextField(db_column='Data', blank=True)
    imark = models.SmallIntegerField(
        default=1, null=True, db_column='IMark')  # 1 means the row has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cymmxdata'
class TCyRunningaccount(models.Model):
id = models.IntegerField(default=1, db_column='ID',
blank=True, primary_key=True)
id_user = models. ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='ID_User', related_name='runningaccount_related_to_user_use')
# 0 means absent, 1 means check-in, 2 means check-out
time = models.IntegerField(
default=1, null=True, db_column='Time', blank=True)
type_field = models.SmallIntegerField(
default=1, null=True, db_column='Type', blank=True)
# type codes: payment/deposit: 0x101, bonus credit: 0x102, refund/withdrawal: 0x103, charge/fine: 0x104, correction (cancel a prior payment, bonus, etc.): 0x106, machine-use fee: 0x201, attendance: 0x1001
money = models.IntegerField(
default=1, null=True, db_column='Money', blank=True)
# money: the amount involved, in cents (fen)
param1 = models.IntegerField(
default=1, null=True, db_column='Param1', blank=True)
# param1: cashier/admin ID when Type is 0x101, 0x102, 0x103, 0x104 or 0x106; workstation number when Type = 0x201; access-control/attendance device number when Type = 0x1001
param2 = models.ForeignKey(
TCyplan, to_field="id", on_delete=models.CASCADE, db_column='Param2', related_name='runningaccount_related_to_plan')
# param2: ID of the cancelled transaction when Type = 0x106; lecture/course number when Type = 0x201 or 0x1001
timeupdate = models.IntegerField(default=1, null=True, db_column='TimeUpdate',
blank=True)
idmanager = models.ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='runningaccount_related_to_user', null=True)
imark = models.SmallIntegerField(
default=1, null=True, db_column='IMark')  # 1 means the record has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cyrunningaccount'
class TCytypera(models.Model):
id = models.IntegerField(default=1, db_column='ID',
blank=True, primary_key=True)
id_parent = models.IntegerField(default=1, null=True,
db_column='ID_Parent', blank=True)
name = models.CharField(default='1', null=True,
db_column='Name', max_length=32, blank=True)
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True
)
idmanager = models.ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='typera_related_to_user', null=True
)
imark = models.SmallIntegerField(
default=1, null=True, db_column='IMark')  # 1 means the record has been deleted
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cytypera'
class TCyuserex(models.Model):
id = models.OneToOneField(
TCyuser, to_field="id", on_delete=models.CASCADE, primary_key=True, db_column='ID', related_name='userex_related_to_user_information')
photo = models.BinaryField(db_column='FaceFearture', blank=True)  # column name spelling kept as in the legacy schema
timeupdate = models.IntegerField(default=1, null=True,
db_column='TimeUpdate', blank=True)
idmanager = models.ForeignKey(
TCyuser, to_field="id", on_delete=models.CASCADE, db_column='IdManager', related_name='userex_related_to_user', null=True)
imark = models.IntegerField(default=1, null=True, db_column='IMark')
rem = models.CharField(default='1', null=True,
db_column='Rem', max_length=64)
back_up1 = models.CharField(default='1', null=True,
max_length=254, blank=True)
back_up2 = models.IntegerField(default=1, null=True, blank=True)
back_up3 = models.IntegerField(default=1, null=True, blank=True)
class Meta:
db_table = 't_cyuserex'
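The hex transaction-type codes documented in the TCyRunningaccount comments above can be collected into a lookup table. A minimal sketch in plain Python — the code values come from those comments, while the English labels and the `describe_type` helper are illustrative, not part of the original schema:

```python
# Lookup table for the t_cyrunningaccount Type codes; labels are informal
# translations of the schema comments, not values stored in the database.
RUNNING_ACCOUNT_TYPES = {
    0x101: "payment/deposit",
    0x102: "bonus credit",
    0x103: "refund/withdrawal",
    0x104: "charge/fine",
    0x106: "correction (cancel a prior payment or bonus)",
    0x201: "machine-use fee",
    0x1001: "attendance",
}


def describe_type(code):
    """Return a human-readable label for a transaction Type code."""
    return RUNNING_ACCOUNT_TYPES.get(code, "unknown (0x%x)" % code)
```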
# file: test/test_header_indexing.py (LaudateCorpus1/hyper-h2, MIT license)
# -*- coding: utf-8 -*-
"""
test_header_indexing.py
~~~~~~~~~~~~~~~~~~~~~~~
This module contains tests that use HPACK header tuples that provide additional
metadata to the hpack module about how to encode the headers.
"""
import pytest
from hpack import HeaderTuple, NeverIndexedHeaderTuple
import h2.connection
def assert_header_blocks_actually_equal(block_a, block_b):
"""
Asserts that two header bocks are really, truly equal, down to the types
of their tuples. Doesn't return anything.
"""
assert len(block_a) == len(block_b)
for a, b in zip(block_a, block_b):
assert a == b
assert a.__class__ is b.__class__
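The explicit `__class__` check above is needed because hpack's header tuples subclass `tuple`, so plain `==` compares values only. A stand-alone illustration with stand-in classes (not hpack's real ones):

```python
# Stand-ins that mimic how HeaderTuple/NeverIndexedHeaderTuple relate:
# a tuple subclass and a further subclass carrying extra indexing metadata.
class FakeHeaderTuple(tuple):
    def __new__(cls, name, value):
        return super().__new__(cls, (name, value))


class FakeNeverIndexedHeaderTuple(FakeHeaderTuple):
    pass


a = FakeHeaderTuple("authorization", "secret")
b = FakeNeverIndexedHeaderTuple("authorization", "secret")

# Value equality alone cannot tell the two apart...
assert a == b
# ...which is why the helper also compares the classes.
assert a.__class__ is not b.__class__
```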
class TestHeaderIndexing(object):
"""
Test that Hyper-h2 can correctly handle never indexed header fields using
the appropriate hpack data structures.
"""
example_request_headers = [
HeaderTuple(u':authority', u'example.com'),
HeaderTuple(u':path', u'/'),
HeaderTuple(u':scheme', u'https'),
HeaderTuple(u':method', u'GET'),
]
bytes_example_request_headers = [
HeaderTuple(b':authority', b'example.com'),
HeaderTuple(b':path', b'/'),
HeaderTuple(b':scheme', b'https'),
HeaderTuple(b':method', b'GET'),
]
extended_request_headers = [
HeaderTuple(u':authority', u'example.com'),
HeaderTuple(u':path', u'/'),
HeaderTuple(u':scheme', u'https'),
HeaderTuple(u':method', u'GET'),
NeverIndexedHeaderTuple(u'authorization', u'realpassword'),
]
bytes_extended_request_headers = [
HeaderTuple(b':authority', b'example.com'),
HeaderTuple(b':path', b'/'),
HeaderTuple(b':scheme', b'https'),
HeaderTuple(b':method', b'GET'),
NeverIndexedHeaderTuple(b'authorization', b'realpassword'),
]
example_response_headers = [
HeaderTuple(u':status', u'200'),
HeaderTuple(u'server', u'fake-serv/0.1.0')
]
bytes_example_response_headers = [
HeaderTuple(b':status', b'200'),
HeaderTuple(b'server', b'fake-serv/0.1.0')
]
extended_response_headers = [
HeaderTuple(u':status', u'200'),
HeaderTuple(u'server', u'fake-serv/0.1.0'),
NeverIndexedHeaderTuple(u'secure', u'you-bet'),
]
bytes_extended_response_headers = [
HeaderTuple(b':status', b'200'),
HeaderTuple(b'server', b'fake-serv/0.1.0'),
NeverIndexedHeaderTuple(b'secure', b'you-bet'),
]
@pytest.mark.parametrize(
'headers', (
example_request_headers,
bytes_example_request_headers,
extended_request_headers,
bytes_extended_request_headers,
)
)
def test_sending_header_tuples(self, headers, frame_factory):
"""
Providing HeaderTuple and HeaderTuple subclasses preserves the metadata
about indexing.
"""
c = h2.connection.H2Connection()
c.initiate_connection()
# Clear the data, then send headers.
c.clear_outbound_data_buffer()
c.send_headers(1, headers)
f = frame_factory.build_headers_frame(headers=headers)
assert c.data_to_send() == f.serialize()
@pytest.mark.parametrize(
'headers', (
example_request_headers,
bytes_example_request_headers,
extended_request_headers,
bytes_extended_request_headers,
)
)
def test_header_tuples_in_pushes(self, headers, frame_factory):
"""
Providing HeaderTuple and HeaderTuple subclasses to push promises
preserves metadata about indexing.
"""
c = h2.connection.H2Connection(client_side=False)
c.receive_data(frame_factory.preamble())
# We can use normal headers for the request.
f = frame_factory.build_headers_frame(
self.example_request_headers
)
c.receive_data(f.serialize())
frame_factory.refresh_encoder()
expected_frame = frame_factory.build_push_promise_frame(
stream_id=1,
promised_stream_id=2,
headers=headers,
flags=['END_HEADERS'],
)
c.clear_outbound_data_buffer()
c.push_stream(
stream_id=1,
promised_stream_id=2,
request_headers=headers
)
assert c.data_to_send() == expected_frame.serialize()
@pytest.mark.parametrize(
'headers,encoding', (
(example_request_headers, 'utf-8'),
(bytes_example_request_headers, None),
(extended_request_headers, 'utf-8'),
(bytes_extended_request_headers, None),
)
)
def test_header_tuples_are_decoded_request(self,
headers,
encoding,
frame_factory):
"""
The indexing status of the header is preserved when emitting
RequestReceived events.
"""
c = h2.connection.H2Connection(
client_side=False, header_encoding=encoding
)
c.receive_data(frame_factory.preamble())
f = frame_factory.build_headers_frame(headers)
data = f.serialize()
events = c.receive_data(data)
assert len(events) == 1
event = events[0]
assert isinstance(event, h2.events.RequestReceived)
assert_header_blocks_actually_equal(headers, event.headers)
@pytest.mark.parametrize(
'headers,encoding', (
(example_response_headers, 'utf-8'),
(bytes_example_response_headers, None),
(extended_response_headers, 'utf-8'),
(bytes_extended_response_headers, None),
)
)
def test_header_tuples_are_decoded_response(self,
headers,
encoding,
frame_factory):
"""
The indexing status of the header is preserved when emitting
ResponseReceived events.
"""
c = h2.connection.H2Connection(header_encoding=encoding)
c.initiate_connection()
c.send_headers(stream_id=1, headers=self.example_request_headers)
f = frame_factory.build_headers_frame(headers)
data = f.serialize()
events = c.receive_data(data)
assert len(events) == 1
event = events[0]
assert isinstance(event, h2.events.ResponseReceived)
assert_header_blocks_actually_equal(headers, event.headers)
@pytest.mark.parametrize(
'headers,encoding', (
(example_response_headers, 'utf-8'),
(bytes_example_response_headers, None),
(extended_response_headers, 'utf-8'),
(bytes_extended_response_headers, None),
)
)
def test_header_tuples_are_decoded_info_response(self,
headers,
encoding,
frame_factory):
"""
The indexing status of the header is preserved when emitting
InformationalResponseReceived events.
"""
# Manipulate the headers to send 100 Continue. We need to copy the list
# to avoid breaking the example headers.
headers = headers[:]
if encoding:
headers[0] = HeaderTuple(u':status', u'100')
else:
headers[0] = HeaderTuple(b':status', b'100')
c = h2.connection.H2Connection(header_encoding=encoding)
c.initiate_connection()
c.send_headers(stream_id=1, headers=self.example_request_headers)
f = frame_factory.build_headers_frame(headers)
data = f.serialize()
events = c.receive_data(data)
assert len(events) == 1
event = events[0]
assert isinstance(event, h2.events.InformationalResponseReceived)
assert_header_blocks_actually_equal(headers, event.headers)
@pytest.mark.parametrize(
'headers,encoding', (
(example_response_headers, 'utf-8'),
(bytes_example_response_headers, None),
(extended_response_headers, 'utf-8'),
(bytes_extended_response_headers, None),
)
)
def test_header_tuples_are_decoded_trailers(self,
headers,
encoding,
frame_factory):
"""
The indexing status of the header is preserved when emitting
TrailersReceived events.
"""
# Manipulate the headers to remove the status, which shouldn't be in
# the trailers. We need to copy the list to avoid breaking the example
# headers.
headers = headers[1:]
c = h2.connection.H2Connection(header_encoding=encoding)
c.initiate_connection()
c.send_headers(stream_id=1, headers=self.example_request_headers)
f = frame_factory.build_headers_frame(self.example_response_headers)
data = f.serialize()
c.receive_data(data)
f = frame_factory.build_headers_frame(headers, flags=['END_STREAM'])
data = f.serialize()
events = c.receive_data(data)
assert len(events) == 2
event = events[0]
assert isinstance(event, h2.events.TrailersReceived)
assert_header_blocks_actually_equal(headers, event.headers)
@pytest.mark.parametrize(
'headers,encoding', (
(example_request_headers, 'utf-8'),
(bytes_example_request_headers, None),
(extended_request_headers, 'utf-8'),
(bytes_extended_request_headers, None),
)
)
def test_header_tuples_are_decoded_push_promise(self,
headers,
encoding,
frame_factory):
"""
The indexing status of the header is preserved when emitting
PushedStreamReceived events.
"""
c = h2.connection.H2Connection(header_encoding=encoding)
c.initiate_connection()
c.send_headers(stream_id=1, headers=self.example_request_headers)
f = frame_factory.build_push_promise_frame(
stream_id=1,
promised_stream_id=2,
headers=headers,
flags=['END_HEADERS'],
)
data = f.serialize()
events = c.receive_data(data)
assert len(events) == 1
event = events[0]
assert isinstance(event, h2.events.PushedStreamReceived)
assert_header_blocks_actually_equal(headers, event.headers)
class TestSecureHeaders(object):
"""
Certain headers should always be transformed to their never-indexed form.
"""
example_request_headers = [
(u':authority', u'example.com'),
(u':path', u'/'),
(u':scheme', u'https'),
(u':method', u'GET'),
]
bytes_example_request_headers = [
(b':authority', b'example.com'),
(b':path', b'/'),
(b':scheme', b'https'),
(b':method', b'GET'),
]
possible_auth_headers = [
(u'authorization', u'test'),
(u'Authorization', u'test'),
(u'authorization', u'really long test'),
HeaderTuple(u'authorization', u'test'),
HeaderTuple(u'Authorization', u'test'),
HeaderTuple(u'authorization', u'really long test'),
NeverIndexedHeaderTuple(u'authorization', u'test'),
NeverIndexedHeaderTuple(u'Authorization', u'test'),
NeverIndexedHeaderTuple(u'authorization', u'really long test'),
(b'authorization', b'test'),
(b'Authorization', b'test'),
(b'authorization', b'really long test'),
HeaderTuple(b'authorization', b'test'),
HeaderTuple(b'Authorization', b'test'),
HeaderTuple(b'authorization', b'really long test'),
NeverIndexedHeaderTuple(b'authorization', b'test'),
NeverIndexedHeaderTuple(b'Authorization', b'test'),
NeverIndexedHeaderTuple(b'authorization', b'really long test'),
(u'proxy-authorization', u'test'),
(u'Proxy-Authorization', u'test'),
(u'proxy-authorization', u'really long test'),
HeaderTuple(u'proxy-authorization', u'test'),
HeaderTuple(u'Proxy-Authorization', u'test'),
HeaderTuple(u'proxy-authorization', u'really long test'),
NeverIndexedHeaderTuple(u'proxy-authorization', u'test'),
NeverIndexedHeaderTuple(u'Proxy-Authorization', u'test'),
NeverIndexedHeaderTuple(u'proxy-authorization', u'really long test'),
(b'proxy-authorization', b'test'),
(b'Proxy-Authorization', b'test'),
(b'proxy-authorization', b'really long test'),
HeaderTuple(b'proxy-authorization', b'test'),
HeaderTuple(b'Proxy-Authorization', b'test'),
HeaderTuple(b'proxy-authorization', b'really long test'),
NeverIndexedHeaderTuple(b'proxy-authorization', b'test'),
NeverIndexedHeaderTuple(b'Proxy-Authorization', b'test'),
NeverIndexedHeaderTuple(b'proxy-authorization', b'really long test'),
]
secured_cookie_headers = [
(u'cookie', u'short'),
(u'Cookie', u'short'),
(u'cookie', u'nineteen byte cooki'),
HeaderTuple(u'cookie', u'short'),
HeaderTuple(u'Cookie', u'short'),
HeaderTuple(u'cookie', u'nineteen byte cooki'),
NeverIndexedHeaderTuple(u'cookie', u'short'),
NeverIndexedHeaderTuple(u'Cookie', u'short'),
NeverIndexedHeaderTuple(u'cookie', u'nineteen byte cooki'),
NeverIndexedHeaderTuple(u'cookie', u'longer manually secured cookie'),
(b'cookie', b'short'),
(b'Cookie', b'short'),
(b'cookie', b'nineteen byte cooki'),
HeaderTuple(b'cookie', b'short'),
HeaderTuple(b'Cookie', b'short'),
HeaderTuple(b'cookie', b'nineteen byte cooki'),
NeverIndexedHeaderTuple(b'cookie', b'short'),
NeverIndexedHeaderTuple(b'Cookie', b'short'),
NeverIndexedHeaderTuple(b'cookie', b'nineteen byte cooki'),
NeverIndexedHeaderTuple(b'cookie', b'longer manually secured cookie'),
]
unsecured_cookie_headers = [
(u'cookie', u'twenty byte cookie!!'),
(u'Cookie', u'twenty byte cookie!!'),
(u'cookie', u'substantially longer than 20 byte cookie'),
HeaderTuple(u'cookie', u'twenty byte cookie!!'),
HeaderTuple(u'cookie', u'twenty byte cookie!!'),
HeaderTuple(u'Cookie', u'twenty byte cookie!!'),
(b'cookie', b'twenty byte cookie!!'),
(b'Cookie', b'twenty byte cookie!!'),
(b'cookie', b'substantially longer than 20 byte cookie'),
HeaderTuple(b'cookie', b'twenty byte cookie!!'),
HeaderTuple(b'cookie', b'twenty byte cookie!!'),
HeaderTuple(b'Cookie', b'twenty byte cookie!!'),
]
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('auth_header', possible_auth_headers)
def test_authorization_headers_never_indexed(self,
headers,
auth_header,
frame_factory):
"""
Authorization and Proxy-Authorization headers are always forced to be
never-indexed, regardless of their form.
"""
# Regardless of what we send, we expect it to be never indexed.
send_headers = headers + [auth_header]
expected_headers = headers + [
NeverIndexedHeaderTuple(auth_header[0].lower(), auth_header[1])
]
c = h2.connection.H2Connection()
c.initiate_connection()
# Clear the data, then send headers.
c.clear_outbound_data_buffer()
c.send_headers(1, send_headers)
f = frame_factory.build_headers_frame(headers=expected_headers)
assert c.data_to_send() == f.serialize()
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('auth_header', possible_auth_headers)
def test_authorization_headers_never_indexed_push(self,
headers,
auth_header,
frame_factory):
"""
Authorization and Proxy-Authorization headers are always forced to be
never-indexed, regardless of their form, when pushed by a server.
"""
# Regardless of what we send, we expect it to be never indexed.
send_headers = headers + [auth_header]
expected_headers = headers + [
NeverIndexedHeaderTuple(auth_header[0].lower(), auth_header[1])
]
c = h2.connection.H2Connection(client_side=False)
c.receive_data(frame_factory.preamble())
# We can use normal headers for the request.
f = frame_factory.build_headers_frame(
self.example_request_headers
)
c.receive_data(f.serialize())
frame_factory.refresh_encoder()
expected_frame = frame_factory.build_push_promise_frame(
stream_id=1,
promised_stream_id=2,
headers=expected_headers,
flags=['END_HEADERS'],
)
c.clear_outbound_data_buffer()
c.push_stream(
stream_id=1,
promised_stream_id=2,
request_headers=send_headers
)
assert c.data_to_send() == expected_frame.serialize()
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('cookie_header', secured_cookie_headers)
def test_short_cookie_headers_never_indexed(self,
headers,
cookie_header,
frame_factory):
"""
Short cookie headers, and cookies provided as NeverIndexedHeaderTuple,
are never indexed.
"""
# Regardless of what we send, we expect it to be never indexed.
send_headers = headers + [cookie_header]
expected_headers = headers + [
NeverIndexedHeaderTuple(cookie_header[0].lower(), cookie_header[1])
]
c = h2.connection.H2Connection()
c.initiate_connection()
# Clear the data, then send headers.
c.clear_outbound_data_buffer()
c.send_headers(1, send_headers)
f = frame_factory.build_headers_frame(headers=expected_headers)
assert c.data_to_send() == f.serialize()
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('cookie_header', secured_cookie_headers)
def test_short_cookie_headers_never_indexed_push(self,
headers,
cookie_header,
frame_factory):
"""
Short cookie headers, and cookies provided as NeverIndexedHeaderTuple,
are never indexed when pushed by servers.
"""
# Regardless of what we send, we expect it to be never indexed.
send_headers = headers + [cookie_header]
expected_headers = headers + [
NeverIndexedHeaderTuple(cookie_header[0].lower(), cookie_header[1])
]
c = h2.connection.H2Connection(client_side=False)
c.receive_data(frame_factory.preamble())
# We can use normal headers for the request.
f = frame_factory.build_headers_frame(
self.example_request_headers
)
c.receive_data(f.serialize())
frame_factory.refresh_encoder()
expected_frame = frame_factory.build_push_promise_frame(
stream_id=1,
promised_stream_id=2,
headers=expected_headers,
flags=['END_HEADERS'],
)
c.clear_outbound_data_buffer()
c.push_stream(
stream_id=1,
promised_stream_id=2,
request_headers=send_headers
)
assert c.data_to_send() == expected_frame.serialize()
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('cookie_header', unsecured_cookie_headers)
def test_long_cookie_headers_can_be_indexed(self,
headers,
cookie_header,
frame_factory):
"""
Longer cookie headers can be indexed.
"""
# Regardless of what we send, we expect it to be indexed.
send_headers = headers + [cookie_header]
expected_headers = headers + [
HeaderTuple(cookie_header[0].lower(), cookie_header[1])
]
c = h2.connection.H2Connection()
c.initiate_connection()
# Clear the data, then send headers.
c.clear_outbound_data_buffer()
c.send_headers(1, send_headers)
f = frame_factory.build_headers_frame(headers=expected_headers)
assert c.data_to_send() == f.serialize()
@pytest.mark.parametrize(
'headers', (example_request_headers, bytes_example_request_headers)
)
@pytest.mark.parametrize('cookie_header', unsecured_cookie_headers)
def test_long_cookie_headers_can_be_indexed_push(self,
headers,
cookie_header,
frame_factory):
"""
Longer cookie headers can be indexed.
"""
# Regardless of what we send, we expect it to be never indexed.
send_headers = headers + [cookie_header]
expected_headers = headers + [
HeaderTuple(cookie_header[0].lower(), cookie_header[1])
]
c = h2.connection.H2Connection(client_side=False)
c.receive_data(frame_factory.preamble())
# We can use normal headers for the request.
f = frame_factory.build_headers_frame(
self.example_request_headers
)
c.receive_data(f.serialize())
frame_factory.refresh_encoder()
expected_frame = frame_factory.build_push_promise_frame(
stream_id=1,
promised_stream_id=2,
headers=expected_headers,
flags=['END_HEADERS'],
)
c.clear_outbound_data_buffer()
c.push_stream(
stream_id=1,
promised_stream_id=2,
request_headers=send_headers
)
assert c.data_to_send() == expected_frame.serialize()
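The policy these tests pin down — Authorization and Proxy-Authorization are always emitted never-indexed, and cookie values shorter than 20 bytes are as well — can be sketched as a single predicate. This is a simplified stand-in for illustration, not h2's actual implementation:

```python
SENSITIVE_HEADERS = {"authorization", "proxy-authorization"}
COOKIE_INDEXABLE_MIN_LEN = 20  # cookies shorter than this stay never-indexed


def should_never_index(name, value):
    """Simplified sketch of the header-sensitivity rule exercised above."""
    if isinstance(name, bytes):
        name = name.decode("utf-8")
    if isinstance(value, bytes):
        value = value.decode("utf-8")
    name = name.lower()
    if name in SENSITIVE_HEADERS:
        return True
    return name == "cookie" and len(value) < COOKIE_INDEXABLE_MIN_LEN
```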
# file: src/odrive_ros/__init__.py (franz6ko/odrive_ros, BSD-3-Clause license)
from .odrive_node import ODriveNode, start_odrive
# file: modules/tests/core/__init__.py (PeterDaveHello/eden, MIT license)
from core_utils import *
from core_dataTable import *
# file: ehr_functions/models/metrics/_base.py (fdabek1/EHR-Functions, MIT license)
class BaseMetric:
def __init__(self):
pass
def pre_train(self):
pass
def post_train(self, train_data, val_data):
pass
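BaseMetric above is a hook interface: subclasses override only the callbacks they need. A minimal hypothetical subclass (the base class is repeated so the sketch runs on its own):

```python
class BaseMetric:
    """Copy of the hook interface above, repeated for a standalone sketch."""

    def __init__(self):
        pass

    def pre_train(self):
        pass

    def post_train(self, train_data, val_data):
        pass


class RoundCounter(BaseMetric):
    """Hypothetical metric that counts completed training rounds."""

    def __init__(self):
        super().__init__()
        self.rounds = 0

    def post_train(self, train_data, val_data):
        self.rounds += 1


m = RoundCounter()
m.pre_train()              # no-op inherited from the base class
m.post_train([1, 2], [3])
m.post_train([4], [5])
assert m.rounds == 2
```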
# file: flair/trainers/__init__.py (db-bionlp/CLNER, MIT license)
from .trainer import ModelTrainer
from .distillation_trainer import ModelDistiller
from .finetune_trainer import ModelFinetuner
from .reinforcement_trainer import ReinforcementTrainer
from .swaf_trainer import SWAFTrainer

# file: python_src/tbd_polly_speech/HashTable/__init__.py (xiangzhi/tbd_polly_speech, MIT license)
from .hash_table import HashTable, Item
# file: umlayer/adapters/__init__.py (selforthis/umlayer, MIT license)
from umlayer.adapters.standard_item import StandardItem
from umlayer.adapters.standard_item_model import StandardItemModel
from umlayer.adapters.tree_sort_model import TreeSortModel
from umlayer.adapters.gui_adapters import makeItemFromProjectItem, itemize
| 51.4 | 74 | 0.898833 | 31 | 257 | 7.258065 | 0.483871 | 0.195556 | 0.337778 | 0.24 | 0.275556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066148 | 257 | 4 | 75 | 64.25 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
caeec9cbfe24f872c80f88fc354c8c4ab8087d67 | 11,557 | py | Python | tests/unit_tests/agents/sqlalchemy_agent/test_sqlalchemy_agent.py | tethysplatform/param_persist | 96b672fc88b0dad50c412138d9e1e2c235827d1e | [
"BSD-2-Clause"
] | 1 | 2020-09-14T16:19:58.000Z | 2020-09-14T16:19:58.000Z | tests/unit_tests/agents/sqlalchemy_agent/test_sqlalchemy_agent.py | tethysplatform/param_persist | 96b672fc88b0dad50c412138d9e1e2c235827d1e | [
"BSD-2-Clause"
] | 1 | 2020-09-10T20:45:26.000Z | 2020-09-10T20:45:26.000Z | tests/unit_tests/agents/sqlalchemy_agent/test_sqlalchemy_agent.py | tethysplatform/param_persist | 96b672fc88b0dad50c412138d9e1e2c235827d1e | [
"BSD-2-Clause"
] | null | null | null | """
Tests for the SqlAlchemy agent.
This file was created on August 06, 2020
"""
import json
import logging
import param
import pytest
from param_persist.agents.sqlalchemy_agent import SqlAlchemyAgent
from param_persist.sqlalchemy.models import InstanceModel, ParamModel
class AgentTestParam(param.Parameterized):
"""
A Test param class for testing the serializer.
"""
number_field = param.Number(0.5, doc="A simple number field.")
integer_field = param.Integer(1, doc="A simple integer field.")
string_field = param.String("My String", doc="A simple string field.")
bool_field = param.Boolean(False, doc="A simple boolean field.")
class AgentTestParamMissing(param.Parameterized):
"""
A Test param class for testing the serializer.
"""
integer_field = param.Integer(1, doc="A simple integer field.")
string_field = param.String("My String", doc="A simple string field.")
bool_field = param.Boolean(False, doc="A simple boolean field.")
def test_save_param_using_sqlalchemy_engine(sqlalchemy_engine, sqlalchemy_session_factory):
"""
Test the save function of the param persist sqlalchemy agent.
"""
agent = SqlAlchemyAgent(sqlalchemy_engine)
parameterized_class = AgentTestParam()
parameterized_class.number_field = 1.7
parameterized_class.integer_field = 9
parameterized_class.string_field = "Testing Strings"
parameterized_class.bool_field = True
returned_instance_model_id = agent.save(parameterized_class)
sqlalchemy_session = sqlalchemy_session_factory()
instance_model_count = sqlalchemy_session.query(InstanceModel).count()
instance_model_id = sqlalchemy_session.query(InstanceModel).first().id
assert instance_model_count == 1
assert returned_instance_model_id == instance_model_id
param_model_count = sqlalchemy_session.query(ParamModel).count()
param_models = sqlalchemy_session.query(ParamModel).all()
for p in param_models:
assert p.instance_id == instance_model_id
param_dict = json.loads(p.value)
assert getattr(parameterized_class, param_dict['name']) == param_dict['value']
parameter_type = type(getattr(parameterized_class.param, param_dict['name']))
base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
assert base_type == param_dict['type']
assert param_model_count == 4


def test_load_param_using_sqlalchemy_engine(sqlalchemy_engine, sqlalchemy_session_factory,
                                            sqlalchemy_instance_model_complete):
    """
    Test the load function of the param persist sqlalchemy agent.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    sqlalchemy_session = sqlalchemy_session_factory()
    instance_model = sqlalchemy_session.query(InstanceModel).filter_by(id=sqlalchemy_instance_model_complete.id).first()
    assert instance_model.id == sqlalchemy_instance_model_complete.id
    param_models = [x for x in sqlalchemy_session.query(ParamModel).filter_by(instance_id=instance_model.id)]
    assert len(param_models) == 4

    parameterized_instance = agent.load(instance_model.id)

    assert type(parameterized_instance) is AgentTestParam
    for p in param_models:
        assert p.instance_id == instance_model.id
        param_dict = json.loads(p.value)
        assert getattr(parameterized_instance, param_dict['name']) == param_dict['value']
        parameter_type = type(getattr(parameterized_instance.param, param_dict['name']))
        base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
        assert base_type == param_dict['type']


def test_load_param_missing_param_fields(sqlalchemy_engine, sqlalchemy_session_factory,
                                         sqlalchemy_instance_model_missing):
    """
    Test the load function of the param persist sqlalchemy agent with missing fields.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    sqlalchemy_session = sqlalchemy_session_factory()
    instance_model = sqlalchemy_session.query(InstanceModel).filter_by(id=sqlalchemy_instance_model_missing.id).first()
    assert instance_model.id == sqlalchemy_instance_model_missing.id
    param_models = [x for x in sqlalchemy_session.query(ParamModel).filter_by(instance_id=instance_model.id)]
    assert len(param_models) == 2

    parameterized_instance = agent.load(instance_model.id)

    assert type(parameterized_instance) is AgentTestParam
    for p in param_models:
        assert p.instance_id == instance_model.id
        param_dict = json.loads(p.value)
        assert getattr(parameterized_instance, param_dict['name']) == param_dict['value']
        parameter_type = type(getattr(parameterized_instance.param, param_dict['name']))
        base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
        assert base_type == param_dict['type']
    # params missing from the store keep their class defaults
    assert parameterized_instance.integer_field == 1
    assert parameterized_instance.string_field == "My String"


def test_load_param_extra_param_fields(sqlalchemy_engine, sqlalchemy_session_factory,
                                       sqlalchemy_instance_model_extra):
    """
    Test the load function of the param persist sqlalchemy agent with extra fields.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    sqlalchemy_session = sqlalchemy_session_factory()
    instance_model = sqlalchemy_session.query(InstanceModel).filter_by(id=sqlalchemy_instance_model_extra.id).first()
    assert instance_model.id == sqlalchemy_instance_model_extra.id
    param_models = [x for x in sqlalchemy_session.query(ParamModel).filter_by(instance_id=instance_model.id)]
    assert len(param_models) == 6

    parameterized_instance = agent.load(instance_model.id)

    assert type(parameterized_instance) is AgentTestParam
    for p in param_models:
        assert p.instance_id == instance_model.id
        param_dict = json.loads(p.value)
        param_value = getattr(parameterized_instance, param_dict['name'], None)
        if param_value is None:
            # the extra stored params are not attributes of AgentTestParam
            assert param_dict['name'] in ['garbage_field_1', 'garbage_field_2']
            continue
        assert param_value == param_dict['value']
        parameter_type = type(getattr(parameterized_instance.param, param_dict['name']))
        base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
        assert base_type == param_dict['type']


def test_load_param_with_invalid_class(sqlalchemy_engine, sqlalchemy_instance_invalid_class):
    """
    Test the load function of the param persist sqlalchemy agent with an invalid class.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    with pytest.raises(Exception) as excinfo:
        agent.load(sqlalchemy_instance_invalid_class.id)
    assert 'Defined param class "class_path" was not importable. ' \
           'Given path is "this.is.not.a.valid.module.InvalidParamClass"' in str(excinfo.value)


def test_delete_param_using_sqlalchemy_engine(sqlalchemy_engine, sqlalchemy_session_factory,
                                              sqlalchemy_instance_model_complete):
    """
    Test the delete function of the param persist sqlalchemy agent.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    sqlalchemy_session = sqlalchemy_session_factory()
    assert 1 == sqlalchemy_session.query(InstanceModel).count()
    instance_model = sqlalchemy_session.query(InstanceModel).filter_by(id=sqlalchemy_instance_model_complete.id).first()

    agent.delete(instance_model.id)

    assert 0 == sqlalchemy_session.query(InstanceModel).count()


def test_delete_param_exception(sqlalchemy_engine, caplog):
    """
    Test the delete function of the param persist sqlalchemy agent using a bad id.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    with caplog.at_level(logging.WARNING):
        agent.delete('not-a-valid-uuid')
    assert 'unable to query database with given instance id. id="not-a-valid-uuid"' in str(caplog.text)


def test_update_param_using_sqlalchemy_engine(sqlalchemy_engine, sqlalchemy_session_factory,
                                              sqlalchemy_instance_model_complete):
    """
    Test the update function of the param persist sqlalchemy agent.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    parameterized_class = AgentTestParam()
    parameterized_class.number_field = 3.21
    parameterized_class.integer_field = 123
    parameterized_class.string_field = "Updated Strings"
    parameterized_class.bool_field = False

    returned_instance_model_id = agent.update(parameterized_class, sqlalchemy_instance_model_complete.id)

    sqlalchemy_session = sqlalchemy_session_factory()
    instance_model_count = sqlalchemy_session.query(InstanceModel).count()
    instance_model_id = sqlalchemy_session.query(InstanceModel).first().id
    assert instance_model_count == 1
    assert returned_instance_model_id == instance_model_id

    param_model_count = sqlalchemy_session.query(ParamModel).count()
    param_models = sqlalchemy_session.query(ParamModel).all()
    for p in param_models:
        assert p.instance_id == instance_model_id
        param_dict = json.loads(p.value)
        assert getattr(parameterized_class, param_dict['name']) == param_dict['value']
        parameter_type = type(getattr(parameterized_class.param, param_dict['name']))
        base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
        assert base_type == param_dict['type']
    assert param_model_count == 4


def test_update_param_bad_instance_id(sqlalchemy_engine):
    """
    Test the update function of the param persist sqlalchemy agent with a bad id.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    parameterized_class = AgentTestParam()
    with pytest.raises(Exception) as excinfo:
        agent.update(parameterized_class, 'not-a-valid-uuid')
    assert 'Parameterized instance with id "not-a-valid-uuid" does not exist.' in str(excinfo.value)


def test_update_param_removing_attribute(sqlalchemy_engine, sqlalchemy_session_factory,
                                         sqlalchemy_instance_model_complete):
    """
    Test what happens when you update an instance model and params after removing a field.
    """
    agent = SqlAlchemyAgent(sqlalchemy_engine)
    parameterized_class = AgentTestParamMissing()
    parameterized_class.integer_field = 123
    parameterized_class.string_field = "Updated Strings"
    parameterized_class.bool_field = False

    returned_instance_model_id = agent.update(parameterized_class, sqlalchemy_instance_model_complete.id)

    sqlalchemy_session = sqlalchemy_session_factory()
    instance_model_count = sqlalchemy_session.query(InstanceModel).count()
    instance_model_id = sqlalchemy_session.query(InstanceModel).first().id
    assert instance_model_count == 1
    assert returned_instance_model_id == instance_model_id

    param_model_count = sqlalchemy_session.query(ParamModel).count()
    param_models = sqlalchemy_session.query(ParamModel).all()
    for p in param_models:
        assert p.instance_id == instance_model_id
        param_dict = json.loads(p.value)
        assert getattr(parameterized_class, param_dict['name']) == param_dict['value']
        parameter_type = type(getattr(parameterized_class.param, param_dict['name']))
        base_type = '.'.join([parameter_type.__module__, parameter_type.__name__])
        assert base_type == param_dict['type']
    assert param_model_count == 3
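
# The assertions above imply that each ParamModel row stores its parameter as a
# JSON object with 'name', 'value' and 'type' keys, where 'type' is the dotted
# path of the Parameter class. A minimal standalone sketch of that payload
# format follows; the helper name is illustrative, not the agent's actual
# serializer, and the key names are taken from the assertions above.

```python
import json

def make_param_payload(name, value, parameter_type):
    # 'type' is stored as "<module>.<class name>", matching the base_type check above
    type_str = '.'.join([parameter_type.__module__, parameter_type.__name__])
    return json.dumps({'name': name, 'value': value, 'type': type_str})

payload = make_param_payload('integer_field', 9, int)
param_dict = json.loads(payload)
assert param_dict['name'] == 'integer_field'
assert param_dict['value'] == 9
assert param_dict['type'] == 'builtins.int'
```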

# File: tests/test_headers.py (repo: rickproza/twill, MIT license)
from .utils import execute_script


def test(url):
    execute_script('test_headers.twill', initial_url=url)

# File: {{cookiecutter.project_slug}}/backend/api/views/__init__.py (repo: Exanis/drf-cookie-react, MIT license)
from .health import health

# File: imagenet-resnet50/el.py (repo: lancopku/well-classified-examples-are-underestimated, Apache-2.0 license)
import torch
import torch.nn as nn
from torch.nn import functional as F
import math


class EncourageLoss(nn.Module):  # pasted in 2021.4.7
    def __init__(self, opt, cl_eps=1e-5):
        super(EncourageLoss, self).__init__()
        self.opt = opt
        self.cl_eps = cl_eps
        self.epoch = 1

    def forward(self, input, target, is_train=True):
        lprobs = F.log_softmax(input, dim=-1)
        probs = torch.exp(lprobs)
        ys = F.one_hot(target, num_classes=probs.size()[-1])
        ones = torch.ones_like(probs)
        if self.opt.base_loss == 'mse':
            mse_loss = torch.mean(ys * (ones - probs) ** 2 + (ones - ys) * probs ** 2)  # y*(1-p)^2 + (1-y)*p^2
            org_loss = mse_loss
            # identical to torch.mean((F.one_hot(target, num_classes=probs.size()[-1]) - probs) ** 2)
        elif self.opt.base_loss == 'mse_sigmoid':
            mse_loss = torch.mean(ys * (ones - torch.sigmoid(input)) ** 2 + (ones - ys) * torch.sigmoid(input) ** 2)
            org_loss = mse_loss
        elif self.opt.base_loss == 'mae':
            mae_loss = torch.mean(ys * (ones - probs) + (ones - ys) * probs)  # y*(1-p) + (1-y)*p
            org_loss = mae_loss
        elif self.opt.base_loss == 'mae_sigmoid':
            mae_loss = torch.mean(ys * (ones - torch.sigmoid(input)) + (ones - ys) * torch.sigmoid(input))
            org_loss = mae_loss
        else:
            ce_loss = F.nll_loss(lprobs, target, reduction='mean')  # -y * log p
            org_loss = ce_loss
        c_loss = torch.zeros_like(org_loss)
        if is_train and not (
                self.opt.defer_start and self.get_epoch() <= self.opt.defer_start):  # defer encourage loss
            bg = self.opt.bonus_gamma
            if bg != 0:
                if self.opt.base_loss in ['mse', 'mae', 'mse_sigmoid', 'mae_sigmoid']:
                    # mse: min mean y*(1-p)^2 + (1-y)*p^2
                    # mirror mse: min -(mean y*p^2 + (1-y)*(1-p)^2)
                    preds = probs if self.opt.base_loss in ['mse', 'mae'] else torch.sigmoid(input)
                    mirror_me = -torch.mean(ys * preds ** bg + (ones - ys) * (ones - preds) ** bg)
                    c_loss = mirror_me * self.opt.bonus_rho
                else:
                    if bg > 0:  # power bonus
                        bonus = -torch.pow(probs, bg)
                        if self.opt.bonus_start != 0.0:  # 20201023: truncate the bonus
                            pb = -torch.pow(self.opt.bonus_start * torch.ones_like(probs), bg)
                            bonus = torch.where(probs >= self.opt.bonus_start, bonus - pb, torch.zeros_like(bonus))
                    elif bg == -1:  # log
                        bonus = torch.log(torch.clamp(torch.ones_like(probs) - probs, min=self.cl_eps))  # likelihood bonus
                        if self.opt.bonus_start != 0.0:
                            pb = torch.log(torch.clamp(torch.ones_like(probs) - self.opt.bonus_start, min=self.cl_eps))
                            bonus = torch.where(probs >= self.opt.bonus_start, bonus - pb, torch.zeros_like(bonus))
                        if self.opt.log_end != 1.0:  # e.g. 0.9
                            log_end = self.opt.log_end
                            # 2021.1.31 17:04: noticed the earlier clamped versions of the two lines
                            # below clamped to no effect (at log_end and 1), so the plain forms are used
                            y_log_end = torch.log(torch.ones_like(probs) - log_end)
                            bonus_after_log_end = 1 / (log_end - torch.ones_like(probs)) * (probs - log_end) + y_log_end
                            # linear continuation from the point x: log_end, y: log(1 - log_end)
                            bonus = torch.where(probs > log_end, bonus_after_log_end, bonus)
                    elif bg == -2:  # cosine
                        bonus = torch.cos(probs * math.pi) - 1
                    if self.opt.bonus_abrupt > 0:  # can be 0.75
                        bonus = torch.where(probs > self.opt.bonus_abrupt, torch.zeros_like(bonus), bonus)
                    c_loss = F.nll_loss(
                        -bonus * self.opt.bonus_rho,
                        target.view(-1),
                        reduction='mean',
                    )  # y*log(1-p)
        all_loss = org_loss + c_loss
        return all_loss, org_loss

    def set_epoch(self, epoch):
        self.epoch = epoch

    def get_epoch(self):
        return self.epoch
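
# The log-bonus branch above (bg == -1) rewards the target class with
# log(1 - p), optionally replaced past log_end by a linear continuation. A
# scalar sketch of that piecewise curve in plain math (no torch), with
# log_end = 0.9 assumed for illustration:

```python
import math

def log_bonus(p, log_end=0.9, eps=1e-5):
    # log(1 - p) up to log_end, then the linear continuation used above,
    # whose slope 1 / (log_end - 1) matches bonus_after_log_end
    if p <= log_end:
        return math.log(max(1.0 - p, eps))
    y_log_end = math.log(1.0 - log_end)
    return (p - log_end) / (log_end - 1.0) + y_log_end

# the curve is continuous at the switch point and keeps decreasing past it
assert abs(log_bonus(0.9) - math.log(0.1)) < 1e-9
assert log_bonus(0.95) < log_bonus(0.9)
```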

# File: python_hmac_auth/__init__.py (repo: Pance/python-hmac-auth, Apache-2.0 license)
from .hmac_auth import HmacAuth

# File: filestore/__init__.py (repo: vbpupil/FileStore, MIT license)
from filestore import *
| 12 | 23 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
846ca98bb25ffd1ff8b73b64dacaecf7ed6b5c50 | 201 | py | Python | atlas/foundations_internal/src/foundations_internal/pipeline_context_wrapper.py | manesioz/atlas | 4afb63b48f65b0765300dc8034403aeca8489520 | [
"Apache-2.0"
] | 1 | 2021-11-01T12:47:44.000Z | 2021-11-01T12:47:44.000Z | atlas/foundations_internal/src/foundations_internal/pipeline_context_wrapper.py | manesioz/atlas | 4afb63b48f65b0765300dc8034403aeca8489520 | [
"Apache-2.0"
] | null | null | null | atlas/foundations_internal/src/foundations_internal/pipeline_context_wrapper.py | manesioz/atlas | 4afb63b48f65b0765300dc8034403aeca8489520 | [
"Apache-2.0"
] | null | null | null |
class PipelineContextWrapper(object):

    def __init__(self, pipeline_context):
        self._pipeline_context = pipeline_context

    def pipeline_context(self):
        return self._pipeline_context
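
# A minimal usage sketch of the wrapper; the class is repeated here so the
# snippet runs standalone, and the dict stand-in for a real pipeline context
# is hypothetical.

```python
class PipelineContextWrapper(object):
    # repeated from above so this snippet is self-contained

    def __init__(self, pipeline_context):
        self._pipeline_context = pipeline_context

    def pipeline_context(self):
        return self._pipeline_context

context = {'job_id': 'demo'}  # hypothetical stand-in for a real pipeline context
wrapper = PipelineContextWrapper(context)
assert wrapper.pipeline_context() is context
```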

# File: example/app.py (repo: TheCaptainCat/bolinette, MIT license)
from bolinette import Bolinette, main_func


@main_func
def create_app():
    return Bolinette()

# File: PAINTeR/__init__.py (repo: lejianhuang/RPN-signature_Lejian, BSD-3-Clause license)
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
warnings.filterwarnings("ignore", message="DeprecationWarning")
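
# These module-level filters silence noisy binary-compatibility warnings at
# import time. A small self-contained demonstration of the same
# warnings.filterwarnings mechanism; the warning texts are made up for the demo.

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # 'message' is a regex matched against the start of the warning text
    warnings.filterwarnings("ignore", message="numpy.dtype size changed")
    warnings.warn("numpy.dtype size changed")   # matched by the filter, dropped
    warnings.warn("something else happened")    # not matched, recorded

assert len(caught) == 1
assert "something else" in str(caught[0].message)
```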

# File: mp/__init__.py (repo: RawPikachu/valor, MIT license)
from .avg_process import avg_process
from .plot_process import plot_process

# File: bibles/models.py (repo: njuaplusplus/j0shua, Apache-2.0 license)
#!/usr/local/bin/python
# coding=utf-8
from django.db import models


# Create your models here.
# class Bible_NIV(models.Model):
#     versenum = models.IntegerField(u'节')
#     chapternum = models.IntegerField(u'章')
#     book = models.CharField(u'卷', max_length=30)
#     verse = models.TextField(u'经文')
#
#     def __str__(self):
#         return "%s:%s:%s" % (self.book, self.chapternum, self.versenum)


class Bible_Book_Name(models.Model):
    ''' The Chinese book name with the corresponding English name
    '''
    book_name_zh = models.CharField('中文名', max_length=30)
    book_name_en = models.CharField('英文名', max_length=30)
    chapternums = models.IntegerField('章节数', default=0)

    def __str__(self):
        return "%s(%s)" % (self.book_name_zh, self.book_name_en)


class Bible_CHN(models.Model):
    versenum = models.IntegerField('节')
    chapternum = models.IntegerField('章')
    book = models.ForeignKey(Bible_Book_Name, verbose_name='卷')
    verse = models.TextField('经文')

    def __str__(self):
        return "%s %s:%s" % (self.book, self.chapternum, self.versenum)


class Bible_NIV2011(models.Model):
    versenum = models.IntegerField('节')
    chapternum = models.IntegerField('章')
    book = models.ForeignKey(Bible_Book_Name, verbose_name='卷')
    verse = models.TextField('经文')

    def __str__(self):
        return "%s %s:%s" % (self.book, self.chapternum, self.versenum)


class Daily_Verse(models.Model):
    verse_date = models.DateField('日期')
    start_verse = models.ForeignKey(Bible_CHN, verbose_name='起始经文', related_name='daily_start_verse_set')
    end_verse = models.ForeignKey(Bible_CHN, verbose_name='结束经文', related_name='daily_end_verse_set')

    def __str__(self):
        return "%s:%s-%s" % (self.verse_date, self.start_verse, self.end_verse)


class Weekly_Verse(models.Model):
    verse_date = models.DateField('日期')
    start_verse = models.ForeignKey(Bible_CHN, verbose_name='起始经文', related_name='weekly_start_verse_set')
    end_verse = models.ForeignKey(Bible_CHN, verbose_name='结束经文', related_name='weekly_end_verse_set')

    def __str__(self):
        return "%s:%s-%s" % (self.verse_date, self.start_verse, self.end_verse)


class Weekly_Reading(models.Model):
    ''' Weekly Bible reading (每周读经)
    '''
    verse_date = models.DateField('日期')
    start_verse = models.ForeignKey(Bible_CHN, verbose_name='起始经文', related_name='weekly_reading_start_verse_set')
    end_verse = models.ForeignKey(Bible_CHN, verbose_name='结束经文', related_name='weekly_reading_end_verse_set')

    def __str__(self):
        return "%s:%s-%s" % (self.verse_date, self.start_verse, self.end_verse)


class Weekly_Recitation(models.Model):
    ''' Weekly Bible recitation (每周背经)
    '''
    verse_date = models.DateField('日期')
    start_verse = models.ForeignKey(Bible_CHN, verbose_name='起始经文', related_name='weekly_recitation_start_verse_set')
    end_verse = models.ForeignKey(Bible_CHN, verbose_name='结束经文', related_name='weekly_recitation_end_verse_set')

    def __str__(self):
        return "%s:%s-%s" % (self.verse_date, self.start_verse, self.end_verse)

# File: python/swap_without_temp_variable.py (repo: angelopassaro/Hacktoberfest-1, Apache-2.0 license)
a = 5
b = 6
a = a + b
b = a - b
a = a - b
print(a, b)
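
# The arithmetic swap above avoids a temporary variable, but in Python the
# idiomatic equivalent is tuple unpacking. A quick self-contained comparison
# showing both give the same result:

```python
a, b = 5, 6

# arithmetic swap, as above
a = a + b
b = a - b
a = a - b
assert (a, b) == (6, 5)

# idiomatic Python: tuple unpacking, also no temporary variable
a, b = b, a
assert (a, b) == (5, 6)
```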

# File: tests/assets/file1.py (repo: Jordan-Gillard/Versionizer, Apache-2.0 license)
def foo():
    return 3


def bar():
    return 1

# File: registration/models.py (repo: lig/django-registration-me, BSD-3-Clause license)
from documents import *

# File: s07/tools/DateTool.py (repo: hiparker/study-python, Apache-2.0 license)
import time


def getDate():
    """Get the current time (获得时间)."""
    return time.localtime()
b6e0a8f555b9f367026a91e92690df6a8c824f34 | 3,742 | py | Python | src/rdga_4k/__init__.py | aquinordg/cabidge | 576cb367dd20598848cd34ed25187cbcf6a8204e | [
"MIT"
] | null | null | null | src/rdga_4k/__init__.py | aquinordg/cabidge | 576cb367dd20598848cd34ed25187cbcf6a8204e | [
"MIT"
] | null | null | null | src/rdga_4k/__init__.py | aquinordg/cabidge | 576cb367dd20598848cd34ed25187cbcf6a8204e | [
"MIT"
] | 1 | 2022-03-23T23:55:46.000Z | 2022-03-23T23:55:46.000Z | def catbird(m, n, k, lmbd=.5, eps=.5, random_state=None):
# libraries
import numpy as np
import math
from numpy.random import RandomState
if random_state is None:
random_state = RandomState()
elif type(random_state) == int:
random_state = RandomState(random_state)
else:
assert type(random_state) == RandomState
# checking
assert type(m) == int and m > 1
assert type(n) == int and n > 1
assert type(k) == int and k > 1
assert type(lmbd) == float and (lmbd >= 0.0 and lmbd <= 1.0)
assert type(eps) == float and (eps >= 0.0 and eps <= 1.0)
# tools
def sigmoid(x):
return 1 / (1 + math.exp(-x))
def sig_f(x):
return [sigmoid(i) for i in x]
def binarize(x, lmbd):
xbin = []
for i in range(len(x)):
if x[i] < lmbd:
xbin.append(1)
else:
xbin.append(0)
return xbin
# initialization
s = int(n//2)+1
rem = n - s
n_samples_per_center = [int(m // k)] * k
for i in range(m % k):
n_samples_per_center[i] += 1
X = []
y = []
q = -1
for i in range(len(n_samples_per_center)):
W = random_state.normal(0, 1, (s, s))
idx = list(random_state.choice(list(range(n)), size=rem, replace=False))
idx.sort()
q += 1
for j in range(n_samples_per_center[i]):
A = [random_state.normal(0, 1, s)]
A_W = [[sum(a*b for a,b in zip(A_row,W_col)) for W_col in zip(*W)] for A_row in A]
A_W_sig = sig_f(A_W[0])
for l in idx:
A_W_sig.insert(l, eps)
A_W_sig_bin = binarize(A_W_sig, lmbd)
y.append(q)
X.append(A_W_sig_bin)
X = np.array(X, dtype=np.int64)
y = np.array(y, dtype=np.int64)
return X, y
def free_catbird(n, rate, feat_sig, lmbd=.5, eps=.5, random_state=None):
# libraries
import numpy as np
import math
from numpy.random import RandomState
if random_state is None:
random_state = RandomState()
elif type(random_state) == int:
random_state = RandomState(random_state)
else:
assert type(random_state) == RandomState
# checking
assert type(n) == int and n > 1
assert isinstance(rate, list)
assert isinstance(feat_sig, list) and max(feat_sig) <= n
assert type(lmbd) == float and (lmbd >= 0.0 and lmbd <= 1.0)
assert type(eps) == float and (eps >= 0.0 and eps <= 1.0)
# tools
def sigmoid(x):
return 1 / (1 + math.exp(-x))
def sig_f(x):
return [sigmoid(i) for i in x]
def binarize(x, lmbd):
xbin = []
for i in range(len(x)):
if x[i] < lmbd:
xbin.append(1)
else:
xbin.append(0)
return xbin
# initialization
X = []
y = []
q = -1
for i in range(len(rate)):
W = random_state.normal(0, 1, (feat_sig[i], feat_sig[i]))
idx = list(random_state.choice(list(range(n)), size=feat_sig[i], replace=False))
q += 1
for j in range(rate[i]):
A = [random_state.normal(0, 1, feat_sig[i])]
A_W = [[sum(a*b for a,b in zip(A_row,W_col)) for W_col in zip(*W)] for A_row in A]
A_W_sig = sig_f(A_W[0])
noise_list = [eps for i in range(n)]
for l in range(len(A_W_sig)):
noise_list[idx[l]] = A_W_sig[l]
A_W_sig_bin = binarize(noise_list, lmbd)
y.append(q)
X.append(A_W_sig_bin)
X = np.array(X, dtype=np.int64)
y = np.array(y, dtype=np.int64)
return X, y | 25.455782 | 106 | 0.528327 | 585 | 3,742 | 3.237607 | 0.14188 | 0.116156 | 0.026399 | 0.034847 | 0.817846 | 0.782471 | 0.7566 | 0.744456 | 0.694826 | 0.635692 | 0 | 0.023208 | 0.343666 | 3,742 | 147 | 107 | 25.455782 | 0.747964 | 0.021112 | 0 | 0.707071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 1 | 0.080808 | false | 0 | 0.060606 | 0.040404 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8e307a993349c2552e2b3e793629567c4e785de2 | 2,426 | py | Python | skforecast/utils/tests/test_preproces_exog.py | JavierEscobarOrtiz/skforecast | a3af4a1dd4201c582f159d4e3a1734ed6d29b6c5 | [
"MIT"
] | 1 | 2021-12-01T09:21:21.000Z | 2021-12-01T09:21:21.000Z | skforecast/utils/tests/test_preproces_exog.py | JavierEscobarOrtiz/skforecast | a3af4a1dd4201c582f159d4e3a1734ed6d29b6c5 | [
"MIT"
] | null | null | null | skforecast/utils/tests/test_preproces_exog.py | JavierEscobarOrtiz/skforecast | a3af4a1dd4201c582f159d4e3a1734ed6d29b6c5 | [
"MIT"
] | null | null | null | # Unit test preprocess_exog
# ==============================================================================
import pytest
import numpy as np
import pandas as pd
from skforecast.utils import preprocess_exog
def test_output_preprocess_exog_when_exog_index_is_DatetimeIndex_and_has_frequency():
'''
Test values returned by when exog is a pandas Series with DatetimeIndex
and freq is not None.
'''
exog = pd.Series(
data = np.arange(3),
index = pd.date_range("1990-01-01", periods=3, freq='D')
)
results = preprocess_exog(exog)
expected = (np.arange(3),
pd.DatetimeIndex(['1990-01-01', '1990-01-02', '1990-01-03'],
dtype='datetime64[ns]', freq='D')
)
assert (results[0] == expected[0]).all()
assert (results[1] == expected[1]).all()
def test_output_preprocess_exog_when_exog_index_is_RangeIndex():
'''
Test values returned by when exog is a pandas Series with RangeIndex
'''
exog = pd.Series(
data = np.arange(3),
index = pd.RangeIndex(start=0, stop=3, step=1)
)
results = preprocess_exog(exog)
expected = (np.arange(3),
pd.RangeIndex(start=0, stop=3, step=1)
)
assert (results[0] == expected[0]).all()
assert (results[1] == expected[1]).all()
def test_output_preprocess_exog_when_exog_index_is_DatetimeIndex_but_has_not_frequency():
'''
Test values returned by when exog is a pandas Series with DatetimeIndex
and freq is None.
'''
exog = pd.Series(
data = np.arange(3),
index = pd.to_datetime(["1990-01-01", "1990-01-02", "1990-01-03"])
)
results = preprocess_exog(exog)
expected = (np.arange(3),
pd.RangeIndex(start=0, stop=3, step=1)
)
assert (results[0] == expected[0]).all()
assert (results[1] == expected[1]).all()
def test_output_preprocess_exog_when_exog_index_is_not_DatetimeIndex_or_RangeIndex():
'''
Test values returned by when exog is a pandas Series without DatetimeIndex or RangeIndex.
'''
exog = pd.Series(data=np.arange(3))
results = preprocess_exog(exog)
expected = (np.arange(3),
pd.RangeIndex(start=0, stop=3, step=1)
)
assert (results[0] == expected[0]).all()
assert (results[1] == expected[1]).all() | 35.15942 | 93 | 0.590272 | 310 | 2,426 | 4.458065 | 0.196774 | 0.101302 | 0.052098 | 0.06657 | 0.828509 | 0.828509 | 0.828509 | 0.828509 | 0.777135 | 0.68741 | 0 | 0.052457 | 0.253504 | 2,426 | 69 | 94 | 35.15942 | 0.710657 | 0.184666 | 0 | 0.543478 | 0 | 0 | 0.04505 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f3f90730dddc62acd13e6b6206681b813b3239b0 | 101 | py | Python | challenges/2020/06-customCustoms/python/common.py | codemicro/adventOfCode | 53574532ece1d19e5f5ba2f39e8e183c4c6225a1 | [
"MIT"
] | 9 | 2020-12-06T23:18:30.000Z | 2021-12-19T22:31:26.000Z | challenges/2020/06-customCustoms/python/common.py | codemicro/adventOfCode | 53574532ece1d19e5f5ba2f39e8e183c4c6225a1 | [
"MIT"
] | null | null | null | challenges/2020/06-customCustoms/python/common.py | codemicro/adventOfCode | 53574532ece1d19e5f5ba2f39e8e183c4c6225a1 | [
"MIT"
] | 3 | 2020-12-08T09:45:44.000Z | 2020-12-15T19:20:20.000Z | from typing import List
def parse(instr: str) -> List[str]:
return instr.strip().split("\n\n")
| 16.833333 | 38 | 0.653465 | 16 | 101 | 4.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168317 | 101 | 5 | 39 | 20.2 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
6d1a916a85f37bad9f2f6f58358714473bf026ad | 164 | py | Python | web/templatetags/pagination.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | 2 | 2018-11-16T21:45:17.000Z | 2019-02-03T19:55:46.000Z | web/templatetags/pagination.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | 13 | 2018-08-17T19:12:11.000Z | 2022-03-11T23:27:41.000Z | web/templatetags/pagination.py | zinaukarenku/zkr-platform | 8daf7d1206c482f1f8e0bcd54d4fde783e568774 | [
"Apache-2.0"
] | null | null | null | from django.template import Library
register = Library()
@register.simple_tag
def get_page_link(paginator_page, page):
return paginator_page.page_link(page)
| 18.222222 | 41 | 0.79878 | 23 | 164 | 5.434783 | 0.608696 | 0.24 | 0.272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 164 | 8 | 42 | 20.5 | 0.868056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6d6483e83a39798cfdc5582ab41bab24ef98a98c | 63 | py | Python | ctpbee/interface/sim/__init__.py | touqi/ctpbee | 72a01b818523ac90792db9431439ef5e5b312774 | [
"MIT"
] | 1 | 2020-07-27T15:05:40.000Z | 2020-07-27T15:05:40.000Z | ctpbee/interface/sim/__init__.py | TouQi/ctpbee | 72a01b818523ac90792db9431439ef5e5b312774 | [
"MIT"
] | null | null | null | ctpbee/interface/sim/__init__.py | TouQi/ctpbee | 72a01b818523ac90792db9431439ef5e5b312774 | [
"MIT"
] | null | null | null | from .md_api import SimMarket
from .td_api import SimInterface
| 21 | 32 | 0.84127 | 10 | 63 | 5.1 | 0.7 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 63 | 2 | 33 | 31.5 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eddfcd0f9282ce5315ad698c04305394027f4e91 | 23 | py | Python | detectionModules/wifi/__init__.py | Impeekay/shop-analytics-pi | 4e02068775b700da3f0e01a612fdc5cc29c85eaf | [
"MIT"
] | 1 | 2020-12-12T07:00:03.000Z | 2020-12-12T07:00:03.000Z | detectionModules/wifi/__init__.py | Impeekay/shop-analytics-pi | 4e02068775b700da3f0e01a612fdc5cc29c85eaf | [
"MIT"
] | 7 | 2020-11-13T18:47:55.000Z | 2022-03-12T00:30:13.000Z | detectionModules/wifi/__init__.py | Impeekay/shop-analytics-pi | 4e02068775b700da3f0e01a612fdc5cc29c85eaf | [
"MIT"
] | 3 | 2020-05-11T06:59:28.000Z | 2020-06-08T16:59:54.000Z | from .main import WiFi
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6109d3d1a7ca37ba56d50ebff1d9b4bc59a0207f | 834 | py | Python | tests/conftest.py | IanBrown00/pytest-play | 7928f7edc37ebbf78f67d094db398bbb687301c6 | [
"MIT"
] | null | null | null | tests/conftest.py | IanBrown00/pytest-play | 7928f7edc37ebbf78f67d094db398bbb687301c6 | [
"MIT"
] | null | null | null | tests/conftest.py | IanBrown00/pytest-play | 7928f7edc37ebbf78f67d094db398bbb687301c6 | [
"MIT"
] | null | null | null |
import statistics
import time
from typing import Union
import pytest
MIN = 0
MAX = 4
@pytest.fixture(scope="session")
def data():
# TODO: Parameterize for int and float types
return range(MIN, MAX + 1, 1)
@pytest.fixture(scope="session", params=[int, float])
def max_val(request) -> Union[int, float]:
return request.param(MAX)
@pytest.fixture(scope="session", params=[int, float])
def min_val(request) -> Union[int, float]:
return request.param(MIN)
@pytest.fixture(scope="session")
def mean_val(data, request) -> Union[int, float]:
return statistics.mean([float(val) for val in data])
@pytest.fixture(scope="session")
def stdev_val(data, request) -> Union[int, float]:
return statistics.pstdev([float(val) for val in data])
@pytest.fixture(scope="function")
def slow():
time.sleep(0.1)
yield | 23.828571 | 58 | 0.701439 | 122 | 834 | 4.762295 | 0.311475 | 0.134251 | 0.185886 | 0.215146 | 0.678141 | 0.564544 | 0.564544 | 0.564544 | 0.130809 | 0 | 0 | 0.008451 | 0.148681 | 834 | 35 | 59 | 23.828571 | 0.809859 | 0.05036 | 0 | 0.2 | 0 | 0 | 0.05443 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.24 | false | 0 | 0.16 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b61a95a5d89698fecdbb11aae4533408bde7c6cb | 48 | py | Python | boards/stm32/NUCLEO_H743ZI2_MICROLITE/manifest.py | mattiantonini/tensorflow-micropython-examples | aab677bce2fcc239dab94b765155b767a8c29d2e | [
"MIT"
] | null | null | null | boards/stm32/NUCLEO_H743ZI2_MICROLITE/manifest.py | mattiantonini/tensorflow-micropython-examples | aab677bce2fcc239dab94b765155b767a8c29d2e | [
"MIT"
] | null | null | null | boards/stm32/NUCLEO_H743ZI2_MICROLITE/manifest.py | mattiantonini/tensorflow-micropython-examples | aab677bce2fcc239dab94b765155b767a8c29d2e | [
"MIT"
] | null | null | null | freeze("$(BOARD_DIR)/../../src/utils", "xxd.py") | 48 | 48 | 0.583333 | 7 | 48 | 3.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 48 | 1 | 48 | 48 | 0.574468 | 0 | 0 | 0 | 0 | 0 | 0.693878 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b623f46cd17c2edd56d4a3edf21576d77f34d1c1 | 261 | py | Python | virtool_workflow_runtime/autoload.py | igboyes/virtool-workflow | 1ef9a4b0bada1963ff9be0470dfe74b32c9e7ccf | [
"MIT"
] | null | null | null | virtool_workflow_runtime/autoload.py | igboyes/virtool-workflow | 1ef9a4b0bada1963ff9be0470dfe74b32c9e7ccf | [
"MIT"
] | null | null | null | virtool_workflow_runtime/autoload.py | igboyes/virtool-workflow | 1ef9a4b0bada1963ff9be0470dfe74b32c9e7ccf | [
"MIT"
] | null | null | null | """Workflow fixtures from this module are implicitly loaded by the runtime."""
__fixtures__ = [
"virtool_workflow.execute",
"virtool_workflow_runtime.db.db",
"virtool_workflow_runtime.db.fixtures",
"virtool_workflow_runtime.config.fixtures",
]
| 29 | 78 | 0.750958 | 30 | 261 | 6.166667 | 0.5 | 0.324324 | 0.356757 | 0.259459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 261 | 8 | 79 | 32.625 | 0.822222 | 0.275862 | 0 | 0 | 0 | 0 | 0.710383 | 0.710383 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fceb315aa70b7484353338068f2b73a863a806f6 | 69 | py | Python | coindeblend/models/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 3 | 2021-09-03T10:10:03.000Z | 2021-09-03T20:01:03.000Z | coindeblend/models/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 3 | 2021-08-25T15:47:28.000Z | 2022-02-10T00:19:44.000Z | coindeblend/models/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 2 | 2020-09-28T18:35:59.000Z | 2020-10-01T14:08:10.000Z | from .unet import *
from .decompnet import *
from .seqstack import *
| 17.25 | 24 | 0.73913 | 9 | 69 | 5.666667 | 0.555556 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 69 | 3 | 25 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |