# app/main/errors.py (repo: fossabot/upland, license: MIT)
from flask import render_template
from . import main


@main.app_errorhandler(403)
def forbidden(e):
    return render_template('error/403.html'), 403


@main.app_errorhandler(404)
def page_not_found(e):
    return render_template('error/404.html'), 404


@main.app_errorhandler(500)
def internal_server_error(e):
    return render_template('error/500.html'), 500
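# The @main.app_errorhandler registrations above rely on Flask's decorator
# registry. As a minimal, Flask-free sketch of that registration pattern
# (hypothetical names; not Flask's actual implementation), a dict keyed by
# status code plays the same role:

_handlers = {}


def errorhandler(code):
    """Register the decorated function as the handler for `code`."""
    def decorator(func):
        _handlers[code] = func
        return func
    return decorator


@errorhandler(404)
def not_found(e):
    return 'error/404.html', 404


def dispatch(code, exc=None):
    """Look up and invoke the registered handler, mimicking Flask's dispatch."""
    return _handlers[code](exc)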

# Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed/Sixgill_Darkfeed_test.py
# (repo: cbrake1/content, license: MIT)
import requests
import pytest
import json
import demistomock as demisto
bundle_index = 0
submitted_indicators = 0
mocked_get_token_response = '''{"access_token": "fababfafbh"}'''
iocs_bundle = [{"id": "bundle--716fd67b-ba74-44db-8d4c-2efde05ddbaa",
"objects": [
{"created": "2017-01-20T00:00:00.000Z", "definition": {"tlp": "amber"}, "definition_type": "tlp",
"id": "marking-definition--f88d31f6-486f-44da-b317-01333bde0b82", "type": "marking-definition"},
{"created": "2019-12-26T00:00:00Z",
"definition": {"statement": "Copyright Sixgill 2020. All rights reserved."},
"definition_type": "statement", "id": "marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"type": "marking-definition"},
{"created": "2020-01-09T07:31:16.708Z",
"description": "Shell access to this domain is being sold on dark web markets",
"id": "indicator--7fc19d6d-2d58-45d6-a410-85554b12aea9",
"kill_chain_phases": [
{"kill_chain_name": "lockheed-martin-cyber-kill-chain", "phase_name": "weaponization"}],
"labels": ["compromised"], "lang": "en",
"modified": "2020-01-09T07:31:16.708Z",
"object_marking_refs": ["marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"],
"pattern": "[file:hashes.MD5 = '8f8ff6b696859c3afe7936c345b098bd' OR "
"file:hashes.'SHA-1' = '9bb88f703e234a89ff523514a5c676ac12ae6225' OR "
"file:hashes.'SHA-256' = "
"'9cd46027d63c36e53f4347d43554336c2ea050d38be3ff9a608cb94cca6ab74b']",
"sixgill_actor": "some_actor", "sixgill_confidence": 90, "sixgill_feedid": "darkfeed_002",
"sixgill_feedname": "compromised_sites",
"sixgill_postid": "6e407c41fe6591d591cd8bbf0d105f7c15ed8991",
"sixgill_posttitle": "Credit Card Debt Help, somewebsite.com",
"sixgill_severity": 70, "sixgill_source": "market_magbo", "spec_version": "2.0",
"type": "indicator",
"valid_from": "2019-12-07T00:57:04Z"},
{"created": "2020-01-09T07:31:16.824Z",
"description": "Shell access to this domain is being sold on dark web markets",
"id": "indicator--67b2378f-cbdd-4263-b1c4-668014d376f2",
"kill_chain_phases": [
{"kill_chain_name": "lockheed-martin-cyber-kill-chain", "phase_name": "weaponization"}],
"labels": ["compromised"], "lang": "ru",
"modified": "2020-01-09T07:31:16.824Z",
"object_marking_refs": ["marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"],
"pattern": "[ipv4-addr:value = '121.165.45.1']", "sixgill_actor": "some_actor",
"sixgill_confidence": 90, "sixgill_feedid": "darkfeed_004",
"sixgill_feedname": "compromised_sites",
"sixgill_postid": "59f08fbf692f84f15353a5e946d2a1cebab92418",
"sixgill_posttitle": "somewebsite.com",
"sixgill_severity": 70, "sixgill_source": "market_magbo", "spec_version": "2.0",
"type": "indicator",
"valid_from": "2019-12-06T17:10:04Z"},
{"created": "2020-01-09T07:31:16.757Z",
"description": "Shell access to this domain is being sold on dark web markets",
"id": "indicator--6e8b5f57-3ee2-4c4a-9283-8547754dfa09",
"kill_chain_phases": [
{"kill_chain_name": "lockheed-martin-cyber-kill-chain", "phase_name": "weaponization"}],
"labels": ["url"], "lang": "en",
"modified": "2020-01-09T07:31:16.757Z",
"object_marking_refs": ["marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"],
"pattern": "[url:value = 'http://somewebsite.rar[.]html']", "sixgill_actor": "some_actor",
"sixgill_confidence": 90, "sixgill_feedid": "darkfeed_010",
"sixgill_feedname": "compromised_sites",
"sixgill_postid": "f46cdfc3332d9a04aa63078d82c1e453fd76ba50",
"sixgill_posttitle": "somewebsite.com", "sixgill_severity": 70,
"sixgill_source": "market_magbo", "spec_version": "2.0", "type": "indicator",
"valid_from": "2019-12-06T23:24:51Z"},
{"created": "2020-01-09T07:31:16.834Z",
"description": "Shell access to this domain is being sold on dark web markets",
"id": "indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d",
"kill_chain_phases": [
{"kill_chain_name": "lockheed-martin-cyber-kill-chain", "phase_name": "weaponization"}],
"labels": ["ip"], "lang": "en",
"modified": "2020-01-09T07:31:16.834Z",
"object_marking_refs": ["marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"],
"pattern": "[ipv4-addr:value = '31.31.77.83']", "sixgill_actor": "some_actor",
"sixgill_confidence": 60, "sixgill_feedid": "darkfeed_005",
"sixgill_feedname": "compromised_sites",
"sixgill_postid": "c3f266e67f163e1a6181c0789e225baba89212a2",
"sixgill_posttitle": "somewebsite.com",
"sixgill_severity": 70, "sixgill_source": "market_magbo", "spec_version": "2.0",
"type": "indicator",
"valid_from": "2019-12-06T14:37:16Z"},
{"created": "2020-01-09T07:31:16.834Z",
"description": "Shell access to this domain is being sold on dark web markets",
"id": "indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d",
"kill_chain_phases": [
{"kill_chain_name": "lockheed-martin-cyber-kill-chain", "phase_name": "weaponization"}],
"labels": ["malware hash"], "lang": "en",
"modified": "2020-01-09T07:31:16.834Z",
"object_marking_refs": ["marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"],
"pattern": "[file:hashes.MD5 = '2f4e41ea7006099f365942349b05a269' OR "
"file:hashes.'SHA-1' = '835e4574e01c12552c2a3b62b942d177c4d7aaca' OR "
"file:hashes.'SHA-256' = 'a925164d6c0c479967b3d9870267a03adf65e8145']",
"sixgill_actor": "some_actor",
"sixgill_confidence": 80, "sixgill_feedid": "darkfeed_002",
"sixgill_feedname": "compromised_sites",
"sixgill_postid": "c3f266e67f163e1a6181c0789e225baba89212a2",
"sixgill_posttitle": "somewebsite.com",
"sixgill_severity": 70, "sixgill_source": "market_magbo", "spec_version": "2.0",
"type": "indicator",
"valid_from": "2019-12-06T14:37:16Z"},
{"created": "2020-02-09T06:41:41.266Z",
"description": "IP address was listed as a proxy",
"external_reference": [
{
"description": "Mitre attack tactics and technique reference",
"mitre_attack_tactic": "Adversary OPSEC",
"mitre_attack_tactic_id": "TA0021",
"mitre_attack_tactic_url": "https://attack.mitre.org/tactics/TA0021/",
"mitre_attack_technique": "Proxy/protocol relays",
"mitre_attack_technique_id": "T1304",
"mitre_attack_technique_url": "https://attack.mitre.org/techniques/T1304/",
"source_name": "mitre-attack"
}
],
"id": "indicator--2ed98497-cef4-468c-9cee-4f05292b5142",
"labels": [
"anonymization",
],
"lang": "en",
"modified": "2020-02-09T06:41:41.266Z",
"object_marking_refs": [
"marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"marking-definition--f88d31f6-486f-44da-b317-01333bde0b82"
],
"pattern": "[ipv4-addr:value = '182.253.121.14']",
"sixgill_actor": "LunarEclipsed",
"sixgill_confidence": 70,
"sixgill_feedid": "darkfeed_009",
"sixgill_feedname": "proxy_ips",
"sixgill_postid": "00f74eea142e746415457d0dd4a4fc747add3a1b",
"sixgill_posttitle": "✅ 9.7K HTTP/S PROXY LIST (FRESH) ✅",
"sixgill_severity": 40,
"sixgill_source": "forum_nulled",
"spec_version": "2.0",
"type": "indicator",
"valid_from": "2020-01-25T21:08:25Z"
}
],
"spec_version": "2.0",
"type": "bundle"},
{"id": "bundle--716fd67b-ba74-44db-8d4c-2efde05ddbaa",
"objects": [
{"created": "2017-01-20T00:00:00.000Z", "definition": {"tlp": "amber"}, "definition_type": "tlp",
"id": "marking-definition--f88d31f6-486f-44da-b317-01333bde0b82", "type": "marking-definition"},
{"created": "2019-12-26T00:00:00Z",
"definition": {"statement": "Copyright Sixgill 2020. All rights reserved."},
"definition_type": "statement", "id": "marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4",
"type": "marking-definition"}
],
"spec_version": "2.0",
"type": "bundle"}
]
expected_ioc_output = [{'value': '9cd46027d63c36e53f4347d43554336c2ea050d38be3ff9a608cb94cca6ab74b', 'type': 'File',
'rawJSON': {'created': '2020-01-09T07:31:16.708Z',
'description': 'Shell access to this domain is being sold on dark web markets',
'id': 'indicator--7fc19d6d-2d58-45d6-a410-85554b12aea9', 'kill_chain_phases':
[
{'kill_chain_name': 'lockheed-martin-cyber-kill-chain',
'phase_name': 'weaponization'}],
'labels': ['compromised'], 'lang': 'en',
'modified': '2020-01-09T07:31:16.708Z',
'object_marking_refs': ['marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[file:hashes.MD5 = '8f8ff6b696859c3afe7936c345b098bd' OR "
"file:hashes.'SHA-1' = '9bb88f703e234a89ff523514a5c676ac12ae6225' OR "
"file:hashes.'SHA-256' = "
"'9cd46027d63c36e53f4347d43554336c2ea050d38be3ff9a608cb94cca6ab74b']",
'sixgill_actor': 'some_actor', 'sixgill_confidence': 90,
'sixgill_feedid': 'darkfeed_002', 'sixgill_feedname': 'compromised_sites',
'sixgill_postid': '6e407c41fe6591d591cd8bbf0d105f7c15ed8991',
'sixgill_posttitle': 'Credit Card Debt Help, somewebsite.com',
'sixgill_severity': 70, 'sixgill_source': 'market_magbo', 'spec_version': '2.0',
'type': 'indicator', 'valid_from': '2019-12-07T00:57:04Z'},
'fields': {'name': 'compromised_sites', 'actor': 'some_actor',
'tags': ['compromised'],
'firstseenbysource': '2020-01-09T07:31:16.708Z',
'description': 'Description: Shell access to this domain is being sold on dark web '
'markets\nCreated On: 2020-01-09T07:31:16.708Z\nPost '
'Title: Credit Card Debt Help, somewebsite.com\nThreat '
'Actor Name: some_actor\nSource: market_magbo\nSixgill '
'Feed ID: darkfeed_002\nSixgill Feed Name: compromised_sites\n'
'Sixgill Post ID: 6e407c41fe6591d591cd8bbf0d105f7c15ed8991\n'
'Language: en\n'
'Indicator ID: indicator--7fc19d6d-2d58-45d6-a410-85554b12aea9\n'
'External references (e.g. MITRE ATT&CK): None\n',
'sixgillactor': 'some_actor', 'sixgillfeedname': 'compromised_sites',
'sixgillsource': 'market_magbo', 'sixgilllanguage': 'en',
'sixgillposttitle': 'Credit Card Debt Help, somewebsite.com',
'sixgillfeedid': 'darkfeed_002',
'sixgillpostreference': 'https://portal.cybersixgill.com/#/search?q='
'_id:6e407c41fe6591d591cd8bbf0d105f7c15ed8991',
'sixgillindicatorid': 'indicator--7fc19d6d-2d58-45d6-a410-85554b12aea9',
'sixgilldescription': 'Shell access to this domain is being sold on '
'dark web markets',
'sixgillvirustotaldetectionrate': None, 'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': None, 'sixgillmitreattcktechnique': None,
'md5': '8f8ff6b696859c3afe7936c345b098bd',
'sha1': '9bb88f703e234a89ff523514a5c676ac12ae6225',
'sha256': '9cd46027d63c36e53f4347d43554336c2ea050d38be3ff9a608cb94cca6ab74b'},
'score': 3}, {'value': '121.165.45.1', 'type': 'IP',
'rawJSON': {'created': '2020-01-09T07:31:16.824Z',
'description': 'Shell access to this domain is being sold on '
'dark web markets',
'id': 'indicator--67b2378f-cbdd-4263-b1c4-668014d376f2',
'kill_chain_phases': [
{'kill_chain_name': 'lockheed-martin-cyber-kill-chain',
'phase_name': 'weaponization'}],
'labels': ['compromised'], 'lang': 'ru',
'modified': '2020-01-09T07:31:16.824Z', 'object_marking_refs':
[
'marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[ipv4-addr:value = '121.165.45.1']",
'sixgill_actor': 'some_actor', 'sixgill_confidence': 90,
'sixgill_feedid': 'darkfeed_004',
'sixgill_feedname': 'compromised_sites',
'sixgill_postid': '59f08fbf692f84f15353a5e946d2a1cebab92418',
'sixgill_posttitle': 'somewebsite.com', 'sixgill_severity': 70,
'sixgill_source': 'market_magbo', 'spec_version': '2.0',
'type': 'indicator', 'valid_from': '2019-12-06T17:10:04Z'},
'fields': {'name': 'compromised_sites', 'actor': 'some_actor',
'tags': ['compromised'],
'firstseenbysource': '2020-01-09T07:31:16.824Z',
'description': 'Description: Shell access to this domain is being '
'sold on dark web markets\n'
'Created On: 2020-01-09T07:31:16.824Z\n'
'Post Title: somewebsite.com\n'
'Threat Actor Name: some_actor\n'
'Source: market_magbo\nSixgill Feed ID: darkfeed_004\n'
'Sixgill Feed Name: compromised_sites\n'
'Sixgill Post ID: '
'59f08fbf692f84f15353a5e946d2a1cebab92418\n'
'Language: ru\n'
'Indicator ID: '
'indicator--67b2378f-cbdd-4263-b1c4-668014d376f2\n'
'External references (e.g. MITRE ATT&CK): None\n',
'sixgillactor': 'some_actor', 'sixgillfeedname': 'compromised_sites',
'sixgillsource': 'market_magbo', 'sixgilllanguage': 'ru',
'sixgillposttitle': 'somewebsite.com', 'sixgillfeedid': 'darkfeed_004',
'sixgillpostreference': 'https://portal.cybersixgill.com/#/search?q='
'_id:59f08fbf692f84f15353a5e946d2a1cebab92418',
'sixgillindicatorid':
'indicator--67b2378f-cbdd-4263-b1c4-668014d376f2',
'sixgilldescription': 'Shell access to this domain is being sold '
'on dark web markets',
'sixgillvirustotaldetectionrate': None, 'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': None, 'sixgillmitreattcktechnique': None},
'score': 3}, {'value': 'http://somewebsite.rar.html', 'type': 'URL',
'rawJSON': {'created': '2020-01-09T07:31:16.757Z',
'description': 'Shell access to this domain is '
'being sold on dark web markets',
'id': 'indicator--6e8b5f57-3ee2-4c4a-9283-8547754dfa09',
'kill_chain_phases':
[{
'kill_chain_name':
'lockheed-martin-cyber-kill-chain',
'phase_name': 'weaponization'}],
'labels': ['url'], 'lang': 'en',
'modified': '2020-01-09T07:31:16.757Z',
'object_marking_refs': [
'marking-definition--'
'41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--'
'f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[url:value = "
"'http://somewebsite.rar[.]html']",
'sixgill_actor': 'some_actor', 'sixgill_confidence': 90,
'sixgill_feedid': 'darkfeed_010',
'sixgill_feedname': 'compromised_sites',
'sixgill_postid':
'f46cdfc3332d9a04aa63078d82c1e453fd76ba50',
'sixgill_posttitle': 'somewebsite.com',
'sixgill_severity': 70,
'sixgill_source': 'market_magbo', 'spec_version': '2.0',
'type': 'indicator',
'valid_from': '2019-12-06T23:24:51Z'},
'fields': {'name': 'compromised_sites', 'actor': 'some_actor',
'tags': ['url'],
'firstseenbysource': '2020-01-09T07:31:16.757Z',
'description': 'Description: Shell access to this '
'domain is being sold on dark '
'web markets\n'
'Created On: 2020-01-09T07:31:16.757Z\n'
'Post Title: somewebsite.com\n'
'Threat Actor Name: some_actor\n'
'Source: market_magbo\n'
'Sixgill Feed ID: darkfeed_010\n'
'Sixgill Feed Name: '
'compromised_sites\n'
'Sixgill Post ID: '
'f46cdfc3332d9a04aa63078d82c1e453fd76ba50'
'\nLanguage: en\n'
'Indicator ID: indicator--'
'6e8b5f57-3ee2-4c4a-9283-8547754dfa09\n'
'External references '
'(e.g. MITRE ATT&CK): None\n',
'sixgillactor': 'some_actor',
'sixgillfeedname': 'compromised_sites',
'sixgillsource': 'market_magbo', 'sixgilllanguage': 'en',
'sixgillposttitle': 'somewebsite.com',
'sixgillfeedid': 'darkfeed_010',
'sixgillpostreference':
'https://portal.cybersixgill.com/#/search?q='
'_id:f46cdfc3332d9a04aa63078d82c1e453fd76ba50',
'sixgillindicatorid':
'indicator--6e8b5f57-3ee2-4c4a-9283-8547754dfa09',
'sixgilldescription': 'Shell access to this domain is '
'being sold on dark web markets',
'sixgillvirustotaldetectionrate': None,
'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': None,
'sixgillmitreattcktechnique': None}, 'score': 3},
{'value': '31.31.77.83', 'type': 'IP', 'rawJSON': {'created': '2020-01-09T07:31:16.834Z',
'description': 'Shell access to this domain '
'is being sold on '
'dark web markets',
'id':
'indicator--85d3d87b-76ed-'
'4cab-b709-a43dfbdc5d8d',
'kill_chain_phases':
[{'kill_chain_name':
'lockheed-martin-cyber-kill-chain',
'phase_name': 'weaponization'}],
'labels': ['ip'], 'lang': 'en',
'modified': '2020-01-09T07:31:16.834Z',
'object_marking_refs': [
'marking-definition--'
'41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--'
'f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[ipv4-addr:value = "
"'31.31.77.83']",
'sixgill_actor': 'some_actor',
'sixgill_confidence': 60,
'sixgill_feedid': 'darkfeed_005',
'sixgill_feedname': 'compromised_sites',
'sixgill_postid': 'c3f266e67f163e1a6'
'181c0789e225baba89212a2',
'sixgill_posttitle': 'somewebsite.com',
'sixgill_severity': 70,
'sixgill_source': 'market_magbo',
'spec_version': '2.0', 'type': 'indicator',
'valid_from': '2019-12-06T14:37:16Z'},
'fields': {'name': 'compromised_sites', 'actor': 'some_actor', 'tags': ['ip'],
'firstseenbysource': '2020-01-09T07:31:16.834Z',
'description': 'Description: Shell access to this domain is being sold on '
'dark web markets\nCreated On: 2020-01-09T07:31:16.834Z\n'
'Post Title: somewebsite.com\nThreat Actor Name: some_actor\n'
'Source: market_magbo\nSixgill Feed ID: darkfeed_005\n'
'Sixgill Feed Name: compromised_sites\n'
'Sixgill Post ID: c3f266e67f163e1a6181c0789e225baba89212a2\n'
'Language: en\nIndicator ID: '
'indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d\n'
'External references (e.g. MITRE ATT&CK): None\n',
'sixgillactor': 'some_actor', 'sixgillfeedname': 'compromised_sites',
'sixgillsource': 'market_magbo', 'sixgilllanguage': 'en',
'sixgillposttitle': 'somewebsite.com', 'sixgillfeedid': 'darkfeed_005',
'sixgillpostreference': 'https://portal.cybersixgill.com/#/search?q='
'_id:c3f266e67f163e1a6181c0789e225baba89212a2',
'sixgillindicatorid': 'indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d',
'sixgilldescription': 'Shell access to this domain is being sold on '
'dark web markets',
'sixgillvirustotaldetectionrate': None, 'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': None, 'sixgillmitreattcktechnique': None}, 'score': 3},
{'value': 'a925164d6c0c479967b3d9870267a03adf65e8145', 'type': 'File',
'rawJSON': {'created': '2020-01-09T07:31:16.834Z',
'description': 'Shell access to this domain is being sold on dark web markets',
'id': 'indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d', 'kill_chain_phases': [{
'kill_chain_name': 'lockheed-martin-cyber-kill-chain',
'phase_name': 'weaponization'}],
'labels': ['malware hash'], 'lang': 'en',
'modified': '2020-01-09T07:31:16.834Z',
'object_marking_refs': ['marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[file:hashes.MD5 = '2f4e41ea7006099f365942349b05a269' OR "
"file:hashes.'SHA-1' = '835e4574e01c12552c2a3b62b942d177c4d7aaca' OR "
"file:hashes.'SHA-256' = 'a925164d6c0c479967b3d9870267a03adf65e8145']",
'sixgill_actor': 'some_actor', 'sixgill_confidence': 80,
'sixgill_feedid': 'darkfeed_002', 'sixgill_feedname': 'compromised_sites',
'sixgill_postid': 'c3f266e67f163e1a6181c0789e225baba89212a2',
'sixgill_posttitle': 'somewebsite.com', 'sixgill_severity': 70,
'sixgill_source': 'market_magbo', 'spec_version': '2.0', 'type': 'indicator',
'valid_from': '2019-12-06T14:37:16Z'},
'fields': {'name': 'compromised_sites', 'actor': 'some_actor',
'tags': ['malware hash'],
'firstseenbysource': '2020-01-09T07:31:16.834Z',
'description': 'Description: Shell access to this domain is being sold on dark '
'web markets\nCreated On: 2020-01-09T07:31:16.834Z\n'
'Post Title: somewebsite.com\nThreat Actor Name: some_actor\n'
'Source: market_magbo\nSixgill Feed ID: darkfeed_002\n'
'Sixgill Feed Name: compromised_sites\n'
'Sixgill Post ID: c3f266e67f163e1a6181c0789e225baba89212a2\n'
'Language: en\nIndicator ID: '
'indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d\n'
'External references (e.g. MITRE ATT&CK): None\n',
'sixgillactor': 'some_actor', 'sixgillfeedname': 'compromised_sites',
'sixgillsource': 'market_magbo', 'sixgilllanguage': 'en',
'sixgillposttitle': 'somewebsite.com', 'sixgillfeedid': 'darkfeed_002',
'sixgillpostreference': 'https://portal.cybersixgill.com/#/search?q='
'_id:c3f266e67f163e1a6181c0789e225baba89212a2',
'sixgillindicatorid': 'indicator--85d3d87b-76ed-4cab-b709-a43dfbdc5d8d',
'sixgilldescription': 'Shell access to this domain is being sold on dark'
' web markets',
'sixgillvirustotaldetectionrate': None, 'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': None, 'sixgillmitreattcktechnique': None,
'md5': '2f4e41ea7006099f365942349b05a269',
'sha1': '835e4574e01c12552c2a3b62b942d177c4d7aaca',
'sha256': 'a925164d6c0c479967b3d9870267a03adf65e8145'}, 'score': 3},
{'value': '182.253.121.14', 'type': 'IP',
'rawJSON': {'created': '2020-02-09T06:41:41.266Z',
'description': 'IP address was listed '
'as a proxy',
'external_reference':
[{'description': 'Mitre attack tactics and technique reference',
'mitre_attack_tactic': 'Adversary OPSEC',
'mitre_attack_tactic_id': 'TA0021',
'mitre_attack_tactic_url': 'https://attack.mitre.org/tactics/TA0021/',
'mitre_attack_technique': 'Proxy/protocol relays',
'mitre_attack_technique_id': 'T1304',
'mitre_attack_technique_url': 'https://attack.mitre.org/techniques/T1304/',
'source_name': 'mitre-attack'}],
'id': 'indicator--2ed98497-cef4'
'-468c-9cee-4f05292b5142',
'labels': ['anonymization'],
'lang': 'en',
'modified': '2020-02-09T06:41:41.266Z',
'object_marking_refs': [
'marking-definition--41eaaf7c-0bc0-4c56-abdf-d89a7f096ac4',
'marking-definition--f88d31f6-486f-44da-b317-01333bde0b82'],
'pattern': "[ipv4-addr:value = '182.253.121.14']",
'sixgill_actor': 'LunarEclipsed',
'sixgill_confidence': 70,
'sixgill_feedid': 'darkfeed_009',
'sixgill_feedname': 'proxy_ips',
'sixgill_postid': '00f74eea142e746415457d0dd4a4fc747add3a1b',
'sixgill_posttitle': '✅ 9.7K HTTP/S PROXY LIST (FRESH) ✅',
'sixgill_severity': 40,
'sixgill_source': 'forum_nulled',
'spec_version': '2.0', 'type': 'indicator',
'valid_from': '2020-01-25T21:08:25Z'},
'fields': {'name': 'proxy_ips', 'actor': 'LunarEclipsed',
'tags': ['anonymization'],
'firstseenbysource': '2020-02-09T06:41:41.266Z',
'description': "Description: IP address was listed as a proxy\n"
"Created On: 2020-02-09T06:41:41.266Z\n"
"Post Title: ✅ 9.7K HTTP/S PROXY LIST (FRESH) ✅\n"
"Threat Actor Name: LunarEclipsed\nSource: forum_nulled\n"
"Sixgill Feed ID: darkfeed_009\nSixgill Feed Name: proxy_ips\n"
"Sixgill Post ID: 00f74eea142e746415457d0dd4a4fc747add3a1b\n"
"Language: en\nIndicator ID: "
"indicator--2ed98497-cef4-468c-9cee-4f05292b5142\n"
"External references (e.g. MITRE ATT&CK): "
"[{'description': 'Mitre attack tactics and technique reference', "
"'mitre_attack_tactic': 'Adversary OPSEC', "
"'mitre_attack_tactic_id': 'TA0021', 'mitre_attack_tactic_url': "
"'https://attack.mitre.org/tactics/TA0021/', "
"'mitre_attack_technique': 'Proxy/protocol relays', "
"'mitre_attack_technique_id': 'T1304', "
"'mitre_attack_technique_url': "
"'https://attack.mitre.org/techniques/T1304/', "
"'source_name': 'mitre-attack'}]\n",
'sixgillactor': 'LunarEclipsed', 'sixgillfeedname': 'proxy_ips',
'sixgillsource': 'forum_nulled', 'sixgilllanguage': 'en',
'sixgillposttitle': '✅ 9.7K HTTP/S PROXY LIST (FRESH) ✅',
'sixgillfeedid': 'darkfeed_009',
'sixgillpostreference': 'https://portal.cybersixgill.com/#/search?q='
'_id:00f74eea142e746415457d0dd4a4fc747add3a1b',
'sixgillindicatorid': 'indicator--2ed98497-cef4-468c-9cee-4f05292b5142',
'sixgilldescription': 'IP address was listed as a proxy',
'sixgillvirustotaldetectionrate': None, 'sixgillvirustotalurl': None,
'sixgillmitreattcktactic': 'Adversary OPSEC',
'sixgillmitreattcktechnique': 'Proxy/protocol relays',
'feedrelatedindicators': [{'type': 'MITRE ATT&CK', 'value': 'TA0021',
'description':
'https://attack.mitre.org/tactics/TA0021/'}]},
'score': 3}]
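# The fixtures above exercise two transformations the integration is expected
# to perform: pulling MD5/SHA-1/SHA-256 hashes out of a STIX 2.0 file pattern
# into the md5/sha1/sha256 fields, and "refanging" defanged URLs
# ('rar[.]html' -> 'rar.html'). A minimal stdlib-only sketch of both (the real
# integration's parsing may differ):

import re


def extract_hashes(pattern):
    """Map STIX hash keys (e.g. file:hashes.'SHA-256') to their hex values."""
    return {key.strip("'").lower().replace('-', ''): value
            for key, value in
            re.findall(r"file:hashes\.('?[\w-]+'?) = '([0-9a-f]+)'", pattern)}


def refang(url):
    """Replace the defanged '[.]' marker with a plain dot."""
    return url.replace('[.]', '.')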
class MockedResponse(object):
    def __init__(self, status_code, text, reason=None, url=None, method=None):
        self.status_code = status_code
        self.text = text
        self.reason = reason
        self.url = url
        self.request = requests.Request('GET')
        self.ok = self.status_code == 200

    def json(self):
        return json.loads(self.text)
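# Usage sketch of the stub above. This standalone variant drops the
# `requests.Request` attribute so it runs without the requests library
# installed; the json()/ok behaviour is the same:

import json as _json


class _StubResponse(object):
    def __init__(self, status_code, text):
        self.status_code = status_code
        self.text = text
        self.ok = self.status_code == 200

    def json(self):
        return _json.loads(self.text)


_resp = _StubResponse(200, '{"access_token": "fababfafbh"}')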
def init_params():
    return {
        'client_id': 'WRONG_CLIENT_ID_TEST',
        'client_secret': 'CLIENT_SECRET_TEST',
    }
def mocked_request(*args, **kwargs):
    global bundle_index
    global submitted_indicators

    request = kwargs.get("request", {})
    end_point = request.path_url
    method = request.method

    response_dict = {
        'POST': {
            '/auth/token': MockedResponse(200, mocked_get_token_response),
            '/darkfeed/ioc/ack': MockedResponse(200, str(submitted_indicators)),
        },
        'GET': {
            '/darkfeed/ioc?limit=1000': MockedResponse(200, json.dumps(iocs_bundle[bundle_index])),
        },
    }
    response_dict = response_dict.get(method)
    response = response_dict.get(end_point)

    if method == 'GET' and end_point == '/darkfeed/ioc?limit=1000':
        submitted_indicators = len(iocs_bundle[bundle_index].get("objects")) - 2
        bundle_index += 1
    return response
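# mocked_request above follows a common dispatch-table pattern for mocking an
# HTTP transport: canned responses are keyed first by method, then by path,
# and stateful side effects (advancing bundle_index) happen only on matching
# GETs. A stripped-down, self-contained sketch of the same idea (hypothetical
# endpoints, no requests dependency):

_calls = {'count': 0}


def _fake_send(method, path):
    responses = {
        'POST': {'/auth/token': (200, '{"access_token": "t"}')},
        'GET': {'/items?limit=2': (200, '["a", "b"]')},
    }
    response = responses.get(method, {}).get(path)
    if method == 'GET' and response is not None:
        _calls['count'] += 1  # side effect, like bundle_index += 1 above
    return response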
def test_test_module_command_raise_exception(mocker):
    mocker.patch.object(demisto, 'params', return_value=init_params())
    mocker.patch('requests.sessions.Session.send', return_value=MockedResponse(400, "error"))

    from Sixgill_Darkfeed import test_module_command

    with pytest.raises(Exception):
        test_module_command()


def test_test_module_command(mocker):
    mocker.patch.object(demisto, 'params', return_value=init_params())
    mocker.patch('requests.sessions.Session.send', return_value=MockedResponse(200, "ok"))

    from Sixgill_Darkfeed import test_module_command

    test_module_command()
def test_fetch_indicators_command(mocker):
    global bundle_index
    global submitted_indicators

    mocker.patch.object(demisto, 'params', return_value=init_params())
    mocker.patch('requests.sessions.Session.send', new=mocked_request)

    from Sixgill_Darkfeed import fetch_indicators_command
    from sixgill.sixgill_feed_client import SixgillFeedClient
    from sixgill.sixgill_constants import FeedStream

    client = SixgillFeedClient("client_id", "client_secret", "some_channel",
                               FeedStream.DARKFEED, demisto, 1000)
    output = fetch_indicators_command(client)

    bundle_index = 0
    submitted_indicators = 0
    assert output == expected_ioc_output
def test_get_indicators_command(mocker):
global bundle_index
global submitted_indicators
mocker.patch.object(demisto, 'params', return_value=init_params())
mocker.patch('requests.sessions.Session.send', new=mocked_request)
from Sixgill_Darkfeed import get_indicators_command
from sixgill.sixgill_feed_client import SixgillFeedClient
from sixgill.sixgill_constants import FeedStream
client = SixgillFeedClient("client_id",
"client_secret",
"some_channel",
FeedStream.DARKFEED,
demisto, 1000)
output = get_indicators_command(client, {"limit": 10})
bundle_index = 0
submitted_indicators = 0
assert output[2] == expected_ioc_output
@pytest.mark.parametrize('tlp_color', ['', None, 'AMBER'])
def test_feed_tags_and_tlp_color(mocker, tlp_color):
"""
Given:
- feedTags parameter
When:
- Executing fetch command on feed
Then:
- Validate the tags supplied are added to the tags list in addition to the tags that were there before
"""
global bundle_index
global submitted_indicators
mocker.patch.object(demisto, 'params', return_value=init_params())
mocker.patch('requests.sessions.Session.send', new=mocked_request)
from Sixgill_Darkfeed import fetch_indicators_command
from sixgill.sixgill_feed_client import SixgillFeedClient
from sixgill.sixgill_constants import FeedStream
client = SixgillFeedClient("client_id",
"client_secret",
"some_channel",
FeedStream.DARKFEED,
demisto, 1000)
output = fetch_indicators_command(client, tags=['tag1', 'tag2'], tlp_color=tlp_color)
assert all(item in output[0]['fields']['tags'] for item in ['tag1', 'tag2'])
assert any(item in output[0]['fields']['tags'] for item in ['compromised', 'ip', 'url'])
if tlp_color:
assert output[0]['fields']['trafficlightprotocol'] == tlp_color
else:
assert not output[0]['fields'].get('trafficlightprotocol')
bundle_index -= 1
| 74.942188 | 120 | 0.425265 | 3,280 | 47,963 | 6.066768 | 0.114939 | 0.027338 | 0.016584 | 0.019599 | 0.858184 | 0.839992 | 0.819137 | 0.797829 | 0.777577 | 0.769838 | 0 | 0.136344 | 0.476409 | 47,963 | 639 | 121 | 75.059468 | 0.655716 | 0.00367 | 0 | 0.422487 | 0 | 0 | 0.402379 | 0.150677 | 0 | 0 | 0 | 0 | 0.010221 | 1 | 0.015332 | false | 0 | 0.025554 | 0.003407 | 0.0477 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a1b065365f8e41ac132be0b4be0fdcca6efdab17 | 176 | py | Python | argparse_to_json/__init__.py | childsish/argparse-to-json | 5a75c859a6df05b444ec5491a07a4f51b1d97baa | [
"MIT"
] | 1 | 2022-01-20T19:50:49.000Z | 2022-01-20T19:50:49.000Z | argparse_to_json/__init__.py | childsish/argparse-to-json | 5a75c859a6df05b444ec5491a07a4f51b1d97baa | [
"MIT"
] | null | null | null | argparse_to_json/__init__.py | childsish/argparse-to-json | 5a75c859a6df05b444ec5491a07a4f51b1d97baa | [
"MIT"
] | null | null | null | import argparse
from argparse_to_json.converter import Converter
def convert_parser_to_json(parser: argparse.ArgumentParser) -> dict:
return Converter().convert(parser)
| 22 | 68 | 0.8125 | 22 | 176 | 6.272727 | 0.545455 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 176 | 7 | 69 | 25.142857 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a1fde297016f925528ee7641ba827cb64f582a0c | 284 | py | Python | cupy/io/__init__.py | fukuta0614/Chainer | 337fe78e1c27924c1195b8b677a9b2cd3ea68828 | [
"MIT"
] | null | null | null | cupy/io/__init__.py | fukuta0614/Chainer | 337fe78e1c27924c1195b8b677a9b2cd3ea68828 | [
"MIT"
] | 1 | 2016-11-09T06:32:32.000Z | 2016-11-09T10:20:04.000Z | cupy/io/__init__.py | fukuta0614/Chainer | 337fe78e1c27924c1195b8b677a9b2cd3ea68828 | [
"MIT"
] | 1 | 2018-11-18T00:36:51.000Z | 2018-11-18T00:36:51.000Z | # Functions from the following NumPy document
# http://docs.scipy.org/doc/numpy/reference/routines.io.html
# "NOQA" to suppress flake8 warning
from cupy.io import formatting # NOQA
from cupy.io import npz # NOQA
from cupy.io import rawfile # NOQA
from cupy.io import text # NOQA
| 31.555556 | 60 | 0.760563 | 45 | 284 | 4.8 | 0.577778 | 0.148148 | 0.185185 | 0.296296 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004167 | 0.15493 | 284 | 8 | 61 | 35.5 | 0.895833 | 0.549296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a0e95a3d4781bc417bd77fc89619755b53a1b9d | 8,090 | py | Python | cmif/extract.py | herreio/cmif | 10c5cde63fffe6cbb45670c1ead0f8cc198b0787 | [
"MIT"
] | null | null | null | cmif/extract.py | herreio/cmif | 10c5cde63fffe6cbb45670c1ead0f8cc198b0787 | [
"MIT"
] | 1 | 2022-02-02T14:04:05.000Z | 2022-02-02T14:04:05.000Z | cmif/extract.py | herreio/cmif | 10c5cde63fffe6cbb45670c1ead0f8cc198b0787 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
extract XML data in CMI format
"""
import re
from .build import ns_cs, ns_xml
def title(data):
"""
extract text of TEI element <title>
"""
try:
return data.find(".//title", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def editor(data, multi=False):
"""
| extract TEI element <editor>
| set multi to True if multiple editors exist
"""
return data.find(".//editor", namespaces=data.nsmap) if not multi else \
data.findall(".//editor", namespaces=data.nsmap)
def editor_name(data, multi=False):
"""
| extract text of TEI element <editor>
| set multi to True if multiple editors exist
"""
try:
return editor(data, multi=multi).text.strip() if not multi else \
[e.text.strip() for e in editor(data, multi=multi)]
except AttributeError:
pass
return None
def editor_email(data, multi=False):
"""
| extract text of TEI element <email> from parent <editor>
| set multi to True if multiple editors exist
"""
try:
return editor(data, multi=multi).find(".//email", namespaces=data.nsmap).text if not multi else \
[e.find(".//email", namespaces=data.nsmap).text for e in editor(data, multi=multi)]
except AttributeError:
pass
return None
def publisher(data):
"""
extract text from child <ref> of TEI element <publisher>
"""
try:
return data.find(".//publisher/ref", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def publisher_target(data):
"""
extract @target from child <ref> of TEI element <publisher>
"""
try:
return data.find(".//publisher/ref", namespaces=data.nsmap).attrib["target"]
except (AttributeError, KeyError):
pass
return None
def idno(data):
"""
extract text from TEI element <idno>
"""
try:
return data.find(".//idno", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def date_attrib(data):
"""
    extract attributes from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib
except AttributeError:
pass
return None
def date_when(data):
"""
extract @when from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib["when"]
except (AttributeError, KeyError):
pass
return None
def date_from(data):
"""
    extract @from from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib["from"]
except (AttributeError, KeyError):
pass
return None
def date_to(data):
"""
    extract @to from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib["to"]
except (AttributeError, KeyError):
pass
return None
def date_not_before(data):
"""
    extract @notBefore from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib["notBefore"]
except (AttributeError, KeyError):
pass
return None
def date_not_after(data):
"""
    extract @notAfter from TEI element <date>
"""
try:
return data.find(".//date", namespaces=data.nsmap).attrib["notAfter"]
except (AttributeError, KeyError):
pass
return None
def license(data):
"""
extract text of TEI element <licence>
"""
try:
return data.find(".//licence", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def license_target(data):
"""
extract @target from TEI element <licence>
"""
try:
return data.find(".//licence", namespaces=data.nsmap).attrib["target"]
except (AttributeError, KeyError):
pass
return None
def bibl(data, multi=False):
"""
| extract TEI element <bibl>
| set multi to True if multiple references exist
"""
return data.find(".//bibl", namespaces=data.nsmap) if not multi else \
data.findall(".//bibl", namespaces=data.nsmap)
def bibl_id(data, multi=False):
"""
| extract @xml:id from TEI element <bibl>
| set multi to True if multiple references exist
"""
bibl_data = bibl(data, multi=multi)
try:
return bibl_data.attrib[ns_xml("id")] if not multi else \
[b.attrib[ns_xml("id")] for b in bibl_data]
except (AttributeError, KeyError):
pass
return None
def bibl_type(data, multi=False):
"""
| extract @type from TEI element <bibl>
| set multi to True if multiple references exist
"""
bibl_data = bibl(data, multi=multi)
try:
return bibl_data.attrib["type"] if not multi else \
[b.attrib["type"] for b in bibl_data]
except (AttributeError, KeyError):
pass
return None
def bibl_text(data, multi=False):
"""
| extract text of TEI element <bibl>
| set multi to True if multiple references exist
"""
bibl_data = bibl(data, multi=multi)
try:
return re.sub("[ \r\n]+", " ", "".join([l for l in list(bibl_data.itertext())]).strip()) if not multi else \
[re.sub("[ \r\n]+", " ", "".join([l for l in list(b.itertext())]).strip()) for b in bibl_data]
except AttributeError:
pass
return None
def correspdesc(data):
"""
extract TEI elements <correspDesc>
"""
return data.findall(".//correspDesc", namespaces=data.nsmap)
def correspdesc_source(data):
"""
extract @source from TEI elements <correspDesc>
"""
correspdesc_data = correspdesc(data)
try:
return [cd.attrib["source"].replace("#", "") for cd in correspdesc_data]
except KeyError:
pass
try:
return [cd.attrib[ns_cs("source")].replace("#", "") for cd in correspdesc_data]
except KeyError:
pass
return []
def correspdesc_key(data):
"""
    extract @key from TEI elements <correspDesc>
"""
correspdesc_data = correspdesc(data)
try:
return [cd.attrib["key"].replace("#", "") for cd in correspdesc_data]
except KeyError:
pass
return []
def correspaction(data):
"""
extract TEI elements <correspAction>
"""
return data.findall(".//correspAction", namespaces=data.nsmap)
def correspaction_type(data):
"""
extract @type from TEI elements <correspAction>
"""
correspaction_data = correspaction(data)
try:
return [ca.attrib["type"] for ca in correspaction_data]
except (AttributeError, KeyError):
pass
return None
def org_name(data):
"""
extract text from TEI element <orgName>
"""
try:
return data.find(".//orgName", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def org_name_ref(data):
"""
extract @ref from TEI element <orgName>
"""
try:
return data.find(".//orgName", namespaces=data.nsmap).attrib["ref"]
except (AttributeError, KeyError):
pass
return None
def pers_name(data):
"""
extract text from TEI element <persName>
"""
try:
return data.find(".//persName", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def pers_name_ref(data):
"""
extract @ref from TEI element <persName>
"""
try:
return data.find(".//persName", namespaces=data.nsmap).attrib["ref"]
except (AttributeError, KeyError):
pass
return None
def place_name(data):
"""
extract text from TEI element <placeName>
"""
try:
return data.find(".//placeName", namespaces=data.nsmap).text
except AttributeError:
pass
return None
def place_name_ref(data):
"""
extract @ref from TEI element <placeName>
"""
try:
return data.find(".//placeName", namespaces=data.nsmap).attrib["ref"]
except (AttributeError, KeyError):
pass
return None
| 23.314121 | 116 | 0.610383 | 961 | 8,090 | 5.08845 | 0.097815 | 0.049693 | 0.101022 | 0.079959 | 0.823517 | 0.799387 | 0.745808 | 0.702658 | 0.640286 | 0.601022 | 0 | 0.000334 | 0.25958 | 8,090 | 346 | 117 | 23.381503 | 0.816027 | 0.202967 | 0 | 0.612022 | 0 | 0 | 0.059218 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163934 | false | 0.147541 | 0.010929 | 0 | 0.486339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c52120589d13816742363fd0034c96086a816942 | 6,657 | py | Python | moesifdjango/update_companies.py | Moesif/moesifdjango | 67529381b7ffc234263e6989ae16cf8ef1ca62a6 | [
"Apache-2.0"
] | 13 | 2016-11-02T18:53:03.000Z | 2022-01-25T21:47:24.000Z | moesifdjango/update_companies.py | Moesif/moesifdjango | 67529381b7ffc234263e6989ae16cf8ef1ca62a6 | [
"Apache-2.0"
] | 10 | 2017-12-13T11:56:48.000Z | 2021-07-16T12:34:14.000Z | moesifdjango/update_companies.py | Moesif/moesifdjango | 67529381b7ffc234263e6989ae16cf8ef1ca62a6 | [
"Apache-2.0"
] | 5 | 2018-02-02T13:51:49.000Z | 2021-12-17T00:46:24.000Z | from moesifapi.models import *
from moesifapi.exceptions.api_exception import *
from moesifapi.api_helper import *
class Company:
def __init__(self):
pass
@classmethod
def update_company(cls, company_profile, api_client, DEBUG):
if not company_profile:
            print('Expecting the input to be either of the type - CompanyModel, dict or json while updating company')
else:
if isinstance(company_profile, dict):
if 'company_id' in company_profile:
try:
api_client.update_company(CompanyModel.from_dictionary(company_profile))
if DEBUG:
print('Company Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating company, with status code:")
print(inst.response_code)
else:
print('To update a company, a company_id field is required')
elif isinstance(company_profile, CompanyModel):
if company_profile.company_id is not None:
try:
api_client.update_company(company_profile)
if DEBUG:
print('Company Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating company, with status code:")
print(inst.response_code)
else:
print('To update a company, a company_id field is required')
else:
try:
company_profile_json = APIHelper.json_deserialize(company_profile)
if 'company_id' in company_profile_json:
try:
api_client.update_company(CompanyModel.from_dictionary(company_profile_json))
if DEBUG:
print('Company Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating company, with status code:")
print(inst.response_code)
else:
print('To update a company, a company_id field is required')
                except Exception:
print('Error while deserializing the json, please make sure the json is valid')
@classmethod
def update_companies_batch(cls, companies_profiles, api_client, DEBUG):
if not companies_profiles:
            print('Expecting the input to be either of the type - List of CompanyModel, dict or json while updating companies')
else:
if all(isinstance(company, dict) for company in companies_profiles):
if all('company_id' in company for company in companies_profiles):
try:
batch_profiles = [CompanyModel.from_dictionary(d) for d in companies_profiles]
api_client.update_companies_batch(batch_profiles)
if DEBUG:
print('Companies Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating companies, with status code:")
print(inst.response_code)
else:
                    print('To update companies, a company_id field is required')
elif all(isinstance(company, CompanyModel) for company in companies_profiles):
if all(company.company_id is not None for company in companies_profiles):
try:
api_client.update_companies_batch(companies_profiles)
if DEBUG:
print('Companies Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating companues, with status code:")
print(inst.response_code)
else:
print('To update companies, a company_id field is required')
else:
try:
company_profiles_json = [APIHelper.json_deserialize(d) for d in companies_profiles]
if all(isinstance(company, dict) for company in company_profiles_json) and all(
'company_id' in company for company in company_profiles_json):
try:
batch_profiles = [CompanyModel.from_dictionary(d) for d in company_profiles_json]
api_client.update_companies_batch(batch_profiles)
if DEBUG:
print('Companies Profile updated successfully')
except APIException as inst:
if 401 <= inst.response_code <= 403:
print("Unauthorized access sending event to Moesif. Please check your Appplication Id.")
if DEBUG:
print("Error while updating companies, with status code:")
print(inst.response_code)
else:
                        print('To update companies, a company_id field is required')
                except Exception:
print('Error while deserializing the json, please make sure the json is valid')
| 55.941176 | 123 | 0.527865 | 640 | 6,657 | 5.346875 | 0.139063 | 0.061368 | 0.042081 | 0.056108 | 0.848042 | 0.810637 | 0.734658 | 0.734658 | 0.675628 | 0.66014 | 0 | 0.009293 | 0.418056 | 6,657 | 118 | 124 | 56.415254 | 0.874032 | 0 | 0 | 0.702703 | 0 | 0 | 0.250413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0.009009 | 0.027027 | 0 | 0.063063 | 0.306306 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c535fd75123f8d37fba1084a35cb8615db426175 | 31 | py | Python | nni/algorithms/compression/tensorflow/pruning/__init__.py | dutxubo/nni | c16f4e1c89b54b8b80661ef0072433d255ad2d24 | [
"MIT"
] | 9,680 | 2019-05-07T01:42:30.000Z | 2022-03-31T16:48:33.000Z | nni/algorithms/compression/tensorflow/pruning/__init__.py | dutxubo/nni | c16f4e1c89b54b8b80661ef0072433d255ad2d24 | [
"MIT"
] | 1,957 | 2019-05-06T21:44:21.000Z | 2022-03-31T09:21:53.000Z | nni/algorithms/compression/tensorflow/pruning/__init__.py | dutxubo/nni | c16f4e1c89b54b8b80661ef0072433d255ad2d24 | [
"MIT"
] | 1,571 | 2019-05-07T06:42:55.000Z | 2022-03-31T03:19:24.000Z | from .one_shot_pruner import *
| 15.5 | 30 | 0.806452 | 5 | 31 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c53d0a6ab8cd95e818082ab9ddd5984a95a0cc1f | 60 | py | Python | python_stylesheets_color_changer/__init__.py | yjg30737/python-stylesheets-color-changer | 1851ef3256d61c73c5c884a7617fccfe30b771de | [
"MIT"
] | null | null | null | python_stylesheets_color_changer/__init__.py | yjg30737/python-stylesheets-color-changer | 1851ef3256d61c73c5c884a7617fccfe30b771de | [
"MIT"
] | null | null | null | python_stylesheets_color_changer/__init__.py | yjg30737/python-stylesheets-color-changer | 1851ef3256d61c73c5c884a7617fccfe30b771de | [
"MIT"
] | null | null | null | from .styleSheetsColorChanger import StyleSheetsColorChanger | 60 | 60 | 0.933333 | 4 | 60 | 14 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 60 | 1 | 60 | 60 | 0.982456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d6a770ee5ce7c67f4245e7416e8875a63c8bb65 | 3,234 | py | Python | tests/ci/jenkins_test.py | VJftw/invoke-tools | 9584a1f8a402118310b6f2a495062f388fc8dc3a | [
"MIT"
] | 2 | 2017-07-02T02:46:58.000Z | 2018-07-24T03:36:30.000Z | tests/ci/jenkins_test.py | VJftw/invoke-tools | 9584a1f8a402118310b6f2a495062f388fc8dc3a | [
"MIT"
] | null | null | null | tests/ci/jenkins_test.py | VJftw/invoke-tools | 9584a1f8a402118310b6f2a495062f388fc8dc3a | [
"MIT"
] | 1 | 2019-11-27T14:43:03.000Z | 2019-11-27T14:43:03.000Z | """
tests.invoke_tools.ci.jenkins_test
"""
import unittest
import mock
import json
from invoke_tools import ci
class JenkinsTests(unittest.TestCase):
"""
Tests for Jenkins
"""
def test_init(self):
"""
invoke_tools.ci.jenkins.init: Should initialise the Jenkins object
"""
jenkins = ci.Jenkins("https://jenkins.example.org", "job-name")
self.assertIsInstance(jenkins, ci.Jenkins)
git = mock.Mock()
jenkins = ci.Jenkins("https://jenkins.example.org", "job-name", git)
self.assertIsInstance(jenkins, ci.Jenkins)
def test_get_last_successful_build_for_multi_branch(self):
"""
invoke_tools.ci.jenkins.get_last_successful_build_sha: Should return the last successful build for a multi branch project
"""
git = mock.Mock()
git.get_branch = mock.Mock(return_value="develop")
jenkins = ci.Jenkins("https://jenkins.example.org", "job-name", git)
def requests_get(url):
if url == "https://jenkins.example.org/job/job-name/job/develop/api/json?tree=lastSuccessfulBuild[number,url,timestamp]":
json_file = "tests/json/ci-jenkins-multi-branch.json"
elif url == "https://jenkins.example.org/job/job-name/job/develop/18/api/json?tree=actions[*[revision[SHA1]]]":
json_file = "tests/json/ci-jenkins-multi-branch-build.json"
else:
raise ValueError("Invalid url: {0}".format(url))
json_mock = mock.Mock()
with open(json_file) as file:
file_dict = json.loads(file.read())
json_mock.json = mock.Mock(return_value=file_dict)
return json_mock
with mock.patch("invoke_tools.ci.jenkins.requests.get", side_effect=requests_get):
self.assertEqual(
jenkins.get_last_successful_build_sha(),
"fd48c805a7684a5d268d0df4849c4cce3be6ce2f"
)
def test_get_last_successful_build_for_single_branch(self):
"""
invoke_tools.ci.jenkins.get_last_successful_build_sha: Should return the last successful build for a single branch project
"""
jenkins = ci.Jenkins("https://jenkins.example.org", "job-name")
def requests_get(url):
if url == "https://jenkins.example.org/job/job-name/api/json?tree=lastSuccessfulBuild[number,url,timestamp]":
json_file = "tests/json/ci-jenkins-single-branch.json"
elif url == "https://jenkins.example.org/job/job-name/62/api/json?tree=actions[*[revision[SHA1]]]":
json_file = "tests/json/ci-jenkins-single-branch-build.json"
else:
raise ValueError("Invalid url: {0}".format(url))
json_mock = mock.Mock()
with open(json_file) as file:
file_dict = json.loads(file.read())
json_mock.json = mock.Mock(return_value=file_dict)
return json_mock
with mock.patch("invoke_tools.ci.jenkins.requests.get", side_effect=requests_get):
self.assertEqual(
jenkins.get_last_successful_build_sha(),
"1b5cdf46844d011596b9b6a34c105b9a26c26a19"
)
| 39.439024 | 133 | 0.631416 | 390 | 3,234 | 5.069231 | 0.192308 | 0.072838 | 0.076884 | 0.089024 | 0.81133 | 0.762772 | 0.762772 | 0.7304 | 0.719272 | 0.673748 | 0 | 0.023438 | 0.24799 | 3,234 | 81 | 134 | 39.925926 | 0.789474 | 0.112554 | 0 | 0.588235 | 0 | 0.078431 | 0.318689 | 0.115952 | 0 | 0 | 0 | 0 | 0.078431 | 1 | 0.098039 | false | 0 | 0.078431 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3da313bc449c9f1323849ff499bfc88935992dea | 34 | py | Python | algolia_analytics/__init__.py | dsfcode/algolia-analytics | 06c5bc2c44b8a99368d0ca175028dba22680bbc8 | [
"MIT"
] | 1 | 2022-01-04T16:32:39.000Z | 2022-01-04T16:32:39.000Z | algolia_analytics/__init__.py | dan-sf/algolia-analytics | 06c5bc2c44b8a99368d0ca175028dba22680bbc8 | [
"MIT"
] | null | null | null | algolia_analytics/__init__.py | dan-sf/algolia-analytics | 06c5bc2c44b8a99368d0ca175028dba22680bbc8 | [
"MIT"
] | null | null | null | from .api import AlgoliaAnalytics
| 17 | 33 | 0.852941 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3dd6c8f9eb5cd56813fdf220f26253d072ed7098 | 365 | py | Python | benedict/core/items_sorted.py | next-franciscoalgaba/python-benedict | 81ff459304868327238c322a0a8a203d9d5d4314 | [
"MIT"
] | 365 | 2019-05-21T05:50:30.000Z | 2022-03-29T11:35:35.000Z | benedict/core/items_sorted.py | next-franciscoalgaba/python-benedict | 81ff459304868327238c322a0a8a203d9d5d4314 | [
"MIT"
] | 78 | 2019-11-16T12:22:54.000Z | 2022-03-14T12:21:30.000Z | benedict/core/items_sorted.py | next-franciscoalgaba/python-benedict | 81ff459304868327238c322a0a8a203d9d5d4314 | [
"MIT"
] | 26 | 2019-12-16T06:34:12.000Z | 2022-02-28T07:16:41.000Z | # -*- coding: utf-8 -*-
def _items_sorted_by_item_at_index(d, index, reverse):
return sorted(d.items(), key=lambda item: item[index], reverse=reverse)
def items_sorted_by_keys(d, reverse=False):
return _items_sorted_by_item_at_index(d, 0, reverse)
def items_sorted_by_values(d, reverse=False):
return _items_sorted_by_item_at_index(d, 1, reverse)
| 26.071429 | 75 | 0.750685 | 60 | 365 | 4.166667 | 0.333333 | 0.22 | 0.26 | 0.192 | 0.636 | 0.452 | 0.452 | 0.352 | 0.352 | 0.352 | 0 | 0.009434 | 0.128767 | 365 | 13 | 76 | 28.076923 | 0.77673 | 0.057534 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3de0d776faa235c99406886dd71181a76acc7454 | 59 | py | Python | testchild.py | JRogersESQ/animated-garbanzo | da6c4f109b2506b8ceb1c0622e1e1756724bc65a | [
"Apache-2.0"
] | null | null | null | testchild.py | JRogersESQ/animated-garbanzo | da6c4f109b2506b8ceb1c0622e1e1756724bc65a | [
"Apache-2.0"
] | null | null | null | testchild.py | JRogersESQ/animated-garbanzo | da6c4f109b2506b8ceb1c0622e1e1756724bc65a | [
"Apache-2.0"
] | null | null | null | ### Add file to child branch
print ("inside child branch")
| 19.666667 | 29 | 0.711864 | 9 | 59 | 4.666667 | 0.777778 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169492 | 59 | 2 | 30 | 29.5 | 0.857143 | 0.40678 | 0 | 0 | 0 | 0 | 0.612903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9aeadc93ceef4048834a1b1cd863f8e5f2874176 | 26 | py | Python | easycrypto/__init__.py | emartech/python-easy-crypto | ef09b42e43fb6649498bfb7b5ffbbf490a94d85d | [
"MIT"
] | 3 | 2019-11-03T18:26:35.000Z | 2021-03-07T02:37:52.000Z | easycrypto/__init__.py | emartech/python-easy-crypto | ef09b42e43fb6649498bfb7b5ffbbf490a94d85d | [
"MIT"
] | 4 | 2019-06-05T01:48:19.000Z | 2019-07-19T11:53:51.000Z | easycrypto/__init__.py | emartech/python-easy-crypto | ef09b42e43fb6649498bfb7b5ffbbf490a94d85d | [
"MIT"
] | 2 | 2019-07-11T08:59:03.000Z | 2022-02-17T19:41:21.000Z | from .crypto import Crypto | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9aff9fa5e502c32e6b4ace7b1c8296b3bdf1aee9 | 39 | py | Python | pyLineSPM/__init__.py | rbeucher/pyLineSPM | 07ab561f638cae0caccd4f27c74b03f1f1364202 | [
"MIT"
] | null | null | null | pyLineSPM/__init__.py | rbeucher/pyLineSPM | 07ab561f638cae0caccd4f27c74b03f1f1364202 | [
"MIT"
] | null | null | null | pyLineSPM/__init__.py | rbeucher/pyLineSPM | 07ab561f638cae0caccd4f27c74b03f1f1364202 | [
"MIT"
] | null | null | null | from .river import *
from .spm import * | 19.5 | 20 | 0.717949 | 6 | 39 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 2 | 21 | 19.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b10b2fcb25dfdb6124a8bcbd3a9be22797d13ea0 | 157 | py | Python | gmso/external/__init__.py | rsdefever/gmso | 3ff3829cb4bc492b41e5e520d26d35c09c5338a4 | [
"MIT"
] | null | null | null | gmso/external/__init__.py | rsdefever/gmso | 3ff3829cb4bc492b41e5e520d26d35c09c5338a4 | [
"MIT"
] | null | null | null | gmso/external/__init__.py | rsdefever/gmso | 3ff3829cb4bc492b41e5e520d26d35c09c5338a4 | [
"MIT"
] | null | null | null | from .convert_mbuild import from_mbuild, to_mbuild, from_mbuild_box
from .convert_parmed import from_parmed, to_parmed
from .convert_openmm import to_openmm
| 39.25 | 67 | 0.866242 | 25 | 157 | 5.04 | 0.32 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095541 | 157 | 3 | 68 | 52.333333 | 0.887324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b18da9e260418c267c73a7d3d56acadbdc3b3636 | 96 | py | Python | albumentations/albumentations/augmentations/geometric/__init__.py | hfzx01/Substation | 760e2f1a5d21102a6a05973cc31bc8252659757c | [
"Apache-2.0"
] | 6,316 | 2019-11-18T14:19:17.000Z | 2022-03-31T22:25:23.000Z | albumentations/albumentations/augmentations/geometric/__init__.py | hfzx01/Substation | 760e2f1a5d21102a6a05973cc31bc8252659757c | [
"Apache-2.0"
] | 558 | 2019-11-19T00:36:01.000Z | 2022-03-30T22:04:15.000Z | albumentations/albumentations/augmentations/geometric/__init__.py | hfzx01/Substation | 760e2f1a5d21102a6a05973cc31bc8252659757c | [
"Apache-2.0"
] | 889 | 2019-11-18T16:49:44.000Z | 2022-03-28T11:00:14.000Z | from .functional import *
from .resize import *
from .rotate import *
from .transforms import *
| 19.2 | 25 | 0.75 | 12 | 96 | 6 | 0.5 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 96 | 4 | 26 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
495f8ac698df39c7dcc0678cab08f18a42d8cb80 | 9,262 | py | Python | TestFileSize_img_fig.py | ytyaru/Python.FileSize.201702071138 | 569c45d5e9b91befbaece50520eb69955e148c65 | [
"CC0-1.0"
] | null | null | null | TestFileSize_img_fig.py | ytyaru/Python.FileSize.201702071138 | 569c45d5e9b91befbaece50520eb69955e148c65 | [
"CC0-1.0"
] | 6 | 2017-02-09T00:54:50.000Z | 2017-02-09T10:56:13.000Z | TestFileSize_img_fig.py | ytyaru/Python.FileSize.201702071138 | 569c45d5e9b91befbaece50520eb69955e148c65 | [
"CC0-1.0"
] | null | null | null | import unittest
import FileSize
from decimal import Decimal
class TestFileSize_img_fig(unittest.TestCase):
    def test_int_fig_2(self):
        with self.assertRaises(Exception) as e:
            int_fig = 2
            self.__target = FileSize.FileSize(integral_figure_num=int_fig)
        # message: "Only 3 or 4 digits are valid before carrying to the next unit. Invalid value: {0}"
        self.assertEqual('桁上がりするまでの桁数は3または4のみ有効です。無効値: {0}'.format(int_fig), e.exception.args[0])
    def test_int_fig_5(self):
        with self.assertRaises(Exception) as e:
            int_fig = 5
            self.__target = FileSize.FileSize(integral_figure_num=int_fig)
        # message: "Only 3 or 4 digits are valid before carrying to the next unit. Invalid value: {0}"
        self.assertEqual('桁上がりするまでの桁数は3または4のみ有効です。無効値: {0}'.format(int_fig), e.exception.args[0])
    def test_img_fig_nega(self):
        with self.assertRaises(Exception) as e:
            img_fig = -1
            self.__target = FileSize.FileSize(imaginary_figure_num=img_fig)
        # message: "Only integers from 0 to {0} are valid for the number of fractional digits. Invalid value: {1}"
        self.assertEqual('虚数部の桁数は0〜{0}までの整数値のみ有効です。無効値: {1}'.format(3, img_fig), e.exception.args[0])
    def test_img_fig_5(self):
        with self.assertRaises(Exception) as e:
            img_fig = 4
            self.__target = FileSize.FileSize(imaginary_figure_num=img_fig)
        # message: "Only integers from 0 to {0} are valid for the number of fractional digits. Invalid value: {1}"
        self.assertEqual('虚数部の桁数は0〜{0}までの整数値のみ有効です。無効値: {1}'.format(3, img_fig), e.exception.args[0])
def test_9999_999KiB_4_3(self):
unit=1024; int_fig=4; img_fig=3;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "9999.999 KiB")
def test_9999_99KiB_4_2(self):
unit=1024; int_fig=4; img_fig=2;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "9999.99 KiB")
def test_9999_9KiB_4_1(self):
unit=1024; int_fig=4; img_fig=1;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "9999.9 KiB")
def test_9999KiB_4_0(self):
unit=1024; int_fig=4; img_fig=0;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "9999 KiB")
def test_9999_999KiB_3_3(self):
unit=1024; int_fig=3; img_fig=3;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "999.999 KiB")
def test_9999_99KiB_3_2(self):
unit=1024; int_fig=3; img_fig=2;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "999.99 KiB")
def test_9999_9KiB_3_1(self):
unit=1024; int_fig=3; img_fig=1;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "999.9 KiB")
def test_9999KiB_3_0(self):
unit=1024; int_fig=3; img_fig=0;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig)
actual = ((unit ** 1) * (10 ** int_fig)) - 1
self.assertEqual(self.__target.Get(actual), "999 KiB")
def test_10_000KiB_4_3_zero(self):
unit=1024; int_fig=4; img_fig=3; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10.000 KiB")
def test_10_00KiB_4_2_zero(self):
unit=1024; int_fig=4; img_fig=2; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10.00 KiB")
def test_10_0KiB_4_1_zero(self):
unit=1024; int_fig=4; img_fig=1; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10.0 KiB")
def test_10KiB_4_0_zero(self):
unit=1024; int_fig=4; img_fig=0; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10 KiB")
def test_1_000KiB_3_3_zero(self):
unit=1024; int_fig=3; img_fig=3; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1.000 KiB")
def test_1_00KiB_3_2_zero(self):
unit=1024; int_fig=3; img_fig=2; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1.00 KiB")
def test_1_0KiB_3_1_zero(self):
unit=1024; int_fig=3; img_fig=1; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1.0 KiB")
def test_1KiB_3_0_zero(self):
unit=1024; int_fig=3; img_fig=0; zero=False;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1 KiB")
def test_10KiB_4_3(self):
unit=1024; int_fig=4; img_fig=3; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10 KiB")
def test_10KiB_4_2(self):
unit=1024; int_fig=4; img_fig=2; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10 KiB")
def test_10KiB_4_1(self):
unit=1024; int_fig=4; img_fig=1; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10 KiB")
def test_10KiB_4_0(self):
unit=1024; int_fig=4; img_fig=0; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = ((unit ** 1) * (10))
self.assertEqual(self.__target.Get(actual), "10 KiB")
def test_1KiB_3_3(self):
unit=1024; int_fig=3; img_fig=3; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1 KiB")
def test_1KiB_3_2(self):
unit=1024; int_fig=3; img_fig=2; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1 KiB")
def test_1KiB_3_1(self):
unit=1024; int_fig=3; img_fig=1; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1 KiB")
def test_1KiB_3_0(self):
unit=1024; int_fig=3; img_fig=0; zero=True;
self.__target = FileSize.FileSize(byte_size_of_unit=unit, integral_figure_num=int_fig, imaginary_figure_num=img_fig, is_hidden_imaginary_all_zero=zero)
actual = (unit ** 1)
self.assertEqual(self.__target.Get(actual), "1 KiB")
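The behaviour these tests pin down — carry to the next binary unit once the integral part reaches `10 ** int_fig`, truncate (rather than round) the fractional part, and optionally hide an all-zero fraction — can be sketched independently of the FileSize module. The function name, signature, and defaults below are illustrative assumptions, not the library's actual API:

```python
import math

def format_size(num_bytes, unit=1024, int_fig=4, frac_digits=3, hide_zero_frac=True):
    """Format a byte count, carrying to the next unit once the integral
    part would need more than `int_fig` digits."""
    if int_fig not in (3, 4):
        raise ValueError("int_fig must be 3 or 4, got %r" % int_fig)
    if not 0 <= frac_digits <= 3:
        raise ValueError("frac_digits must be 0..3, got %r" % frac_digits)
    suffixes = ["B", "KiB", "MiB", "GiB", "TiB"]
    value = float(num_bytes)
    idx = 0
    while value >= 10 ** int_fig and idx < len(suffixes) - 1:
        value /= unit
        idx += 1
    # truncate, not round: 9999.999... KiB must stay below the carry threshold
    scaled = math.floor(value * 10 ** frac_digits) / 10 ** frac_digits
    if frac_digits == 0 or (hide_zero_frac and scaled == int(scaled)):
        text = str(int(scaled))
    else:
        text = "%.*f" % (frac_digits, scaled)
    return "%s %s" % (text, suffixes[idx])
```

Under these assumptions `format_size(1024 * 10 ** 4 - 1)` reproduces the `"9999.999 KiB"` expected by `test_9999_999KiB_4_3`, and `format_size(10 * 1024)` hides the zero fraction as `"10 KiB"`.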
| 60.142857 | 159 | 0.698337 | 1,422 | 9,262 | 4.151899 | 0.048523 | 0.065041 | 0.085366 | 0.123306 | 0.957486 | 0.949865 | 0.930217 | 0.928692 | 0.928692 | 0.891938 | 0 | 0.057462 | 0.178903 | 9,262 | 153 | 160 | 60.535948 | 0.718606 | 0 | 0 | 0.486111 | 0 | 0 | 0.033906 | 0.01231 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.194444 | false | 0 | 0.020833 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
498710ab6a4a6e43d65eb5915475d9de76164b9e | 90 | py | Python | libdlt/__init__.py | datalogistics/libdlt | f3d8afb06a237fe6e4114c1a55e6f407ba9cc7b0 | [
"BSD-3-Clause"
] | null | null | null | libdlt/__init__.py | datalogistics/libdlt | f3d8afb06a237fe6e4114c1a55e6f407ba9cc7b0 | [
"BSD-3-Clause"
] | 2 | 2018-05-20T21:33:03.000Z | 2019-02-15T16:48:37.000Z | libdlt/__init__.py | datalogistics/libdlt | f3d8afb06a237fe6e4114c1a55e6f407ba9cc7b0 | [
"BSD-3-Clause"
] | null | null | null | from libdlt.util import util
from libdlt.api import *
from libdlt.sessions import Session
| 22.5 | 35 | 0.822222 | 14 | 90 | 5.285714 | 0.5 | 0.405405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 90 | 3 | 36 | 30 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
498a12c07aece3061953a278e9e6bc94a0d23623 | 159 | py | Python | vedastr_cstr/vedastr/models/bodies/feature_extractors/encoders/__init__.py | bsm8734/formula-image-latex-recognition | 86d5070e8f907571a47967d64facaee246d92a35 | [
"MIT"
] | 13 | 2021-06-20T18:11:23.000Z | 2021-12-07T18:06:42.000Z | vedastr_cstr/vedastr/models/bodies/feature_extractors/encoders/__init__.py | bsm8734/formula-image-latex-recognition | 86d5070e8f907571a47967d64facaee246d92a35 | [
"MIT"
] | 9 | 2021-06-16T14:55:07.000Z | 2021-06-23T14:45:36.000Z | vedastr_cstr/vedastr/models/bodies/feature_extractors/encoders/__init__.py | bsm8734/formula-image-latex-recognition | 86d5070e8f907571a47967d64facaee246d92a35 | [
"MIT"
] | 6 | 2021-06-17T15:16:50.000Z | 2021-07-05T20:41:26.000Z | from .backbones import build_backbone # noqa 401
from .builder import build_encoder # noqa 401
from .enhance_modules import build_enhance_module # noqa 401
| 39.75 | 61 | 0.811321 | 23 | 159 | 5.391304 | 0.521739 | 0.266129 | 0.177419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.150943 | 159 | 3 | 62 | 53 | 0.851852 | 0.163522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b8fd3350022f9e526ac1df7865de8ffaec66ea50 | 7,897 | py | Python | baseline_mixed_model.py | Vignesh-Sairaj/cm-229 | 63ae1b8f3529e602352bc3eb92066c9520dfa805 | [
"MIT"
] | null | null | null | baseline_mixed_model.py | Vignesh-Sairaj/cm-229 | 63ae1b8f3529e602352bc3eb92066c9520dfa805 | [
"MIT"
] | null | null | null | baseline_mixed_model.py | Vignesh-Sairaj/cm-229 | 63ae1b8f3529e602352bc3eb92066c9520dfa805 | [
"MIT"
] | null | null | null |
import os
import sys
import pandas as pd
import numpy as np
from data_import import *
import statsmodels.api as sm
from phenotype_correlation import *
from sklearn.linear_model import Ridge
"""
baseline mixed model
Y = ZA + XB + E
we calculate the ZA using OLS first
and perform ridge on the residuals (Y - ZA ~ N(XB, sigma)
"""
def baseline_mixed_model_analysis(geno_df, pheno_df, phenotype_1, phenotype_2, missing_rate = 0.1, sample_list = list(), verbose = False):
corr_mat = calculate_highly_correlated_phenotypes(pheno_df)
print("The correlation between %s and %s is %f" % (phenotype_1, phenotype_2, corr_mat[phenotype_1][phenotype_2]))
# bind phenotype into list to extract
phenotype_list = [phenotype_1, phenotype_2]
# extract the phenotypes
geno_select, pheno_select = select_phenotype_multiple_phenotypes(geno_df, pheno_df, phenotype_list = phenotype_list, verbose = verbose)
# separate training and test dataset
geno_tr, pheno_tr, geno_test, pheno_test, test_sample_list = separate_training_test(geno_select, pheno_select, missing_rate = missing_rate, sample_list_select = sample_list)
# perform OLS
lm = sm.OLS(endog = pheno_tr[phenotype_2], exog = pheno_tr[phenotype_1]).fit()
if verbose:
print("The linear model summary for predicting phenotype %a based on phenotype %a" % (phenotype_2, phenotype_1))
print(lm.summary())
print(lm.params)
# prediction for fixed effect
predictions_fe = lm.predict(pheno_test[phenotype_1])
    # regularized fit on the residuals (random-effect part); note L1_wt=1.0 is a lasso penalty, not ridge
residuals = pheno_tr[phenotype_2] - lm.predict(pheno_tr[phenotype_1])
lm_re = sm.OLS(endog = residuals, exog = geno_tr.transpose()).fit_regularized(L1_wt = 1.0)
if verbose:
print(lm_re.params)
predictions_re = lm_re.predict(geno_test.transpose())
# combine the result from both
total_prediction = predictions_fe + predictions_re
mse = calculate_MSE(total_prediction, pheno_test[phenotype_2])
return(mse, test_sample_list)
def top_N_snp_mixed_model_analysis(geno_df, pheno_df, phenotype_1, phenotype_2, top_N = 100, missing_rate = 0.1, sample_list = list(), verbose = False):
corr_mat = calculate_highly_correlated_phenotypes(pheno_df)
print("The correlation between %s and %s is %f" % (phenotype_1, phenotype_2, corr_mat[phenotype_1][phenotype_2]))
# bind phenotype into list to extract
phenotype_list = [phenotype_1, phenotype_2]
# extract the phenotypes
geno_select, pheno_select = select_phenotype_multiple_phenotypes(geno_df, pheno_df, phenotype_list = phenotype_list, verbose = verbose)
# separate training and test dataset
geno_tr, pheno_tr, geno_test, pheno_test, test_sample_list = separate_training_test(geno_select, pheno_select, missing_rate = missing_rate, sample_list_select = sample_list)
    # remove duplicates
geno_test_new = geno_test.loc[:,~geno_test.columns.duplicated()]
geno_test = geno_test_new[pheno_test[phenotype_2].index]
    # kept for reference: earlier SNP-ranking approaches (single regularized fit; sklearn Ridge)
# # perform simple ridge to identify the top SNPs
# lm_ridge = sm.OLS(endog = pheno_tr[phenotype_2], exog = geno_tr.transpose()).fit_regularized(L1_wt = 1.0)
# if verbose:
# print(lm_ridge.params)
# # select top SNPs with highest effect size for select run
# top_N_idx = np.argsort(abs(lm_ridge.params))[-top_N:]
# if verbose:
# top_N_values = [lm_re.params[i] for i in top_N_idx]
# print(top_N_values)
# top_N_snps = geno_tr.iloc[top_N_idx].index
# sklearn test
# clf = Ridge(alpha = 1.0)
# a = clf.fit(y = pheno_tr[phenotype_2], X = geno_tr.transpose())
# # select top N
# top_N = 10
# top_N_idx = np.argsort(abs(a.coef_))[-top_N:]
# print (top_N_idx)
# top_N_values = [a.coef_[i] for i in top_N_idx]
# print (top_N_values)
# top_N_snps = geno_tr.iloc[top_N_idx].index
# print(top_N_snps)
# perform OLS
lm = sm.OLS(endog = pheno_tr[phenotype_2], exog = pheno_tr[phenotype_1]).fit()
if verbose:
print("The linear model summary for predicting phenotype %a based on phenotype %a" % (phenotype_2, phenotype_1))
print(lm.summary())
print(lm.params)
# prediction for fixed effect
predictions_fe = lm.predict(pheno_test[phenotype_1])
    # regularized fit on the residuals (random-effect part); note L1_wt=1.0 is a lasso penalty, not ridge
residuals = pheno_tr[phenotype_2] - lm.predict(pheno_tr[phenotype_1])
# check marginal
num_SNPs = geno_tr.shape[0]
beta_list = []
for snp_idx in range(num_SNPs):
lm_snp = sm.OLS(endog = residuals, exog = geno_tr.iloc[snp_idx].transpose()).fit_regularized(L1_wt = 1.0, alpha = 1.0)
# clf = Ridge(alpha = 1.0)
# a = clf.fit(y = residuals, X = geno_tr.iloc[snp_idx].transpose())
beta_list.append(lm_snp.params)
if snp_idx % 1000 == 0:
print(snp_idx)
beta = pd.concat(beta_list)
top_N_idx = np.argsort(abs(beta))[-top_N:]
top_N_values = [beta[i] for i in top_N_idx]
top_N_snps = geno_tr.iloc[top_N_idx].index
lm_re = sm.OLS(endog = residuals, exog = geno_tr.loc[top_N_snps].transpose()).fit_regularized(L1_wt = 1.0, alpha = 1.0)
if verbose:
print(lm_re.params)
predictions_re = lm_re.predict(geno_test.loc[top_N_snps].transpose())
# combine the result from both
total_prediction = predictions_fe + predictions_re
print (predictions_re, predictions_fe)
mse = calculate_MSE(total_prediction, pheno_test[phenotype_2])
return(mse, test_sample_list)
def top_N_snp_mixed_model_analysis_p(geno_df, pheno_df, phenotype_1, phenotype_2, top_N = 100, missing_rate = 0.1, sample_list = list(), verbose = False):
corr_mat = calculate_highly_correlated_phenotypes(pheno_df)
print("The correlation between %s and %s is %f" % (phenotype_1, phenotype_2, corr_mat[phenotype_1][phenotype_2]))
# bind phenotype into list to extract
phenotype_list = [phenotype_1, phenotype_2]
# extract the phenotypes
geno_select, pheno_select = select_phenotype_multiple_phenotypes(geno_df, pheno_df, phenotype_list = phenotype_list, verbose = verbose)
# separate training and test dataset
geno_tr, pheno_tr, geno_test, pheno_test, test_sample_list = separate_training_test(geno_select, pheno_select, missing_rate = missing_rate, sample_list_select = sample_list)
    # remove duplicates
geno_test_new = geno_test.loc[:,~geno_test.columns.duplicated()]
geno_test = geno_test_new[pheno_test[phenotype_2].index]
# perform OLS
lm = sm.OLS(endog = pheno_tr[phenotype_2], exog = pheno_tr[phenotype_1]).fit()
if verbose:
print("The linear model summary for predicting phenotype %a based on phenotype %a" % (phenotype_2, phenotype_1))
print(lm.summary())
print(lm.params)
# prediction for fixed effect
predictions_fe = lm.predict(pheno_test[phenotype_1])
    # regularized fit on the residuals (random-effect part); note L1_wt=1.0 is a lasso penalty, not ridge
residuals = pheno_tr[phenotype_2] - lm.predict(pheno_tr[phenotype_1])
# check marginal
num_SNPs = geno_tr.shape[0]
beta_list = []
p_beta_list = []
for snp_idx in range(num_SNPs):
lm_snp = sm.OLS(endog = residuals, exog = geno_tr.iloc[snp_idx].transpose()).fit()
p_val = lm_snp.pvalues[0]
beta = lm_snp.params[0]
if p_val < 0.05:
beta_list.append(beta)
p_beta_list.append(pd.Series([beta, p_val], name = geno_tr.iloc[snp_idx].name))
if snp_idx % 1000 == 0:
print(snp_idx)
p_beta_df = pd.concat(p_beta_list, axis = 1).transpose()
p_beta_df.columns = ["beta", "pval"]
p_beta_df.sort_values(by = ['pval'], inplace = True)
top_N = min(top_N, p_beta_df.shape[0])
top_N_snps = p_beta_df.iloc[range(top_N)].index
lm_re = sm.OLS(endog = residuals, exog = geno_tr.loc[top_N_snps].transpose()).fit_regularized(L1_wt = 1.0, alpha = 1.0)
if verbose:
print(lm_re.params)
predictions_re = lm_re.predict(geno_test.loc[top_N_snps].transpose())
# combine the result from both
total_prediction = predictions_fe + predictions_re
print (predictions_re, predictions_fe)
mse = calculate_MSE(total_prediction, pheno_test[phenotype_2])
return(mse, test_sample_list, top_N) | 31.337302 | 174 | 0.745473 | 1,255 | 7,897 | 4.384064 | 0.1251 | 0.026899 | 0.040712 | 0.043621 | 0.846601 | 0.841694 | 0.82679 | 0.824246 | 0.810614 | 0.798073 | 0 | 0.01632 | 0.146511 | 7,897 | 252 | 175 | 31.337302 | 0.8 | 0.205141 | 0 | 0.686275 | 0 | 0 | 0.05773 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.078431 | 0 | 0.107843 | 0.186275 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
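The `top_N` variants above rank SNPs by marginal per-SNP regressions on the residuals before the final regularized fit. That screening step can be sketched in vectorized form on synthetic data (sizes and seed are illustrative assumptions, and the no-intercept slope formula is a simplification of the per-SNP OLS loop):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, top_n = 120, 300, 15
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                          # only the first five SNPs carry signal
y = X @ beta + rng.normal(size=n)

# marginal screening: slope of a simple no-intercept regression per column
yc = y - y.mean()
b_hat = X.T @ yc / (X ** 2).sum(axis=0)

# keep the indices with the largest absolute marginal effect
keep = np.argsort(np.abs(b_hat))[-top_n:]
```

The selected columns would then feed a single regularized fit, as `top_N_snp_mixed_model_analysis` does with `geno_tr.loc[top_N_snps]`.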
772b71d48e3505086046ce9e1f6d36f41148da8b | 4,172 | py | Python | tests/test_engine/test_queries/test_queryop_comparsion_eq.py | gitter-badger/MontyDB | 849d03dc2cfed35739481e9acb1ff0bd8095c91b | [
"BSD-3-Clause"
] | null | null | null | tests/test_engine/test_queries/test_queryop_comparsion_eq.py | gitter-badger/MontyDB | 849d03dc2cfed35739481e9acb1ff0bd8095c91b | [
"BSD-3-Clause"
] | null | null | null | tests/test_engine/test_queries/test_queryop_comparsion_eq.py | gitter-badger/MontyDB | 849d03dc2cfed35739481e9acb1ff0bd8095c91b | [
"BSD-3-Clause"
] | null | null | null |
from bson.binary import Binary
from bson.code import Code
from bson.int64 import Int64
from bson.decimal128 import Decimal128
from bson.py3compat import PY3
def test_qop_eq_1(monty_find, mongo_find):
docs = [
{"a": 1},
{"a": 0}
]
spec = {"a": 1}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 1
assert monty_c.count() == mongo_c.count()
assert next(mongo_c) == next(monty_c)
def test_qop_eq_2(monty_find, mongo_find):
docs = [
{"a": 1},
{"a": 0}
]
spec = {"a": {"$eq": 1}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 1
assert monty_c.count() == mongo_c.count()
assert next(mongo_c) == next(monty_c)
def test_qop_eq_3(monty_find, mongo_find):
docs = [
{"a": [1]},
{"a": 1}
]
spec = {"a": {"$eq": 1}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 2
assert monty_c.count() == mongo_c.count()
for i in range(2):
assert next(mongo_c) == next(monty_c)
def test_qop_eq_4(monty_find, mongo_find):
docs = [
{"a": [1]},
{"a": [[1], 2]}
]
spec = {"a": {"$eq": [1]}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 2
assert monty_c.count() == mongo_c.count()
for i in range(2):
assert next(mongo_c) == next(monty_c)
def test_qop_eq_5(monty_find, mongo_find):
docs = [
{"a": [2, 1]},
{"a": [1, 2]},
{"a": [[2, 1], 3]},
{"a": [[1, 2], 3]},
]
spec = {"a": {"$eq": [2, 1]}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 2
assert monty_c.count() == mongo_c.count()
for i in range(2):
assert next(mongo_c) == next(monty_c)
def test_qop_eq_6(monty_find, mongo_find):
docs = [
{"a": [{"b": Binary(b"00")}]},
{"a": [{"b": Binary(b"01")}]},
]
spec = {"a.b": {"$eq": b"01"}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
count = 1 if PY3 else 0
assert mongo_c.count() == count
assert monty_c.count() == mongo_c.count()
if PY3:
assert next(mongo_c) == next(monty_c)
mongo_c.rewind()
assert next(mongo_c)["_id"] == 1
def test_qop_eq_7(monty_find, mongo_find):
docs = [
{"a": [{"b": Code("a")}]},
]
spec = {"a.b": {"$eq": "a"}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 0
assert monty_c.count() == mongo_c.count()
def test_qop_eq_8(monty_find, mongo_find):
docs = [
{"a": [{"b": "a"}]},
]
spec = {"a.b": {"$eq": Code("a")}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 0
assert monty_c.count() == mongo_c.count()
def test_qop_eq_9(monty_find, mongo_find):
docs = [
{"a": 1},
]
spec = {"a": {"$eq": Int64(1)}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 1
assert monty_c.count() == mongo_c.count()
def test_qop_eq_10(monty_find, mongo_find):
docs = [
{"a": 1},
{"a": 1.0},
]
spec = {"a": {"$eq": Decimal128("1")}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 2
assert monty_c.count() == mongo_c.count()
def test_qop_eq_11(monty_find, mongo_find):
docs = [
{"a": 1},
{"a": 1.0},
]
spec = {"a": {"$eq": Decimal128("1.0")}}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 2
assert monty_c.count() == mongo_c.count()
def test_qop_eq_12(monty_find, mongo_find):
docs = [
{"tags": [["ssl", "security"], "warning"]}
]
spec = {"tags.0": "security"}
monty_c = monty_find(docs, spec)
mongo_c = mongo_find(docs, spec)
assert mongo_c.count() == 0
assert monty_c.count() == mongo_c.count()
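The array semantics these tests pin down — a value under `$eq` matches either the field itself or any element of an array-valued field, and whole-array equality is order-sensitive — can be modelled in plain Python. This is a rough sketch of the rule, not MontyDB's or MongoDB's actual matcher:

```python
def matches_eq(doc_value, query_value):
    """Rough model of MongoDB's $eq for a single (non-dotted) field."""
    if doc_value == query_value:
        return True
    if isinstance(doc_value, list):
        # an array field also matches when any element equals the query value
        return any(element == query_value for element in doc_value)
    return False
```

So `{"a": [[1], 2]}` matches `{"a": {"$eq": [1]}}` by element match, while `{"a": [1, 2]}` does not match `{"$eq": [2, 1]}`, as `test_qop_eq_5` shows.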
| 22.430108 | 50 | 0.551774 | 628 | 4,172 | 3.41242 | 0.08121 | 0.123192 | 0.14559 | 0.067196 | 0.836211 | 0.814279 | 0.803546 | 0.733551 | 0.733551 | 0.709286 | 0 | 0.031565 | 0.263423 | 4,172 | 185 | 51 | 22.551351 | 0.665799 | 0 | 0 | 0.595588 | 0 | 0 | 0.029969 | 0 | 0 | 0 | 0 | 0 | 0.227941 | 1 | 0.088235 | false | 0 | 0.036765 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7753eb5e6420ca68d8ffef287ff38769166ae880 | 18,133 | py | Python | lino_xl/lib/sepa/fixtures/sample_ibans.py | khchine5/xl | b1634937a9ce87af1e948eb712b934b11f221d9d | [
"BSD-2-Clause"
] | 1 | 2018-01-12T14:09:48.000Z | 2018-01-12T14:09:48.000Z | lino_xl/lib/sepa/fixtures/sample_ibans.py | khchine5/xl | b1634937a9ce87af1e948eb712b934b11f221d9d | [
"BSD-2-Clause"
] | 1 | 2019-09-10T05:03:47.000Z | 2019-09-10T05:03:47.000Z | lino_xl/lib/sepa/fixtures/sample_ibans.py | khchine5/xl | b1634937a9ce87af1e948eb712b934b11f221d9d | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: UTF-8 -*-
# Copyright 2015-2018 Rumma & Ko Ltd
# License: BSD (see file COPYING for details)
"""Contains a random list of example IBANs to be used for generating
demo data.
Thanks to `www.mobilefish.com
<http://www.mobilefish.com/services/random_iban_generator/random_iban_generator.php>`_.
This module is tested in :doc:`/specs/iban`.
"""
IBANS = """\
AL32293653370340154130927280
AL68897217980092911449710922
AL87834345355057404249973940
AL73552891583236787384690218
AL52238964890057847269894484
AL56320936759864956119671013
AL20697342929432609350613358
AL70060940492321966744594065
AL66383778998922195726400092
AL95065292037102725798481031
AD5257784281812851432256
AD8878195033144625652270
AD2045350237593959919746
AD6523598450190238508072
AD3217704582405114672833
AD0925341412842777761708
AD7106167154302413624767
AD3275655950079340831543
AD5158473228250917101533
AD0203514247355959437856
AT689713187429950994
AT455998560003573572
AT397563080929053546
AT718255015615458671
AT778047296008343102
AT233377816198914246
AT316966754324807854
AT866472820010769219
AT583809717053621489
AT420427319442037888
AZ26CSHL28857497955447355397
AZ78MMVD81193974286497530319
AZ12GMWX67840235162066893769
AZ23OWYX21759158476039403442
AZ57XKMM26906912594709450600
AZ14GUHJ07000171328006790958
AZ20LSOS77176596643066514721
AZ32BUGZ39044794148736641526
AZ14WAUS45898883554433526214
AZ72AOZV21841200481951294949
BH65QUYO58206773000590
BH58THLZ22569126050320
BH50GDHO00603036234521
BH55QRES42518078955802
BH29GWOZ41746150114337
BH87FGSD51413185537059
BH18DOJE76437091481967
BH16ZOBW34919351209826
BH37GGST16273522899557
BH67TSOF06222939713977
BE83540256917919
BE70458836777241
BE62315236188996
BE08853988745497
BE31486666479523
BE03747769840658
BE94532216847099
BE66457644520146
BE96553733406075
BE29077159619092
BA227528238568967209
BA630659428789618688
BA086304331850728340
BA267971951698167134
BA405821781250392265
BA085654567123222746
BA041764095134403193
BA667006231763000903
BA406456387178588479
BA257241002566987352
BR4075717543435635971706910G9
BR2701798507625253316527482W6
BR3205740494727766140328461Y8
BR3938873591834947138968079F6
BR6187168485481741686184498T8
BR0373600122620300612205391F6
BR4308204526746102472420665A3
BR2861259038756331065423965K0
BR8916505915221714901465542D6
BR1506469097892362005111181G8
BG33WODO90876019575940
BG02FIAO29753987488745
BG57HSVU36490105313708
BG89NKTJ64315412156435
BG95BSCA52994404540921
BG48VWZZ57937814280856
BG02COMX58214353599482
BG91RINK17137346736081
BG35HVQY38180962527925
BG45LMDF68752666847493
HR2824403757374499478
HR5339680539326212687
HR8189186439925058512
HR0783802488825704789
HR6244883767027158142
HR9395263440829542995
HR3912615479471615169
HR7063968116236082084
HR4269746574272814753
HR4291284211386241138
CY20350461461529468981546484
CY40385436276717483408921096
CY74835815375022221887908406
CY94595189933551887423183914
CY74377818093553669217254008
CY67178463066674360903454329
CY57755182740186642812845012
CY29518390749073022342250225
CY31539689431965695874278840
CY10626151158726057784602947
CZ9233597294072726325676
CZ0740187598646931517018
CZ5852539202347835583160
CZ5865017422326637013703
CZ4069084093587533989973
CZ6267334430668767452742
CZ3232025328465145166291
CZ3308088053085578160862
CZ6595671096439786778328
CZ0608265577029011461489
DK4827862790127019
DK7123594166436005
DK1358026849419971
DK5665068158568190
DK3039764874399310
DK4337975413218209
DK8050926904625409
DK5482875755076649
DK8028328034750938
DK0905734385914385
DO21437519120016115077986451
DO64127641001569019111921598
DO22898027640498916700053892
DO40144771611278919843152876
DO98152759478052008771013933
DO34254642316224814463370535
DO34894434296388176648298583
DO20700132936416677103485545
DO87947053138589917553903987
DO89550085175699265751046270
EE436294797788261706
EE352301769561122485
EE386024163501444960
EE790013989145018782
EE532994653685562728
EE393563313045526122
EE819123265601797362
EE687746880832564886
EE286716083049352617
EE046714948492597346
FO7328660749955447
FO5579305506122811
FO1888817025209592
FO0477005916381949
FO2217712851446538
FO5022806813744217
FO2532068909217259
FO4490237126630701
FO1240175365191477
FO3741182270345982
FI3667945602811802
FI9413543377540067
FI3387328799081749
FI3327639886742387
FI5473554239359361
FI3571023169949913
FI7107179684787950
FI1911913683440650
FI8873067591485792
FI5735689578040750
FR5928381777532178049892704
FR6815265988888370706343396
FR4223710265151843362314935
FR1618982647691715524045187
FR3893861268555950313800215
FR0564920470388213192727137
FR6235496719453367798862842
FR0231530265068483981048577
FR3828198092926793294851938
FR0301047932630849712421272
GE48ZZ7714861933468073
GE16AJ4781960758986244
GE07NR8011004138066039
GE02HS3067830533476618
GE12PP9137975552250001
GE98AL8871349722537565
GE61QM8615409899317007
GE35CG0037184599849491
GE42TS6886667776815973
GE25AG8490735824423051
DE70417630904413326955
DE20747128173755343928
DE35925967355688573820
DE11636098329080866451
DE35107935042598846074
DE04102122669763552692
DE32002227481065956819
DE79469998819370303702
DE24306748526568502288
DE84730360491971919607
GI24OGHJ730699465255824
GI81WXBT913392129341548
GI89VBBT489472978490646
GI46HEDQ933971017301898
GI05DTJQ800294649546209
GI21SQJU135209452447873
GI59XIKA274366211572810
GI60DSIM382702744563594
GI36CNDY620945003617395
GI34QUHN770324712367263
GR6282021285197442953293482
GR9702773488287769533651733
GR9424034634672719688340483
GR4502577819624986156216115
GR8604200542314535805653195
GR1303170682339302478828421
GR8971255069424847948338946
GR3523060094650505383042730
GR2292995781612635975940346
GR3155273141239259438667551
GL9309946207533274
GL3207415007460441
GL6473048615699104
GL6235111887200152
GL2971530388960593
GL4047445989668316
GL4266929615252311
GL4808785297006297
GL8949856050258218
GL5863426989933036
GT27296241207892724737327551
GT89987300832384218260348415
GT63024594904500975807287945
GT09382607144252133381222954
GT55573949227084171746198551
GT59431364777730292597091714
GT16724288008702437945405330
GT83246605281186286562491553
GT10042854019903417443544564
GT28292703859465869388424742
HU39771006655682854957804503
HU62920100103869059399074463
HU72061423738799746984188271
HU68319417701665981504742924
HU09488521934190758509592302
HU74724554620654875651018696
HU82054471489717478348714742
HU90629277021631044456379384
HU44543001241009247142209614
HU39466939091770622201442727
IS349491437116701662015858
IS707566594278860188469665
IS063555675269527431289751
IS841523272640957099727352
IS375194934206832772125425
IS707687924090566221512700
IS802577173359907171646932
IS623954351797414034520977
IS988539290100972267692240
IS456348544248316318787373
IE82DSKZ74065329378516
IE93BEAN39684801465132
IE67QYWG97262805831972
IE83TOGF66204901950148
IE40MRND02039182905147
IE39AJSD93994651105689
IE18ETLM41813880367567
IE87JWXT41456331640926
IE49YZYZ59993590293714
IE67STZG07271089895381
IL742783954295510103839
IL777146875247850363923
IL394061438701559776887
IL382040197066855309461
IL643674818502803617308
IL966892198536725690694
IL636222862347090504229
IL138827643741219366843
IL737871416796122356183
IL943900911646975005558
IT27P4944311481004991838256
IT14F8197042115752279069311
IT07C0095481155585439721596
IT46K0481134039897395702797
IT15Y9312595570639274432071
IT68R5916325645705288883650
IT30N0659332321376473414956
IT56N3479798438995368438965
IT97M7744689905513427993589
IT14O3620829263416057479269
JO88CHSQ2589280198273139164479
JO85CNPV8251120405019535065245
JO04QZPQ7097336914601919085559
JO55DFFR5611977550243174154955
JO35FHZQ1791739214543166273933
JO05VNNT3193608024229932156563
JO51QFWV1880230338517995175679
JO47RPXX4369254763491767658928
JO58NZNT6341393377246730854738
JO80EMSI8673567652287471735549
KZ323713749718692294
KZ137384331259077754
KZ324858740402979114
KZ368595033976244728
KZ537032903996823818
KZ866585103500460142
KZ064002060056456160
KZ711471331609763423
KZ658919748926414954
KZ601322151681431382
KW04LDZZ6155894205021549736414
KW17RZFN7035889356330572874320
KW03MNSN4537863266986354330960
KW77MRVE9244425773051707698447
KW72AGFL0890857828641084492067
KW57KHAI4067446388854264986791
KW21GPJT9960700060917400557807
KW42LXMH2411431884316880638514
KW48CPUW0326799621695984856290
KW05NSXT8794687246691499818749
LV72PAFP7139052741135
LV39UWUW7420033435816
LV61CWBD6231549630394
LV43LQBK9548810301246
LV88TBNF1961527010194
LV67JIND9511836396430
LV97VTUJ0760964274945
LV74ORPR8772319399354
LV90OMJR0840879671457
LV98BTDI7500903731123
LB93682816897097857126616615
LB85741323348481199400691880
LB69299203001668105857192601
LB27053111579735238570079997
LB98436145214600794774719373
LB10922087175793544937950610
LB55697870936938213006050732
LB16757292729416076899249347
LB24545315507379787475965570
LB45124078124475306265330138
LI2541202657757596968
LI4810107325382536424
LI7062380721665922241
LI7309491686748493693
LI4993758438937338316
LI9059994461603834609
LI9323440277003335759
LI7374419102880660622
LI4150741763365144558
LI5895041228510603170
LT853812355043060028
LT289345176305770779
LT058695241354476568
LT416399874508558153
LT166732427152202528
LT725602671789294993
LT646283925137481709
LT133803451369968589
LT067890206105397255
LT406626140535255119
LU786760982709649594
LU381139097904200850
LU933953862264056929
LU447388000693422154
LU126796420628421181
LU676085251152055590
LU288445403409961466
LU521209808640036104
LU098709488437436294
LU073047948428653302
MK27793186919374666
MK27228843250811279
MK75140201767764096
MK36026480278713299
MK42869572001783450
MK85101027077648113
MK63889310585624862
MK71635374943421003
MK22009520716176154
MK51634615043302398
MT90HIQZ99291235213693812490437
MT97MNDU51071749272744341502042
MT13OZEI98251041545443856136961
MT10WHPR82560498612466041393844
MT48FZJE39412800316166455316545
MT16ODCT99073580453626115168015
MT54ZWSU65718830582178013365458
MT63CJED52096677722049294615602
MT47NUIE83283711572452773790292
MT14AAQD59405166696774786467138
MR3384923096017404699050515
MR2237529125310047353061691
MR6499384362746137683936685
MR4743300116827025827099208
MR5733868953900381102921122
MR5917051610387591972733082
MR9604608144668899302417520
MR4496848629995535832805964
MR6043384974007768552920727
MR1547733751567053373006043
MU72DASM2220164801468982351IKR
MU46GYKH4926162176345975119KIS
MU85YPWL9050242991643176519XNT
MU52KRIY3752809744527969960KTF
MU18EAKQ0101368823839676391WTL
MU28CFWI9900956284242874025BNS
MU11PZDA8436785799349960574XGC
MU12SYJD4761582153318315339EGQ
MU35LIUM3776904705754142981POS
MU92ATKM3130605586434971280TFJ
MD8329875673777125138731
MD6429149856142246727317
MD4015662331194018164733
MD9549867388506061302064
MD1904448898092707107985
MD3820145573964976174174
MD0336597248209116843733
MD9207943176181035704252
MD2223521495895214073709
MD7223158910735279217385
MC8574374915374698884193509
MC0991999478792070721046115
MC8189369037214610016302895
MC7227191382588573778716660
MC9088956961239612997936786
MC4900095509867453092725761
MC4330867179336482944725033
MC3732938838566832215275814
MC8436506148111806867053763
MC5778617009168524943323605
ME83238140779268370057
ME28717391254002004238
ME12181566970045501216
ME39959901965223364503
ME36732799475698843875
ME34476683833649405249
ME79775323699263058272
ME57293790414192839765
ME77827830906953950118
ME69715194893730461811
NL60UQGK4026224708
NL03ZSEU7683047716
NL59BXLV6966419583
NL69LYPH7001916463
NL30SHNG3422037721
NL76IAGD2715277369
NL44MLSE7642315708
NL45IKIK4770528086
NL17SIGZ1287991440
NL02JUDW5380267505
NO7396284316624
NO3963986565184
NO4444552754968
NO5170202668251
NO2991508382251
NO7509742992679
NO1313457871859
NO9695779958890
NO8008095139614
NO3278842722336
PK56UYWD9455334040363944
PK04RHIZ4181584865067928
PK74HFLF6735612952651053
PK43MWUD5859577903529429
PK68NWHQ4897031973833863
PK14YVTH6509086563080231
PK27DSGN2551309759844672
PK67NCDN5793414249755802
PK74OBUJ4583538344182334
PK53EFNY0983284977542573
PL95088758814194738609756846
PL38860985228997618873644124
PL40131801131586532601731177
PL59526603827960852775407701
PL74402416619954066313951743
PL66230887259157863289899840
PL54247932753892471397909084
PL40349126929792416208098191
PL21104822022170292772419320
PL78831937696269465559639569
PS48CNKY289098472719012434370
PS44MMTR229965154647161144196
PS66JGKP089712026751026428898
PS61XHZI125020617356257707869
PS07SVOE463579012087866378397
PS62UWUP747738090015408358085
PS56MAMT213646471281152264147
PS31OFQI281350057702333895757
PS86YSNH838133923530616739326
PS18ABGW887959264339518939600
PT49989899151963304323854
PT11724771774972150030638
PT23413764319212821604832
PT88401039244759109529793
PT63981899499922608347727
PT18359391667153081973509
PT21119442415659205741098
PT71804324776943062210133
PT55843211081028875769365
PT23015329998372492532613
QA03FACD505461356122556309369
QA25MPJF530366990198362656257
QA44JQLV937552589499155954809
QA33GNCA244621312238359131025
QA23BMRN159835334358623644727
QA46BTGR868974027220594986413
QA60TIDD677368353804061251446
QA97KBKJ937160534872737310079
QA84AVCP891056590331463692228
QA37YJHZ068316886823354970276
RO09EDDB4991220576410270
RO52POOC0710519034033904
RO30LHMX9341821157602589
RO09QNOV7620939711980800
RO86MFNQ2562787905029266
RO03REWS9039660532123260
RO84ELSU5485514127468707
RO52UGXU4275409614681908
RO08OUJE5422375139718491
RO64PMSC0259890898103797
SM85N3486346046790757677752
SM47F7860084402630188506097
SM77Q6273810545558640560671
SM03A1883294716904015372270
SM29Y1984659356380925612532
SM51V4329416382727915249808
SM83N3825304032519042034549
SM29A9358953441630337513998
SM49W5359039293311479967415
SM45S5102409487885200118800
SA6992126838264117878638
SA1396608400596536009604
SA6401770844295834383174
SA4575777179192236975914
SA5805655867320134647113
SA6717196777878150579175
SA9739466740162791063097
SA1497288667793332425390
SA4411342488845298386958
SA1291512325034346181376
RS98195590564182640819
RS41701857437961992821
RS71927676665514930585
RS43641470932124192515
RS79182753811024203440
RS43668806021263802345
RS25294007646113282365
RS25719066417033142196
RS70938140463467666091
RS38819890963634679749
SK5854237735733744470561
SK7785377887498411836813
SK6127603499834046620616
SK6417334195725918640753
SK3468557541354551959899
SK2352716957644096590768
SK0883180645021748093033
SK0307592507815860727236
SK2352898239259709723269
SK4959161120391973681874
SI55491524316674390
SI33118225747045460
SI89425196217331712
SI50962089279243217
SI07741702217466442
SI28710587362137809
SI38692545291147791
SI64031953541785498
SI03298403231383604
SI22018789271204675
ES6167014516776282026234
ES0472829411864561770953
ES2802390268894454648288
ES5561358084467268962373
ES3514527669734870043078
ES2809193459277281988655
ES7808997788153647099616
ES0485060547022481667466
ES2435525534130813818871
ES3277548844398898100578
SE4614981265816525807951
SE9675946064548687219398
SE6527902665055002610685
SE0340154101803219967036
SE7850237246526296804786
SE2533402616927879914428
SE3195145108540362191743
SE1233180572582599611589
SE5811271100475285759867
SE8764936652244876654856
CH6136608557936433055
CH3040133501984472632
CH3588413027536284483
CH4038499961587960026
CH3542750014880024953
CH7066576008709090890
CH8742518167584596157
CH2533087402102217919
CH7649701365528516292
CH6871853191824795218
TN2153426409615179452949
TN0743536045431580848927
TN1289984232368352970486
TN2218820392727860630565
TN0745733174187640051306
TN2114238171463992567958
TN4763980992674983328595
TN8580702238925554774059
TN9503690068481203528486
TN1126430526709180919930
TR182990455182956512315296
TR030128157007392262792097
TR266903618734402705216603
TR701504939814770866991206
TR957287615317042101731044
TR649003964367678631674119
TR156938815246339197470604
TR861225760280394115692170
TR285618375202488419060731
TR547953420920409273686973
AE164418918730745393596
AE308110970197110968941
AE590474981251297032934
AE153195563772464742897
AE679001079729165694943
AE279269484124147726216
AE603138911437166198638
AE068243804515903106138
AE412640459667115689858
AE337125437974858451697
GB10GEKC50721013013109
GB12SRTS33636784443801
GB86QOKT30982409825339
GB14UXPV48266984676875
GB09KWGL74636293887252
GB69OMMM58716744181899
GB08UZXR18478642985694
GB62FLYK66083097889054
GB28XWUB98620303291736
GB36UKXF60650976992117
VG51TREU4074127965171700
VG48MLRL2373179768825879
VG82VUDI5058159588254496
VG82LPFE4172060144566744
VG32WFQI8618514735370894
VG30UHVY8677854646191129
VG80BRJY7046774069008330
VG02YJGC0184769647780388
VG21UZHE7970039851299264
VG18TTYS0203896448334395""".splitlines()
# The following IBANs were detected as invalid, so I removed them:
# TL082729006329851246793
# TL297626464298710812820
# TL884458237806050889877
# TL391868164800583027257
# TL343020249690418937133
# TL753667861828542863208
# TL349741409156256191378
# TL362195449234818858928
# TL539089589847679047839
# TL258052727278263703208
# LC53XCVC238423058296087694540929
# LC53XEIY609979207153397785722573
# LC34QHYT168420235658280804933327
# LC74RXMX090204524656445019789857
# LC48KHYO710521543801387119584365
# LC09IEOK730212212340805322981126
# LC11ARNO197949679560443418322160
# LC22WBVZ990238721604571039403043
# LC74RWAN085718618563710046930359
# LC25AGFC723302187294866758499798
# XK950805691464898317
# XK884764614319391388
# XK600851882328508541
# XK839760370560144243
# XK606658498518991544
# XK958904364800060950
# XK603587643115576223
# XK673216330536264219
# XK564095746549947149
# XK671363630089004882
# YY86QNQL00719894179064456209266850
# YY05DLMG07097815845971126639293620
# YY22VJRU87776647951802870899929172
# YY81KCOZ05702937251491071907006459
# YY59HTAQ72516526368949968968148439
# YY84QYYB00580449136092134227594808
# YY97ZKYE89681619317014576886154837
# YY95SXZW82475756549958360545747904
# YY08KSRD39688074574489771256936942
# YY79MSNY80958775125824421222948447
# ZZ42BLNL288538693245897396826771973
# ZZ40GSJY674051993318879306905972443
# ZZ34XSQW579374286465691058648292665
# ZZ72SRBA669305861802594134728595513
# ZZ21GKLW538403263080844520936832815
# ZZ77MVBO293991296221518971265192445
# ZZ65SCAJ605735175111102069055772701
# ZZ27IVZC321465223251350362930176499
# ZZ27SEJM481282382131199306115582554
# ZZ43PQLV216469604138756643619739783
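The entries above were presumably flagged by the standard IBAN checksum test. A minimal sketch of that test — the ISO 13616 mod-97 algorithm, written here as a hypothetical `iban_is_valid` helper that is not part of this file — might look like:

```python
def iban_is_valid(iban: str) -> bool:
    """ISO 13616 mod-97 checksum test (does not check country-specific formats)."""
    iban = iban.replace(" ", "").upper()
    if len(iban) < 5 or not iban.isalnum():
        return False
    # Move the country code and check digits to the end, then
    # expand letters to two-digit numbers (A=10 ... Z=35).
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    # A valid IBAN leaves a remainder of 1 when divided by 97.
    return int(digits) % 97 == 1
```

Note this only covers the checksum; country-specific length and layout rules are a separate validation step.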
| 24.976584 | 87 | 0.948492 | 768 | 18,133 | 22.388021 | 0.984375 | 0.001512 | 0.001861 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.843434 | 0.045442 | 18,133 | 725 | 88 | 25.011034 | 0.149922 | 0.104616 | 0 | 0 | 0 | 0 | 0.994373 | 0.781598 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
654df0e800d89cf2735a297e4c338a796e8ea104 | 31 | py | Python | light_cnn/__init__.py | NateThom/similarity_classifiers | 5b320e150181232a00813482d8361590ff1fd47e | [
"MIT"
] | null | null | null | light_cnn/__init__.py | NateThom/similarity_classifiers | 5b320e150181232a00813482d8361590ff1fd47e | [
"MIT"
] | null | null | null | light_cnn/__init__.py | NateThom/similarity_classifiers | 5b320e150181232a00813482d8361590ff1fd47e | [
"MIT"
] | null | null | null | from .light_cnn import LightCnn | 31 | 31 | 0.870968 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6560bd4f514ac3ea3a02b05113e17ef901543f0a | 9,895 | py | Python | usaspending_api/disaster/tests/integration/test_recipient_spending.py | g4brielvs/usaspending-api | bae7da2c204937ec1cdf75c052405b13145728d5 | [
"CC0-1.0"
] | null | null | null | usaspending_api/disaster/tests/integration/test_recipient_spending.py | g4brielvs/usaspending-api | bae7da2c204937ec1cdf75c052405b13145728d5 | [
"CC0-1.0"
] | null | null | null | usaspending_api/disaster/tests/integration/test_recipient_spending.py | g4brielvs/usaspending-api | bae7da2c204937ec1cdf75c052405b13145728d5 | [
"CC0-1.0"
] | null | null | null | import pytest
from rest_framework import status
from usaspending_api.search.tests.data.utilities import setup_elasticsearch_test
url = "/api/v2/disaster/recipient/spending/"
@pytest.mark.django_db
def test_correct_response_defc_no_results(
client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions
):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["N"])
expected_results = []
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_correct_response_single_defc(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L"])
expected_results = [
{
"code": "987654321",
"award_count": 2,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 2200.0,
"outlay": 1100.0,
},
{
"code": "456789123",
"award_count": 1,
"description": "RECIPIENT 2",
"id": ["3c92491a-f2cd-ec7d-294b-7daf91511866-R"],
"obligation": 20.0,
"outlay": 10.0,
},
{
"code": "DUNS Number not provided",
"award_count": 1,
"description": "RECIPIENT 1",
"id": ["5f572ec9-8b49-e5eb-22c7-f6ef316f7689-R"],
"obligation": 2.0,
"outlay": 1.0,
},
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_correct_response_multiple_defc(
client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions
):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"])
expected_results = [
{
"code": "987654321",
"award_count": 3,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 202200.0,
"outlay": 101100.0,
},
{
"code": "456789123",
"award_count": 1,
"description": "RECIPIENT 2",
"id": ["3c92491a-f2cd-ec7d-294b-7daf91511866-R"],
"obligation": 20.0,
"outlay": 10.0,
},
{
"code": "DUNS Number not provided",
"award_count": 1,
"description": "RECIPIENT 1",
"id": ["5f572ec9-8b49-e5eb-22c7-f6ef316f7689-R"],
"obligation": 2.0,
"outlay": 1.0,
},
{
"code": "096354360",
"award_count": 1,
"description": "MULTIPLE RECIPIENTS",
"id": None,
"obligation": 20000.0,
"outlay": 10000.0,
},
{
"code": "DUNS Number not provided",
"award_count": 1,
"description": "MULTIPLE RECIPIENTS",
"id": None,
"obligation": 2000000.0,
"outlay": 1000000.0,
},
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_correct_response_with_query(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], query="GIBBERISH")
expected_results = []
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], query="3")
expected_results = [
{
"code": "987654321",
"award_count": 3,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 202200.0,
"outlay": 101100.0,
}
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], query="ENT, 3")
expected_results = [
{
"code": "987654321",
"award_count": 3,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 202200.0,
"outlay": 101100.0,
}
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], query="ReCiPiEnT,")
expected_results = [
{
"code": "987654321",
"award_count": 3,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 202200.0,
"outlay": 101100.0,
}
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_correct_response_with_award_type_codes(
client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions
):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], award_type_codes=["IDV_A"])
expected_results = []
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], award_type_codes=["07", "A", "B"])
expected_results = [
{
"code": "987654321",
"award_count": 1,
"description": "RECIPIENT, 3",
"id": ["d2894d22-67fc-f9cb-4005-33fa6a29ef86-C", "d2894d22-67fc-f9cb-4005-33fa6a29ef86-R"],
"obligation": 2000.0,
"outlay": 1000.0,
},
{
"code": "456789123",
"award_count": 1,
"description": "RECIPIENT 2",
"id": ["3c92491a-f2cd-ec7d-294b-7daf91511866-R"],
"obligation": 20.0,
"outlay": 10.0,
},
{
"code": "DUNS Number not provided",
"award_count": 1,
"description": "RECIPIENT 1",
"id": ["5f572ec9-8b49-e5eb-22c7-f6ef316f7689-R"],
"obligation": 2.0,
"outlay": 1.0,
},
{
"code": "096354360",
"award_count": 1,
"description": "MULTIPLE RECIPIENTS",
"id": None,
"obligation": 20000.0,
"outlay": 10000.0,
},
]
assert resp.status_code == status.HTTP_200_OK
assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_invalid_defc(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["ZZ"])
assert resp.status_code == status.HTTP_400_BAD_REQUEST
assert resp.data["detail"] == "Field 'filter|def_codes' is outside valid values ['L', 'M', 'N']"
@pytest.mark.django_db
def test_invalid_defc_type(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes="100")
assert resp.status_code == status.HTTP_400_BAD_REQUEST
assert resp.data["detail"] == "Invalid value in 'filter|def_codes'. '100' is not a valid type (array)"
@pytest.mark.django_db
def test_missing_defc(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url)
assert resp.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
assert resp.data["detail"] == "Missing value: 'filter|def_codes' is a required field"
@pytest.mark.django_db
def test_pagination_page_and_limit(client, monkeypatch, helpers, elasticsearch_award_index, awards_and_transactions):
setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)
resp = helpers.post_for_spending_endpoint(client, url, def_codes=["L", "M"], page=2, limit=1)
expected_results = {
"totals": {"award_count": 7, "obligation": 2222222.0, "outlay": 1111111.0},
"results": [
{
"code": "456789123",
"award_count": 1,
"description": "RECIPIENT 2",
"id": ["3c92491a-f2cd-ec7d-294b-7daf91511866-R"],
"obligation": 20.0,
"outlay": 10.0,
}
],
"page_metadata": {
"page": 2,
"total": 5,
"limit": 1,
"next": 3,
"previous": 1,
"hasNext": True,
"hasPrevious": True,
},
"messages": [
"Notice! API Request to sort on 'id' field isn't fully "
"implemented. Results were actually sorted using 'description' "
"field."
],
}
assert resp.status_code == status.HTTP_200_OK
assert resp.json() == expected_results
| 36.245421 | 120 | 0.60576 | 1,075 | 9,895 | 5.335814 | 0.147907 | 0.045328 | 0.072176 | 0.040795 | 0.867852 | 0.867678 | 0.840481 | 0.840481 | 0.826011 | 0.820084 | 0 | 0.101544 | 0.260536 | 9,895 | 272 | 121 | 36.378676 | 0.682383 | 0 | 0 | 0.609244 | 0 | 0 | 0.245073 | 0.076604 | 0 | 0 | 0 | 0 | 0.109244 | 1 | 0.037815 | false | 0 | 0.012605 | 0 | 0.05042 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
659fea67015d91605eb401fdac04d9b519e90de3 | 2,816 | py | Python | cogs/srtr.py | tasuren/tensei_disko | 7ec1d88e3e80f13cc2a17700aae672f5bf9a876d | [
"MIT"
] | null | null | null | cogs/srtr.py | tasuren/tensei_disko | 7ec1d88e3e80f13cc2a17700aae672f5bf9a876d | [
"MIT"
] | null | null | null | cogs/srtr.py | tasuren/tensei_disko | 7ec1d88e3e80f13cc2a17700aae672f5bf9a876d | [
"MIT"
] | null | null | null | from discord.ext import commands
import discord
import pickle
import n_fc
class srtr(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
@commands.command()
async def srtr(self, ctx: commands.Context):
if ctx.message.content == "n#srtr":
            embed = discord.Embed(title="Shiritori", description="`n#srtr start` starts shiritori (a word-chain style chat) in the channel, and `n#srtr stop` stops it.", color=0x00ff00)
await ctx.message.reply(embed=embed)
return
if ctx.message.content == "n#srtr start":
try:
                if ctx.message.guild.id in n_fc.srtr_bool_list:
                    if ctx.message.channel.id in n_fc.srtr_bool_list[ctx.message.guild.id]:
                        embed = discord.Embed(title="Shiritori", description=f"Shiritori is already running in {ctx.message.channel.name}.", color=0x00ff00)
                        await ctx.message.reply(embed=embed)
                        return
                    else:
                        # Register this channel without wiping the guild's other shiritori channels
                        n_fc.srtr_bool_list[ctx.message.guild.id][ctx.message.channel.id] = 1
                else:
                    n_fc.srtr_bool_list[ctx.message.guild.id] = {ctx.message.channel.id: 1}
with open('srtr_bool_list.nira', 'wb') as f:
pickle.dump(n_fc.srtr_bool_list, f)
except BaseException as err:
await ctx.message.reply("err")
return
            embed = discord.Embed(title="Shiritori", description=f"Starting shiritori in {ctx.message.channel.name}.", color=0x00ff00)
await ctx.message.reply(embed=embed)
return
if ctx.message.content == "n#srtr stop":
try:
if ctx.message.guild.id not in n_fc.srtr_bool_list:
                    embed = discord.Embed(title="Shiritori", description=f"Shiritori is not running in {ctx.message.guild.name}.", color=0x00ff00)
await ctx.message.reply(embed=embed)
return
if ctx.message.channel.id not in n_fc.srtr_bool_list[ctx.message.guild.id]:
                    embed = discord.Embed(title="Shiritori", description=f"Shiritori is not running in {ctx.message.channel.name}.", color=0x00ff00)
await ctx.message.reply(embed=embed)
return
del n_fc.srtr_bool_list[ctx.message.guild.id][ctx.message.channel.id]
with open('srtr_bool_list.nira', 'wb') as f:
pickle.dump(n_fc.srtr_bool_list, f)
except BaseException as err:
await ctx.message.reply("err")
            embed = discord.Embed(title="Shiritori", description=f"Ending shiritori in {ctx.message.channel.name}.", color=0x00ff00)
await ctx.message.reply(embed=embed)
return
def setup(bot):
bot.add_cog(srtr(bot)) | 51.2 | 142 | 0.584517 | 353 | 2,816 | 4.549575 | 0.184136 | 0.174346 | 0.089664 | 0.068493 | 0.807597 | 0.791407 | 0.762142 | 0.725405 | 0.721046 | 0.6401 | 0 | 0.016318 | 0.303622 | 2,816 | 55 | 143 | 51.2 | 0.802652 | 0 | 0 | 0.519231 | 0 | 0 | 0.12957 | 0.084487 | 0 | 0 | 0.017039 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.076923 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65acdeafe551513b7c2679223546249143fbfdfd | 3,043 | py | Python | tests/pytests/functional/utils/user/test_chugid_and_umask.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 3 | 2015-08-30T04:23:47.000Z | 2018-07-15T00:35:23.000Z | tests/pytests/functional/utils/user/test_chugid_and_umask.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 4 | 2016-05-10T22:05:34.000Z | 2016-05-20T18:10:13.000Z | tests/pytests/functional/utils/user/test_chugid_and_umask.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 1 | 2022-02-22T10:43:09.000Z | 2022-02-22T10:43:09.000Z | import functools
import os
import subprocess
import pytest
import salt.utils.user
pytestmark = [
pytest.mark.destructive_test,
pytest.mark.skip_if_not_root,
pytest.mark.skip_on_windows,
]
@pytest.fixture(scope="module")
def account_1():
with pytest.helpers.create_account(create_group=True) as _account:
yield _account
@pytest.fixture(scope="module")
def account_2(account_1):
with pytest.helpers.create_account(group_name=account_1.group.name) as _account:
yield _account
def test_chugid(account_1, tmp_path):
# Since we're changing accounts to touch the file, the parent directory must be user and group writable
tmp_path.chmod(0o770)
testfile = tmp_path / "testfile"
# We should fail because the parent directory group owner is not the account running the test
ret = subprocess.run(
["touch", str(testfile)],
preexec_fn=functools.partial(
salt.utils.user.chugid_and_umask,
runas=account_1.username,
umask=None,
group=None,
),
check=False,
)
assert ret.returncode != 0
# However if we change the group ownership to one of the account's groups, it should succeed
os.chown(str(tmp_path), 0, account_1.group.info.gid)
ret = subprocess.run(
["touch", str(testfile)],
preexec_fn=functools.partial(
salt.utils.user.chugid_and_umask,
runas=account_1.username,
umask=None,
group=None,
),
check=False,
)
assert ret.returncode == 0
assert testfile.exists()
testfile_stat = testfile.stat()
assert testfile_stat.st_uid == account_1.info.uid
assert testfile_stat.st_gid == account_1.info.gid
def test_chugid_and_group(account_1, account_2, tmp_path):
# Since we're changing accounts to touch the file, the parent directory must be world-writable
tmp_path.chmod(0o770)
testfile = tmp_path / "testfile"
# We should fail because the parent directory group owner is not the account running the test
ret = subprocess.run(
["touch", str(testfile)],
preexec_fn=functools.partial(
salt.utils.user.chugid_and_umask,
runas=account_2.username,
umask=None,
group=account_1.group.name,
),
check=False,
)
assert ret.returncode != 0
# However if we change the group ownership to one of the account's groups, it should succeed
os.chown(str(tmp_path), 0, account_1.group.info.gid)
ret = subprocess.run(
["touch", str(testfile)],
preexec_fn=functools.partial(
salt.utils.user.chugid_and_umask,
runas=account_2.username,
umask=None,
group=account_1.group.name,
),
check=False,
)
assert ret.returncode == 0
assert testfile.exists()
testfile_stat = testfile.stat()
assert testfile_stat.st_uid == account_2.info.uid
assert testfile_stat.st_gid == account_1.group.info.gid
| 28.980952 | 107 | 0.659547 | 404 | 3,043 | 4.80198 | 0.227723 | 0.057732 | 0.040206 | 0.043299 | 0.835567 | 0.829381 | 0.797938 | 0.758763 | 0.758763 | 0.719588 | 0 | 0.014448 | 0.249425 | 3,043 | 104 | 108 | 29.259615 | 0.834939 | 0.184029 | 0 | 0.692308 | 0 | 0 | 0.019386 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 1 | 0.051282 | false | 0 | 0.064103 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65d2f4a74b021735c340a1be2b2510e8335d2269 | 117 | py | Python | URI/2747.py | namelew/PythonExercices | e6701dddf163b616987fc9edd8b9ef8e9a207e84 | [
"MIT"
] | null | null | null | URI/2747.py | namelew/PythonExercices | e6701dddf163b616987fc9edd8b9ef8e9a207e84 | [
"MIT"
] | 1 | 2020-11-09T17:20:58.000Z | 2020-11-09T17:21:10.000Z | URI/2747.py | namelew/PythonExercices | e6701dddf163b616987fc9edd8b9ef8e9a207e84 | [
"MIT"
] | null | null | null | x = 1
print("-"*39)
while x <= 5:
    print("|" + " " * 37 + "|")
x += 1
print("-"*39) | 14.625 | 21 | 0.410256 | 17 | 117 | 2.823529 | 0.470588 | 0.083333 | 0.291667 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104651 | 0.264957 | 117 | 8 | 22 | 14.625 | 0.453488 | 0 | 0 | 0.25 | 0 | 0 | 0.042373 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.625 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
029e613fe9ea82daf80c7507f2ee11a12e283277 | 90 | py | Python | 10/03/2/package1/package11/module1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | null | null | null | 10/03/2/package1/package11/module1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | 70 | 2017-06-01T11:02:51.000Z | 2017-06-30T00:35:32.000Z | 10/03/3/package1/package11/module1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | null | null | null | print('0/package/module1.py Run!!')
def some_method():
print('module1.some_method()')
| 22.5 | 35 | 0.688889 | 13 | 90 | 4.615385 | 0.692308 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.1 | 90 | 3 | 36 | 30 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0.522222 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
02a715c6564fe587885db6c48b02edac2967b559 | 86 | py | Python | Programa/menu/__init__.py | NicolasGandolfi/Exercicios-Python | 935fe3577c149192f9e29568e9798e970a620131 | [
"MIT"
] | null | null | null | Programa/menu/__init__.py | NicolasGandolfi/Exercicios-Python | 935fe3577c149192f9e29568e9798e970a620131 | [
"MIT"
] | null | null | null | Programa/menu/__init__.py | NicolasGandolfi/Exercicios-Python | 935fe3577c149192f9e29568e9798e970a620131 | [
"MIT"
] | null | null | null | def linha(txt):
    print('\033[0;35m' + '—' * 50)
print(f' {txt}')
print('—'*50)
| 12.285714 | 27 | 0.476744 | 15 | 86 | 2.866667 | 0.666667 | 0.372093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0.244186 | 86 | 6 | 28 | 14.333333 | 0.476923 | 0 | 0 | 0 | 0 | 0 | 0.22619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.75 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
02f47dadea79576a9f157ac8f16ff4cc069abf71 | 106 | py | Python | verilogparser/__init__.py | sepandhaghighi/verilogparser | 8983b8d74fa28605b6a6772c6a02eafa6e6ba213 | [
"MIT"
] | 13 | 2017-10-29T15:52:19.000Z | 2022-02-06T18:32:20.000Z | verilogparser/__init__.py | sepandhaghighi/verilogparser | 8983b8d74fa28605b6a6772c6a02eafa6e6ba213 | [
"MIT"
] | null | null | null | verilogparser/__init__.py | sepandhaghighi/verilogparser | 8983b8d74fa28605b6a6772c6a02eafa6e6ba213 | [
"MIT"
] | 4 | 2020-01-20T07:13:26.000Z | 2022-02-06T18:32:59.000Z | # -*- coding: utf-8 -*-
from .verilogparser import *
from .logics import *
from .deductivelogic import * | 17.666667 | 29 | 0.688679 | 12 | 106 | 6.083333 | 0.666667 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.169811 | 106 | 6 | 29 | 17.666667 | 0.818182 | 0.198113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b82486939a3556979f307de4e951e7d15a60a4d1 | 38 | py | Python | tests/test_video.py | hixan/av_slice | 0fbb4f45281f701c8e9eb9c764f380a719b5d1c0 | [
"MIT"
] | null | null | null | tests/test_video.py | hixan/av_slice | 0fbb4f45281f701c8e9eb9c764f380a719b5d1c0 | [
"MIT"
] | 2 | 2020-05-05T07:55:38.000Z | 2021-11-15T17:48:40.000Z | tests/test_video.py | hixan/av_slice | 0fbb4f45281f701c8e9eb9c764f380a719b5d1c0 | [
"MIT"
] | null | null | null |
def test_remove_sections():
pass
| 9.5 | 27 | 0.710526 | 5 | 38 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 38 | 3 | 28 | 12.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b846a8c31d3bdf2cd5d292aca7bed13053efd337 | 95 | py | Python | job/src/slipstream/job/__init__.py | slipstream/SlipStreamJobEngine | 9860283b66ee053022c8261517d85b2a1088610c | [
"Apache-2.0"
] | 3 | 2019-04-27T10:36:21.000Z | 2019-04-29T12:41:57.000Z | code/src/nuvla/job/__init__.py | nuvla/job-engine | 58d42bd24d8dd2c6e28541c08df1455c9ac909f6 | [
"Apache-2.0"
] | 131 | 2019-02-13T06:00:49.000Z | 2022-03-29T15:06:03.000Z | job/src/slipstream/job/__init__.py | slipstream/SlipStreamJobEngine | 9860283b66ee053022c8261517d85b2a1088610c | [
"Apache-2.0"
] | 1 | 2020-12-03T11:35:21.000Z | 2020-12-03T11:35:21.000Z | # -*- coding: utf-8 -*-
from .distributor import *
from .executor import *
from .job import *
| 15.833333 | 26 | 0.652632 | 12 | 95 | 5.166667 | 0.666667 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.189474 | 95 | 5 | 27 | 19 | 0.792208 | 0.221053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b8574142b53ed1a09720632bf05e3d45b60cc5c3 | 32 | py | Python | Modules/TrainUS/TrainUSLib/__init__.py | EBATINCA/TrainUS | f24c894d23f4f608ccef77914215eba8c5559101 | [
"Apache-2.0"
] | 2 | 2022-01-18T22:39:03.000Z | 2022-01-20T10:28:21.000Z | Modules/TrainUS/TrainUSLib/__init__.py | EBATINCA/TrainUS | f24c894d23f4f608ccef77914215eba8c5559101 | [
"Apache-2.0"
] | 2 | 2022-01-28T13:11:57.000Z | 2022-03-29T11:22:23.000Z | Modules/TrainUS/TrainUSLib/__init__.py | EBATINCA/TrainUS | f24c894d23f4f608ccef77914215eba8c5559101 | [
"Apache-2.0"
] | null | null | null | from .TrainUSParameters import * | 32 | 32 | 0.84375 | 3 | 32 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b868ad5d7e98a14225c6fd781e4afd72adc90c51 | 265 | py | Python | lib/machine_learning/context_classification/context_classifiers/__init__.py | thesmarthomeninja/Video_Gaming_ML | e9c147f33a790a9cd3e4ee631ddbf6bbf91c3921 | [
"MIT"
] | null | null | null | lib/machine_learning/context_classification/context_classifiers/__init__.py | thesmarthomeninja/Video_Gaming_ML | e9c147f33a790a9cd3e4ee631ddbf6bbf91c3921 | [
"MIT"
] | 4 | 2020-09-25T22:39:46.000Z | 2022-02-09T23:39:43.000Z | lib/machine_learning/context_classification/context_classifiers/__init__.py | AsimKhan2019/Serpent-AI | e9c147f33a790a9cd3e4ee631ddbf6bbf91c3921 | [
"MIT"
] | null | null | null | from lib.machine_learning.context_classification.context_classifiers.svm_context_classifier import SVMContextClassifier
from lib.machine_learning.context_classification.context_classifiers.cnn_inception_v3_context_classifier import CNNInceptionV3ContextClassifier
| 66.25 | 143 | 0.935849 | 28 | 265 | 8.428571 | 0.535714 | 0.059322 | 0.118644 | 0.186441 | 0.516949 | 0.516949 | 0.516949 | 0.516949 | 0 | 0 | 0 | 0.007813 | 0.033962 | 265 | 3 | 144 | 88.333333 | 0.914063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b886a4fbaf44bb591caaabce7624bbaaf45ab8b8 | 130 | py | Python | privx_api/utils.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 4 | 2020-06-15T17:14:18.000Z | 2021-12-20T12:12:56.000Z | privx_api/utils.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 5 | 2019-11-25T07:04:07.000Z | 2021-05-19T08:09:53.000Z | privx_api/utils.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 23 | 2019-11-22T08:17:58.000Z | 2022-02-21T15:50:36.000Z | from typing import Any
def get_value(obj: Any, default_value: Any) -> Any:
return obj if obj is not None else default_value
| 21.666667 | 52 | 0.738462 | 23 | 130 | 4.043478 | 0.652174 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 130 | 5 | 53 | 26 | 0.894231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
b88e654e466ad3288c50eb35d774882f8b85c7e5 | 169 | py | Python | Databaselayer/IPostStatus.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Databaselayer/IPostStatus.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Databaselayer/IPostStatus.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | import hashlib, os
import logging
class IPostStatus:
def insertUserStatus(self): raise NotImplementedError
def getUserStatuses(self): raise NotImplementedError
| 24.142857 | 57 | 0.810651 | 17 | 169 | 8.058824 | 0.705882 | 0.131387 | 0.408759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142012 | 169 | 6 | 58 | 28.166667 | 0.944828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b22bd61bfb3ebf0345e574b99e45efbbff54a732 | 40,169 | py | Python | cameo/localdb.py | muchu1983/104_cameo | 8c7f78de198a5bd8d870589402e3b7e8b59f520a | [
"BSD-3-Clause"
] | null | null | null | cameo/localdb.py | muchu1983/104_cameo | 8c7f78de198a5bd8d870589402e3b7e8b59f520a | [
"BSD-3-Clause"
] | null | null | null | cameo/localdb.py | muchu1983/104_cameo | 8c7f78de198a5bd8d870589402e3b7e8b59f520a | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Copyright (C) 2015, MuChu Hsu
Contributed by Muchu Hsu (muchu1983@gmail.com)
This file is part of BSD license
<https://opensource.org/licenses/BSD-3-Clause>
"""
from bennu.localdb import SQLite3Db
from bennu.localdb import MongoDb
import random
"""
Local database access
"""
#JSON importer
class LocalDbForJsonImporter:
    #constructor
def __init__(self):
self.mongodb = MongoDb().getClient().localdb
#exchange-rate API
class LocalDbForCurrencyApi:
    #constructor
def __init__(self):
self.mongodb = MongoDb().getClient().localdb
#crunchbase
class LocalDbForCRUNCHBASE:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS crunchbase_account("
"id INTEGER PRIMARY KEY,"
"strEmail TEXT NOT NULL,"
"strPassword TEXT NOT NULL,"
"strStatus TEXT NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS crunchbase_organization("
"id INTEGER PRIMARY KEY,"
"strOrganizationUrl TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert account if it does not already exist
def insertAccountIfNotExists(self, strEmail=None, strPassword=None):
strSQL = "SELECT * FROM crunchbase_account WHERE strEmail='%s'"%strEmail
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO crunchbase_account VALUES(NULL, '%s', '%s', 'ready')"%(strEmail, strPassword)
self.db.commitSQL(strSQL=strSQL)
    #fetch a random ready account
def fetchRandomReadyAccount(self):
strSQL = "SELECT * FROM crunchbase_account WHERE strStatus='ready'"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
rowDataAccount = lstRowData[random.randint(0, len(lstRowData)-1)]
return (rowDataAccount["strEmail"], rowDataAccount["strPassword"])
    #insert organization URL if it does not already exist
def insertOrganizationUrlIfNotExists(self, strOrganizationUrl=None):
strSQL = "SELECT * FROM crunchbase_organization WHERE strOrganizationUrl='%s'"%strOrganizationUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO crunchbase_organization VALUES(NULL, '%s', 0)"%strOrganizationUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all organization urls not yet downloaded
def fetchallNotObtainedOrganizationUrl(self):
strSQL = "SELECT strOrganizationUrl FROM crunchbase_organization WHERE isGot=0"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCompanyUrl = []
for rowData in lstRowData:
lstStrCompanyUrl.append(rowData["strOrganizationUrl"])
return lstStrCompanyUrl
    #check whether the organization has been downloaded
def checkOrganizationIsGot(self, strOrganizationUrl=None):
isGot = True
strSQL = "SELECT * FROM crunchbase_organization WHERE strOrganizationUrl='%s'"%strOrganizationUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the organization as downloaded
def updateOrganizationStatusIsGot(self, strOrganizationUrl=None):
strSQL = "UPDATE crunchbase_organization SET isGot=1 WHERE strOrganizationUrl='%s'"%strOrganizationUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all organization urls already downloaded
def fetchallCompletedObtainedOrganizationUrl(self):
strSQL = "SELECT strOrganizationUrl FROM crunchbase_organization WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCompanyUrl = []
for rowData in lstRowData:
lstStrCompanyUrl.append(rowData["strOrganizationUrl"])
return lstStrCompanyUrl
    #mark the organization as not yet downloaded
def updateOrganizationStatusIsNotGot(self, strOrganizationUrl=None):
strSQL = "UPDATE crunchbase_organization SET isGot=0 WHERE strOrganizationUrl='%s'"%strOrganizationUrl
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM crunchbase_organization"
self.db.commitSQL(strSQL=strSQL)
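The classes in this module build every statement with `%` string interpolation, which breaks (and is injectable) as soon as a URL contains a quote. As a minimal self-contained sketch, the same insert-if-not-exists pattern can be written with parameterized queries using the stdlib `sqlite3` module; the in-memory connection and the example URL are stand-ins for `SQLite3Db` and real crawl data:

```python
import sqlite3

# in-memory database standing in for SQLite3Db(strResFolderPath="cameo_res")
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS crunchbase_organization("
    "id INTEGER PRIMARY KEY,"
    "strOrganizationUrl TEXT NOT NULL,"
    "isGot BOOLEAN NOT NULL)"
)

def insert_organization_url_if_not_exists(strOrganizationUrl):
    # qmark placeholders let sqlite3 escape the value, so URLs containing
    # quotes can neither break the statement nor inject SQL
    row = conn.execute(
        "SELECT 1 FROM crunchbase_organization WHERE strOrganizationUrl=?",
        (strOrganizationUrl,),
    ).fetchone()
    if row is None:
        conn.execute(
            "INSERT INTO crunchbase_organization VALUES(NULL, ?, 0)",
            (strOrganizationUrl,),
        )
        conn.commit()

insert_organization_url_if_not_exists("https://example.com/org/a")
insert_organization_url_if_not_exists("https://example.com/org/a")  # skipped
count = conn.execute(
    "SELECT COUNT(*) FROM crunchbase_organization").fetchone()[0]
```

A UNIQUE constraint on `strOrganizationUrl` plus `INSERT OR IGNORE` would collapse the select-then-insert into one statement, at the cost of a schema change.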
#crowdcube
class LocalDbForCROWDCUBE:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS crowdcube_account("
"id INTEGER PRIMARY KEY,"
"strEmail TEXT NOT NULL,"
"strPassword TEXT NOT NULL,"
"strStatus TEXT NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS crowdcube_company("
"id INTEGER PRIMARY KEY,"
"strCompanyUrl TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert account if it does not already exist
def insertAccountIfNotExists(self, strEmail=None, strPassword=None):
strSQL = "SELECT * FROM crowdcube_account WHERE strEmail='%s'"%strEmail
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO crowdcube_account VALUES(NULL, '%s', '%s', 'ready')"%(strEmail, strPassword)
self.db.commitSQL(strSQL=strSQL)
    #fetch a random ready account
def fetchRandomReadyAccount(self):
strSQL = "SELECT * FROM crowdcube_account WHERE strStatus='ready'"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
rowDataAccount = lstRowData[random.randint(0, len(lstRowData)-1)]
return (rowDataAccount["strEmail"], rowDataAccount["strPassword"])
    #insert company URL if it does not already exist
def insertCompanyUrlIfNotExists(self, strCompanyUrl=None):
strSQL = "SELECT * FROM crowdcube_company WHERE strCompanyUrl='%s'"%strCompanyUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO crowdcube_company VALUES(NULL, '%s', 0)"%strCompanyUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all company urls not yet downloaded
def fetchallNotObtainedCompanyUrl(self):
strSQL = "SELECT strCompanyUrl FROM crowdcube_company WHERE isGot=0"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCompanyUrl = []
for rowData in lstRowData:
lstStrCompanyUrl.append(rowData["strCompanyUrl"])
return lstStrCompanyUrl
    #check whether the company has been downloaded
def checkCompanyIsGot(self, strCompanyUrl=None):
isGot = True
strSQL = "SELECT * FROM crowdcube_company WHERE strCompanyUrl='%s'"%strCompanyUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the company as downloaded
def updateCompanyStatusIsGot(self, strCompanyUrl=None):
strSQL = "UPDATE crowdcube_company SET isGot=1 WHERE strCompanyUrl='%s'"%strCompanyUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all company urls already downloaded
def fetchallCompletedObtainedCompanyUrl(self):
strSQL = "SELECT strCompanyUrl FROM crowdcube_company WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCompanyUrl = []
for rowData in lstRowData:
lstStrCompanyUrl.append(rowData["strCompanyUrl"])
return lstStrCompanyUrl
    #mark the company as not yet downloaded
def updateCompanyStatusIsNotGot(self, strCompanyUrl=None):
strSQL = "UPDATE crowdcube_company SET isGot=0 WHERE strCompanyUrl='%s'"%strCompanyUrl
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM crowdcube_company"
self.db.commitSQL(strSQL=strSQL)
#JD crowdfunding
class LocalDbForJD:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS jd_category("
"id INTEGER PRIMARY KEY,"
"strCategoryPage1Url TEXT NOT NULL,"
"strCategoryName TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS jd_project("
"id INTEGER PRIMARY KEY,"
"strProjectUrl TEXT NOT NULL,"
"intCategoryId INTEGER NOT NULL,"
"isGot BOOLEAN NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = (
"CREATE TABLE IF NOT EXISTS jd_funder("
"id INTEGER PRIMARY KEY,"
"strFunderUrl TEXT NOT NULL,"
"intCategoryId INTEGER NOT NULL,"
"isGot BOOLEAN NOT NULL)"
)
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert category if it does not already exist
def insertCategoryIfNotExists(self, strCategoryPage1Url=None, strCategoryName=None):
strSQL = "SELECT * FROM jd_category WHERE strCategoryPage1Url='%s'"%strCategoryPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO jd_category VALUES(NULL, '%s', '%s', 0)"%(strCategoryPage1Url, strCategoryName)
self.db.commitSQL(strSQL=strSQL)
    #fetch the category name
def fetchCategoryNameByUrl(self, strCategoryPage1Url=None):
strSQL = "SELECT * FROM jd_category WHERE strCategoryPage1Url='%s'"%strCategoryPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
return lstRowData[0]["strCategoryName"]
    #fetch all category page-1 urls (for the given isGot status)
def fetchallCategoryUrl(self, isGot=False):
dicIsGotCode = {True:"1", False:"0"}
strSQL = "SELECT strCategoryPage1Url FROM jd_category WHERE isGot=%s"%dicIsGotCode[isGot]
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCategoryPage1Url = []
for rowData in lstRowData:
lstStrCategoryPage1Url.append(rowData["strCategoryPage1Url"])
return lstStrCategoryPage1Url
    #fetch all category page-1 urls not yet downloaded
def fetchallNotObtainedCategoryUrl(self):
return self.fetchallCategoryUrl(isGot=False)
    #fetch all category page-1 urls already downloaded
def fetchallCompletedObtainedCategoryUrl(self):
return self.fetchallCategoryUrl(isGot=True)
    #mark the category as downloaded
def updateCategoryStatusIsGot(self, strCategoryPage1Url=None):
strSQL = "UPDATE jd_category SET isGot=1 WHERE strCategoryPage1Url='%s'"%strCategoryPage1Url
self.db.commitSQL(strSQL=strSQL)
    #fetch the category id
def fetchCategoryIdByUrl(self, strCategoryPage1Url=None):
strSQL = "SELECT * FROM jd_category WHERE strCategoryPage1Url='%s'"%strCategoryPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
return lstRowData[0]["id"]
    #insert project URL if it does not already exist
def insertProjectUrlIfNotExists(self, strProjectUrl=None, strCategoryPage1Url=None):
intCategoryId = self.fetchCategoryIdByUrl(strCategoryPage1Url=strCategoryPage1Url)
#insert project url if not exists
strSQL = "SELECT * FROM jd_project WHERE strProjectUrl='%s'"%strProjectUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO jd_project VALUES(NULL, '%s', %d,0)"%(strProjectUrl, intCategoryId)
self.db.commitSQL(strSQL=strSQL)
    #insert funder URL if it does not already exist
def insertFunderUrlIfNotExists(self, strFunderUrl=None, strCategoryPage1Url=None):
intCategoryId = self.fetchCategoryIdByUrl(strCategoryPage1Url=strCategoryPage1Url)
#insert funder url if not exists
strSQL = "SELECT * FROM jd_funder WHERE strFunderUrl='%s'"%strFunderUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO jd_funder VALUES(NULL, '%s', %d,0)"%(strFunderUrl, intCategoryId)
self.db.commitSQL(strSQL=strSQL)
    #fetch the project urls of the given category
def fetchallProjectUrlByCategoryUrl(self, strCategoryPage1Url=None):
intCategoryId = self.fetchCategoryIdByUrl(strCategoryPage1Url=strCategoryPage1Url)
strSQL = "SELECT * FROM jd_project WHERE intCategoryId=%d"%intCategoryId
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrProjectUrl = []
for rowData in lstRowData:
lstStrProjectUrl.append(rowData["strProjectUrl"])
return lstStrProjectUrl
    #fetch the funder urls of the given category
def fetchallFunderUrlByCategoryUrl(self, strCategoryPage1Url=None):
intCategoryId = self.fetchCategoryIdByUrl(strCategoryPage1Url=strCategoryPage1Url)
strSQL = "SELECT * FROM jd_funder WHERE intCategoryId=%d"%intCategoryId
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrFunderUrl = []
for rowData in lstRowData:
lstStrFunderUrl.append(rowData["strFunderUrl"])
return lstStrFunderUrl
    #check whether the project has been downloaded
def checkProjectIsGot(self, strProjectUrl=None):
isGot = True
strSQL = "SELECT * FROM jd_project WHERE strProjectUrl='%s'"%strProjectUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #check whether the funder has been downloaded
def checkFunderIsGot(self, strFunderUrl=None):
isGot = True
strSQL = "SELECT * FROM jd_funder WHERE strFunderUrl='%s'"%strFunderUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the project as downloaded
def updateProjectStatusIsGot(self, strProjectUrl=None):
strSQL = "UPDATE jd_project SET isGot=1 WHERE strProjectUrl='%s'"%strProjectUrl
self.db.commitSQL(strSQL=strSQL)
    #mark the funder as downloaded
def updateFunderStatusIsGot(self, strFunderUrl=None):
strSQL = "UPDATE jd_funder SET isGot=1 WHERE strFunderUrl='%s'"%strFunderUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all project urls already downloaded
def fetchallCompletedObtainedProjectUrl(self):
strSQL = "SELECT strProjectUrl FROM jd_project WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrProjectUrl = []
for rowData in lstRowData:
lstStrProjectUrl.append(rowData["strProjectUrl"])
return lstStrProjectUrl
    #fetch all funder urls already downloaded
def fetchallCompletedObtainedFunderUrl(self):
strSQL = "SELECT strFunderUrl FROM jd_funder WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrFunderUrl = []
for rowData in lstRowData:
lstStrFunderUrl.append(rowData["strFunderUrl"])
return lstStrFunderUrl
    #mark the project as not yet downloaded
def updateProjectStatusIsNotGot(self, strProjectUrl=None):
strSQL = "UPDATE jd_project SET isGot=0 WHERE strProjectUrl='%s'"%strProjectUrl
self.db.commitSQL(strSQL=strSQL)
    #mark the funder as not yet downloaded
def updateFunderStatusIsNotGot(self, strFunderUrl=None):
strSQL = "UPDATE jd_funder SET isGot=0 WHERE strFunderUrl='%s'"%strFunderUrl
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM jd_category"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM jd_project"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM jd_funder"
self.db.commitSQL(strSQL=strSQL)
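Every table above drives the crawler off a single per-row `isGot` flag: insert at 0, fetch the pending rows, flip to 1 once downloaded. A minimal sketch of that lifecycle with the stdlib `sqlite3` module (the in-memory connection and the project URLs are hypothetical stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS jd_project("
    "id INTEGER PRIMARY KEY,"
    "strProjectUrl TEXT NOT NULL,"
    "isGot BOOLEAN NOT NULL)"
)

def insert_project(url):
    # new rows start in the not-yet-downloaded state (isGot=0)
    conn.execute("INSERT INTO jd_project VALUES(NULL, ?, 0)", (url,))

def fetch_not_obtained():
    # pending work queue: every url whose flag is still 0
    rows = conn.execute(
        "SELECT strProjectUrl FROM jd_project WHERE isGot=0 ORDER BY id"
    ).fetchall()
    return [r[0] for r in rows]

def mark_got(url):
    # flip the flag once the page has been fetched successfully
    conn.execute("UPDATE jd_project SET isGot=1 WHERE strProjectUrl=?", (url,))

insert_project("https://z.jd.com/project/1.html")
insert_project("https://z.jd.com/project/2.html")
pending_before = fetch_not_obtained()            # both urls pending
mark_got("https://z.jd.com/project/1.html")
pending_after = fetch_not_obtained()             # only project 2 remains
```

Because the flag lives in the database, a crashed crawl resumes from `fetch_not_obtained()` with no extra bookkeeping.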
#TECHCRUNCH
class LocalDbForTECHCRUNCH:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS techcrunch_news("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"intTopicId INTEGER NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS techcrunch_topic("
"id INTEGER PRIMARY KEY,"
"strTopicPage1Url TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert topic if it does not already exist
def insertTopicIfNotExists(self, strTopicPage1Url=None):
strSQL = "SELECT * FROM techcrunch_topic WHERE strTopicPage1Url='%s'"%strTopicPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO techcrunch_topic VALUES(NULL, '%s', 0)"%strTopicPage1Url
self.db.commitSQL(strSQL=strSQL)
    #fetch all topic page-1 urls (for the given isGot status)
def fetchallTopicUrl(self, isGot=False):
dicIsGotCode = {True:"1", False:"0"}
strSQL = "SELECT strTopicPage1Url FROM techcrunch_topic WHERE isGot=%s"%dicIsGotCode[isGot]
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrTopicPage1Url = []
for rowData in lstRowData:
lstStrTopicPage1Url.append(rowData["strTopicPage1Url"])
return lstStrTopicPage1Url
    #fetch all topic page-1 urls not yet downloaded
def fetchallNotObtainedTopicUrl(self):
return self.fetchallTopicUrl(isGot=False)
    #fetch all topic page-1 urls already downloaded
def fetchallCompletedObtainedTopicUrl(self):
return self.fetchallTopicUrl(isGot=True)
    #mark the topic as downloaded
def updateTopicStatusIsGot(self, strTopicPage1Url=None):
strSQL = "UPDATE techcrunch_topic SET isGot=1 WHERE strTopicPage1Url='%s'"%strTopicPage1Url
self.db.commitSQL(strSQL=strSQL)
    #fetch the topic id
def fetchTopicIdByUrl(self, strTopicPage1Url=None):
strSQL = "SELECT * FROM techcrunch_topic WHERE strTopicPage1Url='%s'"%strTopicPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
return lstRowData[0]["id"]
    #insert news URL if it does not already exist
def insertNewsUrlIfNotExists(self, strNewsUrl=None, strTopicPage1Url=None):
intTopicId = self.fetchTopicIdByUrl(strTopicPage1Url=strTopicPage1Url)
#insert news url if not exists
strSQL = "SELECT * FROM techcrunch_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO techcrunch_news VALUES(NULL, '%s', %d,0)"%(strNewsUrl, intTopicId)
self.db.commitSQL(strSQL=strSQL)
    #fetch the news urls of the given topic
def fetchallNewsUrlByTopicUrl(self, strTopicPage1Url=None):
intTopicId = self.fetchTopicIdByUrl(strTopicPage1Url=strTopicPage1Url)
strSQL = "SELECT * FROM techcrunch_news WHERE intTopicId=%d"%intTopicId
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
    #check whether the news item has been downloaded
def checkNewsIsGot(self, strNewsUrl=None):
isGot = True
strSQL = "SELECT * FROM techcrunch_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the news item as downloaded
def updateNewsStatusIsGot(self, strNewsUrl=None):
strSQL = "UPDATE techcrunch_news SET isGot=1 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all news urls already downloaded
def fetchallCompletedObtainedNewsUrl(self):
strSQL = "SELECT strNewsUrl FROM techcrunch_news WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
    #mark the news item as not yet downloaded
def updateNewsStatusIsNotGot(self, strNewsUrl=None):
strSQL = "UPDATE techcrunch_news SET isGot=0 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM techcrunch_news"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM techcrunch_topic"
self.db.commitSQL(strSQL=strSQL)
#INSIDE (inside.com.tw)
class LocalDbForINSIDE:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS inside_news("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS inside_tag("
"id INTEGER PRIMARY KEY,"
"strTagPage1Url TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS inside_newstag("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"strTagPage1Url TEXT NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert tag if it does not already exist
def insertTagIfNotExists(self, strTagPage1Url=None):
strSQL = "SELECT * FROM inside_tag WHERE strTagPage1Url='%s'"%strTagPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO inside_tag VALUES(NULL, '%s', 0)"%strTagPage1Url
self.db.commitSQL(strSQL=strSQL)
    #fetch all tag page-1 urls not yet downloaded
def fetchallNotObtainedTagPage1Url(self):
strSQL = "SELECT strTagPage1Url FROM inside_tag WHERE isGot=0"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrTagPage1Url = []
for rowData in lstRowData:
lstStrTagPage1Url.append(rowData["strTagPage1Url"])
return lstStrTagPage1Url
    #fetch all tag page-1 urls already downloaded
def fetchallCompletedObtainedTagPage1Url(self):
strSQL = "SELECT strTagPage1Url FROM inside_tag WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrTagPage1Url = []
for rowData in lstRowData:
lstStrTagPage1Url.append(rowData["strTagPage1Url"])
return lstStrTagPage1Url
    #mark the tag as downloaded
def updateTagStatusIsGot(self, strTagPage1Url=None):
strSQL = "UPDATE inside_tag SET isGot=1 WHERE strTagPage1Url='%s'"%strTagPage1Url
self.db.commitSQL(strSQL=strSQL)
    #insert news URL and its corresponding news-tag mapping
def insertNewsUrlAndNewsTagMappingIfNotExists(self, strNewsUrl=None, strTagPage1Url=None):
#insert news url if not exists
strSQL = "SELECT * FROM inside_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO inside_news VALUES(NULL, '%s', 0)"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
#insert news tag mapping if not exists
strSQL = "SELECT * FROM inside_newstag WHERE strNewsUrl='%s' AND strTagPage1Url='%s'"%(strNewsUrl, strTagPage1Url)
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO inside_newstag VALUES(NULL, '%s', '%s')"%(strNewsUrl, strTagPage1Url)
self.db.commitSQL(strSQL=strSQL)
    #fetch the news urls of the given tag
def fetchallNewsUrlByTagPage1Url(self, strTagPage1Url=None):
strSQL = "SELECT * FROM inside_newstag WHERE strTagPage1Url='%s'"%strTagPage1Url
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
    #check whether the news item has been downloaded
def checkNewsIsGot(self, strNewsUrl=None):
isGot = True
strSQL = "SELECT * FROM inside_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the news item as downloaded
def updateNewsStatusIsGot(self, strNewsUrl=None):
strSQL = "UPDATE inside_news SET isGot=1 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
    #mark the news item as not yet downloaded
def updateNewsStatusIsNotGot(self, strNewsUrlPart=None):
        strSQL = "UPDATE inside_news SET isGot=0 WHERE strNewsUrl LIKE '%" + strNewsUrlPart + "%'"
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM inside_news"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM inside_tag"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM inside_newstag"
self.db.commitSQL(strSQL=strSQL)
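Unlike the other sites, INSIDE stores a many-to-many relation: one row per news url in `inside_news`, plus one row per (news, tag) pair in the `inside_newstag` mapping table, so the same article can appear under several tags without duplication. A self-contained sketch of that pattern with the stdlib `sqlite3` module (the article and tag urls are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inside_news(
    id INTEGER PRIMARY KEY, strNewsUrl TEXT NOT NULL, isGot BOOLEAN NOT NULL);
CREATE TABLE inside_newstag(
    id INTEGER PRIMARY KEY, strNewsUrl TEXT NOT NULL, strTagPage1Url TEXT NOT NULL);
""")

def add_news_with_tag(news_url, tag_url):
    # one row per news url ...
    if conn.execute("SELECT 1 FROM inside_news WHERE strNewsUrl=?",
                    (news_url,)).fetchone() is None:
        conn.execute("INSERT INTO inside_news VALUES(NULL, ?, 0)", (news_url,))
    # ... and one mapping row per distinct (news, tag) pair
    if conn.execute(
            "SELECT 1 FROM inside_newstag WHERE strNewsUrl=? AND strTagPage1Url=?",
            (news_url, tag_url)).fetchone() is None:
        conn.execute("INSERT INTO inside_newstag VALUES(NULL, ?, ?)",
                     (news_url, tag_url))

add_news_with_tag("https://www.inside.com.tw/article/1", "tag/ai")
add_news_with_tag("https://www.inside.com.tw/article/1", "tag/startup")
news_count = conn.execute("SELECT COUNT(*) FROM inside_news").fetchone()[0]
tag_count = conn.execute("SELECT COUNT(*) FROM inside_newstag").fetchone()[0]
```

Here one article tagged twice yields a single `inside_news` row and two mapping rows, which is exactly what `fetchallNewsUrlByTagPage1Url` relies on when it queries the mapping table by tag.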
#PEDAILY (pedaily.cn)
class LocalDbForPEDAILY:
    #constructor
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
    #initialize the database
def initialDb(self):
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS pedaily_news("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"intCategoryId INTEGER NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS pedaily_category("
"id INTEGER PRIMARY KEY,"
"strCategoryName TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
    #insert category if it does not already exist
def insertCategoryIfNotExists(self, strCategoryName=None):
strSQL = "SELECT * FROM pedaily_category WHERE strCategoryName='%s'"%strCategoryName
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO pedaily_category VALUES(NULL, '%s', 0)"%strCategoryName
self.db.commitSQL(strSQL=strSQL)
    #fetch all category names (for the given isGot status)
def fetchallCategoryName(self, isGot=False):
dicIsGotCode = {True:"1", False:"0"}
strSQL = "SELECT strCategoryName FROM pedaily_category WHERE isGot=%s"%dicIsGotCode[isGot]
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrCategoryName = []
for rowData in lstRowData:
lstStrCategoryName.append(rowData["strCategoryName"])
return lstStrCategoryName
    #fetch all category names not yet downloaded
def fetchallNotObtainedCategoryName(self):
return self.fetchallCategoryName(isGot=False)
    #fetch all category names already downloaded
def fetchallCompletedObtainedCategoryName(self):
return self.fetchallCategoryName(isGot=True)
    #mark the category as downloaded
def updateCategoryStatusIsGot(self, strCategoryName=None):
strSQL = "UPDATE pedaily_category SET isGot=1 WHERE strCategoryName='%s'"%strCategoryName
self.db.commitSQL(strSQL=strSQL)
    #fetch the category id
def fetchCategoryIdByName(self, strCategoryName=None):
strSQL = "SELECT * FROM pedaily_category WHERE strCategoryName='%s'"%strCategoryName
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
return lstRowData[0]["id"]
    #insert news URL if it does not already exist
def insertNewsUrlIfNotExists(self, strNewsUrl=None, strCategoryName=None):
intCategoryId = self.fetchCategoryIdByName(strCategoryName=strCategoryName)
#insert news url if not exists
strSQL = "SELECT * FROM pedaily_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO pedaily_news VALUES(NULL, '%s', %d,0)"%(strNewsUrl, intCategoryId)
self.db.commitSQL(strSQL=strSQL)
    #fetch the news urls of the given category
def fetchallNewsUrlByCategoryName(self, strCategoryName=None):
intCategoryId = self.fetchCategoryIdByName(strCategoryName=strCategoryName)
strSQL = "SELECT * FROM pedaily_news WHERE intCategoryId=%d"%intCategoryId
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
    #check whether the news item has been downloaded
def checkNewsIsGot(self, strNewsUrl=None):
isGot = True
strSQL = "SELECT * FROM pedaily_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
    #mark the news item as downloaded
def updateNewsStatusIsGot(self, strNewsUrl=None):
strSQL = "UPDATE pedaily_news SET isGot=1 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
    #fetch all news urls already downloaded
def fetchallCompletedObtainedNewsUrl(self):
strSQL = "SELECT strNewsUrl FROM pedaily_news WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
    #mark the news item as not yet downloaded
def updateNewsStatusIsNotGot(self, strNewsUrl=None):
strSQL = "UPDATE pedaily_news SET isGot=0 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
    #clear test data (clear table)
def clearTestData(self):
strSQL = "DELETE FROM pedaily_news"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM pedaily_category"
self.db.commitSQL(strSQL=strSQL)
#Business Next (bnext.com.tw)
class LocalDbForBNEXT:
#建構子
def __init__(self):
self.db = SQLite3Db(strResFolderPath="cameo_res")
self.initialDb()
#初取化資料庫
def initialDb(self):
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS bnext_news("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS bnext_tag("
"id INTEGER PRIMARY KEY,"
"strTagName TEXT NOT NULL,"
"isGot BOOLEAN NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS bnext_newstag("
"id INTEGER PRIMARY KEY,"
"strNewsUrl TEXT NOT NULL,"
"strTagName TEXT NOT NULL)")
self.db.commitSQL(strSQL=strSQLCreateTable)
#若無重覆,儲存Tag
def insertTagIfNotExists(self, strTagName=None):
strSQL = "SELECT * FROM bnext_tag WHERE strTagName='%s'"%strTagName
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO bnext_tag VALUES(NULL, '%s', 0)"%strTagName
self.db.commitSQL(strSQL=strSQL)
#取得所有未完成下載的 Tag 名稱
def fetchallNotObtainedTagName(self):
strSQL = "SELECT strTagName FROM bnext_tag WHERE isGot=0"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrTagName = []
for rowData in lstRowData:
lstStrTagName.append(rowData["strTagName"])
return lstStrTagName
#取得所有已完成下載的 Tag 名稱
def fetchallCompletedObtainedTagName(self):
strSQL = "SELECT strTagName FROM bnext_tag WHERE isGot=1"
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrTagName = []
for rowData in lstRowData:
lstStrTagName.append(rowData["strTagName"])
return lstStrTagName
#更新 Tag 為已完成下載狀態
def updateTagStatusIsGot(self, strTagName=None):
strSQL = "UPDATE bnext_tag SET isGot=1 WHERE strTagName='%s'"%strTagName
self.db.commitSQL(strSQL=strSQL)
#儲存 news URL 以及 URL 所對應的 tag
def insertNewsUrlAndNewsTagMappingIfNotExists(self, strNewsUrl=None, strTagName=None):
#insert news url if not exists
strSQL = "SELECT * FROM bnext_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO bnext_news VALUES(NULL, '%s', 0)"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
#insert news tag mapping if not exists
strSQL = "SELECT * FROM bnext_newstag WHERE strNewsUrl='%s' AND strTagName='%s'"%(strNewsUrl, strTagName)
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
if len(lstRowData) == 0:
strSQL = "INSERT INTO bnext_newstag VALUES(NULL, '%s', '%s')"%(strNewsUrl, strTagName)
self.db.commitSQL(strSQL=strSQL)
#取得指定 tag 的 news url
def fetchallNewsUrlByTagName(self, strTagName=None):
strSQL = "SELECT * FROM bnext_newstag WHERE strTagName='%s'"%strTagName
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
lstStrNewsUrl = []
for rowData in lstRowData:
lstStrNewsUrl.append(rowData["strNewsUrl"])
return lstStrNewsUrl
#檢查 news 是否已下載
def checkNewsIsGot(self, strNewsUrl=None):
isGot = True
strSQL = "SELECT * FROM bnext_news WHERE strNewsUrl='%s'"%strNewsUrl
lstRowData = self.db.fetchallSQL(strSQL=strSQL)
for rowData in lstRowData:
if rowData["isGot"] == 0:
isGot = False
return isGot
#更新 news 為已完成下載狀態
def updateNewsStatusIsGot(self, strNewsUrl=None):
strSQL = "UPDATE bnext_news SET isGot=1 WHERE strNewsUrl='%s'"%strNewsUrl
self.db.commitSQL(strSQL=strSQL)
#清除測試資料 (clear table)
def clearTestData(self):
strSQL = "DELETE FROM bnext_news"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM bnext_tag"
self.db.commitSQL(strSQL=strSQL)
strSQL = "DELETE FROM bnext_newstag"
self.db.commitSQL(strSQL=strSQL)

# TechOrange (科技報橘)
class LocalDbForTECHORANGE:
    # Constructor
    def __init__(self):
        self.db = SQLite3Db(strResFolderPath="cameo_res")
        self.initialDb()

    # Initialize the database
    def initialDb(self):
        strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS techorange_news("
                             "id INTEGER PRIMARY KEY,"
                             "strNewsUrl TEXT NOT NULL,"
                             "isGot BOOLEAN NOT NULL)")
        self.db.commitSQL(strSQL=strSQLCreateTable)
        strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS techorange_tag("
                             "id INTEGER PRIMARY KEY,"
                             "strTagName TEXT NOT NULL,"
                             "isGot BOOLEAN NOT NULL)")
        self.db.commitSQL(strSQL=strSQLCreateTable)
        strSQLCreateTable = ("CREATE TABLE IF NOT EXISTS techorange_newstag("
                             "id INTEGER PRIMARY KEY,"
                             "strNewsUrl TEXT NOT NULL,"
                             "strTagName TEXT NOT NULL)")
        self.db.commitSQL(strSQL=strSQLCreateTable)

    # Save the tag if it is not a duplicate
    def insertTagIfNotExists(self, strTagName=None):
        strSQL = "SELECT * FROM techorange_tag WHERE strTagName='%s'" % strTagName
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        if len(lstRowData) == 0:
            strSQL = "INSERT INTO techorange_tag VALUES(NULL, '%s', 0)" % strTagName
            self.db.commitSQL(strSQL=strSQL)

    # Fetch the names of all tags that have not been downloaded yet
    def fetchallNotObtainedTagName(self):
        strSQL = "SELECT strTagName FROM techorange_tag WHERE isGot=0"
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        lstStrTagName = []
        for rowData in lstRowData:
            lstStrTagName.append(rowData["strTagName"])
        return lstStrTagName

    # Fetch the names of all tags that have been completely downloaded
    def fetchallCompletedObtainedTagName(self):
        strSQL = "SELECT strTagName FROM techorange_tag WHERE isGot=1"
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        lstStrTagName = []
        for rowData in lstRowData:
            lstStrTagName.append(rowData["strTagName"])
        return lstStrTagName

    # Mark the tag as completely downloaded
    def updateTagStatusIsGot(self, strTagName=None):
        strSQL = "UPDATE techorange_tag SET isGot=1 WHERE strTagName='%s'" % strTagName
        self.db.commitSQL(strSQL=strSQL)

    # Save the news URL and its news-tag mapping
    def insertNewsUrlAndNewsTagMappingIfNotExists(self, strNewsUrl=None, strTagName=None):
        # insert news url if not exists
        strSQL = "SELECT * FROM techorange_news WHERE strNewsUrl='%s'" % strNewsUrl
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        if len(lstRowData) == 0:
            strSQL = "INSERT INTO techorange_news VALUES(NULL, '%s', 0)" % strNewsUrl
            self.db.commitSQL(strSQL=strSQL)
        # insert news tag mapping if not exists
        strSQL = "SELECT * FROM techorange_newstag WHERE strNewsUrl='%s' AND strTagName='%s'" % (strNewsUrl, strTagName)
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        if len(lstRowData) == 0:
            strSQL = "INSERT INTO techorange_newstag VALUES(NULL, '%s', '%s')" % (strNewsUrl, strTagName)
            self.db.commitSQL(strSQL=strSQL)

    # Fetch the news URLs of the given tag
    def fetchallNewsUrlByTagName(self, strTagName=None):
        strSQL = "SELECT * FROM techorange_newstag WHERE strTagName='%s'" % strTagName
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        lstStrNewsUrl = []
        for rowData in lstRowData:
            lstStrNewsUrl.append(rowData["strNewsUrl"])
        return lstStrNewsUrl

    # Check if the news has already been downloaded
    def checkNewsIsGot(self, strNewsUrl=None):
        isGot = True
        strSQL = "SELECT * FROM techorange_news WHERE strNewsUrl='%s'" % strNewsUrl
        lstRowData = self.db.fetchallSQL(strSQL=strSQL)
        for rowData in lstRowData:
            if rowData["isGot"] == 0:
                isGot = False
        return isGot

    # Mark the news as completely downloaded
    def updateNewsStatusIsGot(self, strNewsUrl=None):
        strSQL = "UPDATE techorange_news SET isGot=1 WHERE strNewsUrl='%s'" % strNewsUrl
        self.db.commitSQL(strSQL=strSQL)

    # Mark news as not downloaded (applies to all URLs containing the given part)
    def updateNewsStatusIsNotGot(self, strNewsUrlPart=None):
        strSQL = "UPDATE techorange_news SET isGot=0 WHERE strNewsUrl LIKE '%" + strNewsUrlPart + "%'"
        self.db.commitSQL(strSQL=strSQL)

    # Clear test data (clear tables)
    def clearTestData(self):
        strSQL = "DELETE FROM techorange_news"
        self.db.commitSQL(strSQL=strSQL)
        strSQL = "DELETE FROM techorange_tag"
        self.db.commitSQL(strSQL=strSQL)
        strSQL = "DELETE FROM techorange_newstag"
        self.db.commitSQL(strSQL=strSQL)
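The wrappers above build their SQL by interpolating values with `%`, which breaks (or can be injected into) as soon as a URL or tag contains a single quote. A minimal sketch of the same insert-if-not-exists logic using parameter binding with the standard-library `sqlite3` module (the table name `demo_news` is hypothetical, not the `SQLite3Db` wrapper used above):

```python
import sqlite3

# In-memory database with the same column layout as the *_news tables above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo_news("
             "id INTEGER PRIMARY KEY,"
             "strNewsUrl TEXT NOT NULL,"
             "isGot BOOLEAN NOT NULL)")

def insert_news_url_if_not_exists(conn, str_news_url):
    # "?" placeholders let sqlite3 quote the value itself,
    # so URLs containing "'" cannot break or inject into the SQL.
    rows = conn.execute("SELECT * FROM demo_news WHERE strNewsUrl=?",
                        (str_news_url,)).fetchall()
    if len(rows) == 0:
        conn.execute("INSERT INTO demo_news VALUES(NULL, ?, 0)",
                     (str_news_url,))
        conn.commit()

insert_news_url_if_not_exists(conn, "http://example.com/a'b")  # quote-safe
insert_news_url_if_not_exists(conn, "http://example.com/a'b")  # duplicate skipped
count = conn.execute("SELECT COUNT(*) FROM demo_news").fetchone()[0]
print(count)  # 1
```

Whether this applies directly depends on the `SQLite3Db` wrapper exposing a way to pass a parameter tuple alongside the SQL string.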

# ---- File: pyanom/__init__.py (repo: ground0state/pyanom, license: MIT) ----
from pyanom.__version__ import __version__
from pyanom import *

# ---- File: tests/tasks/util/test_asynctask.py (repo: nhausman1/python-client, license: Apache-2.0) ----
"""Asynctask test module."""
import time
import threading

from splitio.tasks.util import asynctask


class AsyncTaskTests(object):
    """AsyncTask test cases."""

    def test_default_task_flow(self, mocker):
        """Test the default execution flow of an asynctask."""
        main_func = mocker.Mock()
        on_init = mocker.Mock()
        on_stop = mocker.Mock()
        on_stop_event = threading.Event()
        task = asynctask.AsyncTask(main_func, 0.5, on_init, on_stop)
        task.start()
        time.sleep(1)
        assert task.running()
        task.stop(on_stop_event)
        on_stop_event.wait()
        assert on_stop_event.is_set()
        assert 0 < len(main_func.mock_calls) <= 2
        assert len(on_init.mock_calls) == 1
        assert len(on_stop.mock_calls) == 1
        assert not task.running()

    def test_main_exception_skips_iteration(self, mocker):
        """Test that an exception in the main func only skips the current iteration."""
        def raise_exception():
            raise Exception('something')

        main_func = mocker.Mock()
        main_func.side_effect = raise_exception
        on_init = mocker.Mock()
        on_stop = mocker.Mock()
        on_stop_event = threading.Event()
        task = asynctask.AsyncTask(main_func, 0.1, on_init, on_stop)
        task.start()
        time.sleep(1)
        assert task.running()
        task.stop(on_stop_event)
        on_stop_event.wait()
        assert on_stop_event.is_set()
        assert 9 <= len(main_func.mock_calls) <= 10
        assert len(on_init.mock_calls) == 1
        assert len(on_stop.mock_calls) == 1
        assert not task.running()

    def test_on_init_failure_aborts_task(self, mocker):
        """Test that if the on_init callback fails, the task never runs."""
        def raise_exception():
            raise Exception('something')

        main_func = mocker.Mock()
        on_init = mocker.Mock()
        on_init.side_effect = raise_exception
        on_stop = mocker.Mock()
        on_stop_event = threading.Event()
        task = asynctask.AsyncTask(main_func, 0.1, on_init, on_stop)
        task.start()
        time.sleep(0.5)
        assert not task.running()  # Since on_init fails, task never starts
        task.stop(on_stop_event)
        on_stop_event.wait(1)
        assert on_stop_event.is_set()
        assert on_init.mock_calls == [mocker.call()]
        assert on_stop.mock_calls == [mocker.call()]
        assert main_func.mock_calls == []
        assert not task.running()

    def test_on_stop_failure_ends_gracefully(self, mocker):
        """Test that if the on_stop callback fails, the task still ends gracefully."""
        def raise_exception():
            raise Exception('something')

        main_func = mocker.Mock()
        on_init = mocker.Mock()
        on_stop = mocker.Mock()
        on_stop.side_effect = raise_exception
        on_stop_event = threading.Event()
        task = asynctask.AsyncTask(main_func, 0.1, on_init, on_stop)
        task.start()
        time.sleep(1)
        task.stop(on_stop_event)
        on_stop_event.wait(1)
        assert on_stop_event.is_set()
        assert on_init.mock_calls == [mocker.call()]
        assert on_stop.mock_calls == [mocker.call()]
        assert 9 <= len(main_func.mock_calls) <= 10

    def test_force_run(self, mocker):
        """Test that force_execution triggers immediate runs of the main function."""
        main_func = mocker.Mock()
        on_init = mocker.Mock()
        on_stop = mocker.Mock()
        on_stop_event = threading.Event()
        task = asynctask.AsyncTask(main_func, 5, on_init, on_stop)
        task.start()
        time.sleep(1)
        assert task.running()
        task.force_execution()
        task.force_execution()
        task.stop(on_stop_event)
        on_stop_event.wait(1)
        assert on_stop_event.is_set()
        assert on_init.mock_calls == [mocker.call()]
        assert on_stop.mock_calls == [mocker.call()]
        assert len(main_func.mock_calls) == 2
        assert not task.running()

# ---- File: Code/Day 13/demo07.py (repo: AndyChiangSH/2021-IT-30days, license: MIT) ----
import math
print(math.log(1024, 2))

# ---- File: __init__.py (repo: maminian/caterpillar_mset, license: MIT) ----
# Needed if one wants to utilize this as a package.
from . import tde
from . import mset

# ---- File: pysmFISH/stitching_package/hybregistration.py (repo: ambrosejcarr/pysmFISH, license: MIT) ----
"""Functions to perform registration between all hybridizations.
register_final_images(folder, gene='Nuclei',
                      sub_pic_frac=0.2, use_MPI=False,
                      apply_to_corners=True, apply_warping=True)
    -- Register the stitched images in all HDF5 files in the folder
find_reg_final_image(im_file_1, im_file_n,
                     max_trans, sub_pic_frac,
                     nr_peaks=8)
    -- Find the transform that registers image n correctly onto
       image 1.
transform_final_image(im_file_n, trans, new_size)
    -- Transform an image according to trans.
transform_data_file(folder, data_name, trans,
                    new_size)
    -- Transform the corners in the pickled data file
align_sub_region(overlap1, overlap2, nr_peaks)
    -- Determine how much overlap2 should be shifted to fit
       overlap1, helper function for find_reg_final_image
"""
import numpy as np
import h5py
import os
import skimage.transform as smtf
try:
    from mpi4py import MPI
    MPI_available = True
except ImportError:
    MPI_available = False
import logging
import glob

# Own imports
from . import inout
from . import pairwisesingle as ps

logger = logging.getLogger(__name__)

def register_final_images(folder, gene='Nuclei',
                          sub_pic_frac=0.2, use_MPI=False,
                          apply_to_corners=True, apply_warping=False,
                          region=None, compare_in_seq=False):
    """Register the stitched images in all HDF5 files in the folder.

    Loops over the hybridizations in the HDF5 files, takes the stitched
    images as indicated by gene and then compares each image to the
    first image.
    For the comparison only a small patch of the images is used; the
    size of this patch can be controlled with "sub_pic_frac".

    Parameters:
    -----------

    folder: str
        The name of the folder containing the pickled file with
        stitching data, needs a trailing slash ("/").
    gene: str
        The gene of which the stitched images are present and should be
        realigned. Typically this will be 'Nuclei', because the smFISH
        genes will not have enough signal to align the pictures
        properly. (Default: 'Nuclei')
    sub_pic_frac: float
        The fraction of the size of the original image that should be
        used to compare images. (Default: 0.2)
    use_MPI: bool
        If True open the files in MPI friendly mode, if False open
        files in normal single processing mode. (Default: False)
    apply_to_corners: bool
        Determines if the found registration will be applied to the
        tile corners in the pickled stitching data file.
        (Default: True)
    apply_warping: bool
        Determines if the found registration will be applied as a warp
        to the final pictures in the hdf5 file, should not be used with
        large datasets. (Default: False)
    region: list
        List of length four containing ints. The region that should be
        compared to determine the shift needed for registration.
        Should be in the order: [y_min, y_max, x_min, x_max]. When
        region is defined, sub_pic_frac will not be used. By default
        the code will determine the region itself, taking an area
        around the center of the image with a size determined by
        sub_pic_frac. (Default: None)
    compare_in_seq: bool
        Determines if we should compare images in sequence or if we
        should compare all to the first image.
    """
    if not compare_in_seq:
        file_name_list, file_1, im_file_1, trans, old_size_list, \
            max_trans = \
            prepare_for_comparing(folder, gene, compare_in_seq,
                                  use_MPI=use_MPI)
        # Compare each file to file 1:
        for i in range(1, len(file_name_list)):
            cur_trans, max_trans, cur_old_size, file_ind = \
                get_single_trans(file_name_list, i, gene, im_file_1,
                                 max_trans, sub_pic_frac=sub_pic_frac,
                                 region=region, use_MPI=use_MPI)
            trans[file_ind, :] = cur_trans
            old_size_list[file_ind, :] = cur_old_size
        # Close the hdf5 file.
        file_1.close()
        trans, new_size = correct_trans_and_size(trans,
                                                 old_size_list,
                                                 max_trans,
                                                 compare_in_seq)
    else:
        file_name_list, trans_relative, old_size_list, max_trans = \
            prepare_for_comparing(folder, gene, compare_in_seq,
                                  use_MPI=use_MPI)
        # Compare each file to the previous file:
        for i in range(1, len(file_name_list)):
            cur_trans, max_trans, cur_old_size, file_ind = \
                get_single_relative_trans(file_name_list, i, gene,
                                          max_trans,
                                          sub_pic_frac=sub_pic_frac,
                                          region=region,
                                          use_MPI=use_MPI)
            trans_relative[file_ind, :] = cur_trans
            old_size_list[file_ind, :] = cur_old_size
        trans, new_size = correct_trans_and_size(trans_relative,
                                                 old_size_list,
                                                 max_trans,
                                                 compare_in_seq)
    logger.debug(
        'Files: {} Translations: {}'
        .format(file_name_list, trans))

    # Apply the translations
    for i in range(len(file_name_list)):
        if apply_warping:
            if use_MPI:
                file_n = h5py.File(file_name_list[i], 'r+',
                                   driver='mpio', comm=MPI.COMM_WORLD)
            else:
                file_n = h5py.File(file_name_list[i], 'r+')
            im_file_n = file_n[gene]['StitchedImage']
            transform_final_image(im_file_n, trans[i, :], new_size)
            file_n.close()
        if apply_to_corners:
            data_name = (
                os.path.split(file_name_list[i])[1].split(sep='.')[0]
                + '_' + gene
                + '_stitching_data')
            transform_data_file(folder, data_name, trans[i, :],
                                new_size)

def prepare_for_comparing(folder, gene, compare_in_seq, use_MPI=False):
    """Prepare the file list, open the first file and init other lists.

    Parameters:
    -----------

    folder: str
        The name of the folder containing the pickled file with
        stitching data, needs a trailing slash ("/").
    gene: str
        The gene of which the stitched images are present and should be
        realigned. Typically this will be 'Nuclei', because the smFISH
        genes will not have enough signal to align the pictures
        properly. (Default: 'Nuclei')
    compare_in_seq: bool
        Determines if we should compare images in sequence or if we
        should compare all to the first image.
    use_MPI: bool
        If True open the files in MPI friendly mode, if False open
        files in normal single processing mode. (Default: False)

    Returns:
    --------

    file_name_list: list
        List of strings. List of the sf.hdf5-files in the folder.
    trans: np.array
        Array of ints. The array to store the translations, initialized
        with zeros.
    old_size_list: np.array
        Array of ints. The array to store the sizes of the final
        images, initialized with zeros.
    max_trans: np.array
        Array of ints. Variable to store the largest translation found
        up to now, initialized at zero.

    Notes:
    ------
    Only returned if compare_in_seq is False:
    file_1: pointer
        File handle to the first hdf5 file in the folder.
    im_file_1: pointer
        Reference to the group in the first file that contains the
        final image.
    """
    # Get a list of files in the folder
    file_name_list = glob.glob(folder + '*.sf.hdf5')
    file_name_list.sort()
    logger.debug('Filenames sorted: {}'.format(file_name_list))

    # Initialize some variables:
    trans = np.zeros((len(file_name_list), 2), dtype=int)
    old_size_list = np.zeros((len(file_name_list), 2), dtype=int)
    max_trans = np.zeros((1, 2), dtype=int)

    # Take the first hybridization (keys() seems to give the groups
    # as a sorted list)
    im_name_1 = file_name_list[0]
    logger.debug('im_name_1: {}'.format(im_name_1))
    # Open the stitching file and make a list of the hybridizations
    # present in this file:
    if use_MPI:
        file_1 = h5py.File(im_name_1, 'r+',
                           driver='mpio', comm=MPI.COMM_WORLD)
    else:
        file_1 = h5py.File(im_name_1, 'r+')
    # hyb_name_list = list(stitching_file.keys())
    # Get the right group
    im_file_1 = file_1[gene]['StitchedImage']
    # Get the size of the first image in the list,
    # which will be the reference image without translation.
    old_size_list[0, :] = im_file_1['final_image'].shape

    # Make comparisons
    if not compare_in_seq:
        return file_name_list, file_1, im_file_1, trans, \
            old_size_list, max_trans
    else:
        # The first file is not needed when comparing in sequence
        file_1.close()
        return file_name_list, trans, old_size_list, max_trans

def get_single_trans(file_name_list, i, gene,
                     im_file_1, max_trans, sub_pic_frac=0.2,
                     region=None, use_MPI=False):
    """Get the translation between image 1 and image i.

    Get the translation between the image in file 1 and file i
    from file_name_list.

    Parameters:
    -----------

    file_name_list: list
        List of strings. List of the sf.hdf5-files in the folder.
    i: int
        Index of the current file to compare.
    gene: str
        Gene of which the stitched images are present and should be
        realigned. Typically this will be 'Nuclei', because the smFISH
        genes will not have enough signal to align the pictures
        properly. (Default: 'Nuclei')
    im_file_1: pointer
        Reference to the group in the first file that contains the
        final image.
    max_trans: np.array
        Variable to store the largest translation found up to now,
        initialized at zero.
    sub_pic_frac: float
        The fraction of the size of the original image that should be
        used to compare images. (Default: 0.2)
    region: list
        List of length four containing ints. The region that should be
        compared to determine the shift needed for registration.
        Should be in the order: [y_min, y_max, x_min, x_max]. When
        region is defined, sub_pic_frac will not be used. By default
        the code will determine the region itself, taking an area
        around the center of the image with a size determined by
        sub_pic_frac. (Default: None)
    use_MPI: bool
        If True open the files in MPI friendly mode, if False open
        files in normal single processing mode. (Default: False)

    Returns:
    --------

    cur_trans: np.array
        Array of ints. Translation found between the two images that
        are currently being compared.
    max_trans: np.array
        Array of ints. The largest translation found up to now.
    cur_old_size: np.array
        Array of ints. The sizes of the original final image found in
        file_name_list at index i.
    i: int
        The index of the second image file used for the current
        comparison (the first image file is file 1).
    """
    # Get the group containing the image we want to compare with.
    if use_MPI:
        file_n = h5py.File(file_name_list[i], 'r+',
                           driver='mpio', comm=MPI.COMM_WORLD)
    else:
        file_n = h5py.File(file_name_list[i], 'r+')
    im_file_n = file_n[gene]['StitchedImage']
    # Find the translation
    cur_trans, max_trans, cur_old_size \
        = find_reg_final_image(im_file_1, im_file_n, max_trans,
                               sub_pic_frac, region=region)
    logger.debug(
        "max_trans: {}".format(max_trans))
    file_n.close()
    return cur_trans, max_trans, cur_old_size, i

def get_single_relative_trans(file_name_list, i, gene, max_trans,
                              sub_pic_frac=0.2, region=None,
                              use_MPI=False):
    """Get the translation between image i - 1 and image i.

    Get the translation between the image in file_name_list[i - 1] and
    file_name_list[i].

    Parameters:
    -----------

    file_name_list: list
        List of strings. List of the sf.hdf5-files in the folder.
    i: int
        Index of the second image in the current comparison.
    gene: str
        The gene of which the stitched images are present and should be
        realigned. Typically this will be 'Nuclei', because the smFISH
        genes will not have enough signal to align the pictures
        properly. (Default: 'Nuclei')
    max_trans: np.array
        Array of ints. Variable to store the largest translation found
        up to now, initialized at zero.
    sub_pic_frac: float
        The fraction of the size of the original image that should be
        used to compare images. (Default: 0.2)
    region: list
        List of length four containing ints. The region that should be
        compared to determine the shift needed for registration.
        Should be in the order: [y_min, y_max, x_min, x_max]. When
        region is defined, sub_pic_frac will not be used. By default
        the code will determine the region itself, taking an area
        around the center of the image with a size determined by
        sub_pic_frac. (Default: None)
    use_MPI: bool
        If True open the files in MPI friendly mode, if False open
        files in normal single processing mode. (Default: False)

    Returns:
    --------

    cur_trans: np.array
        Array of ints. Translation found between the two images that
        are currently being compared.
    max_trans: np.array
        Array of ints. The largest translation found up to now.
    cur_old_size: np.array
        The sizes of the original final image found in file_name_list
        at index i.
    i: int
        The index of the second image file used for the current
        comparison (the first image file is file i - 1).
    """
    # Get the groups containing the images we want to compare.
    if use_MPI:
        file_1 = h5py.File(file_name_list[i - 1], 'r+',
                           driver='mpio', comm=MPI.COMM_WORLD)
        file_2 = h5py.File(file_name_list[i], 'r+',
                           driver='mpio', comm=MPI.COMM_WORLD)
    else:
        file_1 = h5py.File(file_name_list[i - 1], 'r+')
        file_2 = h5py.File(file_name_list[i], 'r+')
    im_file_1 = file_1[gene]['StitchedImage']
    im_file_2 = file_2[gene]['StitchedImage']
    # Find the translation
    cur_trans, max_trans, cur_old_size \
        = find_reg_final_image(im_file_1, im_file_2, max_trans,
                               sub_pic_frac, region=region)
    logger.debug("max_trans: {}".format(max_trans))
    file_1.close()
    file_2.close()
    return cur_trans, max_trans, cur_old_size, i

def correct_trans_and_size(trans_relative, old_size_list, max_trans,
                           compare_in_seq):
    """Correct the translations and the size of the registered images.

    Parameters:
    -----------

    trans_relative: np.array
        Array of ints. The array with the non-corrected translations.
    old_size_list: np.array
        Array of ints. The array with the sizes of all the final,
        non-registered images.
    max_trans: np.array
        Array of ints. Variable to store the largest translation found
        up to now.
    compare_in_seq: bool
        Determines if we should compare images in sequence or if we
        should compare all to the first image.

    Returns:
    --------

    trans: np.array
        Array of ints. The array with the corrected translations for
        each image.
    new_size: np.array
        Array of length 2 containing ints. The size the images should
        have after registration.
    """
    if compare_in_seq:
        # Get the normalized translations
        trans = np.cumsum(trans_relative, axis=0)
        max_trans = np.amax(trans, axis=0)
        logger.debug(("Comparing in sequence: relative translations: "
                      + "\n {} \n normalized translations: \n{}\n"
                      .format(trans_relative, trans)))
        logger.debug("max_trans: {}".format(max_trans))
    else:
        trans = trans_relative
    # Correct translations
    trans -= max_trans
    logger.debug('old_size_list: {}'
                 .format(old_size_list))
    # Determine final image size:
    new_size_list = old_size_list + abs(trans)
    new_size = np.amax(new_size_list, axis=0)
    logger.debug('new_size_list: {} new_size: {}'
                 .format(new_size_list, new_size))
    return trans, new_size
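The normalization done by correct_trans_and_size when comparing in sequence can be illustrated with plain numpy: pairwise shifts are accumulated into absolute shifts, shifted so the largest becomes zero (making all shifts non-positive), and the canvas is padded accordingly. The shift values below are made up for illustration:

```python
import numpy as np

# Toy pairwise shifts for three hybridizations compared in sequence.
trans_relative = np.array([[0, 0],    # image 0 is the reference
                           [3, -2],   # image 1 relative to image 0
                           [1, 4]])   # image 2 relative to image 1
old_size_list = np.array([[100, 100], [100, 100], [100, 100]])

trans = np.cumsum(trans_relative, axis=0)  # shifts relative to image 0
max_trans = np.amax(trans, axis=0)         # largest shift per axis
trans = trans - max_trans                  # make all shifts non-positive
new_size = np.amax(old_size_list + abs(trans), axis=0)

print(trans.tolist())     # [[-4, -2], [-1, -4], [0, 0]]
print(new_size.tolist())  # [104, 104]
```

Every image then fits inside the common canvas of size new_size without negative coordinates.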
def find_reg_final_image(im_file_1, im_file_n, max_trans, sub_pic_frac,
region=None, nr_peaks=8):
"""
Find the transform that registers image n correctly onto image 1.
Parameters:
im_file_1: pointer
HDF5 group reference or file handle, should
contain a dataset "final_image" holding image 1.
im_name_n: pointer
HDF5 group reference or file handle, should
contain a dataset "final_image" holding image n.
max_trans: np.array
Array of length 2 with dtype: int.
Largest translation currently found.
sub_pic_frac: float
The fraction of the size of the original
image that should be used to compare images.
region: list
List of length four containing ints. The
region that should be compared to determine
the shift needed for registration.
Should be in the order: [y_min, y_max, x_min,
x_max]. When region is defined, sub_pic_frac
will not be used.
By default the code will determine the region
itself taking a area around the
center of the image with a size
determined by sub_pic_frac(Default: None)
nr_peaks: int
The number of peaks used to get the best peaks
from the phase correlation matrix. (default: 8)
Returns:
--------
trans: np.array
Array of length 2 containing ints.
Translation that projects image n correctly onto image 1.
max_trans: np.array
Array of shape (1, 2) containing ints.
The max_trans value that was passed to this
function, replaced by (part of) the current
translation if it is larger than max_trans.
shape_n: tuple
Tuple of python ints. The shape of image n.
"""
# Get the image shapes
shape_1 = im_file_1['final_image'].shape
shape_n = im_file_n['final_image'].shape
if region is None:
# Determine the size of the part of the picture that we want to compare
sub_pic_size = (np.array(shape_1) * sub_pic_frac).astype(int,
copy=False)
logger.debug('sub_pic_size: {}'.format(sub_pic_size))
# Take the center coordinates in the y and x axes
center = (int(np.floor(min(shape_1[-2] / 2,
shape_n[-2] / 2))),
int(np.floor(min(shape_1[-1] / 2,
shape_n[-1] / 2))))
start = np.array([center[0], center[1]])
end = np.array([min(start[-2] + sub_pic_size[-2],
shape_1[-2],
shape_n[-2]),
min(start[-1] + sub_pic_size[-1],
shape_1[-1],
shape_n[-1])])
else:
start = np.array([region[0],region[2]])
end = np.array([region[1],region[3]])
logger.debug("Area based on given region: Start: {} "
"End: {}".format(start, end))
# Get the region to compare from the pictures. For 3D stacks take a
# maximum intensity projection along the first axis.
if im_file_1['final_image'].ndim == 3:
pic_1 = np.amax(im_file_1['final_image'][:, start[0]:end[0],
start[1]:end[1]], axis=0)
else:
pic_1 = im_file_1['final_image'][start[0]:end[0],
start[1]:end[1]]
if im_file_n['final_image'].ndim == 3:
pic_n = np.amax(im_file_n['final_image'][:, start[0]:end[0],
start[1]:end[1]], axis=0)
else:
pic_n = im_file_n['final_image'][start[0]:end[0],
start[1]:end[1]]
# Find the best translation
trans, best_cov = align_sub_region(pic_1, pic_n, nr_peaks)
logger.debug('Found trans: {} \n best covariance: {}'
.format(trans, best_cov))
# Adjust max trans if necessary
max_trans = np.maximum(max_trans, np.array(trans))
return trans, max_trans, shape_n
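The phase-correlation step that this function delegates to `align_sub_region` (via `ps.calculate_pos_shifts` and `ps.find_best_trans`) can be sketched in plain numpy. This is a simplified single-peak version for illustration; the real code evaluates `nr_peaks` candidate peaks and scores them by covariance:

```python
import numpy as np

def phase_correlation_shift(im1, im2):
    # Normalized cross-power spectrum; its inverse FFT peaks at the
    # relative shift between the two images.
    f1 = np.fft.fft2(im1)
    f2 = np.fft.fft2(im2)
    cross_power = f1 * np.conj(f2)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape)])

rng = np.random.default_rng(0)
base = rng.random((64, 64))
shifted = np.roll(base, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, base))  # → [ 5 -3]
```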
def transform_final_image(im_file_n, trans, new_size):
"""
Transform an image according to trans.
Parameters:
-----------
im_file_n: pointer
HDF5 group reference or file handle, should
contain a dataset "final_image" holding image n.
trans: np.array
Array of len 2 containing ints. y and x transform of the image.
new_size: tuple
Tuple of length 2. The size of the image after the transform.
"""
# Make the trans matrix
trans_matrix = np.eye(3)
trans_matrix[1][2] = trans[0]
trans_matrix[0][2] = trans[1]
# Make a separate dataset for the registered image.
logger.debug('new_size {}'.format(new_size))
try:
registered_image = im_file_n.require_dataset('reg_image',
shape=tuple(new_size),
dtype=np.float64)
except TypeError as err:
logger.debug(
("Incompatible data set for reg_image, deleting old " +
"dataset. N.B: Not cleaning up space. \n {}")
.format(err))
del im_file_n['reg_image']
registered_image = im_file_n.require_dataset('reg_image',
shape=tuple(new_size),
dtype=np.float64)
# Transform the image
registered_image[:, :] = smtf.warp(im_file_n['final_image'],
trans_matrix,
output_shape=new_size, order=0)
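The matrix entries are easy to get backwards: the y shift goes in row 1 and the x shift in row 0 because skimage works in (x, y), i.e. (column, row), order, and `warp` treats the matrix as the inverse map from output coordinates to input coordinates. A small numpy check of that convention:

```python
import numpy as np

trans = np.array([4, 7])  # (y, x) translation, as in transform_final_image
trans_matrix = np.eye(3)
trans_matrix[1][2] = trans[0]
trans_matrix[0][2] = trans[1]

# warp samples output coordinate (x, y, 1) from this input coordinate:
out_xy = np.array([10.0, 20.0, 1.0])
in_xy = trans_matrix @ out_xy
print(in_xy[:2])  # → [17. 24.]
```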
def transform_data_file(folder, data_name, trans,
new_size):
"""
Transform the corners in the pickled data file
Parameters:
-----------
folder: str
The name of the folder containing the
pickled file with stitching data, needs a
trailing slash ("/").
data_name: str
Name of the pickled file with the corner coordinates.
trans: np.array
Array of len 2 containing ints. y and x transform of the image.
new_size: tuple
Tuple of length 2. The size of the image after the transform.
"""
# Determine the name to save the new pickled data file.
exp_name = '_'.join(data_name.split('_')[:-2])
# Get the original coordinates
loaded_data = inout.load_stitching_coord(folder + data_name)
micData = loaded_data['micData']
joining_original = loaded_data['joining']
joining_new = {}
# Translate the corners
temp_corner_list = [[tile_ind, (corner - trans)]
for tile_ind, corner in
joining_original['corner_list']]
logger.debug(
'temp_corner_list: {} trans: {}'.format(temp_corner_list,
trans))
# Place the corners in the joining dictionary.
joining_new['corner_list'] = temp_corner_list
# Change final image shape of original:
joining_new['final_image_shape'] = new_size
# Save to a new file
inout.save_to_file(folder + exp_name + '_stitching_data_reg',
micData=micData, joining=joining_new)
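The corner translation above is a plain elementwise shift. A toy example (with made-up corner values) of what the list comprehension produces, given that the translations are typically non-positive after the max_trans correction:

```python
import numpy as np

# Hypothetical corner list: (tile index, top-left corner in (y, x)).
corner_list = [(0, np.array([0, 0])), (1, np.array([10, 5]))]
trans = np.array([-2, -3])  # registration shift for this hybridization

shifted = [[ind, corner - trans] for ind, corner in corner_list]
print([c.tolist() for _, c in shifted])  # → [[2, 3], [12, 8]]
```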
def align_sub_region(overlap1, overlap2, nr_peaks):
"""Determine how much overlap2 should be shifted to fit overlap1.
Parameters:
-----------
overlap1: np.array
2D numpy array. Patch of the image that should be compared.
overlap2: np.array
2D numpy array. Patch of the image that should be compared.
nr_peaks: int
The number of peaks used to get the best peaks from the phase correlation matrix.
Returns:
--------
best_trans: np.array
Array of len 2 containing ints. Transform that projects overlap2
correctly onto overlap1.
best_cov: float
The normalized covariance
"""
plot_order = np.ones((1, 2))
# Calculate possible translations
unr_pos_translations = ps.calculate_pos_shifts(
overlap1, overlap2, nr_peaks, 2)
logger.debug("Possible translations: {}".
format(unr_pos_translations))
# Do correlation over the found shifts:
best_trans, best_cov = ps.find_best_trans(unr_pos_translations,
overlap1, overlap2,
plot_order)
# Give some feedback
logger.info(
"Best shift: {} covariance: {}".format(best_trans, best_cov))
return np.array(best_trans, dtype='int16'), best_cov
def register_final_images_old(folder, gene='Nuclei',
sub_pic_frac=0.2, use_MPI=False,
apply_to_corners=True,
apply_warping=False,
region=None, compare_in_seq=False):
"""Register the stitched images in all HDF5 files in the folder.
Loops the hybridizations in the HDF5 file, takes the stitched
images as indicated by gene and then compares each image to the
first image.
For the comparison only a small patch of the images is used, the
size of this patch can be controlled with "sub_pic_frac".
Parameters:
-----------
folder: str
The name of the folder containing the
pickled file with stitching data, needs a
trailing slash ("/").
gene: str
The gene of which the stitched
images are present and should be realigned.
Typically this will be 'Nuclei', because the
smFISH genes will not have enough signal to
align the pictures properly.
(Default: 'Nuclei')
sub_pic_frac: float
The fraction of the size of the
original image that should be used to compare
images.
(Default: 0.2)
use_MPI: bool
If True, open the files in MPI-friendly mode; if False, open
the files in normal single-processing mode. (Default: False)
apply_to_corners: bool
Determines if the found
registration will be applied to the tile
corners in the pickled stitching data file.
(Default: True)
apply_warping: bool
Determines if the found
registration will be applied as a warp to the
final pictures in the hdf5 file, should not
be used with large datasets.
(Default: False)
region: list
List of length four containing ints. The
region that should be compared to determine
the shift needed for registration.
Should be in the order: [y_min, y_max, x_min,
x_max]. When region is defined, sub_pic_frac
will not be used.
By default the code will determine the region
itself, taking an area around the
center of the image with a size
determined by sub_pic_frac. (Default: None)
compare_in_seq: bool
Determines if we should compare
images in sequence or if we should compare
all to the first image.
"""
# Get a list of files in the folder
file_name_list = glob.glob(folder + '*.sf.hdf5')
file_name_list.sort()
logger.debug('Filenames sorted: {}'.format(file_name_list))
# Initialize some variables:
trans = np.zeros((len(file_name_list), 2), dtype=int)
old_size_list = np.zeros((len(file_name_list), 2), dtype=int)
max_trans = np.zeros((1, 2), dtype=int)
# Make comparisons
if not compare_in_seq:
# Take the first hybridization (keys() seems to give the groups
# as a sorted list)
im_name_1 = file_name_list[0]
logger.debug('im_name_1: {}'.format(im_name_1))
# Open the stitching file and make a list of the hybridizations
# present in this file:
if use_MPI:
file_1 = h5py.File(im_name_1, 'r+',
driver='mpio', comm=MPI.COMM_WORLD)
else:
file_1 = h5py.File(im_name_1, 'r+')
# hyb_name_list = list(stitching_file.keys())
# Get the right group
im_file_1 = file_1[gene]['StitchedImage']
old_size_list[0, :] = im_file_1['final_image'].shape
# Compare each file to file 1:
for i in range(1, len(file_name_list)):
# Get the group containing the image we want to compare with.
if use_MPI:
file_n = h5py.File(file_name_list[i], 'r+',
driver='mpio', comm=MPI.COMM_WORLD)
else:
file_n = h5py.File(file_name_list[i], 'r+')
im_file_n = file_n[gene]['StitchedImage']
# Find the translation
trans[i, :], max_trans, old_size_list[i, :] \
= find_reg_final_image(im_file_1, im_file_n, max_trans,
sub_pic_frac, region=region)
logger.debug(
"max_trans: {}".format(max_trans))
file_n.close()
# Close the hdf5 file.
file_1.close()
else:
# Init specific array
trans_relative = np.zeros((len(file_name_list), 2), dtype=int)
# Compare each file to previous file:
for i in range(1, len(file_name_list)):
# Get the group containing the image we want to compare with.
if use_MPI:
file_1 = h5py.File(file_name_list[i - 1], 'r+',
driver='mpio', comm=MPI.COMM_WORLD)
file_2 = h5py.File(file_name_list[i], 'r+',
driver='mpio', comm=MPI.COMM_WORLD)
else:
file_1 = h5py.File(file_name_list[i - 1], 'r+')
file_2 = h5py.File(file_name_list[i], 'r+')
im_file_1 = file_1[gene]['StitchedImage']
im_file_2 = file_2[gene]['StitchedImage']
# Get the size of the first image in the list,
# which will be the reference image without translation.
if (i - 1) == 0:
old_size_list[0, :] = im_file_1['final_image'].shape
# Find the translation
trans_relative[i, :], max_trans, old_size_list[i, :] \
= find_reg_final_image(im_file_1, im_file_2, max_trans,
sub_pic_frac, region=region)
logger.debug("max_trans: {}".format(max_trans))
file_1.close()
file_2.close()
# Get the normalized translations
trans = np.cumsum(trans_relative, axis=0)
max_trans = np.amax(trans, axis=0)
logger.debug(("Comparing in sequence: relative translations: "
+ "\n {} \n normalized translations: \n{}\n"
.format(trans_relative, trans)))
logger.debug("max_trans: {}".format(max_trans))
# Correct translations
trans -= max_trans
logger.debug('old_size_list: {}'
.format(old_size_list))
# Determine final image size:
new_size_list = old_size_list + abs(trans)
new_size = np.amax(new_size_list, axis=0)
logger.debug(
'Files: {} Translations: {} new_size_list: {} new_size: {}'
.format(file_name_list, trans, new_size_list, new_size))
# Apply the translations
for i in range(len(file_name_list)):
if apply_warping:
if use_MPI:
file_n = h5py.File(file_name_list[i], 'r+',
driver='mpio', comm=MPI.COMM_WORLD)
else:
file_n = h5py.File(file_name_list[i], 'r+')
im_file_n = file_n[gene]['StitchedImage']
transform_final_image(im_file_n, trans[i, :], new_size)
file_n.close()
if apply_to_corners:
data_name = (
os.path.split(file_name_list[i])[1].split(sep='.')[0]
+ '_' + gene
+ '_stitching_data')
transform_data_file(folder, data_name, trans[i, :],
new_size)
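When comparing in sequence, the pairwise shifts are chained with a cumulative sum and then offset by the per-axis maximum, so every translation is non-positive and the enlarged canvas fits all images. A minimal numeric sketch of those two steps:

```python
import numpy as np

# Relative shifts between consecutive images (first image is the anchor).
trans_relative = np.array([[0, 0], [2, -1], [-3, 4]])

# Cumulative sum re-expresses each shift relative to the first image.
trans = np.cumsum(trans_relative, axis=0)
print(trans.tolist())  # → [[0, 0], [2, -1], [-1, 3]]

# Subtracting the per-axis maximum makes all shifts non-positive.
trans -= np.amax(trans, axis=0)
print(trans.tolist())  # → [[-2, -3], [0, -4], [-3, 0]]
```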
def register_final_images_reg_data_only(folder, gene='Nuclei',
sub_pic_frac=0.2, use_MPI=False,
apply_to_corners=True, apply_warping=False,
region=None, compare_in_seq=False):
"""Register the stitched images in all HDF5 files in the folder.
It is modified from register_final_images and saves only the reg_data
file with the new coords and nothing in the hdf5 file.
Loops the hybridizations in the HDF5 file, takes the stitched
images as indicated by gene and then compares each image to the
first image.
For the comparison only a small patch of the images is used, the
size of this patch can be controlled with "sub_pic_frac".
Parameters:
-----------
folder: str
The name of the folder containing the
pickled file with stitching data, needs a
trailing slash ("/").
gene: str
The gene of which the stitched
images are present and should be realigned.
Typically this will be 'Nuclei', because the
smFISH genes will not have enough signal to
align the pictures properly.
(Default: 'Nuclei')
sub_pic_frac: float
The fraction of the size of the original image that should be used to compare
images. (Default: 0.2)
use_MPI: bool
If True, open the files in MPI-friendly mode; if False, open
the files in normal single-processing mode. (Default: False)
apply_to_corners: bool
Determines if the found
registration will be applied to the tile
corners in the pickled stitching data file.
(Default: True)
apply_warping: bool
Determines if the found
registration will be applied as a warp to the
final pictures in the hdf5 file, should not
be used with large datasets.
(Default: False)
region: list
List of length four containing ints. The
region that should be compared to determine
the shift needed for registration.
Should be in the order: [y_min, y_max, x_min,
x_max]. When region is defined, sub_pic_frac
will not be used.
By default the code will determine the region
itself, taking an area around the
center of the image with a size
determined by sub_pic_frac. (Default: None)
compare_in_seq: bool
Determines if we should compare images in sequence or if we should compare
all to the first image.
"""
if not compare_in_seq:
file_name_list, file_1, im_file_1, trans, old_size_list, \
max_trans = \
prepare_for_comparing(folder, gene, compare_in_seq,
use_MPI=use_MPI)
# Compare each file to file 1:
for i in range(1, len(file_name_list)):
cur_trans, max_trans, cur_old_size, file_ind = \
get_single_trans(file_name_list, i, gene, im_file_1,
max_trans, sub_pic_frac=sub_pic_frac,
region=region, use_MPI=use_MPI)
trans[file_ind, :] = cur_trans
old_size_list[file_ind, :] = cur_old_size
# Close the hdf5 file.
file_1.close()
trans, new_size = correct_trans_and_size(trans,
old_size_list,
max_trans,
compare_in_seq)
else:
file_name_list, trans_relative, old_size_list, max_trans = \
prepare_for_comparing(folder, gene, compare_in_seq,
use_MPI=use_MPI)
# Compare each file to previous file:
for i in range(1, len(file_name_list)):
cur_trans, max_trans, cur_old_size, file_ind = \
get_single_relative_trans(file_name_list, i, gene,
max_trans,
sub_pic_frac=sub_pic_frac,
region=region,
use_MPI=use_MPI)
trans_relative[file_ind, :] = cur_trans
old_size_list[file_ind, :] = cur_old_size
trans, new_size = correct_trans_and_size(trans_relative,
old_size_list,
max_trans,
compare_in_seq)
logger.debug(
'Files: {} Translations: {}'
.format(file_name_list, trans))
# Apply the translations
for i in range(len(file_name_list)):
if apply_to_corners:
data_name = (
os.path.split(file_name_list[i])[1].split(sep='.')[0]
+ '_' + gene
+ '_stitching_data')
transform_data_file(folder, data_name, trans[i, :],
new_size)
# tests/test_multitask_gp.py
from numpy import pi
from core.learning.gaussian_process import RBFKernel, GaussianProcess, \
GPScaler, ScaledGaussianProcess, PeriodicKernel, AdditiveKernel, \
AffineDotProductKernel, MultiplicativeKernel, save_gp, load_gp
import matplotlib.pyplot as plt
import torch as th
import pathlib
from os import makedirs
th.manual_seed(0)
def confidence_region(mean, cov_matrix):
variance = cov_matrix.diag()
std = variance.sqrt()
return mean - 2*std, mean + 2*std
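`confidence_region` returns a roughly 95% pointwise band: mean ± 2 standard deviations, with the variances read off the covariance diagonal. The same computation in plain numpy, with made-up numbers:

```python
import numpy as np

mean = np.array([0.0, 1.0, -1.0])
cov = np.diag([4.0, 0.25, 1.0])  # hypothetical posterior covariance

std = np.sqrt(np.diag(cov))
lower, upper = mean - 2 * std, mean + 2 * std
print(lower.tolist())  # → [-4.0, 0.0, -3.0]
print(upper.tolist())  # → [4.0, 2.0, 1.0]
```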
def test_1dx1dgp():
sigma = 0.2
train_x = th.rand((20,1))*2*pi
train_y = th.sin(train_x) + sigma * (th.rand_like(train_x) - 0.5)
kernel = RBFKernel(1)
gp = GaussianProcess(train_x, train_y, kernel)
gp.train_model()
test_x = th.linspace(0, 2 * pi, 1000).unsqueeze(1)
test_y = th.sin(test_x)
y_hat, cov = gp(test_x)
lower, upper = confidence_region(y_hat.squeeze(), cov.to_dense().squeeze())
assert (y_hat - test_y).abs().mean().item() < 1e-1
assert (test_y.squeeze() > lower).all()
assert (upper > test_y.squeeze()).all()
#uncomment plotting for debugging
# plt.plot(train_x.squeeze().detach().numpy(),
# train_y.detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# plt.plot(test_x.detach().squeeze().numpy(),
# test_y.detach().numpy(),
# label='True Y')
# plt.fill_between(test_x.detach().squeeze().numpy(),
# lower.detach().numpy(),
# upper.detach().numpy(), alpha=0.5)
# plt.plot(test_x.detach().squeeze().numpy(),
# y_hat.detach().numpy(), label='Estimate')
# plt.legend(loc='upper right')
# plt.show()
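What the project's GaussianProcess with an RBFKernel fits in this test can be sketched with a bare-bones numpy GP regression: a noise-regularized kernel solve for the posterior mean, with the hyperparameters fixed rather than trained as train_model() would do.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two 1D point sets.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
train_x = rng.uniform(0.0, 2.0 * np.pi, 30)
train_y = np.sin(train_x) + 0.05 * rng.standard_normal(30)

# Posterior mean: k(x*, X) @ (K(X, X) + sigma^2 I)^-1 y
K = rbf(train_x, train_x) + 0.05 ** 2 * np.eye(30)
alpha = np.linalg.solve(K, train_y)

test_x = np.linspace(0.0, 2.0 * np.pi, 200)
y_hat = rbf(test_x, train_x) @ alpha
print(np.abs(y_hat - np.sin(test_x)).mean())
```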
def test_2dx1dgp():
sigma = 0.2
train_X = th.rand((650,2))*2*pi
train_Y = th.sin(train_X[:,0])*th.cos(train_X[:,1]) \
+ sigma * (th.rand_like(train_X[:,0]) - 0.5)
train_Y = train_Y.unsqueeze(1)
kernel = RBFKernel(2)
gp = GaussianProcess(train_X, train_Y, kernel)
gp.train_model()
dim_sample = th.linspace(0,2*pi, 100)
X, Y = th.meshgrid(dim_sample, dim_sample)
x_vec, y_vec = th.flatten(X), th.flatten(Y)
test_x = th.stack((x_vec, y_vec), dim=1)
y_hat, cov = gp(test_x)
lower, upper = confidence_region(y_hat.squeeze(), cov.to_dense().squeeze())
test_y = (th.sin(test_x[:,0])*th.cos(test_x[:,1])).unsqueeze(1)
assert (y_hat - test_y).abs().max().item() < sigma
assert (test_y.squeeze() > lower).all()
assert (upper > test_y.squeeze()).all()
#uncomment plotting for debugging
# fig = plt.figure()
# ax = fig.add_subplot(111, projection='3d')
# ax.plot_surface(X.detach().numpy(), Y.detach().numpy(),
# (y_hat - test_y).abs().view_as(X).detach().numpy())
# # ax.plot_surface(X.detach().numpy(), Y.detach().numpy(), y_hat.view_as(X).detach().numpy())
# plt.show()
def test_gp_save_load():
sigma = 0.2
train_X = th.rand((650,2))*2*pi
train_Y = th.sin(train_X[:,0])*th.cos(train_X[:,1]) \
+ sigma * (th.rand_like(train_X[:,0]) - 0.5)
train_Y = train_Y.unsqueeze(1)
kernel = RBFKernel(2)
gp_original = GaussianProcess(train_X, train_Y, kernel)
gp_original.train_model()
root_dir = pathlib.Path(__file__).parent.absolute()
save_dir = root_dir / 'data' / 'test_load_save_gp'
makedirs(save_dir, exist_ok=True)
save_file = save_dir / 'test_save.th'
save_gp(gp_original, save_file)
gp_loaded = load_gp(save_file, RBFKernel(2))
dim_sample = th.linspace(0,2*pi, 100)
X, Y = th.meshgrid(dim_sample, dim_sample)
x_vec, y_vec = th.flatten(X), th.flatten(Y)
test_x = th.stack((x_vec, y_vec), dim=1)
y_hat, cov = gp_loaded(test_x)
lower, upper = confidence_region(y_hat.squeeze(), cov.to_dense().squeeze())
test_y = (th.sin(test_x[:,0])*th.cos(test_x[:,1])).unsqueeze(1)
assert (y_hat - test_y).abs().max().item() < sigma
assert (test_y.squeeze() > lower).all()
assert (upper > test_y.squeeze()).all()
def test_2d2dgp_multiplicative_periodic():
sigma = 0.5
train_X = th.stack([th.distributions.Uniform(0, 2 * pi * 10).sample((250,)),
th.distributions.Uniform(-10 * pi * 100, 10 * pi * 100).sample(
(250,))], dim=1)
def genYYprime(X):
Y = th.stack([
10 * th.sin(X[:, 0]/10),
10 * th.cos(X[:, 0]/10) * X[:, 1]], dim=1)
Y_prime = th.stack([
th.stack([th.cos(X[:, 0] / 10), th.zeros((X.shape[0], ))], dim=1),
th.stack([(- th.sin(X[:, 0] / 10) * X[:,1]),
10 * th.cos(X[:, 0] / 10)], dim=1)], dim=1)
return Y, Y_prime
train_Y, train_Y_prime = genYYprime(train_X)
train_Y += th.distributions.Normal(0.0, sigma).sample(train_X.shape)
# kernel = RBFKernel(2, ard_num_dims=True)
kernel = MultiplicativeKernel(
kernels=[PeriodicKernel(2), RBFKernel(1)],
active_dims=[[0], [1]])
x_scaler = GPScaler(xmins=th.tensor([0, -10 * pi * 100]),
xmaxs=th.tensor([2 * pi * 10, 10 * pi * 100]),
wraps=th.tensor([True, False]))
y_scaler = GPScaler(xmins=th.tensor([-10, -100 * pi * 100]),
xmaxs=th.tensor([10, 100 * pi * 100]))
gp = ScaledGaussianProcess(train_X, train_Y, kernel,
x_scaler=x_scaler, y_scaler=y_scaler)
gp.train_model()
x_vec = th.linspace(-10 * pi * 10, 10 * pi * 10, 100)
y_vec = th.linspace(-10 * pi * 100, 10 * pi * 100, 100)
test_x = th.stack((x_vec, y_vec), dim=1)
test_y , test_y_prime = genYYprime(test_x)
y_hat, cov = gp(test_x)
y_hat_prime, cov_prime = gp.ddx(test_x)
lower1, upper1 = confidence_region(y_hat.squeeze()[:, 0], cov[:, :, 0, 0])
lower2, upper2 =confidence_region(y_hat.squeeze()[:, 1], cov[:, :, 1, 1])
lower1_p, upper1_p = confidence_region(y_hat_prime[:, 0, 0], cov_prime[:, :, 0, 0, 0, 0])
lower2_p, upper2_p = confidence_region(y_hat_prime[:, 1, 1], cov_prime[:, :, 1, 1, 1, 1])
# These don't guarantee the true value is in the confidence region,
# but it's good enough.
# GP tests
assert (test_y[:, 0] > lower1).all()
assert (test_y[:, 1] > lower2).all()
assert (test_y[:, 0] < upper1).all()
assert (test_y[:, 1] < upper2).all()
# Big tolerance because the problem is scaled to be very large
assert (y_hat[5:-5, :] - test_y[5:-5, :]).abs().mean() < 25
# # derivative of GP tests
assert abs(y_hat_prime[:, 0, 0] - test_y_prime[:, 0, 0]).mean() < 1
assert abs(y_hat_prime[:, 1, 1] - test_y_prime[:, 1, 1]).mean() < 1
assert abs(y_hat_prime[:, 1, 0] - test_y_prime[:, 1, 0]).mean() < 10
assert y_hat_prime[:, 0, 1].abs().mean() < 1 # notice the loss of a significant figure
assert (y_hat_prime[:, 0, 0] < upper1_p).all()
assert (y_hat_prime[:, 0, 0] > lower1_p).all()
assert (y_hat_prime[:, 1, 1] < upper2_p).all()
assert (y_hat_prime[:, 1, 1] > lower2_p).all()
# uncomment plots for debugging
# f, axs = plt.subplots(2,2, dpi=200)
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# y_hat[:,0].detach().squeeze().numpy(), label='Estimate')
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# test_y[:,0].detach().squeeze().numpy(), label='True Y')
# axs[0][0].legend(loc='upper right')
# axs[0][0].plot(train_X[:,0].squeeze().detach().numpy(),
# train_Y[:,0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# axs[0][0].fill_between(test_x[:,0].detach().squeeze().numpy(),
# lower1.detach().numpy(),
# upper1.detach().numpy(), alpha=0.5)
# axs[0][0].title.set_text('$10 \\sin(\\frac{x_1}{10})$')
# axs[0][1].plot(test_x[:,1].detach().squeeze().numpy(),
# y_hat[:,1].detach().squeeze().numpy(), label='Estimate')
# axs[0][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y[:, 1].detach().squeeze().numpy(), label='True Y')
# axs[0][1].legend(loc='upper right')
# axs[0][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2.detach().numpy(),
# upper2.detach().numpy(), alpha=0.5)
# axs[0][1].title.set_text('$10 \\cos(\\frac{x_1}{10}) x_2$')
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# y_hat_prime[:, 0,0].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# test_y_prime[:, 0,0].detach().squeeze().numpy(), label='True Y')
# axs[1][0].legend(loc='upper right')
# axs[1][0].fill_between(test_x[:, 0].detach().squeeze().numpy(),
# lower1_p.detach().numpy(),
# upper1_p.detach().numpy(), alpha=0.5)
# axs[1][0].plot(train_X[:, 0].squeeze().detach().numpy(),
# train_Y_prime[:, 0, 0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# axs[1][0].title.set_text('$\\cos(\\frac{x_1}{10})$')
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat_prime[:, 1, 1].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y_prime[:, 1,1].detach().squeeze().numpy(),
# label='True Y')
# axs[1][1].legend(loc='upper right')
# axs[1][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2_p.detach().numpy(),
# upper2_p.detach().numpy(), alpha=0.5)
# axs[1][1].title.set_text('$10 \\cos(\\frac{x_1}{10})$')
# plt.show()
def test_2d2dgp_scaling_wrapping():
# Diagonals should be the derivatives;
# off-diagonals should be approximately zero.
sigma = 0.5
train_X = th.stack([th.distributions.Uniform(0, 2 * pi * 10).sample((250,)),
th.distributions.Uniform(0, 2 * pi * 100).sample((250,))], dim=1)
train_Y = th.stack([
10 * th.sin(train_X[:, 0]/10),
100 * th.cos(train_X[:,1]/ 100),
train_X[:, 0] * train_X[:,1]
], dim=1) + th.distributions.Normal(
0.0, sigma).sample((train_X.shape[0], 3))
train_Y_prime = th.stack([
th.stack([th.cos(train_X[:, 0] / 10), th.zeros(train_X.shape[0],)], dim=1),
th.stack([th.zeros(train_X.shape[0],), -th.sin(train_X[:, 1] / 100)],dim=1),
th.stack([train_X[:, 1], train_X[:, 0]], dim=1)], dim=1)
# kernel = RBFKernel(2, ard_num_dims=True)
kernel = AdditiveKernel(kernels=[
PeriodicKernel(p_prior=2.),
PeriodicKernel(p_prior=2.)],
active_dims=[
[0],
[1]])
x_scaler = GPScaler(xmins=th.tensor([0, 0]), xmaxs=th.tensor([2*pi*10, 2*pi*100]))
y_scaler = GPScaler(xmins=th.tensor([-10, -100, 0]),
xmaxs=th.tensor([10,100, 4 * pi * pi * 1000]))
gp = ScaledGaussianProcess(train_X, train_Y, kernel,
x_scaler=x_scaler, y_scaler=y_scaler)
gp.train_model()
# X, Y = th.meshgrid(dim_sample, dim_sample)
x_vec = th.linspace(-4 * pi * 10, 4 * pi * 10, 100)
y_vec = th.linspace(-4 * pi * 100, 4 * pi * 100, 100)
test_x = th.stack((x_vec, y_vec), dim=1)
test_y = th.stack([
10 * th.sin(test_x[:,0]/10),
100 * th.cos(test_x[:,1] / 100),
test_x[:,0] * test_x[:,1]
], dim=1)
test_y_prime = th.stack([
th.stack([th.cos(test_x[:, 0] / 10), th.zeros(test_x.shape[0], )],
dim=1),
th.stack([th.zeros(test_x.shape[0], ), -th.sin(test_x[:, 1] / 100)],
dim=1),
th.stack([test_x[:, 1], test_x[:, 0]], dim=1)], dim=1)
y_hat, cov = gp(test_x)
y_hat_prime, cov_prime = gp.ddx(test_x)
lower1, upper1 = confidence_region(y_hat.squeeze()[:,0], cov[:,:,0,0])
lower2, upper2 = confidence_region(y_hat.squeeze()[:, 1], cov[:,:,1,1])
lower3, upper3 = confidence_region(y_hat.squeeze()[:, 2], cov[:,:,2,2])
lower1_p , upper1_p = confidence_region(y_hat_prime[:,0,0], cov_prime[:,:,0,0,0,0])
lower2_p , upper2_p = confidence_region(y_hat_prime[:,1,1], cov_prime[:,:,1,1,1,1])
lower3_p, upper3_p = confidence_region(y_hat_prime[:, 2, 1], cov_prime[:, :, 1, 1, 2,2])
# These don't guarantee the true value is in the confidence region,
# but it's good enough.
# GP tests
assert (test_y[:,0] > lower1).all()
assert (test_y[:,1] > lower2).all()
assert (test_y[:, 0] < upper1).all()
assert (test_y[:, 1] < upper2).all()
# Notice the loss of a significant figure
assert (y_hat[:, [0, 1]] - test_y[:, [0, 1]]).abs().mean() < 1
# # # derivative of GP tests
assert abs(y_hat_prime[:, 0, 0] - test_y_prime[:, 0, 0]).mean() < 1e-1
assert abs(y_hat_prime[:, 1, 1] - test_y_prime[:, 1, 1]).mean() < 1e-1
assert y_hat_prime[:, 0, 1].abs().mean() < 1 #notice the loss of a significant figure
assert y_hat_prime[:, 1, 0].abs().mean() < 1 #notice the loss of a significant figure
assert (y_hat_prime[:, 0, 0] < upper1_p).all()
assert (y_hat_prime[:, 0, 0] > lower1_p).all()
assert (y_hat_prime[:, 1, 1] < upper2_p).all()
assert (y_hat_prime[:, 1, 1] > lower2_p).all()
#uncomment plots for debugging
# f, axs = plt.subplots(2,3, figsize=(18, 6))
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# y_hat[:,0].detach().squeeze().numpy(), label='Estimate')
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# test_y[:,0].detach().squeeze().numpy(), label='True Y')
# axs[0][0].legend(loc='upper right')
# axs[0][0].plot(train_X[:,0].squeeze().detach().numpy(),
# train_Y[:,0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# axs[0][0].fill_between(test_x[:,0].detach().squeeze().numpy(),
# lower1.detach().numpy(),
# upper1.detach().numpy(), alpha=0.5)
# axs[0][1].plot(test_x[:,1].detach().squeeze().numpy(),
# y_hat[:,1].detach().squeeze().numpy(), label='Estimate')
# axs[0][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y[:, 1].detach().squeeze().numpy(), label='True Y')
# axs[0][1].legend(loc='upper right')
# axs[0][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2.detach().numpy(),
# upper2.detach().numpy(), alpha=0.5)
# axs[0][1].plot(train_X[:,1].squeeze().detach().numpy(),
# train_Y[:,1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[0][2].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat[:, 2].detach().squeeze().numpy(), label='Estimate')
# axs[0][2].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y[:, 2].detach().squeeze().numpy(), label='True Y')
# axs[0][2].legend(loc='upper right')
# axs[0][2].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower3.detach().numpy(),
# upper3.detach().numpy(), alpha=0.5)
# axs[0][2].plot(train_X[:, 1].squeeze().detach().numpy(),
# train_Y[:, 2].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# y_hat_prime[:, 0,0].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# test_y_prime[:, 0, 0].detach().squeeze().numpy(), label='True Y')
# axs[1][0].legend(loc='upper right')
# axs[1][0].fill_between(test_x[:, 0].detach().squeeze().numpy(),
# lower1_p.detach().numpy(),
# upper1_p.detach().numpy(), alpha=0.5)
# axs[1][0].plot(train_X[:, 0].squeeze().detach().numpy(),
# train_Y_prime[:, 0, 0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat_prime[:, 1, 1].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y_prime[:, 1, 1].detach().squeeze().numpy(),
# label='True Y')
# axs[1][1].legend(loc='upper right')
# axs[1][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2_p.detach().numpy(),
# upper2_p.detach().numpy(), alpha=0.5)
# axs[1][1].plot(train_X[:, 1].squeeze().detach().numpy(),
# train_Y_prime[:, 1, 1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][2].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat_prime[:, 2, 1].detach().squeeze().numpy(),
# label='Estimate')
# test_y_prime.shape
# axs[1][2].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y_prime[:, 2,1].detach().squeeze().numpy(),
# label='True Y')
# axs[1][2].legend(loc='upper right')
# axs[1][2].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower3_p.detach().numpy(),
# upper3_p.detach().numpy(), alpha=0.5)
# axs[1][2].plot(train_X[:, 1].squeeze().detach().numpy(),
# train_Y_prime[:, 2, 1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# plt.show()
def test_2d2dgp_scaling():
# Diagonals should be the derivatives;
# off-diagonals should be approximately zero.
sigma = 0.5
train_X = th.stack([th.distributions.Uniform(0, 2 * pi * 10).sample((250,)),
th.distributions.Uniform(0, 2 * pi * 100).sample((250,))], dim=1)
train_Y = th.stack([
10 * th.sin(train_X[:, 0]/10),
100 * th.cos(train_X[:,1]/ 100)], dim=1) + th.distributions.Normal(0.0, sigma).sample(train_X.shape)
train_Y_prime = th.stack([
th.cat([th.cos(train_X[:, 0]/10).unsqueeze(1), th.zeros(train_X.shape[0], 1)],dim=1),
th.cat([th.zeros(train_X.shape[0], 1), -th.sin(train_X[:,1]/100).unsqueeze(1)],
dim=1)],
dim=1)
kernel = RBFKernel(2, ard_num_dims=True)
x_scaler = GPScaler(xmins=th.tensor([0, 0]), xmaxs=th.tensor([2*pi*10, 2*pi*100]))
y_scaler = GPScaler(xmins=th.tensor([-10, -100]) , xmaxs=th.tensor([10, 100]))
gp = ScaledGaussianProcess(train_X, train_Y, kernel,
x_scaler=x_scaler, y_scaler=y_scaler)
gp.train_model()
# X, Y = th.meshgrid(dim_sample, dim_sample)
x_vec = th.linspace(0, 2 * pi * 10, 100)
y_vec = th.linspace(0, 2 * pi * 100, 100)
test_x = th.stack((x_vec, y_vec), dim=1)
test_y = th.stack([
10 * th.sin(test_x[:,0]/10),
100 * th.cos(test_x[:,1] / 100)], dim=1)
test_y_prime = th.stack([
th.cos(test_x[:,0]/10),
-th.sin(test_x[:,1]/100)], dim=1)
y_hat, cov = gp(test_x)
y_hat_prime, cov_prime = gp.ddx(test_x)
lower1, upper1 = confidence_region(y_hat.squeeze()[:,0], cov[:,:,0,0])
lower2, upper2 = confidence_region(y_hat.squeeze()[:, 1], cov[:,:,1,1])
lower1_p , upper1_p = confidence_region(y_hat_prime[:,0,0], cov_prime[:,:,0,0, 0,0])
lower2_p , upper2_p = confidence_region(y_hat_prime[:,1,1], cov_prime[:,:,1,1, 1, 1])
# These don't guarantee the true value is in the confidence region,
# but it's good enough.
# GP tests
assert (test_y[:,0] > lower1).all()
assert (test_y[:,1] > lower2).all()
assert (test_y[:, 0] < upper1).all()
assert (test_y[:, 1] < upper2).all()
assert (y_hat - test_y).abs().mean() < 1 #notice the loss of a significant figure
# # derivative of GP tests
# assert abs(y_hat_prime[:, 0, 0] - test_y_prime[:, 0]).mean() < 1e-1
# assert abs(y_hat_prime[:, 1, 1] - test_y_prime[:, 1]).mean() < 1e-1
assert y_hat_prime[:, 0, 1].abs().mean() < 1 #notice the loss of a significant figure
assert y_hat_prime[:, 1, 0].abs().mean() < 1 #notice the loss of a significant figure
assert (y_hat_prime[:, 0, 0] < upper1_p).all()
assert (y_hat_prime[:, 0, 0] > lower1_p).all()
assert (y_hat_prime[:, 1, 1] < upper2_p).all()
assert (y_hat_prime[:, 1, 1] > lower2_p).all()
#uncomment plots for debugging
# f, axs = plt.subplots(2,2)
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# y_hat[:,0].detach().squeeze().numpy(), label='Estimate')
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# test_y[:,0].detach().squeeze().numpy(), label='True Y')
# axs[0][0].legend(loc='upper right')
# axs[0][0].plot(train_X[:,0].squeeze().detach().numpy(),
# train_Y[:,0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# axs[0][0].fill_between(test_x[:,0].detach().squeeze().numpy(),
# lower1.detach().numpy(),
# upper1.detach().numpy(), alpha=0.5)
# axs[0][1].plot(test_x[:,1].detach().squeeze().numpy(),
# y_hat[:,1].detach().squeeze().numpy(), label='Estimate')
# axs[0][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y[:, 1].detach().squeeze().numpy(), label='True Y')
# axs[0][1].legend(loc='upper right')
# axs[0][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2.detach().numpy(),
# upper2.detach().numpy(), alpha=0.5)
# axs[0][1].plot(train_X[:,1].squeeze().detach().numpy(),
# train_Y[:,1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# y_hat_prime[:, 0,0].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# test_y_prime[:, 0].detach().squeeze().numpy(), label='True Y')
# axs[1][0].legend(loc='upper right')
# axs[1][0].fill_between(test_x[:, 0].detach().squeeze().numpy(),
# lower1_p.detach().numpy(),
# upper1_p.detach().numpy(), alpha=0.5)
# axs[1][0].plot(train_X[:, 0].squeeze().detach().numpy(),
# train_Y_prime[:, 0, 0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat_prime[:, 1, 1].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y_prime[:, 1].detach().squeeze().numpy(),
# label='True Y')
# axs[1][1].legend(loc='upper right')
# axs[1][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2_p.detach().numpy(),
# upper2_p.detach().numpy(), alpha=0.5)
# axs[1][1].plot(train_X[:, 1].squeeze().detach().numpy(),
# train_Y_prime[:, 1, 1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# plt.show()
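The asserts above lean on `confidence_region`, whose implementation is outside this excerpt. A minimal pure-Python sketch, assuming it returns a roughly 95% band of mean plus/minus two standard deviations taken from the covariance diagonal (the helper name and the two-sigma width are assumptions, not confirmed by this file):

```python
import math

def confidence_region_sketch(mean, var_diag):
    # Hypothetical stand-in for confidence_region: a ~95% band of
    # mean +/- 2 standard deviations, elementwise over the diagonal
    # variances. The real helper operates on torch tensors instead.
    sd = [math.sqrt(max(v, 0.0)) for v in var_diag]
    lower = [m - 2 * s for m, s in zip(mean, sd)]
    upper = [m + 2 * s for m, s in zip(mean, sd)]
    return lower, upper
```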
def test_2d2dgp():
# Diagonals should be the derivative;
# off-diagonals should be approximately zero.
sigma = 0.2
train_X = th.rand((250,2))*2*pi
train_Y = th.stack([
th.sin(train_X[:, 0]),
th.cos(train_X[:,1])],dim=1) + sigma * (th.rand_like(train_X) - 0.5)
train_Y_prime = th.stack([
th.cat([th.cos(train_X[:, 0]).unsqueeze(1), th.zeros(train_X.shape[0], 1)],dim=1),
th.cat([th.zeros(train_X.shape[0], 1), -th.sin(train_X[:, 1]).unsqueeze(1)],dim=1)],
dim=1)
kernel = RBFKernel(2)
gp = GaussianProcess(train_X, train_Y, kernel)
gp.train_model()
dim_sample = th.linspace(0, 2 * pi, 100)
# X, Y = th.meshgrid(dim_sample, dim_sample)
x_vec, y_vec = dim_sample, dim_sample
test_x = th.stack((x_vec, y_vec), dim=1)
test_y = th.stack([
th.sin(test_x[:,0]),
th.cos(test_x[:,1])], dim=1)
test_y_prime = th.stack([
th.cos(test_x[:,0]),
-th.sin(test_x[:,1])], dim=1)
y_hat, cov = gp(test_x)
cov = cov.to_dense()
y_hat_prime, cov_prime = gp.ddx(test_x)
cov_prime = cov_prime.to_dense()
lower1, upper1 = confidence_region(y_hat.squeeze()[:,0], cov[:,:,0, 0])
lower2, upper2 = confidence_region(y_hat.squeeze()[:, 1], cov[:,:,1, 1])
lower1_p , upper1_p = confidence_region(y_hat_prime[:,0,0], cov_prime[:,:,0,0, 0, 0])
lower2_p , upper2_p = confidence_region(y_hat_prime[:,1,1], cov_prime[:,:,1,1, 1, 1])
# These asserts don't guarantee the true value lies inside the confidence
# region, but they are a good-enough smoke test.
# GP tests
assert (test_y[:,0] > lower1).all()
assert (test_y[:,1] > lower2).all()
assert (test_y[:, 0] < upper1).all()
assert (test_y[:, 1] < upper2).all()
assert (y_hat - test_y).abs().mean() < 1e-1
# Derivative-of-GP tests
assert abs(y_hat_prime[:, 0, 0] - test_y_prime[:, 0]).mean() < 1e-1
assert abs(y_hat_prime[:, 1, 1] - test_y_prime[:, 1]).mean() < 1e-1
assert y_hat_prime[:, 0, 1].abs().mean() < 1e-1
assert y_hat_prime[:, 1, 0].abs().mean() < 1e-1
#
assert (y_hat_prime[:, 0, 0] < upper1_p).all()
assert (y_hat_prime[:, 0, 0] > lower1_p).all()
assert (y_hat_prime[:, 1, 1] < upper2_p).all()
assert (y_hat_prime[:, 1, 1] > lower2_p).all()
# Uncomment the plots below for debugging
# f, axs = plt.subplots(2,2)
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# y_hat[:,0].detach().squeeze().numpy(), label='Estimate')
# axs[0][0].plot(test_x[:,0].detach().squeeze().numpy(),
# test_y[:,0].detach().squeeze().numpy(), label='True Y')
# axs[0][0].legend(loc='upper right')
# axs[0][0].plot(train_X[:,0].squeeze().detach().numpy(),
# train_Y[:,0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# axs[0][0].fill_between(test_x[:,0].detach().squeeze().numpy(),
# lower1.detach().numpy(),
# upper1.detach().numpy(), alpha=0.5)
# axs[0][1].plot(test_x[:,1].detach().squeeze().numpy(),
# y_hat[:,1].detach().squeeze().numpy(), label='Estimate')
# axs[0][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y[:, 1].detach().squeeze().numpy(), label='True Y')
# axs[0][1].legend(loc='upper right')
# axs[0][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2.detach().numpy(),
# upper2.detach().numpy(), alpha=0.5)
# axs[0][1].plot(train_X[:,1].squeeze().detach().numpy(),
# train_Y[:,1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# y_hat_prime[:, 0,0].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][0].plot(test_x[:, 0].detach().squeeze().numpy(),
# test_y_prime[:, 0].detach().squeeze().numpy(), label='True Y')
# axs[1][0].legend(loc='upper right')
# axs[1][0].fill_between(test_x[:, 0].detach().squeeze().numpy(),
# lower1_p.detach().numpy(),
# upper1_p.detach().numpy(), alpha=0.5)
# axs[1][0].plot(train_X[:, 0].squeeze().detach().numpy(),
# train_Y_prime[:, 0, 0].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
#
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# y_hat_prime[:, 1, 1].detach().squeeze().numpy(),
# label='Estimate')
# axs[1][1].plot(test_x[:, 1].detach().squeeze().numpy(),
# test_y_prime[:, 1].detach().squeeze().numpy(),
# label='True Y')
# axs[1][1].legend(loc='upper right')
# axs[1][1].fill_between(test_x[:, 1].detach().squeeze().numpy(),
# lower2_p.detach().numpy(),
# upper2_p.detach().numpy(), alpha=0.5)
# axs[1][1].plot(train_X[:, 1].squeeze().detach().numpy(),
# train_Y_prime[:, 1, 1].detach().numpy(), 'o',
# color='black',
# markersize=5,
# fillstyle="none")
# plt.show()
def test_exp_kernel():
test_X = th.tensor([[7.1,-100.2], [0.5, 12.2]], dtype=th.float64)
kernel = RBFKernel(2)
kernel._length_scale.data = th.tensor(1.23)
kernel._signal_variance.data = th.tensor(4.56)
K = kernel(test_X, test_X)
dKdx1 = kernel.ddx1(test_X, test_X)
dKdx2 = kernel.ddx2(test_X, test_X)
d2Kdx1x2 = kernel.d2dx1x2(test_X, test_X)
K_expected = th.tensor(
[[95.5835, 6.18301e-234],
[6.18301e-234, 95.5835]])
dKdx1_expected = th.tensor([[[0, 0], [-3.48642e-234, 5.93748e-233]], [[3.48642e-234, -5.93748e-233], [0, 0]]])
dKdx2_expected = th.tensor([[[0, 0], [3.48642e-234, -5.93748e-233]], [[-3.48642e-234,5.93748e-233], [0, 0]]])
d2Kdx1x2_expected = th.tensor([[[[8.166169912567646, 0],
[0, 8.166169912567646]],
[[-1.43764e-234,3.34797e-233], [3.34797e-233, -5.69641e-232]]], \
[[[-1.43764e-234, 3.34797e-233], [3.34797e-233, -5.69641e-232]],
[[8.166169912567646,0],
[0, 8.166169912567646]]]])
th.testing.assert_allclose(K, K_expected)
th.testing.assert_allclose(dKdx1, dKdx1_expected, atol=1e-234, rtol=1e-234)
th.testing.assert_allclose(dKdx2, dKdx2_expected, atol=1e-234, rtol=1e-234)
# Diagonals of the first and last elements are a bit weird,
# probably because we are taking a partial rather than a full derivative
# even when x1 == x2. Mathematica short-circuits to zero in these cases.
th.testing.assert_allclose(d2Kdx1x2, d2Kdx1x2_expected, atol=1e-234, rtol=1e-234)
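The magnitudes in `K_expected` can be sanity-checked by hand. Assuming `_length_scale` and `_signal_variance` are stored in log space (an inference that is consistent with the numbers above, not stated in this file), the RBF kernel is k(x, x') = exp(sv) * exp(-||x - x'||^2 / (2 * exp(ls)^2)):

```python
import math

# Assumed log-space parameters, matching the test above
sv, ls = 4.56, 1.23

k_diag = math.exp(sv)  # k(x, x): should reproduce 95.5835

# Squared distance between the two test rows [7.1, -100.2] and [0.5, 12.2]
d2 = (7.1 - 0.5) ** 2 + (-100.2 - 12.2) ** 2
k_off = math.exp(sv) * math.exp(-d2 / (2 * math.exp(ls) ** 2))  # ~6.18e-234
```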
def test_periodic_kernel():
test_X = th.tensor([[7.1,-100.2], [0.5, 12.2]], dtype=th.float64)
kernel = PeriodicKernel(p_prior=3)
kernel._length_scale.data = th.tensor(1.23)
kernel._signal_variance.data = th.tensor(4.56)
K = kernel(test_X[:, 0, None], test_X[:, 0, None])
dKdx1 = kernel.ddx1(test_X[:,0,None], test_X[:,0,None])
dKdx2 = kernel.ddx2(test_X[:,0,None], test_X[:,0,None])
d2Kdx1x2 = kernel.d2dx1x2(test_X[:,0,None], test_X[:,0,None])
K_expected = th.tensor([[95.5835, 90.1041], [90.1041, 95.5835]])
dKdx1_expected = th.tensor([[0, -15.3336], [15.3336, 0]])
dKdx2_expected = th.tensor([[0, 15.3336], [-15.3336, 0]])
d2Kdx1x2_expected = th.tensor([[35.8208, 7.82527], [7.82527, 35.8208]])
th.testing.assert_allclose(K, K_expected, atol=1e-6, rtol=1e-6)
th.testing.assert_allclose(dKdx1, dKdx1_expected, atol=1e-5, rtol=1e-5)
th.testing.assert_allclose(dKdx2, dKdx2_expected, atol=1e-5, rtol=1e-5)
th.testing.assert_allclose(d2Kdx1x2, d2Kdx1x2_expected, atol=1e-6, rtol=1e-6)
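Similarly, the periodic-kernel expectations match the standard form k(x, x') = exp(sv) * exp(-2 * sin^2(pi * |x - x'| / p) / exp(ls)^2), again assuming log-space parameters (inferred from the numbers, not stated in this file):

```python
import math

sv, ls, p = 4.56, 1.23, 3  # parameters used by the test above

d = 7.1 - 0.5  # distance between the two first-column inputs
k_off = math.exp(sv) * math.exp(
    -2 * math.sin(math.pi * d / p) ** 2 / math.exp(ls) ** 2
)  # should reproduce the 90.1041 off-diagonal of K_expected
```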
def test_additive_kernel():
test_X = th.tensor([[7.1, -100.2], [0.5, 12.2]], dtype=th.float64)
p_kernel = PeriodicKernel(p_prior=3)
rbf_kernel = RBFKernel(2)
p_kernel._length_scale.data = th.tensor(1.23)
p_kernel._signal_variance.data = th.tensor(4.56)
rbf_kernel._length_scale.data = th.tensor(1.23)
rbf_kernel._signal_variance.data = th.tensor(4.56)
kernel = AdditiveKernel(kernels=[p_kernel, rbf_kernel],
active_dims=[[0], [0,1]])
K = kernel(test_X)
dKdx1 = kernel.ddx1(test_X, test_X)
dKdx2 = kernel.ddx2(test_X, test_X)
d2Kdx1x2 = kernel.d2dx1x2(test_X, test_X)
rbf_K_expected = th.tensor(
[[95.5835, 6.18301e-234],
[6.18301e-234, 95.5835]])
rbf_dKdx1_expected = th.tensor([[[0, 0], [-3.48642e-234, 5.93748e-233]],
[[3.48642e-234, -5.93748e-233], [0, 0]]])
rbf_dKdx2_expected = th.tensor([[[0, 0], [3.48642e-234, -5.93748e-233]],
[[-3.48642e-234, 5.93748e-233], [0, 0]]])
rbf_d2Kdx1x2_expected = th.tensor([[[[8.166169912567646, 0],
[0, 8.166169912567646]],
[[-1.43764e-234, 3.34797e-233],
[3.34797e-233, -5.69641e-232]]], \
[[[-1.43764e-234, 3.34797e-233],
[3.34797e-233, -5.69641e-232]],
[[8.166169912567646, 0],
[0, 8.166169912567646]]]])
p_K_expected = th.tensor([[95.5835, 90.1041], [90.1041, 95.5835]])
p_dKdx1_expected = th.zeros_like(rbf_dKdx1_expected)
p_dKdx1_expected[:, :,0] = th.tensor([[0, -15.3336], [15.3336, 0]])
p_dKdx2_expected = th.zeros_like(rbf_dKdx1_expected)
p_dKdx2_expected[:, :, 0] = th.tensor([[0, 15.3336], [-15.3336, 0]])
p_d2Kdx1x2_expected = th.zeros_like(rbf_d2Kdx1x2_expected)
p_d2Kdx1x2_expected[:,:,0,0] = th.tensor([[35.8208, 7.82527], [7.82527, 35.8208]])
K_expected = rbf_K_expected + p_K_expected
dKdx1_expected = rbf_dKdx1_expected + p_dKdx1_expected
dKdx2_expected = rbf_dKdx2_expected + p_dKdx2_expected
d2Kdx1x2_expected = rbf_d2Kdx1x2_expected + p_d2Kdx1x2_expected
th.testing.assert_allclose(K, K_expected)
th.testing.assert_allclose(dKdx1, dKdx1_expected, atol=1e-5, rtol=1e-5)
th.testing.assert_allclose(dKdx2, dKdx2_expected, atol=1e-5, rtol=1e-5)
th.testing.assert_allclose(d2Kdx1x2, d2Kdx1x2_expected, atol=1e-234, rtol=1e-5)
def test_multiplicative_kernel():
test_X = th.tensor([[7.1, -100.2], [0.5, 12.2]], dtype=th.float64)
p_kernel = PeriodicKernel(p_prior=3)
rbf_kernel = RBFKernel(1)
p_kernel._length_scale.data = th.tensor(1.23)
p_kernel._signal_variance.data = th.tensor(4.56)
rbf_kernel._length_scale.data = th.tensor(1.23)
rbf_kernel._signal_variance.data = th.tensor(4.56)
kernel = MultiplicativeKernel(kernels=[p_kernel, rbf_kernel],
active_dims=[[0], [1]])
K = kernel(test_X, test_X)
dKdx1 = kernel.ddx1(test_X, test_X)
dKdx2 = kernel.ddx2(test_X, test_X)
d2Kdx1x2 = kernel.d2dx1x2(test_X, test_X)
K_expected = th.tensor([[9136.2, 3.58153e-231], [3.58153e-231, 9136.2]])
dKdx1_expected = th.tensor([[[0., 0.], [-6.09493e-232,
3.4393e-230]], [[6.09493e-232, -3.4393e-230], [0., 0.]]])
dKdx2_expected = th.tensor([[[0., 0.], [6.09493e-232, -3.4393e-230]], [[-6.09493e-232,
3.4393e-230], [0., 0.]]])
d2Kdx1x2_expected = th.tensor([[[[3423.88, 0.], [0., 780.551]], [[3.11045e-232,
5.85289e-231], [5.85289e-231, -3.29966e-229]]], \
[[[3.11045e-232,
5.85289e-231], [5.85289e-231, -3.29966e-229]], [[3423.88,
0.], [0., 780.551]]]])
th.testing.assert_allclose(K, K_expected)
th.testing.assert_allclose(dKdx1, dKdx1_expected, atol=1e-234, rtol=1e-5)
th.testing.assert_allclose(dKdx2, dKdx2_expected, atol=1e-234, rtol=1e-5)
th.testing.assert_allclose(d2Kdx1x2, d2Kdx1x2_expected, atol=1e-234, rtol=1e-5)
def test_affine_dot_product_kernel():
test_X = th.tensor([[7.1, -100.2, -1., 1.3],
[0.5, 12.2, 3, 6.7]],
dtype=th.float64)
p_kernel = PeriodicKernel(p_prior=3)
rbf_kernel = RBFKernel(1)
p_kernel._length_scale.data = th.tensor(1.23)
p_kernel._signal_variance.data = th.tensor(4.56)
rbf_kernel._length_scale.data = th.tensor(1.23)
rbf_kernel._signal_variance.data = th.tensor(4.56)
sub_kernels = [MultiplicativeKernel(kernels=[p_kernel, rbf_kernel],
active_dims=[[0], [1]])]*3
kernel = AffineDotProductKernel(s_idx=[0,1], m_idx=[2,3],
kernels=sub_kernels, last_is_unit=True)
# Smoke test only: exercise the calls without asserting on the values
K = kernel(test_X, test_X)
dKdx1 = kernel.ddx1(test_X, test_X)
dKdx2 = kernel.ddx2(test_X, test_X)
d2Kdx1x2 = kernel.d2dx1x2(test_X, test_X)
def test_multiplicative_periodic_consistency():
kernel = MultiplicativeKernel(
kernels=[PeriodicKernel(p_prior=1/2,
learn_period=False),
RBFKernel(2, ard_num_dims=True)],
active_dims=[[0], [1, 2]]
)
expected_1 = kernel.kernels[0](th.tensor([[-1.]]), th.tensor([[-1.]])) * \
kernel.kernels[1](
th.tensor([[0.5, 0]]), th.tensor([[0.5, 0]]))
expected_2 = kernel.kernels[0](th.tensor([[-1.]]), th.tensor([[-2.]])) * \
kernel.kernels[1](
th.tensor([[0.5, 0]]), th.tensor([[0.5, 0]]))
actual_1 = kernel(th.tensor([[-1, 0.5, 0]]), th.tensor([[-1, 0.5, 0]]))
actual_2 = kernel(th.tensor([[-1, 0.5, 0]]), th.tensor([[-2, 0.5, 0]]))
actual_3 = kernel(th.tensor([[1, 0.5, 0]]), th.tensor([[1, 0.5, 0]]))
actual_4 = kernel(th.tensor([[1, 0.5, 0]]), th.tensor([[2, 0.5, 0]]))
th.testing.assert_allclose(expected_1, expected_2)
th.testing.assert_allclose(actual_1, expected_1)
th.testing.assert_allclose(actual_2, expected_2)
th.testing.assert_allclose(actual_3, actual_4)
th.testing.assert_allclose(actual_3, actual_2)
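The consistency test above relies on `MultiplicativeKernel` evaluating each sub-kernel only on its own `active_dims` slice and multiplying the results. A toy sketch of that dispatch (function name hypothetical, scalar-valued for clarity):

```python
def multiplicative_eval_sketch(kernels, active_dims, x1, x2):
    # Each sub-kernel sees only its own slice of the input dimensions;
    # the full kernel value is the product of the per-slice values.
    out = 1.0
    for k, dims in zip(kernels, active_dims):
        out *= k([x1[d] for d in dims], [x2[d] for d in dims])
    return out
```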
# Source: simplebitcoinfuncs/_doctester.py
# from maxweisspoker/simplebitcoinfuncs (MIT license)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
I removed all the doctests from everywhere else and put them here.
'''
from __future__ import print_function, division, absolute_import
try:
from __builtin__ import bytes, str, open, super, range, zip, round, int, pow, object, input
except ImportError: pass
try:
from __builtin__ import raw_input as input
except ImportError: pass
from codecs import decode
from binascii import hexlify, unhexlify
try:
ModuleNotFoundError
except NameError:
ModuleNotFoundError = ImportError
try:
from .hexhashes import *
from .ecmath import *
from .base58 import *
from .miscfuncs import *
from .miscbitcoinfuncs import *
from .bitcoin import *
from .signandverify import *
from .stealth import *
from .bip32 import *
from .bip39 import *
from .electrum1 import *
from .electrum2 import *
from .rfc6979 import generate_k
except Exception as e:
if not isinstance(e, (ImportError, ModuleNotFoundError,
ValueError, SystemError)):
raise Exception("Unknown problem with imports.")
from hexhashes import *
from ecmath import *
from base58 import *
from miscfuncs import *
from miscbitcoinfuncs import *
from bitcoin import *
from signandverify import *
from stealth import *
from bip32 import *
from bip39 import *
from electrum1 import *
from electrum2 import *
from rfc6979 import generate_k
def hexhashes_py___doctest():
'''
hexhashes.py tests:
>>> sha256('')
'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
>>> sha256('aabbccdd')
'8d70d691c822d55638b6e7fd54cd94170c87d19eb1f628b757506ede5688d297'
>>> sha512('')
'cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e'
>>> sha512('aabbccdd')
'48e218b30d4ea16305096fe35e84002a0d262eb3853131309423492228980c60238f9eed238285036f22e37c4662e40c80a461000a7aa9a03fb3cb6e4223e83b'
>>> sha512d('')
'826df068457df5dd195b437ab7e7739ff75d2672183f02bb8e1089fabcf97bd9dc80110cf42dbc7cff41c78ecb68d8ba78abe6b5178dea3984df8c55541bf949'
>>> sha512d('aabbccdd')
'46561839a3278e5cd3999450c8f89e459aa8c234fbee7935635db777d7dbd654bf7293c84cf64c318be0197a41c622a247a70024ff9d27f392c0d4a4da8d6354'
>>> ripemd160('')
'9c1185a5c5e9fc54612808977ee8f548b2258d31'
>>> ripemd160('aabbccdd')
'148164ccf60a825bc3250722074c3426a7f67fcb'
>>> hash160('')
'b472a266d0bd89c13706a4132ccfb16f7c3b9fcb'
>>> hash160('aabbccdd')
'd6e9254683798a28eabd2626fd573cf2cf3869f9'
>>> hash256('')
'5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456'
>>> hash256('aabbccdd')
'6a83c7f1def9386347c206e94c90559f49be557609fc1811bfe311b67ecef8b0'
>>> hash512('')
'6b4e6c1fe36504e12e6d9716f74250ecb6fefb2a83af8e8edee9caeb3f32ca4683eca58c50faa06afc40a15fdc4c706d296a6f859bfb9b22871d28a500baf7b1'
>>> hash512('aabbccdd')
'20df4f6c9244b517cb5dd1c3b1e13bb316a45f5b904fc57799b66389947186d266ad611ee282fdea6630da4dbc96015beba2faecc110782015df662c4abf6297'
'''
return
def ecmath_py___doctest():
'''
Many doctests for ecmath compare results with `==` and print True/False
rather than printing the numbers directly. That guarantees identical
output on Python 2 and 3, so artifacts like 4 vs 4L don't break the tests.
>>> modinv(2521213890399410648018095333325722136449021566908310412768334520696982806641) == \
17465617466841484688650846354295959695753514552349626970717521890536775674935
True
>>> modinv(-95700528412413679576195283092455617561285633360671739483652140770588235170392,N) == \
11411284869303779416452608717884069348175089368882490102158000583211275329323
True
>>> x,y = ecadd( \
4938373901174265576094805690384936437621390742743114714534166734031749709952, \
23406007515733211420427986631155727216565925582529100160361434981966318828999, \
11029270422249989266356636372380040023432092195222839243672437607748020962878, \
12338920660869481789439141094019604918037726829679018934712977981859756778348)
>>> x == 83336094426407305185582932726071265758876028986498851406936393497302545717601
True
>>> y == 71857134501436534997244054415723847888276629084374532235863885413095164252131
True
>>> x,y = ecsubtract( \
4938373901174265576094805690384936437621390742743114714534166734031749709952, \
23406007515733211420427986631155727216565925582529100160361434981966318828999, \
11029270422249989266356636372380040023432092195222839243672437607748020962878, \
12338920660869481789439141094019604918037726829679018934712977981859756778348)
>>> x == 55597633869961612317309410433076836678766403763677101352598043583451378461409
True
>>> y == 26239925317332119257936947014113260047893818687460697966446999229620006489892
True
>>> x,y = ecdouble( \
17122607971474055933869599824585174586417044884544686165239805207052395415204, \
40838023179274613372805173210407024975579475223402894269126337256598864150690)
>>> x == 72311113040667355501201059093433510680042205181920994715815687665050367873657
True
>>> y == 81202695815007557875587128276299271839897857009219307151962560322569452526782
True
>>> x,y = ecmultiply(Gx,Gy,42)
>>> x == 115136800820456833737994126771386015026287095034625623644186278108926690779567
True
>>> y == 3479535755779840016334846590594739014278212596066547564422106861430200972724
True
>>> x,y = ecmultiply( \
2521213890399410648018095333325722136449021566908310412768334520696982806641, \
61992791029995100687584613680591045503872148133214804167999634260847801377258, \
86160004736639257141798190143937095024102878958814199546049053726283481854320)
>>> x == 84191613447606291376043707809973780390176222720060740978105574111402634616050
True
>>> y == 109474824470067519156060832976900905455196281901984947573736908658291479411212
True
>>> pow_mod(int(N//4),int(N//15),P) == \
85863265686857850576725992990591539765753424982812429250530061375940639195105
True
'''
return
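The `modinv` results above can be cross-checked against Python 3.8+'s built-in modular inverse, `pow(a, -1, m)`. Here N is the secp256k1 group order, which is the modulus passed explicitly in the second doctest (the default-modulus case is left alone since this excerpt does not show `modinv`'s signature):

```python
# secp256k1 group order (the N passed explicitly in the doctest above)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def modinv_sketch(a, m=N):
    # Python 3.8+: pow with exponent -1 computes the modular inverse.
    # Reducing a mod m first also handles negative inputs.
    return pow(a % m, -1, m)
```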
def base58_py___doctest():
'''
>>> b58e('0000000000000000000000000000000000000000000000000000000000000000')
'11111111111111111111111111111111273Yts'
>>> b58e('80000000000000000000000000000000000000000000000000000000000000000101')
'KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rFU73sVHnoWn'
>>> b58e('80000000000000000000000000000000000000000000000000000000000000000101', False)
'3tq8Vmhh9SN5XhjTGSWgx8iKk59XbKG6UH4oqpejRoF9ASt'
>>> b58e('')
'3QJmnh'
>>> b58d('3A5vdSL9MQrKRijvxr8S3V2DQ918XPL1GL')
'055c16274562a91d531f6043f86c68d3a0f65be42a'
>>> b58d('11111111111111111111111111111111', False)
'0000000000000000000000000000000000000000000000000000000000000000'
>>> b58d('11111111111111111111111111111111273Yts')
'0000000000000000000000000000000000000000000000000000000000000000'
# An incorrect checksum raises while check is True
Traceback (most recent call last):
...
AssertionError
>>> b58d('11111111111111111111111111111111273YYY', False)
'00000000000000000000000000000000000000000000000000000000000000002b32d6d1'
>>> b58d('')
Traceback (most recent call last):
...
AssertionError
# str() is used because with a unicode input the Exception message
# would say u'0', which fails the doctest
>>> b58d(str('XY0Z'))
Traceback (most recent call last):
...
Exception: Character '0' is not a valid base58 character
'''
return
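The `b58e` outputs above follow standard Base58Check: append the first four bytes of double-SHA-256 as a checksum, base58-encode the result, and map each leading zero byte to a literal '1'. A minimal sketch (function name hypothetical):

```python
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check_sketch(hexstr):
    raw = bytes.fromhex(hexstr)
    # 4-byte checksum: first four bytes of SHA-256d over the payload
    chk = hashlib.sha256(hashlib.sha256(raw).digest()).digest()[:4]
    n = int.from_bytes(raw + chk, 'big')
    out = ''
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # each leading 0x00 byte becomes a literal '1'
    pad = len(raw) - len(raw.lstrip(b'\x00'))
    return '1' * pad + out
```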
def miscfuncs_py___doctest():
'''
>>> strlify(b'aabb')
'aabb'
>>> strlify(hexlify(unhexlify("aabb")))
'aabb'
>>> strlify('bb')
'bb'
>>> strlify(b'b')
'b'
>>> strlify('b')
'b'
>>> isitstring(55)
False
>>> isitstring('Hello')
True
>>> isitstring(u'Hello')
True
>>> isitint(4)
True
>>> isitint(2**256)
True
>>> isitint(-4)
True
>>> isitint(0)
True
>>> isitint('0')
False
>>> isitint('00')
False
>>> isitint(unhexlify('00'))
False
>>> isitint(4.0)
False
# Doctest renders bytes literals differently on Py2 and Py3,
# hence constructing the input with unhexlify
>>> hexstrlify(bytes(unhexlify('bbc7f07e59670ffdbb6bbb')))
'bbc7f07e59670ffdbb6bbb'
>>> hexreverse('a1b2c3d4')
'd4c3b2a1'
>>> dechex(4,2)
'0004'
>>> dechex(0)
'00'
>>> dechex(0000)
'00'
>>> dechex(43528704357807084357809435278904235,16)
'000862217d6e549c3fdf5c2e5b450bab'
'''
return
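`dechex` above behaves like zero-padded hex with a minimum byte width and whole-byte output; a plausible pure-Python equivalent (the padding rule is inferred from the doctests, not from `dechex`'s source):

```python
def dechex_sketch(n, min_bytes=1):
    h = format(n, 'x')
    if len(h) % 2:  # always emit whole bytes
        h = '0' + h
    # pad up to the requested minimum width, never truncate
    return h.zfill(2 * min_bytes) if len(h) < 2 * min_bytes else h
```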
def miscbitcoinfuncs_py___doctest():
'''
>>> oppushdatalen(13)
'0d'
>>> oppushdatalen(105)
'4c69'
>>> oppushdatalen(436)
'4db401'
>>> oppushdatalen(4294967290)
'4efaffffff'
>>> intfromoppushdatalen('4efaffffff')
4294967290
>>> intfromoppushdatalen('4db401')
436
>>> intfromoppushdatalen('4c69')
105
>>> intfromoppushdatalen('0d')
13
>>> intfromoppushdatalen('4c69dd')
Traceback (most recent call last):
...
AssertionError
>>> intfromoppushdatalen('0daa')
Traceback (most recent call last):
...
AssertionError
>>> tovarint(250)
'fa'
>>> tovarint(253)
'fdfd00'
>>> tovarint(260)
'fd0401'
>>> tovarint(294967296)
'fe00d89411'
>>> tovarint(6418473620)
'ff9422927e01000000'
>>> numvarintbytes('b5')
1
>>> numvarintbytes('fb')
1
>>> numvarintbytes('fc')
1
>>> numvarintbytes('fd')
3
>>> numvarintbytes('fe')
5
>>> numvarintbytes('ff')
9
>>> numvarintbytes('fd0401')
Traceback (most recent call last):
...
AssertionError
>>> fromvarint('ff9422927e01000000')
6418473620
>>> fromvarint('fe00d89411')
294967296
>>> fromvarint('fd0401')
260
>>> fromvarint('fdfd00')
253
>>> fromvarint('fc')
252
>>> fromvarint('fdfd0005')
Traceback (most recent call last):
...
AssertionError
>>> fromvarint('c9')
201
>>> x = 'fd5d010048304502200187af928e9d155c4b1ac9c1c9118153239aba76774f775d7c1f9c3e106ff33c0221008822b0f658edec22274d0b6ae9de10ebf2da06b1bbdaaba4e50eb078f39e3d78014730440220795f0f4f5941a77ae032ecb9e33753788d7eb5cb0c78d805575d6b00a1d9bfed02203e1f4ad9332d1416ae01e27038e945bc9db59c732728a383a6f1ed2fb99da7a4014cc952410491bba2510912a5bd37da1fb5b1673010e43d2c6d812c514e91bfa9f2eb129e1c183329db55bd868e209aac2fbc02cb33d98fe74bf23f0c235d6126b1d8334f864104865c40293a680cb9c020e7b1e106d8c1916d3cef99aa431a56d253e69256dac09ef122b1a986818a7cb624532f062c1d1f8722084861c5c3291ccffef4ec687441048d2455d2403e08708fc1f556002f1b6cd83f992d085097f9974ab08a28838f07896fbab08f39495e15fa6fad6edbfb1e754e35fa1c7844c41f322a1863d4621353aeffffffff0140420f00000000001976a914ae56b4db13554d321c402db3961187aed1bbed5b88ac00000000'
>>> getandstrip_varintdata(x)
('0048304502200187af928e9d155c4b1ac9c1c9118153239aba76774f775d7c1f9c3e106ff33c0221008822b0f658edec22274d0b6ae9de10ebf2da06b1bbdaaba4e50eb078f39e3d78014730440220795f0f4f5941a77ae032ecb9e33753788d7eb5cb0c78d805575d6b00a1d9bfed02203e1f4ad9332d1416ae01e27038e945bc9db59c732728a383a6f1ed2fb99da7a4014cc952410491bba2510912a5bd37da1fb5b1673010e43d2c6d812c514e91bfa9f2eb129e1c183329db55bd868e209aac2fbc02cb33d98fe74bf23f0c235d6126b1d8334f864104865c40293a680cb9c020e7b1e106d8c1916d3cef99aa431a56d253e69256dac09ef122b1a986818a7cb624532f062c1d1f8722084861c5c3291ccffef4ec687441048d2455d2403e08708fc1f556002f1b6cd83f992d085097f9974ab08a28838f07896fbab08f39495e15fa6fad6edbfb1e754e35fa1c7844c41f322a1863d4621353ae', 'ffffffff0140420f00000000001976a914ae56b4db13554d321c402db3961187aed1bbed5b88ac00000000')
>>> inttoDER(23159624154826860047781259025922852200415127951164078404008335037124850950245)
'02203333e1fba07e542a357c45103a2fa62c044af1000d21b54dc9c54de36aef2065'
>>> inttoDER(59344652041488171117647191841137823404561998159176754666338830039597703962725)
'0221008333e1fba07e542a357c45103a2fa62c044af1000d21b54dc9c54de36aef2065'
>>> inttoDER(6783848548763080805863616406882737495015296602382933530988832128704613)
'021e00fba07e542a357c45103a2fa62c044af1000d21b54dc9c54de36aef2065'
>>> inttoDER(3332975375367798912146238475744224768789742116297740253407570016804965)
'021d7ba07e542a357c45103a2fa62c044af1000d21b54dc9c54de36aef2065'
>>> inttoLEB128(624485)
'e58e26'
>>> LEB128toint('e58e26')
624485
'''
return
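The varint and LEB128 doctests above follow Bitcoin's CompactSize encoding and unsigned LEB128 respectively; minimal sketches (names hypothetical):

```python
def compactsize_sketch(n):
    # Bitcoin CompactSize: a single byte below 0xfd, otherwise a prefix
    # byte followed by a little-endian fixed-width integer.
    if n < 0xfd:
        return format(n, '02x')
    for prefix, width in (('fd', 2), ('fe', 4), ('ff', 8)):
        if n < 2 ** (8 * width):
            return prefix + n.to_bytes(width, 'little').hex()
    raise ValueError('value too large for a varint')

def uleb128_sketch(n):
    # Unsigned LEB128: 7 data bits per byte, least-significant group
    # first, high bit set on every byte except the last.
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return out.hex()
```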
def bitcoin_py___doctest():
'''
>>> uncompress('03AB27DC61A8D60CEB3A3234E69B818F2DF5B79FD67E0CCFF474B788ACE319FBB8')
'04ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb89dff12fbeb8368d30d28bf6c00dd1900c89ba086b19dab33828557418d855267'
>>> uncompress('02ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb8')
'04ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb86200ed04147c972cf2d74093ff22e6ff37645f794e6254cc7d7aa8bd727aa9c8'
>>> compress('04ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb89dff12fbeb8368d30d28bf6c00dd1900c89ba086b19dab33828557418d855267')
'03ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb8'
>>> compress('04ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb86200ed04147c972cf2d74093ff22e6ff37645f794e6254cc7d7aa8bd727aa9c8')
'02ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb8'
>>> privtopub('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d')
'03ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb8'
>>> privtopub('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', False)
'04ab27dc61a8d60ceb3a3234e69b818f2df5b79fd67e0ccff474b788ace319fbb89dff12fbeb8368d30d28bf6c00dd1900c89ba086b19dab33828557418d855267'
>>> addprivkeys('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', \
'5a499293484e3dec9d452d0996d9566613bebfe75b7c49bb205517336254105d')
'71d8a7f77f46c99745e9584b3b86e3dc262fdab955a3c7d8ab1b5e3939cc519a'
>>> addprivkeys('ff8f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', \
'5a499293484e3dec9d452d0996d9566613bebfe75b7c49bb205517336254105d')
'59d8a7f77f46c99745e9584b3b86e3dd6b80fdd2a65b279ceb48ffac69961059'
>>> subtractprivkeys('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', \
'5a499293484e3dec9d452d0996d9566613bebfe75b7c49bb205517336254105d')
'bd4582d0eeaa4dbe0b5efe380dd4370eb96137d14df3d49e2a438e5f455a7221'
>>> multiplypriv('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', \
'5a499293484e3dec9d452d0996d9566613bebfe75b7c49bb205517336254105d')
'a08be4c08d9820284fc81896b465a08b0a95305cf364517d9d46ec7ae954321e'
>>> multiplypub( \
'04eee3998f3546c061cfedd989cc77280ba2777dff4ed437b00d43dd2942dae003a702ba24e6c79ca23f1890249639c2621f897618d51d633b5039f1f3a4f4e7d4', \
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d')
'02fdd25715a72408d662e844027d6deb58b76cb0b9a294ee490191a4ef40df4792'
>>> multiplypub('02eee3998f3546c061cfedd989cc77280ba2777dff4ed437b00d43dd2942dae003', \
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d', False)
'04fdd25715a72408d662e844027d6deb58b76cb0b9a294ee490191a4ef40df47923efac534afd12d2fcd07c751ef4f6fac9286045df6e9e29608d56efc403a0438'
>>> addpubs('02fdd25715a72408d662e844027d6deb58b76cb0b9a294ee490191a4ef40df4792', \
'02eee3998f3546c061cfedd989cc77280ba2777dff4ed437b00d43dd2942dae003')
'024abeabbdd5de7727bbb2ff5251d57310ef2607dab1e2889f4315474778b466a3'
>>> subtractpubs( \
'02fdd25715a72408d662e844027d6deb58b76cb0b9a294ee490191a4ef40df4792', \
'02eee3998f3546c061cfedd989cc77280ba2777dff4ed437b00d43dd2942dae003')
'02e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c'
>>> pubtoaddress('02e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c')
'18o5G4us8k5DscDdyFq1nx8iSE7RFy2euv'
>>> pubtoaddress(uncompress('02e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c'))
'1MtiJXjp3Vr8s1AtgK1veGLNnjhy3PrUxE'
>>> validatepubkey('02E3752F728D53E227F789BE951FD899E36295C386F6C249940B5C9C275B4F908C')
'02e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c'
>>> validatepubkey('04e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c9a50cec685f8e2a1f77b216b60319c5b5da20cb1ad305af39c85c42a78cebf64')
'04e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c9a50cec685f8e2a1f77b216b60319c5b5da20cb1ad305af39c85c42a78cebf64'
>>> validatepubkey('04e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c9a50cec685f8e2a1f77b216b60319c5b5da20cb1ad305af39c85c42a78cebf65')
False
>>> validatepubkey('04e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c')
False
>>> validatepubkey('e3752f728d53e227f789be951fd899e36295c386f6c249940b5c9c275b4f908c')
False
>>> wiftohex("5KcCmPP68JhjXE3guHwMnA5aiYWvsMbQrpDJYkreLpgGQAroXDh")
('ebf4c9e128721400d4d8ac059c1aff929e9ad121518f744bfedf456592cd1dbd', '80', False)
>>> wiftohex("L58NvunVdF8ngQas7okviK5DpN76mFttJsPJTAa7pVSJy1KbUUkL")
('ebf4c9e128721400d4d8ac059c1aff929e9ad121518f744bfedf456592cd1dbd', '80', True)
>>> wiftohex("6uUqLX6roU6TbVtWqWRRzSAwMx2E7ctTbwDGL8Dyn1bmyKfS9f8")
('2f43829ce7f2985d4b4de7cbbb99b8d15843ad3f3149879ab20963f2978aeab6', 'b0', False)
>>> privtohex('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d')
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex('5HzfMyinNsX6ohao4LxY6dssqxy9Tg5unjV1KCt9UCiJRZvq5Gv')
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex('Kx1WKbMRHXyrd88AHm68FsmZR82pLjWfzrWcPMSkP4hHuHszrrZK')
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex('T3qmmLebguxTPxm2qQ2zUEJwMyg8QpXZp4QsFA5Hx2sTRBUPvfom')
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex(10656002286135494676906904972529529473002948329995631005275422314744862228797)
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex(unhexlify('178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'))
'178f156436f88baaa8a42b41a4ad8d7612711ad1fa277e1d8ac64705d778413d'
>>> privtohex("This is not a private key!")
Traceback (most recent call last):
...
Exception: Cannot interpret input key.
>>> privtohex('T3qmmLebguxTPxm2qQ2zUEJwMyg8QpX')
Traceback (most recent call last):
...
Exception: Cannot interpret input key.
>>> mycoin = Coin(u'ed4cbc48b674f3d3bce9f3f17ec7b9d8c03b5423afefd16a8c098ead535ec206','80','00')
>>> mycoin.priv
'ed4cbc48b674f3d3bce9f3f17ec7b9d8c03b5423afefd16a8c098ead535ec206'
>>> mycoin.wifc
'L5AzQaAcHxGhZUfzyHQqrJ76YFyzozEHZ1JNLbVQzbUScbPSw1hv'
>>> mycoin.wifu
'5Kco5tU6jMgMX1qFwtkTvmCoZunYDuqmz6WJyYq8FQfqrMrdMhE'
>>> mycoin.addrc
'1B7jusx1FY9u7XxsSYzLQtcohzf8sevxE4'
>>> mycoin.pubprefix
'00'
'''
return
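The `wiftohex` doctests above round-trip Base58Check-encoded WIF keys into `(hex_key, version_prefix, compressed_flag)`. A self-contained sketch of that decoding, checked against the vectors above (function and constant names are mine, not this library's):

```python
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def wif_to_hex(wif):
    # Base58 -> big integer -> bytes (WIF prefixes 0x80/0xb0 never
    # produce leading zero bytes, so no leading-'1' handling is needed)
    n = 0
    for ch in wif:
        n = n * 58 + B58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    payload, check = raw[:-4], raw[-4:]
    # Base58Check: last 4 bytes are sha256(sha256(payload))[:4]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != check:
        raise ValueError('bad checksum')
    # compressed WIF carries a trailing 0x01 after the 32-byte key
    compressed = len(payload) == 34 and payload[-1] == 0x01
    return payload[1:33].hex(), format(payload[0], '02x'), compressed
```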
def signandverify_py___doctest():
'''
>>> h = 'f7011e94125b5bba7f62eb25efe23339eb1637539206c87df3ee61b5ec6b023e'
>>> p = 'c05694a7af0e01dceb63e5912a415c28d3fc823ca1fd3fa34d41afde03740466'
>>> k = 4 # chosen by fair dice roll, guaranteed to be random
>>> sign(h,p,k)
'3045022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd130220598e37e2e66277ef4d0caf0e32d095debb3c744219508cd394b9747e548662b7'
>>> h = 'f7011e94125b5bba7f62eb25efe23339eb1637539206c87df3ee61b5ec6b023e'
>>> sig = '3045022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd130220598e37e2e66277ef4d0caf0e32d095debb3c744219508cd394b9747e548662b7'
>>> pub = '022587327dabe23ee608d8504d8bc3a341397db1c577370389f94ccd96bb59a077'
>>> verify(h,sig,pub)
True
>>> sig = '3046022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd13022100a671c81d199d8810b2f350f1cd2f6a1fff7268a495f813682b18ea0e7bafde8a'
>>> verify(h,sig,pub)
True
>>> verify(h,sig,uncompress(pub))
True
>>> verify(h,sig,pub,True)
Traceback (most recent call last):
...
TypeError: High S value.
>>> checksigformat('3045022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd130220598e37e2e66277ef4d0caf0e32d095debb3c744219508cd394b9747e548662b7')
True
>>> checksigformat('3046022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd13022100a671c81d199d8810b2f350f1cd2f6a1fff7268a495f813682b18ea0e7bafde8a')
True
>>> checksigformat('3046022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd13022100a671c81d199d8810b2f350f1cd2f6a1fff7268a495f813682b18ea0e7bafde8a', \
True)
False
>>> msg = '"You miss 100% of the shots you don\\'t take. -- Wayne Gretzky"\\n -- Michael Scott'
>>> p = 'c05694a7af0e01dceb63e5912a415c28d3fc823ca1fd3fa34d41afde03740466'
>>> k = 4 # chosen by fair dice roll, guaranteed to be random
>>> signmsg(msg,p,True,k)
'H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0TBxoLMWsgrFmA3CGam/poUZPl/PukXCrYBzuwMW3Tyyo='
>>> msg = '"You miss 100% of the shots you don\\'t take. -- Wayne Gretzky"\\n -- Michael Scott'
>>> sig = 'H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0TBxoLMWsgrFmA3CGam/poUZPl/PukXCrYBzuwMW3Tyyo='
>>> x = verifymsg(msg,sig)
>>> pub = '022587327dabe23ee608d8504d8bc3a341397db1c577370389f94ccd96bb59a077'
>>> x == pub
True
>>> checkmsgsigformat('H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0TBxoLMWsgrFmA3CGam/poUZPl/PukXCrYBzuwMW3Tyyo=')
True
>>> checkmsgsigformat('H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0T+OX0zpTfU6Z/I95lZAWXrSbI3+sK7HVjuJauW2Jidhc=')
True
>>> checkmsgsigformat('H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0T+OX0zpTfU6Z/I95lZAWXrSbI3+sK7HVjuJauW2Jidhc=',True)
False
'''
return
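`signmsg` above returns a 65-byte compact signature in Base64: one header byte encoding the recovery id and whether the public key was compressed, then r and s. A small sketch unpacking the first doctest signature (field layout per the common Bitcoin message-signing convention):

```python
import base64

sig_b64 = ('H+ST2/HBDYDzWB5JBJMLFATMbBOQDuB1hHT6lKvoxM0T'
           'BxoLMWsgrFmA3CGam/poUZPl/PukXCrYBzuwMW3Tyyo=')
raw = base64.b64decode(sig_b64)

header = raw[0]                 # 27 + recid, +4 when the pubkey is compressed
compressed = header >= 31
recid = (header - 27) % 4
r = int.from_bytes(raw[1:33], 'big')
s = int.from_bytes(raw[33:65], 'big')
```

Consistent with the doctest, this signature recovers a compressed public key.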
def rfc6979_generate_k___doctest():
'''
>>> ########
>>> # Test vectors from https://bitcointalk.org/index.php?topic=285142.40
>>> ########
>>> h = sha256(hexlify(b"Satoshi Nakamoto"))
>>> p = dechex(1, 32)
>>> k = generate_k(p, h)
>>> k == 0x8F8A276C19F4149656B280621E358CCE24F5F52542772691EE69063B74F15D15
True
>>> h = sha256(hexlify(b"All those moments will be lost in time, like tears in rain. Time to die..."))
>>> k = generate_k(p, h)
>>> k == 0x38AA22D72376B4DBC472E06C3BA403EE0A394DA63FC58D88686C611ABA98D6B3
True
>>> h = sha256(hexlify(b"Satoshi Nakamoto"))
>>> p = dechex(int(0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140),32)
>>> k = generate_k(p, h)
>>> k == 0x33A19B60E25FB6F4435AF53A3D42D493644827367E6453928554F43E49AA6F90
True
>>> h = sha256(hexlify(b"Alan Turing"))
>>> p = dechex(int(0xf8b8af8ce3c7cca5e300d33939540c10d45ce001b8f252bfbc57ba0342904181),32)
>>> k = generate_k(p, h)
>>> k == 0x525A82B70E67874398067543FD84C83D30C175FDC45FDEEE082FE13B1D7CFDF1
True
>>> h = sha256(hexlify(b"There is a computer disease that anybody who works " \
+ b"with computers knows about. It's a very serious " \
+ b"disease and it interferes completely with the work. " \
+ b"The trouble with computers is that you 'play' with " \
+ b"them!"))
>>> p = dechex(int(0xe91671c46231f833a6406ccbea0e3e392c76c167bac1cb013f6f1013980455c2),32)
>>> k = generate_k(p, h)
>>> k == 0x1F4B84C23A86A221D233F2521BE018D9318639D5B8BBD6374A8A59232D16AD3D
True
>>> p, z1, z2 = wiftohex("KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rFU73sVHnoWn")
>>> h = sha256(hexlify(b"Everything should be made as simple as possible, but not simpler."))
>>> k = generate_k(p, h)
>>> sign(h,p,k)
'3044022033a69cd2065432a30f3d1ce4eb0d59b8ab58c74f27c41a7fdb5696ad4e6108c902206f807982866f785d3f6418d24163ddae117b7db4d5fdf0071de069fa54342262'
>>> p, z1, z2 = wiftohex("L5oLkpV3aqBjhki6LmvChTCV6odsp4SXM6FfU2Gppt5kFLaHLuZ9")
>>> h = sha256(hexlify(b"Equations are more important to me, because " \
+ b"politics is for the present, but an equation " \
+ b"is something for eternity."))
>>> k = generate_k(p, h)
>>> sign(h,p,k)
'3044022054c4a33c6423d689378f160a7ff8b61330444abb58fb470f96ea16d99d4a2fed022007082304410efa6b2943111b6a4e0aaa7b7db55a07e9861d1fb3cb1f421044a5'
>>> p, z1, z2 = wiftohex("L5oLkpV3aqBjhki6LmvChTCV6odsp4SXM6FfU2Gppt5kFLaHLuZ9")
>>> h = sha256(hexlify(b"Not only is the Universe stranger than we think, it is stranger than we can think."))
>>> k = generate_k(p, h)
>>> sign(h,p,k)
'3045022100ff466a9f1b7b273e2f4c3ffe032eb2e814121ed18ef84665d0f515360dab3dd002206fc95f5132e5ecfdc8e5e6e616cc77151455d46ed48f5589b7db7771a332b283'
>>> p = '0000000000000000000000000000000000000000000000000000000000000001'
>>> h = sha256(hexlify(b"How wonderful that we have met with a paradox. " \
+ b"Now we have some hope of making progress."))
>>> k = generate_k(p, h)
>>> sign(h,p,k)
'3045022100c0dafec8251f1d5010289d210232220b03202cba34ec11fec58b3e93a85b91d3022075afdc06b7d6322a590955bf264e7aaa155847f614d80078a90292fe205064d3'
    '''
    return
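The vectors above are the well-known RFC 6979 secp256k1 test set; `generate_k` presumably implements the HMAC-SHA256 DRBG from RFC 6979 §3.2. A compact, self-contained sketch of that procedure (names are mine) which reproduces the first two vectors:

```python
import hashlib
import hmac

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def rfc6979_k(priv, msg):
    h1 = hashlib.sha256(msg).digest()
    x = priv.to_bytes(32, 'big')
    # bits2octets: reduce the hash modulo the group order
    h1 = (int.from_bytes(h1, 'big') % N).to_bytes(32, 'big')
    V, K = b'\x01' * 32, b'\x00' * 32
    K = hmac.new(K, V + b'\x00' + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b'\x01' + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    while True:
        V = hmac.new(K, V, hashlib.sha256).digest()
        k = int.from_bytes(V, 'big')
        if 1 <= k < N:
            return k
        K = hmac.new(K, V + b'\x00', hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
```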
def stealth_py___doctest():
'''
>>> paystealth("vJmvinTgWP1phdFnACjc64U5iMExyv7JcQJVZjMA15MRf2KzmqjSpgDjmj8NxaFfiMBUEjaydmNfLBCcXstVDfkjwRoFQw7rLHWdFk", \
'824dc0ed612deca8664b3d421eaed28827eeb364ae76abc9a5924242ddca290a', 0)
('03e05931191100fa6cd072b1eda63079736464b950d2875e67f2ab2c8af9b07b8d', \
'0600000124025c6fb169b0ff1c95426fa073fadc62f50a6e98482ec8b3f26fb73006009d1c00')
>>> receivestealth('af4afaeb40810e5f8abdbb177c31a2d310913f91cf556f5350bca10cbfe8b9ec', \
'd39758028e201e8edf6d6eec6910ae4038f9b1db3f2d4e2d109ed833be94a026', \
'03b8a715c9432b2b52af9d58aaaf0ccbdefe36d45e158589ecc21ba2f064ebb315')
'6134396c3bc9a56ccaf80cd38728e6d3a7751524246e7924b21b08b0bfcc3cc4'
'''
return
def bip32_py___doctest():
'''
>>> testvector1 = BIP32('000102030405060708090a0b0c0d0e0f')
>>> str(testvector1)
'xprv9s21ZrQH143K3QTDL4LXw2F7HEK3wJUD2nW2nRk4stbPy6cq3jPPqjiChkVvvNKmPGJxWUtg6LnF5kejMRNNU3TGtRBeJgk33yuGBxrMPHi'
>>> testvector1.child("m/0H/1/2H/2/1000000000")
'xprvA41z7zogVVwxVSgdKUHDy1SKmdb533PjDz7J6N6mV6uS3ze1ai8FHa8kmHScGpWmj4WggLyQjgPie1rFSruoUihUZREPSL39UNdE3BBDu76'
>>> BIP32.xprvtoxpub(testvector1.child("m/0H/1/2H/2/1000000000"))
'xpub6H1LXWLaKsWFhvm6RVpEL9P4KfRZSW7abD2ttkWP3SSQvnyA8FSVqNTEcYFgJS2UaFcxupHiYkro49S8yGasTvXEYBVPamhGW6cFJodrTHy'
>>> testvector1.wif
'L52XzL2cMkHxqxBXRyEpnPQZGUs3uKiL3R11XbAdHigRzDozKZeW'
>>> testvector1["m/0H/1/2H/2/1000000000"].addr
'1LZiqrop2HGR4qrH1ULZPyBpU6AUP49Uam'
>>> testvector2 = BIP32('fffcf9f6f3f0edeae7e4e1dedbd8d5d2cfccc9c6c3c0bdbab7b4b1aeaba8a5a29f9c999693908d8a8784817e7b7875726f6c696663605d5a5754514e4b484542')
>>> testvector2.child("m/0/2147483647'/1/2147483646'/2")
'xprvA2nrNbFZABcdryreWet9Ea4LvTJcGsqrMzxHx98MMrotbir7yrKCEXw7nadnHM8Dq38EGfSh6dqA9QWTyefMLEcBYJUuekgW4BYPJcr9E7j'
>>> BIP32(BIP32.xprvtoxpub(testvector2.xprv)).child("m/0/2147483647'/1/2147483646'/2")
Traceback (most recent call last):
...
Exception: Input path contains hardened derivation. Cannot derive hardened child from public master key.
>>> path = 'm/2/4352/0/231/8/0'
>>> x = testvector1.child(path)
>>> x
'xprvA5hf574kbP5WQsvUYw7z8o7Sp5RmABwvw9wNFdeBotkbYfGedxB8UguRcxFPYVXDQzeb5SETXCCP8aXsyP3u2sNb42XdNZVYFUQ2nptCVUQ'
>>> crack_test = BIP32.crack(testvector1.xpub,x,path)
>>> crack_test == testvector1.xprv
True
>>> path = 'm/2/4352/0H/231/8/0'
>>> BIP32.crack(testvector1.xpub,testvector1.child(path),path)
Traceback (most recent call last):
...
Exception: Path input indicates a hardened key. Cannot crack up a level from hardened keys.
'''
return
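Test vector 1 starts from the seed `000102030405060708090a0b0c0d0e0f`; per BIP-32, the master xprv is HMAC-SHA512 of the seed under the key `"Bitcoin seed"`, serialized with the `0488ade4` version bytes and Base58Check-encoded. A hedged sketch (helper names mine) reproducing the first string above:

```python
import hashlib
import hmac

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check(payload):
    chk = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(payload + chk, 'big')
    out = ''
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return out  # xprv payloads start with 0x04, so no leading-zero bytes

def master_xprv(seed_hex):
    digest = hmac.new(b'Bitcoin seed', bytes.fromhex(seed_hex),
                      hashlib.sha512).digest()
    key, chain = digest[:32], digest[32:]
    payload = (bytes.fromhex('0488ade4')   # xprv version bytes
               + b'\x00'                   # depth
               + b'\x00' * 4               # parent fingerprint
               + b'\x00' * 4               # child number
               + chain + b'\x00' + key)
    return b58check(payload)
```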
def bip39_py___doctest():
'''
>>> x = BIP39("00000000000000000000000000000000")
>>> x.en
'abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about'
>>> x.enbip32seed
'5eb00bbddcf069084889a8ab9155568165f5c453ccb85e70811aaed6f6da5fc19a5ac40b389cd370d086206dec8aa6c43daea6690f20ad3d8d48b2d2ce9e38e4'
>>> x.setpassword('TREZOR')
>>> x.enbip32seed
'c55257c360c07c72029aebc1b53c05ed0362ada38ead3e3e9efa3708e53495531f09a6987599d18264c1e1c92f2cf141630c7a3c4ab7c81b2f001698e7463b04'
>>> x.hex
'00000000000000000000000000000000'
'''
return
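`enbip32seed` matches the standard BIP-39 derivation: PBKDF2-HMAC-SHA512 over the mnemonic with salt `'mnemonic' + passphrase` and 2048 iterations. A one-function sketch reproducing both seeds from the doctest:

```python
import hashlib

def bip39_seed(mnemonic, passphrase=''):
    # BIP-39 mnemonic-to-seed: PBKDF2-HMAC-SHA512, 2048 rounds,
    # salt = 'mnemonic' + passphrase (ASCII here, so NFKD is a no-op)
    return hashlib.pbkdf2_hmac('sha512', mnemonic.encode('utf-8'),
                               b'mnemonic' + passphrase.encode('utf-8'),
                               2048).hex()

words = 'abandon ' * 11 + 'about'
```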
def electrum1_py___doctest():
'''
>>> x = Electrum1('school eventually space front trip delicate drift score surely nine serve again')
>>> x.words
'school eventually space front trip delicate drift score surely nine serve again'
>>> x.seed
'950421e37c371408a14aeb9164d7a559'
>>> x.seed == Electrum1.wordstohex(x.words)
True
>>> x.mpriv
'9f8d1ab5da1f3133a87a6dff6daa1f8905906187ed72b6476fdc8a9a9aec68d5'
>>> x.mpub
'887867b2914527765faed6ac3d7fd1a4c373fda4a7d6350ac9adabc55befe34a50fc0ada9d1a439650653d445c5aad27d52d153cea3cf375578646a2b9820c58'
>>> x[4.0][0]
'5KJZT97WqVvLXwbbDyaVkGAcjK2AnMWBBy979BhWYbQ2yP7uJvb'
>>> x.mpriv == Electrum1.crack(x.mpub,x[4.0][0])
True
'''
return
def electrum2_py___doctest():
'''
>>> x = Electrum2('ride win pass silver noble position because balcony unveil perfect keen pyramid abuse')
>>> str(x)
'ride win pass silver noble position because balcony unveil perfect keen pyramid abuse'
>>> x.bip32xpub
'xpub661MyMwAqRbcGEHVXvE19EHH5Bpe7S4YFYXKPNAvCZ982MA1MyzkSAPSTmxWKqHjPsht3BDG2DxBfhiAKwrVzJFzVCTSovCEVXst6LPamzv'
>>> x.hex
'9af0f368c77311c27aa1cadc8d417ed5cb'
>>> x[3][0]
'Kwk1qQYC1NQkYrv2sgedWGEvSggKWMRbwrRTRCfVSbKbCrd2WfmL'
>>> x[1.]
('L2cthViuxbGEMiiBcxhAvgtusg13mSXT94ZHv2WuYfmDZbu3q4dx', '02fa3aab7ebc4a45f4e2bf428b113751f0aa31b39110c2f039c46b4da39fa0477b', '1NSuzNYZJBU9G91HdQw9szoAiGuZJyXRWj')
>>> x['m/4/8h/0'][2]
'18GrpcrjMDTnNtbbgNUuphNS2DhC9YMhPC'
>>> y = Electrum2.crack(x.bip32xpub,x[3][0])
>>> y == x.bip32xprv
True
>>> Electrum2.validate('ride win pass silver noble position because balcony unveil perfect keen pyramid abuse')
True
>>> Electrum2.validate('ride win pass silver noble position because balcony unveil perfect keen pyramid pyramid')
False
>>> Electrum2('ride win pass silver noble position because balcony unveil perfect keen pyramid pyramid')
Traceback (most recent call last):
...
Exception: Word list invalid.
'''
return
if __name__ == "__main__":
import doctest
doctest.testmod()
# ---- virgo/__init__.py (terrykong/pyvirgo) ----
from .load import load, loads
# ---- Pendulum/2d_simulation.py (garlicbutter/Jonathan-Tom) ----
import numpy as np
# parameters
L1, L2 = 0.4, 0.3 # meters
m1, m2, m_end = 8, 5, 4 # kg
K_e = 10**5 # N/m
# F = Kx + Bx' + Mx''
K_imp = [[62500.0, 0.0],[0.0, 62500.0]]
B_imp = [[3500.0, 0.0],[0.0, 3500.0]]
M_imp = [[100.0, 0.0],[0.0, 100.0]]
# PD controller
P_PBIC = [[150.0, 0.0] ,[0.0, 170.0]]
D_PBIC = [[18.0, 0.0] ,[0.0, 10.0]]
P_IMIC = [[0.5, 0.0] ,[0.0, 0.5]]
D_IMIC = [[0.1, 0.0] ,[0.0, 0.1]]
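The matrices above parametrize a Cartesian impedance law F = Kx + Bx' + Mx''; evaluating it is a single matrix expression. A short sketch with an illustrative displacement (the inputs below are made up, not from any controller run):

```python
import numpy as np

K = np.array([[62500.0, 0.0], [0.0, 62500.0]])
B = np.array([[3500.0, 0.0], [0.0, 3500.0]])
M = np.array([[100.0, 0.0], [0.0, 100.0]])

def impedance_force(x, xd, xdd):
    """F = K x + B x' + M x'' for a planar (2-DOF) end effector."""
    return K @ x + B @ xd + M @ xdd

F = impedance_force(np.array([0.01, 0.0]),   # 1 cm displacement in x
                    np.array([0.0, 0.0]),    # at rest
                    np.array([0.0, 0.0]))
```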
# ---- utils/models/__init__.py (wufanyou/growth-ring-detection) ----
from .FPNV1 import FPN as FPN
from .FPNV2 import FPN as FPNV2
from .FPNV3 import FPN as FPNV3
from .FPNV4 import FPN as FPNV4
from .FPNV5 import FPN as FPNV5
# ---- dev_up/categories/warface.py (lordralinc/dev_up) ----
import typing as ty
from dev_up import models
from dev_up.categories.base import BaseAPICategories
class WarfaceAPICategories(BaseAPICategories):
def get_info(
self,
nick: str,
type: ty.Union[models.WarfaceGetInfoTypeEnum, str] = models.WarfaceGetInfoTypeEnum.STATISTICS,
key: str = None,
**kwargs
) -> models.WarfaceGetInfo:
"""Получает информацию об игроке Warface
:param nick: Ник игрока
:param type: Тип инфромации, defaults to models.WarfaceGetInfoTypeEnum.STATISTICS
:param key: Ключ доступа, defaults to None
:return: Информация об игроке. response зависит от переданного type
"""
return self.api.make_request(
method='warface.getInfo',
data=dict(nick=nick, type=models.WarfaceGetInfoTypeEnum(type).value, key=key, **kwargs),
dataclass=models.WarfaceGetInfo
)
async def get_info_async(
self,
nick: str,
type: ty.Union[models.WarfaceGetInfoTypeEnum, str] = models.WarfaceGetInfoTypeEnum.STATISTICS,
key: str = None,
**kwargs
) -> models.WarfaceGetInfo:
"""Получает информацию об игроке Warface
:param nick: Ник игрока
:param type: Тип инфромации, defaults to models.WarfaceGetInfoTypeEnum.STATISTICS
:param key: Ключ доступа, defaults to None
:return: Информация об игроке. response зависит от переданного type
"""
return await self.api.make_request_async(
method='warface.getInfo',
data=dict(nick=nick, type=models.WarfaceGetInfoTypeEnum(type).value, key=key, **kwargs),
dataclass=models.WarfaceGetInfo
        )
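`models.WarfaceGetInfoTypeEnum(type).value` in both methods accepts either an enum member or its raw string and normalizes it to the wire value; that is plain `enum.Enum` behavior. A stand-alone sketch (the member names below are stand-ins, not the real `dev_up` enum):

```python
from enum import Enum

class GetInfoType(Enum):          # stand-in for models.WarfaceGetInfoTypeEnum
    STATISTICS = 'statistics'
    ACHIEVEMENTS = 'achievements'

def normalize(t):
    # Enum(member) returns the member itself; Enum('statistics') looks the
    # member up by value, so both call styles collapse to the same string
    return GetInfoType(t).value
```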
# ---- Python/libraries/recognizers-number/recognizers_number/number/japanese/__init__.py (Recognizers-Text) ----
from .extractors import *
from .parsers import *
# ---- paypalhttp/__init__.py (paypalhttp_python) ----
from paypalhttp.environment import Environment
from paypalhttp.file import File
from paypalhttp.http_client import HttpClient
from paypalhttp.http_response import HttpResponse
from paypalhttp.http_error import HttpError
from paypalhttp.serializers import *
# ---- tests/public/calculate_sleep_amount_test.py (Tesshin/CS-Pound) ----
from constants import Variables
from library import calculate_sleep_amount
class TestClass:
def test_on_cooldown_less_than_1_hour(self):
Variables.cooldown = True
assert calculate_sleep_amount(1) == (-59, 60, True)
assert calculate_sleep_amount(3599) == (3539, 60, True)
assert calculate_sleep_amount(3600) == (3540, 60, True)
assert calculate_sleep_amount(0) == (0, 3600, False)
def test_off_cooldown_no_time(self):
Variables.cooldown = False
assert calculate_sleep_amount(-1) == (-1, 3600, False)
assert calculate_sleep_amount(0) == (0, 3600, False)
def test_off_cooldown_more_than_2_hours(self):
Variables.cooldown = False
assert calculate_sleep_amount(7201) == (7201, 1, False)
assert calculate_sleep_amount(7200) == (7200, 0, False)
def test_off_cooldown_between_1_and_2_hours(self):
Variables.cooldown = False
assert calculate_sleep_amount(3601) == (3600, 1, False)
Variables.cooldown = False
assert calculate_sleep_amount(3600) == (3600, 0, False)
def test_off_cooldown_less_than_1_hour(self):
Variables.cooldown = False
assert calculate_sleep_amount(3599) == (3599, 0, False)
Variables.cooldown = False
assert calculate_sleep_amount(1) == (1, 0, False)
def test_off_cooldown_10_hours(self):
Variables.cooldown = False
assert calculate_sleep_amount(36000) == (36000, 3600, False)
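`calculate_sleep_amount` itself lives in the `library` module and is not shown here; from the assertions it returns `(time_left, sleep_seconds, still_on_cooldown)`. A hypothetical reconstruction consistent with every case above (the real implementation may differ):

```python
class Variables:                    # stand-in for constants.Variables
    cooldown = False

def calculate_sleep_amount(t):
    if Variables.cooldown and t > 0:
        return (t - 60, 60, True)   # on cooldown: poll again in 60 s
    if t <= 0:
        return (t, 3600, False)     # nothing left: wait a full hour
    if t <= 3600:
        return (t, 0, False)        # under an hour: act immediately
    if t < 7200:
        return (3600, t - 3600, False)
    return (t, min(t - 7200, 3600), False)
```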
# ---- hypernlp/nlp/data_process/eda/__init__.py (DataCanvasIO/HyperNLP) ----
def eda_model():
    return None
# ---- QChemTool/Polarizable_atoms/Polarization_module_HeteroDimer.py (slamavl/QChemTool) ----
# -*- coding: utf-8 -*-
"""
Created on Tue Jan 31 14:33:56 2017
@author: Vladislav Sláma
"""
import numpy as np
from copy import deepcopy
from scipy.spatial.distance import pdist,squareform
import os
from ..QuantumChem.Classes.structure import Structure
from ..QuantumChem.calc import identify_molecule
from ..QuantumChem.read_mine import read_TrEsp_charges
from ..QuantumChem.interaction import charge_charge
from ..QuantumChem.positioningTools import project_on_plane, CenterMolecule, fit_plane
from ..General.units import conversion_facs_energy, conversion_facs_mass
from .Electrostatics_module import PrepareMolecule_1Def as ElStat_PrepareMolecule_1Def
from .Electrostatics_module import PrepareMolecule_2Def as ElStat_PrepareMolecule_2Def
from ..General.Potential import potential_charge, potential_dipole
from ..QuantumChem.Classes.general import Energy as EnergyClass
from ..General.UnitsManager import energy_units
from ..QuantumChem.calc import GuessBonds
from ..QuantumChem.output import OutputMathematica
debug=False
#==============================================================================
# Definition of class for polarizable environment
#==============================================================================
class Dielectric:
''' Class managing dielectric properties of the material
Parameters
----------
    coor : numpy.array of real (dimension Nx3) where N is number of atoms
        atomic coordinates of the polarizable atoms
    polar : dictionary of numpy.arrays of real (dimension Nx3x3)
        polar['AlphaE'], polar['Alpha_E'] and polar['BetaEE'] hold the
        atomic polarizability tensors for every atom
charge : numpy.array or list of real (dimension N)
charges on individual atoms (initial charges)
dipole : numpy.array of real (dimension Nx3)
dipole on individual atoms (initial dipole)
'''
def __init__(self,coor,charge,dipole,AlphaE,Alpha_E,BetaEE,V):
self.coor=np.copy(coor)
self.polar={}
self.polar['AlphaE']=AlphaE
self.polar['Alpha_E']=Alpha_E
self.polar['BetaEE']=BetaEE
self.VinterFG=V
self.charge=np.copy(charge)
self.dipole=np.copy(dipole)
self.Nat=len(coor)
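The constructor stores, per atom, a position, a permanent charge/dipole and three polarizability tensors; the first-order response they enable is p_i = alpha_i . E(r_i), with E built from the fixed point charges as in `_test_2nd_order` below. A stripped-down numpy sketch with toy numbers (atomic units):

```python
import numpy as np

coor = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
charge = np.array([0.5, -0.5, 0.0])
alpha = np.array([np.eye(3)] * 3)       # isotropic unit polarizability

# E(r_i) = sum_j q_j (r_i - r_j) / |r_i - r_j|^3, no self-interaction
E = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i == j or charge[j] == 0.0:
            continue
        r = coor[i] - coor[j]
        E[i] += charge[j] * r / np.linalg.norm(r) ** 3

p = np.einsum('nij,nj->ni', alpha, E)   # p_i = alpha_i . E(r_i)
```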
def assign_polar(self,pol_type,**kwargs):
''' For now assignment is working only for fluorographene carbons with
type 'CF' and defect carbons with type 'CD'
Parameters
----------
pol_type : numpy.array or list of str (dimension N)
            Polarization atomic types for assignment of polarizabilities - different
            from plain atomic types - for example the group C-F will be treated as a
            single atom with type pol_type='CF'.
**kwargs : dict
            dictionary with three matrixes for every polarizable atom type. For
            example: kwargs['PolValues']['CF'][0] is the Alpha(E) polarizability
            matrix for atom type 'CF', [1] corresponds to the Alpha(-E) matrix
            and [2] to Beta(E,E)
Returns
-------
        polar : dictionary of numpy.arrays of real (dimension Nx3x3)
            Polarizability matrixes ('AlphaE', 'Alpha_E' and 'BetaEE') for
            every atom, e.g. 'CF'=1.03595 and 'CD'=1.4
'''
ZeroM=np.zeros((3,3),dtype='f8')
PolValues={'CF': [ZeroM,ZeroM,ZeroM],
'CD': [ZeroM,ZeroM,ZeroM],'C': [ZeroM,ZeroM,ZeroM]}
for key in list(kwargs.keys()):
if key=='PolValues':
PolValues=kwargs['PolValues']
if self.Nat!=len(pol_type):
raise IOError('Polarization type vector must have the same length as number of atoms')
polar={}
polar['AlphaE']=np.zeros((len(pol_type),3,3),dtype='f8')
polar['Alpha_E']=np.zeros((len(pol_type),3,3),dtype='f8')
polar['BetaEE']=np.zeros((len(pol_type),3,3),dtype='f8')
for ii in range(len(pol_type)):
polar['AlphaE'][ii,:,:]=PolValues[pol_type[ii]][0]
polar['Alpha_E'][ii,:,:]=PolValues[pol_type[ii]][1]
polar['BetaEE'][ii,:,:]=PolValues[pol_type[ii]][2]
return polar
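`assign_polar` above is a per-atom table lookup mapping each polarization type to three 3x3 tensors. A stand-alone sketch of the same stacking, using the isotropic default values the docstring mentions ('CF'=1.03595, 'CD'=1.4; applying them to both Alpha tensors is my simplification):

```python
import numpy as np

PolValues = {'CF': [np.eye(3) * 1.03595, np.eye(3) * 1.03595, np.zeros((3, 3))],
             'CD': [np.eye(3) * 1.4,     np.eye(3) * 1.4,     np.zeros((3, 3))]}
pol_type = ['CF', 'CF', 'CD']

# stack one (Nat, 3, 3) array per polarizability kind, exactly as the method does
polar = {key: np.array([PolValues[t][k] for t in pol_type])
         for k, key in enumerate(('AlphaE', 'Alpha_E', 'BetaEE'))}
```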
def _swap_atoms(self,index1,index2):
        ''' Function which exchanges polarization properties between atoms defined
        by index1 and atoms defined by index2
        
        Parameters
        ----------
        index1 : list or numpy.array of integer (dimension Natoms_change)
            Indexes of the first set of atoms which we would like to swap
        index2 : list or numpy.array of integer (dimension Natoms_change)
            Indexes of the second set of atoms which we would like to swap
        '''
if len(index1)!=len(index2):
raise IOError('You can swap values only between same number of atoms')
for ii in range(len(index1)):
# swap charges
self.charge[index1[ii]],self.charge[index2[ii]] = self.charge[index2[ii]],self.charge[index1[ii]]
# swap dipoles
self.dipole[index1[ii],:],self.dipole[index2[ii],:] = self.dipole[index2[ii],:],self.dipole[index1[ii],:]
# swap polarizabilities
self.polar['AlphaE'][index1[ii],:,:],self.polar['AlphaE'][index2[ii],:,:] = self.polar['AlphaE'][index2[ii],:,:],self.polar['AlphaE'][index1[ii],:,:]
self.polar['Alpha_E'][index1[ii],:,:],self.polar['Alpha_E'][index2[ii],:,:] = self.polar['Alpha_E'][index2[ii],:,:],self.polar['Alpha_E'][index1[ii],:,:]
self.polar['BetaEE'][index1[ii],:,:],self.polar['BetaEE'][index2[ii],:,:] = self.polar['BetaEE'][index2[ii],:,:],self.polar['BetaEE'][index1[ii],:,:]
def _test_2nd_order(self,typ,Estatic=np.zeros(3,dtype='f8'),eps=1):
''' Function for testing of calculation with induced dipoles. Calculate
induced dipoles in second order (by induced dipoles). Combined with
calc_dipoles_All(typ,NN=1) we should obtain the same dipoles as with
calc_dipoles_All(typ,NN=2)
Parameters
----------
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for calculation of induced
atomic dipoles
Estatic : numpy.array of real (dimension 3) (optional - init=np.zeros(3,dtype='f8'))
            External homogeneous electric field vector (orientation and strength)
in ATOMIC UNITS. By default there is no electric field
eps : real (optional - init=1.0)
            Relative permittivity of the medium where the dipoles and the
            molecule are present (by default vacuum with relative permittivity 1.0)
Notes
----------
**OK. Definition of Tensor T is right**
'''
debug=False
R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
for jj in range(ii+1,self.Nat):
R[ii,jj,:]=self.coor[ii]-self.coor[jj]
R[jj,ii,:]=-R[ii,jj,:]
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
unit=np.diag([1]*self.Nat)
        RR=RR+unit  # only for avoiding division by 0 for diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
# definition of T tensor
T=np.zeros((self.Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T[:,:,ii,ii]=1/RR3[:,:]-3*np.power(R[:,:,ii],2)/RR5
for jj in range(ii+1,3):
T[:,:,ii,jj] = -3*R[:,:,ii]*R[:,:,jj]/RR5
T[:,:,jj,ii] = T[:,:,ii,jj]
for ii in range(self.Nat):
T[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# calculating induced dipoles in second order
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
if debug and typ=='AlphaE':
from ..General.Potential import ElField_dipole
# Test first order induced dipoles
self.dipole=np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1)
if np.allclose(P,self.dipole):
print('First order dipoles are the same.')
else:
print('Problem with first order induced dipoles.')
# test induced electric field
Elfield=np.zeros(3,dtype='f8')
for ii in range(3):
Elfield[ii]=np.dot(-T[0,1,ii,:],P[1,:])
print('Electric field at atom 0 induced by dipole at position 1 wT:',Elfield)
Elfield=np.zeros(3,dtype='f8')
Elfield=ElField_dipole(P[1,:],R[0,1,:])
print('Electric field at atom 0 induced by dipole at position 1 woT:',Elfield)
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
# -P should be 2nd order induced dipoles
self.dipole+=(-P)
if debug:
print('Dipole sum:',np.sum(self.dipole,axis=0))
def _dRcB_BpA(self,index2,charge2,typ,c,eps=1):
''' Function that calculates the derivative of the interaction energy between
defect A and defect B (defined by index2):
d/dRc^{(B)}[Sum_{n} E^{(B)}(Rn).(1/2*Polarizability(n)).E^{(A)}(Rn)]
Parameters
----------
index2 : list or numpy.array of integer (dimension N_def_atoms)
Atomic indices of the atoms of defect B (the defect with zero charges)
charge2 : numpy array of real (dimension N_def_atoms)
Vector of transition charges for every atom of defect B (listed in ``index2``)
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for the calculation of induced
atomic dipoles
c : integer
Atomic index of the atom with respect to whose displacement the
derivative is calculated
eps : real (optional - init=1.0)
Relative permittivity of the medium in which the dipoles and the
molecule are embedded (by default vacuum, relative permittivity 1.0)
Notes
----------
In the initial structure, transition charges are placed only on the atoms of
the first defect (defect A, defined by index1) and zero charges are placed
on the second defect (defect B, defined by index2)
'''
# check if atom with index c is in defect B
if c in index2:
c_indx=np.where(index2==c)[0][0]
else:
raise IOError('Defined index c is not in defect B')
R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
for jj in range(ii+1,self.Nat):
R[ii,jj,:]=self.coor[ii]-self.coor[jj]
R[jj,ii,:]=-R[ii,jj,:]
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
unit=np.diag([1]*self.Nat)
RR=RR+unit # only to avoid division by 0 for the diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
# definition of T tensor
T=np.zeros((self.Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T[:,:,ii,ii]=1/RR3[:,:]-3*np.power(R[:,:,ii],2)/RR5
for jj in range(ii+1,3):
T[:,:,ii,jj] = -3*R[:,:,ii]*R[:,:,jj]/RR5
T[:,:,jj,ii] = T[:,:,ii,jj]
for ii in range(self.Nat):
T[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# calculate the derivative with respect to displacement of the defect B atom
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# Calculation of electric field generated by defect A
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# calculate the dipoles induced by defect A
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
# TODO: check whether it should not be res = -charge2[c_indx]*ELFV[c,:]
res=charge2[c_indx]*ELFV[c,:]
return res
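# The T tensor assembled above with explicit loops over Cartesian components
# can also be built in one vectorized step. A minimal standalone sketch
# (hypothetical helper, not part of this class) of the same construction,
# T[i,j,a,b] = delta_ab/|r|^3 - 3 r_a r_b/|r|^5 with zeroed self-interaction:

```python
import numpy as np

def dipole_tensor(coor):
    # T[i,j,a,b] = delta_ab/|r|^3 - 3 r_a r_b/|r|^5 with r = coor[i]-coor[j];
    # the diagonal of the distance matrix is shifted by 1 only to avoid
    # division by zero, and the self-interaction blocks T[i,i] are zeroed.
    N = coor.shape[0]
    R = coor[:, None, :] - coor[None, :, :]
    RR = np.linalg.norm(R, axis=2) + np.eye(N)
    T = (np.eye(3)[None, None, :, :] / RR[..., None, None]**3
         - 3.0 * R[..., :, None] * R[..., None, :] / RR[..., None, None]**5)
    T[np.arange(N), np.arange(N)] = 0.0
    return T
```

# For two atoms separated by d along z this yields T[0,1,0,0] = 1/d**3 and
# T[0,1,2,2] = -2/d**3, matching the loop-based construction above.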
def _dR_BpA(self,index1,index2,charge1,charge2,typ,eps=1):
''' Function that calculates the derivative of the interaction energy between
defect A and defect B, defined by the index arrays: \n
d/dR[Sum_{n} E^{(B)}(Rn).(1/2*Polarizability(n)).E^{(A)}(Rn)] \n
which is minus the interaction energy (for the derivative of the energy,
i.e. of the Hamiltonian, the negative of the result has to be taken)
Parameters
----------
index1 : list or numpy.array of integer (dimension N_def1_atoms)
Atomic indices of the atoms of the first defect (defect A)
index2 : list or numpy.array of integer (dimension N_def2_atoms)
Atomic indices of the atoms of the second defect (defect B)
charge1 : numpy array of real (dimension N_def1_atoms)
Vector of transition charges for every atom of defect A (listed in ``index1``)
charge2 : numpy array of real (dimension N_def2_atoms)
Vector of transition charges for every atom of defect B (listed in ``index2``)
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for the calculation of induced
atomic dipoles
eps : real (optional - init=1.0)
Relative permittivity of the medium in which the dipoles and the
molecule are embedded (by default vacuum, relative permittivity 1.0)
Notes
----------
**After exiting the function the transition charges are the same as at the
beginning.**
For the calculation of the derivative of ApA use ``_dR_BpA(index1,index1,
charge1,charge1,typ,eps=1)``, where the charges in the molecule Dielectric
class have to be nonzero for the defect with ``index1`` **and zero for the
other defect if present**.
'''
# TODO: Add possibility to read charges from self.charges: charge1 = self.charges[index1] and charge2 = self.charges[index2]
# TODO: Read polarizabilities on the defects and, when setting charges to zero, also set the polarizabilities to zero
charge1_orig = self.charge[index1]
charge2_orig = self.charge[index2]
res=np.zeros((self.Nat,3),dtype='f8')
# calculation of tensors with interatomic distances
R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
for jj in range(ii+1,self.Nat):
R[ii,jj,:]=self.coor[ii]-self.coor[jj]
R[jj,ii,:]=-R[ii,jj,:]
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
unit=np.diag([1]*self.Nat)
RR=RR+unit # only to avoid division by 0 for the diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
# definition of T tensor
T=np.zeros((self.Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T[:,:,ii,ii]=1/RR3[:,:]-3*np.power(R[:,:,ii],2)/RR5
for jj in range(ii+1,3):
T[:,:,ii,jj] = -3*R[:,:,ii]*R[:,:,jj]/RR5
T[:,:,jj,ii] = T[:,:,ii,jj]
for ii in range(self.Nat):
T[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# Place transition charges only on the first defect (defect A)
if np.array_equal(index1,index2): # plain == is ambiguous for numpy arrays
self.charge[index1] = charge1
# The following applies only to polarization by the transition density, not to polarization by ground-state charges and interaction with excited-state ones
# if (charge1==charge2).all():
# self.charge[index1] = charge1
# else:
# raise Warning("For calculation of d_ApA same charges have to be inputed.")
else:
self.charge[index1] = charge1
self.charge[index2] = 0.0
# calculate the derivative with respect to displacement of the defect B atoms
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# calculate electric field generated by the first defect (defect A)
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# calculate the dipoles induced by the first defect (defect A)
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(len(index2)):
res[index2[ii],:] -= charge2[ii]*ELFV[index2[ii],:]
# calculate the derivative with respect to displacement of the environment atoms
for ii in range(self.Nat):
if not (ii in index1 or ii in index2):
for jj in range(len(index2)):
res[ii,:]+=charge2[jj]*np.dot(T[index2[jj],ii,:,:],P[ii,:])
# # swap polarization parameters from defect A to defect B
# self._swap_atoms(index1,index2)
# Place transition charges only on the second defect (defect B)
if np.array_equal(index1,index2): # plain == is ambiguous for numpy arrays
self.charge[index2] = charge2
# see the previous case
# if (charge1==charge2).all():
# self.charge[index2] = charge2
# else:
# raise Warning("For calculation of d_ApA same charges have to be inputed.")
else:
self.charge[index1] = 0.0
self.charge[index2] = charge2
# calculate the derivative with respect to displacement of the defect A atoms
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# Calculate electric field generated by the second defect (defect B)
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# Calculate the dipoles induced by the second defect (defect B)
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(len(index1)):
res[index1[ii],:] -= charge1[ii]*ELFV[index1[ii],:]
# calculate the derivative with respect to displacement of the environment atoms
for ii in range(self.Nat):
if not (ii in index1 or ii in index2):
for jj in range(len(index1)):
res[ii,:]+=charge1[jj]*np.dot(T[index1[jj],ii,:,:],P[ii,:])
# # swap polarization parameters back to the original position
# self._swap_atoms(index1,index2)
# Place transition charges back on both defects
self.charge[index1] = charge1_orig
self.charge[index2] = charge2_orig
return res.reshape(3*self.Nat)
def _dR_BppA(self,index1,index2,charge1,charge2,typ,eps=1):
''' Function that calculates the derivative of the second-order interaction
energy between defect A and defect B, defined by index1 and index2, respectively: \n
``d/dR[Sum_{n} E^{(B)}(Rn).(1/2*Polarizability(n)). Sum_{n'} T(Rn-Rn').(1/2*Polarizability(n')).E^{(A)}(Rn)]`` \n
which is minus the interaction energy (for the derivative of the energy,
i.e. of the Hamiltonian, the negative of the result has to be taken)
Parameters
----------
index1 : list or numpy.array of integer (dimension N_def_atoms)
Atomic indices of the atoms of the first defect (defect A)
index2 : list or numpy.array of integer (dimension N_def_atoms)
Atomic indices of the atoms of the second defect (defect B)
charge1 : numpy array of real (dimension N_def1_atoms)
Vector of transition charges for every atom of defect A (listed in ``index1``)
charge2 : numpy array of real (dimension N_def2_atoms)
Vector of transition charges for every atom of defect B (listed in ``index2``)
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for the calculation of induced
atomic dipoles
eps : real (optional - init=1.0)
Relative permittivity of the medium in which the dipoles and the
molecule are embedded (by default vacuum, relative permittivity 1.0)
Notes
----------
**After exiting the function the transition charges are placed on both
defects, not only on the first one.**
For the calculation of the derivative of AppA use ``_dR_BppA(index1,index1,
charge1,charge1,typ,eps=1)``, where the charges in the molecule Dielectric
class have to be nonzero for the defect with ``index1`` **and zero for the
other defect if present**.
'''
# TODO: Add possibility to read charges from self.charges: charge1 = self.charges[index1] and charge2 = self.charges[index2]
# TODO: Read polarizabilities on the defects and, when setting charges to zero, also set the polarizabilities to zero
res=np.zeros((self.Nat,3),dtype='f8')
# calculation of tensors with interatomic distances
R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
for jj in range(ii+1,self.Nat):
R[ii,jj,:]=self.coor[ii]-self.coor[jj]
R[jj,ii,:]=-R[ii,jj,:]
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
unit=np.diag([1]*self.Nat)
RR=RR+unit # only to avoid division by 0 for the diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
RR7=np.power(RR,7)
# definition of T tensor
T=np.zeros((self.Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T[:,:,ii,ii]=1/RR3[:,:]-3*np.power(R[:,:,ii],2)/RR5
for jj in range(ii+1,3):
T[:,:,ii,jj] = -3*R[:,:,ii]*R[:,:,jj]/RR5
T[:,:,jj,ii] = T[:,:,ii,jj]
for ii in range(self.Nat):
T[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# definition of S tensor
S=np.zeros((self.Nat,self.Nat,3,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
for jj in range(3):
for kk in range(3):
S[:,:,ii,jj,kk]=-5*R[:,:,ii]*R[:,:,jj]*R[:,:,kk]/RR7
for ii in range(3):
for jj in range(3):
S[:,:,ii,ii,jj]+=R[:,:,jj]/RR5
S[:,:,ii,jj,ii]+=R[:,:,jj]/RR5
S[:,:,jj,ii,ii]+=R[:,:,jj]/RR5
for ii in range(self.Nat):
S[ii,ii,:,:,:]=0.0 # no self interaction of atom i with atom i
# Place transition charges only on the first defect (defect A)
if np.array_equal(index1,index2): # plain == is ambiguous for numpy arrays
if (charge1==charge2).all():
self.charge[index1] = charge1
else:
raise Warning("For calculation of d_ApA same charges have to be inputed.")
else:
self.charge[index1] = charge1
self.charge[index2] = 0.0
# calculate the derivative with respect to displacement of the defect B atoms
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# Calculate electric field generated by the first defect (defect A)
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# Calculate the dipoles induced by the first defect (defect A)
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
PA=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
PA[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
for rep in range(2):
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(len(index2)):
res[index2[ii],:] += charge2[ii]*ELFV[index2[ii],:]
# calculate the derivative with respect to displacement of the environment atoms
for ii in range(self.Nat):
if not (ii in index1 or ii in index2):
for jj in range(len(index2)):
res[ii,:] -= charge2[jj]*np.dot(T[index2[jj],ii,:,:],P[ii,:])
# # swap polarization parameters from defect A to defect B
# self._swap_atoms(index1,index2)
# Place transition charges only on the second defect (defect B)
if np.array_equal(index1,index2): # plain == is ambiguous for numpy arrays
if (charge1==charge2).all():
self.charge[index2] = charge2
else:
raise Warning("For calculation of d_ApA same charges have to be inputed.")
else:
self.charge[index1] = 0.0
self.charge[index2] = charge2
# calculate the derivative with respect to displacement of the defect A atoms
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# Calculate electric field generated by the second defect (defect B)
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# Calculate the dipoles induced by the second defect (defect B)
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
PB=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
PB[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
for rep in range(2):
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(len(index1)):
res[index1[ii],:] += charge1[ii]*ELFV[index1[ii],:]
# calculate the derivative with respect to displacement of the environment atoms
for ii in range(self.Nat):
if not (ii in index1 or ii in index2):
for jj in range(len(index1)):
res[ii,:] -= charge1[jj]*np.dot(T[index1[jj],ii,:,:],P[ii,:])
# + contribution from S tensor
for nn in range(self.Nat):
for ii in range(3):
for kk in range(3):
res[nn,:]+=3*PB[nn,ii]*np.dot(S[nn,:,ii,:,kk].T,PA[:,kk])
res[nn,:]+=3*PA[nn,ii]*np.dot(S[nn,:,ii,:,kk].T,PB[:,kk])
# # swap polarization parameters back to the original position
# self._swap_atoms(index1,index2)
# Place transition charges back on both defects
self.charge[index1] = charge1
self.charge[index2] = charge2
return res.reshape(3*self.Nat)
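# The S tensor assembled above is, up to a factor, the gradient of the dipole
# tensor T: dT_ab/dr_c = -3*S_abc. A small standalone finite-difference check
# of that identity (hypothetical helpers, using the conventions of this file):

```python
import numpy as np

def T_of(r):
    # T_ab(r) = delta_ab/|r|^3 - 3 r_a r_b/|r|^5
    rr = np.linalg.norm(r)
    return np.eye(3) / rr**3 - 3.0 * np.outer(r, r) / rr**5

def S_of(r):
    # S_abc(r) = -5 r_a r_b r_c/|r|^7
    #            + (d_ab r_c + d_ac r_b + d_bc r_a)/|r|^5
    rr = np.linalg.norm(r)
    S = -5.0 * np.einsum('a,b,c->abc', r, r, r) / rr**7
    for a in range(3):
        S[a, a, :] += r / rr**5
        S[a, :, a] += r / rr**5
        S[:, a, a] += r / rr**5
    return S
```

# Central differences of T_of along each Cartesian axis should reproduce
# -3*S_of component by component.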
def _dR_ApEnv(self,index1,charge1,env_coor,env_charge,typ,eps=1):
''' Function that calculates the derivative of the 'interaction energy' between
defect A (defined by index1) and the environment atoms: \n
d/dR[Sum_{n} E^{(A)}(Rn).(1/2*Polarizability(n)).E^{(env)}(Rn)] \n
which is minus the interaction energy (for the derivative of the energy,
i.e. of the Hamiltonian, the negative of the result has to be taken)
Parameters
----------
index1 : list or numpy.array of integer (dimension N_def1_atoms)
Atomic indices of the atoms of the first defect
charge1 : numpy array of real (dimension N_def1_atoms)
Vector of charges (transition, excited, ground, ...) for every atom of
defect A (listed in ``index1``)
env_coor : numpy.array of real (dimension Nat x 3)
Coordinates of every environment atom.
env_charge : numpy array or list of real (dimension Nat)
Atomic ESP charges for every atom in the environment
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for the calculation of induced
atomic dipoles
eps : real (optional - init=1.0)
Relative permittivity of the medium in which the dipoles and the
molecule are embedded (by default vacuum, relative permittivity 1.0)
Returns
----------
res : numpy array of real (dimension 3*Nat)
Derivative with respect to displacement of the polarizable atoms
res_env : numpy array of real (dimension 3*Nat)
Derivative with respect to displacement of the environment atoms
Notes
----------
**After exiting the function the transition charges are the same as at the
beginning.**
'''
# TODO: Add possibility to read charges from self.charges: charge1 = self.charges[index1] and charge2 = self.charges[index2]
# TODO: Read polarizabilities on the defects and, when setting charges to zero, also set the polarizabilities to zero
charge1_orig = self.charge[index1]
charge_env_orig = env_charge[index1]
env_Nat = env_coor.shape[0]
res=np.zeros((self.Nat,3),dtype='f8')
res_env=np.zeros((env_Nat,3),dtype='f8')
MASK = np.zeros(self.Nat,dtype="bool")
MASK[index1] = True
# Place charges on the defect (defect A)
self.charge[index1] = charge1
# zero the charges of the defect atoms in the environment
env_charge[index1] = 0.0
# calculation of tensors with interatomic distances for polarizability class
R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
P=np.zeros((self.Nat,3),dtype='f8')
for ii in range(self.Nat):
for jj in range(ii+1,self.Nat):
R[ii,jj,:]=self.coor[ii]-self.coor[jj]
R[jj,ii,:]=-R[ii,jj,:]
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
unit=np.diag([1]*self.Nat)
RR=RR+unit # only to avoid division by 0 for the diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
# calculate tensor with interatomic distances between environment and polarizability class atoms
#R_env2pol=np.zeros((self.Nat,env_Nat,3),dtype='f8') # mutual distance vectors
R_pol = np.tile(self.coor,(env_Nat,1,1))
R_pol = np.swapaxes(R_pol,0,1)
R_env = np.tile(env_coor,(self.Nat,1,1))
R_env2pol = R_pol - R_env
RR_env2pol = np.linalg.norm(R_env2pol,axis=2)
RR3_env2pol = np.power(RR_env2pol,3)
RR5_env2pol = np.power(RR_env2pol,5)
# definition of T tensor
T=np.zeros((self.Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T[:,:,ii,ii]=1/RR3[:,:]-3*np.power(R[:,:,ii],2)/RR5
for jj in range(ii+1,3):
T[:,:,ii,jj] = -3*R[:,:,ii]*R[:,:,jj]/RR5
T[:,:,jj,ii] = T[:,:,ii,jj]
for ii in range(self.Nat):
T[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# definition of T tensor between environment and the polarizability class
T_pol2env=np.zeros((env_Nat,self.Nat,3,3),dtype='f8') # mutual distance vectors
for ii in range(3):
T_pol2env[:,:,ii,ii]=1/(RR3_env2pol.T + np.identity(self.Nat))[:,:]-3*np.power(R_env2pol[:,:,ii],2).T/(RR5_env2pol.T+np.identity(self.Nat))
for jj in range(ii+1,3):
T_pol2env[:,:,ii,jj] = -3*R_env2pol[:,:,ii].T*R_env2pol[:,:,jj].T/(RR5_env2pol.T + np.identity(self.Nat))
T_pol2env[:,:,jj,ii] = T_pol2env[:,:,ii,jj]
for ii in range(self.Nat):
T_pol2env[ii,ii,:,:]=0.0 # no self interaction of atom i with atom i
# calculate the derivative with respect to the environment atom displacements
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
# calculate electric field generated by the first defect (defect A)
for jj in range(3):
ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
# calculate the dipoles induced by the first defect (defect A)
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:])
for ii in range(3):
for n in range(self.Nat):
if not MASK[n]:
res[n,ii] += np.dot(np.dot(env_charge,T_pol2env[:,n,ii,:]),P[n,:])
ELFV=np.zeros((env_Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T_pol2env[:,:,ii,jj],P[:,jj])
for ii in range(3):
res_env[:,ii] -= env_charge * ELFV[:,ii]
# calculate the dipoles induced by the environment ESP atomic charges
Q=np.meshgrid(env_charge,self.charge)[0] # in columns same charges - in rows environment charges
ELF=np.zeros((self.Nat,env_Nat,3),dtype='f8')
for jj in range(3):
ELF[:,:,jj]=( Q/(RR3_env2pol+np.identity(self.Nat)) )*R_env2pol[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j
for ii in range(self.Nat):
ELF[ii,ii,:]=np.zeros(3,dtype='f8')
ELFV=np.array(np.sum(ELF,axis=1),dtype='f8') # ELFV[i,:] is electric field at position of atom i
for ii in range(self.Nat):
P[ii,:]=np.dot(self.polar[typ][ii],ELFV[ii,:]) # induced dipoles by environment charge distribution
P[index1,:]=0.0 # just to be sure
for n in range(self.Nat):
if not MASK[n]:
for jj in range(len(index1)):
res[n,:]+=charge1[jj]*np.dot(T[index1[jj],n,:,:],P[n,:])
# calculate the derivative with respect to displacement of the defect A atoms
ELFV=np.zeros((self.Nat,3),dtype='f8')
for ii in range(3):
for jj in range(3):
ELFV[:,ii]+=np.dot(T[:,:,ii,jj],P[:,jj])
for ii in range(len(index1)):
res[index1[ii],:] -= charge1[ii]*ELFV[index1[ii],:]
# Place the original charges back on the defects and the environment atoms
self.charge[index1] = charge1_orig
env_charge[index1] = charge_env_orig
return res.reshape(3*self.Nat),res_env.reshape(3*self.Nat)
# TODO: Add possibility for NN = -err to calculate dipoles until convergence is reached
def _calc_dipoles_All(self,typ,Estatic=np.zeros(3,dtype='f8'),NN=60,eps=1,debug=False):
''' Function for calculating induced dipoles by an SCF procedure for the
interaction of the molecule with its environment. It calculates the dipoles
induced on individual atoms by the static charge distribution and a
homogeneous electric field.
Parameters
----------
typ : str ('AlphaE','Alpha_E','BetaEE')
Specifies which polarizability is used for calculation of induced
atomic dipoles
Estatic : numpy.array of real (dimension 3) (optional - init=np.zeros(3,dtype='f8'))
External homogeneous electric field vector (orientation and strength)
in ATOMIC UNITS. By default there is no electric field
NN : integer (optional - init=60)
Number of SCF steps for calculation of induced dipole
eps : real (optional - init=1.0)
Relative permittivity of the medium in which the dipoles and the
molecule are embedded (by default vacuum, relative permittivity 1.0)
'''
if debug:
import timeit
time0 = timeit.default_timer()
#R=np.zeros((self.Nat,self.Nat,3),dtype='f8') # mutual distance vectors
#P=np.zeros((self.Nat,self.Nat,3),dtype='f8')
#for ii in range(self.Nat):
# for jj in range(ii+1,self.Nat):
# R[ii,jj,:]=self.coor[ii]-self.coor[jj]
# R[jj,ii,:]=-R[ii,jj,:]
#if debug:
# time01 = timeit.default_timer()
#RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2)) # mutual distances
R = np.tile(self.coor,(self.Nat,1,1))
R = (np.swapaxes(R,0,1) - R)
RR=squareform(pdist(self.coor))
if 0:
RR=np.sqrt(np.power(R[:,:,0],2)+np.power(R[:,:,1],2)+np.power(R[:,:,2],2))
RR2=squareform(pdist(self.coor))
print((RR2==RR).all()) # False
print(np.allclose(RR2,RR)) # True
if not (RR2==RR).all():
print(RR[0,1])
print(pdist(self.coor)[0])
print(RR[0,2])
print(pdist(self.coor)[1])
if debug:
time01 = timeit.default_timer()
unit=np.diag([1]*self.Nat)
RR=RR+unit # only to avoid division by 0 for the diagonal elements
RR3=np.power(RR,3)
RR5=np.power(RR,5)
#mask=[]
#for ii in range(len(self.charge)):
# if abs(self.charge[ii])>1e-8:
# mask.append(ii)
mask=(np.abs(self.charge)>1e-8)
mask=np.expand_dims(mask, axis=0)
MASK=np.dot(mask.T,mask)
MASK=np.tile(MASK,(3,1,1)) # np.shape(MASK)=(3,N,N); True at all index pairs where both charges are non-zero
MASK=np.rollaxis(MASK,0,3)
MASK2=np.diag(np.ones(self.Nat,dtype='bool'))
MASK2=np.tile(MASK2,(3,1,1))
MASK2=np.rollaxis(MASK2,0,3)
Q=np.meshgrid(self.charge,self.charge)[0] # in columns same charges
#ELF=np.zeros((self.Nat,self.Nat,3),dtype='f8')
#ELF_Q=(Q/RR3)*np.rollaxis(R,2)
#ELF_Q=np.rollaxis(ELF,0,3)
if debug:
time1 = timeit.default_timer()
print('Time spend on preparation of variables in calc_dipoles_All:',time1-time0,'s')
for kk in range(NN):
# point charge electric field
ELF=(Q/RR3)*np.rollaxis(R,2)
ELF=np.rollaxis(ELF,0,3)
#for jj in range(3):
# ELF[:,:,jj]=(Q/RR3)*R[:,:,jj] # ELF[i,j,:] is electric field at position i generated by atom j - on diagonal there are zeros
# TODO: Change this procedure because atoms with charges could be polarized by all atoms with charges - but the input defect charges should then be fitted consistently with the polarizable atoms
# polarization by static charges only in area without charges:
#for ii in mask:
# ELF[ii,mask,:]=0.0
ELF[MASK]=0.0
# dipole electric field
#for ii in range(self.Nat):
# P[ii,:,:]=self.dipole[:,:]
P=np.tile(self.dipole[:,:],(self.Nat,1,1)) # P[ii,:,:]=self.dipole[:,:] for ii going through all atoms
PR=np.sum(np.multiply(P,R),axis=2)
# TODO: This takes one second - make it faster
for jj in range(3):
ELF[:,:,jj]+=(3*PR/RR5)*R[:,:,jj]
ELF[:,:,jj]-=P[:,:,jj]/RR3
#for ii in range(self.Nat):
# ELF[ii,ii,:]=np.zeros(3,dtype='f8')
ELF[MASK2]=0.0
elf=np.sum(ELF,axis=1)/eps
# TODO: Think about whether this could be done in a more efficient way
for ii in range(self.Nat):
self.dipole[ii,:]=np.dot(self.polar[typ][ii],elf[ii]+Estatic)
if debug:
print('Dipole sum:',np.sum(self.dipole,axis=0))
if debug:
time2 = timeit.default_timer()
print('Time spend on calculation in calc_dipoles_All:',time2-time1,'s')
print('Calculation vs preparation ratio:',(time2-time1)/(time1-time0))
print('Time for filling coordinate matrix vs all the rest:',(time01-time0)/(time1-time01))
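# The SCF loop above can be illustrated on a toy system: two isotropically
# polarizable atoms in a homogeneous field, where the Jacobi-style iteration
# mu = alpha*(E0 + E_dip(mu)) has the closed-form solution
# mu_z = alpha*E0_z/(1 - 2*alpha/d**3) for the field along the molecular axis.
# A standalone sketch (hypothetical helper, independent of this class):

```python
import numpy as np

def scf_induced_dipoles(coor, alpha, E0, n_iter=60):
    # Jacobi-style SCF for induced point dipoles in a homogeneous field E0:
    # mu_i = alpha * (E0 + sum_{j != i} [3 (mu_j . r) r/|r|^5 - mu_j/|r|^3]),
    # iterated n_iter times (the role NN plays in _calc_dipoles_All).
    N = coor.shape[0]
    mu = np.zeros((N, 3))
    for _ in range(n_iter):
        E = np.tile(np.asarray(E0, dtype=float), (N, 1))
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                r = coor[i] - coor[j]
                rr = np.linalg.norm(r)
                E[i] += 3.0 * np.dot(mu[j], r) * r / rr**5 - mu[j] / rr**3
        mu = alpha * E
    return mu
```

# The iteration converges geometrically as long as 2*alpha/d**3 < 1.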
def _get_interaction_energy(self,index,charge=None,debug=False):
''' Function calculates interaction energy between atoms defined in index
and the rest of the atoms
Parameters
----------
index : list of int (dimension N)
List of atoms where we would like to calculate potential and
for which we would like to calculate interaction energy with the
rest of the system
charge : numpy.array of real (dimension Natoms_of_defect)
Atomic transition charges (TrEsp charges) for every atom of the defect
defined by ``index``
Returns
-------
InterE : real
Interaction energies in atomic units (Hartree)
'''
if isinstance(charge,np.ndarray) or isinstance(charge,list):
use_orig_charges=False
else:
if charge is None:
use_orig_charges=True
else:
raise IOError('Unable to determine charges')
if use_orig_charges:
charge=np.zeros(len(index),dtype='f8')
# copy charges and assign zero charges to those in index
AllCharge=np.copy(self.charge)
AllDipole=np.copy(self.dipole)
for ii in range(self.Nat):
if ii in index:
if use_orig_charges:
charge[np.where(index==ii)[0][0]]=AllCharge[ii]
AllCharge[ii]=0.0
AllDipole[ii,:]=np.zeros(3,dtype='f8')
InterE=0.0
# TODO: This distance matrix R is calculated many times - it would be faster to have it as a global variable
# TODO: Check whether filling the whole matrix and then taking only a small slice is not slower than two for loops over only the relevant pairs
# Fill matrix of interatomic vectors:
R = np.tile(self.coor,(self.Nat,1,1))
R = (R - np.swapaxes(R,0,1)) # R[ii,jj,:]=self.coor[jj]-self.coor[ii]
# Correct regions with zero distance
if (AllCharge[index]==0.0).all():
R[index,index,0]=1.0 # a small distance, but it is always multiplied by zero and therefore will not influence the total potential
else:
R[index,index,0]=1e20 # a large distance gives a small norm so the total potential is not influenced (these atoms should be excluded)
# Take only the slice R[:,jj,:] of the matrix where jj corresponds to the indices
R=R[:,index,:]
pot_charge=potential_charge(AllCharge,R)
pot_dipole=potential_dipole(AllDipole,R)
# TODO: Move to test part
if debug:
print('Length of index list:',len(index))
print('Shape of coor matrix:',R.shape)
#print('Coor 0,0:',R[0,0])
#print('Coor 0,1:',R[0,1])
#print('Coor 0,2:',R[0,2])
#print('Coor 2,3:',R[2,3])
potential_charge_test=np.zeros(len(index),dtype='f8')
potential_dipole_test=np.zeros(len(index),dtype='f8')
#print(pot_charge)
for jj in range(len(index)):
for ii in range(self.Nat):
if ii!=index[jj]:
R=self.coor[index[jj]]-self.coor[ii]
#if jj==0 and ii==0:
# print('Coor 0,0:',R)
#if jj==1 and ii==0:
# print('Coor 0,1:',R)
#if jj==2 and ii==0:
# print('Coor 0,2:',R)
#if jj==3 and ii==2:
# print('Coor 2,3:',R)
potential_charge_test[jj]+=potential_charge(AllCharge[ii],R)
potential_dipole_test[jj]+=potential_dipole(AllDipole[ii],R)
#print(potential_test)
print(pot_dipole)
print(potential_dipole_test)
if np.allclose(potential_charge_test,pot_charge):
print('Potential generated by charges is the same for old and new calculation')
else:
raise Warning('Potentials generated by charges are different for both methods')
if np.allclose(potential_dipole_test,pot_dipole):
print('Potential generated by dipoles is the same for old and new calculation')
else:
raise Warning('Potentials generated by dipoles are different for both methods')
for jj in range(len(index)):
potential=0.0
for ii in range(self.Nat):
if ii!=index[jj]:
R=self.coor[index[jj]]-self.coor[ii]
potential+=potential_charge(AllCharge[ii],R)
potential+=potential_dipole(AllDipole[ii],R)
InterE+=potential*charge[jj]
if np.allclose(InterE,np.dot(charge,pot_charge+pot_dipole)):
print('Interaction energy is calculated correctly')
else:
raise Warning('Interaction energy for both methods is different')
InterE = np.dot(charge, pot_charge+pot_dipole)
return InterE
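# The charge and dipole potentials combined above follow the standard
# atomic-unit expressions phi_q = q/|r| and phi_p = p.r/|r|^3. A minimal
# standalone version of the same energy sum (hypothetical helper, independent
# of the potential_charge/potential_dipole functions used by this module):

```python
import numpy as np

def interaction_energy(coor, charges, dipoles, index, probe_charge):
    # Interaction energy (atomic units) between probe charges placed at
    # coor[index] and the potential of all remaining point charges and
    # point dipoles: phi = q/|r| + p.r/|r|^3 evaluated at each probe site.
    E = 0.0
    for k, i in enumerate(index):
        for j in range(coor.shape[0]):
            if j in index:
                continue
            r = coor[i] - coor[j]
            rr = np.linalg.norm(r)
            phi = charges[j] / rr + np.dot(dipoles[j], r) / rr**3
            E += probe_charge[k] * phi
    return E
```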
def _fill_Polar_matrix(self,index1,index2,typ='AlphaE',order=80,debug=False):
""" Calculate polarization matrix representation for interaction energy
calculation.
Parameters
---------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
typ : string (optional init = 'AlphaE')
Which polarizability should be used for calculation of induced
dipoles. Supported types are: ``'AlphaE'``, ``'Alpha_E'`` and
``'BetaEE'``
order : integer (optional - init=80)
Specifies how many SCF steps should be used in the calculation of induced
dipoles - according to the used model it should be 2
Returns
-------
PolMAT : numpy array of float (dimension 2x2)
Polarizability matrix representation. For ``typ='AlphaE'`` or
``typ='BetaEE'``: PolMAT[0,0] = -E(1)*induced_dipole(1),
PolMAT[0,1] = PolMAT[1,0] = -E(1)*induced_dipole(2) and
PolMAT[1,1] = -E(2)*induced_dipole(2). For ``typ='Alpha_E'``
the diagonal elements are swapped: PolMAT[0,0] = -E(2)*induced_dipole(2),
PolMAT[0,1] = PolMAT[1,0] = -E(1)*induced_dipole(2) and
PolMAT[1,1] = -E(1)*induced_dipole(1)
dipolesA : numpy array of float (dimension 3)
Total induced dipole moment in the environment by the first defect.
dipolesB : numpy array of float (dimension 3)
Total induced dipole moment in the environment by the second defect.
dipoles_polA : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect
"""
if typ=='BetaEE' and order>1:
raise IOError('For calculation with beta polarization maximal order is 1')
elif typ=='BetaEE' and order<1:
return np.zeros((2,2),dtype='f8')
defA_charge=self.charge[index1]
defB_charge=self.charge[index2]
defA_indx=deepcopy(index1)
defB_indx=deepcopy(index2)
PolMAT=np.zeros((2,2),dtype='f8')
E_TrEsp=self.get_TrEsp_Eng(index1, index2)
if debug:
print(typ,order)
# Polarization by molecule B
self.charge[defA_indx]=0.0
self._calc_dipoles_All(typ,NN=order,eps=1,debug=False)
dipolesB=np.sum(self.dipole,axis=0) # induced dipoles by second defect (defect B)
self.charge[defA_indx]=defA_charge
PolMAT[1,1] = self._get_interaction_energy(defB_indx,charge=defB_charge,debug=False) - E_TrEsp
PolMAT[0,1] = self._get_interaction_energy(defA_indx,charge=defA_charge,debug=False) - E_TrEsp
PolMAT[1,0] = PolMAT[0,1]
dipoles_polB = self.dipole.copy()
self.dipole=np.zeros((self.Nat,3),dtype='f8')
# Polarization by molecule A
self.charge[defB_indx]=0.0
self._calc_dipoles_All(typ,NN=order,eps=1,debug=False)
dipolesA=np.sum(self.dipole,axis=0)
self.charge[defB_indx]=defB_charge
PolMAT[0,0] = self._get_interaction_energy(defA_indx,charge=defA_charge,debug=False) - E_TrEsp
if debug:
print(PolMAT*conversion_facs_energy["1/cm"])
if np.isclose(self._get_interaction_energy(defB_indx,charge=defB_charge,debug=False)-E_TrEsp,PolMAT[1,0]):
print('ApB = BpA')
else:
raise RuntimeError('ApB != BpA')
dipoles_polA = self.dipole.copy()
self.dipole=np.zeros((self.Nat,3),dtype='f8')
if typ=='AlphaE' or typ=='BetaEE' or typ=='Alpha_st':
return PolMAT,dipolesA,dipolesB,dipoles_polA,dipoles_polB
elif typ=='Alpha_E':
PolMAT[[0,1],[0,1]] = PolMAT[[1,0],[1,0]] # Swap PolMAT[0,0] with PolMAT[1,1]
return PolMAT,dipolesA,dipolesB,dipoles_polA,dipoles_polB
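The ``typ='Alpha_E'`` branch above swaps the diagonal of `PolMAT` with a single fancy-indexed assignment. A small self-contained check of that idiom (NumPy copies the fancy-indexed right-hand side before assigning, so no temporary is needed):

```python
import numpy as np

PolMAT = np.array([[1.0, 5.0],
                   [5.0, 2.0]])
# assigns old PolMAT[1,1] to PolMAT[0,0] and old PolMAT[0,0] to PolMAT[1,1],
# leaving the off-diagonal elements untouched
PolMAT[[0, 1], [0, 1]] = PolMAT[[1, 0], [1, 0]]
```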
def _TEST_fill_Polar_matrix(self,index1,index2,typ='AlphaE',order=80,debug=False, out_pot=False):
""" Calculate polarization matrix representation for interaction energy
calculation.
Parameters
----------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
typ : string (optional - init = 'AlphaE')
Which polarizability should be used for the calculation of induced
dipoles. Supported types are: ``'AlphaE'``, ``'Alpha_E'`` and
``'BetaEE'``.
order : integer (optional - init = 80)
Specify how many SCF steps should be used in the calculation of
induced dipoles; according to the model used it should be 2.
Returns
-------
PolMAT : numpy array of float (dimension 2x2)
Polarizability matrix representation. For ``typ='AlphaE'`` or
``typ='BetaEE'``: PolMAT[0,0] = -E(1)*induced_dipole(1),
PolMAT[0,1] = PolMAT[1,0] = -E(1)*induced_dipole(2) and
PolMAT[1,1] = -E(2)*induced_dipole(2). For ``typ='Alpha_E'`` the
diagonal elements are swapped: PolMAT[0,0] = -E(2)*induced_dipole(2),
PolMAT[0,1] = PolMAT[1,0] = -E(1)*induced_dipole(2) and
PolMAT[1,1] = -E(1)*induced_dipole(1).
dipolesA : numpy array of float (dimension 3)
Total induced dipole moment in the environment by the first defect.
dipolesB : numpy array of float (dimension 3)
Total induced dipole moment in the environment by the second defect.
dipoles_polA : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect
"""
if typ=='BetaEE' and order>1:
raise ValueError('For calculation with beta polarization the maximal order is 1')
elif typ=='BetaEE' and order<1:
# keep the same return structure as the regular exit points below
return (np.zeros((2,2),dtype='f8'), np.zeros(3,dtype='f8'),
np.zeros(3,dtype='f8'), np.zeros((self.Nat,3),dtype='f8'))
defA_charge=self.charge[index1]
defB_charge=self.charge[index2]
defA_indx=deepcopy(index1)
defB_indx=deepcopy(index2)
PolMAT=np.zeros((2,2),dtype='f8')
E_TrEsp=self.get_TrEsp_Eng(index1, index2)
if debug:
print(typ,order)
# Polarization by molecule B
self.charge[defA_indx]=0.0
self._calc_dipoles_All(typ,NN=order,eps=1,debug=False)
dipolesB=np.sum(self.dipole,axis=0) # induced dipoles by second defect (defect B)
self.charge[defA_indx]=defA_charge
PolMAT[1,1] = self._get_interaction_energy(defB_indx,charge=defB_charge,debug=False) - E_TrEsp
PolMAT[0,1] = self._get_interaction_energy(defA_indx,charge=defA_charge,debug=False) - E_TrEsp
PolMAT[1,0] = PolMAT[0,1]
self.dipole=np.zeros((self.Nat,3),dtype='f8')
# Polarization by molecule A
self.charge[defB_indx]=0.0
self._calc_dipoles_All(typ,NN=order,eps=1,debug=False)
dipolesA=np.sum(self.dipole,axis=0)
self.charge[defB_indx]=defB_charge
PolMAT[0,0] = self._get_interaction_energy(defA_indx,charge=defA_charge,debug=False) - E_TrEsp
if debug:
print(PolMAT*conversion_facs_energy["1/cm"])
if np.isclose(self._get_interaction_energy(defB_indx,charge=defB_charge,debug=False)-E_TrEsp,PolMAT[1,0]):
print('ApB = BpA')
else:
raise RuntimeError('ApB != BpA')
dipoles_polA = self.dipole.copy()
self.dipole=np.zeros((self.Nat,3),dtype='f8')
if typ=='AlphaE' or typ=='BetaEE' or typ=='Alpha_st':
return PolMAT,dipolesA,dipolesB,dipoles_polA
elif typ=='Alpha_E':
PolMAT[[0,1],[0,1]] = PolMAT[[1,0],[1,0]] # Swap PolMAT[0,0] with PolMAT[1,1]
return PolMAT,dipolesA,dipolesB,dipoles_polA
def get_TrEsp_Eng(self, index1, index2):
""" Calculate TrEsp interaction energy for defects (defect-like
molecules) in vacuum.
Parameters
----------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
Returns
--------
E_TrEsp : float
TrEsp interaction energy in ATOMIC UNITS (Hartree) between the
defects in vacuum.
"""
defA_coor = self.coor[index1]
defB_coor = self.coor[index2]
defA_charge = self.charge[index1]
defB_charge = self.charge[index2]
E_TrEsp = charge_charge(defA_coor,defA_charge,defB_coor,defB_charge)[0]
return E_TrEsp # in hartree
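`get_TrEsp_Eng` delegates the Coulomb sum to `charge_charge`. A minimal NumPy sketch of the quantity it is assumed to return, `E = sum_ij q1_i*q2_j/|r1_i - r2_j|` in atomic units (`coulomb_interaction` is an illustrative stand-in, not the module's helper):

```python
import numpy as np

def coulomb_interaction(coor1, q1, coor2, q2):
    # pairwise displacement tensor, shape (N1, N2, 3)
    R = coor1[:, None, :] - coor2[None, :, :]
    dist = np.linalg.norm(R, axis=2)          # pairwise distances (N1, N2)
    return np.sum(np.outer(q1, q2) / dist)    # sum of q_i*q_j/r_ij

# +1 and -1 point charges one bohr apart -> E = -1 hartree
E = coulomb_interaction(np.array([[0.0, 0.0, 0.0]]), np.array([1.0]),
                        np.array([[1.0, 0.0, 0.0]]), np.array([-1.0]))
```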
def get_TrEsp_Dipole(self, index):
""" Calculate vacuum transition dipole moment for single defect (from
TrEsp charges).
Parameters
----------
index : list of integer (dimension Natoms_defect)
Indexes of all atoms from the defect (starting from 0) of which
transition dipole is calculated
Returns
--------
Dip_TrEsp : numpy array of float (dimension 3)
Transition dipole in ATOMIC UNITS for specified defect (by index)
calculated from TrEsp charges
"""
def_coor = self.coor[index]
def_charge = self.charge[index]
Dip_TrEsp = np.dot(def_charge,def_coor)
return Dip_TrEsp # in AU
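The dipole here is just the charge-weighted sum of positions, `mu = sum_i q_i * r_i`, which `np.dot(charge, coor)` evaluates in one call. A tiny worked example with invented charges:

```python
import numpy as np

coor = np.array([[0.0, 0.0, 0.0],
                 [2.0, 0.0, 0.0]])   # atomic positions in bohr
charge = np.array([-0.5, 0.5])       # TrEsp-like transition charges (sum to zero)
mu = np.dot(charge, coor)            # mu = sum_i q_i * r_i
```

Because the transition charges sum to zero, the result is independent of the coordinate origin.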
def get_SingleDefectProperties(self, index, dAVA=0.0, order=80, approx=1.1):
''' Calculate effects of environment such as transition energy shift
and transition dipole change for single defect.
Parameters
----------
index : list of integer (dimension Natoms_defect)
Indexes of all atoms from the defect (starting from 0) for which
transition energy and transition dipole is calculated
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in electrostatic
interaction energy between defect and environment for defect in
excited state <A|V|A> and in ground state <G|V|G>.
order : integer (optional - init = 80)
Specify how many SCF steps should be used in the calculation of
induced dipoles; according to the model used it should be 2.
approx : real (optional - init = 1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary
condition.
Returns
-------
Eshift : Energy class
Transition energy shift for the defect due to the fluorographene
environment calculated from structure with single defect. Units are
energy managed
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC
UNITS)
**Neglecting `tilde{Beta(E)}` is not a valid approximation. It would be
better to neglect `Beta(E,-E)` to be consistent with the approximation
used for the interaction energy.**
Notes
----------
dip = Alpha(E)*El_field_TrCharge + Alpha(-E)*El_field_TrCharge
Then final transition dipole of molecule with environment is calculated
according to the approximation:
**Approximation 1.1:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init(1-1/4*Ind_dip_Beta(E,E)*El_field_TrCharge)
**Approximation 1.2:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init
**Approximation 1.3:**
dip_fin = dip - 2*Vinter*Beta(E,E)*El_field_TrCharge + dip_init
'''
# Get TrEsp Transition dipole
TrDip_TrEsp = np.dot(self.charge[index],self.coor[index,:]) # vacuum transition dipole for single defect
charge = self.charge[index]
# Calculate polarization matrices
# TODO: Shift this block to a separate function
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=order,eps=1,debug=False)
dip_AlphaE = np.sum(self.dipole,axis=0)
Polar_AlphaE = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_E',NN=order,eps=1,debug=False)
dip_Alpha_E = np.sum(self.dipole,axis=0)
Polar_Alpha_E = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=order//2,eps=1,debug=False)
dip_Beta = np.sum(self.dipole,axis=0)
Polar_Beta = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
if approx==1.1:
# Calculate transition energy shift
Eshift = dAVA + Polar_AlphaE - Polar_Alpha_E
Eshift -= (self.VinterFG - dAVA)*Polar_Beta
# Calculate transition dipoles for every defect
TrDip = TrDip_TrEsp*(1 + Polar_Beta/4) + dip_AlphaE + dip_Alpha_E
TrDip -= (self.VinterFG - dAVA)*dip_Beta
# Change to energy class
with energy_units('AU'):
Eshift = EnergyClass(Eshift)
return Eshift, TrDip
else:
raise ValueError('Unsupported approximation')
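As a numeric sketch of how the approximation-1.1 shift is assembled from the polarization energies above (all quantities in hartree; the values below are invented purely for illustration):

```python
# Eshift = dAVA + Polar_AlphaE - Polar_Alpha_E - (VinterFG - dAVA)*Polar_Beta
dAVA = 0.001           # <A|V|A> - <G|V|G>, electrostatic shift
Polar_AlphaE = -0.004  # polarization energy with Alpha(E)
Polar_Alpha_E = -0.003 # polarization energy with Alpha(-E)
Polar_Beta = 0.0002    # polarization energy with Beta(E,E)
VinterFG = 0.05        # defect-environment interaction term
Eshift = dAVA + Polar_AlphaE - Polar_Alpha_E - (VinterFG - dAVA) * Polar_Beta
```

The Alpha(E) and Alpha(-E) contributions enter with opposite signs, so they largely cancel when the two polarizabilities are similar; the Beta term is a small correction scaled by `(VinterFG - dAVA)`.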
def _TEST_Compare_SingleDefectProperties(self, tr_charge, gr_charge, ex_charge, struc, index, dAVA=0.0, order=80, approx=1.1):
''' Calculate effects of environment such as transition energy shift
and transition dipole change for single defect.
Parameters
----------
index : list of integer (dimension Natoms_defect)
Indexes of all atoms from the defect (starting from 0) for which
transition energy and transition dipole is calculated
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in electrostatic
interaction energy between defect and environment for defect in
excited state <A|V|A> and in ground state <G|V|G>.
order : integer (optional - init = 80)
Specify how many SCF steps should be used in the calculation of
induced dipoles; according to the model used it should be 2.
approx : real (optional - init = 1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary
condition.
Returns
-------
Eshift : Energy class
Transition energy shift for the defect due to the fluorographene
environment calculated from structure with single defect. Units are
energy managed
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC
UNITS)
**Neglecting `tilde{Beta(E)}` is not a valid approximation. It would be
better to neglect `Beta(E,-E)` to be consistent with the approximation
used for the interaction energy.**
Notes
----------
dip = Alpha(E)*El_field_TrCharge + Alpha(-E)*El_field_TrCharge
Then final transition dipole of molecule with environment is calculated
according to the approximation:
**Approximation 1.1:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init(1-1/4*Ind_dip_Beta(E,E)*El_field_TrCharge)
**Approximation 1.2:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init
**Approximation 1.3:**
dip_fin = dip - 2*Vinter*Beta(E,E)*El_field_TrCharge + dip_init
'''
# Get TrEsp Transition dipole
TrDip_TrEsp = np.dot(self.charge[index],self.coor[index,:]) # vacuum transition dipole for single defect
# Get energy contribution from polarization by transition density
self.charge[index] = tr_charge
charge = self.charge[index]
# Set distance matrix
R_elst = np.tile(struc.coor._value,(self.Nat,1,1))
R_pol = np.tile(self.coor,(struc.nat,1,1))
R = (R_elst - np.swapaxes(R_pol,0,1)) # R[ii,jj,:] = struc.coor[jj] - self.coor[ii]
# Calculate polarization matrices
# TODO: Shift this block to a separate function
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
Polar1_AlphaE = self._get_interaction_energy(index,charge=charge,debug=False)
pot1_dipole_AlphaE_tr = potential_dipole(self.dipole,R)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=2,eps=1,debug=False)
Polar2_AlphaE = self._get_interaction_energy(index,charge=charge,debug=False)
Polar2_AlphaE = Polar2_AlphaE - Polar1_AlphaE
dip_AlphaE = np.sum(self.dipole,axis=0)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_E',NN=1,eps=1,debug=False)
Polar1_Alpha_E = self._get_interaction_energy(index,charge=charge,debug=False)
pot1_dipole_Alpha_E_tr = potential_dipole(self.dipole,R)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_E',NN=2,eps=1,debug=False)
dip_Alpha_E = np.sum(self.dipole,axis=0)
Polar2_Alpha_E = self._get_interaction_energy(index,charge=charge,debug=False)
Polar2_Alpha_E = Polar2_Alpha_E - Polar1_Alpha_E
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
dip_Beta = np.sum(self.dipole,axis=0)
Polar1_Beta_EE = self._get_interaction_energy(index,charge=charge,debug=False)
pot1_dipole_betaEE_tr = potential_dipole(self.dipole,R)
self.charge[index] = ex_charge
charge = self.charge[index]
Polar1_Beta_EE_tr_ex = self._get_interaction_energy(index,charge=charge,debug=False)
self.charge[index] = gr_charge
charge = self.charge[index]
Polar1_Beta_EE_tr_gr = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
# Calculate polarization by ground state charge distribution
self.charge[index] = gr_charge
charge = self.charge[index]
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
Polar1_static_gr = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=2,eps=1,debug=False)
Polar2_static_gr = self._get_interaction_energy(index,charge=charge,debug=False)
Polar2_static_gr = Polar2_static_gr - Polar1_static_gr
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
Polar1_Beta_EE_gr = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
# Calculate polarization by excited state charge distribution
self.charge[index] = ex_charge
charge = self.charge[index]
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
Polar1_static_ex = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=2,eps=1,debug=False)
Polar2_static_ex = self._get_interaction_energy(index,charge=charge,debug=False)
Polar2_static_ex = Polar2_static_ex - Polar1_static_ex
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
Polar1_Beta_EE_ex = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
# Calculate dipoles induced by the charge difference between ground and excited state
self.charge[index] = ex_charge - gr_charge
charge = self.charge[index]
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
pot1_dipole_ex_gr = potential_dipole(self.dipole,R)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=2,eps=1,debug=False)
pot2_dipole_ex_gr = potential_dipole(self.dipole,R)
pot2_dipole_ex_gr = pot2_dipole_ex_gr - pot1_dipole_ex_gr
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
pot1_dipole_betaEE_ex_gr = potential_dipole(self.dipole,R)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
# calculate interaction of the dipoles induced by the transition density with the ground- and excited-state charges of the chromophore
self.charge[index] = tr_charge
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
pot1_dipole_static_tr = potential_dipole(self.dipole,R)
self.charge[index] = ex_charge
charge = self.charge[index]
Polar1_static_tr_ex = self._get_interaction_energy(index,charge=charge,debug=False)
self.charge[index] = gr_charge
charge = self.charge[index]
Polar1_static_tr_gr = self._get_interaction_energy(index,charge=charge,debug=False)
self.charge[index] = tr_charge
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index] = gr_charge
charge = self.charge[index]
Polar1_AlphaE_tr_gr = self._get_interaction_energy(index,charge=charge,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
self._calc_dipoles_All('Alpha_E',NN=1,eps=1,debug=False)
self.charge[index] = ex_charge
charge = self.charge[index]
Polar1_Alpha_E_tr_ex = self._get_interaction_energy(index,charge=charge,debug=False)
# Set the variables to initial state
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
if approx==1.1:
# Calculate transition energy shift
Eshift = dAVA + Polar1_AlphaE + Polar2_AlphaE - Polar1_Alpha_E - Polar2_Alpha_E
Eshift -= (self.VinterFG - dAVA)*Polar1_Beta_EE
# Calculate transition dipoles for every defect
TrDip = TrDip_TrEsp*(1 + Polar1_Beta_EE/4) + dip_AlphaE + dip_Alpha_E
TrDip -= (self.VinterFG - dAVA)*dip_Beta
# Change to energy class
with energy_units('AU'):
Eshift = EnergyClass(Eshift)
dAVA = EnergyClass(dAVA)
Polar1_AlphaE = EnergyClass(Polar1_AlphaE)
Polar2_AlphaE = EnergyClass(Polar2_AlphaE)
Polar1_Alpha_E = EnergyClass(Polar1_Alpha_E)
Polar2_Alpha_E = EnergyClass(Polar2_Alpha_E)
Polar1_Beta_EE = EnergyClass(Polar1_Beta_EE)
Polar1_static_ex_gr = EnergyClass(Polar1_static_ex - Polar1_static_gr)
Polar2_static_ex_gr = EnergyClass(Polar2_static_ex - Polar2_static_gr)
Polar1_Beta_EE_ex_gr = EnergyClass(Polar1_Beta_EE_ex - Polar1_Beta_EE_gr)
Polar1_static_tr_ex = EnergyClass(Polar1_static_tr_ex)
Polar1_static_tr_gr = EnergyClass(Polar1_static_tr_gr)
Polar1_AlphaE_tr_gr = EnergyClass(Polar1_AlphaE_tr_gr)
Polar1_Alpha_E_tr_ex = EnergyClass(Polar1_Alpha_E_tr_ex)
Polar1_Beta_EE_tr_ex = EnergyClass(Polar1_Beta_EE_tr_ex)
Polar1_Beta_EE_tr_gr = EnergyClass(Polar1_Beta_EE_tr_gr)
res_Energy = {'dE_0-1': Eshift, 'dE_elstat(exct-grnd)': dAVA}
res_Energy['E_pol1_Alpha(E)'] = Polar1_AlphaE
res_Energy['E_pol2_Alpha(E)'] = Polar2_AlphaE
res_Energy['E_pol1_Alpha(-E)'] = Polar1_Alpha_E
res_Energy['E_pol2_Alpha(-E)'] = Polar2_Alpha_E
res_Energy['E_pol1_Beta(E,E)'] = Polar1_Beta_EE
res_Energy['E_pol1_static_(exct-grnd)'] = Polar1_static_ex_gr
res_Energy['E_pol2_static_(exct-grnd)'] = Polar2_static_ex_gr
res_Energy['E_pol1_Beta(E,E)_(exct-grnd)'] = Polar1_Beta_EE_ex_gr
res_Energy['E_pol1_static_(trans)_(exct)'] = Polar1_static_tr_ex
res_Energy['E_pol1_static_(trans)_(grnd)'] = Polar1_static_tr_gr
res_Energy['E_pol1_Alpha(E)_(trans)_(grnd)'] = Polar1_AlphaE_tr_gr
res_Energy['E_pol1_Alpha(-E)_(trans)_(exct)'] = Polar1_Alpha_E_tr_ex
res_Energy['E_pol1_Beta(E,E)_(trans)_(exct)'] = Polar1_Beta_EE_tr_ex
res_Energy['E_pol1_Beta(E,E)_(trans)_(grnd)'] = Polar1_Beta_EE_tr_gr
res_Pot = {'Pol2-env_static_(exct-grnd)': pot2_dipole_ex_gr}
res_Pot['Pol1-env_static_(exct-grnd)'] = pot1_dipole_ex_gr
res_Pot['Pol1-env_Beta(E,E)_(exct-grnd)'] = pot1_dipole_betaEE_ex_gr
res_Pot['Pol1-env_Beta(E,E)_(trans)'] = pot1_dipole_betaEE_tr
res_Pot['Pol1-env_Alpha(E)_(trans)'] = pot1_dipole_AlphaE_tr
res_Pot['Pol1-env_Alpha(-E)_(trans)'] = pot1_dipole_Alpha_E_tr
res_Pot['Pol1-env_static_(trans)'] = pot1_dipole_static_tr
# with energy_units('1/cm'):
# print(Eshift.value,dAVA.value,Polar1_AlphaE.value,Polar2_AlphaE.value,Polar1_AlphaE.value+Polar2_AlphaE.value,Polar1_Alpha_E.value,Polar2_Alpha_E.value,Polar1_Alpha_E.value+Polar2_Alpha_E.value)
#
return res_Energy, res_Pot, TrDip
else:
raise ValueError('Unsupported approximation')
def get_SingleDefectProperties_new(self, gr_charge, ex_charge, FG_elstat, struc, index, E01, dAVA=0.0, order=2, approx=1.1):
''' Calculate effects of environment such as transition energy shift
and transition dipole change for single defect.
Parameters
----------
index : list of integer (dimension Natoms_defect)
Indexes of all atoms from the defect (starting from 0) for which
transition energy and transition dipole is calculated
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in electrostatic
interaction energy between defect and environment for defect in
excited state <A|V|A> and in ground state <G|V|G>.
order : integer (optional - init = 2)
Specify how many SCF steps should be used in the calculation of
induced dipoles; according to the model used it should be 2.
approx : real (optional - init = 1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary
condition.
Returns
-------
Eshift : Energy class
Transition energy shift for the defect due to the fluorographene
environment calculated from structure with single defect. Units are
energy managed
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC
UNITS)
**Neglecting `tilde{Beta(E)}` is not a valid approximation. It would be
better to neglect `Beta(E,-E)` to be consistent with the approximation
used for the interaction energy.**
Notes
----------
dip = Alpha(E)*El_field_TrCharge + Alpha(-E)*El_field_TrCharge
Then final transition dipole of molecule with environment is calculated
according to the approximation:
**Approximation 1.1:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init(1-1/4*Ind_dip_Beta(E,E)*El_field_TrCharge)
**Approximation 1.2:**
dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init
**Approximation 1.3:**
dip_fin = dip - 2*Vinter*Beta(E,E)*El_field_TrCharge + dip_init
'''
# Get TrEsp Transition dipole
TrDip_TrEsp = np.dot(self.charge[index],self.coor[index,:]) # vacuum transition dipole for single defect
# Set initial charges
tr_charge = self.charge[index]
FG_charge_orig = FG_elstat.charge[index]
FG_charge = FG_elstat.charge.copy()
FG_charge[index] = 0.0
FG_elstat.charge[index] = tr_charge
Eelstat_trans=FG_elstat.get_EnergyShift()
FG_elstat.charge[index] = FG_charge_orig
# Set distance matrix
R_elst = np.tile(struc.coor._value,(self.Nat,1,1))
R_pol = np.tile(self.coor,(struc.nat,1,1))
R = (R_elst - np.swapaxes(R_pol,0,1)) # R[ii,jj,:] = struc.coor[jj] - self.coor[ii]
# TODO: Maybe also exclude fluorines connected to atoms ii
for ii in range(self.Nat):
R[ii,ii,:] = 0.0 # self-interaction is not permitted in the potential calculation
# Calculate polarization matrices
# TODO: Shift this block to a separate function
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=order,eps=1,debug=False)
Polar2_AlphaE = self._get_interaction_energy(index,charge=tr_charge,debug=False)
dip_AlphaE = np.sum(self.dipole,axis=0)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
Potential = potential_dipole(self.dipole,R)
E_Pol1_env_AE_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_E',NN=order,eps=1,debug=False)
Polar2_Alpha_E = self._get_interaction_energy(index,charge=tr_charge,debug=False)
dip_Alpha_E = np.sum(self.dipole,axis=0)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('Alpha_E',NN=1,eps=1,debug=False)
Potential = potential_dipole(self.dipole,R)
E_Pol1_env_A_E_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = ex_charge
self._calc_dipoles_All('Alpha_st',NN=order,eps=1,debug=False)
Polar2_Alpha_st_ex = self._get_interaction_energy(index,charge=ex_charge,debug=False)
Potential = potential_dipole(self.dipole,R)
Polar2_env_Alpha_st_ex = np.dot(FG_charge,Potential)
dip_Alpha_st_ex = np.sum(self.dipole,axis=0)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = gr_charge
self._calc_dipoles_All('Alpha_st',NN=order,eps=1,debug=False)
Polar2_Alpha_st_gr = self._get_interaction_energy(index,charge=gr_charge,debug=False)
Potential = potential_dipole(self.dipole,R)
Polar2_env_Alpha_st_gr = np.dot(FG_charge,Potential)
dip_Alpha_st_gr = np.sum(self.dipole,axis=0)
# TODO: For pol2-env_static the second order enters twice and the first order only once - therefore the first and second order have to be calculated separately for the environment effects
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = ex_charge
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
dip1_Ast_ex = np.sum(self.dipole,axis=0)
Potential = potential_dipole(self.dipole,R)
Polar1_env_Alpha_st_ex = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = gr_charge
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
dip1_Ast_gr = np.sum(self.dipole,axis=0)
Potential = potential_dipole(self.dipole,R)
Polar1_env_Alpha_st_gr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
Potential = potential_dipole(self.dipole,R)
Pol1_env_Alpha_st_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
dip_Beta = np.sum(self.dipole,axis=0)
Polar1_Beta_EE = self._get_interaction_energy(index,charge=tr_charge,debug=False)
#pot1_dipole_betaEE_tr = potential_dipole(self.dipole,R)
# needed for transition dipole
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = gr_charge
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
dip1_AE_gr = np.sum(self.dipole,axis=0)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = ex_charge
self._calc_dipoles_All('Alpha_E',NN=1,eps=1,debug=False)
dip1_A_E_ex = np.sum(self.dipole,axis=0)
# Set the variables to initial state
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
if approx==1.1:
# Calculate transition energy shift
Eshift = dAVA + Polar2_AlphaE - Polar2_Alpha_E
Eshift -= (self.VinterFG - dAVA)*Polar1_Beta_EE
Eshift += Polar2_Alpha_st_ex - Polar2_Alpha_st_gr
Eshift += Polar1_env_Alpha_st_ex - Polar1_env_Alpha_st_gr
Eshift += 2*(Polar2_env_Alpha_st_ex - Polar1_env_Alpha_st_ex - Polar2_env_Alpha_st_gr + Polar1_env_Alpha_st_gr)
Eshift += Eelstat_trans/E01._value * (2*E_Pol1_env_AE_tr + 4*Pol1_env_Alpha_st_tr + 2*E_Pol1_env_A_E_tr)
# Calculate transition dipoles for every defect
TrDip = TrDip_TrEsp*(1 + Polar1_Beta_EE/2 - 2*(Eelstat_trans/E01._value)*(Eelstat_trans/E01._value) )
TrDip += dip_AlphaE + dip_Alpha_E
TrDip -= (self.VinterFG - dAVA)*dip_Beta
TrDip += (Eelstat_trans/E01._value)*(dip1_Ast_gr - dip1_Ast_ex)
TrDip += (Eelstat_trans/E01._value)*(dip1_AE_gr - dip1_A_E_ex)
# TODO: Add term for polarization of environment by environment itself
# Change to energy class
with energy_units('AU'):
Eshift = EnergyClass(Eshift)
dAVA = EnergyClass(dAVA)
res_Energy = {'dE_0-1': Eshift, 'dE_elstat(exct-grnd)': dAVA}
res_Energy['E_pol2_Alpha(E)'] = EnergyClass(Polar2_AlphaE)
res_Energy['E_pol2_Alpha(-E)'] = EnergyClass(Polar2_Alpha_E)
res_Energy['E_pol1_Beta(E,E)'] = EnergyClass(Polar1_Beta_EE)
res_Energy['E_pol2_static_(exct-grnd)'] = EnergyClass(Polar2_Alpha_st_ex - Polar2_Alpha_st_gr)
res_Energy['Pol1-env_static_(exct-grnd)'] = EnergyClass(Polar1_env_Alpha_st_ex - Polar1_env_Alpha_st_gr)
res_Energy['Pol2-env_static_(exct-grnd)'] = EnergyClass(Polar2_env_Alpha_st_ex - Polar2_env_Alpha_st_gr)
res_Energy['Pol1-env_Alpha(E)_(trans)'] = EnergyClass(E_Pol1_env_AE_tr)
res_Energy['Pol1-env_Alpha(-E)_(trans)'] = EnergyClass(E_Pol1_env_A_E_tr)
res_Energy['Pol1-env_static_(trans)'] = EnergyClass(Pol1_env_Alpha_st_tr)
# with energy_units('1/cm'):
# print(Eshift.value,dAVA.value,res_Energy['E_pol2_Alpha(E)'].value,res_Energy['E_pol2_Alpha(-E)'].value,res_Energy['E_pol2_static_(exct-grnd)'].value,res_Energy['Pol2-env_static_(exct-grnd)'].value)
return Eshift, res_Energy, TrDip
else:
raise ValueError('Unsupported approximation')
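`get_SingleDefectProperties_new` builds its polarizable-to-electrostatic displacement tensor with `np.tile`/`np.swapaxes`. A self-contained sketch of the same construction for a single coordinate set, including the zeroed self-interaction diagonal (illustrative only; the method uses two different coordinate sets):

```python
import numpy as np

coor = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
Nat = coor.shape[0]
R_tile = np.tile(coor, (Nat, 1, 1))        # R_tile[i, j] = coor[j]
R = R_tile - np.swapaxes(R_tile, 0, 1)     # R[i, j] = coor[j] - coor[i]
for ii in range(Nat):
    R[ii, ii, :] = 0.0                     # exclude self-interaction
```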
def get_SingleDefect_derivation(self, gr_charge, ex_charge, FG_elstat, struc, index, E01, order=2, approx=1.1):
''' Calculate derivatives of single-defect properties with respect to
atomic coordinates.
'''
# Set initial charges
tr_charge = self.charge[index]
FG_charge_orig = FG_elstat.charge[index]
FG_charge = FG_elstat.charge.copy()
FG_charge[index] = 0.0
dAVA, dR_dAVA = FG_elstat.get_EnergyShift_and_Derivative()
# calculate interaction between transition charges and environment atoms
FG_elstat.charge[index] = tr_charge
Eelstat_trans=FG_elstat.get_EnergyShift()
FG_elstat.charge[index] = FG_charge_orig
# Set distance matrix - polarizable atoms x electrostatic atoms
R_elst = np.tile(struc.coor._value,(self.Nat,1,1))
R_pol = np.tile(self.coor,(struc.nat,1,1))
R_pol_elst = (R_elst - np.swapaxes(R_pol,0,1)) # R_pol_elst[ii,jj,:] = struc.coor[jj] - self.coor[ii]
# TODO: Maybe also exclude fluorines connected to atoms ii
for ii in range(self.Nat):
R_pol_elst[ii,ii,:] = 0.0 # self-interaction is not permitted in the potential calculation
# The negative signs are there because we want to calculate dE/dR and not d(El(Rn)*1/2*Alpha*El(Rn))/dR
# calculate first-order derivative - Polar1_Alpha(E)
dR_pol1_AlphaE = -self._dR_BpA(index, index, tr_charge, tr_charge, 'AlphaE')
# calculate first-order derivative - Polar1_Alpha(-E)
dR_pol1_Alpha_E = -self._dR_BpA(index, index, tr_charge, tr_charge, 'Alpha_E')
# calculate second-order derivative - Polar2_Alpha(E)
dR_pol2_AlphaE = -self._dR_BppA(index, index, tr_charge, tr_charge, 'AlphaE')
# calculate second-order derivative - Polar2_Alpha(-E)
dR_pol2_Alpha_E = -self._dR_BppA(index, index, tr_charge, tr_charge, 'Alpha_E')
# calculate first-order derivative - Polar1_static for excited and ground charges
dR_pol1_static_grnd = -self._dR_BpA(index, index, gr_charge, gr_charge, 'Alpha_st')
dR_pol1_static_exct = -self._dR_BpA(index, index, ex_charge, ex_charge, 'Alpha_st')
# calculate second-order derivative - Polar2_static for excited and ground charges
dR_pol2_static_grnd = -self._dR_BppA(index, index, gr_charge, gr_charge, 'Alpha_st')
dR_pol2_static_exct = -self._dR_BppA(index, index, ex_charge, ex_charge, 'Alpha_st')
# calculate first-order derivative - Polar1_Beta(E,E)
dR_pol1_BetaEE = -self._dR_BpA(index, index, tr_charge, tr_charge, 'BetaEE')
# calculate first-order derivative of Polar1-env with static polarizability
# (_dR_ApEnv returns a pair, which cannot be negated as a whole - negate element-wise)
dR_pol1_env_static_ex_gr, dR_pol1_env_static_ex_gr_env = [-x for x in self._dR_ApEnv(index,ex_charge-gr_charge,FG_elstat.coor,FG_charge,'Alpha_st')]
# these terms could possibly be left out:
dR_pol1_env_AlphaE_tr, dR_pol1_env_AlphaE_tr_env = [-x for x in self._dR_ApEnv(index,tr_charge,FG_elstat.coor,FG_charge,'AlphaE')]
dR_pol1_env_Alpha_E_tr, dR_pol1_env_Alpha_E_tr_env = [-x for x in self._dR_ApEnv(index,tr_charge,FG_elstat.coor,FG_charge,'Alpha_E')]
dR_pol1_env_static_tr, dR_pol1_env_static_tr_env = [-x for x in self._dR_ApEnv(index,tr_charge,FG_elstat.coor,FG_charge,'Alpha_st')]
# calculate Beta polarizability
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index] = tr_charge
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
Polar1_Beta_EE = self._get_interaction_energy(index,charge=tr_charge,debug=False)
# TODO: Add derivative of pol-env
# TODO: Split environment contribution and polarizable atoms contribution - both different dimensions
if approx==1.1:
# Calculate transition energy shift
dR_Eshift_env = dR_dAVA
dR_Eshift = (dR_pol1_AlphaE + dR_pol2_AlphaE)
dR_Eshift -= (dR_pol1_Alpha_E + dR_pol2_Alpha_E)
dR_Eshift -= (self.VinterFG - dAVA)*dR_pol1_BetaEE
dR_Eshift_env += dR_dAVA*Polar1_Beta_EE
dR_Eshift += (dR_pol1_static_exct + dR_pol2_static_exct)
dR_Eshift -= (dR_pol1_static_grnd + dR_pol2_static_grnd)
dR_Eshift += dR_pol1_env_static_ex_gr
dR_Eshift_env += dR_pol1_env_static_ex_gr_env
# this could perhaps be left out
dR_Eshift += Eelstat_trans/E01._value * ( 2*dR_pol1_env_AlphaE_tr +
4*dR_pol1_env_static_tr +
2*dR_pol1_env_Alpha_E_tr)
dR_Eshift_env += Eelstat_trans/E01._value * ( 2*dR_pol1_env_AlphaE_tr_env +
4*dR_pol1_env_static_tr_env +
2*dR_pol1_env_Alpha_E_tr_env)
# Eshift += 2*(Polar2_env_Alpha_st_ex - Polar1_env_Alpha_st_ex - Polar2_env_Alpha_st_gr + Polar1_env_Alpha_st_gr)
return dR_Eshift, dR_Eshift_env
else:
raise IOError('Unsupported approximation')
def get_HeterodimerProperties(self, index1, index2, Eng1, Eng2, dAVA=0.0, dBVB=0.0, order=80, approx=1.1):
''' Calculate effects of the environment for structure with two different
defects such as interaction energy, site transition energy shifts and
changes in transition dipoles
Parameters
----------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
Eng1 : float
Vacuum transition energy of the first defect in ATOMIC UNITS (Hartree)
Eng2 : float
Vacuum transition energy of the second defect in ATOMIC UNITS (Hartree)
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in the electrostatic
interaction energy between the first defect and the environment for the
defect in the excited state <A|V|A> and in the ground state <G|V|G>.
dBVB : float
**dBVB = <B|V|B> - <G|V|G>** Difference in the electrostatic
interaction energy between the second defect and the environment for the
defect in the excited state <B|V|B> and in the ground state <G|V|G>.
order : integer (optional - init = 80)
Specify how many SCF steps should be used in the calculation of induced
dipoles - according to the model used it should be 2
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary condition
Returns
-------
J_inter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
AllDipAE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(E) atomic polarizability
AllDipA_E : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(-E) atomic polarizability
AllDipBE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Beta(E,E) atomic polarizability
'''
# Get TrEsp interaction energy
E_TrEsp = self.get_TrEsp_Eng(index1, index2)
# Calculate polarization matrices
PolarMat_AlphaE, dip_AlphaE1, dip_AlphaE2, AllDipAE1, AllDipAE2 = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=order)
PolarMat_Alpha_E, dip_Alpha_E1, dip_Alpha_E2, AllDipA_E1, AllDipA_E2 = self._fill_Polar_matrix(index1,index2,typ='Alpha_E',order=order)
PolarMat_Beta, dip_Beta1, dip_Beta2, AllDipBE1, AllDipBE2 = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=order//2)
# calculate new eigenstates and energies
HH=np.zeros((2,2),dtype='f8')
if Eng1<Eng2:
HH[0,0] = Eng1+dAVA
HH[1,1] = Eng2+dBVB
else:
HH[1,1] = Eng1+dAVA
HH[0,0] = Eng2+dBVB
HH[0,1] = E_TrEsp
HH[1,0] = HH[0,1]
Energy,Coeff=np.linalg.eigh(HH)
d_esp=np.sqrt( E_TrEsp**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 ) # sqrt( (<A|V|B>)**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 )
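The 2×2 effective Hamiltonian assembled above has a closed-form spectrum: the eigenvalues are the mean site energy plus or minus sqrt(J² + Δ²/4), which is exactly the quantity stored in `d_esp`. A self-contained check that `np.linalg.eigh` reproduces this (the numbers are arbitrary illustrative values in atomic units):

```python
import numpy as np

E1, E2, J = 0.10, 0.12, 0.005      # site energies and coupling (arbitrary AU)
HH = np.array([[E1, J],
               [J,  E2]])
Energy, Coeff = np.linalg.eigh(HH)  # eigenvalues ascending, columns = eigenvectors

mean = 0.5 * (E1 + E2)
d = np.sqrt(J**2 + ((E2 - E1) / 2)**2)  # analogue of d_esp above
analytic = np.array([mean - d, mean + d])
```

Because `eigh` returns eigenvalues in ascending order, placing the lower site energy in `HH[0,0]` (as the `Eng1<Eng2` branch above does) keeps a stable correspondence between states and defects.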
# Calculate interaction energies
if approx==1.1:
# Calculate Total polarizability matrix
PolarMat = PolarMat_AlphaE + PolarMat_Alpha_E + PolarMat_Beta*(dAVA/2 + dBVB/2 - self.VinterFG)
# Calculate interaction energies
C1 = Coeff.T[0]
E1 = Energy[0] + np.dot(C1, np.dot(PolarMat - d_esp*PolarMat_Beta, C1.T))
C2 = Coeff.T[1]
E2 = Energy[1] + np.dot(C2, np.dot(PolarMat + d_esp*PolarMat_Beta, C2.T))
J_inter = np.sqrt( (E2 - E1)**2 - (Eng2 - Eng1)**2 )/2*np.sign(E_TrEsp)
# Calculate energy shifts for every defect
Eshift1 = dAVA + PolarMat_AlphaE[0,0] - PolarMat_Alpha_E[1,1]
Eshift1 -= (self.VinterFG - dAVA)*PolarMat_Beta[0,0]
Eshift2 = dBVB + PolarMat_AlphaE[1,1] - PolarMat_Alpha_E[0,0]
Eshift2 -= (self.VinterFG - dBVB)*PolarMat_Beta[1,1]
# Calculate transition dipoles for every defect
TrDip1 = np.dot(self.charge[index1],self.coor[index1,:]) # vacuum transition dipole for single defect
TrDip1 = TrDip1*(1 + PolarMat_Beta[0,0]/4) + dip_AlphaE1 + dip_Alpha_E1
TrDip1 -= (self.VinterFG - dAVA)*dip_Beta1
TrDip2 = np.dot(self.charge[index2],self.coor[index2,:]) # vacuum transition dipole for single defect
TrDip2 = TrDip2*(1 + PolarMat_Beta[1,1]/4) + dip_AlphaE2 + dip_Alpha_E2
TrDip2 -= (self.VinterFG - dBVB)*dip_Beta2
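The "vacuum transition dipole" term above is simply the first moment of the transition charges, mu_k = Σ_i q_i r_i[k], computed with a single `np.dot`. A minimal sketch with made-up charges and positions:

```python
import numpy as np

tr_charge = np.array([0.2, -0.2])       # transition (TrEsp) charges, illustrative
coor = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]])      # atomic positions (Bohr), illustrative

# mu_k = sum_i q_i * r_i[k]; np.dot contracts over the atom index
TrDip = np.dot(tr_charge, coor)
```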
# Change to energy class
with energy_units('AU'):
J_inter = EnergyClass(J_inter)
Eshift1 = EnergyClass(Eshift1)
Eshift2 = EnergyClass(Eshift2)
return J_inter, Eshift1, Eshift2, TrDip1, TrDip2, AllDipAE1, AllDipA_E1, AllDipBE1
else:
raise IOError('Unsupported approximation')
def _TEST_HeterodimerProperties(self, gr_charge1, ex_charge1, gr_charge2, ex_charge2, FG_charge, struc, index1, index2, Eng1, Eng2, dAVA=0.0, dBVB=0.0, order=80, approx=1.1):
''' Calculate effects of the environment for structure with two different
defects such as interaction energy, site transition energy shifts and
changes in transition dipoles
Parameters
----------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
Eng1 : float
Vacuum transition energy of the first defect in ATOMIC UNITS (Hartree)
Eng2 : float
Vacuum transition energy of the second defect in ATOMIC UNITS (Hartree)
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in the electrostatic
interaction energy between the first defect and the environment for the
defect in the excited state <A|V|A> and in the ground state <G|V|G>.
dBVB : float
**dBVB = <B|V|B> - <G|V|G>** Difference in the electrostatic
interaction energy between the second defect and the environment for the
defect in the excited state <B|V|B> and in the ground state <G|V|G>.
order : integer (optional - init = 80)
Specify how many SCF steps should be used in the calculation of induced
dipoles - according to the model used it should be 2
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary condition
Returns
-------
J_inter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
AllDipAE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(E) atomic polarizability
AllDipA_E : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(-E) atomic polarizability
AllDipBE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Beta(E,E) atomic polarizability
'''
res = {}
# Get TrEsp interaction energy
E_TrEsp = self.get_TrEsp_Eng(index1, index2)
# Calculate polarization matrices (orders 1 and 2)
PolarMat1_AlphaE, dip_AlphaE1, dip_AlphaE2, AllDipAE1, AllDipAE2 = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=1)
PolarMat1_Alpha_E, dip_Alpha_E1, dip_Alpha_E2, AllDipA_E1, AllDipA_E2 = self._fill_Polar_matrix(index1,index2,typ='Alpha_E',order=1)
PolarMat_AlphaE, dip_AlphaE1, dip_AlphaE2, AllDipAE1, AllDipAE2 = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=2)
PolarMat_Alpha_E, dip_Alpha_E1, dip_Alpha_E2, AllDipA_E1, AllDipA_E2 = self._fill_Polar_matrix(index1,index2,typ='Alpha_E',order=2)
PolarMat_Beta, dip_Beta1, dip_Beta2, AllDipBE1, AllDipBE2 = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=order//2)
res["E_pol2_A(E)"] = (PolarMat_AlphaE - PolarMat1_AlphaE) * conversion_facs_energy["1/cm"]
res["E_pol2_A(-E)"] = (PolarMat_Alpha_E - PolarMat1_Alpha_E) * conversion_facs_energy["1/cm"]
res["E_pol2_B(E,E)"] = PolarMat_Beta
""" Aditional first order contribution """
# gr_charge1, ex_charge1, gr_charge2, ex_charge2
tr_charge1 = self.charge[index1]
tr_charge2 = self.charge[index2]
self.charge[index1] = gr_charge1
self.charge[index2] = ex_charge2
PolarMat_Alpha_st_gr_ex, dip_Alpha_st1_gr, dip_Alpha_st2_ex, AllDipA_st1_gr, AllDipA_st2_ex = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=1)
self.charge[index1] = ex_charge1
self.charge[index2] = gr_charge2
PolarMat_Alpha_st_ex_gr, dip_Alpha_st1_ex, dip_Alpha_st2_gr, AllDipA_st1_ex, AllDipA_st2_gr = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=1)
# charges for the ground state and excited state are the same => correct
# the difference between the first and second defect lies in non-symmetrical charges - repeat the fit with symmetry constraints
PolarMat_Alpha_st = np.zeros((2,2),dtype='f8')
PolarMat_Alpha_st[0,0] = np.sum(PolarMat_Alpha_st_ex_gr) # PolarMat_Alpha_st_ex_gr[0,0] + PolarMat_Alpha_st_ex_gr[1,1] + 2*PolarMat_Alpha_st_ex_gr[0,1]
PolarMat_Alpha_st[1,1] = np.sum(PolarMat_Alpha_st_gr_ex) # PolarMat_Alpha_st_gr_ex[0,0] + PolarMat_Alpha_st_gr_ex[1,1] + 2*PolarMat_Alpha_st_gr_ex[0,1]
# pol1-env
#-----------------------------------
# Set distance matrix
R_elst = np.tile(struc.coor._value,(self.Nat,1,1))
R_pol = np.tile(self.coor,(struc.nat,1,1))
R = (R_elst - np.swapaxes(R_pol,0,1)) # R[ii,jj,:] = struc.coor[jj] - self.coor[ii]
# with normal ordering carbon atoms come first, followed by fluorine atoms - carbon atoms then have the same indexes in pol_mol as in struc
for ii in range(self.Nat):
R[ii,ii,:] = 0.0 # self-interaction is not permitted in the potential calculation
# TODO: Maybe also exclude fluorines bonded to atoms ii
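The `np.tile`/`np.swapaxes` construction above builds the displacement tensor R[i, j, :] = coor_env[j] − coor_pol[i]; plain NumPy broadcasting produces the same tensor without the intermediate tiled copies. A small equivalence check on made-up coordinates:

```python
import numpy as np

coor_pol = np.random.default_rng(0).normal(size=(4, 3))   # polarizable atoms
coor_env = np.random.default_rng(1).normal(size=(5, 3))   # environment atoms

# tile/swapaxes construction, mirroring the method above
R_elst = np.tile(coor_env, (len(coor_pol), 1, 1))         # (4, 5, 3)
R_pol = np.tile(coor_pol, (len(coor_env), 1, 1))          # (5, 4, 3)
R_tile = R_elst - np.swapaxes(R_pol, 0, 1)

# broadcasting equivalent: R[i, j, :] = coor_env[j] - coor_pol[i]
R_bcast = coor_env[np.newaxis, :, :] - coor_pol[:, np.newaxis, :]
```

The broadcast form allocates only the result, which matters when Nat is in the thousands.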
# Calculate potential of induced dipoles
pot1_dipole_Alpha_st1_gr = potential_dipole(AllDipA_st1_gr,R)
pot1_dipole_Alpha_st1_ex = potential_dipole(AllDipA_st1_ex,R)
pot1_dipole_Alpha_st2_gr = potential_dipole(AllDipA_st2_gr,R)
pot1_dipole_Alpha_st2_ex = potential_dipole(AllDipA_st2_ex,R)
# calculate interaction energies with environment
FG_charge_tmp = FG_charge.charge.copy()
FG_charge_tmp[index1] = 0.0
FG_charge_tmp[index2] = 0.0
E_Pol1_env_static_gr1_FG = np.dot(FG_charge_tmp,pot1_dipole_Alpha_st1_gr)
E_Pol1_env_static_ex1_FG = np.dot(FG_charge_tmp,pot1_dipole_Alpha_st1_ex)
E_Pol1_env_static_gr2_FG = np.dot(FG_charge_tmp,pot1_dipole_Alpha_st2_gr)
E_Pol1_env_static_ex2_FG = np.dot(FG_charge_tmp,pot1_dipole_Alpha_st2_ex)
PolarMat_Alpha_st[0,0] = 2*( E_Pol1_env_static_ex1_FG + E_Pol1_env_static_gr2_FG )
PolarMat_Alpha_st[1,1] = 2*( E_Pol1_env_static_gr1_FG + E_Pol1_env_static_ex2_FG )
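`potential_dipole` above evaluates the electrostatic potential of the induced point dipoles at the environment sites, φ(r) = d·r / |r|³; its actual signature in this module is assumed, but a minimal version operating on the displacement tensor R has this shape:

```python
import numpy as np

def potential_dipole_sketch(dipoles, R):
    """Potential of point dipoles at each target site (sketch, not the module API).

    dipoles : (Npol, 3) induced dipole on each polarizable atom
    R       : (Npol, Ntar, 3) displacements R[i, j] = r_target[j] - r_dipole[i]
    Returns (Ntar,) potential at each target site.
    """
    r = np.linalg.norm(R, axis=2)         # (Npol, Ntar) distances
    r = np.where(r == 0.0, np.inf, r)     # zeroed self-interaction rows contribute nothing
    # phi_j = sum_i d_i . R[i, j] / |R[i, j]|^3
    return np.einsum('ik,ijk->ij', dipoles, R / r[..., None]**3).sum(axis=0)

# one unit dipole along z, target on the axis at distance 2: phi = 2/2**3 = 0.25
phi_demo = potential_dipole_sketch(np.array([[0.0, 0.0, 1.0]]),
                                   np.array([[[0.0, 0.0, 2.0]]]))
```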
# return transition charges back
self.charge[index1] = tr_charge1
self.charge[index2] = tr_charge2
""" Aditional second order contribution - Comparison of magnitudes """
# Calculate polarization matrix A_grnd B_exct
self.charge[index1] = gr_charge1
self.charge[index2] = ex_charge2
PolarMat_Beta_gr_ex, dip_Beta1_gr, dip_Beta2_ex, AllDipBE1_gr, AllDipBE2_ex = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=1)
# Calculate polarization matrix A_exct B_grnd
self.charge[index1] = ex_charge1
self.charge[index2] = gr_charge2
PolarMat_Beta_ex_gr, dip_Beta1_ex, dip_Beta2_gr, AllDipBE1_ex, AllDipBE2_gr = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=1)
res["E_pol1_B(E,E)_(A_exct,B_grnd)"] = PolarMat_Beta_ex_gr
res["E_pol1_B(E,E)_(A_grnd,B_exct)"] = PolarMat_Beta_gr_ex
# calculate pol-env for previous:
pot1A_dipole_BEE_gr = potential_dipole(AllDipBE1_gr,R)
pot1A_dipole_BEE_ex = potential_dipole(AllDipBE1_ex,R)
pot1B_dipole_BEE_gr = potential_dipole(AllDipBE2_gr,R)
pot1B_dipole_BEE_ex = potential_dipole(AllDipBE2_ex,R)
PolarMat_env_Beta_ex = np.zeros((2,2),dtype="f8")
PolarMat_env_Beta_gr = np.zeros((2,2),dtype="f8")
PolarMat_env_Beta_ex[0,0] = np.dot(FG_charge_tmp,pot1A_dipole_BEE_ex)
PolarMat_env_Beta_ex[1,1] = np.dot(FG_charge_tmp,pot1B_dipole_BEE_ex)
PolarMat_env_Beta_gr[0,0] = np.dot(FG_charge_tmp,pot1B_dipole_BEE_gr)
PolarMat_env_Beta_gr[1,1] = np.dot(FG_charge_tmp,pot1A_dipole_BEE_gr)
res["E_pol1-env_B(E,E)_grnd"] = PolarMat_env_Beta_gr
res["E_pol1-env_B(E,E)_exct"] = PolarMat_env_Beta_ex
# Calculate second-order contribution to the first-order quantities
self.charge[index1] = gr_charge1
self.charge[index2] = ex_charge2
PolarMat2_Alpha_st_gr_ex, dumm, dumm, AllDipA2_st1_gr, AllDipA2_st2_ex = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=2)
PolarMat2_Alpha_st_gr_ex = PolarMat2_Alpha_st_gr_ex - PolarMat_Alpha_st_gr_ex
self.charge[index1] = ex_charge1
self.charge[index2] = gr_charge2
PolarMat2_Alpha_st_ex_gr, dumm, dumm, AllDipA2_st1_ex, AllDipA2_st2_gr = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=2)
PolarMat2_Alpha_st_ex_gr = PolarMat2_Alpha_st_ex_gr - PolarMat_Alpha_st_ex_gr
res["E_pol2_st_(A_exct,B_grnd)"] = PolarMat2_Alpha_st_ex_gr * conversion_facs_energy["1/cm"]
res["E_pol2_st_(A_grnd,B_exct)"] = PolarMat2_Alpha_st_gr_ex * conversion_facs_energy["1/cm"]
pot2A_dipole_st_gr = potential_dipole(AllDipA2_st1_gr - AllDipA_st1_gr,R)
pot2A_dipole_st_ex = potential_dipole(AllDipA2_st1_ex - AllDipA_st1_ex,R)
pot2B_dipole_st_gr = potential_dipole(AllDipA2_st2_gr - AllDipA_st2_gr,R)
pot2B_dipole_st_ex = potential_dipole(AllDipA2_st2_ex - AllDipA_st2_ex,R)
PolarMat2_env_st_ex = np.zeros((2,2),dtype="f8")
PolarMat2_env_st_gr = np.zeros((2,2),dtype="f8")
PolarMat2_env_st_ex[0,0] = np.dot(FG_charge_tmp,pot2A_dipole_st_ex)
PolarMat2_env_st_ex[1,1] = np.dot(FG_charge_tmp,pot2B_dipole_st_ex)
PolarMat2_env_st_gr[0,0] = np.dot(FG_charge_tmp,pot2B_dipole_st_gr)
PolarMat2_env_st_gr[1,1] = np.dot(FG_charge_tmp,pot2A_dipole_st_gr)
res["E_pol2-env_st_grnd"] = PolarMat2_env_st_gr * conversion_facs_energy["1/cm"]
res["E_pol2-env_st_exct"] = PolarMat2_env_st_ex * conversion_facs_energy["1/cm"]
# Calculate polarization matrices A_grnd B_0->1
self.charge[index1] = tr_charge1
self.charge[index2] = np.zeros(len(index2),dtype='f8')
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
E_AB_pol1_tr_gr_1 = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
E_A_pol1_tr_gr = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index1] = np.zeros(len(index1),dtype='f8')
self.charge[index2] = tr_charge2
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index2] = np.zeros(len(index2),dtype='f8')
E_AB_pol1_gr_tr_1 = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
E_B_pol1_tr_gr = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
self.charge[index1] = gr_charge1
self.charge[index2] = np.zeros(len(index2),dtype='f8')
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
E_AB_pol1_gr_tr_2 = self._get_interaction_energy(index2,charge=tr_charge2,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
self.charge[index2] = gr_charge2
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index2] = np.zeros(len(index2),dtype='f8')
E_AB_pol1_tr_gr_2 = self._get_interaction_energy(index1,charge=tr_charge1,debug=False)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
# consistency checks - the heterodimer is assumed to share ground state and transition charges
if (gr_charge1!=gr_charge2).any() :
raise IOError("Heterodimer should have the same ground state charges")
if (tr_charge1!=tr_charge2).any() :
raise IOError("Heterodimer should have the same transition charges")
self.charge[index1] = gr_charge1
self.charge[index2] = tr_charge2
PolarMat_AlphaE_gr_tr, dip_AlphaE1_gr, dip_AlphaE2_tr, AllDipAE1_gr, AllDipAE2_tr = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=1)
E_AB_pol1_gr_tr = PolarMat_AlphaE_gr_tr[0,1]
self.charge[index1] = tr_charge1
self.charge[index2] = gr_charge2
PolarMat_AlphaE_gr_tr, dip_AlphaE1_gr, dip_AlphaE2_tr, AllDipAE1_gr, AllDipAE2_tr = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=1)
E_AB_pol1_tr_gr = PolarMat_AlphaE_gr_tr[0,1]
res["E_pol1_B(E,E)_(tr_gr,ex)"] = np.zeros((2,2),dtype="f8")
self.charge[index1] = tr_charge1
self.charge[index2] = np.zeros(len(index2),dtype='f8')
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
res["E_pol1_B(E,E)_(tr_gr,ex)"][0,0] = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
res["E_pol1_B(E,E)_(tr_gr,ex)"][0,1] = self._get_interaction_energy(index1,charge=ex_charge1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
self.charge[index2] = tr_charge2
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self._calc_dipoles_All('BetaEE',NN=1,eps=1,debug=False)
self.charge[index2] = np.zeros(len(index2),dtype='f8')
res["E_pol1_B(E,E)_(tr_gr,ex)"][1,0] = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
res["E_pol1_B(E,E)_(tr_gr,ex)"][1,1] = self._get_interaction_energy(index2,charge=ex_charge2,debug=False)
# return transition charges back
self.charge[index1] = tr_charge1
self.charge[index2] = tr_charge2
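The repeated save / overwrite / restore dance with `self.charge[index]` throughout this method is fragile: an exception anywhere in the middle leaves the object carrying the wrong charges. A small context manager makes the restore automatic; sketched on a bare toy object whose `charge` attribute mirrors this class (everything else is illustrative):

```python
import numpy as np
from contextlib import contextmanager

@contextmanager
def temporary_charges(obj, index, new_charge):
    """Temporarily replace obj.charge[index], restoring on exit even on error."""
    saved = obj.charge[index].copy()
    obj.charge[index] = new_charge
    try:
        yield obj
    finally:
        obj.charge[index] = saved

class Toy:
    def __init__(self):
        self.charge = np.zeros(4)

mol = Toy()
with temporary_charges(mol, [0, 1], np.array([0.5, -0.5])):
    inside = mol.charge.copy()   # charges swapped only within the block
after = mol.charge.copy()        # original charges restored here
```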
# compare electrostatic energies - TEST
VAB_0101 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = ex_charge1
VAB_1101 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = gr_charge1
VAB_0001 = self.get_TrEsp_Eng(index1, index2)
self.charge[index2] = gr_charge2
VAB_0000 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = ex_charge1
self.charge[index2] = ex_charge2
VAB_1111 = self.get_TrEsp_Eng(index1, index2)
self.charge[index2] = gr_charge2
VAB_1100 = self.get_TrEsp_Eng(index1, index2)
charge_orig1 = FG_charge.charge[index1]
charge_orig2 = FG_charge.charge[index2]
FG_charge.charge[index1] = gr_charge1
FG_charge.charge[index2] = 0.0
E_grnd=FG_charge.get_EnergyShift()
FG_charge.charge[index1] = ex_charge1
FG_charge.charge[index2] = 0.0
E_exct=FG_charge.get_EnergyShift()
FG_charge.charge[index1] = tr_charge1
FG_charge.charge[index2] = 0.0
E_trans=FG_charge.get_EnergyShift()
FG_charge.charge[index1] = charge_orig1
FG_charge.charge[index2] = charge_orig2
self.charge[index1] = tr_charge1
self.charge[index2] = tr_charge2
# calculate new eigenstates and energies
HH=np.zeros((2,2),dtype='f8')
if Eng1<Eng2:
HH[0,0] = Eng1+dAVA
HH[1,1] = Eng2+dBVB
else:
HH[1,1] = Eng1+dAVA
HH[0,0] = Eng2+dBVB
HH[0,1] = E_TrEsp
HH[1,0] = HH[0,1]
Energy,Coeff=np.linalg.eigh(HH)
d_esp=np.sqrt( E_TrEsp**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 ) # sqrt( (<A|V|B>)**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 )
# Calculate interaction energies
if approx==1.1:
# Calculate Total polarizability matrix
PolarMat = PolarMat_AlphaE + PolarMat_Alpha_E + PolarMat_Alpha_st + PolarMat_Beta*(dAVA/2 + dBVB/2 - self.VinterFG)
# Calculate interaction energies
C1 = Coeff.T[0]
E1 = Energy[0] + np.dot(C1, np.dot(PolarMat - d_esp*PolarMat_Beta, C1.T))
C2 = Coeff.T[1]
E2 = Energy[1] + np.dot(C2, np.dot(PolarMat + d_esp*PolarMat_Beta, C2.T))
J_inter = np.sqrt( (E2 - E1)**2 - (Eng2 - Eng1)**2 )/2*np.sign(E_TrEsp)
# Calculate energy shifts for every defect
Eshift1 = dAVA + PolarMat_AlphaE[0,0] - PolarMat_Alpha_E[1,1]
Eshift1 -= (self.VinterFG - dAVA)*PolarMat_Beta[0,0]
Eshift2 = dBVB + PolarMat_AlphaE[1,1] - PolarMat_Alpha_E[0,0]
Eshift2 -= (self.VinterFG - dBVB)*PolarMat_Beta[1,1]
# Calculate transition dipoles for every defect
TrDip1 = np.dot(self.charge[index1],self.coor[index1,:]) # vacuum transition dipole for single defect
TrDip1 = TrDip1*(1 + PolarMat_Beta[0,0]/4) + dip_AlphaE1 + dip_Alpha_E1
TrDip1 -= (self.VinterFG - dAVA)*dip_Beta1
TrDip2 = np.dot(self.charge[index2],self.coor[index2,:]) # vacuum transition dipole for single defect
TrDip2 = TrDip2*(1 + PolarMat_Beta[1,1]/4) + dip_AlphaE2 + dip_Alpha_E2
TrDip2 -= (self.VinterFG - dBVB)*dip_Beta2
# Change to energy class
with energy_units('AU'):
J_inter = EnergyClass(J_inter)
Eshift1 = EnergyClass(Eshift1)
Eshift2 = EnergyClass(Eshift2)
E_pol_static1_ex_gr = EnergyClass(PolarMat_Alpha_st_ex_gr[0,0]-PolarMat_Alpha_st_gr_ex[0,0])
E_pol_static2_ex_gr = EnergyClass(PolarMat_Alpha_st_gr_ex[1,1]-PolarMat_Alpha_st_ex_gr[1,1])
E_pol_env_static1_ex_gr = EnergyClass(E_Pol1_env_static_ex1_FG - E_Pol1_env_static_gr1_FG)
E_pol_env_static2_ex_gr = EnergyClass(E_Pol1_env_static_ex2_FG - E_Pol1_env_static_gr2_FG)
VAB_0101 = EnergyClass(VAB_0101)
VAB_1101 = EnergyClass(VAB_1101)
VAB_0001 = EnergyClass(VAB_0001)
VAB_0000 = EnergyClass(VAB_0000)
VAB_1111 = EnergyClass(VAB_1111)
VAB_1100 = EnergyClass(VAB_1100)
E_grnd = EnergyClass(E_grnd)
E_exct = EnergyClass(E_exct)
E_trans = EnergyClass(E_trans)
E_AB_pol1_gr_tr = EnergyClass(E_AB_pol1_gr_tr)
E_AB_pol1_tr_gr = EnergyClass(E_AB_pol1_tr_gr)
E_AB_pol1_gr_tr_1 = EnergyClass(E_AB_pol1_gr_tr_1)
E_AB_pol1_tr_gr_1 = EnergyClass(E_AB_pol1_tr_gr_1)
E_AB_pol1_gr_tr_2 = EnergyClass(E_AB_pol1_gr_tr_2)
E_AB_pol1_tr_gr_2 = EnergyClass(E_AB_pol1_tr_gr_2)
E_A_pol1_tr_gr = EnergyClass(E_A_pol1_tr_gr)
E_B_pol1_tr_gr = EnergyClass(E_B_pol1_tr_gr)
with energy_units("1/cm"):
print("EA_pol1_s_ex_gr EA_pol1_env_s_ex_gr EAB_pol1_tr_gr EA_pol1_tr_gr")
print(" {:9.4f} {:9.4f} {:9.4f} {:9.4f}".format(
E_pol_static1_ex_gr.value,
E_pol_env_static1_ex_gr.value,
E_AB_pol1_tr_gr.value,
E_A_pol1_tr_gr.value))
print(" VAB_0101 VAB_1101 VAB_0001 VAB_0000 VAB_1111 VAB_1100 E_grnd E_exct E_trans")
print(VAB_0101.value, VAB_1101.value, VAB_0001.value, VAB_0000.value, VAB_1111.value, VAB_1100.value, E_grnd.value, E_exct.value, E_trans.value)
# res["E_pol2_A(E)"]
# res["E_pol2_A(-E)"]
# res["E_pol2_B(E,E)"]
# res["E_pol1_B(E,E)_(A_exct,B_grnd)"]
# res["E_pol1_B(E,E)_(A_grnd,B_exct)"]
# res["E_pol1-env_B(E,E)_grnd"]
# res["E_pol1-env_B(E,E)_exct"]
# res["E_pol2_st_(A_exct,B_grnd)"]
# res["E_pol2_st_(A_grnd,B_exct)"]
# res["E_pol2-env_st_grnd"]
# res["E_pol2-env_st_exct"]
return J_inter, Eshift1, Eshift2, TrDip1, TrDip2, AllDipAE1, AllDipA_E1, AllDipBE1, res
else:
raise IOError('Unsupported approximation')
def get_HeterodimerProperties_new(self, gr_charge1, ex_charge1, gr_charge2, ex_charge2, FG_elstat, struc, index1, index2, Eng1, Eng2, eps, dAVA=0.0, dBVB=0.0, order=2, approx=1.1):
''' Calculate effects of the environment for structure with two different
defects such as interaction energy, site transition energy shifts and
changes in transition dipoles
Parameters
----------
index1 : list of integer (dimension Natoms_defect1)
Indexes of all atoms from the first defect (starting from 0)
index2 : list of integer (dimension Natoms_defect2)
Indexes of all atoms from the second defect (starting from 0)
Eng1 : float
Vacuum transition energy of the first defect in ATOMIC UNITS (Hartree)
Eng2 : float
Vacuum transition energy of the second defect in ATOMIC UNITS (Hartree)
dAVA : float
**dAVA = <A|V|A> - <G|V|G>** Difference in the electrostatic
interaction energy between the first defect and the environment for the
defect in the excited state <A|V|A> and in the ground state <G|V|G>.
dBVB : float
**dBVB = <B|V|B> - <G|V|G>** Difference in the electrostatic
interaction energy between the second defect and the environment for the
defect in the excited state <B|V|B> and in the ground state <G|V|G>.
order : integer (optional - init = 2)
Specify how many SCF steps should be used in the calculation of induced
dipoles - according to the model used it should be 2
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the second equality is not a necessary condition
Returns
-------
J_inter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
AllDipAE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(E) atomic polarizability
AllDipA_E : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Alpha(-E) atomic polarizability
AllDipBE : numpy array of float (dimension Natoms x 3)
Induced atomic dipole moments for all atoms in the environment by
the first defect with Beta(E,E) atomic polarizability
'''
# eps = EnergyClass
res = {}
# get transition charge
tr_charge1 = self.charge[index1]
tr_charge2 = self.charge[index2]
# Get fluorographene charges
charge_orig1 = FG_elstat.charge[index1]
charge_orig2 = FG_elstat.charge[index2]
FG_charge = FG_elstat.charge.copy()
FG_charge[index1] = 0.0
FG_charge[index2] = 0.0
# Set distance matrix for the interaction of defects with the environment
R_elst = np.tile(struc.coor._value,(self.Nat,1,1))
R_pol = np.tile(self.coor,(struc.nat,1,1))
R = (R_elst - np.swapaxes(R_pol,0,1)) # R[ii,jj,:] = struc.coor[jj] - self.coor[ii]
# with normal ordering carbon atoms come first, followed by fluorine atoms - carbon atoms then have the same indexes in pol_mol as in struc
# TODO: Maybe also exclude fluorines bonded to atoms ii
for ii in range(self.Nat):
R[ii,ii,:] = 0.0 # self-interaction is not permitted in the potential calculation
# Get vacuum interaction energies (V0100 != V0001) - the ground state electron density is symmetric under inversion while the transition density is antisymmetric, which changes the sign in some cases
E_TrEsp = self.get_TrEsp_Eng(index1, index2)
VAB_0101 = E_TrEsp
self.charge[index1] = ex_charge1
VAB_1101 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = gr_charge1
VAB_0001 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = tr_charge1
self.charge[index2] = ex_charge2
VAB_0111 = self.get_TrEsp_Eng(index1, index2)
self.charge[index2] = gr_charge2
VAB_0100 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = gr_charge1
VAB_0000 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = ex_charge1
VAB_1100 = self.get_TrEsp_Eng(index1, index2)
self.charge[index2] = ex_charge2
VAB_1111 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = gr_charge1
VAB_0011 = self.get_TrEsp_Eng(index1, index2)
self.charge[index1] = tr_charge1
self.charge[index2] = tr_charge2
# get the electrostatic interaction energy of the defects with the environment
FG_elstat.charge[index1] = gr_charge1
FG_elstat.charge[index2] = np.zeros(len(index2),dtype='f8')
EA_grnd=FG_elstat.get_EnergyShift()
FG_elstat.charge[index1] = ex_charge1
EA_exct=FG_elstat.get_EnergyShift()
FG_elstat.charge[index1] = tr_charge1
EA_trans=FG_elstat.get_EnergyShift()
FG_elstat.charge[index1] = np.zeros(len(index1),dtype='f8')
FG_elstat.charge[index2] = gr_charge2
EB_grnd=FG_elstat.get_EnergyShift()
FG_elstat.charge[index2] = ex_charge2
EB_exct=FG_elstat.get_EnergyShift()
FG_elstat.charge[index2] = tr_charge2
EB_trans=FG_elstat.get_EnergyShift()
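`get_EnergyShift` (its exact behavior is assumed here) returns the electrostatic interaction of the currently assigned defect charges with the environment; the underlying quantity is a plain Coulomb double sum, E = Σ_i Σ_j q_i Q_j / |r_i − R_j|. A minimal sketch in atomic units with made-up charges:

```python
import numpy as np

def coulomb_interaction(q1, r1, q2, r2):
    """Electrostatic interaction energy between two disjoint charge sets (AU)."""
    # pairwise distances |r1[i] - r2[j]| via broadcasting
    dist = np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=2)
    return np.sum(np.outer(q1, q2) / dist)

q_def = np.array([0.1, -0.1])                      # defect charges
r_def = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q_env = np.array([0.05])                           # one environment charge
r_env = np.array([[0.0, 3.0, 0.0]])
E = coulomb_interaction(q_def, r_def, q_env, r_env)
```

Zeroing the charges on one defect before the call, as done above, is what isolates the other defect's interaction with the environment.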
# Calculate polarization matrices for the second-order contributions
PolarMat_AlphaE, dip_AlphaE1, dip_AlphaE2, AllDipAE1, AllDipAE2 = self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=order)
PolarMat_Alpha_E, dip_Alpha_E1, dip_Alpha_E2, AllDipA_E1, AllDipA_E2 = self._fill_Polar_matrix(index1,index2,typ='Alpha_E',order=order)
self.charge[index1] = gr_charge1
self.charge[index2] = ex_charge2
PolarMat_Alpha_st_gr_ex, dip_Alpha_st1_gr, dip_Alpha_st2_ex, AllDipA_st1_gr, AllDipA_st2_ex = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=order)
self.charge[index1] = ex_charge1
self.charge[index2] = gr_charge2
PolarMat_Alpha_st_ex_gr, dip_Alpha_st1_ex, dip_Alpha_st2_gr, AllDipA_st1_ex, AllDipA_st2_gr = self._fill_Polar_matrix(index1,index2,typ='Alpha_st',order=order)
res["E_pol2_A(E)"] = PolarMat_AlphaE
res["E_pol2_A(-E)"] = PolarMat_Alpha_E
PolarMat_Alpha_st = np.zeros((2,2),dtype='f8')
PolarMat_Alpha_st[0,0] = np.sum(PolarMat_Alpha_st_ex_gr) # PolarMat_Alpha_st_ex_gr[0,0] + PolarMat_Alpha_st_ex_gr[1,1] + 2*PolarMat_Alpha_st_ex_gr[0,1]
PolarMat_Alpha_st[1,1] = np.sum(PolarMat_Alpha_st_gr_ex) # PolarMat_Alpha_st_gr_ex[0,0] + PolarMat_Alpha_st_gr_ex[1,1] + 2*PolarMat_Alpha_st_gr_ex[0,1]
# Add Alpha static pol-env contribution
pot2_A_dipole_Alpha_st_gr = potential_dipole(AllDipA_st1_gr,R)
pot2_A_dipole_Alpha_st_ex = potential_dipole(AllDipA_st1_ex,R)
pot2_B_dipole_Alpha_st_gr = potential_dipole(AllDipA_st2_gr,R)
pot2_B_dipole_Alpha_st_ex = potential_dipole(AllDipA_st2_ex,R)
EA_Pol2_env_static_gr_FG = np.dot(FG_charge,pot2_A_dipole_Alpha_st_gr)
EA_Pol2_env_static_ex_FG = np.dot(FG_charge,pot2_A_dipole_Alpha_st_ex)
EB_Pol2_env_static_gr_FG = np.dot(FG_charge,pot2_B_dipole_Alpha_st_gr)
EB_Pol2_env_static_ex_FG = np.dot(FG_charge,pot2_B_dipole_Alpha_st_ex)
PolarMat_Alpha_st[0,0] += 2*( EA_Pol2_env_static_ex_FG + EB_Pol2_env_static_gr_FG )
PolarMat_Alpha_st[1,1] += 2*( EA_Pol2_env_static_gr_FG + EB_Pol2_env_static_ex_FG )
res["E_pol2_A_static"] = PolarMat_Alpha_st
# first order electrostatic contribution
ElstatMat_1 = np.zeros((2,2), dtype='f8')
ElstatMat_1[0,0] = (EA_trans + VAB_0100)**2 - (EB_trans + VAB_1101)**2
ElstatMat_1[1,1] = (EB_trans + VAB_0001)**2 - (EA_trans + VAB_0111)**2
ElstatMat_1[0,1] = (EA_trans + VAB_0100)*(EB_trans + VAB_0001) - (EA_trans + VAB_0111)*(EB_trans + VAB_1101)
ElstatMat_1[1,0] = ElstatMat_1[0,1]
ElstatMat_1 = ElstatMat_1/eps._value
res['E_elstat_1'] = ElstatMat_1
# TODO: This electrostatic contribution should be small - print it and check whether it can be neglected
# Calculate polarization matrices for contributions containing only first-order polarizabilities
PolarMat_Beta, dip_Beta1, dip_Beta2, AllDipBE1, AllDipBE2 = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=order//2)
PolarMat_Beta_scaled = ( (VAB_1100 - VAB_0000 + VAB_0011 - VAB_0000 + EA_exct - EA_grnd + EB_exct - EB_grnd)/2 - self.VinterFG)*PolarMat_Beta
res["E_pol2_B(E,E)"] = PolarMat_Beta
res["E_pol2_B(E,E)_scaled"] = PolarMat_Beta_scaled
# TODO: Calculate and check contribution from d_epsilon
# calculate contribution from 0-1 ground interaction with alpha(E) polarizability
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index1] = tr_charge1
self.charge[index2] = np.zeros(len(index2),dtype='f8')
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
E_AB_pol1_tr_gr = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
E_A_pol1_tr_gr = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
Potential = potential_dipole(self.dipole,R)
E_A_pol1_env_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index1] = np.zeros(len(index1),dtype='f8')
self.charge[index2] = tr_charge2
self._calc_dipoles_All('AlphaE',NN=1,eps=1,debug=False)
self.charge[index2] = np.zeros(len(index2),dtype='f8')
E_AB_pol1_gr_tr = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
E_B_pol1_tr_gr = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
Potential = potential_dipole(self.dipole,R)
E_B_pol1_env_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
PolarMat_Alpha_tr_gr = np.zeros((2,2),dtype='f8')
PolarMat_Alpha_tr_gr[0,0] = E_A_pol1_tr_gr + E_AB_pol1_tr_gr + E_A_pol1_env_tr
PolarMat_Alpha_tr_gr[0,1] = E_B_pol1_tr_gr + E_AB_pol1_gr_tr + E_B_pol1_env_tr
PolarMat_Alpha_tr_gr[1,0] = PolarMat_Alpha_tr_gr[0,0]
PolarMat_Alpha_tr_gr[1,1] = PolarMat_Alpha_tr_gr[0,1]
PolarMat_Alpha_tr_gr[0,:] = PolarMat_Alpha_tr_gr[0,:]*( EA_trans + VAB_0100 )/eps._value
PolarMat_Alpha_tr_gr[1,:] = PolarMat_Alpha_tr_gr[1,:]*( EB_trans + VAB_0001 )/eps._value
res["E_pol2_A(E)_(trans,grnd)"] = PolarMat_Alpha_tr_gr
# calculate contribution from 0-1 ground and 0-1 excited interaction with alpha_static polarizability
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index1] = tr_charge1
self.charge[index2] = np.zeros(len(index2),dtype='f8')
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
self.charge[index1] = np.zeros(len(index1),dtype='f8')
E_AB_st_pol1_tr_gr = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
E_AB_st_pol1_tr_ex = self._get_interaction_energy(index2,charge=ex_charge2,debug=False)
E_A_st_pol1_tr_gr = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
E_A_st_pol1_tr_ex = self._get_interaction_energy(index1,charge=ex_charge1,debug=False)
Potential = potential_dipole(self.dipole,R)
E_A_st_pol1_env_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
self.charge[index1] = np.zeros(len(index1),dtype='f8')
self.charge[index2] = tr_charge2
self._calc_dipoles_All('Alpha_st',NN=1,eps=1,debug=False)
self.charge[index2] = np.zeros(len(index2),dtype='f8')
E_AB_st_pol1_gr_tr = self._get_interaction_energy(index1,charge=gr_charge1,debug=False)
E_AB_st_pol1_ex_tr = self._get_interaction_energy(index1,charge=ex_charge1,debug=False)
E_B_st_pol1_tr_gr = self._get_interaction_energy(index2,charge=gr_charge2,debug=False)
E_B_st_pol1_tr_ex = self._get_interaction_energy(index2,charge=ex_charge2,debug=False)
Potential = potential_dipole(self.dipole,R)
E_B_st_pol1_env_tr = np.dot(FG_charge,Potential)
self.dipole = np.zeros((self.Nat,3),dtype='f8')
PolarMat_static_tr_gr_ex = np.zeros((2,2),dtype='f8')
PolarMat_static_tr_gr_ex[0,0] = (EA_trans + VAB_0100)/eps._value * (E_A_st_pol1_tr_ex + E_AB_st_pol1_tr_gr + E_A_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[1,0] = (EB_trans + VAB_0001)/eps._value * (E_A_st_pol1_tr_ex + E_AB_st_pol1_tr_gr + E_A_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[0,1] = (EA_trans + VAB_0100)/eps._value * (E_B_st_pol1_tr_ex + E_AB_st_pol1_gr_tr + E_B_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[1,1] = (EB_trans + VAB_0001)/eps._value * (E_B_st_pol1_tr_ex + E_AB_st_pol1_gr_tr + E_B_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[0,0] -= (EB_trans + VAB_1101)/eps._value * (E_AB_st_pol1_ex_tr + E_B_st_pol1_tr_gr + E_B_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[0,1] -= (EA_trans + VAB_0111)/eps._value * (E_AB_st_pol1_ex_tr + E_B_st_pol1_tr_gr + E_B_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[1,0] -= (EB_trans + VAB_1101)/eps._value * (E_AB_st_pol1_tr_ex + E_A_st_pol1_tr_gr + E_A_st_pol1_env_tr)
PolarMat_static_tr_gr_ex[1,1] -= (EA_trans + VAB_0111)/eps._value * (E_AB_st_pol1_tr_ex + E_A_st_pol1_tr_gr + E_A_st_pol1_env_tr)
res["E_pol1_A_static"] = PolarMat_static_tr_gr_ex
# return charges to original values
FG_elstat.charge[index1] = charge_orig1
FG_elstat.charge[index2] = charge_orig2
self.charge[index1] = tr_charge1
self.charge[index2] = tr_charge2
# calculate new eigenstates and energies
HH=np.zeros((2,2),dtype='f8')
if Eng1<Eng2:
HH[0,0] = Eng1+dAVA
HH[1,1] = Eng2+dBVB
else:
HH[1,1] = Eng1+dAVA
HH[0,0] = Eng2+dBVB
HH[0,1] = E_TrEsp
HH[1,0] = HH[0,1]
Energy,Coeff=np.linalg.eigh(HH)
d_esp=np.sqrt( E_TrEsp**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 ) # sqrt( (<A|V|B>)**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 )
# Calculate interaction energies
if approx==1.1:
# Calculate Total polarizability matrix
PolarMat = PolarMat_AlphaE + PolarMat_Alpha_E + PolarMat_Alpha_st
PolarMat += PolarMat_Beta_scaled + ElstatMat_1 + 2*PolarMat_Alpha_tr_gr
PolarMat += 2*PolarMat_static_tr_gr_ex
# Calculate interaction energies
C1 = Coeff.T[0]
E1 = Energy[0] + np.dot(C1, np.dot(PolarMat - d_esp*PolarMat_Beta, C1.T))
C2 = Coeff.T[1]
E2 = Energy[1] + np.dot(C2, np.dot(PolarMat + d_esp*PolarMat_Beta, C2.T))
J_inter = np.sqrt( (E2 - E1)**2 - (Eng2 - Eng1)**2 )/2*np.sign(E_TrEsp)
# Calculate energy shifts for every defect
Eshift1 = dAVA + PolarMat_AlphaE[0,0] - PolarMat_Alpha_E[1,1]
Eshift1 -= (self.VinterFG - dAVA)*PolarMat_Beta[0,0]
Eshift2 = dBVB + PolarMat_AlphaE[1,1] - PolarMat_Alpha_E[0,0]
Eshift2 -= (self.VinterFG - dBVB)*PolarMat_Beta[1,1]
# Calculate transition dipoles for every defect
TrDip1 = np.dot(self.charge[index1],self.coor[index1,:]) # vacuum transition dipole for single defect
TrDip1 = TrDip1*(1 + PolarMat_Beta[0,0]/4) + dip_AlphaE1 + dip_Alpha_E1
TrDip1 -= (self.VinterFG - dAVA)*dip_Beta1
TrDip2 = np.dot(self.charge[index2],self.coor[index2,:]) # vacuum transition dipole for single defect
TrDip2 = TrDip2*(1 + PolarMat_Beta[1,1]/4) + dip_AlphaE2 + dip_Alpha_E2
TrDip2 -= (self.VinterFG - dBVB)*dip_Beta2
# Change to energy class
with energy_units('AU'):
J_inter = EnergyClass(J_inter)
Eshift1 = EnergyClass(Eshift1)
Eshift2 = EnergyClass(Eshift2)
res["E_pol2_A(E)"] = EnergyClass(res["E_pol2_A(E)"])
res["E_pol2_A(-E)"] = EnergyClass(res["E_pol2_A(-E)"])
res["E_pol2_A_static"] = EnergyClass(res["E_pol2_A_static"])
res["E_pol2_B(E,E)_scaled"] = EnergyClass(res["E_pol2_B(E,E)_scaled"])
res["E_pol2_A(E)_(trans,grnd)"] = EnergyClass(res["E_pol2_A(E)_(trans,grnd)"])
res["E_pol1_A_static"] = EnergyClass(res["E_pol1_A_static"])
res["E_elstat_1"] = EnergyClass(res["E_elstat_1"])
res["E_pol2_B(E,E)"] = EnergyClass(res["E_pol2_B(E,E)"])
# with energy_units("1/cm"):
# print("EA_pol1_s_ex_gr EA_pol1_env_s_ex_gr EAB_pol1_tr_gr EA_pol1_tr_gr")
# print(" {:9.4f} {:9.4f} {:9.4f} {:9.4f}".format(
# E_pol_static1_ex_gr.value,
# E_pol_env_static1_ex_gr.value,
# E_AB_pol1_tr_gr.value,
# E_A_pol1_tr_gr.value))
# print(" VAB_0101 VAB_1101 VAB_0001 VAB_0000 VAB_1111 VAB_1100 E_grnd E_exct E_trans")
# print(VAB_0101.value, VAB_1101.value, VAB_0001.value, VAB_0000.value, VAB_1111.value, VAB_1100.value, E_grnd.value, E_exct.value, E_trans.value)
# res["E_pol2_A(E)"] = PolarMat_AlphaE
# res["E_pol2_A(-E)"] = PolarMat_Alpha_E
# res["E_pol2_A_static"] = PolarMat_Alpha_st
# res["E_pol2_B(E,E)"] = PolarMat_Beta
# res["E_pol2_B(E,E)_scaled"] = PolarMat_Beta_scaled
# res["E_pol2_A(E)_(trans,grnd)"] = PolarMat_Alpha_tr_gr
# res["E_pol1_A_static"] = PolarMat_static_tr_gr_ex
# res["E_elstat_1"] = ElstatMat_1
return J_inter, Eshift1, Eshift2, TrDip1, TrDip2, AllDipAE1, AllDipA_E1, AllDipBE1, res
else:
raise IOError('Unsupported approximation')
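The 2x2 effective-Hamiltonian step above (building HH, diagonalizing with np.linalg.eigh, and back-extracting J_inter from the eigenvalue splitting) can be sketched in isolation. The helper below is a hypothetical standalone version for illustration, not part of this class; it omits the polarization corrections and works directly with site energies and a bare coupling in atomic units.

```python
import numpy as np

# Hypothetical standalone sketch of the 2x2 excitonic step used above:
# build the site-basis Hamiltonian, diagonalize it, and invert the
# two-level splitting to recover the effective coupling J.
def effective_coupling(E_site1, E_site2, J_coupling):
    HH = np.zeros((2, 2), dtype='f8')
    HH[0, 0] = min(E_site1, E_site2)
    HH[1, 1] = max(E_site1, E_site2)
    HH[0, 1] = HH[1, 0] = J_coupling
    Energy, Coeff = np.linalg.eigh(HH)
    # split**2 = detuning**2 + 4*J**2, so sqrt(split**2 - detuning**2)/2 = |J|;
    # multiply by sign(J) to restore the sign, as done for J_inter above
    split = Energy[1] - Energy[0]
    return np.sqrt(split**2 - (E_site2 - E_site1)**2) / 2 * np.sign(J_coupling)
```

For a bare two-level system this round-trips exactly, which is the identity the J_inter expression relies on.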
def get_gmm(self,gr_charge, ex_charge, FG_elstat, struc, index, E01,
int2cart, freq, red_mass, order=2, approx=1.1, CoarseGrain='C'):
""" Calculate coupling strength of the site energy to atomic coordinates.
The reult is dimensionless coupling strength and resulting spectral
density is defined as \sum_xi {gmm_xi*gmm_xi*\delta(omega-omega_xi)}
Parameters
----------
gr_charge : numpy array of real (dimension Natoms_defect)
Ground state ESP charges for every atom from the defect
ex_charge : numpy array of real (dimension Natoms_defect)
Excited state ESP charges for every atom from the defect
FG_elstat : Electrostatics class
Electrostatic definition of the system (atomic charges, positions,
...). It is possible to use it for calculation of electrostatic
interaction energy between defect and environment
struc : Structure class
Structure definition of the molecule (needed for calculation of
derivative of the hamiltonian with respect to atomic coordinates).
index : list of integer (dimension Natoms_defect)
Indexes of all atoms from the defect (starting from 0)
E01 : Energy class
Transition energy of isolated defect without environment (calculated
by quantum chemistry). Needed for calculation of derivative of
hamiltonian with respect to atomic coordinates
int2cart : numpy array of real (dimension 3*Nat x Nnormal_modes)
Transformation matrix from internal to cartesian coordinates. Its
columns are the normalized normal mode vectors in cartesian
coordinates, ordered as [dx1,dy1,dz1,dx2,dy2,dz2,dx3,...]. The norm
of each column vector is 1.0 and it is dimensionless.
freq : numpy array of real (dimension Nnormal_modes)
Wavenumbers of the individual normal modes (frequency/speed of light
- the default output of Gaussian and AMBER, in both called frequency)
in inverse centimeters
red_mass : numpy array of real (dimension Nnormal_modes)
Reduced masses for every normal mode in AMU (atomic mass units)
order : integer (optional - init = 2)
Specify how many SCF steps should be used in the calculation of induced
dipoles - according to the model used it should be 2
CoarseGrain : string (optional - init = "C")
Possible values are: "plane", "C", "CF" and "all_atom". Defines which
level of coarse-grained model should be used. If ``CoarseGrain="plane"``
then all atoms are projected onto the plane defined by nvec and C-F atoms
are treated as a single atom - in this case polarizabilities are defined
only in 2D by two numbers. If ``CoarseGrain="C"`` then carbon atoms
are the centers of the atomic polarizability tensors and again C-F is
treated as a single atom. If ``CoarseGrain="CF"`` then the centers of
the C-F bonds are used as centers of the atomic polarizability tensors
and again C-F is treated as a single atom. If ``CoarseGrain="all_atom"``
all atoms are used as centers of the polarizability tensors.
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the latter is not a strict condition.

Returns
-------
g_mm : numpy array of real (dimension Nnormal_modes)
Dimensionless coupling strengths of the site energy to the individual
normal modes
"""
dR_Hmm,dR_env_Hmm = self.get_SingleDefect_derivation(gr_charge, ex_charge, FG_elstat, struc, index, E01, order=order, approx=approx)
# freq is actually a wavenumber (= frequency/speed_of_light),
# therefore the angular frequency omega in atomic units = 100/(2*Rydberg_inf) * wavenumber in [cm-1]
omega_au = freq/conversion_facs_energy["1/cm"]
RedMass_au = red_mass/conversion_facs_mass
# in atomic units hbar = 1.0, m_e = 1.0, elementary_charge = 1.0, 1/(4*pi*eps_0) = 1.0, speed_of_light = 137 (inverse of the fine-structure constant)
# pick only carbon atoms from the eigenvectors of the normal modes (assume that fluorine atoms do not influence the result) - if needed
if CoarseGrain in ["C","plane"]:
indxC = np.where(np.array(struc.at_type) == 'C')[0]
index = np.zeros((len(indxC),3),dtype='i8')
for ii in range(3):
index[:,ii] = indxC*3 + ii
index = index.reshape(3*len(indxC))
int2cart_loc = int2cart[index,:]
else:
int2cart_loc = int2cart.copy()
g_mm = np.dot(int2cart_loc.T,dR_Hmm) + np.dot(int2cart.T,dR_env_Hmm)
g_mm = g_mm/(np.sqrt(omega_au*omega_au*omega_au))
g_mm = g_mm/(2*np.sqrt(RedMass_au))
return g_mm
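The final scaling in get_gmm (project the Cartesian gradient onto normal modes, then divide by 2*sqrt(M*omega**3) in atomic units) can be exercised on its own. The helper and the two conversion constants below are assumptions for illustration (approximate values of 1 cm^-1 in Hartree and of 1 AMU in electron masses), and the environment-gradient term is omitted for brevity.

```python
import numpy as np

# Illustrative standalone version of the final scaling in get_gmm:
# g_xi = (t_xi . dH/dR) / (2 * sqrt(M_xi * omega_xi**3)) in atomic units.
CM1_TO_HARTREE = 4.556335e-6   # 1 cm^-1 in Hartree (assumed constant)
AMU_TO_ME = 1822.888486        # 1 atomic mass unit in electron masses (assumed)

def coupling_strength(dH_dR, int2cart, freq_cm, red_mass_amu):
    omega_au = freq_cm * CM1_TO_HARTREE      # angular frequency in a.u. (hbar = 1)
    mass_au = red_mass_amu * AMU_TO_ME       # reduced masses in a.u.
    g = np.dot(int2cart.T, dH_dR)            # gradient projected onto each normal mode
    return g / (2.0 * np.sqrt(mass_au * omega_au**3))
```

Note the omega**(-3/2) dependence: soft modes with small wavenumbers dominate the dimensionless coupling unless their projected gradient vanishes.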
# =============================================================================
# OLD AND NOT USED FUNCTIONS - WILL BE DELETED IN THE FUTURE
# =============================================================================
# def get_selfinteraction_energy(self,debug=False):
# ''' Calculates interaction energy between induced dipoles by chromophore
# transition charges and transition charges of the same chromophore
#
# Returns
# -------
# InterE : real
# Interaction energies in atomic units (Hartree) multiplied by (-1)
# correspond to Electric_field_of_TrCharges.Induced_dipole
#
# Notes
# -------
# **By definition it is not an interaction energy but interaction energy
# with opposite sign**
#
#
# '''
#
#
# # coppy charges and assign zero charges to those in index
# charge=[]
# charge_coor=[]
# dipole=[]
# dipole_coor=[]
# for ii in range(self.Nat):
# if self.charge[ii]!=0.0:
# charge.append(self.charge[ii])
# charge_coor.append(self.coor[ii])
# elif self.dipole[ii,0]!=0.0 or self.dipole[ii,1]!=0.0 or self.dipole[ii,2]!=0.0:
# dipole.append(self.dipole[ii])
# dipole_coor.append(self.coor[ii])
#
# charge=np.array(charge,dtype='f8')
# charge_coor=np.array(charge_coor,dtype='f8')
# dipole=np.array(dipole,dtype='f8')
# dipole_coor=np.array(dipole_coor,dtype='f8')
# if debug:
# print('Charges:')
# print(charge)
# print('Dipoles self-inter:')
# print(dipole)
#
# if debug:
# print('Charge coordinates')
# print(charge_coor.shape)
# print(charge_coor)
# print('Charges:')
# print(charge)
#
# if not charge.any():
# return 0.0 # If all charges are zero interaction is also zero
# if not dipole.any():
# print("All induced dipoles are zero - check if you calculating everything correctly")
# return 0.0 # If all dipoles are zero interaction is zero
#
# rr = np.tile(dipole_coor,(charge_coor.shape[0],1,1))
# rr = np.swapaxes(rr,0,1) # dipole coordinate
# R = np.tile(charge_coor,(dipole_coor.shape[0],1,1)) # charge coordinate
# R = R-rr # R[ii,jj,:]=charge_coor[jj]-dipole_coor[ii]
#
## TODO: There is no posibility to have charge and dipole on same atom (correct this) - so far no possibility to have zero R
# pot_dipole = potential_dipole(dipole, R)
# InterE = -np.dot(charge, pot_dipole)
#
# if debug:
# #calculate interaction energy
# InterE2=0.0
# for jj in range(len(charge)):
# potential=0.0
# for ii in range(len(dipole)):
# R=charge_coor[jj]-dipole_coor[ii]
# potential+=potential_dipole(dipole[ii],R)
# InterE2-=potential*charge[jj]
# # minus is here because we dont want to calculate interaction energy
# # but interaction of electric field of transition charges with induced
# # dipoles and this is exactly - interaction energy between transition
# # charge and dipole
#
# if np.allclose(InterE,InterE2):
# print('Selfinteraction energy is calculated correctly')
# else:
# raise Warning('Selfinteraction energy for both methods is different')
#
# return InterE
#
# def get_InteractionEng(self, index1, index2, Eng1, Eng2, dAVA=0.0, dBVB=0.0, order=80, approx=1.1):
# '''
#
# dAVA = <A|V|A> - <G|V|G>
# dBVB = <B|V|B> - <G|V|G>
# '''
#
# # Get TrEsp interaction energy
# E_TrEsp = self.get_TrEsp_Eng(index1, index2)
#
# # calculate new eigenstates and energies
# HH=np.zeros((2,2),dtype='f8')
# if Eng1<Eng2:
# HH[0,0] = Eng1+dAVA
# HH[1,1] = Eng2+dBVB
# else:
# HH[1,1] = Eng1+dAVA
# HH[0,0] = Eng2+dBVB
# HH[0,1] = E_TrEsp
# HH[1,0] = HH[0,1]
# Energy,Coeff=np.linalg.eigh(HH)
#
# d_esp=np.sqrt( E_TrEsp**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 ) # sqrt( (<A|V|B>)**2 + ((Eng2-Eng1+dBVB-dAVA)/2)**2 )
#
#
# PolarMat=np.zeros((2,2),dtype='f8')
# if approx==1.1:
# # Fill polarization matrix
# PolarMat += self._fill_Polar_matrix(index1,index2,typ='AlphaE',order=order)
# PolarMat += self._fill_Polar_matrix(index1,index2,typ='Alpha_E',order=order)
# BetaMat = self._fill_Polar_matrix(index1,index2,typ='BetaEE',order=order//2)
# PolarMat += BetaMat*(dAVA/2 + dBVB/2 - self.VinterFG)
#
# # Calculate interaction energies
# C1 = Coeff.T[0]
# E1 = Energy[0] + np.dot(C1, np.dot(PolarMat - d_esp*BetaMat, C1.T))
# C2 = Coeff.T[1]
# E2 = Energy[1] + np.dot(C2, np.dot(PolarMat + d_esp*BetaMat, C2.T))
#
# J_inter = np.sqrt( (E2 - E1)**2 - (Eng2 - Eng1)**2 )/2*np.sign(E_TrEsp)
#
# return J_inter
# else:
# raise IOError('Unsupported approximation')
#
#
# def get_TrDip(self,*args,output_dipoles=False,order=80,approx=1.1):
# ''' Function for calculation of transition dipole moment for chromophore
# embeded in polarizable atom environment
#
# Parameters
# ----------
# *args : real (optional)
# Diference in electrostatic interaction energy between ground and
# excited state in ATOMIC UNITS (DE). If not defined it is assumed to
# be zero. DE=<A|V|A>-<G|V|G>
# output_dipoles : logical (optional - init=False)
# If atomic dipoles should be outputed or not. Atomic dipoles are
# outputed as `AtDip_Alpha(E)+AtDip_Alpha(-E)-self.VinterFG*AtDip_Beta(E,E)
# order : integer (optional - init=80)
# Specify how many SCF steps shoudl be used in calculation of induced dipoles
# approx : real (optional - init=1.2)
# Specifies which approximation should be used.
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)`.
# With this approximation diference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be imputed
# as `*args`
#
# **Approximation 1.1.2**: Approximation 1.1 + neglecting difference
# in electrostatic interaction between ground and excited state
# (imputed as approximation 1.1 but no electrostatic interaction energy
# diference - DE is defiend)
#
# **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
# With this apprximation diference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be imputed
# as `*args`
#
# **Approximation 1.2.2**: Approximation 1.2 + neglecting difference
# in electrostatic interaction between ground and excited state
# (imputed as approximation 1.2 but no electrostatic interaction energy
# diference - DE is defiend)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`, however the second one is not condition
#
# **Approximation MMpol**: Dipole will be calculated as a original dipole
# plus full polarization of the environmnet.
#
# Returns
# -------
# dipole : numpy.array of real (dimension 3)
# Transition dipole including the effects from interaction with environment
# in ATOMIC UNITS (e*Bohr)
# AtDipoles : numpy.array of real (dimension Natoms x 3) (optional)
# Induced atomic dipoles defined as:
# `AtDip_Alpha(E)+AtDip_Alpha(-E)-self.VinterFG*AtDip_Beta(E,E)
# in ATOMIC UNITS (e*Bohr)
#
# **Neglecting `tilde{Beta(E)}` is not valid approximation. It shoudl be
# better to neglect Beta(E,-E) to be consistent with approximation for
# interaction energy**
#
# Notes
# ----------
# dip = Alpha(E)*El_field_TrCharge + Alpha(-E)*El_field_TrCharge
# Then final transition dipole of molecule with environment is calculated
# according to the approximation:
#
# **Approximation 1.1:**
# dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init(1-1/4*Ind_dip_Beta(E,E)*El_field_TrCharge)
# **Approximation 1.1.2:**
# dip_fin = dip - Vinter*Beta(E,E)*El_field_TrCharge + dip_init(1-1/4*Ind_dip_Beta(E,E)*El_field_TrCharge)
# **Approximation 1.2:**
# dip_fin = dip - (Vinter-DE)*Beta(E,E)*El_field_TrCharge + dip_init
# **Approximation 1.2.2:**
# dip_fin = dip - Vinter*Beta(E,E)*El_field_TrCharge + dip_init
# **Approximation 1.3:**
# dip_fin = dip - 2*Vinter*Beta(E,E)*El_field_TrCharge + dip_init
#
# '''
#
# if approx==1.3:
# if not np.array_equal(self.polar['AlphaE'],self.polar['Alpha_E']):
# raise Warning('For calculation with Approximation 1.3 Alpha(E) should be equal Alpha(-E)')
#
# if approx==1.1:
# if not np.array_equal(np.zeros((len(self.polar['Alpha_E']),3,3),dtype='f8'),self.polar['Alpha_E']):
# print('For calculation with Approximation 1.1 Alpha(-E) should be equal to zero')
#
# is_elstat=False
# if len(args)==1:
# DE=args[0]
# is_elstat=True
#
# use_alpha_instead_alphahalf=False
# if type(approx)==str and order==2:
# if 'MMpol' in approx:
# use_alpha_instead_alphahalf=True
#
# # For MMpol approximation we have to use alpha instead alpha/2 and resulting induced dipoles
# # have to be devided by 2. This way we correct the second term in perturbation expansion
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']*2
# self.polar['Alpha_E']=self.polar['Alpha_E']*2
#
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability AlphaE for rescaled charges
# self._calc_dipoles_All('AlphaE',NN=order)
# AtDipoles1=np.copy(self.dipole)
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# if not (approx=='MMpol' and order>2):
# # if we calculate with MMpol procedure we use only one polarizability matrix and therefore doesn't have to be calculated
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability Alpha_E for rescaled charges
# self._calc_dipoles_All('Alpha_E',NN=order)
# AtDipoles2=np.copy(self.dipole)
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']/2
# self.polar['Alpha_E']=self.polar['Alpha_E']/2
# AtDipoles1=AtDipoles1/2
# AtDipoles2=AtDipoles2/2
#
# # calculate induced dipoles with polarizability Beta for rescaled charges
# #self._calc_dipoles_All('BetaEE',NN=order//2)
# if order>2:
# self._calc_dipoles_All('BetaEE',NN=1)
# else:
# self._calc_dipoles_All('BetaEE',NN=order//2)
# AtDipolesBeta=np.copy(self.dipole)
#
# # calculate transition dipole:
# dipole=np.zeros(3,dtype='f8')
# for ii in range(self.Nat):
# dipole+=self.coor[ii,:]*self.charge[ii]
# dipole_tmp=np.copy(dipole)
# dipole+=np.sum(AtDipoles1,axis=0)
# if not (approx=='MMpol' and order>2):
# dipole+=np.sum(AtDipoles2,axis=0)
#
# # term with Beta polarizability
# if approx==1.1 or approx=='MMpol_1.1':
# dipole-=self.VinterFG*np.sum(AtDipolesBeta,axis=0) - dipole_tmp*self.get_selfinteraction_energy()/4
# if is_elstat:
# dipole+=DE*np.sum(AtDipolesBeta,axis=0)
# if approx==1.2 or approx=='MMpol_1.2':
# dipole-=self.VinterFG*np.sum(AtDipolesBeta,axis=0)
# if is_elstat:
# dipole+=DE*np.sum(AtDipolesBeta,axis=0)
# elif approx==1.3 or approx=='MMpol_1.3':
# dipole-=2*self.VinterFG*np.sum(AtDipolesBeta,axis=0)
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# if output_dipoles:
# if approx=='MMpol' and order>2:
# return dipole,AtDipoles1
# elif approx=='MMpol':
# return dipole,AtDipoles1+AtDipoles2
# else:
# return dipole,AtDipoles1+AtDipoles2-self.VinterFG*AtDipolesBeta
# else:
# return dipole
#
#
# def calculate_EnergyShift(self,index,charge,*args,order=80,output_dipoles=False,approx=1.1):
# ''' Function for calculation of transition energy shift for chromophore
# embeded in polarizable atom environment
#
# Parameters
# ----------
# **index and charge** : Not used (useful only for structure with more than one defect)
#
# *args : real (optional)
# Diference in electrostatic interaction energy between ground and
# excited state in ATOMIC UNITS (DE). If not defined it is assumed to
# be zero. DE=<A|V|A>-<G|V|G>
# order : integer (optional - init=80)
# Specify how many SCF steps shoudl be used in calculation of induced dipoles
# output_dipoles : logical (optional - init=False)
# If atomic dipoles should be outputed or not. Atomic dipoles are
# outputed as `AtDip_Alpha(E)+AtDip_Alpha(-E)-self.VinterFG*AtDip_Beta(E,E)
# approx : real (optional - init=1.2)
# Specifies which approximation should be used.
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)`.
# With this approximation diference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be imputed
# as `*args`
#
# **Approximation 1.1.2**: Approximation 1.1 + neglecting difference
# in electrostatic interaction between ground and excited state
# (imputed as approximation 1.1 but no electrostatic interaction energy
# diference - DE is defiend)
#
# **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
# With this apprximation diference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be imputed
# as `*args`
#
# **Approximation 1.2.2**: Approximation 1.2 + neglecting difference
# in electrostatic interaction between ground and excited state
# (imputed as approximation 1.2 but no electrostatic interaction energy
# diference - DE is defiend)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`, however the second one is not condition
#
# **Approximation MMpol**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`, however the second one is not condition
#
# Returns
# -------
# Eshift : real
# Excitation energy shift in ATOMIC UNITS (Hartree) caused by the
# interaction of molecule with polarizable atom environment
# AtDipoles : numpy.array of real (dimension Natoms x 3) (optional)
# Induced atomic dipoles defined as:
# `AtDip_Alpha(E)+AtDip_Alpha(-E)-self.VinterFG*AtDip_Beta(E,E)`
# in ATOMIC UNITS (e*Bohr)
#
# **Neglecting `tilde{Beta(E)}` is not valid approximation. It should be
# better to neglect Beta(E,-E) to be consistent with approximation for
# interaction energy**
#
# Notes
# ----------
# E = -Ind_dip_Alpha(E)*El_field_TrCharge + Ind_dip_Alpha(-E)*El_field_TrCharge
# Then final energy shift E_fin of molecule embeded in environment is calculated
# according to the approximation:
#
# *Approximation 1.1:**
# Exactly the same as Approximation 1.2
# *Approximation 1.1.2:**
# Exactly the same as Approximation 1.2.2
# **Approximation 1.2:**
# E_fin = E + DE + (Vinter-DE)*Ind_dip_Beta(E,E)*El_field_TrCharge
# **Approximation 1.2.2:**
# E_fin = E + Vinter*Ind_dip_Beta(E,E)*El_field_TrCharge
# **Approximation 1.3:**
# E_fin = E + DE*(1-2*Ind_dip_Beta(E,E)*El_field_TrCharge)
#
# '''
#
# if approx==1.3:
# if not np.array_equal(self.polar['AlphaE'],self.polar['Alpha_E']):
# raise Warning('For calculation with Approximation 1.3 Alpha(E) should be equal Alpha(-E)')
#
# if approx==1.1:
# if not np.array_equal(np.zeros((len(self.polar['Alpha_E']),3,3),dtype='f8'),self.polar['Alpha_E']):
# print('For calculation with Approximation 1.1 Alpha(-E) should be equal to zero')
#
# is_elstat=False
# if len(args)==1:
# DE=args[0]
# is_elstat=True
#
# use_alpha_instead_alphahalf=False
# if type(approx)==str and order==2:
# if 'MMpol' in approx:
# use_alpha_instead_alphahalf=True
#
# # For MMpol approximation we have to use alpha instead alpha/2 and resulting induced dipoles
# # have to be devided by 2. This way we correct the second term in perturbation expansion
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']*2
# self.polar['Alpha_E']=self.polar['Alpha_E']*2
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability AlphaE for rescaled charges
# self._calc_dipoles_All('AlphaE',NN=order)
# AtDipoles1=np.copy(self.dipole)
# if use_alpha_instead_alphahalf:
# AtDipoles1=AtDipoles1/2
# self.dipole=self.dipole/2
# #Einter=self._get_interaction_energy(index,charge=charge)
## TODO: Check if with using MMpol procedure it souldn't be 1/2 of selfinteraction energy
# Eshift=-self.get_selfinteraction_energy()
#
# if not (approx=='MMpol' and order>2):
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability Alpha_E for rescaled charges
# self._calc_dipoles_All('Alpha_E',NN=order)
# AtDipoles2=np.copy(self.dipole)
# if use_alpha_instead_alphahalf:
# AtDipoles2=AtDipoles2/2
# self.dipole=self.dipole/2
# #Eshift=-self._get_interaction_energy(index,charge=charge)
# Eshift+=self.get_selfinteraction_energy()
#
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability Beta for rescaled charges
# #self._calc_dipoles_All('BetaEE',NN=order//2)
# if order>2:
# self._calc_dipoles_All('BetaEE',NN=1)
# else:
# self._calc_dipoles_All('BetaEE',NN=order//2)
# AtDipolesBeta=np.copy(self.dipole)
# #Eshift=-self._get_interaction_energy(index,charge=charge)
#
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']/2
# self.polar['Alpha_E']=self.polar['Alpha_E']/2
#
# if approx==1.2 or approx==1.1 or approx=='MMpol_1.2' or approx=='MMpol_1.1':
# if is_elstat:
# Eshift+=(self.VinterFG-DE)*self.get_selfinteraction_energy()
# Eshift+=DE
# else:
# Eshift+=self.VinterFG*self.get_selfinteraction_energy()
# elif approx==1.3 or approx=='MMpol_1.3':
# if is_elstat:
# Eshift+=DE*(1-2*self.get_selfinteraction_energy())
# elif approx=='MMpol':
# if is_elstat:
# Eshift+=DE
# if output_dipoles:
# if approx=='MMpol':
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
# if order>2:
# return Eshift,AtDipoles1
# else:
# return Eshift,AtDipoles1+AtDipoles2
# else:
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# return Eshift,AtDipoles1+AtDipoles2-self.VinterFG*AtDipolesBeta
# else:
# # reset iduced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# return Eshift
#
#
# def calculate_InteractionEnergy(self,index,charge,*args,order=80,output_dipoles=False,approx=1.1):
# ''' Function for calculating interaction energies for chromophores
# embedded in a polarizable atom environment. So far only for a symmetric homodimer.
#
# Parameters
# ----------
# index : list of integer (dimension Natoms_of_defect)
# Specifies atomic indexes of one defect. For this defect the interaction
# energy with induced dipoles in the environment and also with the other
# defect will be calculated.
# charge : numpy.array of real (dimension Natoms_of_defect)
# Atomic transition charges (TrEsp charges) for every atom of the defect
# defined by `index`
# *args : real (optional)
# Difference in electrostatic interaction energy between ground and
# excited state in ATOMIC UNITS (DE). If not defined it is assumed to
# be zero. DE=<A|V|A>-<G|V|G>
# order : integer (optional - init=80)
# Specifies how many SCF steps should be used in the calculation of induced dipoles
# output_dipoles : logical (optional - init=False)
# Whether atomic dipoles should be output or not. Atomic dipoles are
# output as `AtDip_Alpha(E)+AtDip_Alpha(-E)-2*self.VinterFG*AtDip_Beta(E,E)`
# approx : real (optional - init=1.1)
# Specifies which approximation should be used. **Different approximation
# than for dipole or energy shift**
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
# `Alpha(-E)`. With this approximation the difference in electrostatic
# interaction energy between ground and excited state in ATOMIC UNITS (DE)
# has to be input as `*args`
#
# **Approximation 1.1.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`; however, the second equality is not a condition
#
# **Approximation MMpol**: Interaction energy is calculated as the interaction
# without the environment plus the interaction of induced dipoles in the
# environment with the electric field of the second molecule.
#
# Returns
# -------
# Einter : real
# Interaction energy in ATOMIC UNITS (Hartree) between two chromophores
# embedded in a polarizable atom environment.
# AtDipoles : numpy.array of real (dimension Natoms x 3) (optional)
# Induced atomic dipoles defined as:
# `AtDip_Alpha(E)+AtDip_Alpha(-E)-2*self.VinterFG*AtDip_Beta(E,E)`
# in ATOMIC UNITS (e*Bohr)
#
# Notes
# ----------
# E = -Ind_dip_Alpha(E)*El_field_TrCharge + Ind_dip_Alpha(-E)*El_field_TrCharge
# The final interaction energy of the molecule embedded in the environment is
# calculated according to the approximation:
#
# **Approximation 1.1:**
# Einter=E_TrEsp*(1+E1Bself)+(self.VinterFG-DE)*E12B+E12AE+E12A_E
# **Approximation 1.1.2:**
# Einter=E_TrEsp*(1+E1Bself)+self.VinterFG*E12B+E12AE+E12A_E
# **Approximation 1.3:**
# Einter=E_TrEsp+2*self.VinterFG*E12B+E12AE+E12A_E
#
# '''
#
# debug=False
#
# if approx==1.2:
# raise IOError('Approximation 1.2 for interaction energy calculation is not yet supported. Look at Approximation 1.1')
#
# if approx==1.3:
# if not np.array_equal(self.polar['AlphaE'],self.polar['Alpha_E']):
# raise Warning('For calculation with Approximation 1.3 Alpha(E) should be equal to Alpha(-E)')
#
# if approx==1.1:
# if not np.array_equal(np.zeros((len(self.polar['Alpha_E']),3,3),dtype='f8'),self.polar['Alpha_E']):
# print('For calculation with Approximation 1.1 Alpha(-E) should be equal to zero')
#
# is_elstat=False
# if len(args)==1:
# DE=args[0]
# is_elstat=True
#
# use_alpha_instead_alphahalf=False
# if type(approx)==str and order==2:
# if 'MMpol' in approx:
# use_alpha_instead_alphahalf=True
#
# # For the MMpol approximation we have to use alpha instead of alpha/2, and the resulting induced dipoles
# # have to be divided by 2. This way we correct the second term in the perturbation expansion
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']*2
# self.polar['Alpha_E']=self.polar['Alpha_E']*2
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # TrEsp interaction energy
# E_TrEsp=self._get_interaction_energy(index,charge=charge)
# #print('TrEsp interaction:',E_TrEsp*conversion_facs_energy["1/cm"])
# # this will put zero charges on index atoms then calculate potential from
# # everything else and calculate interaction with charges defined by charges
# # original charges and dipoles remain unchanged
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# if not (approx=='MMpol' and order>2):
# # calculate induced dipoles with polarizability Beta for rescaled charges
# if order>2:
# self._calc_dipoles_All('BetaEE',NN=1)
# else:
# self._calc_dipoles_All('BetaEE',NN=order//2)
# #self._calc_dipoles_All('BetaEE',NN=order//2)
# AtDipolesBeta=np.copy(self.dipole)
# E1Bself=-self.get_selfinteraction_energy() # should be negative for all Beta
# E12B=E_TrEsp-self._get_interaction_energy(index,charge=charge) #
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability AlphaE for rescaled charges
# if debug==True and order==2:
# self._calc_dipoles_All('AlphaE',NN=1)
# self._test_2nd_order('AlphaE')
# else:
# self._calc_dipoles_All('AlphaE',NN=order)
# if use_alpha_instead_alphahalf:
# self.dipole=self.dipole/2
# AtDipoles1=np.copy(self.dipole)
# E12AE=(self._get_interaction_energy(index,charge=charge)-E_TrEsp)
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# if not (approx=='MMpol' and order>2):
# # calculate induced dipoles with polarizability Alpha_E for rescaled charges
# self._calc_dipoles_All('Alpha_E',NN=order)
# if use_alpha_instead_alphahalf:
# self.dipole=self.dipole/2
# AtDipoles2=np.copy(self.dipole)
# E12A_E=(self._get_interaction_energy(index,charge=charge)-E_TrEsp)
#
# if use_alpha_instead_alphahalf:
# self.polar['AlphaE']=self.polar['AlphaE']/2
# self.polar['Alpha_E']=self.polar['Alpha_E']/2
#
#
# if approx==1.1 or approx=='MMpol_1.1':
# if is_elstat:
# Einter=E_TrEsp*(1+E1Bself)+(self.VinterFG-DE)*E12B+E12AE+E12A_E
# else:
# Einter=E_TrEsp*(1+E1Bself)+self.VinterFG*E12B+E12AE+E12A_E
# elif approx==1.3 or approx=='MMpol_1.3':
# Einter=E_TrEsp+2*self.VinterFG*E12B+E12AE+E12A_E
# elif approx=='MMpol':
# Einter=E_TrEsp+E12AE
# else:
# raise IOError('Unknown type of approximation. Allowed types are: 1.1 and 1.3')
#
# if output_dipoles:
# if approx=='MMpol':
# if order>2:
# return Einter,AtDipoles1
# else:
# return Einter,AtDipoles1+AtDipoles2
# else:
# return Einter,AtDipoles1+AtDipoles2-2*self.VinterFG*AtDipolesBeta
# else:
# return Einter
#
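# The commented-out MMpol branch above doubles the polarizabilities before the
# SCF and halves the resulting induced dipoles. A minimal scalar sketch of why
# (illustrative numbers only, not the Dielectric class API): for a two-step SCF
# this keeps the first-order dipole unchanged while doubling the second-order
# term of the perturbation expansion.

```python
def scf_dipole(alpha, field, coupling, steps):
    """Induced dipole from a simple scalar SCF: mu <- alpha*(field + coupling*mu)."""
    mu = 0.0
    for _ in range(steps):
        mu = alpha * (field + coupling * mu)
    return mu

alpha, E, T = 0.5, 1.0, 0.2
mu_plain = scf_dipole(alpha, E, T, steps=2)          # alpha*E + alpha^2*T*E
mu_mmpol = scf_dipole(2 * alpha, E, T, steps=2) / 2  # alpha*E + 2*alpha^2*T*E
first_order = alpha * E
# the 2*alpha / divide-by-2 trick doubles only the second-order contribution
assert abs((mu_mmpol - first_order) - 2 * (mu_plain - first_order)) < 1e-12
```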
# def _calculate_InteractionEnergy2(self,index,charge,order=80,output_dipoles=False):
# ''' Function for calculating interaction energies for chromophores
# embedded in a polarizable atom environment. So far only for a symmetric homodimer.
#
# Induced dipoles, needed for the interaction energy calculation, are
# multiplied on output by a different factor at every step of the SCF
# procedure. The first order is multiplied by a factor of 1, the second by
# a factor of 3/2, the third by a factor of 2, etc.
#
# **According to the latest derivation the rescaling of every SCF step should
# not be used, and therefore this function should not be used either**
#
# Notes
# ----------
# This function is kept only for maintaining backward compatibility.
#
# '''
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # TrEsp interaction energy
# E_TrEsp=self._get_interaction_energy(index,charge=charge)
# #print('TrEsp interaction:',E_TrEsp*conversion_facs_energy["1/cm"])
# # this will put zero charges on index atoms then calculate potential from
# # everything else and calculate interaction with charges defined by charges
# # original charges and dipoles remain unchanged
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability Beta for rescaled charges
# if order>2:
# self._calc_dipoles_All('BetaEE',NN=1)
# else:
# self._calc_dipoles_All('BetaEE',NN=order//2)
# AtDipolesBeta=np.copy(self.dipole)
# E1Bself=-self.get_selfinteraction_energy() # should be negative for all Beta
# E12B=E_TrEsp-self._get_interaction_energy(index,charge=charge) #
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability AlphaE for rescaled charges
# #self._calc_dipoles_All('AlphaE',NN=order)
# self.__calc_dipoles_All2('AlphaE',NN=order+2)
# AtDipoles1=np.copy(self.dipole)
# E12AE=2*(self._get_interaction_energy(index,charge=charge)-E_TrEsp)
#
# # reset induced dipoles to zero
# self.dipole=np.zeros((self.Nat,3),dtype='f8')
#
# # calculate induced dipoles with polarizability Alpha_E for rescaled charges
# #self._calc_dipoles_All('Alpha_E',NN=order)
# self.__calc_dipoles_All2('Alpha_E',NN=order+2)
# AtDipoles2=np.copy(self.dipole)
# E12A_E=2*(self._get_interaction_energy(index,charge=charge)-E_TrEsp)
#
#
# Einter=E_TrEsp*(1+E1Bself)+2*self.VinterFG*E12B+E12AE+E12A_E
#
# if output_dipoles:
# return Einter,AtDipoles1+AtDipoles2-2*self.VinterFG*AtDipolesBeta
# else:
# return Einter
#
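# The deprecated per-step rescaling described in the docstring above (factors
# 1, 3/2, 2, ...) can be summarized, reading those factors as the arithmetic
# progression (n + 1)/2 for SCF step n - an assumption extrapolated from the
# three factors the docstring lists; name and function are illustrative only.

```python
def legacy_scf_weights(n_steps):
    """Weights 1, 3/2, 2, ... applied to SCF steps by the legacy scheme."""
    return [(n + 1) / 2 for n in range(1, n_steps + 1)]

weights = legacy_scf_weights(3)  # matches the factors quoted in the docstring
```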
#def CalculateTrDip(filenames,ShortName,index_all,Dipole_QCH,Dip_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,**kwargs):
# ''' Calculate transition dipoles for a defect embedded in a polarizable atom
# environment for all systems given in filenames.
#
# Parameters
# ----------
# filenames : list of dictionary (dimension Nsystems)
# In the dictionary all needed files are specified which contain the
# necessary information for transforming the system into the Dielectric class.
# keys:
# `'2def_structure'`: xyz file with system geometry and atom types
# `'charge_structure'`: xyz file with defect-like molecule geometry for which transition charges were calculated
# `'charge_grnd'`: file with ground state charges for the defect
# `'charge_exct'`: file with excited state charges for the defect
# `'charge'`: file with transition charges for the defect
# ShortName : list of strings
# List of short descriptions (names) of the individual systems
# index_all : list of integers (dimension Nsystems x 6)
# Specifies the indexes needed for assignment of defect
# atoms. The first three indexes correspond to the center and two main axes of
# the reference structure (the structure which was used for the charge calculation)
# and the remaining three indexes are the corresponding atoms of the defects
# on the fluorographene system.
# Dipole_QCH : list of real (dimension Nsystems)
# List of quantum chemistry values of transition dipoles in ATOMIC UNITS
# (e*Bohr) for the defect in a polarizable atom environment
# (used for printing the comparison - not used for the calculation at all)
# Dip_all : list of real (dimension Nsystems)
# In this variable the dipoles in ATOMIC UNITS (e*Bohr) will be stored,
# calculated by the polarizable atoms method for the description of the environment.
# AlphaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# Alpha_E : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# BetaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# VinterFG : real
# Difference in electrostatic interaction energy between the interaction of
# an excited C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state and the interaction of
# a ground state C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
# UNITS (Hartree)
# FG_charges : list of real (dimension 2)
# [charge on inner fluorographene atom, charge on border fluorographene carbon]
# ChargeType : string
# Specifies which method was used for the calculation of ground and excited state
# charges for the defect atoms. Allowed types are: 'qchem','qchem_all','AMBER'
# and 'gaussian'. **'qchem'** - charges calculated by fitting the Q-Chem ESP on carbon
# atoms. **'qchem_all'** - charges calculated by fitting the Q-Chem ESP on all
# atoms; only carbon charges are used and the same charge is added to all carbon
# atoms in order to have a neutral molecule. **'AMBER'** and **'gaussian'**
# are not yet fully implemented.
# order : integer (optional - init=80)
# Specifies how many SCF steps should be used in the calculation of induced dipoles
# verbose : logical (optional - init=False)
# If `True` additional information about the whole process will be printed
# approx : real (optional - init=1.1)
# Specifies which approximation should be used.
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
# `Alpha(-E)`. With this approximation the difference in electrostatic
# interaction energy between ground and excited state in ATOMIC UNITS (DE)
# has to be input as `*args`
#
# **Approximation 1.1.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
# With this approximation the difference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be input
# as `*args`
#
# **Approximation 1.2.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`; however, the second equality is not a condition
# **kwargs : dictionary (optional)
# Definition of polarizability matrices for the defect atoms (if nonzero
# polarizability is used)
#
# Notes
# ----------
# Works only for a fluorographene system with a single defect
#
# '''
#
# for ii in range(len(filenames)):
# if verbose:
# print('Calculation of dipoles for:',ShortName[ii])
#
# # read and prepare molecule
# if kwargs:
# mol_polar,index1,charge=prepare_molecule_1Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,**kwargs)
# else:
# mol_polar,index1,charge=prepare_molecule_1Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,verbose=False)
# mol_Elstat,index,charge_grnd,charge_exct=ElStat_PrepareMolecule_1Def(filenames[ii],index_all[ii],FG_charges,ChargeType=ChargeType,verbose=False)
#
# # calculate <A|V|A>-<G|V|G>
# DE=mol_Elstat.get_EnergyShift()
# #print('DE:',DE*conversion_facs_energy["1/cm"],'cm-1')
#
# # calculate transition dipole
# TrDip,AtDipoles=mol_polar.get_TrDip(DE,order=order,output_dipoles=True,approx=approx)
#
# if verbose:
# print(' Total transition dipole:',np.sqrt(np.dot(TrDip,TrDip)),'Quantum chemistry dipole:',Dipole_QCH[ii])
# print(ShortName[ii],Dipole_QCH[ii],np.sqrt(np.dot(TrDip,TrDip)))
# Dip_all[ii,:]=TrDip[:]
#
# if MathOut:
# # output dipoles to mathematica
# Bonds=GuessBonds(mol_polar.coor,bond_length=4.0)
# mat_filename="".join(['Pictures/Polar_',ShortName[ii],'.nb'])
# OutputMathematica(mat_filename,mol_polar.coor,Bonds,['C']*mol_polar.Nat,scaleDipole=30.0,**{'TrPointCharge': mol_polar.charge,'AtDipole': AtDipoles,'rSphere_dip': 0.5,'rCylinder_dip':0.1})
#
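# Side note on the dipole magnitudes printed by the (commented-out) driver
# above: |TrDip| is reported in ATOMIC UNITS (e*Bohr). A small standalone
# helper sketch for converting such a magnitude to Debye (1 a.u. of dipole
# moment is ~2.5417 D); the function name is illustrative, not part of the module.

```python
import numpy as np

def dipole_magnitude_debye(trdip_au):
    """Magnitude of a dipole vector given in e*Bohr, converted to Debye."""
    AU_TO_DEBYE = 2.541746  # 1 e*Bohr in Debye
    return float(np.sqrt(np.dot(trdip_au, trdip_au))) * AU_TO_DEBYE
```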
#def CalculateEnergyShift(filenames,ShortName,index_all,Eshift_QCH,Eshift_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,**kwargs):
# ''' Calculate transition energy shifts for a defect embedded in a polarizable
# atom environment for all systems given in filenames.
#
# Parameters
# ----------
# filenames : list of dictionary (dimension Nsystems)
# In the dictionary all needed files are specified which contain the
# necessary information for transforming the system into the Dielectric class.
# keys:
# `'2def_structure'`: xyz file with system geometry and atom types
# `'charge_structure'`: xyz file with defect-like molecule geometry for which transition charges were calculated
# `'charge_grnd'`: file with ground state charges for the defect
# `'charge_exct'`: file with excited state charges for the defect
# `'charge'`: file with transition charges for the defect
# ShortName : list of strings
# List of short descriptions (names) of the individual systems
# index_all : list of integers (dimension Nsystems x 6)
# Specifies the indexes needed for assignment of defect
# atoms. The first three indexes correspond to the center and two main axes of
# the reference structure (the structure which was used for the charge calculation)
# and the remaining three indexes are the corresponding atoms of the defects
# on the fluorographene system.
# Eshift_QCH : list of real (dimension Nsystems)
# List of quantum chemistry values of transition energy shifts in INVERSE
# CENTIMETERS for the defect in a polarizable atom environment (used for printing
# the comparison - not used for the calculation at all)
# Eshift_all : list of real (dimension Nsystems)
# In this variable the transition energy shifts in INVERSE CENTIMETERS
# will be stored, calculated by the polarizable atoms method for the
# description of the environment.
# AlphaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# Alpha_E : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# BetaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# VinterFG : real
# Difference in electrostatic interaction energy between the interaction of
# an excited C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state and the interaction of
# a ground state C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
# UNITS (Hartree)
# FG_charges : list of real (dimension 2)
# [charge on inner fluorographene atom, charge on border fluorographene carbon]
# ChargeType : string
# Specifies which method was used for the calculation of ground and excited state
# charges for the defect atoms. Allowed types are: 'qchem','qchem_all','AMBER'
# and 'gaussian'. **'qchem'** - charges calculated by fitting the Q-Chem ESP on carbon
# atoms. **'qchem_all'** - charges calculated by fitting the Q-Chem ESP on all
# atoms; only carbon charges are used and the same charge is added to all carbon
# atoms in order to have a neutral molecule. **'AMBER'** and **'gaussian'**
# are not yet fully implemented.
# order : integer (optional - init=80)
# Specifies how many SCF steps should be used in the calculation of induced dipoles
# verbose : logical (optional - init=False)
# If `True` additional information about the whole process will be printed
# approx : real (optional - init=1.1)
# Specifies which approximation should be used.
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
# `Alpha(-E)`. With this approximation the difference in electrostatic
# interaction energy between ground and excited state in ATOMIC UNITS (DE)
# has to be input as `*args`
#
# **Approximation 1.1.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
# With this approximation the difference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be input
# as `*args`
#
# **Approximation 1.2.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`; however, the second equality is not a condition
# **kwargs : dictionary (optional)
# Definition of polarizability matrices for the defect atoms (if nonzero
# polarizability is used)
#
# Notes
# ----------
# Works only for a system with a single defect
#
# '''
#
# for ii in range(len(filenames)):
# if verbose:
# print('Calculation of excitation energy shift for:',ShortName[ii])
#
# # read and prepare molecule
# if kwargs:
# mol_polar,index1,charge=prepare_molecule_1Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,**kwargs)
# else:
# mol_polar,index1,charge=prepare_molecule_1Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,verbose=False)
# mol_Elstat,index,charge_grnd,charge_exct=ElStat_PrepareMolecule_1Def(filenames[ii],index_all[ii],FG_charges,ChargeType=ChargeType,verbose=False)
#
# # calculate <A|V|A>-<G|V|G>
# DE=mol_Elstat.get_EnergyShift()
# #print('DE:',DE*conversion_facs_energy["1/cm"],'cm-1',DE,'AU')
#
# # calculate transition dipole
# Eshift,AtDipoles=mol_polar.calculate_EnergyShift(index1,charge,DE,order=order,output_dipoles=True,approx=approx)
#
# if verbose:
# print(' Transition energy shift:',Eshift*conversion_facs_energy["1/cm"],'Quantum chemistry shift:',Eshift_QCH[ii])
# print(ShortName[ii],Eshift_QCH[ii],Eshift*conversion_facs_energy["1/cm"])
# Eshift_all[ii]=Eshift*conversion_facs_energy["1/cm"]
#
# if MathOut:
# # output dipoles to mathematica
# Bonds=GuessBonds(mol_polar.coor,bond_length=4.0)
# mat_filename="".join(['Pictures/Polar_',ShortName[ii],'.nb'])
# OutputMathematica(mat_filename,mol_polar.coor,Bonds,['C']*mol_polar.Nat,scaleDipole=30.0,**{'TrPointCharge': mol_polar.charge,'AtDipole': AtDipoles,'rSphere_dip': 0.5,'rCylinder_dip':0.1})
#
#def CalculateInterE(filenames,ShortName,index_all,Energy_QCH,Energy_all,nvec_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,**kwargs):
# ''' Calculate interaction energies between defects embedded in a polarizable
# atom environment for all systems given in filenames.
#
# Parameters
# ----------
# filenames : list of dictionary (dimension Nsystems)
# In the dictionary all needed files are specified which contain the
# necessary information for transforming the system into the Dielectric class.
# keys:
# `'2def_structure'`: xyz file with system geometry and atom types
# `'charge_structure'`: xyz file with defect-like molecule geometry for which transition charges were calculated
# `'charge_grnd'`: file with ground state charges for the defect
# `'charge_exct'`: file with excited state charges for the defect
# `'charge'`: file with transition charges for the defect
# ShortName : list of strings
# List of short descriptions (names) of the individual systems
# index_all : list of integers (dimension Nsystems x 6)
# Specifies the indexes needed for assignment of defect
# atoms. The first three indexes correspond to the center and two main axes of
# the reference structure (the structure which was used for the charge calculation)
# and the remaining three indexes are the corresponding atoms of the defects
# on the fluorographene system.
# Energy_QCH : list of real (dimension Nsystems)
# List of quantum chemistry values of interaction energies in INVERSE
# CENTIMETERS between defects in a polarizable atom environment
# (used for printing the comparison - not used for the calculation at all)
# Energy_all : list of real (dimension Nsystems)
# In this variable the interaction energies in INVERSE CENTIMETERS
# will be stored, calculated by the polarizable atoms method for the
# description of the environment.
# AlphaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# Alpha_E : numpy.array of real (dimension 2x2)
# Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# BetaE : numpy.array of real (dimension 2x2)
# Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
# fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
# VinterFG : real
# Difference in electrostatic interaction energy between the interaction of
# an excited C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state and the interaction of
# a ground state C-F coarse-grained atom of fluorographene with all other
# fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
# UNITS (Hartree)
# FG_charges : list of real (dimension 2)
# [charge on inner fluorographene atom, charge on border fluorographene carbon]
# ChargeType : string
# Specifies which method was used for the calculation of ground and excited state
# charges for the defect atoms. Allowed types are: 'qchem','qchem_all','AMBER'
# and 'gaussian'. **'qchem'** - charges calculated by fitting the Q-Chem ESP on carbon
# atoms. **'qchem_all'** - charges calculated by fitting the Q-Chem ESP on all
# atoms; only carbon charges are used and the same charge is added to all carbon
# atoms in order to have a neutral molecule. **'AMBER'** and **'gaussian'**
# are not yet fully implemented.
# order : integer (optional - init=80)
# Specifies how many SCF steps should be used in the calculation of induced dipoles
# verbose : logical (optional - init=False)
# If `True` additional information about the whole process will be printed
# approx : real (optional - init=1.1)
# Specifies which approximation should be used.
#
# **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
# `Alpha(-E)`. With this approximation the difference in electrostatic
# interaction energy between ground and excited state in ATOMIC UNITS (DE)
# has to be input as `*args`
#
# **Approximation 1.1.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
# With this approximation the difference in electrostatic interaction energy
# between ground and excited state in ATOMIC UNITS (DE) has to be input
# as `*args`
#
# **Approximation 1.2.2**: Approximation 1.2 + neglecting the difference
# in electrostatic interaction between ground and excited state
# (input as approximation 1.2 but no electrostatic interaction energy
# difference DE is defined)
#
# **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
# `Alpha(E)=Alpha(-E)`; however, the second equality is not a condition
# **kwargs : dictionary (optional)
# Definition of polarizability matrices for the defect atoms (if nonzero
# polarizability is used)
#
# Notes
# ----------
# Works only for systems with two symmetric defects
#
# '''
#
#
# for ii in range(len(filenames)):
# if verbose:
# print('Calculation of interaction energy for:',ShortName[ii])
#
# # read and prepare molecule
# if kwargs:
# mol_polar,index1,index2,charge=prepare_molecule_2Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,nvec=nvec_all[ii],verbose=False,**kwargs)
# else:
# mol_polar,index1,index2,charge=prepare_molecule_2Def(filenames[ii],index_all[ii],AlphaE,Alpha_E,BetaE,VinterFG,nvec=nvec_all[ii],verbose=False)
# # calculate <A|V|A>-<G|V|G>
# mol_Elstat,at_type=ElStat_PrepareMolecule_2Def(filenames[ii],index_all[ii],FG_charges,ChargeType=ChargeType,verbose=False)
# DE=mol_Elstat.get_EnergyShift()
# #print('DE:',DE*conversion_facs_energy["1/cm"],'cm-1')
#
# # calculate interaction energy
# Einter,AtDipoles=mol_polar.calculate_InteractionEnergy(index2,charge,DE,order=order,output_dipoles=True,approx=approx)
#
# if verbose:
# print(' Total interaction energy:',Einter*conversion_facs_energy["1/cm"],'Quantum interaction energy:',Energy_QCH[ii])
#
# print(ShortName[ii],Energy_QCH[ii],abs(Einter*conversion_facs_energy["1/cm"]))
#
# Energy_all[ii]=abs(Einter*conversion_facs_energy["1/cm"])
#
# if MathOut:
# # output dipoles to mathematica
# Bonds=GuessBonds(mol_polar.coor,bond_length=4.0)
# mat_filename="".join(['Pictures/Polar_',ShortName[ii],'.nb'])
# OutputMathematica(mat_filename,mol_polar.coor,Bonds,['C']*mol_polar.Nat,scaleDipole=30.0,**{'TrPointCharge': mol_polar.charge,'AtDipole': AtDipoles,'rSphere_dip': 0.5,'rCylinder_dip':0.1})
#==============================================================================
# Definition of functions for allocation of polarized molecules
#==============================================================================
def prepare_molecule_1Def(filenames,indx,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,CoarseGrain="plane",**kwargs):
''' Read all information needed for the Dielectric class and transform a system
with a single defect into this class. Useful for calculation of interaction
energies, transition site energy shifts and dipole changes.

Parameters
----------
filenames : dictionary
In the dictionary all needed files are specified which contain the
necessary information for transforming the system into the Dielectric class.
keys:
`'1def_structure'`: xyz file with system geometry and atom types
`'charge_structure'`: xyz file with defect-like molecule geometry for which transition charges were calculated
`'charge_grnd'`: file with ground state charges for the defect
`'charge_exct'`: file with excited state charges for the defect
`'charge'`: file with transition charges for the defect
indx : list of integers (dimension 6)
Specifies the indexes needed for assignment of defect
atoms. The first three indexes correspond to the center and two main axes of
the reference structure (the structure which was used for the charge calculation)
and the remaining three indexes are the corresponding atoms of the defect
on the fluorographene system.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference in electrostatic interaction energy between the interaction of
an excited C-F coarse-grained atom of fluorographene with all other
fluorographene coarse-grained atoms in the ground state and the interaction of
a ground state C-F coarse-grained atom of fluorographene with all other
fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
UNITS (Hartree)
CoarseGrain : string (optional - init="plane")
Possible values are: "plane", "C" and "CF". Defines which level of
coarse-grained model should be used. If ``CoarseGrain="plane"`` all atoms
are projected on the plane defined by nvec and C-F atoms are treated as a
single atom - in this case polarizabilities are defined only in 2D by two numbers.
If ``CoarseGrain="C"`` carbon atoms are the centers for the atomic
polarizability tensor and again C-F is treated as a single atom.
If ``CoarseGrain="CF"`` the centers of the C-F bonds are used as centers for
the atomic polarizability tensor and again C-F is treated as a single atom.
verbose : logical (optional - init=False)
If `True` additional information about the whole process will be printed
**kwargs : dictionary (optional)
Definition of polarizability matrices for the defect atoms (if nonzero
polarizability is used)

Returns
-------
mol_polar : Dielectric class
Fluorographene with a defect in the Dielectric class, which contains all
information needed for the calculation of energy shifts and dipole changes
for a defect embedded in fluorographene
index1 : list of integer (dimension Ndefect_atoms)
Atom indexes of the defect atoms
charge : numpy.array of real (dimension Ndefect_atoms)
Transition charges for every defect atom. The first charge corresponds to the
atom defined by the first index in the index1 list and so on.
struc : Structure class
Structure of the fluorographene system with a single defect
'''
if verbose:
print(indx)
indx_center_test=indx[0]
indx_x_test=indx[1]
indx_y_test=indx[2]
indx_center1=indx[3]
indx_x1=indx[4]
indx_y1=indx[5]
# Specify files:
xyzfile2=filenames['charge_structure']
filenameESP=filenames['charge']
xyzfile=filenames['1def_structure']
if verbose:
print(' Reading charges and format to polarization format...')
struc_test=Structure()
struc_test.load_xyz(xyzfile2) # Structure of molecule used for fitting charges
if verbose:
print(' Loading molecule...')
struc=Structure()
struc.load_xyz(xyzfile) # Fluorographene with single defect
coor,charge,at_type=read_TrEsp_charges(filenameESP,verbose=False)
if verbose:
print(' Centering molecule...')
struc.center(indx_center1,indx_x1,indx_y1)
index1=identify_molecule(struc,struc_test,indx_center1,indx_x1,indx_y1,indx_center_test,indx_x_test,indx_y_test,onlyC=True)
if len(index1)!=len(np.unique(index1)):
raise IOError('There are repeating elements in index file')
# Assign pol types and charges
PolCoor,Polcharge,PolType = _prepare_polar_structure_1def(struc,index1,charge,CoarseGrain,verbose=False)
polar={}
polar['AlphaE']=np.zeros((len(PolCoor),3,3),dtype='f8')
polar['Alpha_E']=np.zeros((len(PolCoor),3,3),dtype='f8')
polar['BetaE']=np.zeros((len(PolCoor),3,3),dtype='f8')
mol_polar=Dielectric(PolCoor,Polcharge,np.zeros((len(PolCoor),3),dtype='f8'),
polar['AlphaE'],polar['Alpha_E'],polar['BetaE'],VinterFG)
ZeroM=np.zeros((3,3),dtype='f8')
Polarizability = { 'CF': [AlphaE,Alpha_E,BetaE], 'CD': [AlphaE,Alpha_E,BetaE]}
if "Alpha(E)" in kwargs.keys():
AlphaE_def=kwargs['Alpha(E)']
Alpha_E_def=kwargs['Alpha(-E)']
BetaE_def=kwargs['Beta(E,E)']
Polarizability['C'] = [AlphaE_def,Alpha_E_def,BetaE_def]
else :
Polarizability['C'] = [ZeroM,ZeroM,ZeroM]
if "Fpolar" in kwargs.keys():
Polarizability['FC'] = kwargs['Fpolar']
else:
Polarizability['FC'] = [ZeroM,ZeroM,ZeroM]
mol_polar.polar=mol_polar.assign_polar(PolType,**{'PolValues': Polarizability})
if "Alpha_static" in kwargs.keys():
mol_polar.polar['Alpha_st'] = np.zeros((len(PolCoor),3,3),dtype='f8')
if CoarseGrain=="all_atom":
Alpha_static=kwargs["Alpha_static"]
AlphaF_static=kwargs["AlphaF_static"]
else:
Alpha_static=kwargs["Alpha_static"]
AlphaF_static=ZeroM
for ii in range(len(PolType)):
if PolType[ii]=='CF':
mol_polar.polar['Alpha_st'][ii]=Alpha_static
elif PolType[ii]=='FC':
mol_polar.polar['Alpha_st'][ii]=AlphaF_static
return mol_polar,index1,charge,struc
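The `assign_polar` call above maps every coarse-grained site type to a triple of polarizability tensors via the `Polarizability` dictionary. A minimal stand-alone sketch of that mapping (the function and values below are illustrative, not the module's API):

```python
import numpy as np

def assign_polar_by_type(pol_types, pol_values):
    """Map each site type to its (AlphaE, Alpha_E, BetaE) tensors."""
    n = len(pol_types)
    polar = {key: np.zeros((n, 3, 3)) for key in ("AlphaE", "Alpha_E", "BetaE")}
    for ii, ptype in enumerate(pol_types):
        alpha_e, alpha_me, beta_e = pol_values[ptype]
        polar["AlphaE"][ii] = alpha_e
        polar["Alpha_E"][ii] = alpha_me
        polar["BetaE"][ii] = beta_e
    return polar

# Defect carbons ('C') get one tensor, fluorographene carbons ('CF') another;
# unused contributions stay zero matrices, as in the ZeroM defaults above.
zero = np.zeros((3, 3))
values = {"C": [np.eye(3) * 2.0, zero, zero],
          "CF": [np.eye(3) * 1.0, zero, zero]}
polar = assign_polar_by_type(["C", "CF", "CF"], values)
```

The per-site arrays have shape `(Nsites, 3, 3)`, matching the zero-initialized `polar['AlphaE']` arrays built above.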
def prepare_molecule_2Def(filenames,indx,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False, def2_charge=True,CoarseGrain="plane",**kwargs):
''' Read all information needed for the Dielectric class and transform a
system with two identical defects into this class. Useful for calculating
interaction energies, transition-site energy shifts and dipole changes.
Parameters
----------
filenames : dictionary
Dictionary specifying all files that contain the information
necessary for transforming the system into the Dielectric class.
keys:
* ``'2def_structure'``: xyz file with FG system with two defects
geometry and atom types
* ``'charge1_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge1'``: file with transition charges for the first defect
(from TrEsp charges fitting)
* ``'charge2_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to second
defect
* ``'charge2'``: file with transition charges for the second defect
(from TrEsp charges fitting)
indx : list of integers (dimension 9)
Indexes needed for the assignment of defect atoms. The first three
indexes correspond to the center and the two main axes of the
reference structure (the structure used for the charge calculation),
and the remaining six indexes are the corresponding atoms of the
defects in the fluorographene system (three for the first defect and
the last three for the second one).
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained fluorographene atom with all other coarse-grained
fluorographene atoms in the ground state and that of a ground-state
C-F coarse-grained atom with all other coarse-grained fluorographene
atoms in the ground state. Units are ATOMIC UNITS (Hartree)
def2_charge : logical (init = True)
Specifies whether transition charges should also be placed on the
second defect
CoarseGrain : string (optional init = "plane")
Possible values are: "plane", "C", "CF" and "all_atom". Defines which
level of the coarse-grained model is used. If ``CoarseGrain="plane"``
all atoms are projected onto the plane defined by nvec and C-F pairs
are treated as single atoms - in this case the polarizabilities are
defined only in 2D by two numbers. If ``CoarseGrain="C"`` the carbon
atoms are the centers of the atomic polarizability tensors and C-F
pairs are again treated as single atoms. If ``CoarseGrain="CF"`` the
centers of the C-F bonds are used as the centers of the atomic
polarizability tensors and C-F pairs are again treated as single
atoms. If ``CoarseGrain="all_atom"`` every atom, including fluorine,
is treated explicitly.
verbose : logical (optional - init=False)
If `True` additional information about the whole process is printed
**kwargs : dictionary (optional)
Definition of polarizability matrices for defect atoms (if nonzero
polarizability is used)
Returns
-------
mol_polar : Dielectric class
Fluorographene with two defects, as a Dielectric class containing all
information needed to calculate energy shifts, dipole changes and
interaction energies for a defect homodimer embedded in fluorographene
index1 : list of integers (dimension Ndefect_atoms)
Atom indexes of the first defect's atoms
index2 : list of integers (dimension Ndefect_atoms)
Atom indexes of the second defect's atoms
charge1 : numpy.array of real (dimension Ndefect1_atoms)
Transition charges for every atom of the first defect. The first
charge corresponds to the atom given by the first index in the
index1 list, and so on.
charge2 : numpy.array of real (dimension Ndefect2_atoms)
Transition charges for every atom of the second defect. The first
charge corresponds to the atom given by the first index in the
index2 list, and so on.
struc : Structure class
Structure of the fluorographene system with two defects
'''
indx_center_test=indx[0]
indx_x_test=indx[1]
indx_y_test=indx[2]
indx_center1=indx[3]
indx_x1=indx[4]
indx_y1=indx[5]
indx_center2=indx[6]
indx_x2=indx[7]
indx_y2=indx[8]
# Specify files:
xyzfile_chrg1=filenames['charge1_structure']
filenameESP_chrg1=filenames['charge1']
xyzfile_chrg2=filenames['charge2_structure']
filenameESP_chrg2=filenames['charge2']
xyzfile=filenames['2def_structure']
# Read Transition charges
if verbose:
print(' Reading charges and converting them to the polarization format...')
struc1_test=Structure()
struc2_test=Structure()
struc1_test.load_xyz(xyzfile_chrg1) # Structure of molecule used for fitting charges
struc2_test.load_xyz(xyzfile_chrg2) # Structure of molecule used for fitting charges
coor,charge1,at_type=read_TrEsp_charges(filenameESP_chrg1,verbose=False)
coor,charge2,at_type=read_TrEsp_charges(filenameESP_chrg2,verbose=False)
# load molecule - fluorographene with 2 defects
if verbose:
print(' Loading molecule...')
struc=Structure()
struc.load_xyz(xyzfile) # Fluorographene with two defects
index1=identify_molecule(struc,struc1_test,indx_center1,indx_x1,indx_y1,indx_center_test,indx_x_test,indx_y_test,onlyC=True)
index2=identify_molecule(struc,struc2_test,indx_center2,indx_x2,indx_y2,indx_center_test,indx_x_test,indx_y_test,onlyC=True)
if len(index1)!=len(np.unique(index1)) or len(index2)!=len(np.unique(index2)):
print('index1:')
print(index1)
print('index2:')
print(index2)
raise IOError('There are repeating elements in index file')
# Assign pol types
PolCoor,Polcharge,PolType = _prepare_polar_structure_2def(struc,index1,charge1,index2,charge2,CoarseGrain)
# center projected molecule on plane
if verbose:
print(' Centering molecule...')
PolCoor,Phi,Psi,Chi,center=CenterMolecule(PolCoor,indx_center1,[indx_center1,indx_x1,indx_center2,indx_x2],[indx_center1,indx_y1,indx_center2,indx_y2],print_angles=True)
# Do the same transformation also with the structure
struc.move(-center[0],-center[1],-center[2])
struc.rotate(Phi,Psi,Chi)
polar={}
polar['AlphaE']=np.zeros((len(PolCoor),3,3),dtype='f8')
polar['Alpha_E']=np.zeros((len(PolCoor),3,3),dtype='f8')
polar['BetaE']=np.zeros((len(PolCoor),3,3),dtype='f8')
mol_polar=Dielectric(PolCoor,Polcharge,np.zeros((len(PolCoor),3),dtype='f8'),
polar['AlphaE'],polar['Alpha_E'],polar['BetaE'],VinterFG)
ZeroM=np.zeros((3,3),dtype='f8')
Polarizability = { 'CF': [AlphaE,Alpha_E,BetaE], 'CD': [AlphaE,Alpha_E,BetaE]}
if "Alpha(E)" in kwargs.keys():
AlphaE_def=kwargs['Alpha(E)']
Alpha_E_def=kwargs['Alpha(-E)']
BetaE_def=kwargs['Beta(E,E)']
Polarizability['C'] = [AlphaE_def,Alpha_E_def,BetaE_def]
else :
Polarizability['C'] = [ZeroM,ZeroM,ZeroM]
if "Fpolar" in kwargs.keys():
Polarizability['FC'] = kwargs['Fpolar']
else:
Polarizability['FC'] = [ZeroM,ZeroM,ZeroM]
mol_polar.polar=mol_polar.assign_polar(PolType,**{'PolValues': Polarizability})
if "Alpha_static" in kwargs.keys():
mol_polar.polar['Alpha_st'] = np.zeros((len(PolCoor),3,3),dtype='f8')
if CoarseGrain=="all_atom":
Alpha_static=ZeroM
else:
Alpha_static=kwargs["Alpha_static"]
for ii in range(len(PolType)):
if PolType[ii]=='CF':
mol_polar.polar['Alpha_st'][ii]=Alpha_static
return mol_polar,index1,index2,charge1,charge2,struc
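`CenterMolecule` above translates the system so that a chosen atom sits at the origin and then rotates the whole system into a standard orientation (the same shift and rotation are re-applied to `struc`). The idea can be sketched stand-alone by aligning one reference direction with the +x axis via Rodrigues' rotation formula (an illustration, not the actual `CenterMolecule` implementation):

```python
import numpy as np

def center_and_align(coor, i_center, i_x):
    """Translate atom i_center to the origin and rotate atom i_x onto the +x axis."""
    coor = np.asarray(coor, dtype=float)
    coor = coor - coor[i_center]                  # translation to the origin
    v = coor[i_x] / np.linalg.norm(coor[i_x])     # unit vector to be aligned
    x = np.array([1.0, 0.0, 0.0])
    axis = np.cross(v, x)
    s = np.linalg.norm(axis)                      # sin of rotation angle
    c = v @ x                                     # cos of rotation angle
    if s < 1e-12:                                 # v already (anti)parallel to x
        R = np.eye(3) if c > 0 else np.diag([-1.0, 1.0, -1.0])
    else:
        k = axis / s
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues formula: rotation taking v onto x
        R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return coor @ R.T

# Atom 0 goes to the origin; atom 1 ends up on the +x axis.
pts = center_and_align([[1.0, 1.0, 0.0], [1.0, 2.0, 0.0]], 0, 1)
```

Applying the identical translation and rotation to both the polarizability sites and the structure, as done above with `struc.move` and `struc.rotate`, keeps the two representations consistent.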
def _prepare_polar_structure_1def(struc,index1,charge1,Type,verbose=False):
"""
Type = "plane","C","CF","all_atom"
"""
if Type not in ["plane","C","CF","all_atom"]:
raise ValueError("Unsupported type of coarse graining.")
if verbose:
print(Type)
# Molecule has to be centered and oriented first before this calculation is done
# Assign pol types and charges
PolType=[]
Polcharge=[]
PolCoor=[]
if Type == "plane" or Type == "C":
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
PolCoor.append(struc.coor._value[ii])
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
PolCoor=np.array(PolCoor,dtype='f8')
if Type == "plane":
# project molecule whole system to plane defined by defect
nvec_test,origin_test = fit_plane(PolCoor)
PolCoor=project_on_plane(PolCoor,nvec_test,origin_test)
#center=np.array([0.0,0.0,0.0],dtype='f8')
#PolCoor=project_on_plane(PolCoor,nvec,center)
elif Type == "all_atom":
PolCoor = struc.coor._value.copy()
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
elif struc.at_type[ii]=='F':
PolType.append('FC')
Polcharge.append(0.0)
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
PolCoor=np.array(PolCoor,dtype='f8')
elif Type == "CF":
connectivity = []
for ii in range(struc.nat):
connectivity.append([])
if struc.bonds is None:
struc.guess_bonds()
for ii in range(len(struc.bonds)):
indx1=struc.bonds[ii][0]
at1=struc.at_type[indx1]
indx2=struc.bonds[ii][1]
at2=struc.at_type[indx2]
if at1=="C" and at2=="F":
connectivity[indx1].append(indx2)
elif at2=="C" and at1=="F":
connectivity[indx2].append(indx1)
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
# polarizability center is located at the center of the C-F bond (or of the F-C-F unit for border carbons)
count = 1
position = struc.coor._value[ii].copy() # copy, otherwise += below would modify the structure in place
for jj in range(len(connectivity[ii])):
position += struc.coor._value[ connectivity[ii][jj] ]
count += 1
position = position / count
PolCoor.append(position)
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
PolCoor=np.array(PolCoor,dtype='f8')
return PolCoor,Polcharge,PolType
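The ``'CF'`` branch places each polarizability center at the mean position of a carbon and its bonded fluorines (C-F, or F-C-F for border carbons). That averaging can be sketched stand-alone with plain lists instead of the `Structure` class:

```python
import numpy as np

def cf_centers(coor, at_type, bonds):
    """Average each carbon with its bonded fluorines (C-F, or F-C-F at borders)."""
    coor = np.asarray(coor, dtype=float)
    # Per-carbon list of bonded fluorine indexes, mirroring the connectivity loop above.
    connectivity = [[] for _ in at_type]
    for i, j in bonds:
        if at_type[i] == "C" and at_type[j] == "F":
            connectivity[i].append(j)
        elif at_type[j] == "C" and at_type[i] == "F":
            connectivity[j].append(i)
    centers = []
    for i, at in enumerate(at_type):
        if at != "C":
            continue
        group = [i] + connectivity[i]
        # Fancy indexing copies the rows, so the mean never mutates `coor` in place.
        centers.append(coor[group].mean(axis=0))
    return np.array(centers)

# One carbon bonded to one fluorine: the center is the bond midpoint.
centers = cf_centers([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4]], ["C", "F"], [(0, 1)])
```

A carbon with no bonded fluorine (a bare defect carbon) simply keeps its own position, since the group then contains only the carbon itself.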
def _prepare_polar_structure_2def(struc,index1,charge1,index2,charge2,Type,verbose=False):
"""
Type = "plane","C","CF","all_atom"
"""
if Type not in ["plane","C","CF","all_atom"]:
raise ValueError("Unsupported type of coarse graining.")
if verbose:
print(Type)
# Assign pol types
PolType=[]
Polcharge=[]
PolCoor=[]
if Type == "plane" or Type == "C":
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C' and (ii in index2):
Polcharge.append(charge2[np.where(index2==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
PolCoor.append(struc.coor._value[ii])
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
PolCoor=np.array(PolCoor,dtype='f8')
if Type == "plane":
# project molecule whole system to plane defined by defect
nvec_test,origin_test = fit_plane(PolCoor)
PolCoor=project_on_plane(PolCoor,nvec_test,origin_test)
#center=np.array([0.0,0.0,0.0],dtype='f8')
#PolCoor=project_on_plane(PolCoor,nvec,center)
elif Type == "all_atom":
PolCoor = struc.coor._value.copy()
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
elif struc.at_type[ii]=='C' and (ii in index2):
Polcharge.append(charge2[np.where(index2==ii)[0][0]])
PolType.append('C')
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
elif struc.at_type[ii]=='F':
PolType.append('FC')
Polcharge.append(0.0)
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
# TODO: TEST this assignment of polarizability centers
elif Type == "CF":
connectivity = []
for ii in range(struc.nat):
connectivity.append([])
if struc.bonds is None:
struc.guess_bonds()
for ii in range(len(struc.bonds)):
indx1=struc.bonds[ii][0]
at1=struc.at_type[indx1]
indx2=struc.bonds[ii][1]
at2=struc.at_type[indx2]
if at1=="C" and at2=="F":
connectivity[indx1].append(indx2)
elif at2=="C" and at1=="F":
connectivity[indx2].append(indx1)
for ii in range(struc.nat):
if struc.at_type[ii]=='C' and (ii in index1):
Polcharge.append(charge1[np.where(index1==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C' and (ii in index2):
Polcharge.append(charge2[np.where(index2==ii)[0][0]])
PolType.append('C')
PolCoor.append(struc.coor._value[ii])
elif struc.at_type[ii]=='C':
PolType.append('CF')
Polcharge.append(0.0)
# polarizability center is located at the center of the C-F bond (or of the F-C-F unit for border carbons)
count = 1
position = struc.coor._value[ii].copy() # copy, otherwise += below would modify the structure in place
for jj in range(len(connectivity[ii])):
position += struc.coor._value[ connectivity[ii][jj] ]
count += 1
position = position / count
PolCoor.append(position)
PolType=np.array(PolType)
Polcharge=np.array(Polcharge,dtype='f8')
PolCoor=np.array(PolCoor,dtype='f8')
return PolCoor,Polcharge,PolType
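The ``'plane'`` branches in both helpers rely on `fit_plane` and `project_on_plane`. A minimal SVD-based equivalent is sketched below (an assumption about their behavior, not the package's implementation):

```python
import numpy as np

def fit_plane_svd(points):
    """Best-fit plane through a point cloud: returns (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    origin = pts.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least spread, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - origin)
    return vt[-1], origin

def project_points(points, nvec, origin):
    """Remove each point's out-of-plane component along nvec."""
    pts = np.asarray(points, dtype=float)
    dist = (pts - origin) @ nvec          # signed distance of each point to the plane
    return pts - np.outer(dist, nvec)

# Four points puckered symmetrically around z=0: the fitted plane is z=0
# and projection flattens the z coordinates.
pts = [[0.0, 0.0, 0.1], [1.0, 0.0, -0.1], [0.0, 1.0, -0.1], [1.0, 1.0, 0.1]]
nvec, origin = fit_plane_svd(pts)
flat = project_points(pts, nvec, origin)
```

For a nearly planar fluorographene sheet this collapses the slight puckering onto a single plane, which is why the 2D (2x2) polarizabilities suffice in the ``'plane'`` model.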
#TODO: Get rid of ShortName
def Calc_SingleDef_FGprop(filenames,ShortName,index_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Calculate energy shifts and transition dipole shifts for a single defect
embedded in fluorographene
Parameters
----------
filenames : dictionary
Dictionary specifying all files that contain the information
necessary for transforming the system into the Dielectric class and
for the electrostatic calculations. Keys:
* ``'1def_structure'``: xyz file with FG system with single defect
geometry and atom types
* ``'charge_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge'``: file with transition charges for the defect
(from TrEsp charges fitting)
* ``'charge_grnd'``: file with ground state charges for the defect
(from TrEsp charges fitting)
* ``'charge_exct'``: file with excited state charges for the defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
index_all : list of integers (dimension 6)
Indexes needed for the assignment of defect atoms. The first three
indexes correspond to the center and the two main axes of the
reference structure (the structure used for the charge calculation)
and the last three indexes are the corresponding atoms of the defect.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained fluorographene atom with all other coarse-grained
fluorographene atoms in the ground state and that of a ground-state
C-F coarse-grained atom with all other coarse-grained fluorographene
atoms in the ground state. Units are ATOMIC UNITS (Hartree)
FG_charges : list of real (dimension 2)
[charge on inner fluorographene atom, charge on border fluorographene carbon]
ChargeType : string
Specifies which charges should be used for electrostatic calculations
(ground and excited state charges) for defect atoms. Allowed types are:
``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
* ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
atoms.
* ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
atoms; only the carbon charges are used, and the same charge is added
to every carbon atom so that the molecule stays neutral.
* ``'AMBER'`` - not yet fully implemented.
* ``'gaussian'`` - not yet fully implemented.
order : integer (optional - init=80)
Specifies how many SCF steps should be used in the calculation of the
induced dipoles - according to the used model it should be 2
CoarseGrain : string (optional init = "plane")
Possible values are: "plane", "C", "CF" and "all_atom". Defines which
level of the coarse-grained model is used (see
``prepare_molecule_1Def``).
verbose : logical (optional - init=False)
If `True` additional information about the whole process is printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the latter is not a necessary condition
Returns
--------
Eshift : Energy class
Transition energy shift for the defect due to the fluorographene
environment, calculated from the structure with a single defect.
Units are energy managed
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC UNITS)
Notes
--------
By comparing QC calculations it was found that the energy shift obtained
from a structure with two defects and from one with a single defect is
almost the same.
'''
if verbose:
print('Calculation of the energy shift for:',ShortName)
# read and prepare molecule
mol_polar,index1,charge,struc=prepare_molecule_1Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,CoarseGrain=CoarseGrain,**kwargs)
# calculate dAVA = <A|V|A>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1}
mol_Elstat,index,charge_grnd,charge_exct=ElStat_PrepareMolecule_1Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift()
# calculate transition energy shifts and transition dipole change
Eshift,TrDip=mol_polar.get_SingleDefectProperties(index1,dAVA=dAVA,order=order,approx=approx)
if verbose:
with energy_units("1/cm"):
print(ShortName,Eshift.value)
print(" dipole:",np.linalg.norm(TrDip))
print(" dAVA:",dAVA*conversion_facs_energy["1/cm"],'cm-1')
return Eshift, TrDip
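The `order` argument sets the number of self-consistent iterations used for the induced dipoles. The underlying fixed-point scheme can be sketched for isotropic point polarizabilities (toy values; the real `Dielectric` class works with full tensors and transition charges):

```python
import numpy as np

def scf_induced_dipoles(coor, alpha, E0, order=2):
    """Iterate mu_i = alpha_i * (E0_i + sum_j T_ij mu_j) for point dipoles."""
    coor = np.asarray(coor, dtype=float)
    n = len(coor)
    mu = alpha[:, None] * E0              # zeroth iteration: external field only
    for _ in range(order):
        field = E0.copy()
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = coor[i] - coor[j]
                d = np.linalg.norm(r)
                # field of dipole mu_j at site i: (3 r (r.mu) - d^2 mu) / d^5
                field[i] += (3.0 * r * (r @ mu[j]) - d**2 * mu[j]) / d**5
        mu = alpha[:, None] * field
    return mu

# Two polarizable sites stacked along z in a unit field along z:
# the head-to-tail arrangement enhances the induced dipoles each iteration.
coor = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 4.0]])
alpha = np.array([10.0, 10.0])
E0 = np.tile([0.0, 0.0, 1.0], (2, 1))
mu = scf_induced_dipoles(coor, alpha, E0, order=2)
```

With `order=2` the result is the second fixed-point iterate, consistent with the docstring's note that the model calls for 2 SCF steps even though the default is 80.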
#TODO: Get rid of ShortName
#TODO: Input vacuum transition energies
def Calc_Heterodimer_FGprop(filenames,ShortName,index_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Calculate interaction energies between defects embedded in a polarizable
atomic environment for all systems given in filenames. Transition energy
shifts and transition dipoles can also be calculated.
Parameters
----------
filenames : dictionary
Dictionary specifying all files that contain the information
necessary for transforming the system into the Dielectric class and
for the electrostatic calculations. Keys:
* ``'2def_structure'``: xyz file with FG system with two defects
geometry and atom types
* ``'charge1_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge1'``: file with transition charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_grnd'``: file with ground state charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_exct'``: file with excited state charges for the first defect
(from TrEsp charges fitting)
* ``'charge2_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to second
defect
* ``'charge2'``: file with transition charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_grnd'``: file with ground state charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_exct'``: file with excited state charges for the second defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
index_all : list of integers (dimension 9)
Indexes needed for the assignment of defect atoms. The first three
indexes correspond to the center and the two main axes of the
reference structure (the structure used for the charge calculation),
the next three indexes are the corresponding atoms of the first
defect in the fluorographene system, and the last three are the
corresponding atoms of the second defect.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained fluorographene atom with all other coarse-grained
fluorographene atoms in the ground state and that of a ground-state
C-F coarse-grained atom with all other coarse-grained fluorographene
atoms in the ground state. Units are ATOMIC UNITS (Hartree)
FG_charges : list of real (dimension 2)
[charge on inner fluorographene atom, charge on border fluorographene carbon]
ChargeType : string
Specifies which charges should be used for electrostatic calculations
(ground and excited state charges) for defect atoms. Allowed types are:
``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
* ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
atoms.
* ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
atoms; only the carbon charges are used, and the same charge is added
to every carbon atom so that the molecule stays neutral.
* ``'AMBER'`` - not yet fully implemented.
* ``'gaussian'`` - not yet fully implemented.
order : integer (optional - init=80)
Specifies how many SCF steps should be used in the calculation of the
induced dipoles - according to the used model it should be 2
CoarseGrain : string (optional init = "plane")
Possible values are: "plane", "C", "CF" and "all_atom". Defines which
level of the coarse-grained model is used. If ``CoarseGrain="plane"``
all atoms are projected onto the plane defined by nvec and C-F pairs
are treated as single atoms - in this case the polarizabilities are
defined only in 2D by two numbers. If ``CoarseGrain="C"`` the carbon
atoms are the centers of the atomic polarizability tensors and C-F
pairs are again treated as single atoms. If ``CoarseGrain="CF"`` the
centers of the C-F bonds are used as the centers of the atomic
polarizability tensors and C-F pairs are again treated as single
atoms. If ``CoarseGrain="all_atom"`` every atom, including fluorine,
is treated explicitly.
verbose : logical (optional - init=False)
If `True` additional information about the whole process is printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)`, `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`, although the latter is not a necessary condition
Returns
--------
Einter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment, calculated from the heterodimer structure. Units are
energy managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment, calculated from the heterodimer structure. Units are
energy managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included, calculated from the heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included, calculated from the heterodimer structure (in ATOMIC UNITS)
Notes
----------
So far this works only with two symmetric defects - for a true heterodimer
the vacuum transition energy of every defect must be supplied.
'''
if verbose:
print('Calculation of interaction energy for:',ShortName)
# read and prepare molecule
mol_polar,index1,index2,charge1,charge2,struc=prepare_molecule_2Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,def2_charge=True,CoarseGrain=CoarseGrain,**kwargs)
# # calculate dAVA = <A|V|A>-<G|V|G> and dBVB = <B|V|B>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1,'index2':index2}
mol_Elstat,indx1,indx2,charge1_grnd,charge2_grnd,charge1_exct,charge2_exct=ElStat_PrepareMolecule_2Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift(index=index2, charge=charge2_grnd)
dBVB=mol_Elstat.get_EnergyShift(index=index1, charge=charge1_grnd)
# calculate interaction energy and transition energy shifts
Einter,Eshift1,Eshift2,TrDip1,TrDip2,dipAE,dipA_E,dipBE=mol_polar.get_HeterodimerProperties(index1,index2,0.0,0.0,dAVA=dAVA,dBVB=dBVB,order=order,approx=approx)
if verbose:
with energy_units("1/cm"):
print(' Total interaction energy:',Einter.value)
print(ShortName,abs(Einter.value),Eshift1.value,Eshift2.value)
print("dipole:",np.linalg.norm(TrDip1),np.linalg.norm(TrDip2))
print("dAVA:",dAVA*conversion_facs_energy["1/cm"],"dBVB:",dBVB*conversion_facs_energy["1/cm"])
if MathOut:
if not os.path.exists("Pictures"):
os.makedirs("Pictures")
Bonds = GuessBonds(mol_polar.coor)
if CoarseGrain in ["plane","C","CF"]:
at_type = ['C']*mol_polar.Nat
elif CoarseGrain == "all_atom":
at_type = struc.at_type.copy()
mat_filename = "".join(['Pictures/Polar_',ShortName,'_AlphaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipAE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_Alpha_E.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipA_E,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_BetaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipBE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
return Einter, Eshift1, Eshift2, TrDip1, TrDip2
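The verbose branch above multiplies Hartree energies by `conversion_facs_energy["1/cm"]` to print wavenumbers. The conversion itself is a single factor; the sketch below uses the CODATA value directly rather than the module's unit manager:

```python
# Hartree to wavenumber conversion, as used when printing dAVA and dBVB.
HARTREE_TO_INVCM = 219474.6313632  # cm^-1 per Hartree (CODATA)

def hartree_to_invcm(energy_hartree):
    """Convert an energy in Hartree (atomic units) to cm^-1."""
    return energy_hartree * HARTREE_TO_INVCM

# A 0.01 Ha shift corresponds to roughly 2.19e3 cm^-1.
shift = hartree_to_invcm(0.01)
```

Inside the `energy_units("1/cm")` context the `Energy` objects print in wavenumbers directly, so the explicit factor is only needed for the bare floats `dAVA` and `dBVB`.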
def TEST_Calc_Heterodimer_FGprop(filenames,ShortName,index_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=80,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Calculate interaction energies between defects embedded in a polarizable
atomic environment for all systems given in filenames. Transition energy
shifts and transition dipoles can also be calculated.
Parameters
----------
filenames : dictionary
Dictionary specifying all files that contain the information
necessary for transforming the system into the Dielectric class and
for the electrostatic calculations. Keys:
* ``'2def_structure'``: xyz file with FG system with two defects
geometry and atom types
* ``'charge1_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge1'``: file with transition charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_grnd'``: file with ground state charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_exct'``: file with excited state charges for the first defect
(from TrEsp charges fitting)
* ``'charge2_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to second
defect
* ``'charge2'``: file with transition charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_grnd'``: file with ground state charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_exct'``: file with excited state charges for the second defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
index_all : list of integers (dimension 9)
Indexes needed for the assignment of defect atoms. The first three
indexes correspond to the center and the two main axes of the
reference structure (the structure used for the charge calculation),
the next three indexes are the corresponding atoms of the first
defect in the fluorographene system, and the last three are the
corresponding atoms of the second defect.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained fluorographene atom with all other coarse-grained
fluorographene atoms in the ground state and that of a ground-state
C-F coarse-grained atom with all other coarse-grained fluorographene
atoms in the ground state. Units are ATOMIC UNITS (Hartree)
FG_charges : list of real (dimension 2)
[charge on inner fluorographene atom, charge on border fluorographene carbon]
ChargeType : string
Specifies which charges should be used for electrostatic calculations
(ground and excited state charges) for defect atoms. Allowed types are:
``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
* ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
atoms.
* ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
atoms; only the carbon charges are used and the same charge is added to
all carbon atoms in order to keep the molecule neutral.
* ``'AMBER'`` - not yet fully implemented.
* ``'gaussian'`` - not yet fully implemented.
order : integer (optional - init=80)
Specifies how many SCF steps should be used in the calculation of induced
dipoles; according to the model used it should be 2
CoarseGrain : string (optional init = "plane")
Possible values are: "plane", "C", "CF". Defines which level of the
coarse-grained model should be used. If ``CoarseGrain="plane"`` then all
atoms are projected onto the plane defined by nvec and C-F atoms are
treated as a single atom; in this case the polarizabilities are defined
only in 2D by two numbers. If ``CoarseGrain="C"`` then the carbon atoms
are the centers of the atomic polarizability tensors and again C-F is
treated as a single atom. If ``CoarseGrain="CF"`` then the centers of the
C-F bonds are used as centers of the atomic polarizability tensors and
again C-F is treated as a single atom.
verbose : logical (optional - init=False)
If `True` additional information about the whole process will be printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
**Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
**Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
**Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`; however, the second one is not a condition
Returns
--------
Einter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included calculated from heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
Notes
----------
So far this works only with two symmetric defects; for a heterodimer the
vacuum transition energy of each defect would have to be given.
'''
if verbose:
print('Calculation of interaction energy for:',ShortName)
# read and prepare molecule
mol_polar,index1,index2,charge1,charge2,struc=prepare_molecule_2Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,def2_charge=True,CoarseGrain=CoarseGrain,**kwargs)
if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
raise Warning("Transition charges are not the same - after creation.")
# # calculate dAVA = <A|V|A>-<G|V|G> and dBVB = <B|V|B>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1,'index2':index2}
mol_Elstat,indx1,indx2,charge1_grnd,charge2_grnd,charge1_exct,charge2_exct=ElStat_PrepareMolecule_2Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift(index=index2, charge=charge2_grnd)
dBVB=mol_Elstat.get_EnergyShift(index=index1, charge=charge1_grnd)
# dAVA=mol_Elstat.get_EnergyShift(index=index2)
# dBVB=mol_Elstat.get_EnergyShift(index=index1)
if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
raise Warning("Transition charges are not the same - after elstat.")
# calculate interaction energy and transition energy shifts - so far for homodimer
Einter,Eshift1,Eshift2,TrDip1,TrDip2,dipAE,dipA_E,dipBE,res=mol_polar._TEST_HeterodimerProperties(charge1_grnd,charge1_exct,charge2_grnd,charge2_exct,mol_Elstat,struc,index1,index2,0.0,0.0,dAVA=dAVA,dBVB=dBVB,order=order,approx=approx)
#get_HeterodimerProperties_new(self, gr_charge1, ex_charge1, gr_charge2, ex_charge2, FG_elstat, struc, index1, index2, Eng1, Eng2, eps, dAVA=0.0, dBVB=0.0, order=2, approx=1.1)
# res["E_pol2_A(E)"]
# res["E_pol2_A(-E)"]
# res["E_pol2_B(E,E)"]
# res["E_pol1_B(E,E)_(A_exct,B_grnd)"]
# res["E_pol1_B(E,E)_(A_grnd,B_exct)"]
# res["E_pol1-env_B(E,E)_grnd"]
# res["E_pol1-env_B(E,E)_exct"]
# res["E_pol2_st_(A_exct,B_grnd)"]
# res["E_pol2_st_(A_grnd,B_exct)"]
# res["E_pol2-env_st_grnd"]
# res["E_pol2-env_st_exct"]
# res["E_pol1_B(E,E)_(tr_gr,ex)"]
import os
if not os.path.isfile("Temp.dat"):
text = " pol2_A(E) | pol2_A(-E) | pol2_st_(A_ex,B_gr) | pol2_st_(A_gr,B_ex) | E_pol2-env_st_grnd | E_pol2-env_st_exct | pol1_BEE | pol1_BEE_(A_ex,B_gr) | pol1_BEE_(A_gr,B_ex) | pol1-env_BEE_grnd | pol1-env_BEE_exct | pol1_BEE_(tr_gr,ex) |"
os.system("".join(['echo "',text,'" >> Temp.dat']))
text = "--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|"
os.system("".join(['echo "',text,'" >> Temp.dat']))
# pol2_A(E) | pol2_A(-E) | pol2_st_(A_ex,B_gr) | pol2_st_(A_gr,B_ex) | E_pol2-env_st_grnd | E_pol2-env_st_exct | pol1_BEE | pol1_BEE_(A_ex,B_gr) | pol1_BEE_(A_gr,B_ex) | pol1-env_BEE_grnd |"
ii = 0
text="{:21} {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} |".format(
ShortName,res["E_pol2_A(E)"][ii,0],res["E_pol2_A(E)"][ii,1],res["E_pol2_A(-E)"][ii,0],res["E_pol2_A(-E)"][ii,1],
res["E_pol2_st_(A_exct,B_grnd)"][ii,0],res["E_pol2_st_(A_exct,B_grnd)"][ii,1],res["E_pol2_st_(A_grnd,B_exct)"][ii,0],
res["E_pol2_st_(A_grnd,B_exct)"][ii,1],res["E_pol2-env_st_grnd"][ii,0],res["E_pol2-env_st_grnd"][ii,1],
res["E_pol2-env_st_exct"][ii,0],res["E_pol2-env_st_exct"][ii,1],res["E_pol2_B(E,E)"][ii,0],res["E_pol2_B(E,E)"][ii,1],
res["E_pol1_B(E,E)_(A_exct,B_grnd)"][ii,0],res["E_pol1_B(E,E)_(A_exct,B_grnd)"][ii,1],res["E_pol1_B(E,E)_(A_grnd,B_exct)"][ii,0],
res["E_pol1_B(E,E)_(A_grnd,B_exct)"][ii,1],res["E_pol1-env_B(E,E)_grnd"][ii,0],res["E_pol1-env_B(E,E)_grnd"][ii,1],
res["E_pol1-env_B(E,E)_exct"][ii,0],res["E_pol1-env_B(E,E)_exct"][ii,1],res["E_pol1_B(E,E)_(tr_gr,ex)"][ii,0],res["E_pol1_B(E,E)_(tr_gr,ex)"][ii,1])
os.system("".join(['echo "',text,'" >> Temp.dat']))
ii = 1
text="{:21} {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} |".format(
" ",res["E_pol2_A(E)"][ii,0],res["E_pol2_A(E)"][ii,1],res["E_pol2_A(-E)"][ii,0],res["E_pol2_A(-E)"][ii,1],
res["E_pol2_st_(A_exct,B_grnd)"][ii,0],res["E_pol2_st_(A_exct,B_grnd)"][ii,1],res["E_pol2_st_(A_grnd,B_exct)"][ii,0],
res["E_pol2_st_(A_grnd,B_exct)"][ii,1],res["E_pol2-env_st_grnd"][ii,0],res["E_pol2-env_st_grnd"][ii,1],
res["E_pol2-env_st_exct"][ii,0],res["E_pol2-env_st_exct"][ii,1],res["E_pol2_B(E,E)"][ii,0],res["E_pol2_B(E,E)"][ii,1],
res["E_pol1_B(E,E)_(A_exct,B_grnd)"][ii,0],res["E_pol1_B(E,E)_(A_exct,B_grnd)"][ii,1],res["E_pol1_B(E,E)_(A_grnd,B_exct)"][ii,0],
res["E_pol1_B(E,E)_(A_grnd,B_exct)"][ii,1],res["E_pol1-env_B(E,E)_grnd"][ii,0],res["E_pol1-env_B(E,E)_grnd"][ii,1],
res["E_pol1-env_B(E,E)_exct"][ii,0],res["E_pol1-env_B(E,E)_exct"][ii,1],res["E_pol1_B(E,E)_(tr_gr,ex)"][ii,0],res["E_pol1_B(E,E)_(tr_gr,ex)"][ii,1])
os.system("".join(['echo "',text,'" >> Temp.dat']))
text = "--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|"
os.system("".join(['echo "',text,' " >> Temp.dat']))
# if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
# raise Warning("Transition charges are not the same - after polar.")
# TODO: For testing output structure and polarization structure - I'm getting different values for first and second defect
# struc.output_to_xyz("".join([ShortName,"_structure.xyz"]))
# from QChemTool.QuantumChem.output import OutputToXYZ
# from QChemTool.General.units import conversion_facs_position
# OutputToXYZ(mol_polar.coor*conversion_facs_position["Angstrom"],["C"]*len(mol_polar.coor),"".join([ShortName,"_pol.xyz"]))
if verbose:
with energy_units("1/cm"):
print(' Total interaction energy:',Einter.value)
print(ShortName,abs(Einter.value),Eshift1.value,Eshift2.value)
print("dipole:",np.linalg.norm(TrDip1),np.linalg.norm(TrDip2))
print("dAVA:",dAVA*conversion_facs_energy["1/cm"],"dBVB:",dBVB*conversion_facs_energy["1/cm"])
if MathOut:
if not os.path.exists("Pictures"):
os.makedirs("Pictures")
Bonds = GuessBonds(mol_polar.coor)
if CoarseGrain in ["plane","C","CF"]:
at_type = ['C']*mol_polar.Nat
elif CoarseGrain == "all_atom":
at_type = struc.at_type.copy()
# if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
# raise Warning("Transition charges are not the same - before output.")
mat_filename = "".join(['Pictures/Polar_',ShortName,'_AlphaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipAE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_Alpha_E.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipA_E,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_BetaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipBE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
return Einter, Eshift1, Eshift2, TrDip1, TrDip2
def Calc_Heterodimer_FGprop_new(filenames,ShortName,E1,E2,index_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=2,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Calculate interaction energies between defects embedded in a polarizable
atom environment for all systems given in filenames. Optionally also
calculates transition energy shifts and transition dipoles.
Parameters
----------
filenames : dictionary
Dictionary with information about all needed files which contain the
necessary information for transforming the system into the Dielectric class
and electrostatic calculations. Keys:
* ``'2def_structure'``: xyz file with FG system with two defects
geometry and atom types
* ``'charge1_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge1'``: file with transition charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_grnd'``: file with ground state charges for the first defect
(from TrEsp charges fitting)
* ``'charge1_exct'``: file with excited state charges for the first defect
(from TrEsp charges fitting)
* ``'charge2_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to second
defect
* ``'charge2'``: file with transition charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_grnd'``: file with ground state charges for the second defect
(from TrEsp charges fitting)
* ``'charge2_exct'``: file with excited state charges for the second defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
index_all : list of integers (dimension 6)
Indexes needed for assignment of the defect atoms. The first three
indexes correspond to the center and two main axes of the reference
structure (the structure used for the charge calculation), the next three
are the corresponding atoms of the first defect in the fluorographene
system, and the last three are the corresponding atoms of the second
defect.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained atom of fluorographene with all other fluorographene
coarse-grained atoms in the ground state and the interaction of a ground
state C-F coarse-grained atom of fluorographene with all other
fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
UNITS (Hartree)
FG_charges : list of real (dimension 2)
[charge on inner fluorographene atom, charge on border fluorographene carbon]
ChargeType : string
Specifies which charges should be used for electrostatic calculations
(ground and excited state charges) for defect atoms. Allowed types are:
``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
* ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
atoms.
* ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
atoms; only the carbon charges are used and the same charge is added to
all carbon atoms in order to keep the molecule neutral.
* ``'AMBER'`` - not yet fully implemented.
* ``'gaussian'`` - not yet fully implemented.
order : integer (optional - init=2)
Specifies how many SCF steps should be used in the calculation of induced
dipoles; according to the model used it should be 2
CoarseGrain : string (optional init = "plane")
Possible values are: "plane", "C", "CF". Defines which level of the
coarse-grained model should be used. If ``CoarseGrain="plane"`` then all
atoms are projected onto the plane defined by nvec and C-F atoms are
treated as a single atom; in this case the polarizabilities are defined
only in 2D by two numbers. If ``CoarseGrain="C"`` then the carbon atoms
are the centers of the atomic polarizability tensors and again C-F is
treated as a single atom. If ``CoarseGrain="CF"`` then the centers of the
C-F bonds are used as centers of the atomic polarizability tensors and
again C-F is treated as a single atom.
verbose : logical (optional - init=False)
If `True` additional information about the whole process will be printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
**Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
**Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
**Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`; however, the second one is not a condition
Returns
--------
Einter : Energy class
Interaction energy with effects of environment included. Units are
energy managed
Eshift1 : Energy class
Transition energy shift for the first defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
Eshift2 : Energy class
Transition energy shift for the second defect due to the fluorographene
environment calculated from the heterodimer structure. Units are energy
managed
TrDip1 : numpy array of real (dimension 3)
Total transition dipole for the first defect with environment effects
included calculated from heterodimer structure (in ATOMIC UNITS)
TrDip2 : numpy array of real (dimension 3)
Total transition dipole for the second defect with environment effects
included calculated from the heterodimer structure (in ATOMIC UNITS)
Notes
----------
So far this works only with two symmetric defects; for a heterodimer the
vacuum transition energy of each defect would have to be given.
'''
if verbose:
print('Calculation of interaction energy for:',ShortName)
# read and prepare molecule
mol_polar,index1,index2,charge1,charge2,struc=prepare_molecule_2Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,def2_charge=True,CoarseGrain=CoarseGrain,**kwargs)
if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
raise Warning("Transition charges are not the same - after creation.")
# # calculate dAVA = <A|V|A>-<G|V|G> and dBVB = <B|V|B>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1,'index2':index2}
mol_Elstat,indx1,indx2,charge1_grnd,charge2_grnd,charge1_exct,charge2_exct=ElStat_PrepareMolecule_2Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift(index=index2, charge=charge2_grnd)
dBVB=mol_Elstat.get_EnergyShift(index=index1, charge=charge1_grnd)
# dAVA=mol_Elstat.get_EnergyShift(index=index2)
# dBVB=mol_Elstat.get_EnergyShift(index=index1)
if (mol_polar.charge[index1] != mol_polar.charge[index2]).any():
raise Warning("Transition charges are not the same - after elstat.")
eps = EnergyClass( (E1.value+E2.value)/2 )
# calculate interaction energy and transition energy shifts - so far for homodimer
Einter,Eshift1,Eshift2,TrDip1,TrDip2,dipAE,dipA_E,dipBE,res=mol_polar.get_HeterodimerProperties_new(charge1_grnd,charge1_exct,charge2_grnd,charge2_exct,mol_Elstat,struc,index1,index2,0.0,0.0,eps,dAVA=dAVA,dBVB=dBVB,order=order,approx=approx)
if verbose:
with energy_units("1/cm"):
print(' Total interaction energy:',Einter.value)
print(ShortName,abs(Einter.value),Eshift1.value,Eshift2.value)
print("dipole:",np.linalg.norm(TrDip1),np.linalg.norm(TrDip2))
print("dAVA:",dAVA*conversion_facs_energy["1/cm"],"dBVB:",dBVB*conversion_facs_energy["1/cm"])
if MathOut:
if not os.path.exists("Pictures"):
os.makedirs("Pictures")
Bonds = GuessBonds(mol_polar.coor)
if CoarseGrain in ["plane","C","CF"]:
at_type = ['C']*mol_polar.Nat
elif CoarseGrain == "all_atom":
at_type = struc.at_type.copy()
mat_filename = "".join(['Pictures/Polar_',ShortName,'_AlphaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipAE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_Alpha_E.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipA_E,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
mat_filename = "".join(['Pictures/Polar_',ShortName,'_BetaE.nb'])
params = {'TrPointCharge': mol_polar.charge,'AtDipole': dipBE,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_polar.coor,Bonds,at_type,scaleDipole=50.0,**params)
# res["E_pol2_A(E)"] = PolarMat_AlphaE
# res["E_pol2_A(-E)"] = PolarMat_Alpha_E
# res["E_pol2_A_static"] = PolarMat_Alpha_st
# res["E_pol2_B(E,E)"] = PolarMat_Beta
# res["E_pol2_B(E,E)_scaled"] = PolarMat_Beta_scaled
# res["E_pol2_A(E)_(trans,grnd)"] = PolarMat_Alpha_tr_gr
# res["E_pol1_A_static"] = PolarMat_static_tr_gr_ex
# res["E_elstat_1"] = ElstatMat_1
if verbose:
if not os.path.isfile("Temp.dat"):
text = " pol2_A(E) | pol2_A(-E) | pol2_st | pol2_BEE_scaled | E_pol1-A(E)_tr_gr | E_pol1_st | pol1_BEE | sum_elstat |"
os.system("".join(['echo "',text,'" >> Temp.dat']))
text = "----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|"
os.system("".join(['echo "',text,'" >> Temp.dat']))
with energy_units("1/cm"):
ii = 0
text="{:21} {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} |".format(
ShortName,res["E_pol2_A(E)"].value[ii,0],res["E_pol2_A(E)"].value[ii,1],res["E_pol2_A(-E)"].value[ii,0],res["E_pol2_A(-E)"].value[ii,1],
res["E_pol2_A_static"].value[ii,0],res["E_pol2_A_static"].value[ii,1],res["E_pol2_B(E,E)_scaled"].value[ii,0],
res["E_pol2_B(E,E)_scaled"].value[ii,1],res["E_pol2_A(E)_(trans,grnd)"].value[ii,0],res["E_pol2_A(E)_(trans,grnd)"].value[ii,1],
res["E_pol1_A_static"].value[ii,0],res["E_pol1_A_static"].value[ii,1],res["E_pol2_B(E,E)"].value[ii,0],res["E_pol2_B(E,E)"].value[ii,1],
res["E_elstat_1"].value[ii,0],res["E_elstat_1"].value[ii,1])
os.system("".join(['echo "',text,'" >> Temp.dat']))
ii = 1
text="{:21} {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.3f} {:10.3f} | {:10.6f} {:10.6f} | {:10.6f} {:10.6f} |".format(
" ",res["E_pol2_A(E)"].value[ii,0],res["E_pol2_A(E)"].value[ii,1],res["E_pol2_A(-E)"].value[ii,0],res["E_pol2_A(-E)"].value[ii,1],
res["E_pol2_A_static"].value[ii,0],res["E_pol2_A_static"].value[ii,1],res["E_pol2_B(E,E)_scaled"].value[ii,0],
res["E_pol2_B(E,E)_scaled"].value[ii,1],res["E_pol2_A(E)_(trans,grnd)"].value[ii,0],res["E_pol2_A(E)_(trans,grnd)"].value[ii,1],
res["E_pol1_A_static"].value[ii,0],res["E_pol1_A_static"].value[ii,1],res["E_pol2_B(E,E)"].value[ii,0],res["E_pol2_B(E,E)"].value[ii,1],
res["E_elstat_1"].value[ii,0],res["E_elstat_1"].value[ii,1])
os.system("".join(['echo "',text,'" >> Temp.dat']))
text = "----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|"
os.system("".join(['echo "',text,' " >> Temp.dat']))
return Einter, Eshift1, Eshift2, TrDip1, TrDip2
def TEST_Compare_SingleDef_FGprop(filenames,ShortName,index_all,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=1,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Compare the magnitudes of the individual terms in the energy shift
calculation for a defect in a fluorographene environment (so far only for
the first order of the perturbation expansion -> order = 1)
Parameters
----------
filenames : dictionary
Dictionary with information about all needed files which contain the
necessary information for transforming the system into the Dielectric class
and electrostatic calculations. Keys:
* ``'1def_structure'``: xyz file with FG system with single defect
geometry and atom types
* ``'charge_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge'``: file with transition charges for the defect
(from TrEsp charges fitting)
* ``'charge_grnd'``: file with ground state charges for the defect
(from TrEsp charges fitting)
* ``'charge_exct'``: file with excited state charges for the defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
index_all : list of integers (dimension 6)
Indexes needed for assignment of the defect atoms. The first three
indexes correspond to the center and two main axes of the reference
structure (the structure used for the charge calculation) and the last
three are the corresponding atoms of the defect.
AlphaE : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
Alpha_E : numpy.array of real (dimension 2x2)
Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
BetaE : numpy.array of real (dimension 2x2)
Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
VinterFG : real
Difference between the electrostatic interaction energy of an excited
C-F coarse-grained atom of fluorographene with all other fluorographene
coarse-grained atoms in the ground state and the interaction of a ground
state C-F coarse-grained atom of fluorographene with all other
fluorographene coarse-grained atoms in the ground state. Units are ATOMIC
UNITS (Hartree)
FG_charges : list of real (dimension 2)
[charge on inner fluorographene atom, charge on border fluorographene carbon]
ChargeType : string
Specifies which charges should be used for electrostatic calculations
(ground and excited state charges) for defect atoms. Allowed types are:
``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
* ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
atoms.
* ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
atoms; only the carbon charges are used and the same charge is added to
all carbon atoms in order to keep the molecule neutral.
* ``'AMBER'`` - not yet fully implemented.
* ``'gaussian'`` - not yet fully implemented.
order : integer (optional - init=1)
Specifies how many SCF steps should be used in the calculation of induced
dipoles; according to the model used it should be 2
verbose : logical (optional - init=False)
If `True` additional information about the whole process will be printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
* **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
`Alpha(E)=Alpha(-E)`; however, the second one is not a condition
Returns
--------
Eshift : Energy class
Transition energy shift for the defect due to the fluorographene
environment calculated from structure with single defect. Units are
energy managed
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC UNITS)
Notes
--------
By comparing QC calculations it was found that energy shift from structure
with two defects and with single defect is almost the same.
'''
# read and prepare molecule
mol_polar,index1,charge,struc=prepare_molecule_1Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,CoarseGrain=CoarseGrain,**kwargs)
# calculate dAVA = <A|V|A>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1,'Output_exct': True}
mol_Elstat,index,charge_grnd,charge_exct=ElStat_PrepareMolecule_1Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift()
# Calculate interaction with ground state charges
mol_Elstat.charge[index] = charge_grnd
E_elst_grnd = mol_Elstat.get_EnergyShift()
mol_Elstat.charge[index] = charge_exct - charge_grnd # restore difference (excited - ground) charges
# Calculate interaction with excited state charges
mol_Elstat.charge[index] = charge_exct
E_elst_exct = mol_Elstat.get_EnergyShift()
mol_Elstat.charge[index] = charge_exct - charge_grnd
# Calculate interaction with transition density
mol_Elstat.charge[index] = charge
E_elst_trans = mol_Elstat.get_EnergyShift()
mol_Elstat.charge[index] = charge_exct - charge_grnd
# calculate transition energy shifts and transition dipole change
res_Energy, res_Pot, TrDip = mol_polar._TEST_Compare_SingleDefectProperties(charge,charge_grnd,charge_exct,struc,index1,dAVA=dAVA,order=order,approx=approx)
charge_FG_grnd = mol_Elstat.charge.copy()
charge_FG_grnd[index] = 0.0
E_Pol1_env_static_ex_gr_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_static_(exct-grnd)'])
E_Pol2_env_static_ex_gr_FG = np.dot(charge_FG_grnd,res_Pot['Pol2-env_static_(exct-grnd)'])
E_Pol1_env_BetaEE_ex_gr_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_Beta(E,E)_(exct-grnd)'])
E_Pol1_env_BetaEE_trans_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_Beta(E,E)_(trans)'])
E_Pol1_env_AlphaE_trans_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_Alpha(E)_(trans)'])
E_Pol1_env_Alpha_E_trans_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_Alpha(-E)_(trans)'])
E_Pol1_env_static_trans_FG = np.dot(charge_FG_grnd,res_Pot['Pol1-env_static_(trans)'])
#E_Polar_AlphaE_gr_ex_FG = 0.0
# pot_dipole_gr_ex = potential of induced dipoles induced by difference charges between ground and excited state (gr_charges - ex_charges)
with energy_units("AU"):
E_elst_trans = EnergyClass(E_elst_trans)
E_elst_grnd = EnergyClass(E_elst_grnd)
E_elst_exct = EnergyClass(E_elst_exct)
E_Pol1_env_static_ex_gr_FG = EnergyClass(E_Pol1_env_static_ex_gr_FG)
E_Pol2_env_static_ex_gr_FG = EnergyClass(E_Pol2_env_static_ex_gr_FG)
E_Pol1_env_BetaEE_ex_gr_FG = EnergyClass(E_Pol1_env_BetaEE_ex_gr_FG)
E_Pol1_env_BetaEE_trans_FG = EnergyClass(E_Pol1_env_BetaEE_trans_FG)
E_Pol1_env_AlphaE_trans_FG = EnergyClass(E_Pol1_env_AlphaE_trans_FG)
E_Pol1_env_Alpha_E_trans_FG = EnergyClass(E_Pol1_env_Alpha_E_trans_FG)
E_Pol1_env_static_trans_FG = EnergyClass(E_Pol1_env_static_trans_FG)
if MathOut:
if not os.path.exists("Pictures"):
os.makedirs("Pictures")
Bonds = GuessBonds(mol_polar.coor)
struc.guess_bonds()
if CoarseGrain in ["plane","C","CF"]:
at_type = ['C']*mol_polar.Nat
elif CoarseGrain == "all_atom":
at_type = struc.at_type.copy()
mat_filename = "".join(['Pictures/Charge_',ShortName,'_Exct-Grnd.nb'])
params = {'TrPointCharge': mol_Elstat.charge,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_Elstat.coor,struc.bonds,struc.at_type,**params)
mol_Elstat.charge[index] = charge
mat_filename = "".join(['Pictures/Charge_',ShortName,'_Trans.nb'])
params = {'TrPointCharge': mol_Elstat.charge,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_Elstat.coor,struc.bonds,struc.at_type,**params)
# res_Pot = {'Pol2-env_static_(exct-grnd)': pot2_dipole_ex_gr}
# res_Pot['Pol1-env_static_(exct-grnd)'] = pot1_dipole_ex_gr
# res_Pot['Pol1-env_Beta(E,E)_(exct-grnd)'] = pot1_dipole_betaEE_ex_gr
# res_Pot['Pol1-env_Beta(E,E)_(trans)'] = pot1_dipole_betaEE_tr
# res_Pot['Pol1-env_Alpha(E)_(trans)'] = pot1_dipole_AlphaE_tr
# res_Pot['Pol1-env_Alpha(-E)_(trans)'] = pot1_dipole_Alpha_E_tr
# res_Pot['Pol1-env_static_(trans)'] = pot1_dipole_static_tr
#
# res_Energy = {'dE_0-1': Eshift, 'dE_elstat(exct-grnd)': dAVA}
# res_Energy['E_pol1_Alpha(E)'] = Polar1_AlphaE
# res_Energy['E_pol2_Alpha(E)'] = Polar2_AlphaE
# res_Energy['E_pol1_Alpha(-E)'] = Polar1_Alpha_E
# res_Energy['E_pol2_Alpha(-E)'] = Polar2_Alpha_E
# res_Energy['E_pol1_Beta(E,E)'] = Polar1_Beta_EE
# res_Energy['E_pol1_static_(exct-grnd)'] = Polar1_static_ex_gr
# res_Energy['E_pol2_static_(exct-grnd)'] = Polar2_static_ex_gr
# res_Energy['E_pol1_Beta(E,E)_(exct-grnd)'] = Polar1_Beta_EE_ex_gr
# res_Energy['E_pol1_static_(trans)_(exct)'] = Polar1_static_tr_ex
# res_Energy['E_pol1_static_(trans)_(grnd)'] = Polar1_static_tr_gr
# res_Energy['E_pol1_Alpha(E)_(trans)_(grnd)'] = Polar1_AlphaE_tr_gr
# res_Energy['E_pol1_Alpha(-E)_(trans)_(exct)'] = Polar1_Alpha_E_tr_ex
# res_Energy['E_pol1_Beta(E,E)_(trans)_(exct-grnd)'] = Polar1_Beta_EE_tr_ex_gr
#
res_Energy['E_elstat_trans'] = E_elst_trans
res_Energy['E_pol1-env_static_(exct-grnd)'] = E_Pol1_env_static_ex_gr_FG
res_Energy['E_pol2-env_static_(exct-grnd)'] = E_Pol2_env_static_ex_gr_FG
res_Energy['E_pol1-env_Beta(E,E)_(exct-grnd)'] = E_Pol1_env_BetaEE_ex_gr_FG
res_Energy['E_pol1-env_Beta(E,E)_(trans)'] = E_Pol1_env_BetaEE_trans_FG
res_Energy['E_pol1-env_Alpha(E)_(trans)'] = E_Pol1_env_AlphaE_trans_FG
res_Energy['E_pol1-env_Alpha(-E)_(trans)'] = E_Pol1_env_Alpha_E_trans_FG
res_Energy['E_pol1-env_static_(trans)'] = E_Pol1_env_static_trans_FG
# E_elst_grnd, E_elst_exct
return res_Energy, TrDip
def Calc_SingleDef_FGprop_new(filenames,ShortName,index_all,E01,AlphaE,Alpha_E,BetaE,VinterFG,FG_charges,ChargeType,order=2,verbose=False,approx=1.1,MathOut=False,CoarseGrain="plane",**kwargs):
''' Compare the magnitudes of the individual terms in the energy shift
calculation for a defect in a fluorographene environment (so far only for
the first order of the perturbation expansion -> order = 1)
Parameters
----------
filenames : dictionary
Dictionary with information about all needed files which contain the
necessary information for transforming the system into the Dielectric class
and electrostatic calculations. Keys:
* ``'1def_structure'``: xyz file with FG system with single defect
geometry and atom types
* ``'charge_structure'``: xyz file with defect-like molecule geometry
for which transition charges were calculated corresponding to first
defect
* ``'charge'``: file with transition charges for the defect
(from TrEsp charges fitting)
* ``'charge_grnd'``: file with ground state charges for the defect
(from TrEsp charges fitting)
* ``'charge_exct'``: file with excited state charges for the defect
(from TrEsp charges fitting)
ShortName : string
Short description of the system
    index_all : list of integers (dimension 6)
        Specifies the indexes needed for the assignment of defect atoms.
        The first three indexes correspond to the center and the two main axes
        of the reference structure (the structure which was used for the charge
        calculation) and the last three indexes are the corresponding atoms of
        the defect.
    AlphaE : numpy.array of real (dimension 2x2)
        Atomic polarizability Alpha(E) for C-F coarse-grained atoms of
        fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
    Alpha_E : numpy.array of real (dimension 2x2)
        Atomic polarizability Alpha(-E) for C-F coarse-grained atoms of
        fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
    BetaE : numpy.array of real (dimension 2x2)
        Atomic polarizability Beta(E,E) for C-F coarse-grained atoms of
        fluorographene in ATOMIC UNITS (Bohr^2 - because 2D)
    VinterFG : real
        Difference in the electrostatic interaction energy between the
        interaction of an excited C-F coarse-grained fluorographene atom with
        all other fluorographene coarse-grained atoms in the ground state, and
        the interaction of a ground-state C-F coarse-grained fluorographene
        atom with all other fluorographene coarse-grained atoms in the ground
        state. Units are ATOMIC UNITS (Hartree)
    FG_charges : list of real (dimension 2)
        [charge on inner fluorographene atom, charge on border fluorographene carbon]
    ChargeType : string
        Specifies which charges should be used for electrostatic calculations
        (ground and excited state charges) for defect atoms. Allowed types are:
        ``'qchem'``, ``'qchem_all'``, ``'AMBER'`` and ``'gaussian'``.
        * ``'qchem'`` - charges calculated by fitting the Q-Chem ESP on carbon
          atoms.
        * ``'qchem_all'`` - charges calculated by fitting the Q-Chem ESP on all
          atoms; only the carbon charges are used and the same charge is added
          to all carbon atoms in order to keep the molecule neutral.
        * ``'AMBER'`` - not yet fully implemented.
        * ``'gaussian'`` - not yet fully implemented.
    order : integer (optional - init=2)
        Specifies how many SCF steps should be used in the calculation of the
        induced dipoles - according to the used model it should be 2
    verbose : logical (optional - init=False)
        If ``True``, additional information about the whole process will be
        printed
approx : real (optional - init=1.1)
Specifies which approximation should be used.
* **Approximation 1.1**: Neglect of `Beta(-E,-E)` and `Beta(-E,E)` and
`Alpha(-E)`.
* **Approximation 1.2**: Neglect of `Beta(-E,-E)` and `tilde{Beta(E)}`.
        * **Approximation 1.3**: `Beta(E,E)=Beta(-E,E)=Beta(-E,-E)` and also
          `Alpha(E)=Alpha(-E)`; however, the second equality is not a strict
          condition
Returns
--------
    Eshift : Energy class
        Transition energy shift for the defect due to the fluorographene
        environment, calculated from the structure with a single defect. Units
        are managed by the Energy class
TrDip : numpy array of real (dimension 3)
Total transition dipole for the defect with environment effects
included calculated from structure with single defect (in ATOMIC UNITS)
Notes
--------
    By comparing QC calculations it was found that the energy shift obtained
    from a structure with two defects is almost the same as the one obtained
    from a structure with a single defect.
'''
# read and prepare molecule
mol_polar,index1,charge,struc=prepare_molecule_1Def(filenames,index_all,AlphaE,Alpha_E,BetaE,VinterFG,verbose=False,CoarseGrain=CoarseGrain,**kwargs)
# calculate dAVA = <A|V|A>-<G|V|G>
AditInfo={'Structure': struc,'index1': index1,'Output_exct': True}
mol_Elstat,index,charge_grnd,charge_exct=ElStat_PrepareMolecule_1Def(filenames,index_all,FG_charges,ChargeType=ChargeType,verbose=False,**AditInfo)
dAVA=mol_Elstat.get_EnergyShift()
# dAVA2, dAVA_R = mol_Elstat.get_EnergyShift_and_Derivative()
# print(dAVA,dAVA2,dAVA-dAVA2)
# calculate transition energy shifts and transition dipole change
# res_Energy, res_Pot, TrDip = mol_polar._TEST_Compare_SingleDefectProperties(charge,charge_grnd,charge_exct,struc,index1,dAVA=dAVA,order=order,approx=approx)
Eshift,res_Energy,TrDip = mol_polar.get_SingleDefectProperties_new(charge_grnd, charge_exct, mol_Elstat, struc, index1, E01, dAVA=dAVA, order=order, approx=approx)
if MathOut:
if not os.path.exists("Pictures"):
os.makedirs("Pictures")
Bonds = GuessBonds(mol_polar.coor)
struc.guess_bonds()
if CoarseGrain in ["plane","C","CF"]:
at_type = ['C']*mol_polar.Nat
elif CoarseGrain == "all_atom":
at_type = struc.at_type.copy()
mat_filename = "".join(['Pictures/Charge_',ShortName,'_Exct-Grnd.nb'])
params = {'TrPointCharge': mol_Elstat.charge,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_Elstat.coor,struc.bonds,struc.at_type,**params)
mol_Elstat.charge[index] = charge
mat_filename = "".join(['Pictures/Charge_',ShortName,'_Trans.nb'])
params = {'TrPointCharge': mol_Elstat.charge,'rSphere_dip': 0.5,'rCylinder_dip':0.1}
OutputMathematica(mat_filename,mol_Elstat.coor,struc.bonds,struc.at_type,**params)
return Eshift, TrDip
'''----------------------- TEST PART --------------------------------'''
if __name__=="__main__":
print(' TESTS')
print('-----------------------------------------')
''' Test derivation of energy d/dR ApB '''
# SETUP VERY SIMPLE SYSTEM OF TWO DEFECT ATOMS AND ONE ENVIRONMENT ATOM:
coor=np.array([[-1.0,0.0,0.0],[0.0,0.0,0.0],[1.0,0.0,0.0]],dtype='f8')
charge_pol=np.array([1.0,0.0,0.0],dtype='f8')
dipole=np.zeros((len(coor),3),dtype='f8')
AlphaE=np.array([np.zeros((3,3)),[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],np.zeros((3,3))],dtype='f8')
pol_mol=Dielectric(coor,charge_pol,dipole,AlphaE,AlphaE,AlphaE,0.0)
# definition of defect atoms and corresponding charges
charge=np.array([1.0],dtype='f8')
index1=[0]
index2=[2]
res_general=pol_mol._dR_BpA(index1,index2,charge,'AlphaE')
result=np.zeros((3,3),dtype='f8')
result2=np.array([[-4.0,0.0,0.0],[0.0,0.0,0.0],[4.0,0.0,0.0]],dtype='f8').reshape(3*len(coor))
R01=coor[1,:]-coor[0,:]
RR01=np.sqrt(np.dot(R01,R01))
R21=coor[1,:]-coor[2,:]
RR21=np.sqrt(np.dot(R21,R21))
dn=np.dot(AlphaE[1],R21/(RR21**3))
result[0,:]=charge[0]*charge[0]*(3*np.dot(R01/(RR01**5),dn)*R01-1/(RR01**3)*dn)
dn=np.dot(AlphaE[1],R01/(RR01**3))
result[2,:]=charge[0]*charge[0]*(3*np.dot(R21/(RR21**5),dn)*R21-1/(RR21**3)*dn)
if np.allclose(res_general,result2):
print('Symm _dR_BpA simple system ... OK')
else:
print('Symm _dR_BpA simple system ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result2)
result3=np.array([[8.0,0.0,0.0],[-8.0,0.0,0.0]],dtype='f8').reshape(6)
pol_mol._swap_atoms(index1,index2)
res_general=pol_mol._dR_BpA(index2,index2,charge,'AlphaE')
if np.allclose(res_general[3:9],result3):
print('Symm _dR_ApA simple system ... OK')
else:
print('Symm _dR_ApA simple system ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result3)
    # SETUP NON-SYMMETRIC SIMPLE SYSTEM OF TWO DEFECT ATOMS AND ONE ENVIRONMENT ATOM:
coor=np.array([[-1.0,0.0,0.0],[0.0,0.0,0.0],[1.0,2.0,0.0]],dtype='f8')
charge_pol=np.array([1.0,0.0,0.0],dtype='f8')
dipole=np.zeros((len(coor),3),dtype='f8')
AlphaE=np.array([np.zeros((3,3)),[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],np.zeros((3,3))],dtype='f8')
pol_mol=Dielectric(coor,charge_pol,dipole,AlphaE,AlphaE,AlphaE,0.0)
# definition of defect atoms and corresponding charges
charge=np.array([1.0],dtype='f8')
index1=[0]
index2=[2]
res_general=pol_mol._dR_BpA(index1,index2,charge,'AlphaE')
#
# result=np.zeros((3,3),dtype='f8')
result2=np.array([[-4.0/np.sqrt(5)**3,4.0/np.sqrt(5)**3,0.0],
[6*(1/np.sqrt(5)**3-1/np.sqrt(5)**5),-4/np.sqrt(5)**3-12/np.sqrt(5)**5,0.0],
[6/np.sqrt(5)**5-2/np.sqrt(5)**3,12/np.sqrt(5)**5,0.0]],dtype='f8').reshape(3*len(coor))
result=np.zeros((3,3),dtype='f8')
R01=coor[1,:]-coor[0,:]
RR01=np.sqrt(np.dot(R01,R01))
R21=coor[1,:]-coor[2,:]
RR21=np.sqrt(np.dot(R21,R21))
dn=np.dot(AlphaE[1],R21/(RR21**3))
result[0,:]=charge[0]*charge[0]*(3*np.dot(R01/(RR01**5),dn)*R01-1/(RR01**3)*dn)
dn=np.dot(AlphaE[1],R01/(RR01**3))
result[2,:]=charge[0]*charge[0]*(3*np.dot(R21/(RR21**5),dn)*R21-1/(RR21**3)*dn)
#print(result2)
#print(result)
if np.allclose(res_general,result2):
print('non-Symm _dR_BpA simple system ... OK')
else:
print('non-Symm _dR_BpA simple system ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result2)
result3=np.array([[0.064,0.128,0.0],[-0.064,-0.128,0.0]],dtype='f8').reshape(6)
pol_mol._swap_atoms(index1,index2)
res_general=pol_mol._dR_BpA(index2,index2,charge,'AlphaE')
if np.allclose(res_general[3:9],result3):
print('non-Symm _dR_ApA simple system ... OK')
else:
print('non-Symm _dR_ApA simple system ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result3)
    # SETUP A LITTLE BIT MORE COMPLICATED SYSTEM OF 2 DEFECT ATOMS AND 2 ENVIRONMENT ATOMS
for kk in range(2):
if kk==0:
coor=np.array([[-2.0,0.0,0.0],[-2.0,-1.0,0.0],[0.0,0.0,0.0],[1.0,0.0,0.0],[2.0,0.0,0.0],[2.0,1.0,0.0]],dtype='f8')
else:
coor=np.array([[-2.0,0.0,0.0],[-2.0,1.0,0.0],[0.0,0.0,0.0],[1.0,0.0,0.0],[2.0,0.0,0.0],[2.0,1.0,0.0]],dtype='f8')
charge_pol=np.array([1.0,-1.0,0.0,0.0,0.0,0.0],dtype='f8')
dipole=np.zeros((len(coor),3),dtype='f8')
AlphaE=np.array([np.zeros((3,3)),np.zeros((3,3)),
[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],
[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],
np.zeros((3,3)),np.zeros((3,3))],dtype='f8')
pol_mol=Dielectric(coor,charge_pol,dipole,AlphaE,AlphaE,AlphaE,0.0)
# definition of defect atoms and corresponding charges
charge=np.array([1.0,-1.0],dtype='f8')
index1=[0,1]
index2=[4,5]
res_general=pol_mol._dR_BpA(index1,index2,charge,'AlphaE')
if kk==0:
# for coor[1]=[-2.0,-1.0,0.0]
result2=np.array([[-0.1313271490,-0.04854981982,0.0],[0.04798957640,0.07411449339,0.0],
[0.0,0.0,0.0],[-0.04637925945,-0.08345754376,0.0],
[0.1005284061,0.08560623298,0.0],
[0.02918842589,-0.02771336278,0.0]],dtype='f8').reshape(3*len(coor))
else:
# for coor[1]=[-2.0,1.0,0.0]
result2=np.array([[-0.131327,-0.0485498,0.0],[0.126639,-0.0300095,0.0],
[0.0,0.0624526,0.0],[-0.0195464,0.138987,0.0],
[0.100528,-0.0856062,0.0],[-0.0762936,-0.037274,0.0]],dtype='f8').reshape(3*len(coor))
if np.allclose(res_general,result2):
print('non-Symm _dR_BpA system',kk+1,' ... OK')
else:
print('non-Symm _dR_BpA system',kk+1,' ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result2)
if kk==1:
res_general=pol_mol._dR_BpA(index1,index1,charge,'AlphaE')
result3=np.array([[0.0759272,-0.0494062,0.0],[0.00288743,0.0479804,0.0],
[-0.0738948,0.0013901,0.0],[-0.00491991,0.00003574515217,0.0]],dtype='f8').reshape(12)
if np.allclose(res_general[0:12],result3):
print('non-Symm _dR_ApA system',kk+1,' ... OK')
else:
print('non-Symm _dR_ApA system',kk+1,' ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result3)
''' Test derivation of energy d/dR BppA '''
    # SETUP NON-SYMMETRIC SIMPLE SYSTEM OF TWO DEFECT ATOMS AND TWO ENVIRONMENT ATOMS:
coor=np.array([[-1.0,0.0,0.0],[0.0,0.0,0.0],[0.0,1.0,0.0],[1.0,0.0,0.0]],dtype='f8')
charge_pol=np.array([1.0,0.0,0.0,0.0],dtype='f8')
dipole=np.zeros((len(coor),3),dtype='f8')
AlphaE=np.array([np.zeros((3,3)),[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],
[[2.0,0.0,0.0],[0.0,2.0,0.0],[0.0,0.0,0.0]],np.zeros((3,3))],dtype='f8')
pol_mol=Dielectric(coor,charge_pol,dipole,AlphaE,AlphaE,AlphaE,0.0)
# definition of defect atoms and corresponding charges
charge=np.array([1.0],dtype='f8')
index1=[0]
index2=[3]
res_general=pol_mol._dR_BppA(index1,index2,charge,'AlphaE')
result2=np.array([[3.535533906,-0.7071067812,0.0],[0.0,14.14213562,0.0],
[0.0,-12.72792206,0.0],[-3.535533906,-0.7071067812,0.0],
],dtype='f8').reshape(3*len(coor))
if np.allclose(res_general,result2):
print('non-Symm _dR_BppA simple system ... OK')
else:
print('non-Symm _dR_BppA simple system ... Error')
print(' General result: ',res_general)
print(' Analytical result:',result2)
res_general=pol_mol._dR_BppA(index1,index1,charge,'AlphaE')
result3=np.array([[-7.071067812,-9.899494937,0.0],[-2.8284271247,-2.8284271247,0.0],
[9.899494937,12.72792206,0.0],
],dtype='f8').reshape(9)
if np.allclose(res_general[0:9],result3):
print('non-Symm _dR_AppA simple system ... OK')
else:
print('non-Symm _dR_AppA simple system ... Error')
print(' General result: ',res_general[0:9])
print(' Analytical result:',result3)
# File: gprm/__init__.py (repo: siwill22/GPlatesClassStruggle, MIT license)
#import utils
#from .GPlatesReconstructionModel.gprm import utils
from .GPlatesReconstructionModel import ReconstructionModel
from .GPlatesReconstructionModel import ReconstructedPolygonSnapshot
from .GPlatesReconstructionModel import PlateTree
from .GPlatesReconstructionModel import GPlatesRaster
from .GPlatesReconstructionModel import PlateSnapshot
from .GPlatesReconstructionModel import MotionPathFeature
from .GPlatesReconstructionModel import FlowlineFeature
from .GPlatesReconstructionModel import VelocityField
from .GPlatesReconstructionModel import SubductionConvergence
from .GPlatesReconstructionModel import AgeCodedPointDataset
from .GPlatesReconstructionModel import PointDistributionOnSphere
from .GPlatesReconstructionModel import CrossSection
| 50.866667 | 68 | 0.908257 | 55 | 763 | 12.6 | 0.309091 | 0.562771 | 0.623377 | 0.118326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070773 | 763 | 14 | 69 | 54.5 | 0.977433 | 0.081258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: make_us_rich/pipelines/fetching/__init__.py (repo: ChainYo/make-me-rich, Apache-2.0 license)
from .pipeline import create_pipeline
from .nodes import fetch_data_to_dataframe
__all__ = [
"create_pipeline",
"fetch_data_to_dataframe",
]
# File: cmws/examples/stacking/render.py (repo: tuananhle7/hmws, MIT license)
import torch
import torch.nn as nn
import torch.nn.functional as F
from cmws import util
class Square:
def __init__(self, name, color, size):
self.name = name
self.color = color
self.size = size
@property
def device(self):
return self.size.device
def __repr__(self):
return f"{self.name}(color={self.color.tolist()}, size={self.size.item():.1f})"
class LearnableSquare(nn.Module):
def __init__(self, name=None, fixed_color=False):
super().__init__()
if name is None:
self.name = "LearnableSquare"
else:
self.name = name
self.fixed_color = fixed_color
if not self.fixed_color:
self.raw_color = nn.Parameter(torch.randn((3,)))
self.raw_size = nn.Parameter(torch.randn(()))
@property
def device(self):
return self.raw_size.device
@property
def size(self):
min_size = 0.01
max_size = 1.0
return self.raw_size.sigmoid() * (max_size - min_size) + min_size
@property
def color(self):
if self.fixed_color:
return torch.zeros((3,), device=self.device)
else:
return self.raw_color.sigmoid()
def __repr__(self):
return f"{self.name}(color={self.color.tolist()}, size={self.size.item():.1f})"
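The `size` property of `LearnableSquare` maps an unconstrained parameter into the interval `(min_size, max_size)` with a sigmoid, so gradient descent can never push the size out of bounds. A minimal plain-Python sketch of that reparameterization (no torch; the bounds 0.01 and 1.0 match the property above, the function name is illustrative):

```python
import math

def constrained_size(raw_size, min_size=0.01, max_size=1.0):
    """Map an unconstrained raw parameter into (min_size, max_size)."""
    sigmoid = 1.0 / (1.0 + math.exp(-raw_size))
    return sigmoid * (max_size - min_size) + min_size

# Any raw value lands strictly inside the bounds;
# raw_size = 0 gives the midpoint 0.505.
sizes = [constrained_size(r) for r in (-10.0, 0.0, 10.0)]
```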
def get_min_edge_distance(square_size, location, point):
"""Computes shortest distance from a point to the square edge. (batched)
Negative if it's inside the square.
Positive if it's outside the square.
Args
square_size [] or [*location_shape]
location [*location_shape, 2]
point [*point_shape, 2]
Returns [*location_shape, *point_shape]
"""
# Extract
device = location.device
# [*location_shape]
min_x, min_y = location[..., 0], location[..., 1]
max_x = min_x + square_size
max_y = min_y + square_size
location_shape = min_x.shape
num_locations = int(torch.tensor(location_shape).prod().long().item())
# [*point_shape]
x, y = point[..., 0], point[..., 1]
point_shape = x.shape
num_points = int(torch.tensor(point_shape).prod().long().item())
# Flatten
# [num_locations, 1]
min_x, min_y, max_x, max_y = [tmp.view(-1)[:, None] for tmp in [min_x, min_y, max_x, max_y]]
# [1, num_points]
x, y = [tmp.view(-1)[None] for tmp in [x, y]]
# Determine which area the point is in
# [num_locations, num_points]
# --High level areas
up = y >= max_y
middle = (y >= min_y) & (y < max_y)
bottom = y < min_y
left = x < min_x
center = (x >= min_x) & (x < max_x)
right = x >= max_x
# --Use high level areas to define smaller sectors which we're going to work with
area_1 = left & up
area_2 = center & up
area_3 = right & up
area_4 = left & middle
area_5 = center & middle
area_6 = right & middle
area_7 = left & bottom
area_8 = center & bottom
area_9 = right & bottom
# Compute min distances
# --Init the results
# [num_locations, num_points]
min_edge_distance = torch.zeros((num_locations, num_points), device=device)
# --Compute distances for points in the corners (areas 1, 3, 7, 9)
min_edge_distance[area_1] = util.sqrt((x - min_x) ** 2 + (y - max_y) ** 2)[area_1]
min_edge_distance[area_3] = util.sqrt((x - max_x) ** 2 + (y - max_y) ** 2)[area_3]
min_edge_distance[area_7] = util.sqrt((x - min_x) ** 2 + (y - min_y) ** 2)[area_7]
min_edge_distance[area_9] = util.sqrt((x - max_x) ** 2 + (y - min_y) ** 2)[area_9]
# --Compute distances for points in the outside strips (areas 2, 4, 6, 8)
min_edge_distance[area_2] = (y - max_y)[area_2]
min_edge_distance[area_4] = (min_x - x)[area_4]
min_edge_distance[area_6] = (x - max_x)[area_6]
min_edge_distance[area_8] = (min_y - y)[area_8]
# --Compute distances for points inside the square
min_edge_distance[area_5] = -torch.min(
torch.stack([y - min_y, max_y - y, x - min_x, max_x - x]), dim=0
)[0][area_5]
return min_edge_distance.view(*[*location_shape, *point_shape])
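Per point, the nine-region case analysis above reduces to: corner (Euclidean) distance outside the corners, axis distance in the four side strips, and a negative inset distance inside the square. A scalar, non-batched sketch of the same geometry in plain Python (one square, one point; the function name is illustrative, not from this module):

```python
import math

def signed_square_distance(square_size, location, point):
    """Signed distance from `point` to the edge of an axis-aligned square
    with lower-left corner `location`: negative inside, positive outside."""
    min_x, min_y = location
    max_x, max_y = min_x + square_size, min_y + square_size
    x, y = point
    # Outside, clamping to the box reproduces the corner/strip distances
    # of the nine-region analysis above.
    dx = max(min_x - x, 0.0, x - max_x)
    dy = max(min_y - y, 0.0, y - max_y)
    if dx > 0.0 or dy > 0.0:  # areas 1-4 and 6-9 (outside the square)
        return math.hypot(dx, dy)
    # Area 5 (inside): negative distance to the closest edge
    return -min(x - min_x, max_x - x, y - min_y, max_y - y)
```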
def get_render_log_prob(min_edge_distance, blur=1e-4):
"""
Returns the (log) probability map used for soft rasterization as specified by
equation (1) of
https://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Soft_Rasterizer_A_Differentiable_Renderer_for_Image-Based_3D_Reasoning_ICCV_2019_paper.pdf
Also visualized here https://www.desmos.com/calculator/5z95dy2mny
Args
min_edge_distance [*shape]
blur [] (default 1e-4): this is the σ in equation (1)
Returns [*shape]
"""
return F.logsigmoid(-torch.sign(min_edge_distance) * min_edge_distance ** 2 / blur)
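In probability space, equation (1) is `sigmoid(-sign(d) * d**2 / σ)`: exactly 0.5 on the square's edge, rising toward 1 inside (d < 0) and falling toward 0 outside (d > 0), with σ (`blur`) controlling how sharp the transition is. A plain-Python sketch (small distances only, to avoid overflow in `exp`):

```python
import math

def render_prob(min_edge_distance, blur=1e-4):
    """Soft-rasterizer coverage probability for one pixel (equation (1))."""
    signed_sq = math.copysign(min_edge_distance ** 2, min_edge_distance)
    return 1.0 / (1.0 + math.exp(signed_sq / blur))

# On the edge the probability is exactly 0.5; inside it rises toward 1,
# outside it falls toward 0.
on_edge = render_prob(0.0)
inside = render_prob(-0.01)
outside = render_prob(0.01)
```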
def get_canvas_xy(num_rows, num_cols, device):
"""Create xy points on the canvas
Args
num_rows (int)
        num_cols (int)
        device (torch.device)
Returns
canvas_x [num_rows, num_cols]
canvas_y [num_rows, num_cols]
"""
x_range = torch.linspace(-1, 1, steps=num_cols, device=device)
y_range = torch.linspace(-1, 1, steps=num_rows, device=device).flip(dims=[0])
# [num_cols, num_rows]
canvas_x, canvas_y = torch.meshgrid(x_range, y_range)
# [num_rows, num_cols]
canvas_x, canvas_y = canvas_x.T, canvas_y.T
return canvas_x, canvas_y
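The resulting grid has x increasing left to right and y increasing bottom to top, so row 0 is the top of the image (y = +1) thanks to the `flip(dims=[0])`. A torch-free sketch of the same construction with nested lists (illustrative only; assumes num_rows, num_cols > 1):

```python
def canvas_xy(num_rows, num_cols):
    """Pixel-center coordinates on the [-1, 1] x [-1, 1] canvas."""
    xs = [-1.0 + 2.0 * j / (num_cols - 1) for j in range(num_cols)]
    # y decreases with the row index, matching the flip(dims=[0]) above
    ys = [1.0 - 2.0 * i / (num_rows - 1) for i in range(num_rows)]
    canvas_x = [[xs[j] for j in range(num_cols)] for i in range(num_rows)]
    canvas_y = [[ys[i] for j in range(num_cols)] for i in range(num_rows)]
    return canvas_x, canvas_y

cx, cy = canvas_xy(3, 3)
```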
def init_canvas(device, num_channels=3, num_rows=32, num_cols=32, shape=[]):
"""Return a white canvas of shape [*shape, num_channels, num_rows, num_cols]"""
return torch.ones(*[*shape, num_channels, num_rows, num_cols], device=device)
def render_square(square, location, canvas, draw_on_top=False):
"""Draws a square on a canvas whose xy limits are [-1, 1].
Args
square
location [2]
canvas [num_channels, num_rows, num_cols]
draw_on_top (bool): draw squares on top of the canvas, instead of adding it to the canvas
Returns
new_canvas [num_channels, num_rows, num_cols]
"""
# Extract
# []
min_x, min_y = location
max_x = min_x + square.size
max_y = min_y + square.size
num_channels, num_rows, num_cols = canvas.shape
device = location.device
# Canvas xy
# [num_rows, num_cols]
canvas_x, canvas_y = get_canvas_xy(num_rows, num_cols, device)
# Draw on canvas
new_canvas = canvas.clone()
for channel_id in range(num_channels):
if draw_on_top:
new_canvas[
channel_id,
(canvas_x >= min_x)
& (canvas_x <= max_x)
& (canvas_y >= min_y)
& (canvas_y <= max_y),
] = square.color[channel_id]
else:
new_canvas[
channel_id,
(canvas_x >= min_x)
& (canvas_x <= max_x)
& (canvas_y >= min_y)
& (canvas_y <= max_y),
] -= (1 - square.color[channel_id])
new_canvas = new_canvas.clamp(0, 1)
return new_canvas
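The hard rasterizer above is just a boolean mask over pixel centers: a pixel is painted iff its (x, y) coordinate falls inside the square's bounds. A minimal standalone version that counts covered pixels on a small grid (pure Python; the grid size and square values are chosen for illustration):

```python
def pixels_inside(square_size, location, num_rows=4, num_cols=4):
    """Boolean coverage mask over a [-1, 1] x [-1, 1] pixel grid."""
    min_x, min_y = location
    max_x, max_y = min_x + square_size, min_y + square_size
    xs = [-1.0 + 2.0 * j / (num_cols - 1) for j in range(num_cols)]
    ys = [1.0 - 2.0 * i / (num_rows - 1) for i in range(num_rows)]
    return [[min_x <= x <= max_x and min_y <= y <= max_y for x in xs]
            for y in ys]

# Unit square with lower-left corner at the origin on a 4x4 grid:
# it covers the top-right 2x2 block of pixel centers.
mask = pixels_inside(1.0, (0.0, 0.0))
covered = sum(sum(row) for row in mask)
```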
def soft_render_square(
square, location, background, background_depth=-1e-3, color_sharpness=1e-4, blur=1e-4
):
"""Draws a square on a canvas whose xy limits are [-1, 1].
Follows equations (2) and (3) in
https://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Soft_Rasterizer_A_Differentiable_Renderer_for_Image-Based_3D_Reasoning_ICCV_2019_paper.pdf
Args
square
location [*shape, 2]
background [num_channels, num_rows, num_cols] or [*shape, num_channels, num_rows, num_cols]
this is the background color C_b in equation (2)
        background_depth [] (default -1e-3): ϵ in equation (3)
color_sharpness [] (default 1e-4): γ in equation (3)
blur [] (default 1e-4): this is the σ in equation (1)
Returns
new_canvas [*shape, num_channels, num_rows, num_cols]
"""
# Extract
shape = location.shape[:-1]
# Init
device = location.device
if background.ndim > 3:
num_channels, num_rows, num_cols = background.shape[-3:]
expanded_background = True
assert background.shape[:-3] == shape
else:
num_channels, num_rows, num_cols = background.shape
expanded_background = False
# Canvas xy
# [num_rows, num_cols]
canvas_x, canvas_y = get_canvas_xy(num_rows, num_cols, device)
canvas_xy = torch.stack([canvas_x, canvas_y], dim=-1)
# Get render log prob
# --Foreground object (treat depth z = -1) [*shape, num_rows, num_cols]
depth = 0
square_render_log_prob = (
get_render_log_prob(get_min_edge_distance(square.size, location, canvas_xy), blur=blur)
+ depth / color_sharpness
)
# --Background [*shape, num_rows, num_cols]
background_render_log_prob = (
torch.ones_like(square_render_log_prob) * background_depth / color_sharpness
)
# Compute color weight (equation (3))
# [*shape, num_rows, num_cols]
square_weight, background_weight = F.softmax(
torch.stack([square_render_log_prob, background_render_log_prob]), dim=0
)
# Flatten
# [num_samples, num_rows, num_cols]
square_weight_flattened = square_weight.view(-1, num_rows, num_cols)
background_weight_flattened = background_weight.view(-1, num_rows, num_cols)
if expanded_background:
background_flattened = background.view(-1, num_channels, num_rows, num_cols)
else:
background_flattened = background[None]
return (
square_weight_flattened[:, None] * square.color[None, :, None, None]
+ background_weight_flattened[:, None] * background_flattened
).view(*[*shape, num_channels, num_rows, num_cols])
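Equations (2) and (3) blend the square color with the background color by a softmax over the two per-pixel scores (`square_render_log_prob` and `background_render_log_prob` above). A per-pixel, per-channel scalar sketch in plain Python (the function name is illustrative):

```python
import math

def blend_pixel(square_score, background_score, square_color, background_color):
    """Softmax color blending for one pixel and one channel (equation (3))."""
    m = max(square_score, background_score)  # subtract max for stability
    w_sq = math.exp(square_score - m)
    w_bg = math.exp(background_score - m)
    total = w_sq + w_bg
    w_sq, w_bg = w_sq / total, w_bg / total
    return w_sq * square_color + w_bg * background_color

# Equal scores give the average color; a dominant square score pulls
# the pixel toward the square color.
mid = blend_pixel(0.0, 0.0, 0.2, 1.0)
mostly_square = blend_pixel(10.0, 0.0, 0.2, 1.0)
```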
def soft_render_square_batched(
square_size,
square_color,
location,
background,
background_depth=-1e-3,
color_sharpness=1e-4,
blur=1e-4,
):
"""Draws a square on a canvas whose xy limits are [-1, 1].
Follows equations (2) and (3) in
https://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Soft_Rasterizer_A_Differentiable_Renderer_for_Image-Based_3D_Reasoning_ICCV_2019_paper.pdf
Args
square_size [*shape] or []
square_color [*shape, 3] or [3]
location [*shape, 2]
background [num_channels, num_rows, num_cols] or [*shape, num_channels, num_rows, num_cols]
this is the background color C_b in equation (2)
        background_depth [] (default -1e-3): ϵ in equation (3)
color_sharpness [] (default 1e-4): γ in equation (3)
blur [] (default 1e-4): this is the σ in equation (1)
Returns
new_canvas [*shape, num_channels, num_rows, num_cols]
"""
# Extract
shape = location.shape[:-1]
# Init
device = location.device
if background.ndim > 3:
num_channels, num_rows, num_cols = background.shape[-3:]
expanded_background = True
assert background.shape[:-3] == shape
else:
num_channels, num_rows, num_cols = background.shape
expanded_background = False
# Canvas xy
# [num_rows, num_cols]
canvas_x, canvas_y = get_canvas_xy(num_rows, num_cols, device)
canvas_xy = torch.stack([canvas_x, canvas_y], dim=-1)
# Get render log prob
# --Foreground object (treat depth z = -1) [*shape, num_rows, num_cols]
depth = 0
square_render_log_prob = (
get_render_log_prob(get_min_edge_distance(square_size, location, canvas_xy), blur=blur)
+ depth / color_sharpness
)
# --Background [*shape, num_rows, num_cols]
background_render_log_prob = (
torch.ones_like(square_render_log_prob) * background_depth / color_sharpness
)
# Compute color weight (equation (3))
# [*shape, num_rows, num_cols]
square_weight, background_weight = F.softmax(
torch.stack([square_render_log_prob, background_render_log_prob]), dim=0
)
# Flatten
# [num_samples, num_rows, num_cols]
square_weight_flattened = square_weight.view(-1, num_rows, num_cols)
background_weight_flattened = background_weight.view(-1, num_rows, num_cols)
if expanded_background:
background_flattened = background.view(-1, num_channels, num_rows, num_cols)
else:
background_flattened = background[None]
if square_color.ndim == 1:
square_color_expanded = square_color[None, :, None, None]
else:
square_color_expanded = square_color.reshape(-1, 3)[:, :, None, None]
return (
square_weight_flattened[:, None] * square_color_expanded
+ background_weight_flattened[:, None] * background_flattened
).view(*[*shape, num_channels, num_rows, num_cols])
def render_square_batched(
square_size, square_color, location, background,
):
"""Draws a square on a canvas whose xy limits are [-1, 1].
Follows equations (2) and (3) in
https://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Soft_Rasterizer_A_Differentiable_Renderer_for_Image-Based_3D_Reasoning_ICCV_2019_paper.pdf
Args
square_size [*shape] or []
square_color [*shape, 3] or [3]
location [*shape, 2]
background [num_channels, num_rows, num_cols] or [*shape, num_channels, num_rows, num_cols]
this is the background color C_b in equation (2)
background_weight [] (default 1.): ϵ in equation (3)
color_sharpness [] (default 1e-4): γ in equation (3)
blur [] (default 1e-4): this is the σ in equation (1)
Returns
new_canvas [*shape, num_channels, num_rows, num_cols]
"""
# Extract
shape = location.shape[:-1]
device = location.device
num_elements = int(torch.tensor(shape).prod().long().item())
num_channels, num_rows, num_cols = background.shape[-3:]
num_points = num_rows * num_cols
# Canvas xy
# --Compute
# [num_rows, num_cols]
canvas_x, canvas_y = get_canvas_xy(num_rows, num_cols, device)
# --Flatten
# [1, num_points]
x, y = [tmp.reshape(-1)[None] for tmp in [canvas_x, canvas_y]]
# Compute boundaries
# --Compute
# [*shape]
min_x, min_y = location[..., 0], location[..., 1]
max_x = min_x + square_size
max_y = min_y + square_size
# --Flatten
# [num_elements, 1]
min_x, min_y, max_x, max_y = [tmp.view(-1)[:, None] for tmp in [min_x, min_y, max_x, max_y]]
# Draw on canvas
# --Expand background
if background.ndim > 3:
canvas = background.clone().view(num_elements, num_channels, num_points)
assert background.shape[:-3] == shape
else:
canvas = (
background.clone()
.view(1, num_channels, num_points)
.expand(num_elements, num_channels, num_points)
)
# --Expand square_color
if square_color.ndim == 1:
square_color_expanded = square_color[None, :, None].expand(
num_elements, num_channels, num_points
)
else:
square_color_expanded = square_color.reshape(-1, 3, 1).expand(
num_elements, num_channels, num_points
)
# --Compute a mask that indicates whether a point is inside a square
# [num_elements, num_channels, num_points]
inside_square = ((x >= min_x) & (x <= max_x) & (y >= min_y) & (y <= max_y))[:, None, :].expand(
num_elements, num_channels, num_points
)
# --Draw inside the square
canvas[inside_square] = square_color_expanded[inside_square]
return canvas.view(*[*shape, num_channels, num_rows, num_cols])
def render(primitives, stacking_program, raw_locations, num_channels=3, num_rows=32, num_cols=32):
    """Hard-render a stack of squares on a white canvas.
    Args
        primitives (list [num_primitives])
        stacking_program (tensor [num_blocks])
        raw_locations (tensor [num_blocks])
    Returns [num_channels, num_rows, num_cols]
    """
# Init
device = primitives[0].device
# Convert
locations = convert_raw_locations(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols)
for primitive_id, location in zip(stacking_program, locations):
primitive = primitives[primitive_id]
canvas = render_square(primitive, location, canvas)
return canvas
def render_batched(
primitives,
num_blocks,
stacking_program,
raw_locations,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
num_blocks [*shape]
stacking_program (tensor [*shape, max_num_blocks])
raw_locations (tensor [*shape, max_num_blocks])
Returns [*shape, num_channels, num_rows, num_cols]
"""
# Extract
device = primitives[0].device
shape = stacking_program.shape[:-1]
max_num_blocks = stacking_program.shape[-1]
# [num_primitives]
square_size = torch.stack([primitive.size for primitive in primitives])
# [num_primitives, 3]
square_color = torch.stack([primitive.color for primitive in primitives])
# Convert [*shape, max_num_blocks, 2]
locations = convert_raw_locations_batched(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols, shape)
for block_id in range(max_num_blocks):
# Determine whether this block is drawn
# [*shape, 1, 1, 1]
is_drawn = (block_id < num_blocks).float()[..., None, None, None]
# Draw the block
canvas = render_square_batched(
square_size[stacking_program[..., block_id]],
square_color[stacking_program[..., block_id]],
locations[..., block_id, :],
canvas,
) * is_drawn + canvas * (1 - is_drawn)
return canvas
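`render_batched` handles a variable number of blocks per batch element without per-element control flow: every drawing step is blended with the previous canvas through a 0/1 `is_drawn` mask, so elements whose program has already ended keep their canvas unchanged. The same masking trick on scalars (plain Python, illustrative):

```python
def masked_update(canvas, drawn_canvas, is_drawn):
    """Keep `drawn_canvas` where is_drawn == 1.0, the old canvas otherwise."""
    return drawn_canvas * is_drawn + canvas * (1.0 - is_drawn)

# Element with num_blocks = 2: only the first two of three steps apply,
# the third step is computed but masked out.
canvas, num_blocks = 0.0, 2
for block_id in range(3):
    is_drawn = 1.0 if block_id < num_blocks else 0.0
    canvas = masked_update(canvas, canvas + 1.0, is_drawn)
```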
def soft_render(
primitives,
stacking_program,
raw_locations,
raw_color_sharpness,
raw_blur,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
stacking_program (tensor [num_blocks])
raw_locations (tensor [num_blocks])
raw_color_sharpness []
raw_blur []
Returns [num_channels, num_rows, num_cols]
"""
# Init
device = primitives[0].device
# Convert
locations = convert_raw_locations(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols)
for primitive_id, location in zip(stacking_program, locations):
primitive = primitives[primitive_id]
canvas = soft_render_square(
primitive,
location,
canvas,
color_sharpness=get_color_sharpness(raw_color_sharpness),
blur=get_blur(raw_blur),
)
return canvas
def soft_render_batched(
primitives,
stacking_program,
raw_locations,
raw_color_sharpness,
raw_blur,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
stacking_program (tensor [*shape, num_blocks])
raw_locations (tensor [*shape, num_blocks])
raw_color_sharpness []
raw_blur []
Returns [*shape, num_channels, num_rows, num_cols]
"""
# Extract
device = primitives[0].device
shape = stacking_program.shape[:-1]
num_blocks = stacking_program.shape[-1]
# [num_primitives]
square_size = torch.stack([primitive.size for primitive in primitives])
# [num_primitives, 3]
square_color = torch.stack([primitive.color for primitive in primitives])
# Convert
locations = convert_raw_locations_batched(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols, shape)
for block_id in range(num_blocks):
canvas = soft_render_square_batched(
square_size[stacking_program[..., block_id]],
square_color[stacking_program[..., block_id]],
locations[..., block_id, :],
canvas,
color_sharpness=get_color_sharpness(raw_color_sharpness),
blur=get_blur(raw_blur),
)
return canvas
def soft_render_variable_num_blocks(
primitives,
num_blocks,
stacking_program,
raw_locations,
raw_color_sharpness,
raw_blur,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
num_blocks [*shape]
stacking_program (tensor [*shape, max_num_blocks])
raw_locations (tensor [*shape, max_num_blocks])
raw_color_sharpness []
raw_blur []
Returns [*shape, num_channels, num_rows, num_cols]
"""
# Extract
device = primitives[0].device
shape = stacking_program.shape[:-1]
max_num_blocks = stacking_program.shape[-1]
# [num_primitives]
square_size = torch.stack([primitive.size for primitive in primitives])
# [num_primitives, 3]
square_color = torch.stack([primitive.color for primitive in primitives])
# Convert [*shape, max_num_blocks, 2]
locations = convert_raw_locations_batched(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols, shape)
for block_id in range(max_num_blocks):
# Determine whether this block is drawn
# [*shape, 1, 1, 1]
is_drawn = (block_id < num_blocks).float()[..., None, None, None]
# Draw the block
canvas = soft_render_square_batched(
square_size[stacking_program[..., block_id]],
square_color[stacking_program[..., block_id]],
locations[..., block_id, :],
canvas,
color_sharpness=get_color_sharpness(raw_color_sharpness),
blur=get_blur(raw_blur),
) * is_drawn + canvas * (1 - is_drawn)
return canvas
def convert_raw_locations(raw_locations, stacking_program, primitives):
"""
Args
raw_locations (tensor [num_blocks])
stacking_program (tensor [num_blocks])
primitives (list [num_primitives])
Returns [num_blocks, 2]
"""
# Extract
device = primitives[0].device
# Sample the bottom
y = torch.tensor(-1.0, device=device)
min_x = -0.8
max_x = 0.8
locations = []
for primitive_id, raw_location in zip(stacking_program, raw_locations):
size = primitives[primitive_id].size
min_x = min_x - size
x = raw_location.sigmoid() * (max_x - min_x) + min_x
locations.append(torch.stack([x, y]))
y = y + size
min_x = x
max_x = min_x + size
return torch.stack(locations)
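The x-coordinate step in `convert_raw_locations` squashes an unconstrained raw value into `[min_x, max_x]` through a sigmoid; a NumPy sketch of that mapping (the range endpoints here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

min_x, max_x = -0.8, 0.8
raw_location = np.array([-10.0, 0.0, 10.0])
# sigmoid output in (0, 1) is rescaled onto the allowed x-range
x = sigmoid(raw_location) * (max_x - min_x) + min_x
# large negative raw values map toward min_x, 0 maps to the midpoint,
# large positive raw values map toward max_x
```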
def convert_raw_locations_batched(raw_locations, stacking_program, primitives):
"""
Args
raw_locations (tensor [*shape, num_blocks])
stacking_program (tensor [*shape, num_blocks])
primitives (list [num_primitives])
Returns [*shape, num_blocks, 2]
"""
# Extract
shape = raw_locations.shape[:-1]
num_samples = util.get_num_elements(shape)
num_blocks = raw_locations.shape[-1]
# Flatten
# [num_samples, num_blocks]
raw_locations_flattened = raw_locations.view(num_samples, num_blocks)
stacking_program_flattened = stacking_program.reshape(num_samples, num_blocks)
locations_batched = []
for sample_id in range(num_samples):
locations_batched.append(
convert_raw_locations(
raw_locations_flattened[sample_id],
stacking_program_flattened[sample_id],
primitives,
)
)
return torch.stack(locations_batched).view(*[*shape, num_blocks, 2])
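The flatten-loop-reshape pattern used here (collapse the leading batch dimensions, apply the per-sample routine, then restore the shape) can be sketched with NumPy and a placeholder per-sample operation (`per_sample_op` is hypothetical):

```python
import numpy as np

def per_sample_op(v):
    # placeholder per-sample computation: pair each entry with its cumulative sum
    return np.stack([v, np.cumsum(v)], axis=-1)

batch = np.arange(12.0).reshape(2, 3, 2)   # [*shape=(2, 3), num_blocks=2]
shape, num_blocks = batch.shape[:-1], batch.shape[-1]
flat = batch.reshape(-1, num_blocks)        # [num_samples, num_blocks]
out = np.stack([per_sample_op(row) for row in flat])
out = out.reshape(*shape, num_blocks, 2)    # [*shape, num_blocks, 2]
```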
def get_color_sharpness(raw_color_sharpness):
return raw_color_sharpness.exp()
def get_blur(raw_blur):
return raw_blur.exp()
def convert_raw_locations_top_down(raw_locations, stacking_program, primitives):
"""
Args
raw_locations (tensor [num_blocks])
stacking_program (tensor [num_blocks])
primitives (list [num_primitives])
Returns [num_blocks, 2]
"""
# Sample the bottom
min_x = -0.8
max_x = 0.8
locations = []
for primitive_id, raw_location in zip(stacking_program, raw_locations):
size = primitives[primitive_id].size
min_x = min_x - size
x = raw_location.sigmoid() * (max_x - min_x) + min_x
y = -size / 2.0
locations.append(torch.stack([x, y]))
min_x = x
max_x = min_x + size
return torch.stack(locations)
def convert_raw_locations_batched_top_down(raw_locations, stacking_program, primitives):
"""
Args
raw_locations (tensor [*shape, num_blocks])
stacking_program (tensor [*shape, num_blocks])
primitives (list [num_primitives])
Returns [*shape, num_blocks, 2]
"""
# Extract
shape = raw_locations.shape[:-1]
num_samples = util.get_num_elements(shape)
num_blocks = raw_locations.shape[-1]
# Flatten
# [num_samples, num_blocks]
raw_locations_flattened = raw_locations.view(num_samples, num_blocks)
stacking_program_flattened = stacking_program.reshape(num_samples, num_blocks)
locations_batched = []
for sample_id in range(num_samples):
locations_batched.append(
convert_raw_locations_top_down(
raw_locations_flattened[sample_id],
stacking_program_flattened[sample_id],
primitives,
)
)
return torch.stack(locations_batched).view(*[*shape, num_blocks, 2])
def soft_render_top_down(
primitives,
num_blocks,
stacking_program,
raw_locations,
raw_color_sharpness,
raw_blur,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
num_blocks [*shape]
stacking_program (tensor [*shape, max_num_blocks])
raw_locations (tensor [*shape, max_num_blocks])
raw_color_sharpness []
raw_blur []
Returns [*shape, num_channels, num_rows, num_cols]
"""
# Extract
device = primitives[0].device
shape = stacking_program.shape[:-1]
max_num_blocks = stacking_program.shape[-1]
# [num_primitives]
square_size = torch.stack([primitive.size for primitive in primitives])
# [num_primitives, 3]
square_color = torch.stack([primitive.color for primitive in primitives])
# Convert [*shape, max_num_blocks, 2]
locations = convert_raw_locations_batched_top_down(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols, shape)
for block_id in range(max_num_blocks):
# Determine whether this block is drawn
# [*shape, 1, 1, 1]
is_drawn = (block_id < num_blocks).float()[..., None, None, None]
# Draw the block
canvas = soft_render_square_batched(
square_size[stacking_program[..., block_id]],
square_color[stacking_program[..., block_id]],
locations[..., block_id, :],
canvas,
color_sharpness=get_color_sharpness(raw_color_sharpness),
blur=get_blur(raw_blur),
) * is_drawn + canvas * (1 - is_drawn)
return canvas
def render_batched_top_down(
primitives,
num_blocks,
stacking_program,
raw_locations,
num_channels=3,
num_rows=32,
num_cols=32,
):
"""
Args
primitives (list [num_primitives])
num_blocks [*shape]
stacking_program (tensor [*shape, max_num_blocks])
raw_locations (tensor [*shape, max_num_blocks])
Returns [*shape, num_channels, num_rows, num_cols]
"""
# Extract
device = primitives[0].device
shape = stacking_program.shape[:-1]
max_num_blocks = stacking_program.shape[-1]
# [num_primitives]
square_size = torch.stack([primitive.size for primitive in primitives])
# [num_primitives, 3]
square_color = torch.stack([primitive.color for primitive in primitives])
# Convert [*shape, max_num_blocks, 2]
locations = convert_raw_locations_batched_top_down(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols, shape)
for block_id in range(max_num_blocks):
# Determine whether this block is drawn
# [*shape, 1, 1, 1]
is_drawn = (block_id < num_blocks).float()[..., None, None, None]
# Draw the block
canvas = render_square_batched(
square_size[stacking_program[..., block_id]],
square_color[stacking_program[..., block_id]],
locations[..., block_id, :],
canvas,
) * is_drawn + canvas * (1 - is_drawn)
return canvas
def render_top_down(
primitives, stacking_program, raw_locations, num_channels=3, num_rows=32, num_cols=32
):
# Init
device = primitives[0].device
# Convert
locations = convert_raw_locations_top_down(raw_locations, stacking_program, primitives)
# Render
canvas = init_canvas(device, num_channels, num_rows, num_cols)
for primitive_id, location in zip(stacking_program, locations):
primitive = primitives[primitive_id]
canvas = render_square(primitive, location, canvas, draw_on_top=True)
return canvas
from django.test import TestCase
# Create your tests here.
def test_a_plus_b():
assert 1 == 1
from SuperClass import SuperClass
class AnyClass(SuperClass):
C = 1
def __init__(self):
super(AnyClass, self).__init__()
@property
def new_property(self):
return 1
@new_property.setter
def new_property(self, value):
pass
@new_property.deleter
def new_property(self):
pass
def foo(self):
pass
# Originally written by Stephanie T. Douglas (2012-2014)
# Modified by Kevin Covey (2019)
# under the MIT License (see LICENSE.txt for full details)
import numpy as np
import emcee
import matplotlib.pyplot as plt
def quantile(x,quantiles):
""" Calculates quantiles - taken from DFM's triangle.py """
xsorted = sorted(x)
qvalues = [xsorted[int(q * len(xsorted))] for q in quantiles]
return list(zip(quantiles,qvalues))
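A quick illustration of the helper (restated here so the snippet is self-contained): note that it truncates `q * len(x)` to an integer index rather than interpolating, so edge quantiles can land one element off on short arrays.

```python
# Minimal re-statement of the quantile helper, for illustration only
def quantile(x, quantiles):
    xsorted = sorted(x)
    qvalues = [xsorted[int(q * len(xsorted))] for q in quantiles]
    return list(zip(quantiles, qvalues))

vals = list(range(100))  # 0..99
result = quantile(vals, [0.16, 0.5, 0.84])
# the 16th/50th/84th percentiles of a uniform 0..99 grid
```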
def dual_power_law(parameters,x):
"""
computes a dual-power law model
For x >= turnover, the model values follow a power-law with slope beta_2:
y = C + beta_2 log10(x)
For x < turnover, the model values are a second power-law with slope beta_1:
y = C + (beta_2 - beta_1)*log10(turnover) + beta_1 * log10(x)
    Inputs and outputs are in log space (i.e., a saturation level of 10**(-3.) is expressed as -3.; similarly for the Log Lx/Lbol values)
Input
-----
parameters : array-like (4)
parameters for the model: C (intercept constant), turnover, beta_1, beta_2
    x : array-like
        Rossby number (or rotation period) values. The model
        Log L_{whatever}/L_{bol} values will be computed at these points

    Output
    ------
     : numpy.ndarray (same size as x)
"""
#save the parameters with intuitive names
intercept_constant, turnover, beta_1, beta_2 = parameters[0], parameters[1], parameters[2], parameters[3]
#calculate the pivot constant that ensures the two laws meet at the same point
pivot_constant = intercept_constant + (beta_2 - beta_1) * np.log10(turnover)
    # initialize the output array; every entry is overwritten by one of the branches below
    Log_LxLbol = np.ones(len(x))
    # points at or beyond the turnover follow the beta_2 power law
    un_sat = np.where(x >= turnover)[0]
    Log_LxLbol[un_sat] = intercept_constant + beta_2 * np.log10(x[un_sat])
    # points below the turnover follow the beta_1 power law
    sat = np.where(x < turnover)[0]
    Log_LxLbol[sat] = pivot_constant + beta_1 * np.log10(x[sat])
return Log_LxLbol
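Because the pivot constant absorbs `(beta_2 - beta_1) * log10(turnover)`, the two branches agree at the turnover. A self-contained NumPy re-statement of the model (parameter values are illustrative only) makes the continuity easy to verify:

```python
import numpy as np

def dual_power_law_sketch(intercept_constant, turnover, beta_1, beta_2, x):
    # same construction as above: the pivot constant stitches the branches together
    pivot_constant = intercept_constant + (beta_2 - beta_1) * np.log10(turnover)
    return np.where(
        x >= turnover,
        intercept_constant + beta_2 * np.log10(x),
        pivot_constant + beta_1 * np.log10(x),
    )

params = (30.0, 10.0, -0.1, -2.0)  # illustrative values only
at_turnover = dual_power_law_sketch(*params, np.array([10.0]))[0]
just_below = dual_power_law_sketch(*params, np.array([9.999]))
just_above = dual_power_law_sketch(*params, np.array([10.001]))
# the model is continuous across the turnover
```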
def dual_lnprior_periods_fixSlope(parameters, low_slope, high_slope):
"""
simple method of setting (flat) priors on model parameters
If input parameters are within the priors, a (constant) likelihood is returned;
if the input parameters are outside the priors, a negative infinity is returned
to indicate an unacceptable fit.
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
    low_slope, high_slope : floats
        bounds of the flat prior on the unsaturated slope beta_2
Output
------
: value
0.0 if parameters are within priors; -np.inf if not.
"""
#print('slope bounds are: ', low_slope, high_slope)
intercept_constant, turnover, beta_1, beta_2, lnf = parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]
if 20 < intercept_constant < 40 and 2 < turnover < 50 and -4 < beta_1 < 2 and low_slope < beta_2 < high_slope and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
def dual_lnprior_periods(parameters):
"""
simple method of setting (flat) priors on model parameters
If input parameters are within the priors, a (constant) likelihood is returned;
if the input parameters are outside the priors, a negative infinity is returned
to indicate an unacceptable fit.
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
Output
------
: value
0.0 if parameters are within priors; -np.inf if not.
"""
intercept_constant, turnover, beta_1, beta_2, lnf = parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]
if 20 < intercept_constant < 40 and 2 < turnover < 50 and -4 < beta_1 < 2 and -5 < beta_2 < 1 and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
def dual_lnprior_fixSlope(parameters, low_slope, high_slope):
"""
simple method of setting (flat) priors on model parameters
If input parameters are within the priors, a (constant) likelihood is returned;
if the input parameters are outside the priors, a negative infinity is returned
to indicate an unacceptable fit.
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
    low_slope, high_slope : floats
        bounds of the flat prior on the unsaturated slope beta_2
Output
------
: value
0.0 if parameters are within priors; -np.inf if not.
"""
#print('slope bounds are: ', low_slope, high_slope)
intercept_constant, turnover, beta_1, beta_2, lnf = parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]
if -99 < intercept_constant < 100 and 0.05 < turnover < 0.5 and -1 < beta_1 < 1 and low_slope < beta_2 < high_slope and -10.0 < lnf < 1.0:
# if 20 < intercept_constant < 40 and 2 < turnover < 50 and -4 < beta_1 < 2 and low_slope < beta_2 < high_slope and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
def dual_lnprior(parameters):
"""
simple method of setting (flat) priors on model parameters
If input parameters are within the priors, a (constant) likelihood is returned;
if the input parameters are outside the priors, a negative infinity is returned
to indicate an unacceptable fit.
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
Output
------
: value
0.0 if parameters are within priors; -np.inf if not.
"""
intercept_constant, turnover, beta_1, beta_2, lnf = parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]
if -99 < intercept_constant < 100 and 0.05 < turnover < 0.5 and -1 < beta_1 < 1 and -4 < beta_2 < -1 and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
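All of the priors above are flat box priors: 0.0 inside the bounds, -inf outside, which the sampler treats as zero posterior probability. A generic sketch of the pattern (bounds copied from `dual_lnprior`):

```python
import numpy as np

def flat_box_lnprior(theta, bounds):
    """Return 0.0 if every parameter lies strictly inside its (low, high) bounds, else -inf."""
    for value, (low, high) in zip(theta, bounds):
        if not (low < value < high):
            return -np.inf
    return 0.0

bounds = [(-99, 100), (0.05, 0.5), (-1, 1), (-4, -1), (-10.0, 1.0)]
inside = flat_box_lnprior([0.0, 0.2, 0.0, -2.0, -3.0], bounds)    # inside every box
outside = flat_box_lnprior([0.0, 5.0, 0.0, -2.0, -3.0], bounds)   # turnover out of range
```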
def dual_lnlike(parameters, rossby_no, log_LxLbol ,err_ll):
"""
Calculates the natural log of the likelihood for a given model fit to a given input dataset (with errors).
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2,
        and lnf (the log of the multiplicative error inflator)
rossby_no : array-like
Data Rossby number values
log_LxLbol : array-like
Data activity values (L_{whatever}/L_{bol} - in the original case, LxLbol
error_ll : array-like
Uncertainties in the data activity values.
Output
------
lnprob : float
natural log of the likelihood of the model given the data
"""
intercept_constant, turnover, beta_1, beta_2, lnf = parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]
#if ((sat_level>1e-1) or (sat_level<1e-8) or (turnover<0.001) ## stephanie's original method of setting priors;
# or (turnover>2) or (beta>2) or (beta<-6)): ## now offloaded to lnprior
# return -np.inf
model_ll = dual_power_law(parameters, rossby_no)
#inv_sigma2 = 1.0/(err_ll**2) ## inverse sigma assuming only quoted errors
inv_sigma2 = 1.0/(err_ll**2 + model_ll**2*np.exp(2*lnf)) ## inverse sigma assuming errors are underestimated by some multiplicative factor
ln_like = -0.5*(np.sum((log_LxLbol-model_ll)**2*inv_sigma2 - np.log(inv_sigma2)))
return ln_like
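The `lnf` term inflates the quoted uncertainties by a model-dependent factor, a common guard against underestimated error bars; the resulting Gaussian log-likelihood can be sketched on its own (data values are illustrative):

```python
import numpy as np

def gaussian_lnlike(model, data, err, lnf):
    # variance = quoted error^2 plus a fractional-of-model term controlled by lnf
    inv_sigma2 = 1.0 / (err**2 + model**2 * np.exp(2 * lnf))
    return -0.5 * np.sum((data - model)**2 * inv_sigma2 - np.log(inv_sigma2))

model = np.array([28.0, 27.0])
data = np.array([28.0, 27.0])   # perfect match: only the normalization term remains
err = np.array([0.1, 0.1])
ll_tight = gaussian_lnlike(model, data, err, lnf=-10.0)  # negligible inflation
ll_loose = gaussian_lnlike(model, data, err, lnf=0.0)    # heavy inflation
# inflating the variance lowers the likelihood of the same perfect fit
```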
def dual_lnprob_periods_fixed(parameters, rossby_no, log_LxLbol, err_ll, lowSlope, highSlope):
"""
Calculates the natural log of the probability of a model, given a set of priors, the defined likelihood function, and the observed data
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
    lowSlope, highSlope : floats
        bounds of the flat prior on the unsaturated slope beta_2
rossby_no : array-like
Data Rossby number values
log_LxLbol : array-like
Data activity values (L_{whatever}/L_{bol} - in the original case, LxLbol
error_ll : array-like
Uncertainties in the data activity values.
Output
------
lnprob : float
natural log of the likelihood of the model given the data and the priors
(by adding prior and model likelihood terms, which are
calculated by lnprior() and lnlike() respectively)
"""
lp = dual_lnprior_periods_fixSlope(parameters, lowSlope, highSlope)
if not np.isfinite(lp):
return -np.inf
return lp + dual_lnlike(parameters, rossby_no, log_LxLbol, err_ll)
def dual_lnprob_periods(parameters, rossby_no, log_LxLbol, err_ll):
"""
Calculates the natural log of the probability of a model, given a set of priors, the defined likelihood function, and the observed data
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
rossby_no : array-like
Data Rossby number values
log_LxLbol : array-like
Data activity values (L_{whatever}/L_{bol} - in the original case, LxLbol
error_ll : array-like
Uncertainties in the data activity values.
Output
------
lnprob : float
natural log of the likelihood of the model given the data and the priors
(by adding prior and model likelihood terms, which are
calculated by lnprior() and lnlike() respectively)
"""
lp = dual_lnprior_periods(parameters)
if not np.isfinite(lp):
return -np.inf
return lp + dual_lnlike(parameters, rossby_no, log_LxLbol, err_ll)
def dual_lnprob(parameters, rossby_no, log_LxLbol, err_ll):
"""
Calculates the natural log of the probability of a model, given a set of priors, the defined likelihood function, and the observed data
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
rossby_no : array-like
Data Rossby number values
log_LxLbol : array-like
Data activity values (L_{whatever}/L_{bol} - in the original case, LxLbol
error_ll : array-like
Uncertainties in the data activity values.
Output
------
lnprob : float
natural log of the likelihood of the model given the data and the priors
(by adding prior and model likelihood terms, which are
calculated by lnprior() and lnlike() respectively)
"""
lp = dual_lnprior(parameters)
if not np.isfinite(lp):
return -np.inf
return lp + dual_lnlike(parameters, rossby_no, log_LxLbol, err_ll)
def dual_lnprob_fixed(parameters, rossby_no, log_LxLbol, err_ll, low_slope, high_slope):
"""
Calculates the natural log of the probability of a model, given a set of priors, the defined likelihood function, and the observed data
Input
-----
    parameters : array-like (5)
        parameters for the model: intercept_constant, turnover, beta_1, beta_2, lnf
    low_slope, high_slope : floats
        bounds of the flat prior on the unsaturated slope beta_2
rossby_no : array-like
Data Rossby number values
log_LxLbol : array-like
Data activity values (L_{whatever}/L_{bol} - in the original case, LxLbol
error_ll : array-like
Uncertainties in the data activity values.
Output
------
lnprob : float
natural log of the likelihood of the model given the data and the priors
(by adding prior and model likelihood terms, which are
calculated by lnprior() and lnlike() respectively)
"""
#lp = dual_lnprior(parameters)
lp = dual_lnprior_fixSlope(parameters, low_slope, high_slope)
if not np.isfinite(lp):
return -np.inf
return lp + dual_lnlike(parameters, rossby_no, log_LxLbol, err_ll)
def run_dual_fit_constrained(start_p, data_rossby, data_ll, data_ull, lowSlope, highSlope,
nwalkers=256,nsteps=40000):
"""
Sets up the emcee ensemble sampler, runs it, prints out the results,
then returns the samples.
Input
-----
    start_p : (5)
        starting guesses for the five model parameters:
        intercept constant, turnover point, the two power-law slopes
        (beta_1, beta_2), and lnf
data_rossby : array-like (ndata)
Data Rossby number values
data_ll : array-like (ndata)
Data activity values (L_{whatever}/L_{bol} - in my case
I was using L_{Halpha}/L_{bol})
data_ull : array-like (ndata)
Uncertainties in the data activity values.
Output
------
    samples : array-like (nwalkers*nsteps,5)
all the samples from all the emcee walkers, reshaped so there's
just one column per parameter
"""
ndim = 5
p0 = np.zeros((nwalkers,ndim))
# initialize the walkers in a tiny gaussian ball around the starting point
for i in range(nwalkers):
p0[i] = start_p + (1e-1*np.random.randn(ndim)*start_p)
sampler = emcee.EnsembleSampler(nwalkers,ndim,dual_lnprob_fixed,
args=[data_rossby,data_ll,data_ull,lowSlope,highSlope])
    pos, prob, state = sampler.run_mcmc(p0, nsteps // 2)  # burn-in (integer step count)
sampler.reset()
pos,prob,state=sampler.run_mcmc(pos,nsteps)
ic_mcmc = quantile(sampler.flatchain[:,0],[.16,.5,.84])
#sl_mcmc.info()
#print(sl_mcmc)
to_mcmc = quantile(sampler.flatchain[:,1],[.16,.5,.84])
#print(to_mcmc)
beta1_mcmc = quantile(sampler.flatchain[:,2],[.16,.5,.84])
beta2_mcmc = quantile(sampler.flatchain[:,3],[.16,.5,.84])
#print(be_mcmc)
var_mcmc = quantile(sampler.flatchain[:,4],[.16,.5,.84])
print('intercept constant={0:.7f} +{1:.7f}/-{2:.7f}'.format(
ic_mcmc[1][1],ic_mcmc[1][1]-ic_mcmc[0][1],ic_mcmc[2][1]-ic_mcmc[1][1]))
print('turnover={0:.3f} +{1:.3f}/-{2:.3f}'.format(
to_mcmc[1][1],to_mcmc[1][1]-to_mcmc[0][1],to_mcmc[2][1]-to_mcmc[1][1]))
print('beta1={0:.3f} +{1:.3f}/-{2:.3f}'.format(
beta1_mcmc[1][1],beta1_mcmc[1][1]-beta1_mcmc[0][1],beta1_mcmc[2][1]-beta1_mcmc[1][1]))
print('beta2={0:.3f} +{1:.3f}/-{2:.3f}'.format(
beta2_mcmc[1][1],beta2_mcmc[1][1]-beta2_mcmc[0][1],beta2_mcmc[2][1]-beta2_mcmc[1][1]))
print('var={0:.3f} +{1:.3f}/-{2:.3f}'.format(
var_mcmc[1][1],var_mcmc[1][1]-var_mcmc[0][1],var_mcmc[2][1]-var_mcmc[1][1]))
samples = sampler.flatchain
return samples
def run_dual_fit_periods_constrained(start_p, data_rossby, data_ll, data_ull, lowSlope, highSlope,
nwalkers=256,nsteps=10000):
"""
Sets up the emcee ensemble sampler, runs it, prints out the results,
then returns the samples.
Input
-----
    start_p : (5)
        starting guesses for the five model parameters:
        intercept constant, turnover point, the two power-law slopes
        (beta_1, beta_2), and lnf
data_rossby : array-like (ndata)
Data Rossby number values
data_ll : array-like (ndata)
Data activity values (L_{whatever}/L_{bol} - in my case
I was using L_{Halpha}/L_{bol})
data_ull : array-like (ndata)
Uncertainties in the data activity values.
Output
------
    samples : array-like (nwalkers*nsteps,5)
all the samples from all the emcee walkers, reshaped so there's
just one column per parameter
"""
ndim = 5
p0 = np.zeros((nwalkers,ndim))
# initialize the walkers in a tiny gaussian ball around the starting point
for i in range(nwalkers):
p0[i] = start_p + (1e-1*np.random.randn(ndim)*start_p)
sampler = emcee.EnsembleSampler(nwalkers,ndim,dual_lnprob_periods_fixed,
args=[data_rossby,data_ll,data_ull,lowSlope,highSlope])
    pos, prob, state = sampler.run_mcmc(p0, nsteps // 2)  # burn-in (integer step count)
sampler.reset()
pos,prob,state=sampler.run_mcmc(pos,nsteps)
ic_mcmc = quantile(sampler.flatchain[:,0],[.16,.5,.84])
#sl_mcmc.info()
#print(sl_mcmc)
to_mcmc = quantile(sampler.flatchain[:,1],[.16,.5,.84])
#print(to_mcmc)
beta1_mcmc = quantile(sampler.flatchain[:,2],[.16,.5,.84])
beta2_mcmc = quantile(sampler.flatchain[:,3],[.16,.5,.84])
#print(be_mcmc)
var_mcmc = quantile(sampler.flatchain[:,4],[.16,.5,.84])
print('intercept constant={0:.7f} +{1:.7f}/-{2:.7f}'.format(
ic_mcmc[1][1],ic_mcmc[1][1]-ic_mcmc[0][1],ic_mcmc[2][1]-ic_mcmc[1][1]))
print('turnover={0:.3f} +{1:.3f}/-{2:.3f}'.format(
to_mcmc[1][1],to_mcmc[1][1]-to_mcmc[0][1],to_mcmc[2][1]-to_mcmc[1][1]))
print('beta1={0:.3f} +{1:.3f}/-{2:.3f}'.format(
beta1_mcmc[1][1],beta1_mcmc[1][1]-beta1_mcmc[0][1],beta1_mcmc[2][1]-beta1_mcmc[1][1]))
print('beta2={0:.3f} +{1:.3f}/-{2:.3f}'.format(
beta2_mcmc[1][1],beta2_mcmc[1][1]-beta2_mcmc[0][1],beta2_mcmc[2][1]-beta2_mcmc[1][1]))
print('var={0:.3f} +{1:.3f}/-{2:.3f}'.format(
var_mcmc[1][1],var_mcmc[1][1]-var_mcmc[0][1],var_mcmc[2][1]-var_mcmc[1][1]))
samples = sampler.flatchain
return samples
def plot_dual_fit(samples,data_rossby,data_ll,data_ull,plotfilename=None,ylabel=r'$L_{X}/L_{bol}$', sampleName=None):
"""
Plot fit results with data
Input
-----
    samples : array-like (nwalkers*nsteps,5)
all the samples from all the emcee walkers, reshaped so there's
just one column per parameter
data_rossby : array-like (ndata)
Data Rossby number values
data_ll : array-like (ndata)
Data activity values (L_{whatever}/L_{bol} - in my case
I was using L_{Halpha}/L_{bol})
data_ull : array-like (ndata)
Uncertainties in the data activity values.
plotfilename : string (optional; default=None)
if not None, the plot will be saved using this filename
"""
ic_mcmc = quantile(samples[:,0],[.16,.5,.84])
to_mcmc = quantile(samples[:,1],[.16,.5,.84])
beta1_mcmc = quantile(samples[:,2],[.16,.5,.84])
beta2_mcmc = quantile(samples[:,3],[.16,.5,.84])
var_mcmc = quantile(samples[:,4],[.16,.5,.84])
plt.figure()
ax = plt.subplot(111)
ax.set_xscale('log')
#ax.set_yscale('log')
# Just trying to reduce the number of plotted points...
xl = np.append(np.arange(0.001,0.2,0.001),np.arange(0.2,2.5,0.02))
# xl = np.arange(0.001,2.0,0.005)
#for p in list(samples[np.random.randint(len(samples), size=100)]):
# ax.plot(xl,rossby_model(p,xl),color='LightGrey')
intercept_constant = ic_mcmc[1][1]
turnover = to_mcmc[1][1]
x = np.asarray([turnover,2.0])
# x = np.arange(turnover,2.0,0.001)
#constant = sat_level/(turnover**-1.)
#ax.plot(x,constant*(x**-1.),'k--',lw=1.5,label=r'$\beta=\ -1$')
#constant = sat_level/(turnover**-2.1)
#ax.plot(x,constant*(x**-2.1),'k-.',lw=1.5,label=r'$\beta=\ -2.1$')
#constant = sat_level/(turnover**-2.7)
#ax.plot(x,constant*(x**-2.7),'k:',lw=2,label=r'$\beta=\ -2.7$')
star_color = 'steelblue'
ax.errorbar(data_rossby,data_ll,data_ull,color=star_color,fmt='.',capsize=1,
ms=2,mec=star_color)
#print('parameters for model plot:')
#print('xl: ')
#print(xl)
#print('model inputs: ')
#print([sl_mcmc[1][1],to_mcmc[1][1],be_mcmc[1][1]])
#print('model: ')
#print(
ax.plot(xl,dual_power_law([ic_mcmc[1][1],to_mcmc[1][1],beta1_mcmc[1][1],beta2_mcmc[1][1]],xl),
'k-',lw=2,label=r'$\beta1=\ {0:.2f}$'.format(beta1_mcmc[1][1])+"\n"+r'$\beta2=\ {0:.2f}$'.format(beta2_mcmc[1][1]) )
ax.set_ylabel(ylabel,fontsize='xx-large')
ax.set_xlabel('R$_o$',fontsize='x-large')
ax.set_xlim(1e-3,2)
ax.tick_params(labelsize='x-large')
#ax.set_xticklabels((0.001,0.01,0.1,1))
handles, labels = ax.get_legend_handles_labels()
new_handles = np.append(handles[-1],handles[0:-1])
new_labels = np.append(labels[-1],labels[0:-1])
if sampleName!=None:
ax.legend(new_handles,new_labels,loc=3, title=sampleName)
else:
ax.legend(new_handles,new_labels,loc=3)
if plotfilename!=None:
plt.savefig(plotfilename)
def plot_dual_fit_periods(samples,data_rossby,data_ll,data_ull,plotfilename=None,ylabel=r'$Log L_{X}$', sampleName=None):
"""
Plot fit results with data
Input
-----
    samples : array-like (nwalkers*nsteps,5)
all the samples from all the emcee walkers, reshaped so there's
just one column per parameter
data_rossby : array-like (ndata)
Data Rossby number values
data_ll : array-like (ndata)
Data activity values (L_{whatever}/L_{bol} - in my case
I was using L_{Halpha}/L_{bol})
data_ull : array-like (ndata)
Uncertainties in the data activity values.
plotfilename : string (optional; default=None)
if not None, the plot will be saved using this filename
"""
#print(len(data_rossby),len(data_ll), len(data_ull))
ic_mcmc = quantile(samples[:,0],[.16,.5,.84])
to_mcmc = quantile(samples[:,1],[.16,.5,.84])
beta1_mcmc = quantile(samples[:,2],[.16,.5,.84])
beta2_mcmc = quantile(samples[:,3],[.16,.5,.84])
var_mcmc = quantile(samples[:,4],[.16,.5,.84])
plt.figure()
ax = plt.subplot(111)
ax.set_xscale('log')
#ax.set_yscale('log')
# Just trying to reduce the number of plotted points...
xl = np.append(np.arange(0.05,7,0.01),np.arange(7,160,0.5))
# xl = np.arange(0.001,2.0,0.005)
#for p in list(samples[np.random.randint(len(samples), size=100)]):
# ax.plot(xl,rossby_model(p,xl),color='LightGrey')
intercept_constant = ic_mcmc[1][1]
turnover = to_mcmc[1][1]
x = np.asarray([turnover,2.0])
# x = np.arange(turnover,2.0,0.001)
#constant = sat_level/(turnover**-1.)
#ax.plot(x,constant*(x**-1.),'k--',lw=1.5,label=r'$\beta=\ -1$')
#constant = sat_level/(turnover**-2.1)
#ax.plot(x,constant*(x**-2.1),'k-.',lw=1.5,label=r'$\beta=\ -2.1$')
#constant = sat_level/(turnover**-2.7)
#ax.plot(x,constant*(x**-2.7),'k:',lw=2,label=r'$\beta=\ -2.7$')
star_color = 'steelblue'
# ax.errorbar(data_rossby,data_ll,data_ull,color=star_color,fmt='.',capsize=0,
# ms=4,mec=star_color)
ax.scatter(data_rossby,data_ll,color=star_color) #,fmt='.',capsize=0,
# ms=4,mec=star_color)
#print('parameters for model plot:')
#print('xl: ')
#print(xl)
#print('model inputs: ')
#print([sl_mcmc[1][1],to_mcmc[1][1],be_mcmc[1][1]])
#print('model: ')
#print(
ax.plot(xl,dual_power_law([ic_mcmc[1][1],to_mcmc[1][1],beta1_mcmc[1][1],beta2_mcmc[1][1]],xl),
'k-',lw=2,label=r'$\beta1=\ {0:.2f}$'.format(beta1_mcmc[1][1])+"\n"+r'$\beta2=\ {0:.2f}$'.format(beta2_mcmc[1][1]) )
ax.set_ylabel(ylabel,fontsize='xx-large')
ax.set_xlabel(r'P$_{rot}$',fontsize='x-large')
ax.set_xlim(0.05,200)
ax.tick_params(labelsize='x-large')
#ax.set_xticklabels((0.001,0.01,0.1,1))
handles, labels = ax.get_legend_handles_labels()
new_handles = np.append(handles[-1],handles[0:-1])
new_labels = np.append(labels[-1],labels[0:-1])
if sampleName is not None:
ax.legend(new_handles,new_labels,loc=3, title=sampleName)
else:
ax.legend(new_handles,new_labels,loc=3)
if plotfilename is not None:
plt.savefig(plotfilename)
def print_pdf(cropchain,filename,col_names=["sat_level","turnover","beta"]):
f = open(filename,"w")
f.write("# {}".format(col_names[0]))
for cname in col_names[1:]:
f.write(",{}".format(cname))
f.write("\n")
for i,p in enumerate(cropchain):
#print p
f.write(str(p[0]))
for this_p in p[1:]:
f.write(",{}".format(this_p))
f.write("\n")
f.close()
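print_pdf above writes a "# name,name,..." header followed by one CSV row per sample. A small pure function reproducing that layout (hypothetical, for illustration and easy testing) is:

```python
# Hypothetical helper mirroring print_pdf's output format as a string
# instead of writing to a file.
def print_pdf_demo(cropchain, col_names=("sat_level", "turnover", "beta")):
    lines = ["# " + ",".join(col_names)]
    for p in cropchain:
        lines.append(",".join(str(v) for v in p))
    return "\n".join(lines) + "\n"
```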
# === polliwog/line/test_functions.py | algrs/polliwog | BSD-2-Clause ===
import numpy as np
import vg
from .functions import project_to_line
def test_project_to_line():
p1 = np.array([5.0, 5.0, 4.0])
p2 = np.array([10.0, 10.0, 6.0])
along_line = p2 - p1
common_kwargs = dict(reference_points_of_lines=p1, vectors_along_lines=along_line)
np.testing.assert_array_almost_equal(
project_to_line(points=p1, **common_kwargs), p1
)
np.testing.assert_array_almost_equal(
project_to_line(points=p2, **common_kwargs), p2
)
other_point_on_line = np.array([0.0, 0.0, 2.0])
np.testing.assert_array_almost_equal(
project_to_line(points=other_point_on_line, **common_kwargs),
other_point_on_line,
)
example_perpendicular_displacement = [
k * vg.perpendicular(vg.normalize(along_line), vg.basis.x)
for k in [0.1, 0.5, -2.0]
]
for point_on_line in [p1, p2, other_point_on_line]:
for displacement in example_perpendicular_displacement:
np.testing.assert_array_almost_equal(
project_to_line(points=point_on_line + displacement, **common_kwargs),
point_on_line,
)
def test_project_to_line_stacked_points():
p1 = np.array([5.0, 5.0, 4.0])
p2 = np.array([10.0, 10.0, 6.0])
along_line = p2 - p1
common_kwargs = dict(reference_points_of_lines=p1, vectors_along_lines=along_line)
other_point_on_line = np.array([0.0, 0.0, 2.0])
example_perpendicular_displacement = [
k * vg.perpendicular(vg.normalize(along_line), vg.basis.x)
for k in [0.1, 0.5, -2.0]
]
example_points = np.vstack([p1, p2, other_point_on_line])
expected_projected_points = np.vstack([p1, p2, other_point_on_line])
np.testing.assert_array_almost_equal(
project_to_line(points=example_points, **common_kwargs),
expected_projected_points,
)
np.testing.assert_array_almost_equal(
project_to_line(
points=example_points + example_perpendicular_displacement, **common_kwargs
),
expected_projected_points,
)
def test_project_to_line_stacked_lines():
p1 = np.array([5.0, 5.0, 4.0])
p2 = np.array([10.0, 10.0, 6.0])
along_line = p2 - p1
common_kwargs = dict(
reference_points_of_lines=np.array([p1, p1]),
vectors_along_lines=np.array([along_line, along_line]),
)
other_point_on_line = np.array([0.0, 0.0, 2.0])
np.testing.assert_array_almost_equal(
project_to_line(points=other_point_on_line, **common_kwargs),
np.array([other_point_on_line, other_point_on_line]),
)
example_perpendicular_displacement = [
k * vg.perpendicular(vg.normalize(along_line), vg.basis.x)
for k in [0.1, 0.5, -2.0]
]
for point_on_line in [p1, p2, other_point_on_line]:
for displacement in example_perpendicular_displacement:
np.testing.assert_array_almost_equal(
project_to_line(points=point_on_line + displacement, **common_kwargs),
np.array([point_on_line, point_on_line]),
)
def test_project_to_line_stacked_both():
p1 = np.array([5.0, 5.0, 4.0])
p2 = np.array([10.0, 10.0, 6.0])
along_line = p2 - p1
common_kwargs = dict(
reference_points_of_lines=np.array([p1, p1, p1]),
vectors_along_lines=np.array([along_line, along_line, along_line]),
)
other_point_on_line = np.array([0.0, 0.0, 2.0])
example_perpendicular_displacement = [
k * vg.perpendicular(vg.normalize(along_line), vg.basis.x)
for k in [0.1, 0.5, -2.0]
]
example_points = np.vstack([p1, p2, other_point_on_line])
expected_projected_points = np.vstack([p1, p2, other_point_on_line])
np.testing.assert_array_almost_equal(
project_to_line(points=example_points, **common_kwargs),
expected_projected_points,
)
np.testing.assert_array_almost_equal(
project_to_line(
points=example_points + example_perpendicular_displacement, **common_kwargs
),
expected_projected_points,
)
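The tests above pin down project_to_line's behaviour for single or stacked points and single or stacked lines. A hypothetical NumPy implementation consistent with those tests (the real one lives in polliwog's functions module) could be:

```python
import numpy as np

# Hypothetical reference implementation: orthogonal projection of each point
# onto the infinite line p(t) = reference + t * vector. Broadcasting with
# keepdims handles the stacked-points / stacked-lines cases exercised above.
def project_to_line_sketch(points, reference_points_of_lines, vectors_along_lines):
    points = np.asarray(points, dtype=float)
    ref = np.asarray(reference_points_of_lines, dtype=float)
    vec = np.asarray(vectors_along_lines, dtype=float)
    t = ((points - ref) * vec).sum(axis=-1, keepdims=True) / (vec * vec).sum(axis=-1, keepdims=True)
    return ref + t * vec
```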
# === registry/application/handlers/service_handlers.py | vinthedark/snet-marketplace-service | MIT ===
import json
from common.constant import StatusCode
from common.exception_handler import exception_handler
from common.exceptions import BadRequestException
from common.logger import get_logger
from common.utils import generate_lambda_response, handle_exception_with_slack_notification, validate_dict, \
validate_dict_list
from registry.application.services.service_publisher_service import ServicePublisherService
from registry.config import NETWORK_ID, SLACK_HOOK
from registry.exceptions import EXCEPTIONS
logger = get_logger(__name__)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def verify_service_id(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
query_parameters = event["queryStringParameters"]
if "org_uuid" not in path_parameters or "service_id" not in query_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_id = query_parameters["service_id"]
response = ServicePublisherService(username, org_uuid, None).get_service_id_availability_status(service_id)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def save_transaction_hash_for_published_service(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
payload = json.loads(event["body"])
if "org_uuid" not in path_parameters or "service_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_uuid = path_parameters["service_uuid"]
response = ServicePublisherService(username, org_uuid, service_uuid).save_transaction_hash_for_published_service(payload)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def submit_service_for_approval(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
payload = json.loads(event["body"])
if "org_uuid" not in path_parameters or "service_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_uuid = path_parameters["service_uuid"]
response = ServicePublisherService(username, org_uuid, service_uuid).submit_service_for_approval(payload)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def save_service(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
payload = json.loads(event["body"])
if "org_uuid" not in path_parameters or "service_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_uuid = path_parameters["service_uuid"]
response = ServicePublisherService(username, org_uuid, service_uuid).save_service(payload)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def create_service(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
payload = json.loads(event["body"])
if "org_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
response = ServicePublisherService(username, org_uuid, None).create_service(payload)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def get_services_for_organization(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
payload = json.loads(event["body"])
if "org_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
response = ServicePublisherService(username, org_uuid, None).get_services_for_organization(payload)
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def get_service_for_service_uuid(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
if "org_uuid" not in path_parameters or "service_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_uuid = path_parameters["service_uuid"]
response = ServicePublisherService(username, org_uuid, service_uuid).get_service_for_given_service_uuid()
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
@handle_exception_with_slack_notification(SLACK_HOOK=SLACK_HOOK, NETWORK_ID=NETWORK_ID, logger=logger)
def publish_service_metadata_to_ipfs(event, context):
username = event["requestContext"]["authorizer"]["claims"]["email"]
path_parameters = event["pathParameters"]
if "org_uuid" not in path_parameters or "service_uuid" not in path_parameters:
raise BadRequestException()
org_uuid = path_parameters["org_uuid"]
service_uuid = path_parameters["service_uuid"]
response = ServicePublisherService(username, org_uuid, service_uuid).publish_service_data_to_ipfs()
return generate_lambda_response(
StatusCode.OK,
{"status": "success", "data": response, "error": {}}, cors_enabled=True
)
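The handlers above repeat the same extract-and-validate steps on the path parameters. A hypothetical helper (not part of the actual module; it raises KeyError as a stand-in for the module's BadRequestException) could factor that pattern out:

```python
# Hypothetical helper: validate that all required path parameters are present
# and return their values in order. KeyError stands in for BadRequestException.
def require_path_params(path_parameters, *keys):
    missing = [k for k in keys if k not in path_parameters]
    if missing:
        raise KeyError("missing path parameters: {}".format(", ".join(missing)))
    return [path_parameters[k] for k in keys]
```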
# === env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_bar.py | acrucetta/Chicago_COVI_WebApp | MIT, Unlicense ===
from plotly.graph_objs import Bar
# === app_name/views.py | hotbaby/django-app-skeleton | MIT ===
# encoding: utf8
from . import models
from . import filters
from . import exceptions
from . import serializers
# === django_extended/utils/date/format_date_range_html.py | dalou/django-extended | BSD-3-Clause ===
# encoding: utf-8
import datetime
from django.utils.safestring import mark_safe
def format_date_range_html(
start_date=None,
end_date=None,
start_hour=None,
end_hour=None,
divider='<br/>'):
if end_date and start_date:
if start_date.day == end_date.day and \
start_date.month == end_date.month and \
start_date.year == end_date.year:
if start_hour and end_hour:
return mark_safe(
start_date.strftime("Le %A %d %B ") + divider + "de " + start_hour.strftime("%H:%M") + " à " + end_hour.strftime("%H:%M")
)
elif start_hour:
return mark_safe(
start_date.strftime("Le %A %d %B ") + divider + "à partir de " + start_hour.strftime("%H:%M")
)
elif end_hour:
return mark_safe(
start_date.strftime("Le %A %d %B ") + divider + "jusqu'à " + end_hour.strftime("%H:%M")
)
else:
return mark_safe(
start_date.strftime("Le %A %d %B ") + divider + "toute la journée"
)
else:
if start_hour and end_hour:
return mark_safe(
start_date.strftime("Du %A %d %B ") + divider + "à " + start_hour.strftime("%H:%M") +
divider + end_date.strftime("Jusqu'au %A %d %B ") + divider + "à " + end_hour.strftime("%H:%M")
)
elif start_hour:
return mark_safe(
start_date.strftime("Du %A %d %B ") + divider + "à " + start_hour.strftime("%H:%M") +
divider + end_date.strftime("Jusqu'au %A %d %B")
)
elif end_hour:
return mark_safe(
start_date.strftime("Du %A %d %B") +
divider + end_date.strftime("Jusqu'au %A %d %B ") + divider + "à " + end_hour.strftime("%H:%M")
)
else:
return mark_safe(
start_date.strftime("Du %A %d %B") +
divider + end_date.strftime("Jusqu'au %A %d %B ") + divider
)
elif start_date:
if start_hour and end_hour:
return mark_safe(
start_date.strftime("À partir du %A %d %B ") + divider + "de " + start_hour.strftime("%H:%M") + " à " + end_hour.strftime("%H:%M")
)
elif start_hour:
return mark_safe(
start_date.strftime("À partir du %A %d %B ") + divider + "à " + start_hour.strftime("%H:%M")
)
elif end_hour:
return mark_safe(
start_date.strftime("À partir du %A %d %B ") + divider + "jusqu'à " + end_hour.strftime("%H:%M")
)
else:
return mark_safe(
start_date.strftime("À partir du %A %d %B")
)
elif end_date:
if start_hour and end_hour:
return mark_safe(
end_date.strftime("Jusqu'au %A %d %B ") + divider + "de " + start_hour.strftime("%H:%M") + " à " + end_hour.strftime("%H:%M")
)
elif start_hour:
return mark_safe(
end_date.strftime("Jusqu'au %A %d %B ") + divider + "à partir de " + start_hour.strftime("%H:%M")
)
elif end_hour:
return mark_safe(
end_date.strftime("Jusqu'au %A %d %B ") + divider + "à " + end_hour.strftime("%H:%M")
)
else:
return mark_safe(
start_date.strftime("Jusqu'au %A %d %B")
)
else:
if start_hour and end_hour:
return mark_safe(
"Aujourd'hui " + divider + start_hour.strftime("de %H:%M") + " à " + end_hour.strftime("%H:%M")
)
elif start_hour:
return mark_safe(
"Aujourd'hui " + divider + start_hour.strftime("à %H:%M")
)
elif end_hour:
return mark_safe(
"Aujourd'hui " + divider + end_hour.strftime("jusqu'à %H:%M")
)
else:
return None
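The first branch of the function above fires only when the start and end fall on the same calendar day. The three-way day/month/year comparison it uses is equivalent to comparing the `date()` parts directly:

```python
import datetime

# Equivalent of the day/month/year triple comparison used above.
def same_calendar_day(d1, d2):
    return d1.date() == d2.date()
```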
# === srfnef/corrections/new_scatter/__init__.py | twj2417/srf | Apache-2.0 ===
from .scatter import ScatterCorrect
# === planning_python/utils/helpers.py | daahuang/planning_python | BSD-3-Clause ===
import numpy as np
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
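rgb2gray above maps an (H, W, 3) RGB array to an (H, W) grayscale array with the ITU-R BT.601 luma weights. Since the weights sum to 1.0, a white pixel keeps full intensity:

```python
import numpy as np

# Usage sketch: applying the same luma weights to a small all-white image.
rgb = np.ones((2, 2, 3))
gray = np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
```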
# === base/admin.py | omololevy/study-college | MIT ===
from django.contrib import admin
from .models import Cohort, Profile, Module, Discussion
admin.site.register(Profile)
admin.site.register(Cohort)
admin.site.register(Module)
admin.site.register(Discussion)
# === patternMatching/algorithms.py | yaukwankiu/armor | CC0-1.0 ===
"""
matching algorithms for pattern2.DataStreamSets
format:
def alg(obs, w, obsTime, maxHourDiff, **kwargs):
input:
obs - observation stream, a pattern.DBZstream object
w - wrf (model) stream, another pattern.DBZstream object
obsTime - time at which obs is compared with the wrfs, e.g. "20140612.0200'
maxHourDiff - the maximal time difference (in hours) between obs and wrfs, e.g. 7 (hours)
kwargs - keyword parameters (a=1, b=2, etc.)
output:
a dict: {'score':score, 'timeShift':timeShift}
where score - a real number, representing the alikeness/degree of matching
timeShift - a number, in hours, the optimal timeShift for the wrf
to do:
combinations of - correlations, adjusted correlations, histogram, moments, other features
(2014-04-15)
"""
#######################
# imports
import os, time, datetime
from datetime import timedelta
import numpy as np
from armor import pattern
dp = pattern.dp # defaultParameters
###########################
# functions
def normalisedCorrelation():
"""
1 May, 2014
"""
pass
def plainCorr(obs, wrf, obsTime, maxHourDiff=7, verbose=False):
a = obs(obsTime)[0] # getting the DBZ object from the observation stream to be compared
T = a.datetime() # getting the time range
maxTime = T + timedelta(maxHourDiff * 1./24)
minTime = T - timedelta(maxHourDiff * 1./24)
maxTime = a.getDataTime(maxTime) # converting it into string format for pattern.DBZ
minTime = a.getDataTime(minTime) # converting it into string format for pattern.DBZ
if verbose:
print "minTime, obsTime, maxTime:", minTime, obsTime, maxTime
scores = []
for w in wrf:
if w.dataTime > maxTime or w.dataTime < minTime:
continue
else:
if a.matrix.var()==0: # test if it is empty
a.load()
if w.matrix.var()==0:
w.load()
score = a.corr(w) # <<<<<<<< key comparison step >>>>>>>>>>>>>>
scores.append( {'time':w.dataTime, 'score': score} )
scores.sort(key=lambda v:v['score'], reverse=True)
topScore = scores[0]['score']
topScoreTime = scores[0]['time']
timeShift = (a.datetime(topScoreTime) - a.datetime()).total_seconds() # the time difference is a datetime.timedelta object
timeShift = 1. * timeShift/3600 # convert it into hours
return {'score':topScore, 'timeShift':timeShift} #score - depending on the algorithm; timeShift - in hours
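The timeShift bookkeeping at the end of plainCorr, in isolation: assuming the 'YYYYMMDD.HHMM' dataTime string format used throughout this module, the hour offset between two frames is the timedelta's total seconds divided by 3600.

```python
import datetime

# Assumed dataTime format: 'YYYYMMDD.HHMM'. Returns (t2 - t1) in hours,
# mirroring the total_seconds()/3600 conversion above.
def hour_shift(t1, t2):
    fmt = "%Y%m%d.%H%M"
    d1 = datetime.datetime.strptime(t1, fmt)
    d2 = datetime.datetime.strptime(t2, fmt)
    return (d2 - d1).total_seconds() / 3600.0
```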
def nonstandardKernel(obs, wrf, regions, shiibaAlg="",
shiibaArgs={}, obsTime="", maxHourDiff=7,
k=24, # number of 10-minute steps to semi-lagrange advect
verbose=False,
outputFolder= dp.defaultLabReportsFolder +'2014-03-07-filter-matching-scoring-pipeline/',
volumevolumeProportionWeight =0.,
**kwargs #just in case
):
"""
"Non-standard Kernel matching":
see ARMOR RFP 2014
inputs:
obs - one observation stream
wrf - one wrf (NWP model) stream
regions - list of regions of interest (listed in descending order of priority?!)
{'name':name, 'points':points, 'weight':weight}
regionalWeights - to get a weighed averge if necessary
shiibaAlg - pointer to the function for shiiba Regression
shiibaArgs- parameters for shiiba regression
maxHourDiff - max temporal difference (in hours) you would consider for the WRF to match the OBS
outputs:
{'score': a list, top weighed average score followed by a list of regional scores
'timeShift': time shift for the top score}
USE:
from armor import defaultParameters as dp
from armor import misc
from armor import pattern, pattern2
from armor.patternMatching import pipeline as pp, algorithms
from armor.filter import filters
hualien4 = misc.getFourCorners(dp.hualienCounty)
yilan4 = misc.getFourCorners(dp.yilanCounty)
kaohsiung4 = misc.getFourCorners(dp.kaohsiungCounty)
regions = [{'name': "hualien", 'points': hualien4, 'weight': 0.5},
{'name': "kaohsiung", 'points':kaohsiung4, 'weight':0.3},
{'name':"yilan", 'points':yilan4, 'weight':0.2},
]
pp.pipeline(filteringAlgorithm = filters.gaussianFilter,
filteringAlgorithmArgs = {'sigma':20},
matchingAlgorithm = algorithms.nonstandardKernel,
matchingAlgorithmArgs = {'obsTime':"20130829.0300", 'maxHourDiff':7, 'regions':regions} ,
outputFolder=dp.defaultRootFolder + "labReports/2014-03-07-filter-matching-scoring-pipeline/",
toLoad=False)
2014-03-11
"""
# codes adapted from plainCorr above
timeString = str(int(time.time()))
miscRemarks = ""
outputFolder += timeString + wrf.name + '/'
print "\n\n\n................................................................."
print "outputFolder:", outputFolder
print "volumevolumeProportionWeight:",volumevolumeProportionWeight
os.makedirs(outputFolder)
print "sleeping 5 seconds", time.sleep(5)
if obsTime == "":
obsTime = obs[0].dataTime
a = obs(obsTime)[0] # getting the DBZ object from the observation stream to be compared
#a.show() #debug
T = a.datetime() # getting the time range
T_string = a.getDataTime(T)
T2 = sorted([v.dataTime for v in obs if v.dataTime>T_string])[0] # the next time, assumed 10 mins apart - or else
# or else need to adjust the k for vect.semiLagrange below
b = obs(T2)[0]
if b.datetime() - a.datetime() > datetime.timedelta(600./86400): # 600 seconds
td = b.datetime() - a.datetime()
miscRemarks += "\nTime difference between %s and %s is " % (b.name, a.name)
miscRemarks += str(td.days) + " days " + str(td.seconds) + "seconds.\n"
#b.debug() #show
if a.matrix.var()==0: # test if it is empty
a.load()
if b.matrix.var()==0:
b.load()
#a.saveImage()
#b.saveImage()
if shiibaAlg == "":
from armor import analysis
shiibaAlg = analysis.shiiba
maxTime = T + timedelta(maxHourDiff * 1./24)
minTime = T - timedelta(maxHourDiff * 1./24)
maxTime = a.getDataTime(maxTime) # converting it into string format for pattern.DBZ
minTime = a.getDataTime(minTime) # converting it into string format for pattern.DBZ
if verbose:
print "minTime, obsTime, maxTime:", minTime, obsTime, maxTime
# check if there's no corresponing wrf for the time
if [v.dataTime for v in wrf if v.dataTime>=minTime and v.dataTime<=maxTime] == []:
return {}
scores = []
# get the ABLER-Shiiba vector field
try:
shiibaResults = a.shiibaResultLocalCopy # need to regress at least once!!!
except AttributeError:
shiibaResults = shiibaAlg(a, b, **shiibaArgs)
a.shiibaResultLocalCopy = shiibaResults
vect = shiibaResults['vect'] + shiibaResults['mn']
a.backupMatrix('good_copy')
a.drawCoast()
for R in regions:
a.drawRectangularHull(R['points'])
a.saveImage(imagePath=outputFolder+ a.name +dp.defaultImageSuffix)
a.restoreMatrix('good_copy')
b.backupMatrix('good_copy')
b.drawCoast()
for R in regions:
b.drawRectangularHull(R['points'])
b.saveImage(imagePath=outputFolder+ b.name +dp.defaultImageSuffix)
b.restoreMatrix('good_copy')
print "a saved to", outputFolder+ a.name +dp.defaultImageSuffix # debug
print "b saved to", outputFolder+ b.name +dp.defaultImageSuffix # debug
vect.saveImage(imagePath=outputFolder+ "abler_vector_field" + dp.defaultImageSuffix)
# looping
a_with_windows = a.copy()
a_with_windows.drawCoast()
for w in wrf:
if w.dataTime > maxTime or w.dataTime < minTime:
continue
else:
if w.matrix.var()==0: # test if it is empty
w.load()
#w.backupMatrix('good_copy')
####################################################################
# matching core
# 1. shiiba regression -> find the vector field
# 2. semi-lagrangian -> find the extended region
# 3. cut out the region in obs
# 4. match the appropriate region in wrf
regionalScores = []
for R0 in regions:
name = R0['name']
points = R0['points']
weight = R0['weight']
# extract the "nonstandard kernel" as a1
points1 = vect.semiLagrange(L=points, k=k, direction=-1, verbose=verbose) # back advection
points2 = points + points1
iMax = int(max(v[0] for v in points2))
iMin = int(min(v[0] for v in points2))
jMax = int(max(v[1] for v in points2))
jMin = int(min(v[1] for v in points2))
height = iMax-iMin
width = jMax-jMin
a1 = a.getWindow(iMin, jMin, height, width)
a1.name = a.name + '_' + name
a1.imagePath = outputFolder + a1.name + dp.defaultImageSuffix # suffix = ".png"
a1.saveImage(imagePath=a1.imagePath)
a_with_windows.drawRectangle(iMin, jMin, height, width, newObject=False)
# match a1 with a similar rectangle on the wrf, scoring by correlation
                # we shift the kernel by 1/10 of its width/height
# 4 times left, right, up and down respectively
iStep = int(height//10 + 1)
jStep = int(width//10 + 1)
print "points (corners for the region):", points #debug
print "iStep, jStep", iStep, jStep #debug
score = 0
shift = (-999,-999) #initialise
for i in range(-4*iStep, 4*iStep+1, iStep):
for j in range(-4*jStep, 4*jStep+1, jStep):
#w.restoreMatrix('good_copy')
w1 = w.getWindow(iMin+i, jMin+j, height, width)
tempScore = a1.corr(w1) # <<<<<<<< key comparison step >>>>>>>>>>>>>>
# adding a step to compare the relative volume, 2014-03-28
proportion = a1.matrix.sum() / w1.matrix.sum()
if proportion > 1:
proportion = 1./proportion
#diffLog = abs(np.log(a1.matrix.sum()) - np.log(w1.matrix.sum()))
#tempScore = a1.cov(w1)[0,1]
# use straight corr for now, will convert to shiiba or normalised corr later
# or can use covariance rather than correlation
tempScore = tempScore*(1-volumevolumeProportionWeight ) + proportion*volumevolumeProportionWeight
if score < tempScore:
score = tempScore # get the highest
#scoreTime = w.dataTime
shift = (i,j) # this info is probably not needed
regionalScores.append({'name' : name, # name of the region
'score' : score,
'shift' : shift,
'weight' : weight,
'upWindRegion': (iMin, jMin, height, width),
})
            # compute weighted average over regions
averageScore = np.sum([v['score']*v['weight'] for v in regionalScores])
#
#
#####################################################################
scores.append( {'time':w.dataTime, 'score': averageScore, 'regionalScores': regionalScores} )
scores.sort(key=lambda v:v['score'], reverse=True)
topScore = scores[0]['score']
topScoreTime = scores[0]['time']
topScoresRegional = scores[0]['regionalScores'] # actually regional scores for the top score
timeShift = (a.datetime(topScoreTime) - a.datetime()).total_seconds() # the time difference is a datetime.timedelta object
timeShift = 1. * timeShift/3600 # convert it into hours
# saving images
w = wrf(topScoreTime)[0].copy() # temp image object
#########
# 2014-06-26
for R0 in regions:
print "extracting window for", R0['name']
name = R0['name']
points = R0['points']
#weight = R0['weight']
# extract the "nonstandard kernel" as a1
iMax = int(max(v[0] for v in points))
iMin = int(min(v[0] for v in points))
jMax = int(max(v[1] for v in points))
jMin = int(min(v[1] for v in points))
height = iMax-iMin
width = jMax-jMin
print "iMin, jMin=", iMin, jMin
print "topScoresRegional:",topScoresRegional #debug
upWindRegion = [v['upWindRegion'] for v in topScoresRegional if v['name']==name][0]
print "upWindRegion:", upWindRegion
w1 = w.getWindow(*upWindRegion)
w1.name = w.name + '_' + name + " Upwind Region"
w1.imagePath = outputFolder + w.name + "_window_" + name + "_upWindRegion"+ dp.defaultImageSuffix # suffix = ".png"
#print w1.imagePath #debug
#w1.show() #debug
w1.saveImage(imagePath=w1.imagePath)
#
#########
w.coastDataPath=obs[0].coastDataPath
w.drawCoast()
a_frames = (a_with_windows.matrix > 999) # hack, getting the window frames for w
w.matrix += a_frames * 9999 # hack, getting the window frames for w
w.saveImage(imagePath=outputFolder+w.name+dp.defaultImageSuffix)
a_with_windows.saveImage(imagePath=outputFolder+ a.name + "_with_windows" + dp.defaultImageSuffix)
return {'score':topScore, 'timeShift':timeShift, 'topScoresRegional': topScoresRegional,
'Remarks': "'topScoresRegional' stands for regional scores for the top score",
'miscRemarks': miscRemarks,
} #score - depending on the algorithm; timeShift - in hours
def shiftedCorr(obs, wrf, regions="", obsTime="", maxHourDiff=7, maxLatDiff=4, maxLongDiff=6,
shiftStep = 2, #2014-06-25
verbose=False,
outputFolder= dp.defaultLabLogsFolder ,
volumevolumeProportionWeight =0.,
**kwargs #just in case
):
"""
adapted from nonStandardKernel() above
2014-06-24
first applied to 20140312.1100 etc
maxLatDiff / maxLongDiff: maximal latitudinal / longitudinal difference between obs frame and wrf frame
"""
timeString = str(int(time.time()))
miscRemarks = ""
outputFolder += timeString + wrf.name + '/'
print "\n\n\n................................................................."
print "outputFolder:", outputFolder
print "volumevolumeProportionWeight:",volumevolumeProportionWeight
os.makedirs(outputFolder)
print "sleeping .5 second", time.sleep(.5)
if obsTime == "":
obsTime = obs[0].dataTime
a = obs(obsTime)[0] # getting the DBZ object from the observation stream to be compared
#a.show() #debug
if regions == "":
regions = [(0, 0, a.matrix.shape[0], a.matrix.shape[1])] # a list of one region consisting of the full array, if none given
T = a.datetime() # getting the time range
T_string = a.getDataTime(T)
T2 = sorted([v.dataTime for v in obs if v.dataTime>T_string])[0] # the next time, assumed 10 mins apart - or else
# or else need to adjust the k for vect.semiLagrange below
b = obs(T2)[0]
if b.datetime() - a.datetime() > datetime.timedelta(600./86400): # 600 seconds
td = b.datetime() - a.datetime()
miscRemarks += "\nTime difference between %s and %s is " % (b.name, a.name)
miscRemarks += str(td.days) + " days " + str(td.seconds) + "seconds.\n"
#b.debug() #show
if a.matrix.var()==0: # test if it is empty
a.load()
if b.matrix.var()==0:
b.load()
#a.saveImage()
#b.saveImage()
maxTime = T + timedelta(maxHourDiff * 1./24)
minTime = T - timedelta(maxHourDiff * 1./24)
maxTime = a.getDataTime(maxTime) # converting it into string format for pattern.DBZ
minTime = a.getDataTime(minTime) # converting it into string format for pattern.DBZ
if verbose:
print "minTime, obsTime, maxTime:", minTime, obsTime, maxTime
    # check if there's no corresponding wrf for the time
if [v.dataTime for v in wrf if v.dataTime>=minTime and v.dataTime<=maxTime] == []:
return {}
scores = []
a.backupMatrix('good_copy')
a.drawCoast()
for R in regions:
a.drawRectangularHull(R['points'])
a.saveImage(imagePath=outputFolder+ a.name +dp.defaultImageSuffix)
a.restoreMatrix('good_copy')
b.backupMatrix('good_copy')
b.drawCoast()
for R in regions:
b.drawRectangularHull(R['points'])
b.saveImage(imagePath=outputFolder+ b.name +dp.defaultImageSuffix)
b.restoreMatrix('good_copy')
print "a saved to", outputFolder+ a.name +dp.defaultImageSuffix # debug
print "b saved to", outputFolder+ b.name +dp.defaultImageSuffix # debug
# looping
a_with_windows = a.copy()
a_with_windows.drawCoast()
for w in wrf:
if w.dataTime > maxTime or w.dataTime < minTime:
continue
else:
if w.matrix.var()==0: # test if it is empty
w.load()
#w.backupMatrix('good_copy')
####################################################################
# matching core
# 1. shiiba regression -> find the vector field
# 2. semi-lagrangian -> find the extended region
# 3. cut out the region in obs
# 4. match the appropriate region in wrf
regionalScores = []
for R0 in regions:
name = R0['name']
points = R0['points']
weight = R0['weight']
# extract the "nonstandard kernel" as a1
iMax = int(max(v[0] for v in points))
iMin = int(min(v[0] for v in points))
jMax = int(max(v[1] for v in points))
jMin = int(min(v[1] for v in points))
height = iMax-iMin
width = jMax-jMin
a1 = a.getWindow(iMin, jMin, height, width)
a1.name = a.name + '_' + name
a1.imagePath = outputFolder + a1.name + dp.defaultImageSuffix # suffix = ".png"
a1.saveImage(imagePath=a1.imagePath)
a_with_windows.drawRectangle(iMin, jMin, height, width, newObject=False)
# match a1 with a similar rectangle on the wrf, scoring by correlation
                # we shift the kernel by shiftStep pixels at a time,
                # up to maxLatDiff / maxLongDiff in each direction
iStep = shiftStep
jStep = shiftStep
print "points (corners for the region):", points #debug
#print "iStep, jStep", iStep, jStep #debug
score = 0
shift = (-999,-999) #initialise
for i in range(-maxLatDiff, maxLatDiff+1, iStep):
for j in range(-maxLongDiff, maxLongDiff+1, jStep):
#w.restoreMatrix('good_copy')
w1 = w.getWindow(iMin+i, jMin+j, height, width)
tempScore = a1.corr(w1) # <<<<<<<< key comparison step >>>>>>>>>>>>>>
# adding a step to compare the relative volume, 2014-03-28
proportion = abs(np.log(a1.matrix.sum() / w1.matrix.sum()))
#diffLog = abs(np.log(a1.matrix.sum()) - np.log(w1.matrix.sum()))
#tempScore = a1.cov(w1)[0,1]
# use straight corr for now, will convert to shiiba or normalised corr later
# or can use covariance rather than correlation
tempScore = tempScore*(1-volumevolumeProportionWeight ) + proportion*volumevolumeProportionWeight
if score < tempScore:
score = tempScore # get the highest
#scoreTime = w.dataTime
shift = (i,j) # this info is probably not needed
regionalScores.append({'name' : name, # name of the region
'score' : score,
'shift' : shift,
'weight' : weight,
})
            # compute weighted average over regions
averageScore = np.sum([v['score']*v['weight'] for v in regionalScores])
#
#
#####################################################################
scores.append( {'time':w.dataTime, 'score': averageScore, 'regionalScores': regionalScores} )
scores.sort(key=lambda v:v['score'], reverse=True)
topScore = scores[0]['score']
topScoreTime = scores[0]['time']
topScoresRegional = scores[0]['regionalScores'] # actually regional scores for the top score
timeShift = (a.datetime(topScoreTime) - a.datetime()).total_seconds() # the time difference is a datetime.timedelta object
timeShift = 1. * timeShift/3600 # convert it into hours
# saving images
w = wrf(topScoreTime)[0].copy() # temp image object
#########
# 2014-06-26
for R0 in regions:
print "extracting window for", R0['name']
name = R0['name']
points = R0['points']
#weight = R0['weight']
# extract the "nonstandard kernel" as a1
iMax = int(max(v[0] for v in points))
iMin = int(min(v[0] for v in points))
jMax = int(max(v[1] for v in points))
jMin = int(min(v[1] for v in points))
height = iMax-iMin
width = jMax-jMin
print "iMin, jMin=", iMin, jMin
print "topScoresRegional:",topScoresRegional #debug
iShift, jShift = [v['shift'] for v in topScoresRegional if v['name']==name][0]
iMin += iShift
jMin += jShift
print "iShift, jShift=", iShift, jShift
w1 = w.getWindow(iMin, jMin, height, width)
w1.name = w.name + '_' + name + " with shift: (x, y) = " + str((jShift, iShift))
w1.imagePath = outputFolder + w.name + "_window_" + name + "_with_shift"+ dp.defaultImageSuffix # suffix = ".png"
#print w1.imagePath #debug
#w1.show() #debug
w1.saveImage(imagePath=w1.imagePath)
#
#########
try:
w.coastDataPath=obs[0].coastDataPath
w.drawCoast()
    except Exception:
        print "can't draw coast for ", w.name
a_frames = (a_with_windows.matrix > 999) # hack, getting the window frames for w
w.matrix += a_frames * 9999 # hack, getting the window frames for w
w.saveImage(imagePath=outputFolder+w.name+dp.defaultImageSuffix)
a_with_windows.saveImage(imagePath=outputFolder+ a.name + "_with_windows" + dp.defaultImageSuffix)
return {'score':topScore, 'timeShift':timeShift, 'topScoresRegional': topScoresRegional,
'Remarks': "'topScoresRegional' stands for regional scores for the top score",
'miscRemarks': miscRemarks,
} #score - depending on the algorithm; timeShift - in hours
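The matching core in both functions above scores each shifted window by blending correlation with a volume term (`tempScore*(1-volumevolumeProportionWeight) + proportion*volumevolumeProportionWeight`). A self-contained sketch of that blend on plain NumPy arrays — the arrays and the 0.2 weight are illustrative values, not taken from this module:

```python
import numpy as np

def blended_score(a1, w1, volume_weight=0.2):
    # Pearson correlation between the obs window and the shifted wrf window.
    corr = np.corrcoef(a1.ravel(), w1.ravel())[0, 1]
    # Relative volume, folded into (0, 1] so over- and under-estimates
    # are penalised symmetrically, as in nonStandardKernel above.
    proportion = a1.sum() / w1.sum()
    if proportion > 1:
        proportion = 1.0 / proportion
    # Weighted blend of shape similarity and volume similarity.
    return corr * (1 - volume_weight) + proportion * volume_weight

a1 = np.array([[1.0, 2.0], [3.0, 4.0]])
w1 = np.array([[1.0, 2.0], [3.0, 5.0]])
print(round(blended_score(a1, w1), 3))
```

An identical pair of windows scores 1.0; the blend degrades smoothly as either the shape or the total volume drifts apart.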
# --- Python/PythonCrashCourse2ndEdition/simple_messages.py (awakun/LearningPython) ---
message = 'Take me to your leader.'
print(message)
message = 'ACK ACK ACK ACK ACK'
print(message)

# --- puckdb/exceptions.py (metahockey/aaront_puckdb) ---
class FilterException(Exception):
def __init__(self, message=None):
self.message = message
def __str__(self):
return 'Invalid filter{message}'.format(message=': ' + self.message if self.message else '')
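A quick usage sketch of the exception above — the `__str__` formatting yields either a bare or a detailed description depending on whether a message was supplied:

```python
class FilterException(Exception):
    def __init__(self, message=None):
        self.message = message

    def __str__(self):
        return 'Invalid filter{message}'.format(
            message=': ' + self.message if self.message else '')

print(str(FilterException()))          # Invalid filter
print(str(FilterException("bad op")))  # Invalid filter: bad op
```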
# --- pathvalidate/handler.py (thombashi/pathvalidate) ---
"""
.. codeauthor:: Tsuyoshi Hombashi <tsuyoshi.hombashi@gmail.com>
"""
from datetime import datetime
from typing import Callable
from .error import ValidationError
Handler = Callable[[ValidationError], str]
def return_null_string(e: ValidationError) -> str:
return ""
def return_timestamp(e: ValidationError) -> str:
return str(datetime.now().timestamp())
def raise_error(e: ValidationError) -> str:
raise e
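A sketch of how a caller might plug one of these handlers into a validation path. `ValidationError` is re-declared here as a stand-in so the snippet runs standalone, and `check_name` is a hypothetical caller, not part of pathvalidate:

```python
class ValidationError(Exception):
    """Stand-in for pathvalidate's ValidationError (illustration only)."""

def return_null_string(e):
    return ""

def check_name(name, handler):
    # Hypothetical validation: reject names containing a slash and let the
    # supplied handler decide what to return instead of propagating.
    try:
        if "/" in name:
            raise ValidationError("invalid character: '/'")
        return name
    except ValidationError as e:
        return handler(e)

print(check_name("report.txt", return_null_string))  # report.txt
print(check_name("a/b.txt", return_null_string))     # (empty string)
```

Swapping in `raise_error` instead of `return_null_string` would propagate the exception to the caller.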
# --- attackgraph/training.py (wyz2368/deepRL) ---
from attackgraph import json_op as jp
from baselines.common import models
from baselines.deepq.deepq import learn_multi_nets, Learner
from baselines.common.tf_util import ALREADY_INITIALIZED
import os
# import copy
DIR_def = os.getcwd() + '/defender_strategies/'
DIR_att = os.getcwd() + '/attacker_strategies/'
def training_att(game, mix_str_def, epoch, retrain = False):
if len(mix_str_def) != len(game.def_str):
raise ValueError("The length of mix_str_def and def_str does not match while training")
# env = copy.deepcopy(game.env)
print("training_att mix_str_def is ", mix_str_def)
ALREADY_INITIALIZED.clear()
env = game.env
env.reset_everything()
env.set_training_flag(1)
env.defender.set_mix_strategy(mix_str_def)
env.defender.set_str_set(game.def_str)
param_path = os.getcwd() + '/network_parameters/param.json'
param = jp.load_json_data(param_path)
if retrain:
scope = 'att_str_retrain' + str(0) + '.pkl' + '/'
else:
scope = 'att_str_epoch' + str(epoch) + '.pkl' + '/'
learner = Learner()
with learner.graph.as_default():
with learner.sess.as_default():
act_att, a_BD = learner.learn_multi_nets(
env,
network = models.mlp(num_hidden=param['num_hidden'], num_layers=param['num_layers']),
lr =param['lr'],
total_timesteps=param['total_timesteps_att'],
exploration_fraction=param['exploration_fraction_att'],
exploration_final_eps=param['exploration_final_eps'],
print_freq=param['print_freq'],
param_noise=param['param_noise'],
gamma=param['gamma'],
prioritized_replay=param['prioritized_replay'],
checkpoint_freq=param['checkpoint_freq'],
scope = scope,
epoch = epoch
)
print("Saving attacker's model to pickle.")
if retrain:
act_att.save(os.getcwd() + '/retrain_att/' + 'att_str_retrain' + str(0) + '.pkl', 'att_str_retrain' + str(0) + '.pkl' + '/')
else:
act_att.save(DIR_att + "att_str_epoch" + str(epoch) + ".pkl", 'att_str_epoch' + str(epoch) + '.pkl' + '/')
learner.sess.close()
return a_BD
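training_att fixes the defender to a mixed strategy over its stored policies. The core mechanic behind set_mix_strategy — sampling one pure strategy per episode with probability given by the mixture weights — can be sketched as follows (names and weights here are illustrative, not the environment's actual API):

```python
import numpy as np

_rng = np.random.default_rng(0)

def sample_pure_strategy(strategy_set, mix):
    # Draw one pure strategy index according to the mixture probabilities.
    idx = _rng.choice(len(strategy_set), p=mix)
    return strategy_set[idx]

strategies = ["def_str_epoch1.pkl", "def_str_epoch2.pkl", "def_str_epoch3.pkl"]
mix = [0.2, 0.5, 0.3]
picks = [sample_pure_strategy(strategies, mix) for _ in range(1000)]
print(picks.count("def_str_epoch2.pkl") / 1000.0)  # roughly 0.5
```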
def training_def(game, mix_str_att, epoch, retrain = False):
if len(mix_str_att) != len(game.att_str):
        raise ValueError("The length of mix_str_att and att_str does not match while training")
print("training_def mix_str_att is ", mix_str_att)
ALREADY_INITIALIZED.clear()
# env = copy.deepcopy(game.env)
env = game.env
env.reset_everything()
env.set_training_flag(0)
env.attacker.set_mix_strategy(mix_str_att)
env.attacker.set_str_set(game.att_str)
param_path = os.getcwd() + '/network_parameters/param.json'
param = jp.load_json_data(param_path)
if retrain:
scope = 'def_str_retrain' + str(0) + '.pkl' + '/'
else:
scope = 'def_str_epoch' + str(epoch) + '.pkl' + '/'
learner = Learner()
with learner.graph.as_default():
with learner.sess.as_default():
act_def, d_BD = learner.learn_multi_nets(
env,
network=models.mlp(num_hidden=param['num_hidden'], num_layers=param['num_layers']),
lr=param['lr'],
total_timesteps=param['total_timesteps_def'],
exploration_fraction=param['exploration_fraction_def'],
exploration_final_eps=param['exploration_final_eps'],
print_freq=param['print_freq'],
param_noise=param['param_noise'],
gamma=param['gamma'],
prioritized_replay=param['prioritized_replay'],
checkpoint_freq=param['checkpoint_freq'],
scope = scope,
epoch=epoch
)
print("Saving defender's model to pickle.")
if retrain:
act_def.save(os.getcwd() + '/retrain_def/' + 'def_str_retrain' + str(0) + '.pkl', 'def_str_retrain' + str(0) + '.pkl' + '/')
else:
act_def.save(DIR_def + "def_str_epoch" + str(epoch) + ".pkl", "def_str_epoch" + str(epoch) + '.pkl' + '/')
learner.sess.close()
return d_BD
# for all strategies learned by retraining, the scope index is 0.
def training_hado_att(game):
param = game.param
mix_str_def = game.hado_str(identity=0, param=param)
if len(mix_str_def) != len(game.def_str):
raise ValueError("The length of mix_str_def and def_str does not match while retraining")
# env = copy.deepcopy(game.env)
env = game.env
env.reset_everything()
env.set_training_flag(1)
env.defender.set_mix_strategy(mix_str_def)
env.defender.set_str_set(game.def_str)
param_path = os.getcwd() + '/network_parameters/param.json'
param = jp.load_json_data(param_path)
learner = Learner(retrain=True, freq=param['retrain_freq'])
# TODO: add epoch???
with learner.graph.as_default():
with learner.sess.as_default():
act_att, _ = learner.learn_multi_nets(
env,
network = models.mlp(num_hidden=param['num_hidden'], num_layers=param['num_layers']),
lr =param['lr'],
total_timesteps=param['retrain_timesteps'],
exploration_fraction=param['exploration_fraction'],
exploration_final_eps=param['exploration_final_eps'],
print_freq=param['print_freq'],
param_noise=param['param_noise'],
gamma=param['gamma'],
prioritized_replay=param['prioritized_replay'],
checkpoint_freq=param['checkpoint_freq'],
scope = 'att_str_retrain' + str(0) + '.pkl' + '/',
load_path=os.getcwd() + '/retrain_att/' + 'att_str_retrain' + str(0) + '.pkl'
)
# print("Saving attacker's model to pickle.")
# act_att.save(os.getcwd() + '/retrain_att/' + 'att_str_retrain' + str(epoch) + ".pkl", 'att_str_epoch' + str(epoch) + '.pkl' + '/')
learner.sess.close()
def training_hado_def(game):
param = game.param
mix_str_att = game.hado_str(identity=1, param=param)
if len(mix_str_att) != len(game.att_str):
        raise ValueError("The length of mix_str_att and att_str does not match while retraining")
# env = copy.deepcopy(game.env)
env = game.env
env.reset_everything()
env.set_training_flag(0)
env.attacker.set_mix_strategy(mix_str_att)
env.attacker.set_str_set(game.att_str)
param_path = os.getcwd() + '/network_parameters/param.json'
param = jp.load_json_data(param_path)
learner = Learner(retrain=True, freq=param['retrain_freq'])
with learner.graph.as_default():
with learner.sess.as_default():
act_def, _ = learner.learn_multi_nets(
env,
network=models.mlp(num_hidden=param['num_hidden'], num_layers=param['num_layers']),
lr=param['lr'],
total_timesteps=param['retrain_timesteps'],
exploration_fraction=param['exploration_fraction'],
exploration_final_eps=param['exploration_final_eps'],
print_freq=param['print_freq'],
param_noise=param['param_noise'],
gamma=param['gamma'],
prioritized_replay=param['prioritized_replay'],
checkpoint_freq=param['checkpoint_freq'],
scope = 'def_str_retrain' + str(0) + '.pkl' + '/',
load_path = os.getcwd() + '/retrain_def/' + 'def_str_retrain' + str(0) + '.pkl'
)
# print("Saving defender's model to pickle.")
# act_def.save(os.getcwd() + '/retrain_def/' + 'def_str_retrain' + str(epoch) + ".pkl", "def_str_epoch" + str(epoch) + '.pkl' + '/')
    learner.sess.close()

# --- processing/chapter3/sketch_3_3_L3/sketch_3_3_L3.pyde (brickdonut/2019-fall-polytech-cs) ---
def setup():
size(500, 500)
smooth()
background(255)
noLoop()
fill(50, 80)
stroke(100)
strokeWeight(3)
def draw():
ellipse(250,200,100,100)
ellipse(250-50,250,100,100)
ellipse(250+50,250,100,100)
ellipse(250,250+50,100,100)
# --- tests/drop_packets/fanout/fanout_base.py (shubav/sonic-mgmt) ---
from abc import ABCMeta, abstractmethod
class BaseFanoutHandler(object):
__metaclass__ = ABCMeta
def __init__(self):
pass
@abstractmethod
def update_config(self):
pass
@abstractmethod
def restore_config(self):
pass
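A concrete handler sketch built on the abstract interface above; the subclass name and return values are invented for illustration. Note `__metaclass__` is the Python 2 spelling — under Python 3 the equivalent is `metaclass=ABCMeta` in the class header, as used here:

```python
from abc import ABCMeta, abstractmethod

class BaseFanoutHandler(object, metaclass=ABCMeta):
    @abstractmethod
    def update_config(self):
        pass

    @abstractmethod
    def restore_config(self):
        pass

class DummyFanoutHandler(BaseFanoutHandler):  # hypothetical concrete handler
    def update_config(self):
        return "config updated"

    def restore_config(self):
        return "config restored"

handler = DummyFanoutHandler()
print(handler.update_config())
print(handler.restore_config())
```

Instantiating `BaseFanoutHandler` directly raises `TypeError` until both abstract methods are overridden.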
| 17.733333 | 39 | 0.669173 | 26 | 266 | 6.461538 | 0.615385 | 0.142857 | 0.261905 | 0.297619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.270677 | 266 | 14 | 40 | 19 | 0.865979 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.272727 | 0.090909 | 0 | 0.545455 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
# --- affpose/ARLAffPose/utils/pose/load_camera_pose.py (UW-Advanced-Robotics-Lab/densefusion) ---
import yaml
import numpy as np
#######################################
#######################################
def load_camera_pose(posegraph_addr):
    return np.loadtxt(posegraph_addr, dtype=np.float32)[:, 1:]  # we exclude the timestamp
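A round-trip sketch of the loader above. The 8-column `timestamp x y z qx qy qz qw` row layout is an assumption for illustration only; the loader simply drops column 0 whatever the layout:

```python
import numpy as np

# Two hypothetical pose rows: a timestamp followed by 7 pose values.
rows = np.array([[0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 1.0],
                 [0.1, 1.5, 2.5, 3.5, 0.0, 0.0, 0.0, 1.0]], dtype=np.float32)
np.savetxt("pose_example.txt", rows)

# Same call shape as load_camera_pose: drop column 0 (the timestamp).
pose = np.loadtxt("pose_example.txt", dtype=np.float32)[:, 1:]
print(pose.shape)  # (2, 7)
```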
# --- tests/unit/worker/test_docker_api.py (mateuspontes/fastlane) ---
# Standard Library
from json import dumps
# 3rd Party
from preggy import expect
# Fastlane
from fastlane.worker.docker.api import validate_hostname
def test_validate_hostname1():
expect(validate_hostname("example.com")).to_be_false()
def test_validate_hostname2():
expect(validate_hostname("example.com/")).to_be_false()
def test_validate_hostname3():
expect(validate_hostname("example.com:abcd")).to_be_false()
def test_validate_hostname4():
expect(validate_hostname("example.com:1234")).to_be_true()
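The four tests above pin down the expected behaviour: a host is accepted only with an explicit numeric port. A sketch of a validator consistent with those expectations — not fastlane's actual implementation:

```python
import re

def validate_hostname(hostname):
    # Accept "name:port" where port is all digits; reject anything else.
    return bool(re.fullmatch(r"[A-Za-z0-9.\-]+:\d+", hostname))

print(validate_hostname("example.com"))       # False
print(validate_hostname("example.com/"))      # False
print(validate_hostname("example.com:abcd"))  # False
print(validate_hostname("example.com:1234"))  # True
```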
def test_add_to_blacklist1(client):
"""Test adding to blacklist without payload returns 400"""
with client.application.app_context():
resp = client.post(
f"/docker-executor/blacklist"
)
expect(resp.status_code).to_equal(400)
def test_add_to_blacklist2(client):
"""Test adding to blacklist without host on payload returns 400"""
with client.application.app_context():
resp = client.post(
f"/docker-executor/blacklist",
data=dumps({
"myparam": "abc"
})
)
expect(resp.status_code).to_equal(400)
def test_add_to_blacklist3(client):
    """Test adding to blacklist with invalid host on payload returns 400"""
with client.application.app_context():
resp = client.post(
f"/docker-executor/blacklist",
data=dumps({
"host": "example.com"
})
)
expect(resp.status_code).to_equal(400)
def test_add_to_blacklist4(client):
"""Test adding to blacklist with valid host on payload returns 200"""
with client.application.app_context():
resp = client.post(
f"/docker-executor/blacklist",
data=dumps({
"host": "example.com:1234"
})
)
expect(resp.status_code).to_equal(200)
def test_remove_from_blacklist1(client):
    """Test removing from blacklist without payload returns 400"""
with client.application.app_context():
resp = client.delete(
f"/docker-executor/blacklist"
)
expect(resp.status_code).to_equal(400)
def test_remove_from_blacklist2(client):
    """Test removing from blacklist without host on payload returns 400"""
with client.application.app_context():
resp = client.delete(
f"/docker-executor/blacklist",
data=dumps({
"myparam": "abc"
})
)
expect(resp.status_code).to_equal(400)
def test_remove_from_blacklist3(client):
    """Test removing from blacklist with invalid host on payload returns 400"""
with client.application.app_context():
resp = client.delete(
f"/docker-executor/blacklist",
data=dumps({
"host": "example.com"
})
)
expect(resp.status_code).to_equal(400)
def test_remove_from_blacklist4(client):
    """Test removing from blacklist with valid host on payload returns 200"""
with client.application.app_context():
resp = client.delete(
f"/docker-executor/blacklist",
data=dumps({
"host": "example.com:1234"
})
)
expect(resp.status_code).to_equal(200)
# --- tests/bam_fixtures.py (holtgrewe/pyhtslib) ---
#!/usr/bin/env python
"""Fixture files for the pyhtslib.bam tests"""
import os
import py
import pytest
import pyhtslib.bam_internal as bam_internal
import pyhtslib.hts_internal as hts_internal
__author__ = 'Manuel Holtgrewe <manuel.holtgrewe@bihealth.de>'
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.yield_fixture
def header_only_sam(tmpdir):
"""Copy the header_only.sam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'header_only.sam')
dst = tmpdir.join('header_only.sam')
src.copy(dst)
yield dst
dst.remove()
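Each fixture above follows the same copy / yield / remove lifecycle. Stripped of pytest, the pattern is an ordinary context manager — the path and file contents here are placeholders, not the real fixture files:

```python
import os
from contextlib import contextmanager

@contextmanager
def temp_copy(text, dst="header_only_demo.sam"):
    # setup: materialise the file, like src.copy(dst) in the fixture
    with open(dst, "w") as f:
        f.write(text)
    try:
        yield dst          # the test body runs while we are suspended here
    finally:
        os.remove(dst)     # teardown, like dst.remove()

with temp_copy("@HD\tVN:1.6\n") as path:
    existed_inside = os.path.exists(path)
print(existed_inside, os.path.exists(path))  # True False
```

pytest's yield fixtures wrap exactly this shape: code before the `yield` is setup, code after it is teardown.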
@pytest.yield_fixture
def header_only_sam_header(header_only_sam):
hts_file = hts_internal._hts_open(
str(header_only_sam).encode('utf-8'), 'r')
hdr = bam_internal._sam_hdr_read(hts_file)
yield hdr
bam_internal._bam_hdr_destroy(hdr)
hts_internal._hts_close(hts_file)
@pytest.yield_fixture
def header_only_sam_gz(tmpdir):
"""Copy the header_only.sam.gz file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'header_only.sam.gz')
dst = tmpdir.join('header_only.sam.gz')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def header_only_sam_gz_header(header_only_sam_gz):
hts_file = hts_internal._hts_open(
str(header_only_sam_gz).encode('utf-8'), 'r')
hdr = bam_internal._sam_hdr_read(hts_file)
yield hdr
bam_internal._bam_hdr_destroy(hdr)
hts_internal._hts_close(hts_file)
@pytest.yield_fixture
def header_only_bam(tmpdir):
"""Copy the header_only.bam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'header_only.bam')
dst = tmpdir.join('header_only.bam')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def header_only_bai(tmpdir):
"""Copy the header_only.bam.bai file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'header_only.bam.bai')
dst = tmpdir.join('header_only.bam.bai')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def six_records_sam(tmpdir):
"""Copy the six_records.sam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'six_records.sam')
dst = tmpdir.join('six_records.sam')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def six_records_sam_header(six_records_sam):
hts_file = hts_internal._hts_open(
str(six_records_sam).encode('utf-8'), 'r')
hdr = bam_internal._sam_hdr_read(hts_file)
yield hdr
bam_internal._bam_hdr_destroy(hdr)
hts_internal._hts_close(hts_file)
@pytest.yield_fixture
def six_records_sam_gz(tmpdir):
"""Copy the six_records.sam.gz file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'six_records.sam.gz')
dst = tmpdir.join('six_records.sam.gz')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def six_records_sam_gz_header(six_records_sam_gz):
hts_file = hts_internal._hts_open(
str(six_records_sam_gz).encode('utf-8'), 'r')
hdr = bam_internal._sam_hdr_read(hts_file)
yield hdr
bam_internal._bam_hdr_destroy(hdr)
hts_internal._hts_close(hts_file)
@pytest.yield_fixture
def six_records_bam(tmpdir):
"""Copy the six_records.bam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'six_records.bam')
dst = tmpdir.join('six_records.bam')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def six_records_bai(tmpdir):
"""Copy the six_records.bam.bai file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'six_records.bam.bai')
dst = tmpdir.join('six_records.bam.bai')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def two_hundred_sam(tmpdir):
"""Copy the two_hundred.sam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'two_hundred.sam')
dst = tmpdir.join('two_hundred.sam')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def two_hundred_sam_gz(tmpdir):
"""Copy the two_hundred.sam.gz file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'two_hundred.sam.gz')
dst = tmpdir.join('two_hundred.sam.gz')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def two_hundred_tbi(tmpdir):
"""Copy the two_hundred.sam.gz.tbi file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'two_hundred.sam.gz.tbi')
dst = tmpdir.join('two_hundred.sam.gz.tbi')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def two_hundred_bam(tmpdir):
"""Copy the two_hundred.bam file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'two_hundred.bam')
dst = tmpdir.join('two_hundred.bam')
src.copy(dst)
yield dst
dst.remove()
@pytest.yield_fixture
def two_hundred_bai(tmpdir):
"""Copy the two_hundred.bam.bai file to temporary directory."""
src = py.path.local(os.path.dirname(__file__)).join(
'files', 'two_hundred.bam.bai')
dst = tmpdir.join('two_hundred.bam.bai')
src.copy(dst)
yield dst
dst.remove()
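The fourteen fixtures above are near-identical copy/yield/remove blocks (and `pytest.yield_fixture` has since been deprecated in favor of plain `pytest.fixture`). Below is a stdlib-only sketch of a factory that could generate them; `src_root` is a hypothetical stand-in for `os.path.dirname(__file__)`, and the pytest decoration is omitted so the sketch stays self-contained:

```python
import os
import shutil
import tempfile


def make_copy_fixture(name, src_root):
    """Return a callable that copies <src_root>/files/<name> into a
    fresh temporary directory and returns the destination path.

    Sketch only: a pytest version would decorate the inner function
    with @pytest.fixture and yield dst instead of returning it, and
    could rely on tmpdir's own cleanup instead of dst.remove().
    """
    def fixture():
        tmpdir = tempfile.mkdtemp()
        src = os.path.join(src_root, 'files', name)
        dst = os.path.join(tmpdir, name)
        shutil.copy(src, dst)
        return dst
    return fixture
```

Each of the copy fixtures above (`header_only_sam`, `six_records_bam`, `two_hundred_tbi`, ...) would then collapse to a single `make_copy_fixture(...)` call.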
| 28.271357 | 77 | 0.673658 | 819 | 5,626 | 4.326007 | 0.072039 | 0.062094 | 0.086367 | 0.100762 | 0.917584 | 0.91081 | 0.767993 | 0.729608 | 0.723963 | 0.711826 | 0 | 0.000855 | 0.16797 | 5,626 | 198 | 78 | 28.414141 | 0.756035 | 0.169214 | 0 | 0.626761 | 0 | 0 | 0.126522 | 0.016087 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119718 | false | 0 | 0.035211 | 0 | 0.15493 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65233b4c67cf933bda7ee7abb25c6d0fc82af331 | 48 | py | Python | testsuite/modulegraph-dir/renamed_b.py | xoviat/modulegraph2 | 766d00bdb40e5b2fe206b53a87b1bce3f9dc9c2a | [
"MIT"
] | 9 | 2020-03-22T14:48:01.000Z | 2021-05-30T12:18:12.000Z | testsuite/modulegraph-dir/renamed_b.py | xoviat/modulegraph2 | 766d00bdb40e5b2fe206b53a87b1bce3f9dc9c2a | [
"MIT"
] | 15 | 2020-01-06T10:02:32.000Z | 2021-05-28T12:22:44.000Z | testsuite/modulegraph-dir/renamed_b.py | ronaldoussoren/modulegraph2 | b6ab1766b0098651b51083235ff8a18a5639128b | [
"MIT"
] | 4 | 2020-05-10T18:51:41.000Z | 2021-04-07T14:03:12.000Z | import sys as c
from package import submod as d
| 16 | 31 | 0.791667 | 10 | 48 | 3.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 48 | 2 | 32 | 24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65494a89d5fd46a719792ccc35be36b97a2c4076 | 1,050 | py | Python | varnish/datadog_checks/varnish/config_models/defaults.py | gaffneyd4/integrations-core | 4c7725c9f1be4985381aad9740e7186f16a87976 | [
"BSD-3-Clause"
] | null | null | null | varnish/datadog_checks/varnish/config_models/defaults.py | gaffneyd4/integrations-core | 4c7725c9f1be4985381aad9740e7186f16a87976 | [
"BSD-3-Clause"
] | null | null | null | varnish/datadog_checks/varnish/config_models/defaults.py | gaffneyd4/integrations-core | 4c7725c9f1be4985381aad9740e7186f16a87976 | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from datadog_checks.base.utils.models.fields import get_default_field_value
def shared_service(field, value):
return get_default_field_value(field, value)
def instance_daemon_host(field, value):
return 'localhost'
def instance_daemon_port(field, value):
return 6082
def instance_empty_default_hostname(field, value):
return False
def instance_metrics_filter(field, value):
return get_default_field_value(field, value)
def instance_min_collection_interval(field, value):
return 15
def instance_name(field, value):
return get_default_field_value(field, value)
def instance_secretfile(field, value):
return '/etc/varnish/secret'
def instance_service(field, value):
return get_default_field_value(field, value)
def instance_tags(field, value):
return get_default_field_value(field, value)
def instance_varnishadm(field, value):
return get_default_field_value(field, value)
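Each function above maps one config field to its default, following a `<section>_<field>` naming convention. A self-contained sketch of how such defaults can be resolved dynamically by name — the `default_for` helper and the two sample functions are illustrative assumptions, not the actual datadog_checks resolver:

```python
def instance_daemon_host(field, value):
    # Sample default function, mirroring the naming scheme above.
    return 'localhost'


def instance_daemon_port(field, value):
    return 6082


def default_for(section, field_name, field=None, value=None):
    """Resolve a default by looking up '<section>_<field_name>' among
    the module-level functions (e.g. 'instance' + 'daemon_port')."""
    func = globals().get(f'{section}_{field_name}')
    if func is None:
        raise KeyError(f'no default defined for {section}.{field_name}')
    return func(field, value)
```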
| 21.428571 | 75 | 0.777143 | 146 | 1,050 | 5.315068 | 0.369863 | 0.309278 | 0.226804 | 0.180412 | 0.444588 | 0.444588 | 0.444588 | 0.444588 | 0.444588 | 0.385309 | 0 | 0.012222 | 0.142857 | 1,050 | 48 | 76 | 21.875 | 0.85 | 0.102857 | 0 | 0.26087 | 0 | 0 | 0.029851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.478261 | false | 0 | 0.043478 | 0.478261 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3308cf3b0db548074d1e2bbb40d6eda871756e34 | 10,401 | py | Python | visulization/weather_piechart.py | Jonas2019/car-accident-analysis | 1d900beae0c2428c0c85eec44887012fbc2ab988 | [
"MIT"
] | null | null | null | visulization/weather_piechart.py | Jonas2019/car-accident-analysis | 1d900beae0c2428c0c85eec44887012fbc2ab988 | [
"MIT"
] | null | null | null | visulization/weather_piechart.py | Jonas2019/car-accident-analysis | 1d900beae0c2428c0c85eec44887012fbc2ab988 | [
"MIT"
] | null | null | null | from pymongo import MongoClient
import plotly as py
import plotly.graph_objs as go
import pandas as pd
pyplt = py.offline.plot
from plotly.subplots import make_subplots
client = MongoClient("mongodb+srv://dbAdmin:cmpt732@cluster732.jfbfw.mongodb.net")
db = client.CMPT732
df_weather_condition=pd.DataFrame(list(db['WeatherCondition'].find()))
df_weather_wind=pd.DataFrame(list(db['WeatherWind'].find()))
#.......................Weather Condition..............................................................................
colors_condition = ['rgb(107,174,214)', 'rgb(8,81,156)', 'rgb(7,40,89)']
specs_condtion = [[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}],
[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}]]
condition = make_subplots(rows=2, cols=3, specs=specs_condtion,
subplot_titles=("Clear & Cloudy", "Fog", "Rain", "Snow", "Storm", "Sand & Dust"))
labels_condition = ['1','2','3']
clear_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])['Severity']==1])['Counts'])
clear_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])['Severity']==2])['Counts'])
clear_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Clear & Cloudy'])['Severity']==3])['Counts'])
list_clear = [clear_1,clear_2,clear_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_clear,marker={'colors':colors_condition}), 1, 1)
fog_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])['Severity']==1])['Counts'])
fog_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])['Severity']==2])['Counts'])
fog_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Fog'])['Severity']==3])['Counts'])
list_fog = [fog_1,fog_2,fog_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_fog,marker={'colors':colors_condition}), 1, 2)
rain_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])['Severity']==1])['Counts'])
rain_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])['Severity']==2])['Counts'])
rain_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Rain'])['Severity']==3])['Counts'])
list_rain = [rain_1,rain_2,rain_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_rain,marker={'colors':colors_condition}), 1, 3)
snow_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])['Severity']==1])['Counts'])
snow_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])['Severity']==2])['Counts'])
snow_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Snow'])['Severity']==3])['Counts'])
list_snow = [snow_1,snow_2,snow_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_snow,marker={'colors':colors_condition}), 2, 1)
storm_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])['Severity']==1])['Counts'])
storm_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])['Severity']==2])['Counts'])
storm_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Storm'])['Severity']==3])['Counts'])
list_storm = [storm_1,storm_2,storm_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_storm,marker={'colors':colors_condition}), 2, 2)
sand_1 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])['Severity']==1])['Counts'])
sand_2 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])['Severity']==2])['Counts'])
sand_3 =int(((df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])[(df_weather_condition[df_weather_condition['Weather_Condition'] == 'Sand & Dust'])['Severity']==3])['Counts'])
list_sand = [sand_1,sand_2,sand_3]
condition.add_trace(go.Pie(labels=labels_condition,values=list_sand,marker={'colors':colors_condition}), 2, 3)
condition.update_traces(hoverinfo='label+percent+name', textinfo='percent')
condition.update_layout(legend_title_text='Severity',
autosize=False,
width=1000,
height=800)
condition = go.Figure(condition)
condition.show()
#pyplt(condition,filename='weather_c.html',image='png')
#.......................Wind Direction.................................................................................
colors_wind = ['rgb(107,174,214)','rgb(8,81,156)','rgb(7,40,89)' ]
specs_wind = [[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}],
[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}]]
wind = make_subplots(rows=2, cols=3,specs = specs_wind,
subplot_titles=("Variable", "Clam", "South", "East", "North", "West"))
labels_wind = ['1','2','3']
variable_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])['Severity']==1])['Counts'])
variable_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])['Severity']==2])['Counts'])
variable_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Variable'])['Severity']==3])['Counts'])
list_variable = [variable_1,variable_2,variable_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_variable,marker={'colors':colors_wind}), 1, 1)
clam_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])['Severity']==1])['Counts'])
clam_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])['Severity']==2])['Counts'])
clam_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'Clam'])['Severity']==3])['Counts'])
list_clam = [clam_1,clam_2,clam_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_clam,marker={'colors':colors_wind}), 1, 2)
south_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])['Severity']==1])['Counts'])
south_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])['Severity']==2])['Counts'])
south_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'South'])['Severity']==3])['Counts'])
list_s = [south_1,south_2,south_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_s,marker={'colors':colors_wind}), 1, 3)
east_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])['Severity']==1])['Counts'])
east_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])['Severity']==2])['Counts'])
east_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'East'])['Severity']==3])['Counts'])
list_e = [east_1,east_2,east_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_e,marker={'colors':colors_wind}), 2, 1)
north_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])['Severity']==1])['Counts'])
north_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])['Severity']==2])['Counts'])
north_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'North'])['Severity']==3])['Counts'])
list_n = [north_1,north_2,north_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_n,marker={'colors':colors_wind}), 2, 2)
west_1 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])['Severity']==1])['Counts'])
west_2 =int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])['Severity']==2])['Counts'])
west_3 = int(((df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])[(df_weather_wind[df_weather_wind['Wind_Direction'] == 'West'])['Severity']==3])['Counts'])
list_w = [west_1,west_2,west_3]
wind.add_trace(go.Pie(labels=labels_wind,values=list_w,marker={'colors':colors_wind}), 2, 3)
wind.update_traces(hoverinfo='label+percent+name', textinfo='percent')
wind.update_layout(legend_title_text='Severity',
autosize=False,
width=1000,
height=800)
wind = go.Figure(wind)
#pyplt(wind,filename='weather_w.html',image='png')
wind.show()
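Both figures above repeat the same triple-filter expression three times per category, six categories per chart. A stdlib-only helper that computes the per-severity count list for one category — `rows` here are plain dicts standing in for the MongoDB documents (same field names as the collections above), rather than a pandas DataFrame:

```python
def severity_counts(rows, key, category, severities=(1, 2, 3)):
    """Return [count for severity 1, count for 2, count for 3] for one
    category value, e.g. key='Wind_Direction', category='South'.

    rows: iterable of dicts like
        {'Wind_Direction': 'South', 'Severity': 1, 'Counts': 42}
    Severities missing from the data default to 0.
    """
    by_severity = {row['Severity']: row['Counts']
                   for row in rows if row[key] == category}
    return [by_severity.get(s, 0) for s in severities]
```

With this, each pie trace could reduce to something like `wind.add_trace(go.Pie(labels=labels_wind, values=severity_counts(rows, 'Wind_Direction', 'South'), marker={'colors': colors_wind}), 1, 3)`, and the six blocks per figure become a loop over category names and subplot positions.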
| 81.257813 | 213 | 0.716181 | 1,415 | 10,401 | 4.89682 | 0.077032 | 0.189638 | 0.189638 | 0.103911 | 0.790879 | 0.746717 | 0.746717 | 0.746717 | 0.722471 | 0.722471 | 0 | 0.020942 | 0.058841 | 10,401 | 127 | 214 | 81.897638 | 0.686893 | 0.032593 | 0 | 0.083333 | 0 | 0 | 0.253381 | 0.005768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052083 | 0 | 0.052083 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
333243522f96d41b444dd7f661b36537aa050102 | 84 | py | Python | src/nna/exp/__init__.py | EnisBerk/speech_audio_understanding | 2b1ba15a67bb48de7b949a6e5e0205dc5c3e24bd | [
"MIT"
] | 2 | 2019-12-05T22:27:54.000Z | 2020-04-05T21:24:50.000Z | src/nna/exp/__init__.py | EnisBerk/speech_audio_understanding | 2b1ba15a67bb48de7b949a6e5e0205dc5c3e24bd | [
"MIT"
] | 21 | 2020-01-28T22:53:24.000Z | 2022-02-10T02:50:11.000Z | src/nna/exp/__init__.py | speechLabBcCuny/nnaAudiosetClassification | ed61303609a069aac1887ca98116521e09cbd2ee | [
"MIT"
] | null | null | null | # import nna.exp.augmentations
# import nna.exp.runutils
# import nna.exp.modelArchs | 28 | 30 | 0.797619 | 12 | 84 | 5.583333 | 0.5 | 0.402985 | 0.537313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 84 | 3 | 31 | 28 | 0.881579 | 0.928571 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
336491240f5779b06deb0a5598deb2cdbee8410d | 20,628 | py | Python | postgres_backend/data/tests/test_models.py | scripted-adventurer/Custom-Fantasy-Football | 334419d46d2142ceec7630d4582bf61e06a4de1a | [
"Unlicense"
] | 1 | 2020-09-12T04:25:19.000Z | 2020-09-12T04:25:19.000Z | postgres_backend/data/tests/test_models.py | scripted-adventurer/Custom-Fantasy-Football | 334419d46d2142ceec7630d4582bf61e06a4de1a | [
"Unlicense"
] | null | null | null | postgres_backend/data/tests/test_models.py | scripted-adventurer/Custom-Fantasy-Football | 334419d46d2142ceec7630d4582bf61e06a4de1a | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
from freezegun import freeze_time
import os
import pytz
from django.test import TestCase
import data.models as models
from .setup import TestData
from common.current_week import get_current_week
from common.hashing import generate_hash
class ModelsTest(TestCase):
# automatically loaded:
# 5 test users (referenced with TestData().user)
# all active players and teams (prior to the 2020 season)
# all games from 2019 REG 17
# all drives, plays, and play_players from game 10160000-0581-45c0-455c-8dcc2dd0671b
fixtures = ['user', 'team', 'player', 'game', 'drive', 'play', 'play_player']
def setUp(self):
super().setUp()
self.data = TestData()
def test_game(self):
main = models.Game.objects.get(game_id='10160000-0581-45c0-455c-8dcc2dd0671b')
same = models.Game.objects.get(game_id='10160000-0581-45c0-455c-8dcc2dd0671b')
different = models.Game.objects.get(game_id='10160000-0581-4680-ba82-12e629d4584f')
other = models.Drive.objects.get(id=76757)
self.assertEqual(repr(main), "{'model': 'Game', 'game_id': '10160000-0581-45c0-455c-8dcc2dd0671b'}")
self.assertEqual(str(main), "{Game '10160000-0581-45c0-455c-8dcc2dd0671b'}")
self.assertEqual(main.data_dict(),
{'id': '10160000-0581-45c0-455c-8dcc2dd0671b', 'start_time': "2019-12-29 18:00",
'season_type': 'REG', 'season_year': 2019, 'week': 17, 'home_team': 'DET',
'away_team': 'GB', 'home_score': 20, 'away_score': 23})
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_drive(self):
main = models.Drive.objects.get(id=76756)
same = models.Drive.objects.get(id=76756)
different = models.Drive.objects.get(id=76757)
other = models.Play.objects.get(id=544463)
self.assertEqual(repr(main), "{'model': 'Drive', 'game_id': "
"'10160000-0581-45c0-455c-8dcc2dd0671b', 'drive_id': 14}")
self.assertEqual(str(main),
"{Drive 14 from Game '10160000-0581-45c0-455c-8dcc2dd0671b'}")
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_play(self):
main = models.Play.objects.get(id=544462)
same = models.Play.objects.get(id=544462)
different = models.Play.objects.get(id=544463)
other = models.Drive.objects.get(id=76757)
self.assertEqual(repr(main),
"{'model': 'Play', 'game_id': '10160000-0581-45c0-455c-8dcc2dd0671b', "
"'drive_id': 14, 'play_id': 2945}")
self.assertEqual(str(main),
"{Play 2945 from Drive 14 from Game '10160000-0581-45c0-455c-8dcc2dd0671b'}")
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_play_player(self):
main = models.PlayPlayer.objects.get(id=1277030)
same = models.PlayPlayer.objects.get(id=1277030)
different = models.PlayPlayer.objects.get(id=1277031)
other = models.Play.objects.get(id=544463)
self.assertEqual(repr(main),
"{'model': 'PlayPlayer', 'player': '3200524f-4433-9293-a3cf-ad7758d03003', "
"'game_id': '10160000-0581-45c0-455c-8dcc2dd0671b', 'drive_id': 14, "
"'play_id': 2945}")
self.assertEqual(str(main),
"{Player '3200524f-4433-9293-a3cf-ad7758d03003' from Play 2945 from Drive "
"14 from Game '10160000-0581-45c0-455c-8dcc2dd0671b'}")
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_player(self):
main = models.Player.objects.get(
player_id='3200524f-4433-9293-a3cf-ad7758d03003')
same = models.Player.objects.get(
player_id='3200524f-4433-9293-a3cf-ad7758d03003')
different = models.Player.objects.get(
player_id='3200434f-5570-9400-e1ae-f835abb5963e')
other = models.Play.objects.get(id=544463)
self.assertEqual(repr(main), "{'model': 'Player', 'player_id': "
"'3200524f-4433-9293-a3cf-ad7758d03003'}")
self.assertEqual(str(main), "{Aaron Rodgers QB GB}")
self.assertEqual(main.data_dict(), {'id': '3200524f-4433-9293-a3cf-ad7758d03003',
'name': 'Aaron Rodgers', 'team': 'GB', 'position': 'QB', 'status': 'ACT'})
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
# Sunday during Packers 2019 bye week
@freeze_time("2019-11-17 20:00:00")
def test_player_is_locked_bye_week(self):
gb_qb = models.Player.objects.get(
player_id='3200434f-5570-9400-e1ae-f835abb5963e')
self.assertEqual(gb_qb.is_locked(), False)
# Saturday during Week 17 2019
@freeze_time("2019-12-28 12:00:00")
def test_player_is_locked_neither(self):
gb_qb = models.Player.objects.get(
player_id='3200434f-5570-9400-e1ae-f835abb5963e')
sea_qb = models.Player.objects.get(
player_id='32005749-4c77-7781-795c-94c753706d1d')
self.assertEqual(gb_qb.is_locked(), False)
self.assertEqual(sea_qb.is_locked(), False)
# Sunday afternoon during Week 17 2019
@freeze_time("2019-12-29 18:30:00")
def test_player_is_locked_one(self):
gb_qb = models.Player.objects.get(
player_id='3200434f-5570-9400-e1ae-f835abb5963e')
sea_qb = models.Player.objects.get(
player_id='32005749-4c77-7781-795c-94c753706d1d')
self.assertEqual(gb_qb.is_locked(), True)
self.assertEqual(sea_qb.is_locked(), False)
# Monday during Week 17 2019
@freeze_time("2019-12-30 12:00:00")
def test_player_is_locked_both(self):
gb_qb = models.Player.objects.get(
player_id='3200434f-5570-9400-e1ae-f835abb5963e')
sea_qb = models.Player.objects.get(
player_id='32005749-4c77-7781-795c-94c753706d1d')
self.assertEqual(gb_qb.is_locked(), True)
self.assertEqual(sea_qb.is_locked(), True)
def test_team(self):
main = models.Team.objects.get(team_id='GB')
same = models.Team.objects.get(team_id='GB')
different = models.Team.objects.get(team_id='CHI')
other = models.Play.objects.get(id=544463)
self.assertEqual(repr(main), "{'model': 'Team', 'team_id': 'GB'}")
self.assertEqual(str(main), "{Green Bay Packers}")
self.assertEqual(main.data_dict(), {'id': 'GB', 'name': 'Green Bay Packers'})
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_league_basic(self):
self.data.create('League', name='test_league_basic_0', password='password')
self.data.create('League', name='test_league_basic_1', password='password')
main = models.League.objects.get(name=self.data.league[0].name)
same = models.League.objects.get(name=self.data.league[0].name)
different = self.data.league[1]
other = models.Play.objects.get(id=544463)
self.assertEqual(repr(main),
f"{{'model': 'League', 'name': '{self.data.league[0].name}'}}")
self.assertEqual(str(main), f"{{League {self.data.league[0].name}}}")
self.assertEqual((main == same), True)
self.assertEqual(hash(main) == hash(same), True)
self.assertEqual((main == different), False)
self.assertEqual(hash(main) == hash(different), False)
self.assertEqual(main == other, False)
def test_league_additional(self):
password_hash = generate_hash('password')
lineup_settings = {'K': 1, 'QB': 1, 'RB': 2, 'TE': 1, 'WR': 2}
scoring_settings = [
{'name': 'passing yards', 'field': 'passing_yds', 'conditions': [],
'multiplier': .04},
{'name': 'fg made 40-49 yards', 'field': 'kicking_fgm', 'conditions': [
{'field': 'kicking_fgm_yds', 'comparison': '>=', 'value': 40},
{'field': 'kicking_fgm_yds', 'comparison': '<', 'value': 50}],
'multiplier': 2.0},
{'name': 'fg made 50+ yards', 'field': 'kicking_fgm', 'conditions': [
{'field': 'kicking_fgm_yds', 'comparison': '>=', 'value': 50}],
'multiplier': 3.0}]
new_scoring_settings = [
{'name': 'rushing yards', 'field': 'rushing_yds', 'conditions': [],
'multiplier': .1}]
# league 0 is blank, league 1 has standard settings
self.data.create('League', name='test_league_additional_0',
password=password_hash)
self.data.create('League', name='test_league_additional_1',
password=password_hash, qb=1, rb=2, wr=2, te=1, k=1)
for stat in scoring_settings:
conditions = True if stat['conditions'] else False
self.data.create('LeagueStat', league=self.data.league[1], name=stat['name'],
field=stat['field'], conditions=conditions, multiplier=stat['multiplier'])
self.data.create('StatCondition', league_stat=self.data.leaguestat[1],
field='kicking_fgm_yds', comparison='>=', value=40)
self.data.create('StatCondition', league_stat=self.data.leaguestat[1],
field='kicking_fgm_yds', comparison='<', value=50)
self.data.create('StatCondition', league_stat=self.data.leaguestat[2],
field='kicking_fgm_yds', comparison='>=', value=50)
self.data.create('Member', league=self.data.league[1], user=self.data.user[0],
admin=True)
self.data.create('Member', league=self.data.league[1], user=self.data.user[1])
self.data.create('Member', league=self.data.league[1], user=self.data.user[2])
self.assertEqual(self.data.league[0].correct_password('password'), True)
self.assertEqual(self.data.league[0].correct_password('incorrect'), False)
self.assertEqual(self.data.league[0].get_lineup_settings(), {})
self.assertEqual(self.data.league[0].get_scoring_settings(), [])
self.assertEqual(self.data.league[0].get_members(), [])
self.assertEqual(self.data.league[1].get_lineup_settings(),
lineup_settings)
self.assertEqual(self.data.league[1].get_scoring_settings(),
scoring_settings)
self.assertEqual(self.data.league[1].get_members(), [self.data.user[0].username,
self.data.user[1].username, self.data.user[2].username])
self.data.league[1].set_lineup_settings(lineup_settings)
self.data.league[1].set_scoring_settings(scoring_settings)
self.data.league[1].set_password('new_password')
# check all changes were made
self.data.league[1] = models.League.objects.get(name='test_league_additional_1')
self.assertEqual(self.data.league[1].get_lineup_settings(),
lineup_settings)
self.assertEqual(self.data.league[1].get_scoring_settings(),
scoring_settings)
self.assertEqual(self.data.league[1].correct_password('new_password'), True)
# check updating stats deletes old stats
self.data.league[1].set_scoring_settings(new_scoring_settings)
self.data.league[1] = models.League.objects.get(name='test_league_additional_1')
self.assertEqual(self.data.league[1].get_scoring_settings(),
new_scoring_settings)

    def test_league_stat(self):
        password_hash = generate_hash('password')
        self.data.create('League', name='test_league_stat_0', password=password_hash)
        self.data.create('LeagueStat', league=self.data.league[0], name='passing yards',
                         field='passing_yds', multiplier=.04)
        self.data.create('LeagueStat', league=self.data.league[0], name='rushing tds',
                         field='rushing_tds', multiplier=6)
        main = models.LeagueStat.objects.get(league=self.data.league[0],
                                             name='passing yards')
        same = models.LeagueStat.objects.get(league=self.data.league[0],
                                             name='passing yards')
        different = models.LeagueStat.objects.get(league=self.data.league[0],
                                                  name='rushing tds')
        other = models.Play.objects.get(id=544463)
        self.assertEqual(repr(main),
                         ("{'model': 'LeagueStat', 'league': 'test_league_stat_0', 'name': " +
                          "'passing yards'}"))
        self.assertEqual(str(main),
                         "{Stat 'passing yards' from League 'test_league_stat_0'}")
        self.assertEqual((main == same), True)
        self.assertEqual(hash(main) == hash(same), True)
        self.assertEqual((main == different), False)
        self.assertEqual(hash(main) == hash(different), False)
        self.assertEqual(main == other, False)

    def test_stat_condition(self):
        password_hash = generate_hash('password')
        self.data.create('League', name='test_stat_condition_0', password=password_hash)
        self.data.create('LeagueStat', league=self.data.league[0],
                         name='fg bonus (40-49)', field='kicking_fgm', conditions=True, multiplier=1)
        self.data.create('StatCondition', league_stat=self.data.leaguestat[0],
                         field='kicking_fgm_yds', comparison='>=', value=40)
        self.data.create('StatCondition', league_stat=self.data.leaguestat[0],
                         field='kicking_fgm_yds', comparison='<', value=50)
        main = models.StatCondition.objects.get(league_stat=self.data.leaguestat[0],
                                                field='kicking_fgm_yds', comparison='>=', value=40)
        same = models.StatCondition.objects.get(league_stat=self.data.leaguestat[0],
                                                field='kicking_fgm_yds', comparison='>=', value=40)
        different = models.StatCondition.objects.get(league_stat=self.data.leaguestat[0],
                                                     field='kicking_fgm_yds', comparison='<', value=50)
        other = models.Play.objects.get(id=544463)
        self.assertEqual(repr(main),
                         ("{'model': 'StatCondition', 'league': 'test_stat_condition_0', "
                          "'stat': 'fg bonus (40-49)', 'field': 'kicking_fgm_yds', "
                          "'comparison': '>=', 'value': 40}"))
        self.assertEqual(str(main),
                         ("{Condition kicking_fgm_yds>=40 for 'fg bonus (40-49)' in League "
                          "'test_stat_condition_0'}"))
        self.assertEqual((main == same), True)
        self.assertEqual(hash(main) == hash(same), True)
        self.assertEqual((main == different), False)
        self.assertEqual(hash(main) == hash(different), False)
        self.assertEqual(main == other, False)

    def test_member_basic(self):
        password_hash = generate_hash('password')
        self.data.create('League', name='test_member_basic_0', password=password_hash)
        self.data.create('Member', user=self.data.user[0], league=self.data.league[0])
        self.data.create('Member', user=self.data.user[1], league=self.data.league[0])
        main = models.Member.objects.get(user=self.data.user[0],
                                         league=self.data.league[0])
        same = models.Member.objects.get(user=self.data.user[0],
                                         league=self.data.league[0])
        different = models.Member.objects.get(user=self.data.user[1],
                                              league=self.data.league[0])
        other = models.Play.objects.get(id=544463)
        self.assertEqual(repr(main),
                         ("{'model': 'Member', 'username': 'test_user_0', 'league': " +
                          "'test_member_basic_0'}"))
        self.assertEqual(str(main),
                         "{User 'test_user_0' in League 'test_member_basic_0'}")
        self.assertEqual((main == same), True)
        self.assertEqual(hash(main) == hash(same), True)
        self.assertEqual((main == different), False)
        self.assertEqual(hash(main) == hash(different), False)
        self.assertEqual(main == other, False)

    def test_member_additional(self):
        password_hash = generate_hash('password')
        self.data.create('League', name='test_member_additional_0',
                         password=password_hash)
        self.data.create('Member', user=self.data.user[0], league=self.data.league[0])
        self.data.create('Member', user=self.data.user[1], league=self.data.league[0])
        # past
        self.data.create('Lineup', member=self.data.member[1], season_type='REG',
                         season_year=2019, week=17, player_id='3200524f-4433-9293-a3cf-ad7758d03003')
        self.data.create('Lineup', member=self.data.member[1], season_type='REG',
                         season_year=2019, week=17, player_id='3200434f-5570-9400-e1ae-f835abb5963e')
        # current
        season_year, season_type, week = get_current_week()
        self.data.create('Lineup', member=self.data.member[1], season_type=season_type,
                         season_year=season_year, week=week,
                         player_id='3200524f-4433-9293-a3cf-ad7758d03003')
        self.data.create('Lineup', member=self.data.member[1], season_type=season_type,
                         season_year=season_year, week=week,
                         player_id='32005749-4c77-7781-795c-94c753706d1d')
        # no lineup
        self.assertEqual(self.data.member[0].get_lineup('REG', 2019, 17), [])
        self.assertEqual(self.data.member[0].get_lineup(), [])
        # existing lineup past
        existing_lineup = self.data.member[1].get_lineup('REG', 2019, 17)
        self.assertEqual(existing_lineup[0]['id'], '3200524f-4433-9293-a3cf-ad7758d03003')
        self.assertEqual(existing_lineup[1]['id'], '3200434f-5570-9400-e1ae-f835abb5963e')
        # existing lineup current
        existing_lineup = self.data.member[1].get_lineup()
        self.assertEqual(existing_lineup[0]['id'], '3200524f-4433-9293-a3cf-ad7758d03003')
        self.assertEqual(existing_lineup[1]['id'], '32005749-4c77-7781-795c-94c753706d1d')
        # lineup add previous week
        self.data.member[0].lineup_add('3200524f-4433-9293-a3cf-ad7758d03003',
                                       'REG', 2019, 17)
        previous_lineup = self.data.member[0].get_lineup('REG', 2019, 17)
        self.assertEqual(previous_lineup[0]['name'], 'Aaron Rodgers')
        # lineup add current week
        self.data.member[0].lineup_add('3200434f-5570-9400-e1ae-f835abb5963e')
        current_lineup = self.data.member[0].get_lineup()
        self.assertEqual(current_lineup[0]['team'], 'MIN')
        # lineup delete previous week - player doesn't exist
        self.data.member[0].lineup_delete('32005749-4c77-7781-795c-94c753706d1d',
                                          'REG', 2019, 17)
        previous_lineup = self.data.member[0].get_lineup('REG', 2019, 17)
        self.assertEqual(previous_lineup[0]['position'], 'QB')
        # lineup delete previous week - player exists
        self.data.member[0].lineup_delete('3200524f-4433-9293-a3cf-ad7758d03003',
                                          'REG', 2019, 17)
        self.assertEqual(self.data.member[0].get_lineup('REG', 2019, 17), [])
        # lineup delete current week - player doesn't exist
        self.data.member[0].lineup_delete('32005749-4c77-7781-795c-94c753706d1d')
        current_lineup = self.data.member[0].get_lineup()
        self.assertEqual(current_lineup[0]['status'], 'ACT')
        # lineup delete current week - player exists
        self.data.member[0].lineup_delete('3200434f-5570-9400-e1ae-f835abb5963e')
        self.assertEqual(self.data.member[0].get_lineup(), [])

    def test_lineup(self):
        password_hash = generate_hash('password')
        self.data.create('League', name='test_lineup_0', password=password_hash)
        self.data.create('Member', user=self.data.user[0], league=self.data.league[0])
        self.data.create('Lineup', member=self.data.member[0], season_type='REG',
                         season_year=2019, week=17, player_id='3200524f-4433-9293-a3cf-ad7758d03003')
        self.data.create('Lineup', member=self.data.member[0], season_type='REG',
                         season_year=2019, week=17, player_id='3200434f-5570-9400-e1ae-f835abb5963e')
        main = models.Lineup.objects.get(member=self.data.member[0],
                                         season_type='REG', season_year=2019, week=17,
                                         player_id='3200524f-4433-9293-a3cf-ad7758d03003')
        same = models.Lineup.objects.get(member=self.data.member[0],
                                         season_type='REG', season_year=2019, week=17,
                                         player_id='3200524f-4433-9293-a3cf-ad7758d03003')
        different = models.Lineup.objects.get(member=self.data.member[0],
                                              season_type='REG', season_year=2019, week=17,
                                              player_id='3200434f-5570-9400-e1ae-f835abb5963e')
        other = models.Play.objects.get(id=544463)
        self.assertEqual(repr(main),
                         ("{'model': 'Lineup', 'user': 'test_user_0', 'league': "
                          "'test_lineup_0', 'season_year': 2019, 'season_type': 'REG', "
                          "'week': 17, 'player_id': '3200524f-4433-9293-a3cf-ad7758d03003'}"))
        self.assertEqual(str(main),
                         ("{Player '3200524f-4433-9293-a3cf-ad7758d03003' for User 'test_user_0' "
                          "in League 'test_lineup_0' for 'REG' 2019 week 17}"))
        self.assertEqual((main == same), True)
        self.assertEqual(hash(main) == hash(same), True)
        self.assertEqual((main == different), False)
        self.assertEqual(hash(main) == hash(different), False)
        self.assertEqual(main == other, False)
# === sorting/quick_sort.py (nomadkitty/cs_unit_assessment_prep, MIT) ===
# What kind of input will we get?
# We expect a list
def quicksort(data):
    # check if data has 1 or 0 elements
    # (base case) a side only contains a single element
    if len(data) <= 1:
        return data
    # Partition the data
    # Start by choosing a pivot (choose the first item in the list)
    pivot = data[0]
    # We need to create storage for the LHS and RHS
    left = []
    right = []
    # We need to loop through each item
    for current in data[1:]:
        # if it's smaller or equal, add to LHS storage
        if current <= pivot:
            left.append(current)
        # if it's bigger, add to RHS storage
        else:
            right.append(current)
    # (recursive case) Recursively Quick Sort LHS and RHS until each side is sorted
    return quicksort(left) + [pivot] + quicksort(right)


quicksort([2, 5, 7, 1, 3, 4, 6, 9, 8])
# # helper version:
# def partition(data):
#     # Partition the data
#     # Start by choosing a pivot (choose the first item in the list)
#     pivot = data[0]
#     # We need to create storage for the LHS and the RHS
#     left = []
#     right = []
#
#     # We need to loop through each item
#     for current in data[1:]:
#         # if it's smaller or equal, append to left
#         if current <= pivot:
#             left.append(current)
#         # if it's bigger, add to RHS storage
#         else:
#             right.append(current)
#
#     return left, right, pivot
#
#
# # What kind of input will we get?
# # We expect a list
# def quicksort(data):
#     # check if data has 1 or 0 elements
#     # (base case) a side only contains a single element
#     if len(data) <= 1:
#         return data
#
#     left, right, pivot = partition(data)
#
#     # (recursive case) Recursively Quick Sort LHS and RHS until each side is sorted
#     return quicksort(left) + [pivot] + quicksort(right)
#
#
# print(quicksort([2, 5, 7, 1, 3, 4, 6, 9, 8]))
# === tests/test_cli.py (Red-Lex/tpkutils, BSD-3-Clause) ===
import os
import sqlite3

import pytest
from click.testing import CliRunner

from tpkutils.cli import cli
from pymbtiles import MBtiles


@pytest.fixture(scope="function")
def runner():
    return CliRunner()

def test_export_mbtiles(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    mbtiles = str(tmpdir.join("test.mbtiles"))
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles])
    assert result.exit_code == 0
    assert os.path.exists(mbtiles)

    with sqlite3.connect(mbtiles) as db:
        cursor = db.cursor()
        # Verify zoom levels present
        cursor.execute("select distinct zoom_level from tiles order by zoom_level")
        zoom_levels = {x[0] for x in cursor.fetchall()}
        assert zoom_levels == {0, 1, 2, 3, 4}
        cursor.close()


def test_export_mbtiles_zoom(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    mbtiles = str(tmpdir.join("test.mbtiles"))
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles, "--zoom", "0,1"])
    assert result.exit_code == 0
    assert os.path.exists(mbtiles)

    with sqlite3.connect(mbtiles) as db:
        cursor = db.cursor()
        # Verify zoom levels present
        cursor.execute("select distinct zoom_level from tiles order by zoom_level")
        zoom_levels = {x[0] for x in cursor.fetchall()}
        assert zoom_levels == {0, 1}
        cursor.close()

def test_export_mbtiles_existing_output(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    mbtiles = str(tmpdir.join("test.mbtiles"))
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles])
    assert result.exit_code == 0
    assert os.path.exists(mbtiles)
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles])
    assert result.exit_code == 1
    assert "Output exists and overwrite is false" in result.output
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles, "--overwrite"])
    assert result.exit_code == 0
    assert os.path.exists(mbtiles)


def test_export_mbtiles_tile_bounds(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    mbtiles_filename = str(tmpdir.join("test.mbtiles"))
    result = runner.invoke(
        cli, ["export", "mbtiles", tpk, mbtiles_filename, "-z", "0", "--tile-bounds"]
    )
    assert result.exit_code == 0
    print(result.output)
    assert os.path.exists(mbtiles_filename)
    with MBtiles(mbtiles_filename) as mbtiles:
        assert mbtiles.zoom_range() == (0, 0)
        assert mbtiles.meta["bounds"] == "-180.000000,-85.051129,180.000000,85.051129"

def test_export_mbtiles_verbosity(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    mbtiles = str(tmpdir.join("test.mbtiles"))
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles, "-v"])
    assert result.exit_code == 0
    # assert 'INFO:tpkutils' in result.output  # not working w/ pytest
    mbtiles = str(tmpdir.join("test2.mbtiles"))
    result = runner.invoke(cli, ["export", "mbtiles", tpk, mbtiles, "-v", "-v"])
    assert result.exit_code == 0
    # assert 'DEBUG:tpkutils' in result.output  # not working w/ pytest


def test_export_disk(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    path = str(tmpdir.join("tiles"))
    result = runner.invoke(cli, ["export", "disk", tpk, path])
    assert result.exit_code == 0
    assert os.path.exists(path)
    assert os.path.exists(os.path.join(path, "0/0/0.png"))


def test_export_disk_zoom(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    path = str(tmpdir.join("tiles"))
    result = runner.invoke(cli, ["export", "disk", tpk, path, "--zoom", "1"])
    assert result.exit_code == 0
    assert os.path.exists(path)
    assert os.path.exists(os.path.join(path, "1/0/0.png"))
    assert not os.path.exists(os.path.join(path, "0/0/0.png"))

def test_export_disk_existing_output(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    path = str(tmpdir.join("tiles"))
    result = runner.invoke(cli, ["export", "disk", tpk, path])
    assert result.exit_code == 0
    assert os.path.exists(path)
    result = runner.invoke(cli, ["export", "disk", tpk, path])
    assert result.exit_code == 1
    assert "Output directory must be empty" in result.output


def test_export_disk_scheme(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    path = str(tmpdir.join("tiles"))
    result = runner.invoke(cli, ["export", "disk", tpk, path, "--scheme", "xyz"])
    assert result.exit_code == 0
    assert os.path.exists(path)
    assert os.path.exists(os.path.join(path, "1/0/1.png"))
    assert not os.path.exists(os.path.join(path, "1/0/0.png"))


def test_export_disk_drop_empty(runner, tmpdir):
    tpk = "tests/data/states_filled.tpk"
    path = str(tmpdir.join("tiles"))
    result = runner.invoke(cli, ["export", "disk", tpk, path, "--drop-empty"])
    assert result.exit_code == 0
    assert os.path.exists(path)
    assert os.path.exists(os.path.join(path, "4/2/6.png"))
    assert not os.path.exists(os.path.join(path, "4/2/7.png"))
# === app/admin/__init__.py (cmumford/asset-tracker, MIT) ===
from flask import Blueprint
admin_blueprint = Blueprint('admin', __name__, template_folder='templates')
from . import routes
# === flow_next/models/__init__.py (chenwenxiao/DOI, MIT) ===
from . import glow
# === model2/models/__init__.py (alibell/DMLI_Gleason_Score_Challenge, MIT) ===
from torchvision.models import efficientnet_b0
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
import pandas as pd
import numpy as np
import openslide
import hashlib
import random
import torch
import os
import pickle
from torchvision import transforms
from torch.nn.utils import clip_grad_norm_
from tqdm import tqdm
from PIL import Image


class patchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(pretrained=True)
        classifier_input = self.backbone.classifier[1].in_features
        self.backbone.classifier[1] = nn.Sequential(
            nn.Linear(classifier_input, 4)
        )
        for parameter in self.backbone.parameters():
            parameter.requires_grad = True
        self.network = self.backbone
        self.criterion = nn.CrossEntropyLoss()
        self.optim = optim.AdamW(self.parameters())

    def forward(self, x):
        predictions = self.network(x)
        y_hat = predictions
        return y_hat

    def predict(self, x):
        self.eval()
        with torch.no_grad():
            y_hat = self.forward(x)
        return y_hat

    def fit(self, x, y):
        self.train()
        self.optim.zero_grad()
        y_hat = self.forward(x)
        loss = self.criterion(y_hat, y)
        loss.backward()
        self.optim.step()
        loss_ = loss.detach().cpu().item()
        return loss_


class tilesClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(pretrained=True)
        classifier_input = self.backbone.classifier[1].in_features
        self.backbone.classifier[1] = nn.Sequential(
            nn.Linear(classifier_input, 6)
        )
        for parameter in self.backbone.parameters():
            parameter.requires_grad = True
        self.softmax = nn.Softmax(dim=0)
        self.network = self.backbone
        self.criterion = nn.CrossEntropyLoss()
        self.optim = optim.AdamW(self.parameters())

    def forward(self, x):
        predictions = self.network(x)
        y_hat = predictions
        return y_hat

    def predict(self, x):
        self.eval()
        with torch.no_grad():
            y_hat = self.forward(x)
            y_hat = self.softmax(y_hat)
        return y_hat

    def fit(self, x, y):
        self.train()
        self.optim.zero_grad()
        y_hat = self.forward(x)
        loss = self.criterion(y_hat, y)
        loss.backward()
        self.optim.step()
        loss_ = loss.detach().cpu().item()
        return loss_
# === authz/controller/__init__.py (nimatbt/Auth-Microservice, Apache-2.0) ===
from authz.controller import apiv1
# === test-http/src/test/org_user_tests/org_as_org_admin.py (wizedkyle/cve-services, CC0-1.0) ===
# Tests in this file use an Org admin user provided by a Pytest fixture. The
# tests here should be a subset of the secretariat tests, since the CNA of last
# resort should always be able to perform any root CNA functionality in
# addition to functionality reserved for the CNA of last resort.
import json
import requests
import uuid
from src import env, utils
from src.test.org_user_tests.org import (ORG_URL, create_new_user_with_new_org_by_uuid,
                                         create_new_user_with_new_org_by_shortname,
                                         post_new_org_user, post_new_org)
from src.utils import response_contains, response_contains_json
#### GET /org ####
def test_org_admin_get_all_orgs(org_admin_headers):
    """ services api rejects requests for all orgs by non-secretariat users """
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'SECRETARIAT_ONLY')


#### GET /org/:identifier ####
def test_org_admin_get_mitre_org(org_admin_headers):
    """ services api rejects requests for secretariat by non-secretariat users """
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/mitre',  # the secretariat's org
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_another_org(org_admin_headers):
    """ services api rejects requests for any org by another org user """
    different_org = str(uuid.uuid4())  # name of an org
    res = post_new_org(different_org, different_org)  # create an org
    assert res.status_code == 200
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{different_org}',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_own_org(org_admin_headers):
    """ services api allows org admins to get their own org's document """
    org = org_admin_headers["CVE-API-ORG"]
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}',
        headers=org_admin_headers
    )
    assert res.status_code == 200
    response_contains(res, org_admin_headers['CVE-API-ORG'])


#### GET /org/:shortname/id_quota ####
def test_org_admin_get_secretariat_id_quota_info(org_admin_headers):
    """ services api rejects requests for secretariat by non-secretariat users """
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/mitre/id_quota',  # the secretariat's org
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_another_org_id_quota_info(org_admin_headers):
    """ services api rejects requests for any org by another org user """
    different_org = str(uuid.uuid4())  # name of an org
    res = post_new_org(different_org, different_org)  # create an org
    assert res.status_code == 200
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{different_org}/id_quota',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_own_id_quota_info(org_admin_headers):
    """ services api allows org admins to get info about their org's quota """
    org = org_admin_headers["CVE-API-ORG"]
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/id_quota',
        headers=org_admin_headers
    )
    assert res.status_code == 200
    id_quota = json.loads(res.content.decode())['id_quota']
    assert id_quota >= 0
    assert id_quota <= 100000


#### GET /org/:shortname/user/:username ####
def test_org_admin_get_mitre_user_info(org_admin_headers):
    """ services api prevents org users from viewing secretariat user info """
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/mitre/user/{env.AWG_USER_NAME}',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_another_org_user_info(org_admin_headers):
    """ services api prevents org admin users from viewing another org user's info """
    org, user = create_new_user_with_new_org_by_uuid()
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_own_user_info(org_admin_headers):
    """ services api allows org admin to get its own user info """
    org = org_admin_headers['CVE-API-ORG']
    user = org_admin_headers['CVE-API-USER']
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
        headers=org_admin_headers
    )
    assert res.status_code == 200
    response_contains(res, user)


#### GET /org/:shortname/users ####
def test_org_admin_get_mitre_users_info(org_admin_headers):
    """ services api prevents org users from viewing secretariat users info """
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/mitre/users',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_another_org_users_info(org_admin_headers):
    """ services api prevents org admin users from viewing all other org's user info """
    org = str(uuid.uuid4())
    res = post_new_org(org, org)
    assert res.status_code == 200
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/users',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')


def test_org_admin_get_own_users_info(org_admin_headers):
    """ services api allows org admin to get its own users info """
    org = org_admin_headers['CVE-API-ORG']
    user = org_admin_headers['CVE-API-USER']
    res = requests.get(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/users',
        headers=org_admin_headers
    )
    assert res.status_code == 200
    assert len(json.loads(res.content.decode())['users']) >= 1
    response_contains(res, user)


#### POST /org ####
def test_org_admin_cannot_create_another_org(org_admin_headers):
    """ services api does not allow org admins to create other orgs """
    res = requests.post(
        f'{env.AWG_BASE_URL}{ORG_URL}',
        headers=org_admin_headers,
        params={'short_name': str(uuid.uuid4())}
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'SECRETARIAT_ONLY')


def test_org_admin_cannot_update_org(org_admin_headers):
    """ services api does not allow org admins to update their own orgs """
    res = requests.post(
        f'{env.AWG_BASE_URL}{ORG_URL}',
        headers=org_admin_headers,
        params={'name': str(uuid.uuid4())}
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'SECRETARIAT_ONLY')


#### POST /org/:shortname/user ####
# renamed from test_org_admin_cannot_create_user_for_another_org: the file
# defined that name twice, so this first definition was silently shadowed
def test_org_admin_cannot_create_user_shortname_mismatch(org_admin_headers):
    """ services api prevents org admins from creating a user with conflicts in the organization the user belongs to (org in path is diff from org in json body) """
    org = str(uuid.uuid4())
    res = post_new_org(org, org)
    assert res.status_code == 200
    res = requests.post(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user',
        headers=org_admin_headers,
        json={'username': 'BLARG', 'org_UUID': 'test'}
    )
    assert res.status_code == 400
    response_contains_json(res, 'error', 'SHORTNAME_MISMATCH')


def test_org_admin_cannot_create_user_for_another_org(org_admin_headers):
    """ services api prevents org admins from creating users for other orgs """
    org = str(uuid.uuid4())
    res = post_new_org(org, org)
    assert res.status_code == 200
    res = requests.post(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user',
        headers=org_admin_headers,
        json={'username': 'BLARG'}
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_ORG_ADMIN_OR_SECRETARIAT')


def test_org_admin_cannot_create_existing_user(org_admin_headers):
    """ services api prevents org admins from creating existing users """
    user = str(uuid.uuid4())
    org = org_admin_headers['CVE-API-ORG']
    res = post_new_org_user(org, user)
    assert res.status_code == 200
    res = requests.post(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user',
        headers=org_admin_headers,
        json={'username': user}
    )
    assert res.status_code == 400
    response_contains_json(res, 'error', 'USER_EXISTS')


#### PUT /org/:shortname/user/:username ####
def test_org_admin_cannot_update_user_org_dne(org_admin_headers):
    """ services api prevents org admins from updating a user from an org that doesn't exist """
    user = org_admin_headers['CVE-API-USER']
    org = str(uuid.uuid4())
    res = requests.put(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
        headers=org_admin_headers
    )
    assert res.status_code == 404
    response_contains_json(res, 'error', 'ORG_DNE_PARAM')


def test_org_admin_cannot_update_user_dne(org_admin_headers):
    """ services api prevents org admins from updating a user that doesn't exist """
    user = str(uuid.uuid4())
    org = org_admin_headers['CVE-API-ORG']
    res = requests.put(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
        headers=org_admin_headers
    )
    assert res.status_code == 404
    response_contains_json(res, 'error', 'USER_DNE')


def test_org_admin_cannot_update_user_for_another_org(org_admin_headers):
    """ services api prevents org admins from updating a user from a diff org """
    org, user = create_new_user_with_new_org_by_uuid()
    res = requests.put(
        f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
        headers=org_admin_headers
    )
    assert res.status_code == 403
    response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')
# Admins can't change user's org
# def test_org_admin_cannot_update_user_new_shortname_dne(org_admin_headers):
# """ services api prevents org admins from updating a user's org that doesn't exist """
# org = org_admin_headers['CVE-API-ORG']
# user = org_admin_headers['CVE-API-USER']
# org_shortname = str(uuid.uuid4())
# res = requests.put(
# f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?org_shortname={org_shortname}',
# headers=org_admin_headers
# )
# assert res.status_code == 404
# response_contains_json(res, 'error', 'ORG_DNE')
# Admins can't change user's org
# def test_org_admin_cannot_update_duplicate_user_with_new_shortname_and_username(org_admin_headers):
# """ services api prevents org admins from updating a user's org and username if that user already exist """
# org1 = org_admin_headers['CVE-API-ORG']
# user1 = org_admin_headers['CVE-API-USER']
# org2, user2 = create_new_user_with_new_org_by_uuid()
# res = requests.put(
# f'{env.AWG_BASE_URL}{ORG_URL}/{org1}/user/{user1}?org_shortname={org2}&new_username={user2}',
# headers=org_admin_headers
# )
# assert res.status_code == 403
# response_contains_json(res, 'error', 'DUPLICATE_USERNAME')
def test_org_admin_cannot_update_duplicate_user_with_new_username(org_admin_headers):
""" services api prevents org admins from updating a user's username if that user already exist """
org = org_admin_headers['CVE-API-ORG']
user1 = org_admin_headers['CVE-API-USER']
user2 = str(uuid.uuid4())
res = post_new_org_user(org, user2) # creating a user with same org as admin org user
assert res.status_code == 200
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user1}?new_username={user2}',
headers=org_admin_headers
)
assert res.status_code == 403
response_contains_json(res, 'error', 'DUPLICATE_USERNAME')
# Admin users aren't able to update a users org
# def test_org_admin_cannot_update_duplicate_user_with_new_shortname(org_admin_headers):
# """ services api prevents org admins from updating a user's org if that user already exist """
# user = org_admin_headers['CVE-API-USER']
# org1 = org_admin_headers['CVE-API-ORG']
# org2 = str(uuid.uuid4())
# res = create_new_user_with_new_org_by_shortname(org2, user) # creating a user with same username as org admin user's username
# res = requests.put(
# f'{env.AWG_BASE_URL}{ORG_URL}/{org1}/user/{user}?org_shortname={org2}',
# headers=org_admin_headers
# )
# assert res.status_code == 403
# response_contains_json(res, 'error', 'NOT_ALLOWED_TO_CHANGE_ORGANIZATION')
def test_org_admin_update_same_org_user_state_sn_un(org_admin_headers):
""" allows admin users to update a user's active state and user username """
org = org_admin_headers['CVE-API-ORG']
user = str(uuid.uuid4())
res = post_new_org_user(org, user) # creating a user with same org as admin org user
assert res.status_code == 200
new_username = str(uuid.uuid4()) # used in query
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?new_username={new_username}&active=false',
headers=org_admin_headers
)
assert res.status_code == 200
updated = json.loads(res.content.decode())['updated']
assert updated['active'] is False
assert updated['username'] == new_username
def test_org_admin_update_same_org_user_roles_name(org_admin_headers):
""" allows admin users to update a user's name, add role, and remove role """
org = org_admin_headers['CVE-API-ORG']
user = str(uuid.uuid4())
res = post_new_org_user(org, user) # creating a user with same org as admin org user
assert res.status_code == 200
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?active_roles.add=admin', # adding role
headers=org_admin_headers
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['authority']['active_roles'] == ["ADMIN"]
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?active_roles.remove=admin', # removing role
headers=org_admin_headers
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['authority']['active_roles'] == []
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?name.first=t&name.last=e&name.middle=s&name.suffix=t', # updating name
headers=org_admin_headers
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['name']['first'] == 't'
assert json.loads(res.content.decode())['updated']['name']['last'] == 'e'
assert json.loads(res.content.decode())['updated']['name']['middle'] == 's'
assert json.loads(res.content.decode())['updated']['name']['suffix'] == 't'
# Admin users can't change org?
# def test_org_admin_update_own_user_state_sn_un(org_admin_headers):
# """ allows admin users to update its own active state, org shortname, and user username """
# org = org_admin_headers['CVE-API-ORG']
# user = org_admin_headers['CVE-API-USER']
# new_shortname = str(uuid.uuid4()) # used in query
# new_username = str(uuid.uuid4()) # used in query
# res = post_new_org(new_shortname, new_shortname) # create new org
# assert res.status_code == 200
# res = requests.put(
# f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?org_shortname={new_shortname}&new_username={new_username}&active=false',
# headers=org_admin_headers
# )
# assert res.status_code == 200
# assert json.loads(res.content.decode())['updated']['active'] == False
# assert json.loads(res.content.decode())['updated']['username'] == new_username
# assert json.loads(res.content.decode())['updated']['username'] is not None
def test_org_admin_update_own_user_roles_name(org_admin_headers):
""" allows admin users to update its own name and remove role """
org = org_admin_headers['CVE-API-ORG']
user = org_admin_headers['CVE-API-USER']
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?active_roles.remove=admin', # removing role
headers=org_admin_headers
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['authority']['active_roles'] == []
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?active_roles.add=admin', # adding role
headers=org_admin_headers
)
assert res.status_code == 403
response_contains_json(res, 'error', 'NOT_ORG_ADMIN_OR_SECRETARIAT') # cannot add role because org admin doesn't have "ADMIN" role anymore
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?active_roles.add=admin', # adding "ADMIN" role back to org admin user
headers=utils.BASE_HEADERS
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['authority']['active_roles'] == ["ADMIN"]
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}?name.first=t&name.last=e&name.middle=s&name.suffix=t', # updating name
headers=org_admin_headers
)
assert res.status_code == 200
assert json.loads(res.content.decode())['updated']['name']['first'] == 't'
assert json.loads(res.content.decode())['updated']['name']['last'] == 'e'
assert json.loads(res.content.decode())['updated']['name']['middle'] == 's'
assert json.loads(res.content.decode())['updated']['name']['suffix'] == 't'
#### PUT /org/:shortname/user/:username/reset_secret ####
def test_org_admin_reset_secret_org_dne(org_admin_headers):
""" services api returns 404 when resetting a secret for an org that doesn't exist """
org = str(uuid.uuid4())
user = str(uuid.uuid4())
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
assert res.status_code == 404
response_contains_json(res, 'error', 'ORG_DNE_PARAM')
def test_org_admin_reset_secret_user_dne(org_admin_headers):
""" services api returns 404 when resetting a secret for a user that doesn't exist """
org = org_admin_headers['CVE-API-ORG']
user = str(uuid.uuid4())
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
assert res.status_code == 404
response_contains_json(res, 'error', 'USER_DNE')
def test_org_admin_reset_diff_org_secret(org_admin_headers):
""" services api prevents admin users to reset the secret of users of different org"""
org, user = create_new_user_with_new_org_by_uuid()
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
assert res.status_code == 403
response_contains_json(res, 'error', 'NOT_SAME_ORG_OR_SECRETARIAT')
def test_org_admin_reset_same_org_secret(org_admin_headers):
""" services api allows admin users to reset the secret of users of same org"""
org = org_admin_headers['CVE-API-ORG']
user = str(uuid.uuid4())
res = post_new_org_user(org, user)
assert res.status_code == 200
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
assert res.status_code == 200
response_contains(res, 'API-secret')
def test_org_admin_reset_own_secret(org_admin_headers):
""" services api allows admin users to reset their own secret """
org = org_admin_headers['CVE-API-ORG']
user = org_admin_headers['CVE-API-USER']
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
assert res.status_code == 200
response_contains(res, 'API-secret')
def test_admin_role_preserved_after_resetting_own_secret(org_admin_headers):
""" admin user's role remains after resetting own secret """
org = org_admin_headers['CVE-API-ORG']
user = org_admin_headers['CVE-API-USER']
res = requests.put(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}/reset_secret',
headers=org_admin_headers
)
secret = json.loads(res.content.decode())["API-secret"]
assert res.status_code == 200
headers2 = org_admin_headers
headers2['CVE-API-KEY'] = secret
response_contains(res, 'API-secret')
res2 = requests.get(
f'{env.AWG_BASE_URL}{ORG_URL}/{org}/user/{user}',
headers=headers2
)
assert res2.status_code == 200
assert json.loads(res2.content.decode())["authority"]["active_roles"][0] == "ADMIN" # admin role still remains after changing secret
| 41.518738 | 165 | 0.694394 | 3,100 | 21,050 | 4.425806 | 0.059032 | 0.08688 | 0.114796 | 0.072012 | 0.888848 | 0.862609 | 0.840452 | 0.828717 | 0.803061 | 0.794825 | 0 | 0.012433 | 0.17848 | 21,050 | 506 | 166 | 41.600791 | 0.780952 | 0.287363 | 0 | 0.673469 | 0 | 0.005831 | 0.218337 | 0.153647 | 0 | 0 | 0 | 0 | 0.195335 | 1 | 0.090379 | false | 0 | 0.017493 | 0 | 0.107872 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
04225a4e35825be5624b09a7800199cc2339ca2d | 34 | py | Python | attack_lookup/__init__.py | curated-intel/attack-lookup | 301a384abc58ea931930dba492b90be871218b92 | [
"MIT"
] | 20 | 2021-11-25T20:16:30.000Z | 2022-03-19T22:44:58.000Z | attack_lookup/__init__.py | curated-intel/attack-lookup | 301a384abc58ea931930dba492b90be871218b92 | [
"MIT"
] | null | null | null | attack_lookup/__init__.py | curated-intel/attack-lookup | 301a384abc58ea931930dba492b90be871218b92 | [
"MIT"
] | null | null | null | from .mapping import AttackMapping | 34 | 34 | 0.882353 | 4 | 34 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
043066db8fa6a05c0dd14189ceee4dcbd59fc7b3 | 191 | py | Python | src/assignments/main_assignment2.py | acc-cosc-1336/cosc-1336-spring-2018-BluishSilver-1 | cc4f066fc5d3c88007bc9e3bf0739f1384388086 | [
"MIT"
] | null | null | null | src/assignments/main_assignment2.py | acc-cosc-1336/cosc-1336-spring-2018-BluishSilver-1 | cc4f066fc5d3c88007bc9e3bf0739f1384388086 | [
"MIT"
] | null | null | null | src/assignments/main_assignment2.py | acc-cosc-1336/cosc-1336-spring-2018-BluishSilver-1 | cc4f066fc5d3c88007bc9e3bf0739f1384388086 | [
"MIT"
] | null | null | null | from assignment2 import faculty_evaluation_result
'''Write code to call the faculty_evaluation_result function with data of your choice'''
print(faculty_evaluation_result(5,10,15,20,25,30))
| 38.2 | 88 | 0.827225 | 30 | 191 | 5.066667 | 0.8 | 0.335526 | 0.453947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069364 | 0.094241 | 191 | 4 | 89 | 47.75 | 0.809249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
044ed63ad861c0d6477b762cb463573ed3f242ca | 1,754 | py | Python | tests/test_config.py | tsudd/docker-python-test | 1c866e475ae9c9ccb0c3f6d823609fa95ebe5adb | [
"MIT"
] | 1 | 2021-04-24T12:29:57.000Z | 2021-04-24T12:29:57.000Z | tests/test_config.py | tsudd/docker-python-test | 1c866e475ae9c9ccb0c3f6d823609fa95ebe5adb | [
"MIT"
] | null | null | null | tests/test_config.py | tsudd/docker-python-test | 1c866e475ae9c9ccb0c3f6d823609fa95ebe5adb | [
"MIT"
] | 1 | 2021-05-14T14:02:52.000Z | 2021-05-14T14:02:52.000Z | DICT_RESULT = "{\'sosediOptions\': {\'marke\\\\\":tName\': \'Sosedi\', \'goodsURL\': \'https://sosedi.by/sales/\'," \
" \'goodFields\': {\'pri ce \': True, \'priceBack\': \'priceBack\', \'sale\': " \
"[22, 5.7, \'nice\', None]}}, \'greenOptions\': {\'ff\': 22.8, \'goodsURL\': " \
"\'https://www.green-market.by/shares\', \'headers\': {\'accept-encoding\': \'gzip, deflate, br\'," \
" \'x-requested-with\': \'XMLHttpRequest\'}, " \
"\'formData\': \'page={0}&cat=\', " \
"\'goodHTMLSection\': {\'class\': \'stock-preview-item\'}}, \'damn\': \'wtf\'}"
JSON_FILE = """{
"sosediOptions": {
"marke\\\":tName": "Sosedi",
"goodsURL": "https://sosedi.by/sales/",
"goodFields": {
"pri ce ": true,
"priceBack": "priceBack",
"sale": [22, 5.7, "nice", null]
}
},
"greenOptions": {
"ff": 22.8,
"goodsURL": "https://www.green-market.by/shares",\
"headers": {
"accept-encoding": "gzip, deflate, br",
"x-requested-with": "XMLHttpRequest"
},
"formData": "page={0}&cat=",
"goodHTMLSection": {
"class": "stock-preview-item"
}
},
"damn": "wtf"
}"""
DATA_DICT = {"cool": ["228", "nice"], "gogo": {"good": True, "nice": 229, "dont": {"lol": 20.9}}, "next": None}
PARSED_DICT = "{ \"cool\": [ \"228\", \"nice\" ], \"gogo\": { \"good\": true," \
" \"nice\": 229, \"dont\": { \"lol\": 20.9 } }, \"next\": null }"
def sum_two_elements(a=0, b=0):
rez = a + b
print_equation(a, b, rez)
return rez
def print_equation(a, b, c):
print(f"{a} + {b} = {c}")
def fib_nums(n):
# base case returns 1 for every n < 1, so this yields 1, 2, 3, 5, 8, ... for n = 0, 1, 2, 3, 4, ...
if n < 1:
return 1
return fib_nums(n - 1) + fib_nums(n - 2)
| 33.09434 | 117 | 0.474344 | 189 | 1,754 | 4.343915 | 0.433862 | 0.063337 | 0.029233 | 0.070646 | 0.791717 | 0.791717 | 0.791717 | 0.791717 | 0.791717 | 0.791717 | 0 | 0.030234 | 0.245724 | 1,754 | 52 | 118 | 33.730769 | 0.590325 | 0 | 0 | 0 | 0 | 0 | 0.641391 | 0.038198 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0 | 0 | 0.136364 | 0.068182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f09646606337a26ae61fdac6d82a65fd88ea3ada | 1,920 | py | Python | tests/fixtures/test_abstract/content_02_expected.py | elifesciences/elife-tools | ee345bf0e6703ef0f7e718355e85730abbdfd117 | [
"MIT"
] | 9 | 2015-04-16T08:13:31.000Z | 2020-05-18T14:03:06.000Z | tests/fixtures/test_abstract/content_02_expected.py | elifesciences/elife-tools | ee345bf0e6703ef0f7e718355e85730abbdfd117 | [
"MIT"
] | 310 | 2015-02-11T00:30:09.000Z | 2021-07-14T23:58:50.000Z | tests/fixtures/test_abstract/content_02_expected.py | elifesciences/elife-tools | ee345bf0e6703ef0f7e718355e85730abbdfd117 | [
"MIT"
] | 9 | 2015-02-04T01:21:28.000Z | 2021-06-15T12:50:47.000Z | from collections import OrderedDict
expected = u"To estimate the proportion of rotavirus gastroenteritis (RVGE) among children aged less than 5\u2005years who had been diagnosed with acute gastroenteritis (AGE) and admitted to hospitals and emergency rooms (ERs). The seasonal distribution of RVGE and most prevalent rotavirus (RV) strains was also assessed. A cross-sectional hospital-based surveillance study. 5 reference paediatric hospitals across Abidjan. Children aged less than 5\u2005years, who were hospitalised/visiting ERs for WHO-defined AGE, were enrolled. Written informed consent was obtained from parents/guardians before enrolment. Children who acquired nosocomial infection were excluded from the study. The proportion of RVGE among AGE hospitalisations and ER visits was expressed with 95% exact CI. Stool samples were collected from all enrolled children and were tested for the presence of RV using an enzyme immunoassay. RV-positive samples were serotyped using reverse transcriptase-PCR. Of 357 enrolled children (mean age 13.6\xb111.14\u2005months), 332 were included in the final analyses; 56.3% (187/332) were hospitalised and 43.7% (145/332) were admitted to ERs. The proportion of RVGE hospitalisations and ER visits among all AGE cases was 30.1% (95% CI 23.6% to 37.3%) and 26.9% (95% CI 19.9% to 34.9%), respectively. Ninety-five children (28.6%) were RV positive; the highest number of RVGE cases was observed in children aged 6\u201311\u2005months. The number of GE cases peaked in July and August 2008; the highest percentage of RV-positive cases was observed in January 2008. G1P[8] wild-type and G8P[6] were the most commonly detected strains. RVGE causes substantial morbidity among children under 5\u2005years of age and remains a health concern in the Republic of Ivory Coast, where implementation of prevention strategies such as vaccination might help to reduce disease burden."
| 480 | 1,882 | 0.808333 | 306 | 1,920 | 5.071895 | 0.539216 | 0.015464 | 0.028995 | 0.025773 | 0.043814 | 0.043814 | 0.043814 | 0 | 0 | 0 | 0 | 0.062006 | 0.143229 | 1,920 | 3 | 1,883 | 640 | 0.881459 | 0 | 0 | 0 | 0 | 0.5 | 0.972917 | 0.036458 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f0b39f6a7bde2e1c109819e047cfab68df60ba6a | 33 | py | Python | candex/__init__.py | wknoben/candex | 0f73a2075a28d75dcce50fa8ee3d6890776f4223 | [
"Apache-2.0"
] | 2 | 2020-06-16T16:42:17.000Z | 2021-01-22T10:19:35.000Z | candex/__init__.py | goosefall/candex | b7efc001aa00176f76c3b7735d06fa43fed7072b | [
"Apache-2.0"
] | null | null | null | candex/__init__.py | goosefall/candex | b7efc001aa00176f76c3b7735d06fa43fed7072b | [
"Apache-2.0"
] | 1 | 2021-04-12T05:15:10.000Z | 2021-04-12T05:15:10.000Z | from .functions import lat_lon_2D | 33 | 33 | 0.878788 | 6 | 33 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.090909 | 33 | 1 | 33 | 33 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0d24db0264558a5af7231c3cba51dc3e415a75f | 9,270 | py | Python | tests/test_rules.py | mweltin/sticks | 84f6f82bece0ac8b47824545ebdbf4c1e52d193c | [
"MIT"
] | null | null | null | tests/test_rules.py | mweltin/sticks | 84f6f82bece0ac8b47824545ebdbf4c1e52d193c | [
"MIT"
] | null | null | null | tests/test_rules.py | mweltin/sticks | 84f6f82bece0ac8b47824545ebdbf4c1e52d193c | [
"MIT"
] | null | null | null | import rules.rules as rules
import environment.env as env
import unittest
class RulesTestCase(unittest.TestCase):
def test_has_winner_opponent_wins(self):
value = [[0, 0], [0, 1]]
act = rules.has_winner(value)
self.assertEqual(env.Players.opponent, act)
def test_has_winner_when_agent_wins(self):
value = [[1, 0], [0, 0]]
act = rules.has_winner(value)
self.assertEqual(env.Players.agent, act)
def test_has_winner_when_there_is_no_winner(self):
value = [[1, 0], [0, 1]]
act = rules.has_winner(value)
self.assertFalse(act)
self.assertNotEqual(act, env.Players.agent)
def test_can_swap_returns_true_if_one_hand_is_empty_and_the_other_has_and_even_number(self):
value = [0, 2]
act = rules.can_swap(value)
self.assertTrue(act)
def test_can_swap_false_if_one_hand_is_empty_and_the_other_has_and_odd_number(self):
value = [0, 3]
act = rules.can_swap(value)
self.assertFalse(act)
def test_can_swap_false_if_both_hands_are_not_empty(self):
value = [1, 3]
act = rules.can_swap(value)
self.assertFalse(act)
def test_swap_returns_an_array_of_two_elements_and_both_elements_are_half_of_largest_element_in_input_array(self):
val = 4
value = [0, val]
act = rules.swap(value)
self.assertEqual([val / 2, val / 2], act)
def test_get_opponent_player_index(self):
active_player_index = 0
opponent_index = rules.get_opponent_player_index(active_player_index)
self.assertEqual(1, opponent_index)
active_player_index = 1
opponent_index = rules.get_opponent_player_index(active_player_index)
self.assertEqual(0, opponent_index)
def test_take_turn_allows_for_swap(self):
state = [[0, 4], [1, 1]]
active_player_index = 0
action = env.action_table[0]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual([2, 2], new_state[active_player_index])
def test_take_turn_handles_left_to_left_for_both_players(self):
state = [[2, 1], [1, 2]]
expected_state = [[2, 1], [3, 2]]
active_player_index = 0
action = env.action_table[1]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
state = [[2, 1], [1, 2]]
expected_state = [[3, 1], [1, 2]]
active_player_index = 1
action = env.action_table[1]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_take_turn_handles_left_to_right_for_both_players(self):
state = [[2, 1], [1, 2]]
expected_state = [[2, 1], [1, 4]]
active_player_index = 0
action = env.action_table[2]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
state = [[2, 1], [1, 2]]
expected_state = [[2, 2], [1, 2]]
active_player_index = 1
action = env.action_table[2]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_take_turn_handles_right_to_left_for_both_players(self):
state = [[2, 1], [1, 2]]
expected_state = [[2, 1], [2, 2]]
active_player_index = 0
action = env.action_table[4]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
state = [[2, 1], [1, 2]]
expected_state = [[4, 1], [1, 2]]
active_player_index = 1
action = env.action_table[4]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_take_turn_handles_right_to_right_for_both_players(self):
state = [[2, 1], [1, 2]]
expected_state = [[2, 1], [1, 3]]
active_player_index = 0
action = env.action_table[3]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
state = [[2, 1], [1, 2]]
expected_state = [[2, 3], [1, 2]]
active_player_index = 1
action = env.action_table[3]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_take_turn_handles_case_when_outcome_is_above_five(self):
state = [[4, 1], [1, 4]]
expected_state = [[4, 1], [1, 3]]
active_player_index = 0
action = env.action_table[2]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_take_turn_handles_case_when_outcome_is_equal_five(self):
state = [[4, 1], [1, 1]]
expected_state = [[4, 1], [1, 0]]
active_player_index = 0
action = env.action_table[2]
new_state = rules.take_turn(state, active_player_index, action)
self.assertEqual(expected_state, new_state)
def test_get_valid_actions_identifies_when_swap_is_valid(self):
state = [[4, 0], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertIn(env.action_table.index([env.Actions.SWAP]), valid_moves)
state = [[4, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.SWAP]), valid_moves)
def test_get_valid_actions_identifies_when_right_right_is_valid(self):
state = [[4, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertTrue(env.action_table.index([env.Actions.RIGHT, env.Actions.RIGHT]), valid_moves)
def test_get_valid_actions_identifies_when_right_right_is_not_valid(self):
state = [[4, 1], [1, 0]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.RIGHT, env.Actions.RIGHT]), valid_moves)
state = [[4, 0], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.RIGHT, env.Actions.RIGHT]), valid_moves)
def test_get_valid_actions_identifies_when_right_left_is_valid(self):
state = [[4, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertIn(env.action_table.index([env.Actions.RIGHT, env.Actions.LEFT]), valid_moves)
def test_get_valid_actions_identifies_when_right_left_is_not_valid(self):
state = [[4, 0], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.RIGHT, env.Actions.LEFT]), valid_moves)
state = [[4, 1], [0, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.RIGHT, env.Actions.LEFT]), valid_moves)
def test_get_valid_actions_identifies_when_left_left_is_valid(self):
state = [[4, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertIn(env.action_table.index([env.Actions.LEFT, env.Actions.LEFT]), valid_moves)
def test_get_valid_actions_identifies_when_left_left_is_not_valid(self):
state = [[4, 1], [0, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.LEFT, env.Actions.LEFT]), valid_moves)
state = [[0, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.LEFT, env.Actions.LEFT]), valid_moves)
def test_get_valid_actions_identifies_when_left_right_is_valid(self):
state = [[4, 1], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertIn(env.action_table.index([env.Actions.LEFT, env.Actions.RIGHT]), valid_moves)
def test_get_valid_actions_identifies_when_left_right_is_not_valid(self):
state = [[4, 1], [3, 0]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.LEFT, env.Actions.RIGHT]), valid_moves)
state = [[0, 2], [1, 1]]
active_player_index = 0
valid_moves = rules.get_valid_actions(state, active_player_index)
self.assertNotIn(env.action_table.index([env.Actions.LEFT, env.Actions.RIGHT]), valid_moves)
if __name__ == '__main__':
unittest.TestLoader.sortTestMethodsUsing = None
unittest.main()
| 42.328767 | 118 | 0.67411 | 1,305 | 9,270 | 4.413793 | 0.073563 | 0.110764 | 0.162326 | 0.099306 | 0.877431 | 0.864931 | 0.844965 | 0.825868 | 0.813889 | 0.77934 | 0 | 0.02803 | 0.214887 | 9,270 | 218 | 119 | 42.522936 | 0.763397 | 0 | 0 | 0.607735 | 0 | 0 | 0.000863 | 0 | 0 | 0 | 0 | 0 | 0.19337 | 1 | 0.132597 | false | 0 | 0.016575 | 0 | 0.154696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f0e4411dfe73f22993a6e64393101095d5514d94 | 190 | py | Python | jupyter-lab-serverless/config.py | u2takey/jupyter-lab-serverless | e43b3e64afb5643fb72d901b395d72add3a7be2f | [
"Apache-2.0"
] | 6 | 2020-03-13T23:58:31.000Z | 2021-08-29T07:33:29.000Z | jupyter-lab-serverless/config.py | u2takey/jupyter-lab-serverless | e43b3e64afb5643fb72d901b395d72add3a7be2f | [
"Apache-2.0"
] | 3 | 2021-08-05T03:07:04.000Z | 2022-03-25T21:34:03.000Z | jupyter-lab-serverless/config.py | u2takey/jupyter-lab-serverless | e43b3e64afb5643fb72d901b395d72add3a7be2f | [
"Apache-2.0"
] | 1 | 2020-04-06T16:30:28.000Z | 2020-04-06T16:30:28.000Z | from traitlets.config import Configurable
class LatexConfig(Configurable):
"""
A Configurable that declares the configuration options
for the FunctionHandler.
"""
pass
| 19 | 58 | 0.726316 | 19 | 190 | 7.263158 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215789 | 190 | 9 | 59 | 21.111111 | 0.926175 | 0.415789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f0fe40fb3c6ff17e19d1a5d5c98983c0b83ba8d7 | 34 | py | Python | test/login.py | MeiBianChuiDi/gz02 | 5c839a85f1409c0abdd9542c4f4d1c20f66a1a28 | [
"MIT"
] | null | null | null | test/login.py | MeiBianChuiDi/gz02 | 5c839a85f1409c0abdd9542c4f4d1c20f66a1a28 | [
"MIT"
] | null | null | null | test/login.py | MeiBianChuiDi/gz02 | 5c839a85f1409c0abdd9542c4f4d1c20f66a1a28 | [
"MIT"
] | null | null | null | num =1
num1=10
num2=20
num3=30
| 4.25 | 7 | 0.647059 | 8 | 34 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.384615 | 0.235294 | 34 | 7 | 8 | 4.857143 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9bd63c4830840b56cb87afbd7ddb5ceec428af67 | 15,397 | py | Python | tests/test_taskpane.py | pyro-team/bkt-toolbox | bbccba142a81ca0a46056f2bcda75899979158a5 | [
"MIT"
] | 12 | 2019-05-31T02:57:26.000Z | 2022-03-26T09:40:50.000Z | tests/test_taskpane.py | mrflory/bkt-toolbox | bbccba142a81ca0a46056f2bcda75899979158a5 | [
"MIT"
] | 27 | 2021-11-27T16:33:19.000Z | 2022-03-27T17:47:26.000Z | tests/test_taskpane.py | pyro-team/bkt-toolbox | bbccba142a81ca0a46056f2bcda75899979158a5 | [
"MIT"
] | 3 | 2019-06-12T10:59:20.000Z | 2020-04-21T15:13:50.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
import unittest
import bkt
XMLNS = ' xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"'
XMLNS_FR = ' xmlns="urn:fluent-ribbon"'
class TaskpaneBaseObjectTest(unittest.TestCase):
    def test_XamlPropertyElement(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        # default XamlPropertyElement
        b = bkt.taskpane.XamlPropertyElement()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.NotSpecified' + XMLNS + ' />')
        #self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.Resources xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" />')
        # specifying property name failed
        b = bkt.taskpane.XamlPropertyElement(property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.PropertyName' + XMLNS + ' />')
        # specifying type name failed
        b = bkt.taskpane.XamlPropertyElement(type_name="TypeName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TypeName.NotSpecified' + XMLNS + ' />')
        # specifying type name and property name failed
        b = bkt.taskpane.XamlPropertyElement(type_name="TypeName", property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TypeName.PropertyName' + XMLNS + ' />')
        # specifying type name at xml-generation failed
        b = bkt.taskpane.XamlPropertyElement(property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml("TypeName")), u'<TypeName.PropertyName' + XMLNS + ' />')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_XamlPropertyElement_fixed_type(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        myclass = type("myclassname", (bkt.taskpane.XamlPropertyElement,), {'_type_name': 'FixedTypeName'})
        # Definition of XamlPropertyElement with fixed type name failed
        b = myclass()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<FixedTypeName.NotSpecified' + XMLNS + ' />')
        # specifying property name failed
        b = myclass(property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<FixedTypeName.PropertyName' + XMLNS + ' />')
        # type name should be overwritable
        b = myclass(type_name="TypeName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TypeName.NotSpecified' + XMLNS + ' />')
        # type name and property name should be overwritable
        b = myclass(type_name="TypeName", property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TypeName.PropertyName' + XMLNS + ' />')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_XamlPropertyElement_fixed_property(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        myclass = type("myclassname", (bkt.taskpane.XamlPropertyElement,), {'_property_name': 'FixedPropertyName'})
        # Definition of XamlPropertyElement with fixed property name failed
        b = myclass()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.FixedPropertyName' + XMLNS + ' />')
        # property name should be overwritable
        b = myclass(property_name="PropertyName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.PropertyName' + XMLNS + ' />')
        # specifying type name failed
        b = myclass(type_name="TypeName")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TypeName.FixedPropertyName' + XMLNS + ' />')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_XamlPropertyElements(self):
        self.maxDiff = None
        bkt.taskpane.TaskPaneControl.no_id = True
        # simple usage of XamlPropertyElementGenerator failed
        b = bkt.taskpane.XamlPropertyElements.Resources()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<NotSpecified.Resources' + XMLNS + ' />')
        # specification of type name failed
        b = bkt.taskpane.XamlPropertyElements.Resources("Button")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button.Resources' + XMLNS + ' />')
        # specification of type name failed
        b = bkt.taskpane.XamlPropertyElements.Resources(type_name="Button")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button.Resources' + XMLNS + ' />')
        # property name should be overwritable
        b = bkt.taskpane.XamlPropertyElements.Resources(type_name="Button", property_name="Overwritten")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button.Overwritten' + XMLNS + ' />')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_XamlPropertyElement_Attribute(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        # usage of XamlPropertyElement as attribute failed
        b = bkt.taskpane.TaskPaneControl(resources=bkt.taskpane.XamlPropertyElement(property_name="PropertyName"))
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TaskPaneControl.PropertyName />\r\n</TaskPaneControl>')
        # usage of XamlPropertyElementGenerator as attribute failed
        b = bkt.taskpane.TaskPaneControl(resources=bkt.taskpane.XamlPropertyElements.Resources())
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TaskPaneControl.Resources />\r\n</TaskPaneControl>')
        # usage of XamlPropertyElementGenerator with other xml-namespace failed
        b = bkt.taskpane.FluentRibbon.Button(resources=bkt.taskpane.XamlPropertyElements.Resources())
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button' + XMLNS_FR + '>\r\n  <Button.Resources />\r\n</Button>')
        # type name should not be overwritable
        b = bkt.taskpane.TaskPaneControl(resources=bkt.taskpane.XamlPropertyElement(type_name="TypeName", property_name="PropertyName"))
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TaskPaneControl.PropertyName />\r\n</TaskPaneControl>')
        # type name should not be overwritable
        b = bkt.taskpane.TaskPaneControl(resources=bkt.taskpane.XamlPropertyElements.Resources(type_name="TypeName"))
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TaskPaneControl.Resources />\r\n</TaskPaneControl>')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_XamlPropertyElement_Child(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        # usage of XamlPropertyElement as child-element failed
        b = bkt.taskpane.TaskPaneControl(children=[bkt.taskpane.XamlPropertyElement(type_name="TypeName", property_name="PropertyName")])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TypeName.PropertyName />\r\n</TaskPaneControl>')
        # usage of XamlPropertyElementGenerator as child-element failed
        b = bkt.taskpane.TaskPaneControl(children=[bkt.taskpane.XamlPropertyElements.Resources("TypeName")])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <TypeName.Resources />\r\n</TaskPaneControl>')
        # type name should have no fallback if child definition is used
        b = bkt.taskpane.TaskPaneControl(children=[bkt.taskpane.XamlPropertyElement(property_name="PropertyName")])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <NotSpecified.PropertyName />\r\n</TaskPaneControl>')
        # type name should have no fallback if child definition is used
        b = bkt.taskpane.TaskPaneControl(children=[bkt.taskpane.XamlPropertyElements.Resources()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<TaskPaneControl' + XMLNS + '>\r\n  <NotSpecified.Resources />\r\n</TaskPaneControl>')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_WPFRibbon(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        # definition of RibbonButton failed
        b = bkt.taskpane.Ribbon.RibbonButton()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<RibbonButton xmlns="clr-namespace:System.Windows.Controls.Ribbon;assembly=System.Windows.Controls.Ribbon" />')
        bkt.taskpane.TaskPaneControl.no_id = False

    def test_FluentRibbon(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        bkt.taskpane.FluentRibbonControl.no_id = True
        # definition of FluentRibbon-Button failed
        b = bkt.taskpane.FluentRibbon.Button()
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button xmlns="urn:fluent-ribbon" />')
        bkt.taskpane.FluentRibbonControl.no_id = False

    def test_FluentRibbon_ScreenTip(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        bkt.taskpane.FluentRibbonControl.no_id = True
        # Button with simple tooltip attribute failed
        b = bkt.taskpane.FluentRibbon.Button(tool_tip="Tooltip text")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button ToolTip="Tooltip text"' + XMLNS_FR + ' />')
        # Definition ToolTip-Property-Element failed
        b = bkt.taskpane.ToolTip("Button")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button.ToolTip' + XMLNS_FR + ' />')
        # Definition of Screentip-Element failed
        b = bkt.taskpane.FluentRibbon.ScreenTip(text="screentip text")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<ScreenTip Text="screentip text"' + XMLNS_FR + ' />')
        # Definition of Screentip-Element failed
        b = bkt.taskpane.FluentRibbon.ScreenTip(text="screentip text", title="screentip title", disable_reason="This button is disabled because ...", help_topic="Info for additional help")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<ScreenTip DisableReason="This button is disabled because ..." HelpTopic="Info for additional help" Text="screentip text" Title="screentip title"' + XMLNS_FR + ' />')
        # Screentip-attribute should be parsed to ScreenTip-object
        b = bkt.taskpane.FluentRibbon.Button(screentip="Screentip text")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button' + XMLNS_FR + '>\r\n  <Button.ToolTip>\r\n    <ScreenTip IsRibbonAligned="False" Text="Screentip text" />\r\n  </Button.ToolTip>\r\n</Button>')
        # Screentip-attribute should be parsed to ScreenTip-object
        b = bkt.taskpane.FluentRibbon.Button(screentip="Screentip text", screentip_title="Title", disable_reason="This button is disabled because ...", help_topic="Info for additional help")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button' + XMLNS_FR + '>\r\n  <Button.ToolTip>\r\n    <ScreenTip DisableReason="This button is disabled because ..." HelpTopic="Info for additional help" IsRibbonAligned="False" Text="Screentip text" Title="Title" />\r\n  </Button.ToolTip>\r\n</Button>')
        # Screentip definition should overwrite tooltip
        b = bkt.taskpane.FluentRibbon.Button(tool_tip="Tooltip text", screentip="Screentip text")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button' + XMLNS_FR + '>\r\n  <Button.ToolTip>\r\n    <ScreenTip IsRibbonAligned="False" Text="Screentip text" />\r\n  </Button.ToolTip>\r\n</Button>')
        # Definition of screentip through tooltip-attribute failed
        b = bkt.taskpane.FluentRibbon.Button(tool_tip=bkt.taskpane.FluentRibbon.ScreenTip(text="Screentip text"))
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button' + XMLNS_FR + '>\r\n  <Button.ToolTip>\r\n    <ScreenTip Text="Screentip text" />\r\n  </Button.ToolTip>\r\n</Button>')
        bkt.taskpane.FluentRibbonControl.no_id = False

    def test_FluentRibbon_Image(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        bkt.taskpane.FluentRibbonControl.no_id = True
        b = bkt.taskpane.FluentRibbon.Button(image="test_image")
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Button Icon="{StaticResource test_image}"' + XMLNS_FR + ' />')

    def test_ExpanderStackPanel(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        bkt.taskpane.FluentRibbonControl.no_id = True
        b = bkt.taskpane.Expander(auto_stack=True, children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Expander IsExpanded="false"'+XMLNS+'>\r\n  <StackPanel Orientation="Vertical">\r\n    <Button />\r\n  </StackPanel>\r\n</Expander>')
        b = bkt.taskpane.Expander(auto_wrap=True, children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Expander IsExpanded="false"'+XMLNS+'>\r\n  <WrapPanel Orientation="Horizontal">\r\n    <Button />\r\n  </WrapPanel>\r\n</Expander>')
        b = bkt.taskpane.Expander(auto_stack=True, header="Test Header", children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Expander Header="Test Header" IsExpanded="false"'+XMLNS+'>\r\n  <StackPanel Orientation="Vertical">\r\n    <Button />\r\n  </StackPanel>\r\n</Expander>')
        b = bkt.taskpane.Expander(auto_stack=True, is_expanded=True, children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<Expander IsExpanded="true"'+XMLNS+'>\r\n  <StackPanel Orientation="Vertical">\r\n    <Button />\r\n  </StackPanel>\r\n</Expander>')

    def test_Group(self):
        bkt.taskpane.TaskPaneControl.no_id = True
        bkt.taskpane.FluentRibbonControl.no_id = True
        self.maxDiff = None
        b = bkt.taskpane.Group(auto_wrap=True, children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<StackPanel Orientation="Vertical"'+XMLNS+'>\r\n  <Grid Margin="0,10,0,5">\r\n    <Grid.ColumnDefinitions>\r\n      <ColumnDefinition Width="*" />\r\n    </Grid.ColumnDefinitions>\r\n    <Border Height="1" Background="{StaticResource BKTDivider}" HorizontalAlignment="Stretch" SnapsToDevicePixels="True" Margin="7,3,10,3" />\r\n  </Grid>\r\n  <WrapPanel Orientation="Horizontal">\r\n    <Button />\r\n  </WrapPanel>\r\n</StackPanel>')
        b = bkt.taskpane.Group(auto_wrap=True, show_separator=False, children=[bkt.taskpane.Wpf.Button()])
        self.assertEqual(bkt.xml.WpfXMLFactory.to_string(b.wpf_xml()), u'<StackPanel Orientation="Vertical"'+XMLNS+'>\r\n  <WrapPanel Orientation="Horizontal">\r\n    <Button />\r\n  </WrapPanel>\r\n</StackPanel>')
| 60.144531 | 507 | 0.68942 | 1,815 | 15,397 | 5.7427 | 0.088705 | 0.083373 | 0.074259 | 0.086635 | 0.875468 | 0.843807 | 0.819438 | 0.794013 | 0.767821 | 0.716109 | 0 | 0.00157 | 0.172826 | 15,397 | 255 | 508 | 60.380392 | 0.816818 | 0.11827 | 0 | 0.378788 | 0 | 0.090909 | 0.281989 | 0.117465 | 0 | 0 | 0 | 0 | 0.318182 | 1 | 0.090909 | false | 0 | 0.022727 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
50127204a0103d3932d88899f06339333fdbd9a7 | 61 | py | Python | ch1/Ex_1.3.4.1/task06.py | jiayushe/cpbook-code | c3fb85a1acc5f31e15879741e4c826684243fddf | [
"UPL-1.0"
] | 1,441 | 2018-12-03T23:46:17.000Z | 2022-03-29T06:36:43.000Z | ch1/Ex_1.3.4.1/task06.py | jiayushe/cpbook-code | c3fb85a1acc5f31e15879741e4c826684243fddf | [
"UPL-1.0"
] | 53 | 2018-12-11T13:50:35.000Z | 2022-03-20T04:30:39.000Z | ch1/Ex_1.3.4.1/task06.py | jiayushe/cpbook-code | c3fb85a1acc5f31e15879741e4c826684243fddf | [
"UPL-1.0"
] | 420 | 2018-12-04T11:22:08.000Z | 2022-03-27T15:25:33.000Z | from bisect import bisect_left
print(v == bisect_left(L, v))
| 20.333333 | 30 | 0.754098 | 11 | 61 | 4 | 0.636364 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 61 | 2 | 31 | 30.5 | 0.830189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
50362b903c5cb464a79f3ff5a05c636e9dd2277c | 131 | py | Python | src/masonite_permission/providers/__init__.py | yubarajshrestha/masonite-permission | 5807b80a50b94526efbc03f0933d3960087a7e54 | [
"MIT"
] | 4 | 2022-03-15T13:52:37.000Z | 2022-03-17T05:26:54.000Z | src/masonite_permission/providers/__init__.py | yubarajshrestha/masonite-permission | 5807b80a50b94526efbc03f0933d3960087a7e54 | [
"MIT"
] | 2 | 2022-03-15T06:36:59.000Z | 2022-03-15T09:41:47.000Z | src/masonite_permission/providers/__init__.py | yubarajshrestha/masonite-permission | 5807b80a50b94526efbc03f0933d3960087a7e54 | [
"MIT"
] | null | null | null | # flake8: noqa: E501
from .PermissionProvider import PermissionProvider
from .PermissionGateProvider import PermissionGateProvider
| 32.75 | 58 | 0.870229 | 11 | 131 | 10.363636 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033613 | 0.091603 | 131 | 3 | 59 | 43.666667 | 0.92437 | 0.137405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |