hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a4474528822317d73d77d6d9db96f75001ee8d9a | 1,549 | py | Python | tracardi/tests/unit/mocks/mock_storage.py | ryomahan/read-tracardi | 216b05395d70a869593a60e2804e0a71b58dfc8f | [
"MIT"
] | 29 | 2021-04-17T06:04:46.000Z | 2021-11-25T10:22:43.000Z | tracardi/tests/unit/mocks/mock_storage.py | ryomahan/read-tracardi | 216b05395d70a869593a60e2804e0a71b58dfc8f | [
"MIT"
] | 77 | 2021-04-03T21:20:04.000Z | 2021-10-17T10:12:23.000Z | tracardi/tests/unit/mocks/mock_storage.py | ryomahan/read-tracardi | 216b05395d70a869593a60e2804e0a71b58dfc8f | [
"MIT"
] | 17 | 2021-06-29T13:13:18.000Z | 2021-10-17T10:52:57.000Z | sessions = [{
"1": {
"type": "session",
"source": {"id": "scope"},
"id": "1",
'profile': {"id": "1"}
}
}]
profiles = [
{"1": {'id': "1", "traits": {}}},
{"2": {'id': "2", "traits": {}}},
]
class MockStorageCrud:
def __init__(self, index, domain_class_ref, entity):
self.index = index
self.domain_class_ref = domain_class_ref
self.entity = entity
if index == 'session':
self.data = sessions
elif index == 'profile':
self.data = profiles
async def load(self):
for item in self.data:
if self.entity.id in item:
return self.domain_class_ref(**item[self.entity.id])
return None
async def save(self):
self.data.append({self.entity.id: self.entity.dict(exclude_unset=True)})
    async def delete(self):
        # the backing store is a list of {id: payload} dicts, so remove the
        # matching record by key membership instead of indexing the list by id
        self.data[:] = [item for item in self.data if self.entity.id not in item]
class EntityStorageCrud:
def __init__(self, index, entity):
self.index = index
self.entity = entity
if index == 'session':
self.data = sessions
elif index == 'profile':
self.data = profiles
async def load(self, domain_class_ref):
for item in self.data:
if self.entity.id in item:
return domain_class_ref(**item[self.entity.id])
return None
async def save(self):
self.data.append({self.entity.id: self.entity.dict(exclude_unset=True)})
    async def delete(self):
        # same fix as MockStorageCrud.delete: drop the dict keyed by this id
        self.data[:] = [item for item in self.data if self.entity.id not in item]
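# Illustrative usage sketch (not part of the original mock; assumes a
# pydantic-style Entity/Profile pair exposing `id` and `dict(exclude_unset=True)`,
# which is what save() relies on):
#   crud = EntityStorageCrud('profile', Entity(id="1"))
#   profile = await crud.load(Profile)   # -> Profile built from profiles[0]["1"]
#   await crud.save()                    # appends {"1": {...}} to profiles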
| 24.203125 | 80 | 0.551323 | 187 | 1,549 | 4.449198 | 0.208556 | 0.144231 | 0.115385 | 0.064904 | 0.75 | 0.697115 | 0.697115 | 0.697115 | 0.697115 | 0.697115 | 0 | 0.0065 | 0.304713 | 1,549 | 63 | 81 | 24.587302 | 0.766017 | 0 | 0 | 0.553191 | 0 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a44a4231fa12b52d4dbb4a4c31e144cce9f32f67 | 491 | py | Python | examples/null_support/client.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | [
"Apache-2.0"
] | 269 | 2020-05-06T03:22:43.000Z | 2022-03-26T21:05:24.000Z | examples/null_support/client.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | [
"Apache-2.0"
] | 19 | 2020-06-03T03:46:39.000Z | 2022-03-30T20:24:55.000Z | examples/null_support/client.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | [
"Apache-2.0"
] | 39 | 2020-05-27T07:23:12.000Z | 2022-03-27T13:10:24.000Z | import grpc
import snippets_pb2
import snippets_pb2_grpc
from google.protobuf.struct_pb2 import NullValue
with grpc.insecure_channel('localhost:50051') as channel:
stub = snippets_pb2_grpc.SnippetControllerStub(channel)
request = snippets_pb2.Snippet(id=1, title='snippet title')
# send non-null value
# request.language.value = "python"
# send null value
request.language.null = NullValue.NULL_VALUE
response = stub.Update(request)
print(response, end='')
| 30.6875 | 63 | 0.753564 | 63 | 491 | 5.730159 | 0.507937 | 0.121884 | 0.094183 | 0.132964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026506 | 0.154786 | 491 | 15 | 64 | 32.733333 | 0.843373 | 0.14053 | 0 | 0 | 0 | 0 | 0.066986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a44c48f3e3ce5e20e3e7b72b596dd9b53b3b830c | 326 | py | Python | exercicios_mundo-I/ex008.py | Lucas-Lourencao/ExerciciosPython | e43ce23dbddd126abb9ce6c02bd39dac3677856e | [
"MIT"
] | null | null | null | exercicios_mundo-I/ex008.py | Lucas-Lourencao/ExerciciosPython | e43ce23dbddd126abb9ce6c02bd39dac3677856e | [
"MIT"
] | null | null | null | exercicios_mundo-I/ex008.py | Lucas-Lourencao/ExerciciosPython | e43ce23dbddd126abb9ce6c02bd39dac3677856e | [
"MIT"
] | null | null | null | m = float(input('Digite uma distância em metros: '))
k = m/1000
hct = m/100
dct = m/10
dcm = m*10
cm = m*100
mm = m*1000
print('The distance of {} meters corresponds to:\n{} Kilometers;\n{} Hectometers;\n{} Decameters;\n{:.0f} Decimeters;\n{:.0f} Centimeters and;\n{:.0f} Millimeters.'.format(m, k, hct, dct, dcm, cm, mm))
| 36.222222 | 204 | 0.662577 | 58 | 326 | 3.724138 | 0.551724 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074468 | 0.134969 | 326 | 8 | 205 | 40.75 | 0.691489 | 0 | 0 | 0 | 0 | 0.125 | 0.58589 | 0.263804 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a450260365598db276731bd5e9a50fe2e149415c | 180 | py | Python | ta_bot/__main__.py | ToxicPie/discord-ta-bot | 7ea163b11491039d2a287195d2026c4ed4b50b3b | [
"MIT"
] | null | null | null | ta_bot/__main__.py | ToxicPie/discord-ta-bot | 7ea163b11491039d2a287195d2026c4ed4b50b3b | [
"MIT"
] | null | null | null | ta_bot/__main__.py | ToxicPie/discord-ta-bot | 7ea163b11491039d2a287195d2026c4ed4b50b3b | [
"MIT"
] | null | null | null | import os
from . import setup_bot
bot_token = os.environ.get('DISCORD_BOT_TOKEN')
bot_prefix = os.environ.get('COMMAND_PREFIX')
bot = setup_bot(bot_prefix)
bot.run(bot_token)
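# Usage sketch (an assumption, not part of the original module): both
# environment variables must be set before launching, e.g.
#   export DISCORD_BOT_TOKEN='your-token'
#   export COMMAND_PREFIX='!'
#   python -m ta_bot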
| 15 | 47 | 0.766667 | 30 | 180 | 4.3 | 0.4 | 0.186047 | 0.170543 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 180 | 11 | 48 | 16.363636 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0.172222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a453f35423943d09ee98cc3713636f0e0c38c6c2 | 321 | py | Python | modules/controller/commands/key.py | TheSlimvReal/PSE---LA-meets-ML | 7614db17ae746c7b02dd95bd44d2c91ee3c1e662 | [
"BSD-2-Clause"
] | 4 | 2018-11-01T16:14:21.000Z | 2019-11-21T14:13:53.000Z | modules/controller/commands/key.py | TheSlimvReal/PSE---LA-meets-ML | 7614db17ae746c7b02dd95bd44d2c91ee3c1e662 | [
"BSD-2-Clause"
] | 56 | 2018-10-31T15:09:05.000Z | 2020-04-25T13:10:27.000Z | modules/controller/commands/key.py | TheSlimvReal/PSE---LA-meets-ML | 7614db17ae746c7b02dd95bd44d2c91ee3c1e662 | [
"BSD-2-Clause"
] | 1 | 2019-03-14T15:26:42.000Z | 2019-03-14T15:26:42.000Z | from enum import Enum
## enum that represents the possible keys a user can enter
#
# @extends Enum to get the enum logic
class Key(Enum):
QUIT = 0
AMOUNT = 1
NAME = 2
SIZE = 3
PATH = 4
GENERATE = 5
SAVING_PATH = 6
TRAIN = 7
NETWORK = 8
SOLVE = 9
HELP = 10
UPDATE = 11
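# Minimal usage sketch (illustrative; standard enum semantics):
#   Key(0) is Key.QUIT      # lookup by value -> True
#   Key['HELP'].value       # lookup by name  -> 10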
| 16.05 | 59 | 0.579439 | 49 | 321 | 3.77551 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067961 | 0.358255 | 321 | 19 | 60 | 16.894737 | 0.830097 | 0.28972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a4669d9d29d0ac94144a7a972f616e9948724859 | 3,337 | py | Python | _tests/test_app.py | EinsteinCarrey/Shoppinglist | 8a5dbd22687619b189d69522090c0bc226384663 | [
"MIT"
] | null | null | null | _tests/test_app.py | EinsteinCarrey/Shoppinglist | 8a5dbd22687619b189d69522090c0bc226384663 | [
"MIT"
] | 8 | 2017-08-29T02:47:57.000Z | 2019-10-18T16:32:23.000Z | _tests/test_app.py | EinsteinNjoroge/Shoppinglist | 8a5dbd22687619b189d69522090c0bc226384663 | [
"MIT"
] | 3 | 2017-08-29T08:49:08.000Z | 2018-03-02T03:43:01.000Z | from unittest import TestCase
import global_functions
import app
from app import flask_app
class TestApp(TestCase):
def setUp(self):
self.app = app
self.username = "newuser"
self.pword = "password123"
self.test_client_app = flask_app.test_client()
self.test_client_app.testing = True
def tearDown(self):
self.app = None
self.username = None
self.pword = None
self.test_client_app = None
def test_user_accounts_is_dict(self):
self.assertIsInstance(self.app.user_accounts, dict)
def test_create_user_account_without_username(self):
self.assertEqual(
self.app.create_user_account("", "pword"),
"User must provide a username"
)
def test_create_user_account_with_invalid_username(self):
self.assertEqual(
self.app.create_user_account([], "pword"),
"Username must be string"
)
def test_create_user_account_with_invalid_username_characters(self):
self.assertEqual(
self.app.create_user_account("@#^&&", "pword"),
"Username should only contain letters and numbers"
)
def test_create_user_account_without_password(self):
self.assertEqual(
self.app.create_user_account("username", ""),
"User must provide a pword"
)
def test_create_user_account_with_invalid_password(self):
self.assertEqual(
self.app.create_user_account("username", 12546),
"Password provided must be a string"
)
def test_create_user_account_with_short_password(self):
self.assertEqual(
self.app.create_user_account("username", "123"),
"Password should have at-least 6 characters"
)
def test_create_user_account(self):
self.app.create_user_account("username", "1234567")
self.assertTrue(len(self.app.user_accounts) == 1)
def test_create_user_account_password_is_hashed(self):
self.app.create_user_account(self.username, self.pword)
stored_password = self.app.user_accounts[self.username].password_hash
self.assertEqual(
stored_password,
global_functions.sha1_hash(self.pword),
msg="Stored passwords should be Hashed"
)
def test_create_user_with_duplicate_username(self):
self.app.create_user_account("username", "1234567")
self.assertEqual(
self.app.create_user_account("username", "1234567"),
"Username username is already taken. Use a unique username"
)
def test_login_without_password(self):
self.assertEqual(
self.app.login("username", None),
"Password must be provided"
)
def test_login_with_invalid_password(self):
self.assertEqual(
self.app.login("username", "asdasdsds"),
"Wrong credentials combination"
)
def test_login_without_username(self):
self.assertEqual(
self.app.login(None, "asdasdsds"),
"Username must be provided"
)
def test_login_with_invalid_username(self):
self.assertEqual(
self.app.login("non-existent-username", "asdasdsds"),
"Wrong credentials combination"
)
| 32.086538 | 77 | 0.642194 | 379 | 3,337 | 5.377309 | 0.203166 | 0.06526 | 0.150147 | 0.118744 | 0.553484 | 0.490677 | 0.453386 | 0.383219 | 0.219333 | 0.173209 | 0 | 0.014303 | 0.266707 | 3,337 | 103 | 78 | 32.398058 | 0.818553 | 0 | 0 | 0.188235 | 0 | 0 | 0.171411 | 0.006293 | 0 | 0 | 0 | 0 | 0.164706 | 1 | 0.188235 | false | 0.152941 | 0.047059 | 0 | 0.247059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a46bebce2e5af6624fe2efad9d1562b2a2de61f2 | 910 | py | Python | script/sklearn_like_toolkit/warpper/skClf_wrapper/skRidgeCVClf.py | demetoir/MLtools | 8c42fcd4cc71728333d9c116ade639fe57d50d37 | [
"MIT"
] | null | null | null | script/sklearn_like_toolkit/warpper/skClf_wrapper/skRidgeCVClf.py | demetoir/MLtools | 8c42fcd4cc71728333d9c116ade639fe57d50d37 | [
"MIT"
] | null | null | null | script/sklearn_like_toolkit/warpper/skClf_wrapper/skRidgeCVClf.py | demetoir/MLtools | 8c42fcd4cc71728333d9c116ade639fe57d50d37 | [
"MIT"
] | null | null | null | from sklearn.linear_model import RidgeClassifierCV as _RidgeClassifierCV
from script.sklearn_like_toolkit.warpper.base.BaseWrapperClf import BaseWrapperClf
from script.sklearn_like_toolkit.warpper.base.MixIn import MetaBaseWrapperClfWithABC
class skRidgeCVClf(_RidgeClassifierCV, BaseWrapperClf, metaclass=MetaBaseWrapperClfWithABC):
def __init__(self, alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, cv=None,
class_weight=None):
_RidgeClassifierCV.__init__(self, alphas, fit_intercept, normalize, scoring, cv, class_weight)
BaseWrapperClf.__init__(self)
HyperOpt_space = {}
tuning_grid = {
# shape positive float
'alphas': (0.1, 1.0, 10.0),
# 'cv': None,
# 'scoring': None,
# 'class_weight': None
# 'fit_intercept': True,
# 'normalize': False,
}
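# Usage sketch (an assumption based on the scikit-learn RidgeClassifierCV API
# that this wrapper subclasses; X and y are hypothetical arrays):
#   clf = skRidgeCVClf(alphas=(0.1, 1.0, 10.0))
#   clf.fit(X, y)
#   y_pred = clf.predict(X)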
| 36.4 | 108 | 0.679121 | 96 | 910 | 6.145833 | 0.427083 | 0.040678 | 0.057627 | 0.071186 | 0.277966 | 0.176271 | 0.176271 | 0 | 0 | 0 | 0 | 0.019746 | 0.220879 | 910 | 24 | 109 | 37.916667 | 0.812412 | 0.124176 | 0 | 0 | 0 | 0 | 0.007833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a477cbe56b0f143b53d9193de114c7aa2f5db5ad | 2,195 | py | Python | app.py | 10239847509238470925387z/tmp123 | e899272ab0da746d7d31e3f2323530fb4e0272e9 | [
"Apache-2.0"
] | null | null | null | app.py | 10239847509238470925387z/tmp123 | e899272ab0da746d7d31e3f2323530fb4e0272e9 | [
"Apache-2.0"
] | null | null | null | app.py | 10239847509238470925387z/tmp123 | e899272ab0da746d7d31e3f2323530fb4e0272e9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import urllib
import json
import os
import constants
import accounts
from flask import Flask
from flask import request
from flask import make_response
# Flask app should start in global layout
app = Flask(__name__)
PERSON = constants.TEST_1
@app.route('/webhook', methods=['POST'])
def webhook():
req = request.get_json(silent=True, force=True)
print("Request:")
print(json.dumps(req, indent=4))
res = makeWebhookResult(req)
res = json.dumps(res, indent=4)
print(res)
r = make_response(res)
r.headers['Content-Type'] = 'application/json'
return r
def makeWebhookResult(req):
if req.get("result").get("action") != "account-balance":
return constants.ERR_DICT(req.get("result").get("action"))
result = req.get("result")
parameters = result.get("parameters")
acct = parameters.get("account-type")
acct = acct.strip()
if acct=='401k':
acct='WI'
qual = parameters.get("qualifier")
speech = str(req.get("result").get("action"))
if acct:
if acct in constants.ACCT_TYPES:
speech = "The value of your {ACCT_TYPE} accounts is {VALU} dollars.".format(VALU=accounts.get_balance(PERSON, acct), ACCT_TYPE=acct)
else:
speech = "You don't have any accounts of that type. The total value of your other accounts is {VALU} dollars.".format(
VALU=accounts.get_balance(PERSON))
elif qual:
speech = "The total value of your accounts is {VALU} dollars.".format(VALU=accounts.get_balance(PERSON))
else:
speech = "The total value of your accounts is {VALU} dollars.".format(VALU=accounts.get_balance(PERSON))
# speech = "The cost of shipping to " + zone + " is " + str(cost[zone]) + " euros."
print("Response:")
print(speech)
speech += "\nAnything else I can help you with today?"
return {
"speech": speech,
"displayText": speech,
#"data": {},
# "contextOut": [],
"source": "home"
}
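# Illustrative request shape (a hedged reconstruction from the fields read
# above, not a captured payload) for the API.AI/Dialogflow v1 style webhook:
#   {"result": {"action": "account-balance",
#               "parameters": {"account-type": "401k", "qualifier": "total"}}}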
if __name__ == '__main__':
port = int(os.getenv('PORT', 5000))
print "Starting app on port %d" % port
app.run(debug=True, port=port, host='0.0.0.0')
| 26.768293 | 144 | 0.636446 | 291 | 2,195 | 4.718213 | 0.378007 | 0.01748 | 0.03496 | 0.06118 | 0.256373 | 0.19665 | 0.19665 | 0.19665 | 0.19665 | 0.19665 | 0 | 0.008216 | 0.22369 | 2,195 | 81 | 145 | 27.098765 | 0.797535 | 0.077904 | 0 | 0.074074 | 0 | 0.018519 | 0.257553 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.148148 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a47d6bfe819ded66253723a137d5f16ea4ccfb43 | 1,211 | py | Python | dist/assets/code/seance3/8.py | Mistergix/deficode | 6460ec3e22d36b67cef6815d9977fba973ab139b | [
"MIT"
] | null | null | null | dist/assets/code/seance3/8.py | Mistergix/deficode | 6460ec3e22d36b67cef6815d9977fba973ab139b | [
"MIT"
] | 10 | 2018-07-11T22:40:57.000Z | 2018-11-24T21:05:14.000Z | dist/assets/code/seance3/8.py | Mistergix/deficode | 6460ec3e22d36b67cef6815d9977fba973ab139b | [
"MIT"
] | null | null | null | class Chocolatine:
def __init__(self):
print("je suis mangé!")
def nom(self):
return "CH0C0LA71N3 !!!!!"
cadeau = (Chocolatine(), "Nicolas", "Julian")
# unpack a tuple
objet, destinataire, expediteur = cadeau
# print a tuple
print(cadeau)
# function that consumes an object of type: tuple(objet, destinataire, expediteur)
def offrir( truc ):
liste_des_trucs_offert = []
objet, destinataire, expediteur = truc
print("-> {} offre {} à {}. Le voici dans le return:".format(expediteur, objet.nom(), destinataire))
return objet
truc_pour_untel = offrir(cadeau)
print(truc_pour_untel)
## advanced exercise ##
class Chocolatine:
def __init__(self):
print("je suis mangé!")
def nom(self):
return "CH0C0LA71N3 !!!!!"
cadeau = (Chocolatine, 3, "Nicolas")
def offrir2( truc ):
liste_des_trucs_offert = []
objet, nombre, destinataire = truc
for i in range(nombre):
liste_des_trucs_offert.append(objet())
return liste_des_trucs_offert, destinataire
truc_pour_untel, untel = offrir2(cadeau)
print("j'offre {} {} à {}. les voici:".format(cadeau[1], cadeau[0]().nom(), untel), truc_pour_untel) | 22.849057 | 104 | 0.652353 | 146 | 1,211 | 5.219178 | 0.376712 | 0.041995 | 0.068241 | 0.099738 | 0.301837 | 0.301837 | 0.228346 | 0.228346 | 0.228346 | 0.228346 | 0 | 0.015806 | 0.21635 | 1,211 | 53 | 105 | 22.849057 | 0.787144 | 0.101569 | 0 | 0.413793 | 0 | 0 | 0.145102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.206897 | false | 0 | 0 | 0.068966 | 0.413793 | 0.206897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a47fa029a36ff0c32a340f11c3e4c167cc005848 | 17,038 | py | Python | lib/datasets/tpod_dataset.py | junjuew/py-faster-rcnn | cee50460078ea9756e03babc598a2fcbd18430b2 | [
"BSD-2-Clause"
] | 4 | 2017-01-24T01:02:40.000Z | 2020-05-19T02:32:16.000Z | lib/datasets/tpod_dataset.py | junjuew/py-faster-rcnn | cee50460078ea9756e03babc598a2fcbd18430b2 | [
"BSD-2-Clause"
] | 3 | 2016-09-04T20:07:31.000Z | 2018-03-31T20:25:21.000Z | lib/datasets/tpod_dataset.py | junjuew/py-faster-rcnn | cee50460078ea9756e03babc598a2fcbd18430b2 | [
"BSD-2-Clause"
] | 2 | 2017-04-03T02:00:54.000Z | 2020-05-14T17:09:57.000Z | # --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
import os, time
from datasets.imdb import imdb
import datasets.ds_utils as ds_utils
import xml.etree.ElementTree as ET
import numpy as np
import scipy.sparse
import scipy.io as sio
import utils.cython_bbox
import cPickle
import subprocess
import uuid
from tpod_eval import voc_eval
from fast_rcnn.config import cfg
import pdb
'''
Documentation:
This class is used as the loader of the data set.
It accepts one path: the path to the training data folder. Under this folder
there are three important files:
1. (image_set.txt) the image set (the file containing the list of all image paths)
2. (label_set.txt) the labels (the file containing the list of all label contents)
3. (labels.txt) the file containing the names of all labels
On initialization, we load all image paths and all labels into arrays, so during the
training phase we only need to read the arrays.
'''
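# Hedged illustration of the on-disk format implied by _load_annotations()
# below (reconstructed from the parsing code, not taken from a real dataset):
#   image_set.txt : one whitespace-separated line per frame, ending with the
#                   absolute image path
#   label_set.txt : one line per frame; classes separated by '.', boxes of a
#                   class by ';', and 'x,y,w,h' coordinates by ','
#                   e.g. "10,20,30,40;50,60,30,40.70,80,30,40."
#   labels.txt    : one class name per line ('__background__' is prepended)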
class tpod(imdb):
TPOD_IMDB_NAME='web_demo'
def __init__(self, devkit_path):
        # image_set should be a filename (without extension) in devkit_path that contains a list of image paths
# devkit_path should have an annotation dir, a label text
imdb.__init__(self, self.__class__.TPOD_IMDB_NAME)
self._year = '2016'
if not devkit_path or not os.path.isdir(devkit_path):
raise ValueError('Please provide image_set and devkit_path for tpod imdb. The devkit_path should contain '
'a text file with all labels named "label.txt". devkit_path/image_set.txt should be '
'a text file that each line is an absolute path to an image\n '
'Current devkit_path: {}'.format(devkit_path))
self._image_set = 'train'
self._devkit_path = devkit_path
self._label_path = os.path.join(self._devkit_path, 'labels.txt')
self._label_set_path = os.path.join(self._devkit_path, 'label_set.txt')
self._image_set_path = os.path.join(self._devkit_path, 'image_set.txt')
# load classes names
if os.path.isfile(self._label_path):
self._classes = self.load_label_classes()
else:
print 'no label file given, model is single class'
# backward compatibility, single class
self._classes = ('__background__', # always index 0
'object')
self._image_index = self._load_image_set_index()
self._annotation_index, self._obj_num_index = self._load_annotations()
# Default to roidb handler
self._roidb_handler = self.selective_search_roidb
self._salt = str(uuid.uuid4())
self._comp_id = time.strftime("%Y%m%d%H%M%S")
# PASCAL specific config options
self.config = {'cleanup' : True,
'use_salt' : True,
'use_diff' : False,
'matlab_eval' : False,
'rpn_file' : None,
'min_size' : 2}
print 'initialized imdb from devkit_path: {}, image_set: {}'.format(self._devkit_path, self._image_set)
def load_label_classes(self):
f = open(self._label_path, 'r')
labels = f.read().splitlines()
labels.insert(0, '__background__')
print 'custom labels: {}'.format(labels)
return labels
# need customization
def _is_test(self):
return False
# need customization
def _load_image_set_index(self):
"""
Load the indexes listed in this dataset's image set file.
# Example path to image set file:
# self._devkit_path + /image_set.txt
"""
f = open(self._image_set_path, 'r')
# at the same time remove the empty space at the beginning and end
image_paths = [x.strip().split() for x in f.readlines()]
return image_paths
def _load_annotations(self):
# the basic structure: class is separated by '.' label is separated by ';' coordination is separated by ','
f = open(self._label_set_path, 'r')
lines = f.readlines()
annotation_set = []
object_num_set = []
for line in lines:
obj_num = 0
line_classes = line.split('.')
line_annotation = []
for i, line_class in enumerate(line_classes):
            # there might be extra separator symbols,
            # so ignore any entries beyond the actual classes
if i >= len(self._classes) - 1:
break
line_label = []
if len(line_class) > 1:
labels = line_class.split(';')
for label in labels:
if len(label) < 1:
continue
coordination = label.split(',')
line_label.append(coordination)
obj_num += 1
line_annotation.append(line_label)
annotation_set.append(line_annotation)
object_num_set.append(obj_num)
return annotation_set, object_num_set
# need customization
def image_path_at(self, i):
"""
Return the absolute path to image i in the image sequence.
"""
image_path = '/' + self._image_index[i][-1]
assert os.path.exists(image_path), \
'Path does not exist: {}'.format(image_path)
return image_path
# need customization
def _load_tpod_annotation(self, index):
"""
Load image and bounding boxes info from XML file in the PASCAL VOC
format.
"""
index = int(index)
frame_label = self._annotation_index[index]
# we should include the background inside
num_objs = self._obj_num_index[index]
# print 'load tpod annotation %s num objs %s ' % (str(index), str(num_objs))
boxes = np.zeros((num_objs, 4), dtype=np.uint16)
gt_classes = np.zeros((num_objs), dtype=np.int32)
overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
# "Seg" area for pascal is just the box area
seg_areas = np.zeros((num_objs), dtype=np.float32)
# n: the number of objects
# c: the number of classes
# boxes: a n x 4 matrix, the rect for each object, each row is a box, n is the number of objects
# gt_classes: ground truth, a n length array, each element is the class index the object
# overlaps: a n x c matrix, in each row, if that object appears, set it to be 1
# seg_areas: a n length array, each element is the area size for that object
# Load object bounding boxes into a data frame.
idx = 0
for i, current_class in enumerate(frame_label):
if len(current_class) > 0 and i < self.num_classes:
for label in current_class:
if len(label) > 0:
x1 = float(label[0])
y1 = float(label[1])
w = float(label[2])
h = float(label[3])
x2 = x1 + w
y2 = y1 + h
# j: class is the last word in an entry separated by white space
cls = i + 1
boxes[idx, :] = [x1, y1, x2, y2]
gt_classes[idx] = cls
overlaps[idx, cls] = 1.0
seg_areas[idx] = w * h
idx += 1
overlaps = scipy.sparse.csr_matrix(overlaps)
return {'boxes' : boxes,
'gt_classes': gt_classes,
'gt_overlaps' : overlaps,
'flipped' : False,
'seg_areas' : seg_areas}
@property
def cache_path(self):
return self._devkit_path
def gt_roidb(self):
"""
Return the database of ground-truth regions of interest.
This function loads/saves from/to a cache file to speed up future calls.
"""
cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')
if os.path.exists(cache_file):
with open(cache_file, 'rb') as fid:
roidb = cPickle.load(fid)
print '{} gt roidb loaded from {}'.format(self.name, cache_file)
return roidb
gt_roidb = [self._load_tpod_annotation(index)
for index, path in enumerate(self.image_index)]
with open(cache_file, 'wb') as fid:
cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)
print 'wrote gt roidb to {}'.format(cache_file)
return gt_roidb
def selective_search_roidb(self):
"""
Return the database of selective search regions of interest.
Ground-truth ROIs are also included.
This function loads/saves from/to a cache file to speed up future calls.
"""
cache_file = os.path.join(self.cache_path,
self.name + '_selective_search_roidb.pkl')
if os.path.exists(cache_file):
with open(cache_file, 'rb') as fid:
roidb = cPickle.load(fid)
print '{} ss roidb loaded from {}'.format(self.name, cache_file)
return roidb
if not self._is_test():
gt_roidb = self.gt_roidb()
ss_roidb = self._load_selective_search_roidb(gt_roidb)
roidb = imdb.merge_roidbs(gt_roidb, ss_roidb)
else:
roidb = self._load_selective_search_roidb(None)
with open(cache_file, 'wb') as fid:
cPickle.dump(roidb, fid, cPickle.HIGHEST_PROTOCOL)
print 'wrote ss roidb to {}'.format(cache_file)
return roidb
def rpn_roidb(self):
print 'rpn_roidb called'
if not self._is_test():
gt_roidb = self.gt_roidb()
rpn_roidb = self._load_rpn_roidb(gt_roidb)
roidb = imdb.merge_roidbs(gt_roidb, rpn_roidb)
else:
roidb = self._load_rpn_roidb(None)
return roidb
def _load_rpn_roidb(self, gt_roidb):
filename = self.config['rpn_file']
print '_load_rpn_roidb called'
print 'loading {}'.format(filename)
assert os.path.exists(filename), \
'rpn data not found at: {}'.format(filename)
with open(filename, 'rb') as f:
box_list = cPickle.load(f)
return self.create_roidb_from_box_list(box_list, gt_roidb)
def _load_selective_search_roidb(self, gt_roidb):
filename = os.path.abspath(os.path.join(cfg.DATA_DIR,
'selective_search_data',
self.name + '.mat'))
assert os.path.exists(filename), \
'Selective search data not found at: {}'.format(filename)
raw_data = sio.loadmat(filename)['boxes'].ravel()
box_list = []
for i in xrange(raw_data.shape[0]):
boxes = raw_data[i][:, (1, 0, 3, 2)] - 1
keep = ds_utils.unique_boxes(boxes)
boxes = boxes[keep, :]
keep = ds_utils.filter_small_boxes(boxes, self.config['min_size'])
boxes = boxes[keep, :]
box_list.append(boxes)
return self.create_roidb_from_box_list(box_list, gt_roidb)
def _get_comp_id(self):
return self._comp_id
def _get_voc_results_file_template(self, prefix):
filename = self._get_comp_id() + '_det_' + self._image_set.replace('/', '_') + '_{:s}.txt'
path = os.path.join(
prefix,
filename)
return path
def _write_voc_results_file(self, all_boxes, output_dir):
for cls_ind, cls in enumerate(self.classes):
if cls == '__background__':
continue
filename = self._get_voc_results_file_template(output_dir).format(cls)
print 'Writing {} VOC results file ==> {}'.format(cls, filename)
with open(filename, 'wt') as f:
for im_ind, index in enumerate(self.image_index):
dets = all_boxes[cls_ind][im_ind]
if dets == []:
continue
# the VOCdevkit expects 1-based indices
for k in xrange(dets.shape[0]):
f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\n'.
format(index[0], dets[k, -1],
dets[k, 0] + 1, dets[k, 1] + 1,
dets[k, 2] + 1, dets[k, 3] + 1))
# need customization
def _do_python_eval(self, annopath, output_dir, evaluation_result_name, cachedir=None):
'''annopath = '/home/junjuew/object-detection-web/demo-web/train/headphone-model/Annotations/{}.txt'
'''
aps = []
# The PASCAL VOC metric changed in 2010
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
folder_name = '/eval/' + evaluation_result_name
if not os.path.exists(folder_name):
os.makedirs(folder_name)
print 'create result folder: %s ' % str(folder_name)
for i, cls in enumerate(self._classes):
if cls == '__background__':
continue
filename = self._get_voc_results_file_template(output_dir).format(cls)
image_path_array = self._image_index
annotation_path_array = self._annotation_index
rec, prec, ap = voc_eval(
filename, i, image_path_array, annotation_path_array, cls, cachedir, ovthresh=0.5)
aps += [ap]
print('AP for {} = {:.4f}'.format(cls, ap))
print('precision for {} = {}'.format(cls, prec))
print('recall for {} = {}'.format(cls, rec))
with open(os.path.join(output_dir, cls + '_pr.pkl'), 'w') as f:
cPickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)
# store the result under the evaluation folder, each class with the class name as the file name
file_name = folder_name + '/' + str(cls) + '.pkl'
with open(file_name, 'w') as f:
cPickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)
print 'result saved: %s ' % str(file_name)
print('Mean AP = {:.4f}'.format(np.mean(aps)))
print('~~~~~~~~')
print('Results:')
for ap in aps:
print('{:.3f}'.format(ap))
print('{:.3f}'.format(np.mean(aps)))
print('~~~~~~~~')
print('')
print('--------------------------------------------------------------')
print('Results computed with the **unofficial** Python eval code.')
print('Results should be very close to the official MATLAB eval code.')
print('Recompute with `./tools/reval.py --matlab ...` for your paper.')
print('-- Thanks, The Management')
print('--------------------------------------------------------------')
def _do_matlab_eval(self, output_dir='output'):
print '-----------------------------------------------------'
print 'Computing results with the official MATLAB eval code.'
print '-----------------------------------------------------'
path = os.path.join(cfg.ROOT_DIR, 'lib', 'datasets',
'VOCdevkit-matlab-wrapper')
cmd = 'cd {} && '.format(path)
cmd += '{:s} -nodisplay -nodesktop '.format(cfg.MATLAB)
cmd += '-r "dbstop if error; '
cmd += 'voc_eval(\'{:s}\',\'{:s}\',\'{:s}\',\'{:s}\'); quit;"' \
.format(self._devkit_path, self._get_comp_id(),
self._image_set, output_dir)
print('Running:\n{}'.format(cmd))
status = subprocess.call(cmd, shell=True)
def evaluate_detections(self, all_boxes, annopath, output_dir, evaluation_result_name):
self._write_voc_results_file(all_boxes, output_dir)
self._do_python_eval(annopath, output_dir, evaluation_result_name)
if self.config['matlab_eval']:
self._do_matlab_eval(output_dir)
if self.config['cleanup']:
for cls in self._classes:
if cls == '__background__':
continue
filename = self._get_voc_results_file_template(output_dir).format(cls)
os.remove(filename)
def competition_mode(self, on):
if on:
self.config['use_salt'] = False
self.config['cleanup'] = False
else:
self.config['use_salt'] = True
self.config['cleanup'] = True
if __name__ == '__main__':
from datasets.tpod import tpod
d = tpod('trainval', '')
res = d.roidb
from IPython import embed; embed()
| 41.055422 | 118 | 0.560042 | 2,101 | 17,038 | 4.330319 | 0.195621 | 0.021983 | 0.013849 | 0.009892 | 0.262036 | 0.213344 | 0.155309 | 0.137063 | 0.123873 | 0.107606 | 0 | 0.0074 | 0.317878 | 17,038 | 414 | 119 | 41.154589 | 0.775426 | 0.099542 | 0 | 0.155709 | 0 | 0 | 0.136409 | 0.023502 | 0 | 0 | 0 | 0 | 0.010381 | 0 | null | null | 0 | 0.055363 | null | null | 0.114187 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a484bdd1eb403382f98fb22c6f6ec574c6778758 | 529 | py | Python | octopus/constants.py | Fletch498ma/octopus | f2de0204a92b8cd9424805454212003f037d2984 | [
"MIT"
] | 12 | 2015-08-31T22:57:40.000Z | 2022-02-07T19:18:09.000Z | octopus/constants.py | Fletch498ma/octopus | f2de0204a92b8cd9424805454212003f037d2984 | [
"MIT"
] | 8 | 2017-12-17T18:56:23.000Z | 2021-08-23T13:26:16.000Z | octopus/constants.py | Fletch498ma/octopus | f2de0204a92b8cd9424805454212003f037d2984 | [
"MIT"
] | 10 | 2015-09-22T23:06:12.000Z | 2021-04-06T06:24:06.000Z | # Twisted Imports
from twisted.python.constants import ValueConstant, Values
class State (Values):
READY = ValueConstant("ready")
RUNNING = ValueConstant("running")
PAUSED = ValueConstant("paused")
COMPLETE = ValueConstant("complete")
CANCELLED = ValueConstant("cancelled")
ERROR = ValueConstant("error")
class Event (Values):
NEW_EXPERIMENT = ValueConstant("new-expt")
EXPERIMENT = ValueConstant("e")
INTERFACE = ValueConstant("i")
STEP = ValueConstant("s")
LOG = ValueConstant("l")
TIMEZERO = ValueConstant("z")
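# Usage sketch (illustrative, based on the twisted.python.constants API):
#   State.READY.value                # -> "ready"
#   State.lookupByValue("running")   # -> State.RUNNING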
| 27.842105 | 58 | 0.742911 | 53 | 529 | 7.396226 | 0.54717 | 0.117347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122873 | 529 | 19 | 59 | 27.842105 | 0.844828 | 0.028355 | 0 | 0 | 0 | 0 | 0.103314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a485203eed7da9af1f3d07a900bf441e21f54870 | 376 | py | Python | lang/py/pylib/code/fractions/fractions_limit_denominator.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | lang/py/pylib/code/fractions/fractions_limit_denominator.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | lang/py/pylib/code/fractions/fractions_limit_denominator.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | #!/usr/bin/env python
# encoding: utf-8
#
# Copyright (c) 2009 Doug Hellmann All rights reserved.
#
"""
"""
#end_pymotw_header
import fractions
import math
print 'PI =', math.pi
f_pi = fractions.Fraction(str(math.pi))
print 'No limit =', f_pi
for i in [ 1, 6, 11, 60, 70, 90, 100 ]:
limited = f_pi.limit_denominator(i)
print '{0:8} = {1}'.format(i, limited)
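# Hedged note on expected behaviour: a cap of 1 can only yield 3/1, while
# larger caps admit closer fractions (a cap of 7, for instance, would allow
# the classic 22/7 approximation of pi).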
| 17.904762 | 55 | 0.640957 | 61 | 376 | 3.852459 | 0.688525 | 0.038298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069307 | 0.194149 | 376 | 20 | 56 | 18.8 | 0.706271 | 0.284574 | 0 | 0 | 0 | 0 | 0.121094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a48a978010878e0af222f955c60047e8c7e7b87e | 2,725 | py | Python | onnx_model_maker/ops/op_ver_2.py | BernardJiang/onnx-pytorch | d5181b1267f16d540881caec4bdce1103d0630d6 | [
"Apache-2.0"
] | 66 | 2021-05-05T08:03:22.000Z | 2022-03-10T07:03:02.000Z | onnx_model_maker/ops/op_ver_2.py | BernardJiang/onnx-pytorch | d5181b1267f16d540881caec4bdce1103d0630d6 | [
"Apache-2.0"
] | 21 | 2021-05-06T02:56:42.000Z | 2021-11-25T03:52:14.000Z | onnx_model_maker/ops/op_ver_2.py | BernardJiang/onnx-pytorch | d5181b1267f16d540881caec4bdce1103d0630d6 | [
"Apache-2.0"
] | 18 | 2021-05-05T11:54:09.000Z | 2022-03-25T17:42:32.000Z | # Autogenerated by onnx-model-maker. Don't modify it manually.
import onnx
import onnx.helper
import onnx.numpy_helper
from onnx_model_maker import omm
from onnx_model_maker import onnx_mm_export
from onnx_model_maker.ops.op_helper import _add_input
@onnx_mm_export("v2.LabelEncoder")
def LabelEncoder(X, **kwargs):
_inputs = []
for i in (X, ):
_add_input(i, _inputs)
idx = omm.op_counter["LabelEncoder"]
omm.op_counter["LabelEncoder"] += 1
node = onnx.helper.make_node("LabelEncoder",
_inputs, [f'_t_LabelEncoder_{idx}_Y'],
name=f"LabelEncoder_{idx}",
**kwargs)
onnx.checker.check_node(node, omm.ctx)
omm.model.graph.node.append(node)
return node
@onnx_mm_export("v2.Split")
def Split(input, **kwargs):
_inputs = []
for i in (input, ):
_add_input(i, _inputs)
idx = omm.op_counter["Split"]
omm.op_counter["Split"] += 1
node = onnx.helper.make_node("Split",
_inputs, [f"_t_Split_{idx}_{i}" for i in range(len(kwargs["split"]))],
name=f"Split_{idx}",
**kwargs)
onnx.checker.check_node(node, omm.ctx)
omm.model.graph.node.append(node)
return node
@onnx_mm_export("v2.Pad")
def Pad(data, **kwargs):
_inputs = []
for i in (data, ):
_add_input(i, _inputs)
idx = omm.op_counter["Pad"]
omm.op_counter["Pad"] += 1
node = onnx.helper.make_node("Pad",
_inputs, [f'_t_Pad_{idx}_output'],
name=f"Pad_{idx}",
**kwargs)
onnx.checker.check_node(node, omm.ctx)
omm.model.graph.node.append(node)
return node
@onnx_mm_export("v2.LpPool")
def LpPool(X, **kwargs):
_inputs = []
for i in (X, ):
_add_input(i, _inputs)
idx = omm.op_counter["LpPool"]
omm.op_counter["LpPool"] += 1
node = onnx.helper.make_node("LpPool",
_inputs, [f'_t_LpPool_{idx}_Y'],
name=f"LpPool_{idx}",
**kwargs)
onnx.checker.check_node(node, omm.ctx)
omm.model.graph.node.append(node)
return node
@onnx_mm_export("v2.GlobalLpPool")
def GlobalLpPool(X, **kwargs):
_inputs = []
for i in (X, ):
_add_input(i, _inputs)
idx = omm.op_counter["GlobalLpPool"]
omm.op_counter["GlobalLpPool"] += 1
node = onnx.helper.make_node("GlobalLpPool",
_inputs, [f'_t_GlobalLpPool_{idx}_Y'],
name=f"GlobalLpPool_{idx}",
**kwargs)
onnx.checker.check_node(node, omm.ctx)
omm.model.graph.node.append(node)
return node
| 28.989362 | 101 | 0.57945 | 350 | 2,725 | 4.248571 | 0.148571 | 0.033625 | 0.080699 | 0.047075 | 0.568258 | 0.511769 | 0.434432 | 0.434432 | 0.394082 | 0.394082 | 0 | 0.005118 | 0.282936 | 2,725 | 93 | 102 | 29.301075 | 0.755885 | 0.022018 | 0 | 0.434211 | 1 | 0 | 0.127676 | 0.017274 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065789 | false | 0 | 0.078947 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a48ae7336ee1d7ff7f6b28c9bedae865b8e88589 | 3,350 | py | Python | modules/spotify.py | montarion/athena | ae8bd49786be613acc59e60edc758094bd26e7f4 | [
"Unlicense"
] | 3 | 2019-04-10T21:06:02.000Z | 2020-04-15T15:46:45.000Z | modules/spotify.py | montarion/athena | ae8bd49786be613acc59e60edc758094bd26e7f4 | [
"Unlicense"
] | 1 | 2019-05-21T13:45:35.000Z | 2019-05-21T13:45:35.000Z | modules/spotify.py | montarion/athena | ae8bd49786be613acc59e60edc758094bd26e7f4 | [
"Unlicense"
] | 2 | 2019-05-21T13:44:03.000Z | 2020-04-15T10:45:11.000Z | from components.logger import Logger
import requests
class Spotify:
def __init__(self, Database=None, Oauth=None, Watcher=None):
self.dependencies = {"tier":"user", "dependencies":["Database", "Oauth", "Watcher"]}
self.characteristics= ["timed"]
self.capabilities = ["music", "spotify", "playback", "podcasts"]
self.timing = {"unit": "seconds", "count":10}
self.db = Database
self.oauth = Oauth
self.watcher = Watcher
self.logger = Logger("Spotify").logger
self.currentsong = ""
# do not add init stuff
def dostuff(self):
data = {"playing": True, "Song":"Good thing", "Artist": "Zedd", "metadata":{"type":"update"}}
msg = self.db.messagebuilder("Spotify", "update", data)
self.watcher.publish(self, msg)
def query(self, query, connectionID = 666):
"""connectionID is the id of who is asking(starts at 0, database is 999)
the query is a dict containing the following keys:
"category", "type", "data", "metadata"
lets the system ask your module stuff.
"""
category = query["category"]
qtype = query["type"]
qdata = query.get("data", None)
metadata = query.get("metadata", None)
response = {}
# TODO: Read out the query
# TODO: Use it to write out the response
return response
def getplaying(self):
self.logger("getting token")
ac = self.oauth.getaccesstoken()
self.logger("Done")
baseurl = "https://api.spotify.com/v1/me/player/currently-playing"
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': f"Bearer {ac}"
}
params = {'market': 'from_token'}
self.logger("firing request")
response = requests.get(baseurl, headers=headers, params=params)
self.logger("Done")
data = {}
if len(response.content) > 1:
playdata = response.json()
item = playdata["item"]
name = item["name"]
self.logger(f"Currently playing: {name}")
duration = item["duration_ms"]
progress = playdata["progress_ms"]
artists = [x["name"] for x in playdata["item"]["artists"]]
playing = bool(playdata["is_playing"])
#self.logger(playdata)
if name != self.currentsong:
self.currentsong = name
data = {"playing":playing, "song": name, "artist": artists, "Progress":progress, "duration":duration}
#self.logger(data)
msg = self.db.messagebuilder("Spotify", "update", data)
self.watcher.publish(self, msg)
else:
self.logger("nothing is playing")
data = {"playing":False}
msg = self.db.messagebuilder("Spotify", "update", data)
self.watcher.publish(self, msg)
self.logger("published")
def startrun(self):
"""Various methods to control spotify playback"""
# init stuff..
self.logger = Logger("Spotify").logger
self.logger("Startrun")
self.datapath = f"data/modules/{self.__class__.__name__.lower()}"
#self.dostuff()
self.getplaying()
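# Scheduling note (an interpretation of the `characteristics`/`timing`
# attributes set in __init__, not documented in this file): the surrounding
# framework appears to invoke timed modules every `timing["count"]`
# `timing["unit"]`s, i.e. this module would poll playback every 10 seconds.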
| 38.505747 | 117 | 0.565373 | 350 | 3,350 | 5.365714 | 0.388571 | 0.063898 | 0.014377 | 0.036741 | 0.138978 | 0.138978 | 0.103834 | 0.103834 | 0.103834 | 0.103834 | 0 | 0.004661 | 0.295522 | 3,350 | 86 | 118 | 38.953488 | 0.791102 | 0.118209 | 0 | 0.15625 | 0 | 0 | 0.205527 | 0.015889 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.078125 | false | 0 | 0.03125 | 0 | 0.140625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a495b9287cef3b1495deb0a17f3ccdbbcfee8124 | 11,838 | py | Python | model/run_GRU.py | Liang-Qiu/GRU_Android | 5ebab518fb700f49fa181eeff09cc030114d2992 | [
"MIT"
] | 11 | 2017-12-04T08:55:22.000Z | 2021-06-24T09:21:33.000Z | model/run_GRU.py | Liang-Qiu/GRU_Android | 5ebab518fb700f49fa181eeff09cc030114d2992 | [
"MIT"
] | 1 | 2017-12-13T11:09:59.000Z | 2021-01-21T19:59:00.000Z | model/run_GRU.py | LiangKlausQiu/GRU_Android | 5ebab518fb700f49fa181eeff09cc030114d2992 | [
"MIT"
] | 3 | 2017-07-21T10:53:52.000Z | 2017-10-03T06:24:15.000Z | #copyright 2016 LiangKlausQiu. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import numpy as np
import tensorflow as tf
import reader
import GRU
from tensorflow.python.framework.graph_util import convert_variables_to_constants
#import freeze_graph
flags = tf.flags
logging = tf.logging
flags.DEFINE_string(
"model", "small",
"A type of model. Possible options are: small, medium, large.")
flags.DEFINE_string(
"data_path", "/home/liangqiu/Documents/GRU_data",
"Where the training/test data is stored.")
flags.DEFINE_string(
"save_path", "/home/liangqiu/Documents/GRU_output",
"Model output directory.")
#flags.DEFINE_bool(
# "use_fp16", False,
# "Train using 16-bit floats instead of 32bit floats.")
FLAGS = flags.FLAGS
#def data_type():
# return tf.float if FLAGS.use_fp16 else tf.float32
def get_config():
if FLAGS.model == "small":
return GRU.SmallConfig()
elif FLAGS.model == "medium":
return GRU.MediumConfig()
elif FLAGS.model == "large":
return GRU.LargeConfig()
elif FLAGS.model == "test":
return GRU.TestConfig()
else:
raise ValueError("Invalid model: %s", FLAGS.model)
def run_training(session, model):
# train the model on the given data.
start_time = time.time()
costs = 0.0
iters = 0
fetches = {
"cost": model.cost,
"train_op": model.train_op
}
for step in range(model.input.epoch_size):
vals = session.run(fetches)
cost = vals["cost"]
costs += cost
iters += model.input.num_steps
# if step % (model.input.epoch_size // 10) == 0:
# print("%.2f perplexity: %.3f speed: %.0f wps" %
# (step * 1.0 /model.input.epoch_size, np.exp(costs / iters),
# iters * model.input.batch_size / (time.time() - start_time)))
return np.exp(costs / iters)
def run_validation(session, model):
  # evaluate the model on the given data.
start_time = time.time()
costs = 0.0
iters = 0
for step in range(model.input.epoch_size):
cost = session.run(model.cost)
costs += cost
iters += model.input.num_steps
return np.exp(costs / iters)
def run_test(session, model, id_to_word=None, test_data=None):
# test the model on the given data.
start_time = time.time()
# costs = 0.0
# iters = 0
fetches = {
"result": model.result,
# "cost": model.cost
}
test_feed1 = np.array([[1,3,3,4,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]], dtype=np.int32)
test_feed2 = np.array([[31,32,33,34,35,36,37,38,39,40,0,0,0,0,0,0,0,0,0,0]], dtype=np.int32)
len_feed1 = np.array([5], dtype=np.int32)
len_feed2 = np.array([10], dtype=np.int32)
# print(test_feed1.shape[0])
# print(test_feed1.shape[1])
# print(test_feed1.dtype)
test_dict1 = {
model.input.input_data: test_feed1,
model.input.data_len: len_feed1
}
test_dict2 = {
model.input.input_data: test_feed2,
model.input.data_len: len_feed2
}
# for step in range(model.input.epoch_size):
print("input_data")
print(session.run(model.input.input_data, feed_dict = test_dict1))
print(session.run(model.input.input_data, feed_dict = test_dict2))
# print("data_len")
# print(session.run(model.input.data_len, feed_dict = test_dict))
print("inputs")
print(session.run(model.inputs, feed_dict = test_dict1))
print(session.run(model.inputs, feed_dict = test_dict2))
print("output")
print(session.run(model.output, feed_dict = test_dict1))
print(session.run(model.output, feed_dict = test_dict2))
vals = session.run(fetches, feed_dict=test_dict1)
# cost = vals["cost"]
result = vals["result"]
# costs += cost
# iters += model.input.num_steps
# print("data_length: %s perplexity: %.3f speed: %.0f wps" % (len(test_data), np.exp(costs / iters), iters * model.input.batch_size / (time.time() - start_time)))
print("%.8f" %(result[0, np.argmax(result)]))
print(result)
print("predicted word: %s" % (id_to_word[np.argmax(result)]))
vals = session.run(fetches, feed_dict=test_dict2)
result = vals["result"]
print("%.8f" %(result[0, np.argmax(result)]))
print(result)
print("predicted word: %s" % (id_to_word[np.argmax(result)]))
# return np.exp(costs / iters)
def main(_):
if not FLAGS.data_path:
raise ValueError("Must set --data_path to data directory")
raw_data = reader.ptb_raw_data(FLAGS.data_path)
train_data, valid_data, test_data, _, id_to_word = raw_data
config = get_config()
eval_config = get_config()
eval_config.batch_size = 1
# eval_config.num_steps = 4
# checkpoint_prefix = os.path.join(FLAGS.save_path, "saved_checkpoint")
# checkpoint_state_name = "checkpoint_state"
# input_graph_name = "input_graph.pb"
# output_graph_name = "output_graph.pb"
g1 = tf.Graph()
with g1.as_default():
initializer = tf.random_uniform_initializer(-config.init_scale,
config.init_scale)
with tf.name_scope("Train"):
train_input = GRU.Input(is_testing=False, config=config, data=train_data, name="TrainInput")
with tf.variable_scope("Model", reuse = None, initializer=initializer):
mtrain = GRU.Model(is_training=True, is_testing=False, config=config, input_=train_input)
with tf.name_scope("Valid"):
valid_input = GRU.Input(is_testing=False, config=config, data=valid_data, name="ValidInput")
with tf.variable_scope("Model", reuse = True, initializer=initializer):
mvalid = GRU.Model(is_training=False, is_testing=False, config=config, input_=valid_input)
with tf.name_scope("Test"):
test_input = GRU.Input(is_testing=True, config=eval_config, data=test_data, name="TestInput")
with tf.variable_scope("Model", reuse = True, initializer=initializer):
mtest = GRU.Model(is_training=False, is_testing=True, config=eval_config, input_=test_input)
sv = tf.train.Supervisor(logdir=FLAGS.save_path)
with sv.managed_session() as session:
# with tf.Session() as session:
for i in range(config.max_max_epoch):
lr_decay = config.lr_decay ** max(i + 1 - config.max_epoch, 0.0)
mtrain.assign_lr(session, config.learning_rate * lr_decay)
# train
print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(mtrain.lr)))
train_perplexity = run_training(session, mtrain)
print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
# valid
#valid_perplexity = run_validation(session,mvalid)
#print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
run_test(session, mtest, id_to_word=id_to_word, test_data=test_data)
# print("Test Perplexity: %.3f" % test_perplexity)
# for FPGA use
# Embedding = session.run(tf.get_default_graph().get_tensor_by_name("Model/embedding:0"))
# Cell0_W1 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Matrix:0"))
# Cell0_B1 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Bias:0"))
# Cell0_W2 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Matrix:0"))
# Cell0_B2 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Bias:0"))
# Cell1_W1 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Matrix:0"))
# Cell1_B1 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Bias:0"))
# Cell1_W2 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Matrix:0"))
# Cell1_B2 = session.run(tf.get_default_graph().get_tensor_by_name("Model/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Bias:0"))
# softmax_W = session.run(tf.get_default_graph().get_tensor_by_name("Model/softmax_w:0"))
# softmax_B = session.run(tf.get_default_graph().get_tensor_by_name("Model/softmax_b:0"))
# para_file = open(FLAGS.save_path+"para_file.txt", "w")
# para_file.write("# Embedding ")
# np.savetxt(para_file, Embedding, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell0_W1 ")
# np.savetxt(para_file, Cell0_W1, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell0_B1 ")
# np.savetxt(para_file, Cell0_B1, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell0_W2 ")
# np.savetxt(para_file, Cell0_W2, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell0_B2 ")
# np.savetxt(para_file, Cell0_B2, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell1_W1 ")
# np.savetxt(para_file, Cell1_W1, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell1_B1 ")
# np.savetxt(para_file, Cell1_B1, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell1_W2 ")
# np.savetxt(para_file, Cell1_W2, fmt='%0.8f',delimiter=',')
# para_file.write("# Cell1_B2 ")
# np.savetxt(para_file, Cell1_B2, fmt='%0.8f',delimiter=',')
# para_file.write("# softmax_W ")
# np.savetxt(para_file, softmax_W, fmt='%0.8f',delimiter=',')
# para_file.write("# softmax_B ")
# np.savetxt(para_file, softmax_B, fmt='%0.8f',delimiter=',')
# para_file.close()
if FLAGS.save_path:
print("Saving model to %s." % FLAGS.save_path)
# sv.saver.save(session, checkpoint_prefix, global_step=0, latest_filename=checkpoint_state_name)
# tf.train.write_graph(session.graph.as_graph_def(), FLAGS.save_path, input_graph_name)
# input_graph_path = os.path.join(FLAGS.save_path, input_graph_name)
# input_saver_def_path = ""
# input_binary = False
# input_checkpoint_path = checkpoint_prefix + "-0"
# output_node_names = "Test/Model/logits"
# restore_op_name = "save/restore_all"
# filename_tensor_name = "save/Const:0"
# output_graph_path = os.path.join(FLAGS.save_path, output_graph_name)
# clear_devices = False
# freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
# input_binary, input_checkpoint_path,
# output_node_names, restore_op_name,
# filename_tensor_name, output_graph_path,
# clear_devices, None)
# for v in tf.trainable_variables():
# vc = tf.constant(v.eval())
# tf.assign(v, vc, "assign_variables")
# gru_graph = graph_util.extract_sub_graph(graph_def, ["Test/Model/logits"])
gru_graph = convert_variables_to_constants(session, session.graph_def, ["Test/Model/result"])
tf.train.write_graph(gru_graph, FLAGS.save_path, 'gru_graph.pb', as_text=False)
tf.train.write_graph(gru_graph, FLAGS.save_path, "gru_graph.pbtxt", as_text=True)
print("write gru_graph")
# g2 = tf.Graph()
# gru_input = {"Test/TestInput/raw_data": tf.placeholder(tf.int32, shape=20)}
# with g2.as_default():
# with tf.Session(graph=g2) as session:
# tf.import_graph_def(tmp_graph, return_elements=["Test/Model/result"], name="")
# tf.train.write_graph(session.graph_def, FLAGS.save_path, 'gru_graph.pbtxt', as_text=True)
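# A minimal sketch (not from the original script; TF1-style API assumed) of
# how the frozen graph written above could be loaded back for inference. The
# output tensor name matches the node frozen above:
# with tf.gfile.GFile(FLAGS.save_path + 'gru_graph.pb', 'rb') as f:
#     frozen_def = tf.GraphDef()
#     frozen_def.ParseFromString(f.read())
# with tf.Graph().as_default() as g:
#     tf.import_graph_def(frozen_def, name="")
#     result_t = g.get_tensor_by_name("Test/Model/result:0")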
if __name__ == "__main__":
tf.app.run()
| 39.328904 | 163 | 0.688207 | 1,729 | 11,838 | 4.478311 | 0.172932 | 0.006974 | 0.008136 | 0.009815 | 0.492186 | 0.408111 | 0.375565 | 0.332558 | 0.235438 | 0.203539 | 0 | 0.023999 | 0.162274 | 11,838 | 300 | 164 | 39.46 | 0.756781 | 0.492144 | 0 | 0.225564 | 0 | 0 | 0.10158 | 0.011551 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075188 | null | null | 0.150376 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4967d174d0a18e78ed6cd486861863e94b773ac | 822 | py | Python | lib/tagger/exceptions.py | shunf4/pys60v1-pymusic | 6bf03e296c7faae67dbe1d59ef563190b56ace5b | [
"MIT"
] | 3 | 2016-08-18T22:10:55.000Z | 2022-03-12T15:19:52.000Z | lib/tagger/exceptions.py | shunf4/pys60v1-pymusic | 6bf03e296c7faae67dbe1d59ef563190b56ace5b | [
"MIT"
] | null | null | null | lib/tagger/exceptions.py | shunf4/pys60v1-pymusic | 6bf03e296c7faae67dbe1d59ef563190b56ace5b | [
"MIT"
] | 3 | 2015-04-30T22:35:26.000Z | 2019-06-11T13:06:48.000Z | """ Custom Exceptions """
__author__ = "Alastair Tse <alastair@tse.id.au>"
__license__ = "BSD"
__copyright__ = "Copyright (c) 2004, Alastair Tse"
__revision__ = "$Id: exceptions.py,v 1.2 2004/05/04 12:18:21 acnt2 Exp $"
class ID3Exception(Exception):
"""General ID3Exception"""
pass
class ID3EncodingException(ID3Exception):
"""Encoding Exception"""
pass
class ID3VersionMismatchException(ID3Exception):
"""Version Mismatch problems"""
pass
class ID3HeaderInvalidException(ID3Exception):
"""Header is malformed or none existant"""
pass
class ID3ParameterException(ID3Exception):
"""Parameters are missing or malformed"""
pass
class ID3FrameException(ID3Exception):
"""Frame is malformed or missing"""
pass
class ID3NotImplementedException(ID3Exception):
"""This function isn't implemented"""
pass
| 22.216216 | 73 | 0.754258 | 88 | 822 | 6.863636 | 0.613636 | 0.089404 | 0.043046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048611 | 0.124088 | 822 | 36 | 74 | 22.833333 | 0.790278 | 0.265207 | 0 | 0.388889 | 0 | 0.055556 | 0.221429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.388889 | 0 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a4a1341a47418ffee6ba8d4388fa1e153f30467e | 714 | py | Python | quokka/utils/__init__.py | mutita/FlaskPyCMS | cf87c7f2a55fc2df0be427764819b5df4d375520 | [
"MIT"
] | 1 | 2015-11-05T17:33:16.000Z | 2015-11-05T17:33:16.000Z | quokka/utils/__init__.py | laborautonomo/quokka | d6e08f15e8b35ce69ea1f0c9c294d5141f34c53a | [
"MIT"
] | null | null | null | quokka/utils/__init__.py | laborautonomo/quokka | d6e08f15e8b35ce69ea1f0c9c294d5141f34c53a | [
"MIT"
] | 2 | 2019-10-06T14:07:43.000Z | 2021-07-08T20:27:24.000Z | # -*- coding: utf-8 -*-
import logging
from speaklater import make_lazy_string
from quokka.modules.accounts.models import User
logger = logging.getLogger()
def lazy_str_setting(key, default=None):
from flask import current_app
return make_lazy_string(
lambda: current_app.config.get(key, default)
)
def get_current_user():
from flask.ext.security import current_user
try:
if not current_user.is_authenticated():
return None
except RuntimeError:
# Flask-Testing will fail
pass
try:
return User.objects.get(id=current_user.id)
except Exception as e:
logger.warning("No user found: %s" % e.message)
return None
| 23.8 | 55 | 0.677871 | 94 | 714 | 5 | 0.56383 | 0.093617 | 0.059574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001842 | 0.239496 | 714 | 29 | 56 | 24.62069 | 0.86372 | 0.063025 | 0 | 0.190476 | 0 | 0 | 0.025526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.047619 | 0.238095 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a4a2393b2f74bb225f936fa45e3248350e11173d | 377 | py | Python | easy/leetcode2.py | ayang818/LeetCode | f15276f550997652b81f456134c0b64bcb61f65c | [
"MIT"
] | 1 | 2019-03-12T04:05:41.000Z | 2019-03-12T04:05:41.000Z | easy/leetcode2.py | ayang818/LeetCode | f15276f550997652b81f456134c0b64bcb61f65c | [
"MIT"
] | null | null | null | easy/leetcode2.py | ayang818/LeetCode | f15276f550997652b81f456134c0b64bcb61f65c | [
"MIT"
] | null | null | null | class Solution:
    def reverse(self, x: int) -> int:
        negative = x < 0
        x = abs(x)
        res = 0
        while x:
            # Split off the last digit each iteration to build the reversed number.
            x, mod = divmod(x, 10)
            res = res * 10 + mod
        if negative:
            res = -res
        # Clamp to the signed 32-bit range the problem requires.
        if res < -2**31 or res > 2**31 - 1:
            return 0
        return res
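# Example behaviour (illustrative): Solution().reverse(-123) -> -321, while
# Solution().reverse(1534236469) -> 0 because the reversal overflows 32 bits.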
| 23.5625 | 37 | 0.371353 | 48 | 377 | 2.916667 | 0.416667 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129213 | 0.527851 | 377 | 15 | 38 | 25.133333 | 0.657303 | 0.047745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4a51161da395cb642968996bd39887142236598 | 3,854 | py | Python | dataReader/dataset.py | chupengrocky/HackthonClimate | 3011c18fd2a3901625064f30e213f16dab4304c3 | [
"MIT"
] | null | null | null | dataReader/dataset.py | chupengrocky/HackthonClimate | 3011c18fd2a3901625064f30e213f16dab4304c3 | [
"MIT"
] | null | null | null | dataReader/dataset.py | chupengrocky/HackthonClimate | 3011c18fd2a3901625064f30e213f16dab4304c3 | [
"MIT"
] | null | null | null | from torch.utils.data import Dataset
import torch
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
np.random.seed(0)
class dataset(Dataset):
    def __init__(self, data_frame, model_indx, mode='train'):
self.x = None
self.y_cls = None
self.y_reg = None
self.length = None
        self.data_np = None
        # Set by fit_transform(); transform() raises until it is populated.
        self.params = None
self.data_frame = data_frame
# self.data_np = data_frame.to_numpy(dtype="float32")
self.model_indx = model_indx
self.mode = mode
# Initialize x, y_cls, y_reg
self.split_data()
def split_data(self):
# print("Starting spliting data")
# cls_data = self.data_np[self.data_np[:,-1]==self.model_indx] #Filtered by cluster
if self.mode in ['train','val']:
np_data = self.data_frame[self.data_frame['model_{}'.format(self.model_indx)]==1].to_numpy(dtype="float32")
else:
np_data = self.data_frame.to_numpy(dtype="float32")
sc_x = MinMaxScaler()
sc_reg = MinMaxScaler()
x_full = np_data[:, 3:11].copy() # coloumn name [number,latitude,longitude,t_winter,t_spring,t_summer,t_fall,p_winter,p_spring,p_summer,p_fall,carb,veg,cluster], shape: (row, 8)
y_full = np_data[:, 11:13].copy() # coloumn name: [..... ,carb, veg, cluster], shape:(row, 3)
self.centers = np_data[:,1:3]
x = x_full.copy()
y_reg = y_full[:,0].copy()
y_cls = y_full[:,1].copy()
x = sc_x.fit_transform(x)
y_reg = sc_reg.fit_transform(y_reg.reshape(-1,1))
y_cls -= 1
y_reg = y_reg.reshape(-1)
# if self.mode == 'train':
# self.x = torch.tensor(x_train, dtype=torch.float32)
# self.y_reg = torch.tensor(y_reg_train, dtype=torch.float32)
# self.y_cls = torch.tensor(y_cls_train, dtype=torch.float32)
# print(self.mode," dataset:", self.x.shape[0], ", positive:", self.y_cls.sum().numpy())
# elif self.mode == 'val':
# self.x = torch.tensor(x_val, dtype=torch.float32)
# self.y_reg = torch.tensor(y_reg_val, dtype=torch.float32)
# self.y_cls = torch.tensor(y_cls_val, dtype=torch.float32)
# print(self.mode," dataset:", self.x.shape[0], ", positive:",self.y_cls.sum().numpy())
self.x = torch.tensor(x, dtype=torch.float32)
self.y_reg = torch.tensor(y_reg, dtype=torch.float32)
self.y_cls = torch.tensor(y_cls, dtype=torch.float32)
self.length = len(self.x)
def get_cls_label_weight(self):
class_counts = [int(len(self.y_cls)-self.y_cls.sum().numpy()),int(self.y_cls.sum().numpy())]
class_weights = [len(self.y_cls)/class_counts[i] for i in range(len(np.unique(self.y_cls)))]
weights = [class_weights[int(self.y_cls[i].numpy())] for i in range(len(self.y_cls))]
# print(weights)
return weights
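    # Hypothetical usage (not in the original file): these per-sample weights
    # fit torch's WeightedRandomSampler for class-balanced batches, e.g.
    #   ds = dataset(df, model_indx=0, mode='train')
    #   sampler = torch.utils.data.WeightedRandomSampler(
    #       ds.get_cls_label_weight(), num_samples=len(ds))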
def get_cls_weight(self):
return (len(self.y_cls)-self.y_cls.sum())/(self.y_cls.sum())
def get_feature_len(self):
return self.x.shape[1]
def get_x(self):
return self.x
def get_y_cls(self):
return self.y_cls
def get_y_reg(self):
return self.y_reg
def get_location(self):
return self.centers
def fit_transform(self, data_features, data_carbs, data_vegs):
self.params = [np.std(data_features, axis=0, keepdims=True), np.mean(data_features, axis=0, keepdims=True),
np.std(data_carbs, axis=0, keepdims=True), np.mean(data_carbs, axis=0, keepdims=True)]
return self.transform(data_features, data_carbs, data_vegs)
def transform(self, data_features, data_carbs, data_vegs):
if self.params is None:
raise ValueError('Not fit yet')
return (data_features-self.params[1])/self.params[0], (data_carbs-self.params[3])/self.params[2], data_vegs-1
def __getitem__(self,idx):
return self.x[idx], self.y_reg[idx], self.y_cls[idx]
def __len__(self):
return self.length | 34.106195 | 194 | 0.665023 | 617 | 3,854 | 3.930308 | 0.181524 | 0.041237 | 0.059381 | 0.060619 | 0.375258 | 0.303918 | 0.249485 | 0.211134 | 0.158351 | 0.158351 | 0 | 0.017795 | 0.183446 | 3,854 | 113 | 195 | 34.106195 | 0.75278 | 0.255579 | 0 | 0 | 0 | 0 | 0.016157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.19403 | false | 0 | 0.074627 | 0.119403 | 0.447761 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
8ef620d25e139d90023f0be743c33c64d489c6c0 | 1,810 | py | Python | utils/mysqlTools.py | kalenforn/MMVA | 1e4ec5417d4497a14f226fab8a66fe065a9f0f65 | [
"MIT"
] | 4 | 2021-12-16T08:17:49.000Z | 2022-03-12T10:14:50.000Z | utils/mysqlTools.py | kalenforn/video-content-clean | 4b6e572ec034fbe2e668c250cff8e1c9a13dd0e0 | [
"MIT"
] | null | null | null | utils/mysqlTools.py | kalenforn/video-content-clean | 4b6e572ec034fbe2e668c250cff8e1c9a13dd0e0 | [
"MIT"
] | 1 | 2021-12-14T08:17:41.000Z | 2021-12-14T08:17:41.000Z | import pymysql
class MySqlHold:
def __init__(self, host: str, user: str, password: str, database: str, port=3306):
self.db = pymysql.connect(host=host, user=user, port=port, database=database, password=password)
self.cursor = self.db.cursor()
def execute_command(self, command: str):
self.cursor.execute(command)
self.cursor.connection.commit()
def fetchall(self):
result = self.cursor.fetchall()
return result
    def search(self, table: str, column_name: str, value: str):
        # Identifiers cannot be bound as query parameters, but binding the
        # value avoids SQL injection via user-supplied search strings.
        command = f"select * from `{table}` where `{column_name}` like %s"
        self.cursor.execute(command, (value,))
        self.cursor.connection.commit()
        return self.fetchall()
def close(self):
self.cursor.close()
self.db.close()
def get_one_item(conf, table: str, column_name: str, value: str):
mysql = MySqlHold(host=conf.get('mysql', 'host'),
user=conf.get('mysql', 'user'),
password=conf.get('mysql', 'passwd'),
database=conf.get('mysql', 'db'),
port=int(conf.get('mysql', 'port')))
result = mysql.search(table=table, column_name=column_name, value=value)
mysql.close()
return result
def get_video_info(conf) -> dict:
mysql = MySqlHold(host=conf.get('mysql', 'host'),
user=conf.get('mysql', 'user'),
password=conf.get('mysql', 'passwd'),
database=conf.get('mysql', 'db'),
port=int(conf.get('mysql', 'port')))
command = f"select * from {conf.get('mysql', 'video_table')}"
mysql.execute_command(command)
video_list = mysql.fetchall()
result = {}
for i, id_, path in video_list:
result.update({id_: path})
mysql.close()
return result
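# Hypothetical usage (table/column names invented; `conf` is assumed to be a
# configparser object with a populated [mysql] section):
#   videos = get_video_info(conf)   # {id: path}
#   rows = get_one_item(conf, 'videos', 'title', '%demo%')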
| 31.206897 | 104 | 0.583425 | 217 | 1,810 | 4.769585 | 0.239631 | 0.074396 | 0.127536 | 0.034783 | 0.297585 | 0.297585 | 0.297585 | 0.241546 | 0.241546 | 0.241546 | 0 | 0.003026 | 0.269613 | 1,810 | 57 | 105 | 31.754386 | 0.779879 | 0 | 0 | 0.365854 | 0 | 0 | 0.108347 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0.097561 | 0.02439 | 0 | 0.317073 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8efcb07f31cdfd94de1b43df75d06bb21b637b78 | 1,748 | py | Python | lilac/__init__.py | mba811/lilac | ce7a4e19f7f73c07b2e6b4b33fb19b3812e8e131 | [
"MIT"
] | 1 | 2019-05-07T15:09:39.000Z | 2019-05-07T15:09:39.000Z | lilac/__init__.py | mba811/lilac | ce7a4e19f7f73c07b2e6b4b33fb19b3812e8e131 | [
"MIT"
] | null | null | null | lilac/__init__.py | mba811/lilac | ce7a4e19f7f73c07b2e6b4b33fb19b3812e8e131 | [
"MIT"
] | null | null | null | # coding=utf8
#
# OOO$QHHHQ$$$$$$$$$QQQHHHHNHHHNNNNNNNNNNN
# OO$$QHHNHQ$$$$$O$$$QQQHHHNNHHHNNNNNNMNNN
# $$$QQHHHH$$$OOO$$$$QQQQHHHHHHHNHNNNMNNNN
# HHQQQHHH--:!OOO$$$QQQQQQQHHHHHNNNNNNNNNN
# NNNHQHQ-;-:-:O$$$$$QQQ$QQQQHHHHNNNNNNNNN
# NMNHHQ;-;----:$$$$$$$:::OQHHHHHNNNNHHNNN
# NNNHH;;;-----:C$$$$$::----::-::>NNNNNNNN
# NNHHQ:;;--:---:$$$$$:::--;;;;;-HNNNNNNNN
# HHQQQ:-;-----:--$$$7::-;;;.;.;;HNNMMNNNH
# QQQ$Q>-:---:----Q$$!!:;;;...;;QHNMMMNNHH
# $$$$$$$:::-:--:!7Q$!:::::;;;;OQHMMMNNHHH
# OOO$$$$O:!--:!:-7HQ>:!---.-;7O$HMMMNHHHQ
# OOOOOO$OO:::!:!!>N7!7:!-.;:;CC$HMMMNNQQQ
# OOOOOOOOOO!::>7C!N>!!C-!7--O?COHMMMNNQQQ
# OOOO?7?OOOO!!!!:CH7>7>7--:QC??OC>NMNNHQQ
# OOOO?>>>>COO!>?!7H?O>!:::H$:-;---:!ONHHH
# OOOO?7>>>>!>?7!>>$O>QOC?N7;;;..;;;;-!NNN
# COCC77>>>>!!>7>!>?7C7!>O:;;-.;..-----!MM
# OCCC>>>>!!!!7>>O?OOOO>!!;-;-..;.;;;;-:CN
# OOCC!!!!!!!>>!>7COCC$C7>->-;:;.;;;;;;-:M
# CCOO7::!!!!!!!!CCOOC$?7::-;-:;;;;-----:7
# OOOQH!-!!!!!!!>C7$OC7?!:-;!;-----;---::O
# CO$QNN7:!>!!!!!!>?CC$7C!!?CC?CO$$7---->N
# CO$QHNNNO7>!!>O?C??!7C>!---7CO$QOOQHHHHN
# OO$HHNNNNNNNNNNCCO?!!!------7$QQOO$QQQQH
# O$QHHNNMNNNNNNNQO$C!!!:----::QQ$OCOQQ$QQ
# $QHQHHNMMMMNNNNQQ$C>!::------QQ$OCO$$$$$
# QHHHHNNMMNHHHHHQQQQQ7!!:-----?Q$$OO$$$O$
# HHQQHHNNH$$$$$$QQHHHH$>!:--::7QQ$OOO$OOO
# $$$QQQHQ$OCO$O$QQ$QQQ$$C!::::$HQ$OOO$OOO
# OOO$$QQ$OCCOOOOQO$$$$$OC>!!:?HHQ$OO$$OOO
# OCCO$Q$$OOO77>>7CO$$OOOOOCQQHQQQ$$O$$$$O
#
# lilac - a static blog generator.
# https://github.com/hit9/lilac
# nz2324 AT 126.com
"""global vars"""
version = "0.3.9"
charset = "utf8" # utf8 read and write everywhere
src_ext = ".md" # source filename extension
out_ext = ".html" # output filename extension
src_dir = "src" # source directory, './src'
out_dir = "." # output directory, './'
| 36.416667 | 50 | 0.521167 | 208 | 1,748 | 4.360577 | 0.586538 | 0.006615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034014 | 0.074943 | 1,748 | 47 | 51 | 37.191489 | 0.526902 | 0.886156 | 0 | 0 | 0 | 0 | 0.139073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f100d09d5dd0ea20c9825e6a6951cbbe2a921a4d | 4,351 | py | Python | kensu/client/models/field_def.py | vidma/kensu-py | aae1e04373f03c988d55772fde6563de3ca9f375 | [
"Apache-2.0"
] | 16 | 2021-04-28T13:22:41.000Z | 2022-03-02T10:45:19.000Z | kensu/client/models/field_def.py | vidma/kensu-py | aae1e04373f03c988d55772fde6563de3ca9f375 | [
"Apache-2.0"
] | 12 | 2021-05-17T08:06:42.000Z | 2022-02-28T22:43:04.000Z | kensu/client/models/field_def.py | vidma/kensu-py | aae1e04373f03c988d55772fde6563de3ca9f375 | [
"Apache-2.0"
] | 5 | 2021-04-27T15:02:16.000Z | 2021-10-15T16:07:21.000Z | # coding: utf-8
"""
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
OpenAPI spec version: beta
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
class FieldDef(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'name': 'str',
'field_type': 'str',
'nullable': 'bool'
}
attribute_map = {
'name': 'name',
'field_type': 'fieldType',
'nullable': 'nullable'
}
def __init__(self, name=None, field_type=None, nullable=None):
"""
FieldDef - a model defined in Swagger
"""
self._name = None
self._field_type = None
self._nullable = None
self.name = name
self.field_type = field_type
self.nullable = nullable
@property
def name(self):
"""
Gets the name of this FieldDef.
:return: The name of this FieldDef.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this FieldDef.
:param name: The name of this FieldDef.
:type: str
"""
if name is None:
raise ValueError("Invalid value for `name`, must not be `None`")
self._name = name
@property
def field_type(self):
"""
Gets the field_type of this FieldDef.
:return: The field_type of this FieldDef.
:rtype: str
"""
return self._field_type
@field_type.setter
def field_type(self, field_type):
"""
Sets the field_type of this FieldDef.
:param field_type: The field_type of this FieldDef.
:type: str
"""
if field_type is None:
raise ValueError("Invalid value for `field_type`, must not be `None`")
self._field_type = field_type
@property
def nullable(self):
"""
Gets the nullable of this FieldDef.
:return: The nullable of this FieldDef.
:rtype: bool
"""
return self._nullable
@nullable.setter
def nullable(self, nullable):
"""
Sets the nullable of this FieldDef.
:param nullable: The nullable of this FieldDef.
:type: bool
"""
if nullable is None:
raise ValueError("Invalid value for `nullable`, must not be `None`")
self._nullable = nullable
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, FieldDef):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| 24.307263 | 105 | 0.536199 | 489 | 4,351 | 4.629857 | 0.226994 | 0.079505 | 0.074205 | 0.022968 | 0.353799 | 0.228799 | 0.174912 | 0.065371 | 0.033569 | 0.033569 | 0 | 0.001455 | 0.367961 | 4,351 | 178 | 106 | 24.44382 | 0.821818 | 0.251436 | 0 | 0.067568 | 1 | 0 | 0.092571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.027027 | 0 | 0.351351 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f10416747a1ff8f4a4affe6681c94660775c7e00 | 5,472 | py | Python | nominations/migrations/0001_initial.py | ewjoachim/pythondotorg | 382741cc6208fc56aa827cdd1da41983fb7e6ba8 | [
"Apache-2.0"
] | 911 | 2015-01-03T22:16:06.000Z | 2022-03-31T23:56:22.000Z | nominations/migrations/0001_initial.py | ewjoachim/pythondotorg | 382741cc6208fc56aa827cdd1da41983fb7e6ba8 | [
"Apache-2.0"
] | 1,342 | 2015-01-02T16:14:45.000Z | 2022-03-28T08:01:20.000Z | nominations/migrations/0001_initial.py | ewjoachim/pythondotorg | 382741cc6208fc56aa827cdd1da41983fb7e6ba8 | [
"Apache-2.0"
] | 551 | 2015-01-04T02:17:31.000Z | 2022-03-23T11:59:25.000Z | # Generated by Django 2.0.9 on 2019-03-18 20:21
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import markupfield.fields
class Migration(migrations.Migration):
initial = True
dependencies = [migrations.swappable_dependency(settings.AUTH_USER_MODEL)]
operations = [
migrations.CreateModel(
name="Election",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("name", models.CharField(max_length=100)),
("date", models.DateField()),
("nominations_open_at", models.DateTimeField(blank=True, null=True)),
("nominations_close_at", models.DateTimeField(blank=True, null=True)),
("slug", models.SlugField(blank=True, max_length=255, null=True)),
],
options={"ordering": ["-date"]},
),
migrations.CreateModel(
name="Nomination",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("name", models.CharField(max_length=1024, null=True)),
("email", models.CharField(max_length=1024, null=True)),
(
"previous_board_service",
models.CharField(max_length=1024, null=True),
),
("employer", models.CharField(max_length=1024, null=True)),
(
"other_affiliations",
models.CharField(blank=True, max_length=2048, null=True),
),
(
"nomination_statement",
markupfield.fields.MarkupField(null=True, rendered_field=True),
),
(
"nomination_statement_markup_type",
models.CharField(
choices=[
("", "--"),
("html", "HTML"),
("plain", "Plain"),
("markdown", "Markdown"),
("restructuredtext", "Restructured Text"),
],
default="markdown",
editable=False,
max_length=30,
),
),
(
"_nomination_statement_rendered",
models.TextField(editable=False, null=True),
),
("accepted", models.BooleanField(default=False)),
("approved", models.BooleanField(default=False)),
(
"election",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="nominations.Election",
),
),
(
"nominator",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="nominations_made",
to=settings.AUTH_USER_MODEL,
),
),
],
),
migrations.CreateModel(
name="Nominee",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("accepted", models.BooleanField(default=False)),
("approved", models.BooleanField(default=False)),
("slug", models.SlugField(blank=True, max_length=255, null=True)),
(
"election",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="nominees",
to="nominations.Election",
),
),
(
"user",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="nominations_recieved",
to=settings.AUTH_USER_MODEL,
),
),
],
),
migrations.AddField(
model_name="nomination",
name="nominee",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="nominations",
to="nominations.Nominee",
),
),
migrations.AlterUniqueTogether(
name="nominee", unique_together={("user", "election")}
),
]
| 36.238411 | 86 | 0.41356 | 366 | 5,472 | 6.038251 | 0.289617 | 0.047059 | 0.038009 | 0.059729 | 0.540724 | 0.540724 | 0.51086 | 0.422172 | 0.422172 | 0.422172 | 0 | 0.016341 | 0.485563 | 5,472 | 150 | 87 | 36.48 | 0.768739 | 0.008224 | 0 | 0.552448 | 1 | 0 | 0.098065 | 0.015484 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027972 | 0 | 0.055944 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f10b49aaee68e7293e9a1314d7972c5caea9edb2 | 750 | py | Python | neighbor/migrations/0002_auto_20210412_2210.py | LewisNjagi/neighborhood | cbefac55b930999629dba202a784a096799949a4 | [
"MIT"
] | null | null | null | neighbor/migrations/0002_auto_20210412_2210.py | LewisNjagi/neighborhood | cbefac55b930999629dba202a784a096799949a4 | [
"MIT"
] | null | null | null | neighbor/migrations/0002_auto_20210412_2210.py | LewisNjagi/neighborhood | cbefac55b930999629dba202a784a096799949a4 | [
"MIT"
] | 2 | 2021-04-14T05:56:06.000Z | 2021-04-15T14:20:02.000Z | # Generated by Django 3.2 on 2021-04-12 22:10
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('neighbor', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='business',
name='neighborhood',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='business', to='neighbor.neighborhood'),
),
migrations.AlterField(
model_name='business',
name='user',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='owner', to='neighbor.profile'),
),
]
| 30 | 145 | 0.64 | 81 | 750 | 5.839506 | 0.493827 | 0.067653 | 0.088795 | 0.139535 | 0.498943 | 0.498943 | 0.325581 | 0.325581 | 0.325581 | 0.325581 | 0 | 0.031359 | 0.234667 | 750 | 24 | 146 | 31.25 | 0.792683 | 0.057333 | 0 | 0.333333 | 1 | 0 | 0.144681 | 0.029787 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f10c79f2e66be2d31bb61e574317d2eef21da993 | 4,068 | py | Python | pylps/core.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | 1 | 2018-05-19T18:28:12.000Z | 2018-05-19T18:28:12.000Z | pylps/core.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | 12 | 2018-04-26T00:58:11.000Z | 2018-05-13T22:03:39.000Z | pylps/core.py | astraldawn/pylps | e9964a24bb38657b180d441223b4cdb9e1dadc8a | [
"MIT"
] | null | null | null | from pylps.constants import *
from pylps.config import CONFIG
from pylps.kb import KB
from pylps.engine import ENGINE
from pylps.lps_objects import GoalClause, Observation, ReactiveRule
import pylps.creator as creator
''' Declarations '''
def create_actions(*args, return_obj=False):
ret = creator.create_objects(args, ACTION, return_obj)
if return_obj:
return ret
def create_events(*args, return_obj=False):
ret = creator.create_objects(args, EVENT, return_obj)
if return_obj:
return ret
def create_facts(*args, return_obj=False):
ret = creator.create_objects(args, FACT, return_obj)
if return_obj:
return ret
def create_fluents(*args, return_obj=False):
ret = creator.create_objects(args, FLUENT, return_obj)
if return_obj:
return ret
def create_variables(*args, return_obj=False):
ret = creator.create_objects(args, VARIABLE, return_obj)
if return_obj:
return ret
def initially(*args):
KB.initial_fluents.extend(args)
def observe(obs):
# TODO: Make observations iterable?
obs = Observation(obs, obs.start_time, obs.end_time)
KB.add_observation(obs)
def reactive_rule(*args):
new_rule = ReactiveRule(args)
KB.add_rule(new_rule)
return new_rule
def event(*args):
new_clause = GoalClause(args)
KB.add_clause(new_clause)
return new_clause
def goal(*args):
new_clause = GoalClause(args)
KB.add_clause(new_clause)
return new_clause
def false_if(*args):
converted = []
for arg in args:
if isinstance(arg, tuple):
converted.append(arg)
else:
converted.append((arg, True))
KB.add_constraint(converted)
''' Core loop '''
def initialise(max_time=5, create_vars=True):
# Must call create object directly due to stack issues
if create_vars:
creator.create_objects(
['T', 'T1', 'T2', 'T3', 'T4', 'T5'], VARIABLE)
creator.create_objects(['_'], VARIABLE)
ENGINE.set_params(max_time=max_time)
def execute(
n_solutions=CONFIG_DEFAULT_N_SOLUTIONS,
single_clause=False,
solution_preference=SOLN_PREF_FIRST,
debug=False,
experimental=True,
strategy=STRATEGY_GREEDY,
stepwise=False,
obs=OBS_BEFORE
):
    '''Execute pyLPS program

    Keyword arguments:
    n_solutions -- the number of solutions; use -1 for all solutions
        (default 1)
    single_clause -- only consider the first clause for an event
        (default False)
    solution_preference -- the type of solution to favour (defaults to first)
    '''
if solution_preference is SOLN_PREF_MAX:
# If we want maximum solutions, override n_solutions if it is default
if n_solutions == CONFIG_DEFAULT_N_SOLUTIONS:
n_solutions = -1
# All solutions if -1
if n_solutions == -1:
n_solutions = 10000000
options_dict = {
'n_solutions': n_solutions,
'obs': obs,
'single_clause': single_clause,
'solution_preference': solution_preference,
# Development
'debug': debug,
'experimental': experimental,
'strategy': strategy,
}
# Resets
CONFIG.reactive_id = 0
KB.reset_kb()
# Initially
# for fluent in KB.initial_fluents:
# KB.add_fluent(fluent)
# KB.log_fluent(fluent, 0, F_INITIATE)
CONFIG.set_options(options_dict)
ENGINE.run(stepwise=stepwise)
def execute_next_step():
ENGINE.next_step()
''' Utility '''
def show_kb_causalities():
return KB.show_causalities()
def show_kb_clauses():
return KB.show_clauses()
def show_kb_constraints():
return KB.show_constraints()
def show_kb_facts():
return KB.show_facts()
def show_kb_fluents():
return KB.show_fluents()
def show_kb_log(show_events=False):
return KB.show_log(show_events=show_events)
def show_kb_rules():
return KB.show_reactive_rules()
def kb_display_log(show_events=False, print_log=False):
    KB.show_log(show_events=show_events, print_log=print_log)
return KB.display_log
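# A hypothetical driver sketch (illustrative only; it touches just the helpers
# defined above, and real programs also declare rules via reactive_rule/goal):
#   create_fluents('fire')
#   initialise(max_time=5)
#   execute(n_solutions=1, solution_preference=SOLN_PREF_FIRST)
#   show_kb_log(show_events=True)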
| 21.1875 | 79 | 0.682645 | 538 | 4,068 | 4.920074 | 0.263941 | 0.051001 | 0.05289 | 0.034001 | 0.250472 | 0.250472 | 0.225538 | 0.203627 | 0.191538 | 0.049112 | 0 | 0.006631 | 0.221485 | 4,068 | 191 | 80 | 21.298429 | 0.829176 | 0.141347 | 0 | 0.153846 | 0 | 0 | 0.024412 | 0 | 0 | 0 | 0 | 0.005236 | 0 | 1 | 0.211538 | false | 0 | 0.057692 | 0.067308 | 0.423077 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f10f9d27421300b1fdd0a309643ad08a50750df9 | 1,323 | py | Python | tests/test_k2.py | LanzLagman/chronos | 3c7e32e7bd8ed85d442ce3ecbf4c9a5272e8e470 | [
"MIT"
] | null | null | null | tests/test_k2.py | LanzLagman/chronos | 3c7e32e7bd8ed85d442ce3ecbf4c9a5272e8e470 | [
"MIT"
] | null | null | null | tests/test_k2.py | LanzLagman/chronos | 3c7e32e7bd8ed85d442ce3ecbf4c9a5272e8e470 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pandas as pd
import lightkurve as lk
from chronos.k2 import K2, Everest, K2sff
EPICID = 211916756 # k2-95
CAMPAIGN = 5 # or 18
def test_k2_attributes():
"""
"""
# test inherited attributes
s = K2(epicid=EPICID, campaign=CAMPAIGN)
assert s.epicid is not None
assert s.target_coord is not None
gaia_params = s.query_gaia_dr2_catalog(return_nearest_xmatch=True)
assert isinstance(gaia_params, pd.Series)
tic_params = s.query_tic_catalog(return_nearest_xmatch=True)
assert isinstance(tic_params, pd.Series)
def test_k2_lc_pipeline():
s = K2(epicid=EPICID, campaign=CAMPAIGN)
s.get_lc("sap")
assert isinstance(s.lc_sap, lk.LightCurve)
s.get_lc("pdcsap")
assert isinstance(s.lc_pdcsap, lk.LightCurve)
# def test_k2_lc_custom():
# s = K2(epicid=EPICID, campaign=CAMPAIGN)
# sap = s.make_custom_lc()
def test_k2_tpf():
s = K2(epicid=EPICID, campaign=CAMPAIGN)
tpf = s.get_tpf()
assert isinstance(tpf, lk.targetpixelfile.TargetPixelFile)
def test_everest():
"""
"""
s = Everest(epicid=EPICID, campaign=CAMPAIGN)
assert isinstance(s.lc_everest, lk.LightCurve)
def test_k2sff():
"""
"""
s = K2sff(epicid=EPICID, campaign=CAMPAIGN)
assert isinstance(s.lc_k2sff, lk.LightCurve)
| 24.5 | 70 | 0.692366 | 186 | 1,323 | 4.736559 | 0.290323 | 0.127128 | 0.136209 | 0.190692 | 0.358683 | 0.351873 | 0.211124 | 0.106697 | 0 | 0 | 0 | 0.028918 | 0.18972 | 1,323 | 53 | 71 | 24.962264 | 0.79291 | 0.119426 | 0 | 0.103448 | 0 | 0 | 0.008007 | 0 | 0 | 0 | 0 | 0 | 0.310345 | 1 | 0.172414 | false | 0 | 0.103448 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f11004b8b2440fc97ec0e3b374da4a63264ff26e | 388 | py | Python | ch22-直方图/22.3.4.hsv_hist-绘制2D直方图.py | makelove/OpenCV-Python-Tutorial | e428d648f7aa50d6a0fb4f4d0fb1bd1a600fef41 | [
"MIT"
] | 2,875 | 2016-10-21T01:33:22.000Z | 2022-03-30T12:15:28.000Z | ch22-直方图/22.3.4.hsv_hist-绘制2D直方图.py | makelove/OpenCV-Python-Tutorial | e428d648f7aa50d6a0fb4f4d0fb1bd1a600fef41 | [
"MIT"
] | 12 | 2017-07-18T14:24:27.000Z | 2021-07-04T10:32:25.000Z | ch22-直方图/22.3.4.hsv_hist-绘制2D直方图.py | makelove/OpenCV-Python-Tutorial | e428d648f7aa50d6a0fb4f4d0fb1bd1a600fef41 | [
"MIT"
] | 1,066 | 2017-03-11T01:43:28.000Z | 2022-03-29T14:52:41.000Z | # -*-coding:utf8-*-#
__author__ = 'play4fun'
"""
create time: 15-11-8 4:44 pm
Plot a 2D hue-saturation histogram.
"""
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('../data/home.jpg')
# cv2.imshow("src", img)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
plt.imshow(hist, interpolation='nearest')
plt.show()
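# In the plotted histogram the y axis spans the 180 hue bins (0-179) and the
# x axis the 256 saturation bins; brighter cells mark more frequent (H, S) pairs.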
| 19.4 | 70 | 0.667526 | 61 | 388 | 4.163934 | 0.688525 | 0.047244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10119 | 0.134021 | 388 | 19 | 71 | 20.421053 | 0.654762 | 0.103093 | 0 | 0 | 0 | 0 | 0.102649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f11b98c4cbad0c6ce3b3cacf5a4c884e14ed54bb | 363 | py | Python | blog/migrations/0002_auto_20190719_0620.py | Sedherthe/Dj-Blog | 43d24e2a3654ba080da0be233d2e6f51c9baf4d4 | [
"MIT"
] | null | null | null | blog/migrations/0002_auto_20190719_0620.py | Sedherthe/Dj-Blog | 43d24e2a3654ba080da0be233d2e6f51c9baf4d4 | [
"MIT"
] | null | null | null | blog/migrations/0002_auto_20190719_0620.py | Sedherthe/Dj-Blog | 43d24e2a3654ba080da0be233d2e6f51c9baf4d4 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.4 on 2019-07-19 00:50
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('blog', '0001_initial'),
]
operations = [
migrations.RenameField(
model_name='post',
old_name='data_published',
new_name='date_published',
),
]
| 19.105263 | 47 | 0.584022 | 39 | 363 | 5.282051 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075099 | 0.30303 | 363 | 18 | 48 | 20.166667 | 0.73913 | 0.123967 | 0 | 0 | 1 | 0 | 0.151899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f12352df10226259fd2f7adb00c402a851d5ea01 | 839 | py | Python | python/magic-8-ball.py | CindyMullins/intro-to-open-source | 6099f28b30c62ee6362ae67f31d2d665ae2a6326 | [
"MIT"
] | 16 | 2017-08-20T06:08:53.000Z | 2021-02-18T03:41:28.000Z | python/magic-8-ball.py | CindyMullins/intro-to-open-source | 6099f28b30c62ee6362ae67f31d2d665ae2a6326 | [
"MIT"
] | 19 | 2017-08-19T05:32:30.000Z | 2021-03-26T18:40:19.000Z | python/magic-8-ball.py | CindyMullins/intro-to-open-source | 6099f28b30c62ee6362ae67f31d2d665ae2a6326 | [
"MIT"
] | 44 | 2017-08-19T05:09:13.000Z | 2021-07-28T13:39:33.000Z | import random
import time
responses = ["Not so sure", "Shitty", "Great", "Absolutely not", "Outlook is good", "I see good things happening", "Never", "Negative", "Could be", "Unclear, ask again", "Yes definitely", "No, Idon't think so"]
## Following function asks user question, then returns random results from responses
def answerQuery():
print("Let me dig deep into the waters of life, and find your answer")
time.sleep(2)
print("Hmmm")
time.sleep(2)
print(random.choice(responses))
print("\n")
## Following asks user if they would like to play again, and loops until user is finished
query = 'Would you like to ask the Wise One a question? Y/N: '
response = input(query)
print(response)
while response.lower() == 'y':
    answerQuery()
    response = input(query)
print('quitting')
exit()
| 34.958333 | 210 | 0.692491 | 123 | 839 | 4.707317 | 0.674797 | 0.027634 | 0.034542 | 0.051813 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002928 | 0.185936 | 839 | 23 | 211 | 36.478261 | 0.844802 | 0.200238 | 0 | 0.210526 | 0 | 0 | 0.417417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.105263 | null | null | 0.315789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f124c20b1fe8e78ebe894a5b161358d7754b3437 | 584 | py | Python | DJangoHotel/viewspackage/roomInfoView.py | chuangkee/mygithub | a3742d45002de55378987eea551776a3168acc17 | [
"MIT"
] | null | null | null | DJangoHotel/viewspackage/roomInfoView.py | chuangkee/mygithub | a3742d45002de55378987eea551776a3168acc17 | [
"MIT"
] | null | null | null | DJangoHotel/viewspackage/roomInfoView.py | chuangkee/mygithub | a3742d45002de55378987eea551776a3168acc17 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.shortcuts import render
from qiniuyun.backend import QiniuPush
from qiniuyun.models import ImageAtQiniu
from .indexView import ImgList
from DJangoHotel.models import RoomInfo
def roomInfo(request):
    rooms = RoomInfo.objects.all()
    imgObjs = ImageAtQiniu.objects.all()
    imgUrls = [QiniuPush.private_download_url(i.fullname) for i in imgObjs]
    imgs = ImgList()
    for i in imgUrls:
        if 'hotel-logo' in i:
            imgs.logo = i
    return render(request, 'roominfo.html', {'roomInfoList': rooms, 'img': imgs})
| 29.2 | 76 | 0.688356 | 72 | 584 | 5.555556 | 0.541667 | 0.06 | 0.03 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002165 | 0.208904 | 584 | 19 | 77 | 30.736842 | 0.863636 | 0.035959 | 0 | 0 | 0 | 0 | 0.067736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.357143 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f12913ad1c3dbc3a14c703a151bdf8159fa5b1c5 | 1,176 | py | Python | punica/cli/box_cmd.py | PunicaSuite/punica-python | dfcd50de4452454b96d0c252c2db994e3bad38c0 | [
"MIT"
] | 6 | 2018-11-01T23:39:32.000Z | 2020-03-25T13:39:46.000Z | punica/cli/box_cmd.py | PunicaSuite/punica-python | dfcd50de4452454b96d0c252c2db994e3bad38c0 | [
"MIT"
] | 3 | 2018-10-26T12:53:16.000Z | 2019-06-21T13:22:56.000Z | punica/cli/box_cmd.py | PunicaSuite/punica-python | dfcd50de4452454b96d0c252c2db994e3bad38c0 | [
"MIT"
] | 7 | 2018-10-11T07:03:10.000Z | 2019-06-26T02:38:58.000Z | import webbrowser
from ontology.exception.exception import SDKException
from click import (
argument,
pass_context
)
from .main import main
from punica.box.repo_box import Box
from punica.utils.output import echo_cli_exception
from punica.exception.punica_exception import PunicaException
@main.command('unbox')
@argument('box_name', nargs=1)
@pass_context
def unbox_cmd(ctx, box_name):
"""
Download a Punica Box, a pre-built Punica project.
"""
box = Box(ctx.obj['PROJECT_DIR'])
try:
box.unbox(box_name)
except (PunicaException, SDKException) as e:
echo_cli_exception(e)
@main.command('init')
@pass_context
def init_cmd(ctx):
"""
Initialize new and empty Ontology project.
"""
box = Box(ctx.obj['PROJECT_DIR'])
try:
box.init_box()
except (PunicaException, SDKException) as e:
echo_cli_exception(e)
@main.command('boxes')
@pass_context
def boxes_cmd(ctx):
"""
List all available punica box.
"""
box = Box(ctx.obj['PROJECT_DIR'])
try:
box.list_boxes()
except (PunicaException, SDKException):
webbrowser.open('https://punica.ont.io/boxes/')
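# Illustrative shell usage, assuming the package installs a `punica` console
# entry point for `main`:
#   punica unbox <box-name>
#   punica init
#   punica boxes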
| 21.777778 | 61 | 0.681973 | 153 | 1,176 | 5.091503 | 0.333333 | 0.056483 | 0.061617 | 0.046213 | 0.290116 | 0.290116 | 0.290116 | 0.290116 | 0.254172 | 0.164313 | 0 | 0.001065 | 0.201531 | 1,176 | 53 | 62 | 22.188679 | 0.828541 | 0.105442 | 0 | 0.371429 | 0 | 0 | 0.082505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0.114286 | 0.2 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f1363e56bdfc412112127936e4e7fe72aeef181e | 10,827 | py | Python | python/project_kine/kine.py | OthmanEmpire/misc_code | 4d487db518167b62969ad65abaffbc9e01777d91 | [
"MIT"
] | null | null | null | python/project_kine/kine.py | OthmanEmpire/misc_code | 4d487db518167b62969ad65abaffbc9e01777d91 | [
"MIT"
] | null | null | null | python/project_kine/kine.py | OthmanEmpire/misc_code | 4d487db518167b62969ad65abaffbc9e01777d91 | [
"MIT"
] | null | null | null | ################################################################################
# Author: Othman Alikhan #
# Year Initiated: 2012 #
# #
# An AI designed to answer a specific branch of questions on Yahoo Answers #
# --Kinematics of constant acceleration bodies #
################################################################################
# TODO: include GUI interface
# Project modules
import res
# Python Standard modules
import time
import sys
import io
import unittest
import urllib.request
import re
from unittest.mock import patch
# The main UI for navigation in KINE
class Terminal:
    # No explicit constructor: Terminal keeps no per-instance state.
def printWelcome(self):
"""
Prints the startup welcome message.
"""
print(res.TERMINAL_MSG_PRINT_WELCOME_1)
def promptUserForKineCode(self):
"""
Prompts the user to enter a password to continue further on.
"""
kineCode = input(res.TERMINAL_MSG_PROMPT_USER_FOR_KINE_CODE_1)
return kineCode
def verifyKineCode(self, kineCode):
"""
Verifies whether the startup password should be as it is. Returns a
boolean indicating so.
"""
if(kineCode == res.TERMINAL_MSG_VERIFY_KINE_CODE_1):
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_2)
return True
else:
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_3)
for i in range(3):
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_4, end='')
time.sleep(1)
print('\n')
for i in range(6):
time.sleep(0.8)
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_5)
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_6)
time.sleep(2)
print(res.TERMINAL_MSG_VERIFY_KINE_CODE_7)
time.sleep(1)
return False
def printMainMenu(self):
"""
Prints the main menu in the Kine Terminal, indicating the choices the
user can make.
"""
print(res.TERMINAL_MSG_PRINT_MAIN_MENU_1)
def promptUserForMainMenuChoice(self):
"""
Prompts the user to enter a choice for main menu.
"""
choice = input(res.TERMINAL_MSG_PROMPT_USER_FOR_MAIN_MENU_CHOICE_1)
return choice
# TODO: Eventually add all the print menu choices
def getProblemClass(self, choice):
"""
Returns the print menu function for the given choice.
"""
choiceToPrintMenuDict = {"A": InclinedPlane,
"B": NotImplementedError,
"C": NotImplementedError,
"D": NotImplementedError,
"E": NotImplementedError}
return choiceToPrintMenuDict[choice]
def run(self):
"""
Runs the Kine terminal which starts the program
"""
self.printWelcome()
isCorrectPassword = self.verifyKineCode(self.promptUserForKineCode())
if not isCorrectPassword:
sys.exit()
self.printMainMenu()
problemChosen = self.promptUserForMainMenuChoice()
self.getProblemClass(problemChosen)
# A type of physics problem that KINE can solve
class InclinedPlane:
def printMenu(self):
"""
prints the main menu which displays two choices.
"""
print(res.INCLINEDPLANE_MSG_PRINT_MENU_1)
def promptUserForMainMenuChoice(self):
"""
Prompts the user to enter a choice for main menu.
"""
choice = input(res.INCLINEDPLANE_MSG_PROMPT_USER_FOR_MAIN_MENU_CHOICE_1)
return choice
def getChoice(self, choice):
"""
Returns the print menu function for the given choice.
"""
choiceToPath = {"A": self.fetchQuestionFromInternet,
"B": NotImplementedError,
"C": NotImplementedError}
return choiceToPath[choice]
def fetchQuestionFromInternet(self):
"""
        Fetches a potential inclined plane question from Yahoo Answers
to be analyzed thereafter.
"""
try:
url = "http://www.yahoo.com"
values = {"s": "basic",
"submit": "search"}
data = urllib.parse.urlencode(values)
data = data.encode("utf-8")
req = urllib.request.Request(url, data)
resp = urllib.request.urlopen(req)
respData = resp.read()
# print(respData)
paragraphs = re.findall(r"<p(.*?)</p>", str(respData))
for p in paragraphs:
print(p)
        except Exception as error:
            raise Exception("Failed to fetch the question: {}".format(error))
# def calculateCorrelationCoefficient(self):
# """
# Calculates how closely the question resembles an inclined plane physics
# problem.
# """
# pass
# All the unit tests carried out on the Terminal object
class TerminalTestCase(unittest.TestCase):
def setUp(self):
"""
Creates a new Terminal object prior to each test run.
"""
self.terminal = Terminal()
@patch("sys.stdout", new=io.StringIO())
def testPrintWelcome(self):
"""
Checks the printed output of the printWelcome method against a literal
string.
"""
self.terminal.printWelcome()
self.assertEqual(sys.stdout.getvalue().rstrip(),
res.TERMINAL_MSG_PRINT_WELCOME_1.rstrip())
@patch("builtins.input", return_value="Feeding")
def testPromptUserForMainMenuChoice(self, mockInputtedKineCode):
"""
Checks whether inputted password is actually the returned password by
testing it against a literal string.
"""
self.assertEqual(self.terminal.promptUserForKineCode(), "Feeding")
@patch("time.sleep") # removes the sleep delay to speed up testing
def testVerifyKineCode(self, mockSleepDelay):
"""
Checks whether the kineCode password is indeed what it seem to be by
testing it against two samples--one correct and the other one incorrect.
"""
truePassword = res.TERMINAL_MSG_VERIFY_KINE_CODE_1
trueCase = self.terminal.verifyKineCode(truePassword)
self.assertTrue(trueCase, msg="Apparently, {} is not the password. "
"The real password should be: {}"
.format(truePassword,
res.TERMINAL_MSG_VERIFY_KINE_CODE_1))
falsePassword = "YogHurt"
falseCase = self.terminal.verifyKineCode(falsePassword)
self.assertFalse(falseCase, msg="Apparently, {} is the password. "
"The real password should be: {}"
.format(falsePassword,
res.TERMINAL_MSG_VERIFY_KINE_CODE_1))
@patch("sys.stdout", new=io.StringIO())
def testPrintMainMenu(self):
"""
Checks the printed output of the printMainMenu method against a literal
string.
"""
self.terminal.printMainMenu()
self.assertEqual(sys.stdout.getvalue().rstrip(),
res.TERMINAL_MSG_PRINT_MAIN_MENU_1.rstrip())
@patch("builtins.input", return_value="GG")
def testPromptUserForChoice(self, mockInputtedChoice):
"""
Checks whether inputted choice for the main menu is actually the
returned choice by testing it against a literal string.
"""
self.assertEqual(self.terminal.promptUserForMainMenuChoice(), "GG")
def testGetProblemClass(self):
"""
Checks whether the getProblemClass method returns the correct class of
problem by testing it against a valid and invalid key choice.
"""
invalidChoice = "QQ"
self.assertRaises(KeyError,
self.terminal.getProblemClass,
invalidChoice)
validChoice = "A"
self.assertEqual(InclinedPlane,
self.terminal.getProblemClass(validChoice))
# All the unit tests carried out on the InclinedPlane object
class InclinedPlaneTestCase(unittest.TestCase):
def setUp(self):
"""
Creates a new InclinedPlane object prior to each test run
"""
self.inclinedPlane = InclinedPlane()
@patch("sys.stdout", new=io.StringIO())
def testPrintMenu(self):
"""
Checks the printed output of printMenu method against a literal string.
"""
self.inclinedPlane.printMenu()
self.assertEqual(sys.stdout.getvalue().rstrip(),
res.INCLINEDPLANE_MSG_PRINT_MENU_1.rstrip())
@patch("builtins.input", return_value="Feeding")
def testPromptUserForMainMenuChoice(self, mockInputtedKineCode):
"""
Checks whether inputted choice is actually the returned choice by
testing it against a literal string.
"""
self.assertEqual(self.inclinedPlane.promptUserForMainMenuChoice(),
"Feeding")
def testGetChoice(self):
"""
Checks whether the getChoice method returns the correct activity by
testing it against a valid and invalid key choice.
"""
invalidChoice = "QQ"
self.assertRaises(KeyError,
self.inclinedPlane.getChoice, invalidChoice)
validChoice = "A"
self.assertEqual(self.inclinedPlane.fetchQuestionFromInternet,
self.inclinedPlane.getChoice(validChoice))
# The integration test carried out on the Terminal object
class TerminalIntegrationTest(unittest.TestCase):
def setUp(self):
"""
Creates a Terminal object that is used as the basis of testing
"""
self.terminal = Terminal()
@patch("builtins.input", return_value="Feeding")
@patch("time.sleep") # removes the sleep delay to speed up testing
def testRunWithFalseKineCode(self, mockInputtedKineCode, mockSleepDelay):
"""
Executes the run method while supplying an incorrect KineCode after the
welcome menu. The expected outcome should be a boot from Kine.
"""
self.assertRaises(SystemExit, self.terminal.run)
# Runs all the test cases
@patch("sys.stdout", new=io.StringIO())
def main():
unittest.main(verbosity=0)
if __name__ == "__main__":
InclinedPlane().fetchQuestionFromInternet()
main() | 32.70997 | 81 | 0.590745 | 1,083 | 10,827 | 5.803324 | 0.263158 | 0.028003 | 0.03564 | 0.031822 | 0.427844 | 0.406842 | 0.38027 | 0.253779 | 0.199045 | 0.199045 | 0 | 0.004316 | 0.315138 | 10,827 | 331 | 82 | 32.70997 | 0.843291 | 0.281149 | 0 | 0.245033 | 0 | 0 | 0.053579 | 0 | 0 | 0 | 0 | 0.006042 | 0.086093 | 1 | 0.165563 | false | 0.07947 | 0.05298 | 0 | 0.298013 | 0.125828 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f140c7545f7a24cc77e25acf7054d41c00c1c115 | 518 | py | Python | setup.py | madhav-datt/seeyourmail | 0fa1d312dbd8b091b41c92d4bcc26590705c1b97 | [
"MIT"
] | 1 | 2016-08-13T07:07:30.000Z | 2016-08-13T07:07:30.000Z | setup.py | madhav-datt/seeyourmail | 0fa1d312dbd8b091b41c92d4bcc26590705c1b97 | [
"MIT"
] | null | null | null | setup.py | madhav-datt/seeyourmail | 0fa1d312dbd8b091b41c92d4bcc26590705c1b97 | [
"MIT"
] | null | null | null |
from distutils.core import setup
setup(
name='seeyourmail',
packages=['seeyourmail'],
version='1.0',
description='seeyourmail makes retrieving and checking your email in programs as simple as can be',
author='Madhav Datt',
author_email='madhav.datt@hotmail.com',
url='https://github.com/madhav-datt/seeyourmail',
download_url='https://github.com/madhav-datt/seeyourmail/tarball/1.0',
keywords=['email', 'receive', 'receive mail', 'automatic', 'attachment'],
classifiers=[],
)
| 32.375 | 103 | 0.696911 | 63 | 518 | 5.698413 | 0.634921 | 0.111421 | 0.077994 | 0.094708 | 0.211699 | 0.211699 | 0.211699 | 0 | 0 | 0 | 0 | 0.00907 | 0.148649 | 518 | 15 | 104 | 34.533333 | 0.804989 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 0.044487 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f1440fd46e2a7fc12bee012f62a878da9039092a | 517 | py | Python | history/migrations/0005_auto_20190411_2258.py | hereischen/Goliath | 2305048537f892febde6610b4cd108d3fb1b895a | [
"BSD-3-Clause"
] | 5 | 2019-05-08T00:50:25.000Z | 2021-11-24T07:33:10.000Z | history/migrations/0005_auto_20190411_2258.py | hereischen/Goliath | 2305048537f892febde6610b4cd108d3fb1b895a | [
"BSD-3-Clause"
] | 12 | 2019-12-02T00:05:06.000Z | 2022-01-13T01:04:35.000Z | history/migrations/0005_auto_20190411_2258.py | hereischen/Goliath | 2305048537f892febde6610b4cd108d3fb1b895a | [
"BSD-3-Clause"
] | 1 | 2019-01-08T17:43:41.000Z | 2019-01-08T17:43:41.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.18 on 2019-04-11 22:58
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('history', '0004_history_prev_quantity'),
]
operations = [
migrations.AlterField(
model_name='history',
name='prev_quantity',
field=models.PositiveIntegerField(default=0, verbose_name='\u539f\u5e93\u5b58\u6570\u91cf'),
),
]
| 24.619048 | 104 | 0.646035 | 57 | 517 | 5.666667 | 0.754386 | 0.074303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09596 | 0.234043 | 517 | 20 | 105 | 25.85 | 0.719697 | 0.133462 | 0 | 0 | 1 | 0 | 0.186517 | 0.125843 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f148a0878c8fb132c93c5a6ed444cf5362a2ef2f | 558 | py | Python | server/huTools/couch.py | jeffjia/Send2Cloud | 4900c0a85ba9ea817bf34f0b5356580c2c3117c3 | [
"Apache-2.0"
] | 1 | 2018-03-01T13:12:46.000Z | 2018-03-01T13:12:46.000Z | server/huTools/couch.py | jeffjia/Send2Cloud | 4900c0a85ba9ea817bf34f0b5356580c2c3117c3 | [
"Apache-2.0"
] | null | null | null | server/huTools/couch.py | jeffjia/Send2Cloud | 4900c0a85ba9ea817bf34f0b5356580c2c3117c3 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
"""
couch.py
Created by Christian Klein on 2010-02-26.
Copyright (c) 2010 HUDORA GmbH. All rights reserved.
"""
import couchdb.client
import warnings
def setup_couchdb(servername, database):
"""Get a connection handler to the CouchDB Database, creating it when needed."""
warnings.warn("hutools.couch is deprecated", DeprecationWarning, stacklevel=2)
server = couchdb.client.Server(servername)
if database in server:
return server[database]
else:
return server.create(database)
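# Hypothetical usage (server URL and database name are placeholders):
#   db = setup_couchdb('http://localhost:5984/', 'mydb')
#   doc_id, rev = db.save({'type': 'example'})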
| 25.363636 | 84 | 0.724014 | 73 | 558 | 5.520548 | 0.753425 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030435 | 0.175627 | 558 | 21 | 85 | 26.571429 | 0.845652 | 0.387097 | 0 | 0 | 0 | 0 | 0.082317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f14921b446dc5a23fa822e2572f00ead72c3441d | 1,931 | py | Python | app/mixins.py | valentinDruzhinin/todo-lists | b517137437d5480d1aaec0451f060e1556bf3d8f | [
"MIT"
] | null | null | null | app/mixins.py | valentinDruzhinin/todo-lists | b517137437d5480d1aaec0451f060e1556bf3d8f | [
"MIT"
] | null | null | null | app/mixins.py | valentinDruzhinin/todo-lists | b517137437d5480d1aaec0451f060e1556bf3d8f | [
"MIT"
] | null | null | null | import json
from bson import ObjectId
from pymongo import ReturnDocument
from .exceptions import DBException
class DBActionsMixin:
def __init__(self, model, db):
self._model_cls = model
self._db = db
def add(self, item):
db_obj = self._collection.insert_one(item.prepare_for_db())
model = self._model_cls(id=str(db_obj.inserted_id))
return self.query(**model.db_key()).pop()
def query(self, **query_params):
return [
self._model_cls.from_db_object(db_obj) for db_obj in self._collection.find(query_params)
]
def remove(self, todo_item):
db_obj = self._collection.find_one_and_delete(todo_item.db_key())
if db_obj:
return self._model_cls.from_db_object(db_obj)
raise DBException(f'Unable to remove. Object with id={todo_item.id} is absent.')
def update(self, old_todo_item, new_todo_item):
if not self.query(**old_todo_item.db_key()):
raise DBException(f'Unable to update. Object with id={old_todo_item.id} is absent.')
db_obj = self._collection.find_one_and_update(
old_todo_item.db_key(),
{'$set': new_todo_item.prepare_for_db(with_empty_fields=False)},
return_document=ReturnDocument.AFTER
)
return self._model_cls.from_db_object(db_obj)
class ToDBModelMixin:
def prepare_for_db(self, with_empty_fields=True):
        obj = dict(vars(self))  # copy, so deleting 'id' does not mutate the instance itself
        del obj['id']
        # Parenthesized so the conditional chooses between two functions; the
        # original parsed as one lambda whose body always returned a truthy value.
        filter_func = (lambda x: True) if with_empty_fields else (lambda x: x)
        obj = {key: str(value) for key, value in obj.items() if filter_func(value)}
return obj
def db_key(self):
return {'_id': ObjectId(self.id)}
class ModelSerializeMixin:
def __str__(self):
        # A plain object is not JSON-serializable; dump its attribute dict
        # instead (assumes flat model attributes).
        return json.dumps(vars(self), default=str)
class ModelContentMixin:
@property
def is_empty(self):
return not bool([v for v in vars(self).values() if v])
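# --- Usage sketch (illustrative only) ---
# DBActionsMixin reads self._collection, which __init__ never sets, so a
# concrete class is expected to provide it. A hedged, hypothetical wiring
# (TodoStore, TodoItem and the 'todos' collection name are assumptions):
#
#   from pymongo import MongoClient
#
#   class TodoStore(DBActionsMixin):
#       def __init__(self, model, db):
#           super().__init__(model, db)
#           self._collection = db['todos']  # assumed collection name
#
#   store = TodoStore(TodoItem, MongoClient().todo_db)
#   store.add(...), store.query(...), etc. then operate on that collection.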
| 31.655738 | 100 | 0.663905 | 278 | 1,931 | 4.305755 | 0.273381 | 0.037594 | 0.050125 | 0.047619 | 0.25731 | 0.136174 | 0.136174 | 0.087719 | 0.087719 | 0 | 0 | 0 | 0.235111 | 1,931 | 60 | 101 | 32.183333 | 0.810427 | 0 | 0 | 0.043478 | 0 | 0 | 0.066805 | 0.010875 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195652 | false | 0 | 0.086957 | 0.086957 | 0.543478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f149870ce5099dc14c5ea591f5e0b61107cd647f | 276 | py | Python | demo.py | re-treat/DialoFlow | b7b729ddf5d146ee201c987f3a44894ad24938ce | [
"MIT"
] | null | null | null | demo.py | re-treat/DialoFlow | b7b729ddf5d146ee201c987f3a44894ad24938ce | [
"MIT"
] | null | null | null | demo.py | re-treat/DialoFlow | b7b729ddf5d146ee201c987f3a44894ad24938ce | [
"MIT"
] | null | null | null | from generate import *
history = []
while True:
new_sentence = input('>> User: ').replace('\n','')
history.append(new_sentence)
resps = get_responses(history[-5:],mode='sample')
best_resp = resps[0]
print(">> Bot:",best_resp)
history.append(best_resp) | 27.6 | 54 | 0.644928 | 35 | 276 | 4.914286 | 0.685714 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008772 | 0.173913 | 276 | 10 | 55 | 27.6 | 0.745614 | 0 | 0 | 0 | 1 | 0 | 0.086643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f14ca1f87eb014f04eefe6485b3e9dd32f417cfa | 6,308 | py | Python | pysnmp-with-texts/ZYXEL-L3-IP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/ZYXEL-L3-IP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/ZYXEL-L3-IP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module ZYXEL-L3-IP-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ZYXEL-L3-IP-MIB
# Produced by pysmi-0.3.4 at Wed May 1 15:50:32 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueRangeConstraint, ConstraintsUnion, ConstraintsIntersection, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueRangeConstraint", "ConstraintsUnion", "ConstraintsIntersection", "ValueSizeConstraint")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
iso, Counter64, NotificationType, MibScalar, MibTable, MibTableRow, MibTableColumn, Bits, Counter32, MibIdentifier, Gauge32, TimeTicks, ObjectIdentity, IpAddress, Integer32, ModuleIdentity, Unsigned32 = mibBuilder.importSymbols("SNMPv2-SMI", "iso", "Counter64", "NotificationType", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Bits", "Counter32", "MibIdentifier", "Gauge32", "TimeTicks", "ObjectIdentity", "IpAddress", "Integer32", "ModuleIdentity", "Unsigned32")
TextualConvention, RowStatus, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "RowStatus", "DisplayString")
esMgmt, = mibBuilder.importSymbols("ZYXEL-ES-SMI", "esMgmt")
zyxelL3Ip = ModuleIdentity((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40))
if mibBuilder.loadTexts: zyxelL3Ip.setLastUpdated('201207010000Z')
if mibBuilder.loadTexts: zyxelL3Ip.setOrganization('Enterprise Solution ZyXEL')
if mibBuilder.loadTexts: zyxelL3Ip.setContactInfo('')
if mibBuilder.loadTexts: zyxelL3Ip.setDescription('The subtree for layer 3 switch ip address')
zyxelLayer3IpSetup = MibIdentifier((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1))
zyLayer3IpDnsIpAddress = MibScalar((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 1), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: zyLayer3IpDnsIpAddress.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpDnsIpAddress.setDescription('Enter a domain name server IP address in order to be able to use a domain name instead of an IP address.')
zyLayer3IpDefaultMgmt = MibScalar((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("inBand", 0), ("outOfBand", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: zyLayer3IpDefaultMgmt.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpDefaultMgmt.setDescription('Specify which traffic flow (In-Band or Out-of-band) the switch is to send packets originating from it or packets with unknown source.')
zyLayer3IpDefaultGateway = MibScalar((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 3), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: zyLayer3IpDefaultGateway.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpDefaultGateway.setDescription('IP address of the default outgoing gateway.')
zyLayer3IpInbandMaxNumberOfInterfaces = MibScalar((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zyLayer3IpInbandMaxNumberOfInterfaces.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpInbandMaxNumberOfInterfaces.setDescription('The maximum number of in-band IP that can be created.')
zyxelLayer3IpInbandTable = MibTable((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5), )
if mibBuilder.loadTexts: zyxelLayer3IpInbandTable.setStatus('current')
if mibBuilder.loadTexts: zyxelLayer3IpInbandTable.setDescription('The table contains layer3 IP in-band configuration.')
zyxelLayer3IpInbandEntry = MibTableRow((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5, 1), ).setIndexNames((0, "ZYXEL-L3-IP-MIB", "zyLayer3IpInbandIpAddress"), (0, "ZYXEL-L3-IP-MIB", "zyLayer3IpInbandMask"))
if mibBuilder.loadTexts: zyxelLayer3IpInbandEntry.setStatus('current')
if mibBuilder.loadTexts: zyxelLayer3IpInbandEntry.setDescription('An entry contains layer3 IP in-band configuration.')
zyLayer3IpInbandIpAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5, 1, 1), IpAddress())
if mibBuilder.loadTexts: zyLayer3IpInbandIpAddress.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpInbandIpAddress.setDescription('Enter the IP address of your switch in dotted decimal notation, for example, 192.168.1.1. This is the IP address of the Switch in an IP routing domain.')
zyLayer3IpInbandMask = MibTableColumn((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5, 1, 2), IpAddress())
if mibBuilder.loadTexts: zyLayer3IpInbandMask.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpInbandMask.setDescription('Enter the IP subnet mask of an IP routing domain in dotted decimal notation, for example, 255.255.255.0.')
zyLayer3IpInbandVid = MibTableColumn((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5, 1, 3), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: zyLayer3IpInbandVid.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpInbandVid.setDescription('Enter the VLAN identification number to which an IP routing domain belongs.')
zyLayer3IpInbandRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 890, 1, 15, 3, 40, 1, 5, 1, 4), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: zyLayer3IpInbandRowStatus.setStatus('current')
if mibBuilder.loadTexts: zyLayer3IpInbandRowStatus.setDescription('This object allows entries to be created and deleted from the in-band IP table.')
mibBuilder.exportSymbols("ZYXEL-L3-IP-MIB", zyLayer3IpInbandVid=zyLayer3IpInbandVid, PYSNMP_MODULE_ID=zyxelL3Ip, zyxelLayer3IpSetup=zyxelLayer3IpSetup, zyxelL3Ip=zyxelL3Ip, zyLayer3IpDnsIpAddress=zyLayer3IpDnsIpAddress, zyLayer3IpInbandIpAddress=zyLayer3IpInbandIpAddress, zyLayer3IpDefaultGateway=zyLayer3IpDefaultGateway, zyLayer3IpInbandMaxNumberOfInterfaces=zyLayer3IpInbandMaxNumberOfInterfaces, zyxelLayer3IpInbandTable=zyxelLayer3IpInbandTable, zyLayer3IpDefaultMgmt=zyLayer3IpDefaultMgmt, zyLayer3IpInbandRowStatus=zyLayer3IpInbandRowStatus, zyxelLayer3IpInbandEntry=zyxelLayer3IpInbandEntry, zyLayer3IpInbandMask=zyLayer3IpInbandMask)
| 121.307692 | 643 | 0.789474 | 724 | 6,308 | 6.875691 | 0.270718 | 0.057855 | 0.101245 | 0.009642 | 0.310566 | 0.174367 | 0.126557 | 0.126557 | 0.126557 | 0.126557 | 0 | 0.067988 | 0.088301 | 6,308 | 51 | 644 | 123.686275 | 0.7976 | 0.051363 | 0 | 0 | 0 | 0.090909 | 0.270795 | 0.011548 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.159091 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f15f43daea5e4b5fe8d0b408d0ba74de358571e1 | 491 | py | Python | beyondtheadmin/clients/migrations/0014_auto_20210825_0854.py | gfavre/invoice-manager | 2a1db22edd51b461c090282c6fc1f290f3265379 | [
"MIT"
] | 1 | 2021-11-27T06:40:34.000Z | 2021-11-27T06:40:34.000Z | beyondtheadmin/clients/migrations/0014_auto_20210825_0854.py | gfavre/invoice-manager | 2a1db22edd51b461c090282c6fc1f290f3265379 | [
"MIT"
] | 2 | 2021-05-13T04:50:50.000Z | 2022-02-28T21:06:24.000Z | beyondtheadmin/clients/migrations/0014_auto_20210825_0854.py | gfavre/invoice-manager | 2a1db22edd51b461c090282c6fc1f290f3265379 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-08-25 06:54
from django.db import migrations
import django_countries.fields
class Migration(migrations.Migration):
dependencies = [
('clients', '0013_auto_20210821_1503'),
]
operations = [
migrations.AlterField(
model_name='client',
name='country',
field=django_countries.fields.CountryField(blank=True, default='CH', max_length=2, null=True, verbose_name='Country'),
),
]
| 23.380952 | 130 | 0.645621 | 56 | 491 | 5.517857 | 0.767857 | 0.097087 | 0.135922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085562 | 0.238289 | 491 | 20 | 131 | 24.55 | 0.740642 | 0.09165 | 0 | 0 | 1 | 0 | 0.117117 | 0.051802 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f1624fc7acc4aea1db82d1a5689c7c3fa486c42f | 1,681 | py | Python | essaie_snippet/test_find_planet.py | Vahen/millenium_falcon_challenge | 1cc2493ae1f4a95b0bb7332bb877035a7070657d | [
"MIT"
] | null | null | null | essaie_snippet/test_find_planet.py | Vahen/millenium_falcon_challenge | 1cc2493ae1f4a95b0bb7332bb877035a7070657d | [
"MIT"
] | null | null | null | essaie_snippet/test_find_planet.py | Vahen/millenium_falcon_challenge | 1cc2493ae1f4a95b0bb7332bb877035a7070657d | [
"MIT"
] | null | null | null | ##planets = [
## {
## "planet": "Tatooine",
## "visited": False,
## "reachable": ["Dagobah", "Hoth"]
## },
## {
## "planet": "Dagobah",
## "visited": False,
## "reachable": ["Hoth", "Endor"]
## },
## {
## "planet": "Endor",
## "visited": False,
## "reachable": []
## },
## {
## "planet": "Hoth",
## "visited": False,
## "reachable": ["Endor"]
## },
##]
##
##def build_reachables(planets, reachables):
## reachable_planets = []
## for planet in planets:
## if planet["planet"] in reachables:
## reachable_planets.append(planet)
## return reachable_planets
##
##def explore_all_routes(planets, current_planet,trajectory):
## current_planet["visited"] = True
## print(current_planet)
## trajectory.append(current_planet)
## reachables = current_planet["reachable"]
## reachables_list = build_reachables(planets,reachables)
## for voisin in reachables_list:
## if not voisin["visited"]:
## explore_all_routes(planets, voisin, trajectory)
##
##
##
##for planet in planets:
## trajectory = []
## if not planet["visited"]:
## explore_all_routes(planets, planet, trajectory)
## planets[planet]
mylist = [{'trajectory': ['Tatooine', 'Hoth', 'Endor'], 'total_time': 7, 'caught_proba': 0.19},
{'trajectory': ['Tatooine', 'Hoth', 'Endor'], 'total_time': 7, 'caught_proba': 0.18},
{'trajectory': ['Tatooine', 'Hoth', 'Endor'], 'total_time': 7, 'caught_proba': 0}
]
def take_caught_proba(elem):
return elem["caught_proba"]
mylist.sort(key=take_caught_proba)
print(mylist[0])
| 28.491525 | 94 | 0.568709 | 161 | 1,681 | 5.757764 | 0.248447 | 0.071197 | 0.090615 | 0.074434 | 0.223301 | 0.158576 | 0.158576 | 0.158576 | 0.158576 | 0.158576 | 0 | 0.008507 | 0.230815 | 1,681 | 58 | 95 | 28.982759 | 0.70843 | 0.673409 | 0 | 0 | 0 | 0 | 0.349451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0.125 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
f16280252c9840cdee510267af09c4d7567873a5 | 261 | py | Python | thehardway/exts/ext10.py | abos5/pythontutor | eba451700def8bd98d74668d1b6cc08c0ccc0d3c | [
"MIT"
] | null | null | null | thehardway/exts/ext10.py | abos5/pythontutor | eba451700def8bd98d74668d1b6cc08c0ccc0d3c | [
"MIT"
] | null | null | null | thehardway/exts/ext10.py | abos5/pythontutor | eba451700def8bd98d74668d1b6cc08c0ccc0d3c | [
"MIT"
] | null | null | null | tabby_cat = "\tI'm tabbed in"
persian_cat = "I'm split\non a line."
backslash_cat = "I'm \\ a \\ cat."
fat_cat = """
I'll do a list:
\t\t* Cat food.
\t* Fishes.
\t\t\t* Catnip\n\t* Grass
"""
print tabby_cat
print persian_cat
print backslash_cat
print fat_cat
| 16.3125 | 37 | 0.666667 | 52 | 261 | 3.192308 | 0.461538 | 0.072289 | 0.060241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168582 | 261 | 15 | 38 | 17.4 | 0.764977 | 0 | 0 | 0 | 0 | 0 | 0.471264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.307692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f1633f649d0f1c1a41f819eb66c27318389d52d9 | 3,591 | py | Python | relational/populate-maria/populate-maria.py | eduardolfalcao/bigdata-unifacisa | a725fafd88c9508a717d7b0979dac9770dee6905 | [
"Apache-2.0"
] | 1 | 2020-11-02T18:44:46.000Z | 2020-11-02T18:44:46.000Z | relational/populate-maria/populate-maria.py | eduardolfalcao/bigdata-unifacisa | a725fafd88c9508a717d7b0979dac9770dee6905 | [
"Apache-2.0"
] | null | null | null | relational/populate-maria/populate-maria.py | eduardolfalcao/bigdata-unifacisa | a725fafd88c9508a717d7b0979dac9770dee6905 | [
"Apache-2.0"
] | 1 | 2021-09-15T01:42:35.000Z | 2021-09-15T01:42:35.000Z | #!/usr/bin/python3
import mysql.connector
from mysql.connector import Error
from mysql.connector import errorcode
from os import environ
import json
DB_HOST = environ.get('DB_HOST')
DB_NAME = environ.get('DB_NAME')
DB_USER = environ.get('DB_USER')
DB_PASSWORD = environ.get('DB_PASSWORD')
if DB_PASSWORD is not None:
print('###################################')
print('These are the environment variables: DB_HOST='+DB_HOST+', DB_NAME='+DB_NAME+', DB_USER='+DB_USER+', DB_PASSWORD='+DB_PASSWORD)
print('###################################')
else:
print('###################################')
print('No environment variable appeared!')
print('###################################')
def add_people_count(request_data):
#print('Add people count called!')
#print('Request' + str(request_data))
insert_query = """INSERT INTO PeopleCount (value, collector_id, timestamp) VALUES (%s, %s, %s)"""
val = (request_data['value'],request_data['collector_id'],str(request_data['timestamp']))
    # The original called run_insert_query twice here, inserting every row twice.
    return run_insert_query(insert_query, val, 'PeopleCount')
def add_people_recognized(request_data):
name_ids = []
print('Request data (json): '+str(request_data['value']))
for name in request_data['value']:
#print(name)
name_ids.append(add_people(name))
recog_id = add_recognized(request_data)
res = []
for name_id in name_ids:
insert_query = """INSERT IGNORE INTO PeopleRecognized (id_recognized, id_people) VALUES (%s, %s)"""
val = (recog_id, name_id)
res.append(run_insert_query(insert_query, val, 'PeopleRecognized'))
return json.dumps(res)
def add_people(name):
insert_query = """INSERT IGNORE INTO People (name) VALUES (%s)"""
val = (name,)
return run_insert_query(insert_query, val, 'People')[1] #returns id
def add_recognized(request_data):
insert_query = """INSERT INTO Recognized (collector_id, timestamp) VALUES (%s, %s)"""
val = (request_data['collector_id'],str(request_data['timestamp']))
return run_insert_query(insert_query, val, 'Recognized')[1] #returns id
def get_database_connection():
return mysql.connector.connect(host=DB_HOST, database=DB_NAME, user=DB_USER, password=DB_PASSWORD)
def run_insert_query(query, values, table_name):
connection = get_database_connection()
res = ''
id = None
try:
cursor = connection.cursor()
cursor.execute(query, values)
connection.commit()
id = cursor.lastrowid
if id is not None:
res += 'Record with id('+str(id)+') inserted successfully into '+table_name+' table'
else:
res += str(cursor.rowcount) + ' Record inserted successfully into '+table_name+' table'
#print(res)
cursor.close()
except mysql.connector.Error as error:
res += "Failed to insert record into table {}".format(error)
print(res)
finally:
if connection.is_connected():
connection.close()
return (res,id)
request_data_pc = dict()
request_data_pr = dict()
for i in range(10000):
request_data_pc['value'] = i
request_data_pc['collector_id'] = 'iot_dev_id_'+str(i)
request_data_pc['timestamp'] = i
add_people_count(request_data_pc)
request_data_pr['value'] = ['andrey','eduardo','fabio']
request_data_pr['collector_id'] = 'iot_dev_id_'+str(i)
request_data_pr['timestamp'] = i
add_people_recognized(request_data_pr)
print("Entry "+str(i)+" inserted!")
| 37.40625 | 137 | 0.644946 | 456 | 3,591 | 4.826754 | 0.217105 | 0.114948 | 0.069514 | 0.045434 | 0.325307 | 0.242617 | 0.140845 | 0.071786 | 0.030895 | 0 | 0 | 0.002743 | 0.187691 | 3,591 | 95 | 138 | 37.8 | 0.7518 | 0.038151 | 0 | 0.077922 | 0 | 0 | 0.265158 | 0.040615 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077922 | false | 0.051948 | 0.064935 | 0.012987 | 0.220779 | 0.116883 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f165dc6651485b189667c7065cc6cc8aee155840 | 432 | py | Python | ABC/abc101-abc150/abc117/c.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 2 | 2020-06-12T09:54:23.000Z | 2021-05-04T01:34:07.000Z | ABC/abc101-abc150/abc117/c.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 961 | 2020-06-23T07:26:22.000Z | 2022-03-31T21:34:52.000Z | ABC/abc101-abc150/abc117/c.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
def main():
n, m = map(int, input().split())
xs = sorted(list(map(int, input().split())))
if n >= m:
print(0)
else:
ans = xs[-1] - xs[0]
diff = [0 for _ in range(m - 1)]
for i in range(m - 1):
diff[i] = xs[i + 1] - xs[i]
print(ans - sum(sorted(diff, reverse=True)[:n - 1]))
if __name__ == '__main__':
main()
| 19.636364 | 61 | 0.428241 | 62 | 432 | 2.83871 | 0.467742 | 0.022727 | 0.125 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033088 | 0.37037 | 432 | 21 | 62 | 20.571429 | 0.613971 | 0.048611 | 0 | 0 | 0 | 0 | 0.020619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.076923 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f16971b36b3a751347c35eb371dce2fda69c3718 | 9,071 | py | Python | mount_point_checker.py | light-bringer/mount-point-verifier | 883a1391e3d79b25444efc4351aaa9dc9828c002 | [
"MIT"
] | null | null | null | mount_point_checker.py | light-bringer/mount-point-verifier | 883a1391e3d79b25444efc4351aaa9dc9828c002 | [
"MIT"
] | null | null | null | mount_point_checker.py | light-bringer/mount-point-verifier | 883a1391e3d79b25444efc4351aaa9dc9828c002 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
PROGRAM : mount_point_checker.py
PyVersion : Python 2.7
This module takes multiple mount paths and compares them after backup
The module has 2 modes : Backup (-b) and Verify (-v)
Input Params :
-m <mount_points> All the mount point data that need to be backed up,
can be multiple, in that case please use multiple -m params
-b pre-backup data status backup, gives back an unique string, save the same
-p post-backup data status backup, gives back an unique string, save the same
-v "unique_input_id1,unique_id_2" provided after backups (Pre and Post) with no space in between.
Pass -v/-b/-p one at a time
Example :
## Pre - backup : python mount_point_checker.py -b -m "/path/to/mount1" -m "/path/to/mount2"
## Post - backup : python mount_point_checker.py -p -m "/path/to/mount1" -m "/path/to/mount2"
## To verify : python mount_point_checker.py -v "pre_migration_key,post_migration_key"
"""
import os
import uuid
import argparse
import pickle
import sys
import Mount
__BACKUPDIR = '/home/lightbringer/Desktop'
__BACKUPFILEPATH = 'backup.txt'
def read_dir(path):
'''
read a Directory and get all files and their full paths
input : directory path
output : list(full filepaths) file prefix paths
'''
fullfilepaths = [os.path.join(d, x)
for d, dirs, files in os.walk(path)
for x in files]
return fullfilepaths, os.path.commonprefix(fullfilepaths)
def relative_paths(full_filepaths, common_prefix):
'''
read fullfilepaths and get all relative filepaths
input : fullpaths, common_prefix_path
output : list(relative_filepaths)
'''
relative_paths = [os.path.relpath(path, common_prefix) for path in full_filepaths]
return relative_paths
def get_filesizes(full_filepaths, common_prefix):
'''
get file sizes for each and every element and return the same
as a directory.
input : list(full_filepath), common_prefix_path
return : dict({'relative_path': 'file_size'})
'''
fileinfodict = {}
for path in full_filepaths:
relative_path = os.path.relpath(path, common_prefix)
fileinfodict[relative_path] = os.stat(path).st_size
return fileinfodict
def compare(SrcObj, DestObj):
    '''
    Compare two different objects for comparing backup and current
    input : two Data() objects
    output : Boolean <True/False>
    '''
    for key in SrcObj.get_mountdata().keys():
        try:
            src = SrcObj.get_mountdata()[key].gen_dict()
            dest = DestObj.get_mountdata()[key].gen_dict()
            # Every mount must match on file count, relative paths and sizes.
            # The original returned after inspecting only the first key.
            if len(dest['allfiles']) != len(src['allfiles']):
                return False
            if dest['relative'] != src['relative']:
                return False
            if dest['filesizes'] != src['filesizes']:
                return False
        except KeyError as e:
            print e
            return False
    return True
def find_mount_point(path):
path = os.path.abspath(path)
while not os.path.ismount(path):
path = os.path.dirname(path)
return path
def check_write_permission(srcdir):
    '''
    Check that srcdir is a directory the current user can write to
    '''
try:
if os.path.isdir(srcdir):
return os.access(srcdir, os.W_OK)
else:
print "Error: Not a valid directory, do not have write permission"
exit(-1)
except Exception as e:
print e
exit(-100)
def main(args):
'''
the holy grail main ()
'''
mountlist = []
for mounts in list(args.mount_paths):
# remove all trailing slashes from path
mountlist.append(mounts.rstrip(os.sep))
pre_migration = args.b
post_migration = args.p
verify = args.uuid
currentMount = Mount.MountData(mountlist)
if pre_migration is True:
backupdir = os.path.join(__BACKUPDIR, 'pre_migration')
if os.path.exists(backupdir):
pass
else:
os.makedirs(backupdir)
print " ----- Pre Migration Mode ------ "
if check_write_permission(backupdir) is False:
print " ===== Backup Path is not writeable ====="
print " ***** ERROR ***** "
exit(-1)
else:
unique_id = uuid.uuid4().hex
backupfile = os.path.join(backupdir, unique_id)
os.makedirs(backupfile)
backupfile = os.path.join(backupfile, __BACKUPFILEPATH)
with open(backupfile, 'wb+') as output:
pickle.dump(currentMount, output)
print "Backup taken!!"
print "Please save this unique token for pre-migration: %s"%(unique_id)
exit(0)
elif post_migration is True:
backupdir = os.path.join(__BACKUPDIR, 'post_migration')
if os.path.exists(backupdir):
pass
else:
os.makedirs(backupdir)
print " ----- Post Migration Mode ------ "
if check_write_permission(backupdir) is False:
print " ===== Backup Path is not writeable ====="
print " ***** ERROR ***** "
exit(-1)
else:
unique_id = uuid.uuid4().hex
backupfile = os.path.join(backupdir, unique_id)
os.makedirs(backupfile)
backupfile = os.path.join(backupfile, __BACKUPFILEPATH)
with open(backupfile, 'wb+') as output:
pickle.dump(currentMount, output)
print "Backup taken!!"
print "Please save this unique token for post-migration : %s"%(unique_id)
exit(0)
elif bool(verify) is True:
uuidstrings = args.uuid
uuids = uuidstrings.split(',')
        if len(uuids) != 2:
            print "Give only two unique keys, STRICTLY!"
            exit(-1)
        pre_migrate_uuid, post_migrate_uuid = uuids[0], uuids[1]
        pre_backupdir = os.path.join(__BACKUPDIR, 'pre_migration')
        post_backupdir = os.path.join(__BACKUPDIR, 'post_migration')
backed_up_data_path_pre = os.path.join(pre_backupdir, pre_migrate_uuid)
backed_up_data_path_pre = os.path.join(backed_up_data_path_pre, __BACKUPFILEPATH)
backed_up_data_path_post = os.path.join(post_backupdir, post_migrate_uuid)
backed_up_data_path_post = os.path.join(backed_up_data_path_post, __BACKUPFILEPATH)
preBackupMount, postBackupMount = None, None
with open(backed_up_data_path_pre, 'rb') as input:
preBackupMount = pickle.load(input)
with open(backed_up_data_path_post, 'rb') as input:
postBackupMount = pickle.load(input)
if compare(preBackupMount, postBackupMount):
print " *** Consistent data *** "
exit(0)
else:
print " *** Inconsistent Data *** "
return(100)
return
if __name__ == '__main__':
helpstr = """
PROGRAM : mount_point_checker.py
PyVersion : Python 2.7
This module takes multiple mount paths and compares them after backup
The module has 2 modes : Backup (-b) and Verify (-v)
Input Params :
-m <mount_points> All the mount point data that need to be backed up,
can be multiple, in that case please use multiple -m params
-b pre-backup data status backup, gives back an unique string, save the same
-p post-backup data status backup, gives back an unique string, save the same
-v "unique_input_id1,unique_id_2" provided after backups (Pre and Post) with no space in between.
Pass -v/-b/-p one at a time
Example :
## Pre - backup : python mount_point_checker.py -b -m "/path/to/mount1" -m "/path/to/mount2"
## Post - backup : python mount_point_checker.py -p -m "/path/to/mount1" -m "/path/to/mount2"
## To verify : python mount_point_checker.py -v "pre_migration_key,post_migration_key"
"""
Parser = argparse.ArgumentParser()
Parser.add_argument('-m', dest='mount_paths', help=helpstr, default=[],
type=str, action='append')
Parser.add_argument('-b', action='store_true')
Parser.add_argument('-p', action='store_true')
Parser.add_argument('-v', dest='uuid', action='store')
Parser.add_argument('--version', action='version', version='%(prog)s 1.0')
args, leftovers = Parser.parse_known_args()
    # The original tested "(int product) is True", which never matches in
    # Python 2; a plain truth test expresses the intent.
    if args.b and args.p and args.uuid:
print "Error: Give either -b, -p or -v option"
print "-b : Take Backup -m <mount_points>"
print "-p : Post Backup handling -m <mount_points>"
print "-v : <backup_id> Verify from backup"
exit(-1)
elif (len(sys.argv) <2):
print "Error: Give either -b, -p or -v option"
print "-b : Take Backup -m <mount_points>"
print "-p : Post Backup handling -m <mount_points>"
print "-v : <backup_id> Verify from backup"
parseresults = Parser.parse_args()
main(parseresults)
| 35.996032 | 131 | 0.631904 | 1,180 | 9,071 | 4.701695 | 0.211017 | 0.023792 | 0.023432 | 0.027397 | 0.574802 | 0.564348 | 0.50036 | 0.457823 | 0.421053 | 0.421053 | 0 | 0.006233 | 0.257193 | 9,071 | 251 | 132 | 36.139442 | 0.817156 | 0.00893 | 0 | 0.333333 | 0 | 0.018182 | 0.270939 | 0.024867 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.018182 | 0.048485 | null | null | 0.145455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f16a791d25b74cea013a19c66e01310b130ce613 | 6,678 | py | Python | handlers/account.py | zhy0216-collection/Gather | 4a42715b83ad8802514c4f8128fd0a30658ba359 | [
"MIT"
] | null | null | null | handlers/account.py | zhy0216-collection/Gather | 4a42715b83ad8802514c4f8128fd0a30658ba359 | [
"MIT"
] | null | null | null | handlers/account.py | zhy0216-collection/Gather | 4a42715b83ad8802514c4f8128fd0a30658ba359 | [
"MIT"
] | null | null | null | # coding=utf-8
import time
import hashlib
import tornado.web
import tornado.locale
from bson.objectid import ObjectId
from . import BaseHandler
from .utils import username_validator, email_validator
class SignupHandler(BaseHandler):
    def get(self):
        if self.current_user:
            self.redirect(self.get_argument('next', '/'))
            return  # stop here; rendering after a redirect raises in Tornado
        self.render('account/signup.html')
def post(self):
self.recaptcha_validate()
username = self.get_argument('username', None)
email = self.get_argument('email', '').lower()
password = self.get_argument('password', None)
password2 = self.get_argument('password2', None)
if not (username and email and password and password2):
self.flash('Please fill the required field')
if password != password2:
self.flash("Password doesn't match")
if username and not username_validator.match(username):
self.flash('Username is invalid')
if email and not email_validator.match(email):
self.flash('Not a valid email address')
if username and \
self.db.members.find_one({'name_lower': username.lower()}):
self.flash('This username is already registered')
if email and self.db.members.find_one({'email': email}):
self.flash('This email is already registered')
if self.messages:
self.render('account/signup.html')
return
password = hashlib.sha1(password + username.lower()).hexdigest()
role = 1
if not self.db.members.count():
role = 5
self.db.members.insert({
'name': username,
'name_lower': username.lower(),
'password': password,
'email': email,
'website': '',
'description': '',
'created': time.time(),
'language': self.settings['default_locale'],
'role': role, # TODO:send mail.
'like': [], # topics
'follow': [], # users
'favorite': [] # nodes
})
self.set_secure_cookie('user', password, expires_days=30)
self.redirect(self.get_argument('next', '/'))
class SigninHandler(BaseHandler):
    def get(self):
        if self.current_user:
            self.redirect(self.get_argument('next', '/'))
            return  # stop here; rendering after a redirect raises in Tornado
        self.render('account/signin.html')
def post(self):
username = self.get_argument('username', '').lower()
password = self.get_argument('password', None)
if not (username and password):
self.flash('Please fill the required field')
password = hashlib.sha1(password + username).hexdigest()
member = self.db.members.find_one({'name_lower': username,
'password': password})
if not member:
self.flash('Invalid account or password')
self.render('account/signin.html')
return
self.set_secure_cookie('user', password, expires_days=30)
self.redirect(self.get_argument('next', '/'))
class SignoutHandler(BaseHandler):
def get(self):
user_name = self.get_argument('user', None)
if user_name != self.current_user['name']:
raise tornado.web.HTTPError(403)
self.clear_cookie('user')
self.redirect(self.get_argument('next', '/'))
class SettingsHandler(BaseHandler):
@tornado.web.authenticated
def get(self):
self.render('account/settings.html', locales=self.application.locales)
@tornado.web.authenticated
def post(self):
website = self.get_argument('website', '')
description = self.get_argument('description', '')
pushover = self.get_argument('pushover', '')
language = self.get_argument('language')
if len(description) > 1500:
self.flash("The description is too long")
self.db.members.update({'_id': self.current_user['_id']}, {'$set': {
'website': website,
'description': description,
'language': language,
'pushover': pushover
}})
self.flash('Saved successfully', type='success')
self.redirect('/account/settings')
class ChangePasswordHandler(BaseHandler):
@tornado.web.authenticated
def post(self):
old_password = self.get_argument('old_password', None)
new_password = self.get_argument('new_password', None)
if not (old_password and new_password):
self.flash('Please fill the required field')
key = old_password + self.current_user['name'].lower()
password = hashlib.sha1(key).hexdigest()
if password != self.current_user['password']:
self.flash('Invalid password')
if self.messages:
self.redirect('/account/settings')
return
key = new_password + self.current_user['name'].lower()
password = str(hashlib.sha1(key).hexdigest())
self.db.members.update({'_id': self.current_user['_id']},
{'$set': {'password': password}})
self.set_secure_cookie('user', password, expires_days=30)
self.flash('Saved successfully', type='success')
self.redirect('/account/settings')
class NotificationsHandler(BaseHandler):
@tornado.web.authenticated
def get(self):
p = int(self.get_argument('p', 1))
notis = self.db.notifications.find({
'to': self.current_user['name_lower']
}, sort=[('created', -1)])
notis_count = notis.count()
per_page = self.settings['notifications_per_page']
notis = notis[(p - 1) * per_page:p * per_page]
self.render('account/notifications.html', notis=notis,
notis_count=notis_count, p=p)
class NotificationsClearHandler(BaseHandler):
@tornado.web.authenticated
def get(self):
self.db.notifications.remove({'to': self.current_user['name_lower']})
self.redirect('/')
class NotificationsRemoveHandler(BaseHandler):
@tornado.web.authenticated
def get(self, id):
self.db.notifications.remove({'_id': ObjectId(id)})
self.redirect(self.get_argument('next', '/account/notifications'))
handlers = [
(r'/account/signup', SignupHandler),
(r'/account/signin', SigninHandler),
(r'/account/signout', SignoutHandler),
(r'/account/settings', SettingsHandler),
(r'/account/password', ChangePasswordHandler),
(r'/account/notifications', NotificationsHandler),
(r'/account/notifications/clear', NotificationsClearHandler),
(r'/account/notifications/(\w+)/remove', NotificationsRemoveHandler),
]
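# --- Wiring sketch (illustrative only) ---
# The handlers list above is meant to be passed to a tornado Application. A
# hedged, minimal example; the settings keys shown are the ones these handlers
# read ('default_locale', 'notifications_per_page'), and BaseHandler is
# assumed to expose self.db from the application elsewhere in this project.
#
#   import tornado.ioloop
#   import tornado.web
#
#   app = tornado.web.Application(
#       handlers,
#       cookie_secret='change-me',
#       login_url='/account/signin',
#       default_locale='en_US',
#       notifications_per_page=20,
#   )
#   app.listen(8888)
#   tornado.ioloop.IOLoop.instance().start()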
| 37.516854 | 78 | 0.614555 | 717 | 6,678 | 5.616457 | 0.192469 | 0.034765 | 0.074497 | 0.028309 | 0.39856 | 0.335734 | 0.29178 | 0.211572 | 0.148001 | 0.148001 | 0 | 0.005387 | 0.249476 | 6,678 | 177 | 79 | 37.728814 | 0.798085 | 0.007038 | 0 | 0.294118 | 0 | 0 | 0.169962 | 0.026566 | 0 | 0 | 0 | 0.00565 | 0 | 1 | 0.071895 | false | 0.176471 | 0.045752 | 0 | 0.189542 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f16a9f1e0a62e908912b60fb92e081e1677a4276 | 3,772 | py | Python | main.py | sajidengg/tets | 34287901dfb9222b109525f2198e73b561a1c55f | [
"MIT"
] | 1 | 2019-09-12T02:19:50.000Z | 2019-09-12T02:19:50.000Z | main.py | sajidengg/tets | 34287901dfb9222b109525f2198e73b561a1c55f | [
"MIT"
] | 1 | 2019-04-10T15:16:54.000Z | 2019-04-10T15:17:28.000Z | main.py | sajidengg/tets | 34287901dfb9222b109525f2198e73b561a1c55f | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
# main.py
# Root file for EmotionDetection program.
# Prints out command line menu and handles user choices
from __future__ import print_function
from EmotionDetection import WordMap
from EmotionDetection import EvaluateText
from EmotionDetection import GUI
try:
input = raw_input
except NameError:
pass
import sys
reload(sys)
sys.setdefaultencoding('utf8')
def printMenu():
print("°º¤ø,¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,", "EmotionDetection", ",¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,ø¤º°\n")
print("1. Training")
print("2. Testing")
print("3. Evaluate Text")
print("4. GUI Evaluation")
print("5. Information")
print("6. Exit\n")
print(78 * "-", "\n")
def main():
choice = True
while choice:
printMenu()
choice = input("Select option [1-6]: ")
        print()  # a bare "print" is a no-op expression once print_function is imported
if choice == "1":
train()
elif choice == "2":
test()
elif choice == "3":
evaluate()
elif choice == "4":
gui()
elif choice == "5":
printInfo()
elif choice == "6":
print("Exiting....\n")
choice = False
else:
print("Invalid choice.")
choice = True
# Training Program, builds map of words and emotion values from annotated corpus
def train():
reset = input("Reset training data? [y/n]: ").lower() in ["yes", "y", "1"]
text = input("Text file: ")
values = input("Value file: ")
print("")
try:
print("Loading input values into WordMap...\n")
with open("./data/" + text, 'r') as textFile:
with open("./data/" + values, 'r') as valueFile:
WordMap.buildWordMap(reset, textFile, valueFile)
except IOError:
print("File not found. Returning to main menu...\n")
def test():
text = input("Text file: ")
values = input("Value file: ")
print("")
print ("values file is " , "./data/" + values)
try:
print("\nRunning text evaluation...\n")
with open("./data/" + text, 'r') as textFile:
print ("text found")
with open("./data/" + values, 'r') as valueFile:
print ("values found")
EvaluateText.evaluate(textFile, valueFile)
except IOError:
print("File not found. Returning to main menu...\n")
def evaluate():
text = input("Text file: ")
print("")
try:
print("Running text evaluation...\n")
with open("./data/" + text, 'r') as textFile:
EvaluateText.evaluate(textFile)
except IOError:
print("File not found. Returning to main menu...\n")
def gui():
window = GUI.Evaluator()
def printInfo():
print("\n°`°º¤ø,¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,", "INFORMATION", ",¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,ø¤º°º¤ø\n")
print("EmotionDetection v1, sentiment analysis system operating off a multinomial")
print("Naive Bayes classififer. There are 13 possible labels that text can be")
print("labelled as, the emotions are :empty, sadness, enthusiasm, neutral, worry,")
print("surprise, love, fun, hate, happiness, boredom, relief and anger.\n")
print("1. Training - Generates a WordMap using a text file and emotion value file.")
print(" A word map is required for both testing and evaluation.\n")
print("2. Testing - Run the system and test its accuracy by supplying correct ")
print(" emotion values. Also produces reports and confusion plot\n")
print("3. Evaluate Text - Run the system without given values. Used to evaluate input ")
print(" file that has not been pre-labelled.")
print(78 * "-", "\n")
input("Press enter to return to menu...\n")
main()
| 29.936508 | 101 | 0.574231 | 523 | 3,772 | 4.240918 | 0.321224 | 0.009919 | 0.014878 | 0.018034 | 0.244364 | 0.244364 | 0.243913 | 0.216862 | 0.204238 | 0.166366 | 0 | 0.009843 | 0.2728 | 3,772 | 125 | 102 | 30.176 | 0.777616 | 0.053552 | 0 | 0.287234 | 0 | 0.021277 | 0.409933 | 0.036756 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074468 | false | 0.010638 | 0.053191 | 0 | 0.12766 | 0.425532 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f17253e50c3d29da6840924939fc8b996955e407 | 545 | py | Python | main.py | upkodah/upkodah-service | 8ec16a4d5e53ec8a50cb3808d4473f60c5b12507 | [
"MIT"
] | null | null | null | main.py | upkodah/upkodah-service | 8ec16a4d5e53ec8a50cb3808d4473f60c5b12507 | [
"MIT"
] | 5 | 2020-10-29T01:27:37.000Z | 2020-11-26T09:10:12.000Z | main.py | upkodah/upkodah-service | 8ec16a4d5e53ec8a50cb3808d4473f60c5b12507 | [
"MIT"
] | null | null | null | import destination_search
'''
Test values:
gpsX = 127.0816985  # longitude (east)
gpsY = 37.5642135   # latitude (north)
time = 20

1. Input coordinates and time -> return IDs of nearby bus stops
2. Input a stop ID -> fetch its bus routes
3. From the fetched routes, compute origin/destination travel times and
   return the arrival bus stop
'''
print("gps_x : ", end='')
gps_x = input()
print("gps_y : ", end='')
gps_y = input()
print("time : ", end='')
time = int(input())
print('gps X : ', gps_x, 'gps Y : ', gps_y, 'time : ', time)
result = destination_search.DestinationStation(gps_x, gps_y, time)
print('gps_x, gps_y, arrival time, arrival stop')
result.show_result()
| 20.185185 | 66 | 0.609174 | 97 | 545 | 3.28866 | 0.484536 | 0.075235 | 0.087774 | 0.075235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056338 | 0.218349 | 545 | 26 | 67 | 20.961538 | 0.692488 | 0 | 0 | 0 | 0 | 0 | 0.210983 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f17d5629104c63897a4fbfefaa73edad287f30c6 | 2,361 | py | Python | libs/gen_shop.py | a1ex4/ownfoil | fed98a393b83e10509dda7a397c6ab4d8feb0960 | [
"BSD-3-Clause"
] | 7 | 2021-06-15T21:17:25.000Z | 2022-03-21T15:02:29.000Z | libs/gen_shop.py | a1ex4/ownfoil | fed98a393b83e10509dda7a397c6ab4d8feb0960 | [
"BSD-3-Clause"
] | 1 | 2021-08-30T00:57:11.000Z | 2021-10-19T00:21:10.000Z | libs/gen_shop.py | a1ex4/ownfoil | fed98a393b83e10509dda7a397c6ab4d8feb0960 | [
"BSD-3-Clause"
] | 2 | 2021-09-08T02:29:48.000Z | 2021-11-15T10:44:42.000Z | # Usage:
# python gen_shop.py <directory to scan>
# Generate a 'shop.tfl' Tinfoil index file
# as well as 'shop.json', same content but viewable in the browser
import os, json, sys, time
from consts import *
from jsonc_parser.parser import JsoncParser
import logging
logging.basicConfig(format='%(asctime)s | %(levelname)s: %(message)s', level=logging.DEBUG)
path = sys.argv[1]
def getDirsAndFiles(path):
entries = os.listdir(path)
allFiles = list()
allDirs = list()
for entry in entries:
fullPath = os.path.join(path, entry)
if os.path.isdir(fullPath):
allDirs.append(fullPath)
dirs, files = getDirsAndFiles(fullPath)
allDirs += dirs
allFiles += files
else:
if fullPath.split('.')[-1] in valid_ext:
allFiles.append(fullPath)
return allDirs, allFiles
while True:
logging.info(f'Start scanning directory "{path}"')
dirs = []
games = []
    shop = dict(default_shop)  # copy, so the shared default template is not mutated across scans
template_file = os.path.join(path, template_name)
if not os.path.isfile(template_file):
logging.warning(f'Template file {template_file} not found, will use default shop template')
else:
try:
shop = JsoncParser.parse_file(template_file)
except Exception as e:
logging.warning(f'Error parsing template file {template_file}, will use default shop template, error was:\n{e}')
dirs, files = getDirsAndFiles(path)
rel_dirs = [os.path.join('..', os.path.relpath(s, path)) for s in dirs]
rel_files = [os.path.join('..', os.path.relpath(s, path)) for s in files]
logging.info(f'Found {len(dirs)} directories, {len(files)} game/save files')
for game, rel_path in zip(files, rel_files):
size = round(os.path.getsize(game))
games.append(
{
'url': rel_path,
'size': size
})
shop['directories'] = rel_dirs
shop['files'] = games
for a in ['json', 'tfl']:
out_file = os.path.join(path, f'shop.{a}')
try:
with open(out_file, 'w') as f:
json.dump(shop, f, indent=4)
logging.info(f'Successfully wrote {out_file}')
except Exception as e:
logging.error(f'Failed to write {out_file}, error was:\n{e}')
time.sleep(scan_interval * 60) | 31.065789 | 124 | 0.611605 | 312 | 2,361 | 4.557692 | 0.371795 | 0.042194 | 0.035162 | 0.029536 | 0.150492 | 0.088608 | 0.04782 | 0.04782 | 0.04782 | 0.04782 | 0 | 0.002877 | 0.263871 | 2,361 | 76 | 125 | 31.065789 | 0.815305 | 0.063956 | 0 | 0.107143 | 1 | 0 | 0.18631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017857 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
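# --- Output sketch (illustrative only) ---
# The generated shop.json/shop.tfl is a Tinfoil-style index; a hedged example
# of its shape, with made-up paths and sizes:
#
#   {
#       "directories": ["../games"],
#       "files": [
#           {"url": "../games/example.nsp", "size": 123456789}
#       ]
#   }
# plus whatever keys the shop template (or default_shop) contributes.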
f180164738e1dc06d26dc68102e4eeb08427357b | 2,944 | py | Python | tech_project/lib/python2.7/site-packages/aldryn_common/timesince.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 2 | 2015-03-11T09:58:13.000Z | 2016-08-05T20:18:37.000Z | tech_project/lib/python2.7/site-packages/aldryn_common/timesince.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 27 | 2015-03-23T16:09:02.000Z | 2018-12-18T14:22:20.000Z | tech_project/lib/python2.7/site-packages/aldryn_common/timesince.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 8 | 2015-03-23T16:14:22.000Z | 2017-02-21T21:07:01.000Z | import datetime
from django.utils.timezone import is_aware, utc
from django.utils.translation import ugettext, ungettext
def timesince_data(d, now=None, reverse=False):
"""
Takes two datetime objects and returns the time between d and now
as a list of components, ordered by size.
e.g. [{'number': 2, 'type': "hours"}, {'number': 10, 'type': "minutes"}] or
[
{'number': 1, 'type': "year"},
{'number': 8, 'type': "months"},
{'number': 1, 'type': "week"},
{'number': 2, 'type': "days"},
{'number': 9, 'type': "hours"},
{'number': 42, 'type': "minutes"},
].
If d occurs after now, then [{'number': 0, 'type': "minutes"}] is returned.
Units used are years, months, weeks, days, hours, and minutes.
Seconds and microseconds are ignored. Unlike django.utils.timesince,
all the components are returned as a list of dictionaries.
"""
chunks = (
(60 * 60 * 24 * 365, lambda n: ungettext('year', 'years', n)),
(60 * 60 * 24 * 30, lambda n: ungettext('month', 'months', n)),
(60 * 60 * 24 * 7, lambda n: ungettext('week', 'weeks', n)),
(60 * 60 * 24, lambda n: ungettext('day', 'days', n)),
(60 * 60, lambda n: ungettext('hour', 'hours', n)),
(60, lambda n: ungettext('minute', 'minutes', n))
)
# Convert datetime.date to datetime.datetime for comparison.
if not isinstance(d, datetime.datetime):
d = datetime.datetime(d.year, d.month, d.day)
if now and not isinstance(now, datetime.datetime):
now = datetime.datetime(now.year, now.month, now.day)
if not now:
now = datetime.datetime.now(utc if is_aware(d) else None)
delta = (d - now) if reverse else (now - d)
# ignore microseconds
since = delta.days * 24 * 60 * 60 + delta.seconds
if since <= 0:
# d is in the future compared to now, stop processing.
return []
    r = []
    for seconds, name in chunks:
        count = since // seconds
        since -= seconds * count  # consume this chunk so smaller units carry only the remainder
        r.append({'number': count, 'type': name(count)})
    return r
def timesince_data_nonzero(d, now=None, reverse=False):
return [item for item in timesince_data(d, now, reverse) if item['number'] > 0]
def timesince_data_single(d, now=None, reverse=False):
r = timesince_data_nonzero(d, now, reverse)
if r:
return r[0]
else:
return {'number': 0, 'type': ugettext('minutes')}
def timesince_text(d, now=None):
    # timesince_data_single returns a dict, not a list; indexing it with [0] raised KeyError
    return ugettext('%(number)d %(type)s') % timesince_data_single(d, now)
def timeuntil_data(d, now=None):
return timesince_data(d, now, reverse=True)
def timeuntil_data_nonzero(d, now=None):
return timesince_data_nonzero(d, now, reverse=True)
def timeuntil_data_single(d, now=None):
    return timesince_data_single(d, now, reverse=True)
def timeuntil_text(d, now=None):
    # Use the _single variant, which always returns a dict (with a zero-minute
    # fallback), instead of indexing a possibly empty list.
    return ugettext('%(number)d %(type)s') % timeuntil_data_single(d, now)
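# --- Usage sketch (illustrative only) ---
# A hedged example of timesince_data(); the dates are arbitrary and the unit
# labels come back through ungettext, so their exact text is locale-dependent.
#
#   import datetime
#   d = datetime.datetime(2020, 1, 1)
#   now = datetime.datetime(2020, 1, 2, 3, 4)
#   timesince_data(d, now)
#   # -> [..., {'number': 1, 'type': u'day'},
#   #      {'number': 3, 'type': u'hours'}, {'number': 4, 'type': u'minutes'}]
#   timesince_data_nonzero(d, now)   # drops the zero-valued components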
| 34.232558 | 83 | 0.619565 | 412 | 2,944 | 4.364078 | 0.271845 | 0.035595 | 0.035595 | 0.050056 | 0.263626 | 0.161847 | 0.129588 | 0.110122 | 0.110122 | 0.110122 | 0 | 0.025945 | 0.227582 | 2,944 | 85 | 84 | 34.635294 | 0.764732 | 0.273777 | 0 | 0.044444 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0 | 0.066667 | 0.133333 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
74ac48de8e16cbda27909756ffb236d23adc27ea | 376 | py | Python | country/migrations/0023_auto_20210301_0839.py | bikramtuladhar/covid-19-procurement-explorer-admin | 9bba473c8b83c8651e3178b6fba01af74d8b27dc | [
"BSD-3-Clause"
] | null | null | null | country/migrations/0023_auto_20210301_0839.py | bikramtuladhar/covid-19-procurement-explorer-admin | 9bba473c8b83c8651e3178b6fba01af74d8b27dc | [
"BSD-3-Clause"
] | null | null | null | country/migrations/0023_auto_20210301_0839.py | bikramtuladhar/covid-19-procurement-explorer-admin | 9bba473c8b83c8651e3178b6fba01af74d8b27dc | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 3.1.2 on 2021-03-01 08:39
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("country", "0022_merge_20210226_1050"),
]
operations = [
migrations.AlterModelOptions(
name="equitycategory",
options={"verbose_name_plural": "Equity Categories"},
),
]
| 20.888889 | 65 | 0.625 | 38 | 376 | 6.052632 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111913 | 0.263298 | 376 | 17 | 66 | 22.117647 | 0.718412 | 0.119681 | 0 | 0 | 1 | 0 | 0.246201 | 0.072948 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74ae0bf85eb2d9d38456959bc42d199b78f2e0f3 | 323 | py | Python | techproject/src/mainapp/products/migrations/0002_auto_20210416_2106.py | Joshuajking/RoyalHotel-WebApp | 351681ea9ee2a79046b09b22a9dae26780cd9e9c | [
"Unlicense"
] | null | null | null | techproject/src/mainapp/products/migrations/0002_auto_20210416_2106.py | Joshuajking/RoyalHotel-WebApp | 351681ea9ee2a79046b09b22a9dae26780cd9e9c | [
"Unlicense"
] | null | null | null | techproject/src/mainapp/products/migrations/0002_auto_20210416_2106.py | Joshuajking/RoyalHotel-WebApp | 351681ea9ee2a79046b09b22a9dae26780cd9e9c | [
"Unlicense"
] | null | null | null | # Generated by Django 2.0.7 on 2021-04-17 02:06
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('products', '0001_initial'),
]
operations = [
migrations.RenameModel(
old_name='Products',
new_name='Product',
),
]
| 17.944444 | 47 | 0.585139 | 34 | 323 | 5.470588 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084071 | 0.30031 | 323 | 17 | 48 | 19 | 0.738938 | 0.139319 | 0 | 0 | 1 | 0 | 0.126812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74b1b62ded3a565c9e9435f28ddf27ba6a746cb0 | 738 | py | Python | ptmv/snd.py | ryw89/ptmv | 63df738a9700de173e3eb4202ec24a867e2525f4 | [
"MIT"
] | null | null | null | ptmv/snd.py | ryw89/ptmv | 63df738a9700de173e3eb4202ec24a867e2525f4 | [
"MIT"
] | null | null | null | ptmv/snd.py | ryw89/ptmv | 63df738a9700de173e3eb4202ec24a867e2525f4 | [
"MIT"
] | null | null | null | import os
import subprocess
import tempfile
import time
import wave
import simpleaudio
def extract(file):
ptmv_tempdir = os.path.join(tempfile.gettempdir(), "ptmv")
if not os.path.exists(ptmv_tempdir): os.makedirs(ptmv_tempdir)
    # Join properly so the file lands inside the ptmv temp dir; plain string
    # concatenation dropped the path separator.
    snd_file = os.path.join(ptmv_tempdir, str(int(time.time())) + ".wav")
command = "ffmpeg -i " + file + " -b:a 48k -ac 1 " + snd_file
subprocess.run(command.split(), stdout = subprocess.DEVNULL, stderr = subprocess.DEVNULL)
return snd_file
def play(file, start_time):
if not os.path.exists(file): return
wave_raw = wave.open(file)
frame_rate = wave_raw.getframerate()
wave_raw.setpos(int(frame_rate * start_time))
return simpleaudio.WaveObject.from_wave_read(wave_raw).play()
def stop(play_obj): play_obj.stop() | 32.086957 | 90 | 0.752033 | 113 | 738 | 4.743363 | 0.442478 | 0.08209 | 0.05597 | 0.041045 | 0.063433 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004615 | 0.119241 | 738 | 23 | 91 | 32.086957 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0.046008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.3 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
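# --- Usage sketch (illustrative only) ---
# A hedged example of the extract/play/stop flow; 'movie.mp4' is a placeholder
# and ffmpeg must be available on PATH for extract() to succeed.
#
#   wav = extract('movie.mp4')
#   play_obj = play(wav, start_time=30)  # start playback 30 seconds in
#   ...
#   stop(play_obj)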
74b7f7e963a239b074c92c650bdbd8d4e5bdc2e6 | 1,127 | py | Python | rpg/lib.py | FlorianLeChat/Python-RPG | 833594a964687f57b71eeb63a2a410a8fee43aa1 | [
"MIT"
] | null | null | null | rpg/lib.py | FlorianLeChat/Python-RPG | 833594a964687f57b71eeb63a2a410a8fee43aa1 | [
"MIT"
] | null | null | null | rpg/lib.py | FlorianLeChat/Python-RPG | 833594a964687f57b71eeb63a2a410a8fee43aa1 | [
"MIT"
] | null | null | null | #
# Try to check if the input is a number.
# https://stackoverflow.com/a/1267145
#
def tryGetNumber(value = ""):
try:
int(value)
return True
except ValueError:
return False
#
# Try to retrieve the input from the usage by checking the termination
# controls like CTRL+C to stop the program.
#
def tryGetInput(prompt = ""):
try:
return input(prompt)
except KeyboardInterrupt:
return False
#
# Checks if the string is a loadable JSON object.
# https://stackoverflow.com/a/5508597
#
import json
def tryGetJSON(value = ""):
try:
return json.loads(value)
except ValueError:
return False
#
# Displays a message in the output terminal formatted for the RPG project.
#
def consoleLog(prefix = "Info", message = "", newLine = "\n"):
print("[" + prefix + "] " + message, end = newLine, flush = True)
#
# Returns a list of files inside a single folder.
# https://stackoverflow.com/a/1724723
#
import os, fnmatch
def findFiles(pattern, path):
result = []
for root, _, files in os.walk(path):
for name in files:
if fnmatch.fnmatch(name, pattern):
result.append(os.path.join(root, name))
return result | 20.87037 | 74 | 0.700089 | 158 | 1,127 | 4.987342 | 0.518987 | 0.068528 | 0.079949 | 0.083756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022801 | 0.182786 | 1,127 | 54 | 75 | 20.87037 | 0.83279 | 0.377995 | 0 | 0.296296 | 0 | 0 | 0.013196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.074074 | 0 | 0.518519 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
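# --- Usage sketch (illustrative only) ---
# Hedged examples for the helpers above; the prompt, pattern and path are
# placeholders.
#
#   if tryGetNumber(tryGetInput("How many? ")):
#       consoleLog("Info", "Got a number")
#   saves = findFiles("*.json", "./saves")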
74c416090412740f9213f9dd77f04282a2b98bbb | 372 | py | Python | setup.py | RosettaCommons/pyrosetta_colab_setup | 8e00ed2f1e73e46abc30a2103cef89e48a44a1c8 | [
"MIT"
] | 1 | 2021-07-16T19:31:19.000Z | 2021-07-16T19:31:19.000Z | setup.py | dw5601/pyrosetta_colab_setup | 8e00ed2f1e73e46abc30a2103cef89e48a44a1c8 | [
"MIT"
] | null | null | null | setup.py | dw5601/pyrosetta_colab_setup | 8e00ed2f1e73e46abc30a2103cef89e48a44a1c8 | [
"MIT"
] | 2 | 2021-07-16T19:31:21.000Z | 2021-09-09T06:50:32.000Z | from setuptools import setup
setup(name='pyrosettacolabsetup',
version='0.5',
description='Mounts Google Drive for PyRosetta use in Google Colaboratory',
url='https://github.com/kathyle9/pyrosettacolabsetup',
author='kathyle9',
author_email='kle16@jhu.edu',
license='MIT',
packages=['pyrosettacolabsetup'],
zip_safe=False)
| 31 | 81 | 0.688172 | 40 | 372 | 6.35 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019868 | 0.188172 | 372 | 11 | 82 | 33.818182 | 0.821192 | 0 | 0 | 0 | 0 | 0 | 0.462366 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74c56a4b96f6fcd9893adae4e172250b3e40e46f | 1,885 | py | Python | my_decorator.py | 4enzo/learning_py | 6f0bc7b974fd7cdb9138a5c41ebaec339373513f | [
"MIT"
] | null | null | null | my_decorator.py | 4enzo/learning_py | 6f0bc7b974fd7cdb9138a5c41ebaec339373513f | [
"MIT"
] | null | null | null | my_decorator.py | 4enzo/learning_py | 6f0bc7b974fd7cdb9138a5c41ebaec339373513f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Created on 2022-03-16
@author: Enzo
'''
import time
import functools
def run_time(func):
"函数运行时长"
@functools.wraps(func)
def inner(*args, **kwargs):
start_time = time.time()
result = func(*args, **kwargs)
end_time = time.time()
print('%s spend: %.4f s'%(func.__name__,end_time-start_time))
return result
return inner
#@property
class Test(object):
def __init__(self):
self.__num = 100
    # @property turns a method into an attribute-style accessor.
    # Normally a private variable needs explicit get/set methods, which is verbose.
    # Adding @property turns the getter into an attribute, and @property also
    # creates the companion @xxx.setter decorator that turns the setter into
    # attribute assignment, giving us controlled attribute access.
@property
def num(self):
return self.__num
@num.setter
def num(self,number):
if isinstance(number,int):
self.__num = number
else:
print('The number is not an int')
class Foo(object):
name = 'test'
def __init__(self):
pass
#Normally, methods defined in a class are instance methods by default
#An instance method cannot be called without an instance; the instance itself is passed to the function as self
def normal_func(self):
"实例方法"
print('This is normal_func')
#An instance method calls a static method via self.method_name()
self.static_func()
#@staticmethod needs neither self (the instance) nor cls (the class); the method is used just like a plain function
#If a method does not touch instance attributes and only does utility work (e.g. formatting a time string), a static method is recommended
@staticmethod
def static_func():
"静态方法"
print('This is static_func')
#@classmethod functions need no instance and no self parameter, but the first parameter must be cls (the class itself); they can access class attributes, call class methods, create instances, etc., avoiding hard-coding
#If you need restricted operations on class attributes (i.e. static variables), a class method is recommended
@classmethod
def class_func(cls):
"类方法"
print('This is class_func')
if __name__ == '__main__':
#Instance methods need an instance, so they can only be called as ClassName().method_name()
Foo().normal_func()
#ClassName.method_name()
#ClassName().method_name() also works, but is not recommended
Foo.static_func()
#ClassName.method_name()
#ClassName().method_name() also works, but is not recommended
Foo.class_func()
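    # Added demo (not in the original script): exercise the run_time decorator
    # and the Test.num property explained in the comments above.
    @run_time
    def slow_add(a, b):
        time.sleep(0.1)
        return a + b
    slow_add(1, 2)     # prints e.g. "slow_add spent: 0.1001 s"
    t = Test()
    t.num = 42         # accepted by the @num.setter isinstance check
    print(t.num)       # 42, via the @property getter
    t.num = 'oops'     # rejected: prints "The number is not an int"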
| 23.271605 | 115 | 0.603714 | 203 | 1,885 | 5.403941 | 0.507389 | 0.02917 | 0.030082 | 0.025524 | 0.043756 | 0.043756 | 0.043756 | 0 | 0 | 0 | 0 | 0.009538 | 0.276923 | 1,885 | 80 | 116 | 23.5625 | 0.795304 | 0.327851 | 0 | 0.045455 | 0 | 0 | 0.10473 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.204545 | false | 0.022727 | 0.045455 | 0.022727 | 0.386364 | 0.113636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74c6c5264a18874bfc3529ea2edc3a70a8558e9d | 35,236 | py | Python | src/oci/data_integration/models/task_run.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/data_integration/models/task_run.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/data_integration/models/task_run.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class TaskRun(object):
"""
The information about a task run.
"""
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "NOT_STARTED"
STATUS_NOT_STARTED = "NOT_STARTED"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "QUEUED"
STATUS_QUEUED = "QUEUED"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "RUNNING"
STATUS_RUNNING = "RUNNING"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "TERMINATING"
STATUS_TERMINATING = "TERMINATING"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "TERMINATED"
STATUS_TERMINATED = "TERMINATED"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "SUCCESS"
STATUS_SUCCESS = "SUCCESS"
#: A constant which can be used with the status property of a TaskRun.
#: This constant has a value of "ERROR"
STATUS_ERROR = "ERROR"
#: A constant which can be used with the expected_duration_unit property of a TaskRun.
#: This constant has a value of "SECONDS"
EXPECTED_DURATION_UNIT_SECONDS = "SECONDS"
#: A constant which can be used with the expected_duration_unit property of a TaskRun.
#: This constant has a value of "MINUTES"
EXPECTED_DURATION_UNIT_MINUTES = "MINUTES"
#: A constant which can be used with the expected_duration_unit property of a TaskRun.
#: This constant has a value of "HOURS"
EXPECTED_DURATION_UNIT_HOURS = "HOURS"
#: A constant which can be used with the expected_duration_unit property of a TaskRun.
#: This constant has a value of "DAYS"
EXPECTED_DURATION_UNIT_DAYS = "DAYS"
#: A constant which can be used with the auth_mode property of a TaskRun.
#: This constant has a value of "OBO"
AUTH_MODE_OBO = "OBO"
#: A constant which can be used with the auth_mode property of a TaskRun.
#: This constant has a value of "RESOURCE_PRINCIPAL"
AUTH_MODE_RESOURCE_PRINCIPAL = "RESOURCE_PRINCIPAL"
#: A constant which can be used with the auth_mode property of a TaskRun.
#: This constant has a value of "USER_CERTIFICATE"
AUTH_MODE_USER_CERTIFICATE = "USER_CERTIFICATE"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "INTEGRATION_TASK"
TASK_TYPE_INTEGRATION_TASK = "INTEGRATION_TASK"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "DATA_LOADER_TASK"
TASK_TYPE_DATA_LOADER_TASK = "DATA_LOADER_TASK"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "PIPELINE_TASK"
TASK_TYPE_PIPELINE_TASK = "PIPELINE_TASK"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "SQL_TASK"
TASK_TYPE_SQL_TASK = "SQL_TASK"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "OCI_DATAFLOW_TASK"
TASK_TYPE_OCI_DATAFLOW_TASK = "OCI_DATAFLOW_TASK"
#: A constant which can be used with the task_type property of a TaskRun.
#: This constant has a value of "REST_TASK"
TASK_TYPE_REST_TASK = "REST_TASK"
def __init__(self, **kwargs):
"""
Initializes a new TaskRun object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param key:
The value to assign to the key property of this TaskRun.
:type key: str
:param model_type:
The value to assign to the model_type property of this TaskRun.
:type model_type: str
:param model_version:
The value to assign to the model_version property of this TaskRun.
:type model_version: str
:param parent_ref:
The value to assign to the parent_ref property of this TaskRun.
:type parent_ref: oci.data_integration.models.ParentReference
:param name:
The value to assign to the name property of this TaskRun.
:type name: str
:param description:
The value to assign to the description property of this TaskRun.
:type description: str
:param object_version:
The value to assign to the object_version property of this TaskRun.
:type object_version: int
:param config_provider:
The value to assign to the config_provider property of this TaskRun.
:type config_provider: oci.data_integration.models.ConfigProvider
:param status:
The value to assign to the status property of this TaskRun.
Allowed values for this property are: "NOT_STARTED", "QUEUED", "RUNNING", "TERMINATING", "TERMINATED", "SUCCESS", "ERROR", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type status: str
:param start_time_millis:
The value to assign to the start_time_millis property of this TaskRun.
:type start_time_millis: int
:param end_time_millis:
The value to assign to the end_time_millis property of this TaskRun.
:type end_time_millis: int
:param last_updated:
The value to assign to the last_updated property of this TaskRun.
:type last_updated: int
:param records_written:
The value to assign to the records_written property of this TaskRun.
:type records_written: int
:param bytes_processed:
The value to assign to the bytes_processed property of this TaskRun.
:type bytes_processed: int
:param error_message:
The value to assign to the error_message property of this TaskRun.
:type error_message: str
:param expected_duration:
The value to assign to the expected_duration property of this TaskRun.
:type expected_duration: float
:param expected_duration_unit:
The value to assign to the expected_duration_unit property of this TaskRun.
Allowed values for this property are: "SECONDS", "MINUTES", "HOURS", "DAYS", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type expected_duration_unit: str
:param task_key:
The value to assign to the task_key property of this TaskRun.
:type task_key: str
:param external_id:
The value to assign to the external_id property of this TaskRun.
:type external_id: str
:param retry_attempt:
The value to assign to the retry_attempt property of this TaskRun.
:type retry_attempt: int
:param task_schedule:
The value to assign to the task_schedule property of this TaskRun.
:type task_schedule: oci.data_integration.models.TaskSchedule
:param metrics:
The value to assign to the metrics property of this TaskRun.
:type metrics: dict(str, float)
:param outputs:
The value to assign to the outputs property of this TaskRun.
:type outputs: dict(str, ParameterValue)
:param execution_errors:
The value to assign to the execution_errors property of this TaskRun.
:type execution_errors: list[str]
:param termination_errors:
The value to assign to the termination_errors property of this TaskRun.
:type termination_errors: list[str]
:param auth_mode:
The value to assign to the auth_mode property of this TaskRun.
Allowed values for this property are: "OBO", "RESOURCE_PRINCIPAL", "USER_CERTIFICATE", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type auth_mode: str
:param opc_request_id:
The value to assign to the opc_request_id property of this TaskRun.
:type opc_request_id: str
:param object_status:
The value to assign to the object_status property of this TaskRun.
:type object_status: int
:param task_type:
The value to assign to the task_type property of this TaskRun.
Allowed values for this property are: "INTEGRATION_TASK", "DATA_LOADER_TASK", "PIPELINE_TASK", "SQL_TASK", "OCI_DATAFLOW_TASK", "REST_TASK", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type task_type: str
:param identifier:
The value to assign to the identifier property of this TaskRun.
:type identifier: str
:param metadata:
The value to assign to the metadata property of this TaskRun.
:type metadata: oci.data_integration.models.ObjectMetadata
:param key_map:
The value to assign to the key_map property of this TaskRun.
:type key_map: dict(str, str)
"""
self.swagger_types = {
'key': 'str',
'model_type': 'str',
'model_version': 'str',
'parent_ref': 'ParentReference',
'name': 'str',
'description': 'str',
'object_version': 'int',
'config_provider': 'ConfigProvider',
'status': 'str',
'start_time_millis': 'int',
'end_time_millis': 'int',
'last_updated': 'int',
'records_written': 'int',
'bytes_processed': 'int',
'error_message': 'str',
'expected_duration': 'float',
'expected_duration_unit': 'str',
'task_key': 'str',
'external_id': 'str',
'retry_attempt': 'int',
'task_schedule': 'TaskSchedule',
'metrics': 'dict(str, float)',
'outputs': 'dict(str, ParameterValue)',
'execution_errors': 'list[str]',
'termination_errors': 'list[str]',
'auth_mode': 'str',
'opc_request_id': 'str',
'object_status': 'int',
'task_type': 'str',
'identifier': 'str',
'metadata': 'ObjectMetadata',
'key_map': 'dict(str, str)'
}
self.attribute_map = {
'key': 'key',
'model_type': 'modelType',
'model_version': 'modelVersion',
'parent_ref': 'parentRef',
'name': 'name',
'description': 'description',
'object_version': 'objectVersion',
'config_provider': 'configProvider',
'status': 'status',
'start_time_millis': 'startTimeMillis',
'end_time_millis': 'endTimeMillis',
'last_updated': 'lastUpdated',
'records_written': 'recordsWritten',
'bytes_processed': 'bytesProcessed',
'error_message': 'errorMessage',
'expected_duration': 'expectedDuration',
'expected_duration_unit': 'expectedDurationUnit',
'task_key': 'taskKey',
'external_id': 'externalId',
'retry_attempt': 'retryAttempt',
'task_schedule': 'taskSchedule',
'metrics': 'metrics',
'outputs': 'outputs',
'execution_errors': 'executionErrors',
'termination_errors': 'terminationErrors',
'auth_mode': 'authMode',
'opc_request_id': 'opcRequestId',
'object_status': 'objectStatus',
'task_type': 'taskType',
'identifier': 'identifier',
'metadata': 'metadata',
'key_map': 'keyMap'
}
self._key = None
self._model_type = None
self._model_version = None
self._parent_ref = None
self._name = None
self._description = None
self._object_version = None
self._config_provider = None
self._status = None
self._start_time_millis = None
self._end_time_millis = None
self._last_updated = None
self._records_written = None
self._bytes_processed = None
self._error_message = None
self._expected_duration = None
self._expected_duration_unit = None
self._task_key = None
self._external_id = None
self._retry_attempt = None
self._task_schedule = None
self._metrics = None
self._outputs = None
self._execution_errors = None
self._termination_errors = None
self._auth_mode = None
self._opc_request_id = None
self._object_status = None
self._task_type = None
self._identifier = None
self._metadata = None
self._key_map = None
@property
def key(self):
"""
Gets the key of this TaskRun.
The key of the object.
:return: The key of this TaskRun.
:rtype: str
"""
return self._key
@key.setter
def key(self, key):
"""
Sets the key of this TaskRun.
The key of the object.
:param key: The key of this TaskRun.
:type: str
"""
self._key = key
@property
def model_type(self):
"""
Gets the model_type of this TaskRun.
The type of the object.
:return: The model_type of this TaskRun.
:rtype: str
"""
return self._model_type
@model_type.setter
def model_type(self, model_type):
"""
Sets the model_type of this TaskRun.
The type of the object.
:param model_type: The model_type of this TaskRun.
:type: str
"""
self._model_type = model_type
@property
def model_version(self):
"""
Gets the model_version of this TaskRun.
The model version of an object.
:return: The model_version of this TaskRun.
:rtype: str
"""
return self._model_version
@model_version.setter
def model_version(self, model_version):
"""
Sets the model_version of this TaskRun.
The model version of an object.
:param model_version: The model_version of this TaskRun.
:type: str
"""
self._model_version = model_version
@property
def parent_ref(self):
"""
Gets the parent_ref of this TaskRun.
:return: The parent_ref of this TaskRun.
:rtype: oci.data_integration.models.ParentReference
"""
return self._parent_ref
@parent_ref.setter
def parent_ref(self, parent_ref):
"""
Sets the parent_ref of this TaskRun.
:param parent_ref: The parent_ref of this TaskRun.
:type: oci.data_integration.models.ParentReference
"""
self._parent_ref = parent_ref
@property
def name(self):
"""
Gets the name of this TaskRun.
Free form text without any restriction on permitted characters. Name can have letters, numbers, and special characters. The value is editable and is restricted to 1000 characters.
:return: The name of this TaskRun.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this TaskRun.
Free form text without any restriction on permitted characters. Name can have letters, numbers, and special characters. The value is editable and is restricted to 1000 characters.
:param name: The name of this TaskRun.
:type: str
"""
self._name = name
@property
def description(self):
"""
Gets the description of this TaskRun.
Detailed description for the object.
:return: The description of this TaskRun.
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""
Sets the description of this TaskRun.
Detailed description for the object.
:param description: The description of this TaskRun.
:type: str
"""
self._description = description
@property
def object_version(self):
"""
Gets the object_version of this TaskRun.
The version of the object that is used to track changes in the object instance.
:return: The object_version of this TaskRun.
:rtype: int
"""
return self._object_version
@object_version.setter
def object_version(self, object_version):
"""
Sets the object_version of this TaskRun.
The version of the object that is used to track changes in the object instance.
:param object_version: The object_version of this TaskRun.
:type: int
"""
self._object_version = object_version
@property
def config_provider(self):
"""
Gets the config_provider of this TaskRun.
:return: The config_provider of this TaskRun.
:rtype: oci.data_integration.models.ConfigProvider
"""
return self._config_provider
@config_provider.setter
def config_provider(self, config_provider):
"""
Sets the config_provider of this TaskRun.
:param config_provider: The config_provider of this TaskRun.
:type: oci.data_integration.models.ConfigProvider
"""
self._config_provider = config_provider
@property
def status(self):
"""
Gets the status of this TaskRun.
The status of the task run.
Allowed values for this property are: "NOT_STARTED", "QUEUED", "RUNNING", "TERMINATING", "TERMINATED", "SUCCESS", "ERROR", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The status of this TaskRun.
:rtype: str
"""
return self._status
@status.setter
def status(self, status):
"""
Sets the status of this TaskRun.
The status of the task run.
:param status: The status of this TaskRun.
:type: str
"""
allowed_values = ["NOT_STARTED", "QUEUED", "RUNNING", "TERMINATING", "TERMINATED", "SUCCESS", "ERROR"]
if not value_allowed_none_or_none_sentinel(status, allowed_values):
status = 'UNKNOWN_ENUM_VALUE'
self._status = status
@property
def start_time_millis(self):
"""
Gets the start_time_millis of this TaskRun.
The start time.
:return: The start_time_millis of this TaskRun.
:rtype: int
"""
return self._start_time_millis
@start_time_millis.setter
def start_time_millis(self, start_time_millis):
"""
Sets the start_time_millis of this TaskRun.
The start time.
:param start_time_millis: The start_time_millis of this TaskRun.
:type: int
"""
self._start_time_millis = start_time_millis
@property
def end_time_millis(self):
"""
Gets the end_time_millis of this TaskRun.
The end time.
:return: The end_time_millis of this TaskRun.
:rtype: int
"""
return self._end_time_millis
@end_time_millis.setter
def end_time_millis(self, end_time_millis):
"""
Sets the end_time_millis of this TaskRun.
The end time.
:param end_time_millis: The end_time_millis of this TaskRun.
:type: int
"""
self._end_time_millis = end_time_millis
@property
def last_updated(self):
"""
Gets the last_updated of this TaskRun.
The date and time the object was last updated.
:return: The last_updated of this TaskRun.
:rtype: int
"""
return self._last_updated
@last_updated.setter
def last_updated(self, last_updated):
"""
Sets the last_updated of this TaskRun.
The date and time the object was last updated.
:param last_updated: The last_updated of this TaskRun.
:type: int
"""
self._last_updated = last_updated
@property
def records_written(self):
"""
Gets the records_written of this TaskRun.
The number of records processed in the task run.
:return: The records_written of this TaskRun.
:rtype: int
"""
return self._records_written
@records_written.setter
def records_written(self, records_written):
"""
Sets the records_written of this TaskRun.
The number of records processed in the task run.
:param records_written: The records_written of this TaskRun.
:type: int
"""
self._records_written = records_written
@property
def bytes_processed(self):
"""
Gets the bytes_processed of this TaskRun.
The number of bytes processed in the task run.
:return: The bytes_processed of this TaskRun.
:rtype: int
"""
return self._bytes_processed
@bytes_processed.setter
def bytes_processed(self, bytes_processed):
"""
Sets the bytes_processed of this TaskRun.
The number of bytes processed in the task run.
:param bytes_processed: The bytes_processed of this TaskRun.
:type: int
"""
self._bytes_processed = bytes_processed
@property
def error_message(self):
"""
Gets the error_message of this TaskRun.
Contains an error message if status is `ERROR`.
:return: The error_message of this TaskRun.
:rtype: str
"""
return self._error_message
@error_message.setter
def error_message(self, error_message):
"""
Sets the error_message of this TaskRun.
Contains an error message if status is `ERROR`.
:param error_message: The error_message of this TaskRun.
:type: str
"""
self._error_message = error_message
@property
def expected_duration(self):
"""
Gets the expected_duration of this TaskRun.
The expected duration for the task run.
:return: The expected_duration of this TaskRun.
:rtype: float
"""
return self._expected_duration
@expected_duration.setter
def expected_duration(self, expected_duration):
"""
Sets the expected_duration of this TaskRun.
The expected duration for the task run.
:param expected_duration: The expected_duration of this TaskRun.
:type: float
"""
self._expected_duration = expected_duration
@property
def expected_duration_unit(self):
"""
Gets the expected_duration_unit of this TaskRun.
The expected duration unit of measure.
Allowed values for this property are: "SECONDS", "MINUTES", "HOURS", "DAYS", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The expected_duration_unit of this TaskRun.
:rtype: str
"""
return self._expected_duration_unit
@expected_duration_unit.setter
def expected_duration_unit(self, expected_duration_unit):
"""
Sets the expected_duration_unit of this TaskRun.
The expected duration unit of measure.
:param expected_duration_unit: The expected_duration_unit of this TaskRun.
:type: str
"""
allowed_values = ["SECONDS", "MINUTES", "HOURS", "DAYS"]
if not value_allowed_none_or_none_sentinel(expected_duration_unit, allowed_values):
expected_duration_unit = 'UNKNOWN_ENUM_VALUE'
self._expected_duration_unit = expected_duration_unit
@property
def task_key(self):
"""
Gets the task_key of this TaskRun.
Task Key of the task for which TaskRun is being created. If not specified, the AggregatorKey in RegistryMetadata will be assumed to be the TaskKey
:return: The task_key of this TaskRun.
:rtype: str
"""
return self._task_key
@task_key.setter
def task_key(self, task_key):
"""
Sets the task_key of this TaskRun.
Task Key of the task for which TaskRun is being created. If not specified, the AggregatorKey in RegistryMetadata will be assumed to be the TaskKey
:param task_key: The task_key of this TaskRun.
:type: str
"""
self._task_key = task_key
@property
def external_id(self):
"""
Gets the external_id of this TaskRun.
The external identifier for the task run.
:return: The external_id of this TaskRun.
:rtype: str
"""
return self._external_id
@external_id.setter
def external_id(self, external_id):
"""
Sets the external_id of this TaskRun.
The external identifier for the task run.
:param external_id: The external_id of this TaskRun.
:type: str
"""
self._external_id = external_id
@property
def retry_attempt(self):
"""
Gets the retry_attempt of this TaskRun.
Holds the particular attempt number.
:return: The retry_attempt of this TaskRun.
:rtype: int
"""
return self._retry_attempt
@retry_attempt.setter
def retry_attempt(self, retry_attempt):
"""
Sets the retry_attempt of this TaskRun.
Holds the particular attempt number.
:param retry_attempt: The retry_attempt of this TaskRun.
:type: int
"""
self._retry_attempt = retry_attempt
@property
def task_schedule(self):
"""
Gets the task_schedule of this TaskRun.
:return: The task_schedule of this TaskRun.
:rtype: oci.data_integration.models.TaskSchedule
"""
return self._task_schedule
@task_schedule.setter
def task_schedule(self, task_schedule):
"""
Sets the task_schedule of this TaskRun.
:param task_schedule: The task_schedule of this TaskRun.
:type: oci.data_integration.models.TaskSchedule
"""
self._task_schedule = task_schedule
@property
def metrics(self):
"""
Gets the metrics of this TaskRun.
A map of metrics for the run.
:return: The metrics of this TaskRun.
:rtype: dict(str, float)
"""
return self._metrics
@metrics.setter
def metrics(self, metrics):
"""
Sets the metrics of this TaskRun.
A map of metrics for the run.
:param metrics: The metrics of this TaskRun.
:type: dict(str, float)
"""
self._metrics = metrics
@property
def outputs(self):
"""
Gets the outputs of this TaskRun.
A map of the outputs of the run.
:return: The outputs of this TaskRun.
:rtype: dict(str, ParameterValue)
"""
return self._outputs
@outputs.setter
def outputs(self, outputs):
"""
Sets the outputs of this TaskRun.
A map of the outputs of the run.
:param outputs: The outputs of this TaskRun.
:type: dict(str, ParameterValue)
"""
self._outputs = outputs
@property
def execution_errors(self):
"""
Gets the execution_errors of this TaskRun.
An array of execution errors from the run.
:return: The execution_errors of this TaskRun.
:rtype: list[str]
"""
return self._execution_errors
@execution_errors.setter
def execution_errors(self, execution_errors):
"""
Sets the execution_errors of this TaskRun.
An array of execution errors from the run.
:param execution_errors: The execution_errors of this TaskRun.
:type: list[str]
"""
self._execution_errors = execution_errors
@property
def termination_errors(self):
"""
Gets the termination_errors of this TaskRun.
An array of termination errors from the run.
:return: The termination_errors of this TaskRun.
:rtype: list[str]
"""
return self._termination_errors
@termination_errors.setter
def termination_errors(self, termination_errors):
"""
Sets the termination_errors of this TaskRun.
An array of termination errors from the run.
:param termination_errors: The termination_errors of this TaskRun.
:type: list[str]
"""
self._termination_errors = termination_errors
@property
def auth_mode(self):
"""
Gets the auth_mode of this TaskRun.
The authorization mode used when the task was executed.
Allowed values for this property are: "OBO", "RESOURCE_PRINCIPAL", "USER_CERTIFICATE", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The auth_mode of this TaskRun.
:rtype: str
"""
return self._auth_mode
@auth_mode.setter
def auth_mode(self, auth_mode):
"""
Sets the auth_mode of this TaskRun.
The authorization mode used when the task was executed.
:param auth_mode: The auth_mode of this TaskRun.
:type: str
"""
allowed_values = ["OBO", "RESOURCE_PRINCIPAL", "USER_CERTIFICATE"]
if not value_allowed_none_or_none_sentinel(auth_mode, allowed_values):
auth_mode = 'UNKNOWN_ENUM_VALUE'
self._auth_mode = auth_mode
@property
def opc_request_id(self):
"""
Gets the opc_request_id of this TaskRun.
The OPC request ID of execution of the task run.
:return: The opc_request_id of this TaskRun.
:rtype: str
"""
return self._opc_request_id
@opc_request_id.setter
def opc_request_id(self, opc_request_id):
"""
Sets the opc_request_id of this TaskRun.
The OPC request ID of execution of the task run.
:param opc_request_id: The opc_request_id of this TaskRun.
:type: str
"""
self._opc_request_id = opc_request_id
@property
def object_status(self):
"""
Gets the object_status of this TaskRun.
The status of an object that can be set to value 1 for shallow references across objects, other values reserved.
:return: The object_status of this TaskRun.
:rtype: int
"""
return self._object_status
@object_status.setter
def object_status(self, object_status):
"""
Sets the object_status of this TaskRun.
The status of an object that can be set to value 1 for shallow references across objects, other values reserved.
:param object_status: The object_status of this TaskRun.
:type: int
"""
self._object_status = object_status
@property
def task_type(self):
"""
Gets the task_type of this TaskRun.
The type of task run.
Allowed values for this property are: "INTEGRATION_TASK", "DATA_LOADER_TASK", "PIPELINE_TASK", "SQL_TASK", "OCI_DATAFLOW_TASK", "REST_TASK", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The task_type of this TaskRun.
:rtype: str
"""
return self._task_type
@task_type.setter
def task_type(self, task_type):
"""
Sets the task_type of this TaskRun.
The type of task run.
:param task_type: The task_type of this TaskRun.
:type: str
"""
allowed_values = ["INTEGRATION_TASK", "DATA_LOADER_TASK", "PIPELINE_TASK", "SQL_TASK", "OCI_DATAFLOW_TASK", "REST_TASK"]
if not value_allowed_none_or_none_sentinel(task_type, allowed_values):
task_type = 'UNKNOWN_ENUM_VALUE'
self._task_type = task_type
@property
def identifier(self):
"""
Gets the identifier of this TaskRun.
Value can only contain upper case letters, underscore and numbers. It should begin with upper case letter or underscore. The value can be modified.
:return: The identifier of this TaskRun.
:rtype: str
"""
return self._identifier
@identifier.setter
def identifier(self, identifier):
"""
Sets the identifier of this TaskRun.
Value can only contain upper case letters, underscore and numbers. It should begin with upper case letter or underscore. The value can be modified.
:param identifier: The identifier of this TaskRun.
:type: str
"""
self._identifier = identifier
@property
def metadata(self):
"""
Gets the metadata of this TaskRun.
:return: The metadata of this TaskRun.
:rtype: oci.data_integration.models.ObjectMetadata
"""
return self._metadata
@metadata.setter
def metadata(self, metadata):
"""
Sets the metadata of this TaskRun.
:param metadata: The metadata of this TaskRun.
:type: oci.data_integration.models.ObjectMetadata
"""
self._metadata = metadata
@property
def key_map(self):
"""
Gets the key_map of this TaskRun.
A key map. If provided, key is replaced with generated key. This structure provides mapping between user provided key and generated key.
:return: The key_map of this TaskRun.
:rtype: dict(str, str)
"""
return self._key_map
@key_map.setter
def key_map(self, key_map):
"""
Sets the key_map of this TaskRun.
A key map. If provided, key is replaced with generated key. This structure provides mapping between user provided key and generated key.
:param key_map: The key_map of this TaskRun.
:type: dict(str, str)
"""
self._key_map = key_map
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
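# Added usage sketch (not part of the generated SDK file); the field values
# below are invented for illustration.
if __name__ == "__main__":
    run = TaskRun(key="run-1", name="nightly-load", status="SUCCESS",
                  task_type="INTEGRATION_TASK", records_written=1200)
    print(run.status)     # "SUCCESS"
    run.status = "BOGUS"  # the setter maps unrecognized values
    print(run.status)     # "UNKNOWN_ENUM_VALUE"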
| 31.237589 | 245 | 0.630662 | 4,367 | 35,236 | 4.89352 | 0.06343 | 0.045204 | 0.097333 | 0.04773 | 0.658914 | 0.554001 | 0.468507 | 0.397567 | 0.344829 | 0.337155 | 0 | 0.001131 | 0.297508 | 35,236 | 1,127 | 246 | 31.265306 | 0.862199 | 0.528267 | 0 | 0.094955 | 0 | 0 | 0.138681 | 0.003467 | 0 | 0 | 0 | 0 | 0 | 1 | 0.20178 | false | 0 | 0.005935 | 0.005935 | 0.376855 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74c93ec778e6f2115d37da29c96d96d36fd726a2 | 5,653 | py | Python | code/ml/gptheano/vecgpdm/test.py | dmytrov/gaussianprocess | 7044bd2d66f44e10656fee17e94fdee0c24c70bb | [
"MIT"
] | null | null | null | code/ml/gptheano/vecgpdm/test.py | dmytrov/gaussianprocess | 7044bd2d66f44e10656fee17e94fdee0c24c70bb | [
"MIT"
] | null | null | null | code/ml/gptheano/vecgpdm/test.py | dmytrov/gaussianprocess | 7044bd2d66f44e10656fee17e94fdee0c24c70bb | [
"MIT"
] | null | null | null | import os
import numpy as np
import matplotlibex as plx
import numerical.numpytheano.theanopool as tp
import ml.gptheano.vecgpdm.multiprimitivemodel as mdl
import ml.gptheano.vecgpdm.kernels as krn
import ml.gptheano.vecgpdm.equations as vceq
import numerical.numpytheano as nt
import matplotlibex.mlplot as plx
def generate_y(t, fs):
return np.vstack([ np.vstack([5.0*np.sin(f*t+0.0), 5.0*np.sin(f*t+3.5),
4.1*np.sin(f*t+0.4), 4.1*np.sin(f*t+2.8),
3.1*np.sin(f*t+0.4), 3.1*np.sin(f*t+3.8),
2.1*np.sin(f*t+0.4), 2.1*np.sin(f*t+4.8)]) for f in fs]).T
if __name__ == "__main__":
t = np.linspace(0.0, 3.1*2*np.pi, num=600)
y1 = generate_y(t, fs=[1.0, 3.14, 1.0])
y2 = generate_y(t, fs=[1.0, 1.0, 3.14])
y3 = generate_y(t, fs=[1.0, 3.14, 3.14])
np.random.seed(0)
y = [y1 + 0.2*np.reshape(np.random.normal(size=y1.size), y1.shape),
y2 + 0.2*np.reshape(np.random.normal(size=y2.size), y2.shape) + 1.0,
y3 + 0.2*np.reshape(np.random.normal(size=y3.size), y3.shape) + 2.0,
]
#ns = tp.NumpyVarPool()
ns = tp.TheanoVarPool()
lvm_kern_type = krn.RBF_ARD_Kernel
dyn_kern_type = krn.RBF_ARD_Kernel_noscale
dataset = mdl.Dataset(lvm_kern_type=lvm_kern_type, dyn_kern_type=dyn_kern_type, observed_mean_mode=mdl.ObservedMeanMode.per_part, ns=ns)
pt_1 = dataset.create_part_type("1", y_indexes=range(0, 8), Q=2, M=10)
pt_2 = dataset.create_part_type("2", y_indexes=range(8, 16), Q=2, M=10)
pt_3 = dataset.create_part_type("3", y_indexes=range(16, 24), Q=2, M=10)
mpt_sin_1 = dataset.create_mp_type("sin", pt_1)
mpt_sin_2 = dataset.create_mp_type("sin", pt_2)
mpt_sin_3 = dataset.create_mp_type("sin", pt_3)
mpt_up_1 = dataset.create_mp_type("up", pt_1)
mpt_up_2 = dataset.create_mp_type("up", pt_2)
mpt_up_3 = dataset.create_mp_type("up", pt_3)
start = 100
end = 500
trial1 = dataset.create_trial(Y=y[0], learning_mode=mdl.LearningMode.full)
trial1.add_segment(mdl.MPSegment(mp_type=mpt_sin_1, start=start, end=end))
trial1.add_segment(mdl.MPSegment(mp_type=mpt_up_2, start=start, end=end))
trial1.add_segment(mdl.MPSegment(mp_type=mpt_sin_3, start=start, end=end))
trial2 = dataset.create_trial(Y=y[1], learning_mode=mdl.LearningMode.full)
trial2.add_segment(mdl.MPSegment(mp_type=mpt_sin_1, start=start, end=end))
trial2.add_segment(mdl.MPSegment(mp_type=mpt_sin_2, start=start, end=end))
trial2.add_segment(mdl.MPSegment(mp_type=mpt_up_3, start=start, end=end))
trial3 = dataset.create_trial(Y=y[2], learning_mode=mdl.LearningMode.limited)
trial3.add_segment(mdl.MPSegment(mp_type=mpt_sin_1, start=start, end=end))
trial3.add_segment(mdl.MPSegment(mp_type=mpt_up_2, start=start, end=end))
trial3.add_segment(mdl.MPSegment(mp_type=mpt_up_3, start=start, end=end))
dataset.init_pieces()
dataset.init_x()
dataset.init_aug_z(dyn_M=20)
dataset.init_psi_stats()
directory = "test"
if not os.path.exists(directory):
os.makedirs(directory)
steps = [1, 2, 3, 4]
for step in steps:
if step == 1:
elbo_mode = vceq.ELBOMode.separate_dynamics
dataset.init_elbo(elbo_mode)
print("ELBO: {}".format(dataset.get_elbo_value()))
dataset.precalc_posterior_predictive()
mdl.save_plot_latent_space(dataset, directory, "initial")
mdl.optimize_blocked(dataset, niterations=5, maxiter=300, print_vars=True, save_directory=directory)
dataset.save_state_to(directory + "/step_1.pkl")
elif step == 2:
dataset.load_state_from(directory + "/step_1.pkl")
elbo_mode = vceq.ELBOMode.couplings_only
dataset.init_elbo(elbo_mode)
print("ELBO: {}".format(dataset.get_elbo_value()))
mdl.optimize_blocked(dataset, maxiter=300, print_vars=True, save_directory=directory)
dataset.save_state_to(directory + "/step_2.pkl")
elif step == 3:
dataset.load_state_from(directory + "/step_2.pkl")
elbo_mode = vceq.ELBOMode.couplings_only
dataset.init_elbo(elbo_mode)
dataset.precalc_posterior_predictive()
#mps = [mpt_sin_1, mpt_up_2, mpt_sin_3]
mps = [mpt_sin_1, mpt_up_2, mpt_up_3]
x_path = dataset.run_generative_dynamics(mps, nsteps=100)
y_path = dataset.lvm_map_to_observed(mps, x_path)
mdl.save_plot_piece_posterior_predictive(dataset, mps, directory)
mdl.save_plot_latent_space(dataset, directory, "final_3")
mdl.save_plot_latent_vs_generated(dataset, mps, directory, "final_3")
mdl.save_plot_training_vs_generated(dataset, mps, directory, "final_3")
elif step == 4:
dataset.load_state_from(directory + "/step_2.pkl")
elbo_mode = vceq.ELBOMode.full
dataset.init_elbo(elbo_mode)
dataset.precalc_posterior_predictive()
#mps = [mpt_sin_1, mpt_sin_2, mpt_up_3]
mps = [mpt_sin_1, mpt_up_2, mpt_sin_3]
x_path = dataset.run_generative_dynamics(mps, nsteps=100)
y_path = dataset.lvm_map_to_observed(mps, x_path)
mdl.save_plot_piece_posterior_predictive(dataset, mps, directory)
mdl.save_plot_latent_space(dataset, directory, "final_4")
mdl.save_plot_latent_vs_generated(dataset, mps, directory, "final_4")
mdl.save_plot_training_vs_generated(dataset, mps, directory, "final_4")
| 48.316239 | 140 | 0.66018 | 892 | 5,653 | 3.906951 | 0.179372 | 0.025825 | 0.033572 | 0.056815 | 0.677762 | 0.618938 | 0.537446 | 0.516786 | 0.480344 | 0.46944 | 0 | 0.0442 | 0.211569 | 5,653 | 116 | 141 | 48.732759 | 0.737492 | 0.017336 | 0 | 0.191919 | 0 | 0 | 0.027032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.040404 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74ce4a0df4da03c89539d9d88b200fbc9f5506a3 | 993 | py | Python | application/__init__.py | UniversidadeDeVassouras/labproginter-2020.2-PedroHenriqueVasconcelos-t2 | 1ea78c16ada65da602de38b04ca741711734dc96 | [
"Apache-2.0"
] | null | null | null | application/__init__.py | UniversidadeDeVassouras/labproginter-2020.2-PedroHenriqueVasconcelos-t2 | 1ea78c16ada65da602de38b04ca741711734dc96 | [
"Apache-2.0"
] | null | null | null | application/__init__.py | UniversidadeDeVassouras/labproginter-2020.2-PedroHenriqueVasconcelos-t2 | 1ea78c16ada65da602de38b04ca741711734dc96 | [
"Apache-2.0"
] | null | null | null | from flask import Flask
import os
from application.model.entity.aula import Aula
from application.model.entity.disciplina import Disciplina
app = Flask(__name__, static_folder=os.path.abspath("application/view/static"), template_folder=os.path.abspath("application/view/templates"))
aula1 = Aula(1, "Aula 1", "Introdução ao Linux", "Sistemas Operacionais")
aula2 = Aula(2, "Aula 1", "Introdução ao grid layout e flexbox", "Interface com Usuário")
aula3 = Aula(3, "Aula 2", "Introdução ao Windows", "Sistemas Operacionais")
aula4 = Aula(4, "Aula 2", "Práticas com CSS 3", "Interface com Usuário")
todas_aulas = [aula1, aula2, aula3, aula4]
disciplina1 = Disciplina(1, "Sistemas Operacionais", "Felipe Melo", 4, [aula1, aula3])
disciplina2 = Disciplina(2, "Interface com Usuário", "Tássio Auad", 4, [aula2, aula4])
todas_disciplinas = [disciplina1, disciplina2]
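# Added note (commentary, not in the original file): importing the controllers
# at the bottom is a common Flask pattern; by the time they are imported, `app`
# and the data above already exist, which avoids a circular import when the
# controller modules import from this package.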
from application.controller import home_controller
from application.controller import disciplina_controller | 47.285714 | 143 | 0.752266 | 128 | 993 | 5.757813 | 0.40625 | 0.081411 | 0.077341 | 0.070556 | 0.092266 | 0.092266 | 0 | 0 | 0 | 0 | 0 | 0.033565 | 0.129909 | 993 | 21 | 144 | 47.285714 | 0.819444 | 0 | 0 | 0 | 0 | 0 | 0.322382 | 0.050308 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
74d36a11253d7812de977142377470e5b81ffeb2 | 522 | py | Python | tractseg/experiments/endings_seg.py | jelleman8/TractSeg | 2a42efe6016141f3a32d46c8f509758302c5875b | [
"Apache-2.0"
] | 1 | 2020-06-07T09:36:46.000Z | 2020-06-07T09:36:46.000Z | tractseg/experiments/endings_seg.py | jelleman8/TractSeg | 2a42efe6016141f3a32d46c8f509758302c5875b | [
"Apache-2.0"
] | null | null | null | tractseg/experiments/endings_seg.py | jelleman8/TractSeg | 2a42efe6016141f3a32d46c8f509758302c5875b | [
"Apache-2.0"
] | null | null | null |
from tractseg.experiments.base import Config as BaseConfig
class Config(BaseConfig):
EXPERIMENT_TYPE = "endings_segmentation"
CLASSES = "All_endpoints"
LOSS_WEIGHT = 5
LOSS_WEIGHT_LEN = -1
# BATCH_SIZE = 30 # for all 72 (=144) classes we need smaller batch size because of memory limit
BATCH_SIZE = 28 # Using torch 1.0 batch_size had to be still fit in memory
FEATURES_FILENAME = "12g90g270g_CSD_BX"
NUM_EPOCHS = 150 # easily enough if using plateau LR schedule
| 26.1 | 108 | 0.703065 | 73 | 522 | 4.849315 | 0.794521 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058228 | 0.243295 | 522 | 19 | 109 | 27.473684 | 0.837975 | 0.385057 | 0 | 0 | 0 | 0 | 0.159236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
74d41a0f44d86a7b2cb92a7609690c034a52e6af | 1,844 | py | Python | putput/presets/factory.py | cicorias/putput | 96bde3c2219070bfed1ca76d47a4fe7cd0bc4b44 | [
"MIT"
] | 1 | 2019-01-17T07:45:43.000Z | 2019-01-17T07:45:43.000Z | putput/presets/factory.py | cicorias/putput | 96bde3c2219070bfed1ca76d47a4fe7cd0bc4b44 | [
"MIT"
] | 1 | 2019-01-17T07:47:04.000Z | 2019-01-17T07:47:04.000Z | putput/presets/factory.py | cicorias/putput | 96bde3c2219070bfed1ca76d47a4fe7cd0bc4b44 | [
"MIT"
] | null | null | null | from typing import Callable
from putput.presets import displaCy
from putput.presets import iob2
from putput.presets import luis
from putput.presets import stochastic
def get_preset(preset: str) -> Callable:
"""A factory that gets a 'preset' Callable.
Args:
preset: the preset's name.
Returns:
The return value of calling a preset's 'preset'
function without arguments.
Examples:
>>> from pathlib import Path
>>> from putput.pipeline import Pipeline
>>> pattern_def_path = Path(__file__).parent.parent.parent / 'tests' / 'doc' / 'example_pattern_definition.yml'
>>> dynamic_token_patterns_map = {'ITEM': ('fries',)}
>>> p = Pipeline.from_preset('IOB2',
... pattern_def_path,
... dynamic_token_patterns_map=dynamic_token_patterns_map)
>>> generator = p.flow(disable_progress_bar=True)
>>> for utterance, tokens, groups in generator:
... print(utterance)
... print(tokens)
... print(groups)
... break
can she get fries can she get fries and fries
('B-ADD I-ADD I-ADD', 'B-ITEM', 'B-ADD I-ADD I-ADD', 'B-ITEM', 'B-CONJUNCTION', 'B-ITEM')
('B-ADD_ITEM I-ADD_ITEM I-ADD_ITEM I-ADD_ITEM', 'B-ADD_ITEM I-ADD_ITEM I-ADD_ITEM I-ADD_ITEM',
'B-None', 'B-None')
"""
supported_presets = ('IOB2', 'DISPLACY', 'LUIS', 'STOCHASTIC')
if preset == 'IOB2':
return iob2.preset()
if preset == 'DISPLACY':
return displaCy.preset()
if preset == 'LUIS':
return luis.preset()
if preset == 'STOCHASTIC': # pragma: no cover
return stochastic.preset()
raise ValueError('Unrecognized preset. Please choose from the supported presets: {}'.format(supported_presets))
| 38.416667 | 119 | 0.611714 | 226 | 1,844 | 4.845133 | 0.358407 | 0.03653 | 0.043836 | 0.060274 | 0.094977 | 0.094977 | 0.094977 | 0.094977 | 0.094977 | 0.063014 | 0 | 0.003682 | 0.263557 | 1,844 | 47 | 120 | 39.234043 | 0.802651 | 0.599783 | 0 | 0 | 0 | 0 | 0.184252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
74d42683a6db7d7fd07c3575b8cbc1f38ea019f1 | 1,273 | py | Python | profilesapi/serializers.py | farbodgerami/profilerestapi | 8ccd42b4b6aff2d7a302d0301be08854fb5cce45 | [
"MIT"
] | null | null | null | profilesapi/serializers.py | farbodgerami/profilerestapi | 8ccd42b4b6aff2d7a302d0301be08854fb5cce45 | [
"MIT"
] | null | null | null | profilesapi/serializers.py | farbodgerami/profilerestapi | 8ccd42b4b6aff2d7a302d0301be08854fb5cce45 | [
"MIT"
] | null | null | null |
from rest_framework import serializers
from .models import UserProfile
from profilesapi import models
class helloserializer(serializers.Serializer):
name = serializers.CharField(max_length=15)
class UserProfileserializer(serializers.ModelSerializer):
class Meta:
model = UserProfile
# fields='__all__'
# oonai ke mikhaim namayesh dade beshe
fields = ('id', 'email', 'name', 'password')
extra_kwargs = {
'password': {
'write_only': True,
'style': {'input_type': 'password'}
}
}
def create(self, validated_data):
"""create and return a new user"""
"""it overrides the create functino"""
user = UserProfile.objects.create_user(
email=validated_data['email'],
name=validated_data['name'],
password=validated_data['password']
)
return user
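# Added usage sketch (commented out; not in the original file, and the field
# values are invented):
#   serializer = UserProfileserializer(data={'email': 'ada@example.com',
#                                            'name': 'Ada', 'password': 's3cret'})
#   if serializer.is_valid():
#       user = serializer.save()  # calls create() above; password stays write-only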
class profilefeeditemserializer(serializers.ModelSerializer):
"""serializers profile feed items"""
class Meta:
model = models.profilefeeditem
fields = ('id', 'userprofile', 'statustext', 'createdon')
extra_kwargs = {
'userprofile': {
'read_only': True,
}
}
| 27.673913 | 65 | 0.595444 | 113 | 1,273 | 6.566372 | 0.548673 | 0.070081 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002242 | 0.299293 | 1,273 | 45 | 66 | 28.288889 | 0.829596 | 0.089552 | 0 | 0.129032 | 0 | 0 | 0.116426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0.129032 | 0.096774 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
74d6177d2881f21b08bc1ebf5e156825688392d8 | 1,456 | py | Python | consumers/venv/lib/python3.7/site-packages/faust/types/assignor.py | spencerpomme/Public-Transit-Status-with-Apache-Kafka | 2c85d7daadf4614fe7ce2eabcd13ff87236b1c7e | [
"MIT"
] | null | null | null | consumers/venv/lib/python3.7/site-packages/faust/types/assignor.py | spencerpomme/Public-Transit-Status-with-Apache-Kafka | 2c85d7daadf4614fe7ce2eabcd13ff87236b1c7e | [
"MIT"
] | null | null | null | consumers/venv/lib/python3.7/site-packages/faust/types/assignor.py | spencerpomme/Public-Transit-Status-with-Apache-Kafka | 2c85d7daadf4614fe7ce2eabcd13ff87236b1c7e | [
"MIT"
] | null | null | null | import abc
import typing
from typing import List, MutableMapping, Set
from mode import ServiceT
from yarl import URL
from .tuples import TP
if typing.TYPE_CHECKING:
from .app import AppT as _AppT
else:
class _AppT: ... # noqa
__all__ = [
'TopicToPartitionMap',
'HostToPartitionMap',
'PartitionAssignorT',
'LeaderAssignorT',
]
TopicToPartitionMap = MutableMapping[str, List[int]]
HostToPartitionMap = MutableMapping[str, TopicToPartitionMap]
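# Added illustration (not in the source file): the nested shape these aliases
# describe, with invented topic and host values.
#   TopicToPartitionMap: {"orders": [0, 1, 2]}
#   HostToPartitionMap:  {"http://worker-1:6066": {"orders": [0, 1]}}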
class PartitionAssignorT(abc.ABC):
replicas: int
app: _AppT
@abc.abstractmethod
def __init__(self, app: _AppT, replicas: int = 0) -> None:
...
@abc.abstractmethod
def group_for_topic(self, topic: str) -> int:
...
@abc.abstractmethod
def assigned_standbys(self) -> Set[TP]:
...
@abc.abstractmethod
def assigned_actives(self) -> Set[TP]:
...
@abc.abstractmethod
def is_active(self, tp: TP) -> bool:
...
@abc.abstractmethod
def is_standby(self, tp: TP) -> bool:
...
@abc.abstractmethod
def key_store(self, topic: str, key: bytes) -> URL:
...
@abc.abstractmethod
def table_metadata(self, topic: str) -> HostToPartitionMap:
...
@abc.abstractmethod
def tables_metadata(self) -> HostToPartitionMap:
...
class LeaderAssignorT(ServiceT):
app: _AppT
@abc.abstractmethod
def is_leader(self) -> bool:
...
| 19.413333 | 63 | 0.635989 | 152 | 1,456 | 5.934211 | 0.348684 | 0.18847 | 0.221729 | 0.073171 | 0.195122 | 0.135255 | 0.070953 | 0 | 0 | 0 | 0 | 0.000914 | 0.248626 | 1,456 | 74 | 64 | 19.675676 | 0.823583 | 0.002747 | 0 | 0.415094 | 0 | 0 | 0.048276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.188679 | false | 0 | 0.132075 | 0 | 0.433962 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74d87e0b37acbf34d04b4a8b0edf22298797772c | 8,403 | py | Python | tests/utils/test_events.py | nox237/CTFd | ff6e093fa6bf23b526ecddf9271195b429240ff4 | [
"Apache-2.0"
] | 3,592 | 2017-03-12T19:44:07.000Z | 2022-03-30T16:03:33.000Z | tests/utils/test_events.py | nox237/CTFd | ff6e093fa6bf23b526ecddf9271195b429240ff4 | [
"Apache-2.0"
] | 1,648 | 2017-03-12T23:44:34.000Z | 2022-03-31T15:28:38.000Z | tests/utils/test_events.py | nox237/CTFd | ff6e093fa6bf23b526ecddf9271195b429240ff4 | [
"Apache-2.0"
] | 1,736 | 2017-03-13T14:01:28.000Z | 2022-03-31T08:14:24.000Z | from collections import defaultdict
from queue import Queue
from unittest.mock import patch
from redis.exceptions import ConnectionError
from CTFd.config import TestingConfig
from CTFd.utils.events import EventManager, RedisEventManager, ServerSentEvent
from tests.helpers import create_ctfd, destroy_ctfd, login_as_user, register_user
def test_event_manager_installed():
"""Test that EventManager is installed on the Flask app"""
app = create_ctfd()
assert type(app.events_manager) == EventManager
destroy_ctfd(app)
def test_event_manager_subscription():
"""Test that EventManager subscribing works"""
with patch.object(Queue, "get") as fake_queue:
saved_data = {
"user_id": None,
"title": "asdf",
"content": "asdf",
"team_id": None,
"user": None,
"team": None,
"date": "2019-01-28T01:20:46.017649+00:00",
"id": 10,
}
saved_event = {"type": "notification", "data": saved_data}
fake_queue.return_value = saved_event
event_manager = EventManager()
events = event_manager.subscribe()
message = next(events)
assert isinstance(message, ServerSentEvent)
assert message.to_dict() == {"data": "", "type": "ping"}
assert message.__str__().startswith("event:ping")
assert len(event_manager.clients) == 1
message = next(events)
assert isinstance(message, ServerSentEvent)
assert message.to_dict() == saved_event
assert message.__str__().startswith("event:notification\ndata:")
assert len(event_manager.clients) == 1
def test_event_manager_publish():
"""Test that EventManager publishing to clients works"""
saved_data = {
"user_id": None,
"title": "asdf",
"content": "asdf",
"team_id": None,
"user": None,
"team": None,
"date": "2019-01-28T01:20:46.017649+00:00",
"id": 10,
}
event_manager = EventManager()
q = defaultdict(Queue)
event_manager.clients[id(q)] = q
event_manager.publish(data=saved_data, type="notification", channel="ctf")
event = event_manager.clients[id(q)]["ctf"].get()
event = ServerSentEvent(**event)
assert event.data == saved_data
def test_event_endpoint_is_event_stream():
"""Test that the /events endpoint is text/event-stream"""
app = create_ctfd()
with patch.object(Queue, "get") as fake_queue:
saved_data = {
"user_id": None,
"title": "asdf",
"content": "asdf",
"team_id": None,
"user": None,
"team": None,
"date": "2019-01-28T01:20:46.017649+00:00",
"id": 10,
}
saved_event = {"type": "notification", "data": saved_data}
fake_queue.return_value = saved_event
with app.app_context():
register_user(app)
with login_as_user(app) as client:
r = client.get("/events")
assert "text/event-stream" in r.headers["Content-Type"]
destroy_ctfd(app)
def test_redis_event_manager_installed():
"""Test that RedisEventManager is installed on the Flask app"""
class RedisConfig(TestingConfig):
REDIS_URL = "redis://localhost:6379/1"
CACHE_REDIS_URL = "redis://localhost:6379/1"
CACHE_TYPE = "redis"
try:
app = create_ctfd(config=RedisConfig)
except ConnectionError:
print("Failed to connect to redis. Skipping test.")
else:
with app.app_context():
assert isinstance(app.events_manager, RedisEventManager)
destroy_ctfd(app)
def test_redis_event_manager_subscription():
"""Test that RedisEventManager subscribing works."""
class RedisConfig(TestingConfig):
REDIS_URL = "redis://localhost:6379/2"
CACHE_REDIS_URL = "redis://localhost:6379/2"
CACHE_TYPE = "redis"
try:
app = create_ctfd(config=RedisConfig)
except ConnectionError:
print("Failed to connect to redis. Skipping test.")
else:
with app.app_context():
saved_data = {
"user_id": None,
"title": "asdf",
"content": "asdf",
"team_id": None,
"user": None,
"team": None,
"date": "2019-01-28T01:20:46.017649+00:00",
"id": 10,
}
saved_event = {"type": "notification", "data": saved_data}
with patch.object(Queue, "get") as fake_queue:
fake_queue.return_value = saved_event
event_manager = RedisEventManager()
events = event_manager.subscribe()
message = next(events)
assert isinstance(message, ServerSentEvent)
assert message.to_dict() == {"data": "", "type": "ping"}
assert message.__str__().startswith("event:ping")
message = next(events)
assert isinstance(message, ServerSentEvent)
assert message.to_dict() == saved_event
assert message.__str__().startswith("event:notification\ndata:")
destroy_ctfd(app)
def test_redis_event_manager_publish():
"""Test that RedisEventManager publishing to clients works."""
class RedisConfig(TestingConfig):
REDIS_URL = "redis://localhost:6379/3"
CACHE_REDIS_URL = "redis://localhost:6379/3"
CACHE_TYPE = "redis"
try:
app = create_ctfd(config=RedisConfig)
except ConnectionError:
print("Failed to connect to redis. Skipping test.")
else:
with app.app_context():
saved_data = {
"user_id": None,
"title": "asdf",
"content": "asdf",
"team_id": None,
"user": None,
"team": None,
"date": "2019-01-28T01:20:46.017649+00:00",
"id": 10,
}
event_manager = RedisEventManager()
event_manager.publish(data=saved_data, type="notification", channel="ctf")
destroy_ctfd(app)
def test_redis_event_manager_listen():
"""Test that RedisEventManager listening pubsub works."""
# This test is not currently working properly
# This test is sort of incomplete b/c we aren't also subscribing
# I wasn't able to get listening and subscribing to work at the same time
# But the code does work under gunicorn and serve.py
try:
# import importlib
# from gevent.monkey import patch_time, patch_socket
# from gevent import Timeout
# patch_time()
# patch_socket()
class RedisConfig(TestingConfig):
REDIS_URL = "redis://localhost:6379/4"
CACHE_REDIS_URL = "redis://localhost:6379/4"
CACHE_TYPE = "redis"
try:
app = create_ctfd(config=RedisConfig)
except ConnectionError:
print("Failed to connect to redis. Skipping test.")
else:
with app.app_context():
# saved_event = {
# "data": {
# "team_id": None,
# "user_id": None,
# "content": "asdf",
# "title": "asdf",
# "id": 1,
# "team": None,
# "user": None,
# "date": "2020-08-31T23:57:27.193081+00:00",
# "type": "toast",
# "sound": None,
# },
# "type": "notification",
# }
event_manager = RedisEventManager()
# def disable_retry(f, *args, **kwargs):
# return f()
# with patch("tenacity.retry", side_effect=disable_retry):
# with Timeout(10):
# event_manager.listen()
event_manager.listen()
# event_manager.publish(
# data=saved_event["data"], type="notification", channel="ctf"
# )
destroy_ctfd(app)
finally:
pass
# import socket
# import time
# importlib.reload(socket)
# importlib.reload(time)
| 33.478088 | 86 | 0.558848 | 873 | 8,403 | 5.198167 | 0.189003 | 0.06082 | 0.022918 | 0.038784 | 0.655134 | 0.595857 | 0.572499 | 0.543411 | 0.456589 | 0.430145 | 0 | 0.035159 | 0.326431 | 8,403 | 250 | 87 | 33.612 | 0.766608 | 0.169106 | 0 | 0.745455 | 0 | 0 | 0.146539 | 0.05821 | 0 | 0 | 0 | 0 | 0.109091 | 1 | 0.048485 | false | 0.006061 | 0.042424 | 0 | 0.187879 | 0.024242 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74dacdf39663c95dee29152f69fb689022586231 | 940 | py | Python | graphene_django_jwt/schema/middleware.py | Speedy1991/graphene-django-jwt | d5a09785fdda31328e6a6dbdbbdf3436c9275435 | [
"MIT"
] | null | null | null | graphene_django_jwt/schema/middleware.py | Speedy1991/graphene-django-jwt | d5a09785fdda31328e6a6dbdbbdf3436c9275435 | [
"MIT"
] | null | null | null | graphene_django_jwt/schema/middleware.py | Speedy1991/graphene-django-jwt | d5a09785fdda31328e6a6dbdbbdf3436c9275435 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AnonymousUser
from graphene_django_jwt.blacklist import Blacklist
from graphene_django_jwt.shortcuts import get_user_by_token
from graphene_django_jwt.utils import get_credentials, get_payload
def _load_user(request):
token = get_credentials(request)
if token is not None:
refresh_token = get_payload(token)['refresh_token']
if Blacklist.is_blacklisted(refresh_token):
return None
return get_user_by_token(token)
class JSONWebTokenMiddleware:
def __init__(self, *args, **kwargs):
self._skip = False
def resolve(self, next, root, info, **kwargs):
if self._skip:
return next(root, info, **kwargs)
if not info.context.user.is_authenticated:
user = _load_user(info.context)
info.context.user = user or AnonymousUser()
self._skip = True
return next(root, info, **kwargs)
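# Added note (commentary, not in the original file): graphene invokes `resolve`
# for every field, so once a user has been attached the middleware sets `_skip`
# and later field resolutions pass straight through without re-reading the token.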
| 31.333333 | 66 | 0.695745 | 119 | 940 | 5.226891 | 0.369748 | 0.057878 | 0.086817 | 0.101286 | 0.11254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223404 | 940 | 29 | 67 | 32.413793 | 0.852055 | 0 | 0 | 0.090909 | 0 | 0 | 0.01383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
74edb93d9d0f36c2269aab0e11e28b72cbcd34f1 | 6,327 | py | Python | src/pwnbot.py | jtorres-dev/discord_ctfbot | c4063e7b495a7096ff4e43e1c7bf39111f3ab401 | [
"MIT"
] | 2 | 2021-07-18T10:06:09.000Z | 2021-11-10T17:17:16.000Z | src/pwnbot.py | jtorres-dev/discord_ctfbot | c4063e7b495a7096ff4e43e1c7bf39111f3ab401 | [
"MIT"
] | null | null | null | src/pwnbot.py | jtorres-dev/discord_ctfbot | c4063e7b495a7096ff4e43e1c7bf39111f3ab401 | [
"MIT"
] | 1 | 2022-02-16T20:19:31.000Z | 2022-02-16T20:19:31.000Z | import discord
import ctftime
import os
import random
from discord.ext import commands, tasks
from datetime import datetime
# Token generated from https://discord.com/developers/applications
# Keep this private, if exposed generate new one
TOKEN = ''
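# A safer pattern (sketch, not from the original bot): read the token from an
# environment variable so it never lands in version control. DISCORD_TOKEN is
# an assumed variable name.
# TOKEN = os.environ.get('DISCORD_TOKEN', '')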
# Bot channel ID was grabbed from Settings > Appearance > Developer Mode (On). Afterwards, right click on desired channel to copy ID
BOT_CHANNEL = 0
bot = commands.Bot(command_prefix = '!')
# When bot is ready
@bot.event
async def on_ready():
print('Starting loop task')
update_channel.start()
print('pwnbot is now ready for commands.')
@bot.event
async def on_command_error(ctx, error):
if isinstance(error, commands.errors.CommandNotFound):
await ctx.send(':robot: *That command does not exist.\nTry:* `!help`')
return
raise error
# removes the default help command
bot.remove_command('help')
# context is passed in automatically
@bot.command()
async def help(ctx):
await ctx.send(
'```\n' +
'---------------------------------- Help ----------------------------------\n\n' +
'Usage: !<command>\n\n' +
'Event Commands:\n' +
' * !events - Displays ongoing ctf events within the week.\n' +
' * !events all - Displays ctf events for the past and next week.\n' +
' * !events next - Displays upcoming ctf events for next week.\n' +
' * !events past - Displays finished ctf events from the past week.\n\n' +
'Clear Commands:\n' +
' * !clear - Clears the last 20 messages from pwnbot in current channel.\n' +
' * !clear all - Clears all messages from pwnbot in current channel.\n' +
' * !clear last - Clears last message from pwnbot in current channel.\n\n' +
'Util Commands:\n' +
' * !ping - Checks the latency for pwnbot with date/time\n\n' +
'Misc Commands:\n' +
' * !celebrate - Celebration!!\n' +
' * !facepalm - Sometimes you just have to facepalm ...\n\n' +
'--------------------------------------------------------------------------\n' +
'```'
)
@bot.command()
async def events(ctx, arg=None):
embed_msgs = []
current_time = int(datetime.now().timestamp())
SEVEN_DAYS = ctftime.days_to_secs(7)
if arg == 'all':
start = current_time - SEVEN_DAYS
finish = current_time + SEVEN_DAYS
        # checks previous json events to see whether they match the newly fetched events
embed_msgs = ctftime.get_events(start, finish)
# if there are new events, embed the new events and send to current channel
if len(embed_msgs) == 0:
await ctx.send(':robot: *There are no events happening from last week to next week.*')
return
elif arg == 'next':
start = current_time
finish = current_time + SEVEN_DAYS
embed_msgs = ctftime.get_events(start, finish, status='upcoming')
if len(embed_msgs) == 0:
await ctx.send(':robot: *There are no upcoming events next week.*')
return
elif arg == 'past':
start = current_time - SEVEN_DAYS
finish = current_time
embed_msgs = ctftime.get_events(start, finish, status='finished')
if len(embed_msgs) == 0:
await ctx.send(':robot: *There are no finished events from last week.*')
return
else:
start = current_time - SEVEN_DAYS
finish = current_time + SEVEN_DAYS
embed_msgs = ctftime.get_events(start, finish, status='update')
if len(embed_msgs) == 0:
await ctx.send(':robot: **There are no ongoing events.**')
return
for embed in embed_msgs:
await ctx.send(embed=embed)
@bot.command()
async def clear(ctx, arg=None):
    if arg == 'all':
        await ctx.channel.purge(limit=200, check=is_bot)
    elif arg == 'last':
        await ctx.channel.purge(limit=1, check=is_bot)
    else:
        await ctx.channel.purge(limit=20, check=is_bot)
@bot.command()
async def ping(ctx):
await ctx.send(f"Pong!\n[**{round(bot.latency * 1000)}ms**]: *Current date/time: {datetime.now()}*")
@bot.command()
async def celebrate(ctx):
await ctx.send('\o/ :confetti_ball: :tada:')
@bot.command()
async def facepalm(ctx):
await ctx.send(':man_facepalming:')
@bot.command()
async def pwnbot(ctx):
#-5 removes #0000 at the end of username for discord
user = str(ctx.author)[:-5]
responses = [
f"*Oh my! You caught me by surprise! How can I help, {user}?*",
f"**BRUTEFORCE!**",
f"*get pwned {user}* :computer:",
f"*You must be bored. Check out: `!events` for current ctf events :robot:*",
f"*pwnbot at your service!*",
f"!{user}",
f"*-thinks of a quirky comment-*",
f"*What do you want?*",
f"*-currently sleeping-*",
f"*At your service, {user}!*",
f"*'UNO is the best'* -pwnbot 2020",
f":robot: ||*NTk2Zjc1MjA2ZDc1NzM3NDIwNjI2NTIwNzY2NTcyNzkyMDYzNmM2NTc2NjU3MjJlMjA0ZDc5MjA2ZTYxNmQ2NTIwNjk3MzIwNzA3NzZlNjI2Zjc0MjEyMDNhMjkK*||",
f"*Beware of this command!! :robot:*",
f"*Ah yes, some human interaction. How can I assist you?*"
]
await ctx.send(random.choice(responses))
# checks whether a message is from the bot or a command to the bot. helper for the clear command
def is_bot(msg):
    return msg.author == bot.user or msg.content.startswith('!')
def diff_events(curr, prev):
if len(curr) != len(prev):
return True
for curr_event, prev_event in zip(curr, prev):
if curr_event.title != prev_event.title:
return True
return False
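# Example of the comparison above (sketch; Event is a stand-in for the objects
# returned by ctftime.get_events(), which only need a .title attribute):
#
# from collections import namedtuple
# Event = namedtuple('Event', 'title')
# diff_events([Event('CTF A')], [Event('CTF A')])  # -> False, nothing changed
# diff_events([Event('CTF A')], [Event('CTF B')])  # -> True, titles differ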
# sends an update to #bot-channel with the new events embedded. if the events are
# the same as the previous update, don't post again. checks every 30 minutes for new content
prev_update = ''
bot_msg = ''
@tasks.loop(minutes=30)
async def update_channel():
channel = bot.get_channel(BOT_CHANNEL)
SEVEN_DAYS = ctftime.days_to_secs(7)
current_time = int(datetime.now().timestamp())
start = current_time - SEVEN_DAYS
finish = current_time + SEVEN_DAYS
curr_events = ctftime.get_events(start, finish, status='update')
# prev_events is used to self check bot for new events
global prev_update, bot_msg
if prev_update == '' or diff_events(curr_events, prev_update):
prev_update = curr_events
if len(curr_events) == 0 and bot_msg == '':
bot_msg = ':robot: *There are no ongoing/upcoming events. I will update this channel when I see new events.*'
await channel.send(bot_msg)
    if curr_events is not None:
for embed in curr_events:
await channel.send(embed=embed)
bot_msg = ''
bot.run(TOKEN)
| 29.84434 | 146 | 0.657974 | 893 | 6,327 | 4.571109 | 0.262038 | 0.027438 | 0.032337 | 0.039196 | 0.246693 | 0.202107 | 0.178834 | 0.145762 | 0.105586 | 0.105586 | 0 | 0.010433 | 0.197092 | 6,327 | 211 | 147 | 29.985782 | 0.79311 | 0.127233 | 0 | 0.260274 | 1 | 0.013699 | 0.367781 | 0.055576 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.041096 | 0.006849 | 0.116438 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74f0b6c0862dce65538cbea19beab8636aa3411a | 414 | py | Python | store/urls.py | hardik1410/ShopEase | 8d941cad662f8ed4bddab43f1ed486307f6654ff | [
"MIT"
] | null | null | null | store/urls.py | hardik1410/ShopEase | 8d941cad662f8ed4bddab43f1ed486307f6654ff | [
"MIT"
] | null | null | null | store/urls.py | hardik1410/ShopEase | 8d941cad662f8ed4bddab43f1ed486307f6654ff | [
"MIT"
] | 4 | 2021-08-14T19:03:20.000Z | 2022-03-31T20:58:05.000Z | from django.urls import path
from django.conf.urls import url
from store import views
from .views import getStore, addStore, updateStore, deleteStore
urlpatterns = [
url(r'getStore/', views.getStore),
url(r'addStore/', views.addStore),
url(r'updateStore/', views.updateStore),
url(r'deleteStore/', views.deleteStore),
url(r'getStoreByOwnerId/(?P<ownerId>[0-9]+)$', views.getStoreByOwnerId),
]
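# On Django 2.0+ the same table could be written with path()/re_path(), which
# also anchor the patterns -- the url() entries above match anywhere in the
# requested path. A sketch, not part of the original module:
#
# from django.urls import path, re_path
#
# urlpatterns = [
#     path('getStore/', views.getStore),
#     path('addStore/', views.addStore),
#     path('updateStore/', views.updateStore),
#     path('deleteStore/', views.deleteStore),
#     re_path(r'^getStoreByOwnerId/(?P<ownerId>[0-9]+)$', views.getStoreByOwnerId),
# ]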
| 31.846154 | 76 | 0.724638 | 52 | 414 | 5.769231 | 0.384615 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005556 | 0.130435 | 414 | 12 | 77 | 34.5 | 0.827778 | 0 | 0 | 0 | 0 | 0 | 0.193237 | 0.091787 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
74f1d9921e9f47ecc63831f5691a1c67010b6be9 | 1,727 | py | Python | wwwhero/migrations/0006_auto_20210109_1311.py | IharSha/build_a_hero | 4a0f0aa701c205d04edd6bc801707a73bcc210f2 | [
"BSD-3-Clause"
] | null | null | null | wwwhero/migrations/0006_auto_20210109_1311.py | IharSha/build_a_hero | 4a0f0aa701c205d04edd6bc801707a73bcc210f2 | [
"BSD-3-Clause"
] | 2 | 2021-01-08T11:53:33.000Z | 2021-09-23T07:04:20.000Z | wwwhero/migrations/0006_auto_20210109_1311.py | IharSha/build_a_hero | 4a0f0aa701c205d04edd6bc801707a73bcc210f2 | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 3.1.4 on 2021-01-09 13:11
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('wwwhero', '0005_auto_20210106_1308'),
]
operations = [
migrations.CreateModel(
name='LocationType',
fields=[
('name', models.CharField(max_length=32, primary_key=True, serialize=False)),
],
),
migrations.AlterField(
model_name='characterselection',
name='character',
field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='wwwhero.character'),
),
migrations.CreateModel(
name='Location',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=64, unique=True)),
('image', models.ImageField(blank=True, upload_to='bg/location')),
('min_level', models.PositiveSmallIntegerField(default=1)),
('is_active', models.BooleanField(default=False)),
('type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wwwhero.locationtype')),
],
),
migrations.CreateModel(
name='CharacterLocation',
fields=[
('character', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, serialize=False, to='wwwhero.character')),
('location', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wwwhero.location')),
],
),
]
| 39.25 | 156 | 0.599884 | 167 | 1,727 | 6.095808 | 0.42515 | 0.047151 | 0.068762 | 0.108055 | 0.378193 | 0.240668 | 0.240668 | 0.240668 | 0.240668 | 0.121807 | 0 | 0.028391 | 0.265779 | 1,727 | 43 | 157 | 40.162791 | 0.774448 | 0.026057 | 0 | 0.351351 | 1 | 0 | 0.1375 | 0.01369 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.054054 | 0 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74f899e21e839972b7e9857acc8c5b5a51ab3347 | 1,824 | py | Python | 2009/scientific-computing/prax3/src/RLPrax2_2.py | rla/old-code | 06aa69c3adef8434992410687d466dc42779e57b | [
"Ruby",
"MIT"
] | 2 | 2015-11-08T10:01:47.000Z | 2020-03-10T00:00:58.000Z | 2009/scientific-computing/prax3/src/RLPrax2_2.py | rla/old-code | 06aa69c3adef8434992410687d466dc42779e57b | [
"Ruby",
"MIT"
] | null | null | null | 2009/scientific-computing/prax3/src/RLPrax2_2.py | rla/old-code | 06aa69c3adef8434992410687d466dc42779e57b | [
"Ruby",
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Raivo Laanemets
# rlaanemt@ut.ee
import sys
from Taring import Taring
# Helper class for simulating the four-dice game.
class GameSimulator:
def __init__(self):
        # Prepare the dice.
self.t1 = Taring()
self.t2 = Taring()
self.t3 = Taring()
self.t4 = Taring()
    # Rolls all four dice and returns the sum of the rolls.
    def viskaTaringuid(self):
        return self.t1.viska() + self.t2.viska() + self.t3.viska() + self.t4.viska()
    # Runs the simulation the given number of times.
    # Returns a float expressing the fraction of games
    # in which the desired result occurred.
def simulate(self, mange):
voite = 0
for i in range(0, mange):
if self.viskaTaringuid() < 9:
voite += 1
return voite/float(mange)
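# Sanity check for the simulation (sketch, assuming Taring models a standard
# six-sided die): exactly 70 of the 6**4 = 1296 outcomes sum to less than 9,
# so the win probability is 70/1296 ~ 0.054, and the expected return on a 1.0
# stake with a 10.0 prize is ~0.54.
#
# from itertools import product
# wins = sum(1 for rolls in product(range(1, 7), repeat=4) if sum(rolls) < 9)
# print wins / float(6 ** 4)  # -> 0.0540123...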
# Main program.
# Takes the number of games to simulate as an argument
# and prints whether the game is worth playing or not.
def main(argv):
manguMaksumus = 1.0
voiduPreemia = 10.0
mange = int(argv[0])
simulator = GameSimulator()
voiduTn = simulator.simulate(mange)
maksumusKokku = mange * manguMaksumus
voiduPreemiaKokku = mange * voiduPreemia
voiduPreemiaSimulatsioon = voiduPreemiaKokku * voiduTn
print "Mängude arv: " + str(mange)
print "Võidu tõenäosus: " + str(voiduTn)
print "Mängude maksumus kokku: " + str(maksumusKokku)
print "Maksimaalne võidupreemia (kõik mängud võidetud): " + str(voiduPreemiaKokku)
print "Simulatsiooni võidupreemia: " + str(voiduPreemiaSimulatsioon)
if maksumusKokku < voiduPreemiaSimulatsioon:
print "Mängu on mõistlik mängida"
else:
print "Mängu ei ole mõistlik mängida"
if __name__ == "__main__":
    main(sys.argv[1:])
| 30.915254 | 86 | 0.660636 | 195 | 1,824 | 6.117949 | 0.533333 | 0.025147 | 0.025147 | 0.036882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014545 | 0.246162 | 1,824 | 59 | 87 | 30.915254 | 0.853091 | 0.243421 | 0 | 0 | 0 | 0 | 0.141185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.194444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d027dd1899fc456d6f79fdb25756da0442fe0e9 | 19,594 | py | Python | tests/test_admin.py | douglatornell/randopony-tetra | 258b0a8223b2a820d66b1a4d98eb0a54e07afa25 | [
"BSD-3-Clause"
] | 1 | 2020-06-30T18:56:22.000Z | 2020-06-30T18:56:22.000Z | tests/test_admin.py | douglatornell/randopony-tetra | 258b0a8223b2a820d66b1a4d98eb0a54e07afa25 | [
"BSD-3-Clause"
] | null | null | null | tests/test_admin.py | douglatornell/randopony-tetra | 258b0a8223b2a820d66b1a4d98eb0a54e07afa25 | [
"BSD-3-Clause"
] | null | null | null | """Tests for RandoPony admin views and functionality.
"""
from datetime import datetime
import unittest
from unittest.mock import patch
from pyramid import testing
from pyramid_mailer import get_mailer
from sqlalchemy import create_engine
from randopony.models.meta import (
Base,
DBSession,
)
class TestCoreAdminViews(unittest.TestCase):
"""Unit tests for core admin interface views.
"""
def _get_target_class(self):
from randopony.views.admin.core import AdminViews
return AdminViews
def _make_one(self, *args, **kwargs):
return self._get_target_class()(*args, **kwargs)
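    # _get_target_class/_make_one are the deferred-import factory idiom common
    # in Pyramid test suites: importing the class under test inside the test
    # turns an import error into a test failure instead of a collection error.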
def setUp(self):
self.config = testing.setUp()
engine = create_engine('sqlite://')
DBSession.configure(bind=engine)
Base.metadata.create_all(engine)
def tearDown(self):
DBSession.remove()
testing.tearDown()
def test_home(self):
"""admin home view has expected template variables
"""
from randopony import __pkg_metadata__ as version
request = testing.DummyRequest()
admin = self._make_one(request)
tmpl_vars = admin.home()
self.assertEqual(
tmpl_vars, {'version': version.number + version.release})
def test_wranglers_list(self):
"""admin wranglers view has expected template variables
"""
from randopony import __pkg_metadata__ as version
request = testing.DummyRequest()
request.matchdict['list'] = 'wranglers'
admin = self._make_one(request)
tmpl_vars = admin.items_list()
self.assertEqual(
tmpl_vars['version'], version.number + version.release)
self.assertEqual(tmpl_vars['list'], 'wranglers')
self.assertEqual(tmpl_vars['list_title'], 'Pony Wranglers')
self.assertEqual(tmpl_vars['action'], 'edit')
def test_wranglers_list_order(self):
"""admin wranglers list is alpha ordered by email
"""
from randopony.models import Administrator
admin1 = Administrator(email='tom@example.com', password_hash='hash')
admin2 = Administrator(email='harry@example.com', password_hash='hash')
DBSession.add_all((admin1, admin2))
request = testing.DummyRequest()
request.matchdict['list'] = 'wranglers'
admin = self._make_one(request)
tmpl_vars = admin.items_list()
admins = [a.email for a in tmpl_vars['items'].all()]
self.assertEqual(
admins, 'harry@example.com tom@example.com'.split())
def test_delete_cancel(self):
"""admin delete cancel leaves item in database
"""
from randopony.models import Administrator
admin = Administrator(email='tom@example.com', password_hash='hash')
DBSession.add(admin)
self.config.add_route('admin.list', '/admin/{list}/')
request = testing.DummyRequest(post={'cancel': 'cancel'})
request.matchdict['list'] = 'wranglers'
request.matchdict['item'] = 'tom@example.com'
admin = self._make_one(request)
admin.delete()
wrangler = DBSession.query(Administrator).first()
self.assertEqual(wrangler.email, 'tom@example.com')
def test_delete_wrangler_confirmation(self):
"""admin delete confirmation view for wrangler has exp template vars
"""
from randopony import __pkg_metadata__ as version
self.config.add_route('admin.list', '/admin/{list}/')
request = testing.DummyRequest()
request.matchdict['list'] = 'wranglers'
request.matchdict['item'] = 'tom@example.com'
admin = self._make_one(request)
tmpl_vars = admin.delete()
self.assertEqual(
tmpl_vars,
{
'version': version.number + version.release,
'list': 'wranglers',
'item': 'tom@example.com',
'item_type': 'administrator',
})
def test_delete_wrangler(self):
"""admin delete for wrangler deletes item from database
"""
from sqlalchemy.orm.exc import NoResultFound
from randopony.models import Administrator
admin = Administrator(email='tom@example.com', password_hash='hash')
DBSession.add(admin)
self.config.add_route('admin.list', '/admin/{list}/')
request = testing.DummyRequest(post={'delete': 'delete'})
request.matchdict['list'] = 'wranglers'
request.matchdict['item'] = 'tom@example.com'
admin = self._make_one(request)
admin.delete()
query = DBSession.query(Administrator)
with self.assertRaises(NoResultFound):
query.filter_by(email='tom@example.com').one()
def test_delete_brevet(self):
"""admin delete for brevet deletes item from database
"""
from sqlalchemy.orm.exc import NoResultFound
from randopony.models import core
from randopony.models import Brevet
brevet = Brevet(
region='LM',
distance=200,
date_time=datetime(2012, 11, 11, 7, 0, 0),
route_name='11th Hour',
start_locn='Bean Around the World Coffee, Lonsdale Quay, '
'123 Carrie Cates Ct, North Vancouver',
organizer_email='tracy@example.com',
)
DBSession.add(brevet)
self.config.add_route('admin.list', '/admin/{list}/')
request = testing.DummyRequest(post={'delete': 'delete'})
request.matchdict['list'] = 'brevets'
request.matchdict['item'] = str(brevet)
with patch.object(core, 'datetime') as mock_datetime:
mock_datetime.today.return_value = datetime(2012, 11, 1, 12, 55, 42)
admin = self._make_one(request)
admin.delete()
with self.assertRaises(NoResultFound):
Brevet.get_current().one()
class TestEmailToOrganizer(unittest.TestCase):
"""Unit tests for email_to_organizer admin function re: event URLs.
"""
def _call_email_to_organizer(self, *args, **kwargs):
from randopony.views.admin.core import email_to_organizer
return email_to_organizer(*args, **kwargs)
def setUp(self):
from randopony.models import EmailAddress
self.config = testing.setUp(
settings={
'mako.directories': 'randopony:templates',
})
self.config.include('pyramid_mailer.testing')
self.config.include('pyramid_mako')
self.config.add_route(
'admin.populaires.view', '/admin/brevet/{item}')
self.config.add_route(
'brevet', '/brevets/{region}/{distance}/{date}')
self.config.add_route(
'brevet.rider_emails',
'/brevets/{region}/{distance}/{date}/rider_emails/{uuid}')
engine = create_engine('sqlite://')
DBSession.configure(bind=engine)
Base.metadata.create_all(engine)
from_randopony = EmailAddress(
key='from_randopony',
email='randopony@randonneurs.bc.ca',
)
admin_email = EmailAddress(
key='admin_email',
email='djl@douglatornell.ca',
)
DBSession.add_all((from_randopony, admin_email))
def tearDown(self):
DBSession.remove()
testing.tearDown()
def test_email_to_organizer_catches_missing_google_doc_id(self):
"""email_to_organizer return error flash if google_doc_id not set
"""
from randopony.models import Brevet
brevet = Brevet(
region='VI',
distance=200,
date_time=datetime(2013, 3, 3, 7, 0),
route_name='Chilly 200',
start_locn='Chez Croy, 3131 Millgrove St, Victoria',
organizer_email='mcroy@example.com',
registration_end=datetime(2013, 3, 2, 12, 0),
)
DBSession.add(brevet)
request = testing.DummyRequest()
request.matchdict.update({
'region': 'VI',
'distance': '200',
'date': '03Mar2013',
})
date = '03Mar2013'
event_page_url = request.route_url(
'brevet', region=brevet.region, distance=brevet.distance,
date=date)
rider_emails_url = request.route_url(
'brevet.rider_emails', region=brevet.region,
distance=brevet.distance, date=date, uuid=brevet.uuid)
flash = self._call_email_to_organizer(
request, brevet, event_page_url, rider_emails_url)
self.assertEqual(
flash, [
'error',
'Google Drive rider list must be created before email to '
'organizer(s) can be sent'
])
def test_email_to_organizer_sends_email(self):
"""email_to_organizer sends message & sets expected flash message
"""
from randopony.models import Brevet
brevet = Brevet(
region='VI',
distance=200,
date_time=datetime(2013, 3, 3, 7, 0),
route_name='Chilly 200',
start_locn='Chez Croy, 3131 Millgrove St, Victoria',
organizer_email='mcroy@example.com',
registration_end=datetime(2013, 3, 2, 12, 0),
google_doc_id='spreadsheet:1234',
)
DBSession.add(brevet)
request = testing.DummyRequest()
request.matchdict.update({
'region': 'VI',
'distance': '200',
'date': '03Mar2013',
})
date = '03Mar2013'
event_page_url = request.route_url(
'brevet', region=brevet.region, distance=brevet.distance,
date=date)
rider_emails_url = request.route_url(
'brevet.rider_emails', region=brevet.region,
distance=brevet.distance, date=date, uuid=brevet.uuid)
mailer = get_mailer(request)
flash = self._call_email_to_organizer(
request, brevet, event_page_url, rider_emails_url)
self.assertEqual(len(mailer.outbox), 1)
self.assertEqual(
flash,
['success', 'Email sent to VI200 03Mar2013 organizer(s)'])
def test_email_to_organizer_message(self):
"""email_to_organizer message has expected content
"""
from randopony.models import (
EmailAddress,
Brevet,
)
brevet = Brevet(
region='VI',
distance=200,
date_time=datetime(2013, 3, 3, 7, 0),
route_name='Chilly 200',
start_locn='Chez Croy, 3131 Millgrove St, Victoria',
organizer_email='mcroy@example.com',
registration_end=datetime(2013, 3, 2, 12, 0),
google_doc_id='spreadsheet:123'
)
DBSession.add(brevet)
request = testing.DummyRequest()
request.matchdict.update({
'region': 'VI',
'distance': '200',
'date': '03Mar2013',
})
date = '03Mar2013'
event_page_url = request.route_url(
'brevet', region=brevet.region, distance=brevet.distance,
date=date)
rider_emails_url = request.route_url(
'brevet.rider_emails', region=brevet.region,
distance=brevet.distance, date=date, uuid=brevet.uuid)
mailer = get_mailer(request)
self._call_email_to_organizer(
request, brevet, event_page_url, rider_emails_url)
msg = mailer.outbox[0]
self.assertEqual(msg.subject, 'RandoPony URLs for VI200 03Mar2013')
from_randopony = (
DBSession.query(EmailAddress)
.filter_by(key='from_randopony').first().email)
self.assertEqual(msg.sender, from_randopony)
self.assertEqual(msg.recipients, ['mcroy@example.com'])
self.assertIn(
'The URL is <http://example.com/brevets/VI/200/03Mar2013>.',
msg.body)
self.assertIn(
'rider list URL is <https://spreadsheets.google.com/ccc?key=123>.',
msg.body)
self.assertIn(
'email address list URL is <http://example.com/brevets/'
'VI/200/03Mar2013/rider_emails/'
'ba8e8e00-dd42-5c6c-9b30-b65ce9c8df26>.',
msg.body)
self.assertIn(
'Pre-registration on the pony closes at 12:00 on 2013-03-02',
msg.body)
self.assertIn('send email to <djl@douglatornell.ca>.', msg.body)
def test_email_to_organizer_multi_organizer(self):
"""email to organizer has expected to list for multi-organizer event
"""
from randopony.models import Brevet
brevet = Brevet(
region='VI',
distance=200,
date_time=datetime(2013, 3, 3, 7, 0),
route_name='Chilly 200',
start_locn='Chez Croy, 3131 Millgrove St, Victoria',
organizer_email='mjansson@example.com, mcroy@example.com',
registration_end=datetime(2013, 3, 2, 12, 0),
google_doc_id='spreadsheet:1234'
)
DBSession.add(brevet)
request = testing.DummyRequest()
request.matchdict.update({
'region': 'VI',
'distance': '200',
'date': '03Mar2013',
})
date = '03Mar2013'
event_page_url = request.route_url(
'brevet', region=brevet.region, distance=brevet.distance,
date=date)
rider_emails_url = request.route_url(
'brevet.rider_emails', region=brevet.region,
distance=brevet.distance, date=date, uuid=brevet.uuid)
mailer = get_mailer(request)
self._call_email_to_organizer(
request, brevet, event_page_url, rider_emails_url)
msg = mailer.outbox[0]
self.assertEqual(
msg.recipients, ['mjansson@example.com', 'mcroy@example.com'])
class TestEmailToWebmaster(unittest.TestCase):
"""Unit tests for email_to_webmaster admin function re: event page URL.
"""
def _call_email_to_webmaster(self, *args, **kwargs):
from randopony.views.admin.core import email_to_webmaster
return email_to_webmaster(*args, **kwargs)
def setUp(self):
from randopony.models import EmailAddress
self.config = testing.setUp(
settings={
'mako.directories': 'randopony:templates',
})
self.config.include('pyramid_mailer.testing')
self.config.include('pyramid_mako')
self.config.add_route(
'admin.populaires.view', '/admin/populaire/{item}')
self.config.add_route(
'populaire', '/populaires/{short_name}')
engine = create_engine('sqlite://')
DBSession.configure(bind=engine)
Base.metadata.create_all(engine)
from_randopony = EmailAddress(
key='from_randopony',
email='randopony@randonneurs.bc.ca',
)
club_webmaster = EmailAddress(
key='club_webmaster',
email='webmaster@randonneurs.bc.ca',
)
admin_email = EmailAddress(
key='admin_email',
email='djl@douglatornell.ca',
)
DBSession.add_all((from_randopony, club_webmaster, admin_email))
def tearDown(self):
DBSession.remove()
testing.tearDown()
def test_email_to_webmaster_sends_email(self):
"""email_to_webmaster sends message & sets expected flash message
"""
from randopony.models import Populaire
populaire = Populaire(
event_name='Victoria Populaire',
short_name='VicPop',
distance='50 km, 100 km',
date_time=datetime(2011, 3, 27, 10, 0),
start_locn='University of Victoria, Parking Lot #2 '
'(Gabriola Road, near McKinnon Gym)',
organizer_email='mjansson@example.com',
registration_end=datetime(2011, 3, 24, 12, 0),
entry_form_url='http://www.randonneurs.bc.ca/VicPop/'
'VicPop11_registration.pdf',
google_doc_id='spreadsheet:1234'
)
DBSession.add(populaire)
request = testing.DummyRequest()
request.matchdict['item'] = 'VicPop'
event_page_url = request.route_url(
'populaire', short_name=populaire.short_name)
mailer = get_mailer(request)
flash = self._call_email_to_webmaster(
request, populaire, event_page_url)
self.assertEqual(len(mailer.outbox), 1)
self.assertEqual(
flash,
['success', 'Email with VicPop page URL sent to webmaster'])
def test_email_to_webmaster_message(self):
"""email_to_webmaster message has expected content
"""
from randopony.models import Populaire
populaire = Populaire(
event_name='Victoria Populaire',
short_name='VicPop',
distance='50 km, 100 km',
date_time=datetime(2011, 3, 27, 10, 0),
start_locn='University of Victoria, Parking Lot #2 '
'(Gabriola Road, near McKinnon Gym)',
organizer_email='mjansson@example.com',
registration_end=datetime(2011, 3, 24, 12, 0),
entry_form_url='http://www.randonneurs.bc.ca/VicPop/'
'VicPop11_registration.pdf',
google_doc_id='spreadsheet:1234'
)
DBSession.add(populaire)
request = testing.DummyRequest()
request.matchdict['item'] = 'VicPop'
event_page_url = request.route_url(
'populaire', short_name=populaire.short_name)
mailer = get_mailer(request)
self._call_email_to_webmaster(request, populaire, event_page_url)
msg = mailer.outbox[0]
self.assertEqual(
msg.subject, 'RandoPony Pre-registration page for VicPop')
self.assertEqual(msg.sender, 'randopony@randonneurs.bc.ca')
self.assertEqual(msg.recipients, ['webmaster@randonneurs.bc.ca'])
self.assertIn('page for the VicPop event has been added', msg.body)
self.assertIn(
'The URL is <http://example.com/populaires/VicPop>.', msg.body)
self.assertIn('send email to <djl@douglatornell.ca>.', msg.body)
class TestFinalizeFlashMsg(unittest.TestCase):
"""Unit tests for finalize_flash_msg function.
"""
def _call_finalize_flash_msg(self, *args, **kwargs):
from randopony.views.admin.core import finalize_flash_msg
return finalize_flash_msg(*args, **kwargs)
def test_finalize_flash_msg_error(self):
"""flash 1st element is error when error present in flash list
"""
request = testing.DummyRequest()
self._call_finalize_flash_msg(request, 'success foo error bar'.split())
flash = request.session.pop_flash()
self.assertEqual(flash[0], 'error')
def test_finalize_flash_msg_success(self):
"""flash 1st element is success when error not present in flash list
"""
request = testing.DummyRequest()
self._call_finalize_flash_msg(
            request, 'success foo success bar'.split())
flash = request.session.pop_flash()
self.assertEqual(flash[0], 'success')
def test_finalize_flash_msg_content(self):
"""flash[1:] are msgs w/o error or success elements of flash list
"""
request = testing.DummyRequest()
self._call_finalize_flash_msg(request, 'success foo error bar'.split())
flash = request.session.pop_flash()
self.assertEqual(flash[1:], 'foo bar'.split())
| 39.583838 | 80 | 0.610289 | 2,136 | 19,594 | 5.428371 | 0.138109 | 0.033635 | 0.023458 | 0.028029 | 0.730229 | 0.679603 | 0.672359 | 0.656749 | 0.633549 | 0.602932 | 0 | 0.025324 | 0.278504 | 19,594 | 494 | 81 | 39.663968 | 0.794865 | 0.068235 | 0 | 0.658711 | 0 | 0 | 0.177466 | 0.029917 | 0 | 0 | 0 | 0 | 0.078759 | 1 | 0.064439 | false | 0.009547 | 0.069212 | 0.002387 | 0.155131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d036ce61565c7bf56d00dfde72b453399c3d5d5 | 4,204 | py | Python | src/lib/parsers/parseretinac.py | Project-Prismatica/Prism-Shell | 006d04fdabbe51c4a3fd642e05ba276251f1bba4 | [
"MIT"
] | null | null | null | src/lib/parsers/parseretinac.py | Project-Prismatica/Prism-Shell | 006d04fdabbe51c4a3fd642e05ba276251f1bba4 | [
"MIT"
] | null | null | null | src/lib/parsers/parseretinac.py | Project-Prismatica/Prism-Shell | 006d04fdabbe51c4a3fd642e05ba276251f1bba4 | [
"MIT"
] | 1 | 2018-02-22T02:18:48.000Z | 2018-02-22T02:18:48.000Z | #!/usr/bin/python
# parseretinac.py
#
# By Adrien de Beaupre adriendb@gmail.com | adrien@intru-shun.ca
# Copyright 2011 Intru-Shun.ca Inc.
# v0.09
# 16 October 2011
#
# The current version of these scripts is at: http://dshield.handers.org/adebeaupre/ossams-parser.tgz
#
# Parses retina community version XML output
# http://eeye.com
#
# This file is part of the ossams-parser.
#
# The ossams-parser is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# The ossams-parser is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with the ossams-parser. If not, see <http://www.gnu.org/licenses/>.
#
# parseretina function
def parseretinac(time, os, root, filetoread, db, dbconnection, projectname, projectid, separator):
# Check to see if the document root is 'scanJob', exit if it is not
if root.tag:
if root.tag != "scanJob":
print filetoread, "is not a retina XML report file"
return
retinafile = filetoread.split(separator)
file = retinafile[-1]
filetime = time.ctime(os.path.getmtime(filetoread))
timenow = time.ctime()
db.execute("""
INSERT INTO tooloutput (toolname, filename, OSSAMSVersion, filedate, inputtimestamp, projectname, projectid)
VALUES
('retina', '%s', 0.09, '%s', '%s', '%s', '%s')
""" % (file, filetime, timenow, projectname, projectid)
)
tooloutputnumber = int(db.lastrowid)
print "Processed retina report number:", tooloutputnumber
hostattribs = ['ip', 'netBIOSName', 'netBIOSDomain', 'dnsName', 'mac', 'os']
auditattribs = ['rthID', 'cve', 'cce', 'name', 'description', 'date', 'risk', 'pciLevel', 'cvssScore', 'fixInformation', 'exploit']
hosts = root.findall('hosts/host')
for host in hosts:
hostvalues = {'ip': " ", 'netBIOSName': " ", 'netBIOSDomain': " ", 'dnsName': " ", 'mac': " ", 'os': " "}
auditvalues = {'rthID': " ", 'cve': " ", 'cce': " ", 'name': " ", 'description': " ", 'date': " ", 'risk': " ", 'pciLevel': " ",
'cvssScore': " ", 'fixInformation': " ", 'exploit': " "}
refs = ['cve', 'cce', 'cvssScore', 'pciLevel']
for value in hostattribs:
node = host.find(value)
            if node is not None and node.text:
hostvalues[value] = node.text
db.execute("""
INSERT INTO hosts (tooloutputnumber, ipv4, macaddress, hostname, recon, hostcriticality, hostos)
VALUES
(%s, '%s', '%s', '%s', 1, 0, '%s')
""" % (tooloutputnumber, hostvalues['ip'], hostvalues['mac'], hostvalues['dnsName'], hostvalues['os'])
)
hostnumber = int(db.lastrowid)
print "Processed host:", hostnumber, "IP:", hostvalues['ip']
audits = host.findall('audit')
for audit in audits:
for value in auditattribs:
node = audit.find(value)
                if node is not None and node.text:
auditvalues[value] = node.text
description = auditvalues['description']
encodeddescription = description.encode('utf-8','ignore')
db.execute("""
INSERT INTO vulnerabilities (tooloutputnumber, hostnumber, vulnerabilityid, vulnerabilityname, vulnerabilityrisk,
vulnerabilitydescription, vulnerabilityvalidation, vulnerabilitysolution)
VALUES
('%s', '%s', '%s', '%s', '%s', '%s', 0, '%s')
""" % (tooloutputnumber, hostnumber, auditvalues['rthID'], auditvalues['name'], auditvalues['risk'],
dbconnection.escape_string(encodeddescription), dbconnection.escape_string(auditvalues['fixInformation'])
)
)
vulnnumber = int(db.lastrowid)
for ref in refs:
refvalue = audit.find(ref)
if refvalue.text:
db.execute("""
INSERT INTO refs (tooloutputnumber, hostnumber, vulnerabilitynumber, referencetype, referencevalue )
VALUES
('%s', '%s', '%s', '%s', '%s')
""" % (tooloutputnumber, hostnumber, vulnnumber, refvalue.tag, refvalue.text)
)
return
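# Invocation sketch (hypothetical names; the db cursor, dbconnection handle,
# and project values come from the ossams-parser driver, not from this module):
#
# import os, time
# import xml.etree.ElementTree as ET
# root = ET.parse('retina_report.xml').getroot()
# parseretinac(time, os, root, 'retina_report.xml', db, dbconnection,
#              'myproject', 1, os.sep)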
| 42.897959 | 133 | 0.657469 | 481 | 4,204 | 5.742204 | 0.428274 | 0.010862 | 0.011948 | 0.010138 | 0.171977 | 0.107893 | 0.052136 | 0.052136 | 0.052136 | 0.052136 | 0 | 0.006777 | 0.192674 | 4,204 | 97 | 134 | 43.340206 | 0.807012 | 0.260942 | 0 | 0.184615 | 0 | 0.030769 | 0.395638 | 0.023826 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.046154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d0642d0a2bf80cc847ea773f1cbfa1193d454b9 | 5,103 | py | Python | animate.py | manuelnaranjo/pyrealtimecharter | 72fa4faa132686ba2a401cc7dfff326d36ecc505 | [
"Apache-2.0"
] | null | null | null | animate.py | manuelnaranjo/pyrealtimecharter | 72fa4faa132686ba2a401cc7dfff326d36ecc505 | [
"Apache-2.0"
] | null | null | null | animate.py | manuelnaranjo/pyrealtimecharter | 72fa4faa132686ba2a401cc7dfff326d36ecc505 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Copyright 2010 Naranjo, Manuel Francisco <manuel@aircable.net>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Based on one of the matplotlib examples, this will draw data received from the
serial port in real time.
"""
import os, sys, random, select, termios, fcntl, serial
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from PyQt4 import QtCore, QtGui
import numpy as np
import time
SPAN=60 # last X seconds
def scan():
"""scan for available ports. return a list of tuples (num, name)"""
available = []
for i in range(256):
try:
s = serial.Serial(i)
available.append((i, s.portstr))
s.close() # explicit close 'cause of delayed GC in java
except serial.SerialException:
pass
if os.name == 'posix':
for i in range(256):
try:
s = serial.Serial('/dev/ttyUSB%s' % i)
available.append( (i, s.portstr) )
s.close() # explicit close 'cause of delayed GC in java
except serial.SerialException:
pass
return available
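# Example (sketch): on a Linux host with a single USB-serial adapter, scan()
# would typically return something like [(0, '/dev/ttyUSB0')]; the exact
# entries depend on the machine.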
class BlitQT(FigureCanvas):
def addSerie(self, serie):
ind = len(self.series)+1
plot = self.figure.add_subplot( ind, 1, ind )
plot.autoscale(enable=True, tight=True)
plot.set_ylabel(serie)
plot.set_xlim(left=self.tstart)
plot.grid()
self.series[serie] = {
'plot': plot,
'x':[],
'y':[],
'background': self.copy_from_bbox(plot.bbox),
'chart': plot.plot([],[])[0]
}
i=1
for s in self.series:
if s == serie:
continue
self.figure.delaxes(self.series[s]['plot'])
plot = self.figure.add_subplot( ind, 1, i)
plot.autoscale(enable=True, tight=True)
plot.set_ylabel(s)
plot.grid()
self.series[s]['plot']=plot
self.series[s]['background'] = self.copy_from_bbox(plot.bbox)
self.series[s]['chart']=plot.plot([],[])[0]
i+=1
self.draw()
def addDataPoint(self, serie, value):
if serie not in self.series:
            # unknown series, register it first
self.addSerie(serie)
plot = self.series[serie]
self.lasttime = time.time()
plot['x'].append(self.lasttime)
plot['y'].append(value)
plot['chart'].set_xdata(plot['x'])
plot['chart'].set_ydata(plot['y'])
if len(plot['x']) == 0:
return
m = min(plot['y'])*0.9
M = max(plot['y'])*1.1
if m != M:
plot['plot'].set_ylim(m, M)
def updateSpan(self):
now = time.time()
for s in self.series:
self.series[s]['plot'].set_xlim(now-SPAN, now)
def keyPressEvent(self, event):
key = event.key()
if key == QtCore.Qt.Key_Escape:
            self.close()
return
print key, chr(key)
self.port.write('%s' % chr(key))
def __init__(self, port):
FigureCanvas.__init__(self, Figure())
self.series = dict()
self.draw()
self.tstart = time.time()
self.startTimer(500)
self.port = serial.Serial(port, timeout=0.01)
self.lasttime = 0
self.cnt = 0
        # empty any pending data in the buffer
if len(select.select([self.port,],[],[], 0)[0]) > 0:
self.port.flush()
def timerEvent(self, event):
self.cnt+=1
        # check whether anything arrived on the port
if len(select.select([self.port,],[],[], 0)[0]) == 0:
self.doRedraw()
return
for line in self.port.readlines():
line=line.strip()
if len(line) == 0:
continue
try:
serie, dato = line.split(';')[:2]
dato = float(dato)
self.addDataPoint(serie, dato)
except Exception, err:
pass
self.doRedraw()
def resizeEvent(self, event):
for i in self.series:
self.series[i]['background']=self.copy_from_bbox(self.series[i]['plot'].bbox)
super(BlitQT, self).resizeEvent(event)
def doRedraw(self):
self.updateSpan()
self.draw()
for i in self.series:
pl = self.series[i]
# clear!
            self.restore_region(pl['background'])
# update chart
pl['plot'].draw_artist(pl['chart'])
# plot axis
self.blit(pl['plot'].bbox)
self.draw()
if __name__=='__main__':
i = 0
ports = scan()
if len(ports) == 0 and len(sys.argv) == 1:
print "Couldn't found port, please give me a port name as argument"
sys.exit(0)
if len(sys.argv) == 1:
print "Choose port:"
for num, port in ports:
print "\t %i: %s" % (i, port)
i+=1
port = int(raw_input("enter number: "))
port = ports[port][1]
else:
port = sys.argv[1]
app = QtGui.QApplication(sys.argv)
widget = BlitQT(port)
widget.show()
sys.exit(app.exec_())
| 26.169231 | 82 | 0.619636 | 732 | 5,103 | 4.269126 | 0.337432 | 0.0544 | 0.0192 | 0.02112 | 0.23808 | 0.19264 | 0.17024 | 0.13056 | 0.13056 | 0.08256 | 0 | 0.014381 | 0.236919 | 5,103 | 194 | 83 | 26.304124 | 0.788136 | 0.047031 | 0 | 0.251799 | 0 | 0 | 0.056487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.021583 | 0.043165 | null | null | 0.028777 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d09d5c976fe0b439fd326a0507f29f1c53af7a3 | 4,149 | py | Python | testing/unit_tests/test_loader.py | MotionCorrect/ivadomed | 5e402fe6dc180847ad12bb813ba5b0431d243f04 | [
"MIT"
] | null | null | null | testing/unit_tests/test_loader.py | MotionCorrect/ivadomed | 5e402fe6dc180847ad12bb813ba5b0431d243f04 | [
"MIT"
] | 6 | 2021-03-24T16:23:29.000Z | 2021-04-08T15:22:53.000Z | testing/unit_tests/test_loader.py | MotionCorrect/ivadomed | 5e402fe6dc180847ad12bb813ba5b0431d243f04 | [
"MIT"
] | null | null | null | import os
import pytest
import csv_diff
import logging
import torch
from unit_tests.t_utils import remove_tmp_dir, create_tmp_dir, __data_testing_dir__, __tmp_dir__
from ivadomed.loader import utils as imed_loader_utils
from ivadomed.loader import loader as imed_loader
logger = logging.getLogger(__name__)
def setup_function():
create_tmp_dir()
@pytest.mark.parametrize('loader_parameters', [{
"path_data": [os.path.join(__data_testing_dir__, "microscopy_png")],
"bids_config": "ivadomed/config/config_bids.json",
"target_suffix": [["_seg-myelin-manual", "_seg-axon-manual"]],
"extensions": [".png"],
"roi_params": {"suffix": None, "slice_filter_roi": None},
"contrast_params": {"contrast_lst": []}
}])
def test_bids_df_microscopy_png(loader_parameters):
"""
Test for microscopy png file format
Test for _sessions.tsv and _scans.tsv files
Test for target_suffix as a nested list
Test for when no contrast_params are provided
"""
bids_df = imed_loader_utils.BidsDataframe(loader_parameters, __tmp_dir__, derivatives=True)
df_test = bids_df.df.drop(columns=['path'])
# TODO: modify df_ref.csv file in data-testing dataset to include "participant_id"
# and "sample_id" columns, then delete next line
df_test = df_test.drop(columns=['participant_id', 'sample_id'])
df_test = df_test.sort_values(by=['filename']).reset_index(drop=True)
csv_ref = os.path.join(loader_parameters["path_data"][0], "df_ref.csv")
csv_test = os.path.join(loader_parameters["path_data"][0], "df_test.csv")
df_test.to_csv(csv_test, index=False)
diff = csv_diff.compare(csv_diff.load_csv(open(csv_ref)), csv_diff.load_csv(open(csv_test)))
assert diff == {'added': [], 'removed': [], 'changed': [], 'columns_added': [], 'columns_removed': []}
@pytest.mark.parametrize('loader_parameters', [{
"path_data": [__data_testing_dir__],
"target_suffix": ["_seg-manual"],
"extensions": [],
"roi_params": {"suffix": None, "slice_filter_roi": None},
"contrast_params": {"contrast_lst": ["T1w", "T2w"]}
}])
def test_bids_df_anat(loader_parameters):
"""
Test for MRI anat nii.gz file format
Test for when no file extensions are provided
Test for multiple target_suffix
TODO: modify test and "df_ref.csv" file in data-testing dataset to test behavior when "roi_suffix" is not None
"""
    bids_df = imed_loader_utils.BidsDataframe(loader_parameters, __tmp_dir__, derivatives=True)
df_test = bids_df.df.drop(columns=['path'])
# TODO: modify df_ref.csv file in data-testing dataset to include "participant_id"
# column then delete next line
df_test = df_test.drop(columns=['participant_id'])
df_test = df_test.sort_values(by=['filename']).reset_index(drop=True)
csv_ref = os.path.join(loader_parameters["path_data"][0], "df_ref.csv")
csv_test = os.path.join(loader_parameters["path_data"][0], "df_test.csv")
df_test.to_csv(csv_test, index=False)
diff = csv_diff.compare(csv_diff.load_csv(open(csv_ref)), csv_diff.load_csv(open(csv_test)))
assert diff == {'added': [], 'removed': [], 'changed': [],
'columns_added': [], 'columns_removed': []}
# TODO: add a test to ensure the loader can read in multiple entries in path_data
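# The masked cases below multiply random tensors by a 0/1 vector so that some
# channels are constant (all zeros) before dropout_input runs; this exercises
# the empty-channel counting in the assertions that follow.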
@pytest.mark.parametrize('seg_pair', [
{"input": torch.rand((2, 5, 5))},
{"input": torch.rand((1, 5, 5))},
{"input": torch.rand((5, 5, 5, 5))},
{"input": (torch.rand((5, 5, 5, 3)) * torch.tensor([1, 0, 1], dtype=torch.float)).transpose(0, -1)},
{"input": (torch.rand((7, 7, 4)) * torch.tensor([1, 0, 0, 0], dtype=torch.float)).transpose(0, -1)}
])
def test_dropout_input(seg_pair):
n_channels = seg_pair['input'].size(0)
seg_pair = imed_loader.dropout_input(seg_pair)
empty_channels = [len(torch.unique(input_data)) == 1 for input_data in seg_pair['input']]
# If multichannel
if n_channels > 1:
# Verify that there is still at least one channel remaining
assert sum(empty_channels) <= n_channels
else:
assert sum(empty_channels) == 0
def teardown_function():
remove_tmp_dir()
| 42.336735 | 114 | 0.692215 | 606 | 4,149 | 4.438944 | 0.254125 | 0.031227 | 0.04461 | 0.053532 | 0.5171 | 0.511152 | 0.491822 | 0.458364 | 0.44461 | 0.43197 | 0 | 0.010904 | 0.160039 | 4,149 | 97 | 115 | 42.773196 | 0.760976 | 0.188961 | 0 | 0.3125 | 0 | 0 | 0.182809 | 0.009685 | 0 | 0 | 0 | 0.030928 | 0.0625 | 1 | 0.078125 | false | 0 | 0.125 | 0 | 0.203125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d102ba963d0416b1e7dda9fdfd6232ee7b52e79 | 640 | py | Python | server/project/api_v1/migrations/0003_auto_20191020_1229.py | Utree/TRAGRAM | d994394d580296c6acccbc3e222454b883ffb4d5 | [
"MIT"
] | null | null | null | server/project/api_v1/migrations/0003_auto_20191020_1229.py | Utree/TRAGRAM | d994394d580296c6acccbc3e222454b883ffb4d5 | [
"MIT"
] | null | null | null | server/project/api_v1/migrations/0003_auto_20191020_1229.py | Utree/TRAGRAM | d994394d580296c6acccbc3e222454b883ffb4d5 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.8 on 2019-10-20 03:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api_v1', '0002_auto_20191020_1005'),
]
operations = [
migrations.AlterField(
model_name='post',
name='map_lat',
field=models.DecimalField(decimal_places=10, default=0, max_digits=20, verbose_name='緯度'),
),
migrations.AlterField(
model_name='post',
name='map_lon',
field=models.DecimalField(decimal_places=10, default=0, max_digits=20, verbose_name='経度'),
),
]
| 26.666667 | 102 | 0.60625 | 74 | 640 | 5.054054 | 0.608108 | 0.106952 | 0.13369 | 0.15508 | 0.57754 | 0.57754 | 0.57754 | 0.363636 | 0.363636 | 0.363636 | 0 | 0.090129 | 0.271875 | 640 | 23 | 103 | 27.826087 | 0.712446 | 0.070313 | 0 | 0.352941 | 1 | 0 | 0.092749 | 0.038786 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d12bbb1e89b0a3a7d504dd09f39cc0aa58d6b36 | 613 | py | Python | leads/migrations/0012_auto_20190521_0128.py | goplannr-samim/manager-app | cd5bf7f1fea28d51dea55e48fa69cc461520a878 | [
"MIT"
] | null | null | null | leads/migrations/0012_auto_20190521_0128.py | goplannr-samim/manager-app | cd5bf7f1fea28d51dea55e48fa69cc461520a878 | [
"MIT"
] | 8 | 2020-06-05T20:58:52.000Z | 2022-03-11T23:48:48.000Z | leads/migrations/0012_auto_20190521_0128.py | goplannr-samim/manager-app | cd5bf7f1fea28d51dea55e48fa69cc461520a878 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.7 on 2019-05-20 19:58
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('leads', '0011_auto_20190520_1217'),
]
operations = [
migrations.AddField(
model_name='lead',
name='first_name',
field=models.CharField(max_length=255, null=True, verbose_name='First name'),
),
migrations.AddField(
model_name='lead',
name='last_name',
field=models.CharField(max_length=255, null=True, verbose_name='Last name'),
),
]
| 25.541667 | 89 | 0.595432 | 69 | 613 | 5.130435 | 0.57971 | 0.101695 | 0.129944 | 0.152542 | 0.508475 | 0.508475 | 0.310734 | 0.310734 | 0.310734 | 0.310734 | 0 | 0.084282 | 0.28385 | 613 | 23 | 90 | 26.652174 | 0.722096 | 0.073409 | 0 | 0.352941 | 1 | 0 | 0.130742 | 0.040636 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d146ad7b735919c96c63ca899f284a422b94a05 | 7,406 | py | Python | ansys/dpf/core/operators/math/add_fc.py | jfthuong/pydpf-core | bf2895ebc546e0004f759289bfc9a23196559ac3 | [
"MIT"
] | 18 | 2021-10-16T10:38:29.000Z | 2022-03-29T11:26:42.000Z | ansys/dpf/core/operators/math/add_fc.py | jfthuong/pydpf-core | bf2895ebc546e0004f759289bfc9a23196559ac3 | [
"MIT"
] | 79 | 2021-10-11T23:18:54.000Z | 2022-03-29T14:53:14.000Z | ansys/dpf/core/operators/math/add_fc.py | jfthuong/pydpf-core | bf2895ebc546e0004f759289bfc9a23196559ac3 | [
"MIT"
] | 5 | 2021-11-29T18:35:37.000Z | 2022-03-16T16:49:21.000Z | """
add_fc
===============
Autogenerated DPF operator classes.
"""
from warnings import warn
from ansys.dpf.core.dpf_operator import Operator
from ansys.dpf.core.inputs import Input, _Inputs
from ansys.dpf.core.outputs import Output, _Outputs
from ansys.dpf.core.operators.specification import PinSpecification, Specification
class add_fc(Operator):
"""Select all fields having the same label space in the input fields
container, and add those together. If fields, doubles, or vectors
of doubles are put in input, they are added to all the fields.
Parameters
----------
fields_container1 : FieldsContainer or Field or float
fields_container2 : FieldsContainer or Field or float
Examples
--------
>>> from ansys.dpf import core as dpf
>>> # Instantiate operator
>>> op = dpf.operators.math.add_fc()
>>> # Make input connections
>>> my_fields_container1 = dpf.FieldsContainer()
>>> op.inputs.fields_container1.connect(my_fields_container1)
>>> my_fields_container2 = dpf.FieldsContainer()
>>> op.inputs.fields_container2.connect(my_fields_container2)
>>> # Instantiate operator and connect inputs in one line
>>> op = dpf.operators.math.add_fc(
... fields_container1=my_fields_container1,
... fields_container2=my_fields_container2,
... )
>>> # Get output data
>>> result_fields_container = op.outputs.fields_container()
"""
def __init__(
self, fields_container1=None, fields_container2=None, config=None, server=None
):
super().__init__(name="add_fc", config=config, server=server)
self._inputs = InputsAddFc(self)
self._outputs = OutputsAddFc(self)
if fields_container1 is not None:
self.inputs.fields_container1.connect(fields_container1)
if fields_container2 is not None:
self.inputs.fields_container2.connect(fields_container2)
@staticmethod
def _spec():
description = """Select all fields having the same label space in the input fields
container, and add those together. If fields, doubles, or
vectors of doubles are put in input, they are added to all
the fields."""
spec = Specification(
description=description,
map_input_pin_spec={
0: PinSpecification(
name="fields_container",
type_names=[
"fields_container",
"field",
"double",
"vector<double>",
],
optional=False,
document="""""",
),
1: PinSpecification(
name="fields_container",
type_names=[
"fields_container",
"field",
"double",
"vector<double>",
],
optional=False,
document="""""",
),
},
map_output_pin_spec={
0: PinSpecification(
name="fields_container",
type_names=["fields_container"],
optional=False,
document="""""",
),
},
)
return spec
@staticmethod
def default_config(server=None):
"""Returns the default config of the operator.
This config can then be changed to the user needs and be used to
instantiate the operator. The Configuration allows to customize
how the operation will be processed by the operator.
Parameters
----------
server : server.DPFServer, optional
Server with channel connected to the remote or local instance. When
            ``None``, attempts to use the global server.
"""
return Operator.default_config(name="add_fc", server=server)
@property
def inputs(self):
"""Enables to connect inputs to the operator
Returns
--------
inputs : InputsAddFc
"""
return super().inputs
@property
def outputs(self):
"""Enables to get outputs of the operator by evaluationg it
Returns
--------
outputs : OutputsAddFc
"""
return super().outputs
class InputsAddFc(_Inputs):
"""Intermediate class used to connect user inputs to
add_fc operator.
Examples
--------
>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add_fc()
>>> my_fields_container1 = dpf.FieldsContainer()
>>> op.inputs.fields_container1.connect(my_fields_container1)
>>> my_fields_container2 = dpf.FieldsContainer()
>>> op.inputs.fields_container2.connect(my_fields_container2)
"""
def __init__(self, op: Operator):
super().__init__(add_fc._spec().inputs, op)
self._fields_container1 = Input(add_fc._spec().input_pin(0), 0, op, 0)
self._inputs.append(self._fields_container1)
self._fields_container2 = Input(add_fc._spec().input_pin(1), 1, op, 1)
self._inputs.append(self._fields_container2)
@property
def fields_container1(self):
"""Allows to connect fields_container1 input to the operator.
Parameters
----------
my_fields_container1 : FieldsContainer or Field or float
Examples
--------
>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add_fc()
>>> op.inputs.fields_container1.connect(my_fields_container1)
>>> # or
>>> op.inputs.fields_container1(my_fields_container1)
"""
return self._fields_container1
@property
def fields_container2(self):
"""Allows to connect fields_container2 input to the operator.
Parameters
----------
my_fields_container2 : FieldsContainer or Field or float
Examples
--------
>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add_fc()
>>> op.inputs.fields_container2.connect(my_fields_container2)
>>> # or
>>> op.inputs.fields_container2(my_fields_container2)
"""
return self._fields_container2
class OutputsAddFc(_Outputs):
"""Intermediate class used to get outputs from
add_fc operator.
Examples
--------
>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add_fc()
>>> # Connect inputs : op.inputs. ...
>>> result_fields_container = op.outputs.fields_container()
"""
def __init__(self, op: Operator):
super().__init__(add_fc._spec().outputs, op)
self._fields_container = Output(add_fc._spec().output_pin(0), 0, op)
self._outputs.append(self._fields_container)
@property
def fields_container(self):
"""Allows to get fields_container output of the operator
Returns
----------
my_fields_container : FieldsContainer
Examples
--------
>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add_fc()
>>> # Connect inputs : op.inputs. ...
>>> result_fields_container = op.outputs.fields_container()
""" # noqa: E501
return self._fields_container
| 32.340611 | 90 | 0.591007 | 779 | 7,406 | 5.406932 | 0.16303 | 0.087369 | 0.02849 | 0.029915 | 0.537037 | 0.480532 | 0.447293 | 0.418566 | 0.401947 | 0.401947 | 0 | 0.011592 | 0.301107 | 7,406 | 228 | 91 | 32.482456 | 0.802164 | 0.44518 | 0 | 0.397727 | 1 | 0 | 0.11127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113636 | false | 0 | 0.056818 | 0 | 0.284091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d15860238c72c1cb7074178d1db48f26a5c4c75 | 8,665 | py | Python | scripts/gazebo_move_object.py | RCPRG-ros-pkg/rcprg_gazebo_utils | d50151280f430d1b0894b12beb4071906ac726ff | [
"BSD-3-Clause"
] | null | null | null | scripts/gazebo_move_object.py | RCPRG-ros-pkg/rcprg_gazebo_utils | d50151280f430d1b0894b12beb4071906ac726ff | [
"BSD-3-Clause"
] | null | null | null | scripts/gazebo_move_object.py | RCPRG-ros-pkg/rcprg_gazebo_utils | d50151280f430d1b0894b12beb4071906ac726ff | [
"BSD-3-Clause"
] | 1 | 2022-02-08T14:36:16.000Z | 2022-02-08T14:36:16.000Z | #!/usr/bin/env python
## Provides interactive 6D pose marker and allows moving object in Gazebo.
# @ingroup utilities
# @file gazebo_move_object.py
# @namespace scripts.gazebo_move_object Provides interactive 6D pose marker and allows moving object in Gazebo
# Copyright (c) 2017, Robot Control and Pattern Recognition Group,
# Institute of Control and Computation Engineering
# Warsaw University of Technology
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the Warsaw University of Technology nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Author: Dawid Seredynski
#
import roslib; roslib.load_manifest('rcprg_gazebo_utils')
import sys
import rospy
import math
import copy
import tf
from std_msgs.msg import ColorRGBA
from interactive_markers.interactive_marker_server import *
from interactive_markers.menu_handler import *
from visualization_msgs.msg import *
from geometry_msgs.msg import *
from tf.transformations import *
import tf_conversions.posemath as pm
import PyKDL
from cartesian_trajectory_msgs.msg import *
import actionlib
from gazebo_msgs.srv import *
class IntMarkers6D:
def __init__(self, link_name):
self.link_name = link_name
rospy.wait_for_service('/gazebo/set_link_state', 4.0)
self.set_link_state = rospy.ServiceProxy('/gazebo/set_link_state', SetLinkState)
rospy.wait_for_service('/gazebo/get_link_state', 4.0)
get_link_state = rospy.ServiceProxy('/gazebo/get_link_state', GetLinkState)
req = GetLinkStateRequest()
req.link_name = self.link_name
req.reference_frame = "torso_base"
resp = get_link_state(req)
if not resp.success:
print "success:", resp.success, ", status:", resp.status_message
raise Exception("/gazebo/get_link_state")
print "IntMarkers6D init ok"
# create an interactive marker server on the topic namespace simple_marker
self.server = InteractiveMarkerServer('gazebo_move_object_' + self.link_name.replace(":", "_"))
self.T_B_M = PyKDL.Frame(PyKDL.Vector(1,1,1))
self.insert6DofGlobalMarker(self.T_B_M)
self.server.applyChanges()
def setLinkState(self, T_B_M):
req = SetLinkStateRequest()
req.link_state.link_name = self.link_name
req.link_state.pose.position.x = T_B_M.p.x()
req.link_state.pose.position.y = T_B_M.p.y()
req.link_state.pose.position.z = T_B_M.p.z()
qx, qy, qz, qw = T_B_M.M.GetQuaternion()
req.link_state.pose.orientation.x = qx
req.link_state.pose.orientation.y = qy
req.link_state.pose.orientation.z = qz
req.link_state.pose.orientation.w = qw
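# zero the twist so the teleported link comes to rest in Gazebo
# instead of keeping its previous velocity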
req.link_state.twist.linear.x = 0
req.link_state.twist.linear.y = 0
req.link_state.twist.linear.z = 0
req.link_state.twist.angular.x = 0
req.link_state.twist.angular.y = 0
req.link_state.twist.angular.z = 0
req.link_state.reference_frame = "torso_base"
resp = self.set_link_state(req)
if not resp.success:
print "success:", resp.success, ", status:", resp.status_message
def erase6DofMarker(self):
self.server.erase(self.link_name+'_position_marker')
self.server.applyChanges()
def insert6DofGlobalMarker(self, T_B_M):
int_position_marker = InteractiveMarker()
int_position_marker.header.frame_id = 'torso_base'
int_position_marker.name = self.link_name+'_position_marker'
int_position_marker.scale = 0.2
int_position_marker.pose = pm.toMsg(T_B_M)
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.ROTATE_AXIS,'x'))
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.ROTATE_AXIS,'y'))
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.ROTATE_AXIS,'z'))
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.MOVE_AXIS,'x'))
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.MOVE_AXIS,'y'))
int_position_marker.controls.append(self.createInteractiveMarkerControl6DOF(InteractiveMarkerControl.MOVE_AXIS,'z'))
box = self.createAxisMarkerControl(Point(0.15,0.015,0.015), Point(0.0, 0.0, 0.0) )
box.interaction_mode = InteractiveMarkerControl.BUTTON
box.name = 'button'
int_position_marker.controls.append( box )
self.server.insert(int_position_marker, self.processFeedback)
self.server.applyChanges()
def processFeedback(self, feedback):
if ( feedback.marker_name == self.link_name+'_position_marker' ):
T_B_M = pm.fromMsg(feedback.pose)
self.setLinkState(T_B_M)
print "pose:", T_B_M.p.x(), T_B_M.p.y(), T_B_M.p.z()
def createAxisMarkerControl(self, scale, position):
markerX = Marker()
markerX.type = Marker.ARROW
markerX.scale = scale
markerX.pose.position = position
ori = quaternion_about_axis(0, [0, 1 ,0])
markerX.pose.orientation = Quaternion(ori[0], ori[1], ori[2], ori[3])
markerX.color = ColorRGBA(1,0,0,1)
markerY = Marker()
markerY.type = Marker.ARROW
markerY.scale = scale
markerY.pose.position = position
ori = quaternion_about_axis(math.pi/2.0, [0, 0 ,1])
markerY.pose.orientation = Quaternion(ori[0], ori[1], ori[2], ori[3])
markerY.color = ColorRGBA(0,1,0,1)
markerZ = Marker()
markerZ.type = Marker.ARROW
markerZ.scale = scale
markerZ.pose.position = position
ori = quaternion_about_axis(-math.pi/2.0, [0, 1 ,0])
markerZ.pose.orientation = Quaternion(ori[0], ori[1], ori[2], ori[3])
markerZ.color = ColorRGBA(0,0,1,1)
control = InteractiveMarkerControl()
control.always_visible = True
control.markers.append(markerX)
control.markers.append(markerY)
control.markers.append(markerZ)
return control
def createInteractiveMarkerControl6DOF(self, mode, axis):
control = InteractiveMarkerControl()
control.orientation_mode = InteractiveMarkerControl.FIXED
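# The Quaternion(1,0,0,1) etc. values below follow the common
# interactive-marker tutorial pattern: unnormalized axis quaternions
# that RViz normalizes on receipt.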
if mode == InteractiveMarkerControl.ROTATE_AXIS:
control.name = 'rotate_'
if mode == InteractiveMarkerControl.MOVE_AXIS:
control.name = 'move_'
if axis == 'x':
control.orientation = Quaternion(1,0,0,1)
control.name = control.name+'x'
if axis == 'y':
control.orientation = Quaternion(0,1,0,1)
control.name = control.name+'y'
if axis == 'z':
control.orientation = Quaternion(0,0,1,1)
control.name = control.name+'z'
control.interaction_mode = mode
return control
def printUsage():
print "usage: gazebo_move_object.py model_name::link_name"
if __name__ == "__main__":
rospy.init_node('gazebo_move_object', anonymous=True)
if len(sys.argv) != 2:
printUsage()
exit(1)
int_markers = IntMarkers6D(sys.argv[1])
rospy.spin()
| 42.47549 | 127 | 0.698211 | 1,110 | 8,665 | 5.287387 | 0.268468 | 0.036804 | 0.03067 | 0.019083 | 0.385074 | 0.280286 | 0.242801 | 0.223377 | 0.223377 | 0.212813 | 0 | 0.015103 | 0.205309 | 8,665 | 203 | 128 | 42.684729 | 0.837206 | 0.227928 | 0 | 0.101449 | 0 | 0 | 0.05893 | 0.02285 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.123188 | null | null | 0.050725 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d1adcf9576413e99c39f705340235365fb3ff9b | 29,251 | py | Python | api/cueSearch/elasticSearch/elastic_search_indexing.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | 3 | 2022-02-10T17:00:19.000Z | 2022-03-29T14:31:25.000Z | api/cueSearch/elasticSearch/elastic_search_indexing.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | null | null | null | api/cueSearch/elasticSearch/elastic_search_indexing.py | cuebook/CueSearch | 8bf047de273b27bba41b8bf4e266aac1eee7f81a | [
"Apache-2.0"
] | null | null | null | import os
import time
import logging
from typing import List, Dict
from collections import deque
# from search import app
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk
from datetime import datetime
# from config import ELASTICSEARCH_URL
import threading
from .utils import Utils
import traceback
ELASTICSEARCH_URL = os.environ.get("ELASTICSEARCH_URL", "http://localhost:9200/")
class ESIndexingUtils:
"""
Class to handle Elastic Search related indexing
and search utilities
"""
GLOBAL_DIMENSIONS_NAMES_INDEX_NAME = (
"cuesearch_global_dimensions_names_for_search_index"
)
GLOBAL_DIMENSIONS_INDEX_NAME = "global_dimensions_name_index_cuesearch"
GLOBAL_DIMENSIONS_INDEX_DATA = "cuesearch_global_dimensions_data_index"
AUTO_GLOBAL_DIMENSIONS_INDEX_DATA = "cuesearch_auto_global_dimensions_data_index"
AUTO_GLOBAL_DIMENSIONS_INDEX_DATA_SEARCH_SUGGESTION = (
"cuesearch_auto_global_dimensions_search_suggestion_data_index"
)
GLOBAL_DIMENSIONS_INDEX_SEARCH_SUGGESTION_DATA = (
"cuesearch_global_dimensions_search_suggestion_data_index"
)
DATASET_MEASURES_INDEX_NAME = "dataset_measures_index_cuesearch"
@staticmethod
def _getESClient() -> Elasticsearch:
"""
Method to get the ES Client
"""
esHost = ELASTICSEARCH_URL
esClient = Elasticsearch(hosts=[esHost], timeout=30)
return esClient
@staticmethod
def initializeIndex(indexName: str, indexDefinition: dict) -> str:
"""
Method to name the index in Elasticsearch
:indexName: the index name to be used for index creation
:indexDefinition: the index definition - dict.
"""
esClient = ESIndexingUtils._getESClient()
logging.info("intializing Index here ...")
currentIndexVersion = "_" + str(int(round(time.time() * 1000)))
aliasIndex = indexName + currentIndexVersion
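# e.g. "cuesearch_foo_index" becomes "cuesearch_foo_index_1650000000000"
# (illustrative timestamp); deleteOldIndex() later re-points the
# un-versioned alias to this index and removes older versions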
logging.info("Creating index of: %s", aliasIndex)
esClient.indices.create(index=aliasIndex, body=indexDefinition)
return aliasIndex
@staticmethod
def ingestIndex(documentsToIndex: List[Dict], aliasIndex: str):
"""
Method to ingest data into the index
:param documentsToIndex The documents that need to be indexed, e.g.,
List of Cards or List of Global Dimensions
:aliasIndex: the index name to be used for ingestion
"""
esClient = ESIndexingUtils._getESClient()
for documentToIndex in documentsToIndex:
documentToIndex["_index"] = aliasIndex
documentToIndex["_op_type"] = "index"
logging.debug("Parallel indexing process starting.")
deque(parallel_bulk(esClient, documentsToIndex), maxlen=0)
logging.info("Alias index created at: %s", aliasIndex)
@staticmethod
def deleteOldIndex(indexName: str, aliasIndex: str):
"""
Method to ingest data into the index
:param documentsToIndex The documents that need to be indexed.e.g,
List of Cards or List of Global Dimensions
:aliasIndex: the index name to be used for ingestion
"""
esClient = ESIndexingUtils._getESClient()
logging.info(
"Now point the alias index: { %s } to { %s }", aliasIndex, indexName
)
esClient.indices.put_alias(index=aliasIndex, name=indexName)
logging.info("Now delete the older indices. They are of no use now.")
# Now delete the older indices following a certain pattern.
# Those indices are old indices and are of no use.
allAliases = esClient.indices.get_alias("*")
for key, value in allAliases.items():
logging.debug("Checking for index: %s", key)
# delete only the indexes matching the given pattern,
# retain all the other indexes they may be coming from some other source
if indexName in key:
# do not delete the current index
if aliasIndex == key:
continue
logging.info("Deleting the index: %s", key)
esClient.indices.delete(index=key, ignore=[400, 404])
@staticmethod
def deleteAllIndex():
logging.info("Deleting all indexes")
esClient = ESIndexingUtils._getESClient()
allAliases = esClient.indices.get_alias("*")
for key, value in allAliases.items():
logging.info("Deleting the index: %s", key)
esClient.indices.delete(index=key, ignore=[400, 404])
logging.info("All indexes deleted !")
@staticmethod
def _createIndex(
documentsToIndex: List[Dict], indexName: str, indexDefinition: dict
):
"""
Method to create an index in Elasticsearch
:param documentsToIndex The documents that need to be indexed, e.g.,
List of Cards or List of Global Dimensions
:indexName: the index name to be used for index creation
:indexDefinition: the index definition - dict.
"""
aliasIndex = ESIndexingUtils.initializeIndex(indexName, indexDefinition)
# ingest entries in the initialized index
ESIndexingUtils.ingestIndex(documentsToIndex, aliasIndex)
# at this stage index has been created at a new location
# now change the alias of the main Index to point to the new index
ESIndexingUtils.deleteOldIndex(indexName, aliasIndex)
@staticmethod
def runAllIndexDimension():
"""
Method to spawn threads that index global dimension data into existing Elasticsearch indices
Each child thread assumes an existing index with a predefined, unaltered indexDefinition
"""
logging.info("Indexing starts on global dimension action")
cardIndexer1 = threading.Thread(
target=ESIndexingUtils.indexGlobalDimensionsDataForSearchSuggestion
)
cardIndexer1.start()
cardIndexer2 = threading.Thread(
target=ESIndexingUtils.indexGlobalDimensionsData
)
cardIndexer2.start()
cardIndexer3 = threading.Thread(
target=ESIndexingUtils.indexNonGlobalDimensionsDataForSearchSuggestion
)
cardIndexer3.start()
cardIndexer4 = threading.Thread(
target=ESIndexingUtils.indexNonGlobalDimensionsData
)
cardIndexer4.start()
logging.info("Indexing completed !! ")
@staticmethod
def fetchGlobalDimensionsValueForIndexing(globalDimensionGroup):
"""
Method to fetch the global dimensions and the dimension values.
:return List of Documents to be indexed
"""
indexingDocuments = []
dimension = ""
logging.info("global dimension group in fetch %s", globalDimensionGroup)
globalDimensionName = globalDimensionGroup["name"]
logging.debug("Starting fetch for global dimension: %s", globalDimensionName)
globalDimensionId = globalDimensionGroup["id"]
dimensionObjs = globalDimensionGroup["values"] # dimensional values
logging.info(
"Merging dimensions Value percentile with mulitple values in list of dimensionValues"
)
for dmObj in dimensionObjs:
displayValue = ""
dimension = dmObj["dimension"]
dataset = dmObj["dataset"]
datasetId = dmObj["datasetId"]
res = Utils.getDimensionalValuesForDimension(datasetId, dimension)
dimensionValues = res.get("data", [])
for values in dimensionValues:
if values:
logging.info("Dimensional value is %s", values)
displayValue = values
elasticsearchUniqueId = (
str(globalDimensionId)
+ "_"
+ str(displayValue)
+ "_"
+ str(dataset)
)
document = {
"_id": elasticsearchUniqueId,
"globalDimensionValue": str(displayValue).lower(),
"globalDimensionDisplayValue": str(displayValue),
"globalDimensionName": str(globalDimensionName),
"globalDimensionId": globalDimensionId,
"dimension": dimension,
"dataset": dataset,
"datasetId": datasetId,
}
indexingDocuments.append(document)
logging.debug("Document to index: %s", document)
return indexingDocuments
@staticmethod
def indexGlobalDimensionsData(joblogger=None):
"""
Method to index global dimensions data
"""
logging.info(
"****************** Indexing Starts for Global Dimension values **************** "
)
response = Utils.getGlobalDimensionForIndex()
if response["success"]:
globalDimensions = response.get("data", [])
logging.debug("Global dimensions Fetched ")
indexDefinition = {
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer",
"filter": ["lowercase"],
}
},
"default_search": {"type": "my_analyzer"},
"tokenizer": {
"my_tokenizer": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 10,
"token_chars": ["letter", "digit"],
}
},
}
},
"mappings": {
"properties": {
"globalDimensionId": {"type": "integer"},
"globalDimensionDisplayValue": {"type": "keyword"},
"globalDimensionValue": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"globalDimensionName": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dimension": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dataset": {"type": "text"},
"datasetId": {"type": "integer"},
}
},
}
indexName = ESIndexingUtils.GLOBAL_DIMENSIONS_INDEX_DATA
aliasIndex = ESIndexingUtils.initializeIndex(indexName, indexDefinition)
logging.info("IndexName %s", indexName)
logging.info("aliasIndex %s", aliasIndex)
for globalDimensionGroup in globalDimensions:
logging.info("globaldimensionGroup %s", globalDimensionGroup)
# globalDimensionGroup is an array
try:
documentsToIndex = (
ESIndexingUtils.fetchGlobalDimensionsValueForIndexing(
globalDimensionGroup
)
)
ESIndexingUtils.ingestIndex(documentsToIndex, aliasIndex)
except Exception as error:
logging.error(str(error))
ESIndexingUtils.deleteOldIndex(indexName, aliasIndex)
logging.info(
"****************** Indexing Completed for Global Dimension values **************** "
)
else:
logging.error("Error in fetching global dimensions.")
raise RuntimeError("Error in fetching global dimensions")
@staticmethod
def fetchGlobalDimensionsValueForSearchSuggestionIndexing(globalDimensionGroup):
"""
Method to fetch the global dimensions and the dimension values.
:return List of Documents to be indexed
"""
indexingDocuments = []
globalDimensionName = globalDimensionGroup["name"]
logging.debug("Starting fetch for global dimension: %s", globalDimensionName)
globalDimensionId = globalDimensionGroup["id"]
dimensionObjs = globalDimensionGroup["values"] # dimensional values
logging.info(
"Merging dimensions Value with mulitple values in list of dimensionValues"
)
for dmObj in dimensionObjs:
displayValue = ""
dimension = dmObj["dimension"]
dataset = dmObj["dataset"]
datasetId = dmObj["datasetId"]
res = Utils.getDimensionalValuesForDimension(datasetId, dimension)
dimensionValues = res.get("data", [])
if dimensionValues:
for values in dimensionValues:
if values:
logging.info("Dimensional values is %s", values)
displayValue = values
elasticsearchUniqueId = (
str(globalDimensionId) + "_" + str(displayValue)
)
document = {
"_id": elasticsearchUniqueId,
"globalDimensionValue": str(displayValue).lower(),
"globalDimensionDisplayValue": str(displayValue),
"globalDimensionName": str(globalDimensionName),
"globalDimensionId": globalDimensionId,
"dataset": dataset,
"datasetId": datasetId,
}
indexingDocuments.append(document)
logging.debug("Document to index: %s", document)
return indexingDocuments
# Below function is used for search suggestion / to avoid duplicates in the search dropdown (temporary)
@staticmethod
def indexGlobalDimensionsDataForSearchSuggestion(joblogger=None):
"""
Indexing is being done for dropdown suggestion
"""
logging.info(
"*************************** Indexing starts of Global Dimension Values for Search Suggestion **************************"
)
response = Utils.getGlobalDimensionForIndex()
if response["success"]:
globalDimensions = response.get("data", [])
logging.debug("Global dimensions: %s", globalDimensions)
indexDefinition = {
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer",
"filter": ["lowercase"],
}
},
"default_search": {"type": "my_analyzer"},
"tokenizer": {
"my_tokenizer": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 10,
"token_chars": ["letter", "digit"],
}
},
}
},
"mappings": {
"properties": {
"globalDimensionId": {"type": "integer"},
"globalDimensionDisplayValue": {"type": "text"},
"globalDimensionValue": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"globalDimensionName": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dataset": {"type": "text"},
"datasetId": {"type": "integer"},
}
},
}
indexName = ESIndexingUtils.GLOBAL_DIMENSIONS_INDEX_SEARCH_SUGGESTION_DATA
aliasIndex = ESIndexingUtils.initializeIndex(indexName, indexDefinition)
logging.info("IndexName %s", indexName)
logging.info("aliasIndex %s", aliasIndex)
for globalDimensionGroup in globalDimensions:
# globalDimensionGroup is an array
logging.info("globaldimensionGroup %s", globalDimensionGroup)
try:
documentsToIndex = ESIndexingUtils.fetchGlobalDimensionsValueForSearchSuggestionIndexing(
globalDimensionGroup
)
ESIndexingUtils.ingestIndex(documentsToIndex, aliasIndex)
except Exception as error:
logging.error(str(error))
ESIndexingUtils.deleteOldIndex(indexName, aliasIndex)
logging.info(
"*************************** Indexing Completed of Global Dimension Values for Search Suggestion **************************"
)
else:
logging.error("Error in fetching global dimensions.")
raise RuntimeError("Error in fetching global dimensions")
@staticmethod
def indexNonGlobalDimensionsDataForSearchSuggestion(joblogger=None):
"""
Method to index global dimensions data
"""
from cueSearch.services import GlobalDimensionServices
logging.info(
"*************************** Indexing Starts of Non Global Dimension Values for Search Suggestion **************************"
)
response = GlobalDimensionServices.nonGlobalDimensionForIndexing()
if response["success"]:
datsetDimensions = response.get("data", [])
logging.debug("Dataset dimensions: %s", datsetDimensions)
indexDefinition = {
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer",
"filter": ["lowercase"],
}
},
"default_search": {"type": "my_analyzer"},
"tokenizer": {
"my_tokenizer": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 10,
"token_chars": ["letter", "digit"],
}
},
}
},
"mappings": {
"properties": {
"globalDimensionId": {"type": "text"},
"globalDimensionDisplayValue": {"type": "text"},
"globalDimensionValue": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"globalDimensionName": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dimension": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dataset": {"type": "text"},
"datasetId": {"type": "integer"},
}
},
}
indexName = (
ESIndexingUtils.AUTO_GLOBAL_DIMENSIONS_INDEX_DATA_SEARCH_SUGGESTION
)
aliasIndex = ESIndexingUtils.initializeIndex(indexName, indexDefinition)
logging.info("IndexName %s", indexName)
logging.info("aliasIndex %s", aliasIndex)
# datsetDimensions is an array
try:
documentsToIndex = (
ESIndexingUtils.fetchNonGlobalDimensionsValueForIndexing(
datsetDimensions
)
)
ESIndexingUtils.ingestIndex(documentsToIndex, aliasIndex)
except Exception as error:
logging.error(str(error))
ESIndexingUtils.deleteOldIndex(indexName, aliasIndex)
logging.info(
"*************************** Indexing Completed of Non Dimensional Values for Search Suggestion **************************"
)
else:
logging.error("Error in fetching global dimensions.")
raise RuntimeError("Error in fetching global dimensions")
@staticmethod
def fetchNonGlobalDimensionsValueForIndexing(datasetDimensions: list):
"""
Method to fetch the global dimensions and the dimension values.
:return List of Documents to be indexed
"""
indexingDocuments = []
dimension = ""
globalDimensionName = ""
globalDimensionId = ""
dimensionObjs = datasetDimensions
logging.info(
"Merging dimensions Value percentile with mulitple values in list of dimensionValues"
)
for dmObj in dimensionObjs:
displayValue = ""
dimension = dmObj["dimension"]
dataset = dmObj["dataset"]
datasetId = dmObj["datasetId"]
res = Utils.getDimensionalValuesForDimension(datasetId, dimension)
dimensionValues = res.get("data", [])
for values in dimensionValues:
if values:
logging.info(
" Non global dimensional values %s",
values,
)
displayValue = values
globalDimensionId = (
str(dimension) + "_" + str(displayValue) + "_" + str(datasetId)
)
globalDimensionName = str(dataset) + "_" + str(dimension)
elasticsearchUniqueId = (
str(globalDimensionId)
+ "_"
+ str(displayValue)
+ "_"
+ str(dataset)
)
document = {
"_id": elasticsearchUniqueId,
"globalDimensionValue": str(displayValue).lower(),
"globalDimensionDisplayValue": str(displayValue),
"globalDimensionName": str(globalDimensionName),
"globalDimensionId": globalDimensionId,
"dimension": dimension,
"dataset": dataset,
"datasetId": datasetId,
}
indexingDocuments.append(document)
logging.info(
"Indexing Documents length of non global dimension %s",
len(indexingDocuments),
)
return indexingDocuments
@staticmethod
def indexNonGlobalDimensionsData(joblogger=None):
"""
Method to index Non global dimensions data
"""
from cueSearch.services import GlobalDimensionServices
logging.info(
"*************************** Indexing Starts of Non Global Dimension Data **************************"
)
response = GlobalDimensionServices.nonGlobalDimensionForIndexing()
if response["success"]:
datsetDimensions = response.get("data", [])
logging.debug("Dataset dimensions: %s", datsetDimensions)
indexDefinition = {
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer",
"filter": ["lowercase"],
}
},
"default_search": {"type": "my_analyzer"},
"tokenizer": {
"my_tokenizer": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 10,
"token_chars": ["letter", "digit"],
}
},
}
},
"mappings": {
"properties": {
"globalDimensionId": {"type": "text"},
"globalDimensionDisplayValue": {"type": "keyword"},
"globalDimensionValue": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"globalDimensionName": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dimension": {
"type": "text",
"search_analyzer": "my_analyzer",
"analyzer": "my_analyzer",
"fields": {
"ngram": {"type": "text", "analyzer": "my_analyzer"}
},
},
"dataset": {"type": "text"},
"datasetId": {"type": "integer"},
}
},
}
indexName = ESIndexingUtils.AUTO_GLOBAL_DIMENSIONS_INDEX_DATA
aliasIndex = ESIndexingUtils.initializeIndex(indexName, indexDefinition)
logging.info("IndexName %s", indexName)
logging.info("aliasIndex %s", aliasIndex)
# datsetDimensions is an array
try:
documentsToIndex = (
ESIndexingUtils.fetchNonGlobalDimensionsValueForIndexing(
datsetDimensions
)
)
ESIndexingUtils.ingestIndex(documentsToIndex, aliasIndex)
except Exception as error:
logging.error(str(error))
ESIndexingUtils.deleteOldIndex(indexName, aliasIndex)
logging.info(
"*************************** Indexing Completed of Non Dimensional Data **************************"
)
else:
logging.error("Error in fetching global dimensions.")
raise RuntimeError("Error in fetching global dimensions")
| 41.025245 | 140 | 0.487607 | 1,945 | 29,251 | 7.234961 | 0.141902 | 0.029136 | 0.047328 | 0.017197 | 0.711981 | 0.685972 | 0.673039 | 0.664937 | 0.659821 | 0.647385 | 0 | 0.002515 | 0.415541 | 29,251 | 712 | 141 | 41.082865 | 0.820601 | 0.078459 | 0 | 0.636364 | 0 | 0 | 0.206507 | 0.03114 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024955 | false | 0.00713 | 0.023173 | 0 | 0.071301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d1cdb8c16bb2b3f1d6f22088415125ae731aadd | 1,510 | py | Python | scripts/plot.py | dfvella/vtol | a4dfca2c29cae653156f2681d58ba6c83d3639f3 | [
"MIT"
] | null | null | null | scripts/plot.py | dfvella/vtol | a4dfca2c29cae653156f2681d58ba6c83d3639f3 | [
"MIT"
] | null | null | null | scripts/plot.py | dfvella/vtol | a4dfca2c29cae653156f2681d58ba6c83d3639f3 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import sys
import matplotlib.pyplot as plt
try:
filename = sys.argv[1]
except IndexError:
print('error: data file not specified')
exit(1)
#response = [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1 ]
#response = [ 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0.02, 0.02, 0.01, 0.0 ]
#response = [ 0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ] # delay <20 ms
response = [ 0.3, 0.3, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ] # delay ~25 ms
buffer = [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ]
assert(abs(sum(response) - 1) < 0.001)
assert(len(response) == len(buffer))
def filter(data: float, buff: list, resp: list) -> float:
buff.insert(0, data)
buff.pop()
res = 0
for data, weight in zip(buff, resp):
res += data * weight
return res
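# Quick sketch of the FIR smoothing above: pushing a unit impulse
# through filter() replays the response weights one step at a time,
# e.g. with the active response list:
# buff = [0.0] * len(response)
# for k in range(4):
#     print(filter(1.0 if k == 0 else 0.0, buff, response))
# prints 0.3, 0.3, 0.2, 0.2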
tList = []
xList = []
dList = []
fList = []
with open(filename, 'r') as f:
for line in f:
try:
t, x = line.split()
except ValueError:
print('error: failed to parse line')
exit(1)
if len(tList) > 1:
d = (xList[-1] - xList[-2]) / (tList[-1] - tList[-2])
else:
d = 0
tList.append(float(t))
xList.append(float(x))
dList.append(float(d))
fList.append(float(filter(d, buffer, response)))
plt.plot(tList, xList, label='x')
plt.plot(tList, dList, label='dx/dt')
plt.plot(tList, fList, label='filtered')
plt.title(filename)
plt.ylabel('x')
plt.xlabel('t')
plt.legend()
plt.show()
| 23.968254 | 79 | 0.541722 | 274 | 1,510 | 2.985401 | 0.310219 | 0.102689 | 0.139364 | 0.171149 | 0.118582 | 0.101467 | 0.101467 | 0.090465 | 0.090465 | 0.090465 | 0 | 0.11121 | 0.255629 | 1,510 | 62 | 80 | 24.354839 | 0.616548 | 0.159603 | 0 | 0.088889 | 0 | 0 | 0.058591 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 1 | 0.022222 | false | 0 | 0.044444 | 0 | 0.088889 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d263e5b201c53646dfb63b01b0b12af6c504dd2 | 9,027 | py | Python | components/homme/scripts_for_paper/dictionaries.py | cjvogl/E3SM | d8990bc2efda76b6f9096f989eed46bd3ab87463 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | 1 | 2019-12-11T16:41:13.000Z | 2019-12-11T16:41:13.000Z | components/homme/scripts_for_paper/dictionaries.py | cjvogl/E3SM | d8990bc2efda76b6f9096f989eed46bd3ab87463 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | null | null | null | components/homme/scripts_for_paper/dictionaries.py | cjvogl/E3SM | d8990bc2efda76b6f9096f989eed46bd3ab87463 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | null | null | null | from netCDF4 import Dataset
import glob
# This is meant to be an all-inclusive list of possible methods
methodDict = {'KGU35-native': 5,
'ARS232-native': 7,
'KGU35': 21,
'ARS232': 22,
'DBM453': 23,
'ARS222': 24,
'ARS233': 25,
'ARS343': 26,
'ARS443': 27,
'ARK324': 28,
'ARK436': 29,
'SSP3333b': 30,
'SSP3333c': 31,
'IMKG232a': 32,
'IMKG232b': 33,
'IMKG242a': 34,
'IMKG242b': 35,
'IMKG243a': 37,
'IMKG252a': 38,
'IMKG252b': 39,
'IMKG253a': 40,
'IMKG253b': 41,
'IMKG254a': 42,
'IMKG254b': 43,
'IMKG254c': 44,
'IMKG343a': 45,
'ARK437': 47,
'ARK548': 48,
'SSP2232': 49,
'GSA222': 50}
# Line+marker styles chosen to provide certain information about the method
# line:
# solid = second order
# dashed = third order
# dot-dashed = fourth order
# solid = fifth order
# marker:
# circle = 0 implicit stages
# x = 2 implicit stages
# triangle = 3 implicit stages
# square = 4 implicit stages
# pentagon = 5 implicit stages
# hexagon = 6 implicit stages
# star = 7 implicit stages
lineStyleDict = {'KGU35-native': '--o',
'ARS232-native': '-x',
'KGU35': '--o',
'ARS222': '-x',
'ARS232': '-x',
'SSP3333b': '--x',
'SSP3333c': '--x',
'ARS443': '--s',
'DBM453': '--s',
'ARS233': '--x',
'ARS343': '--^',
'ARK324': '--^',
'ARK436': '-.p',
'IMKG232a': '-x',
'IMKG232b': '-x',
'IMKG242a': '-x',
'IMKG242b': '-x',
'IMKG243a': '-^',
'IMKG243b': '-^',
'IMKG252a': '-x',
'IMKG252b': '-x',
'IMKG253a': '-^',
'IMKG253b': '-^',
'IMKG254a': '-s',
'IMKG254b': '-s',
'IMKG254c': '-s',
'IMKG343a': '--^',
'ARK437': '-.H',
'ARK548': '-*',
'SSP2232': '-x',
'GSA222': '-x'}
colorDict = {'KGU35-native': 'k',
'KGU35': 'tab:blue',
'ARS232-native': 'k',
'ARS222': 'tab:blue',
'ARS232': 'tab:orange',
'SSP3333b': 'tab:green',
'SSP3333c': 'k',
'ARS233': 'tab:red',
'IMKG232a': 'tab:purple',
'IMKG232b': 'tab:brown',
'IMKG242a': 'tab:pink',
'IMKG242b': 'tab:gray',
'IMKG252a': 'tab:olive',
'IMKG252b': 'tab:cyan',
'ARS343': 'tab:blue',
'ARK324': 'tab:orange',
'IMKG243a': 'tab:red',
'IMKG253a': 'tab:purple',
'IMKG253b': 'tab:brown',
'IMKG343a': 'tab:pink',
'ARS443': 'tab:blue',
'DBM453': 'tab:orange',
'IMKG254a': 'tab:green',
'IMKG254b': 'tab:red',
'IMKG254c': 'tab:purple',
'ARK436': 'tab:blue',
'ARK437': 'tab:orange',
'ARK548': 'tab:blue',
'SSP2232': 'tab:red',
'GSA222': 'tab:purple'}
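# Illustrative plotting use of the two dicts above (dts and errs are
# hypothetical timestep/error arrays):
# import matplotlib.pyplot as plt
# m = 'ARS343'
# plt.loglog(dts, errs, lineStyleDict[m], color=colorDict[m], label=m)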
# method to create solution dictionary (note: redefined below with a different signature; the later definition takes precedence)
def create_solution_dict(globstr, testName, varRef, indRef, tRef, minDt, maxDt, suffix, suffix_omit):
solutionDict = {}
for fileName in glob.glob(globstr):
if (suffix_omit is not None and suffix_omit in fileName):
continue
words = fileName.split('/')
shortName = words[-1]
words = shortName.split('_')
dt = words[1].replace('tstep','')
dt = dt.replace('.out','')
# If current timestep is appropriate, obtain solution if it exists
if (float(dt) < maxDt and float(dt) > minDt):
directory = './output_'+fileName.replace('.out','')
print 'Reading solution in ' + directory
data = Dataset(directory+'/'+testName)
q = data[varRef][:]
t = data['time'][:]
if (len(t) > indRef and abs(t[indRef] - tRef) < 1e-10):
solutionDict[dt] = q[indRef,:,:,:]
else:
print '... skipping due to incomplete results ...'
return solutionDict
# method to create energy error dictionary
def create_energy_error_dict(globstr, minDt, maxDt, suffix, suffix_omit):
energyErrorDict = {}
for fileName in glob.glob(globstr):
if (suffix_omit is not None and suffix_omit in fileName):
continue
words = fileName.split('/')
shortName = words[-1]
words = shortName.split('_')
dt = words[1].replace('tstep','')
dt = dt.replace('.out','')
if (float(dt) < maxDt and float(dt) > minDt):
energyErrorMax = 0.0
print 'Reading diagnostics in ' + fileName
fileObj = open(fileName)
lines = list(fileObj)
fileObj.close()
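# scan backwards: once the 'Finished main timestepping loop' marker is
# seen, take the last '(E-E0)/E0' diagnostic printed before it, so only
# completed runs contribute an energy error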
flag = False
for line in reversed(lines):
if ('Finished main timestepping loop' in line):
flag = True
if (flag and '(E-E0)/E0' in line):
words = line.split()
currentError = float(words[1])
if (abs(energyErrorMax) < abs(currentError)):
energyErrorMax = currentError
break
if (flag):
energyErrorDict[dt] = energyErrorMax
else:
print '... skipping due to incomplete results ...'
return energyErrorDict
def create_walltime_dict(globstr, minDt, maxDt, suffix_omit):
walltimeDict = {}
for fileName in glob.glob(globstr):
if (suffix_omit is not None and suffix_omit in fileName):
continue
words = fileName.split('/')
shortName = words[-1]
words = shortName.split('_')
dt = words[1].replace('tstep','')
dt = dt.replace('.out','')
if (float(dt) < maxDt and float(dt) > minDt):
fileNameTiming = fileName.replace('.out','.err')
print 'Reading timing info in ' + fileNameTiming
fileObj = open(fileName)
lines = list(fileObj)
fileObj.close()
flag = False
for line in reversed(lines):
if ('Finished main timestepping loop' in line):
flag = True
break
if (not flag):
print '... skipping due to incomplete results ...'
continue
fileObj = open(fileNameTiming)
lines = list(fileObj)
fileObj.close()
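# the .err file is assumed to hold `time` output such as
# "real    12m34.567s" on its second line; below, 12m34.567s parses to
# 12*60 + 34.567 = 754.567 seconds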
words = lines[1].split()
word = words[1] # ignore the word 'real'
words = word.split(('m'))
seconds = int(words[0])*60 + float(words[1].replace('s',''))
walltimeDict[dt] = seconds
return walltimeDict
# method to create solution dictionary
def create_solution_dict(globstr, testName, varRef, indRef, tRef, minDt, maxDt, suffix_omit):
solutionDict = {}
for fileName in glob.glob(globstr):
if (suffix_omit is not None and suffix_omit in fileName):
continue
words = fileName.split('/')
shortName = words[-1]
words = shortName.split('_')
dt = words[1].replace('tstep','')
dt = dt.replace('.out','')
# If current timestep is appropriate, obtain solution if it exists
if (float(dt) < maxDt and float(dt) > minDt):
directory = fileName.replace('.out','').replace('tsteptype','output_tsteptype')
print 'Reading solution in ' + directory + '/' + testName
data = Dataset(directory+'/'+testName)
q = data[varRef][:]
t = data['time'][:]
if (len(t) > indRef and abs(t[indRef] - tRef) < 1e-10):
solutionDict[dt] = q[indRef,:,:,:]
else:
print '... skipping due to incomplete results ...'
return solutionDict
| 34.719231 | 101 | 0.519442 | 921 | 9,027 | 5.054289 | 0.237785 | 0.032223 | 0.016756 | 0.01826 | 0.628357 | 0.603222 | 0.595704 | 0.595704 | 0.595704 | 0.595704 | 0 | 0.062219 | 0.334109 | 9,027 | 259 | 102 | 34.853282 | 0.712194 | 0.089842 | 0 | 0.517391 | 0 | 0 | 0.187355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.008696 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d31fb36bbc127eece967cc6d7bfbeedbbe21933 | 225,425 | pyt | Python | LinkedDataConnector.pyt | gengchenmai/esri-linked-data-connector | bd68a3526dae77f221d60d04a5dbb96b2e456407 | [
"Apache-2.0"
] | 2 | 2020-02-18T06:35:51.000Z | 2020-07-21T12:57:54.000Z | LinkedDataConnector.pyt | gengchenmai/esri-linked-data-connector | bd68a3526dae77f221d60d04a5dbb96b2e456407 | [
"Apache-2.0"
] | null | null | null | LinkedDataConnector.pyt | gengchenmai/esri-linked-data-connector | bd68a3526dae77f221d60d04a5dbb96b2e456407 | [
"Apache-2.0"
] | 2 | 2020-06-17T13:51:39.000Z | 2020-08-28T00:01:29.000Z | import arcpy
import os
import json
import requests
import re
import os
import collections
import datetime
import numpy
from sets import Set
from collections import OrderedDict
from collections import defaultdict
from collections import namedtuple
from decimal import Decimal
from arcpy import env
class Toolbox(object):
def __init__(self):
"""Define the toolbox (the name of the toolbox is the name of the
.pyt file)."""
self.label = "Toolbox"
self.alias = ""
# List of tool classes associated with this toolbox
self.tools = [LinkedDataSpatialQuery, LinkedDataPropertyEnrich, MergeBatchNoFunctionalProperty, MergeSingleNoFunctionalProperty, LocationPropertyPath, RelFinder]
class LinkedDataSpatialQuery(object):
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Spatial Query"
self.description = "Get geographic features from wikidata by mouse clicking. The Place type can be specified."
self.canRunInBackground = False
self.entityTypeURLList = []
self.entityTypeLabel = []
self.enterTypeText = ""
def getParameterInfo(self):
"""Define parameter definitions"""
# interactively draw feature on the map
in_buf_query_center = arcpy.Parameter(
displayName="Input Buffer Query Center",
name="in_buf_query_center",
datatype="GPFeatureRecordSetLayer",
parameterType="Required",
direction="Input")
# Use __file__ attribute to find the .lyr file (assuming the
# .pyt and .lyr files exist in the same folder)
in_buf_query_center.value = os.path.join(os.path.dirname(__file__),"symbology.lyr")
in_buf_query_center.filter.list = ["Point"]
# Choose place type for search
in_place_type = arcpy.Parameter(
displayName="Input Place Type",
name="in_place_type",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_place_type.filter.type = "ValueList"
in_place_type.filter.list = []
# Choose place type for search
in_is_directed_instance = arcpy.Parameter(
displayName="Disable Transitive Subclass Reasoning",
name="in_is_directed_instance",
datatype="GPBoolean",
parameterType="Optional",
direction="Input")
in_is_directed_instance.value = False
# Search Radius
in_radius = arcpy.Parameter(
displayName="Input Search Radius (mile)",
name="in_radius",
datatype="GPString",
parameterType="Required",
direction="Input")
in_radius.value = "10"
out_location = arcpy.Parameter(
displayName="Output Location",
name="out_location",
datatype="DEWorkspace",
parameterType="Required",
direction="Input")
out_location.value = os.path.dirname(__file__)
# Derived Output Point Feature Class Name
out_points_name = arcpy.Parameter(
displayName="Output Point Feature Class Name",
name="out_points_name",
datatype="GPString",
parameterType="Required",
direction="Input")
# out_features.parameterDependencies = [in_buf_query_center.name]
# out_features.schema.clone = True
out_place_type_url = arcpy.Parameter(
displayName="Output Place Type URL",
name="out_place_type_url",
datatype="GPString",
parameterType="Derived",
direction="Output")
out_place_type_url.value = ""
# out_place_type_url.parameterDependencies = [in_place_type.name]
params = [in_buf_query_center, in_place_type, in_is_directed_instance, in_radius, out_location, out_points_name, out_place_type_url]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_buf_query_center = parameters[0]
in_place_type = parameters[1]
in_is_directed_instance = parameters[2]
in_radius = parameters[3]
out_location = parameters[4]
out_points_name = parameters[5]
out_place_type_url = parameters[6]
outLocation = out_location.valueAsText
outFeatureClassName = out_points_name.valueAsText
arcpy.env.workspace = outLocation
if out_points_name.value and arcpy.Exists(os.path.join(outLocation, outFeatureClassName)):
arcpy.AddError("The Output Point Feature Class Name already exists in the current workspace!")
raise arcpy.ExecuteError
if in_place_type.value:
enterTypeText = in_place_type.valueAsText
if "(" in enterTypeText:
lastIndex = enterTypeText.rfind("(")
placeType = enterTypeText[:lastIndex]
else:
placeType = enterTypeText
# messages.addMessage("Use Input Type: {0}.".format(in_place_type.valueAsText))
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>"""
entityTypeQuery = queryPrefix + """SELECT ?entityType ?entityTypeLabel
WHERE
{
#?entity wdt:P31 ?entityType.
?entityType wdt:P279* wd:Q2221906.
# retrieve the English label
?entityType rdfs:label ?entityTypeLabel .
FILTER (LANG(?entityTypeLabel) = "en")
FILTER REGEX(?entityTypeLabel, '""" + placeType + """')
# show results ordered by distance
}
"""
# sparqlParam = {'query':'SELECT ?item ?itemLabel WHERE{ ?item wdt:P31 wd:Q146 . SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }}', 'format':'json'}
entityTypeSparqlParam = {'query': entityTypeQuery, 'format': 'json'}
# headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
sparqlRequest = requests.get('https://query.wikidata.org/sparql', params=entityTypeSparqlParam)
print(sparqlRequest.url)
# messages.addMessage("URL: {0}.".format(sparqlRequest.url))
entityTypeJson = sparqlRequest.json()["results"]["bindings"]
if len(entityTypeJson) == 0:
arcpy.AddError("No entity type matches the user's input.")
raise arcpy.ExecuteError
else:
in_place_type.filter.list = [enterTypeText]
self.entityTypeLabel = []
self.entityTypeURLList = []
for jsonItem in entityTypeJson:
label = jsonItem["entityTypeLabel"]["value"]
wikiURL = jsonItem["entityType"]["value"]
wikiURLLastIndex = wikiURL.rfind("/")
wikiURLLastName = wikiURL[(wikiURLLastIndex+1):]
self.entityTypeLabel.append(label+"("+"wd:"+wikiURLLastName+")")
self.entityTypeURLList.append(wikiURL)
# in_place_type.filter.list.append(jsonItem["entityTypeLabel"]["value"])
in_place_type.filter.list = in_place_type.filter.list + self.entityTypeLabel
for i in range(len(self.entityTypeLabel)):
# messages.addMessage("Label: {0}".format(self.entityTypeLabel[i]))
if in_place_type.valueAsText == self.entityTypeLabel[i]:
out_place_type_url.value = self.entityTypeURLList[i]
return
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_buf_query_center = parameters[0]
in_place_type = parameters[1]
in_is_directed_instance = parameters[2]
in_radius = parameters[3]
out_location = parameters[4]
out_points_name = parameters[5]
out_place_type_url = parameters[6]
inBufCenter = in_buf_query_center.valueAsText
inPlaceType = in_place_type.valueAsText
searchRadius = in_radius.valueAsText
outLocation = out_location.valueAsText
outFeatureClassName = out_points_name.valueAsText
isDirectInstance = False
if in_is_directed_instance.valueAsText == 'true':
isDirectInstance = True
elif in_is_directed_instance.valueAsText == 'false':
isDirectInstance = False
# arcpy.AddMessage(("in_is_directed_instance.valueAsText: {0}").format(in_is_directed_instance.valueAsText))
if ".gdb" in outLocation:
# if the outputLocation is a file geodatabase, concatenate the outputLocation with outFeatureClassName to create a feature class in the current geodatabase
out_path = os.path.join(outLocation,outFeatureClassName)
else:
# if the outputLocation is a folder, create a shapefile in this folder
out_path = os.path.join(outLocation,outFeatureClassName) + ".shp"
# however, a Relationship Class must be created in a geodatabase, so we would forbid creating a shapefile
# messages.addErrorMessage("Please enter a file geodatabase as output location in order to create a relation class")
# raise arcpy.ExecuteError
messages.addMessage("outpath: {0}".format(out_path))
selectedURL = out_place_type_url.valueAsText
# messages.addMessage("len(self.entityTypeLabel): {0}".format(len(self.entityTypeLabel)))
# for i in range(len(self.entityTypeLabel)):
# messages.addMessage("Label: {0}".format(self.entityTypeLabel[i]))
# if inPlaceType == self.entityTypeLabel[i]:
# selectedURL = self.entityTypeURLList[i]
messages.addMessage("selectedURL: {0}".format(selectedURL))
# Create a FeatureSet object and load in_memory feature class
in_feature_set = arcpy.FeatureSet()
in_feature_set.load(inBufCenter)
in_feature_set_json = json.loads(in_feature_set.JSON)
# messages.addMessage("Points: {0}".format(json.loads(in_feature_set.JSON)))
# messages.addMessage("Point: {0}".format(json.loads(in_feature_set.JSON)['spatialReference']['wkid']))
WGS84Reference = arcpy.SpatialReference(4326)
currentSpatialReference = arcpy.SpatialReference(in_feature_set_json['spatialReference']['latestWkid'])
# a set of unique Coordinates for each input points
# searchCoordsSet = Set()
searchCoordsSet = []
for i in range(len(in_feature_set_json['features'])):
lat = in_feature_set_json['features'][i]['geometry']['y']
lng = in_feature_set_json['features'][i]['geometry']['x']
coords = [lng, lat]
searchCoordsSet.append(coords)
# if i == 0:
# searchCoordsSet.append(coords)
# else:
# if coords not in searchCoordsSet:
# searchCoordsSet.add(coords)
# searchCoordsSet = List(searchCoordsSet)
# a set of unique Coordinates for each found places
placeIRISet = Set()
placeList = []
for coord in searchCoordsSet:
lat = coord[1]
lng = coord[0]
# lat = in_feature_set_json['features'][0]['geometry']['y']
# lng = in_feature_set_json['features'][0]['geometry']['x']
if in_feature_set_json['spatialReference']['wkid'] != 4326 or in_feature_set_json['spatialReference']['latestWkid'] != 4326:
WGS84PtGeometry = arcpy.PointGeometry(arcpy.Point(lng, lat), currentSpatialReference).projectAs(WGS84Reference)
# messages.addMessage("My Coordinates: {0}".format(WGS84PtGeometry.WKT))
coordList = re.split("[( )]", WGS84PtGeometry.WKT)
lat = coordList[3]
lng = coordList[2]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX geo-pos: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX omgeo: <http://www.ontotext.com/owlim/geo#>
PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX dbp-ont: <http://dbpedia.org/ontology/>
PREFIX ff: <http://factforge.net/>
PREFIX om: <http://www.ontotext.com/owlim/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>"""
if selectedURL != None:
query = queryPrefix + """SELECT distinct ?place ?placeLabel ?distance ?location
WHERE {
# geospatial queries
SERVICE wikibase:around {
# get the coordinates of a place
?place wdt:P625 ?location .
# create a buffer around (-122.4784360859997 37.81826788900048)
bd:serviceParam wikibase:center "Point(""" + str(lng) + """ """ + str(lat) + """)"^^geo:wktLiteral .
# buffer radius (km)
bd:serviceParam wikibase:radius '"""+searchRadius+"""' .
bd:serviceParam wikibase:distance ?distance .
}
# retrieve the English label
SERVICE wikibase:label {bd:serviceParam wikibase:language "en". ?place rdfs:label ?placeLabel .}"""
if isDirectInstance == False:
query += """?place wdt:P31 ?placeFlatType.
?placeFlatType wdt:P279* <""" + selectedURL + """>."""
else:
query += """?place wdt:P31 <""" + selectedURL + """>."""
# show results ordered by distance
query += """} ORDER BY ?distance"""
else:
query = queryPrefix + """SELECT distinct ?place ?placeLabel ?distance ?location
WHERE {
# geospatial queries
SERVICE wikibase:around {
# get the coordinates of a place
?place wdt:P625 ?location .
# create a buffer around (-122.4784360859997 37.81826788900048)
bd:serviceParam wikibase:center "Point(""" + str(lng) + """ """ + str(lat) + """)"^^geo:wktLiteral .
# buffer radius (km)
bd:serviceParam wikibase:radius '"""+searchRadius+"""' .
bd:serviceParam wikibase:distance ?distance .
}
# retrieve the English label
SERVICE wikibase:label {bd:serviceParam wikibase:language "en". ?place rdfs:label ?placeLabel .}
?place wdt:P31 ?placeFlatType.
?placeFlatType wdt:P279* wd:Q2221906.
# show results ordered by distance
} ORDER BY ?distance"""
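# both query variants rely on the Wikidata query service's wikibase:around
# geospatial SERVICE: the center point and radius (in kilometers) go in
# via bd:serviceParam, and ?distance comes back bound for the ORDER BY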
sparqlParam = {'query': query, 'format': 'json'}
sparqlRequest = requests.get('https://query.wikidata.org/sparql', params=sparqlParam)
print(sparqlRequest.url)
messages.addMessage("SPARQL: {0}".format(sparqlRequest.url))
bufferQueryResult = sparqlRequest.json()["results"]["bindings"]
# if len(bufferQueryResult) == 0:
# messages.addMessage("No {0} nearby the clicked place can be finded!".format(inPlaceType))
# # pythonaddins.MessageBox("No " + inPlaceType + " nearby the clicked place can be finded!",
# # "Warning Message", 0)
# else:
for item in bufferQueryResult:
print "%s\t%s\t%s\t%s" % (
item["place"]["value"], item["placeLabel"]["value"], item["distance"]["value"],
item["location"]["value"])
if len(placeIRISet) == 0 or item["place"]["value"] not in placeIRISet:
placeIRISet.add(item["place"]["value"])
coordItem = item["location"]["value"]
coordList = re.split("[( )]", coordItem)
itemlat = coordList[2]
itemlng = coordList[1]
placeList.append(
[item["place"]["value"], item["placeLabel"]["value"], item["distance"]["value"],
itemlat,itemlng])
if len(placeList) == 0:
messages.addMessage("No {0} nearby the input point(s) can be finded!".format(inPlaceType))
else:
# Spatial reference set to GCS_WGS_1984
spatial_reference = arcpy.SpatialReference(4326)
# creat a Point feature class in arcpy
pt = arcpy.Point()
ptGeoms = []
for p in placeList:
pt.X = float(p[4])
pt.Y = float(p[3])
pointGeometry = arcpy.PointGeometry(pt, spatial_reference)
ptGeoms.append(pointGeometry)
# out_path = pythonaddins.SaveDialog("Save Nearby Places", "placeNear",
# os.path.dirname(arcpy.mapping.MapDocument("current").filePath),
# FileGDBSave())
if out_path == None:
messages.addMessage("No data will be added to the map document.")
# pythonaddins.MessageBox("No data will be added to the map document.", "Warning Message", 0)
else:
# create a geometry Feature class to represent
placeNearFeatureClass = arcpy.CopyFeatures_management(ptGeoms, out_path)
labelFieldLength = Json2Field.fieldLengthDecide(bufferQueryResult, "placeLabel")
arcpy.AddMessage("labelFieldLength: {0}".format(labelFieldLength))
# add field to this point feature class
arcpy.AddField_management(placeNearFeatureClass, "Label", "TEXT", field_length=labelFieldLength)
arcpy.AddField_management(placeNearFeatureClass, "URL", "TEXT", field_length=100)
# arcpy.AddField_management(placeNearFeatureClass, "TypeURL", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "TypeName", "TEXT", field_length=50)
# if selectedURL != None:
# arcpy.AddField_management(placeNearFeatureClass, "BTypeURL", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "BTypeName", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "Latitude", "TEXT", 10, 10)
# arcpy.AddField_management(placeNearFeatureClass, "Longitude", "TEXT", 10, 10)
arcpy.AddXY_management(placeNearFeatureClass)
# add label, latitude, longitude value to this point feature class
i = 0
cursor = arcpy.UpdateCursor(out_path)
row = cursor.next()
while row:
row.setValue("Label", placeList[i][1])
row.setValue("URL", placeList[i][0])
# row.setValue("TypeURL", placeList[i][5])
# row.setValue("TypeName", placeList[i][6])
cursor.updateRow(row)
i = i + 1
row = cursor.next()
# if selectedURL != None:
# i = 0
# cursor = arcpy.UpdateCursor(out_path)
# row = cursor.next()
# while row:
# row.setValue("BTypeURL", selectedURL)
# row.setValue("BTypeName", inPlaceType)
# cursor.updateRow(row)
# i = i + 1
# row = cursor.next()
# get the map document
# mxd = arcpy.mapping.MapDocument(
# r"D:\UCSB_STKO_Lab\STKO Research\research\DBpedia-Search-plugin\wiki1.mxd")
mxd = arcpy.mapping.MapDocument("CURRENT")
# get the data frame
df = arcpy.mapping.ListDataFrames(mxd)[0]
# create a new layer
placeNearLayer = arcpy.mapping.Layer(out_path)
# add the layer to the map at the bottom of the TOC in data frame 0
arcpy.mapping.AddLayer(df, placeNearLayer, "BOTTOM")
return
class LinkedDataPropertyEnrich(object):
count = 0
propertyNameList = []
propertyURLList = []
propertyURLDict = dict()
inversePropertyNameList = []
inversePropertyURLList = []
inversePropertyURLDict = dict()
expandedPropertyNameList = []
expandedPropertyURLList = []
expandedPropertyURLDict = dict()
inverseExpandedPropertyNameList = []
inverseExpandedPropertyURLList = []
inverseExpandedPropertyURLDict = dict()
# FunctionalPropertySet = Set()
# noFunctionalPropertyURLList = []
# noFunctionalPropertyNameList = []
# noFunctionalPropertyURLDict = dict()
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Location Entities Property Enrichment"
self.description = "Get the most common properties from DBpedia according to input wikidata location entity IRI"
self.canRunInBackground = False
# self.propertyURLList = []
#propertyNameList = []
LinkedDataPropertyEnrich.count += 1
def getParameterInfo(self):
"""Define parameter definitions"""
# The input Feature class which is the output of LinkedDataAnalysis Tool, "URL" column should be included in the attribute table
in_wikiplace_IRI = arcpy.Parameter(
displayName="Input wikidata location entities Feature Class",
name="in_wikiplace_IRI",
datatype="DEFeatureClass",
parameterType="Required",
direction="Input")
# Use __file__ attribute to find the .lyr file (assuming the
# .pyt and .lyr files exist in the same folder)
in_wikiplace_IRI.filter.list = ["Point"]
# Choose place type for search
in_com_property = arcpy.Parameter(
displayName="Common Properties",
name="in_com_property",
datatype="GPString",
parameterType="Optional",
direction="Input",
multiValue=True)
# in_com_property.parameterDependencies = [in_wikiplace_IRI.name]
# in_com_property.columns =([["GPString", "GPString"], ["GPString","GPString"]])
# in_com_property.filters[0].type = 'ValueList'
# in_com_property.filters[0].list = []
# in_com_property.filters[1].type = 'ValueList'
# in_com_property.filters[1].list = []
in_com_property.filter.type = "ValueList"
in_com_property.filter.list = []
in_boolean_inverse_com = arcpy.Parameter(
displayName="Get Inverse Common Properties",
name="in_boolean_inverse_com",
datatype="GPBoolean",
parameterType="Optional",
direction="Input")
in_boolean_inverse_com.value = False
in_inverse_com_property = arcpy.Parameter(
displayName="Inverse Common Properties",
name="in_inverse_com_property",
datatype="GPString",
parameterType="Optional",
direction="Input",
multiValue=True)
in_inverse_com_property.filter.type = "ValueList"
in_inverse_com_property.filter.list = []
in_inverse_com_property.enabled = False
in_boolean_isPartOf = arcpy.Parameter(
displayName="Get Expanded Common Properties by following Transitive Inverse Partonomical and Subdivision Paths",
name="in_boolean_isPartOf",
datatype="GPBoolean",
parameterType="Optional",
direction="Input")
in_boolean_isPartOf.value = False
in_expanded_com_property = arcpy.Parameter(
displayName="Expanded Common Properties",
name="in_expanded_com_property",
datatype="GPString",
parameterType="Optional",
direction="Input",
multiValue=True)
in_expanded_com_property.filter.type = "ValueList"
in_expanded_com_property.filter.list = []
in_expanded_com_property.enabled = False
in_boolean_inverse_expanded_com = arcpy.Parameter(
displayName="Get Inverse Expanded Common Properties",
name="in_boolean_inverse_expanded_com",
datatype="GPBoolean",
parameterType="Optional",
direction="Input")
in_boolean_inverse_expanded_com.value = False
in_inverse_expanded_com_property = arcpy.Parameter(
displayName="Inverse Expanded Common Properties",
name="in_inverse_expanded_com_property",
datatype="GPString",
parameterType="Optional",
direction="Input",
multiValue=True)
in_inverse_expanded_com_property.filter.type = "ValueList"
in_inverse_expanded_com_property.filter.list = []
in_inverse_expanded_com_property.enabled = False
params = [in_wikiplace_IRI, in_com_property, in_boolean_inverse_com, in_inverse_com_property, in_boolean_isPartOf, in_expanded_com_property, in_boolean_inverse_expanded_com, in_inverse_expanded_com_property]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_wikiplace_IRI = parameters[0]
in_com_property = parameters[1]
in_boolean_inverse_com = parameters[2]
in_inverse_com_property = parameters[3]
in_boolean_isPartOf = parameters[4]
in_expanded_com_property = parameters[5]
in_boolean_inverse_expanded_com = parameters[6]
in_inverse_expanded_com_property = parameters[7]
# out_location = parameters[2]
# out_property_table_name = parameters[3]
# out_com_property_URL = parameters[4]
        isInverse = (in_boolean_inverse_com.valueAsText == 'true')
        isExpandedPartOf = (in_boolean_isPartOf.valueAsText == 'true')
        isInverseExpanded = (in_boolean_inverse_expanded_com.valueAsText == 'true')
arcpy.AddMessage(("in_boolean_isPartOf.valueAsText: {0}").format(in_boolean_isPartOf.valueAsText))
        in_inverse_com_property.enabled = isInverse
        in_expanded_com_property.enabled = isExpandedPartOf
        in_inverse_expanded_com_property.enabled = (isExpandedPartOf and isInverseExpanded)
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
arcpy.AddMessage("{0}".format(inputFeatureClassName))
# inputFeatureClass = arcpy.Describe(inputFeatureClassName)
fieldList = arcpy.ListFields(inputFeatureClassName)
isURLinFieldList = False
for field in fieldList:
if field.name == "URL":
isURLinFieldList = True
            if not isURLinFieldList:
                arcpy.AddError("Please provide a point feature class that includes a 'URL' field holding the wikidata IRI of each entity")
                raise arcpy.ExecuteError
else:
# update the output directory of this tool to the same geodatabase
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
outputLocation = inputFeatureClassName[:lastIndexOFGDB]
# out_location.value = outputLocation
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
if len(inplaceIRIList) == 0:
arcpy.AddMessage("Input Feature class do not have record")
raise arcpy.ExecuteError
else:
# get the direct common property
commonPropertyJSONObj = SPARQLQuery.commonPropertyQuery(inplaceIRIList)
commonPropertyJSON = commonPropertyJSONObj["results"]["bindings"]
if len(commonPropertyJSON) == 0:
arcpy.AddMessage("No property find.")
raise arcpy.ExecuteError
else:
LinkedDataPropertyEnrich.propertyURLList = []
LinkedDataPropertyEnrich.propertyNameList = []
LinkedDataPropertyEnrich.propertyURLDict = dict()
# LinkedDataPropertyEnrich.FunctionalPropertySet = Set()
# LinkedDataPropertyEnrich.noFunctionalPropertyURLList = []
# LinkedDataPropertyEnrich.noFunctionalPropertyNameList = []
# LinkedDataPropertyEnrich.noFunctionalPropertyURLDict = dict()
for jsonItem in commonPropertyJSON:
propertyURL = jsonItem["p"]["value"]
if "http://dbpedia.org/ontology/" in propertyURL or "http://dbpedia.org/property/" in propertyURL:
if propertyURL not in LinkedDataPropertyEnrich.propertyURLList:
LinkedDataPropertyEnrich.propertyURLList.append(propertyURL)
lastIndex = propertyURL.rfind("/")
propertyName = propertyURL[(lastIndex+1):]
if "http://dbpedia.org/ontology/" in propertyURL:
lastIndex = len("http://dbpedia.org/ontology/")
propertyName = propertyURL[lastIndex:]
propertyName = "dbo:" + propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
elif "http://dbpedia.org/property/" in propertyURL:
lastIndex = len("http://dbpedia.org/property/")
propertyName = propertyURL[lastIndex:]
propertyName = "dbp:" + propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
# propertyName = propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
LinkedDataPropertyEnrich.propertyNameList.append(propertyName)
# propertyNameURLList.append(propertyURL + " " +propertyName)
LinkedDataPropertyEnrich.propertyURLDict = dict(zip(LinkedDataPropertyEnrich.propertyNameList, LinkedDataPropertyEnrich.propertyURLList))
in_com_property.filter.list = LinkedDataPropertyEnrich.propertyNameList
# in_com_property.filters[0] = LinkedDataPropertyEnrich.propertyNameList
# in_com_property.filter.list = LinkedDataPropertyEnrich.propertyNameList
# in_com_property.filters[0].list = LinkedDataPropertyEnrich.propertyNameList
# in_com_property.filters[1].list = LinkedDataPropertyEnrich.propertyURLList
# arcpy.AddMessage("URLLIst: {0}".format(LinkedDataPropertyEnrich.propertyURLList))
# arcpy.AddMessage("NameLIst: {0}".format(LinkedDataPropertyEnrich.propertyNameList))
# out_com_property_URL.filter.list = LinkedDataPropertyEnrich.propertyURLList
# get the inverse direct common property
if isInverse == True:
inverseCommonPropertyJSONObj = SPARQLQuery.inverseCommonPropertyQuery(inplaceIRIList)
inverseCommonPropertyJSON = inverseCommonPropertyJSONObj["results"]["bindings"]
if len(inverseCommonPropertyJSON) == 0:
arcpy.AddMessage("No inverse property find.")
raise arcpy.ExecuteError
else:
LinkedDataPropertyEnrich.inversePropertyNameList = []
LinkedDataPropertyEnrich.inversePropertyURLList = []
LinkedDataPropertyEnrich.inversePropertyURLDict = dict()
# LinkedDataPropertyEnrich.propertyURLList = []
# LinkedDataPropertyEnrich.propertyNameList = []
# LinkedDataPropertyEnrich.propertyURLDict = dict()
for jsonItem in inverseCommonPropertyJSON:
propertyURL = jsonItem["p"]["value"]
if "http://dbpedia.org/ontology/" in propertyURL or "http://dbpedia.org/property/" in propertyURL:
if propertyURL not in LinkedDataPropertyEnrich.inversePropertyURLList:
LinkedDataPropertyEnrich.inversePropertyURLList.append(propertyURL)
# lastIndex = propertyURL.rfind("/")
# propertyName = propertyURL[(lastIndex+1):]
if "http://dbpedia.org/ontology/" in propertyURL:
lastIndex = len("http://dbpedia.org/ontology/")
propertyName = propertyURL[lastIndex:]
propertyName = "is dbo:" + propertyName + " Of (" + jsonItem["NumofSub"]["value"] + ")"
elif "http://dbpedia.org/property/" in propertyURL:
lastIndex = len("http://dbpedia.org/property/")
propertyName = propertyURL[lastIndex:]
propertyName = "is dbp:" + propertyName + " Of (" + jsonItem["NumofSub"]["value"] + ")"
# propertyName = propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
LinkedDataPropertyEnrich.inversePropertyNameList.append(propertyName)
# propertyNameURLList.append(propertyURL + " " +propertyName)
LinkedDataPropertyEnrich.inversePropertyURLDict = dict(zip(LinkedDataPropertyEnrich.inversePropertyNameList, LinkedDataPropertyEnrich.inversePropertyURLList))
in_inverse_com_property.filter.list = LinkedDataPropertyEnrich.inversePropertyNameList
if isExpandedPartOf == True:
expandedCommonPropertyJSONObj = SPARQLQuery.locationDBpediaExpandedCommonPropertyQuery(inplaceIRIList)
expandedCommonPropertyJSON = expandedCommonPropertyJSONObj["results"]["bindings"]
if len(expandedCommonPropertyJSON) == 0:
arcpy.AddMessage("No expanded property find.")
raise arcpy.ExecuteError
else:
# LinkedDataPropertyEnrich.propertyURLList = []
# LinkedDataPropertyEnrich.propertyNameList = []
# LinkedDataPropertyEnrich.propertyURLDict = dict()
# LinkedDataPropertyEnrich.FunctionalPropertySet = Set()
LinkedDataPropertyEnrich.expandedPropertyNameList = []
LinkedDataPropertyEnrich.expandedPropertyURLList = []
LinkedDataPropertyEnrich.expandedPropertyURLDict = dict()
for jsonItem in expandedCommonPropertyJSON:
propertyURL = jsonItem["p"]["value"]
if "http://dbpedia.org/ontology/" in propertyURL or "http://dbpedia.org/property/" in propertyURL:
if propertyURL not in LinkedDataPropertyEnrich.expandedPropertyURLList:
LinkedDataPropertyEnrich.expandedPropertyURLList.append(propertyURL)
# lastIndex = propertyURL.rfind("/")
# propertyName = propertyURL[(lastIndex+1):]
if "http://dbpedia.org/ontology/" in propertyURL:
lastIndex = len("http://dbpedia.org/ontology/")
propertyName = propertyURL[lastIndex:]
propertyName = "dbo:" + propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
elif "http://dbpedia.org/property/" in propertyURL:
lastIndex = len("http://dbpedia.org/property/")
propertyName = propertyURL[lastIndex:]
propertyName = "dbp:" + propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
# propertyName = propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
LinkedDataPropertyEnrich.expandedPropertyNameList.append(propertyName)
# propertyNameURLList.append(propertyURL + " " +propertyName)
LinkedDataPropertyEnrich.expandedPropertyURLDict = dict(zip(LinkedDataPropertyEnrich.expandedPropertyNameList, LinkedDataPropertyEnrich.expandedPropertyURLList))
in_expanded_com_property.filter.list = LinkedDataPropertyEnrich.expandedPropertyNameList
if isInverseExpanded == True:
inverseExpandedCommonPropertyJSONObj = SPARQLQuery.locationDBpediaInverseExpandedCommonPropertyQuery(inplaceIRIList)
inverseExpandedCommonPropertyJSON = inverseExpandedCommonPropertyJSONObj["results"]["bindings"]
if len(inverseExpandedCommonPropertyJSON) == 0:
arcpy.AddMessage("No inverse expanded property find.")
raise arcpy.ExecuteError
else:
# LinkedDataPropertyEnrich.propertyURLList = []
# LinkedDataPropertyEnrich.propertyNameList = []
# LinkedDataPropertyEnrich.propertyURLDict = dict()
# LinkedDataPropertyEnrich.FunctionalPropertySet = Set()
LinkedDataPropertyEnrich.inverseExpandedPropertyNameList = []
LinkedDataPropertyEnrich.inverseExpandedPropertyURLList = []
LinkedDataPropertyEnrich.inverseExpandedPropertyURLDict = dict()
for jsonItem in inverseExpandedCommonPropertyJSON:
propertyURL = jsonItem["p"]["value"]
if "http://dbpedia.org/ontology/" in propertyURL or "http://dbpedia.org/property/" in propertyURL:
if propertyURL not in LinkedDataPropertyEnrich.inverseExpandedPropertyURLList:
LinkedDataPropertyEnrich.inverseExpandedPropertyURLList.append(propertyURL)
if "http://dbpedia.org/ontology/" in propertyURL:
lastIndex = len("http://dbpedia.org/ontology/")
propertyName = propertyURL[lastIndex:]
propertyName = "is dbo:" + propertyName + " Of (" + jsonItem["NumofSub"]["value"] + ")"
elif "http://dbpedia.org/property/" in propertyURL:
lastIndex = len("http://dbpedia.org/property/")
propertyName = propertyURL[lastIndex:]
propertyName = "is dbp:" + propertyName + " Of (" + jsonItem["NumofSub"]["value"] + ")"
# propertyName = propertyName + "(" + jsonItem["NumofSub"]["value"] + ")"
LinkedDataPropertyEnrich.inverseExpandedPropertyNameList.append(propertyName)
# propertyNameURLList.append(propertyURL + " " +propertyName)
LinkedDataPropertyEnrich.inverseExpandedPropertyURLDict = dict(zip(LinkedDataPropertyEnrich.inverseExpandedPropertyNameList, LinkedDataPropertyEnrich.inverseExpandedPropertyURLList))
in_inverse_expanded_com_property.filter.list = LinkedDataPropertyEnrich.inverseExpandedPropertyNameList
return
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_wikiplace_IRI = parameters[0]
in_com_property = parameters[1]
in_boolean_inverse_com = parameters[2]
in_inverse_com_property = parameters[3]
in_boolean_isPartOf = parameters[4]
in_expanded_com_property = parameters[5]
in_boolean_inverse_expanded_com = parameters[6]
in_inverse_expanded_com_property = parameters[7]
# out_location = parameters[2]
# out_property_table_name = parameters[3]
# out_com_property_URL = parameters[4]
# arcpy.AddMessage("propertyNameList: {0}".format(LinkedDataPropertyEnrich.propertyNameList))
# arcpy.AddMessage("propertyURLList: {0}".format(LinkedDataPropertyEnrich.propertyURLList))
        isInverse = (in_boolean_inverse_com.valueAsText == 'true')
        isExpandedPartOf = (in_boolean_isPartOf.valueAsText == 'true')
        isInverseExpanded = (in_boolean_inverse_expanded_com.valueAsText == 'true')
arcpy.AddMessage(("in_boolean_isPartOf.valueAsText: {0}").format(in_boolean_isPartOf.valueAsText))
for URL in LinkedDataPropertyEnrich.propertyURLList:
arcpy.AddMessage(URL)
arcpy.AddMessage("count: {0}".format(LinkedDataPropertyEnrich.count))
# arcpy.AddMessage("FunctionalPropertySet: {0}".format(LinkedDataPropertyEnrich.FunctionalPropertySet))
inputFeatureClassName = in_wikiplace_IRI.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
outputLocation = inputFeatureClassName[:lastIndexOFGDB]
if outputLocation.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a feature class in file geodatabase for the input feature class in order to create a relation class")
raise arcpy.ExecuteError
else:
arcpy.env.workspace = outputLocation
            featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
# send a SPARQL query to DBpedia endpoint to get the DBpedia IRI according to wikidata IRI
dbpediaIRIJSONObj = SPARQLQuery.dbpediaIRIQuery(inplaceIRIList)
dbpediaIRIJSON = dbpediaIRIJSONObj["results"]["bindings"]
# according to dbpediaIRIJSON, add or update the field "DBpediaURL" in inputFeatureClass table
Json2Field.addOrUpdateFieldInTableByMapping(dbpediaIRIJSON, "wikidataSub", "DBpediaSub", inputFeatureClassName, "URL", "DBpediaURL")
# if outputLocation != out_location.valueAsText:
# messages.addErrorMessage("Please make the output location the same geodatabase as the input Feature class")
# raise arcpy.ExecuteError
# else:
arcpy.AddMessage(in_com_property.valueAsText)
propertySelect = in_com_property.valueAsText
selectPropertyURLList = []
            if propertySelect is not None:
propertySplitList = re.split("[;]", propertySelect)
for propertyItem in propertySplitList:
# lastIndex = propertyItem.rfind("(")
# propertyItem = propertyItem[:lastIndex]
selectPropertyURLList.append(LinkedDataPropertyEnrich.propertyURLDict[propertyItem])
# propertyURL = "http://dbpedia.org/ontology/" + propertyItem[:lastIndex]
# LinkedDataPropertyEnrich.propertyURLList.append(propertyURL)
# arcpy.AddMessage("URLList: {0}".format(LinkedDataPropertyEnrich.propertyURLList))
# send a SPARQL query to DBpedia endpoint to test whether the properties are functionalProperty
isFuncnalPropertyJSON = SPARQLQuery.functionalPropertyQuery(selectPropertyURLList)
# isFuncnalPropertyJSON = isFuncnalPropertyJSONObj["results"]["bindings"]
FunctionalPropertySet = Set()
for jsonItem in isFuncnalPropertyJSON:
functionalPropertyURL = jsonItem["property"]["value"]
FunctionalPropertySet.add(functionalPropertyURL)
arcpy.AddMessage("FunctionalPropertySet: {0}".format(FunctionalPropertySet))
# get the value for each functionalProperty
FuncnalPropertyList = list(FunctionalPropertySet)
# add these functionalProperty value to feature class table
for functionalProperty in FuncnalPropertyList:
functionalPropertyJSON = SPARQLQuery.propertyValueQuery(inplaceIRIList, functionalProperty)
# functionalPropertyJSON = functionalPropertyJSONObj["results"]["bindings"]
Json2Field.addFieldInTableByMapping(functionalPropertyJSON, "wikidataSub", "o", inputFeatureClassName, "URL", functionalProperty, False)
selectPropertyURLSet = Set(selectPropertyURLList)
noFunctionalPropertySet = selectPropertyURLSet.difference(FunctionalPropertySet)
noFunctionalPropertyList = list(noFunctionalPropertySet)
for noFunctionalProperty in noFunctionalPropertyList:
noFunctionalPropertyJSON = SPARQLQuery.propertyValueQuery(inplaceIRIList, noFunctionalProperty)
# noFunctionalPropertyJSON = noFunctionalPropertyJSONObj["results"]["bindings"]
                # create a separate table to store the one-to-many property values; returns the created table name
                tableName = Json2Field.createMappingTableFromJSON(noFunctionalPropertyJSON, "wikidataSub", "o", noFunctionalProperty, inputFeatureClassName, "wikiURL", False, False)
                # create a relationship class between the original feature class and the created table
relationshipClassName = featureClassName + "_" + tableName + "_RelClass"
arcpy.CreateRelationshipClass_management(featureClassName, tableName, relationshipClassName, "SIMPLE",
noFunctionalProperty, "features from wikidata",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "wikiURL")
            # if the user wants the inverse properties
            if isInverse:
inversePropertySelect = in_inverse_com_property.valueAsText
arcpy.AddMessage("LinkedDataPropertyEnrich.inversePropertyURLDict: {0}".format(LinkedDataPropertyEnrich.inversePropertyURLDict))
arcpy.AddMessage("inversePropertySelect: {0}".format(inversePropertySelect))
selectInversePropertyURLList = []
inversePropertySplitList = re.split("[;]", inversePropertySelect)
for propertyItem in inversePropertySplitList:
selectInversePropertyURLList.append(LinkedDataPropertyEnrich.inversePropertyURLDict[propertyItem.split("\'")[1]])
# send a SPARQL query to DBpedia endpoint to test whether the properties are InverseFunctionalProperty
isInverseFuncnalPropertyJSON = SPARQLQuery.inverseFunctionalPropertyQuery(selectInversePropertyURLList)
# isFuncnalPropertyJSON = isFuncnalPropertyJSONObj["results"]["bindings"]
inverseFunctionalPropertySet = Set()
for jsonItem in isInverseFuncnalPropertyJSON:
inverseFunctionalPropertyURL = jsonItem["property"]["value"]
inverseFunctionalPropertySet.add(inverseFunctionalPropertyURL)
arcpy.AddMessage("inverseFunctionalPropertySet: {0}".format(inverseFunctionalPropertySet))
# get the value for each functionalProperty
inverseFuncnalPropertyList = list(inverseFunctionalPropertySet)
# add these inverseFunctionalProperty subject value to feature class table
for inverseFunctionalProperty in inverseFuncnalPropertyList:
inverseFunctionalPropertyJSON = SPARQLQuery.inversePropertyValueQuery(inplaceIRIList, inverseFunctionalProperty)
# functionalPropertyJSON = functionalPropertyJSONObj["results"]["bindings"]
Json2Field.addFieldInTableByMapping(inverseFunctionalPropertyJSON, "wikidataSub", "o", inputFeatureClassName, "URL", inverseFunctionalProperty, True)
selectInversePropertyURLSet = Set(selectInversePropertyURLList)
noFunctionalInversePropertySet = selectInversePropertyURLSet.difference(inverseFunctionalPropertySet)
noFunctionalInversePropertyList = list(noFunctionalInversePropertySet)
for noFunctionalInverseProperty in noFunctionalInversePropertyList:
noFunctionalInversePropertyJSON = SPARQLQuery.inversePropertyValueQuery(inplaceIRIList, noFunctionalInverseProperty)
# noFunctionalPropertyJSON = noFunctionalPropertyJSONObj["results"]["bindings"]
                        # create a separate table to store the one-to-many property-subject pairs; returns the created table name
                        tableName = Json2Field.createMappingTableFromJSON(noFunctionalInversePropertyJSON, "wikidataSub", "o", noFunctionalInverseProperty, inputFeatureClassName, "wikiURL", True, False)
                        # create a relationship class between the original feature class and the created table
relationshipClassName = featureClassName + "_" + tableName + "_RelClass"
arcpy.AddMessage("featureClassName: {0}".format(featureClassName))
arcpy.AddMessage("tableName: {0}".format(tableName))
arcpy.CreateRelationshipClass_management(featureClassName, tableName, relationshipClassName, "SIMPLE",
noFunctionalInverseProperty, "features from wikidata",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "wikiURL")
            # if the user wants the expanded properties
            if isExpandedPartOf:
arcpy.AddMessage(in_expanded_com_property.valueAsText)
expandedPropertySelect = in_expanded_com_property.valueAsText
selectExpandedPropertyURLList = []
expandedPropertySplitList = re.split("[;]", expandedPropertySelect)
for propertyItem in expandedPropertySplitList:
selectExpandedPropertyURLList.append(LinkedDataPropertyEnrich.expandedPropertyURLDict[propertyItem])
isPartOfReverseTransiveJSON = SPARQLQuery.isPartOfReverseTransiveQuery(inplaceIRIList)
                # create a separate table to store the "isPartOf" transitive relationships; returns the created table name
                isPartOfTableName = Json2Field.createMappingTableFromJSON(isPartOfReverseTransiveJSON, "wikidataSub", "subDivision", "http://dbpedia.org/ontology/isPartOf_reverse_Transtive", inputFeatureClassName, "wikiURL", False, True)
                # create a relationship class between the original feature class and the created table
isPartOfRelationshipClassName = featureClassName + "_" + isPartOfTableName + "_RelClass"
arcpy.CreateRelationshipClass_management(featureClassName, isPartOfTableName, isPartOfRelationshipClassName, "SIMPLE",
"is \"http://dbpedia.org/ontology/isPartOf+\" of", "http://dbpedia.org/ontology/isPartOf+",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "wikiURL")
# send a SPARQL query to DBpedia endpoint to test whether the properties are functionalProperty
isFuncnalExpandedPropertyJSON = SPARQLQuery.functionalPropertyQuery(selectExpandedPropertyURLList)
# isFuncnalPropertyJSON = isFuncnalPropertyJSONObj["results"]["bindings"]
FunctionalExpandedPropertySet = Set()
for jsonItem in isFuncnalExpandedPropertyJSON:
functionalPropertyURL = jsonItem["property"]["value"]
FunctionalExpandedPropertySet.add(functionalPropertyURL)
# get the value for each functionalProperty
FuncnalExpandedPropertyList = list(FunctionalExpandedPropertySet)
# add these functionalProperty value to feature class table
for functionalProperty in FuncnalExpandedPropertyList:
functionalPropertyJSON = SPARQLQuery.expandedPropertyValueQuery(inplaceIRIList, functionalProperty)
Json2Field.addFieldInTableByMapping(functionalPropertyJSON, "subDivision", "o", os.path.join(outputLocation, isPartOfTableName), "subDivisionIRI", functionalProperty, False)
# Json2Field.addFieldInMultiKeyTableByMapping(jsonBindingObject, keyPropertyNameList, valuePropertyName, inputFeatureClassName, keyPropertyFieldNameList, valuePropertyURL, isInverse)
selectExpandedPropertyURLSet = Set(selectExpandedPropertyURLList)
noFunctionalExpandedPropertySet = selectExpandedPropertyURLSet.difference(FunctionalExpandedPropertySet)
noFunctionalExpandedPropertyList = list(noFunctionalExpandedPropertySet)
# SPARQLQuery.expandedPropertyValueQuery(inplaceIRIList, propertyURL) select ?wikidataSub ?subDivision ?o
# createMappingTableFromJSON(jsonBindingObject, keyPropertyName, valuePropertyName, valuePropertyURL, inputFeatureClassName, keyPropertyFieldName)
for propertyURL in noFunctionalExpandedPropertyList:
expandedPropertyValueJSON = SPARQLQuery.expandedPropertyValueQuery(inplaceIRIList, propertyURL)
                        # create a separate table to store the values of this property for the subdivisions obtained from the "isPartOf" transitive relationship; returns the created table name
                        subDivisionValueTableName = Json2Field.createMappingTableFromJSON(expandedPropertyValueJSON, "subDivision", "o", propertyURL, os.path.join(outputLocation, isPartOfTableName), "DBpediaIRI", False, False)
                        # create a relationship class between isPartOfTableName and the created table
subDivisionValueRelationshipClassName = isPartOfTableName + "_" + subDivisionValueTableName + "_RelClass"
arcpy.CreateRelationshipClass_management(os.path.join(outputLocation, isPartOfTableName), subDivisionValueTableName, subDivisionValueRelationshipClassName, "SIMPLE",
propertyURL, "Super Division of DBpediaIRI following \"isPartOf+\"",
"FORWARD", "ONE_TO_MANY", "NONE", "subDivisionIRI", "DBpediaIRI")
                if isInverseExpanded:
arcpy.AddMessage(in_inverse_expanded_com_property.valueAsText)
inverseExpandedPropertySelect = in_inverse_expanded_com_property.valueAsText
selectInverseExpandedPropertyURLList = []
inverseExpandedPropertySplitList = re.split("[;]", inverseExpandedPropertySelect)
for propertyItem in inverseExpandedPropertySplitList:
selectInverseExpandedPropertyURLList.append(LinkedDataPropertyEnrich.inverseExpandedPropertyURLDict[propertyItem.split("\'")[1]])
# send a SPARQL query to DBpedia endpoint to test whether the properties are InverseFunctionalProperty
isFuncnalInverseExpandedPropertyJSON = SPARQLQuery.inverseFunctionalPropertyQuery(selectInverseExpandedPropertyURLList)
# isFuncnalPropertyJSON = isFuncnalPropertyJSONObj["results"]["bindings"]
FunctionalInverseExpandedPropertySet = Set()
for jsonItem in isFuncnalInverseExpandedPropertyJSON:
inverseFunctionalPropertyURL = jsonItem["property"]["value"]
FunctionalInverseExpandedPropertySet.add(inverseFunctionalPropertyURL)
# get the value for each InverseFunctionalProperty
FuncnalInverseExpandedPropertyList = list(FunctionalInverseExpandedPropertySet)
# add these functionalProperty value to feature class table
for inverseFunctionalProperty in FuncnalInverseExpandedPropertyList:
inverseFunctionalPropertyJSON = SPARQLQuery.inverseExpandedPropertyValueQuery(inplaceIRIList, inverseFunctionalProperty)
Json2Field.addFieldInTableByMapping(inverseFunctionalPropertyJSON, "subDivision", "o", os.path.join(outputLocation, isPartOfTableName), "subDivisionIRI", inverseFunctionalProperty, True)
# Json2Field.addFieldInMultiKeyTableByMapping(jsonBindingObject, keyPropertyNameList, valuePropertyName, inputFeatureClassName, keyPropertyFieldNameList, valuePropertyURL, isInverse)
selectInverseExpandedPropertyURLSet = Set(selectInverseExpandedPropertyURLList)
noFunctionalInverseExpandedPropertySet = selectInverseExpandedPropertyURLSet.difference(FunctionalInverseExpandedPropertySet)
noFunctionalInverseExpandedPropertyList = list(noFunctionalInverseExpandedPropertySet)
for propertyURL in noFunctionalInverseExpandedPropertyList:
inverseExpandedPropertyValueJSON = SPARQLQuery.inverseExpandedPropertyValueQuery(inplaceIRIList, propertyURL)
                            # create a separate table to store the values of this property for the subdivisions obtained from the "isPartOf" transitive relationship; returns the created table name
                            subDivisionValueTableName = Json2Field.createMappingTableFromJSON(inverseExpandedPropertyValueJSON, "subDivision", "o", propertyURL, os.path.join(outputLocation, isPartOfTableName), "DBpediaIRI", True, False)
                            # create a relationship class between isPartOfTableName and the created table
subDivisionValueRelationshipClassName = isPartOfTableName + "_" + subDivisionValueTableName + "_RelClass"
arcpy.CreateRelationshipClass_management(os.path.join(outputLocation, isPartOfTableName), subDivisionValueTableName, subDivisionValueRelationshipClassName, "SIMPLE",
propertyURL, "Super Division of DBpediaIRI following \"isPartOf+\"",
"FORWARD", "ONE_TO_MANY", "NONE", "subDivisionIRI", "DBpediaIRI")
return
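# ---------------------------------------------------------------------------
# A small sketch (assumption-level, not called by the tools) of how execute()
# above recovers selections from a multivalue GPString parameter: valueAsText
# is a semicolon-separated string, and items containing spaces (such as the
# inverse labels "is dbo:... Of (N)") come back wrapped in single quotes,
# which is why the code uses propertyItem.split("'")[1]. The helper name and
# the sample value are hypothetical.
def _splitMultiValueText(valueAsText):
    import re
    items = []
    for item in re.split("[;]", valueAsText):
        if item.startswith("'") and item.endswith("'"):
            item = item.split("'")[1]  # strip the quoting ArcGIS adds
        items.append(item)
    return items

# e.g. _splitMultiValueText("dbo:elevation(3);'is dbo:city Of (5)'")
# returns ['dbo:elevation(3)', 'is dbo:city Of (5)']
# ---------------------------------------------------------------------------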
class MergeBatchNoFunctionalProperty(object):
relatedTableFieldList = []
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Batch No Functional Property Merge"
self.description = """The related seperated tables from Linked Data Location Entities Property Enrichment Tool have multivalue for each wikidata location because the coresponding property is not functional property.
This Tool helps user to merge these multivalue to a single record and add it to original feature class sttribute table by using merge rules which are specified by users."""
self.canRunInBackground = False
def getParameterInfo(self):
"""Define parameter definitions"""
        # The input feature class, which is the output of the Linked Data Analysis tool; a "URL" column must be included in the attribute table
in_wikiplace_IRI = arcpy.Parameter(
displayName="Input wikidata location entities Feature Class",
name="in_wikiplace_IRI",
datatype="DEFeatureClass",
parameterType="Required",
direction="Input")
in_wikiplace_IRI.filter.list = ["Point"]
# in_related_table = arcpy.Parameter(
# displayName="Input no-functional property table which should be related to the input Feature Class",
# name="in_related_table",
# datatype="DETable",
# parameterType="Required",
# direction="Input")
in_stat_fields = arcpy.Parameter(
            displayName='Non-functional property field(s) of the related tables of the input feature class to be merged',
name='in_stat_fields',
datatype='GPValueTable',
parameterType='Required',
direction='Input')
in_stat_fields.parameterDependencies = [in_wikiplace_IRI.name]
in_stat_fields.columns = [['Field', 'Field'], ['GPString', 'Statistic Type'], ['DETable', 'Table Name']]
in_stat_fields.filters[1].type = 'ValueList'
# in_stat_fields.values = [['NAME', 'SUM']]
in_stat_fields.filters[1].list = ['SUM', 'MIN', 'MAX', 'STDEV', 'MEAN', 'COUNT', 'FIRST', 'LAST']
# out_location = arcpy.Parameter(
# displayName="Output Location",
# name="out_location",
# datatype="DEWorkspace",
# parameterType="Required",
# direction="Input")
# out_location.value = os.path.dirname(__file__)
# # Derived Output Point Feature Class Name
# out_points_name = arcpy.Parameter(
# displayName="Output Point Feature Class Name with No-Functional Property Merged Values",
# name="out_points_name",
# datatype="GPString",
# parameterType="Required",
# direction="Input")
# params = [in_wikiplace_IRI, in_stat_fields,out_location, out_points_name]
params = [in_wikiplace_IRI, in_stat_fields]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_wikiplace_IRI = parameters[0]
in_stat_fields = parameters[1]
# out_location = parameters[2]
# out_points_name = parameters[3]
if in_wikiplace_IRI.altered and not in_stat_fields.altered:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
if currentWorkspace.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a feature class in file geodatabase for the input feature class.")
raise arcpy.ExecuteError
else:
# if in_related_table.value:
arcpy.env.workspace = currentWorkspace
# out_location.value = currentWorkspace
# out_points_name.value = featureClassName + "_noFunc_merge"
# # check whether the input table are in the same file geodatabase as the input feature class
# inputTableName = in_related_table.valueAsText
# lastIndexOFTable = inputTableName.rfind("\\")
# currentWorkspaceTable = inputTableName[:lastIndexOFTable]
# if currentWorkspaceTable != currentWorkspace:
# messages.addErrorMessage("Please enter a table in the same file geodatabase as the input feature class.")
# raise arcpy.ExecuteError
# else:
# if UTIL.detectRelationship(inputFeatureClassName, inputTableName):
# arcpy.AddMessage("The feature class and table are related!")
relatedTableList = UTIL.getRelatedTableFromFeatureClass(inputFeatureClassName)
# fieldmappings = arcpy.FieldMappings()
# fieldmappings.addTable(inputFeatureClassName)
noFunctionalPropertyTable = []
for relatedTable in relatedTableList:
                    fieldList = arcpy.ListFields(relatedTable)
                    fieldNameList = [field.name for field in fieldList]
                    # skip linkage tables, which carry "origin" and "end" fields
                    if "origin" not in fieldNameList and "end" not in fieldNameList:
                        noFunctionalFieldName = fieldList[2].name
                        arcpy.AddMessage("noFunctionalFieldName: {0}".format(noFunctionalFieldName))
                        noFunctionalPropertyTable.append([noFunctionalFieldName, 'COUNT', relatedTable])
# MergeNoFunctionalProperty.relatedTableFieldList.append([noFunctionalFieldName, relatedTable, 'COUNT'])
# fieldmappings.addTable(relatedTable)
# fieldList = arcpy.ListFields(relatedTable)
# noFunctionalFieldName = fieldList[len(fieldList)-1].name
# arcpy.AddMessage("noFunctionalFieldName: {0}".format(noFunctionalFieldName))
# fieldmap = fieldmappings.getFieldMap(fieldmappings.findFieldMapIndex(noFunctionalFieldName))
# fieldmap.addInputField(relatedTable, "wikiURL")
# fieldmap.addInputField(inputFeatureClassName, "URL")
# fieldmappings.replaceFieldMap(fieldmappings.findFieldMapIndex(noFunctionalFieldName), fieldmap)
in_stat_fields.values = noFunctionalPropertyTable
# fieldmappings.removeFieldMap(fieldmappings.findFieldMapIndex("wikiURL"))
# in_field_mapping.value = fieldmappings.exportToString()
# if in_stat_fields.altered:
# fieldMergeRuleTest = in_stat_fields.valueAsText
# if fieldMergeRuleTest:
# fieldSplitList = fieldMergeRuleTest.split(";")
# for fieldSplitItem in fieldSplitList:
# fieldMergeList = fieldSplitList.split("\t")
# for item in MergeNoFunctionalProperty.relatedTableFieldList:
# if item[]
return
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_wikiplace_IRI = parameters[0]
in_stat_fields = parameters[1]
# out_location = parameters[2]
# out_points_name = parameters[3]
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
# outLocation = out_location.valueAsText
# outFeatureClassName = out_points_name.valueAsText
fieldMergeRuleTest = in_stat_fields.valueAsText
# messages.addErrorMessage("in_stat_fields.values: {0}".format(in_stat_fields.values))
# messages.addErrorMessage("MergeNoFunctionalProperty.relatedTableFieldList: {0}".format(MergeNoFunctionalProperty.relatedTableFieldList))
# fieldmappings = in_field_mapping.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
if currentWorkspace.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a feature class in file geodatabase for the input feature class.")
raise arcpy.ExecuteError
else:
# if in_related_table.value:
arcpy.env.workspace = currentWorkspace
# relatedTableList = UTIL.getRelatedTableFromFeatureClass(inputFeatureClassName)
# fieldmappings = arcpy.FieldMappings()
# fieldmappings.addTable(inputFeatureClassName)
# for relatedTable in relatedTableList:
# fieldmappings.addTable(relatedTable)
# fieldList = arcpy.ListFields(relatedTable)
# fieldName = fieldList[len(fieldList)-1].name
# arcpy.AddMessage("fieldName: {0}".format(fieldName))
# fieldmappings.removeFieldMap(fieldmappings.findFieldMapIndex("wikiURL"))
# arcpy.AddMessage("fieldmappings: {0}".format(fieldmappings))
# if out_location.value and out_points_name.value:
# arcpy.FeatureClassToFeatureClass_conversion(inputFeatureClassName, outLocation, outFeatureClassName, "", fieldmappings)
# get the ValueTable(fieldName, merge rule, related table full path)
fieldMergeRuleFileNameList = []
if fieldMergeRuleTest:
fieldSplitList = fieldMergeRuleTest.split(";")
for fieldSplitItem in fieldSplitList:
fieldMergeList = fieldSplitItem.split(" ", 2)
fieldMergeRuleFileNameList.append(fieldMergeList)
arcpy.AddMessage("fieldMergeRuleFileNameList: {0}".format(fieldMergeRuleFileNameList))
for fieldMergeRuleFileNameItem in fieldMergeRuleFileNameList:
appendFieldName = fieldMergeRuleFileNameItem[0]
mergeRule = fieldMergeRuleFileNameItem[1]
relatedTableName = fieldMergeRuleFileNameItem[2].replace("'", "")
noFunctionalPropertyDict = UTIL.buildMultiValueDictFromNoFunctionalProperty(appendFieldName, relatedTableName)
if noFunctionalPropertyDict != -1:
UTIL.appendFieldInFeatureClassByMergeRule(inputFeatureClassName, noFunctionalPropertyDict, appendFieldName, relatedTableName, mergeRule)
# UTIL.buildMultiValueDictFromNoFunctionalProperty(fieldName, tableName)
# UTIL.appendFieldInFeatureClassByMergeRule(inputFeatureClassName, noFunctionalPropertyDict, appendFieldName, relatedTableName, mergeRule)
return
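# ---------------------------------------------------------------------------
# A sketch (not part of the toolbox) of the ValueTable parsing performed in
# execute() above: valueAsText flattens each row to
# "fieldName MERGE_RULE 'path to related table'", rows are separated by
# semicolons, and split(" ", 2) keeps a table path containing spaces in one
# piece. The helper name and the sample value are hypothetical.
def _parseStatFieldsText(valueAsText):
    rows = []
    for rowText in valueAsText.split(";"):
        fieldName, mergeRule, tablePath = rowText.split(" ", 2)
        rows.append([fieldName, mergeRule, tablePath.replace("'", "")])
    return rows

# e.g. _parseStatFieldsText("elevation COUNT 'C:\\data\\demo.gdb\\elevTable'")
# returns [['elevation', 'COUNT', 'C:\\data\\demo.gdb\\elevTable']]
# ---------------------------------------------------------------------------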
class MergeSingleNoFunctionalProperty(object):
relatedTableFieldList = []
relatedTableList = []
relatedNoFunctionalPropertyURLList = []
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Single No Functional Property Merge"
self.description = """The related seperated tables from Linked Data Location Entities Property Enrichment Tool have multivalue for each wikidata location because the coresponding property is not functional property.
This Tool helps user to merge these multivalue to a single record and add it to original feature class sttribute table by using merge rules which are specified by users."""
self.canRunInBackground = False
def getParameterInfo(self):
"""Define parameter definitions"""
        # The input feature class, which is the output of the Linked Data Analysis tool; a "URL" column must be included in the attribute table
in_wikiplace_IRI = arcpy.Parameter(
displayName="Input wikidata location entities Feature Class",
name="in_wikiplace_IRI",
datatype="DEFeatureClass",
parameterType="Required",
direction="Input")
in_wikiplace_IRI.filter.list = ["Point"]
in_no_functional_property_list = arcpy.Parameter(
displayName="List of No-Functional Properties of Current Feature Class",
name="in_no_functional_property_list",
datatype="GPString",
parameterType="Required",
direction="Input")
in_no_functional_property_list.filter.type = "ValueList"
in_no_functional_property_list.filter.list = []
in_related_table_list = arcpy.Parameter(
displayName="List of Related Tables",
name="in_related_table_list",
datatype="GPString",
parameterType="Required",
direction="Input")
in_related_table_list.filter.type = "ValueList"
in_related_table_list.filter.list = []
in_merge_rule = arcpy.Parameter(
displayName='List of Merge Rules',
name='in_merge_rule',
datatype='GPString',
parameterType='Required',
direction='Input')
in_merge_rule.filter.type = "ValueList"
in_merge_rule.filter.list = ['SUM', 'MIN', 'MAX', 'STDEV', 'MEAN', 'COUNT', 'FIRST', 'LAST', 'CONCATENATE']
in_cancatenate_delimiter = arcpy.Parameter(
            displayName='The delimiter used to concatenate field values',
name='in_cancatenate_delimiter',
datatype='GPString',
parameterType='Optional',
direction='Input')
in_cancatenate_delimiter.filter.type = "ValueList"
in_cancatenate_delimiter.filter.list = ['DASH', 'COMMA', 'VERTICAL BAR', 'TAB', 'SPACE']
in_cancatenate_delimiter.enabled = False
params = [in_wikiplace_IRI, in_no_functional_property_list, in_related_table_list, in_merge_rule, in_cancatenate_delimiter]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_wikiplace_IRI = parameters[0]
in_no_functional_property_list = parameters[1]
in_related_table_list = parameters[2]
in_merge_rule = parameters[3]
in_cancatenate_delimiter = parameters[4]
if in_wikiplace_IRI.altered:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
if currentWorkspace.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a feature class in file geodatabase for the input feature class.")
raise arcpy.ExecuteError
else:
# if in_related_table.value:
arcpy.env.workspace = currentWorkspace
# out_location.value = currentWorkspace
# out_points_name.value = featureClassName + "_noFunc_merge"
# # check whether the input table are in the same file geodatabase as the input feature class
# inputTableName = in_related_table.valueAsText
# lastIndexOFTable = inputTableName.rfind("\\")
# currentWorkspaceTable = inputTableName[:lastIndexOFTable]
# if currentWorkspaceTable != currentWorkspace:
# messages.addErrorMessage("Please enter a table in the same file geodatabase as the input feature class.")
# raise arcpy.ExecuteError
# else:
# if UTIL.detectRelationship(inputFeatureClassName, inputTableName):
# arcpy.AddMessage("The feature class and table are related!")
MergeSingleNoFunctionalProperty.relatedTableFieldList = []
MergeSingleNoFunctionalProperty.relatedTableList = []
MergeSingleNoFunctionalProperty.relatedNoFunctionalPropertyURLList = []
MergeSingleNoFunctionalProperty.relatedTableList = UTIL.getRelatedTableFromFeatureClass(inputFeatureClassName)
in_related_table_list.filter.list = MergeSingleNoFunctionalProperty.relatedTableList
# noFunctionalPropertyTable = []
for relatedTable in MergeSingleNoFunctionalProperty.relatedTableList:
                    fieldList = arcpy.ListFields(relatedTable)
                    fieldNameList = [field.name for field in fieldList]
                    # skip linkage tables, which carry "origin" and "end" fields
                    if "origin" not in fieldNameList and "end" not in fieldNameList:
                        noFunctionalFieldName = fieldList[2].name
                        arcpy.AddMessage("noFunctionalFieldName: {0}".format(noFunctionalFieldName))
                        MergeSingleNoFunctionalProperty.relatedTableFieldList.append(noFunctionalFieldName)
                        # get the non-functional property URL from the first row of this table's "propURL" field
                        # propURL = arcpy.da.SearchCursor(relatedTable, ("propURL")).next()[0]
TableRelationshipClassList = UTIL.getRelationshipClassFromTable(relatedTable)
propURL = arcpy.Describe(TableRelationshipClassList[0]).forwardPathLabel
MergeSingleNoFunctionalProperty.relatedNoFunctionalPropertyURLList.append(propURL)
in_no_functional_property_list.filter.list = MergeSingleNoFunctionalProperty.relatedNoFunctionalPropertyURLList
# noFunctionalPropertyTable.append([noFunctionalFieldName, 'COUNT', relatedTable])
# MergeNoFunctionalProperty.relatedTableFieldList.append([noFunctionalFieldName, relatedTable, 'COUNT'])
# fieldmappings.addTable(relatedTable)
# fieldList = arcpy.ListFields(relatedTable)
# noFunctionalFieldName = fieldList[len(fieldList)-1].name
# arcpy.AddMessage("noFunctionalFieldName: {0}".format(noFunctionalFieldName))
# in_stat_fields.values = noFunctionalPropertyTable
if in_no_functional_property_list.altered:
selectPropURL = in_no_functional_property_list.valueAsText
selectIndex = MergeSingleNoFunctionalProperty.relatedNoFunctionalPropertyURLList.index(selectPropURL)
selectFieldName = MergeSingleNoFunctionalProperty.relatedTableFieldList[selectIndex]
selectTableName = MergeSingleNoFunctionalProperty.relatedTableList[selectIndex]
in_related_table_list.value = selectTableName
currentDataType = UTIL.getFieldDataTypeInTable(selectFieldName, selectTableName)
if currentDataType in ['Single', 'Double', 'SmallInteger', 'Integer']:
in_merge_rule.filter.list = ['SUM', 'MIN', 'MAX', 'STDEV', 'MEAN', 'COUNT', 'FIRST', 'LAST', 'CONCATENATE']
# elif currentDataType in ['SmallInteger', 'Integer']:
# in_merge_rule.filter.list = ['SUM', 'MIN', 'MAX', 'COUNT', 'FIRST', 'LAST']
else:
in_merge_rule.filter.list = ['COUNT', 'FIRST', 'LAST', 'CONCATENATE']
if in_related_table_list.altered:
selectTableName = in_related_table_list.valueAsText
selectIndex = MergeSingleNoFunctionalProperty.relatedTableList.index(selectTableName)
selectFieldName = MergeSingleNoFunctionalProperty.relatedTableFieldList[selectIndex]
selectPropURL = MergeSingleNoFunctionalProperty.relatedNoFunctionalPropertyURLList[selectIndex]
in_no_functional_property_list.value = selectPropURL
currentDataType = UTIL.getFieldDataTypeInTable(selectFieldName, selectTableName)
if currentDataType in ['Single', 'Double', 'SmallInteger', 'Integer']:
in_merge_rule.filter.list = ['SUM', 'MIN', 'MAX', 'STDEV', 'MEAN', 'COUNT', 'FIRST', 'LAST', 'CONCATENATE']
# elif currentDataType in ['SmallInteger', 'Integer']:
# in_merge_rule.filter.list = ['SUM', 'MIN', 'MAX', 'COUNT', 'FIRST', 'LAST']
else:
in_merge_rule.filter.list = ['COUNT', 'FIRST', 'LAST', 'CONCATENATE']
if in_merge_rule.valueAsText == "CONCATENATE":
in_cancatenate_delimiter.enabled = True
return
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_wikiplace_IRI = parameters[0]
in_no_functional_property_list = parameters[1]
in_related_table_list = parameters[2]
in_merge_rule = parameters[3]
in_cancatenate_delimiter = parameters[4]
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
selectPropURL = in_no_functional_property_list.valueAsText
selectTableName = in_related_table_list.valueAsText
selectMergeRule = in_merge_rule.valueAsText
selectIndex = MergeSingleNoFunctionalProperty.relatedTableList.index(selectTableName)
selectFieldName = MergeSingleNoFunctionalProperty.relatedTableFieldList[selectIndex]
arcpy.AddMessage("CurrentDataType: {0}".format(UTIL.getFieldDataTypeInTable(selectFieldName, selectTableName)))
arcpy.AddMessage("selectTableName: {0}".format(selectTableName))
arcpy.AddMessage("MergeSingleNoFunctionalProperty.relatedTableList: {0}".format(MergeSingleNoFunctionalProperty.relatedTableList))
arcpy.AddMessage("MergeSingleNoFunctionalProperty.relatedTableList.index(selectTableName): {0}".format(MergeSingleNoFunctionalProperty.relatedTableList.index(selectTableName)))
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
if currentWorkspace.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a feature class in file geodatabase for the input feature class.")
raise arcpy.ExecuteError
else:
# if in_related_table.value:
arcpy.env.workspace = currentWorkspace
noFunctionalPropertyDict = UTIL.buildMultiValueDictFromNoFunctionalProperty(selectFieldName, selectTableName)
if noFunctionalPropertyDict != -1:
if selectMergeRule == 'CONCATENATE':
selectDelimiter = in_cancatenate_delimiter.valueAsText
delimiter = ','
# ['DASH', 'COMMA', 'VERTICAL BAR', 'TAB', 'SPACE']
if selectDelimiter == 'DASH':
delimiter = '-'
elif selectDelimiter == 'COMMA':
delimiter = ','
elif selectDelimiter == 'VERTICAL BAR':
delimiter = '|'
                        elif selectDelimiter == 'TAB':
                            delimiter = '\t'
elif selectDelimiter == 'SPACE':
delimiter = ' '
UTIL.appendFieldInFeatureClassByMergeRule(inputFeatureClassName, noFunctionalPropertyDict, selectFieldName, selectTableName, selectMergeRule, delimiter)
else:
UTIL.appendFieldInFeatureClassByMergeRule(inputFeatureClassName, noFunctionalPropertyDict, selectFieldName, selectTableName, selectMergeRule, '')
return
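# ---------------------------------------------------------------------------
# A sketch of the CONCATENATE delimiter mapping used in execute() above,
# rewritten as a lookup table; '\t' is the character the TAB option is meant
# to produce, and the comma is the fallback when no delimiter is selected.
# The names below are hypothetical and not used by the toolbox.
_DELIMITER_DICT = {'DASH': '-', 'COMMA': ',', 'VERTICAL BAR': '|',
                   'TAB': '\t', 'SPACE': ' '}

def _delimiterFromOption(optionText):
    return _DELIMITER_DICT.get(optionText, ',')

# e.g. _delimiterFromOption('VERTICAL BAR') returns '|'
# ---------------------------------------------------------------------------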
class LocationPropertyPath(object):
locationCommonPropertyDict = dict()
locationCommonPropertyNameCountList = []
locationCommonPropertyURLList = []
locationCommonPropertyCountList = []
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Location Linkage Exploration"
self.description = """This Tool enables the users to explore the linkages between locations in wikidata.
Given an input feature class, this tool gets all properties whose objects are also locations.
The output is another feature class which contains the locations which are linked to the locations of input feature class."""
self.canRunInBackground = False
def getParameterInfo(self):
"""Define parameter definitions"""
# The input feature class, which is the output of the LinkedDataAnalysis tool; a "URL" column must be included in the attribute table
in_wikiplace_IRI = arcpy.Parameter(
displayName="Input wikidata location entities Feature Class",
name="in_wikiplace_IRI",
datatype="DEFeatureClass",
parameterType="Required",
direction="Input")
in_wikiplace_IRI.filter.list = ["Point"]
# Choose a property whose objects are also locations
in_location_property = arcpy.Parameter(
displayName="Input Property which represents location relationships",
name="in_location_property",
datatype="GPString",
parameterType="Required",
direction="Input")
in_location_property.filter.type = "ValueList"
in_location_property.filter.list = []
# Enter the degree of relationship between these location features, e.g. a 2-degree sister-city relationship
in_relation_degree = arcpy.Parameter(
displayName="Input Relationship Degree",
name="in_relation_degree",
datatype="GPLong",
parameterType="Required",
direction="Input")
in_relation_degree.value = 1
out_location = arcpy.Parameter(
displayName="Output Location",
name="out_location",
datatype="DEWorkspace",
parameterType="Required",
direction="Input")
out_location.value = os.path.dirname(__file__)
# Derived Output Point Feature Class Name
out_points_name = arcpy.Parameter(
displayName="Output Point Feature Class Name",
name="out_points_name",
datatype="GPString",
parameterType="Required",
direction="Input")
params = [in_wikiplace_IRI, in_location_property,in_relation_degree, out_location, out_points_name]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_wikiplace_IRI = parameters[0]
in_location_property = parameters[1]
in_relation_degree = parameters[2]
out_location = parameters[3]
out_points_name = parameters[4]
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
arcpy.env.workspace = currentWorkspace
out_location.value = currentWorkspace
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
# get all property URLs used in the input feature class whose objects are geographic locations with coordinates; these are called location common properties here
locationCommonPropertyJSONObj = SPARQLQuery.locationCommonPropertyQuery(inplaceIRIList)
locationCommonPropertyJSON = locationCommonPropertyJSONObj["results"]["bindings"]
LocationPropertyPath.locationCommonPropertyURLList = []
LocationPropertyPath.locationCommonPropertyCountList = []
for jsonItem in locationCommonPropertyJSON:
LocationPropertyPath.locationCommonPropertyURLList.append(jsonItem["p"]["value"])
LocationPropertyPath.locationCommonPropertyCountList.append(jsonItem["NumofSub"]["value"])
locationCommonPropertyCountDict = dict(zip(LocationPropertyPath.locationCommonPropertyURLList, LocationPropertyPath.locationCommonPropertyCountList))
# get the English label for each location common property
locationCommonPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(LocationPropertyPath.locationCommonPropertyURLList)
# a dictionary object: key: propertyNameCount, value: propertyURL
LocationPropertyPath.locationCommonPropertyDict = dict()
LocationPropertyPath.locationCommonPropertyNameCountList = []
LocationPropertyPath.locationCommonPropertyURLList = []
LocationPropertyPath.locationCommonPropertyCountList = []
for jsonItem in locationCommonPropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
LocationPropertyPath.locationCommonPropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
propertyCount = locationCommonPropertyCountDict[propertyURL]
LocationPropertyPath.locationCommonPropertyCountList.append(propertyCount)
propertyNameCount = propertyName + "(" + propertyCount + ")"
LocationPropertyPath.locationCommonPropertyNameCountList.append(propertyNameCount)
LocationPropertyPath.locationCommonPropertyDict[propertyNameCount] = propertyURL
in_location_property.filter.list = LocationPropertyPath.locationCommonPropertyNameCountList
if in_location_property.value and in_relation_degree.value and out_points_name.valueAsText is None:
propertyName = in_location_property.valueAsText
relationdegree = in_relation_degree.valueAsText
lastIndex = propertyName.rfind("(")
propertyName = propertyName[:lastIndex]
propertyName = propertyName.replace(" ", "_")
if featureClassName.endswith(".shp"):
lastIndex = featureClassName.rfind(".")
featureClassNameNoShp = featureClassName[:lastIndex]
out_points_name.value = featureClassNameNoShp + "_D" + relationdegree + "_" + propertyName + ".shp"
else:
out_points_name.value = featureClassName + "_D" + relationdegree + "_" + propertyName
if arcpy.Exists(out_points_name.valueAsText):
arcpy.AddError("The output feature class name already exists in current workspace!")
raise arcpy.ExecuteError
if in_relation_degree.value:
relationDegree = int(in_relation_degree.valueAsText)
if relationDegree > 4:
in_relation_degree.value = 4
return
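# A rough sketch (not part of the tool) of the display-name convention built above: each
# dropdown entry is "<label>(<count>)", and both updateParameters and execute recover the
# label by trimming at the last "(". Values below are hypothetical:
#
#     nameCount = "shares border with(42)"
#     label = nameCount[:nameCount.rfind("(")]      # -> "shares border with"
#     propertyURL = LocationPropertyPath.locationCommonPropertyDict[nameCount]
#
# Keeping the dict at class level is what lets execute() map the picked label back to a
# property URL without re-querying the endpoint.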
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_wikiplace_IRI = parameters[0]
in_location_property = parameters[1]
in_relation_degree = parameters[2]
out_location = parameters[3]
out_points_name = parameters[4]
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
locationCommonPropertyNameCount = in_location_property.valueAsText
relationDegree = int(in_relation_degree.valueAsText)
outLocation = out_location.valueAsText
outFeatureClassName = out_points_name.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
originFeatureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
if outLocation.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a file geodatabase as the file location for output feature class.")
raise arcpy.ExecuteError
else:
arcpy.env.workspace = outLocation
endFeatureClassName = outLocation + "\\" + outFeatureClassName
if arcpy.Exists(endFeatureClassName):
messages.addErrorMessage("The output feature class name already exists in current workspace!")
raise arcpy.ExecuteError
else:
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
if relationDegree > 4:
relationDegree = 4
in_relation_degree.value = 4
locationCommonPropertyURL = LocationPropertyPath.locationCommonPropertyDict[locationCommonPropertyNameCount]
locationLinkageRelationJSONObj = SPARQLQuery.locationLinkageRelationQuery(inplaceIRIList, locationCommonPropertyURL, relationDegree)
locationLinkageRelationJSON = locationLinkageRelationJSONObj["results"]["bindings"]
endPlaceIRISet = Set()
for jsonItem in locationLinkageRelationJSON:
endPlaceIRISet.add(jsonItem["end"]["value"])
endPlaceIRIList = list(endPlaceIRISet)
endPlaceJSON = SPARQLQuery.endPlaceInformationQuery(endPlaceIRIList)
Json2Field.creatPlaceFeatureClassFromJSON(endPlaceJSON, endFeatureClassName, None, "")
lastIndex = locationCommonPropertyNameCount.rfind("(")
locationCommonPropertyName = locationCommonPropertyNameCount[:lastIndex]
locationLinkageTableName = Json2Field.createLocationLinkageMappingTableFromJSON(locationLinkageRelationJSON, "origin", "end", inputFeatureClassName, endFeatureClassName, locationCommonPropertyURL, locationCommonPropertyName, relationDegree)
endFeatureRelationshipClassName = outFeatureClassName + "_" + locationLinkageTableName + "_RelClass"
arcpy.CreateRelationshipClass_management(outFeatureClassName, locationLinkageTableName, endFeatureRelationshipClassName, "SIMPLE",
"is "+ locationCommonPropertyName + "of", locationCommonPropertyName,
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "end")
originFeatureRelationshipClassName = originFeatureClassName + "_" + locationLinkageTableName + "_RelClass"
arcpy.CreateRelationshipClass_management(originFeatureClassName, locationLinkageTableName, originFeatureRelationshipClassName, "SIMPLE",
locationCommonPropertyName, "is "+ locationCommonPropertyName + "of",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "origin")
return
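# A minimal sketch of the LocationPropertyPath pipeline, assuming a feature class whose
# "URL" field holds Wikidata IRIs (the names mirror the calls in execute() above):
#
#     iriList = [row.getValue("URL") for row in arcpy.SearchCursor(inputFeatureClassName)]
#     linkJSON = SPARQLQuery.locationLinkageRelationQuery(iriList, propertyURL, degree)
#     ends = list(Set(b["end"]["value"] for b in linkJSON["results"]["bindings"]))
#     placeJSON = SPARQLQuery.endPlaceInformationQuery(ends)
#     Json2Field.creatPlaceFeatureClassFromJSON(placeJSON, endFeatureClassName, None, "")
#
# followed by the two CreateRelationshipClass_management calls that tie the origin and end
# feature classes to the linkage table.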
class RelFinder(object):
firstPropertyLabelURLDict = dict()
secondPropertyLabelURLDict = dict()
thirdPropertyLabelURLDict = dict()
fourthPropertyLabelURLDict = dict()
def __init__(self):
"""Define the tool (tool name is the name of the class)."""
self.label = "Linked Data Relationship Finder from Location Features"
self.description = """Getting a table of S-P-O triples for the relationships from locations features."""
self.canRunInBackground = False
def getParameterInfo(self):
"""Define parameter definitions"""
# The input feature class, which is the output of the LinkedDataAnalysis tool; a "URL" column must be included in the attribute table
in_wikiplace_IRI = arcpy.Parameter(
displayName="Input wikidata location entities Feature Class",
name="in_wikiplace_IRI",
datatype="DEFeatureClass",
parameterType="Required",
direction="Input")
in_wikiplace_IRI.filter.list = ["Point"]
in_relation_degree = arcpy.Parameter(
displayName="Relationship Degree",
name="in_relation_degree",
datatype="GPLong",
parameterType="Required",
direction="Input")
in_relation_degree.filter.type = "ValueList"
in_relation_degree.filter.list = [1, 2, 3, 4]
# Choose the first-degree property direction: "ORIGIN" means the in_wikiplace_IRI features are origins, "DESTINATION" means they are destinations
in_first_property_dir = arcpy.Parameter(
displayName="The first degree property direction",
name="in_first_property_dir",
datatype="GPString",
parameterType="Required",
direction="Input")
in_first_property_dir.filter.type = "ValueList"
in_first_property_dir.filter.list = ["BOTH", "ORIGIN", "DESTINATION"]
# Choose the first-degree property linking to locations
in_first_property = arcpy.Parameter(
displayName="The first degree property",
name="in_first_property",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_first_property.filter.type = "ValueList"
in_first_property.filter.list = []
# Choose the second-degree property direction: "ORIGIN" means the in_wikiplace_IRI features are origins, "DESTINATION" means they are destinations
in_second_property_dir = arcpy.Parameter(
displayName="The second degree property direction",
name="in_second_property_dir",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_second_property_dir.filter.type = "ValueList"
in_second_property_dir.filter.list = ["BOTH", "ORIGIN", "DESTINATION"]
in_second_property_dir.enabled = False
# Choose the second-degree property linking to locations
in_second_property = arcpy.Parameter(
displayName="The second degree property",
name="in_second_property",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_second_property.filter.type = "ValueList"
in_second_property.filter.list = []
in_second_property.enabled = False
# Choose the third-degree property direction: "ORIGIN" means the in_wikiplace_IRI features are origins, "DESTINATION" means they are destinations
in_third_property_dir = arcpy.Parameter(
displayName="The third degree property direction",
name="in_third_property_dir",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_third_property_dir.filter.type = "ValueList"
in_third_property_dir.filter.list = ["BOTH", "ORIGIN", "DESTINATION"]
in_third_property_dir.enabled = False
# Choose the third-degree property linking to locations
in_third_property = arcpy.Parameter(
displayName="The third degree property",
name="in_third_property",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_third_property.filter.type = "ValueList"
in_third_property.filter.list = []
in_third_property.enabled = False
# Choose the fourth-degree property direction: "ORIGIN" means the in_wikiplace_IRI features are origins, "DESTINATION" means they are destinations
in_fourth_property_dir = arcpy.Parameter(
displayName="The fourth degree property direction",
name="in_fourth_property_dir",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_fourth_property_dir.filter.type = "ValueList"
in_fourth_property_dir.filter.list = ["BOTH", "ORIGIN", "DESTINATION"]
in_fourth_property_dir.enabled = False
# Choose the fourth-degree property linking to locations
in_fourth_property = arcpy.Parameter(
displayName="The fourth degree property",
name="in_fourth_property",
datatype="GPString",
parameterType="Optional",
direction="Input")
in_fourth_property.filter.type = "ValueList"
in_fourth_property.filter.list = []
in_fourth_property.enabled = False
out_location = arcpy.Parameter(
displayName="Output Location",
name="out_location",
datatype="DEWorkspace",
parameterType="Required",
direction="Input")
out_location.value = os.path.dirname(__file__)
# Derived Output Triple Store Table Name
out_table_name = arcpy.Parameter(
displayName="Output Triple Store Table Name",
name="out_table_name",
datatype="GPString",
parameterType="Required",
direction="Input")
# Derived Output Feature Class Name
out_points_name = arcpy.Parameter(
displayName="Output Feature Class Name",
name="out_points_name",
datatype="GPString",
parameterType="Required",
direction="Input")
params = [in_wikiplace_IRI, in_relation_degree, in_first_property_dir, in_first_property, in_second_property_dir, in_second_property, in_third_property_dir, in_third_property, in_fourth_property_dir, in_fourth_property, out_location, out_table_name, out_points_name]
return params
def isLicensed(self):
"""Set whether tool is licensed to execute."""
return True
def updateParameters(self, parameters):
"""Modify the values and properties of parameters before internal
validation is performed. This method is called whenever a parameter
has been changed."""
in_wikiplace_IRI = parameters[0]
in_relation_degree = parameters[1]
in_first_property_dir = parameters[2]
in_first_property = parameters[3]
in_second_property_dir = parameters[4]
in_second_property = parameters[5]
in_third_property_dir = parameters[6]
in_third_property = parameters[7]
in_fourth_property_dir = parameters[8]
in_fourth_property = parameters[9]
out_location = parameters[10]
out_table_name = parameters[11]
out_points_name = parameters[12]
if in_relation_degree.altered:
relationDegree = int(in_relation_degree.valueAsText)
if relationDegree == 1:
in_first_property.enabled = True
in_first_property_dir.enabled = True
in_second_property.enabled = False
in_second_property_dir.enabled = False
in_third_property.enabled = False
in_third_property_dir.enabled = False
in_fourth_property.enabled = False
in_fourth_property_dir.enabled = False
elif relationDegree == 2:
in_first_property.enabled = True
in_first_property_dir.enabled = True
in_second_property.enabled = True
in_second_property_dir.enabled = True
in_third_property.enabled = False
in_third_property_dir.enabled = False
in_fourth_property.enabled = False
in_fourth_property_dir.enabled = False
elif relationDegree == 3:
in_first_property.enabled = True
in_first_property_dir.enabled = True
in_second_property.enabled = True
in_second_property_dir.enabled = True
in_third_property.enabled = True
in_third_property_dir.enabled = True
in_fourth_property.enabled = False
in_fourth_property_dir.enabled = False
elif relationDegree == 4:
in_first_property.enabled = True
in_first_property_dir.enabled = True
in_second_property.enabled = True
in_second_property_dir.enabled = True
in_third_property.enabled = True
in_third_property_dir.enabled = True
in_fourth_property.enabled = True
in_fourth_property_dir.enabled = True
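# The elif chain above could also be driven by a table; a behavior-equivalent sketch
# (hypothetical, not used by the tool):
#
#     pairs = [(in_first_property, in_first_property_dir),
#              (in_second_property, in_second_property_dir),
#              (in_third_property, in_third_property_dir),
#              (in_fourth_property, in_fourth_property_dir)]
#     for index, (prop, direction) in enumerate(pairs):
#         prop.enabled = direction.enabled = (index < relationDegree)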
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
currentWorkspace = inputFeatureClassName[:lastIndexOFGDB]
arcpy.env.workspace = currentWorkspace
out_location.value = currentWorkspace
out_table_name.value = featureClassName + "PathQueryTripleStore"
out_points_name.value = featureClassName + "PathQueryLocation"
outLocation = out_location.valueAsText
outTableName = out_table_name.valueAsText
outputTableName = os.path.join(outLocation,outTableName)
if arcpy.Exists(outputTableName):
arcpy.AddError("The output table already exists in current workspace!")
raise arcpy.ExecuteError
outFeatureClassName = out_points_name.valueAsText
outputFeatureClassName = os.path.join(outLocation,outFeatureClassName)
if arcpy.Exists(outputFeatureClassName):
arcpy.AddError("The output Feature Class already exists in current workspace!")
raise arcpy.ExecuteError
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
# get the first property URL list and label list
if in_first_property_dir.value:
firstDirection = in_first_property_dir.valueAsText
# get the first property URL list
firstPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, relationDegree, [firstDirection], ["", "", ""])
firstPropertyURLList = []
for jsonItem in firstPropertyURLListJsonBindingObject:
firstPropertyURLList.append(jsonItem["p1"]["value"])
firstPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(firstPropertyURLList)
# get the first property label list
firstPropertyURLList = []
firstPropertyLabelList = []
for jsonItem in firstPropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
firstPropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
firstPropertyLabelList.append(propertyName)
RelFinder.firstPropertyLabelURLDict = dict(zip(firstPropertyLabelList, firstPropertyURLList))
in_first_property.filter.list = firstPropertyLabelList
# get the second property URL list and label list
if in_second_property_dir.value:
firstDirection = in_first_property_dir.valueAsText
firstProperty = in_first_property.valueAsText
if firstProperty is None:
firstProperty = ""
else:
firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
secondDirection = in_second_property_dir.valueAsText
# get the second property URL list
secondPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, relationDegree, [firstDirection, secondDirection], [firstProperty, "", ""])
secondPropertyURLList = []
for jsonItem in secondPropertyURLListJsonBindingObject:
secondPropertyURLList.append(jsonItem["p2"]["value"])
secondPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(secondPropertyURLList)
# get the second property label list
secondPropertyURLList = []
secondPropertyLabelList = []
for jsonItem in secondPropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
secondPropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
secondPropertyLabelList.append(propertyName)
RelFinder.secondPropertyLabelURLDict = dict(zip(secondPropertyLabelList, secondPropertyURLList))
in_second_property.filter.list = secondPropertyLabelList
# get the third property URL list and label list
if in_third_property_dir.value:
firstDirection = in_first_property_dir.valueAsText
firstProperty = in_first_property.valueAsText
secondDirection = in_second_property_dir.valueAsText
secondProperty = in_second_property.valueAsText
if firstProperty is None:
firstProperty = ""
else:
firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
if secondProperty is None:
secondProperty = ""
else:
secondProperty = RelFinder.secondPropertyLabelURLDict[secondProperty]
thirdDirection = in_third_property_dir.valueAsText
# get the third property URL list
thirdPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, relationDegree, [firstDirection, secondDirection, thirdDirection], [firstProperty, secondProperty, ""])
thirdPropertyURLList = []
for jsonItem in thirdPropertyURLListJsonBindingObject:
thirdPropertyURLList.append(jsonItem["p3"]["value"])
thirdPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(thirdPropertyURLList)
# get the third property label list
thirdPropertyURLList = []
thirdPropertyLabelList = []
for jsonItem in thirdPropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
thirdPropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
thirdPropertyLabelList.append(propertyName)
RelFinder.thirdPropertyLabelURLDict = dict(zip(thirdPropertyLabelList, thirdPropertyURLList))
in_third_property.filter.list = thirdPropertyLabelList
# get the fourth property URL list and label list
if in_fourth_property_dir.value:
firstDirection = in_first_property_dir.valueAsText
firstProperty = in_first_property.valueAsText
secondDirection = in_second_property_dir.valueAsText
secondProperty = in_second_property.valueAsText
thirdDirection = in_third_property_dir.valueAsText
thirdProperty = in_third_property.valueAsText
if firstProperty is None:
firstProperty = ""
else:
firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
if secondProperty is None:
secondProperty = ""
else:
secondProperty = RelFinder.secondPropertyLabelURLDict[secondProperty]
if thirdProperty is None:
thirdProperty = ""
else:
thirdProperty = RelFinder.thirdPropertyLabelURLDict[thirdProperty]
fourthDirection = in_fourth_property_dir.valueAsText
# get the fourth property URL list
fourthPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, relationDegree, [firstDirection, secondDirection, thirdDirection, fourthDirection], [firstProperty, secondProperty, thirdProperty])
fourthPropertyURLList = []
for jsonItem in fourthPropertyURLListJsonBindingObject:
fourthPropertyURLList.append(jsonItem["p4"]["value"])
fourthPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(fourthPropertyURLList)
# get the fourth property label list
fourthPropertyURLList = []
fourthPropertyLabelList = []
for jsonItem in fourthPropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
fourthPropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
fourthPropertyLabelList.append(propertyName)
RelFinder.fourthPropertyLabelURLDict = dict(zip(fourthPropertyLabelList, fourthPropertyURLList))
in_fourth_property.filter.list = fourthPropertyLabelList
return
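# The four blocks above form a cascading picker: once the direction for degree N is set, a
# SPARQL query constrained by the N-1 properties already chosen repopulates degree N's value
# list, so each dropdown only offers predicates that can actually extend the path. For
# example (hypothetical values, P190 = "twinned administrative body"):
#
#     SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, 2, ["ORIGIN", "BOTH"],
#                                              ["http://www.wikidata.org/prop/direct/P190", "", ""])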
def updateMessages(self, parameters):
"""Modify the messages created by internal validation for each tool
parameter. This method is called after internal validation."""
return
def execute(self, parameters, messages):
"""The source code of the tool."""
in_wikiplace_IRI = parameters[0]
in_relation_degree = parameters[1]
in_first_property_dir = parameters[2]
in_first_property = parameters[3]
in_second_property_dir = parameters[4]
in_second_property = parameters[5]
in_third_property_dir = parameters[6]
in_third_property = parameters[7]
in_fourth_property_dir = parameters[8]
in_fourth_property = parameters[9]
out_location = parameters[10]
out_table_name = parameters[11]
out_points_name = parameters[12]
if in_wikiplace_IRI.value:
inputFeatureClassName = in_wikiplace_IRI.valueAsText
relationDegree = int(in_relation_degree.valueAsText)
outLocation = out_location.valueAsText
outTableName = out_table_name.valueAsText
outFeatureClassName = out_points_name.valueAsText
outputTableName = os.path.join(outLocation,outTableName)
outputFeatureClassName = os.path.join(outLocation,outFeatureClassName)
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
originFeatureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
if outLocation.endswith(".gdb") == False:
messages.addErrorMessage("Please enter a file geodatabase as the file location for output feature class.")
raise arcpy.ExecuteError
else:
arcpy.env.workspace = outLocation
if arcpy.Exists(outputTableName) or arcpy.Exists(outputFeatureClassName):
messages.addErrorMessage("The output table or feature class already exists in current workspace!")
raise arcpy.ExecuteError
else:
# get all the IRI from input point feature class of wikidata places
inplaceIRIList = []
cursor = arcpy.SearchCursor(inputFeatureClassName)
for row in cursor:
inplaceIRIList.append(row.getValue("URL"))
# get the user specified property URL and direction at each degree
propertyDirectionList = []
selectPropertyURLList = ["", "", "", ""]
if in_first_property_dir.value:
firstDirection = in_first_property_dir.valueAsText
firstProperty = in_first_property.valueAsText
if firstProperty is None:
firstPropertyURL = ""
else:
firstPropertyURL = RelFinder.firstPropertyLabelURLDict[firstProperty]
propertyDirectionList.append(firstDirection)
selectPropertyURLList[0] = firstPropertyURL
if in_second_property_dir.value:
secondDirection = in_second_property_dir.valueAsText
secondProperty = in_second_property.valueAsText
if secondProperty is None:
secondPropertyURL = ""
else:
secondPropertyURL = RelFinder.secondPropertyLabelURLDict[secondProperty]
propertyDirectionList.append(secondDirection)
selectPropertyURLList[1] = secondPropertyURL
if in_third_property_dir.value:
thirdDirection = in_third_property_dir.valueAsText
thirdProperty = in_third_property.valueAsText
if thirdProperty is None:
thirdPropertyURL = ""
else:
thirdPropertyURL = RelFinder.thirdPropertyLabelURLDict[thirdProperty]
propertyDirectionList.append(thirdDirection)
selectPropertyURLList[2] = thirdPropertyURL
if in_fourth_property_dir.value:
fourthDirection = in_fourth_property_dir.valueAsText
fourthProperty = in_fourth_property.valueAsText
if fourthProperty is None:
fourthPropertyURL = ""
else:
fourthPropertyURL = RelFinder.fourthPropertyLabelURLDict[fourthProperty]
propertyDirectionList.append(fourthDirection)
selectPropertyURLList[3] = fourthPropertyURL
arcpy.AddMessage("propertyDirectionList: {0}".format(propertyDirectionList))
arcpy.AddMessage("selectPropertyURLList: {0}".format(selectPropertyURLList))
# for the direction list, change "BOTH" to "OROIGIN" and "DESTINATION"
directionExpendedLists = UTIL.directionListFromBoth2OD(propertyDirectionList)
tripleStore = dict()
for currentDirectionList in directionExpendedLists:
# get a list of triples for curent specified property path
newTripleStore = SPARQLQuery.relFinderTripleQuery(inplaceIRIList, currentDirectionList, selectPropertyURLList)
tripleStore = UTIL.mergeTripleStoreDicts(tripleStore, newTripleStore)
# tripleStore = UTIL.mergeListsWithUniqueElement(tripleStore, newTripleStore)
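# Sketch of what UTIL.directionListFromBoth2OD is expected to return (an assumption based
# on its use here): every "BOTH" expands into both concrete directions, giving the
# cartesian product of the per-degree choices, e.g.
#
#     ["BOTH", "ORIGIN"] -> [["ORIGIN", "ORIGIN"], ["DESTINATION", "ORIGIN"]]
#
# and the triples from each concrete path are merged into one dict keyed by triple.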
triplePropertyURLList = []
for triple in tripleStore:
if triple.p not in triplePropertyURLList:
triplePropertyURLList.append(triple.p)
triplePropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(triplePropertyURLList)
triplePropertyURLList = []
triplePropertyLabelList = []
for jsonItem in triplePropertyLabelJSON:
propertyURL = jsonItem["p"]["value"]
triplePropertyURLList.append(propertyURL)
propertyName = jsonItem["propertyLabel"]["value"]
triplePropertyLabelList.append(propertyName)
triplePropertyURLLabelDict = dict(zip(triplePropertyURLList, triplePropertyLabelList))
tripleStoreTable = arcpy.CreateTable_management(outLocation,outTableName)
arcpy.AddField_management(tripleStoreTable, "Subject", "TEXT", field_length=50)
arcpy.AddField_management(tripleStoreTable, "Predicate", "TEXT", field_length=50)
arcpy.AddField_management(tripleStoreTable, "Object", "TEXT", field_length=50)
arcpy.AddField_management(tripleStoreTable, "Pred_Label", "TEXT", field_length=50)
arcpy.AddField_management(tripleStoreTable, "Degree", "LONG")
# Create insert cursor for table
rows = arcpy.InsertCursor(tripleStoreTable)
for triple in tripleStore:
row = rows.newRow()
row.setValue("Subject", triple.s)
row.setValue("Predicate", triple.p)
row.setValue("Object", triple.o)
row.setValue("Pred_Label", triplePropertyURLLabelDict[triple.p])
row.setValue("Degree", tripleStore[triple])
rows.insertRow(row)
entitySet = Set()
for triple in tripleStore:
entitySet.add(triple.s)
entitySet.add(triple.o)
placeJSON = SPARQLQuery.endPlaceInformationQuery(list(entitySet))
Json2Field.creatPlaceFeatureClassFromJSON(placeJSON, outputFeatureClassName, None, "")
arcpy.env.workspace = outLocation
originFeatureRelationshipClassName = outputFeatureClassName + "_" + outTableName + "_Origin" + "_RelClass"
arcpy.CreateRelationshipClass_management(outputFeatureClassName, outTableName, originFeatureRelationshipClassName, "SIMPLE",
"S-P-O Link", "Origin of S-P-O Link",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "Subject")
endFeatureRelationshipClassName = outputFeatureClassName + "_" + outTableName + "_Destination" + "_RelClass"
arcpy.CreateRelationshipClass_management(outputFeatureClassName, outTableName, endFeatureRelationshipClassName, "SIMPLE",
"S-P-O Link", "Destination of S-P-O Link",
"FORWARD", "ONE_TO_MANY", "NONE", "URL", "Object")
LinkageTableName = Json2Field.getNoExistTableNameInWorkspace(outLocation,outTableName + "_Linkage")
arcpy.CopyRows_management(outTableName, LinkageTableName)
arcpy.JoinField_management(LinkageTableName, "Subject", outputFeatureClassName, "URL", [])
arcpy.JoinField_management(LinkageTableName, "Object", outputFeatureClassName, "URL", [])
where_clause = '("URL" IS NOT NULL) AND ("URL_1" IS NOT NULL)'
LinkedNotNullTableName = Json2Field.getNoExistTableNameInWorkspace(outLocation, LinkageTableName[(lastIndexOFGDB+1):] + "_SQL")
arcpy.TableSelect_analysis(LinkageTableName, LinkedNotNullTableName, where_clause)
lineFeatureName = Json2Field.getNoExistTableNameInWorkspace(outLocation, outTableName + "_LinkedLine")
arcpy.XYToLine_management(LinkedNotNullTableName,lineFeatureName,
"POINT_X","POINT_Y","POINT_X_1","POINT_Y_1","GREAT_CIRCLE")
arcpy.JoinField_management(lineFeatureName, "OID", LinkedNotNullTableName, "OBJECTID", ["URL", "Label", "URL_1", "Label_1", "Pred_Label", "Degree"])
mxd = arcpy.mapping.MapDocument("CURRENT")
# get the data frame
df = arcpy.mapping.ListDataFrames(mxd)[0]
# create a new layer
lineFeatureLayer = arcpy.mapping.Layer(os.path.join(outLocation,lineFeatureName))
# add the layer to the map at the bottom of the TOC in data frame 0
arcpy.mapping.AddLayer(df, lineFeatureLayer, "BOTTOM")
# ["BOTH", "ORIGIN", "DESTINATION"]
# if relationDegree > 1:
# sencondDegreeTableAppendName = ""
# if (propertyDirectionList[0] == "ORIGIN" or propertyDirectionList[0] == "BOTH") and (propertyDirectionList[1] == "ORIGIN" or propertyDirectionList[1] == "BOTH"):
# if firstProperty == None:
# sencondDegreeTableName += "_O"
# else:
# sencondDegreeTableName += "_O_" + firstProperty.replace(" ", "_")
# if secondProperty == None:
# sencondDegreeTableName += "_O"
# else:
# sencondDegreeTableName += "_O_" + secondProperty.replace(" ", "_")
# sencondDegreeTableName = outTableName + sencondDegreeTableAppendName
# arcpy.CopyRows_management(outTableName, sencondDegreeTableName)
# arcpy.JoinField_management(sencondDegreeTableName, "Object", outTableName, "Subject", [])
# # get the first property URL list and label list
# if in_first_property_dir.value:
# fristDirection = in_first_property_dir.valueAsText
# # get the first property URL list
# firstPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, 1, [fristDirection], ["", "", ""])
# firstPropertyURLList = []
# for jsonItem in firstPropertyURLListJsonBindingObject:
# firstPropertyURLList.append(jsonItem["p1"]["value"])
# firstPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(firstPropertyURLList)
# # firstPropertyLabelJSON = firstPropertyLabelJSONObj["results"]["bindings"]
# # get the first property label list
# firstPropertyURLList = []
# firstPropertyLabelList = []
# for jsonItem in firstPropertyLabelJSON:
# propertyURL = jsonItem["p"]["value"]
# firstPropertyURLList.append(propertyURL)
# propertyName = jsonItem["propertyLabel"]["value"]
# firstPropertyLabelList.append(propertyName)
# # RelFinder.firstPropertyLabelURLDict = dict(zip(firstPropertyLabelList, firstPropertyURLList))
# # in_first_property.filter.list = firstPropertyLabelList
# arcpy.AddMessage("firstPropertyURLList: {0}".format(firstPropertyURLList))
# arcpy.AddMessage("firstPropertyLabelList: {0}".format(firstPropertyLabelList))
# # get the second property URL list and label list
# if in_second_property_dir.value:
# fristDirection = in_first_property_dir.valueAsText
# firstProperty = in_first_property.valueAsText
# if firstProperty == None:
# firstProperty = ""
# else:
# firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
# secondDirection = in_second_property_dir.valueAsText
# arcpy.AddMessage("firstProperty: {0}".format(firstProperty))
# # get the second property URL list
# secondPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, 2, [fristDirection, secondDirection], [firstProperty, "", ""])
# secondPropertyURLList = []
# for jsonItem in secondPropertyURLListJsonBindingObject:
# secondPropertyURLList.append(jsonItem["p2"]["value"])
# secondPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(secondPropertyURLList)
# # secondPropertyLabelJSON = secondPropertyLabelJSONObj["results"]["bindings"]
# # get the second property label list
# secondPropertyURLList = []
# secondPropertyLabelList = []
# for jsonItem in secondPropertyLabelJSON:
# propertyURL = jsonItem["p"]["value"]
# secondPropertyURLList.append(propertyURL)
# propertyName = jsonItem["propertyLabel"]["value"]
# secondPropertyLabelList.append(propertyName)
# # RelFinder.secondPropertyLabelURLDict = dict(zip(secondPropertyLabelList, secondPropertyURLList))
# # in_second_property.filter.list = secondPropertyLabelList
# arcpy.AddMessage("secondPropertyURLList: {0}".format(secondPropertyURLList))
# arcpy.AddMessage("secondPropertyLabelList: {0}".format(secondPropertyLabelList))
# # get the third property URL list and label list
# if in_third_property_dir.value:
# fristDirection = in_first_property_dir.valueAsText
# firstProperty = in_first_property.valueAsText
# secondDirection = in_second_property_dir.valueAsText
# secondProperty = in_second_property.valueAsText
# thirdDirection = in_third_property_dir.valueAsText
# if firstProperty == None:
# firstProperty = ""
# else:
# firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
# if secondProperty == None:
# secondProperty = ""
# else:
# secondProperty = RelFinder.secondPropertyLabelURLDict[secondProperty]
# # get the third property URL list
# thirdPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, 3, [fristDirection, secondDirection, thirdDirection], [firstProperty, secondProperty, ""])
# thirdPropertyURLList = []
# for jsonItem in thirdPropertyURLListJsonBindingObject:
# thirdPropertyURLList.append(jsonItem["p3"]["value"])
# thirdPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(thirdPropertyURLList)
# # thirdPropertyLabelJSON = thirdPropertyLabelJSONObj["results"]["bindings"]
# # get the third property label list
# thirdPropertyURLList = []
# thirdPropertyLabelList = []
# for jsonItem in thirdPropertyLabelJSON:
# propertyURL = jsonItem["p"]["value"]
# thirdPropertyURLList.append(propertyURL)
# propertyName = jsonItem["propertyLabel"]["value"]
# thirdPropertyLabelList.append(propertyName)
# # RelFinder.thirdPropertyLabelURLDict = dict(zip(thirdPropertyLabelList, thirdPropertyURLList))
# # in_third_property.filter.list = thirdPropertyLabelList
# arcpy.AddMessage("thirdPropertyURLList: {0}".format(thirdPropertyURLList))
# arcpy.AddMessage("thirdPropertyLabelList: {0}".format(thirdPropertyLabelList))
# # get the fourth property URL list and label list
# if in_fourth_property_dir.value:
# fristDirection = in_first_property_dir.valueAsText
# firstProperty = in_first_property.valueAsText
# secondDirection = in_second_property_dir.valueAsText
# secondProperty = in_second_property.valueAsText
# thirdDirection = in_third_property_dir.valueAsText
# thirdProperty = in_third_property.valueAsText
# fourthDirection = in_fourth_property_dir.valueAsText
# if firstProperty == None:
# firstProperty = ""
# else:
# firstProperty = RelFinder.firstPropertyLabelURLDict[firstProperty]
# if secondProperty == None:
# secondProperty = ""
# else:
# secondProperty = RelFinder.secondPropertyLabelURLDict[secondProperty]
# if thirdProperty == None:
# thirdProperty = ""
# else:
# thirdProperty = RelFinder.thirdPropertyLabelURLDict[thirdProperty]
# # get the fourth property URL list
# fourthPropertyURLListJsonBindingObject = SPARQLQuery.relFinderCommonPropertyQuery(inplaceIRIList, 4, [fristDirection, secondDirection, thirdDirection, fourthDirection], [firstProperty, secondProperty, thirdProperty])
# fourthPropertyURLList = []
# for jsonItem in fourthPropertyURLListJsonBindingObject:
# fourthPropertyURLList.append(jsonItem["p4"]["value"])
# fourthPropertyLabelJSON = SPARQLQuery.locationCommonPropertyLabelQuery(fourthPropertyURLList)
# # fourthPropertyLabelJSON = fourthPropertyLabelJSONObj["results"]["bindings"]
# # get the fourth property label list
# fourthPropertyURLList = []
# fourthPropertyLabelList = []
# for jsonItem in fourthPropertyLabelJSON:
# propertyURL = jsonItem["p"]["value"]
# fourthPropertyURLList.append(propertyURL)
# propertyName = jsonItem["propertyLabel"]["value"]
# fourthPropertyLabelList.append(propertyName)
# # RelFinder.fourthPropertyLabelURLDict = dict(zip(fourthPropertyLabelList, fourthPropertyURLList))
# # in_fourth_property.filter.list = fourthPropertyLabelList
# arcpy.AddMessage("fourthPropertyURLList: {0}".format(fourthPropertyURLList))
# arcpy.AddMessage("fourthPropertyLabelList: {0}".format(fourthPropertyLabelList))
# if relationDegree > 4:
# relationDegree = 4
# in_relation_degree.value = 4
# locationCommonPropertyURL = LocationPropertyPath.locationCommonPropertyDict[locationCommonPropertyNameCount]
# locationLinkageRelationJSONObj = SPARQLQuery.locationLinkageRelationQuery(inplaceIRIList, locationCommonPropertyURL, relationDegree)
# locationLinkageRelationJSON = locationLinkageRelationJSONObj["results"]["bindings"]
# endPlaceIRISet = Set()
# for jsonItem in locationLinkageRelationJSON:
# endPlaceIRISet.add(jsonItem["end"]["value"])
# endPlaceIRIList = list(endPlaceIRISet)
# # endPlaceJSONObj = SPARQLQuery.endPlaceInformationQuery(endPlaceIRIList)
# endPlaceJSON = SPARQLQuery.endPlaceInformationQuery(endPlaceIRIList)
# Json2Field.creatPlaceFeatureClassFromJSON(endPlaceJSON, endFeatureClassName, None, "")
# lastIndex = locationCommonPropertyNameCount.rfind("(")
# locationCommonPropertyName = locationCommonPropertyNameCount[:lastIndex]
# locationLinkageTableName = Json2Field.createLocationLinkageMappingTableFromJSON(locationLinkageRelationJSON, "origin", "end", inputFeatureClassName, endFeatureClassName, locationCommonPropertyURL, locationCommonPropertyName, relationDegree)
# endFeatureRelationshipClassName = outFeatureClassName + "_" + locationLinkageTableName + "_RelClass"
# arcpy.CreateRelationshipClass_management(outFeatureClassName, locationLinkageTableName, endFeatureRelationshipClassName, "SIMPLE",
# "is "+ locationCommonPropertyName + "of", locationCommonPropertyName,
# "FORWARD", "ONE_TO_MANY", "NONE", "URL", "end")
# originFeatureRelationshipClassName = originFeatureClassName + "_" + locationLinkageTableName + "_RelClass"
# arcpy.CreateRelationshipClass_management(originFeatureClassName, locationLinkageTableName, originFeatureRelationshipClassName, "SIMPLE",
# locationCommonPropertyName, "is "+ locationCommonPropertyName + "of",
# "FORWARD", "ONE_TO_MANY", "NONE", "URL", "origin")
return
class SPARQLQuery(object):
@staticmethod
def endPlaceInformationQuery(endPlaceIRIList):
jsonBindingObject = []
i = 0
while i < len(endPlaceIRIList):
if i + 50 > len(endPlaceIRIList):
endPlaceIRISubList = endPlaceIRIList[i:]
else:
endPlaceIRISubList = endPlaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX geo-pos: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX omgeo: <http://www.ontotext.com/owlim/geo#>
PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX dbp-ont: <http://dbpedia.org/ontology/>
PREFIX ff: <http://factforge.net/>
PREFIX om: <http://www.ontotext.com/owlim/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>"""
endPlaceQuery = queryPrefix + """SELECT distinct ?place ?placeLabel ?location
WHERE {
?place wdt:P625 ?location .
# retrieve the English label
SERVICE wikibase:label {bd:serviceParam wikibase:language "en". ?place rdfs:label ?placeLabel .}
?place wdt:P31 ?placeFlatType.
?placeFlatType wdt:P279* wd:Q2221906.
VALUES ?place
{"""
for IRI in endPlaceIRISubList:
endPlaceQuery = endPlaceQuery + "<" + IRI + "> \n"
endPlaceQuery = endPlaceQuery + """
}
}
"""
endPlaceSparqlParam = {'query': endPlaceQuery, 'format': 'json'}
endPlaceSparqlRequest = requests.get('https://query.wikidata.org/sparql', params=endPlaceSparqlParam)
arcpy.AddMessage("SPARQL: {0}".format(endPlaceSparqlRequest.url))
jsonBindingObject.extend(endPlaceSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
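# The 50-IRI batches keep each GET request under the endpoint's URL-length limit, since
# the whole query is sent as a URL parameter. A generic helper expressing the same
# slicing (a sketch, not used by the tool):
#
#     def chunked(seq, size=50):
#         for start in range(0, len(seq), size):
#             yield seq[start:start + size]
#
#     # for endPlaceIRISubList in chunked(endPlaceIRIList): build and send one query per batch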
@staticmethod
def locationLinkageRelationQuery(inplaceIRIList, locationCommonPropertyURL, relationDegree):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX wikibase: <http://wikiba.se/ontology#>"""
locationLinkageQuery = queryPrefix + """select ?origin ?end
where
{
?origin """
for i in range(relationDegree-1):
locationLinkageQuery += """<""" + locationCommonPropertyURL + """>/"""
locationLinkageQuery += """<""" + locationCommonPropertyURL + """>""" + """?end.
VALUES ?origin
{"""
for IRI in inplaceIRIList:
locationLinkageQuery = locationLinkageQuery + "<" + IRI + "> \n"
locationLinkageQuery = locationLinkageQuery + """
}
}
"""
locationLinkageSparqlParam = {'query': locationLinkageQuery, 'format': 'json'}
locationLinkageSparqlRequest = requests.get('https://query.wikidata.org/sparql', params=locationLinkageSparqlParam)
arcpy.AddMessage("SPARQL: {0}".format(locationLinkageSparqlRequest.url))
return locationLinkageSparqlRequest.json()
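# For relationDegree = 2 and a property P, the loop above emits the SPARQL property path
#
#     ?origin <P>/<P> ?end.
#
# i.e. relationDegree - 1 repetitions joined by "/" plus the final hop, so ?end is exactly
# relationDegree hops away from ?origin along the chosen predicate.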
@staticmethod
def locationCommonPropertyQuery(inplaceIRIList):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX wikibase: <http://wikiba.se/ontology#>"""
commonPropertyQuery = queryPrefix + """select distinct ?p (count(distinct ?s) as ?NumofSub)
where
{
?s ?p ?objPlace.
?objPlace wdt:P625 ?coordinate.
#?objPlace wdt:P31 ?placeFlatType.
#?placeFlatType wdt:P279* wd:Q2221906.
VALUES ?s
{"""
for IRI in inplaceIRIList:
commonPropertyQuery = commonPropertyQuery + "<" + IRI + "> \n"
commonPropertyQuery = commonPropertyQuery + """
}
}
group by ?p
order by DESC(?NumofSub)
"""
commonPropertySparqlParam = {'query': commonPropertyQuery, 'format': 'json'}
commonPropertySparqlRequest = requests.get('https://query.wikidata.org/sparql', params=commonPropertySparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertySparqlRequest.url))
# commonPropertyJSON = commonPropertySparqlRequest.json()["results"]["bindings"]
return commonPropertySparqlRequest.json()
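# Shape of the bindings this method returns, as consumed by updateParameters (a trimmed,
# hypothetical example; note SPARQL JSON returns counts as strings):
#
#     {"results": {"bindings": [
#         {"p": {"value": "http://www.wikidata.org/prop/direct/P47"},
#          "NumofSub": {"value": "42"}}
#     ]}}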
@staticmethod
def commonPropertyQuery(inplaceIRIList):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
commonPropertyQuery = queryPrefix + """select distinct ?p (count(distinct ?s) as ?NumofSub)
where
{ ?s owl:sameAs ?wikidataSub.
?s ?p ?o.
VALUES ?wikidataSub
{"""
for IRI in inplaceIRIList:
commonPropertyQuery = commonPropertyQuery + "<" + IRI + "> \n"
commonPropertyQuery = commonPropertyQuery + """
}
}
group by ?p
order by DESC(?NumofSub)
"""
commonPropertySparqlParam = {'query': commonPropertyQuery, 'format': 'json'}
commonPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=commonPropertySparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertySparqlRequest.url))
return commonPropertySparqlRequest.json()
@staticmethod
def inverseCommonPropertyQuery(inplaceIRIList):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
commonPropertyQuery = queryPrefix + """select distinct ?p (count(distinct ?s) as ?NumofSub)
where
{ ?s owl:sameAs ?wikidataSub.
?o ?p ?s.
VALUES ?wikidataSub
{"""
for IRI in inplaceIRIList:
commonPropertyQuery = commonPropertyQuery + "<" + IRI + "> \n"
commonPropertyQuery = commonPropertyQuery + """
}
}
group by ?p
order by DESC(?NumofSub)
"""
commonPropertySparqlParam = {'query': commonPropertyQuery, 'format': 'json'}
commonPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=commonPropertySparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertySparqlRequest.url))
return commonPropertySparqlRequest.json()
@staticmethod
def locationDBpediaExpandedCommonPropertyQuery(inplaceIRIList):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
commonPropertyQuery = queryPrefix + """select distinct ?p (count(distinct ?subDivision) as ?NumofSub)
where
{ ?s owl:sameAs ?wikidataSub.
?subDivision dbo:isPartOf+ ?s.
?subDivision ?p ?o.
VALUES ?wikidataSub
{"""
for IRI in inplaceIRIList:
commonPropertyQuery = commonPropertyQuery + "<" + IRI + "> \n"
commonPropertyQuery = commonPropertyQuery + """
}
}
group by ?p
order by DESC(?NumofSub)
"""
commonPropertySparqlParam = {'query': commonPropertyQuery, 'format': 'json'}
commonPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=commonPropertySparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertySparqlRequest.url))
return commonPropertySparqlRequest.json()
@staticmethod
def locationDBpediaInverseExpandedCommonPropertyQuery(inplaceIRIList):
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
commonPropertyQuery = queryPrefix + """select distinct ?p (count(distinct ?subDivision) as ?NumofSub)
where
{ ?s owl:sameAs ?wikidataSub.
?subDivision dbo:isPartOf+ ?s.
?o ?p ?subDivision.
VALUES ?wikidataSub
{"""
for IRI in inplaceIRIList:
commonPropertyQuery = commonPropertyQuery + "<" + IRI + "> \n"
commonPropertyQuery = commonPropertyQuery + """
}
}
group by ?p
order by DESC(?NumofSub)
"""
commonPropertySparqlParam = {'query': commonPropertyQuery, 'format': 'json'}
commonPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=commonPropertySparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertySparqlRequest.url))
return commonPropertySparqlRequest.json()
@staticmethod
def locationCommonPropertyLabelQuery(locationCommonPropertyURLList):
jsonBindingObject = []
i = 0
while i < len(locationCommonPropertyURLList):
if i + 50 > len(locationCommonPropertyURLList):
propertyIRISubList = locationCommonPropertyURLList[i:]
else:
propertyIRISubList = locationCommonPropertyURLList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX wikibase: <http://wikiba.se/ontology#>"""
commonPropertyLabelQuery = queryPrefix + """select ?p ?propertyLabel
where
{
?wdProperty wikibase:directClaim ?p.
SERVICE wikibase:label {bd:serviceParam wikibase:language "en". ?wdProperty rdfs:label ?propertyLabel.}
VALUES ?p
{"""
for propertyURL in propertyIRISubList:
commonPropertyLabelQuery = commonPropertyLabelQuery + "<" + propertyURL + "> \n"
commonPropertyLabelQuery = commonPropertyLabelQuery + """
}
}
"""
commonPropertyLabelSparqlParam = {'query': commonPropertyLabelQuery, 'format': 'json'}
commonPropertyLabelSparqlRequest = requests.get('https://query.wikidata.org/sparql', params=commonPropertyLabelSparqlParam)
# print(commonPropertySparqlRequest.url)
arcpy.AddMessage("SPARQL: {0}".format(commonPropertyLabelSparqlRequest.url))
# commonPropertyJSON = commonPropertySparqlRequest.json()["results"]["bindings"]
jsonBindingObject.extend(commonPropertyLabelSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
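# wikibase:directClaim maps a truthy wdt: predicate back to its wd: property entity, which
# is what the label service can annotate; e.g. (hypothetical binding)
#
#     ?p = wdt:P47  ->  ?wdProperty = wd:P47  ->  ?propertyLabel = "shares border with"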
@staticmethod
def functionalPropertyQuery(propertyURLList):
# given a list of properties, return the sublist that are functional properties
# send a SPARQL query to the DBpedia endpoint to test whether the user-selected properties are owl:FunctionalProperty
jsonBindingObject = []
i = 0
while i < len(propertyURLList):
if i + 50 > len(propertyURLList):
propertyURLSubList = propertyURLList[i:]
else:
propertyURLSubList = propertyURLList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
isFuncnalPropertyQuery = queryPrefix + """select ?property
where
{ ?property a owl:FunctionalProperty.
VALUES ?property
{"""
for propertyURL in propertyURLSubList:
isFuncnalPropertyQuery = isFuncnalPropertyQuery + "<" + propertyURL + "> \n"
isFuncnalPropertyQuery = isFuncnalPropertyQuery + """
}
}
"""
isFuncnalPropertySparqlParam = {'query': isFuncnalPropertyQuery, 'format': 'json'}
isFuncnalPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=isFuncnalPropertySparqlParam)
print(isFuncnalPropertySparqlRequest.url)
arcpy.AddMessage("isFuncnalPropertySparqlRequest: {0}".format(isFuncnalPropertySparqlRequest.url))
jsonBindingObject.extend(isFuncnalPropertySparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
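# Background for the two queries above and below: owl:FunctionalProperty means a subject
# can have at most one value for the property (x p y1 and x p y2 imply y1 = y2), while
# owl:InverseFunctionalProperty bounds the subject side instead. In Turtle, e.g.
#
#     dbo:capital a owl:FunctionalProperty .    # hypothetical declaration
#
# which is presumably why the toolbox separates functional from non-functional properties
# when deciding whether fetched values fit one-per-row or need a related table.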
@staticmethod
def inverseFunctionalPropertyQuery(propertyURLList):
# given a list of properties, return the sublist that are inverse functional properties
# send a SPARQL query to the DBpedia endpoint to test whether the user-selected properties are owl:InverseFunctionalProperty
jsonBindingObject = []
i = 0
while i < len(propertyURLList):
if i + 50 > len(propertyURLList):
propertyURLSubList = propertyURLList[i:]
else:
propertyURLSubList = propertyURLList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
isFuncnalPropertyQuery = queryPrefix + """select ?property
where
{ ?property a owl:InverseFunctionalProperty.
VALUES ?property
{"""
for propertyURL in propertyURLSubList:
isFuncnalPropertyQuery = isFuncnalPropertyQuery + "<" + propertyURL + "> \n"
isFuncnalPropertyQuery = isFuncnalPropertyQuery + """
}
}
"""
isFuncnalPropertySparqlParam = {'query': isFuncnalPropertyQuery, 'format': 'json'}
isFuncnalPropertySparqlRequest = requests.get('https://dbpedia.org/sparql', params=isFuncnalPropertySparqlParam)
print(isFuncnalPropertySparqlRequest.url)
arcpy.AddMessage("isFuncnalPropertySparqlRequest: {0}".format(isFuncnalPropertySparqlRequest.url))
jsonBindingObject.extend(isFuncnalPropertySparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def dbpediaIRIQuery(inplaceIRIList):
# send a SPARQL query to DBpedia endpoint to get the DBpedia IRI according to wikidata IRI
# inplaceIRIList: a list of wikidata IRI for locations in the Feature class
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
dbpediaIRIQuery = queryPrefix + """select ?DBpediaSub ?wikidataSub
where
{ ?DBpediaSub owl:sameAs ?wikidataSub.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRIList:
dbpediaIRIQuery = dbpediaIRIQuery + "<" + IRI + "> \n"
dbpediaIRIQuery = dbpediaIRIQuery + """
}
}
"""
dbpediaIRISparqlParam = {'query': dbpediaIRIQuery, 'format': 'json'}
dbpediaIRISparqlRequest = requests.get('https://dbpedia.org/sparql', params=dbpediaIRISparqlParam)
print(dbpediaIRISparqlRequest.url)
arcpy.AddMessage("dbpediaIRISparqlRequest: {0}".format(dbpediaIRISparqlRequest.url))
return dbpediaIRISparqlRequest.json()
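# dbpediaIRIQuery leans on DBpedia's owl:sameAs links to Wikidata, so each binding row
# pairs the two IRIs, e.g.
#
#     ?DBpediaSub  = http://dbpedia.org/resource/Paris
#     ?wikidataSub = http://www.wikidata.org/entity/Q90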
@staticmethod
def propertyValueQuery(inplaceIRIList, propertyURL):
# according to a list of wikidata IRI (inplaceIRIList), get the value for a specific property (propertyURL) from DBpedia
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
PropertyValueQuery = queryPrefix + """select ?wikidataSub ?o
where
{ ?s owl:sameAs ?wikidataSub.
?s <"""+ propertyURL +"""> ?o.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRISubList:
PropertyValueQuery += "<" + IRI + "> \n"
PropertyValueQuery += """
}
}
"""
PropertyValueSparqlParam = {'query': PropertyValueQuery, 'format': 'json'}
PropertyValueSparqlRequest = requests.get('https://dbpedia.org/sparql', params=PropertyValueSparqlParam)
print(PropertyValueSparqlRequest.url)
arcpy.AddMessage("PropertyValueSparqlRequest: {0}".format(PropertyValueSparqlRequest.url))
jsonBindingObject.extend(PropertyValueSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def inversePropertyValueQuery(inplaceIRIList, propertyURL):
# according to a list of Wikidata IRIs (inplaceIRIList), get the entities that point to each location through a specific property (propertyURL) on DBpedia, i.e. the inverse direction of propertyValueQuery
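# Note the reversed triple pattern below (?o <propertyURL> ?s) compared with
# propertyValueQuery's (?s <propertyURL> ?o): the matched locations are the
# objects of the property here rather than its subjects.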
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
PropertyValueQuery = queryPrefix + """select ?wikidataSub ?o
where
{ ?s owl:sameAs ?wikidataSub.
?o <"""+ propertyURL +"""> ?s.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRISubList:
PropertyValueQuery += "<" + IRI + "> \n"
PropertyValueQuery += """
}
}
"""
PropertyValueSparqlParam = {'query': PropertyValueQuery, 'format': 'json'}
PropertyValueSparqlRequest = requests.get('https://dbpedia.org/sparql', params=PropertyValueSparqlParam)
print(PropertyValueSparqlRequest.url)
arcpy.AddMessage("PropertyValueSparqlRequest: {0}".format(PropertyValueSparqlRequest.url))
jsonBindingObject.extend(PropertyValueSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def isPartOfReverseTransiveQuery(inplaceIRIList):
# according to a list of Wikidata IRIs (inplaceIRIList), get the corresponding DBpedia IRIs.
# Use the "isPartOf" relation transitively to get the subdivisions of each DBpedia location
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
PropertyValueQuery = queryPrefix + """select ?wikidataSub ?subDivision
where
{ ?s owl:sameAs ?wikidataSub.
?subDivision dbo:isPartOf+ ?s.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRISubList:
PropertyValueQuery += "<" + IRI + "> \n"
PropertyValueQuery += """
}
}
"""
PropertyValueSparqlParam = {'query': PropertyValueQuery, 'format': 'json'}
PropertyValueSparqlRequest = requests.get('https://dbpedia.org/sparql', params=PropertyValueSparqlParam)
print(PropertyValueSparqlRequest.url)
arcpy.AddMessage("PropertyValueSparqlRequest: {0}".format(PropertyValueSparqlRequest.url))
jsonBindingObject.extend(PropertyValueSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def expandedPropertyValueQuery(inplaceIRIList, propertyURL):
# according to a list of Wikidata IRIs (inplaceIRIList), get the corresponding DBpedia IRIs.
# Use the "isPartOf" relation transitively to get the subdivisions of each DBpedia location, then get the value of a specific property (propertyURL) for each subdivision from DBpedia
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
PropertyValueQuery = queryPrefix + """select ?wikidataSub ?subDivision ?o
where
{ ?s owl:sameAs ?wikidataSub.
?subDivision dbo:isPartOf+ ?s.
?subDivision <"""+ propertyURL +"""> ?o.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRISubList:
PropertyValueQuery += "<" + IRI + "> \n"
PropertyValueQuery += """
}
}
"""
PropertyValueSparqlParam = {'query': PropertyValueQuery, 'format': 'json'}
PropertyValueSparqlRequest = requests.get('https://dbpedia.org/sparql', params=PropertyValueSparqlParam)
print(PropertyValueSparqlRequest.url)
arcpy.AddMessage("PropertyValueSparqlRequest: {0}".format(PropertyValueSparqlRequest.url))
jsonBindingObject.extend(PropertyValueSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
# return PropertyValueSparqlRequest.json()
@staticmethod
def inverseExpandedPropertyValueQuery(inplaceIRIList, propertyURL):
# according to a list of Wikidata IRIs (inplaceIRIList), get the corresponding DBpedia IRIs.
# Use the "isPartOf" relation transitively to get the subdivisions of each DBpedia location, then get the entities that point to each subdivision through a specific property (propertyURL) on DBpedia
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>"""
PropertyValueQuery = queryPrefix + """select ?wikidataSub ?subDivision ?o
where
{ ?s owl:sameAs ?wikidataSub.
?subDivision dbo:isPartOf+ ?s.
?o <"""+ propertyURL +"""> ?subDivision.
VALUES ?wikidataSub
{
"""
for IRI in inplaceIRISubList:
PropertyValueQuery += "<" + IRI + "> \n"
PropertyValueQuery += """
}
}
"""
PropertyValueSparqlParam = {'query': PropertyValueQuery, 'format': 'json'}
PropertyValueSparqlRequest = requests.get('https://dbpedia.org/sparql', params=PropertyValueSparqlParam)
print(PropertyValueSparqlRequest.url)
arcpy.AddMessage("PropertyValueSparqlRequest: {0}".format(PropertyValueSparqlRequest.url))
jsonBindingObject.extend(PropertyValueSparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def relFinderCommonPropertyQuery(inplaceIRIList, relationDegree, propertyDirectionList, selectPropertyURLList):
# get the property URL list at the specified degree on the path from the inplaceIRIList
# inplaceIRIList: the IRI list of Wikidata locations
# relationDegree: the degree (hop count) of the property on the path that the current query retrieves
# propertyDirectionList: the list of property directions; it has at most 4 elements and its length is the path degree. Each element is one of ["BOTH", "ORIGIN", "DESTINATION"]
# selectPropertyURLList: the selected property URL list; it always has three elements, "" if no property has been selected for that hop
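# Example (illustrative): relationDegree=2 with propertyDirectionList
# ["ORIGIN", "ORIGIN"] and no pre-selected properties selects ?p2, i.e. the
# second-hop properties on paths ?place ?p1 ?o1. ?o1 ?p2 ?o2.
# (?p1 constrained to owl:ObjectProperty).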
if len(propertyDirectionList) == 1:
selectParam = "?p1"
elif len(propertyDirectionList) == 2:
selectParam = "?p2"
elif len(propertyDirectionList) == 3:
selectParam = "?p3"
elif len(propertyDirectionList) == 4:
selectParam = "?p4"
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX geo-pos: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX omgeo: <http://www.ontotext.com/owlim/geo#>
PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX dbp-ont: <http://dbpedia.org/ontology/>
PREFIX ff: <http://factforge.net/>
PREFIX om: <http://www.ontotext.com/owlim/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>"""
# ["BOTH", "ORIGIN", "DESTINATION"]
# if propertyDirectionList[0] == "BOTH"
relFinderPropertyQuery = queryPrefix + """SELECT distinct """ + selectParam + """
WHERE {"""
if len(propertyDirectionList) > 0:
if selectPropertyURLList[0] == "":
if propertyDirectionList[0] == "BOTH":
relFinderPropertyQuery += """{?place ?p1 ?o1.} UNION {?o1 ?p1 ?place.}\n"""
elif propertyDirectionList[0] == "ORIGIN":
relFinderPropertyQuery += """?place ?p1 ?o1.\n"""
elif propertyDirectionList[0] == "DESTINATION":
relFinderPropertyQuery += """?o1 ?p1 ?place.\n"""
if relationDegree > 1:
relFinderPropertyQuery += """?p1 a owl:ObjectProperty.\n"""
else:
if propertyDirectionList[0] == "BOTH":
relFinderPropertyQuery += """{?place <"""+ selectPropertyURLList[0] + """> ?o1.} UNION {?o1 <"""+ selectPropertyURLList[0] + """> ?place.}\n"""
elif propertyDirectionList[0] == "ORIGIN":
relFinderPropertyQuery += """?place <"""+ selectPropertyURLList[0] + """> ?o1.\n"""
elif propertyDirectionList[0] == "DESTINATION":
relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[0] + """> ?place.\n"""
if len(propertyDirectionList) > 1:
if selectPropertyURLList[1] == "":
if propertyDirectionList[1] == "BOTH":
relFinderPropertyQuery += """{?o1 ?p2 ?o2.} UNION {?o2 ?p2 ?o1.}\n"""
elif propertyDirectionList[1] == "ORIGIN":
relFinderPropertyQuery += """?o1 ?p2 ?o2.\n"""
elif propertyDirectionList[1] == "DESTINATION":
relFinderPropertyQuery += """?o2 ?p2 ?o1.\n"""
if relationDegree > 2:
relFinderPropertyQuery += """?p2 a owl:ObjectProperty.\n"""
else:
if propertyDirectionList[1] == "BOTH":
relFinderPropertyQuery += """{?o1 <"""+ selectPropertyURLList[1] + """> ?o2.} UNION {?o2 <"""+ selectPropertyURLList[1] + """> ?o1.}\n"""
elif propertyDirectionList[1] == "ORIGIN":
relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[1] + """> ?o2.\n"""
elif propertyDirectionList[1] == "DESTINATION":
relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[1] + """> ?o1.\n"""
if len(propertyDirectionList) > 2:
if selectPropertyURLList[2] == "":
if propertyDirectionList[2] == "BOTH":
relFinderPropertyQuery += """{?o2 ?p3 ?o3.} UNION {?o3 ?p3 ?o2.}\n"""
elif propertyDirectionList[2] == "ORIGIN":
relFinderPropertyQuery += """?o2 ?p3 ?o3.\n"""
elif propertyDirectionList[2] == "DESTINATION":
relFinderPropertyQuery += """?o3 ?p3 ?o2.\n"""
if relationDegree > 3:
relFinderPropertyQuery += """?p3 a owl:ObjectProperty.\n"""
else:
if propertyDirectionList[2] == "BOTH":
relFinderPropertyQuery += """{?o2 <"""+ selectPropertyURLList[2] + """> ?o3.} UNION {?o3 <"""+ selectPropertyURLList[2] + """> ?o2.}\n"""
elif propertyDirectionList[2] == "ORIGIN":
relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[2] + """> ?o3.\n"""
elif propertyDirectionList[2] == "DESTINATION":
relFinderPropertyQuery += """?o3 <"""+ selectPropertyURLList[2] + """> ?o2.\n"""
if len(propertyDirectionList) > 3:
if propertyDirectionList[3] == "BOTH":
relFinderPropertyQuery += """{?o3 ?p4 ?o4.} UNION {?o4 ?p4 ?o3.}\n"""
elif propertyDirectionList[3] == "ORIGIN":
relFinderPropertyQuery += """?o3 ?p4 ?o4.\n"""
elif propertyDirectionList[3] == "DESTINATION":
relFinderPropertyQuery += """?o4 ?p4 ?o3.\n"""
relFinderPropertyQuery += """
VALUES ?place
{"""
for IRI in inplaceIRISubList:
relFinderPropertyQuery = relFinderPropertyQuery + "<" + IRI + "> \n"
relFinderPropertyQuery = relFinderPropertyQuery + """
}
}
"""
relFinderPropertySparqlParam = {'query': relFinderPropertyQuery, 'format': 'json'}
relFinderPropertySparqlRequest = requests.get('https://query.wikidata.org/sparql', params=relFinderPropertySparqlParam)
arcpy.AddMessage("SPARQL: {0}".format(relFinderPropertySparqlRequest.url))
jsonBindingObject.extend(relFinderPropertySparqlRequest.json()["results"]["bindings"])
i = i + 50
return jsonBindingObject
@staticmethod
def relFinderTripleQuery(inplaceIRIList, propertyDirectionList, selectPropertyURLList):
# get the triple set in the specific degree path from the inplaceIRIList
# inplaceIRIList: the URL list of wikidata locations
# propertyDirectionList: the list of property directions; it has at most 4 elements and its length is the path degree. Each element is one of ["ORIGIN", "DESTINATION"]
# selectPropertyURLList: the selected property URL list; it always has 4 elements, "" if no property has been selected for that hop
# get the selected parameter
# selectParam = "?place ?p1 ?o1 ?p2 ?o2 ?p3 ?o3 ?p4 ?o4"
selectParam = "?place "
if len(propertyDirectionList) > 0:
if selectPropertyURLList[0] == "":
selectParam += "?p1 "
selectParam += "?o1 "
if len(propertyDirectionList) > 1:
if selectPropertyURLList[1] == "":
selectParam += "?p2 "
selectParam += "?o2 "
if len(propertyDirectionList) > 2:
if selectPropertyURLList[2] == "":
selectParam += "?p3 "
selectParam += "?o3 "
if len(propertyDirectionList) > 3:
if selectPropertyURLList[3] == "":
selectParam += "?p4 "
selectParam += "?o4 "
jsonBindingObject = []
i = 0
while i < len(inplaceIRIList):
if i + 50 > len(inplaceIRIList):
inplaceIRISubList = inplaceIRIList[i:]
else:
inplaceIRISubList = inplaceIRIList[i:(i+50)]
queryPrefix = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX geo-pos: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX omgeo: <http://www.ontotext.com/owlim/geo#>
PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX dbp-ont: <http://dbpedia.org/ontology/>
PREFIX ff: <http://factforge.net/>
PREFIX om: <http://www.ontotext.com/owlim/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>"""
relFinderPropertyQuery = queryPrefix + """SELECT distinct """ + selectParam + """
WHERE {"""
if len(propertyDirectionList) > 0:
if selectPropertyURLList[0] == "":
# if propertyDirectionList[0] == "BOTH":
# relFinderPropertyQuery += """{?place ?p1 ?o1.} UNION {?o1 ?p1 ?place.}\n"""
if propertyDirectionList[0] == "ORIGIN":
relFinderPropertyQuery += """?place ?p1 ?o1.\n"""
elif propertyDirectionList[0] == "DESTINATION":
relFinderPropertyQuery += """?o1 ?p1 ?place.\n"""
else:
# if propertyDirectionList[0] == "BOTH":
# relFinderPropertyQuery += """{?place <"""+ selectPropertyURLList[0] + """> ?o1.} UNION {?o1 <"""+ selectPropertyURLList[0] + """> ?place.}\n"""
if propertyDirectionList[0] == "ORIGIN":
relFinderPropertyQuery += """?place <"""+ selectPropertyURLList[0] + """> ?o1.\n"""
elif propertyDirectionList[0] == "DESTINATION":
relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[0] + """> ?place.\n"""
if len(propertyDirectionList) > 1:
if selectPropertyURLList[1] == "":
# if propertyDirectionList[1] == "BOTH":
# relFinderPropertyQuery += """{?o1 ?p2 ?o2.} UNION {?o2 ?p2 ?o1.}\n"""
if propertyDirectionList[1] == "ORIGIN":
relFinderPropertyQuery += """?o1 ?p2 ?o2.\n"""
elif propertyDirectionList[1] == "DESTINATION":
relFinderPropertyQuery += """?o2 ?p2 ?o1.\n"""
else:
# if propertyDirectionList[1] == "BOTH":
# relFinderPropertyQuery += """{?o1 <"""+ selectPropertyURLList[1] + """> ?o2.} UNION {?o2 <"""+ selectPropertyURLList[1] + """> ?o1.}\n"""
if propertyDirectionList[1] == "ORIGIN":
relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[1] + """> ?o2.\n"""
elif propertyDirectionList[1] == "DESTINATION":
relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[1] + """> ?o1.\n"""
if len(propertyDirectionList) > 2:
if selectPropertyURLList[2] == "":
# if propertyDirectionList[2] == "BOTH":
# relFinderPropertyQuery += """{?o2 ?p3 ?o3.} UNION {?o3 ?p3 ?o2.}\n"""
if propertyDirectionList[2] == "ORIGIN":
relFinderPropertyQuery += """?o2 ?p3 ?o3.\n"""
elif propertyDirectionList[2] == "DESTINATION":
relFinderPropertyQuery += """?o3 ?p3 ?o2.\n"""
else:
# if propertyDirectionList[2] == "BOTH":
# relFinderPropertyQuery += """{?o2 <"""+ selectPropertyURLList[2] + """> ?o3.} UNION {?o3 <"""+ selectPropertyURLList[2] + """> ?o2.}\n"""
if propertyDirectionList[2] == "ORIGIN":
relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[2] + """> ?o3.\n"""
elif propertyDirectionList[2] == "DESTINATION":
relFinderPropertyQuery += """?o3 <"""+ selectPropertyURLList[2] + """> ?o2.\n"""
if len(propertyDirectionList) > 3:
if selectPropertyURLList[3] == "":
# if propertyDirectionList[3] == "BOTH":
# relFinderPropertyQuery += """{?o3 ?p4 ?o4.} UNION {?o4 ?p4 ?o3.}\n"""
if propertyDirectionList[3] == "ORIGIN":
relFinderPropertyQuery += """?o3 ?p4 ?o4.\n"""
elif propertyDirectionList[3] == "DESTINATION":
relFinderPropertyQuery += """?o4 ?p4 ?o3.\n"""
else:
# if propertyDirectionList[3] == "BOTH":
# relFinderPropertyQuery += """{?o3 <"""+ selectPropertyURLList[3] + """> ?o4.} UNION {?o4 <"""+ selectPropertyURLList[3] + """> ?o3.}\n"""
if propertyDirectionList[3] == "ORIGIN":
relFinderPropertyQuery += """?o3 <"""+ selectPropertyURLList[3] + """> ?o4.\n"""
elif propertyDirectionList[3] == "DESTINATION":
relFinderPropertyQuery += """?o4 <"""+ selectPropertyURLList[3] + """> ?o3.\n"""
relFinderPropertyQuery += """
VALUES ?place
{"""
for IRI in inplaceIRISubList:
relFinderPropertyQuery = relFinderPropertyQuery + "<" + IRI + "> \n"
relFinderPropertyQuery = relFinderPropertyQuery + """
}
}
"""
relFinderPropertySparqlParam = {'query': relFinderPropertyQuery, 'format': 'json'}
relFinderPropertySparqlRequest = requests.get('https://query.wikidata.org/sparql', params=relFinderPropertySparqlParam)
arcpy.AddMessage("SPARQL: {0}".format(relFinderPropertySparqlRequest.url))
jsonBindingObject.extend(relFinderPropertySparqlRequest.json()["results"]["bindings"])
i = i + 50
tripleStore = dict()
Triple = namedtuple("Triple", ["s", "p", "o"])
for jsonItem in jsonBindingObject:
if len(propertyDirectionList) > 0:
# triple = []
if selectPropertyURLList[0] == "":
if propertyDirectionList[0] == "ORIGIN":
# relFinderPropertyQuery += """?place ?p1 ?o1.\n"""
currentTriple = Triple(s = jsonItem["place"]["value"], p = jsonItem["p1"]["value"], o = jsonItem["o1"]["value"])
elif propertyDirectionList[0] == "DESTINATION":
# relFinderPropertyQuery += """?o1 ?p1 ?place.\n"""
currentTriple = Triple(s = jsonItem["o1"]["value"], p = jsonItem["p1"]["value"], o = jsonItem["place"]["value"])
# triple = [jsonItem["o1"]["value"], jsonItem["p1"]["value"], jsonItem["place"]["value"]]
else:
if propertyDirectionList[0] == "ORIGIN":
# relFinderPropertyQuery += """?place <"""+ selectPropertyURLList[0] + """> ?o1.\n"""
currentTriple = Triple(s = jsonItem["place"]["value"], p = selectPropertyURLList[0], o = jsonItem["o1"]["value"])
# triple = [jsonItem["place"]["value"], selectPropertyURLList[0], jsonItem["o1"]["value"]]
elif propertyDirectionList[0] == "DESTINATION":
# relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[0] + """> ?place.\n"""
currentTriple = Triple(s = jsonItem["o1"]["value"], p = selectPropertyURLList[0], o = jsonItem["place"]["value"])
# triple = [jsonItem["o1"]["value"], selectPropertyURLList[0], jsonItem["place"]["value"]]
if currentTriple not in tripleStore:
tripleStore[currentTriple] = 1
else:
if tripleStore[currentTriple] > 1:
tripleStore[currentTriple] = 1
if len(propertyDirectionList) > 1:
# triple = []
if selectPropertyURLList[1] == "":
if propertyDirectionList[1] == "ORIGIN":
# relFinderPropertyQuery += """?o1 ?p2 ?o2.\n"""
currentTriple = Triple(s = jsonItem["o1"]["value"], p = jsonItem["p2"]["value"], o = jsonItem["o2"]["value"])
# triple = [jsonItem["o1"]["value"], jsonItem["p2"]["value"], jsonItem["o2"]["value"]]
elif propertyDirectionList[1] == "DESTINATION":
# relFinderPropertyQuery += """?o2 ?p2 ?o1.\n"""
currentTriple = Triple(s = jsonItem["o2"]["value"], p = jsonItem["p2"]["value"], o = jsonItem["o1"]["value"])
# triple = [jsonItem["o2"]["value"], jsonItem["p2"]["value"], jsonItem["o1"]["value"]]
else:
if propertyDirectionList[1] == "ORIGIN":
# relFinderPropertyQuery += """?o1 <"""+ selectPropertyURLList[1] + """> ?o2.\n"""
currentTriple = Triple(s = jsonItem["o1"]["value"], p = selectPropertyURLList[1], o = jsonItem["o2"]["value"])
# triple = [jsonItem["o1"]["value"], selectPropertyURLList[1], jsonItem["o2"]["value"]]
elif propertyDirectionList[1] == "DESTINATION":
# relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[1] + """> ?o1.\n"""
currentTriple = Triple(s = jsonItem["o2"]["value"], p = selectPropertyURLList[1], o = jsonItem["o1"]["value"])
# triple = [jsonItem["o2"]["value"], selectPropertyURLList[1], jsonItem["o1"]["value"]]
if currentTriple not in tripleStore:
tripleStore[currentTriple] = 2
else:
if tripleStore[currentTriple] > 2:
tripleStore[currentTriple] = 2
if len(propertyDirectionList) > 2:
# triple = []
if selectPropertyURLList[2] == "":
if propertyDirectionList[2] == "ORIGIN":
# relFinderPropertyQuery += """?o2 ?p3 ?o3.\n"""
currentTriple = Triple(s = jsonItem["o2"]["value"], p = jsonItem["p3"]["value"], o = jsonItem["o3"]["value"])
# triple = [jsonItem["o2"]["value"], jsonItem["p3"]["value"], jsonItem["o3"]["value"]]
elif propertyDirectionList[2] == "DESTINATION":
# relFinderPropertyQuery += """?o3 ?p3 ?o2.\n"""
currentTriple = Triple(s = jsonItem["o3"]["value"], p = jsonItem["p3"]["value"], o = jsonItem["o2"]["value"])
# triple = [jsonItem["o3"]["value"], jsonItem["p3"]["value"], jsonItem["o2"]["value"]]
else:
if propertyDirectionList[2] == "ORIGIN":
# relFinderPropertyQuery += """?o2 <"""+ selectPropertyURLList[2] + """> ?o3.\n"""
currentTriple = Triple(s = jsonItem["o2"]["value"], p = selectPropertyURLList[2], o = jsonItem["o3"]["value"])
# triple = [jsonItem["o2"]["value"], selectPropertyURLList[2], jsonItem["o3"]["value"]]
elif propertyDirectionList[2] == "DESTINATION":
# relFinderPropertyQuery += """?o3 <"""+ selectPropertyURLList[2] + """> ?o2.\n"""
currentTriple = Triple(s = jsonItem["o3"]["value"], p = selectPropertyURLList[2], o = jsonItem["o2"]["value"])
# triple = [jsonItem["o3"]["value"], selectPropertyURLList[2], jsonItem["o2"]["value"]]
if currentTriple not in tripleStore:
tripleStore[currentTriple] = 3
else:
if tripleStore[currentTriple] > 3:
tripleStore[currentTriple] = 3
if len(propertyDirectionList) > 3:
if selectPropertyURLList[3] == "":
if propertyDirectionList[3] == "ORIGIN":
# relFinderPropertyQuery += """?o3 ?p4 ?o4.\n"""
currentTriple = Triple(s = jsonItem["o3"]["value"], p = jsonItem["p4"]["value"], o = jsonItem["o4"]["value"])
# triple = [jsonItem["o3"]["value"], jsonItem["p4"]["value"], jsonItem["o4"]["value"]]
elif propertyDirectionList[3] == "DESTINATION":
# relFinderPropertyQuery += """?o4 ?p4 ?o3.\n"""
currentTriple = Triple(s = jsonItem["o4"]["value"], p = jsonItem["p4"]["value"], o = jsonItem["o3"]["value"])
# triple = [jsonItem["o4"]["value"], jsonItem["p4"]["value"], jsonItem["o3"]["value"]]
else:
if propertyDirectionList[3] == "ORIGIN":
# relFinderPropertyQuery += """?o3 <"""+ selectPropertyURLList[3] + """> ?o4.\n"""
currentTriple = Triple(s = jsonItem["o3"]["value"], p = selectPropertyURLList[3], o = jsonItem["o4"]["value"])
# triple = [jsonItem["o3"]["value"], selectPropertyURLList[3], jsonItem["o4"]["value"]]
elif propertyDirectionList[3] == "DESTINATION":
# relFinderPropertyQuery += """?o4 <"""+ selectPropertyURLList[3] + """> ?o3.\n"""
currentTriple = Triple(s = jsonItem["o4"]["value"], p = selectPropertyURLList[3], o = jsonItem["o3"]["value"])
# triple = [jsonItem["o4"]["value"], selectPropertyURLList[3], jsonItem["o3"]["value"]]
if currentTriple not in tripleStore:
tripleStore[currentTriple] = 4
return tripleStore
class UTIL(object):
@staticmethod
def mergeTripleStoreDicts(superTripleStore, childTripleStore):
# superTripleStore and childTripleStore: dict objects keyed by the namedtuple Triple("Triple", ["s", "p", "o"]) with the path degree as value
# merge childTripleStore into superTripleStore:
# if an S-P-O triple is already in superTripleStore, keep the smaller of the two degrees
# if it is not in superTripleStore, add it
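# Example (illustrative): merging {t1: 2} into {t1: 3, t2: 1} yields
# {t1: 2, t2: 1}; the smaller degree wins for shared triples.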
for triple in childTripleStore:
if triple not in superTripleStore or superTripleStore[triple] > childTripleStore[triple]:
superTripleStore[triple] = childTripleStore[triple]
return superTripleStore
@staticmethod
def mergeListsWithUniqueElement(superList, childList):
# merge childList into superList: append the elements that are not yet in superList
for element in childList:
if element not in superList:
superList.append(element)
return superList
@staticmethod
def directionListFromBoth2OD(propertyDirectionList):
# given a list of directions, expand every "BOTH" into its two concrete directions and return the list of fully expanded direction lists
# e.g. ["BOTH", "ORIGIN", "DESTINATION", "ORIGIN"] -> ["ORIGIN", "ORIGIN", "DESTINATION", "ORIGIN"] and ["DESTINATION", "ORIGIN", "DESTINATION", "ORIGIN"]
# propertyDirectionList: a list of directions from ["BOTH", "ORIGIN", "DESTINATION"], with at most 4 elements
propertyDirectionExpandedLists = []
propertyDirectionExpandedLists.append(propertyDirectionList)
resultList = []
for currentPropertyDirectionList in propertyDirectionExpandedLists:
i = 0
indexOfBOTH = -1
while i < len(currentPropertyDirectionList):
if currentPropertyDirectionList[i] == "BOTH":
indexOfBOTH = i
break
i = i + 1
if indexOfBOTH != -1:
newList1 = currentPropertyDirectionList[:]
newList1[indexOfBOTH] = "ORIGIN"
propertyDirectionExpandedLists.append(newList1)
newList2 = currentPropertyDirectionList[:]
newList2[indexOfBOTH] = "DESTINATION"
propertyDirectionExpandedLists.append(newList2)
else:
if currentPropertyDirectionList not in resultList:
resultList.append(currentPropertyDirectionList)
return resultList
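# Example (illustrative):
#   UTIL.directionListFromBoth2OD(["BOTH", "ORIGIN"])
#   -> [["ORIGIN", "ORIGIN"], ["DESTINATION", "ORIGIN"]]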
@staticmethod
def buildMultiValueDictFromNoFunctionalProperty(fieldName, tableName):
# build a collections.defaultdict(list) object to store the multiple values of a non-functional property for each subject.
# The subject "wikiURL" field is the key; the corresponding property values from "fieldName" form the value list
if UTIL.isFieldNameInTable(fieldName, tableName):
noFunctionalPropertyDict = defaultdict(list)
# fieldList = arcpy.ListFields(tableName)
srows = None
srows = arcpy.SearchCursor(tableName, '', '', '', '{0} A;{1} A'.format('wikiURL', fieldName))
for row in srows:
foreignKeyValue = row.getValue('wikiURL')
noFunctionalPropertyValue = row.getValue(fieldName)
# if from_field in ['Double', 'Float']:
# value = locale.format('%s', (row.getValue(from_field)))
if noFunctionalPropertyValue is not None:
noFunctionalPropertyDict[foreignKeyValue].append(noFunctionalPropertyValue)
return noFunctionalPropertyDict
else:
return -1
@staticmethod
def appendFieldInFeatureClassByMergeRule(inputFeatureClassName, noFunctionalPropertyDict, appendFieldName, relatedTableName, mergeRule, delimiter):
# append a new field to inputFeatureClassName which will store the merged non-functional property values
# noFunctionalPropertyDict: the collections.defaultdict object which stores the non-functional property values for each URL
# appendFieldName: the field name of the non-functional property in relatedTableName
# mergeRule: the merge rule the user selected, one of ['SUM', 'MIN', 'MAX', 'STDEV', 'MEAN', 'COUNT', 'FIRST', 'LAST', 'CONCATENATE']
# delimiter: the optional parameter which defines the delimiter of the concatenate operation
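# Example (illustrative): with mergeRule 'CONCATENATE' and delimiter ',', the
# values ["a", "b", "a"] are written as "a,b" (duplicates removed, sorted).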
appendFieldType = ''
appendFieldLength = 0
fieldList = arcpy.ListFields(relatedTableName)
for field in fieldList:
if field.name == appendFieldName:
appendFieldType = field.type
if field.type == "String":
appendFieldLength = field.length
break
mergeRuleField = ''
if mergeRule == 'SUM':
mergeRuleField = 'SUM'
elif mergeRule == 'MIN':
mergeRuleField = 'MIN'
elif mergeRule == 'MAX':
mergeRuleField = 'MAX'
elif mergeRule == 'STDEV':
mergeRuleField = 'STD'
elif mergeRule == 'MEAN':
mergeRuleField = 'MEAN'
elif mergeRule == 'COUNT':
mergeRuleField = 'COUNT'
elif mergeRule == 'FIRST':
mergeRuleField = 'FIRST'
elif mergeRule == 'LAST':
mergeRuleField = 'LAST'
elif mergeRule == 'CONCATENATE':
mergeRuleField = 'CONCAT'
if appendFieldType != "String":
cursor = arcpy.SearchCursor(relatedTableName)
for row in cursor:
rowValue = row.getValue(appendFieldName)
if appendFieldLength < len(str(rowValue)):
appendFieldLength = len(str(rowValue))
# subFieldName = appendFieldName[:5]
# arcpy.AddMessage("subFieldName: {0}".format(subFieldName))
# featureClassAppendFieldName = subFieldName + "_" + mergeRuleField
featureClassAppendFieldName = appendFieldName + "_" + mergeRuleField
newAppendFieldName = UTIL.getFieldNameWithTable(featureClassAppendFieldName, inputFeatureClassName)
if newAppendFieldName != -1:
if mergeRule == 'COUNT':
arcpy.AddField_management(inputFeatureClassName, newAppendFieldName, "SHORT")
elif mergeRule == 'STDEV' or mergeRule == 'MEAN':
arcpy.AddField_management(inputFeatureClassName, newAppendFieldName, "DOUBLE")
elif mergeRule == 'CONCATENATE':
# get the maximum number of values for current property: maxNumOfValue
# maxNumOfValue * field.length = the length of new append field
maxNumOfValue = 1
for key in noFunctionalPropertyDict:
if maxNumOfValue < len(noFunctionalPropertyDict[key]):
maxNumOfValue = len(noFunctionalPropertyDict[key])
arcpy.AddField_management(inputFeatureClassName, newAppendFieldName, 'TEXT', field_length=appendFieldLength * maxNumOfValue)
else:
if appendFieldType == "String":
arcpy.AddField_management(inputFeatureClassName, newAppendFieldName, appendFieldType, field_length=appendFieldLength)
else:
arcpy.AddField_management(inputFeatureClassName, newAppendFieldName, appendFieldType)
if UTIL.isFieldNameInTable("URL", inputFeatureClassName):
urows = None
urows = arcpy.UpdateCursor(inputFeatureClassName)
for row in urows:
foreignKeyValue = row.getValue("URL")
noFunctionalPropertyValueList = noFunctionalPropertyDict[foreignKeyValue]
if len(noFunctionalPropertyValueList) != 0:
rowValue = ""
if mergeRule in ['STDEV', 'MEAN', 'SUM', 'MIN', 'MAX']:
if appendFieldType in ['Single', 'Double', 'SmallInteger', 'Integer']:
if mergeRule == 'MEAN':
rowValue = numpy.average(noFunctionalPropertyValueList)
elif mergeRule == 'STDEV':
rowValue = numpy.std(noFunctionalPropertyValueList)
elif mergeRule == 'SUM':
rowValue = numpy.sum(noFunctionalPropertyValueList)
elif mergeRule == 'MIN':
rowValue = numpy.amin(noFunctionalPropertyValueList)
elif mergeRule == 'MAX':
rowValue = numpy.amax(noFunctionalPropertyValueList)
else:
arcpy.AddError("The {0} data type of Field {1} does not support {2} merge rule".format(appendFieldType, appendFieldName, mergeRule))
elif mergeRule in ['COUNT', 'FIRST', 'LAST']:
if mergeRule == 'COUNT':
rowValue = len(noFunctionalPropertyValueList)
elif mergeRule == 'FIRST':
rowValue = noFunctionalPropertyValueList[0]
elif mergeRule == 'LAST':
rowValue = noFunctionalPropertyValueList[len(noFunctionalPropertyValueList)-1]
elif mergeRule == 'CONCATENATE':
if appendFieldType in ['String']:
rowValue = delimiter.join(sorted(set([val for val in noFunctionalPropertyValueList if val is not None])))
else:
rowValue = delimiter.join(sorted(set([str(val) for val in noFunctionalPropertyValueList if val is not None])))
row.setValue(newAppendFieldName, rowValue)
urows.updateRow(row)
@staticmethod
def getPropertyName(propertyURL):
# given a property URL, return the property name (the part after the last "#" or "/", i.e. without the prefix)
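# Examples: "http://www.w3.org/2000/01/rdf-schema#label" -> "label",
#           "http://dbpedia.org/ontology/populationTotal" -> "populationTotal"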
if "#" in propertyURL:
lastIndex = propertyURL.rfind("#")
propertyName = propertyURL[(lastIndex+1):]
else:
lastIndex = propertyURL.rfind("/")
propertyName = propertyURL[(lastIndex+1):]
return propertyName
@staticmethod
def getFieldNameWithTable(propertyName, inputFeatureClassName):
# given a property name produced by getPropertyName(propertyURL),
# check whether it is already a field name in the feature class table
# return the final name of this field; -1 means all candidate names are already taken in this table, in which case the caller should skip the field
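# Example (illustrative): if "label" is already a field, changeFieldNameWithTable
# tries "labe1", "labe2", ..., "labe9" and returns the first unused variant.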
isfieldNameinTable = UTIL.isFieldNameInTable(propertyName, inputFeatureClassName)
if isfieldNameinTable == False:
return propertyName
else:
return UTIL.changeFieldNameWithTable(propertyName, inputFeatureClassName)
@staticmethod
def changeFieldNameWithTable(propertyName, inputFeatureClassName):
for i in range(1,10):
propertyName = propertyName[:(len(propertyName)-1)] + str(i)
isfieldNameinTable = UTIL.isFieldNameInTable(propertyName, inputFeatureClassName)
if isfieldNameinTable == False:
return propertyName
return -1
@staticmethod
def isFieldNameInTable(fieldName, inputFeatureClassName):
fieldList = arcpy.ListFields(inputFeatureClassName)
isfieldNameinFieldList = False
for field in fieldList:
if field.name == fieldName:
isfieldNameinFieldList = True
break
return isfieldNameinFieldList
@staticmethod
def getFieldLength(fieldName, inputFeatureClassName):
fieldList = arcpy.ListFields(inputFeatureClassName)
for field in fieldList:
if field.name == fieldName:
return field.length
return -1
@staticmethod
def getFieldDataTypeInTable(fieldName, inputFeatureClassName):
fieldList = arcpy.ListFields(inputFeatureClassName)
for field in fieldList:
if field.name == fieldName:
return field.type
return -1
@staticmethod
def detectRelationship(inputFeatureClassName, inputTableName):
# given the full paths of a feature class and a table, decide whether a relationship class between them exists in the current file geodatabase
# return False if inputFeatureClassName and inputTableName are in different file geodatabases
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
featureClassWorkspace = inputFeatureClassName[:lastIndexOFGDB]
lastIndexOFTable = inputTableName.rfind("\\")
tableName = inputTableName[(lastIndexOFTable+1):]
tableWorkspace = inputTableName[:lastIndexOFTable]
isFeatureClassAndTableRelated = False
# arcpy.AddMessage("featureClassName: {0}".format(featureClassName))
# arcpy.AddMessage("tableName: {0}".format(tableName))
if featureClassWorkspace == tableWorkspace and featureClassWorkspace.endswith(".gdb"):
workspace = featureClassWorkspace
rc_list = [c.name for c in arcpy.Describe(workspace).children if c.datatype == "RelationshipClass"]
for rc in rc_list:
rc_path = workspace + "\\" + rc
des_rc = arcpy.Describe(rc_path)
origin = des_rc.originClassNames
destination = des_rc.destinationClassNames
# print "Relationship Class: %s \n Origin: %s \n Desintation: %s" %(rc, origin, destination)
# arcpy.AddMessage("Relationship Class: {0} \n Origin: {1} \n Desintation: {2}".format(rc, origin, destination))
if origin[0] == featureClassName and destination[0] == tableName:
isFeatureClassAndTableRelated = True
# arcpy.AddMessage("Yes!!!!!!!!!!!")
break
return isFeatureClassAndTableRelated
@staticmethod
def getRelatedTableFromFeatureClass(inputFeatureClassName):
# given full path of feature class, get a list of table which are related to it
# return a list of table names who are related to the input feature class in the current file geodatabase
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
workspace = inputFeatureClassName[:lastIndexOFGDB]
# arcpy.AddMessage("featureClassName: {0}".format(featureClassName))
# arcpy.AddMessage("tableName: {0}".format(tableName))
relatedTableList = []
if workspace.endswith(".gdb"):
for c in arcpy.Describe(workspace).children:
# arcpy.AddMessage("c: ".format(c))
if c.datatype == "RelationshipClass":
rc_path = workspace + "\\" + c.name
# arcpy.AddMessage("rc_path: {0}".format(rc_path))
des_rc = arcpy.Describe(rc_path)
origin = des_rc.originClassNames
# arcpy.AddMessage("origin: {0}".format(origin[0]))
if origin[0] == featureClassName:
destination = des_rc.destinationClassNames
relatedTableList.append(workspace + "\\" + destination[0])
arcpy.AddMessage("relatedTableList: {0}".format(relatedTableList))
return relatedTableList
@staticmethod
def getRelationshipClassFromFeatureClass(inputFeatureClassName):
# given full path of feature class, get a list of relationship class which are related to it
# return a list of relationship class name whose origin is the input feature class in the current file geodatabase
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
workspace = inputFeatureClassName[:lastIndexOFGDB]
# arcpy.AddMessage("featureClassName: {0}".format(featureClassName))
# arcpy.AddMessage("tableName: {0}".format(tableName))
relatedRelationshipClassList = []
if workspace.endswith(".gdb"):
for c in arcpy.Describe(workspace).children:
# arcpy.AddMessage("c: ".format(c))
if c.datatype == "RelationshipClass":
rc_path = workspace + "\\" + c.name
# arcpy.AddMessage("rc_path: {0}".format(rc_path))
des_rc = arcpy.Describe(rc_path)
origin = des_rc.originClassNames
# arcpy.AddMessage("origin: {0}".format(origin[0]))
if origin[0] == featureClassName:
destination = des_rc.destinationClassNames
relatedRelationshipClassList.append(rc_path)
arcpy.AddMessage("relatedRelationshipClassList from Feature Class: {0}".format(relatedRelationshipClassList))
return relatedRelationshipClassList
@staticmethod
def getRelationshipClassFromTable(inputTableName):
# given full path of table, get a list of relationship class which are related to it
# return a list of relationship class name whose destination is the table in the current file geodatabase
lastIndexOFGDB = inputTableName.rfind("\\")
tableName = inputTableName[(lastIndexOFGDB+1):]
workspace = inputTableName[:lastIndexOFGDB]
# arcpy.AddMessage("featureClassName: {0}".format(featureClassName))
# arcpy.AddMessage("tableName: {0}".format(tableName))
relatedRelationshipClassList = []
if workspace.endswith(".gdb"):
for c in arcpy.Describe(workspace).children:
# arcpy.AddMessage("c: ".format(c))
if c.datatype == "RelationshipClass":
rc_path = workspace + "\\" + c.name
# arcpy.AddMessage("rc_path: {0}".format(rc_path))
des_rc = arcpy.Describe(rc_path)
destination = des_rc.destinationClassNames
# arcpy.AddMessage("origin: {0}".format(origin[0]))
if destination[0] == tableName:
origin = des_rc.originClassNames
relatedRelationshipClassList.append(rc_path)
arcpy.AddMessage("relatedRelationshipClassList from Feature Class: {0}".format(relatedRelationshipClassList))
return relatedRelationshipClassList
class Json2Field(object):
@staticmethod
def creatPlaceFeatureClassFromJSON(jsonBindingObject, endFeatureClassName, selectedURL, inPlaceType):
# after querying the location entities, create a point feature class from the returned information
arcpy.AddMessage("jsonBindingObject: {0}".format(jsonBindingObject))
# a set of unique place IRIs for the found places
placeIRISet = Set()
placeList = []
for item in jsonBindingObject:
print "%s\t%s\t%s" % (
item["place"]["value"], item["placeLabel"]["value"],
item["location"]["value"])
# arcpy.AddMessage("endFeatureClass: {0}\t{1}\t{2}".format(item["place"]["value"], item["placeLabel"]["value"],item["location"]["value"]))
if item["place"]["value"] not in placeIRISet:
placeIRISet.add(item["place"]["value"])
coordItem = item["location"]["value"]
coordList = re.split("[( )]", coordItem)
itemlat = coordList[2]
itemlng = coordList[1]
placeList.append([item["place"]["value"], item["placeLabel"]["value"], itemlat, itemlng])
if len(placeList) == 0:
arcpy.AddMessage("No location information can be finded!")
else:
# Spatial reference set to GCS_WGS_1984
spatial_reference = arcpy.SpatialReference(4326)
# create a Point feature class in arcpy
pt = arcpy.Point()
ptGeoms = []
for p in placeList:
pt.X = float(p[3])
pt.Y = float(p[2])
pointGeometry = arcpy.PointGeometry(pt, spatial_reference)
ptGeoms.append(pointGeometry)
arcpy.AddMessage("ptGeoms: {0}".format(ptGeoms))
arcpy.AddMessage("endFeatureClassName: {0}".format(endFeatureClassName))
if endFeatureClassName is None:
arcpy.AddMessage("No data will be added to the map document.")
else:
# create a geometry Feature class to represent
endFeatureClass = arcpy.CopyFeatures_management(ptGeoms, endFeatureClassName)
# add field to this point feature class
arcpy.AddField_management(endFeatureClass, "Label", "TEXT", field_length=50)
arcpy.AddField_management(endFeatureClass, "URL", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "TypeURL", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "TypeName", "TEXT", field_length=50)
if selectedURL is not None:
arcpy.AddField_management(endFeatureClass, "BTypeURL", "TEXT", field_length=50)
arcpy.AddField_management(endFeatureClass, "BTypeName", "TEXT", field_length=50)
# arcpy.AddField_management(placeNearFeatureClass, "Latitude", "TEXT", 10, 10)
# arcpy.AddField_management(placeNearFeatureClass, "Longitude", "TEXT", 10, 10)
arcpy.AddXY_management(endFeatureClass)
# add label, latitude, longitude value to this point feature class
i = 0
cursor = arcpy.UpdateCursor(endFeatureClassName)
row = cursor.next()
while row:
row.setValue("Label", placeList[i][1])
row.setValue("URL", placeList[i][0])
# row.setValue("TypeURL", placeList[i][5])
# row.setValue("TypeName", placeList[i][6])
cursor.updateRow(row)
i = i + 1
row = cursor.next()
if selectedURL is not None:
i = 0
cursor = arcpy.UpdateCursor(endFeatureClassName)
row = cursor.next()
while row:
row.setValue("BTypeURL", selectedURL)
row.setValue("BTypeName", inPlaceType)
cursor.updateRow(row)
i = i + 1
row = cursor.next()
mxd = arcpy.mapping.MapDocument("CURRENT")
# get the data frame
df = arcpy.mapping.ListDataFrames(mxd)[0]
# create a new layer
endFeatureClassLayer = arcpy.mapping.Layer(endFeatureClassName)
# add the layer to the map at the bottom of the TOC in data frame 0
arcpy.mapping.AddLayer(df, endFeatureClassLayer, "BOTTOM")
@staticmethod
def createLocationLinkageMappingTableFromJSON(jsonBindingObject, originField, endField, originFeatureClassName, endFeatureClassName, locationCommonPropertyURL, locationCommonPropertyName, relationDegree):
# after a SPARQL query finds the linked location entities via a specific common location property, create a separate table to store the linkage information from originFeatureClassName to endFeatureClassName
# jsonBindingObject: the SPARQL query JSON result which contains the linkage information
# originField: the field name and the variable name which represent the origin location features
# originFeatureClassName: the full path of the origin feature class
# endFeatureClassName: the full path of the end feature class
# endField: the field name and the variable name which represent the end location features
# locationCommonPropertyURL: the specific common location property which links originFeatureClassName to endFeatureClassName
# locationCommonPropertyName: the label of this common location property, e.g. the label of wdt:P17
# relationDegree: the degree of the relation between the origin and end locations
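# The output table name is built below as "<origin>_<end>_D<degree>_<property>",
# e.g. "Cities_Rivers_D1_country" (illustrative names).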
lastIndexOFGDB = originFeatureClassName.rfind("\\")
originLocation = originFeatureClassName[:lastIndexOFGDB]
originName = originFeatureClassName[(lastIndexOFGDB+1):]
lastIndexOFGDB = endFeatureClassName.rfind("\\")
endLocation = endFeatureClassName[:lastIndexOFGDB]
endName = endFeatureClassName[(lastIndexOFGDB+1):]
# currentValuePropertyName = UTIL.getPropertyName(valuePropertyURL)
if originLocation.endswith(".gdb") == False or originLocation != endLocation:
return -1
else:
outputLocation = originLocation
PropertyName = locationCommonPropertyName.replace(" ", "_")
tableName = originName + "_" + endName + "_" + "D"+ str(relationDegree) + "_" + PropertyName
tablePath = Json2Field.getNoExistTableNameInWorkspace(outputLocation, tableName)
lastIndexOftableName = tablePath.rfind("\\")
tableName = tablePath[(lastIndexOftableName+1):]
arcpy.AddMessage("outputLocation: {0}".format(outputLocation))
arcpy.AddMessage("tableName: {0}".format(tableName))
# create a table in current workspace
locationLinkageTable = arcpy.CreateTable_management(outputLocation, tableName)
arcpy.AddField_management(locationLinkageTable, originField, "TEXT", field_length=50)
arcpy.AddField_management(locationLinkageTable, endField, "TEXT", field_length=50)
arcpy.AddField_management(locationLinkageTable, "propURL", "TEXT", field_length=len(locationCommonPropertyURL))
arcpy.AddField_management(locationLinkageTable, "propName", "TEXT", field_length=len(locationCommonPropertyName))
arcpy.AddField_management(locationLinkageTable, "reDegree", "LONG")
# Create insert cursor for table
rows = arcpy.InsertCursor(locationLinkageTable)
for jsonItem in jsonBindingObject:
row = rows.newRow()
row.setValue(originField, jsonItem[originField]["value"])
row.setValue(endField, jsonItem[endField]["value"])
row.setValue("propURL", locationCommonPropertyURL)
row.setValue("propName", locationCommonPropertyName)
row.setValue("reDegree", relationDegree)
rows.insertRow(row)
# Delete the insert cursor to remove locks on the data
del rows
return tableName
@staticmethod
def getNoExistTableNameInWorkspace(outputLocation, tableName):
# given a table name and a workspace, check whether the table already exists in the workspace; if it does, append/increment a numeric suffix until the name is unused
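# Example (illustrative): if "myTable" and "myTable_1" already exist in the
# geodatabase, the returned path ends with "myTable_2".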
tablePath = outputLocation + "\\" + tableName
if tablePath.endswith(".dbf"):
if arcpy.Exists(tablePath):
i = 1
lastIndex = tablePath.rfind(".")
tablePath = tablePath[:lastIndex]
tablePath += "_" + str(i) + ".dbf"
while arcpy.Exists(tablePath):
i = i + 1
lastIndex = tablePath.rfind("_")
tablePath = tablePath[:lastIndex]
tablePath += "_" + str(i) + ".dbf"
else:
if arcpy.Exists(tablePath):
i = 1
tablePath += "_" + str(i)
while arcpy.Exists(tablePath):
i = i + 1
lastIndex = tablePath.rfind("_")
tablePath = tablePath[:lastIndex]
tablePath += "_" + str(i)
return tablePath
@staticmethod
def createMappingTableFromJSON(jsonBindingObject, keyPropertyName, valuePropertyName, valuePropertyURL, inputFeatureClassName, keyPropertyFieldName, isInverse, isSubDivisionTable):
# according to jsonBindingObject, create a separate table to store the non-functional property-value pairs
# OR store the transitive "isPartOf" relationship between a location and its subdivisions
# return the name of the table without the full path
# isInverse: Boolean variable indicates whether the value we get is the subject value or object value of valuePropertyURL
# isSubDivisionTable: Boolean variable indicates whether the current table store the value of subdivision for the original location
lastIndexOFGDB = inputFeatureClassName.rfind("\\")
outputLocation = inputFeatureClassName[:lastIndexOFGDB]
featureClassName = inputFeatureClassName[(lastIndexOFGDB+1):]
currentValuePropertyName = UTIL.getPropertyName(valuePropertyURL)
if isInverse:
currentValuePropertyName = "is_" + currentValuePropertyName + "_Of"
if isSubDivisionTable:
currentValuePropertyName = "subDivisionIRI"
if outputLocation.endswith(".gdb"):
tableName = featureClassName + "_" + keyPropertyFieldName + "_" + currentValuePropertyName
# propertyTable = arcpy.CreateTable_management(outputLocation, "wikiURL_"+currentValuePropertyName)
else:
lastIndexOFshp = featureClassName.rfind(".")
featureClassName = featureClassName[:lastIndexOFshp]
tableName = featureClassName + "_" + keyPropertyFieldName + "_" + currentValuePropertyName+".dbf"
# propertyTable = arcpy.CreateTable_management(outputLocation, "wikiURL_"+currentValuePropertyName+".dbf")
tablePath = Json2Field.getNoExistTableNameInWorkspace(outputLocation, tableName)
lastIndexOftableName = tablePath.rfind("\\")
tableName = tablePath[(lastIndexOftableName+1):]
# create a table in current workspace
propertyTable = arcpy.CreateTable_management(outputLocation, tableName)
keyPropertyFieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, keyPropertyName)
arcpy.AddField_management(propertyTable, keyPropertyFieldName, "TEXT", field_length=keyPropertyFieldLength)
# if len(currentValuePropertyName) > 10:
# currentValuePropertyName = currentValuePropertyName[:9]
valuePropertyFieldType = Json2Field.fieldDataTypeDecide(jsonBindingObject, valuePropertyName)
arcpy.AddMessage("valuePropertyURL: {0}".format(valuePropertyURL))
arcpy.AddMessage("valuePropertyFieldType: {0}".format(valuePropertyFieldType))
if valuePropertyFieldType == "TEXT":
valuePropertyFieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, valuePropertyName)
arcpy.AddMessage("valuePropertyFieldLength: {0}".format(valuePropertyFieldLength))
arcpy.AddField_management(propertyTable, currentValuePropertyName, valuePropertyFieldType, field_length=valuePropertyFieldLength)
else:
arcpy.AddField_management(propertyTable, currentValuePropertyName, valuePropertyFieldType)
# arcpy.AddField_management(propertyTable, "propURL", "TEXT", field_length=len(valuePropertyURL))
PropertyValue = namedtuple("PropertyValue", ["key", "value"])
propertyValueSet = Set()
for jsonItem in jsonBindingObject:
pair = PropertyValue(key=jsonItem[keyPropertyName]["value"], value=jsonItem[valuePropertyName]["value"])
propertyValueSet.add(pair)
propertyValueList = list(propertyValueSet)
# Create insert cursor for table
rows = arcpy.InsertCursor(propertyTable)
for pair in propertyValueList:
row = rows.newRow()
row.setValue(keyPropertyFieldName, pair.key)
row.setValue(currentValuePropertyName, pair.value)
# row.setValue("propURL", valuePropertyURL)
rows.insertRow(row)
# for jsonItem in jsonBindingObject:
# row = rows.newRow()
# row.setValue("wikiURL", jsonItem[keyPropertyName]["value"])
# row.setValue(currentValuePropertyName, jsonItem[valuePropertyName]["value"])
# # row.setValue("propURL", valuePropertyURL)
# rows.insertRow(row)
# Delete the insert cursor to remove locks on the data ("row" may be
# undefined when propertyValueList is empty, so only the cursor is deleted)
del rows
return tableName
@staticmethod
def buildDictFromJSONToModifyTable(jsonBindingObject, keyPropertyName, valuePropertyName):
valuePropertyList = []
keyPropertyList = []
for jsonItem in jsonBindingObject:
valuePropertyList.append(jsonItem[valuePropertyName]["value"])
keyPropertyList.append(jsonItem[keyPropertyName]["value"])
keyValueDict = dict(zip(keyPropertyList, valuePropertyList))
arcpy.AddMessage("keyValueDict: {0}".format(keyValueDict))
return keyValueDict
@staticmethod
def buildDictFromJSONToModifyMultiKeyTable(jsonBindingObject, keyPropertyNameList, valuePropertyName):
# create a dict() object that uses the combination of multiple values as its key
# jsonBindingObject: the json object from sparql query which contains the mapping from keyProperty to valueProperty, ex. functionalPropertyJSON
# keyPropertyNameList: a list of the names of keyProperty in JSON object, ex. wikidataSub, s
# valuePropertyName: the name of valueProperty in JSON object, ex. o
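# Example (illustrative): keyPropertyNameList ["wikidataSub", "subDivision"]
# yields keys like MultiKey(wikidataSub=..., subDivision=...), so each
# (location, subdivision) pair maps to exactly one property value.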
MultiKey = namedtuple("MultiKey", keyPropertyNameList)
keyValueDict = dict()
for jsonItem in jsonBindingObject:
MultiKeyValueList = []
i = 0
while i < len(keyPropertyNameList):
MultiKeyValueList.append(jsonItem[keyPropertyNameList[i]]["value"])
i = i + 1
currentMultiKey = MultiKey._make(MultiKeyValueList)
keyValueDict[currentMultiKey] = jsonItem[valuePropertyName]["value"]
arcpy.AddMessage("keyValueDict: {0}".format(keyValueDict))
return keyValueDict
    @staticmethod
    def addFieldInTableByMapping(jsonBindingObject, keyPropertyName, valuePropertyName, inputFeatureClassName, keyPropertyFieldName, valuePropertyURL, isInverse):
        # according to the json object from sparql query which contains the mapping from keyProperty to valueProperty, add a field to the table
        # change the field name if there is already a field with the same name in the table
        # jsonBindingObject: the json object from sparql query which contains the mapping from keyProperty to valueProperty, ex. functionalPropertyJSON
        # keyPropertyName: the name of keyProperty in JSON object, ex. wikidataSub
        # valuePropertyName: the name of valueProperty in JSON object, ex. o
        # keyPropertyFieldName: the name of the field which stores the value of keyProperty, ex. URL
        # valuePropertyURL: the URL of valueProperty; we use it to get the field name of valueProperty, ex. functionalProperty
        # isInverse: Boolean variable indicating whether the value we get is the subject value or the object value of valuePropertyURL
        keyValueDict = Json2Field.buildDictFromJSONToModifyTable(jsonBindingObject, keyPropertyName, valuePropertyName)
        currentValuePropertyName = UTIL.getPropertyName(valuePropertyURL)
        if isInverse:
            # inverse properties are stored under an "is_<property>_Of" name
            currentValuePropertyName = "is_" + currentValuePropertyName + "_Of"
        currentFieldName = UTIL.getFieldNameWithTable(currentValuePropertyName, inputFeatureClassName)
        arcpy.AddMessage("currentFieldName: {0}".format(currentFieldName))
        if currentFieldName == -1:
            arcpy.AddWarning("The table of current feature class has more than 10 fields for property name {0}.".format(currentValuePropertyName))
        else:
            # add one field for each functional property in input feature class
            fieldType = Json2Field.fieldDataTypeDecide(jsonBindingObject, valuePropertyName)
            arcpy.AddMessage("fieldType: {0}".format(fieldType))
            if fieldType == "TEXT":
                fieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, valuePropertyName)
                arcpy.AddMessage("fieldLength: {0}".format(fieldLength))
                arcpy.AddField_management(inputFeatureClassName, currentFieldName, fieldType, field_length=fieldLength)
            else:
                arcpy.AddField_management(inputFeatureClassName, currentFieldName, fieldType)
            # cursor = arcpy.da.UpdateCursor(inputFeatureClassName, [keyPropertyFieldName, currentFieldName])
            cursor = arcpy.UpdateCursor(inputFeatureClassName)
            for row in cursor:
                # currentKeyPropertyValue = row[0]
                currentKeyPropertyValue = row.getValue(keyPropertyFieldName)
                if currentKeyPropertyValue in keyValueDict:
                    propertyValue = Json2Field.dataTypeCast(keyValueDict[currentKeyPropertyValue], fieldType)
                    # row[1] = propertyValue
                    row.setValue(currentFieldName, propertyValue)
                    cursor.updateRow(row)
    @staticmethod
    def addFieldInMultiKeyTableByMapping(jsonBindingObject, keyPropertyNameList, valuePropertyName, inputFeatureClassName, keyPropertyFieldNameList, valuePropertyURL, isInverse):
        # this function deals with a table with multiple fields as its candidate keys
        # according to the json object from sparql query which contains the mapping from multiple keyProperty to valueProperty, add a field to the table
        # change the field name if there is already a field with the same name in the table
        # jsonBindingObject: the json object from sparql query which contains the mapping from keyProperty to valueProperty, ex. functionalPropertyJSON
        # keyPropertyNameList: a list of the names of keyProperty in JSON object, ex. wikidataSub, s
        # valuePropertyName: the name of valueProperty in JSON object, ex. o
        # keyPropertyFieldNameList: a list of the names of the fields which store the value of keyProperty, ex. wikiURL, subDivisionIRI
        # valuePropertyURL: the URL of valueProperty; we use it to get the field name of valueProperty, ex. functionalProperty
        # isInverse: Boolean variable indicating whether the value we get is the subject value or the object value of valuePropertyURL
        MultiKey = namedtuple("MultiKey", keyPropertyNameList)
        keyValueDict = Json2Field.buildDictFromJSONToModifyMultiKeyTable(jsonBindingObject, keyPropertyNameList, valuePropertyName)
        currentValuePropertyName = UTIL.getPropertyName(valuePropertyURL)
        if isInverse:
            currentValuePropertyName = "is_" + currentValuePropertyName + "_Of"
        currentFieldName = UTIL.getFieldNameWithTable(currentValuePropertyName, inputFeatureClassName)
        arcpy.AddMessage("currentFieldName: {0}".format(currentFieldName))
        if currentFieldName == -1:
            arcpy.AddWarning("The table of current feature class has more than 10 fields for property name {0}.".format(currentValuePropertyName))
        else:
            # add one field for each functional property in input feature class
            fieldType = Json2Field.fieldDataTypeDecide(jsonBindingObject, valuePropertyName)
            arcpy.AddMessage("fieldType: {0}".format(fieldType))
            if fieldType == "TEXT":
                fieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, valuePropertyName)
                arcpy.AddMessage("fieldLength: {0}".format(fieldLength))
                arcpy.AddField_management(inputFeatureClassName, currentFieldName, fieldType, field_length=fieldLength)
            else:
                arcpy.AddField_management(inputFeatureClassName, currentFieldName, fieldType)
            # cursor = arcpy.da.UpdateCursor(inputFeatureClassName, [keyPropertyFieldName, currentFieldName])
            cursor = arcpy.UpdateCursor(inputFeatureClassName)
            for row in cursor:
                # build the composite key for this row from its key fields
                MultiKeyValueList = [row.getValue(name) for name in keyPropertyFieldNameList]
                currentMultiKey = MultiKey._make(MultiKeyValueList)
                # currentKeyPropertyValue = row.getValue(keyPropertyFieldName)
                if currentMultiKey in keyValueDict:
                    propertyValue = Json2Field.dataTypeCast(keyValueDict[currentMultiKey], fieldType)
                    # row[1] = propertyValue
                    row.setValue(currentFieldName, propertyValue)
                    cursor.updateRow(row)
    @staticmethod
    def addOrUpdateFieldInTableByMapping(jsonBindingObject, keyPropertyName, valuePropertyName, inputFeatureClassName, keyPropertyFieldName, valuePropertyFieldName):
        # according to the json object from sparql query which contains the mapping from keyProperty to valueProperty, add the field, or update it if the valueProperty field already exists in the table
        # jsonBindingObject: the json object from sparql query which contains the mapping from keyProperty to valueProperty, ex. wikidataIRI -> DBpediaIRI, dbpediaIRIJSON
        # keyPropertyName: the name of keyProperty in JSON object, ex. wikidataSub
        # valuePropertyName: the name of valueProperty in JSON object, ex. DBpediaSub
        # keyPropertyFieldName: the name of the field which stores the value of keyProperty, ex. URL
        # valuePropertyFieldName: the name of the field which stores the value of valueProperty (its length should be less than or equal to 10), ex. DBpediaURL
        # build a wikidata IRI to DBpedia IRI dictionary
        keyValueDict = Json2Field.buildDictFromJSONToModifyTable(jsonBindingObject, keyPropertyName, valuePropertyName)
        isURLinFieldList = UTIL.isFieldNameInTable(valuePropertyFieldName, inputFeatureClassName)
        fieldType = Json2Field.fieldDataTypeDecide(jsonBindingObject, valuePropertyName)
        arcpy.AddMessage("fieldType: {0}".format(fieldType))
        if not isURLinFieldList:
            # add one field valuePropertyFieldName, ex. "DBpediaIRI", to the input feature class
            if fieldType == "TEXT":
                fieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, valuePropertyName)
                arcpy.AddMessage("fieldLength: {0}".format(fieldLength))
                arcpy.AddField_management(inputFeatureClassName, valuePropertyFieldName, fieldType, field_length=fieldLength)
            else:
                arcpy.AddField_management(inputFeatureClassName, valuePropertyFieldName, fieldType)
        else:
            if fieldType == "TEXT":
                # widen the existing text field if the incoming values are longer
                fieldLength = Json2Field.fieldLengthDecide(jsonBindingObject, valuePropertyName)
                fieldList = arcpy.ListFields(inputFeatureClassName)
                for field in fieldList:
                    if field.name == valuePropertyFieldName:
                        if fieldLength > field.length:
                            field.length = fieldLength
                        break
        # cursor = arcpy.da.UpdateCursor(inputFeatureClassName, [keyPropertyFieldName, valuePropertyFieldName])
        cursor = arcpy.UpdateCursor(inputFeatureClassName)
        for row in cursor:
            # currentKeyPropertyValue = row[0]
            currentKeyPropertyValue = row.getValue(keyPropertyFieldName)
            if currentKeyPropertyValue in keyValueDict:
                currentValuePropertyValue = keyValueDict[currentKeyPropertyValue]
                # row[1] = currentValuePropertyValue
                row.setValue(valuePropertyFieldName, currentValuePropertyValue)
                cursor.updateRow(row)
    @staticmethod
    def fieldLengthDecide(jsonBindingObject, fieldName):
        # This option is only applicable on fields of type text or blob
        fieldType = Json2Field.fieldDataTypeDecide(jsonBindingObject, fieldName)
        if fieldType != "TEXT":
            # you do not need field length
            return -1
        else:
            maxLength = 30
            for jsonItem in jsonBindingObject:
                textLength = len(jsonItem[fieldName]["value"])
                if textLength > maxLength:
                    maxLength = textLength
            return maxLength
    @staticmethod
    def fieldDataTypeDecide(jsonBindingObject, fieldName):
        # jsonBindingObject: a list object which is jsonObject.json()["results"]["bindings"]
        # fieldName: the name of the property/field in the JSON object that we want to evaluate
        # return the field data type given a JSONItem for one property; return -1 if the field is about geometry or a bnode
        dataTypeSet = set()
        for jsonItem in jsonBindingObject:
            dataTypeSet.add(Json2Field.getLinkedDataType(jsonItem, fieldName))
        dataTypeList = list(dataTypeSet)
        dataTypeCountDict = dict(zip(dataTypeList, [0] * len(dataTypeList)))
        for jsonItem in jsonBindingObject:
            dataTypeCountDict[Json2Field.getLinkedDataType(jsonItem, fieldName)] += 1
        # order the counts and take the data type that occurs most often
        dataTypeCountOrderDict = OrderedDict(sorted(dataTypeCountDict.items(), key=lambda t: t[1]))
        majorityDataType = next(reversed(dataTypeCountOrderDict))
        majorityFieldDataType = Json2Field.urlDataType2FieldDataType(majorityDataType)
        arcpy.AddMessage("majorityFieldDataType: {0}".format(majorityFieldDataType))
        return majorityFieldDataType
    @staticmethod
    def urlDataType2FieldDataType(urlDataType):
        # urlDataType: one of uri, string, date, geometry, int, double, float, bnode
        # given the data type of a Linked Data literal (see getLinkedDataType), return a data type for a field in an ArcGIS table view
        if urlDataType == "uri":
            return "TEXT"
        elif urlDataType == "string":
            return "TEXT"
        elif urlDataType == "date":
            return "DATE"
        elif urlDataType == "geometry":
            return -1
        elif urlDataType == "int":
            return "LONG"
        elif urlDataType == "double":
            return "DOUBLE"
        elif urlDataType == "float":
            return "FLOAT"
        elif urlDataType == "bnode":
            return -1
        else:
            return "TEXT"
    @staticmethod
    def getLinkedDataType(jsonBindingObjectItem, propertyName):
        # according to the property name of this jsonBindingObjectItem, return the meaningful dataType
        rdfDataType = jsonBindingObjectItem[propertyName]["type"]
        if rdfDataType == "uri":
            return "uri"
        elif "literal" in rdfDataType:
            if "datatype" not in jsonBindingObjectItem[propertyName]:
                return "string"
            else:
                specifiedDataType = jsonBindingObjectItem[propertyName]["datatype"]
                if specifiedDataType == "http://www.w3.org/2001/XMLSchema#date":
                    return "date"
                elif specifiedDataType == "http://www.openlinksw.com/schemas/virtrdf#Geometry":
                    return "geometry"
                elif specifiedDataType == "http://www.w3.org/2001/XMLSchema#integer" or specifiedDataType == "http://www.w3.org/2001/XMLSchema#nonNegativeInteger":
                    return "int"
                elif specifiedDataType == "http://www.w3.org/2001/XMLSchema#double":
                    return "double"
                elif specifiedDataType == "http://www.w3.org/2001/XMLSchema#float":
                    return "float"
                else:
                    return "string"
        elif rdfDataType == "bnode":
            return "bnode"
        else:
            return "string"
    @staticmethod
    def dataTypeCast(fieldValue, fieldDataType):
        # according to the field data type, cast the data into corresponding data type
        if fieldDataType == "TEXT":
            return fieldValue
        elif fieldDataType == "DATE":
            return fieldValue
        elif fieldDataType == "LONG":
            return int(fieldValue)
        elif fieldDataType == "DOUBLE":
            return Decimal(fieldValue)
        elif fieldDataType == "FLOAT":
            return float(fieldValue)
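    # Illustrative sketch of how the type-decision helpers above fit together.
    # The sample binding and the variable name "o" are assumptions for
    # demonstration only, not taken from a real query:
    #
    #     item = {"o": {"type": "literal", "value": "42",
    #                   "datatype": "http://www.w3.org/2001/XMLSchema#integer"}}
    #     Json2Field.getLinkedDataType(item, "o")        # -> "int"
    #     Json2Field.urlDataType2FieldDataType("int")    # -> "LONG"
    #     Json2Field.dataTypeCast("42", "LONG")          # -> 42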
| 43.417758 | 269 | 0.708595 | 20,864 | 225,425 | 7.570648 | 0.056748 | 0.011301 | 0.006293 | 0.005166 | 0.693582 | 0.666086 | 0.631431 | 0.595768 | 0.571977 | 0.553332 | 0 | 0.010322 | 0.185139 | 225,425 | 5,191 | 270 | 43.426122 | 0.849573 | 0.293534 | 0 | 0.584185 | 0 | 0.011247 | 0.18592 | 0.00846 | 0.000341 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.005112 | null | null | 0.00409 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d3598ee4e5aabfea09ea77913d15276ad388685 | 4,904 | py | Python | neural_seq/utils.py | Dragon-hxl/LARC | c9f13f110a56aa2ecf837e464745375d9521409e | [
"MIT"
] | null | null | null | neural_seq/utils.py | Dragon-hxl/LARC | c9f13f110a56aa2ecf837e464745375d9521409e | [
"MIT"
] | null | null | null | neural_seq/utils.py | Dragon-hxl/LARC | c9f13f110a56aa2ecf837e464745375d9521409e | [
"MIT"
] | null | null | null | import argparse
def commandLineArgs():
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument("--restrict-types",
dest="restrict_types",
default=False,
action="store_true")
parser.add_argument("--test",
dest="test",
default=False,
help="True if we want to just test an existing model",
action="store_true")
parser.add_argument("--limit-overfit",
dest="limit_overfit",
default=False,
action="store_true")
parser.add_argument("--use-cuda",
dest="use_cuda",
default=False,
action="store_true")
parser.add_argument("--verbose",
dest="verbose",
default=False,
action="store_true")
parser.add_argument("--rnn-decode",
dest="rnn_decode",
default=False,
action="store_true")
parser.add_argument("--batch-size",
dest="batch_size",
default=32,
type=int)
parser.add_argument("--lr",
dest="lr",
default=0.001,
type=float)
parser.add_argument("--weight-decay",
dest="weight_decay",
default=0.0,
type=float)
parser.add_argument("--beta",
dest="beta",
default=0.0,
type=float)
parser.add_argument("--epochs-per-replay",
dest="epochs_per_replay",
default=0,
type=int)
parser.add_argument("--beam-width",
dest="beam_width",
default=128,
type=int)
parser.add_argument("--epsilon",
dest="epsilon",
default=0.3,
type=float)
parser.add_argument("--num-cpus",
dest="num_cpus",
default=1,
type=int)
parser.add_argument("--num-cycles",
dest="num_cycles",
default=1,
type=int)
parser.add_argument("--max-p-per-task",
dest="max_p_per_task",
# something >> number of distinct programs per tasks
default=1000000,
type=int)
parser.add_argument("--seed",
dest="seed",
default=0,
type=int)
parser.add_argument("--jumpstart",
dest="jumpstart",
default=False,
help="Whether to jumpstart by training on set of ground truth programs first",
action="store_true")
parser.add_argument("--num-iter-beam-search",
dest="num_iter_beam_search",
default=1,
type=int)
parser.add_argument("--num-epochs-start",
dest="num_epochs_start",
default=1,
type=int)
parser.add_argument("--resume-iter",
dest="resume_iter",
default=0,
type=int)
parser.add_argument("--test-decode-time",
dest="test_decode_time",
default=0,
type=int)
parser.add_argument("--fixed-epoch-pretrain",
dest="fixed_epoch_pretrain",
default=0,
type=int)
parser.add_argument("--preload-frontiers",
dest="preload_frontiers",
default=None,
type=str)
parser.add_argument("--resume",
dest="resume",
default=None,
type=str)
parser.add_argument("--no-nl",
dest="no_nl",
default=False,
help="Whether to condition on natural language description i.e. use NL as input to encoder",
action="store_true")
parser.add_argument("--no-io",
dest="no_io",
default=False,
help="Whether to use IO as input to encoder",
action="store_true")
args = vars(parser.parse_args())
return args
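if __name__ == "__main__":
    # Minimal usage sketch: running this module directly just echoes the parsed
    # configuration, e.g. `python utils.py --batch-size 64 --lr 0.0005 --use-cuda`.
    parsed = commandLineArgs()
    for key in sorted(parsed):
        print("{}: {}".format(key, parsed[key]))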
| 40.866667 | 116 | 0.418434 | 411 | 4,904 | 4.836983 | 0.262774 | 0.122233 | 0.230885 | 0.096579 | 0.501006 | 0.400905 | 0.360161 | 0.181087 | 0 | 0 | 0 | 0.012186 | 0.48124 | 4,904 | 119 | 117 | 41.210084 | 0.769261 | 0.010196 | 0 | 0.418803 | 0 | 0 | 0.198063 | 0.009068 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008547 | false | 0 | 0.008547 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d359bca45bc6e6f2073a2891a9bc99f4d08e0df | 7,168 | py | Python | pypillometry/fakedata.py | wsojka00/pypillometry | 6c2258f1df2a435e6474ce33ebc46ce595d3d25e | [
"MIT"
] | 13 | 2020-07-01T02:00:23.000Z | 2021-11-18T22:49:54.000Z | pypillometry/fakedata.py | wsojka00/pypillometry | 6c2258f1df2a435e6474ce33ebc46ce595d3d25e | [
"MIT"
] | 17 | 2020-06-24T10:26:09.000Z | 2022-01-27T13:36:10.000Z | pypillometry/fakedata.py | wsojka00/pypillometry | 6c2258f1df2a435e6474ce33ebc46ce595d3d25e | [
"MIT"
] | 5 | 2020-06-30T20:03:36.000Z | 2022-01-27T13:22:34.000Z | """
fakedata.py
====================================
Generate artificial pupil-data.
"""
import numpy as np
import scipy.stats as stats
from .baseline import *
from .pupil import *
def generate_pupil_data(event_onsets, fs=1000, pad=5000, baseline_lowpass=0.2,
evoked_response_perc=0.02, response_fluct_sd=1,
prf_npar=(10.35,0), prf_tmax=(917.0,0),
prop_spurious_events=0.2, noise_amp=0.0005):
"""
Generate artificial pupil data as a sum of slow baseline-fluctuations
on which event-evoked responses are "riding".
Parameters
-----------
event_onsets: list
        list of all events that evoke a response (in milliseconds)
fs: float
sampling rate in Hz
pad: float
append `pad` milliseconds of signal after the last event is decayed
baseline_lowpass: float
cutoff for the lowpass-filter that defines the baseline
(highest allowed frequency in the baseline fluctuations)
evoked_response_perc: float
amplitude of the pupil-response as proportion of the baseline
response_fluct_sd: float
How much do the amplitudes of the individual events fluctuate?
This is determined by drawing each individual pupil-response to
a single event from a (positive) normal distribution with mean as determined
by `evoked_response_perc` and sd `response_fluct_sd` (in units of
`evoked_response_perc`).
prf_npar: tuple (float,float)
(mean,std) of the npar parameter from :py:func:`pypillometry.pupil.pupil_kernel()`.
If the std is exactly zero, then the mean is used for all pupil-responses.
If the std is positive, npar is taken i.i.d. from ~ normal(mean,std) for each event.
prf_tmax: tuple (float,float)
(mean,std) of the tmax parameter from :py:func:`pypillometry.pupil.pupil_kernel()`.
If the std is exactly zero, then the mean is used for all pupil-responses.
If the std is positive, tmax is taken i.i.d. from ~ normal(mean,std) for each event.
prop_spurious_events: float
Add random events to the pupil signal. `prop_spurious_events` is expressed
as proportion of the number of real events.
noise_amp: float
Amplitude of random gaussian noise that sits on top of the simulated signal.
Expressed in units of mean baseline pupil diameter.
Returns
--------
tx, sy: np.array
time and simulated pupil-dilation (n)
x0: np.array
baseline (n)
delta_weights: np.array
pupil-response strengths (len(event_onsets))
"""
nevents=len(event_onsets)
## npar
if prf_npar[1]==0: # deterministic parameter
npars=np.ones(nevents)*prf_npar[0]
else:
npars=np.random.randn(nevents)*prf_npar[1]+prf_npar[0]
## tmax
if prf_tmax[1]==0: # deterministic parameter
tmaxs=np.ones(nevents)*prf_tmax[0]
else:
tmaxs=np.random.randn(nevents)*prf_tmax[1]+prf_tmax[0]
if np.any(npars<=0):
raise ValueError("npar must be >0")
if np.any(tmaxs<=0):
raise ValueError("tmax must be >0")
# get maximum duration of one of the PRFs
maxdur=pupil_get_max_duration(npars.min(), tmaxs.max())
T=np.array(event_onsets).max()+maxdur+pad # stop pad millisec after last event
n=int(np.ceil(T/1000.*fs)) # number of sampling points
sy=np.zeros(n) # pupil diameter
tx=np.linspace(0,T,n) # time-vector in milliseconds
# create baseline-signal
slack=int(0.50*n) # add slack to avoid edge effects of the filter
x0=butter_lowpass_filter(np.random.rand(n+slack), baseline_lowpass, fs, 2)[slack:(n+slack)]
x0=x0*1000+5000 # scale it up to a scale as usually obtained from eyetracker
### real events regressor
## scaling
    event_ix=(np.array(event_onsets)/1000.*fs).astype(int)
#a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std
delta_weights=stats.truncnorm.rvs(-1/response_fluct_sd,np.inf, loc=1, scale=response_fluct_sd, size=event_ix.size)
x1=np.zeros_like(sy)
for i,ev in enumerate(event_onsets):
# create kernel and delta-functions for events
kernel=pupil_kernel(duration=maxdur,fs=fs,npar=npars[i], tmax=tmaxs[i])
x1[event_ix[i]:(event_ix[i]+kernel.size)]=x1[event_ix[i]:(event_ix[i]+kernel.size)]+kernel*delta_weights[i]
## spurious events regressor
sp_event_ix=np.random.randint(low=0,high=np.ceil((T-maxdur-pad)/1000.*fs),size=int( nevents*prop_spurious_events ))
sp_events=tx[ sp_event_ix ]
n_sp_events=sp_events.size
## npar
if prf_npar[1]==0: # deterministic parameter
npars=np.ones(n_sp_events)*prf_npar[0]
else:
npars=np.random.randn(n_sp_events)*prf_npar[1]+prf_npar[0]
## tmax
if prf_tmax[1]==0: # deterministic parameter
tmaxs=np.ones(n_sp_events)*prf_tmax[0]
else:
tmaxs=np.random.randn(n_sp_events)*prf_tmax[1]+prf_tmax[0]
## scaling
sp_delta_weights=stats.truncnorm.rvs(-1/response_fluct_sd,np.inf, loc=1, scale=response_fluct_sd, size=sp_event_ix.size)
x2=np.zeros_like(sy)
for i,ev in enumerate(sp_events):
# create kernel and delta-functions for events
kernel=pupil_kernel(duration=maxdur,fs=fs,npar=npars[i], tmax=tmaxs[i])
x2[sp_event_ix[i]:(sp_event_ix[i]+kernel.size)]=x2[sp_event_ix[i]:(sp_event_ix[i]+kernel.size)]+kernel*sp_delta_weights[i]
amp=np.mean(x0)*evoked_response_perc # mean amplitude for the evoked response
noise=noise_amp*np.mean(x0)*np.random.randn(n)
sy = x0 + amp*x1 + amp*x2 + noise
return (tx,sy,x0,delta_weights)
def get_dataset(ntrials=100, isi=2000, rtdist=(1000,500),fs=1000,pad=5000, **kwargs):
"""
Convenience function to run :py:func:`generate_pupil_data()` with standard parameters.
Parameters
-----------
ntrials:int
number of trials
isi: float
inter-stimulus interval in milliseconds
rtdist: tuple (float,float)
mean and std of a (truncated at zero) normal distribution to generate response times
fs: float
sampling rate
pad: float
        padding before the first and after the last event in milliseconds
kwargs: dict
arguments for :py:func:`pypillometry.fakedata.generate_pupil_data()`
Returns
--------
tx, sy: np.array
time and simulated pupil-dilation (n)
baseline: np.array
baseline (n)
event_onsets: np.array
timing of the simulated event-onsets (stimuli and responses not separated)
response_coef: np.array
pupil-response strengths (len(event_onsets))
"""
stim_onsets=np.arange(ntrials)*isi+pad
rts=stats.truncnorm.rvs( (0-rtdist[0])/rtdist[1], np.inf, loc=rtdist[0], scale=rtdist[1], size=ntrials)
resp_onsets=stim_onsets+rts
event_onsets=np.concatenate( (stim_onsets, resp_onsets) )
kwargs.update({"fs":fs})
tx,sy,baseline,response_coef=generate_pupil_data(event_onsets, **kwargs)
return tx,sy,baseline,event_onsets, response_coef
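# Minimal usage sketch (parameter values are illustrative only):
#
#     from pypillometry.fakedata import get_dataset
#     tx, sy, baseline, event_onsets, response_coef = get_dataset(ntrials=10, isi=1500, fs=500)
#     print(tx.size, event_onsets.size)
#
# tx/sy are the time axis and the simulated pupil diameter; event_onsets holds
# both stimulus and response onsets in milliseconds.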
| 37.726316 | 130 | 0.667411 | 1,071 | 7,168 | 4.331466 | 0.222222 | 0.030826 | 0.013796 | 0.008623 | 0.339728 | 0.324639 | 0.311921 | 0.295753 | 0.251347 | 0.226342 | 0 | 0.022894 | 0.220006 | 7,168 | 189 | 131 | 37.925926 | 0.806832 | 0.493722 | 0 | 0.15873 | 1 | 0 | 0.009855 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0.031746 | 0.063492 | 0 | 0.126984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d373b4ad5acb25372312662f936a4b9444821b2 | 608 | py | Python | Day 5/how_about_a_nice_game_of_chess_1.py | Shunderpooch/AdventOfCode2016 | 9651c829e2c1906fac82de1ec19e46561543362d | [
"MIT"
] | null | null | null | Day 5/how_about_a_nice_game_of_chess_1.py | Shunderpooch/AdventOfCode2016 | 9651c829e2c1906fac82de1ec19e46561543362d | [
"MIT"
] | null | null | null | Day 5/how_about_a_nice_game_of_chess_1.py | Shunderpooch/AdventOfCode2016 | 9651c829e2c1906fac82de1ec19e46561543362d | [
"MIT"
] | null | null | null | """
Arthur Dooner
Advent of Code Day 5
Challenge 1
"""
import sys
import hashlib
def md5_func(string):
md5result = hashlib.md5()
md5result.update(string.encode('utf-8'))
return md5result.hexdigest()
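# Sanity check for the helper above (standard RFC 1321 test vector for "abc"):
assert md5_func("abc") == "900150983cd24fb0d6963f7d28e17f72"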
INTEGER_ID = 0
PASSWORD = ""
if len(sys.argv) < 2:
print("Please pass the puzzle input as a command line argument.")
    sys.exit(1)
while len(PASSWORD) < 8:
temp_md5 = md5_func(sys.argv[1] + str(INTEGER_ID))
if temp_md5[:5] == "00000":
print(INTEGER_ID)
PASSWORD += temp_md5[5]
INTEGER_ID += 1
print("The password for the door is:" + PASSWORD)
sys.stdout.flush() | 20.266667 | 69 | 0.661184 | 91 | 608 | 4.318681 | 0.571429 | 0.091603 | 0.040712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051867 | 0.207237 | 608 | 30 | 70 | 20.266667 | 0.763485 | 0.075658 | 0 | 0 | 0 | 0 | 0.171171 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.263158 | 0.105263 | 0 | 0.210526 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2d376984cf7521c9d0b8668970e457c5497e11d2 | 1,204 | py | Python | estudiantes/migrations/0008_auto_20180411_2345.py | jlopez0591/SIGIA | e857e2273daa43ab64fa78df254275af2dbcc2a5 | [
"MIT"
] | null | null | null | estudiantes/migrations/0008_auto_20180411_2345.py | jlopez0591/SIGIA | e857e2273daa43ab64fa78df254275af2dbcc2a5 | [
"MIT"
] | 7 | 2020-02-12T00:42:15.000Z | 2022-03-11T23:23:48.000Z | estudiantes/migrations/0008_auto_20180411_2345.py | jlopez0591/SIGIA | e857e2273daa43ab64fa78df254275af2dbcc2a5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.8 on 2018-04-12 04:45
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('estudiantes', '0007_auto_20180411_2100'),
]
operations = [
migrations.AlterField(
model_name='trabajograduacion',
name='carrera',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='trabajos_graduacion', to='ubicacion.CarreraInstancia'),
),
migrations.AlterField(
model_name='trabajograduacion',
name='escuela',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='trabajos_graduacion', to='ubicacion.EscuelaInstancia'),
),
migrations.AlterField(
model_name='trabajograduacion',
name='facultad',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='trabajos_graduacion', to='ubicacion.FacultadInstancia'),
),
]
| 37.625 | 175 | 0.675249 | 129 | 1,204 | 6.124031 | 0.410853 | 0.050633 | 0.070886 | 0.111392 | 0.626582 | 0.626582 | 0.436709 | 0.436709 | 0.436709 | 0.436709 | 0 | 0.034519 | 0.20598 | 1,204 | 31 | 176 | 38.83871 | 0.791841 | 0.056478 | 0 | 0.375 | 1 | 0 | 0.214475 | 0.090026 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d3b855417b5b01166e9c53aefdcacbf633fd636 | 370 | py | Python | PythonExercicios/ex005.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | PythonExercicios/ex005.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | PythonExercicios/ex005.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | # Write a program that reads an integer and shows on the screen its successor and its predecessor
num = int(input('Enter an integer: '))
ant = num - 1
suc = num + 1
print('The successor of {} is {} and its predecessor is {}'.format(num, suc, ant))
# Solution using only one variable:
# print('The successor of {} is {} and its predecessor is {}'.format(num, (num+1), (num-1)))
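# Sample run (illustrative): entering 7 prints
# 'The successor of 7 is 8 and its predecessor is 6'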
| 37 | 94 | 0.667568 | 64 | 370 | 3.859375 | 0.484375 | 0.064777 | 0.17004 | 0.129555 | 0.331984 | 0.331984 | 0.331984 | 0.331984 | 0.331984 | 0.331984 | 0 | 0.013378 | 0.191892 | 370 | 9 | 95 | 41.111111 | 0.812709 | 0.57027 | 0 | 0 | 0 | 0 | 0.445161 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2d3ce3e54e71945a04f010ae135aa8c962618bc4 | 369 | py | Python | modules/audio/settings.py | mlc2307/pyradio | 94b3705faea4d019f3f62efb49aa4f502c1a556e | [
"MIT"
] | null | null | null | modules/audio/settings.py | mlc2307/pyradio | 94b3705faea4d019f3f62efb49aa4f502c1a556e | [
"MIT"
] | null | null | null | modules/audio/settings.py | mlc2307/pyradio | 94b3705faea4d019f3f62efb49aa4f502c1a556e | [
"MIT"
] | null | null | null | """
Default audio settings.
"""
import numpy as np
from modules.socket.settings import PACKAGE_SIZE
# Number of sound channels.
CHANNELS = 2
# The size of the streaming buffer, that needs to fit into the socket buffer.
CHUNK_SIZE = PACKAGE_SIZE // CHANNELS // np.dtype(np.int16).itemsize
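# Worked example: PACKAGE_SIZE comes from modules.socket.settings; assuming an
# illustrative value of 4096 bytes, 4096 // 2 channels // 2 bytes per int16
# sample leaves 1024 frames per chunk.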
# Sound device frame rate. In this case, 44.1 kHz.
FRAME_RATE = int(44.1e3)
| 21.705882 | 77 | 0.742547 | 59 | 369 | 4.576271 | 0.677966 | 0.103704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032468 | 0.165312 | 369 | 16 | 78 | 23.0625 | 0.844156 | 0.474255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2d404ddffb52c313e2728aba0594c50a8c449448 | 2,231 | py | Python | parsl/dataflow/states.py | cylondata/parsl | 00ff9372bd841dafef8a0b3566c79ffe68f0e367 | [
"Apache-2.0"
] | 323 | 2017-07-28T21:31:27.000Z | 2022-03-05T13:06:05.000Z | parsl/dataflow/states.py | cylondata/parsl | 00ff9372bd841dafef8a0b3566c79ffe68f0e367 | [
"Apache-2.0"
] | 1,286 | 2017-06-01T16:50:00.000Z | 2022-03-31T16:45:14.000Z | parsl/dataflow/states.py | cylondata/parsl | 00ff9372bd841dafef8a0b3566c79ffe68f0e367 | [
"Apache-2.0"
] | 113 | 2017-06-03T11:38:40.000Z | 2022-03-26T16:43:05.000Z | from enum import IntEnum
class States(IntEnum):
"""Enumerates the states a parsl task may be in.
These states occur inside the task record for a task inside
a `DataFlowKernel` and in the monitoring database.
In a single successful task execution, tasks will progress in this
sequence:
pending -> launched -> running -> exec_done
Other states represent deviations from this path, either due to
failures, or to deliberate changes to how tasks are executed (for
example due to join_app, or memoization).
All tasks should end up in one of the states listed in `FINAL_STATES`.
"""
unsched = -1
pending = 0
"""Task is known to parsl but cannot run yet. Usually, a task cannot
run because it is waiting for dependency tasks to complete.
"""
running = 2
"""Task is running on a resource. This state is special - a DFK task
record never goes to States.running state; but the monitoring database
may represent a task in this state based on non-DFK information received
from monitor_wrapper."""
exec_done = 3
"""Task has been executed successfully."""
failed = 4
"""Task has failed and no more attempts will be made to run it."""
dep_fail = 5
"""Dependencies of this task failed, so it is marked as failed without
even an attempt to launch it."""
launched = 7
"""Task has been passed to a `ParslExecutor` for execution."""
fail_retryable = 8
"""Task has failed, but can be retried"""
memo_done = 9
"""Task was found in the memoization table, so it is marked as done
without even an attempt to launch it."""
joining = 10
"""Task is a join_app, joining on internal tasks. The task has run its
own Python code, and is now waiting on other tasks before it can make
further progress (to a done/failed state)."""
FINAL_STATES = [States.exec_done, States.memo_done, States.failed, States.dep_fail]
"""States from which we will never move to another state, because the job has
either definitively completed or failed."""
FINAL_FAILURE_STATES = [States.failed, States.dep_fail]
"""States which are final and which indicate a failure. This must
be a subset of FINAL_STATES"""
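# Illustrative usage sketch (the helper below is not part of this module; it
# only shows how FINAL_STATES is meant to be consumed):
#
#     def is_terminal(state: States) -> bool:
#         return state in FINAL_STATES
#
#     is_terminal(States.exec_done)  # True
#     is_terminal(States.running)    # False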
| 32.808824 | 83 | 0.702824 | 344 | 2,231 | 4.508721 | 0.427326 | 0.022566 | 0.027079 | 0.015474 | 0.096712 | 0.078659 | 0.038685 | 0 | 0 | 0 | 0 | 0.006395 | 0.229045 | 2,231 | 67 | 84 | 33.298507 | 0.895349 | 0.235769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
74254b7b90cfc0efc1bfc67b61af489b33a9bc98 | 862 | py | Python | data_conversion_subsystem/config/config.py | diego-hermida/ClimateChangeApp | 576d49ec5b76f709cc86874ffb03f4a38dbbbbfd | [
"MIT"
] | 2 | 2018-07-01T20:36:46.000Z | 2019-11-01T22:47:06.000Z | data_conversion_subsystem/config/config.py | diego-hermida/ClimateChangeApp | 576d49ec5b76f709cc86874ffb03f4a38dbbbbfd | [
"MIT"
] | 1 | 2021-06-10T20:28:53.000Z | 2021-06-10T20:28:53.000Z | data_conversion_subsystem/config/config.py | diego-hermida/ClimateChangeApp | 576d49ec5b76f709cc86874ffb03f4a38dbbbbfd | [
"MIT"
] | null | null | null | from os import environ
from utilities.util import get_config
DCS_CONFIG = get_config(__file__)
DCS_CONFIG.update(get_config(__file__.replace('config.py', 'docker_config.py')) if environ.get('DOCKER_MODE', False)
else get_config(__file__.replace('config.py', 'dev_config.py')))
DCS_CONFIG['DATA_CONVERSION_SUBSYSTEM_LOG_FILES_ROOT_FOLDER'] = DCS_CONFIG[
'DATA_CONVERSION_SUBSYSTEM_LOG_FILES_ROOT_FOLDER'].replace(DCS_CONFIG['ID_WILDCARD_PATTERN'],
'' if DCS_CONFIG['SUBSYSTEM_INSTANCE_ID'] == 1 else '_' + str(DCS_CONFIG['SUBSYSTEM_INSTANCE_ID']))
DCS_CONFIG['DATA_CONVERSION_SUBSYSTEM_STATE_FILES_ROOT_FOLDER'] = DCS_CONFIG[
'DATA_CONVERSION_SUBSYSTEM_STATE_FILES_ROOT_FOLDER'].replace(DCS_CONFIG['ID_WILDCARD_PATTERN'],
'' if DCS_CONFIG['SUBSYSTEM_INSTANCE_ID'] == 1 else '_' + str(DCS_CONFIG['SUBSYSTEM_INSTANCE_ID']))
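# Illustrative example of the substitutions above (values are assumptions, not
# taken from the real config files): with ID_WILDCARD_PATTERN = '{ID}' and
# SUBSYSTEM_INSTANCE_ID = 2, a folder template like 'log/dcs{ID}/' becomes
# 'log/dcs_2/'; for instance 1 the wildcard is simply removed ('log/dcs/').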
| 66.307692 | 116 | 0.778422 | 119 | 862 | 5.058824 | 0.277311 | 0.179402 | 0.086379 | 0.152824 | 0.760797 | 0.760797 | 0.667774 | 0.667774 | 0.667774 | 0.378738 | 0 | 0.002577 | 0.099768 | 862 | 12 | 117 | 71.833333 | 0.773196 | 0 | 0 | 0.181818 | 0 | 0 | 0.433875 | 0.320186 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74392be75a25359b077926e8e16806629ba266d4 | 3,693 | py | Python | blackduck/Reporting.py | rishianand06/hub-rest-api-python | 06ff0136b5e8f54a8947b70eca6fd26d72a9c6ea | [
"Apache-2.0"
] | null | null | null | blackduck/Reporting.py | rishianand06/hub-rest-api-python | 06ff0136b5e8f54a8947b70eca6fd26d72a9c6ea | [
"Apache-2.0"
] | null | null | null | blackduck/Reporting.py | rishianand06/hub-rest-api-python | 06ff0136b5e8f54a8947b70eca6fd26d72a9c6ea | [
"Apache-2.0"
] | null | null | null | import logging
import requests
import json
from operator import itemgetter
import urllib.parse
from .Utils import object_id
logger = logging.getLogger(__name__)
valid_categories = ['VERSION','CODE_LOCATIONS','COMPONENTS','SECURITY','FILES', 'ATTACHMENTS', 'CRYPTO_ALGORITHMS', 'PROJECT_VERSION_CUSTOM_FIELDS', 'BOM_COMPONENT_CUSTOM_FIELDS', 'LICENSE_TERM_FULFILLMENT']
valid_report_formats = ["CSV", "JSON"]
def create_version_reports(self, version, report_list, format="CSV"):
assert all(list(map(lambda k: k in valid_categories, report_list))), "One or more selected report categories in {} are not valid ({})".format(
report_list, valid_categories)
assert format in valid_report_formats, "Format must be one of {}".format(valid_report_formats)
post_data = {
'categories': report_list,
'versionId': version['_meta']['href'].split("/")[-1],
'reportType': 'VERSION',
'reportFormat': format
}
version_reports_url = self.get_link(version, 'versionReport')
return self.execute_post(version_reports_url, post_data)
valid_notices_formats = ["TEXT", "JSON"]
def create_version_notices_report(self, version, format="TEXT", include_copyright_info=True):
assert format in valid_notices_formats, "Format must be one of {}".format(valid_notices_formats)
post_data = {
'versionId': object_id(version),
'reportType': 'VERSION_LICENSE',
'reportFormat': format
}
if include_copyright_info:
post_data.update({'categories': ["COPYRIGHT_TEXT"] })
notices_report_url = self.get_link(version, 'licenseReports')
return self.execute_post(notices_report_url, post_data)
def download_report(self, report_id):
# TODO: Fix me, looks like the reports should be downloaded from different paths than the one here, and depending on the type and format desired the path can change
url = self.get_urlbase() + "/api/reports/{}".format(report_id)
return self.execute_get(url, {'Content-Type': 'application/zip', 'Accept':'application/zip'})
def download_notification_report(self, report_location_url):
'''Download the notices report using the report URL. Inspect the report object to determine
the format and use the appropriate media header'''
custom_headers = {'Accept': 'application/vnd.blackducksoftware.report-4+json'}
response = self.execute_get(report_location_url, custom_headers=custom_headers)
report_obj = response.json()
if report_obj['reportFormat'] == 'TEXT':
download_url = self.get_link(report_obj, "download") + ".json"
logger.debug("downloading report from {}".format(download_url))
response = self.execute_get(download_url, {'Accept': 'application/zip'})
else:
# JSON
contents_url = self.get_link(report_obj, "content")
logger.debug("retrieving report contents from {}".format(contents_url))
response = self.execute_get(contents_url, {'Accept': 'application/json'})
return response, report_obj['reportFormat']
##
#
# (Global) Vulnerability reports
#
##
valid_vuln_status_report_formats = ["CSV", "JSON"]
def create_vuln_status_report(self, format="CSV"):
assert format in valid_vuln_status_report_formats, "Format must be one of {}".format(valid_vuln_status_report_formats)
post_data = {
"reportFormat": format,
"locale": "en_US"
}
url = self.get_apibase() + "/vulnerability-status-reports"
custom_headers = {
'Content-Type': 'application/vnd.blackducksoftware.report-4+json',
'Accept': 'application/vnd.blackducksoftware.report-4+json'
}
return self.execute_post(url, custom_headers=custom_headers, data=post_data)
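# Usage sketch: these module-level functions take `self`, so they presumably
# get bound onto a Hub client object elsewhere in the package. Assuming a
# client `hub` that exposes them, a report could be requested with, e.g.:
#
#     response = hub.create_vuln_status_report(format="JSON")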
| 43.964286 | 207 | 0.721094 | 458 | 3,693 | 5.558952 | 0.30131 | 0.021995 | 0.023566 | 0.021995 | 0.220738 | 0.141006 | 0.083661 | 0.045954 | 0.032207 | 0 | 0 | 0.001288 | 0.158949 | 3,693 | 83 | 208 | 44.493976 | 0.818416 | 0.090983 | 0 | 0.080645 | 0 | 0 | 0.271039 | 0.074873 | 0 | 0 | 0 | 0.012048 | 0.064516 | 1 | 0.080645 | false | 0 | 0.096774 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
743bf38636435755c9ffb7ddc1a07e96b2063afd | 3,346 | py | Python | server/adb.py | narata/answerot_new_1 | 65d3dee0f4a1f0b743b17b82fbaf17479052f6be | [
"Apache-2.0"
] | null | null | null | server/adb.py | narata/answerot_new_1 | 65d3dee0f4a1f0b743b17b82fbaf17479052f6be | [
"Apache-2.0"
] | null | null | null | server/adb.py | narata/answerot_new_1 | 65d3dee0f4a1f0b743b17b82fbaf17479052f6be | [
"Apache-2.0"
] | null | null | null |
import subprocess,re
import ConfigParser
def set_config(device, sx, sy, ci, ck):
device_info = get_device()
result = {
"device": "",
"sx": "",
"sy": "",
"client_id": "",
"client_secret": "",
"msg": "",
"device_info": device_info,
}
cf = ConfigParser.ConfigParser()
cf.read("server\\config.ini")
if device == '' and sx=='' and sy == '':
if len(cf.sections())<=0:
return result
try:
        result['sx'] = cf.get('config', 'sx')
except:
pass
try:
result['sy'] = cf.get('config', 'sy')
except:
pass
try:
result['device'] = cf.get('config', 'device')
except:
pass
try:
result['client_id'] = cf.get('config', 'client_id')
except:
pass
try:
result['client_secret'] = cf.get('config', 'client_secret')
except Exception as e:
print e
return result
if not cf.has_section('config'):
cf.add_section('config')
cf.set('config', 'sx', sx)
cf.set('config', 'sy', sy)
cf.set('config', 'device', device)
cf.set('config', 'client_id', ci)
cf.set('config', 'client_secret', ck)
cf.write(open("server\\config.ini", "w"))
result = {
"device": device,
"sx": sx,
"sy": sy,
"client_id": ci,
"client_secret": ck,
"msg": "Config success!",
"device_info": device_info,
}
return result
def get_config(name):
cf = ConfigParser.ConfigParser()
try:
cf.read("server\\config.ini")
return cf.get('config', name)
except Exception as e:
return ""
def get_file_data(path):
f = file(path, 'r')
b = f.read()
f.close()
return b
def get_pic(path):
cmd1 = "server\\bin\\adb.exe shell /system/bin/screencap -p /sdcard/screenshot.png"
cmd2 = "server\\bin\\adb.exe pull /sdcard/screenshot.png " + path
device = ''
try:
device = get_config('device')
if device != '':
cmd1 = "server\\bin\\adb.exe -s %s shell /system/bin/screencap -p /sdcard/screenshot.png" % device
cmd2 = "server\\bin\\adb.exe -s %s pull /sdcard/screenshot.png %s" % (device, path)
except:
pass
try:
p = subprocess.Popen(cmd1, stderr=file("server\\bin\\log.txt", 'w'))
p.wait()
b = get_file_data("server\\bin\\log.txt")
if len(b) > 0 and b.find('device') !=-1:
return b
p = subprocess.Popen(cmd2, stderr=file("server\\bin\\log.txt", 'w'))
p.wait()
b = get_file_data("server\\bin\\log.txt")
if len(b) > 0 and b.find('device') !=-1:
return b
return True
except Exception as e:
return e
def get_device():
cmd1 = "server\\bin\\adb.exe devices"
try:
p = subprocess.Popen(cmd1, stdout=file("server\\bin\\log.txt", 'w'))
p.wait()
b = get_file_data("server\\bin\\log.txt")
b = b.split('\n')
r = ''
for bi in b:
b1 = re.match(re.compile('(\d+).*device'), bi)
if b1 != None:
bi = bi + ' <<<<'
r = r + bi + '\n'
return r[:-2]
except Exception as e:
return e | 28.355932 | 110 | 0.50269 | 417 | 3,346 | 3.959233 | 0.211031 | 0.059964 | 0.039976 | 0.054512 | 0.384615 | 0.261054 | 0.188976 | 0.188976 | 0.136887 | 0.136887 | 0 | 0.007114 | 0.327854 | 3,346 | 118 | 111 | 28.355932 | 0.72699 | 0 | 0 | 0.427273 | 0 | 0.018182 | 0.235804 | 0.038852 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.045455 | 0.018182 | null | null | 0.009091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7441bd95b6465fa3864a116467ce2c5d66e46a23 | 283 | py | Python | ged4py/__init__.py | haney/ged4py | d4ae4baa030caa23961fd1c15298f9e3a225c67d | [
"MIT"
] | 1 | 2022-01-06T22:56:38.000Z | 2022-01-06T22:56:38.000Z | ged4py/__init__.py | haney/ged4py | d4ae4baa030caa23961fd1c15298f9e3a225c67d | [
"MIT"
] | null | null | null | ged4py/__init__.py | haney/ged4py | d4ae4baa030caa23961fd1c15298f9e3a225c67d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Top-level package for GEDCOM parser for Python."""
__author__ = """Andy Salnikov"""
__email__ = 'ged4py@py-dev.com'
__version__ = '0.1.10'
from . import codecs # noqa: F401, needed to register ANSEL codec
from .parser import GedcomReader # noqa: F401
| 25.727273 | 66 | 0.689046 | 39 | 283 | 4.692308 | 0.846154 | 0.087432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050633 | 0.162544 | 283 | 10 | 67 | 28.3 | 0.721519 | 0.438163 | 0 | 0 | 0 | 0 | 0.238411 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
74420935d01b554c5fcec41d054c6daf42c2218a | 1,600 | py | Python | src/auth.py | 4shub/weExist | 5b3187ecdd11ee079b3a0ce85bd1a1aeee242b16 | [
"MIT"
] | null | null | null | src/auth.py | 4shub/weExist | 5b3187ecdd11ee079b3a0ce85bd1a1aeee242b16 | [
"MIT"
] | null | null | null | src/auth.py | 4shub/weExist | 5b3187ecdd11ee079b3a0ce85bd1a1aeee242b16 | [
"MIT"
] | null | null | null | """
Original: Shelley Pham
New Author: Shubham Naik
"""
from __future__ import print_function
import httplib2
import os
import re
import time
import base64
from apiclient import discovery
from apiclient import errors
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
SCOPES = 'https://mail.google.com/'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Gmail API Python Quickstart'
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
def get_credentials():
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'gmail-python-quickstart.json')
store = Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
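# Note: the OAuth token is cached at ~/.credentials/gmail-python-quickstart.json
# (credential_path above), so the interactive consent flow only runs on the
# first execution or after the stored credentials become invalid.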
def main():
credentials = get_credentials()
if __name__ == '__main__':
main()
| 27.118644 | 75 | 0.719375 | 194 | 1,600 | 5.747423 | 0.458763 | 0.021525 | 0.034081 | 0.041256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00624 | 0.19875 | 1,600 | 58 | 76 | 27.586207 | 0.863495 | 0.05875 | 0 | 0 | 0 | 0 | 0.094126 | 0.018692 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.363636 | 0 | 0.431818 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
7445bb6b5d5d54d39878d1c0db391f23d5030f90 | 396 | py | Python | picky/wsgi.py | Wilfred/Picky | 48c40ad3e5bcb531c76439384094b1acfb174824 | [
"Python-2.0"
] | 2 | 2018-10-18T00:02:21.000Z | 2020-03-01T07:11:34.000Z | picky/wsgi.py | Wilfred/Picky | 48c40ad3e5bcb531c76439384094b1acfb174824 | [
"Python-2.0"
] | null | null | null | picky/wsgi.py | Wilfred/Picky | 48c40ad3e5bcb531c76439384094b1acfb174824 | [
"Python-2.0"
] | null | null | null | import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "picky.settings")
# This application object is used by the development server
# as well as any WSGI server configured to use this file.
from django.core.wsgi import get_wsgi_application
from raven.contrib.django.raven_compat.middleware.wsgi import Sentry
from dj_static import Cling
application = Cling(Sentry(get_wsgi_application()))
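# Request flow, for reference: Cling (dj-static) serves static files and passes
# all other requests to the Sentry WSGI middleware, which reports unhandled
# exceptions raised by the inner Django application to Sentry via Raven.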
| 33 | 68 | 0.820707 | 59 | 396 | 5.372881 | 0.610169 | 0.063091 | 0.113565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 396 | 11 | 69 | 36 | 0.900568 | 0.285354 | 0 | 0 | 0 | 0 | 0.128571 | 0.078571 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
744740c25492cd193acb8f86ba79136bde1411c5 | 594 | py | Python | label_maker/utils.py | cgoodier/label-maker | 4e8774ff2790234352a78de82b92d762697a65d2 | [
"MIT"
] | 1 | 2021-03-21T13:37:46.000Z | 2021-03-21T13:37:46.000Z | label_maker/utils.py | cgoodier/label-maker | 4e8774ff2790234352a78de82b92d762697a65d2 | [
"MIT"
] | null | null | null | label_maker/utils.py | cgoodier/label-maker | 4e8774ff2790234352a78de82b92d762697a65d2 | [
"MIT"
] | null | null | null | """Provide utility functions"""
import numpy as np
def url(tile, imagery):
"""Return a tile url provided an imagery template and a tile"""
return imagery.replace('{x}', tile[0]).replace('{y}', tile[1]).replace('{z}', tile[2])
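# Worked example (placeholder template, not a real tile endpoint):
#
#     url(['4', '7', '3'], 'https://tiles.example.com/{z}/{x}/{y}.png')
#     # -> 'https://tiles.example.com/3/4/7.png'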
def class_match(ml_type, label, i):
"""Determine if a label matches a given class index"""
if ml_type == 'classification':
return label[i] > 0
elif ml_type == 'object-detection':
return len(list(filter(lambda bb: bb[4] == i, label)))
elif ml_type == 'segmentation':
return np.count_nonzero(label == i)
return None
| 34.941176 | 90 | 0.638047 | 87 | 594 | 4.287356 | 0.563218 | 0.064343 | 0.053619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010593 | 0.205387 | 594 | 16 | 91 | 37.125 | 0.779661 | 0.222222 | 0 | 0 | 0 | 0 | 0.11435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
744966ab2f30b26d35cfd82483b84e8043c64525 | 2,702 | py | Python | vb_baseapp/admin/actions/basemodel_with_softdelete.py | vbyazilim/django-vb-baseapp | 83a62a9d7cb349351ea64aeeb616afe9a94cda5d | [
"MIT"
] | null | null | null | vb_baseapp/admin/actions/basemodel_with_softdelete.py | vbyazilim/django-vb-baseapp | 83a62a9d7cb349351ea64aeeb616afe9a94cda5d | [
"MIT"
] | 1 | 2021-10-30T16:44:15.000Z | 2021-10-30T16:44:15.000Z | vb_baseapp/admin/actions/basemodel_with_softdelete.py | vbyazilim/django-vb-baseapp | 83a62a9d7cb349351ea64aeeb616afe9a94cda5d | [
"MIT"
] | null | null | null | from django.contrib.admin import helpers
from django.contrib.admin.utils import model_ngettext
from django.core.exceptions import PermissionDenied
from django.template.response import TemplateResponse
from django.utils.translation import ugettext_lazy as _
from console import console
__all__ = ['recover_selected', 'hard_delete_selected']
console = console(source=__name__)
def recover_selected(modeladmin, request, queryset):
"""
TODO: Implement this method!, try to inject intermediate page which is
enabled on `hard_delete_selected` method. Inform user about which
related records will be recovered...
"""
number_of_rows_recovered, __ = queryset.undelete() # __ = recovered_items
if number_of_rows_recovered == 1:
message_bit = _('1 record was')
else:
message_bit = _('%(number_of_rows)s records were') % dict(number_of_rows=number_of_rows_recovered)
message = _('%(message_bit)s successfully marked as active') % dict(message_bit=message_bit)
modeladmin.message_user(request, message)
def hard_delete_selected(modeladmin, request, queryset):
    opts = modeladmin.model._meta  # pylint: disable=W0212
deletable_objects, model_count, perms_needed, protected = modeladmin.get_deleted_objects(queryset, request)
if request.POST.get('post') and not protected:
if perms_needed:
raise PermissionDenied
if queryset.count():
number_of_rows_deleted, __ = queryset.hard_delete() # __ = deleted_items
if number_of_rows_deleted == 1:
message_bit = _('1 record was')
else:
message_bit = _('%(number_of_rows)s records were') % dict(number_of_rows=number_of_rows_deleted)
message = _('%(message_bit)s deleted') % dict(message_bit=message_bit)
modeladmin.message_user(request, message)
return None
objects_name = model_ngettext(queryset)
if perms_needed or protected:
title = _('Cannot delete %(name)s') % {'name': objects_name}
else:
title = _('Are you sure?')
context = {
**modeladmin.admin_site.each_context(request),
'title': title,
'objects_name': str(objects_name),
'deletable_objects': [deletable_objects],
'model_count': dict(model_count).items(),
'queryset': queryset,
'perms_lacking': perms_needed,
'protected': protected,
'opts': opts,
'action_checkbox_name': helpers.ACTION_CHECKBOX_NAME,
'media': modeladmin.media,
}
request.current_app = modeladmin.admin_site.name
return TemplateResponse(request, 'admin/hard_delete_selected_confirmation.html', context)
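# Wiring sketch (standard Django admin usage; the model name is illustrative):
#
#     @admin.register(Post)
#     class PostAdmin(admin.ModelAdmin):
#         actions = [recover_selected, hard_delete_selected]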
| 37.013699 | 112 | 0.695041 | 315 | 2,702 | 5.625397 | 0.339683 | 0.045147 | 0.06772 | 0.035553 | 0.18623 | 0.164786 | 0.164786 | 0.164786 | 0.164786 | 0.164786 | 0 | 0.00374 | 0.208364 | 2,702 | 72 | 113 | 37.527778 | 0.824684 | 0.087713 | 0 | 0.137255 | 0 | 0 | 0.156404 | 0.018062 | 0 | 0 | 0 | 0.013889 | 0 | 1 | 0.039216 | false | 0 | 0.117647 | 0 | 0.196078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7449e4cac17ed016fac5ca67316d8bffabed1d94 | 808 | py | Python | appname/mailers/teams.py | Dnida/Ignite | 38ac6042c9b16cdccf5ef066a1ee80b741b51a37 | [
"BSD-2-Clause"
] | 53 | 2017-12-12T06:58:25.000Z | 2022-03-24T17:58:37.000Z | appname/mailers/teams.py | Dnida/Ignite | 38ac6042c9b16cdccf5ef066a1ee80b741b51a37 | [
"BSD-2-Clause"
] | 144 | 2018-01-22T15:15:41.000Z | 2022-03-28T09:02:00.000Z | appname/mailers/teams.py | Dnida/Ignite | 38ac6042c9b16cdccf5ef066a1ee80b741b51a37 | [
"BSD-2-Clause"
] | 19 | 2017-12-12T07:04:14.000Z | 2022-03-23T00:20:00.000Z | from flask import render_template, url_for
from appname.mailers import Mailer
class InviteEmail(Mailer):
TEMPLATE = 'email/teams/invite.html'
def __init__(self, invite):
self.recipient = None
self.invite = invite
self.recipient_email = invite.invite_email or (invite.user and invite.user.email)
@property
def subject(self):
return ("{0} invited you to join their team on appname"
.format(self.invite.inviter.email))
def send(self):
link = url_for('auth.invite_page', invite_id=self.invite.id,
secret=self.invite.invite_secret, _external=True)
html_body = render_template(self.TEMPLATE, link=link, invite=self.invite)
return self.deliver_now(self.recipient_email, self.subject, html_body)
| 35.130435 | 89 | 0.679455 | 105 | 808 | 5.057143 | 0.457143 | 0.112994 | 0.071563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001595 | 0.22401 | 808 | 22 | 90 | 36.727273 | 0.845295 | 0 | 0 | 0 | 0 | 0 | 0.10396 | 0.028465 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.117647 | 0.058824 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
744cd6bb1eb26fbde0da3fa1db370c1066848b73 | 1,915 | py | Python | graphstar/tests/graph.py | pengboomouch/graphstar | f7f3537aa92118765b358dd3a47b4fa5cea8587c | [
"MIT"
] | null | null | null | graphstar/tests/graph.py | pengboomouch/graphstar | f7f3537aa92118765b358dd3a47b4fa5cea8587c | [
"MIT"
] | null | null | null | graphstar/tests/graph.py | pengboomouch/graphstar | f7f3537aa92118765b358dd3a47b4fa5cea8587c | [
"MIT"
] | null | null | null | import pytest
from graphstar import graph
@pytest.fixture(scope="function")
def g():
return graph.Graph()
@pytest.fixture
def n():
return graph.Node()
@pytest.fixture(scope="function")
def two_nodes(g):
n1 = g.make_node(1, 1)
n2 = g.make_node(2, 2)
return n1, n2
def test_get_node_by_id(g):
"""Test the node_by_id method"""
n1 = g.make_node(1, 1)
n2 = g.node_by_id(n1.id)
assert n1 is n2 and n1.id == n2.id
def test_node_id_is_unique(g):
"""Test incremental ids work"""
for i in range(0, 10):
n = g.make_node(i, i)
assert n.id == i
def test_node_duplicate_not_allowed(g):
"""Two nodes on the same position not allowed"""
n1 = g.make_node(1, 1)
n2 = g.node_by_id(n1.id)
assert n1 is n2 and n1.id == n2.id
def test_connection_creation(g, two_nodes):
"""Test the make_connection method"""
n1, n2 = two_nodes
c = g.make_connection(n1, n2)
assert c is True
def test_get_connections(g, two_nodes):
"""Test the get_connections method"""
n1, n2 = two_nodes
g.make_connection(n1, n2)
clist = g.get_connections(n1)
for c in clist:
assert isinstance(c, graph.Connection)
assert n1 is c.from_node and n2 is c.to_node
def test_connection_refuse(g, n):
"""Connection with a node that is not in
the Graph is not allowed"""
# This node is in the graph, the other is not
n1 = g.make_node(1, 1)
c = g.make_connection(n, n1)
assert c is False
def test_connection_costs(g):
"""Test connection costs are assigned
correctly (manhattan heuristic)"""
n1 = g.make_node(1, 1)
n2 = g.make_node(3, 3)
g.make_connection(n1, n2)
clist = g.get_connections(n1)
for c in clist:
assert c.cost == 4
def test_duplicate_connection_not_allowed(g, two_nodes):
"""Duplicate connections in same direction
are not allowed"""
n1, n2 = two_nodes
connected = g.make_connection(n1, n2)
assert connected is True
connect_again = g.make_connection(n1, n2)
assert connect_again is False
| 21.277778 | 56 | 0.710705 | 348 | 1,915 | 3.741379 | 0.20977 | 0.053763 | 0.0553 | 0.042243 | 0.408602 | 0.282642 | 0.215054 | 0.215054 | 0.215054 | 0.215054 | 0 | 0.037712 | 0.169191 | 1,915 | 89 | 57 | 21.516854 | 0.780641 | 0.203655 | 0 | 0.377358 | 0 | 0 | 0.010804 | 0 | 0 | 0 | 0 | 0 | 0.188679 | 1 | 0.207547 | false | 0 | 0.037736 | 0.037736 | 0.301887 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
744eeaef10e5457895973e69731f2d0834acf4b0 | 30,350 | py | Python | __classes__/node.py | Ahuge/NukeParser | 97a37883f5ccf030da319d21c5f80cfd6113df49 | [
"MIT"
] | 24 | 2016-06-26T01:29:04.000Z | 2021-04-30T16:20:13.000Z | __classes__/node.py | Ahuge/NukeParser | 97a37883f5ccf030da319d21c5f80cfd6113df49 | [
"MIT"
] | 2 | 2021-10-13T23:12:06.000Z | 2022-01-17T21:30:00.000Z | __classes__/node.py | Ahuge/NukeParser | 97a37883f5ccf030da319d21c5f80cfd6113df49 | [
"MIT"
] | 2 | 2019-10-29T17:23:59.000Z | 2020-06-24T18:47:14.000Z | class Node(object):
def Class(self):
"""
self.Class() -> Class of node.
@return: Class of node.
"""
        # Mirror Nuke's Node.Class() by returning the class name.
        return self.__class__.__name__
def __getitem__(self):
"""
x.__getitem__(y) <==> x[y]
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def __len__(self):
"""
x.__len__() <==> len(x)
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def __reduce_ex__(self):
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def __repr__(self):
"""
x.__repr__() <==> repr(x)
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def __str__(self):
"""
x.__str__() <==> str(x)
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def addKnob(self, k):
"""
self.addKnob(k) -> None.
Add knob k to this node or panel.
@param k: Knob.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def allKnobs(self):
"""
self.allKnobs() -> list
Get a list of all knobs in this node, including nameless knobs.
For example:
>>> b = nuke.nodes.Blur()
>>> b.allKnobs()
@return: List of all knobs.
Note that this doesn't follow the links for Link_Knobs
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def autoplace(self):
"""
self.autoplace() -> None.
Automatically place nodes, so they do not overlap.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def bbox(self):
"""
self.bbox() -> List of x, y, w, h.
Bounding box of the node.
@return: List of x, y, w, h.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def canSetInput(self, i, node):
"""
self.canSetInput(i, node) -> bool
Check whether the output of 'node' can be connected to input i.
@param i: Input number.
@param node: The node to be connected to input i.
@return: True if node can be connected, False otherwise.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def channels(self):
"""
self.channels() -> String list.
List channels output by this node.
@return: String list.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def clones(self):
"""
self.clones() -> Number of clones.
@return: Number of clones.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def connectInput(self, i, node):
"""
self.connectInput(i, node) -> bool
Connect the output of 'node' to the i'th input or the next available unconnected input. The requested input is tried first, but if it is already set then subsequent inputs are tried until an unconnected one is found, as when you drop a connection arrow onto a node in the GUI.
@param i: Input number to try first.
@param node: The node to connect to input i.
@return: True if a connection is made, False otherwise.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def deepSample(self, c, x, y, n):
"""
self.deepSample(c, x, y, n) -> Floating point value.
Return pixel values from a deep image.
This requires the image to be calculated, so performance may be very bad if this is placed into an expression in
a control panel.
@param c: Channel name.
@param x: Position to sample (X coordinate).
@param y: Position to sample (Y coordinate).
@param n: Sample index (between 0 and the number returned by deepSampleCount() for this pixel, or -1 for the frontmost).
@return: Floating point value.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def deepSampleCount(self, x, y):
"""
self.deepSampleCount(x, y) -> Integer value.
Return number of samples for a pixel on a deep image.
This requires the image to be calculated, so performance may be very bad if this is placed into an expression in
a control panel.
@param x: Position to sample (X coordinate).
@param y: Position to sample (Y coordinate).
@return: Integer value.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def dependencies(self, what):
"""
self.dependencies(what) -> List of nodes.
List all nodes referred to by this node. 'what' is an optional integer (see below).
You can use the following constants or'ed together to select what types of dependencies are looked for:
nuke.EXPRESSIONS = expressions
nuke.INPUTS = visible input pipes
nuke.HIDDEN_INPUTS = hidden input pipes.
The default is to look for all types of connections.
Example:
nuke.toNode('Blur1').dependencies( nuke.INPUTS | nuke.EXPRESSIONS )
@param what: Or'ed constant of nuke.EXPRESSIONS, nuke.INPUTS and nuke.HIDDEN_INPUTS to select the types of dependencies. The default is to look for all types of connections.
@return: List of nodes.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def dependent(self, what, forceEvaluate):
"""
self.dependent(what, forceEvaluate) -> List of nodes.
List all nodes that read information from this node. 'what' is an optional integer:
You can use any combination of the following constants or'ed together to select what types of dependent nodes to look for:
nuke.EXPRESSIONS = expressions
nuke.INPUTS = visible input pipes
nuke.HIDDEN_INPUTS = hidden input pipes.
The default is to look for all types of connections.
forceEvaluate is an optional boolean defaulting to True. When this parameter is true, it forces a re-evaluation of the entire tree.
This can be expensive, but otherwise could give incorrect results if nodes are expression-linked.
Example:
nuke.toNode('Blur1').dependent( nuke.INPUTS | nuke.EXPRESSIONS )
@param what: Or'ed constant of nuke.EXPRESSIONS, nuke.INPUTS and nuke.HIDDEN_INPUTS to select the types of dependent nodes. The default is to look for all types of connections.
@param forceEvaluate: Specifies whether a full tree evaluation will take place. Defaults to True.
@return: List of nodes.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def error(self):
"""
error() -> bool
True if the node or any in its input tree have an error, or False otherwise.
Error state of the node and its input tree. Deprecated; use hasError or treeHasError instead.
Note that this will always return false for viewers, which cannot generate their input trees. Instead, choose an input of the viewer (e.g. the active one), and call treeHasError() on that.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def fileDependencies(self, start, end):
"""
self.fileDependencies(start, end) -> List of nodes and filenames.
@param start: first frame
@param end: last frame
Returns the list of input file dependencies for this node and all nodes upstream from this node for the given frame range.
        The file dependencies are calculated by searching for Read ops or ops with a File knob.
All views are considered and current proxy mode is used to decide on whether full format or proxy files are returned.
Note that Write nodes files are also included but precomps, gizmos and external plugins are not.
Any time shifting operation such as frameholds, timeblurs, motionblur etc are taken into consideration.
        @return: The return list is a list of nodes and the files they require.
        E.g. [[Read1, ['file1.dpx', 'file2.dpx']], [Read2, ['file3.dpx', 'file4.dpx']]]
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def firstFrame(self):
"""
self.firstFrame() -> int.
First frame in frame range for this node.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def forceValidate(self):
"""
self.forceValidate() -> None
Force the node to validate itself, updating its hash.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def format(self):
"""
self.format() -> Format.
Format of the node.
@return: Format.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def frameRange(self):
"""
self.frameRange() -> FrameRange.
Frame range for this node.
@return: FrameRange.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def fullName(self):
"""
self.fullName() -> str
Get the name of this node and any groups enclosing it in 'group.group.name' form.
@return: The fully-qualified name of this node, as a string.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def getNumKnobs(self):
"""
self.numKnobs() -> The number of knobs.
@return: The number of knobs.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def hasError(self):
"""
hasError() -> bool
True if the node itself has an error, regardless of the state of the ops in its input tree, or False otherwise.
Error state of the node itself, regardless of the state of the ops in its input tree.
Note that an error on a node may not appear if there is an error somewhere in its input tree, because it may not be possible to validate the node itself correctly in that case.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def height(self):
"""
self.height() -> int.
Height of the node.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def help(self):
"""
self.help() -> str
@return: Help for the node.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def hideControlPanel(self):
"""
self.hideControlPanel() -> None
@return: None
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def input(self, i):
"""
self.input(i) -> The i'th input.
@param i: Input number.
@return: The i'th input.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def inputs(self):
"""
self.inputs() -> Gets the maximum number of connected inputs.
@return: Number of the highest connected input + 1. If inputs 0, 1, and 3 are connected, this will return 4.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def isSelected(self):
"""
self.isSelected() -> bool
Returns the current selection state of the node. This is the same as checking the 'selected' knob.
@return: True if selected, or False if not.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def knob(self, p):
"""
self.knob(p) -> The knob named p or the pth knob.
@param p: A string or an integer.
@return: The knob named p or the pth knob.
Note that this follows the links for Link_Knobs
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def knobs(self):
"""
self.knobs() -> dict
Get a dictionary of (name, knob) pairs for all knobs in this node.
For example:
>>> b = nuke.nodes.Blur()
>>> b.knobs()
@return: Dictionary of all knobs.
Note that this doesn't follow the links for Link_Knobs
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def lastFrame(self):
"""
self.lastFrame() -> int.
Last frame in frame range for this node.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def linkableKnobs(self):
"""
self.linkableKnobs(knobType) -> List
        Returns a list of any knobs that may be linked to from the node, as well as some meta information about each knob. This may include whether the knob is enabled and whether it should be used for absolute or relative values. Not all of these variables may make sense for all knobs.
        @param knobType: A KnobType describing the type of knobs you want.
        @return: A list of LinkableKnobInfo that may be empty.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def maxInputs(self):
"""
self.maximumInputs() -> Maximum number of inputs this node can have.
@return: Maximum number of inputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def maxOutputs(self):
"""
self.maximumOutputs() -> Maximum number of outputs this node can have.
@return: Maximum number of outputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def maximumInputs(self):
"""
self.maximumInputs() -> Maximum number of inputs this node can have.
@return: Maximum number of inputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def maximumOutputs(self):
"""
self.maximumOutputs() -> Maximum number of outputs this node can have.
@return: Maximum number of outputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def metadata(self, key, time, view):
"""
self.metadata(key, time, view) -> value or dict
Return the metadata item for key on this node at current output context, or at optional time and view.
If key is not specified a dictionary containing all key/value pairs is returned.
None is returned if key does not exist on this node.
@param key: Optional name of the metadata key to retrieve.
@param time: Optional time to evaluate at (default is taken from node's current output context).
@param view: Optional view to evaluate at (default is taken from node's current output context).
@return: The requested metadata value, a dictionary containing all keys if a key name is not provided, or None if the specified key is not matched.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def minInputs(self):
"""
self.minimumInputs() -> Minimum number of inputs this node can have.
@return: Minimum number of inputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def minimumInputs(self):
"""
self.minimumInputs() -> Minimum number of inputs this node can have.
@return: Minimum number of inputs this node can have.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def name(self):
"""
self.name() -> str
@return: Name of node.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def numKnobs(self):
"""
self.numKnobs() -> The number of knobs.
@return: The number of knobs.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def opHashes(self):
"""
self.opHashes() -> list of int
Returns a list of hash values, one for each op in this node.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def optionalInput(self):
"""
self.optionalInput() -> Number of first optional input.
@return: Number of first optional input.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def performanceInfo(self):
"""
        self.performanceInfo(category) -> Returns performance information for this node. Performance timing must be enabled.
        @param category: Optional performance category. Must be one of nuke.PROFILE_STORE, nuke.PROFILE_VALIDATE, nuke.PROFILE_REQUEST or nuke.PROFILE_ENGINE; the default is nuke.PROFILE_ENGINE, which gives the performance info of the render engine.
@return: A dictionary containing the cumulative performance info for this category, where:
callCount = the number of calls made
timeTakenCPU = the CPU time spent in microseconds
timeTakenWall = the actual time ( wall time ) spent in microseconds
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def pixelAspect(self):
"""
        self.pixelAspect() -> float.
        Pixel aspect ratio of the node.
        @return: float.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def proxy(self):
"""
self.proxy() -> bool
@return: True if proxy is enabled, False otherwise.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def readKnobs(self, s):
"""
self.readKnobs(s) -> None.
Read the knobs from a string (TCL syntax).
@param s: A string.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def redraw(self):
"""
self.redraw() -> None.
Force a redraw of the node.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def removeKnob(self, k):
"""
self.removeKnob(k) -> None.
Remove knob k from this node or panel. Throws a ValueError exception if k is not found on the node.
@param k: Knob.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def resetKnobsToDefault(self):
"""
self.resetKnobsToDefault() -> None
Reset all the knobs to their default values.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def running(self):
"""
        self.running() -> Node rendering when parallel threads are running or None.
        Class method.
        @return: Node rendering when parallel threads are running or None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def sample(self, c, x, y, dx, dy, frame):
"""
self.sample(c, x, y, dx, dy) -> Floating point value.
Return pixel values from an image.
This requires the image to be calculated, so performance may be very bad if this is placed into an expression in
        a control panel. Produces a cubic filtered result. Any sizes less than 1, including 0, produce the same filtered result;
        this is correct based on sampling theory. Note that integers are at the corners of pixels; to center on a pixel, add .5 to both coordinates.
If the optional dx,dy are not given then the exact value of the square pixel that x,y lands in is returned. This is also called 'impulse filtering'.
@param c: Channel name.
@param x: Centre of the area to sample (X coordinate).
@param y: Centre of the area to sample (Y coordinate).
@param dx: Optional size of the area to sample (X coordinate).
@param dy: Optional size of the area to sample (Y coordinate).
@param frame: Optional frame to sample the node at.
@return: Floating point value.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def screenHeight(self):
"""
self.screenHeight() -> int.
Height of the node when displayed on screen in the DAG, at 1:1 zoom, in pixels.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def screenWidth(self):
"""
self.screenWidth() -> int.
Width of the node when displayed on screen in the DAG, at 1:1 zoom, in pixels.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def selectOnly(self):
"""
self.selectOnly() -> None.
Set this node to be the only selection, as if it had been clicked in the DAG.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setInput(self, i, node):
"""
self.setInput(i, node) -> bool
Connect input i to node if canSetInput() returns true.
@param i: Input number.
@param node: The node to connect to input i.
@return: True if canSetInput() returns true, or if the input is already correct.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setName(self, name, uncollide, updateExpressions):
"""
self.setName(name, uncollide=True, updateExpressions=False) -> None
Set name of the node and resolve name collisions if optional named argument 'uncollide' is True.
@param name: A string.
@param uncollide: Optional boolean to resolve name collisions. Defaults to True.
@param updateExpressions: Optional boolean to update expressions in other nodes to point at the new name. Defaults to False.
@return: None
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setSelected(self, selected):
"""
self.setSelected(selected) -> None.
Set the selection state of the node. This is the same as changing the 'selected' knob.
@param selected: New selection state - True or False.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setTab(self, tabIndex):
"""
self.setTab(tabIndex) -> None
@param tabIndex: The tab to show (first is 0).
@return: None
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setXYpos(self, x, y):
"""
self.setXYpos(x, y) -> None.
Set the (x, y) position of node in node graph.
@param x: The x position of node in node graph.
@param y: The y position of node in node graph.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setXpos(self, x):
"""
self.setXpos(x) -> None.
Set the x position of node in node graph.
@param x: The x position of node in node graph.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def setYpos(self, y):
"""
self.setYpos(y) -> None.
Set the y position of node in node graph.
@param y: The y position of node in node graph.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def showControlPanel(self, forceFloat):
"""
        self.showControlPanel(forceFloat=False) -> None
@param forceFloat: Optional python object. If it evaluates to True the control panel will always open as a floating panel. Default is False.
@return: None
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def showInfo(self, s):
"""
self.showInfo(s) -> None.
Creates a dialog box showing the result of script s.
@param s: A string.
@return: None.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def shown(self):
"""
        self.shown() -> true if the properties panel is open.
        @return: true if the properties panel is open. This can be used to skip updates that are not visible to the user.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def treeHasError(self):
"""
treeHasError() -> bool
True if the node or any in its input tree have an error, or False otherwise.
Error state of the node and its input tree.
Note that this will always return false for viewers, which cannot generate their input trees. Instead, choose an input of the viewer (e.g. the active one), and call treeHasError() on that.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def upstreamFrameRange(self, i):
"""
self.upstreamFrameRange(i) -> FrameRange
Frame range for the i'th input of this node.
@param i: Input number.
@return: FrameRange. Returns None when querying an invalid input.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def width(self):
"""
self.width() -> int.
Width of the node.
@return: int.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def writeKnobs(self, i):
"""
self.writeKnobs(i) -> String in .nk form.
Return a tcl list. If TO_SCRIPT | TO_VALUE is not on, this is a simple list
of knob names. If it is on, it is an alternating list of knob names
and the output of to_script().
Flags can be any of these or'd together:
- nuke.TO_SCRIPT produces to_script(0) values
- nuke.TO_VALUE produces to_script(context) values
- nuke.WRITE_NON_DEFAULT_ONLY skips knobs with not_default() false
- nuke.WRITE_USER_KNOB_DEFS writes addUserKnob commands for user knobs
- nuke.WRITE_ALL writes normally invisible knobs like name, xpos, ypos
@param i: The set of flags or'd together. Default is TO_SCRIPT | TO_VALUE.
@return: String in .nk form.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
def xpos(self):
"""
self.xpos() -> X position of node in node graph.
@return: X position of node in node graph.
"""
raise NotImplementedError("This function is not written yet. Please put in an issue on the github page.")
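# --- Editor's sketch, not part of the original file: one way a parser could
# back a few of these stubs with data pulled from a parsed .nk script.
# ParsedNode, its constructor arguments and the knob storage are assumptions;
# only the overridden method names come from the stub class above.
class ParsedNode(Node):
    def __init__(self, node_class, name, knobs=None):
        self._class = node_class
        self._name = name
        self._knobs = dict(knobs or {})  # knob name -> parsed value

    def Class(self):
        return self._class

    def name(self):
        return self._name

    def knob(self, p):
        # Accept either a knob name or an integer index, as the stub's
        # docstring describes.
        if isinstance(p, int):
            return list(self._knobs.values())[p]
        return self._knobs.get(p)

    def knobs(self):
        return dict(self._knobs)

    def numKnobs(self):
        return len(self._knobs)


# Example: ParsedNode('Blur', 'Blur1', {'size': '10'}).knob('size') -> '10'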
| 45.776772 | 290 | 0.635848 | 4,121 | 30,350 | 4.664887 | 0.125455 | 0.020807 | 0.109238 | 0.140449 | 0.592072 | 0.579744 | 0.571369 | 0.558885 | 0.545672 | 0.537973 | 0 | 0.001109 | 0.286689 | 30,350 | 662 | 291 | 45.845921 | 0.886877 | 0.512718 | 0 | 0.490066 | 0 | 0 | 0.494641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.496689 | false | 0 | 0 | 0 | 0.509934 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7452e14d90ee89dce38e2b779f06bcdfbe502433 | 11,061 | py | Python | tests/test_admin.py | hongquan/django-improved-user | ae91086adf58aeca81d6e0fe1aeb4c7f3f4c2a77 | [
"BSD-2-Clause"
] | null | null | null | tests/test_admin.py | hongquan/django-improved-user | ae91086adf58aeca81d6e0fe1aeb4c7f3f4c2a77 | [
"BSD-2-Clause"
] | null | null | null | tests/test_admin.py | hongquan/django-improved-user | ae91086adf58aeca81d6e0fe1aeb4c7f3f4c2a77 | [
"BSD-2-Clause"
] | null | null | null | """Test Admin interface provided by Improved User"""
import os
import re
from django import VERSION as DjangoVersion
from django.contrib.admin.models import LogEntry
from django.contrib.auth import SESSION_KEY
from django.test import TestCase, override_settings
from django.test.utils import patch_logger
from django.utils.encoding import force_text
from improved_user.admin import UserAdmin
from improved_user.forms import UserChangeForm, UserCreationForm
from improved_user.models import User
# TODO: remove conditional import when Dj 1.8 dropped
# pylint: disable=ungrouped-imports
try:
from django.urls import reverse
except ImportError:
from django.core.urlresolvers import reverse
# pylint: enable=ungrouped-imports
# Redirect in test_user_change_password will fail if session auth hash
# isn't updated after password change (#21649)
@override_settings(ROOT_URLCONF='tests.urls')
class UserAdminTests(TestCase):
"""Based off django.tests.auth_tests.test_views.ChangelistTests"""
@classmethod
def setUpTestData(cls):
"""Called by TestCase during initialization; creates users"""
cls.user1 = User.objects.create_user(
email='testclient@example.com',
password='password',
)
cls.user2 = User.objects.create_user(
email='staffmember@example.com',
password='password',
)
def setUp(self):
"""Make user1 a superuser before logging in."""
User.objects\
.filter(email='testclient@example.com')\
.update(is_staff=True, is_superuser=True)
self.login()
self.admin = User.objects.get(pk=self.user1.pk)
def login(self, username='testclient@example.com', password='password'):
"""Helper function to login the user (specified or default)"""
response = self.client.post('/login/', {
'username': username,
'password': password,
})
self.assertIn(SESSION_KEY, self.client.session)
return response
def logout(self):
"""Helper function to logout the user"""
response = self.client.get('/admin/logout/')
self.assertEqual(response.status_code, 200)
self.assertNotIn(SESSION_KEY, self.client.session)
def get_user_data(self, user): # pylint: disable=no-self-use
"""Generate dictionary of values to compare against"""
return {
'email': user.email,
'password': user.password,
'is_active': user.is_active,
'is_staff': user.is_staff,
'is_superuser': user.is_superuser,
'last_login_0': user.last_login.strftime('%Y-%m-%d'),
'last_login_1': user.last_login.strftime('%H:%M:%S'),
'initial-last_login_0': user.last_login.strftime('%Y-%m-%d'),
'initial-last_login_1': user.last_login.strftime('%H:%M:%S'),
'date_joined_0': user.date_joined.strftime('%Y-%m-%d'),
'date_joined_1': user.date_joined.strftime('%H:%M:%S'),
'initial-date_joined_0': user.date_joined.strftime('%Y-%m-%d'),
'initial-date_joined_1': user.date_joined.strftime('%H:%M:%S'),
'full_name': user.full_name,
'short_name': user.short_name,
}
def test_display_fields(self):
"""Test that admin shows all user fields"""
excluded_model_fields = ['id', 'logentry']
model_fields = set(
field.name for field in User._meta.get_fields()
if field.name not in excluded_model_fields
)
admin_fieldset_fields = set(
fieldname
for name, fieldset in UserAdmin.fieldsets
for fieldname in fieldset['fields']
)
self.assertEqual(model_fields, admin_fieldset_fields)
def test_add_has_required_fields(self):
"""Test all required fields in Admin Add view"""
excluded_model_fields = [
'date_joined', 'is_active', 'is_staff', 'is_superuser', 'password',
]
required_model_fields = [
field.name
for field in User._meta.get_fields()
if (field.name not in excluded_model_fields
and hasattr(field, 'null') and field.null is False
and hasattr(field, 'blank') and field.blank is False)
]
        extra_form_fields = list(
            UserCreationForm.declared_fields,  # pylint: disable=no-member
        )
admin_add_fields = [
fieldname
for name, fieldset in UserAdmin.add_fieldsets
for fieldname in fieldset['fields']
]
        for field in required_model_fields + extra_form_fields:
with self.subTest(field=field):
self.assertIn(field, admin_add_fields)
def test_correct_forms_used(self):
"""Test that UserAdmin uses the right forms"""
self.assertIs(UserAdmin.add_form, UserCreationForm)
self.assertIs(UserAdmin.form, UserChangeForm)
def test_user_add(self):
"""Ensure the admin add view works correctly"""
# we can get the form view
get_response = self.client.get(
reverse('auth_test_admin:improved_user_user_add'))
self.assertEqual(get_response.status_code, 200)
# we can create new users in the form view
post_response = self.client.post(
reverse('auth_test_admin:improved_user_user_add'),
{
'email': 'newuser@example.com',
'password1': 'passw0rd1!',
'password2': 'passw0rd1!',
},
follow=True,
)
self.assertEqual(post_response.status_code, 200)
self.assertTrue(
User.objects.filter(email='newuser@example.com').exists())
new_user = User.objects.get(email='newuser@example.com')
self.assertTrue(new_user.check_password('passw0rd1!'))
def test_user_change_email(self):
"""Test that user can change email in Admin"""
data = self.get_user_data(self.admin)
data['email'] = 'new_' + data['email']
response = self.client.post(
reverse(
'auth_test_admin:improved_user_user_change',
args=(self.admin.pk,),
),
data,
)
self.assertRedirects(
response,
reverse('auth_test_admin:improved_user_user_changelist'))
row = LogEntry.objects.latest('id')
if DjangoVersion >= (1, 9):
self.assertEqual(row.get_change_message(), 'Changed email.')
else:
self.assertEqual(row.change_message, 'Changed email.')
def test_user_not_change(self):
"""Test that message is raised when form submitted unchanged"""
response = self.client.post(
reverse(
'auth_test_admin:improved_user_user_change',
args=(self.admin.pk,),
),
self.get_user_data(self.admin),
)
self.assertRedirects(
response,
reverse('auth_test_admin:improved_user_user_changelist'))
row = LogEntry.objects.latest('id')
if DjangoVersion >= (1, 9):
self.assertEqual(row.get_change_message(), 'No fields changed.')
else:
self.assertEqual(row.change_message, 'No fields changed.')
def test_user_change_password(self):
"""Test that URL to change password form is correct"""
user_change_url = reverse(
'auth_test_admin:improved_user_user_change', args=(self.admin.pk,))
password_change_url = reverse(
'auth_test_admin:auth_user_password_change',
args=(self.admin.pk,))
response = self.client.get(user_change_url)
# Test the link inside password field help_text.
rel_link = re.search(
r'you can change the password using '
r'<a href="([^"]*)">this form</a>',
force_text(response.content),
).groups()[0]
self.assertEqual(
os.path.normpath(user_change_url + rel_link),
os.path.normpath(password_change_url),
)
response = self.client.post(
password_change_url,
{
'password1': 'password1',
'password2': 'password1',
},
)
self.assertRedirects(response, user_change_url)
row = LogEntry.objects.latest('id')
if DjangoVersion >= (1, 9):
self.assertEqual(row.get_change_message(), 'Changed password.')
else:
self.assertEqual(row.change_message, 'Changed password.')
self.logout()
self.login(password='password1')
def test_user_change_password_subclass_path(self):
"""Test subclasses can override password URL"""
class CustomChangeForm(UserChangeForm):
"""Subclass of UserChangeForm; uses rel_password_url"""
rel_password_url = 'moOps'
form = CustomChangeForm()
self.assertEqual(form.rel_password_url, 'moOps')
rel_link = re.search(
r'you can change the password using '
r'<a href="([^"]*)">this form</a>',
form.fields['password'].help_text,
).groups()[0]
self.assertEqual(rel_link, 'moOps')
def test_user_change_different_user_password(self):
"""Test that administrator can update other Users' passwords"""
user = User.objects.get(email='staffmember@example.com')
response = self.client.post(
reverse(
'auth_test_admin:auth_user_password_change',
args=(user.pk,),
),
{
'password1': 'password1',
'password2': 'password1',
},
)
self.assertRedirects(
response,
reverse(
'auth_test_admin:improved_user_user_change',
args=(user.pk,)))
row = LogEntry.objects.latest('id')
self.assertEqual(row.user_id, self.admin.pk)
self.assertEqual(row.object_id, str(user.pk))
if DjangoVersion >= (1, 9):
self.assertEqual(row.get_change_message(), 'Changed password.')
else:
self.assertEqual(row.change_message, 'Changed password.')
def test_changelist_disallows_password_lookups(self):
"""Users shouldn't be allowed to guess password
Checks against repeated password__startswith queries
https://code.djangoproject.com/ticket/20078
"""
# A lookup that tries to filter on password isn't OK
with patch_logger(
'django.security.DisallowedModelAdminLookup', 'error',
) as logger_calls:
response = self.client.get(
reverse('auth_test_admin:improved_user_user_changelist')
+ '?password__startswith=sha1$')
self.assertEqual(response.status_code, 400)
self.assertEqual(len(logger_calls), 1)
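# --- Editor's note, not part of the original file: patch_logger is an
# internal Django test helper that was dropped in later Django releases, and
# force_text was later renamed force_str. On a modern Django the last test
# above could use unittest's assertLogs instead; a sketch, assuming the same
# URL name and admin setup as above:
#
#     with self.assertLogs('django.security.DisallowedModelAdminLookup',
#                          'ERROR') as logged:
#         response = self.client.get(
#             reverse('auth_test_admin:improved_user_user_changelist')
#             + '?password__startswith=sha1$')
#     self.assertEqual(response.status_code, 400)
#     self.assertEqual(len(logged.records), 1)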
| 38.810526 | 79 | 0.610975 | 1,252 | 11,061 | 5.199681 | 0.211661 | 0.043779 | 0.025346 | 0.033794 | 0.403379 | 0.319662 | 0.282028 | 0.260369 | 0.254531 | 0.238863 | 0 | 0.008166 | 0.280354 | 11,061 | 284 | 80 | 38.947183 | 0.809673 | 0.126752 | 0 | 0.262009 | 0 | 0 | 0.162417 | 0.071346 | 0 | 0 | 0 | 0.003521 | 0.131004 | 1 | 0.065502 | false | 0.144105 | 0.061135 | 0 | 0.144105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7457d920cfc6444acc0e2d57941c84cbbc902130 | 370 | py | Python | metoo/db/redis.py | Kevin-Huang-NZ/fastapi_face_recognition | ef0811260a79c441816e007f1f9ca2a0597fc055 | [
"MIT"
] | null | null | null | metoo/db/redis.py | Kevin-Huang-NZ/fastapi_face_recognition | ef0811260a79c441816e007f1f9ca2a0597fc055 | [
"MIT"
] | null | null | null | metoo/db/redis.py | Kevin-Huang-NZ/fastapi_face_recognition | ef0811260a79c441816e007f1f9ca2a0597fc055 | [
"MIT"
] | null | null | null | from aioredis import Redis, from_url
from core.config import settings
async def init_redis_pool() -> Redis:
    if settings.USE_REDIS_SENTINEL:
        # Sentinel support is not implemented yet; fail loudly here instead
        # of falling through to the unbound `redis` name below.
        raise NotImplementedError("Redis Sentinel support is not implemented")
else:
redis = await from_url(
settings.REDIS_URL,
password=settings.REDIS_PASSWORD,
encoding="utf-8",
db=settings.REDIS_DB,
)
return redis | 23.125 | 45 | 0.618919 | 44 | 370 | 5 | 0.545455 | 0.177273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003922 | 0.310811 | 370 | 16 | 46 | 23.125 | 0.858824 | 0 | 0 | 0 | 0 | 0 | 0.013477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.153846 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
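# --- Editor's sketch, not part of the original file: one way init_redis_pool
# might be wired into the FastAPI app. The import path and app layout are
# assumptions; only init_redis_pool comes from the code above.
from fastapi import FastAPI

from db.redis import init_redis_pool  # assumed import path

app = FastAPI()


@app.on_event("startup")
async def startup() -> None:
    # Keep one shared client on app.state for request handlers to reuse.
    app.state.redis = await init_redis_pool()


@app.on_event("shutdown")
async def shutdown() -> None:
    await app.state.redis.close()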