hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
432dc8e7d82bd1c645808fd3279cfe61b574c76b | 96 | py | Python | venv/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/c1/3f/37/3c78815910a494bfa72c9d7ef2c936077c81234e91b1ed47d7572b3ac2 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a53fba53a10a337837eb28d1835d4b111ed9a90 | 84 | py | Python | majority/__init__.py | iterait/cxflow-examples | e1c8e5a5e0cfe3abe92971748ac7f2c2a3673823 | [
"MIT"
] | null | null | null | majority/__init__.py | iterait/cxflow-examples | e1c8e5a5e0cfe3abe92971748ac7f2c2a3673823 | [
"MIT"
] | 3 | 2019-09-06T11:37:18.000Z | 2019-09-10T11:01:07.000Z | majority/__init__.py | iterait/emloop-examples | e1c8e5a5e0cfe3abe92971748ac7f2c2a3673823 | [
"MIT"
] | null | null | null | from .majority_net import MajorityNet
from .majority_dataset import MajorityDataset
| 28 | 45 | 0.880952 | 10 | 84 | 7.2 | 0.7 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 84 | 2 | 46 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a83d7b19357049fb9d526a519ffb98f7895119f | 5,519 | py | Python | modules/feedback/tests/unit/test_feedback_field.py | heolin123/funcrowd | 20167783de208394c09ed0429a5f02ec6dd79c42 | [
"MIT"
] | null | null | null | modules/feedback/tests/unit/test_feedback_field.py | heolin123/funcrowd | 20167783de208394c09ed0429a5f02ec6dd79c42 | [
"MIT"
] | 11 | 2019-11-12T23:26:45.000Z | 2021-06-10T17:37:23.000Z | modules/feedback/tests/unit/test_feedback_field.py | heolin123/funcrowd | 20167783de208394c09ed0429a5f02ec6dd79c42 | [
"MIT"
] | null | null | null | import pytest
from tasks.models import Task
from modules.feedback.models.fields import (
VoteRanking, AnnotationsCount, ReferenceValue,
NERReferenceValue)
@pytest.mark.django_db
def test_vote_ranking(task_with_items, users):
user1, user2, user3 = users
task = Task.objects.first()
item = task.items.first()
annotation_field = item.template.annotations_fields.first()
field = VoteRanking(annotation_field.name)
item = task.items.get(order=0)
votes = {
user1: {1: 0.33, 2: 0.67},
user2: {1: 0.33, 2: 0.67},
user3: {1: 0.33, 2: 0.67}
}
for annotation in item.annotations.exclude(user=None):
for key, value in field.evaluate(annotation).items():
assert round(value, 2) == votes[annotation.user][key]
item = task.items.get(order=1)
votes = {
user1: {4: 1.0},
user2: {4: 1.0},
user3: {4: 1.0}
}
for annotation in item.annotations.exclude(user=None):
for key, value in field.evaluate(annotation).items():
assert round(value, 2) == votes[annotation.user][key]
item = task.items.get(order=2)
votes = {
user1: {3: 0.33, 6: 0.33, 9: 0.33},
user2: {3: 0.33, 6: 0.33, 9: 0.33},
user3: {3: 0.33, 6: 0.33, 9: 0.33},
}
for annotation in item.annotations.exclude(user=None):
for key, value in field.evaluate(annotation).items():
assert round(value, 2) == votes[annotation.user][key]
item = task.items.get(order=3)
votes = {
user1: {9: 0.33, 12: 0.67},
user2: {9: 0.33, 12: 0.67},
user3: {9: 0.33, 12: 0.67},
}
for annotation in item.annotations.exclude(user=None):
for key, value in field.evaluate(annotation).items():
assert round(value, 2) == votes[annotation.user][key]
@pytest.mark.django_db
def test_vote_ranking_data_source(task_with_items_data_source, users):
user1, user2, user3 = users
task = Task.objects.first()
item = task.items.first()
annotation_field = item.template.annotations_fields.first()
field = VoteRanking(annotation_field.name)
item = task.items.get(order=0)
votes = {
user1: {1: 0.33, 2: 0.33, "<OTHER>": 0.33},
user2: {1: 0.33, 2: 0.33, "<OTHER>": 0.33},
user3: {1: 0.33, 2: 0.33, "<OTHER>": 0.33},
}
for annotation in item.annotations.exclude(user=None):
for key, value in field.evaluate(annotation).items():
assert round(value, 2) == votes[annotation.user][key]
@pytest.mark.django_db
def test_annotations_count(task_with_items, users):
user1, user2, user3 = users
task = Task.objects.first()
item = task.items.first()
annotation_field = item.template.annotations_fields.first()
field = AnnotationsCount(annotation_field.name)
item = task.items.get(order=0)
votes = {
user1: 2,
user2: 2,
user3: 2,
}
for annotation in item.annotations.exclude(user=None):
assert field.evaluate(annotation) == votes[annotation.user]
item = task.items.get(order=1)
votes = {
user1: 2,
user2: 2,
user3: 2,
}
for annotation in item.annotations.exclude(user=None):
assert field.evaluate(annotation) == votes[annotation.user]
@pytest.mark.django_db
def test_reference_value(task_with_items, users):
user1, user2, user3 = users
task = Task.objects.first()
item = task.items.first()
annotation_field = item.template.annotations_fields.first()
field = ReferenceValue(annotation_field.name)
item = task.items.get(order=0)
votes = {
user1: [2],
user2: [2],
user3: [2],
}
for annotation in item.annotations.exclude(user=None):
assert field.evaluate(annotation) == votes[annotation.user]
item = task.items.get(order=1)
votes = {
user1: [4],
user2: [4],
user3: [4],
}
for annotation in item.annotations.exclude(user=None):
assert field.evaluate(annotation) == votes[annotation.user]
item = task.items.get(order=2)
votes = {
user1: set([3, 9]),
user2: set([3, 9]),
user3: set([3, 9]),
}
for annotation in item.annotations.exclude(user=None):
assert set(field.evaluate(annotation)) == votes[annotation.user]
@pytest.mark.django_db
def test_ner_reference_value(task_with_ner_items, users):
user1, _, _ = users
task = Task.objects.first()
item = task.items.first()
annotation_field = item.template.annotations_fields.first()
evaluator = NERReferenceValue(annotation_field.name)
item = task.items.get(order=0)
annotation = item.annotations.get(user=user1)
result = evaluator.evaluate(annotation)
    assert isinstance(result, list)
assert len(result) == 2
correct = 0
for row in result:
correct += row['is_correct']
assert correct == 2
assert set(result[0].keys()) == {'annotation', 'is_correct', 'reference', 'text'}
item = task.items.get(order=1)
annotation = item.annotations.get(user=user1)
result = evaluator.evaluate(annotation)
assert len(result) == 2
correct = 0
for row in result:
correct += row['is_correct']
assert correct == 1
item = task.items.get(order=2)
annotation = item.annotations.get(user=user1)
result = evaluator.evaluate(annotation)
assert len(result) == 2
correct = 0
for row in result:
correct += row['is_correct']
assert correct == 0
| 28.744792 | 85 | 0.623664 | 728 | 5,519 | 4.659341 | 0.101648 | 0.021226 | 0.068986 | 0.061321 | 0.873526 | 0.873526 | 0.84316 | 0.84316 | 0.836675 | 0.784493 | 0 | 0.052494 | 0.237181 | 5,519 | 191 | 86 | 28.895288 | 0.753207 | 0 | 0 | 0.635762 | 0 | 0 | 0.01522 | 0 | 0 | 0 | 0 | 0 | 0.119205 | 1 | 0.033113 | false | 0 | 0.019868 | 0 | 0.05298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
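The VoteRanking assertions in the tests above expect each user's answer to be scored by the fraction of users who voted for it (e.g. `{1: 0.33, 2: 0.67}` for a 2-vs-1 split). A minimal standalone sketch of that idea, independent of the project's Django models (`vote_ranking` here is a hypothetical helper for illustration, not the module's actual `VoteRanking` class):

```python
from collections import Counter

def vote_ranking(votes_by_user):
    """Score each answer by the fraction of users who voted for it."""
    counts = Counter(votes_by_user.values())
    total = len(votes_by_user)
    return {answer: n / total for answer, n in counts.items()}

# A 2-vs-1 split mirrors the {1: 0.33, 2: 0.67} expectations in the tests.
scores = vote_ranking({"user1": 2, "user2": 2, "user3": 1})
print({k: round(v, 2) for k, v in scores.items()})  # {2: 0.67, 1: 0.33}
```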
4aa41949d01fd4a289fcee425c963f9134ee3b31 | 7,183 | py | Python | checkio/Scientific Expedition/Open Labyrinth/open_labyrinth.py | KenMercusLai/checkio | c7702221e1bc0b0b30425859ffa6c09722949d65 | [
"MIT"
] | 39 | 2015-02-09T13:24:12.000Z | 2019-05-16T17:51:19.000Z | checkio/Scientific Expedition/Open Labyrinth/open_labyrinth.py | KenMercusLai/checkio | c7702221e1bc0b0b30425859ffa6c09722949d65 | [
"MIT"
] | 1 | 2019-10-21T16:18:14.000Z | 2019-10-21T16:18:14.000Z | checkio/Scientific Expedition/Open Labyrinth/open_labyrinth.py | KenMercusLai/checkio | c7702221e1bc0b0b30425859ffa6c09722949d65 | [
"MIT"
] | 22 | 2015-01-30T18:00:05.000Z | 2021-05-22T02:57:23.000Z | import heapq
from collections import defaultdict
def shortestPath(graph, start, end):
queue = [(0, start, [])]
seen = set()
    while queue:
        (cost, v, path) = heapq.heappop(queue)
        if v not in seen:
            path = path + [v]
            seen.add(v)
            if v == end:
                return cost, path
            for (next, c) in graph[v].items():
                heapq.heappush(queue, (cost + c, next, path))
    return None  # no path exists between start and end
def checkio(maze_map):
connect_map = defaultdict(dict)
for row in range(len(maze_map)):
for col in range(len(maze_map[0])):
# only collect states of path cells
if maze_map[row][col] == 0:
# N
if row - 1 > 0 and maze_map[row - 1][col] == 0:
connect_map[(row, col)][(row - 1, col)] = 1
connect_map[(row - 1, col)][(row, col)] = 1
# S
if row + 1 < len(maze_map) and maze_map[row + 1][col] == 0:
connect_map[(row, col)][(row + 1, col)] = 1
connect_map[(row + 1, col)][(row, col)] = 1
# E
if col + 1 < len(maze_map[row]) and maze_map[row][col + 1] == 0:
connect_map[(row, col)][(row, col + 1)] = 1
connect_map[(row, col + 1)][(row, col)] = 1
# W
if col - 1 > 0 and maze_map[row][col - 1] == 0:
connect_map[(row, col)][(row, col - 1)] = 1
connect_map[(row, col - 1)][(row, col)] = 1
steps, path = shortestPath(connect_map, (1, 1), (10, 10))
path.append((10, 10))
steps += 1
directions = []
for i in range(1, steps):
previous_step, current_step = path[i - 1], path[i]
if current_step[0] > previous_step[0]:
directions.append('S')
elif current_step[0] < previous_step[0]:
directions.append('N')
elif current_step[1] > previous_step[1]:
directions.append('E')
elif current_step[1] < previous_step[1]:
directions.append('W')
return ''.join(directions)
if __name__ == '__main__': # pragma: no cover
    # This code is used only for self-checking and is not necessary for auto-testing
def check_route(func, labyrinth):
MOVE = {"S": (1, 0), "N": (-1, 0), "W": (0, -1), "E": (0, 1)}
# copy maze
route = func([row[:] for row in labyrinth])
pos = (1, 1)
goal = (10, 10)
for i, d in enumerate(route):
move = MOVE.get(d, None)
if not move:
print("Wrong symbol in route")
return False
pos = pos[0] + move[0], pos[1] + move[1]
if pos == goal:
return True
if labyrinth[pos[0]][pos[1]] == 1:
print("Player in the pit")
return False
print("Player did not reach exit")
return False
    # These asserts are used only for self-testing as examples.
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1],
[1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1],
[1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1],
[1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1],
[1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "First maze"
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "Empty maze"
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "Up and down maze"
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "Dotted maze"
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
[1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1],
[1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1],
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "Need left maze"
assert check_route(
checkio,
[
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
[1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
],
), "The big dead end."
print("The local tests are done.")
| 38.61828 | 80 | 0.358346 | 1,313 | 7,183 | 1.926123 | 0.085301 | 0.23013 | 0.233689 | 0.245156 | 0.591538 | 0.578094 | 0.576512 | 0.576512 | 0.544089 | 0.508106 | 0 | 0.230826 | 0.433663 | 7,183 | 185 | 81 | 38.827027 | 0.390855 | 0.027844 | 0 | 0.384615 | 0 | 0 | 0.026101 | 0 | 0 | 0 | 0 | 0 | 0.035503 | 1 | 0.017751 | false | 0 | 0.011834 | 0 | 0.071006 | 0.023669 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
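The `shortestPath` helper in the maze solver above is a standard Dijkstra search over a dict-of-dicts adjacency map (`graph[u][v]` = edge cost). A self-contained sketch of the same pattern, with the empty-queue case handled explicitly; the graph and node names here are made up for illustration:

```python
import heapq

def shortest_path(graph, start, end):
    """Dijkstra over graph[u][v] = edge cost; returns (cost, path) or None."""
    queue = [(0, start, [])]
    seen = set()
    while queue:
        cost, v, path = heapq.heappop(queue)
        if v in seen:
            continue  # already settled with a cheaper cost
        path = path + [v]
        seen.add(v)
        if v == end:
            return cost, path
        for nxt, c in graph[v].items():
            heapq.heappush(queue, (cost + c, nxt, path))
    return None  # end is unreachable from start

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
print(shortest_path(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```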
4351742acc9486a8b77be4682cc2fb2606f2e1fb | 21,999 | py | Python | net/model/losses.py | sdjsngs/Cross-Epoch-Learning-for-Weakly-Supervised-Anomaly-Detection-in-Surveillance-Videos | f734db8d440f2974cb6b4234b30da6856ef62ce3 | [
"MIT"
] | 3 | 2021-07-30T04:45:08.000Z | 2022-02-23T12:44:16.000Z | net/model/losses.py | sdjsngs/Cross-Epoch-Learning-for-Weakly-Supervised-Anomaly-Detection-in-Surveillance-Videos | f734db8d440f2974cb6b4234b30da6856ef62ce3 | [
"MIT"
] | null | null | null | net/model/losses.py | sdjsngs/Cross-Epoch-Learning-for-Weakly-Supervised-Anomaly-Detection-in-Surveillance-Videos | f734db8d440f2974cb6b4234b30da6856ef62ce3 | [
"MIT"
] | 3 | 2021-07-30T09:26:45.000Z | 2022-03-16T15:31:41.000Z | """
loss function
img l2 loss
flow l1 loss
GAN loss
"""
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
def L1_loss(img_pred,img):
l1_loss=nn.L1Loss()
loss=l1_loss(img_pred,img)
return loss
def L2_loss(pred_score,label):
l2_loss=nn.MSELoss(reduction='mean')
loss=l2_loss(pred_score,label)
return loss
def BCE_loss(img_pred,img_label):
bce_loss=nn.BCELoss()
loss=bce_loss(img_pred,img_label)
return loss
def hinge_loss(abnormal_score,normal_score):
"""
hinge loss
loss=max(0,1-max(abnormal)+max(normal))
:param abnormal_score: [B,32,1]
:param normal_score: [B,32,1]
:return:
"""
abnormal_score=abnormal_score.squeeze()
normal_score=normal_score.squeeze()
max_a_value,max_a_index=torch.max(abnormal_score,dim=-1) # batch_size
max_n_value,max_n_index=torch.max(normal_score,dim=-1)
margin_1=torch.ones_like(max_a_value)
# margin_0=torch.zeros_like(max_a)
# margin_loss = nn.MarginRankingLoss()
#
# h_loss=margin_loss(max_a,max_n,margin_1)
h_loss=F.relu((margin_1 - max_a_value + max_n_value))
return h_loss,max_a_index,max_n_index
def T_1_loss(abnormal_score):
"""
smooth loss
:param abnormal_score:
:return:
"""
abnormal_score=abnormal_score.squeeze(dim=-1)
p_score=abnormal_score[:,:-1]
l_score=abnormal_score[:,1:]
# p_score=abnormal_score[:-1]
# l_score=abnormal_score[1:]
# do l2 or l1
# l1_loss=torch.sum(
# torch.abs(p_score-l_score)
# )
l2_loss=torch.sum(
torch.pow(p_score - l_score, 2), dim=-1
)
return l2_loss
def T_2_loss(abnormal_score):
"""
sparsity loss
:param abnormal_score:[30,32,1]
:return: shape [30]
"""
loss_value=torch.sum(abnormal_score.squeeze(dim=-1),dim=-1)
return loss_value
def combine_loss(abnormal_score,normal_score):
"""
combine loss
abnormal score shape in [b,t,1]
normal score shape in [b,t,1]
hyp= 8X10^-5
:return:
"""
h_loss,max_a_index,max_n_index=hinge_loss(abnormal_score,normal_score)
smooth_loss=T_1_loss(abnormal_score)
sparsity_loss=T_2_loss(abnormal_score)
hyp=0.00008
combine_loss=torch.mean(h_loss+hyp*smooth_loss+hyp*sparsity_loss)
return combine_loss,h_loss.mean(),smooth_loss.mean(),sparsity_loss.mean(),max_a_index,max_n_index
def hard_sample_loss(abnormal_score,hard_instance_score):
"""
:param abnormal_score: [30,32,1]
:param hard_instance_score: [800,1,1]
:return:
"""
abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
abnormal_score = abnormal_score.squeeze()
max_a, max_a_index = torch.max(abnormal_score, dim=1) # (30,1)
max_a_repeat=max_a.unsqueeze(dim=1).repeat(1,memory_size).permute(1,0).flatten() # shape in [memory size ,30 ]
hard_instance_score=hard_instance_score.squeeze(dim=-1).repeat(1,abnormal_size).flatten()
margin_1=torch.ones_like(max_a_repeat)
hard_loss=torch.mean(
F.relu((margin_1 - max_a_repeat + hard_instance_score))
)
return hard_loss
def hard_sample_loss_remove_one(abnormal_score,hard_instance_score):
"""
:param abnormal_score: [30,32,1]
:param hard_instance_score: [800,1,1]
:return:
"""
abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
abnormal_score = abnormal_score.squeeze()
max_a, max_a_index = torch.max(abnormal_score, dim=1) # (30,1)
max_a_repeat=max_a.unsqueeze(dim=1).repeat(1,memory_size).permute(1,0).flatten() # shape in [memory size ,30 ]
hard_instance_score=hard_instance_score.squeeze(dim=-1).repeat(1,abnormal_size).flatten()
# margin_1=torch.ones_like(max_a_repeat)
hard_loss=torch.mean(
F.relu((max_a_repeat - hard_instance_score))
)
return hard_loss
def combine_loss_hard_sample(abnormal_score,normal_score,hard_instance_score):
"""
combine loss
abnormal score shape in [B,T,1]
normal score shape in [B,T,1]
hard_instance_score in [memory_size ,1,1 ]
hyp= 8X10^-5
:return:
"""
# abnormal score and
h_loss,max_a_index,max_n_index=hinge_loss(abnormal_score,normal_score)
smooth_loss=T_1_loss(abnormal_score)
sparsity_loss=T_2_loss(abnormal_score)
hard_loss=hard_sample_loss_remove_one(abnormal_score,hard_instance_score)
# min the hard score
hard_min_score=torch.mean(hard_instance_score.squeeze())
hyp=0.00008
combine_loss=torch.mean(h_loss+hyp*smooth_loss+hyp*sparsity_loss)+hard_loss+hard_min_score
return combine_loss,h_loss.mean(),smooth_loss.mean(),sparsity_loss.mean(),hard_loss,hard_min_score #,max_a_index,max_n_index
def combine_loss_1_hard_sample(abnormal_score,normal_score,hard_instance_score):
"""
combine loss
abnormal score
normal score
hyp= 8X10^-5
plus loss 1
:return:
"""
# abnormal score and
h_loss,max_a_index,max_n_index=hinge_loss(abnormal_score,normal_score)
smooth_loss=T_1_loss(abnormal_score)
sparsity_loss=T_2_loss(abnormal_score)
hard_loss=hard_sample_loss(abnormal_score,hard_instance_score)
hyp=0.00008
combine_loss=torch.mean(h_loss+hyp*smooth_loss+hyp*sparsity_loss)+hard_loss
return combine_loss,h_loss.mean(),smooth_loss.mean(),sparsity_loss.mean(),hard_loss#,max_a_index,max_n_index
def combine_loss_2_hard_sample(abnormal_score,normal_score,hard_instance_score):
"""
combine loss
abnormal score
normal score
hyp= 8X10^-5
plus loss 2
:return:
"""
# abnormal score and
h_loss,max_a_index,max_n_index=hinge_loss(abnormal_score,normal_score)
smooth_loss=T_1_loss(abnormal_score)
sparsity_loss=T_2_loss(abnormal_score)
# hard_loss=hard_sample_loss(abnormal_score,hard_instance_score)
# min the hard score
hard_min_score = torch.mean(hard_instance_score.squeeze())
hyp=0.00008
combine_loss=torch.mean(h_loss+hyp*smooth_loss+hyp*sparsity_loss)+hard_min_score
return combine_loss,h_loss.mean(),smooth_loss.mean(),sparsity_loss.mean(),hard_min_score#,max_a_index,max_n_index
class RegularizedLoss(torch.nn.Module):
"""
||w|| regular weight
"""
def __init__(self, model, lambdas=0.001):
super(RegularizedLoss, self).__init__()
self.lambdas = lambdas
self.model = model
def forward(self, y_pred, y_true):
# loss
# Our loss is defined with respect to l2 regularization, as used in the original keras code
fc1_params = torch.cat(tuple([x.view(-1) for x in self.model.fc1.parameters()]))
fc2_params = torch.cat(tuple([x.view(-1) for x in self.model.fc2.parameters()]))
fc3_params = torch.cat(tuple([x.view(-1) for x in self.model.fc3.parameters()]))
l1_regularization = self.lambdas * torch.norm(fc1_params, p=2)
l2_regularization = self.lambdas * torch.norm(fc2_params, p=2)
l3_regularization = self.lambdas * torch.norm(fc3_params, p=2)
regular_loss=l1_regularization + l2_regularization + l3_regularization
return regular_loss
def SRF_loss(pred_score,pseudo_y,euc_dis,video_label="Abnormal"):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
# Lr mse loss in pred_score and
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
    elif video_label in ["Abnormal"]:
        L_c = 1.0/(euc_dis+1e-8)
    else:
        raise NotImplementedError(
            "No supported type for video_label: {}".format(video_label)
        )
    total_loss=L_r+hyp*L_c
return total_loss,L_r,L_c
def SRF_hard_hinge_loss(abnormal_score,hard_instance_score):
"""
:param abnormal_score: [T]
:param hard_instance_score: [800]
:return:
"""
max_a, max_a_index = torch.max(abnormal_score, dim=0) # (1)
# abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
#
# abnormal_score = abnormal_score.squeeze()
#
#
# max_a, max_a_index = torch.max(abnormal_score, dim=1) # (30,1)
max_a_repeat=max_a.repeat(memory_size) # shape in [memory size]
assert max_a_repeat.shape[0] ==hard_instance_score.shape[0]
margin_1=torch.ones_like(max_a_repeat)
hard_loss=torch.mean(
F.relu((margin_1 - max_a_repeat + hard_instance_score))
)
return hard_loss
def SRF_hard_hinge_loss_remove_one(abnormal_score,hard_instance_score):
"""
:param abnormal_score: [T]
:param hard_instance_score: [800]
:return:
"""
max_a, max_a_index = torch.max(abnormal_score, dim=0) # (1)
# abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
#
# abnormal_score = abnormal_score.squeeze()
#
#
# max_a, max_a_index = torch.max(abnormal_score, dim=1) # (30,1)
max_a_repeat=max_a.repeat(memory_size) # shape in [memory size]
assert max_a_repeat.shape[0] ==hard_instance_score.shape[0]
margin_1=torch.ones_like(max_a_repeat)*0.9
hard_loss=torch.mean(
F.relu((margin_1-max_a_repeat + hard_instance_score))
)
return hard_loss
def SRF_hard_hinge_loss_dynamic_margin(abnormal_score,hard_instance_score,margin_value):
"""
:param abnormal_score: [T]
:param hard_instance_score: [800]
:return:
"""
max_a, max_a_index = torch.max(abnormal_score, dim=0) # (1)
# abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
# abnormal_score = abnormal_score.squeeze()
#
#
# max_a, max_a_index = torch.max(abnormal_score, dim=1) # (30,1)
max_a_repeat=max_a.repeat(memory_size) # shape in [memory size]
assert max_a_repeat.shape[0] ==hard_instance_score.shape[0]
margin_1=torch.ones_like(max_a_repeat)*margin_value
hard_loss=torch.mean(
F.relu((margin_1-max_a_repeat + hard_instance_score))
)
return hard_loss
def SRF_hard_hinge_loss_dynamic_margin_2(abnormal_score,hard_instance_score,margin_value):
"""
:param abnormal_score: [B,T]
:param hard_instance_score: [M,1]
:return:
"""
max_a, max_a_index = torch.max(abnormal_score, dim=1) # max_a [B,1]
# abnormal_size=abnormal_score.shape[0]
memory_size=hard_instance_score.shape[0]
max_a_repeat=max_a.unsqueeze(dim=1).repeat(1,memory_size) # shape in [B,memory size]
hard_instance_score_repeat=hard_instance_score.repeat(1,abnormal_score.shape[0]).permute(1,0)
assert max_a_repeat.shape[0] ==hard_instance_score_repeat.shape[0]
margin_1=torch.ones_like(max_a_repeat)*margin_value
hard_loss=torch.mean(
F.relu((margin_1-max_a_repeat + hard_instance_score_repeat))
)
return hard_loss
def SRF_loss_combine(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal"):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
# Lr mse loss in pred_score and
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
hard_hinge_loss = SRF_hard_hinge_loss_remove_one(pred_score, pred_hard_score)
    else:
        raise NotImplementedError(
            "No supported type for video_label: {}".format(video_label)
        )
hard_score_loss=torch.mean(pred_hard_score)
total_loss=L_r+hyp*L_c+hard_hinge_loss+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss,hard_score_loss
def SRF_loss_combine_dynamic_margin(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal",margin_value=1):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
FMB loss with dynamic margin
    margin list in [0.6,0.7,0.8,0.9,1.0]
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
# Lr mse loss in pred_score and
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
hard_hinge_loss = SRF_hard_hinge_loss_dynamic_margin(pred_score, pred_hard_score,margin_value)
    else:
        raise NotImplementedError(
            "No supported type for video_label: {}".format(video_label)
        )
hard_score_loss=torch.mean(pred_hard_score)
total_loss=L_r+hyp*L_c+hard_hinge_loss+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss,hard_score_loss
def SRF_loss_combine_dynamic_margin_warm_up(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal",margin_value=1,warmup_=True):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
FMB loss with dynamic margin
maring list in [0.6,0.7,0.8,0.9,1.0]
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
    # L_r: MSE loss between pred_score and the pseudo labels
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
hard_hinge_loss = SRF_hard_hinge_loss_dynamic_margin(pred_score, pred_hard_score,margin_value)
else: raise NotImplementedError(
"No supported type for videl_label:{}".format(video_label)
)
hard_score_loss=torch.mean(pred_hard_score)
if warmup_:
total_loss=L_r+hyp*L_c
else:
total_loss=L_r+hyp*L_c+hard_hinge_loss+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss,hard_score_loss
def SRF_loss_combine_dynamic_margin_warm_up_2(
pred_score_abnormal,pseudo_y_abnormal,euc_dis_abnormal,
pred_score_normal,pseudo_y_normal,euc_dis_normal,
pred_hard_score,margin_value=1,warmup_=True):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
FMB loss with dynamic margin
maring list in [0.6,0.7,0.8,0.9,1.0]
pred_score_abnormal in [B,T,1]
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
# if pred_score_abnormal.ndim==3:
# pred_score_abnormal=pred_score_abnormal.squeeze(dim=-1)
# if pred_score_normal.ndim==3:
# pred_score_normal=pred_score_normal.squeeze(dim=-1)
    # L_r: MSE loss between pred_score and the pseudo labels
L_r_abnormal=L2_loss(pred_score_abnormal,pseudo_y_abnormal)
L_r_normal = L2_loss(pred_score_normal, pseudo_y_normal)
L_r=L_r_abnormal+L_r_normal
    # clamp each normal-video distance elementwise at the upper bound
    # (the plain Python conditional only works for single-element tensors)
    euc_dis_normal = torch.min(euc_dis_normal, upper_bound_alpha)
L_c_normal =torch.mean(euc_dis_normal)
L_c_abnormal = torch.mean(1.0 / (euc_dis_abnormal + 1e-8))
L_c=L_c_abnormal+L_c_normal
hard_hinge_loss = SRF_hard_hinge_loss_dynamic_margin(pred_score_abnormal, pred_hard_score, margin_value)
# if video_label in ["Normal"]:
# L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
# hard_hinge_loss=torch.tensor([0.0]).cuda()
# elif video_label in ["Abnormal"]:
# L_c = 1.0/(euc_dis+1e-8)
# hard_hinge_loss = SRF_hard_hinge_loss_dynamic_margin(pred_score, pred_hard_score,margin_value)
#
#
# else: raise NotImplementedError(
# "No supported type for videl_label:{}".format(video_label)
# )
hard_score_loss=torch.mean(pred_hard_score)
if warmup_:
total_loss=L_r+hyp*L_c
else:
total_loss=L_r+hyp*L_c+hard_hinge_loss+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss,hard_score_loss
def SRF_loss_1_dynamic_margin_warm_up(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal",margin_value=1,warmup_=True):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
FMB loss with dynamic margin
maring list in [0.6,0.7,0.8,0.9,1.0]
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
    # L_r: MSE loss between pred_score and the pseudo labels
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
hard_hinge_loss = SRF_hard_hinge_loss_dynamic_margin(pred_score, pred_hard_score,margin_value)
else: raise NotImplementedError(
"No supported type for videl_label:{}".format(video_label)
)
hard_score_loss=torch.mean(pred_hard_score)
if warmup_:
total_loss=L_r+hyp*L_c
else:
total_loss=L_r+hyp*L_c+hard_hinge_loss #+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss #,hard_score_loss
def SRF_loss_1(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal"):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
    # L_r: MSE loss between pred_score and the pseudo labels
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
hard_hinge_loss = SRF_hard_hinge_loss(pred_score, pred_hard_score)
else: raise NotImplementedError(
"No supported type for videl_label:{}".format(video_label)
)
# hard_score_loss=torch.mean(pred_hard_score)
total_loss=L_r+hyp*L_c+hard_hinge_loss #+hard_score_loss
return total_loss,L_r,L_c,hard_hinge_loss
def SRF_loss_2(pred_score,pseudo_y,euc_dis,pred_hard_score,video_label="Abnormal"):
"""
loss in A Self-Reasoning Framework for Anomaly Detection Using Video-Level Labels
:param pred_score:
:param pseudo_y:
:param euc_dis:
:param pred_hard_score: shape in [800]
:param video_label: Abnormal or Normal
:return:
"""
upper_bound_alpha=torch.tensor([1.0]).cuda()
hyp=0.05
    # L_r: L2 loss between pred_score and the pseudo labels
L_r=L2_loss(pred_score,pseudo_y)
if video_label in ["Normal"]:
L_c=euc_dis if euc_dis<upper_bound_alpha else upper_bound_alpha
# hard_hinge_loss=torch.tensor([0.0]).cuda()
elif video_label in ["Abnormal"]:
L_c = 1.0/(euc_dis+1e-8)
# hard_hinge_loss = SRF_hard_hinge_loss(pred_score, pred_hard_score)
else: raise NotImplementedError(
"No supported type for videl_label:{}".format(video_label)
)
hard_score_loss=torch.mean(pred_hard_score)*2
total_loss=L_r+hyp*L_c+hard_score_loss
return total_loss,L_r,L_c,hard_score_loss
_LOSSES={
# "MSE":L2_loss,
"COMBINE_LOSS":combine_loss,
"HARD_COMBINE_LOSS":combine_loss_hard_sample,
"HARD_LOSS_1":combine_loss_1_hard_sample,
"HARD_LOSS_2":combine_loss_2_hard_sample,
"SRF_LOSS":SRF_loss, # plus loss1 loss2 combine
"SRF_LOSS_1":SRF_loss_1,
"SRF_LOSS_2":SRF_loss_2,
"SRF_LOSS_COMBINE":SRF_loss_combine,
"SRF_LOSS_COMBINE_DYNAMIC_MARGIN":SRF_loss_combine_dynamic_margin,
"SRF_LOSS_COMBINE_DYNAMIC_MARGIN_WARMUP":SRF_loss_combine_dynamic_margin_warm_up,
"SRF_LOSS_COMBINE_DYNAMIC_MARGIN_WARMUP_version2":SRF_loss_combine_dynamic_margin_warm_up_2,
"SRF_LOSS_1_DYNAMIC_MARGIN_WARMUP":SRF_loss_1_dynamic_margin_warm_up,
}
def get_loss_func(loss_name):
    if loss_name not in _LOSSES:
raise NotImplementedError(
"loss {} is not in supported".format(loss_name)
)
return _LOSSES[loss_name]
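The registry lookup above follows a common name-to-callable pattern; a minimal self-contained sketch (the entries here are illustrative stand-ins, not the real loss functions):

```python
# Illustrative name-to-callable registry, same shape as _LOSSES/get_loss_func.
_DEMO_LOSSES = {"SUM": sum, "MAX": max}

def get_demo_func(name):
    # Fail fast with a clear message when the name is not registered.
    if name not in _DEMO_LOSSES:
        raise NotImplementedError("loss {} is not supported".format(name))
    return _DEMO_LOSSES[name]
```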
if __name__=="__main__":
print("loss func")
# batch size in 30
# feature [batch_size,32,4096]
# normal and abnormal shape in [30,32,1]
# pred score shape in [batch_size,32]
# memory bank feature in shape[memory_size,]
# upper_bound_alpha = torch.tensor([3.0])
# print(upper_bound_alpha)
# euc_dis_normal = torch.tensor([2.0, 1, 5, 1, 15, 6])
# euc_size = euc_dis_normal.shape[0]
# for e in range(euc_size):
# euc_dis_normal[e] = euc_dis_normal[e] if euc_dis_normal[e] < upper_bound_alpha else upper_bound_alpha
#
#
pred=torch.rand(size=[30,64,1])
hard_score=torch.rand(size=[800,1])
print(hard_score.repeat(1,156).shape)
loss=SRF_hard_hinge_loss_dynamic_margin_2(pred,hard_score,1)
print(loss)
from __future__ import annotations
from typing import TYPE_CHECKING, Dict, Union
import jaydebeapi
import pytest
from local_data_api.exceptions import BadRequestException
from local_data_api.models import ColumnMetadata, ExecuteStatementResponse, Field
from local_data_api.resources.jdbc.mysql import MySQLJDBC
from tests.test_resource.test_resource import helper_default_test_field
DATABASE_SETTINGS: Dict[str, Dict[str, Union[str, int]]] = {
'SQLite': {'host': '', 'port': None, 'user_name': None, 'password': None}
}
@pytest.fixture
def mocked_connection(mocker):
connection_mock = mocker.Mock()
return connection_mock
@pytest.fixture
def mocked_cursor(mocked_connection, mocker):
cursor_mock = mocker.Mock()
mocked_connection.cursor.side_effect = [cursor_mock]
return cursor_mock
def test_execute_insert(mocked_connection, mocked_cursor, mocker):
mocked_cursor.description = ''
mocked_cursor.rowcount = 1
mocked_cursor.fetchone.side_effect = [[0]]
dummy = MySQLJDBC(mocked_connection)
assert dummy.execute(
"insert into users values (1, 'abc')"
) == ExecuteStatementResponse(numberOfRecordsUpdated=1, generatedFields=[])
mocked_cursor.execute.assert_has_calls(
[
mocker.call('SELECT LAST_INSERT_ID(NULL)'),
mocker.call("insert into users values (1, 'abc')"),
mocker.call('SELECT LAST_INSERT_ID()'),
]
)
mocked_cursor.close.assert_called_once_with()
mocked_cursor = mocker.Mock()
mocked_connection.cursor.side_effect = [mocked_cursor]
mocked_cursor.description = ''
mocked_cursor.rowcount = 1
mocked_cursor.fetchone.side_effect = [[0]]
assert dummy.execute(
"insert into users values (1, 'abc')"
) == ExecuteStatementResponse(numberOfRecordsUpdated=1, generatedFields=[])
mocked_cursor.execute.assert_has_calls(
[
mocker.call('SELECT LAST_INSERT_ID(NULL)'),
mocker.call("insert into users values (1, 'abc')"),
mocker.call('SELECT LAST_INSERT_ID()'),
]
)
mocked_cursor.close.assert_called_once_with()
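The `mocker` fixture above comes from pytest-mock; the same `side_effect` and call-assertion mechanics are available from the stdlib `unittest.mock` (a minimal sketch, not tied to the real cursor API):

```python
from unittest.mock import Mock, call

cursor = Mock()
cursor.fetchone.side_effect = [[0]]  # queue one return value for fetchone()
cursor.execute("SELECT LAST_INSERT_ID(NULL)")
cursor.execute("insert into users values (1, 'abc')")
assert cursor.fetchone() == [0]      # side_effect values are consumed in order
cursor.execute.assert_has_calls([
    call("SELECT LAST_INSERT_ID(NULL)"),
    call("insert into users values (1, 'abc')"),
])
```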
def test_execute_insert_with_generated_field(mocked_connection, mocked_cursor, mocker):
mocked_cursor.description = ''
mocked_cursor.rowcount = 1
mocked_cursor.fetchone.side_effect = [[1]]
dummy = MySQLJDBC(mocked_connection)
assert dummy.execute(
"insert into users (name) values ('abc')"
) == ExecuteStatementResponse(
numberOfRecordsUpdated=1, generatedFields=[Field(longValue=1)]
)
mocked_cursor.execute.assert_has_calls(
[
mocker.call('SELECT LAST_INSERT_ID(NULL)'),
mocker.call("insert into users (name) values ('abc')"),
mocker.call('SELECT LAST_INSERT_ID()'),
]
)
mocked_cursor.close.assert_called_once_with()
def test_execute_insert_with_params(mocked_connection, mocked_cursor, mocker):
mocked_cursor.description = ''
mocked_cursor.rowcount = 1
mocked_cursor.fetchone.side_effect = [[0]]
dummy = MySQLJDBC(mocked_connection)
assert dummy.execute(
"insert into users values (:id, :name)", {'id': 1, 'name': 'abc'}
) == ExecuteStatementResponse(numberOfRecordsUpdated=1, generatedFields=[])
mocked_cursor.execute.assert_has_calls(
[
mocker.call('SELECT LAST_INSERT_ID(NULL)'),
mocker.call("insert into users values (1, 'abc')"),
mocker.call('SELECT LAST_INSERT_ID()'),
]
)
mocked_cursor.close.assert_called_once_with()
def test_execute_select(mocked_connection, mocked_cursor, mocker):
mocked_cursor.description = 1, 1, 1, 1, 1, 1, 1
mocked_cursor.fetchall.side_effect = [((1, 'abc'),)]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
dummy.create_column_metadata_set = create_column_metadata_set_mock = mocker.Mock()
create_column_metadata_set_mock.side_effect = [
[
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=1,
name=1,
precision=5,
scale=6,
tableName=None,
type=None,
typeName=None,
),
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=8,
name=8,
precision=12,
scale=13,
tableName=None,
type=None,
typeName=None,
),
]
]
assert dummy.execute("select * from users",) == ExecuteStatementResponse(
numberOfRecordsUpdated=0,
records=[[dummy.get_field_from_value(1), dummy.get_field_from_value('abc')]],
)
mocked_cursor.execute.assert_has_calls(
[mocker.call('SELECT LAST_INSERT_ID(NULL)'), mocker.call('select * from users')]
)
mocked_cursor.close.assert_called_once_with()
def test_execute_select_with_include_metadata(mocked_connection, mocked_cursor, mocker):
meta_mock = mocker.Mock()
mocked_cursor._meta = meta_mock
mocked_cursor.description = (1, 2, 3, 4, 5, 6, 7), (8, 9, 10, 11, 12, 13, 14)
mocked_cursor.fetchall.side_effect = [((1, 'abc'),)]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
dummy.create_column_metadata_set = create_column_metadata_set_mock = mocker.Mock()
create_column_metadata_set_mock.side_effect = [
[
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=1,
name=1,
precision=5,
scale=6,
tableName=None,
type=None,
typeName=None,
),
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=8,
name=8,
precision=12,
scale=13,
tableName=None,
type=None,
typeName=None,
),
]
]
assert dummy.execute(
"select * from users", include_result_metadata=True
) == ExecuteStatementResponse(
numberOfRecordsUpdated=0,
records=[[dummy.get_field_from_value(1), dummy.get_field_from_value('abc')]],
columnMetadata=[
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=1,
name=1,
precision=5,
scale=6,
tableName=None,
type=None,
typeName=None,
),
ColumnMetadata(
arrayBaseColumnType=0,
isAutoIncrement=False,
isCaseSensitive=False,
isCurrency=False,
isSigned=False,
label=8,
name=8,
precision=12,
scale=13,
tableName=None,
type=None,
typeName=None,
),
],
)
create_column_metadata_set_mock.assert_called_once_with(mocked_cursor)
mocked_cursor.execute.assert_has_calls(
[mocker.call('SELECT LAST_INSERT_ID(NULL)'), mocker.call('select * from users')]
)
mocked_cursor.close.assert_called_once_with()
def test_execute_exception_1(mocked_connection, mocked_cursor, mocker):
error = jaydebeapi.DatabaseError('error_message')
error.args = ['error_message']
mocked_cursor.execute.side_effect = [0, error]
mocked_connection.cursor.side_effect = [mocked_cursor]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
with pytest.raises(BadRequestException) as e:
dummy.execute("select * from users")
assert e.value.message == 'error_message'
mocked_cursor.execute.assert_has_calls([mocker.call('SELECT LAST_INSERT_ID(NULL)')])
mocked_cursor.close.assert_called_once_with()
def test_execute_exception_2(mocked_connection, mocked_cursor, mocker):
error = jaydebeapi.DatabaseError('error')
cause = mocker.Mock()
cause.cause.message = 'cause_error_message'
inner_error = mocker.Mock()
inner_error.args = [cause]
error.args = [inner_error]
mocked_cursor.execute.side_effect = [0, error]
mocked_connection.cursor.side_effect = [mocked_cursor]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
with pytest.raises(BadRequestException) as e:
dummy.execute("select * from users")
assert e.value.message == 'cause_error_message'
mocked_cursor.execute.assert_has_calls([mocker.call('SELECT LAST_INSERT_ID(NULL)')])
mocked_cursor.close.assert_called_once_with()
def test_execute_exception_3(mocked_connection, mocked_cursor, mocker):
mocked_connection.cursor.side_effect = [jaydebeapi.DatabaseError()]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
with pytest.raises(BadRequestException):
dummy.execute("select * from users")
mocked_cursor.close.assert_not_called()
def test_execute_exception_4(mocked_connection, mocked_cursor, mocker):
error = jaydebeapi.DatabaseError('error')
inner_error = mocker.Mock()
inner_error.args = ['inner_error_message']
error.args = [inner_error]
mocked_cursor.execute.side_effect = [0, error]
mocked_connection.cursor.side_effect = [mocked_cursor]
dummy = MySQLJDBC(mocked_connection, transaction_id='123')
with pytest.raises(BadRequestException) as e:
dummy.execute("select * from users")
assert e.value.message == 'inner_error_message'
mocked_cursor.execute.assert_has_calls([mocker.call('SELECT LAST_INSERT_ID(NULL)')])
mocked_cursor.close.assert_called_once_with()
def test_from_value(mocker) -> None:
connection_mock = mocker.Mock()
dummy = MySQLJDBC(connection_mock)
    class BigInteger:
        def __init__(self, val: str):
            self._val: str = val

        def __str__(self) -> str:
            return self._val
helper_default_test_field(dummy)
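Note that Python requires `__str__` to return a `str`; returning anything else makes `str()` raise `TypeError`, which is why the mock `BigInteger` above must store and return a string. A quick demonstration:

```python
class BadStr:
    def __str__(self):
        return 5  # not a str: calling str(BadStr()) raises TypeError

raised = False
try:
    str(BadStr())
except TypeError:
    raised = True
```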
from .extrinsics import *
from .intrinsics import *
from .image import *
from .timestamps import *
from .gt import *
from .offsets import *
# Copyright European Organization for Nuclear Research (CERN)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# You may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Authors:
# - Vincent Garonne, <vincent.garonne@cern.ch>, 2012-2015
# - Mario Lassnig, <mario.lassnig@cern.ch>, 2013-2014, 2017
# - Martin Barisits, <martin.barisits@cern.ch>, 2013-2019
# - Cedric Serfon, <cedric.serfon@cern.ch>, 2015-2019
# - Hannes Hansen, <hannes.jakob.hansen@cern.ch>, 2019
# - Robert Illingworth, <illingwo@fnal.gov>, 2019
# - Andrew Lister, <andrew.lister@stfc.ac.uk>, 2019
#
# PY3K COMPATIBLE
import string
import random
import json
from nose.tools import assert_is_instance, assert_in, assert_not_in, assert_raises, assert_equal
import rucio.api.rule
from rucio.api.account import add_account
from rucio.client.accountclient import AccountClient
from rucio.client.lockclient import LockClient
from rucio.client.didclient import DIDClient
from rucio.client.ruleclient import RuleClient
from rucio.client.subscriptionclient import SubscriptionClient
from rucio.common.utils import generate_uuid as uuid
from rucio.common.exception import (RuleNotFound, AccessDenied, InsufficientAccountLimit, DuplicateRule, RSEBlacklisted, RSEOverQuota,
RuleReplaceFailed, ManualRuleApprovalBlocked, InputValidationError, UnsupportedOperation)
from rucio.common.types import InternalAccount, InternalScope
from rucio.daemons.judge.evaluator import re_evaluator
from rucio.core.did import add_did, attach_dids, set_status
from rucio.core.lock import get_replica_locks, get_dataset_locks, successful_transfer
from rucio.core.account import add_account_attribute, get_usage
from rucio.core.account_limit import set_account_limit
from rucio.core.request import get_request_by_did
from rucio.core.replica import add_replica, get_replica
from rucio.core.rse import add_rse_attribute, add_rse, update_rse, get_rse_id, del_rse_attribute, set_rse_limits
from rucio.core.rse_counter import get_counter as get_rse_counter
from rucio.core.rule import add_rule, get_rule, delete_rule, add_rules, update_rule, reduce_rule, move_rule, list_rules
from rucio.daemons.abacus.account import account_update
from rucio.daemons.abacus.rse import rse_update
from rucio.db.sqla import models
from rucio.db.sqla.constants import DIDType, OBSOLETE, RuleState, LockState
from rucio.db.sqla.session import transactional_session
from rucio.tests.common import rse_name_generator, account_name_generator
def create_files(nrfiles, scope, rse_id, bytes=1):
"""
Creates a number of test files and add replicas to rse
:param nrfiles: Number of files to create
:param scope: Scope to create the files in
:param rse_id: RSE to add the replica to
:param bytes: Bytes of each file
:returns: List of dict
"""
files = []
jdoe = InternalAccount('jdoe')
for i in range(nrfiles):
file = 'file_%s' % uuid()
if isinstance(rse_id, list):
for r in rse_id:
add_replica(rse_id=r, scope=scope, name=file, bytes=bytes, account=jdoe)
else:
add_replica(rse_id=rse_id, scope=scope, name=file, bytes=bytes, account=jdoe)
files.append({'scope': scope, 'name': file, 'bytes': bytes})
return files
def tag_generator(size=8, chars=string.ascii_uppercase):
return ''.join(random.choice(chars) for x in range(size))
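Since `tag_generator` draws from the global `random` state, seeding makes the generated tags reproducible across test runs; a self-contained usage sketch:

```python
import random
import string

def tag_generator(size=8, chars=string.ascii_uppercase):
    # Same helper as above; shown with a fixed seed for reproducible tags.
    return ''.join(random.choice(chars) for _ in range(size))

random.seed(42)
tag = tag_generator()
```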
@transactional_session
def check_dataset_ok_callback(scope, name, rse, rse_id, rule_id, session=None):
callbacks = session.query(models.Message.id).filter(models.Message.payload == json.dumps({'scope': scope.external,
'name': name,
'rse': rse,
'rse_id': rse_id,
'rule_id': rule_id})).all()
if len(callbacks) > 0:
return True
return False
@transactional_session
def check_rule_progress_callback(scope, name, progress, rule_id, session=None):
callbacks = session.query(models.Message.id).filter(models.Message.payload == json.dumps({'scope': scope.external,
'name': name,
'rule_id': rule_id,
'progress': progress})).all()
if callbacks:
return True
return False
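The callback checks above match the stored payload against a freshly serialized `json.dumps` string, which only works when both sides serialize keys in the same order. A sketch of a more order-insensitive check, comparing parsed objects instead (assuming the payload is a JSON object):

```python
import json

def payload_matches(payload_str, expected):
    # Compare parsed JSON objects instead of serialized strings, so that
    # key order in the stored payload does not matter.
    return json.loads(payload_str) == expected

stored = json.dumps({'name': 'ds', 'scope': 'mock'})
```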
class TestReplicationRuleCore():
@classmethod
def setUpClass(cls):
# Add test RSE
cls.rse1 = 'MOCK'
cls.rse3 = 'MOCK3'
cls.rse4 = 'MOCK4'
cls.rse5 = 'MOCK5'
cls.rse1_id = get_rse_id(rse=cls.rse1)
cls.rse3_id = get_rse_id(rse=cls.rse3)
cls.rse4_id = get_rse_id(rse=cls.rse4)
cls.rse5_id = get_rse_id(rse=cls.rse5)
# Add Tags
cls.T1 = tag_generator()
cls.T2 = tag_generator()
add_rse_attribute(cls.rse1_id, cls.T1, True)
add_rse_attribute(cls.rse3_id, cls.T1, True)
add_rse_attribute(cls.rse4_id, cls.T2, True)
add_rse_attribute(cls.rse5_id, cls.T1, True)
# Add fake weights
add_rse_attribute(cls.rse1_id, "fakeweight", 10)
add_rse_attribute(cls.rse3_id, "fakeweight", 0)
add_rse_attribute(cls.rse4_id, "fakeweight", 0)
add_rse_attribute(cls.rse5_id, "fakeweight", 0)
# Add quota
cls.jdoe = InternalAccount('jdoe')
cls.root = InternalAccount('root')
set_account_limit(cls.jdoe, cls.rse1_id, -1)
set_account_limit(cls.jdoe, cls.rse3_id, -1)
set_account_limit(cls.jdoe, cls.rse4_id, -1)
set_account_limit(cls.jdoe, cls.rse5_id, -1)
set_account_limit(cls.root, cls.rse1_id, -1)
set_account_limit(cls.root, cls.rse3_id, -1)
set_account_limit(cls.root, cls.rse4_id, -1)
set_account_limit(cls.root, cls.rse5_id, -1)
def test_add_rule_file_none(self):
""" REPLICATION RULE (CORE): Add a replication rule on a group of files, NONE Grouping"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
add_rule(dids=files, account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
# Check if the Locks are created properly
t1 = set([self.rse1_id, self.rse1_id, self.rse3_id, self.rse5_id])
for file in files:
rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
assert(len(t1.intersection(rse_locks)) > 0)
assert_not_in(self.rse4_id, rse_locks)
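The assertion pattern used here and in the tests below boils down to set intersections between the RSEs matched by the expression and the RSEs actually holding locks; a pure-Python sketch with illustrative names:

```python
# Illustrative RSE names, not real identifiers.
t1 = {"RSE1", "RSE3", "RSE5"}   # RSEs matched by the T1 expression
rse_locks = {"RSE1", "RSE5"}    # RSEs that actually hold replica locks
```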
def test_add_rule_dataset_none(self):
""" REPLICATION RULE (CORE): Add a replication rule on a dataset, NONE Grouping"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
# Add a first rule to the DS
add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
# Add a second rule and check if the right locks are created
add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression='%s|%s' % (self.T1, self.T2), grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
# Check if the Locks are created properly
t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
for file in files:
rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
assert(len(t1.intersection(rse_locks)) == 2)
assert_not_in(self.rse4_id, rse_locks)
def test_add_rule_duplicate(self):
""" REPLICATION RULE (CORE): Add a replication rule duplicate"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
# Add a first rule to the DS
add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
# Add a second rule and check if the right locks are created
assert_raises(DuplicateRule, add_rule, dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
def test_add_rules_datasets_none(self):
""" REPLICATION RULE (CORE): Add replication rules to multiple datasets, NONE Grouping"""
scope = InternalScope('mock')
files1 = create_files(3, scope, self.rse4_id)
dataset1 = 'dataset_' + str(uuid())
add_did(scope, dataset1, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset1, files1, self.jdoe)
files2 = create_files(3, scope, self.rse4_id)
dataset2 = 'dataset_' + str(uuid())
add_did(scope, dataset2, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset2, files2, self.jdoe)
# Add the rules to both DS
add_rules(dids=[{'scope': scope, 'name': dataset1}, {'scope': scope, 'name': dataset2}],
rules=[{'account': self.jdoe,
'copies': 1,
'rse_expression': self.T1,
'grouping': 'NONE',
'weight': None,
'lifetime': None,
'locked': False,
'subscription_id': None},
{'account': self.root,
'copies': 1,
'rse_expression': self.T1,
'grouping': 'NONE',
'weight': 'fakeweight',
'lifetime': None,
'locked': False,
'subscription_id': None}])
# Check if the Locks are created properly
for file in files1:
rse_locks = [lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])]
assert(rse_locks[0] == rse_locks[1])
for file in files2:
rse_locks = [lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])]
assert(rse_locks[0] == rse_locks[1])
def test_add_rule_container_none(self):
""" REPLICATION RULE (CORE): Add a replication rule on a container, NONE Grouping"""
scope = InternalScope('mock')
container = 'container_' + str(uuid())
add_did(scope, container, DIDType.from_sym('CONTAINER'), self.jdoe)
all_files = []
for i in range(3):
files = create_files(3, scope, self.rse1_id)
all_files.extend(files)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
attach_dids(scope, container, [{'scope': scope, 'name': dataset}], self.jdoe)
add_rule(dids=[{'scope': scope, 'name': container}], account=self.jdoe, copies=1, rse_expression=self.T2, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
for file in all_files:
rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
assert_in(self.rse4_id, rse_locks)
assert_not_in(self.rse5_id, rse_locks)
def test_add_rule_dataset_all(self):
""" REPLICATION RULE (CORE): Add a replication rule on a dataset, ALL Grouping"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)
# Check if the Locks are created properly
t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
first_locks = None
for file in files:
if first_locks is None:
first_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
assert(len(t1.intersection(rse_locks)) == 2)
assert(len(first_locks.intersection(rse_locks)) == 2)
# Check if the DatasetLocks are created properly
dataset_locks = [lock for lock in get_dataset_locks(scope=scope, name=dataset)]
assert(len(t1.intersection(set([lock['rse_id'] for lock in dataset_locks]))) == 2)
assert(len(first_locks.intersection(set([lock['rse_id'] for lock in dataset_locks]))) == 2)

    def test_add_rule_container_all(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a container, ALL Grouping"""
        scope = InternalScope('mock')
        container = 'container_' + str(uuid())
        add_did(scope, container, DIDType.from_sym('CONTAINER'), self.jdoe)
        all_files = []
        for i in range(3):
            files = create_files(3, scope, self.rse1_id)
            all_files.extend(files)
            dataset = 'dataset_' + str(uuid())
            add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
            attach_dids(scope, dataset, files, self.jdoe)
            attach_dids(scope, container, [{'scope': scope, 'name': dataset}], self.jdoe)

        add_rule(dids=[{'scope': scope, 'name': container}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)

        # Check if the Locks are created properly
        t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
        first_locks = None
        for file in all_files:
            if first_locks is None:
                first_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
            rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
            assert(len(t1.intersection(rse_locks)) == 2)
            assert(len(first_locks.intersection(rse_locks)) == 2)

    def test_add_rule_requests(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a dataset, DATASET Grouping"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)

        # Check if the Locks are created properly
        t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
        first_locks = None
        for file in files:
            if first_locks is None:
                first_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
            rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
            assert(len(t1.intersection(rse_locks)) == 2)
            assert(len(first_locks.intersection(rse_locks)) == 2)

        # Check if the DatasetLocks are created properly
        dataset_locks = [lock for lock in get_dataset_locks(scope=scope, name=dataset)]
        assert(len(t1.intersection(set([lock['rse_id'] for lock in dataset_locks]))) == 2)
        assert(len(first_locks.intersection(set([lock['rse_id'] for lock in dataset_locks]))) == 2)

    def test_add_rule_dataset_dataset(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a dataset and check if requests are created"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse5, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)

        # Check that a transfer request exists for every file; raises if missing
        for file in files:
            get_request_by_did(scope=file['scope'], name=file['name'], rse_id=self.rse5_id)

    def test_add_rule_container_dataset(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a container, DATASET Grouping"""
        scope = InternalScope('mock')
        container = 'container_' + str(uuid())
        add_did(scope, container, DIDType.from_sym('CONTAINER'), self.jdoe)
        all_files = []
        dataset_files = []
        for i in range(3):
            files = create_files(3, scope, self.rse1_id)
            all_files.extend(files)
            dataset = 'dataset_' + str(uuid())
            add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
            attach_dids(scope, dataset, files, self.jdoe)
            attach_dids(scope, container, [{'scope': scope, 'name': dataset}], self.jdoe)
            dataset_files.append({'scope': scope, 'name': dataset, 'files': files})

        add_rule(dids=[{'scope': scope, 'name': container}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)

        t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
        for dataset in dataset_files:
            first_locks = None
            for file in dataset['files']:
                if first_locks is None:
                    first_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
                rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
                assert(len(t1.intersection(rse_locks)) == 2)
                assert(len(first_locks.intersection(rse_locks)) == 2)

    def test_add_rule_dataset_none_with_weights(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a dataset, NONE Grouping, WEIGHTS"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)

        # Check if the Locks are created properly
        t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
        for file in files:
            rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
            assert(len(t1.intersection(rse_locks)) == 2)
            assert_in(self.rse1_id, rse_locks)

    def test_add_rule_container_dataset_with_weights(self):
        """ REPLICATION RULE (CORE): Add a replication rule on a container, DATASET Grouping, WEIGHTS"""
        scope = InternalScope('mock')
        container = 'container_' + str(uuid())
        add_did(scope, container, DIDType.from_sym('CONTAINER'), self.jdoe)
        all_files = []
        dataset_files = []
        for i in range(3):
            files = create_files(3, scope, self.rse1_id)
            all_files.extend(files)
            dataset = 'dataset_' + str(uuid())
            add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
            attach_dids(scope, dataset, files, self.jdoe)
            attach_dids(scope, container, [{'scope': scope, 'name': dataset}], self.jdoe)
            dataset_files.append({'scope': scope, 'name': dataset, 'files': files})

        add_rule(dids=[{'scope': scope, 'name': container}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='DATASET', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)

        t1 = set([self.rse1_id, self.rse3_id, self.rse5_id])
        for dataset in dataset_files:
            first_locks = None
            for file in dataset['files']:
                if first_locks is None:
                    first_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
                rse_locks = set([lock['rse_id'] for lock in get_replica_locks(scope=file['scope'], name=file['name'])])
                assert(len(t1.intersection(rse_locks)) == 2)
                assert(len(first_locks.intersection(rse_locks)) == 2)
                assert_in(self.rse1_id, rse_locks)

    def test_get_rule(self):
        """ REPLICATION RULE (CORE): Test to get a previously created rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
        assert(rule_id == get_rule(rule_id)['id'].replace('-', '').lower())
        assert_raises(RuleNotFound, get_rule, uuid())

    def test_delete_rule(self):
        """ REPLICATION RULE (CORE): Test to delete a previously created rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='DATASET', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
        delete_rule(rule_id)
        for file in files:
            rse_locks = get_replica_locks(scope=file['scope'], name=file['name'])
            assert(len(rse_locks) == 0)
        assert_raises(RuleNotFound, delete_rule, uuid())

    def test_delete_rule_and_cancel_transfers(self):
        """ REPLICATION RULE (CORE): Test that deleting a rule does not cancel overlapping transfers"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)
        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=3, rse_expression=self.T1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)
        delete_rule(rule_id_1)

        # The locks of the two remaining rules (2 + 3 copies) must survive
        for file in files:
            rse_locks = get_replica_locks(scope=file['scope'], name=file['name'])
            assert(len(rse_locks) == 5)
        # TODO Need to check transfer queue here, this is actually not the check of this test case
        assert_raises(RuleNotFound, delete_rule, uuid())

    def test_locked_rule(self):
        """ REPLICATION RULE (CORE): Delete a locked replication rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='NONE', weight='fakeweight', lifetime=None, locked=True, subscription_id=None)[0]
        assert_raises(UnsupportedOperation, delete_rule, rule_id_1)
        update_rule(rule_id=rule_id_1, options={'locked': False})
        delete_rule(rule_id=rule_id_1)

    def test_account_counter_rule_create(self):
        """ REPLICATION RULE (CORE): Test if the account counter is updated correctly when a new rule is created"""
        account_update(once=True)
        account_counter_before = get_usage(self.rse1_id, self.jdoe)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)

        # Check if the counter has been updated correctly
        account_update(once=True)
        account_counter_after = get_usage(self.rse1_id, self.jdoe)
        assert(account_counter_before['bytes'] + 3 * 100 == account_counter_after['bytes'])
        assert(account_counter_before['files'] + 3 == account_counter_after['files'])

    def test_account_counter_rule_delete(self):
        """ REPLICATION RULE (CORE): Test if the account counter is updated correctly when a rule is removed"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)[0]

        account_update(once=True)
        account_counter_before = get_usage(self.rse1_id, self.jdoe)
        delete_rule(rule_id)
        account_update(once=True)

        # Check if the counter has been updated correctly
        account_counter_after = get_usage(self.rse1_id, self.jdoe)
        assert(account_counter_before['bytes'] - 3 * 100 == account_counter_after['bytes'])
        assert(account_counter_before['files'] - 3 == account_counter_after['files'])

    def test_account_counter_rule_update(self):
        """ REPLICATION RULE (CORE): Test if the account counter is updated correctly when a rule is updated"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)[0]

        account_update(once=True)
        account_counter_before_1 = get_usage(self.rse1_id, self.jdoe)
        account_counter_before_2 = get_usage(self.rse1_id, self.root)
        update_rule(rule_id, {'account': self.root})
        account_update(once=True)

        # Check if the counter has been updated correctly
        account_counter_after_1 = get_usage(self.rse1_id, self.jdoe)
        account_counter_after_2 = get_usage(self.rse1_id, self.root)
        assert(account_counter_before_1['bytes'] - 3 * 100 == account_counter_after_1['bytes'])
        assert(account_counter_before_2['bytes'] + 3 * 100 == account_counter_after_2['bytes'])

    def test_rse_counter_unavailable_replicas(self):
        """ REPLICATION RULE (CORE): Test if creating UNAVAILABLE replicas updates the RSE Counter correctly"""
        rse_update(once=True)
        rse_counter_before = get_rse_counter(self.rse3_id)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)

        # Check if the RSE counter has been updated correctly
        rse_update(once=True)
        rse_counter_after = get_rse_counter(self.rse3_id)
        assert(rse_counter_before['bytes'] + 3 * 100 == rse_counter_after['bytes'])
        assert(rse_counter_before['files'] + 3 == rse_counter_after['files'])

    def test_rule_add_fails_account_limit(self):
        """ REPLICATION RULE (CORE): Test if adding a rule fails correctly when the account limit is exceeded"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse3_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        set_account_limit(account=self.jdoe, rse_id=self.rse3_id, bytes=5)
        assert_raises(InsufficientAccountLimit, add_rule, dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)
        set_account_limit(account=self.jdoe, rse_id=self.rse3_id, bytes=-1)

    def test_rule_add_fails_rse_limit(self):
        """ REPLICATION RULE (CORE): Test if adding a rule fails correctly when the RSE limit is reached"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        set_rse_limits(self.rse3_id, 'MaxSpaceAvailable', 250)
        try:
            assert_raises(RSEOverQuota, add_rule, dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='ALL', weight=None, lifetime=None, locked=False, subscription_id=None)
            assert_raises(RSEOverQuota, add_rule, dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)
            assert_raises(RSEOverQuota, add_rule, dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)
        finally:
            set_rse_limits(self.rse3_id, 'MaxSpaceAvailable', -1)

    def test_dataset_callback(self):
        """ REPLICATION RULE (CORE): Test dataset callback"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        set_status(scope=scope, name=dataset, open=False)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, notify='C')[0]
        successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[2]['name'], rse_id=self.rse3_id, nowait=False)

        # Check if the dataset callback has been sent
        assert(True is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

    def test_dataset_callback_no(self):
        """ REPLICATION RULE (CORE): Test dataset callback should not be sent"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        set_status(scope=scope, name=dataset, open=False)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, notify='C')[0]
        successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)

        # The callback must not be sent while a transfer is still missing
        assert(False is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

    def test_dataset_callback_close_late(self):
        """ REPLICATION RULE (CORE): Test dataset callback with late close"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, notify='C')[0]
        successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[2]['name'], rse_id=self.rse3_id, nowait=False)

        # No callback as long as the dataset is still open
        assert(False is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))
        set_status(scope=scope, name=dataset, open=False)
        assert(True is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

    def test_dataset_callback_with_evaluator(self):
        """ REPLICATION RULE (CORE): Test dataset callback with judge evaluator"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, notify='C')[0]
        assert(False is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

        attach_dids(scope, dataset, files, self.jdoe)
        set_status(scope=scope, name=dataset, open=False)
        assert(False is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

        re_evaluator(once=True)
        successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)
        successful_transfer(scope=scope, name=files[2]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_dataset_ok_callback(scope, dataset, self.rse3, self.rse3_id, rule_id))

    def test_rule_progress_callback_with_evaluator(self):
        """ REPLICATION RULE (CORE): Test rule progress callback with judge evaluator"""
        scope = InternalScope('mock')
        files = create_files(30, scope, self.rse1_id, bytes=100)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, notify='P')[0]
        assert(False is check_rule_progress_callback(scope, dataset, 0, rule_id))

        attach_dids(scope, dataset, files, self.jdoe)
        re_evaluator(once=True)
        set_status(scope=scope, name=dataset, open=False)
        assert(False is check_rule_progress_callback(scope, dataset, 0, rule_id))

        # Complete the transfers one by one and check the progress callback at each threshold
        successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
        assert(False is check_rule_progress_callback(scope, dataset, 10, rule_id))
        successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)
        assert(False is check_rule_progress_callback(scope, dataset, 10, rule_id))
        successful_transfer(scope=scope, name=files[2]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_rule_progress_callback(scope, dataset, 10, rule_id))
        successful_transfer(scope=scope, name=files[3]['name'], rse_id=self.rse3_id, nowait=False)
        assert(False is check_rule_progress_callback(scope, dataset, 20, rule_id))
        for i in range(4, 9):
            successful_transfer(scope=scope, name=files[i]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_rule_progress_callback(scope, dataset, 30, rule_id))
        for i in range(9, 18):
            successful_transfer(scope=scope, name=files[i]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_rule_progress_callback(scope, dataset, 60, rule_id))
        for i in range(18, 27):
            successful_transfer(scope=scope, name=files[i]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_rule_progress_callback(scope, dataset, 90, rule_id))
        for i in range(27, 30):
            successful_transfer(scope=scope, name=files[i]['name'], rse_id=self.rse3_id, nowait=False)
        assert(True is check_rule_progress_callback(scope, dataset, 100, rule_id))

    def test_add_rule_with_purge(self):
        """ REPLICATION RULE (CORE): Add a replication rule with purge setting"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse4, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None, purge_replicas=True)[0]
        delete_rule(rule_id)

        # Check if the replicas have been tombstoned as OBSOLETE
        for file in files:
            replica = get_replica(rse_id=self.rse4_id, scope=file['scope'], name=file['name'])
            assert(replica['tombstone'] == OBSOLETE)

    def test_add_rule_with_ignore_availability(self):
        """ REPLICATION RULE (CORE): Add a replication rule with ignore_availability setting"""
        rse = rse_name_generator()
        rse_id = add_rse(rse)
        update_rse(rse_id, {'availability_write': False})
        set_account_limit(self.jdoe, rse_id, -1)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        # Without ignore_availability the rule cannot be created on a write-blocked RSE
        with assert_raises(RSEBlacklisted):
            add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)

        add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None, ignore_availability=True)
        for file in files:
            for lock in get_replica_locks(scope=file['scope'], name=file['name']):
                assert(lock['state'] == LockState.STUCK)

    def test_delete_rule_country_admin(self):
        """ REPLICATION RULE (CORE): Delete a rule with a country admin account"""
        rse = rse_name_generator()
        rse_id = add_rse(rse)
        add_rse_attribute(rse_id, 'country', 'test')
        set_account_limit(self.jdoe, rse_id, -1)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='NONE', weight=None, lifetime=None, locked=False, subscription_id=None)[0]

        usr = account_name_generator()
        add_account(usr, 'USER', 'rucio@email.com', 'root')
        with assert_raises(AccessDenied):
            rucio.api.rule.delete_replication_rule(rule_id=rule_id, purge_replicas=None, issuer=usr)
        add_account_attribute(InternalAccount(usr), 'country-test', 'admin')
        rucio.api.rule.delete_replication_rule(rule_id=rule_id, purge_replicas=None, issuer=usr)

    def test_reduce_rule(self):
        """ REPLICATION RULE (CORE): Reduce a rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, [self.rse1_id, self.rse3_id])
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.rse1 + '|' + self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        assert(get_rule(rule_id)['state'] == RuleState.OK)

        rule_id2 = reduce_rule(rule_id=rule_id, copies=1, exclude_expression=self.rse1)
        assert(get_rule(rule_id2)['state'] == RuleState.OK)
        assert_raises(RuleNotFound, get_rule, rule_id)

        scope = InternalScope('mock')
        files = create_files(3, scope, [self.rse1_id, self.rse3_id])
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.rse1 + '|' + self.rse3 + '|' + self.rse5, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        with assert_raises(RuleReplaceFailed):
            reduce_rule(rule_id=rule_id, copies=1, exclude_expression=self.rse1 + '|' + self.rse3)

    def test_move_rule(self):
        """ REPLICATION RULE (CORE): Move a rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, [self.rse1_id])
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        assert(get_rule(rule_id)['state'] == RuleState.OK)

        rule_id2 = move_rule(rule_id, self.rse3)
        assert(get_rule(rule_id2)['state'] == RuleState.REPLICATING)
        assert(get_rule(rule_id)['child_rule_id'] == rule_id2)

    def test_add_rule_with_scratchdisk(self):
        """ REPLICATION RULE (CORE): Add a replication rule for scratchdisk"""
        rse = rse_name_generator()
        rse_id = add_rse(rse)
        add_rse_attribute(rse_id, 'type', 'SCRATCHDISK')
        set_account_limit(self.jdoe, rse_id, -1)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        # A rule on a SCRATCHDISK RSE gets an expiration date, a rule elsewhere does not
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        assert(get_rule(rule_id)['expires_at'] is not None)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        assert(get_rule(rule_id)['expires_at'] is None)

    def test_add_rule_with_auto_approval(self):
        """ REPLICATION RULE (CORE): Add a replication rule with auto approval"""
        rse = rse_name_generator()
        rse_id = add_rse(rse)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id, bytes=200)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)
        set_status(scope=scope, name=dataset, open=False)

        # Without ask_approval the rule fails on the account limit
        with assert_raises(InsufficientAccountLimit):
            add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)

        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, ask_approval=True)[0]
        assert(get_rule(rule_id)['state'] == RuleState.WAITING_APPROVAL)
        delete_rule(rule_id=rule_id)

        # 3 files * 200 bytes exceeds auto_approve_bytes=500, so the rule still waits for approval
        add_rse_attribute(rse_id, 'auto_approve_bytes', 500)
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, ask_approval=True)[0]
        assert(get_rule(rule_id)['state'] == RuleState.WAITING_APPROVAL)
        delete_rule(rule_id=rule_id)

        # Below the threshold the rule is approved automatically
        del_rse_attribute(rse_id, 'auto_approve_bytes')
        add_rse_attribute(rse_id, 'auto_approve_bytes', 1000)
        rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, ask_approval=True)[0]
        assert(get_rule(rule_id)['state'] == RuleState.INJECT)

    def test_add_rule_with_manual_approval_block(self):
        """ REPLICATION RULE (CORE): Add a replication rule for an RSE with manual approval blocked"""
        rse = rse_name_generator()
        rse_id = add_rse(rse)
        add_rse_attribute(rse_id, 'block_manual_approval', '1')
        set_account_limit(self.jdoe, rse_id, -1)

        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset = 'dataset_' + str(uuid())
        add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset, files, self.jdoe)

        with assert_raises(ManualRuleApprovalBlocked):
            add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=rse, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None, ask_approval=True)

    def test_update_rule_child_rule(self):
        """ REPLICATION RULE (CORE): Update a replication rule with a child_rule"""
        scope = InternalScope('mock')
        files = create_files(3, scope, self.rse1_id)
        dataset1 = 'dataset_' + str(uuid())
        dataset2 = 'dataset_' + str(uuid())
        add_did(scope, dataset1, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset1, files, self.jdoe)
        add_did(scope, dataset2, DIDType.from_sym('DATASET'), self.jdoe)
        attach_dids(scope, dataset2, files, self.jdoe)

        rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset1}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        rule_id_2 = add_rule(dids=[{'scope': scope, 'name': dataset2}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
        rule_id_3 = add_rule(dids=[{'scope': scope, 'name': dataset1}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]

        # A child rule must cover the same DID as the parent rule
        with assert_raises(InputValidationError):
            update_rule(rule_id_1, options={'child_rule_id': rule_id_2})
        update_rule(rule_id_1, options={'child_rule_id': rule_id_3})
        # A rule with a child rule cannot be deleted
        with assert_raises(UnsupportedOperation):
            delete_rule(rule_id_1)
def test_release_rule(self):
""" REPLICATION RULE (CORE): Test to release a parent rule after child rule is OK"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id, bytes=100)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
rule_id_2 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='DATASET', weight=None, lifetime=None, locked=False, subscription_id=None)[0]
update_rule(rule_id_1, options={'child_rule_id': rule_id_2})
with assert_raises(UnsupportedOperation):
delete_rule(rule_id_1)
successful_transfer(scope=scope, name=files[0]['name'], rse_id=self.rse3_id, nowait=False)
with assert_raises(UnsupportedOperation):
delete_rule(rule_id_1)
successful_transfer(scope=scope, name=files[1]['name'], rse_id=self.rse3_id, nowait=False)
with assert_raises(UnsupportedOperation):
delete_rule(rule_id_1)
successful_transfer(scope=scope, name=files[2]['name'], rse_id=self.rse3_id, nowait=False)
delete_rule(rule_id_1)
def test_metadata__rule(self):
""" REPLICATION RULE (CORE): Test to write wfms metadata to rule"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=2, rse_expression=self.T1, grouping='NONE',
weight='fakeweight', lifetime=None, locked=False, meta={'task_id': 55, 'job_ids': [1, 2, 3, 4]}, subscription_id=None)[0]
assert(get_rule(rule_id)['meta'] == json.dumps({'task_id': 55, 'job_ids': [1, 2, 3, 4]}))
def test_rule_on_archive(self):
""" REPLICATION RULE (CORE): Test to add a rule on a constituent should add rule on archive"""
scope = InternalScope('mock')
archive = {'scope': scope, 'name': '%s.zip' % str(uuid()), 'type': 'FILE',
'bytes': 2596, 'adler32': 'beefdead'}
add_replica(rse_id=self.rse1_id, scope=scope, name=archive['name'], bytes=2596, account=self.jdoe)
files_in_archive = [{'scope': scope, 'name': 'witrep-%i-%s' % (i, str(uuid())), 'type': 'FILE',
'bytes': 1234, 'adler32': 'deadbeef'} for i in range(2)]
attach_dids(scope, archive['name'], files_in_archive, self.jdoe)
add_rule(dids=[{'scope': scope, 'name': files_in_archive[1]['name']}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='NONE',
weight=None, lifetime=None, locked=False, subscription_id=None)
assert(len(list(list_rules(filters={'scope': scope, 'name': archive['name']}))) == 1)
# Check the same but now a replica of the constituent exists as well
scope = InternalScope('mock')
archive = {'scope': scope, 'name': '%s.zip' % str(uuid()), 'type': 'FILE',
'bytes': 2596, 'adler32': 'beefdead'}
add_replica(rse_id=self.rse1_id, scope=scope, name=archive['name'], bytes=2596, account=self.jdoe)
files_in_archive = [{'scope': scope, 'name': 'witrep-%i-%s' % (i, str(uuid())), 'type': 'FILE',
'bytes': 1234, 'adler32': 'deadbeef'} for i in range(2)]
attach_dids(scope, archive['name'], files_in_archive, self.jdoe)
add_replica(rse_id=self.rse1_id, scope=scope, name=files_in_archive[1]['name'], bytes=2596, account=self.jdoe)
add_rule(dids=[{'scope': scope, 'name': files_in_archive[1]['name']}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='NONE',
weight=None, lifetime=None, locked=False, subscription_id=None)
assert(len(list(list_rules(filters={'scope': scope, 'name': archive['name']}))) == 0)
assert(len(list(list_rules(filters={'scope': scope, 'name': files_in_archive[1]['name']}))) == 1)
class TestReplicationRuleClient():
@classmethod
def setUpClass(cls):
# Add test RSE
cls.rse1 = 'MOCK'
cls.rse3 = 'MOCK3'
cls.rse4 = 'MOCK4'
cls.rse5 = 'MOCK5'
cls.rse1_id = get_rse_id(cls.rse1)
cls.rse3_id = get_rse_id(cls.rse3)
cls.rse4_id = get_rse_id(cls.rse4)
cls.rse5_id = get_rse_id(cls.rse5)
# Add Tags
cls.T1 = tag_generator()
cls.T2 = tag_generator()
add_rse_attribute(cls.rse1_id, cls.T1, True)
add_rse_attribute(cls.rse3_id, cls.T1, True)
add_rse_attribute(cls.rse4_id, cls.T2, True)
add_rse_attribute(cls.rse5_id, cls.T1, True)
# Add fake weights
add_rse_attribute(cls.rse1_id, "fakeweight", 10)
add_rse_attribute(cls.rse3_id, "fakeweight", 0)
add_rse_attribute(cls.rse4_id, "fakeweight", 0)
add_rse_attribute(cls.rse5_id, "fakeweight", 0)
cls.jdoe = InternalAccount('jdoe')
set_account_limit(cls.jdoe, cls.rse1_id, -1)
set_account_limit(cls.jdoe, cls.rse3_id, -1)
set_account_limit(cls.jdoe, cls.rse4_id, -1)
set_account_limit(cls.jdoe, cls.rse5_id, -1)
def setup(self):
self.rule_client = RuleClient()
self.did_client = DIDClient()
self.subscription_client = SubscriptionClient()
self.account_client = AccountClient()
self.lock_client = LockClient()
def test_add_rule(self):
""" REPLICATION RULE (CLIENT): Add a replication rule and list full history """
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
ret = self.rule_client.add_replication_rule(dids=[{'scope': scope.external, 'name': dataset}], account='jdoe', copies=2, rse_expression=self.T1, grouping='NONE')
assert_is_instance(ret, list)
rep_rules = [rep_rule for rep_rule in self.rule_client.list_replication_rule_full_history(scope.external, dataset)]
assert_equal(len(rep_rules), 1)
assert_equal(ret[0], rep_rules[0]['rule_id'])
def test_delete_rule(self):
""" REPLICATION RULE (CLIENT): Delete a replication rule """
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
ret = self.rule_client.delete_replication_rule(rule_id=rule_id)
assert(ret is True)
get = self.rule_client.get_replication_rule(rule_id)
assert(get['expires_at'] is not None)
def test_list_rules_by_did(self):
""" DID (CLIENT): List Replication Rules per DID """
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
rule_id_2 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse3, grouping='NONE', weight='fakeweight', lifetime=None, locked=False, subscription_id=None)[0]
ret = self.did_client.list_did_rules(scope=scope.external, name=dataset)
ids = [rule['id'] for rule in ret]
assert_in(rule_id_1, ids)
assert_in(rule_id_2, ids)
def test_get_rule(self):
""" REPLICATION RULE (CLIENT): Get Replication Rule by id """
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
ret = self.rule_client.add_replication_rule(dids=[{'scope': scope.external, 'name': dataset}], account='jdoe', copies=2, rse_expression=self.T1, grouping='NONE')
get = self.rule_client.get_replication_rule(ret[0])
assert(ret[0] == get['id'])
def test_get_rule_by_account(self):
""" ACCOUNT (CLIENT): Get Replication Rule by account """
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
ret = self.rule_client.add_replication_rule(dids=[{'scope': scope.external, 'name': dataset}], account='jdoe', copies=2, rse_expression=self.T1, grouping='NONE')
get = self.account_client.list_account_rules('jdoe')
rules = [rule['id'] for rule in get]
assert_in(ret[0], rules)
def test_locked_rule(self):
""" REPLICATION RULE (CLIENT): Delete a locked replication rule"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='NONE', weight='fakeweight', lifetime=None, locked=True, subscription_id=None)[0]
assert_raises(UnsupportedOperation, delete_rule, rule_id_1)
self.rule_client.update_replication_rule(rule_id=rule_id_1, options={'locked': False})
delete_rule(rule_id=rule_id_1)
def test_dataset_lock(self):
""" DATASETLOCK (CLIENT): Get a datasetlock for a specific dataset"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight='fakeweight', lifetime=None, locked=True, subscription_id=None)[0]
rule_ids = [lock['rule_id'] for lock in self.lock_client.get_dataset_locks(scope=scope.external, name=dataset)]
assert_in(rule_id_1, rule_ids)
def test_change_rule_lifetime(self):
""" REPLICATION RULE (CLIENT): Change rule lifetime"""
scope = InternalScope('mock')
files = create_files(3, scope, self.rse1_id)
dataset = 'dataset_' + str(uuid())
add_did(scope, dataset, DIDType.from_sym('DATASET'), self.jdoe)
attach_dids(scope, dataset, files, self.jdoe)
rule_id_1 = add_rule(dids=[{'scope': scope, 'name': dataset}], account=self.jdoe, copies=1, rse_expression=self.rse1, grouping='DATASET', weight='fakeweight', lifetime=150, locked=True, subscription_id=None)[0]
get = self.rule_client.get_replication_rule(rule_id_1)
self.rule_client.update_replication_rule(rule_id_1, options={'lifetime': 10000})
get2 = self.rule_client.get_replication_rule(rule_id_1)
assert(get['expires_at'] != get2['expires_at'])
# File: tests/vmss/test_vmss_fetcher.py (repo: proofdock/chaos-azure, license: Apache-2.0)
from unittest.mock import patch
import pytest
from chaoslib.exceptions import InterruptExecution

import pdchaosazure
from pdchaosazure.vmss.fetcher import fetch_vmss, fetch_instances
from tests.data import vmss_provider


@patch('pdchaosazure.vmss.fetcher.fetch_resources', autospec=True)
def test_succesful_fetch_vmss(mocked_fetch_vmss):
    scale_set = vmss_provider.provide_scale_set()
    scale_sets = [scale_set]
    mocked_fetch_vmss.return_value = scale_sets

    result = fetch_vmss(None, None, None)

    assert len(result) == 1
    assert result[0].get('name') == 'chaos-pool'


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_succesful_fetch_instances_without_instance_criteria(mocked_fetch_instances):
    instance = vmss_provider.provide_instance()
    instances = [instance]
    mocked_fetch_instances.return_value = instances
    scale_set = vmss_provider.provide_scale_set()

    result = fetch_instances(scale_set, None, None)

    assert len(result) == 1
    assert result[0].get('name') == 'chaos-pool_0'
    assert result[0].get('instance_id') == '0'


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_happily_fetch_empty_list_instances_with_empty_instance_filter(mocked_fetch_instances):
    mocked_fetch_instances.return_value = []
    scale_set = vmss_provider.provide_scale_set()

    result = fetch_instances(scale_set, None, None)

    assert len(result) == 0


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_happily_fetch_instances_with_instance_filter_for_instance0(mocked_fetch_instances):
    # arrange
    instance_0 = vmss_provider.provide_instance()
    instance_0['instance_id'] = '0'
    instance_1 = vmss_provider.provide_instance()
    instance_1['instance_id'] = '1'
    instance_2 = vmss_provider.provide_instance()
    instance_2['instance_id'] = '2'
    instances = [instance_0, instance_1, instance_2]
    mocked_fetch_instances.return_value = instances
    scale_set = vmss_provider.provide_scale_set()

    # fire
    result = fetch_instances(scale_set, "where instance_id=='0'", None)

    # assert
    assert len(result) == 1
    assert result[0].get('name') == 'chaos-pool_0'
    assert result[0].get('instance_id') == '0'


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_happily_fetch_instances_with_instance_filter_for_instance0_or_instance_2(mocked_fetch_instances):
    # arrange
    instance_0 = vmss_provider.provide_instance()
    instance_0['instance_id'] = '0'
    instance_0['name'] = 'chaos-pool_0'
    instance_1 = vmss_provider.provide_instance()
    instance_1['instance_id'] = '1'
    instance_1['name'] = 'chaos-pool_1'
    instance_2 = vmss_provider.provide_instance()
    instance_2['instance_id'] = '2'
    instance_2['name'] = 'chaos-pool_2'
    instances = [instance_0, instance_1, instance_2]
    mocked_fetch_instances.return_value = instances
    scale_set = vmss_provider.provide_scale_set()

    # fire
    result = fetch_instances(scale_set, "where instance_id=='0' or instance_id=='2'", None)

    # assert
    assert len(result) == 2
    assert result[0].get('name') == 'chaos-pool_0'
    assert result[0].get('instance_id') == '0'
    assert result[1].get('name') == 'chaos-pool_2'
    assert result[1].get('instance_id') == '2'


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_happily_fetch_instances_with_instance_filter_for_all_instances(mocked_fetch_instances):
    # arrange
    instance_0 = vmss_provider.provide_instance()
    instance_0['instance_id'] = '0'
    instance_0['name'] = 'chaos-pool_0'
    instance_1 = vmss_provider.provide_instance()
    instance_1['instance_id'] = '1'
    instance_1['name'] = 'chaos-pool_1'
    instance_2 = vmss_provider.provide_instance()
    instance_2['instance_id'] = '2'
    instance_2['name'] = 'chaos-pool_2'
    instances = [instance_0, instance_1, instance_2]
    mocked_fetch_instances.return_value = instances
    scale_set = vmss_provider.provide_scale_set()

    # fire
    result = fetch_instances(scale_set, "top 3", None)

    # assert
    assert len(result) == 3
    assert result[0].get('name') == 'chaos-pool_0'
    assert result[0].get('instance_id') == '0'
    assert result[1].get('name') == 'chaos-pool_1'
    assert result[1].get('instance_id') == '1'
    assert result[2].get('name') == 'chaos-pool_2'
    assert result[2].get('instance_id') == '2'


@patch.object(pdchaosazure.vmss.fetcher, 'fetch_all_vmss_instances', autospec=True)
def test_sadly_fetch_instances_with_invalid_instance_criteria(mocked_fetch_instances):
    # arrange
    instance_0 = vmss_provider.provide_instance()
    instance_0['instance_id'] = '0'
    instance_1 = vmss_provider.provide_instance()
    instance_1['instance_id'] = '1'
    instance_2 = vmss_provider.provide_instance()
    instance_2['instance_id'] = '2'
    instances = [instance_0, instance_1, instance_2]
    mocked_fetch_instances.return_value = instances
    scale_set = vmss_provider.provide_scale_set()

    # fire
    with pytest.raises(InterruptExecution) as x:
        fetch_instances(scale_set, "invalid filter query syntax", None)
    assert "invalid query" in x.value
# File: tests/test_options.py (repo: fruch/nose-timeout, license: MIT)
import os
import unittest
from optparse import OptionParser

from nose.config import Config

from distributed_nose.plugin import DistributedNose


class TestOptionValidation(unittest.TestCase):

    def setUp(self):
        self.plugin = DistributedNose()
        self.parser = OptionParser()

    def test_defaults(self):
        self.plugin.options(self.parser, env={})
        args = []
        options, _ = self.parser.parse_args(args)

        self.assertEqual(options.distributed_node_number, 1)
        self.assertEqual(options.distributed_nodes, 1)
        self.assertEqual(options.distributed_hash_by_class, False)

    def test_vanilla(self):
        self.plugin.options(self.parser, env={})
        args = ['--nodes=4', '--node-number=3']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertEqual(self.plugin.node_count, 4)
        self.assertEqual(self.plugin.node_id, 3)
        self.assertEqual(self.plugin.hash_by_class, False)
        self.assertTrue(self.plugin.enabled)

    def test_env_configs(self):
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4,
               'NOSE_HASH_BY_CLASS': 'yes'}
        self.plugin.options(self.parser, env=env)
        options, _ = self.parser.parse_args([])
        self.plugin.configure(options, Config())

        self.assertEqual(self.plugin.node_count, 6)
        self.assertEqual(self.plugin.node_id, 4)
        self.assertEqual(self.plugin.hash_by_class, True)
        self.assertTrue(self.plugin.enabled)

    def test_hash_by_class_via_flag(self):
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = ['--hash-by-class']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertEqual(self.plugin.hash_by_class, True)
        self.assertTrue(self.plugin.enabled)

    def test_disable_via_flag(self):
        env = {'NOSE_NODES': 6, 'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = ['--distributed-disabled']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertFalse(self.plugin.enabled)

    def test_integer_required_count(self):
        self.plugin.options(self.parser, env={})
        args = ['--nodes=foo', '--node-number=1']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertFalse(self.plugin.enabled)

    def test_integer_required_id(self):
        self.plugin.options(self.parser, env={})
        args = ['--nodes=2', '--node-number=baz']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertFalse(self.plugin.enabled)

    def test_id_in_range(self):
        self.plugin.options(self.parser, env={})
        args = ['--nodes=2', '--node-number=3']
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertFalse(self.plugin.enabled)

    def test_lpt_via_flag(self):
        LPT_DATA_FILEPATH = os.path.join(
            os.path.dirname(__file__),
            'lpt_data',
            'lpt_all.json'
        )
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = [
            '--algorithm=least-processing-time',
            '--lpt-data={}'.format(LPT_DATA_FILEPATH),
            '--hash-by-class'
        ]
        options, _ = self.parser.parse_args(args)
        self.plugin.configure(options, Config())

        self.assertEqual(
            self.plugin.algorithm,
            DistributedNose.ALGORITHM_LEAST_PROCESSING_TIME
        )
        self.assertTrue(self.plugin.enabled)

    def test_lpt_no_data_arg_aborts(self):
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = [
            '--algorithm=least-processing-time'
        ]
        options, _ = self.parser.parse_args(args)

        # TODO: make compatible with python 2.6 ?
        with self.assertRaises(AssertionError):
            self.plugin.configure(options, Config())

    def test_lpt_missing_data_file_aborts(self):
        LPT_DATA_FILEPATH = os.path.join(
            os.path.dirname(__file__),
            'no_such_file.json'
        )
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = [
            '--algorithm=least-processing-time',
            '--lpt-data={}'.format(LPT_DATA_FILEPATH)
        ]
        options, _ = self.parser.parse_args(args)

        # TODO: make compatible with python 2.6 ?
        with self.assertRaises(IOError):
            self.plugin.configure(options, Config())

    def test_lpt_invalid_json_file_aborts(self):
        LPT_DATA_FILEPATH = os.path.join(
            os.path.dirname(__file__),
            'lpt_data',
            'lpt_invalid_json.json'
        )
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = [
            '--algorithm=least-processing-time',
            '--lpt-data={}'.format(LPT_DATA_FILEPATH)
        ]
        options, _ = self.parser.parse_args(args)

        # TODO: make compatible with python 2.6 ?
        with self.assertRaises(ValueError):
            self.plugin.configure(options, Config())

    def test_lpt_invalid_data_format_aborts(self):
        LPT_DATA_FILEPATH = os.path.join(
            os.path.dirname(__file__),
            'lpt_data',
            'lpt_invalid_data.json',
        )
        env = {'NOSE_NODES': 6,
               'NOSE_NODE_NUMBER': 4}
        self.plugin.options(self.parser, env=env)
        args = [
            '--algorithm=least-processing-time',
            '--lpt-data={}'.format(LPT_DATA_FILEPATH),
            '--hash-by-class'
        ]
        options, _ = self.parser.parse_args(args)

        # TODO: make compatible with python 2.6 ?
        with self.assertRaises(KeyError):
            self.plugin.configure(options, Config())
# File: koreto/grids.py (repo: xvdp/koreto, license: MIT)
""" mesh grids
"""
import numpy as np
from koreto import WITH_TORCH
if WITH_TORCH:
import torch
# pylint: disable=no-member
def mgrid(shape, dtype="float32", shift=0.5, flip_columns=True, layout=1, form="torch"):
""" fast nd mgrid: not transposing means contiguity requires no fixing
Args
shape (tuple, list) any number of dimensions
dtype torch.dtype [torch.float32]
shift float [0.5]
flip_columns bool [True]: col[0] corresponds to shape[-1]
layout int [1]: [..., dims] 0: [dims, ...]
"""
if not WITH_TORCH or form[0] == "n":
return np_mgrid(shape, dtype=dtype, shift=shift, flip_columns=flip_columns, layout=layout)
dtype = dtype if isinstance(dtype, torch.dtype) else torch.__dict__[dtype]
with torch.no_grad():
_layout = (*shape, len(shape)) if layout else (len(shape), *shape)
out = torch.ones(_layout, dtype=dtype)
for i, side in enumerate(shape):
view = [1] * len(shape)
view[i] = side
col = i if not flip_columns else len(shape)-i-1
if layout:
out[..., col].mul_(torch.arange(shift, side+shift, 1,
dtype=dtype).view(*view))
else:
out[col, ...].mul_(torch.arange(shift, side+shift, 1,
dtype=dtype).view(*view))
return out
def mgrid_pos(idx, shape, shift=0.5, dtype="float32", flip_columns=True, layout=1, form="torch"):
""" return dtype [float32] mesh grid positions for input flat indices
Args:
idx flat indices of mgrid position
shape tuple
shift float [0.5] pixel center
dtype torch.dtype [torch.float32]
flip_columns bool[True] reverse column order
layout int [1]: [N, dims] 0: [dims, N]
"""
if not WITH_TORCH or form[0] == "n":
return np_mgrid_pos(idx, shape, shift=shift, dtype=dtype, flip_columns=flip_columns, layout=layout)
dtype = dtype if isinstance(dtype, torch.dtype) else torch.__dict__[dtype]
idx = torch.as_tensor(idx, dtype=dtype)
_layout = (len(idx), len(shape)) if layout else (len(shape), len(idx))
shape = torch.asarray(shape)
out = torch.ones(_layout, dtype=dtype)
for i, side in enumerate(shape):
col = i if not flip_columns else len(shape)-i-1
view = [1] * len(shape)
view[i] = side
if layout:
out[..., col].mul_((idx//torch.prod(shape[i+1:]))%shape[i])
else:
out[col, ...].mul_((idx//torch.prod(shape[i+1:]))%shape[i])
return out + shift
##
# numpy versions
def np_mgrid(shape, dtype="float32", shift=0.5, flip_columns=True, layout=1):
""" fast nd mgrid: not transposing means contiguity requires no fixing
Args
shape (tuple, list) any number of dimensions
dtype np.dtype [np.float32]
shift float [0.5]
flip_columns bool [True]: col[0] corresponds to shape[-1]
layout int [1]: [..., dims] 0: [dims, ...]
"""
dtype = dtype if isinstance(dtype, np.dtype) else np.__dict__[dtype]
_layout = (*shape, len(shape)) if layout else (len(shape), *shape)
out = np.ones(_layout, dtype=dtype)
for i, side in enumerate(shape):
view = [1] * len(shape)
view[i] = side
col = i if not flip_columns else len(shape)-i-1
if layout:
out[..., col] *= np.arange(shift, side+shift, 1, dtype=dtype).reshape(*view)
else:
out[col, ...] *= np.arange(shift, side+shift, 1, dtype=dtype).reshape(*view)
return out
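As an illustrative sanity check (not part of the library), the grid that `np_mgrid` builds with its defaults (`layout=1`, `flip_columns=True`, `shift=0.5`) matches a reference construction from `np.indices` with the dims axis moved last and then reversed:

```python
import numpy as np

# Reference construction assuming np_mgrid's defaults: np.indices yields
# [dims, ...]; moving the dims axis last and reversing it makes column 0
# track shape[-1], then the 0.5 pixel-center shift is added.
grid = np.moveaxis(np.indices((2, 3)).astype("float32"), 0, -1)[..., ::-1] + 0.5

print(grid.shape)    # (2, 3, 2)
print(grid[1, 2])    # [2.5 1.5]: column 0 follows shape[-1], column 1 follows shape[0]
```

Each output position thus stores its own shifted coordinates, which is what makes the grid usable directly as sampling locations.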
def np_mgrid_pos(idx, shape, shift=0.5, dtype="float32", flip_columns=True, layout=1):
    """ return mesh grid positions for input flat indices
    Args:
        idx             flat indices of mgrid position
        shape           tuple
        shift           float [0.5] pixel center
        dtype           np.dtype [np.float32]
        flip_columns    bool [True] reverse column order
        layout          int [1]: [N, dims]  0: [dims, N]
    """
    dtype = dtype if isinstance(dtype, np.dtype) else np.__dict__[dtype]
    idx = np.asarray(idx, dtype=dtype)
    _layout = (len(idx), len(shape)) if layout else (len(shape), len(idx))
    shape = np.asarray(shape)
    out = np.ones(_layout, dtype=dtype)
    for i, side in enumerate(shape):
        col = i if not flip_columns else len(shape)-i-1
        view = [1] * len(shape)
        view[i] = side
        if layout:
            out[..., col] *= ((idx//np.prod(shape[i+1:]))%shape[i])
        else:
            out[col, ...] *= ((idx//np.prod(shape[i+1:]))%shape[i])
    return out + shift
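A small worked check of the flat-index decoding that `np_mgrid_pos` relies on: for a C-ordered grid, each flat index `k` decomposes per dimension as `(k // prod(trailing dims)) % dim`, after which columns are flipped and the pixel-center shift added. This is an illustrative sketch, not library code:

```python
import numpy as np

# Decode flat indices on a (4, 5) grid by hand: row = k // 5 % 4, col = k % 5.
shape = (4, 5)
idx = np.array([0, 7, 19], dtype="float32")
coords = np.stack([(idx // 5) % 4, idx % 5], axis=1)  # [row, col] per index
coords = coords[:, ::-1] + 0.5                        # flip columns, shift to pixel center

print(coords.tolist())   # [[0.5, 0.5], [2.5, 1.5], [4.5, 3.5]]
```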
60447525d4f67a1c91d1c8d25f843f5d9114ce3e | 38,705 | py | Python | script/dataset_cfg.py | pl8787/textnet-release | c85a4162c55b4cfe22eab6f8f0c8b615854f9b8f | [
"Apache-2.0"
] | 114 | 2017-06-14T07:05:31.000Z | 2021-06-13T05:30:49.000Z | script/dataset_cfg.py | pl8787/textnet-release | c85a4162c55b4cfe22eab6f8f0c8b615854f9b8f | [
"Apache-2.0"
] | 7 | 2017-11-17T08:16:55.000Z | 2019-10-05T00:09:20.000Z | script/dataset_cfg.py | pl8787/textnet-release | c85a4162c55b4cfe22eab6f8f0c8b615854f9b8f | [
"Apache-2.0"
] | 40 | 2017-06-15T03:21:10.000Z | 2021-10-31T15:03:30.000Z |
class DatasetCfg:
def __init__(self, dataset):
if dataset == 'mr':
self.train_data_file = '/home/wsx/data/movie_review/lstm.train.nopad'
self.valid_data_file = '/home/wsx/data/movie_review/lstm.valid.nopad'
self.test_data_file = '/home/wsx/data/movie_review/lstm.test.nopad'
self.embedding_file = '/home/wsx/data/movie_review/word_rep_w2v'
self.dp_rate = 0.5
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 10
self.test_batch_size = 10
self.max_doc_len = 56
self.vocab_size = 18766
self.num_class = 2
self.d_word_rep = 300
self.n_train = 1067 * 8
self.n_valid = 1067
self.n_test = 1067
elif dataset == 'tb_fine':
self.train_data_file = '/home/wsx/data/treebank/train.seq.allnode.unique.fine.shuffle'
self.valid_data_file = '/home/wsx/data/treebank/dev.seq.fine'
self.test_data_file = '/home/wsx/data/treebank/test.seq.fine'
self.embedding_file = '/home/wsx/data/treebank/treebank.embed.glove'
self.dp_rate = 0.5
# self.batch_size = 200
self.train_batch_size = 20
self.valid_batch_size = 10
self.test_batch_size = 10
self.max_doc_len = 56
self.vocab_size = 21701
self.num_class = 5
self.d_word_rep = 300
self.n_train = 159247
self.n_valid = 1101
self.n_test = 2210
elif dataset == 'tb_binary':
self.train_data_file = '/home/wsx/data/treebank/train.seq.allnode.unique.binary.shuffle'
self.valid_data_file = '/home/wsx/data/treebank/dev.seq.binary'
self.test_data_file = '/home/wsx/data/treebank/test.seq.binary'
self.embedding_file = '/home/wsx/data/treebank/treebank.embed.glove'
self.dp_rate = 0.5
# self.batch_size = 200
self.train_batch_size = 20
self.valid_batch_size = 10
self.test_batch_size = 10
self.max_doc_len = 56
self.vocab_size = 21701
self.num_class = 2
self.d_word_rep = 300
self.n_train = 67349
self.n_valid = 872
self.n_test = 1821
elif dataset == 'trec':
self.train_data_file = '/home/wsx/data/trec/train'
self.valid_data_file = '/home/wsx/data/trec/valid'
self.test_data_file = '/home/wsx/data/trec/test'
self.embedding_file = '/home/wsx/data/trec/word.rep'
self.dp_rate = 0.5
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 50
self.test_batch_size = 50
self.max_doc_len = 40
self.vocab_size = 9593
self.num_class = 6
self.d_word_rep = 300
self.n_train = 4952
self.n_valid = 500
self.n_test = 500
elif dataset == 'msrp_char':
self.train_data_file = '/home/wsx/data/msrp/train.char'
self.valid_data_file = '/home/wsx/data/msrp/valid.char'
self.test_data_file = '/home/wsx/data/msrp/test.char'
self.max_doc_len = 225
self.min_doc_len = 1
self.vocab_size = 128
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 100
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 50
self.test_batch_size = 50
self.n_train = 7152
self.n_valid = 500
self.n_test = 1725
self.train_display_interval = 1
self.valid_display_interval = 100
self.test_display_interval = 100
self.train_max_iters = 5000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'tf':
self.train_data_file = '/home/wsx/data/nbp/tf.train.lstm'
self.valid_data_file = '/home/wsx/data/nbp/tf.valid.lstm'
self.test_data_file = '/home/wsx/data/nbp/tf.test.lstm'
self.num_item = 7973
self.num_user = 2265
self.max_session_len = 105
self.max_context_len = 10
self.dp_rate = 0.0
self.d_user_rep = 30
self.d_item_rep = 30
self.batch_size = 1
self.train_batch_size = 1
self.valid_batch_size = 1
self.test_batch_size = 1
self.n_train = 30747
self.n_valid = 2265
self.n_test = 2265
self.train_display_interval = 1
self.valid_display_interval = 10000
self.test_display_interval = 10000
self.train_max_iters = 300000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'msrp':
self.train_data_file = '/home/wsx/data/msrp/msr_paraphrase_num_local_train_wid_dup.txt'
self.valid_data_file = '/home/wsx/data/msrp/msr_paraphrase_num_local_valid_wid.txt'
self.test_data_file = '/home/wsx/data/msrp/msr_paraphrase_num_test_wid.txt'
self.embedding_file = '/home/wsx/data/msrp/msrp.embed'
self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 33
self.min_doc_len = 5
# self.vocab_size = 15586
self.vocab_size = 50000
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 50
self.test_batch_size = 50
self.n_train = 7152
self.n_valid = 500
self.n_test = 1725
self.train_display_interval = 1
self.valid_display_interval = 100
self.test_display_interval = 100
self.train_max_iters = 5000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'qa_top10':
self.train_data_file = '/home/wsx/data/qa_top10/qa.neg.10.50.train'
self.valid_data_file = '/home/wsx/data/qa_top10/qa.neg.10.50.valid'
self.test_data_file = '/home/wsx/data/qa_top10/qa.neg.10.50.test'
self.embedding_file = '/home/wsx/data/qa_top10/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 50
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 20000
self.valid_max_iters = 6056
self.test_max_iters = 6056
elif dataset == 'qa_top300':
self.train_data_file = '/home/wsx/data/qa_top300/qa.neg.10.50.train'
self.valid_data_file = '/home/wsx/data/qa_top300/qa.neg.10.50.valid'
self.test_data_file = '/home/wsx/data/qa_top300/qa.neg.10.50.test'
self.embedding_file = '/home/wsx/data/qa_top300/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 50
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 20000
self.valid_max_iters = 6056
self.test_max_iters = 6056
elif dataset == 'qa_top1k_4_end':
self.train_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.train.end_token'
self.valid_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.valid.end_token'
self.test_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.test.end_token'
self.embedding_file = '/home/wsx/data/qa_top1k_4/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 52
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 20000
self.valid_max_iters = 6056
self.test_max_iters = 6056
elif dataset == 'ubuntu':
self.train_data_file = '/home/wsx/data/ubuntu/train.txt'
self.valid_data_file = '/home/wsx/data/ubuntu/valid.txt'
self.test_data_file = '/home/wsx/data/ubuntu/test.txt'
# self.embedding_file = '/home/wsx/data/dialogue/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 2002
self.min_doc_len = 1
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 144953
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
self.n_train = 1000192
self.n_valid = 356096
self.n_test = 355170
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 50000
self.valid_max_iters = 1000
self.test_max_iters = 1000
# self.valid_max_iters = self.n_valid/self.valid_batch_size
# self.test_max_iters = self.n_test/self.test_batch_size
elif dataset == 'lcs_toy':
self.train_data_file = '/home/wsx/data/lcs_toy/train'
self.valid_data_file = '/home/wsx/data/lcs_toy/valid'
self.test_data_file = '/home/wsx/data/lcs_toy/test'
self.max_doc_len = 5
self.min_doc_len = 5
self.vocab_size = 10
self.dp_rate = 0.0
self.batch_size = 1
self.train_batch_size = 1
self.valid_batch_size = 100
self.test_batch_size = 100
self.train_display_interval = 1
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 10000
self.valid_max_iters = 1
self.test_max_iters = 1
elif dataset == 'lcs_toy_v10_l10':
self.train_data_file = '/home/wsx/data/lcs_toy_v10_l10/train'
self.valid_data_file = '/home/wsx/data/lcs_toy_v10_l10/valid'
self.test_data_file = '/home/wsx/data/lcs_toy_v10_l10/test'
self.max_doc_len = 10
self.min_doc_len = 5
self.vocab_size = 10
self.dp_rate = 0.0
self.batch_size = 1
self.train_batch_size = 1
self.valid_batch_size = 100
self.test_batch_size = 100
self.train_display_interval = 1
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 10000
self.valid_max_iters = 1
self.test_max_iters = 1
elif dataset == 'lcs_toy_v10_varlen':
self.train_data_file = '/home/wsx/data/lcs_toy_v10_varlen/train'
self.valid_data_file = '/home/wsx/data/lcs_toy_v10_varlen/valid'
self.test_data_file = '/home/wsx/data/lcs_toy_v10_varlen/test'
self.max_doc_len = 10
self.min_doc_len = 1
self.vocab_size = 10
self.dp_rate = 0.0
self.batch_size = 1
self.train_batch_size = 1
self.valid_batch_size = 100
self.test_batch_size = 100
self.train_display_interval = 1
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 10000
self.valid_max_iters = 1
self.test_max_iters = 1
elif dataset == 'qa_top1k_4':
self.train_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.train'
self.valid_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.valid'
self.test_data_file = '/home/wsx/data/qa_top1k_4/qa.neg.4.test'
self.embedding_file = '/home/wsx/data/qa_top1k_4/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 50
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
# print "ORC: WARNING: BATCH SIZE IS SET TO 2 FOR DEBUG."
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 10000
self.valid_max_iters = 6056
self.test_max_iters = 6056
elif dataset == 'qa_top1k':
self.train_data_file = '/home/wsx/data/qa_top1k/qa.neg.10.50.train'
self.valid_data_file = '/home/wsx/data/qa_top1k/qa.neg.10.50.valid'
self.test_data_file = '/home/wsx/data/qa_top1k/qa.neg.10.50.test'
self.embedding_file = '/home/wsx/data/qa_top1k/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 50
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 20000
self.valid_max_iters = 6056
self.test_max_iters = 6056
elif dataset == 'sentence':
self.train_data_file = '/home/wsx/data/sentence/train'
self.valid_data_file = '/home/wsx/data/sentence/test'
self.test_data_file = '/home/wsx/data/sentence/test'
self.embedding_file = '/home/wsx/data/sentence/sentence_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 28
self.min_doc_len = 4
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 127889
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 4000
self.train_max_iters = 40001
self.valid_max_iters = 50000
self.test_max_iters = 50000
elif dataset == 'qa_50':
self.train_data_file = '/home/wsx/data/qa_50/qa.neg.10.50.train'
self.valid_data_file = '/home/wsx/data/qa_50/qa.neg.10.50.valid'
self.test_data_file = '/home/wsx/data/qa_50/qa.neg.10.50.test'
self.embedding_file = '/home/wsx/data/qa_50/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 50
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 130242
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 2000
self.train_max_iters = 20001
self.valid_max_iters = 6057
self.test_max_iters = 6057
elif dataset == 'qa':
self.train_data_file = '/home/wsx/data/qa/qa.neg.xrear10.3.32.train.dat'
self.valid_data_file = '/home/wsx/data/qa/qa.neg.xrear10.3.32.valid.dat'
self.test_data_file = '/home/wsx/data/qa/qa.neg.xrear10.3.32.test.dat'
self.embedding_file = '/home/wsx/data/qa/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 33
self.min_doc_len = 1
# self.vocab_size = 15586
# self.vocab_size = 219071
self.vocab_size = 120750
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 1082851
# self.n_valid = 135355
# self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 10000000
self.test_display_interval = 1000
self.train_max_iters = 40000
self.valid_max_iters = 12303
self.test_max_iters = 12303
elif dataset == 'qa_candi':
self.train_data_file = '/home/wsx/data/qa/qa.xmore10.32.train.dat'
self.valid_data_file = '/home/wsx/data/qa/qa.xmore10.32.valid.dat'
self.test_data_file = '/home/wsx/data/qa/qa.xmore10.32.test.dat'
self.embedding_file = '/home/wsx/data/qa/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 33
self.min_doc_len = 5
# self.vocab_size = 15586
self.vocab_size = 219071
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
self.n_train = 1082851
self.n_valid = 135355
self.n_test = 135355
self.train_display_interval = 10
self.valid_display_interval = 500
self.test_display_interval = 500
self.train_max_iters = 20000
self.valid_max_iters = 12305
self.test_max_iters = 12305
elif dataset == 'qa_balance':
self.train_data_file = '/home/wsx/data/qa/qa.neg.xmore10.32.train.dat.balance'
self.valid_data_file = '/home/wsx/data/qa/qa.neg.xmore10.32.valid.dat.balance'
self.test_data_file = '/home/wsx/data/qa/qa.neg.xmore10.32.test.dat.balance'
self.embedding_file = '/home/wsx/data/qa/qa_embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 33
self.min_doc_len = 5
# self.vocab_size = 15586
self.vocab_size = 219071
self.dp_rate = 0.0
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
self.n_train = 196882
self.n_valid = 24610
self.n_test = 24610
self.train_display_interval = 10
self.valid_display_interval = 500
self.test_display_interval = 500
self.train_max_iters = 20000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'msrp_seq':
self.train_data_file = '/home/wsx/data/msrp/train.seq'
self.valid_data_file = '/home/wsx/data/msrp/valid.seq'
self.test_data_file = '/home/wsx/data/msrp/test.seq'
# self.embedding_file = '/home/wsx/data/msrp/msrp.embed'
# self.update_indication_file = '/home/wsx/data/msrp/wikicorp_num_50_msr_ind.txt'
self.max_doc_len = 33
self.min_doc_len = 5
# self.vocab_size = 15586
# self.vocab_size = 50000
self.dp_rate = 0.0
self.num_class = 2
# self.d_word_rep = 50
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 50
self.test_batch_size = 50
self.n_train = 7152
self.n_valid = 500
self.n_test = 1725
self.train_display_interval = 1
self.valid_display_interval = 100
self.test_display_interval = 100
self.train_max_iters = 5000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'nyt':
self.data_dir = '/home/wsx/data/nyt/'
self.train_data_file = self.data_dir + 'nyt.wid.train.with_msrp'
self.valid_data_file = self.data_dir + 'nyt.wid.valid'
self.test_data_file = self.data_dir + 'msrp.sentence.valid'
# self.embedding_file = self.data_dir + 'wiki.embed'
# self.update_indication_file = self.data_dir + 'wiki.ind'
# self.word_class_file = self.data_dir + 'id2class'
# self.word_freq_file = self.data_dir + 'word_freq'
self.max_doc_len = 60
self.min_doc_len = 0
self.vocab_size = 45844 # without orc_unknown
self.dp_rate = 0.
self.d_word_rep = 1000
self.batch_size = 32
self.train_batch_size = 32
self.valid_batch_size = 32
self.test_batch_size = 32
self.n_train = 111456
self.n_valid = 10000
self.n_test = 1000
self.train_display_interval = 1
self.valid_display_interval = 500
self.test_display_interval = 500
self.train_max_iters = (self.n_train // self.train_batch_size) * 5
self.valid_max_iters = (self.n_valid // 10) // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'wiki':
self.data_dir = '/home/wsx/data/wiki/'
self.train_data_file = self.data_dir + 'wiki.train.with_msrp'
self.valid_data_file = self.data_dir + 'wiki.valid'
self.test_data_file = self.data_dir + 'msrp.sentence.valid'
# self.embedding_file = self.data_dir + 'wiki.embed'
# self.update_indication_file = self.data_dir + 'wiki.ind'
self.word_class_file = self.data_dir + 'id2class'
# self.word_freq_file = self.data_dir + 'word_freq'
self.max_doc_len = 50
self.min_doc_len = 5
self.vocab_size = 177859 # without orc_unknown
self.dp_rate = 0.
self.d_word_rep = 2
self.batch_size = 10
self.train_batch_size = 10
self.valid_batch_size = 10
self.test_batch_size = 10
self.n_train = 924735
self.n_valid = 94802
self.n_test = 1000
self.train_display_interval = 1
self.valid_display_interval = 50
self.test_display_interval = 50
self.train_max_iters = 10000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'webscope':
self.train_data_file = '/home/pangliang/matching/data/webscope/qa_instances.train.dat'
self.valid_data_file = '/home/pangliang/matching/data/webscope/qa_instances.valid.dat'
self.test_data_file = '/home/pangliang/matching/data/webscope/qa_instances.test.dat'
self.embedding_file = ''
self.update_indication_file = ''
self.max_doc_len = 32
self.min_doc_len = 5
self.vocab_size = 214555
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
self.n_train = 114103
self.n_valid = 14262
self.n_test = 14262
self.train_display_interval = 1
self.valid_display_interval = 200
self.test_display_interval = 200
self.train_max_iters = 100000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'paper':
self.train_data_file = '/home/wsx/data/PaperData/relation.train.wid.txt'
self.valid_data_file = '/home/wsx/data/PaperData/relation.valid.wid.txt'
self.test_data_file = '/home/wsx/data/PaperData/relation.test.wid.txt'
self.embedding_file = '/home/wsx/data/PaperData/wikicorp_50_english_norm.txt'
self.update_indication_file = ''
self.max_doc_len = 32
self.min_doc_len = 4
self.vocab_size = 256017
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 128
self.train_batch_size = 128
self.valid_batch_size = 128
self.test_batch_size = 128
# self.n_train = 6152
self.n_valid = 119829
self.n_test = 119883
self.train_display_interval = 1
self.valid_display_interval = 2000
self.test_display_interval = 2000
self.train_max_iters = 40000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'relation':
self.train_data_file = '/home/wsx/data/relation/relation.train.wid.txt'
self.valid_data_file = '/home/wsx/data/relation/relation.valid.wid.txt'
self.test_data_file = '/home/wsx/data/relation/relation.test.wid.txt'
self.embedding_file = '/home/wsx/data/relation/wikicorp_50_english_norm.txt'
self.update_indication_file = '/home/wsx/data/relation/wikicorp_50_english_ind.txt'
self.max_doc_len = 32
self.min_doc_len = 4
self.vocab_size = 415472
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 32
self.train_batch_size = 32
self.valid_batch_size = 32
self.test_batch_size = 32
self.train_display_interval = 1
self.valid_display_interval = 2000
self.test_display_interval = 2000
self.train_max_iters = 200000
self.valid_max_iters = 1000
self.test_max_iters = 1000
elif dataset == 'relation_dep':
self.train_data_file = '/home/wsx/data/relation_dep/relation.train.wid.txt'
self.valid_data_file = '/home/wsx/data/relation_dep/relation.valid.wid.txt'
self.test_data_file = '/home/wsx/data/relation_dep/relation.test.wid.txt'
self.embedding_file = '/home/wsx/data/relation_dep/wikicorp_50_english_norm.txt'
self.update_indication_file = '/home/wsx/data/relation_dep/wikicorp_50_english_ind.txt'
self.max_doc_len = 32
self.min_doc_len = 4
self.vocab_size = 415472
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 50
self.batch_size = 32
self.train_batch_size = 32
self.valid_batch_size = 32
self.test_batch_size = 32
self.train_display_interval = 1
self.valid_display_interval = 2000
self.test_display_interval = 2000
self.train_max_iters = 200000
self.valid_max_iters = 1000
self.test_max_iters = 1000
elif dataset == 'relation_dep_100':
self.train_data_file = '/home/wsx/data/relation_dep_100/relation.train.wid.txt'
self.valid_data_file = '/home/wsx/data/relation_dep_100/relation.valid.wid.txt'
self.test_data_file = '/home/wsx/data/relation_dep_100/relation.test.wid.txt'
self.embedding_file = '/home/wsx/data/relation_dep_100/wikicorp_100_english_norm.txt'
self.update_indication_file = '/home/wsx/data/relation_dep_100/wikicorp_100_english_ind.txt'
self.max_doc_len = 32
self.min_doc_len = 4
self.vocab_size = 415472
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 100
self.batch_size = 32
self.train_batch_size = 32
self.valid_batch_size = 32
self.test_batch_size = 32
self.train_display_interval = 1
self.valid_display_interval = 2000
self.test_display_interval = 2000
self.train_max_iters = 200000
self.valid_max_iters = 1000
self.test_max_iters = 1000
elif dataset == 'simulation':
self.train_data_file = '/home/wsx/dl.shengxian/data/simulation/neg.gen.train'
self.valid_data_file = '/home/wsx/dl.shengxian/data/simulation/neg.gen.train'
self.test_data_file = '/home/wsx/dl.shengxian/data/simulation/neg.gen.test'
self.embedding_file = ''
self.max_doc_len = 20
self.vocab_size = 2000
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 20
self.batch_size = 1
self.train_batch_size = 1
self.valid_batch_size = 1
self.test_batch_size = 1
self.n_train = 300
self.n_valid = 300
self.n_test = 200
elif dataset == 'simulation_topk':
self.train_data_file = '/home/wsx/dl.shengxian/data/simulation/gen.train.topk'
self.valid_data_file = '/home/wsx/dl.shengxian/data/simulation/gen.train.topk'
self.test_data_file = '/home/wsx/dl.shengxian/data/simulation/gen.test.topk'
self.embedding_file = ''
self.max_doc_len = 10
self.vocab_size = 10000
self.dp_rate = 0.5
self.num_class = 2
self.d_word_rep = 30
self.batch_size = 10
self.train_batch_size = 1
self.valid_batch_size = 1
self.test_batch_size = 1
self.n_train = 3000
self.n_valid = 3000
self.n_test = 2000
elif dataset == 'test_lm':
self.data_dir = '/home/wsx/data/test/test_lm/'
self.train_data_file = self.data_dir + 'train.txt'
self.valid_data_file = self.data_dir + 'train.txt'
self.test_data_file = self.data_dir + 'train.txt'
self.word_class_file = self.data_dir + 'id2class'
self.word_freq_file = self.data_dir + 'word_freq'
self.max_doc_len = 6
self.min_doc_len = 0
self.vocab_size = 8 # without orc_unknown
self.dp_rate = 0.
self.d_word_rep = 5
self.batch_size = 2
self.train_batch_size = 2
self.valid_batch_size = 2
self.test_batch_size = 2
self.n_train = 4
self.n_valid = 4
self.n_test = 4
self.train_display_interval = 1
self.valid_display_interval = 1
self.test_display_interval = 1
self.train_max_iters = (self.n_train // self.train_batch_size) * 5
self.valid_max_iters = (self.n_valid // 5) // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'msrp_dpool':
self.train_data_file = '/home/wsx/data/msrp_dpool/train'
self.valid_data_file = '/home/wsx/data/msrp_dpool/valid'
self.test_data_file = '/home/wsx/data/msrp_dpool/test'
self.feat_size = 25
self.dp_rate = 0.5
self.num_class = 2
self.batch_size = 50
self.train_batch_size = 50
self.valid_batch_size = 50
self.test_batch_size = 50
self.n_train = 7152
self.n_valid = 500
self.n_test = 1725
self.train_display_interval = 1
self.valid_display_interval = 100
self.test_display_interval = 100
self.train_max_iters = 5000
self.valid_max_iters = self.n_valid // self.valid_batch_size
self.test_max_iters = self.n_test // self.test_batch_size
elif dataset == 'char_lstm_w2v':
self.data_dir = '/home/wsx/data/char_lstm_w2v/dim300/'
self.train_data_file = self.data_dir + 'train'
self.valid_data_file = self.data_dir + 'valid'
self.test_data_file = self.data_dir + 'test'
self.max_word_len = 100
self.d_word_rep = 300
self.batch_size = 100
self.train_batch_size = 100
self.valid_batch_size = 100
self.test_batch_size = 100
self.n_train = 50000
self.n_valid = 50000
self.n_test = 5000
self.train_display_interval = 10
self.valid_display_interval = 1000
self.test_display_interval = 1000
# self.train_max_iters = (self.n_train/self.train_batch_size) * 5
# self.valid_max_iters = (self.n_valid/10)/self.valid_batch_size
# self.test_max_iters = (self.n_test)/self.test_batch_size
self.train_max_iters = 40000
self.valid_max_iters = 500
self.test_max_iters = 50
elif dataset == 'sogou_im':
self.data_dir = '/home/wsx/data/sogou_im/'
self.train_data_file = self.data_dir + 'data.wid.split.nospace.train'
self.valid_data_file = self.data_dir + 'data.wid.split.nospace.valid'
self.test_data_file = self.data_dir + 'data.wid.split.nospace.test'
self.max_doc_len = 20
self.vocab_size = 5842
self.d_word_rep = 100
self.batch_size = 100
self.train_batch_size = 100
self.valid_batch_size = 100
self.test_batch_size = 100
self.n_train = 50000
self.n_valid = 50000
self.n_test = 5000
self.train_display_interval = 10
self.valid_display_interval = 1000000
self.test_display_interval = 200
# self.train_max_iters = (self.n_train/self.train_batch_size) * 5
# self.valid_max_iters = (self.n_valid/10)/self.valid_batch_size
# self.test_max_iters = (self.n_test)/self.test_batch_size
self.train_max_iters = 40000
self.valid_max_iters = 50
self.test_max_iters = 50
else:
raise ValueError('unknown dataset: %s' % dataset)
# scripts/plot_other.py from HuangQiang/P2HNNS (MIT license)
import os
import re
import numpy as np
import matplotlib.pylab as plt
from scipy.spatial import ConvexHull
from itertools import chain
from scipy.interpolate import interp1d
from collections import defaultdict
from plot import *
from plot_util import *
# ------------------------------------------------------------------------------
def plot_time_recall(chosen_top_k, methods, input_folder, output_folder):
'''
draw the query time vs. recall curve for all methods on all datasets
:param chosen_top_k: top_k value for drawing figure (integer)
:param methods: a list of method names (list)
:param input_folder: input folder (string)
:param output_folder: output folder (string)
:returns: None
'''
fig_width, fig_height = calc_width_and_height(len(datasets), 1)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up each sub-figure
ax = plt.subplot(1, len(datasets), di+1)
plt.title(dataset_label) # title
plt.xlim(0, 100) # limit (or range) of x-axis
plt.xlabel('Recall (%)') # label of x-axis
if di == 0: # add label of y-axis at 1st dataset
plt.ylabel('Query Time (ms)')
miny = 1e9
maxy = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get time-recall results
time_recalls = []
for _,res in parse_res(filename, chosen_top_k):
time_recalls += [[gettime(res), getrecall(res)]]
time_recalls = np.array(time_recalls)
# print(time_recalls)
# get the time-recall curve by convex hull and interpolation, where
# lower_recalls -> x, lower_times -> y
lower_recalls, lower_times = lower_bound_curve(time_recalls)
miny = min(miny, np.min(lower_times))
maxy = max(maxy, np.max(lower_times))
ax.semilogy(lower_recalls, lower_times, '-', color=method_color,
marker=method_marker, label=method_label if di==0 else "",
markevery=10, markerfacecolor='none', markersize=7,
zorder=len(methods)-method_idx)
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'time_recall')
plt.show()
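# `lower_bound_curve` is imported from plot_util and its implementation is not
# shown in this file. As an illustration only (the real helper reportedly uses
# a convex hull plus interpolation, so its output may differ), a minimal
# Pareto-style lower bound over [time, recall] pairs can be sketched as:

```python
import numpy as np

def pareto_lower_bound(time_recalls):
    """Illustrative stand-in for plot_util.lower_bound_curve (hypothetical).

    Given an (n, 2) array of [time, recall] rows, return the recall-sorted
    frontier of points for which no other point achieves a higher recall
    with a lower time.
    """
    pts = time_recalls[np.argsort(time_recalls[:, 1])]  # sort rows by recall
    frontier = []
    best_time = np.inf
    for t, r in pts[::-1]:          # scan from highest recall downwards
        if t < best_time:           # keep only strictly faster points
            frontier.append((r, t))
            best_time = t
    frontier.reverse()              # restore ascending recall order
    recalls, times = zip(*frontier)
    return np.array(recalls), np.array(times)
```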
# ------------------------------------------------------------------------------
def plot_fraction_recall(chosen_top_k, methods, input_folder, output_folder):
'''
draw the fraction-recall curve for all methods on all datasets
:param chosen_top_k: top_k value for drawing figure (integer)
:param methods: a list of method names (list)
:param input_folder: input folder (string)
:param output_folder: output folder (string)
:returns: None
'''
fig_width, fig_height = calc_width_and_height(len(datasets), 1)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up each sub-figure
ax = plt.subplot(1, len(datasets), di+1)
plt.title(dataset_label) # title
plt.xlim(0, 100) # limit (or range) of x-axis
plt.xlabel('Recall (%)') # label of x-axis
if di == 0: # add label of y-axis at 1st dataset
plt.ylabel('Fraction (%)')
miny = 1e9
maxy = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get fraction-recall results
fraction_recalls = []
for _,res in parse_res(filename, chosen_top_k):
fraction_recalls += [[getfraction(res), getrecall(res)]]
fraction_recalls = np.array(fraction_recalls)
# print(fraction_recalls)
# get the fraction-recall curve by convex hull and interpolation, where
# lower_recalls -> x, lower_times -> y
# print('fraction_recall!!!!\n', fraction_recalls)
lower_recalls, lower_fractions = lower_bound_curve(fraction_recalls)
miny = min(miny, np.min(lower_fractions))
maxy = max(maxy, np.max(lower_fractions))
ax.semilogy(lower_recalls, lower_fractions, '-', color=method_color,
marker=method_marker, label=method_label if di==0 else "",
markevery=10, markerfacecolor='none', markersize=7,
zorder=len(methods)-method_idx)
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'fraction_recall')
plt.show()
# ------------------------------------------------------------------------------
def plot_precision_recall(chosen_top_k, methods, input_folder, output_folder):
'''
draw the precision-recall curve for all methods on all datasets
:param chosen_top_k: top_k value for drawing figure (integer)
:param methods: a list of method names (list)
:param input_folder: input folder (string)
:param output_folder: output folder (string)
:returns: None
'''
fig_width, fig_height = calc_width_and_height(len(datasets), 1)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up each sub-figure
ax = plt.subplot(1, len(datasets), di+1)
plt.title(dataset_label) # title
plt.xlim(0, 100) # limit (or range) of x-axis
plt.xlabel('Recall (%)') # label of x-axis
if di == 0: # add label of y-axis for the 1st dataset
plt.ylabel('Precision (%)')
miny = 1e9
maxy = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get precision-recall results
precision_recalls = []
for _, res in parse_res(filename, chosen_top_k):
precision = getprecision(res)
recall = getrecall(res)
if (recall > 0 and precision > 0):
precision_recalls += [[precision, recall]]
precision_recalls = np.array(precision_recalls)
# print(precision_recalls)
# get the precision-recall curve by convex hull and interpolation, where
# upper_recalls -> x, upper_precisions -> y
upper_recalls, upper_precisions = upper_bound_curve(precision_recalls, 1.0, True)
if len(upper_recalls) > 0:
miny = min(miny, np.min(upper_precisions))
maxy = max(maxy, np.max(upper_precisions))
ax.semilogy(upper_recalls, upper_precisions, '-',
color=method_color, marker=method_marker,
label=method_label if di==0 else "", markevery=10,
markerfacecolor='none', markersize=7,
zorder=len(methods)-method_idx)
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'precision_recall')
plt.show()
# ------------------------------------------------------------------------------
def plot_time_recall_ratio(chosen_top_k, methods, input_folder, output_folder):
'''
draw the querytime-recall curves and querytime-ratio curves for all methods
on all datasets
:params chosen_top_k: top_k value for drawing figure (integer)
:params methods: a list of methods (list)
:params input_folder: input folder (string)
:params output_folder: output folder (string)
:returns: None
'''
n_datasets = len(datasets)
fig_width, fig_height = calc_width_and_height(n_datasets, 2)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up two sub-figures
ax_recall = plt.subplot(2, n_datasets, di+1)
plt.title(dataset_label) # title
plt.xlabel('Recall (%)') # label of x-axis
plt.xlim(0, 100)
ax_ratio = plt.subplot(2, n_datasets, n_datasets+di+1)
plt.xlabel('Ratio')
plt.xlim(1.0, 11.0)
plt.xticks([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
if di == 0:
ax_recall.set_ylabel('Query Time (ms)')
ax_ratio.set_ylabel('Query Time (ms)')
miny = 1e9
maxy = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get querytime-recall and querytime-ratio results from disk
time_recalls = []
time_ratios = []
for _, res in parse_res(filename, chosen_top_k):
time_recalls += [[gettime(res), getrecall(res)]]
time_ratios += [[gettime(res), getratio(res)]]
time_recalls = np.array(time_recalls)
time_ratios = np.array(time_ratios)
# print(time_recalls, time_ratios)
# get the querytime-recall curve by convex hull and interpolation
lower_recalls, lower_times = lower_bound_curve(time_recalls)
ax_recall.semilogy(lower_recalls, lower_times, '-',
color=method_color, marker=method_marker,
label=method_label if di==0 else "", markevery=10,
markerfacecolor='none', markersize=10)
miny = min(miny, np.min(lower_times))
maxy = max(maxy, np.max(lower_times))
# get the querytime-ratio curve by convex hull
upper_ratios, upper_times = upper_bound_curve(time_ratios, 0.2, False)
ax_ratio.semilogy(upper_ratios, upper_times, '-',
color=method_color, marker=method_marker, label="",
markevery=5, markerfacecolor='none', markersize=10,
zorder=len(methods)-method_idx)
miny = min(miny, np.min(upper_times))
maxy = max(maxy, np.max(upper_times))
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax_recall, miny, maxy)
plt_helper.set_y_axis_log10(ax_ratio, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'time_recall_ratio')
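# ------------------------------------------------------------------------------
# NOTE: lower_bound_curve() above turns the raw (time, recall) points into the
# best-tradeoff frontier before plotting. As an illustrative sketch only (the
# hypothetical helper below is NOT the module's lower_bound_curve), the core
# idea is to keep the Pareto-optimal rows: a point survives if no other point
# reaches a higher recall with a lower query time.
def _pareto_time_recall_sketch(points):
    # points: iterable of [time, recall] rows, as collected in time_recalls
    pts = sorted(points, key=lambda p: (-p[1], p[0]))  # recall desc, time asc
    frontier = []
    best_time = float('inf')
    for time, recall in pts:
        if time < best_time:  # strictly faster than any higher-recall point
            frontier.append((recall, time))
            best_time = time
    frontier.reverse()  # ascending recall order for plotting
    recalls = [r for r, _ in frontier]
    times = [t for _, t in frontier]
    return recalls, times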
# ------------------------------------------------------------------------------
def plot_time_index(chosen_top_k, recall_level, methods, input_folder, output_folder):
'''
draw the querytime-indexsize curves and querytime-indexingtime curves for
all methods on all datasets
:params chosen_top_k: top_k value for drawing figure (integer)
:params recall_level: recall value for drawing figure (integer)
:params methods: a list of methods (list)
:params input_folder: input folder (string)
:params output_folder: output folder (string)
:returns: None
'''
n_datasets = len(datasets)
fig_width, fig_height = calc_width_and_height(n_datasets, 2)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up two sub-figures
ax_size = plt.subplot(2, n_datasets, di+1)
plt.title(dataset_label) # title
plt.xlabel('Index Size (MB)') # label of x-axis
ax_time = plt.subplot(2, n_datasets, n_datasets+di+1)
plt.xlabel('Indexing Time (Seconds)') # label of x-axis
if di == 0:
ax_size.set_ylabel('Query Time (ms)')
ax_time.set_ylabel('Query Time (ms)')
min_size_y = 1e9; max_size_y = -1e9
min_time_y = 1e9; max_time_y = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get all results from disk
chosen_ks_dict = defaultdict(list)
for _, res in parse_res(filename, chosen_top_k):
query_time = gettime(res)
recall = getrecall(res)
index_time = getindexingtime(res)
index_size = getindexsize(res)
chosen_ks_dict[(index_time, index_size)] += [[recall, query_time]]
# keep querytime-indexsize and querytime-indexingtime results whose
# recall is higher than recall_level
index_times, index_sizes, querytimes_at_recall = [], [], []
for (index_time, index_size), recall_querytimes_ in chosen_ks_dict.items():
# add [[0, 0]] for interpolation
recall_querytimes_ = np.array([[0, 0]] + recall_querytimes_)
recalls, query_times = lower_bound_curve2(recall_querytimes_)
if np.max(recalls) > recall_level:
# get the estimated time at recall level by interpolation
f = interp1d(recalls, query_times)
querytime_at_recall = f(recall_level)
# update results
index_times += [index_time]
index_sizes += [index_size]
querytimes_at_recall += [querytime_at_recall]
print('interp, ', querytime_at_recall, index_size, index_time)
index_times = np.array(index_times)
index_sizes = np.array(index_sizes)
querytimes_at_recall = np.array(querytimes_at_recall)
# get the querytime-indexsize curve by convex hull
isize_qtime = np.zeros(shape=(len(index_sizes), 2))
isize_qtime[:, 0] = index_sizes
isize_qtime[:, 1] = querytimes_at_recall
lower_isizes, lower_qtimes = lower_bound_curve2(isize_qtime)
if len(lower_isizes) > 0:
# print(method, lower_isizes, lower_qtimes)
min_size_y = min(min_size_y, np.min(lower_qtimes))
max_size_y = max(max_size_y, np.max(lower_qtimes))
ax_size.semilogy(lower_isizes, lower_qtimes, '-', color=method_color,
marker=method_marker, label=method_label if di==0 else "",
markerfacecolor='none', markersize=10)
# get the querytime-indextime curve by convex hull
itime_qtime = np.zeros(shape=(len(index_times), 2))
itime_qtime[:, 0] = index_times
itime_qtime[:, 1] = querytimes_at_recall
lower_itimes, lower_qtimes = lower_bound_curve2(itime_qtime)
# print(method, lower_itimes, lower_qtimes)
min_time_y = min(min_time_y, np.min(lower_qtimes))
max_time_y = max(max_time_y, np.max(lower_qtimes))
ax_time.semilogy(lower_itimes, lower_qtimes, '-', color=method_color,
marker=method_marker, label="", markerfacecolor='none',
markersize=10, zorder=len(methods)-method_idx)
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax_size, min_size_y, max_size_y)
plt_helper.set_y_axis_log10(ax_time, min_time_y, max_time_y)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'time_index')
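# ------------------------------------------------------------------------------
# NOTE: the interp1d call above estimates the query time a method needs to hit
# a target recall level by piecewise-linear interpolation. A minimal,
# self-contained sketch of that step (a hypothetical helper, assuming recalls
# are sorted ascending and bracket recall_level; np.interp gives the same
# linear estimate as the default interp1d):
def _querytime_at_recall_sketch(recalls, query_times, recall_level):
    import numpy as np  # already imported at module top; repeated so the sketch stands alone
    return float(np.interp(recall_level, recalls, query_times))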
# ------------------------------------------------------------------------------
def plot_time_indextime(chosen_top_k, recall_level, methods, input_folder,
output_folder):
'''
draw the querytime-indexingtime curves for all methods on all datasets
:params chosen_top_k: top_k value for drawing figure (integer)
:params recall_level: recall value for drawing figure (integer)
:params methods: a list of methods (list)
:params input_folder: input folder (string)
:params output_folder: output folder (string)
:returns: None
'''
n_datasets = len(datasets)
fig_width, fig_height = calc_width_and_height(n_datasets, 1)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up sub-figure
ax_time = plt.subplot(1, n_datasets, di+1)
plt.title(dataset_label) # title
plt.xlabel('Indexing Time (Seconds)') # label of x-axis
if di == 0:
ax_time.set_ylabel('Query Time (ms)')
miny = 1e9; maxy = -1e9
minx = 1e9; maxx = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get all results from disk
chosen_ks_dict = defaultdict(list)
for _, res in parse_res(filename, chosen_top_k):
query_time = gettime(res)
recall = getrecall(res)
index_time = getindexingtime(res)
chosen_ks_dict[index_time] += [[recall, query_time]]
# keep querytime-indexingtime results whose recall is higher than
# recall_level
index_times, querytimes_at_recall = [], []
for index_time, recall_querytimes_ in chosen_ks_dict.items():
# add [[0, 0]] for interpolation
recall_querytimes_ = np.array([[0, 0]] + recall_querytimes_)
recalls, query_times = lower_bound_curve2(recall_querytimes_)
if np.max(recalls) > recall_level:
# get the estimated time at recall level by interpolation
f = interp1d(recalls, query_times)
querytime_at_recall = f(recall_level)
# update results
index_times += [index_time]
querytimes_at_recall += [querytime_at_recall]
# print('interp, ', querytime_at_recall, index_time)
index_times = np.array(index_times)
querytimes_at_recall = np.array(querytimes_at_recall)
# get the querytime-indextime curve by convex hull
itime_qtimes = np.zeros(shape=(len(index_times), 2))
itime_qtimes[:, 0] = index_times
itime_qtimes[:, 1] = querytimes_at_recall
lower_itimes, lower_qtimes = lower_bound_curve2(itime_qtimes)
if len(lower_itimes) > 0:
# print(method, lower_itimes, lower_qtimes)
minx = min(minx, np.min(lower_itimes))
maxx = max(maxx, np.max(lower_itimes))
miny = min(miny, np.min(lower_qtimes))
maxy = max(maxy, np.max(lower_qtimes))
ax_time.semilogy(lower_itimes, lower_qtimes, '-', color=method_color,
marker=method_marker, label=method_label if di==0 else "",
markerfacecolor='none', markersize=10, zorder=len(methods)-method_idx)
# set up the limit (or range) of x-axis and y-axis
if dataset == "Msong":
plt_helper.set_x_axis(ax_time, minx, 0.02*maxx)
else:
plt_helper.set_x_axis(ax_time, minx, 0.22*maxx)
plt_helper.set_y_axis_log10(ax_time, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'time_indextime')
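# ------------------------------------------------------------------------------
# NOTE: the chosen_ks_dict pattern above buckets (recall, query_time) pairs by
# index build configuration before interpolating each bucket. A hypothetical
# sketch of that grouping step (not part of the original module):
def _group_by_key_sketch(rows):
    # rows: iterable of (key, recall, query_time) tuples
    from collections import defaultdict  # already imported at module top
    groups = defaultdict(list)
    for key, recall, query_time in rows:
        groups[key] += [[recall, query_time]]
    return groups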
# ------------------------------------------------------------------------------
def plot_time_k(chosen_top_ks, recall_level, methods, input_folder,
output_folder):
'''
draw the querytime-k curves for all methods on all datasets
:params chosen_top_ks: a list of top_k values for drawing figure (list)
:params recall_level: recall value for drawing figure (integer)
:params methods: a list of methods (list)
:params input_folder: input folder (string)
:params output_folder: output folder (string)
:returns: None
'''
n_datasets = len(datasets)
fig_width, fig_height = calc_width_and_height(n_datasets, 1)
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust() # define a window for a figure
method_labels = [method_labels_map[method] for method in methods]
for di, (dataset, dataset_label) in enumerate(zip(datasets, dataset_labels)):
# set up sub-figure
ax_k = plt.subplot(1, n_datasets, di+1)
plt.title(dataset_label) # title
plt.xlabel('$k$') # label of x-axis
if di == 0:
ax_k.set_ylabel('Query Time (ms)')
miny = 1e9; maxy = -1e9
for method_idx, method, method_label, method_color, method_marker in \
zip(count(), methods, method_labels, method_colors, method_markers):
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
if filename is None: continue
print(filename)
# get all results from disk
chosen_ks_dict = defaultdict(list)
for chosen_top_k in chosen_top_ks:
for _, res in parse_res(filename, chosen_top_k):
query_time = gettime(res)
recall = getrecall(res)
chosen_ks_dict[chosen_top_k] += [[recall, query_time]]
# keep querytime-k results whose recall is higher than recall_level
chosen_ks, querytimes_at_recall = [], []
for chosen_k, recall_querytimes_ in chosen_ks_dict.items():
# add [[0, 0]] for interpolation
recall_querytimes_ = np.array([[0, 0]] + recall_querytimes_)
recalls, query_times = lower_bound_curve2(recall_querytimes_)
if np.max(recalls) > recall_level:
# get the estimated time at recall level by interpolation
f = interp1d(recalls, query_times)
querytime_at_recall = f(recall_level)
# update results
chosen_ks += [chosen_k]
querytimes_at_recall += [querytime_at_recall]
chosen_ks = np.array(chosen_ks)
querytimes_at_recall = np.array(querytimes_at_recall)
miny = min(miny, np.min(querytimes_at_recall))
maxy = max(maxy, np.max(querytimes_at_recall))
ax_k.semilogy(chosen_ks, querytimes_at_recall, '-',
color=method_color, marker=method_marker,
label=method_label if di==0 else "",
markerfacecolor='none', markersize=10,
zorder=len(methods)-method_idx)
# set up the limit (or range) of y-axis
plt_helper.set_y_axis_log10(ax_k, miny, maxy)
# plot legend and save figure
plt_helper.plot_fig_legend(ncol=len(methods))
plt_helper.plot_and_save(output_folder, 'time_k')
# ------------------------------------------------------------------------------
def plot_nh_t(chosen_top_k, datasets, input_folder, output_folder, \
fig_width=6.5, fig_height=6.0):
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust(top_space=1.2, hspace=0.37)
method = 'NH'
for di, dataset in enumerate(datasets):
ax = plt.subplot(1, len(datasets), di+1)
ax.set_xlabel(r'Recall (%)')
if di == 0:
ax.set_ylabel(r'Query Time (ms)')
ax.set_title('%s' % dataset_labels_map[dataset])
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
print(filename, method, dataset)
fix_s = 2
data = []
for record in parse_res(filename, chosen_top_k):
# print(record)
m = get_m(record)
s = get_s(record)
cand = get_cand(record)
time = get_time(record)
recall = get_recall(record)
if s == fix_s:
print(m, s, cand, time, recall)
data += [[m, s, cand, time, recall]]
data = np.array(data)
ms = [8, 16, 32, 64, 128, 256]
maxy = -1e9
miny = 1e9
for color, marker, m in zip(method_colors, method_markers, ms):
data_mp = data[data[:, 0]==m]
# print(m, data_mp)
plt.semilogy(data_mp[:, -1], data_mp[:, -2], marker=marker,
label='$t=%d$'%(m) if di==0 else "", c=color,
markerfacecolor='none', markersize=7)
miny = min(miny, np.min(data_mp[:, -2]))
maxy = max(maxy, np.max(data_mp[:, -2]))
plt.xlim(0, 100)
# print(dataset, distance, miny, maxy)
plt_helper.set_y_axis_log10(ax, miny, maxy)
plt_helper.plot_fig_legend(ncol=3)
plt_helper.plot_and_save(output_folder, 'varying_nh_t')
# ------------------------------------------------------------------------------
def plot_fh_m(chosen_top_k, datasets, input_folder, output_folder, \
fig_width=6.5, fig_height=6.0):
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust(top_space=1.2, hspace=0.37)
method = 'FH'
for di, dataset in enumerate(datasets):
ax = plt.subplot(1, len(datasets), di+1)
ax.set_xlabel(r'Recall (%)')
if di == 0:
ax.set_ylabel(r'Query Time (ms)')
ax.set_title('%s' % dataset_labels_map[dataset])
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
print(filename, method, dataset)
fix_l = 4
fix_s = 2
data = []
for record in parse_res(filename, chosen_top_k):
# print(record)
m = get_m(record)
l = get_l(record)
s = get_s(record)
cand = get_cand(record)
time = get_time(record)
recall = get_recall(record)
if l == fix_l and s == fix_s:
print(m, l, s, cand, time, recall)
data += [[m, l, s, cand, time, recall]]
data = np.array(data)
ms = [8, 16, 32, 64, 128, 256]
maxy = -1e9
miny = 1e9
for color, marker, m in zip(method_colors, method_markers, ms):
data_mp = data[data[:, 0]==m]
# print(m, data_mp)
plt.semilogy(data_mp[:, -1], data_mp[:, -2], marker=marker,
label='$m=%d$'%(m) if di==0 else "", c=color,
markerfacecolor='none', markersize=7)
miny = min(miny, np.min(data_mp[:, -2]))
maxy = max(maxy, np.max(data_mp[:, -2]))
plt.xlim(0, 100)
# print(dataset, distance, miny, maxy)
plt_helper.set_y_axis_log10(ax, miny, maxy)
plt_helper.plot_fig_legend(ncol=3)
plt_helper.plot_and_save(output_folder, 'varying_fh_m')
# ------------------------------------------------------------------------------
def plot_fh_l(chosen_top_k, datasets, input_folder, output_folder, \
fig_width=6.5, fig_height=6.0):
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust(top_space=0.8, hspace=0.37)
method = 'FH'
for di, dataset in enumerate(datasets):
ax = plt.subplot(1, len(datasets), di+1)
ax.set_xlabel(r'Recall (%)')
if di == 0:
ax.set_ylabel(r'Query Time (ms)')
ax.set_title('%s' % dataset_labels_map[dataset])
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
print(filename, method, dataset)
fix_m = 16
fix_s = 2
data = []
for record in parse_res(filename, chosen_top_k):
# print(record)
m = get_m(record)
l = get_l(record)
s = get_s(record)
cand = get_cand(record)
time = get_time(record)
recall = get_recall(record)
if m == fix_m and s == fix_s:
print(m, l, s, cand, time, recall)
data += [[m, l, s, cand, time, recall]]
data = np.array(data)
ls = [2, 4, 6, 8, 10]
maxy = -1e9
miny = 1e9
for color, marker, l in zip(method_colors, method_markers, ls):
data_mp = data[data[:, 1]==l]
# print(m, data_mp)
plt.semilogy(data_mp[:, -1], data_mp[:, -2], marker=marker,
label='$l=%d$'%(l) if di==0 else "", c=color,
markerfacecolor='none', markersize=7)
miny = min(miny, np.min(data_mp[:, -2]))
maxy = max(maxy, np.max(data_mp[:, -2]))
plt.xlim(0, 100)
# print(dataset, distance, miny, maxy)
plt_helper.set_y_axis_log10(ax, miny, maxy)
plt_helper.plot_fig_legend(ncol=5)
plt_helper.plot_and_save(output_folder, 'varying_fh_l')
# ------------------------------------------------------------------------------
def plot_fh_s(chosen_top_k, datasets, input_folder, output_folder, \
fig_width=6.5, fig_height=6.0):
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust(top_space=0.8, hspace=0.37)
method = 'FH'
for di, dataset in enumerate(datasets):
ax = plt.subplot(1, len(datasets), di+1)
ax.set_xlabel(r'Recall (%)')
if di == 0:
ax.set_ylabel(r'Query Time (ms)')
ax.set_title('%s' % dataset_labels_map[dataset])
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
print(filename, method, dataset)
fix_m = 16
fix_l = 4
data = []
for record in parse_res(filename, chosen_top_k):
# print(record)
m = get_m(record)
l = get_l(record)
s = get_s(record)
cand = get_cand(record)
time = get_time(record)
recall = get_recall(record)
if m == fix_m and l == fix_l:
print(m, l, s, cand, time, recall)
data += [[m, l, s, cand, time, recall]]
data = np.array(data)
ss = [1, 2, 4, 8]
maxy = -1e9
miny = 1e9
for color, marker, s in zip(method_colors, method_markers, ss):
data_mp = data[data[:, 2]==s]
# print(m, data_mp)
plt.semilogy(data_mp[:, -1], data_mp[:, -2], marker=marker,
label=r'$\lambda=%d d$' % s if di == 0 else "", c=color,
markerfacecolor='none', markersize=7)
miny = min(miny, np.min(data_mp[:, -2]))
maxy = max(maxy, np.max(data_mp[:, -2]))
plt.xlim(0, 100)
# print(dataset, distance, miny, maxy)
plt_helper.set_y_axis_log10(ax, miny, maxy)
plt_helper.plot_fig_legend(ncol=4)
plt_helper.plot_and_save(output_folder, 'varying_fh_s')
# ------------------------------------------------------------------------------
def plot_nh_s(chosen_top_k, datasets, input_folder, output_folder, \
fig_width=6.5, fig_height=6.0):
plt_helper = PlotHelper(plt, fig_width, fig_height)
plt_helper.plot_subplots_adjust(top_space=0.8, hspace=0.37)
method = 'NH'
for di, dataset in enumerate(datasets):
ax = plt.subplot(1, len(datasets), di+1)
ax.set_xlabel(r'Recall (%)')
if di == 0:
ax.set_ylabel(r'Query Time (ms)')
ax.set_title('%s' % dataset_labels_map[dataset])
# get file name for this method on this dataset
filename = get_filename(input_folder, dataset, method)
print(filename, method, dataset)
fix_m = 256
data = []
for record in parse_res(filename, chosen_top_k):
# print(record)
m = get_m(record)
s = get_s(record)
cand = get_cand(record)
time = get_time(record)
recall = get_recall(record)
if m == fix_m:
print(m, s, cand, time, recall)
data += [[m, s, cand, time, recall]]
data = np.array(data)
ss = [1, 2, 4, 8]
maxy = -1e9
miny = 1e9
for color, marker, s in zip(method_colors, method_markers, ss):
data_mp = data[data[:, 1]==s]
# print(m, data_mp)
plt.semilogy(data_mp[:, -1], data_mp[:, -2], marker=marker,
label=r'$\lambda=%d d$' % s if di == 0 else "", c=color,
markerfacecolor='none', markersize=7)
miny = min(miny, np.min(data_mp[:, -2]))
maxy = max(maxy, np.max(data_mp[:, -2]))
plt.xlim(0, 100)
# print(dataset, distance, miny, maxy)
plt_helper.set_y_axis_log10(ax, miny, maxy)
plt_helper.plot_fig_legend(ncol=4)
plt_helper.plot_and_save(output_folder, 'varying_nh_s')
# ------------------------------------------------------------------------------
if __name__ == '__main__':
chosen_top_k = 10
input_folder = "../results/"
output_folder = "../figures/param/"
datasets = ['Yelp', 'GloVe100']
plot_nh_t(chosen_top_k, datasets, input_folder, output_folder, fig_height=3.4)
plot_nh_s(chosen_top_k, datasets, input_folder, output_folder, fig_height=3.0)
plot_fh_m(chosen_top_k, datasets, input_folder, output_folder, fig_height=3.4)
plot_fh_l(chosen_top_k, datasets, input_folder, output_folder, fig_height=3.0)
plot_fh_s(chosen_top_k, datasets, input_folder, output_folder, fig_height=3.0)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from datetime import datetime
import mock
from opencensus.ext.zipkin import trace_exporter
from opencensus.trace import span_context
from opencensus.trace import span_data as span_data_module
from opencensus.trace import time_event
class TestZipkinExporter(unittest.TestCase):
def test_constructor(self):
service_name = 'my_service'
host_name = '0.0.0.0'
port = 2333
endpoint = '/api/v2/test'
ipv4 = '127.0.0.1'
exporter = trace_exporter.ZipkinExporter(
service_name=service_name,
host_name=host_name,
port=port,
endpoint=endpoint,
ipv4=ipv4)
expected_url = 'http://0.0.0.0:2333/api/v2/test'
self.assertEqual(exporter.service_name, service_name)
self.assertEqual(exporter.host_name, host_name)
self.assertEqual(exporter.port, port)
self.assertEqual(exporter.endpoint, endpoint)
self.assertEqual(exporter.url, expected_url)
self.assertEqual(exporter.ipv4, ipv4)
def test_export(self):
exporter = trace_exporter.ZipkinExporter(
service_name='my_service', transport=MockTransport)
exporter.export({})
self.assertTrue(exporter.transport.export_called)
@mock.patch('requests.post')
@mock.patch.object(trace_exporter.ZipkinExporter, 'translate_to_zipkin')
def test_emit_succeeded(self, translate_mock, requests_mock):
import json
trace = {'test': 'this_is_for_test'}
exporter = trace_exporter.ZipkinExporter(service_name='my_service')
response = mock.Mock()
response.status_code = 202
requests_mock.return_value = response
translate_mock.return_value = trace
exporter.emit([])
requests_mock.assert_called_once_with(
url=exporter.url,
data=json.dumps(trace),
headers=trace_exporter.ZIPKIN_HEADERS)
@mock.patch('requests.post')
@mock.patch.object(trace_exporter.ZipkinExporter, 'translate_to_zipkin')
def test_emit_failed(self, translate_mock, requests_mock):
import json
trace = {'test': 'this_is_for_test'}
exporter = trace_exporter.ZipkinExporter(service_name='my_service')
response = mock.Mock()
response.status_code = 400
requests_mock.return_value = response
translate_mock.return_value = trace
exporter.emit([])
requests_mock.assert_called_once_with(
url=exporter.url,
data=json.dumps(trace),
headers=trace_exporter.ZIPKIN_HEADERS)
def test_translate_to_zipkin_span_kind_none(self):
trace_id = '6e0c63257de34c92bf9efcd03927272e'
spans_ipv4 = [
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id='6e0c63257de34c93',
attributes={'test_key': 'test_value'},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=None,
message_events=None,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=0,
),
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id='6e0c63257de34c93',
attributes={'test_key': 1},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=None,
message_events=None,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=None,
),
]
trace_id = '6e0c63257de34c92bf9efcd03927272e'
spans_ipv6 = [
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id=None,
attributes={
'test_key': False,
'test_key2': 'raw_value',
'test_key3': 0.1,
},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=None,
message_events=None,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=1,
),
]
ipv4 = '127.0.0.1'
ipv6 = '2001:0db8:85a3:0000:0000:8a2e:0370:7334'
local_endpoint_ipv4 = {
'serviceName': 'my_service',
'ipv4': ipv4,
'port': 9411,
}
local_endpoint_ipv6 = {
'serviceName': 'my_service',
'ipv6': ipv6,
'port': 9411,
}
expected_zipkin_spans_ipv4 = [
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'parentId': '6e0c63257de34c93',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv4,
'tags': {
'test_key': 'test_value'
},
'annotations': [],
},
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'parentId': '6e0c63257de34c93',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv4,
'tags': {
'test_key': '1'
},
'annotations': [],
},
]
expected_zipkin_spans_ipv6 = [
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv6,
'tags': {
'test_key': 'False',
'test_key2': 'raw_value',
'test_key3': '0.1'
},
'kind': 'SERVER',
'annotations': [],
},
]
# Test ipv4 local endpoint
exporter_ipv4 = trace_exporter.ZipkinExporter(
service_name='my_service', ipv4=ipv4)
zipkin_spans_ipv4 = exporter_ipv4.translate_to_zipkin(
span_datas=spans_ipv4)
self.assertEqual(zipkin_spans_ipv4, expected_zipkin_spans_ipv4)
# Test ipv6 local endpoint
exporter_ipv6 = trace_exporter.ZipkinExporter(
service_name='my_service', ipv6=ipv6)
zipkin_spans_ipv6 = exporter_ipv6.translate_to_zipkin(
span_datas=spans_ipv6)
self.assertEqual(zipkin_spans_ipv6, expected_zipkin_spans_ipv6)
def test_translate_to_zipkin_with_annotations(self):
trace_id = '6e0c63257de34c92bf9efcd03927272e'
annotation_attributes = {
'annotation_bool': True,
'annotation_string': 'annotation_test',
'key_float': .3
}
s = '2017-08-15T18:02:26.071158'
time = datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f')
annotations = [
time_event.Annotation(
timestamp=time,
description='First Annotation',
attributes=annotation_attributes,
)
]
message_events = [
time_event.MessageEvent(
timestamp=time,
id='message-event-id',
uncompressed_size_bytes=0,
)
]
spans_ipv4 = [
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id='6e0c63257de34c93',
attributes={'test_key': 'test_value'},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=annotations,
message_events=message_events,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=0,
),
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id='6e0c63257de34c93',
attributes={'test_key': 1},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=annotations,
message_events=message_events,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=None,
),
]
spans_ipv6 = [
span_data_module.SpanData(
name='child_span',
context=span_context.SpanContext(trace_id=trace_id),
span_id='6e0c63257de34c92',
parent_span_id=None,
attributes={
'test_key': False,
'test_key2': 'raw_value',
'test_key3': 0.1,
},
start_time='2017-08-15T18:02:26.071158Z',
end_time='2017-08-15T18:02:36.071158Z',
child_span_count=None,
stack_trace=None,
annotations=annotations,
message_events=message_events,
links=None,
status=None,
same_process_as_parent_span=None,
span_kind=1,
),
]
ipv4 = '127.0.0.1'
ipv6 = '2001:0db8:85a3:0000:0000:8a2e:0370:7334'
local_endpoint_ipv4 = {
'serviceName': 'my_service',
'ipv4': ipv4,
'port': 9411,
}
local_endpoint_ipv6 = {
'serviceName': 'my_service',
'ipv6': ipv6,
'port': 9411,
}
expected_zipkin_spans_ipv4 = [
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'parentId': '6e0c63257de34c93',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv4,
'tags': {
'test_key': 'test_value'
},
'annotations': [{
'timestamp': 1502820146071158,
'value': 'First Annotation'
}]
},
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'parentId': '6e0c63257de34c93',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv4,
'tags': {
'test_key': '1'
},
'annotations': [{
'timestamp': 1502820146071158,
'value': 'First Annotation'
}]
},
]
expected_zipkin_spans_ipv6 = [
{
'traceId': '6e0c63257de34c92bf9efcd03927272e',
'id': '6e0c63257de34c92',
'name': 'child_span',
'timestamp': 1502820146071158,
'duration': 10000000,
'localEndpoint': local_endpoint_ipv6,
'tags': {
'test_key': 'False',
'test_key2': 'raw_value',
'test_key3': '0.1'
},
'kind': 'SERVER',
'annotations': [{
'timestamp': 1502820146071158,
'value': 'First Annotation'
}]
},
]
# Test ipv4 local endpoint
exporter_ipv4 = trace_exporter.ZipkinExporter(
service_name='my_service', ipv4=ipv4)
zipkin_spans_ipv4 = exporter_ipv4.translate_to_zipkin(
span_datas=spans_ipv4)
self.assertEqual(zipkin_spans_ipv4, expected_zipkin_spans_ipv4)
# Test ipv6 local endpoint
exporter_ipv6 = trace_exporter.ZipkinExporter(
service_name='my_service', ipv6=ipv6)
zipkin_spans_ipv6 = exporter_ipv6.translate_to_zipkin(
span_datas=spans_ipv6)
self.assertEqual(zipkin_spans_ipv6, expected_zipkin_spans_ipv6)
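The expected `'timestamp'` values above (`1502820146071158`) are the span start time expressed as epoch microseconds, and `'duration'` (`10000000`) is the 10-second gap between the start and end times. A minimal, self-contained sketch of that conversion (the helper name is mine, not part of the exporter under test):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_epoch_micros(iso_ts):
    """Convert an ISO-8601 UTC timestamp (no offset suffix) to epoch microseconds."""
    dt = datetime.strptime(iso_ts, '%Y-%m-%dT%H:%M:%S.%f').replace(tzinfo=timezone.utc)
    # timedelta // timedelta gives an exact integer count, avoiding float rounding
    return (dt - EPOCH) // timedelta(microseconds=1)

start = to_epoch_micros('2017-08-15T18:02:26.071158')
end = to_epoch_micros('2017-08-15T18:02:36.071158')
print(start)        # 1502820146071158
print(end - start)  # 10000000 (10 seconds in microseconds)
```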
def test_ignore_incorrect_spans(self):
attributes = {'unknown_value': {}}
self.assertEqual(
trace_exporter._extract_tags_from_span(attributes), {})
attributes = None
self.assertEqual(
trace_exporter._extract_tags_from_span(attributes), {})
class MockTransport(object):
def __init__(self, exporter=None):
self.export_called = False
self.exporter = exporter
def export(self, trace):
self.export_called = True

# inferfuzzy/rules/__init__.py
from .larsen_rule import LarsenRule  # noqa: F401
from .mamdani_rule import MamdaniRule # noqa: F401

# Python-RPiCam/pyimagesearch/notifications/__init__.py
from .twilionotifier import TwilioNotifier

# bscscan_web_api/models/__init__.py
from .recently_added_token import RecentlyAddedToken
from .compiler import Compiler
from .token import Token

# venv/Lib/site-packages/tensorflow/saved_model/tag_constants/__init__.py
# This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/tools/api/generator/create_python_api.py script.
"""Common tags used for graphs in SavedModel.
"""
from __future__ import print_function
from tensorflow.python.saved_model.tag_constants import GPU
from tensorflow.python.saved_model.tag_constants import SERVING
from tensorflow.python.saved_model.tag_constants import TPU
from tensorflow.python.saved_model.tag_constants import TRAINING
del print_function

# cottonformation/res/iotevents.py
# -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
# --- Property declaration ---
@attr.s
class DetectorModelSetTimer(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.SetTimer"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html
Property Document:
- ``rp_TimerName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-timername
- ``p_DurationExpression``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-durationexpression
- ``p_Seconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-seconds
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.SetTimer"
rp_TimerName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TimerName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-timername"""
p_DurationExpression: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DurationExpression"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-durationexpression"""
p_Seconds: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Seconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-settimer.html#cfn-iotevents-detectormodel-settimer-seconds"""
@attr.s
class DetectorModelResetTimer(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.ResetTimer"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-resettimer.html
Property Document:
- ``rp_TimerName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-resettimer.html#cfn-iotevents-detectormodel-resettimer-timername
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.ResetTimer"
rp_TimerName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TimerName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-resettimer.html#cfn-iotevents-detectormodel-resettimer-timername"""
@attr.s
class DetectorModelClearTimer(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.ClearTimer"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-cleartimer.html
Property Document:
- ``rp_TimerName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-cleartimer.html#cfn-iotevents-detectormodel-cleartimer-timername
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.ClearTimer"
rp_TimerName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TimerName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-cleartimer.html#cfn-iotevents-detectormodel-cleartimer-timername"""
@attr.s
class InputAttribute(Property):
"""
AWS Object Type = "AWS::IoTEvents::Input.Attribute"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-attribute.html
Property Document:
- ``rp_JsonPath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-attribute.html#cfn-iotevents-input-attribute-jsonpath
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::Input.Attribute"
rp_JsonPath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "JsonPath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-attribute.html#cfn-iotevents-input-attribute-jsonpath"""
@attr.s
class DetectorModelAssetPropertyTimestamp(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.AssetPropertyTimestamp"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertytimestamp.html
Property Document:
- ``rp_TimeInSeconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertytimestamp.html#cfn-iotevents-detectormodel-assetpropertytimestamp-timeinseconds
- ``p_OffsetInNanos``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertytimestamp.html#cfn-iotevents-detectormodel-assetpropertytimestamp-offsetinnanos
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.AssetPropertyTimestamp"
rp_TimeInSeconds: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TimeInSeconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertytimestamp.html#cfn-iotevents-detectormodel-assetpropertytimestamp-timeinseconds"""
p_OffsetInNanos: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "OffsetInNanos"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertytimestamp.html#cfn-iotevents-detectormodel-assetpropertytimestamp-offsetinnanos"""
@attr.s
class DetectorModelAssetPropertyVariant(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.AssetPropertyVariant"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html
Property Document:
- ``p_BooleanValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-booleanvalue
- ``p_DoubleValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-doublevalue
- ``p_IntegerValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-integervalue
- ``p_StringValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-stringvalue
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.AssetPropertyVariant"
p_BooleanValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "BooleanValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-booleanvalue"""
p_DoubleValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DoubleValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-doublevalue"""
p_IntegerValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "IntegerValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-integervalue"""
p_StringValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "StringValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvariant.html#cfn-iotevents-detectormodel-assetpropertyvariant-stringvalue"""
@attr.s
class DetectorModelSetVariable(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.SetVariable"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-setvariable.html
Property Document:
- ``rp_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-setvariable.html#cfn-iotevents-detectormodel-setvariable-value
- ``rp_VariableName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-setvariable.html#cfn-iotevents-detectormodel-setvariable-variablename
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.SetVariable"
rp_Value: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-setvariable.html#cfn-iotevents-detectormodel-setvariable-value"""
rp_VariableName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "VariableName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-setvariable.html#cfn-iotevents-detectormodel-setvariable-variablename"""
@attr.s
class DetectorModelPayload(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Payload"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-payload.html
Property Document:
- ``rp_ContentExpression``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-payload.html#cfn-iotevents-detectormodel-payload-contentexpression
- ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-payload.html#cfn-iotevents-detectormodel-payload-type
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Payload"
rp_ContentExpression: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ContentExpression"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-payload.html#cfn-iotevents-detectormodel-payload-contentexpression"""
rp_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-payload.html#cfn-iotevents-detectormodel-payload-type"""
@attr.s
class DetectorModelAssetPropertyValue(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.AssetPropertyValue"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html
Property Document:
- ``rp_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-value
- ``p_Quality``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-quality
- ``p_Timestamp``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-timestamp
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.AssetPropertyValue"
rp_Value: typing.Union['DetectorModelAssetPropertyVariant', dict] = attr.ib(
default=None,
converter=DetectorModelAssetPropertyVariant.from_dict,
validator=attr.validators.instance_of(DetectorModelAssetPropertyVariant),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-value"""
p_Quality: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Quality"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-quality"""
p_Timestamp: typing.Union['DetectorModelAssetPropertyTimestamp', dict] = attr.ib(
default=None,
converter=DetectorModelAssetPropertyTimestamp.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelAssetPropertyTimestamp)),
metadata={AttrMeta.PROPERTY_NAME: "Timestamp"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-assetpropertyvalue.html#cfn-iotevents-detectormodel-assetpropertyvalue-timestamp"""
@attr.s
class InputInputDefinition(Property):
"""
AWS Object Type = "AWS::IoTEvents::Input.InputDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-inputdefinition.html
Property Document:
- ``rp_Attributes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-inputdefinition.html#cfn-iotevents-input-inputdefinition-attributes
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::Input.InputDefinition"
rp_Attributes: typing.List[typing.Union['InputAttribute', dict]] = attr.ib(
default=None,
converter=InputAttribute.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(InputAttribute), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Attributes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-input-inputdefinition.html#cfn-iotevents-input-inputdefinition-attributes"""
@attr.s
class DetectorModelLambda(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Lambda"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-lambda.html
Property Document:
- ``rp_FunctionArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-lambda.html#cfn-iotevents-detectormodel-lambda-functionarn
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-lambda.html#cfn-iotevents-detectormodel-lambda-payload
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Lambda"
rp_FunctionArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "FunctionArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-lambda.html#cfn-iotevents-detectormodel-lambda-functionarn"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-lambda.html#cfn-iotevents-detectormodel-lambda-payload"""
@attr.s
class DetectorModelIotEvents(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.IotEvents"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotevents.html
Property Document:
- ``rp_InputName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotevents.html#cfn-iotevents-detectormodel-iotevents-inputname
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotevents.html#cfn-iotevents-detectormodel-iotevents-payload
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.IotEvents"
rp_InputName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "InputName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotevents.html#cfn-iotevents-detectormodel-iotevents-inputname"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotevents.html#cfn-iotevents-detectormodel-iotevents-payload"""
@attr.s
class DetectorModelIotSiteWise(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.IotSiteWise"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html
Property Document:
- ``rp_PropertyValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyvalue
- ``p_AssetId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-assetid
- ``p_EntryId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-entryid
- ``p_PropertyAlias``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyalias
- ``p_PropertyId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyid
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.IotSiteWise"
rp_PropertyValue: typing.Union['DetectorModelAssetPropertyValue', dict] = attr.ib(
default=None,
converter=DetectorModelAssetPropertyValue.from_dict,
validator=attr.validators.instance_of(DetectorModelAssetPropertyValue),
metadata={AttrMeta.PROPERTY_NAME: "PropertyValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyvalue"""
p_AssetId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AssetId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-assetid"""
p_EntryId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "EntryId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-entryid"""
p_PropertyAlias: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PropertyAlias"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyalias"""
p_PropertyId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PropertyId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iotsitewise.html#cfn-iotevents-detectormodel-iotsitewise-propertyid"""
@attr.s
class DetectorModelDynamoDB(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.DynamoDB"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html
Property Document:
- ``rp_HashKeyField``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeyfield
- ``rp_HashKeyValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeyvalue
- ``rp_TableName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-tablename
- ``p_HashKeyType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeytype
- ``p_Operation``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-operation
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-payload
- ``p_PayloadField``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-payloadfield
- ``p_RangeKeyField``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeyfield
- ``p_RangeKeyType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeytype
- ``p_RangeKeyValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeyvalue
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.DynamoDB"
rp_HashKeyField: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "HashKeyField"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeyfield"""
rp_HashKeyValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "HashKeyValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeyvalue"""
rp_TableName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TableName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-tablename"""
p_HashKeyType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "HashKeyType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-hashkeytype"""
p_Operation: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Operation"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-operation"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-payload"""
p_PayloadField: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PayloadField"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-payloadfield"""
p_RangeKeyField: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RangeKeyField"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeyfield"""
p_RangeKeyType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RangeKeyType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeytype"""
p_RangeKeyValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RangeKeyValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodb.html#cfn-iotevents-detectormodel-dynamodb-rangekeyvalue"""
@attr.s
class DetectorModelFirehose(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Firehose"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html
Property Document:
- ``rp_DeliveryStreamName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-deliverystreamname
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-payload
- ``p_Separator``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-separator
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Firehose"
rp_DeliveryStreamName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DeliveryStreamName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-deliverystreamname"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-payload"""
p_Separator: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Separator"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-firehose.html#cfn-iotevents-detectormodel-firehose-separator"""
@attr.s
class DetectorModelSns(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Sns"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sns.html
Property Document:
- ``rp_TargetArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sns.html#cfn-iotevents-detectormodel-sns-targetarn
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sns.html#cfn-iotevents-detectormodel-sns-payload
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Sns"
rp_TargetArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TargetArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sns.html#cfn-iotevents-detectormodel-sns-targetarn"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sns.html#cfn-iotevents-detectormodel-sns-payload"""
@attr.s
class DetectorModelSqs(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Sqs"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html
Property Document:
- ``rp_QueueUrl``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-queueurl
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-payload
- ``p_UseBase64``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-usebase64
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Sqs"
rp_QueueUrl: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "QueueUrl"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-queueurl"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-payload"""
p_UseBase64: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "UseBase64"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-sqs.html#cfn-iotevents-detectormodel-sqs-usebase64"""
@attr.s
class DetectorModelIotTopicPublish(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.IotTopicPublish"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iottopicpublish.html
Property Document:
- ``rp_MqttTopic``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iottopicpublish.html#cfn-iotevents-detectormodel-iottopicpublish-mqtttopic
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iottopicpublish.html#cfn-iotevents-detectormodel-iottopicpublish-payload
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.IotTopicPublish"
rp_MqttTopic: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "MqttTopic"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iottopicpublish.html#cfn-iotevents-detectormodel-iottopicpublish-mqtttopic"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-iottopicpublish.html#cfn-iotevents-detectormodel-iottopicpublish-payload"""
@attr.s
class DetectorModelDynamoDBv2(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.DynamoDBv2"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodbv2.html
Property Document:
- ``rp_TableName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodbv2.html#cfn-iotevents-detectormodel-dynamodbv2-tablename
- ``p_Payload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodbv2.html#cfn-iotevents-detectormodel-dynamodbv2-payload
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.DynamoDBv2"
rp_TableName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TableName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodbv2.html#cfn-iotevents-detectormodel-dynamodbv2-tablename"""
p_Payload: typing.Union['DetectorModelPayload', dict] = attr.ib(
default=None,
converter=DetectorModelPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelPayload)),
metadata={AttrMeta.PROPERTY_NAME: "Payload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-dynamodbv2.html#cfn-iotevents-detectormodel-dynamodbv2-payload"""
@attr.s
class DetectorModelAction(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Action"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html
Property Document:
- ``p_ClearTimer``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-cleartimer
- ``p_DynamoDB``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-dynamodb
- ``p_DynamoDBv2``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-dynamodbv2
- ``p_Firehose``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-firehose
- ``p_IotEvents``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iotevents
- ``p_IotSiteWise``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iotsitewise
- ``p_IotTopicPublish``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iottopicpublish
- ``p_Lambda``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-lambda
- ``p_ResetTimer``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-resettimer
- ``p_SetTimer``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-settimer
- ``p_SetVariable``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-setvariable
- ``p_Sns``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-sns
- ``p_Sqs``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-sqs
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Action"
p_ClearTimer: typing.Union['DetectorModelClearTimer', dict] = attr.ib(
default=None,
converter=DetectorModelClearTimer.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelClearTimer)),
metadata={AttrMeta.PROPERTY_NAME: "ClearTimer"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-cleartimer"""
p_DynamoDB: typing.Union['DetectorModelDynamoDB', dict] = attr.ib(
default=None,
converter=DetectorModelDynamoDB.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelDynamoDB)),
metadata={AttrMeta.PROPERTY_NAME: "DynamoDB"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-dynamodb"""
p_DynamoDBv2: typing.Union['DetectorModelDynamoDBv2', dict] = attr.ib(
default=None,
converter=DetectorModelDynamoDBv2.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelDynamoDBv2)),
metadata={AttrMeta.PROPERTY_NAME: "DynamoDBv2"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-dynamodbv2"""
p_Firehose: typing.Union['DetectorModelFirehose', dict] = attr.ib(
default=None,
converter=DetectorModelFirehose.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelFirehose)),
metadata={AttrMeta.PROPERTY_NAME: "Firehose"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-firehose"""
p_IotEvents: typing.Union['DetectorModelIotEvents', dict] = attr.ib(
default=None,
converter=DetectorModelIotEvents.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelIotEvents)),
metadata={AttrMeta.PROPERTY_NAME: "IotEvents"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iotevents"""
p_IotSiteWise: typing.Union['DetectorModelIotSiteWise', dict] = attr.ib(
default=None,
converter=DetectorModelIotSiteWise.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelIotSiteWise)),
metadata={AttrMeta.PROPERTY_NAME: "IotSiteWise"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iotsitewise"""
p_IotTopicPublish: typing.Union['DetectorModelIotTopicPublish', dict] = attr.ib(
default=None,
converter=DetectorModelIotTopicPublish.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelIotTopicPublish)),
metadata={AttrMeta.PROPERTY_NAME: "IotTopicPublish"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-iottopicpublish"""
p_Lambda: typing.Union['DetectorModelLambda', dict] = attr.ib(
default=None,
converter=DetectorModelLambda.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelLambda)),
metadata={AttrMeta.PROPERTY_NAME: "Lambda"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-lambda"""
p_ResetTimer: typing.Union['DetectorModelResetTimer', dict] = attr.ib(
default=None,
converter=DetectorModelResetTimer.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelResetTimer)),
metadata={AttrMeta.PROPERTY_NAME: "ResetTimer"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-resettimer"""
p_SetTimer: typing.Union['DetectorModelSetTimer', dict] = attr.ib(
default=None,
converter=DetectorModelSetTimer.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelSetTimer)),
metadata={AttrMeta.PROPERTY_NAME: "SetTimer"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-settimer"""
p_SetVariable: typing.Union['DetectorModelSetVariable', dict] = attr.ib(
default=None,
converter=DetectorModelSetVariable.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelSetVariable)),
metadata={AttrMeta.PROPERTY_NAME: "SetVariable"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-setvariable"""
p_Sns: typing.Union['DetectorModelSns', dict] = attr.ib(
default=None,
converter=DetectorModelSns.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelSns)),
metadata={AttrMeta.PROPERTY_NAME: "Sns"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-sns"""
p_Sqs: typing.Union['DetectorModelSqs', dict] = attr.ib(
default=None,
converter=DetectorModelSqs.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelSqs)),
metadata={AttrMeta.PROPERTY_NAME: "Sqs"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-action.html#cfn-iotevents-detectormodel-action-sqs"""
@attr.s
class DetectorModelTransitionEvent(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.TransitionEvent"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html
Property Document:
- ``rp_Condition``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-condition
- ``rp_EventName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-eventname
- ``rp_NextState``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-nextstate
- ``p_Actions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-actions
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.TransitionEvent"
rp_Condition: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Condition"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-condition"""
rp_EventName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "EventName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-eventname"""
rp_NextState: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "NextState"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-nextstate"""
p_Actions: typing.List[typing.Union['DetectorModelAction', dict]] = attr.ib(
default=None,
converter=DetectorModelAction.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelAction), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Actions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-transitionevent.html#cfn-iotevents-detectormodel-transitionevent-actions"""
@attr.s
class DetectorModelEvent(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.Event"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html
Property Document:
- ``rp_EventName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-eventname
- ``p_Actions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-actions
- ``p_Condition``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-condition
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.Event"
rp_EventName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "EventName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-eventname"""
p_Actions: typing.List[typing.Union['DetectorModelAction', dict]] = attr.ib(
default=None,
converter=DetectorModelAction.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelAction), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Actions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-actions"""
p_Condition: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Condition"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-event.html#cfn-iotevents-detectormodel-event-condition"""
@attr.s
class DetectorModelOnExit(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.OnExit"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onexit.html
Property Document:
- ``p_Events``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onexit.html#cfn-iotevents-detectormodel-onexit-events
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.OnExit"
p_Events: typing.List[typing.Union['DetectorModelEvent', dict]] = attr.ib(
default=None,
converter=DetectorModelEvent.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelEvent), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Events"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onexit.html#cfn-iotevents-detectormodel-onexit-events"""
@attr.s
class DetectorModelOnInput(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.OnInput"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-oninput.html
Property Document:
- ``p_Events``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-oninput.html#cfn-iotevents-detectormodel-oninput-events
- ``p_TransitionEvents``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-oninput.html#cfn-iotevents-detectormodel-oninput-transitionevents
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.OnInput"
p_Events: typing.List[typing.Union['DetectorModelEvent', dict]] = attr.ib(
default=None,
converter=DetectorModelEvent.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelEvent), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Events"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-oninput.html#cfn-iotevents-detectormodel-oninput-events"""
p_TransitionEvents: typing.List[typing.Union['DetectorModelTransitionEvent', dict]] = attr.ib(
default=None,
converter=DetectorModelTransitionEvent.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelTransitionEvent), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "TransitionEvents"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-oninput.html#cfn-iotevents-detectormodel-oninput-transitionevents"""
@attr.s
class DetectorModelOnEnter(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.OnEnter"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onenter.html
Property Document:
- ``p_Events``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onenter.html#cfn-iotevents-detectormodel-onenter-events
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.OnEnter"
p_Events: typing.List[typing.Union['DetectorModelEvent', dict]] = attr.ib(
default=None,
converter=DetectorModelEvent.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelEvent), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Events"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-onenter.html#cfn-iotevents-detectormodel-onenter-events"""
@attr.s
class DetectorModelState(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.State"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html
Property Document:
- ``rp_StateName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-statename
- ``p_OnEnter``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-onenter
- ``p_OnExit``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-onexit
- ``p_OnInput``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-oninput
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.State"
rp_StateName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "StateName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-statename"""
p_OnEnter: typing.Union['DetectorModelOnEnter', dict] = attr.ib(
default=None,
converter=DetectorModelOnEnter.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelOnEnter)),
metadata={AttrMeta.PROPERTY_NAME: "OnEnter"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-onenter"""
p_OnExit: typing.Union['DetectorModelOnExit', dict] = attr.ib(
default=None,
converter=DetectorModelOnExit.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelOnExit)),
metadata={AttrMeta.PROPERTY_NAME: "OnExit"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-onexit"""
p_OnInput: typing.Union['DetectorModelOnInput', dict] = attr.ib(
default=None,
converter=DetectorModelOnInput.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(DetectorModelOnInput)),
metadata={AttrMeta.PROPERTY_NAME: "OnInput"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-state.html#cfn-iotevents-detectormodel-state-oninput"""
@attr.s
class DetectorModelDetectorModelDefinition(Property):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel.DetectorModelDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-detectormodeldefinition.html
Property Document:
- ``rp_InitialStateName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-detectormodeldefinition.html#cfn-iotevents-detectormodel-detectormodeldefinition-initialstatename
- ``rp_States``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-detectormodeldefinition.html#cfn-iotevents-detectormodel-detectormodeldefinition-states
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel.DetectorModelDefinition"
rp_InitialStateName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "InitialStateName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-detectormodeldefinition.html#cfn-iotevents-detectormodel-detectormodeldefinition-initialstatename"""
rp_States: typing.List[typing.Union['DetectorModelState', dict]] = attr.ib(
default=None,
converter=DetectorModelState.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(DetectorModelState), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "States"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotevents-detectormodel-detectormodeldefinition.html#cfn-iotevents-detectormodel-detectormodeldefinition-states"""
#--- Resource declaration ---
@attr.s
class Input(Resource):
"""
AWS Object Type = "AWS::IoTEvents::Input"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html
Property Document:
- ``rp_InputDefinition``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputdefinition
- ``p_InputDescription``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputdescription
- ``p_InputName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputname
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-tags
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::Input"
rp_InputDefinition: typing.Union['InputInputDefinition', dict] = attr.ib(
default=None,
converter=InputInputDefinition.from_dict,
validator=attr.validators.instance_of(InputInputDefinition),
metadata={AttrMeta.PROPERTY_NAME: "InputDefinition"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputdefinition"""
p_InputDescription: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "InputDescription"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputdescription"""
p_InputName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "InputName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-inputname"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-input.html#cfn-iotevents-input-tags"""
@attr.s
class DetectorModel(Resource):
"""
AWS Object Type = "AWS::IoTEvents::DetectorModel"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html
Property Document:
- ``rp_DetectorModelDefinition``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodeldefinition
- ``rp_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-rolearn
- ``p_DetectorModelDescription``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodeldescription
- ``p_DetectorModelName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodelname
- ``p_EvaluationMethod``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-evaluationmethod
- ``p_Key``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-key
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-tags
"""
AWS_OBJECT_TYPE = "AWS::IoTEvents::DetectorModel"
rp_DetectorModelDefinition: typing.Union['DetectorModelDetectorModelDefinition', dict] = attr.ib(
default=None,
converter=DetectorModelDetectorModelDefinition.from_dict,
validator=attr.validators.instance_of(DetectorModelDetectorModelDefinition),
metadata={AttrMeta.PROPERTY_NAME: "DetectorModelDefinition"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodeldefinition"""
rp_RoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-rolearn"""
p_DetectorModelDescription: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DetectorModelDescription"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodeldescription"""
p_DetectorModelName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DetectorModelName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-detectormodelname"""
p_EvaluationMethod: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "EvaluationMethod"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-evaluationmethod"""
p_Key: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Key"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-key"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotevents-detectormodel.html#cfn-iotevents-detectormodel-tags"""
| 63.707834 | 229 | 0.769556 | 7,337 | 69,123 | 7.16042 | 0.02317 | 0.176717 | 0.044598 | 0.068924 | 0.900678 | 0.900678 | 0.884079 | 0.827851 | 0.827851 | 0.827851 | 0 | 0.000499 | 0.101601 | 69,123 | 1,084 | 230 | 63.766605 | 0.845491 | 0.334606 | 0 | 0.425676 | 0 | 0 | 0.097378 | 0.053846 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006757 | 0 | 0.260135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c9fe19ea11318b2cee7e34a3f157a5e613235bd | 7,782 | py | Python | MODEL/model_attention.py | quincy-125/DigiPath_CLAM_TF | 8b7ab50caaca13f666268b0f4e071d123e190978 | [
"MIT"
] | 5 | 2021-05-10T17:23:46.000Z | 2022-02-27T22:33:03.000Z | MODEL/model_attention.py | quincy-125/DigiPath_CLAM_TF | 8b7ab50caaca13f666268b0f4e071d123e190978 | [
"MIT"
] | null | null | null | MODEL/model_attention.py | quincy-125/DigiPath_CLAM_TF | 8b7ab50caaca13f666268b0f4e071d123e190978 | [
"MIT"
] | 2 | 2020-12-12T00:15:21.000Z | 2021-05-10T17:23:57.000Z | import tensorflow as tf
class NG_Att_Net(tf.keras.Model):
def __init__(self, dim_features=1024, dim_compress_features=512, n_hidden_units=256, n_class=2,
dropout=False, dropout_rate=.25):
super(NG_Att_Net, self).__init__()
self.dim_features = dim_features
self.dim_compress_features = dim_compress_features
self.n_hidden_units = n_hidden_units
self.n_class = n_class
self.dropout = dropout
self.dropout_rate = dropout_rate
self.compression_model = tf.keras.models.Sequential()
self.model = tf.keras.models.Sequential()
self.fc_compress_layer = tf.keras.layers.Dense(units=dim_compress_features,
activation='relu',
input_shape=(dim_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Fully_Connected_Layer')
self.compression_model.add(self.fc_compress_layer)
self.att_layer1 = tf.keras.layers.Dense(units=n_hidden_units,
activation='linear',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_layer1')
self.att_layer2 = tf.keras.layers.Dense(units=n_hidden_units,
activation='tanh',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_Layer2')
self.att_layer3 = tf.keras.layers.Dense(units=n_class,
activation='linear',
input_shape=(n_hidden_units,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_Layer3')
self.model.add(self.att_layer1)
self.model.add(self.att_layer2)
if dropout:
self.model.add(tf.keras.layers.Dropout(dropout_rate, name='Dropout_Layer'))
self.model.add(self.att_layer3)
def att_model(self):
attention_model = [self.compression_model, self.model]
return attention_model
def call(self, img_features):
h = list()
A = list()
for i in img_features:
c_imf = self.att_model()[0](i)
h.append(c_imf)
for j in h:
a = self.att_model()[1](j)
A.append(a)
return h, A
class G_Att_Net(tf.keras.Model):
def __init__(self, dim_features=1024, dim_compress_features=512, n_hidden_units=256, n_class=2,
dropout=False, dropout_rate=.25):
super(G_Att_Net, self).__init__()
self.dim_features = dim_features
self.dim_compress_features = dim_compress_features
self.n_hidden_units = n_hidden_units
self.n_class = n_class
self.dropout = dropout
self.dropout_rate = dropout_rate
self.compression_model = tf.keras.models.Sequential()
self.model_v = tf.keras.models.Sequential()
self.model_u = tf.keras.models.Sequential()
self.model = tf.keras.models.Sequential()
self.fc_compress_layer = tf.keras.layers.Dense(units=dim_compress_features,
activation='relu',
input_shape=(dim_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Fully_Connected_Layer')
self.compression_model.add(self.fc_compress_layer)
self.att_v_layer1 = tf.keras.layers.Dense(units=n_hidden_units,
activation='linear',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_V_Layer1')
self.att_v_layer2 = tf.keras.layers.Dense(units=n_hidden_units,
activation='tanh',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_V_Layer2')
self.att_u_layer1 = tf.keras.layers.Dense(units=n_hidden_units,
activation='linear',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_U_Layer1')
self.att_u_layer2 = tf.keras.layers.Dense(units=n_hidden_units,
activation='sigmoid',
input_shape=(dim_compress_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_U_Layer2')
self.att_layer_f = tf.keras.layers.Dense(units=n_class,
activation='linear',
input_shape=(n_hidden_units,),
kernel_initializer='glorot_normal',
bias_initializer='zeros',
name='Attention_Gated_Final_Layer')
self.model_v.add(self.att_v_layer1)
self.model_v.add(self.att_v_layer2)
self.model_u.add(self.att_u_layer1)
self.model_u.add(self.att_u_layer2)
if dropout:
self.model_v.add(tf.keras.layers.Dropout(dropout_rate, name='Dropout_V_Layer'))
self.model_u.add(tf.keras.layers.Dropout(dropout_rate, name='Dropout_U_Layer'))
self.model.add(self.att_layer_f)
def att_model(self):
attention_model = [self.compression_model, self.model_v, self.model_u, self.model]
return attention_model
def call(self, img_features):
h = list()
A = list()
for i in img_features:
c_imf = self.att_model()[0](i)
h.append(c_imf)
for j in h:
att_v_output = self.att_model()[1](j)
att_u_output = self.att_model()[2](j)
att_input = tf.math.multiply(att_v_output, att_u_output)
a = self.att_model()[3](att_input)
A.append(a)
return h, A | 47.163636 | 99 | 0.4734 | 730 | 7,782 | 4.706849 | 0.112329 | 0.04482 | 0.077416 | 0.052387 | 0.905704 | 0.872526 | 0.83993 | 0.815483 | 0.815483 | 0.776193 | 0 | 0.012328 | 0.447571 | 7,782 | 165 | 100 | 47.163636 | 0.786927 | 0 | 0 | 0.679389 | 0 | 0 | 0.059746 | 0.008865 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045802 | false | 0 | 0.007634 | 0 | 0.099237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
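The attention score in `G_Att_Net` multiplies a tanh branch by a sigmoid gate elementwise before the final linear layer. A dependency-free sketch of that computation, simplified to a single weight matrix per branch (`V`, `U`, and `w_f` below are toy weights, not the trained model's):

```python
import math

# a = w_f . (tanh(V h) * sigmoid(U h)), one feature vector h at a time.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_score(h, V, U, w_f):
    v_out = [math.tanh(sum(v * x for v, x in zip(row, h))) for row in V]
    u_out = [sigmoid(sum(u * x for u, x in zip(row, h))) for row in U]
    gated = [a * b for a, b in zip(v_out, u_out)]  # elementwise gate
    return sum(w * g for w, g in zip(w_f, gated))

h = [1.0, -2.0]
V = [[0.5, 0.1], [0.2, -0.3]]
U = [[0.1, 0.4], [-0.2, 0.6]]
w_f = [1.0, 1.0]
print(gated_score(h, V, U, w_f))
```

Because each gated element is a tanh value in (-1, 1) scaled by a sigmoid in (0, 1), the score stays bounded by the sum of the final-layer weights.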
08fe264ab6c0a4071dd0636c496fe5065c30ca29 | 67 | py | Python | library/__init__.py | spectraldani/DeepMahalanobisGP | bf2d788ac8b56d25f544b6cb9c0325820f4b7e64 | [
"Apache-2.0"
] | null | null | null | library/__init__.py | spectraldani/DeepMahalanobisGP | bf2d788ac8b56d25f544b6cb9c0325820f4b7e64 | [
"Apache-2.0"
] | null | null | null | library/__init__.py | spectraldani/DeepMahalanobisGP | bf2d788ac8b56d25f544b6cb9c0325820f4b7e64 | [
"Apache-2.0"
] | null | null | null | from . import helper
from . import transforms
from . import models
| 16.75 | 24 | 0.776119 | 9 | 67 | 5.777778 | 0.555556 | 0.576923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 67 | 3 | 25 | 22.333333 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c7a73c1b3a702a11dc1a927106f4e8205f6d102 | 207 | py | Python | todolist/urls.py | Russel777/todolist | 479142a750cdcc724308018617eec8eeac5876c6 | [
"MIT"
] | null | null | null | todolist/urls.py | Russel777/todolist | 479142a750cdcc724308018617eec8eeac5876c6 | [
"MIT"
] | null | null | null | todolist/urls.py | Russel777/todolist | 479142a750cdcc724308018617eec8eeac5876c6 | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.urls import include
from django.urls import path
urlpatterns = [
path('admin/', admin.site.urls),
path('todolist_app/', include('todolist_app.urls'))
]
| 20.7 | 55 | 0.729469 | 28 | 207 | 5.321429 | 0.428571 | 0.201342 | 0.187919 | 0.268456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144928 | 207 | 9 | 56 | 23 | 0.841808 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1c8ce1a5c90f73af2f105b077ab4fa500c4821e3 | 37 | py | Python | cloud/amazon/services/__init__.py | Sunchasing/python-common | bc9f11fe4585ef9abca7006c0bf64b11062742fd | [
"Apache-2.0"
] | 5 | 2021-08-15T23:04:25.000Z | 2021-09-06T18:32:53.000Z | cloud/amazon/services/__init__.py | Sunchasing/python-common | bc9f11fe4585ef9abca7006c0bf64b11062742fd | [
"Apache-2.0"
] | null | null | null | cloud/amazon/services/__init__.py | Sunchasing/python-common | bc9f11fe4585ef9abca7006c0bf64b11062742fd | [
"Apache-2.0"
] | 1 | 2022-01-28T13:12:23.000Z | 2022-01-28T13:12:23.000Z | from .s3 import *
from .ec2 import *
| 12.333333 | 18 | 0.675676 | 6 | 37 | 4.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.216216 | 37 | 2 | 19 | 18.5 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98c5806d206c39acdb2349720a03af0a09ce072c | 18,009 | py | Python | pyunity/values/vector.py | pyunity/pyunity | 5003cef4cdec320d3ee45c306b1a0f8e35175ceb | [
"MIT"
] | 158 | 2021-05-24T01:05:04.000Z | 2022-03-30T03:04:13.000Z | pyunity/values/vector.py | pyunity/pyunity | 5003cef4cdec320d3ee45c306b1a0f8e35175ceb | [
"MIT"
] | 14 | 2021-06-13T07:13:27.000Z | 2021-11-15T19:09:06.000Z | pyunity/values/vector.py | pyunity/pyunity | 5003cef4cdec320d3ee45c306b1a0f8e35175ceb | [
"MIT"
] | 6 | 2021-06-16T22:46:23.000Z | 2021-11-05T22:36:27.000Z | __all__ = ["Vector2", "Vector", "Vector3", "clamp"]
from .abc import ABCMeta, abstractmethod, abstractproperty
import glm
import operator
def clamp(x, _min, _max):
    """Clamp a value between a minimum and a maximum"""
    return min(_max, max(_min, x))
class Vector(metaclass=ABCMeta):
def __repr__(self):
return f"{self.__class__.__name__}({', '.join(map(str, self))})"
def __str__(self):
return f"{self.__class__.__name__}({', '.join(map(str, self))})"
def __getitem__(self, i):
return list(self)[i]
@abstractmethod
def __iter__(self):
pass
def __list__(self):
return list(iter(self))
@abstractmethod
def __len__(self):
pass
def __bool__(self):
return all(self)
@abstractmethod
def _o1(self, f):
pass
@abstractmethod
def _o2(self, other, f):
pass
@abstractmethod
def _r_o2(self, other, f):
pass
@abstractmethod
def _io(self, other, f):
pass
def __add__(self, other):
return self._o2(other, operator.add)
def __radd__(self, other):
return self._r_o2(other, operator.add)
def __iadd__(self, other):
return self._io(other, operator.add)
def __sub__(self, other):
return self._o2(other, operator.sub)
def __rsub__(self, other):
return self._r_o2(other, operator.sub)
def __isub__(self, other):
return self._io(other, operator.sub)
def __mul__(self, other):
return self._o2(other, operator.mul)
def __rmul__(self, other):
return self._r_o2(other, operator.mul)
def __imul__(self, other):
return self._io(other, operator.mul)
    def __div__(self, other):
        # Python 2 legacy alias; operator.div was removed in Python 3,
        # so delegate to operator.truediv
        return self._o2(other, operator.truediv)
    def __rdiv__(self, other):
        return self._r_o2(other, operator.truediv)
    def __idiv__(self, other):
        return self._io(other, operator.truediv)
def __floordiv__(self, other):
return self._o2(other, operator.floordiv)
def __rfloordiv__(self, other):
return self._r_o2(other, operator.floordiv)
def __ifloordiv__(self, other):
return self._io(other, operator.floordiv)
def __truediv__(self, other):
return self._o2(other, operator.truediv)
def __rtruediv__(self, other):
return self._r_o2(other, operator.truediv)
def __itruediv__(self, other):
return self._io(other, operator.truediv)
def __mod__(self, other):
return self._o2(other, operator.mod)
def __rmod__(self, other):
return self._r_o2(other, operator.mod)
def __imod__(self, other):
return self._io(other, operator.mod)
def __lshift__(self, other):
return self._o2(other, operator.lshift)
def __rlshift__(self, other):
return self._r_o2(other, operator.lshift)
def __ilshift__(self, other):
return self._io(other, operator.lshift)
def __rshift__(self, other):
return self._o2(other, operator.rshift)
def __rrshift__(self, other):
return self._r_o2(other, operator.rshift)
def __irshift__(self, other):
return self._io(other, operator.rshift)
def __eq__(self, other):
return all(self._o2(other, operator.eq))
def __ne__(self, other):
return any(self._o2(other, operator.ne))
def __gt__(self, other):
return all(self._o2(other, operator.gt))
def __lt__(self, other):
return all(self._o2(other, operator.lt))
def __ge__(self, other):
return all(self._o2(other, operator.ge))
def __le__(self, other):
return all(self._o2(other, operator.le))
def __and__(self, other):
return self._o2(other, operator.and_)
def __rand__(self, other):
return self._r_o2(other, operator.and_)
def __or__(self, other):
return self._o2(other, operator.or_)
def __ror__(self, other):
return self._r_o2(other, operator.or_)
def __xor__(self, other):
return self._o2(other, operator.xor)
def __rxor__(self, other):
return self._r_o2(other, operator.xor)
def __neg__(self):
return self._o1(operator.neg)
def __pos__(self):
return self._o1(operator.pos)
def __abs__(self):
return self.length
def abs(self):
return self._o1(abs)
    def __round__(self, ndigits=None):
        # round(v) / round(v, n): round each component to ndigits
        return self._o2(ndigits, round)
def __invert__(self):
return self._o1(operator.invert)
@abstractproperty
def length(self):
pass
class Vector2(Vector):
def __init__(self, x_or_list=None, y=None):
if x_or_list is not None:
if y is None:
if hasattr(x_or_list, "x") and hasattr(x_or_list, "y"):
self.x = x_or_list.x
self.y = x_or_list.y
else:
self.x = x_or_list[0]
self.y = x_or_list[1]
else:
self.x = x_or_list
self.y = y
else:
self.x = 0
self.y = 0
def __iter__(self):
yield self.x
yield self.y
def __len__(self):
return 2
def _o1(self, f):
"""Unary operator"""
return Vector2(f(self.x), f(self.y))
def _o2(self, other, f):
"""Any two-operator operation where the left operand is a Vector2"""
if hasattr(other, "__getitem__"):
return Vector2(f(self.x, other[0]), f(self.y, other[1]))
else:
return Vector2(f(self.x, other), f(self.y, other))
def _r_o2(self, other, f):
"""Any two-operator operation where the right operand is a Vector2"""
if hasattr(other, "__getitem__"):
return Vector2(f(other[0], self.x), f(other[1], self.y))
else:
return Vector2(f(other, self.x), f(other, self.y))
def _io(self, other, f):
"""Inplace operator"""
if hasattr(other, "__getitem__"):
self.x = f(self.x, other[0])
self.y = f(self.y, other[1])
else:
self.x = f(self.x, other)
self.y = f(self.y, other)
return self
def copy(self):
"""Makes a copy of the Vector2"""
return Vector2(self.x, self.y)
def get_length_sqrd(self):
"""
Gets the length of the vector squared. This
is much faster than finding the length.
Returns
-------
float
The length of the vector squared
"""
return self.x ** 2 + self.y ** 2
@property
def length(self):
"""Gets or sets the magnitude of the vector"""
return glm.sqrt(self.x ** 2 + self.y ** 2)
@length.setter
def length(self, value):
length = self.length
if length != 0:
self.x *= value / length
self.y *= value / length
def normalized(self):
"""
Get a normalized copy of the vector, or Vector2(0, 0)
if the length is 0.
Returns
-------
Vector2
A normalized vector
"""
length = self.length
if length != 0:
return 1 / length * self
return self.copy()
def normalize(self):
"""
Normalize the vector in place.
"""
length = self.length
if length != 0:
self.x /= length
self.y /= length
def normalize_return_length(self):
"""
Normalize the vector and return its length before the normalization
Returns
-------
float
The length before the normalization
"""
length = self.length
if length != 0:
self.x /= length
self.y /= length
return length
def get_distance(self, other):
"""
The distance between this vector and the other vector
Returns
-------
float
The distance
"""
return glm.sqrt((self.x - other[0]) ** 2 + (self.y - other[1]) ** 2)
def get_dist_sqrd(self, other):
"""
The distance between this vector and the other vector, squared.
It is more efficient to call this than to call `get_distance` and
square it.
Returns
-------
float
The squared distance
"""
return (self.x - other[0]) ** 2 + (self.y - other[1]) ** 2
@property
def int_tuple(self):
"""Return the x, y and z values of this vector as ints"""
return int(self.x), int(self.y)
@property
def rounded(self):
"""Return the x, y and z values of this vector rounded to the nearest integer"""
return round(self.x), round(self.y)
def clamp(self, min, max):
"""
        Clamps each component of this vector between the
        corresponding components of two other vectors, keeping
        the result inside the bounding box they define.
Parameters
----------
min : Vector2
Min vector
max : Vector2
Max vector
"""
self.x = clamp(self.x, min.x, max.x)
self.y = clamp(self.y, min.y, max.y)
def dot(self, other):
"""
Dot product of two vectors.
Parameters
----------
other : Vector2
Other vector
Returns
-------
float
Dot product of the two vectors
"""
return self.x * other[0] + self.y * other[1]
def cross(self, other):
"""
Cross product of two vectors. In 2D this
is a scalar.
Parameters
----------
other : Vector2
Other vector
Returns
-------
float
Cross product of the two vectors
"""
z = self.x * other[1] - self.y * other[0]
return z
@staticmethod
def min(a, b):
return a._o2(b, min)
@staticmethod
def max(a, b):
return a._o2(b, max)
@staticmethod
def zero():
"""A vector of zero length"""
return Vector2(0, 0)
@staticmethod
def one():
"""A vector of ones"""
return Vector2(1, 1)
@staticmethod
def left():
"""Vector2 pointing in the negative x axis"""
return Vector2(-1, 0)
@staticmethod
def right():
"""Vector2 pointing in the postive x axis"""
return Vector2(1, 0)
@staticmethod
def up():
"""Vector2 pointing in the postive y axis"""
return Vector2(0, 1)
@staticmethod
def down():
"""Vector2 pointing in the negative y axis"""
return Vector2(0, -1)
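The 2D dot, cross, and length formulas implemented above can be verified with a standalone sketch (plain tuples stand in for Vector2 here):

```python
import math

def dot2(a, b):
    return a[0] * b[0] + a[1] * b[1]

def cross2(a, b):
    # In 2D the cross product collapses to a scalar (the z component).
    return a[0] * b[1] - a[1] * b[0]

def length2(a):
    return math.sqrt(a[0] ** 2 + a[1] ** 2)

v, w = (3, 4), (1, 0)
print(dot2(v, w))    # 3
print(cross2(v, w))  # -4
print(length2(v))    # 5.0
```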
class Vector3(Vector):
def __init__(self, x_or_list=None, y=None, z=None):
if x_or_list is not None:
if y is None:
if hasattr(x_or_list, "x") and hasattr(x_or_list, "y") and hasattr(x_or_list, "z"):
self.x = x_or_list.x
self.y = x_or_list.y
self.z = x_or_list.z
else:
self.x = x_or_list[0]
self.y = x_or_list[1]
self.z = x_or_list[2]
else:
self.x = x_or_list
self.y = y
self.z = z
else:
self.x = 0
self.y = 0
self.z = 0
def __iter__(self):
yield self.x
yield self.y
yield self.z
def __len__(self):
return 3
def _o1(self, f):
"""Unary operator"""
return Vector3(f(self.x), f(self.y), f(self.z))
def _o2(self, other, f):
"""Any two-operator operation where the left operand is a Vector3"""
if isinstance(other, Vector3):
return Vector3(f(self.x, other.x), f(self.y, other.y), f(self.z, other.z))
elif hasattr(other, "__getitem__"):
return Vector3(f(self.x, other[0]), f(self.y, other[1]), f(self.z, other[2]))
else:
return Vector3(f(self.x, other), f(self.y, other), f(self.z, other))
def _r_o2(self, other, f):
"""Any two-operator operation where the right operand is a Vector3"""
if hasattr(other, "__getitem__"):
return Vector3(f(other[0], self.x), f(other[1], self.y), f(other[2], self.z))
else:
return Vector3(f(other, self.x), f(other, self.y), f(other, self.z))
def _io(self, other, f):
"""Inplace operator"""
if hasattr(other, "__getitem__"):
self.x = f(self.x, other[0])
self.y = f(self.y, other[1])
self.z = f(self.z, other[2])
else:
self.x = f(self.x, other)
self.y = f(self.y, other)
self.z = f(self.z, other)
return self
def copy(self):
"""
Makes a copy of the Vector3
Returns
-------
Vector3
A shallow copy of the vector
"""
return Vector3(self.x, self.y, self.z)
def get_length_sqrd(self):
"""
Gets the length of the vector squared. This
is much faster than finding the length.
Returns
-------
float
The length of the vector squared
"""
return self.x ** 2 + self.y ** 2 + self.z ** 2
@property
def length(self):
"""Gets or sets the magnitude of the vector"""
return glm.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)
@length.setter
def length(self, value):
length = self.length
if length != 0:
self.x *= value / length
self.y *= value / length
self.z *= value / length
def normalized(self):
"""
Get a normalized copy of the vector, or Vector3(0, 0, 0)
if the length is 0.
Returns
-------
Vector3
A normalized vector
"""
length = self.length
if length != 0:
return 1 / length * self
return self.copy()
def normalize(self):
"""
Normalize the vector in place.
"""
length = self.length
if length != 0:
self.x /= length
self.y /= length
self.z /= length
def normalize_return_length(self):
"""
Normalize the vector and return its length before the normalization
Returns
-------
float
The length before the normalization
"""
length = self.length
if length != 0:
self.x /= length
self.y /= length
self.z /= length
return length
def get_distance(self, other):
"""
The distance between this vector and the other vector
Returns
-------
float
The distance
"""
return glm.sqrt((self.x - other[0]) ** 2 + (self.y - other[1]) ** 2 + (self.z - other[2]) ** 2)
def get_dist_sqrd(self, other):
"""
The distance between this vector and the other vector, squared.
It is more efficient to call this than to call `get_distance` and
square it.
Returns
-------
float
The squared distance
"""
return (self.x - other[0]) ** 2 + (self.y - other[1]) ** 2 + (self.z - other[2]) ** 2
@property
def int_tuple(self):
"""Return the x, y and z values of this vector as ints"""
return int(self.x), int(self.y), int(self.z)
@property
def rounded(self):
"""Return the x, y and z values of this vector rounded to the nearest integer"""
return round(self.x), round(self.y), round(self.z)
def clamp(self, min, max):
"""
        Clamps each component of this vector between the
        corresponding components of two other vectors, keeping
        the result inside the bounding box they define.
Parameters
----------
min : Vector3
Min vector
max : Vector3
Max vector
"""
self.x = clamp(self.x, min.x, max.x)
self.y = clamp(self.y, min.y, max.y)
self.z = clamp(self.z, min.z, max.z)
def dot(self, other):
"""
Dot product of two vectors.
Parameters
----------
other : Vector3
Other vector
Returns
-------
float
Dot product of the two vectors
"""
return self.x * other[0] + self.y * other[1] + self.z * other[2]
def cross(self, other):
"""
Cross product of two vectors
Parameters
----------
other : Vector3
Other vector
Returns
-------
Vector3
Cross product of the two vectors
"""
x = self.y * other[2] - self.z * other[1]
y = self.z * other[0] - self.x * other[2]
z = self.x * other[1] - self.y * other[0]
return Vector3(x, y, z)
@staticmethod
def min(a, b):
return a._o2(b, min)
@staticmethod
def max(a, b):
return a._o2(b, max)
@staticmethod
def zero():
"""A vector of zero length"""
return Vector3(0, 0, 0)
@staticmethod
def one():
"""A vector of ones"""
return Vector3(1, 1, 1)
@staticmethod
def forward():
"""Vector3 pointing in the positive z axis"""
return Vector3(0, 0, 1)
@staticmethod
def back():
"""Vector3 pointing in the negative z axis"""
return Vector3(0, 0, -1)
@staticmethod
def left():
"""Vector3 pointing in the negative x axis"""
return Vector3(-1, 0, 0)
@staticmethod
def right():
"""Vector3 pointing in the postive x axis"""
return Vector3(1, 0, 0)
@staticmethod
def up():
"""Vector3 pointing in the postive y axis"""
return Vector3(0, 1, 0)
@staticmethod
def down():
"""Vector3 pointing in the negative y axis"""
return Vector3(0, -1, 0)
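The arithmetic in `dot` and `cross` can be sanity-checked with a minimal standalone sketch. `Vec3` below is a hypothetical stand-in (the real `Vector3` constructor is defined earlier in the file); the formulas are copied verbatim from the methods above.

```python
from collections import namedtuple

Vec3 = namedtuple("Vec3", ["x", "y", "z"])

def dot(a, b):
    # Same formula as Vector3.dot, with tuple-style indexing on the second operand.
    return a.x * b[0] + a.y * b[1] + a.z * b[2]

def cross(a, b):
    # Same component formulas as Vector3.cross.
    return Vec3(a.y * b[2] - a.z * b[1],
                a.z * b[0] - a.x * b[2],
                a.x * b[1] - a.y * b[0])

right, up = Vec3(1, 0, 0), Vec3(0, 1, 0)
print(dot(right, up))    # orthogonal axes -> 0
print(cross(right, up))  # right-handed system -> Vec3(x=0, y=0, z=1), i.e. forward()
```

The last line confirms the class uses a right-handed, z-forward convention: `right() × up() == forward()`.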
File: elliot/recommender/latent_factor_models/iALS/__init__.py (repo: gategill/elliot, license: Apache-2.0)

from .iALS import iALS
File: OpenMatch/data/datasets/edrm_dataset.py (repo: fengtaoo/opmft, license: MIT)

from typing import Union, List, Tuple, Dict, Any
import json

import torch
from torch.utils.data import Dataset

from OpenMatch.data.tokenizers import Tokenizer


class EDRMDataset(Dataset):
    def __init__(
        self,
        dataset: Union[Dict, str],
        wrd_tokenizer: Tokenizer,
        ent_tokenizer: Tokenizer,
        mode: str,
        query_max_len: int = 10,
        doc_max_len: int = 256,
        des_max_len: int = 20,
        max_ent_num: int = 3,
        max_input: int = 1280000,
        task: str = 'ranking'
    ) -> None:
        self._dataset = dataset
        self._wrd_tokenizer = wrd_tokenizer
        self._ent_tokenizer = ent_tokenizer
        self._mode = mode
        self._query_max_len = query_max_len
        self._doc_max_len = doc_max_len
        self._des_max_len = des_max_len
        self._max_ent_num = max_ent_num
        self._max_input = max_input
        self._task = task
        if isinstance(self._dataset, str):
            self._id = False
            with open(self._dataset, 'r') as f:
                self._examples = []
                for i, line in enumerate(f):
                    if i >= self._max_input:
                        break
                    line = json.loads(line)
                    self._examples.append(line)
        elif isinstance(self._dataset, dict):
            self._id = True
            self._queries = {}
            with open(self._dataset['queries'], 'r') as f:
                for line in f:
                    if self._dataset['queries'].split('.')[-1] in ('json', 'jsonl'):
                        line = json.loads(line)
                    else:
                        query_id, query = line.strip('\n').split('\t')
                        line = {'query_id': query_id, 'query': query}
                    self._queries[line['query_id']] = (line['query'], line['query_ent'], line['query_des'])
            self._docs = {}
            with open(self._dataset['docs'], 'r') as f:
                for line in f:
                    if self._dataset['docs'].split('.')[-1] in ('json', 'jsonl'):
                        line = json.loads(line)
                    else:
                        doc_id, doc = line.strip('\n').split('\t')
                        line = {'doc_id': doc_id, 'doc': doc}
                    self._docs[line['doc_id']] = (line['doc'], line['doc_ent'], line['doc_des'])
            if self._mode == 'dev':
                qrels = {}
                with open(self._dataset['qrels'], 'r') as f:
                    for line in f:
                        line = line.strip().split()
                        if line[0] not in qrels:
                            qrels[line[0]] = {}
                        qrels[line[0]][line[2]] = int(line[3])
            with open(self._dataset['trec'], 'r') as f:
                self._examples = []
                for i, line in enumerate(f):
                    if i >= self._max_input:
                        break
                    line = line.strip().split()
                    if self._mode == 'dev':
                        if line[0] not in qrels or line[2] not in qrels[line[0]]:
                            label = 0
                        else:
                            label = qrels[line[0]][line[2]]
                    if self._mode == 'train':
                        if self._task == 'ranking':
                            self._examples.append({'query_id': line[0], 'doc_pos_id': line[1], 'doc_neg_id': line[2]})
                        elif self._task == 'classification':
                            # Key renamed from 'query' to 'query_id': __getitem__ looks up
                            # self._queries[example['query_id']].
                            self._examples.append({'query_id': line[0], 'doc_id': line[2], 'label': int(line[2])})
                        else:
                            raise ValueError('Task must be `ranking` or `classification`.')
                    elif self._mode == 'dev':
                        self._examples.append({'label': label, 'query_id': line[0], 'doc_id': line[2], 'retrieval_score': float(line[4])})
                    elif self._mode == 'test':
                        self._examples.append({'query_id': line[0], 'doc_id': line[2], 'retrieval_score': float(line[4])})
                    else:
                        raise ValueError('Mode must be `train`, `dev` or `test`.')
        else:
            raise ValueError('Dataset must be `str` or `dict`.')
        self._count = len(self._examples)

    def collate(self, batch: Dict[str, Any]):
        if self._mode == 'train':
            if self._task == 'ranking':
                query_wrd_idx = torch.tensor([item['query_wrd_idx'] for item in batch])
                query_wrd_mask = torch.tensor([item['query_wrd_mask'] for item in batch])
                doc_pos_wrd_idx = torch.tensor([item['doc_pos_wrd_idx'] for item in batch])
                doc_pos_wrd_mask = torch.tensor([item['doc_pos_wrd_mask'] for item in batch])
                doc_neg_wrd_idx = torch.tensor([item['doc_neg_wrd_idx'] for item in batch])
                doc_neg_wrd_mask = torch.tensor([item['doc_neg_wrd_mask'] for item in batch])
                query_ent_idx = torch.tensor([item['query_ent_idx'] for item in batch])
                query_ent_mask = torch.tensor([item['query_ent_mask'] for item in batch])
                doc_pos_ent_idx = torch.tensor([item['doc_pos_ent_idx'] for item in batch])
                doc_pos_ent_mask = torch.tensor([item['doc_pos_ent_mask'] for item in batch])
                doc_neg_ent_idx = torch.tensor([item['doc_neg_ent_idx'] for item in batch])
                doc_neg_ent_mask = torch.tensor([item['doc_neg_ent_mask'] for item in batch])
                query_des_idx = torch.tensor([item['query_des_idx'] for item in batch])
                doc_pos_des_idx = torch.tensor([item['doc_pos_des_idx'] for item in batch])
                doc_neg_des_idx = torch.tensor([item['doc_neg_des_idx'] for item in batch])
                return {'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                        'doc_pos_wrd_idx': doc_pos_wrd_idx, 'doc_pos_wrd_mask': doc_pos_wrd_mask,
                        'doc_neg_wrd_idx': doc_neg_wrd_idx, 'doc_neg_wrd_mask': doc_neg_wrd_mask,
                        'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                        'doc_pos_ent_idx': doc_pos_ent_idx, 'doc_pos_ent_mask': doc_pos_ent_mask,
                        'doc_neg_ent_idx': doc_neg_ent_idx, 'doc_neg_ent_mask': doc_neg_ent_mask,
                        'query_des_idx': query_des_idx, 'doc_pos_des_idx': doc_pos_des_idx, 'doc_neg_des_idx': doc_neg_des_idx}
            elif self._task == 'classification':
                query_wrd_idx = torch.tensor([item['query_wrd_idx'] for item in batch])
                query_wrd_mask = torch.tensor([item['query_wrd_mask'] for item in batch])
                doc_wrd_idx = torch.tensor([item['doc_wrd_idx'] for item in batch])
                doc_wrd_mask = torch.tensor([item['doc_wrd_mask'] for item in batch])
                query_ent_idx = torch.tensor([item['query_ent_idx'] for item in batch])
                query_ent_mask = torch.tensor([item['query_ent_mask'] for item in batch])
                doc_ent_idx = torch.tensor([item['doc_ent_idx'] for item in batch])
                doc_ent_mask = torch.tensor([item['doc_ent_mask'] for item in batch])
                query_des_idx = torch.tensor([item['query_des_idx'] for item in batch])
                doc_des_idx = torch.tensor([item['doc_des_idx'] for item in batch])
                label = torch.tensor([item['label'] for item in batch])
                return {'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                        'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                        'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                        'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                        'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx,
                        'label': label}
            else:
                raise ValueError('Task must be `ranking` or `classification`.')
        elif self._mode == 'dev':
            query_id = [item['query_id'] for item in batch]
            doc_id = [item['doc_id'] for item in batch]
            label = [item['label'] for item in batch]
            retrieval_score = torch.tensor([item['retrieval_score'] for item in batch])
            query_wrd_idx = torch.tensor([item['query_wrd_idx'] for item in batch])
            query_wrd_mask = torch.tensor([item['query_wrd_mask'] for item in batch])
            doc_wrd_idx = torch.tensor([item['doc_wrd_idx'] for item in batch])
            doc_wrd_mask = torch.tensor([item['doc_wrd_mask'] for item in batch])
            query_ent_idx = torch.tensor([item['query_ent_idx'] for item in batch])
            query_ent_mask = torch.tensor([item['query_ent_mask'] for item in batch])
            doc_ent_idx = torch.tensor([item['doc_ent_idx'] for item in batch])
            doc_ent_mask = torch.tensor([item['doc_ent_mask'] for item in batch])
            query_des_idx = torch.tensor([item['query_des_idx'] for item in batch])
            doc_des_idx = torch.tensor([item['doc_des_idx'] for item in batch])
            return {'query_id': query_id, 'doc_id': doc_id, 'label': label, 'retrieval_score': retrieval_score,
                    'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                    'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                    'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                    'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                    'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx}
        else:
            query_id = [item['query_id'] for item in batch]
            doc_id = [item['doc_id'] for item in batch]
            retrieval_score = torch.tensor([item['retrieval_score'] for item in batch])
            query_wrd_idx = torch.tensor([item['query_wrd_idx'] for item in batch])
            query_wrd_mask = torch.tensor([item['query_wrd_mask'] for item in batch])
            doc_wrd_idx = torch.tensor([item['doc_wrd_idx'] for item in batch])
            doc_wrd_mask = torch.tensor([item['doc_wrd_mask'] for item in batch])
            query_ent_idx = torch.tensor([item['query_ent_idx'] for item in batch])
            query_ent_mask = torch.tensor([item['query_ent_mask'] for item in batch])
            doc_ent_idx = torch.tensor([item['doc_ent_idx'] for item in batch])
            doc_ent_mask = torch.tensor([item['doc_ent_mask'] for item in batch])
            query_des_idx = torch.tensor([item['query_des_idx'] for item in batch])
            doc_des_idx = torch.tensor([item['doc_des_idx'] for item in batch])
            return {'query_id': query_id, 'doc_id': doc_id, 'retrieval_score': retrieval_score,
                    'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                    'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                    'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                    'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                    'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx}

    def __getitem__(self, index: int) -> Dict[str, Any]:
        example = self._examples[index]
        if self._id:
            example['query'], example['query_ent'], example['query_des'] = self._queries[example['query_id']]
            if self._mode == 'train' and self._task == 'ranking':
                example['doc_pos'], example['doc_pos_ent'], example['doc_pos_des'] = self._docs[example['doc_pos_id']]
                example['doc_neg'], example['doc_neg_ent'], example['doc_neg_des'] = self._docs[example['doc_neg_id']]
            else:
                example['doc'], example['doc_ent'], example['doc_des'] = self._docs[example['doc_id']]
        if self._mode == 'train':
            if self._task == 'ranking':
                query_wrd_idx, query_wrd_mask = self._wrd_tokenizer.process(example['query'], self._query_max_len)
                doc_pos_wrd_idx, doc_pos_wrd_mask = self._wrd_tokenizer.process(example['doc_pos'], self._doc_max_len)
                doc_neg_wrd_idx, doc_neg_wrd_mask = self._wrd_tokenizer.process(example['doc_neg'], self._doc_max_len)
                query_ent_idx, query_ent_mask = self._ent_tokenizer.token_process(example['query_ent'], self._max_ent_num)
                doc_pos_ent_idx, doc_pos_ent_mask = self._ent_tokenizer.token_process(example['doc_pos_ent'], self._max_ent_num)
                doc_neg_ent_idx, doc_neg_ent_mask = self._ent_tokenizer.token_process(example['doc_neg_ent'], self._max_ent_num)
                query_des_idx, query_des_mask = self._wrd_tokenizer.batch_process(example['query_des'], self._des_max_len, self._max_ent_num)
                doc_pos_des_idx, doc_pos_des_mask = self._wrd_tokenizer.batch_process(example['doc_pos_des'], self._des_max_len, self._max_ent_num)
                doc_neg_des_idx, doc_neg_des_mask = self._wrd_tokenizer.batch_process(example['doc_neg_des'], self._des_max_len, self._max_ent_num)
                return {'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                        'doc_pos_wrd_idx': doc_pos_wrd_idx, 'doc_pos_wrd_mask': doc_pos_wrd_mask,
                        'doc_neg_wrd_idx': doc_neg_wrd_idx, 'doc_neg_wrd_mask': doc_neg_wrd_mask,
                        'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                        'doc_pos_ent_idx': doc_pos_ent_idx, 'doc_pos_ent_mask': doc_pos_ent_mask,
                        'doc_neg_ent_idx': doc_neg_ent_idx, 'doc_neg_ent_mask': doc_neg_ent_mask,
                        'query_des_idx': query_des_idx, 'doc_pos_des_idx': doc_pos_des_idx, 'doc_neg_des_idx': doc_neg_des_idx}
            elif self._task == 'classification':
                query_wrd_idx, query_wrd_mask = self._wrd_tokenizer.process(example['query'], self._query_max_len)
                doc_wrd_idx, doc_wrd_mask = self._wrd_tokenizer.process(example['doc'], self._doc_max_len)
                query_ent_idx, query_ent_mask = self._ent_tokenizer.token_process(example['query_ent'], self._max_ent_num)
                doc_ent_idx, doc_ent_mask = self._ent_tokenizer.token_process(example['doc_ent'], self._max_ent_num)
                query_des_idx, query_des_mask = self._wrd_tokenizer.batch_process(example['query_des'], self._des_max_len, self._max_ent_num)
                doc_des_idx, doc_des_mask = self._wrd_tokenizer.batch_process(example['doc_des'], self._des_max_len, self._max_ent_num)
                return {'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                        'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                        'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                        'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                        'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx,
                        'label': example['label']}
            else:
                raise ValueError('Task must be `ranking` or `classification`.')
        elif self._mode == 'dev':
            query_wrd_idx, query_wrd_mask = self._wrd_tokenizer.process(example['query'], self._query_max_len)
            doc_wrd_idx, doc_wrd_mask = self._wrd_tokenizer.process(example['doc'], self._doc_max_len)
            query_ent_idx, query_ent_mask = self._ent_tokenizer.token_process(example['query_ent'], self._max_ent_num)
            doc_ent_idx, doc_ent_mask = self._ent_tokenizer.token_process(example['doc_ent'], self._max_ent_num)
            query_des_idx, query_des_mask = self._wrd_tokenizer.batch_process(example['query_des'], self._des_max_len, self._max_ent_num)
            doc_des_idx, doc_des_mask = self._wrd_tokenizer.batch_process(example['doc_des'], self._des_max_len, self._max_ent_num)
            return {'query_id': example['query_id'], 'doc_id': example['doc_id'], 'label': example['label'], 'retrieval_score': example['retrieval_score'],
                    'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                    'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                    'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                    'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                    'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx}
        elif self._mode == 'test':
            query_wrd_idx, query_wrd_mask = self._wrd_tokenizer.process(example['query'], self._query_max_len)
            doc_wrd_idx, doc_wrd_mask = self._wrd_tokenizer.process(example['doc'], self._doc_max_len)
            query_ent_idx, query_ent_mask = self._ent_tokenizer.token_process(example['query_ent'], self._max_ent_num)
            doc_ent_idx, doc_ent_mask = self._ent_tokenizer.token_process(example['doc_ent'], self._max_ent_num)
            query_des_idx, query_des_mask = self._wrd_tokenizer.batch_process(example['query_des'], self._des_max_len, self._max_ent_num)
            doc_des_idx, doc_des_mask = self._wrd_tokenizer.batch_process(example['doc_des'], self._des_max_len, self._max_ent_num)
            return {'query_id': example['query_id'], 'doc_id': example['doc_id'], 'retrieval_score': example['retrieval_score'],
                    'query_wrd_idx': query_wrd_idx, 'query_wrd_mask': query_wrd_mask,
                    'doc_wrd_idx': doc_wrd_idx, 'doc_wrd_mask': doc_wrd_mask,
                    'query_ent_idx': query_ent_idx, 'query_ent_mask': query_ent_mask,
                    'doc_ent_idx': doc_ent_idx, 'doc_ent_mask': doc_ent_mask,
                    'query_des_idx': query_des_idx, 'doc_des_idx': doc_des_idx}
        else:
            raise ValueError('Mode must be `train`, `dev` or `test`.')

    def __len__(self) -> int:
        return self._count
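The trec-file branching in `__init__` assumes the standard six-column TREC run format (`query_id Q0 doc_id rank score tag`), with `line[4]` being the retrieval score. A stripped-down sketch of the dev-mode parsing, using made-up in-memory data instead of the OpenMatch file handles:

```python
# Hypothetical qrels: query q1 judges d1 relevant (label 1); d2 is unjudged.
qrels = {"q1": {"d1": 1}}
trec_lines = ["q1 Q0 d1 1 12.5 run", "q1 Q0 d2 2 11.0 run"]

examples = []
for line in trec_lines:
    cols = line.strip().split()
    # Unjudged documents default to label 0, as in EDRMDataset's dev branch.
    label = qrels.get(cols[0], {}).get(cols[2], 0)
    examples.append({"label": label, "query_id": cols[0],
                     "doc_id": cols[2], "retrieval_score": float(cols[4])})

print([e["label"] for e in examples])  # [1, 0]
```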
File: test2.py (repo: kfeelixge/pyneta, license: Apache-2.0)

print("First Python Program")
File: poem/Poem/poem/models.py (repo: vrdel/poem, license: Apache-2.0)

from Poem.poem.dbmodels.probes import *
from Poem.poem.dbmodels.profiles import *
from Poem.poem.dbmodels.user import *
from Poem.poem.dbmodels.metricstags import *
from Poem.poem.dbmodels.rever import *
from Poem.poem.dbmodels.services import *
from Poem.poem.dbmodels.aggregations import *
File: PWWS/adversarial_tools.py (repo: ForeverZyh/ASCC, license: MIT)

import sys
import keras
import spacy
import numpy as np
import tensorflow as tf
import os
from .config import config
from keras import backend as K
from .paraphrase import _compile_perturbed_tokens, PWWS, PWWS_snli
from .word_level_process import text_to_vector
from .char_level_process import doc_process, get_embedding_dict
from .evaluate_word_saliency import evaluate_word_saliency, evaluate_word_saliency_snli
#from keras.backend.tensorflow_backend import set_session
from .unbuffered import Unbuffered
import torch.nn.functional as F
import torch
sys.stdout = Unbuffered(sys.stdout)
nlp = spacy.load('en', tagger=False, entity=False)
class ForwardGradWrapper:
'''
Utility class that computes the classification probability of model input and predict its class
'''
def __init__(self, model):
'''
:param model: Keras model.
This code makes a bunch of assumptions about the model:
- Model has single input
- Embedding is the first layer
- Model output is a scalar (logistic regression)
'''
input_tensor = model.input
self.model = model
self.input_tensor = input_tensor
self.sess = K.get_session()
def predict_prob(self, input_vector):
prob = self.model.predict(input_vector).squeeze()
return prob
def predict_classes(self, input_vector):
prediction = self.model.predict(input_vector)
classes = np.argmax(prediction, axis=1)
return classes
class ForwardGradWrapper_pytorch_snli:
'''
Utility class that computes the classification probability of model input and predict its class
'''
def __init__(self, model, device):
'''
:param model: Keras model.
This code makes a bunch of assumptions about the model:
- Model has single input
- Embedding is the first layer
- Model output is a scalar (logistic regression)
'''
model.eval()
self.model=model
self.device=device
def get_mask(self, tensor):
#mask = 1- (tensor==0)
mask = ~(tensor==0)
mask=mask.to(self.device).to(torch.float)
return mask
def predict_prob(self, input_vector_p, input_vector_h):
input_vector_p=torch.from_numpy(input_vector_p).to(self.device).to(torch.long)
input_vector_h=torch.from_numpy(input_vector_h).to(self.device).to(torch.long)
mask_p = self.get_mask(input_vector_p)
mask_h = self.get_mask(input_vector_h)
logit = self.model(mode="text_to_logit",x_p=input_vector_p, x_h=input_vector_h, x_p_mask=mask_p, x_h_mask=mask_h).squeeze(0)
return F.softmax(logit).detach().cpu().numpy()
def predict_classes(self, input_vector_p, input_vector_h):
input_vector_p=torch.from_numpy(input_vector_p).to(self.device).to(torch.long)
input_vector_h=torch.from_numpy(input_vector_h).to(self.device).to(torch.long)
mask_p = self.get_mask(input_vector_p)
mask_h = self.get_mask(input_vector_h)
logit = self.model(mode="text_to_logit",x_p=input_vector_p, x_h=input_vector_h, x_p_mask=mask_p, x_h_mask=mask_h).squeeze(0)
logit=logit.detach().cpu().numpy()
classes = np.argmax(logit, axis=-1)
return classes
class ForwardGradWrapper_pytorch:
'''
Utility class that computes the classification probability of model input and predict its class
'''
def __init__(self, model, device):
'''
:param model: Keras model.
This code makes a bunch of assumptions about the model:
- Model has single input
- Embedding is the first layer
- Model output is a scalar (logistic regression)
'''
model.eval()
self.model=model
self.device=device
def predict_prob(self, input_vector):
input_vector=torch.from_numpy(input_vector).to(self.device).to(torch.long)
logit = self.model(mode="text_to_logit",input=input_vector).squeeze(0)
return F.softmax(logit).detach().cpu().numpy()
def predict_classes(self, input_vector):
input_vector=torch.from_numpy(input_vector).to(self.device).to(torch.long)
logit = self.model(mode="text_to_logit",input=input_vector).squeeze(0)
logit=logit.detach().cpu().numpy()
classes = np.argmax(logit, axis=-1)
return classes
def adversarial_paraphrase(opt, input_text, true_y, grad_guide, tokenizer, dataset, level, verbose=True):
'''
Compute a perturbation, greedily choosing the synonym if it causes the most
significant change in the classification probability after replacement
:return perturbed_text: generated adversarial examples
:return perturbed_y: predicted class of perturbed_text
:return sub_rate: word replacement rate showed in Table 3
:return change_tuple_list: list of substitute words
'''
def halt_condition_fn(perturbed_text):
'''
Halt if model output is changed.
'''
perturbed_vector = None
if level == 'word':
perturbed_vector = text_to_vector(perturbed_text, tokenizer, dataset)
elif level == 'char':
max_len = config.char_max_len[dataset]
perturbed_vector = doc_process(perturbed_text, get_embedding_dict(), dataset).reshape(1, max_len)
adv_y = grad_guide.predict_classes(input_vector=perturbed_vector)
if adv_y != true_y:
return True
else:
return False
def heuristic_fn(text, candidate):
'''
Return the difference between the classification probability of the original
word and the candidate substitute synonym, which is defined in Eq.(4) and Eq.(5).
'''
doc = nlp(text)
origin_vector = None
perturbed_vector = None
if level == 'word':
origin_vector = text_to_vector(text, tokenizer, dataset)
perturbed_text_list = _compile_perturbed_tokens(doc, [candidate])
perturbed_text = ""
for i, word_str in enumerate(perturbed_text_list):
if i==0:
perturbed_text+=word_str
else:
if word_str[0] in [".", ",", "-", "'", ":", "!", "?", "(", ")", ";", "<", ">"]:
perturbed_text+=word_str
else:
perturbed_text+=(" "+word_str)
perturbed_doc = nlp(perturbed_text)
perturbed_vector = text_to_vector(perturbed_doc.text, tokenizer, dataset)
elif level == 'char':
max_len = config.char_max_len[dataset]
origin_vector = doc_process(text, get_embedding_dict(), dataset).reshape(1, max_len)
perturbed_tokens = _compile_perturbed_tokens(nlp(input_text), [candidate])
perturbed_text = ' '.join(perturbed_tokens)
perturbed_vector = doc_process(perturbed_text, get_embedding_dict(), dataset).reshape(1, max_len)
origin_prob = grad_guide.predict_prob(input_vector=origin_vector)
perturbed_prob = grad_guide.predict_prob(input_vector=perturbed_vector)
delta_p = origin_prob[true_y] - perturbed_prob[true_y]
return delta_p
doc = nlp(input_text)
# PWWS
position_word_list, word_saliency_list = evaluate_word_saliency(doc, grad_guide, tokenizer, true_y, dataset, level)
perturbed_text, sub_rate, NE_rate, change_tuple_list = PWWS(opt,
doc,
true_y,
dataset,
word_saliency_list=word_saliency_list,
heuristic_fn=heuristic_fn,
halt_condition_fn=halt_condition_fn,
verbose=verbose)
# print("perturbed_text after perturb_text:", perturbed_text)
origin_vector = perturbed_vector = None
if level == 'word':
origin_vector = text_to_vector(input_text, tokenizer, dataset)
perturbed_vector = text_to_vector(perturbed_text, tokenizer, dataset)
elif level == 'char':
max_len = config.char_max_len[dataset]
origin_vector = doc_process(input_text, get_embedding_dict(), dataset).reshape(1, max_len)
perturbed_vector = doc_process(perturbed_text, get_embedding_dict(), dataset).reshape(1, max_len)
perturbed_y = grad_guide.predict_classes(input_vector=perturbed_vector)
if verbose:
origin_prob = grad_guide.predict_prob(input_vector=origin_vector)
perturbed_prob = grad_guide.predict_prob(input_vector=perturbed_vector)
raw_score = origin_prob[true_y] - perturbed_prob[true_y]
print('Prob before: ', origin_prob[true_y], '. Prob after: ', perturbed_prob[true_y],
'. Prob shift: ', raw_score)
return perturbed_text, perturbed_y, sub_rate, NE_rate, change_tuple_list
def adversarial_paraphrase_snli(opt, input_text_p, input_text_h, true_y, grad_guide, tokenizer, dataset, level, verbose=True):
'''
Compute a perturbation, greedily choosing the synonym if it causes the most
significant change in the classification probability after replacement
:return perturbed_text: generated adversarial examples
:return perturbed_y: predicted class of perturbed_text
:return sub_rate: word replacement rate showed in Table 3
:return change_tuple_list: list of substitute words
'''
def halt_condition_fn(perturbed_text):
'''
Halt if model output is changed.
'''
perturbed_vector = None
if level == 'word':
perturbed_vector = text_to_vector(perturbed_text, tokenizer, dataset)
elif level == 'char':
max_len = config.char_max_len[dataset]
perturbed_vector = doc_process(perturbed_text, get_embedding_dict(), dataset).reshape(1, max_len)
adv_y = grad_guide.predict_classes(input_vector=perturbed_vector)
if adv_y != true_y:
return True
else:
return False
def gen(perturbed_text_list):
perturbed_text = ""
recur = 0
reduc = 0
for i, word_str in enumerate(perturbed_text_list):
if reduc==1 or i==0:
space = ""
reduc=0
else:
space = " "
if len(word_str)==1 and word_str[0] in [".", ",", "-", ":", "!", "?", "(", ")", ";", "<", ">", "{","}", "[","]"]:
space = ""
if word_str[0] in [ "(", "<", "{", "["]:
reduc=1
elif len(word_str)==1 and word_str[0] in ["\"",]:
if recur==0:
space = " "
reduc=1
elif recur==1:
space = ""
recur=(recur+1)%2
elif len(word_str)==1 and word_str[0] in ["'",]:
space = ""
reduc=1
perturbed_text+=(space+word_str)
return perturbed_text
def heuristic_fn(text_p, text_h, candidate_h):
'''
Return the difference between the classification probability of the original
word and the candidate substitute synonym, which is defined in Eq.(4) and Eq.(5).
'''
doc_h = nlp(text_h)
origin_vector_h = None
perturbed_vector_h = None
if level == 'word':
origin_vector_p = text_to_vector(text_p, tokenizer, dataset)
origin_vector_h = text_to_vector(text_h, tokenizer, dataset)
perturbed_text_list_h = _compile_perturbed_tokens(doc_h, [candidate_h])
"""
perturbed_text = ""
for i, word_str in enumerate(perturbed_text_list):
if i==0:
perturbed_text+=word_str
else:
if word_str[0] in [".", ",", "-", "'", ":", "!", "?", "(", ")", ";", "<", ">"]:
perturbed_text+=word_str
else:
perturbed_text+=(" "+word_str)
"""
perturbed_text_h = gen(perturbed_text_list_h)
perturbed_doc_h = nlp(perturbed_text_h)
perturbed_vector_h = text_to_vector(perturbed_doc_h.text, tokenizer, dataset)
origin_prob = grad_guide.predict_prob(input_vector_p=origin_vector_p, input_vector_h=origin_vector_h)
perturbed_prob = grad_guide.predict_prob(input_vector_p=origin_vector_p, input_vector_h=perturbed_vector_h)
delta_p = origin_prob[true_y] - perturbed_prob[true_y]
return delta_p
doc_p = nlp(input_text_p)
doc_h = nlp(input_text_h)
# PWWS
position_word_list_h, word_saliency_list_h = evaluate_word_saliency_snli(doc_p, doc_h, grad_guide, tokenizer, true_y, dataset, level)
perturbed_text_p, perturbed_text_h, sub_rate, NE_rate, change_tuple_list = PWWS_snli(opt, doc_p, doc_h,
true_y,
dataset,
word_saliency_list=word_saliency_list_h,
heuristic_fn=heuristic_fn,
#halt_condition_fn=halt_condition_fn,
halt_condition_fn=None,
verbose=verbose)
origin_vector = perturbed_vector = None
if level == 'word':
origin_vector_p = text_to_vector(input_text_p, tokenizer, dataset)
perturbed_vector_p = text_to_vector(perturbed_text_p, tokenizer, dataset)
origin_vector_h = text_to_vector(input_text_h, tokenizer, dataset)
perturbed_vector_h = text_to_vector(perturbed_text_h, tokenizer, dataset)
perturbed_y = grad_guide.predict_classes(input_vector_p=perturbed_vector_p, input_vector_h=perturbed_vector_h)
if verbose:
origin_prob = grad_guide.predict_prob(input_vector_p=origin_vector_p, input_vector_h=origin_vector_h)
perturbed_prob = grad_guide.predict_prob(input_vector_p=perturbed_vector_p, input_vector_h=perturbed_vector_h)
raw_score = origin_prob[true_y] - perturbed_prob[true_y]
print('Prob before: ', origin_prob[true_y], '. Prob after: ', perturbed_prob[true_y],
'. Prob shift: ', raw_score)
return perturbed_text_p, perturbed_text_h, perturbed_y, sub_rate, NE_rate, change_tuple_list
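The search that `adversarial_paraphrase` drives (via `PWWS` and `heuristic_fn`) can be illustrated with a toy greedy loop. Everything below is a dummy sketch: `synonyms` and `prob_drop` are made-up stand-ins for the real synonym candidates and the Eq.(4)/(5) probability-shift heuristic, and no spaCy or model is needed.

```python
# Hypothetical synonym table and scoring function (stand-ins for the real ones).
synonyms = {"good": ["fine", "great"], "movie": ["film"]}

def prob_drop(tokens, i, candidate):
    # Stand-in for heuristic_fn: pretend replacing "good" shifts the
    # true-class probability the most.
    return 0.4 if tokens[i] == "good" else 0.1

def greedy_substitute(tokens):
    # Mirror the greedy idea: per position, keep the synonym with the
    # largest heuristic score, then apply the substitutions.
    out = list(tokens)
    for i, tok in enumerate(tokens):
        cands = synonyms.get(tok, [])
        if not cands:
            continue
        out[i] = max(cands, key=lambda c: prob_drop(tokens, i, c))
    return out

print(greedy_substitute(["a", "good", "movie"]))  # ['a', 'fine', 'film']
```

The real PWWS additionally weights each position by word saliency and stops early via `halt_condition_fn` once the predicted class flips; this sketch only shows the greedy per-position choice.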
File: allennlp/data/tokenizers/utils/__init__.py (repo: Mokanarangan/UOM-Allen, license: Apache-2.0)

import allennlp.data.tokenizers.utils.sinhala
| 23 | 45 | 0.869565 | 6 | 46 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 46 | 1 | 46 | 46 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7d24ff842504f7d868b403140fc4ae8329b279f | 3,773 | py | Python | api/tests/test_batch_mode_split_requests.py | cglewis/FakeFinder | 5ba213508c5a76d7ca9a2359a5e421a7ba507e45 | [
"Apache-2.0"
] | 26 | 2021-05-19T17:24:58.000Z | 2022-03-29T16:46:23.000Z | api/tests/test_batch_mode_split_requests.py | cglewis/FakeFinder | 5ba213508c5a76d7ca9a2359a5e421a7ba507e45 | [
"Apache-2.0"
] | 37 | 2021-03-11T18:44:08.000Z | 2022-03-30T02:47:53.000Z | api/tests/test_batch_mode_split_requests.py | cglewis/FakeFinder | 5ba213508c5a76d7ca9a2359a5e421a7ba507e45 | [
"Apache-2.0"
] | 12 | 2021-03-01T17:45:17.000Z | 2022-01-06T23:32:39.000Z | import requests
import json
import pytest

url = 'http://0.0.0.0:5000/fakefinder/'
headers = {'Content-Type': 'application/json'}


@pytest.mark.skip(reason="no way of currently testing this")
def test_batch_mode_ntech():
    # Body
    payload = {"batchMode": True,
               "alwaysOn": False,
               "location": ["4000.mp4", "4001.mp4", "4002.mp4", "4003.mp4", "4004.mp4", "4005.mp4"],
               "modelName": "ntech",
               "splitRequests": True,
               "numSplitRequests": 2,
               }
    # Convert the dict to a JSON string with json.dumps() for the body data.
    resp = requests.post(url, headers=headers, data=json.dumps(payload, indent=4))
    # Validate response headers and body contents, e.g. status code.
    assert resp.status_code == 200
    # Print the full response body.
    print(resp.json())


@pytest.mark.parametrize('num_splits', [2, 4, 6, 10])
def test_batch_mode_boken(num_splits):
    # Body
    payload = {"batchMode": True,
               "alwaysOn": False,
               "location": ["4000.mp4", "4001.mp4", "4002.mp4", "4003.mp4", "4004.mp4",
                            "4005.mp4", "4006.mp4", "4007.mp4", "4008.mp4", "4009.mp4"],
               "modelName": "boken",
               "splitRequests": True,
               "numSplitRequests": num_splits,
               }
    # Convert the dict to a JSON string with json.dumps() for the body data.
    resp = requests.post(url, headers=headers, data=json.dumps(payload, indent=4))
    # Validate response headers and body contents, e.g. status code.
    assert resp.status_code == 200
    # Print the full response body.
    print(resp.json())


@pytest.mark.skip(reason="no way of currently testing this")
def test_batch_mode_medics():
    # Body
    payload = {"batchMode": True,
               "alwaysOn": False,
               "location": ["4000.mp4", "4001.mp4", "4002.mp4", "4003.mp4", "4004.mp4", "4005.mp4"],
               "modelName": "medics",
               "splitRequests": True,
               "numSplitRequests": 2,
               }
    # Convert the dict to a JSON string with json.dumps() for the body data.
    resp = requests.post(url, headers=headers, data=json.dumps(payload, indent=4))
    # Validate response headers and body contents, e.g. status code.
    assert resp.status_code == 200
    # Print the full response body.
    print(resp.json())


@pytest.mark.skip(reason="no way of currently testing this")
def test_batch_mode_wm():
    # Body
    payload = {"batchMode": True,
               "alwaysOn": False,
               "location": ["4000.mp4", "4001.mp4", "4002.mp4", "4003.mp4", "4004.mp4", "4005.mp4"],
               "modelName": "wm",
               "splitRequests": True,
               "numSplitRequests": 2,
               }
    # Convert the dict to a JSON string with json.dumps() for the body data.
    resp = requests.post(url, headers=headers, data=json.dumps(payload, indent=4))
    # Validate response headers and body contents, e.g. status code.
    assert resp.status_code == 200
    # Print the full response body.
    print(resp.json())


@pytest.mark.skip(reason="no way of currently testing this")
def test_batch_mode_eighteen():
    # Body
    payload = {"batchMode": True,
               "alwaysOn": False,
               "location": ["4000.mp4", "4001.mp4", "4002.mp4", "4003.mp4", "4004.mp4", "4005.mp4"],
               "modelName": "eighteen",
               "splitRequests": True,
               "numSplitRequests": 2,
               }
    # Convert the dict to a JSON string with json.dumps() for the body data.
    resp = requests.post(url, headers=headers, data=json.dumps(payload, indent=4))
    # Validate response headers and body contents, e.g. status code.
    assert resp.status_code == 200
    # Print the full response body.
    print(resp.json())
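The four skipped tests above differ only in `modelName`; a hedged sketch of collapsing that duplication by building the shared request body in a helper (the helper name and defaults are assumptions, not part of the repo):

```python
def make_payload(model_name, num_splits=2, clip_count=6):
    """Build the batch-mode request body shared by all model tests."""
    return {"batchMode": True,
            "alwaysOn": False,
            # 4000.mp4 .. (4000 + clip_count - 1).mp4, matching the fixtures above
            "location": ["%d.mp4" % i for i in range(4000, 4000 + clip_count)],
            "modelName": model_name,
            "splitRequests": True,
            "numSplitRequests": num_splits}
```

Each test body then reduces to `requests.post(url, headers=headers, data=json.dumps(make_payload("ntech")))`, and a single `@pytest.mark.parametrize("model", ["ntech", "medics", "wm", "eighteen"])` function could replace the four near-identical ones once the endpoint becomes testable.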
| 35.261682 | 148 | 0.598198 | 464 | 3,773 | 4.814655 | 0.181034 | 0.040286 | 0.031334 | 0.03581 | 0.878693 | 0.878693 | 0.878693 | 0.878693 | 0.878693 | 0.878693 | 0 | 0.074194 | 0.260535 | 3,773 | 106 | 149 | 35.59434 | 0.726523 | 0.211768 | 0 | 0.646154 | 0 | 0 | 0.274297 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.076923 | false | 0 | 0.046154 | 0 | 0.123077 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
405b3926e43fd6de83d589e4e121b8eae0dec560 | 841 | py | Python | octicons16px/infinity.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/infinity.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/infinity.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_INFINITY = """
<svg class="octicon octicon-infinity" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M3.5 6c-1.086 0-2 .914-2 2 0 1.086.914 2 2 2 .525 0 1.122-.244 1.825-.727.51-.35 1.025-.79 1.561-1.273-.536-.483-1.052-.922-1.56-1.273C4.621 6.244 4.025 6 3.5 6zm4.5.984c-.59-.533-1.204-1.066-1.825-1.493-.797-.548-1.7-.991-2.675-.991C1.586 4.5 0 6.086 0 8s1.586 3.5 3.5 3.5c.975 0 1.878-.444 2.675-.991.621-.427 1.235-.96 1.825-1.493.59.533 1.204 1.066 1.825 1.493.797.547 1.7.991 2.675.991 1.914 0 3.5-1.586 3.5-3.5s-1.586-3.5-3.5-3.5c-.975 0-1.878.443-2.675.991-.621.427-1.235.96-1.825 1.493zM9.114 8c.536.483 1.052.922 1.56 1.273.704.483 1.3.727 1.826.727 1.086 0 2-.914 2-2 0-1.086-.914-2-2-2-.525 0-1.122.244-1.825.727-.51.35-1.025.79-1.561 1.273z"></path></svg>
"""
| 168.2 | 812 | 0.648038 | 238 | 841 | 2.285714 | 0.331933 | 0.025735 | 0.027574 | 0.044118 | 0.555147 | 0.507353 | 0.507353 | 0.507353 | 0.444853 | 0.444853 | 0 | 0.548303 | 0.08918 | 841 | 4 | 813 | 210.25 | 0.16188 | 0 | 0 | 0 | 0 | 0.333333 | 0.969048 | 0.34881 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40dc7e55979f6b897fcea27e836748214111edc7 | 241 | py | Python | spectacles/validators/__init__.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 150 | 2019-10-05T18:35:36.000Z | 2022-03-26T21:21:44.000Z | spectacles/validators/__init__.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 406 | 2019-10-03T14:54:22.000Z | 2022-03-28T04:02:31.000Z | spectacles/validators/__init__.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 26 | 2019-11-08T16:21:50.000Z | 2022-03-28T06:06:14.000Z | from spectacles.validators.sql import SqlValidator
from spectacles.validators.data_test import DataTestValidator
from spectacles.validators.content import ContentValidator
__all__ = ["SqlValidator", "DataTestValidator", "ContentValidator"]
| 40.166667 | 67 | 0.854772 | 23 | 241 | 8.73913 | 0.521739 | 0.208955 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074689 | 241 | 5 | 68 | 48.2 | 0.901345 | 0 | 0 | 0 | 0 | 0 | 0.186722 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40fe2053b40cd53ab82146f07f97d3b6e466ba8d | 292 | py | Python | contour/__init__.py | Workiva/contour | 599e05c7ab6020b1ccc27e3f64f625abaec33ff2 | [
"Apache-2.0"
] | null | null | null | contour/__init__.py | Workiva/contour | 599e05c7ab6020b1ccc27e3f64f625abaec33ff2 | [
"Apache-2.0"
] | null | null | null | contour/__init__.py | Workiva/contour | 599e05c7ab6020b1ccc27e3f64f625abaec33ff2 | [
"Apache-2.0"
] | null | null | null | from contour import Contour
from contour import MissingConfigurationError
from contour import BadModulePathError
from contour import InvalidYamlFile
from contour import EmptyYamlFile
from contour import MissingYamlFile
from contour import find_contour_yaml
from contour import module_import
| 29.2 | 45 | 0.886986 | 35 | 292 | 7.314286 | 0.314286 | 0.34375 | 0.53125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113014 | 292 | 9 | 46 | 32.444444 | 0.988417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9051f40db2f91a628c330002e4e7918096a2b6b9 | 160 | py | Python | unit_tests/test_sensorSim.py | haakonsh/FSR-Desktop | 3796ace5d00da40f2609c77183bccb3f8bf8e721 | [
"MIT"
] | null | null | null | unit_tests/test_sensorSim.py | haakonsh/FSR-Desktop | 3796ace5d00da40f2609c77183bccb3f8bf8e721 | [
"MIT"
] | null | null | null | unit_tests/test_sensorSim.py | haakonsh/FSR-Desktop | 3796ace5d00da40f2609c77183bccb3f8bf8e721 | [
"MIT"
] | null | null | null | from unittest import TestCase
class TestSensorSim(TestCase):
    def test_generate(self):
        self.fail()

    def test_decode(self):
        self.fail()
| 16 | 30 | 0.6625 | 19 | 160 | 5.473684 | 0.631579 | 0.134615 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24375 | 160 | 9 | 31 | 17.777778 | 0.859504 | 0 | 0 | 0.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
9057c84240cd930fee24fbe62c016934da11328d | 10,629 | py | Python | torchpruner/module_pruner/pruners.py | THU-MIG/torch-model-compression | 6c48f8a67d84cbc4d3079cbff5ab516b62dd2ff5 | [
"MIT"
] | 86 | 2021-06-21T11:09:49.000Z | 2022-03-21T09:09:26.000Z | torchpruner/module_pruner/pruners.py | THUMIG/torch-model-compression | 6c48f8a67d84cbc4d3079cbff5ab516b62dd2ff5 | [
"MIT"
] | 7 | 2021-06-26T09:37:37.000Z | 2022-03-09T03:49:11.000Z | torchpruner/module_pruner/pruners.py | THU-MIG/torch-model-compression | 6c48f8a67d84cbc4d3079cbff5ab516b62dd2ff5 | [
"MIT"
] | 17 | 2021-08-18T17:06:44.000Z | 2022-02-28T09:14:38.000Z | from collections import OrderedDict
import torch
import torch.nn as nn
import numpy as np
from typing import Dict, List
from .prune_function import *


class BasePruner(object):
    def __init__(self, name):
        self.name = name

    # set the value to be zeros and return the context
    def set_zero(self, nn_module, cut_dict):
        raise NotImplementedError("The set_zero is not implemented")

    # recovery from zeros
    def recovery_zero(self, nn_module, cut_dict, context):
        raise NotImplementedError("The recovery_zero is not implemented")

    # cut the value and return the context
    def set_cut(self, nn_module, cut_dict):
        raise NotImplementedError("The set_cut is not implemented")

    # recover the model from the context
    def recovery_cut(self, nn_module, cut_dict, context):
        raise NotImplementedError("The recovery_cut is not implemented")
class TensorPruner(BasePruner):
    def __init__(self, name):
        super(TensorPruner, self).__init__(name)

    def set_zero(self, data, cut_dict):
        if self.name not in cut_dict["terminal"]:
            return data, {}
        param_context = {}
        cut_dims = cut_dict["terminal"][self.name]
        data, param_list = set_zero_tensor(data, cut_dims)
        param_context[self.name] = param_list
        return data, param_context

    def recovery_zero(self, data, cut_dict, param_context):
        if self.name not in cut_dict["terminal"]:
            return data
        cut_dims = cut_dict["terminal"][self.name]
        if self.name in param_context.keys():
            param_list = param_context[self.name]
        else:
            param_list = None
        return recovery_zero_tensor(data, cut_dims, param_list)

    def set_cut(self, data, cut_dict):
        if self.name not in cut_dict["terminal"]:
            return data, {}
        param_context = {}
        cut_dims = cut_dict["terminal"][self.name]
        data, param_list = set_cut_tensor(data, cut_dims)
        param_context[self.name] = param_list
        return data, param_context

    def recovery_cut(self, data, cut_dict, param_context):
        if self.name not in cut_dict["terminal"]:
            return data
        if self.name in param_context.keys():
            param_list = param_context[self.name]
        else:
            param_list = None
        cut_dims = cut_dict["terminal"][self.name]
        return recovery_cut_tensor(data, cut_dims, param_list)


class ParameterPruner(TensorPruner):
    def __init__(self, name):
        super(ParameterPruner, self).__init__(name)
class ConvPruner(BasePruner):
    def __init__(self, name):
        super(ConvPruner, self).__init__(name)
        self.weight_pruner = ParameterPruner(name + ".weight")
        self.bias_pruner = ParameterPruner(name + ".bias")

    # set the value to be zeros and return the context
    def set_zero(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_zero(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_zero(
            nn_module.bias, cut_dict
        )
        return nn_module, {**weight_context, **bias_context}

    # recovery from zeros
    def recovery_zero(self, nn_module, cut_dict, param_context):
        nn_module.weight = self.weight_pruner.recovery_zero(
            nn_module.weight, cut_dict, param_context
        )
        nn_module.bias = self.bias_pruner.recovery_zero(
            nn_module.bias, cut_dict, param_context
        )
        return nn_module

    # cut the value and return the context
    def set_cut(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_cut(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_cut(
            nn_module.bias, cut_dict
        )
        onnx_name = self.name + ".Conv"
        if onnx_name in cut_dict["operator"]:
            ONNX_params = cut_dict["operator"][onnx_name]
            nn_module.groups -= ONNX_params["group"]
        in_dim = 1 if isinstance(nn_module, (nn.Conv1d, nn.Conv2d, nn.Conv3d)) else 0
        nn_module.in_channels = nn_module.weight.data.size(in_dim) * nn_module.groups
        nn_module.out_channels = nn_module.weight.data.size(1 - in_dim)
        return nn_module, {**weight_context, **bias_context}

    # recover the model from the context
    def recovery_cut(self, nn_module, cut_dict, param_context):
        nn_module.weight = self.weight_pruner.recovery_cut(
            nn_module.weight, cut_dict, param_context
        )
        nn_module.bias = self.bias_pruner.recovery_cut(
            nn_module.bias, cut_dict, param_context
        )
        # Key must match the one written by set_cut (was ".CONV", a typo
        # that prevented the group count from ever being restored).
        onnx_name = self.name + ".Conv"
        if onnx_name in cut_dict["operator"]:
            ONNX_params = cut_dict["operator"][onnx_name]
            nn_module.groups += ONNX_params["group"]
        in_dim = 1 if isinstance(nn_module, (nn.Conv1d, nn.Conv2d, nn.Conv3d)) else 0
        nn_module.in_channels = nn_module.weight.data.size(in_dim) * nn_module.groups
        nn_module.out_channels = nn_module.weight.data.size(1 - in_dim)
        return nn_module
class BNPruner(BasePruner):
    def __init__(self, name):
        super(BNPruner, self).__init__(name)
        self.weight_pruner = ParameterPruner(name + ".weight")
        self.bias_pruner = ParameterPruner(name + ".bias")
        self.running_mean_pruner = TensorPruner(name + ".running_mean")
        self.running_var_pruner = TensorPruner(name + ".running_var")
        self.num_batches_tracked_pruner = TensorPruner(name + ".num_batches_tracked")

    # set the value to be zeros and return the context
    def set_zero(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_zero(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_zero(
            nn_module.bias, cut_dict
        )
        (
            nn_module.running_mean,
            running_mean_context,
        ) = self.running_mean_pruner.set_zero(nn_module.running_mean, cut_dict)
        nn_module.running_var, running_var_context = self.running_var_pruner.set_zero(
            nn_module.running_var, cut_dict
        )
        return nn_module, {
            **weight_context,
            **bias_context,
            **running_mean_context,
            **running_var_context,
        }

    # recovery from zeros
    def recovery_zero(self, nn_module, cut_dict, context):
        nn_module.weight = self.weight_pruner.recovery_zero(
            nn_module.weight, cut_dict, context
        )
        nn_module.bias = self.bias_pruner.recovery_zero(
            nn_module.bias, cut_dict, context
        )
        nn_module.running_mean = self.running_mean_pruner.recovery_zero(
            nn_module.running_mean, cut_dict, context
        )
        nn_module.running_var = self.running_var_pruner.recovery_zero(
            nn_module.running_var, cut_dict, context
        )
        return nn_module

    # cut the value and return the context
    def set_cut(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_cut(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_cut(
            nn_module.bias, cut_dict
        )
        nn_module.running_mean, running_mean_context = self.running_mean_pruner.set_cut(
            nn_module.running_mean, cut_dict
        )
        nn_module.running_var, running_var_context = self.running_var_pruner.set_cut(
            nn_module.running_var, cut_dict
        )
        nn_module.num_features = nn_module.bias.size(0)
        return nn_module, {
            **weight_context,
            **bias_context,
            **running_mean_context,
            **running_var_context,
        }

    # recover the model from the context
    def recovery_cut(self, nn_module, cut_dict, context):
        nn_module.weight = self.weight_pruner.recovery_cut(
            nn_module.weight, cut_dict, context
        )
        nn_module.bias = self.bias_pruner.recovery_cut(
            nn_module.bias, cut_dict, context
        )
        nn_module.running_mean = self.running_mean_pruner.recovery_cut(
            nn_module.running_mean, cut_dict, context
        )
        nn_module.running_var = self.running_var_pruner.recovery_cut(
            nn_module.running_var, cut_dict, context
        )
        nn_module.num_features = nn_module.bias.size(0)
        return nn_module
class LinearPruner(BasePruner):
    def __init__(self, name):
        super(LinearPruner, self).__init__(name)
        self.weight_pruner = ParameterPruner(name + ".weight")
        self.bias_pruner = ParameterPruner(name + ".bias")

    # set the value to be zeros and return the context
    def set_zero(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_zero(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_zero(
            nn_module.bias, cut_dict
        )
        return nn_module, {**weight_context, **bias_context}

    # recovery from zeros
    def recovery_zero(self, nn_module, cut_dict, param_context):
        nn_module.weight = self.weight_pruner.recovery_zero(
            nn_module.weight, cut_dict, param_context
        )
        nn_module.bias = self.bias_pruner.recovery_zero(
            nn_module.bias, cut_dict, param_context
        )
        return nn_module

    # cut the value and return the context
    def set_cut(self, nn_module, cut_dict):
        nn_module.weight, weight_context = self.weight_pruner.set_cut(
            nn_module.weight, cut_dict
        )
        nn_module.bias, bias_context = self.bias_pruner.set_cut(
            nn_module.bias, cut_dict
        )
        nn_module.in_channels = nn_module.weight.data.size(1)
        nn_module.out_channels = nn_module.weight.data.size(0)
        return nn_module, {**weight_context, **bias_context}

    # recover the model from the context
    def recovery_cut(self, nn_module, cut_dict, param_context):
        nn_module.weight = self.weight_pruner.recovery_cut(
            nn_module.weight, cut_dict, param_context
        )
        nn_module.bias = self.bias_pruner.recovery_cut(
            nn_module.bias, cut_dict, param_context
        )
        nn_module.in_channels = nn_module.weight.data.size(1)
        nn_module.out_channels = nn_module.weight.data.size(0)
        return nn_module
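Each composite pruner above simply routes one shared `cut_dict` to per-parameter pruners keyed by `name + ".weight"`, `name + ".bias"`, and so on, and merges the returned contexts so the cut is reversible. A torch-free sketch of that name-keyed dispatch (the class names and the exact dict shapes are illustrative assumptions that mirror, but are not guaranteed to match, the real `cut_dict`):

```python
class ListPruner:
    """Minimal stand-in for TensorPruner: cuts list positions by name."""
    def __init__(self, name):
        self.name = name

    def set_cut(self, data, cut_dict):
        dims = cut_dict["terminal"].get(self.name)
        if dims is None:
            return data, {}
        removed = {i: data[i] for i in dims}          # saved for recovery_cut
        kept = [v for i, v in enumerate(data) if i not in dims]
        return kept, {self.name: removed}


class PairPruner:
    """Stand-in for ConvPruner-style composition via name suffixes."""
    def __init__(self, name):
        self.weight_pruner = ListPruner(name + ".weight")
        self.bias_pruner = ListPruner(name + ".bias")

    def set_cut(self, module, cut_dict):
        module["weight"], w_ctx = self.weight_pruner.set_cut(module["weight"], cut_dict)
        module["bias"], b_ctx = self.bias_pruner.set_cut(module["bias"], cut_dict)
        # Merged context plays the role of the {**weight_context, **bias_context} dicts.
        return module, {**w_ctx, **b_ctx}


cut_dict = {"terminal": {"conv1.weight": [1], "conv1.bias": [1]}, "operator": {}}
module = {"weight": [10, 20, 30], "bias": [1, 2, 3]}
module, ctx = PairPruner("conv1").set_cut(module, cut_dict)
```

Because the context records exactly what each named pruner removed, a matching `recovery_cut` can restore the module, which is what the zero/cut/recovery quartet in the real classes provides.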
| 38.371841 | 88 | 0.657917 | 1,396 | 10,629 | 4.671203 | 0.059456 | 0.144763 | 0.081583 | 0.041405 | 0.905383 | 0.900169 | 0.871645 | 0.842816 | 0.842816 | 0.832541 | 0 | 0.002278 | 0.256562 | 10,629 | 276 | 89 | 38.51087 | 0.822956 | 0.057861 | 0 | 0.637168 | 0 | 0 | 0.03291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115044 | false | 0 | 0.026549 | 0 | 0.256637 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90585e2dcdc6c467eef5798c3e7029b219e36c0b | 29 | py | Python | clients/python/vzlogger/__init__.py | vizstack/vizstack-logger | 479f956a72ad6851060c315d243262106bfd0ff9 | [
"MIT"
] | 4 | 2019-09-14T00:54:16.000Z | 2021-03-23T08:26:38.000Z | clients/python/vzlogger/__init__.py | vizstack/vizstack-logger | 479f956a72ad6851060c315d243262106bfd0ff9 | [
"MIT"
] | 17 | 2019-12-23T03:41:50.000Z | 2022-02-26T17:34:54.000Z | clients/python/vzlogger/__init__.py | vizstack/vz-logger | 479f956a72ad6851060c315d243262106bfd0ff9 | [
"MIT"
] | null | null | null | from vzlogger.logger import * | 29 | 29 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90631eaff2ba7c6658aeae39ce3dfa6ed394efd6 | 9,438 | py | Python | knapsack_problem/tests/test_knapsack_ga_csv.py | platiagro/GA | 0103668aef8d8432209406c374824e7695d569c4 | [
"Apache-2.0"
] | null | null | null | knapsack_problem/tests/test_knapsack_ga_csv.py | platiagro/GA | 0103668aef8d8432209406c374824e7695d569c4 | [
"Apache-2.0"
] | null | null | null | knapsack_problem/tests/test_knapsack_ga_csv.py | platiagro/GA | 0103668aef8d8432209406c374824e7695d569c4 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
from unittest import TestCase
from random import uniform, randint
import numpy as np
import matplotlib.pyplot as plt
import time
from knapsack_problem.knapsack_ga import Candidate, ObjectsList, stop_search, search, apply_selection, apply_crossover, apply_mutation, create_initial_population
best_fit_array_full = [1]
medium_fit_array_full = [1]
obj_list_full = ObjectsList(20)
cand_full = Candidate(obj_list_full)
pop_full = []
pop_full.append(cand_full)
cand_pop = np.array(pop_full)
class TestFiles(unittest.TestCase):

    def test_stop_search_weight_limit_blank(self):
        with self.assertRaises(ValueError):
            stop_search(0, 1, cand_pop, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_weight_limit_neg(self):
        with self.assertRaises(ValueError):
            stop_search(-1, 1, cand_pop, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_weight_tolerance_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 0, cand_pop, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_weight_tolerance_neg(self):
        with self.assertRaises(ValueError):
            stop_search(1, -1, cand_pop, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_pop_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, None, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_best_fit_array_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, cand_pop, None, medium_fit_array_full, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_medium_fit_array_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, cand_pop, best_fit_array_full, None, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_vet_generation_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, cand_pop, best_fit_array_full, medium_fit_array_full, 0, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_vet_generation_neg(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, cand_pop, best_fit_array_full, medium_fit_array_full, -1, obj_list_full)
    #-----------------------------------------------------------
    def test_stop_search_vet_obj_list_blank(self):
        with self.assertRaises(ValueError):
            stop_search(1, 1, cand_pop, best_fit_array_full, medium_fit_array_full, 1, None)
    #-----------------------------------------------------------
    def test_stop_search_ok(self):
        result = stop_search(1, 1, cand_pop, best_fit_array_full, medium_fit_array_full, 1, obj_list_full)
        self.assertNotEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
    def test_search_weight_limit_blank(self):
        with self.assertRaises(ValueError):
            search(0, 1, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_search_weight_limit_neg(self):
        with self.assertRaises(ValueError):
            search(-1, 1, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_search_weight_tolerance_blank(self):
        with self.assertRaises(ValueError):
            search(1, 0, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_search_weight_tolerance_neg(self):
        with self.assertRaises(ValueError):
            search(1, -1, 1, obj_list_full)
    #-----------------------------------------------------------
    def test_search_available_objects_qt_blank(self):
        with self.assertRaises(ValueError):
            search(1, 1, 0, obj_list_full)
    #-----------------------------------------------------------
    def test_search_available_objects_qt_neg(self):
        with self.assertRaises(ValueError):
            search(1, 1, -1, obj_list_full)
    #-----------------------------------------------------------
    def test_search_obj_list_blank(self):
        # Renamed: this was a duplicate of test_search_available_objects_qt_neg,
        # which silently shadowed the previous test method.
        with self.assertRaises(ValueError):
            search(1, 1, -1, None)
    #-----------------------------------------------------------
    def test_search_ok(self):
        result = search(1, 1, 1, obj_list_full)
        self.assertEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
    def test_apply_selection_pop_qt_blank(self):
        with self.assertRaises(ValueError):
            apply_selection(0, 1, cand_pop)
    #-----------------------------------------------------------
    def test_apply_selection_pop_qt_neg(self):
        with self.assertRaises(ValueError):
            apply_selection(-1, 1, cand_pop)
    #-----------------------------------------------------------
    def test_apply_selection_weight_limit_blank(self):
        with self.assertRaises(ValueError):
            apply_selection(1, 0, cand_pop)
    #-----------------------------------------------------------
    def test_apply_selection_weight_limit_neg(self):
        with self.assertRaises(ValueError):
            apply_selection(1, -1, cand_pop)
    #-----------------------------------------------------------
    def test_apply_selection_pop_intermed_blank(self):
        with self.assertRaises(ValueError):
            apply_selection(1, 1, None)
    #-----------------------------------------------------------
    def test_apply_selection_ok(self):
        result = apply_selection(1, 1, cand_pop)
        self.assertNotEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
    def test_apply_crossover_crossover_qt_blank(self):
        with self.assertRaises(ValueError):
            apply_crossover(0, cand_pop, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_crossover_crossover_qt_neg(self):
        with self.assertRaises(ValueError):
            apply_crossover(-1, cand_pop, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_crossover_cand_to_repro_blank(self):
        with self.assertRaises(ValueError):
            apply_crossover(1, None, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_crossover_obj_list_blank(self):
        with self.assertRaises(ValueError):
            apply_crossover(1, cand_pop, None)
    #-----------------------------------------------------------
    def test_apply_crossover_ok(self):
        # Was calling apply_mutation, which left apply_crossover untested here.
        result = apply_crossover(1, cand_pop, obj_list_full)
        self.assertNotEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
    def test_apply_mutation_wished_qt_blank(self):
        with self.assertRaises(ValueError):
            apply_mutation(0, cand_pop, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_mutation_wished_qt_neg(self):
        with self.assertRaises(ValueError):
            apply_mutation(-1, cand_pop, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_mutation_cand_to_repro_blank(self):
        with self.assertRaises(ValueError):
            apply_mutation(1, None, obj_list_full)
    #-----------------------------------------------------------
    def test_apply_mutation_obj_list_blank(self):
        with self.assertRaises(ValueError):
            apply_mutation(1, cand_pop, None)
    #-----------------------------------------------------------
    def test_apply_mutation_ok(self):
        result = apply_mutation(1, cand_pop, obj_list_full)
        self.assertNotEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
    def test_create_initial_population_init_pop_qt_blank(self):
        with self.assertRaises(ValueError):
            create_initial_population(0, pop_full)
    #-----------------------------------------------------------
    def test_create_initial_population_init_pop_qt_neg(self):
        with self.assertRaises(ValueError):
            create_initial_population(-1, pop_full)
    #-----------------------------------------------------------
    def test_create_initial_population_obj_list_blank(self):
        with self.assertRaises(ValueError):
            create_initial_population(1, None)
    #-----------------------------------------------------------
    def test_create_initial_population_ok(self):
        result = create_initial_population(1, obj_list_full)
        self.assertNotEqual(result, "ok")
    #-----------------------------------------------------------
    #-----------------------------------------------------------
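Most methods above repeat the same `assertRaises(ValueError)` pattern with one bad argument at a time. A hedged sketch of collapsing such cases with `unittest`'s `subTest` (shown against a stand-in validator, not the real `stop_search`, whose internals are not part of this file):

```python
import unittest


def validate_weight_limit(weight_limit):
    """Stand-in for the argument checking the GA functions perform."""
    if weight_limit is None or weight_limit <= 0:
        raise ValueError("weight_limit must be a positive number")
    return "ok"


class TestWeightLimit(unittest.TestCase):
    def test_bad_weight_limits(self):
        # One method covers blank (0), negative, and None cases; a failure
        # report still identifies which subTest value was responsible.
        for bad in (0, -1, None):
            with self.subTest(weight_limit=bad):
                with self.assertRaises(ValueError):
                    validate_weight_limit(bad)
```

Applied to the class above, each `_blank`/`_neg` pair for a given parameter could fold into one method, keeping the suite's coverage while roughly halving its length.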
| 51.016216 | 161 | 0.509536 | 926 | 9,438 | 4.778618 | 0.077754 | 0.061695 | 0.089492 | 0.178983 | 0.867119 | 0.852655 | 0.829153 | 0.792542 | 0.67096 | 0.468927 | 0 | 0.010917 | 0.1459 | 9,438 | 184 | 162 | 51.293478 | 0.538023 | 0.283535 | 0 | 0.318182 | 0 | 0 | 0.001787 | 0 | 0 | 0 | 0 | 0 | 0.295455 | 1 | 0.295455 | false | 0 | 0.05303 | 0 | 0.356061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90a87dcdb0fa39d397cca8c7b236c61513503cb5 | 130 | py | Python | dymos/examples/brachistochrone/__init__.py | kaushikponnapalli/dymos | 3fba91d0fc2c0e8460717b1bec80774676287739 | [
"Apache-2.0"
] | 104 | 2018-09-08T16:52:27.000Z | 2022-03-10T23:35:30.000Z | dymos/examples/brachistochrone/__init__.py | kaushikponnapalli/dymos | 3fba91d0fc2c0e8460717b1bec80774676287739 | [
"Apache-2.0"
] | 628 | 2018-06-27T20:32:59.000Z | 2022-03-31T19:24:32.000Z | dymos/examples/brachistochrone/__init__.py | kaushikponnapalli/dymos | 3fba91d0fc2c0e8460717b1bec80774676287739 | [
"Apache-2.0"
] | 46 | 2018-06-27T20:54:07.000Z | 2021-12-19T07:23:32.000Z | from .brachistochrone_ode import BrachistochroneODE
from .brachistochrone_vector_states_ode import BrachistochroneVectorStatesODE
| 43.333333 | 77 | 0.923077 | 12 | 130 | 9.666667 | 0.666667 | 0.327586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 130 | 2 | 78 | 65 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90b1ac4081db56eddb95dc0b415a3b0f4ff2a2e7 | 72 | py | Python | test_mathcode.py | noahgift/python-functions-11-11 | 63c69791b6a05e1cdeb250f3f05eb3b21783bad1 | [
"CC0-1.0"
] | null | null | null | test_mathcode.py | noahgift/python-functions-11-11 | 63c69791b6a05e1cdeb250f3f05eb3b21783bad1 | [
"CC0-1.0"
] | 1 | 2021-11-11T14:17:54.000Z | 2021-11-11T14:17:54.000Z | test_mathcode.py | noahgift/python-functions-11-11 | 63c69791b6a05e1cdeb250f3f05eb3b21783bad1 | [
"CC0-1.0"
] | 1 | 2022-03-05T00:55:56.000Z | 2022-03-05T00:55:56.000Z | from mylib.mathcode import add
def test_add():
assert 2 == add(1,1) | 18 | 30 | 0.680556 | 13 | 72 | 3.692308 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.194444 | 72 | 4 | 31 | 18 | 0.775862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2900d33d77385d3cec22e5b4822aca61718adce6 | 99 | py | Python | Analysis/__init__.py | jkluter/MLG | 0ef337c1f08f3ad22a8530091c1e6e5548e4a244 | [
"MIT"
] | null | null | null | Analysis/__init__.py | jkluter/MLG | 0ef337c1f08f3ad22a8530091c1e6e5548e4a244 | [
"MIT"
] | null | null | null | Analysis/__init__.py | jkluter/MLG | 0ef337c1f08f3ad22a8530091c1e6e5548e4a244 | [
"MIT"
] | null | null | null | from . import Plots
from . import Table
from .utils import statistics, ALL_Analysis, find_files
| 19.8 | 55 | 0.777778 | 14 | 99 | 5.357143 | 0.714286 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171717 | 99 | 4 | 56 | 24.75 | 0.914634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29176983df836bca8efce5b0038c244fe0a41dbd | 13,885 | py | Python | tests/commands/test_command.py | kikuchi-m/ceryle | 1f91a9aaa17c60700d8827158cb69e7220200757 | [
"MIT"
] | 2 | 2019-10-29T22:50:28.000Z | 2020-03-25T03:06:48.000Z | tests/commands/test_command.py | kikuchi-m/ceryle | 1f91a9aaa17c60700d8827158cb69e7220200757 | [
"MIT"
] | null | null | null | tests/commands/test_command.py | kikuchi-m/ceryle | 1f91a9aaa17c60700d8827158cb69e7220200757 | [
"MIT"
] | null | null | null | import os
import pathlib
import platform
import re
import shutil
import tempfile
import pytest
from ceryle import Command, CommandFormatError
from ceryle.dsl.support import Arg, Env, PathArg
from ceryle.util import std_capture
FILE_DIR = os.path.dirname(__file__)
def stub_env():
return Env('FOO')
def stub_arg():
return Arg('BAR', {})
def stub_path_arg():
return PathArg('a', 'b')
@pytest.mark.parametrize(
'cmd_in, cmd, cmd_str', [
(['ls', '-a'], ['ls', '-a'], '[ls -a]'),
('ls -a', ['ls', '-a'], '[ls -a]'),
# syntax sugar with double quoted
('echo "a b"', ['echo', 'a b'], '[echo "a b"]'),
(' foo "a b" c d ', ['foo', 'a b', 'c', 'd'], '[foo "a b" c d]'),
# with escape sequence
('echo a\\"b', ['echo', 'a\\"b'], '[echo a\\"b]'),
('echo a \\"b', ['echo', 'a', '\\"b'], '[echo a \\"b]'),
('echo a b\\"', ['echo', 'a', 'b\\"'], '[echo a b\\"]'),
('echo a \\"', ['echo', 'a', '\\"'], '[echo a \\"]'),
('echo a\\"b c', ['echo', 'a\\"b', 'c'], '[echo a\\"b c]'),
('echo a\\"b c "d e"', ['echo', 'a\\"b', 'c', 'd e'], '[echo a\\"b c "d e"]'),
# env and arg
(['do-some', stub_env(), stub_arg(), stub_path_arg()],
['do-some', stub_env(), stub_arg(), stub_path_arg()],
f'[do-some {stub_env()} {stub_arg()} {stub_path_arg()}]'),
(stub_env(), [stub_env()], f'[{stub_env()}]'),
(stub_arg(), [stub_arg()], f'[{stub_arg()}]'),
(stub_path_arg(), [stub_path_arg()], f'[{stub_path_arg()}]'),
])
def test_new_command(cmd_in, cmd, cmd_str):
command = Command(cmd_in)
assert command.cmd == cmd
assert str(command) == cmd_str
def test_raise_if_invalid_command():
with pytest.raises(TypeError):
Command(None)
with pytest.raises(TypeError):
Command(1)
with pytest.raises(TypeError):
Command(object())
with pytest.raises(CommandFormatError, match=r'invalid command format: \[a b "c d\]'):
Command('a b "c d')
class TestAnyPlatform:
def test_execute_script(self):
with std_capture() as (o, e):
command = Command('./scripts/sample1', cwd=FILE_DIR)
assert command.execute().return_code == 0
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['hello', 'good-bye']
def test_execute_script_with_error(self):
with std_capture() as (o, e):
command = Command('./scripts/stderr', cwd=FILE_DIR)
assert command.execute().return_code == 3
assert re.match('.*sample error.*', e.getvalue().rstrip())
def test_execute_command_return_stdout(self):
command = Command('echo foo')
result = command.execute()
assert result.return_code == 0
assert len(result.stdout) == 1
assert result.stdout[0].rstrip() == 'foo'
assert len(result.stderr) == 0
def test_execute_command_return_stderr(self):
command = Command('./scripts/stderr', cwd=FILE_DIR)
result = command.execute()
assert result.return_code == 3
assert len(result.stdout) == 0
assert len(result.stderr) == 1
assert result.stderr[0].rstrip() == 'sample error'
def test_execute_script_quiet(self):
with std_capture() as (o, e):
command = Command('./scripts/sample1', cwd=FILE_DIR, quiet=True)
result = command.execute()
assert result.return_code == 0
assert result.stdout == ['hello', 'good-bye']
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == []
def test_execute_script_quiet_with_error(self):
with std_capture() as (o, e):
command = Command('./scripts/stderr', cwd=FILE_DIR, quiet=True)
assert command.execute().return_code == 3
assert re.match('.*sample error.*', e.getvalue().rstrip())
def test_execute_with_inputs_as_args(self):
with std_capture() as (o, e):
command = Command(['echo'], inputs_as_args=True)
result = command.execute(inputs=['foo', 'bar'], timeout=3)
assert result.return_code == 0
assert len(result.stdout) == 1
assert result.stdout[0].rstrip() == 'foo bar'
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['foo bar']
def test_execute_with_context(self):
with tempfile.TemporaryDirectory() as tmpd:
context = pathlib.Path(tmpd)
for s in ['sample1', 'sample1.bat']:
script = pathlib.Path(context, s)
shutil.copy(
str(pathlib.Path(FILE_DIR, 'scripts', s)),
str(script))
with std_capture() as (o, e):
command = Command('./sample1')
assert command.execute(context=str(context)).return_code == 0
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['hello', 'good-bye']
def test_execute_with_context_and_cwd(self):
with tempfile.TemporaryDirectory() as tmpd:
context = pathlib.Path(tmpd)
sub_dir = 'aa'
context.joinpath(sub_dir).mkdir()
for s in ['sample1', 'sample1.bat']:
shutil.copy(
str(pathlib.Path(FILE_DIR, 'scripts', s)),
str(pathlib.Path(context, sub_dir, s)))
with std_capture() as (o, e):
command = Command('./sample1', cwd=sub_dir)
assert command.execute(context=str(context)).return_code == 0
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['hello', 'good-bye']
def test_execute_absolute_cwd(self):
with tempfile.TemporaryDirectory() as tmpd1, tempfile.TemporaryDirectory() as tmpd2:
context = pathlib.Path(tmpd1)
cwd = pathlib.Path(tmpd2, 'aa')
cwd.mkdir()
for s in ['sample1', 'sample1.bat']:
shutil.copy(
str(pathlib.Path(FILE_DIR, 'scripts', s)),
str(pathlib.Path(cwd, s)))
with std_capture() as (o, e):
command = Command('./sample1', cwd=str(cwd))
assert command.execute(context=str(context)).return_code == 0
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['hello', 'good-bye']
@pytest.mark.skipif(platform.system() == 'Windows', reason='Not a Windows platform')
class TestForPosix:
def test_new_command_by_relative_path(self):
command = Command(['./dosome', '-a'])
assert command.cmd == ['./dosome', '-a']
assert str(command) == '[./dosome -a]'
@pytest.mark.parametrize(
'cmd_in, stdout', [
(['echo', 'foo'], 'foo'),
('echo foo', 'foo'),
('echo "foo bar"', 'foo bar'),
])
def test_execute_command(self, cmd_in, stdout):
with std_capture() as (o, e):
command = Command(cmd_in)
assert command.execute().return_code == 0
assert o.getvalue().rstrip() == stdout
def test_execute_with_inputs(self):
with std_capture() as (o, e):
command = Command(['cat'])
result = command.execute(inputs=['foo', 'bar'], timeout=3)
assert result.return_code == 0
assert len(result.stdout) == 2
assert result.stdout[0].rstrip() == 'foo'
assert result.stdout[1].rstrip() == 'bar'
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['foo', 'bar']
def test_execute_with_environment_variables(self):
no_env = Command('./scripts/env_test', cwd=FILE_DIR)
no_env_res = no_env.execute()
assert no_env_res.return_code == 0
assert no_env_res.stdout == ['']
env = {'CERYLE_ENV_TEST': 'ceryle environment variable test'}
with_env = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
with_env_res = with_env.execute()
assert with_env_res.return_code == 0
assert with_env_res.stdout == ['ceryle environment variable test']
def test_execute_with_environment_variables_from_envs(self, mocker):
mocker.patch.dict('os.environ', {'FOO': 'ceryle environment variable test'})
env = {'CERYLE_ENV_TEST': Env('FOO')}
command = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['ceryle environment variable test']
def test_execute_with_environment_variables_from_args(self, mocker):
env = {'CERYLE_ENV_TEST': Arg('FOO', {'FOO': 'ceryle environment variable test'})}
command = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['ceryle environment variable test']
def test_with_envs_and_args(self, mocker):
mocker.patch.dict('os.environ', {'ENV1': 'AAA'})
args = {'ARG1': 'BBB'}
command = Command(['echo', Env('ENV1'), Arg('ARG1', args)])
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['AAA BBB']
def test_execute_command_containing_arg(self):
arg = Arg('FOO', {'FOO': 'ceryle command arg test'})
command = Command(['echo', arg])
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['ceryle command arg test']
@pytest.mark.parametrize(
'cwd', [
Arg('TEST_CWD', {'TEST_CWD': str(FILE_DIR)}),
PathArg(str(FILE_DIR)),
])
def test_execute_with_cwd_by_arg(self, cwd):
with_env = Command('./scripts/env_test', cwd=cwd)
with_env_res = with_env.execute()
assert with_env_res.return_code == 0
assert with_env_res.stdout == ['']
@pytest.mark.skipif(platform.system() != 'Windows', reason='Not a Windows platform')
class TestForWin:
@pytest.mark.parametrize(
'cmd_in, cmd, cmd_str', [
(['./dosome', '-a'], ['dosome', '-a'], '[dosome -a]'),
('./dosome -a', ['dosome', '-a'], '[dosome -a]'),
(['./dir/dosome', '-a'], ['dir\\dosome', '-a'], '[dir\\dosome -a]'),
])
def test_new_command_by_relative_path(self, cmd_in, cmd, cmd_str):
        command = Command(cmd_in)
assert command.cmd == cmd
assert str(command) == cmd_str
@pytest.mark.parametrize(
'cmd_in, stdout', [
(['echo', 'foo'], 'foo'),
('echo foo', 'foo'),
('echo "foo bar"', '"foo bar"'),
])
def test_execute_command(self, cmd_in, stdout):
with std_capture() as (o, e):
command = Command(cmd_in)
assert command.execute().return_code == 0
assert o.getvalue().rstrip() == stdout
def test_execute_with_inputs(self):
with std_capture() as (o, e):
command = Command(['findstr', 'ba'])
result = command.execute(inputs=['foo', 'bar', 'baz'], timeout=3)
assert result.return_code == 0
assert len(result.stdout) == 2
assert result.stdout[0].rstrip() == 'bar'
assert result.stdout[1].rstrip() == 'baz'
lines = [l.rstrip() for l in o.getvalue().splitlines()]
assert lines == ['bar', 'baz']
def test_execute_with_environment_variables(self):
no_env = Command('./scripts/env_test', cwd=FILE_DIR)
no_env_res = no_env.execute()
assert no_env_res.return_code == 0
assert no_env_res.stdout == ['""']
env = {'CERYLE_ENV_TEST': 'ceryle environment variable test'}
with_env = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
with_env_res = with_env.execute()
assert with_env_res.return_code == 0
assert with_env_res.stdout == ['"ceryle environment variable test"']
def test_execute_with_environment_variables_from_envs(self, mocker):
mocker.patch.dict('os.environ', {'FOO': 'ceryle environment variable test'})
env = {'CERYLE_ENV_TEST': Env('FOO')}
command = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['"ceryle environment variable test"']
def test_execute_with_environment_variables_from_args(self, mocker):
env = {'CERYLE_ENV_TEST': Arg('FOO', {'FOO': 'ceryle environment variable test'})}
command = Command('./scripts/env_test', cwd=FILE_DIR, env=env)
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['"ceryle environment variable test"']
def test_with_envs_and_args(self, mocker):
mocker.patch.dict('os.environ', {'ENV1': 'AAA'})
args = {'ARG1': 'BBB'}
command = Command(['echo', Env('ENV1'), Arg('ARG1', args)])
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['AAA BBB']
def test_execute_command_containing_arg(self):
arg = Arg('FOO', {'FOO': 'ceryle command arg test'})
command = Command(['echo', arg])
res = command.execute()
assert res.return_code == 0
assert res.stdout == ['"ceryle command arg test"']
@pytest.mark.parametrize(
'cwd', [
Arg('TEST_CWD', {'TEST_CWD': str(FILE_DIR)}),
PathArg(str(FILE_DIR)),
])
def test_execute_with_cwd_by_arg(self, cwd):
with_env = Command('./scripts/env_test', cwd=cwd)
with_env_res = with_env.execute()
assert with_env_res.return_code == 0
assert with_env_res.stdout == ['""']
| 38.250689 | 92 | 0.570256 | 1,719 | 13,885 | 4.424084 | 0.082606 | 0.027613 | 0.03616 | 0.046943 | 0.864563 | 0.824721 | 0.806969 | 0.794609 | 0.761473 | 0.725312 | 0 | 0.006789 | 0.267987 | 13,885 | 362 | 93 | 38.356354 | 0.74144 | 0.004609 | 0 | 0.624138 | 0 | 0 | 0.15604 | 0 | 0 | 0 | 0 | 0 | 0.258621 | 1 | 0.113793 | false | 0 | 0.034483 | 0.010345 | 0.168966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
292900491088963f9c806928172ce2c0cc5b2279 | 23 | py | Python | build/lib/brave/__init__.py | qiulikun/brave | a44f63497fae9755d5f798821073b669b828521e | [
"Apache-2.0"
] | 13 | 2017-07-04T15:59:21.000Z | 2021-07-10T08:33:47.000Z | build/lib/brave/__init__.py | qiulikun/brave | a44f63497fae9755d5f798821073b669b828521e | [
"Apache-2.0"
] | 1 | 2019-12-24T16:14:52.000Z | 2019-12-25T20:44:17.000Z | build/lib/brave/__init__.py | qiulikun/brave | a44f63497fae9755d5f798821073b669b828521e | [
"Apache-2.0"
] | 7 | 2017-07-02T12:35:02.000Z | 2021-02-08T03:49:23.000Z | from ._brave import *
| 7.666667 | 21 | 0.695652 | 3 | 23 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 23 | 2 | 22 | 11.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29299475b1e576547131f3b03899feef01676c16 | 146 | py | Python | tests/data/format/quotes_type/class_docstring.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 4 | 2022-01-02T22:50:59.000Z | 2022-02-09T09:04:37.000Z | tests/data/format/quotes_type/class_docstring.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 80 | 2022-01-02T09:02:50.000Z | 2022-03-30T13:34:10.000Z | tests/data/format/quotes_type/class_docstring.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 2 | 2022-01-02T11:58:29.000Z | 2022-01-04T18:53:29.000Z | class MyClass:
''' A multi-line
docstring
'''
class InnerClass:
''' A multi-line
docstring
'''
| 14.6 | 27 | 0.445205 | 12 | 146 | 5.416667 | 0.583333 | 0.184615 | 0.307692 | 0.584615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.445205 | 146 | 9 | 28 | 16.222222 | 0.802469 | 0.349315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
2951849537e778fa72c1b2579c025f5713d8b665 | 99 | py | Python | vue_backend/user/throttles.py | hanson190505/coteam | 8bd01f4edc2a0b2a65dc18d68e36efb11cbdf576 | [
"MIT"
] | 1 | 2021-03-18T17:04:52.000Z | 2021-03-18T17:04:52.000Z | vue_backend/user/throttles.py | hanson190505/coteam | 8bd01f4edc2a0b2a65dc18d68e36efb11cbdf576 | [
"MIT"
] | 11 | 2020-04-03T04:16:24.000Z | 2022-03-26T10:36:49.000Z | vue_backend/user/throttles.py | hanson190505/coteam | 8bd01f4edc2a0b2a65dc18d68e36efb11cbdf576 | [
"MIT"
] | null | null | null | from rest_framework.throttling import BaseThrottle
class CustomerThrottle(BaseThrottle):
pass | 19.8 | 50 | 0.838384 | 10 | 99 | 8.2 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 99 | 5 | 51 | 19.8 | 0.942529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
295ae48cf3eea4f42b385729e5f65079928d6f04 | 32,292 | py | Python | watcher/db/api.py | ajaytikoo/watcher | 6dbac1f6ae7f3e10dfdcef5721fa4af7af54e159 | [
"Apache-2.0"
] | 64 | 2015-10-18T02:57:24.000Z | 2022-01-13T11:27:51.000Z | watcher/db/api.py | ajaytikoo/watcher | 6dbac1f6ae7f3e10dfdcef5721fa4af7af54e159 | [
"Apache-2.0"
] | null | null | null | watcher/db/api.py | ajaytikoo/watcher | 6dbac1f6ae7f3e10dfdcef5721fa4af7af54e159 | [
"Apache-2.0"
] | 35 | 2015-12-25T13:53:21.000Z | 2021-07-19T15:50:16.000Z | # Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base classes for storage engines
"""
import abc
from oslo_config import cfg
from oslo_db import api as db_api
_BACKEND_MAPPING = {'sqlalchemy': 'watcher.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
lazy=True)
def get_instance():
"""Return a DB API instance."""
return IMPL
class BaseConnection(object, metaclass=abc.ABCMeta):
"""Base class for storage system connections."""
@abc.abstractmethod
def get_goal_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching goals.
Return a list of the specified columns for all goals that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of goals to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_goal(self, values):
"""Create a new goal.
:param values: A dict containing several items used to identify
and track the goal. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'DUMMY',
'display_name': 'Dummy',
}
:returns: A goal
:raises: :py:class:`~.GoalAlreadyExists`
"""
@abc.abstractmethod
def get_goal_by_id(self, context, goal_id, eager=False):
"""Return a goal given its ID.
:param context: The security context
:param goal_id: The ID of a goal
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def get_goal_by_uuid(self, context, goal_uuid, eager=False):
"""Return a goal given its UUID.
:param context: The security context
:param goal_uuid: The UUID of a goal
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def get_goal_by_name(self, context, goal_name, eager=False):
"""Return a goal given its name.
:param context: The security context
:param goal_name: The name of a goal
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def destroy_goal(self, goal_uuid):
"""Destroy a goal.
:param goal_uuid: The UUID of a goal
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def update_goal(self, goal_uuid, values):
"""Update properties of a goal.
:param goal_uuid: The UUID of a goal
:param values: A dict containing several items used to identify
and track the goal. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'DUMMY',
'display_name': 'Dummy',
}
:returns: A goal
:raises: :py:class:`~.GoalNotFound`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_goal(self, goal_id):
"""Soft delete a goal.
:param goal_id: The id or uuid of a goal.
:raises: :py:class:`~.GoalNotFound`
"""
@abc.abstractmethod
def get_strategy_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=True):
"""Get specific columns for matching strategies.
Return a list of the specified columns for all strategies that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of strategies to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_strategy(self, values):
"""Create a new strategy.
:param values: A dict containing items used to identify
and track the strategy. For example:
::
{
'id': 1,
'uuid': utils.generate_uuid(),
'name': 'my_strategy',
'display_name': 'My strategy',
'goal_uuid': utils.generate_uuid(),
}
:returns: A strategy
:raises: :py:class:`~.StrategyAlreadyExists`
"""
@abc.abstractmethod
def get_strategy_by_id(self, context, strategy_id, eager=False):
"""Return a strategy given its ID.
:param context: The security context
:param strategy_id: The ID of a strategy
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def get_strategy_by_uuid(self, context, strategy_uuid, eager=False):
"""Return a strategy given its UUID.
:param context: The security context
:param strategy_uuid: The UUID of a strategy
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def get_strategy_by_name(self, context, strategy_name, eager=False):
"""Return a strategy given its name.
:param context: The security context
:param strategy_name: The name of a strategy
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def destroy_strategy(self, strategy_uuid):
"""Destroy a strategy.
:param strategy_uuid: The UUID of a strategy
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def update_strategy(self, strategy_uuid, values):
"""Update properties of a strategy.
:param strategy_uuid: The UUID of a strategy
:returns: A strategy
:raises: :py:class:`~.StrategyNotFound`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_strategy(self, strategy_id):
"""Soft delete a strategy.
:param strategy_id: The id or uuid of a strategy.
:raises: :py:class:`~.StrategyNotFound`
"""
@abc.abstractmethod
def get_audit_template_list(self, context, filters=None,
limit=None, marker=None, sort_key=None,
sort_dir=None, eager=False):
"""Get specific columns for matching audit templates.
Return a list of the specified columns for all audit templates that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of audit templates to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_audit_template(self, values):
"""Create a new audit template.
:param values: A dict containing several items used to identify
and track the audit template. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'example',
                'description': 'free text description',
'goal': 'DUMMY'
}
:returns: An audit template.
:raises: :py:class:`~.AuditTemplateAlreadyExists`
"""
@abc.abstractmethod
def get_audit_template_by_id(self, context, audit_template_id,
eager=False):
"""Return an audit template.
:param context: The security context
:param audit_template_id: The id of an audit template.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
def get_audit_template_by_uuid(self, context, audit_template_uuid,
eager=False):
"""Return an audit template.
:param context: The security context
:param audit_template_uuid: The uuid of an audit template.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
"""
def get_audit_template_by_name(self, context, audit_template_name,
eager=False):
"""Return an audit template.
:param context: The security context
:param audit_template_name: The name of an audit template.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
def destroy_audit_template(self, audit_template_id):
"""Destroy an audit template.
:param audit_template_id: The id or uuid of an audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
def update_audit_template(self, audit_template_id, values):
"""Update properties of an audit template.
:param audit_template_id: The id or uuid of an audit template.
:returns: An audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def soft_delete_audit_template(self, audit_template_id):
"""Soft delete an audit template.
:param audit_template_id: The id or uuid of an audit template.
:raises: :py:class:`~.AuditTemplateNotFound`
"""
@abc.abstractmethod
def get_audit_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching audits.
Return a list of the specified columns for all audits that match the
specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of audits to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_audit(self, values):
"""Create a new audit.
:param values: A dict containing several items used to identify
and track the audit, and several dicts which are passed
into the Drivers when managing this audit. For example:
::
{
'uuid': utils.generate_uuid(),
'type': 'ONESHOT',
}
:returns: An audit.
:raises: :py:class:`~.AuditAlreadyExists`
"""
@abc.abstractmethod
def get_audit_by_id(self, context, audit_id, eager=False):
"""Return an audit.
:param context: The security context
:param audit_id: The id of an audit.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit.
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
def get_audit_by_uuid(self, context, audit_uuid, eager=False):
"""Return an audit.
:param context: The security context
:param audit_uuid: The uuid of an audit.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit.
:raises: :py:class:`~.AuditNotFound`
"""
def get_audit_by_name(self, context, audit_name, eager=False):
"""Return an audit.
:param context: The security context
:param audit_name: The name of an audit.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An audit.
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
def destroy_audit(self, audit_id):
"""Destroy an audit and all associated action plans.
:param audit_id: The id or uuid of an audit.
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
def update_audit(self, audit_id, values):
"""Update properties of an audit.
:param audit_id: The id or uuid of an audit.
:returns: An audit.
:raises: :py:class:`~.AuditNotFound`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_audit(self, audit_id):
"""Soft delete an audit and all associated action plans.
:param audit_id: The id or uuid of an audit.
:raises: :py:class:`~.AuditNotFound`
"""
@abc.abstractmethod
def get_action_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
"""Get specific columns for matching actions.
Return a list of the specified columns for all actions that match the
specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of actions to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_action(self, values):
"""Create a new action.
:param values: A dict containing several items used to identify
and track the action, and several dicts which are passed
into the Drivers when managing this action. For example:
::
{
'uuid': utils.generate_uuid(),
'name': 'example',
                'description': 'free text description',
'aggregate': 'nova aggregate name or uuid'
}
        :returns: An action.
:raises: :py:class:`~.ActionAlreadyExists`
"""
@abc.abstractmethod
def get_action_by_id(self, context, action_id, eager=False):
"""Return a action.
:param context: The security context
:param action_id: The id of a action.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A action.
:raises: :py:class:`~.ActionNotFound`
"""
@abc.abstractmethod
def get_action_by_uuid(self, context, action_uuid, eager=False):
"""Return a action.
:param context: The security context
:param action_uuid: The uuid of a action.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A action.
:raises: :py:class:`~.ActionNotFound`
"""
@abc.abstractmethod
def destroy_action(self, action_id):
"""Destroy a action and all associated interfaces.
:param action_id: The id or uuid of a action.
:raises: :py:class:`~.ActionNotFound`
:raises: :py:class:`~.ActionReferenced`
"""
@abc.abstractmethod
def update_action(self, action_id, values):
"""Update properties of a action.
:param action_id: The id or uuid of a action.
:returns: A action.
:raises: :py:class:`~.ActionNotFound`
:raises: :py:class:`~.ActionReferenced`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_action(self, action_id):
"""Soft delete an action.
:param action_id: The id or uuid of an action.
:raises: :py:class:`~.ActionNotFound`
"""
@abc.abstractmethod
def get_action_plan_list(
self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching action plans.
Return a list of the specified columns for all action plans that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of action plans to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_action_plan(self, values):
"""Create a new action plan.
:param values: A dict containing several items used to identify
and track the action plan.
:returns: An action plan.
:raises: :py:class:`~.ActionPlanAlreadyExists`
"""
@abc.abstractmethod
def get_action_plan_by_id(self, context, action_plan_id, eager=False):
"""Return an action plan.
:param context: The security context
:param action_plan_id: The id of an action plan.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An action plan.
:raises: :py:class:`~.ActionPlanNotFound`
"""
@abc.abstractmethod
def get_action_plan_by_uuid(self, context, action_plan_uuid, eager=False):
"""Return an action plan.
:param context: The security context
:param action_plan_uuid: The uuid of an action plan.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An action plan.
:raises: :py:class:`~.ActionPlanNotFound`
"""
@abc.abstractmethod
def destroy_action_plan(self, action_plan_id):
"""Destroy an action plan and all associated interfaces.
:param action_plan_id: The id or uuid of an action plan.
:raises: :py:class:`~.ActionPlanNotFound`
:raises: :py:class:`~.ActionPlanReferenced`
"""
@abc.abstractmethod
def update_action_plan(self, action_plan_id, values):
"""Update properties of an action plan.
:param action_plan_id: The id or uuid of an action plan.
:returns: An action plan.
:raises: :py:class:`~.ActionPlanNotFound`
:raises: :py:class:`~.ActionPlanReferenced`
:raises: :py:class:`~.Invalid`
"""
def soft_delete_action_plan(self, action_plan_id):
"""Soft delete an action plan.
:param action_plan_id: The id or uuid of an action plan.
:raises: :py:class:`~.ActionPlanNotFound`
"""
@abc.abstractmethod
def get_efficacy_indicator_list(self, context, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None,
eager=False):
"""Get specific columns for matching efficacy indicators.
Return a list of the specified columns for all efficacy indicators that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of efficacy indicators to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_efficacy_indicator(self, values):
"""Create a new efficacy indicator.
:param values: A dict containing items used to identify
and track the efficacy indicator. For example:
::
{
'id': 1,
'uuid': utils.generate_uuid(),
'name': 'my_efficacy_indicator',
'display_name': 'My efficacy indicator',
'goal_uuid': utils.generate_uuid(),
}
:returns: An efficacy_indicator
:raises: :py:class:`~.EfficacyIndicatorAlreadyExists`
"""
@abc.abstractmethod
def get_efficacy_indicator_by_id(self, context, efficacy_indicator_id,
eager=False):
"""Return an efficacy indicator given its ID.
:param context: The security context
:param efficacy_indicator_id: The ID of an efficacy indicator
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
"""
@abc.abstractmethod
def get_efficacy_indicator_by_uuid(self, context, efficacy_indicator_uuid,
eager=False):
"""Return an efficacy indicator given its UUID.
:param context: The security context
:param efficacy_indicator_uuid: The UUID of an efficacy indicator
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
"""
@abc.abstractmethod
def get_efficacy_indicator_by_name(self, context, efficacy_indicator_name,
eager=False):
"""Return an efficacy indicator given its name.
:param context: The security context
:param efficacy_indicator_name: The name of an efficacy indicator
:param eager: If True, also loads One-to-X data (Default: False)
:returns: An efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
"""
@abc.abstractmethod
def destroy_efficacy_indicator(self, efficacy_indicator_uuid):
"""Destroy an efficacy indicator.
:param efficacy_indicator_uuid: The UUID of an efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
"""
@abc.abstractmethod
def update_efficacy_indicator(self, efficacy_indicator_id, values):
"""Update properties of an efficacy indicator.
:param efficacy_indicator_id: The ID of an efficacy indicator
:returns: An efficacy indicator
:raises: :py:class:`~.EfficacyIndicatorNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def get_scoring_engine_list(
self, context, columns=None, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching scoring engines.
Return a list of the specified columns for all scoring engines that
match the specified filters.
:param context: The security context
:param columns: List of column names to return.
Defaults to 'id' column when columns == None.
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of scoring engines to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_scoring_engine(self, values):
"""Create a new scoring engine.
:param values: A dict containing several items used to identify
and track the scoring engine.
:returns: A scoring engine.
:raises: :py:class:`~.ScoringEngineAlreadyExists`
"""
@abc.abstractmethod
def get_scoring_engine_by_id(self, context, scoring_engine_id,
eager=False):
"""Return a scoring engine by its id.
:param context: The security context
:param scoring_engine_id: The id of a scoring engine.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A scoring engine.
:raises: :py:class:`~.ScoringEngineNotFound`
"""
@abc.abstractmethod
def get_scoring_engine_by_uuid(self, context, scoring_engine_uuid,
eager=False):
"""Return a scoring engine by its uuid.
:param context: The security context
:param scoring_engine_uuid: The uuid of a scoring engine.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A scoring engine.
:raises: :py:class:`~.ScoringEngineNotFound`
"""
@abc.abstractmethod
def get_scoring_engine_by_name(self, context, scoring_engine_name,
eager=False):
"""Return a scoring engine by its name.
:param context: The security context
:param scoring_engine_name: The name of a scoring engine.
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A scoring engine.
:raises: :py:class:`~.ScoringEngineNotFound`
"""
@abc.abstractmethod
def destroy_scoring_engine(self, scoring_engine_id):
"""Destroy a scoring engine.
:param scoring_engine_id: The id of a scoring engine.
:raises: :py:class:`~.ScoringEngineNotFound`
"""
@abc.abstractmethod
def update_scoring_engine(self, scoring_engine_id, values):
"""Update properties of a scoring engine.
:param scoring_engine_id: The id of a scoring engine.
:returns: A scoring engine.
:raises: :py:class:`~.ScoringEngineNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def get_service_list(self, context, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None, eager=False):
"""Get specific columns for matching services.
Return a list of the specified columns for all services that
match the specified filters.
:param context: The security context
:param filters: Filters to apply. Defaults to None.
:param limit: Maximum number of services to return.
:param marker: The last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: Direction in which results should be sorted.
(asc, desc)
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def create_service(self, values):
"""Create a new service.
:param values: A dict containing items used to identify
and track the service. For example:
::
{
'id': 1,
'name': 'watcher-api',
'status': 'ACTIVE',
'host': 'controller'
}
:returns: A service
:raises: :py:class:`~.ServiceAlreadyExists`
"""
@abc.abstractmethod
def get_service_by_id(self, context, service_id, eager=False):
"""Return a service given its ID.
:param context: The security context
:param service_id: The ID of a service
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def get_service_by_name(self, context, service_name, eager=False):
"""Return a service given its name.
:param context: The security context
:param service_name: The name of a service
:param eager: If True, also loads One-to-X data (Default: False)
:returns: A service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def destroy_service(self, service_id):
"""Destroy a service.
:param service_id: The ID of a service
:raises: :py:class:`~.ServiceNotFound`
"""
@abc.abstractmethod
def update_service(self, service_id, values):
"""Update properties of a service.
:param service_id: The ID of a service
:returns: A service
:raises: :py:class:`~.ServiceNotFound`
:raises: :py:class:`~.Invalid`
"""
@abc.abstractmethod
def soft_delete_service(self, service_id):
"""Soft delete a service.
:param service_id: The id of a service.
:returns: A service.
:raises: :py:class:`~.ServiceNotFound`
"""
| 36.695455 | 79 | 0.59795 | 3,802 | 32,292 | 4.981852 | 0.063651 | 0.029988 | 0.04873 | 0.040072 | 0.87044 | 0.826567 | 0.766908 | 0.72673 | 0.689562 | 0.626313 | 0 | 0.000497 | 0.314443 | 32,292 | 879 | 80 | 36.737201 | 0.855091 | 0.630806 | 0 | 0.4875 | 0 | 0 | 0.004509 | 0.003221 | 0 | 0 | 0 | 0 | 0 | 1 | 0.425 | false | 0 | 0.01875 | 0 | 0.45625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4623797f8d99bb895fbfa0fb27d789a938a3bfb3 | 430 | py | Python | fill_ipd.py | ranggasenatama/Auto-Kusioner-Submit | ddeb5a61009e6351aa22e8c0a658306a4495b6f9 | [
"MIT"
] | 1 | 2020-07-13T15:45:08.000Z | 2020-07-13T15:45:08.000Z | fill_ipd.py | ranggasenatama/Auto-Kusioner-Submit | ddeb5a61009e6351aa22e8c0a658306a4495b6f9 | [
"MIT"
] | null | null | null | fill_ipd.py | ranggasenatama/Auto-Kusioner-Submit | ddeb5a61009e6351aa22e8c0a658306a4495b6f9 | [
"MIT"
] | null | null | null | def ipm(kusioner):
counter = 1
while counter <= 10:
kusioner['MK'+str(counter)].value = '4'
counter += 1
kusioner['txtKomentar'].value = 'Mantap'
kusioner['chkPermanent'].value = '1'
def ipd(kusioner):
counter = 1
while counter <= 10:
kusioner['DO'+str(counter)].value = '4'
counter += 1
kusioner['txtKomentar'].value = 'Mantap'
kusioner['chkPermanent'].value = '1' | 28.666667 | 47 | 0.586047 | 48 | 430 | 5.25 | 0.333333 | 0.126984 | 0.126984 | 0.166667 | 0.936508 | 0.936508 | 0.936508 | 0.634921 | 0.634921 | 0.634921 | 0 | 0.037037 | 0.246512 | 430 | 15 | 48 | 28.666667 | 0.740741 | 0 | 0 | 0.714286 | 0 | 0 | 0.153132 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
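The `kusioner` object in fill_ipd.py is some form abstraction whose fields expose a writable `.value` attribute; the actual library is not shown. A hypothetical stub reproducing that interface is enough to exercise the same access pattern:

```python
# Hypothetical stand-in for the form object used by fill_ipd.py: each field
# exposes a writable .value attribute, and fields are created on first access.

class Field:
    def __init__(self):
        self.value = None

class FakeKusioner(dict):
    def __missing__(self, key):
        field = Field()
        self[key] = field
        return field

form = FakeKusioner()
# Same access pattern the ipm/ipd helpers rely on:
form['MK1'].value = '4'
form['txtKomentar'].value = 'Mantap'
form['chkPermanent'].value = '1'
```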
463703e8db9e1479c2b840dc8ff8ad8b65a596f3 | 4,898 | py | Python | test/test_convert_banana.py | tomas-psorn/bruker2nifti | 128c5aa245e786a51ba2da62709e0f3b48d2aa7b | [
"MIT"
] | 28 | 2017-04-12T18:35:38.000Z | 2020-11-02T03:46:44.000Z | test/test_convert_banana.py | tomas-psorn/bruker2nifti | 128c5aa245e786a51ba2da62709e0f3b48d2aa7b | [
"MIT"
] | 66 | 2017-07-21T14:15:46.000Z | 2021-07-28T09:52:02.000Z | test/test_convert_banana.py | tomas-psorn/bruker2nifti | 128c5aa245e786a51ba2da62709e0f3b48d2aa7b | [
"MIT"
] | 18 | 2017-08-02T23:06:11.000Z | 2021-06-16T05:54:22.000Z | import os
import warnings
import subprocess
import platform
import shutil
import sys
import pytest
from bruker2nifti.converter import Bruker2Nifti
here = os.path.abspath(os.path.dirname(__file__))
root_dir = os.path.dirname(here)
def test_convert_the_banana(open_converted=False):
pfo_study_in = os.path.join(root_dir, "test_data", "bru_banana")
pfo_study_out = os.path.join(root_dir, "test_data", "nifti_banana")
# delete study if already exists:
target_folder = os.path.join(pfo_study_out, "banana")
if os.path.exists(target_folder):
os.system("rm -r {}".format(os.path.join(target_folder)))
# instantiate the converter:
bru = Bruker2Nifti(pfo_study_in, pfo_study_out, study_name="banana")
bru.verbose = 2
bru.correct_slope = True
bru.get_acqp = False
bru.get_method = False
bru.get_reco = False
bru.convert()
if open_converted:
if platform.system() == "Windows":
os.startfile(pfo_study_out.encode("string-escape"))
elif platform.system() == "Darwin":
subprocess.Popen(["open", pfo_study_out])
else:
subprocess.Popen(["xdg-open", pfo_study_out])
for ex in ["1", "2", "3"]:
experiment_folder = os.path.join(
pfo_study_out, "banana", "banana_{}".format(ex)
)
assert os.path.exists(experiment_folder)
assert os.path.exists(
os.path.join(experiment_folder, "banana_{}.nii.gz".format(ex))
)
def test_convert_the_banana_with_spaces(open_converted=False):
pfo_study_in = os.path.join(root_dir, "test_data", "bru banana")
pfo_study_out = os.path.join(root_dir, "test_data", "nifti banana")
# Copy test data to a folder with space in it
original_study_in = os.path.join(root_dir, "test_data", "bru_banana")
if os.path.exists(pfo_study_in):
shutil.rmtree(pfo_study_in)
shutil.copytree(original_study_in, pfo_study_in)
# delete study if already exists:
target_folder = os.path.join(pfo_study_out, "banana")
if os.path.exists(target_folder):
shutil.rmtree(target_folder)
# instantiate the converter:
bru = Bruker2Nifti(pfo_study_in, pfo_study_out, study_name="banana")
bru.verbose = 2
bru.correct_slope = True
bru.get_acqp = False
bru.get_method = False
bru.get_reco = False
bru.convert()
if open_converted:
if platform.system() == "Windows":
os.startfile(pfo_study_out.encode("string-escape"))
elif platform.system() == "Darwin":
subprocess.Popen(["open", pfo_study_out])
else:
subprocess.Popen(["xdg-open", pfo_study_out])
for ex in ["1", "2", "3"]:
experiment_folder = os.path.join(
pfo_study_out, "banana", "banana_{}".format(ex)
)
assert os.path.exists(experiment_folder)
assert os.path.exists(
os.path.join(experiment_folder, "banana_{}.nii.gz".format(ex))
)
# Delete temporary copy of the test data
shutil.rmtree(pfo_study_in)
def test_convert_the_banana_no_name(open_converted=False):
pfo_study_in = os.path.join(root_dir, "test_data", "bru_banana")
pfo_study_out = os.path.join(root_dir, "test_data", "nifti_banana")
# delete study if already exists:
target_folder = os.path.join(pfo_study_out, "APMFruits20111130")
if os.path.exists(target_folder):
os.system("rm -r {}".format(os.path.join(target_folder)))
bru = Bruker2Nifti(pfo_study_in, pfo_study_out)
bru.verbose = 2
bru.correct_slope = True
bru.get_acqp = False
bru.get_method = False
bru.get_reco = False
bru.convert()
if open_converted:
if platform.system() == "Windows":
os.startfile(pfo_study_out.encode("string-escape"))
elif platform.system() == "Darwin":
subprocess.Popen(["open", pfo_study_out])
else:
subprocess.Popen(["xdg-open", pfo_study_out])
for ex in ["1", "2", "3"]:
experiment_folder = os.path.join(
pfo_study_out, "APMFruits20111130", "APMFruits20111130_{}".format(ex)
)
assert os.path.exists(experiment_folder)
assert os.path.exists(
os.path.join(experiment_folder, "APMFruits20111130_{}.nii.gz".format(ex))
)
def test_warning_banana_bad_n():
for n in ["1", "2", "3"]:
pfo_study_in = os.path.join(root_dir, "test_data", "bru_banana_bad_" + n)
pfo_study_out = os.path.join(root_dir, "test_data", "nifti_banana")
bru = Bruker2Nifti(pfo_study_in, pfo_study_out, study_name="banana")
bru.correct_slope = True
bru.verbose = 2
if sys.version_info.major == 2:
with pytest.raises(OSError):
bru.convert()
else:
with pytest.raises(FileExistsError):
bru.convert()
| 30.805031 | 85 | 0.64557 | 658 | 4,898 | 4.547112 | 0.145897 | 0.093583 | 0.084559 | 0.042112 | 0.82988 | 0.783422 | 0.774398 | 0.774398 | 0.751003 | 0.751003 | 0 | 0.014535 | 0.22744 | 4,898 | 158 | 86 | 31 | 0.776163 | 0.047366 | 0 | 0.651786 | 0 | 0 | 0.107128 | 0.005796 | 0 | 0 | 0 | 0 | 0.053571 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
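`test_warning_banana_bad_n` branches on the Python major version because Python 2 had no `FileExistsError`. In Python 3, `FileExistsError` is a subclass of `OSError`, so a version-agnostic test could simply expect `OSError`; a small sketch of that relationship:

```python
import os
import tempfile

# FileExistsError subclasses OSError in Python 3, so catching OSError covers
# both spellings used in the version branch of the test above.
assert issubclass(FileExistsError, OSError)

path = tempfile.mkdtemp()
try:
    os.mkdir(path)  # directory already exists -> FileExistsError
except OSError as exc:
    caught = type(exc).__name__
os.rmdir(path)
```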
466ce376ee73bfa5dad70eef247fdb19997c3918 | 24 | py | Python | overStat/__init__.py | t04glovern/Overstat | 08eac77ecbcb4ca7d7cd23f73c26ff9b8bddc0a1 | [
"MIT"
] | null | null | null | overStat/__init__.py | t04glovern/Overstat | 08eac77ecbcb4ca7d7cd23f73c26ff9b8bddc0a1 | [
"MIT"
] | null | null | null | overStat/__init__.py | t04glovern/Overstat | 08eac77ecbcb4ca7d7cd23f73c26ff9b8bddc0a1 | [
"MIT"
] | null | null | null | from .overStat import *
| 12 | 23 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
469910d986c0bdb2ff2e12e886a7548eadea27b6 | 6,251 | py | Python | tests/test_neurons.py | vandermeerlab/nept | fcb0b83d30f4be2783f3e8a9b3c842e4eef4426b | [
"MIT"
] | 7 | 2017-07-17T08:57:11.000Z | 2020-10-23T09:59:58.000Z | tests/test_neurons.py | vandermeerlab/nept | fcb0b83d30f4be2783f3e8a9b3c842e4eef4426b | [
"MIT"
] | 9 | 2017-03-01T17:49:18.000Z | 2020-04-21T19:32:07.000Z | tests/test_neurons.py | vandermeerlab/nept | fcb0b83d30f4be2783f3e8a9b3c842e4eef4426b | [
"MIT"
] | 2 | 2017-03-06T00:32:22.000Z | 2017-07-17T08:57:14.000Z | import numpy as np
import pytest
import nept
def test_neurons_basic():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
assert np.allclose(neurons.spikes[0].time, spikes[0].time)
assert np.allclose(neurons.tuning_curves, tuning)
def test_neurons_n_wrong():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
with pytest.raises(ValueError) as excinfo:
neurons = nept.Neurons(spikes, tuning)
assert (
str(excinfo.value)
== "spikes and tuning curves must have the same number of neurons"
)
def test_neurons_getitem_single():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
sliced = neurons[1]
assert np.allclose(sliced.spikes[0].time, np.array([1.5]))
assert np.allclose(sliced.tuning_curves[0], np.array([0.0, 1.0, 0.0, 0.0]))
def test_neurons_getitem_multiple():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
sliced = neurons[0:2]
assert np.allclose(
sliced.tuning_curves, np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
)
assert np.allclose(sliced.spikes[0].time, np.array([0.5]))
assert np.allclose(sliced.spikes[1].time, np.array([1.5]))
def test_neurons_slicing_specified_startstop():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
t_start = 1.0
t_stop = 2.0
sliced_neurons = neurons.time_slice(t_start, t_stop)
assert np.allclose(sliced_neurons.spikes[0].time, np.array([]))
assert np.allclose(sliced_neurons.spikes[1].time, np.array([1.5]))
assert np.allclose(sliced_neurons.spikes[2].time, np.array([]))
assert np.allclose(neurons.tuning_curves, tuning)
def test_neurons_slicing_specified_stop():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
t_stop = 2.0
sliced_neurons = neurons.time_slice(None, t_stop)
assert np.allclose(sliced_neurons.spikes[0].time, np.array([0.5]))
assert np.allclose(sliced_neurons.spikes[1].time, np.array([1.5]))
assert np.allclose(sliced_neurons.spikes[2].time, np.array([]))
assert np.allclose(neurons.tuning_curves, tuning)
def test_neurons_slicing_specified_start():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
t_start = 1.0
sliced_neurons = neurons.time_slice(t_start, None)
assert np.allclose(sliced_neurons.spikes[0].time, np.array([]))
assert np.allclose(sliced_neurons.spikes[1].time, np.array([1.5]))
assert np.allclose(sliced_neurons.spikes[2].time, np.array([2.5]))
assert np.allclose(neurons.tuning_curves, tuning)
def test_neurons_slicing_mult():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
t_starts = [0.0, 2.0]
t_stops = [1.0, 3.0]
sliced_neurons = neurons.time_slice(t_starts, t_stops)
assert np.allclose(sliced_neurons.spikes[0].time, np.array([0.5]))
assert np.allclose(sliced_neurons.spikes[1].time, np.array([]))
assert np.allclose(sliced_neurons.spikes[2].time, np.array([2.5]))
assert np.allclose(neurons.tuning_curves, tuning)
def test_neurons_get_num():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
assert np.allclose(neurons.n_neurons, spikes.shape[0])
def test_neurons_get_tuning_shape():
spikes = np.array(
[
nept.SpikeTrain(np.array([0.5]), "test"),
nept.SpikeTrain(np.array([1.5]), "test"),
nept.SpikeTrain(np.array([2.5]), "test"),
]
)
tuning = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
)
neurons = nept.Neurons(spikes, tuning)
assert np.allclose(neurons.tuning_shape, tuning[0].shape)
| 27.659292 | 84 | 0.557511 | 974 | 6,251 | 3.502053 | 0.057495 | 0.112577 | 0.138962 | 0.147757 | 0.887423 | 0.876869 | 0.852829 | 0.837877 | 0.823512 | 0.777485 | 0 | 0.079449 | 0.244921 | 6,251 | 225 | 85 | 27.782222 | 0.64322 | 0 | 0 | 0.533333 | 0 | 0 | 0.028955 | 0 | 0 | 0 | 0 | 0 | 0.157576 | 1 | 0.060606 | false | 0 | 0.018182 | 0 | 0.078788 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
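The tests above pin down the `time_slice` semantics: a scalar `(start, stop)` window, an open end when either bound is `None`, and multiple windows when parallel lists are passed. A minimal pure-Python sketch of that behavior (not nept's actual implementation, which operates on SpikeTrain objects):

```python
def time_slice(times, t_start, t_stop):
    # Accept scalars, None (open bound), or parallel lists of windows,
    # mirroring the interface exercised by the tests above.
    if not isinstance(t_start, (list, tuple)):
        t_start = [t_start]
        t_stop = [t_stop]
    kept = []
    for start, stop in zip(t_start, t_stop):
        lo = float('-inf') if start is None else start
        hi = float('inf') if stop is None else stop
        kept.extend(t for t in times if lo <= t <= hi)
    return sorted(kept)

spikes = [0.5, 1.5, 2.5]
```

With the same windows the tests use, `(1.0, 2.0)` keeps only the middle spike, `(None, 2.0)` keeps everything up to 2.0, and the list form `[0.0, 2.0] / [1.0, 3.0]` keeps one spike from each window.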
46b55e6cfb833c1ed1301fe3f47de0379091bab7 | 185 | py | Python | nativeconfig/__init__.py | rgammans/nativeconfig | f8a6b0c541d8c288b4577209336c03a328fe841f | [
"MIT"
] | 6 | 2015-07-07T13:06:54.000Z | 2021-01-01T07:25:44.000Z | nativeconfig/__init__.py | rgammans/nativeconfig | f8a6b0c541d8c288b4577209336c03a328fe841f | [
"MIT"
] | 16 | 2016-12-23T00:50:55.000Z | 2021-07-13T19:45:36.000Z | nativeconfig/__init__.py | rgammans/nativeconfig | f8a6b0c541d8c288b4577209336c03a328fe841f | [
"MIT"
] | 4 | 2015-04-29T19:52:21.000Z | 2020-05-27T10:59:51.000Z | from nativeconfig.configs import *
from nativeconfig.exceptions import *
from nativeconfig.options import *
from nativeconfig.version import VERSION as _VERSION
__version__ = _VERSION
| 26.428571 | 52 | 0.837838 | 21 | 185 | 7.095238 | 0.380952 | 0.42953 | 0.442953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118919 | 185 | 6 | 53 | 30.833333 | 0.91411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3b948cc7485ff135ed3d4a1cc15e530f49b5179 | 90 | py | Python | src/gocept/testdb/db.py | risclog-solution/gocept.testdb | 3da1ac8a86e5009f279175adcf6ad21361a35c51 | [
"ZPL-2.1"
] | null | null | null | src/gocept/testdb/db.py | risclog-solution/gocept.testdb | 3da1ac8a86e5009f279175adcf6ad21361a35c51 | [
"ZPL-2.1"
] | null | null | null | src/gocept/testdb/db.py | risclog-solution/gocept.testdb | 3da1ac8a86e5009f279175adcf6ad21361a35c51 | [
"ZPL-2.1"
] | null | null | null | # BBB
from gocept.testdb.postgres import PostgreSQL
from gocept.testdb.mysql import MySQL
| 22.5 | 45 | 0.833333 | 13 | 90 | 5.769231 | 0.615385 | 0.266667 | 0.426667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 90 | 3 | 46 | 30 | 0.9375 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d3d66c61b81ffbcdbfe44c02ab510e03896ef9c5 | 316 | py | Python | tests/test_address.py | UlordChain/ulordschema | 693b660af834736afa0b3b2d21010a89987afb89 | [
"MIT"
] | 37 | 2018-01-16T13:27:02.000Z | 2018-08-21T06:39:34.000Z | tests/test_address.py | UlordChain/ulordschema | 693b660af834736afa0b3b2d21010a89987afb89 | [
"MIT"
] | 2 | 2018-05-16T08:29:20.000Z | 2018-06-17T04:51:08.000Z | tests/test_address.py | UlordChain/ulordschema | 693b660af834736afa0b3b2d21010a89987afb89 | [
"MIT"
] | 4 | 2018-05-14T11:43:31.000Z | 2018-09-29T09:58:58.000Z | import unittest
# TODO: add it.
class TestMainNetAddressValidation(unittest.TestCase):
pass
class TestTestnetAddressValidation(unittest.TestCase):
pass
class TestSmartDecode(unittest.TestCase):
pass
class TestSmartEncode(unittest.TestCase):
pass
if __name__ == '__main__':
unittest.main()
| 15.8 | 54 | 0.756329 | 30 | 316 | 7.7 | 0.5 | 0.277056 | 0.34632 | 0.324675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158228 | 316 | 19 | 55 | 16.631579 | 0.868421 | 0.041139 | 0 | 0.363636 | 0 | 0 | 0.026578 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 1 | 0 | true | 0.363636 | 0.090909 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
318a7da1d18de14f0e7811492e410744e7e653c6 | 17 | py | Python | syn/util/log/__init__.py | mbodenhamer/syn | aeaa3ad8a49bac8f50cf89b6f1fe97ad43d1d258 | [
"MIT"
] | 1 | 2021-07-15T08:55:12.000Z | 2021-07-15T08:55:12.000Z | syn/util/log/__init__.py | mbodenhamer/syn | aeaa3ad8a49bac8f50cf89b6f1fe97ad43d1d258 | [
"MIT"
] | 7 | 2021-01-07T23:51:57.000Z | 2021-12-13T19:50:57.000Z | syn/util/constraint/__init__.py | mbodenhamer/syn | aeaa3ad8a49bac8f50cf89b6f1fe97ad43d1d258 | [
"MIT"
] | 2 | 2016-07-11T08:46:31.000Z | 2017-12-13T13:30:51.000Z | from .b import *
| 8.5 | 16 | 0.647059 | 3 | 17 | 3.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 17 | 1 | 17 | 17 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3198ead44b827c50490d83ab7b1641a8a4fabf75 | 36 | py | Python | flogging/__init__.py | FragileTech/flogging | e07f74d097b17571998312ca43722a7289cc64e5 | [
"MIT"
] | null | null | null | flogging/__init__.py | FragileTech/flogging | e07f74d097b17571998312ca43722a7289cc64e5 | [
"MIT"
] | 111 | 2021-01-22T13:44:30.000Z | 2022-03-28T04:05:12.000Z | flogging/__init__.py | FragileTech/flogging | e07f74d097b17571998312ca43722a7289cc64e5 | [
"MIT"
] | null | null | null | from flogging.flogging import setup
| 18 | 35 | 0.861111 | 5 | 36 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31c6b50882c010e43e1a75350466de202b553a61 | 82 | py | Python | hnsw/math_test.py | xiangyangkan/hnsw-gpu | bad9f93ce2c3fe28567c2b7674b710d6202c2d37 | [
"Apache-2.0"
] | null | null | null | hnsw/math_test.py | xiangyangkan/hnsw-gpu | bad9f93ce2c3fe28567c2b7674b710d6202c2d37 | [
"Apache-2.0"
] | null | null | null | hnsw/math_test.py | xiangyangkan/hnsw-gpu | bad9f93ce2c3fe28567c2b7674b710d6202c2d37 | [
"Apache-2.0"
] | null | null | null | from hnsw import math
assert math.add(1, 1) == 2
assert math.subtract(1, 1) == 0
| 16.4 | 31 | 0.670732 | 16 | 82 | 3.4375 | 0.625 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089552 | 0.182927 | 82 | 4 | 32 | 20.5 | 0.731343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
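The hnsw test above assumes a `math` module in the package exposing `add` and `subtract`; that module isn't shown here, but a minimal implementation consistent with the assertions would be:

```python
# Hypothetical hnsw.math counterpart satisfying the assertions above.

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b
```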
31c980510b80a9103a749d6a39d8010b693d5113 | 348 | py | Python | simplenn/metrics/loss/__init__.py | robertocrespond/SimpleNN | ac9b7bd7fdf189666876d52d2af23fe48dbbd372 | [
"MIT"
] | null | null | null | simplenn/metrics/loss/__init__.py | robertocrespond/SimpleNN | ac9b7bd7fdf189666876d52d2af23fe48dbbd372 | [
"MIT"
] | null | null | null | simplenn/metrics/loss/__init__.py | robertocrespond/SimpleNN | ac9b7bd7fdf189666876d52d2af23fe48dbbd372 | [
"MIT"
] | null | null | null | from .binary_cross_entropy import BinaryCrossEntropy # pragma: no cover # noqa: F401
from .categorical_cross_entropy import CategoricalCrossEntropy # pragma: no cover # noqa: F401
from .mean_absolute_error import MeanAbsoluteError # pragma: no cover # noqa: F401
from .mean_squared_error import MeanSquaredError # pragma: no cover # noqa: F401
| 69.6 | 95 | 0.804598 | 44 | 348 | 6.181818 | 0.431818 | 0.117647 | 0.191176 | 0.25 | 0.382353 | 0.305147 | 0.213235 | 0 | 0 | 0 | 0 | 0.04 | 0.137931 | 348 | 4 | 96 | 87 | 0.866667 | 0.33046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31d683f3978d2e4b48f08ffad56853d3b6d8424b | 26,399 | py | Python | python/pyxir/frontend/onnx/ops/onnx_l2_convolution.py | Xilinx/pyxir | bef661d6d77adcdbd2cf4163f2cf3a1d31d40406 | [
"Apache-2.0"
] | 25 | 2020-06-17T22:41:13.000Z | 2022-03-22T16:28:22.000Z | python/pyxir/frontend/onnx/ops/onnx_l2_convolution.py | Xilinx/pyxir | bef661d6d77adcdbd2cf4163f2cf3a1d31d40406 | [
"Apache-2.0"
] | 25 | 2021-03-16T06:26:44.000Z | 2022-03-18T11:28:33.000Z | python/pyxir/frontend/onnx/ops/onnx_l2_convolution.py | Xilinx/pyxir | bef661d6d77adcdbd2cf4163f2cf3a1d31d40406 | [
"Apache-2.0"
] | 19 | 2020-07-30T10:03:02.000Z | 2021-06-29T01:18:16.000Z | # Copyright 2020 Xilinx Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Module for transforming ONNX L2 operators to XLayer objects
L2: Convolution related operators
"""
import math
import logging
import numpy as np
import pyxir as px
from typing import Dict, List
from pyxir.graph.layer import xlayer_factory as xlf
from pyxir.graph.layer import XLayer
from ..onnx_2_xlayer_registry import register_onnx_2_xlayer_converter
from ..onnx_tools import NodeWrapper
from .tools import eltwise_any_op
logger = logging.getLogger('pyxir')
@register_onnx_2_xlayer_converter("AveragePool")
def avg_pool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX AveragePool to XLayer Pooling (Avg) conversion function"""
logger.info("ONNX AveragePool -> XLayer Pooling (Avg)")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
auto_pad = node_attrs['auto_pad'] if 'auto_pad' in node_attrs\
else 'NOTSET'
ceil_mode = bool(node_attrs['ceil_mode']) if 'ceil_mode' in node_attrs\
else False
count_include_pad = node_attrs['count_include_pad']\
if 'count_include_pad' in node_attrs else 0
kernel_shape = node_attrs['kernel_shape']  # required AveragePool attribute (old fallback referenced an undefined W)
kernel_h, kernel_w = kernel_shape
pads = node_attrs['pads'] if 'pads' in node_attrs\
else None
strides = node_attrs['strides'] if 'strides' in node_attrs\
else [1, 1]
stride_h, stride_w = strides
if auto_pad not in ['NOTSET', "SAME_UPPER", "SAME_LOWER"]:
raise ValueError("AveragePool autopad attribute not supported but was:"
" {}".format(auto_pad))
if auto_pad in ["SAME_UPPER", "SAME_LOWER"]:
out_h, out_w = int(math.ceil(in_h / stride_h)), int(math.ceil(in_w / stride_w))
pad_h = (out_h - 1) * stride_h + kernel_h - in_h
pad_w = (out_w - 1) * stride_w + kernel_w - in_w
if auto_pad == "SAME_UPPER":
pad_ht, pad_hb = pad_h // 2, pad_h - (pad_h // 2)
pad_wl, pad_wr = pad_w // 2, pad_w - (pad_w // 2)
else:
pad_ht, pad_hb = pad_h - (pad_h // 2), pad_h // 2
pad_wl, pad_wr = pad_w - (pad_w // 2), pad_w // 2
padding = [pad_ht, pad_hb, pad_wl, pad_wr]
else:
padding = pads if pads is not None else [0, 0, 0, 0]
# [pad_ht, pad_hb, pad_wl, pad_wr] -> [pad_ht, pad_wl, pad_hb, pad_wr]
# TODO move internal pool padding to [pad_ht, pad_hb, pad_wl, pad_wr]
padding = [padding[i] for i in [0, 2, 1, 3]]
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = px.ops.pool2d(
op_name=px.stringify(name),
input_layer=iX,
pool_type='Avg',
pool_size=kernel_shape,
strides=strides,
padding=padding,
layout='NCHW',
ceil_mode=ceil_mode,
count_include_pad=count_include_pad,
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("Conv")
def conv(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX Conv to XLayer Conv conversion function"""
logger.info("ONNX Conv -> XLayer Conv (+ BiasAdd)")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
W_name = bottoms[1]
wX = xmap[W_name] # OIHW
B_name = bottoms[2] if len(bottoms) == 3 else None
bX = xmap[B_name] if len(bottoms) == 3 else None
auto_pad = node_attrs['auto_pad'] if 'auto_pad' in node_attrs\
else 'NOTSET'
dilations = node_attrs['dilations'] if 'dilations' in node_attrs\
else [1, 1]
dil_h, dil_w = dilations
groups = node_attrs['group'] if 'group' in node_attrs\
else 1
kernel_shape = node_attrs['kernel_shape'] if 'kernel_shape' in node_attrs\
else wX.shapes[2:]
kernel_h, kernel_w = kernel_shape
pads = node_attrs['pads'] if 'pads' in node_attrs\
else None
strides = node_attrs['strides'] if 'strides' in node_attrs\
else [1, 1]
stride_h, stride_w = strides
channels = wX.shapes[0]
assert wX.shapes[1] == in_c // groups
assert auto_pad == 'NOTSET' or pads is None
if (auto_pad == 'NOTSET' and pads is None) or auto_pad == 'VALID':
padding = [0, 0, 0, 0] # ht, hb, wl, wr
elif auto_pad in ["SAME_UPPER", "SAME_LOWER"]:
out_h, out_w = int(math.ceil(in_h / stride_h)), int(math.ceil(in_w / stride_w))
pad_h = (out_h - 1) * stride_h + (dil_h * (kernel_h - 1) + 1) - in_h
pad_w = (out_w - 1) * stride_w + (dil_w * (kernel_w - 1) + 1) - in_w
if auto_pad == "SAME_UPPER":
pad_ht, pad_hb = pad_h // 2, pad_h - (pad_h // 2)
pad_wl, pad_wr = pad_w // 2, pad_w - (pad_w // 2)
else:
pad_ht, pad_hb = pad_h - (pad_h // 2), pad_h // 2
pad_wl, pad_wr = pad_w - (pad_w // 2), pad_w // 2
padding = [pad_ht, pad_hb, pad_wl, pad_wr]
else:
assert len(pads) % 2 == 0
half = len(pads) // 2
padding = []
for i in range(half):
padding.extend([pads[i], pads[i+half]])
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant_weights = node_attrs['vai_quant_weights']\
if 'vai_quant_weights' in node_attrs else []
vai_quant_biases = node_attrs['vai_quant_biases']\
if 'vai_quant_biases' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
conv_name = name if B_name is None else name + '_Conv'
X = px.ops.conv2d(
op_name=px.stringify(conv_name),
input_layer=iX,
weights_layer=wX,
kernel_size=kernel_shape,
strides=strides,
padding_hw=padding,
dilation=dilations,
groups=groups,
channels=channels,
data_layout='NCHW',
kernel_layout='OIHW',
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
vai_quant_weights=vai_quant_weights,
vai_quant_biases=vai_quant_biases,
onnx_id=name
)
res = [X]
if B_name is not None:
bias_add_X = xlf.get_xop_factory_func('BiasAdd')(
op_name=px.stringify(name),
axis=1,
input_layer=X,
bias_layer=bX,
onnx_id=name
)
res.append(bias_add_X)
return res
@register_onnx_2_xlayer_converter("ConvInteger")
def conv_integer(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX Convinteger to XLayer Conv conversion function"""
logger.info("ONNX ConvInteger -> XLayer Conv")
return conv(node, params, xmap)
@register_onnx_2_xlayer_converter("ConvTranspose")
def conv_transpose(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX ConvTranspose to XLayer Conv2DTranspose conversion function"""
logger.info("ONNX ConvTranspose -> XLayer Conv2DTranspose (+ BiasAdd)")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
W_name = bottoms[1]
wX = xmap[W_name] # OIHW
assert wX.shapes[1] == in_c
B_name = bottoms[2] if len(bottoms) == 3 else None
bX = xmap[B_name] if len(bottoms) == 3 else None
auto_pad = node_attrs['auto_pad'] if 'auto_pad' in node_attrs\
else 'NOTSET'
dilations = node_attrs['dilations'] if 'dilations' in node_attrs\
else [1, 1]
dil_h, dil_w = dilations
groups = node_attrs['group'] if 'group' in node_attrs\
else 1
kernel_shape = node_attrs['kernel_shape'] if 'kernel_shape' in node_attrs\
else wX.shapes[2:]
kernel_h, kernel_w = kernel_shape
output_padding = node_attrs['output_padding'] \
if 'output_padding' in node_attrs else [0, 0]
if np.sum(output_padding) != 0:
raise NotImplementedError("Conv2DTranspose with output padding not"
" equal to a zero vector is unsupported")
out_pad_h, out_pad_w = output_padding
output_shape = node_attrs['output_shape'] if 'output_shape' in node_attrs\
else None
pads = node_attrs['pads'] if 'pads' in node_attrs\
else None
strides = node_attrs['strides'] if 'strides' in node_attrs\
else [1, 1]
stride_h, stride_w = strides
channels = wX.shapes[0]
if output_shape is None:
assert auto_pad == 'NOTSET' or pads is None
if (auto_pad == 'NOTSET' and pads is None) or auto_pad == 'VALID':
padding = [0, 0, 0, 0] # ht, hb, wl, wr
elif auto_pad in ["SAME_UPPER", "SAME_LOWER"]:
out_h, out_w = in_h * stride_h, in_w * stride_w
pad_h = stride_h * (in_h - 1) + out_pad_h + ((kernel_h - 1) * dil_h + 1) - out_h
pad_w = stride_w * (in_w - 1) + out_pad_w + ((kernel_w - 1) * dil_w + 1) - out_w
if auto_pad == "SAME_UPPER":
pad_ht, pad_hb = pad_h // 2, pad_h - (pad_h // 2)
pad_wl, pad_wr = pad_w // 2, pad_w - (pad_w // 2)
else:
pad_ht, pad_hb = pad_h - (pad_h // 2), pad_h // 2
pad_wl, pad_wr = pad_w - (pad_w // 2), pad_w // 2
padding = [pad_ht, pad_hb, pad_wl, pad_wr]
else:
# ONNX pads are [pad_ht, pad_wl, pad_hb, pad_wr]; reorder to [pad_ht, pad_hb, pad_wl, pad_wr]
padding = [pads[0], pads[2], pads[1], pads[3]]
else:
out_h, out_w = output_shape[2], output_shape[3]
pad_h = stride_h * (in_h - 1) + out_pad_h + ((kernel_h - 1) * dil_h + 1) - out_h
pad_w = stride_w * (in_w - 1) + out_pad_w + ((kernel_w - 1) * dil_w + 1) - out_w
# Per the ONNX ConvTranspose spec, SAME_UPPER puts the smaller half of the padding at the start
if auto_pad == 'SAME_UPPER':
pad_ht, pad_hb = pad_h // 2, pad_h - (pad_h // 2)
pad_wl, pad_wr = pad_w // 2, pad_w - (pad_w // 2)
else:
pad_ht, pad_hb = pad_h - (pad_h // 2), pad_h // 2
pad_wl, pad_wr = pad_w - (pad_w // 2), pad_w // 2
padding = [pad_ht, pad_hb, pad_wl, pad_wr]
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant_weights = node_attrs['vai_quant_weights']\
if 'vai_quant_weights' in node_attrs else []
vai_quant_biases = node_attrs['vai_quant_biases']\
if 'vai_quant_biases' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
conv_name = name if B_name is None else name + '_Conv'
X = px.ops.conv2d_transpose(
op_name=px.stringify(conv_name),
input_layer=iX,
weights_layer=wX,
kernel_size=kernel_shape,
strides=strides,
padding_hw=padding,
dilation=dilations,
groups=groups,
channels=channels,
data_layout='NCHW',
kernel_layout='OIHW',
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
vai_quant_weights=vai_quant_weights,
vai_quant_biases=vai_quant_biases,
onnx_id=name
)
res = [X]
if B_name is not None:
bias_add_X = xlf.get_xop_factory_func('BiasAdd')(
op_name=px.stringify(name),
axis=1,
input_layer=X,
bias_layer=bX,
onnx_id=name
)
res.append(bias_add_X)
return res
@register_onnx_2_xlayer_converter("Flatten")
def flatten(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""
ONNX Flatten to XLayer Flatten or Reshape conversion function
ONNX: Flattens the input tensor into a 2D matrix. If input tensor has
shape (d_0, d_1, ... d_n) then the output will have shape
(d_0 X d_1 ... d_(axis-1), d_axis X d_(axis+1) ... X dn).
See https://github.com/onnx/onnx/blob/master/docs/Operators.md#Flatten
"""
logger.info("ONNX Flatten -> XLayer Flatten/Reshape")
assert len(node.get_outputs()) == 1
assert len(node.get_inputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]]
shape = iX.shapes.tolist()
rank = len(shape)
axis = node_attrs['axis'] if 'axis' in node_attrs else 1
assert axis >= -rank and axis <= rank
if axis == 1 or axis == -(rank-1):
X = px.ops.batch_flatten(px.stringify(name), [iX], onnx_id=name)
else:
shape_1 = int(np.prod(shape[:axis])) if shape[:axis] != [] else 1
shape_2 = int(np.prod(shape[axis:])) if shape[axis:] != [] else 1
newshape = [shape_1, shape_2]
X = px.ops.reshape(
op_name=px.stringify(name),
newshape=newshape,
input_layer=iX,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("GlobalAveragePool")
def global_avg_pool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX GlobalAveragePool to XLayer Pooling (Avg) conversion function"""
logger.info("ONNX GlobalAveragePool -> XLayer Pooling (Avg)")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = xlf.get_xop_factory_func('GlobalPooling')(
op_name=px.stringify(name),
input_layer=iX,
pool_type='Avg',
layout='NCHW',
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("GlobalMaxPool")
def global_max_pool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX GlobalMaxPool to XLayer Pooling (Max) conversion function"""
logger.info("ONNX GlobalMaxPool -> XLayer Pooling (Max)")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = xlf.get_xop_factory_func('GlobalPooling')(
op_name=px.stringify(name),
input_layer=iX,
pool_type='Max',
layout='NCHW',
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("LRN")
def lrn(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
return eltwise_any_op("LRN", node, params, xmap)
@register_onnx_2_xlayer_converter("MaxPool")
def max_pool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]):
"""ONNX MaxPool to XLayer MaxPool conversion function"""
logger.info("ONNX MaxPool -> XLayer Pooling")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
auto_pad = node_attrs['auto_pad'] if 'auto_pad' in node_attrs\
else 'NOTSET'
ceil_mode = bool(node_attrs['ceil_mode']) if 'ceil_mode' in node_attrs\
else False
dilations = node_attrs['dilations'] if 'dilations' in node_attrs\
else [1, 1]
dil_h, dil_w = dilations
kernel_shape = node_attrs['kernel_shape']  # required MaxPool attribute (old fallback referenced an undefined W)
kernel_h, kernel_w = kernel_shape
pads = node_attrs['pads'] if 'pads' in node_attrs\
else None
storage_order = node_attrs['storage_order']\
if 'storage_order' in node_attrs else 0
strides = node_attrs['strides'] if 'strides' in node_attrs\
else [1, 1]
stride_h, stride_w = strides
if auto_pad not in ['NOTSET', 'VALID', 'SAME_UPPER', 'SAME_LOWER']:
raise ValueError("MaxPool autopad attribute not supported but was: {}"
.format(auto_pad))
if storage_order != 0:
raise ValueError("MaxPool storage_order != 0 attribute not supported"
" but got: {}".format(storage_order))
# TODO dilations
if dilations != [1, 1]:
raise NotImplementedError("Dilations are expected to be [1, 1] for"
" now")
if auto_pad in ["SAME_UPPER", "SAME_LOWER"]:
out_h, out_w = int(math.ceil(in_h / stride_h)), int(math.ceil(in_w / stride_w))
pad_h = (out_h - 1) * stride_h + (dil_h * (kernel_h - 1) + 1) - in_h
pad_w = (out_w - 1) * stride_w + (dil_w * (kernel_w - 1) + 1) - in_w
if auto_pad == "SAME_UPPER":
pad_ht, pad_hb = pad_h // 2, pad_h - (pad_h // 2)
pad_wl, pad_wr = pad_w // 2, pad_w - (pad_w // 2)
else:
pad_ht, pad_hb = pad_h - (pad_h // 2), pad_h // 2
pad_wl, pad_wr = pad_w - (pad_w // 2), pad_w // 2
padding = [pad_ht, pad_hb, pad_wl, pad_wr]
else:
padding = pads if pads is not None else [0, 0, 0, 0]
# [pad_ht, pad_hb, pad_wl, pad_wr] -> [pad_ht, pad_wl, pad_hb, pad_wr]
# TODO move internal pool padding to [pad_ht, pad_hb, pad_wl, pad_wr]
padding = [padding[i] for i in [0, 2, 1, 3]]
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = px.ops.pool2d(
op_name=px.stringify(name),
input_layer=iX,
pool_type='Max',
pool_size=kernel_shape,
strides=strides,
padding=padding,
layout='NCHW',
ceil_mode=ceil_mode,
count_include_pad=False,
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("MaxRoiPool")
def max_roi_pool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX MaxRoiPool to XLayer AnyOp conversion function"""
logger.info("ONNX MaxRoiPool -> XLayer AnyOp")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
_, in_c, in_h, in_w = iX.shapes
rois = xmap[bottoms[1]]
num_rois = rois.shapes[0]
out_h, out_w = [int(i) for i in node_attrs['pooled_shape']]
out_shape = [num_rois, in_c, out_h, out_w]
X = px.ops.any_op(
op_name=px.stringify(name),
in_xlayers=[iX],
any_shape=out_shape,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("MaxUnPool")
def max_unpool(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX MaxUnPool to XLayer AnyOp conversion function"""
logger.info("ONNX MaxPool -> XLayer Pooling")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
in_b, in_c, in_h, in_w = iX.shapes
if len(bottoms) == 3:
out_shape = [int(i) for i in list(xmap[bottoms[2]].data[0])]
else:
kernel_shape = node_attrs['kernel_shape']  # required MaxUnpool attribute (old fallback referenced an undefined W)
kernel_h, kernel_w = kernel_shape
pads = node_attrs['pads'] if 'pads' in node_attrs\
else None
strides = node_attrs['strides'] if 'strides' in node_attrs\
else [1, 1]
stride_h, stride_w = strides
padding = pads if pads is not None else [0, 0, 0, 0]
# [pad_ht, pad_hb, pad_wl, pad_wr] -> [pad_ht, pad_wl, pad_hb, pad_wr]
# TODO move internal pool padding to [pad_ht, pad_hb, pad_wl, pad_wr]
padding = [padding[i] for i in [0, 2, 1, 3]]
pad_ht, pad_wl, pad_hb, pad_wr = padding
out_h = (in_h - 1) * stride_h + kernel_h - pad_ht - pad_hb
out_w = (in_w - 1) * stride_w + kernel_w - pad_wl - pad_wr
out_shape = [in_b, in_c, out_h, out_w]
X = px.ops.any_op(
op_name=px.stringify(name),
in_xlayers=[iX],
any_shape=out_shape,
onnx_id=name
)
return [X]
@register_onnx_2_xlayer_converter("Pad")
def pad(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX Pad to XLayer Pad conversion function"""
logger.info("ONNX Pad -> XLayer Pad")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
iX = xmap[bottoms[0]] # NCHW
if len(bottoms) > 1:
padding = [int(i) for i in xmap[bottoms[1]].data[0]]
pad_value = float(xmap[bottoms[2]].data[0])
else:
pad_str = 'pads' if 'pads' in node_attrs else 'paddings'
padding = [int(i) for i in node_attrs[pad_str]]
pad_value = float(node_attrs['value']) \
if 'value' in node_attrs else 0.
h = len(padding) // 2
padding = [[padding[i], padding[i + h]] for i in range(h)]
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = px.ops.pad(
op_name=px.stringify(name),
input_layer=iX,
padding=padding,
pad_value=pad_value,
onnx_id=name,
vai_quant=vai_quant,
vai_quant_in=vai_quant_in,
vai_quant_out=vai_quant_out,
)
return [X]
@register_onnx_2_xlayer_converter("QLinearConv")
def qlinearconv(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX QLinearConv to XLayer AnyOp conversion function"""
raise NotImplementedError("Unsupported ONNX QLinearConv operator")
@register_onnx_2_xlayer_converter("Upsample")
def upsample(node: NodeWrapper,
params: Dict[str, np.ndarray],
xmap: Dict[str, XLayer]) -> List[XLayer]:
"""ONNX Upsample to XLayer Upsampling2D conversion function"""
logger.info("ONNX Upsample -> XLayer Upsampling2D")
assert len(node.get_outputs()) == 1
name = node.get_outputs()[0]
bottoms = node.get_inputs()
node_attrs = node.get_attributes()
assert len(bottoms) == 2 or 'scales' in node_attrs
iX = xmap[bottoms[0]] # NCHW
scales = [float(i) for i in (list(xmap[bottoms[1]].data[0])
if 'scales' not in node_attrs
else node_attrs['scales'])]
assert len(scales) == len(iX.shapes)
scale_n, scale_c, scale_h, scale_w = scales
if scale_n != 1:
raise NotImplementedError("Unsupported upsampling layer with scale"
" for batch dim != 1")
if scale_c != 1:
raise NotImplementedError("Unsupported upsampling layer with scale"
" for channel dim != 1")
mode = node_attrs['mode'] if 'mode' in node_attrs \
else 'nearest'
if mode == 'nearest':
mode = 'nearest_neighbor'
# Quant_info (optional)
vai_quant_in = node_attrs['vai_quant_in']\
if 'vai_quant_in' in node_attrs else []
vai_quant_out = node_attrs['vai_quant_out']\
if 'vai_quant_out' in node_attrs else []
vai_quant = node_attrs['vai_quant']\
if 'vai_quant' in node_attrs else []
X = px.ops.upsampling2d(
op_name=px.stringify(name),
in_xlayers=[iX],
scale_h=scale_h,
scale_w=scale_w,
data_layout='NCHW',
method=mode,
onnx_id=name
)
return [X]
| 34.553665 | 92 | 0.614948 | 3,882 | 26,399 | 3.901082 | 0.072901 | 0.082607 | 0.053751 | 0.062401 | 0.784271 | 0.753962 | 0.734218 | 0.724842 | 0.702919 | 0.686741 | 0 | 0.013329 | 0.266753 | 26,399 | 763 | 93 | 34.598952 | 0.769024 | 0.087844 | 0 | 0.723549 | 0 | 0 | 0.110819 | 0 | 0 | 0 | 0 | 0.002621 | 0.03413 | 1 | 0.023891 | false | 0 | 0.017065 | 0.001706 | 0.06314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ec9ab20e2d1f011f58cccce3bce47a9fa4d262e | 26 | py | Python | examples/list_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | examples/list_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | examples/list_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | l = [1, 3, 4]
print(l[2])
| 8.666667 | 13 | 0.423077 | 7 | 26 | 1.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.230769 | 26 | 2 | 14 | 13 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9ed2a3a3fe90254134fd65478924e48c422d2d7d | 39 | py | Python | django_google_json_style_api/__init__.py | azevakin/django-google-json-style-api | f1d8058ed7ce03368ea36ca333e96e21fa74b2e1 | [
"MIT"
] | 1 | 2021-10-19T20:00:02.000Z | 2021-10-19T20:00:02.000Z | django_google_json_style_api/__init__.py | azevakin/django-google-json-style-api | f1d8058ed7ce03368ea36ca333e96e21fa74b2e1 | [
"MIT"
] | null | null | null | django_google_json_style_api/__init__.py | azevakin/django-google-json-style-api | f1d8058ed7ce03368ea36ca333e96e21fa74b2e1 | [
"MIT"
] | null | null | null | from .requests import PaginatedRequest
| 19.5 | 38 | 0.871795 | 4 | 39 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4257e38a7d947ee3c1a194dda7cb15a43809cf0 | 349 | py | Python | tests/protocol/secondary/d.py | gufolabs/gufo_loader | ffb4e17b2e8f36d938a145d50b7bd27d976f9fce | [
"BSD-3-Clause"
] | 4 | 2022-03-04T07:49:18.000Z | 2022-03-08T07:57:05.000Z | tests/protocol/secondary/d.py | gufolabs/gufo_loader | ffb4e17b2e8f36d938a145d50b7bd27d976f9fce | [
"BSD-3-Clause"
] | null | null | null | tests/protocol/secondary/d.py | gufolabs/gufo_loader | ffb4e17b2e8f36d938a145d50b7bd27d976f9fce | [
"BSD-3-Clause"
] | 1 | 2022-03-08T07:57:07.000Z | 2022-03-08T07:57:07.000Z | # ---------------------------------------------------------------------
# Gufo Labs Loader:
# Import from another module, must be ignored
# ---------------------------------------------------------------------
# Copyright (C) 2022, Gufo Labs
# ---------------------------------------------------------------------
from .c import CPlugin # noqa
| 34.9 | 71 | 0.275072 | 20 | 349 | 4.8 | 0.8 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.094556 | 349 | 9 | 72 | 38.777778 | 0.291139 | 0.882521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b439598790547b3855ac025d056d5c4c9130e1a5 | 12,372 | py | Python | plugins/item_licenses/plugin_tests/item_licenses_test.py | JKitok/girder | 317962d155fc9811d25e5f33bd3e849c4ac96645 | [
"Apache-2.0"
] | 395 | 2015-01-12T19:20:13.000Z | 2022-03-30T05:40:40.000Z | plugins/item_licenses/plugin_tests/item_licenses_test.py | JKitok/girder | 317962d155fc9811d25e5f33bd3e849c4ac96645 | [
"Apache-2.0"
] | 2,388 | 2015-01-01T20:09:19.000Z | 2022-03-29T16:49:14.000Z | plugins/item_licenses/plugin_tests/item_licenses_test.py | JKitok/girder | 317962d155fc9811d25e5f33bd3e849c4ac96645 | [
"Apache-2.0"
] | 177 | 2015-01-04T14:47:00.000Z | 2022-03-25T09:01:51.000Z | # -*- coding: utf-8 -*-
from girder.exceptions import ValidationException
from girder.models.folder import Folder
from girder.models.setting import Setting
from girder.models.user import User
from tests import base
from girder_item_licenses.settings import PluginSettings
def setUpModule():
base.enabledPlugins.append('item_licenses')
base.startServer()
def tearDownModule():
base.stopServer()
class ItemLicensesTestCase(base.TestCase):
def setUp(self):
super().setUp()
# Create a user
user = {
'email': 'user1@girder.test',
'login': 'user1login',
'firstName': 'First',
'lastName': 'Last',
'password': 'user1password',
'admin': False
}
self.user = User().createUser(**user)
# Get user's private folder
folders = Folder().childFolders(self.user, 'user', user=self.user)
for folder in folders:
if folder['name'] == 'Private':
self.folder = folder
break
def testItemCreateInvalid(self):
"""
Test creating items with invalid licenses.
"""
# Create item with a null name
params = {
'name': ' my item name',
'description': ' a description ',
'folderId': self.folder['_id'],
'license': None
}
resp = self.request(path='/item', method='POST', params=params,
user=self.user)
self.assertValidationError(resp, 'license')
# Create item with an invalid license name
params = {
'name': ' my item name',
'description': ' a description ',
'folderId': self.folder['_id'],
'license': 'Unsupported license'
}
resp = self.request(path='/item', method='POST', params=params,
user=self.user)
self.assertValidationError(resp, 'license')
# Create item with a valid license name with extra whitespace
params = {
'name': ' my item name',
'description': ' a description ',
'folderId': self.folder['_id'],
'license': ' The MIT License (MIT) '
}
resp = self.request(path='/item', method='POST', params=params,
user=self.user)
self.assertValidationError(resp, 'license')
    def testItemCreateAndUpdate(self):
        """
        Test creating, reading, and updating an item, especially with regard
        to its license field.
        """
        # Create item without specifying a license
        params = {
            'name': ' my item name',
            'description': ' a description ',
            'folderId': self.folder['_id']
        }
        resp = self.request(path='/item', method='POST', params=params,
                            user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], '')

        # Create item with a blank license name
        params = {
            'name': ' my item name',
            'description': ' a description ',
            'folderId': self.folder['_id'],
            'license': ''
        }
        resp = self.request(path='/item', method='POST', params=params,
                            user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], '')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], '')

        # Update item license
        params = {
            'license': 'Apache License 2'
        }
        resp = self.request(path='/item/%s' % resp.json['_id'], method='PUT',
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

        # Update item license to be unspecified
        params = {
            'license': ''
        }
        resp = self.request(path='/item/%s' % resp.json['_id'], method='PUT',
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], '')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], '')

        # Create item with a valid license name
        params = {
            'name': ' my item name',
            'description': ' a description ',
            'folderId': self.folder['_id'],
            'license': 'The MIT License (MIT)'
        }
        resp = self.request(path='/item', method='POST', params=params,
                            user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'The MIT License (MIT)')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'The MIT License (MIT)')

        # Update item
        params = {
            'name': 'changed name',
            'description': 'new description',
            'license': 'Apache License 2'
        }
        resp = self.request(path='/item/%s' % resp.json['_id'], method='PUT',
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

        # Update item with the same license name
        params = {
            'license': 'Apache License 2'
        }
        resp = self.request(path='/item/%s' % resp.json['_id'], method='PUT',
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

    def testItemCopy(self):
        """
        Test copying an item, especially with regard to its license field.
        """
        params = {
            'name': 'original item',
            'description': 'original description',
            'license': 'The MIT License (MIT)',
            'folderId': self.folder['_id']
        }

        # Create item
        resp = self.request(path='/item', method='POST', params=params,
                            user=self.user)
        self.assertStatusOk(resp)
        origItemId = resp.json['_id']

        # Copy to a new item with a different name and license.
        params = {
            'name': 'new item',
            'license': 'Apache License 2'
        }
        resp = self.request(path='/item/%s/copy' % origItemId,
                            method='POST', user=self.user, params=params)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

        # Fetch item
        resp = self.request(path='/item/%s' % resp.json['_id'],
                            params=params, user=self.user)
        self.assertStatusOk(resp)
        self.assertEqual(resp.json['license'], 'Apache License 2')

    def testGetLicenses(self):
        """
        Test getting the list of licenses.
        """
        # Get default settings
        resp = self.request(path='/item/licenses', user=self.user, params={
            'default': True
        })
        self.assertStatusOk(resp)
        self.assertGreater(len(resp.json), 1)
        self.assertIn('category', resp.json[0])
        self.assertIn('licenses', resp.json[0])
        self.assertGreater(len(resp.json[0]['licenses']), 8)
        self.assertIn('name', resp.json[0]['licenses'][0])
        self.assertGreater(len(resp.json[0]['licenses'][0]['name']), 0)
        self.assertIn('name', resp.json[0]['licenses'][1])
        self.assertGreater(len(resp.json[0]['licenses'][1]['name']), 0)

        # Get current settings
        resp = self.request(path='/item/licenses', user=self.user)
        self.assertStatusOk(resp)
        self.assertGreater(len(resp.json), 1)
        self.assertIn('category', resp.json[0])
        self.assertIn('licenses', resp.json[0])
        self.assertGreater(len(resp.json[0]['licenses']), 8)
        self.assertIn('name', resp.json[0]['licenses'][0])
        self.assertGreater(len(resp.json[0]['licenses'][0]['name']), 0)
        self.assertIn('name', resp.json[0]['licenses'][1])
        self.assertGreater(len(resp.json[0]['licenses'][1]['name']), 0)

        # Change licenses
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': [{'name': '1'}]},
             {'category': 'B', 'licenses': [{'name': '2'}, {'name': '3'}]}])

        # Get default settings after changing licenses
        resp = self.request(path='/item/licenses', user=self.user, params={
            'default': True
        })
        self.assertStatusOk(resp)
        self.assertGreater(len(resp.json), 1)
        self.assertIn('category', resp.json[0])
        self.assertIn('licenses', resp.json[0])
        self.assertGreater(len(resp.json[0]['licenses']), 8)
        self.assertIn('name', resp.json[0]['licenses'][0])
        self.assertGreater(len(resp.json[0]['licenses'][0]['name']), 0)
        self.assertIn('name', resp.json[0]['licenses'][1])
        self.assertGreater(len(resp.json[0]['licenses'][1]['name']), 0)

        # Get current settings after changing licenses
        resp = self.request(path='/item/licenses', user=self.user)
        self.assertStatusOk(resp)
        self.assertCountEqual(
            resp.json,
            [{'category': 'A', 'licenses': [{'name': '1'}]},
             {'category': 'B', 'licenses': [{'name': '2'}, {'name': '3'}]}])

    def testLicensesSettingValidation(self):
        """
        Test validation of the licenses setting.
        """
        # Test valid settings
        Setting().set(
            PluginSettings.LICENSES,
            [])
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': []}])
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': [{'name': '1'}]}])
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': [{'name': '1'}, {'name': '2'}]}])
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': []},
             {'category': 'B', 'licenses': [{'name': '1'}]}])
        Setting().set(
            PluginSettings.LICENSES,
            [{'category': 'A', 'licenses': []},
             {'category': 'B', 'licenses': [{'name': '1'}, {'name': '2'}]}])

        # Test invalid top-level types
        for val in (None, 1, '', {}, [{}]):
            self.assertRaises(
                ValidationException, Setting().set,
                PluginSettings.LICENSES, val)

        # Test invalid category types
        for category, licenses in ((None, []), (1, []), ('', []), ({}, [])):
            self.assertRaises(
                ValidationException,
                Setting().set,
                PluginSettings.LICENSES,
                [{'category': category, 'licenses': licenses}])

        # Test invalid licenses types
        for val in (None, {}, [1], ['']):
            self.assertRaises(
                ValidationException,
                Setting().set,
                PluginSettings.LICENSES,
                [{'category': 'A', 'licenses': val}])

        # Test invalid license names
        for val in (None, 1, '', {}, []):
            self.assertRaises(
                ValidationException,
                Setting().set,
                PluginSettings.LICENSES,
                [{'category': 'A', 'licenses': [{'name': val}]}])
# nemcore/types/playlist.py (nnnewb/NEMCore, MIT)
from .get_user_playlist_resp import Playlist as P


class Playlist(P):
    pass
# kivymd/uix/dialog/__init__.py (AnEx07/KivyMD, MIT)
from .dialog import BaseDialog, MDDialog
# src/foremast/awslambda/cloudwatch_log_event/__init__.py (gitter-badger/foremast, Apache-2.0)
from .cloudwatch_log_event import *
# tests/test_aamp.py (jrbourbeau/stumpy, BSD-3-Clause)
import numpy as np
import numpy.testing as npt
import pandas as pd
from stumpy import config, aamp
import pytest
import naive

test_data = [
    (
        np.array([9, 8100, -60, 7], dtype=np.float64),
        np.array([584, -11, 23, 79, 1001, 0, -19], dtype=np.float64),
    ),
    (
        np.random.uniform(-1000, 1000, [8]).astype(np.float64),
        np.random.uniform(-1000, 1000, [64]).astype(np.float64),
    ),
]

substitution_locations = [(slice(0, 0), 0, -1, slice(1, 3), [0, 3])]
substitution_values = [np.nan, np.inf]

def test_aamp_int_input():
    with pytest.raises(TypeError):
        aamp(np.arange(10), 5)


@pytest.mark.parametrize("T_A, T_B", test_data)
def test_aamp_self_join(T_A, T_B):
    m = 3
    ref_mp = naive.aamp(T_B, m)
    comp_mp = aamp(T_B, m)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp, comp_mp)

    comp_mp = aamp(pd.Series(T_B), m)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp, comp_mp)


@pytest.mark.parametrize("T_A, T_B", test_data)
def test_aamp_A_B_join(T_A, T_B):
    m = 3
    ref_mp = naive.aamp(T_A, m, T_B=T_B)
    comp_mp = aamp(T_A, m, T_B, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp, comp_mp)

    comp_mp = aamp(pd.Series(T_A), m, pd.Series(T_B), ignore_trivial=False)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp, comp_mp)

def test_aamp_constant_subsequence_self_join():
    T_A = np.concatenate(
        (np.zeros(20, dtype=np.float64), np.ones(5, dtype=np.float64))
    )
    m = 3
    ref_mp = naive.aamp(T_A, m)
    comp_mp = aamp(T_A, m, ignore_trivial=True)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    comp_mp = aamp(pd.Series(T_A), m, ignore_trivial=True)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices


def test_aamp_one_constant_subsequence_A_B_join():
    T_A = np.random.rand(20)
    T_B = np.concatenate(
        (np.zeros(20, dtype=np.float64), np.ones(5, dtype=np.float64))
    )
    m = 3
    ref_mp = naive.aamp(T_A, m, T_B=T_B)
    comp_mp = aamp(T_A, m, T_B, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    comp_mp = aamp(pd.Series(T_A), m, pd.Series(T_B), ignore_trivial=False)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    # Swap inputs
    ref_mp = naive.aamp(T_B, m, T_B=T_A)
    comp_mp = aamp(T_B, m, T_A, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices


def test_aamp_two_constant_subsequences_A_B_join():
    T_A = np.concatenate(
        (np.zeros(10, dtype=np.float64), np.ones(10, dtype=np.float64))
    )
    T_B = np.concatenate(
        (np.zeros(20, dtype=np.float64), np.ones(5, dtype=np.float64))
    )
    m = 3
    ref_mp = naive.aamp(T_A, m, T_B=T_B)
    comp_mp = aamp(T_A, m, T_B, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    comp_mp = aamp(pd.Series(T_A), m, pd.Series(T_B), ignore_trivial=False)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    # Swap inputs
    ref_mp = naive.aamp(T_B, m, T_B=T_A)
    comp_mp = aamp(T_B, m, T_A, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

    comp_mp = aamp(pd.Series(T_B), m, pd.Series(T_A), ignore_trivial=False)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp[:, 0], comp_mp[:, 0])  # ignore indices

def test_aamp_identical_subsequence_self_join():
    identical = np.random.rand(8)
    T_A = np.random.rand(20)
    T_A[1 : 1 + identical.shape[0]] = identical
    T_A[11 : 11 + identical.shape[0]] = identical
    m = 3
    ref_mp = naive.aamp(T_A, m)
    comp_mp = aamp(T_A, m, ignore_trivial=True)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(
        ref_mp[:, 0], comp_mp[:, 0], decimal=config.STUMPY_TEST_PRECISION
    )  # ignore indices

    comp_mp = aamp(pd.Series(T_A), m, ignore_trivial=True)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(
        ref_mp[:, 0], comp_mp[:, 0], decimal=config.STUMPY_TEST_PRECISION
    )  # ignore indices


def test_aamp_identical_subsequence_A_B_join():
    identical = np.random.rand(8)
    T_A = np.random.rand(20)
    T_B = np.random.rand(20)
    T_A[1 : 1 + identical.shape[0]] = identical
    T_B[11 : 11 + identical.shape[0]] = identical
    m = 3
    ref_mp = naive.aamp(T_A, m, T_B=T_B)
    comp_mp = aamp(T_A, m, T_B, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(
        ref_mp[:, 0], comp_mp[:, 0], config.STUMPY_TEST_PRECISION
    )  # ignore indices

    comp_mp = aamp(pd.Series(T_A), m, pd.Series(T_B), ignore_trivial=False)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(
        ref_mp[:, 0], comp_mp[:, 0], config.STUMPY_TEST_PRECISION
    )  # ignore indices

    # Swap inputs
    ref_mp = naive.aamp(T_B, m, T_B=T_A)
    comp_mp = aamp(T_B, m, T_A, ignore_trivial=False)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(
        ref_mp[:, 0], comp_mp[:, 0], config.STUMPY_TEST_PRECISION
    )  # ignore indices

@pytest.mark.parametrize("T_A, T_B", test_data)
@pytest.mark.parametrize("substitute_B", substitution_values)
@pytest.mark.parametrize("substitution_locations", substitution_locations)
def test_aamp_nan_inf_self_join(T_A, T_B, substitute_B, substitution_locations):
    m = 3

    T_B_sub = T_B.copy()

    for substitution_location_B in substitution_locations:
        T_B_sub[:] = T_B[:]
        T_B_sub[substitution_location_B] = substitute_B

        ref_mp = naive.aamp(T_B_sub, m)
        comp_mp = aamp(T_B_sub, m, ignore_trivial=True)
        naive.replace_inf(ref_mp)
        naive.replace_inf(comp_mp)
        npt.assert_almost_equal(ref_mp, comp_mp)

        comp_mp = aamp(pd.Series(T_B_sub), m, ignore_trivial=True)
        naive.replace_inf(comp_mp)
        npt.assert_almost_equal(ref_mp, comp_mp)


@pytest.mark.parametrize("T_A, T_B", test_data)
@pytest.mark.parametrize("substitute_A", substitution_values)
@pytest.mark.parametrize("substitute_B", substitution_values)
@pytest.mark.parametrize("substitution_locations", substitution_locations)
def test_aamp_nan_inf_A_B_join(
    T_A, T_B, substitute_A, substitute_B, substitution_locations
):
    m = 3

    T_A_sub = T_A.copy()
    T_B_sub = T_B.copy()

    for substitution_location_B in substitution_locations:
        for substitution_location_A in substitution_locations:
            T_A_sub[:] = T_A[:]
            T_B_sub[:] = T_B[:]
            T_A_sub[substitution_location_A] = substitute_A
            T_B_sub[substitution_location_B] = substitute_B

            ref_mp = naive.aamp(T_A_sub, m, T_B=T_B_sub)
            comp_mp = aamp(T_A_sub, m, T_B_sub, ignore_trivial=False)
            naive.replace_inf(ref_mp)
            naive.replace_inf(comp_mp)
            npt.assert_almost_equal(ref_mp, comp_mp)

            comp_mp = aamp(
                pd.Series(T_A_sub), m, pd.Series(T_B_sub), ignore_trivial=False
            )
            naive.replace_inf(comp_mp)
            npt.assert_almost_equal(ref_mp, comp_mp)


def test_aamp_nan_zero_mean_self_join():
    T = np.array([-1, 0, 1, np.inf, 1, 0, -1])
    m = 3

    ref_mp = naive.aamp(T, m)
    comp_mp = aamp(T, m, ignore_trivial=True)
    naive.replace_inf(ref_mp)
    naive.replace_inf(comp_mp)
    npt.assert_almost_equal(ref_mp, comp_mp)
# skuba-update/test/unit/skuba_update_test.py (cmurphy/skuba, Apache-2.0)
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Copyright (c) 2019 SUSE LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from collections import namedtuple

from mock import patch, call, mock_open, Mock, ANY

from skuba_update.skuba_update import (
    main,
    update,
    run_command,
    run_zypper_command,
    node_name_from_machine_id,
    annotate,
    is_reboot_needed,
    reboot_sentinel_file,
    annotate_updates_available,
    get_update_list,
    restart_services,
    REBOOT_REQUIRED_PATH,
    ZYPPER_EXIT_INF_UPDATE_NEEDED,
    ZYPPER_EXIT_INF_RESTART_NEEDED,
    ZYPPER_EXIT_INF_REBOOT_NEEDED,
    KUBE_UPDATES_KEY,
    KUBE_SECURITY_UPDATES_KEY,
    KUBE_DISRUPTIVE_UPDATES_KEY
)

@patch('subprocess.Popen')
def test_run_command(mock_subprocess):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'stdout', b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    result = run_command(['/bin/dummycmd', 'arg1'])
    assert result.output == "stdout"
    assert result.returncode == 0
    assert result.error == 'stderr'

    mock_process.returncode = 1
    result = run_command(['/bin/dummycmd', 'arg1'])
    assert result.output == "stdout"
    assert result.returncode == 1

    mock_process.communicate.return_value = (b'', b'stderr')
    result = run_command(['/bin/dummycmd', 'arg1'])
    assert result.output == ""
    assert result.returncode == 1

@patch('argparse.ArgumentParser.parse_args')
@patch('subprocess.Popen')
def test_main_wrong_version(mock_subprocess, mock_args):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'zypper 1.13.0', b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    exception = False
    try:
        main()
    except Exception as e:
        exception = True
        assert 'higher is required' in str(e)
    assert exception


@patch('argparse.ArgumentParser.parse_args')
@patch('subprocess.Popen')
def test_main_bad_format_version(mock_subprocess, mock_args):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'zypper', b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    exception = False
    try:
        main()
    except Exception as e:
        exception = True
        assert 'Could not parse' in str(e)
    assert exception


@patch('argparse.ArgumentParser.parse_args')
@patch('subprocess.Popen')
def test_main_no_root(mock_subprocess, mock_args):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'zypper 1.14.15', b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    exception = False
    try:
        main()
    except Exception as e:
        exception = True
        assert 'root privileges' in str(e)
    assert exception

@patch('skuba_update.skuba_update.annotate_updates_available')
@patch('argparse.ArgumentParser.parse_args')
@patch('os.environ.get', new={}.get, spec_set=True)
@patch('os.geteuid')
@patch('subprocess.Popen')
def test_main(mock_subprocess, mock_geteuid, mock_args, mock_annotate):
    return_values = [
        (b'some_service1\nsome_service2', b''),
        (b'zypper 1.14.15', b'')
    ]

    def mock_communicate():
        if len(return_values) > 1:
            return return_values.pop()
        else:
            return return_values[0]

    args = Mock()
    args.annotate_only = False
    mock_args.return_value = args
    mock_geteuid.return_value = 0
    mock_process = Mock()
    mock_process.communicate.side_effect = mock_communicate
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    main()
    assert mock_subprocess.call_args_list == [
        call(['zypper', '--version'], stdout=-1, stderr=-1, env=ANY),
        call(['zypper', 'ref', '-s'], stdout=None, stderr=None, env=ANY),
        call([
            'zypper', '--non-interactive',
            '--non-interactive-include-reboot-patches', 'patch'
        ], stdout=None, stderr=None, env=ANY),
        call(
            ['zypper', 'ps', '-sss'],
            stdout=-1, stderr=-1, env=ANY
        ),
        call(
            ['systemctl', 'restart', 'some_service1'],
            stdout=None, stderr=None, env=ANY
        ),
        call(
            ['systemctl', 'restart', 'some_service2'],
            stdout=None, stderr=None, env=ANY
        ),
        call(['zypper', 'needs-rebooting'], stdout=None, stderr=None, env=ANY),
    ]

@patch('subprocess.Popen')
@patch('skuba_update.skuba_update.run_zypper_command')
def test_restart_services_error(mock_zypp_cmd, mock_subprocess, capsys):
    command_type = namedtuple(
        'command', ['output', 'error', 'returncode']
    )
    mock_process = Mock()
    mock_process.communicate.return_value = (b'', b'restart error msg')
    mock_process.returncode = 1
    mock_subprocess.return_value = mock_process
    mock_zypp_cmd.return_value = command_type(
        output="service1\nservice2",
        error='',
        returncode=0
    )
    restart_services()
    out, err = capsys.readouterr()
    assert 'returned non zero exit code' in out

@patch('skuba_update.skuba_update.annotate_updates_available')
@patch('argparse.ArgumentParser.parse_args')
@patch('os.environ.get', new={}.get, spec_set=True)
@patch('os.geteuid')
@patch('subprocess.Popen')
def test_main_annotate_only(
    mock_subprocess, mock_geteuid, mock_args, mock_annotate
):
    args = Mock()
    args.annotate_only = True
    mock_args.return_value = args
    mock_geteuid.return_value = 0
    mock_process = Mock()
    mock_process.communicate.return_value = (b'zypper 1.14.15', b'stderr')
    mock_process.returncode = ZYPPER_EXIT_INF_UPDATE_NEEDED
    mock_subprocess.return_value = mock_process
    main()
    assert mock_subprocess.call_args_list == [
        call(['zypper', '--version'], stdout=-1, stderr=-1, env=ANY),
        call(['zypper', 'ref', '-s'], stdout=None, stderr=None, env=ANY),
    ]


@patch('skuba_update.skuba_update.annotate_updates_available')
@patch('argparse.ArgumentParser.parse_args')
@patch('os.environ.get', new={}.get, spec_set=True)
@patch('os.geteuid')
@patch('subprocess.Popen')
def test_main_zypper_returns_100(
    mock_subprocess, mock_geteuid, mock_args, mock_annotate
):
    return_values = [(b'', b''), (b'zypper 1.14.15', b'')]

    def mock_communicate():
        if len(return_values) > 1:
            return return_values.pop()
        else:
            return return_values[0]

    args = Mock()
    args.annotate_only = False
    mock_args.return_value = args
    mock_geteuid.return_value = 0
    mock_process = Mock()
    mock_process.communicate.side_effect = mock_communicate
    mock_process.returncode = ZYPPER_EXIT_INF_RESTART_NEEDED
    mock_subprocess.return_value = mock_process
    main()
    assert mock_subprocess.call_args_list == [
        call(['zypper', '--version'], stdout=-1, stderr=-1, env=ANY),
        call(['zypper', 'ref', '-s'], stdout=None, stderr=None, env=ANY),
        call([
            'zypper', '--non-interactive',
            '--non-interactive-include-reboot-patches', 'patch'
        ], stdout=None, stderr=None, env=ANY),
        call([
            'zypper', '--non-interactive',
            '--non-interactive-include-reboot-patches', 'patch'
        ], stdout=None, stderr=None, env=ANY),
        call(
            ['zypper', 'ps', '-sss'],
            stdout=-1, stderr=-1, env=ANY
        ),
        call([
            'zypper', 'needs-rebooting'
        ], stdout=None, stderr=None, env=ANY),
    ]

@patch('pathlib.Path.is_file')
@patch('subprocess.Popen')
def test_update_zypper_is_fine_but_created_reboot_required(
    mock_subprocess, mock_is_file
):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'stdout', b'stderr')
    mock_process.returncode = ZYPPER_EXIT_INF_REBOOT_NEEDED
    mock_subprocess.return_value = mock_process
    mock_is_file.return_value = True
    exception = False
    try:
        reboot_sentinel_file(update())
    except PermissionError as e:
        exception = True
        msg = 'Permission denied: \'{0}\''.format(REBOOT_REQUIRED_PATH)
        assert msg in str(e)
    assert exception

@patch('subprocess.Popen')
def test_run_zypper_command(mock_subprocess):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'stdout', b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    assert run_zypper_command(['zypper', 'patch']) == 0

    mock_process.returncode = ZYPPER_EXIT_INF_RESTART_NEEDED
    mock_subprocess.return_value = mock_process
    assert run_zypper_command(
        ['zypper', 'patch']) == ZYPPER_EXIT_INF_RESTART_NEEDED


@patch('subprocess.Popen')
def test_run_zypper_command_failure(mock_subprocess):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'', b'')
    mock_process.returncode = 1
    mock_subprocess.return_value = mock_process
    exception = False
    try:
        run_zypper_command(['zypper', 'patch'])
    except Exception as e:
        exception = True
        assert '"zypper patch" failed' in str(e)
    assert exception

@patch('builtins.open',
       mock_open(read_data='9ea12911449eb7b5f8f228294bf9209a'))
@patch('subprocess.Popen')
@patch('json.loads')
def test_node_name_from_machine_id(mock_loads, mock_subprocess):
    json_node_object = {
        'items': [
            {
                'metadata': {
                    'name': 'my-node-1'
                },
                'status': {
                    'nodeInfo': {
                        'machineID': '49f8e2911a1449b7b5ef2bf92282909a'
                    }
                }
            },
            {
                'metadata': {
                    'name': 'my-node-2'
                },
                'status': {
                    'nodeInfo': {
                        'machineID': '9ea12911449eb7b5f8f228294bf9209a'
                    }
                }
            }
        ]
    }
    breaking_json_node_object = {'Items': []}
    mock_process = Mock()
    mock_process.communicate.return_value = (json.dumps(json_node_object)
                                             .encode(), b'')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    mock_loads.return_value = json_node_object
    assert node_name_from_machine_id() == 'my-node-2'

    json_node_object2 = json_node_object
    json_node_object2['items'][1]['status']['nodeInfo']['machineID'] = \
        'another-id-that-doesnt-reflect-a-node'
    mock_loads.return_value = json_node_object2
    exception = False
    try:
        node_name_from_machine_id()
    except Exception as e:
        exception = True
        assert 'Node name could not be determined' in str(e)
    assert exception

    mock_loads.return_value = breaking_json_node_object
    exception = False
    try:
        node_name_from_machine_id()
    except Exception as e:
        exception = True
        assert 'Unexpected format' in str(e)
    assert exception

    exception = False
    mock_process.returncode = 1
    try:
        node_name_from_machine_id()
    except Exception as e:
        exception = True
        assert 'Kubectl failed getting nodes list' in str(e)
    assert exception

@patch('subprocess.Popen')
def test_annotate(mock_subprocess, capsys):
    mock_process = Mock()
    mock_process.communicate.return_value = (b'node/my-node-1 annotated',
                                             b'stderr')
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    assert annotate(
        'node', 'my-node-1',
        KUBE_DISRUPTIVE_UPDATES_KEY, 'yes'
    ) == 'node/my-node-1 annotated'

    mock_process.returncode = 1
    annotate(
        'node', 'my-node-1',
        KUBE_DISRUPTIVE_UPDATES_KEY, 'yes'
    )
    out, err = capsys.readouterr()
    assert 'Warning! kubectl returned non zero exit code' in out

@patch('skuba_update.skuba_update.node_name_from_machine_id')
@patch('skuba_update.skuba_update.annotate')
@patch('subprocess.Popen')
def test_annotate_updates_empty(mock_subprocess, mock_annotate, mock_name):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<stream><update-status><update-list>'
        b'</update-list></update-status></stream>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        )
    ]
    assert mock_annotate.call_args_list == [
        call('node', 'mynode', KUBE_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_SECURITY_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_DISRUPTIVE_UPDATES_KEY, 'no')
    ]

@patch('skuba_update.skuba_update.node_name_from_machine_id')
@patch('skuba_update.skuba_update.annotate')
@patch('subprocess.Popen')
def test_annotate_updates(mock_subprocess, mock_annotate, mock_name):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<stream><update-status><update-list><update interactive="message">'
        b'</update></update-list></update-status></stream>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        )
    ]
    assert mock_annotate.call_args_list == [
        call('node', 'mynode', KUBE_UPDATES_KEY, 'yes'),
        call('node', 'mynode', KUBE_SECURITY_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_DISRUPTIVE_UPDATES_KEY, 'yes')
    ]


@patch("skuba_update.skuba_update.node_name_from_machine_id")
@patch("builtins.open", read_data="aa59dc0c5fe84247a77c26780dd0b3fd")
@patch('subprocess.Popen')
def test_annotate_updates_available(mock_subprocess, mock_open, mock_name):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<stream><update-status><update-list><update interactive="message">'
        b'</update></update-list></update-status></stream>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        ),
        call(
            ["kubectl", "annotate", "--overwrite", "node",
             "mynode", "caasp.suse.com/has-updates=yes"],
            stdout=-1, stderr=-1, env=ANY
        ),
        call(
            ["kubectl", "annotate", "--overwrite", "node",
             "mynode", "caasp.suse.com/has-security-updates=no"],
            stdout=-1, stderr=-1, env=ANY
        ),
        call(
            ["kubectl", "annotate", "--overwrite", "node",
             "mynode", "caasp.suse.com/has-disruptive-updates=yes"],
            stdout=-1, stderr=-1, env=ANY
        )
    ]

@patch('skuba_update.skuba_update.node_name_from_machine_id')
@patch('skuba_update.skuba_update.annotate')
@patch('subprocess.Popen')
def test_annotate_updates_bad_xml(mock_subprocess, mock_annotate, mock_name):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<update-status><update-list><update interactive="message">'
        b'</update></update-list></update-status>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        )
    ]
    assert mock_annotate.call_args_list == [
        call('node', 'mynode', KUBE_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_SECURITY_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_DISRUPTIVE_UPDATES_KEY, 'no')
    ]


@patch('skuba_update.skuba_update.node_name_from_machine_id')
@patch('skuba_update.skuba_update.annotate')
@patch('subprocess.Popen')
def test_annotate_updates_security(
    mock_subprocess, mock_annotate, mock_name
):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<stream><update-status><update-list>'
        b'<update interactive="false" category="security">'
        b'</update></update-list></update-status></stream>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        )
    ]
    assert mock_annotate.call_args_list == [
        call('node', 'mynode', KUBE_UPDATES_KEY, 'yes'),
        call('node', 'mynode', KUBE_SECURITY_UPDATES_KEY, 'yes'),
        call('node', 'mynode', KUBE_DISRUPTIVE_UPDATES_KEY, 'no')
    ]


@patch('skuba_update.skuba_update.node_name_from_machine_id')
@patch('skuba_update.skuba_update.annotate')
@patch('subprocess.Popen')
def test_annotate_updates_available_is_reboot(
    mock_subprocess, mock_annotate, mock_name
):
    mock_name.return_value = 'mynode'
    mock_process = Mock()
    mock_process.communicate.return_value = (
        b'<stream><update-status><update-list><update interactive="reboot">'
        b'</update></update-list></update-status></stream>', b''
    )
    mock_process.returncode = 0
    mock_subprocess.return_value = mock_process
    annotate_updates_available()
    assert mock_subprocess.call_args_list == [
        call(
            ['zypper', '--non-interactive', '--xmlout', 'list-patches'],
            stdout=-1, stderr=-1, env=ANY
        )
    ]
    assert mock_annotate.call_args_list == [
        call('node', 'mynode', KUBE_UPDATES_KEY, 'yes'),
        call('node', 'mynode', KUBE_SECURITY_UPDATES_KEY, 'no'),
        call('node', 'mynode', KUBE_DISRUPTIVE_UPDATES_KEY, 'yes')
    ]
@patch('subprocess.Popen')
def test_is_reboot_needed_truthy(mock_subprocess):
mock_process = Mock()
mock_process.communicate.return_value = (b'', b'')
mock_process.returncode = ZYPPER_EXIT_INF_REBOOT_NEEDED
mock_subprocess.return_value = mock_process
assert is_reboot_needed()
@patch('subprocess.Popen')
def test_is_reboot_needed_falsey(mock_subprocess):
mock_process = Mock()
mock_process.communicate.return_value = (b'', b'')
mock_process.returncode = ZYPPER_EXIT_INF_RESTART_NEEDED
mock_subprocess.return_value = mock_process
assert not is_reboot_needed()
def test_get_update_list_bad_xml():
assert get_update_list('<xml') is None
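The tests above feed mocked `zypper --non-interactive --xmlout list-patches` output to the code under test. A minimal standalone sketch (not the `skuba_update` implementation itself) of how that XML shape can be inspected with the standard library:

```python
import xml.etree.ElementTree as ET

# Same shape of XML that the tests above return from the mocked Popen
xml = (
    '<stream><update-status><update-list>'
    '<update interactive="reboot" category="security"/>'
    '</update-list></update-status></stream>'
)

root = ET.fromstring(xml)
# .//update finds every <update> element regardless of nesting depth
updates = root.findall('.//update')
assert len(updates) == 1
assert updates[0].get('interactive') == 'reboot'
assert updates[0].get('category') == 'security'
```

The `interactive` and `category` attributes are exactly the ones the annotation tests key off (`reboot`/`message` for disruptive updates, `security` for security updates).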
| 33.144781 | 79 | 0.654866 | 2,351 | 19,688 | 5.211825 | 0.103786 | 0.080797 | 0.042847 | 0.044887 | 0.801518 | 0.769934 | 0.754509 | 0.741941 | 0.707745 | 0.69836 | 0 | 0.013122 | 0.218102 | 19,688 | 593 | 80 | 33.200675 | 0.782837 | 0.030221 | 0 | 0.663386 | 0 | 0 | 0.214383 | 0.099329 | 0 | 0 | 0 | 0 | 0.090551 | 1 | 0.047244 | false | 0 | 0.007874 | 0 | 0.062992 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ee80978da9804eecbe2c661ce46cfa5a5c3ede6 | 333 | py | Python | 02. Conditional/034.py | MaksonViini/Aprendendo-Python | 8d8422f793e4ea9f81fa4ed0e4101bcfc2ba3c99 | [
"MIT"
] | 1 | 2020-09-20T23:18:47.000Z | 2020-09-20T23:18:47.000Z | 02. Conditional/034.py | MaksonViini/Aprendendo-Python | 8d8422f793e4ea9f81fa4ed0e4101bcfc2ba3c99 | [
"MIT"
] | null | null | null | 02. Conditional/034.py | MaksonViini/Aprendendo-Python | 8d8422f793e4ea9f81fa4ed0e4101bcfc2ba3c99 | [
"MIT"
] | 1 | 2020-09-20T23:18:49.000Z | 2020-09-20T23:18:49.000Z | # Multiple raise tiers
salario = float(input("Enter your salary: "))
if salario > 1250:
    print(f"Your salary is R$ {salario} and you got a 10% raise, so your new salary is R$ {salario * 1.1:.2f}")
else:
    print(f"Your salary is R$ {salario} and you got a 15% raise, so your new salary is R$ {salario * 1.15:.2f}") | 55.5 | 120 | 0.672673 | 62 | 333 | 3.612903 | 0.387097 | 0.214286 | 0.178571 | 0.196429 | 0.669643 | 0.669643 | 0.669643 | 0.669643 | 0.669643 | 0.669643 | 0 | 0.05597 | 0.195195 | 333 | 6 | 120 | 55.5 | 0.779851 | 0.054054 | 0 | 0 | 0 | 0.4 | 0.735669 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
5eef5e446e1922c169ce5770f96bdb08b8933d69 | 17,847 | py | Python | openmdao/core/tests/test_getset_vars.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 451 | 2015-07-20T11:52:35.000Z | 2022-03-28T08:04:56.000Z | openmdao/core/tests/test_getset_vars.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 1,096 | 2015-07-21T03:08:26.000Z | 2022-03-31T11:59:17.000Z | openmdao/core/tests/test_getset_vars.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 301 | 2015-07-16T20:02:11.000Z | 2022-03-28T08:04:39.000Z | """Test getting/setting variables and subjacs with promoted/relative/absolute names."""
import unittest
import numpy as np
from openmdao.api import Problem, Group, ExecComp, IndepVarComp, DirectSolver, ParallelGroup
from openmdao.utils.mpi import MPI
try:
from openmdao.vectors.petsc_vector import PETScVector
except ImportError:
PETScVector = None
class TestGetSetVariables(unittest.TestCase):
def test_no_promotion(self):
"""
Illustrative examples showing how to access variables and subjacs.
"""
c = ExecComp('y=2*x')
g = Group()
g.add_subsystem('c', c)
model = Group()
model.add_subsystem('g', g)
p = Problem(model)
p.setup()
# -------------------------------------------------------------------
# inputs
p['g.c.x'] = 5.0
self.assertEqual(p['g.c.x'], 5.0)
# outputs
p['g.c.y'] = 5.0
self.assertEqual(p['g.c.y'], 5.0)
# Conclude setup but don't run model.
p.final_setup()
inputs, outputs, residuals = g.get_nonlinear_vectors()
# inputs
inputs['c.x'] = 5.0
self.assertEqual(inputs['c.x'], 5.0)
# outputs
outputs['c.y'] = 5.0
self.assertEqual(outputs['c.y'], 5.0)
# Removed part of test where we set values into the jacobian willy-nilly.
# You can only set declared values now.
def test_with_promotion(self):
"""
Illustrative examples showing how to access variables and subjacs.
"""
c1 = IndepVarComp('x')
c2 = ExecComp('y=2*x')
c3 = ExecComp('z=3*x')
g = Group()
g.add_subsystem('c1', c1, promotes=['*'])
g.add_subsystem('c2', c2, promotes=['*'])
g.add_subsystem('c3', c3, promotes=['*'])
model = Group()
model.add_subsystem('g', g, promotes=['*'])
p = Problem(model)
p.setup()
# -------------------------------------------------------------------
# inputs
p['g.c2.x'] = 5.0
self.assertEqual(p['g.c2.x'], 5.0)
# outputs
p['g.c2.y'] = 5.0
self.assertEqual(p['g.c2.y'], 5.0)
p['y'] = 5.0
self.assertEqual(p['y'], 5.0)
# Conclude setup but don't run model.
p.final_setup()
inputs, outputs, residuals = g.get_nonlinear_vectors()
# inputs
inputs['c2.x'] = 5.0
self.assertEqual(inputs['c2.x'], 5.0)
# outputs
outputs['c2.y'] = 5.0
self.assertEqual(outputs['c2.y'], 5.0)
outputs['y'] = 5.0
self.assertEqual(outputs['y'], 5.0)
# Removed part of test where we set values into the jacobian willy-nilly. You can only set
# declared values now.
def test_no_promotion_errors(self):
"""
Tests for error-handling for invalid variable names and keys.
"""
g = Group(assembled_jac_type='dense')
g.linear_solver = DirectSolver(assemble_jac=True)
g.add_subsystem('c', ExecComp('y=2*x'))
p = Problem()
model = p.model
model.add_subsystem('g', g)
p.setup()
# -------------------------------------------------------------------
msg = '\'<model> <class Group>: Variable "{}" not found.\''
# inputs
with self.assertRaises(KeyError) as ctx:
p['x'] = 5.0
self.assertEqual(str(ctx.exception), msg.format('x'))
p._initial_condition_cache = {}
with self.assertRaises(KeyError) as ctx:
p['x']
self.assertEqual(str(ctx.exception), msg.format('x'))
# outputs
with self.assertRaises(KeyError) as ctx:
p['y'] = 5.0
self.assertEqual(str(ctx.exception), msg.format('y'))
p._initial_condition_cache = {}
with self.assertRaises(KeyError) as ctx:
p['y']
self.assertEqual(str(ctx.exception), msg.format('y'))
p.final_setup()
msg = "'g' <class Group>: Variable name '{}' not found."
inputs, outputs, residuals = g.get_nonlinear_vectors()
# inputs
for vname in ['x', 'g.c.x']:
with self.assertRaises(KeyError) as cm:
inputs[vname] = 5.0
self.assertEqual(cm.exception.args[0], f"'g' <class Group>: Variable name '{vname}' not found.")
with self.assertRaises(KeyError) as cm:
inputs[vname]
self.assertEqual(cm.exception.args[0], f"'g' <class Group>: Variable name '{vname}' not found.")
# outputs
for vname in ['y', 'g.c.y']:
with self.assertRaises(KeyError) as cm:
outputs[vname] = 5.0
self.assertEqual(cm.exception.args[0], f"'g' <class Group>: Variable name '{vname}' not found.")
with self.assertRaises(KeyError) as cm:
outputs[vname]
self.assertEqual(cm.exception.args[0], f"'g' <class Group>: Variable name '{vname}' not found.")
msg = r'Variable name pair \("{}", "{}"\) not found.'
jac = g.linear_solver._assembled_jac
# d(output)/d(input)
with self.assertRaisesRegex(KeyError, msg.format('y', 'x')):
jac['y', 'x'] = 5.0
with self.assertRaisesRegex(KeyError, msg.format('y', 'x')):
jac['y', 'x']
# allow absolute keys now
# with self.assertRaisesRegex(KeyError, msg.format('g.c.y', 'g.c.x')):
# jac['g.c.y', 'g.c.x'] = 5.0
# with self.assertRaisesRegex(KeyError, msg.format('g.c.y', 'g.c.x')):
# deriv = jac['g.c.y', 'g.c.x']
# d(output)/d(output)
with self.assertRaisesRegex(KeyError, msg.format('y', 'y')):
jac['y', 'y'] = 5.0
with self.assertRaisesRegex(KeyError, msg.format('y', 'y')):
jac['y', 'y']
# allow absolute keys now
# with self.assertRaisesRegex(KeyError, msg.format('g.c.y', 'g.c.y')):
# jac['g.c.y', 'g.c.y'] = 5.0
# with self.assertRaisesRegex(KeyError, msg.format('g.c.y', 'g.c.y')):
# deriv = jac['g.c.y', 'g.c.y']
def test_with_promotion_errors(self):
"""
Tests for error-handling for invalid variable names and keys.
"""
c1 = IndepVarComp('x')
c2 = ExecComp('y=2*x')
c3 = ExecComp('z=3*x')
g = Group(assembled_jac_type='dense')
g.add_subsystem('c1', c1, promotes=['*'])
g.add_subsystem('c2', c2, promotes=['*'])
g.add_subsystem('c3', c3, promotes=['*'])
g.linear_solver = DirectSolver(assemble_jac=True)
model = Group()
model.add_subsystem('g', g, promotes=['*'])
p = Problem(model)
p.setup()
# Conclude setup but don't run model.
p.final_setup()
# -------------------------------------------------------------------
msg1 = "'g' <class Group>: Variable name '{}' not found."
msg2 = "The promoted name x is invalid because it refers to multiple inputs: " \
"[g.c2.x ,g.c3.x]. Access the value from the connected output variable x instead."
inputs, outputs, residuals = g.get_nonlinear_vectors()
# inputs
with self.assertRaises(Exception) as context:
inputs['x'] = 5.0
self.assertEqual(str(context.exception), msg2)
with self.assertRaises(Exception) as context:
self.assertEqual(inputs['x'], 5.0)
self.assertEqual(str(context.exception), msg2)
with self.assertRaises(KeyError) as cm:
inputs['g.c2.x'] = 5.0
self.assertEqual(cm.exception.args[0], msg1.format('g.c2.x'))
with self.assertRaises(KeyError) as cm:
inputs['g.c2.x']
self.assertEqual(cm.exception.args[0], msg1.format('g.c2.x'))
# outputs
with self.assertRaises(KeyError) as cm:
outputs['g.c2.y'] = 5.0
self.assertEqual(cm.exception.args[0], msg1.format('g.c2.y'))
with self.assertRaises(KeyError) as cm:
outputs['g.c2.y']
self.assertEqual(cm.exception.args[0], msg1.format('g.c2.y'))
msg1 = r'Variable name pair \("{}", "{}"\) not found.'
jac = g.linear_solver._assembled_jac
# d(outputs)/d(inputs)
with self.assertRaises(Exception) as context:
jac['y', 'x'] = 5.0
self.assertEqual(str(context.exception), msg2)
with self.assertRaises(Exception) as context:
self.assertEqual(jac['y', 'x'], 5.0)
self.assertEqual(str(context.exception), msg2)
def test_serial_multi_src_inds(self):
p = Problem()
p.model.add_subsystem('indep', IndepVarComp('x', val=np.ones(10)))
p.model.add_subsystem('C1', ExecComp('y=x*2.', x=np.zeros(7), y=np.zeros(7)))
p.model.add_subsystem('C2', ExecComp('y=x*3.', x=np.zeros(3), y=np.zeros(3)))
p.model.connect('indep.x', 'C1.x', src_indices=list(range(7)))
p.model.connect('indep.x', 'C2.x', src_indices=list(range(7, 10)))
p.setup()
p['C1.x'] = (np.arange(7) + 1.) * 2.
p['C2.x'] = (np.arange(7,10) + 1.) * 3.
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['indep.x'][7:10], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['C1.x'], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['C2.x'], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['C1.y'], (np.arange(7) + 1.) * 4.)
np.testing.assert_allclose(p['C2.y'], (np.arange(7,10) + 1.) * 9.)
def test_serial_multi_src_inds_promoted(self):
p = Problem()
p.model.add_subsystem('indep', IndepVarComp('x', val=np.ones(10)), promotes=['x'])
p.model.add_subsystem('C1', ExecComp('y=x*2.',
x={'val': np.zeros(7)},
y={'val': np.zeros(7)}))
p.model.add_subsystem('C2', ExecComp('y=x*3.',
x={'val': np.zeros(3)},
y={'val': np.zeros(3)}))
p.model.promotes('C1', inputs=['x'], src_indices=list(range(7)))
p.model.promotes('C2', inputs=['x'], src_indices=list(range(7, 10)))
p.setup()
p['C1.x'] = (np.arange(7) + 1.) * 2.
p['C2.x'] = (np.arange(7,10) + 1.) * 3.
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['indep.x'][7:10], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['C1.x'], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['C2.x'], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['C1.y'], (np.arange(7) + 1.) * 4.)
np.testing.assert_allclose(p['C2.y'], (np.arange(7,10) + 1.) * 9.)
def test_serial_multi_src_inds_units_promoted(self):
p = Problem()
indep = p.model.add_subsystem('indep', IndepVarComp(), promotes=['x'])
indep.add_output('x', units='inch', val=np.ones(10))
p.model.add_subsystem('C1', ExecComp('y=x*2.',
x={'val': np.zeros(7),
'units': 'ft'},
y={'val': np.zeros(7), 'units': 'ft'}))
p.model.add_subsystem('C2', ExecComp('y=x*3.',
x={'val': np.zeros(3),
'units': 'inch'},
y={'val': np.zeros(3), 'units': 'inch'}))
p.model.promotes('C1', inputs=['x'], src_indices=list(range(7)))
p.model.promotes('C2', inputs=['x'], src_indices=list(range(7, 10)))
p.setup()
p['C1.x'] = np.ones(7) * 2.
p['C2.x'] = np.ones(3) * 3.
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], np.ones(7) * 24.)
np.testing.assert_allclose(p['indep.x'][7:10], np.ones(3) * 3.)
np.testing.assert_allclose(p['C1.x'], np.ones(7) * 2.)
np.testing.assert_allclose(p['C1.y'], np.ones(7) * 4.)
np.testing.assert_allclose(p['C2.x'], np.ones(3) * 3.)
np.testing.assert_allclose(p['C2.y'], np.ones(3) * 9.)
def test_serial_multi_src_inds_units_promoted_no_src(self):
p = Problem()
p.model.add_subsystem('C1', ExecComp('y=x*2.',
x={'val': np.zeros(7),
'units': 'ft'},
y={'val': np.zeros(7), 'units': 'ft'}))
p.model.add_subsystem('C2', ExecComp('y=x*3.',
x={'val': np.zeros(3),
'units': 'inch'},
y={'val': np.zeros(3), 'units': 'inch'}))
p.model.add_subsystem('C3', ExecComp('y=x*4.',
x={'val': np.zeros(10), 'units': 'mm'},
y={'val': np.zeros(10), 'units': 'mm'}),
promotes=['x'])
p.model.promotes('C1', inputs=['x'], src_indices=list(range(7)))
p.model.promotes('C2', inputs=['x'], src_indices=list(range(7, 10)))
with self.assertRaises(RuntimeError) as cm:
p.setup()
self.assertEqual(str(cm.exception), "<model> <class Group>: The following inputs, ['C1.x', 'C2.x', 'C3.x'], promoted to 'x', are connected but their metadata entries ['units'] differ. Call <group>.set_input_defaults('x', units=?), where <group> is the model to remove the ambiguity.")
def test_serial_multi_src_inds_units_setval_promoted(self):
p = Problem()
indep = p.model.add_subsystem('indep', IndepVarComp(), promotes=['x'])
indep.add_output('x', units='inch', val=np.ones(10))
p.model.add_subsystem('C1', ExecComp('y=x*2.',
x={'val': np.zeros(7),
'units': 'ft'},
y={'val': np.zeros(7), 'units': 'ft'}))
p.model.add_subsystem('C2', ExecComp('y=x*3.',
x={'val': np.zeros(3),
'units': 'inch'},
y={'val': np.zeros(3), 'units': 'inch'}))
p.model.promotes('C1', inputs=['x'], src_indices=list(range(7)))
p.model.promotes('C2', inputs=['x'], src_indices=list(range(7, 10)))
p.setup()
p.set_val('C1.x', np.ones(7) * 24., units='inch')
p.set_val('C2.x', np.ones(3) * 3., units='inch')
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], np.ones(7) * 24.)
np.testing.assert_allclose(p['indep.x'][7:10], np.ones(3) * 3.)
np.testing.assert_allclose(p['C1.x'], np.ones(7) * 2.)
np.testing.assert_allclose(p['C1.y'], np.ones(7) * 4.)
np.testing.assert_allclose(p['C2.x'], np.ones(3) * 3.)
np.testing.assert_allclose(p['C2.y'], np.ones(3) * 9.)
@unittest.skipUnless(MPI and PETScVector, "MPI and PETSc are required.")
class ParTestCase(unittest.TestCase):
N_PROCS = 2
def test_par_multi_src_inds(self):
p = Problem()
p.model.add_subsystem('indep', IndepVarComp('x', val=np.ones(10)))
par = p.model.add_subsystem('par', ParallelGroup())
par.add_subsystem('C1', ExecComp('y=x*2.', x=np.zeros(7), y=np.zeros(7)))
par.add_subsystem('C2', ExecComp('y=x*3.', x=np.zeros(3), y=np.zeros(3)))
p.model.connect('indep.x', 'par.C1.x', src_indices=list(range(7)))
p.model.connect('indep.x', 'par.C2.x', src_indices=list(range(7, 10)))
p.setup()
p['indep.x'] = np.concatenate([(np.arange(7) + 1.) * 2., (np.arange(7, 10) + 1.) * 3.])
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['indep.x'][7:10], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p.get_val('par.C1.x', get_remote=True), (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p.get_val('par.C2.x', get_remote=True), (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p.get_val('par.C1.y', get_remote=True), (np.arange(7) + 1.) * 4.)
np.testing.assert_allclose(p.get_val('par.C2.y', get_remote=True), (np.arange(7,10) + 1.) * 9.)
@unittest.expectedFailure
def test_par_multi_src_inds_fail(self):
p = Problem()
p.model.add_subsystem('indep', IndepVarComp('x', val=np.ones(10)))
par = p.model.add_subsystem('par', ParallelGroup())
par.add_subsystem('C1', ExecComp('y=x*2.', x=np.zeros(7), y=np.zeros(7)))
par.add_subsystem('C2', ExecComp('y=x*3.', x=np.zeros(3), y=np.zeros(3)))
p.model.connect('indep.x', 'par.C1.x', src_indices=list(range(7)))
p.model.connect('indep.x', 'par.C2.x', src_indices=list(range(7, 10)))
p.setup()
p['par.C1.x'] = (np.arange(7) + 1.) * 2.
p['par.C2.x'] = (np.arange(7,10) + 1.) * 3.
p.run_model()
np.testing.assert_allclose(p['indep.x'][:7], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['indep.x'][7:10], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['par.C1.x'], (np.arange(7) + 1.) * 2.)
np.testing.assert_allclose(p['par.C2.x'], (np.arange(7,10) + 1.) * 3.)
np.testing.assert_allclose(p['par.C1.y'], (np.arange(7) + 1.) * 4.)
np.testing.assert_allclose(p['par.C2.y'], (np.arange(7,10) + 1.) * 9.)
if __name__ == '__main__':
unittest.main()
| 39.837054 | 292 | 0.518462 | 2,411 | 17,847 | 3.756533 | 0.082538 | 0.035773 | 0.059622 | 0.091421 | 0.869935 | 0.859777 | 0.824114 | 0.764271 | 0.72121 | 0.697803 | 0 | 0.036724 | 0.282905 | 17,847 | 447 | 293 | 39.926175 | 0.670964 | 0.08741 | 0 | 0.641638 | 0 | 0.006826 | 0.110265 | 0.001919 | 0 | 0 | 0 | 0 | 0.293515 | 1 | 0.037543 | false | 0 | 0.020478 | 0 | 0.068259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ef0bee1a75047dd1279f417d0bb5d6579a8161c | 158 | py | Python | example_python_package_shim/core.py | Shimwell/example_python_package_shim | ed04d8c4a90f74dd4ddd4fc2c205d8d3858af400 | [
"MIT"
] | null | null | null | example_python_package_shim/core.py | Shimwell/example_python_package_shim | ed04d8c4a90f74dd4ddd4fc2c205d8d3858af400 | [
"MIT"
] | null | null | null | example_python_package_shim/core.py | Shimwell/example_python_package_shim | ed04d8c4a90f74dd4ddd4fc2c205d8d3858af400 | [
"MIT"
] | null | null | null |
def my_name(firstname):
print('my name is ', firstname)
return 'my name is ' + firstname
def multi(number1, number2):
return number1 * number2
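A quick usage sketch of the two helpers above; the definitions are repeated here so the snippet runs standalone, and the expected strings follow directly from the code:

```python
def my_name(firstname):
    print('my name is ', firstname)
    return 'my name is ' + firstname

def multi(number1, number2):
    return number1 * number2

# multi simply multiplies its two arguments
assert multi(6, 7) == 42
# my_name returns the same greeting it prints
assert my_name('Ada') == 'my name is Ada'
```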
| 17.555556 | 36 | 0.670886 | 21 | 158 | 5 | 0.47619 | 0.171429 | 0.152381 | 0.32381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032787 | 0.227848 | 158 | 8 | 37 | 19.75 | 0.827869 | 0 | 0 | 0 | 0 | 0 | 0.140127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
5ef1529950411e6c7142487f9ac8847ab246b20d | 48 | py | Python | src/owmpy/current/__init__.py | ernieIzde8ski/open_weather_mappy | c50629065de85f6d2f4fcf46b741ff3320182a55 | [
"MIT"
] | null | null | null | src/owmpy/current/__init__.py | ernieIzde8ski/open_weather_mappy | c50629065de85f6d2f4fcf46b741ff3320182a55 | [
"MIT"
] | null | null | null | src/owmpy/current/__init__.py | ernieIzde8ski/open_weather_mappy | c50629065de85f6d2f4fcf46b741ff3320182a55 | [
"MIT"
] | null | null | null | from .response import *
from ._classes import *
| 16 | 23 | 0.75 | 6 | 48 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 2 | 24 | 24 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
489694da7ed7b43e47e2bc4cd10cc32e1276739c | 213 | py | Python | rooms/admin.py | studentisgss/booking | e0e28f42cf2a466688b4ea3787eb28dbc0980cac | [
"MIT"
] | 7 | 2015-12-11T19:18:39.000Z | 2020-10-30T12:50:19.000Z | rooms/admin.py | studentisgss/booking | e0e28f42cf2a466688b4ea3787eb28dbc0980cac | [
"MIT"
] | 119 | 2015-11-03T22:21:09.000Z | 2021-03-17T21:36:49.000Z | rooms/admin.py | studentisgss/booking | e0e28f42cf2a466688b4ea3787eb28dbc0980cac | [
"MIT"
] | null | null | null | from django.contrib import admin
from rooms.models import *
# Register your models here.
admin.site.register(Room)
admin.site.register(RoomPermission)
admin.site.register(RoomRule)
admin.site.register(Building)
| 21.3 | 35 | 0.812207 | 29 | 213 | 5.965517 | 0.517241 | 0.208092 | 0.393064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089202 | 213 | 9 | 36 | 23.666667 | 0.891753 | 0.122066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
48fefdfb0cde1a02a33391193ce6e5e7975f0978 | 367 | py | Python | nhtsa/nhtsa_uri.py | wingedw/autoresearch | 1c6bc2a51ec8cdf398f30fe9e583c31f8078761d | [
"Apache-2.0"
] | null | null | null | nhtsa/nhtsa_uri.py | wingedw/autoresearch | 1c6bc2a51ec8cdf398f30fe9e583c31f8078761d | [
"Apache-2.0"
] | null | null | null | nhtsa/nhtsa_uri.py | wingedw/autoresearch | 1c6bc2a51ec8cdf398f30fe9e583c31f8078761d | [
"Apache-2.0"
] | null | null | null | class Endpoint:
year = "https://webapi.nhtsa.gov/api/SafetyRatings?format=json"
make = "https://webapi.nhtsa.gov/api/SafetyRatings/modelyear/{0}?format=json"
model = "https://webapi.nhtsa.gov/api/SafetyRatings/modelyear/{0}/make/{1}?format=json"
report = "https://one.nhtsa.gov/webapi/api/Recalls/vehicle/modelyear/{0}/make/{1}/model/{2}?format=json"
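These class attributes are plain `str.format` templates. A short standalone sketch of the presumed usage (the `model` template is copied from the class above; the year and make values are made-up examples):

```python
class Endpoint:
    # copied from the Endpoint class above
    model = "https://webapi.nhtsa.gov/api/SafetyRatings/modelyear/{0}/make/{1}?format=json"

# the positional placeholders {0} and {1} are filled with str.format
url = Endpoint.model.format(2015, "Toyota")
assert url == ("https://webapi.nhtsa.gov/api/SafetyRatings/"
               "modelyear/2015/make/Toyota?format=json")
```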
| 61.166667 | 108 | 0.713896 | 52 | 367 | 5.038462 | 0.403846 | 0.122137 | 0.183206 | 0.217557 | 0.477099 | 0.477099 | 0.343511 | 0.343511 | 0 | 0 | 0 | 0.017804 | 0.081744 | 367 | 5 | 109 | 73.4 | 0.759644 | 0 | 0 | 0 | 0 | 0.4 | 0.79564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
5b208fb5e84345f6fd487fe8883f979d37e2db49 | 259 | py | Python | protocol/radar_msgs/msg/__init__.py | Tsinghua-OpenICV/carla_icv_bridge | 4d5f8c26b1847dbb16a81fe43f146bf4a9a8da5e | [
"MIT"
] | null | null | null | protocol/radar_msgs/msg/__init__.py | Tsinghua-OpenICV/carla_icv_bridge | 4d5f8c26b1847dbb16a81fe43f146bf4a9a8da5e | [
"MIT"
] | null | null | null | protocol/radar_msgs/msg/__init__.py | Tsinghua-OpenICV/carla_icv_bridge | 4d5f8c26b1847dbb16a81fe43f146bf4a9a8da5e | [
"MIT"
] | 1 | 2020-12-19T05:48:01.000Z | 2020-12-19T05:48:01.000Z | from ._RadarDetection import *
from ._RadarDetectionArray import *
from ._RadarDetectionStamped import *
from ._RadarErrorStatus import *
from ._RadarStatus import *
from ._RadarTrack import *
from ._RadarTrackArray import *
from ._RadarTrackStamped import *
| 28.777778 | 37 | 0.814672 | 24 | 259 | 8.458333 | 0.416667 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123552 | 259 | 8 | 38 | 32.375 | 0.894273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d28751748698d620fc80909f7f1022f74999e84b | 1,642 | py | Python | electricLineMotor.py | deadrobots/Create-19 | 61861a667938aa74f0c66e423336ec1efe61b448 | [
"MIT"
] | 2 | 2019-01-23T01:49:12.000Z | 2022-02-16T01:19:22.000Z | electricLineMotor.py | deadrobots/Create-19 | 61861a667938aa74f0c66e423336ec1efe61b448 | [
"MIT"
] | 1 | 2019-03-17T18:11:27.000Z | 2019-03-17T18:11:27.000Z | electricLineMotor.py | deadrobots/Create-19 | 61861a667938aa74f0c66e423336ec1efe61b448 | [
"MIT"
] | 1 | 2022-02-16T01:19:02.000Z | 2022-02-16T01:19:02.000Z | import utilities as u
from wallaby import *
import constants as c
def clear_ticks_button():
print ("Waiting for motor to be placed in zero position")
u.wait_for_button()
clear_motor_position_counter(c.electric_line_motor)
def clear_ticks(speed):
count = 0
motor_power(c.electric_line_motor, speed)
while count < 10:
x = get_motor_position_counter(c.electric_line_motor)
msleep(5)
        if get_motor_position_counter(c.electric_line_motor) == x:
            count = count + 1
        else:
            count = 0
motor(c.electric_line_motor, 0)
clear_motor_position_counter(c.electric_line_motor)
def electric_line_motor(speed, endPos, n = 10):
count = 0
if get_motor_position_counter(c.electric_line_motor) > endPos:
speed = -speed
motor_power(c.electric_line_motor, speed)
while get_motor_position_counter(c.electric_line_motor) > endPos:
x = get_motor_position_counter(c.electric_line_motor)
msleep(5)
if count == n:
break
elif x == get_motor_position_counter(c.electric_line_motor):
count = count + 1
else:
count = 0
else:
motor_power(c.electric_line_motor, speed)
while get_motor_position_counter(c.electric_line_motor) < endPos:
x = get_motor_position_counter(c.electric_line_motor)
msleep(5)
if count == n:
break
elif x == get_motor_position_counter(c.electric_line_motor):
count = count + 1
else:
count = 0
motor(c.electric_line_motor, 0)
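The polling loops above detect a stalled motor by counting consecutive identical encoder readings. A hardware-free sketch of the same idea, using a fake position reader (the names here are illustrative, not part of the wallaby API):

```python
def run_until_stalled(read_position, n=10):
    """Return the final position once n consecutive readings are identical."""
    count = 0
    last = read_position()
    while count < n:
        pos = read_position()
        if pos == last:
            count += 1
        else:
            # movement detected: restart the consecutive-stall count
            count = 0
        last = pos
    return last

# fake encoder: moves 0 -> 3, then stalls at 3
readings = iter([0, 1, 2, 3] + [3] * 20)
final = run_until_stalled(lambda: next(readings), n=5)
assert final == 3
```

Resetting `count` on movement (as `electric_line_motor` does) is what makes the threshold mean "n stalled reads in a row" rather than "n stalled reads ever".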
| 30.407407 | 74 | 0.635201 | 216 | 1,642 | 4.481481 | 0.199074 | 0.210744 | 0.298554 | 0.297521 | 0.771694 | 0.771694 | 0.722107 | 0.722107 | 0.682851 | 0.494835 | 0 | 0.013758 | 0.291717 | 1,642 | 53 | 75 | 30.981132 | 0.818573 | 0 | 0 | 0.674419 | 0 | 0 | 0.028676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.069767 | 0 | 0.139535 | 0.023256 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d28bbda84676a37e98aa6eafca56082b9a123c45 | 205 | py | Python | molecule/default/tests/test_default.py | nekeal/ansible-role-postgresql-db | 197019828f5fa3b724c841cc69f0a4cf67bd61df | [
"MIT"
] | null | null | null | molecule/default/tests/test_default.py | nekeal/ansible-role-postgresql-db | 197019828f5fa3b724c841cc69f0a4cf67bd61df | [
"MIT"
] | null | null | null | molecule/default/tests/test_default.py | nekeal/ansible-role-postgresql-db | 197019828f5fa3b724c841cc69f0a4cf67bd61df | [
"MIT"
] | null | null | null | """Role testing files using testinfra."""
def test_is_postgresql_running_and_enabled(host):
postgresql = host.service('postgresql')
assert postgresql.is_running
assert postgresql.is_enabled
| 22.777778 | 49 | 0.770732 | 25 | 205 | 6.04 | 0.64 | 0.211921 | 0.238411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141463 | 205 | 8 | 50 | 25.625 | 0.857955 | 0.170732 | 0 | 0 | 0 | 0 | 0.060976 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d292f94621060d1bc32b8754619bb7c76b54d1c7 | 31 | py | Python | exercios/Mundo 1/1-Primeiros passos com o Python/ex001.py | DarkEyeBr/Python | f45239551d19f49eac35185e4f72b067d5820f3a | [
"MIT"
] | null | null | null | exercios/Mundo 1/1-Primeiros passos com o Python/ex001.py | DarkEyeBr/Python | f45239551d19f49eac35185e4f72b067d5820f3a | [
"MIT"
] | null | null | null | exercios/Mundo 1/1-Primeiros passos com o Python/ex001.py | DarkEyeBr/Python | f45239551d19f49eac35185e4f72b067d5820f3a | [
"MIT"
] | null | null | null | print('\033[34mHello, world!')
| 15.5 | 30 | 0.677419 | 4 | 31 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 0.064516 | 31 | 1 | 31 | 31 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d2c7ba585785d51d20aa9f96dcab2f9031bb87d3 | 41 | py | Python | tests/roots/test-advanced/apidoc_dummy_package/_apidoc_private_dummy_submodule.py | lalten/apidoc | 4e3dc7aafcb14c0557ac308a27a2a751c0823d9f | [
"BSD-2-Clause"
] | null | null | null | tests/roots/test-advanced/apidoc_dummy_package/_apidoc_private_dummy_submodule.py | lalten/apidoc | 4e3dc7aafcb14c0557ac308a27a2a751c0823d9f | [
"BSD-2-Clause"
] | null | null | null | tests/roots/test-advanced/apidoc_dummy_package/_apidoc_private_dummy_submodule.py | lalten/apidoc | 4e3dc7aafcb14c0557ac308a27a2a751c0823d9f | [
"BSD-2-Clause"
] | null | null | null | def very_private():
return 'private'
| 13.666667 | 20 | 0.682927 | 5 | 41 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
9617264c8f67b12e53e3fd2a84b383734ce6fe02 | 138 | py | Python | scripts/npc/autogen_2159478.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | scripts/npc/autogen_2159478.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | scripts/npc/autogen_2159478.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | # ParentID: 2159478
# Character field ID when accessed: 910150300
# ObjectID: 1000009
# Object Position X: 1564
# Object Position Y: -321
| 23 | 45 | 0.753623 | 18 | 138 | 5.777778 | 0.888889 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 0.166667 | 138 | 5 | 46 | 27.6 | 0.643478 | 0.92029 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82e7b3ea7def7dac98d180352e6f0d238c444314 | 386 | py | Python | Werewolf/game/roles/Role.py | GeorgeVelikov/Werewolf-Framework | 6a4501cc98cab92111eec2551b9a3d2464adad7f | [
"MIT"
] | 1 | 2021-11-14T16:51:16.000Z | 2021-11-14T16:51:16.000Z | Werewolf/game/roles/Role.py | GeorgeVelikov/Werewolf-Framework | 6a4501cc98cab92111eec2551b9a3d2464adad7f | [
"MIT"
] | null | null | null | Werewolf/game/roles/Role.py | GeorgeVelikov/Werewolf-Framework | 6a4501cc98cab92111eec2551b9a3d2464adad7f | [
"MIT"
] | null | null | null | from Shared.enums.PlayerTypeEnum import PlayerTypeEnum;
class Role():
def __init__(self):
pass;
@property
def Type(self):
return PlayerTypeEnum._None;
@property
def CanTargetDeadPlayers(self):
return False;
@property
def HasDayAction(self):
return False;
@property
def HasNightAction(self):
return False;
| 17.545455 | 55 | 0.634715 | 37 | 386 | 6.486486 | 0.513514 | 0.183333 | 0.1875 | 0.191667 | 0.216667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284974 | 386 | 21 | 56 | 18.380952 | 0.869565 | 0 | 0 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0.0625 | 0.0625 | 0.25 | 0.6875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
7d7f0fc3fef18173d78098ecb5555dcfc496154d | 6,393 | py | Python | tests/tests.py | joealcorn/django-cursor-pagination | d066d004a8bedfef5aa4c9ffa1fd65e9e760f270 | [
"BSD-3-Clause"
] | null | null | null | tests/tests.py | joealcorn/django-cursor-pagination | d066d004a8bedfef5aa4c9ffa1fd65e9e760f270 | [
"BSD-3-Clause"
] | null | null | null | tests/tests.py | joealcorn/django-cursor-pagination | d066d004a8bedfef5aa4c9ffa1fd65e9e760f270 | [
"BSD-3-Clause"
] | null | null | null | import datetime
from django.test import TestCase
from django.utils import timezone
from cursor_pagination import CursorPaginator, InvalidCursor
from .models import Author, Post
class TestNoArgs(TestCase):
def test_empty(self):
paginator = CursorPaginator(Post.objects.all(), ('id',))
page = paginator.page()
self.assertEqual(len(page), 0)
self.assertFalse(page.has_next)
self.assertFalse(page.has_previous)
def test_with_items(self):
for i in range(20):
Post.objects.create(name='Name %s' % i)
paginator = CursorPaginator(Post.objects.all(), ('id',))
page = paginator.page()
self.assertEqual(len(page), 20)
self.assertFalse(page.has_next)
self.assertFalse(page.has_previous)
class TestForwardPagination(TestCase):
@classmethod
def setUpTestData(cls):
now = timezone.now()
cls.items = []
for i in range(20):
post = Post.objects.create(name='Name %s' % i, created=now - datetime.timedelta(hours=i))
cls.items.append(post)
cls.paginator = CursorPaginator(Post.objects.all(), ('-created',))
def test_first_page(self):
page = self.paginator.page(first=2)
self.assertSequenceEqual(page, [self.items[0], self.items[1]])
self.assertTrue(page.has_next)
self.assertFalse(page.has_previous)
def test_second_page(self):
previous_page = self.paginator.page(first=2)
cursor = self.paginator.cursor(previous_page[-1])
page = self.paginator.page(first=2, after=cursor)
self.assertSequenceEqual(page, [self.items[2], self.items[3]])
self.assertTrue(page.has_next)
self.assertTrue(page.has_previous)
def test_last_page(self):
previous_page = self.paginator.page(first=18)
cursor = self.paginator.cursor(previous_page[-1])
page = self.paginator.page(first=2, after=cursor)
self.assertSequenceEqual(page, [self.items[18], self.items[19]])
self.assertFalse(page.has_next)
self.assertTrue(page.has_previous)
def test_incomplete_last_page(self):
previous_page = self.paginator.page(first=18)
cursor = self.paginator.cursor(previous_page[-1])
page = self.paginator.page(first=100, after=cursor)
self.assertSequenceEqual(page, [self.items[18], self.items[19]])
self.assertFalse(page.has_next)
self.assertTrue(page.has_previous)
class TestBackwardsPagination(TestCase):
@classmethod
def setUpTestData(cls):
now = timezone.now()
cls.items = []
for i in range(20):
post = Post.objects.create(name='Name %s' % i, created=now - datetime.timedelta(hours=i))
cls.items.append(post)
cls.paginator = CursorPaginator(Post.objects.all(), ('-created',))
def test_first_page(self):
page = self.paginator.page(last=2)
self.assertSequenceEqual(page, [self.items[18], self.items[19]])
self.assertTrue(page.has_previous)
self.assertFalse(page.has_next)
def test_second_page(self):
previous_page = self.paginator.page(last=2)
cursor = self.paginator.cursor(previous_page[0])
page = self.paginator.page(last=2, before=cursor)
self.assertSequenceEqual(page, [self.items[16], self.items[17]])
self.assertTrue(page.has_previous)
self.assertTrue(page.has_next)
def test_last_page(self):
previous_page = self.paginator.page(last=18)
cursor = self.paginator.cursor(previous_page[0])
page = self.paginator.page(last=2, before=cursor)
self.assertSequenceEqual(page, [self.items[0], self.items[1]])
self.assertFalse(page.has_previous)
self.assertTrue(page.has_next)
def test_incomplete_last_page(self):
previous_page = self.paginator.page(last=18)
cursor = self.paginator.cursor(previous_page[0])
page = self.paginator.page(last=100, before=cursor)
self.assertSequenceEqual(page, [self.items[0], self.items[1]])
self.assertFalse(page.has_previous)
self.assertTrue(page.has_next)
class TestTwoFieldPagination(TestCase):
@classmethod
def setUpTestData(cls):
now = timezone.now()
cls.items = []
data = [
(now, 'B'),
(now, 'C'),
(now, 'D'),
(now + datetime.timedelta(hours=1), 'A'),
]
for time, name in data:
post = Post.objects.create(name=name, created=time)
cls.items.append(post)
def test_order(self):
paginator = CursorPaginator(Post.objects.all(), ('created', 'name'))
previous_page = paginator.page(first=2)
self.assertSequenceEqual(previous_page, [self.items[0], self.items[1]])
cursor = paginator.cursor(previous_page[-1])
page = paginator.page(first=2, after=cursor)
self.assertSequenceEqual(page, [self.items[2], self.items[3]])
def test_reverse_order(self):
paginator = CursorPaginator(Post.objects.all(), ('-created', '-name'))
previous_page = paginator.page(first=2)
self.assertSequenceEqual(previous_page, [self.items[3], self.items[2]])
cursor = paginator.cursor(previous_page[-1])
page = paginator.page(first=2, after=cursor)
self.assertSequenceEqual(page, [self.items[1], self.items[0]])
def test_mixed_order(self):
with self.assertRaises(InvalidCursor):
CursorPaginator(Post.objects.all(), ('created', '-name'))
class TestRelationships(TestCase):
@classmethod
def setUpTestData(cls):
cls.items = []
author_1 = Author.objects.create(name='Ana')
author_2 = Author.objects.create(name='Bob')
for i in range(20):
post = Post.objects.create(name='Name %02d' % i, author=author_1 if i % 2 else author_2)
cls.items.append(post)
cls.paginator = CursorPaginator(Post.objects.all(), ('author__name', 'name'))
def test_first_page(self):
page = self.paginator.page(first=2)
self.assertSequenceEqual(page, [self.items[1], self.items[3]])
def test_after_page(self):
cursor = self.paginator.cursor(self.items[17])
page = self.paginator.page(first=2, after=cursor)
self.assertSequenceEqual(page, [self.items[19], self.items[0]])
| 37.828402 | 101 | 0.647114 | 780 | 6,393 | 5.207692 | 0.108974 | 0.082718 | 0.066962 | 0.082718 | 0.834072 | 0.815608 | 0.776711 | 0.766864 | 0.759724 | 0.759724 | 0 | 0.018823 | 0.218833 | 6,393 | 168 | 102 | 38.053571 | 0.794553 | 0 | 0 | 0.65942 | 0 | 0 | 0.017519 | 0 | 0 | 0 | 0 | 0 | 0.268116 | 1 | 0.137681 | false | 0 | 0.036232 | 0 | 0.210145 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7db790d2ad410332bc382aac47b4e4f9a9fb5c87 | 113 | py | Python | src/savoia/config/dir_config.py | Ma-r-co/savoia | d66ddde28d7e0e40d771f3e685e7c6ccadeb18f7 | [
"MIT"
] | 1 | 2020-08-11T03:44:18.000Z | 2020-08-11T03:44:18.000Z | src/savoia/config/dir_config.py | Ma-r-co/savoia | d66ddde28d7e0e40d771f3e685e7c6ccadeb18f7 | [
"MIT"
] | 9 | 2020-07-09T19:24:55.000Z | 2020-07-20T21:26:39.000Z | src/savoia/config/dir_config.py | Ma-r-co/savoia | d66ddde28d7e0e40d771f3e685e7c6ccadeb18f7 | [
"MIT"
] | 1 | 2020-07-17T15:25:42.000Z | 2020-07-17T15:25:42.000Z | OUTPUT_RESULTS_DIR: str = "/Users/makoto/Pywork/output"
CSV_DATA_DIR: str = "/Users/makoto/Pywork/historic-data"
| 37.666667 | 56 | 0.778761 | 17 | 113 | 4.941176 | 0.588235 | 0.142857 | 0.261905 | 0.404762 | 0.547619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070796 | 113 | 2 | 57 | 56.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.539823 | 0.539823 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7de7ecb4ed743fa7c25e150570d9c510e306f08a | 86 | py | Python | dso/dso/task/__init__.py | brendenpetersen/deep-symbolic-optimization | 8724839dab910022e24d03debdf564236683474b | [
"BSD-3-Clause"
] | 134 | 2021-07-06T06:14:02.000Z | 2022-03-31T18:24:08.000Z | dso/dso/task/__init__.py | brendenpetersen/deep-symbolic-optimization | 8724839dab910022e24d03debdf564236683474b | [
"BSD-3-Clause"
] | 15 | 2021-06-10T17:03:09.000Z | 2022-01-21T20:15:35.000Z | dso/dso/task/__init__.py | brendenpetersen/deep-symbolic-optimization | 8724839dab910022e24d03debdf564236683474b | [
"BSD-3-Clause"
] | 44 | 2021-06-26T19:11:28.000Z | 2022-03-25T04:07:41.000Z | from dso.task.task import make_task, set_task, Task, HierarchicalTask, SequentialTask
| 43 | 85 | 0.837209 | 12 | 86 | 5.833333 | 0.666667 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 1 | 86 | 86 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7deb232dfa31370adab5cfda5b8f44d6b0a89bd6 | 22 | py | Python | config/world/__init__.py | kelceydamage/learning | 40655cb8d6d03ca85178cbfe5d56db9e699c0cff | [
"Apache-2.0"
] | null | null | null | config/world/__init__.py | kelceydamage/learning | 40655cb8d6d03ca85178cbfe5d56db9e699c0cff | [
"Apache-2.0"
] | null | null | null | config/world/__init__.py | kelceydamage/learning | 40655cb8d6d03ca85178cbfe5d56db9e699c0cff | [
"Apache-2.0"
] | null | null | null | from registry import * | 22 | 22 | 0.818182 | 3 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81497d8f306b1ed6eeb96451febeac48b5c826e3 | 165 | py | Python | appscanner/model/__init__.py | siteblindado/python-trustwave-appscanner | 7acba76bedd343521fe5d21184b4d6f6be7a8fa1 | [
"MIT"
] | null | null | null | appscanner/model/__init__.py | siteblindado/python-trustwave-appscanner | 7acba76bedd343521fe5d21184b4d6f6be7a8fa1 | [
"MIT"
] | 2 | 2021-03-22T16:55:35.000Z | 2021-12-13T19:34:53.000Z | appscanner/model/__init__.py | siteblindado/python-trustwave-appscanner | 7acba76bedd343521fe5d21184b4d6f6be7a8fa1 | [
"MIT"
] | null | null | null | from .assessment import Assessments
from .assessment_runs import AssessmentRuns, AssessmentRun
from .assessment_run_result import Assessments as AssessmentRunResults | 55 | 70 | 0.890909 | 18 | 165 | 8 | 0.611111 | 0.291667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084848 | 165 | 3 | 70 | 55 | 0.953642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81512278dfdfa73dd0915defa732b3b0e7db6af6 | 23 | py | Python | mlhep2019/pivot/__init__.py | Meshreki/mlhep2019 | 7934173666267ee21faa88d939e26cafe8c5323e | [
"MIT"
] | 1 | 2021-09-22T12:51:40.000Z | 2021-09-22T12:51:40.000Z | mlhep2019/pivot/__init__.py | nadiinchi/mlhep2019 | b2ecd75dfd4e7cbc249e5e24202b4d258fe4ca75 | [
"MIT"
] | null | null | null | mlhep2019/pivot/__init__.py | nadiinchi/mlhep2019 | b2ecd75dfd4e7cbc249e5e24202b4d258fe4ca75 | [
"MIT"
] | null | null | null | from .plotting import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4a2b6f447f6c057cce97eb99fb2912c126e15b6 | 27 | py | Python | tapis_cli/commands/taccapis/v2/apps/init/__init__.py | bpachev/tapis-cli | c3128fb5b63ef74e06b737bbd95ef28fb24f0d32 | [
"BSD-3-Clause"
] | 8 | 2020-10-18T22:48:23.000Z | 2022-01-10T09:16:14.000Z | tapis_cli/commands/taccapis/v2/apps/init/__init__.py | bpachev/tapis-cli | c3128fb5b63ef74e06b737bbd95ef28fb24f0d32 | [
"BSD-3-Clause"
] | 238 | 2019-09-04T14:37:54.000Z | 2020-04-15T16:24:24.000Z | tapis_cli/commands/taccapis/v2/apps/init/__init__.py | bpachev/tapis-cli | c3128fb5b63ef74e06b737bbd95ef28fb24f0d32 | [
"BSD-3-Clause"
] | 5 | 2019-09-20T04:23:49.000Z | 2020-01-16T17:45:14.000Z | from .init import AppsInit
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4a9991898f69ad1ff8ec1c019c0c979cdd91ba8 | 158 | py | Python | app/auth/__init__.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | app/auth/__init__.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | app/auth/__init__.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | # coding:utf8
from flask import Blueprint
auth = Blueprint('auth', __name__) # create blueprint namespace 'auth'
from . import views # import route
| 15.8 | 71 | 0.71519 | 19 | 158 | 5.736842 | 0.631579 | 0.238532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007937 | 0.202532 | 158 | 9 | 72 | 17.555556 | 0.857143 | 0.367089 | 0 | 0 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c4cbf8993b31f49930a02619e845e4356fc4ca23 | 26 | py | Python | contour/__init__.py | MercenaryLogic/contour | fdff459810043ccac179dfe636303539036960fb | [
"MIT"
] | null | null | null | contour/__init__.py | MercenaryLogic/contour | fdff459810043ccac179dfe636303539036960fb | [
"MIT"
] | null | null | null | contour/__init__.py | MercenaryLogic/contour | fdff459810043ccac179dfe636303539036960fb | [
"MIT"
] | null | null | null |
from rdflib import graph
| 8.666667 | 24 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 2 | 25 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
483c719ae86a6d4b4dcb5f83ff0f34dc49570e4e | 3,941 | py | Python | scripts/deprecated/test42.simple_v3_no_terminat_increase_X.py | johnpzh/parallel_ANNS | 36639ddfba66bb38c04a4c3bbccb05c2d30488eb | [
"MIT"
] | 4 | 2020-06-10T02:38:23.000Z | 2022-03-09T08:25:49.000Z | scripts/deprecated/test42.simple_v3_no_terminat_increase_X.py | johnpzh/parallel_ANNS | 36639ddfba66bb38c04a4c3bbccb05c2d30488eb | [
"MIT"
] | null | null | null | scripts/deprecated/test42.simple_v3_no_terminat_increase_X.py | johnpzh/parallel_ANNS | 36639ddfba66bb38c04a4c3bbccb05c2d30488eb | [
"MIT"
] | 1 | 2022-03-09T08:25:52.000Z | 2022-03-09T08:25:52.000Z | #! python3
import os
import sys
import subprocess
if len(sys.argv) != 7:
print(f"{sys.argv[0]} <data_dir> <tag> <num_t> <L_low> <L_up> <X_low>")
# print(f"{sys.argv[0]} <data_dir> <tag>")
exit()
base_dir = sys.argv[1]
tag = sys.argv[2]
num_t = int(sys.argv[3])
L_lower = int(sys.argv[4])
L_upper = int(sys.argv[5])
X_lower = int(sys.argv[6])
# X_upper = int(sys.argv[7])
env_vars = os.environ
env_vars["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
bin="./profile_para_single_query_search_simple_v3"
#### SIFT1M
data_dir = base_dir + "/sift1m"
data_name = "sift"
label = F"{tag}.sift1M"
raw_file = F"output.{label}.raw.txt"
subprocess.run(F':> {raw_file}', shell=True, check=True)
for L in range(L_lower, L_upper + 1):
for X in range(X_lower, L + 5):
command = F"{bin} {data_dir}/{data_name}_base.fvecs {data_dir}/{data_name}_query.fvecs {data_dir}/{data_name}.nsg " \
F"{L} 100 output.ivecs {data_dir}/{data_name}.true-100_NN.q-10000.binary {num_t} {L} {X} " \
F"| tee -a {raw_file}"
subprocess.run(command, env=env_vars, shell=True, check=True)
rows_file = F"output.{label}.rows.txt"
table_file = F"output.{label}.table.txt"
selected_file = F"output.{label}.table.selected.txt"
subprocess.run(F'python3 ../scripts/output_rows_to_table.py {raw_file} {rows_file} 2 3 10 12 13 15 1', shell=True, check=True)
subprocess.run(F'python3 ../scripts/output_row_minimum.py {rows_file} {table_file} 2 0', shell=True, check=True)
subprocess.run(F'python3 ../scripts/output_find_runtime_above_presicion.py {table_file} {selected_file} 0 2', shell=True, check=True)
# #### GIST1M
# data_dir = base_dir + "/gist1m"
# data_name = "gist"
# label = F"{tag}.gist1M"
# raw_file = F"output.{label}.raw.txt"
#
# subprocess.run(F':> {raw_file}', shell=True, check=True)
#
# for L in range(L_lower, L_upper + 1):
# for X in range(X_lower, L + 5):
# command = F"{bin} {data_dir}/{data_name}_base.fvecs {data_dir}/{data_name}_query.fvecs {data_dir}/{data_name}.nsg " \
# F"{L} 100 output.ivecs {data_dir}/{data_name}.true-100_NN.q-1000.binary {num_t} {L} {X} " \
# F"| tee -a {raw_file}"
# subprocess.run(command, env=env_vars, shell=True, check=True)
#
# rows_file = F"output.{label}.rows.txt"
# table_file = F"output.{label}.table.txt"
# selected_file = F"output.{label}.table.selected.txt"
# subprocess.run(F'python3 ../scripts/output_rows_to_table.py {raw_file} {rows_file} 2 3 10 12 13 15 1', shell=True, check=True)
# subprocess.run(F'python3 ../scripts/output_row_minimum.py {rows_file} {table_file} 2 0', shell=True, check=True)
# subprocess.run(F'python3 ../scripts/output_find_runtime_above_presicion.py {table_file} {selected_file} 0 2', shell=True, check=True)
# #### DEEP10M
# data_dir = base_dir + "/deep1b"
# data_name = "deep10M"
# label = F"{tag}.deep10M"
# raw_file = F"output.{label}.raw.txt"
#
# subprocess.run(F':> {raw_file}', shell=True, check=True)
#
# for L in range(L_lower, L_upper + 1):
# for X in range(X_lower, L + 5):
# command = F"{bin} {data_dir}/{data_name}_base.fvecs {data_dir}/{data_name}_query.fvecs {data_dir}/{data_name}.nsg " \
# F"{L} 100 output.ivecs {data_dir}/{data_name}.true-100_NN.q-10000.binary {num_t} {L} {X} " \
# F"| tee -a {raw_file}"
# subprocess.run(command, env=env_vars, shell=True, check=True)
#
# rows_file = F"output.{label}.rows.txt"
# table_file = F"output.{label}.table.txt"
# selected_file = F"output.{label}.table.selected.txt"
# subprocess.run(F'python3 ../scripts/output_rows_to_table.py {raw_file} {rows_file} 2 3 10 12 13 15 1', shell=True, check=True)
# subprocess.run(F'python3 ../scripts/output_row_minimum.py {rows_file} {table_file} 2 0', shell=True, check=True)
# subprocess.run(F'python3 ../scripts/output_find_runtime_above_presicion.py {table_file} {selected_file} 0 2', shell=True, check=True)
| 44.784091 | 135 | 0.673687 | 658 | 3,941 | 3.820669 | 0.144377 | 0.047335 | 0.083532 | 0.107399 | 0.822593 | 0.822593 | 0.822593 | 0.822593 | 0.8035 | 0.8035 | 0 | 0.035354 | 0.145902 | 3,941 | 87 | 136 | 45.298851 | 0.711527 | 0.583608 | 0 | 0 | 0 | 0.09375 | 0.462753 | 0.272096 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.09375 | 0 | 0.09375 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f93c361fed0deb091b370b634d67168aa2bb9e5 | 111 | py | Python | pricing/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | pricing/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | pricing/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | from .models import black_scholes
from .models.black_scholes import *
from .iv import *
from .context import *
| 22.2 | 35 | 0.783784 | 16 | 111 | 5.3125 | 0.4375 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144144 | 111 | 4 | 36 | 27.75 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6fe0051c8bb5e857b21dc941d53ef89b3357f689 | 36 | py | Python | wroclaw_building_footprint/__init__.py | Greenpp/wroc-build | d59a675c5da904b75ff74b4edaadf4cdce9c3418 | [
"MIT"
] | null | null | null | wroclaw_building_footprint/__init__.py | Greenpp/wroc-build | d59a675c5da904b75ff74b4edaadf4cdce9c3418 | [
"MIT"
] | null | null | null | wroclaw_building_footprint/__init__.py | Greenpp/wroc-build | d59a675c5da904b75ff74b4edaadf4cdce9c3418 | [
"MIT"
] | null | null | null | from .segmentator import Segmentator | 36 | 36 | 0.888889 | 4 | 36 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5150969c57e93e11763f625bf3c57557da287bd | 7,786 | py | Python | Runs/simulationRuns.py | lsiemens/QBox | ef43c9bbc5f8437fb4d44fbf0e58e29a8e0b1b39 | [
"BSD-3-Clause"
] | 3 | 2019-03-15T01:34:42.000Z | 2020-05-09T15:25:39.000Z | Runs/simulationRuns.py | lsiemens/QBox | ef43c9bbc5f8437fb4d44fbf0e58e29a8e0b1b39 | [
"BSD-3-Clause"
] | 3 | 2019-02-19T00:34:45.000Z | 2020-01-10T04:57:07.000Z | Runs/simulationRuns.py | lsiemens/QBox | ef43c9bbc5f8437fb4d44fbf0e58e29a8e0b1b39 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
import sys
sys.path.insert(0, "../QBoxSolver")
import QBHD
import numpy
from pathlib import Path
from matplotlib import pyplot
resolution = 512
numberOfGrids = 5
maxNumberOfStates = 1024
length = 20.0 # hartree length units
mass = 1 # hartree mass units
omega = 1
wallHeight = 50 # hartree energy units
wallThick = 0.075 # percentage of simulation
wallThin = 0.01 # percentage of simulation
slitWidth = 0.10 # percentage of simulation
isPeriodicPotential = False
def setup(path="./", fname="data.h5", resolution=128, length=10.0):
Path(path).mkdir(parents=True, exist_ok=True)
fname = path + "/" + fname
run = QBHD.create(fname, resolution, length)
run.numberOfGrids = numberOfGrids
run.maxNumberOfStates = maxNumberOfStates
run.mass = mass
run.biasEnergy = 0.0
return run
def box(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.002, 32)
potential = 0*run.X
run.potential = potential
run.save()
def space(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = True
run.targetEvolutionTime = run.evolutionTimeCalculator(0.002, 32)
potential = 0*run.X
run.potential = potential
run.save()
def harmonic(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.02, 32)
potential = (1/2)*run.mass*omega**2*(run.X**2 + run.Y**2)
run.potential = potential
run.save()
def harmonicWall(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.02, 32)
potential = (1/2)*run.mass*omega**2*(run.X**2 + run.Y**2)
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThin/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThin/2)), :] += wallHeight
run.potential = potential
run.save()
def wall(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = isPeriodicPotential
run.targetEvolutionTime = run.evolutionTimeCalculator(0.004, 32)
potential = 0*run.X
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThin/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThin/2)), :] += wallHeight
run.potential = potential
run.save()
def harmonicSingleSlit(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.02, 32)
potential = (1/2)*run.mass*omega**2*(run.X**2 + run.Y**2)
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick//2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), :run.resolution//2 - int(numpy.ceil(run.resolution*slitWidth//2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick//2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 + int(numpy.ceil(run.resolution*slitWidth//2)):] += wallHeight
run.potential = potential
run.save()
def singleSlit(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = isPeriodicPotential
run.targetEvolutionTime = run.evolutionTimeCalculator(0.01, 32)
potential = 0*run.X
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick//2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), :run.resolution//2 - int(numpy.ceil(run.resolution*slitWidth//2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick//2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 + int(numpy.ceil(run.resolution*slitWidth//2)):] += wallHeight
run.potential = potential
run.save()
def harmonicDoubleSlit(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.02, 32)
potential = (1/2)*run.mass*omega**2*(run.X**2 + run.Y**2)
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), :run.resolution//2 - 3*int(numpy.ceil(run.resolution*slitWidth/2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 - int(numpy.ceil(run.resolution*slitWidth/2)):run.resolution//2 + int(numpy.ceil(run.resolution*slitWidth/2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 + 3*int(numpy.ceil(run.resolution*slitWidth/2)):] += wallHeight
run.potential = potential
run.save()
def doubleSlit(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = isPeriodicPotential
run.targetEvolutionTime = run.evolutionTimeCalculator(0.01, 32)
potential = 0*run.X
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), :run.resolution//2 - 3*int(numpy.ceil(run.resolution*slitWidth/2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 - int(numpy.ceil(run.resolution*slitWidth/2)):run.resolution//2 + int(numpy.ceil(run.resolution*slitWidth/2))] += wallHeight
potential[run.resolution//2 - int(numpy.ceil(run.resolution*wallThick/2)):run.resolution//2 + int(numpy.ceil(run.resolution*wallThick/2)), run.resolution//2 + 3*int(numpy.ceil(run.resolution*slitWidth/2)):] += wallHeight
run.potential = potential
run.save()
def hydrogenAtom(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.015, 32)
potential = 0*run.X
potential = - 1/numpy.sqrt(run.X**2 + run.Y**2)
biasEnergy = numpy.min(potential)
potential -= biasEnergy
run.biasEnergy = biasEnergy
run.potential = potential
run.save()
def hydrogenMolecularIon(path):
run = setup(path, resolution=resolution, length=100.0)
run.isPeriodicBoundary = False
run.targetEvolutionTime = run.evolutionTimeCalculator(0.005, 32)
bondLength = 0.52
r1 = numpy.sqrt((run.X - bondLength/2)**2 + run.Y**2)
r2 = numpy.sqrt((run.X + bondLength/2)**2 + run.Y**2)
potential = 0*run.X
potential = - 1/r1 - 1/r2 + 1/bondLength
run.biasEnergy = numpy.min(potential)
potential -= run.biasEnergy
run.potential = potential
run.save()
def lattice(path):
run = setup(path, resolution=resolution, length=length)
run.isPeriodicBoundary = True
run.targetEvolutionTime = run.evolutionTimeCalculator(0.0025, 32)
n = 3
smoothing = 0.2
potential = -1/numpy.sqrt(numpy.sin(n*numpy.pi*run.X/run.length)**2 + numpy.sin(n*numpy.pi*run.Y/run.length)**2 + smoothing)
run.biasEnergy = numpy.min(potential)
potential -= run.biasEnergy
run.potential = potential
run.save()
problems = [box,
space,
harmonic,
harmonicWall,
wall,
harmonicSingleSlit,
singleSlit,
harmonicDoubleSlit,
doubleSlit,
hydrogenAtom,
hydrogenMolecularIon,
lattice]
for problem in problems:
run = problem("./temp/" + problem.__name__)
| 42.086486 | 285 | 0.704983 | 998 | 7,786 | 5.49499 | 0.10521 | 0.170678 | 0.091904 | 0.098468 | 0.806163 | 0.799599 | 0.776076 | 0.760394 | 0.746718 | 0.746718 | 0 | 0.035498 | 0.149756 | 7,786 | 184 | 286 | 42.315217 | 0.7929 | 0.020164 | 0 | 0.546667 | 0 | 0 | 0.003936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086667 | false | 0 | 0.033333 | 0 | 0.126667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d216984bd20f21e76860a85300cac3e13e142bf7 | 21,198 | py | Python | knowledge/views.py | nuwainfo/treeio | f57bf9114d9774c11468a1b0e44614b04631beb1 | [
"MIT"
] | null | null | null | knowledge/views.py | nuwainfo/treeio | f57bf9114d9774c11468a1b0e44614b04631beb1 | [
"MIT"
] | null | null | null | knowledge/views.py | nuwainfo/treeio | f57bf9114d9774c11468a1b0e44614b04631beb1 | [
"MIT"
] | null | null | null | # encoding: utf-8
# Copyright 2011 Tree.io Limited
# This file is part of Treeio.
# License www.tree.io/license
"""
Knowledge Base module views
"""
from django.db.models import Q
from django.template import RequestContext
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from treeio.knowledge.models import KnowledgeFolder, KnowledgeItem, KnowledgeCategory
from treeio.core.models import Object
from treeio.core.views import user_denied
from treeio.core.rendering import render_to_response
from treeio.core.decorators import treeio_login_required, handle_response_format
from treeio.knowledge.forms import KnowledgeFolderForm, KnowledgeItemForm, KnowledgeCategoryForm, \
FilterForm, MassActionForm
from django.http import Http404
def _get_filter_query(args):
"Creates a query to filter Knowledge Items based on FilterForm arguments"
query = Q()
for arg in args:
if hasattr(KnowledgeItem, arg) and args[arg]:
kwargs = {str(arg + '__id'): long(args[arg])}
query = query & Q(**kwargs)
return query
def _get_default_context(request):
"Returns default context as a dict()"
folders = Object.filter_permitted(manager=KnowledgeFolder.objects.filter(parent__isnull=True),
user=request.user.get_profile(), mode='r')
massform = MassActionForm(request.user.get_profile())
context = {'folders': folders,
'massform': massform}
return context
def _process_mass_form(f):
"Pre-process request to handle mass action form for Knowledge Items"
def wrap(request, *args, **kwargs):
"Wrap"
user = request.user.get_profile()
if 'massform' in request.POST:
for key in request.POST:
if 'mass-item' in key:
try:
item = KnowledgeItem.objects.get(pk=request.POST[key])
form = MassActionForm(user, request.POST, instance=item)
if form.is_valid() and user.has_permission(item, mode='w'):
form.save()
except Exception:
pass
return f(request, *args, **kwargs)
wrap.__doc__ = f.__doc__
wrap.__name__ = f.__name__
return wrap
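A short aside on the decorator above: it copies `__doc__` and `__name__` onto the wrapper by hand. The stdlib's `functools.wraps` does the same (and more) in one line; the following self-contained sketch shows the pattern with a hypothetical, Django-free pre-processing step (`preprocess` and the dict-based `request` are illustration stand-ins, not the project's API).

```python
import functools

def preprocess(f):
    # functools.wraps copies __name__, __doc__, __module__, etc. onto wrap.
    @functools.wraps(f)
    def wrap(request, *args, **kwargs):
        # Stand-in for the mass-form handling done before the view runs.
        request.setdefault('preprocessed', True)
        return f(request, *args, **kwargs)
    return wrap

@preprocess
def index(request):
    "Example view"
    return request

print(index({})['preprocessed'], index.__name__)  # True index
```

With `functools.wraps`, introspection tools and documentation generators see the original view's metadata instead of the wrapper's, which is exactly what the manual `wrap.__doc__ = f.__doc__` assignments are trying to achieve.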
@handle_response_format
@treeio_login_required
@_process_mass_form
def index(request, response_format='html'):
"Knowledge base index page"
if request.GET:
query = _get_filter_query(request.GET)
items = Object.filter_by_request(
request, KnowledgeItem.objects.filter(query))
else:
items = Object.filter_by_request(request, KnowledgeItem.objects)
filters = FilterForm(request.user.get_profile(), 'name', request.GET)
context = _get_default_context(request)
context.update({'filters': filters,
'items': items})
return render_to_response('knowledge/index', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def index_categories(request, response_format='html'):
"Knowledge base categories page"
if request.GET:
query = _get_filter_query(request.GET)
items = Object.filter_by_request(
request, KnowledgeItem.objects.filter(query))
else:
items = Object.filter_by_request(request, KnowledgeItem.objects)
filters = FilterForm(request.user.get_profile(), 'category', request.GET)
categories = Object.filter_by_request(request, KnowledgeCategory.objects)
context = _get_default_context(request)
context.update({'filters': filters,
'items': items,
'categories': categories})
return render_to_response('knowledge/index_categories', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def folder_add(request, response_format='html'):
"New folder form"
if request.POST:
if not 'cancel' in request.POST:
folder = KnowledgeFolder()
form = KnowledgeFolderForm(
request.user.get_profile(), None, request.POST, instance=folder)
if form.is_valid():
folder = form.save()
folder.set_user_from_request(request)
return HttpResponseRedirect(reverse('knowledge_folder_view', args=[folder.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge'))
else:
form = KnowledgeFolderForm(request.user.get_profile(), None)
context = _get_default_context(request)
context.update({'form': form})
return render_to_response('knowledge/folder_add', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def folder_add_folder(request, folderPath, response_format='html'):
"Add new knowledge folder to preselected folder"
try:
folder = KnowledgeFolder.by_path(folderPath)
knowledgeType_id = folder.id
except KnowledgeFolder.DoesNotExist:
raise Http404
parent = None
if knowledgeType_id:
parent = get_object_or_404(KnowledgeFolder, pk=knowledgeType_id)
if not request.user.get_profile().has_permission(parent, mode='x'):
parent = None
if request.POST:
if not 'cancel' in request.POST:
folder = KnowledgeFolder()
form = KnowledgeFolderForm(request.user.get_profile(), knowledgeType_id,
request.POST, instance=folder)
if form.is_valid():
folder = form.save()
folder.set_user_from_request(request)
return HttpResponseRedirect(reverse('knowledge_folder_view', args=[folder.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge'))
else:
form = KnowledgeFolderForm(
request.user.get_profile(), knowledgeType_id)
context = _get_default_context(request)
context.update({'form': form,
'parent': parent})
return render_to_response('knowledge/folder_add_folder', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
@_process_mass_form
def folder_view(request, folderPath, response_format='html'):
"Single knowledge folder view page"
    try:
        folder = KnowledgeFolder.by_path(folderPath)
    except KnowledgeFolder.DoesNotExist:
        raise Http404
    if not folder:
        raise Http404
if not request.user.get_profile().has_permission(folder):
return user_denied(request, message="You don't have access to this Knowledge Type")
items = Object.filter_by_request(
request, manager=KnowledgeItem.objects.filter(folder=folder))
subfolders = KnowledgeFolder.objects.filter(parent=folder)
context = _get_default_context(request)
context.update({'items': items,
'folder': folder,
'subfolders': subfolders})
return render_to_response('knowledge/folder_view', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def folder_edit(request, knowledgeType_id, response_format='html'):
"Knowledge folder edit page"
folder = get_object_or_404(KnowledgeFolder, pk=knowledgeType_id)
items = Object.filter_by_request(
request, manager=KnowledgeItem.objects.filter(folder=folder))
if not request.user.get_profile().has_permission(folder, mode="w"):
return user_denied(request, message="You don't have access to this Knowledge Type")
if request.POST:
if not 'cancel' in request.POST:
form = KnowledgeFolderForm(
request.user.get_profile(), None, request.POST, instance=folder)
if form.is_valid():
folder = form.save()
return HttpResponseRedirect(reverse('knowledge_folder_view', args=[folder.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge_folder_view', args=[folder.treepath]))
else:
form = KnowledgeFolderForm(
request.user.get_profile(), None, instance=folder)
context = _get_default_context(request)
context.update({'items': items,
'folder': folder,
'form': form})
return render_to_response('knowledge/folder_edit', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def folder_delete(request, knowledgeType_id, response_format='html'):
"Type delete"
folder = get_object_or_404(KnowledgeFolder, pk=knowledgeType_id)
items = Object.filter_by_request(
request, manager=KnowledgeItem.objects.filter(folder=folder))
if not request.user.get_profile().has_permission(folder, mode='w'):
return user_denied(request, message="You don't have access to this Knowledge Type")
if request.POST:
if 'delete' in request.POST:
if 'trash' in request.POST:
folder.trash = True
folder.save()
else:
folder.delete()
return HttpResponseRedirect(reverse('knowledge_index'))
elif 'cancel' in request.POST:
return HttpResponseRedirect(reverse('knowledge_folder_view', args=[folder.treepath]))
context = _get_default_context(request)
context.update({'items': items,
'folder': folder})
return render_to_response('knowledge/folder_delete', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def item_add(request, response_format='html'):
"Add new knowledge item"
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if request.POST:
if not 'cancel' in request.POST:
item = KnowledgeItem()
form = KnowledgeItemForm(
request.user.get_profile(), None, request.POST, instance=item)
if form.is_valid():
item = form.save()
item.set_user_from_request(request)
return HttpResponseRedirect(reverse('knowledge_item_view',
args=[item.folder.treepath, item.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge'))
else:
form = KnowledgeItemForm(request.user.get_profile(), None)
context = _get_default_context(request)
context.update({'items': items,
'form': form})
return render_to_response('knowledge/item_add', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def item_add_folder(request, folderPath, response_format='html'):
"Add new knowledge item to preselected folder"
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
try:
folder = KnowledgeFolder.by_path(folderPath)
knowledgeType_id = folder.id
except KnowledgeFolder.DoesNotExist:
raise Http404
if request.POST:
if not 'cancel' in request.POST:
item = KnowledgeItem()
form = KnowledgeItemForm(
request.user.get_profile(), knowledgeType_id, request.POST, instance=item)
if form.is_valid():
item = form.save()
item.set_user_from_request(request)
return HttpResponseRedirect(reverse('knowledge_item_view',
args=[item.folder.treepath, item.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge'))
else:
form = KnowledgeItemForm(request.user.get_profile(), knowledgeType_id)
context = _get_default_context(request)
context.update({'items': items,
'form': form,
'folder': folder})
return render_to_response('knowledge/item_add_folder', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def item_view(request, folderPath, itemPath, response_format='html'):
"Single knowledge item view page"
try:
item = KnowledgeItem.by_path(folderPath, itemPath)
except KnowledgeItem.DoesNotExist:
raise Http404
if not item:
raise Http404
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(item):
return user_denied(request, message="You don't have access to this Knowledge Item")
context = _get_default_context(request)
context.update({'items': items,
'item': item})
return render_to_response('knowledge/item_view', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def item_edit(request, knowledgeItem_id, response_format='html'):
"Knowledge item edit page"
item = get_object_or_404(KnowledgeItem, pk=knowledgeItem_id)
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(item, mode="w"):
return user_denied(request, message="You don't have access to this Knowledge Item")
if request.POST:
if not 'cancel' in request.POST:
form = KnowledgeItemForm(
request.user.get_profile(), None, request.POST, instance=item)
if form.is_valid():
item = form.save()
return HttpResponseRedirect(reverse('knowledge_item_view',
args=[item.folder.treepath, item.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge_item_view',
args=[item.folder.treepath, item.treepath]))
else:
form = KnowledgeItemForm(
request.user.get_profile(), None, instance=item)
context = _get_default_context(request)
context.update({'form': form,
'item': item,
'items': items})
return render_to_response('knowledge/item_edit', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def item_delete(request, knowledgeItem_id, response_format='html'):
"Item delete"
item = get_object_or_404(KnowledgeItem, pk=knowledgeItem_id)
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(item, mode="w"):
return user_denied(request, message="You don't have access to this Knowledge Item")
if request.POST:
if 'delete' in request.POST:
if 'trash' in request.POST:
item.trash = True
item.save()
else:
item.delete()
return HttpResponseRedirect(reverse('knowledge_index'))
elif 'cancel' in request.POST:
return HttpResponseRedirect(reverse('knowledge_item_view',
args=[item.folder.treepath, item.treepath]))
context = _get_default_context(request)
context.update({'item': item,
'items': items})
return render_to_response('knowledge/item_delete', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def category_add(request, response_format='html'):
"Add new knowledge category"
if request.POST:
if not 'cancel' in request.POST:
category = KnowledgeCategory()
form = KnowledgeCategoryForm(request.POST, instance=category)
if form.is_valid():
category = form.save()
category.set_user_from_request(request)
return HttpResponseRedirect(reverse('knowledge_category_view', args=[category.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge_categories'))
else:
form = KnowledgeCategoryForm()
context = _get_default_context(request)
context.update({'form': form})
return render_to_response('knowledge/category_add', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
@_process_mass_form
def category_view(request, categoryPath, response_format='html'):
"Single knowledge category view page"
try:
category = KnowledgeCategory.by_path(categoryPath)
except KnowledgeCategory.DoesNotExist:
raise Http404
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(category):
return user_denied(request, message="You don't have access to this Knowledge Category")
context = _get_default_context(request)
context.update({'category': category,
'items': items})
return render_to_response('knowledge/category_view', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def category_edit(request, knowledgeCategory_id, response_format='html'):
"Knowledge category edit page"
category = get_object_or_404(KnowledgeCategory, pk=knowledgeCategory_id)
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(category, mode="w"):
return user_denied(request, message="You don't have access to this Knowledge Category")
if request.POST:
if not 'cancel' in request.POST:
form = KnowledgeCategoryForm(request.POST, instance=category)
if form.is_valid():
category = form.save()
return HttpResponseRedirect(reverse('knowledge_category_view', args=[category.treepath]))
else:
return HttpResponseRedirect(reverse('knowledge_category_view', args=[category.treepath]))
else:
form = KnowledgeCategoryForm(instance=category)
context = _get_default_context(request)
context.update({'form': form,
'category': category,
'items': items})
return render_to_response('knowledge/category_edit', context,
context_instance=RequestContext(request),
response_format=response_format)
@handle_response_format
@treeio_login_required
def category_delete(request, knowledgeCategory_id, response_format='html'):
"Knowledge Category delete"
category = get_object_or_404(KnowledgeCategory, pk=knowledgeCategory_id)
items = Object.filter_permitted(
manager=KnowledgeItem.objects, user=request.user.get_profile(), mode='r')
if not request.user.get_profile().has_permission(category, mode="w"):
return user_denied(request, message="You don't have access to this Knowledge Category")
if request.POST:
if 'delete' in request.POST:
if 'trash' in request.POST:
category.trash = True
category.save()
else:
category.delete()
return HttpResponseRedirect(reverse('knowledge_index'))
elif 'cancel' in request.POST:
return HttpResponseRedirect(reverse('knowledge_category_view', args=[category.treepath]))
context = _get_default_context(request)
context.update({'category': category,
'items': items})
return render_to_response('knowledge/category_delete', context,
context_instance=RequestContext(request),
response_format=response_format)
| 37.585106 | 105 | 0.646901 | 2,202 | 21,198 | 6.010445 | 0.072661 | 0.068757 | 0.037023 | 0.055535 | 0.821987 | 0.801511 | 0.772875 | 0.756328 | 0.717189 | 0.692558 | 0 | 0.0032 | 0.262808 | 21,198 | 563 | 106 | 37.651865 | 0.843732 | 0.035852 | 0 | 0.669704 | 0 | 0 | 0.103633 | 0.021563 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045558 | false | 0.002278 | 0.027335 | 0 | 0.189066 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d21a7fa06bb1d7d041ef1112f309c4c49769cad8 | 16,297 | py | Python | test/unit_tests/datautil/test_serialization.py | lapaill/braindecode | d5d6e34baef1c8df092e77d1f3e757b53d0e69ea | [
"BSD-3-Clause"
] | 301 | 2020-01-15T16:40:59.000Z | 2022-03-31T05:28:00.000Z | test/unit_tests/datautil/test_serialization.py | lapaill/braindecode | d5d6e34baef1c8df092e77d1f3e757b53d0e69ea | [
"BSD-3-Clause"
] | 325 | 2020-01-12T21:36:55.000Z | 2022-03-21T11:59:01.000Z | test/unit_tests/datautil/test_serialization.py | lapaill/braindecode | d5d6e34baef1c8df092e77d1f3e757b53d0e69ea | [
"BSD-3-Clause"
] | 98 | 2020-01-12T21:22:42.000Z | 2022-03-24T14:36:08.000Z | # Authors: Lukas Gemein <l.gemein@gmail.com>
#
# License: BSD-3
import os
import pytest
import numpy as np
import pandas as pd
from braindecode.datasets import BaseConcatDataset, MOABBDataset
from braindecode.preprocessing import (
create_windows_from_events, Preprocessor, preprocess)
from braindecode.datautil.serialization import (
load_concat_dataset, _check_save_dir_empty)
@pytest.fixture()
def setup_concat_raw_dataset():
return MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1])
@pytest.fixture()
def setup_concat_windows_dataset(setup_concat_raw_dataset):
moabb_dataset = setup_concat_raw_dataset
return create_windows_from_events(
concat_ds=moabb_dataset,
trial_start_offset_samples=0,
trial_stop_offset_samples=0)
def test_outdated_save_concat_raw_dataset(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
n_raw_datasets = len(concat_raw_dataset.datasets)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT USE!'):
concat_raw_dataset._outdated_save(path=tmpdir, overwrite=False)
assert os.path.exists(tmpdir.join("description.json"))
for raw_i in range(n_raw_datasets):
assert os.path.exists(tmpdir.join(f"{raw_i}-raw.fif"))
assert not os.path.exists(tmpdir.join(f"{n_raw_datasets}-raw.fif"))
def test_outdated_save_concat_windows_dataset(
setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
n_windows_datasets = len(concat_windows_dataset.datasets)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT USE!'):
concat_windows_dataset._outdated_save(path=tmpdir, overwrite=False)
assert os.path.exists(tmpdir.join("description.json"))
for windows_i in range(n_windows_datasets):
assert os.path.exists(tmpdir.join(f"{windows_i}-epo.fif"))
assert not os.path.exists(tmpdir.join(f"{n_windows_datasets}-epo.fif"))
def test_load_concat_raw_dataset(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
n_raw_datasets = len(concat_raw_dataset.datasets)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT USE!'):
concat_raw_dataset._outdated_save(path=tmpdir, overwrite=False)
with pytest.warns(
UserWarning, match="The way your dataset was saved is deprecated by"
" now. Please save it again using dataset.save()"
"."):
loaded_concat_raw_dataset = load_concat_dataset(
path=tmpdir, preload=False)
assert len(concat_raw_dataset) == len(loaded_concat_raw_dataset)
assert (len(concat_raw_dataset.datasets) ==
len(loaded_concat_raw_dataset.datasets))
assert (len(concat_raw_dataset.description) ==
len(loaded_concat_raw_dataset.description))
for raw_i in range(n_raw_datasets):
actual_x, actual_y = concat_raw_dataset[raw_i]
x, y = loaded_concat_raw_dataset[raw_i]
np.testing.assert_allclose(x, actual_x, rtol=1e-4, atol=1e-5)
pd.testing.assert_frame_equal(
concat_raw_dataset.description, loaded_concat_raw_dataset.description)
def test_load_concat_windows_dataset(setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
n_windows_datasets = len(concat_windows_dataset.datasets)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT USE!'):
concat_windows_dataset._outdated_save(path=tmpdir, overwrite=False)
with pytest.warns(
UserWarning, match="The way your dataset was saved is deprecated by"
" now. Please save it again using dataset.save()"
"."):
loaded_concat_windows_dataset = load_concat_dataset(
path=tmpdir, preload=False)
assert len(concat_windows_dataset) == len(loaded_concat_windows_dataset)
assert (len(concat_windows_dataset.datasets) ==
len(loaded_concat_windows_dataset.datasets))
assert (len(concat_windows_dataset.description) ==
len(loaded_concat_windows_dataset.description))
for windows_i in range(n_windows_datasets):
actual_x, actual_y, actual_crop_inds = concat_windows_dataset[windows_i]
x, y, crop_inds = loaded_concat_windows_dataset[windows_i]
np.testing.assert_allclose(x, actual_x, rtol=1e-4, atol=1e-5)
np.testing.assert_allclose(y, actual_y, rtol=1e-4, atol=1e-5)
np.testing.assert_array_equal(crop_inds, actual_crop_inds)
pd.testing.assert_frame_equal(concat_windows_dataset.description,
loaded_concat_windows_dataset.description)
def test_load_multiple_concat_raw_dataset(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
for i in range(2):
path = os.path.join(tmpdir, str(i))
os.makedirs(path)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT '
'USE!'):
concat_raw_dataset._outdated_save(path=path, overwrite=False)
with pytest.warns(
UserWarning, match="The way your dataset was saved is "
"deprecated by now. Please save it again "
"using dataset.save()."):
loaded_concat_raw_datasets = load_concat_dataset(
path=tmpdir, preload=False)
assert 2 * len(concat_raw_dataset) == len(loaded_concat_raw_datasets)
assert (2 * len(concat_raw_dataset.datasets) ==
len(loaded_concat_raw_datasets.datasets))
assert (2 * len(concat_raw_dataset.description) ==
len(loaded_concat_raw_datasets.description))
def test_load_multiple_concat_windows_dataset(setup_concat_windows_dataset,
tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
for i in range(2):
path = os.path.join(tmpdir, str(i))
os.makedirs(path)
with pytest.warns(
UserWarning, match='This function only exists for '
'backwards compatibility purposes. DO NOT '
'USE!'):
concat_windows_dataset._outdated_save(path=path, overwrite=False)
with pytest.warns(
UserWarning, match="The way your dataset was saved is "
"deprecated by now. Please save it again "
"using dataset.save()."):
loaded_concat_windows_datasets = load_concat_dataset(
path=tmpdir, preload=False)
assert 2 * len(concat_windows_dataset) == len(loaded_concat_windows_datasets)
assert (2 * len(concat_windows_dataset.datasets) ==
len(loaded_concat_windows_datasets.datasets))
assert (2 * len(concat_windows_dataset.description) ==
len(loaded_concat_windows_datasets.description))
def test_load_save_raw_preproc_kwargs(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
preprocess(concat_raw_dataset, [
Preprocessor('pick_channels', ch_names=['C3']),
])
concat_raw_dataset.save(tmpdir, overwrite=False)
for i in range(len(concat_raw_dataset.datasets)):
assert os.path.exists(os.path.join(tmpdir, str(i), 'raw_preproc_kwargs.json'))
loaded_concat_raw_dataset = load_concat_dataset(tmpdir, preload=False)
for ds in loaded_concat_raw_dataset.datasets:
assert ds.raw_preproc_kwargs == [
('pick_channels', {'ch_names': ['C3']}),
]
def test_load_save_window_preproc_kwargs(setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
concat_windows_dataset.save(tmpdir, overwrite=False)
for i in range(len(concat_windows_dataset.datasets)):
subdir = os.path.join(tmpdir, str(i))
assert os.path.exists(os.path.join(subdir, 'window_kwargs.json'))
preprocess(concat_windows_dataset, [
Preprocessor('pick_channels', ch_names=['Cz']),
])
concat_windows_dataset.save(tmpdir, overwrite=True)
for i in range(len(concat_windows_dataset.datasets)):
subdir = os.path.join(tmpdir, str(i))
assert os.path.exists(os.path.join(subdir, 'window_kwargs.json'))
assert os.path.exists(os.path.join(subdir, 'window_preproc_kwargs.json'))
loaded_concat_windows_dataset = load_concat_dataset(tmpdir, preload=False)
for ds in loaded_concat_windows_dataset.datasets:
assert ds.window_kwargs == [
('create_windows_from_events', {
'infer_mapping': True, 'infer_window_size_stride': True,
'trial_start_offset_samples': 0, 'trial_stop_offset_samples': 0,
'window_size_samples': None, 'window_stride_samples': None,
'drop_last_window': False, 'mapping': {
'feet': 0, 'left_hand': 1, 'right_hand': 2, 'tongue': 3},
'preload': False, 'drop_bad_windows': True, 'picks': None,
'reject': None, 'flat': None, 'on_missing': 'error'})
]
assert ds.window_preproc_kwargs == [
('pick_channels', {'ch_names': ['Cz']}),
]
def test_save_concat_raw_dataset(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
n_raw_datasets = len(concat_raw_dataset.datasets)
# assert no warning raised with 'new' saving function
with pytest.warns(None) as raised_warnings:
concat_raw_dataset.save(path=tmpdir, overwrite=False)
assert len(raised_warnings) == 0
for raw_i in range(n_raw_datasets):
subdir = os.path.join(tmpdir, str(raw_i))
assert os.path.exists(os.path.join(subdir, "description.json"))
assert os.path.exists(os.path.join(subdir, f"{raw_i}-raw.fif"))
assert not os.path.exists(os.path.join(tmpdir, f"{n_raw_datasets}"))
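An aside on the `pytest.warns(None)` idiom used above: recent pytest versions (7+) deprecate passing `None` to `pytest.warns` as a way of asserting that *no* warning fires. A stdlib equivalent can be written with `warnings.catch_warnings`; the helper name `assert_no_warnings` below is hypothetical, not part of the suite.

```python
import warnings

def assert_no_warnings(fn, *args, **kwargs):
    # Record every warning raised while fn runs, then require the list be empty.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = fn(*args, **kwargs)
    assert not caught, [str(w.message) for w in caught]
    return result

print(assert_no_warnings(lambda: 42))  # 42
```

The same shape is available natively in pytest 8 as `with warnings.catch_warnings(): warnings.simplefilter("error")`, which turns any warning into an exception.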
def test_save_concat_windows_dataset(setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
n_windows_datasets = len(concat_windows_dataset.datasets)
# assert no warning raised with 'new' saving function
with pytest.warns(None) as raised_warnings:
concat_windows_dataset.save(path=tmpdir, overwrite=False)
assert len(raised_warnings) == 0
for windows_i in range(n_windows_datasets):
subdir = os.path.join(tmpdir, str(windows_i))
assert os.path.exists(os.path.join(subdir, "description.json"))
assert os.path.exists(os.path.join(subdir, f"{windows_i}-epo.fif"))
assert not os.path.exists(os.path.join(tmpdir, f"{n_windows_datasets}"))
def test_load_concat_raw_dataset_parallel(setup_concat_raw_dataset, tmpdir):
concat_raw_dataset = setup_concat_raw_dataset
n_raw_datasets = len(concat_raw_dataset.datasets)
# assert no warning raised with 'new' saving function
with pytest.warns(None) as raised_warnings:
concat_raw_dataset.save(path=tmpdir, overwrite=False)
assert len(raised_warnings) == 0
# assert no warning raised with loading dataset saved in 'new' way
with pytest.warns(None) as raised_warnings:
loaded_concat_raw_dataset = load_concat_dataset(
path=tmpdir, preload=False, n_jobs=2)
assert len(raised_warnings) == 0
assert len(concat_raw_dataset) == len(loaded_concat_raw_dataset)
assert (len(concat_raw_dataset.datasets) ==
len(loaded_concat_raw_dataset.datasets))
assert (len(concat_raw_dataset.description) ==
len(loaded_concat_raw_dataset.description))
for raw_i in range(n_raw_datasets):
actual_x, actual_y = concat_raw_dataset[raw_i]
x, y = loaded_concat_raw_dataset[raw_i]
np.testing.assert_allclose(x, actual_x, rtol=1e-4, atol=1e-5)
pd.testing.assert_frame_equal(
concat_raw_dataset.description, loaded_concat_raw_dataset.description)
def test_load_concat_windows_dataset_parallel(setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
n_windows_datasets = len(concat_windows_dataset.datasets)
# assert no warning raised with 'new' saving function
with pytest.warns(None) as raised_warnings:
concat_windows_dataset.save(path=tmpdir, overwrite=False)
assert len(raised_warnings) == 0
    # assert warning raised because n_jobs is not supported with mne.Epochs
with pytest.warns(UserWarning, match='Parallelized reading with '
'`preload=False` is not supported for '
'windowed data. Will use `n_jobs=1`.'):
loaded_concat_windows_dataset = load_concat_dataset(
path=tmpdir, preload=False, n_jobs=2)
assert len(concat_windows_dataset) == len(loaded_concat_windows_dataset)
assert (len(concat_windows_dataset.datasets) ==
len(loaded_concat_windows_dataset.datasets))
assert (len(concat_windows_dataset.description) ==
len(loaded_concat_windows_dataset.description))
for windows_i in range(n_windows_datasets):
actual_x, actual_y, actual_crop_inds = concat_windows_dataset[windows_i]
x, y, crop_inds = loaded_concat_windows_dataset[windows_i]
np.testing.assert_allclose(x, actual_x, rtol=1e-4, atol=1e-5)
np.testing.assert_allclose(y, actual_y, rtol=1e-4, atol=1e-5)
np.testing.assert_array_equal(crop_inds, actual_crop_inds)
pd.testing.assert_frame_equal(concat_windows_dataset.description,
loaded_concat_windows_dataset.description)
def test_save_varying_number_of_datasets_with_overwrite(setup_concat_windows_dataset, tmpdir):
concat_windows_dataset = setup_concat_windows_dataset
concat_windows_dataset.save(path=tmpdir, overwrite=False)
subset = concat_windows_dataset.split([0])['0']
with pytest.warns(UserWarning, match='The number of saved datasets'):
subset.save(path=tmpdir, overwrite=True)
    # assert no warning raised when there are as many subdirectories as before
with pytest.warns(None) as raised_warnings:
concat_windows_dataset.save(path=tmpdir, overwrite=True)
assert len(raised_warnings) == 0
# assert no warning raised when there are more subdirectories than before
double_concat_windows_dataset = BaseConcatDataset(
[concat_windows_dataset, concat_windows_dataset])
with pytest.warns(None) as raised_warnings:
double_concat_windows_dataset.save(path=tmpdir, overwrite=True)
assert len(raised_warnings) == 0
def test_directory_contains_file(setup_concat_windows_dataset, tmpdir):
with open(os.path.join(tmpdir, 'test.txt'), 'w') as f:
f.write('test')
concat_windows_dataset = setup_concat_windows_dataset
with pytest.warns(UserWarning, match='Chosen directory'):
concat_windows_dataset.save(tmpdir)
def test_other_subdirectories_exist(setup_concat_windows_dataset, tmpdir):
os.mkdir(os.path.join(tmpdir, '999'))
concat_windows_dataset = setup_concat_windows_dataset
with pytest.warns(UserWarning, match='Chosen directory'):
concat_windows_dataset.save(tmpdir)
def test_subdirectory_already_exist(setup_concat_windows_dataset, tmpdir):
os.mkdir(os.path.join(tmpdir, '0'))
concat_windows_dataset = setup_concat_windows_dataset
with pytest.raises(FileExistsError, match='Subdirectory'):
concat_windows_dataset.save(tmpdir)
def test_check_save_dir_empty(setup_concat_raw_dataset, tmpdir):
_check_save_dir_empty(tmpdir)
setup_concat_raw_dataset.save(tmpdir)
with pytest.raises(FileExistsError):
_check_save_dir_empty(tmpdir)
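The test above exercises `_check_save_dir_empty`, which guards `save()` against clobbering an existing dataset directory. A minimal sketch of what such a helper might do (the name `check_save_dir_empty` is a hypothetical stand-in; the real braindecode helper may apply different rules):

```python
import os
import tempfile

def check_save_dir_empty(path):
    # Refuse to proceed if the target directory already has any entries.
    if os.listdir(path):
        raise FileExistsError(f"Directory {path!r} is not empty")

with tempfile.TemporaryDirectory() as d:
    check_save_dir_empty(d)  # an empty directory passes silently
```

Performing the check up front, before any file is written, keeps a failed save from leaving a half-populated directory behind.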
| 48.35905 | 94 | 0.700804 | 2,082 | 16,297 | 5.157541 | 0.096061 | 0.110169 | 0.162041 | 0.048892 | 0.865338 | 0.825945 | 0.784038 | 0.753306 | 0.729838 | 0.698827 | 0 | 0.005288 | 0.210959 | 16,297 | 336 | 95 | 48.502976 | 0.829769 | 0.033564 | 0 | 0.617857 | 0 | 0 | 0.106049 | 0.01417 | 0 | 0 | 0 | 0 | 0.203571 | 1 | 0.067857 | false | 0 | 0.025 | 0.003571 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d26c624b11aaa57a81f3cea9032373c9a6469933 | 68 | py | Python | tests/conftest.py | jsbeckwith/unweaver | a4ba9e4e288c75e93bf7f9d67bc11680f09c3da0 | [
"Apache-2.0"
] | 4 | 2019-04-24T16:38:57.000Z | 2021-12-28T20:38:08.000Z | tests/conftest.py | jsbeckwith/unweaver | a4ba9e4e288c75e93bf7f9d67bc11680f09c3da0 | [
"Apache-2.0"
] | 3 | 2021-06-02T04:06:33.000Z | 2021-11-02T01:47:20.000Z | tests/conftest.py | jsbeckwith/unweaver | a4ba9e4e288c75e93bf7f9d67bc11680f09c3da0 | [
"Apache-2.0"
] | 1 | 2020-08-13T04:42:05.000Z | 2020-08-13T04:42:05.000Z | from .fixtures import built_G, built_G_weighted, test_waypoint_legs
| 34 | 67 | 0.867647 | 11 | 68 | 4.909091 | 0.818182 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 68 | 1 | 68 | 68 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
963000c4e912be28722dc48fd1100ef0bf1c1260 | 47 | py | Python | toponimos_peru/models/__init__.py | juazisco/gestion_rifa | bce6b75f17cb5ab2df7e2f7dd5141fc85a1a5bfb | [
"MIT"
] | null | null | null | toponimos_peru/models/__init__.py | juazisco/gestion_rifa | bce6b75f17cb5ab2df7e2f7dd5141fc85a1a5bfb | [
"MIT"
] | null | null | null | toponimos_peru/models/__init__.py | juazisco/gestion_rifa | bce6b75f17cb5ab2df7e2f7dd5141fc85a1a5bfb | [
"MIT"
] | null | null | null | from . import res_country, res_partner # noqa
| 23.5 | 46 | 0.765957 | 7 | 47 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 1 | 47 | 47 | 0.871795 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9693dc1b8c34a8da3963db46b005a2d6e2552aee | 98 | py | Python | 5-tests/blog_app/utils.py | rcmgn/kts-school-backend | 8a895043b7f0156ec49554504198b631df41d2cd | [
"MIT"
] | 9 | 2021-02-04T07:00:59.000Z | 2022-03-21T06:28:27.000Z | 5-tests/blog_app/utils.py | rcmgn/kts-school-backend | 8a895043b7f0156ec49554504198b631df41d2cd | [
"MIT"
] | null | null | null | 5-tests/blog_app/utils.py | rcmgn/kts-school-backend | 8a895043b7f0156ec49554504198b631df41d2cd | [
"MIT"
] | 4 | 2021-10-20T18:44:22.000Z | 2022-02-16T19:11:49.000Z | import datetime
from dateutil import tz


def now():
    return datetime.datetime.now(tz=tz.UTC)
| 12.25 | 43 | 0.734694 | 15 | 98 | 4.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173469 | 98 | 7 | 44 | 14 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
969ec1661e53478f29892f69799a29ee07ad3707 | 34 | py | Python | CodeWars/8 Kyu/get ascii value of character.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/get ascii value of character.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/get ascii value of character.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | def get_ascii(c):
return ord(c) | 17 | 17 | 0.676471 | 7 | 34 | 3.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 34 | 2 | 18 | 17 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
739ad2f27de1a23400099cd4e7d53b72dab71c01 | 61 | py | Python | polus-cell-nuclei-segmentation/src/dsb2018_topcoders/albu/src/bowl_eval.py | nishaq503/polus-plugins-dl | 511689e82eb29a84761538144277d1be1af7aa44 | [
"MIT"
] | null | null | null | polus-cell-nuclei-segmentation/src/dsb2018_topcoders/albu/src/bowl_eval.py | nishaq503/polus-plugins-dl | 511689e82eb29a84761538144277d1be1af7aa44 | [
"MIT"
] | 1 | 2021-09-09T23:22:16.000Z | 2021-09-09T23:22:16.000Z | polus-cell-nuclei-segmentation/src/dsb2018_topcoders/albu/src/bowl_eval.py | nishaq503/polus-plugins-dl | 511689e82eb29a84761538144277d1be1af7aa44 | [
"MIT"
] | 4 | 2021-06-22T13:54:52.000Z | 2022-01-26T19:23:39.000Z | from bowl_train import eval_bowl
import torch
eval_bowl() | 15.25 | 33 | 0.803279 | 10 | 61 | 4.6 | 0.6 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163934 | 61 | 4 | 34 | 15.25 | 0.901961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |