hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8a46c9196deb1cdacc33456bf12b87c9892777f9 | 26 | py | Python | flask_tus/__init__.py | eokeeffe/flask-tus | c2d29d7ac4435fa113fe1f88957df44146066bf9 | [
"MIT"
] | 3 | 2020-02-02T10:14:23.000Z | 2021-01-05T11:38:23.000Z | flask_tus/__init__.py | eokeeffe/flask-tus | c2d29d7ac4435fa113fe1f88957df44146066bf9 | [
"MIT"
] | 1 | 2021-03-03T18:15:00.000Z | 2021-03-04T18:12:27.000Z | flask_tus/__init__.py | eokeeffe/flask-tus | c2d29d7ac4435fa113fe1f88957df44146066bf9 | [
"MIT"
] | 2 | 2020-02-29T21:20:48.000Z | 2021-03-05T10:41:35.000Z | from .app import FlaskTus
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a47d6c29e6444ace214cdf49fe5a53dea932212 | 16,461 | py | Python | tests/test_gh_issue.py | sabderemane/bedevere | 1f7ab8324e3f75071af10fd086bccfe44067d869 | [
"Apache-2.0"
] | null | null | null | tests/test_gh_issue.py | sabderemane/bedevere | 1f7ab8324e3f75071af10fd086bccfe44067d869 | [
"Apache-2.0"
] | null | null | null | tests/test_gh_issue.py | sabderemane/bedevere | 1f7ab8324e3f75071af10fd086bccfe44067d869 | [
"Apache-2.0"
] | null | null | null | from unittest import mock
import http
import aiohttp
import pytest
import gidgethub
from gidgethub import sansio
from bedevere import gh_issue
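# FakeGH is a minimal stand-in for a gidgethub API client: getitem() serves a
# canned response (or raises a canned exception), and post() records every URL
# and payload so the tests can inspect the statuses sent to GitHub.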
class FakeGH:
def __init__(self, *, getitem=None, post=None):
self._getitem_return = getitem
self._post_return = post
self.post_url = []
self.post_data = []
async def getitem(self, url):
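# Raise the canned exception if one was supplied; otherwise return the canned data.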
if isinstance(self._getitem_return, Exception):
raise self._getitem_return
return self._getitem_return
async def post(self, url, *, data):
self.post_url.append(url)
self.post_data.append(data)
return self._post_return
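# Each test below feeds a synthetic webhook payload through gh_issue.router and
# then asserts on the commit status that was POSTed via the FakeGH double.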
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_failure(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "No issue in title",
"issue_url": "issue URL",
},
}
issue_data = {
"labels": [
{"name": "non-trivial"},
]
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH(getitem=issue_data)
await gh_issue.router.dispatch(event, gh, session=None)
status = gh.post_data[0]
assert status["state"] == "failure"
assert status["target_url"].startswith("https://devguide.python.org")
assert status["context"] == "bedevere/issue-number"
gh_issue._validate_issue_number.assert_not_awaited()
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_failure_via_issue_not_found_on_github(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=False))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "gh-123: Invalid issue number",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
async with aiohttp.ClientSession() as session:
await gh_issue.router.dispatch(event, gh, session=session)
status = gh.post_data[0]
assert status["state"] == "failure"
assert status["target_url"] == "https://github.com/python/cpython/issues/123"
assert status["context"] == "bedevere/issue-number"
assert status["description"] == "GH Issue #123 is not valid."
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_success_issue_found_on_bpo(action):
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "bpo-12345: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
async with aiohttp.ClientSession() as session:
await gh_issue.router.dispatch(event, gh, session=session)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"].endswith("bpo=12345")
assert "12345" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_success(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "[3.6] gh-1234: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/1234"
assert "1234" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
gh_issue._validate_issue_number.assert_awaited_with(gh, 1234, session=None, kind="gh")
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_success_issue_found_on_gh(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "gh-12345: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
async with aiohttp.ClientSession() as session:
await gh_issue.router.dispatch(event, gh, session=session)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/12345"
assert "12345" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_success_issue_found_on_gh_ignore_case(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "GH-12345: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
async with aiohttp.ClientSession() as session:
await gh_issue.router.dispatch(event, gh, session=session)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/12345"
assert "12345" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
@pytest.mark.parametrize("action", ["opened", "synchronize", "reopened"])
async def test_set_status_success_via_skip_issue_label(action, monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": action,
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "No issue in title",
"issue_url": "issue URL",
},
}
issue_data = {
"labels": [
{"name": "skip issue"},
]
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH(getitem=issue_data)
await gh_issue.router.dispatch(event, gh, session=None)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
gh_issue._validate_issue_number.assert_not_awaited()
@pytest.mark.asyncio
async def test_edit_title(monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "gh-1234: an issue!",
},
"action": "edited",
"changes": {"title": "thingy"},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
assert len(gh.post_data) == 1
gh_issue._validate_issue_number.assert_awaited_with(gh, 1234, session=None, kind="gh")
@pytest.mark.asyncio
async def test_no_body_when_edit_title(monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": "edited",
"pull_request": {
"url": "https://api.github.com/repos/python/cpython/pulls/5291",
"title": "gh-32636: Fix @asyncio.coroutine debug mode bug",
"body": None,
"statuses_url": "https://api.github.com/repos/python/cpython/statuses/98d60953c85df9f0f28e04322a4c4ebec7b180f4",
},
"changes": {
"title": "gh-32636: Fix @asyncio.coroutine debug mode bug exposed by #5250."
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
gh_issue._validate_issue_number.assert_awaited_with(gh, 32636, session=None, kind="gh")
@pytest.mark.asyncio
async def test_edit_other_than_title(monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "bpo-1234: an issue!",
},
"action": "edited",
"changes": {"stuff": "thingy"},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
assert len(gh.post_data) == 0
gh_issue._validate_issue_number.assert_not_awaited()
@pytest.mark.asyncio
async def test_new_label_skip_issue_no_issue():
data = {
"action": "labeled",
"label": {"name": "skip issue"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "An easy fix",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh)
assert gh.post_data[0]["state"] == "success"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
async def test_new_label_skip_issue_with_issue_number():
data = {
"action": "labeled",
"label": {"name": "skip issue"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "Revert gh-1234: revert an easy fix",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/1234"
assert "1234" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
async def test_new_label_skip_issue_with_issue_number_ignore_case():
data = {
"action": "labeled",
"label": {"name": "skip issue"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "Revert Gh-1234: revert an easy fix",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/1234"
assert "1234" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
@pytest.mark.asyncio
async def test_new_label_not_skip_issue():
data = {
"action": "labeled",
"label": {"name": "non-trivial"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh)
assert len(gh.post_data) == 0
@pytest.mark.asyncio
async def test_removed_label_from_label_deletion(monkeypatch):
"""When a label is completely deleted from a repo, it triggers an 'unlabeled'
event, but the payload has no details about the removed label."""
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": "unlabeled",
# No "label" key.
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "gh-1234: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
assert len(gh.post_data) == 0
gh_issue._validate_issue_number.assert_not_awaited()
@pytest.mark.asyncio
async def test_removed_label_skip_issue(monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": "unlabeled",
"label": {"name": "skip issue"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
"title": "gh-1234: an issue!",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
status = gh.post_data[0]
assert status["state"] == "success"
assert status["target_url"] == "https://github.com/python/cpython/issues/1234"
assert "1234" in status["description"]
assert status["context"] == "bedevere/issue-number"
assert "git-sha" in gh.post_url[0]
gh_issue._validate_issue_number.assert_awaited_with(gh, 1234, session=None, kind="gh")
@pytest.mark.asyncio
async def test_removed_label_non_skip_issue(monkeypatch):
monkeypatch.setattr(gh_issue, '_validate_issue_number',
mock.AsyncMock(return_value=True))
data = {
"action": "unlabeled",
"label": {"name": "non-trivial"},
"pull_request": {
"statuses_url": "https://api.github.com/blah/blah/git-sha",
},
}
event = sansio.Event(data, event="pull_request", delivery_id="12345")
gh = FakeGH()
await gh_issue.router.dispatch(event, gh, session=None)
assert len(gh.post_data) == 0
gh_issue._validate_issue_number.assert_not_awaited()
@pytest.mark.asyncio
async def test_validate_issue_number_valid_on_github():
gh = FakeGH(getitem={"number": 123})
async with aiohttp.ClientSession() as session:
response = await gh_issue._validate_issue_number(gh, 123, session=session)
assert response is True
@pytest.mark.asyncio
async def test_validate_issue_number_valid_on_bpo():
gh = FakeGH(getitem={"number": 1234})
async with aiohttp.ClientSession() as session:
response = await gh_issue._validate_issue_number(
gh, 1234, kind="bpo", session=session
)
assert response is True
@pytest.mark.asyncio
async def test_validate_issue_number_is_pr_on_github():
gh = FakeGH(getitem={
"number": 123,
"pull_request": {"html_url": "https://github.com/python/cpython/pull/123"}
})
async with aiohttp.ClientSession() as session:
response = await gh_issue._validate_issue_number(gh, 123, session=session)
assert response is False
@pytest.mark.asyncio
async def test_validate_issue_number_is_not_valid():
gh = FakeGH(
getitem=gidgethub.BadRequest(
status_code=http.HTTPStatus(404)
)
)
async with aiohttp.ClientSession() as session:
response = await gh_issue._validate_issue_number(gh, 123, session=session)
assert response is False
@pytest.mark.asyncio
async def test_validate_issue_number_coverage100():
gh = FakeGH(getitem={"number": 1234})
async with aiohttp.ClientSession() as session:
with pytest.raises(ValueError):
await gh_issue._validate_issue_number(
gh, 123, session=session, kind="invalid" # type: ignore
)
| 36.418142 | 124 | 0.643339 | 2,020 | 16,461 | 5.040099 | 0.081188 | 0.03094 | 0.057853 | 0.051076 | 0.892545 | 0.892545 | 0.876338 | 0.86573 | 0.851881 | 0.839014 | 0 | 0.025143 | 0.21475 | 16,461 | 451 | 125 | 36.498891 | 0.762494 | 0.001701 | 0 | 0.681013 | 0 | 0 | 0.247989 | 0.02911 | 0 | 0 | 0 | 0 | 0.164557 | 1 | 0.002532 | false | 0 | 0.017722 | 0 | 0.027848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ab310fc1dd3b9e5cbfcc64409c415699893a5c9 | 185 | py | Python | mythril/laser/ethereum/transaction/__init__.py | kalloc/mythril | eeee4d7c459189d278ac82a38c5fb778ddd58cdc | [
"MIT"
] | 1,887 | 2018-01-07T10:16:08.000Z | 2022-03-31T16:07:26.000Z | mythril/laser/ethereum/transaction/__init__.py | strawberrylady99/mythril | 727d5f3049333f71ccd90a95ca8fe13368aa9c15 | [
"MIT"
] | 746 | 2018-01-09T07:14:01.000Z | 2022-03-31T08:12:44.000Z | mythril/laser/ethereum/transaction/__init__.py | strawberrylady99/mythril | 727d5f3049333f71ccd90a95ca8fe13368aa9c15 | [
"MIT"
] | 431 | 2018-01-08T07:47:59.000Z | 2022-03-31T13:00:51.000Z | from mythril.laser.ethereum.transaction.transaction_models import *
from mythril.laser.ethereum.transaction.symbolic import (
execute_message_call,
execute_contract_creation,
)
| 30.833333 | 67 | 0.827027 | 21 | 185 | 7.047619 | 0.619048 | 0.148649 | 0.216216 | 0.324324 | 0.472973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102703 | 185 | 5 | 68 | 37 | 0.891566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0a0b89b4a10dbf8a242e4587316888e42073b3e2 | 102 | py | Python | py_fitness/py_fitness/workout/tests/test_views.py | audiolion/py-fitness | 9e0ca785c73a07cb788685bbde6e840a7a2e3419 | [
"MIT"
] | 1 | 2017-04-17T19:59:15.000Z | 2017-04-17T19:59:15.000Z | py_fitness/py_fitness/workout/tests/test_views.py | audiolion/py-fitness | 9e0ca785c73a07cb788685bbde6e840a7a2e3419 | [
"MIT"
] | 1 | 2016-12-09T01:58:46.000Z | 2016-12-09T01:58:46.000Z | py_fitness/py_fitness/workout/tests/test_views.py | audiolion/py-fitness | 9e0ca785c73a07cb788685bbde6e840a7a2e3419 | [
"MIT"
] | null | null | null | from django.test import RequestFactory
from test_plus.test import TestCase
# from ..views import ()
| 17 | 38 | 0.784314 | 14 | 102 | 5.642857 | 0.571429 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 102 | 5 | 39 | 20.4 | 0.908046 | 0.215686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a5ba74c1189b296edb2e29b4065d42f4c4a5566 | 33,974 | py | Python | tests/unit/test_backup.py | chrisrossi/python-bigtable | be1703812754dd8b5fbce306c143554b6341a04d | [
"Apache-2.0"
] | null | null | null | tests/unit/test_backup.py | chrisrossi/python-bigtable | be1703812754dd8b5fbce306c143554b6341a04d | [
"Apache-2.0"
] | null | null | null | tests/unit/test_backup.py | chrisrossi/python-bigtable | be1703812754dd8b5fbce306c143554b6341a04d | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Google LLC All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import mock
import unittest
from ._testing import _make_credentials
from google.cloud._helpers import UTC
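# The constants below spell out the Bigtable resource hierarchy exercised by the
# tests: project -> instance -> cluster -> backup, plus instance -> table.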
class TestBackup(unittest.TestCase):
PROJECT_ID = "project-id"
INSTANCE_ID = "instance-id"
INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID
CLUSTER_ID = "cluster-id"
CLUSTER_NAME = INSTANCE_NAME + "/clusters/" + CLUSTER_ID
TABLE_ID = "table-id"
TABLE_NAME = INSTANCE_NAME + "/tables/" + TABLE_ID
BACKUP_ID = "backup-id"
BACKUP_NAME = CLUSTER_NAME + "/backups/" + BACKUP_ID
@staticmethod
def _get_target_class():
from google.cloud.bigtable.backup import Backup
return Backup
@staticmethod
def _make_table_admin_client():
from google.cloud.bigtable_admin_v2 import BigtableTableAdminClient
return mock.create_autospec(BigtableTableAdminClient, instance=True)
def _make_one(self, *args, **kwargs):
return self._get_target_class()(*args, **kwargs)
def _make_timestamp(self):
return datetime.datetime.utcnow().replace(tzinfo=UTC)
def test_constructor_defaults(self):
instance = _Instance(self.INSTANCE_NAME)
backup = self._make_one(self.BACKUP_ID, instance)
self.assertEqual(backup.backup_id, self.BACKUP_ID)
self.assertIs(backup._instance, instance)
self.assertIsNone(backup._cluster)
self.assertIsNone(backup.table_id)
self.assertIsNone(backup._expire_time)
self.assertIsNone(backup._parent)
self.assertIsNone(backup._source_table)
self.assertIsNone(backup._start_time)
self.assertIsNone(backup._end_time)
self.assertIsNone(backup._size_bytes)
self.assertIsNone(backup._state)
self.assertIsNone(backup._encryption_info)
def test_constructor_non_defaults(self):
instance = _Instance(self.INSTANCE_NAME)
expire_time = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
instance,
cluster_id=self.CLUSTER_ID,
table_id=self.TABLE_ID,
expire_time=expire_time,
encryption_info="encryption_info",
)
self.assertEqual(backup.backup_id, self.BACKUP_ID)
self.assertIs(backup._instance, instance)
self.assertIs(backup._cluster, self.CLUSTER_ID)
self.assertEqual(backup.table_id, self.TABLE_ID)
self.assertEqual(backup._expire_time, expire_time)
self.assertEqual(backup._encryption_info, "encryption_info")
self.assertIsNone(backup._parent)
self.assertIsNone(backup._source_table)
self.assertIsNone(backup._start_time)
self.assertIsNone(backup._end_time)
self.assertIsNone(backup._size_bytes)
self.assertIsNone(backup._state)
def test_from_pb_project_mismatch(self):
from google.cloud.bigtable_admin_v2.types import table
alt_project_id = "alt-project-id"
client = _Client(project=alt_project_id)
instance = _Instance(self.INSTANCE_NAME, client)
backup_pb = table.Backup(name=self.BACKUP_NAME)
klasse = self._get_target_class()
with self.assertRaises(ValueError):
klasse.from_pb(backup_pb, instance)
def test_from_pb_instance_mismatch(self):
from google.cloud.bigtable_admin_v2.types import table
alt_instance = "/projects/%s/instances/alt-instance" % self.PROJECT_ID
client = _Client()
instance = _Instance(alt_instance, client)
backup_pb = table.Backup(name=self.BACKUP_NAME)
klasse = self._get_target_class()
with self.assertRaises(ValueError):
klasse.from_pb(backup_pb, instance)
def test_from_pb_bad_name(self):
from google.cloud.bigtable_admin_v2.types import table
client = _Client()
instance = _Instance(self.INSTANCE_NAME, client)
backup_pb = table.Backup(name="invalid_name")
klasse = self._get_target_class()
with self.assertRaises(ValueError):
klasse.from_pb(backup_pb, instance)
def test_from_pb_success(self):
from google.cloud.bigtable.encryption_info import EncryptionInfo
from google.cloud.bigtable.error import Status
from google.cloud.bigtable_admin_v2.types import table
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.rpc.code_pb2 import Code
client = _Client()
instance = _Instance(self.INSTANCE_NAME, client)
timestamp = _datetime_to_pb_timestamp(self._make_timestamp())
size_bytes = 1234
state = table.Backup.State.READY
GOOGLE_DEFAULT_ENCRYPTION = (
table.EncryptionInfo.EncryptionType.GOOGLE_DEFAULT_ENCRYPTION
)
backup_pb = table.Backup(
name=self.BACKUP_NAME,
source_table=self.TABLE_NAME,
expire_time=timestamp,
start_time=timestamp,
end_time=timestamp,
size_bytes=size_bytes,
state=state,
encryption_info=table.EncryptionInfo(
encryption_type=GOOGLE_DEFAULT_ENCRYPTION,
encryption_status=_StatusPB(Code.OK, "Status OK"),
kms_key_version="2",
),
)
klasse = self._get_target_class()
backup = klasse.from_pb(backup_pb, instance)
self.assertTrue(isinstance(backup, klasse))
self.assertEqual(backup._instance, instance)
self.assertEqual(backup.backup_id, self.BACKUP_ID)
self.assertEqual(backup.cluster, self.CLUSTER_ID)
self.assertEqual(backup.table_id, self.TABLE_ID)
self.assertEqual(backup._expire_time, timestamp)
self.assertEqual(backup.start_time, timestamp)
self.assertEqual(backup.end_time, timestamp)
self.assertEqual(backup._size_bytes, size_bytes)
self.assertEqual(backup._state, state)
self.assertEqual(
backup.encryption_info,
EncryptionInfo(
encryption_type=GOOGLE_DEFAULT_ENCRYPTION,
encryption_status=Status(_StatusPB(Code.OK, "Status OK")),
kms_key_version="2",
),
)
def test_property_name(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_instance_admin import (
BigtableInstanceAdminClient,
)
api = mock.create_autospec(BigtableInstanceAdminClient)
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
client._table_admin_client = api
instance = _Instance(self.INSTANCE_NAME, client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
self.assertEqual(backup.name, self.BACKUP_NAME)
def test_property_cluster(self):
backup = self._make_one(
self.BACKUP_ID, _Instance(self.INSTANCE_NAME), cluster_id=self.CLUSTER_ID
)
self.assertEqual(backup.cluster, self.CLUSTER_ID)
def test_property_cluster_setter(self):
backup = self._make_one(self.BACKUP_ID, _Instance(self.INSTANCE_NAME))
backup.cluster = self.CLUSTER_ID
self.assertEqual(backup.cluster, self.CLUSTER_ID)
def test_property_parent_none(self):
backup = self._make_one(self.BACKUP_ID, _Instance(self.INSTANCE_NAME),)
self.assertIsNone(backup.parent)
def test_property_parent_w_cluster(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_instance_admin import (
BigtableInstanceAdminClient,
)
api = mock.create_autospec(BigtableInstanceAdminClient)
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
client._table_admin_client = api
instance = _Instance(self.INSTANCE_NAME, client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
self.assertEqual(backup._cluster, self.CLUSTER_ID)
self.assertEqual(backup.parent, self.CLUSTER_NAME)
def test_property_source_table_none(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_instance_admin import (
BigtableInstanceAdminClient,
)
api = mock.create_autospec(BigtableInstanceAdminClient)
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
client._table_admin_client = api
instance = _Instance(self.INSTANCE_NAME, client)
backup = self._make_one(self.BACKUP_ID, instance)
self.assertIsNone(backup.source_table)
def test_property_source_table_valid(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_instance_admin import (
BigtableInstanceAdminClient,
)
api = mock.create_autospec(BigtableInstanceAdminClient)
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
client._table_admin_client = api
instance = _Instance(self.INSTANCE_NAME, client)
backup = self._make_one(self.BACKUP_ID, instance, table_id=self.TABLE_ID)
self.assertEqual(backup.source_table, self.TABLE_NAME)
def test_property_expire_time(self):
instance = _Instance(self.INSTANCE_NAME)
expire_time = self._make_timestamp()
backup = self._make_one(self.BACKUP_ID, instance, expire_time=expire_time)
self.assertEqual(backup.expire_time, expire_time)
def test_property_expire_time_setter(self):
instance = _Instance(self.INSTANCE_NAME)
expire_time = self._make_timestamp()
backup = self._make_one(self.BACKUP_ID, instance)
backup.expire_time = expire_time
self.assertEqual(backup.expire_time, expire_time)
def test_property_start_time(self):
instance = _Instance(self.INSTANCE_NAME)
backup = self._make_one(self.BACKUP_ID, instance)
expected = backup._start_time = self._make_timestamp()
self.assertEqual(backup.start_time, expected)
def test_property_end_time(self):
instance = _Instance(self.INSTANCE_NAME)
backup = self._make_one(self.BACKUP_ID, instance)
expected = backup._end_time = self._make_timestamp()
self.assertEqual(backup.end_time, expected)
def test_property_size(self):
instance = _Instance(self.INSTANCE_NAME)
backup = self._make_one(self.BACKUP_ID, instance)
expected = backup._size_bytes = 10
self.assertEqual(backup.size_bytes, expected)
def test_property_state(self):
from google.cloud.bigtable_admin_v2.types import table
instance = _Instance(self.INSTANCE_NAME)
backup = self._make_one(self.BACKUP_ID, instance)
expected = backup._state = table.Backup.State.READY
self.assertEqual(backup.state, expected)
def test___eq__(self):
instance = object()
backup1 = self._make_one(self.BACKUP_ID, instance)
backup2 = self._make_one(self.BACKUP_ID, instance)
self.assertTrue(backup1 == backup2)
def test___eq__different_types(self):
instance = object()
backup1 = self._make_one(self.BACKUP_ID, instance)
backup2 = object()
self.assertFalse(backup1 == backup2)
def test___ne__same_value(self):
instance = object()
backup1 = self._make_one(self.BACKUP_ID, instance)
backup2 = self._make_one(self.BACKUP_ID, instance)
self.assertFalse(backup1 != backup2)
def test___ne__(self):
backup1 = self._make_one("backup_1", "instance1")
backup2 = self._make_one("backup_2", "instance2")
self.assertTrue(backup1 != backup2)
def test_create_grpc_error(self):
from google.api_core.exceptions import GoogleAPICallError
from google.api_core.exceptions import Unknown
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.create_backup.side_effect = Unknown("testing")
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
table_id=self.TABLE_ID,
expire_time=timestamp,
)
backup_pb = table.Backup(
source_table=self.TABLE_NAME,
expire_time=_datetime_to_pb_timestamp(timestamp),
)
with self.assertRaises(GoogleAPICallError):
backup.create(self.CLUSTER_ID)
api.create_backup.assert_called_once_with(
request={
"parent": self.CLUSTER_NAME,
"backup_id": self.BACKUP_ID,
"backup": backup_pb,
}
)
def test_create_already_exists(self):
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.cloud.exceptions import Conflict
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.create_backup.side_effect = Conflict("testing")
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
table_id=self.TABLE_ID,
expire_time=timestamp,
)
backup_pb = table.Backup(
source_table=self.TABLE_NAME,
expire_time=_datetime_to_pb_timestamp(timestamp),
)
with self.assertRaises(Conflict):
backup.create(self.CLUSTER_ID)
api.create_backup.assert_called_once_with(
request={
"parent": self.CLUSTER_NAME,
"backup_id": self.BACKUP_ID,
"backup": backup_pb,
}
)
def test_create_instance_not_found(self):
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.cloud.exceptions import NotFound
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.create_backup.side_effect = NotFound("testing")
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
table_id=self.TABLE_ID,
expire_time=timestamp,
)
backup_pb = table.Backup(
source_table=self.TABLE_NAME,
expire_time=_datetime_to_pb_timestamp(timestamp),
)
with self.assertRaises(NotFound):
backup.create(self.CLUSTER_ID)
api.create_backup.assert_called_once_with(
request={
"parent": self.CLUSTER_NAME,
"backup_id": self.BACKUP_ID,
"backup": backup_pb,
}
)
def test_create_cluster_not_set(self):
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME),
table_id=self.TABLE_ID,
expire_time=self._make_timestamp(),
)
with self.assertRaises(ValueError):
backup.create()
def test_create_table_not_set(self):
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME),
expire_time=self._make_timestamp(),
)
with self.assertRaises(ValueError):
backup.create(self.CLUSTER_ID)
def test_create_expire_time_not_set(self):
backup = self._make_one(
self.BACKUP_ID, _Instance(self.INSTANCE_NAME), table_id=self.TABLE_ID,
)
with self.assertRaises(ValueError):
backup.create(self.CLUSTER_ID)
def test_create_success(self):
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.cloud.bigtable import Client
op_future = object()
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
api = client._table_admin_client = self._make_table_admin_client()
api.create_backup.return_value = op_future
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
table_id=self.TABLE_ID,
expire_time=timestamp,
)
backup_pb = table.Backup(
source_table=self.TABLE_NAME,
expire_time=_datetime_to_pb_timestamp(timestamp),
)
future = backup.create(self.CLUSTER_ID)
self.assertEqual(backup._cluster, self.CLUSTER_ID)
self.assertIs(future, op_future)
api.create_backup.assert_called_once_with(
request={
"parent": self.CLUSTER_NAME,
"backup_id": self.BACKUP_ID,
"backup": backup_pb,
}
)
def test_exists_grpc_error(self):
from google.api_core.exceptions import Unknown
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.get_backup.side_effect = Unknown("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
with self.assertRaises(Unknown):
backup.exists()
api.get_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_exists_not_found(self):
from google.api_core.exceptions import NotFound
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.get_backup.side_effect = NotFound("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
self.assertFalse(backup.exists())
api.get_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_get(self):
from google.cloud.bigtable_admin_v2.types import table
from google.cloud._helpers import _datetime_to_pb_timestamp
timestamp = _datetime_to_pb_timestamp(self._make_timestamp())
state = table.Backup.State.READY
client = _Client()
backup_pb = table.Backup(
name=self.BACKUP_NAME,
source_table=self.TABLE_NAME,
expire_time=timestamp,
start_time=timestamp,
end_time=timestamp,
size_bytes=0,
state=state,
)
api = client._table_admin_client = self._make_table_admin_client()
api.get_backup.return_value = backup_pb
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
self.assertEqual(backup.get(), backup_pb)
def test_reload(self):
from google.cloud.bigtable_admin_v2.types import table
from google.cloud._helpers import _datetime_to_pb_timestamp
timestamp = _datetime_to_pb_timestamp(self._make_timestamp())
state = table.Backup.State.READY
client = _Client()
backup_pb = table.Backup(
name=self.BACKUP_NAME,
source_table=self.TABLE_NAME,
expire_time=timestamp,
start_time=timestamp,
end_time=timestamp,
size_bytes=0,
state=state,
)
api = client._table_admin_client = self._make_table_admin_client()
api.get_backup.return_value = backup_pb
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
backup.reload()
self.assertEqual(backup._source_table, self.TABLE_NAME)
self.assertEqual(backup._expire_time, timestamp)
self.assertEqual(backup._start_time, timestamp)
self.assertEqual(backup._end_time, timestamp)
self.assertEqual(backup._size_bytes, 0)
self.assertEqual(backup._state, state)
def test_exists_success(self):
from google.cloud.bigtable_admin_v2.types import table
client = _Client()
backup_pb = table.Backup(name=self.BACKUP_NAME)
api = client._table_admin_client = self._make_table_admin_client()
api.get_backup.return_value = backup_pb
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
self.assertTrue(backup.exists())
api.get_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_delete_grpc_error(self):
from google.api_core.exceptions import Unknown
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.delete_backup.side_effect = Unknown("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
with self.assertRaises(Unknown):
backup.delete()
api.delete_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_delete_not_found(self):
from google.api_core.exceptions import NotFound
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.delete_backup.side_effect = NotFound("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
with self.assertRaises(NotFound):
backup.delete()
api.delete_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_delete_success(self):
from google.protobuf.empty_pb2 import Empty
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.delete_backup.return_value = Empty()
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
backup.delete()
api.delete_backup.assert_called_once_with(request={"name": self.BACKUP_NAME})
def test_update_expire_time_grpc_error(self):
from google.api_core.exceptions import Unknown
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.protobuf import field_mask_pb2
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.update_backup.side_effect = Unknown("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
expire_time = self._make_timestamp()
with self.assertRaises(Unknown):
backup.update_expire_time(expire_time)
backup_update = table.Backup(
name=self.BACKUP_NAME, expire_time=_datetime_to_pb_timestamp(expire_time),
)
update_mask = field_mask_pb2.FieldMask(paths=["expire_time"])
api.update_backup.assert_called_once_with(
request={"backup": backup_update, "update_mask": update_mask}
)
def test_update_expire_time_not_found(self):
from google.api_core.exceptions import NotFound
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.protobuf import field_mask_pb2
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.update_backup.side_effect = NotFound("testing")
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
expire_time = self._make_timestamp()
with self.assertRaises(NotFound):
backup.update_expire_time(expire_time)
backup_update = table.Backup(
name=self.BACKUP_NAME, expire_time=_datetime_to_pb_timestamp(expire_time),
)
update_mask = field_mask_pb2.FieldMask(paths=["expire_time"])
api.update_backup.assert_called_once_with(
request={"backup": backup_update, "update_mask": update_mask}
)
def test_update_expire_time_success(self):
from google.cloud._helpers import _datetime_to_pb_timestamp
from google.cloud.bigtable_admin_v2.types import table
from google.protobuf import field_mask_pb2
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.update_backup.return_value = table.Backup(name=self.BACKUP_NAME)
instance = _Instance(self.INSTANCE_NAME, client=client)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
expire_time = self._make_timestamp()
backup.update_expire_time(expire_time)
backup_update = table.Backup(
name=self.BACKUP_NAME, expire_time=_datetime_to_pb_timestamp(expire_time),
)
update_mask = field_mask_pb2.FieldMask(paths=["expire_time"])
api.update_backup.assert_called_once_with(
request={"backup": backup_update, "update_mask": update_mask}
)
def test_restore_grpc_error(self):
from google.api_core.exceptions import GoogleAPICallError
from google.api_core.exceptions import Unknown
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.restore_table.side_effect = Unknown("testing")
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
cluster_id=self.CLUSTER_ID,
table_id=self.TABLE_NAME,
expire_time=timestamp,
)
with self.assertRaises(GoogleAPICallError):
backup.restore(self.TABLE_ID)
api.restore_table.assert_called_once_with(
request={
"parent": self.INSTANCE_NAME,
"table_id": self.TABLE_ID,
"backup": self.BACKUP_NAME,
}
)
def test_restore_cluster_not_set(self):
client = _Client()
client._table_admin_client = self._make_table_admin_client()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
table_id=self.TABLE_ID,
expire_time=self._make_timestamp(),
)
with self.assertRaises(ValueError):
backup.restore(self.TABLE_ID)
def test_restore_success(self):
op_future = object()
client = _Client()
api = client._table_admin_client = self._make_table_admin_client()
api.restore_table.return_value = op_future
timestamp = self._make_timestamp()
backup = self._make_one(
self.BACKUP_ID,
_Instance(self.INSTANCE_NAME, client=client),
cluster_id=self.CLUSTER_ID,
table_id=self.TABLE_NAME,
expire_time=timestamp,
)
future = backup.restore(self.TABLE_ID)
self.assertEqual(backup._cluster, self.CLUSTER_ID)
self.assertIs(future, op_future)
api.restore_table.assert_called_once_with(
request={
"parent": self.INSTANCE_NAME,
"table_id": self.TABLE_ID,
"backup": self.BACKUP_NAME,
}
)
def test_get_iam_policy(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_table_admin import (
BigtableTableAdminClient,
)
from google.iam.v1 import policy_pb2
from google.cloud.bigtable.policy import BIGTABLE_ADMIN_ROLE
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
instance = client.instance(instance_id=self.INSTANCE_ID)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
version = 1
etag = b"etag_v1"
members = ["serviceAccount:service_acc1@test.com", "user:user1@test.com"]
bindings = [{"role": BIGTABLE_ADMIN_ROLE, "members": members}]
iam_policy = policy_pb2.Policy(version=version, etag=etag, bindings=bindings)
table_api = mock.create_autospec(BigtableTableAdminClient)
client._table_admin_client = table_api
table_api.get_iam_policy.return_value = iam_policy
result = backup.get_iam_policy()
table_api.get_iam_policy.assert_called_once_with(
request={"resource": backup.name}
)
self.assertEqual(result.version, version)
self.assertEqual(result.etag, etag)
admins = result.bigtable_admins
self.assertEqual(len(admins), len(members))
for found, expected in zip(sorted(admins), sorted(members)):
self.assertEqual(found, expected)
def test_set_iam_policy(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_table_admin import (
BigtableTableAdminClient,
)
from google.iam.v1 import policy_pb2
from google.cloud.bigtable.policy import Policy
from google.cloud.bigtable.policy import BIGTABLE_ADMIN_ROLE
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
instance = client.instance(instance_id=self.INSTANCE_ID)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
version = 1
etag = b"etag_v1"
members = ["serviceAccount:service_acc1@test.com", "user:user1@test.com"]
bindings = [{"role": BIGTABLE_ADMIN_ROLE, "members": sorted(members)}]
iam_policy_pb = policy_pb2.Policy(version=version, etag=etag, bindings=bindings)
table_api = mock.create_autospec(BigtableTableAdminClient)
client._table_admin_client = table_api
table_api.set_iam_policy.return_value = iam_policy_pb
iam_policy = Policy(etag=etag, version=version)
iam_policy[BIGTABLE_ADMIN_ROLE] = [
Policy.user("user1@test.com"),
Policy.service_account("service_acc1@test.com"),
]
result = backup.set_iam_policy(iam_policy)
table_api.set_iam_policy.assert_called_once_with(
request={"resource": backup.name, "policy": iam_policy_pb}
)
self.assertEqual(result.version, version)
self.assertEqual(result.etag, etag)
admins = result.bigtable_admins
self.assertEqual(len(admins), len(members))
for found, expected in zip(sorted(admins), sorted(members)):
self.assertEqual(found, expected)
def test_test_iam_permissions(self):
from google.cloud.bigtable.client import Client
from google.cloud.bigtable_admin_v2.services.bigtable_table_admin import (
BigtableTableAdminClient,
)
from google.iam.v1 import iam_policy_pb2
credentials = _make_credentials()
client = Client(project=self.PROJECT_ID, credentials=credentials, admin=True)
instance = client.instance(instance_id=self.INSTANCE_ID)
backup = self._make_one(self.BACKUP_ID, instance, cluster_id=self.CLUSTER_ID)
permissions = ["bigtable.backups.create", "bigtable.backups.list"]
response = iam_policy_pb2.TestIamPermissionsResponse(permissions=permissions)
table_api = mock.create_autospec(BigtableTableAdminClient)
table_api.test_iam_permissions.return_value = response
client._table_admin_client = table_api
result = backup.test_iam_permissions(permissions)
self.assertEqual(result, permissions)
table_api.test_iam_permissions.assert_called_once_with(
request={"resource": backup.name, "permissions": permissions}
)
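# Lightweight doubles for the real client and instance objects; they provide
# only the attributes that the Backup code under test actually touches.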
class _Client(object):
def __init__(self, project=TestBackup.PROJECT_ID):
self.project = project
self.project_name = "projects/" + self.project
class _Instance(object):
def __init__(self, name, client=None):
self.name = name
self.instance_id = name.rsplit("/", 1)[1]
self._client = client
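# Helper that builds a google.rpc Status protobuf, used above when asserting on
# a backup's encryption info.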
def _StatusPB(code, message):
from google.rpc import status_pb2
status_pb = status_pb2.Status()
status_pb.code = code
status_pb.message = message
return status_pb
| 37.875139 | 88 | 0.676046 | 3,975 | 33,974 | 5.429937 | 0.059623 | 0.031134 | 0.028354 | 0.030578 | 0.84507 | 0.815187 | 0.793365 | 0.782987 | 0.763158 | 0.760563 | 0 | 0.003593 | 0.238094 | 33,974 | 896 | 89 | 37.917411 | 0.830249 | 0.016719 | 0 | 0.634943 | 0 | 0 | 0.024377 | 0.005151 | 0 | 0 | 0 | 0 | 0.153409 | 1 | 0.076705 | false | 0 | 0.103693 | 0.002841 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a655658807210a3a95c7635e6a374e2c65ee5a3 | 25 | py | Python | client/agents/ppo/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 80 | 2019-01-30T13:14:11.000Z | 2022-02-14T08:51:01.000Z | client/agents/ppo/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 8 | 2019-02-03T18:21:36.000Z | 2020-10-23T00:51:30.000Z | client/agents/ppo/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 27 | 2019-03-15T08:22:19.000Z | 2022-03-20T05:37:48.000Z | from .ppo_carla import *
| 12.5 | 24 | 0.76 | 4 | 25 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a69da6ce2eab576c6377a944a2dfdcdfde3ffeb | 497 | py | Python | machina/pols/__init__.py | AswinRetnakumar/Machina | 6519935ca4553192ac99fc1c7c1e7cab9dd72693 | [
"MIT"
] | 302 | 2019-03-13T10:21:29.000Z | 2022-03-25T10:01:46.000Z | machina/pols/__init__.py | AswinRetnakumar/Machina | 6519935ca4553192ac99fc1c7c1e7cab9dd72693 | [
"MIT"
] | 50 | 2019-03-13T09:45:00.000Z | 2021-12-23T18:32:00.000Z | machina/pols/__init__.py | AswinRetnakumar/Machina | 6519935ca4553192ac99fc1c7c1e7cab9dd72693 | [
"MIT"
] | 55 | 2019-03-17T01:59:57.000Z | 2022-03-28T01:13:40.000Z | from machina.pols.base import BasePol
from machina.pols.gaussian_pol import GaussianPol
from machina.pols.mixture_gaussian_pol import MixtureGaussianPol
from machina.pols.deterministic_action_noise_pol import DeterministicActionNoisePol
from machina.pols.categorical_pol import CategoricalPol
from machina.pols.multi_categorical_pol import MultiCategoricalPol
from machina.pols.mpc_pol import MPCPol
from machina.pols.random_pol import RandomPol
from machina.pols.argmax_qf_pol import ArgmaxQfPol
| 49.7 | 83 | 0.891348 | 67 | 497 | 6.41791 | 0.38806 | 0.230233 | 0.313953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072435 | 497 | 9 | 84 | 55.222222 | 0.932755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a6d5aa1a5d290ebaef6c692addcfe88ed943b40 | 51,644 | py | Python | cinder/tests/api/v1/test_volumes.py | cloudbau/cinder | 3179f2f42ae940a08b910e326a809556689864d8 | [
"Apache-2.0"
] | null | null | null | cinder/tests/api/v1/test_volumes.py | cloudbau/cinder | 3179f2f42ae940a08b910e326a809556689864d8 | [
"Apache-2.0"
] | null | null | null | cinder/tests/api/v1/test_volumes.py | cloudbau/cinder | 3179f2f42ae940a08b910e326a809556689864d8 | [
"Apache-2.0"
] | null | null | null | # Copyright 2013 Josh Durgin
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from lxml import etree
from oslo.config import cfg
import urllib
import webob
from cinder.api import extensions
from cinder.api.v1 import volumes
from cinder import context
from cinder import db
from cinder import exception
from cinder import test
from cinder.tests.api import fakes
from cinder.tests.api.v2 import stubs
from cinder.tests.image import fake as fake_image
from cinder.volume import api as volume_api
NS = '{http://docs.openstack.org/volume/api/v1}'
TEST_SNAPSHOT_UUID = '00000000-0000-0000-0000-000000000001'
CONF = cfg.CONF
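# Stub that only recognizes TEST_SNAPSHOT_UUID; any other snapshot id raises NotFound.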
def stub_snapshot_get(self, context, snapshot_id):
if snapshot_id != TEST_SNAPSHOT_UUID:
raise exception.NotFound
return {'id': snapshot_id,
'volume_id': 12,
'status': 'available',
'volume_size': 100,
'created_at': None,
'display_name': 'Default name',
'display_description': 'Default description', }
class VolumeApiTest(test.TestCase):
def setUp(self):
super(VolumeApiTest, self).setUp()
self.ext_mgr = extensions.ExtensionManager()
self.ext_mgr.extensions = {}
fake_image.stub_out_image_service(self.stubs)
self.controller = volumes.VolumeController(self.ext_mgr)
self.stubs.Set(db, 'volume_get_all', stubs.stub_volume_get_all)
self.stubs.Set(db, 'service_get_all_by_topic',
stubs.stub_service_get_all_by_topic)
self.stubs.Set(volume_api.API, 'delete', stubs.stub_volume_delete)
def test_volume_create(self):
self.stubs.Set(volume_api.API, 'get', stubs.stub_volume_get)
self.stubs.Set(volume_api.API, "create", stubs.stub_volume_create)
vol = {"size": 100,
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "zone1:host1"}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
res_dict = self.controller.create(req, body)
expected = {'volume': {'status': 'fakestatus',
'display_description': 'Volume Test Desc',
'availability_zone': 'zone1:host1',
'display_name': 'Volume Test Name',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 100}}
self.assertEqual(res_dict, expected)
def test_volume_create_with_type(self):
vol_type = CONF.default_volume_type
db.volume_type_create(context.get_admin_context(),
dict(name=vol_type, extra_specs={}))
db_vol_type = db.volume_type_get_by_name(context.get_admin_context(),
vol_type)
vol = {"size": 100,
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "zone1:host1",
"volume_type": "FakeTypeName"}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
# Raise 404 when type name isn't valid
self.assertRaises(webob.exc.HTTPNotFound, self.controller.create,
req, body)
# Use correct volume type name
vol.update(dict(volume_type=CONF.default_volume_type))
body.update(dict(volume=vol))
res_dict = self.controller.create(req, body)
volume_id = res_dict['volume']['id']
self.assertEqual(len(res_dict), 1)
self.assertEqual(res_dict['volume']['volume_type'],
db_vol_type['name'])
# Use correct volume type id
vol.update(dict(volume_type=db_vol_type['id']))
body.update(dict(volume=vol))
res_dict = self.controller.create(req, body)
volume_id = res_dict['volume']['id']
self.assertEqual(len(res_dict), 1)
self.assertEqual(res_dict['volume']['volume_type'],
db_vol_type['name'])
def test_volume_creation_fails_with_bad_size(self):
vol = {"size": '',
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "zone1:host1"}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
self.assertRaises(exception.InvalidInput,
self.controller.create,
req,
body)
def test_volume_creation_fails_with_bad_availability_zone(self):
vol = {"size": '1',
"name": "Volume Test Name",
"description": "Volume Test Desc",
"availability_zone": "zonen:hostn"}
body = {"volume": vol}
        req = fakes.HTTPRequest.blank('/v1/volumes')
self.assertRaises(exception.InvalidInput,
self.controller.create,
req, body)
def test_volume_create_with_image_id(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(volume_api.API, "create", stubs.stub_volume_create)
self.ext_mgr.extensions = {'os-image-create': 'fake'}
test_id = "c905cedb-7281-47e4-8a62-f26bc5fc4c77"
vol = {"size": '1',
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "nova",
"imageRef": test_id}
expected = {'volume': {'status': 'fakestatus',
'display_description': 'Volume Test Desc',
'availability_zone': 'nova',
'display_name': 'Volume Test Name',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'image_id': test_id,
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': '1'}}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
res_dict = self.controller.create(req, body)
self.assertEqual(res_dict, expected)
def test_volume_create_with_image_id_is_integer(self):
self.stubs.Set(volume_api.API, "create", stubs.stub_volume_create)
self.ext_mgr.extensions = {'os-image-create': 'fake'}
vol = {"size": '1',
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "cinder",
"imageRef": 1234}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create,
req,
body)
def test_volume_create_with_image_id_not_uuid_format(self):
self.stubs.Set(volume_api.API, "create", stubs.stub_volume_create)
self.ext_mgr.extensions = {'os-image-create': 'fake'}
vol = {"size": '1',
"display_name": "Volume Test Name",
"display_description": "Volume Test Desc",
"availability_zone": "cinder",
"imageRef": '12345'}
body = {"volume": vol}
req = fakes.HTTPRequest.blank('/v1/volumes')
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create,
req,
body)
def test_volume_update(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(volume_api.API, "update", stubs.stub_volume_update)
updates = {
"display_name": "Updated Test Name",
}
body = {"volume": updates}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
res_dict = self.controller.update(req, '1', body)
expected = {'volume': {
'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'Updated Test Name',
'attachments': [{
'id': '1',
'volume_id': '1',
'server_id': 'fakeuuid',
'host_name': None,
'device': '/'
}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1, 1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_volume_update_metadata(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(volume_api.API, "update", stubs.stub_volume_update)
updates = {
"metadata": {"qos_max_iops": 2000}
}
body = {"volume": updates}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
res_dict = self.controller.update(req, '1', body)
expected = {'volume': {
'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{
'id': '1',
'volume_id': '1',
'server_id': 'fakeuuid',
'host_name': None,
'device': '/'
}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {"qos_max_iops": 2000,
"readonly": "False",
"attached_mode": "rw"},
'id': '1',
'created_at': datetime.datetime(1, 1, 1, 1, 1, 1),
'size': 1
}}
self.assertEqual(res_dict, expected)
def test_volume_update_with_admin_metadata(self):
self.stubs.Set(volume_api.API, "update", stubs.stub_volume_update)
volume = stubs.stub_volume("1")
del volume['name']
del volume['volume_type']
del volume['volume_type_id']
volume['metadata'] = {'key': 'value'}
db.volume_create(context.get_admin_context(), volume)
db.volume_admin_metadata_update(context.get_admin_context(), "1",
{"readonly": "True",
"invisible_key": "invisible_value"},
False)
updates = {
"display_name": "Updated Test Name",
}
body = {"volume": updates}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
admin_ctx = context.RequestContext('admin', 'fakeproject', True)
req.environ['cinder.context'] = admin_ctx
res_dict = self.controller.update(req, '1', body)
expected = {'volume': {
'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'Updated Test Name',
'attachments': [{
'id': '1',
'volume_id': '1',
'server_id': 'fakeuuid',
'host_name': None,
'device': '/'
}],
'bootable': 'false',
'volume_type': 'None',
'snapshot_id': None,
'source_volid': None,
'metadata': {'key': 'value',
'readonly': 'True'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1, 1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_update_empty_body(self):
body = {}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
self.assertRaises(webob.exc.HTTPUnprocessableEntity,
self.controller.update,
req, '1', body)
def test_update_invalid_body(self):
body = {'display_name': 'missing top level volume key'}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
self.assertRaises(webob.exc.HTTPUnprocessableEntity,
self.controller.update,
req, '1', body)
def test_update_not_found(self):
self.stubs.Set(volume_api.API, "get", stubs.stub_volume_get_notfound)
updates = {
"display_name": "Updated Test Name",
}
body = {"volume": updates}
req = fakes.HTTPRequest.blank('/v1/volumes/1')
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.update,
req, '1', body)
def test_volume_list(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(volume_api.API, 'get_all',
stubs.stub_volume_get_all_by_project)
req = fakes.HTTPRequest.blank('/v1/volumes')
res_dict = self.controller.index(req)
expected = {'volumes': [{'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}]}
self.assertEqual(res_dict, expected)
def test_volume_list_with_admin_metadata(self):
volume = stubs.stub_volume("1")
del volume['name']
del volume['volume_type']
del volume['volume_type_id']
volume['metadata'] = {'key': 'value'}
db.volume_create(context.get_admin_context(), volume)
db.volume_admin_metadata_update(context.get_admin_context(), "1",
{"readonly": "True",
"invisible_key": "invisible_value"},
False)
req = fakes.HTTPRequest.blank('/v1/volumes')
admin_ctx = context.RequestContext('admin', 'fakeproject', True)
req.environ['cinder.context'] = admin_ctx
res_dict = self.controller.index(req)
expected = {'volumes': [{'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'None',
'snapshot_id': None,
'source_volid': None,
'metadata': {'key': 'value',
'readonly': 'True'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}]}
self.assertEqual(res_dict, expected)
def test_volume_list_detail(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(volume_api.API, 'get_all',
stubs.stub_volume_get_all_by_project)
req = fakes.HTTPRequest.blank('/v1/volumes/detail')
res_dict = self.controller.index(req)
expected = {'volumes': [{'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}]}
self.assertEqual(res_dict, expected)
def test_volume_list_detail_with_admin_metadata(self):
volume = stubs.stub_volume("1")
del volume['name']
del volume['volume_type']
del volume['volume_type_id']
volume['metadata'] = {'key': 'value'}
db.volume_create(context.get_admin_context(), volume)
db.volume_admin_metadata_update(context.get_admin_context(), "1",
{"readonly": "True",
"invisible_key": "invisible_value"},
False)
req = fakes.HTTPRequest.blank('/v1/volumes/detail')
admin_ctx = context.RequestContext('admin', 'fakeproject', True)
req.environ['cinder.context'] = admin_ctx
res_dict = self.controller.index(req)
expected = {'volumes': [{'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'None',
'snapshot_id': None,
'source_volid': None,
'metadata': {'key': 'value',
'readonly': 'True'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}]}
self.assertEqual(res_dict, expected)
def test_volume_list_by_name(self):
def stub_volume_get_all_by_project(context, project_id, marker, limit,
sort_key, sort_dir):
return [
stubs.stub_volume(1, display_name='vol1'),
stubs.stub_volume(2, display_name='vol2'),
stubs.stub_volume(3, display_name='vol3'),
]
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(db, 'volume_get_all_by_project',
stub_volume_get_all_by_project)
# no display_name filter
req = fakes.HTTPRequest.blank('/v1/volumes')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 3)
# filter on display_name
req = fakes.HTTPRequest.blank('/v1/volumes?display_name=vol2')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 1)
self.assertEqual(resp['volumes'][0]['display_name'], 'vol2')
# filter no match
req = fakes.HTTPRequest.blank('/v1/volumes?display_name=vol4')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 0)
def test_volume_list_by_metadata(self):
def stub_volume_get_all_by_project(context, project_id, marker, limit,
sort_key, sort_dir):
return [
stubs.stub_volume(1, display_name='vol1',
status='available',
volume_metadata=[{'key': 'key1',
'value': 'value1'}]),
stubs.stub_volume(2, display_name='vol2',
status='available',
volume_metadata=[{'key': 'key1',
'value': 'value2'}]),
stubs.stub_volume(3, display_name='vol3',
status='in-use',
volume_metadata=[{'key': 'key1',
'value': 'value2'}]),
]
self.stubs.Set(db, 'volume_get_all_by_project',
stub_volume_get_all_by_project)
# no metadata filter
req = fakes.HTTPRequest.blank('/v1/volumes', use_admin_context=True)
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 3)
# single match
qparams = urllib.urlencode({'metadata': {'key1': 'value1'}})
req = fakes.HTTPRequest.blank('/v1/volumes?%s' % qparams,
use_admin_context=True)
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 1)
self.assertEqual(resp['volumes'][0]['display_name'], 'vol1')
self.assertEqual(resp['volumes'][0]['metadata']['key1'], 'value1')
# multiple matches
qparams = urllib.urlencode({'metadata': {'key1': 'value2'}})
req = fakes.HTTPRequest.blank('/v1/volumes?%s' % qparams,
use_admin_context=True)
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 2)
for volume in resp['volumes']:
self.assertEqual(volume['metadata']['key1'], 'value2')
# multiple filters
qparams = urllib.urlencode({'metadata': {'key1': 'value2'}})
req = fakes.HTTPRequest.blank('/v1/volumes?status=in-use&%s' % qparams,
use_admin_context=True)
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 1)
self.assertEqual(resp['volumes'][0]['display_name'], 'vol3')
# no match
qparams = urllib.urlencode({'metadata': {'key1': 'value3'}})
req = fakes.HTTPRequest.blank('/v1/volumes?%s' % qparams,
use_admin_context=True)
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 0)
def test_volume_list_by_status(self):
def stub_volume_get_all_by_project(context, project_id, marker, limit,
sort_key, sort_dir):
return [
stubs.stub_volume(1, display_name='vol1', status='available'),
stubs.stub_volume(2, display_name='vol2', status='available'),
stubs.stub_volume(3, display_name='vol3', status='in-use'),
]
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
self.stubs.Set(db, 'volume_get_all_by_project',
stub_volume_get_all_by_project)
# no status filter
req = fakes.HTTPRequest.blank('/v1/volumes')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 3)
# single match
req = fakes.HTTPRequest.blank('/v1/volumes?status=in-use')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 1)
self.assertEqual(resp['volumes'][0]['status'], 'in-use')
# multiple match
req = fakes.HTTPRequest.blank('/v1/volumes?status=available')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 2)
for volume in resp['volumes']:
self.assertEqual(volume['status'], 'available')
# multiple filters
req = fakes.HTTPRequest.blank('/v1/volumes?status=available&'
'display_name=vol1')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 1)
self.assertEqual(resp['volumes'][0]['display_name'], 'vol1')
self.assertEqual(resp['volumes'][0]['status'], 'available')
# no match
req = fakes.HTTPRequest.blank('/v1/volumes?status=in-use&'
'display_name=vol1')
resp = self.controller.index(req)
self.assertEqual(len(resp['volumes']), 0)
def test_volume_show(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
res_dict = self.controller.show(req, '1')
expected = {'volume': {'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_volume_show_no_attachments(self):
def stub_volume_get(self, context, volume_id):
return stubs.stub_volume(volume_id, attach_status='detached')
self.stubs.Set(volume_api.API, 'get', stub_volume_get)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
res_dict = self.controller.show(req, '1')
expected = {'volume': {'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [],
'bootable': 'false',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_volume_show_bootable(self):
def stub_volume_get(self, context, volume_id):
return (stubs.stub_volume(volume_id,
volume_glance_metadata=dict(foo='bar')))
self.stubs.Set(volume_api.API, 'get', stub_volume_get)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
res_dict = self.controller.show(req, '1')
expected = {'volume': {'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'true',
'volume_type': 'vol_type_name',
'snapshot_id': None,
'source_volid': None,
'metadata': {'attached_mode': 'rw',
'readonly': 'False'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_volume_show_no_volume(self):
self.stubs.Set(volume_api.API, "get", stubs.stub_volume_get_notfound)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.show,
req,
1)
def test_volume_detail_limit_offset(self):
def volume_detail_limit_offset(is_admin):
def stub_volume_get_all_by_project(context, project_id, marker,
limit, sort_key, sort_dir):
return [
stubs.stub_volume(1, display_name='vol1'),
stubs.stub_volume(2, display_name='vol2'),
]
self.stubs.Set(db, 'volume_get_all_by_project',
stub_volume_get_all_by_project)
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
            req = fakes.HTTPRequest.blank('/v1/volumes/detail?limit=2'
                                          '&offset=1',
                                          use_admin_context=is_admin)
res_dict = self.controller.index(req)
volumes = res_dict['volumes']
self.assertEqual(len(volumes), 1)
self.assertEqual(volumes[0]['id'], 2)
        # admin case
volume_detail_limit_offset(is_admin=True)
        # non-admin case
volume_detail_limit_offset(is_admin=False)
def test_volume_show_with_admin_metadata(self):
volume = stubs.stub_volume("1")
del volume['name']
del volume['volume_type']
del volume['volume_type_id']
volume['metadata'] = {'key': 'value'}
db.volume_create(context.get_admin_context(), volume)
db.volume_admin_metadata_update(context.get_admin_context(), "1",
{"readonly": "True",
"invisible_key": "invisible_value"},
False)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
admin_ctx = context.RequestContext('admin', 'fakeproject', True)
req.environ['cinder.context'] = admin_ctx
res_dict = self.controller.show(req, '1')
expected = {'volume': {'status': 'fakestatus',
'display_description': 'displaydesc',
'availability_zone': 'fakeaz',
'display_name': 'displayname',
'attachments': [{'device': '/',
'server_id': 'fakeuuid',
'host_name': None,
'id': '1',
'volume_id': '1'}],
'bootable': 'false',
'volume_type': 'None',
'snapshot_id': None,
'source_volid': None,
'metadata': {'key': 'value',
'readonly': 'True'},
'id': '1',
'created_at': datetime.datetime(1, 1, 1,
1, 1, 1),
'size': 1}}
self.assertEqual(res_dict, expected)
def test_volume_delete(self):
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
resp = self.controller.delete(req, 1)
self.assertEqual(resp.status_int, 202)
def test_volume_delete_no_volume(self):
self.stubs.Set(volume_api.API, "get", stubs.stub_volume_get_notfound)
req = fakes.HTTPRequest.blank('/v1/volumes/1')
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.delete,
req,
1)
def test_admin_list_volumes_limited_to_project(self):
self.stubs.Set(db, 'volume_get_all_by_project',
stubs.stub_volume_get_all_by_project)
req = fakes.HTTPRequest.blank('/v1/fake/volumes',
use_admin_context=True)
res = self.controller.index(req)
self.assertIn('volumes', res)
self.assertEqual(1, len(res['volumes']))
def test_admin_list_volumes_all_tenants(self):
req = fakes.HTTPRequest.blank('/v1/fake/volumes?all_tenants=1',
use_admin_context=True)
res = self.controller.index(req)
self.assertIn('volumes', res)
self.assertEqual(3, len(res['volumes']))
def test_all_tenants_non_admin_gets_all_tenants(self):
self.stubs.Set(db, 'volume_get_all_by_project',
stubs.stub_volume_get_all_by_project)
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
req = fakes.HTTPRequest.blank('/v1/fake/volumes?all_tenants=1')
res = self.controller.index(req)
self.assertIn('volumes', res)
self.assertEqual(1, len(res['volumes']))
def test_non_admin_get_by_project(self):
self.stubs.Set(db, 'volume_get_all_by_project',
stubs.stub_volume_get_all_by_project)
self.stubs.Set(db, 'volume_get', stubs.stub_volume_get_db)
req = fakes.HTTPRequest.blank('/v1/fake/volumes')
res = self.controller.index(req)
self.assertIn('volumes', res)
self.assertEqual(1, len(res['volumes']))
def test_add_visible_admin_metadata_visible_key_only(self):
admin_metadata = [{"key": "invisible_key", "value": "invisible_value"},
{"key": "readonly", "value": "visible"},
{"key": "attached_mode", "value": "visible"}]
metadata = [{"key": "key", "value": "value"}]
volume = dict(volume_admin_metadata=admin_metadata,
volume_metadata=metadata)
admin_ctx = context.get_admin_context()
self.controller._add_visible_admin_metadata(admin_ctx,
volume)
self.assertEqual(volume['volume_metadata'],
[{"key": "key", "value": "value"},
{"key": "readonly", "value": "visible"},
{"key": "attached_mode", "value": "visible"}])
admin_metadata = {"invisible_key": "invisible_value",
"readonly": "visible",
"attached_mode": "visible"}
metadata = {"key": "value"}
volume = dict(admin_metadata=admin_metadata,
metadata=metadata)
admin_ctx = context.get_admin_context()
self.controller._add_visible_admin_metadata(admin_ctx,
volume)
self.assertEqual(volume['metadata'],
{'key': 'value',
'attached_mode': 'visible',
'readonly': 'visible'})
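# XML serializer tests: render volume dicts through the v1 templates and walk
# the resulting etree to verify tags, attributes, and metadata children.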
class VolumeSerializerTest(test.TestCase):
def _verify_volume_attachment(self, attach, tree):
for attr in ('id', 'volume_id', 'server_id', 'device'):
self.assertEqual(str(attach[attr]), tree.get(attr))
def _verify_volume(self, vol, tree):
self.assertEqual(tree.tag, NS + 'volume')
for attr in ('id', 'status', 'size', 'availability_zone', 'created_at',
'display_name', 'display_description', 'volume_type',
'bootable', 'snapshot_id'):
self.assertEqual(str(vol[attr]), tree.get(attr))
        for child in tree:
            self.assertIn(child.tag, (NS + 'attachments', NS + 'metadata'))
            # The serializer emits namespaced tags, so the comparisons below
            # must include the NS prefix or neither branch ever runs.
            if child.tag == NS + 'attachments':
                self.assertEqual(1, len(child))
                self.assertEqual(NS + 'attachment', child[0].tag)
                self._verify_volume_attachment(vol['attachments'][0], child[0])
            elif child.tag == NS + 'metadata':
                not_seen = set(vol['metadata'].keys())
                for gr_child in child:
                    self.assertIn(gr_child.get("key"), not_seen)
                    self.assertEqual(str(vol['metadata'][gr_child.get("key")]),
                                     gr_child.text)
                    not_seen.remove(gr_child.get('key'))
                self.assertEqual(0, len(not_seen))
def test_volume_show_create_serializer(self):
serializer = volumes.VolumeTemplate()
raw_volume = dict(
id='vol_id',
status='vol_status',
size=1024,
availability_zone='vol_availability',
bootable='false',
created_at=datetime.datetime.now(),
attachments=[dict(id='vol_id',
volume_id='vol_id',
server_id='instance_uuid',
device='/foo')],
display_name='vol_name',
display_description='vol_desc',
volume_type='vol_type',
snapshot_id='snap_id',
source_volid='source_volid',
metadata=dict(foo='bar',
baz='quux', ), )
text = serializer.serialize(dict(volume=raw_volume))
tree = etree.fromstring(text)
self._verify_volume(raw_volume, tree)
def test_volume_index_detail_serializer(self):
serializer = volumes.VolumesTemplate()
raw_volumes = [dict(id='vol1_id',
status='vol1_status',
size=1024,
availability_zone='vol1_availability',
bootable='true',
created_at=datetime.datetime.now(),
attachments=[dict(id='vol1_id',
volume_id='vol1_id',
server_id='instance_uuid',
device='/foo1')],
display_name='vol1_name',
display_description='vol1_desc',
volume_type='vol1_type',
snapshot_id='snap1_id',
source_volid=None,
metadata=dict(foo='vol1_foo',
bar='vol1_bar', ), ),
dict(id='vol2_id',
status='vol2_status',
size=1024,
availability_zone='vol2_availability',
bootable='true',
created_at=datetime.datetime.now(),
attachments=[dict(id='vol2_id',
volume_id='vol2_id',
server_id='instance_uuid',
device='/foo2')],
display_name='vol2_name',
display_description='vol2_desc',
volume_type='vol2_type',
snapshot_id='snap2_id',
source_volid=None,
metadata=dict(foo='vol2_foo',
bar='vol2_bar', ), )]
text = serializer.serialize(dict(volumes=raw_volumes))
tree = etree.fromstring(text)
self.assertEqual(NS + 'volumes', tree.tag)
self.assertEqual(len(raw_volumes), len(tree))
for idx, child in enumerate(tree):
self._verify_volume(raw_volumes[idx], child)
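# XML deserializer tests: feed raw <volume> request bodies to
# CreateDeserializer and compare the parsed dict with the expected payload.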
class TestVolumeCreateRequestXMLDeserializer(test.TestCase):
def setUp(self):
super(TestVolumeCreateRequestXMLDeserializer, self).setUp()
self.deserializer = volumes.CreateDeserializer()
def test_minimal_volume(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {"volume": {"size": "1", }, }
self.assertEqual(request['body'], expected)
def test_display_name(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"
display_name="Volume-xml"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
},
}
self.assertEqual(request['body'], expected)
def test_display_description(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"
display_name="Volume-xml"
display_description="description"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
},
}
self.assertEqual(request['body'], expected)
def test_volume_type(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"
display_name="Volume-xml"
display_description="description"
volume_type="289da7f8-6440-407c-9fb4-7db01ec49164"></volume>"""
request = self.deserializer.deserialize(self_request)
        expected = {
            "volume": {
                "size": "1",
                "display_name": "Volume-xml",
                "display_description": "description",
                "volume_type": "289da7f8-6440-407c-9fb4-7db01ec49164",
            },
        }
self.assertEqual(request['body'], expected)
def test_availability_zone(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"
display_name="Volume-xml"
display_description="description"
volume_type="289da7f8-6440-407c-9fb4-7db01ec49164"
availability_zone="us-east1"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
"volume_type": "289da7f8-6440-407c-9fb4-7db01ec49164",
"availability_zone": "us-east1",
},
}
self.assertEqual(request['body'], expected)
def test_metadata(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
display_name="Volume-xml"
size="1">
<metadata><meta key="Type">work</meta></metadata></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"display_name": "Volume-xml",
"size": "1",
"metadata": {
"Type": "work",
},
},
}
self.assertEqual(request['body'], expected)
def test_full_volume(self):
self_request = """
<volume xmlns="http://docs.openstack.org/compute/api/v1.1"
size="1"
display_name="Volume-xml"
display_description="description"
volume_type="289da7f8-6440-407c-9fb4-7db01ec49164"
availability_zone="us-east1">
<metadata><meta key="Type">work</meta></metadata></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
"volume_type": "289da7f8-6440-407c-9fb4-7db01ec49164",
"availability_zone": "us-east1",
"metadata": {
"Type": "work",
},
},
}
self.assertEqual(request['body'], expected)
def test_imageref(self):
self_request = """
<volume xmlns="http://docs.openstack.org/volume/api/v1"
size="1"
display_name="Volume-xml"
display_description="description"
imageRef="4a90189d-d702-4c7c-87fc-6608c554d737"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
"imageRef": "4a90189d-d702-4c7c-87fc-6608c554d737",
},
}
self.assertEqual(expected, request['body'])
def test_snapshot_id(self):
self_request = """
<volume xmlns="http://docs.openstack.org/volume/api/v1"
size="1"
display_name="Volume-xml"
display_description="description"
snapshot_id="4a90189d-d702-4c7c-87fc-6608c554d737"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
"snapshot_id": "4a90189d-d702-4c7c-87fc-6608c554d737",
},
}
self.assertEqual(expected, request['body'])
def test_source_volid(self):
self_request = """
<volume xmlns="http://docs.openstack.org/volume/api/v1"
size="1"
display_name="Volume-xml"
display_description="description"
source_volid="4a90189d-d702-4c7c-87fc-6608c554d737"></volume>"""
request = self.deserializer.deserialize(self_request)
expected = {
"volume": {
"size": "1",
"display_name": "Volume-xml",
"display_description": "description",
"source_volid": "4a90189d-d702-4c7c-87fc-6608c554d737",
},
}
self.assertEqual(expected, request['body'])
class VolumesUnprocessableEntityTestCase(test.TestCase):
"""Tests of places we throw 422 Unprocessable Entity from."""
def setUp(self):
super(VolumesUnprocessableEntityTestCase, self).setUp()
self.ext_mgr = extensions.ExtensionManager()
self.ext_mgr.extensions = {}
self.controller = volumes.VolumeController(self.ext_mgr)
def _unprocessable_volume_create(self, body):
req = fakes.HTTPRequest.blank('/v2/fake/volumes')
req.method = 'POST'
self.assertRaises(webob.exc.HTTPUnprocessableEntity,
self.controller.create, req, body)
def test_create_no_body(self):
self._unprocessable_volume_create(body=None)
def test_create_missing_volume(self):
body = {'foo': {'a': 'b'}}
self._unprocessable_volume_create(body=body)
def test_create_malformed_entity(self):
body = {'volume': 'string'}
self._unprocessable_volume_create(body=body)
| 44.559103 | 79 | 0.490125 | 4,754 | 51,644 | 5.107278 | 0.071729 | 0.03126 | 0.006425 | 0.042504 | 0.803418 | 0.780313 | 0.759596 | 0.728007 | 0.709638 | 0.703089 | 0 | 0.023788 | 0.38868 | 51,644 | 1,158 | 80 | 44.597582 | 0.745273 | 0.018957 | 0 | 0.690099 | 0 | 0 | 0.211266 | 0.033697 | 0 | 0 | 0 | 0 | 0.083168 | 1 | 0.061386 | false | 0 | 0.014851 | 0.005941 | 0.087129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6ab82fce400eca42b39d743eb38edfb3d27ee262 | 33 | py | Python | prior/__init__.py | tals/RAVE | 1bd0c081ccb2404c0739fe18fa7579498e272699 | [
"MIT"
] | 2 | 2022-01-29T20:09:19.000Z | 2022-01-31T22:50:42.000Z | prior/__init__.py | tals/RAVE | 1bd0c081ccb2404c0739fe18fa7579498e272699 | [
"MIT"
] | null | null | null | prior/__init__.py | tals/RAVE | 1bd0c081ccb2404c0739fe18fa7579498e272699 | [
"MIT"
] | null | null | null | from .model import Model as Prior | 33 | 33 | 0.818182 | 6 | 33 | 4.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 1 | 33 | 33 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6aee7bcd20a3c782cfb156b035147c022e7d1c4d | 198 | py | Python | vedaseg/lr_schedulers/registry.py | E18301194/vedaseg | c62c8ea46dbba12f03262452dd7bed22969cfe4e | [
"Apache-2.0"
] | 2 | 2020-07-15T02:36:46.000Z | 2021-03-08T03:18:26.000Z | vedaseg/lr_schedulers/registry.py | E18301194/vedaseg | c62c8ea46dbba12f03262452dd7bed22969cfe4e | [
"Apache-2.0"
] | null | null | null | vedaseg/lr_schedulers/registry.py | E18301194/vedaseg | c62c8ea46dbba12f03262452dd7bed22969cfe4e | [
"Apache-2.0"
] | 1 | 2021-09-16T09:40:12.000Z | 2021-09-16T09:40:12.000Z | from torch.optim import lr_scheduler
from vedaseg.utils import Registry
LR_SCHEDULERS = Registry('lr_scheduler')
MultiStepLR = lr_scheduler.MultiStepLR
LR_SCHEDULERS.register_module(MultiStepLR)
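# Registering torch's MultiStepLR under its class name so config-driven
# builders can look the scheduler up through the registry.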
| 22 | 42 | 0.848485 | 25 | 198 | 6.48 | 0.52 | 0.203704 | 0.271605 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 198 | 8 | 43 | 24.75 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0a8b563f97ec536c6bb57ad76fdb9d47256e28bc | 19,769 | py | Python | models/data_loading_script/summary_generation_data.py | ShuyangCao/hibrids_summ | 6f882c4c4d43d1029314f72535494072fcdfca12 | [
"MIT"
] | 2 | 2022-03-22T11:35:25.000Z | 2022-03-30T09:18:41.000Z | models/data_loading_script/summary_generation_data.py | ShuyangCao/hibrids_summ | 6f882c4c4d43d1029314f72535494072fcdfca12 | [
"MIT"
] | 1 | 2022-03-30T12:42:31.000Z | 2022-03-30T16:19:38.000Z | models/data_loading_script/summary_generation_data.py | ShuyangCao/hibrids_summ | 6f882c4c4d43d1029314f72535494072fcdfca12 | [
"MIT"
] | null | null | null | import datasets
import os
import json
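# Depth-first flattening of a question hierarchy: every node contributes a
# {question, summary, depth} record, followed by its children's records.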
def _get_question(current_question, depth=0):
question_text = current_question["question"]
summary = current_question["answer"]
question_summary_pairs = [{'question': question_text, 'summary': summary, 'depth': depth}]
for child_question in current_question["child_questions"]:
question_summary_pairs.extend(_get_question(child_question, depth + 1))
return question_summary_pairs
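# Same traversal as _get_question, but each record also carries its direct
# children as one-hop (question, summary) pairs for question generation.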
def _get_onehop_question(current_question, depth=0):
question_text = current_question["question"]
summary = current_question["answer"]
question_summary_pairs = [{'question': question_text, 'summary': summary, 'depth': depth,
'child_pairs': [{'question': child_question['question'], 'summary': child_question['answer']} for child_question in current_question["child_questions"]]}]
for child_question in current_question["child_questions"]:
question_summary_pairs.extend(_get_onehop_question(child_question, depth + 1))
return question_summary_pairs
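# Flatten a section tree into (title, paragraphs, depth, id, parent_id)
# tuples; untitled roots are skipped and subsections are delegated to
# _recursive_load_section, which assigns ids in pre-order.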
def _recursive_load_qs_section(section, current_id=0, parent_id=-1, depth=0):
section_paragraphs = []
child_id = current_id
if section["section_title"]:
section_paragraphs.append((" ".join(section["section_title"].split()),
[" ".join(paragraph.split()) for paragraph in section["paragraphs"]], depth, current_id, parent_id))
parent_id = current_id
child_id += 1
for subsection in section["subsections"]:
child_paragraphs, ccid = _recursive_load_section(subsection, child_id, parent_id, depth + 1)
child_id = ccid
section_paragraphs.extend(child_paragraphs)
return section_paragraphs, child_id
def _load_qs(line):
report = json.loads(line)
document_paragraphs, _ = _recursive_load_qs_section(report["section"])
question_summary_pairs = []
for question in report["questions"]:
question_summary_pairs.extend(_get_question(question))
return document_paragraphs, question_summary_pairs, report["sample_id"]
def _load_onehop_qs(line):
report = json.loads(line)
document_paragraphs, _ = _recursive_load_qs_section(report["section"])
question_summary_pairs = []
for question in report["questions"]:
question_summary_pairs.extend(_get_onehop_question(question))
return document_paragraphs, question_summary_pairs, report["sample_id"]
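# GovReport loader: 'Letter' sections are dropped unless keep_letter is set;
# a skipped section passes its parent_id and depth through to its children.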
def _recursive_load(section, keep_letter=False, current_id=0, parent_id=-1, depth=0):
section_paragraphs = []
child_id = current_id
if section["section_title"] and (section["section_title"] != 'Letter' or (section["section_title"] == 'Letter' and keep_letter)):
section_paragraphs.append((" ".join(section["section_title"].split()), [" ".join(paragraph.split()) for paragraph in section["paragraphs"]], depth, current_id, parent_id))
child_id += 1
for subsection in section["subsections"]:
child_paragraphs, ccid = _recursive_load(subsection, keep_letter, child_id, current_id, depth + 1)
child_id = ccid
section_paragraphs.extend(child_paragraphs)
else:
for subsection in section["subsections"]:
child_paragraphs, ccid = _recursive_load(subsection, keep_letter, child_id, parent_id, depth)
child_id = ccid
section_paragraphs.extend(child_paragraphs)
return section_paragraphs, child_id
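# Like _recursive_load, but every section is kept regardless of its title.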
def _recursive_load_section(section, current_id=0, parent_id=-1, depth=0):
section_paragraphs = []
child_id = current_id
section_paragraphs.append((" ".join(section["section_title"].split()), [" ".join(paragraph.split()) for paragraph in section["paragraphs"]], depth, current_id, parent_id))
child_id += 1
for subsection in section["subsections"]:
child_paragraphs, ccid = _recursive_load_section(subsection, child_id, current_id, depth + 1)
child_id = ccid
section_paragraphs.extend(child_paragraphs)
return section_paragraphs, child_id
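# GAO reports: flatten the body sections and use the highlight paragraphs as
# the summary, optionally dropping the "What GAO Recommends" section.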
def _load_gao_doc(filepath, no_rec=False):
with open(filepath, encoding="utf-8") as f:
report = json.load(f)
document_paragraphs = []
current_id = 0
for section in report["report"]:
paragraphs, pcid = _recursive_load(section, keep_letter=False, current_id=current_id)
current_id = pcid
document_paragraphs.extend(paragraphs)
summary_paragraphs = []
for section in report["highlight"]:
if no_rec and section["section_title"] == "What GAO Recommends":
continue
summary_paragraphs.extend(section["paragraphs"])
return document_paragraphs, summary_paragraphs
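# CRS reports keep their 'Letter' section and store the summary paragraphs
# directly in the JSON.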
def _load_crs_doc(filepath):
with open(filepath, encoding="utf-8") as f:
report = json.load(f)
document_paragraphs, _ = _recursive_load(report["reports"], keep_letter=True)
summary_paragraphs = report["summary"]
return document_paragraphs, summary_paragraphs
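# Wikipedia biography pages arrive one JSON object per line; sections are
# loaded with keep_letter=True, so no titles are filtered out.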
def _load_wiki(line):
report = json.loads(line)
document_paragraphs, _ = _recursive_load(report["report"], keep_letter=True)
summary_paragraphs = report["summary"]
return document_paragraphs, summary_paragraphs, report['id']
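# HuggingFace datasets builder: each config below selects a corpus
# (GovReport, GovReport-QS, WikiBioSum) and the features emitted per example.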
class FullSummaryGenerationConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super().__init__(**kwargs)
class FullSummaryGenerationDataset(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
BUILDER_CONFIGS = [
FullSummaryGenerationConfig(
name="gov_report",
version=VERSION,
description="gov_report"
),
FullSummaryGenerationConfig(
name="qs_hierarchy_fq",
version=VERSION,
description="qs_hierarchy_fq"
),
FullSummaryGenerationConfig(
name="qs_hierarchy_qg",
version=VERSION,
description="qs_hierarchy_qg"
),
FullSummaryGenerationConfig(
name="wiki_bio_sum",
version=VERSION,
description="wiki_bio_sum"
),
]
def _info(self):
if self.config.name == 'gov_report':
features = datasets.Features(
{
"id": datasets.Value("string"),
"document_paragraphs": [datasets.Value("string")],
"section_paragraph_ends": [datasets.Value("int32")],
"section_depths": [datasets.Value("int32")],
"section_parent_ids": [datasets.Value("int32")],
"section_titles": [datasets.Value("string")],
"summary": datasets.Value("string")
}
)
elif self.config.name.startswith('wiki_bio_sum'):
features = datasets.Features(
{
"id": datasets.Value("string"),
"document_paragraphs": [datasets.Value("string")],
"section_paragraph_ends": [datasets.Value("int32")],
"section_depths": [datasets.Value("int32")],
"section_parent_ids": [datasets.Value("int32")],
"section_titles": [datasets.Value("string")],
"summary": datasets.Value("string")
}
)
elif self.config.name == 'qs_hierarchy_fq':
features = datasets.Features(
{
"id": datasets.Value("string"),
"document_paragraphs": [datasets.Value("string")],
"section_paragraph_ends": [datasets.Value("int32")],
"section_depths": [datasets.Value("int32")],
"section_parent_ids": [datasets.Value("int32")],
"section_titles": [datasets.Value("string")],
"first_question": datasets.Value("string"),
"first_summary": datasets.Value("string"),
"summary_paragraphs": [datasets.Value("string")],
"summary_depths": [datasets.Value("int32")]
}
)
elif self.config.name == 'qs_hierarchy_qg':
features = datasets.Features(
{
"id": datasets.Value("string"),
"document_paragraphs": [datasets.Value("string")],
"section_paragraph_ends": [datasets.Value("int32")],
"section_depths": [datasets.Value("int32")],
"section_parent_ids": [datasets.Value("int32")],
"section_titles": [datasets.Value("string")],
"first_question": datasets.Value("string"),
"first_summary": datasets.Value("string"),
"summary_paragraphs": [datasets.Value("string")],
}
)
else:
raise ValueError
return datasets.DatasetInfo(
description="summary dataset",
features=features,
supervised_keys=None
)
def _split_generators(self, dl_manager):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"split_name": "train"
}
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"split_name": "valid"
}
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"split_name": "test"
}
)
]
def _generate_examples(self, split_name):
if self.config.name == 'wiki_bio_sum':
with open(os.path.join(self.config.data_dir, 'wikibiosum', f'{split_name}.jsonl')) as f:
for line in f:
document_sections, summary_paragraphs, wiki_id = _load_wiki(line)
summary = " ".join(summary_paragraphs)
_id = wiki_id
section_paragraph_ends = []
section_depths = []
document_paragraphs = []
current_end = 0
section_titles = []
parent_ids = []
for section_title, document_section, section_depth, current_id, parent_id in document_sections:
paragraphs = document_section
section_titles.append(section_title)
current_end += len(paragraphs)
section_paragraph_ends.append(current_end)
section_depths.append(section_depth)
document_paragraphs.extend(paragraphs)
parent_ids.append(parent_id)
if document_paragraphs:
yield _id, {
"id": _id,
"document_paragraphs": document_paragraphs,
"section_paragraph_ends": section_paragraph_ends,
"section_depths": section_depths,
"section_parent_ids": parent_ids,
"section_titles": section_titles,
"summary": summary
}
elif self.config.name == 'qs_hierarchy_fq':
with open(os.path.join(self.config.data_dir, 'gov-report-qs', f'{split_name}.jsonl')) as f:
for line_i, line in enumerate(f):
document_sections, question_summary_pairs, sample_id = _load_qs(line)
first_question = question_summary_pairs[0]["question"]
first_summary = question_summary_pairs[0]["summary"]
summary_paragraphs = [question_summary_pair["question"] + " " + question_summary_pair["summary"] for
question_summary_pair in question_summary_pairs[1:]]
summary_depths = [question_summary_pair["depth"] for question_summary_pair in
question_summary_pairs[1:]]
_id = f'{sample_id}_{line_i}'
section_paragraph_ends = []
section_depths = []
document_paragraphs = []
current_end = 0
section_titles = []
parent_ids = []
for section_title, document_section, section_depth, current_id, parent_id in document_sections:
paragraphs = document_section
section_titles.append(section_title)
current_end += len(paragraphs)
section_paragraph_ends.append(current_end)
section_depths.append(section_depth)
document_paragraphs.extend(paragraphs)
parent_ids.append(parent_id)
if document_paragraphs:
yield _id, {
"id": _id,
"document_paragraphs": document_paragraphs,
"section_paragraph_ends": section_paragraph_ends,
"section_depths": section_depths,
"section_parent_ids": parent_ids,
"section_titles": section_titles,
"first_question": first_question,
"first_summary": first_summary,
"summary_paragraphs": summary_paragraphs,
"summary_depths": summary_depths
}
elif self.config.name == 'qs_hierarchy_qg':
with open(os.path.join(self.config.data_dir, 'gov-report-qs', f'{split_name}.jsonl')) as f:
for line_i, line in enumerate(f):
document_sections, question_summary_pairs, sample_id = _load_onehop_qs(line)
_id = f'{sample_id}_{line_i}'
section_paragraph_ends = []
section_depths = []
document_paragraphs = []
current_end = 0
section_titles = []
parent_ids = []
for section_title, document_section, section_depth, current_id, parent_id in document_sections:
paragraphs = document_section
section_titles.append(section_title)
current_end += len(paragraphs)
section_paragraph_ends.append(current_end)
section_depths.append(section_depth)
document_paragraphs.extend(paragraphs)
parent_ids.append(parent_id)
if document_paragraphs:
for qi, question_summary_pair in enumerate(question_summary_pairs):
if question_summary_pair["question"] and question_summary_pair["summary"] and question_summary_pair["child_pairs"]:
summary_paragraphs = [
pair["question"] for
pair in question_summary_pair["child_pairs"]]
yield f'{_id}_{qi}', {
"id": f'{_id}_{qi}',
"document_paragraphs": document_paragraphs,
"section_paragraph_ends": section_paragraph_ends,
"section_depths": section_depths,
"section_parent_ids": parent_ids,
"section_titles": section_titles,
"first_question": question_summary_pair["question"],
"first_summary": question_summary_pair["summary"],
"summary_paragraphs": summary_paragraphs,
}
elif self.config.name == 'gov_report':
gao_split_file = os.path.join(self.config.data_dir, "gov-report", "split_ids", f'gao_{split_name}.ids')
crs_split_file = os.path.join(self.config.data_dir, "gov-report", "split_ids", f'crs_{split_name}.ids')
document_dir = os.path.join(self.config.data_dir, "gov-report")
with open(gao_split_file) as f:
gao_split_ids = [line.strip() for line in f]
with open(crs_split_file) as f:
crs_split_ids = [line.strip() for line in f]
for gao_split_id in gao_split_ids:
document_sections, summary_paragraphs = _load_gao_doc(
os.path.join(document_dir, 'gao', f'{gao_split_id}.json'), no_rec=False)
summary = " ".join(summary_paragraphs)
_id = 'GAO_' + gao_split_id
section_paragraph_ends = []
section_depths = []
document_paragraphs = []
current_end = 0
section_titles = []
parent_ids = []
for section_title, document_section, section_depth, current_id, parent_id in document_sections:
paragraphs = document_section
section_titles.append(section_title)
current_end += len(paragraphs)
section_paragraph_ends.append(current_end)
section_depths.append(section_depth)
document_paragraphs.extend(paragraphs)
parent_ids.append(parent_id)
if document_paragraphs:
yield _id, {
"id": _id,
"document_paragraphs": document_paragraphs,
"section_paragraph_ends": section_paragraph_ends,
"section_depths": section_depths,
"section_parent_ids": parent_ids,
"section_titles": section_titles,
"summary": summary
}
for crs_split_id in crs_split_ids:
document_sections, summary_paragraphs = _load_crs_doc(
os.path.join(document_dir, 'crs', f'{crs_split_id}.json'))
summary = " ".join(summary_paragraphs)
_id = 'CRS_' + crs_split_id
section_paragraph_ends = []
section_depths = []
document_paragraphs = []
current_end = 0
section_titles = []
parent_ids = []
for section_title, document_section, section_depth, current_id, parent_id in document_sections:
paragraphs = document_section
section_titles.append(section_title)
current_end += len(paragraphs)
section_paragraph_ends.append(current_end)
section_depths.append(section_depth)
document_paragraphs.extend(paragraphs)
parent_ids.append(parent_id)
if document_paragraphs:
yield _id, {
"id": _id,
"document_paragraphs": document_paragraphs,
"section_paragraph_ends": section_paragraph_ends,
"section_depths": section_depths,
"section_parent_ids": parent_ids,
"section_titles": section_titles,
"summary": summary
}
| 44.424719 | 185 | 0.557995 | 1,835 | 19,769 | 5.655586 | 0.074114 | 0.069378 | 0.046252 | 0.039025 | 0.807381 | 0.750337 | 0.745712 | 0.721526 | 0.700617 | 0.6799 | 0 | 0.004663 | 0.349183 | 19,769 | 444 | 186 | 44.524775 | 0.801959 | 0 | 0 | 0.633952 | 0 | 0 | 0.119986 | 0.010016 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037135 | false | 0 | 0.007958 | 0.002653 | 0.087533 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a909cc58a8a05a950a5a15fb9fba264d96f51c2 | 4,675 | py | Python | hc/api/tests/test_smtpd.py | gdepeyrot/healthchecks | 148894bd9ee028d32ab2032441f6746bc7e794e4 | [
"BSD-3-Clause"
] | 4,813 | 2015-07-27T11:44:52.000Z | 2022-03-31T14:24:07.000Z | hc/api/tests/test_smtpd.py | gdepeyrot/healthchecks | 148894bd9ee028d32ab2032441f6746bc7e794e4 | [
"BSD-3-Clause"
] | 579 | 2015-07-20T11:49:19.000Z | 2022-03-31T18:20:11.000Z | hc/api/tests/test_smtpd.py | gdepeyrot/healthchecks | 148894bd9ee028d32ab2032441f6746bc7e794e4 | [
"BSD-3-Clause"
] | 644 | 2015-09-11T16:14:26.000Z | 2022-03-31T13:20:20.000Z | from hc.api.models import Check, Ping
from hc.test import BaseTestCase
from hc.api.management.commands.smtpd import _process_message
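# The smtpd handler classifies a ping by matching check.subject /
# check.subject_fail keywords against the (possibly MIME-encoded) Subject
# header; PAYLOAD_TMPL below builds a minimal message for those cases.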
PAYLOAD_TMPL = """
From: "User Name" <username@gmail.com>
To: "John Smith" <john@example.com>
Subject: %s
...
""".strip()
class SmtpdTestCase(BaseTestCase):
def setUp(self):
super().setUp()
self.check = Check.objects.create(project=self.project)
self.email = "%s@does.not.matter" % self.check.code
def test_it_works(self):
_process_message("1.2.3.4", "foo@example.org", self.email, b"hello world")
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.body, "hello world")
self.assertEqual(ping.kind, None)
def test_it_handles_subject_filter_match(self):
self.check.subject = "SUCCESS"
self.check.save()
body = PAYLOAD_TMPL % "[SUCCESS] Backup completed"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, None)
def test_it_handles_subject_filter_miss(self):
self.check.subject = "SUCCESS"
self.check.save()
body = PAYLOAD_TMPL % "[FAIL] Backup did not complete"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, "ign")
def test_it_handles_subject_fail_filter_match(self):
self.check.subject_fail = "FAIL"
self.check.save()
body = PAYLOAD_TMPL % "[FAIL] Backup did not complete"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, "fail")
def test_it_handles_subject_fail_filter_miss(self):
self.check.subject_fail = "FAIL"
self.check.save()
body = PAYLOAD_TMPL % "[SUCCESS] Backup completed"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, "ign")
def test_it_handles_multiple_subject_keywords(self):
self.check.subject = "SUCCESS, OK"
self.check.save()
body = PAYLOAD_TMPL % "[OK] Backup completed"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, None)
def test_it_handles_multiple_subject_fail_keywords(self):
self.check.subject_fail = "FAIL, WARNING"
self.check.save()
body = PAYLOAD_TMPL % "[WARNING] Backup did not complete"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, "fail")
def test_it_handles_subject_fail_before_success(self):
self.check.subject = "SUCCESS"
self.check.subject_fail = "FAIL"
self.check.save()
body = PAYLOAD_TMPL % "[SUCCESS] 1 Backup completed, [FAIL] 1 Backup did not complete"
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, "fail")
def test_it_handles_encoded_subject(self):
self.check.subject = "SUCCESS"
self.check.save()
body = PAYLOAD_TMPL % "=?US-ASCII?B?W1NVQ0NFU1NdIEJhY2t1cCBjb21wbGV0ZWQ=?="
_process_message("1.2.3.4", "foo@example.org", self.email, body.encode("utf8"))
ping = Ping.objects.latest("id")
self.assertEqual(ping.scheme, "email")
self.assertEqual(ping.ua, "Email from foo@example.org")
self.assertEqual(ping.kind, None)
| 37.103175 | 94 | 0.648984 | 611 | 4,675 | 4.837971 | 0.13748 | 0.142084 | 0.179973 | 0.103518 | 0.838972 | 0.824425 | 0.77977 | 0.762855 | 0.762855 | 0.762855 | 0 | 0.014555 | 0.206417 | 4,675 | 125 | 95 | 37.4 | 0.78221 | 0 | 0 | 0.670213 | 0 | 0 | 0.218396 | 0.010909 | 0 | 0 | 0 | 0 | 0.297872 | 1 | 0.106383 | false | 0 | 0.031915 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0aa411655d5294a08745e6b218bc9bd7701f93ca | 112 | py | Python | nightson/handlers/base.py | vswamy/nightson | 01b5efbf1acbca79aaece1b70e15f2877a1ea8c4 | [
"Apache-2.0"
] | null | null | null | nightson/handlers/base.py | vswamy/nightson | 01b5efbf1acbca79aaece1b70e15f2877a1ea8c4 | [
"Apache-2.0"
] | null | null | null | nightson/handlers/base.py | vswamy/nightson | 01b5efbf1acbca79aaece1b70e15f2877a1ea8c4 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
import tornado
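# Minimal shared base class for the app's tornado request handlers.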
class BaseHandler(tornado.web.RequestHandler):
pass | 16 | 46 | 0.821429 | 13 | 112 | 6.692308 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133929 | 112 | 7 | 47 | 16 | 0.896907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0ad0a28bd83d7955ca983f9f634da66133aa2eb6 | 129 | py | Python | app/app/test_add.py | trevord7371/recipe-app-api-pytest | 25787c391d1bfb5ae85baff4b30c0772a2a5c111 | [
"MIT"
] | 1 | 2020-12-13T04:03:05.000Z | 2020-12-13T04:03:05.000Z | app/app/test_add.py | trevord7371/recipe-app-api-pytest | 25787c391d1bfb5ae85baff4b30c0772a2a5c111 | [
"MIT"
] | null | null | null | app/app/test_add.py | trevord7371/recipe-app-api-pytest | 25787c391d1bfb5ae85baff4b30c0772a2a5c111 | [
"MIT"
] | null | null | null | from app.calc import add
def test_add_numbers():
"""Test that two numbers are added together"""
assert add(3, 8) == 11
| 18.428571 | 50 | 0.666667 | 21 | 129 | 4 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0.217054 | 129 | 6 | 51 | 21.5 | 0.792079 | 0.310078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
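The app.calc module under test is not included in this row; a minimal implementation consistent with the assertion above would be:

def add(x, y):
    """Return the sum of two numbers."""
    return x + y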
0af60efebf939e4f136bbddfc1e01ee1eb501fe0 | 11,329 | py | Python | model-optimizer/extensions/middle/ReluQuantizeFuse_test.py | anton-potapov/openvino | 84119afe9a8c965e0a0cd920fff53aee67b05108 | [
"Apache-2.0"
] | 1 | 2020-06-21T09:51:42.000Z | 2020-06-21T09:51:42.000Z | model-optimizer/extensions/middle/ReluQuantizeFuse_test.py | anton-potapov/openvino | 84119afe9a8c965e0a0cd920fff53aee67b05108 | [
"Apache-2.0"
] | 4 | 2021-04-01T08:29:48.000Z | 2021-08-30T16:12:52.000Z | model-optimizer/extensions/middle/ReluQuantizeFuse_test.py | anton-potapov/openvino | 84119afe9a8c965e0a0cd920fff53aee67b05108 | [
"Apache-2.0"
] | 3 | 2021-03-09T08:27:29.000Z | 2021-04-07T04:58:54.000Z | """
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest
import numpy as np
from extensions.middle.ReluQuantizeFuse import ReluQuantizeFuse, ReluFakeQuantizeMark
from mo.utils.ir_engine.compare_graphs import compare_graphs
from mo.utils.unittest.graph import build_graph
nodes = {
# input
'placeholder': {'type': 'Parameter', 'kind': 'op', 'op': 'Parameter'},
'placeholder_d': {'value': None, 'shape': None, 'kind': 'data', 'data_type': None},
# Relu
'relu': {'kind': 'op', 'op': 'ReLU'},
'relu_d': {'value': None, 'shape': None, 'kind': 'data'},
# Quantize
'const_1': {'op': 'Const', 'kind': 'op'},
'const_1_d': {'kind': 'data', 'value': None},
'const_2': {'op': 'Const', 'kind': 'op'},
'const_2_d': {'kind': 'data', 'value': None},
'const_3': {'op': 'Const', 'kind': 'op'},
'const_3_d': {'kind': 'data', 'value': None},
'const_4': {'op': 'Const', 'kind': 'op'},
'const_4_d': {'kind': 'data', 'value': None},
'quantize': {'kind': 'op', 'op': 'FakeQuantize', 'keep_in_IR': True},
'quantize_d': {'value': None, 'shape': None, 'kind': 'data'},
'quantize_1': {'kind': 'op', 'op': 'FakeQuantize', 'keep_in_IR': True},
'quantize_1_d': {'value': None, 'shape': None, 'kind': 'data'},
# Result
'output': {'kind': 'op', 'op': 'Result'},
'output_1': {'kind': 'op', 'op': 'Result'},
# Ops for extra connection expressing
'extra_op': {'kind': 'op', 'op': 'SomeOp'},
'extra_data': {'kind': 'data'},
}
i8_edges = [
# op to data connections
('placeholder', 'placeholder_d'),
('relu', 'relu_d', {'out': 0}),
('const_1', 'const_1_d'),
('const_2', 'const_2_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
# data to op connections
('placeholder_d', 'relu'),
('relu_d', 'quantize', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_2_d', 'quantize', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('quantize_d', 'output'),
]
ref_i8_edges = [
# op to data connections
('placeholder', 'placeholder_d'),
('const_1', 'const_1_d'),
('const_2', 'const_2_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
# data to op connections
('placeholder_d', 'quantize', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_2_d', 'quantize', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('quantize_d', 'output'),
('placeholder_d', 'relu', {'out': 0}),
('relu', 'relu_d', {'out': 0}),
]
i1_edges = [
# op to data connections
('placeholder', 'placeholder_d'),
('relu', 'relu_d', {'out': 0}),
('const_1', 'const_1_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
# data to op connections
('placeholder_d', 'relu'),
('relu_d', 'quantize', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_1_d', 'quantize', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('quantize_d', 'output'),
]
ref_i1_edges = [
# op to data connections
('placeholder', 'placeholder_d'),
('const_1', 'const_1_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
# data to op connections
('placeholder_d', 'quantize', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_1_d', 'quantize', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('quantize_d', 'output'),
('placeholder_d', 'relu', {'out': 0}),
('relu', 'relu_d', {'out': 0}),
]
relu_extra_output = [
# op to data connections
('placeholder', 'placeholder_d'),
('relu', 'relu_d', {'out': 0}),
('const_1', 'const_1_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
# data to op connections
('placeholder_d', 'relu'),
('relu_d', 'quantize', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_1_d', 'quantize', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('quantize_d', 'output'),
# extra output of relu
('relu_d', 'extra_op'),
('extra_op', 'extra_data'),
('extra_data', 'output_1'),
]
const_extra = [
# op to data connections
('placeholder', 'placeholder_d'),
('relu', 'relu_d', {'out': 0}),
('const_1', 'const_1_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
('quantize_1', 'quantize_1_d'),
# data to op connections
('placeholder_d', 'relu', {'out': 0}),
('relu_d', 'quantize', {'in': 0}),
('relu_d', 'quantize_1', {'in': 0}),
('const_1_d', 'quantize', {'in': 1}),
('const_1_d', 'quantize', {'in': 2}),
('const_1_d', 'quantize_1', {'in': 1}),
('const_1_d', 'quantize_1', {'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_3_d', 'quantize_1', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('const_4_d', 'quantize_1', {'in': 4}),
('quantize_d', 'output'),
('quantize_1_d', 'output_1'),
]
ref_const_extra = [
# op to data connections
('placeholder', 'placeholder_d'),
('const_1', 'const_1_d'),
('const_2', 'const_2_d'),
('const_3', 'const_3_d'),
('const_4', 'const_4_d'),
('quantize', 'quantize_d'),
('quantize_1', 'quantize_1_d'),
# data to op connections
('placeholder_d', 'quantize', {'in': 0, 'out': 0}),
('placeholder_d', 'quantize_1', {'in': 0, 'out': 0}),
('const_1_d', 'quantize', {'out': 0, 'in': 1}),
('const_1_d', 'quantize', {'out': 0, 'in': 2}),
('const_2_d', 'quantize_1', {'out': 0, 'in': 1}),
('const_2_d', 'quantize_1', {'out': 0, 'in': 2}),
('const_3_d', 'quantize', {'in': 3}),
('const_3_d', 'quantize_1', {'in': 3}),
('const_4_d', 'quantize', {'in': 4}),
('const_4_d', 'quantize_1', {'in': 4}),
('quantize_d', 'output'),
('quantize_1_d', 'output_1'),
('placeholder_d', 'relu', {'out': 0}),
('relu', 'relu_d', {'out': 0}),
]
class ReluQuantizeFuseTests(unittest.TestCase):
def test_classic_i8_positive_case(self):
graph = build_graph(nodes, i8_edges,
{'const_1_d': {'value': np.zeros([1, 2, 3, 4])}, 'quantize': {'levels': 256}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, ref_i8_edges, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'output', check_op_attrs=True)
self.assertTrue(flag, resp)
def test_classic_i8_negative_case(self):
graph = build_graph(nodes, i8_edges,
{'const_1_d': {'value': np.full([1, 2, 3, 4], -1)}, 'quantize': {'levels': 256}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, i8_edges, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'output', check_op_attrs=True)
self.assertTrue(flag, resp)
def test_classic_i1_positive_case(self):
graph = build_graph(nodes, i1_edges,
{'const_1_d': {'value': np.zeros([1, 2, 3, 4], dtype=np.float32)},
'quantize': {'levels': 2}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, ref_i1_edges, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'output', check_op_attrs=True)
self.assertTrue(flag, resp)
def test_classic_i1_negative_case(self):
graph = build_graph(nodes, i1_edges,
{'const_1_d': {'value': np.full([1, 2, 3, 4], -1, dtype=np.float32)},
'quantize': {'levels': 2}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, ref_i1_edges, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'output', check_op_attrs=True)
self.assertTrue(flag, resp)
np.array_equal(np.full([1, 2, 3, 4], float('-inf'), dtype=np.float32), graph_ref.node['const_1_d']['value'])  # NOTE: result is discarded -- presumably meant as self.assertTrue(np.array_equal(...))
def test_relu_extra_outputs_i1_case(self):
graph = build_graph(nodes, relu_extra_output,
{'const_1_d': {'value': np.full([1, 2, 3, 4], -1, dtype=np.float32)},
'quantize': {'levels': 2}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, relu_extra_output, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'relu', check_op_attrs=True)
self.assertTrue(flag, resp)
np.array_equal(np.full([1, 2, 3, 4], float('-inf'), dtype=np.float32), graph_ref.node['const_1_d']['value'])  # NOTE: result is discarded, as above
def test_const_extra_outputs_i1_case(self):
graph = build_graph(nodes, const_extra,
{'const_1_d': {'value': np.full([1, 2, 3, 4], -1, dtype=np.float32)},
'quantize': {'levels': 2}, 'quantize_1': {'levels': 2}},
nodes_with_edges_only=True)
graph.graph['layout'] = 'NHWC'
graph.stage = 'middle'
graph_ref = build_graph(nodes, ref_const_extra, nodes_with_edges_only=True)
ReluFakeQuantizeMark().find_and_replace_pattern(graph)
ReluQuantizeFuse().find_and_replace_pattern(graph)
(flag, resp) = compare_graphs(graph, graph_ref, 'relu', check_op_attrs=True)
self.assertTrue(flag, resp)
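# NOTE: as in the two cases above, the np.array_equal result on the next line is discarded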
np.array_equal(np.full([1, 2, 3, 4], float('-inf'), dtype=np.float32), graph_ref.node['const_1_d']['value']) | 35.292835 | 116 | 0.575691 | 1,476 | 11,329 | 4.120596 | 0.100949 | 0.079908 | 0.059684 | 0.039461 | 0.821769 | 0.800888 | 0.778034 | 0.756988 | 0.737915 | 0.706347 | 0 | 0.03187 | 0.221732 | 11,329 | 321 | 117 | 35.292835 | 0.657934 | 0.085974 | 0 | 0.690265 | 0 | 0 | 0.282295 | 0 | 0 | 0 | 0 | 0 | 0.026549 | 1 | 0.026549 | false | 0 | 0.022124 | 0 | 0.053097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7c1e19c2aee7f9e9a159ddf6dd5eeb8f6b63e726 | 300 | py | Python | backend/models/transaction.py | akita8/scrapper | 4027cae737bf9eaccb5421518b0c577ef3ecac5f | [
"MIT"
] | null | null | null | backend/models/transaction.py | akita8/scrapper | 4027cae737bf9eaccb5421518b0c577ef3ecac5f | [
"MIT"
] | null | null | null | backend/models/transaction.py | akita8/scrapper | 4027cae737bf9eaccb5421518b0c577ef3ecac5f | [
"MIT"
] | null | null | null | """Documentation."""
from .base import BaseModel
class StockTransaction(BaseModel):
"""Documentation."""
def key(self):
"""Documentation."""
return
class BondTransaction(BaseModel):
"""Documentation."""
def key(self):
"""Documentation."""
return
| 15.789474 | 34 | 0.596667 | 23 | 300 | 7.782609 | 0.521739 | 0.24581 | 0.27933 | 0.312849 | 0.569832 | 0.569832 | 0.569832 | 0 | 0 | 0 | 0 | 0 | 0.243333 | 300 | 18 | 35 | 16.666667 | 0.788546 | 0.246667 | 0 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
7c494f0170c8a26d3a10c8f0ff285af326673f1f | 12,275 | py | Python | sdk/translation/azure-ai-translation-document/tests/test_translation.py | benbp/azure-sdk-for-python | 2329ba03e48098dcdc581898f6434d7c2b13a7b9 | [
"MIT"
] | null | null | null | sdk/translation/azure-ai-translation-document/tests/test_translation.py | benbp/azure-sdk-for-python | 2329ba03e48098dcdc581898f6434d7c2b13a7b9 | [
"MIT"
] | null | null | null | sdk/translation/azure-ai-translation-document/tests/test_translation.py | benbp/azure-sdk-for-python | 2329ba03e48098dcdc581898f6434d7c2b13a7b9 | [
"MIT"
] | null | null | null | # coding=utf-8
# ------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------
import functools
import pytest
import uuid
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import ContainerClient
from testcase import DocumentTranslationTest, Document
from preparer import DocumentTranslationPreparer, \
DocumentTranslationClientPreparer as _DocumentTranslationClientPreparer
from azure.ai.translation.document import DocumentTranslationClient, DocumentTranslationInput, TranslationTarget
DocumentTranslationClientPreparer = functools.partial(_DocumentTranslationClientPreparer, DocumentTranslationClient)
class TestTranslation(DocumentTranslationTest):
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_single_source_single_target(self, client):
# prepare containers and test data
blob_data = b'This is some text'
source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
# submit job and test
self._submit_and_validate_translation_job(client, translation_inputs, 1)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_single_source_two_targets(self, client):
# prepare containers and test data
blob_data = b'This is some text'
source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
target_container_sas_url = self.create_target_container()
additional_target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
),
TranslationTarget(
target_url=additional_target_container_sas_url,
language_code="fr"
)
]
)
]
# submit job and test
self._submit_and_validate_translation_job(client, translation_inputs)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_multiple_sources_single_target(self, client):
# prepare containers and test data
blob_data = b'This is some text'
source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
blob_data = b'This is some text2'
additional_source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
),
DocumentTranslationInput(
source_url=additional_source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="fr"
)
]
)
]
# submit job and test
self._submit_and_validate_translation_job(client, translation_inputs, 2)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_single_source_single_target_with_prefix(self, client):
# prepare containers and test data
blob_data = b'This is some text'
prefix = "xyz"
source_container_sas_url = self.create_source_container(data=Document(data=blob_data, prefix=prefix))
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
],
prefix=prefix
)
]
# submit job and test
self._submit_and_validate_translation_job(client, translation_inputs, 1)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_single_source_single_target_with_suffix(self, client):
# prepare containers and test data
blob_data = b'This is some text'
suffix = "txt"
source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
],
suffix=suffix
)
]
# submit job and test
self._submit_and_validate_translation_job(client, translation_inputs, 1)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_bad_input_source(self, client):
# prepare containers and test data
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url="https://idont.ex.ist",
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
with pytest.raises(HttpResponseError) as e:
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
assert e.value.error.code == "InvalidDocumentAccessLevel"
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_bad_input_target(self, client):
# prepare containers and test data
blob_data = b'This is some text'
source_container_sas_url = self.create_source_container(data=Document(data=blob_data))
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url="https://idont.ex.ist",
language_code="es"
)
]
)
]
with pytest.raises(HttpResponseError) as e:
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
assert e.value.error.code == "InvalidDocumentAccessLevel"
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_use_supported_and_unsupported_files(self, client):
# prepare containers and test data
source_container_sas_url = self.create_source_container(data=[
Document(suffix=".txt"),
Document(suffix=".jpg")
]
)
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
self._validate_translation_job(job, status="Succeeded", total=1, succeeded=1)
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_existing_documents_in_target(self, client):
# prepare containers and test data
source_container_sas_url = self.create_source_container(data=Document(name="document"))
target_container_sas_url = self.create_target_container(data=Document(name="document"))
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
self._validate_translation_job(job, status="Failed", total=1, failed=1)
doc_status = client.list_all_document_statuses(job.id)
doc = next(doc_status)
assert doc.status == "Failed"
assert doc.error.code == "TargetFileAlreadyExists"
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_existing_documents_in_target_one_valid(self, client):
# prepare containers and test data
source_container_sas_url = self.create_source_container(data=[Document(name="document"), Document()])
target_container_sas_url = self.create_target_container(data=Document(name="document"))
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
self._validate_translation_job(job, status="Succeeded", total=2, failed=1)
doc_statuses = client.list_all_document_statuses(job.id)
for doc in doc_statuses:
if doc.status == "Failed":
assert doc.error.code == "TargetFileAlreadyExists"
@DocumentTranslationPreparer()
@DocumentTranslationClientPreparer()
def test_empty_document(self, client):
# prepare containers and test data
source_container_sas_url = self.create_source_container(Document(data=b''))
target_container_sas_url = self.create_target_container()
# prepare translation inputs
translation_inputs = [
DocumentTranslationInput(
source_url=source_container_sas_url,
targets=[
TranslationTarget(
target_url=target_container_sas_url,
language_code="es"
)
]
)
]
job = client.create_translation_job(translation_inputs)
job = client.wait_until_done(job.id)
self._validate_translation_job(job, status="Failed", total=1, failed=1)
doc_status = client.list_all_document_statuses(job.id)
doc = next(doc_status)
assert doc.status == "Failed"
assert doc.error.code == "WrongDocumentEncoding"
| 37.885802 | 116 | 0.613605 | 1,097 | 12,275 | 6.517776 | 0.110301 | 0.075524 | 0.094406 | 0.067552 | 0.860839 | 0.850909 | 0.848671 | 0.836504 | 0.809091 | 0.793566 | 0 | 0.001668 | 0.31609 | 12,275 | 323 | 117 | 38.003096 | 0.85003 | 0.074542 | 0 | 0.626984 | 0 | 0 | 0.035226 | 0.010506 | 0 | 0 | 0 | 0 | 0.027778 | 1 | 0.043651 | false | 0 | 0.031746 | 0 | 0.079365 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
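The tests above repeat the same DocumentTranslationInput scaffolding. A hypothetical helper (not part of the SDK's test suite) that would collapse the common single-source, single-target case:

def _single_target_input(source_url, target_url, language_code="es", **kwargs):
    # kwargs lets callers pass prefix=/suffix= through to the input
    return [DocumentTranslationInput(
        source_url=source_url,
        targets=[TranslationTarget(target_url=target_url, language_code=language_code)],
        **kwargs,
    )]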
7cafb220a1dc71039f5171123993b412d0ba6d85 | 43 | py | Python | domestic_violence_news_classifer_spanish/__init__.py | domestic-violence-ai-studies/domestic-violence-news-classifier | 58e262187436b7c35fe6ec233f40f929879a7bf3 | [
"MIT"
] | null | null | null | domestic_violence_news_classifer_spanish/__init__.py | domestic-violence-ai-studies/domestic-violence-news-classifier | 58e262187436b7c35fe6ec233f40f929879a7bf3 | [
"MIT"
] | null | null | null | domestic_violence_news_classifer_spanish/__init__.py | domestic-violence-ai-studies/domestic-violence-news-classifier | 58e262187436b7c35fe6ec233f40f929879a7bf3 | [
"MIT"
] | null | null | null | from . import domestic_violence_classifier
| 21.5 | 42 | 0.883721 | 5 | 43 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7cdbe2fe1dfb51aa0d2d91dc1284c2b02612b44f | 34 | py | Python | keras_easy/models/rcnn.py | afranck64/keras-easy | a27c0fefe8f9796dc22eca7aa3123548ac5a4646 | [
"MIT"
] | 1 | 2021-01-21T21:45:20.000Z | 2021-01-21T21:45:20.000Z | keras_easy/models/rcnn.py | afranck64/keras-easy | a27c0fefe8f9796dc22eca7aa3123548ac5a4646 | [
"MIT"
] | null | null | null | keras_easy/models/rcnn.py | afranck64/keras-easy | a27c0fefe8f9796dc22eca7aa3123548ac5a4646 | [
"MIT"
] | null | null | null | from keras_rcnn.models import RCNN | 34 | 34 | 0.882353 | 6 | 34 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7cef023b8e8da20968c36ad801cb0fd0de67931d | 212 | py | Python | ui-tests/jupyter_server_config.py | xtianpoli/ipyvega | e29559709f279d3b27511a3c57da1ccf56d09472 | [
"BSD-3-Clause"
] | 2 | 2021-07-10T12:42:53.000Z | 2021-11-18T09:42:42.000Z | ui-tests/jupyter_server_config.py | xtianpoli/ipyvega | e29559709f279d3b27511a3c57da1ccf56d09472 | [
"BSD-3-Clause"
] | 15 | 2021-03-05T14:30:06.000Z | 2021-06-14T12:52:10.000Z | ui-tests/jupyter_server_config.py | xtianpoli/ipyvega | e29559709f279d3b27511a3c57da1ccf56d09472 | [
"BSD-3-Clause"
] | 2 | 2021-08-30T12:03:19.000Z | 2022-03-29T10:10:33.000Z | c.ServerApp.port = 8888
c.ServerApp.token = ""
c.ServerApp.password = ""
c.ServerApp.disable_check_xsrf = True
c.ServerApp.open_browser = False
c.LabApp.open_browser = False
c.LabApp.expose_app_in_browser = True
| 26.5 | 37 | 0.783019 | 33 | 212 | 4.818182 | 0.515152 | 0.314465 | 0.201258 | 0.213836 | 0.289308 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020942 | 0.099057 | 212 | 7 | 38 | 30.285714 | 0.811518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
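This config disables authentication (empty token and password) and XSRF protection so automated UI tests can drive the server unattended; it is only safe inside a test harness. The server would typically pick it up via something like jupyter lab --config=jupyter_server_config.py.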
6b10747622574ad298d008a27697059233fb9bf1 | 2,250 | py | Python | evaluation_framework/__init__.py | janothan/Evaluation-Framework | e53847bc352f657953933e1d7c97b68ac890c852 | [
"Apache-2.0"
] | 5 | 2020-02-12T13:11:14.000Z | 2021-01-28T12:45:22.000Z | evaluation_framework/__init__.py | charyeezy/Evaluation-Framework | ddfd4ea654a3d7d2abd58f062ec98a8a736f8f51 | [
"Apache-2.0"
] | 9 | 2019-07-29T17:45:30.000Z | 2022-03-17T12:24:47.000Z | evaluation_framework/__init__.py | charyeezy/Evaluation-Framework | ddfd4ea654a3d7d2abd58f062ec98a8a736f8f51 | [
"Apache-2.0"
] | 7 | 2020-02-12T13:22:49.000Z | 2021-11-29T01:08:50.000Z | from evaluation_framework.manager import FrameworkManager
from evaluation_framework.txt_dataManager import DataManager as txt_dataManager
from evaluation_framework.hdf5_dataManager import DataManager as hdf5_dataManager
from evaluation_framework.evaluationManager import EvaluationManager
import evaluation_framework.Classification.classification_model
import evaluation_framework.Classification.classification_taskManager
import evaluation_framework.Clustering.clustering_model
import evaluation_framework.Clustering.clustering_taskManager
import evaluation_framework.DocumentSimilarity.documentSimilarity_model
import evaluation_framework.DocumentSimilarity.documentSimilarity_taskManager
import evaluation_framework.EntityRelatedness.entityRelatedness_model
import evaluation_framework.EntityRelatedness.entityRelatedness_taskManager
import evaluation_framework.Regression.regression_model
import evaluation_framework.Regression.regression_taskManager
import evaluation_framework.SemanticAnalogies.semanticAnalogies_model
import evaluation_framework.SemanticAnalogies.semanticAnalogies_taskManager
"""
from evaluation_framework.Classification.classification_model import ClassificationModel
from evaluation_framework.Classification.classification_taskManager import ClassificationManager
from evaluation_framework.Clustering.clustering_model import ClusteringModel
from evaluation_framework.Clustering.clustering_taskManager import ClusteringManager
from evaluation_framework.DocumentSimilarity.documentSimilarity_model import DocumentSimilarityModel
from evaluation_framework.DocumentSimilarity.documentSimilarity_taskManager import DocumentSimilarityManager
from evaluation_framework.EntityRelatedness.entityRelatedness_model import EntityRelatednessModel
from evaluation_framework.EntityRelatedness.entityRelatedness_taskManager import EntityRelatednessManager
from evaluation_framework.Regression.regression_model import RegressionModel
from evaluation_framework.Regression.regression_taskManager import RegressionManager
from evaluation_framework.SemanticAnalogies.semanticAnalogies_model import SemanticAnalogiesModel
from evaluation_framework.SemanticAnalogies.semanticAnalogies_taskManager import SemanticAnalogiesManager
"""
| 51.136364 | 108 | 0.926667 | 200 | 2,250 | 10.145 | 0.155 | 0.262198 | 0.18137 | 0.088714 | 0.750123 | 0.660424 | 0 | 0 | 0 | 0 | 0 | 0.000933 | 0.047556 | 2,250 | 43 | 109 | 52.325581 | 0.94587 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6b3673b0a4e3cfe622fe7be95c00ba164191074d | 29 | py | Python | plugins/pelican-plugins/read_more_link/__init__.py | dbgriffith01/blog_source | bc5cd3e1ac1ff068de0cbb78b1470a7db743cd53 | [
"MIT"
] | 2 | 2021-01-12T14:55:55.000Z | 2021-03-24T11:52:44.000Z | plugins/pelican-plugins/read_more_link/__init__.py | dbgriffith01/blog_source | bc5cd3e1ac1ff068de0cbb78b1470a7db743cd53 | [
"MIT"
] | 1 | 2021-12-13T20:50:25.000Z | 2021-12-13T20:50:25.000Z | pelican-plugins/read_more_link/__init__.py | JN-Blog/jn-blog.com | 669bf9a9c6813f2b7980792fb137f6718077aea1 | [
"MIT"
] | 3 | 2021-03-24T11:58:31.000Z | 2022-01-12T16:03:06.000Z | from .read_more_link import * | 29 | 29 | 0.827586 | 5 | 29 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6b4b385dbec95a13c3711a17277ad7efcaa991a6 | 183 | py | Python | mountainview/vote_events.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | null | null | null | mountainview/vote_events.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | null | null | null | mountainview/vote_events.py | jayktee/scrapers-us-municipal | ff52a331e91cb590a3eda7db6c688d75b77acacb | [
"MIT"
] | null | null | null | from pupa.scrape import Scraper
from pupa.scrape import VoteEvent
class MountainviewVoteEventScraper(Scraper):
def scrape(self):
# needs to be implemented
pass
| 18.3 | 44 | 0.726776 | 21 | 183 | 6.333333 | 0.714286 | 0.120301 | 0.210526 | 0.300752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224044 | 183 | 9 | 45 | 20.333333 | 0.93662 | 0.125683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
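The scrape method above is a stub. In pupa, scrape() is expected to be a generator that yields scraped objects; an illustrative sketch (the field values and source URL are placeholders, and the exact data would come from Mountain View's published agendas):

def scrape(self):
    vote = VoteEvent(
        legislative_session="2021",
        motion_text="Approve minutes",
        start_date="2021-01-01",
        classification="passage",
        result="pass",
    )
    vote.add_source("https://example.com/agenda")  # placeholder URL
    yield vote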
86173cc72a0c6113f7dddb99d4307655ea9e19f3 | 319 | py | Python | windowsbuild/MSVC2017/vtk/8.1.0/lib/python3.7/site-packages/vtk/vtkInfovisCore.py | Tech-XCorp/visit-deps | 23e2bd534bf9c332d6b7d32310495f1f65b1b936 | [
"BSD-3-Clause"
] | 1 | 2022-01-09T20:04:31.000Z | 2022-01-09T20:04:31.000Z | windowsbuild/MSVC2017/vtk/8.1.0/lib/python3.7/site-packages/vtk/vtkInfovisCore.py | Tech-XCorp/visit-deps | 23e2bd534bf9c332d6b7d32310495f1f65b1b936 | [
"BSD-3-Clause"
] | 1 | 2022-02-15T12:01:57.000Z | 2022-03-24T19:48:47.000Z | Latest/venv/Lib/site-packages/vtk/vtkInfovisCore.py | adamcvj/SatelliteTracker | 49a8f26804422fdad6f330a5548e9f283d84a55d | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
try:
# use relative import for installed modules
from .vtkInfovisCorePython import *
except ImportError:
# during build and testing, the modules will be elsewhere,
# e.g. in lib directory or Release/Debug config directories
from vtkInfovisCorePython import *
| 31.9 | 63 | 0.76489 | 39 | 319 | 6.128205 | 0.794872 | 0.200837 | 0.251046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194357 | 319 | 9 | 64 | 35.444444 | 0.929961 | 0.489028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86c0f61cbff8aad2aeb814c7892f403a8c162cd9 | 160 | py | Python | crds/core/git_version.py | oirlab/tmt-crds | 0b7dc9747b24751c304e51ed304fb98da65f9d99 | [
"BSD-3-Clause"
] | null | null | null | crds/core/git_version.py | oirlab/tmt-crds | 0b7dc9747b24751c304e51ed304fb98da65f9d99 | [
"BSD-3-Clause"
] | 1 | 2019-08-11T23:21:13.000Z | 2020-09-01T07:02:30.000Z | crds/core/git_version.py | oirlab/tmt-crds | 0b7dc9747b24751c304e51ed304fb98da65f9d99 | [
"BSD-3-Clause"
] | null | null | null |
__version__ = '06db005d4edcb17352ee543595f89de56c961914'
__full_version_info__ = '''
branch: b7.5.0.2
sha1: 06db005d4edcb17352ee543595f89de56c961914
'''
| 17.777778 | 56 | 0.79375 | 12 | 160 | 9.75 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.415493 | 0.1125 | 160 | 9 | 57 | 17.777778 | 0.408451 | 0 | 0 | 0 | 0 | 0 | 0.660377 | 0.503145 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
86dfc9ed6711e5d9d95af7fbfe17c8e0bb6ab4f6 | 966 | py | Python | api/service/send_confirmation.py | guisteglich/EasyPay | 378f03d16606516e1d1afed2de03cd95fcf7d3a0 | [
"MIT"
] | null | null | null | api/service/send_confirmation.py | guisteglich/EasyPay | 378f03d16606516e1d1afed2de03cd95fcf7d3a0 | [
"MIT"
] | null | null | null | api/service/send_confirmation.py | guisteglich/EasyPay | 378f03d16606516e1d1afed2de03cd95fcf7d3a0 | [
"MIT"
] | null | null | null | # Install Courier SDK: pip install trycourier
from trycourier import Courier
def send_confirmation(name, email):
client = Courier(auth_token="pk_prod_XQZGKEJ9P24C79QZF6H2XM5ATCEC")  # NOTE: hardcoded production key; better read from an environment variable
resp = client.send_message(
message={
"to": {
"email": email
},
"content": {
"title": "Welcome to EasyPay!",
"body": "{{name}}, you have registered with our bank and are receiving a confirmation email."
},
"data":{
"name": name
}
}
)
return resp  # surface the Courier response so callers can check delivery
# def send_transaction_confirmation(name, email):
# client = Courier(auth_token="pk_prod_XQZGKEJ9P24C79QZF6H2XM5ATCEC")
# resp = client.send_message(
# message={
# "to": {
# "email": email
# },
# "content": {
# "title": "Welcome to EasyPay!",
# "body": "{{name}}, you have registered with our bank and are receiving a confirmation email."
# },
# "data":{
# "name": name
# }
# }
# )
| 24.15 | 103 | 0.571429 | 95 | 966 | 5.694737 | 0.4 | 0.025878 | 0.077634 | 0.099815 | 0.83549 | 0.83549 | 0.83549 | 0.83549 | 0.83549 | 0.83549 | 0 | 0.023324 | 0.289855 | 966 | 39 | 104 | 24.769231 | 0.765306 | 0.491718 | 0 | 0 | 0 | 0 | 0.357294 | 0.07611 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
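A hypothetical call site for the handler above (EasyPay's actual registration flow is not shown in this row). Note that the commented-out send_transaction_confirmation stub still carries the registration copy:

# hypothetical caller; the name and address are placeholders
send_confirmation("Ada Lovelace", "ada@example.com")
# a common hardening step (assumed env var name) would be to replace the
# hardcoded key above with: Courier(auth_token=os.environ["COURIER_AUTH_TOKEN"])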
810c85465d218e70d5e90d7360a1b93ed0c22101 | 284 | py | Python | mmpose/datasets/pipelines/__init__.py | kristinbranson/APT-mmpose | 19464d2cea2e963e771a91ed200270c069024ddc | [
"Apache-2.0"
] | null | null | null | mmpose/datasets/pipelines/__init__.py | kristinbranson/APT-mmpose | 19464d2cea2e963e771a91ed200270c069024ddc | [
"Apache-2.0"
] | null | null | null | mmpose/datasets/pipelines/__init__.py | kristinbranson/APT-mmpose | 19464d2cea2e963e771a91ed200270c069024ddc | [
"Apache-2.0"
] | null | null | null | from .bottom_up_transform import * # noqa
from .loading import LoadImageFromFile # noqa
from .mesh_transform import * # noqa
from .pose3d_transform import * # noqa
from .shared_transform import * # noqa
from .top_down_transform import * # noqa
from .apt_transform import * #noqa | 40.571429 | 46 | 0.767606 | 37 | 284 | 5.675676 | 0.378378 | 0.428571 | 0.542857 | 0.547619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004202 | 0.161972 | 284 | 7 | 47 | 40.571429 | 0.878151 | 0.116197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8112a60a67d44b0ab1876625bcdb144f97181a48 | 53,007 | py | Python | tests.py | Frimkron/JSONPyth | 47cedf1c8b2b111a83bc0b0cdcbad79deaeb984a | [
"MIT"
] | 2 | 2020-03-01T18:37:03.000Z | 2021-03-27T17:47:33.000Z | tests.py | Frimkron/JSONPyth | 47cedf1c8b2b111a83bc0b0cdcbad79deaeb984a | [
"MIT"
] | null | null | null | tests.py | Frimkron/JSONPyth | 47cedf1c8b2b111a83bc0b0cdcbad79deaeb984a | [
"MIT"
] | null | null | null | import unittest
import logging
import jsonpyth as jp
logging.getLogger().setLevel(logging.ERROR)
class TestParse(unittest.TestCase):
# empty path
def test_raises_error_for_empty_path(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('')
# root symbol
def test_parses_start_root(self):
result = jp.parse('$')
self.assert_child_steps((jp.PRoot,), result)
def test_parses_root_in_path(self):
result = jp.parse('$.$')
self.assert_child_steps((jp.PRoot, jp.PRoot), result)
# name property
def test_parses_name_child(self):
result = jp.parse('$.foo')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('foo', result[1].targets[0].name)
def test_name_can_include_digit(self):
result = jp.parse('$.f00')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('f00', result[1].targets[0].name)
def test_raises_error_for_name_leading_digit(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.00f')
def test_name_can_include_underscore(self):
result = jp.parse('$.f_o')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('f_o', result[1].targets[0].name)
def test_raises_error_for_punctuation_in_name(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.f**')
# double-quoted string property
def test_parses_double_string_child(self):
result = jp.parse('$."foo"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('foo', result[1].targets[0].name)
def test_double_string_can_include_punctuation(self):
result = jp.parse('$."f**"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('f**', result[1].targets[0].name)
def test_double_string_can_include_unicode(self):
result = jp.parse('$."仮借文字"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('仮借文字', result[1].targets[0].name)
def test_raises_error_for_unclosed_double_string(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$."foo')
def test_raises_error_for_newline_in_double_string(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$."fo\no"')
def test_double_string_can_include_escaped_double_quote(self):
result = jp.parse('$."fo\\"o"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('fo"o', result[1].targets[0].name)
def test_double_string_can_include_escaped_backslash(self):
result = jp.parse('$."fo\\\\o"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('fo\\o', result[1].targets[0].name)
def test_double_string_can_include_single_quote(self):
result = jp.parse('$."fo\'o"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("fo'o", result[1].targets[0].name)
def test_double_string_can_start_with_digit(self):
result = jp.parse('$."00f"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("00f", result[1].targets[0].name)
def test_double_string_asterisk_is_not_wildcard(self):
result = jp.parse('$."*"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("*", result[1].targets[0].name)
def test_double_string_digit_is_not_index(self):
result = jp.parse('$."9"')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("9", result[1].targets[0].name)
def test_double_string_preserves_whitespace(self):
result = jp.parse('$." f o o "')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual(' f o o ', result[1].targets[0].name)
# single-quoted string property
def test_parses_single_string_child(self):
result = jp.parse("$.'foo'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('foo', result[1].targets[0].name)
def test_single_string_can_include_punctuation(self):
result = jp.parse("$.'f**'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('f**', result[1].targets[0].name)
def test_single_string_can_include_unicode(self):
result = jp.parse("$.'仮借文字'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('仮借文字', result[1].targets[0].name)
def test_raises_error_for_unclosed_single_string(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse("$.'foo")
def test_raises_error_for_newline_in_single_string(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse("$.'fo\no'")
def test_single_string_can_include_escaped_single_quote(self):
result = jp.parse("$.'fo\\'o'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("fo'o", result[1].targets[0].name)
def test_single_string_can_include_escaped_backslash(self):
result = jp.parse("$.'fo\\\\o'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("fo\\o", result[1].targets[0].name)
def test_single_string_can_include_double_quote(self):
result = jp.parse("$.'fo\"o'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('fo"o', result[1].targets[0].name)
def test_single_string_can_start_with_digit(self):
result = jp.parse("$.'00f'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("00f", result[1].targets[0].name)
def test_single_string_asterisk_is_not_wildcard(self):
result = jp.parse("$.'*'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("*", result[1].targets[0].name)
def test_single_string_digit_is_not_index(self):
result = jp.parse("$.'9'")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual("9", result[1].targets[0].name)
def test_single_string_preserves_whitespace(self):
result = jp.parse("$.' f o o '")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual(' f o o ', result[1].targets[0].name)
# child steps
def test_parses_multiple_child_steps(self):
result = jp.parse('$.foo.bar.weh.blah')
self.assert_types((jp.PChild, jp.PChild, jp.PChild, jp.PChild, jp.PChild), result)
def test_parses_first_child_step_without_root(self):
result = jp.parse('.foo')
self.assert_types((jp.PChild,), result)
self.assert_types((jp.PProperty,), result[0].targets)
self.assertEqual('foo', result[0].targets[0].name)
def test_parses_implicit_first_child_step_without_dot(self):
result = jp.parse('foo')
self.assert_types((jp.PChild,), result)
self.assert_types((jp.PProperty,), result[0].targets)
self.assertEqual('foo', result[0].targets[0].name)
def test_raises_error_for_child_dot_without_target(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.')
def test_parses_square_bracketted_child_step(self):
result = jp.parse('$[foo]')
self.assert_types((jp.PChild, jp.PChild), result)
self.assert_types((jp.PProperty,), result[1].targets)
self.assertEqual('foo', result[1].targets[0].name)
def test_parses_multiple_square_bracket_steps(self):
result = jp.parse('$[foo][bar]')
self.assert_types((jp.PChild, jp.PChild, jp.PChild), result)
self.assert_types((jp.PProperty,), result[1].targets)
self.assertEqual('foo', result[1].targets[0].name)
self.assert_types((jp.PProperty,), result[2].targets)
self.assertEqual('bar', result[2].targets[0].name)
def test_parses_mixed_dotted_and_bracketted_child_steps(self):
result = jp.parse('$.foo[bar].weh')
self.assert_child_steps((jp.PRoot, jp.PProperty, jp.PProperty, jp.PProperty), result)
def test_raises_error_for_unclosed_square_bracket(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$[foo')
def test_raises_error_for_child_with_both_dot_and_square_brackets(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.[foo]')
def test_raises_error_for_step_with_both_double_dot_and_square_brackets(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$..[foo]')
def test_raises_error_for_square_bracket_without_target(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$[]')
def test_parses_square_bracketted_root(self):
result = jp.parse('[$]')
self.assert_child_steps((jp.PRoot,),result)
def test_parses_square_bracketted_double_string(self):
result = jp.parse('$["foo"]')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('foo', result[1].targets[0].name)
def test_parses_square_bracketted_single_string(self):
result = jp.parse("$['foo']")
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
self.assertEqual('foo', result[1].targets[0].name)
# recursive step
def test_parses_name_recursive(self):
result = jp.parse('$..foo')
self.assert_types((jp.PChild, jp.PRecursive), result)
self.assert_types((jp.PProperty,), result[1].targets)
self.assertEqual('foo', result[1].targets[0].name)
def test_parses_mixed_child_and_recursive_steps(self):
result = jp.parse('$.foo..bar.weh..blah')
self.assert_types((jp.PChild, jp.PChild, jp.PRecursive, jp.PChild, jp.PRecursive), result)
def test_parses_first_recursive_step_without_root(self):
result = jp.parse('..foo')
self.assert_types((jp.PRecursive,), result)
self.assert_types((jp.PProperty,), result[0].targets)
self.assertEqual('foo', result[0].targets[0].name)
# whitespace
def test_ignores_spaces(self):
result = jp.parse(' $ . foo ')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
def test_ignores_newlines(self):
result = jp.parse('\n\n$\n.\n\nfoo\n\n\n')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
def test_ignores_tabs(self):
result = jp.parse('\t\t$\t.\t\tfoo\t\t\t')
self.assert_child_steps((jp.PRoot, jp.PProperty), result)
# current node symbol
def test_parses_current_node_symbol(self):
result = jp.parse('$.@')
self.assert_child_steps((jp.PRoot, jp.PCurrent), result)
def test_parses_current_node_at_start(self):
result = jp.parse('@.foo')
self.assert_child_steps((jp.PCurrent, jp.PProperty), result)
def test_parses_square_bracketted_current_node_symbol(self):
result = jp.parse('$[@]')
self.assert_child_steps((jp.PRoot, jp.PCurrent), result)
# wildcard symbol
def test_parses_wildcard(self):
result = jp.parse('$.*')
self.assert_child_steps((jp.PRoot, jp.PWildcard), result)
def test_parses_wildcard_at_start(self):
result = jp.parse('*.foo')
self.assert_child_steps((jp.PWildcard, jp.PProperty), result)
def test_parses_square_bracketted_wildcard(self):
result = jp.parse('$[*]')
self.assert_child_steps((jp.PRoot, jp.PWildcard), result)
# multiple targets
def test_parses_two_targets_for_step(self):
result = jp.parse('$.foo,bar')
self.assert_child_steps((jp.PRoot, (jp.PProperty, jp.PProperty)), result)
self.assertEqual('foo', result[1].targets[0].name)
self.assertEqual('bar', result[1].targets[1].name)
def test_parses_more_than_two_targets_for_step(self):
result = jp.parse('$.foo,bar,weh,blah')
self.assert_child_steps((jp.PRoot, (jp.PProperty, jp.PProperty, jp.PProperty, jp.PProperty)), result)
self.assertEqual('foo', result[1].targets[0].name)
self.assertEqual('bar', result[1].targets[1].name)
self.assertEqual('weh', result[1].targets[2].name)
self.assertEqual('blah',result[1].targets[3].name)
def test_parses_mixed_target_types(self):
result = jp.parse('$.foo,"bar",*,@')
self.assert_child_steps((jp.PRoot, (jp.PProperty, jp.PProperty, jp.PWildcard, jp.PCurrent)), result)
self.assertEqual('foo', result[1].targets[0].name)
self.assertEqual('bar', result[1].targets[1].name)
def test_raises_error_for_trailing_comma_on_targets(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.foo,')
def test_parses_multiple_multitarget_steps(self):
result = jp.parse('$.foo,bar.weh,blah')
self.assert_child_steps((jp.PRoot, (jp.PProperty, jp.PProperty), (jp.PProperty, jp.PProperty)), result)
def test_parses_square_bracketted_multiple_targets(self):
result = jp.parse('$[foo,bar]')
self.assert_child_steps((jp.PRoot, (jp.PProperty, jp.PProperty)), result)
self.assertEqual('foo', result[1].targets[0].name)
self.assertEqual('bar', result[1].targets[1].name)
def test_raises_error_for_empty_target_in_target_set(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.foo,,bar')
def test_raises_error_for_child_brackets_inside_target_set(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.foo,[bar]')
# slices
def test_parses_simple_index(self):
result = jp.parse('$.1')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertTrue(hasattr(targ, 'index'))
self.assertFalse(hasattr(targ, 'start'))
self.assertFalse(hasattr(targ, 'end'))
self.assertFalse(hasattr(targ, 'step'))
self.assertEqual(1, targ.index)
def test_parses_negative_index(self):
result = jp.parse('$.-1')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertTrue(hasattr(targ, 'index'))
self.assertEqual(-1, targ.index)
def test_parses_canonical_slice(self):
result = jp.parse('$.1:2:3')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertTrue(hasattr(targ, 'start'))
self.assertTrue(hasattr(targ, 'end'))
self.assertTrue(hasattr(targ, 'step'))
self.assertEqual(1, targ.start)
self.assertEqual(2, targ.end)
self.assertEqual(3, targ.step)
def test_parses_negatives_in_canonical_slice(self):
result = jp.parse('$.-1:-2:-3')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertEqual(-1, targ.start)
self.assertEqual(-2, targ.end)
self.assertEqual(-3, targ.step)
def test_parses_toend_slice(self):
result = jp.parse('$.1:')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertTrue(hasattr(targ, 'start'))
self.assertFalse(hasattr(targ, 'end'))
self.assertFalse(hasattr(targ, 'step'))
self.assertEqual(1, targ.start)
def test_parses_fromstart_slice(self):
result = jp.parse('$.:1')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertFalse(hasattr(targ, 'start'))
self.assertTrue(hasattr(targ, 'end'))
self.assertFalse(hasattr(targ, 'step'))
self.assertEqual(1, targ.end)
def test_parses_allwithstep_slice(self):
result = jp.parse('$.::1')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertFalse(hasattr(targ, 'start'))
self.assertFalse(hasattr(targ, 'end'))
self.assertTrue(hasattr(targ, 'step'))
self.assertEqual(1, targ.step)
def test_parses_all_slice(self):
result = jp.parse('$.:')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertFalse(hasattr(targ, 'start'))
self.assertFalse(hasattr(targ, 'end'))
self.assertFalse(hasattr(targ, 'step'))
def test_parses_other_slice_combinations(self):
for sl,ix,st,en,sp in [
# :
# :1
# 1:
('1:2', False,1, 2, False),
('::', False,False,False,False),
# ::1
(':1:', False,False,1, False),
(':1:2',False,False,1, 2 ),
('1::', False,1, False,False),
('1::2',False,1, False,2 ),
('1:2:',False,1, 2, False)
# 1:2:3
]:
with self.subTest(slice=sl):
result = jp.parse('$.{}'.format(sl))
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
for p,v in [('index',ix),('start',st),('end',en),('step',sp)]:
if v is False:
self.assertFalse(hasattr(targ, p))
else:
self.assertEqual(v, getattr(targ, p))
def test_parses_square_bracketted_slice(self):
result = jp.parse('$[1:2]')
self.assert_child_steps((jp.PRoot, jp.PSlice), result)
targ = result[1].targets[0]
self.assertFalse(hasattr(targ, 'index'))
self.assertTrue(hasattr(targ, 'start'))
self.assertTrue(hasattr(targ, 'end'))
self.assertFalse(hasattr(targ, 'step'))
self.assertEqual(1, targ.start)
self.assertEqual(2, targ.end)
# expressions
def test_parses_expression_child_step(self):
result = jp.parse('$.(1+1)')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('1+1', result[1].targets[0].code)
def test_expression_can_contain_other_path_symbols(self):
result = jp.parse('$.($@0f!仮*:[].?,)')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('$@0f!仮*:[].?,', result[1].targets[0].code)
def test_expression_can_contain_escaped_round_brackets(self):
result = jp.parse('$.(\\(1+1\\)*5)')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('(1+1)*5', result[1].targets[0].code)
def test_expression_can_contain_escaped_backslash(self):
result = jp.parse('$.(\\\\)')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('\\', result[1].targets[0].code)
def test_expression_can_contain_escaped_newline(self):
result = jp.parse('$.(\\n)')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('\n', result[1].targets[0].code)
def test_expression_preserves_whitespace(self):
result = jp.parse('$.( 1 + 1 )')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual(' 1 + 1 ', result[1].targets[0].code)
def test_raises_error_for_expression_without_closing_bracket(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.(1+1')
def test_raises_error_for_expression_containing_newline(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.(1\n+\n1)')
def test_parses_square_bracketted_expression(self):
result = jp.parse('$[(1+1)]')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('1+1', result[1].targets[0].code)
def test_parses_empty_expression(self):
result = jp.parse('$.()')
self.assert_child_steps((jp.PRoot, jp.PExpression), result)
self.assertEqual('', result[1].targets[0].code)
# filters
def test_parses_filter_child_step(self):
result = jp.parse('$.?(1+1)')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('1+1', result[1].targets[0].code)
def test_filter_can_contain_other_path_symbols(self):
result = jp.parse('$.?($@0f!仮*:[].?,)')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('$@0f!仮*:[].?,', result[1].targets[0].code)
def test_filter_can_contain_escaped_round_brackets(self):
result = jp.parse('$.?(\\(1+1\\)*5)')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('(1+1)*5', result[1].targets[0].code)
def test_filter_can_contain_escaped_backslash(self):
result = jp.parse('$.?(\\\\)')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('\\', result[1].targets[0].code)
def test_filter_can_contain_escaped_newline(self):
result = jp.parse('$.?(\\n)')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('\n', result[1].targets[0].code)
def test_filter_preserves_whitespace(self):
result = jp.parse('$.?( True or False )')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual(' True or False ', result[1].targets[0].code)
def test_raises_error_for_filter_without_closing_bracket(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.?(1+1')
def test_raises_error_for_filter_containing_newline(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.?(1\n+\n1)')
def test_parses_square_bracketted_filter(self):
result = jp.parse('$[?(1+1)]')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('1+1', result[1].targets[0].code)
def test_parses_empty_filter(self):
result = jp.parse('$.?()')
self.assert_child_steps((jp.PRoot, jp.PFilter), result)
self.assertEqual('', result[1].targets[0].code)
# general bad input
def test_raises_error_for_invalid_child_target(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$.~')
def test_raises_error_for_invalid_square_bracketted_child_target(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('$[~]')
def test_raises_error_for_invalid_child_target_at_start(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('.~')
def test_raises_error_for_invalid_implicit_child_target(self):
with self.assertRaises(jp.JsonPathSyntaxError):
jp.parse('~')
# helper methods
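# assert_types compares the exact types of all parsed steps, while
# assert_child_steps additionally requires every step to be a PChild and
# compares its target types (a bare type for single-target steps, a tuple
# of types otherwise).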
def assert_types(self, expected, result):
self.assertEqual(expected, tuple(type(s) for s in result))
def assert_child_steps(self, expected, result):
targs = []
for s in result:
self.assertEqual(jp.PChild, type(s))
targs.append(tuple(type(t) for t in s.targets) if len(s.targets) > 1 else type(s.targets[0]))
self.assertEqual(expected, tuple(targs))
class TestEvaluate(unittest.TestCase):
example = {
"store": {
"book": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95,
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99,
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99,
},
],
"bicycle": {
"color": "red",
"price": 19.95,
},
}
}
# root & property child steps
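# evaluate() yields (value, normalized_path) pairs; normalized paths use
# the bracketed form $["key"] for dict keys and $[index] for sequences.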
def test_evaluates_root_child(self):
result = jp.evaluate({"a":1,"b":[2,3]}, [jp.PChild(targets=[jp.PRoot()])])
self.assertEqual([( {"a":1,"b":[2,3]}, '$' )], result)
def test_evaluates_property_child(self):
result = jp.evaluate({"a":1,"b":[2,3]}, [jp.PChild(targets=[jp.PProperty(name='b')])])
self.assertEqual([( [2,3], '$["b"]' )], result)
def test_evaluates_root_mid_path(self):
result = jp.evaluate({"a":1,"b":[2,3]},
[jp.PChild(targets=[jp.PRoot()]), jp.PChild(targets=[jp.PProperty(name='b')]),
jp.PChild(targets=[jp.PRoot()]), jp.PChild(targets=[jp.PProperty(name='a')])] )
self.assertEqual([( 1, '$["a"]' )], result)
def test_evaluates_nested_child_properties(self):
result = jp.evaluate({"a":{"b":1,"c":2},"d":3},
[jp.PChild(targets=[jp.PProperty(name='a')]), jp.PChild(targets=[jp.PProperty(name='b')])])
self.assertEqual([( 1, '$["a"]["b"]' )], result)
# current node
def test_evaluates_current_child(self):
result = jp.evaluate({"a":{"b":1,"c":2},"d":3},
[jp.PChild(targets=[jp.PProperty(name='a')]), jp.PChild(targets=[jp.PCurrent()]),
jp.PChild(targets=[jp.PProperty(name='c')])] )
self.assertEqual([( 2, '$["a"]["c"]' )], result)
# target sets
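# A child step may carry several targets; each target is applied across
# all nodes matched so far, so results come out grouped by target (see the
# expected ordering in test_evaluates_nested_multipletarget_child_steps
# below).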
def test_evaluates_multipletarget_child(self):
result = jp.evaluate({"a":{"b":1,"c":2},"d":3},
[jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PProperty(name='b'),jp.PProperty(name='c')])] )
self.assertEqual([( 1, '$["a"]["b"]' ), ( 2, '$["a"]["c"]' )], result)
def test_evaluates_multipletarget_child_mid_path(self):
result = jp.evaluate({"a":{"b":{"c":1,"d":2},"e":{"c":3,"d":4}},"f":0},
[jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PProperty(name='b'),jp.PProperty(name='e')]),
jp.PChild(targets=[jp.PProperty(name='c')])] )
self.assertEqual([( 1, '$["a"]["b"]["c"]' ), ( 3, '$["a"]["e"]["c"]' )], result)
def test_evaluates_nested_multipletarget_child_steps(self):
result = jp.evaluate({"A":{"i":1,"ii":2,"iii":3},
"B":{"i":4,"ii":5,"iii":6},
"C":{"i":7,"ii":8,"iii":9}},
[jp.PChild(targets=[jp.PProperty(name='A'),jp.PProperty(name='C')]),
jp.PChild(targets=[jp.PProperty(name='ii'),jp.PProperty(name='iii')])] )
self.assertEqual([( 2, '$["A"]["ii"]' ), ( 8, '$["C"]["ii"]' ),
( 3, '$["A"]["iii"]' ), ( 9, '$["C"]["iii"]' )], result)
# wildcards
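# Wildcards enumerate dict values in key-sorted order and sequence items
# in positional order.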
def test_evaluates_wildcard_child_for_dict_in_sort_order(self):
result = jp.evaluate({"a":1,"c":[3,4],"b":2,},
[jp.PChild(targets=[jp.PWildcard()])] )
self.assertEqual([( 1, '$["a"]' ), ( 2, '$["b"]' ), ( [3,4], '$["c"]' )], result)
def test_evaluates_wildcard_child_for_sequence_in_sequence_order(self):
result = jp.evaluate([2,1,[3,4]],
[jp.PChild(targets=[jp.PWildcard()])] )
self.assertEqual([(2, '$[0]'), (1, '$[1]'), ([3,4], '$[2]')], result)
def test_evaluates_wildcard_mid_path(self):
result = jp.evaluate({"a":1,"b":[{"i":1,"ii":2},{"i":3,"ii":4},{"i":5,"ii":6}]},
[jp.PChild(targets=[jp.PProperty(name='b')]),
jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PProperty(name='ii')])] )
self.assertEqual([(2, '$["b"][0]["ii"]'), (4, '$["b"][1]["ii"]'),
(6, '$["b"][2]["ii"]')], result)
def test_evaluates_nested_wildcard_steps(self):
result = jp.evaluate({ "i":{"a":[{"a":1,"b":2},{"a":3,"b":4}],
"b":[{"a":5,"b":6},{"a":7,"b":8}]},
"ii":{"a":[{"a":9,"b":1},{"a":2,"b":3}],
"b":[{"a":4,"b":5},{"a":6,"b":7}]} },
[jp.PChild(targets=[jp.PWildcard()]), jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PWildcard()]), jp.PChild(targets=[jp.PProperty(name='b')])] )
self.assertEqual([(2, '$["i"]["a"][0]["b"]'), (4, '$["i"]["a"][1]["b"]'),
(1, '$["ii"]["a"][0]["b"]'),(3, '$["ii"]["a"][1]["b"]')], result)
def test_evaluates_multiple_wildcard_targets_by_duplicating(self):
result = jp.evaluate([4,2],
[jp.PChild(targets=[jp.PWildcard(),jp.PWildcard()])] )
self.assertEqual([(4, '$[0]'), (2, '$[1]'), (4, '$[0]'), (2, '$[1]')], result)
# slices
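# PSlice mirrors Python slice semantics, including negative and missing
# start/end/step values; the index= form picks a single element rather
# than a range.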
def test_evaluates_simple_index_slice(self):
result = jp.evaluate(["a","b","c","d","e"],
[jp.PChild(targets=[jp.PSlice(index=3)])])
self.assertEqual([("d", '$[3]')], result)
def test_evaluates_slice_with_step(self):
result = jp.evaluate(["a","b","c","d","e","f","g"],
[jp.PChild(targets=[jp.PSlice(start=1,end=5,step=2)])] )
self.assertEqual([('b', '$[1]'), ('d', '$[3]')], result)
def test_evaluates_slice_without_step(self):
result = jp.evaluate(["a","b","c","d","e"],
[jp.PChild(targets=[jp.PSlice(start=1,end=3)])] )
self.assertEqual([('b', '$[1]'), ('c', '$[2]')], result)
def test_evaluates_slice_without_end(self):
result = jp.evaluate(["a","b","c","d"],
[jp.PChild(targets=[jp.PSlice(start=2)])] )
self.assertEqual([('c', '$[2]'), ('d', '$[3]')], result)
def test_evaluates_slice_without_start(self):
result = jp.evaluate(["a","b","c","d"],
[jp.PChild(targets=[jp.PSlice(end=2)])] )
self.assertEqual([('a', '$[0]'), ('b', '$[1]')], result)
def test_evaluates_slice_with_negative_start(self):
result = jp.evaluate(["a","b","c","d"],
[jp.PChild(targets=[jp.PSlice(start=-2)])] )
self.assertEqual([('c', '$[2]'), ('d', '$[3]')], result)
def test_evaluates_slice_with_negative_end(self):
result = jp.evaluate(["a","b","c","d"],
[jp.PChild(targets=[jp.PSlice(end=-1)])] )
self.assertEqual([('a', '$[0]'), ('b', '$[1]'), ('c', '$[2]')], result)
def test_evaluates_slice_with_negative_step(self):
result = jp.evaluate(["a","b","c","d"],
[jp.PChild(targets=[jp.PSlice(step=-2)])] )
self.assertEqual([('d', '$[3]'), ('b', '$[1]')], result)
def test_evaluates_slice_mid_path(self):
result = jp.evaluate({"a":[{"i":1,"ii":2},{"i":3,"ii":4},
{"i":5,"ii":6},{"i":7,"ii":8}]},
[jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PSlice(start=1,end=3)]),
jp.PChild(targets=[jp.PProperty(name='ii')])] )
self.assertEqual([(4, '$["a"][1]["ii"]'), (6, '$["a"][2]["ii"]')], result)
def test_evaluates_nested_slice_steps(self):
result = jp.evaluate([{"a":["x","y"],"b":["z","x"]},{"a":["y","z"],"b":["x","y"]},
{"a":["z","x"],"b":["y","z"]},{"a":["x","y"],"b":["z","x"]}],
[jp.PChild(targets=[jp.PSlice(start=1,end=3)]),
jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PSlice(index=1)])] )
self.assertEqual([('z', '$[1]["a"][1]'), ('x', '$[2]["a"][1]')], result)
def test_evaluates_multiple_slice_targets(self):
result = jp.evaluate(['a','b','c'],
[jp.PChild(targets=[jp.PSlice(start=1),jp.PSlice(end=-1)])] )
self.assertEqual([('b', '$[1]'), ('c', '$[2]'), ('a', '$[0]'), ('b', '$[1]')], result)
# recursive steps
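# Recursive steps match their targets at every depth, visiting nodes
# breadth-first (shallower matches first) and dict keys in sorted order.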
def test_evaluates_recursive_step_in_top_down_order(self):
result = jp.evaluate({"a":{"a":{"a":1}}},
[jp.PRecursive(targets=[jp.PProperty(name='a')])] )
self.assertEqual([({"a":{"a":1}},'$["a"]'), ({"a":1}, '$["a"]["a"]'),
(1, '$["a"]["a"]["a"]')], result)
def test_evaluates_recursive_step_breadth_first(self):
result = jp.evaluate([[9,8,7],[6,5,4],[3,2,1]],
[jp.PRecursive(targets=[jp.PSlice(start=1)])] )
self.assertEqual([([6,5,4], '$[1]'), ([3,2,1], '$[2]'),
(8, '$[0][1]'), (7, '$[0][2]'),
(5, '$[1][1]'), (4, '$[1][2]'),
(2, '$[2][1]'), (1, '$[2][2]')], result)
def test_evaluates_properties_in_sort_order_for_recursive_step(self):
result = jp.evaluate({"c":{"a":{"a":1},"b":2},"b":[{"a":2},4],"a":0},
[jp.PRecursive(targets=[jp.PProperty(name='a')])] )
self.assertEqual([(0, '$["a"]'), ({"a":1}, '$["c"]["a"]'), (2, '$["b"][0]["a"]'),
(1, '$["c"]["a"]["a"]')], result)
def test_evaluates_recursive_step_with_multiple_targets(self):
result = jp.evaluate({"c":{"a":{"a":1},"b":2},"b":[{"a":2},4],"a":0},
[jp.PRecursive(targets=[jp.PProperty(name='a'),jp.PProperty(name='b')])] )
self.assertEqual([(0, '$["a"]'), ([{"a":2},4], '$["b"]'), ({"a":1}, '$["c"]["a"]'),
(2, '$["c"]["b"]'), (2, '$["b"][0]["a"]'), (1, '$["c"]["a"]["a"]')], result)
def test_evaluates_nested_recursive_steps(self):
result = jp.evaluate({"b":{"b":4,"a":{"b":{"b":2},"a":5}},"a":{"b":3,"a":{"b":1}}},
[jp.PRecursive(targets=[jp.PProperty(name='a')]),
jp.PRecursive(targets=[jp.PProperty(name='b')])])
self.assertEqual([(3, '$["a"]["b"]'), (1, '$["a"]["a"]["b"]'),
({"b":2}, '$["b"]["a"]["b"]'), (1, '$["a"]["a"]["b"]'),
(2, '$["b"]["a"]["b"]["b"]')], result)
# expressions
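# An expression step evaluates its code as Python: a str result selects a
# dict key, an int (but not a bool) selects a sequence index, and any
# other result matches nothing. Within the code, @ refers to the current
# node and $ to the root node; escaping them (\@, \$) yields the literal
# characters.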
def test_evaluates_expression_child_for_key(self):
result = jp.evaluate({"bar":1, "foo":2, "weh":3},
[jp.PChild(targets=[jp.PExpression(code='"foo"')])] )
self.assertEqual([(2, '$["foo"]')], result)
def test_evaluates_expression_child_for_index(self):
result = jp.evaluate([1,2,3],
[jp.PChild(targets=[jp.PExpression(code='1')])] )
self.assertEqual([(2, '$[1]')], result)
def test_ignores_non_string_expression_for_key(self):
result = jp.evaluate({"11":1, "101":2, "1001":3},
[jp.PChild(targets=[jp.PExpression(code='101')])] )
self.assertEqual([], result)
def test_ignores_non_numeric_expression_for_index(self):
result = jp.evaluate([1,2,3],
[jp.PChild(targets=[jp.PExpression(code='"2"')])] )
self.assertEqual([], result)
def test_ignores_non_numeric_expression_for_index_even_if_bool(self):
result = jp.evaluate([1,2,3],
[jp.PChild(targets=[jp.PExpression(code='True')])] )
self.assertEqual([], result)
def test_evaluates_operators_in_expression(self):
result = jp.evaluate([1,2,3,4,5],
[jp.PChild(targets=[jp.PExpression(code='2**3/(2+2)')])] )
self.assertEqual([(3, '$[2]')], result)
def test_evaluates_current_node_in_expression(self):
result = jp.evaluate([1,2,3,4,5],
[jp.PChild(targets=[jp.PExpression(code='@[2]+1')])] )
self.assertEqual([(5, '$[4]')], result)
def test_raises_error_for_bad_syntax_in_expression(self):
with self.assertRaises(jp.PythonSyntaxError):
jp.evaluate([1,2,3,4,5],
[jp.PChild(targets=[jp.PExpression(code='^!*&~')])] )
def test_evaluates_expression_with_builtins(self):
result = jp.evaluate([1,2,3,4,5],
[jp.PChild(targets=[jp.PExpression(code='len("foo")')])] )
self.assertEqual([(4, '$[3]')], result)
def test_evaluates_escaped_at_symbol_in_expression_as_plain_at(self):
result = jp.evaluate({"@":1,"~":2},
[jp.PChild(targets=[jp.PExpression(code='"\\@"')])] )
self.assertEqual([(1, '$["@"]')], result)
def test_evaluates_expression_with_actual_backslash_preceding_current_node(self):
result = jp.evaluate({"\\__current":1,"~":2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\@"')])] )
self.assertEqual([(1, '$["\\\\__current"]')], result)
def test_evaluates_escaped_at_symbol_with_multiple_actual_backslashes_preceding(self):
result = jp.evaluate({'\\\\@':1,'~':2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\\\\\\\@"')])] )
self.assertEqual([(1, '$["\\\\\\\\@"]')], result)
def test_evaluates_current_node_with_multiple_actual_backslashes_preceding(self):
result = jp.evaluate({"\\\\__current":1,"~":2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\\\\\@"')])] )
self.assertEqual([(1, '$["\\\\\\\\__current"]')], result)
def test_evaluates_root_node_in_expression(self):
result = jp.evaluate({'a':['i','ii','iii'],'b':1},
[jp.PChild(targets=[jp.PProperty(name='a')]),
jp.PChild(targets=[jp.PExpression(code='$["b"]')])] )
self.assertEqual([('ii', '$["a"][1]')], result)
def test_evaluates_escaped_dollar_symbol_in_expression_as_plain_dollar(self):
result = jp.evaluate({"~":1,"$":2},
[jp.PChild(targets=[jp.PExpression(code='"\\$"')])] )
self.assertEqual([(2, '$["$"]')], result)
def test_evaluates_expression_with_actual_backslash_preceding_root_node(self):
result = jp.evaluate({'\\__root':1,'~':2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\$"')])] )
self.assertEqual([(1, '$["\\\\__root"]')], result)
def test_evaluates_escaped_dollar_symbol_with_multiple_actual_backslashes_preceding(self):
result = jp.evaluate({'\\\\$':1,'~':2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\\\\\\\$"')])] )
self.assertEqual([(1, '$["\\\\\\\\$"]')], result)
def test_evaluates_root_node_with_multiple_actual_backslashes_preceding(self):
result = jp.evaluate({'\\\\__root':1,'~':2},
[jp.PChild(targets=[jp.PExpression(code='"\\\\\\\\$"')])] )
self.assertEqual([(1, '$["\\\\\\\\__root"]')], result)
# filters
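# A filter keeps every child for which its code evaluates truthy; the
# result does not have to be a boolean, and @ and $ are available exactly
# as in expressions.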
def test_evaluates_filter_child_for_dict(self):
result = jp.evaluate({"a":1, "b":2, "c":3},
[jp.PChild(targets=[jp.PFilter(code='True')])] )
self.assertEqual([(1, '$["a"]'), (2, '$["b"]'), (3, '$["c"]')], result)
def test_evaluates_filter_child_for_sequence(self):
result = jp.evaluate(["a","b","c"],
[jp.PChild(targets=[jp.PFilter(code='True')])] )
self.assertEqual([("a", '$[0]'), ("b", '$[1]'), ("c", '$[2]')], result)
def test_evaluates_operators_in_filter(self):
result = jp.evaluate(["a","b","c"],
[jp.PChild(targets=[jp.PFilter(code='1>0 or 0>1')])] )
self.assertEqual([("a", '$[0]'), ("b", '$[1]'), ("c", '$[2]')], result)
def test_evaluates_current_node_in_filter(self):
result = jp.evaluate(["foo","bar","baz"],
[jp.PChild(targets=[jp.PFilter(code='"a" in @')])] )
self.assertEqual([("bar", '$[1]'), ("baz", '$[2]')], result)
def test_evaluates_non_boolean_result_for_filter(self):
result = jp.evaluate([1,6,4,3,8],
[jp.PChild(targets=[jp.PFilter(code='@ % 3')])] )
self.assertEqual([(1, '$[0]'), (4, '$[2]'), (8, '$[4]')], result)
def test_raises_error_for_bad_syntax_in_filter(self):
with self.assertRaises(jp.PythonSyntaxError):
jp.evaluate([1,2,3,4,5],
[jp.PChild(targets=[jp.PFilter(code='^!*&~')])] )
def test_evaluates_filter_with_builtins(self):
result = jp.evaluate([1,2,3],
[jp.PChild(targets=[jp.PFilter(code='len("foo") > 2')])] )
self.assertEqual([(1, '$[0]'), (2, '$[1]'), (3, '$[2]')], result)
def test_evaluates_root_node_in_filter(self):
result = jp.evaluate({"a":2, "b":3, "c":1},
[jp.PChild(targets=[jp.PFilter(code='@ >= $["a"]')])] )
self.assertEqual([(2, '$["a"]'), (3, '$["b"]')], result)
# failed steps
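# Steps that cannot be applied (missing keys or indices, wrong container
# types, runtime errors raised inside expressions or filters) silently
# drop the affected node instead of raising.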
def test_ignores_missing_property(self):
result = jp.evaluate([{'b':1},{'a':2},{'a':3,'b':4}],
[jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PProperty(name='b')])] )
self.assertEqual([(1, '$[0]["b"]'), (4, '$[2]["b"]')], result)
def test_ignores_missing_index(self):
result = jp.evaluate({'a':['i','ii'], 'b':['iv','v','vi'], 'c':['vii','viii','ix']},
[jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PSlice(index=2)])] )
self.assertEqual([('vi', '$["b"][2]'), ('ix', '$["c"][2]')], result)
def test_ignores_property_on_non_dict(self):
result = jp.evaluate([1,'a',{'b':4},[2,3]],
[jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PProperty(name='b')])] )
self.assertEqual([(4, '$[2]["b"]')], result)
def test_ignores_index_on_non_sequence(self):
result = jp.evaluate([1,'a',{'b':4},[2,3]],
[jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PSlice(index=1)])] )
self.assertEqual([(3, '$[3][1]')], result)
def test_applies_partial_slice_to_short_sequence(self):
result = jp.evaluate(['a','b','c','d'],
[jp.PChild(targets=[jp.PSlice(start=1, end=10)])] )
self.assertEqual([('b', '$[1]'), ('c', '$[2]'), ('d', '$[3]')], result)
def test_ignores_non_syntax_error_in_expression(self):
result = jp.evaluate(['a','b','c'],
[jp.PChild(targets=[jp.PExpression(code='len("a") > foo')])] )
self.assertEqual([], result)
def test_ignores_non_syntax_error_in_filter(self):
result = jp.evaluate(['ay','bee','eff'],
[jp.PChild(targets=[jp.PFilter(code='@[2] == "e"')])] )
self.assertEqual([('bee', '$[1]')], result)
# spec examples
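# These cases exercise the classic JSONPath bookstore document defined as
# TestEvaluate.example above.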
def test_example_book_store_authors(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PChild(targets=[jp.PProperty(name='store')]),
jp.PChild(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PWildcard()]),
jp.PChild(targets=[jp.PProperty(name='author')])] )
self.assertEqual([('Nigel Rees', '$["store"]["book"][0]["author"]'),
('Evelyn Waugh', '$["store"]["book"][1]["author"]'),
('Herman Melville', '$["store"]["book"][2]["author"]'),
('J. R. R. Tolkien', '$["store"]["book"][3]["author"]')], result)
def test_example_all_authors(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='author')])] )
self.assertEqual([('Nigel Rees', '$["store"]["book"][0]["author"]'),
('Evelyn Waugh', '$["store"]["book"][1]["author"]'),
('Herman Melville', '$["store"]["book"][2]["author"]'),
('J. R. R. Tolkien', '$["store"]["book"][3]["author"]')], result)
def test_example_all_things_in_store(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PChild(targets=[jp.PProperty(name='store')]),
jp.PChild(targets=[jp.PWildcard()])] )
self.assertEqual([
(
{
"color": "red",
"price": 19.95,
},
'$["store"]["bicycle"]'
),
(
[
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95,
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99,
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99,
},
],
'$["store"]["book"]'
)
], result)
def test_example_all_store_prices(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PChild(targets=[jp.PProperty(name='store')]),
jp.PRecursive(targets=[jp.PProperty(name='price')])] )
self.assertEqual([(19.95, '$["store"]["bicycle"]["price"]'),
(8.95, '$["store"]["book"][0]["price"]'),
(12.99, '$["store"]["book"][1]["price"]'),
(8.99, '$["store"]["book"][2]["price"]'),
(22.99, '$["store"]["book"][3]["price"]')], result)
def test_example_third_book(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PSlice(index=2)])] )
self.assertEqual([({
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
},
'$["store"]["book"][2]')], result)
def test_example_last_book(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PSlice(start=-1)])] )
self.assertEqual([({
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99,
},
'$["store"]["book"][3]')], result)
def test_example_first_two_books(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PSlice(end=2)])] )
self.assertEqual([({
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95,
},
'$["store"]["book"][0]'),
({
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99,
},
'$["store"]["book"][1]')], result)
def test_example_books_with_isbn(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PFilter(code='"isbn" in @')])] )
self.assertEqual([({
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
},
'$["store"]["book"][2]'),
({
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99,
},
'$["store"]["book"][3]')], result)
def test_example_books_cheaper_than_ten(self):
result = jp.evaluate(self.example,
[jp.PChild(targets=[jp.PRoot()]),
jp.PRecursive(targets=[jp.PProperty(name='book')]),
jp.PChild(targets=[jp.PFilter(code='@["price"] < 10')])] )
self.assertEqual([({
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95,
},
'$["store"]["book"][0]'),
({
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
},
'$["store"]["book"][2]')], result)
class TestJsonPath(unittest.TestCase):
def test_returns_values_by_default(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$[*]')
self.assertEqual([1, 2, "d"], result)
def test_returns_values_if_specified(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$[*]', jp.RESULT_TYPE_VALUE)
self.assertEqual([1, 2, "d"], result)
def test_returns_paths_if_specified(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$[*]', jp.RESULT_TYPE_PATH)
self.assertEqual(['$["a"]', '$["b"]', '$["c"]'], result)
def test_returns_both_if_specified(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$[*]', jp.RESULT_TYPE_BOTH)
self.assertEqual([(1, '$["a"]'), (2, '$["b"]'), ("d", '$["c"]')], result)
def test_returns_false_on_no_match(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$.e')
self.assertEqual(False, result)
def test_returns_empty_list_on_no_match_if_specified(self):
result = jp.jsonpath({"a":1, "b":2, "c":"d"}, '$.e', always_return_list=True)
self.assertEqual([], result)
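# For quick orientation, a usage summary assembled from the TestJsonPath
# cases above:
#
#   jp.jsonpath({"a": 1, "b": 2, "c": "d"}, '$[*]')
#   # -> [1, 2, "d"]
#   jp.jsonpath({"a": 1, "b": 2, "c": "d"}, '$[*]', jp.RESULT_TYPE_PATH)
#   # -> ['$["a"]', '$["b"]', '$["c"]']
#   jp.jsonpath({"a": 1, "b": 2, "c": "d"}, '$.e')
#   # -> False
#   jp.jsonpath({"a": 1, "b": 2, "c": "d"}, '$.e', always_return_list=True)
#   # -> []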
# === end of file (dataset quality-signal columns omitted) ===
# File: drp/tests.py | Repo: dennereed/paleocore | License: MIT | Language: Python
from django.test import TestCase
from drp.models import Occurrence, Biology, Locality
from drp.models import Taxon, IdentificationQualifier
from datetime import datetime
from django.contrib.auth.models import User
from django.contrib.gis.geos import Point, Polygon
class LocalityMethodsTests(TestCase):
"""
Test Locality instance creation and methods
"""
def test_locality_save_simple(self):
starting_record_count = Locality.objects.count() # get current record count
# Create a simple square polygon
# Note that the first and the last tuple must have exactly the same
# coordinates, so the ring is explicitly closed.
poly = Polygon(
((41.6, 11.2), (41.8, 11.2), (41.8, 11.0), (41.6, 11.0), (41.6, 11.2))
)
new_locality = Locality(paleolocality_number=1, geom=poly)
new_locality.save()
self.assertEqual(Locality.objects.count(), starting_record_count+1)
def test_locality_create_simple(self):
starting_record_count = Locality.objects.count() # get current record count
poly = Polygon(
((41.2, 11.2), (41.4, 11.2), (41.4, 11.0), (41.2, 11.0), (41.2, 11.2))
)
Locality.objects.create(paleolocality_number=2, geom=poly)
self.assertEqual(Locality.objects.count(), starting_record_count+1)
# class OccurrenceCreationMethodTests(TestCase):
# """
# Test Occurrence instance creation and methods
# """
#
# def setUp(self):
# def create_square_locality(x, y):
# return Polygon(
# (
# (x-0.01, y+0.01),
# (x+0.01, y+0.01),
# (x+0.01, y-0.01),
# (x-0.01, y-0.01),
# (x-0.01, y+0.01)
# )
# )
#
# Locality.objects.create(paleolocality_number=1, geom=create_square_locality(41.1, 11.1))
# Locality.objects.create(paleolocality_number=2, geom=create_square_locality(41.2, 11.2))
# Locality.objects.create(paleolocality_number=3, geom=create_square_locality(41.3, 11.3))
# Locality.objects.create(paleolocality_number=4, geom=create_square_locality(41.4, 11.4))
#
# def test_occurrence_save_simple(self):
# """
# Test Occurrence instance save method with the simplest possible attributes, coordinates only
# """
# starting_record_count = Occurrence.objects.count() # get current number of occurrence records
# new_occurrence = Occurrence(geom="POINT (41.1 11.1)",
# locality=Locality.objects.get(paleolocality_number=1),
# field_number=datetime.now())
# new_occurrence.save()
# now = datetime.now()
# self.assertEqual(Occurrence.objects.count(), starting_record_count+1) # test that one record has been added
# self.assertEqual(new_occurrence.catalog_number, "--") # test catalog number generation in save method
# self.assertEqual(new_occurrence.date_last_modified.day, now.day) # test date last modified is correct
# self.assertEqual(new_occurrence.point_x(), 41.1)
# self.assertEqual(new_occurrence.locality.paleolocality_number, 1)
#
# def test_occurrence_create_simple(self):
# """
# Test Occurrence instance creation with the simplest possible attributes, coordinates only
# """
# starting_record_count = Occurrence.objects.count() # get current number of occurrence records
# new_occurrence = Occurrence.objects.create(geom=Point(41.2, 11.2),
# locality=Locality.objects.get(paleolocality_number=2),
# field_number=datetime.now())
# now = datetime.now()
# self.assertEqual(Occurrence.objects.count(), starting_record_count+1) # test that one record has been added
# self.assertEqual(new_occurrence.catalog_number, "--") # test catalog number generation in save method
# self.assertEqual(new_occurrence.date_last_modified.day, now.day) # test date last modified is correct
# self.assertEqual(new_occurrence.point_x(), 41.2)
#
# def test_occurrence_admin_view(self):
# starting_record_count = Occurrence.objects.count() # get current number of occurrence records
# # The simplest occurrence instance we can create needs only a location.
# # Using the instance creation and then save methods
#
# new_occurrence = Occurrence.objects.create(geom=Point(41.3, 11.3),
# locality=Locality.objects.get(paleolocality_number=3),
# field_number=datetime.now())
# now = datetime.now()
# self.assertEqual(Occurrence.objects.count(), starting_record_count+1) # test that one record has been added
# self.assertEqual(new_occurrence.catalog_number, "--") # test catalog number generation in save method
# self.assertEqual(new_occurrence.date_last_modified.day, now.day) # test date last modified is correct
# self.assertEqual(new_occurrence.point_x(), 41.3)
#
# response = self.client.get('/admin/mlp/', follow=True)
# self.assertEqual(response.status_code, 200)
# self.assertContains(response, 'Username') # redirects to login form
# class OccurrenceMethodTests(TestCase):
# """
# Test Occurrence Methods
# """
#
# def setUp(self):
# def create_square_locality(x, y):
# return Polygon(
# (
# (x-0.01, y+0.01),
# (x+0.01, y+0.01),
# (x+0.01, y-0.01),
# (x-0.01, y-0.01),
# (x-0.01, y+0.01)
# )
# )
# # Create four simple square locality polygons
# Locality.objects.create(paleolocality_number=1, geom=create_square_locality(41.1, 11.1))
#
# # Create one occurrence point in polygon 1
# Occurrence.objects.create(geom=Point(41.1, 11.1),
# barcode=1,
# catalog_number='DIK-1-1',
# locality=Locality.objects.get(paleolocality_number=1),
# field_number=datetime.now())
#
# def test_point_x_method(self):
# dik1 = Occurrence.objects.get(barcode=1)
# self.assertEqual(dik1.point_x(), 41.1)
#
# def test_point_y_method(self):
# dik1 = Occurrence.objects.get(barcode=1)
# self.assertEqual(dik1.point_y(), 11.1)
#
# def test_easting_method(self):
# dik1 = Occurrence.objects.get(barcode=1)
# self.assertEqual(dik1.easting(), 729382.2689836712)
#
# def test_northing_method(self):
# dik1 = Occurrence.objects.get(barcode=1)
# self.assertEqual(dik1.northing(), 1227846.080614904)
#
#
#
# class BiologyMethodTests(TestCase):
# """
# Test Biology instance methods
# """
# fixtures = [
# 'fixtures/fiber_data_150611.json',
# 'taxonomy/fixtures/taxonomy_data_150611.json'
# ]
#
# def setUp(self):
# def create_square_locality(x, y):
# return Polygon(
# (
# (x-0.01, y+0.01),
# (x+0.01, y+0.01),
# (x+0.01, y-0.01),
# (x-0.01, y-0.01),
# (x-0.01, y+0.01)
# )
# )
#
# Locality.objects.create(paleolocality_number=1, geom=create_square_locality(41.1, 11.1))
# Locality.objects.create(paleolocality_number=2, geom=create_square_locality(41.2, 11.2))
# Locality.objects.create(paleolocality_number=3, geom=create_square_locality(41.3, 11.3))
# Locality.objects.create(paleolocality_number=4, geom=create_square_locality(41.4, 11.4))
#
# def test_biology_save_method(self):
# """
# Test Biology instance creation with save method
# """
#
# # self.biology_setup()
# locality_1 = Locality.objects.get(paleolocality_number__exact=1)
# new_taxon = Taxon.objects.get(name__exact="Primates")
# id_qual = IdentificationQualifier.objects.get(name__exact="None")
#
# starting_occurrence_record_count = Occurrence.objects.count() # get current number of occurrence records
# starting_biology_record_count = Biology.objects.count() # get the current number of biology records
# # The simplest occurrence instance we can create needs only a location.
# # Using the instance creation and then save methods
#
# new_bio = Biology(
# barcode=1111,
# basis_of_record="FossilSpecimen",
# collection_code="DRP",
# paleolocality_number="1",
# item_number="1",
# geom="POINT (41.1 11.1)",
# locality=locality_1,
# taxon=new_taxon,
# identification_qualifier=id_qual,
# field_number=datetime.now()
# )
# new_bio.save()
# now = datetime.now()
# self.assertEqual(Occurrence.objects.count(), starting_occurrence_record_count+1)
# self.assertEqual(Biology.objects.count(), starting_biology_record_count+1)
#
# self.assertEqual(new_bio.catalog_number, "DRP-1-1") # test catalog number generation in save method
# self.assertEqual(new_bio.date_last_modified.day, now.day) # test date last modified is correct
# self.assertEqual(new_bio.point_x(), 41.1)
#
# def test_biology_create_observation(self):
# """
# Test Biology instance creation for observations
# """
# occurrence_starting_record_count = Occurrence.objects.count() # get current number of occurrence records
# biology_starting_record_count = Biology.objects.count() # get the current number of biology records
# # The simplest occurrence instance we can create needs only a location.
# # Using the instance creation and then save methods
# new_occurrence = Biology.objects.create(
# barcode=2222,
# basis_of_record="HumanObservation",
# collection_code="COL",
# paleolocality_number="2",
# item_number="1",
# geom=Point(41.21, 11.21),
# locality=Locality.objects.get(paleolocality_number__exact=2),
# taxon=Taxon.objects.get(name__exact="Primates"),
# identification_qualifier=IdentificationQualifier.objects.get(name__exact="None"),
# field_number=datetime.now()
# )
# now = datetime.now()
# self.assertEqual(Occurrence.objects.count(), occurrence_starting_record_count+1) # one record added?
# self.assertEqual(new_occurrence.catalog_number, "COL-2-1") # test catalog number generation in save method
# self.assertEqual(new_occurrence.date_last_modified.day, now.day) # test date last modified is correct
# self.assertEqual(new_occurrence.point_x(), 41.21)
# self.assertEqual(Biology.objects.count(), biology_starting_record_count+1) # no biology record was added?
# self.assertEqual(Biology.objects.filter(basis_of_record__exact="HumanObservation").count(), 1)
# response = self.client.get('/admin/mlp/', follow=True)
# self.assertEqual(response.status_code, 200)
# self.assertContains(response, 'Username') # redirects to login form
#
# # response = self.client.get('/admin/mlp/biology/')
# # self.assertEqual(response.status_code, 200)
# # response = self.client.get('/admin/mlp/biology/'+str(new_occurrence.pk)+'/')
# # self.assertEqual(response.status_code, 200)
#
#
# class DRPViewsTests(TestCase):
# """
# The DRP Views Test Case depends on two fixtures.
# """
# fixtures = [
# 'fixtures/fiber_data_150611.json',
# 'taxonomy/fixtures/taxonomy_data_150611.json',
# ]
#
# def setUp(self):
#
# # Populate Localities
# def create_square_locality(x, y):
# return Polygon(
# (
# (x-0.01, y+0.01),
# (x+0.01, y+0.01),
# (x+0.01, y-0.01),
# (x-0.01, y-0.01),
# (x-0.01, y+0.01)
# )
# )
#
# Locality.objects.create(paleolocality_number=1, geom=create_square_locality(41.1, 11.1))
# Locality.objects.create(paleolocality_number=2, geom=create_square_locality(41.2, 11.2))
# Locality.objects.create(paleolocality_number=3, geom=create_square_locality(41.3, 11.3))
# Locality.objects.create(paleolocality_number=4, geom=create_square_locality(41.4, 11.4))
#
# # Populate Biology instances
# id_qualifier = IdentificationQualifier.objects.get(name__exact="None")
# barcode_index = 1
# mammal_orders = (("Primates", "Primates"),
# ("Perissodactyla", "Perissodactyla"),
# ("Artiodactyla", "Artiodactyla"),
# ("Rodentia", "Rodentia"),
# ("Carnivora", "Carnivora"),)
#
# for order_tuple_element in mammal_orders:
# Biology.objects.create(
# barcode=barcode_index,
# basis_of_record="HumanObservation",
# collection_code="DRP",
# paleolocality_number="1",
# item_number=barcode_index,
# geom=Point(41.11, 11.11),
# locality=Locality.objects.get(paleolocality_number__exact=1),
# taxon=Taxon.objects.get(name__exact=order_tuple_element[0]),
# identification_qualifier=id_qualifier,
# field_number=datetime.now()
# )
# barcode_index += 1
#
# self.assertEqual(Locality.objects.count(), 4)
# self.assertEqual(Occurrence.objects.count(), len(mammal_orders))
#
# def test_admin_list_view(self):
# response = self.client.get('/admin/mlp/', follow=True)
# self.assertEqual(response.status_code, 200)
# self.assertContains(response, 'Username') # redirects to login form
#
#
# class DRPAdminViewTests(TestCase):
# """
# The DRP Views Test Case depends on two fixtures.
# """
# fixtures = [
# 'fixtures/fiber_data_150611.json',
# 'taxonomy/fixtures/taxonomy_data_150611.json',
# ]
#
# def setUp(self):
#
# # Populate Localities
# def create_square_locality(x, y):
# return Polygon(
# (
# (x-0.01, y+0.01),
# (x+0.01, y+0.01),
# (x+0.01, y-0.01),
# (x-0.01, y-0.01),
# (x-0.01, y+0.01)
# )
# )
#
# Locality.objects.create(paleolocality_number=1, geom=create_square_locality(41.1, 11.1))
# Locality.objects.create(paleolocality_number=2, geom=create_square_locality(41.2, 11.2))
# Locality.objects.create(paleolocality_number=3, geom=create_square_locality(41.3, 11.3))
# Locality.objects.create(paleolocality_number=4, geom=create_square_locality(41.4, 11.4))
#
# # Populate Biology instances
# id_qualifier = IdentificationQualifier.objects.get(name__exact="None")
# barcode_index = 1
# mammal_orders = (("Primates", "Primates"),
# ("Perissodactyla", "Perissodactyla"),
# ("Artiodactyla", "Artiodactyla"),
# ("Rodentia", "Rodentia"),
# ("Carnivora", "Carnivora"),)
#
# for order_tuple_element in mammal_orders:
# Biology.objects.create(
# barcode=barcode_index,
# basis_of_record="HumanObservation",
# collection_code="DRP",
# paleolocality_number="1",
# item_number=barcode_index,
# geom=Point(41.11, 11.11),
# locality=Locality.objects.get(paleolocality_number__exact=1),
# taxon=Taxon.objects.get(name__exact=order_tuple_element[0]),
# identification_qualifier=id_qualifier,
# field_number=datetime.now()
# )
# barcode_index += 1
#
# self.assertEqual(Locality.objects.count(), 4)
# self.assertEqual(Occurrence.objects.count(), len(mammal_orders))
#
# test_user = User.objects.create_user(username='test_user', password='password')
# test_user.is_staff = True
# test_user.save()
#
# def test_admin_list_view(self):
# response = self.client.get('/admin/mlp/', follow=True)
# self.assertEqual(response.status_code, 200)
# self.assertContains(response, 'Username') # redirects to login form
#
# def test_admin_list_view_with_login(self):
# test_user = User.objects.get(username='test_user')
# self.assertEqual(test_user.is_staff, True) # Test user is staff
# self.client.login(username='test_user', password='password')
# response = self.client.get('/admin/mlp/', follow=True)
# self.assertEqual(response.status_code, 403)
# #self.assertContains(response, 'Username') # redirects to login form
# === end of file (dataset quality-signal columns omitted) ===
# File: hplc_interface/__init__.py | Repo: peterpolidoro/hplc_interface_python | License: BSD-3-Clause | Language: Python
'''
This Python package (hplc_interface) creates a class named HplcInterface.
'''
from .hplc_interface import HplcInterface, __version__, main
# === end of file (dataset quality-signal columns omitted) ===
# File: __init__.py | Repo: CarboniDavide/rtscan | License: MIT | Language: Python
from core import *
from mod import *
# === end of file (dataset quality-signal columns omitted) ===
# File: vanir/tests/api_admin.py | Repo: VanirLab/VOS | License: MIT | Language: Python
import asyncio
import operator
import os
import shutil
import tempfile
import unittest.mock
import libvirt
import copy
import vanir
import vanir.devices
import vanir.firewall
import vanir.api.admin
import vanir.tests
import vanir.storage
# volume properties reported via the Admin API
volume_properties = [
'pool', 'vid', 'size', 'usage', 'rw', 'source', 'path',
'save_on_stop', 'snap_on_start', 'revisions_to_keep']
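# admin.vm.volume.Info is expected to report exactly these keys, one
# key=value pair per line, plus an is_outdated flag whenever the storage
# driver implements it (see test_080/test_081 below).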
class AdminAPITestCase(vanir.tests.VanirTestCase):
def setUp(self):
super().setUp()
self.test_base_dir = '/tmp/vanir-test-dir'
self.base_dir_patch = unittest.mock.patch.dict(vanir.config.system_path,
{'vanir_base_dir': self.test_base_dir})
self.base_dir_patch2 = unittest.mock.patch(
'vanir.config.vanir_base_dir', self.test_base_dir)
self.base_dir_patch3 = unittest.mock.patch.dict(
vanir.config.defaults['pool_configs']['varlibvanir'],
{'dir_path': self.test_base_dir})
self.base_dir_patch.start()
self.base_dir_patch2.start()
self.base_dir_patch3.start()
app = vanir.Vanir('/tmp/vanir-test.xml', load=False)
app.vmm = unittest.mock.Mock(spec=vanir.app.VMMConnection)
app.load_initial_values()
app.setup_pools()
app.default_kernel = '1.0'
app.default_netvm = None
self.template = app.add_new_vm('TemplateVM', label='black',
name='test-template')
app.default_template = 'test-template'
with vanir.tests.substitute_entry_points('vanir.storage',
'vanir.tests.storage'):
self.loop.run_until_complete(
app.add_pool('test', driver='test'))
app.default_pool = 'varlibvanir'
app.save = unittest.mock.Mock()
self.vm = app.add_new_vm('AppVM', label='red', name='test-vm1',
template='test-template')
self.app = app
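# configure the libvirt mock so that every domain reports as powered off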
libvirt_attrs = {
'libvirt_conn.lookupByUUID.return_value.isActive.return_value':
False,
'libvirt_conn.lookupByUUID.return_value.state.return_value':
[libvirt.VIR_DOMAIN_SHUTOFF],
}
app.vmm.configure_mock(**libvirt_attrs)
self.emitter = vanir.tests.TestEmitter()
self.app.domains[0].fire_event = self.emitter.fire_event
def tearDown(self):
self.base_dir_patch3.stop()
self.base_dir_patch2.stop()
self.base_dir_patch.stop()
if os.path.exists(self.test_base_dir):
shutil.rmtree(self.test_base_dir)
try:
del self.netvm
except AttributeError:
pass
del self.vm
del self.template
self.app.close()
del self.app
del self.emitter
super(AdminAPITestCase, self).tearDown()
def call_mgmt_func(self, method, dest, arg=b'', payload=b''):
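# Run a single Admin API call as dom0 against `dest` and return its
# response, verifying along the way that the matching
# admin-permission:<method> event was fired.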
mgmt_obj = vanir.api.admin.VanirAdminAPI(self.app, b'dom0', method, dest, arg)
loop = asyncio.get_event_loop()
response = loop.run_until_complete(
mgmt_obj.execute(untrusted_payload=payload))
self.assertEventFired(self.emitter,
'admin-permission:' + method.decode('ascii'))
return response
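# A typical invocation then reads:
#   self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1', b'label')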
class TC_00_VMs(AdminAPITestCase):
def test_000_vm_list(self):
value = self.call_mgmt_func(b'admin.vm.List', b'dom0')
self.assertEqual(value,
'dom0 class=AdminVM state=Running\n'
'test-template class=TemplateVM state=Halted\n'
'test-vm1 class=AppVM state=Halted\n')
def test_001_vm_list_single(self):
value = self.call_mgmt_func(b'admin.vm.List', b'test-vm1')
self.assertEqual(value,
'test-vm1 class=AppVM state=Halted\n')
def test_010_vm_property_list(self):
# this test is little more than a smoke test, but at least it checks
# that the appropriate admin-permission event is fired
value = self.call_mgmt_func(b'admin.vm.property.List', b'test-vm1')
properties = self.app.domains['test-vm1'].property_list()
self.assertEqual(value,
''.join('{}\n'.format(prop.__name__) for prop in properties))
def test_020_vm_property_get_str(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'name')
self.assertEqual(value, 'default=False type=str test-vm1')
def test_021_vm_property_get_int(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'vcpus')
self.assertEqual(value, 'default=True type=int 2')
def test_022_vm_property_get_bool(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'provides_network')
self.assertEqual(value, 'default=True type=bool False')
def test_023_vm_property_get_label(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'label')
self.assertEqual(value, 'default=False type=label red')
def test_024_vm_property_get_vm(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'template')
self.assertEqual(value, 'default=False type=vm test-template')
def test_025_vm_property_get_vm_none(self):
value = self.call_mgmt_func(b'admin.vm.property.Get', b'test-vm1',
b'netvm')
self.assertEqual(value, 'default=True type=vm ')
def test_025_vm_property_get_default_vm_none(self):
value = self.call_mgmt_func(
b'admin.vm.property.GetDefault',
b'test-vm1',
b'template')
self.assertEqual(value, None)
def test_026_vm_property_get_default_bool(self):
self.vm.provides_network = True
value = self.call_mgmt_func(
b'admin.vm.property.GetDefault',
b'test-vm1',
b'provides_network')
self.assertEqual(value, 'type=bool False')
def test_030_vm_property_set_vm(self):
netvm = self.app.add_new_vm('AppVM', label='red', name='test-net',
template='test-template', provides_network=True)
with unittest.mock.patch('vanir.vm.VMProperty.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'netvm', b'test-net')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, 'test-net')
self.app.save.assert_called_once_with()
def test_031_vm_property_set_vm_none(self):
netvm = self.app.add_new_vm('AppVM', label='red', name='test-net',
template='test-template', provides_network=True)
with unittest.mock.patch('vanir.vm.VMProperty.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'netvm', b'')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, '')
self.app.save.assert_called_once_with()
def test_032_vm_property_set_vm_invalid1(self):
with unittest.mock.patch('vanir.vm.VMProperty.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'netvm', b'forbidden-chars/../!')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_033_vm_property_set_vm_invalid2(self):
with unittest.mock.patch('vanir.vm.VMProperty.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'netvm', b'\x80\x90\xa0')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_034_vm_property_set_bool_true(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'autostart', b'True')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, True)
self.app.save.assert_called_once_with()
def test_035_vm_property_set_bool_false(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'autostart', b'False')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, False)
self.app.save.assert_called_once_with()
def test_036_vm_property_set_bool_invalid1(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'autostart', b'some string')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_037_vm_property_set_bool_invalid2(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'autostart', b'\x80\x90@#$%^&*(')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_038_vm_property_set_str(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'kernel', b'1.0')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, '1.0')
self.app.save.assert_called_once_with()
def test_039_vm_property_set_str_invalid1(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'kernel', b'some, non-ASCII: \x80\xd2')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_040_vm_property_set_int(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'maxmem', b'1024000')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, 1024000)
self.app.save.assert_called_once_with()
def test_041_vm_property_set_int_invalid1(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'maxmem', b'fourty two')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_042_vm_property_set_label(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'label', b'green')
self.assertIsNone(value)
mock.assert_called_once_with(self.vm, 'green')
self.app.save.assert_called_once_with()
def test_043_vm_property_set_label_invalid1(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'label', b'some, non-ASCII: \x80\xd2')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
@unittest.skip('label existence not checked before actual setter yet')
def test_044_vm_property_set_label_invalid2(self):
with unittest.mock.patch('vanir.property.__set__') as mock:
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.property.Set', b'test-vm1',
b'label', b'non-existing-color')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_050_vm_property_help(self):
value = self.call_mgmt_func(b'admin.vm.property.Help', b'test-vm1',
b'label')
self.assertEqual(value,
'Colourful label assigned to VM. This is where the colour of the '
'padlock is set.')
self.assertFalse(self.app.save.called)
def test_052_vm_property_help_invalid_property(self):
with self.assertRaises(vanir.exc.VanirNoSuchPropertyError):
self.call_mgmt_func(b'admin.vm.property.Help', b'test-vm1',
b'no-such-property')
self.assertFalse(self.app.save.called)
def test_060_vm_property_reset(self):
with unittest.mock.patch('vanir.property.__delete__') as mock:
value = self.call_mgmt_func(b'admin.vm.property.Reset', b'test-vm1',
b'default_user')
mock.assert_called_with(self.vm)
self.assertIsNone(value)
self.app.save.assert_called_once_with()
def test_062_vm_property_reset_invalid_property(self):
with unittest.mock.patch('vanir.property.__delete__') as mock:
with self.assertRaises(vanir.exc.VanirNoSuchPropertyError):
self.call_mgmt_func(b'admin.vm.property.Help', b'test-vm1',
b'no-such-property')
self.assertFalse(mock.called)
self.assertFalse(self.app.save.called)
def test_070_vm_volume_list(self):
self.vm.volumes = unittest.mock.Mock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
self.vm.volumes.configure_mock(**volumes_conf)
value = self.call_mgmt_func(b'admin.vm.volume.List', b'test-vm1')
self.assertEqual(value, 'root\nprivate\nvolatile\nkernel\n')
# check if _only_ keys were accessed
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
def test_080_vm_volume_info(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
for prop in volume_properties:
volumes_conf[
'__getitem__.return_value.{}'.format(prop)] = prop + '-value'
volumes_conf[
'__getitem__.return_value.is_outdated.return_value'] = False
self.vm.volumes.configure_mock(**volumes_conf)
value = self.call_mgmt_func(b'admin.vm.volume.Info', b'test-vm1',
b'private')
self.assertEqual(value,
''.join('{p}={p}-value\n'.format(p=p) for p in volume_properties) +
'is_outdated=False\n')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__getitem__')('private'),
unittest.mock.call.__getattr__('__getitem__')().is_outdated()])
def test_081_vm_volume_info_unsupported_is_outdated(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
for prop in volume_properties:
volumes_conf[
'__getitem__.return_value.{}'.format(prop)] = prop + '-value'
volumes_conf[
'__getitem__.return_value.is_outdated.side_effect'] = \
NotImplementedError
self.vm.volumes.configure_mock(**volumes_conf)
value = self.call_mgmt_func(b'admin.vm.volume.Info', b'test-vm1',
b'private')
self.assertEqual(value,
''.join('{p}={p}-value\n'.format(p=p) for p in volume_properties))
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__getitem__')('private'),
unittest.mock.call.__getattr__('__getitem__')().is_outdated()])
def test_080_vm_volume_info_invalid_volume(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
self.vm.volumes.configure_mock(**volumes_conf)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Info', b'test-vm1',
b'no-such-volume')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
def test_090_vm_volume_listsnapshots(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
'__getitem__.return_value.revisions':
{'rev2': '2018-02-22T22:22:22', 'rev1': '2018-01-11T11:11:11'},
}
self.vm.volumes.configure_mock(**volumes_conf)
value = self.call_mgmt_func(b'admin.vm.volume.ListSnapshots',
b'test-vm1', b'private')
self.assertEqual(value,
'rev1\nrev2\n')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__getitem__')('private')])
def test_090_vm_volume_listsnapshots_invalid_volume(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
self.vm.volumes.configure_mock(**volumes_conf)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.ListSnapshots', b'test-vm1',
b'no-such-volume')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
@unittest.skip('method not implemented yet')
def test_100_vm_volume_snapshot(self):
pass
@unittest.skip('method not implemented yet')
def test_100_vm_volume_snapshot_invalid_volume(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
'__getitem__.return_value.revisions':
{'rev2': '2018-02-22T22:22:22', 'rev1': '2018-01-11T11:11:11'},
}
self.vm.volumes.configure_mock(**volumes_conf)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Snapshots',
b'test-vm1', b'no-such-volume')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
@unittest.skip('method not implemented yet')
def test_100_vm_volume_snapshot_invalid_revision(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel']
}
self.vm.volumes.configure_mock(**volumes_conf)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Snapshots',
b'test-vm1', b'private', b'no-such-rev')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__getitem__')('private')])
def test_110_vm_volume_revert(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
'__getitem__.return_value.revisions':
{'rev2': '2018-02-22T22:22:22', 'rev1': '2018-01-11T11:11:11'},
}
self.vm.volumes.configure_mock(**volumes_conf)
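# drop the _is_coroutine attribute auto-created by MagicMock, so that
# asyncio does not misreport the mocked revert() as a coroutine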
del self.vm.volumes['private'].revert('rev1')._is_coroutine
self.vm.storage = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.vm.volume.Revert',
b'test-vm1', b'private', b'rev1')
self.assertIsNone(value)
self.assertEqual(self.vm.volumes.mock_calls, [
('__getitem__', ('private', ), {}),
('__getitem__().revert', ('rev1', ), {}),
('keys', (), {}),
('__getitem__', ('private', ), {}),
('__getitem__().__hash__', (), {}),
('__getitem__().revert', ('rev1', ), {}),
])
self.assertEqual(self.vm.storage.mock_calls, [])
def test_110_vm_volume_revert_invalid_rev(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
'__getitem__.return_value.revisions':
{'rev2': '2018-02-22T22:22:22', 'rev1': '2018-01-11T11:11:11'},
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Revert',
b'test-vm1', b'private', b'no-such-rev')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__getitem__')('private')])
self.assertFalse(self.vm.storage.called)
def test_120_vm_volume_resize(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
self.vm.storage.resize.side_effect = self.dummy_coro
value = self.call_mgmt_func(b'admin.vm.volume.Resize',
b'test-vm1', b'private', b'1024000000')
self.assertIsNone(value)
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
self.assertEqual(self.vm.storage.mock_calls,
[unittest.mock.call.resize('private', 1024000000)])
def test_120_vm_volume_resize_invalid_size1(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
self.vm.storage.resize.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Resize',
b'test-vm1', b'private', b'no-int-size')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
self.assertFalse(self.vm.storage.called)
def test_120_vm_volume_resize_invalid_size2(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
self.vm.storage.resize.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Resize',
b'test-vm1', b'private', b'-1')
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys()])
self.assertFalse(self.vm.storage.called)
def test_130_pool_list(self):
self.app.pools = ['file', 'lvm']
value = self.call_mgmt_func(b'admin.pool.List', b'dom0')
self.assertEqual(value, 'file\nlvm\n')
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_140_pool_listdrivers(self, mock_parameters, mock_drivers):
self.app.pools = ['file', 'lvm']
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
value = self.call_mgmt_func(b'admin.pool.ListDrivers', b'dom0')
self.assertEqual(value,
'driver1 param1 param2\ndriver2 param3 param4\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls,
[unittest.mock.call('driver1'), unittest.mock.call('driver2')])
self.assertFalse(self.app.save.called)
def test_150_pool_info(self):
self.app.pools = {
'pool1': unittest.mock.Mock(config={
'param1': 'value1', 'param2': 'value2'},
usage=102400,
size=204800)
}
self.app.pools['pool1'].included_in.return_value = None
value = self.call_mgmt_func(b'admin.pool.Info', b'dom0', b'pool1')
self.assertEqual(value,
'param1=value1\nparam2=value2\nsize=204800\nusage=102400\n')
self.assertFalse(self.app.save.called)
def test_151_pool_info_unsupported_size(self):
self.app.pools = {
'pool1': unittest.mock.Mock(config={
'param1': 'value1', 'param2': 'value2'},
size=None, usage=None),
}
self.app.pools['pool1'].included_in.return_value = None
value = self.call_mgmt_func(b'admin.pool.Info', b'dom0', b'pool1')
self.assertEqual(value,
'param1=value1\nparam2=value2\n')
self.assertFalse(self.app.save.called)
def test_152_pool_info_included_in(self):
self.app.pools = {
'pool1': unittest.mock.MagicMock(config={
'param1': 'value1',
'param2': 'value2'},
usage=102400,
size=204800)
}
self.app.pools['pool1'].included_in.return_value = \
self.app.pools['pool1']
self.app.pools['pool1'].__str__.return_value = 'pool1'
value = self.call_mgmt_func(b'admin.pool.Info', b'dom0', b'pool1')
self.assertEqual(value,
'param1=value1\nparam2=value2\nsize=204800\nusage=102400'
'\nincluded_in=pool1\n')
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add(self, mock_parameters, mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
value = self.call_mgmt_func(b'admin.pool.Add', b'dom0', b'driver1',
b'name=test-pool\nparam1=some-value\n')
self.assertIsNone(value)
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls,
[unittest.mock.call('driver1')])
self.assertEqual(add_pool_mock.mock_calls,
[unittest.mock.call(name='test-pool', driver='driver1',
param1='some-value')])
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add_invalid_driver(self, mock_parameters, mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Add', b'dom0',
b'no-such-driver', b'name=test-pool\nparam1=some-value\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls, [])
self.assertEqual(add_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add_invalid_param(self, mock_parameters, mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Add', b'dom0',
b'driver1', b'name=test-pool\nparam3=some-value\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls,
[unittest.mock.call('driver1')])
self.assertEqual(add_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add_missing_name(self, mock_parameters, mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Add', b'dom0',
b'driver1', b'param1=value\nparam2=some-value\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls, [])
self.assertEqual(add_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add_existing_pool(self, mock_parameters, mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Add', b'dom0',
b'driver1', b'name=file\nparam1=value\nparam2=some-value\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls, [])
self.assertEqual(add_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.pool_drivers')
@unittest.mock.patch('vanir.storage.driver_parameters')
def test_160_pool_add_invalid_config_format(self, mock_parameters,
mock_drivers):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock()
}
mock_drivers.return_value = ['driver1', 'driver2']
mock_parameters.side_effect = \
lambda driver: {
'driver1': ['param1', 'param2'],
'driver2': ['param3', 'param4']
}[driver]
add_pool_mock, self.app.add_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Add', b'dom0',
b'driver1', b'name=test-pool\nparam 1=value\n_param2\n')
self.assertEqual(mock_drivers.mock_calls, [unittest.mock.call()])
self.assertEqual(mock_parameters.mock_calls, [])
self.assertEqual(add_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
def test_170_pool_remove(self):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock(),
'test-pool': unittest.mock.Mock(),
}
remove_pool_mock, self.app.remove_pool = self.coroutine_mock()
value = self.call_mgmt_func(b'admin.pool.Remove', b'dom0', b'test-pool')
self.assertIsNone(value)
self.assertEqual(remove_pool_mock.mock_calls,
[unittest.mock.call('test-pool')])
self.assertTrue(self.app.save.called)
def test_170_pool_remove_invalid_pool(self):
self.app.pools = {
'file': unittest.mock.Mock(),
'lvm': unittest.mock.Mock(),
'test-pool': unittest.mock.Mock(),
}
remove_pool_mock, self.app.remove_pool = self.coroutine_mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Remove', b'dom0',
b'no-such-pool')
self.assertEqual(remove_pool_mock.mock_calls, [])
self.assertFalse(self.app.save.called)
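# admin.label.* tests: labels are looked up via app.get_label and stored in
# app.labels; only Create/Remove should trigger app.save().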
def test_180_label_list(self):
value = self.call_mgmt_func(b'admin.label.List', b'dom0')
self.assertEqual(value,
''.join('{}\n'.format(l.name) for l in self.app.labels.values()))
self.assertFalse(self.app.save.called)
def test_190_label_get(self):
self.app.get_label = unittest.mock.Mock()
self.app.get_label.configure_mock(**{'return_value.color': '0xff0000'})
value = self.call_mgmt_func(b'admin.label.Get', b'dom0', b'red')
self.assertEqual(value, '0xff0000')
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('red')])
self.assertFalse(self.app.save.called)
def test_195_label_index(self):
self.app.get_label = unittest.mock.Mock()
self.app.get_label.configure_mock(**{'return_value.index': 1})
value = self.call_mgmt_func(b'admin.label.Index', b'dom0', b'red')
self.assertEqual(value, '1')
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('red')])
self.assertFalse(self.app.save.called)
def test_200_label_create(self):
self.app.get_label = unittest.mock.Mock()
self.app.get_label.side_effect = KeyError
self.app.labels = unittest.mock.MagicMock()
labels_config = {
'keys.return_value': range(1, 9),
}
self.app.labels.configure_mock(**labels_config)
value = self.call_mgmt_func(b'admin.label.Create', b'dom0', b'cyan',
b'0x00ffff')
self.assertIsNone(value)
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('cyan')])
self.assertEqual(self.app.labels.mock_calls,
[unittest.mock.call.keys(),
unittest.mock.call.__getattr__('__setitem__')(9,
vanir.Label(9, '0x00ffff', 'cyan'))])
self.assertTrue(self.app.save.called)
def test_200_label_create_invalid_color(self):
self.app.get_label = unittest.mock.Mock()
self.app.get_label.side_effect = KeyError
self.app.labels = unittest.mock.MagicMock()
labels_config = {
'keys.return_value': range(1, 9),
}
self.app.labels.configure_mock(**labels_config)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Create', b'dom0', b'cyan',
b'abcd')
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('cyan')])
self.assertEqual(self.app.labels.mock_calls, [])
self.assertFalse(self.app.save.called)
def test_200_label_create_invalid_name(self):
self.app.get_label = unittest.mock.Mock()
self.app.get_label.side_effect = KeyError
self.app.labels = unittest.mock.MagicMock()
labels_config = {
'keys.return_value': range(1, 9),
}
self.app.labels.configure_mock(**labels_config)
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Create', b'dom0', b'01',
b'0xff0000')
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Create', b'dom0', b'../xxx',
b'0xff0000')
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Create', b'dom0',
b'strange-name!@#$',
b'0xff0000')
self.assertEqual(self.app.get_label.mock_calls, [])
self.assertEqual(self.app.labels.mock_calls, [])
self.assertFalse(self.app.save.called)
def test_200_label_create_already_exists(self):
self.app.get_label = unittest.mock.Mock(wraps=self.app.get_label)
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.label.Create', b'dom0', b'red',
b'abcd')
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('red')])
self.assertFalse(self.app.save.called)
def test_210_label_remove(self):
label = vanir.Label(9, '0x00ffff', 'cyan')
self.app.labels[9] = label
self.app.get_label = unittest.mock.Mock(wraps=self.app.get_label,
**{'return_value.index': 9})
self.app.labels = unittest.mock.MagicMock(wraps=self.app.labels)
value = self.call_mgmt_func(b'admin.label.Remove', b'dom0', b'cyan')
self.assertIsNone(value)
self.assertEqual(self.app.get_label.mock_calls,
[unittest.mock.call('cyan')])
self.assertEqual(self.app.labels.mock_calls,
[unittest.mock.call.__delitem__(9)])
self.assertTrue(self.app.save.called)
def test_210_label_remove_invalid_label(self):
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.label.Remove', b'dom0',
b'no-such-label')
self.assertFalse(self.app.save.called)
def test_210_label_remove_default_label(self):
self.app.labels = unittest.mock.MagicMock(wraps=self.app.labels)
self.app.get_label = unittest.mock.Mock(wraps=self.app.get_label,
**{'return_value.index': 6})
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Remove', b'dom0',
b'blue')
self.assertEqual(self.app.labels.mock_calls, [])
self.assertFalse(self.app.save.called)
def test_210_label_remove_in_use(self):
self.app.labels = unittest.mock.MagicMock(wraps=self.app.labels)
self.app.get_label = unittest.mock.Mock(wraps=self.app.get_label,
**{'return_value.index': 1})
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.label.Remove', b'dom0',
b'red')
self.assertEqual(self.app.labels.mock_calls, [])
self.assertFalse(self.app.save.called)
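# admin.vm.{Start,Shutdown,Pause,Unpause,Kill}: each test wraps a plain Mock
# in a coroutine so the handler can await it, then checks that the underlying
# mock was called exactly once with no arguments.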
def test_220_start(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
self.vm.start = coroutine_mock
value = self.call_mgmt_func(b'admin.vm.Start', b'test-vm1')
self.assertIsNone(value)
func_mock.assert_called_once_with()
def test_230_shutdown(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
self.vm.shutdown = coroutine_mock
value = self.call_mgmt_func(b'admin.vm.Shutdown', b'test-vm1')
self.assertIsNone(value)
func_mock.assert_called_once_with()
def test_240_pause(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
self.vm.pause = coroutine_mock
value = self.call_mgmt_func(b'admin.vm.Pause', b'test-vm1')
self.assertIsNone(value)
func_mock.assert_called_once_with()
def test_250_unpause(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
self.vm.unpause = coroutine_mock
value = self.call_mgmt_func(b'admin.vm.Unpause', b'test-vm1')
self.assertIsNone(value)
func_mock.assert_called_once_with()
def test_260_kill(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
self.vm.kill = coroutine_mock
value = self.call_mgmt_func(b'admin.vm.Kill', b'test-vm1')
self.assertIsNone(value)
func_mock.assert_called_once_with()
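# admin.Events is a streaming call: execute() runs until cancel(), forwarding
# each fired event through the send_event callback after an initial
# 'connection-established' message.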
def test_270_events(self):
send_event = unittest.mock.Mock(spec=[])
mgmt_obj = vanir.api.admin.VanirAdminAPI(self.app, b'dom0', b'admin.Events',
b'dom0', b'', send_event=send_event)
@asyncio.coroutine
def fire_event():
self.vm.fire_event('test-event', arg1='abc')
mgmt_obj.cancel()
loop = asyncio.get_event_loop()
execute_task = asyncio.ensure_future(
mgmt_obj.execute(untrusted_payload=b''))
asyncio.ensure_future(fire_event())
loop.run_until_complete(execute_task)
self.assertIsNone(execute_task.result())
self.assertEventFired(self.emitter,
'admin-permission:' + 'admin.Events')
self.assertEqual(send_event.mock_calls,
[
unittest.mock.call(self.app, 'connection-established'),
unittest.mock.call(self.vm, 'test-event', arg1='abc')
])
def test_271_events_add_vm(self):
send_event = unittest.mock.Mock(spec=[])
mgmt_obj = vanir.api.admin.VanirAdminAPI(self.app, b'dom0', b'admin.Events',
b'dom0', b'', send_event=send_event)
@asyncio.coroutine
def fire_event():
self.vm.fire_event('test-event', arg1='abc')
# add VM _after_ starting admin.Events call
vm = self.app.add_new_vm('AppVM', label='red', name='test-vm2',
template='test-template')
vm.fire_event('test-event2', arg1='abc')
mgmt_obj.cancel()
return vm
loop = asyncio.get_event_loop()
execute_task = asyncio.ensure_future(
mgmt_obj.execute(untrusted_payload=b''))
event_task = asyncio.ensure_future(fire_event())
loop.run_until_complete(execute_task)
vm2 = event_task.result()
self.assertIsNone(execute_task.result())
self.assertEventFired(self.emitter,
'admin-permission:' + 'admin.Events')
self.assertEqual(send_event.mock_calls,
[
unittest.mock.call(self.app, 'connection-established'),
unittest.mock.call(self.vm, 'test-event', arg1='abc'),
unittest.mock.call(self.app, 'domain-add', vm=vm2),
unittest.mock.call(vm2, 'test-event2', arg1='abc'),
])
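# admin.vm.feature.*: features act as a per-VM key/value store; the
# CheckWith* variants fall back to the template, netvm or AdminVM.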
def test_280_feature_list(self):
self.vm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.List', b'test-vm1')
self.assertEqual(value, 'test-feature\n')
self.assertFalse(self.app.save.called)
def test_290_feature_get(self):
self.vm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.Get', b'test-vm1',
b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_291_feature_get_none(self):
with self.assertRaises(vanir.exc.VanirFeatureNotFoundError):
self.call_mgmt_func(b'admin.vm.feature.Get',
b'test-vm1', b'test-feature')
self.assertFalse(self.app.save.called)
def test_300_feature_remove(self):
self.vm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.Remove', b'test-vm1',
b'test-feature')
self.assertIsNone(value)
self.assertNotIn('test-feature', self.vm.features)
self.assertTrue(self.app.save.called)
def test_301_feature_remove_none(self):
with self.assertRaises(vanir.exc.VanirFeatureNotFoundError):
self.call_mgmt_func(b'admin.vm.feature.Remove',
b'test-vm1', b'test-feature')
self.assertFalse(self.app.save.called)
def test_310_feature_checkwithtemplate(self):
self.vm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.CheckWithTemplate',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_311_feature_checkwithtemplate_tpl(self):
self.template.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.CheckWithTemplate',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_312_feature_checkwithtemplate_none(self):
with self.assertRaises(vanir.exc.VanirFeatureNotFoundError):
self.call_mgmt_func(b'admin.vm.feature.CheckWithTemplate',
b'test-vm1', b'test-feature')
self.assertFalse(self.app.save.called)
def test_315_feature_checkwithnetvm(self):
self.vm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.CheckWithNetvm',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_316_feature_checkwithnetvm_netvm(self):
self.netvm = self.app.add_new_vm('AppVM', label='red',
name='test-netvm1',
template='test-template',
provides_network=True)
self.vm.netvm = self.netvm
self.netvm.features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.CheckWithNetvm',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_317_feature_checkwithnetvm_none(self):
with self.assertRaises(vanir.exc.VanirFeatureNotFoundError):
self.call_mgmt_func(b'admin.vm.feature.CheckWithNetvm',
b'test-vm1', b'test-feature')
self.assertFalse(self.app.save.called)
def test_318_feature_checkwithadminvm(self):
self.app.domains['dom0'].features['test-feature'] = 'some-value'
value = self.call_mgmt_func(b'admin.vm.feature.CheckWithAdminVM',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.assertFalse(self.app.save.called)
def test_319_feature_checkwithtpladminvm(self):
self.app.domains['dom0'].features['test-feature'] = 'some-value'
value = self.call_mgmt_func(
b'admin.vm.feature.CheckWithTemplateAndAdminVM',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value')
self.template.features['test-feature'] = 'some-value2'
value = self.call_mgmt_func(
b'admin.vm.feature.CheckWithTemplateAndAdminVM',
b'test-vm1', b'test-feature')
self.assertEqual(value, 'some-value2')
self.assertFalse(self.app.save.called)
def test_320_feature_set(self):
value = self.call_mgmt_func(b'admin.vm.feature.Set',
b'test-vm1', b'test-feature', b'some-value')
self.assertIsNone(value)
self.assertEqual(self.vm.features['test-feature'], 'some-value')
self.assertTrue(self.app.save.called)
def test_321_feature_set_empty(self):
value = self.call_mgmt_func(b'admin.vm.feature.Set',
b'test-vm1', b'test-feature', b'')
self.assertIsNone(value)
self.assertEqual(self.vm.features['test-feature'], '')
self.assertTrue(self.app.save.called)
def test_320_feature_set_invalid(self):
with self.assertRaises(UnicodeDecodeError):
self.call_mgmt_func(b'admin.vm.feature.Set',
b'test-vm1', b'test-feature', b'\x02\x03\xffsome-value')
self.assertNotIn('test-feature', self.vm.features)
self.assertFalse(self.app.save.called)
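# Helpers for the tests below: dummy_coro is a no-op coroutine usable as a
# side_effect, and coroutine_mock returns a (mock, coroutine) pair so that
# awaited calls can be asserted on the plain mock.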
@asyncio.coroutine
def dummy_coro(self, *args, **kwargs):
pass
def coroutine_mock(self):
func_mock = unittest.mock.Mock()
@asyncio.coroutine
def coroutine_mock(*args, **kwargs):
return func_mock(*args, **kwargs)
return func_mock, coroutine_mock
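# Typical coroutine_mock usage (as in the pool tests above):
#     add_pool_mock, self.app.add_pool = self.coroutine_mock()
#     ...
#     self.assertEqual(add_pool_mock.mock_calls, [unittest.mock.call(...)])
# admin.vm.Create.* tests: vanir.storage.Storage.create is patched with
# dummy_coro so no real storage is touched.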
@unittest.mock.patch('vanir.storage.Storage.create')
def test_330_vm_create_standalone(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
self.call_mgmt_func(b'admin.vm.Create.StandaloneVM',
b'dom0', b'', b'name=test-vm2 label=red')
self.assertIn('test-vm2', self.app.domains)
vm = self.app.domains['test-vm2']
self.assertIsInstance(vm, vanir.vm.standalonevm.StandaloneVM)
self.assertEqual(vm.label, self.app.get_label('red'))
self.assertEqual(storage_mock.mock_calls,
[unittest.mock.call(self.app.domains['test-vm2']).create()])
self.assertTrue(os.path.exists(os.path.join(
self.test_base_dir, 'appvms', 'test-vm2')))
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_331_vm_create_standalone_spurious_template(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.Create.StandaloneVM',
b'dom0', b'test-template', b'name=test-vm2 label=red')
self.assertNotIn('test-vm2', self.app.domains)
self.assertEqual(storage_mock.mock_calls, [])
self.assertFalse(os.path.exists(os.path.join(
self.test_base_dir, 'appvms', 'test-vm2')))
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_332_vm_create_app(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red')
self.assertIn('test-vm2', self.app.domains)
vm = self.app.domains['test-vm2']
self.assertEqual(vm.label, self.app.get_label('red'))
self.assertEqual(vm.template, self.app.domains['test-template'])
self.assertEqual(storage_mock.mock_calls,
[unittest.mock.call(self.app.domains['test-vm2']).create()])
self.assertTrue(os.path.exists(os.path.join(
self.test_base_dir, 'appvms', 'test-vm2')))
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_333_vm_create_app_default_template(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'', b'name=test-vm2 label=red')
self.assertEqual(storage_mock.mock_calls,
[unittest.mock.call(self.app.domains['test-vm2']).create()])
self.assertIn('test-vm2', self.app.domains)
self.assertEqual(self.app.domains['test-vm2'].template,
self.app.default_template)
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_334_vm_create_invalid_name(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirValueError):
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template', b'name=test-###')
self.assertNotIn('test-###', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_335_vm_create_missing_name(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template', b'label=red')
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_336_vm_create_spurious_pool(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template',
b'name=test-vm2 label=red pool=default')
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_337_vm_create_duplicate_name(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template',
b'name=test-vm1 label=red')
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_338_vm_create_name_twice(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.vm.Create.AppVM',
b'dom0', b'test-template',
b'name=test-vm2 name=test-vm3 label=red')
self.assertNotIn('test-vm2', self.app.domains)
self.assertNotIn('test-vm3', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_340_vm_create_in_pool_app(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool=test')
self.assertIn('test-vm2', self.app.domains)
vm = self.app.domains['test-vm2']
self.assertEqual(vm.label, self.app.get_label('red'))
self.assertEqual(vm.template, self.app.domains['test-template'])
# setting pool= affects only volumes actually created for this VM,
# not those inherited from the template
self.assertEqual(vm.volume_config['root']['pool'],
self.template.volumes['root'].pool)
self.assertEqual(vm.volume_config['private']['pool'], 'test')
self.assertEqual(vm.volume_config['volatile']['pool'], 'test')
self.assertEqual(vm.volume_config['kernel']['pool'], 'linux-kernel')
self.assertEqual(storage_mock.mock_calls,
[unittest.mock.call(self.app.domains['test-vm2']).create()])
self.assertTrue(os.path.exists(os.path.join(
self.test_base_dir, 'appvms', 'test-vm2')))
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_341_vm_create_in_pool_private(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool:private=test')
self.assertIn('test-vm2', self.app.domains)
vm = self.app.domains['test-vm2']
self.assertEqual(vm.label, self.app.get_label('red'))
self.assertEqual(vm.template, self.app.domains['test-template'])
self.assertEqual(vm.volume_config['root']['pool'],
self.template.volumes['root'].pool)
self.assertEqual(vm.volume_config['private']['pool'], 'test')
self.assertEqual(vm.volume_config['volatile']['pool'],
self.app.default_pool_volatile)
self.assertEqual(vm.volume_config['kernel']['pool'], 'linux-kernel')
self.assertEqual(storage_mock.mock_calls,
[unittest.mock.call(self.app.domains['test-vm2']).create()])
self.assertTrue(os.path.exists(os.path.join(
self.test_base_dir, 'appvms', 'test-vm2')))
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_342_vm_create_in_pool_invalid_pool(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool=no-such-pool')
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_343_vm_create_in_pool_invalid_pool2(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool:private=no-such-pool')
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_344_vm_create_in_pool_invalid_volume(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool:invalid=test')
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_345_vm_create_in_pool_app_root(self, storage_mock):
# setting a custom pool for the 'root' volume of an AppVM should not be
# allowed - this volume belongs to the template
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool:root=test')
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_346_vm_create_in_pool_duplicate_pool(self, storage_mock):
# specifying pool= and pool:root= together sets the pool for the 'root'
# volume twice and should be rejected
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.vm.CreateInPool.AppVM',
b'dom0', b'test-template', b'name=test-vm2 label=red '
b'pool=test pool:root=test')
self.assertNotIn('test-vm2', self.app.domains)
self.assertFalse(self.app.save.called)
def test_400_property_list(self):
# actual function tested for admin.vm.property.* already
# this test is mostly redundant, but at least checks that the appropriate
# admin-permission event is fired
value = self.call_mgmt_func(b'admin.property.List', b'dom0')
properties = self.app.property_list()
self.assertEqual(value,
''.join('{}\n'.format(prop.__name__) for prop in properties))
def test_410_property_get_str(self):
# actual function tested for admin.vm.property.* already
value = self.call_mgmt_func(b'admin.property.Get', b'dom0',
b'default_kernel')
self.assertEqual(value, 'default=False type=str 1.0')
def test_420_property_set_str(self):
# actual function tested for admin.vm.property.* already
with unittest.mock.patch('vanir.property.__set__') as mock:
value = self.call_mgmt_func(b'admin.property.Set', b'dom0',
b'default_kernel', b'1.0')
self.assertIsNone(value)
mock.assert_called_once_with(self.app, '1.0')
self.app.save.assert_called_once_with()
def test_440_property_help(self):
# actual function tested for admin.vm.property.* already
value = self.call_mgmt_func(b'admin.property.Help', b'dom0',
b'clockvm')
self.assertEqual(value,
'Which VM to use as NTP proxy for updating AdminVM')
self.assertFalse(self.app.save.called)
def test_450_property_reset(self):
# actual function tested for admin.vm.property.* already
with unittest.mock.patch('vanir.property.__delete__') as mock:
value = self.call_mgmt_func(b'admin.property.Reset', b'dom0',
b'clockvm')
mock.assert_called_with(self.app)
self.assertIsNone(value)
self.app.save.assert_called_once_with()
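# Fake 'device-list:testclass' handler: yields two DeviceInfo objects for
# self.vm so the admin.vm.device.testclass.* calls have devices to report.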
def device_list_testclass(self, vm, event):
if vm is not self.vm:
return
dev = vanir.devices.DeviceInfo(self.vm, '1234')
dev.description = 'Some device'
dev.extra_prop = 'xx'
yield dev
dev = vanir.devices.DeviceInfo(self.vm, '4321')
dev.description = 'Some other device'
yield dev
def test_460_vm_device_available(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
value = self.call_mgmt_func(b'admin.vm.device.testclass.Available',
b'test-vm1')
self.assertEqual(value,
'1234 extra_prop=xx description=Some '
'device\n'
'4321 description=Some other device\n')
self.assertFalse(self.app.save.called)
def test_461_vm_device_available_specific(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
value = self.call_mgmt_func(b'admin.vm.device.testclass.Available',
b'test-vm1', b'4321')
self.assertEqual(value,
'4321 description=Some other device\n')
self.assertFalse(self.app.save.called)
def test_462_vm_device_available_invalid(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
value = self.call_mgmt_func(b'admin.vm.device.testclass.Available',
b'test-vm1', b'no-such-device')
self.assertEqual(value, '')
self.assertFalse(self.app.save.called)
def test_470_vm_device_list_persistent(self):
assignment = vanir.devices.DeviceAssignment(self.vm, '1234',
persistent=True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
value = self.call_mgmt_func(b'admin.vm.device.testclass.List',
b'test-vm1')
self.assertEqual(value,
'test-vm1+1234 persistent=yes\n')
self.assertFalse(self.app.save.called)
def test_471_vm_device_list_persistent_options(self):
assignment = vanir.devices.DeviceAssignment(self.vm, '1234',
persistent=True, options={'opt1': 'value'})
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
assignment = vanir.devices.DeviceAssignment(self.vm, '4321',
persistent=True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
value = self.call_mgmt_func(b'admin.vm.device.testclass.List',
b'test-vm1')
self.assertEqual(value,
'test-vm1+1234 opt1=value persistent=yes\n'
'test-vm1+4321 persistent=yes\n')
self.assertFalse(self.app.save.called)
def device_list_attached_testclass(self, vm, event, **kwargs):
if vm is not self.vm:
return
dev = vanir.devices.DeviceInfo(self.vm, '1234')
yield (dev, {'attach_opt': 'value'})
def test_472_vm_device_list_temporary(self):
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
value = self.call_mgmt_func(b'admin.vm.device.testclass.List',
b'test-vm1')
self.assertEqual(value,
'test-vm1+1234 attach_opt=value persistent=no\n')
self.assertFalse(self.app.save.called)
def test_473_vm_device_list_mixed(self):
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
assignment = vanir.devices.DeviceAssignment(self.vm, '4321',
persistent=True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
value = self.call_mgmt_func(b'admin.vm.device.testclass.List',
b'test-vm1')
self.assertEqual(value,
'test-vm1+1234 attach_opt=value persistent=no\n'
'test-vm1+4321 persistent=yes\n')
self.assertFalse(self.app.save.called)
def test_474_vm_device_list_specific(self):
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
assignment = vanir.devices.DeviceAssignment(self.vm, '4321',
persistent=True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
value = self.call_mgmt_func(b'admin.vm.device.testclass.List',
b'test-vm1', b'test-vm1+1234')
self.assertEqual(value,
'test-vm1+1234 attach_opt=value persistent=no\n')
self.assertFalse(self.app.save.called)
def test_480_vm_device_attach(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
mock_attach.return_value = None
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234')
self.assertIsNone(value)
mock_attach.assert_called_once_with(self.vm, 'device-attach:testclass',
device=self.vm.devices['testclass']['1234'],
options={})
self.assertEqual(len(self.vm.devices['testclass'].persistent()), 0)
self.app.save.assert_called_once_with()
def test_481_vm_device_attach(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
mock_attach.return_value = None
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234', b'persistent=no')
self.assertIsNone(value)
mock_attach.assert_called_once_with(self.vm, 'device-attach:testclass',
device=self.vm.devices['testclass']['1234'],
options={})
self.assertEqual(len(self.vm.devices['testclass'].persistent()), 0)
self.app.save.assert_called_once_with()
def test_482_vm_device_attach_not_running(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
with self.assertRaises(vanir.exc.VanirVMNotRunningError):
self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234')
self.assertFalse(mock_attach.called)
self.assertEqual(len(self.vm.devices['testclass'].persistent()), 0)
self.assertFalse(self.app.save.called)
def test_483_vm_device_attach_persistent(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
mock_attach.return_value = None
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234', b'persistent=yes')
self.assertIsNone(value)
dev = self.vm.devices['testclass']['1234']
mock_attach.assert_called_once_with(self.vm, 'device-attach:testclass',
device=dev,
options={})
self.assertIn(dev, self.vm.devices['testclass'].persistent())
self.app.save.assert_called_once_with()
def test_484_vm_device_attach_persistent_not_running(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
mock_attach.return_value = None
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
value = self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234', b'persistent=yes')
self.assertIsNone(value)
dev = self.vm.devices['testclass']['1234']
mock_attach.assert_called_once_with(self.vm, 'device-attach:testclass',
device=dev,
options={})
self.assertIn(dev, self.vm.devices['testclass'].persistent())
self.app.save.assert_called_once_with()
def test_485_vm_device_attach_options(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
mock_attach = unittest.mock.Mock()
mock_attach.return_value = None
del mock_attach._is_coroutine
self.vm.add_handler('device-attach:testclass', mock_attach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(b'admin.vm.device.testclass.Attach',
b'test-vm1', b'test-vm1+1234', b'option1=value2')
self.assertIsNone(value)
dev = self.vm.devices['testclass']['1234']
mock_attach.assert_called_once_with(self.vm, 'device-attach:testclass',
device=dev,
options={'option1': 'value2'})
self.app.save.assert_called_once_with()
def test_490_vm_device_detach(self):
self.vm.add_handler('device-list:testclass', self.device_list_testclass)
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
mock_detach = unittest.mock.Mock()
mock_detach.return_value = None
del mock_detach._is_coroutine
self.vm.add_handler('device-detach:testclass', mock_detach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(b'admin.vm.device.testclass.Detach',
b'test-vm1', b'test-vm1+1234')
self.assertIsNone(value)
mock_detach.assert_called_once_with(self.vm, 'device-detach:testclass',
device=self.vm.devices['testclass']['1234'])
self.app.save.assert_called_once_with()
def test_491_vm_device_detach_not_attached(self):
mock_detach = unittest.mock.Mock()
mock_detach.return_value = None
del mock_detach._is_coroutine
self.vm.add_handler('device-detach:testclass', mock_detach)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
with self.assertRaises(vanir.devices.DeviceNotAttached):
self.call_mgmt_func(b'admin.vm.device.testclass.Detach',
b'test-vm1', b'test-vm1+1234')
self.assertFalse(mock_detach.called)
self.assertFalse(self.app.save.called)
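# admin.vm.Remove: removal deletes the VM's on-disk directory (shutil.rmtree)
# and its storage, and must be refused while the VM is running.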
@unittest.mock.patch('vanir.storage.Storage.remove')
@unittest.mock.patch('shutil.rmtree')
def test_500_vm_remove(self, mock_rmtree, mock_remove):
mock_remove.side_effect = self.dummy_coro
value = self.call_mgmt_func(b'admin.vm.Remove', b'test-vm1')
self.assertIsNone(value)
mock_rmtree.assert_called_once_with(
'/tmp/vanir-test-dir/appvms/test-vm1')
mock_remove.assert_called_once_with()
self.app.save.assert_called_once_with()
@unittest.mock.patch('vanir.storage.Storage.remove')
@unittest.mock.patch('shutil.rmtree')
def test_501_vm_remove_running(self, mock_rmtree, mock_remove):
mock_remove.side_effect = self.dummy_coro
with unittest.mock.patch.object(
self.vm, 'get_power_state', lambda: 'Running'):
with self.assertRaises(vanir.exc.VanirVMNotHaltedError):
self.call_mgmt_func(b'admin.vm.Remove', b'test-vm1')
self.assertFalse(mock_rmtree.called)
self.assertFalse(mock_remove.called)
self.assertFalse(self.app.save.called)
def test_510_vm_volume_import(self):
value = self.call_mgmt_func(b'admin.vm.volume.Import', b'test-vm1',
b'private')
self.assertEqual(value, '{} {}'.format(
2*2**30, '/tmp/vanir-test-dir/appvms/test-vm1/private-import.img'))
self.assertFalse(self.app.save.called)
def test_511_vm_volume_import_running(self):
with unittest.mock.patch.object(
self.vm, 'get_power_state', lambda: 'Running'):
with self.assertRaises(vanir.exc.VanirVMNotHaltedError):
self.call_mgmt_func(b'admin.vm.volume.Import', b'test-vm1',
b'private')
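# Fixture for the volume-clone tests: a MagicMock 'test' pool whose
# get_volume resolves vids back to the real volume objects of
# test-vm1/test-vm2, so CloneFrom/CloneTo can be exercised end to end.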
def setup_for_clone(self):
self.pool = unittest.mock.MagicMock()
self.app.pools['test'] = self.pool
self.vm2 = self.app.add_new_vm('AppVM', label='red',
name='test-vm2',
template='test-template', kernel='')
self.pool.configure_mock(**{
'volumes': vanir.storage.VolumesCollection(self.pool),
'init_volume.return_value.pool': self.pool,
'__str__.return_value': 'test',
'get_volume.side_effect': (lambda vid:
self.vm.volumes['private']
if vid is self.vm.volumes['private'].vid
else self.vm2.volumes['private']
),
})
self.loop.run_until_complete(
self.vm.create_on_disk(pool='test'))
self.loop.run_until_complete(
self.vm2.create_on_disk(pool='test'))
# the call replaces self.vm.volumes[...] with the result of the import
# operation - make sure it stays the same object
self.vm.volumes['private'].import_volume.return_value = \
self.vm.volumes['private']
self.vm2.volumes['private'].import_volume.return_value = \
self.vm2.volumes['private']
self.addCleanup(self.cleanup_for_clone)
def cleanup_for_clone(self):
del self.vm2
del self.pool
def test_520_vm_volume_clone(self):
self.setup_for_clone()
token = self.call_mgmt_func(b'admin.vm.volume.CloneFrom',
b'test-vm1', b'private', b'')
# CloneFrom returns an opaque 32-character token
self.assertEqual(len(token), 32)
self.assertFalse(self.app.save.called)
value = self.call_mgmt_func(b'admin.vm.volume.CloneTo',
b'test-vm2', b'private', token.encode())
self.assertIsNone(value)
self.vm2.volumes['private'].import_volume.assert_called_once_with(
self.vm.volumes['private']
)
self.app.save.assert_called_once_with()
def test_521_vm_volume_clone_invalid_volume(self):
self.setup_for_clone()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.CloneFrom',
b'test-vm1', b'private123', b'')
self.assertNotIn('init_volume().import_volume',
map(operator.itemgetter(0), self.pool.mock_calls))
self.assertFalse(self.app.save.called)
def test_522_vm_volume_clone_invalid_volume2(self):
self.setup_for_clone()
token = self.call_mgmt_func(b'admin.vm.volume.CloneFrom',
b'test-vm1', b'private', b'')
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.CloneTo',
b'test-vm1', b'private123', token.encode())
self.assertNotIn('init_volume().import_volume',
map(operator.itemgetter(0), self.pool.mock_calls))
self.assertFalse(self.app.save.called)
def test_523_vm_volume_clone_removed_volume(self):
self.setup_for_clone()
token = self.call_mgmt_func(b'admin.vm.volume.CloneFrom',
b'test-vm1', b'private', b'')
def get_volume(vid):
if vid == self.vm.volumes['private']:
raise KeyError(vid)
else:
return unittest.mock.DEFAULT
self.pool.get_volume.side_effect = get_volume
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.CloneTo',
b'test-vm1', b'private', token.encode())
self.assertNotIn('init_volume().import_volume',
map(operator.itemgetter(0), self.pool.mock_calls))
self.assertFalse(self.app.save.called)
def test_524_vm_volume_clone_invalid_token(self):
self.setup_for_clone()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.CloneTo',
b'test-vm1', b'private', b'no-such-token')
self.assertNotIn('init_volume().import_volume',
map(operator.itemgetter(0), self.pool.mock_calls))
self.assertFalse(self.app.save.called)
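# admin.vm.tag.*: tags are a plain set of strings on the VM; Get reports
# presence as '1'/'0' instead of raising for an absent tag.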
def test_530_tag_list(self):
self.vm.tags.add('tag1')
self.vm.tags.add('tag2')
value = self.call_mgmt_func(b'admin.vm.tag.List', b'test-vm1')
self.assertEqual(value, 'tag1\ntag2\n')
self.assertFalse(self.app.save.called)
def test_540_tag_get(self):
self.vm.tags.add('tag1')
value = self.call_mgmt_func(b'admin.vm.tag.Get', b'test-vm1',
b'tag1')
self.assertEqual(value, '1')
self.assertFalse(self.app.save.called)
def test_541_tag_get_absent(self):
value = self.call_mgmt_func(b'admin.vm.tag.Get', b'test-vm1', b'tag1')
self.assertEqual(value, '0')
self.assertFalse(self.app.save.called)
def test_550_tag_remove(self):
self.vm.tags.add('tag1')
value = self.call_mgmt_func(b'admin.vm.tag.Remove', b'test-vm1',
b'tag1')
self.assertIsNone(value)
self.assertNotIn('tag1', self.vm.tags)
self.assertTrue(self.app.save.called)
def test_551_tag_remove_absent(self):
with self.assertRaises(vanir.exc.VanirTagNotFoundError):
self.call_mgmt_func(b'admin.vm.tag.Remove',
b'test-vm1', b'tag1')
self.assertFalse(self.app.save.called)
def test_560_tag_set(self):
value = self.call_mgmt_func(b'admin.vm.tag.Set',
b'test-vm1', b'tag1')
self.assertIsNone(value)
self.assertIn('tag1', self.vm.tags)
self.assertTrue(self.app.save.called)
def test_561_tag_set_invalid(self):
with self.assertRaises(ValueError):
self.call_mgmt_func(b'admin.vm.tag.Set',
b'test-vm1', b'+.some-tag')
self.assertNotIn('+.some-tag', self.vm.tags)
self.assertFalse(self.app.save.called)
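# admin.vm.firewall.*: rules are serialized one per line as 'key=value'
# tokens; Get drops expired rules and Set must reject malformed input
# without touching the stored rules.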
def test_570_firewall_get(self):
self.vm.firewall.save = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.vm.firewall.Get',
b'test-vm1', b'')
self.assertEqual(value, 'action=accept\n')
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_571_firewall_get_non_default(self):
self.vm.firewall.save = unittest.mock.Mock()
self.vm.firewall.rules = [
vanir.firewall.Rule(action='accept', proto='tcp',
dstports='1-1024'),
vanir.firewall.Rule(action='drop', proto='icmp',
comment='No ICMP'),
# the expired rule below must not appear in Get output
vanir.firewall.Rule(action='drop', proto='udp',
expire='1499450306'),
vanir.firewall.Rule(action='drop', proto='udp',
expire='2099450306'),
vanir.firewall.Rule(action='accept'),
]
value = self.call_mgmt_func(b'admin.vm.firewall.Get',
b'test-vm1', b'')
self.assertEqual(value,
'action=accept proto=tcp dstports=1-1024\n'
'action=drop proto=icmp comment=No ICMP\n'
'action=drop expire=2099450306 proto=udp\n'
'action=accept\n')
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_580_firewall_set_simple(self):
self.vm.firewall.save = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', b'action=accept\n')
self.assertEqual(self.vm.firewall.rules,
['action=accept'])
self.assertTrue(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_581_firewall_set_multi(self):
self.vm.firewall.save = unittest.mock.Mock()
rules = [
vanir.firewall.Rule(action='accept', proto='tcp',
dstports='1-1024'),
vanir.firewall.Rule(action='drop', proto='icmp',
comment='No ICMP'),
vanir.firewall.Rule(action='drop', proto='udp',
expire='1499450306'),
vanir.firewall.Rule(action='accept'),
]
rules_txt = (
'action=accept proto=tcp dstports=1-1024\n'
'action=drop proto=icmp comment=No ICMP\n'
'action=drop expire=1499450306 proto=udp\n'
'action=accept\n')
value = self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', rules_txt.encode())
self.assertEqual(self.vm.firewall.rules, rules)
self.assertTrue(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_582_firewall_set_invalid(self):
self.vm.firewall.save = unittest.mock.Mock()
rules_txt = (
'action=accept protoxyz=tcp dst4=127.0.0.1\n'
'action=drop\n')
with self.assertRaises(ValueError):
self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', rules_txt.encode())
self.assertEqual(self.vm.firewall.rules,
[vanir.firewall.Rule(action='accept')])
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_583_firewall_set_invalid(self):
self.vm.firewall.save = unittest.mock.Mock()
rules_txt = (
'proto=tcp dstports=1-1024\n'
'action=drop\n')
with self.assertRaises(ValueError):
self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', rules_txt.encode())
self.assertEqual(self.vm.firewall.rules,
[vanir.firewall.Rule(action='accept')])
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_584_firewall_set_invalid(self):
self.vm.firewall.save = unittest.mock.Mock()
rules_txt = (
'action=accept proto=tcp dstports=1-1024 '
'action=drop\n')
with self.assertRaises(ValueError):
self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', rules_txt.encode())
self.assertEqual(self.vm.firewall.rules,
[vanir.firewall.Rule(action='accept')])
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_585_firewall_set_invalid(self):
self.vm.firewall.save = unittest.mock.Mock()
rules_txt = (
'action=accept dstports=1-1024 comment=ążźł\n'
'action=drop\n')
with self.assertRaises(UnicodeDecodeError):
self.call_mgmt_func(b'admin.vm.firewall.Set',
b'test-vm1', b'', rules_txt.encode())
self.assertEqual(self.vm.firewall.rules,
[vanir.firewall.Rule(action='accept')])
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
def test_590_firewall_reload(self):
self.vm.firewall.save = unittest.mock.Mock()
self.app.domains['test-vm1'].fire_event = self.emitter.fire_event
value = self.call_mgmt_func(b'admin.vm.firewall.Reload',
b'test-vm1', b'')
self.assertIsNone(value)
self.assertEventFired(self.emitter, 'firewall-changed')
self.assertFalse(self.vm.firewall.save.called)
self.assertFalse(self.app.save.called)
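# admin.backup.*: backup profiles are read from
# vanir.config.backup_profile_dir (patched here to a temporary directory
# holding testprofile.conf).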
def test_600_backup_info(self):
backup_profile = (
'include:\n'
' - test-vm1\n'
'destination_vm: test-vm1\n'
'destination_path: /var/tmp\n'
'passphrase_text: test\n'
)
expected_info = (
'------------------+--------------+--------------+\n'
' VM | type | size |\n'
'------------------+--------------+--------------+\n'
' test-vm1 | VM | 0 |\n'
'------------------+--------------+--------------+\n'
' Total size: | 0 |\n'
'------------------+--------------+--------------+\n'
'VMs not selected for backup:\n'
' - dom0\n'
' - test-template\n'
)
with tempfile.TemporaryDirectory() as profile_dir:
with open(os.path.join(profile_dir, 'testprofile.conf'), 'w') as \
profile_file:
profile_file.write(backup_profile)
with unittest.mock.patch('vanir.config.backup_profile_dir',
profile_dir):
result = self.call_mgmt_func(b'admin.backup.Info', b'dom0',
b'testprofile')
self.assertEqual(result, expected_info)
def test_601_backup_info_profile_missing_destination_path(self):
backup_profile = (
'include:\n'
' - test-vm1\n'
'destination_vm: test-vm1\n'
'passphrase_text: test\n'
)
with tempfile.TemporaryDirectory() as profile_dir:
with open(os.path.join(profile_dir, 'testprofile.conf'), 'w') as \
profile_file:
profile_file.write(backup_profile)
with unittest.mock.patch('vanir.config.backup_profile_dir',
profile_dir):
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.backup.Info', b'dom0',
b'testprofile')
def test_602_backup_info_profile_missing_destination_vm(self):
backup_profile = (
'include:\n'
' - test-vm1\n'
'destination_path: /home/user\n'
'passphrase_text: test\n'
)
with tempfile.TemporaryDirectory() as profile_dir:
with open(os.path.join(profile_dir, 'testprofile.conf'), 'w') as \
profile_file:
profile_file.write(backup_profile)
with unittest.mock.patch('vanir.config.backup_profile_dir',
profile_dir):
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.backup.Info', b'dom0',
b'testprofile')
def test_610_backup_cancel_not_running(self):
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.backup.Cancel', b'dom0',
b'testprofile')
@unittest.mock.patch('vanir.backup.Backup')
def test_620_backup_execute(self, mock_backup):
backup_profile = (
'include:\n'
' - test-vm1\n'
'destination_vm: test-vm1\n'
'destination_path: /home/user\n'
'passphrase_text: test\n'
)
mock_backup.return_value.backup_do.side_effect = self.dummy_coro
with tempfile.TemporaryDirectory() as profile_dir:
with open(os.path.join(profile_dir, 'testprofile.conf'), 'w') as \
profile_file:
profile_file.write(backup_profile)
with unittest.mock.patch('vanir.config.backup_profile_dir',
profile_dir):
result = self.call_mgmt_func(b'admin.backup.Execute', b'dom0',
b'testprofile')
self.assertIsNone(result)
mock_backup.assert_called_once_with(
self.app,
{self.vm},
set(),
target_vm=self.vm,
target_dir='/home/user',
compressed=True,
passphrase='test')
mock_backup.return_value.backup_do.assert_called_once_with()
@unittest.mock.patch('vanir.backup.Backup')
def test_621_backup_execute_passphrase_service(self, mock_backup):
backup_profile = (
'include:\n'
' - test-vm1\n'
'destination_vm: test-vm1\n'
'destination_path: /home/user\n'
'passphrase_vm: test-vm1\n'
)
@asyncio.coroutine
def service_passphrase(*args, **kwargs):
return (b'pass-from-vm', None)
mock_backup.return_value.backup_do.side_effect = self.dummy_coro
self.vm.run_service_for_stdio = unittest.mock.Mock(
side_effect=service_passphrase)
with tempfile.TemporaryDirectory() as profile_dir:
with open(os.path.join(profile_dir, 'testprofile.conf'), 'w') as \
profile_file:
profile_file.write(backup_profile)
with unittest.mock.patch('vanir.config.backup_profile_dir',
profile_dir):
result = self.call_mgmt_func(b'admin.backup.Execute', b'dom0',
b'testprofile')
self.assertIsNone(result)
mock_backup.assert_called_once_with(
self.app,
{self.vm},
set(),
target_vm=self.vm,
target_dir='/home/user',
compressed=True,
passphrase=b'pass-from-vm')
mock_backup.return_value.backup_do.assert_called_once_with()
self.vm.run_service_for_stdio.assert_called_with(
'vanir.BackupPassphrase+testprofile')
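# admin.vm.Stats is another streaming call: host.get_vm_stats is polled
# every app.stats_interval seconds and each result is forwarded as per-VM
# 'vm-stats' events until the call is cancelled.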
def test_630_vm_stats(self):
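        # get_vm_stats is mocked to yield (serial, stats) pairs; the API is
        # expected to stream one 'vm-stats' event per domain and per polling
        # round. cpu_time appears to be reported in nanoseconds, hence the
        # // 1000000 conversion to milliseconds in the expected calls below.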
send_event = unittest.mock.Mock(spec=[])
stats1 = {
0: {
'cpu_time': 243951379111104 // 8,
'cpu_usage': 0,
'memory_kb': 3733212,
},
1: {
'cpu_time': 2849496569205,
'cpu_usage': 0,
'memory_kb': 303916,
},
}
stats2 = copy.deepcopy(stats1)
stats2[0]['cpu_time'] += 100000000
stats2[0]['cpu_usage'] = 10
stats2[1]['cpu_usage'] = 5
self.app.host.get_vm_stats = unittest.mock.Mock()
self.app.host.get_vm_stats.side_effect = [
(0, stats1), (1, stats2),
]
self.app.stats_interval = 1
mgmt_obj = vanir.api.admin.VanirAdminAPI(
self.app, b'dom0', b'admin.vm.Stats',
b'dom0', b'', send_event=send_event)
def cancel_call():
mgmt_obj.cancel()
class MockVM(object):
def __init__(self, name):
self._name = name
def name(self):
return self._name
loop = asyncio.get_event_loop()
self.app.vmm.libvirt_conn.lookupByID.side_effect = lambda xid: {
0: MockVM('Domain-0'),
1: MockVM('test-template'),
2: MockVM('test-vm1')}[xid]
execute_task = asyncio.ensure_future(
mgmt_obj.execute(untrusted_payload=b''))
loop.call_later(1.1, cancel_call)
loop.run_until_complete(execute_task)
self.assertIsNone(execute_task.result())
self.assertEventFired(self.emitter,
'admin-permission:' + 'admin.vm.Stats')
self.assertEqual(self.app.host.get_vm_stats.mock_calls, [
unittest.mock.call(None, None, only_vm=None),
unittest.mock.call(0, stats1, only_vm=None),
])
self.assertEqual(send_event.mock_calls, [
unittest.mock.call(self.app, 'connection-established'),
unittest.mock.call('dom0', 'vm-stats',
cpu_time=stats1[0]['cpu_time'] // 1000000,
cpu_usage=stats1[0]['cpu_usage'],
memory_kb=stats1[0]['memory_kb']),
unittest.mock.call('test-template', 'vm-stats',
cpu_time=stats1[1]['cpu_time'] // 1000000,
cpu_usage=stats1[1]['cpu_usage'],
memory_kb=stats1[1]['memory_kb']),
unittest.mock.call('dom0', 'vm-stats',
cpu_time=stats2[0]['cpu_time'] // 1000000,
cpu_usage=stats2[0]['cpu_usage'],
memory_kb=stats2[0]['memory_kb']),
unittest.mock.call('test-template', 'vm-stats',
cpu_time=stats2[1]['cpu_time'] // 1000000,
cpu_usage=stats2[1]['cpu_usage'],
memory_kb=stats2[1]['memory_kb']),
])
def test_631_vm_stats_single_vm(self):
send_event = unittest.mock.Mock(spec=[])
stats1 = {
2: {
'cpu_time': 2849496569205,
'cpu_usage': 0,
'memory_kb': 303916,
},
}
stats2 = copy.deepcopy(stats1)
stats2[2]['cpu_usage'] = 5
self.app.host.get_vm_stats = unittest.mock.Mock()
self.app.host.get_vm_stats.side_effect = [
(0, stats1), (1, stats2),
]
self.app.stats_interval = 1
mgmt_obj = vanir.api.admin.VanirAdminAPI(
self.app, b'dom0', b'admin.vm.Stats',
b'test-vm1', b'', send_event=send_event)
def cancel_call():
mgmt_obj.cancel()
class MockVM(object):
def __init__(self, name):
self._name = name
def name(self):
return self._name
loop = asyncio.get_event_loop()
self.app.vmm.libvirt_conn.lookupByID.side_effect = lambda xid: {
0: MockVM('Domain-0'),
1: MockVM('test-template'),
2: MockVM('test-vm1')}[xid]
execute_task = asyncio.ensure_future(
mgmt_obj.execute(untrusted_payload=b''))
loop.call_later(1.1, cancel_call)
loop.run_until_complete(execute_task)
self.assertIsNone(execute_task.result())
self.assertEventFired(self.emitter,
'admin-permission:' + 'admin.vm.Stats')
self.assertEqual(self.app.host.get_vm_stats.mock_calls, [
unittest.mock.call(None, None, only_vm=self.vm),
unittest.mock.call(0, stats1, only_vm=self.vm),
])
self.assertEqual(send_event.mock_calls, [
unittest.mock.call(self.app, 'connection-established'),
unittest.mock.call('test-vm1', 'vm-stats',
cpu_time=stats1[2]['cpu_time'] // 1000000,
cpu_usage=stats1[2]['cpu_usage'],
memory_kb=stats1[2]['memory_kb']),
unittest.mock.call('test-vm1', 'vm-stats',
cpu_time=stats2[2]['cpu_time'] // 1000000,
cpu_usage=stats2[2]['cpu_usage'],
memory_kb=stats2[2]['memory_kb']),
])
@unittest.mock.patch('vanir.storage.Storage.create')
def test_640_vm_create_disposable(self, mock_storage):
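        # CreateDisposable should only succeed because the source VM opted
        # in via template_for_dispvms; the generated name is expected to
        # carry the 'disp' prefix used for disposable VMs.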
mock_storage.side_effect = self.dummy_coro
self.vm.template_for_dispvms = True
retval = self.call_mgmt_func(b'admin.vm.CreateDisposable',
b'test-vm1')
self.assertTrue(retval.startswith('disp'))
self.assertIn(retval, self.app.domains)
dispvm = self.app.domains[retval]
self.assertEqual(dispvm.template, self.vm)
mock_storage.assert_called_once_with()
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_641_vm_create_disposable_default(self, mock_storage):
mock_storage.side_effect = self.dummy_coro
self.vm.template_for_dispvms = True
self.app.default_dispvm = self.vm
retval = self.call_mgmt_func(b'admin.vm.CreateDisposable',
b'dom0')
self.assertTrue(retval.startswith('disp'))
mock_storage.assert_called_once_with()
self.assertTrue(self.app.save.called)
@unittest.mock.patch('vanir.storage.Storage.create')
def test_642_vm_create_disposable_not_allowed(self, storage_mock):
storage_mock.side_effect = self.dummy_coro
with self.assertRaises(vanir.exc.VanirException):
self.call_mgmt_func(b'admin.vm.CreateDisposable',
b'test-vm1')
self.assertFalse(self.app.save.called)
def test_650_vm_device_set_persistent_true(self):
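        # 'persistent' marks a device for automatic attachment on every VM
        # start, as opposed to a one-shot attachment; flipping it while the
        # VM runs (is_halted patched to False) must still be persisted via
        # app.save().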
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'True')
self.assertIsNone(value)
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertIn(dev, self.vm.devices['testclass'].persistent())
self.app.save.assert_called_once_with()
def test_651_vm_device_set_persistent_false_unchanged(self):
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'False')
self.assertIsNone(value)
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertNotIn(dev, self.vm.devices['testclass'].persistent())
self.app.save.assert_called_once_with()
def test_652_vm_device_set_persistent_false(self):
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
assignment = vanir.devices.DeviceAssignment(self.vm, '1234', {},
True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertIn(dev, self.vm.devices['testclass'].persistent())
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'False')
self.assertIsNone(value)
self.assertNotIn(dev, self.vm.devices['testclass'].persistent())
self.assertIn(dev, self.vm.devices['testclass'].attached())
self.app.save.assert_called_once_with()
def test_653_vm_device_set_persistent_true_unchanged(self):
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
assignment = vanir.devices.DeviceAssignment(self.vm, '1234', {},
True)
self.loop.run_until_complete(
self.vm.devices['testclass'].attach(assignment))
self.vm.add_handler('device-list-attached:testclass',
self.device_list_attached_testclass)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
value = self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'True')
self.assertIsNone(value)
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertIn(dev, self.vm.devices['testclass'].persistent())
self.assertIn(dev, self.vm.devices['testclass'].attached())
self.app.save.assert_called_once_with()
def test_654_vm_device_set_persistent_not_attached(self):
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'True')
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertNotIn(dev, self.vm.devices['testclass'].persistent())
self.assertFalse(self.app.save.called)
def test_655_vm_device_set_persistent_invalid_value(self):
self.vm.add_handler('device-list:testclass',
self.device_list_testclass)
with unittest.mock.patch.object(vanir.vm.vanirvm.VanirVM,
'is_halted', lambda _: False):
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(
b'admin.vm.device.testclass.Set.persistent',
b'test-vm1', b'test-vm1+1234', b'maybe')
dev = vanir.devices.DeviceInfo(self.vm, '1234')
self.assertNotIn(dev, self.vm.devices['testclass'].persistent())
self.assertFalse(self.app.save.called)
def test_660_pool_set_revisions_to_keep(self):
self.app.pools['test-pool'] = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.pool.Set.revisions_to_keep',
b'dom0', b'test-pool', b'2')
self.assertIsNone(value)
self.assertEqual(self.app.pools['test-pool'].mock_calls, [])
self.assertEqual(self.app.pools['test-pool'].revisions_to_keep, 2)
self.app.save.assert_called_once_with()
def test_661_pool_set_revisions_to_keep_negative(self):
self.app.pools['test-pool'] = unittest.mock.Mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.pool.Set.revisions_to_keep',
b'dom0', b'test-pool', b'-2')
self.assertEqual(self.app.pools['test-pool'].mock_calls, [])
self.assertFalse(self.app.save.called)
def test_662_pool_set_revisions_to_keep_not_a_number(self):
self.app.pools['test-pool'] = unittest.mock.Mock()
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.pool.Set.revisions_to_keep',
b'dom0', b'test-pool', b'abc')
self.assertEqual(self.app.pools['test-pool'].mock_calls, [])
self.assertFalse(self.app.save.called)
def test_670_vm_volume_set_revisions_to_keep(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.vm.volume.Set.revisions_to_keep',
b'test-vm1', b'private', b'2')
self.assertIsNone(value)
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
('__getitem__', ('private',), {})])
self.assertEqual(self.vm.volumes['private'].revisions_to_keep, 2)
self.app.save.assert_called_once_with()
def test_671_vm_volume_set_revisions_to_keep_negative(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(b'admin.vm.volume.Set.revisions_to_keep',
b'test-vm1', b'private', b'-2')
def test_672_vm_volume_set_revisions_to_keep_not_a_number(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(b'admin.vm.volume.Set.revisions_to_keep',
b'test-vm1', b'private', b'abc')
def test_680_vm_volume_set_rw(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
value = self.call_mgmt_func(b'admin.vm.volume.Set.rw',
b'test-vm1', b'private', b'True')
self.assertIsNone(value)
self.assertEqual(self.vm.volumes.mock_calls,
[unittest.mock.call.keys(),
('__getitem__', ('private',), {})])
self.assertEqual(self.vm.volumes['private'].rw, True)
self.app.save.assert_called_once_with()
def test_681_vm_volume_set_rw_invalid(self):
self.vm.volumes = unittest.mock.MagicMock()
volumes_conf = {
'keys.return_value': ['root', 'private', 'volatile', 'kernel'],
}
self.vm.volumes.configure_mock(**volumes_conf)
self.vm.storage = unittest.mock.Mock()
with self.assertRaises(vanir.api.ProtocolError):
            self.call_mgmt_func(b'admin.vm.volume.Set.rw',
b'test-vm1', b'private', b'abc')
self.assertFalse(self.app.save.called)
def test_990_vm_unexpected_payload(self):
methods_with_no_payload = [
b'admin.vm.List',
b'admin.vm.Remove',
b'admin.vm.property.List',
b'admin.vm.property.Get',
b'admin.vm.property.Help',
#b'admin.vm.property.HelpRst',
b'admin.vm.property.Reset',
b'admin.vm.feature.List',
b'admin.vm.feature.Get',
b'admin.vm.feature.CheckWithTemplate',
b'admin.vm.feature.Remove',
b'admin.vm.tag.List',
b'admin.vm.tag.Get',
b'admin.vm.tag.Remove',
b'admin.vm.tag.Set',
b'admin.vm.firewall.Get',
b'admin.vm.firewall.Reload',
b'admin.vm.device.pci.Detach',
b'admin.vm.device.pci.List',
b'admin.vm.device.pci.Available',
b'admin.vm.volume.ListSnapshots',
b'admin.vm.volume.List',
b'admin.vm.volume.Info',
b'admin.vm.Start',
b'admin.vm.Shutdown',
b'admin.vm.Pause',
b'admin.vm.Unpause',
b'admin.vm.Kill',
b'admin.Events',
b'admin.vm.feature.List',
b'admin.vm.feature.Get',
b'admin.vm.feature.Remove',
b'admin.vm.feature.CheckWithTemplate',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
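        # __lt__ is supplied because listing endpoints presumably sort the
        # domain collection; a bare MagicMock comparison would raise a
        # TypeError otherwise.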
self.app.domains._dict[self.vm.qid] = vm_mock
for method in methods_with_no_payload:
            # should reject the payload regardless of whether an argument is given
with self.subTest(method.decode('ascii')):
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(method, b'test-vm1', b'',
b'unexpected-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg'):
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(method, b'test-vm1', b'some-arg',
b'unexpected-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
def test_991_vm_unexpected_argument(self):
methods_with_no_argument = [
b'admin.vm.List',
b'admin.vm.Remove',
b'admin.vm.property.List',
b'admin.vm.feature.List',
b'admin.vm.tag.List',
b'admin.vm.firewall.Get',
b'admin.vm.firewall.Set',
b'admin.vm.firewall.Reload',
b'admin.vm.volume.List',
b'admin.vm.Start',
b'admin.vm.Shutdown',
b'admin.vm.Pause',
b'admin.vm.Unpause',
b'admin.vm.Kill',
b'admin.Events',
b'admin.vm.feature.List',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
self.app.domains._dict[self.vm.qid] = vm_mock
exceptions = (vanir.api.PermissionDenied, vanir.api.ProtocolError)
for method in methods_with_no_argument:
            # should reject the argument regardless of whether a payload is given
with self.subTest(method.decode('ascii')):
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(method, b'test-vm1', b'some-arg',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'test-vm1', b'unexpected-arg',
b'some-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
def test_992_dom0_unexpected_payload(self):
methods_with_no_payload = [
b'admin.vmclass.List',
b'admin.vm.List',
b'admin.label.List',
b'admin.label.Get',
b'admin.label.Remove',
b'admin.property.List',
b'admin.property.Get',
b'admin.property.Help',
#b'admin.property.HelpRst',
b'admin.property.Reset',
b'admin.pool.List',
b'admin.pool.ListDrivers',
b'admin.pool.Info',
b'admin.pool.Remove',
b'admin.backup.Execute',
b'admin.Events',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
self.app.domains._dict[self.vm.qid] = vm_mock
for method in methods_with_no_payload:
            # should reject the payload regardless of whether an argument is given
with self.subTest(method.decode('ascii')):
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(method, b'dom0', b'',
b'unexpected-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg'):
with self.assertRaises(vanir.api.ProtocolError):
self.call_mgmt_func(method, b'dom0', b'some-arg',
b'unexpected-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
def test_993_dom0_unexpected_argument(self):
methods_with_no_argument = [
b'admin.vmclass.List',
b'admin.vm.List',
b'admin.label.List',
b'admin.property.List',
b'admin.pool.List',
b'admin.pool.ListDrivers',
b'admin.Events',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
self.app.domains._dict[self.vm.qid] = vm_mock
exceptions = (vanir.api.PermissionDenied, vanir.api.ProtocolError)
for method in methods_with_no_argument:
            # should reject the argument regardless of whether a payload is given
with self.subTest(method.decode('ascii')):
with self.assertRaises(vanir.api.PermissionDenied):
self.call_mgmt_func(method, b'dom0', b'some-arg',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'dom0', b'unexpected-arg',
b'some-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
def test_994_dom0_only_calls(self):
# TODO set some better arguments, to make sure the call was rejected
# because of invalid destination, not invalid arguments
methods_for_dom0_only = [
b'admin.vmclass.List',
b'admin.vm.Create.AppVM',
b'admin.vm.CreateInPool.AppVM',
b'admin.label.List',
b'admin.label.Create',
b'admin.label.Get',
b'admin.label.Remove',
b'admin.property.List',
b'admin.property.Get',
b'admin.property.Set',
b'admin.property.Help',
#b'admin.property.HelpRst',
b'admin.property.Reset',
b'admin.pool.List',
b'admin.pool.ListDrivers',
b'admin.pool.Info',
b'admin.pool.Add',
b'admin.pool.Remove',
#b'admin.pool.volume.List',
#b'admin.pool.volume.Info',
#b'admin.pool.volume.ListSnapshots',
#b'admin.pool.volume.Snapshot',
#b'admin.pool.volume.Revert',
#b'admin.pool.volume.Resize',
b'admin.backup.Execute',
b'admin.backup.Info',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
self.app.domains._dict[self.vm.qid] = vm_mock
exceptions = (vanir.api.PermissionDenied, vanir.api.ProtocolError)
for method in methods_for_dom0_only:
            # should reject the call regardless of whether a payload is given
with self.subTest(method.decode('ascii')):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'test-vm1', b'',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'test-vm1', b'some-arg',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'test-vm1', b'',
b'payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'test-vm1', b'some-arg',
b'some-payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
@unittest.skip('undecided')
def test_995_vm_only_calls(self):
# XXX is it really a good idea to prevent those calls this early?
# TODO set some better arguments, to make sure the call was rejected
# because of invalid destination, not invalid arguments
methods_for_vm_only = [
b'admin.vm.Clone',
b'admin.vm.Remove',
b'admin.vm.property.List',
b'admin.vm.property.Get',
b'admin.vm.property.Set',
b'admin.vm.property.Help',
b'admin.vm.property.HelpRst',
b'admin.vm.property.Reset',
b'admin.vm.feature.List',
b'admin.vm.feature.Get',
b'admin.vm.feature.Set',
b'admin.vm.feature.CheckWithTemplate',
b'admin.vm.feature.Remove',
b'admin.vm.tag.List',
b'admin.vm.tag.Get',
b'admin.vm.tag.Remove',
b'admin.vm.tag.Set',
b'admin.vm.firewall.Get',
b'admin.vm.firewall.Set',
b'admin.vm.firewall.Reload',
b'admin.vm.device.pci.Attach',
b'admin.vm.device.pci.Detach',
b'admin.vm.device.pci.List',
b'admin.vm.device.pci.Available',
b'admin.vm.microphone.Attach',
b'admin.vm.microphone.Detach',
b'admin.vm.microphone.Status',
b'admin.vm.volume.ListSnapshots',
b'admin.vm.volume.List',
b'admin.vm.volume.Info',
b'admin.vm.volume.Revert',
b'admin.vm.volume.Resize',
b'admin.vm.Start',
b'admin.vm.Shutdown',
b'admin.vm.Pause',
b'admin.vm.Unpause',
b'admin.vm.Kill',
b'admin.vm.feature.List',
b'admin.vm.feature.Get',
b'admin.vm.feature.Set',
b'admin.vm.feature.Remove',
b'admin.vm.feature.CheckWithTemplate',
]
        # also make sure that no methods on the actual VM get called
vm_mock = unittest.mock.MagicMock()
vm_mock.name = self.vm.name
vm_mock.qid = self.vm.qid
vm_mock.__lt__ = (lambda x, y: x.qid < y.qid)
self.app.domains._dict[self.vm.qid] = vm_mock
exceptions = (vanir.api.PermissionDenied, vanir.api.ProtocolError)
for method in methods_for_vm_only:
            # should reject the call regardless of argument and payload
with self.subTest(method.decode('ascii')):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'dom0', b'',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'dom0', b'some-arg',
b'')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'dom0', b'',
b'payload')
self.assertFalse(vm_mock.called)
self.assertFalse(self.app.save.called)
with self.subTest(method.decode('ascii') + '+arg+payload'):
with self.assertRaises(exceptions):
self.call_mgmt_func(method, b'dom0', b'some-arg',
b'some-payload')
self.assertFalse(vm_mock.called)
                self.assertFalse(self.app.save.called)

# ===== populations/pop_C1_80plies.py | repo: noemiefedon/LAYLA | license: MIT =====
# -*- coding: utf-8 -*-
"""
Script building a population of about 200 symmetric and balanced laminates
of 80 plies satisfying the design and manufacturing guidelines C1
"""
__version__ = '1.0'
__author__ = 'Noemie Fedon'
import sys
import pandas as pd
import numpy as np
import numpy.matlib  # required so that np.matlib.repmat below is available
sys.path.append(r'C:\LAYLA')
from src.LAYLA_V02.constraints import Constraints
from src.LAYLA_V02.materials import Material
from src.CLA.lampam_functions import calc_lampam
from src.CLA.ABD import A_from_lampam, B_from_lampam, D_from_lampam
from src.LAYLA_V02.save_set_up import save_constraints_LAYLA
from src.LAYLA_V02.save_set_up import save_materials
from src.guidelines.ten_percent_rule import is_ten_percent_rule
from src.guidelines.internal_diso_contig import internal_diso_contig
from src.divers.excel import append_df_to_excel, autofit_column_widths, delete_file
n_plies_in_panels = 80
n_pop = 52
filename = 'pop_sym_C1_80plies.xlsx'
delete_file(filename)
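# Workflow: define 52 seed half-stacks by hand, mirror each one into a
# symmetric 80-ply laminate, check the C1 guidelines (10% rule,
# disorientation, contiguity, balance), then quadruple the population with
# 0<->90 and +45<->-45 angle substitutions before deduplicating and
# exporting everything to Excel.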
#==============================================================================
# Material properties
#==============================================================================
# Elastic modulus in the fibre direction (Pa)
E11 = 130e9
# Elastic modulus in the transverse direction (Pa)
E22 = 9e9
# Poisson's ratio relating transverse deformation and axial loading (-)
nu12 = 0.3
# In-plane shear modulus (Pa)
G12 = 4e9
mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12)
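# These elastic constants are of the order expected for a carbon/epoxy
# unidirectional ply; swap them for measured values when targeting a
# specific material system.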
#==============================================================================
# Design guidelines
#==============================================================================
### Set of design and manufacturing constraints:
constraints_set = 'C1'
# Set of admissible fibre orientations
set_of_angles = np.array([-45, 0, 45, 90], dtype=int)
#set_of_angles= np.array([-45, 0, 45, 90, +30, -30, +60, -60], dtype=int)
# symmetry
sym = True
# balance and in-plane orthotropy requirements
if constraints_set == 'C0':
bal = False
ipo = False
else:
bal = True
ipo = True
# out-of-plane orthotropy requirements
oopo = False
# damage tolerance
dam_tol = False
# 10% rule
if constraints_set == 'C0':
rule_10_percent = False
else:
rule_10_percent = True
percent_0 = 10 # percentage used in the 10% rule for 0 deg plies
percent_45 = 10 # percentage used in the 10% rule for +45 deg plies
percent_90 = 10 # percentage used in the 10% rule for 90 deg plies
percent_135 = 10 # percentage used in the 10% rule for -45 deg plies
# disorientation
if constraints_set == 'C0':
diso = False
else:
diso = True
# Upper bound of the variation of fibre orientation between two
# contiguous plies if the disorientation constraint is active
delta_angle = 45
# delta_angle must be at least 45 to ensure convergence when the 10% rule is active
# contiguity
if constraints_set == 'C0':
contig = False
else:
contig = True
n_contig = 5
# No more than constraints.n_contig plies with the same fibre orientation
# should be next to each other if the contiguity constraint is active. The
# value can only be 2, 3, 4 or 5, otherwise the test functions must be modified
constraints = Constraints(
sym=sym,
bal=bal,
ipo=ipo,
oopo=oopo,
dam_tol=dam_tol,
rule_10_percent=rule_10_percent,
percent_0=percent_0,
percent_45=percent_45,
percent_90=percent_90,
percent_135=percent_135,
diso=diso,
contig=contig,
n_contig=n_contig,
delta_angle=delta_angle,
set_of_angles=set_of_angles)
#print(constraints)
Pop = pd.DataFrame()
# Initialisation
print(' Creating Pop ... ')
ipop = 1
# Loop until Pop is complete
while ipop < n_pop + 1:
print('ipop', ipop)
if ipop == 1:
ss = np.hstack((np.array([0, 0, 0, 0, 0, 45, 0, 0, 0, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0, -45, -45, 90, 90, 90, 90, 45])))
if ipop == 2:
ss = np.hstack((np.array([90, 90, 90, 90, 45, 0, 0, 0, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0, -45, -45, 0, 0, 0, 0, 0, 45])))
if ipop == 3:
ss = np.hstack((np.array([0, 0, 0, 0, 0, 45, 0, 0, 0, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 90, 90, 90, 90, 45, 0, 0, 0, 0, -45, -45, 0, 0, 0, 0, 0, 45])))
if ipop == 4:
ss = np.hstack((np.array([0, 0, 0, 0, 0, 45, 90, 90, 90, 90, 90, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0, -45, -45, 0, 0, 0, 0, 45])))
if ipop == 5:
ss = np.hstack((np.array([90, 90, 90, 90, 90, 45, 0, 0, 0, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0, -45, -45, 90, 90, 90, 90, 45])))
if ipop == 6:
ss = np.hstack((np.array([0, 0, 0, 0, 0, 45, 90, 90, 90, 90, 90, -45, 90, 90, 90, 90, 90, 45, 0, 0, 0, 0,
-45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0, -45, -45, 0, 0, 0, 0, 45])))
if ipop == 7:
ss = np.hstack((np.array([0, 0, 0, 0, 45, 45, 45, 90, 90, 90, 90, 45, 90, 90, 90, 90, 90, -45, -45, -45, 0,
-45, -45, 0, 0, 0, 0, 45, 45, 0, 0, 0, 0, -45, -45, 0, 0, 0, 0, 45])))
if ipop == 8:
ss = np.matlib.repmat(np.array([90, 90, 45, 45, 0, 0, -45, -45]), 1, 5)
if ipop == 9:
ss = np.matlib.repmat(np.array([90, 90, 90, 45, 0, 0, 0, -45]), 1, 5)
if ipop == 10:
ss = np.matlib.repmat(np.array([-45, 90, 90, 90, 90, 90, 45, 0]), 1, 5)
if ipop == 11:
ss = np.matlib.repmat(np.array([45, 90, 90, 90, -45, -45, 0, 45]), 1, 5)
if ipop == 12:
ss = np.matlib.repmat(np.array([-45, -45, 90, 45, 45, 45, 0, -45]), 1, 5)
if ipop == 13:
ss = np.matlib.repmat(np.array([-45, 90, 90, 90, 45, 45, 0, -45]), 1, 5)
if ipop == 14:
ss = np.hstack((np.array([0, 0, 45, 90, 90, 45, 45, 90, 90, -45, 90, 90, 90, 90, 90, 45, 0, 0, -45, -45, 0,
-45, -45, 0, 0, 45, 45, 45, 45, 0, 0, 0, 0, -45, -45, -45, -45, 0, 0, 45])))
if ipop == 15:
ss = np.hstack((np.array([0, 0, -45, 90, 90, 45, 45, 0, 0, -45, 90, 90, 45, 0, 0, 45, 45, 0, -45, -45, -45, 0,
-45, -45, 0, 0, 45, 0, 0, 0, 45, 45, 45, 0, -45, 90, 90, 90, -45, 0])))
if ipop == 16:
ss = np.hstack((np.array([45, 45, 45, 0, 45, 45, 0, 45, 90, -45, -45, 90, 45, 90, 90, 90, 90, 90, 45, 45, 45, 0,
-45, -45, 0, -45, -45, 0, 45, 0, -45, -45, 0, -45, -45, 0, 0, 0, 0, -45])))
if ipop == 17:
ss = np.hstack((np.array([45, 45, 45, 90, 45, 45, 90, 45, 90, -45, -45, 90, 45, 90, 90, 90, 90, 90, 45, 45, 45, 0,
-45, -45, 0, -45, -45, 90, 45, 90, -45, -45, 90, -45, -45, 0, 0, 0, 0, -45])))
if ipop == 18:
ss = np.hstack((np.array([ -45, -45, 90, 45, 90, 90, 90, 90, 45, 45, 45, 90, 45, 45, 90, 45, 90, 90, 45, 45, 45, 0,
-45, -45, 0, -45, -45, 90, 45, 90, -45, -45, 90, -45, -45, 0, 0, 0, 0, -45])))
if ipop == 19:
ss = np.hstack((np.array([0, -45, 0, 45, 45, 45, 90, 90, -45, 90, -45, -45, 90, 90, 45, 0, 0, -45, -45, 0,
-45, -45, -45, 0, 45, 45, 45, 90, 45, 45, 45, 0, 45, 0, -45, -45, -45, 0, 0, 45])))
if ipop == 20:
ss = np.hstack((np.array([0, -45, 0, 0, 0, 45, 90, 90, -45, 90, 90, -45, 90, 90, 45, 0, 0, -45, -45, 0,
-45, -45, -45, 0, 45, 45, 45, 90, 45, 90, 45, 0, 45, 0, 0, 0, -45, 0, 0, 45])))
if ipop == 21:
ss = np.hstack((np.array([0, -45, 0, 45, 0, 0, -45, 0, 0, 45, 90, 90, -45, 90, 90, 45, 0, 0, -45, -45, 0,
-45, 0, 0, 0, 45, 45, 90, -45, 90, 45, 0, 45, 0, 0, 0, -45, 0, 0, 45])))
if ipop == 22:
ss = np.hstack((np.array([90, -45, 0, 0, 45, 0, 0, -45, 0, 0, 45, 90, 90, -45, 90, 90, 45, 0, 0, -45, -45, 0,
-45, 0, 0, 0, 0, 45, 45, 0, 0, 45, 0, -45, -45, 90, 90, 90, 45, 45])))
if ipop == 23:
ss = np.hstack((np.array([90, -45, 90, 90, 45, 90, 90, -45, 0, 0, 45, 90, 90, -45, 90, 90, 45, 0, 0, -45, -45, 0,
-45, 0, 0, 0, 0, 45, 45, 0, 0, 45, 0, -45, -45, 90, 90, 90, 45, 45])))
if ipop == 24:
ss = np.hstack((np.array([90, -45, 90, 90, -45, 90, 90, -45, 0, 0, -45, 90, 90, -45, 90, -45, -45, 0, 0, -45, -45, 0,
45, 0, 45, 0, 0, 45, 45, 0, 0, 45, 0, 45, 45, 90, 90, 45, 45, 90])))
if ipop == 25:
ss = np.hstack((np.array([90, -45, 90, 90, -45, 90, 90, 45, 0, 0, -45, -45, 0, 45, 0, 0, 0, 0, 45, 45, 0,
-45, 0, 0, -45, 90, -45, 90, 45, 45, 0, 45, 0, -45, -45, 90, 90, 90, 45, 45])))
if ipop == 26:
ss = np.hstack((np.array([0, 0, 0, 0, -45, 90, 90, 45, 0, 0, -45, -45, 0, 45, 0, 0, 0, 45, 45, 45, 0,
-45, 0, 0, -45, 0, 45, 0, 45, 45, 0, 45, 0, -45, -45, 90, 90, 90, -45, -45])))
if ipop == 27:
ss = np.hstack((np.array([45, 45, 45, 45, 0, -45, 90, 45, 45, 45, 0, -45, -45, 0, 45, 0, -45, -45, -45, -45, 0,
-45, 90, 90, -45, -45, -45, 90, 45, 45, 45, 45, 0, 0, -45, -45, -45, 90, 45, 45])))
if ipop == 28:
ss = np.hstack((np.array([45, 45, 45, 45, 0, -45, 90, -45, -45, -45, 0, -45, -45, 0, -45, 0, -45, -45, -45, -45, 0,
45, 90, 90, 45, 45, 45, 90, 45, 45, 45, 45, 0, 0, -45, -45, -45, 90, 45, 45])))
if ipop == 29:
ss = np.hstack((np.array([45, 45, 45, 45, 0, 45, 45, 0, -45, -45, -45, -45, -45, 0, -45, 0, -45, -45, -45, -45, 0,
45, 90, 90, 45, 45, 45, 90, 45, 45, 45, 45, 45, 0, -45, -45, -45, 90, -45, -45])))
if ipop == 30:
ss = np.hstack((np.array([90, 90, 90, 45, 0, 45, 45, 0, -45, -45, -45, -45, -45, 0, -45, 0, -45, -45, -45, -45, 0,
45, 90, 90, 45, 0, 45, 90, 45, 45, 45, 45, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 31:
ss = np.hstack((np.array([-45, 0, 0, -45, 0, 45, 0, 45, 45, 0, 45, 0, -45, -45, -45, 0, -45, 0, -45, -45, -45, -45, 0,
45, 90, 45, 45, 0, 45, 90, 45, 45, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 32:
ss = np.hstack((np.array([-45, 90, 90, -45, 90, 45, 0, 45, 45, 0, 45, 0, -45, -45, -45, 90, -45, 90, -45, -45, -45, -45, 0,
45, 90, 45, 45, 0, 45, 90, 45, 45, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 33:
ss = np.hstack((np.array([-45, 90, 90, 45, 90, 45, 45, 90, -45, 0, 0, 45, 0, -45, 0, 45, 90, -45, 90, -45, -45, -45, -45, 0,
-45, 90, 45, 45, 0, 45, 45, 90, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 34:
ss = np.hstack((np.array([0, 0, 0, 0, 45, 45, 90, -45, 0, 0, 45, 0, -45, 0, 45, 90, -45, 0, 0, 0, -45, -45, -45, 0,
-45, 0, 45, 45, 0, 45, 45, 0, 0, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 35:
ss = np.hstack((np.array([0, 0, 0, 0, 45, 45, 90, 45, 0, 0, 45, 0, 45, 0, 45, 90, -45, 0, 0, 0, 45, 45, 45, 0,
-45, 0, -45, -45, 0, -45, -45, 0, 0, 0, -45, 90, 90, 90, -45, -45])))
if ipop == 36:
ss = np.hstack((np.array([0, 0, -45, -45, 90, 45, 45, 0, 45, 90, 45, 45, 45, 0, 45, 90, -45, 0, 0, 0, 45, 45, 45, 0,
-45, 0, -45, -45, 0, -45, 0, 0, 0, 0, -45, 90, 90, 90, -45, -45])))
if ipop == 37:
ss = np.hstack((np.array([0, 45, 90, -45, 0, 0, 0, -45, -45, -45, -45, 0, 45, 90, -45, 90, 45, 45, 0, 45, 0, -45,
-45, 90, 45, 0, 45, 45, 90, 45, 0, 0, 0, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 38:
ss = np.hstack((np.array([-45, 90, 45, 0, 45, 45, 90, 45, 0, 0, 0, 0, 45, 90, 90, -45, 90, -45, -45, -45, -45, 0,
45, 90, 45, 45, 0, 0, 0, 45, 45, 45, 0, -45, 0, -45, -45, 90, -45, -45])))
if ipop == 39:
ss = np.hstack((np.array([-45, 90, 90, 45, 90, 90, 45, 90, -45, 0, 0, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90, -45, 0, 0,
0, 0, 45, 0, 0, 45, 90, -45, -45, 0, 45, 90, 90, 90, -45, 0])))
if ipop == 40:
ss = np.hstack((np.array([ 90, -45, 0, 0, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90, -45, 0, 0, 0, -45, -45, -45, -45, 0,
45, 90, 45, 45, 0, 45, 45, 90, 90, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 41:
ss = np.hstack((np.array([45, 90, 45, 0, 45, 0, 45, 45, 90, 90, 45, 0, 45, 90, 90, 90, -45, -45, 90, -45, -45, -45, -45, 0,
-45, 90, -45, -45, 0, 45, 45, 90, 45, 0, 45, 90, 90, 90, -45, -45])))
if ipop == 42:
ss = np.hstack((np.array([90, -45, 0, 0, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90, 45, 90, 45, 0, 0, -45, -45, 0,
-45, -45, 0, 0, 45, 45, 45, 45, 0, 45, 0, -45, -45, -45, -45, 0, 0, 45])))
if ipop == 43:
ss = np.hstack((np.array([90, 90, 90, 90, 45, 45, 0, 0, 45, 90, -45, 90, 90, 90, 45, 90, 45, 0, 0, -45, -45, 0,
-45, -45, 0, 45, 45, 45, 0, -45, 90, 45, 0, -45, -45, -45, -45, 0, 0, 45])))
if ipop == 44:
ss = np.hstack((np.array([90, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90, -45, 0, 0, 0, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90,
-45, 0, 0, 0,45, 0, -45, 90, 45, 0, -45, -45, 0, 0, 45])))
if ipop == 45:
ss = np.hstack((np.array([90, 45, 90, 90, 90, 45, 0, -45, 0, 0, 0, -45, 0, 0, 0, 45, 0, 0, 0, 45, 90, -45, 90, 90, 90,
-45, 90, 90, 90,45, 0, -45, 90, 45, 0, -45, -45, 0, 0, 45])))
if ipop == 46:
ss = np.hstack((np.array([90, -45, 90, 90, 90, -45, 0, -45, 0, 0, 0, -45, 0, 0, 0, -45, 0, 0, 0, -45, 90, -45, 90, 45, 90,
-45, 90, 45, 90,45, 0, 45, 90, 45, 0, 45, 45, 0, 0, 45])))
if ipop == 47:
ss = np.hstack((np.array([0, 0, 0, 45, 45, 45, 0, 0, 45, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 90, -45, -45, 90, 45, 0, -45, 0, 0, -45, 90, 90, -45, 0, 0, 0, 45])))
if ipop == 48:
ss = np.hstack((np.array([45, 0, 0, 0, 0, 0, 45, 90, 90, 45, 0, 0, -45, 0, 0, 0, 0, 0, 45, 0, 0, 0, 0,
-45, 90, -45, 90, 45, 90, -45, 0, 0, -45, 90, 90, -45, 0, 0, 0, 45])))
if ipop == 49:
ss = np.hstack((np.array([45, 90, 90, -45, -45, 0, 45, 90, 45, 45, 0, 0, -45, 90, 90, 45, 0, 0, 45, 0, 0, 0, 0,
-45, 0, -45, 90, 45, 90, -45, 0, 0, -45, 0, 0, -45, 0, 0, 0, 45])))
if ipop == 50:
ss = np.hstack((np.array([90, 90, 90, 90, 45, 45, 0, 0, -45, 90, -45, 90, 90, 90, 45, 90, 45, 0, 45, 0, 0, 0, 0,
-45, 0, -45, 90, 45, 90, -45, 0, 0, -45, 0, 0, -45, 0, 0, 0, 45])))
if ipop == 51:
ss = np.hstack((np.array([90, 90, -45, 90, 90, 45, 45, 0, 0, -45, 90, -45, 90, 90, 90, 45, 90, -45, 0, -45, -45, 0, 0, 0,
45, 45, 0, 45, 45, 45, 0, -45, 90, 45, 0, -45, -45, -45, 0, 45])))
if ipop == 52:
ss = np.hstack((np.array([-45, 90, 90, 45, 90, 45, 45, 90, -45, 0, 0, 45, 0, -45, 0, 45, 90, -45, 90, -45, -45, -45, -45, 0,
-45, 90, 45, 45, 0, 45, 45, 90, 45, 0, 45, 90, 90, 90, -45, -45])))
# Complete the stacking sequence
    ss = ss.reshape((40,))  # flatten the half-stack (the original call discarded its result)
# print(ipop, ss.ndim, ss)
if ss.ndim == 2:
ss = np.hstack((ss, np.flip(ss, axis=1)))
else:
ss = np.hstack((ss, np.flip(ss, axis=0)))
ss = ss.reshape((80,))
    # For the 10% rule
if not is_ten_percent_rule(constraints, stack=ss):
raise Exception('Laminate violating the 10% rule at line ', ipop)
# For the contiguity and disorientation constraints
ss, _ = internal_diso_contig(ss, constraints)
if ss.size == 0:
raise Exception('Laminate violating disorientation or contiguity at line ', ipop)
# For the correct total number of plies
if ss.size != n_plies_in_panels:
print('ss.size', ss.size)
raise Exception('Laminate with incorrect ply count at line ', ipop)
# Storage of the stacking sequence
ss = np.ravel(ss)
    N0 = sum(ss == 0)
    N90 = sum(ss == 90)
    N45 = sum(ss == 45)
    N135 = sum(ss == -45)
# balance check
if N45 != N135:
print('N45 ', N45, 'N-45 ', N135)
raise Exception('Laminate not balanced at line ', ipop)
ss = ss.astype(int)
#print(ipop, N0, N90, N45, N135)
lampam = calc_lampam(ss, constraints)
Pop.loc[ipop, 'ply_counts'] = n_plies_in_panels
Pop.loc[ipop, 'lampam[1]'] = lampam[0]
Pop.loc[ipop, 'lampam[2]'] = lampam[1]
Pop.loc[ipop, 'lampam[3]'] = lampam[2]
Pop.loc[ipop, 'lampam[4]'] = lampam[3]
Pop.loc[ipop, 'lampam[5]'] = lampam[4]
Pop.loc[ipop, 'lampam[6]'] = lampam[5]
Pop.loc[ipop, 'lampam[7]'] = lampam[6]
Pop.loc[ipop, 'lampam[8]'] = lampam[7]
Pop.loc[ipop, 'lampam[9]'] = lampam[8]
Pop.loc[ipop, 'lampam[10]'] = lampam[9]
Pop.loc[ipop, 'lampam[11]'] = lampam[10]
Pop.loc[ipop, 'lampam[12]'] = lampam[11]
A = A_from_lampam(lampam, mat)
Pop.loc[ipop, 'A11'] = A[0, 0]
Pop.loc[ipop, 'A22'] = A[1, 1]
Pop.loc[ipop, 'A12'] = A[0, 1]
Pop.loc[ipop, 'A66'] = A[2, 2]
Pop.loc[ipop, 'A16'] = A[0, 2]
Pop.loc[ipop, 'A26'] = A[1, 2]
B = B_from_lampam(lampam, mat, constraints.sym)
Pop.loc[ipop, 'B11'] = B[0, 0]
Pop.loc[ipop, 'B22'] = B[1, 1]
Pop.loc[ipop, 'B12'] = B[0, 1]
Pop.loc[ipop, 'B66'] = B[2, 2]
Pop.loc[ipop, 'B16'] = B[0, 2]
Pop.loc[ipop, 'B26'] = B[1, 2]
D = D_from_lampam(lampam, mat)
Pop.loc[ipop, 'D11'] = D[0, 0]
Pop.loc[ipop, 'D22'] = D[1, 1]
Pop.loc[ipop, 'D12'] = D[0, 1]
Pop.loc[ipop, 'D66'] = D[2, 2]
Pop.loc[ipop, 'D16'] = D[0, 2]
Pop.loc[ipop, 'D26'] = D[1, 2]
Pop.loc[ipop, 'N0'] = N0
Pop.loc[ipop, 'N45'] = N45
Pop.loc[ipop, 'N90'] = N90
Pop.loc[ipop, 'N-45'] = N135
ss_flatten = np.array(ss, dtype=str)
ss_flatten = ' '.join(ss_flatten)
Pop.loc[ipop, 'ss'] = ss_flatten
ipop += 1
print(' Pop Created ... ')
for i in range(1, len(Pop.index) + 1):
ss = Pop.loc[i, 'ss']
    # unflattening the space-separated stack back into an integer array
    ss = ss.split(' ')
    ss_ini = np.array(ss, dtype=int)
# Creating more stacking sequences with:
# 0->90
# 90-> 0
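    # 80 serves as a temporary sentinel (it is outside the admissible angle
    # set) so the two in-place swaps below cannot clobber each other.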
ss = np.copy(ss_ini)
ss[ss == 0] = 80
ss[ss == 90] = 0
ss[ss == 80] = 90
lampam = calc_lampam(ss, constraints)
    N0 = sum(ss == 0)
    N90 = sum(ss == 90)
    N45 = sum(ss == 45)
    N135 = sum(ss == -45)
Pop.loc[ipop, 'ply_counts'] = n_plies_in_panels
Pop.loc[ipop, 'lampam[1]'] = lampam[0]
Pop.loc[ipop, 'lampam[2]'] = lampam[1]
Pop.loc[ipop, 'lampam[3]'] = lampam[2]
Pop.loc[ipop, 'lampam[4]'] = lampam[3]
Pop.loc[ipop, 'lampam[5]'] = lampam[4]
Pop.loc[ipop, 'lampam[6]'] = lampam[5]
Pop.loc[ipop, 'lampam[7]'] = lampam[6]
Pop.loc[ipop, 'lampam[8]'] = lampam[7]
Pop.loc[ipop, 'lampam[9]'] = lampam[8]
Pop.loc[ipop, 'lampam[10]'] = lampam[9]
Pop.loc[ipop, 'lampam[11]'] = lampam[10]
Pop.loc[ipop, 'lampam[12]'] = lampam[11]
A = A_from_lampam(lampam, mat)
Pop.loc[ipop, 'A11'] = A[0, 0]
Pop.loc[ipop, 'A22'] = A[1, 1]
Pop.loc[ipop, 'A12'] = A[0, 1]
Pop.loc[ipop, 'A66'] = A[2, 2]
Pop.loc[ipop, 'A16'] = A[0, 2]
Pop.loc[ipop, 'A26'] = A[1, 2]
B = B_from_lampam(lampam, mat, constraints.sym)
Pop.loc[ipop, 'B11'] = B[0, 0]
Pop.loc[ipop, 'B22'] = B[1, 1]
Pop.loc[ipop, 'B12'] = B[0, 1]
Pop.loc[ipop, 'B66'] = B[2, 2]
Pop.loc[ipop, 'B16'] = B[0, 2]
Pop.loc[ipop, 'B26'] = B[1, 2]
D = D_from_lampam(lampam, mat)
Pop.loc[ipop, 'D11'] = D[0, 0]
Pop.loc[ipop, 'D22'] = D[1, 1]
Pop.loc[ipop, 'D12'] = D[0, 1]
Pop.loc[ipop, 'D66'] = D[2, 2]
Pop.loc[ipop, 'D16'] = D[0, 2]
Pop.loc[ipop, 'D26'] = D[1, 2]
Pop.loc[ipop, 'N0'] = N0
Pop.loc[ipop, 'N45'] = N45
Pop.loc[ipop, 'N90'] = N90
Pop.loc[ipop, 'N-45'] = N135
ss = np.array(ss, dtype=str)
ss = ' '.join(ss)
Pop.loc[ipop, 'ss'] = ss
    ipop += 1
# Creating more stacking sequences with:
# 0->90
# 90-> 0
# 45->-45
# -45-> 45
ss = np.copy(ss_ini)
ss[ss == 45] = -10
ss[ss == -45] = 45
ss[ss == -10] = -45
ss[ss == 0] = 80
ss[ss == 90] = 0
ss[ss == 80] = 90
lampam = calc_lampam(ss, constraints)
    N0 = sum(ss == 0)
    N90 = sum(ss == 90)
    N45 = sum(ss == 45)
    N135 = sum(ss == -45)
Pop.loc[ipop, 'ply_counts'] = n_plies_in_panels
Pop.loc[ipop, 'lampam[1]'] = lampam[0]
Pop.loc[ipop, 'lampam[2]'] = lampam[1]
Pop.loc[ipop, 'lampam[3]'] = lampam[2]
Pop.loc[ipop, 'lampam[4]'] = lampam[3]
Pop.loc[ipop, 'lampam[5]'] = lampam[4]
Pop.loc[ipop, 'lampam[6]'] = lampam[5]
Pop.loc[ipop, 'lampam[7]'] = lampam[6]
Pop.loc[ipop, 'lampam[8]'] = lampam[7]
Pop.loc[ipop, 'lampam[9]'] = lampam[8]
Pop.loc[ipop, 'lampam[10]'] = lampam[9]
Pop.loc[ipop, 'lampam[11]'] = lampam[10]
Pop.loc[ipop, 'lampam[12]'] = lampam[11]
A = A_from_lampam(lampam, mat)
Pop.loc[ipop, 'A11'] = A[0, 0]
Pop.loc[ipop, 'A22'] = A[1, 1]
Pop.loc[ipop, 'A12'] = A[0, 1]
Pop.loc[ipop, 'A66'] = A[2, 2]
Pop.loc[ipop, 'A16'] = A[0, 2]
Pop.loc[ipop, 'A26'] = A[1, 2]
B = B_from_lampam(lampam, mat, constraints.sym)
Pop.loc[ipop, 'B11'] = B[0, 0]
Pop.loc[ipop, 'B22'] = B[1, 1]
Pop.loc[ipop, 'B12'] = B[0, 1]
Pop.loc[ipop, 'B66'] = B[2, 2]
Pop.loc[ipop, 'B16'] = B[0, 2]
Pop.loc[ipop, 'B26'] = B[1, 2]
D = D_from_lampam(lampam, mat)
Pop.loc[ipop, 'D11'] = D[0, 0]
Pop.loc[ipop, 'D22'] = D[1, 1]
Pop.loc[ipop, 'D12'] = D[0, 1]
Pop.loc[ipop, 'D66'] = D[2, 2]
Pop.loc[ipop, 'D16'] = D[0, 2]
Pop.loc[ipop, 'D26'] = D[1, 2]
Pop.loc[ipop, 'N0'] = N0
Pop.loc[ipop, 'N45'] = N45
Pop.loc[ipop, 'N90'] = N90
Pop.loc[ipop, 'N-45'] = N135
ss = np.array(ss, dtype=str)
ss = ' '.join(ss)
Pop.loc[ipop, 'ss'] = ss
    ipop += 1
# Creating more stacking sequences with:
# 45->-45
# -45-> 45
ss = np.copy(ss_ini)
ss[ss == 45] = -10
ss[ss == -45] = 45
ss[ss == -10] = -45
lampam = calc_lampam(ss, constraints)
    N0 = sum(ss == 0)
    N90 = sum(ss == 90)
    N45 = sum(ss == 45)
    N135 = sum(ss == -45)
Pop.loc[ipop, 'ply_counts'] = n_plies_in_panels
Pop.loc[ipop, 'lampam[1]'] = lampam[0]
Pop.loc[ipop, 'lampam[2]'] = lampam[1]
Pop.loc[ipop, 'lampam[3]'] = lampam[2]
Pop.loc[ipop, 'lampam[4]'] = lampam[3]
Pop.loc[ipop, 'lampam[5]'] = lampam[4]
Pop.loc[ipop, 'lampam[6]'] = lampam[5]
Pop.loc[ipop, 'lampam[7]'] = lampam[6]
Pop.loc[ipop, 'lampam[8]'] = lampam[7]
Pop.loc[ipop, 'lampam[9]'] = lampam[8]
Pop.loc[ipop, 'lampam[10]'] = lampam[9]
Pop.loc[ipop, 'lampam[11]'] = lampam[10]
Pop.loc[ipop, 'lampam[12]'] = lampam[11]
A = A_from_lampam(lampam, mat)
Pop.loc[ipop, 'A11'] = A[0, 0]
Pop.loc[ipop, 'A22'] = A[1, 1]
Pop.loc[ipop, 'A12'] = A[0, 1]
Pop.loc[ipop, 'A66'] = A[2, 2]
Pop.loc[ipop, 'A16'] = A[0, 2]
Pop.loc[ipop, 'A26'] = A[1, 2]
B = B_from_lampam(lampam, mat, constraints.sym)
Pop.loc[ipop, 'B11'] = B[0, 0]
Pop.loc[ipop, 'B22'] = B[1, 1]
Pop.loc[ipop, 'B12'] = B[0, 1]
Pop.loc[ipop, 'B66'] = B[2, 2]
Pop.loc[ipop, 'B16'] = B[0, 2]
Pop.loc[ipop, 'B26'] = B[1, 2]
D = D_from_lampam(lampam, mat)
Pop.loc[ipop, 'D11'] = D[0, 0]
Pop.loc[ipop, 'D22'] = D[1, 1]
Pop.loc[ipop, 'D12'] = D[0, 1]
Pop.loc[ipop, 'D66'] = D[2, 2]
Pop.loc[ipop, 'D16'] = D[0, 2]
Pop.loc[ipop, 'D26'] = D[1, 2]
Pop.loc[ipop, 'N0'] = N0
Pop.loc[ipop, 'N45'] = N45
Pop.loc[ipop, 'N90'] = N90
Pop.loc[ipop, 'N-45'] = N135
ss = np.array(ss, dtype=str)
ss = ' '.join(ss)
Pop.loc[ipop, 'ss'] = ss
    ipop += 1
Pop.drop_duplicates(keep='first', inplace=True)
Pop.reset_index(inplace=True)
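# Duplicates typically appear when a stack maps onto itself under one of
# the angle substitutions above, or when two seeds collapse to the same
# laminate.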
print(f'The population consists of {len(Pop.index)} individuals')
save_constraints_LAYLA(filename, constraints)
save_materials(filename, mat)
append_df_to_excel(filename, Pop, 'stacks', index=True, header=True)
autofit_column_widths(filename)
# ===== architect/examples/satellite_stl/data_cleaning_mission_2.py | repo: MIT-REALM/architect | license: BSD-2-Clause =====
import pandas as pd
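# Aggregates the per-seed benchmark CSVs produced by each optimization
# strategy into one combined CSV per strategy, patching up rows that the
# individual runs recorded inconsistently (zero-indexed round counts,
# missing rows, string-typed flags). Note that DataFrame.append is
# deprecated since pandas 1.4 (removed in 2.0); pd.concat is the modern
# replacement.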
def clean_counterexample_guided():
# Create a dataframe to aggregate the results
results_df = pd.DataFrame()
for seed in range(50):
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/counterexample_guided_{seed}.csv"
df = pd.read_csv(filename)
# Add one to rounds to account for zero-indexing
rounds_mask = df.measurement == "Optimization rounds"
df.loc[rounds_mask, "value"] = df[rounds_mask].value + 1
results_df = results_df.append(df, ignore_index=True)
# Save the new dataframe
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/combined_counterexample_guided.csv"
results_df.to_csv(filename, index=False)
def clean_random_batch_1():
# Create a dataframe to aggregate the results
results_df = pd.DataFrame()
for seed in range(50):
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/random_batch_1_{seed}.csv"
df = pd.read_csv(filename)
results_df = results_df.append(df, ignore_index=True)
# Save the new dataframe
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/combined_random_batch_1.csv"
results_df.to_csv(filename, index=False)
def clean_random_batch_32():
# Create a dataframe to aggregate the results
results_df = pd.DataFrame()
for seed in range(50):
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/random_batch_32_{seed}.csv"
df = pd.read_csv(filename)
# Add missing rows (this optimization ran for 1 round with 32 exogenous samples)
df = df.append(
{"seed": seed, "measurement": "Optimization rounds", "value": 1},
ignore_index=True,
)
df = df.append(
{"seed": seed, "measurement": "Final population size", "value": 32},
ignore_index=True,
)
results_df = results_df.append(df, ignore_index=True)
# Save the new dataframe
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/combined_random_batch_32.csv"
results_df.to_csv(filename, index=False)
def clean_random_batch_64():
# Create a dataframe to aggregate the results
results_df = pd.DataFrame()
for seed in range(50):
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/random_batch_64_{seed}.csv"
df = pd.read_csv(filename)
        # Add one to rounds to account for zero-indexing (the 64-sample runs
        # already contain all rows, so nothing needs to be appended)
rounds_mask = df.measurement == "Optimization rounds"
df.loc[rounds_mask, "value"] = df[rounds_mask].value + 1
results_df = results_df.append(df, ignore_index=True)
# Save the new dataframe
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/combined_random_batch_64.csv"
results_df.to_csv(filename, index=False)
def clean_milp():
# Create a dataframe to aggregate the results
results_df = pd.DataFrame()
for seed in range(50):
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/milp_{seed}.csv"
df = pd.read_csv(filename)
# Add missing rows (this optimization ran for 1 round)
df = df.append(
{"seed": seed, "measurement": "Optimization rounds", "value": 1},
ignore_index=True,
)
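        # MILP runs log feasibility as the string "True"; normalize it to a
        # numeric 1.0 so the combined CSV keeps a uniform dtype for this
        # measurement.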
mask = df.measurement == "Feasible"
replace_mask = df[mask].value == "True"
df.loc[mask & replace_mask, "value"] = 1.0
results_df = results_df.append(df, ignore_index=True)
# Save the new dataframe
save_dir = "logs/satellite_stl/all_constraints/comparison"
filename = f"{save_dir}/combined_milp.csv"
results_df.to_csv(filename, index=False)
if __name__ == "__main__":
clean_counterexample_guided()
clean_random_batch_1()
clean_random_batch_32()
clean_random_batch_64()
clean_milp()
# ===== release/scripts/presets/render/iphone.py | repo: sambler/myblender | licenses: Naumen, Condor-1.1, MS-PL =====
import bpy
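# Render preset matching the original iPhone screen: 320x480 portrait,
# square pixels, 24 fps.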
bpy.context.scene.render.resolution_x = 320
bpy.context.scene.render.resolution_y = 480
bpy.context.scene.render.resolution_percentage = 100
bpy.context.scene.render.pixel_aspect_x = 1
bpy.context.scene.render.pixel_aspect_y = 1
bpy.context.scene.render.fps = 24
bpy.context.scene.render.fps_base = 1
# ===== examples/multiple_files/api.py | repo: izi-global/izir | license: MIT =====
import izi
import part_1
import part_2
@izi.get('/')
def say_hi():
return "Hi from root"
@izi.extend_api()
def with_other_apis():
return [part_1, part_2]
# ===== modules/2.79/bpy/types/ShaderNodeBrightContrast.py | repo: cmbasnett/fake-bpy-module | license: MIT =====
class ShaderNodeBrightContrast:
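    """Stub for Blender's Bright/Contrast shader node.

    fake-bpy-module ships empty placeholders like this so editors and type
    checkers can resolve bpy.types names; the real implementation lives in
    Blender itself.
    """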
pass
# ===== djangoBlog/libs/exception_func.py | repo: blackmonkey121/blog | license: BSD-3-Clause =====
from django.shortcuts import render
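# These views back Django's handler400/handler403/handler404/handler500
# hooks; each one simply renders the matching exception/<code>.html template.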
def bad_request(request, exception, template_name='exception/400.html'):
return render(request, template_name)
def permission_denied(request, exception, template_name='exception/403.html'):
return render(request, template_name)
def page_not_found(request, exception, template_name='exception/404.html'):
return render(request, template_name)
# Note: unlike the handlers above, this one takes no exception argument
# (Django's handler500 is called without one)
def server_error(request, template_name='exception/500.html'):
return render(request, template_name)
# ===== nff/nn/__init__.py | repo: torchmd/mdgrad | license: MIT =====
from .activations import *
from .layers import *
from .modules import *
from .utils import *
from .models import *
from .tensorgrad import *
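# The wildcard imports above flatten the activations, layers, modules,
# utils, models and tensorgrad submodules into the nff.nn namespace for
# convenient access.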
# ===== armada/tests/unit/api/test_tiller_controller.py | repo: mohan19/airship-armada | license: Apache-2.0 =====
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from armada.api.controller import tiller as tiller_controller
from armada.common.policies import base as policy_base
from armada.tests import test_utils
from armada.tests.unit.api import base
CONF = cfg.CONF
class TillerControllerTest(base.BaseControllerTest):
@mock.patch.object(tiller_controller, 'Tiller')
def test_get_tiller_status(self, mock_tiller):
"""Tests GET /api/v1.0/status endpoint."""
rules = {'tiller:get_status': '@'}
self.policy.set_rules(rules)
mock_tiller.return_value.tiller_status.return_value = 'fake_status'
mock_tiller.return_value.tiller_version.return_value = 'fake_version'
result = self.app.simulate_get('/api/v1.0/status')
expected = {
'tiller': {
'version': 'fake_version',
'state': 'fake_status'
}
}
self.assertEqual(expected, result.json)
self.assertEqual('application/json', result.headers['content-type'])
mock_tiller.assert_called_once_with(
tiller_host=None,
tiller_port=44134,
tiller_namespace='kube-system')
@mock.patch.object(tiller_controller, 'Tiller')
def test_get_tiller_status_with_params(self, mock_tiller):
"""Tests GET /api/v1.0/status endpoint with query parameters."""
rules = {'tiller:get_status': '@'}
self.policy.set_rules(rules)
mock_tiller.return_value.tiller_status.return_value = 'fake_status'
mock_tiller.return_value.tiller_version.return_value = 'fake_version'
result = self.app.simulate_get(
'/api/v1.0/status',
params_csv=False,
params={
'tiller_host': 'fake_host',
'tiller_port': '98765',
'tiller_namespace': 'fake_ns'
})
expected = {
'tiller': {
'version': 'fake_version',
'state': 'fake_status'
}
}
self.assertEqual(expected, result.json)
self.assertEqual('application/json', result.headers['content-type'])
mock_tiller.assert_called_once_with(
tiller_host='fake_host',
tiller_port=98765,
tiller_namespace='fake_ns')
@mock.patch.object(tiller_controller, 'Tiller')
def test_tiller_releases(self, mock_tiller):
"""Tests GET /api/v1.0/releases endpoint."""
rules = {'tiller:get_release': '@'}
self.policy.set_rules(rules)
def _get_fake_release(name, namespace):
fake_release = mock.Mock(namespace='%s_namespace' % namespace)
fake_release.configure_mock(name=name)
return fake_release
mock_tiller.return_value.list_releases.return_value = [
_get_fake_release('foo', 'bar'),
_get_fake_release('baz', 'qux')
]
result = self.app.simulate_get('/api/v1.0/releases')
expected = {
'releases': {
'bar_namespace': ['foo'],
'qux_namespace': ['baz']
}
}
self.assertEqual(expected, result.json)
mock_tiller.assert_called_once_with(
tiller_host=None,
tiller_port=44134,
tiller_namespace='kube-system')
mock_tiller.return_value.list_releases.assert_called_once_with()
@mock.patch.object(tiller_controller, 'Tiller')
def test_tiller_releases_with_params(self, mock_tiller):
"""Tests GET /api/v1.0/releases endpoint with query parameters."""
rules = {'tiller:get_release': '@'}
self.policy.set_rules(rules)
def _get_fake_release(name, namespace):
fake_release = mock.Mock(namespace='%s_namespace' % namespace)
fake_release.configure_mock(name=name)
return fake_release
mock_tiller.return_value.list_releases.return_value = [
_get_fake_release('foo', 'bar'),
_get_fake_release('baz', 'qux')
]
result = self.app.simulate_get(
'/api/v1.0/releases',
params_csv=False,
params={
'tiller_host': 'fake_host',
'tiller_port': '98765',
'tiller_namespace': 'fake_ns'
})
expected = {
'releases': {
'bar_namespace': ['foo'],
'qux_namespace': ['baz']
}
}
self.assertEqual(expected, result.json)
mock_tiller.assert_called_once_with(
tiller_host='fake_host',
tiller_port=98765,
tiller_namespace='fake_ns')
mock_tiller.return_value.list_releases.assert_called_once_with()
class TillerControllerNegativeRbacTest(base.BaseControllerTest):
@test_utils.attr(type=['negative'])
def test_list_tiller_releases_insufficient_permissions(self):
"""Tests the GET /api/v1.0/releases endpoint returns 403 following
failed authorization.
"""
rules = {'tiller:get_release': policy_base.RULE_ADMIN_REQUIRED}
self.policy.set_rules(rules)
resp = self.app.simulate_get('/api/v1.0/releases')
self.assertEqual(403, resp.status_code)
@test_utils.attr(type=['negative'])
def test_get_tiller_status_insufficient_permissions(self):
"""Tests the GET /api/v1.0/status endpoint returns 403 following
failed authorization.
"""
rules = {'tiller:get_status': policy_base.RULE_ADMIN_REQUIRED}
self.policy.set_rules(rules)
resp = self.app.simulate_get('/api/v1.0/status')
self.assertEqual(403, resp.status_code)
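

# --- Illustrative sketch (not part of the original tests): the mocking
# pattern used throughout this module. Patching `Tiller` as an attribute of
# `tiller_controller` replaces the class *where the controller looks it up*,
# so no real Tiller service is contacted; the replacement arrives as the
# `mock_tiller` argument and its return values can be stubbed per test:
#
#     @mock.patch.object(tiller_controller, 'Tiller')
#     def test_example(self, mock_tiller):
#         mock_tiller.return_value.tiller_status.return_value = 'fake_status'
#         ...  # exercise the endpoint, then assert on mock_tiller's calls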
| 36.005714 | 77 | 0.63117 | 725 | 6,301 | 5.231724 | 0.212414 | 0.042183 | 0.02531 | 0.028474 | 0.784867 | 0.780912 | 0.760348 | 0.733456 | 0.729765 | 0.674928 | 0 | 0.015876 | 0.260276 | 6,301 | 174 | 78 | 36.212644 | 0.797897 | 0.152198 | 0 | 0.731707 | 0 | 0 | 0.139601 | 0 | 0 | 0 | 0 | 0 | 0.113821 | 1 | 0.065041 | false | 0 | 0.04878 | 0 | 0.146341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed1c4362b27b5228a54f8abb6bdd1d1dfd934639 | 26,088 | py | Python | tests/open_alchemy/schemas/helpers/test_iterate.py | MihailMiller/OpenAlchemy | 55b751c58ca50706ebc46262f50addb7dec34278 | [
"Apache-2.0"
] | 40 | 2019-11-05T06:50:35.000Z | 2022-03-09T01:34:57.000Z | tests/open_alchemy/schemas/helpers/test_iterate.py | MihailMiller/OpenAlchemy | 55b751c58ca50706ebc46262f50addb7dec34278 | [
"Apache-2.0"
] | 178 | 2019-11-03T04:10:38.000Z | 2022-03-31T00:07:17.000Z | tests/open_alchemy/schemas/helpers/test_iterate.py | MihailMiller/OpenAlchemy | 55b751c58ca50706ebc46262f50addb7dec34278 | [
"Apache-2.0"
] | 17 | 2019-11-04T07:22:46.000Z | 2022-03-23T05:29:49.000Z | """Test for iterate helpers."""
import pytest
from open_alchemy.schemas.helpers import iterate
@pytest.mark.parametrize(
"schemas, expected_schemas",
[
pytest.param({}, [], id="empty"),
pytest.param({"Schema1": {}}, [], id="single not"),
pytest.param(
{"Schema1": {"x-tablename": True}},
[("Schema1", {"x-tablename": True})],
id="single malformed tablename",
),
pytest.param(
{"Schema1": {"x-inherits": 1}},
[("Schema1", {"x-inherits": 1})],
id="single malformed inherits",
),
pytest.param(
{"Schema1": {"allOf": [{"$ref": "#/components/schemas/Schema2"}, {}]}},
[],
id="single missing reference",
),
pytest.param(
{"Schema1": {"x-tablename": "table 1"}},
[("Schema1", {"x-tablename": "table 1"})],
id="single is",
),
pytest.param({"Schema1": {}, "Schema2": {}}, [], id="multiple none"),
pytest.param(
{"Schema1": {"x-tablename": "table 1"}, "Schema2": {}},
[("Schema1", {"x-tablename": "table 1"})],
id="multiple first",
),
pytest.param(
{"Schema1": {}, "Schema2": {"x-tablename": "table 2"}},
[("Schema2", {"x-tablename": "table 2"})],
id="multiple last",
),
pytest.param(
{
"Schema1": {"x-tablename": "table 1"},
"Schema2": {"x-tablename": "table 2"},
},
[
("Schema1", {"x-tablename": "table 1"}),
("Schema2", {"x-tablename": "table 2"}),
],
id="multiple all",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_constructable(schemas, expected_schemas):
"""
GIVEN schemas and expected schemas
WHEN constructable is called with the schemas
    THEN an iterable with all the names and schemas in the expected schemas is
returned.
"""
returned_schemas = iterate.constructable(schemas=schemas)
assert list(returned_schemas) == expected_schemas
@pytest.mark.parametrize(
"schemas, expected_schemas",
[
pytest.param({}, [], id="empty"),
pytest.param({"Schema1": {}}, [("Schema1", {})], id="single not"),
pytest.param(
{"Schema1": {"x-tablename": True}},
[],
id="single tablename",
),
pytest.param(
{"Schema1": {"x-inherits": 1}},
[],
id="single inherits",
),
pytest.param(
{"Schema1": {"allOf": [{"$ref": "#/components/schemas/Schema2"}, {}]}},
[],
id="single missing reference",
),
pytest.param(
{
"Schema1": {"x-tablename": "table 1"},
"Schema2": {"x-tablename": "table 2"},
},
[],
id="multiple all",
),
pytest.param(
{"Schema1": {}, "Schema2": {"x-tablename": "table 2"}},
[("Schema1", {})],
id="multiple last",
),
pytest.param(
{"Schema1": {"x-tablename": "table 1"}, "Schema2": {}},
[("Schema2", {})],
id="multiple first",
),
pytest.param(
{"Schema1": {}, "Schema2": {}},
[("Schema1", {}), ("Schema2", {})],
id="multiple none",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_not_constructable(schemas, expected_schemas):
"""
GIVEN schemas and expected schemas
WHEN not_constructable is called with the schemas
    THEN an iterable with all the names and schemas in the expected schemas is
returned.
"""
returned_schemas = iterate.not_constructable(schemas=schemas)
assert list(returned_schemas) == expected_schemas
@pytest.mark.parametrize(
"schema, schemas, expected_properties",
[
pytest.param(True, {}, [], id="not dict"),
pytest.param({}, {}, [], id="no properties"),
pytest.param({"properties": {}}, {}, [], id="empty properties"),
pytest.param(
{"properties": True},
{},
[],
id="properties not dictionary",
),
pytest.param(
{"properties": {"prop_1": "value 1"}},
{},
[("prop_1", "value 1")],
id="single property",
),
pytest.param(
{"x-inherits": False, "properties": {"prop_1": "value 1"}},
{},
[("prop_1", "value 1")],
id="single property x-inherits False",
),
pytest.param(
{"properties": {"prop_1": "value 1", "prop_2": "value 2"}},
{},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="multiple property",
),
pytest.param(
{"$ref": True},
{},
[],
id="$ref not string",
),
pytest.param(
{"$ref": "#/components/schemas/RefSchema"},
{"RefSchema": {"properties": {"prop_1": "value 1"}}},
[("prop_1", "value 1")],
id="$ref",
),
pytest.param(
{"$ref": "#/components/schemas/RefSchema"},
{},
[],
id="$ref not resolve",
),
pytest.param({"allOf": True}, {}, [], id="allOf not list"),
pytest.param({"allOf": []}, {}, [], id="allOf empty"),
pytest.param({"allOf": [True]}, {}, [], id="allOf elements not dict"),
pytest.param(
{"allOf": [{"properties": {"prop_1": "value 1"}}]},
{},
[("prop_1", "value 1")],
id="allOf single",
),
pytest.param(
{
"allOf": [
{"properties": {"prop_1": "value 1"}},
{"properties": {"prop_2": "value 2"}},
]
},
{},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf multiple",
),
pytest.param(
{"allOf": [{"properties": True}, {"properties": {"prop_2": "value 2"}}]},
{},
[("prop_2", "value 2")],
id="allOf multiple first not dict",
),
pytest.param(
{"allOf": [{"properties": {"prop_1": "value 1"}}, {"properties": True}]},
{},
[("prop_1", "value 1")],
id="allOf multiple second not dict",
),
pytest.param(
{"allOf": [{"$ref": "#/components/schemas/RefSchema"}]},
{"RefSchema": {"properties": {"prop_1": "value 1"}}},
[("prop_1", "value 1")],
id="allOf $ref",
),
pytest.param(
{
"allOf": [
{"$ref": "#/components/schemas/RefSchema1"},
{"$ref": "#/components/schemas/RefSchema2"},
]
},
{
"RefSchema1": {"properties": {"prop_1": "value 1"}},
"RefSchema2": {"properties": {"prop_2": "value 2"}},
},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf multiple $ref",
),
pytest.param(
{
"allOf": [
{"properties": {"prop_1": "value 1"}},
{"properties": {"prop_2": "value 2"}},
]
},
{},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf multiple local",
),
pytest.param(
{
"allOf": [
{"properties": {"prop_1": "value 1"}},
{"$ref": "#/components/schemas/RefSchema"},
]
},
{"RefSchema": {"properties": {"prop_2": "value 2"}}},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf local and $ref local first",
),
pytest.param(
{
"allOf": [
{"$ref": "#/components/schemas/RefSchema"},
{"properties": {"prop_1": "value 1"}},
]
},
{"RefSchema": {"properties": {"prop_2": "value 2"}}},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf local and $ref $ref first",
),
pytest.param(
{
"allOf": [
{"properties": {"prop_1": "value 1"}},
{"properties": {"prop_1": "value 2"}},
]
},
{},
[("prop_1", "value 1")],
id="allOf multiple duplicate",
),
pytest.param(
{
"allOf": [
{"properties": {"prop_1": "value 1"}},
{"$ref": "#/components/schemas/RefSchema"},
]
},
{"RefSchema": {"properties": {"prop_1": "value 2"}}},
[("prop_1", "value 1")],
id="allOf local and $ref local first duplicate",
),
pytest.param(
{
"allOf": [
{"$ref": "#/components/schemas/RefSchema"},
{"properties": {"prop_1": "value 1"}},
]
},
{"RefSchema": {"properties": {"prop_1": "value 2"}}},
[("prop_1", "value 1")],
id="allOf local and $ref $ref first duplicate",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_properties_items(schema, schemas, expected_properties):
"""
GIVEN schema, schemas and expected properties
WHEN properties is called with the schema and schemas
THEN the expected name and property schema are returned.
"""
returned_properties = iterate.properties_items(schema=schema, schemas=schemas)
assert list(returned_properties) == expected_properties
@pytest.mark.parametrize(
"schema, schemas, expected_properties",
[
pytest.param(
{"x-inherits": 1, "properties": {"prop_1": "value 1"}},
{},
[],
id="x-inherits causes MalformedSchemaError",
),
pytest.param(
{"x-inherits": "ParentSchema", "properties": {"prop_1": "value 1"}},
{},
[],
id="x-inherits causes InheritanceError",
),
pytest.param(
{
"allOf": [
{"x-inherits": "ParentSchema", "properties": {"prop_1": "value 1"}},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{},
[],
id="x-inherits causes SchemaNotFoundError",
),
pytest.param(
{
"allOf": [
{
"x-inherits": "ParentSchema",
"x-tablename": "schema",
"properties": {"prop_1": "value 1"},
},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1")],
id="skip",
),
pytest.param(
{
"allOf": [
{"x-tablename": "schema", "properties": {"prop_1": "value 1"}},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="no inheritance not skip",
),
pytest.param(
{
"allOf": [
{
"properties": {"prop_1": "value 1"},
"x-inherits": "ParentSchema",
},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="single table not skip",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_properties_joined(schema, schemas, expected_properties):
"""
GIVEN schema, schemas and expected properties
WHEN properties is called with the schema and schemas
THEN the expected name and property schema are returned.
"""
returned_properties = iterate.properties_items(
schema=schema, schemas=schemas, stay_within_tablename=True
)
assert list(returned_properties) == expected_properties
@pytest.mark.parametrize(
"schema, schemas, expected_properties",
[
pytest.param(
{
"allOf": [
{
"x-inherits": "ParentSchema",
"x-tablename": "schema",
"properties": {"prop_1": "value 1"},
},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1")],
id="skip",
),
pytest.param(
{
"allOf": [
{
"properties": {"prop_1": "value 1"},
"x-inherits": "ParentSchema",
},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1")],
id="single table skip",
),
pytest.param(
{
"allOf": [
{"x-tablename": "schema", "properties": {"prop_1": "value 1"}},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"properties": {"prop_2": "value 2"},
}
},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="no inheritance not skip",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_properties_single(schema, schemas, expected_properties):
"""
GIVEN schema, schemas and expected properties
WHEN properties is called with the schema and schemas
THEN the expected name and property schema are returned.
"""
returned_properties = iterate.properties_items(
schema=schema, schemas=schemas, stay_within_model=True
)
assert list(returned_properties) == expected_properties
@pytest.mark.parametrize(
"schema, schemas, expected_required_values",
[
pytest.param({}, {}, [], id="no required"),
pytest.param(
{"required": "value 1"},
{},
["value 1"],
id="single required",
),
pytest.param(
{"x-inherits": False, "required": "value 1"},
{},
["value 1"],
id="single required x-inherits False",
),
pytest.param(
{"$ref": "#/components/schemas/RefSchema"},
{"RefSchema": {"required": "value 1"}},
["value 1"],
id="$ref",
),
pytest.param(
{"allOf": [{"required": "value 1"}]},
{},
["value 1"],
id="allOf single",
),
pytest.param(
{"allOf": [{"required": "value 1"}, {"required": "value 2"}]},
{},
["value 1", "value 2"],
id="allOf multiple",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_required_values(schema, schemas, expected_required_values):
"""
GIVEN schema, schemas and expected required lists
WHEN required_values is called with the schema and schemas
    THEN the expected required values are returned.
"""
returned_required_values = iterate.required_values(schema=schema, schemas=schemas)
assert list(returned_required_values) == expected_required_values
@pytest.mark.parametrize(
"schema, schemas, expected_required_values",
[
pytest.param(
{"x-inherits": 1, "required": "value 1"},
{},
[],
id="x-inherits causes MalformedSchemaError",
),
pytest.param(
{"x-inherits": "ParentSchema", "required": "value 1"},
{},
[],
id="x-inherits causes InheritanceError",
),
pytest.param(
{
"allOf": [
{"x-inherits": "ParentSchema", "required": "value 1"},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{},
[],
id="x-inherits causes SchemaNotFoundError",
),
pytest.param(
{
"allOf": [
{
"x-inherits": "ParentSchema",
"x-tablename": "schema",
"required": "value 1",
},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{"ParentSchema": {"x-tablename": "parent_schema", "required": "value 2"}},
["value 1"],
id="skip",
),
pytest.param(
{
"allOf": [
{"required": "value 1", "x-inherits": "ParentSchema"},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{"ParentSchema": {"x-tablename": "parent_schema", "required": "value 2"}},
["value 1"],
id="single table skip",
),
pytest.param(
{
"allOf": [
{"x-tablename": "schema", "required": "value 1"},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{"ParentSchema": {"x-tablename": "parent_schema", "required": "value 2"}},
["value 1", "value 2"],
id="no inheritance not skip",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_required_values_single(schema, schemas, expected_required_values):
"""
GIVEN schema, schemas and expected required lists
WHEN required_values is called with the schema and schemas and
stay_within_model set
    THEN the expected required values are returned.
"""
returned_required_values = iterate.required_values(
schema=schema, schemas=schemas, stay_within_model=True
)
assert list(returned_required_values) == expected_required_values
@pytest.mark.parametrize(
"schema, schemas, expected_values",
[
pytest.param({"required": True}, {}, [], id="required not list"),
pytest.param({"required": []}, {}, [], id="required empty"),
pytest.param({"required": ["value 1"]}, {}, ["value 1"], id="required single"),
pytest.param(
{"required": ["value 1", "value 2"]},
{},
["value 1", "value 2"],
id="required multiple",
),
pytest.param(
{"allOf": [{"required": ["value 1"]}, {"required": ["value 2"]}]},
{},
["value 1", "value 2"],
id="multiple required single",
),
pytest.param(
{"allOf": [{"required": True}, {"required": ["value 2"]}]},
{},
["value 2"],
id="multiple required first not list",
),
pytest.param(
{"allOf": [{"required": ["value 1"]}, {"required": True}]},
{},
["value 1"],
id="multiple required second not list",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_required_items(schema, schemas, expected_values):
"""
GIVEN schema, schemas and expected values
WHEN required_items is called with the schema and schemas
    THEN the expected values are returned.
"""
returned_values = iterate.required_items(schema=schema, schemas=schemas)
assert list(returned_values) == expected_values
@pytest.mark.parametrize(
"schema, schemas, expected_values",
[
pytest.param(
{
"allOf": [
{"required": ["value 1"], "x-inherits": "ParentSchema"},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"required": ["value 2"],
}
},
["value 1"],
id="single table skip",
),
pytest.param(
{
"allOf": [
{"x-tablename": "schema", "required": ["value 1"]},
{"$ref": "#/components/schemas/ParentSchema"},
]
},
{
"ParentSchema": {
"x-tablename": "parent_schema",
"required": ["value 2"],
}
},
["value 1", "value 2"],
id="no inheritance not skip",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_required_items_single(schema, schemas, expected_values):
"""
GIVEN schema, schemas and expected values
WHEN required_items is called with the schema and schemas and
stay_within_model set
    THEN the expected values are returned.
"""
returned_values = iterate.required_items(
schema=schema, schemas=schemas, stay_within_model=True
)
assert list(returned_values) == expected_values
@pytest.mark.parametrize(
"schema, schemas, expected_backrefs",
[
pytest.param(True, {}, [], id="not dict"),
pytest.param({}, {}, [], id="no backrefs"),
pytest.param({"x-backrefs": {}}, {}, [], id="empty backrefs"),
pytest.param(
{"x-backrefs": True},
{},
[],
id="backrefs not dictionary",
),
pytest.param(
{"x-backrefs": {"prop_1": "value 1"}},
{},
[("prop_1", "value 1")],
id="single property",
),
pytest.param(
{"x-backrefs": {"prop_1": "value 1", "prop_2": "value 2"}},
{},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="multiple property",
),
pytest.param(
{"$ref": True},
{},
[],
id="$ref not string",
),
pytest.param(
{"$ref": "#/components/schemas/RefSchema"},
{"RefSchema": {"x-backrefs": {"prop_1": "value 1"}}},
[("prop_1", "value 1")],
id="$ref",
),
pytest.param(
{"$ref": "#/components/schemas/RefSchema"},
{},
[],
id="$ref not resolve",
),
pytest.param({"allOf": True}, {}, [], id="allOf not list"),
pytest.param({"allOf": []}, {}, [], id="allOf empty"),
pytest.param({"allOf": [True]}, {}, [], id="allOf elements not dict"),
pytest.param(
{"allOf": [{"x-backrefs": {"prop_1": "value 1"}}]},
{},
[("prop_1", "value 1")],
id="allOf single",
),
pytest.param(
{
"allOf": [
{"x-backrefs": {"prop_1": "value 1"}},
{"x-backrefs": {"prop_2": "value 2"}},
]
},
{},
[("prop_1", "value 1"), ("prop_2", "value 2")],
id="allOf multiple",
),
pytest.param(
{"allOf": [{"x-backrefs": True}, {"x-backrefs": {"prop_2": "value 2"}}]},
{},
[("prop_2", "value 2")],
id="allOf multiple first not dict",
),
pytest.param(
{"allOf": [{"x-backrefs": {"prop_1": "value 1"}}, {"x-backrefs": True}]},
{},
[("prop_1", "value 1")],
id="allOf multiple second not dict",
),
pytest.param(
{"allOf": [{"$ref": "#/components/schemas/RefSchema"}]},
{"RefSchema": {"x-backrefs": {"prop_1": "value 1"}}},
[("prop_1", "value 1")],
id="allOf $ref",
),
pytest.param(
{
"allOf": [
{"x-backrefs": {"prop_1": "value 1"}},
{"x-backrefs": {"prop_1": "value 2"}},
]
},
{},
[("prop_1", "value 1")],
id="allOf duplicates",
),
],
)
@pytest.mark.schemas
@pytest.mark.helper
def test_backrefs_items(schema, schemas, expected_backrefs):
"""
GIVEN schema, schemas and expected backrefs
WHEN backrefs is called with the schema and schemas
THEN the expected name and property schema are returned.
"""
returned_backrefs = iterate.backrefs_items(schema=schema, schemas=schemas)
assert list(returned_backrefs) == expected_backrefs
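

# --- Illustrative sketch (not part of the original tests): a simplified,
# hypothetical re-implementation of the traversal pinned down by the
# parametrized cases above. This is NOT OpenAlchemy's actual code; it only
# mirrors the observable behaviour: local "properties" are yielded, "$ref" is
# resolved through the schemas mapping, "allOf" members are walked with local
# definitions taking precedence over referenced ones, malformed values are
# skipped, and the first occurrence of a duplicate name wins. The
# inheritance-aware variants (stay_within_tablename / stay_within_model) are
# omitted here.
def _properties_items_sketch(schema, schemas, seen=None):
    seen = set() if seen is None else seen
    if not isinstance(schema, dict):
        return
    ref = schema.get("$ref")
    if isinstance(ref, str):
        ref_schema = schemas.get(ref.rsplit("/", 1)[-1])
        if ref_schema is not None:
            yield from _properties_items_sketch(ref_schema, schemas, seen)
    all_of = schema.get("allOf")
    if isinstance(all_of, list):
        members = [m for m in all_of if isinstance(m, dict)]
        local = [m for m in members if "$ref" not in m]
        referenced = [m for m in members if "$ref" in m]
        for member in local + referenced:
            yield from _properties_items_sketch(member, schemas, seen)
    properties = schema.get("properties")
    if isinstance(properties, dict):
        for name, prop_schema in properties.items():
            if name not in seen:
                seen.add(name)
                yield name, prop_schema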
| 31.970588 | 88 | 0.442196 | 2,182 | 26,088 | 5.189276 | 0.043538 | 0.089376 | 0.040802 | 0.05926 | 0.936678 | 0.912037 | 0.887309 | 0.85525 | 0.817451 | 0.801996 | 0 | 0.018586 | 0.389528 | 26,088 | 815 | 89 | 32.009816 | 0.69239 | 0.065202 | 0 | 0.674451 | 0 | 0 | 0.293557 | 0.038326 | 0 | 0 | 0 | 0 | 0.013736 | 1 | 0.013736 | false | 0 | 0.002747 | 0 | 0.016484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed3869b05df911781461eb12b8b9aaadb79915b4 | 26 | py | Python | kolizei_api/__init__.py | Foxdogface/kolizei_api | 68b7005669ba2c0de8746910e0b80eda2cd09511 | [
"MIT"
] | null | null | null | kolizei_api/__init__.py | Foxdogface/kolizei_api | 68b7005669ba2c0de8746910e0b80eda2cd09511 | [
"MIT"
] | null | null | null | kolizei_api/__init__.py | Foxdogface/kolizei_api | 68b7005669ba2c0de8746910e0b80eda2cd09511 | [
"MIT"
] | null | null | null | from .kolizei_api import * | 26 | 26 | 0.807692 | 4 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed460d5267ce4f877edb9e37ccdeb7a8c0af9da7 | 29,709 | py | Python | tests/test_builder.py | vblagoje/datasets | d21457e4f27e024b6c4479237d2d87073c56ef55 | [
"Apache-2.0"
] | 1 | 2021-04-13T09:57:56.000Z | 2021-04-13T09:57:56.000Z | tests/test_builder.py | vblagoje/datasets | d21457e4f27e024b6c4479237d2d87073c56ef55 | [
"Apache-2.0"
] | null | null | null | tests/test_builder.py | vblagoje/datasets | d21457e4f27e024b6c4479237d2d87073c56ef55 | [
"Apache-2.0"
] | 1 | 2021-05-12T17:31:41.000Z | 2021-05-12T17:31:41.000Z | import os
import tempfile
import types
from unittest import TestCase
import numpy as np
from datasets.arrow_dataset import Dataset
from datasets.arrow_writer import ArrowWriter
from datasets.builder import FORCE_REDOWNLOAD, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
from datasets.dataset_dict import DatasetDict
from datasets.features import Features, Value
from datasets.info import DatasetInfo, PostProcessedInfo
from datasets.splits import Split, SplitDict, SplitGenerator, SplitInfo
from .utils import require_faiss
class DummyBuilder(DatasetBuilder):
def _info(self):
return DatasetInfo(features=Features({"text": Value("string")}))
def _split_generators(self, dl_manager):
return [SplitGenerator(name=Split.TRAIN)]
def _prepare_split(self, split_generator, **kwargs):
fname = "{}-{}.arrow".format(self.name, split_generator.name)
writer = ArrowWriter(features=self.info.features, path=os.path.join(self._cache_dir, fname))
writer.write_batch({"text": ["foo"] * 100})
num_examples, num_bytes = writer.finalize()
split_generator.split_info.num_examples = num_examples
split_generator.split_info.num_bytes = num_bytes
class DummyGeneratorBasedBuilder(GeneratorBasedBuilder):
def _info(self):
return DatasetInfo(features=Features({"text": Value("string")}))
def _split_generators(self, dl_manager):
return [SplitGenerator(name=Split.TRAIN)]
def _generate_examples(self):
for i in range(100):
yield i, {"text": "foo"}
class DummyGeneratorBasedBuilderWithIntegers(GeneratorBasedBuilder):
def _info(self):
return DatasetInfo(features=Features({"id": Value("int8")}))
def _split_generators(self, dl_manager):
return [SplitGenerator(name=Split.TRAIN)]
def _generate_examples(self):
for i in range(100):
yield i, {"id": i}
class DummyGeneratorBasedBuilderWithConfigConfig(BuilderConfig):
def __init__(self, content="foo", times=2, *args, **kwargs):
super().__init__(*args, **kwargs)
        self.content = content
        self.times = times
class DummyGeneratorBasedBuilderWithConfig(GeneratorBasedBuilder):
BUILDER_CONFIG_CLASS = DummyGeneratorBasedBuilderWithConfigConfig
def _info(self):
return DatasetInfo(features=Features({"text": Value("string")}))
def _split_generators(self, dl_manager):
return [SplitGenerator(name=Split.TRAIN)]
def _generate_examples(self):
for i in range(100):
yield i, {"text": self.config.content * self.config.times}
class BuilderTest(TestCase):
def test_as_dataset(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
os.makedirs(dummy_builder.cache_dir)
dummy_builder.info.splits = SplitDict()
dummy_builder.info.splits.add(SplitInfo("train", num_examples=10))
dummy_builder.info.splits.add(SplitInfo("test", num_examples=10))
for split in dummy_builder.info.splits:
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"dummy_builder-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 10})
writer.finalize()
dsets = dummy_builder.as_dataset()
self.assertIsInstance(dsets, DatasetDict)
self.assertListEqual(list(dsets.keys()), ["train", "test"])
self.assertEqual(len(dsets["train"]), 10)
self.assertEqual(len(dsets["test"]), 10)
self.assertDictEqual(dsets["train"].features, Features({"text": Value("string")}))
self.assertDictEqual(dsets["test"].features, Features({"text": Value("string")}))
self.assertListEqual(dsets["train"].column_names, ["text"])
self.assertListEqual(dsets["test"].column_names, ["text"])
del dsets
dset = dummy_builder.as_dataset("train")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train")
self.assertEqual(len(dset), 10)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
del dset
dset = dummy_builder.as_dataset("train+test[:30%]")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train+test[:30%]")
self.assertEqual(len(dset), 13)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
del dset
def test_download_and_prepare(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dummy_builder-train.arrow"))
)
self.assertDictEqual(dummy_builder.info.features, Features({"text": Value("string")}))
self.assertEqual(dummy_builder.info.splits["train"].num_examples, 100)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dataset_info.json"))
)
def test_as_dataset_with_post_process(self):
def _post_process(self, dataset, resources_paths):
def char_tokenize(example):
return {"tokens": list(example["text"])}
return dataset.map(char_tokenize, cache_file_name=resources_paths["tokenized_dataset"])
def _post_processing_resources(self, split):
return {"tokenized_dataset": "tokenized_dataset-{split}.arrow".format(split=split)}
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder.info.post_processed = PostProcessedInfo(
features=Features({"text": Value("string"), "tokens": [Value("string")]})
)
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
dummy_builder._post_processing_resources = types.MethodType(_post_processing_resources, dummy_builder)
os.makedirs(dummy_builder.cache_dir)
dummy_builder.info.splits = SplitDict()
dummy_builder.info.splits.add(SplitInfo("train", num_examples=10))
dummy_builder.info.splits.add(SplitInfo("test", num_examples=10))
for split in dummy_builder.info.splits:
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"dummy_builder-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 10})
writer.finalize()
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"tokenized_dataset-{split}.arrow"),
features=Features({"text": Value("string"), "tokens": [Value("string")]}),
)
writer.write_batch({"text": ["foo"] * 10, "tokens": [list("foo")] * 10})
writer.finalize()
dsets = dummy_builder.as_dataset()
self.assertIsInstance(dsets, DatasetDict)
self.assertListEqual(list(dsets.keys()), ["train", "test"])
self.assertEqual(len(dsets["train"]), 10)
self.assertEqual(len(dsets["test"]), 10)
self.assertDictEqual(
dsets["train"].features, Features({"text": Value("string"), "tokens": [Value("string")]})
)
self.assertDictEqual(
dsets["test"].features, Features({"text": Value("string"), "tokens": [Value("string")]})
)
self.assertListEqual(dsets["train"].column_names, ["text", "tokens"])
self.assertListEqual(dsets["test"].column_names, ["text", "tokens"])
del dsets
dset = dummy_builder.as_dataset("train")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train")
self.assertEqual(len(dset), 10)
self.assertDictEqual(dset.features, Features({"text": Value("string"), "tokens": [Value("string")]}))
self.assertListEqual(dset.column_names, ["text", "tokens"])
self.assertGreater(dummy_builder.info.post_processing_size, 0)
self.assertGreater(
dummy_builder.info.post_processed.resources_checksums["train"]["tokenized_dataset"]["num_bytes"], 0
)
del dset
dset = dummy_builder.as_dataset("train+test[:30%]")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train+test[:30%]")
self.assertEqual(len(dset), 13)
self.assertDictEqual(dset.features, Features({"text": Value("string"), "tokens": [Value("string")]}))
self.assertListEqual(dset.column_names, ["text", "tokens"])
del dset
def _post_process(self, dataset, resources_paths):
return dataset.select([0, 1], keep_in_memory=True)
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
os.makedirs(dummy_builder.cache_dir)
dummy_builder.info.splits = SplitDict()
dummy_builder.info.splits.add(SplitInfo("train", num_examples=10))
dummy_builder.info.splits.add(SplitInfo("test", num_examples=10))
for split in dummy_builder.info.splits:
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"dummy_builder-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 10})
writer.finalize()
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"small_dataset-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 2})
writer.finalize()
dsets = dummy_builder.as_dataset()
self.assertIsInstance(dsets, DatasetDict)
self.assertListEqual(list(dsets.keys()), ["train", "test"])
self.assertEqual(len(dsets["train"]), 2)
self.assertEqual(len(dsets["test"]), 2)
self.assertDictEqual(dsets["train"].features, Features({"text": Value("string")}))
self.assertDictEqual(dsets["test"].features, Features({"text": Value("string")}))
self.assertListEqual(dsets["train"].column_names, ["text"])
self.assertListEqual(dsets["test"].column_names, ["text"])
del dsets
dset = dummy_builder.as_dataset("train")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train")
self.assertEqual(len(dset), 2)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
del dset
dset = dummy_builder.as_dataset("train+test[:30%]")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train+test[:30%]")
self.assertEqual(len(dset), 2)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
del dset
@require_faiss
def test_as_dataset_with_post_process_with_index(self):
def _post_process(self, dataset, resources_paths):
if os.path.exists(resources_paths["index"]):
dataset.load_faiss_index("my_index", resources_paths["index"])
return dataset
else:
dataset.add_faiss_index_from_external_arrays(
external_arrays=np.ones((len(dataset), 8)), string_factory="Flat", index_name="my_index"
)
dataset.save_faiss_index("my_index", resources_paths["index"])
return dataset
def _post_processing_resources(self, split):
return {"index": "Flat-{split}.faiss".format(split=split)}
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
dummy_builder._post_processing_resources = types.MethodType(_post_processing_resources, dummy_builder)
os.makedirs(dummy_builder.cache_dir)
dummy_builder.info.splits = SplitDict()
dummy_builder.info.splits.add(SplitInfo("train", num_examples=10))
dummy_builder.info.splits.add(SplitInfo("test", num_examples=10))
for split in dummy_builder.info.splits:
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"dummy_builder-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 10})
writer.finalize()
writer = ArrowWriter(
path=os.path.join(dummy_builder.cache_dir, f"small_dataset-{split}.arrow"),
features=Features({"text": Value("string")}),
)
writer.write_batch({"text": ["foo"] * 2})
writer.finalize()
dsets = dummy_builder.as_dataset()
self.assertIsInstance(dsets, DatasetDict)
self.assertListEqual(list(dsets.keys()), ["train", "test"])
self.assertEqual(len(dsets["train"]), 10)
self.assertEqual(len(dsets["test"]), 10)
self.assertDictEqual(dsets["train"].features, Features({"text": Value("string")}))
self.assertDictEqual(dsets["test"].features, Features({"text": Value("string")}))
self.assertListEqual(dsets["train"].column_names, ["text"])
self.assertListEqual(dsets["test"].column_names, ["text"])
self.assertListEqual(dsets["train"].list_indexes(), ["my_index"])
self.assertListEqual(dsets["test"].list_indexes(), ["my_index"])
self.assertGreater(dummy_builder.info.post_processing_size, 0)
self.assertGreater(dummy_builder.info.post_processed.resources_checksums["train"]["index"]["num_bytes"], 0)
del dsets
dset = dummy_builder.as_dataset("train")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train")
self.assertEqual(len(dset), 10)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
self.assertListEqual(dset.list_indexes(), ["my_index"])
del dset
dset = dummy_builder.as_dataset("train+test[:30%]")
self.assertIsInstance(dset, Dataset)
self.assertEqual(dset.split, "train+test[:30%]")
self.assertEqual(len(dset), 13)
self.assertDictEqual(dset.features, Features({"text": Value("string")}))
self.assertListEqual(dset.column_names, ["text"])
self.assertListEqual(dset.list_indexes(), ["my_index"])
del dset
def test_download_and_prepare_with_post_process(self):
def _post_process(self, dataset, resources_paths):
def char_tokenize(example):
return {"tokens": list(example["text"])}
return dataset.map(char_tokenize, cache_file_name=resources_paths["tokenized_dataset"])
def _post_processing_resources(self, split):
return {"tokenized_dataset": "tokenized_dataset-{split}.arrow".format(split=split)}
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder.info.post_processed = PostProcessedInfo(
features=Features({"text": Value("string"), "tokens": [Value("string")]})
)
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
dummy_builder._post_processing_resources = types.MethodType(_post_processing_resources, dummy_builder)
dummy_builder.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dummy_builder-train.arrow"))
)
self.assertDictEqual(dummy_builder.info.features, Features({"text": Value("string")}))
self.assertDictEqual(
dummy_builder.info.post_processed.features,
Features({"text": Value("string"), "tokens": [Value("string")]}),
)
self.assertEqual(dummy_builder.info.splits["train"].num_examples, 100)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dataset_info.json"))
)
def _post_process(self, dataset, resources_paths):
return dataset.select([0, 1], keep_in_memory=True)
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
dummy_builder.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dummy_builder-train.arrow"))
)
self.assertDictEqual(dummy_builder.info.features, Features({"text": Value("string")}))
self.assertIsNone(dummy_builder.info.post_processed)
self.assertEqual(dummy_builder.info.splits["train"].num_examples, 100)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dataset_info.json"))
)
def _post_process(self, dataset, resources_paths):
if os.path.exists(resources_paths["index"]):
dataset.load_faiss_index("my_index", resources_paths["index"])
return dataset
else:
dataset = dataset.add_faiss_index_from_external_arrays(
external_arrays=np.ones((len(dataset), 8)), string_factory="Flat", index_name="my_index"
)
dataset.save_faiss_index("my_index", resources_paths["index"])
return dataset
def _post_processing_resources(self, split):
return {"index": "Flat-{split}.faiss".format(split=split)}
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder._post_process = types.MethodType(_post_process, dummy_builder)
dummy_builder._post_processing_resources = types.MethodType(_post_processing_resources, dummy_builder)
dummy_builder.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dummy_builder-train.arrow"))
)
self.assertDictEqual(dummy_builder.info.features, Features({"text": Value("string")}))
self.assertIsNone(dummy_builder.info.post_processed)
self.assertEqual(dummy_builder.info.splits["train"].num_examples, 100)
self.assertTrue(
os.path.exists(os.path.join(tmp_dir, "dummy_builder", "dummy", "0.0.0", "dataset_info.json"))
)
def test_error_download_and_prepare(self):
def _prepare_split(self, split_generator, **kwargs):
raise ValueError()
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder._prepare_split = types.MethodType(_prepare_split, dummy_builder)
self.assertRaises(
ValueError, dummy_builder.download_and_prepare, try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD
)
self.assertRaises(AssertionError, dummy_builder.as_dataset)
def test_generator_based_download_and_prepare(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy")
dummy_builder.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
self.assertTrue(
os.path.exists(
os.path.join(
tmp_dir,
"dummy_generator_based_builder",
"dummy",
"0.0.0",
"dummy_generator_based_builder-train.arrow",
)
)
)
self.assertDictEqual(dummy_builder.info.features, Features({"text": Value("string")}))
self.assertEqual(dummy_builder.info.splits["train"].num_examples, 100)
self.assertTrue(
os.path.exists(
os.path.join(tmp_dir, "dummy_generator_based_builder", "dummy", "0.0.0", "dataset_info.json")
)
)
def test_cache_dir_for_data_files(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_data1 = os.path.join(tmp_dir, "dummy_data1.txt")
with open(dummy_data1, "w", encoding="utf-8") as f:
f.writelines("foo bar")
dummy_data2 = os.path.join(tmp_dir, "dummy_data2.txt")
with open(dummy_data2, "w", encoding="utf-8") as f:
f.writelines("foo bar\n")
dummy_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy", data_files=dummy_data1)
other_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy", data_files=dummy_data1)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy", data_files=[dummy_data1])
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": dummy_data1}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={Split.TRAIN: dummy_data1}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": [dummy_data1]}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"test": dummy_data1}
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy", data_files=dummy_data2)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy", data_files=[dummy_data2])
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files=[dummy_data1, dummy_data2]
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
dummy_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files=[dummy_data1, dummy_data2]
)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files=[dummy_data1, dummy_data2]
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files=[dummy_data2, dummy_data1]
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
dummy_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": dummy_data1, "test": dummy_data2}
)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": dummy_data1, "test": dummy_data2}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": [dummy_data1], "test": dummy_data2}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"test": dummy_data2, "train": dummy_data1}
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": dummy_data1, "validation": dummy_data2}
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir, name="dummy", data_files={"train": [dummy_data1, dummy_data2], "test": dummy_data2}
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
def test_cache_dir_for_features(self):
with tempfile.TemporaryDirectory() as tmp_dir:
f1 = Features({"id": Value("int8")})
f2 = Features({"id": Value("int32")})
dummy_builder = DummyGeneratorBasedBuilderWithIntegers(cache_dir=tmp_dir, name="dummy", features=f1)
other_builder = DummyGeneratorBasedBuilderWithIntegers(cache_dir=tmp_dir, name="dummy", features=f1)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilderWithIntegers(cache_dir=tmp_dir, name="dummy", features=f2)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
def test_cache_dir_for_config_kwargs(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dummy_builder = DummyGeneratorBasedBuilderWithConfig(
cache_dir=tmp_dir, name="dummy", content="foo", times=2
)
other_builder = DummyGeneratorBasedBuilderWithConfig(
cache_dir=tmp_dir, name="dummy", times=2, content="foo"
)
self.assertEqual(dummy_builder.cache_dir, other_builder.cache_dir)
self.assertIn("content=foo", dummy_builder.cache_dir)
self.assertIn("times=2", dummy_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilderWithConfig(
cache_dir=tmp_dir, name="dummy", content="bar", times=2
)
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
other_builder = DummyGeneratorBasedBuilderWithConfig(cache_dir=tmp_dir, name="dummy", content="foo")
self.assertNotEqual(dummy_builder.cache_dir, other_builder.cache_dir)
def test_custom_writer_batch_size(self):
with tempfile.TemporaryDirectory() as tmp_dir:
self.assertEqual(DummyGeneratorBasedBuilder._writer_batch_size, None)
dummy_builder1 = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir,
name="dummy1",
)
DummyGeneratorBasedBuilder._writer_batch_size = 5
dummy_builder2 = DummyGeneratorBasedBuilder(
cache_dir=tmp_dir,
name="dummy2",
)
dummy_builder3 = DummyGeneratorBasedBuilder(cache_dir=tmp_dir, name="dummy3", writer_batch_size=10)
dummy_builder1.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
dummy_builder2.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
dummy_builder3.download_and_prepare(try_from_hf_gcs=False, download_mode=FORCE_REDOWNLOAD)
dataset1 = dummy_builder1.as_dataset("train")
self.assertEqual(len(dataset1._data[0].chunks), 1)
dataset2 = dummy_builder2.as_dataset("train")
self.assertEqual(len(dataset2._data[0].chunks), 20)
dataset3 = dummy_builder3.as_dataset("train")
self.assertEqual(len(dataset3._data[0].chunks), 10)
del dataset1, dataset2, dataset3
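

# --- Illustrative note (not part of the original tests): the chunk counts
# asserted in test_custom_writer_batch_size follow directly from the 100
# generated examples — no explicit batch size yields a single Arrow chunk,
# batch size 5 yields 100 / 5 = 20 chunks, and batch size 10 yields
# 100 / 10 = 10 chunks.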
| 51.578125 | 119 | 0.635094 | 3,177 | 29,709 | 5.681146 | 0.061379 | 0.093745 | 0.045709 | 0.030251 | 0.888415 | 0.875339 | 0.858496 | 0.842484 | 0.829907 | 0.819602 | 0 | 0.010052 | 0.246592 | 29,709 | 575 | 120 | 51.667826 | 0.796319 | 0 | 0 | 0.653307 | 0 | 0 | 0.082029 | 0.015282 | 0 | 0 | 0 | 0 | 0.264529 | 1 | 0.074148 | false | 0 | 0.026052 | 0.032064 | 0.158317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed675267c658c38926f3aeaac53e3a83926d869c | 27,407 | py | Python | tests/components/tod/test_binary_sensor.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 3 | 2021-11-22T22:37:43.000Z | 2022-03-17T00:55:28.000Z | tests/components/tod/test_binary_sensor.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 14 | 2022-01-13T04:27:21.000Z | 2022-03-06T20:30:43.000Z | tests/components/tod/test_binary_sensor.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 3 | 2022-01-02T18:49:54.000Z | 2022-01-25T02:03:54.000Z | """Test Times of the Day Binary Sensor."""
from datetime import datetime, timedelta
from freezegun import freeze_time
import pytest
from homeassistant.const import STATE_OFF, STATE_ON
import homeassistant.core as ha
from homeassistant.helpers.sun import get_astral_event_date, get_astral_event_next
from homeassistant.setup import async_setup_component
import homeassistant.util.dt as dt_util
from tests.common import assert_setup_component
@pytest.fixture(autouse=True)
def mock_legacy_time(legacy_patchable_time):
"""Make time patchable for all the tests."""
yield
@pytest.fixture
def hass_time_zone():
"""Return default hass timezone."""
return "US/Pacific"
@pytest.fixture(autouse=True)
def setup_fixture(hass, hass_time_zone):
"""Set up things to be run when tests are started."""
hass.config.latitude = 50.27583
hass.config.longitude = 18.98583
hass.config.set_time_zone(hass_time_zone)
@pytest.fixture
def hass_tz_info(hass):
"""Return timezone info for the hass timezone."""
return dt_util.get_time_zone(hass.config.time_zone)
async def test_setup(hass):
"""Test the setup."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Early Morning",
"after": "sunrise",
"after_offset": "-02:00",
"before": "7:00",
"before_offset": "1:00",
},
{
"platform": "tod",
"name": "Morning",
"after": "sunrise",
"before": "12:00",
},
]
}
with assert_setup_component(2):
assert await async_setup_component(hass, "binary_sensor", config)
async def test_setup_no_sensors(hass):
"""Test setup with no sensors."""
with assert_setup_component(0):
assert await async_setup_component(
hass, "binary_sensor", {"binary_sensor": {"platform": "tod"}}
)
@freeze_time("2019-01-10 18:43:00-08:00")
async def test_in_period_on_start(hass):
"""Test simple setting."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Evening",
"after": "18:00",
"before": "22:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.evening")
assert state.state == STATE_ON
@freeze_time("2019-01-10 22:30:00-08:00")
async def test_midnight_turnover_before_midnight_inside_period(hass):
"""Test midnight turnover setting before midnight inside period ."""
config = {
"binary_sensor": [
{"platform": "tod", "name": "Night", "after": "22:00", "before": "5:00"}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
async def test_midnight_turnover_after_midnight_inside_period(
hass, freezer, hass_tz_info
):
"""Test midnight turnover setting before midnight inside period ."""
test_time = datetime(2019, 1, 10, 21, 0, 0, tzinfo=hass_tz_info)
config = {
"binary_sensor": [
{"platform": "tod", "name": "Night", "after": "22:00", "before": "5:00"}
]
}
freezer.move_to(test_time)
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF
await hass.async_block_till_done()
freezer.move_to(test_time + timedelta(hours=1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
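

# --- Illustrative note (not part of the original tests): the three-step
# pattern above recurs in every freezer-based test below — advance the frozen
# clock, fire a time-changed event so the sensor re-evaluates, then let the
# event loop settle. Expressed as a hypothetical helper:
#
#     async def advance_time(hass, freezer, target):
#         freezer.move_to(target)
#         hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
#         await hass.async_block_till_done()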
@freeze_time("2019-01-10 20:30:00-08:00")
async def test_midnight_turnover_before_midnight_outside_period(hass):
"""Test midnight turnover setting before midnight outside period."""
config = {
"binary_sensor": [
{"platform": "tod", "name": "Night", "after": "22:00", "before": "5:00"}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF
@freeze_time("2019-01-10 10:00:00-08:00")
async def test_after_happens_tomorrow(hass):
"""Test when both before and after are in the future, and after is later than before."""
config = {
"binary_sensor": [
{"platform": "tod", "name": "Night", "after": "23:00", "before": "12:00"}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
async def test_midnight_turnover_after_midnight_outside_period(
hass, freezer, hass_tz_info
):
"""Test midnight turnover setting before midnight inside period ."""
test_time = datetime(2019, 1, 10, 20, 0, 0, tzinfo=hass_tz_info)
config = {
"binary_sensor": [
{"platform": "tod", "name": "Night", "after": "22:00", "before": "5:00"}
]
}
freezer.move_to(test_time)
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF
switchover_time = datetime(2019, 1, 11, 4, 59, 0, tzinfo=hass_tz_info)
freezer.move_to(switchover_time)
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
freezer.move_to(switchover_time + timedelta(minutes=1, seconds=1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF


async def test_from_sunrise_to_sunset(hass, freezer, hass_tz_info):
    """Test period from sunrise to sunset."""
    test_time = datetime(2019, 1, 12, tzinfo=hass_tz_info)
    sunrise = dt_util.as_local(
        get_astral_event_date(hass, "sunrise", dt_util.as_utc(test_time))
    )
    sunset = dt_util.as_local(
        get_astral_event_date(hass, "sunset", dt_util.as_utc(test_time))
    )
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Day",
                "after": "sunrise",
                "before": "sunset",
            }
        ]
    }
    entity_id = "binary_sensor.day"
    freezer.move_to(sunrise + timedelta(seconds=-1))
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunrise)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunrise + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset + timedelta(seconds=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunset + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF


async def test_from_sunset_to_sunrise(hass, freezer, hass_tz_info):
    """Test period from sunset to sunrise."""
    test_time = datetime(2019, 1, 12, tzinfo=hass_tz_info)
    sunset = dt_util.as_local(get_astral_event_date(hass, "sunset", test_time))
    sunrise = dt_util.as_local(get_astral_event_next(hass, "sunrise", sunset))
    # sunrise is the first sunrise after this sunset, so the tested period
    # spans midnight
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Night",
                "after": "sunset",
                "before": "sunrise",
            }
        ]
    }
    entity_id = "binary_sensor.night"
    freezer.move_to(sunset + timedelta(seconds=-1))
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunset)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset + timedelta(minutes=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunrise + timedelta(minutes=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunrise)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunrise + timedelta(minutes=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF


async def test_offset(hass, freezer, hass_tz_info):
    """Test offset."""
    after = datetime(2019, 1, 10, 18, 0, 0, tzinfo=hass_tz_info) + timedelta(
        hours=1, minutes=34
    )
    before = datetime(2019, 1, 10, 22, 0, 0, tzinfo=hass_tz_info) + timedelta(
        hours=1, minutes=45
    )
    entity_id = "binary_sensor.evening"
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Evening",
                "after": "18:00",
                "after_offset": "1:34",
                "before": "22:00",
                "before_offset": "1:45",
            }
        ]
    }
    freezer.move_to(after + timedelta(seconds=-1))
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(after)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(before + timedelta(seconds=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(before)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(before + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF


async def test_offset_overnight(hass, freezer, hass_tz_info):
    """Test offset overnight."""
    after = datetime(2019, 1, 10, 18, 0, 0, tzinfo=hass_tz_info) + timedelta(
        hours=1, minutes=34
    )
    entity_id = "binary_sensor.evening"
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Evening",
                "after": "18:00",
                "after_offset": "1:34",
                "before": "22:00",
                "before_offset": "3:00",
            }
        ]
    }
    freezer.move_to(after + timedelta(seconds=-1))
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(after)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON


async def test_norwegian_case_winter(hass, freezer, hass_tz_info):
    """Test location in Norway where the sun doesn't rise in winter."""
    hass.config.latitude = 69.6
    hass.config.longitude = 18.8
    test_time = datetime(2010, 1, 1, tzinfo=hass_tz_info)
    sunrise = dt_util.as_local(
        get_astral_event_next(hass, "sunrise", dt_util.as_utc(test_time))
    )
    sunset = dt_util.as_local(
        get_astral_event_next(hass, "sunset", dt_util.as_utc(test_time))
    )
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Day",
                "after": "sunrise",
                "before": "sunset",
            }
        ]
    }
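    # Editorial note (based on astral's ephemeris, not asserted by the test):
    # at latitude 69.6 the sun stays below the horizon around New Year, so the
    # "next" sunrise and sunset resolved above fall weeks after test_time.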
entity_id = "binary_sensor.day"
freezer.move_to(test_time)
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_OFF
freezer.move_to(sunrise + timedelta(seconds=-1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_OFF
freezer.move_to(sunrise)
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_ON
freezer.move_to(sunrise + timedelta(seconds=1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_ON
freezer.move_to(sunset + timedelta(seconds=-1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_ON
freezer.move_to(sunset)
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_OFF
freezer.move_to(sunset + timedelta(seconds=1))
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.state == STATE_OFF


async def test_norwegian_case_summer(hass, freezer, hass_tz_info):
    """Test location in Norway where the sun doesn't set in summer."""
    hass.config.latitude = 69.6
    hass.config.longitude = 18.8
    hass.config.elevation = 10.0
    test_time = datetime(2010, 6, 1, tzinfo=hass_tz_info)
    sunrise = dt_util.as_local(
        get_astral_event_next(hass, "sunrise", dt_util.as_utc(test_time))
    )
    sunset = dt_util.as_local(
        get_astral_event_next(hass, "sunset", dt_util.as_utc(sunrise))
    )
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Day",
                "after": "sunrise",
                "before": "sunset",
            }
        ]
    }
    entity_id = "binary_sensor.day"
    freezer.move_to(test_time)
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunrise + timedelta(seconds=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunrise)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunrise + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset + timedelta(seconds=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunset + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF


async def test_sun_offset(hass, freezer, hass_tz_info):
    """Test sun event with offset."""
    test_time = datetime(2019, 1, 12, tzinfo=hass_tz_info)
    sunrise = dt_util.as_local(
        get_astral_event_date(hass, "sunrise", dt_util.as_utc(test_time))
        + timedelta(hours=-1, minutes=-30)
    )
    sunset = dt_util.as_local(
        get_astral_event_date(hass, "sunset", dt_util.as_utc(test_time))
        + timedelta(hours=1, minutes=30)
    )
    config = {
        "binary_sensor": [
            {
                "platform": "tod",
                "name": "Day",
                "after": "sunrise",
                "after_offset": "-1:30",
                "before": "sunset",
                "before_offset": "1:30",
            }
        ]
    }
    entity_id = "binary_sensor.day"
    freezer.move_to(sunrise + timedelta(seconds=-1))
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunrise)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunrise + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    freezer.move_to(sunset + timedelta(seconds=-1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON
    await hass.async_block_till_done()
    freezer.move_to(sunset)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    freezer.move_to(sunset + timedelta(seconds=1))
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_OFF
    test_time = test_time + timedelta(days=1)
    sunrise = dt_util.as_local(
        get_astral_event_date(hass, "sunrise", dt_util.as_utc(test_time))
        + timedelta(hours=-1, minutes=-30)
    )
    freezer.move_to(sunrise)
    hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: dt_util.utcnow()})
    await hass.async_block_till_done()
    state = hass.states.get(entity_id)
    assert state.state == STATE_ON


async def test_dst(hass, freezer, hass_tz_info):
    """Test that updates are scheduled correctly across a DST transition."""
    hass.config.time_zone = "CET"
    dt_util.set_default_time_zone(dt_util.get_time_zone("CET"))
    test_time = datetime(2019, 3, 30, 3, 0, 0, tzinfo=hass_tz_info)
    config = {
        "binary_sensor": [
            {"platform": "tod", "name": "Day", "after": "2:30", "before": "2:40"}
        ]
    }
    # Test DST:
    # after 2019-03-30 03:00 CET the next update should be scheduled
    # at 3:30, not 2:30, local time
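    # Illustration (an editorial sketch assuming standard zoneinfo data, not
    # part of the original test): clocks jump from 02:00 CET to 03:00 CEST on
    # 2019-03-31, so 02:30 local time does not exist on that day:
    #
    #   cet = dt_util.get_time_zone("CET")
    #   datetime(2019, 3, 31, 3, 30, tzinfo=cet).utcoffset()  # 2:00, i.e. CEST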
entity_id = "binary_sensor.day"
freezer.move_to(test_time)
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get(entity_id)
assert state.attributes["after"] == "2019-03-31T03:30:00+02:00"
assert state.attributes["before"] == "2019-03-31T03:40:00+02:00"
assert state.attributes["next_update"] == "2019-03-31T03:30:00+02:00"
assert state.state == STATE_OFF
@freeze_time("2019-01-10 18:43:00")
@pytest.mark.parametrize("hass_time_zone", ("UTC",))
async def test_simple_before_after_does_not_loop_utc_not_in_range(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Night",
"before": "06:00",
"after": "22:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF
assert state.attributes["after"] == "2019-01-10T22:00:00+00:00"
assert state.attributes["before"] == "2019-01-11T06:00:00+00:00"
assert state.attributes["next_update"] == "2019-01-10T22:00:00+00:00"
@freeze_time("2019-01-10 22:43:00")
@pytest.mark.parametrize("hass_time_zone", ("UTC",))
async def test_simple_before_after_does_not_loop_utc_in_range(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Night",
"before": "06:00",
"after": "22:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
assert state.attributes["after"] == "2019-01-10T22:00:00+00:00"
assert state.attributes["before"] == "2019-01-11T06:00:00+00:00"
assert state.attributes["next_update"] == "2019-01-11T06:00:00+00:00"
@freeze_time("2019-01-11 06:00:00")
@pytest.mark.parametrize("hass_time_zone", ("UTC",))
async def test_simple_before_after_does_not_loop_utc_fire_at_before(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Night",
"before": "06:00",
"after": "22:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_OFF
assert state.attributes["after"] == "2019-01-11T22:00:00+00:00"
assert state.attributes["before"] == "2019-01-12T06:00:00+00:00"
assert state.attributes["next_update"] == "2019-01-11T22:00:00+00:00"
@freeze_time("2019-01-10 22:00:00")
@pytest.mark.parametrize("hass_time_zone", ("UTC",))
async def test_simple_before_after_does_not_loop_utc_fire_at_after(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Night",
"before": "06:00",
"after": "22:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.night")
assert state.state == STATE_ON
assert state.attributes["after"] == "2019-01-10T22:00:00+00:00"
assert state.attributes["before"] == "2019-01-11T06:00:00+00:00"
assert state.attributes["next_update"] == "2019-01-11T06:00:00+00:00"
@freeze_time("2019-01-10 22:00:00")
@pytest.mark.parametrize("hass_time_zone", ("UTC",))
async def test_simple_before_after_does_not_loop_utc_both_before_now(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Morning",
"before": "08:00",
"after": "00:00",
}
]
}
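    # At the frozen time of 22:00 both of today's boundaries (00:00 and 08:00)
    # are already in the past, so the sensor is off and the next update is the
    # start of tomorrow's window.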
    await async_setup_component(hass, "binary_sensor", config)
    await hass.async_block_till_done()
    state = hass.states.get("binary_sensor.morning")
    assert state.state == STATE_OFF
    assert state.attributes["after"] == "2019-01-11T00:00:00+00:00"
    assert state.attributes["before"] == "2019-01-11T08:00:00+00:00"
    assert state.attributes["next_update"] == "2019-01-11T00:00:00+00:00"
@freeze_time("2019-01-10 17:43:00+01:00")
@pytest.mark.parametrize("hass_time_zone", ("Europe/Berlin",))
async def test_simple_before_after_does_not_loop_berlin_not_in_range(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Dark",
"before": "06:00",
"after": "00:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.dark")
assert state.state == STATE_OFF
assert state.attributes["after"] == "2019-01-11T00:00:00+01:00"
assert state.attributes["before"] == "2019-01-11T06:00:00+01:00"
assert state.attributes["next_update"] == "2019-01-11T00:00:00+01:00"
@freeze_time("2019-01-11 00:43:00+01:00")
@pytest.mark.parametrize("hass_time_zone", ("Europe/Berlin",))
async def test_simple_before_after_does_not_loop_berlin_in_range(hass):
"""Test simple before after."""
config = {
"binary_sensor": [
{
"platform": "tod",
"name": "Dark",
"before": "06:00",
"after": "00:00",
}
]
}
await async_setup_component(hass, "binary_sensor", config)
await hass.async_block_till_done()
state = hass.states.get("binary_sensor.dark")
assert state.state == STATE_ON
assert state.attributes["after"] == "2019-01-11T00:00:00+01:00"
assert state.attributes["before"] == "2019-01-11T06:00:00+01:00"
assert state.attributes["next_update"] == "2019-01-11T06:00:00+01:00"
| 34.387704 | 92 | 0.645565 | 3,732 | 27,407 | 4.47776 | 0.052519 | 0.068219 | 0.050266 | 0.068219 | 0.891568 | 0.879062 | 0.8723 | 0.862905 | 0.83843 | 0.826103 | 0 | 0.045246 | 0.219397 | 27,407 | 796 | 93 | 34.430905 | 0.735861 | 0.011822 | 0 | 0.686822 | 0 | 0 | 0.130605 | 0.026236 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.006202 | false | 0 | 0.013953 | 0 | 0.023256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed7e1cc0b261cddd6d1944cc03a504ac09d0467f | 35 | py | Python | build/lib/PyQuantum/TCL/Unitary.py | alexfmsu/pyquantum | 78b09987cbfecf549e67b919bb5cb2046b21ad44 | [
"MIT"
] | null | null | null | build/lib/PyQuantum/TCL/Unitary.py | alexfmsu/pyquantum | 78b09987cbfecf549e67b919bb5cb2046b21ad44 | [
"MIT"
] | null | null | null | build/lib/PyQuantum/TCL/Unitary.py | alexfmsu/pyquantum | 78b09987cbfecf549e67b919bb5cb2046b21ad44 | [
"MIT"
] | 2 | 2020-07-28T08:40:06.000Z | 2022-02-16T23:04:58.000Z | from PyQuantum.TC.Unitary import *
| 17.5 | 34 | 0.8 | 5 | 35 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c3eec1884caef0a92c657edc158a16cc58d4095 | 13,398 | py | Python | tests/components/switch_as_x/test_init.py | mib1185/core | b17d4ac65cde9a27ff6032d70b148792e5eba8df | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | tests/components/switch_as_x/test_init.py | mib1185/core | b17d4ac65cde9a27ff6032d70b148792e5eba8df | [
"Apache-2.0"
] | 24,710 | 2016-04-13T08:27:26.000Z | 2020-03-02T12:59:13.000Z | tests/components/switch_as_x/test_init.py | mib1185/core | b17d4ac65cde9a27ff6032d70b148792e5eba8df | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Tests for the Switch as X."""
from __future__ import annotations
from unittest.mock import patch
import pytest
from homeassistant.components.switch_as_x.const import CONF_TARGET_DOMAIN, DOMAIN
from homeassistant.const import (
    CONF_ENTITY_ID,
    STATE_CLOSED,
    STATE_LOCKED,
    STATE_OFF,
    STATE_ON,
    STATE_OPEN,
    STATE_UNLOCKED,
    Platform,
)
from homeassistant.core import HomeAssistant
from homeassistant.helpers import device_registry as dr, entity_registry as er
from tests.common import MockConfigEntry
PLATFORMS_TO_TEST = (
    Platform.COVER,
    Platform.FAN,
    Platform.LIGHT,
    Platform.LOCK,
    Platform.SIREN,
)
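# Every wrapper platform is exercised by the same parametrized tests below: a
# switch entity is wrapped and exposed as each of these target domains in turn.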
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_config_entry_unregistered_uuid(
hass: HomeAssistant, target_domain: str
) -> None:
"""Test light switch setup from config entry with unknown entity registry id."""
fake_uuid = "a266a680b608c32770e6c45bfe6b8411"
config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: fake_uuid,
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
config_entry.add_to_hass(hass)
assert not await hass.config_entries.async_setup(config_entry.entry_id)
await hass.async_block_till_done()
assert len(hass.states.async_all()) == 0


@pytest.mark.parametrize(
    "target_domain,state_on,state_off",
    (
        (Platform.COVER, STATE_OPEN, STATE_CLOSED),
        (Platform.FAN, STATE_ON, STATE_OFF),
        (Platform.LIGHT, STATE_ON, STATE_OFF),
        (Platform.LOCK, STATE_UNLOCKED, STATE_LOCKED),
        (Platform.SIREN, STATE_ON, STATE_OFF),
    ),
)
async def test_entity_registry_events(
    hass: HomeAssistant, target_domain: str, state_on: str, state_off: str
) -> None:
    """Test entity registry events are tracked."""
    registry = er.async_get(hass)
    registry_entry = registry.async_get_or_create("switch", "test", "unique")
    switch_entity_id = registry_entry.entity_id
    hass.states.async_set(switch_entity_id, STATE_ON)
    config_entry = MockConfigEntry(
        data={},
        domain=DOMAIN,
        options={
            CONF_ENTITY_ID: registry_entry.id,
            CONF_TARGET_DOMAIN: target_domain,
        },
        title="ABC",
    )
    config_entry.add_to_hass(hass)
    assert await hass.config_entries.async_setup(config_entry.entry_id)
    await hass.async_block_till_done()

    assert hass.states.get(f"{target_domain}.abc").state == state_on

    # Change entity_id
    new_switch_entity_id = f"{switch_entity_id}_new"
    registry.async_update_entity(switch_entity_id, new_entity_id=new_switch_entity_id)
    hass.states.async_set(new_switch_entity_id, STATE_OFF)
    await hass.async_block_till_done()

    # Check tracking the new entity_id
    await hass.async_block_till_done()
    assert hass.states.get(f"{target_domain}.abc").state == state_off

    # The old entity_id should no longer be tracked
    hass.states.async_set(switch_entity_id, STATE_ON)
    await hass.async_block_till_done()
    assert hass.states.get(f"{target_domain}.abc").state == state_off

    # Check changing name does not reload the config entry
    with patch(
        "homeassistant.components.switch_as_x.async_unload_entry",
    ) as mock_setup_entry:
        registry.async_update_entity(new_switch_entity_id, name="New name")
        await hass.async_block_till_done()
    mock_setup_entry.assert_not_called()

    # Check removing the entity removes the config entry
    registry.async_remove(new_switch_entity_id)
    await hass.async_block_till_done()

    assert hass.states.get(f"{target_domain}.abc") is None
    assert registry.async_get(f"{target_domain}.abc") is None
    assert len(hass.config_entries.async_entries("switch_as_x")) == 0
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_device_registry_config_entry_1(
hass: HomeAssistant, target_domain: str
) -> None:
"""Test we add our config entry to the tracked switch's device."""
device_registry = dr.async_get(hass)
entity_registry = er.async_get(hass)
switch_config_entry = MockConfigEntry()
device_entry = device_registry.async_get_or_create(
config_entry_id=switch_config_entry.entry_id,
connections={(dr.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
switch_entity_entry = entity_registry.async_get_or_create(
"switch",
"test",
"unique",
config_entry=switch_config_entry,
device_id=device_entry.id,
)
# Add another config entry to the same device
device_registry.async_update_device(
device_entry.id, add_config_entry_id=MockConfigEntry().entry_id
)
switch_as_x_config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: switch_entity_entry.id,
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
switch_as_x_config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(switch_as_x_config_entry.entry_id)
await hass.async_block_till_done()
entity_entry = entity_registry.async_get(f"{target_domain}.abc")
assert entity_entry.device_id == switch_entity_entry.device_id
device_entry = device_registry.async_get(device_entry.id)
assert switch_as_x_config_entry.entry_id in device_entry.config_entries
# Remove the wrapped switch's config entry from the device
device_registry.async_update_device(
device_entry.id, remove_config_entry_id=switch_config_entry.entry_id
)
await hass.async_block_till_done()
await hass.async_block_till_done()
# Check that the switch_as_x config entry is removed from the device
device_entry = device_registry.async_get(device_entry.id)
assert switch_as_x_config_entry.entry_id not in device_entry.config_entries
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_device_registry_config_entry_2(
hass: HomeAssistant, target_domain: str
) -> None:
"""Test we add our config entry to the tracked switch's device."""
device_registry = dr.async_get(hass)
entity_registry = er.async_get(hass)
switch_config_entry = MockConfigEntry()
device_entry = device_registry.async_get_or_create(
config_entry_id=switch_config_entry.entry_id,
connections={(dr.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
switch_entity_entry = entity_registry.async_get_or_create(
"switch",
"test",
"unique",
config_entry=switch_config_entry,
device_id=device_entry.id,
)
switch_as_x_config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: switch_entity_entry.id,
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
switch_as_x_config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(switch_as_x_config_entry.entry_id)
await hass.async_block_till_done()
entity_entry = entity_registry.async_get(f"{target_domain}.abc")
assert entity_entry.device_id == switch_entity_entry.device_id
device_entry = device_registry.async_get(device_entry.id)
assert switch_as_x_config_entry.entry_id in device_entry.config_entries
# Remove the wrapped switch from the device
entity_registry.async_update_entity(switch_entity_entry.entity_id, device_id=None)
await hass.async_block_till_done()
# Check that the switch_as_x config entry is removed from the device
device_entry = device_registry.async_get(device_entry.id)
assert switch_as_x_config_entry.entry_id not in device_entry.config_entries
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_config_entry_entity_id(
hass: HomeAssistant, target_domain: Platform
) -> None:
"""Test light switch setup from config entry with entity id."""
config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: "switch.abc",
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(config_entry.entry_id)
await hass.async_block_till_done()
assert DOMAIN in hass.config.components
state = hass.states.get(f"{target_domain}.abc")
assert state
assert state.state == "unavailable"
# Name copied from config entry title
assert state.name == "ABC"
# Check the light is added to the entity registry
registry = er.async_get(hass)
entity_entry = registry.async_get(f"{target_domain}.abc")
assert entity_entry
assert entity_entry.unique_id == config_entry.entry_id
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_config_entry_uuid(hass: HomeAssistant, target_domain: Platform) -> None:
"""Test light switch setup from config entry with entity registry id."""
registry = er.async_get(hass)
registry_entry = registry.async_get_or_create("switch", "test", "unique")
config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: registry_entry.id,
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(config_entry.entry_id)
await hass.async_block_till_done()
assert hass.states.get(f"{target_domain}.abc")
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_device(hass: HomeAssistant, target_domain: Platform) -> None:
"""Test the entity is added to the wrapped entity's device."""
device_registry = dr.async_get(hass)
entity_registry = er.async_get(hass)
test_config_entry = MockConfigEntry()
device_entry = device_registry.async_get_or_create(
config_entry_id=test_config_entry.entry_id,
connections={(dr.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
switch_entity_entry = entity_registry.async_get_or_create(
"switch", "test", "unique", device_id=device_entry.id
)
switch_as_x_config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: switch_entity_entry.id,
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
switch_as_x_config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(switch_as_x_config_entry.entry_id)
await hass.async_block_till_done()
entity_entry = entity_registry.async_get(f"{target_domain}.abc")
assert entity_entry
assert entity_entry.device_id == switch_entity_entry.device_id
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_setup_and_remove_config_entry(
hass: HomeAssistant,
target_domain: Platform,
) -> None:
"""Test removing a config entry."""
registry = er.async_get(hass)
# Setup the config entry
switch_as_x_config_entry = MockConfigEntry(
data={},
domain=DOMAIN,
options={
CONF_ENTITY_ID: "switch.test",
CONF_TARGET_DOMAIN: target_domain,
},
title="ABC",
)
switch_as_x_config_entry.add_to_hass(hass)
assert await hass.config_entries.async_setup(switch_as_x_config_entry.entry_id)
await hass.async_block_till_done()
# Check the state and entity registry entry are present
assert hass.states.get(f"{target_domain}.abc") is not None
assert registry.async_get(f"{target_domain}.abc") is not None
# Remove the config entry
assert await hass.config_entries.async_remove(switch_as_x_config_entry.entry_id)
await hass.async_block_till_done()
# Check the state and entity registry entry are removed
assert hass.states.get(f"{target_domain}.abc") is None
assert registry.async_get(f"{target_domain}.abc") is None


@pytest.mark.parametrize(
    "hidden_by_before,hidden_by_after",
    (
        (er.RegistryEntryHider.USER, er.RegistryEntryHider.USER),
        (er.RegistryEntryHider.INTEGRATION, None),
    ),
)
@pytest.mark.parametrize("target_domain", PLATFORMS_TO_TEST)
async def test_reset_hidden_by(
    hass: HomeAssistant,
    target_domain: Platform,
    hidden_by_before: er.RegistryEntryHider | None,
    hidden_by_after: er.RegistryEntryHider,
) -> None:
    """Test removing a config entry resets hidden by."""
    registry = er.async_get(hass)

    switch_entity_entry = registry.async_get_or_create("switch", "test", "unique")
    registry.async_update_entity(
        switch_entity_entry.entity_id, hidden_by=hidden_by_before
    )

    # Add the config entry
    switch_as_x_config_entry = MockConfigEntry(
        data={},
        domain=DOMAIN,
        options={
            CONF_ENTITY_ID: switch_entity_entry.id,
            CONF_TARGET_DOMAIN: target_domain,
        },
        title="ABC",
    )
    switch_as_x_config_entry.add_to_hass(hass)

    # Remove the config entry
    assert await hass.config_entries.async_remove(switch_as_x_config_entry.entry_id)
    await hass.async_block_till_done()

    # Check hidden by is reset
    switch_entity_entry = registry.async_get(switch_entity_entry.entity_id)
    assert switch_entity_entry.hidden_by == hidden_by_after
| 33 | 87 | 0.71854 | 1,809 | 13,398 | 4.96241 | 0.079049 | 0.089451 | 0.026067 | 0.036761 | 0.803498 | 0.758494 | 0.725521 | 0.705135 | 0.701236 | 0.656344 | 0 | 0.004069 | 0.192939 | 13,398 | 405 | 88 | 33.081481 | 0.826135 | 0.060681 | 0 | 0.613333 | 0 | 0 | 0.065729 | 0.014394 | 0 | 0 | 0 | 0 | 0.126667 | 1 | 0 | false | 0 | 0.026667 | 0 | 0.026667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c4384e6e78e5371947aaffdd213d732a075eb1e | 27 | py | Python | chaban/actions/__init__.py | ibrag8998/botchaban | 372aaeaa1e9d025bd0f93925ce037bcbf742163c | [
"MIT"
] | 1 | 2020-12-22T08:38:00.000Z | 2020-12-22T08:38:00.000Z | wakasha/action/__init__.py | tb0hdan/wakasha | 015e7d7087659f687d185650039d51e975a6246c | [
"Unlicense"
] | null | null | null | wakasha/action/__init__.py | tb0hdan/wakasha | 015e7d7087659f687d185650039d51e975a6246c | [
"Unlicense"
] | null | null | null | from .action import Action
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c65b7bf78b3088133f38888617a834821078e16 | 164 | py | Python | example.py | spheroid/pylambs | 2f1f8058ced978ebd0a957e641f4d722c72b410d | [
"MIT"
] | 1 | 2021-02-11T00:21:09.000Z | 2021-02-11T00:21:09.000Z | example.py | spheroid/pylambs | 2f1f8058ced978ebd0a957e641f4d722c72b410d | [
"MIT"
] | null | null | null | example.py | spheroid/pylambs | 2f1f8058ced978ebd0a957e641f4d722c72b410d | [
"MIT"
] | null | null | null | # @FunctionName: ExampleHandler
# @Includes: hello.py
# @Region: eu-central-1
from hello import message
def function_handler(event, context):
return message() | 20.5 | 37 | 0.75 | 20 | 164 | 6.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007092 | 0.140244 | 164 | 8 | 38 | 20.5 | 0.858156 | 0.432927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9c6c9dfddc1be1221d975c0f56da49dcb582160d | 891 | py | Python | web/transiq/restapi/migrations/0007_auto_20180803_1253.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null | web/transiq/restapi/migrations/0007_auto_20180803_1253.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | 14 | 2020-06-05T23:06:45.000Z | 2022-03-12T00:00:18.000Z | web/transiq/restapi/migrations/0007_auto_20180803_1253.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.0.5 on 2018-08-03 12:53
from django.db import migrations
class Migration(migrations.Migration):
    dependencies = [
        ('restapi', '0006_auto_20180802_1645'),
    ]

    operations = [
        migrations.RenameField(
            model_name='bookingstatusesmapping',
            old_name='stage',
            new_name='booking_stage',
        ),
        migrations.RenameField(
            model_name='bookingstatusesmapping',
            old_name='booking',
            new_name='manual_booking',
        ),
        migrations.RenameField(
            model_name='historicalbookingstatusesmapping',
            old_name='stage',
            new_name='booking_stage',
        ),
        migrations.RenameField(
            model_name='historicalbookingstatusesmapping',
            old_name='booking',
            new_name='manual_booking',
        ),
    ]
| 26.205882 | 58 | 0.584736 | 76 | 891 | 6.605263 | 0.460526 | 0.167331 | 0.207171 | 0.239044 | 0.713147 | 0.713147 | 0.713147 | 0.243028 | 0.243028 | 0.243028 | 0 | 0.05082 | 0.315376 | 891 | 33 | 59 | 27 | 0.772131 | 0.050505 | 0 | 0.740741 | 1 | 0 | 0.255924 | 0.155213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
92d3baad68e14b3946699f5816b4383ec1204cdc | 61 | py | Python | src/recipi/accounts/utils.py | EnTeQuAk/recpy | 58f157bd5b054db07be366beca76821386c28f6c | [
"0BSD"
] | 2 | 2015-11-12T08:58:14.000Z | 2021-03-06T01:30:11.000Z | src/recipi/accounts/utils.py | recipi/recipi | 58f157bd5b054db07be366beca76821386c28f6c | [
"0BSD"
] | 6 | 2015-02-03T00:16:58.000Z | 2015-02-04T23:47:30.000Z | src/recipi/accounts/utils.py | recipi/recipi | 58f157bd5b054db07be366beca76821386c28f6c | [
"0BSD"
] | null | null | null |
def get_user_name(user):
return user.name or user.email
| 15.25 | 34 | 0.737705 | 11 | 61 | 3.909091 | 0.636364 | 0.372093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 61 | 3 | 35 | 20.333333 | 0.86 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
92d70073d59ef64ad9cc4a22dd552a5176f193b5 | 5,698 | py | Python | joplin/pages/base_page/tests/tests.py | cityofaustin/joplin | 01424e46993e9b1c8e57391d6b7d9448f31d596b | [
"MIT"
] | 15 | 2018-09-27T07:36:30.000Z | 2021-08-03T16:01:21.000Z | joplin/pages/base_page/tests/tests.py | cityofaustin/joplin | 01424e46993e9b1c8e57391d6b7d9448f31d596b | [
"MIT"
] | 183 | 2017-11-16T23:30:47.000Z | 2020-12-18T21:43:36.000Z | joplin/pages/base_page/tests/tests.py | cityofaustin/joplin | 01424e46993e9b1c8e57391d6b7d9448f31d596b | [
"MIT"
] | 12 | 2017-12-12T22:48:05.000Z | 2021-03-01T18:01:24.000Z | from pages.base_page.factories import JanisBasePageFactory
from pages.topic_page.factories import JanisBasePageWithTopicsFactory
import pytest


# If we don't have any associated department,
# and coa_global=False (top level page isn't checked)
@pytest.mark.django_db()
def test_base_page_no_department_not_global_urls(home_page):
    page = JanisBasePageFactory.create(slug="global_slug", coa_global=False, parent=home_page)
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    assert urls == []
    assert janis_publish_url == '#'


# If we don't have any associated department,
# and coa_global=True (top level is checked)
@pytest.mark.django_db
def test_base_page_no_department_coa_global_urls(home_page, expected_publish_url_base):
    page = JanisBasePageFactory.create(slug="global_slug", coa_global=True, parent=home_page)
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    # since it's global, it should ignore the departments and just publish at the top level
    assert urls == ['/global_slug/']
    assert janis_publish_url == f'{expected_publish_url_base}/global_slug/'


# If we have an associated department,
# and coa_global=True (top level is checked)
@pytest.mark.django_db
def test_base_page_with_department_coa_global_urls(home_page, expected_publish_url_base):
    page = JanisBasePageFactory.create(
        slug="global_slug",
        coa_global=True,
        add_departments__dummy=True,
        parent=home_page
    )
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    # since it's global, it should ignore the departments and just publish at the top level
    assert urls == ['/global_slug/']
    assert janis_publish_url == f'{expected_publish_url_base}/global_slug/'


# If we have an associated department,
# and coa_global=False (top level is not checked)
@pytest.mark.django_db
def test_base_page_with_department_not_global_urls(home_page, expected_publish_url_base):
    # Using .create() here makes it so the factory also creates
    # our GroupPagePermissions to associate departments
    page = JanisBasePageFactory.create(
        slug="page_slug",
        coa_global=False,
        add_departments__dummy=True,
        parent=home_page,
    )
    # Set expected urls using group page permission department slugs
    expected_urls = ['/{department_slug}/{page_slug}/'.format(
        department_slug=permission.group.department.department_page.slug, page_slug=page.slug) for permission in
        page.group_permissions.all()]
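    # e.g. ["/library/page_slug/"] for a page owned by a single hypothetical
    # "library" department.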
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    # we should get a url under every department
    assert urls == expected_urls
    assert janis_publish_url == f'{expected_publish_url_base}{expected_urls[0]}'


# If we don't have any associated department,
# and we don't have any associated topic pages
# and coa_global=False (top level page isn't checked)
@pytest.mark.django_db
def test_base_page_with_topics_no_topic_no_department_not_global_urls():
    page = JanisBasePageWithTopicsFactory.create(slug="global_slug", coa_global=False)
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    assert urls == []
    assert janis_publish_url == '#'


# If we don't have any associated department,
# and we don't have any associated topic pages
# and coa_global=True (top level page is checked)
@pytest.mark.django_db
def test_base_page_with_topics_no_topic_no_department_coa_global_urls(home_page, expected_publish_url_base):
    page = JanisBasePageWithTopicsFactory.create(slug="global_slug", coa_global=True, parent=home_page)
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    assert urls == ['/global_slug/']
    assert janis_publish_url == f'{expected_publish_url_base}/global_slug/'


# If we have associated departments,
# and we have associated topic pages
# and coa_global=False (top level is not checked)
@pytest.mark.django_db()
def test_base_page_with_topics_with_department_not_global_urls(home_page, expected_publish_url_base):
    # Using .create() here makes it so the factory also creates
    # our GroupPagePermissions to associate departments
    page = JanisBasePageWithTopicsFactory.create(
        slug="page_slug",
        coa_global=False,
        add_topics__dummy=True,
        parent=home_page
    )
    # Set expected urls using departments and topic pages
    expected_urls = []
    expected_urls.extend(['{department_slug}/{page_slug}/'.format(
        department_slug=permission.group.department.department_page.slug, page_slug=page.slug) for permission in
        page.group_permissions.all()])
    for base_page_topic in page.topics.all():
        expected_urls.extend(
            ['{topic_url}page_slug/'.format(topic_url=url) for url in base_page_topic.topic.janis_urls()])
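    # i.e. one expected URL per owning department plus one per topic the page
    # appears under.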
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    # we should get a url under every department
    assert urls == expected_urls
    assert janis_publish_url == f'{expected_publish_url_base}{expected_urls[0]}'


# If we have associated departments,
# and we have associated topic pages
# and coa_global=True (top level is checked)
@pytest.mark.django_db()
def test_base_page_with_topics_with_topic_with_department_coa_global_urls(home_page, expected_publish_url_base):
    page = JanisBasePageWithTopicsFactory.create(slug="global_slug_2", coa_global=True, parent=home_page)
    urls = page.janis_urls()
    janis_publish_url = page.janis_publish_url()

    assert urls == ['/global_slug_2/']
    assert janis_publish_url == f'{expected_publish_url_base}/global_slug_2/'
| 38.5 | 112 | 0.753598 | 805 | 5,698 | 5.013665 | 0.108075 | 0.089197 | 0.089197 | 0.065411 | 0.897919 | 0.894698 | 0.889495 | 0.879584 | 0.833499 | 0.809217 | 0 | 0.001045 | 0.160232 | 5,698 | 147 | 113 | 38.761905 | 0.842424 | 0.254124 | 0 | 0.595238 | 0 | 0 | 0.112903 | 0.079222 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.095238 | false | 0 | 0.035714 | 0 | 0.130952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
130ea190709208b1b13e0bf2eed7acdc469e9719 | 221 | py | Python | CreateVersionFile.py | michaeldcanady/AutoClicker | 669c03c87c64b437407820a01f2dcab9a8c9657a | [
"MIT"
] | 2 | 2021-12-21T05:31:10.000Z | 2022-02-19T01:21:58.000Z | CreateVersionFile.py | michaeldcanady/AutoClicker | 669c03c87c64b437407820a01f2dcab9a8c9657a | [
"MIT"
] | null | null | null | CreateVersionFile.py | michaeldcanady/AutoClicker | 669c03c87c64b437407820a01f2dcab9a8c9657a | [
"MIT"
] | null | null | null | import pyinstaller_versionfile
import os
pyinstaller_versionfile.create_versionfile_from_input_file(
    output_file=os.path.join('.', "src", 'versionfile.txt'),
    input_file=os.path.join('.', "src", 'metadata.yml')
)
| 27.625 | 60 | 0.751131 | 28 | 221 | 5.642857 | 0.535714 | 0.278481 | 0.126582 | 0.177215 | 0.21519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095023 | 221 | 7 | 61 | 31.571429 | 0.79 | 0 | 0 | 0 | 0 | 0 | 0.158371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
13165dae81b7cb77f1ef0ffde65c47213cd916a5 | 70 | py | Python | deeplodocus/brain/visual_cortex/__init__.py | Ahleroy/deeplodocus | 70d0c870b41f02d629c0305a2702793a09d53f33 | [
"MIT"
] | 2 | 2019-09-13T12:02:23.000Z | 2022-03-11T13:46:35.000Z | deeplodocus/brain/visual_cortex/__init__.py | Ahleroy/deeplodocus | 70d0c870b41f02d629c0305a2702793a09d53f33 | [
"MIT"
] | 11 | 2018-11-23T14:01:17.000Z | 2019-09-16T15:25:07.000Z | deeplodocus/brain/visual_cortex/__init__.py | Ahleroy/deeplodocus | 70d0c870b41f02d629c0305a2702793a09d53f33 | [
"MIT"
] | 4 | 2018-09-22T13:31:08.000Z | 2018-12-05T18:34:46.000Z | from deeplodocus.brain.visual_cortex.visual_cortex import VisualCortex | 70 | 70 | 0.914286 | 9 | 70 | 6.888889 | 0.777778 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 70 | 1 | 70 | 70 | 0.925373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13229ac1716749b8db8851166223e72f4d9ea0e3 | 266 | py | Python | search_spaces/LatencyPredictors/model_src/search_space/ofa_profile/arch_utils.py | Ascend-Research/BlockProfile | 94a0f065e3632c204af77b736b944d30562468f9 | [
"MIT"
] | 2 | 2021-06-09T05:34:44.000Z | 2021-12-22T00:57:07.000Z | search_spaces/LatencyPredictors/model_src/search_space/ofa_profile/arch_utils.py | Ascend-Research/BlockProfile | 94a0f065e3632c204af77b736b944d30562468f9 | [
"MIT"
] | null | null | null | search_spaces/LatencyPredictors/model_src/search_space/ofa_profile/arch_utils.py | Ascend-Research/BlockProfile | 94a0f065e3632c204af77b736b944d30562468f9 | [
"MIT"
] | null | null | null | from search_spaces.LatencyPredictors.utils.math_utils import make_divisible
def get_final_channel_size(C_base, w):
    return make_divisible(C_base * w, divisor=8)


def get_final_channel_sizes(C_list, w):
    return [get_final_channel_size(c, w) for c in C_list]
| 26.6 | 75 | 0.793233 | 46 | 266 | 4.217391 | 0.521739 | 0.123711 | 0.231959 | 0.185567 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00431 | 0.12782 | 266 | 9 | 76 | 29.555556 | 0.831897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
13536d2b264dcb38fc097a29db11ae2eac7dc3f0 | 24,780 | py | Python | cupy/array_api/_elementwise_functions.py | prkhrsrvstv1/cupy | ea86c8225b575af9d2855fb77a306cf86fd098ea | [
"MIT"
] | 6,180 | 2016-11-01T14:22:30.000Z | 2022-03-31T08:39:20.000Z | cupy/array_api/_elementwise_functions.py | prkhrsrvstv1/cupy | ea86c8225b575af9d2855fb77a306cf86fd098ea | [
"MIT"
] | 6,281 | 2016-12-22T07:42:31.000Z | 2022-03-31T19:57:02.000Z | cupy/array_api/_elementwise_functions.py | prkhrsrvstv1/cupy | ea86c8225b575af9d2855fb77a306cf86fd098ea | [
"MIT"
] | 829 | 2017-02-23T05:46:12.000Z | 2022-03-27T17:40:03.000Z | from __future__ import annotations
from ._dtypes import (
    _boolean_dtypes,
    _floating_dtypes,
    _integer_dtypes,
    _integer_or_boolean_dtypes,
    _numeric_dtypes,
    _result_type,
)
from ._array_object import Array
import cupy as np
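# Editorial note: ``np`` is bound to :mod:`cupy` above, so the wrappers below
# mirror NumPy's array-api implementation while dispatching to CuPy.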


def abs(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.abs <numpy.abs>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in abs")
    return Array._new(np.abs(x._array))


# Note: the function name is different here
def acos(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arccos <numpy.arccos>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in acos")
    return Array._new(np.arccos(x._array))


# Note: the function name is different here
def acosh(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arccosh <numpy.arccosh>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in acosh")
    return Array._new(np.arccosh(x._array))


def add(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.add <numpy.add>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in add")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.add(x1._array, x2._array))


# Note: the function name is different here
def asin(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arcsin <numpy.arcsin>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in asin")
    return Array._new(np.arcsin(x._array))


# Note: the function name is different here
def asinh(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arcsinh <numpy.arcsinh>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in asinh")
    return Array._new(np.arcsinh(x._array))


# Note: the function name is different here
def atan(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arctan <numpy.arctan>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in atan")
    return Array._new(np.arctan(x._array))


# Note: the function name is different here
def atan2(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arctan2 <numpy.arctan2>`.

    See its docstring for more information.
    """
    if x1.dtype not in _floating_dtypes or x2.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in atan2")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.arctan2(x1._array, x2._array))


# Note: the function name is different here
def atanh(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.arctanh <numpy.arctanh>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in atanh")
    return Array._new(np.arctanh(x._array))


def bitwise_and(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.bitwise_and <numpy.bitwise_and>`.

    See its docstring for more information.
    """
    if (
        x1.dtype not in _integer_or_boolean_dtypes
        or x2.dtype not in _integer_or_boolean_dtypes
    ):
        raise TypeError("Only integer or boolean dtypes are allowed in bitwise_and")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.bitwise_and(x1._array, x2._array))


# Note: the function name is different here
def bitwise_left_shift(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.left_shift <numpy.left_shift>`.

    See its docstring for more information.
    """
    if x1.dtype not in _integer_dtypes or x2.dtype not in _integer_dtypes:
        raise TypeError("Only integer dtypes are allowed in bitwise_left_shift")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    # Note: bitwise_left_shift is only defined for x2 nonnegative.
    if np.any(x2._array < 0):
        raise ValueError("bitwise_left_shift(x1, x2) is only defined for x2 >= 0")
    return Array._new(np.left_shift(x1._array, x2._array))
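
# A note on the shift semantics (editorial, with hypothetical values): for
# int32 inputs, shifting 1 left by 3 yields 8, while a negative shift count
# raises ValueError rather than producing undefined results.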


# Note: the function name is different here
def bitwise_invert(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.invert <numpy.invert>`.

    See its docstring for more information.
    """
    if x.dtype not in _integer_or_boolean_dtypes:
        raise TypeError("Only integer or boolean dtypes are allowed in bitwise_invert")
    return Array._new(np.invert(x._array))


def bitwise_or(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.bitwise_or <numpy.bitwise_or>`.

    See its docstring for more information.
    """
    if (
        x1.dtype not in _integer_or_boolean_dtypes
        or x2.dtype not in _integer_or_boolean_dtypes
    ):
        raise TypeError("Only integer or boolean dtypes are allowed in bitwise_or")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.bitwise_or(x1._array, x2._array))


# Note: the function name is different here
def bitwise_right_shift(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.right_shift <numpy.right_shift>`.

    See its docstring for more information.
    """
    if x1.dtype not in _integer_dtypes or x2.dtype not in _integer_dtypes:
        raise TypeError("Only integer dtypes are allowed in bitwise_right_shift")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    # Note: bitwise_right_shift is only defined for x2 nonnegative.
    if np.any(x2._array < 0):
        raise ValueError("bitwise_right_shift(x1, x2) is only defined for x2 >= 0")
    return Array._new(np.right_shift(x1._array, x2._array))


def bitwise_xor(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.bitwise_xor <numpy.bitwise_xor>`.

    See its docstring for more information.
    """
    if (
        x1.dtype not in _integer_or_boolean_dtypes
        or x2.dtype not in _integer_or_boolean_dtypes
    ):
        raise TypeError("Only integer or boolean dtypes are allowed in bitwise_xor")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.bitwise_xor(x1._array, x2._array))


def ceil(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.ceil <numpy.ceil>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in ceil")
    if x.dtype in _integer_dtypes:
        # Note: The return dtype of ceil is the same as the input
        return x
    return Array._new(np.ceil(x._array))


def cos(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.cos <numpy.cos>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in cos")
    return Array._new(np.cos(x._array))


def cosh(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.cosh <numpy.cosh>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in cosh")
    return Array._new(np.cosh(x._array))


def divide(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.divide <numpy.divide>`.

    See its docstring for more information.
    """
    if x1.dtype not in _floating_dtypes or x2.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in divide")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.divide(x1._array, x2._array))


def equal(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.equal <numpy.equal>`.

    See its docstring for more information.
    """
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.equal(x1._array, x2._array))


def exp(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.exp <numpy.exp>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in exp")
    return Array._new(np.exp(x._array))


def expm1(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.expm1 <numpy.expm1>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in expm1")
    return Array._new(np.expm1(x._array))


def floor(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.floor <numpy.floor>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in floor")
    if x.dtype in _integer_dtypes:
        # Note: The return dtype of floor is the same as the input
        return x
    return Array._new(np.floor(x._array))


def floor_divide(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.floor_divide <numpy.floor_divide>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in floor_divide")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.floor_divide(x1._array, x2._array))


def greater(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.greater <numpy.greater>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in greater")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.greater(x1._array, x2._array))


def greater_equal(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.greater_equal <numpy.greater_equal>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in greater_equal")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.greater_equal(x1._array, x2._array))


def isfinite(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.isfinite <numpy.isfinite>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in isfinite")
    return Array._new(np.isfinite(x._array))


def isinf(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.isinf <numpy.isinf>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in isinf")
    return Array._new(np.isinf(x._array))


def isnan(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.isnan <numpy.isnan>`.

    See its docstring for more information.
    """
    if x.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in isnan")
    return Array._new(np.isnan(x._array))


def less(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.less <numpy.less>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in less")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.less(x1._array, x2._array))


def less_equal(x1: Array, x2: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.less_equal <numpy.less_equal>`.

    See its docstring for more information.
    """
    if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
        raise TypeError("Only numeric dtypes are allowed in less_equal")
    # Call result type here just to raise on disallowed type combinations
    _result_type(x1.dtype, x2.dtype)
    x1, x2 = Array._normalize_two_args(x1, x2)
    return Array._new(np.less_equal(x1._array, x2._array))


def log(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.log <numpy.log>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in log")
    return Array._new(np.log(x._array))


def log1p(x: Array, /) -> Array:
    """
    Array API compatible wrapper for :py:func:`np.log1p <numpy.log1p>`.

    See its docstring for more information.
    """
    if x.dtype not in _floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in log1p")
return Array._new(np.log1p(x._array))
def log2(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.log2 <numpy.log2>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in log2")
return Array._new(np.log2(x._array))
def log10(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.log10 <numpy.log10>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in log10")
return Array._new(np.log10(x._array))
def logaddexp(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.logaddexp <numpy.logaddexp>`.
See its docstring for more information.
"""
if x1.dtype not in _floating_dtypes or x2.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in logaddexp")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.logaddexp(x1._array, x2._array))
def logical_and(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.logical_and <numpy.logical_and>`.
See its docstring for more information.
"""
if x1.dtype not in _boolean_dtypes or x2.dtype not in _boolean_dtypes:
raise TypeError("Only boolean dtypes are allowed in logical_and")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.logical_and(x1._array, x2._array))
def logical_not(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.logical_not <numpy.logical_not>`.
See its docstring for more information.
"""
if x.dtype not in _boolean_dtypes:
raise TypeError("Only boolean dtypes are allowed in logical_not")
return Array._new(np.logical_not(x._array))
def logical_or(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.logical_or <numpy.logical_or>`.
See its docstring for more information.
"""
if x1.dtype not in _boolean_dtypes or x2.dtype not in _boolean_dtypes:
raise TypeError("Only boolean dtypes are allowed in logical_or")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.logical_or(x1._array, x2._array))
def logical_xor(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.logical_xor <numpy.logical_xor>`.
See its docstring for more information.
"""
if x1.dtype not in _boolean_dtypes or x2.dtype not in _boolean_dtypes:
raise TypeError("Only boolean dtypes are allowed in logical_xor")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.logical_xor(x1._array, x2._array))
def multiply(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.multiply <numpy.multiply>`.
See its docstring for more information.
"""
if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in multiply")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.multiply(x1._array, x2._array))
def negative(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.negative <numpy.negative>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in negative")
return Array._new(np.negative(x._array))
def not_equal(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.not_equal <numpy.not_equal>`.
See its docstring for more information.
"""
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.not_equal(x1._array, x2._array))
def positive(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.positive <numpy.positive>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in positive")
return Array._new(np.positive(x._array))
# Note: the function name is different here
def pow(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.power <numpy.power>`.
See its docstring for more information.
"""
if x1.dtype not in _floating_dtypes or x2.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in pow")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.power(x1._array, x2._array))
def remainder(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.remainder <numpy.remainder>`.
See its docstring for more information.
"""
if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in remainder")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.remainder(x1._array, x2._array))
def round(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.round <numpy.round>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in round")
return Array._new(np.round(x._array))
def sign(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.sign <numpy.sign>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in sign")
return Array._new(np.sign(x._array))
def sin(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.sin <numpy.sin>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in sin")
return Array._new(np.sin(x._array))
def sinh(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.sinh <numpy.sinh>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in sinh")
return Array._new(np.sinh(x._array))
def square(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.square <numpy.square>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in square")
return Array._new(np.square(x._array))
def sqrt(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.sqrt <numpy.sqrt>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in sqrt")
return Array._new(np.sqrt(x._array))
def subtract(x1: Array, x2: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.subtract <numpy.subtract>`.
See its docstring for more information.
"""
if x1.dtype not in _numeric_dtypes or x2.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in subtract")
# Call result type here just to raise on disallowed type combinations
_result_type(x1.dtype, x2.dtype)
x1, x2 = Array._normalize_two_args(x1, x2)
return Array._new(np.subtract(x1._array, x2._array))
def tan(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.tan <numpy.tan>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in tan")
return Array._new(np.tan(x._array))
def tanh(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.tanh <numpy.tanh>`.
See its docstring for more information.
"""
if x.dtype not in _floating_dtypes:
raise TypeError("Only floating-point dtypes are allowed in tanh")
return Array._new(np.tanh(x._array))
def trunc(x: Array, /) -> Array:
"""
Array API compatible wrapper for :py:func:`np.trunc <numpy.trunc>`.
See its docstring for more information.
"""
if x.dtype not in _numeric_dtypes:
raise TypeError("Only numeric dtypes are allowed in trunc")
if x.dtype in _integer_dtypes:
# Note: The return dtype of trunc is the same as the input
return x
return Array._new(np.trunc(x._array))
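# --- Illustrative sketch: the shared dtype-guard pattern (not part of the
# upstream module). Every wrapper above validates its input dtype(s) against
# an allow-list, calls _result_type only to reject disallowed type
# combinations, and then delegates to the underlying NumPy ufunc. A minimal,
# self-contained rendering of that pattern follows; the `_demo_*` names are
# hypothetical and exist only for this example.
import numpy as _demo_np

_demo_floating_dtypes = (_demo_np.dtype("float32"), _demo_np.dtype("float64"))

def _demo_divide(x1, x2):
    # Guard clause mirroring divide() above: reject non-floating inputs early.
    if x1.dtype not in _demo_floating_dtypes or x2.dtype not in _demo_floating_dtypes:
        raise TypeError("Only floating-point dtypes are allowed in divide")
    return _demo_np.divide(x1, x2)

# _demo_divide(_demo_np.ones(2), _demo_np.full(2, 2.0))           # -> array([0.5, 0.5])
# _demo_divide(_demo_np.ones(2, dtype="int32"), _demo_np.ones(2)) # -> raises TypeError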
| 33.945205 | 87 | 0.684019 | 3,642 | 24,780 | 4.510983 | 0.034871 | 0.068172 | 0.045651 | 0.061355 | 0.860125 | 0.848317 | 0.82537 | 0.82537 | 0.82537 | 0.82537 | 0 | 0.016154 | 0.210573 | 24,780 | 729 | 88 | 33.99177 | 0.823689 | 0.347337 | 0 | 0.388514 | 0 | 0 | 0.170537 | 0.002986 | 0 | 0 | 0 | 0 | 0 | 1 | 0.189189 | false | 0 | 0.013514 | 0 | 0.402027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
138598aa240314471f5df3a3c176ae574d89dc39 | 5847 | py | Python | fcm_django/fcm.py | vstasn/fcm-django | 94ec77173e296b0797a493718b6ab663d4011a51 | ["MIT"] | null | null | null | fcm_django/fcm.py | vstasn/fcm-django | 94ec77173e296b0797a493718b6ab663d4011a51 | ["MIT"] | null | null | null | fcm_django/fcm.py | vstasn/fcm-django | 94ec77173e296b0797a493718b6ab663d4011a51 | ["MIT"] | null | null | null |
from firebase_admin import exceptions
from firebase_admin import messaging
response_dict = {
"multicast_ids": [],
"success": 0,
"failure": 0,
"canonical_ids": 0,
"results": [],
"topic_message_id": None,
}
def fcm_send_message(
registration_id,
title=None,
body=None,
icon=None,
data=None,
sound=None,
badge=None,
low_priority=False,
condition=None,
time_to_live=None,
click_action=None,
collapse_key=None,
delay_while_idle=False,
restricted_package_name=None,
dry_run=False,
color=None,
tag=None,
body_loc_key=None,
body_loc_args=None,
title_loc_key=None,
title_loc_args=None,
content_available=None,
extra_kwargs=None,
api_key=None,
json_encoder=None,
extra_notification_kwargs=None,
channel_id=None,
critical=False,
android_priority=None,
**kwargs
):
apns_sound = sound
if critical:
sound_name = sound if sound else "default"
apns_sound = messaging.CriticalSound(sound_name, critical=True)
notification = None
if title and body:
notification = messaging.Notification(
title=title,
body=body,
image=icon,
)
message = messaging.Message(
notification=notification,
data=data,
token=registration_id,
android=messaging.AndroidConfig(
priority=android_priority,
collapse_key=collapse_key,
ttl=time_to_live,
restricted_package_name=restricted_package_name,
notification=messaging.AndroidNotification(
color=color,
sound=sound,
tag=tag,
click_action=click_action,
body_loc_key=body_loc_key,
body_loc_args=body_loc_args,
title_loc_key=title_loc_key,
title_loc_args=title_loc_args,
channel_id=channel_id,
),
),
apns=messaging.APNSConfig(
payload=messaging.APNSPayload(
aps=messaging.Aps(
alert=messaging.ApsAlert(
loc_key=body_loc_key,
loc_args=body_loc_args,
title_loc_key=title_loc_key,
title_loc_args=title_loc_args,
),
sound=apns_sound,
content_available=content_available,
badge=badge,
)
)
),
)
res = {}
try:
res["data"] = messaging.send(message, dry_run)
except (exceptions.FirebaseError, ValueError) as error:
res["error"] = error
return {"results": [res]}
def fcm_send_single_device_data_message(registration_id, *args, **kwargs):
data = kwargs.pop("data_message", None)
kwargs.update({"data": data})
return fcm_send_message(registration_id=registration_id, **kwargs)
def fcm_send_bulk_message(
registration_ids,
title=None,
body=None,
icon=None,
data=None,
sound=None,
badge=None,
low_priority=False,
condition=None,
time_to_live=None,
click_action=None,
collapse_key=None,
delay_while_idle=False,
restricted_package_name=None,
dry_run=False,
color=None,
tag=None,
body_loc_key=None,
body_loc_args=None,
title_loc_key=None,
title_loc_args=None,
content_available=None,
extra_kwargs=None,
api_key=None,
json_encoder=None,
extra_notification_kwargs=None,
channel_id=None,
critical=False,
android_priority=None,
**kwargs
):
apns_sound = sound
if critical:
sound_name = sound if sound else "default"
apns_sound = messaging.CriticalSound(sound_name, critical=True)
notification = None
if title and body:
notification = messaging.Notification(
title=title,
body=body,
image=icon,
)
multicast = messaging.MulticastMessage(
notification=notification,
data=data,
tokens=registration_ids,
android=messaging.AndroidConfig(
priority=android_priority,
collapse_key=collapse_key,
ttl=time_to_live,
restricted_package_name=restricted_package_name,
notification=messaging.AndroidNotification(
color=color,
sound=sound,
tag=tag,
click_action=click_action,
body_loc_key=body_loc_key,
body_loc_args=body_loc_args,
title_loc_key=title_loc_key,
title_loc_args=title_loc_args,
channel_id=channel_id,
),
),
apns=messaging.APNSConfig(
payload=messaging.APNSPayload(
aps=messaging.Aps(
alert=messaging.ApsAlert(
loc_key=body_loc_key,
loc_args=body_loc_args,
title_loc_key=title_loc_key,
title_loc_args=title_loc_args,
),
sound=apns_sound,
content_available=content_available,
badge=badge,
)
)
),
)
try:
response = messaging.send_multicast(multicast)
responses = [
{"error": resp.exception, "success": resp.success}
for resp in response.responses
]
except exceptions.FirebaseError:
responses = []
return {"results": responses}
def fcm_send_bulk_data_messages(*args, **kwargs):
data = kwargs.pop("data_message", None)
kwargs.update({"data": data})
return fcm_send_bulk_message(**kwargs)
class FCMError(Exception):
pass
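# --- Illustrative usage sketch (not part of the module) ---
# Assumes firebase_admin.initialize_app() has already been called with valid
# credentials; "device-token" is a placeholder registration token.
#
# result = fcm_send_message(
#     registration_id="device-token",
#     title="Hello",
#     body="A test notification",
#     data={"key": "value"},
#     badge=1,
# )
# # On success: {"results": [{"data": "<message id>"}]}
# # On failure: {"results": [{"error": <FirebaseError or ValueError>}]}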
| 26.821101 | 74 | 0.585599 | 605 | 5,847 | 5.347107 | 0.171901 | 0.037094 | 0.034003 | 0.037094 | 0.759505 | 0.744359 | 0.744359 | 0.744359 | 0.744359 | 0.744359 | 0 | 0.000769 | 0.332478 | 5,847 | 217 | 75 | 26.9447 | 0.828081 | 0 | 0 | 0.769231 | 0 | 0 | 0.024628 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020513 | false | 0.005128 | 0.010256 | 0 | 0.05641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13de29438eaf1a85abd2faf1c78e85ec724a6ecc | 67 | py | Python | src/tests/grpc_test.py | ChartsBot/telegram-bots | 3cb0966d15c3c1d5b6dd065f01259f3d348be3b2 | ["MIT"] | null | null | null | src/tests/grpc_test.py | ChartsBot/telegram-bots | 3cb0966d15c3c1d5b6dd065f01259f3d348be3b2 | ["MIT"] | null | null | null | src/tests/grpc_test.py | ChartsBot/telegram-bots | 3cb0966d15c3c1d5b6dd065f01259f3d348be3b2 | ["MIT"] | 1 | 2022-01-19T01:03:06.000Z | 2022-01-19T01:03:06.000Z |
from pprint import pprint
def greet(message):
pprint(message)
| 13.4 | 25 | 0.746269 | 9 | 67 | 5.555556 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 67 | 5 | 26 | 13.4 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6
13f19b44bad9206d0277cc64d0c2ddefd3250316 | 3200 | py | Python | backend/api/python_http_client/kfp_server_api/models/__init__.py | Iuiu1234/pipelines | 1e032f550ce23cd40bfb6827b995248537b07d08 | ["Apache-2.0"] | 2860 | 2018-05-24T04:55:01.000Z | 2022-03-31T13:49:56.000Z | backend/api/python_http_client/kfp_server_api/models/__init__.py | Iuiu1234/pipelines | 1e032f550ce23cd40bfb6827b995248537b07d08 | ["Apache-2.0"] | 7331 | 2018-05-16T09:03:26.000Z | 2022-03-31T23:22:04.000Z | backend/api/python_http_client/kfp_server_api/models/__init__.py | Iuiu1234/pipelines | 1e032f550ce23cd40bfb6827b995248537b07d08 | ["Apache-2.0"] | 1359 | 2018-05-15T11:05:41.000Z | 2022-03-31T09:42:09.000Z |
# coding: utf-8
# flake8: noqa
"""
Kubeflow Pipelines API
This file contains REST API specification for Kubeflow Pipelines. The file is autogenerated from the swagger definition.
Contact: kubeflow-pipelines@google.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
# import models into model package
from kfp_server_api.models.api_cron_schedule import ApiCronSchedule
from kfp_server_api.models.api_experiment import ApiExperiment
from kfp_server_api.models.api_experiment_storage_state import ApiExperimentStorageState
from kfp_server_api.models.api_get_healthz_response import ApiGetHealthzResponse
from kfp_server_api.models.api_get_template_response import ApiGetTemplateResponse
from kfp_server_api.models.api_job import ApiJob
from kfp_server_api.models.api_list_experiments_response import ApiListExperimentsResponse
from kfp_server_api.models.api_list_jobs_response import ApiListJobsResponse
from kfp_server_api.models.api_list_pipeline_versions_response import ApiListPipelineVersionsResponse
from kfp_server_api.models.api_list_pipelines_response import ApiListPipelinesResponse
from kfp_server_api.models.api_list_runs_response import ApiListRunsResponse
from kfp_server_api.models.api_parameter import ApiParameter
from kfp_server_api.models.api_periodic_schedule import ApiPeriodicSchedule
from kfp_server_api.models.api_pipeline import ApiPipeline
from kfp_server_api.models.api_pipeline_runtime import ApiPipelineRuntime
from kfp_server_api.models.api_pipeline_spec import ApiPipelineSpec
from kfp_server_api.models.api_pipeline_version import ApiPipelineVersion
from kfp_server_api.models.api_read_artifact_response import ApiReadArtifactResponse
from kfp_server_api.models.api_relationship import ApiRelationship
from kfp_server_api.models.api_report_run_metrics_request import ApiReportRunMetricsRequest
from kfp_server_api.models.api_report_run_metrics_response import ApiReportRunMetricsResponse
from kfp_server_api.models.api_resource_key import ApiResourceKey
from kfp_server_api.models.api_resource_reference import ApiResourceReference
from kfp_server_api.models.api_resource_type import ApiResourceType
from kfp_server_api.models.api_run import ApiRun
from kfp_server_api.models.api_run_detail import ApiRunDetail
from kfp_server_api.models.api_run_metric import ApiRunMetric
from kfp_server_api.models.api_run_storage_state import ApiRunStorageState
from kfp_server_api.models.api_status import ApiStatus
from kfp_server_api.models.api_trigger import ApiTrigger
from kfp_server_api.models.api_url import ApiUrl
from kfp_server_api.models.api_value import ApiValue
from kfp_server_api.models.job_mode import JobMode
from kfp_server_api.models.pipeline_spec_runtime_config import PipelineSpecRuntimeConfig
from kfp_server_api.models.protobuf_any import ProtobufAny
from kfp_server_api.models.report_run_metrics_response_report_run_metric_result import ReportRunMetricsResponseReportRunMetricResult
from kfp_server_api.models.report_run_metrics_response_report_run_metric_result_status import ReportRunMetricsResponseReportRunMetricResultStatus
from kfp_server_api.models.run_metric_format import RunMetricFormat
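# Illustrative note (not part of the generated file): because of the
# re-exports above, callers can import model classes from the package root
# instead of spelling out the generated module paths, e.g.
#
#   from kfp_server_api.models import ApiRun, ApiPipeline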
| 58.181818 | 145 | 0.900938 | 441 | 3,200 | 6.147392 | 0.272109 | 0.098119 | 0.182221 | 0.224271 | 0.430837 | 0.398377 | 0.30616 | 0.079675 | 0.079675 | 0.049428 | 0 | 0.00067 | 0.067813 | 3,200 | 54 | 146 | 59.259259 | 0.908146 | 0.090625 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b93d427163c257503dd6f8ace1936e81b2436d92 | 174 | py | Python | src/cool/cmp/__init__.py | C0NKER/cool-compiler-2020 | 1da79216bb3e67effd79215979d0ca8909b41c38 | ["MIT"] | 3 | 2019-10-07T15:37:08.000Z | 2019-12-03T19:56:24.000Z | src/cool/cmp/__init__.py | C0NKER/cool-compiler-base | 42700586e35808061789b35ef3e048ca79c36e7f | ["MIT"] | null | null | null | src/cool/cmp/__init__.py | C0NKER/cool-compiler-base | 42700586e35808061789b35ef3e048ca79c36e7f | ["MIT"] | null | null | null |
from .automata import *
from .evaluation import *
from .grammartools import *
from .grammar import *
from .semantic import *
from .utils import *
from .visitor import *
| 24.857143 | 28 | 0.729885 | 21 | 174 | 6.047619 | 0.428571 | 0.472441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189655 | 174 | 7 | 29 | 24.857143 | 0.900709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6
b99346e09794890afba62d04a1bb56ecdefa69b2 | 39560 | py | Python | pybind/slxos/v16r_1_00b/sfm_state/queue/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/slxos/v16r_1_00b/sfm_state/queue/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/slxos/v16r_1_00b/sfm_state/queue/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class queue(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-sysmgr-operational - based on the path /sfm-state/queue. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: SFM Queue
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__queue_sfmid','__queue_feid','__queue_pipe','__queue_dch_count','__queue_dcm_count','__queue_dcl_count','__queue_dch_value','__queue_dch_linkno','__queue_dcm_value','__queue_dcm_linkno','__queue_dcl_value','__queue_dcl_linkno',)
_yang_name = 'queue'
_rest_name = 'queue'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__queue_dcl_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-linkno", rest_name="queue-dcl-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_sfmid = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-sfmid", rest_name="queue-sfmid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_feid = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-feid", rest_name="queue-feid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dcm_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcm-count", rest_name="queue-dcm-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dcl_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-value", rest_name="queue-dcl-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dcm_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-linkno", rest_name="queue-dcm-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dch_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-value", rest_name="queue-dch-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dch_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-linkno", rest_name="queue-dch-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dcl_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcl-count", rest_name="queue-dcl-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dcm_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-value", rest_name="queue-dcm-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_dch_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dch-count", rest_name="queue-dch-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
self.__queue_pipe = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-pipe", rest_name="queue-pipe", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'sfm-state', u'queue']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'sfm-state', u'queue']
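# Illustrative sketch (not emitted by the PYANG plugin): each leaf below
# exposes a _get_*/_set_*/_unset_* triple. Outside of an instantiated parent
# list the setters can be called directly; the values here are hypothetical.
#
#   q = queue()
#   q._set_queue_feid(2)
#   q._get_queue_feid()    # -> 2
#   q._unset_queue_feid()  # restore the empty default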
def _get_queue_sfmid(self):
"""
Getter method for queue_sfmid, mapped from YANG variable /sfm_state/queue/queue_sfmid (uint32)
YANG Description: SFM Queue
"""
return self.__queue_sfmid
def _set_queue_sfmid(self, v, load=False):
"""
Setter method for queue_sfmid, mapped from YANG variable /sfm_state/queue/queue_sfmid (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_sfmid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_sfmid() directly.
YANG Description: SFM Queue
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-sfmid", rest_name="queue-sfmid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_sfmid must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-sfmid", rest_name="queue-sfmid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_sfmid = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_sfmid(self):
self.__queue_sfmid = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-sfmid", rest_name="queue-sfmid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_feid(self):
"""
Getter method for queue_feid, mapped from YANG variable /sfm_state/queue/queue_feid (uint32)
YANG Description: SFM Queue
"""
return self.__queue_feid
def _set_queue_feid(self, v, load=False):
"""
Setter method for queue_feid, mapped from YANG variable /sfm_state/queue/queue_feid (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_feid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_feid() directly.
YANG Description: SFM Queue
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-feid", rest_name="queue-feid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_feid must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-feid", rest_name="queue-feid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_feid = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_feid(self):
self.__queue_feid = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-feid", rest_name="queue-feid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_pipe(self):
"""
Getter method for queue_pipe, mapped from YANG variable /sfm_state/queue/queue_pipe (uint32)
YANG Description: SFM Queue Pipe
"""
return self.__queue_pipe
def _set_queue_pipe(self, v, load=False):
"""
Setter method for queue_pipe, mapped from YANG variable /sfm_state/queue/queue_pipe (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_pipe is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_pipe() directly.
YANG Description: SFM Queue Pipe
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-pipe", rest_name="queue-pipe", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_pipe must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-pipe", rest_name="queue-pipe", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_pipe = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_pipe(self):
self.__queue_pipe = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-pipe", rest_name="queue-pipe", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dch_count(self):
"""
Getter method for queue_dch_count, mapped from YANG variable /sfm_state/queue/queue_dch_count (uint32)
YANG Description: SFM Queue
"""
return self.__queue_dch_count
def _set_queue_dch_count(self, v, load=False):
"""
Setter method for queue_dch_count, mapped from YANG variable /sfm_state/queue/queue_dch_count (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dch_count is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dch_count() directly.
YANG Description: SFM Queue
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dch-count", rest_name="queue-dch-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dch_count must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dch-count", rest_name="queue-dch-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dch_count = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dch_count(self):
self.__queue_dch_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dch-count", rest_name="queue-dch-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcm_count(self):
"""
Getter method for queue_dcm_count, mapped from YANG variable /sfm_state/queue/queue_dcm_count (uint32)
YANG Description: SFM Queue
"""
return self.__queue_dcm_count
def _set_queue_dcm_count(self, v, load=False):
"""
Setter method for queue_dcm_count, mapped from YANG variable /sfm_state/queue/queue_dcm_count (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcm_count is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcm_count() directly.
YANG Description: SFM Queue
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcm-count", rest_name="queue-dcm-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcm_count must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcm-count", rest_name="queue-dcm-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcm_count = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcm_count(self):
self.__queue_dcm_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcm-count", rest_name="queue-dcm-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcl_count(self):
"""
Getter method for queue_dcl_count, mapped from YANG variable /sfm_state/queue/queue_dcl_count (uint32)
YANG Description: SFM Queue
"""
return self.__queue_dcl_count
def _set_queue_dcl_count(self, v, load=False):
"""
Setter method for queue_dcl_count, mapped from YANG variable /sfm_state/queue/queue_dcl_count (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcl_count is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcl_count() directly.
YANG Description: SFM Queue
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcl-count", rest_name="queue-dcl-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcl_count must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcl-count", rest_name="queue-dcl-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcl_count = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcl_count(self):
self.__queue_dcl_count = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="queue-dcl-count", rest_name="queue-dcl-count", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dch_value(self):
"""
Getter method for queue_dch_value, mapped from YANG variable /sfm_state/queue/queue_dch_value (uint32)
YANG Description: SFM DCH Value
"""
return self.__queue_dch_value
def _set_queue_dch_value(self, v, load=False):
"""
Setter method for queue_dch_value, mapped from YANG variable /sfm_state/queue/queue_dch_value (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dch_value is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dch_value() directly.
YANG Description: SFM DCH Value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-value", rest_name="queue-dch-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dch_value must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-value", rest_name="queue-dch-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dch_value = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dch_value(self):
self.__queue_dch_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-value", rest_name="queue-dch-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dch_linkno(self):
"""
Getter method for queue_dch_linkno, mapped from YANG variable /sfm_state/queue/queue_dch_linkno (uint32)
YANG Description: SFM DCH Link-No
"""
return self.__queue_dch_linkno
def _set_queue_dch_linkno(self, v, load=False):
"""
Setter method for queue_dch_linkno, mapped from YANG variable /sfm_state/queue/queue_dch_linkno (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dch_linkno is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dch_linkno() directly.
YANG Description: SFM DCH Link-No
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-linkno", rest_name="queue-dch-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dch_linkno must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-linkno", rest_name="queue-dch-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dch_linkno = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dch_linkno(self):
self.__queue_dch_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dch-linkno", rest_name="queue-dch-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcm_value(self):
"""
Getter method for queue_dcm_value, mapped from YANG variable /sfm_state/queue/queue_dcm_value (uint32)
YANG Description: SFM DCM Value
"""
return self.__queue_dcm_value
def _set_queue_dcm_value(self, v, load=False):
"""
Setter method for queue_dcm_value, mapped from YANG variable /sfm_state/queue/queue_dcm_value (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcm_value is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcm_value() directly.
YANG Description: SFM DCM Value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-value", rest_name="queue-dcm-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcm_value must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-value", rest_name="queue-dcm-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcm_value = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcm_value(self):
self.__queue_dcm_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-value", rest_name="queue-dcm-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcm_linkno(self):
"""
Getter method for queue_dcm_linkno, mapped from YANG variable /sfm_state/queue/queue_dcm_linkno (uint32)
YANG Description: SFM DCM Link-No
"""
return self.__queue_dcm_linkno
def _set_queue_dcm_linkno(self, v, load=False):
"""
Setter method for queue_dcm_linkno, mapped from YANG variable /sfm_state/queue/queue_dcm_linkno (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcm_linkno is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcm_linkno() directly.
YANG Description: SFM DCM Link-No
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-linkno", rest_name="queue-dcm-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcm_linkno must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-linkno", rest_name="queue-dcm-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcm_linkno = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcm_linkno(self):
self.__queue_dcm_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcm-linkno", rest_name="queue-dcm-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcl_value(self):
"""
Getter method for queue_dcl_value, mapped from YANG variable /sfm_state/queue/queue_dcl_value (uint32)
YANG Description: SFM DCL Value
"""
return self.__queue_dcl_value
def _set_queue_dcl_value(self, v, load=False):
"""
Setter method for queue_dcl_value, mapped from YANG variable /sfm_state/queue/queue_dcl_value (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcl_value is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcl_value() directly.
YANG Description: SFM DCL Value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-value", rest_name="queue-dcl-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcl_value must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-value", rest_name="queue-dcl-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcl_value = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcl_value(self):
self.__queue_dcl_value = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-value", rest_name="queue-dcl-value", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
def _get_queue_dcl_linkno(self):
"""
Getter method for queue_dcl_linkno, mapped from YANG variable /sfm_state/queue/queue_dcl_linkno (uint32)
YANG Description: SFM DCL Link-No
"""
return self.__queue_dcl_linkno
def _set_queue_dcl_linkno(self, v, load=False):
"""
Setter method for queue_dcl_linkno, mapped from YANG variable /sfm_state/queue/queue_dcl_linkno (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_queue_dcl_linkno is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_queue_dcl_linkno() directly.
YANG Description: SFM DCL Link-No
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-linkno", rest_name="queue-dcl-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """queue_dcl_linkno must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-linkno", rest_name="queue-dcl-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)""",
})
self.__queue_dcl_linkno = t
if hasattr(self, '_set'):
self._set()
def _unset_queue_dcl_linkno(self):
self.__queue_dcl_linkno = YANGDynClass(base=TypedListType(allowed_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)), is_leaf=False, yang_name="queue-dcl-linkno", rest_name="queue-dcl-linkno", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-sysmgr-operational', defining_module='brocade-sysmgr-operational', yang_type='uint32', is_config=False)
queue_sfmid = __builtin__.property(_get_queue_sfmid)
queue_feid = __builtin__.property(_get_queue_feid)
queue_pipe = __builtin__.property(_get_queue_pipe)
queue_dch_count = __builtin__.property(_get_queue_dch_count)
queue_dcm_count = __builtin__.property(_get_queue_dcm_count)
queue_dcl_count = __builtin__.property(_get_queue_dcl_count)
queue_dch_value = __builtin__.property(_get_queue_dch_value)
queue_dch_linkno = __builtin__.property(_get_queue_dch_linkno)
queue_dcm_value = __builtin__.property(_get_queue_dcm_value)
queue_dcm_linkno = __builtin__.property(_get_queue_dcm_linkno)
queue_dcl_value = __builtin__.property(_get_queue_dcl_value)
queue_dcl_linkno = __builtin__.property(_get_queue_dcl_linkno)
_pyangbind_elements = {'queue_sfmid': queue_sfmid, 'queue_feid': queue_feid, 'queue_pipe': queue_pipe, 'queue_dch_count': queue_dch_count, 'queue_dcm_count': queue_dcm_count, 'queue_dcl_count': queue_dcl_count, 'queue_dch_value': queue_dch_value, 'queue_dch_linkno': queue_dch_linkno, 'queue_dcm_value': queue_dcm_value, 'queue_dcm_linkno': queue_dcm_linkno, 'queue_dcl_value': queue_dcl_value, 'queue_dcl_linkno': queue_dcl_linkno, }
| 70.26643 | 490 | 0.749874 | 5,403 | 39,560 | 5.198408 | 0.03424 | 0.039164 | 0.049845 | 0.052337 | 0.921067 | 0.889415 | 0.865952 | 0.863567 | 0.850927 | 0.835974 | 0 | 0.023669 | 0.122118 | 39,560 | 562 | 491 | 70.391459 | 0.785079 | 0.160288 | 0 | 0.49359 | 0 | 0.038462 | 0.362173 | 0.194912 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.025641 | 0 | 0.262821 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9a4a3c44ff4b8eec31a04b3499d6b49a13df5f7 | 37 | py | Python | algo/mappo2/elements/nn.py | xlnwel/g2rl | e1261fdd2ce70724a99ddd174616cf013917b241 | [
"Apache-2.0"
] | 1 | 2022-03-27T08:25:57.000Z | 2022-03-27T08:25:57.000Z | algo/mappo2/elements/nn.py | xlnwel/g2rl | e1261fdd2ce70724a99ddd174616cf013917b241 | [
"Apache-2.0"
] | null | null | null | algo/mappo2/elements/nn.py | xlnwel/g2rl | e1261fdd2ce70724a99ddd174616cf013917b241 | [
"Apache-2.0"
] | 1 | 2021-11-09T08:33:35.000Z | 2021-11-09T08:33:35.000Z | from algo.mappo.elements.nn import *
| 18.5 | 36 | 0.783784 | 6 | 37 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9afa3ef8f81fe915d0ce8ebc3c822d090a29228 | 20,697 | py | Python | packages/core/minos-microservice-aggregate/tests/test_aggregate/test_snapshots/test_pg/test_queries.py | sorasful/minos-python | 1189330eebf6444627a2af6b29f347670f95a4dd | [
"MIT"
] | null | null | null | packages/core/minos-microservice-aggregate/tests/test_aggregate/test_snapshots/test_pg/test_queries.py | sorasful/minos-python | 1189330eebf6444627a2af6b29f347670f95a4dd | [
"MIT"
] | null | null | null | packages/core/minos-microservice-aggregate/tests/test_aggregate/test_snapshots/test_pg/test_queries.py | sorasful/minos-python | 1189330eebf6444627a2af6b29f347670f95a4dd | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import (
MagicMock,
patch,
)
from uuid import (
uuid4,
)
import aiopg
from psycopg2.extras import (
Json,
)
from psycopg2.sql import (
SQL,
Literal,
Placeholder,
)
from minos.aggregate import (
IS_REPOSITORY_SERIALIZATION_CONTEXT_VAR,
Condition,
Ordering,
PostgreSqlSnapshotQueryBuilder,
)
from minos.aggregate.snapshots.pg.queries import (
_SELECT_ENTRIES_QUERY,
_SELECT_TRANSACTION_CHUNK,
)
from minos.common import (
NULL_UUID,
)
from minos.common.testing import (
PostgresAsyncTestCase,
)
from tests.utils import (
BASE_PATH,
)
class TestPostgreSqlSnapshotQueryBuilder(PostgresAsyncTestCase):
CONFIG_FILE_PATH = BASE_PATH / "test_config.yml"
def setUp(self) -> None:
super().setUp()
self.classname = "path.to.Product"
self.base_parameters = {
"name": self.classname,
"transaction_uuid_1": NULL_UUID,
}
self.base_select = _SELECT_ENTRIES_QUERY.format(
from_parts=_SELECT_TRANSACTION_CHUNK.format(
index=Literal(1), transaction_uuid=Placeholder("transaction_uuid_1")
)
)
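        # Most tests below join this `base_select` to an expected condition via
        # SQL(" WHERE "); only test_build_with_transactions rebuilds the FROM
        # part itself, to cover two transaction chunks.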
def test_constructor(self):
qb = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE)
self.assertEqual(self.classname, qb.name)
self.assertEqual(Condition.TRUE, qb.condition)
self.assertEqual(None, qb.ordering)
self.assertEqual(None, qb.limit)
self.assertFalse(qb.exclude_deleted)
def test_constructor_full(self):
transaction_uuids = (NULL_UUID, uuid4())
qb = PostgreSqlSnapshotQueryBuilder(
self.classname, Condition.TRUE, Ordering.ASC("name"), 10, transaction_uuids, True
)
self.assertEqual(self.classname, qb.name)
self.assertEqual(Condition.TRUE, qb.condition)
self.assertEqual(Ordering.ASC("name"), qb.ordering)
self.assertEqual(10, qb.limit)
self.assertEqual(transaction_uuids, qb.transaction_uuids)
self.assertTrue(qb.exclude_deleted)
def test_build_submitting_context_var(self):
builder = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE)
def _fn():
self.assertEqual(True, IS_REPOSITORY_SERIALIZATION_CONTEXT_VAR.get())
mock = MagicMock(side_effect=_fn)
builder._build = mock
self.assertEqual(False, IS_REPOSITORY_SERIALIZATION_CONTEXT_VAR.get())
builder.build()
self.assertEqual(False, IS_REPOSITORY_SERIALIZATION_CONTEXT_VAR.get())
self.assertEqual(1, mock.call_count)
def test_build_raises(self):
with self.assertRaises(ValueError):
# noinspection PyTypeChecker
PostgreSqlSnapshotQueryBuilder(self.classname, True).build()
async def test_build_with_transactions(self):
transaction_uuids = (NULL_UUID, uuid4())
observed = PostgreSqlSnapshotQueryBuilder(
self.classname, Condition.TRUE, transaction_uuids=transaction_uuids
).build()
expected_query = SQL(" WHERE ").join(
[
_SELECT_ENTRIES_QUERY.format(
from_parts=SQL(" UNION ALL ").join(
[
_SELECT_TRANSACTION_CHUNK.format(
index=Literal(1), transaction_uuid=Placeholder("transaction_uuid_1")
),
_SELECT_TRANSACTION_CHUNK.format(
index=Literal(2), transaction_uuid=Placeholder("transaction_uuid_2")
),
]
)
),
SQL("TRUE"),
]
)
expected_parameters = self.base_parameters | {"transaction_uuid_2": transaction_uuids[1]}
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_true(self):
condition = Condition.TRUE
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("TRUE")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_false(self):
condition = Condition.FALSE
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("FALSE")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_uuid(self):
uuid = uuid4()
condition = Condition.EQUAL("uuid", uuid)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('("uuid" = %(hello)s)')])
expected_parameters = {"hello": str(uuid)} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_version(self):
condition = Condition.EQUAL("version", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('("version" = %(hello)s)')])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_created_at(self):
condition = Condition.EQUAL("created_at", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('("created_at" = %(hello)s)')])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_updated_at(self):
condition = Condition.EQUAL("updated_at", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('("updated_at" = %(hello)s)')])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_lower(self):
condition = Condition.LOWER("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' < %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_lower_equal(self):
condition = Condition.LOWER_EQUAL("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' <= %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_greater(self):
condition = Condition.GREATER("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' > %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_greater_equal(self):
condition = Condition.GREATER_EQUAL("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' >= %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_equal(self):
condition = Condition.EQUAL("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' = %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_not_equal(self):
condition = Condition.NOT_EQUAL("age", 1)
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' <> %(hello)s::jsonb)")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_in(self):
condition = Condition.IN("age", [1, 2, 3])
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(data#>'{age}' IN %(hello)s::jsonb)")])
expected_parameters = {"hello": (1, 2, 3)} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_in_empty(self):
condition = Condition.IN("age", [])
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("FALSE")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_not(self):
condition = Condition.NOT(Condition.LOWER("age", 1))
with patch("minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello"]):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("(NOT (data#>'{age}' < %(hello)s::jsonb))")])
expected_parameters = {"hello": 1} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_and(self):
condition = Condition.AND(Condition.LOWER("age", 1), Condition.LOWER("level", 3))
with patch(
"minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello", "goodbye"]
):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join(
[self.base_select, SQL("((data#>'{age}' < %(hello)s::jsonb) AND (data#>'{level}' < %(goodbye)s::jsonb))")]
)
expected_parameters = {"hello": 1, "goodbye": 3} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_or(self):
condition = Condition.OR(Condition.LOWER("age", 1), Condition.LOWER("level", 3))
with patch(
"minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["hello", "goodbye"]
):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition).build()
expected_query = SQL(" WHERE ").join(
[self.base_select, SQL("((data#>'{age}' < %(hello)s::jsonb) OR (data#>'{level}' < %(goodbye)s::jsonb))")]
)
expected_parameters = {"hello": 1, "goodbye": 3} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_exclude_deleted(self):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, exclude_deleted=True).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("TRUE AND (data IS NOT NULL)")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_ordering_asc(self):
ordering = Ordering.ASC("created_at")
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, ordering).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('TRUE ORDER BY "created_at" ASC')])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_fixed_ordering_desc(self):
ordering = Ordering.DESC("created_at")
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, ordering).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL('TRUE ORDER BY "created_at" DESC')])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_ordering_asc(self):
ordering = Ordering.ASC("name")
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, ordering).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("TRUE ORDER BY data#>'{name}' ASC")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_ordering_desc(self):
ordering = Ordering.DESC("name")
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, ordering).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("TRUE ORDER BY data#>'{name}' DESC")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_limit(self):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, Condition.TRUE, limit=10).build()
expected_query = SQL(" WHERE ").join([self.base_select, SQL("TRUE LIMIT 10")])
expected_parameters = self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def test_build_complex(self):
condition = Condition.AND(
Condition.EQUAL("inventory.amount", 0),
Condition.OR(Condition.EQUAL("title", "Fanta Zero"), Condition.GREATER("version", 1)),
)
ordering = Ordering.DESC("updated_at")
limit = 100
with patch(
"minos.aggregate.PostgreSqlSnapshotQueryBuilder.generate_random_str", side_effect=["one", "two", "three"]
):
observed = PostgreSqlSnapshotQueryBuilder(self.classname, condition, ordering, limit).build()
expected_query = SQL(" WHERE ").join(
[
self.base_select,
SQL(
"((data#>'{inventory,amount}' = %(one)s::jsonb) AND ((data#>'{title}' = %(two)s::jsonb) OR "
'("version" > %(three)s))) '
'ORDER BY "updated_at" DESC '
"LIMIT 100"
),
]
)
expected_parameters = {"one": 0, "three": 1, "two": "Fanta Zero"} | self.base_parameters
self.assertEqual(await self._flatten_query(expected_query), await self._flatten_query(observed[0]))
self.assertEqual(self._flatten_parameters(expected_parameters), self._flatten_parameters(observed[1]))
async def _flatten_query(self, query) -> str:
async with aiopg.connect(**self.snapshot_db) as connection:
return query.as_string(connection.raw)
@staticmethod
def _flatten_parameters(parameters) -> dict:
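        # psycopg2's Json wrapper stores the wrapped value on `.adapted`;
        # normalise parameters to plain Python values before comparison.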
return {k: (v if not isinstance(v, Json) else v.adapted) for k, v in parameters.items()}
if __name__ == "__main__":
unittest.main()
| 49.396181 | 118 | 0.692468 | 2,235 | 20,697 | 6.15302 | 0.070694 | 0.079988 | 0.058173 | 0.076353 | 0.834788 | 0.802938 | 0.764252 | 0.748691 | 0.748691 | 0.740256 | 0 | 0.006999 | 0.185389 | 20,697 | 418 | 119 | 49.514354 | 0.80866 | 0.001256 | 0 | 0.419255 | 0 | 0.009317 | 0.126131 | 0.052446 | 0 | 0 | 0 | 0 | 0.204969 | 1 | 0.021739 | false | 0 | 0.034161 | 0.003106 | 0.068323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a0f142c6c45bb56386646d30a6c969a618b7025 | 22 | py | Python | python_estat/__init__.py | jmdc-dkanazawa/python-eStat | 18da7e02b36129cf6488cf3aaec5c5d1c9a74dcb | [
"BSD-2-Clause"
] | null | null | null | python_estat/__init__.py | jmdc-dkanazawa/python-eStat | 18da7e02b36129cf6488cf3aaec5c5d1c9a74dcb | [
"BSD-2-Clause"
] | null | null | null | python_estat/__init__.py | jmdc-dkanazawa/python-eStat | 18da7e02b36129cf6488cf3aaec5c5d1c9a74dcb | [
"BSD-2-Clause"
] | null | null | null | from .estat import *
| 11 | 21 | 0.681818 | 3 | 22 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 1 | 22 | 22 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbee6ebdc61125a4e0db1bda79fdfb3eb586df7f | 36 | py | Python | code/appendix_1-2.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | 1 | 2022-03-29T13:50:12.000Z | 2022-03-29T13:50:12.000Z | code/appendix_1-2.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | null | null | null | code/appendix_1-2.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | null | null | null | x, y = 1, 2
x, y = y, x
print(x, y)
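# Tuple unpacking performs the swap in one statement; the script prints "2 1".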
| 9 | 11 | 0.416667 | 11 | 36 | 1.363636 | 0.454545 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.333333 | 36 | 3 | 12 | 12 | 0.541667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e00f08e12d8d5258b27e59b3eae1b2c64dea1354 | 7,463 | py | Python | examples/python/worker_schedule_sat.py | prezaei85/or-tools | 8ae61b6feb64c6193b4706535f8d06ee6e4e7270 | [
"Apache-2.0"
] | 3 | 2021-12-11T12:30:09.000Z | 2021-12-30T09:49:45.000Z | examples/python/worker_schedule_sat.py | kamyu104/or-tools | 8ae61b6feb64c6193b4706535f8d06ee6e4e7270 | [
"Apache-2.0"
] | null | null | null | examples/python/worker_schedule_sat.py | kamyu104/or-tools | 8ae61b6feb64c6193b4706535f8d06ee6e4e7270 | [
"Apache-2.0"
] | null | null | null | # Copyright 2010-2017 Google
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ortools.sat.python import cp_model
def schedule():
# Input data.
positions = [
1, 2, 8, 10, 5, 3, 4, 3, 6, 6, 4, 5, 4, 3, 4, 4, 3, 4, 2, 1, 0, 0, 0, 0,
1, 2, 9, 9, 4, 3, 4, 3, 5, 4, 5, 2, 5, 6, 6, 7, 4, 2, 1, 0, 0, 0, 0, 0, 0,
2, 7, 6, 5, 2, 4, 4, 6, 6, 4, 5, 5, 5, 7, 5, 4, 4, 2, 3, 1, 0, 0, 0, 1, 2,
9, 7, 2, 2, 4, 2, 4, 5, 3, 2, 6, 7, 5, 6, 4, 4, 2, 1, 0, 0, 0, 0, 2, 2, 8,
8, 6, 3, 3, 3, 10, 9, 6, 3, 3, 4, 5, 4, 5, 4, 2, 1, 0, 0, 0, 0, 1, 2, 9,
5, 5, 4, 5, 2, 5, 7, 5, 3, 4, 8, 4, 4, 2, 3, 1, 0, 0, 0, 0, 0, 1, 2, 10,
5, 5, 4, 5, 2, 4, 6, 7, 4, 4, 5, 4, 4, 3, 3, 2, 1, 0, 0, 0, 0
]
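  # `positions` has 168 entries, which reads as 7 days x 24 hourly staffing
  # demands (an inference from the data shape, not stated in the original).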
possible_shifts = [[
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 40
], [
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 40
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 40
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 16
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 6, 16
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 7, 16
], [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 8, 40
]]
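  # Each row of `possible_shifts` appears to carry one 0/1 coverage flag per
  # time slot, followed by two trailing fields (a shift id and weekly hours,
  # e.g. "1, 40"). Only indices in range(len(positions)) are read by the
  # constraints below, so the trailing pair never enters the model.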
# Useful numbers.
num_slots = len(positions)
all_slots = range(num_slots)
num_shifts = len(possible_shifts)
all_shifts = range(num_shifts)
min_number_of_workers = [5 * x for x in positions]
num_workers = 300
# Model the problem.
model = cp_model.CpModel()
workers_per_shift = [model.NewIntVar(0, num_workers, 'shift[%i]' % i)
for i in all_shifts]
# Satisfy min requirements.
for slot in all_slots:
model.Add(sum(workers_per_shift[shift] * possible_shifts[shift][slot]
for shift in all_shifts) >= min_number_of_workers[slot])
# Create the objective variable.
objective = model.NewIntVar(0, num_workers, 'objective')
# Link the objective.
model.Add(sum(workers_per_shift) == objective)
# Minimize.
model.Minimize(objective)
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
print('Objective value = %i' % solver.ObjectiveValue())
print('Statistics')
print(' - conflicts : %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time : %f ms' % solver.WallTime())
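    # A hedged extension: the per-shift head counts can be read back with the
    # standard CpSolver accessor, e.g.
    #   for i in all_shifts:
    #     print('shift %i: %i workers' % (i, solver.Value(workers_per_shift[i])))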
if __name__ == '__main__':
schedule()
| 50.425676 | 80 | 0.405467 | 1,955 | 7,463 | 1.527877 | 0.070077 | 0.838299 | 1.213257 | 1.558755 | 0.590224 | 0.551724 | 0.529963 | 0.523937 | 0.518581 | 0.517576 | 0 | 0.349737 | 0.338336 | 7,463 | 147 | 81 | 50.768707 | 0.255164 | 0.091384 | 0 | 0.458716 | 0 | 0 | 0.016714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009174 | false | 0 | 0.009174 | 0 | 0.018349 | 0.045872 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e01fb025cfa6baa9bc26f8d88e2f4410a25b8cb5 | 30,192 | py | Python | rootfs/usr/lib/python3/dist-packages/pygame/tests/gfxdraw_test.py | kappaIO-Dev/kappaIO-sdk-armhf-crosscompile | 66fc5fc21e6235f7a3be72a7ccac68e2224b7fb2 | [
"MIT"
] | 3 | 2016-04-11T00:41:52.000Z | 2016-06-11T03:58:12.000Z | rootfs/usr/lib/python3/dist-packages/pygame/tests/gfxdraw_test.py | kappaIO-Dev/kappaIO-sdk-armhf-crosscompile | 66fc5fc21e6235f7a3be72a7ccac68e2224b7fb2 | [
"MIT"
] | 5 | 2016-02-11T01:16:51.000Z | 2016-04-08T06:10:28.000Z | rootfs/usr/lib/python3/dist-packages/pygame/tests/gfxdraw_test.py | kappaIO-Dev/kappaIO-sdk-armhf-crosscompile | 66fc5fc21e6235f7a3be72a7ccac68e2224b7fb2 | [
"MIT"
] | 1 | 2020-04-22T22:53:39.000Z | 2020-04-22T22:53:39.000Z | # Two unit tests fail! Disable to allow automated builds to continue.
__tags__ = ('ignore', 'subprocess_ignore')
if __name__ == '__main__':
import sys
import os
pkg_dir = os.path.split(os.path.abspath(__file__))[0]
parent_dir, pkg_name = os.path.split(pkg_dir)
is_pygame_pkg = (pkg_name == 'tests' and
os.path.split(parent_dir)[1] == 'pygame')
if not is_pygame_pkg:
sys.path.insert(0, parent_dir)
else:
is_pygame_pkg = __name__.startswith('pygame.tests.')
if is_pygame_pkg:
from pygame.tests.test_utils import unittest
else:
from test.test_utils import unittest
import pygame
import pygame.gfxdraw
from pygame.locals import *
def intensity(c, i):
"""Return color c changed by intensity i
For 0 <= i <= 127 the color is a shade, with 0 being black, 127 being the
unaltered color.
For 128 <= i <= 255 the color is a tint, with 255 being white, 128 the
unaltered color.
"""
r, g, b = c[0:3]
if 0 <= i <= 127:
# Darken
return ((r * i) // 127, (g * i) // 127, (b * i) // 127)
    # Lighten: use (i - 128) so that i == 128 leaves the color unaltered and
    # i == 255 gives white, matching the tint behaviour in the docstring.
    return (r + ((255 - r) * (i - 128)) // 127,
            g + ((255 - g) * (i - 128)) // 127,
            b + ((255 - b) * (i - 128)) // 127)
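# Worked examples for intensity() (sanity notes, not assertions):
#   intensity((128, 64, 8), 0)   -> (0, 0, 0)         full shade is black
#   intensity((128, 64, 8), 127) -> (128, 64, 8)      shade endpoint: unaltered
#   intensity((128, 64, 8), 128) -> (128, 64, 8)      tint endpoint: unaltered
#   intensity((128, 64, 8), 255) -> (255, 255, 255)   full tint is white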
class GfxdrawDefaultTest( unittest.TestCase ):
is_started = False
foreground_color = (128, 64, 8)
background_color = (255, 255, 255)
def make_palette(base_color):
"""Return color palette that is various intensities of base_color"""
# Need this function for Python 3.x so the base_color
# is within the scope of the list comprehension.
return [intensity(base_color, i) for i in range(0, 256)]
default_palette = make_palette(foreground_color)
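    # With the default foreground (128, 64, 8), palette index 127 reproduces
    # the foreground exactly and index 0 is black (see intensity() above).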
default_size = (100, 100)
def check_at(self, surf, posn, color):
sc = surf.get_at(posn)
fail_msg = ("%s != %s at %s, bitsize: %i, flags: %i, masks: %s" %
(sc, color, posn, surf.get_bitsize(), surf.get_flags(),
surf.get_masks()))
self.failUnlessEqual(sc, color, fail_msg)
def check_not_at(self, surf, posn, color):
sc = surf.get_at(posn)
fail_msg = ("%s != %s at %s, bitsize: %i, flags: %i, masks: %s" %
(sc, color, posn, surf.get_bitsize(), surf.get_flags(),
surf.get_masks()))
self.failIfEqual(sc, color, fail_msg)
def setUp(self):
Surface = pygame.Surface
size = self.default_size
palette = self.default_palette
if not self.is_started:
# Necessary for Surface.set_palette.
pygame.init()
pygame.display.set_mode((1, 1))
# Create test surfaces
self.surfaces = [Surface(size, 0, 8),
Surface(size, 0, 16),
Surface(size, 0, 24),
Surface(size, 0, 32),
Surface(size, SRCALPHA, 16),
Surface(size, SRCALPHA, 32)]
self.surfaces[0].set_palette(palette)
# Special pixel formats
for i in range(1, 6):
s = self.surfaces[i]
flags = s.get_flags()
bitsize = s.get_bitsize()
masks = s.get_masks()
if flags:
masks = (masks[1], masks[2], masks[3], masks[0])
else:
masks = (masks[1], masks[2], masks[0], masks[3])
self.surfaces.append(Surface(size, flags, bitsize, masks))
for surf in self.surfaces:
surf.fill(self.background_color)
def test_pixel(self):
"""pixel(surface, x, y, color): return None"""
fg = self.foreground_color
bg = self.background_color
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.pixel(surf, 2, 2, fg)
for x in range(1, 4):
for y in range(1, 4):
if x == 2 and y == 2:
self.check_at(surf, (x, y), fg_adjusted)
else:
self.check_at(surf, (x, y), bg_adjusted)
def test_hline(self):
"""hline(surface, x1, x2, y, color): return None"""
fg = self.foreground_color
bg = self.background_color
startx = 10
stopx = 80
y = 50
fg_test_points = [(startx, y), (stopx, y), ((stopx - startx) // 2, y)]
bg_test_points = [(startx - 1, y), (stopx + 1, y),
(startx, y - 1), (startx, y + 1),
(stopx, y - 1), (stopx, y + 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.hline(surf, startx, stopx, y, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_vline(self):
"""vline(surface, x, y1, y2, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 50
starty = 10
stopy = 80
fg_test_points = [(x, starty), (x, stopy), (x, (stopy - starty) // 2)]
bg_test_points = [(x, starty - 1), (x, stopy + 1),
(x - 1, starty), (x + 1, starty),
(x - 1, stopy), (x + 1, stopy)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.vline(surf, x, starty, stopy, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_rectangle(self):
"""rectangle(surface, rect, color): return None"""
fg = self.foreground_color
bg = self.background_color
rect = pygame.Rect(10, 15, 55, 62)
rect_tuple = tuple(rect)
fg_test_points = [rect.topleft,
(rect.right - 1, rect.top),
(rect.left, rect.bottom - 1),
(rect.right - 1, rect.bottom - 1)]
bg_test_points = [(rect.left - 1, rect.top - 1),
(rect.left + 1, rect.top + 1),
(rect.right, rect.top - 1),
(rect.right - 2, rect.top + 1),
(rect.left - 1, rect.bottom),
(rect.left + 1, rect.bottom - 2),
(rect.right, rect.bottom),
(rect.right - 2, rect.bottom - 2)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.rectangle(surf, rect, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
surf.fill(bg)
pygame.gfxdraw.rectangle(surf, rect_tuple, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_box(self):
"""box(surface, rect, color): return None"""
fg = self.foreground_color
bg = self.background_color
rect = pygame.Rect(10, 15, 55, 62)
rect_tuple = tuple(rect)
fg_test_points = [rect.topleft,
(rect.left + 1, rect.top + 1),
(rect.right - 1, rect.top),
(rect.right - 2, rect.top + 1),
(rect.left, rect.bottom - 1),
(rect.left + 1, rect.bottom - 2),
(rect.right - 1, rect.bottom - 1),
(rect.right - 2, rect.bottom - 2)]
bg_test_points = [(rect.left - 1, rect.top - 1),
(rect.right, rect.top - 1),
(rect.left - 1, rect.bottom),
(rect.right, rect.bottom)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.box(surf, rect, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
surf.fill(bg)
pygame.gfxdraw.box(surf, rect_tuple, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_line(self):
"""line(surface, x1, y1, x2, y2, color): return None"""
fg = self.foreground_color
bg = self.background_color
x1 = 10
y1 = 15
x2 = 92
y2 = 77
fg_test_points = [(x1, y1), (x2, y2)]
bg_test_points = [(x1 - 1, y1), (x1, y1 - 1), (x1 - 1, y1 - 1),
(x2 + 1, y2), (x2, y2 + 1), (x2 + 1, y2 + 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.line(surf, x1, y1, x2, y2, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_circle(self):
"""circle(surface, x, y, r, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
r = 30
fg_test_points = [(x, y - r),
(x, y + r),
(x - r, y),
(x + r, y)]
bg_test_points = [(x, y),
(x, y - r + 1),
(x, y - r - 1),
(x, y + r + 1),
(x, y + r - 1),
(x - r - 1, y),
(x - r + 1, y),
(x + r + 1, y),
(x + r - 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.circle(surf, x, y, r, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_arc(self):
"""arc(surface, x, y, r, start, end, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
r = 30
start = 0 # +x direction, but not (x + r, y) (?)
end = 90 # -y direction, including (x, y + r)
fg_test_points = [(x, y + r), (x + r, y + 1)]
bg_test_points = [(x, y),
(x, y - r),
(x - r, y),
(x, y + r + 1),
(x, y + r - 1),
(x - 1, y + r),
(x + r + 1, y),
(x + r - 1, y),
(x + r, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.arc(surf, x, y, r, start, end, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_aacircle(self):
"""aacircle(surface, x, y, r, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
r = 30
fg_test_points = [(x, y - r),
(x, y + r),
(x - r, y),
(x + r, y)]
bg_test_points = [(x, y),
(x, y - r + 1),
(x, y - r - 1),
(x, y + r + 1),
(x, y + r - 1),
(x - r - 1, y),
(x - r + 1, y),
(x + r + 1, y),
(x + r - 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.aacircle(surf, x, y, r, fg)
for posn in fg_test_points:
self.check_not_at(surf, posn, bg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_filled_circle(self):
"""filled_circle(surface, x, y, r, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
r = 30
fg_test_points = [(x, y - r),
(x, y - r + 1),
(x, y + r),
(x, y + r - 1),
(x - r, y),
(x - r + 1, y),
(x + r, y),
(x + r - 1, y),
(x, y)]
bg_test_points = [(x, y - r - 1),
(x, y + r + 1),
(x - r - 1, y),
(x + r + 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.filled_circle(surf, x, y, r, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_ellipse(self):
"""ellipse(surface, x, y, rx, ry, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
rx = 30
ry = 35
fg_test_points = [(x, y - ry),
(x, y + ry),
(x - rx, y),
(x + rx, y)]
bg_test_points = [(x, y),
(x, y - ry + 1),
(x, y - ry - 1),
(x, y + ry + 1),
(x, y + ry - 1),
(x - rx - 1, y),
(x - rx + 1, y),
(x + rx + 1, y),
(x + rx - 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.ellipse(surf, x, y, rx, ry, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_aaellipse(self):
"""aaellipse(surface, x, y, rx, ry, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
rx = 30
ry = 35
fg_test_points = [(x, y - ry),
(x, y + ry),
(x - rx, y),
(x + rx, y)]
bg_test_points = [(x, y),
(x, y - ry + 1),
(x, y - ry - 1),
(x, y + ry + 1),
(x, y + ry - 1),
(x - rx - 1, y),
(x - rx + 1, y),
(x + rx + 1, y),
(x + rx - 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.aaellipse(surf, x, y, rx, ry, fg)
for posn in fg_test_points:
self.check_not_at(surf, posn, bg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_filled_ellipse(self):
"""filled_ellipse(surface, x, y, rx, ry, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
rx = 30
ry = 35
fg_test_points = [(x, y - ry),
(x, y - ry + 1),
(x, y + ry),
(x, y + ry - 1),
(x - rx, y),
(x - rx + 1, y),
(x + rx, y),
(x + rx - 1, y),
(x, y)]
bg_test_points = [(x, y - ry - 1),
(x, y + ry + 1),
(x - rx - 1, y),
(x + rx + 1, y)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.filled_ellipse(surf, x, y, rx, ry, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_pie(self):
"""pie(surface, x, y, r, start, end, color): return None"""
fg = self.foreground_color
bg = self.background_color
x = 45
y = 40
r = 30
start = 0 # +x direction, including (x + r, y)
end = 90 # -y direction, but not (x, y + r) (?)
fg_test_points = [(x, y),
(x + 1, y),
(x, y + 1),
(x + r, y)]
bg_test_points = [(x - 1, y),
(x, y - 1),
(x - 1, y - 1),
(x + 1, y + 1),
(x + r + 1, y),
(x + r, y - 1),
(x, y + r)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.pie(surf, x, y, r, start, end, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_trigon(self):
"""trigon(surface, x1, y1, x2, y2, x3, y3, color): return None"""
fg = self.foreground_color
bg = self.background_color
x1 = 10
y1 = 15
x2 = 92
y2 = 77
x3 = 20
y3 = 60
fg_test_points = [(x1, y1), (x2, y2), (x3, y3)]
bg_test_points = [(x1 - 1, y1 - 1),
(x2 + 1, y2 + 1),
(x3 - 1, y3 + 1),
(x1 + 10, y1 + 30)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.trigon(surf, x1, y1, x2, y2, x3, y3, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_aatrigon(self):
"""aatrigon(surface, x1, y1, x2, y2, x3, y3, color): return None"""
fg = self.foreground_color
bg = self.background_color
x1 = 10
y1 = 15
x2 = 92
y2 = 77
x3 = 20
y3 = 60
fg_test_points = [(x1, y1), (x2, y2), (x3, y3)]
bg_test_points = [(x1 - 1, y1 - 1),
(x2 + 1, y2 + 1),
(x3 - 1, y3 + 1),
(x1 + 10, y1 + 30)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.aatrigon(surf, x1, y1, x2, y2, x3, y3, fg)
for posn in fg_test_points:
self.check_not_at(surf, posn, bg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_filled_trigon(self):
"""filled_trigon(surface, x1, y1, x2, y2, x3, y3, color): return None"""
fg = self.foreground_color
bg = self.background_color
x1 = 10
y1 = 15
x2 = 92
y2 = 77
x3 = 20
y3 = 60
fg_test_points = [(x1, y1), (x2, y2), (x3, y3),
(x1 + 10, y1 + 30)]
bg_test_points = [(x1 - 1, y1 - 1),
(x2 + 1, y2 + 1),
(x3 - 1, y3 + 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.filled_trigon(surf, x1, y1, x2, y2, x3, y3, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_polygon(self):
"""polygon(surface, points, color): return None"""
fg = self.foreground_color
bg = self.background_color
points = [(10, 80), (10, 15), (92, 25), (92, 80)]
fg_test_points = (points +
[(points[0][0], points[0][1] - 1),
(points[0][0] + 1, points[0][1]),
(points[3][0] - 1, points[3][1]),
(points[3][0], points[3][1] - 1),
(points[2][0], points[2][1] + 1)])
bg_test_points = [(points[0][0] - 1, points[0][1]),
(points[0][0], points[0][1] + 1),
(points[0][0] - 1, points[0][1] + 1),
(points[0][0] + 1, points[0][1] - 1),
(points[3][0] + 1, points[3][1]),
(points[3][0], points[3][1] + 1),
(points[3][0] + 1, points[3][1] + 1),
(points[3][0] - 1, points[3][1] - 1),
(points[2][0] + 1, points[2][1]),
(points[2][0] - 1, points[2][1] + 1),
(points[1][0] - 1, points[1][1]),
(points[1][0], points[1][1] - 1),
(points[1][0] - 1, points[1][1] - 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.polygon(surf, points, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_aapolygon(self):
"""aapolygon(surface, points, color): return None"""
fg = self.foreground_color
bg = self.background_color
points = [(10, 80), (10, 15), (92, 25), (92, 80)]
fg_test_points = (points +
[(points[0][0], points[0][1] - 1),
(points[0][0] + 1, points[0][1]),
(points[3][0] - 1, points[3][1]),
(points[3][0], points[3][1] - 1),
(points[2][0], points[2][1] + 1)])
bg_test_points = [(points[0][0] - 1, points[0][1]),
(points[0][0], points[0][1] + 1),
(points[0][0] - 1, points[0][1] + 1),
(points[0][0] + 1, points[0][1] - 1),
(points[3][0] + 1, points[3][1]),
(points[3][0], points[3][1] + 1),
(points[3][0] + 1, points[3][1] + 1),
(points[3][0] - 1, points[3][1] - 1),
(points[2][0] + 1, points[2][1]),
(points[2][0] - 1, points[2][1] + 1),
(points[1][0] - 1, points[1][1]),
(points[1][0], points[1][1] - 1),
(points[1][0] - 1, points[1][1] - 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.aapolygon(surf, points, fg)
for posn in fg_test_points:
self.check_not_at(surf, posn, bg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_filled_polygon(self):
"""filled_polygon(surface, points, color): return None"""
fg = self.foreground_color
bg = self.background_color
points = [(10, 80), (10, 15), (92, 25), (92, 80)]
fg_test_points = (points +
[(points[0][0], points[0][1] - 1),
(points[0][0] + 1, points[0][1]),
(points[0][0] + 1, points[0][1] - 1),
(points[3][0] - 1, points[3][1]),
(points[3][0], points[3][1] - 1),
(points[3][0] - 1, points[3][1] - 1),
(points[2][0], points[2][1] + 1),
(points[2][0] - 1, points[2][1] + 1)])
bg_test_points = [(points[0][0] - 1, points[0][1]),
(points[0][0], points[0][1] + 1),
(points[0][0] - 1, points[0][1] + 1),
(points[3][0] + 1, points[3][1]),
(points[3][0], points[3][1] + 1),
(points[3][0] + 1, points[3][1] + 1),
(points[2][0] + 1, points[2][1]),
(points[1][0] - 1, points[1][1]),
(points[1][0], points[1][1] - 1),
(points[1][0] - 1, points[1][1] - 1)]
for surf in self.surfaces:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.filled_polygon(surf, points, fg)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
def test_textured_polygon(self):
"""textured_polygon(surface, points, texture, tx, ty): return None"""
w, h = self.default_size
fg = self.foreground_color
bg = self.background_color
tx = 0
ty = 0
texture = pygame.Surface((w + tx, h + ty), 0, 24)
texture.fill(fg, (0, 0, w, h))
points = [(10, 80), (10, 15), (92, 25), (92, 80)]
        # Don't know how to really check this as border points may
# or may not be included in the textured polygon.
fg_test_points = [(points[1][0] + 30, points[1][1] + 40)]
bg_test_points = [(points[0][0] - 1, points[0][1]),
(points[0][0], points[0][1] + 1),
(points[0][0] - 1, points[0][1] + 1),
(points[3][0] + 1, points[3][1]),
(points[3][0], points[3][1] + 1),
(points[3][0] + 1, points[3][1] + 1),
(points[2][0] + 1, points[2][1]),
(points[1][0] - 1, points[1][1]),
(points[1][0], points[1][1] - 1),
(points[1][0] - 1, points[1][1] - 1)]
for surf in self.surfaces[1:]:
fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
pygame.gfxdraw.textured_polygon(surf, points, texture, -tx, -ty)
for posn in fg_test_points:
self.check_at(surf, posn, fg_adjusted)
for posn in bg_test_points:
self.check_at(surf, posn, bg_adjusted)
# Alpha blit to 8 bits-per-pixel surface forbidden.
texture = pygame.Surface(self.default_size, SRCALPHA, 32)
self.failUnlessRaises(ValueError,
pygame.gfxdraw.textured_polygon,
self.surfaces[0],
points,
texture, 0, 0)

    def test_bezier(self):
        """bezier(surface, points, steps, color): return None"""
        fg = self.foreground_color
        bg = self.background_color
        points = [(10, 50), (25, 15), (60, 80), (92, 30)]
        fg_test_points = [points[0], points[3]]
        bg_test_points = [(points[0][0] - 1, points[0][1]),
                          (points[3][0] + 1, points[3][1]),
                          (points[1][0], points[1][1] + 3),
                          (points[2][0], points[2][1] - 3)]
        for surf in self.surfaces:
            fg_adjusted = surf.unmap_rgb(surf.map_rgb(fg))
            bg_adjusted = surf.unmap_rgb(surf.map_rgb(bg))
            pygame.gfxdraw.bezier(surf, points, 30, fg)
            for posn in fg_test_points:
                self.check_at(surf, posn, fg_adjusted)
            for posn in bg_test_points:
                self.check_at(surf, posn, bg_adjusted)


if __name__ == '__main__':
    unittest.main()
| 42.345021 | 80 | 0.449059 | 3,828 | 30,192 | 3.390543 | 0.059039 | 0.056091 | 0.032668 | 0.06734 | 0.772556 | 0.763387 | 0.743124 | 0.733647 | 0.714616 | 0.70591 | 0 | 0.057088 | 0.418654 | 30,192 | 712 | 81 | 42.404494 | 0.682372 | 0.065282 | 0 | 0.727417 | 0 | 0.00317 | 0.005737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042789 | false | 0 | 0.011094 | 0 | 0.068146 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0eb564f1518073d55001985e153edc28c1aef04a | 285 | py | Python | tests/correctness/framework/BadTargetChars/test1.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 9 | 2017-02-06T16:45:46.000Z | 2021-12-05T09:42:58.000Z | tests/correctness/framework/BadTargetChars/test1.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 15 | 2019-01-11T19:39:34.000Z | 2022-01-08T11:11:35.000Z | tests/correctness/framework/BadTargetChars/test1.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 5 | 2017-02-06T16:51:17.000Z | 2020-12-02T17:36:30.000Z | from xpybuild.propertysupport import *
from xpybuild.buildcommon import *
from xpybuild.pathsets import *
from xpybuild.targets.writefile import *
from xpybuild.targets.copy import *
defineOutputDirProperty('OUTPUT_DIR', None)
WriteFile('${OUTPUT_DIR}/invalidchars*and<.txt', 'Hi')
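
# The target path above deliberately contains characters ('*' and '<') that
# are invalid in file names; judging by the BadTargetChars test directory,
# xpybuild is expected to reject this target name (an inference from the test
# layout, not a documented API guarantee).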
| 25.909091 | 54 | 0.796491 | 33 | 285 | 6.818182 | 0.515152 | 0.266667 | 0.32 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 285 | 10 | 55 | 28.5 | 0.872093 | 0 | 0 | 0 | 0 | 0 | 0.164912 | 0.122807 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ec8fb380d7abc9b213375ad3d13de1348f6c9a2 | 30 | py | Python | src/quadratic_qaoa/__init__.py | sami-b95/quadratic_qaoa | 6441d0656a8fe8cbf10bd8f051d003cddf062c77 | [
"MIT"
] | null | null | null | src/quadratic_qaoa/__init__.py | sami-b95/quadratic_qaoa | 6441d0656a8fe8cbf10bd8f051d003cddf062c77 | [
"MIT"
] | null | null | null | src/quadratic_qaoa/__init__.py | sami-b95/quadratic_qaoa | 6441d0656a8fe8cbf10bd8f051d003cddf062c77 | [
"MIT"
] | null | null | null | from .quadratic_qaoa import *
| 15 | 29 | 0.8 | 4 | 30 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ef66984df4e82eb07ccb14285d7e77e5414ca6f | 28 | py | Python | common/__init__.py | zhiyanliu/swing | 7d203c3159b9c0b5837b08d3e936c1ed982c6087 | [
"Apache-2.0"
] | 3 | 2020-05-11T03:35:31.000Z | 2020-05-11T10:01:24.000Z | common/__init__.py | aws-samples/swing | 106fa7aa6874372cd3c81d9d7128f50cbbb97017 | [
"MIT-0"
] | null | null | null | common/__init__.py | aws-samples/swing | 106fa7aa6874372cd3c81d9d7128f50cbbb97017 | [
"MIT-0"
] | 1 | 2021-06-10T18:53:19.000Z | 2021-06-10T18:53:19.000Z | from common.logger import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0efbcef264200dfb080b9227f3c8cb2de959bfa7 | 156 | py | Python | test/test_setup.py | farooq-teqniqly/tqdnld | 63cf98fe4f7464e277640a0905759842d6b14307 | [
"MIT"
] | null | null | null | test/test_setup.py | farooq-teqniqly/tqdnld | 63cf98fe4f7464e277640a0905759842d6b14307 | [
"MIT"
] | null | null | null | test/test_setup.py | farooq-teqniqly/tqdnld | 63cf98fe4f7464e277640a0905759842d6b14307 | [
"MIT"
] | null | null | null | """
This file only tests that the import statements work.
"""
from tq_scroll_scrape.scroll_and_scrape import ScrollAndScrape


def test_foo():
    assert 1 == 1
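
# NOTE: the real check happens at import time: if ScrollAndScrape cannot be
# imported, pytest fails during collection before test_foo ever runs; the
# trivial assertion only gives the module one collectible, passing test.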
| 17.333333 | 62 | 0.75 | 23 | 156 | 4.869565 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.166667 | 156 | 8 | 63 | 19.5 | 0.846154 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16315ded3fe86af9b7d143218a8d740ae07658d2 | 37 | py | Python | __init__.py | vimal-dharmalingam/py_custom_spellrectify | e07b8d85deb13991a9220beae925fcfbeda314d4 | [
"MIT"
] | null | null | null | __init__.py | vimal-dharmalingam/py_custom_spellrectify | e07b8d85deb13991a9220beae925fcfbeda314d4 | [
"MIT"
] | null | null | null | __init__.py | vimal-dharmalingam/py_custom_spellrectify | e07b8d85deb13991a9220beae925fcfbeda314d4 | [
"MIT"
] | null | null | null | from .rectifier import WordCorrection | 37 | 37 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16850f6901dc5de97dc85aa2f01a5c08757b4a8b | 6,401 | py | Python | tests/core/environment_manager_test.py | Sunil-Rathore/golem | 00e51ebb987c305aa4dec5e1ced4c9069c0cc716 | [
"MIT"
] | 171 | 2018-11-08T21:42:38.000Z | 2022-03-12T10:09:50.000Z | tests/core/environment_manager_test.py | Sunil-Rathore/golem | 00e51ebb987c305aa4dec5e1ced4c9069c0cc716 | [
"MIT"
] | 106 | 2018-11-04T03:50:45.000Z | 2022-02-17T08:05:08.000Z | tests/core/environment_manager_test.py | Sunil-Rathore/golem | 00e51ebb987c305aa4dec5e1ced4c9069c0cc716 | [
"MIT"
] | 42 | 2018-11-06T16:01:03.000Z | 2022-02-06T21:41:01.000Z | import os
from golem.core import environment_manager
ENV_DATA = ('{\n'
            ' "test": {\n'
            ' "url": "http://localhost:8000"\n'
            ' },\n'
            ' "development": {\n'
            ' "url": "http://localhost:8001"\n'
            ' }\n'
            '}')

ENV_DATA_INVALID_JSON = ('{\n'
                         ' \'test\': {\n'
                         ' "url": "http://localhost:8000"\n'
                         ' }\n'
                         '}')
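
# ENV_DATA_INVALID_JSON is invalid purely because its "test" key is
# single-quoted; JSON requires double-quoted strings, so parsing it raises a
# JSONDecodeError, which the functions under test are expected to handle by
# returning an empty result.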


class TestGetEnvs:

    def test_get_envs(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write(ENV_DATA)
        envs = environment_manager.get_envs(project)
        assert len(envs) == 2
        assert 'test' in envs
        assert 'development' in envs

    def test_get_envs_empty_file(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write('')
        assert environment_manager.get_envs(project) == []

    def test_get_envs_invalid_json(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write(ENV_DATA_INVALID_JSON)
        assert environment_manager.get_envs(project) == []

    def test_get_envs_file_not_exist(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        if os.path.isfile(env_json_path):
            os.remove(env_json_path)
        assert environment_manager.get_envs(project) == []


class TestGetEnvironmentData:

    def test_get_environment_data(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write(ENV_DATA)
        result = environment_manager.get_environment_data(project)
        expected = {
            "test": {
                "url": "http://localhost:8000"
            },
            "development": {
                "url": "http://localhost:8001"
            }
        }
        assert result == expected

    def test_get_environment_data_empty_file(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write('')
        result = environment_manager.get_environment_data(project)
        assert result == {}

    def test_get_environment_data_invalid_json(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write(ENV_DATA_INVALID_JSON)
        result = environment_manager.get_environment_data(project)
        assert result == {}

    def test_get_environment_data_file_not_exist(self, project_function):
        _, project = project_function.activate()
        result = environment_manager.get_environment_data(project)
        assert result == {}


class TestGetEnvironmentsAsString:

    def test_get_environments_as_string(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write(ENV_DATA)
        result = environment_manager.get_environments_as_string(project)
        assert result == ENV_DATA

    def test_get_environments_as_string_empty_file(self, project_session):
        _, project = project_session.activate()
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path, 'w') as env_json_file:
            env_json_file.write('')
        result = environment_manager.get_environments_as_string(project)
        assert result == ''

    def test_get_environments_as_string_file_not_exist(self, project_function):
        _, project = project_function.activate()
        path = os.path.join(project_function.path, 'environments.json')
        os.remove(path)
        result = environment_manager.get_environments_as_string(project)
        assert result == ''


class TestSaveEnvironments:

    def test_save_environments(self, project_session):
        _, project = project_session.activate()
        error = environment_manager.save_environments(project, ENV_DATA)
        assert error == ''
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path) as json_file:
            file_content = json_file.read()
        assert file_content == ENV_DATA

    def test_save_environments_empty_env_data(self, project_session):
        _, project = project_session.activate()
        error = environment_manager.save_environments(project, '')
        assert error == ''
        env_json_path = os.path.join(project_session.path, 'environments.json')
        with open(env_json_path) as json_file:
            file_content = json_file.read()
        assert file_content == ''

    def test_save_environments_invalid_json(self, project_function):
        _, project = project_function.activate()
        env_json_path = os.path.join(project_function.path, 'environments.json')
        original_json = '{"test": "value"}'
        with open(env_json_path, 'w') as json_file:
            json_file.write(original_json)
        error = environment_manager.save_environments(project, ENV_DATA_INVALID_JSON)
        assert error == 'must be valid JSON'
        # assert the original environments.json file was not modified
        with open(env_json_path) as json_file:
            file_content = json_file.read()
        assert file_content == original_json
| 41.564935 | 85 | 0.644431 | 747 | 6,401 | 5.15261 | 0.088353 | 0.076383 | 0.074305 | 0.043908 | 0.826708 | 0.798909 | 0.765653 | 0.74357 | 0.722006 | 0.666147 | 0 | 0.004409 | 0.255898 | 6,401 | 153 | 86 | 41.836601 | 0.803695 | 0.009217 | 0 | 0.555556 | 0 | 0 | 0.085174 | 0 | 0 | 0 | 0 | 0 | 0.150794 | 1 | 0.111111 | false | 0 | 0.015873 | 0 | 0.15873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
16c84a8aa8fd1959f6e7ddd6c8a1ec275c98eb1c | 154 | py | Python | teacher_directory/forms.py | rkusantos/dnc_teacher_directory | f293732eebf1941df01cf0e5bcff03cef9d6abf2 | [
"MIT"
] | null | null | null | teacher_directory/forms.py | rkusantos/dnc_teacher_directory | f293732eebf1941df01cf0e5bcff03cef9d6abf2 | [
"MIT"
] | null | null | null | teacher_directory/forms.py | rkusantos/dnc_teacher_directory | f293732eebf1941df01cf0e5bcff03cef9d6abf2 | [
"MIT"
] | null | null | null | from django import forms


class InputDataForm(forms.Form):
    teachers = forms.FileField(required=False)
    pictures = forms.FileField(required=False)
| 22 | 46 | 0.766234 | 18 | 154 | 6.555556 | 0.666667 | 0.237288 | 0.372881 | 0.457627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 154 | 6 | 47 | 25.666667 | 0.893939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
16d6605f647f526676d2e82d163c59b65f293ab8 | 83 | py | Python | bot/markups/__init__.py | Im-zeus/Stickers | f2484a1ecc9a3e4a2029eaadbde4ae1b0fe74536 | [
"MIT"
] | 44 | 2019-09-02T08:27:57.000Z | 2022-02-08T09:57:55.000Z | bot/markups/__init__.py | Im-zeus/Stickers | f2484a1ecc9a3e4a2029eaadbde4ae1b0fe74536 | [
"MIT"
] | 37 | 2018-11-09T11:51:15.000Z | 2021-12-27T15:08:48.000Z | bot/markups/__init__.py | Im-zeus/Stickers | f2484a1ecc9a3e4a2029eaadbde4ae1b0fe74536 | [
"MIT"
] | 38 | 2019-03-27T21:12:23.000Z | 2022-01-08T07:57:39.000Z | from .reply_keyboards import Keyboard
from .inline_keyboards import InlineKeyboard
| 27.666667 | 44 | 0.879518 | 10 | 83 | 7.1 | 0.7 | 0.422535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 83 | 2 | 45 | 41.5 | 0.946667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc489dbe251172b4800f93468e62acb63d6c4992 | 10,672 | py | Python | tests/components/alarm_control_panel/test_device_action.py | joopert/home-assistant | a3f6fbb3774004b15c397c0556f98c5f7a59cb22 | [
"Apache-2.0"
] | 3 | 2020-10-23T14:39:11.000Z | 2021-02-17T14:40:17.000Z | tests/components/alarm_control_panel/test_device_action.py | joopert/home-assistant | a3f6fbb3774004b15c397c0556f98c5f7a59cb22 | [
"Apache-2.0"
] | 3 | 2021-02-08T20:54:46.000Z | 2021-09-08T02:30:04.000Z | tests/components/alarm_control_panel/test_device_action.py | joopert/home-assistant | a3f6fbb3774004b15c397c0556f98c5f7a59cb22 | [
"Apache-2.0"
] | 4 | 2020-05-30T08:19:47.000Z | 2021-05-14T11:39:19.000Z | """The tests for Alarm control panel device actions."""
import pytest

from homeassistant.components.alarm_control_panel import DOMAIN
from homeassistant.const import (
    CONF_PLATFORM,
    STATE_ALARM_ARMED_AWAY,
    STATE_ALARM_ARMED_HOME,
    STATE_ALARM_ARMED_NIGHT,
    STATE_ALARM_DISARMED,
    STATE_ALARM_TRIGGERED,
    STATE_UNKNOWN,
)
from homeassistant.setup import async_setup_component
import homeassistant.components.automation as automation
from homeassistant.helpers import device_registry

from tests.common import (
    MockConfigEntry,
    assert_lists_same,
    mock_device_registry,
    mock_registry,
    async_get_device_automations,
    async_get_device_automation_capabilities,
)


@pytest.fixture
def device_reg(hass):
    """Return an empty, loaded, registry."""
    return mock_device_registry(hass)


@pytest.fixture
def entity_reg(hass):
    """Return an empty, loaded, registry."""
    return mock_registry(hass)


async def test_get_actions(hass, device_reg, entity_reg):
    """Test we get the expected actions from an alarm_control_panel."""
    config_entry = MockConfigEntry(domain="test", data={})
    config_entry.add_to_hass(hass)
    device_entry = device_reg.async_get_or_create(
        config_entry_id=config_entry.entry_id,
        connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
    )
    entity_reg.async_get_or_create(DOMAIN, "test", "5678", device_id=device_entry.id)
    hass.states.async_set(
        "alarm_control_panel.test_5678", "attributes", {"supported_features": 15}
    )
    expected_actions = [
        {
            "domain": DOMAIN,
            "type": "arm_away",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
        {
            "domain": DOMAIN,
            "type": "arm_home",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
        {
            "domain": DOMAIN,
            "type": "arm_night",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
        {
            "domain": DOMAIN,
            "type": "disarm",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
        {
            "domain": DOMAIN,
            "type": "trigger",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
    ]
    actions = await async_get_device_automations(hass, "action", device_entry.id)
    assert_lists_same(actions, expected_actions)
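

# The "supported_features" values set above are bitmasks. A minimal reading,
# assuming the standard alarm_control_panel feature flags
# (SUPPORT_ALARM_ARM_HOME = 1, SUPPORT_ALARM_ARM_AWAY = 2,
# SUPPORT_ALARM_ARM_NIGHT = 4, SUPPORT_ALARM_TRIGGER = 8):
#
#     15 == 1 | 2 | 4 | 8  ->  arm_home, arm_away, arm_night and trigger all offered
#      4 == ARM_NIGHT      ->  only arm_night is offered (disarm is always listed)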


async def test_get_actions_arm_night_only(hass, device_reg, entity_reg):
    """Test we get the expected actions from an alarm_control_panel supporting only arm_night."""
    config_entry = MockConfigEntry(domain="test", data={})
    config_entry.add_to_hass(hass)
    device_entry = device_reg.async_get_or_create(
        config_entry_id=config_entry.entry_id,
        connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
    )
    entity_reg.async_get_or_create(DOMAIN, "test", "5678", device_id=device_entry.id)
    hass.states.async_set(
        "alarm_control_panel.test_5678", "attributes", {"supported_features": 4}
    )
    expected_actions = [
        {
            "domain": DOMAIN,
            "type": "arm_night",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
        {
            "domain": DOMAIN,
            "type": "disarm",
            "device_id": device_entry.id,
            "entity_id": "alarm_control_panel.test_5678",
        },
    ]
    actions = await async_get_device_automations(hass, "action", device_entry.id)
    assert_lists_same(actions, expected_actions)


async def test_get_action_capabilities(hass, device_reg, entity_reg):
    """Test we get the expected capabilities from an alarm_control_panel action."""
    platform = getattr(hass.components, f"test.{DOMAIN}")
    platform.init()

    config_entry = MockConfigEntry(domain="test", data={})
    config_entry.add_to_hass(hass)
    device_entry = device_reg.async_get_or_create(
        config_entry_id=config_entry.entry_id,
        connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
    )
    entity_reg.async_get_or_create(
        DOMAIN,
        "test",
        platform.ENTITIES["no_arm_code"].unique_id,
        device_id=device_entry.id,
    )
    assert await async_setup_component(hass, DOMAIN, {DOMAIN: {CONF_PLATFORM: "test"}})

    expected_capabilities = {
        "arm_away": {"extra_fields": []},
        "arm_home": {"extra_fields": []},
        "arm_night": {"extra_fields": []},
        "disarm": {
            "extra_fields": [{"name": "code", "optional": True, "type": "string"}]
        },
        "trigger": {"extra_fields": []},
    }
    actions = await async_get_device_automations(hass, "action", device_entry.id)
    assert len(actions) == 5
    for action in actions:
        capabilities = await async_get_device_automation_capabilities(
            hass, "action", action
        )
        assert capabilities == expected_capabilities[action["type"]]


async def test_get_action_capabilities_arm_code(hass, device_reg, entity_reg):
    """Test we get the expected capabilities from an alarm_control_panel action that requires an arm code."""
    platform = getattr(hass.components, f"test.{DOMAIN}")
    platform.init()

    config_entry = MockConfigEntry(domain="test", data={})
    config_entry.add_to_hass(hass)
    device_entry = device_reg.async_get_or_create(
        config_entry_id=config_entry.entry_id,
        connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
    )
    entity_reg.async_get_or_create(
        DOMAIN,
        "test",
        platform.ENTITIES["arm_code"].unique_id,
        device_id=device_entry.id,
    )
    assert await async_setup_component(hass, DOMAIN, {DOMAIN: {CONF_PLATFORM: "test"}})

    expected_capabilities = {
        "arm_away": {
            "extra_fields": [{"name": "code", "optional": True, "type": "string"}]
        },
        "arm_home": {
            "extra_fields": [{"name": "code", "optional": True, "type": "string"}]
        },
        "arm_night": {
            "extra_fields": [{"name": "code", "optional": True, "type": "string"}]
        },
        "disarm": {
            "extra_fields": [{"name": "code", "optional": True, "type": "string"}]
        },
        "trigger": {"extra_fields": []},
    }
    actions = await async_get_device_automations(hass, "action", device_entry.id)
    assert len(actions) == 5
    for action in actions:
        capabilities = await async_get_device_automation_capabilities(
            hass, "action", action
        )
        assert capabilities == expected_capabilities[action["type"]]


async def test_action(hass):
    """Test for arm, disarm and trigger actions."""
    platform = getattr(hass.components, f"test.{DOMAIN}")
    platform.init()

    assert await async_setup_component(
        hass,
        automation.DOMAIN,
        {
            automation.DOMAIN: [
                {
                    "trigger": {
                        "platform": "event",
                        "event_type": "test_event_arm_away",
                    },
                    "action": {
                        "domain": DOMAIN,
                        "device_id": "abcdefgh",
                        "entity_id": "alarm_control_panel.alarm_no_arm_code",
                        "type": "arm_away",
                    },
                },
                {
                    "trigger": {
                        "platform": "event",
                        "event_type": "test_event_arm_home",
                    },
                    "action": {
                        "domain": DOMAIN,
                        "device_id": "abcdefgh",
                        "entity_id": "alarm_control_panel.alarm_no_arm_code",
                        "type": "arm_home",
                    },
                },
                {
                    "trigger": {
                        "platform": "event",
                        "event_type": "test_event_arm_night",
                    },
                    "action": {
                        "domain": DOMAIN,
                        "device_id": "abcdefgh",
                        "entity_id": "alarm_control_panel.alarm_no_arm_code",
                        "type": "arm_night",
                    },
                },
                {
                    "trigger": {"platform": "event", "event_type": "test_event_disarm"},
                    "action": {
                        "domain": DOMAIN,
                        "device_id": "abcdefgh",
                        "entity_id": "alarm_control_panel.alarm_no_arm_code",
                        "type": "disarm",
                        "code": "1234",
                    },
                },
                {
                    "trigger": {
                        "platform": "event",
                        "event_type": "test_event_trigger",
                    },
                    "action": {
                        "domain": DOMAIN,
                        "device_id": "abcdefgh",
                        "entity_id": "alarm_control_panel.alarm_no_arm_code",
                        "type": "trigger",
                    },
                },
            ]
        },
    )
    assert await async_setup_component(hass, DOMAIN, {DOMAIN: {CONF_PLATFORM: "test"}})

    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state == STATE_UNKNOWN
    )

    hass.bus.async_fire("test_event_arm_away")
    await hass.async_block_till_done()
    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state
        == STATE_ALARM_ARMED_AWAY
    )

    hass.bus.async_fire("test_event_arm_home")
    await hass.async_block_till_done()
    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state
        == STATE_ALARM_ARMED_HOME
    )

    hass.bus.async_fire("test_event_arm_night")
    await hass.async_block_till_done()
    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state
        == STATE_ALARM_ARMED_NIGHT
    )

    hass.bus.async_fire("test_event_disarm")
    await hass.async_block_till_done()
    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state
        == STATE_ALARM_DISARMED
    )

    hass.bus.async_fire("test_event_trigger")
    await hass.async_block_till_done()
    assert (
        hass.states.get("alarm_control_panel.alarm_no_arm_code").state
        == STATE_ALARM_TRIGGERED
    )
| 34.649351 | 88 | 0.573651 | 1,124 | 10,672 | 5.080071 | 0.103203 | 0.050438 | 0.071454 | 0.042032 | 0.849737 | 0.838354 | 0.808231 | 0.78021 | 0.751489 | 0.713485 | 0 | 0.010415 | 0.307253 | 10,672 | 307 | 89 | 34.762215 | 0.761937 | 0.011151 | 0 | 0.545788 | 0 | 0 | 0.20657 | 0.065305 | 0 | 0 | 0 | 0 | 0.062271 | 1 | 0.007326 | false | 0 | 0.025641 | 0 | 0.040293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc63b506bafb5bd68d66b0427850a7d7032dcb74 | 171 | py | Python | Darlington/phase1/python Basic 1/day2 solutions/qtn8.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase1/python Basic 1/day2 solutions/qtn8.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase1/python Basic 1/day2 solutions/qtn8.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #to display first and last color
color_list = ["Red","Green","White" ,"Black"]
#print(color_list[0], color_list[len(color_list) - 1])
print(color_list[0], color_list[3]) | 28.5 | 54 | 0.707602 | 29 | 171 | 3.965517 | 0.551724 | 0.469565 | 0.243478 | 0.26087 | 0.417391 | 0.417391 | 0 | 0 | 0 | 0 | 0 | 0.025974 | 0.099415 | 171 | 6 | 55 | 28.5 | 0.720779 | 0.491228 | 0 | 0 | 0 | 0 | 0.209302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
bc7ed235c088797cec11ef55a13ab5a5306d59d2 | 150 | py | Python | ci/infra/testrunner/utils/__init__.py | manuelbuil/skuba | 71770c969f59275d6f7fb7a788635fcce6900bee | [
"Apache-2.0"
] | 72 | 2019-07-18T13:01:36.000Z | 2022-03-05T04:14:06.000Z | ci/infra/testrunner/utils/__init__.py | manuelbuil/skuba | 71770c969f59275d6f7fb7a788635fcce6900bee | [
"Apache-2.0"
] | 602 | 2019-07-18T13:48:04.000Z | 2021-09-27T14:10:30.000Z | ci/infra/testrunner/utils/__init__.py | manuelbuil/skuba | 71770c969f59275d6f7fb7a788635fcce6900bee | [
"Apache-2.0"
] | 90 | 2019-07-18T09:27:52.000Z | 2020-12-08T15:57:27.000Z | from utils.config import (BaseConfig, Constant)
from utils.format import Format
from utils.logger import Logger
from utils.utils import (Utils, step)
| 30 | 47 | 0.813333 | 22 | 150 | 5.545455 | 0.409091 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 150 | 4 | 48 | 37.5 | 0.924242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcd62b09983b4469f3db1196eec8e2c93068996c | 15,386 | py | Python | tests/assistants/test_archive_links.py | betodealmeida/nefelibata | 3f6faab6de13c01f9662f18dda529569066ba5c9 | [
"MIT"
] | 22 | 2020-01-03T23:05:36.000Z | 2021-12-29T00:43:57.000Z | tests/assistants/test_archive_links.py | betodealmeida/nefelibata | 3f6faab6de13c01f9662f18dda529569066ba5c9 | [
"MIT"
] | 18 | 2020-05-25T03:49:07.000Z | 2022-03-28T14:02:56.000Z | tests/assistants/test_archive_links.py | betodealmeida/nefelibata | 3f6faab6de13c01f9662f18dda529569066ba5c9 | [
"MIT"
] | null | null | null | import json
from datetime import datetime
from datetime import timezone
from pathlib import Path
from typing import Any
from typing import Dict
from unittest.mock import MagicMock

import requests
from freezegun import freeze_time

from nefelibata.assistants.archive_links import ArchiveLinksAssistant
from nefelibata.builders.post import PostBuilder

__author__ = "Beto Dealmeida"
__copyright__ = "Beto Dealmeida"
__license__ = "mit"

config: Dict[str, Any] = {
    "url": "https://example.com/",
    "language": "en",
    "theme": "test-theme",
    "webmention": {"endpoint": "https://webmention.io/example.com/webmention"},
}


def test_archive_links(mock_post, requests_mock):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    requests_mock.get(
        "https://web.archive.org/save/https://google.com/",
        headers={
            "Link": (
                '<https://google.com/>; rel="original", '
                '<https://web.archive.org/web/timemap/link/https://google.com/>; rel="timemap"; type="application/link-format", '
                '<https://web.archive.org/web/https://google.com/>; rel="timegate", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="first memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="last memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT"'
            ),
        },
    )
    with freeze_time("2020-01-01T00:00:00Z"):
        assistant.process_post(post)

    with open(post.file_path.with_suffix(".html")) as fp:
        contents = fp.read()

    assert (
        contents
        == """
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="article" property="og:type"/>
<meta content="Post title" property="og:title"/>
<meta content="This is the post description" property="og:description"/>
<link href="https://webmention.io/example.com/webmention" rel="webmention"/>
<link href="https://external.example.com/css/basic.css" rel="stylesheet"/>
<link href="/css/style.css" rel="stylesheet"/>
</head>
<body>
<p>Hi, there!</p>
<p>This is <a data-archive-date="2020-01-01T00:00:00+00:00" data-archive-url="https://web.archive.org/web/20200101080000/https://google.com/" href="https://google.com/">an external link</a><span class="archive">[<a href="https://web.archive.org/web/20200101080000/https://google.com/">archived</a>]</span>.
This is a link to <a href="https://example.com/">the blog itself</a>.
This is a link to <a href="https://archive.org/web">the Wayback Machine</a>.
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.</p>
</body>
</html>"""
    )
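

# A minimal sketch of how the mocked Memento "Link" header above can be taken
# apart; the assistant is assumed to use requests' built-in parser, which the
# "no_header_links" test further down also suggests:
#
#     links = requests.utils.parse_header_links(response.headers["Link"])
#     memento = next(link for link in links if link.get("rel") == "memento")
#     archived_url, archived_date = memento["url"], memento["datetime"]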


def test_archive_links_storage_exists(mock_post, mocker):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    mock_requests = MagicMock()
    mocker.patch("nefelibata.assistants.archive_links.requests", mock_requests)

    archives = {
        "https://google.com/": {
            "url": "https://web.archive.org/web/20200101000000/https://google.com/",
            "date": "2020-01-01T08:00:00+00:00",
        },
    }
    storage = post.file_path.parent / "archives.json"
    with freeze_time("2020-01-01T00:00:00Z"):
        with open(storage, "w") as fp:
            json.dump(archives, fp)

    with freeze_time("2020-01-02T00:00:00Z"):
        assistant.process_post(post)

    mock_requests.get.assert_not_called()

    assert (
        datetime.fromtimestamp(storage.stat().st_mtime).astimezone(
            timezone.utc,
        )
        == datetime(2020, 1, 1, 0, 0, tzinfo=timezone.utc)
    )
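

# Comparing the mtime of archives.json against the first frozen timestamp
# shows that the assistant treated the stored archive as fresh and did not
# rewrite the file, i.e. the storage acts as a cache.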


def test_archive_links_already_modified(mock_post, requests_mock):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    requests_mock.get(
        "https://web.archive.org/save/https://google.com/",
        headers={
            "Link": (
                '<https://google.com/>; rel="original", '
                '<https://web.archive.org/web/timemap/link/https://google.com/>; rel="timemap"; type="application/link-format", '
                '<https://web.archive.org/web/https://google.com/>; rel="timegate", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="first memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="prev memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200102080000/https://google.com/>; rel="memento"; datetime="Thu, 2 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200102080000/https://google.com/>; rel="last memento"; datetime="Thu, 2 Jan 2020 00:00:00 GMT"'
            ),
        },
    )
    with freeze_time("2020-01-02T00:00:00Z"):
        assistant.process_post(post)
    with freeze_time("2020-01-03T00:00:00Z"):
        assistant.process_post(post)

    assert (
        datetime.fromtimestamp(
            post.file_path.with_suffix(".html").stat().st_mtime,
        ).astimezone(timezone.utc)
        == datetime(2020, 1, 2, 0, 0, tzinfo=timezone.utc)
    )


def test_archive_links_update(mock_post, requests_mock):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    archives = {
        "https://google.com/": {"url": None, "date": "2020-01-01T08:00:00+00:00"},
    }
    storage = post.file_path.parent / "archives.json"
    with freeze_time("2020-01-01T00:00:00Z"):
        with open(storage, "w") as fp:
            json.dump(archives, fp)

    with freeze_time("2020-01-01T01:00:00Z"):
        assistant.process_post(post)

    assert (
        datetime.fromtimestamp(storage.stat().st_mtime).astimezone(
            timezone.utc,
        )
        == datetime(2020, 1, 1, 0, 0, tzinfo=timezone.utc)
    )

    requests_mock.get(
        "https://web.archive.org/save/https://google.com/",
        headers={
            "Link": (
                '<https://google.com/>; rel="original", '
                '<https://web.archive.org/web/timemap/link/https://google.com/>; rel="timemap"; type="application/link-format", '
                '<https://web.archive.org/web/https://google.com/>; rel="timegate", '
                '<https://web.archive.org/web/20200101080000/https://google.com/>; rel="first memento"; datetime="Wed, 1 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200102080000/https://google.com/>; rel="prev memento"; datetime="Thu, 2 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200103080000/https://google.com/>; rel="memento"; datetime="Fri, 3 Jan 2020 00:00:00 GMT", '
                '<https://web.archive.org/web/20200103080000/https://google.com/>; rel="last memento"; datetime="Fri, 3 Jan 2020 00:00:00 GMT"'
            ),
        },
    )
    with freeze_time("2020-01-03T00:00:00Z"):
        assistant.process_post(post)

    with open(storage) as fp:
        archives = json.load(fp)
    assert archives == {
        "https://google.com/": {
            "url": "https://web.archive.org/web/20200103080000/https://google.com/",
            "date": "2020-01-03T00:00:00+00:00",
        },
    }


def test_archive_links_timeout(mocker, mock_post):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    mocker.patch(
        "nefelibata.assistants.archive_links.requests.get",
        side_effect=requests.exceptions.ReadTimeout(),
    )
    with freeze_time("2020-01-01T00:00:00Z"):
        assistant.process_post(post)

    with open(post.file_path.with_suffix(".html")) as fp:
        contents = fp.read()

    assert (
        contents
        == """
<!DOCTYPE html><html lang="en">
<head>
<meta content="article" property="og:type"/>
<meta content="Post title" property="og:title"/>
<meta content="This is the post description" property="og:description"/>
<link href="https://webmention.io/example.com/webmention" rel="webmention" />
<link href="https://external.example.com/css/basic.css" rel="stylesheet">
<link href="/css/style.css" rel="stylesheet">
</head>
<body>
<p>Hi, there!</p>
<p>This is <a href="https://google.com/">an external link</a>.
This is a link to <a href="https://example.com/">the blog itself</a>.
This is a link to <a href="https://archive.org/web">the Wayback Machine</a>.
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.</p>
</body>
</html>"""
    )


def test_archive_links_no_link_header(mock_post, requests_mock):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    requests_mock.get("https://web.archive.org/save/https://google.com/")

    with freeze_time("2020-01-01T00:00:00Z"):
        assistant.process_post(post)

    with open(post.file_path.with_suffix(".html")) as fp:
        contents = fp.read()

    assert (
        contents
        == """
<!DOCTYPE html><html lang="en">
<head>
<meta content="article" property="og:type"/>
<meta content="Post title" property="og:title"/>
<meta content="This is the post description" property="og:description"/>
<link href="https://webmention.io/example.com/webmention" rel="webmention" />
<link href="https://external.example.com/css/basic.css" rel="stylesheet">
<link href="/css/style.css" rel="stylesheet">
</head>
<body>
<p>Hi, there!</p>
<p>This is <a href="https://google.com/">an external link</a>.
This is a link to <a href="https://example.com/">the blog itself</a>.
This is a link to <a href="https://archive.org/web">the Wayback Machine</a>.
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.</p>
</body>
</html>"""
    )


def test_archive_links_no_header_links(mocker, mock_post):
    root = Path("/path/to/blog")

    with freeze_time("2020-01-01T00:00:00Z"):
        post = mock_post(
            """
subject: Hello, World!
keywords: test
summary: My first post

Hi, there!

This is [an external link](https://google.com/).
This is a link to [the blog itself](https://example.com/).
This is a link to [the Wayback Machine](https://archive.org/web).
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.
""",
        )
    PostBuilder(root, config).process_post(post)
    assistant = ArchiveLinksAssistant(root, config)

    mock_requests = MagicMock()
    mock_requests.utils.parse_header_links.return_value = []
    mocker.patch("nefelibata.assistants.archive_links.requests", mock_requests)

    with freeze_time("2020-01-01T00:00:00Z"):
        assistant.process_post(post)

    with open(post.file_path.with_suffix(".html")) as fp:
        contents = fp.read()

    assert (
        contents
        == """
<!DOCTYPE html><html lang="en">
<head>
<meta content="article" property="og:type"/>
<meta content="Post title" property="og:title"/>
<meta content="This is the post description" property="og:description"/>
<link href="https://webmention.io/example.com/webmention" rel="webmention" />
<link href="https://external.example.com/css/basic.css" rel="stylesheet">
<link href="/css/style.css" rel="stylesheet">
</head>
<body>
<p>Hi, there!</p>
<p>This is <a href="https://google.com/">an external link</a>.
This is a link to <a href="https://example.com/">the blog itself</a>.
This is a link to <a href="https://archive.org/web">the Wayback Machine</a>.
This is a link to <a href="https://webmention.io/example.com/webmention" rel="webmention">send webmentions</a>.</p>
</body>
</html>"""
    )
| 35.948598 | 306 | 0.626934 | 2,055 | 15,386 | 4.632603 | 0.085158 | 0.030252 | 0.061765 | 0.03813 | 0.914916 | 0.910609 | 0.89958 | 0.887395 | 0.883088 | 0.840021 | 0 | 0.055465 | 0.204342 | 15,386 | 427 | 307 | 36.032787 | 0.722186 | 0 | 0 | 0.666667 | 0 | 0.141844 | 0.541623 | 0.057581 | 0 | 0 | 0 | 0 | 0.031915 | 1 | 0.024823 | false | 0 | 0.039007 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bcef114bbf1d04b70cf411e9ebef5f0c8a8441de | 44 | py | Python | trader/core/strategy/__init__.py | Ricky294/trader | 5f5ecc047fb3ff82476cb751ac9fc3d5fa749dc5 | [
"MIT"
] | null | null | null | trader/core/strategy/__init__.py | Ricky294/trader | 5f5ecc047fb3ff82476cb751ac9fc3d5fa749dc5 | [
"MIT"
] | null | null | null | trader/core/strategy/__init__.py | Ricky294/trader | 5f5ecc047fb3ff82476cb751ac9fc3d5fa749dc5 | [
"MIT"
] | null | null | null | # nopycln: file
from .base import Strategy
| 11 | 26 | 0.75 | 6 | 44 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 3 | 27 | 14.666667 | 0.916667 | 0.295455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c1664e0f8f1ccdefd3b6be2fbdfc1f9a19e0439 | 81,644 | py | Python | calliope/test/test_backend_pyomo.py | guidogz/Calliope_guido | 148ee39c3671e55ad3a1a2da216ee23112d16abf | [
"Apache-2.0"
] | null | null | null | calliope/test/test_backend_pyomo.py | guidogz/Calliope_guido | 148ee39c3671e55ad3a1a2da216ee23112d16abf | [
"Apache-2.0"
] | null | null | null | calliope/test/test_backend_pyomo.py | guidogz/Calliope_guido | 148ee39c3671e55ad3a1a2da216ee23112d16abf | [
"Apache-2.0"
] | null | null | null | import pytest # pylint: disable=unused-import
import numpy as np
import pyomo.core as po
import os
import collections
import logging

from itertools import product

from calliope.backend.pyomo.util import get_param
import calliope.exceptions as exceptions
from calliope.core.attrdict import AttrDict
from calliope.test.common.util import build_test_model as build_model
from calliope.test.common.util import check_error_or_warning, check_variable_exists


def check_standard_warning(info, warning):
    if warning == 'transmission':
        return check_error_or_warning(
            info,
            'dimension loc_techs_transmission and associated variables distance, '
            'lookup_remotes were empty, so have been deleted'
        )


class TestUtil:
    def test_get_param_with_timestep_existing(self):
        """
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run()
        param = get_param(
            m._backend_model,
            'resource',
            ('1::test_demand_elec', m._backend_model.timesteps[1])
        )
        assert po.value(param) == -5  # see demand_elec.csv

    def test_get_param_no_timestep_existing(self):
        """
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run()
        param = get_param(
            m._backend_model,
            'energy_eff',
            ('1::test_supply_elec', m._backend_model.timesteps[1])
        )
        assert po.value(param) == 0.9  # see test model.yaml

    def test_get_param_no_timestep_possible(self):
        """
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run()
        param = get_param(
            m._backend_model,
            'energy_cap_max',
            ('1::test_supply_elec')
        )
        assert po.value(param) == 10  # see test model.yaml

        param = get_param(
            m._backend_model,
            'cost_energy_cap',
            ('monetary', '0::test_supply_elec')
        )
        assert po.value(param) == 10

    def test_get_param_from_default(self):
        """
        """
        m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
        m.run()

        param = get_param(
            m._backend_model,
            'parasitic_eff',
            ('1::test_supply_plus', m._backend_model.timesteps[1])
        )
        assert po.value(param) == 1  # see defaults.yaml

        param = get_param(
            m._backend_model,
            'resource_cap_min',
            ('0::test_supply_plus')
        )
        assert po.value(param) == 0  # see defaults.yaml

        param = get_param(
            m._backend_model,
            'cost_resource_cap',
            ('monetary', '1::test_supply_plus')
        )
        assert po.value(param) == 0  # see defaults.yaml

    def test_get_param_no_default_defined(self):
        """
        If a default is not defined, raise KeyError
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run()
        with pytest.raises(KeyError):
            get_param(
                m._backend_model,
                'random_param',
                ('1::test_demand_elec', m._backend_model.timesteps[1])
            )
        with pytest.raises(KeyError):
            get_param(
                m._backend_model,
                'random_param',
                ('1::test_supply_elec')
            )
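

# A minimal sketch of the default-fallback behaviour exercised above; every
# name other than get_param itself is illustrative, not calliope's actual
# internals:
#
#     def get_param(backend_model, var, dims):
#         try:
#             return getattr(backend_model, var)[dims]
#         except (AttributeError, KeyError):
#             # fall back to the model-wide default; this raises KeyError
#             # if no default is defined for the parameter
#             return defaults[var]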


class TestModel:
    @pytest.mark.serial  # Cannot run in parallel with other tests
    def test_load_constraints_no_order(self):
        temp_file = os.path.join(
            os.path.dirname(__file__), '..', 'backend', 'pyomo', 'constraints',
            'temp_constraint_file_for_testing.py'
        )
        # Write an empty file
        with open(temp_file, 'w') as f:
            f.write('')

        # Should fail, since the empty .py file is included in the list, but
        # has no attribute 'ORDER'.
        with pytest.raises(AttributeError) as excinfo:
            m = build_model({}, 'simple_supply,two_hours,investment_costs')
            m.run(build_only=True)

        # We can't use `with` because of reasons related to Windows,
        # so we manually remove the temp file at the end
        os.remove(temp_file)

        assert check_error_or_warning(
            excinfo,
            "module 'calliope.backend.pyomo.constraints.temp_constraint_file_for_testing' "
            "has no attribute 'ORDER'"
        )


class TestChecks:
    @pytest.mark.parametrize('on', (True, False))
    def test_operate_cyclic_storage(self, on):
        """Cannot have cyclic storage in operate mode"""
        if on is True:
            override = {}  # cyclic storage is True by default
            m = build_model(override, 'simple_supply_and_supply_plus,operate,investment_costs')
            assert m.run_config['cyclic_storage'] is True
        elif on is False:
            override = {'run.cyclic_storage': False}
            m = build_model(override, 'simple_supply_and_supply_plus,operate,investment_costs')
            assert m.run_config['cyclic_storage'] is False
        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)
        check_warn = check_error_or_warning(warning, 'Storage cannot be cyclic in operate run mode')
        if on is True:
            assert check_warn
        elif on is False:
            assert not check_warn
        assert AttrDict.from_yaml_string(m._model_data.attrs['run_config']).cyclic_storage is False
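
    # Operate mode presumably dispatches over successive horizon windows, so
    # tying the final timestep's storage level back to the first one (cyclic
    # storage) is not meaningful there; the model warns and disables it.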

    @pytest.mark.parametrize('param', [('energy_eff'), ('resource_eff'), ('parasitic_eff')])
    def test_loading_timeseries_operate_efficiencies(self, param):
        m = build_model(
            {'techs.test_supply_plus.constraints.' + param: 'file=supply_plus_resource.csv:1'},
            'simple_supply_and_supply_plus,operate,investment_costs'
        )
        assert 'timesteps' in m._model_data[param].dims

        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)  # will fail to complete run if there's a problem

    def test_operate_group_demand_share_per_timestep_decision(self):
        """Cannot have group_demand_share_per_timestep_decision in operate mode"""
        m = build_model({}, 'simple_supply,investment_costs,operate,enable_group_demand_share_per_timestep_decision')
        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)
        assert check_error_or_warning(warning, '`demand_share_per_timestep_decision` group constraints cannot be')
        assert 'group_demand_share_per_timestep_decision' not in m._model_data

    @pytest.mark.parametrize('force', (True, False))
    def test_operate_energy_cap_min_use(self, force):
        """If we depend on a finite energy_cap, we have to error on a user failing to define it"""
        m = build_model(
            {'techs.test_supply_elec.constraints': {
                'force_resource': force, 'energy_cap_min_use': 0.1,
                'resource': 'file=supply_plus_resource.csv:1',
                'energy_cap_equals': np.inf
            }}, 'simple_supply_and_supply_plus,operate,investment_costs'
        )
        with pytest.raises(exceptions.ModelError) as error:
            with pytest.warns(exceptions.ModelWarning):
                m.run(build_only=True)

        assert check_error_or_warning(error, ['Operate mode: User must define a finite energy_cap'])

    @pytest.mark.parametrize('force', (True, False))
    def test_operate_energy_cap_resource_unit(self, force):
        """If we depend on a finite energy_cap, we have to error on a user failing to define it"""
        m = build_model(
            {'techs.test_supply_elec.constraints': {
                'force_resource': force, 'resource_unit': 'energy_per_cap',
                'resource': 'file=supply_plus_resource.csv:1',
                'energy_cap_equals': np.inf, 'energy_cap_max': np.inf
            }}, 'simple_supply_and_supply_plus,operate,investment_costs'
        )

        if force is True:
            with pytest.raises(exceptions.ModelError) as error:
                with pytest.warns(exceptions.ModelWarning) as warning:
                    m.run(build_only=True)
            assert check_error_or_warning(error, ['Operate mode: User must define a finite energy_cap'])
        elif force is False:
            with pytest.warns(exceptions.ModelWarning) as warning:
                m.run(build_only=True)

    @pytest.mark.parametrize('resource_unit,force',
                             list(product(('energy', 'energy_per_cap', 'energy_per_area'), (True, False))))
    def test_operate_resource_unit_with_resource_area(self, resource_unit, force):
        """Different resource units affect which capacities are set to infinite"""
        m = build_model(
            {'techs.test_supply_elec.constraints': {
                'resource_unit': resource_unit, 'resource_area_max': 10, 'energy_cap_max': 15,
                'resource': 'file=supply_plus_resource.csv:1', 'force_resource': force
            }}, 'simple_supply_and_supply_plus,operate,investment_costs'
        )
        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)

        if resource_unit == 'energy':
            _warnings = [
                'Energy capacity constraint removed from 0::test_supply_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Resource area constraint removed from 0::test_supply_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)'
            ]
        elif resource_unit == 'energy_per_area':
            _warnings = [
                'Energy capacity constraint removed from 0::test_supply_elec as force_resource is applied and resource is linked to energy flow using `energy_per_area`'
            ]
        elif resource_unit == 'energy_per_cap':
            _warnings = [
                'Resource area constraint removed from 0::test_supply_elec as force_resource is applied and resource is linked to energy flow using `energy_per_cap`'
            ]

        if force is True:
            assert check_error_or_warning(warning, _warnings)
        elif force is False:
            assert not check_error_or_warning(warning, _warnings)

    @pytest.mark.parametrize('resource_unit', [('energy'), ('energy_per_cap'), ('energy_per_area')])
    def test_operate_resource_unit_without_resource_area(self, resource_unit):
        """Different resource units affect which capacities are set to infinite"""
        m = build_model(
            {'techs.test_supply_elec.constraints': {
                'resource_unit': resource_unit, 'force_resource': True,
                'resource': 'file=supply_plus_resource.csv:1', 'energy_cap_max': 15,
            }}, 'simple_supply_and_supply_plus,operate,investment_costs'
        )
        with pytest.warns(exceptions.ModelWarning) as warning:
            # energy_per_area without a finite resource_area will cause an error, which we have to catch here
            if resource_unit == 'energy_per_area':
                with pytest.raises(exceptions.ModelError) as error:
                    m.run(build_only=True)
            else:
                m.run(build_only=True)

        if resource_unit == 'energy':
            _warnings = [
                'Energy capacity constraint removed from 0::test_supply_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)'
            ]
            not_warnings = [
                'Resource area constraint removed from 0::test_supply_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Energy capacity constraint removed from 0::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Energy capacity constraint removed from 1::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)'
            ]
        elif resource_unit == 'energy_per_area':
            _warnings = [
                'Energy capacity constraint removed from 0::test_supply_elec as force_resource is applied and resource is linked to energy flow using `energy_per_area`'
            ]
            not_warnings = [
                'Resource area constraint removed from 0::test_supply_elec as force_resource is applied and resource is linked to energy flow using `energy_per_cap`',
                'Energy capacity constraint removed from 0::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Energy capacity constraint removed from 1::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)'
            ]
            # energy_per_area without a finite resource_area will cause an error
            assert check_error_or_warning(
                error, 'Operate mode: User must define a finite resource_area '
                       '(via resource_area_equals or resource_area_max) for 0::test_supply_elec'
            )
        elif resource_unit == 'energy_per_cap':
            _warnings = []
            not_warnings = [
                'Resource area constraint removed from 0::test_supply_elec as force_resource is applied and resource is linked to energy flow using `energy_per_cap`',
                'Energy capacity constraint removed from 0::test_supply_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Energy capacity constraint removed from 0::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)',
                'Energy capacity constraint removed from 1::test_demand_elec as force_resource is applied and resource is not linked to energy flow (resource_unit = `energy`)'
            ]

        assert check_error_or_warning(warning, _warnings)
        assert not check_error_or_warning(warning, not_warnings)

    @pytest.mark.parametrize('param', ('charge_rate', 'energy_cap_per_storage_cap_max'))
    def test_operate_storage(self, param):
        """Can't violate storage capacity constraints in the definition of a technology"""
        m = build_model(
            {'techs.test_supply_plus.constraints': {param: 0.1}},
            'simple_supply_and_supply_plus,operate,investment_costs'
        )

        with pytest.warns(exceptions.ModelWarning) as warning:
            with pytest.raises(exceptions.ModelError) as error:
                m.run(build_only=True)

        assert check_error_or_warning(
            error, 'fixed storage capacity * {} is not larger than fixed energy '
                   'capacity for loc::tech {}'.format(param, '0::test_supply_plus')
        )
        assert check_error_or_warning(
            warning, [
                'Initial stored energy not defined',
                'Resource capacity constraint defined and set to infinity',
                'Storage cannot be cyclic in operate run mode'
            ]
        )

    @pytest.mark.parametrize('on', (True, False))
    def test_operate_resource_cap_max(self, on):
        """Some constraints, if not defined, will throw a warning and possibly change values in model_data"""
        if on is False:
            override = {}
        else:
            override = {'techs.test_supply_plus.constraints.resource_cap_max': 1e6}
        m = build_model(
            override, 'simple_supply_and_supply_plus,operate,investment_costs'
        )

        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)
        if on is False:
            assert check_error_or_warning(warning, 'Resource capacity constraint defined and set to infinity')
            assert np.isinf(m._model_data.resource_cap.loc['0::test_supply_plus'].item())
        elif on is True:
            assert not check_error_or_warning(warning, 'Resource capacity constraint defined and set to infinity')
            assert m._model_data.resource_cap.loc['0::test_supply_plus'].item() == 1e6

    @pytest.mark.parametrize('on', (True, False))
    def test_operate_storage_initial(self, on):
        """Some constraints, if not defined, will throw a warning and possibly change values in model_data"""
        if on is False:
            override = {}
        else:
            override = {'techs.test_supply_plus.constraints.storage_initial': 0.5}
        m = build_model(
            override, 'simple_supply_and_supply_plus,operate,investment_costs'
        )

        with pytest.warns(exceptions.ModelWarning) as warning:
            m.run(build_only=True)
        if on is False:
            assert check_error_or_warning(warning, 'Initial stored energy not defined')
            assert m._model_data.storage_initial.loc['0::test_supply_plus'].item() == 0
        elif on is True:
            assert not check_error_or_warning(warning, 'Initial stored energy not defined')
            assert m._model_data.storage_initial.loc['0::test_supply_plus'].item() == 0.5


class TestBalanceConstraints:

    def test_loc_carriers_system_balance_constraint(self):
        """
        sets.loc_carriers
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)

        assert hasattr(m._backend_model, 'system_balance_constraint')

    def test_loc_techs_balance_supply_constraint(self):
        """
        sets.loc_techs_finite_resource_supply,
        """
        m = build_model({'techs.test_supply_elec.constraints.resource': 20},
                        'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'balance_supply_constraint')

        m = build_model(
            {'techs.test_supply_elec.constraints.resource': 20,
             'techs.test_supply_elec.constraints.resource_unit': 'energy_per_cap'},
            'simple_supply,two_hours,investment_costs'
        )
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'balance_supply_constraint', 'energy_cap')

        m = build_model(
            {'techs.test_supply_elec.constraints.resource': 20,
             'techs.test_supply_elec.constraints.resource_unit': 'energy_per_area'},
            'simple_supply,two_hours,investment_costs'
        )
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'balance_supply_constraint', 'resource_area')

    def test_loc_techs_balance_demand_constraint(self):
        """
        sets.loc_techs_finite_resource_demand,
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'balance_demand_constraint')

        m = build_model({'techs.test_demand_elec.constraints.resource_unit': 'energy_per_cap'},
                        'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'balance_demand_constraint', 'energy_cap')

        m = build_model({'techs.test_demand_elec.constraints.resource_unit': 'energy_per_area'},
                        'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'balance_demand_constraint', 'resource_area')

    def test_loc_techs_resource_availability_supply_plus_constraint(self):
        """
        sets.loc_techs_finite_resource_supply_plus,
        """
        m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(
            m._backend_model, 'resource_availability_supply_plus_constraint'
        )

        m = build_model({'techs.test_supply_plus.constraints.resource_unit': 'energy_per_cap'},
                        'simple_supply_and_supply_plus,two_hours,investment_costs')
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'resource_availability_supply_plus_constraint', 'energy_cap')

        m = build_model({'techs.test_supply_plus.constraints.resource_unit': 'energy_per_area'},
                        'simple_supply_and_supply_plus,two_hours,investment_costs')
        m.run(build_only=True)
        assert check_variable_exists(m._backend_model, 'resource_availability_supply_plus_constraint', 'resource_area')

    def test_loc_techs_balance_transmission_constraint(self):
        """
        sets.loc_techs_transmission,
        """
        m = build_model({}, 'simple_supply,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'balance_transmission_constraint')

    def test_loc_techs_balance_supply_plus_constraint(self):
        """
        sets.loc_techs_supply_plus,
        """
        m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'balance_supply_plus_constraint')

    def test_loc_techs_balance_storage_constraint(self):
        """
        sets.loc_techs_storage,
        """
        m = build_model({}, 'simple_storage,two_hours,investment_costs')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'balance_storage_constraint')
        assert not hasattr(m._backend_model, 'storage_initial_constraint')

    def test_loc_techs_balance_storage_discharge_depth_constraint(self):
        """
        sets.loc_techs_storage,
        """
        m = build_model({}, 'simple_storage,two_hours,investment_costs,storage_discharge_depth')
        m.run(build_only=True)
        assert hasattr(m._backend_model, 'storage_discharge_depth_constraint')
        assert not hasattr(m._backend_model, 'storage_initial_constraint')

        with pytest.raises(exceptions.ModelError) as error:
            m2 = build_model(
                {'techs.test_storage.constraints.storage_initial': 0.0},
                'simple_storage,one_day,investment_costs,storage_discharge_depth'
            )
            m2.run(build_only=True)
        assert check_error_or_warning(
            error,
            'storage_initial is smaller than storage_discharge_depth.'
        )

        m3 = build_model(
            {'techs.test_storage.constraints.storage_initial': 1},
            'simple_storage,one_day,investment_costs,storage_discharge_depth'
        )
        m3.run(build_only=True)
        assert (m3._model_data.storage_initial.values > m3._model_data.storage_discharge_depth.values).all()
def test_storage_initial_constraint(self):
"""
sets.loc_techs_store,
"""
m = build_model(
{}, 'simple_storage,one_day,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'balance_storage_constraint')
assert not hasattr(m._backend_model, 'storage_initial_constraint')
m2 = build_model(
{'techs.test_storage.constraints.storage_initial': 0},
'simple_storage,one_day,investment_costs'
)
m2.run(build_only=True)
assert hasattr(m2._backend_model, 'balance_storage_constraint')
assert hasattr(m2._backend_model, 'storage_initial_constraint')
def test_carriers_reserve_margin_constraint(self):
"""
i for i in sets.carriers if i in model_run.model.get_key('reserve_margin', {}).keys()
"""
m = build_model({'model.reserve_margin.electricity': 0.01}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'reserve_margin_constraint')
class TestCostConstraints:
# costs.py
def test_loc_techs_cost_constraint(self):
"""
sets.loc_techs_cost,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_constraint')
def test_loc_techs_cost_investment_constraint(self):
"""
sets.loc_techs_investment_cost,
"""
m = build_model({}, 'simple_conversion,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_investment_constraint')
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_techs_cost_investment_milp_constraint(self):
m = build_model(
{'techs.test_supply_elec.constraints.lifetime': 10,
'techs.test_supply_elec.costs.monetary.interest_rate': 0.1},
'supply_purchase,two_hours'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_investment_constraint')
def test_loc_techs_not_cost_var_constraint(self):
"""
i for i in sets.loc_techs_om_cost if i not in sets.loc_techs_conversion_plus + sets.loc_techs_conversion
"""
m = build_model({}, 'simple_conversion,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'cost_var_constraint')
@pytest.mark.parametrize("tech,scenario,cost", (
('test_conversion', 'simple_conversion', 'om_con'),
('test_conversion_plus', 'simple_conversion_plus', 'om_prod')
))
def test_loc_techs_cost_var_rhs(self, tech, scenario, cost):
m = build_model(
{'techs.{}.costs.monetary.{}'.format(tech, cost): 1},
'{},two_hours'.format(scenario)
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_rhs')
assert not hasattr(m._backend_model, 'cost_var_constraint')
@pytest.mark.parametrize("tech,scenario,cost", (
('test_supply_elec', 'simple_supply', 'om_prod'),
('test_supply_elec', 'simple_supply', 'om_con'),
('test_supply_plus', 'simple_supply_and_supply_plus', 'om_con'),
('test_demand_elec', 'simple_supply', 'om_con'),
('test_transmission_elec', 'simple_supply', 'om_prod')
))
def test_loc_techs_cost_var_constraint(self, tech, scenario, cost):
"""
i for i in sets.loc_techs_om_cost if i not in sets.loc_techs_conversion_plus + sets.loc_techs_conversion
"""
m = build_model(
{'techs.{}.costs.monetary.{}'.format(tech, cost): 1},
'{},two_hours'.format(scenario)
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_constraint')
class TestExportConstraints:
# export.py
def test_loc_carriers_update_system_balance_constraint(self):
"""
i for i in sets.loc_carriers if sets.loc_techs_export
and any(['{0}::{2}'.format(*j.split('::')) == i
for j in sets.loc_tech_carriers_export])
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
export_exists = check_variable_exists(
m._backend_model, 'system_balance_constraint', 'carrier_export'
)
assert not export_exists
m = build_model({}, 'supply_export,two_hours,investment_costs')
m.run(build_only=True)
export_exists = check_variable_exists(
m._backend_model, 'system_balance_constraint', 'carrier_export'
)
assert export_exists
def test_loc_tech_carriers_export_balance_constraint(self):
"""
sets.loc_tech_carriers_export,
"""
m = build_model({}, 'supply_export,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'export_balance_constraint')
def test_loc_techs_update_costs_var_constraint(self):
"""
i for i in sets.loc_techs_om_cost if i in sets.loc_techs_export
"""
m = build_model({}, 'supply_export,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_rhs')
assert hasattr(m._backend_model, 'cost_var_constraint')
m = build_model(
{'techs.test_supply_elec.costs.monetary.om_prod': 0.1},
'supply_export,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_rhs')
assert hasattr(m._backend_model, 'cost_var_constraint')
export_exists = check_variable_exists(
m._backend_model, 'cost_var_constraint', 'carrier_export'
)
assert export_exists
def test_loc_tech_carriers_export_max_constraint(self):
"""
i for i in sets.loc_tech_carriers_export
if constraint_exists(model_run, i.rsplit('::', 1)[0], 'constraints.export_cap')
"""
m = build_model(
{'techs.test_supply_elec.constraints.export_cap': 5},
'supply_export,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'export_max_constraint')
class TestCapacityConstraints:
# capacity.py
def test_loc_techs_storage_capacity_constraint(self):
"""
i for i in sets.loc_techs_store if i not in sets.loc_techs_milp
"""
m = build_model({}, 'simple_storage,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_constraint')
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_constraint')
m = build_model(
{'techs.test_storage.constraints.storage_cap_equals': 20},
'simple_storage,two_hours,investment_costs'
)
m.run(build_only=True)
assert m._backend_model.storage_capacity_constraint['0::test_storage'].upper() == 20
assert m._backend_model.storage_capacity_constraint['0::test_storage'].lower() == 20
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_techs_storage_capacity_milp_constraint(self):
m = build_model(
{'techs.test_storage.constraints':
{'units_max': 1, 'energy_cap_per_unit': 20,
'storage_cap_per_unit': 20}},
'simple_storage,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_constraint')
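    # Parametrize over the cross product of (scenario, tech) pairs and the
    # max/equals/min flavours of the constraint.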
@pytest.mark.parametrize('scenario,tech,override', [
i + (j,) for i in [('simple_supply_and_supply_plus', 'test_supply_plus'), ('simple_storage', 'test_storage')]
for j in ['max', 'equals', 'min']
])
def test_loc_techs_energy_capacity_storage_constraint(self, scenario, tech, override):
"""
i for i in sets.loc_techs_store if constraint_exists(model_run, i, 'constraints.energy_cap_per_storage_cap_max')
"""
m = build_model(
{'techs.{}.constraints.energy_cap_per_storage_cap_{}'.format(tech, override): 0.5},
'{},two_hours,investment_costs'.format(scenario)
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_storage_{}_constraint'.format(override))
if override == 'equals':
assert not any([
hasattr(m._backend_model, 'energy_capacity_storage_{}_constraint'.format(i))
for i in set(['max', 'min'])
])
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
@pytest.mark.parametrize('override', (('max', 'equals', 'min')))
def test_loc_techs_energy_capacity_milp_storage_constraint(self, override):
"""
i for i in sets.loc_techs_store if constraint_exists(model_run, i, 'constraints.energy_cap_per_storage_cap_max')
"""
m = build_model(
{'techs.test_supply_plus.constraints.energy_cap_per_storage_cap_{}'.format(override): 0.5},
'supply_and_supply_plus_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_storage_{}_constraint'.format(override))
if override == 'equals':
assert not any([
hasattr(m._backend_model, 'energy_capacity_storage_{}_constraint'.format(i))
for i in set(['max', 'min'])
])
def test_no_loc_techs_energy_capacity_storage_constraint(self, caplog):
"""
i for i in sets.loc_techs_store if constraint_exists(model_run, i, 'constraints.energy_cap_per_storage_cap_max')
"""
with caplog.at_level(logging.INFO):
m = build_model(model_file='energy_cap_per_storage_cap.yaml')
assert 'consider defining a `energy_cap_per_storage_cap_min/max/equals` constraint' in caplog.text
m.run(build_only=True)
assert not any([
hasattr(m._backend_model, 'energy_capacity_storage_{}_constraint'.format(i))
for i in ['max', 'equals', 'min']
])
@pytest.mark.parametrize('override', ((None, 'max', 'equals', 'min')))
def test_loc_techs_resource_capacity_constraint(self, override):
"""
i for i in sets.loc_techs_finite_resource_supply_plus
if any([constraint_exists(model_run, i, 'constraints.resource_cap_equals'),
constraint_exists(model_run, i, 'constraints.resource_cap_max'),
constraint_exists(model_run, i, 'constraints.resource_cap_min')])
"""
if override is None:
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_capacity_constraint')
else:
m = build_model(
{'techs.test_supply_plus.constraints.resource_cap_{}'.format(override): 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_capacity_constraint')
def test_loc_techs_resource_capacity_equals_energy_capacity_constraint(self):
"""
i for i in sets.loc_techs_finite_resource_supply_plus
if constraint_exists(model_run, i, 'constraints.resource_cap_equals_energy_cap')
"""
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(
m._backend_model, 'resource_capacity_equals_energy_capacity_constraint'
)
m = build_model(
{'techs.test_supply_plus.constraints.resource_cap_equals_energy_cap': True},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(
m._backend_model, 'resource_capacity_equals_energy_capacity_constraint'
)
def test_loc_techs_resource_area_constraint(self):
"""
i for i in sets.loc_techs_area if i in sets.loc_techs_supply_plus
"""
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_area_constraint')
m = build_model(
{'techs.test_supply_plus.constraints.resource_area_max': 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_area_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.resource_area_max': 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_area_constraint')
# Check that setting energy_cap_max to 0 also forces this constraint to 0
m = build_model(
{'techs.test_supply_plus.constraints': {'resource_area_max': 10, 'energy_cap_max': 0}},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert m._backend_model.resource_area_constraint['0::test_supply_plus'].upper() == 0
def test_loc_techs_resource_area_per_energy_capacity_constraint(self):
"""
i for i in sets.loc_techs_area if i in sets.loc_techs_supply_plus
and constraint_exists(model_run, i, 'constraints.resource_area_per_energy_cap')
"""
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_area_per_energy_capacity_constraint')
m = build_model(
{'techs.test_supply_plus.constraints.resource_area_max': 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_area_per_energy_capacity_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.resource_area_per_energy_cap': 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_area_per_energy_capacity_constraint')
m = build_model(
{'techs.test_supply_elec.constraints': {'resource_area_per_energy_cap': 10,
'resource_area_max': 10}},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_area_per_energy_capacity_constraint')
def test_locs_resource_area_capacity_per_loc_constraint(self):
"""
i for i in sets.locs
if model_run.locations[i].get_key('available_area', None) is not None
"""
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_area_capacity_per_loc_constraint')
m = build_model({'locations.0.available_area': 1}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_area_capacity_per_loc_constraint')
m = build_model(
{'locations.0.available_area': 1,
'techs.test_supply_plus.constraints.resource_area_max': 10},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_area_capacity_per_loc_constraint')
def test_loc_techs_energy_capacity_constraint(self):
"""
i for i in sets.loc_techs
if i not in sets.loc_techs_milp + sets.loc_techs_purchase
"""
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_constraint')
m2 = build_model(
{'techs.test_supply_elec.constraints.energy_cap_scale': 5},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m2.run(build_only=True)
assert (
m2._backend_model.energy_capacity_constraint['0::test_supply_elec'].upper() ==
m._backend_model.energy_capacity_constraint['0::test_supply_elec'].upper() * 5
)
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_techs_energy_capacity_milp_constraint(self):
        m = build_model({}, 'supply_milp,two_hours,investment_costs')  # demand techs remain in loc_techs, so the constraint still exists
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_constraint')
def test_loc_techs_energy_capacity_constraint_warning_on_infinite_equals(self):
# Check that setting `_equals` to infinity is caught:
override = {'locations.0.techs.test_supply_elec.constraints.energy_cap_equals': np.inf}
with pytest.raises(exceptions.ModelError) as error:
m = build_model(override, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert check_error_or_warning(
error,
'Cannot use inf for energy_cap_equals for loc:tech `0::test_supply_elec`'
)
def test_techs_energy_capacity_systemwide_constraint(self):
"""
i for i in sets.techs
if model_run.get_key('techs.{}.constraints.energy_cap_max_systemwide'.format(i), None)
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_systemwide_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_max_systemwide': 20},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_systemwide_constraint')
assert (
'test_supply_elec' in
m._backend_model.energy_capacity_systemwide_constraint.keys()
)
# setting the constraint to infinity leads to Pyomo creating NoConstraint
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_max_systemwide': np.inf},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_systemwide_constraint')
assert (
'test_supply_elec' not in
m._backend_model.energy_capacity_systemwide_constraint.keys()
)
# Check that setting `_equals` to infinity is caught:
with pytest.raises(exceptions.ModelError) as error:
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_equals_systemwide': np.inf},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert check_error_or_warning(
error,
'Cannot use inf for energy_cap_equals_systemwide for tech `test_supply_elec`'
)
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_equals_systemwide': 20},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_systemwide_constraint')
# Check that a model without transmission techs doesn't cause an error
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_equals_systemwide': 20},
'simple_supply,two_hours,investment_costs', model_file='model_minimal.yaml'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_systemwide_constraint')
class TestDispatchConstraints:
# dispatch.py
def test_loc_tech_carriers_carrier_production_max_constraint(self):
"""
i for i in sets.loc_tech_carriers_prod
if i not in sets.loc_tech_carriers_conversion_plus
and i.rsplit('::', 1)[0] not in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_production_max_constraint')
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_tech_carriers_carrier_production_max_milp_constraint(self):
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_max_constraint')
def test_loc_tech_carriers_carrier_production_min_constraint(self):
"""
i for i in sets.loc_tech_carriers_prod
if i not in sets.loc_tech_carriers_conversion_plus
and constraint_exists(model_run, i, 'constraints.energy_cap_min_use')
and i.rsplit('::', 1)[0] not in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_production_min_constraint')
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_tech_carriers_carrier_production_min_milp_constraint(self):
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'supply_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_constraint')
def test_loc_tech_carriers_carrier_consumption_max_constraint(self):
"""
i for i in sets.loc_tech_carriers_con
if i.rsplit('::', 1)[0] in sets.loc_techs_demand +
sets.loc_techs_storage + sets.loc_techs_transmission
and i.rsplit('::', 1)[0] not in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_consumption_max_constraint')
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
def test_loc_tech_carriers_carrier_consumption_max_milp_constraint(self):
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_consumption_max_constraint')
def test_loc_techs_resource_max_constraint(self):
"""
sets.loc_techs_finite_resource_supply_plus,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'resource_max_constraint')
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_max_constraint')
m = build_model(
{'techs.test_supply_plus.constraints.resource': np.inf},
'simple_supply_and_supply_plus,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'resource_max_constraint')
def test_loc_techs_storage_max_constraint(self):
"""
sets.loc_techs_store
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_max_constraint')
m = build_model({}, 'simple_supply_and_supply_plus,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_max_constraint')
m = build_model({}, 'simple_storage,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_max_constraint')
def test_loc_tech_carriers_ramping_constraint(self):
"""
i for i in sets.loc_tech_carriers_prod
if i.rsplit('::', 1)[0] in sets.loc_techs_ramping
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'ramping_up_constraint')
assert not hasattr(m._backend_model, 'ramping_down_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_ramping': 0.1},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'ramping_up_constraint')
assert hasattr(m._backend_model, 'ramping_down_constraint')
m = build_model(
{'techs.test_conversion.constraints.energy_ramping': 0.1},
'simple_conversion,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'ramping_up_constraint')
assert hasattr(m._backend_model, 'ramping_down_constraint')
@pytest.mark.filterwarnings("ignore:(?s).*Integer:calliope.exceptions.ModelWarning")
class TestMILPConstraints:
# milp.py
def test_loc_techs_unit_commitment_milp_constraint(self):
"""
sets.loc_techs_milp,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'unit_commitment_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_commitment_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'unit_commitment_milp_constraint')
def test_loc_techs_unit_capacity_milp_constraint(self):
"""
sets.loc_techs_milp,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'unit_capacity_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_capacity_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'unit_capacity_milp_constraint')
def test_loc_tech_carriers_carrier_production_max_milp_constraint(self):
"""
i for i in sets.loc_tech_carriers_prod
if i not in sets.loc_tech_carriers_conversion_plus
and i.rsplit('::', 1)[0] in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_max_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_production_max_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_max_milp_constraint')
m = build_model({}, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_max_milp_constraint')
def test_loc_techs_carrier_production_max_conversion_plus_milp_constraint(self):
"""
i for i in sets.loc_techs_conversion_plus
if i in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_max_conversion_plus_milp_constraint'
)
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_max_conversion_plus_milp_constraint'
)
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_max_conversion_plus_milp_constraint'
)
m = build_model({}, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(
m._backend_model,
'carrier_production_max_conversion_plus_milp_constraint'
)
m = build_model({}, 'conversion_plus_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_max_conversion_plus_milp_constraint'
)
def test_loc_tech_carriers_carrier_production_min_milp_constraint(self):
"""
i for i in sets.loc_tech_carriers_prod
if i not in sets.loc_tech_carriers_conversion_plus
and constraint_exists(model_run, i.rsplit('::', 1)[0], 'constraints.energy_cap_min_use')
and i.rsplit('::', 1)[0] in sets.loc_techs_milp
"""
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'supply_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_production_min_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'supply_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'conversion_plus_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_milp_constraint')
m = build_model(
{'techs.test_conversion_plus.constraints.energy_cap_min_use': 0.1},
'conversion_plus_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_production_min_milp_constraint')
def test_loc_techs_carrier_production_min_conversion_plus_milp_constraint(self):
"""
i for i in sets.loc_techs_conversion_plus
if constraint_exists(model_run, i, 'constraints.energy_cap_min_use')
and i in sets.loc_techs_milp
"""
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_min_conversion_plus_milp_constraint'
)
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'supply_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_min_conversion_plus_milp_constraint'
)
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min_use': 0.1},
'conversion_plus_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_min_conversion_plus_milp_constraint'
)
m = build_model(
{'techs.test_conversion_plus.constraints.energy_cap_min_use': 0.1},
'conversion_plus_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(
m._backend_model,
'carrier_production_min_conversion_plus_milp_constraint'
)
m = build_model(
{'techs.test_conversion_plus.constraints.energy_cap_min_use': 0.1},
'conversion_plus_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(
m._backend_model,
'carrier_production_min_conversion_plus_milp_constraint'
)
def test_loc_tech_carriers_carrier_consumption_max_milp_constraint(self):
"""
i for i in sets.loc_tech_carriers_con
if i.rsplit('::', 1)[0] in sets.loc_techs_demand +
sets.loc_techs_storage + sets.loc_techs_transmission
and i.rsplit('::', 1)[0] in sets.loc_techs_milp
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_consumption_max_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_consumption_max_milp_constraint')
m = build_model({}, 'storage_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'carrier_consumption_max_milp_constraint')
m = build_model({}, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'carrier_consumption_max_milp_constraint')
def test_loc_techs_energy_capacity_units_milp_constraint(self):
"""
i for i in sets.loc_techs_milp
if constraint_exists(model_run, i, 'constraints.energy_cap_per_unit')
is not None
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_units_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_units_milp_constraint')
m = build_model({}, 'storage_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_units_milp_constraint')
m = build_model({}, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_units_milp_constraint')
def test_loc_techs_storage_capacity_units_milp_constraint(self):
"""
i for i in sets.loc_techs_milp if i in sets.loc_techs_store
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_units_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_units_milp_constraint')
m = build_model({}, 'storage_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_units_milp_constraint')
m = build_model({}, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_units_milp_constraint')
m = build_model({}, 'supply_and_supply_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_units_milp_constraint')
def test_loc_techs_energy_capacity_max_purchase_milp_constraint(self):
"""
i for i in sets.loc_techs_purchase
if (constraint_exists(model_run, i, 'constraints.energy_cap_equals') is not None
or constraint_exists(model_run, i, 'constraints.energy_cap_max') is not None)
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_max_purchase_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_max_purchase_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_max_purchase_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints': {'energy_cap_max': None, 'energy_cap_equals': 15}},
'supply_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_max_purchase_milp_constraint')
def test_loc_techs_energy_capacity_min_purchase_milp_constraint(self):
"""
i for i in sets.loc_techs_purchase
if (not constraint_exists(model_run, i, 'constraints.energy_cap_equals')
and constraint_exists(model_run, i, 'constraints.energy_cap_min'))
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_min_purchase_milp_constraint')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_min_purchase_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_min_purchase_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints': {'energy_cap_max': None, 'energy_cap_equals': 15}},
'supply_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'energy_capacity_min_purchase_milp_constraint')
m = build_model(
{'techs.test_supply_elec.constraints.energy_cap_min': 10},
'supply_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'energy_capacity_min_purchase_milp_constraint')
def test_loc_techs_storage_capacity_max_purchase_milp_constraint(self):
"""
i for i in set(sets.loc_techs_purchase).intersection(sets.loc_techs_store)
"""
m = build_model({}, 'simple_storage,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_max_purchase_milp_constraint')
m = build_model({}, 'storage_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_max_purchase_milp_constraint')
m = build_model({}, 'storage_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_max_purchase_milp_constraint')
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_max_purchase_milp_constraint')
def test_loc_techs_storage_capacity_min_purchase_milp_constraint(self):
"""
i for i in set(sets.loc_techs_purchase).intersection(sets.loc_techs_store)
if (not constraint_exists(model_run, i, 'constraints.storage_cap_equals')
and (constraint_exists(model_run, i, 'constraints.storage_cap_min')
or constraint_exists(model_run, i, 'constraints.energy_cap_min')))
"""
m = build_model(
{'techs.test_storage.constraints.storage_cap_min': 10},
'simple_storage,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_min_purchase_milp_constraint')
m = build_model(
{'techs.test_storage.constraints.storage_cap_min': 10},
'storage_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_min_purchase_milp_constraint')
m = build_model({}, 'storage_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_min_purchase_milp_constraint')
m = build_model(
{'techs.test_storage.constraints.storage_cap_min': 10},
'storage_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'storage_capacity_min_purchase_milp_constraint')
m = build_model(
{'techs.test_storage.constraints': {'storage_cap_equals': 10, 'storage_cap_min': 10}},
'storage_purchase,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'storage_capacity_min_purchase_milp_constraint')
def test_loc_techs_update_costs_investment_units_milp_constraint(self):
"""
i for i in sets.loc_techs_milp
if i in sets.loc_techs_investment_cost and
any(constraint_exists(model_run, i, 'costs.{}.purchase'.format(j))
for j in model_run.sets.costs)
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'purchased')
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'units')
m = build_model({}, 'supply_milp,two_hours,investment_costs')
m.run(build_only=True)
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'purchased')
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'units')
m = build_model(
{'techs.test_supply_elec.costs.monetary.purchase': 1},
'supply_milp,two_hours,investment_costs'
)
m.run(build_only=True)
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'purchased')
assert check_variable_exists(m._backend_model, 'cost_investment_constraint', 'units')
def test_loc_techs_update_costs_investment_purchase_milp_constraint(self):
"""
sets.loc_techs_purchase,
"""
m = build_model({}, 'supply_purchase,two_hours,investment_costs')
m.run(build_only=True)
assert check_variable_exists(m._backend_model, 'cost_investment_constraint', 'purchased')
assert not check_variable_exists(m._backend_model, 'cost_investment_constraint', 'units')
def test_techs_unit_capacity_systemwide_milp_constraint(self):
"""
sets.techs if unit_cap_max_systemwide or unit_cap_equals_systemwide
"""
override_max = {
'links.0,1.exists': True,
'techs.test_conversion_plus.constraints.units_max_systemwide': 2,
'locations.1.techs.test_conversion_plus.constraints': {
'units_max': 2,
'energy_cap_per_unit': 5
}
}
override_equals = {
'links.0,1.exists': True,
'techs.test_conversion_plus.constraints.units_equals_systemwide': 1,
'locations.1.techs.test_conversion_plus.costs.monetary.purchase': 1
}
override_equals_inf = {
'links.0,1.exists': True,
'techs.test_conversion_plus.constraints.units_equals_systemwide': np.inf,
'locations.1.techs.test_conversion_plus.costs.monetary.purchase': 1
}
override_transmission = {
'links.0,1.exists': True,
'techs.test_transmission_elec.constraints': {
'units_max_systemwide': 1, 'lifetime': 25
},
'techs.test_transmission_elec.costs.monetary': {
'purchase': 1, 'interest_rate': 0.1
}
}
override_no_transmission = {
'techs.test_supply_elec.constraints.units_equals_systemwide': 1,
'locations.1.techs.test_supply_elec.costs.monetary.purchase': 1
}
m = build_model(override_max, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_capacity_systemwide_milp_constraint')
assert m._backend_model.unit_capacity_systemwide_milp_constraint['test_conversion_plus'].upper() == 2
m = build_model(override_equals, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_capacity_systemwide_milp_constraint')
assert m._backend_model.unit_capacity_systemwide_milp_constraint['test_conversion_plus'].lower() == 1
assert m._backend_model.unit_capacity_systemwide_milp_constraint['test_conversion_plus'].upper() == 1
with pytest.raises(ValueError) as error:
m = build_model(override_equals_inf, 'conversion_plus_milp,two_hours,investment_costs')
m.run(build_only=True)
assert check_error_or_warning(error, 'Cannot use inf for energy_cap_equals_systemwide')
m = build_model(override_transmission, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_capacity_systemwide_milp_constraint')
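        # Transmission techs are counted once per link direction, so a
        # systemwide cap of 1 appears here as an upper bound of 2.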
assert m._backend_model.unit_capacity_systemwide_milp_constraint['test_transmission_elec'].upper() == 2
m = build_model(
override_no_transmission,
'simple_supply,two_hours,investment_costs',
model_file='model_minimal.yaml')
m.run(build_only=True)
assert hasattr(m._backend_model, 'unit_capacity_systemwide_milp_constraint')
def test_asynchronous_prod_con_constraint(self):
"""
Binary switch for prod/con can be activated using the option
'asynchronous_prod_con'
"""
m_store = build_model(
{'techs.test_storage.constraints.force_asynchronous_prod_con': True},
'simple_storage,investment_costs'
)
m_store.run(build_only=True)
assert hasattr(m_store._backend_model, 'prod_con_switch')
assert hasattr(m_store._backend_model, 'asynchronous_con_milp_constraint')
assert hasattr(m_store._backend_model, 'asynchronous_prod_milp_constraint')
m_trans = build_model(
{'techs.test_transmission_elec.constraints.force_asynchronous_prod_con': True},
'simple_storage,investment_costs'
)
m_trans.run(build_only=True)
assert hasattr(m_trans._backend_model, 'prod_con_switch')
assert hasattr(m_trans._backend_model, 'asynchronous_con_milp_constraint')
assert hasattr(m_trans._backend_model, 'asynchronous_prod_milp_constraint')
class TestConversionConstraints:
# conversion.py
def test_loc_techs_balance_conversion_constraint(self):
"""
sets.loc_techs_conversion,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'balance_conversion_constraint')
m = build_model({}, 'simple_conversion,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'balance_conversion_constraint')
m = build_model({}, 'simple_conversion_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'balance_conversion_constraint')
def test_loc_techs_cost_var_conversion_constraint(self):
"""
sets.loc_techs_om_cost_conversion,
"""
m = build_model(
{'techs.test_supply_elec.costs.monetary.om_prod': 0.1},
'simple_supply,two_hours,investment_costs'
)
m.run(build_only=True)
assert not hasattr(m._backend_model, 'cost_var_conversion_constraint')
m = build_model({}, 'simple_conversion,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'cost_var_conversion_constraint')
m = build_model(
{'techs.test_conversion.costs.monetary.om_prod': 0.1},
'simple_conversion,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_conversion_constraint')
assert check_variable_exists(
m._backend_model, 'cost_var_conversion_constraint', 'carrier_prod'
)
assert not check_variable_exists(
m._backend_model, 'cost_var_conversion_constraint', 'carrier_con'
)
m = build_model(
{'techs.test_conversion.costs.monetary.om_con': 0.1},
'simple_conversion,two_hours,investment_costs'
)
m.run(build_only=True)
assert hasattr(m._backend_model, 'cost_var_conversion_constraint')
assert check_variable_exists(
m._backend_model, 'cost_var_conversion_constraint', 'carrier_con'
)
assert not check_variable_exists(
m._backend_model, 'cost_var_conversion_constraint', 'carrier_prod'
)
class TestNetworkConstraints:
# network.py
def test_loc_techs_symmetric_transmission_constraint(self):
"""
sets.loc_techs_transmission,
"""
m = build_model({}, 'simple_supply,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'symmetric_transmission_constraint')
m = build_model({}, 'simple_conversion_plus,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'symmetric_transmission_constraint')
class TestPolicyConstraints:
# policy.py
def test_techlists_group_share_energy_cap_min_constraint(self):
"""
i for i in sets.techlists
if 'energy_cap_min' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
"""
m = build_model({}, 'simple_supply,group_share_energy_cap_min,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'group_share_energy_cap_min_constraint')
assert not hasattr(m._backend_model, 'group_share_energy_cap_max_constraint')
assert not hasattr(m._backend_model, 'group_share_energy_cap_equals_constraint')
def test_techlists_group_share_energy_cap_max_constraint(self):
"""
i for i in sets.techlists
if 'energy_cap_max' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
"""
m = build_model({}, 'simple_supply,group_share_energy_cap_max,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'group_share_energy_cap_min_constraint')
assert hasattr(m._backend_model, 'group_share_energy_cap_max_constraint')
assert not hasattr(m._backend_model, 'group_share_energy_cap_equals_constraint')
def test_techlists_group_share_energy_cap_equals_constraint(self):
"""
i for i in sets.techlists
if 'energy_cap_equals' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
"""
m = build_model({}, 'simple_supply,group_share_energy_cap_equals,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'group_share_energy_cap_min_constraint')
assert not hasattr(m._backend_model, 'group_share_energy_cap_max_constraint')
assert hasattr(m._backend_model, 'group_share_energy_cap_equals_constraint')
def test_techlists_carrier_group_share_carrier_prod_min_constraint(self):
"""
i + '::' + carrier
for i in sets.techlists
if 'carrier_prod_min' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
for carrier in sets.carriers
if carrier in model_run.model.get_key('group_share.{}.carrier_prod_min'.format(i), {}).keys()
"""
m = build_model({}, 'conversion_and_conversion_plus,group_share_carrier_prod_min,two_hours,investment_costs')
m.run(build_only=True)
assert hasattr(m._backend_model, 'group_share_carrier_prod_min_constraint')
assert not hasattr(m._backend_model, 'group_share_carrier_prod_max_constraint')
assert not hasattr(m._backend_model, 'group_share_carrier_prod_equals_constraint')
def test_techlists_carrier_group_share_carrier_prod_max_constraint(self):
"""
i + '::' + carrier
for i in sets.techlists
if 'carrier_prod_max' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
for carrier in sets.carriers
if carrier in model_run.model.get_key('group_share.{}.carrier_prod_max'.format(i), {}).keys()
"""
m = build_model({}, 'conversion_and_conversion_plus,group_share_carrier_prod_max,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'group_share_carrier_prod_min_constraint')
assert hasattr(m._backend_model, 'group_share_carrier_prod_max_constraint')
assert not hasattr(m._backend_model, 'group_share_carrier_prod_equals_constraint')
def test_techlists_carrier_group_share_carrier_prod_equals_constraint(self):
"""
i + '::' + carrier
for i in sets.techlists
if 'carrier_prod_equals' in model_run.model.get_key('group_share.{}'.format(i), {}).keys()
for carrier in sets.carriers
if carrier in model_run.model.get_key('group_share.{}.carrier_prod_equals'.format(i), {}).keys()
"""
m = build_model({}, 'conversion_and_conversion_plus,group_share_carrier_prod_equals,two_hours,investment_costs')
m.run(build_only=True)
assert not hasattr(m._backend_model, 'group_share_carrier_prod_min_constraint')
assert not hasattr(m._backend_model, 'group_share_carrier_prod_max_constraint')
assert hasattr(m._backend_model, 'group_share_carrier_prod_equals_constraint')
# clustering constraints
class TestClusteringConstraints:
def constraints(self):
return ['balance_storage_inter_cluster_constraint',
'storage_intra_max_constraint', 'storage_intra_min_constraint',
'storage_inter_max_constraint', 'storage_inter_min_constraint']
def decision_variables(self):
return ['storage_inter_cluster',
'storage_intra_cluster_max', 'storage_intra_cluster_min']
def cluster_model(self, how='mean', storage_inter_cluster=True,
cyclic=False, storage_initial=False):
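        # 'file=cluster_days.csv:0' reads the day-to-cluster mapping from
        # column '0' of the given CSV (calliope's file=<name>:<column> syntax).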
override = {
'model.subset_time': ['2005-01-01', '2005-01-04'],
'model.time': {
'function': 'apply_clustering',
'function_options': {
'clustering_func': 'file=cluster_days.csv:0', 'how': how,
'storage_inter_cluster': storage_inter_cluster
}
},
'run.cyclic_storage': cyclic
}
if storage_initial:
override.update({'techs.test_storage.constraints.storage_initial': 0})
return build_model(override, 'simple_storage,investment_costs')
def test_cluster_storage_constraints(self):
m = self.cluster_model()
m.run(build_only=True)
for variable in self.decision_variables():
assert hasattr(m._backend_model, variable)
for constraint in self.constraints():
assert hasattr(m._backend_model, constraint)
assert not hasattr(m._backend_model, 'storage_max_constraint')
assert not hasattr(m._backend_model, 'storage_initial_constraint')
def test_cluster_cyclic_storage_constraints(self):
m = self.cluster_model(cyclic=True)
m.run(build_only=True)
for variable in self.decision_variables():
assert hasattr(m._backend_model, variable)
for constraint in self.constraints():
assert hasattr(m._backend_model, constraint)
assert not hasattr(m._backend_model, 'storage_max_constraint')
assert not hasattr(m._backend_model, 'storage_initial_constraint')
def test_no_cluster_storage_constraints(self):
m = self.cluster_model(storage_inter_cluster=False)
m.run(build_only=True)
for variable in self.decision_variables():
assert not hasattr(m._backend_model, variable)
for constraint in self.constraints():
assert not hasattr(m._backend_model, constraint)
assert hasattr(m._backend_model, 'storage_max_constraint')
class TestLogging:
@pytest.fixture(scope='module')
def gurobi_model(self):
pytest.importorskip("gurobipy")
model_file = os.path.join('model_config_group', 'base_model.yaml')
return build_model(
model_file=model_file,
override_dict={"run": {"solver": "gurobi", "solver_io": "python"}}
)
def test_no_duplicate_log_message(self, caplog, gurobi_model):
caplog.set_level(logging.DEBUG)
gurobi_model.run()
all_log_messages = [r.msg for r in caplog.records]
        duplicates = [item for item, count in collections.Counter(all_log_messages).items()
                      if count > 1 and item != '']
assert duplicates == []
| 43.965536 | 176 | 0.671929 | 10,150 | 81,644 | 5.00601 | 0.03803 | 0.053374 | 0.055519 | 0.056051 | 0.885576 | 0.866112 | 0.839621 | 0.806735 | 0.770483 | 0.732833 | 0 | 0.004524 | 0.231138 | 81,644 | 1,856 | 177 | 43.989224 | 0.804916 | 0.099689 | 0 | 0.579954 | 0 | 0.012242 | 0.367476 | 0.281844 | 0 | 0 | 0 | 0 | 0.192043 | 1 | 0.07192 | false | 0 | 0.009946 | 0.00153 | 0.096404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c226b621d0deb40edda6b95e3b27f4315184699 | 23 | py | Python | control_block_diagram/components/text/__init__.py | upb-lea/control-block-diagram | d1d8377d6ff8a60900580b654c975cc06ff2e564 | [
"MIT"
] | 4 | 2022-01-14T07:42:59.000Z | 2022-01-24T15:05:59.000Z | control_block_diagram/components/text/__init__.py | upb-lea/control-block-diagram | d1d8377d6ff8a60900580b654c975cc06ff2e564 | [
"MIT"
] | null | null | null | control_block_diagram/components/text/__init__.py | upb-lea/control-block-diagram | d1d8377d6ff8a60900580b654c975cc06ff2e564 | [
"MIT"
] | null | null | null | from .text import Text
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c4c962606493df66993a495a8f8c4999829e4e6 | 230 | py | Python | pyCuSDR/lib/__init__.py | mugpahug/pycu-sdr | 012aad85a66fd02bb13e325e2b0a978d7667a718 | [
"BSD-3-Clause"
] | 1 | 2021-07-10T13:13:11.000Z | 2021-07-10T13:13:11.000Z | pyCuSDR/lib/__init__.py | mugpahug/pycu-sdr | 012aad85a66fd02bb13e325e2b0a978d7667a718 | [
"BSD-3-Clause"
] | 1 | 2021-07-12T06:04:07.000Z | 2021-07-12T06:04:07.000Z | pyCuSDR/lib/__init__.py | mugpahug/pycu-sdr | 012aad85a66fd02bb13e325e2b0a978d7667a718 | [
"BSD-3-Clause"
] | 1 | 2021-07-10T13:13:15.000Z | 2021-07-10T13:13:15.000Z | # -*- coding: utf-8 -*-
# Copyright: (c) 2021, Edwin G. W. Peters
from lib.msbLsbBinOps import *
from lib.sysStopException import *
from lib.cudaConvertSMVer2Cores import *
from lib.safe_lists import *
from lib.filters import *
| 23 | 41 | 0.734783 | 31 | 230 | 5.419355 | 0.612903 | 0.208333 | 0.309524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.152174 | 230 | 9 | 42 | 25.555556 | 0.830769 | 0.265217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c5647a2931d9ca8ca5f7e2933ae1369bb9e3d9d | 133 | py | Python | coding_assistant/__init__.py | lorlouis/coding-assistant | 27bff5e4dd5e1325f8448b483fa107c7ab69632d | [
"MIT"
] | 3 | 2021-11-09T01:14:00.000Z | 2021-11-10T12:17:00.000Z | coding_assistant/__init__.py | lorlouis/coding-assistant | 27bff5e4dd5e1325f8448b483fa107c7ab69632d | [
"MIT"
] | 2 | 2021-11-08T23:19:48.000Z | 2021-11-10T17:37:03.000Z | coding_assistant/__init__.py | lorlouis/coding-assistant | 27bff5e4dd5e1325f8448b483fa107c7ab69632d | [
"MIT"
] | 2 | 2021-11-08T21:50:00.000Z | 2021-11-09T18:22:16.000Z | # type: ignore
from coding_assistant.clipy import clipy
from coding_assistant.assistant import set_excepthook
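# Installing the hook at import time routes every uncaught exception through
# clipy (set_excepthook presumably wraps sys.excepthook).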
set_excepthook(clipy)
| 22.166667 | 53 | 0.857143 | 18 | 133 | 6.111111 | 0.5 | 0.181818 | 0.345455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097744 | 133 | 5 | 54 | 26.6 | 0.916667 | 0.090226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c7b2ff807b123a8d436764240e1eddd85b718df | 7,869 | py | Python | layers/rnn_cells.py | Rufaim/Filtering-Clouds | 5703884a55f449ed737a3350d5276e29a69372f2 | [
"MIT"
] | null | null | null | layers/rnn_cells.py | Rufaim/Filtering-Clouds | 5703884a55f449ed737a3350d5276e29a69372f2 | [
"MIT"
] | null | null | null | layers/rnn_cells.py | Rufaim/Filtering-Clouds | 5703884a55f449ed737a3350d5276e29a69372f2 | [
"MIT"
] | null | null | null | import tensorflow as tf
from .utils import _activation_to_string
class UGRnnCell(tf.contrib.rnn.RNNCell):
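    """Update-gate RNN cell (UGRNN, cf. Collins et al., 2016).

    A single gate blends the previous state with a fresh candidate:
    h_t = g * h_{t-1} + (1 - g) * c.
    """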
def __init__(self,
num_units,
activation=tf.nn.relu,
reuse=None,
kernel_initializer=tf.contrib.layers.xavier_initializer(),
bias_initializer=tf.contrib.layers.xavier_initializer(),
name=None,
dtype=None,
**kwargs):
super(UGRnnCell, self).__init__(
_reuse=reuse, name=name, dtype=dtype, **kwargs)
self._num_units = num_units
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
@property
def state_size(self):
return self._num_units
@property
def output_size(self):
return self._num_units
def zero_state(self,batch_size, dtype=tf.float32):
shape = [batch_size,self.state_size]
return tf.zeros(shape,dtype=dtype)
def __call__(self, inputs, state):
self.feature_size = inputs.get_shape().as_list()[-1]
self.time_size = inputs.get_shape().as_list()[-2]
with tf.variable_scope("UGRNN",reuse=tf.AUTO_REUSE):
self._context_w = tf.get_variable("context_w",shape=[self.feature_size*2,self._num_units],dtype=self.dtype,
initializer=self._kernel_initializer,trainable=True)
self._context_b = tf.get_variable("context_b",shape=[self._num_units],dtype=self.dtype,
initializer=self._bias_initializer,trainable=True)
self._gate_w = tf.get_variable("gate_w",shape=[self.feature_size*2,self._num_units],dtype=self.dtype,
initializer=self._kernel_initializer,trainable=True)
self._gate_b = tf.get_variable("gate_b",shape=[self._num_units],dtype=self.dtype,
initializer=self._bias_initializer,trainable=True)
inp = tf.concat([inputs, state],1)
c = self._activation(tf.matmul(inp,self._context_w) + self._context_b)
gate = tf.matmul(inp,self._gate_w) + self._gate_b
# Returns 0. if x < -2.5, 1. if x > 2.5. In -2.5 <= x <= 2.5, returns 0.2 * x + 0.5.
#gate = tf.keras.backend.hard_sigmoid(gate)
gate = tf.nn.sigmoid(gate)
out = gate * state + (1-gate) * c
return out, c
def to_json(self,sess):
curr_layer = {}
W_context, b_context, W_gate, b_gate = sess.run([self._context_w, self._context_b,self._gate_w,self._gate_b])
curr_layer['W_context'] = W_context.tolist()
curr_layer['b_context'] = b_context.tolist()
curr_layer['W_gate'] = W_gate.tolist()
curr_layer['b_gate'] = b_gate.tolist()
curr_layer['in_dim'] = W_context.shape[0]
curr_layer['out_dim'] = W_context.shape[1]
curr_layer['time_dim'] = self.time_size
curr_layer['activation'] = _activation_to_string(self._activation)
curr_layer['type'] = "UGRNN"
return curr_layer
class SRUCell(tf.contrib.rnn.RNNCell):
def __init__(self,
num_units,
activation=tf.nn.relu,
reuse=None,
kernel_initializer=tf.contrib.layers.xavier_initializer(),
bias_initializer=tf.contrib.layers.xavier_initializer(),
name=None,
dtype=None,
**kwargs):
super(SRUCell, self).__init__(
_reuse=reuse, name=name, dtype=dtype, **kwargs)
self._num_units = num_units
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
@property
def state_size(self):
return self._num_units
@property
def output_size(self):
return self._num_units
def zero_state(self,batch_size, dtype=tf.float32):
shape = [batch_size,self.state_size]
return tf.zeros(shape,dtype=dtype)
def __call__(self, inputs, state):
self.feature_size = inputs.get_shape().as_list()[-1]
self.time_size = inputs.get_shape().as_list()[-2]
last_c = state
with tf.variable_scope("SRU",reuse=tf.AUTO_REUSE):
b_init = tf.zeros_initializer()
self._context_w = tf.get_variable("context_w",shape=[self.feature_size,self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._out_w = tf.get_variable("out_w",shape=[self.feature_size,self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._gate_f_w = tf.get_variable("gate_f_w",shape=[self.feature_size,self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._gate_f_v = tf.get_variable("gate_f_v",shape=[self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._gate_f_b = tf.get_variable("gate_f_b",shape=[self._num_units],dtype=tf.float32,
initializer=b_init,trainable=True)
self._gate_r_w = tf.get_variable("gate_r_w",shape=[self.feature_size,self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._gate_r_v = tf.get_variable("gate_f_v",shape=[self._num_units],dtype=tf.float32,
initializer=self._kernel_initializer,trainable=True)
self._gate_r_b = tf.get_variable("gate_r_b",shape=[self._num_units],dtype=tf.float32,
initializer=b_init,trainable=True)
# Returns 0. if x < -2.5, 1. if x > 2.5. In -2.5 <= x <= 2.5, returns 0.2 * x + 0.5.
#f = tf.keras.backend.hard_sigmoid(tf.matmul(inputs,self._gate_f_w) + self._gate_f_v*last_c + self._gate_f_b)
f = tf.nn.sigmoid(tf.matmul(inputs,self._gate_f_w) + self._gate_f_v*last_c + self._gate_f_b)
c = f*last_c + (1-f)*self._activation(tf.matmul(inputs,self._context_w))
#r = tf.keras.backend.hard_sigmoid(tf.matmul(inputs,self._gate_r_w) + self._gate_r_v*last_c + self._gate_r_b)
r = tf.nn.sigmoid(tf.matmul(inputs,self._gate_r_w) + self._gate_r_v*last_c + self._gate_r_b)
#alpha = tf.sqrt(1+tf.exp(self._gate_r_b)*2)
#alpha = tf.sqrt([3.0])
out = r*c + (1-r) * tf.matmul(inputs,self._out_w) #* alpha
return out, c
def to_json(self,sess):
curr_layer = {}
W_context, W_out, f_W_gate, f_v_gate, f_b_gate, \
r_W_gate, r_v_gate,r_b_gate = sess.run([self._context_w, self._out_w,
self._gate_f_w,self._gate_f_v,self._gate_f_b,
self._gate_r_w,self._gate_r_v,self._gate_r_b])
curr_layer['W_context'] = W_context.tolist()
curr_layer['W_out'] = W_out.tolist()
curr_layer['f_W_gate'] = f_W_gate.tolist()
curr_layer['f_v_gate'] = f_v_gate.tolist()
curr_layer['f_b_gate'] = f_b_gate.tolist()
curr_layer['r_W_gate'] = r_W_gate.tolist()
curr_layer['r_v_gate'] = r_v_gate.tolist()
curr_layer['r_b_gate'] = r_b_gate.tolist()
curr_layer['in_dim'] = W_context.shape[0]
curr_layer['out_dim'] = W_context.shape[1]
curr_layer['time_dim'] = self.time_size
curr_layer['activation'] = _activation_to_string(self._activation)
curr_layer['type'] = "SRURNN"
return curr_layer | 49.803797 | 121 | 0.604905 | 1,062 | 7,869 | 4.102637 | 0.089454 | 0.05692 | 0.055084 | 0.046821 | 0.844159 | 0.774386 | 0.759008 | 0.759008 | 0.734221 | 0.734221 | 0 | 0.011585 | 0.27602 | 7,869 | 158 | 122 | 49.803797 | 0.753203 | 0.063032 | 0 | 0.630769 | 0 | 0 | 0.037067 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092308 | false | 0 | 0.015385 | 0.030769 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d5dcd349a22e24759d91056f326021ad5893d52e | 118 | py | Python | redbean/reactive/__init__.py | lcgong/redbean | 4e3a075567336db3f5469c5bc7dc009b5a24071c | [
"Apache-2.0"
] | 4 | 2017-08-26T10:03:59.000Z | 2022-02-17T20:46:02.000Z | redbean/reactive/__init__.py | lcgong/redbean | 4e3a075567336db3f5469c5bc7dc009b5a24071c | [
"Apache-2.0"
] | 1 | 2018-06-04T10:57:28.000Z | 2018-06-04T10:57:28.000Z | redbean/reactive/__init__.py | lcgong/redbean | 4e3a075567336db3f5469c5bc7dc009b5a24071c | [
"Apache-2.0"
] | null | null | null |
from .topic import Topic
try:
import pulsar
from .pulsar import PulsarChannel
except ImportError:
pass
| 11.8 | 37 | 0.720339 | 14 | 118 | 6.071429 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245763 | 118 | 9 | 38 | 13.111111 | 0.955056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9125711aca4850c9d2831033922d568192f901e6 | 31 | py | Python | pingExecutor/__init__.py | 5genesis/Remote_Ping_Agent | aeaf8a68d04741631910228e52c9ae7c068ec570 | [
"Apache-2.0"
] | 2 | 2020-10-20T09:15:01.000Z | 2021-11-15T06:36:23.000Z | pingExecutor/__init__.py | 5genesis/Remote_Ping_Agent | aeaf8a68d04741631910228e52c9ae7c068ec570 | [
"Apache-2.0"
] | null | null | null | pingExecutor/__init__.py | 5genesis/Remote_Ping_Agent | aeaf8a68d04741631910228e52c9ae7c068ec570 | [
"Apache-2.0"
] | 1 | 2022-03-21T09:21:28.000Z | 2022-03-21T09:21:28.000Z | from .pingExecutor import ping
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
912e886bc279d8582e877da457ec4b751bafe5ce | 38 | py | Python | proxypool/exceptions/__init__.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | proxypool/exceptions/__init__.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | proxypool/exceptions/__init__.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | from .empty import PoolEmptyException
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
914fcac07e397659660bd3e899e61f5fcacea064 | 112 | py | Python | 6_kyu/Handshake_problem.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | 6_kyu/Handshake_problem.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | 6_kyu/Handshake_problem.py | UlrichBerntien/Codewars-Katas | bbd025e67aa352d313564d3862db19fffa39f552 | [
"MIT"
] | null | null | null | from math import ceil, sqrt
def get_participants(handshakes):
return int(ceil(1+sqrt(8*handshakes+1)+1)/2)
| 22.4 | 48 | 0.741071 | 19 | 112 | 4.315789 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05102 | 0.125 | 112 | 4 | 49 | 28 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
914fce6f0c1a9cf778921074b8254bd80ff45830 | 23,991 | py | Python | src/advpipe/attack_algorithms/apgd_auto_attack.py | kubic71/bachelors-thesis | f17a85e4e144972034f1d2174b51e63c68b39ff7 | [
"MIT"
] | null | null | null | src/advpipe/attack_algorithms/apgd_auto_attack.py | kubic71/bachelors-thesis | f17a85e4e144972034f1d2174b51e63c68b39ff7 | [
"MIT"
] | null | null | null | src/advpipe/attack_algorithms/apgd_auto_attack.py | kubic71/bachelors-thesis | f17a85e4e144972034f1d2174b51e63c68b39ff7 | [
"MIT"
] | null | null | null | # type: ignore
from __future__ import annotations
from advpipe.attack_algorithms import BlackBoxTransferAlgorithm
from advpipe.blackbox.local import LocalModel
import numpy as np
from advpipe.log import logger
from advpipe import utils
import torch
from advpipe.config_datamodel.attack_algorithm_config import APGDAlgorithmConfig
import eagerpy as ep
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from advpipe.utils import LossCallCounter
from advpipe.blackbox.local import LocalModel
from typing import Generator, Sequence
class APGDAutoAttack(BlackBoxTransferAlgorithm):
def __init__(self, surrogate: LocalModel, config: APGDAlgorithmConfig):
super().__init__(surrogate)
self.config = config
# APGDAttack offers two losses: cross-entropy (ce) and DRL loss, but DRL is unusable for us, because it requires at least 3 output categories (we have only 2)
self.attack = APGDAttack(surrogate, norm=self.config.metric, eps=self.config.epsilon, n_iter=self.config.n_iters, eot_iter=self.config.eot_iter, n_restarts = self.config.n_restarts, early_stop_at=self.config.early_stop_at, loss="ce", device="cuda", verbose=True)
def run(self, images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
acc, x_adv = self.attack.perturb(images, labels, best_loss=True)
return x_adv
# AutoAttack's APGD
# Fixed numerical instability of gradients (autograd returned NaNs)
import time
import torch
#import scipy.io
#import numpy.linalg as nl
#
import os
import sys
import torch.nn as nn
import torch.nn.functional as F
class APGDAttack():
def __init__(self, model, n_iter=100, norm='Linf', n_restarts=1, eps=None,
seed=0, loss='ce', eot_iter=1, rho=.75, verbose=False, early_stop_at=None,
device='cuda'):
self.model = model
self.n_iter = n_iter
self.eps = eps
self.norm = norm
self.n_restarts = n_restarts
self.seed = seed
self.loss = loss
self.eot_iter = eot_iter
self.thr_decr = rho
self.verbose = verbose
self.device = device
self.early_stop_at = early_stop_at
def check_oscillation(self, x, j, k, y5, k3=0.75):
t = np.zeros(x.shape[1])
for counter5 in range(k):
t += x[j - counter5] > x[j - counter5 - 1]
return t <= k*k3*np.ones(t.shape)
def check_shape(self, x):
return x if len(x.shape) > 0 else np.expand_dims(x, 0)
def dlr_loss(self, x, y):
x_sorted, ind_sorted = x.sort(dim=1)
ind = (ind_sorted[:, -1] == y).float()
return -(x[np.arange(x.shape[0]), y] - x_sorted[:, -2] * ind - x_sorted[:, -1] * (1. - ind)) / (x_sorted[:, -1] - x_sorted[:, -3] + 1e-12)
def attack_single_run(self, x_in, y_in):
print("APGD single run")
x = x_in.clone() if len(x_in.shape) == 4 else x_in.clone().unsqueeze(0)
y = y_in.clone() if len(y_in.shape) == 1 else y_in.clone().unsqueeze(0)
self.n_iter_2, self.n_iter_min, self.size_decr = max(int(0.22 * self.n_iter), 1), max(int(0.06 * self.n_iter), 1), max(int(0.03 * self.n_iter), 1)
if self.verbose:
print('parameters: ', self.n_iter, self.n_iter_2, self.n_iter_min, self.size_decr)
if self.norm == 'Linf':
t = 2 * torch.rand(x.shape).to(self.device).detach() - 1
x_adv = x.detach() + self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * t / (t.reshape([t.shape[0], -1]).abs().max(dim=1, keepdim=True)[0].reshape([-1, 1, 1, 1]))
elif self.norm == 'L2':
t = torch.randn(x.shape).to(self.device).detach()
x_adv = x.detach() + self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * t / ((t ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12)
x_adv = x_adv.clamp(0., 1.)
x_best = x_adv.clone()
x_best_adv = x_adv.clone()
loss_steps = torch.zeros([self.n_iter, x.shape[0]])
loss_best_steps = torch.zeros([self.n_iter + 1, x.shape[0]])
acc_steps = torch.zeros_like(loss_best_steps)
if self.loss == 'ce':
criterion_indiv = nn.CrossEntropyLoss(reduce=False, reduction='none')
elif self.loss == 'dlr':
criterion_indiv = self.dlr_loss
else:
            raise ValueError('unknown loss')
x_adv.requires_grad_()
grad = torch.zeros_like(x)
for _ in range(self.eot_iter):
with torch.enable_grad():
logits = self.model(x_adv) # 1 forward pass (eot_iter = 1)
loss_indiv = criterion_indiv(logits, y)
loss = loss_indiv.sum()
grad += torch.autograd.grad(loss, [x_adv])[0].detach() # 1 backward pass (eot_iter = 1)
grad /= float(self.eot_iter)
grad_best = grad.clone()
# TODO: make this work with stochastic networks
if self.early_stop_at is not None:
idx_to_fool = loss_indiv.detach() < self.early_stop_at
else:
idx_to_fool = ~torch.zeros_like(loss_indiv, dtype=torch.bool)
print("Idx to fool: ", idx_to_fool)
acc = logits.detach().max(1)[1] == y
acc_steps[0] = acc + 0
loss_best = loss_indiv.detach().clone()
step_size = self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * torch.Tensor([2.0]).to(self.device).detach().reshape([1, 1, 1, 1])
x_adv_old = x_adv.clone()
counter = 0
k = self.n_iter_2 + 0
u = np.arange(x.shape[0])
counter3 = 0
loss_best_last_check = loss_best.clone()
reduced_last_check = np.zeros(loss_best.shape) == np.zeros(loss_best.shape)
n_reduced = 0
for i in range(self.n_iter):
### gradient step
with torch.no_grad():
x_adv = x_adv.detach()
grad2 = x_adv - x_adv_old
x_adv_old = x_adv.clone()
a = 0.75 if i > 0 else 1.0
if self.norm == 'Linf':
x_adv_1 = x_adv + step_size * torch.sign(grad)
x_adv_1 = torch.clamp(torch.min(torch.max(x_adv_1, x - self.eps), x + self.eps), 0.0, 1.0)
x_adv_1 = torch.clamp(torch.min(torch.max(x_adv + (x_adv_1 - x_adv) * a + grad2 * (1 - a), x - self.eps), x + self.eps), 0.0, 1.0)
elif self.norm == 'L2':
x_adv_1 = x_adv + step_size * grad / ((grad ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12)
# assert not torch.isnan(x_adv_1.max())
x_adv_1 = torch.clamp(x + (x_adv_1 - x) / (((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12) * torch.min(
self.eps * torch.ones(x.shape).to(self.device).detach(), ((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt()), 0.0, 1.0)
# assert not torch.isnan(x_adv_1.max())
x_adv_1 = x_adv + (x_adv_1 - x_adv) * a + grad2 * (1 - a)
# assert not torch.isnan(x_adv_1.max())
x_adv_1 = torch.clamp(x + (x_adv_1 - x) / (((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12) * torch.min(
self.eps * torch.ones(x.shape).to(self.device).detach(), ((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12), 0.0, 1.0)
# assert not torch.isnan(x_adv_1.max())
x_adv[idx_to_fool] = x_adv_1[idx_to_fool] + 0.
# assert not torch.isnan(x_adv_1.max())
### get gradient
x_adv_to_fool = x_adv[idx_to_fool]
x_adv_to_fool.requires_grad_()
grad_to_fool = torch.zeros_like(x_adv_to_fool)
# time.sleep(0.3)
for _ in range(self.eot_iter):
with torch.enable_grad():
logits_to_fool = self.model(x_adv_to_fool) # 1 forward pass (eot_iter = 1)
# assert not torch.isnan(logits.max())
loss_indiv_to_fool = criterion_indiv(logits_to_fool, y[idx_to_fool])
# print(loss_indiv_to_fool)
# assert not torch.isnan(loss_indiv.max())
loss = loss_indiv_to_fool.sum()
# print(loss)
# assert not torch.isnan(loss.max())
grad_to_fool += torch.nan_to_num(torch.autograd.grad(loss, [x_adv_to_fool])[0].detach()) # 1 backward pass (eot_iter = 1)
grad = torch.zeros_like(x)
grad[idx_to_fool] = grad_to_fool
grad /= float(self.eot_iter)
logits[idx_to_fool] = logits_to_fool.detach()
loss_indiv[idx_to_fool] = loss_indiv_to_fool.detach()
# sometimes if the loss gets too big, gradient becomes NaN
# just stop the gradient descent when that happens
if self.early_stop_at is not None:
idx_to_fool = loss_best.detach() < self.early_stop_at
print(f"Idx to fool: {idx_to_fool}")
if idx_to_fool.sum() == 0:
print(f"Early Stopping at loss {self.early_stop_at}")
break
pred = logits.detach().max(1)[1] == y
acc = torch.min(acc, pred)
acc_steps[i + 1] = acc + 0
x_best_adv[(pred == 0).nonzero().squeeze()] = x_adv[(pred == 0).nonzero().squeeze()] + 0.
if self.verbose:
print('iteration: {} - Best loss: {:.6f}'.format(i, loss_best.sum()))
print('best losses:', loss_best)
### check step size
with torch.no_grad():
y1 = loss_indiv.detach().clone()
loss_steps[i] = y1.cpu() + 0
ind = (y1 > loss_best).nonzero().squeeze()
x_best[ind] = x_adv[ind].clone()
grad_best[ind] = grad[ind].clone()
loss_best[ind] = y1[ind] + 0
loss_best_steps[i + 1] = loss_best + 0
counter3 += 1
if counter3 == k:
fl_oscillation = self.check_oscillation(loss_steps.detach().cpu().numpy(), i, k, loss_best.detach().cpu().numpy(), k3=self.thr_decr)
fl_reduce_no_impr = (~reduced_last_check) * (loss_best_last_check.cpu().numpy() >= loss_best.cpu().numpy())
fl_oscillation = ~(~fl_oscillation * ~fl_reduce_no_impr)
reduced_last_check = np.copy(fl_oscillation)
loss_best_last_check = loss_best.clone()
if np.sum(fl_oscillation) > 0:
step_size[u[fl_oscillation]] /= 2.0
n_reduced = fl_oscillation.astype(float).sum()
fl_oscillation = np.where(fl_oscillation)
x_adv[fl_oscillation] = x_best[fl_oscillation].clone()
grad[fl_oscillation] = grad_best[fl_oscillation].clone()
counter3 = 0
k = np.maximum(k - self.size_decr, self.n_iter_min)
return x_best, acc, loss_best, x_best_adv
def perturb(self, x_in, y_in, best_loss=False, cheap=True):
assert self.norm in ['Linf', 'L2']
x = x_in.clone() if len(x_in.shape) == 4 else x_in.clone().unsqueeze(0)
y = y_in.clone() if len(y_in.shape) == 1 else y_in.clone().unsqueeze(0)
adv = x.clone()
acc = self.model(x).max(1)[1] == y
loss = -1e10 * torch.ones_like(acc).float()
if self.verbose:
print('-------------------------- running {}-attack with epsilon {:.4f} --------------------------'.format(self.norm, self.eps))
print('initial accuracy: {:.2%}'.format(acc.float().mean()))
startt = time.time()
if not best_loss:
torch.random.manual_seed(self.seed)
torch.cuda.random.manual_seed(self.seed)
if not cheap:
raise ValueError('not implemented yet')
else:
for counter in range(self.n_restarts):
ind_to_fool = acc.nonzero().squeeze()
if len(ind_to_fool.shape) == 0: ind_to_fool = ind_to_fool.unsqueeze(0)
if ind_to_fool.numel() != 0:
x_to_fool, y_to_fool = x[ind_to_fool].clone(), y[ind_to_fool].clone()
best_curr, acc_curr, loss_curr, adv_curr = self.attack_single_run(x_to_fool, y_to_fool)
ind_curr = (acc_curr == 0).nonzero().squeeze()
#
acc[ind_to_fool[ind_curr]] = 0
adv[ind_to_fool[ind_curr]] = adv_curr[ind_curr].clone()
if self.verbose:
print('restart {} - robust accuracy: {:.2%} - cum. time: {:.1f} s'.format(
counter, acc.float().mean(), time.time() - startt))
return acc, adv
else:
adv_best = x.detach().clone()
loss_best = torch.ones([x.shape[0]]).to(self.device) * (-float('inf'))
for counter in range(self.n_restarts):
best_curr, _, loss_curr, _ = self.attack_single_run(x, y)
ind_curr = (loss_curr > loss_best).nonzero().squeeze()
adv_best[ind_curr] = best_curr[ind_curr] + 0.
loss_best[ind_curr] = loss_curr[ind_curr] + 0.
if self.verbose:
print('restart {} - loss: {:.5f}'.format(counter, loss_best.sum()))
return loss_best, adv_best
class APGDAttack_targeted():
def __init__(self, model, n_iter=100, norm='Linf', n_restarts=1, eps=None,
seed=0, eot_iter=1, rho=.75, verbose=False, device='cuda',
n_target_classes=9):
self.model = model
self.n_iter = n_iter
self.eps = eps
self.norm = norm
self.n_restarts = n_restarts
self.seed = seed
self.eot_iter = eot_iter
self.thr_decr = rho
self.verbose = verbose
self.target_class = None
self.device = device
self.n_target_classes = n_target_classes
def check_oscillation(self, x, j, k, y5, k3=0.5):
t = np.zeros(x.shape[1])
for counter5 in range(k):
t += x[j - counter5] > x[j - counter5 - 1]
return t <= k*k3*np.ones(t.shape)
def check_shape(self, x):
return x if len(x.shape) > 0 else np.expand_dims(x, 0)
def dlr_loss_targeted(self, x, y, y_target):
x_sorted, ind_sorted = x.sort(dim=1)
return -(x[np.arange(x.shape[0]), y] - x[np.arange(x.shape[0]), y_target]) / (x_sorted[:, -1] - .5 * x_sorted[:, -3] - .5 * x_sorted[:, -4] + 1e-12)
def attack_single_run(self, x_in, y_in):
x = x_in.clone() if len(x_in.shape) == 4 else x_in.clone().unsqueeze(0)
y = y_in.clone() if len(y_in.shape) == 1 else y_in.clone().unsqueeze(0)
self.n_iter_2, self.n_iter_min, self.size_decr = max(int(0.22 * self.n_iter), 1), max(int(0.06 * self.n_iter), 1), max(int(0.03 * self.n_iter), 1)
if self.verbose:
print('parameters: ', self.n_iter, self.n_iter_2, self.n_iter_min, self.size_decr)
if self.norm == 'Linf':
t = 2 * torch.rand(x.shape).to(self.device).detach() - 1
x_adv = x.detach() + self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * t / (t.reshape([t.shape[0], -1]).abs().max(dim=1, keepdim=True)[0].reshape([-1, 1, 1, 1]))
elif self.norm == 'L2':
t = torch.randn(x.shape).to(self.device).detach()
x_adv = x.detach() + self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * t / ((t ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12)
x_adv = x_adv.clamp(0., 1.)
x_best = x_adv.clone()
x_best_adv = x_adv.clone()
loss_steps = torch.zeros([self.n_iter, x.shape[0]])
loss_best_steps = torch.zeros([self.n_iter + 1, x.shape[0]])
acc_steps = torch.zeros_like(loss_best_steps)
output = self.model(x)
y_target = output.sort(dim=1)[1][:, -self.target_class]
x_adv.requires_grad_()
grad = torch.zeros_like(x)
for _ in range(self.eot_iter):
with torch.enable_grad():
logits = self.model(x_adv) # 1 forward pass (eot_iter = 1)
loss_indiv = self.dlr_loss_targeted(logits, y, y_target)
loss = loss_indiv.sum()
grad += torch.autograd.grad(loss, [x_adv])[0].detach() # 1 backward pass (eot_iter = 1)
grad /= float(self.eot_iter)
grad_best = grad.clone()
acc = logits.detach().max(1)[1] == y
acc_steps[0] = acc + 0
loss_best = loss_indiv.detach().clone()
step_size = self.eps * torch.ones([x.shape[0], 1, 1, 1]).to(self.device).detach() * torch.Tensor([2.0]).to(self.device).detach().reshape([1, 1, 1, 1])
x_adv_old = x_adv.clone()
counter = 0
k = self.n_iter_2 + 0
u = np.arange(x.shape[0])
counter3 = 0
loss_best_last_check = loss_best.clone()
reduced_last_check = np.zeros(loss_best.shape) == np.zeros(loss_best.shape)
n_reduced = 0
for i in range(self.n_iter):
### gradient step
with torch.no_grad():
x_adv = x_adv.detach()
grad2 = x_adv - x_adv_old
x_adv_old = x_adv.clone()
a = 0.75 if i > 0 else 1.0
if self.norm == 'Linf':
x_adv_1 = x_adv + step_size * torch.sign(grad)
x_adv_1 = torch.clamp(torch.min(torch.max(x_adv_1, x - self.eps), x + self.eps), 0.0, 1.0)
x_adv_1 = torch.clamp(torch.min(torch.max(x_adv + (x_adv_1 - x_adv)*a + grad2*(1 - a), x - self.eps), x + self.eps), 0.0, 1.0)
elif self.norm == 'L2':
x_adv_1 = x_adv + step_size[0] * grad / ((grad ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12)
x_adv_1 = torch.clamp(x + (x_adv_1 - x) / (((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12) * torch.min(
self.eps * torch.ones(x.shape).to(self.device).detach(), ((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt()), 0.0, 1.0)
x_adv_1 = x_adv + (x_adv_1 - x_adv)*a + grad2*(1 - a)
x_adv_1 = torch.clamp(x + (x_adv_1 - x) / (((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12) * torch.min(
self.eps * torch.ones(x.shape).to(self.device).detach(), ((x_adv_1 - x) ** 2).sum(dim=(1, 2, 3), keepdim=True).sqrt() + 1e-12), 0.0, 1.0)
x_adv = x_adv_1 + 0.
### get gradient
x_adv.requires_grad_()
grad = torch.zeros_like(x)
for _ in range(self.eot_iter):
with torch.enable_grad():
logits = self.model(x_adv) # 1 forward pass (eot_iter = 1)
loss_indiv = self.dlr_loss_targeted(logits, y, y_target)
loss = loss_indiv.sum()
grad += torch.autograd.grad(loss, [x_adv])[0].detach() # 1 backward pass (eot_iter = 1)
grad /= float(self.eot_iter)
pred = logits.detach().max(1)[1] == y
acc = torch.min(acc, pred)
acc_steps[i + 1] = acc + 0
x_best_adv[(pred == 0).nonzero().squeeze()] = x_adv[(pred == 0).nonzero().squeeze()] + 0.
if self.verbose:
print('iteration: {} - Best loss: {:.6f}'.format(i, loss_best.sum()))
### check step size
with torch.no_grad():
y1 = loss_indiv.detach().clone()
loss_steps[i] = y1.cpu() + 0
ind = (y1 > loss_best).nonzero().squeeze()
x_best[ind] = x_adv[ind].clone()
grad_best[ind] = grad[ind].clone()
loss_best[ind] = y1[ind] + 0
loss_best_steps[i + 1] = loss_best + 0
counter3 += 1
if counter3 == k:
fl_oscillation = self.check_oscillation(loss_steps.detach().cpu().numpy(), i, k, loss_best.detach().cpu().numpy(), k3=self.thr_decr)
fl_reduce_no_impr = (~reduced_last_check) * (loss_best_last_check.cpu().numpy() >= loss_best.cpu().numpy())
fl_oscillation = ~(~fl_oscillation * ~fl_reduce_no_impr)
reduced_last_check = np.copy(fl_oscillation)
loss_best_last_check = loss_best.clone()
if np.sum(fl_oscillation) > 0:
step_size[u[fl_oscillation]] /= 2.0
n_reduced = fl_oscillation.astype(float).sum()
fl_oscillation = np.where(fl_oscillation)
x_adv[fl_oscillation] = x_best[fl_oscillation].clone()
grad[fl_oscillation] = grad_best[fl_oscillation].clone()
counter3 = 0
k = np.maximum(k - self.size_decr, self.n_iter_min)
return x_best, acc, loss_best, x_best_adv
def perturb(self, x_in, y_in, best_loss=False, cheap=True):
assert self.norm in ['Linf', 'L2']
x = x_in.clone() if len(x_in.shape) == 4 else x_in.clone().unsqueeze(0)
y = y_in.clone() if len(y_in.shape) == 1 else y_in.clone().unsqueeze(0)
adv = x.clone()
acc = self.model(x).max(1)[1] == y
loss = -1e10 * torch.ones_like(acc).float()
if self.verbose:
print('-------------------------- running {}-attack with epsilon {:.4f} --------------------------'.format(self.norm, self.eps))
print('initial accuracy: {:.2%}'.format(acc.float().mean()))
startt = time.time()
torch.random.manual_seed(self.seed)
torch.cuda.random.manual_seed(self.seed)
if not cheap:
raise ValueError('not implemented yet')
else:
for target_class in range(2, self.n_target_classes + 2):
self.target_class = target_class
for counter in range(self.n_restarts):
ind_to_fool = acc.nonzero().squeeze()
if len(ind_to_fool.shape) == 0: ind_to_fool = ind_to_fool.unsqueeze(0)
if ind_to_fool.numel() != 0:
x_to_fool, y_to_fool = x[ind_to_fool].clone(), y[ind_to_fool].clone()
best_curr, acc_curr, loss_curr, adv_curr = self.attack_single_run(x_to_fool, y_to_fool)
ind_curr = (acc_curr == 0).nonzero().squeeze()
#
acc[ind_to_fool[ind_curr]] = 0
adv[ind_to_fool[ind_curr]] = adv_curr[ind_curr].clone()
if self.verbose:
print('restart {} - target_class {} - robust accuracy: {:.2%} at eps = {:.5f} - cum. time: {:.1f} s'.format(
counter, self.target_class, acc.float().mean(), self.eps, time.time() - startt))
return acc, adv
| 47.695825 | 270 | 0.528823 | 3,344 | 23,991 | 3.57177 | 0.082237 | 0.034829 | 0.017582 | 0.012056 | 0.776206 | 0.756698 | 0.732669 | 0.72212 | 0.717934 | 0.708891 | 0 | 0.030343 | 0.329624 | 23,991 | 502 | 271 | 47.790837 | 0.712305 | 0.047184 | 0 | 0.764075 | 0 | 0.002681 | 0.031873 | 0.00456 | 0 | 0 | 0 | 0.001992 | 0.005362 | 1 | 0.037534 | false | 0 | 0.050938 | 0.005362 | 0.128686 | 0.042895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6756d56d4650f67e4d0276d0efa9d83ee640633 | 82 | py | Python | gui/screens/betplacepage.py | tonymorony/DiceCC-GUI | 89dbdcf9fe762fe673a0c8c90d461efc10ab31e4 | [
"MIT"
] | null | null | null | gui/screens/betplacepage.py | tonymorony/DiceCC-GUI | 89dbdcf9fe762fe673a0c8c90d461efc10ab31e4 | [
"MIT"
] | null | null | null | gui/screens/betplacepage.py | tonymorony/DiceCC-GUI | 89dbdcf9fe762fe673a0c8c90d461efc10ab31e4 | [
"MIT"
] | 1 | 2019-01-04T05:52:38.000Z | 2019-01-04T05:52:38.000Z | from kivy.uix.screenmanager import Screen
class BetPlacePage(Screen):
pass
| 11.714286 | 41 | 0.768293 | 10 | 82 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 82 | 6 | 42 | 13.666667 | 0.926471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e67be2fbbd6acf891d010994b3f3e483c1186ac4 | 47 | py | Python | scripts/eos.py | iwamura-lab/my_codes | 70140fe81b70d7ea4969c442771db40054cc109e | [
"MIT"
] | null | null | null | scripts/eos.py | iwamura-lab/my_codes | 70140fe81b70d7ea4969c442771db40054cc109e | [
"MIT"
] | null | null | null | scripts/eos.py | iwamura-lab/my_codes | 70140fe81b70d7ea4969c442771db40054cc109e | [
"MIT"
] | null | null | null | #!/usr/bin/env python
def main():
print()
| 9.4 | 21 | 0.574468 | 7 | 47 | 3.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 47 | 4 | 22 | 11.75 | 0.72973 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e67d3c0904d55580860a01b8aec29919befcd141 | 2,541 | py | Python | zephyr/zmake/tests/test_multiproc_executor.py | sjg20/ec | b9558d0a0aaca9dfb07ffb1eb915541bc2f6bf3b | [
"BSD-3-Clause"
] | null | null | null | zephyr/zmake/tests/test_multiproc_executor.py | sjg20/ec | b9558d0a0aaca9dfb07ffb1eb915541bc2f6bf3b | [
"BSD-3-Clause"
] | null | null | null | zephyr/zmake/tests/test_multiproc_executor.py | sjg20/ec | b9558d0a0aaca9dfb07ffb1eb915541bc2f6bf3b | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2021 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import threading
import zmake.multiproc
def test_single_function_executor_success():
executor = zmake.multiproc.Executor(fail_fast=True)
executor.append(lambda: 0)
assert executor.wait() == 0
executor = zmake.multiproc.Executor(fail_fast=False)
executor.append(lambda: 0)
assert executor.wait() == 0
def test_single_function_executor_fail():
executor = zmake.multiproc.Executor(fail_fast=True)
executor.append(lambda: -1)
assert executor.wait() == -1
executor = zmake.multiproc.Executor(fail_fast=False)
executor.append(lambda: -2)
assert executor.wait() == -2
def test_single_function_executor_raise():
executor = zmake.multiproc.Executor(fail_fast=True)
executor.append(lambda: 1/0)
assert executor.wait() != 0
executor = zmake.multiproc.Executor(fail_fast=False)
executor.append(lambda: 1/0)
assert executor.wait() != 0
def _lock_step(cv, predicate, step, return_value=0):
with cv:
cv.wait_for(predicate=lambda: step[0] == predicate)
step[0] += 1
cv.notify_all()
return return_value
def test_two_function_executor_wait_for_both():
cv = threading.Condition()
step = [0]
executor = zmake.multiproc.Executor(fail_fast=True)
executor.append(lambda: _lock_step(cv=cv, predicate=0, step=step))
executor.append(lambda: _lock_step(cv=cv, predicate=1, step=step))
assert executor.wait() == 0
assert step[0] == 2
step = [0]
executor = zmake.multiproc.Executor(fail_fast=False)
executor.append(lambda: _lock_step(cv=cv, predicate=0, step=step))
executor.append(lambda: _lock_step(cv=cv, predicate=1, step=step))
assert executor.wait() == 0
assert step[0] == 2
def test_two_function_executor_one_fails():
cv = threading.Condition()
step = [0]
executor = zmake.multiproc.Executor(fail_fast=True)
executor.append(lambda: _lock_step(cv=cv, predicate=0, step=step, return_value=-1))
executor.append(lambda: _lock_step(cv=cv, predicate=2, step=step))
assert executor.wait() == -1
assert step[0] == 1
step = [0]
executor = zmake.multiproc.Executor(fail_fast=False)
executor.append(lambda: _lock_step(cv=cv, predicate=0, step=step, return_value=-1))
executor.append(lambda: _lock_step(cv=cv, predicate=1, step=step))
assert executor.wait() == -1
assert step[0] == 2
| 32.164557 | 87 | 0.702086 | 355 | 2,541 | 4.864789 | 0.177465 | 0.113492 | 0.162131 | 0.173712 | 0.796178 | 0.717429 | 0.713955 | 0.713955 | 0.702374 | 0.661262 | 0 | 0.022825 | 0.172373 | 2,541 | 78 | 88 | 32.576923 | 0.798383 | 0.06218 | 0 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245614 | 1 | 0.105263 | false | 0 | 0.035088 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e682045e77c14755d9f46ff5b9456e3246d702a4 | 63 | py | Python | web/src/order/__init__.py | saurabh1e/SuperFlaskSeed | a533daee568ca349be8d9ef4a7a9d5065abb2324 | [
"MIT"
] | 11 | 2017-01-19T16:27:07.000Z | 2022-01-19T07:18:47.000Z | web/src/order/__init__.py | saurabh1e/SuperFlaskSeed | a533daee568ca349be8d9ef4a7a9d5065abb2324 | [
"MIT"
] | null | null | null | web/src/order/__init__.py | saurabh1e/SuperFlaskSeed | a533daee568ca349be8d9ef4a7a9d5065abb2324 | [
"MIT"
] | 6 | 2016-11-13T14:07:25.000Z | 2019-12-04T15:34:09.000Z | from . import models
from . import schemas
from . import views
| 15.75 | 21 | 0.761905 | 9 | 63 | 5.333333 | 0.555556 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 63 | 3 | 22 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e68922ecabcb31d230957ff7fdd9075afd62f86b | 3,031 | py | Python | propellent_03_function/parameter_GUI_file.py | yechong316/propellant | 41223679acf47f25dddda8aad8b7f5e0687abb49 | [
"Apache-2.0"
] | 3 | 2019-10-24T12:22:54.000Z | 2020-12-31T07:10:45.000Z | propellent_03_function/parameter_GUI_file.py | PandaJerrrrrrrry/propellant | 41223679acf47f25dddda8aad8b7f5e0687abb49 | [
"Apache-2.0"
] | null | null | null | propellent_03_function/parameter_GUI_file.py | PandaJerrrrrrrry/propellant | 41223679acf47f25dddda8aad8b7f5e0687abb49 | [
"Apache-2.0"
] | 2 | 2020-04-02T05:26:01.000Z | 2021-12-12T07:50:33.000Z | # -* - coding:UTF-8 -*-
#Start defining data for interfaces and text
class var_data:
def __init__(self):
self.var_data = 'GUI'
self.GUI_c_d = 1e-6
self.GUI_c_e = 23500
self.GUI_c_p = 0.33
self.GUI_c_c = 0.00043
self.GUI_c_s = 826
self.GUI_c_ep = 0.00143
self.GUI_c_mesh_size = 5
self.GUI_b_d = 1.23E-006
self.GUI_b_e = 0.384
self.GUI_b_p = 0.3
self.GUI_b_c = 1
self.GUI_b_s = 1219
self.GUI_b_ep = 0.000326
self.GUI_b_mesh_size = 5
self.GUI_f_d = 0.00785
self.GUI_f_e = 2e5
self.GUI_f_p = 0.3
self.GUI_f_c = 1.6578
self.GUI_f_s = 512
self.GUI_f_ep = 1.22E-005
self.GUI_f_mesh_size = 3
self.GUI_h_d = 1.65E-006
self.GUI_h_e = 4000
self.GUI_h_p = 0.3
self.GUI_h_c = 0.001
self.GUI_h_s = 1500
self.GUI_h_ep = 0.0001263
self.GUI_h_mesh_size = 5
@property
def extract_var(self):
return self.var_data
#Start defining methods for extracting GUI data
class Data_GUI(var_data):
@property
def c_d(self):
return self.GUI_c_d
@property
def c_e(self):
return self.GUI_c_e
@property
def c_p(self):
return self.GUI_c_p
@property
def c_c(self):
return self.GUI_c_c
@property
def c_s(self):
return self.GUI_c_s
@property
def c_ep(self):
return self.GUI_c_ep
@property
def c_mesh_size(self):
return self.GUI_c_mesh_size
@property
def b_d(self):
return self.GUI_b_d
@property
def b_e(self):
return self.GUI_b_e
@property
def b_p(self):
return self.GUI_b_p
@property
def b_c(self):
return self.GUI_b_c
@property
def b_s(self):
return self.GUI_b_s
@property
def b_ep(self):
return self.GUI_b_ep
@property
def b_mesh_size(self):
return self.GUI_b_mesh_size
@property
def f_d(self):
return self.GUI_f_d
@property
def f_e(self):
return self.GUI_f_e
@property
def f_p(self):
return self.GUI_f_p
@property
def f_c(self):
return self.GUI_f_c
@property
def f_s(self):
return self.GUI_f_s
@property
def f_ep(self):
return self.GUI_f_ep
@property
def f_mesh_size(self):
return self.GUI_f_mesh_size
@property
def h_d(self):
return self.GUI_h_d
@property
def h_e(self):
return self.GUI_h_e
@property
def h_p(self):
return self.GUI_h_p
@property
def h_c(self):
return self.GUI_h_c
@property
def h_s(self):
return self.GUI_h_s
@property
def h_ep(self):
return self.GUI_h_ep
@property
def h_mesh_size(self):
return self.GUI_h_mesh_size
| 23.866142 | 48 | 0.561201 | 492 | 3,031 | 3.128049 | 0.119919 | 0.254711 | 0.263808 | 0.309292 | 0.458739 | 0.064977 | 0 | 0 | 0 | 0 | 0 | 0.054555 | 0.358957 | 3,031 | 126 | 49 | 24.055556 | 0.737519 | 0.036292 | 0 | 0.243697 | 0 | 0 | 0.001075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.252101 | false | 0 | 0 | 0.243697 | 0.512605 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e69f250438b37252f662714352e7b0c3efd5ab7f | 6,526 | py | Python | iai/networks.py | jradrion/i-against-i | f5066865c7ff6a243e359e16b378474380510d99 | [
"MIT"
] | 1 | 2020-01-07T04:17:30.000Z | 2020-01-07T04:17:30.000Z | iai/networks.py | jradrion/i-against-i | f5066865c7ff6a243e359e16b378474380510d99 | [
"MIT"
] | 2 | 2020-01-28T23:14:27.000Z | 2020-02-13T18:07:28.000Z | iai/networks.py | jradrion/i-against-i | f5066865c7ff6a243e359e16b378474380510d99 | [
"MIT"
] | null | null | null | '''
Authors: Jeff Adrion
'''
from iai.imports import *
#def iaiCNN_categorical_crossentropy(inputShape,y):
#
# haps,pos = inputShape
#
# numSNPs = haps[0].shape[0]
# numSamps = haps[0].shape[1]
# numPos = pos[0].shape[0]
#
# img_1_inputs = layers.Input(shape=(numSNPs,numSamps))
#
# h = layers.Conv1D(1250, kernel_size=2, activation='relu', name='conv1_1')(img_1_inputs)
# h = layers.Conv1D(512, kernel_size=2, dilation_rate=1, activation='relu')(h)
# h = layers.AveragePooling1D(pool_size=2)(h)
# h = layers.Dropout(0.25)(h)
# h = layers.Conv1D(512, kernel_size=2, activation='relu')(h)
# h = layers.AveragePooling1D(pool_size=2)(h)
# h = layers.Dropout(0.25)(h)
# h = layers.Flatten()(h)
#
# loc_input = layers.Input(shape=(numPos,))
# m2 = layers.Dense(64,name="m2_dense1")(loc_input)
# m2 = layers.Dropout(0.1)(m2)
#
# h = layers.concatenate([h,m2])
# h = layers.Dense(128,activation='relu')(h)
# h = layers.Dropout(0.2)(h)
# output = layers.Dense(2,kernel_initializer='normal',name="softmax",activation='softmax')(h)
#
# model = Model(inputs=[img_1_inputs,loc_input], outputs=[output])
# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.summary()
#
# return model
def iaiCNN_categorical_crossentropy_noPos(x,y):
haps = x
numSNPs = haps[0].shape[0]
numSamps = haps[0].shape[1]
img_1_inputs = layers.Input(shape=(numSNPs,numSamps))
h = layers.Conv1D(1250, kernel_size=2, activation='relu', name='conv1_1')(img_1_inputs)
h = layers.Conv1D(512, kernel_size=2, dilation_rate=1, activation='relu')(h)
h = layers.AveragePooling1D(pool_size=2)(h)
h = layers.Dropout(0.25)(h)
h = layers.Conv1D(512, kernel_size=2, activation='relu')(h)
h = layers.AveragePooling1D(pool_size=2)(h)
h = layers.Dropout(0.25)(h)
h = layers.Flatten()(h)
h = layers.Dense(128,activation='relu')(h)
h = layers.Dropout(0.2)(h)
output = layers.Dense(2,kernel_initializer='normal',name="softmax",activation='softmax')(h)
model = Model(inputs=[img_1_inputs], outputs=[output])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
return model
def iaiGRU_categorical_crossentropy_noPos(x,y):
'''
Same as GRU_VANILLA but with dropout AFTER each dense layer.
'''
haps = x
numSNPs = haps[0].shape[0]
numSamps = haps[0].shape[1]
genotype_inputs = layers.Input(shape=(numSNPs,numSamps))
model = layers.Bidirectional(layers.GRU(84,return_sequences=False))(genotype_inputs)
model = layers.Dense(256)(model)
model = layers.Dropout(0.35)(model)
#----------------------------------------------------
model = layers.Dense(64)(model)
model = layers.Dropout(0.35)(model)
output = layers.Dense(2,kernel_initializer='normal',name="softmax",activation='softmax')(model)
#----------------------------------------------------
model = Model(inputs=[genotype_inputs], outputs=[output])
#model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
return model
#def iaiGRU_categorical_crossentropy_noPos(x,y):
# '''
# Same as GRU_VANILLA but with dropout AFTER each dense layer.
# '''
# haps = x
#
# numSNPs = haps[0].shape[0]
# numSamps = haps[0].shape[1]
#
# genotype_inputs = layers.Input(shape=(numSNPs,numSamps))
# model = layers.Bidirectional(layers.GRU(84,return_sequences=False,bias_initializer='zeros',kernel_initializer=tf.random_uniform_initializer(seed=123)))(genotype_inputs)
# model = layers.Dense(256)(model)
# model = layers.Dropout(0.35)(model)
#
# #----------------------------------------------------
#
# model = layers.Dense(64)(model)
# model = layers.Dropout(0.35)(model)
# output = layers.Dense(2,kernel_initializer='normal',name="softmax",activation='softmax')(model)
#
# #----------------------------------------------------
#
# model = Model(inputs=[genotype_inputs], outputs=[output])
# model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# model.summary()
#
# return model
def iaiCNN_binary_crossentropy(inputShape,y):
haps,pos = inputShape
numSNPs = haps[0].shape[0]
numSamps = haps[0].shape[1]
numPos = pos[0].shape[0]
img_1_inputs = layers.Input(shape=(numSNPs,numSamps))
h = layers.Conv1D(1250, kernel_size=2, activation='relu', name='conv1_1')(img_1_inputs)
h = layers.Conv1D(512, kernel_size=2, dilation_rate=1, activation='relu')(h)
h = layers.AveragePooling1D(pool_size=2)(h)
h = layers.Dropout(0.25)(h)
h = layers.Conv1D(512, kernel_size=2, activation='relu')(h)
h = layers.AveragePooling1D(pool_size=2)(h)
h = layers.Dropout(0.25)(h)
h = layers.Flatten()(h)
loc_input = layers.Input(shape=(numPos,))
m2 = layers.Dense(64,name="m2_dense1")(loc_input)
m2 = layers.Dropout(0.1)(m2)
h = layers.concatenate([h,m2])
h = layers.Dense(128,activation='relu')(h)
h = layers.Dropout(0.2)(h)
output = layers.Dense(1,kernel_initializer='normal',name="out_dense",activation='sigmoid')(h)
model = Model(inputs=[img_1_inputs,loc_input], outputs=[output])
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
return model
def iaiCNN_adv(logits=False, input_ph=None, img_rows=28, img_cols=28,
channels=1, nb_filters=64, nb_classes=10):
model = Sequential()
input_shape = (img_rows, img_cols)
layer_list = [layers.Conv1D(1250, kernel_size=2, activation='relu', name='conv1_1', input_shape=input_shape),
layers.Conv1D(512, kernel_size=2, dilation_rate=1, activation='relu'),
layers.AveragePooling1D(pool_size=2),
layers.Dropout(0.25),
layers.Conv1D(512, kernel_size=2, activation='relu'),
layers.AveragePooling1D(pool_size=2),
layers.Dropout(0.25),
layers.Flatten(),
layers.Dense(128,activation='relu'),
layers.Dropout(0.2),
layers.Dense(nb_classes)]
for layer in layer_list:
model.add(layer)
if logits:
logits_tensor = model(input_ph)
model.add(layers.Activation('softmax'))
if logits:
return model, logits_tensor
else:
return model
| 33.639175 | 173 | 0.644346 | 849 | 6,526 | 4.823322 | 0.134276 | 0.054701 | 0.042979 | 0.035165 | 0.875214 | 0.86105 | 0.855678 | 0.855678 | 0.849328 | 0.849328 | 0 | 0.042624 | 0.165952 | 6,526 | 193 | 174 | 33.813472 | 0.709719 | 0.369445 | 0 | 0.529412 | 0 | 0 | 0.063571 | 0.01192 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047059 | false | 0 | 0.011765 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc46e0877551dbe2fce3a70fbdaefeaf66bd24fe | 43 | py | Python | spikeforest/spikeforestwidgets/templatewidget/__init__.py | mhhennig/spikeforest | 5b4507ead724af3de0be5d48a3b23aaedb0be170 | [
"Apache-2.0"
] | 1 | 2021-09-23T01:07:19.000Z | 2021-09-23T01:07:19.000Z | spikeforest/spikeforestwidgets/templatewidget/__init__.py | mhhennig/spikeforest | 5b4507ead724af3de0be5d48a3b23aaedb0be170 | [
"Apache-2.0"
] | null | null | null | spikeforest/spikeforestwidgets/templatewidget/__init__.py | mhhennig/spikeforest | 5b4507ead724af3de0be5d48a3b23aaedb0be170 | [
"Apache-2.0"
] | 1 | 2021-09-23T01:07:21.000Z | 2021-09-23T01:07:21.000Z | from .templatewidget import TemplateWidget
| 21.5 | 42 | 0.883721 | 4 | 43 | 9.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc4acc4a756e6048fe0dd4b26b6aed94bcd3a0f0 | 70 | py | Python | build/lib/autovc/models/__init__.py | jlucete/ParallelWaveGAN-AutoVC | d2d2c2135587f5c1a221c23abb5724c0993f232b | [
"MIT"
] | null | null | null | build/lib/autovc/models/__init__.py | jlucete/ParallelWaveGAN-AutoVC | d2d2c2135587f5c1a221c23abb5724c0993f232b | [
"MIT"
] | null | null | null | build/lib/autovc/models/__init__.py | jlucete/ParallelWaveGAN-AutoVC | d2d2c2135587f5c1a221c23abb5724c0993f232b | [
"MIT"
] | null | null | null | from .autovc import * # NOQA
from .parallel_wavegan import * # NOQA
| 23.333333 | 39 | 0.714286 | 9 | 70 | 5.444444 | 0.666667 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 70 | 2 | 40 | 35 | 0.875 | 0.128571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc4da6453bc79e87bea1625269e4448765c93c9a | 23 | py | Python | gxscalc/__init__.py | cycloawaodorin/gxscalc | 45b9f59b2d7cd47410ba67b87d31154bb5123752 | [
"MIT"
] | 3 | 2020-07-07T04:27:21.000Z | 2020-10-12T12:21:37.000Z | gxscalc/__init__.py | cycloawaodorin/gxscalc | 45b9f59b2d7cd47410ba67b87d31154bb5123752 | [
"MIT"
] | null | null | null | gxscalc/__init__.py | cycloawaodorin/gxscalc | 45b9f59b2d7cd47410ba67b87d31154bb5123752 | [
"MIT"
] | null | null | null | from .gxscalc import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fca8997b15e9e7851339cdcc919a0b3683917c5d | 769 | py | Python | TF.py | lisadratva/BIO-463 | 2f15ffbf2c29c0952548d2c006cfe0e58df02488 | [
"MIT"
] | null | null | null | TF.py | lisadratva/BIO-463 | 2f15ffbf2c29c0952548d2c006cfe0e58df02488 | [
"MIT"
] | null | null | null | TF.py | lisadratva/BIO-463 | 2f15ffbf2c29c0952548d2c006cfe0e58df02488 | [
"MIT"
] | null | null | null | # names of TFs to inspect in different formats needed for the code
TFs = ['GM12878|BRCA1|None',
'GM12801|CTCF|None',
'K562|E2F4|None',
'GM12878|c-Fos|None',
'HeLa-S3|IRF3|None',
'H1-hESC|SP4|None',
'GM12891|TAF1|None',
'K562|STAT2|IFNa6h',
'T-47D|ERalpha|Genistein_100nM',
'HEK293|KAP1|None']
TFsimple = ['BRCA1', 'CTCF', 'E2F4', 'c-Fos', 'IRF3', 'SP4', 'TAF1', 'STAT2', 'ERalpha', 'KAP1']
TFsSlash = ['GM12878\|BRCA1\|None',
'GM12801\|CTCF\|None',
'K562\|E2F4\|None',
'GM12878\|c-Fos\|None',
'HeLa-S3\|IRF3\|None',
'H1-hESC\|SP4\|None',
'GM12891\|TAF1\|None',
'K562\|STAT2\|IFNa6h',
'T-47D\|ERalpha\|Genistein_100nM',
'HEK293\|KAP1\|None'] | 30.76 | 96 | 0.553966 | 96 | 769 | 4.416667 | 0.416667 | 0.075472 | 0.075472 | 0.108491 | 0.726415 | 0.726415 | 0.726415 | 0.726415 | 0.726415 | 0.726415 | 0 | 0.165541 | 0.230169 | 769 | 25 | 97 | 30.76 | 0.550676 | 0.083225 | 0 | 0 | 0 | 0 | 0.600852 | 0.085227 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d7a3a6c93c239478bc3988f9c8259b93834a2ab | 245 | py | Python | gym_pybullet_drones/envs/__init__.py | therealjtgill/gym-pybullet-drones | d813c0b22d7e1174e577eec42f98ea1dd2383ee6 | [
"MIT"
] | null | null | null | gym_pybullet_drones/envs/__init__.py | therealjtgill/gym-pybullet-drones | d813c0b22d7e1174e577eec42f98ea1dd2383ee6 | [
"MIT"
] | null | null | null | gym_pybullet_drones/envs/__init__.py | therealjtgill/gym-pybullet-drones | d813c0b22d7e1174e577eec42f98ea1dd2383ee6 | [
"MIT"
] | null | null | null | from gym_pybullet_drones.envs.CtrlAviary import CtrlAviary
from gym_pybullet_drones.envs.DynAviary import DynAviary
from gym_pybullet_drones.envs.VelocityAviary import VelocityAviary
from gym_pybullet_drones.envs.VisionAviary import VisionAviary | 61.25 | 66 | 0.906122 | 32 | 245 | 6.6875 | 0.3125 | 0.130841 | 0.280374 | 0.392523 | 0.46729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061224 | 245 | 4 | 67 | 61.25 | 0.930435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |