hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1340d0f1ccf1247bf84d8184663b5ee70efb87cc | 15,869 | py | Python | tests/contrib/elasticsearch/test.py | uniq10/dd-trace-py | ca9ce1fe552cf03c2828bcd160e537336aa275d5 | [
"Apache-2.0",
"BSD-3-Clause"
] | 1 | 2020-10-17T14:55:46.000Z | 2020-10-17T14:55:46.000Z | tests/contrib/elasticsearch/test.py | uniq10/dd-trace-py | ca9ce1fe552cf03c2828bcd160e537336aa275d5 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | tests/contrib/elasticsearch/test.py | uniq10/dd-trace-py | ca9ce1fe552cf03c2828bcd160e537336aa275d5 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | import datetime
# project
from ddtrace import Pin
from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
from ddtrace.ext import http
from ddtrace.contrib.elasticsearch import get_traced_transport
from ddtrace.contrib.elasticsearch.elasticsearch import elasticsearch
from ddtrace.contrib.elasticsearch.patch import patch, unpatch
# testing
from tests.opentracer.utils import init_tracer
from ..config import ELASTICSEARCH_CONFIG
from ...base import BaseTracerTestCase
from ...test_tracer import get_dummy_tracer
from ...utils import assert_span_http_status_code
class ElasticsearchTest(BaseTracerTestCase):
"""
Elasticsearch integration test suite.
Need a running ElasticSearch
"""
ES_INDEX = "ddtrace_index"
ES_TYPE = "ddtrace_type"
TEST_SERVICE = "test"
TEST_PORT = str(ELASTICSEARCH_CONFIG["port"])
def setUp(self):
"""Prepare ES"""
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
es.indices.delete(index=self.ES_INDEX, ignore=[400, 404])
def tearDown(self):
"""Clean ES"""
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
es.indices.delete(index=self.ES_INDEX, ignore=[400, 404])
def test_elasticsearch(self):
"""Test the elasticsearch integration
All in this for now. Will split it later.
"""
tracer = get_dummy_tracer()
writer = tracer.writer
transport_class = get_traced_transport(datadog_tracer=tracer, datadog_service=self.TEST_SERVICE,)
es = elasticsearch.Elasticsearch(transport_class=transport_class, port=ELASTICSEARCH_CONFIG["port"])
# Test index creation
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = writer.pop()
assert spans
assert len(spans) == 1
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.service == self.TEST_SERVICE
assert span.name == "elasticsearch.query"
assert span.span_type == "elasticsearch"
assert span.error == 0
assert span.get_tag("elasticsearch.method") == "PUT"
assert span.get_tag("elasticsearch.url") == "/%s" % self.ES_INDEX
assert span.resource == "PUT /%s" % self.ES_INDEX
# Put data
args = {"index": self.ES_INDEX, "doc_type": self.ES_TYPE}
es.index(id=10, body={"name": "ten", "created": datetime.date(2016, 1, 1)}, **args)
es.index(id=11, body={"name": "eleven", "created": datetime.date(2016, 2, 1)}, **args)
es.index(id=12, body={"name": "twelve", "created": datetime.date(2016, 3, 1)}, **args)
spans = writer.pop()
assert spans
assert len(spans) == 3
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.error == 0
assert span.get_tag("elasticsearch.method") == "PUT"
assert span.get_tag("elasticsearch.url") == "/%s/%s/%s" % (self.ES_INDEX, self.ES_TYPE, 10)
assert span.resource == "PUT /%s/%s/?" % (self.ES_INDEX, self.ES_TYPE)
# Make the data available
es.indices.refresh(index=self.ES_INDEX)
spans = writer.pop()
assert spans, spans
assert len(spans) == 1
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.resource == "POST /%s/_refresh" % self.ES_INDEX
assert span.get_tag("elasticsearch.method") == "POST"
assert span.get_tag("elasticsearch.url") == "/%s/_refresh" % self.ES_INDEX
# Search data
result = es.search(sort=["name:desc"], size=100, body={"query": {"match_all": {}}}, **args)
assert len(result["hits"]["hits"]) == 3, result
spans = writer.pop()
assert spans
assert len(spans) == 1
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.resource == "GET /%s/%s/_search" % (self.ES_INDEX, self.ES_TYPE)
assert span.get_tag("elasticsearch.method") == "GET"
assert span.get_tag("elasticsearch.url") == "/%s/%s/_search" % (self.ES_INDEX, self.ES_TYPE)
assert span.get_tag("elasticsearch.body").replace(" ", "") == '{"query":{"match_all":{}}}'
assert set(span.get_tag("elasticsearch.params").split("&")) == {"sort=name%3Adesc", "size=100"}
assert http.QUERY_STRING not in span.meta
self.assertTrue(span.get_metric("elasticsearch.took") > 0)
# Search by a type that is not supported by the default json encoder
query = {"range": {"created": {"gte": datetime.date(2016, 2, 1)}}}
result = es.search(size=100, body={"query": query}, **args)
assert len(result["hits"]["hits"]) == 2, result
# Raise a 404 error with a non-existent index
writer.pop()
try:
es.get(index="non_existent_index", id=100, doc_type="_all")
assert "error_not_raised" == "elasticsearch.exceptions.TransportError"
except elasticsearch.exceptions.TransportError:
spans = writer.pop()
assert spans
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert_span_http_status_code(span, 404)
# Raise a 400 error: the index 10 is created twice
try:
es.indices.create(index=10)
es.indices.create(index=10)
assert "error_not_raised" == "elasticsearch.exceptions.TransportError"
except elasticsearch.exceptions.TransportError:
spans = writer.pop()
assert spans
span = spans[-1]
BaseTracerTestCase.assert_is_measured(span)
assert_span_http_status_code(span, 400)
# Drop the index, checking it won't raise an exception on success or failure
es.indices.delete(index=self.ES_INDEX, ignore=[400, 404])
es.indices.delete(index=self.ES_INDEX, ignore=[400, 404])
def test_elasticsearch_ot(self):
"""Shortened OpenTracing version of test_elasticsearch."""
tracer = get_dummy_tracer()
writer = tracer.writer
ot_tracer = init_tracer("my_svc", tracer)
transport_class = get_traced_transport(datadog_tracer=tracer, datadog_service=self.TEST_SERVICE,)
es = elasticsearch.Elasticsearch(transport_class=transport_class, port=ELASTICSEARCH_CONFIG["port"])
# Test index creation
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
with ot_tracer.start_active_span("ot_span"):
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = writer.pop()
assert spans
assert len(spans) == 2
ot_span, dd_span = spans
# confirm the parenting
assert ot_span.parent_id is None
assert dd_span.parent_id == ot_span.span_id
assert ot_span.service == "my_svc"
assert ot_span.resource == "ot_span"
BaseTracerTestCase.assert_is_measured(dd_span)
assert dd_span.service == self.TEST_SERVICE
assert dd_span.name == "elasticsearch.query"
assert dd_span.span_type == "elasticsearch"
assert dd_span.error == 0
assert dd_span.get_tag("elasticsearch.method") == "PUT"
assert dd_span.get_tag("elasticsearch.url") == "/%s" % self.ES_INDEX
assert dd_span.resource == "PUT /%s" % self.ES_INDEX
class ElasticsearchPatchTest(BaseTracerTestCase):
"""
Elasticsearch integration test suite.
Need a running ElasticSearch.
Test cases with patching.
Will merge when patching will be the default/only way.
"""
ES_INDEX = "ddtrace_index"
ES_TYPE = "ddtrace_type"
TEST_SERVICE = "test"
TEST_PORT = str(ELASTICSEARCH_CONFIG["port"])
def setUp(self):
"""Prepare ES"""
super(ElasticsearchPatchTest, self).setUp()
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
Pin(service=self.TEST_SERVICE, tracer=self.tracer).onto(es.transport)
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
patch()
self.es = es
def tearDown(self):
"""Clean ES"""
super(ElasticsearchPatchTest, self).tearDown()
unpatch()
self.es.indices.delete(index=self.ES_INDEX, ignore=[400, 404])
def test_elasticsearch(self):
es = self.es
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 1
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.service == self.TEST_SERVICE
assert span.name == "elasticsearch.query"
assert span.span_type == "elasticsearch"
assert span.error == 0
assert span.get_tag("elasticsearch.method") == "PUT"
assert span.get_tag("elasticsearch.url") == "/%s" % self.ES_INDEX
assert span.resource == "PUT /%s" % self.ES_INDEX
args = {"index": self.ES_INDEX, "doc_type": self.ES_TYPE}
es.index(id=10, body={"name": "ten", "created": datetime.date(2016, 1, 1)}, **args)
es.index(id=11, body={"name": "eleven", "created": datetime.date(2016, 2, 1)}, **args)
es.index(id=12, body={"name": "twelve", "created": datetime.date(2016, 3, 1)}, **args)
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 3
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.error == 0
assert span.get_tag("elasticsearch.method") == "PUT"
assert span.get_tag("elasticsearch.url") == "/%s/%s/%s" % (self.ES_INDEX, self.ES_TYPE, 10)
assert span.resource == "PUT /%s/%s/?" % (self.ES_INDEX, self.ES_TYPE)
args = {"index": self.ES_INDEX, "doc_type": self.ES_TYPE}
es.indices.refresh(index=self.ES_INDEX)
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 1
span = spans[0]
BaseTracerTestCase.assert_is_measured(span)
assert span.resource == "POST /%s/_refresh" % self.ES_INDEX
assert span.get_tag("elasticsearch.method") == "POST"
assert span.get_tag("elasticsearch.url") == "/%s/_refresh" % self.ES_INDEX
# search data
args = {"index": self.ES_INDEX, "doc_type": self.ES_TYPE}
with self.override_http_config("elasticsearch", dict(trace_query_string=True)):
es.index(id=10, body={"name": "ten", "created": datetime.date(2016, 1, 1)}, **args)
es.index(id=11, body={"name": "eleven", "created": datetime.date(2016, 2, 1)}, **args)
es.index(id=12, body={"name": "twelve", "created": datetime.date(2016, 3, 1)}, **args)
result = es.search(sort=["name:desc"], size=100, body={"query": {"match_all": {}}}, **args)
assert len(result["hits"]["hits"]) == 3, result
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 4
span = spans[-1]
BaseTracerTestCase.assert_is_measured(span)
assert span.resource == "GET /%s/%s/_search" % (self.ES_INDEX, self.ES_TYPE)
assert span.get_tag("elasticsearch.method") == "GET"
assert span.get_tag("elasticsearch.url") == "/%s/%s/_search" % (self.ES_INDEX, self.ES_TYPE)
assert span.get_tag("elasticsearch.body").replace(" ", "") == '{"query":{"match_all":{}}}'
assert set(span.get_tag("elasticsearch.params").split("&")) == {"sort=name%3Adesc", "size=100"}
assert set(span.get_tag(http.QUERY_STRING).split("&")) == {"sort=name%3Adesc", "size=100"}
self.assertTrue(span.get_metric("elasticsearch.took") > 0)
# Search by a type that is not supported by the default json encoder
query = {"range": {"created": {"gte": datetime.date(2016, 2, 1)}}}
result = es.search(size=100, body={"query": query}, **args)
assert len(result["hits"]["hits"]) == 2, result
def test_analytics_default(self):
es = self.es
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = self.get_spans()
self.assertEqual(len(spans), 1)
self.assertIsNone(spans[0].get_metric(ANALYTICS_SAMPLE_RATE_KEY))
def test_analytics_with_rate(self):
with self.override_config("elasticsearch", dict(analytics_enabled=True, analytics_sample_rate=0.5)):
es = self.es
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = self.get_spans()
self.assertEqual(len(spans), 1)
self.assertEqual(spans[0].get_metric(ANALYTICS_SAMPLE_RATE_KEY), 0.5)
def test_analytics_without_rate(self):
with self.override_config("elasticsearch", dict(analytics_enabled=True)):
es = self.es
mapping = {"mapping": {"properties": {"created": {"type": "date", "format": "yyyy-MM-dd"}}}}
es.indices.create(index=self.ES_INDEX, ignore=400, body=mapping)
spans = self.get_spans()
self.assertEqual(len(spans), 1)
self.assertEqual(spans[0].get_metric(ANALYTICS_SAMPLE_RATE_KEY), 1.0)
def test_patch_unpatch(self):
# Test patch idempotence
patch()
patch()
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
Pin(service=self.TEST_SERVICE, tracer=self.tracer).onto(es.transport)
# Test index creation
es.indices.create(index=self.ES_INDEX, ignore=400)
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 1
# Test unpatch
self.reset()
unpatch()
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
# Test index creation
es.indices.create(index=self.ES_INDEX, ignore=400)
spans = self.get_spans()
self.reset()
assert not spans, spans
# Test patch again
self.reset()
patch()
es = elasticsearch.Elasticsearch(port=ELASTICSEARCH_CONFIG["port"])
Pin(service=self.TEST_SERVICE, tracer=self.tracer).onto(es.transport)
# Test index creation
es.indices.create(index=self.ES_INDEX, ignore=400)
spans = self.get_spans()
self.reset()
assert spans, spans
assert len(spans) == 1
@BaseTracerTestCase.run_in_subprocess(env_overrides=dict(DD_SERVICE="mysvc"))
def test_user_specified_service(self):
"""
When a user specifies a service for the app
The elasticsearch integration should not use it.
"""
# Ensure that the service name was configured
from ddtrace import config
assert config.service == "mysvc"
self.es.indices.create(index=self.ES_INDEX, ignore=400)
Pin(service="es", tracer=self.tracer).onto(self.es.transport)
spans = self.get_spans()
self.reset()
assert len(spans) == 1
assert spans[0].service != "mysvc"
def test_none_param(self):
try:
self.es.transport.perform_request("GET", "/test-index", body="{}", params=None)
except elasticsearch.exceptions.NotFoundError:
pass
spans = self.get_spans()
assert len(spans) == 1
| 39.6725 | 108 | 0.625874 | 1,942 | 15,869 | 4.968074 | 0.110711 | 0.039179 | 0.045605 | 0.036484 | 0.786795 | 0.764822 | 0.757774 | 0.734142 | 0.721808 | 0.701907 | 0 | 0.019972 | 0.233285 | 15,869 | 399 | 109 | 39.77193 | 0.772993 | 0.065726 | 0 | 0.768116 | 0 | 0 | 0.127444 | 0.008855 | 0 | 0 | 0 | 0 | 0.391304 | 1 | 0.047101 | false | 0.003623 | 0.047101 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
135ac8d2b5df32d0e1cb4f7160d030492c88b456 | 19 | py | Python | dtech_instagram/InstagramAPI/__init__.py | hideki-saito/InstagramAPP_Flask | c3ee6f10d35edb74f0f82f4370faca8f0c25200c | [
"MIT"
] | 126 | 2016-05-18T19:20:32.000Z | 2022-02-12T10:30:50.000Z | dtech_instagram/InstagramAPI/__init__.py | hideki-saito/InstagramAPP_Flask | c3ee6f10d35edb74f0f82f4370faca8f0c25200c | [
"MIT"
] | 41 | 2016-08-07T17:32:37.000Z | 2022-01-13T00:25:31.000Z | dtech_instagram/InstagramAPI/__init__.py | hideki-saito/InstagramAPP_Flask | c3ee6f10d35edb74f0f82f4370faca8f0c25200c | [
"MIT"
] | 61 | 2016-07-07T14:18:38.000Z | 2021-03-28T12:48:26.000Z | from .src import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13b45e7bf73f08d06adbb63574de911e7dbfd60a | 174 | py | Python | classes/__init__.py | veken1199/CityLibraries | f1097c7b081acdd74f35c7aa04e2fed2ecb16e85 | [
"MIT"
] | null | null | null | classes/__init__.py | veken1199/CityLibraries | f1097c7b081acdd74f35c7aa04e2fed2ecb16e85 | [
"MIT"
] | 8 | 2019-02-13T03:42:19.000Z | 2022-02-17T19:18:49.000Z | classes/__init__.py | veken1199/CityLibraries | f1097c7b081acdd74f35c7aa04e2fed2ecb16e85 | [
"MIT"
] | null | null | null | from .library_result import *
from .super_map import *
from .api_response import *
from .cache_managers import ImageUrlCacheManager
from .thread_manager import ThreadsManager | 34.8 | 48 | 0.844828 | 22 | 174 | 6.454545 | 0.636364 | 0.211268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109195 | 174 | 5 | 49 | 34.8 | 0.916129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13c637cc92d13aa921c83c3ea7bfaa4fb308d5ac | 92 | py | Python | suite/views/__init__.py | KieranSweeden/fol.io | a6f231e3f9fb96841387b04d72131470c5fc3239 | [
"OLDAP-2.5",
"OLDAP-2.4",
"OLDAP-2.3"
] | null | null | null | suite/views/__init__.py | KieranSweeden/fol.io | a6f231e3f9fb96841387b04d72131470c5fc3239 | [
"OLDAP-2.5",
"OLDAP-2.4",
"OLDAP-2.3"
] | null | null | null | suite/views/__init__.py | KieranSweeden/fol.io | a6f231e3f9fb96841387b04d72131470c5fc3239 | [
"OLDAP-2.5",
"OLDAP-2.4",
"OLDAP-2.3"
] | null | null | null | from .general import *
from .projects import *
from .skills import *
from .profile import *
| 18.4 | 23 | 0.73913 | 12 | 92 | 5.666667 | 0.5 | 0.441176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 92 | 4 | 24 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13e48d49a384106dcf66a2596ed0f921b570fa74 | 129 | py | Python | lotube/videos/comments/models.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | lotube/videos/comments/models.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | lotube/videos/comments/models.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from .abstract_models import AbstractComment
class Comment(AbstractComment):
pass
| 16.125 | 44 | 0.829457 | 14 | 129 | 7.214286 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 129 | 7 | 45 | 18.428571 | 0.90991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b965216cf1a4ee443ac722b06c44a1e118d7742e | 184 | py | Python | src/sage/gsl/gsl_array.py | vbraun/sage | 07d6c37d18811e2b377a9689790a7c5e24da16ba | [
"BSL-1.0"
] | 3 | 2016-06-19T14:48:31.000Z | 2022-01-28T08:46:01.000Z | src/sage/gsl/gsl_array.py | rwst/sage | a9d274b9338e6ee24bf35ea8d25875507e51e455 | [
"BSL-1.0"
] | null | null | null | src/sage/gsl/gsl_array.py | rwst/sage | a9d274b9338e6ee24bf35ea8d25875507e51e455 | [
"BSL-1.0"
] | 7 | 2021-11-08T10:01:59.000Z | 2022-03-03T11:25:52.000Z | """
GSL arrays
"""
from sage.misc.superseded import deprecation
deprecation(9084, "the module sage.gsl.gsl_array has moved to sage.libs.gsl.array")
from sage.libs.gsl.array import *
| 20.444444 | 83 | 0.76087 | 29 | 184 | 4.793103 | 0.551724 | 0.172662 | 0.158273 | 0.230216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.119565 | 184 | 8 | 84 | 23 | 0.833333 | 0.054348 | 0 | 0 | 0 | 0 | 0.373494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b97328e65edcfc112343d7c77811cd7f3140fbd4 | 181 | py | Python | pypackage_test/__init__.py | MarinaRuizSO/pypackage_test | c731797ab72c0c1b9686e239f174e0864031b240 | [
"MIT"
] | null | null | null | pypackage_test/__init__.py | MarinaRuizSO/pypackage_test | c731797ab72c0c1b9686e239f174e0864031b240 | [
"MIT"
] | 1 | 2022-02-03T10:08:56.000Z | 2022-02-03T10:08:56.000Z | pypackage_test/__init__.py | MarinaRuizSO/pypackage_test | c731797ab72c0c1b9686e239f174e0864031b240 | [
"MIT"
] | null | null | null | """Top-level package for pypackage-test."""
__author__ = """Marina Ruiz Sánchez-Oro"""
__email__ = 'marina.ruiz.so@ed.ac.uk'
__version__ = '[0.1.0]'
| 30.166667 | 73 | 0.624309 | 42 | 181 | 2.642857 | 0.52381 | 0.162162 | 0.243243 | 0.288288 | 0.171171 | 0.171171 | 0.171171 | 0.171171 | 0.171171 | 0.171171 | 0 | 0.018182 | 0.088398 | 181 | 5 | 74 | 36.2 | 0.593939 | 0.20442 | 0 | 0 | 0 | 0.333333 | 0.608696 | 0.471014 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9bd9c6dcf451bcb691a6cc2356a653a50c976a3 | 4,012 | py | Python | mmu/viz/contours.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | null | null | null | mmu/viz/contours.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | 11 | 2021-12-08T10:34:17.000Z | 2022-01-20T13:40:05.000Z | mmu/viz/contours.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from mmu.viz.utils import _get_color_hexes
from mmu.viz.utils import _create_pr_legend
from mmu.viz.utils import _create_pr_legend_scatter
def _plot_pr_curve_contours(
precision,
recall,
scores,
prec_grid,
rec_grid,
levels,
labels,
cmap,
ax,
alpha,
legend_loc,
equal_aspect,
limit_axis
):
if cmap is None:
cmap = 'Blues'
if legend_loc is None:
# likely to be the best place for pr curve
legend_loc = 'lower center'
if ax is None:
fig, ax = plt.subplots(figsize=(12,8))
else:
fig = ax.get_figure()
# create meshgrid for plotting
RX, PY = np.meshgrid(rec_grid, prec_grid)
colors = _get_color_hexes(cmap, n_colors=len(labels), keep_alpha=True)
levels = [0.0] + levels.tolist()
# create contours
ax.contourf(RX, PY, scores, levels=levels, colors=colors, alpha=alpha) # type: ignore
# plot precision recall
ax.plot(recall, precision, c='black', alpha=0.6, zorder=10) # type: ignore
ax.set_xlabel('Recall', fontsize=14) # type: ignore
ax.set_ylabel('Precision', fontsize=14) # type: ignore
ax.tick_params(labelsize=12) # type: ignore
if limit_axis:
ylim_lb, ylim_ub = ax.get_ylim() # type: ignore
ylim_lb = max(-0.001, ylim_lb)
ylim_ub = min(1.001, ylim_ub)
ax.set_ylim(ylim_lb, ylim_ub) # type: ignore
xlim_lb, xlim_ub = ax.get_xlim() # type: ignore
xlim_lb = max(-0.001, xlim_lb)
xlim_ub = min(1.001, xlim_ub)
ax.set_xlim(xlim_lb, xlim_ub) # type: ignore
if equal_aspect:
ax.set_aspect('equal') # type: ignore
# create custom legend with the correct colours and labels
handles = _create_pr_legend(colors, labels)
ax.legend(handles=handles, loc=legend_loc, fontsize=12) # type: ignore
fig.tight_layout()
return ax, handles
def _plot_pr_contours(
n_bins,
precision,
recall,
scores,
bounds,
levels,
labels,
cmap,
ax,
alpha,
legend_loc,
equal_aspect,
limit_axis,
):
if cmap is None:
cmap = 'Blues'
if legend_loc is None:
# likely to be the best place for pr curve
legend_loc = 'lower left'
if ax is None:
fig, ax = plt.subplots(figsize=(12,8))
else:
fig = ax.get_figure()
# create meshgrid for plotting
prec_grid = np.linspace(bounds[0], bounds[1], num=n_bins)
rec_grid = np.linspace(bounds[2], bounds[3], num=n_bins)
RX, PY = np.meshgrid(rec_grid, prec_grid)
colors, c_marker = _get_color_hexes(
cmap, n_colors=len(labels), return_marker=True, keep_alpha=True
)
# add zero level to contours
levels = [0.0] + levels.tolist()
# create contours
ax.contourf(RX, PY, scores, levels=levels, colors=colors, alpha=alpha) # type: ignore
# plot precision recall
ax.scatter( # type: ignore
recall,
precision,
color=c_marker,
marker='x',
s=50,
lw=2,
zorder=len(labels) + 1
)
ax.set_xlabel('Recall', fontsize=14) # type: ignore
ax.set_ylabel('Precision', fontsize=14) # type: ignore
ax.tick_params(labelsize=12) # type: ignore
if limit_axis:
ylim_lb, ylim_ub = ax.get_ylim() # type: ignore
ylim_lb = max(-0.001, ylim_lb)
ylim_ub = min(1.001, ylim_ub)
ax.set_ylim(ylim_lb, ylim_ub) # type: ignore
xlim_lb, xlim_ub = ax.get_xlim() # type: ignore
xlim_lb = max(-0.001, xlim_lb)
xlim_ub = min(1.001, xlim_ub)
ax.set_xlim(xlim_lb, xlim_ub) # type: ignore
if equal_aspect:
ax.set_aspect('equal') # type: ignore
# create custom legend with the correct colours and labels
handles = _create_pr_legend_scatter(colors, c_marker, labels, (precision, recall))
ax.legend(handles=handles, loc=legend_loc, fontsize=12) # type: ignore
fig.tight_layout()
return ax, handles
| 29.5 | 90 | 0.63335 | 583 | 4,012 | 4.157804 | 0.200686 | 0.090759 | 0.024752 | 0.029703 | 0.791667 | 0.783003 | 0.783003 | 0.783003 | 0.726898 | 0.69802 | 0 | 0.023553 | 0.259222 | 4,012 | 135 | 91 | 29.718519 | 0.792059 | 0.16002 | 0 | 0.715596 | 0 | 0 | 0.023381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018349 | false | 0 | 0.045872 | 0 | 0.082569 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a12150f9e9ca089dece8b5c10afd093080c9a7b | 22 | py | Python | djamix/__init__.py | djamix/djamix | 36630a3072e084097a071afa4a5de00bf800d3d0 | [
"MIT"
] | 1 | 2018-10-07T11:37:38.000Z | 2018-10-07T11:37:38.000Z | djamix/__init__.py | djamix/djamix | 36630a3072e084097a071afa4a5de00bf800d3d0 | [
"MIT"
] | 4 | 2018-07-19T20:31:07.000Z | 2018-09-21T09:01:46.000Z | djamix/__init__.py | djamix/djamix | 36630a3072e084097a071afa4a5de00bf800d3d0 | [
"MIT"
] | null | null | null | from .djamix import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a39d188b4aa2ec334de9297a39ed5cf4e0b2109 | 7,952 | py | Python | models/spa_softmax_v2.py | facexteam/pytorch-cifar | 48abfba662dc41b6f35b70f54f543af658f56be8 | [
"MIT"
] | 1 | 2019-02-19T09:23:26.000Z | 2019-02-19T09:23:26.000Z | models/spa_softmax_v2.py | facexteam/pytorch-cifar | 48abfba662dc41b6f35b70f54f543af658f56be8 | [
"MIT"
] | null | null | null | models/spa_softmax_v2.py | facexteam/pytorch-cifar | 48abfba662dc41b6f35b70f54f543af658f56be8 | [
"MIT"
] | 1 | 2019-02-19T09:23:30.000Z | 2019-02-19T09:23:30.000Z | '''Large margin softmax in PyTorch.
@author: zhaoyafei
'''
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
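# Small tolerance constant; defined at module scope but not referenced
# anywhere else in this file.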
eps = 1e-4
class SpaSoftmax_v2(nn.Module):
def __init__(self, embedding_net, output_size=10, scale=4, m=0.8, b=0):
super(SpaSoftmax_v2, self).__init__()
assert (output_size > 1 and
scale >= 1 and
m <= 1.0 and m > 0 and
b >= 0)
self.input_size = sum(embedding_net.output_shape)
assert(self.input_size > 1)
self.scale = scale
self.m = m
self.b = b
self.output_size = output_size
self.embedding_net = embedding_net
self.linear = nn.Linear(self.input_size, self.output_size, bias=False)
def get_fc_weights(self):
wt = self.linear.weight.clone().detach()
return wt
def set_fc_weights(self, wt):
self.linear.weight.data.copy_(wt)
def set_fc_weights_to_ones(self):
self.linear.weight.data.fill_(1.0)
def forward(self, x, targets):
embedding = self.embedding_net(x)
# print('---> emb (before norm):\n', embedding)
# print('---> emb[j].norm (before norm):\n', embedding.norm(dim=1))
# print('---> weight (before norm):\n', self.linear.weight)
# print('---> weight[j].norm (before norm):\n',
# self.linear.weight.norm(dim=1))
# print('---> targets:\n', targets)
# do L2 normalization
embedding = F.normalize(embedding, dim=1)
# print('---> emb (after norm):\n', embedding)
# print('---> emb[j].norm (after norm):\n', embedding.norm(dim=1))
weight = F.normalize(self.linear.weight, dim=1)
# print('---> weight (after norm):\n', weight)
# print('---> weight[j].norm (after norm):\n',
# weight.norm(dim=1))
cos_theta = F.linear(embedding, weight).clamp(-1, 1)
# print('---> cos_theta:\n', cos_theta)
if self.m < 1.0:
one_hot = torch.zeros_like(cos_theta)
one_hot.scatter_(1, targets.view(-1, 1), 1)
# print('---> one_hot:\n', one_hot)
cos_theta_1 = cos_theta + 1
# print('---> cos_theta_1:\n', cos_theta_1)
biased_cos_theta = cos_theta_1 + torch.mul(cos_theta_1, one_hot) * \
(self.m - 1)
if self.b > 0:
biased_cos_theta = biased_cos_theta - one_hot * self.b
# print('---> biased_cos_theta:\n', biased_cos_theta)
output = biased_cos_theta * self.scale
else:
output = cos_theta * self.scale
# print('---> output (s*biased_cos_theta):\n', output)
# print(
# '---> output[j].norm (s*biased_cos_theta):\n',
# output.norm(dim=1))
return output, cos_theta
def __repr__(self):
return self.__class__.__name__ + '(' \
+ 'input_size=' + str(self.input_size) \
+ ', output_size=' + str(self.output_size) \
+ ', scale=' + str(self.scale) \
+ ', m=' + str(self.m) \
+ ', b=' + str(self.b) \
+ ')'
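# A brief worked example of the margin logits computed in
# SpaSoftmax_v2.forward above (illustrative comment only, not part of the
# original file). With s = scale, m the margin factor, b the bias, and
# c = cos(theta_j) for class j, the forward pass produces:
#
#   target class logit:     s * (m * (c + 1) - b)
#   non-target class logit: s * (c + 1)
#
# e.g. with s=4, m=0.8, b=0 and c=0.5 the target logit is 4*0.8*1.5 = 4.8,
# while a non-target class at the same cosine gets 4*1.5 = 6.0, so the
# target class must reach a higher cosine to win -- that gap is the margin.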
class SpaSoftmax_v2_ext(nn.Module):
def __init__(self, embedding_net, output_size=10, scale=4, m=0.8, n=1, b=0):
super(SpaSoftmax_v2_ext, self).__init__()
assert (output_size > 1 and
scale >= 1 and
m <= n and m > 0 and
b >= 0)
self.input_size = sum(embedding_net.output_shape)
assert(self.input_size > 1)
self.scale = scale
self.m = m
self.n = n
self.b = b
self.output_size = output_size
self.embedding_net = embedding_net
self.linear = nn.Linear(self.input_size, self.output_size, bias=False)
def get_fc_weights(self):
wt = self.linear.weight.clone().detach()
return wt
def set_fc_weights(self, wt):
self.linear.weight.data.copy_(wt)
def set_fc_weights_to_ones(self):
self.linear.weight.data.fill_(1.0)
def forward(self, x, targets):
embedding = self.embedding_net(x)
# print('---> emb (before norm):\n', embedding)
# print('---> emb[j].norm (before norm):\n', embedding.norm(dim=1))
# print('---> weight (before norm):\n', self.linear.weight)
# print('---> weight[j].norm (before norm):\n',
# self.linear.weight.norm(dim=1))
# print('---> targets:\n', targets)
# do L2 normalization
embedding = F.normalize(embedding, dim=1)
# print('---> emb (after norm):\n', embedding)
# print('---> emb[j].norm (after norm):\n', embedding.norm(dim=1))
weight = F.normalize(self.linear.weight, dim=1)
# print('---> weight (after norm):\n', weight)
# print('---> weight[j].norm (after norm):\n',
# weight.norm(dim=1))
cos_theta = F.linear(embedding, weight).clamp(-1, 1)
# print('---> cos_theta:\n', cos_theta)
one_hot = torch.zeros_like(cos_theta)
one_hot.scatter_(1, targets.view(-1, 1), 1)
# print('---> one_hot:\n', one_hot)
if (self.m - self.n) < 0:
cos_theta_1 = cos_theta + 1
# biased_cos_theta = cos_theta_1 * self.n + torch.mul(cos_theta_1, one_hot) * \
# (self.m - self.n) + (1 + self.b)
biased_cos_theta = cos_theta_1 * self.n + torch.mul(cos_theta_1, one_hot) * \
(self.m - self.n)
else:
biased_cos_theta = cos_theta
if self.b > 0:
biased_cos_theta = biased_cos_theta - one_hot * self.b
# print('---> biased_cos_theta:\n', biased_cos_theta)
output = biased_cos_theta * self.scale
# print('---> output (s*biased_cos_theta):\n', output)
# print(
# '---> output[j].norm (s*biased_cos_theta):\n',
# output.norm(dim=1))
return output, cos_theta
def __repr__(self):
return self.__class__.__name__ + '(' \
+ 'input_size=' + str(self.input_size) \
+ ', output_size=' + str(self.output_size) \
+ ', scale=' + str(self.scale) \
+ ', m=' + str(self.m) \
+ ', n=' + str(self.n) \
+ ', b=' + str(self.b) \
+ ')'
if __name__ == '__main__':
class IdentityModule(nn.Module):
def __init__(self, output_size=64):
super(IdentityModule, self).__init__()
self.output_shape = (output_size,)
def forward(self, x):
return x
def infer(net, data, targets):
print('===> input data: \n', data)
print('===> targets: \n', targets)
pred, cos_theta = net(data, targets)
print('===> output of net(data):\n')
print('---> pred (s*biased_cos_theta): \n', pred)
print('---> cos_theta: \n', cos_theta)
emb_size = 20
output_size = 10
scale = 8
#dummpy_data = torch.ones(3, emb_size)
dummpy_data = torch.zeros(3, emb_size)
for i, row in enumerate(dummpy_data):
# print('row[{}]: {}'.format(i, row))
for j in range(len(row)):
if j % 2 == 0:
row[j] = 1
# print('row[{}]: {}'.format(i, row))
dummpy_targets = torch.tensor([0, 1, 2])
print('\n#=============================')
print('\n===> Testing SpaSoftmax_v2 net with dummy data')
net = IdentityModule(emb_size)
net = SpaSoftmax_v2(net, output_size, scale)
net.set_fc_weights_to_ones()
print('net:\n', net)
infer(net, dummpy_data, dummpy_targets)
print('\n#=============================')
print('\n===> Testing SpaSoftmax_v2_ext net with dummy data')
net = IdentityModule(emb_size)
net = SpaSoftmax_v2_ext(net, output_size, scale)
net.set_fc_weights_to_ones()
print('net:\n', net)
infer(net, dummpy_data, dummpy_targets)
| 31.808 | 91 | 0.545775 | 1,045 | 7,952 | 3.915789 | 0.111005 | 0.089932 | 0.065005 | 0.02566 | 0.832845 | 0.800831 | 0.788856 | 0.773705 | 0.773705 | 0.756843 | 0 | 0.017882 | 0.289738 | 7,952 | 249 | 92 | 31.935743 | 0.706622 | 0.232897 | 0 | 0.628571 | 0 | 0 | 0.068099 | 0.01405 | 0 | 0 | 0 | 0 | 0.028571 | 1 | 0.107143 | false | 0 | 0.035714 | 0.021429 | 0.214286 | 0.092857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e01383ebca3939b6f081bef8dfacd04dff5db547 | 44 | py | Python | app/services/__init__.py | nkthanh98/flask-seed | 353993e11a765c2b9cec8adf63a555580c223981 | [
"MIT"
] | null | null | null | app/services/__init__.py | nkthanh98/flask-seed | 353993e11a765c2b9cec8adf63a555580c223981 | [
"MIT"
] | null | null | null | app/services/__init__.py | nkthanh98/flask-seed | 353993e11a765c2b9cec8adf63a555580c223981 | [
"MIT"
] | null | null | null | # coding=utf-8
from .pet import PetService
| 11 | 27 | 0.75 | 7 | 44 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.159091 | 44 | 3 | 28 | 14.666667 | 0.864865 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0233310dbcb20dc71886e50fbe1fd4ab519a24c | 226 | py | Python | projects/siamfc-pytorch/siamfc/__init__.py | happywu/mmaction2-CycleContrast | 019734e471dffd1161b7a9c617ba862d2349a96c | [
"Apache-2.0"
] | null | null | null | projects/siamfc-pytorch/siamfc/__init__.py | happywu/mmaction2-CycleContrast | 019734e471dffd1161b7a9c617ba862d2349a96c | [
"Apache-2.0"
] | null | null | null | projects/siamfc-pytorch/siamfc/__init__.py | happywu/mmaction2-CycleContrast | 019734e471dffd1161b7a9c617ba862d2349a96c | [
"Apache-2.0"
] | null | null | null | # from .default_config import default_cfg
# from .siamfc_tracker_v2 import TrackerSiamFC
from .default_config_base import default_cfg
from .siamfc_tracker_base import TrackerSiamFC
__all__ = ['TrackerSiamFC', 'default_cfg']
| 32.285714 | 46 | 0.831858 | 29 | 226 | 6 | 0.37931 | 0.172414 | 0.195402 | 0.229885 | 0.37931 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0.004951 | 0.106195 | 226 | 6 | 47 | 37.666667 | 0.856436 | 0.371681 | 0 | 0 | 0 | 0 | 0.172662 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0438e173da762b3dcbd2295cc3670168a49de34 | 5,173 | py | Python | tests/pipelines/test_log_pipeline.py | ozgurkara/pydiator_core | 4194e2a27ac4e340447f03975fe3d9765506c868 | [
"MIT"
] | 24 | 2020-12-12T18:27:40.000Z | 2022-03-28T08:00:42.000Z | tests/pipelines/test_log_pipeline.py | ozgurkara/pydiator_core | 4194e2a27ac4e340447f03975fe3d9765506c868 | [
"MIT"
] | null | null | null | tests/pipelines/test_log_pipeline.py | ozgurkara/pydiator_core | 4194e2a27ac4e340447f03975fe3d9765506c868 | [
"MIT"
] | 2 | 2021-10-05T11:55:22.000Z | 2021-11-15T05:53:16.000Z | from unittest import mock
from unittest.mock import MagicMock
from pydiator_core.pipelines.log_pipeline import LogPipeline
from pydiator_core.serializer import SerializerFactory
from tests.base_test_case import BaseTestCase, TestRequest, TestResponse
class TestLogPipeline(BaseTestCase):
def setUp(self):
SerializerFactory.set_serializer(None)
def tearDown(self):
pass
def test_handle_return_exception_when_next_is_none(self):
# Given
log_pipeline = LogPipeline()
# When
with self.assertRaises(Exception) as context:
self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert context.exception.args[0] == 'pydiator_log_pipeline_has_no_next_pipeline'
def test_handle_return_exception_when_next_handle_is_none(self):
# Given
mock_test_pipeline = MagicMock()
mock_test_pipeline.handle = None
log_pipeline = LogPipeline()
log_pipeline.set_next(mock_test_pipeline)
# When
with self.assertRaises(Exception) as context:
self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert context.exception.args[0] == 'handle_function_of_next_pipeline_is_not_valid_for_log_pipeline'
def test_handle_when_response_is_str(self):
# Given
next_response_text = "next_response"
async def next_handle(req):
return next_response_text
mock_test_pipeline = MagicMock()
mock_test_pipeline.handle = next_handle
log_pipeline = LogPipeline()
log_pipeline.set_next(mock_test_pipeline)
# When
response = self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert response is not None
assert response == next_response_text
@mock.patch("pydiator_core.pipelines.log_pipeline.LoggerFactory")
@mock.patch("pydiator_core.pipelines.log_pipeline.SerializerFactory")
def test_handle_when_response_is_instance_of_dict(self, mock_serializer_factory, mock_logger_factory):
# Given
next_response = TestResponse(success=True)
async def next_handle(req):
return next_response
mock_test_pipeline = MagicMock()
mock_test_pipeline.handle = next_handle
log_pipeline = LogPipeline()
log_pipeline.set_next(mock_test_pipeline)
# When
response = self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert response is not None
assert response == next_response
assert mock_serializer_factory.get_serializer.called
assert mock_serializer_factory.get_serializer.return_value.deserialize.called
assert mock_serializer_factory.get_serializer.return_value.deserialize.call_count == 2
@mock.patch("pydiator_core.pipelines.log_pipeline.LoggerFactory")
@mock.patch("pydiator_core.pipelines.log_pipeline.SerializerFactory")
def test_handle_when_response_type_is_list(self, mock_serializer_factory, mock_logger_factory):
# Given
next_response = [TestResponse(success=True)]
async def next_handle(req):
return next_response
mock_test_pipeline = MagicMock()
mock_test_pipeline.handle = next_handle
log_pipeline = LogPipeline()
log_pipeline.set_next(mock_test_pipeline)
# When
response = self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert response is not None
assert response == next_response
assert len(response) == 1
assert mock_serializer_factory.get_serializer.called
assert mock_serializer_factory.get_serializer.return_value.deserialize.called
assert mock_serializer_factory.get_serializer.return_value.deserialize.call_count == 2
assert mock_logger_factory.get_logger.called
assert mock_logger_factory.get_logger.return_value.log.called
assert mock_logger_factory.get_logger.return_value.log.call_count == 1
@mock.patch("pydiator_core.pipelines.log_pipeline.LoggerFactory")
def test_handle_log_when_response_type_is_list(self, mock_logger_factory):
# Given
next_response = [TestResponse(success=True)]
async def next_handle(req):
return next_response
mock_test_pipeline = MagicMock()
mock_test_pipeline.handle = next_handle
log_pipeline = LogPipeline()
log_pipeline.set_next(mock_test_pipeline)
# When
response = self.async_loop(log_pipeline.handle(TestRequest()))
# Then
assert response is not None
assert response == next_response
assert len(response) == 1
assert mock_logger_factory.get_logger.called
assert mock_logger_factory.get_logger.return_value.log.called
assert mock_logger_factory.get_logger.return_value.log.call_count == 1
mock_logger_factory.get_logger.return_value. \
log.assert_called_once_with(source="LogPipeline",
message="TestRequest",
data={'req': {}, 'res': [{'success': True}]})
| 35.923611 | 108 | 0.700947 | 600 | 5,173 | 5.678333 | 0.135 | 0.080716 | 0.070443 | 0.041092 | 0.828001 | 0.818609 | 0.810097 | 0.778397 | 0.727913 | 0.727913 | 0 | 0.002003 | 0.227914 | 5,173 | 143 | 109 | 36.174825 | 0.851027 | 0.018365 | 0 | 0.7 | 0 | 0 | 0.081044 | 0.071556 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.088889 | false | 0.011111 | 0.055556 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e054f1aa2d7448bd22f85c9ea259665e86547d53 | 226 | py | Python | publish.py | StackStorm/pip-conflict-checker | 937cc909e2263a9a6bbd993e537b43f8e9520aa1 | [
"MIT"
] | 59 | 2015-05-05T02:43:22.000Z | 2021-12-07T13:34:58.000Z | publish.py | StackStorm/pip-conflict-checker | 937cc909e2263a9a6bbd993e537b43f8e9520aa1 | [
"MIT"
] | 8 | 2017-02-10T20:02:31.000Z | 2021-02-01T16:23:54.000Z | publish.py | StackStorm/pip-conflict-checker | 937cc909e2263a9a6bbd993e537b43f8e9520aa1 | [
"MIT"
] | 18 | 2015-05-28T19:25:45.000Z | 2020-10-30T09:02:46.000Z | import subprocess
subprocess.call(['pip', 'install', 'wheel'])
subprocess.call(['python', 'setup.py', 'clean', '--all'])
subprocess.call(['python', 'setup.py', 'register', 'sdist', 'bdist_wheel', 'upload', '-r', 'ambition'])
| 37.666667 | 103 | 0.646018 | 26 | 226 | 5.576923 | 0.653846 | 0.289655 | 0.275862 | 0.344828 | 0.372414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079646 | 226 | 5 | 104 | 45.2 | 0.697115 | 0 | 0 | 0 | 0 | 0 | 0.411504 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1609b78a948c20948dfd2a20f81cbdcb62eef163 | 121 | py | Python | pyqt_graphics_video_item_video_player/__init__.py | yjg30737/pyqt-graphics-video-item-video-player | 3ce8f5bfb3e6c3be32d2090df34a3beb18a42237 | [
"MIT"
] | null | null | null | pyqt_graphics_video_item_video_player/__init__.py | yjg30737/pyqt-graphics-video-item-video-player | 3ce8f5bfb3e6c3be32d2090df34a3beb18a42237 | [
"MIT"
] | null | null | null | pyqt_graphics_video_item_video_player/__init__.py | yjg30737/pyqt-graphics-video-item-video-player | 3ce8f5bfb3e6c3be32d2090df34a3beb18a42237 | [
"MIT"
] | null | null | null | from .videoControlWidget import *
from .videoGraphicsView import *
from .videoPlayer import *
from .videoSlider import *
| 24.2 | 33 | 0.801653 | 12 | 121 | 8.083333 | 0.5 | 0.309278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132231 | 121 | 4 | 34 | 30.25 | 0.92381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
160b1d4b43e57b3517a90b8b27509de7f9b0721b | 131 | py | Python | chesscom/__init__.py | chrka/chesscom-api-and-graphql-bridge | 015a528e04f74fe82d5fde3bcc83568806416f8d | [
"MIT"
] | null | null | null | chesscom/__init__.py | chrka/chesscom-api-and-graphql-bridge | 015a528e04f74fe82d5fde3bcc83568806416f8d | [
"MIT"
] | null | null | null | chesscom/__init__.py | chrka/chesscom-api-and-graphql-bridge | 015a528e04f74fe82d5fde3bcc83568806416f8d | [
"MIT"
] | null | null | null | from .player import lookup_player, Title, Status, titled_players
from .country import lookup_country
from .club import lookup_club
| 32.75 | 64 | 0.839695 | 19 | 131 | 5.578947 | 0.526316 | 0.339623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114504 | 131 | 3 | 65 | 43.666667 | 0.913793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
162c2e8a45c6cf048b98ba46007dcd9573fb2fdf | 12,098 | py | Python | saleor/graphql/giftcard/tests/queries/test_gift_card_filtering.py | altankhuyagrio/saleor | cd91227c1580c17ab0ccf86eb5fa9545158f193c | [
"CC-BY-4.0"
] | 6 | 2021-09-20T19:00:30.000Z | 2022-03-19T06:55:35.000Z | saleor/graphql/giftcard/tests/queries/test_gift_card_filtering.py | altankhuyagrio/saleor | cd91227c1580c17ab0ccf86eb5fa9545158f193c | [
"CC-BY-4.0"
] | 201 | 2021-06-07T16:31:47.000Z | 2022-03-21T04:50:43.000Z | saleor/graphql/giftcard/tests/queries/test_gift_card_filtering.py | abenezerBelachew/saleor | 31b68b106991cf4834e3e0f6ae23a043159acf00 | [
"CC-BY-4.0"
] | 1 | 2021-06-13T02:48:37.000Z | 2021-06-13T02:48:37.000Z | import graphene
import pytest
from .....giftcard.models import GiftCard
from ....tests.utils import get_graphql_content, get_graphql_content_from_response
QUERY_GIFT_CARDS = """
query giftCards($filter: GiftCardFilterInput){
giftCards(first: 10, filter: $filter) {
edges {
node {
id
displayCode
product {
name
}
}
}
totalCount
}
}
"""
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[
("test-tag", [0]),
("another-tag", [1]),
("tag", [0, 1, 2]),
("not existing", []),
("", [0, 1, 2]),
],
)
def test_query_filter_gift_cards_by_tag(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
variables = {"filter": {"tag": filter_value}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[
(["test-tag", "tag"], [0, 2]),
(["another-tag"], [1]),
(["tag", "test-tag", "another-tag"], [0, 1, 2]),
(["not existing"], []),
([], [0, 1, 2]),
],
)
def test_query_filter_gift_cards_by_tags(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
variables = {"filter": {"tags": filter_value}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
def test_query_filter_gift_cards_by_products(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
shippable_gift_card_product,
non_shippable_gift_card_product,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_card.product = shippable_gift_card_product
gift_card_used.product = shippable_gift_card_product
gift_card_expiry_date.product = non_shippable_gift_card_product
GiftCard.objects.bulk_update(
[gift_card, gift_card_expiry_date, gift_card_used], ["product"]
)
variables = {
"filter": {
"products": [
graphene.Node.to_global_id("Product", shippable_gift_card_product.pk)
]
}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == 2
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", card.pk)
for card in [gift_card, gift_card_used]
}
def test_query_filter_gift_cards_by_used_by_user(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
variables = {
"filter": {
"usedBy": [graphene.Node.to_global_id("User", gift_card_used.used_by.pk)]
}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == 1
assert data[0]["node"]["id"] == graphene.Node.to_global_id(
"GiftCard", gift_card_used.pk
)
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[("PLN", [0]), ("USD", [1, 2]), ("EUR", []), ("", [0, 1, 2])],
)
def test_query_filter_gift_cards_by_currency(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_card.currency = "PLN"
gift_card_used.currency = "USD"
gift_card_expiry_date.currency = "USD"
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
GiftCard.objects.bulk_update(gift_cards, ["currency"])
variables = {"filter": {"currency": filter_value}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[
(True, [0]),
(False, [1, 2]),
],
)
def test_query_filter_gift_cards_by_is_active(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_card.is_active = True
gift_card_used.is_active = False
gift_card_expiry_date.is_active = False
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
GiftCard.objects.bulk_update(gift_cards, ["is_active"])
variables = {"filter": {"isActive": filter_value}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
def test_query_filter_gift_cards_by_current_balance_no_currency_given(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
variables = {
"filter": {
"currentBalance": {
"gte": "15",
}
}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content_from_response(response)
assert len(content["errors"]) == 1
assert (
content["errors"][0]["message"]
== "You must provide a `currency` filter parameter for filtering by price."
)
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[
({"gte": 50}, [2]),
({"gte": 0, "lte": 50}, [0, 1]),
({"lte": 50}, [0, 1]),
({"gte": 90}, []),
({"lte": 5}, []),
({}, [0, 1, 2]),
],
)
def test_query_filter_gift_cards_by_current_balance(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
variables = {
"filter": {"currentBalance": filter_value, "currency": gift_card.currency}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
def test_query_filter_gift_cards_by_initial_balance_no_currency_given(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
variables = {
"filter": {
"initialBalance": {
"gte": "15",
}
}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content_from_response(response)
assert len(content["errors"]) == 1
assert (
content["errors"][0]["message"]
== "You must provide a `currency` filter parameter for filtering by price."
)
@pytest.mark.parametrize(
"filter_value, expected_gift_card_indexes",
[
({"gte": 90}, [2]),
({"gte": 0, "lte": 50}, [0, 1]),
({"lte": 50}, [0, 1]),
({"gte": 1100}, []),
({"lte": 5}, []),
({}, [0, 1, 2]),
],
)
def test_query_filter_gift_cards_by_initial_balance(
filter_value,
expected_gift_card_indexes,
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
gift_cards = [
gift_card,
gift_card_expiry_date,
gift_card_used,
]
variables = {
"filter": {"initialBalance": filter_value, "currency": gift_card.currency}
}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == len(expected_gift_card_indexes)
assert {card["node"]["id"] for card in data} == {
graphene.Node.to_global_id("GiftCard", gift_cards[i].pk)
for i in expected_gift_card_indexes
}
def test_query_filter_gift_cards_by_code(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
variables = {"filter": {"code": gift_card.code}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == 1
assert data[0]["node"]["id"] == graphene.Node.to_global_id("GiftCard", gift_card.pk)
def test_query_filter_gift_cards_by_code_no_gift_card(
staff_api_client,
gift_card,
gift_card_expiry_date,
gift_card_used,
permission_manage_gift_card,
):
# given
query = QUERY_GIFT_CARDS
variables = {"filter": {"code": "code-does-not-exist"}}
# when
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_gift_card]
)
# then
content = get_graphql_content(response)
data = content["data"]["giftCards"]["edges"]
assert len(data) == 0
| 25.523207 | 88 | 0.632171 | 1,439 | 12,098 | 4.913829 | 0.075747 | 0.145948 | 0.042427 | 0.078065 | 0.892802 | 0.878659 | 0.863951 | 0.848819 | 0.843586 | 0.82209 | 0 | 0.008808 | 0.249215 | 12,098 | 473 | 89 | 25.577167 | 0.76968 | 0.015788 | 0 | 0.634667 | 0 | 0 | 0.123926 | 0.014912 | 0 | 0 | 0 | 0 | 0.061333 | 1 | 0.032 | false | 0 | 0.010667 | 0 | 0.042667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
163f5c5c16d59c42e4a5f17ad54ff8a84336504e | 3,407 | py | Python | O'REILLY/LightbulbStartWatching.py | kei-academic/CheckiO | 9f4c1fa44704f302ce95f5d9e20c4fa0beda06c3 | [
"MIT"
] | 1 | 2021-12-26T21:52:02.000Z | 2021-12-26T21:52:02.000Z | O'REILLY/LightbulbStartWatching.py | kei-academic/CheckiO | 9f4c1fa44704f302ce95f5d9e20c4fa0beda06c3 | [
"MIT"
] | null | null | null | O'REILLY/LightbulbStartWatching.py | kei-academic/CheckiO | 9f4c1fa44704f302ce95f5d9e20c4fa0beda06c3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from datetime import datetime
from typing import List, Optional
def sum_light(els: List[datetime], start_watching: Optional[datetime] = None) -> int:
"""
how long the light bulb has been turned on
"""
if start_watching is not None:
for i in range(len(els)):
if els[i] == start_watching:
els = els[i:]
break
elif els[i] > start_watching:
els.insert(i, start_watching)
els = els[i:]
break
ans = 0
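    # Toggle times alternate on/off, so each consecutive pair bounds one "on" interval.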
for i in range(0, len(els)-1, 2):
ans += int((els[i+1] - els[i]).total_seconds())
return ans
if __name__ == "__main__":
print("Example:")
print(
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 0, 10),
],
datetime(2015, 1, 12, 10, 0, 5),
)
)
assert (
sum_light(
els=[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 0, 10),
],
start_watching=datetime(2015, 1, 12, 10, 0, 5),
)
== 5
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 0, 10),
],
datetime(2015, 1, 12, 10, 0, 0),
)
== 10
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 10, 10),
datetime(2015, 1, 12, 11, 0, 0),
datetime(2015, 1, 12, 11, 10, 10),
],
datetime(2015, 1, 12, 11, 0, 0),
)
== 610
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 10, 10),
datetime(2015, 1, 12, 11, 0, 0),
datetime(2015, 1, 12, 11, 10, 10),
],
datetime(2015, 1, 12, 11, 0, 10),
)
== 600
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 10, 10),
datetime(2015, 1, 12, 11, 0, 0),
datetime(2015, 1, 12, 11, 10, 10),
],
datetime(2015, 1, 12, 10, 10, 0),
)
== 620
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 10, 10),
datetime(2015, 1, 12, 11, 0, 0),
datetime(2015, 1, 12, 11, 10, 10),
datetime(2015, 1, 12, 11, 10, 11),
datetime(2015, 1, 12, 12, 10, 11),
],
datetime(2015, 1, 12, 12, 10, 11),
)
== 0
)
assert (
sum_light(
[
datetime(2015, 1, 12, 10, 0, 0),
datetime(2015, 1, 12, 10, 10, 10),
datetime(2015, 1, 12, 11, 0, 0),
datetime(2015, 1, 12, 11, 10, 10),
datetime(2015, 1, 12, 11, 10, 11),
datetime(2015, 1, 12, 12, 10, 11),
],
datetime(2015, 1, 12, 12, 9, 11),
)
== 60
)
print("The second mission in series is done? Click 'Check' to earn cool rewards!")
| 26.207692 | 86 | 0.398004 | 409 | 3,407 | 3.256724 | 0.166259 | 0.342342 | 0.370871 | 0.427928 | 0.68994 | 0.672673 | 0.66967 | 0.614865 | 0.614865 | 0.611111 | 0 | 0.259563 | 0.462871 | 3,407 | 129 | 87 | 26.410853 | 0.468306 | 0.019078 | 0 | 0.535714 | 0 | 0 | 0.026759 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.008929 | false | 0 | 0.017857 | 0 | 0.035714 | 0.026786 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1661a24b37b55d75c724f14fa73d18a44f4a7625 | 24 | py | Python | Modules - Lab/FibonacciSequence/__init__.py | DiyanKalaydzhiev23/Advanced---Python | ed2c60bb887c49e5a87624719633e2b8432f6f6b | [
"MIT"
] | null | null | null | Modules - Lab/FibonacciSequence/__init__.py | DiyanKalaydzhiev23/Advanced---Python | ed2c60bb887c49e5a87624719633e2b8432f6f6b | [
"MIT"
] | null | null | null | Modules - Lab/FibonacciSequence/__init__.py | DiyanKalaydzhiev23/Advanced---Python | ed2c60bb887c49e5a87624719633e2b8432f6f6b | [
"MIT"
] | null | null | null | from .fibonacii import * | 24 | 24 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
168e4a9d13e836f9c36e1df539cdf97de9554103 | 169 | py | Python | plugins/logging_mod.py | surajssd/kube-hunter | 0157ac83ce6a735d4a431e9e3a6d4ec847aa6fc1 | [
"Apache-2.0"
] | 1 | 2019-09-25T12:31:33.000Z | 2019-09-25T12:31:33.000Z | plugins/logging_mod.py | surajssd/kube-hunter | 0157ac83ce6a735d4a431e9e3a6d4ec847aa6fc1 | [
"Apache-2.0"
] | null | null | null | plugins/logging_mod.py | surajssd/kube-hunter | 0157ac83ce6a735d4a431e9e3a6d4ec847aa6fc1 | [
"Apache-2.0"
] | 1 | 2020-08-17T16:05:45.000Z | 2020-08-17T16:05:45.000Z | import logging
# Suppress logging from scapy
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
| 28.166667 | 61 | 0.816568 | 20 | 169 | 6.9 | 0.5 | 0.231884 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059172 | 169 | 5 | 62 | 33.8 | 0.867925 | 0.153846 | 0 | 0 | 0 | 0 | 0.184397 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
16a7cc0afcf2dfd519fc09e32bca965bedff9670 | 204 | py | Python | tests/helpers/functions/test_logger.py | racelandshop/integration | 424057dcad30f20ed0276aec07d28b48b2b187be | [
"MIT"
] | null | null | null | tests/helpers/functions/test_logger.py | racelandshop/integration | 424057dcad30f20ed0276aec07d28b48b2b187be | [
"MIT"
] | null | null | null | tests/helpers/functions/test_logger.py | racelandshop/integration | 424057dcad30f20ed0276aec07d28b48b2b187be | [
"MIT"
] | null | null | null | import os
from custom_components.racelandshop.helpers.functions.logger import getLogger
def test_logger():
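    # Set GITHUB_ACTION to mimic running under GitHub Actions, call getLogger, then clean up.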
os.environ["GITHUB_ACTION"] = "value"
getLogger()
del os.environ["GITHUB_ACTION"]
| 20.4 | 77 | 0.75 | 25 | 204 | 5.96 | 0.68 | 0.120805 | 0.201342 | 0.281879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142157 | 204 | 9 | 78 | 22.666667 | 0.851429 | 0 | 0 | 0 | 0 | 0 | 0.151961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
16e605b055adfaeb4be288253fea0c4480da07e7 | 25 | py | Python | stochopy/factory/__init__.py | keurfonluu/stochopy | a566a9db0efbd74e385f8d46fa4ece2ca939e45b | [
"BSD-3-Clause"
] | 36 | 2020-04-27T04:59:51.000Z | 2022-03-29T02:34:44.000Z | stochopy/factory/__init__.py | XiangyuYang-Opt/stochopy | 5f6625c40ec80297dbcd3bd85b5073b9ed6b8cb2 | [
"BSD-3-Clause"
] | 9 | 2020-09-20T08:38:00.000Z | 2021-11-22T08:20:41.000Z | stochopy/factory/__init__.py | XiangyuYang-Opt/stochopy | 5f6625c40ec80297dbcd3bd85b5073b9ed6b8cb2 | [
"BSD-3-Clause"
] | 6 | 2018-04-17T03:54:46.000Z | 2020-03-03T14:15:31.000Z | from .benchmark import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16eef2c62c245f73bc1181339c8208dfd7a7ab8d | 5,148 | py | Python | crds/tests/test_table_effects.py | nden/crds | b72f14cf07531ca70b61daa6b58e762e5899afa4 | [
"BSD-3-Clause"
] | null | null | null | crds/tests/test_table_effects.py | nden/crds | b72f14cf07531ca70b61daa6b58e762e5899afa4 | [
"BSD-3-Clause"
] | null | null | null | crds/tests/test_table_effects.py | nden/crds | b72f14cf07531ca70b61daa6b58e762e5899afa4 | [
"BSD-3-Clause"
] | null | null | null | """This tests, through the use of bestrefs, the functioning of table effects."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import doctest
from crds import tests
from crds.tests import test_config
from crds.bestrefs import BestrefsScript
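# The dt_* functions below carry their checks as doctests; main() runs them through tstmod.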
def dt_table_effects_default_always_reprocess():
"""
Test: Default rule: always reprocess, based on STIS PCTAB.
Test: STIS APERTAB: No reprocess
>>> old_state = test_config.setup()
>>> doctest.ELLIPSIS_MARKER = '...'
>>> BestrefsScript(argv="bestrefs.py -z --verbosity 25 --old-context hst_0003.pmap --new-context hst_0268.pmap --datasets O8EX02EFQ")() # doctest: +ELLIPSIS
CRDS - DEBUG - Using explicit new context 'hst_0268.pmap' for computing updated best references.
CRDS - DEBUG - Using explicit old context 'hst_0003.pmap'
CRDS - INFO - Dumping dataset parameters from CRDS server at 'https://...' for ['O8EX02EFQ']
CRDS - INFO - Dumped 1 of 1 datasets from CRDS server at 'https://...'
CRDS - INFO - Computing bestrefs for datasets ['O8EX02EFQ']
CRDS - DEBUG - ===> Processing O8EX02010:O8EX02EFQ
CRDS - DEBUG - Deep Reference examination between .../n7p1032ao_apt.fits and .../y2r1559to_apt.fits initiated.
CRDS - DEBUG - Instantiating rules for reference type stis_apertab.
CRDS - DEBUG - Rule DeepLook_STISaperture: Reprocessing is not required.
CRDS - DEBUG - Rule DeepLook_STISaperture: Selection rules have executed and the selected rows are the same.
CRDS - DEBUG - Removing table update for STIS apertab O8EX02010:O8EX02EFQ no effective change from reference 'N7P1032AO_APT.FITS' --> 'Y2R1559TO_APT.FITS'
CRDS - DEBUG - Deep Reference examination between .../q5417413o_pct.fits and .../y2r16006o_pct.fits initiated.
CRDS - DEBUG - Instantiating rules for reference type stis_pctab.
CRDS - DEBUG - Rule DeepLook_Default: Reprocessing is required.
CRDS - DEBUG - Rule DeepLook_Default: Reference type cannot be examined, by definition.
CRDS - INFO - 0 errors
CRDS - INFO - 0 warnings
CRDS - INFO - 3 infos
0
>>> test_config.cleanup(old_state)
"""
def dt_table_effects_reprocess_test():
"""
Test: COS WCPTAB, reprocess yes
>>> old_state = test_config.setup()
>>> doctest.ELLIPSIS_MARKER = '...'
>>> BestrefsScript(argv="bestrefs.py -z --verbosity 25 --old-context hst_0018.pmap --new-context hst_0024.pmap --datasets LB6M01030")() # doctest: +ELLIPSIS
CRDS - DEBUG - Using explicit new context 'hst_0024.pmap' for computing updated best references.
CRDS - DEBUG - Using explicit old context 'hst_0018.pmap'
CRDS - INFO - Dumping dataset parameters from CRDS server at 'https://...' for ['LB6M01030']
CRDS - INFO - Dumped 1 of 1 datasets from CRDS server at 'https://...'
CRDS - INFO - Computing bestrefs for datasets ['LB6M01030']
CRDS - DEBUG - ===> Processing LB6M01030:LB6M01AVQ
CRDS - DEBUG - Deep Reference examination between .../x2i1559gl_wcp.fits and .../xaf1429el_wcp.fits initiated.
CRDS - DEBUG - Instantiating rules for reference type cos_wcptab.
CRDS - DEBUG - Rule DeepLook_COSOpt_elem: Reprocessing is required.
CRDS - DEBUG - Rule DeepLook_COSOpt_elem: Selection rules have executed and the selected rows are different.
CRDS - INFO - 0 errors
CRDS - INFO - 0 warnings
CRDS - INFO - 3 infos
0
>>> test_config.cleanup(old_state)
"""
def dt_table_effects_reprocess_no():
"""
Test: COS WCPTAB, reprocess no
>>> old_state = test_config.setup()
>>> doctest.ELLIPSIS_MARKER = '...'
>>> BestrefsScript(argv="bestrefs.py -z --verbosity 25 --old-context hst_0018.pmap --new-context hst_0024.pmap --datasets LBK617YRQ")() # doctest: +ELLIPSIS
CRDS - DEBUG - Using explicit new context 'hst_0024.pmap' for computing updated best references.
CRDS - DEBUG - Using explicit old context 'hst_0018.pmap'
CRDS - INFO - Dumping dataset parameters from CRDS server at 'https://...' for ['LBK617YRQ']
CRDS - INFO - Dumped 1 of 1 datasets from CRDS server at 'https://...'
CRDS - INFO - Computing bestrefs for datasets ['LBK617YRQ']
CRDS - DEBUG - ===> Processing LBK617010:LBK617YRQ
CRDS - DEBUG - Deep Reference examination between .../x2i1559gl_wcp.fits and .../xaf1429el_wcp.fits initiated.
CRDS - DEBUG - Instantiating rules for reference type cos_wcptab.
CRDS - DEBUG - Rule DeepLook_COSOpt_elem: Reprocessing is not required.
CRDS - DEBUG - Rule DeepLook_COSOpt_elem: Selection rules have executed and the selected rows are the same.
CRDS - DEBUG - Removing table update for COS wcptab LBK617010:LBK617YRQ no effective change from reference 'X2I1559GL_WCP.FITS' --> 'XAF1429EL_WCP.FITS'
CRDS - INFO - 0 errors
CRDS - INFO - 0 warnings
CRDS - INFO - 3 infos
0
>>> test_config.cleanup(old_state)
"""
def main():
"""Run module tests, for now just doctests only."""
from crds.tests import test_table_effects, tstmod
return tstmod(test_table_effects)
if __name__ == "__main__":
print(main())
| 49.028571 | 162 | 0.708236 | 663 | 5,148 | 5.355958 | 0.21267 | 0.068431 | 0.029288 | 0.047311 | 0.737539 | 0.67474 | 0.652211 | 0.640101 | 0.626302 | 0.612222 | 0 | 0.051313 | 0.193667 | 5,148 | 104 | 163 | 49.5 | 0.804144 | 0.834887 | 0 | 0 | 0 | 0 | 0.015209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | true | 0 | 0.533333 | 0 | 0.866667 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc9a8e4a6f64cc0ea803d11914681c7598fe9208 | 151 | py | Python | openprocurement/auctions/core/plugins/awarding/v1/tests/blanks/award_blanks.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 2 | 2016-09-15T20:17:43.000Z | 2017-01-08T03:32:43.000Z | openprocurement/auctions/core/plugins/awarding/v1/tests/blanks/award_blanks.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 183 | 2017-12-21T11:04:37.000Z | 2019-03-27T08:14:34.000Z | openprocurement/auctions/core/plugins/awarding/v1/tests/blanks/award_blanks.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 12 | 2016-09-05T12:07:48.000Z | 2019-02-26T09:24:17.000Z | from zope import deprecation
deprecation.moved('openprocurement.auctions.core.tests.plugins.awarding.v1.tests.blanks.award_blanks', 'version update')
| 37.75 | 120 | 0.834437 | 19 | 151 | 6.578947 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 0.05298 | 151 | 3 | 121 | 50.333333 | 0.867133 | 0 | 0 | 0 | 0 | 0 | 0.629139 | 0.536424 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bca82c548aa062a5e4c7486f7e41959ecb84f471 | 3,302 | py | Python | tests/routes/test_data.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 33 | 2020-06-16T11:47:03.000Z | 2022-03-24T02:41:00.000Z | tests/routes/test_data.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 470 | 2020-06-12T01:18:43.000Z | 2022-02-20T23:08:00.000Z | tests/routes/test_data.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 30 | 2020-06-12T19:36:09.000Z | 2022-01-31T15:25:35.000Z | """
test_data.py
Tests /data endpoints
"""
import pytest
from connect.exceptions import KafkaMessageNotFoundError
from connect.routes import data
from unittest.mock import Mock
@pytest.fixture
def endpoint_parameters():
return {"dataformat": "EXAMPLE", "partition": 100, "offset": 4561}
@pytest.mark.asyncio
async def test_get_data_ok(
mock_async_kafka_consumer, async_test_client, endpoint_parameters, monkeypatch
):
"""
Tests /data where a 200 status code is returned
:param mock_async_kafka_consumer: The mock kafka consumer
:param async_test_client: The httpx async test client used to submit requests
:param endpoint_parameters: The endpoint parameters fixture
:param monkeypatch: pyTest monkeypatch fixture
"""
with monkeypatch.context() as m:
m.setattr(
data, "get_kafka_consumer", Mock(return_value=mock_async_kafka_consumer)
)
async with async_test_client as atc:
actual_response = await atc.get("/data", params=endpoint_parameters)
assert actual_response.status_code == 200
actual_json = actual_response.json()
assert actual_json["data_record_location"] == "EXAMPLE:100:4561"
@pytest.mark.asyncio
async def test_get_data_bad_request(
mock_async_kafka_consumer, async_test_client, endpoint_parameters, monkeypatch
):
"""
Tests /data where a 400 status code is returned
:param mock_async_kafka_consumer: The mock kafka consumer
:param async_test_client: The httpx async test client used to submit requests
:param endpoint_parameters: The endpoint parameters fixture
:param monkeypatch: pyTest monkeypatch fixture
"""
mock_async_kafka_consumer.get_message_from_kafka_cb.side_effect = ValueError(
"Test bad request"
)
with monkeypatch.context() as m:
m.setattr(
data, "get_kafka_consumer", Mock(return_value=mock_async_kafka_consumer)
)
async with async_test_client as atc:
actual_response = await atc.get("/data", params=endpoint_parameters)
assert actual_response.status_code == 400
actual_json = actual_response.json()
assert actual_json["detail"] == "Test bad request"
@pytest.mark.asyncio
async def test_get_data_not_found(
mock_async_kafka_consumer, async_test_client, endpoint_parameters, monkeypatch
):
"""
Tests /data where a 404 status code is returned
:param mock_async_kafka_consumer: The mock kafka consumer
:param async_test_client: The httpx async test client used to submit requests
:param endpoint_parameters: The endpoint parameters fixture
:param monkeypatch: pyTest monkeypatch fixture
"""
mock_async_kafka_consumer.get_message_from_kafka_cb.side_effect = (
KafkaMessageNotFoundError("Data record not found")
)
with monkeypatch.context() as m:
m.setattr(
data, "get_kafka_consumer", Mock(return_value=mock_async_kafka_consumer)
)
async with async_test_client as atc:
actual_response = await atc.get("/data", params=endpoint_parameters)
assert actual_response.status_code == 404
actual_json = actual_response.json()
assert actual_json["detail"] == "Data record not found"
| 37.954023 | 84 | 0.719261 | 413 | 3,302 | 5.479419 | 0.181598 | 0.097658 | 0.07954 | 0.106938 | 0.829872 | 0.829872 | 0.829872 | 0.829872 | 0.794521 | 0.71498 | 0 | 0.012246 | 0.208661 | 3,302 | 86 | 85 | 38.395349 | 0.853808 | 0.010297 | 0 | 0.519231 | 0 | 0 | 0.09721 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0.019231 | false | 0 | 0.076923 | 0.019231 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bcc2bab69838d5c5c526d09af6f997a67e35c1b8 | 144 | py | Python | gate/gate_xor.py | shimomura314/Arithmetic-Logic-Unit | a547d2fe4ac3cbc2e38a64d654e26141f4c9c81a | [
"MIT"
] | null | null | null | gate/gate_xor.py | shimomura314/Arithmetic-Logic-Unit | a547d2fe4ac3cbc2e38a64d654e26141f4c9c81a | [
"MIT"
] | null | null | null | gate/gate_xor.py | shimomura314/Arithmetic-Logic-Unit | a547d2fe4ac3cbc2e38a64d654e26141f4c9c81a | [
"MIT"
] | null | null | null | from .gate_and import AND
from .gate_nand import NAND
from .gate_or import OR
def XOR(*x):
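    # Multi-input XOR: output 1 exactly when an odd number of inputs are 1 (parity).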
    if sum(x) % 2 == 1:
return 1
return 0 | 16 | 27 | 0.638889 | 27 | 144 | 3.296296 | 0.555556 | 0.269663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038095 | 0.270833 | 144 | 9 | 28 | 16 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.428571 | 0 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c278b39f4dee0d22d59456bf01a42802e5217ae | 22 | py | Python | server/templates/Hello.py | AnoTherK-ATK/online-judge-mean | e0dc457756987496c397627e204453ee8909588c | [
"MIT"
] | 23 | 2021-01-18T09:50:05.000Z | 2022-02-23T04:52:26.000Z | server/templates/Hello.py | AnoTherK-ATK/online-judge-mean | e0dc457756987496c397627e204453ee8909588c | [
"MIT"
] | 3 | 2021-01-16T11:15:06.000Z | 2021-09-01T05:47:41.000Z | server/templates/Hello.py | AnoTherK-ATK/online-judge-mean | e0dc457756987496c397627e204453ee8909588c | [
"MIT"
] | 10 | 2021-05-09T12:55:47.000Z | 2022-03-27T13:35:41.000Z | print "Hello, Python!" | 22 | 22 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 1 | 22 | 22 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.608696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4c50c910baa48825b9b7bfd78751b021607b4097 | 10,875 | py | Python | tests/molecular/molecules/constructed_molecule/fixtures/cof.py | andrewtarzia/stk | 1ac2ecbb5c9940fe49ce04cbf5603fd7538c475a | [
"MIT"
] | 21 | 2018-04-12T16:25:24.000Z | 2022-02-14T23:05:43.000Z | tests/molecular/molecules/constructed_molecule/fixtures/cof.py | JelfsMaterialsGroup/stk | 0d3e1b0207aa6fa4d4d5ee8dfe3a29561abb08a2 | [
"MIT"
] | 8 | 2019-03-19T12:36:36.000Z | 2020-11-11T12:46:00.000Z | tests/molecular/molecules/constructed_molecule/fixtures/cof.py | supramolecular-toolkit/stk | 0d3e1b0207aa6fa4d4d5ee8dfe3a29561abb08a2 | [
"MIT"
] | 5 | 2018-08-07T13:00:16.000Z | 2021-11-01T00:55:10.000Z | import pytest
import stk
from ..case_data import CaseData
class CofData:
"""
Data to initialize a COF :class:`.CaseData` instance.
Attributes
----------
topology_graph : :class:`type`
A COF class.
building_blocks : :class:`tuple` of :class:`.BuildingBlock`
The building blocks of the COF.
lattice_size : :class:`tuple` of :class:`int`
The size of the lattice.
vertex_alignments : :class:`dict`
Passed to the `vertex_alignments` parameter of the COF
initializer.
num_new_atoms : :class:`int`
The number of new atoms added by the construction process.
num_new_bonds : :class:`int`
The number of new bonds added by the construction process.
num_building_blocks : :class:`dict`
For each building block in :attr:`building_blocks`, maps its
index to the number of times its used in the construction of
the COF.
"""
def __init__(
self,
topology_graph,
building_blocks,
lattice_size,
vertex_alignments,
num_new_atoms,
num_new_bonds,
num_building_blocks,
):
"""
Initialize a :class:`.CofData` instance.
Parameters
----------
topology_graph : :class:`type`
A COF class.
building_blocks : :class:`tuple` of :class:`.BuildingBlock`
The building blocks of the COF.
lattice_size : :class:`tuple` of :class:`int`
The size of the lattice.
vertex_alignments : :class:`dict`
Passed to the `vertex_alignments` parameter of the COF
initializer.
num_new_atoms : :class:`int`
The number of new atoms added by the construction process.
num_new_bonds : :class:`int`
The number of new bonds added by the construction process.
num_building_blocks : :class:`dict`
For each building block in `building_blocks`, maps its
index to the number of times its used in the construction
of the COF.
"""
self.constructed_molecule = stk.ConstructedMolecule(
topology_graph=topology_graph(
building_blocks=building_blocks,
lattice_size=lattice_size,
vertex_alignments=vertex_alignments,
)
)
self.num_new_atoms = num_new_atoms
self.num_new_bonds = num_new_bonds
self.num_building_blocks = {
building_blocks[index]: num
for index, num in num_building_blocks.items()
}
self.building_blocks = building_blocks
@classmethod
def init_from_construction_result(
cls,
topology_graph,
building_blocks,
lattice_size,
vertex_alignments,
num_new_atoms,
num_new_bonds,
num_building_blocks,
):
"""
Initialize a :class:`.CofData` instance.
This method creates the constructed molecule using
:meth:`.ConstructedMolecule.init_from_construction_result`.
Parameters
----------
topology_graph : :class:`type`
A COF class.
building_blocks : :class:`tuple` of :class:`.BuildingBlock`
The building blocks of the COF.
lattice_size : :class:`tuple` of :class:`int`
The size of the lattice.
vertex_alignments : :class:`dict`
Passed to the `vertex_alignments` parameter of the COF
initializer.
num_new_atoms : :class:`int`
The number of new atoms added by the construction process.
num_new_bonds : :class:`int`
The number of new bonds added by the construction process.
num_building_blocks : :class:`dict`
For each building block in `building_blocks`, maps its
index to the number of times its used in the construction
of the COF.
"""
obj = cls.__new__(cls)
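        # __new__ skips __init__; the instance is populated below from an existing construction result.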
topology_graph_instance = topology_graph(
building_blocks=building_blocks,
lattice_size=lattice_size,
vertex_alignments=vertex_alignments,
)
construction_result = topology_graph_instance.construct()
obj.constructed_molecule = (
stk.ConstructedMolecule.init_from_construction_result(
construction_result=construction_result,
)
)
obj.num_new_atoms = num_new_atoms
obj.num_new_bonds = num_new_bonds
obj.num_building_blocks = {
building_blocks[index]: num
for index, num in num_building_blocks.items()
}
obj.building_blocks = building_blocks
return obj
@pytest.fixture(
scope='session',
params=(
lambda: CofData(
topology_graph=stk.cof.Honeycomb,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+2][C+](Br)[C+](F)[C+](Br)[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments=None,
num_new_atoms=0,
num_new_bonds=20,
num_building_blocks={0: 8, 1: 12},
),
lambda: CofData.init_from_construction_result(
topology_graph=stk.cof.Honeycomb,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+2][C+](Br)[C+](F)[C+](Br)[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments=None,
num_new_atoms=0,
num_new_bonds=20,
num_building_blocks={0: 8, 1: 12},
),
lambda: CofData(
topology_graph=stk.cof.Honeycomb,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+2][C+](Br)[C+](F)[C+](Br)[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments={0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
num_new_atoms=0,
num_new_bonds=20,
num_building_blocks={0: 8, 1: 12},
),
lambda: CofData(
topology_graph=stk.cof.Honeycomb,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+2][C+](Br)[C+](F)[C+](Br)[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments={0: 2, 1: 2},
num_new_atoms=0,
num_new_bonds=20,
num_building_blocks={0: 8, 1: 12},
),
lambda: CofData(
topology_graph=stk.cof.LinkerlessHoneycomb,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+2][C+](Br)[C+](F)[C+](Br)[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments=None,
num_new_atoms=0,
num_new_bonds=8,
num_building_blocks={0: 8},
),
lambda: CofData(
topology_graph=stk.cof.Hexagonal,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+](F)[C+](I)[C+](I)[C+](Br)C1Br'
),
functional_groups=[
stk.BromoFactory(),
stk.IodoFactory(),
stk.FluoroFactory(),
],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments={0: 5},
num_new_atoms=0,
num_new_bonds=81,
num_building_blocks={0: 16, 1: 48},
),
lambda: CofData(
topology_graph=stk.cof.Kagome,
building_blocks=(
stk.BuildingBlock(
smiles=(
'Br[C+]1[C+](Br)[C+](F)[C+](Br)[C+](Br)'
'[C+2]1'
),
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments=None,
num_new_atoms=0,
num_new_bonds=41,
num_building_blocks={0: 12, 1: 24},
),
lambda: CofData(
topology_graph=stk.cof.Square,
building_blocks=(
stk.BuildingBlock(
smiles='BrC1=C(Br)C(F)(Br)[C+]1Br',
functional_groups=[stk.BromoFactory()],
),
stk.BuildingBlock(
smiles='Br[C+]=NC#CBr',
functional_groups=[stk.BromoFactory()],
),
),
lattice_size=(2, 2, 1),
vertex_alignments=None,
num_new_atoms=0,
num_new_bonds=12,
num_building_blocks={0: 4, 1: 8},
),
),
)
def cof_data(request) -> CofData:
"""
A :class:`.CofData` instance.
"""
return request.param()
@pytest.fixture
def cof(cof_data):
"""
A :class:`.CaseData` instance.
"""
return CaseData(
constructed_molecule=cof_data.constructed_molecule,
num_new_atoms=cof_data.num_new_atoms,
num_new_bonds=cof_data.num_new_bonds,
num_building_blocks=cof_data.num_building_blocks,
building_blocks=cof_data.building_blocks,
)
| 31.160458 | 70 | 0.50354 | 1,112 | 10,875 | 4.70054 | 0.101619 | 0.13392 | 0.039985 | 0.088961 | 0.814425 | 0.774249 | 0.729482 | 0.723742 | 0.723742 | 0.708054 | 0 | 0.019347 | 0.391632 | 10,875 | 348 | 71 | 31.25 | 0.770707 | 0.226115 | 0 | 0.688034 | 0 | 0.029915 | 0.052001 | 0.038874 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017094 | false | 0 | 0.012821 | 0 | 0.047009 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
910dee13ada4f50700344b34cb4ca5f9537d39dc | 7,774 | py | Python | testing/test_checkpoints.py | vincent-antaki/Neuraxle | cef1284a261010c655f8ef02b4fca5b8bb45850c | [
"Apache-2.0"
] | 4 | 2019-06-24T01:06:57.000Z | 2020-08-18T08:16:10.000Z | testing/test_checkpoints.py | Tubbz-alt/Neuraxle | 308f24248cdb242b7e2f6ec7c51daf2ee3e38834 | [
"Apache-2.0"
] | 1 | 2020-02-07T15:08:42.000Z | 2020-02-07T15:08:42.000Z | testing/test_checkpoints.py | Tubbz-alt/Neuraxle | 308f24248cdb242b7e2f6ec7c51daf2ee3e38834 | [
"Apache-2.0"
] | null | null | null | import os
from pickle import dump
from py._path.local import LocalPath
from neuraxle.checkpoints import DefaultCheckpoint
from neuraxle.pipeline import ResumablePipeline
from neuraxle.steps.misc import FitTransformCallbackStep, TapeCallbackFunction
SUMMARY_ID = '6e4419c1957e7772f3957d63bb41efcd'
def test_resumable_pipeline_with_checkpoint_fit_transform_should_save_data_inputs(tmpdir: LocalPath):
test_case = create_checkpoint_test_case(tmpdir)
pipeline, outputs = test_case.pipeline.fit_transform([0, 1], [1, 2])
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '0.pickle'))
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '1.pickle'))
def test_resumable_pipeline_with_checkpoint_transform_should_save_data_inputs(tmpdir):
test_case = create_checkpoint_test_case(tmpdir)
pipeline, outputs = test_case.pipeline.transform([0, 1])
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '0.pickle'))
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '1.pickle'))
def test_resumable_pipeline_with_checkpoint_fit_should_save_data_inputs(tmpdir):
test_case = create_checkpoint_test_case(tmpdir)
pipeline = test_case.pipeline.fit([0, 1], [1, 2])
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '0.pickle'))
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '1.pickle'))
def test_resumable_pipeline_with_checkpoint_fit_transform_should_save_expected_outputs(tmpdir):
test_case = create_checkpoint_test_case(tmpdir)
pipeline, outputs = test_case.pipeline.fit_transform([0, 1], [1, 2])
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '0.pickle'))
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '1.pickle'))
def test_resumable_pipeline_with_checkpoint_fit_should_save_expected_outputs(tmpdir):
test_case = create_checkpoint_test_case(tmpdir)
pipeline = test_case.pipeline.fit([0, 1], [1, 2])
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '0.pickle'))
assert os.path.exists(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '1.pickle'))
def test_resumable_pipeline_with_checkpoint_fit_transform_should_resume_saved_checkpoints(tmpdir):
given_fully_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
pipeline, outputs = test_case.pipeline.fit_transform([0, 1], [1, 2])
assert test_case.tape_transform_step1.data == [[0, 1]]
assert test_case.tape_fit_step1.data == [([0, 1], [1, 2])]
assert test_case.tape_fit_step2.data == [([0, 1], [1, 2])]
assert test_case.tape_transform_step2.data == [[0, 1]]
def test_resumable_pipeline_with_checkpoint_transform_should_resume_saved_checkpoints(tmpdir):
given_fully_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
outputs = test_case.pipeline.transform([0, 1])
assert test_case.tape_transform_step1.data == [[0, 1]]
assert test_case.tape_fit_step2.data == []
assert test_case.tape_transform_step2.data == [[0, 1]]
def test_resumable_pipeline_with_checkpoint_fit_should_resume_saved_checkpoints(tmpdir):
given_fully_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
pipeline = test_case.pipeline.fit([0, 1], [1, 2])
assert test_case.tape_fit_step1.data == [([0, 1], [1, 2])]
assert test_case.tape_transform_step1.data == [[0, 1]]
assert test_case.tape_transform_step2.data == []
assert test_case.tape_fit_step2.data == [([0, 1], [1, 2])]
def given_fully_saved_checkpoints(tmpdir):
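    # Checkpoint layout: 'di' holds pickled data inputs, 'eo' pickled expected outputs,
    # and the summary text file lists the checkpointed ids.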
os.makedirs(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di'))
os.makedirs(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo'))
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint',
'{0}.txt'.format(SUMMARY_ID)), 'w+') as file:
file.writelines([
'0\n',
'1'
])
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '0.pickle'), 'wb') as file:
dump(0, file)
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '1.pickle'), 'wb') as file:
dump(1, file)
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '0.pickle'), 'wb') as file:
dump(1, file)
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '1.pickle'), 'wb') as file:
dump(2, file)
def test_resumable_pipeline_with_checkpoint_fit_transform_should_not_resume_partially_saved_checkpoints(tmpdir):
given_partially_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
pipeline, outputs = test_case.pipeline.fit_transform([0, 1], [1, 2])
assert test_case.tape_fit_step1.data == [([0, 1], [1, 2])]
assert test_case.tape_fit_step2.data == [([0, 1], [1, 2])]
def test_resumable_pipeline_with_checkpoint_fit_should_not_resume_partially_saved_checkpoints(tmpdir):
given_partially_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
pipeline = test_case.pipeline.fit([0, 1], [1, 2])
assert test_case.tape_fit_step1.data == [([0, 1], [1, 2])]
assert test_case.tape_transform_step1.data == [[0, 1]]
assert test_case.tape_fit_step2.data == [([0, 1], [1, 2])]
assert test_case.tape_transform_step2.data == []
def test_resumable_pipeline_with_checkpoint_transform_should_not_resume_partially_saved_checkpoints(tmpdir):
given_partially_saved_checkpoints(tmpdir)
test_case = create_checkpoint_test_case(tmpdir)
outputs = test_case.pipeline.transform([0, 1])
assert test_case.tape_fit_step1.data == []
assert test_case.tape_transform_step1.data == [[0, 1]]
assert test_case.tape_fit_step2.data == []
assert test_case.tape_transform_step2.data == [[0, 1]]
def given_partially_saved_checkpoints(tmpdir):
os.makedirs(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di'))
os.makedirs(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo'))
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint',
'{0}.txt'.format(SUMMARY_ID)), 'w+') as file:
file.writelines([
'0\n',
'1'
])
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'di', '0.pickle'), 'wb') as file:
dump(0, file)
with open(os.path.join(tmpdir, 'ResumablePipeline', 'checkpoint', 'eo', '1.pickle'), 'wb') as file:
dump(0, file)
class CheckpointTest:
def __init__(self, tape_transform_step1, tape_fit_step1, tape_transform_step2, tape_fit_step2, pipeline):
self.pipeline = pipeline
self.tape_transform_step1 = tape_transform_step1
self.tape_fit_step1 = tape_fit_step1
self.tape_transform_step2 = tape_transform_step2
self.tape_fit_step2 = tape_fit_step2
def create_checkpoint_test_case(tmpdir):
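    # Pipeline under test: callback step -> default checkpoint -> callback step, cached in tmpdir.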
tape_transform_1 = TapeCallbackFunction()
tape_fit_1 = TapeCallbackFunction()
tape_transform_2 = TapeCallbackFunction()
tape_fit_2 = TapeCallbackFunction()
pipeline = ResumablePipeline([
('step1', FitTransformCallbackStep(tape_transform_1, tape_fit_1)),
('checkpoint', DefaultCheckpoint()),
('step2', FitTransformCallbackStep(tape_transform_2, tape_fit_2))
], cache_folder=tmpdir)
return CheckpointTest(
tape_transform_1, tape_fit_1, tape_transform_2, tape_fit_2, pipeline
)
def test_resumable_pipeline_with_checkpoint_should_save_steps():
pass
| 40.489583 | 112 | 0.721893 | 1,020 | 7,774 | 5.186275 | 0.070588 | 0.083176 | 0.041588 | 0.066541 | 0.853308 | 0.83327 | 0.806427 | 0.802079 | 0.783365 | 0.77259 | 0 | 0.027439 | 0.146771 | 7,774 | 191 | 113 | 40.701571 | 0.770089 | 0 | 0 | 0.635659 | 0 | 0 | 0.109596 | 0.004116 | 0 | 0 | 0 | 0 | 0.24031 | 1 | 0.124031 | false | 0.007752 | 0.046512 | 0 | 0.186047 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9114ca9b2103bde294ec6638f4917e3effd45d01 | 45 | py | Python | atlas/emitter/__init__.py | medav/atlas.py | 2f3e382d53bf63d37dcb2ef743b66c294d0e28b2 | [
"MIT"
] | 1 | 2019-02-27T10:50:37.000Z | 2019-02-27T10:50:37.000Z | atlas/emitter/__init__.py | medav/pyatlas | 6e18c54c844303094af15dbd96a9b71c3e245395 | [
"MIT"
] | 4 | 2021-05-04T04:58:13.000Z | 2021-05-04T04:59:08.000Z | atlas/emitter/__init__.py | medav/pyatlas | 6e18c54c844303094af15dbd96a9b71c3e245395 | [
"MIT"
] | 1 | 2021-05-02T01:51:29.000Z | 2021-05-02T01:51:29.000Z | from .emitter import *
from .verilog import * | 22.5 | 22 | 0.755556 | 6 | 45 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 23 | 22.5 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
911d55e5d2d7b5e1adcffba2d86c40524175a0e9 | 46 | py | Python | src/devops_menu/controller/__init__.py | rubelw/devops_menu | 38a8eadb26c5ac90a201fd3af0d09d5165ed8249 | [
"Apache-2.0"
] | 2 | 2020-10-08T21:42:56.000Z | 2021-03-21T08:17:52.000Z | src/devops_menu/controller/__init__.py | rubelw/devops_menu | 38a8eadb26c5ac90a201fd3af0d09d5165ed8249 | [
"Apache-2.0"
] | null | null | null | src/devops_menu/controller/__init__.py | rubelw/devops_menu | 38a8eadb26c5ac90a201fd3af0d09d5165ed8249 | [
"Apache-2.0"
] | null | null | null | from .command_executor import CommandExecutor
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
912bcbe1c6bfacd151314fa0952894705897b040 | 9,375 | py | Python | mooring/context_processors.py | jawaidm/moorings | 22db3fa5917fb13cbee144e64529221ef862cb39 | [
"Apache-2.0"
] | null | null | null | mooring/context_processors.py | jawaidm/moorings | 22db3fa5917fb13cbee144e64529221ef862cb39 | [
"Apache-2.0"
] | 2 | 2020-04-30T12:02:15.000Z | 2021-03-19T22:41:46.000Z | mooring/context_processors.py | jawaidm/moorings | 22db3fa5917fb13cbee144e64529221ef862cb39 | [
"Apache-2.0"
] | 6 | 2020-01-13T08:45:09.000Z | 2021-02-24T03:31:02.000Z | from django.conf import settings
from django.core.cache import cache
from mooring import models
from mooring import helpers
import json
def mooring_url(request):
web_url = request.META.get('HTTP_HOST', None)
    tg = 'ria' if web_url in settings.ROTTNEST_ISLAND_URL else 'pvs'
authed = request.user.is_authenticated
mooring_url = mooring_url_group(tg)
is_officer = False
is_inventory = False
is_admin = False
is_payment_officer = False
is_customer = False
failed_refund_count = 0
if authed:
if request.user.is_staff or request.user.is_superuser:
failed_refund_count = models.RefundFailed.objects.filter(status=0).count()
is_officer = helpers.is_officer(request.user)
is_inventory = helpers.is_inventory(request.user)
is_admin = helpers.is_admin(request.user)
is_payment_officer = helpers.is_payment_officer(request.user)
is_customer = helpers.is_customer(request.user)
mooring_url['REFUND_FAILED_COUNT'] = failed_refund_count
mooring_url['IS_OFFICER'] = is_officer
mooring_url['IS_INVENTORY'] = is_inventory
mooring_url['IS_ADMIN'] = is_admin
mooring_url['IS_PAYMENT_OFFICER'] = is_payment_officer
mooring_url['IS_CUSTOMER'] = is_customer
return mooring_url
#def mooring_url(request):
# #web_url = request.META['HTTP_HOST']
# web_url = request.META.get('HTTP_HOST', None)
# TERMS = ''
# DAILY_TERMS_URL = ''
# DAILY_FEES_URL = ''
# if web_url in settings.ROTTNEST_ISLAND_URL:
# mooring_group = 'ria'
# template_group = 'rottnest'
# alr = None
# dumped_data = cache.get('AdmissionsLocation:'+mooring_group)
#
# if dumped_data is None:
# al= models.AdmissionsLocation.objects.filter(key=mooring_group).values('mooring_booking_terms','daily_admissions_terms','daily_admissions_more_price_info_url')
# if al.count() > 0:
# dumped_data = json.dumps(al[0])
# alr = al[0]
# cache.set('AdmissionsLocation:'+mooring_group,dumped_data, 3600)
# else:
# alr = json.loads(dumped_data)
# pass
# if alr:
# TERMS = alr['mooring_booking_terms']
# DAILY_TERMS_URL = alr['daily_admissions_terms']
# DAILY_FEES_URL = alr['daily_admissions_more_price_info_url']
# #DAILY_TERMS = al[0].daily_admissions_term
# #TERMS = "https://www.rottnestisland.com/~/media/Files/boating-documents/marine-hire-facilities-tcs.pdf?la=en"
# PUBLIC_URL='https://mooring-ria.dbca.wa.gov.au/'
# else:
# template_group = 'pvs'
# mooring_group = 'pvs'
#
# alr = None
# dumped_data = cache.get('AdmissionsLocation:'+mooring_group)
#
# if dumped_data is None:
# al= models.AdmissionsLocation.objects.filter(key=mooring_group).values('mooring_booking_terms','daily_admissions_terms','daily_admissions_more_price_info_url')
# if al.count() > 0:
# dumped_data = json.dumps(al[0])
# alr = al[0]
# cache.set('AdmissionsLocation:'+mooring_group,dumped_data, 3600)
# else:
# alr = json.loads(dumped_data)
# pass
# if alr:
# TERMS = alr['mooring_booking_terms']
# DAILY_TERMS_URL = alr['daily_admissions_terms']
# DAILY_FEES_URL = alr['daily_admissions_more_price_info_url']
#
# #TERMS = "/know/online-mooring-site-booking-terms-and-conditions"
# PUBLIC_URL='https://mooring.dbca.wa.gov.au'
#
#
# is_officer = False
# is_inventory = False
# is_admin = False
# is_payment_officer = False
# is_customer = False
#
# failed_refund_count = 0
# if request.user.is_authenticated:
# if request.user.is_staff or request.user.is_superuser:
# failed_refund_count = models.RefundFailed.objects.filter(status=0).count()
# is_officer = helpers.is_officer(request.user)
# is_inventory = helpers.is_inventory(request.user)
# is_admin = helpers.is_admin(request.user)
# is_payment_officer = helpers.is_payment_officer(request.user)
# is_customer = helpers.is_customer(request.user)
#
# return {
# 'EXPLORE_PARKS_SEARCH': '/map',
# 'EXPLORE_PARKS_CONTACT': '/contact-us',
# 'EXPLORE_PARKS_CONSERVE': '/know/conserving-our-moorings',
# 'EXPLORE_PARKS_PEAK_PERIODS': '/know/when-visit',
# 'EXPLORE_PARKS_ENTRY_FEES': '/know/entry-fees',
# 'EXPLORE_PARKS_TERMS': TERMS,
# 'DAILY_TERMS_URL': DAILY_TERMS_URL,
# 'DAILY_FEES_URL': DAILY_FEES_URL,
# 'PARKSTAY_EXTERNAL_URL': settings.PARKSTAY_EXTERNAL_URL,
# 'DEV_STATIC': settings.DEV_STATIC,
# 'DEV_STATIC_URL': settings.DEV_STATIC_URL,
# 'TEMPLATE_GROUP' : template_group,
# 'GIT_COMMIT_DATE' : settings.GIT_COMMIT_DATE,
# 'GIT_COMMIT_HASH' : settings.GIT_COMMIT_HASH,
# 'SYSTEM_NAME' : settings.SYSTEM_NAME,
# 'REFUND_FAILED_COUNT': failed_refund_count,
# 'IS_OFFICER' : is_officer,
# 'IS_INVENTORY' : is_inventory,
# 'IS_ADMIN' : is_admin,
# 'IS_PAYMENT_OFFICER' : is_payment_officer,
# 'IS_CUSTOMER' : is_customer,
# 'PUBLIC_URL' : PUBLIC_URL,
# 'MOORING_GROUP': mooring_group
# }
def mooring_url_group(tg):
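    # Builds the template context for a mooring group ('ria' or 'pvs'); the
    # AdmissionsLocation lookup is cached for an hour (3600 s).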
#web_url = request.META['HTTP_HOST']
TERMS = ''
DAILY_TERMS_URL = ''
DAILY_FEES_URL = ''
if tg == 'ria':
mooring_group = 'ria'
template_group = 'rottnest'
alr = None
dumped_data = cache.get('AdmissionsLocation:'+mooring_group)
if dumped_data is None:
            al = models.AdmissionsLocation.objects.filter(key=mooring_group).values('mooring_booking_terms', 'daily_admissions_terms', 'daily_admissions_more_price_info_url')
if al.count() > 0:
dumped_data = json.dumps(al[0])
alr = al[0]
cache.set('AdmissionsLocation:'+mooring_group,dumped_data, 3600)
else:
alr = json.loads(dumped_data)
pass
if alr:
TERMS = alr['mooring_booking_terms']
DAILY_TERMS_URL = alr['daily_admissions_terms']
DAILY_FEES_URL = alr['daily_admissions_more_price_info_url']
#DAILY_TERMS = al[0].daily_admissions_term
#TERMS = "https://www.rottnestisland.com/~/media/Files/boating-documents/marine-hire-facilities-tcs.pdf?la=en"
PUBLIC_URL='https://mooring-ria.dbca.wa.gov.au/'
else:
template_group = 'pvs'
mooring_group = 'pvs'
alr = None
dumped_data = cache.get('AdmissionsLocation:'+mooring_group)
if dumped_data is None:
            al = models.AdmissionsLocation.objects.filter(key=mooring_group).values('mooring_booking_terms', 'daily_admissions_terms', 'daily_admissions_more_price_info_url')
if al.count() > 0:
dumped_data = json.dumps(al[0])
alr = al[0]
cache.set('AdmissionsLocation:'+mooring_group,dumped_data, 3600)
else:
alr = json.loads(dumped_data)
pass
if alr:
TERMS = alr['mooring_booking_terms']
DAILY_TERMS_URL = alr['daily_admissions_terms']
DAILY_FEES_URL = alr['daily_admissions_more_price_info_url']
#TERMS = "/know/online-mooring-site-booking-terms-and-conditions"
PUBLIC_URL='https://mooring.dbca.wa.gov.au'
# is_officer = False
# is_inventory = False
# is_admin = False
# is_payment_officer = False
# is_customer = False
#
# failed_refund_count = 0
# if authed:
# if request.user.is_staff or request.user.is_superuser:
# failed_refund_count = models.RefundFailed.objects.filter(status=0).count()
# is_officer = helpers.is_officer(request.user)
# is_inventory = helpers.is_inventory(request.user)
# is_admin = helpers.is_admin(request.user)
# is_payment_officer = helpers.is_payment_officer(request.user)
# is_customer = helpers.is_customer(request.user)
return {
'EXPLORE_PARKS_SEARCH': '/map',
'EXPLORE_PARKS_CONTACT': '/contact-us',
'EXPLORE_PARKS_CONSERVE': '/know/conserving-our-moorings',
'EXPLORE_PARKS_PEAK_PERIODS': '/know/when-visit',
'EXPLORE_PARKS_ENTRY_FEES': '/know/entry-fees',
'EXPLORE_PARKS_TERMS': TERMS,
'DAILY_TERMS_URL': DAILY_TERMS_URL,
'DAILY_FEES_URL': DAILY_FEES_URL,
'PARKSTAY_EXTERNAL_URL': settings.PARKSTAY_EXTERNAL_URL,
'DEV_STATIC': settings.DEV_STATIC,
'DEV_STATIC_URL': settings.DEV_STATIC_URL,
'TEMPLATE_GROUP' : template_group,
'GIT_COMMIT_DATE' : settings.GIT_COMMIT_DATE,
'GIT_COMMIT_HASH' : settings.GIT_COMMIT_HASH,
'SYSTEM_NAME' : settings.SYSTEM_NAME,
# 'REFUND_FAILED_COUNT': failed_refund_count,
# 'IS_OFFICER' : is_officer,
# 'IS_INVENTORY' : is_inventory,
# 'IS_ADMIN' : is_admin,
# 'IS_PAYMENT_OFFICER' : is_payment_officer,
# 'IS_CUSTOMER' : is_customer,
'PUBLIC_URL' : PUBLIC_URL,
'MOORING_GROUP': mooring_group
}
def template_context(request):
"""Pass extra context variables to every template.
"""
context = mooring_url(request)
return context
| 38.422131 | 171 | 0.651733 | 1,135 | 9,375 | 5.036123 | 0.112775 | 0.044262 | 0.045486 | 0.025192 | 0.914276 | 0.914276 | 0.896956 | 0.896956 | 0.852694 | 0.852694 | 0 | 0.005017 | 0.23456 | 9,375 | 243 | 172 | 38.580247 | 0.791527 | 0.52864 | 0 | 0.343434 | 0 | 0 | 0.217584 | 0.107043 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.020202 | 0.050505 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6c03e7d80599c2f75c5a55d53106de2f52bb214 | 132 | py | Python | pythran/tests/user_defined_import/tiny_project/level/dummy.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,647 | 2015-01-13T01:45:38.000Z | 2022-03-28T01:23:41.000Z | pythran/tests/user_defined_import/tiny_project/level/dummy.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,116 | 2015-01-01T09:52:05.000Z | 2022-03-18T21:06:40.000Z | pythran/tests/user_defined_import/tiny_project/level/dummy.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 180 | 2015-02-12T02:47:28.000Z | 2022-03-14T10:28:18.000Z | from .. csts import return_cst
from ..level.dummer import twice
#pythran export yummy()
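# Pythran parses the "pythran export" comment above to decide which signature to compile.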
def yummy():
return twice(return_cst())
| 22 | 32 | 0.734848 | 19 | 132 | 5 | 0.631579 | 0.189474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 132 | 5 | 33 | 26.4 | 0.848214 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
e6d1ccf6fc0cbd4dbbcb35c6745a74783e78b41a | 105,481 | py | Python | google_appengine/google/appengine/tools/devappserver2/module_test.py | Serag8/Bachelor | 097c0ad2264e9c8790afcdbafa8e7fe8f46410a3 | [
"MIT"
] | null | null | null | google_appengine/google/appengine/tools/devappserver2/module_test.py | Serag8/Bachelor | 097c0ad2264e9c8790afcdbafa8e7fe8f46410a3 | [
"MIT"
] | null | null | null | google_appengine/google/appengine/tools/devappserver2/module_test.py | Serag8/Bachelor | 097c0ad2264e9c8790afcdbafa8e7fe8f46410a3 | [
"MIT"
] | 2 | 2020-07-25T05:03:06.000Z | 2020-11-04T04:55:57.000Z | #!/usr/bin/env python
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Tests for google.apphosting.tools.devappserver2.module."""
import functools
import httplib
import logging
import os
import re
import time
import google
from concurrent import futures
import mock
import mox
from google.testing.pybase import googletest
from google.testing.pybase import parameterized
from google.appengine.api import appinfo
from google.appengine.api import request_info
from google.appengine.tools.devappserver2 import api_server
from google.appengine.tools.devappserver2 import application_configuration
from google.appengine.tools.devappserver2 import constants
from google.appengine.tools.devappserver2 import dispatcher
from google.appengine.tools.devappserver2 import go_application
from google.appengine.tools.devappserver2 import go_runtime
from google.appengine.tools.devappserver2 import health_check_service
from google.appengine.tools.devappserver2 import instance
from google.appengine.tools.devappserver2 import java_runtime
from google.appengine.tools.devappserver2 import module
from google.appengine.tools.devappserver2 import python_runtime
from google.appengine.tools.devappserver2 import runtime_config_pb2
from google.appengine.tools.devappserver2 import start_response_utils
from google.appengine.tools.devappserver2 import vm_runtime_factory
from google.appengine.tools.devappserver2 import wsgi_server
from google.appengine.tools.docker import containers
class ModuleConfigurationStub(object):
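  # Minimal stand-in for a module configuration; it exposes only the attributes these tests read.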
def __init__(self,
application_root='/root',
application='app',
module_name='default',
automatic_scaling=appinfo.AutomaticScaling(),
version='version',
runtime='python27',
threadsafe=False,
skip_files='',
inbound_services=['warmup'],
handlers=[appinfo.URLMap(url=r'/python-(.*)',
script=r'\1.py')],
normalized_libraries=None,
env_variables=None,
manual_scaling=None,
basic_scaling=None,
health_check=None,
application_external_name='app'):
self.application_root = application_root
self.application = application
self.module_name = module_name
self.automatic_scaling = automatic_scaling
self.manual_scaling = manual_scaling
self.basic_scaling = basic_scaling
self.major_version = version
self.runtime = runtime
self.threadsafe = threadsafe
self.skip_files = skip_files
self.inbound_services = inbound_services
self.handlers = handlers
self.normalized_libraries = normalized_libraries or []
self.env_variables = env_variables or []
self.version_id = '%s:%s.%s' % (module_name, version, '12345')
self.is_backend = False
self.health_check = health_check
self.application_external_name = application_external_name
def check_for_updates(self):
return set()
class ModuleFacade(module.Module):
def __init__(self,
module_configuration=ModuleConfigurationStub(),
instance_factory=None,
ready=True,
allow_skipped_files=False,
threadsafe_override=None,
php_config=None,
python_config=None,
java_config=None,
vm_config=None):
super(ModuleFacade, self).__init__(
module_configuration,
host='fakehost',
balanced_port=0,
api_host='localhost',
api_port=8080,
auth_domain='gmail.com',
runtime_stderr_loglevel=1,
        php_config=php_config,
        python_config=python_config,
        java_config=java_config,
cloud_sql_config=None,
vm_config=vm_config,
default_version_port=8080,
port_registry=dispatcher.PortRegistry(),
request_data=None,
dispatcher=None,
max_instances=None,
use_mtime_file_watcher=False,
automatic_restarts=True,
allow_skipped_files=allow_skipped_files,
threadsafe_override=threadsafe_override)
if instance_factory is not None:
self._instance_factory = instance_factory
self._ready = ready
@property
def ready(self):
return self._ready
@property
def balanced_port(self):
return self._balanced_port
class AutoScalingModuleFacade(module.AutoScalingModule):
def __init__(self,
module_configuration=ModuleConfigurationStub(),
balanced_port=0,
instance_factory=None,
max_instances=None,
ready=True):
super(AutoScalingModuleFacade, self).__init__(
module_configuration=module_configuration,
host='fakehost',
balanced_port=balanced_port,
api_host='localhost',
api_port=8080,
auth_domain='gmail.com',
runtime_stderr_loglevel=1,
php_config=None,
python_config=None,
java_config=None,
cloud_sql_config=None,
vm_config=None,
default_version_port=8080,
port_registry=dispatcher.PortRegistry(),
request_data=None,
dispatcher=None,
max_instances=max_instances,
use_mtime_file_watcher=False,
automatic_restarts=True,
allow_skipped_files=False,
threadsafe_override=None)
if instance_factory is not None:
self._instance_factory = instance_factory
self._ready = ready
@property
def ready(self):
return self._ready
@property
def balanced_port(self):
return self._balanced_port
class ManualScalingModuleFacade(module.ManualScalingModule):
def __init__(self,
module_configuration=None,
balanced_port=0,
instance_factory=None,
ready=True,
vm_config=None):
if module_configuration is None:
module_configuration = ModuleConfigurationStub()
super(ManualScalingModuleFacade, self).__init__(
module_configuration=module_configuration,
host='fakehost',
balanced_port=balanced_port,
api_host='localhost',
api_port=8080,
auth_domain='gmail.com',
runtime_stderr_loglevel=1,
php_config=None,
python_config=None,
java_config=None,
cloud_sql_config=None,
vm_config=vm_config,
default_version_port=8080,
port_registry=dispatcher.PortRegistry(),
request_data=None,
dispatcher=None,
max_instances=None,
use_mtime_file_watcher=False,
automatic_restarts=True,
allow_skipped_files=False,
threadsafe_override=None)
if instance_factory is not None:
self._instance_factory = instance_factory
self._ready = ready
@property
def ready(self):
return self._ready
@property
def balanced_port(self):
return self._balanced_port
class BasicScalingModuleFacade(module.BasicScalingModule):
def __init__(self,
host='fakehost',
module_configuration=ModuleConfigurationStub(),
balanced_port=0,
instance_factory=None,
ready=True):
super(BasicScalingModuleFacade, self).__init__(
module_configuration=module_configuration,
host=host,
balanced_port=balanced_port,
api_host='localhost',
api_port=8080,
auth_domain='gmail.com',
runtime_stderr_loglevel=1,
php_config=None,
python_config=None,
java_config=None,
cloud_sql_config=None,
vm_config=None,
default_version_port=8080,
port_registry=dispatcher.PortRegistry(),
request_data=None,
dispatcher=None,
max_instances=None,
use_mtime_file_watcher=False,
automatic_restarts=True,
allow_skipped_files=False,
threadsafe_override=None)
if instance_factory is not None:
self._instance_factory = instance_factory
self._ready = ready
@property
def ready(self):
return self._ready
@property
def balanced_port(self):
return self._balanced_port
class BuildRequestEnvironTest(googletest.TestCase):
def setUp(self):
api_server.test_setup_stubs()
self.module = ModuleFacade()
def test_build_request_environ(self):
expected_environ = {
constants.FAKE_IS_ADMIN_HEADER: '1',
'HTTP_HOST': 'fakehost:8080',
'HTTP_HEADER': 'Value',
'HTTP_OTHER': 'Values',
'CONTENT_LENGTH': '4',
'PATH_INFO': '/foo',
'QUERY_STRING': 'bar=baz',
'REQUEST_METHOD': 'PUT',
'REMOTE_ADDR': '1.2.3.4',
'SERVER_NAME': 'fakehost',
'SERVER_PORT': '8080',
'SERVER_PROTOCOL': 'HTTP/1.1',
'wsgi.version': (1, 0),
'wsgi.url_scheme': 'http',
'wsgi.multithread': True,
'wsgi.multiprocess': True}
environ = self.module.build_request_environ(
'PUT', '/foo?bar=baz', [('Header', 'Value'), ('Other', 'Values')],
'body', '1.2.3.4', 8080)
self.assertEqual('', environ.pop('wsgi.errors').getvalue())
self.assertEqual('body', environ.pop('wsgi.input').getvalue())
self.assertEqual(expected_environ, environ)
def test_build_request_environ_fake_is_logged_in(self):
expected_environ = {
constants.FAKE_IS_ADMIN_HEADER: '1',
constants.FAKE_LOGGED_IN_HEADER: '1',
'HTTP_HOST': 'fakehost:8080',
'HTTP_HEADER': 'Value',
'HTTP_OTHER': 'Values',
'CONTENT_LENGTH': '4',
'PATH_INFO': '/foo',
'QUERY_STRING': 'bar=baz',
'REQUEST_METHOD': 'PUT',
'REMOTE_ADDR': '1.2.3.4',
'SERVER_NAME': 'fakehost',
'SERVER_PORT': '8080',
'SERVER_PROTOCOL': 'HTTP/1.1',
'wsgi.version': (1, 0),
'wsgi.url_scheme': 'http',
'wsgi.multithread': True,
'wsgi.multiprocess': True}
environ = self.module.build_request_environ(
'PUT', '/foo?bar=baz', [('Header', 'Value'), ('Other', 'Values')],
'body', '1.2.3.4', 8080, fake_login=True)
self.assertEqual('', environ.pop('wsgi.errors').getvalue())
self.assertEqual('body', environ.pop('wsgi.input').getvalue())
self.assertEqual(expected_environ, environ)
def test_build_request_environ_unicode_body(self):
expected_environ = {
constants.FAKE_IS_ADMIN_HEADER: '1',
'HTTP_HOST': 'fakehost',
'HTTP_HEADER': 'Value',
'HTTP_OTHER': 'Values',
'CONTENT_LENGTH': '4',
'PATH_INFO': '/foo',
'QUERY_STRING': 'bar=baz',
'REQUEST_METHOD': 'PUT',
'REMOTE_ADDR': '1.2.3.4',
'SERVER_NAME': 'fakehost',
'SERVER_PORT': '80',
'SERVER_PROTOCOL': 'HTTP/1.1',
'wsgi.version': (1, 0),
'wsgi.url_scheme': 'http',
'wsgi.multithread': True,
'wsgi.multiprocess': True}
environ = self.module.build_request_environ(
'PUT', '/foo?bar=baz', [('Header', 'Value'), ('Other', 'Values')],
u'body', '1.2.3.4', 80)
self.assertEqual('', environ.pop('wsgi.errors').getvalue())
self.assertEqual('body', environ.pop('wsgi.input').getvalue())
self.assertEqual(expected_environ, environ)
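  # A sanity note on the environ values asserted above (derived from the test
  # calls themselves, not from module.py): CONTENT_LENGTH is str(len('body'))
  # == '4', and HTTP_HOST carries the port only when it is non-default --
  # 'fakehost:8080' for port 8080 versus bare 'fakehost' for port 80.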
class TestModuleCreateUrlHandlers(googletest.TestCase):
"""Tests for module.Module._create_url_handlers."""
def setUp(self):
self.module_configuration = ModuleConfigurationStub()
self.instance_factory = instance.InstanceFactory(None, 1)
self.servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration)
self.instance_factory.START_URL_MAP = appinfo.URLMap(
url='/_ah/start',
script='start_handler',
login='admin')
self.instance_factory.WARMUP_URL_MAP = appinfo.URLMap(
url='/_ah/warmup',
script='warmup_handler',
login='admin')
# Built-in: login, blob_upload, blob_image, channel, gcs, endpoints
self.num_builtin_handlers = 6
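  # Handler-count accounting used by the assertions below (a sketch, assuming
  # _create_url_handlers prepends auto-generated start/warmup maps whenever
  # the user configuration does not already match them):
  #   total = num_builtin_handlers + user handlers + auto-added start/warmup.
  # E.g. a catch-all user handler matches both, giving 6 + 1; a handler that
  # matches neither gets both auto maps, giving 6 + 1 + 2.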
def test_match_all(self):
self.module_configuration.handlers = [appinfo.URLMap(url=r'.*',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 1, len(handlers))
def test_match_start_only(self):
self.module_configuration.handlers = [appinfo.URLMap(url=r'/_ah/start',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 2, len(handlers))
self.assertEqual(self.instance_factory.WARMUP_URL_MAP, handlers[0].url_map)
def test_match_warmup_only(self):
self.module_configuration.handlers = [appinfo.URLMap(url=r'/_ah/warmup',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 2, len(handlers))
self.assertEqual(self.instance_factory.START_URL_MAP, handlers[0].url_map)
def test_match_neither_warmup_nor_start(self):
self.module_configuration.handlers = [appinfo.URLMap(url=r'/',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 3, len(handlers))
self.assertEqual(self.instance_factory.WARMUP_URL_MAP, handlers[0].url_map)
self.assertEqual(self.instance_factory.START_URL_MAP, handlers[1].url_map)
def test_match_static_only(self):
self.module_configuration.handlers = [
appinfo.URLMap(url=r'/_ah/start', static_dir='foo'),
appinfo.URLMap(url=r'/_ah/warmup', static_files='foo', upload='foo')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 4, len(handlers))
self.assertEqual(self.instance_factory.WARMUP_URL_MAP, handlers[0].url_map)
self.assertEqual(self.instance_factory.START_URL_MAP, handlers[1].url_map)
def test_match_start_only_no_inbound_warmup(self):
self.module_configuration.inbound_services = None
self.module_configuration.handlers = [appinfo.URLMap(url=r'/_ah/start',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 1, len(handlers))
def test_match_warmup_only_no_inbound_warmup(self):
self.module_configuration.inbound_services = None
self.module_configuration.handlers = [appinfo.URLMap(url=r'/_ah/warmup',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 2, len(handlers))
self.assertEqual(self.instance_factory.START_URL_MAP, handlers[0].url_map)
def test_match_neither_warmup_nor_start_no_inbound_warmup(self):
self.module_configuration.inbound_services = None
self.module_configuration.handlers = [appinfo.URLMap(url=r'/',
script=r'foo.py')]
handlers = self.servr._create_url_handlers()
self.assertEqual(self.num_builtin_handlers + 2, len(handlers))
self.assertEqual(self.instance_factory.START_URL_MAP, handlers[0].url_map)
class TestModuleGetRuntimeConfig(parameterized.ParameterizedTestCase):
"""Tests for module.Module._get_runtime_config."""
def setUp(self):
self.module_configuration = ModuleConfigurationStub(skip_files='foo')
self.module_configuration.handlers = [
appinfo.URLMap(url=r'/static', static_dir='static'),
appinfo.URLMap(url=r'/app_read_static', static_dir='app_read_static',
application_readable=True),
appinfo.URLMap(url=r'/static_images/*.png',
static_files=r'static_images/\\1',
upload=r'static_images/*.png'),
appinfo.URLMap(url=r'/app_readable_static_images/*.png',
static_files=r'app_readable_static_images/\\1',
upload=r'app_readable_static_images/*.png',
application_readable=True),
]
self.instance_factory = instance.InstanceFactory(None, 1)
def test_static_files_regex(self):
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration)
config = servr._get_runtime_config()
self.assertEqual(r'^(static%s.*)|(static_images/*.png)$' %
re.escape(os.path.sep),
config.static_files)
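  # The expected pattern unions the static_dir prefix ('static' + os.path.sep)
  # with the upload regex of the plain static_files handler; the
  # application_readable entries are deliberately absent, since those remain
  # visible to the runtime instead of being served purely as static content.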
def test_allow_skipped_files(self):
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration,
allow_skipped_files=True)
config = servr._get_runtime_config()
self.assertFalse(config.HasField('skip_files'))
self.assertFalse(config.HasField('static_files'))
def test_threadsafe_true_override_none(self):
self.module_configuration.threadsafe = True
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration)
config = servr._get_runtime_config()
self.assertTrue(config.threadsafe)
def test_threadsafe_false_override_none(self):
self.module_configuration.threadsafe = False
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration)
config = servr._get_runtime_config()
self.assertFalse(config.threadsafe)
def test_threadsafe_true_override_false(self):
self.module_configuration.threadsafe = True
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration,
threadsafe_override=False)
config = servr._get_runtime_config()
self.assertFalse(config.threadsafe)
def test_threadsafe_false_override_true(self):
self.module_configuration.threadsafe = False
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration,
threadsafe_override=True)
config = servr._get_runtime_config()
self.assertTrue(config.threadsafe)
@parameterized.Parameters(
('php', 'php_config', runtime_config_pb2.PhpConfig),
('php55', 'php_config', runtime_config_pb2.PhpConfig),
('java', 'java_config', runtime_config_pb2.JavaConfig),
('java7', 'java_config', runtime_config_pb2.JavaConfig),
('python', 'python_config', runtime_config_pb2.PythonConfig),
('python27', 'python_config', runtime_config_pb2.PythonConfig),
)
@mock.patch('google.appengine.tools.devappserver2.java_runtime.'
'JavaRuntimeInstanceFactory._make_java_command',
new=mock.Mock(return_value=''))
def test_copy_runtime_config(self, runtime, field_to_set, field_class):
module_configuration = ModuleConfigurationStub(runtime=runtime)
php_config = runtime_config_pb2.PhpConfig()
    python_config = runtime_config_pb2.PythonConfig()
java_config = runtime_config_pb2.JavaConfig()
servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=module_configuration,
php_config=php_config,
python_config=python_config,
java_config=java_config)
config = servr._get_runtime_config()
self.assertTrue(hasattr(config, field_to_set))
self.assertEqual(field_class, type(getattr(config, field_to_set)))
class TestModuleShutdownInstance(googletest.TestCase):
"""Tests for module.Module._shutdown_instance."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.module_configuration = ModuleConfigurationStub()
self.instance_factory = instance.InstanceFactory(None, 1)
self.servr = ModuleFacade(instance_factory=self.instance_factory,
module_configuration=self.module_configuration)
self.mox.StubOutWithMock(logging, 'exception')
self.mox.StubOutWithMock(self.servr, '_handle_request')
self.mox.StubOutWithMock(self.servr._quit_event, 'wait')
self.mox.StubOutWithMock(module.Module, 'build_request_environ')
self.inst = self.mox.CreateMock(instance.Instance)
self.time = 0
self.mox.stubs.Set(time, 'time', lambda: self.time)
def tearDown(self):
self.mox.UnsetStubs()
def test_shutdown_instance(self):
def advance_time(*unused_args, **unused_kwargs):
self.time += 10
environ = object()
self.servr.build_request_environ(
'GET', '/_ah/stop', [], '', '0.1.0.3', 9000, fake_login=True).AndReturn(
environ)
self.servr._handle_request(
environ,
start_response_utils.null_start_response,
inst=self.inst,
request_type=instance.SHUTDOWN_REQUEST).WithSideEffects(advance_time)
self.servr._quit_event.wait(20)
self.inst.quit(force=True)
self.mox.ReplayAll()
self.servr._shutdown_instance(self.inst, 9000)
self.mox.VerifyAll()
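  # Timeline sketch for the expectations above: the fake clock starts at 0,
  # handling the synthetic /_ah/stop request advances it by 10s, and the
  # module waits out the remaining grace period (wait(20), i.e. an assumed
  # 30s total) before force-quitting the instance.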
class TestModuleRuntime(googletest.TestCase):
"""Tests for module.Module.runtime."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.mox.StubOutWithMock(application_configuration.ModuleConfiguration,
'_parse_configuration')
self.mox.StubOutWithMock(os.path, 'getmtime')
def tearDown(self):
self.mox.UnsetStubs()
class ModuleStubRuntime(module.Module):
def __init__(self, module_configuration):
self._module_configuration = module_configuration
def test_vm_false(self):
automatic_scaling = appinfo.AutomaticScaling(min_pending_latency='1.0s',
max_pending_latency='2.0s',
min_idle_instances=1,
max_idle_instances=2)
error_handlers = [appinfo.ErrorHandlers(file='error.html')]
handlers = [appinfo.URLMap(url=r'/python-(.*)',
script=r'\1.py')]
info = appinfo.AppInfoExternal(
application='app',
module='module1',
version='1',
runtime='python27',
threadsafe=False,
automatic_scaling=automatic_scaling,
skip_files=r'\*.gif',
error_handlers=error_handlers,
handlers=handlers,
inbound_services=['warmup'],
env_variables=appinfo.EnvironmentVariables(),
)
config_path = '/appdir/app.yaml'
application_configuration.ModuleConfiguration._parse_configuration(
config_path).AndReturn((info, [config_path]))
os.path.getmtime(config_path).AndReturn(10)
self.mox.ReplayAll()
config = application_configuration.ModuleConfiguration(
'/appdir/app.yaml')
servr = TestModuleRuntime.ModuleStubRuntime(
module_configuration=config)
self.assertEqual(servr.runtime, 'python27')
self.assertEqual(servr.effective_runtime, 'python27')
self.mox.VerifyAll()
def test_vm_true(self):
manual_scaling = appinfo.ManualScaling()
vm_settings = appinfo.VmSettings()
vm_settings['vm_runtime'] = 'python27'
handlers = [appinfo.URLMap(url=r'/*', script=r'\1.py')]
info = appinfo.AppInfoExternal(
application='app',
module='module1',
version='1',
runtime='vm',
vm_settings=vm_settings,
threadsafe=False,
manual_scaling=manual_scaling,
handlers=handlers,
)
config_path = '/appdir/app.yaml'
application_configuration.ModuleConfiguration._parse_configuration(
config_path).AndReturn((info, [config_path]))
os.path.getmtime(config_path).AndReturn(10)
self.mox.ReplayAll()
module_configuration = application_configuration.ModuleConfiguration(
config_path)
servr = TestModuleRuntime.ModuleStubRuntime(
module_configuration=module_configuration)
self.assertEqual(servr.runtime, 'vm')
self.assertEqual(servr.effective_runtime, 'python27')
self.mox.VerifyAll()
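  # Contrast with test_vm_false: for runtime 'vm' the declared runtime is
  # reported as-is, while effective_runtime resolves to
  # vm_settings['vm_runtime'] ('python27' here).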
class TestAutoScalingModuleWarmup(googletest.TestCase):
"""Tests for module.AutoScalingModule._warmup."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.mox.StubOutWithMock(module.Module, 'build_request_environ')
def tearDown(self):
self.mox.UnsetStubs()
def test_warmup(self):
s = AutoScalingModuleFacade(balanced_port=8080)
self.mox.StubOutWithMock(s, '_handle_request')
self.mox.StubOutWithMock(s._condition, 'notify')
inst = self.mox.CreateMock(instance.Instance)
environ = object()
s.build_request_environ('GET', '/_ah/warmup', [], '', '0.1.0.3', 8080,
fake_login=True).AndReturn(environ)
s._handle_request(environ,
mox.IgnoreArg(),
inst=inst,
request_type=instance.READY_REQUEST)
s._condition.notify(1)
self.mox.ReplayAll()
s._warmup(inst)
self.mox.VerifyAll()
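  # The warmup path exercised above is a synthetic logged-in GET /_ah/warmup
  # dispatched as a READY_REQUEST, followed by a single notify on the
  # module's condition variable.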
class TestAutoScalingModuleAddInstance(googletest.TestCase):
"""Tests for module.AutoScalingModule._add_instance."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.factory = self.mox.CreateMock(instance.InstanceFactory)
self.factory.max_concurrent_requests = 10
def tearDown(self):
self.mox.UnsetStubs()
def test_permit_warmup(self):
s = AutoScalingModuleFacade(instance_factory=self.factory)
self.mox.StubOutWithMock(s, '_async_warmup')
self.mox.StubOutWithMock(s._condition, 'notify')
inst = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(mox.Regex('[a-f0-9]{36}'),
expect_ready_request=True).AndReturn(inst)
inst.start().AndReturn(True)
s._async_warmup(inst)
self.mox.ReplayAll()
self.assertEqual(inst, s._add_instance(permit_warmup=True))
self.mox.VerifyAll()
self.assertEqual(1, len(s._instances))
def test_no_permit_warmup(self):
s = AutoScalingModuleFacade(instance_factory=self.factory)
self.mox.StubOutWithMock(s._condition, 'notify')
inst = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(mox.Regex('[a-f0-9]{36}'),
expect_ready_request=False).AndReturn(inst)
inst.start().AndReturn(True)
s._condition.notify(10)
self.mox.ReplayAll()
self.assertEqual(inst, s._add_instance(permit_warmup=False))
self.mox.VerifyAll()
self.assertIn(inst, s._instances)
def test_failed_to_start(self):
s = AutoScalingModuleFacade(instance_factory=self.factory)
self.mox.StubOutWithMock(s, '_async_warmup')
self.mox.StubOutWithMock(s._condition, 'notify')
inst = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(mox.Regex('[a-f0-9]{36}'),
expect_ready_request=True).AndReturn(inst)
inst.start().AndReturn(False)
self.mox.ReplayAll()
self.assertIsNone(s._add_instance(permit_warmup=True))
self.mox.VerifyAll()
self.assertEqual(1, len(s._instances))
def test_max_instances(self):
s = AutoScalingModuleFacade(instance_factory=self.factory,
max_instances=1)
self.mox.StubOutWithMock(s._condition, 'notify')
inst = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(mox.Regex('[a-f0-9]{36}'),
expect_ready_request=False).AndReturn(inst)
inst.start().AndReturn(True)
s._condition.notify(10)
self.mox.ReplayAll()
self.assertEqual(inst, s._add_instance(permit_warmup=False))
    self.assertIsNone(s._add_instance(permit_warmup=False))
self.mox.VerifyAll()
self.assertEqual(1, len(s._instances))
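  # With max_instances=1 the second _add_instance call yields None and the
  # pool stays at a single instance. The notify(10) expectations appear to
  # wake up to max_concurrent_requests waiting requests (10 for this factory),
  # though that constant is an inference from the stubbed factory, not
  # something these tests assert directly.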
class TestAutoScalingInstancePoolHandleScriptRequest(googletest.TestCase):
"""Tests for module.AutoScalingModule.handle."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.inst = self.mox.CreateMock(instance.Instance)
self.environ = {}
self.start_response = object()
self.response = [object()]
self.url_map = object()
self.match = object()
self.request_id = object()
self.auto_module = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.auto_module, '_choose_instance')
self.mox.StubOutWithMock(self.auto_module, '_add_instance')
self.mox.stubs.Set(time, 'time', lambda: 0.0)
def tearDown(self):
self.mox.UnsetStubs()
def test_handle_script_request(self):
self.auto_module._choose_instance(0.1).AndReturn(self.inst)
self.inst.handle(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.NORMAL_REQUEST).AndReturn(self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.auto_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
self.assertEqual([(mox.IgnoreArg(), 1)],
list(self.auto_module._outstanding_request_history))
def test_handle_cannot_accept_request(self):
self.auto_module._choose_instance(0.1).AndReturn(self.inst)
self.auto_module._choose_instance(0.1).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.auto_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
self.assertEqual([(mox.IgnoreArg(), 1)],
list(self.auto_module._outstanding_request_history))
def test_handle_new_instance(self):
self.auto_module._choose_instance(0.1).AndReturn(None)
self.auto_module._add_instance(permit_warmup=False).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.auto_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_new_instance_none_returned(self):
self.auto_module._choose_instance(0.1).AndReturn(None)
self.auto_module._add_instance(permit_warmup=False).AndReturn(None)
self.auto_module._choose_instance(0.2).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.auto_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
class TestAutoScalingInstancePoolTrimRequestTimesAndOutstanding(
googletest.TestCase):
"""Tests for AutoScalingModule._trim_outstanding_request_history."""
def setUp(self):
api_server.test_setup_stubs()
def test_trim_outstanding_request_history(self):
servr = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
servr._outstanding_request_history.append((0, 100))
servr._outstanding_request_history.append((1.0, 101))
servr._outstanding_request_history.append((1.2, 102))
servr._outstanding_request_history.append((2.5, 103))
now = time.time()
servr._outstanding_request_history.append((now, 42))
servr._outstanding_request_history.append((now + 1, 43))
servr._outstanding_request_history.append((now + 3, 44))
servr._outstanding_request_history.append((now + 4, 45))
servr._trim_outstanding_request_history()
self.assertEqual([(now, 42), (now + 1, 43), (now + 3, 44), (now + 4, 45)],
list(servr._outstanding_request_history))
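  # Trimming sketch (the exact window length is an assumption here, not
  # asserted): the epoch-relative timestamps 0-2.5 are far older than
  # time.time() and fall outside the outstanding-request history window,
  # while the four now-relative entries survive.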
class TestAutoScalingInstancePoolGetNumRequiredInstances(googletest.TestCase):
"""Tests for AutoScalingModule._outstanding_request_history."""
def setUp(self):
api_server.test_setup_stubs()
self.servr = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 5))
def test_get_num_required_instances(self):
now = time.time()
self.servr._outstanding_request_history.append((now, 42))
self.servr._outstanding_request_history.append((now + 1, 43))
self.servr._outstanding_request_history.append((now + 3, 44))
self.servr._outstanding_request_history.append((now + 4, 45))
self.assertEqual(9, self.servr._get_num_required_instances())
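  # Worked arithmetic, assuming required = ceil(peak outstanding / instance
  # capacity): the peak outstanding count in the history is 45 and the
  # factory allows 5 concurrent requests per instance, so ceil(45 / 5) == 9.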
def test_no_requests(self):
self.assertEqual(0, self.servr._get_num_required_instances())
class TestAutoScalingInstancePoolSplitInstances(googletest.TestCase):
"""Tests for module.AutoScalingModule._split_instances."""
class Instance(object):
def __init__(self, num_outstanding_requests, can_accept_requests=True):
self.num_outstanding_requests = num_outstanding_requests
self.can_accept_requests = can_accept_requests
def __repr__(self):
return str(self.num_outstanding_requests)
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.servr = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.servr, '_get_num_required_instances')
def tearDown(self):
self.mox.UnsetStubs()
def test_split_instances(self):
instance1 = self.Instance(1)
instance2 = self.Instance(2, can_accept_requests=False)
instance3 = self.Instance(3)
instance4 = self.Instance(4)
instance5 = self.Instance(5)
instance6 = self.Instance(6)
instance7 = self.Instance(7)
instance8 = self.Instance(8, can_accept_requests=False)
instance9 = self.Instance(9)
instance10 = self.Instance(10)
self.servr._get_num_required_instances().AndReturn(5)
self.servr._instances = set([instance1, instance2, instance3, instance4,
instance5, instance6, instance7, instance8,
instance9, instance10])
self.mox.ReplayAll()
self.assertEqual(
(set([instance10, instance9, instance7,
instance6, instance5]),
set([instance1, instance2, instance3, instance4, instance8])),
self.servr._split_instances())
self.mox.VerifyAll()
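  # What the assertion encodes: with 5 instances required, _split_instances
  # selects the 5 busiest instances that can still accept requests (10, 9, 7,
  # 6, 5) as "required"; instance2 and instance8 are excluded outright
  # because can_accept_requests is False, regardless of their load.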
def test_split_instances_no_instances(self):
self.servr._get_num_required_instances().AndReturn(5)
self.servr._instances = set([])
self.mox.ReplayAll()
self.assertEqual((set([]), set([])),
self.servr._split_instances())
self.mox.VerifyAll()
def test_split_instances_no_instances_not_enough_accepting_requests(self):
instance1 = self.Instance(1)
instance2 = self.Instance(1, can_accept_requests=False)
instance3 = self.Instance(2, can_accept_requests=False)
self.servr._get_num_required_instances().AndReturn(5)
self.servr._instances = set([instance1, instance2, instance3])
self.mox.ReplayAll()
self.assertEqual((set([instance1]), set([instance2, instance3])),
self.servr._split_instances())
self.mox.VerifyAll()
def test_split_instances_no_required_instances(self):
instance1 = self.Instance(1)
instance2 = self.Instance(2, can_accept_requests=False)
instance3 = self.Instance(3, can_accept_requests=False)
instance4 = self.Instance(4)
instance5 = self.Instance(5)
instance6 = self.Instance(6)
instance7 = self.Instance(7)
instance8 = self.Instance(8)
self.servr._get_num_required_instances().AndReturn(0)
self.servr._instances = set([instance1, instance2, instance3, instance4,
instance5, instance6, instance7, instance8])
self.mox.ReplayAll()
self.assertEqual(
(set(),
set([instance8, instance7, instance6, instance5, instance4,
instance3, instance2, instance1])),
self.servr._split_instances())
self.mox.VerifyAll()
class TestAutoScalingInstancePoolChooseInstances(googletest.TestCase):
"""Tests for module.AutoScalingModule._choose_instance."""
class Instance(object):
def __init__(self, num_outstanding_requests, can_accept_requests=True):
self.num_outstanding_requests = num_outstanding_requests
self.remaining_request_capacity = 10 - num_outstanding_requests
self.can_accept_requests = can_accept_requests
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.servr = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.servr, '_split_instances')
self.mox.StubOutWithMock(self.servr._condition, 'wait')
self.time = 10
self.mox.stubs.Set(time, 'time', lambda: self.time)
def advance_time(self, *unused_args):
self.time += 10
def tearDown(self):
self.mox.UnsetStubs()
def test_choose_instance_required_available(self):
instance1 = self.Instance(1)
instance2 = self.Instance(2)
instance3 = self.Instance(3)
instance4 = self.Instance(4)
self.servr._split_instances().AndReturn((set([instance3, instance4]),
set([instance1, instance2])))
self.mox.ReplayAll()
self.assertEqual(instance3, # Least busy required instance.
self.servr._choose_instance(15))
self.mox.VerifyAll()
def test_choose_instance_no_instances(self):
self.servr._split_instances().AndReturn((set([]), set([])))
self.servr._condition.wait(5).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
    self.assertIsNone(self.servr._choose_instance(15))
self.mox.VerifyAll()
def test_choose_instance_no_instance_that_can_accept_requests(self):
instance1 = self.Instance(1, can_accept_requests=False)
self.servr._split_instances().AndReturn((set([]), set([instance1])))
self.servr._condition.wait(5).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
    self.assertIsNone(self.servr._choose_instance(15))
self.mox.VerifyAll()
def test_choose_instance_required_full(self):
instance1 = self.Instance(1)
instance2 = self.Instance(2)
instance3 = self.Instance(10)
instance4 = self.Instance(10)
self.servr._split_instances().AndReturn((set([instance3, instance4]),
set([instance1, instance2])))
self.mox.ReplayAll()
    self.assertEqual(instance2,  # Busiest non-required instance.
self.servr._choose_instance(15))
self.mox.VerifyAll()
def test_choose_instance_must_wait(self):
instance1 = self.Instance(10)
instance2 = self.Instance(10)
self.servr._split_instances().AndReturn((set([instance1]),
set([instance2])))
self.servr._condition.wait(5).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
self.assertIsNone(self.servr._choose_instance(15))
self.mox.VerifyAll()
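  # Timeout arithmetic for the wait-based tests: the fake clock starts at 10
  # and the deadline passed to _choose_instance is 15, so the module waits
  # wait(15 - 10) == wait(5); the side effect then advances the clock to 20,
  # past the deadline, and _choose_instance gives up with None.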
class TestAutoScalingInstancePoolAdjustInstances(googletest.TestCase):
"""Tests for module.AutoScalingModule._adjust_instances."""
class Instance(object):
def __init__(self, num_outstanding_requests):
self.num_outstanding_requests = num_outstanding_requests
def quit(self):
pass
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.servr = AutoScalingModuleFacade(
module_configuration=ModuleConfigurationStub(
automatic_scaling=appinfo.AutomaticScaling(
min_pending_latency='0.1s',
max_pending_latency='1.0s',
min_idle_instances=1,
max_idle_instances=2)),
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.servr, '_split_instances')
self.mox.StubOutWithMock(self.servr, '_add_instance')
def tearDown(self):
self.mox.UnsetStubs()
def test_adjust_instances_create_new(self):
instance1 = self.Instance(0)
instance2 = self.Instance(2)
instance3 = self.Instance(3)
instance4 = self.Instance(4)
self.servr._instances = set([instance1, instance2, instance3, instance4])
self.servr._split_instances().AndReturn(
(set([instance1, instance2, instance3, instance4]),
set([])))
self.servr._add_instance(permit_warmup=True)
self.mox.ReplayAll()
self.servr._adjust_instances()
self.mox.VerifyAll()
def test_adjust_instances_quit_idle(self):
instance1 = self.Instance(0)
instance2 = self.Instance(2)
instance3 = self.Instance(3)
instance4 = self.Instance(4)
self.mox.StubOutWithMock(instance1, 'quit')
self.servr._instances = set([instance1, instance2, instance3, instance4])
self.servr._split_instances().AndReturn(
(set([]),
set([instance1, instance2, instance3, instance4])))
instance1.quit()
self.mox.ReplayAll()
self.servr._adjust_instances()
self.mox.VerifyAll()
def test_adjust_instances_quit_idle_with_race(self):
instance1 = self.Instance(0)
instance2 = self.Instance(2)
instance3 = self.Instance(3)
instance4 = self.Instance(4)
self.mox.StubOutWithMock(instance1, 'quit')
self.servr._instances = set([instance1, instance2, instance3, instance4])
self.servr._split_instances().AndReturn(
(set([]),
set([instance1, instance2, instance3, instance4])))
instance1.quit().AndRaise(instance.CannotQuitServingInstance)
self.mox.ReplayAll()
self.servr._adjust_instances()
self.mox.VerifyAll()
class TestAutoScalingInstancePoolHandleChanges(googletest.TestCase):
"""Tests for module.AutoScalingModule._handle_changes."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.instance_factory = instance.InstanceFactory(object(), 10)
self.servr = AutoScalingModuleFacade(
instance_factory=self.instance_factory)
self.mox.StubOutWithMock(self.instance_factory, 'files_changed')
self.mox.StubOutWithMock(self.instance_factory, 'configuration_changed')
self.mox.StubOutWithMock(self.servr, '_maybe_restart_instances')
self.mox.StubOutWithMock(self.servr, '_create_url_handlers')
self.mox.StubOutWithMock(self.servr._module_configuration,
'check_for_updates')
self.mox.StubOutWithMock(self.servr._watcher, 'changes')
def tearDown(self):
self.mox.UnsetStubs()
def test_no_changes(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.servr._maybe_restart_instances(config_changed=False,
file_changed=False)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_irrelevant_config_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.servr._maybe_restart_instances(config_changed=False,
file_changed=False)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_restart_config_change(self):
conf_change = frozenset([application_configuration.ENV_VARIABLES_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.instance_factory.configuration_changed(conf_change)
self.servr._maybe_restart_instances(config_changed=True, file_changed=False)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_handler_change(self):
conf_change = frozenset([application_configuration.HANDLERS_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.servr._create_url_handlers()
self.instance_factory.configuration_changed(conf_change)
self.servr._maybe_restart_instances(config_changed=True, file_changed=False)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_file_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn({'-'})
self.instance_factory.files_changed()
self.servr._maybe_restart_instances(config_changed=False, file_changed=True)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
class TestAutoScalingInstancePoolMaybeRestartInstances(googletest.TestCase):
"""Tests for module.AutoScalingModule._maybe_restart_instances."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.instance_factory = instance.InstanceFactory(object(), 10)
self.instance_factory.FILE_CHANGE_INSTANCE_RESTART_POLICY = instance.ALWAYS
self.servr = AutoScalingModuleFacade(instance_factory=self.instance_factory)
self.inst1 = self.mox.CreateMock(instance.Instance)
self.inst2 = self.mox.CreateMock(instance.Instance)
self.inst3 = self.mox.CreateMock(instance.Instance)
self.inst1.total_requests = 2
self.inst2.total_requests = 0
self.inst3.total_requests = 4
self.servr._instances.add(self.inst1)
self.servr._instances.add(self.inst2)
self.servr._instances.add(self.inst3)
def tearDown(self):
self.mox.UnsetStubs()
def test_no_changes(self):
self.mox.ReplayAll()
self.servr._maybe_restart_instances(config_changed=False,
file_changed=False)
self.mox.VerifyAll()
def test_config_change(self):
self.inst1.quit(allow_async=True).InAnyOrder()
self.inst2.quit(allow_async=True).InAnyOrder()
self.inst3.quit(allow_async=True).InAnyOrder()
self.mox.ReplayAll()
self.servr._maybe_restart_instances(config_changed=True,
file_changed=False)
self.mox.VerifyAll()
def test_file_change_restart_always(self):
self.instance_factory.FILE_CHANGE_INSTANCE_RESTART_POLICY = instance.ALWAYS
self.inst1.quit(allow_async=True).InAnyOrder()
self.inst2.quit(allow_async=True).InAnyOrder()
self.inst3.quit(allow_async=True).InAnyOrder()
self.mox.ReplayAll()
self.servr._maybe_restart_instances(config_changed=False,
file_changed=True)
self.mox.VerifyAll()
self.assertSequenceEqual(set(), self.servr._instances)
def test_file_change_restart_after_first_request(self):
self.instance_factory.FILE_CHANGE_INSTANCE_RESTART_POLICY = (
instance.AFTER_FIRST_REQUEST)
self.inst1.quit(allow_async=True).InAnyOrder()
self.inst3.quit(allow_async=True).InAnyOrder()
self.mox.ReplayAll()
self.servr._maybe_restart_instances(config_changed=False,
file_changed=True)
self.mox.VerifyAll()
self.assertSequenceEqual(set([self.inst2]), self.servr._instances)
def test_file_change_restart_never(self):
self.instance_factory.FILE_CHANGE_INSTANCE_RESTART_POLICY = instance.NEVER
self.mox.ReplayAll()
self.servr._maybe_restart_instances(config_changed=False,
file_changed=True)
self.mox.VerifyAll()
self.assertSequenceEqual(set([self.inst1, self.inst2, self.inst3]),
self.servr._instances)
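  # Policy summary as exercised above: ALWAYS quits all three instances,
  # AFTER_FIRST_REQUEST quits only inst1 and inst3 (total_requests > 0) while
  # keeping the never-used inst2, and NEVER leaves the whole pool running.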
class TestAutoScalingInstancePoolLoopAdjustingInstances(googletest.TestCase):
"""Tests for module.AutoScalingModule._adjust_instances."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.servr = AutoScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
def tearDown(self):
self.mox.UnsetStubs()
def test_loop_and_quit(self):
self.mox.StubOutWithMock(self.servr, '_adjust_instances')
self.mox.StubOutWithMock(self.servr, '_handle_changes')
inst1 = self.mox.CreateMock(instance.Instance)
inst2 = self.mox.CreateMock(instance.Instance)
inst3 = self.mox.CreateMock(instance.Instance)
self.servr._instances.add(inst1)
self.servr._instances.add(inst2)
self.servr._instances.add(inst3)
self.servr._handle_changes(1000)
def do_quit(*unused_args):
self.servr._quit_event.set()
self.servr._adjust_instances().WithSideEffects(do_quit)
self.mox.ReplayAll()
self.servr._loop_adjusting_instances()
self.mox.VerifyAll()
class TestAutoScalingInstancePoolAutomaticScaling(googletest.TestCase):
def setUp(self):
api_server.test_setup_stubs()
def _create_module(self, automatic_scaling):
return AutoScalingModuleFacade(
module_configuration=ModuleConfigurationStub(
automatic_scaling=automatic_scaling),
instance_factory=instance.InstanceFactory(object(), 10))
def test_unset_automatic_settings(self):
settings = appinfo.AutomaticScaling()
pool = self._create_module(settings)
self.assertEqual(0.1, pool._min_pending_latency)
self.assertEqual(0.5, pool._max_pending_latency)
self.assertEqual(1, pool._min_idle_instances)
self.assertEqual(1000, pool._max_idle_instances)
def test_automatic_automatic_settings(self):
settings = appinfo.AutomaticScaling(
min_pending_latency='automatic',
max_pending_latency='automatic',
min_idle_instances='automatic',
max_idle_instances='automatic')
pool = self._create_module(settings)
self.assertEqual(0.1, pool._min_pending_latency)
self.assertEqual(0.5, pool._max_pending_latency)
self.assertEqual(1, pool._min_idle_instances)
self.assertEqual(1000, pool._max_idle_instances)
def test_explicit_automatic_settings(self):
settings = appinfo.AutomaticScaling(
min_pending_latency='1234ms',
max_pending_latency='5.67s',
min_idle_instances='3',
max_idle_instances='20')
pool = self._create_module(settings)
self.assertEqual(1.234, pool._min_pending_latency)
self.assertEqual(5.67, pool._max_pending_latency)
self.assertEqual(3, pool._min_idle_instances)
self.assertEqual(20, pool._max_idle_instances)
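  # Latency parsing illustrated by the values above: '1234ms' -> 1.234s and
  # '5.67s' -> 5.67s, while 'automatic' (or leaving a field unset) falls back
  # to the defaults of 0.1s/0.5s pending latency and 1/1000 idle instances.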
class TestManualScalingModuleStart(googletest.TestCase):
"""Tests for module.ManualScalingModule._start_instance."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.mox.StubOutWithMock(module.Module, 'build_request_environ')
def tearDown(self):
self.mox.UnsetStubs()
def test_instance_start_success(self):
s = ManualScalingModuleFacade(balanced_port=8080)
self.mox.StubOutWithMock(s, '_handle_request')
self.mox.StubOutWithMock(s._condition, 'notify')
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 12345
inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
inst.start().AndReturn(True)
environ = object()
s.build_request_environ('GET', '/_ah/start', [], '', '0.1.0.3', 12345,
fake_login=True).AndReturn(environ)
s._handle_request(environ,
mox.IgnoreArg(),
inst=inst,
request_type=instance.READY_REQUEST)
s._condition.notify(1)
self.mox.ReplayAll()
s._start_instance(wsgi_servr, inst)
self.mox.VerifyAll()
def test_instance_start_failure(self):
s = ManualScalingModuleFacade(balanced_port=8080)
self.mox.StubOutWithMock(s, '_handle_request')
self.mox.StubOutWithMock(s._condition, 'notify')
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 12345
inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
inst.start().AndReturn(False)
self.mox.ReplayAll()
s._start_instance(wsgi_servr, inst)
self.mox.VerifyAll()
class TestManualScalingModuleAddInstance(googletest.TestCase):
"""Tests for module.ManualScalingModule._add_instance."""
class WsgiServer(object):
def __init__(self, port):
self.port = port
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.factory = self.mox.CreateMock(instance.InstanceFactory)
self.factory.max_concurrent_requests = 10
def tearDown(self):
self.mox.UnsetStubs()
def test_add_while_started(self):
servr = ManualScalingModuleFacade(instance_factory=self.factory)
inst = self.mox.CreateMock(instance.Instance)
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'start')
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'port')
wsgi_server.WsgiServer.port = 12345
self.factory.new_instance(0, expect_ready_request=True).AndReturn(inst)
wsgi_server.WsgiServer.start()
module._THREAD_POOL.submit(servr._start_instance,
mox.IsA(wsgi_server.WsgiServer), inst)
self.mox.ReplayAll()
servr._add_instance()
self.mox.VerifyAll()
self.assertIn(inst, servr._instances)
self.assertEqual((servr, inst), servr._port_registry.get(12345))
def test_add_with_health_checks(self):
class MockFuture(object):
"""Mock Future object."""
def __init__(self):
self.cb = None
def add_done_callback(self, cb):
# Just run the callback immediately.
cb(None)
servr = ManualScalingModuleFacade(instance_factory=self.factory)
servr.vm_config = runtime_config_pb2.VMConfig()
servr.module_configuration.runtime = 'vm'
servr.module_configuration.health_check = appinfo.VmHealthCheck(
enable_health_check=True)
inst = self.mox.CreateMock(instance.Instance)
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'start')
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'port')
self.mox.StubOutWithMock(health_check_service.HealthChecker, 'start')
wsgi_server.WsgiServer.port = 12345
self.factory.new_instance(0, expect_ready_request=True).AndReturn(inst)
wsgi_server.WsgiServer.start()
health_check_service.HealthChecker.start()
mock_future = MockFuture()
module._THREAD_POOL.submit(
servr._start_instance,
mox.IsA(wsgi_server.WsgiServer), inst).AndReturn(mock_future)
self.mox.ReplayAll()
servr._add_instance()
self.mox.VerifyAll()
self.assertIn(inst, servr._instances)
self.assertEqual((servr, inst), servr._port_registry.get(12345))
def test_add_while_stopped(self):
servr = ManualScalingModuleFacade(instance_factory=self.factory)
servr._suspended = True
inst = self.mox.CreateMock(instance.Instance)
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'start')
self.mox.StubOutWithMock(wsgi_server.WsgiServer, 'port')
wsgi_server.WsgiServer.port = 12345
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.factory.new_instance(0, expect_ready_request=True).AndReturn(inst)
wsgi_server.WsgiServer.start()
self.mox.ReplayAll()
servr._add_instance()
self.mox.VerifyAll()
self.assertIn(inst, servr._instances)
self.assertEqual((servr, inst), servr._port_registry.get(12345))
def test_add_health_checks(self):
inst = self.mox.CreateMock(instance.Instance)
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
config = appinfo.VmHealthCheck()
self.mox.StubOutWithMock(health_check_service.HealthChecker, 'start')
health_check_service.HealthChecker.start()
servr = ManualScalingModuleFacade(instance_factory=self.factory)
self.mox.ReplayAll()
servr._add_health_checks(inst, wsgi_servr, config)
self.mox.VerifyAll()
def test_do_health_check_last_successful(self):
servr = ManualScalingModuleFacade(instance_factory=self.factory)
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 3
inst = self.mox.CreateMock(instance.Instance)
start_response = start_response_utils.CapturingStartResponse()
self.mox.StubOutWithMock(module.ManualScalingModule, '_handle_request')
servr._handle_request(
mox.And(
mox.ContainsKeyValue('PATH_INFO', '/_ah/health'),
mox.ContainsKeyValue('QUERY_STRING', 'IsLastSuccessful=yes')),
start_response, inst=inst, request_type=instance.NORMAL_REQUEST)
self.mox.ReplayAll()
servr._do_health_check(wsgi_servr, inst, start_response, True)
self.mox.VerifyAll()
def test_do_health_check_last_unsuccessful(self):
servr = ManualScalingModuleFacade(instance_factory=self.factory)
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 3
inst = self.mox.CreateMock(instance.Instance)
start_response = start_response_utils.CapturingStartResponse()
self.mox.StubOutWithMock(module.ManualScalingModule, '_handle_request')
servr._handle_request(
mox.And(
mox.ContainsKeyValue('PATH_INFO', '/_ah/health'),
mox.ContainsKeyValue('QUERY_STRING', 'IsLastSuccessful=no')),
start_response, inst=inst, request_type=instance.NORMAL_REQUEST)
self.mox.ReplayAll()
servr._do_health_check(wsgi_servr, inst, start_response, False)
self.mox.VerifyAll()
def test_restart_instance(self):
inst = self.mox.CreateMock(instance.Instance)
new_inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
new_inst.instance_id = 0
self.mox.StubOutWithMock(inst, 'quit')
self.mox.StubOutWithMock(new_inst, 'start')
self.mox.StubOutWithMock(
self.factory, 'new_instance')
self.mox.StubOutWithMock(module.ManualScalingModule, '_add_health_checks')
servr = ManualScalingModuleFacade(instance_factory=self.factory)
servr.module_configuration.runtime = 'vm'
servr.module_configuration.health_check = appinfo.VmHealthCheck(
enable_health_check=True)
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
self.mox.StubOutWithMock(wsgi_servr, 'set_app')
wsgi_servr.port = 3
servr._wsgi_servers = [wsgi_servr]
servr._instances = [inst]
inst.quit(force=True)
wsgi_servr.set_app(mox.IsA(functools.partial))
self.factory.new_instance(0).AndReturn(new_inst)
module.ManualScalingModule._add_health_checks(
new_inst, wsgi_servr, mox.IsA(appinfo.VmHealthCheck))
new_inst.start()
self.mox.ReplayAll()
servr._restart_instance(inst)
self.mox.VerifyAll()
def test_restart_instance_no_health_checks(self):
inst = self.mox.CreateMock(instance.Instance)
new_inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
new_inst.instance_id = 0
self.mox.StubOutWithMock(inst, 'quit')
self.mox.StubOutWithMock(new_inst, 'start')
self.mox.StubOutWithMock(
self.factory, 'new_instance')
servr = ManualScalingModuleFacade(instance_factory=self.factory)
servr.module_configuration.health_check = appinfo.VmHealthCheck(
enable_health_check=False)
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
self.mox.StubOutWithMock(wsgi_servr, 'set_app')
wsgi_servr.port = 3
servr._wsgi_servers = [wsgi_servr]
servr._instances = [inst]
inst.quit(force=True)
wsgi_servr.set_app(mox.IsA(functools.partial))
self.factory.new_instance(0).AndReturn(new_inst)
new_inst.start()
self.mox.ReplayAll()
servr._restart_instance(inst)
self.mox.VerifyAll()
class TestManualScalingInstancePoolHandleScriptRequest(googletest.TestCase):
"""Tests for module.ManualScalingModule.handle."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.inst = self.mox.CreateMock(instance.Instance)
self.inst.instance_id = 0
self.environ = {}
self.start_response = object()
self.response = [object()]
self.url_map = object()
self.match = object()
self.request_id = object()
self.manual_module = ManualScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.manual_module, '_choose_instance')
self.mox.StubOutWithMock(self.manual_module, '_add_instance')
self.mox.StubOutWithMock(self.manual_module._condition, 'notify')
self.mox.stubs.Set(time, 'time', lambda: 0.0)
def tearDown(self):
self.mox.UnsetStubs()
def test_handle_script_request(self):
self.manual_module._choose_instance(10.0).AndReturn(self.inst)
self.inst.handle(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.NORMAL_REQUEST).AndReturn(self.response)
self.manual_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.manual_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_cannot_accept_request(self):
self.manual_module._choose_instance(10.0).AndReturn(self.inst)
self.manual_module._choose_instance(10.0).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.manual_module._condition.notify()
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.manual_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.manual_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_must_wait(self):
self.manual_module._choose_instance(10.0).AndReturn(None)
self.manual_module._choose_instance(10.0).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.manual_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.manual_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_timeout(self):
self.time = 0.0
def advance_time(*unused_args):
self.time += 11
self.mox.stubs.Set(time, 'time', lambda: self.time)
self.mox.StubOutWithMock(self.manual_module, '_error_response')
self.manual_module._choose_instance(10.0).WithSideEffects(advance_time)
self.manual_module._error_response(
self.environ, self.start_response, 503, mox.IgnoreArg()).AndReturn(
self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.manual_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
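  # In the timeout case the fake clock jumps by 11s while an instance is
  # being chosen, overshooting the 10s deadline, so the module answers with a
  # 503 via _error_response instead of retrying.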
class TestManualScalingInstancePoolChooseInstances(googletest.TestCase):
"""Tests for module.ManualScalingModule._choose_instance."""
class Instance(object):
def __init__(self, can_accept_requests):
self.can_accept_requests = can_accept_requests
def setUp(self):
self.mox = mox.Mox()
api_server.test_setup_stubs()
self.servr = ManualScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.servr._condition, 'wait')
self.time = 0
self.mox.stubs.Set(time, 'time', lambda: self.time)
def advance_time(self, *unused_args):
self.time += 10
def tearDown(self):
self.mox.UnsetStubs()
def test_choose_instance_first_can_accept(self):
instance1 = self.Instance(True)
instance2 = self.Instance(True)
self.servr._instances = [instance1, instance2]
self.mox.ReplayAll()
self.assertEqual(instance1, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_first_cannot_accept(self):
instance1 = self.Instance(False)
instance2 = self.Instance(True)
self.servr._instances = [instance1, instance2]
self.mox.ReplayAll()
self.assertEqual(instance2, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_none_can_accept(self):
instance1 = self.Instance(False)
instance2 = self.Instance(False)
self.servr._instances = [instance1, instance2]
self.servr._condition.wait(5).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
self.assertEqual(None, self.servr._choose_instance(5))
self.mox.VerifyAll()
def test_choose_instance_no_instances(self):
self.servr._condition.wait(5).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
self.assertEqual(None, self.servr._choose_instance(5))
self.mox.VerifyAll()
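  # Unlike the auto-scaling pool, the manual pool scans its instance list in
  # order and returns the first instance that can accept a request; when none
  # can, it waits on the condition until the deadline and returns None.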
class TestManualScalingInstancePoolSetNumInstances(googletest.TestCase):
"""Tests for module.ManualScalingModule.set_num_instances."""
def setUp(self):
self.mox = mox.Mox()
api_server.test_setup_stubs()
self.module = ManualScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self._instance = self.mox.CreateMock(instance.Instance)
self._wsgi_server = self.mox.CreateMock(wsgi_server.WsgiServer)
self._wsgi_server.port = 8080
self.module._instances = [self._instance]
self.module._wsgi_servers = [self._wsgi_server]
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.mox.StubOutWithMock(self.module, '_add_instance')
self.mox.StubOutWithMock(self.module, '_shutdown_instance')
def tearDown(self):
self.mox.UnsetStubs()
def test_no_op(self):
self.mox.ReplayAll()
self.assertEqual(1, self.module.get_num_instances())
self.module.set_num_instances(1)
self.mox.VerifyAll()
def test_add_an_instance(self):
self.module._add_instance()
self.mox.ReplayAll()
self.assertEqual(1, self.module.get_num_instances())
self.module.set_num_instances(2)
self.mox.VerifyAll()
def test_remove_an_instance(self):
module._THREAD_POOL.submit(self.module._quit_instance,
self._instance,
self._wsgi_server)
self._instance.quit(expect_shutdown=True)
self._wsgi_server.quit()
self.module._shutdown_instance(self._instance, 8080)
self.mox.ReplayAll()
self.assertEqual(1, self.module.get_num_instances())
self.module.set_num_instances(0)
self.module._quit_instance(self._instance,
self._wsgi_server)
self.mox.VerifyAll()
class TestManualScalingInstancePoolSuspendAndResume(googletest.TestCase):
"""Tests for module.ManualScalingModule.suspend and resume."""
def setUp(self):
self.mox = mox.Mox()
api_server.test_setup_stubs()
self.factory = self.mox.CreateMock(instance.InstanceFactory)
self.module = ManualScalingModuleFacade(
instance_factory=self.factory)
self._instance = self.mox.CreateMock(instance.Instance)
self._wsgi_server = wsgi_server.WsgiServer(('localhost', 0), None)
self.module._instances = [self._instance]
self.module._wsgi_servers = [self._wsgi_server]
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.mox.StubOutWithMock(self.module, '_shutdown_instance')
self._wsgi_server.start()
def tearDown(self):
self._wsgi_server.quit()
self.mox.UnsetStubs()
def test_already_suspended(self):
self.module._suspended = True
self.assertRaises(request_info.VersionAlreadyStoppedError,
self.module.suspend)
def test_already_resumed(self):
self.assertRaises(request_info.VersionAlreadyStartedError,
self.module.resume)
def test_suspend_instance(self):
module._THREAD_POOL.submit(self.module._suspend_instance, self._instance,
self._wsgi_server.port)
self._instance.quit(expect_shutdown=True)
port = object()
self.module._shutdown_instance(self._instance, port)
self.mox.ReplayAll()
self.module.suspend()
self.module._suspend_instance(self._instance, port)
self.mox.VerifyAll()
self.assertEqual(404, self._wsgi_server._error)
self.assertEqual(None, self._wsgi_server._app)
self.assertTrue(self.module._suspended)
def test_resume(self):
self.module._suspended = True
self.module._instances = [object()]
self.factory.new_instance(0, expect_ready_request=True).AndReturn(
self._instance)
module._THREAD_POOL.submit(self.module._start_instance, self._wsgi_server,
self._instance)
self.mox.ReplayAll()
self.module.resume()
self.mox.VerifyAll()
self.assertEqual(self.module._handle_request,
self._wsgi_server._app.func)
self.assertEqual({'inst': self._instance},
self._wsgi_server._app.keywords)
self.assertFalse(self.module._suspended)
def test_restart(self):
self._new_instance = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(0, expect_ready_request=True).AndReturn(
self._new_instance)
f = futures.Future()
f.set_result(True)
module._THREAD_POOL.submit(self.module._start_instance, self._wsgi_server,
self._new_instance).AndReturn(f)
self._instance.quit(force=True)
port = object()
self.mox.ReplayAll()
self.module.restart()
self.mox.VerifyAll()
self.assertEqual(self.module._handle_request,
self._wsgi_server._app.func)
self.assertEqual({'inst': self._new_instance},
self._wsgi_server._app.keywords)
self.assertFalse(self.module._suspended)
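# Note on test_restart above (a descriptive aside, not original code): the
# mocked _THREAD_POOL.submit returns a futures.Future whose result is
# already set, so any code that waits on the restart result proceeds
# immediately instead of blocking the test. In outline:
#
#     f = futures.Future()
#     f.set_result(True)
#     module._THREAD_POOL.submit(...).AndReturn(f)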
class TestManualScalingInstancePoolHandleChanges(googletest.TestCase):
"""Tests for module.ManualScalingModule._handle_changes."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.instance_factory = instance.InstanceFactory(object(), 10)
self.servr = ManualScalingModuleFacade(
instance_factory=self.instance_factory)
self.mox.StubOutWithMock(self.instance_factory, 'files_changed')
self.mox.StubOutWithMock(self.instance_factory, 'configuration_changed')
self.mox.StubOutWithMock(self.servr, 'restart')
self.mox.StubOutWithMock(self.servr, '_create_url_handlers')
self.mox.StubOutWithMock(self.servr._module_configuration,
'check_for_updates')
self.mox.StubOutWithMock(self.servr._watcher, 'changes')
def tearDown(self):
self.mox.UnsetStubs()
def test_no_changes(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_irrelevant_config_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_restart_config_change(self):
conf_change = frozenset([application_configuration.ENV_VARIABLES_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.instance_factory.configuration_changed(conf_change)
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_handler_change(self):
conf_change = frozenset([application_configuration.HANDLERS_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.servr._create_url_handlers()
self.instance_factory.configuration_changed(conf_change)
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_file_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn({'-'})
self.instance_factory.files_changed()
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_restart_config_change_suspended(self):
self.servr._suspended = True
conf_change = frozenset([application_configuration.ENV_VARIABLES_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.instance_factory.configuration_changed(conf_change)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_handler_change_suspended(self):
self.servr._suspended = True
conf_change = frozenset([application_configuration.HANDLERS_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.servr._create_url_handlers()
self.instance_factory.configuration_changed(conf_change)
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_file_change_suspended(self):
self.servr._suspended = True
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn({'-'})
self.instance_factory.files_changed()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
class TestBasicScalingModuleStart(googletest.TestCase):
"""Tests for module.BasicScalingModule._start_instance."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.mox.StubOutWithMock(module.Module, 'build_request_environ')
def tearDown(self):
self.mox.UnsetStubs()
def test_instance_start_success(self):
s = BasicScalingModuleFacade(balanced_port=8080)
self.mox.StubOutWithMock(s, '_handle_request')
self.mox.StubOutWithMock(s._condition, 'notify')
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 12345
s._wsgi_servers[0] = wsgi_servr
inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
s._instances[0] = inst
inst.start().AndReturn(True)
environ = object()
s.build_request_environ('GET', '/_ah/start', [], '', '0.1.0.3', 12345,
fake_login=True).AndReturn(environ)
s._handle_request(environ,
mox.IgnoreArg(),
inst=inst,
request_type=instance.READY_REQUEST)
s._condition.notify(1)
self.mox.ReplayAll()
s._start_instance(0)
self.mox.VerifyAll()
def test_instance_start_failure(self):
s = BasicScalingModuleFacade(balanced_port=8080)
self.mox.StubOutWithMock(s, '_handle_request')
self.mox.StubOutWithMock(s._condition, 'notify')
wsgi_servr = self.mox.CreateMock(wsgi_server.WsgiServer)
wsgi_servr.port = 12345
s._wsgi_servers[0] = wsgi_servr
inst = self.mox.CreateMock(instance.Instance)
inst.instance_id = 0
s._instances[0] = inst
inst.start().AndReturn(False)
self.mox.ReplayAll()
s._start_instance(0)
self.mox.VerifyAll()
def test_start_any_instance_success(self):
s = BasicScalingModuleFacade(balanced_port=8080)
s._instance_running = [True, False, False, True]
inst = object()
s._instances = [None, inst, None, None]
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
module._THREAD_POOL.submit(s._start_instance, 1)
self.mox.ReplayAll()
self.assertEqual(inst, s._start_any_instance())
self.mox.VerifyAll()
self.assertEqual([True, True, False, True], s._instance_running)
def test_start_any_instance_all_already_running(self):
s = BasicScalingModuleFacade(balanced_port=8080)
s._instance_running = [True, True, True, True]
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.mox.ReplayAll()
self.assertIsNone(s._start_any_instance())
self.mox.VerifyAll()
self.assertEqual([True, True, True, True], s._instance_running)
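# Note on the two tests above (a descriptive aside, not original code):
# _start_any_instance scans _instance_running for the first False slot,
# submits _start_instance for that index on the thread pool, flips the flag
# to True, and returns the matching entry from _instances; when every slot
# is already True it returns None and the bookkeeping is left untouched.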
class TestBasicScalingInstancePoolHandleScriptRequest(googletest.TestCase):
"""Tests for module.BasicScalingModule.handle."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.inst = self.mox.CreateMock(instance.Instance)
self.inst.instance_id = 0
self.environ = {}
self.start_response = object()
self.response = [object()]
self.url_map = object()
self.match = object()
self.request_id = object()
self.basic_module = BasicScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.StubOutWithMock(self.basic_module, '_choose_instance')
self.mox.StubOutWithMock(self.basic_module, '_start_any_instance')
self.mox.StubOutWithMock(self.basic_module, '_start_instance')
self.mox.StubOutWithMock(self.basic_module._condition, 'wait')
self.mox.StubOutWithMock(self.basic_module._condition, 'notify')
self.time = 10
self.mox.stubs.Set(time, 'time', lambda: self.time)
def advance_time(self, *unused_args):
self.time += 11
def tearDown(self):
self.mox.UnsetStubs()
def test_handle_script_request(self):
self.basic_module._choose_instance(20).AndReturn(self.inst)
self.inst.handle(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.NORMAL_REQUEST).AndReturn(self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_cannot_accept_request(self):
self.basic_module._choose_instance(20).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.basic_module._condition.notify()
self.basic_module._choose_instance(20).AndReturn(self.inst)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_timeout(self):
self.mox.StubOutWithMock(self.basic_module, '_error_response')
self.basic_module._choose_instance(20).WithSideEffects(self.advance_time)
self.basic_module._error_response(
self.environ, self.start_response, 503, mox.IgnoreArg()).AndReturn(
self.response)
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_instance(self):
self.inst.instance_id = 0
self.inst.has_quit = False
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
inst=self.inst))
self.mox.VerifyAll()
def test_handle_instance_start_the_instance(self):
self.inst.instance_id = 0
self.inst.has_quit = False
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.basic_module._start_instance(0).AndReturn(True)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
inst=self.inst))
self.mox.VerifyAll()
def test_handle_instance_already_running(self):
self.inst.instance_id = 0
self.inst.has_quit = False
self.basic_module._instance_running[0] = True
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.inst.wait(20)
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndReturn(
self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
inst=self.inst))
self.mox.VerifyAll()
def test_handle_instance_timeout(self):
self.mox.StubOutWithMock(self.basic_module, '_error_response')
self.inst.instance_id = 0
self.inst.has_quit = False
self.basic_module._instance_running[0] = True
self.inst.handle(
self.environ, self.start_response, self.url_map, self.match,
self.request_id, instance.NORMAL_REQUEST).AndRaise(
instance.CannotAcceptRequests)
self.inst.wait(20).WithSideEffects(self.advance_time)
self.basic_module._error_response(self.environ, self.start_response,
503).AndReturn(self.response)
self.basic_module._condition.notify()
self.mox.ReplayAll()
self.assertEqual(
self.response,
self.basic_module._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
inst=self.inst))
self.mox.VerifyAll()
class TestBasicScalingInstancePoolChooseInstances(googletest.TestCase):
"""Tests for module.BasicScalingModule._choose_instance."""
class Instance(object):
def __init__(self, can_accept_requests):
self.can_accept_requests = can_accept_requests
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.servr = BasicScalingModuleFacade(
instance_factory=instance.InstanceFactory(object(), 10))
self.mox.stubs.Set(time, 'time', lambda: self.time)
self.mox.StubOutWithMock(self.servr._condition, 'wait')
self.mox.StubOutWithMock(self.servr, '_start_any_instance')
self.time = 0
def tearDown(self):
self.mox.UnsetStubs()
def advance_time(self, *unused_args):
self.time += 10
def test_choose_instance_first_can_accept(self):
instance1 = self.Instance(True)
instance2 = self.Instance(True)
self.servr._instances = [instance1, instance2]
self.mox.ReplayAll()
self.assertEqual(instance1, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_first_cannot_accept(self):
instance1 = self.Instance(False)
instance2 = self.Instance(True)
self.servr._instances = [instance1, instance2]
self.mox.ReplayAll()
self.assertEqual(instance2, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_none_can_accept(self):
instance1 = self.Instance(False)
instance2 = self.Instance(False)
self.servr._instance_running = [True, True]
self.servr._instances = [instance1, instance2]
self.servr._start_any_instance().AndReturn(None)
self.servr._condition.wait(1).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
self.assertEqual(None, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_start_an_instance(self):
instance1 = self.Instance(False)
instance2 = self.Instance(False)
mock_instance = self.mox.CreateMock(instance.Instance)
self.servr._instances = [instance1, instance2]
self.servr._instance_running = [True, False]
self.servr._start_any_instance().AndReturn(mock_instance)
mock_instance.wait(1)
self.mox.ReplayAll()
self.assertEqual(mock_instance, self.servr._choose_instance(1))
self.mox.VerifyAll()
def test_choose_instance_no_instances(self):
self.servr._start_any_instance().AndReturn(None)
self.servr._condition.wait(1).WithSideEffects(self.advance_time)
self.mox.ReplayAll()
self.assertEqual(None, self.servr._choose_instance(1))
self.mox.VerifyAll()
class TestBasicScalingInstancePoolInstanceManagement(googletest.TestCase):
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.factory = self.mox.CreateMock(instance.InstanceFactory)
self.factory.max_concurrent_requests = 10
self.mox.StubOutWithMock(module._THREAD_POOL, 'submit')
self.module = BasicScalingModuleFacade(instance_factory=self.factory,
host='localhost')
self.wsgi_server = self.module._wsgi_servers[0]
self.wsgi_server.start()
def tearDown(self):
self.wsgi_server.quit()
self.mox.UnsetStubs()
def test_restart(self):
old_instances = [self.mox.CreateMock(instance.Instance),
self.mox.CreateMock(instance.Instance)]
self.module._instances = old_instances[:]
self.module._instance_running = [True, False]
new_instance = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(0, expect_ready_request=True).AndReturn(
new_instance)
module._THREAD_POOL.submit(self.module._start_instance, 0)
old_instances[0].quit(expect_shutdown=True)
module._THREAD_POOL.submit(self.module._shutdown_instance, old_instances[0],
self.wsgi_server.port)
self.mox.ReplayAll()
self.module.restart()
self.mox.VerifyAll()
self.assertEqual([True, False], self.module._instance_running)
self.assertEqual(new_instance, self.module._instances[0])
self.assertEqual(self.module._handle_request,
self.module._wsgi_servers[0]._app.func)
self.assertEqual({'inst': new_instance},
self.module._wsgi_servers[0]._app.keywords)
def test_shutdown_idle_instances(self):
s = BasicScalingModuleFacade(instance_factory=self.factory)
old_instances = [self.mox.CreateMock(instance.Instance),
self.mox.CreateMock(instance.Instance),
self.mox.CreateMock(instance.Instance)]
self.module._instances = old_instances[:]
old_instances[0].idle_seconds = (self.module._instance_idle_timeout + 1)
old_instances[1].idle_seconds = 0
old_instances[2].idle_seconds = (self.module._instance_idle_timeout + 1)
self.module._instance_running = [True, True, False]
new_instance = self.mox.CreateMock(instance.Instance)
self.factory.new_instance(0, expect_ready_request=True).AndReturn(
new_instance)
old_instances[0].quit(expect_shutdown=True)
module._THREAD_POOL.submit(self.module._shutdown_instance, old_instances[0],
self.wsgi_server.port)
self.mox.ReplayAll()
self.module._shutdown_idle_instances()
self.mox.VerifyAll()
self.assertEqual([False, True, False], self.module._instance_running)
self.assertEqual(new_instance, self.module._instances[0])
self.assertEqual(self.module._handle_request,
self.module._wsgi_servers[0]._app.func)
self.assertEqual({'inst': new_instance},
self.module._wsgi_servers[0]._app.keywords)
class TestBasicScalingInstancePoolHandleChanges(googletest.TestCase):
"""Tests for module.BasicScalingModule._handle_changes."""
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.instance_factory = instance.InstanceFactory(object(), 10)
self.servr = BasicScalingModuleFacade(
instance_factory=self.instance_factory)
self.mox.StubOutWithMock(self.instance_factory, 'files_changed')
self.mox.StubOutWithMock(self.instance_factory, 'configuration_changed')
self.mox.StubOutWithMock(self.servr, 'restart')
self.mox.StubOutWithMock(self.servr, '_create_url_handlers')
self.mox.StubOutWithMock(self.servr._module_configuration,
'check_for_updates')
self.mox.StubOutWithMock(self.servr._watcher.__class__, 'changes')
def tearDown(self):
self.mox.UnsetStubs()
def test_no_changes(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_irrelevant_config_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn(set())
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_restart_config_change(self):
conf_change = frozenset([application_configuration.ENV_VARIABLES_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.instance_factory.configuration_changed(conf_change)
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_handler_change(self):
conf_change = frozenset([application_configuration.HANDLERS_CHANGED])
self.servr._module_configuration.check_for_updates().AndReturn(conf_change)
self.servr._watcher.changes(0).AndReturn(set())
self.servr._create_url_handlers()
self.instance_factory.configuration_changed(conf_change)
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
def test_file_change(self):
self.servr._module_configuration.check_for_updates().AndReturn(frozenset())
self.servr._watcher.changes(0).AndReturn({'-'})
self.instance_factory.files_changed().AndReturn(True)
self.servr.restart()
self.mox.ReplayAll()
self.servr._handle_changes()
self.mox.VerifyAll()
class TestInteractiveCommandModule(googletest.TestCase):
def setUp(self):
api_server.test_setup_stubs()
self.mox = mox.Mox()
self.inst = self.mox.CreateMock(instance.Instance)
self.inst.instance_id = 0
self.environ = object()
self.start_response = object()
self.response = [object()]
self.url_map = object()
self.match = object()
self.request_id = object()
self.servr = module.InteractiveCommandModule(
ModuleConfigurationStub(),
'fakehost',
balanced_port=8000,
api_host='localhost',
api_port=9000,
auth_domain='gmail.com',
runtime_stderr_loglevel=1,
php_config=None,
python_config=None,
java_config=None,
cloud_sql_config=None,
vm_config=None,
default_version_port=8080,
port_registry=dispatcher.PortRegistry(),
request_data=None,
dispatcher=None,
use_mtime_file_watcher=False,
allow_skipped_files=False,
threadsafe_override=None)
self.mox.StubOutWithMock(self.servr._instance_factory, 'new_instance')
self.mox.StubOutWithMock(self.servr, '_handle_request')
self.mox.StubOutWithMock(self.servr, 'build_request_environ')
def test_send_interactive_command(self):
def good_response(unused_environ, start_response, request_type):
start_response('200 OK', [])
return ['10\n']
environ = object()
self.servr.build_request_environ(
'POST', '/', [], 'print 5+5', '192.0.2.0', 8000).AndReturn(environ)
self.servr._handle_request(
environ,
mox.IgnoreArg(),
request_type=instance.INTERACTIVE_REQUEST).WithSideEffects(
good_response)
self.mox.ReplayAll()
self.assertEqual('10\n', self.servr.send_interactive_command('print 5+5'))
self.mox.VerifyAll()
def test_send_interactive_command_handle_request_exception(self):
environ = object()
self.servr.build_request_environ(
'POST', '/', [], 'print 5+5', '192.0.2.0', 8000).AndReturn(environ)
self.servr._handle_request(
environ,
mox.IgnoreArg(),
request_type=instance.INTERACTIVE_REQUEST).AndRaise(Exception('error'))
self.mox.ReplayAll()
self.assertRaisesRegexp(module.InteractiveCommandError,
'error',
self.servr.send_interactive_command,
'print 5+5')
self.mox.VerifyAll()
def test_send_interactive_command_handle_request_failure(self):
def failure_response(unused_environ, start_response, request_type):
start_response('503 Service Unavailable', [])
return ['Instance was restarted while executing command']
environ = object()
self.servr.build_request_environ(
'POST', '/', [], 'print 5+5', '192.0.2.0', 8000).AndReturn(environ)
self.servr._handle_request(
environ,
mox.IgnoreArg(),
request_type=instance.INTERACTIVE_REQUEST).WithSideEffects(
failure_response)
self.mox.ReplayAll()
self.assertRaisesRegexp(module.InteractiveCommandError,
'Instance was restarted while executing command',
self.servr.send_interactive_command,
'print 5+5')
self.mox.VerifyAll()
def test_handle_script_request(self):
self.servr._instance_factory.new_instance(
mox.IgnoreArg(),
expect_ready_request=False).AndReturn(self.inst)
self.inst.start()
self.inst.handle(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.INTERACTIVE_REQUEST).AndReturn(['10\n'])
self.mox.ReplayAll()
self.assertEqual(
['10\n'],
self.servr._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_script_request_busy(self):
self.servr._instance_factory.new_instance(
mox.IgnoreArg(),
expect_ready_request=False).AndReturn(self.inst)
self.inst.start()
self.inst.handle(
self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.INTERACTIVE_REQUEST).AndRaise(instance.CannotAcceptRequests())
self.inst.wait(mox.IgnoreArg())
self.inst.handle(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.INTERACTIVE_REQUEST).AndReturn(['10\n'])
self.mox.ReplayAll()
self.assertEqual(
['10\n'],
self.servr._handle_script_request(self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
def test_handle_script_request_timeout(self):
old_timeout = self.servr._MAX_REQUEST_WAIT_TIME
try:
self.servr._MAX_REQUEST_WAIT_TIME = 0
start_response = start_response_utils.CapturingStartResponse()
self.mox.ReplayAll()
self.assertEqual(
['The command timed-out while waiting for another one to complete'],
self.servr._handle_script_request(self.environ,
start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
self.assertEqual('503 Service Unavailable',
start_response.status)
finally:
self.servr._MAX_REQUEST_WAIT_TIME = old_timeout
def test_handle_script_request_restart(self):
def restart_and_raise(*args):
self.servr._inst = None
raise httplib.BadStatusLine('line')
start_response = start_response_utils.CapturingStartResponse()
self.servr._instance_factory.new_instance(
mox.IgnoreArg(),
expect_ready_request=False).AndReturn(self.inst)
self.inst.start()
self.inst.handle(
self.environ,
start_response,
self.url_map,
self.match,
self.request_id,
instance.INTERACTIVE_REQUEST).WithSideEffects(restart_and_raise)
self.mox.ReplayAll()
self.assertEqual(
['Instance was restarted while executing command'],
self.servr._handle_script_request(self.environ,
start_response,
self.url_map,
self.match,
self.request_id))
self.mox.VerifyAll()
self.assertEqual('503 Service Unavailable',
start_response.status)
def test_handle_script_request_unexpected_instance_exception(self):
self.servr._instance_factory.new_instance(
mox.IgnoreArg(),
expect_ready_request=False).AndReturn(self.inst)
self.inst.start()
self.inst.handle(
self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id,
instance.INTERACTIVE_REQUEST).AndRaise(httplib.BadStatusLine('line'))
self.mox.ReplayAll()
self.assertRaises(
httplib.BadStatusLine,
self.servr._handle_script_request,
self.environ,
self.start_response,
self.url_map,
self.match,
self.request_id)
self.mox.VerifyAll()
class InstanceFactoryTest(googletest.TestCase):
"""Tests for the _create_instance_factory method."""
def setUp(self):
self.mox = mox.Mox()
if os.environ.get('GAE_LOCAL_VM_RUNTIME'):
del os.environ['GAE_LOCAL_VM_RUNTIME']
def tearDown(self):
self.mox.UnsetStubs()
self.mox.VerifyAll()
def _run_test(self, runtime, vm, expected_factory_class):
if vm:
module_stub = ModuleFacade(vm_config=runtime_config_pb2.VMConfig())
module_configuration = ModuleConfigurationStub(runtime='vm')
module_configuration.effective_runtime = runtime
else:
module_stub = ModuleFacade()
module_configuration = ModuleConfigurationStub(runtime=runtime)
self.mox.ReplayAll()
instance_factory = module_stub._create_instance_factory(
module_configuration)
self.assertIsInstance(instance_factory, expected_factory_class)
def test_non_vm_python(self):
self._run_test('python', False, python_runtime.PythonRuntimeInstanceFactory)
def test_non_vm_go(self):
self.mox.StubOutWithMock(go_application, 'GoApplication')
go_application.GoApplication(mox.IgnoreArg())
self._run_test('go', False, go_runtime.GoRuntimeInstanceFactory)
def test_non_vm_java(self):
self.mox.StubOutWithMock(
java_runtime.JavaRuntimeInstanceFactory, '_make_java_command')
java_runtime.JavaRuntimeInstanceFactory._make_java_command()
self._run_test('java', False, java_runtime.JavaRuntimeInstanceFactory)
def test_vm_python(self):
self.mox.StubOutWithMock(containers, 'NewDockerClient')
containers.NewDockerClient(version=mox.IgnoreArg(), timeout=mox.IgnoreArg())
self._run_test(
'python27', True, vm_runtime_factory.VMRuntimeInstanceFactory)
def test_vm_disabled(self):
os.environ['GAE_LOCAL_VM_RUNTIME'] = '1'
self._run_test('python', True, python_runtime.PythonRuntimeInstanceFactory)
def test_custom(self):
self.mox.StubOutWithMock(containers, 'NewDockerClient')
containers.NewDockerClient(version=mox.IgnoreArg(), timeout=mox.IgnoreArg())
self._run_test(
'custom', True, vm_runtime_factory.VMRuntimeInstanceFactory)
def test_custom_vm_disabled(self):
os.environ['GAE_LOCAL_VM_RUNTIME'] = '1'
self.assertRaises(
RuntimeError,
self._run_test,
'custom',
True,
vm_runtime_factory.VMRuntimeInstanceFactory)
if __name__ == '__main__':
googletest.main()
| 37.180472 | 80 | 0.67651 | 11,753 | 105,481 | 5.800562 | 0.048328 | 0.042612 | 0.033884 | 0.024349 | 0.857204 | 0.821516 | 0.763576 | 0.718192 | 0.688474 | 0.660296 | 0 | 0.012378 | 0.216503 | 105,481 | 2,836 | 81 | 37.193583 | 0.812533 | 0.020705 | 0 | 0.734966 | 0 | 0 | 0.039978 | 0.004315 | 0 | 0 | 0 | 0 | 0.061856 | 1 | 0.09579 | false | 0.00043 | 0.012887 | 0.004725 | 0.133162 | 0.002577 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc4edf28dac39757c9fbe4459c51f3d0efa0dbb5 | 100 | py | Python | util/parsing.py | giuseppe/quay | a1b7e4b51974edfe86f66788621011eef2667e6a | ["Apache-2.0"] | 2,027 | 2019-11-12T18:05:48.000Z | 2022-03-31T22:25:04.000Z | util/parsing.py | giuseppe/quay | a1b7e4b51974edfe86f66788621011eef2667e6a | ["Apache-2.0"] | 496 | 2019-11-12T18:13:37.000Z | 2022-03-31T10:43:45.000Z | util/parsing.py | giuseppe/quay | a1b7e4b51974edfe86f66788621011eef2667e6a | ["Apache-2.0"] | 249 | 2019-11-12T18:02:27.000Z | 2022-03-22T12:19:19.000Z | def truthy_bool(param):
return param not in {False, "false", "False", "0", "FALSE", "", "null"}
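# A hedged usage sketch (added for illustration; not part of the original
# util/parsing.py): the set above enumerates the accepted falsy markers, so
# anything outside it (even "no" or "off") is treated as truthy. Note that
# 0 == False in Python, so the integer 0 is rejected as well.
if __name__ == "__main__":
    assert truthy_bool("true") is True
    assert truthy_bool("off") is True   # not in the falsy set, hence truthy
    assert truthy_bool("FALSE") is False
    assert truthy_bool("") is False
    assert truthy_bool(0) is False      # 0 == False, matches the False member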
| 33.333333 | 75 | 0.6 | 14 | 100 | 4.214286 | 0.714286 | 0.338983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.17 | 100 | 2 | 76 | 50 | 0.698795 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
fca8fba390c7dd155bcd07c0bcb0b89255c2d1f2 | 2,099 | py | Python | packages/pyright-internal/src/tests/samples/typeVar6.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | ["MIT"] | 4,391 | 2019-05-07T01:18:57.000Z | 2022-03-31T20:45:44.000Z | packages/pyright-internal/src/tests/samples/typeVar6.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | ["MIT"] | 2,740 | 2019-05-07T03:29:30.000Z | 2022-03-31T12:57:46.000Z | packages/pyright-internal/src/tests/samples/typeVar6.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | ["MIT"] | 455 | 2019-05-07T12:55:14.000Z | 2022-03-31T17:09:15.000Z | # This sample tests that generic type variables
# with a bound type properly generate errors. It tests
# both class-defined and function-defined type variables.
from typing import Generic, TypeVar, Union
class Foo:
var1: int
def __call__(self, val: int):
pass
def do_stuff(self) -> int:
return 0
class Bar:
var1: int
var2: int
def __call__(self, val: int):
pass
def do_stuff(self) -> float:
return 0
def do_other_stuff(self) -> float:
return 0
_T1 = TypeVar("_T1", bound=Foo)
_T2 = TypeVar("_T2", bound=Union[Foo, Bar])
class ClassA(Generic[_T1]):
async def func1(self, a: _T1) -> _T1:
_ = a.var1
# This should generate an error.
_ = a.var2
_ = a(3)
# This should generate an error.
_ = a(3.3)
# This should generate an error.
_ = a[0]
# This should generate an error.
_ = a + 1
# This should generate an error.
_ = -a
# This should generate an error.
a += 3
# This should generate an error.
_ = await a
# This should generate an error.
for _ in a:
pass
a.do_stuff()
# This should generate an error.
a.do_other_stuff()
_ = a.__class__
_ = a.__doc__
return a
async def func2(self, a: _T2) -> _T2:
_ = a.var1
# This should generate an error.
_ = a.var2
_ = a(3)
# This should generate an error.
_ = a(3.3)
# This should generate two errors.
_ = a[0]
# This should generate an error.
_ = a + 1
# This should generate an error.
_ = -a
# This should generate an error.
a += 3
# This should generate an error.
_ = await a
# This should generate an error.
for _ in a:
pass
a.do_stuff()
# This should generate an error.
a.do_other_stuff()
_ = a.__class__
_ = a.__doc__
return a
| 18.094828 | 57 | 0.525488 | 264 | 2,099 | 3.931818 | 0.212121 | 0.17341 | 0.312139 | 0.327553 | 0.678227 | 0.646435 | 0.646435 | 0.639692 | 0.639692 | 0.639692 | 0 | 0.025822 | 0.391139 | 2,099 | 115 | 58 | 18.252174 | 0.786385 | 0.340162 | 0 | 0.773585 | 1 | 0 | 0.004402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09434 | false | 0.075472 | 0.018868 | 0.056604 | 0.320755 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
5d86907621f8906115faaf4552ee860eae6f1908 | 185,674 | py | Python | glance/tests/unit/v2/test_images_resource.py | arvindn05/glance | 055d15a6ba5d132f649156eac0fc91f4cd2813e4 | ["Apache-2.0"] | null | null | null | glance/tests/unit/v2/test_images_resource.py | arvindn05/glance | 055d15a6ba5d132f649156eac0fc91f4cd2813e4 | ["Apache-2.0"] | null | null | null | glance/tests/unit/v2/test_images_resource.py | arvindn05/glance | 055d15a6ba5d132f649156eac0fc91f4cd2813e4 | ["Apache-2.0"] | null | null | null | # Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import eventlet
import uuid
import glance_store as store
import mock
from oslo_serialization import jsonutils
import six
from six.moves import http_client as http
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import testtools
import webob
import glance.api.v2.image_actions
import glance.api.v2.images
from glance.common import exception
from glance import domain
import glance.schema
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils
DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
ISOTIME = '2012-05-16T15:27:36Z'
BASE_URI = unit_test_utils.BASE_URI
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'
CHKSUM = '93264c3edf5972c9f1cb309543d38a5c'
CHKSUM1 = '43254c3edf6972c9f1cb309543d38a8c'
def _db_fixture(id, **kwargs):
obj = {
'id': id,
'name': None,
'visibility': 'shared',
'properties': {},
'checksum': None,
'owner': None,
'status': 'queued',
'tags': [],
'size': None,
'virtual_size': None,
'locations': [],
'protected': False,
'disk_format': None,
'container_format': None,
'deleted': False,
'min_ram': None,
'min_disk': None,
}
obj.update(kwargs)
return obj
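# Illustrative call (a descriptive aside, not additional fixture data):
# _db_fixture(UUID1, owner=TENANT1, status='active') returns the defaults
# above with 'owner' and 'status' overridden, ready to be handed to
# db.image_create() as the tests below do.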
def _domain_fixture(id, **kwargs):
properties = {
'image_id': id,
'name': None,
'visibility': 'private',
'checksum': None,
'owner': None,
'status': 'queued',
'size': None,
'virtual_size': None,
'locations': [],
'protected': False,
'disk_format': None,
'container_format': None,
'min_ram': None,
'min_disk': None,
'tags': [],
}
properties.update(kwargs)
return glance.domain.Image(**properties)
def _db_image_member_fixture(image_id, member_id, **kwargs):
obj = {
'image_id': image_id,
'member': member_id,
}
obj.update(kwargs)
return obj
class FakeImage(object):
def __init__(self, status='active', container_format='ami',
disk_format='ami'):
self.id = UUID4
self.status = status
self.container_format = container_format
self.disk_format = disk_format
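# The tests below follow mox's record/replay/verify cycle. A minimal sketch
# of the idiom (illustrative only; SomeClass is hypothetical):
#
#     m = mox.Mox()
#     obj = m.CreateMock(SomeClass)      # record phase: declare expectations
#     obj.method('arg').AndReturn(42)    # expected call and canned result
#     m.ReplayAll()                      # switch mocks into replay mode
#     assert obj.method('arg') == 42     # exercise the code under test
#     m.VerifyAll()                      # fail if any expectation went unmet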
class TestImagesController(base.IsolatedUnitTest):
def setUp(self):
super(TestImagesController, self).setUp()
self.db = unit_test_utils.FakeDB(initialize=False)
self.policy = unit_test_utils.FakePolicyEnforcer()
self.notifier = unit_test_utils.FakeNotifier()
self.store = unit_test_utils.FakeStoreAPI()
for i in range(1, 4):
self.store.data['%s/fake_location_%i' % (BASE_URI, i)] = ('Z', 1)
self.store_utils = unit_test_utils.FakeStoreUtils(self.store)
self._create_images()
self._create_image_members()
self.controller = glance.api.v2.images.ImagesController(self.db,
self.policy,
self.notifier,
self.store)
self.action_controller = (glance.api.v2.image_actions.
ImageActionsController(self.db,
self.policy,
self.notifier,
self.store))
self.controller.gateway.store_utils = self.store_utils
store.create_stores()
def _create_images(self):
self.images = [
_db_fixture(UUID1, owner=TENANT1, checksum=CHKSUM,
name='1', size=256, virtual_size=1024,
visibility='public',
locations=[{'url': '%s/%s' % (BASE_URI, UUID1),
'metadata': {}, 'status': 'active'}],
disk_format='raw',
container_format='bare',
status='active'),
_db_fixture(UUID2, owner=TENANT1, checksum=CHKSUM1,
name='2', size=512, virtual_size=2048,
visibility='public',
disk_format='raw',
container_format='bare',
status='active',
tags=['redhat', '64bit', 'power'],
properties={'hypervisor_type': 'kvm', 'foo': 'bar',
'bar': 'foo'}),
_db_fixture(UUID3, owner=TENANT3, checksum=CHKSUM1,
name='3', size=512, virtual_size=2048,
visibility='public', tags=['windows', '64bit', 'x86']),
_db_fixture(UUID4, owner=TENANT4, name='4',
size=1024, virtual_size=3072),
]
[self.db.image_create(None, image) for image in self.images]
self.db.image_tag_set_all(None, UUID1, ['ping', 'pong'])
def _create_image_members(self):
self.image_members = [
_db_image_member_fixture(UUID4, TENANT2),
_db_image_member_fixture(UUID4, TENANT3,
status='accepted'),
]
[self.db.image_member_create(None, image_member)
for image_member in self.image_members]
def test_index(self):
self.config(limit_param_default=1, api_limit_max=3)
request = unit_test_utils.get_fake_request()
output = self.controller.index(request)
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID3])
self.assertEqual(expected, actual)
def test_index_member_status_accepted(self):
self.config(limit_param_default=5, api_limit_max=5)
request = unit_test_utils.get_fake_request(tenant=TENANT2)
output = self.controller.index(request)
self.assertEqual(3, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID1, UUID2, UUID3])
# TENANT2 can see only the public images (its UUID4 membership is pending)
self.assertEqual(expected, actual)
request = unit_test_utils.get_fake_request(tenant=TENANT3)
output = self.controller.index(request)
self.assertEqual(4, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID1, UUID2, UUID3, UUID4])
self.assertEqual(expected, actual)
def test_index_admin(self):
request = unit_test_utils.get_fake_request(is_admin=True)
output = self.controller.index(request)
self.assertEqual(4, len(output['images']))
def test_index_admin_deleted_images_hidden(self):
request = unit_test_utils.get_fake_request(is_admin=True)
self.controller.delete(request, UUID1)
output = self.controller.index(request)
self.assertEqual(3, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID3, UUID4])
self.assertEqual(expected, actual)
def test_index_return_parameters(self):
self.config(limit_param_default=1, api_limit_max=3)
request = unit_test_utils.get_fake_request()
output = self.controller.index(request, marker=UUID3, limit=1,
sort_key=['created_at'],
sort_dir=['desc'])
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2])
self.assertEqual(actual, expected)
self.assertEqual(UUID2, output['next_marker'])
def test_index_next_marker(self):
self.config(limit_param_default=1, api_limit_max=3)
request = unit_test_utils.get_fake_request()
output = self.controller.index(request, marker=UUID3, limit=2)
self.assertEqual(2, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID1])
self.assertEqual(expected, actual)
self.assertEqual(UUID1, output['next_marker'])
def test_index_no_next_marker(self):
self.config(limit_param_default=1, api_limit_max=3)
request = unit_test_utils.get_fake_request()
output = self.controller.index(request, marker=UUID1, limit=2)
self.assertEqual(0, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([])
self.assertEqual(expected, actual)
self.assertNotIn('next_marker', output)
def test_index_with_id_filter(self):
request = unit_test_utils.get_fake_request('/images?id=%s' % UUID1)
output = self.controller.index(request, filters={'id': UUID1})
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID1])
self.assertEqual(expected, actual)
def test_index_with_checksum_filter_single_image(self):
req = unit_test_utils.get_fake_request('/images?checksum=%s' % CHKSUM)
output = self.controller.index(req, filters={'checksum': CHKSUM})
self.assertEqual(1, len(output['images']))
actual = list([image.image_id for image in output['images']])
expected = [UUID1]
self.assertEqual(expected, actual)
def test_index_with_checksum_filter_multiple_images(self):
req = unit_test_utils.get_fake_request('/images?checksum=%s' % CHKSUM1)
output = self.controller.index(req, filters={'checksum': CHKSUM1})
self.assertEqual(2, len(output['images']))
actual = list([image.image_id for image in output['images']])
expected = [UUID3, UUID2]
self.assertEqual(expected, actual)
def test_index_with_non_existent_checksum(self):
req = unit_test_utils.get_fake_request('/images?checksum=236231827')
output = self.controller.index(req, filters={'checksum': '236231827'})
self.assertEqual(0, len(output['images']))
def test_index_size_max_filter(self):
request = unit_test_utils.get_fake_request('/images?size_max=512')
output = self.controller.index(request, filters={'size_max': 512})
self.assertEqual(3, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID1, UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_size_min_filter(self):
request = unit_test_utils.get_fake_request('/images?size_min=512')
output = self.controller.index(request, filters={'size_min': 512})
self.assertEqual(2, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_size_range_filter(self):
path = '/images?size_min=512&size_max=512'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'size_min': 512,
'size_max': 512})
self.assertEqual(2, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_virtual_size_max_filter(self):
ref = '/images?virtual_size_max=2048'
request = unit_test_utils.get_fake_request(ref)
output = self.controller.index(request,
filters={'virtual_size_max': 2048})
self.assertEqual(3, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID1, UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_virtual_size_min_filter(self):
ref = '/images?virtual_size_min=2048'
request = unit_test_utils.get_fake_request(ref)
output = self.controller.index(request,
filters={'virtual_size_min': 2048})
self.assertEqual(2, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_virtual_size_range_filter(self):
path = '/images?virtual_size_min=512&virtual_size_max=2048'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'virtual_size_min': 2048,
'virtual_size_max': 2048})
self.assertEqual(2, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID2, UUID3])
self.assertEqual(expected, actual)
def test_index_with_invalid_max_range_filter_value(self):
request = unit_test_utils.get_fake_request('/images?size_max=blah')
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.index,
request,
filters={'size_max': 'blah'})
def test_index_with_filters_return_many(self):
path = '/images?status=queued'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, filters={'status': 'queued'})
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID3])
self.assertEqual(expected, actual)
def test_index_with_nonexistent_name_filter(self):
request = unit_test_utils.get_fake_request('/images?name=%s' % 'blah')
images = self.controller.index(request,
filters={'name': 'blah'})['images']
self.assertEqual(0, len(images))
def test_index_with_non_default_is_public_filter(self):
private_uuid = str(uuid.uuid4())
new_image = _db_fixture(private_uuid,
visibility='private',
owner=TENANT3)
self.db.image_create(None, new_image)
path = '/images?visibility=private'
request = unit_test_utils.get_fake_request(path, is_admin=True)
output = self.controller.index(request,
filters={'visibility': 'private'})
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([private_uuid])
self.assertEqual(expected, actual)
path = '/images?visibility=shared'
request = unit_test_utils.get_fake_request(path, is_admin=True)
output = self.controller.index(request,
filters={'visibility': 'shared'})
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID4])
self.assertEqual(expected, actual)
def test_index_with_many_filters(self):
url = '/images?status=queued&name=3'
request = unit_test_utils.get_fake_request(url)
output = self.controller.index(request,
filters={
'status': 'queued',
'name': '3',
})
self.assertEqual(1, len(output['images']))
actual = set([image.image_id for image in output['images']])
expected = set([UUID3])
self.assertEqual(expected, actual)
def test_index_with_marker(self):
self.config(limit_param_default=1, api_limit_max=3)
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, marker=UUID3)
actual = set([image.image_id for image in output['images']])
self.assertEqual(1, len(actual))
self.assertIn(UUID2, actual)
def test_index_with_limit(self):
path = '/images'
limit = 2
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, limit=limit)
actual = set([image.image_id for image in output['images']])
self.assertEqual(limit, len(actual))
self.assertIn(UUID3, actual)
self.assertIn(UUID2, actual)
def test_index_greater_than_limit_max(self):
self.config(limit_param_default=1, api_limit_max=3)
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, limit=4)
actual = set([image.image_id for image in output['images']])
self.assertEqual(3, len(actual))
self.assertNotIn(output['next_marker'], output)
def test_index_default_limit(self):
self.config(limit_param_default=1, api_limit_max=3)
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request)
actual = set([image.image_id for image in output['images']])
self.assertEqual(1, len(actual))
def test_index_with_sort_dir(self):
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, sort_dir=['asc'], limit=3)
actual = [image.image_id for image in output['images']]
self.assertEqual(3, len(actual))
self.assertEqual(UUID1, actual[0])
self.assertEqual(UUID2, actual[1])
self.assertEqual(UUID3, actual[2])
def test_index_with_sort_key(self):
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, sort_key=['created_at'],
limit=3)
actual = [image.image_id for image in output['images']]
self.assertEqual(3, len(actual))
self.assertEqual(UUID3, actual[0])
self.assertEqual(UUID2, actual[1])
self.assertEqual(UUID1, actual[2])
def test_index_with_multiple_sort_keys(self):
path = '/images'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
sort_key=['created_at', 'name'],
limit=3)
actual = [image.image_id for image in output['images']]
self.assertEqual(3, len(actual))
self.assertEqual(UUID3, actual[0])
self.assertEqual(UUID2, actual[1])
self.assertEqual(UUID1, actual[2])
def test_index_with_marker_not_found(self):
fake_uuid = str(uuid.uuid4())
path = '/images'
request = unit_test_utils.get_fake_request(path)
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.index, request, marker=fake_uuid)
def test_index_invalid_sort_key(self):
path = '/images'
request = unit_test_utils.get_fake_request(path)
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.index, request, sort_key=['foo'])
def test_index_zero_images(self):
self.db.reset()
request = unit_test_utils.get_fake_request()
output = self.controller.index(request)
self.assertEqual([], output['images'])
def test_index_with_tags(self):
path = '/images?tag=64bit'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request, filters={'tags': ['64bit']})
actual = [image.tags for image in output['images']]
self.assertEqual(2, len(actual))
self.assertIn('64bit', actual[0])
self.assertIn('64bit', actual[1])
def test_index_with_multi_tags(self):
path = '/images?tag=power&tag=64bit'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'tags': ['power', '64bit']})
actual = [image.tags for image in output['images']]
self.assertEqual(1, len(actual))
self.assertIn('64bit', actual[0])
self.assertIn('power', actual[0])
def test_index_with_multi_tags_and_nonexistent(self):
path = '/images?tag=power&tag=fake'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'tags': ['power', 'fake']})
actual = [image.tags for image in output['images']]
self.assertEqual(0, len(actual))
def test_index_with_tags_and_properties(self):
path = '/images?tag=64bit&hypervisor_type=kvm'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'tags': ['64bit'],
'hypervisor_type': 'kvm'})
tags = [image.tags for image in output['images']]
properties = [image.extra_properties for image in output['images']]
self.assertEqual(len(tags), len(properties))
self.assertIn('64bit', tags[0])
self.assertEqual('kvm', properties[0]['hypervisor_type'])
def test_index_with_multiple_properties(self):
path = '/images?foo=bar&hypervisor_type=kvm'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'foo': 'bar',
'hypervisor_type': 'kvm'})
properties = [image.extra_properties for image in output['images']]
self.assertEqual('kvm', properties[0]['hypervisor_type'])
self.assertEqual('bar', properties[0]['foo'])
def test_index_with_core_and_extra_property(self):
path = '/images?disk_format=raw&foo=bar'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'foo': 'bar',
'disk_format': 'raw'})
properties = [image.extra_properties for image in output['images']]
self.assertEqual(1, len(output['images']))
self.assertEqual('raw', output['images'][0].disk_format)
self.assertEqual('bar', properties[0]['foo'])
def test_index_with_nonexistent_properties(self):
path = '/images?abc=xyz&pudding=banana'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'abc': 'xyz',
'pudding': 'banana'})
self.assertEqual(0, len(output['images']))
def test_index_with_non_existent_tags(self):
path = '/images?tag=fake'
request = unit_test_utils.get_fake_request(path)
output = self.controller.index(request,
filters={'tags': ['fake']})
actual = [image.tags for image in output['images']]
self.assertEqual(0, len(actual))
def test_show(self):
request = unit_test_utils.get_fake_request()
output = self.controller.show(request, image_id=UUID2)
self.assertEqual(UUID2, output.image_id)
self.assertEqual('2', output.name)
def test_show_deleted_properties(self):
"""Ensure that the api filters out deleted image properties."""
# get the image properties into the odd state
image = {
'id': str(uuid.uuid4()),
'status': 'active',
'properties': {'poo': 'bear'},
}
self.db.image_create(None, image)
self.db.image_update(None, image['id'],
{'properties': {'yin': 'yang'}},
purge_props=True)
request = unit_test_utils.get_fake_request()
output = self.controller.show(request, image['id'])
self.assertEqual('yang', output.extra_properties['yin'])
def test_show_non_existent(self):
request = unit_test_utils.get_fake_request()
image_id = str(uuid.uuid4())
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.show, request, image_id)
def test_show_deleted_image_admin(self):
request = unit_test_utils.get_fake_request(is_admin=True)
self.controller.delete(request, UUID1)
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.show, request, UUID1)
def test_show_not_allowed(self):
request = unit_test_utils.get_fake_request()
self.assertEqual(TENANT1, request.context.tenant)
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.show, request, UUID4)
def test_image_import_raises_conflict_if_container_format_is_none(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(container_format=None)
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'glance-direct'}})
def test_image_import_raises_conflict_if_disk_format_is_none(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(disk_format=None)
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'glance-direct'}})
def test_image_import_raises_conflict(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='queued')
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'glance-direct'}})
def test_image_import_raises_conflict_for_web_download(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage()
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'web-download'}})
def test_image_import_raises_conflict_for_invalid_status_change(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage()
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'glance-direct'}})
def test_image_import_raises_bad_request(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='uploading')
# NOTE(abhishekk): Due to
# https://bugs.launchpad.net/glance/+bug/1712463 taskflow is not
# executing. Once it is fixed, we should mock the execute method of
# the _ImportToStore task instead of mocking the spawn_n method.
with mock.patch.object(eventlet.GreenPool, 'spawn_n',
side_effect=ValueError):
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.import_image, request, UUID4,
{'method': {'name': 'glance-direct'}})
def test_image_import_invalid_uri_filtering(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='queued')
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.import_image, request, UUID4,
{'method': {'name': 'web-download',
'uri': 'fake_uri'}})
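# Note on the import tests above (a descriptive aside, not original code):
# mock.patch.object swaps out ImageRepoProxy.get for the duration of the
# with-block, so each test hands the controller a FakeImage in exactly the
# state it needs. In outline:
#
#     with mock.patch.object(Cls, 'get') as mock_get:
#         mock_get.return_value = FakeImage(status='queued')
#         ...  # code under test now sees the fake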
def test_create(self):
request = unit_test_utils.get_fake_request()
image = {'name': 'image-1'}
output = self.controller.create(request, image=image,
extra_properties={},
tags=[])
self.assertEqual('image-1', output.name)
self.assertEqual({}, output.extra_properties)
self.assertEqual(set([]), output.tags)
self.assertEqual('shared', output.visibility)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.create', output_log['event_type'])
self.assertEqual('image-1', output_log['payload']['name'])
def test_create_disabled_notification(self):
self.config(disabled_notifications=["image.create"])
request = unit_test_utils.get_fake_request()
image = {'name': 'image-1'}
output = self.controller.create(request, image=image,
extra_properties={},
tags=[])
self.assertEqual('image-1', output.name)
self.assertEqual({}, output.extra_properties)
self.assertEqual(set([]), output.tags)
self.assertEqual('shared', output.visibility)
output_logs = self.notifier.get_logs()
self.assertEqual(0, len(output_logs))
def test_create_with_properties(self):
request = unit_test_utils.get_fake_request()
image_properties = {'foo': 'bar'}
image = {'name': 'image-1'}
output = self.controller.create(request, image=image,
extra_properties=image_properties,
tags=[])
self.assertEqual('image-1', output.name)
self.assertEqual(image_properties, output.extra_properties)
self.assertEqual(set([]), output.tags)
self.assertEqual('shared', output.visibility)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.create', output_log['event_type'])
self.assertEqual('image-1', output_log['payload']['name'])
def test_create_with_too_many_properties(self):
self.config(image_property_quota=1)
request = unit_test_utils.get_fake_request()
image_properties = {'foo': 'bar', 'foo2': 'bar'}
image = {'name': 'image-1'}
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.create, request,
image=image,
extra_properties=image_properties,
tags=[])
def test_create_with_bad_min_disk_size(self):
request = unit_test_utils.get_fake_request()
image = {'min_disk': -42, 'name': 'image-1'}
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create, request,
image=image,
extra_properties={},
tags=[])
def test_create_with_bad_min_ram_size(self):
request = unit_test_utils.get_fake_request()
image = {'min_ram': -42, 'name': 'image-1'}
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create, request,
image=image,
extra_properties={},
tags=[])
def test_create_public_image_as_admin(self):
request = unit_test_utils.get_fake_request()
image = {'name': 'image-1', 'visibility': 'public'}
output = self.controller.create(request, image=image,
extra_properties={}, tags=[])
self.assertEqual('public', output.visibility)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.create', output_log['event_type'])
self.assertEqual(output.image_id, output_log['payload']['id'])
def test_create_dup_id(self):
request = unit_test_utils.get_fake_request()
image = {'image_id': UUID4}
self.assertRaises(webob.exc.HTTPConflict,
self.controller.create,
request,
image=image,
extra_properties={},
tags=[])
def test_create_duplicate_tags(self):
request = unit_test_utils.get_fake_request()
tags = ['ping', 'ping']
output = self.controller.create(request, image={},
extra_properties={}, tags=tags)
self.assertEqual(set(['ping']), output.tags)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.create', output_log['event_type'])
self.assertEqual(output.image_id, output_log['payload']['id'])
def test_create_with_too_many_tags(self):
self.config(image_tag_quota=1)
request = unit_test_utils.get_fake_request()
tags = ['ping', 'pong']
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.create,
request, image={}, extra_properties={},
tags=tags)
def test_create_with_owner_non_admin(self):
request = unit_test_utils.get_fake_request()
request.context.is_admin = False
image = {'owner': '12345'}
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.create,
request, image=image, extra_properties={},
tags=[])
request = unit_test_utils.get_fake_request()
request.context.is_admin = False
image = {'owner': TENANT1}
output = self.controller.create(request, image=image,
extra_properties={}, tags=[])
self.assertEqual(TENANT1, output.owner)
def test_create_with_owner_admin(self):
request = unit_test_utils.get_fake_request()
request.context.is_admin = True
image = {'owner': '12345'}
output = self.controller.create(request, image=image,
extra_properties={}, tags=[])
self.assertEqual('12345', output.owner)
def test_create_with_duplicate_location(self):
request = unit_test_utils.get_fake_request()
location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
image = {'name': 'image-1', 'locations': [location, location]}
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create,
request, image=image, extra_properties={},
tags=[])
def test_create_unexpected_property(self):
request = unit_test_utils.get_fake_request()
image_properties = {'unexpected': 'unexpected'}
image = {'name': 'image-1'}
with mock.patch.object(domain.ImageFactory, 'new_image',
side_effect=TypeError):
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create, request, image=image,
extra_properties=image_properties, tags=[])
def test_create_reserved_property(self):
request = unit_test_utils.get_fake_request()
image_properties = {'reserved': 'reserved'}
image = {'name': 'image-1'}
with mock.patch.object(domain.ImageFactory, 'new_image',
side_effect=exception.ReservedProperty(
property='reserved')):
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.create, request, image=image,
extra_properties=image_properties, tags=[])
def test_create_readonly_property(self):
request = unit_test_utils.get_fake_request()
image_properties = {'readonly': 'readonly'}
image = {'name': 'image-1'}
with mock.patch.object(domain.ImageFactory, 'new_image',
side_effect=exception.ReadonlyProperty(
property='readonly')):
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.create, request, image=image,
extra_properties=image_properties, tags=[])
def test_update_no_changes(self):
request = unit_test_utils.get_fake_request()
output = self.controller.update(request, UUID1, changes=[])
self.assertEqual(UUID1, output.image_id)
self.assertEqual(output.created_at, output.updated_at)
self.assertEqual(2, len(output.tags))
self.assertIn('ping', output.tags)
self.assertIn('pong', output.tags)
output_logs = self.notifier.get_logs()
# NOTE(markwash): don't send a notification if nothing is updated
self.assertEqual(0, len(output_logs))
def test_update_with_bad_min_disk(self):
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['min_disk'], 'value': -42}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes=changes)
def test_update_with_bad_min_ram(self):
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['min_ram'], 'value': -42}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes=changes)
def test_update_image_doesnt_exist(self):
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
request, str(uuid.uuid4()), changes=[])
def test_update_deleted_image_admin(self):
request = unit_test_utils.get_fake_request(is_admin=True)
self.controller.delete(request, UUID1)
self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
request, UUID1, changes=[])
def test_update_with_too_many_properties(self):
self.config(show_multiple_locations=True)
self.config(user_storage_quota='1')
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.update,
request, UUID1, changes=changes)
def test_update_replace_base_attribute(self):
self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
request = unit_test_utils.get_fake_request()
request.context.is_admin = True
changes = [{'op': 'replace', 'path': ['name'], 'value': 'fedora'},
{'op': 'replace', 'path': ['owner'], 'value': TENANT3}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual('fedora', output.name)
self.assertEqual(TENANT3, output.owner)
self.assertEqual({'foo': 'bar'}, output.extra_properties)
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_replace_owner_non_admin(self):
request = unit_test_utils.get_fake_request()
request.context.is_admin = False
changes = [{'op': 'replace', 'path': ['owner'], 'value': TENANT3}]
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.update, request, UUID1, changes)
def test_update_replace_tags(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'replace', 'path': ['tags'], 'value': ['king', 'kong']},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(2, len(output.tags))
self.assertIn('king', output.tags)
self.assertIn('kong', output.tags)
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_replace_property(self):
request = unit_test_utils.get_fake_request()
properties = {'foo': 'bar', 'snitch': 'golden'}
self.db.image_update(None, UUID1, {'properties': properties})
output = self.controller.show(request, UUID1)
self.assertEqual('bar', output.extra_properties['foo'])
self.assertEqual('golden', output.extra_properties['snitch'])
changes = [
{'op': 'replace', 'path': ['foo'], 'value': 'baz'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual('baz', output.extra_properties['foo'])
self.assertEqual('golden', output.extra_properties['snitch'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_add_too_many_properties(self):
self.config(image_property_quota=1)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['foo'], 'value': 'baz'},
{'op': 'add', 'path': ['snitch'], 'value': 'golden'},
]
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.update, request,
UUID1, changes)
def test_update_add_and_remove_too_many_properties(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['foo'], 'value': 'baz'},
{'op': 'add', 'path': ['snitch'], 'value': 'golden'},
]
self.controller.update(request, UUID1, changes)
self.config(image_property_quota=1)
# Removing only one property while adding another still leaves
# us over the limit of 1 property
changes = [
{'op': 'remove', 'path': ['foo']},
{'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
]
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.update, request,
UUID1, changes)
def test_update_add_unlimited_properties(self):
self.config(image_property_quota=-1)
request = unit_test_utils.get_fake_request()
output = self.controller.show(request, UUID1)
changes = [{'op': 'add',
'path': ['foo'],
'value': 'bar'}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_format_properties(self):
statuses_for_immutability = ['active', 'saving', 'killed']
request = unit_test_utils.get_fake_request(is_admin=True)
for status in statuses_for_immutability:
image = {
'id': str(uuid.uuid4()),
'status': status,
'disk_format': 'ari',
'container_format': 'ari',
}
self.db.image_create(None, image)
changes = [
{'op': 'replace', 'path': ['disk_format'], 'value': 'ami'},
]
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.update,
request, image['id'], changes)
changes = [
{'op': 'replace',
'path': ['container_format'],
'value': 'ami'},
]
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.update,
request, image['id'], changes)
self.db.image_update(None, image['id'], {'status': 'queued'})
changes = [
{'op': 'replace', 'path': ['disk_format'], 'value': 'raw'},
{'op': 'replace', 'path': ['container_format'], 'value': 'bare'},
]
resp = self.controller.update(request, image['id'], changes)
self.assertEqual('raw', resp.disk_format)
self.assertEqual('bare', resp.container_format)
def test_update_remove_property_while_over_limit(self):
"""Ensure that image properties can be removed.
Image properties should be able to be removed as long as the image has
fewer than the limited number of image properties after the
transaction.
"""
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['foo'], 'value': 'baz'},
{'op': 'add', 'path': ['snitch'], 'value': 'golden'},
{'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
]
self.controller.update(request, UUID1, changes)
self.config(image_property_quota=1)
# We must remove two properties to avoid being
# over the limit of 1 property
changes = [
{'op': 'remove', 'path': ['foo']},
{'op': 'remove', 'path': ['snitch']},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(1, len(output.extra_properties))
self.assertEqual('buzz', output.extra_properties['fizz'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_add_and_remove_property_under_limit(self):
"""Ensure that image properties can be removed.
Image properties should be able to be added and removed simultaneously
as long as the image has fewer than the limited number of image
properties after the transaction.
"""
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['foo'], 'value': 'baz'},
{'op': 'add', 'path': ['snitch'], 'value': 'golden'},
]
self.controller.update(request, UUID1, changes)
self.config(image_property_quota=1)
# We must remove two properties to avoid being
# over the limit of 1 property
changes = [
{'op': 'remove', 'path': ['foo']},
{'op': 'remove', 'path': ['snitch']},
{'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(1, len(output.extra_properties))
self.assertEqual('buzz', output.extra_properties['fizz'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_replace_missing_property(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'replace', 'path': 'foo', 'value': 'baz'},
]
self.assertRaises(webob.exc.HTTPConflict,
self.controller.update, request, UUID1, changes)
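# Property protection tests: set_property_protections() loads the test
# protection rules, either role-based or, with use_policies=True,
# policy-based; the protected properties used below are the x_* and
# spl_* names defined in those fixtures.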
def test_prop_protection_with_create_and_permitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
created_image = self.controller.create(request, image=image,
extra_properties={},
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'add', 'path': ['x_owner_foo'], 'value': 'bar'},
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('bar', output.extra_properties['x_owner_foo'])
def test_prop_protection_with_update_and_permitted_policy(self):
self.set_property_protections(use_policies=True)
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
request = unit_test_utils.get_fake_request(roles=['spl_role'])
image = {'name': 'image-1'}
extra_props = {'spl_creator_policy': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
self.assertEqual('bar',
created_image.extra_properties['spl_creator_policy'])
another_request = unit_test_utils.get_fake_request(roles=['spl_role'])
changes = [
{'op': 'replace', 'path': ['spl_creator_policy'], 'value': 'par'},
]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
another_request, created_image.image_id, changes)
another_request = unit_test_utils.get_fake_request(roles=['admin'])
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('par',
output.extra_properties['spl_creator_policy'])
def test_prop_protection_with_create_with_patch_and_policy(self):
self.set_property_protections(use_policies=True)
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
request = unit_test_utils.get_fake_request(roles=['spl_role', 'admin'])
image = {'name': 'image-1'}
extra_props = {'spl_default_policy': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
changes = [
{'op': 'add', 'path': ['spl_creator_policy'], 'value': 'bar'},
]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
another_request, created_image.image_id, changes)
another_request = unit_test_utils.get_fake_request(roles=['spl_role'])
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('bar',
output.extra_properties['spl_creator_policy'])
def test_prop_protection_with_create_and_unpermitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
created_image = self.controller.create(request, image=image,
extra_properties={},
tags=[])
roles = ['fake_member']
another_request = unit_test_utils.get_fake_request(roles=roles)
changes = [
{'op': 'add', 'path': ['x_owner_foo'], 'value': 'bar'},
]
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.update, another_request,
created_image.image_id, changes)
def test_prop_protection_with_show_and_permitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
output = self.controller.show(another_request, created_image.image_id)
self.assertEqual('bar', output.extra_properties['x_owner_foo'])
def test_prop_protection_with_show_and_unpermitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['member'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
output = self.controller.show(another_request, created_image.image_id)
self.assertRaises(KeyError, output.extra_properties.__getitem__,
'x_owner_foo')
def test_prop_protection_with_update_and_permitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'replace', 'path': ['x_owner_foo'], 'value': 'baz'},
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('baz', output.extra_properties['x_owner_foo'])
def test_prop_protection_with_update_and_unpermitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
changes = [
{'op': 'replace', 'path': ['x_owner_foo'], 'value': 'baz'},
]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
another_request, created_image.image_id, changes)
def test_prop_protection_with_delete_and_permitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'remove', 'path': ['x_owner_foo']}
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertRaises(KeyError, output.extra_properties.__getitem__,
'x_owner_foo')
def test_prop_protection_with_delete_and_unpermitted_role(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_owner_foo': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
changes = [
{'op': 'remove', 'path': ['x_owner_foo']}
]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
another_request, created_image.image_id, changes)
def test_create_protected_prop_case_insensitive(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
created_image = self.controller.create(request, image=image,
extra_properties={},
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'add', 'path': ['x_case_insensitive'], 'value': '1'},
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('1', output.extra_properties['x_case_insensitive'])
def test_read_protected_prop_case_insensitive(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_case_insensitive': '1'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
output = self.controller.show(another_request, created_image.image_id)
self.assertEqual('1', output.extra_properties['x_case_insensitive'])
def test_update_protected_prop_case_insensitive(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_case_insensitive': '1'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'replace', 'path': ['x_case_insensitive'], 'value': '2'},
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('2', output.extra_properties['x_case_insensitive'])
def test_delete_protected_prop_case_insensitive(self):
enforcer = glance.api.policy.Enforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
enforcer,
self.notifier,
self.store)
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_case_insensitive': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'remove', 'path': ['x_case_insensitive']}
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertRaises(KeyError, output.extra_properties.__getitem__,
'x_case_insensitive')
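# In property protection rules, '@' permits all roles and '!' permits
# none; the docstrings below refer to these wildcards.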
def test_create_non_protected_prop(self):
"""Property marked with special char @ creatable by an unknown role"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_all_permitted_1': '1'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
self.assertEqual('1',
created_image.extra_properties['x_all_permitted_1'])
another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
extra_props = {'x_all_permitted_2': '2'}
created_image = self.controller.create(another_request, image=image,
extra_properties=extra_props,
tags=[])
self.assertEqual('2',
created_image.extra_properties['x_all_permitted_2'])
def test_read_non_protected_prop(self):
"""Property marked with special char @ readable by an unknown role"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_all_permitted': '1'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
output = self.controller.show(another_request, created_image.image_id)
self.assertEqual('1', output.extra_properties['x_all_permitted'])
def test_update_non_protected_prop(self):
"""Property marked with special char @ updatable by an unknown role"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_all_permitted': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
changes = [
{'op': 'replace', 'path': ['x_all_permitted'], 'value': 'baz'},
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertEqual('baz', output.extra_properties['x_all_permitted'])
def test_delete_non_protected_prop(self):
"""Property marked with special char @ deletable by an unknown role"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_all_permitted': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['member'])
changes = [
{'op': 'remove', 'path': ['x_all_permitted']}
]
output = self.controller.update(another_request,
created_image.image_id, changes)
self.assertRaises(KeyError, output.extra_properties.__getitem__,
'x_all_permitted')
def test_create_locked_down_protected_prop(self):
"""Property marked with special char ! creatable by no one"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
created_image = self.controller.create(request, image=image,
extra_properties={},
tags=[])
roles = ['fake_member']
another_request = unit_test_utils.get_fake_request(roles=roles)
changes = [
{'op': 'add', 'path': ['x_none_permitted'], 'value': 'bar'},
]
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.update, another_request,
created_image.image_id, changes)
def test_read_locked_down_protected_prop(self):
"""Property marked with special char ! readable by no one"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['member'])
image = {'name': 'image-1'}
extra_props = {'x_none_read': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
output = self.controller.show(another_request, created_image.image_id)
self.assertRaises(KeyError, output.extra_properties.__getitem__,
'x_none_read')
def test_update_locked_down_protected_prop(self):
"""Property marked with special char ! updatable by no one"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_none_update': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
changes = [
{'op': 'replace', 'path': ['x_none_update'], 'value': 'baz'},
]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
another_request, created_image.image_id, changes)
def test_delete_locked_down_protected_prop(self):
"""Property marked with special char ! deletable by no one"""
self.set_property_protections()
request = unit_test_utils.get_fake_request(roles=['admin'])
image = {'name': 'image-1'}
extra_props = {'x_none_delete': 'bar'}
created_image = self.controller.create(request, image=image,
extra_properties=extra_props,
tags=[])
another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
changes = [
{'op': 'remove', 'path': ['x_none_delete']}
]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
another_request, created_image.image_id, changes)
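# Location update tests: most enable show_multiple_locations so the
# locations list is PATCHable; with it disabled the update is rejected
# with HTTPForbidden (see test_locations_actions_with_locations_invisible).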
def test_update_replace_locations_non_empty(self):
self.config(show_multiple_locations=True)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'],
'value': [new_location]}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
def test_update_replace_locations_metadata_update(self):
self.config(show_multiple_locations=True)
location = {'url': '%s/%s' % (BASE_URI, UUID1),
'metadata': {'a': 1}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'],
'value': [location]}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual({'a': 1}, output.locations[0]['metadata'])
def test_locations_actions_with_locations_invisible(self):
self.config(show_multiple_locations=False)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'],
'value': [new_location]}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_replace_locations_invalid(self):
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_add_property(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['foo'], 'value': 'baz'},
{'op': 'add', 'path': ['snitch'], 'value': 'golden'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual('baz', output.extra_properties['foo'])
self.assertEqual('golden', output.extra_properties['snitch'])
self.assertNotEqual(output.created_at, output.updated_at)
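# The json_schema_version tests below capture the difference in 'add'
# semantics: under draft 4 adding an already-present attribute is a
# conflict, while under draft 10 it overwrites the existing value.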
def test_update_add_base_property_json_schema_version_4(self):
request = unit_test_utils.get_fake_request()
changes = [{
'json_schema_version': 4, 'op': 'add',
'path': ['name'], 'value': 'fedora'
}]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
request, UUID1, changes)
def test_update_add_extra_property_json_schema_version_4(self):
self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
request = unit_test_utils.get_fake_request()
changes = [{
'json_schema_version': 4, 'op': 'add',
'path': ['foo'], 'value': 'baz'
}]
self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
request, UUID1, changes)
def test_update_add_base_property_json_schema_version_10(self):
request = unit_test_utils.get_fake_request()
changes = [{
'json_schema_version': 10, 'op': 'add',
'path': ['name'], 'value': 'fedora'
}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual('fedora', output.name)
def test_update_add_extra_property_json_schema_version_10(self):
self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
request = unit_test_utils.get_fake_request()
changes = [{
'json_schema_version': 10, 'op': 'add',
'path': ['foo'], 'value': 'baz'
}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual({'foo': 'baz'}, output.extra_properties)
def test_update_add_property_already_present_json_schema_version_4(self):
request = unit_test_utils.get_fake_request()
properties = {'foo': 'bar'}
self.db.image_update(None, UUID1, {'properties': properties})
output = self.controller.show(request, UUID1)
self.assertEqual('bar', output.extra_properties['foo'])
changes = [
{'json_schema_version': 4, 'op': 'add',
'path': ['foo'], 'value': 'baz'},
]
self.assertRaises(webob.exc.HTTPConflict,
self.controller.update, request, UUID1, changes)
def test_update_add_property_already_present_json_schema_version_10(self):
request = unit_test_utils.get_fake_request()
properties = {'foo': 'bar'}
self.db.image_update(None, UUID1, {'properties': properties})
output = self.controller.show(request, UUID1)
self.assertEqual('bar', output.extra_properties['foo'])
changes = [
{'json_schema_version': 10, 'op': 'add',
'path': ['foo'], 'value': 'baz'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual({'foo': 'baz'}, output.extra_properties)
def test_update_add_locations(self):
self.config(show_multiple_locations=True)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(2, len(output.locations))
self.assertEqual(new_location, output.locations[1])
def test_replace_location_possible_on_queued(self):
self.skipTest('This test is intermittently failing at the gate. '
'See bug #1649300')
self.config(show_multiple_locations=True)
self.images = [
_db_fixture('1', owner=TENANT1, checksum=CHKSUM,
name='1',
is_public=True,
disk_format='raw',
container_format='bare',
status='queued'),
]
self.db.image_create(None, self.images[0])
request = unit_test_utils.get_fake_request()
new_location = {'url': '%s/fake_location_1' % BASE_URI, 'metadata': {}}
changes = [{'op': 'replace', 'path': ['locations'],
'value': [new_location]}]
output = self.controller.update(request, '1', changes)
self.assertEqual('1', output.image_id)
self.assertEqual(1, len(output.locations))
self.assertEqual(new_location, output.locations[0])
def test_add_location_possible_on_queued(self):
self.skipTest('This test is intermittently failing at the gate. '
'See bug #1649300')
self.config(show_multiple_locations=True)
self.images = [
_db_fixture('1', owner=TENANT1, checksum=CHKSUM,
name='1',
is_public=True,
disk_format='raw',
container_format='bare',
status='queued'),
]
self.db.image_create(None, self.images[0])
request = unit_test_utils.get_fake_request()
new_location = {'url': '%s/fake_location_1' % BASE_URI, 'metadata': {}}
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
output = self.controller.update(request, '1', changes)
self.assertEqual('1', output.image_id)
self.assertEqual(1, len(output.locations))
self.assertEqual(new_location, output.locations[0])
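# Helper: create an image in the given status and verify that the given
# location operation is rejected with HTTPConflict.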
def _test_update_locations_status(self, image_status, update):
self.config(show_multiple_locations=True)
self.images = [
_db_fixture('1', owner=TENANT1, checksum=CHKSUM,
name='1',
disk_format='raw',
container_format='bare',
status=image_status),
]
request = unit_test_utils.get_fake_request()
if image_status == 'deactivated':
self.db.image_create(request.context, self.images[0])
else:
self.db.image_create(None, self.images[0])
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
changes = [{'op': update, 'path': ['locations', '-'],
'value': new_location}]
self.assertRaises(webob.exc.HTTPConflict,
self.controller.update, request, '1', changes)
def test_location_add_not_permitted_status_saving(self):
self._test_update_locations_status('saving', 'add')
def test_location_add_not_permitted_status_deactivated(self):
self._test_update_locations_status('deactivated', 'add')
def test_location_add_not_permitted_status_deleted(self):
self._test_update_locations_status('deleted', 'add')
def test_location_add_not_permitted_status_pending_delete(self):
self._test_update_locations_status('pending_delete', 'add')
def test_location_add_not_permitted_status_killed(self):
self._test_update_locations_status('killed', 'add')
def test_location_remove_not_permitted_status_saving(self):
self._test_update_locations_status('saving', 'remove')
def test_location_remove_not_permitted_status_deactivated(self):
self._test_update_locations_status('deactivated', 'remove')
def test_location_remove_not_permitted_status_deleted(self):
self._test_update_locations_status('deleted', 'remove')
def test_location_remove_not_permitted_status_pending_delete(self):
self._test_update_locations_status('pending_delete', 'remove')
def test_location_remove_not_permitted_status_killed(self):
self._test_update_locations_status('killed', 'remove')
def test_location_remove_not_permitted_status_queued(self):
self._test_update_locations_status('queued', 'remove')
def test_location_replace_not_permitted_status_saving(self):
self._test_update_locations_status('saving', 'replace')
def test_location_replace_not_permitted_status_deactivated(self):
self._test_update_locations_status('deactivated', 'replace')
def test_location_replace_not_permitted_status_deleted(self):
self._test_update_locations_status('deleted', 'replace')
def test_location_replace_not_permitted_status_pending_delete(self):
self._test_update_locations_status('pending_delete', 'replace')
def test_location_replace_not_permitted_status_killed(self):
self._test_update_locations_status('killed', 'replace')
def test_update_add_locations_insertion(self):
self.config(show_multiple_locations=True)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '0'],
'value': new_location}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(2, len(output.locations))
self.assertEqual(new_location, output.locations[0])
def test_update_add_locations_list(self):
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': {'url': 'foo', 'metadata': {}}}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
def test_update_add_locations_invalid(self):
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': {'url': 'unknow://foo', 'metadata': {}}}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
changes = [{'op': 'add', 'path': ['locations', None],
'value': {'url': 'unknow://foo', 'metadata': {}}}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
def test_update_add_duplicate_locations(self):
self.config(show_multiple_locations=True)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(2, len(output.locations))
self.assertEqual(new_location, output.locations[1])
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
def test_update_add_too_many_locations(self):
self.config(show_multiple_locations=True)
self.config(image_location_quota=1)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_1' % BASE_URI,
'metadata': {}}},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_2' % BASE_URI,
'metadata': {}}},
]
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.update, request,
UUID1, changes)
def test_update_add_and_remove_too_many_locations(self):
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_1' % BASE_URI,
'metadata': {}}},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_2' % BASE_URI,
'metadata': {}}},
]
self.controller.update(request, UUID1, changes)
self.config(image_location_quota=1)
# Removing one location while adding another still leaves
# the image over the quota of 1 location
changes = [
{'op': 'remove', 'path': ['locations', '0']},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_3' % BASE_URI,
'metadata': {}}},
]
self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
self.controller.update, request,
UUID1, changes)
def test_update_add_unlimited_locations(self):
self.config(show_multiple_locations=True)
self.config(image_location_quota=-1)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_1' % BASE_URI,
'metadata': {}}},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_remove_location_while_over_limit(self):
"""Ensure that image locations can be removed.
Image locations should be able to be removed as long as the image has
fewer than the limited number of image locations after the
transaction.
"""
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_1' % BASE_URI,
'metadata': {}}},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_2' % BASE_URI,
'metadata': {}}},
]
self.controller.update(request, UUID1, changes)
self.config(image_location_quota=1)
self.config(show_multiple_locations=True)
# We must remove two locations to avoid being over
# the limit of 1 location
changes = [
{'op': 'remove', 'path': ['locations', '0']},
{'op': 'remove', 'path': ['locations', '0']},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(1, len(output.locations))
self.assertIn('fake_location_2', output.locations[0]['url'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_add_and_remove_location_under_limit(self):
"""Ensure that image locations can be removed.
Image locations should be able to be added and removed simultaneously
as long as the image has fewer than the limited number of image
locations after the transaction.
"""
self.stubs.Set(store, 'get_size_from_backend',
unit_test_utils.fake_get_size_from_backend)
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_1' % BASE_URI,
'metadata': {}}},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_2' % BASE_URI,
'metadata': {}}},
]
self.controller.update(request, UUID1, changes)
self.config(image_location_quota=2)
# Remove two locations and add one so the image ends with
# two locations, within the quota of 2
changes = [
{'op': 'remove', 'path': ['locations', '0']},
{'op': 'remove', 'path': ['locations', '0']},
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location_3' % BASE_URI,
'metadata': {}}},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(2, len(output.locations))
self.assertIn('fake_location_3', output.locations[1]['url'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_remove_base_property(self):
self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
request = unit_test_utils.get_fake_request()
changes = [{'op': 'remove', 'path': ['name']}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_remove_property(self):
request = unit_test_utils.get_fake_request()
properties = {'foo': 'bar', 'snitch': 'golden'}
self.db.image_update(None, UUID1, {'properties': properties})
output = self.controller.show(request, UUID1)
self.assertEqual('bar', output.extra_properties['foo'])
self.assertEqual('golden', output.extra_properties['snitch'])
changes = [
{'op': 'remove', 'path': ['snitch']},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual({'foo': 'bar'}, output.extra_properties)
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_remove_missing_property(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'remove', 'path': ['foo']},
]
self.assertRaises(webob.exc.HTTPConflict,
self.controller.update, request, UUID1, changes)
def test_update_remove_location(self):
self.config(show_multiple_locations=True)
self.stubs.Set(store, 'get_size_from_backend',
unit_test_utils.fake_get_size_from_backend)
request = unit_test_utils.get_fake_request()
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
self.controller.update(request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', '0']}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(1, len(output.locations))
self.assertEqual('active', output.status)
def test_update_remove_location_invalid_pos(self):
self.config(show_multiple_locations=True)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location' % BASE_URI,
'metadata': {}}}]
self.controller.update(request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', None]}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', '-1']}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', '99']}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', 'x']}]
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
request, UUID1, changes)
def test_update_remove_location_store_exception(self):
self.config(show_multiple_locations=True)
def fake_delete_image_location_from_backend(self, *args, **kwargs):
raise Exception('fake_backend_exception')
self.stubs.Set(self.store_utils, 'delete_image_location_from_backend',
fake_delete_image_location_from_backend)
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'add', 'path': ['locations', '-'],
'value': {'url': '%s/fake_location' % BASE_URI,
'metadata': {}}}]
self.controller.update(request, UUID1, changes)
changes = [{'op': 'remove', 'path': ['locations', '0']}]
self.assertRaises(webob.exc.HTTPInternalServerError,
self.controller.update, request, UUID1, changes)
def test_update_multiple_changes(self):
request = unit_test_utils.get_fake_request()
properties = {'foo': 'bar', 'snitch': 'golden'}
self.db.image_update(None, UUID1, {'properties': properties})
changes = [
{'op': 'replace', 'path': ['min_ram'], 'value': 128},
{'op': 'replace', 'path': ['foo'], 'value': 'baz'},
{'op': 'remove', 'path': ['snitch']},
{'op': 'add', 'path': ['kb'], 'value': 'dvorak'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(UUID1, output.image_id)
self.assertEqual(128, output.min_ram)
self.addDetail('extra_properties',
testtools.content.json_content(
jsonutils.dumps(output.extra_properties)))
self.assertEqual(2, len(output.extra_properties))
self.assertEqual('baz', output.extra_properties['foo'])
self.assertEqual('dvorak', output.extra_properties['kb'])
self.assertNotEqual(output.created_at, output.updated_at)
def test_update_invalid_operation(self):
request = unit_test_utils.get_fake_request()
change = {'op': 'test', 'path': 'options', 'value': 'puts'}
try:
self.controller.update(request, UUID1, [change])
except AttributeError:
pass # AttributeError is the desired behavior
else:
self.fail('Failed to raise AttributeError on %s' % change)
def test_update_duplicate_tags(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'replace', 'path': ['tags'], 'value': ['ping', 'ping']},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual(1, len(output.tags))
self.assertIn('ping', output.tags)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.update', output_log['event_type'])
self.assertEqual(UUID1, output_log['payload']['id'])
def test_update_disabled_notification(self):
self.config(disabled_notifications=["image.update"])
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'replace', 'path': ['name'], 'value': 'Ping Pong'},
]
output = self.controller.update(request, UUID1, changes)
self.assertEqual('Ping Pong', output.name)
output_logs = self.notifier.get_logs()
self.assertEqual(0, len(output_logs))
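# Delete tests: verify store data removal, the DB soft-delete flags,
# and image.delete notifications.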
def test_delete(self):
request = unit_test_utils.get_fake_request()
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
try:
self.controller.delete(request, UUID1)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual("image.delete", output_log['event_type'])
except Exception as e:
self.fail("Delete raised exception: %s" % e)
deleted_img = self.db.image_get(request.context, UUID1,
force_show_deleted=True)
self.assertTrue(deleted_img['deleted'])
self.assertEqual('deleted', deleted_img['status'])
self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
def test_delete_with_tags(self):
request = unit_test_utils.get_fake_request()
changes = [
{'op': 'replace', 'path': ['tags'],
'value': ['many', 'cool', 'new', 'tags']},
]
self.controller.update(request, UUID1, changes)
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
self.controller.delete(request, UUID1)
output_logs = self.notifier.get_logs()
# Get `delete` event from logs
output_delete_logs = [output_log for output_log in output_logs
if output_log['event_type'] == 'image.delete']
self.assertEqual(1, len(output_delete_logs))
output_log = output_delete_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
deleted_img = self.db.image_get(request.context, UUID1,
force_show_deleted=True)
self.assertTrue(deleted_img['deleted'])
self.assertEqual('deleted', deleted_img['status'])
self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
def test_delete_disabled_notification(self):
self.config(disabled_notifications=["image.delete"])
request = unit_test_utils.get_fake_request()
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
try:
self.controller.delete(request, UUID1)
output_logs = self.notifier.get_logs()
self.assertEqual(0, len(output_logs))
except Exception as e:
self.fail("Delete raised exception: %s" % e)
deleted_img = self.db.image_get(request.context, UUID1,
force_show_deleted=True)
self.assertTrue(deleted_img['deleted'])
self.assertEqual('deleted', deleted_img['status'])
self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
def test_delete_queued_updates_status(self):
"""Ensure status of queued image is updated (LP bug #1048851)"""
request = unit_test_utils.get_fake_request(is_admin=True)
image = self.db.image_create(request.context, {'status': 'queued'})
image_id = image['id']
self.controller.delete(request, image_id)
image = self.db.image_get(request.context, image_id,
force_show_deleted=True)
self.assertTrue(image['deleted'])
self.assertEqual('deleted', image['status'])
def test_delete_queued_updates_status_delayed_delete(self):
"""Ensure status of queued image is updated (LP bug #1048851).
Must be set to 'deleted' when delayed_delete isenabled.
"""
self.config(delayed_delete=True)
request = unit_test_utils.get_fake_request(is_admin=True)
image = self.db.image_create(request.context, {'status': 'queued'})
image_id = image['id']
self.controller.delete(request, image_id)
image = self.db.image_get(request.context, image_id,
force_show_deleted=True)
self.assertTrue(image['deleted'])
self.assertEqual('deleted', image['status'])
def test_delete_not_in_store(self):
request = unit_test_utils.get_fake_request()
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
for k in self.store.data:
if UUID1 in k:
del self.store.data[k]
break
self.controller.delete(request, UUID1)
deleted_img = self.db.image_get(request.context, UUID1,
force_show_deleted=True)
self.assertTrue(deleted_img['deleted'])
self.assertEqual('deleted', deleted_img['status'])
self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
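# With delayed_delete enabled the image data is left in the store and
# the status becomes 'pending_delete'; cleanup is deferred to the
# scrubber.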
def test_delayed_delete(self):
self.config(delayed_delete=True)
request = unit_test_utils.get_fake_request()
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
self.controller.delete(request, UUID1)
deleted_img = self.db.image_get(request.context, UUID1,
force_show_deleted=True)
self.assertTrue(deleted_img['deleted'])
self.assertEqual('pending_delete', deleted_img['status'])
self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
def test_delete_non_existent(self):
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
request, str(uuid.uuid4()))
def test_delete_already_deleted_image_admin(self):
request = unit_test_utils.get_fake_request(is_admin=True)
self.controller.delete(request, UUID1)
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.delete, request, UUID1)
def test_delete_not_allowed(self):
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
request, UUID4)
def test_delete_in_use(self):
def fake_safe_delete_from_backend(self, *args, **kwargs):
raise store.exceptions.InUseByStore()
self.stubs.Set(self.store_utils, 'safe_delete_from_backend',
fake_safe_delete_from_backend)
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPConflict, self.controller.delete,
request, UUID1)
def test_delete_has_snapshot(self):
def fake_safe_delete_from_backend(self, *args, **kwargs):
raise store.exceptions.HasSnapshot()
self.stubs.Set(self.store_utils, 'safe_delete_from_backend',
fake_safe_delete_from_backend)
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPConflict, self.controller.delete,
request, UUID1)
def test_delete_to_unallowed_status(self):
# from deactivated to pending-delete
self.config(delayed_delete=True)
request = unit_test_utils.get_fake_request(is_admin=True)
self.action_controller.deactivate(request, UUID1)
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete,
request, UUID1)
def test_delete_uploading_status_image(self):
"""Ensure status of uploading image is updated (LP bug #1733289)"""
request = unit_test_utils.get_fake_request(is_admin=True)
image = self.db.image_create(request.context, {'status': 'uploading'})
image_id = image['id']
with mock.patch.object(self.store,
'delete_from_backend') as mock_store:
self.controller.delete(request, image_id)
# Ensure delete_from_backend is called
self.assertEqual(1, mock_store.call_count)
image = self.db.image_get(request.context, image_id,
force_show_deleted=True)
self.assertTrue(image['deleted'])
self.assertEqual('deleted', image['status'])
def test_index_with_invalid_marker(self):
fake_uuid = str(uuid.uuid4())
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.index, request, marker=fake_uuid)
def test_invalid_locations_op_pos(self):
pos = self.controller._get_locations_op_pos(None, 2, True)
self.assertIsNone(pos)
pos = self.controller._get_locations_op_pos('1', None, True)
self.assertIsNone(pos)
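    # The image repo proxy is mocked so the image appears to be in the
    # 'uploading' state, which is the state the glance-direct import flow
    # operates on here.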
def test_image_import(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='uploading')
output = self.controller.import_image(
request, UUID4, {'method': {'name': 'glance-direct'}})
self.assertEqual(UUID4, output)
def test_image_import_not_allowed(self):
request = unit_test_utils.get_fake_request()
        # NOTE(abhishekk): Setting tenant to None purely for test
        # coverage; this is not expected to happen in normal scenarios.
request.context.tenant = None
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='uploading')
self.assertRaises(webob.exc.HTTPForbidden,
self.controller.import_image,
request, UUID4, {'method': {'name':
'glance-direct'}})
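# Each test in this class disables exactly one policy rule and verifies
# that the corresponding controller action is rejected with HTTPForbidden
# (or, for the 'de-publicize' cases, that the update still succeeds).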
class TestImagesControllerPolicies(base.IsolatedUnitTest):
def setUp(self):
super(TestImagesControllerPolicies, self).setUp()
self.db = unit_test_utils.FakeDB()
self.policy = unit_test_utils.FakePolicyEnforcer()
self.controller = glance.api.v2.images.ImagesController(self.db,
self.policy)
store = unit_test_utils.FakeStoreAPI()
self.store_utils = unit_test_utils.FakeStoreUtils(store)
def test_index_unauthorized(self):
rules = {"get_images": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPForbidden, self.controller.index,
request)
def test_show_unauthorized(self):
rules = {"get_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPForbidden, self.controller.show,
request, image_id=UUID2)
def test_create_image_unauthorized(self):
rules = {"add_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
image = {'name': 'image-1'}
extra_properties = {}
tags = []
self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
request, image, extra_properties, tags)
def test_create_public_image_unauthorized(self):
rules = {"publicize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
image = {'name': 'image-1', 'visibility': 'public'}
extra_properties = {}
tags = []
self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
request, image, extra_properties, tags)
def test_create_community_image_unauthorized(self):
rules = {"communitize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
image = {'name': 'image-c1', 'visibility': 'community'}
extra_properties = {}
tags = []
self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
request, image, extra_properties, tags)
def test_update_unauthorized(self):
rules = {"modify_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['name'], 'value': 'image-2'}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_publicize_image_unauthorized(self):
rules = {"publicize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['visibility'],
'value': 'public'}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_communitize_image_unauthorized(self):
rules = {"communitize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['visibility'],
'value': 'community'}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_depublicize_image_unauthorized(self):
rules = {"publicize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['visibility'],
'value': 'private'}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual('private', output.visibility)
def test_update_decommunitize_image_unauthorized(self):
rules = {"communitize_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['visibility'],
'value': 'private'}]
output = self.controller.update(request, UUID1, changes)
self.assertEqual('private', output.visibility)
def test_update_get_image_location_unauthorized(self):
rules = {"get_image_location": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_set_image_location_unauthorized(self):
def fake_delete_image_location_from_backend(self, *args, **kwargs):
pass
rules = {"set_image_location": False}
self.policy.set_rules(rules)
new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
request = unit_test_utils.get_fake_request()
changes = [{'op': 'add', 'path': ['locations', '-'],
'value': new_location}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_update_delete_image_location_unauthorized(self):
rules = {"delete_image_location": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
request, UUID1, changes)
def test_delete_unauthorized(self):
rules = {"delete_image": False}
self.policy.set_rules(rules)
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
request, UUID1)
class TestImagesDeserializer(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesDeserializer, self).setUp()
self.deserializer = glance.api.v2.images.RequestDeserializer()
def test_create_minimal(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({})
output = self.deserializer.create(request)
expected = {'image': {}, 'extra_properties': {}, 'tags': []}
self.assertEqual(expected, output)
def test_create_invalid_id(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'id': 'gabe'})
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create,
request)
def test_create_id_to_image_id(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'id': UUID4})
output = self.deserializer.create(request)
expected = {'image': {'image_id': UUID4},
'extra_properties': {},
'tags': []}
self.assertEqual(expected, output)
def test_create_no_body(self):
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create,
request)
def test_create_full(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({
'id': UUID3,
'name': 'image-1',
'visibility': 'public',
'tags': ['one', 'two'],
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'foo': 'bar',
'protected': True,
})
output = self.deserializer.create(request)
properties = {
'image_id': UUID3,
'name': 'image-1',
'visibility': 'public',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'protected': True,
}
self.maxDiff = None
expected = {'image': properties,
'extra_properties': {'foo': 'bar'},
'tags': ['one', 'two']}
self.assertEqual(expected, output)
def test_create_invalid_property_key(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({
'id': UUID3,
'name': 'image-1',
'visibility': 'public',
'tags': ['one', 'two'],
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'f' * 256: 'bar',
'protected': True,
})
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create,
request)
def test_create_readonly_attributes_forbidden(self):
bodies = [
{'direct_url': 'http://example.com'},
{'self': 'http://example.com'},
{'file': 'http://example.com'},
{'schema': 'http://example.com'},
]
for body in bodies:
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPForbidden,
self.deserializer.create, request)
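    # Helper: build a fake request carrying the versioned JSON-patch
    # content type ('application/openstack-images-v2.N-json-patch') that
    # the v2 update deserializer expects.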
def _get_fake_patch_request(self, content_type_minor_version=1):
request = unit_test_utils.get_fake_request()
template = 'application/openstack-images-v2.%d-json-patch'
request.content_type = template % content_type_minor_version
return request
def test_update_empty_body(self):
request = self._get_fake_patch_request()
request.body = jsonutils.dump_as_bytes([])
output = self.deserializer.update(request)
expected = {'changes': []}
self.assertEqual(expected, output)
def test_update_unsupported_content_type(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/json-patch'
request.body = jsonutils.dump_as_bytes([])
try:
self.deserializer.update(request)
except webob.exc.HTTPUnsupportedMediaType as e:
# desired result, but must have correct Accept-Patch header
accept_patch = ['application/openstack-images-v2.1-json-patch',
'application/openstack-images-v2.0-json-patch']
expected = ', '.join(sorted(accept_patch))
self.assertEqual(expected, e.headers['Accept-Patch'])
else:
self.fail('Did not raise HTTPUnsupportedMediaType')
def test_update_body_not_a_list(self):
bodies = [
{'op': 'add', 'path': '/someprop', 'value': 'somevalue'},
'just some string',
123,
True,
False,
None,
]
for body in bodies:
request = self._get_fake_patch_request()
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_invalid_changes(self):
changes = [
['a', 'list', 'of', 'stuff'],
'just some string',
123,
True,
False,
None,
{'op': 'invalid', 'path': '/name', 'value': 'fedora'}
]
for change in changes:
request = self._get_fake_patch_request()
request.body = jsonutils.dump_as_bytes([change])
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update(self):
request = self._get_fake_patch_request()
body = [
{'op': 'replace', 'path': '/name', 'value': 'fedora'},
{'op': 'replace', 'path': '/tags', 'value': ['king', 'kong']},
{'op': 'replace', 'path': '/foo', 'value': 'bar'},
{'op': 'add', 'path': '/bebim', 'value': 'bap'},
{'op': 'remove', 'path': '/sparks'},
{'op': 'add', 'path': '/locations/-',
'value': {'url': 'scheme3://path3', 'metadata': {}}},
{'op': 'add', 'path': '/locations/10',
'value': {'url': 'scheme4://path4', 'metadata': {}}},
{'op': 'remove', 'path': '/locations/2'},
{'op': 'replace', 'path': '/locations', 'value': []},
{'op': 'replace', 'path': '/locations',
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]},
]
request.body = jsonutils.dump_as_bytes(body)
output = self.deserializer.update(request)
expected = {'changes': [
{'json_schema_version': 10, 'op': 'replace',
'path': ['name'], 'value': 'fedora'},
{'json_schema_version': 10, 'op': 'replace',
'path': ['tags'], 'value': ['king', 'kong']},
{'json_schema_version': 10, 'op': 'replace',
'path': ['foo'], 'value': 'bar'},
{'json_schema_version': 10, 'op': 'add',
'path': ['bebim'], 'value': 'bap'},
{'json_schema_version': 10, 'op': 'remove',
'path': ['sparks']},
{'json_schema_version': 10, 'op': 'add',
'path': ['locations', '-'],
'value': {'url': 'scheme3://path3', 'metadata': {}}},
{'json_schema_version': 10, 'op': 'add',
'path': ['locations', '10'],
'value': {'url': 'scheme4://path4', 'metadata': {}}},
{'json_schema_version': 10, 'op': 'remove',
'path': ['locations', '2']},
{'json_schema_version': 10, 'op': 'replace',
'path': ['locations'], 'value': []},
{'json_schema_version': 10, 'op': 'replace',
'path': ['locations'],
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]},
]}
self.assertEqual(expected, output)
def test_update_v2_0_compatibility(self):
request = self._get_fake_patch_request(content_type_minor_version=0)
body = [
{'replace': '/name', 'value': 'fedora'},
{'replace': '/tags', 'value': ['king', 'kong']},
{'replace': '/foo', 'value': 'bar'},
{'add': '/bebim', 'value': 'bap'},
{'remove': '/sparks'},
{'add': '/locations/-', 'value': {'url': 'scheme3://path3',
'metadata': {}}},
{'add': '/locations/10', 'value': {'url': 'scheme4://path4',
'metadata': {}}},
{'remove': '/locations/2'},
{'replace': '/locations', 'value': []},
{'replace': '/locations',
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]},
]
request.body = jsonutils.dump_as_bytes(body)
output = self.deserializer.update(request)
expected = {'changes': [
{'json_schema_version': 4, 'op': 'replace',
'path': ['name'], 'value': 'fedora'},
{'json_schema_version': 4, 'op': 'replace',
'path': ['tags'], 'value': ['king', 'kong']},
{'json_schema_version': 4, 'op': 'replace',
'path': ['foo'], 'value': 'bar'},
{'json_schema_version': 4, 'op': 'add',
'path': ['bebim'], 'value': 'bap'},
{'json_schema_version': 4, 'op': 'remove', 'path': ['sparks']},
{'json_schema_version': 4, 'op': 'add',
'path': ['locations', '-'],
'value': {'url': 'scheme3://path3', 'metadata': {}}},
{'json_schema_version': 4, 'op': 'add',
'path': ['locations', '10'],
'value': {'url': 'scheme4://path4', 'metadata': {}}},
{'json_schema_version': 4, 'op': 'remove',
'path': ['locations', '2']},
{'json_schema_version': 4, 'op': 'replace',
'path': ['locations'], 'value': []},
{'json_schema_version': 4, 'op': 'replace', 'path': ['locations'],
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]},
]}
self.assertEqual(expected, output)
def test_update_base_attributes(self):
request = self._get_fake_patch_request()
body = [
{'op': 'replace', 'path': '/name', 'value': 'fedora'},
{'op': 'replace', 'path': '/visibility', 'value': 'public'},
{'op': 'replace', 'path': '/tags', 'value': ['king', 'kong']},
{'op': 'replace', 'path': '/protected', 'value': True},
{'op': 'replace', 'path': '/container_format', 'value': 'bare'},
{'op': 'replace', 'path': '/disk_format', 'value': 'raw'},
{'op': 'replace', 'path': '/min_ram', 'value': 128},
{'op': 'replace', 'path': '/min_disk', 'value': 10},
{'op': 'replace', 'path': '/locations', 'value': []},
{'op': 'replace', 'path': '/locations',
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]}
]
request.body = jsonutils.dump_as_bytes(body)
output = self.deserializer.update(request)
expected = {'changes': [
{'json_schema_version': 10, 'op': 'replace',
'path': ['name'], 'value': 'fedora'},
{'json_schema_version': 10, 'op': 'replace',
'path': ['visibility'], 'value': 'public'},
{'json_schema_version': 10, 'op': 'replace',
'path': ['tags'], 'value': ['king', 'kong']},
{'json_schema_version': 10, 'op': 'replace',
'path': ['protected'], 'value': True},
{'json_schema_version': 10, 'op': 'replace',
'path': ['container_format'], 'value': 'bare'},
{'json_schema_version': 10, 'op': 'replace',
'path': ['disk_format'], 'value': 'raw'},
{'json_schema_version': 10, 'op': 'replace',
'path': ['min_ram'], 'value': 128},
{'json_schema_version': 10, 'op': 'replace',
'path': ['min_disk'], 'value': 10},
{'json_schema_version': 10, 'op': 'replace',
'path': ['locations'], 'value': []},
{'json_schema_version': 10, 'op': 'replace', 'path': ['locations'],
'value': [{'url': 'scheme5://path5', 'metadata': {}},
{'url': 'scheme6://path6', 'metadata': {}}]}
]}
self.assertEqual(expected, output)
def test_update_disallowed_attributes(self):
samples = {
'direct_url': '/a/b/c/d',
'self': '/e/f/g/h',
'file': '/e/f/g/h/file',
'schema': '/i/j/k',
}
for key, value in samples.items():
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}]
request.body = jsonutils.dump_as_bytes(body)
try:
self.deserializer.update(request)
except webob.exc.HTTPForbidden:
pass # desired behavior
else:
self.fail("Updating %s did not result in HTTPForbidden" % key)
def test_update_readonly_attributes(self):
samples = {
'id': '00000000-0000-0000-0000-000000000000',
'status': 'active',
'checksum': 'abcdefghijklmnopqrstuvwxyz012345',
'size': 9001,
'virtual_size': 9001,
'created_at': ISOTIME,
'updated_at': ISOTIME,
}
for key, value in samples.items():
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}]
request.body = jsonutils.dump_as_bytes(body)
try:
self.deserializer.update(request)
except webob.exc.HTTPForbidden:
pass # desired behavior
else:
self.fail("Updating %s did not result in HTTPForbidden" % key)
def test_update_reserved_attributes(self):
samples = {
'deleted': False,
'deleted_at': ISOTIME,
}
for key, value in samples.items():
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}]
request.body = jsonutils.dump_as_bytes(body)
try:
self.deserializer.update(request)
except webob.exc.HTTPForbidden:
pass # desired behavior
else:
self.fail("Updating %s did not result in HTTPForbidden" % key)
def test_update_invalid_attributes(self):
keys = [
'noslash',
'///twoslash',
'/two/ /slash',
'/ / ',
'/trailingslash/',
'/lone~tilde',
'/trailingtilde~'
]
for key in keys:
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '%s' % key, 'value': 'dummy'}]
request.body = jsonutils.dump_as_bytes(body)
try:
self.deserializer.update(request)
except webob.exc.HTTPBadRequest:
pass # desired behavior
else:
self.fail("Updating %s did not result in HTTPBadRequest" % key)
def test_update_pointer_encoding(self):
samples = {
'/keywith~1slash': [u'keywith/slash'],
'/keywith~0tilde': [u'keywith~tilde'],
'/tricky~01': [u'tricky~1'],
}
for encoded, decoded in samples.items():
request = self._get_fake_patch_request()
doc = [{'op': 'replace', 'path': '%s' % encoded, 'value': 'dummy'}]
request.body = jsonutils.dump_as_bytes(doc)
output = self.deserializer.update(request)
self.assertEqual(decoded, output['changes'][0]['path'])
def test_update_deep_limited_attributes(self):
samples = {
'locations/1/2': [],
}
for key, value in samples.items():
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}]
request.body = jsonutils.dump_as_bytes(body)
try:
self.deserializer.update(request)
except webob.exc.HTTPBadRequest:
pass # desired behavior
else:
self.fail("Updating %s did not result in HTTPBadRequest" % key)
def test_update_v2_1_missing_operations(self):
request = self._get_fake_patch_request()
body = [{'path': '/colburn', 'value': 'arcata'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_v2_1_missing_value(self):
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'path': '/colburn'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_v2_1_missing_path(self):
request = self._get_fake_patch_request()
body = [{'op': 'replace', 'value': 'arcata'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_v2_0_multiple_operations(self):
request = self._get_fake_patch_request(content_type_minor_version=0)
body = [{'replace': '/foo', 'add': '/bar', 'value': 'snore'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_v2_0_missing_operations(self):
request = self._get_fake_patch_request(content_type_minor_version=0)
body = [{'value': 'arcata'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update_v2_0_missing_value(self):
request = self._get_fake_patch_request(content_type_minor_version=0)
body = [{'replace': '/colburn'}]
request.body = jsonutils.dump_as_bytes(body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
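    # The tests below exercise query-string deserialization for the image
    # index: limit, marker, member_status, sorting and property filters.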
def test_index(self):
marker = str(uuid.uuid4())
path = '/images?limit=1&marker=%s&member_status=pending' % marker
request = unit_test_utils.get_fake_request(path)
expected = {'limit': 1,
'marker': marker,
'sort_key': ['created_at'],
'sort_dir': ['desc'],
'member_status': 'pending',
'filters': {}}
output = self.deserializer.index(request)
self.assertEqual(expected, output)
def test_index_with_filter(self):
name = 'My Little Image'
path = '/images?name=%s' % name
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(name, output['filters']['name'])
def test_index_strip_params_from_filters(self):
name = 'My Little Image'
path = '/images?name=%s' % name
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(name, output['filters']['name'])
self.assertEqual(1, len(output['filters']))
def test_index_with_many_filter(self):
name = 'My Little Image'
instance_id = str(uuid.uuid4())
path = ('/images?name=%(name)s&id=%(instance_id)s' %
{'name': name, 'instance_id': instance_id})
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(name, output['filters']['name'])
self.assertEqual(instance_id, output['filters']['id'])
def test_index_with_filter_and_limit(self):
name = 'My Little Image'
path = '/images?name=%s&limit=1' % name
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(name, output['filters']['name'])
self.assertEqual(1, output['limit'])
def test_index_non_integer_limit(self):
request = unit_test_utils.get_fake_request('/images?limit=blah')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_zero_limit(self):
request = unit_test_utils.get_fake_request('/images?limit=0')
expected = {'limit': 0,
'sort_key': ['created_at'],
'member_status': 'accepted',
'sort_dir': ['desc'],
'filters': {}}
output = self.deserializer.index(request)
self.assertEqual(expected, output)
def test_index_negative_limit(self):
request = unit_test_utils.get_fake_request('/images?limit=-1')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_fraction(self):
request = unit_test_utils.get_fake_request('/images?limit=1.1')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_invalid_status(self):
path = '/images?member_status=blah'
request = unit_test_utils.get_fake_request(path)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_marker(self):
marker = str(uuid.uuid4())
path = '/images?marker=%s' % marker
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(marker, output.get('marker'))
def test_index_marker_not_specified(self):
request = unit_test_utils.get_fake_request('/images')
output = self.deserializer.index(request)
self.assertNotIn('marker', output)
def test_index_limit_not_specified(self):
request = unit_test_utils.get_fake_request('/images')
output = self.deserializer.index(request)
self.assertNotIn('limit', output)
def test_index_sort_key_id(self):
request = unit_test_utils.get_fake_request('/images?sort_key=id')
output = self.deserializer.index(request)
expected = {
'sort_key': ['id'],
'sort_dir': ['desc'],
'member_status': 'accepted',
'filters': {}
}
self.assertEqual(expected, output)
def test_index_multiple_sort_keys(self):
request = unit_test_utils.get_fake_request('/images?'
'sort_key=name&'
'sort_key=size')
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['desc'],
'member_status': 'accepted',
'filters': {}
}
self.assertEqual(expected, output)
def test_index_invalid_multiple_sort_keys(self):
# blah is an invalid sort key
request = unit_test_utils.get_fake_request('/images?'
'sort_key=name&'
'sort_key=blah')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_dir_asc(self):
request = unit_test_utils.get_fake_request('/images?sort_dir=asc')
output = self.deserializer.index(request)
expected = {
'sort_key': ['created_at'],
'sort_dir': ['asc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_multiple_sort_dirs(self):
req_string = ('/images?sort_key=name&sort_dir=asc&'
'sort_key=id&sort_dir=desc')
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'id'],
'sort_dir': ['asc', 'desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
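    # The newer 'sort' syntax packs keys and directions into a single
    # parameter: comma-separated keys, each with an optional ':asc' or
    # ':desc' suffix, defaulting to descending. For example,
    # '/images?sort=name:asc,size' yields sort_key=['name', 'size'] and
    # sort_dir=['asc', 'desc'].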
def test_index_new_sorting_syntax_single_key_default_dir(self):
req_string = '/images?sort=name'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name'],
'sort_dir': ['desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_new_sorting_syntax_single_key_desc_dir(self):
req_string = '/images?sort=name:desc'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name'],
'sort_dir': ['desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_new_sorting_syntax_multiple_keys_default_dir(self):
req_string = '/images?sort=name,size'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['desc', 'desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_new_sorting_syntax_multiple_keys_asc_dir(self):
req_string = '/images?sort=name:asc,size:asc'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['asc', 'asc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_new_sorting_syntax_multiple_keys_different_dirs(self):
req_string = '/images?sort=name:desc,size:asc'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['desc', 'asc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_new_sorting_syntax_multiple_keys_optional_dir(self):
req_string = '/images?sort=name:asc,size'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['asc', 'desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
req_string = '/images?sort=name,size:asc'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'size'],
'sort_dir': ['desc', 'asc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
req_string = '/images?sort=name,id:asc,size'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'id', 'size'],
'sort_dir': ['desc', 'asc', 'desc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
req_string = '/images?sort=name:asc,id,size:asc'
request = unit_test_utils.get_fake_request(req_string)
output = self.deserializer.index(request)
expected = {
'sort_key': ['name', 'id', 'size'],
'sort_dir': ['asc', 'desc', 'asc'],
'member_status': 'accepted',
'filters': {}}
self.assertEqual(expected, output)
def test_index_sort_wrong_sort_dirs_number(self):
req_string = '/images?sort_key=name&sort_dir=asc&sort_dir=desc'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_dirs_fewer_than_keys(self):
req_string = ('/images?sort_key=name&sort_dir=asc&sort_key=id&'
'sort_dir=asc&sort_key=created_at')
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_wrong_sort_dirs_number_without_key(self):
req_string = '/images?sort_dir=asc&sort_dir=desc'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_private_key(self):
request = unit_test_utils.get_fake_request('/images?sort_key=min_ram')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_key_invalid_value(self):
# blah is an invalid sort key
request = unit_test_utils.get_fake_request('/images?sort_key=blah')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_sort_dir_invalid_value(self):
# foo is an invalid sort dir
request = unit_test_utils.get_fake_request('/images?sort_dir=foo')
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_new_sorting_syntax_invalid_request(self):
# 'blah' is not a supported sorting key
req_string = '/images?sort=blah'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
req_string = '/images?sort=name,blah'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
# 'foo' isn't a valid sort direction
req_string = '/images?sort=name:foo'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
# 'asc:desc' isn't a valid sort direction
req_string = '/images?sort=name:asc:desc'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_combined_sorting_syntax(self):
req_string = '/images?sort_dir=name&sort=name'
request = unit_test_utils.get_fake_request(req_string)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.index, request)
def test_index_with_tag(self):
path = '/images?tag=%s&tag=%s' % ('x86', '64bit')
request = unit_test_utils.get_fake_request(path)
output = self.deserializer.index(request)
self.assertEqual(sorted(['x86', '64bit']),
sorted(output['filters']['tags']))
def test_image_import(self):
# Bug 1754634: make sure that what's considered valid
# is determined by the config option
self.config(enabled_import_methods=['party-time'])
request = unit_test_utils.get_fake_request()
import_body = {
"method": {
"name": "party-time"
}
}
request.body = jsonutils.dump_as_bytes(import_body)
output = self.deserializer.import_image(request)
expected = {"body": import_body}
self.assertEqual(expected, output)
def test_import_image_invalid_body(self):
request = unit_test_utils.get_fake_request()
import_body = {
"method1": {
"name": "glance-direct"
}
}
request.body = jsonutils.dump_as_bytes(import_body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.import_image,
request)
def test_import_image_invalid_input(self):
request = unit_test_utils.get_fake_request()
import_body = {
"method": {
"abcd": "glance-direct"
}
}
request.body = jsonutils.dump_as_bytes(import_body)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.import_image,
request)
def _get_request_for_method(self, method_name):
request = unit_test_utils.get_fake_request()
import_body = {
"method": {
"name": method_name
}
}
request.body = jsonutils.dump_as_bytes(import_body)
return request
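    # Import methods this test suite knows about; used below to show that
    # validity is governed by enabled_import_methods rather than by a
    # hard-coded list of names.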
KNOWN_IMPORT_METHODS = ['glance-direct', 'web-download']
def test_import_image_invalid_import_method(self):
# Bug 1754634: make sure that what's considered valid
# is determined by the config option. So put known bad
# name in config, and known good name in request
self.config(enabled_import_methods=['bad-method-name'])
for m in self.KNOWN_IMPORT_METHODS:
request = self._get_request_for_method(m)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.import_image,
request)
class TestImagesDeserializerWithExtendedSchema(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesDeserializerWithExtendedSchema, self).setUp()
self.config(allow_additional_image_properties=False)
custom_image_properties = {
'pants': {
'type': 'string',
'enum': ['on', 'off'],
},
}
schema = glance.api.v2.images.get_schema(custom_image_properties)
self.deserializer = glance.api.v2.images.RequestDeserializer(schema)
def test_create(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({
'name': 'image-1',
'pants': 'on'
})
output = self.deserializer.create(request)
expected = {
'image': {'name': 'image-1'},
'extra_properties': {'pants': 'on'},
'tags': [],
}
self.assertEqual(expected, output)
def test_create_bad_data(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({
'name': 'image-1',
'pants': 'borked'
})
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.create, request)
def test_update(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/pants', 'value': 'off'}]
request.body = jsonutils.dump_as_bytes(doc)
output = self.deserializer.update(request)
expected = {'changes': [
{'json_schema_version': 10, 'op': 'add',
'path': ['pants'], 'value': 'off'},
]}
self.assertEqual(expected, output)
def test_update_bad_data(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/pants', 'value': 'cutoffs'}]
request.body = jsonutils.dump_as_bytes(doc)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update,
request)
class TestImagesDeserializerWithAdditionalProperties(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesDeserializerWithAdditionalProperties, self).setUp()
self.config(allow_additional_image_properties=True)
self.deserializer = glance.api.v2.images.RequestDeserializer()
def test_create(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'foo': 'bar'})
output = self.deserializer.create(request)
expected = {'image': {},
'extra_properties': {'foo': 'bar'},
'tags': []}
self.assertEqual(expected, output)
def test_create_with_numeric_property(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'abc': 123})
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.create, request)
def test_update_with_numeric_property(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/foo', 'value': 123}]
request.body = jsonutils.dump_as_bytes(doc)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_create_with_list_property(self):
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'foo': ['bar']})
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.create, request)
def test_update_with_list_property(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/foo', 'value': ['bar', 'baz']}]
request.body = jsonutils.dump_as_bytes(doc)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
def test_update(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/foo', 'value': 'bar'}]
request.body = jsonutils.dump_as_bytes(doc)
output = self.deserializer.update(request)
change = {
'json_schema_version': 10, 'op': 'add',
'path': ['foo'], 'value': 'bar'
}
self.assertEqual({'changes': [change]}, output)
class TestImagesDeserializerNoAdditionalProperties(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesDeserializerNoAdditionalProperties, self).setUp()
self.config(allow_additional_image_properties=False)
self.deserializer = glance.api.v2.images.RequestDeserializer()
def test_create_with_additional_properties_disallowed(self):
self.config(allow_additional_image_properties=False)
request = unit_test_utils.get_fake_request()
request.body = jsonutils.dump_as_bytes({'foo': 'bar'})
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.create, request)
def test_update(self):
request = unit_test_utils.get_fake_request()
request.content_type = 'application/openstack-images-v2.1-json-patch'
doc = [{'op': 'add', 'path': '/foo', 'value': 'bar'}]
request.body = jsonutils.dump_as_bytes(doc)
self.assertRaises(webob.exc.HTTPBadRequest,
self.deserializer.update, request)
class TestImagesSerializer(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesSerializer, self).setUp()
self.serializer = glance.api.v2.images.ResponseSerializer()
self.fixtures = [
# NOTE(bcwaldon): This first fixture has every property defined
_domain_fixture(UUID1, name='image-1', size=1024,
virtual_size=3072, created_at=DATETIME,
updated_at=DATETIME, owner=TENANT1,
visibility='public', container_format='ami',
tags=['one', 'two'], disk_format='ami',
min_ram=128, min_disk=10,
checksum='ca425b88f047ce8ec45ee90e813ada91'),
# NOTE(bcwaldon): This second fixture depends on default behavior
# and sets most values to None
_domain_fixture(UUID2, created_at=DATETIME, updated_at=DATETIME),
]
def test_index(self):
expected = {
'images': [
{
'id': UUID1,
'name': 'image-1',
'status': 'queued',
'visibility': 'public',
'protected': False,
'tags': set(['one', 'two']),
'size': 1024,
'virtual_size': 3072,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID1,
'file': '/v2/images/%s/file' % UUID1,
'schema': '/v2/schemas/image',
'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
},
{
'id': UUID2,
'status': 'queued',
'visibility': 'private',
'protected': False,
'tags': set([]),
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'size': None,
'name': None,
'owner': None,
'min_ram': None,
'min_disk': None,
'checksum': None,
'disk_format': None,
'virtual_size': None,
'container_format': None,
},
],
'first': '/v2/images',
'schema': '/v2/schemas/images',
}
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
result = {'images': self.fixtures}
self.serializer.index(response, result)
actual = jsonutils.loads(response.body)
for image in actual['images']:
image['tags'] = set(image['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
def test_index_next_marker(self):
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
result = {'images': self.fixtures, 'next_marker': UUID2}
self.serializer.index(response, result)
output = jsonutils.loads(response.body)
self.assertEqual('/v2/images?marker=%s' % UUID2, output['next'])
def test_index_carries_query_parameters(self):
url = '/v2/images?limit=10&sort_key=id&sort_dir=asc'
request = webob.Request.blank(url)
response = webob.Response(request=request)
result = {'images': self.fixtures, 'next_marker': UUID2}
self.serializer.index(response, result)
output = jsonutils.loads(response.body)
expected_url = '/v2/images?limit=10&sort_dir=asc&sort_key=id'
self.assertEqual(unit_test_utils.sort_url_by_qs_keys(expected_url),
unit_test_utils.sort_url_by_qs_keys(output['first']))
expect_next = '/v2/images?limit=10&marker=%s&sort_dir=asc&sort_key=id'
self.assertEqual(unit_test_utils.sort_url_by_qs_keys(
expect_next % UUID2),
unit_test_utils.sort_url_by_qs_keys(output['next']))
def test_index_forbidden_get_image_location(self):
"""Make sure the serializer works fine.
No mater if current user is authorized to get image location if the
show_multiple_locations is False.
"""
class ImageLocations(object):
def __len__(self):
raise exception.Forbidden()
self.config(show_multiple_locations=False)
self.config(show_image_direct_url=False)
url = '/v2/images?limit=10&sort_key=id&sort_dir=asc'
request = webob.Request.blank(url)
response = webob.Response(request=request)
result = {'images': self.fixtures}
self.assertEqual(http.OK, response.status_int)
        # The image index should still work even though the user is
        # forbidden from accessing the image locations
result['images'][0].locations = ImageLocations()
self.serializer.index(response, result)
self.assertEqual(http.OK, response.status_int)
def test_show_full_fixture(self):
expected = {
'id': UUID1,
'name': 'image-1',
'status': 'queued',
'visibility': 'public',
'protected': False,
'tags': set(['one', 'two']),
'size': 1024,
'virtual_size': 3072,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID1,
'file': '/v2/images/%s/file' % UUID1,
'schema': '/v2/schemas/image',
'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.show(response, self.fixtures[0])
actual = jsonutils.loads(response.body)
actual['tags'] = set(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
def test_show_minimal_fixture(self):
expected = {
'id': UUID2,
'status': 'queued',
'visibility': 'private',
'protected': False,
'tags': [],
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'size': None,
'name': None,
'owner': None,
'min_ram': None,
'min_disk': None,
'checksum': None,
'disk_format': None,
'virtual_size': None,
'container_format': None,
}
response = webob.Response()
self.serializer.show(response, self.fixtures[1])
self.assertEqual(expected, jsonutils.loads(response.body))
def test_create(self):
expected = {
'id': UUID1,
'name': 'image-1',
'status': 'queued',
'visibility': 'public',
'protected': False,
'tags': ['one', 'two'],
'size': 1024,
'virtual_size': 3072,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID1,
'file': '/v2/images/%s/file' % UUID1,
'schema': '/v2/schemas/image',
'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.create(response, self.fixtures[0])
self.assertEqual(http.CREATED, response.status_int)
actual = jsonutils.loads(response.body)
actual['tags'] = sorted(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
self.assertEqual('/v2/images/%s' % UUID1, response.location)
def test_create_has_import_methods_header(self):
# NOTE(rosmaita): enabled_import_methods is defined as type
# oslo.config.cfg.ListOpt, so it is stored internally as a list
# but is converted to a string for output in the HTTP header
header_name = 'OpenStack-image-import-methods'
# check multiple methods
enabled_methods = ['one', 'two', 'three']
self.config(enabled_import_methods=enabled_methods)
response = webob.Response()
self.serializer.create(response, self.fixtures[0])
self.assertEqual(http.CREATED, response.status_int)
header_value = response.headers.get(header_name)
self.assertIsNotNone(header_value)
self.assertItemsEqual(enabled_methods, header_value.split(','))
# check single method
self.config(enabled_import_methods=['swift-party-time'])
response = webob.Response()
self.serializer.create(response, self.fixtures[0])
self.assertEqual(http.CREATED, response.status_int)
header_value = response.headers.get(header_name)
self.assertIsNotNone(header_value)
self.assertEqual('swift-party-time', header_value)
# no header for empty config value
self.config(enabled_import_methods=[])
response = webob.Response()
self.serializer.create(response, self.fixtures[0])
self.assertEqual(http.CREATED, response.status_int)
headers = response.headers.keys()
self.assertNotIn(header_name, headers)
def test_update(self):
expected = {
'id': UUID1,
'name': 'image-1',
'status': 'queued',
'visibility': 'public',
'protected': False,
'tags': set(['one', 'two']),
'size': 1024,
'virtual_size': 3072,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID1,
'file': '/v2/images/%s/file' % UUID1,
'schema': '/v2/schemas/image',
'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.update(response, self.fixtures[0])
actual = jsonutils.loads(response.body)
actual['tags'] = set(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
def test_import_image(self):
response = webob.Response()
self.serializer.import_image(response, {})
self.assertEqual(http.ACCEPTED, response.status_int)
self.assertEqual('0', response.headers['Content-Length'])
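# The tests below check that non-ASCII names, tags and extra properties
# survive a round trip through the serializer.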
class TestImagesSerializerWithUnicode(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesSerializerWithUnicode, self).setUp()
self.serializer = glance.api.v2.images.ResponseSerializer()
self.fixtures = [
# NOTE(bcwaldon): This first fixture has every property defined
_domain_fixture(UUID1, **{
'name': u'OpenStack\u2122-1',
'size': 1024,
'virtual_size': 3072,
'tags': [u'\u2160', u'\u2161'],
'created_at': DATETIME,
'updated_at': DATETIME,
'owner': TENANT1,
'visibility': 'public',
'container_format': 'ami',
'disk_format': 'ami',
'min_ram': 128,
'min_disk': 10,
'checksum': u'ca425b88f047ce8ec45ee90e813ada91',
'extra_properties': {'lang': u'Fran\u00E7ais',
u'dispos\u00E9': u'f\u00E2ch\u00E9'},
}),
]
def test_index(self):
expected = {
u'images': [
{
u'id': UUID1,
u'name': u'OpenStack\u2122-1',
u'status': u'queued',
u'visibility': u'public',
u'protected': False,
u'tags': [u'\u2160', u'\u2161'],
u'size': 1024,
u'virtual_size': 3072,
u'checksum': u'ca425b88f047ce8ec45ee90e813ada91',
u'container_format': u'ami',
u'disk_format': u'ami',
u'min_ram': 128,
u'min_disk': 10,
u'created_at': six.text_type(ISOTIME),
u'updated_at': six.text_type(ISOTIME),
u'self': u'/v2/images/%s' % UUID1,
u'file': u'/v2/images/%s/file' % UUID1,
u'schema': u'/v2/schemas/image',
u'lang': u'Fran\u00E7ais',
u'dispos\u00E9': u'f\u00E2ch\u00E9',
u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df',
},
],
u'first': u'/v2/images',
u'schema': u'/v2/schemas/images',
}
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
result = {u'images': self.fixtures}
self.serializer.index(response, result)
actual = jsonutils.loads(response.body)
actual['images'][0]['tags'] = sorted(actual['images'][0]['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
def test_show_full_fixture(self):
expected = {
u'id': UUID1,
u'name': u'OpenStack\u2122-1',
u'status': u'queued',
u'visibility': u'public',
u'protected': False,
u'tags': set([u'\u2160', u'\u2161']),
u'size': 1024,
u'virtual_size': 3072,
u'checksum': u'ca425b88f047ce8ec45ee90e813ada91',
u'container_format': u'ami',
u'disk_format': u'ami',
u'min_ram': 128,
u'min_disk': 10,
u'created_at': six.text_type(ISOTIME),
u'updated_at': six.text_type(ISOTIME),
u'self': u'/v2/images/%s' % UUID1,
u'file': u'/v2/images/%s/file' % UUID1,
u'schema': u'/v2/schemas/image',
u'lang': u'Fran\u00E7ais',
u'dispos\u00E9': u'f\u00E2ch\u00E9',
u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.show(response, self.fixtures[0])
actual = jsonutils.loads(response.body)
actual['tags'] = set(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
def test_create(self):
expected = {
u'id': UUID1,
u'name': u'OpenStack\u2122-1',
u'status': u'queued',
u'visibility': u'public',
u'protected': False,
u'tags': [u'\u2160', u'\u2161'],
u'size': 1024,
u'virtual_size': 3072,
u'checksum': u'ca425b88f047ce8ec45ee90e813ada91',
u'container_format': u'ami',
u'disk_format': u'ami',
u'min_ram': 128,
u'min_disk': 10,
u'created_at': six.text_type(ISOTIME),
u'updated_at': six.text_type(ISOTIME),
u'self': u'/v2/images/%s' % UUID1,
u'file': u'/v2/images/%s/file' % UUID1,
u'schema': u'/v2/schemas/image',
u'lang': u'Fran\u00E7ais',
u'dispos\u00E9': u'f\u00E2ch\u00E9',
u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.create(response, self.fixtures[0])
self.assertEqual(http.CREATED, response.status_int)
actual = jsonutils.loads(response.body)
actual['tags'] = sorted(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
self.assertEqual('/v2/images/%s' % UUID1, response.location)
def test_update(self):
expected = {
u'id': UUID1,
u'name': u'OpenStack\u2122-1',
u'status': u'queued',
u'visibility': u'public',
u'protected': False,
u'tags': set([u'\u2160', u'\u2161']),
u'size': 1024,
u'virtual_size': 3072,
u'checksum': u'ca425b88f047ce8ec45ee90e813ada91',
u'container_format': u'ami',
u'disk_format': u'ami',
u'min_ram': 128,
u'min_disk': 10,
u'created_at': six.text_type(ISOTIME),
u'updated_at': six.text_type(ISOTIME),
u'self': u'/v2/images/%s' % UUID1,
u'file': u'/v2/images/%s/file' % UUID1,
u'schema': u'/v2/schemas/image',
u'lang': u'Fran\u00E7ais',
u'dispos\u00E9': u'f\u00E2ch\u00E9',
u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df',
}
response = webob.Response()
self.serializer.update(response, self.fixtures[0])
actual = jsonutils.loads(response.body)
actual['tags'] = set(actual['tags'])
self.assertEqual(expected, actual)
self.assertEqual('application/json', response.content_type)
class TestImagesSerializerWithExtendedSchema(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesSerializerWithExtendedSchema, self).setUp()
self.config(allow_additional_image_properties=False)
custom_image_properties = {
'color': {
'type': 'string',
'enum': ['red', 'green'],
},
}
schema = glance.api.v2.images.get_schema(custom_image_properties)
self.serializer = glance.api.v2.images.ResponseSerializer(schema)
props = dict(color='green', mood='grouchy')
self.fixture = _domain_fixture(
UUID2, name='image-2', owner=TENANT2,
checksum='ca425b88f047ce8ec45ee90e813ada91',
created_at=DATETIME, updated_at=DATETIME, size=1024,
virtual_size=3072, extra_properties=props)
def test_show(self):
expected = {
'id': UUID2,
'name': 'image-2',
'status': 'queued',
'visibility': 'private',
'protected': False,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'tags': [],
'size': 1024,
'virtual_size': 3072,
'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
'color': 'green',
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'min_ram': None,
'min_disk': None,
'disk_format': None,
'container_format': None,
}
response = webob.Response()
self.serializer.show(response, self.fixture)
self.assertEqual(expected, jsonutils.loads(response.body))
def test_show_reports_invalid_data(self):
self.fixture.extra_properties['color'] = 'invalid'
expected = {
'id': UUID2,
'name': 'image-2',
'status': 'queued',
'visibility': 'private',
'protected': False,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'tags': [],
'size': 1024,
'virtual_size': 3072,
'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
'color': 'invalid',
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'min_ram': None,
'min_disk': None,
'disk_format': None,
'container_format': None,
}
response = webob.Response()
self.serializer.show(response, self.fixture)
self.assertEqual(expected, jsonutils.loads(response.body))
class TestImagesSerializerWithAdditionalProperties(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesSerializerWithAdditionalProperties, self).setUp()
self.config(allow_additional_image_properties=True)
self.fixture = _domain_fixture(
UUID2, name='image-2', owner=TENANT2,
checksum='ca425b88f047ce8ec45ee90e813ada91',
created_at=DATETIME, updated_at=DATETIME, size=1024,
virtual_size=3072, extra_properties={'marx': 'groucho'})
def test_show(self):
serializer = glance.api.v2.images.ResponseSerializer()
expected = {
'id': UUID2,
'name': 'image-2',
'status': 'queued',
'visibility': 'private',
'protected': False,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'marx': 'groucho',
'tags': [],
'size': 1024,
'virtual_size': 3072,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
'min_ram': None,
'min_disk': None,
'disk_format': None,
'container_format': None,
}
response = webob.Response()
serializer.show(response, self.fixture)
self.assertEqual(expected, jsonutils.loads(response.body))
def test_show_invalid_additional_property(self):
"""Ensure that the serializer passes
through invalid additional properties.
It must not complains with i.e. non-string.
"""
serializer = glance.api.v2.images.ResponseSerializer()
self.fixture.extra_properties['marx'] = 123
expected = {
'id': UUID2,
'name': 'image-2',
'status': 'queued',
'visibility': 'private',
'protected': False,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'marx': 123,
'tags': [],
'size': 1024,
'virtual_size': 3072,
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
'min_ram': None,
'min_disk': None,
'disk_format': None,
'container_format': None,
}
response = webob.Response()
serializer.show(response, self.fixture)
self.assertEqual(expected, jsonutils.loads(response.body))
def test_show_with_additional_properties_disabled(self):
self.config(allow_additional_image_properties=False)
serializer = glance.api.v2.images.ResponseSerializer()
expected = {
'id': UUID2,
'name': 'image-2',
'status': 'queued',
'visibility': 'private',
'protected': False,
'checksum': 'ca425b88f047ce8ec45ee90e813ada91',
'tags': [],
'size': 1024,
'virtual_size': 3072,
'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
'created_at': ISOTIME,
'updated_at': ISOTIME,
'self': '/v2/images/%s' % UUID2,
'file': '/v2/images/%s/file' % UUID2,
'schema': '/v2/schemas/image',
'min_ram': None,
'min_disk': None,
'disk_format': None,
'container_format': None,
}
response = webob.Response()
serializer.show(response, self.fixture)
self.assertEqual(expected, jsonutils.loads(response.body))
class TestImagesSerializerDirectUrl(test_utils.BaseTestCase):
def setUp(self):
super(TestImagesSerializerDirectUrl, self).setUp()
self.serializer = glance.api.v2.images.ResponseSerializer()
self.active_image = _domain_fixture(
UUID1, name='image-1', visibility='public',
status='active', size=1024, virtual_size=3072,
created_at=DATETIME, updated_at=DATETIME,
locations=[{'id': '1', 'url': 'http://some/fake/location',
'metadata': {}, 'status': 'active'}])
self.queued_image = _domain_fixture(
UUID2, name='image-2', status='active',
created_at=DATETIME, updated_at=DATETIME,
checksum='ca425b88f047ce8ec45ee90e813ada91')
self.location_data_image_url = 'http://abc.com/somewhere'
self.location_data_image_meta = {'key': 98231}
self.location_data_image = _domain_fixture(
UUID2, name='image-2', status='active',
created_at=DATETIME, updated_at=DATETIME,
locations=[{'id': '2',
'url': self.location_data_image_url,
'metadata': self.location_data_image_meta,
'status': 'active'}])
def _do_index(self):
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
self.serializer.index(response,
{'images': [self.active_image,
self.queued_image]})
return jsonutils.loads(response.body)['images']
def _do_show(self, image):
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
self.serializer.show(response, image)
return jsonutils.loads(response.body)
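    # Exposure of location data is gated by two config options:
    # show_image_direct_url adds a 'direct_url' field, while
    # show_multiple_locations exposes the full 'locations' list.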
def test_index_store_location_enabled(self):
self.config(show_image_direct_url=True)
images = self._do_index()
# NOTE(markwash): ordering sanity check
self.assertEqual(UUID1, images[0]['id'])
self.assertEqual(UUID2, images[1]['id'])
self.assertEqual('http://some/fake/location', images[0]['direct_url'])
self.assertNotIn('direct_url', images[1])
def test_index_store_multiple_location_enabled(self):
self.config(show_multiple_locations=True)
request = webob.Request.blank('/v2/images')
response = webob.Response(request=request)
self.serializer.index(response,
{'images': [self.location_data_image]})
images = jsonutils.loads(response.body)['images']
location = images[0]['locations'][0]
self.assertEqual(location['url'], self.location_data_image_url)
self.assertEqual(location['metadata'], self.location_data_image_meta)
def test_index_store_location_explicitly_disabled(self):
self.config(show_image_direct_url=False)
images = self._do_index()
self.assertNotIn('direct_url', images[0])
self.assertNotIn('direct_url', images[1])
def test_show_location_enabled(self):
self.config(show_image_direct_url=True)
image = self._do_show(self.active_image)
self.assertEqual('http://some/fake/location', image['direct_url'])
def test_show_location_enabled_but_not_set(self):
self.config(show_image_direct_url=True)
image = self._do_show(self.queued_image)
self.assertNotIn('direct_url', image)
def test_show_location_explicitly_disabled(self):
self.config(show_image_direct_url=False)
image = self._do_show(self.active_image)
self.assertNotIn('direct_url', image)
class TestImageSchemaFormatConfiguration(test_utils.BaseTestCase):
def test_default_disk_formats(self):
schema = glance.api.v2.images.get_schema()
expected = [None, 'ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk',
'raw', 'qcow2', 'vdi', 'iso', 'ploop']
actual = schema.properties['disk_format']['enum']
self.assertEqual(expected, actual)
def test_custom_disk_formats(self):
self.config(disk_formats=['gabe'], group="image_format")
schema = glance.api.v2.images.get_schema()
expected = [None, 'gabe']
actual = schema.properties['disk_format']['enum']
self.assertEqual(expected, actual)
def test_default_container_formats(self):
schema = glance.api.v2.images.get_schema()
expected = [None, 'ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker']
actual = schema.properties['container_format']['enum']
self.assertEqual(expected, actual)
def test_custom_container_formats(self):
self.config(container_formats=['mark'], group="image_format")
schema = glance.api.v2.images.get_schema()
expected = [None, 'mark']
actual = schema.properties['container_format']['enum']
self.assertEqual(expected, actual)
class TestImageSchemaDeterminePropertyBasis(test_utils.BaseTestCase):
def test_custom_property_marked_as_non_base(self):
self.config(allow_additional_image_properties=False)
custom_image_properties = {
'pants': {
'type': 'string',
},
}
schema = glance.api.v2.images.get_schema(custom_image_properties)
self.assertFalse(schema.properties['pants'].get('is_base', True))
def test_base_property_marked_as_base(self):
schema = glance.api.v2.images.get_schema()
self.assertTrue(schema.properties['disk_format'].get('is_base', True))
| 44.579592 | 79 | 0.576742 | 19,307 | 185,674 | 5.312063 | 0.042005 | 0.044169 | 0.035491 | 0.04103 | 0.867024 | 0.841624 | 0.815181 | 0.785433 | 0.758463 | 0.734234 | 0 | 0.018055 | 0.297214 | 185,674 | 4,164 | 80 | 44.590298 | 0.767911 | 0.025275 | 0 | 0.687465 | 0 | 0 | 0.129864 | 0.019411 | 0 | 0 | 0 | 0 | 0.136162 | 1 | 0.089018 | false | 0.001941 | 0.018026 | 0 | 0.113422 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5dab1662376a5e53a6942409c9b09120fe39ba6f | 174 | py | Python | mydjango/tausite/admin.py | aborbic/myDjangoProject | 3936c5988fc73a42481fad65de7ea2e671de2664 | [
"MIT"
] | null | null | null | mydjango/tausite/admin.py | aborbic/myDjangoProject | 3936c5988fc73a42481fad65de7ea2e671de2664 | [
"MIT"
] | null | null | null | mydjango/tausite/admin.py | aborbic/myDjangoProject | 3936c5988fc73a42481fad65de7ea2e671de2664 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Home, Announcements, Picture, Email
admin.site.register(Home)
admin.site.register(Announcements)
admin.site.register(Picture)
admin.site.register(Email)
| 19.333333 | 34 | 0.810345 | 24 | 174 | 5.875 | 0.5 | 0.255319 | 0.48227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08046 | 174 | 8 | 35 | 21.75 | 0.88125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5dcebbb82473c965093c475e05b9d994604981c1 | 41 | py | Python | contrib/packaging-python/conda/setvarnumpy.py | lucasw/chrono | e79d8c761c718ecb4c796725cff37026f357da8c | [
"BSD-3-Clause"
] | 1,383 | 2015-02-04T14:17:40.000Z | 2022-03-30T04:58:16.000Z | contrib/packaging-python/conda/setvarnumpy.py | pchaoWT/chrono | fd68d37d1d4ee75230dc1eea78ceff91cca7ac32 | [
"BSD-3-Clause"
] | 245 | 2015-01-11T15:30:51.000Z | 2022-03-30T21:28:54.000Z | contrib/packaging-python/conda/setvarnumpy.py | pchaoWT/chrono | fd68d37d1d4ee75230dc1eea78ceff91cca7ac32 | [
"BSD-3-Clause"
] | 351 | 2015-02-04T14:17:47.000Z | 2022-03-30T04:42:52.000Z | import numpy
print(numpy.get_include())
| 10.25 | 26 | 0.780488 | 6 | 41 | 5.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 3 | 27 | 13.666667 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5dd418c83ab725a2b92d104a6013cf1e80e2849f | 3,378 | py | Python | tests/test_phonenumber.py | knwin/python-myanmar | d1a812178848054d1f8795c3a36630ac33750289 | [
"MIT"
] | 2 | 2019-02-06T14:48:59.000Z | 2019-10-20T15:39:48.000Z | tests/test_phonenumber.py | knwin/python-myanmar | d1a812178848054d1f8795c3a36630ac33750289 | [
"MIT"
] | null | null | null | tests/test_phonenumber.py | knwin/python-myanmar | d1a812178848054d1f8795c3a36630ac33750289 | [
"MIT"
] | 1 | 2019-02-06T14:49:11.000Z | 2019-02-06T14:49:11.000Z | import re
from myanmar import phonenumber as mp
def test_with_mobile_code():
assert re.match(mp.mobile_code_re, "09") is not None
assert re.match(mp.mobile_code_re, "9") is None
assert re.match(mp.mobile_code_re, "") is None
def test_with_country_code():
assert re.match(mp.country_code_re, "+959") is not None
assert re.match(mp.country_code_re, "959") is not None
assert re.match(mp.country_code_re, "") is None
def test_telenor():
assert re.match(mp.telenor_re, "791000481") is not None
assert re.match(mp.telenor_re, "763619515") is not None
assert re.match(mp.telenor_re, "991000481") is None
def test_ooredoo():
assert re.match(mp.ooredoo_re, "962038186") is not None
assert re.match(mp.ooredoo_re, "791000481") is None
assert re.match(mp.ooredoo_re, "763619515") is None
assert re.match(mp.ooredoo_re, "950954940") is not None
def test_mytel():
assert re.match(mp.mytel_re, "691778993") is not None
assert re.match(mp.mytel_re, "791000481") is None
assert re.match(mp.mytel_re, "690000966") is not None
assert re.match(mp.mytel_re, "683004063") is not None
assert re.match(mp.mytel_re, "783004063") is None
def test_mpt():
assert re.match(mp.mpt_re, "420090065") is not None
assert re.match(mp.mpt_re, "5093449") is not None
assert re.match(mp.mpt_re, "898941022") is not None
assert re.match(mp.mpt_re, "5093449") is not None
assert re.match(mp.mpt_re, "763619515") is None
def test_all_operators_re():
assert re.match(mp.all_operators_re, "420090065") is not None
assert re.match(mp.all_operators_re, "5093449") is not None
assert re.match(mp.all_operators_re, "962038186") is not None
assert re.match(mp.all_operators_re, "791000481") is not None
assert re.match(mp.all_operators_re, "763619515") is not None
def test_mm_phone_re():
assert re.match(mp.mm_phone_re, "+959420090065") is not None
assert re.match(mp.mm_phone_re, "09420090065") is not None
assert re.match(mp.mm_phone_re, "095093449") is not None
assert re.match(mp.mm_phone_re, "09962038186") is not None
assert re.match(mp.mm_phone_re, "959791000481") is not None
assert re.match(mp.mm_phone_re, "763619515") is not None
assert re.match(mp.mm_phone_re, "9763619515") is None
def test_is_valid_mm_phone_number():
assert mp.is_valid_phonenumber("+959420090065") is True
assert mp.is_valid_phonenumber("959420090065") is True
assert mp.is_valid_phonenumber("09420090065") is True
assert mp.is_valid_phonenumber("9420090065") is False
assert mp.is_valid_phonenumber("420090065") is True
assert mp.is_valid_phonenumber("+95") is False
assert mp.is_valid_phonenumber("959") is False
assert mp.is_valid_phonenumber("+95420090065") is False
def test_normalize_mm_phone_number():
assert mp.normalize_phonenumber("+959420090065") == 959420090065
assert mp.normalize_phonenumber("959420090065") == 959420090065
assert mp.normalize_phonenumber("09420090065") == 959420090065
assert mp.normalize_phonenumber("420090065") == 959420090065
assert mp.normalize_phonenumber("+959972991100") == 959972991100
assert mp.normalize_phonenumber("959972991100") == 959972991100
assert mp.normalize_phonenumber("09972991100") == 959972991100
assert mp.normalize_phonenumber("972991100") == 959972991100
| 39.27907 | 68 | 0.734754 | 521 | 3,378 | 4.568138 | 0.111324 | 0.117647 | 0.191176 | 0.220588 | 0.815966 | 0.744118 | 0.697059 | 0.587815 | 0.519328 | 0.405882 | 0 | 0.184965 | 0.153345 | 3,378 | 85 | 69 | 39.741176 | 0.647203 | 0 | 0 | 0.031746 | 0 | 0 | 0.130255 | 0 | 0 | 0 | 0 | 0 | 0.809524 | 1 | 0.15873 | true | 0 | 0.031746 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d1acad433e1c92d1f0eb62de419432a684b66ff | 48 | py | Python | xendit/models/recurringpayment/__init__.py | adyaksaw/xendit-python | 47b05f2a6582104a274dc12a172c6421de86febc | [
"MIT"
] | 10 | 2020-10-31T23:34:34.000Z | 2022-03-08T19:08:55.000Z | xendit/models/recurringpayment/__init__.py | adyaksaw/xendit-python | 47b05f2a6582104a274dc12a172c6421de86febc | [
"MIT"
] | 22 | 2020-07-30T14:25:07.000Z | 2022-03-31T03:55:46.000Z | xendit/models/recurringpayment/__init__.py | adyaksaw/xendit-python | 47b05f2a6582104a274dc12a172c6421de86febc | [
"MIT"
] | 11 | 2020-07-28T08:09:40.000Z | 2022-03-18T00:14:02.000Z | from .recurring_payment import RecurringPayment
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d218a6ef8a5b24db6369aba11beba122ea6eecd | 11,783 | py | Python | tests/components/homekit/test_type_switches.py | miccico/core | 14c205384171dee59c1a908f8449f9864778b2dc | [
"Apache-2.0"
] | 6 | 2017-08-02T19:26:39.000Z | 2020-03-14T22:47:41.000Z | tests/components/homekit/test_type_switches.py | miccico/core | 14c205384171dee59c1a908f8449f9864778b2dc | [
"Apache-2.0"
] | 57 | 2020-10-15T06:47:00.000Z | 2022-03-31T06:11:18.000Z | tests/components/homekit/test_type_switches.py | miccico/core | 14c205384171dee59c1a908f8449f9864778b2dc | [
"Apache-2.0"
] | 14 | 2018-08-19T16:28:26.000Z | 2021-09-02T18:26:53.000Z | """Test different accessory types: Switches."""
from datetime import timedelta
import pytest
from homeassistant.components.homekit.const import (
ATTR_VALUE,
TYPE_FAUCET,
TYPE_SHOWER,
TYPE_SPRINKLER,
TYPE_VALVE,
)
from homeassistant.components.homekit.type_switches import Outlet, Switch, Vacuum, Valve
from homeassistant.components.vacuum import (
DOMAIN as VACUUM_DOMAIN,
SERVICE_RETURN_TO_BASE,
SERVICE_START,
SERVICE_TURN_OFF,
SERVICE_TURN_ON,
STATE_CLEANING,
STATE_DOCKED,
SUPPORT_RETURN_HOME,
SUPPORT_START,
)
from homeassistant.const import (
ATTR_ENTITY_ID,
ATTR_SUPPORTED_FEATURES,
CONF_TYPE,
STATE_OFF,
STATE_ON,
)
from homeassistant.core import split_entity_id
import homeassistant.util.dt as dt_util
from tests.common import async_fire_time_changed, async_mock_service
async def test_outlet_set_state(hass, hk_driver, events):
"""Test if Outlet accessory and HA are updated accordingly."""
entity_id = "switch.outlet_test"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Outlet(hass, hk_driver, "Outlet", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.aid == 2
assert acc.category == 7 # Outlet
assert acc.char_on.value is False
assert acc.char_outlet_in_use.value is True
hass.states.async_set(entity_id, STATE_ON)
await hass.async_block_till_done()
assert acc.char_on.value is True
hass.states.async_set(entity_id, STATE_OFF)
await hass.async_block_till_done()
assert acc.char_on.value is False
# Set from HomeKit
call_turn_on = async_mock_service(hass, "switch", "turn_on")
call_turn_off = async_mock_service(hass, "switch", "turn_off")
await hass.async_add_executor_job(acc.char_on.client_update_value, True)
await hass.async_block_till_done()
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
await hass.async_add_executor_job(acc.char_on.client_update_value, False)
await hass.async_block_till_done()
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
@pytest.mark.parametrize(
"entity_id, attrs",
[
("automation.test", {}),
("input_boolean.test", {}),
("remote.test", {}),
("script.test", {}),
("switch.test", {}),
],
)
async def test_switch_set_state(hass, hk_driver, entity_id, attrs, events):
"""Test if accessory and HA are updated accordingly."""
domain = split_entity_id(entity_id)[0]
hass.states.async_set(entity_id, None, attrs)
await hass.async_block_till_done()
acc = Switch(hass, hk_driver, "Switch", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.aid == 2
assert acc.category == 8 # Switch
assert acc.activate_only is False
assert acc.char_on.value is False
hass.states.async_set(entity_id, STATE_ON, attrs)
await hass.async_block_till_done()
assert acc.char_on.value is True
hass.states.async_set(entity_id, STATE_OFF, attrs)
await hass.async_block_till_done()
assert acc.char_on.value is False
# Set from HomeKit
call_turn_on = async_mock_service(hass, domain, "turn_on")
call_turn_off = async_mock_service(hass, domain, "turn_off")
await hass.async_add_executor_job(acc.char_on.client_update_value, True)
await hass.async_block_till_done()
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
await hass.async_add_executor_job(acc.char_on.client_update_value, False)
await hass.async_block_till_done()
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
async def test_valve_set_state(hass, hk_driver, events):
"""Test if Valve accessory and HA are updated accordingly."""
entity_id = "switch.valve_test"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Valve(hass, hk_driver, "Valve", entity_id, 2, {CONF_TYPE: TYPE_FAUCET})
await acc.run_handler()
await hass.async_block_till_done()
assert acc.category == 29 # Faucet
assert acc.char_valve_type.value == 3 # Water faucet
acc = Valve(hass, hk_driver, "Valve", entity_id, 2, {CONF_TYPE: TYPE_SHOWER})
await acc.run_handler()
await hass.async_block_till_done()
assert acc.category == 30 # Shower
assert acc.char_valve_type.value == 2 # Shower head
acc = Valve(hass, hk_driver, "Valve", entity_id, 2, {CONF_TYPE: TYPE_SPRINKLER})
await acc.run_handler()
await hass.async_block_till_done()
assert acc.category == 28 # Sprinkler
assert acc.char_valve_type.value == 1 # Irrigation
acc = Valve(hass, hk_driver, "Valve", entity_id, 2, {CONF_TYPE: TYPE_VALVE})
await acc.run_handler()
await hass.async_block_till_done()
assert acc.aid == 2
assert acc.category == 29 # Faucet
assert acc.char_active.value == 0
assert acc.char_in_use.value == 0
assert acc.char_valve_type.value == 0 # Generic Valve
hass.states.async_set(entity_id, STATE_ON)
await hass.async_block_till_done()
assert acc.char_active.value == 1
assert acc.char_in_use.value == 1
hass.states.async_set(entity_id, STATE_OFF)
await hass.async_block_till_done()
assert acc.char_active.value == 0
assert acc.char_in_use.value == 0
# Set from HomeKit
call_turn_on = async_mock_service(hass, "switch", "turn_on")
call_turn_off = async_mock_service(hass, "switch", "turn_off")
await hass.async_add_executor_job(acc.char_active.client_update_value, 1)
await hass.async_block_till_done()
assert acc.char_in_use.value == 1
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
await hass.async_add_executor_job(acc.char_active.client_update_value, 0)
await hass.async_block_till_done()
assert acc.char_in_use.value == 0
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
async def test_vacuum_set_state_with_returnhome_and_start_support(
hass, hk_driver, events
):
"""Test if Vacuum accessory and HA are updated accordingly."""
entity_id = "vacuum.roomba"
hass.states.async_set(
entity_id, None, {ATTR_SUPPORTED_FEATURES: SUPPORT_RETURN_HOME | SUPPORT_START}
)
await hass.async_block_till_done()
acc = Vacuum(hass, hk_driver, "Vacuum", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.aid == 2
assert acc.category == 8 # Switch
assert acc.char_on.value == 0
hass.states.async_set(
entity_id,
STATE_CLEANING,
{ATTR_SUPPORTED_FEATURES: SUPPORT_RETURN_HOME | SUPPORT_START},
)
await hass.async_block_till_done()
assert acc.char_on.value == 1
hass.states.async_set(
entity_id,
STATE_DOCKED,
{ATTR_SUPPORTED_FEATURES: SUPPORT_RETURN_HOME | SUPPORT_START},
)
await hass.async_block_till_done()
assert acc.char_on.value == 0
# Set from HomeKit
call_start = async_mock_service(hass, VACUUM_DOMAIN, SERVICE_START)
call_return_to_base = async_mock_service(
hass, VACUUM_DOMAIN, SERVICE_RETURN_TO_BASE
)
await hass.async_add_executor_job(acc.char_on.client_update_value, 1)
await hass.async_block_till_done()
assert acc.char_on.value == 1
assert call_start
assert call_start[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
await hass.async_add_executor_job(acc.char_on.client_update_value, 0)
await hass.async_block_till_done()
assert acc.char_on.value == 0
assert call_return_to_base
assert call_return_to_base[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
async def test_vacuum_set_state_without_returnhome_and_start_support(
hass, hk_driver, events
):
"""Test if Vacuum accessory and HA are updated accordingly."""
entity_id = "vacuum.roomba"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Vacuum(hass, hk_driver, "Vacuum", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.aid == 2
assert acc.category == 8 # Switch
assert acc.char_on.value == 0
hass.states.async_set(entity_id, STATE_ON)
await hass.async_block_till_done()
assert acc.char_on.value == 1
hass.states.async_set(entity_id, STATE_OFF)
await hass.async_block_till_done()
assert acc.char_on.value == 0
# Set from HomeKit
call_turn_on = async_mock_service(hass, VACUUM_DOMAIN, SERVICE_TURN_ON)
call_turn_off = async_mock_service(hass, VACUUM_DOMAIN, SERVICE_TURN_OFF)
await hass.async_add_executor_job(acc.char_on.client_update_value, 1)
await hass.async_block_till_done()
assert acc.char_on.value == 1
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
await hass.async_add_executor_job(acc.char_on.client_update_value, 0)
await hass.async_block_till_done()
assert acc.char_on.value == 0
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
async def test_reset_switch(hass, hk_driver, events):
"""Test if switch accessory is reset correctly."""
domain = "scene"
entity_id = "scene.test"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Switch(hass, hk_driver, "Switch", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.activate_only is True
assert acc.char_on.value is False
call_turn_on = async_mock_service(hass, domain, "turn_on")
call_turn_off = async_mock_service(hass, domain, "turn_off")
await hass.async_add_executor_job(acc.char_on.client_update_value, True)
await hass.async_block_till_done()
assert acc.char_on.value is True
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert acc.char_on.value is False
assert len(events) == 1
assert not call_turn_off
await hass.async_add_executor_job(acc.char_on.client_update_value, False)
await hass.async_block_till_done()
assert acc.char_on.value is False
assert len(events) == 1
async def test_reset_switch_reload(hass, hk_driver, events):
"""Test reset switch after script reload."""
entity_id = "script.test"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Switch(hass, hk_driver, "Switch", entity_id, 2, None)
await acc.run_handler()
await hass.async_block_till_done()
assert acc.activate_only is False
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
assert acc.char_on.value is False
| 32.913408 | 88 | 0.721293 | 1,795 | 11,783 | 4.406128 | 0.072423 | 0.062713 | 0.093817 | 0.098495 | 0.852952 | 0.833734 | 0.807182 | 0.784549 | 0.757365 | 0.740296 | 0 | 0.009552 | 0.182551 | 11,783 | 357 | 89 | 33.005602 | 0.811566 | 0.020029 | 0 | 0.675182 | 0 | 0 | 0.027765 | 0 | 0 | 0 | 0 | 0 | 0.354015 | 1 | 0 | false | 0 | 0.032847 | 0 | 0.032847 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d26a693d048718ad6893858d64c7c489609f472 | 37 | py | Python | src/coingeckoprice/__init__.py | joshallen64/coingeckoprice | 776e45e6059249ead64a6ed6baf17627c1e8c23b | [
"MIT"
] | 1 | 2022-03-16T22:14:03.000Z | 2022-03-16T22:14:03.000Z | src/coingeckoprice/__init__.py | joshallen64/coingeckoprice | 776e45e6059249ead64a6ed6baf17627c1e8c23b | [
"MIT"
] | null | null | null | src/coingeckoprice/__init__.py | joshallen64/coingeckoprice | 776e45e6059249ead64a6ed6baf17627c1e8c23b | [
"MIT"
] | null | null | null | from .coingeckoprice import CoinPrice | 37 | 37 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d559492e83387c240e2503246f66a32dbdff178 | 50 | py | Python | tests/integration/ext_functions.py | toniwa/tavern | 69e8cf4db7c067f94d3ad480bd8295c1c6c335a6 | [
"MIT"
] | 889 | 2017-11-04T11:43:36.000Z | 2022-03-31T11:37:31.000Z | tests/integration/ext_functions.py | toniwa/tavern | 69e8cf4db7c067f94d3ad480bd8295c1c6c335a6 | [
"MIT"
] | 636 | 2017-11-04T11:43:02.000Z | 2022-03-31T00:02:04.000Z | tests/integration/ext_functions.py | toniwa/tavern | 69e8cf4db7c067f94d3ad480bd8295c1c6c335a6 | [
"MIT"
] | 181 | 2017-12-05T13:51:42.000Z | 2022-03-25T11:34:58.000Z | def return_hello():
return {"hello": "there"}
| 16.666667 | 29 | 0.62 | 6 | 50 | 5 | 0.666667 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18 | 50 | 2 | 30 | 25 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
53b10b5d95e8a647f9925da1a1c143f6bb6ac647 | 50 | py | Python | perceiver_jax/__init__.py | sooheon/perceiver-jax | 28d8e49d9e3f69fba5f95cb98b36caaa710b5ede | [
"MIT"
] | 11 | 2021-03-12T08:22:32.000Z | 2021-10-14T09:10:54.000Z | perceiver_jax/__init__.py | sooheon/perceiver-jax | 28d8e49d9e3f69fba5f95cb98b36caaa710b5ede | [
"MIT"
] | 1 | 2021-05-08T01:02:51.000Z | 2021-11-27T09:31:17.000Z | perceiver_jax/__init__.py | sooheon/perceiver-jax | 28d8e49d9e3f69fba5f95cb98b36caaa710b5ede | [
"MIT"
] | null | null | null | from perceiver_jax.perceiver_jax import Perceiver
| 25 | 49 | 0.9 | 7 | 50 | 6.142857 | 0.571429 | 0.55814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.934783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53d14e949381f075cf3d0a3c93822789e169116a | 474 | py | Python | ljpypi_test/sampleCode.py | Leejung8763/ljpypi_test | a9108b3fd65be746de1905e69a45d73d8b1fce30 | [
"MIT"
] | null | null | null | ljpypi_test/sampleCode.py | Leejung8763/ljpypi_test | a9108b3fd65be746de1905e69a45d73d8b1fce30 | [
"MIT"
] | null | null | null | ljpypi_test/sampleCode.py | Leejung8763/ljpypi_test | a9108b3fd65be746de1905e69a45d73d8b1fce30 | [
"MIT"
] | null | null | null | class Calculator:
def __init__(self, par01, par02):
self.par01, self.par02 = par01, par02
print("This is PyPI Uploading Test Code V0.2.0")
print(f"Input parameter is {self.par01, self.par02}")
def add(self):
return self.par01 + self.par02
def subtract(self):
return self.par01 - self.par02
def multiply(self):
return self.par01 * self.par02
def divide(self):
return self.par01 / self.par02 | 27.882353 | 61 | 0.620253 | 64 | 474 | 4.53125 | 0.390625 | 0.217241 | 0.268966 | 0.372414 | 0.489655 | 0.417241 | 0.32069 | 0 | 0 | 0 | 0 | 0.101744 | 0.274262 | 474 | 17 | 62 | 27.882353 | 0.741279 | 0 | 0 | 0 | 0 | 0 | 0.172632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.384615 | false | 0 | 0 | 0.307692 | 0.769231 | 0.153846 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
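# Illustrative usage sketch (not part of the original module); the input
# values below are arbitrary examples.
if __name__ == "__main__":
    calc = Calculator(6, 3)
    print(calc.add())       # 9
    print(calc.subtract())  # 3
    print(calc.multiply())  # 18
    print(calc.divide())    # 2.0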
9905ba61a5ed7d0353615ed64e7160a84df9705d | 14,857 | py | Python | tests/twitter/test_views.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 54 | 2016-08-15T17:32:41.000Z | 2022-02-27T03:32:05.000Z | tests/twitter/test_views.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 229 | 2015-07-23T12:50:47.000Z | 2022-03-24T10:33:20.000Z | tests/twitter/test_views.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 8 | 2015-09-10T17:10:35.000Z | 2022-03-25T13:05:01.000Z | from django.urls import reverse
from django.test import TestCase
from ditto.twitter import factories
class ViewTests(TestCase):
def test_home_templates(self):
"The Twitter home page uses the correct templates"
response = self.client.get(reverse("twitter:home"))
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "twitter/home.html")
self.assertTemplateUsed(response, "twitter/base.html")
self.assertTemplateUsed(response, "ditto/base.html")
def test_home_context(self):
"The Twitter home page sends the correct data to templates"
accounts = factories.AccountFactory.create_batch(3)
factories.TweetFactory.create_batch(5, user=accounts[0].user)
factories.TweetFactory.create_batch(5, user=accounts[1].user)
response = self.client.get(reverse("twitter:home"))
self.assertIn("account_list", response.context)
self.assertIn("tweet_list", response.context)
# Three accounts, only two of which have Tweets:
self.assertEqual(
[account.pk for account in response.context["account_list"]], [1, 2, 3]
)
# Tweets for both accounts that have them:
self.assertEqual(
[tweet.pk for tweet in response.context["tweet_list"]],
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
)
def test_home_privacy(self):
"Only public Tweets should appear."
private_user = factories.UserFactory(is_private=True)
public_user = factories.UserFactory(is_private=False)
# We only display tweets from Accounts, so add some.
factories.AccountFactory(user=private_user)
factories.AccountFactory(user=public_user)
public_tweet_1 = factories.TweetFactory(user=public_user)
factories.TweetFactory(user=private_user)
public_tweet_2 = factories.TweetFactory(user=public_user)
response = self.client.get(reverse("twitter:home"))
tweets = response.context["tweet_list"]
self.assertEqual(len(tweets), 2)
self.assertEqual(tweets[0].pk, public_tweet_2.pk)
self.assertEqual(tweets[1].pk, public_tweet_1.pk)
def test_favorite_list_templates(self):
"The Twitter favorites page uses the correct templates"
response = self.client.get(reverse("twitter:favorite_list"))
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "twitter/favorite_list.html")
self.assertTemplateUsed(response, "twitter/base.html")
self.assertTemplateUsed(response, "ditto/base.html")
def test_favorite_list_context(self):
"The Twitter favorites page sends the correct data to templates"
accounts = factories.AccountFactory.create_batch(3)
favoritable_tweets = factories.TweetFactory.create_batch(6)
for tweet in favoritable_tweets:
accounts[0].user.favorites.add(tweet)
accounts[2].user.favorites.add(tweet)
factories.TweetFactory.create_batch(4, user=accounts[0].user)
response = self.client.get(reverse("twitter:favorite_list"))
self.assertIn("tweet_list", response.context)
self.assertEqual(6, len(response.context["tweet_list"]))
self.assertEqual(
[tweet.pk for tweet in response.context["tweet_list"]],
[
favoritable_tweets[5].pk,
favoritable_tweets[4].pk,
favoritable_tweets[3].pk,
favoritable_tweets[2].pk,
favoritable_tweets[1].pk,
favoritable_tweets[0].pk,
],
)
def test_favorite_list_privacy_tweets(self):
"Only public Tweets should appear."
private_user = factories.UserFactory(is_private=True)
public_users = factories.UserFactory.create_batch(2, is_private=False)
favoriting_account = factories.AccountFactory(user=public_users[0])
private_tweet = factories.TweetFactory(user=private_user)
public_tweet = factories.TweetFactory(user=public_users[1])
favoriting_account.user.favorites.add(private_tweet)
favoriting_account.user.favorites.add(public_tweet)
response = self.client.get(reverse("twitter:favorite_list"))
tweets = response.context["tweet_list"]
self.assertEqual(len(tweets), 1)
self.assertEqual(tweets[0].pk, public_tweet.pk)
def test_favorite_list_privacy_accounts(self):
"Only Tweets favorited by Accounts with public Users should appear."
user = factories.UserFactory(is_private=True)
account = factories.AccountFactory(user=user)
tweet = factories.TweetFactory()
account.user.favorites.add(tweet)
# Check it is there:
self.assertEqual(account.user.favorites.count(), 1)
response = self.client.get(reverse("twitter:favorite_list"))
tweets = response.context["tweet_list"]
# Check it doesn't appear on the page:
self.assertEqual(len(tweets), 0)
def test_user_detail_templates(self):
"Uses the correct templates"
account = factories.AccountFactory()
response = self.client.get(
reverse(
"twitter:user_detail", kwargs={"screen_name": account.user.screen_name}
)
)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "twitter/user_detail.html")
self.assertTemplateUsed(response, "twitter/base.html")
self.assertTemplateUsed(response, "ditto/base.html")
def test_user_detail_context_no_account(self):
"Sends correct data to templates for a User with no Account."
private_user = factories.UserFactory(is_private=True)
factories.AccountFactory(user=private_user)
factories.AccountFactory.create_batch(3)
user_1 = factories.UserFactory()
user_2 = factories.UserFactory()
factories.TweetFactory.create_batch(3, user=user_1)
factories.TweetFactory.create_batch(3, user=user_2)
response = self.client.get(
reverse("twitter:user_detail", kwargs={"screen_name": user_1.screen_name})
)
self.assertIn("account", response.context)
self.assertIsNone(response.context["account"])
self.assertIn("public_accounts", response.context)
self.assertEqual(len(response.context["public_accounts"]), 3)
self.assertIn("user", response.context)
self.assertEqual(user_1.pk, response.context["user"].pk)
self.assertIn("tweet_list", response.context)
self.assertEqual(len(response.context["tweet_list"]), 3)
self.assertEqual(
[twitter.pk for twitter in response.context["tweet_list"]], [3, 2, 1]
)
def test_user_detail_context_with_account(self):
"Sends correct data to templates for a User with an Account."
account_1 = factories.AccountFactory()
account_2 = factories.AccountFactory()
factories.TweetFactory.create_batch(3, user=account_1.user)
factories.TweetFactory.create_batch(3, user=account_2.user)
response = self.client.get(
reverse(
"twitter:user_detail",
kwargs={"screen_name": account_1.user.screen_name},
)
)
self.assertIn("account", response.context)
self.assertEqual(account_1.pk, response.context["account"].pk)
self.assertIn("user", response.context)
self.assertEqual(account_1.user.pk, response.context["user"].pk)
self.assertIn("tweet_list", response.context)
self.assertEqual(len(response.context["tweet_list"]), 3)
self.assertEqual(
[twitter.pk for twitter in response.context["tweet_list"]], [3, 2, 1]
)
def test_user_detail_privacy(self):
"It does not show private Tweets"
user = factories.UserFactory(is_private=True)
factories.AccountFactory(user=user)
factories.TweetFactory.create_batch(3, user=user)
response = self.client.get(
reverse("twitter:user_detail", kwargs={"screen_name": user.screen_name})
)
self.assertIn("account", response.context)
self.assertEqual(len(response.context["tweet_list"]), 0)
def test_account_favorite_list_templates(self):
"Uses the correct templates"
account = factories.AccountFactory()
response = self.client.get(
reverse(
"twitter:account_favorite_list",
kwargs={"screen_name": account.user.screen_name},
)
)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "twitter/account_favorite_list.html")
self.assertTemplateUsed(response, "twitter/base.html")
self.assertTemplateUsed(response, "ditto/base.html")
def test_account_favorite_list_context(self):
"Sends the correct data to templates"
accounts = factories.AccountFactory.create_batch(3)
favoritable_tweets = factories.TweetFactory.create_batch(6)
for tweet in favoritable_tweets:
accounts[0].user.favorites.add(tweet)
accounts[2].user.favorites.add(tweet)
response = self.client.get(
reverse(
"twitter:account_favorite_list",
kwargs={"screen_name": accounts[0].user.screen_name},
)
)
self.assertIn("account", response.context)
self.assertEqual(accounts[0].pk, response.context["account"].pk)
self.assertIn("tweet_list", response.context)
self.assertEqual(6, len(response.context["tweet_list"]))
self.assertEqual(
[tweet.pk for tweet in response.context["tweet_list"]],
[
favoritable_tweets[5].pk,
favoritable_tweets[4].pk,
favoritable_tweets[3].pk,
favoritable_tweets[2].pk,
favoritable_tweets[1].pk,
favoritable_tweets[0].pk,
],
)
def test_account_favorite_list_privacy_tweets(self):
"It does not show private Tweets"
private_user = factories.UserFactory(is_private=True)
public_users = factories.UserFactory.create_batch(2, is_private=False)
favoriting_account = factories.AccountFactory(user=public_users[0])
private_tweet = factories.TweetFactory(user=private_user)
public_tweet = factories.TweetFactory(user=public_users[1])
favoriting_account.user.favorites.add(private_tweet)
favoriting_account.user.favorites.add(public_tweet)
response = self.client.get(
reverse(
"twitter:account_favorite_list",
kwargs={"screen_name": favoriting_account.user.screen_name},
)
)
tweets = response.context["tweet_list"]
self.assertEqual(len(tweets), 1)
self.assertEqual(tweets[0].pk, public_tweet.pk)
def test_account_favorite_list_privacy_account(self):
"It does not show favorites if the account's user is private"
user = factories.UserFactory(is_private=True)
account = factories.AccountFactory(user=user)
tweet = factories.TweetFactory()
account.user.favorites.add(tweet)
# Check it is there:
self.assertEqual(account.user.favorites.count(), 1)
response = self.client.get(
reverse(
"twitter:account_favorite_list",
kwargs={"screen_name": account.user.screen_name},
)
)
tweets = response.context["tweet_list"]
# Check it doesn't appear on the page:
self.assertEqual(len(tweets), 0)
def test_tweet_detail_templates(self):
"Uses the correct templates"
account = factories.AccountFactory()
tweets = factories.TweetFactory.create_batch(3, user=account.user)
response = self.client.get(
reverse(
"twitter:tweet_detail",
kwargs={
"screen_name": account.user.screen_name,
"twitter_id": tweets[1].twitter_id,
},
)
)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "twitter/tweet_detail.html")
self.assertTemplateUsed(response, "twitter/base.html")
self.assertTemplateUsed(response, "ditto/base.html")
def test_tweet_detail_context(self):
"Sends the correct data to templates"
account = factories.AccountFactory()
tweets = factories.TweetFactory.create_batch(3, user=account.user)
response = self.client.get(
reverse(
"twitter:tweet_detail",
kwargs={
"screen_name": account.user.screen_name,
"twitter_id": tweets[1].twitter_id,
},
)
)
self.assertIn("account", response.context)
self.assertEqual(account.pk, response.context["account"].pk)
self.assertIn("twitter_user", response.context)
self.assertEqual(account.user.pk, response.context["twitter_user"].pk)
self.assertIn("tweet", response.context)
self.assertEqual(tweets[1].pk, response.context["tweet"].pk)
def test_tweet_detail_context_no_account(self):
"Sends correct data to templates when showing a tweet with no account"
user = factories.UserFactory()
tweets = factories.TweetFactory.create_batch(3, user=user)
response = self.client.get(
reverse(
"twitter:tweet_detail",
kwargs={
"screen_name": user.screen_name,
"twitter_id": tweets[1].twitter_id,
},
)
)
self.assertIn("account", response.context)
self.assertIsNone(response.context["account"])
self.assertIn("twitter_user", response.context)
self.assertEqual(user.pk, response.context["twitter_user"].pk)
self.assertIn("tweet", response.context)
self.assertEqual(tweets[1].pk, response.context["tweet"].pk)
def test_tweet_detail_privacy(self):
"It does not show private Tweets"
user = factories.UserFactory(is_private=True)
account = factories.AccountFactory(user=user)
tweets = factories.TweetFactory.create_batch(3, user=user)
response = self.client.get(
reverse(
"twitter:tweet_detail",
kwargs={
"screen_name": account.user.screen_name,
"twitter_id": tweets[1].twitter_id,
},
)
)
self.assertIn("tweet", response.context)
self.assertIsNone(response.context["tweet"])
| 42.207386 | 87 | 0.64367 | 1,635 | 14,857 | 5.688685 | 0.070336 | 0.079024 | 0.03677 | 0.042899 | 0.904849 | 0.867434 | 0.851844 | 0.770025 | 0.741426 | 0.714117 | 0 | 0.01078 | 0.250724 | 14,857 | 351 | 88 | 42.327635 | 0.824739 | 0.074578 | 0 | 0.617162 | 0 | 0 | 0.144736 | 0.021166 | 0 | 0 | 0 | 0 | 0.247525 | 1 | 0.062706 | false | 0 | 0.009901 | 0 | 0.075908 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54ce686d8fd7b26dece0c60ba399ee74b034b4e1 | 34 | py | Python | demos/mnist/__init__.py | YblBarry/xshinnosuke | fe71379140910ec073d54fe01f82f8f28f113a98 | [
"MIT"
] | 290 | 2020-07-06T02:13:12.000Z | 2021-01-04T14:23:39.000Z | demos/mnist/__init__.py | YblBarry/xshinnosuke | fe71379140910ec073d54fe01f82f8f28f113a98 | [
"MIT"
] | 1 | 2020-12-03T11:11:48.000Z | 2020-12-03T11:11:48.000Z | demos/mnist/__init__.py | E1eveNn/xshinnosuke | 69da91e0ea5042437edfc31c0e6ff9ef394c6cc9 | [
"MIT"
] | 49 | 2020-07-16T00:27:47.000Z | 2020-11-26T03:03:14.000Z | from .train import go, load_mnist
| 17 | 33 | 0.794118 | 6 | 34 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 1 | 34 | 34 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54e50aa19890174b6bcdc5d86a20bef73f7d864b | 321 | py | Python | Aulas Python/Exercicios/exe008.py | marcosviniciusbarbosa/Curso-Python | fc6bba3a6d0adfd51d63f789dec83b5d3ac83e4b | [
"MIT"
] | null | null | null | Aulas Python/Exercicios/exe008.py | marcosviniciusbarbosa/Curso-Python | fc6bba3a6d0adfd51d63f789dec83b5d3ac83e4b | [
"MIT"
] | null | null | null | Aulas Python/Exercicios/exe008.py | marcosviniciusbarbosa/Curso-Python | fc6bba3a6d0adfd51d63f789dec83b5d3ac83e4b | [
"MIT"
] | null | null | null | #escreva um programa que leia um valor em metros e o exiba convertido em centímetros e milímetros
n = float(input("Digite quantos metros deseja converter para centímetros e milímetros "))
print('A converesão dos metros {:.0f} para centímetros é de {:.0f} cm e milímetros é de {:.0f} mm '.format(n, (n * 100), (n * 1000))) | 107 | 133 | 0.728972 | 53 | 321 | 4.415094 | 0.641509 | 0.141026 | 0.188034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037175 | 0.161994 | 321 | 3 | 133 | 107 | 0.832714 | 0.299065 | 0 | 0 | 0 | 0.5 | 0.711111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
54e8daf3a953a19ad37b76b5620aa431d792c195 | 184 | py | Python | tests/test_input_thermo_elasticity_ess.py | vondrejc/sfepy | 8e427af699c4b2858eb096510057abb3ae7e28e8 | [
"BSD-3-Clause"
] | null | null | null | tests/test_input_thermo_elasticity_ess.py | vondrejc/sfepy | 8e427af699c4b2858eb096510057abb3ae7e28e8 | [
"BSD-3-Clause"
] | null | null | null | tests/test_input_thermo_elasticity_ess.py | vondrejc/sfepy | 8e427af699c4b2858eb096510057abb3ae7e28e8 | [
"BSD-3-Clause"
] | 2 | 2019-01-14T03:12:34.000Z | 2021-05-25T11:44:50.000Z | input_name = '../examples/thermo_elasticity/thermo_elasticity_ess.py'
output_name = 'test_thermo_elasticity_ess.vtk'
from tests_basic import TestInput
class Test(TestInput):
pass
| 26.285714 | 69 | 0.815217 | 25 | 184 | 5.64 | 0.68 | 0.340426 | 0.269504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 184 | 6 | 70 | 30.666667 | 0.849398 | 0 | 0 | 0 | 0 | 0 | 0.456522 | 0.456522 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
0746aac551a23af3c4cf669fa996beb54fd0b449 | 80 | py | Python | compute.py | ttw2514/compute_task | 7744300be06a47c750cc7d7dfe8a25a38b4d067e | [
"MIT"
] | 1 | 2018-05-03T02:32:31.000Z | 2018-05-03T02:32:31.000Z | compute.py | ttw2514/compute_task | 7744300be06a47c750cc7d7dfe8a25a38b4d067e | [
"MIT"
] | null | null | null | compute.py | ttw2514/compute_task | 7744300be06a47c750cc7d7dfe8a25a38b4d067e | [
"MIT"
] | 1 | 2018-05-03T02:32:34.000Z | 2018-05-03T02:32:34.000Z | import logging
def daily_compute():
logging.info('performing daily computations')
| 16 | 42 | 0.75 | 10 | 80 | 5.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 80 | 4 | 43 | 20 | 0.867647 | 0 | 0 | 0 | 0 | 0 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
076af02e428b4eec8b29f419b0062f2b9440035f | 30,857 | py | Python | rigid_body_motion/__init__.py | phausamann/rigid-body-motion | 2d4fbb1b949cc0b609a59877d7539af75dad6861 | [
"MIT"
] | 8 | 2021-05-20T02:24:07.000Z | 2022-03-05T17:15:11.000Z | rigid_body_motion/__init__.py | phausamann/rigid-body-motion | 2d4fbb1b949cc0b609a59877d7539af75dad6861 | [
"MIT"
] | 10 | 2019-06-13T09:36:15.000Z | 2022-01-17T16:55:05.000Z | rigid_body_motion/__init__.py | phausamann/rigid-body-motion | 2d4fbb1b949cc0b609a59877d7539af75dad6861 | [
"MIT"
] | 1 | 2021-08-13T10:24:31.000Z | 2021-08-13T10:24:31.000Z | """Top-level package for rigid-body-motion."""
__author__ = """Peter Hausamann"""
__email__ = "peter.hausamann@tum.de"
__version__ = "0.9.0"
# ROS module has to be imported first because of PyKDL
from . import ros # noqa
from . import io, plot # noqa
from .coordinate_systems import (
cartesian_to_polar,
cartesian_to_spherical,
polar_to_cartesian,
spherical_to_cartesian,
)
from .core import (
_make_dataarray,
_make_transform_or_pose_dataset,
_make_twist_dataset,
_make_velocity_dataarray,
_maybe_unpack_dataarray,
_replace_dim,
_resolve_rf,
_transform,
)
from .estimators import (
best_fit_rotation,
best_fit_transform,
estimate_angular_velocity,
estimate_linear_velocity,
iterative_closest_point,
shortest_arc_rotation,
)
from .reference_frames import ReferenceFrame
from .reference_frames import _registry as registry
from .reference_frames import (
clear_registry,
deregister_frame,
register_frame,
render_tree,
)
from .utils import (
ExampleDataStore,
from_euler_angles,
qinterp,
qinv,
qmean,
qmul,
rotate_vectors,
)
try:
import rigid_body_motion.accessors # noqa
except ImportError:
pass
__all__ = [
"transform_points",
"transform_quaternions",
"transform_vectors",
"transform_angular_velocity",
"transform_linear_velocity",
# coordinate system transforms
"cartesian_to_polar",
"polar_to_cartesian",
"cartesian_to_spherical",
"spherical_to_cartesian",
# reference frames
"registry",
"register_frame",
"deregister_frame",
"clear_registry",
"ReferenceFrame",
"render_tree",
# estimators
"estimate_linear_velocity",
"estimate_angular_velocity",
"shortest_arc_rotation",
"best_fit_rotation",
"best_fit_transform",
"iterative_closest_point",
"lookup_transform",
"lookup_pose",
"lookup_twist",
"lookup_linear_velocity",
"lookup_angular_velocity",
# utils
"from_euler_angles",
"example_data",
"qinterp",
"qinv",
"qmean",
"qmul",
"rotate_vectors",
]
_cs_funcs = {
"cartesian": {
"polar": cartesian_to_polar,
"spherical": cartesian_to_spherical,
},
"polar": {"cartesian": polar_to_cartesian},
"spherical": {"cartesian": spherical_to_cartesian},
}
example_data = ExampleDataStore()
def transform_vectors(
arr,
into,
outof=None,
dim=None,
axis=None,
timestamps=None,
time_axis=None,
return_timestamps=False,
):
""" Transform an array of vectors between reference frames.
Parameters
----------
arr: array_like
The array to transform.
into: str or ReferenceFrame
ReferenceFrame instance or name of a registered reference frame in
which the array will be represented after the transformation.
outof: str or ReferenceFrame, optional
ReferenceFrame instance or name of a registered reference frame in
which the array is currently represented. Can be omitted if the array
is a DataArray whose ``attrs`` contain a "representation_frame" entry
with the name of a registered frame.
dim: str, optional
If the array is a DataArray, the name of the dimension
representing the spatial coordinates of the vectors.
axis: int, optional
The axis of the array representing the spatial coordinates of the
vectors. Defaults to the last axis of the array.
timestamps: array_like or str, optional
The timestamps of the vectors, corresponding to the `time_axis`
of the array. If str and the array is a DataArray, the name of the
coordinate with the timestamps. The axis defined by `time_axis` will
be re-sampled to the timestamps for which the transformation is
defined.
time_axis: int, optional
The axis of the array representing the timestamps of the vectors.
Defaults to the first axis of the array.
return_timestamps: bool, default False
If True, also return the timestamps after the transformation.
Returns
-------
arr_transformed: array_like
The transformed array.
ts: array_like
The timestamps after the transformation.
See Also
--------
transform_quaternions, transform_points, ReferenceFrame
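Examples
--------
A minimal sketch; the frame names, translation and array shape below are
illustrative, and the frames must be registered first:
>>> import numpy as np
>>> import rigid_body_motion as rbm
>>> rbm.clear_registry()
>>> rbm.register_frame("world")
>>> rbm.register_frame("body", parent="world", translation=(1.0, 0.0, 0.0))
>>> rbm.transform_vectors(np.ones((10, 3)), into="body", outof="world").shape
(10, 3)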
"""
return _transform(
"transform_vectors",
arr,
into,
outof,
dim,
axis,
timestamps,
time_axis,
return_timestamps=return_timestamps,
)
def transform_points(
arr,
into,
outof=None,
dim=None,
axis=None,
timestamps=None,
time_axis=None,
return_timestamps=False,
):
""" Transform an array of points between reference frames.
Parameters
----------
arr: array_like
The array to transform.
into: str or ReferenceFrame
ReferenceFrame instance or name of a registered reference frame in
which the array will be represented after the transformation.
outof: str or ReferenceFrame, optional
ReferenceFrame instance or name of a registered reference frame which
is the current reference frame of the array. Can be omitted if the
array is a DataArray whose ``attrs`` contain a "reference_frame" entry
with the name of a registered frame.
dim: str, optional
If the array is a DataArray, the name of the dimension
representing the spatial coordinates of the points.
axis: int, optional
The axis of the array representing the spatial coordinates of the
points. Defaults to the last axis of the array.
timestamps: array_like or str, optional
The timestamps of the points, corresponding to the `time_axis`
of the array. If str and the array is a DataArray, the name of the
coordinate with the timestamps. The axis defined by `time_axis` will
be re-sampled to the timestamps for which the transformation is
defined.
time_axis: int, optional
The axis of the array representing the timestamps of the points.
Defaults to the first axis of the array.
return_timestamps: bool, default False
If True, also return the timestamps after the transformation.
Returns
-------
arr_transformed: array_like
The transformed array.
ts: array_like
The timestamps after the transformation.
See Also
--------
transform_vectors, transform_quaternions, ReferenceFrame
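Examples
--------
A minimal sketch; the frame names and values are illustrative, and the
frames must be registered first:
>>> import numpy as np
>>> import rigid_body_motion as rbm
>>> rbm.clear_registry()
>>> rbm.register_frame("world")
>>> rbm.register_frame("body", parent="world", translation=(1.0, 0.0, 0.0))
>>> p = rbm.transform_points(
...     np.array([1.0, 0.0, 0.0]), into="body", outof="world"
... )
>>> np.allclose(p, (0.0, 0.0, 0.0))
True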
"""
return _transform(
"transform_points",
arr,
into,
outof,
dim,
axis,
timestamps,
time_axis,
return_timestamps=return_timestamps,
)
def transform_quaternions(
arr,
into,
outof=None,
dim=None,
axis=None,
timestamps=None,
time_axis=None,
return_timestamps=False,
):
""" Transform an array of quaternions between reference frames.
Parameters
----------
arr: array_like
The array to transform.
into: str or ReferenceFrame
ReferenceFrame instance or name of a registered reference frame in
which the array will be represented after the transformation.
outof: str or ReferenceFrame, optional
ReferenceFrame instance or name of a registered reference frame which
is the current reference frame of the array. Can be omitted if the
array is a DataArray whose ``attrs`` contain a "reference_frame" entry
with the name of a registered frame.
dim: str, optional
If the array is a DataArray, the name of the dimension
representing the spatial coordinates of the quaternions.
axis: int, optional
The axis of the array representing the spatial coordinates of the
quaternions. Defaults to the last axis of the array.
timestamps: array_like or str, optional
The timestamps of the quaternions, corresponding to the `time_axis`
of the array. If str and the array is a DataArray, the name of the
coordinate with the timestamps. The axis defined by `time_axis` will
be re-sampled to the timestamps for which the transformation is
defined.
time_axis: int, optional
The axis of the array representing the timestamps of the quaternions.
Defaults to the first axis of the array.
return_timestamps: bool, default False
If True, also return the timestamps after the transformation.
Returns
-------
arr_transformed: array_like
The transformed array.
ts: array_like
The timestamps after the transformation.
See Also
--------
transform_vectors, transform_points, ReferenceFrame
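Examples
--------
A minimal sketch; the frame names, the scalar-first (w, x, y, z) rotation
and the input quaternion are illustrative assumptions:
>>> import rigid_body_motion as rbm
>>> rbm.clear_registry()
>>> rbm.register_frame("world")
>>> rbm.register_frame(
...     "rotated", parent="world", rotation=(0.0, 0.0, 0.0, 1.0)
... )
>>> rbm.transform_quaternions(
...     (1.0, 0.0, 0.0, 0.0), into="rotated", outof="world"
... )  # doctest: +SKIP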
"""
return _transform(
"transform_quaternions",
arr,
into,
outof,
dim,
axis,
timestamps,
time_axis,
return_timestamps=return_timestamps,
)
def transform_angular_velocity(
arr,
into,
outof=None,
what="reference_frame",
dim=None,
axis=None,
timestamps=None,
time_axis=None,
cutoff=None,
return_timestamps=False,
):
""" Transform an array of angular velocities between frames.
The array represents the velocity of a moving body or frame wrt a
reference frame, expressed in a representation frame.
The transformation changes either the reference frame, the moving
frame or the representation frame of the velocity from this frame to
another. In either case, it is assumed that the array is represented in
the frame that is being changed and will be represented in the new
frame after the transformation.
When transforming the reference frame R to a new frame R' while keeping
the moving frame M fixed, the transformed velocity is calculated
according to the formula:
.. math:: \omega_{M/R'} = \omega_{M/R} + \omega_{R/R'}
When transforming the moving frame M to a new frame M' while keeping
the reference frame R fixed, the transformed velocity is calculated
according to the formula:
.. math:: \omega_{M'/R} = \omega_{M/R} + \omega_{M'/M}
Parameters
----------
arr: array_like
The array to transform.
into: str or ReferenceFrame
The target reference frame.
outof: str or ReferenceFrame, optional
The source reference frame. Can be omitted if the array
is a DataArray whose ``attrs`` contain a "representation_frame",
"reference_frame" or "moving_frame" entry with the name of a
registered frame (depending on what you want to transform, see `what`).
what: str
What frame of the velocity to transform. Can be "reference_frame",
"moving_frame" or "representation_frame".
dim: str, optional
If the array is a DataArray, the name of the dimension
representing the spatial coordinates of the velocities.
axis: int, optional
The axis of the array representing the spatial coordinates of the
velocities. Defaults to the last axis of the array.
timestamps: array_like or str, optional
The timestamps of the velocities, corresponding to the `time_axis`
of the array. If str and the array is a DataArray, the name of the
coordinate with the timestamps. The axis defined by `time_axis` will
be re-sampled to the timestamps for which the transformation is
defined.
time_axis: int, optional
The axis of the array representing the timestamps of the velocities.
Defaults to the first axis of the array.
cutoff: float, optional
Frequency of a low-pass filter applied to linear and angular
velocity after the twist estimation as a fraction of the Nyquist
frequency.
return_timestamps: bool, default False
If True, also return the timestamps after the transformation.
Returns
-------
arr_transformed: array_like
The transformed array.
ts: array_like
The timestamps after the transformation.
See Also
--------
transform_linear_velocity, transform_vectors, transform_quaternions,
transform_points, ReferenceFrame
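Examples
--------
A minimal sketch; ``omega``, ``ts`` and the frame names are illustrative
placeholders for registered frames and recorded data:
>>> transformed = transform_angular_velocity(
...     omega, into="head", outof="world", what="reference_frame",
...     timestamps=ts,
... )  # doctest: +SKIP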
""" # noqa
return _transform(
"transform_angular_velocity",
arr,
into,
outof,
dim,
axis,
timestamps,
time_axis,
what=what,
cutoff=cutoff,
return_timestamps=return_timestamps,
)
def transform_linear_velocity(
arr,
into,
outof=None,
what="reference_frame",
moving_frame=None,
reference_frame=None,
dim=None,
axis=None,
timestamps=None,
time_axis=None,
cutoff=None,
outlier_thresh=None,
return_timestamps=False,
):
""" Transform an array of linear velocities between frames.
The array represents the velocity of a moving body or frame wrt a
reference frame, expressed in a representation frame.
The transformation changes either the reference frame, the moving
frame or the representation frame of the velocity from this frame to
another. In either case, it is assumed that the array is represented in
the frame that is being changed and will be represented in the new
frame after the transformation.
When transforming the reference frame R to a new frame R' while keeping
the moving frame M fixed, the transformed velocity is calculated
according to the formula:
.. math:: v_{M/R'} = v_{M/R} + v_{R/R'} + \omega_{R/R'} \times t_{M/R}
When transforming the moving frame M to a new frame M' while keeping
the reference frame R fixed, the transformed velocity is calculated
according to the formula:
.. math:: v_{M'/R} = v_{M/R} + v_{M'/M} + \omega_{M/R} \times t_{M'/M}
Parameters
----------
arr: array_like
The array to transform.
into: str or ReferenceFrame
The target reference frame.
outof: str or ReferenceFrame, optional
The source reference frame. Can be omitted if the array
is a DataArray whose ``attrs`` contain a "representation_frame",
"reference_frame" or "moving_frame" entry with the name of a
registered frame (depending on what you want to transform, see `what`).
what: str
What frame of the velocity to transform. Can be "reference_frame",
"moving_frame" or "representation_frame".
moving_frame: str or ReferenceFrame, optional
The moving frame when transforming the reference frame of the
velocity.
reference_frame: str or ReferenceFrame, optional
The reference frame when transforming the moving frame of the
velocity.
dim: str, optional
If the array is a DataArray, the name of the dimension
representing the spatial coordinates of the velocities.
axis: int, optional
The axis of the array representing the spatial coordinates of the
velocities. Defaults to the last axis of the array.
timestamps: array_like or str, optional
The timestamps of the velocities, corresponding to the `time_axis`
of the array. If str and the array is a DataArray, the name of the
coordinate with the timestamps. The axis defined by `time_axis` will
be re-sampled to the timestamps for which the transformation is
defined.
time_axis: int, optional
The axis of the array representing the timestamps of the velocities.
Defaults to the first axis of the array.
cutoff: float, optional
Frequency of a low-pass filter applied to linear and angular
velocity after the twist estimation as a fraction of the Nyquist
frequency.
outlier_thresh: float, optional
Some SLAM-based trackers introduce position corrections when a new
camera frame becomes available. This introduces outliers in the
linear velocity estimate. The estimation algorithm used here
can suppress these outliers by throwing out samples where the
norm of the second-order differences of the position is above
`outlier_thresh` and interpolating the missing values. For
measurements from the Intel RealSense T265 tracker, set this value
to 1e-3.
return_timestamps: bool, default False
If True, also return the timestamps after the transformation.

Returns
-------
arr_transformed: array_like
The transformed array.
ts: array_like
The timestamps after the transformation.

See Also
--------
transform_angular_velocity, transform_vectors, transform_quaternions,
transform_points, ReferenceFrame
""" # noqa
return _transform(
"transform_linear_velocity",
arr,
into,
outof,
dim,
axis,
timestamps,
time_axis,
what=what,
moving_frame=moving_frame,
reference_frame=reference_frame,
cutoff=cutoff,
outlier_thresh=outlier_thresh,
return_timestamps=return_timestamps,
)
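# Illustrative usage sketch (not part of the original module): frame names
# are hypothetical. Changing the reference frame of a linear velocity also
# needs the moving frame, since the formula above involves the cross
# product of the angular velocity with the translation.
#
#     v = np.zeros((100, 3))  # linear velocity samples, (time, xyz)
#     ts = np.linspace(0.0, 1.0, 100)
#     v_new = transform_linear_velocity(
#         v,
#         into="world",
#         outof="body",
#         what="reference_frame",
#         moving_frame="tracker",
#         timestamps=ts,
#     )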
def transform_coordinates(
arr, into, outof=None, dim=None, axis=None, replace_dim=True
):
""" Transform motion between coordinate systems.

Parameters
----------
arr: array_like
The array to transform.
into: str
The name of a coordinate system in which the array will be represented
after the transformation.
outof: str, optional
The name of a coordinate system in which the array is currently
represented. Can be omitted if the array is a DataArray whose ``attrs``
contain a "coordinate_system" entry with the name of a valid coordinate
system.
dim: str, optional
If the array is a DataArray, the name of the dimension representing
the coordinates of the motion.
axis: int, optional
The axis of the array representing the coordinates of the motion.
Defaults to the last axis of the array.
replace_dim: bool, default True
If True and the array is a DataArray, replace the dimension
representing the coordinates by a new dimension that describes the
new coordinate system and its axes (e.g.
``cartesian_axis: [x, y, z]``). All coordinates that contained the
original dimension will be dropped.

Returns
-------
arr_transformed: array_like
The transformed array.

See Also
--------
cartesian_to_polar, polar_to_cartesian, cartesian_to_spherical,
spherical_to_cartesian
"""
arr, axis, _, _, _, _, coords, dims, name, attrs = _maybe_unpack_dataarray(
arr, dim, axis, timestamps=False
)
if outof is None:
if attrs is not None and "coordinate_system" in attrs:
# TODO warn if outof(.name) != attrs["coordinate_system"]
outof = attrs["coordinate_system"]
else:
raise ValueError(
"'outof' must be specified unless you provide a DataArray "
"whose ``attrs`` contain a 'coordinate_system' entry with the "
"name of a valid coordinate system"
)
try:
transform_func = _cs_funcs[outof][into]
except KeyError:
raise ValueError(f"Unsupported transformation: {outof} to {into}.")
if attrs is not None and "coordinate_system" in attrs:
attrs.update({"coordinate_system": into})
arr = transform_func(arr, axis=axis)
if coords is not None:
if replace_dim:
# TODO accept (name, coord) tuple
coords, dims = _replace_dim(
coords, dims, axis, into, arr.shape[axis]
)
return _make_dataarray(arr, coords, dims, name, attrs, None, None)
else:
return arr
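# Illustrative usage sketch (not part of the original module), assuming
# "cartesian" and "polar" are registered keys in ``_cs_funcs``:
#
#     xy = np.array([[1.0, 0.0], [0.0, 1.0]])
#     r_phi = transform_coordinates(xy, into="polar", outof="cartesian")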
def lookup_transform(outof, into, as_dataset=False, return_timestamps=False):
""" Look up transformation from one frame to another.
The transformation is a rotation `r` followed by a translation `t` which,
when applied to a point expressed wrt the base frame `B`, yields that
point wrt the target frame `T`:
.. math:: p_T = rot(r, p_B) + t
Parameters
----------
outof: str or ReferenceFrame
Base frame of the transformation.
into: str or ReferenceFrame
Target frame of the transformation.
as_dataset: bool, default False
If True, return an xarray.Dataset. Otherwise, return a tuple of
translation and rotation.
return_timestamps: bool, default False
If True, and `as_dataset` is False, also return the timestamps of the
lookup.

Returns
-------
translation, rotation: each numpy.ndarray
Translation and rotation of transformation between the frames,
if `as_dataset` is False.
timestamps: numpy.ndarray
Corresponding timestamps of the lookup if `return_timestamps` is True.
ds: xarray.Dataset
The above arrays as an xarray.Dataset, if `as_dataset` is True.
"""
into = _resolve_rf(into)
outof = _resolve_rf(outof)
translation, rotation, timestamps = outof.lookup_transform(into)
if as_dataset:
return _make_transform_or_pose_dataset(
translation, rotation, outof, timestamps
)
elif return_timestamps:
return translation, rotation, timestamps
else:
return translation, rotation
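# Illustrative usage sketch (not part of the original module; frame names
# hypothetical):
#
#     t, r = lookup_transform("body", "world")
#     t, r, ts = lookup_transform("body", "world", return_timestamps=True)
#     ds = lookup_transform("body", "world", as_dataset=True)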
def lookup_pose(frame, reference, as_dataset=False, return_timestamps=False):
""" Look up pose of one frame wrt a reference.

Parameters
----------
frame: str or ReferenceFrame
Frame for which to look up the pose.
reference: str or ReferenceFrame
Reference frame of the pose.
as_dataset: bool, default False
If True, return an xarray.Dataset. Otherwise, return a tuple of
position and orientation.
return_timestamps: bool, default False
If True, and `as_dataset` is False, also return the timestamps of the
lookup.

Returns
-------
position, orientation: each numpy.ndarray
Position and orientation of the pose between the frames,
if `as_dataset` is False.
timestamps: numpy.ndarray
Corresponding timestamps of the lookup if `return_timestamps` is True.
ds: xarray.Dataset
The above arrays as an xarray.Dataset, if `as_dataset` is True.
"""
reference = _resolve_rf(reference)
frame = _resolve_rf(frame)
position, orientation, timestamps = frame.lookup_transform(reference)
if as_dataset:
return _make_transform_or_pose_dataset(
position, orientation, reference, timestamps, pose=True
)
elif return_timestamps:
return position, orientation, timestamps
else:
return position, orientation
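# Illustrative usage sketch (not part of the original module; frame names
# hypothetical). Per the implementation above, the pose of ``frame`` wrt
# ``reference`` is looked up as the transform from ``frame`` into
# ``reference``:
#
#     p, q = lookup_pose("body", "world")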
def lookup_twist(
frame,
reference=None,
represent_in=None,
outlier_thresh=None,
cutoff=None,
mode="quaternion",
as_dataset=False,
return_timestamps=False,
):
""" Estimate linear and angular velocity of a frame wrt a reference.

Parameters
----------
frame: str or ReferenceFrame
The moving frame whose twist is estimated.
reference: str or ReferenceFrame, optional
The reference frame wrt which the twist is estimated. Defaults to
the parent frame of the moving frame.
represent_in: str or ReferenceFrame, optional
The reference frame in which the twist is represented. Defaults
to the reference frame.
outlier_thresh: float, optional
Some SLAM-based trackers introduce position corrections when a new
camera frame becomes available. This introduces outliers in the
linear velocity estimate. The estimation algorithm used here
can suppress these outliers by throwing out samples where the
norm of the second-order differences of the position is above
`outlier_thresh` and interpolating the missing values. For
measurements from the Intel RealSense T265 tracker, set this value
to 1e-3.
cutoff: float, optional
Frequency of a low-pass filter applied to linear and angular
velocity after the estimation as a fraction of the Nyquist
frequency.
mode: str, default "quaternion"
If "quaternion", compute the angular velocity from the quaternion
derivative. If "rotation_vector", compute the angular velocity from
the gradient of the axis-angle representation of the rotations.
as_dataset: bool, default False
If True, return an xarray.Dataset. Otherwise, return a tuple of linear
and angular velocity.
return_timestamps: bool, default False
If True, and `as_dataset` is False, also return the timestamps of the
lookup.

Returns
-------
linear, angular: each numpy.ndarray
Linear and angular velocity of moving frame wrt reference frame,
represented in representation frame, if `as_dataset` is False.
timestamps: numpy.ndarray
Corresponding timestamps of the lookup if `return_timestamps` is True.
ds: xarray.Dataset
The above arrays as an xarray.Dataset, if `as_dataset` is True.
"""
frame = _resolve_rf(frame)
reference = _resolve_rf(reference or frame.parent)
represent_in = _resolve_rf(represent_in or reference)
linear, angular, timestamps = frame.lookup_twist(
reference,
represent_in,
outlier_thresh=outlier_thresh,
cutoff=cutoff,
mode=mode,
return_timestamps=True,
)
if as_dataset:
return _make_twist_dataset(
angular, linear, frame, reference, represent_in, timestamps
)
elif return_timestamps:
return linear, angular, timestamps
else:
return linear, angular
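# Illustrative usage sketch (not part of the original module; frame names
# hypothetical, threshold value taken from the docstring's T265 hint):
#
#     linear, angular, ts = lookup_twist(
#         "tracker",
#         reference="world",
#         outlier_thresh=1e-3,
#         return_timestamps=True,
#     )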
def lookup_linear_velocity(
frame,
reference=None,
represent_in=None,
outlier_thresh=None,
cutoff=None,
as_dataarray=False,
return_timestamps=False,
):
""" Estimate linear velocity of a frame wrt a reference.

Parameters
----------
frame: str or ReferenceFrame
The moving frame whose velocity is estimated.
reference: str or ReferenceFrame, optional
The reference frame wrt which the velocity is estimated. Defaults to
the parent frame of the moving frame.
represent_in: str or ReferenceFrame, optional
The frame in which the velocity is represented. Defaults
to the reference frame.
outlier_thresh: float, optional
Some SLAM-based trackers introduce position corrections when a new
camera frame becomes available. This introduces outliers in the
linear velocity estimate. The estimation algorithm used here
can suppress these outliers by throwing out samples where the
norm of the second-order differences of the position is above
`outlier_thresh` and interpolating the missing values. For
measurements from the Intel RealSense T265 tracker, set this value
to 1e-3.
cutoff: float, optional
Frequency of a low-pass filter applied to linear and angular
velocity after the estimation as a fraction of the Nyquist
frequency.
as_dataarray: bool, default False
If True, return an xarray.DataArray.
return_timestamps: bool, default False
If True and `as_dataarray` is False, also return the timestamps of the
lookup.

Returns
-------
linear: numpy.ndarray or xarray.DataArray
Linear velocity of moving frame wrt reference frame, represented in
representation frame.
timestamps: numpy.ndarray
Corresponding timestamps of the lookup if `return_timestamps` is True.
"""
frame = _resolve_rf(frame)
reference = _resolve_rf(reference or frame.parent)
represent_in = _resolve_rf(represent_in or reference)
linear, timestamps = frame.lookup_linear_velocity(
reference,
represent_in,
outlier_thresh=outlier_thresh,
cutoff=cutoff,
return_timestamps=True,
)
if as_dataarray:
return _make_velocity_dataarray(
linear, "linear", frame, reference, represent_in, timestamps
)
elif return_timestamps:
return linear, timestamps
else:
return linear
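# Illustrative usage sketch (not part of the original module; frame names
# hypothetical):
#
#     v, ts = lookup_linear_velocity(
#         "tracker", reference="world", return_timestamps=True
#     )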
def lookup_angular_velocity(
frame,
reference=None,
represent_in=None,
outlier_thresh=None,
cutoff=None,
mode="quaternion",
as_dataarray=False,
return_timestamps=False,
):
""" Estimate angular velocity of a frame wrt a reference.

Parameters
----------
frame: str or ReferenceFrame
The moving frame whose velocity is estimated.
reference: str or ReferenceFrame, optional
The reference frame wrt which the velocity is estimated. Defaults to
the parent frame of the moving frame.
represent_in: str or ReferenceFrame, optional
The frame in which the velocity is represented. Defaults
to the reference frame.
outlier_thresh: float, optional
Suppress samples where the norm of the second-order differences of the
rotation is above `outlier_thresh` and interpolate the missing values.
cutoff: float, optional
Frequency of a low-pass filter applied to the angular
velocity after the estimation as a fraction of the Nyquist
frequency.
mode: str, default "quaternion"
If "quaternion", compute the angular velocity from the quaternion
derivative. If "rotation_vector", compute the angular velocity from
the gradient of the axis-angle representation of the rotations.
as_dataarray: bool, default False
If True, return an xarray.DataArray.
return_timestamps: bool, default False
If True and `as_dataarray` is False, also return the timestamps of the
lookup.

Returns
-------
angular: numpy.ndarray or xarray.DataArray
Angular velocity of moving frame wrt reference frame, represented in
representation frame.
timestamps: numpy.ndarray
Corresponding timestamps of the lookup if `return_timestamps` is True.
"""
frame = _resolve_rf(frame)
reference = _resolve_rf(reference or frame.parent)
represent_in = _resolve_rf(represent_in or reference)
angular, timestamps = frame.lookup_angular_velocity(
reference,
represent_in,
outlier_thresh=outlier_thresh,
cutoff=cutoff,
mode=mode,
return_timestamps=True,
)
if as_dataarray:
return _make_velocity_dataarray(
angular, "angular", frame, reference, represent_in, timestamps,
)
elif return_timestamps:
return angular, timestamps
else:
return angular
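# Illustrative usage sketch (not part of the original module; frame names
# hypothetical):
#
#     w = lookup_angular_velocity(
#         "tracker", reference="world", mode="rotation_vector", as_dataarray=True
#     )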
| 31.200202 | 79 | 0.677124 | 3,848 | 30,857 | 5.310811 | 0.080821 | 0.024956 | 0.014191 | 0.018497 | 0.811656 | 0.793306 | 0.767714 | 0.758906 | 0.741339 | 0.731601 | 0 | 0.000792 | 0.263603 | 30,857 | 988 | 80 | 31.231781 | 0.898561 | 0.644035 | 0 | 0.534091 | 0 | 0 | 0.117304 | 0.038259 | 0 | 0 | 0 | 0.001012 | 0 | 1 | 0.03125 | false | 0.002841 | 0.03125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4aea4913de9727d4962d55cf7a0e724611494252 | 182 | py | Python | hm_pyhelper/constants/shipping.py | KevinWassermann94/hm-pyhelper | b48f500b407d3c2ce8651844bb8a939746847691 | [
"MIT"
] | null | null | null | hm_pyhelper/constants/shipping.py | KevinWassermann94/hm-pyhelper | b48f500b407d3c2ce8651844bb8a939746847691 | [
"MIT"
] | null | null | null | hm_pyhelper/constants/shipping.py | KevinWassermann94/hm-pyhelper | b48f500b407d3c2ce8651844bb8a939746847691 | [
"MIT"
] | null | null | null | DESTINATION_NAME_KEY = 'shipping_destination_label'
DESTINATION_ADD_GATEWAY_TXN_KEY = 'shipping_destination_add_gateway_txn'
DESTINATION_WALLETS_KEY = 'shipping_destination_wallets'
| 45.5 | 72 | 0.901099 | 22 | 182 | 6.727273 | 0.409091 | 0.222973 | 0.445946 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049451 | 182 | 3 | 73 | 60.666667 | 0.855491 | 0 | 0 | 0 | 0 | 0 | 0.494505 | 0.494505 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab1198da8990066e19a5cb62b121c6bb9f2224f2 | 2,713 | py | Python | lib/sqlalchemy/testing/__init__.py | randallk/sqlalchemy | d8ac1e9e6bfc931d2f14f9846d6924106f56b7e6 | [
"MIT"
] | 1 | 2020-02-08T20:04:42.000Z | 2020-02-08T20:04:42.000Z | lib/sqlalchemy/testing/__init__.py | randallk/sqlalchemy | d8ac1e9e6bfc931d2f14f9846d6924106f56b7e6 | [
"MIT"
] | null | null | null | lib/sqlalchemy/testing/__init__.py | randallk/sqlalchemy | d8ac1e9e6bfc931d2f14f9846d6924106f56b7e6 | [
"MIT"
] | null | null | null | # testing/__init__.py
# Copyright (C) 2005-2020 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
from . import config # noqa
from . import mock # noqa
from .assertions import assert_raises # noqa
from .assertions import assert_raises_message # noqa
from .assertions import assert_raises_return # noqa
from .assertions import AssertsCompiledSQL # noqa
from .assertions import AssertsExecutionResults # noqa
from .assertions import ComparesTables # noqa
from .assertions import emits_warning # noqa
from .assertions import emits_warning_on # noqa
from .assertions import eq_ # noqa
from .assertions import eq_ignore_whitespace # noqa
from .assertions import eq_regex # noqa
from .assertions import expect_deprecated # noqa
from .assertions import expect_warnings # noqa
from .assertions import in_ # noqa
from .assertions import is_ # noqa
from .assertions import is_false # noqa
from .assertions import is_instance_of # noqa
from .assertions import is_not_ # noqa
from .assertions import is_true # noqa
from .assertions import le_ # noqa
from .assertions import ne_ # noqa
from .assertions import not_in_ # noqa
from .assertions import startswith_ # noqa
from .assertions import uses_deprecated # noqa
from .config import combinations # noqa
from .config import db # noqa
from .config import fixture # noqa
from .config import requirements as requires # noqa
from .exclusions import _is_excluded # noqa
from .exclusions import _server_version # noqa
from .exclusions import against as _against # noqa
from .exclusions import db_spec # noqa
from .exclusions import exclude # noqa
from .exclusions import fails # noqa
from .exclusions import fails_if # noqa
from .exclusions import fails_on # noqa
from .exclusions import fails_on_everything_except # noqa
from .exclusions import future # noqa
from .exclusions import only_if # noqa
from .exclusions import only_on # noqa
from .exclusions import skip # noqa
from .exclusions import skip_if # noqa
from .util import adict # noqa
from .util import fail # noqa
from .util import flag_combinations # noqa
from .util import force_drop_names # noqa
from .util import lambda_combinations # noqa
from .util import metadata_fixture # noqa
from .util import provide_metadata # noqa
from .util import resolve_lambda # noqa
from .util import rowset # noqa
from .util import run_as_contextmanager # noqa
from .util import teardown_events # noqa
from .warnings import assert_warnings # noqa
def against(*queries):
return _against(config._current, *queries)
crashes = skip
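# Illustrative usage sketch (an assumption, not from this file): ``against``
# evaluates database specs against the currently configured backend, e.g.:
#
#     if against("postgresql"):
#         ...  # backend-specific assertions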
| 37.680556 | 69 | 0.78216 | 370 | 2,713 | 5.575676 | 0.272973 | 0.213282 | 0.209404 | 0.279205 | 0.375182 | 0.117305 | 0 | 0 | 0 | 0 | 0 | 0.003529 | 0.164394 | 2,713 | 71 | 70 | 38.211268 | 0.906484 | 0.186509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.423729 | 1 | 0.016949 | false | 0 | 0.949153 | 0.016949 | 0.983051 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab3ef1b1530ebda7ef03bb817b6d658a8895bae8 | 7,018 | py | Python | tests/test_filter.py | aeko-empt/ovs-dbg | 1415e10168923c7bf8710dd6a6b47d88a8d83457 | [
"Apache-2.0"
] | 7 | 2021-11-12T16:36:18.000Z | 2022-01-11T06:37:29.000Z | tests/test_filter.py | aeko-empt/ovs-dbg | 1415e10168923c7bf8710dd6a6b47d88a8d83457 | [
"Apache-2.0"
] | 43 | 2021-08-16T23:04:33.000Z | 2022-03-25T07:38:50.000Z | tests/test_filter.py | aeko-empt/ovs-dbg | 1415e10168923c7bf8710dd6a6b47d88a8d83457 | [
"Apache-2.0"
] | 4 | 2021-07-14T13:14:51.000Z | 2021-12-30T12:52:14.000Z | import pytest
from ovs_dbg.filter import OFFilter
from ovs_dbg.ofp import OFPFlowFactory
from ovs_dbg.odp import ODPFlowFactory
ofp_factory = OFPFlowFactory()
odp_factory = ODPFlowFactory()
@pytest.mark.parametrize(
"expr,flow,expected,match",
[
(
"nw_src=192.168.1.1 && tcp_dst=80",
ofp_factory.from_string(
"nw_src=192.168.1.1,tcp_dst=80 actions=drop"
),
True,
["nw_src", "tcp_dst"],
),
(
"nw_src=192.168.1.2 || tcp_dst=80",
ofp_factory.from_string(
"nw_src=192.168.1.1,tcp_dst=80 actions=drop"
),
True,
["nw_src", "tcp_dst"],
),
(
"nw_src=192.168.1.1 || tcp_dst=90",
ofp_factory.from_string(
"nw_src=192.168.1.1,tcp_dst=80 actions=drop"
),
True,
["nw_src", "tcp_dst"],
),
(
"nw_src=192.168.1.2 && tcp_dst=90",
ofp_factory.from_string(
"nw_src=192.168.1.1,tcp_dst=80 actions=drop"
),
False,
["nw_src", "tcp_dst"],
),
(
"nw_src=192.168.1.1",
ofp_factory.from_string(
"nw_src=192.168.1.0/24,tcp_dst=80 actions=drop"
),
False,
["nw_src"],
),
(
"nw_src~=192.168.1.1",
ofp_factory.from_string(
"nw_src=192.168.1.0/24,tcp_dst=80 actions=drop"
),
True,
["nw_src"],
),
(
"nw_src~=192.168.1.1/30",
ofp_factory.from_string(
"nw_src=192.168.1.0/24,tcp_dst=80 actions=drop"
),
True,
["nw_src"],
),
(
"nw_src~=192.168.1.0/16",
ofp_factory.from_string(
"nw_src=192.168.1.0/24,tcp_dst=80 actions=drop"
),
False,
["nw_src"],
),
(
"nw_src~=192.168.1.0/16",
ofp_factory.from_string(
"nw_src=192.168.1.0/24,tcp_dst=80 actions=drop"
),
False,
["nw_src"],
),
(
"n_bytes=100",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
True,
["n_bytes"],
),
(
"n_bytes>10",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
True,
["n_bytes"],
),
(
"n_bytes>100",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
False,
["n_bytes"],
),
(
"n_bytes<100",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
False,
["n_bytes"],
),
(
"n_bytes<1000",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
True,
["n_bytes"],
),
(
"n_bytes>0 && drop=true",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=drop" # noqa: E501
),
True,
["n_bytes", "drop"],
),
(
"n_bytes>0 && drop=true",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=2" # noqa: E501
),
False,
["n_bytes"],
),
(
"n_bytes>10 && !output.port=3",
ofp_factory.from_string(
"n_bytes=100 priority=100,nw_src=192.168.1.0/24,tcp_dst=80 actions=2" # noqa: E501
),
True,
["n_bytes", "output"],
),
(
"dl_src=00:11:22:33:44:55",
ofp_factory.from_string(
"n_bytes=100 priority=100,dl_src=00:11:22:33:44:55,nw_src=192.168.1.0/24,tcp_dst=80 actions=2" # noqa: E501
),
True,
["dl_src"],
),
(
"dl_src~=00:11:22:33:44:55",
ofp_factory.from_string(
"n_bytes=100 priority=100,dl_src=00:11:22:33:44:55/ff:ff:ff:ff:ff:00,nw_src=192.168.1.0/24,tcp_dst=80 actions=2" # noqa: E501
),
True,
["dl_src"],
),
(
"dl_src~=00:11:22:33:44:66",
ofp_factory.from_string(
"n_bytes=100 priority=100,dl_src=00:11:22:33:44:55/ff:ff:ff:ff:ff:00,nw_src=192.168.1.0/24,tcp_dst=80 actions=2" # noqa: E501
),
True,
["dl_src"],
),
(
"dl_src~=00:11:22:33:44:66 && tp_dst=1000",
ofp_factory.from_string(
"n_bytes=100 priority=100,dl_src=00:11:22:33:44:55/ff:ff:ff:ff:ff:00,nw_src=192.168.1.0/24,tp_dst=0x03e8/0xfff8 actions=2" # noqa: E501
),
False,
["dl_src", "tp_dst"],
),
(
"dl_src~=00:11:22:33:44:66 && tp_dst~=1000",
ofp_factory.from_string(
"n_bytes=100 priority=100,dl_src=00:11:22:33:44:55/ff:ff:ff:ff:ff:00,nw_src=192.168.1.0/24,tp_dst=0x03e8/0xfff8 actions=2" # noqa: E501
),
True,
["dl_src", "tp_dst"],
),
(
"encap",
odp_factory.from_string(
"encap(eth_type(0x0800),ipv4(src=10.76.23.240/255.255.255.248,dst=10.76.23.106,proto=17,tos=0/0,ttl=64,frag=no)) actions:drop" # noqa: E501
),
True,
["encap"],
),
(
"encap.ipv4.src=10.76.23.240",
odp_factory.from_string(
"encap(eth_type(0x0800),ipv4(src=10.76.23.240/255.255.255.248,dst=10.76.23.106,proto=17,tos=0/0,ttl=64,frag=no)) actions:drop" # noqa: E501
),
False,
["encap"],
),
(
"encap.ipv4.src~=10.76.23.240",
odp_factory.from_string(
"encap(eth_type(0x0800),ipv4(src=10.76.23.240/255.255.255.248,dst=10.76.23.106,proto=17,tos=0/0,ttl=64,frag=no)) actions:drop" # noqa: E501
),
True,
["encap"],
),
],
)
def test_filter(expr, flow, expected, match):
ffilter = OFFilter(expr)
result = ffilter.evaluate(flow)
if expected:
assert result
else:
assert not result
assert [kv.key for kv in result.kv] == match
| 31.053097 | 156 | 0.46224 | 924 | 7,018 | 3.319264 | 0.098485 | 0.06521 | 0.080861 | 0.111184 | 0.872514 | 0.865993 | 0.865993 | 0.857515 | 0.855559 | 0.851321 | 0 | 0.18324 | 0.38957 | 7,018 | 225 | 157 | 31.191111 | 0.53268 | 0.024936 | 0 | 0.689498 | 0 | 0.073059 | 0.391591 | 0.246118 | 0 | 0 | 0.006153 | 0 | 0.013699 | 1 | 0.004566 | false | 0 | 0.018265 | 0 | 0.022831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db3ccd5d108f2df2e2dde193332a5f85df9e8386 | 38 | py | Python | uiza/api_resources/__init__.py | uizaio/api-wrapper-python | e67c162e711857341f7ef5752178219e94f604d3 | [
"MIT"
] | 2 | 2019-04-22T11:39:36.000Z | 2020-05-26T04:01:43.000Z | uiza/api_resources/__init__.py | uizaio/api-wrapper-python | e67c162e711857341f7ef5752178219e94f604d3 | [
"MIT"
] | null | null | null | uiza/api_resources/__init__.py | uizaio/api-wrapper-python | e67c162e711857341f7ef5752178219e94f604d3 | [
"MIT"
] | 2 | 2019-02-11T09:34:03.000Z | 2019-02-12T10:31:41.000Z | from uiza.api_resources.base import *
| 19 | 37 | 0.815789 | 6 | 38 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db3fd06c8c457f59c563a3b517f3691ad4cbb44d | 2,248 | py | Python | frappe-bench/env/lib/python2.7/site-packages/gocardless_pro/resources/refund.py | ibrahmm22/library-management | b88a2129a5a2e96ce1f945ec8ba99a0b63b8c506 | [
"MIT"
] | 30 | 2015-07-08T21:10:10.000Z | 2022-02-17T10:08:55.000Z | frappe-bench/env/lib/python2.7/site-packages/gocardless_pro/resources/refund.py | ibrahmm22/library-management | b88a2129a5a2e96ce1f945ec8ba99a0b63b8c506 | [
"MIT"
] | 21 | 2015-12-14T02:24:52.000Z | 2022-02-05T15:56:00.000Z | frappe-bench/env/lib/python2.7/site-packages/gocardless_pro/resources/refund.py | ibrahmm22/library-management | b88a2129a5a2e96ce1f945ec8ba99a0b63b8c506 | [
"MIT"
] | 19 | 2016-02-10T15:57:42.000Z | 2022-02-05T10:21:05.000Z | # WARNING: Do not edit by hand, this file was generated by Crank:
#
# https://github.com/gocardless/crank
#
class Refund(object):
"""A thin wrapper around a refund, providing easy access to its
attributes.

Example:
refund = client.refunds.get()
refund.id
"""
def __init__(self, attributes, api_response):
self.attributes = attributes
self.api_response = api_response
@property
def amount(self):
return self.attributes.get('amount')
@property
def created_at(self):
return self.attributes.get('created_at')
@property
def currency(self):
return self.attributes.get('currency')
@property
def fx(self):
return self.Fx(self.attributes.get('fx'))
@property
def id(self):
return self.attributes.get('id')
@property
def links(self):
return self.Links(self.attributes.get('links'))
@property
def metadata(self):
return self.attributes.get('metadata')
@property
def reference(self):
return self.attributes.get('reference')
@property
def status(self):
return self.attributes.get('status')
class Fx(object):
"""Wrapper for the response's 'fx' attribute."""
def __init__(self, attributes):
self.attributes = attributes
@property
def estimated_exchange_rate(self):
return self.attributes.get('estimated_exchange_rate')
@property
def exchange_rate(self):
return self.attributes.get('exchange_rate')
@property
def fx_amount(self):
return self.attributes.get('fx_amount')
@property
def fx_currency(self):
return self.attributes.get('fx_currency')
class Links(object):
"""Wrapper for the response's 'links' attribute."""
def __init__(self, attributes):
self.attributes = attributes
@property
def mandate(self):
return self.attributes.get('mandate')
@property
def payment(self):
return self.attributes.get('payment')
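# Illustrative usage sketch (not part of the original module): assumes a
# configured gocardless_pro client; the identifier is a placeholder.
#
#     refund = client.refunds.get("RF123")
#     refund.amount
#     refund.fx.exchange_rate
#     refund.links.payment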
| 18.578512 | 67 | 0.584075 | 238 | 2,248 | 5.403361 | 0.243697 | 0.228616 | 0.163297 | 0.242613 | 0.46112 | 0.314152 | 0.161742 | 0.101089 | 0.101089 | 0.101089 | 0 | 0 | 0.311833 | 2,248 | 120 | 68 | 18.733333 | 0.831286 | 0.141459 | 0 | 0.363636 | 1 | 0 | 0.066702 | 0.012176 | 0 | 0 | 0 | 0 | 0 | 1 | 0.327273 | false | 0 | 0 | 0.272727 | 0.654545 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
db8ec0ae489912c0e6a51ff0c63e586586cd5984 | 43 | py | Python | torch_tools/training/callbacks/__init__.py | gregunz/TorchTools | 19a33f2e4cd38f86b74bd732949516df66f9e24f | [
"MIT"
] | null | null | null | torch_tools/training/callbacks/__init__.py | gregunz/TorchTools | 19a33f2e4cd38f86b74bd732949516df66f9e24f | [
"MIT"
] | null | null | null | torch_tools/training/callbacks/__init__.py | gregunz/TorchTools | 19a33f2e4cd38f86b74bd732949516df66f9e24f | [
"MIT"
] | null | null | null | from .checkpoint import CheckpointCallback
| 21.5 | 42 | 0.883721 | 4 | 43 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbaaa17f3518199e252dd5b7cae72eff6a15546c | 99 | py | Python | python/optatufsc/run.py | cchenzi/INE5454 | 8936f84fa96397b75f2dbbf75570865bb94363be | [
"MIT"
] | null | null | null | python/optatufsc/run.py | cchenzi/INE5454 | 8936f84fa96397b75f2dbbf75570865bb94363be | [
"MIT"
] | null | null | null | python/optatufsc/run.py | cchenzi/INE5454 | 8936f84fa96397b75f2dbbf75570865bb94363be | [
"MIT"
] | null | null | null | from optatufsc.data import get_optatufsc_data
if __name__ == '__main__':
get_optatufsc_data()
| 19.8 | 45 | 0.777778 | 13 | 99 | 5 | 0.615385 | 0.6 | 0.492308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141414 | 99 | 4 | 46 | 24.75 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.080808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dbd8b2111a69ca7d7f619880710820ac3a7e3043 | 191 | py | Python | wittgenstein/__init__.py | Arzik1987/wittgenstein | 8378f58f17382a862f5839dcafa756dd2bc84c18 | [
"MIT"
] | 64 | 2019-02-28T08:45:19.000Z | 2022-03-19T21:48:10.000Z | wittgenstein/__init__.py | Arzik1987/wittgenstein | 8378f58f17382a862f5839dcafa756dd2bc84c18 | [
"MIT"
] | 18 | 2019-03-07T09:12:34.000Z | 2022-01-31T17:05:45.000Z | wittgenstein/__init__.py | Arzik1987/wittgenstein | 8378f58f17382a862f5839dcafa756dd2bc84c18 | [
"MIT"
] | 17 | 2019-03-15T09:58:43.000Z | 2022-03-16T17:59:42.000Z | # Author: Ilan Moscovitz <ilan.moscovitz@gmail.com>
# License: MIT
from .irep import IREP
from .ripper import RIPPER
import pandas as pd
__version__ = "0.3.2"
__author__ = "Ilan Moscovitz"
| 19.1 | 51 | 0.748691 | 28 | 191 | 4.821429 | 0.642857 | 0.288889 | 0.281481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.151832 | 191 | 9 | 52 | 21.222222 | 0.814815 | 0.324607 | 0 | 0 | 0 | 0 | 0.150794 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9176471886a15eff391cead19268a92ed8dcf556 | 53 | py | Python | scripts/wrangler.py | miguelesteras/Digital-Marketing-Analytics | e9ffb7698416d3675dbc1624d8da7019bce8f5a5 | [
"MIT"
] | null | null | null | scripts/wrangler.py | miguelesteras/Digital-Marketing-Analytics | e9ffb7698416d3675dbc1624d8da7019bce8f5a5 | [
"MIT"
] | null | null | null | scripts/wrangler.py | miguelesteras/Digital-Marketing-Analytics | e9ffb7698416d3675dbc1624d8da7019bce8f5a5 | [
"MIT"
] | null | null | null | import pandas
import numpy as np
import pandas as pd
| 13.25 | 19 | 0.811321 | 10 | 53 | 4.3 | 0.6 | 0.55814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188679 | 53 | 3 | 20 | 17.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91859e0562155e61d812e09e5bc7bb8a2421f091 | 23,112 | py | Python | tests/test_psis.py | biomedical-cybernetics/pypsis | 987cea5e2895a9744a266787b7f7f5b702e128ca | [
"MIT"
] | null | null | null | tests/test_psis.py | biomedical-cybernetics/pypsis | 987cea5e2895a9744a266787b7f7f5b702e128ca | [
"MIT"
] | null | null | null | tests/test_psis.py | biomedical-cybernetics/pypsis | 987cea5e2895a9744a266787b7f7f5b702e128ca | [
"MIT"
] | null | null | null | import unittest
import numpy as np
from sklearn.datasets import load_iris
from psis import indices
from tests import sample_data
class TestTrustworthinessComputation(unittest.TestCase):
def test_centroid_based_trustworthiness(self):
matrix, labels, positives = sample_data._swiss_roll_sample_data()
projection = 'centroid'
formula = 'median'
iterations = 50
seed = 100
model = indices.compute_trustworthiness(matrix, labels, positives, projection_type=projection,
center_formula=formula, iterations=iterations, seed=seed)
# psi-p
self.assertEqual(1.2428266254044471e-40, model['psi_p']['value'])
self.assertEqual(0.817396238671282, model['psi_p']['max'])
self.assertEqual(0.017227740338773223, model['psi_p']['min'])
self.assertEqual(0.22880354422930693, model['psi_p']['std'])
self.assertEqual(0.0196078431372549, model['psi_p']['p_value'])
# psi-roc
self.assertEqual(0.8739845884072535, model['psi_roc']['value'])
self.assertEqual(0.5754732669816965, model['psi_roc']['max'])
self.assertEqual(0.504562187438256, model['psi_roc']['min'])
self.assertEqual(0.018282702477521843, model['psi_roc']['std'])
self.assertEqual(0.0196078431372549, model['psi_roc']['p_value'])
# psi-pr
self.assertEqual(0.7829196344855981, model['psi_pr']['value'])
self.assertEqual(0.3184859473523561, model['psi_pr']['max'])
self.assertEqual(0.2704583081970596, model['psi_pr']['min'])
self.assertEqual(0.009793843589861682, model['psi_pr']['std'])
self.assertEqual(0.0196078431372549, model['psi_pr']['p_value'])
# psi-mcc
self.assertEqual(0.628615966394596, model['psi_mcc']['value'])
self.assertEqual(0.10585904641997314, model['psi_mcc']['max'])
self.assertEqual(0.008667971955796558, model['psi_mcc']['min'])
self.assertEqual(0.02324139114770061, model['psi_mcc']['std'])
self.assertEqual(0.0196078431372549, model['psi_mcc']['p_value'])
def test_lda_based_trustworthiness(self):
matrix, labels, positives = sample_data._swiss_roll_sample_data()
projection = 'lda'
iterations = 50
seed = 100
model = indices.compute_trustworthiness(matrix, labels, positives, projection_type=projection,
iterations=iterations, seed=seed)
# psi-p
self.assertEqual(1.009261421351576e-40, model['psi_p']['value'])
self.assertEqual(0.6521271550825026, model['psi_p']['max'])
self.assertEqual(0.010606992074642272, model['psi_p']['min'])
self.assertEqual(0.1627033162824601, model['psi_p']['std'])
self.assertEqual(0.0196078431372549, model['psi_p']['p_value'])
# psi-roc
self.assertEqual(0.9176549415996585, model['psi_roc']['value'])
self.assertEqual(0.5816329751950958, model['psi_roc']['max'])
self.assertEqual(0.513258579532901, model['psi_roc']['min'])
self.assertEqual(0.015973959781501835, model['psi_roc']['std'])
self.assertEqual(0.0196078431372549, model['psi_roc']['p_value'])
# psi-pr
self.assertEqual(0.818340188219321, model['psi_pr']['value'])
self.assertEqual(0.3271676715034102, model['psi_pr']['max'])
self.assertEqual(0.2864985337058604, model['psi_pr']['min'])
self.assertEqual(0.00936130564040644, model['psi_pr']['std'])
self.assertEqual(0.0196078431372549, model['psi_pr']['p_value'])
# psi-mcc
self.assertEqual(0.6401887507956918, model['psi_mcc']['value'])
self.assertEqual(0.11584620474627294, model['psi_mcc']['max'])
self.assertEqual(0.014460273138448517, model['psi_mcc']['min'])
self.assertEqual(0.025160919960162023, model['psi_mcc']['std'])
self.assertEqual(0.0196078431372549, model['psi_mcc']['p_value'])
class TestIndicesComputation(unittest.TestCase):
def test_wrong_inputs(self):
cases = [
# wrong matrix
{
'matrix': [[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]],
'labels': np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2']),
'positives': np.array(['sample1']),
'center': 'median'
},
# wrong labels
{
'matrix': np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]]),
'labels': list(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2']),
'positives': np.array(['sample1']),
'center': 'median'
},
# wrong positives
{
'matrix': np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]]),
'labels': np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2']),
'positives': 'sample1',
'center': 'median'
}
]
for case in cases:
self.assertRaises(
TypeError,
indices.compute_psis,
case['matrix'],
case['labels'],
case['positives'],
case['center']
)
def test_wrong_center_formula(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2'])
input_positive = np.array(['sample1'])
input_projection = 'centroid'
input_formula = 'fake-formula'
self.assertWarns(SyntaxWarning, indices.compute_psis, input_matrix, input_labels, input_positive,
input_projection, input_formula)
def test_centroid_based_perfect_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2'])
input_positive = np.array(['sample1'])
input_formula = 'median'
input_projection_type = 'centroid'
expected_psi_p = 0.0286
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection_type,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_lda_based_perfect_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2'])
input_positive = np.array(['sample1'])
input_formula = '' # ignored
input_projection_type = 'lda'
expected_psi_p = 0.0286
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection_type,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_centroid_based_high_dimensional_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2'])
input_positive = np.array(['sample1'])
input_formula = 'median'
input_projection_type = 'centroid'
expected_psi_p = 0.0286
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection_type,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_lda_based_high_dimensional_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample2'])
input_positive = np.array(['sample1'])
input_formula = '' # ignored
input_projection_type = 'lda'
expected_psi_p = 0.0286
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection_type,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_centroid_based_mixed_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample2', 'sample1', 'sample1', 'sample1', 'sample2', 'sample2', 'sample2', 'sample1'])
input_positive = np.array(['sample1'])
input_formula = 'median'
expected_psi_p = 0.8857
expected_psi_roc = 0.5625
expected_psi_pr = 0.5015
expected_psi_mcc = 0.5000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_no_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample2', 'sample1', 'sample2', 'sample1', 'sample2', 'sample1', 'sample2'])
input_positive = np.array(['sample1'])
input_formula = 'median'
expected_psi_p = 0.6857
expected_psi_roc = 0.6250
expected_psi_pr = 0.6673
expected_psi_mcc = 0.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_lda_based_multiclass_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample1', 'sample1', 'sample2', 'sample2', 'sample3', 'sample3', 'sample4', 'sample4'])
input_positive = np.array(['sample2', 'sample3', 'sample4'])
input_formula = '' # ignored
input_projection_type = 'lda'
expected_psi_p = 0.3333
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection_type,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_centroid_based_high_dimensional_multiclass_separation(self):
data = load_iris()
samples = np.empty(len(data.target), dtype=object)
samples[data.target == 0] = data.target_names[0]
samples[data.target == 1] = data.target_names[1]
samples[data.target == 2] = data.target_names[2]
positives = np.unique(samples)
positives = np.delete(positives, 0)
input_matrix = data.data
input_labels = samples
input_positive = positives
input_formula = 'median'
input_projection = 'centroid'
expected_psi_p = 0.0000
expected_psi_roc = 0.9723
expected_psi_pr = 0.9707
expected_psi_mcc = 0.8080
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_lda_based_high_dimensional_multiclass_separation(self):
data = load_iris()
samples = np.empty(len(data.target), dtype=object)
samples[data.target == 0] = data.target_names[0]
samples[data.target == 1] = data.target_names[1]
samples[data.target == 2] = data.target_names[2]
positives = np.unique(samples)
positives = np.delete(positives, 0)
input_matrix = data.data
input_labels = samples
input_positive = positives
input_formula = '' # ignored
input_projection = 'lda'
expected_psi_p = 0.0000
expected_psi_roc = 0.9975
expected_psi_pr = 0.9976
expected_psi_mcc = 0.9644
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_projection,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_multiclass_mean_centered_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample3', 'sample3', 'sample2', 'sample2', 'sample1', 'sample1', 'sample1', 'sample1'])
input_positive = np.array(['sample2', 'sample3'])
input_formula = 'mean'
expected_psi_p = 0.2828
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_multiclass_mode_centered_separation(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample3', 'sample3', 'sample2', 'sample2', 'sample1', 'sample1', 'sample1', 'sample1'])
input_positive = np.array(['sample2', 'sample3'])
input_formula = 'mode'
expected_psi_p = 0.2828
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
input_positive,
input_formula)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_multiclass_separation_default_args(self):
input_matrix = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [10, 11], [12, 13], [14, 15], [16, 17]])
input_labels = np.array(
['sample3', 'sample3', 'sample2', 'sample2', 'sample1', 'sample1', 'sample1', 'sample1'])
expected_psi_p = 0.2828
expected_psi_roc = 1.0000
expected_psi_pr = 1.0000
expected_psi_mcc = 1.0000
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels)
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
def test_multidimensional_mean_centered_separation(self):
input_matrix, input_labels, input_positives = sample_data._smokers_sample_data()
expected_psi_p = 0.0
expected_psi_roc = 0.7999
expected_psi_pr = 0.713
expected_psi_mcc = 0.4207
actual_psi_p, actual_psi_roc, actual_psi_pr, actual_psi_mcc = indices.compute_psis(input_matrix,
input_labels,
center_formula='mean')
self.assertEqual(expected_psi_p, round(actual_psi_p, 4))
self.assertEqual(expected_psi_roc, round(actual_psi_roc, 4))
self.assertEqual(expected_psi_pr, round(actual_psi_pr, 4))
self.assertEqual(expected_psi_mcc, round(actual_psi_mcc, 4))
if __name__ == '__main__':
unittest.main()
| 51.474388 | 114 | 0.519514 | 2,389 | 23,112 | 4.726664 | 0.071997 | 0.101311 | 0.105916 | 0.119731 | 0.861938 | 0.855119 | 0.841038 | 0.782767 | 0.777099 | 0.76231 | 0 | 0.102698 | 0.369721 | 23,112 | 448 | 115 | 51.589286 | 0.672479 | 0.005668 | 0 | 0.677871 | 0 | 0 | 0.069725 | 0 | 0 | 0 | 0 | 0 | 0.263305 | 1 | 0.047619 | false | 0 | 0.014006 | 0 | 0.067227 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
918b8ad98b676f63e562081b34f11d5db8acc0d8 | 7,699 | py | Python | tests/test_simple_layers.py | civodlu/trw | b9a1cf045f61d6df9c65c014ef63b4048972dcdc | [
"MIT"
] | 3 | 2019-07-04T01:20:41.000Z | 2020-01-27T02:36:12.000Z | tests/test_simple_layers.py | civodlu/trw | b9a1cf045f61d6df9c65c014ef63b4048972dcdc | [
"MIT"
] | null | null | null | tests/test_simple_layers.py | civodlu/trw | b9a1cf045f61d6df9c65c014ef63b4048972dcdc | [
"MIT"
] | 2 | 2020-10-19T13:46:06.000Z | 2021-12-27T02:18:10.000Z | from unittest import TestCase
import trw
import numpy as np
import torch
class TestSimpleLayers(TestCase):
def test_conv2d(self):
# make sure we the syntatic sugar for the arguments are correctly managed (e.g., kernel sizes are expanded...)
i = trw.simple_layers.Input([None, 3, 28, 28], feature_name='input_2d_rgb')
o = trw.simple_layers.convs_2d(
i,
channels=[16, 32],
convolution_kernels=5,
strides=1,
pooling_size=2,
convolution_repeats=1,
with_flatten=True,
dropout_probability=0.5,
padding='same')
net = trw.simple_layers.compile_nn([o])
r = net({'input_2d_rgb': torch.zeros([10, 3, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (10, 32 * (28 // 2 // 2) ** 2)
def test_conv3d(self):
# make sure the syntactic sugar for the arguments is correctly managed (e.g., kernel sizes are expanded...)
i = trw.simple_layers.Input([None, 3, 28, 28, 28], feature_name='input_3d_rgb')
o = trw.simple_layers.convs_3d(
i,
channels=[10, 11],
convolution_kernels=3,
strides=1,
pooling_size=2,
convolution_repeats=1,
with_flatten=True,
dropout_probability=0.5,
padding='same')
net = trw.simple_layers.compile_nn([o])
r = net({'input_3d_rgb': torch.zeros([5, 3, 28, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 11 * (28 // 2 // 2) ** 3)
def test_denses(self):
i = trw.simple_layers.Input([None, 3], feature_name='input')
o = trw.simple_layers.denses(
i,
sizes=[16, 32],
dropout_probability=0.5,
activation=torch.nn.ReLU6,
last_layer_is_output=True)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 32)
def test_shift_scale(self):
i = trw.simple_layers.Input([None, 3, 28, 28], feature_name='input')
o = trw.simple_layers.ShiftScale(i, mean=10.0, standard_deviation=5.0)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 3, 28, 28)
assert torch.abs(torch.mean(r[o]) - ((0.0 - 10.0) / 5.0)) < 1e-5
def test_global_average_pooling_2d(self):
i = trw.simple_layers.Input([None, 3, 28, 28], feature_name='input')
o = trw.simple_layers.global_average_pooling_2d(i)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 3)
def test_global_max_pooling_2d(self):
i = trw.simple_layers.Input([None, 3, 28, 28], feature_name='input')
o = trw.simple_layers.global_max_pooling_2d(i)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 3)
def test_global_max_pooling_3d(self):
i = trw.simple_layers.Input([None, 3, 4, 28, 28], feature_name='input')
o = trw.simple_layers.global_max_pooling_3d(i)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3, 4, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 3)
def test_global_average_pooling_3d(self):
i = trw.simple_layers.Input([None, 3, 4, 28, 28], feature_name='input')
o = trw.simple_layers.global_average_pooling_3d(i)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.zeros([5, 3, 4, 28, 28], dtype=torch.float32)})
assert len(r) == 1
assert r[o].shape == (5, 3)
def test_output_classification(self):
i = trw.simple_layers.Input([None, 3, 6, 6], feature_name='input')
i = trw.simple_layers.denses(i, [4, 2])
o = trw.simple_layers.OutputClassification(i, output_name='classification', classes_name='output')
net = trw.simple_layers.compile_nn([o])
r = net({
'input': torch.zeros([5, 3, 6, 6], dtype=torch.float32),
'output': torch.zeros([5], dtype=torch.int64)
})
assert len(r) == 1
assert r['classification'].output.shape == (5, 2)
def test_output_embedding(self):
i = trw.simple_layers.Input([None, 3, 6, 6], feature_name='input')
o = trw.simple_layers.OutputEmbedding(i, output_name='embedding')
net = trw.simple_layers.compile_nn([o])
i_torch = torch.randn([5, 32]).float()
r = net({'input': i_torch})
assert len(r) == 1
assert (r['embedding'].output == i_torch).all()
def test_batchnorm2d(self):
i = trw.simple_layers.Input([None, 3, 32, 32], feature_name='input')
o = trw.simple_layers.BatchNorm2d(i)
net = trw.simple_layers.compile_nn([o])
i_torch = torch.randn([5, 3, 32, 32]).float()
r = net({'input': i_torch})
assert len(r) == 1
assert r[o].shape == (5, 3, 32, 32)
def test_batchnorm3d(self):
i = trw.simple_layers.Input([None, 3, 32, 32, 32], feature_name='input')
o = trw.simple_layers.BatchNorm3d(i)
net = trw.simple_layers.compile_nn([o])
i_torch = torch.randn([5, 3, 32, 32, 32]).float()
r = net({'input': i_torch})
assert len(r) == 1
assert r[o].shape == (5, 3, 32, 32, 32)
def test_basic_2d(self):
i = trw.simple_layers.Input([None, 3, 6, 6], feature_name='input')
i = trw.simple_layers.Conv2d(i, out_channels=16, kernel_size=5, stride=1, padding='same')
i = trw.simple_layers.ReLU(i)
i = trw.simple_layers.MaxPool2d(i, kernel_size=2)
i2 = trw.simple_layers.ReLU(i)
i = trw.simple_layers.ConcatChannels([i, i2])
i = trw.simple_layers.Flatten(i)
o = trw.simple_layers.Linear(i, 8)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.randn([5, 3, 6, 6]).float()})
assert len(r) == 1
assert r[o].shape == (5, 8)
def test_basic_3d(self):
i = trw.simple_layers.Input([None, 3, 6, 6, 6], feature_name='input')
i = trw.simple_layers.Conv3d(i, out_channels=16, kernel_size=5, stride=1, padding='same')
i = trw.simple_layers.ReLU(i)
o = trw.simple_layers.MaxPool3d(i, kernel_size=2)
net = trw.simple_layers.compile_nn([o])
r = net({'input': torch.randn([5, 3, 6, 6, 6]).float()})
assert len(r) == 1
assert r[o].shape == (5, 16, 3, 3, 3)
def test_sub_tensor(self):
n = trw.simple_layers.Input([None, 1, 32, 32], feature_name='input')
n = trw.simple_layers.SubTensor(n, [0, 10, 15], [1, 14, 22])
net = trw.simple_layers.compile_nn([n])
i = torch.randn([5, 1, 32, 32], dtype=torch.float32)
o = net({'input': i})
assert o[n].shape == (5, 1, 4, 7)
assert (o[n] == i[:, 0:1, 10:14, 15:22]).all()
def test_sub_tensor_partial(self):
n = trw.simple_layers.Input([None, 3, 32, 32], feature_name='input')
n = trw.simple_layers.SubTensor(n, [0], [1])
net = trw.simple_layers.compile_nn([n])
i = torch.randn([5, 3, 32, 32], dtype=torch.float32)
o = net({'input': i})
assert o[n].shape == (5, 1, 32, 32)
assert (o[n] == i[:, 0:1]).all()
| 39.482051 | 118 | 0.575529 | 1,145 | 7,699 | 3.71441 | 0.11179 | 0.120621 | 0.201035 | 0.082765 | 0.781801 | 0.759699 | 0.735481 | 0.7162 | 0.709146 | 0.676464 | 0 | 0.066667 | 0.261592 | 7,699 | 194 | 119 | 39.685567 | 0.681442 | 0.028185 | 0 | 0.443038 | 0 | 0 | 0.035031 | 0 | 0 | 0 | 0 | 0 | 0.208861 | 1 | 0.101266 | false | 0 | 0.025316 | 0 | 0.132911 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91c672366c328fe09f4ffd9f7b47b9bc0ff2f6d4 | 74,234 | py | Python | unit_tests/test_zaza_model.py | coreycb/zaza | 468245ac4fb4016648d8b3c35a64623dafb0d001 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | unit_tests/test_zaza_model.py | coreycb/zaza | 468245ac4fb4016648d8b3c35a64623dafb0d001 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | unit_tests/test_zaza_model.py | coreycb/zaza | 468245ac4fb4016648d8b3c35a64623dafb0d001 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # Copyright 2018 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import aiounittest
# Prior to Python 3.8 asyncio raised an ``asyncio.futures.TimeoutError``
# exception on timeout; from Python 3.8 onwards it raises an exception
# from the new ``asyncio.exceptions`` module.
#
# Neither of these inherits from a relevant built-in exception, so we
# cannot catch them generically with the built-in TimeoutError or similar.
try:
import asyncio.exceptions
AsyncTimeoutError = asyncio.exceptions.TimeoutError
except ImportError:
import asyncio.futures
AsyncTimeoutError = asyncio.futures.TimeoutError
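# Illustrative usage (not part of the original module): with the alias
# above, a timeout can be handled uniformly across Python versions, e.g.
#
#     try:
#         loop.run(model.async_block_until(check, timeout=5))
#     except AsyncTimeoutError:
#         ...  # handle the timeout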
import copy
import concurrent
import mock
import unit_tests.utils as ut_utils
from juju import loop
import zaza.model as model
FAKE_STATUS = {
'can-upgrade-to': '',
'charm': 'local:trusty/app-136',
'subordinate-to': [],
'units': {'app/0': {'leader': True,
'machine': '0',
'agent-status': {
'status': 'idle'
},
'subordinates': {
'app-hacluster/0': {
'charm': 'local:trusty/hacluster-0',
'leader': True,
'agent-status': {
'status': 'idle'
}}}},
'app/1': {'machine': '1',
'agent-status': {
'status': 'idle'
},
'subordinates': {
'app-hacluster/1': {
'charm': 'local:trusty/hacluster-0',
'agent-status': {
'status': 'idle'
}}}},
'app/2': {'machine': '2',
'agent-status': {
'status': 'idle'
},
'subordinates': {
'app-hacluster/2': {
'charm': 'local:trusty/hacluster-0',
'agent-status': {
'status': 'idle'
}}}}}}
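# FAKE_STATUS mimics the Juju status of a three-unit 'app' with an
# 'app-hacluster' subordinate on each unit and every agent idle;
# EXECUTING_STATUS below is the single-unit variant with busy agents.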
EXECUTING_STATUS = {
'can-upgrade-to': '',
'charm': 'local:trusty/app-136',
'subordinate-to': [],
'units': {'app/0': {'leader': True,
'machine': '0',
'agent-status': {
'status': 'executing'
},
'subordinates': {
'app-hacluster/0': {
'charm': 'local:trusty/hacluster-0',
'leader': True,
'agent-status': {
'status': 'executing'
}}}}}}
class TestModel(ut_utils.BaseTestCase):
def tearDown(self):
super(TestModel, self).tearDown()
# Clear cached model name
model.CURRENT_MODEL = None
model.MODEL_ALIASES = {}
def setUp(self):
super(TestModel, self).setUp()
async def _scp_to(source, destination, user=None, proxy=None,
scp_opts=None):
return
async def _scp_from(source, destination, user=None, proxy=None,
scp_opts=None):
return
async def _run(command, timeout=None):
return self.action
async def _run_action(command, **params):
return self.run_action
async def _wait():
return
async def _add_relation(rel1, rel2):
return
async def _destroy_relation(rel1, rel2):
return
async def _add_unit(count=1, to=None):
return
async def _destroy_unit(*unitnames):
return
def _is_leader(leader):
async def _inner_is_leader():
return leader
return _inner_is_leader
self.run_action = mock.MagicMock()
self.run_action.wait.side_effect = _wait
self.action = mock.MagicMock()
self.action.data = {
'model-uuid': '1a035018-71ff-473e-8aab-d1a8d6b6cda7',
'id': 'e26ffb69-6626-4e93-8840-07f7e041e99d',
'receiver': 'glance/0',
'name': 'juju-run',
'parameters': {
'command': 'somecommand someargument', 'timeout': 0},
'status': 'completed',
'message': '',
'results': {'Code': '0', 'Stderr': '', 'Stdout': 'RESULT'},
'enqueued': '2018-04-11T23:13:42Z',
'started': '2018-04-11T23:13:42Z',
'completed': '2018-04-11T23:13:43Z'}
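# Canned result of a 'juju-run' action; the run_on_unit tests below
# assert against its 'results' payload (Code/Stdout/Stderr).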
self.unit1 = mock.MagicMock()
self.unit1.public_address = 'ip1'
self.unit1.name = 'app/2'
self.unit1.entity_id = 'app/2'
self.unit1.machine = 'machine3'
self.unit2 = mock.MagicMock()
self.unit2.public_address = 'ip2'
self.unit2.name = 'app/4'
self.unit2.entity_id = 'app/4'
self.unit2.machine = 'machine7'
self.unit2.run.side_effect = _run
self.unit1.run.side_effect = _run
self.unit1.scp_to.side_effect = _scp_to
self.unit2.scp_to.side_effect = _scp_to
self.unit1.scp_from.side_effect = _scp_from
self.unit2.scp_from.side_effect = _scp_from
self.unit1.run_action.side_effect = _run_action
self.unit2.run_action.side_effect = _run_action
self.unit1.is_leader_from_status.side_effect = _is_leader(False)
self.unit2.is_leader_from_status.side_effect = _is_leader(True)
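# unit1 ('app/2') reports as a follower and unit2 ('app/4') as the
# leader, so the leader-targeting helpers under test should resolve
# to unit2.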
self.unit1.data = {'agent-status': {'current': 'idle'}}
self.unit2.data = {'agent-status': {'current': 'idle'}}
self.units = [self.unit1, self.unit2]
self.relation1 = mock.MagicMock()
self.relation1.id = 42
self.relation1.matches.side_effect = lambda x: x == 'app'
self.relation2 = mock.MagicMock()
self.relation2.id = 51
self.relation2.matches.side_effect = lambda x: x == 'app:interface'
self.relations = [self.relation1, self.relation2]
_units = mock.MagicMock()
_units.units = self.units
_units.relations = self.relations
_units.add_relation.side_effect = _add_relation
_units.destroy_relation.side_effect = _destroy_relation
_units.add_unit.side_effect = _add_unit
_units.destroy_unit.side_effect = _destroy_unit
self.mymodel = mock.MagicMock()
self.mymodel.applications = {
'app': _units
}
self.Model_mock = mock.MagicMock()
# Juju Status Object and data
self.key = "instance-id"
self.key_data = "machine-uuid"
self.machine = "1"
self.machine_data = {self.key: self.key_data}
self.unit = "app/1"
self.application = "app"
self.subordinate_application = "subordinate_application"
self.subordinate_application_data = {
"subordinate-to": [self.application],
"units": None}
self.subordinate_unit = "subordinate_application/1"
self.subordinate_unit_data = {
"workload-status": {"status": "active"}}
self.unit_data = {
"workload-status": {"status": "active"},
"machine": self.machine,
"subordinates": {
self.subordinate_unit: self.subordinate_unit_data}}
self.application_data = {"units": {
self.unit1.name: self.subordinate_unit_data,
self.unit: self.unit_data}}
self.juju_status = mock.MagicMock()
self.juju_status.applications = {
self.application: self.application_data,
self.subordinate_application: self.subordinate_application_data}
self.juju_status.machines = self.machine_data
async def _connect_model(model_name):
return model_name
async def _disconnect():
return
async def _connect():
return
async def _ctrl_connect():
return
async def _ctrl_add_model(model_name, config=None):
return
async def _ctrl_destroy_models(model_name):
return
self.Model_mock.connect.side_effect = _connect
self.Model_mock.connect_model.side_effect = _connect_model
self.Model_mock.disconnect.side_effect = _disconnect
self.Model_mock.applications = self.mymodel.applications
self.Model_mock.units = {
'app/2': self.unit1,
'app/4': self.unit2}
self.model_name = "testmodel"
self.Model_mock.info.name = self.model_name
self.Controller_mock = mock.MagicMock()
self.Controller_mock.connect.side_effect = _ctrl_connect
self.Controller_mock.add_model.side_effect = _ctrl_add_model
self.Controller_mock.destroy_models.side_effect = _ctrl_destroy_models
def test_get_juju_model(self):
self.patch_object(model.os, 'environ')
self.patch_object(model, 'get_current_model')
self.get_current_model.return_value = 'modelsmodel'
def _get_env(key):
return _env[key]
self.environ.__getitem__.side_effect = _get_env
_env = {"JUJU_MODEL": 'envmodel'}
# JUJU_MODEL environment variable set
self.assertEqual(model.get_juju_model(), 'envmodel')
self.get_current_model.assert_not_called()
def test_get_juju_model_alt(self):
self.patch_object(model.os, 'environ')
self.patch_object(model, 'get_current_model')
self.get_current_model.return_value = 'modelsmodel'
def _get_env(key):
return _env[key]
self.environ.__getitem__.side_effect = _get_env
_env = {"MODEL_NAME": 'envmodel'}
# MODEL_NAME environment variable set
self.assertEqual(model.get_juju_model(), 'envmodel')
self.get_current_model.assert_not_called()
def test_get_juju_model_noenv(self):
self.patch_object(model.os, 'environ')
self.patch_object(model, 'async_get_current_model')
async def _async_get_current_model():
return 'modelsmodel'
self.async_get_current_model.side_effect = _async_get_current_model
# No environment variable set
self.environ.__getitem__.side_effect = KeyError
self.assertEqual(model.get_juju_model(), 'modelsmodel')
self.async_get_current_model.assert_called_once()
def test_set_juju_model_aliases(self):
model.set_juju_model_aliases({'alias1': 'model1', 'alias2': 'model2'})
self.assertEqual(
model.MODEL_ALIASES,
{'alias1': 'model1', 'alias2': 'model2'})
def test_unset_juju_model_aliases(self):
model.unset_juju_model_aliases()
self.assertEqual(
model.MODEL_ALIASES,
{})
model.set_juju_model_aliases({'alias1': 'model1', 'alias2': 'model2'})
model.unset_juju_model_aliases()
self.assertEqual(
model.MODEL_ALIASES,
{})
def test_get_juju_model_aliases(self):
model.set_juju_model_aliases({'alias1': 'model1', 'alias2': 'model2'})
self.assertEqual(
model.get_juju_model_aliases(),
{'alias1': 'model1', 'alias2': 'model2'})
def test_run_in_model(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
async def _wrapper():
async with model.run_in_model('modelname') as mymodel:
return mymodel
self.assertEqual(loop.run(_wrapper()), self.Model_mock)
self.Model_mock.connect_model.assert_called_once_with('modelname')
self.Model_mock.disconnect.assert_called_once_with()
def test_run_in_model_exception(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
async def _wrapper():
async with model.run_in_model('modelname'):
raise Exception
with self.assertRaises(Exception):
loop.run(_wrapper())
self.Model_mock.connect_model.assert_called_once_with('modelname')
self.Model_mock.disconnect.assert_called_once_with()
def test_scp_to_unit(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
model.scp_to_unit('app/1', '/tmp/src', '/tmp/dest')
self.unit1.scp_to.assert_called_once_with(
'/tmp/src', '/tmp/dest', proxy=False, scp_opts='', user='ubuntu')
def test_scp_to_all_units(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.scp_to_all_units('app', '/tmp/src', '/tmp/dest')
self.unit1.scp_to.assert_called_once_with(
'/tmp/src', '/tmp/dest', proxy=False, scp_opts='', user='ubuntu')
self.unit2.scp_to.assert_called_once_with(
'/tmp/src', '/tmp/dest', proxy=False, scp_opts='', user='ubuntu')
def test_scp_from_unit(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
model.scp_from_unit('app/1', '/tmp/src', '/tmp/dest')
self.unit1.scp_from.assert_called_once_with(
'/tmp/src', '/tmp/dest', proxy=False, scp_opts='', user='ubuntu')
def test_get_units(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(
model.get_units('app'),
self.units)
def test_get_machines(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(
model.get_machines('app'),
['machine3', 'machine7'])
def test_get_first_unit_name(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_units')
self.get_units.return_value = self.units
self.assertEqual(
model.get_first_unit_name('model', 'app'),
'app/2')
def test_get_lead_unit_name(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_units')
self.get_units.return_value = self.units
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(
model.get_lead_unit_name('app', 'model'),
'app/4')
def test_get_lead_unit_ip(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_units')
self.get_units.return_value = self.units
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(
model.get_lead_unit_ip('app', 'model'),
'ip2')
def test_get_unit_from_name(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
# Normal case
self.assertEqual(
model.get_unit_from_name('app/4', model_name='mname'),
self.unit2)
# Normal case with Model()
self.assertEqual(
model.get_unit_from_name('app/4', self.mymodel),
self.unit2)
# Normal case, using default
self.assertEqual(
model.get_unit_from_name('app/4'),
self.unit2)
# Unit does not exist
with self.assertRaises(model.UnitNotFound):
model.get_unit_from_name('app/10', model_name='mname')
# Application does not exist
with self.assertRaises(model.UnitNotFound):
model.get_unit_from_name('bad_name', model_name='mname')
def test_get_app_ips(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_units')
self.get_units.return_value = self.units
self.assertEqual(model.get_app_ips('model', 'app'), ['ip1', 'ip2'])
def test_run_on_unit(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
expected = {
'Code': '0',
'Stderr': '',
'Stdout': 'RESULT',
'stderr': '',
'stdout': 'RESULT'}
self.cmd = cmd = 'somecommand someargument'
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
self.assertEqual(model.run_on_unit('app/2', cmd),
expected)
self.unit1.run.assert_called_once_with(cmd, timeout=None)
def test_run_on_unit_lc_keys(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.action.data['results'] = {
'Code': '0',
'stdout': 'RESULT',
'stderr': 'some error'}
expected = {
'Code': '0',
'Stderr': 'some error',
'Stdout': 'RESULT',
'stderr': 'some error',
'stdout': 'RESULT'}
self.cmd = cmd = 'somecommand someargument'
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
self.assertEqual(model.run_on_unit('app/2', cmd),
expected)
self.unit1.run.assert_called_once_with(cmd, timeout=None)
def test_run_on_unit_missing_stderr(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
expected = {
'Code': '0',
'Stderr': '',
'Stdout': 'RESULT',
'stderr': '',
'stdout': 'RESULT'}
self.action.data['results'] = {'Code': '0', 'Stdout': 'RESULT'}
self.cmd = cmd = 'somecommand someargument'
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
self.assertEqual(model.run_on_unit('app/2', cmd),
expected)
self.unit1.run.assert_called_once_with(cmd, timeout=None)
def test_run_on_leader(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
expected = {
'Code': '0',
'Stderr': '',
'Stdout': 'RESULT',
'stderr': '',
'stdout': 'RESULT'}
self.cmd = cmd = 'somecommand someargument'
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(model.run_on_leader('app', cmd),
expected)
self.unit2.run.assert_called_once_with(cmd, timeout=None)
def test_get_relation_id(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(model.get_relation_id('app', 'app'), 42)
def test_add_relation(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.add_relation('app', 'shared-db', 'mysql-shared-db')
self.mymodel.applications['app'].add_relation.assert_called_once_with(
'shared-db',
'mysql-shared-db')
def test_remove_relation(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.remove_relation('app', 'shared-db', 'mysql-shared-db')
self.mymodel.applications[
'app'].destroy_relation.assert_called_once_with('shared-db',
'mysql-shared-db')
def test_add_unit(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.add_unit('app', count=2, to='lxd/0')
self.mymodel.applications['app'].add_unit.assert_called_once_with(
count=2, to='lxd/0')
def test_add_unit_wait(self):
self.patch_object(model, 'async_block_until_unit_count')
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.add_unit('app', count=2, to='lxd/0', wait_appear=True)
self.mymodel.applications['app'].add_unit.assert_called_once_with(
count=2, to='lxd/0')
self.async_block_until_unit_count.assert_called_once_with(
'app', 4, model_name=None)
def test_destroy_unit(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.destroy_unit('app', 'app/2')
self.mymodel.applications['app'].destroy_unit.assert_called_once_with(
'app/2')
def test_destroy_unit_wait(self):
self.patch_object(model, 'async_block_until_unit_count')
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.destroy_unit('app', 'app/2', wait_disappear=True)
self.mymodel.applications['app'].destroy_unit.assert_called_once_with(
'app/2')
self.async_block_until_unit_count.assert_called_once_with(
'app', 1, model_name=None)
def test_get_relation_id_interface(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(
model.get_relation_id('app', 'app',
remote_interface_name='interface'),
51)
def test_run_action(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
model.run_action(
'app/2',
'backup',
action_params={'backup_dir': '/dev/null'})
self.unit1.run_action.assert_called_once_with(
'backup',
backup_dir='/dev/null')
self.run_action.status = 'failed'
self.run_action.message = 'aMessage'
self.run_action.id = 'aId'
self.run_action.enqueued = 'aEnqueued'
self.run_action.started = 'aStarted'
self.run_action.completed = 'aCompleted'
self.run_action.name = 'backup2'
self.run_action.parameters = {'backup_dir': '/non-existent'}
self.run_action.receiver = 'app/2'
with self.assertRaises(model.ActionFailed) as e:
model.run_action(
self.run_action.receiver,
self.run_action.name,
action_params=self.run_action.parameters,
raise_on_failure=True)
self.assertEqual(
str(e.exception),
'Run of action "backup2" with parameters '
'"{\'backup_dir\': \'/non-existent\'}" on '
'"app/2" failed with "aMessage" '
'(id=aId status=failed enqueued=aEnqueued '
'started=aStarted completed=aCompleted)')
def test_get_actions(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model.subprocess, 'check_output')
self.check_output.return_value = 'action: "action desc"'
self.assertEqual(
model.get_actions('myapp'),
{'action': "action desc"})
self.check_output.assert_called_once_with(
['juju', 'actions', '-m', 'mname', 'myapp', '--format', 'yaml'])
def test_run_action_on_leader(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
model.run_action_on_leader(
'app',
'backup',
action_params={'backup_dir': '/dev/null'})
self.assertFalse(self.unit1.called)
self.unit2.run_action.assert_called_once_with(
'backup',
backup_dir='/dev/null')
self.run_action.status = 'failed'
self.run_action.message = 'aMessage'
self.run_action.id = 'aId'
self.run_action.enqueued = 'aEnqueued'
self.run_action.started = 'aStarted'
self.run_action.completed = 'aCompleted'
self.run_action.name = 'backup2'
self.run_action.parameters = {'backup_dir': '/non-existent'}
self.run_action.receiver = 'app/2'
with self.assertRaises(model.ActionFailed) as e:
model.run_action(
self.run_action.receiver,
self.run_action.name,
action_params=self.run_action.parameters,
raise_on_failure=True)
self.assertEqual(
str(e.exception),
'Run of action "backup2" with parameters '
'"{\'backup_dir\': \'/non-existent\'}" on '
'"app/2" failed with "aMessage" '
'(id=aId status=failed enqueued=aEnqueued '
'started=aStarted completed=aCompleted)')
def test_run_action_on_units(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_unit_from_name')
units = {
'app/1': self.unit1,
'app/2': self.unit2}
self.get_unit_from_name.side_effect = lambda x, y: units[x]
self.run_action.status = 'completed'
model.run_action_on_units(
['app/1', 'app/2'],
'backup',
action_params={'backup_dir': '/dev/null'})
self.unit1.run_action.assert_called_once_with(
'backup',
backup_dir='/dev/null')
self.unit2.run_action.assert_called_once_with(
'backup',
backup_dir='/dev/null')
def test_run_action_on_units_timeout(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.run_action.status = 'running'
with self.assertRaises(AsyncTimeoutError):
model.run_action_on_units(
['app/1'],
'backup',
action_params={'backup_dir': '/dev/null'},
timeout=0.1)
def test_run_action_on_units_fail(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.run_action.status = 'failed'
with self.assertRaises(model.ActionFailed):
model.run_action_on_units(
['app/1'],
'backup',
raise_on_failure=True,
action_params={'backup_dir': '/dev/null'})
def _application_states_setup(self, setup, units_idle=True):
self.system_ready = True
self._block_until_calls = 0
async def _block_until(f, timeout=0):
# Mimic timeouts: the effective timeout grows by one on every call,
# so passing timeout=-N makes the Nth call raise a TimeoutError.
timeout = timeout + self._block_until_calls
self._block_until_calls += 1
if timeout == -1:
raise concurrent.futures._base.TimeoutError("Timeout", 1)
result = f()
if not result:
self.system_ready = False
return
async def _all_units_idle():
return units_idle
self.Model_mock.block_until.side_effect = _block_until
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.all_units_idle.return_value = _all_units_idle
p_mock_ws = mock.PropertyMock(
return_value=setup['workload-status'])
p_mock_wsmsg = mock.PropertyMock(
return_value=setup['workload-status-message'])
type(self.unit1).workload_status = p_mock_ws
type(self.unit1).workload_status_message = p_mock_wsmsg
type(self.unit2).workload_status = p_mock_ws
type(self.unit2).workload_status_message = p_mock_wsmsg
def test_units_with_wl_status_state(self):
self._application_states_setup({
'workload-status': 'active',
'workload-status-message': 'Unit is ready'})
units = model.units_with_wl_status_state(self.Model_mock, 'active')
self.assertEqual(len(units), 2)
self.assertIn(self.unit1, units)
self.assertIn(self.unit2, units)
def test_units_with_wl_status_state_no_match(self):
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
units = model.units_with_wl_status_state(self.Model_mock, 'active')
self.assertEqual(len(units), 0)
def test_check_model_for_hard_errors(self):
self.patch_object(model, 'units_with_wl_status_state')
self.units_with_wl_status_state.return_value = []
# Test will fail if an Exception is raised
model.check_model_for_hard_errors(self.Model_mock)
def test_check_model_for_hard_errors_found(self):
self.patch_object(model, 'units_with_wl_status_state')
self.units_with_wl_status_state.return_value = [self.unit1]
with self.assertRaises(model.UnitError):
model.check_model_for_hard_errors(self.Model_mock)
def test_check_unit_workload_status(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'active',
'workload-status-message': 'Unit is ready'})
self.assertTrue(
model.check_unit_workload_status(self.Model_mock,
self.unit1, ['active']))
def test_check_unit_workload_status_no_match(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
self.assertFalse(
model.check_unit_workload_status(self.Model_mock,
self.unit1, ['active']))
def test_check_unit_workload_status_multi(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
self.assertTrue(
model.check_unit_workload_status(
self.Model_mock,
self.unit1, ['active', 'blocked']))
def test_check_unit_workload_status_message_message(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
self.assertTrue(
model.check_unit_workload_status_message(self.Model_mock,
self.unit1,
message='Unit is ready'))
def test_check_unit_workload_status_message_message_not_found(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Something else'})
self.assertFalse(
model.check_unit_workload_status_message(self.Model_mock,
self.unit1,
message='Unit is ready'))
def test_check_unit_workload_status_message_prefix(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready (OSD Count 23)'})
self.assertTrue(
model.check_unit_workload_status_message(
self.Model_mock,
self.unit1,
prefixes=['Readyish', 'Unit is ready']))
def test_check_unit_workload_status_message_prefix_no_match(self):
self.patch_object(model, 'check_model_for_hard_errors')
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'On my holidays'})
self.assertFalse(
model.check_unit_workload_status_message(
self.Model_mock,
self.unit1,
prefixes=['Readyish', 'Unit is ready']))
def test_wait_for_application_states(self):
self._application_states_setup({
'workload-status': 'active',
'workload-status-message': 'Unit is ready'})
model.wait_for_application_states('modelname', timeout=1)
self.assertTrue(self.system_ready)
def test_wait_for_application_states_not_ready_ws(self):
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
model.wait_for_application_states('modelname', timeout=1)
self.assertFalse(self.system_ready)
def test_wait_for_application_states_errored_unit(self):
self._application_states_setup({
'workload-status': 'error',
'workload-status-message': 'Unit is ready'})
with self.assertRaises(model.UnitError):
model.wait_for_application_states('modelname', timeout=1)
self.assertFalse(self.system_ready)
def test_wait_for_application_states_not_ready_wsmsg(self):
self._application_states_setup({
'workload-status': 'active',
'workload-status-message': 'Unit is not ready'})
model.wait_for_application_states('modelname', timeout=1)
self.assertFalse(self.system_ready)
def test_wait_for_application_states_blocked_ok(self):
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Unit is ready'})
model.wait_for_application_states(
'modelname',
states={'app': {
'workload-status': 'blocked'}},
timeout=1)
self.assertTrue(self.system_ready)
def test_wait_for_application_states_bespoke_msg(self):
self._application_states_setup({
'workload-status': 'active',
'workload-status-message': 'Sure, I could do something'})
model.wait_for_application_states(
'modelname',
states={'app': {
'workload-status-message': 'Sure, I could do something'}},
timeout=1)
self.assertTrue(self.system_ready)
def test_wait_for_application_states_bespoke_msg_blocked_ok(self):
self._application_states_setup({
'workload-status': 'blocked',
'workload-status-message': 'Sure, I could do something'})
model.wait_for_application_states(
'modelname',
states={'app': {
'workload-status': 'blocked',
'workload-status-message': 'Sure, I could do something'}},
timeout=1)
self.assertTrue(self.system_ready)
def test_wait_for_application_states_idle_timeout(self):
self._application_states_setup({
'agent-status': 'executing',
'workload-status': 'blocked',
'workload-status-message': 'Sure, I could do something'})
with self.assertRaises(model.ModelTimeout) as timeout:
model.wait_for_application_states('modelname', timeout=-2)
self.assertEqual(
timeout.exception.args[0],
"Zaza has timed out waiting on the model to reach idle state.")
def test_wait_for_application_states_timeout(self):
self._application_states_setup({
'agent-status': 'executing',
'workload-status': 'blocked',
'workload-status-message': 'Sure, I could do something'})
with self.assertRaises(model.ModelTimeout) as timeout:
model.wait_for_application_states('modelname', timeout=-3)
self.assertEqual(
timeout.exception.args[0],
("Timed out waiting for 'app/2'. The workload status is 'blocked' "
"which is not one of '['active']'"))
def test_get_current_model(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.assertEqual(model.get_current_model(), self.model_name)
def test_block_until_file_has_contents(self):
self.action.data = {
'results': {'Code': '0', 'Stderr': '', 'Stdout': 'somestring'}
}
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch("builtins.open",
new_callable=mock.mock_open(),
name="_open")
_fileobj = mock.MagicMock()
_fileobj.__enter__().read.return_value = "somestring"
self._open.return_value = _fileobj
model.block_until_file_has_contents(
'app',
'/tmp/src/myfile.txt',
'somestring',
timeout=0.1)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
self.unit2.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def test_block_until_file_has_no_contents(self):
self.action.data = {
'results': {'Code': '0', 'Stderr': ''}
}
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch("builtins.open",
new_callable=mock.mock_open(),
name="_open")
_fileobj = mock.MagicMock()
_fileobj.__enter__().read.return_value = ""
self._open.return_value = _fileobj
model.block_until_file_has_contents(
'app',
'/tmp/src/myfile.txt',
'',
timeout=0.1)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
self.unit2.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def test_block_until_file_has_contents_missing(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch("builtins.open",
new_callable=mock.mock_open(),
name="_open")
_fileobj = mock.MagicMock()
_fileobj.__enter__().read.return_value = "anything else"
self._open.return_value = _fileobj
with self.assertRaises(AsyncTimeoutError):
model.block_until_file_has_contents(
'app',
'/tmp/src/myfile.txt',
'somestring',
timeout=0.1)
self.unit1.run.assert_called_once_with('cat /tmp/src/myfile.txt')
def test_block_until_file_missing(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
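# 'test -e <file>; echo $?' prints 1 when the file is absent, so a
# Stdout of "1" simulates a missing file.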
self.action.data['results']['Stdout'] = "1"
model.block_until_file_missing(
'app',
'/tmp/src/myfile.txt',
timeout=0.1)
self.unit1.run.assert_called_once_with(
'test -e "/tmp/src/myfile.txt"; echo $?')
def test_block_until_file_missing_isnt_missing(self):
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.action.data['results']['Stdout'] = "0"
with self.assertRaises(AsyncTimeoutError):
model.block_until_file_missing(
'app',
'/tmp/src/myfile.txt',
timeout=0.1)
def test_async_block_until_all_units_idle(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
def _all_units_idle():
return True
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.all_units_idle.side_effect = _all_units_idle
self.Model_mock.block_until.side_effect = _block_until
# Check exception is not raised:
model.block_until_all_units_idle('modelname')
def test_async_block_until_all_units_idle_false(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
def _all_units_idle():
return False
self.Model_mock.all_units_idle.side_effect = _all_units_idle
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.block_until.side_effect = _block_until
# Confirm exception is raised:
with self.assertRaises(AsyncTimeoutError):
model.block_until_all_units_idle('modelname')
def test_async_block_until_all_units_idle_errored_unit(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
def _all_units_idle():
return True
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.all_units_idle.side_effect = _all_units_idle
self.patch_object(model, 'units_with_wl_status_state')
unit = mock.MagicMock()
unit.entity_id = 'aerroredunit/0'
self.units_with_wl_status_state.return_value = [unit]
self.Model_mock.block_until.side_effect = _block_until
with self.assertRaises(model.UnitError):
model.block_until_all_units_idle('modelname')
def test_block_until_unit_count(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
self.patch_object(model, 'async_get_status')
self.async_get_status.side_effect = _get_status
self.juju_status.applications[self.application]["units"] = [
'app/1', 'app/2']
model.block_until_unit_count('app', 2)
with self.assertRaises(AsyncTimeoutError):
model.block_until_unit_count('app', 3, timeout=0.1)
with self.assertRaises(AssertionError):
model.block_until_unit_count('app', 2.3)
def test_block_until_charm_url(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
self.patch_object(model, 'async_get_status')
self.async_get_status.side_effect = _get_status
target_url = 'cs:openstack-charmers-next/app'
self.juju_status.applications[self.application]['charm'] = target_url
model.block_until_charm_url('app', target_url)
with self.assertRaises(AsyncTimeoutError):
model.block_until_charm_url('app', 'something wrong', timeout=0.1)
def block_until_service_status_base(self, rou_return):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _run_on_unit(unit_name, cmd, model_name=None, timeout=None):
return rou_return
self.patch_object(model, 'async_run_on_unit')
self.async_run_on_unit.side_effect = _run_on_unit
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
def test_block_until_service_status_check_running(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.block_until_service_status_base({'Stdout': '152 409 54'})
model.block_until_service_status(
'app/2',
['test_svc'],
'running')
def test_block_until_service_status_check_running_with_pgrep(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.block_until_service_status_base({'Stdout': '152 409 54'})
model.block_until_service_status(
'app/2',
['test_svc'],
'running',
pgrep_full=True)
self.async_run_on_unit.assert_called_once_with(
'app/2',
"pgrep -f 'test_svc'",
model_name=None,
timeout=2700
)
def test_block_until_service_status_check_running_fail(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.block_until_service_status_base({'Stdout': ''})
with self.assertRaises(AsyncTimeoutError):
model.block_until_service_status(
'app/2',
['test_svc'],
'running')
def test_block_until_service_status_check_stopped(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.block_until_service_status_base({'Stdout': ''})
model.block_until_service_status(
'app/2',
['test_svc'],
'stopped')
def test_block_until_service_status_check_stopped_fail(self):
self.patch_object(model, 'get_juju_model', return_value='mname')
self.block_until_service_status_base({'Stdout': '152 409 54'})
with self.assertRaises(AsyncTimeoutError):
model.block_until_service_status(
'app/2',
['test_svc'],
'stopped')
def test_get_unit_time(self):
async def _run_on_unit(
unit_name,
command,
model_name=None,
timeout=None
):
return {'Stdout': '1524409654'}
self.patch_object(model, 'async_run_on_unit')
self.async_run_on_unit.side_effect = _run_on_unit
self.assertEqual(
model.get_unit_time('app/2'),
1524409654)
self.async_run_on_unit.assert_called_once_with(
unit_name='app/2',
command="date +'%s'",
model_name=None,
timeout=None
)
def test_get_unit_service_start_time(self):
async def _run_on_unit(
unit_name,
command,
model_name=None,
timeout=None
):
return {'Stdout': '1524409654'}
self.patch_object(model, 'async_run_on_unit')
self.async_run_on_unit.side_effect = _run_on_unit
self.assertEqual(
model.get_unit_service_start_time('app/2', 'mysvc1'), 1524409654)
cmd = (r"pidof -x 'mysvc1'| tr -d '\n' | "
"xargs -d' ' -I {} stat -c %Y /proc/{} | sort -n | head -1")
self.async_run_on_unit.assert_called_once_with(
unit_name='app/2',
command=cmd,
model_name=None,
timeout=None
)
def test_get_unit_service_start_time_with_pgrep(self):
async def _run_on_unit(
unit_name,
command,
model_name=None,
timeout=None
):
return {'Stdout': '1524409654'}
self.patch_object(model, 'async_run_on_unit')
self.async_run_on_unit.side_effect = _run_on_unit
self.assertEqual(
model.get_unit_service_start_time('app/2',
'mysvc1',
pgrep_full=True),
1524409654)
cmd = "stat -c %Y /proc/$(pgrep -o -f 'mysvc1')"
self.async_run_on_unit.assert_called_once_with(
unit_name='app/2',
command=cmd,
model_name=None,
timeout=None
)
def test_get_unit_service_start_time_not_running(self):
async def _run_on_unit(
unit_name,
command,
model_name=None,
timeout=None
):
return {'Stdout': ''}
self.patch_object(model, 'async_run_on_unit')
self.async_run_on_unit.side_effect = _run_on_unit
with self.assertRaises(model.ServiceNotRunning):
model.get_unit_service_start_time('app/2', 'mysvc1')
def block_until_oslo_config_entries_match_base(self, file_contents,
expected_contents):
self.action.data = {
'results': {'Code': '0', 'Stderr': '', 'Stdout': file_contents}
}
self.patch_object(model, 'Model')
self.patch_object(model, 'get_juju_model', return_value='mname')
self.Model.return_value = self.Model_mock
model.block_until_oslo_config_entries_match(
'app',
'/tmp/src/myfile.txt',
expected_contents,
timeout=0.1)
def test_block_until_oslo_config_entries_match(self):
file_contents = """
[DEFAULT]
verbose = False
use_syslog = False
debug = False
workers = 4
bind_host = 0.0.0.0
[glance_store]
filesystem_store_datadir = /var/lib/glance/images/
stores = glance.store.filesystem.Store,glance.store.http.Store
default_store = file
[image_format]
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,root-tar
"""
expected_contents = {
'DEFAULT': {
'debug': ['False']},
'glance_store': {
'filesystem_store_datadir': ['/var/lib/glance/images/'],
'default_store': ['file']}}
self.block_until_oslo_config_entries_match_base(
file_contents,
expected_contents)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
self.unit2.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def test_block_until_oslo_config_entries_match_fail(self):
file_contents = """
[DEFAULT]
verbose = False
use_syslog = False
debug = True
workers = 4
bind_host = 0.0.0.0
[glance_store]
filesystem_store_datadir = /var/lib/glance/images/
stores = glance.store.filesystem.Store,glance.store.http.Store
default_store = file
[image_format]
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,root-tar
"""
expected_contents = {
'DEFAULT': {
'debug': ['False']},
'glance_store': {
'filesystem_store_datadir': ['/var/lib/glance/images/'],
'default_store': ['file']}}
with self.assertRaises(AsyncTimeoutError):
self.block_until_oslo_config_entries_match_base(
file_contents,
expected_contents)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def test_block_until_oslo_config_entries_match_missing_entry(self):
file_contents = """
[DEFAULT]
verbose = False
use_syslog = False
workers = 4
bind_host = 0.0.0.0
[glance_store]
filesystem_store_datadir = /var/lib/glance/images/
stores = glance.store.filesystem.Store,glance.store.http.Store
default_store = file
[image_format]
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,root-tar
"""
expected_contents = {
'DEFAULT': {
'debug': ['False']},
'glance_store': {
'filesystem_store_datadir': ['/var/lib/glance/images/'],
'default_store': ['file']}}
with self.assertRaises(AsyncTimeoutError):
self.block_until_oslo_config_entries_match_base(
file_contents,
expected_contents)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def test_block_until_oslo_config_entries_match_missing_section(self):
file_contents = """
[DEFAULT]
verbose = False
use_syslog = False
workers = 4
bind_host = 0.0.0.0
[image_format]
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,root-tar
"""
expected_contents = {
'DEFAULT': {
'debug': ['False']},
'glance_store': {
'filesystem_store_datadir': ['/var/lib/glance/images/'],
'default_store': ['file']}}
with self.assertRaises(AsyncTimeoutError):
self.block_until_oslo_config_entries_match_base(
file_contents,
expected_contents)
self.unit1.run.assert_called_once_with(
'cat /tmp/src/myfile.txt')
def block_until_services_restarted_base(self, gu_return=None,
gu_raise_exception=False):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
async def _async_get_unit_service_start_time(unit, svc, timeout=None,
model_name=None,
pgrep_full=False):
if gu_raise_exception:
raise model.ServiceNotRunning('sv1')
else:
return gu_return
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'async_get_unit_service_start_time')
self.async_get_unit_service_start_time.side_effect = \
_async_get_unit_service_start_time
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
def test_block_until_services_restarted(self):
self.block_until_services_restarted_base(gu_return=10)
model.block_until_services_restarted(
'app',
8,
['svc1', 'svc2'])
def test_block_until_services_restarted_with_pgrep(self):
self.block_until_services_restarted_base(gu_return=10)
model.block_until_services_restarted(
'app',
8,
['svc1', 'svc2'],
pgrep_full=True)
self.async_get_unit_service_start_time.assert_has_calls([
mock.call('app/2',
'svc1',
model_name=None,
pgrep_full=True,
timeout=2700),
mock.call('app/2',
'svc2',
model_name=None,
pgrep_full=True,
timeout=2700),
mock.call('app/4',
'svc1',
model_name=None,
pgrep_full=True,
timeout=2700),
mock.call('app/4',
'svc2',
model_name=None,
pgrep_full=True,
timeout=2700),
])
def test_block_until_services_restarted_fail(self):
self.block_until_services_restarted_base(gu_return=10)
with self.assertRaises(AsyncTimeoutError):
model.block_until_services_restarted(
'app',
12,
['svc1', 'svc2'])
def test_block_until_services_restarted_not_running(self):
self.block_until_services_restarted_base(gu_raise_exception=True)
with self.assertRaises(AsyncTimeoutError):
model.block_until_services_restarted(
'app',
12,
['svc1', 'svc2'])
def test_block_until_unit_wl_status(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_unit_from_name')
self.patch_object(model, 'async_get_status')
self.async_get_status.side_effect = _get_status
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
model.block_until_unit_wl_status(
'app/1',
'active',
timeout=0.1)
model.block_until_unit_wl_status(
'subordinate_application/1',
'active',
timeout=0.1)
def test_block_until_unit_wl_status_fail(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
(self.juju_status.applications[self.application]
["units"][self.unit]["workload-status"]["status"]) = "blocked"
(self.juju_status.applications[self.application]
["units"][self.unit]['subordinates'][self.subordinate_unit]
["workload-status"]["status"]) = "blocked"
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_unit_from_name')
self.patch_object(model, 'async_get_status')
self.async_get_status.side_effect = _get_status
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
with self.assertRaises(AsyncTimeoutError):
model.block_until_unit_wl_status(
'app/1',
'active',
timeout=0.1)
with self.assertRaises(AsyncTimeoutError):
model.block_until_unit_wl_status(
'subordinate_application/1',
'active',
timeout=0.1)
def test_block_until_unit_wl_status_inverse(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_unit_from_name')
self.patch_object(model, 'async_get_status')
self.async_get_status.side_effect = _get_status
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
model.block_until_unit_wl_status(
'app/1',
'unknown',
negate_match=True,
timeout=0.1)
model.block_until_unit_wl_status(
'subordinate_application/1',
'unknown',
negate_match=True,
timeout=0.1)
def test_block_until_wl_status_info_starts_with(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_unit_from_name')
self.patch_object(model, 'async_get_status')
self.juju_status.applications['app']['units']['app/1'][
'workload-status']['info'] = "match-me if you want"
self.juju_status.applications['app']['units']['app/2'][
'workload-status']['info'] = "match-me if you want"
self.async_get_status.side_effect = _get_status
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
model.block_until_wl_status_info_starts_with(
'app',
'match-me')
def test_block_until_wl_status_info_starts_with_negative(self):
async def _block_until(f, timeout=None):
rc = await f()
if not rc:
raise AsyncTimeoutError
async def _get_status():
return self.juju_status
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'get_unit_from_name')
self.patch_object(model, 'async_get_status')
self.juju_status.applications['app']['units']['app/1'][
'workload-status']['info'] = "match-me if you want"
self.juju_status.applications['app']['units']['app/2'][
'workload-status']['info'] = "match-me if you want"
self.async_get_status.side_effect = _get_status
self.patch_object(model, 'async_block_until')
self.async_block_until.side_effect = _block_until
model.block_until_wl_status_info_starts_with(
'app',
'dont-match-me',
negate_match=True)
def resolve_units_mocks(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.patch_object(model, 'units_with_wl_status_state')
self.unit1.workload_status_message = 'hook failed: "update-status"'
self.units_with_wl_status_state.return_value = [self.unit1]
self.patch_object(model, 'subprocess')
self.Model_mock.block_until.side_effect = _block_until
def test_resolve_units(self):
self.resolve_units_mocks()
model.resolve_units(wait=False)
self.subprocess.check_output.assert_called_once_with(
['juju', 'resolved', '-m', 'testmodel', 'app/2'])
def test_resolve_units_no_match(self):
self.resolve_units_mocks()
model.resolve_units(application_name='foo', wait=False)
self.assertFalse(self.subprocess.check_output.called)
def test_resolve_units_wait_timeout(self):
self.resolve_units_mocks()
self.unit1.workload_status = 'error'
with self.assertRaises(AsyncTimeoutError):
model.resolve_units(wait=True, timeout=0.1)
self.subprocess.check_output.assert_called_once_with(
['juju', 'resolved', '-m', 'testmodel', 'app/2'])
def test_resolve_units_erred_hook(self):
self.resolve_units_mocks()
model.resolve_units(wait=False, erred_hook='update-status')
self.subprocess.check_output.assert_called_once_with(
['juju', 'resolved', '-m', 'testmodel', 'app/2'])
def test_resolve_units_erred_hook_no_match(self):
self.resolve_units_mocks()
model.resolve_units(erred_hook='foo', wait=False)
self.assertFalse(self.subprocess.check_output.called)
def test_wait_for_agent_status(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.unit1.data = {'agent-status': {'current': 'idle'}}
self.unit2.data = {'agent-status': {'current': 'executing'}}
self.Model.return_value = self.Model_mock
self.Model_mock.block_until.side_effect = _block_until
model.wait_for_agent_status(timeout=0.1)
def test_wait_for_agent_status_timeout(self):
async def _block_until(f, timeout=None):
if not f():
raise AsyncTimeoutError
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.block_until.side_effect = _block_until
with self.assertRaises(AsyncTimeoutError):
model.wait_for_agent_status(timeout=0.1)
def test_upgrade_charm(self):
async def _upgrade_charm(channel=None, force_series=False,
force_units=False, path=None,
resources=None, revision=None,
switch=None, model_name=None):
return
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.patch_object(model, 'get_unit_from_name')
self.get_unit_from_name.return_value = self.unit1
self.Model.return_value = self.Model_mock
app_mock = mock.MagicMock()
app_mock.upgrade_charm.side_effect = _upgrade_charm
self.mymodel.applications['myapp'] = app_mock
model.upgrade_charm(
'myapp',
switch='cs:~me/new-charm-45')
app_mock.upgrade_charm.assert_called_once_with(
channel=None,
force_series=False,
force_units=False,
path=None,
resources=None,
revision=None,
switch='cs:~me/new-charm-45')
def test_get_latest_charm_url(self):
async def _entity(charm_url, channel=None):
return {'Id': 'cs:something-23'}
self.patch_object(model, 'get_juju_model', return_value='mname')
self.patch_object(model, 'Model')
self.Model.return_value = self.Model_mock
self.Model_mock.charmstore.entity.side_effect = _entity
self.assertEqual(
model.get_latest_charm_url('cs:something'),
'cs:something-23')
def test_prepare_series_upgrade(self):
self.patch_object(model, 'subprocess')
self.patch_object(model, 'get_juju_model',
return_value=self.model_name)
_machine_num = "1"
_to_series = "bionic"
model.prepare_series_upgrade(_machine_num, to_series=_to_series)
self.subprocess.check_call.assert_called_once_with(
["juju", "upgrade-series", "-m", self.model_name,
_machine_num, "prepare", _to_series, "--yes"])
def test_complete_series_upgrade(self):
self.patch_object(model, 'get_juju_model',
return_value=self.model_name)
self.patch_object(model, 'subprocess')
_machine_num = "1"
model.complete_series_upgrade(_machine_num)
self.subprocess.check_call.assert_called_once_with(
["juju", "upgrade-series", "-m", self.model_name,
_machine_num, "complete"])
def test_set_series(self):
self.patch_object(model, 'get_juju_model',
return_value=self.model_name)
self.patch_object(model, 'subprocess')
_application = "application"
_to_series = "bionic"
model.set_series(_application, _to_series)
self.subprocess.check_call.assert_called_once_with(
["juju", "set-series", "-m", self.model_name,
_application, _to_series])
def test_attach_resource(self):
self.patch_object(model, 'get_juju_model',
return_value=self.model_name)
self.patch_object(model, 'subprocess')
_application = "application"
_resource_name = "myresource"
_resource_path = "/path/to/{}.tar.gz".format(_resource_name)
model.attach_resource(_application, _resource_name, _resource_path)
self.subprocess.check_call.assert_called_once_with(
["juju", "attach-resource", "-m", self.model_name,
_application, "{}={}".format(_resource_name, _resource_path)])
class AsyncModelTests(aiounittest.AsyncTestCase):
async def test_async_block_until_timeout(self):
async def _f():
return False
async def _g():
return True
with self.assertRaises(AsyncTimeoutError):
await model.async_block_until(_f, _g, timeout=0.1)
async def test_async_block_until_pass(self):
async def _f():
return True
async def _g():
return True
await model.async_block_until(_f, _g, timeout=0.1)
async def test_run_on_machine(self):
with mock.patch.object(
model.generic_utils,
'check_call'
) as check_call:
await model.async_run_on_machine('1', 'test')
check_call.assert_called_once_with(
['juju', 'run', '--machine=1', 'test'])
async def test_run_on_machine_with_timeout(self):
with mock.patch.object(
model.generic_utils,
'check_call'
) as check_call:
await model.async_run_on_machine('1', 'test', timeout='20m')
check_call.assert_called_once_with(
['juju', 'run', '--machine=1', '--timeout=20m', 'test'])
async def test_run_on_machine_with_model(self):
with mock.patch.object(
model.generic_utils,
'check_call'
) as check_call:
await model.async_run_on_machine('1', 'test', model_name='test')
check_call.assert_called_once_with(
['juju', 'run', '--machine=1', '--model=test', 'test'])
async def test_async_get_agent_status(self):
model_mock = mock.MagicMock()
model_mock.applications.__getitem__.return_value = FAKE_STATUS
with mock.patch.object(
model,
'async_get_status',
return_value=model_mock
):
idle = await model.async_get_agent_status('app', 'app/0')
self.assertEqual('idle', idle)
async def test_async_check_if_subordinates_idle(self):
model_mock = mock.MagicMock()
model_mock.applications.__getitem__.return_value = FAKE_STATUS
with mock.patch.object(
model,
'async_get_status',
return_value=model_mock
):
idle = await model.async_check_if_subordinates_idle('app', 'app/0')
assert(idle)
async def test_async_get_agent_status_busy(self):
model_mock = mock.MagicMock()
model_mock.applications.__getitem__.return_value = EXECUTING_STATUS
with mock.patch.object(
model,
'async_get_status',
return_value=model_mock
):
idle = await model.async_get_agent_status('app', 'app/0')
self.assertEqual('executing', idle)
async def test_async_check_if_subordinates_idle_busy(self):
model_mock = mock.MagicMock()
model_mock.applications.__getitem__.return_value = EXECUTING_STATUS
with mock.patch.object(
model,
'async_get_status',
return_value=model_mock
):
idle = await model.async_check_if_subordinates_idle('app', 'app/0')
self.assertFalse(idle)
async def test_async_check_if_subordinates_idle_missing(self):
model_mock = mock.MagicMock()
status = copy.deepcopy(EXECUTING_STATUS)
del(status['units']['app/0']['subordinates'])
model_mock.applications.__getitem__.return_value = status
with mock.patch.object(
model,
'async_get_status',
return_value=model_mock
):
idle = await model.async_check_if_subordinates_idle('app', 'app/0')
assert(idle)
| 39.591467 | 79 | 0.609828 | 8,654 | 74,234 | 4.897735 | 0.060319 | 0.046974 | 0.068326 | 0.081633 | 0.828123 | 0.798443 | 0.76218 | 0.734364 | 0.709426 | 0.687271 | 0 | 0.011207 | 0.281192 | 74,234 | 1,874 | 80 | 39.612593 | 0.783115 | 0.018253 | 0 | 0.68589 | 0 | 0.003067 | 0.152479 | 0.026756 | 0 | 0 | 0 | 0 | 0.088957 | 1 | 0.070552 | false | 0.000614 | 0.006135 | 0.003067 | 0.105521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91ce8c5343ac703be5b34803e48492b2c69aa4a9 | 23 | py | Python | slots/__init__.py | roycoding/slots | 38f76812a296ca024ebf9ad3bc60448f8d12207d | [
"MIT"
] | 81 | 2015-07-01T14:34:22.000Z | 2022-01-11T19:53:27.000Z | slots/__init__.py | Chryzanthemum/slots | 503ca8668f74de93f7924539c117396565b38e3d | [
"MIT"
] | 18 | 2016-08-13T05:38:55.000Z | 2020-01-19T20:51:48.000Z | slots/__init__.py | Chryzanthemum/slots | 503ca8668f74de93f7924539c117396565b38e3d | [
"MIT"
] | 23 | 2016-08-11T00:08:42.000Z | 2021-09-07T09:33:17.000Z | from .slots import MAB
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91cecffda0566ee90d6295205f483359c4ab302b | 1,091 | py | Python | src/data/createdfs.py | josh-gree/debatesnlp | 15bd46793fb826ae63db120c9d395fd71b7e8721 | [
"MIT"
] | null | null | null | src/data/createdfs.py | josh-gree/debatesnlp | 15bd46793fb826ae63db120c9d395fd71b7e8721 | [
"MIT"
] | null | null | null | src/data/createdfs.py | josh-gree/debatesnlp | 15bd46793fb826ae63db120c9d395fd71b7e8721 | [
"MIT"
] | null | null | null | from __future__ import division

import helper_functions as hf

raw_path = '../../data/raw/'
processed_path = '../../processed/'

fnames = ['debate_091016.txt', 'debate_191016.txt']

# First debate: drop the header line and blank lines, parse each tweet,
# pull out the fields of interest, and keep English non-retweet tweets.
with open(raw_path + fnames[0]) as f:
    data = [line for line in f.readlines()[1:] if line != '\n']
# NOTE: each line is a dict literal, so eval() reconstructs it; this is only
# reasonable because the capture files are our own.
data = (eval(d) for d in data)
fields = ['text', 'timestamp_ms', 'geo', 'place', 'lang']
data = ([hf.try_extract(d, f) for f in fields] for d in data)
data = (d for d in data
        if (d[-1] == 'en') and (d[0] is not None) and (d[0][:2] != 'RT'))
for ind, chunk in enumerate(hf.chunker(data, 10000)):
    hf.make_df(chunk, ind, fnames[0])

# Second debate: identical pipeline over the other capture file.
with open(raw_path + fnames[1]) as f:
    data = [line for line in f.readlines()[1:] if line != '\n']
data = (eval(d) for d in data)
fields = ['text', 'timestamp_ms', 'geo', 'place', 'lang']
data = ([hf.try_extract(d, f) for f in fields] for d in data)
data = (d for d in data
        if (d[-1] == 'en') and (d[0] is not None) and (d[0][:2] != 'RT'))
for ind, chunk in enumerate(hf.chunker(data, 10000)):
    hf.make_df(chunk, ind, fnames[1])
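# A possible refactor (sketch only, not part of the original script): the two
# blocks above differ only in the filename, so the pipeline could be wrapped
# in a function. hf.try_extract, hf.chunker and hf.make_df are assumed to
# behave exactly as used above.
#
#     def process_debate(fname):
#         with open(raw_path + fname) as f:
#             lines = [line for line in f.readlines()[1:] if line != '\n']
#         tweets = (eval(line) for line in lines)
#         rows = ([hf.try_extract(t, field) for field in fields]
#                 for t in tweets)
#         rows = (r for r in rows
#                 if r[-1] == 'en' and r[0] is not None and r[0][:2] != 'RT')
#         for ind, chunk in enumerate(hf.chunker(rows, 10000)):
#             hf.make_df(chunk, ind, fname)
#
#     for fname in fnames:
#         process_debate(fname)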
| 31.171429 | 93 | 0.59945 | 191 | 1,091 | 3.335079 | 0.277487 | 0.037677 | 0.056515 | 0.094192 | 0.803768 | 0.737834 | 0.737834 | 0.737834 | 0.737834 | 0.737834 | 0 | 0.041332 | 0.20165 | 1,091 | 34 | 94 | 32.088235 | 0.690011 | 0 | 0 | 0.571429 | 0 | 0 | 0.121907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5303f1c11be59a146331cc268c9f571b1b53d7f5 | 83 | py | Python | backend/microservices/audio-generator/config/routes/__init__.py | MuhamedAbdalla/Automatic-Audio-Book-Based-On-Emotion-Detection | 72130ad037b900461af5be6d80b27ab29c81de5e | [
"MIT"
] | 3 | 2021-04-26T00:17:14.000Z | 2021-07-04T15:30:09.000Z | backend/microservices/audio-generator/config/routes/__init__.py | MuhamedAbdalla/Automatic-Audio-Book-Based-On-Emotion-Detection | 72130ad037b900461af5be6d80b27ab29c81de5e | [
"MIT"
] | null | null | null | backend/microservices/audio-generator/config/routes/__init__.py | MuhamedAbdalla/Automatic-Audio-Book-Based-On-Emotion-Detection | 72130ad037b900461af5be6d80b27ab29c81de5e | [
"MIT"
] | null | null | null | from .headers import *
from .audio_order import *
from .authentication import *
| 20.75 | 30 | 0.746988 | 10 | 83 | 6.1 | 0.6 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180723 | 83 | 3 | 31 | 27.666667 | 0.897059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53266b2e1b5106333de678092ae08f5dadfb9b93 | 3,748 | py | Python | tests/test_wps_raven.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 29 | 2018-08-13T20:16:41.000Z | 2022-03-17T02:31:38.000Z | tests/test_wps_raven.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 359 | 2018-05-31T00:37:53.000Z | 2022-03-26T04:35:43.000Z | tests/test_wps_raven.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 10 | 2019-06-17T18:07:46.000Z | 2022-02-15T02:01:32.000Z | import zipfile

from pywps import Service
from pywps.tests import assert_response_success
from ravenpy.utilities.testdata import get_local_testdata

from raven.processes import RavenProcess

from .common import CFG_FILE, client_for

# Raven configuration file extensions.
cf = ["rvi", "rvp", "rvc", "rvh", "rvt"]


class TestRavenProcess:

    def test_gr4j_salmon_nc(self, tmp_path):
        client = client_for(Service(processes=[RavenProcess()], cfgfiles=CFG_FILE))
        rvs = get_local_testdata("raven-gr4j-cemaneige/raven-gr4j-salmon.rv?")
        conf = tmp_path / "conf.zip"
        # `zf` avoids shadowing the built-in `zip`.
        with zipfile.ZipFile(conf, "w") as zf:
            for rv in rvs:
                zf.write(rv, arcname=rv.name)

        ts = get_local_testdata(
            "raven-gr4j-cemaneige/Salmon-River-Near-Prince-George_meteo_daily.nc"
        )
        datainputs = (
            f"ts=files@xlink:href=file://{ts};"
            f"conf=files@xlink:href=file://{conf}"
        )
        resp = client.get(
            service="WPS",
            request="Execute",
            version="1.0.0",
            identifier="raven",
            datainputs=datainputs,
        )
        assert_response_success(resp)

    def test_hmets(self, tmp_path):
        client = client_for(Service(processes=[RavenProcess()], cfgfiles=CFG_FILE))
        rvs = get_local_testdata("raven-hmets/raven-hmets-salmon.rv?")
        conf = tmp_path / "conf.zip"
        with zipfile.ZipFile(conf, "w") as zf:
            for rv in rvs:
                zf.write(rv, arcname=rv.name)

        ts = get_local_testdata("raven-hmets/Salmon-River-Near-Prince-George_*.rvt")
        datainputs = (
            "ts=files@xlink:href=file://{};"
            "ts=files@xlink:href=file://{};"
            "conf=files@xlink:href=file://{conf}"
        ).format(*ts, conf=conf)
        resp = client.get(
            service="WPS",
            request="Execute",
            version="1.0.0",
            identifier="raven",
            datainputs=datainputs,
        )
        assert_response_success(resp)

    def test_mohyse(self, tmp_path):
        client = client_for(Service(processes=[RavenProcess()], cfgfiles=CFG_FILE))
        rvs = get_local_testdata("raven-mohyse/raven-mohyse-salmon.rv?")
        conf = tmp_path / "conf.zip"
        with zipfile.ZipFile(conf, "w") as zf:
            for rv in rvs:
                zf.write(rv, arcname=rv.name)

        ts = get_local_testdata("raven-mohyse/Salmon-River-Near-Prince-George_*.rvt")
        datainputs = (
            "ts=files@xlink:href=file://{};"
            "ts=files@xlink:href=file://{};"
            "conf=files@xlink:href=file://{conf}"
        ).format(*ts, conf=conf)
        resp = client.get(
            service="WPS",
            request="Execute",
            version="1.0.0",
            identifier="raven",
            datainputs=datainputs,
        )
        assert_response_success(resp)

    def test_hbv_ec(self, tmp_path):
        client = client_for(Service(processes=[RavenProcess()], cfgfiles=CFG_FILE))
        rvs = get_local_testdata("raven-hbv-ec/raven-hbv-ec-salmon.rv?")
        conf = tmp_path / "conf.zip"
        with zipfile.ZipFile(conf, "w") as zf:
            for rv in rvs:
                zf.write(rv, arcname=rv.name)

        ts = get_local_testdata("raven-hbv-ec/Salmon-River-Near-Prince-George_*.rvt")
        datainputs = (
            "ts=files@xlink:href=file://{};"
            "ts=files@xlink:href=file://{};"
            "conf=files@xlink:href=file://{conf}"
        ).format(*ts, conf=conf)
        resp = client.get(
            service="WPS",
            request="Execute",
            version="1.0.0",
            identifier="raven",
            datainputs=datainputs,
        )
        assert_response_success(resp)
| 31.762712 | 86 | 0.573372 | 441 | 3,748 | 4.741497 | 0.172336 | 0.052606 | 0.073649 | 0.094692 | 0.841703 | 0.818269 | 0.78001 | 0.78001 | 0.78001 | 0.78001 | 0 | 0.005981 | 0.286286 | 3,748 | 117 | 87 | 32.034188 | 0.775701 | 0 | 0 | 0.695652 | 0 | 0 | 0.225987 | 0.191035 | 0 | 0 | 0 | 0 | 0.054348 | 1 | 0.043478 | false | 0 | 0.065217 | 0 | 0.119565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
535f7e0ce6b59008b1028435783a395f4320785d | 151 | py | Python | custom_scripts/custom_scripts/doctype/nirmala_settings/test_nirmala_settings.py | VPS-Consultancy/custom_scripts | c812c8fa670c6e3c0e8d94d5ce22638b0daeb522 | [
"MIT"
] | null | null | null | custom_scripts/custom_scripts/doctype/nirmala_settings/test_nirmala_settings.py | VPS-Consultancy/custom_scripts | c812c8fa670c6e3c0e8d94d5ce22638b0daeb522 | [
"MIT"
] | null | null | null | custom_scripts/custom_scripts/doctype/nirmala_settings/test_nirmala_settings.py | VPS-Consultancy/custom_scripts | c812c8fa670c6e3c0e8d94d5ce22638b0daeb522 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, C.R.I.O and Contributors
# See license.txt
# import frappe
import unittest
class TestNirmalaSettings(unittest.TestCase):
    pass
| 16.777778 | 46 | 0.768212 | 21 | 151 | 5.52381 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.139073 | 151 | 8 | 47 | 18.875 | 0.861538 | 0.490066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
725ee20c0a2bb6115d0bde3129e8443d2aa2fd0b | 43 | py | Python | pycmake/platform_specifics/__init__.py | scopatz/PyCMake | 1c047f420a222300b0d1820bea5e5e6e5127c759 | [
"MIT"
] | 1 | 2018-11-30T14:41:50.000Z | 2018-11-30T14:41:50.000Z | pycmake/platform_specifics/__init__.py | scopatz/PyCMake | 1c047f420a222300b0d1820bea5e5e6e5127c759 | [
"MIT"
] | null | null | null | pycmake/platform_specifics/__init__.py | scopatz/PyCMake | 1c047f420a222300b0d1820bea5e5e6e5127c759 | [
"MIT"
] | null | null | null | from .platform_factory import get_platform
| 21.5 | 42 | 0.883721 | 6 | 43 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7292b05c0c32309c92d2d6478c8a97b0d067663c | 1,346 | py | Python | LibTerm/site-packages/system/dummyobjc_util.py | hackingking123/LibTerm | 08e7e1de1533cfa5aa9097f3d644a69274ae6e6c | [
"MIT"
] | 510 | 2018-05-17T15:28:03.000Z | 2022-03-27T12:34:11.000Z | LibTerm/site-packages/system/dummyobjc_util.py | hackingking123/LibTerm | 08e7e1de1533cfa5aa9097f3d644a69274ae6e6c | [
"MIT"
] | 88 | 2018-11-13T18:12:38.000Z | 2022-02-20T07:01:42.000Z | LibTerm/site-packages/system/dummyobjc_util.py | hackingking123/LibTerm | 08e7e1de1533cfa5aa9097f3d644a69274ae6e6c | [
"MIT"
] | 117 | 2018-11-23T09:35:28.000Z | 2022-03-28T09:22:20.000Z |
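# No-op stand-ins for Pythonista's objc_util API, so code written against the
# Objective-C bridge can still be imported where the bridge is unavailable
# (a dummy module, as the filename suggests). Illustrative behaviour: every
# attribute access or call yields another inert object, and the UIColor
# constructors simply return None:
#
#     UIColor.redColor()   # -> None
#     ns('some text')      # -> an inert ObjCInstance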
class ObjCClass(object):
    def __init__(self, *args, **kwargs):
        pass

    def __call__(self, *args, **kwargs):
        return ObjCClass()

    def __getattr__(self, item):
        return ObjCClass()


class ObjCInstance(ObjCClass):
    pass


class UIColor(ObjCClass):
    @classmethod
    def blackColor(cls):
        pass

    @classmethod
    def redColor(cls):
        pass

    @classmethod
    def greenColor(cls):
        pass

    @classmethod
    def brownColor(cls):
        pass

    @classmethod
    def blueColor(cls):
        pass

    @classmethod
    def magentaColor(cls):
        pass

    @classmethod
    def cyanColor(cls):
        pass

    @classmethod
    def whiteColor(cls):
        pass

    @classmethod
    def yellowColor(cls):
        pass

    @classmethod
    def colorWithRed_green_blue_alpha_(cls, *args, **kwargs):
        pass


class NSRange(ObjCClass):
    pass


def create_objc_class(*args, **kwargs):
    return ObjCClass()


def ns(*args, **kwargs):
    return ObjCInstance()


def on_main_thread(func):
    return func


class ctypes(object):
    # Mirrors the ctypes surface that objc_util callers use:
    # ctypes.pythonapi.PyThreadState_SetAsyncExc, ctypes.c_long
    # and ctypes.py_object.
    class pythonapi(object):
        @staticmethod
        def PyThreadState_SetAsyncExc(tid, exectype):
            return 1

    @staticmethod
    def c_long(val):
        return val

    @staticmethod
    def py_object(val):
        return val
| 15.295455 | 61 | 0.604755 | 137 | 1,346 | 5.773723 | 0.364964 | 0.176991 | 0.204804 | 0.238938 | 0.070796 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001068 | 0.304606 | 1,346 | 87 | 62 | 15.471264 | 0.844017 | 0 | 0 | 0.525424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.322034 | false | 0.220339 | 0 | 0.135593 | 0.559322 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
72d1363adb05bbd481df9424cbc43c06b16cb0a0 | 101 | py | Python | api/__init__.py | foamliu/i-Cloud | c5eb0a22c1c0c78d5195d4f62237fd6c2b5e6a32 | [
"MIT"
] | 1 | 2020-02-27T07:46:24.000Z | 2020-02-27T07:46:24.000Z | api/__init__.py | foamliu/i-Cloud | c5eb0a22c1c0c78d5195d4f62237fd6c2b5e6a32 | [
"MIT"
] | null | null | null | api/__init__.py | foamliu/i-Cloud | c5eb0a22c1c0c78d5195d4f62237fd6c2b5e6a32 | [
"MIT"
] | 2 | 2019-04-25T22:56:41.000Z | 2019-07-01T21:12:21.000Z | from flask import Blueprint
api = Blueprint('api', __name__)
# Route modules are imported after the blueprint exists so their view
# functions can attach themselves to `api` without a circular import.
from . import faces, ar, matches, asr  # noqa: E402,F401
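# How this blueprint would typically be mounted on an application (a sketch;
# the application factory lives outside this package):
#
#     from flask import Flask
#     from api import api
#
#     app = Flask(__name__)
#     app.register_blueprint(api, url_prefix='/api')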
| 16.833333 | 37 | 0.732673 | 14 | 101 | 5 | 0.714286 | 0.342857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168317 | 101 | 5 | 38 | 20.2 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
72e6c547ed1def2ac56529f3dc9b4eddfdb96e65 | 2,156 | py | Python | tests/cupy_tests/manipulation_tests/test_split.py | andy6975/cupy | 34b388e4a4fe7c59092b4d4c9c96b2f307e49e46 | [
"MIT"
] | 1 | 2019-12-01T09:08:14.000Z | 2019-12-01T09:08:14.000Z | tests/cupy_tests/manipulation_tests/test_split.py | hephaex/cupy | 5cf50a93bbdebe825337ed7996c464e84b1495ba | [
"MIT"
] | 1 | 2019-08-05T09:36:13.000Z | 2019-08-06T12:03:01.000Z | tests/cupy_tests/manipulation_tests/test_split.py | hephaex/cupy | 5cf50a93bbdebe825337ed7996c464e84b1495ba | [
"MIT"
] | 1 | 2019-12-01T09:08:14.000Z | 2019-12-01T09:08:14.000Z | import unittest

from cupy import testing


@testing.gpu
class TestSplit(unittest.TestCase):

    @testing.numpy_cupy_array_list_equal()
    def test_array_split1(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.array_split(a, 4, 1)

    @testing.numpy_cupy_array_list_equal()
    def test_array_split2(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.array_split(a, 4, -1)

    @testing.with_requires('numpy>=1.11')
    @testing.numpy_cupy_array_list_equal()
    def test_array_split_empty_array(self, xp):
        a = testing.shaped_arange((5, 0), xp)
        return xp.array_split(a, [2, 4], 0)

    @testing.numpy_cupy_array_list_equal()
    def test_array_split_empty_sections(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.array_split(a, [])

    @testing.numpy_cupy_array_list_equal()
    def test_array_split_non_divisible(self, xp):
        a = testing.shaped_arange((5, 3), xp)
        return xp.array_split(a, 4)

    @testing.numpy_cupy_array_list_equal()
    def test_dsplit(self, xp):
        a = testing.shaped_arange((3, 3, 12), xp)
        return xp.dsplit(a, 4)

    @testing.numpy_cupy_array_list_equal()
    def test_hsplit_vectors(self, xp):
        a = testing.shaped_arange((12,), xp)
        return xp.hsplit(a, 4)

    @testing.numpy_cupy_array_list_equal()
    def test_hsplit(self, xp):
        a = testing.shaped_arange((3, 12), xp)
        return xp.hsplit(a, 4)

    @testing.numpy_cupy_array_list_equal()
    def test_split_by_sections1(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.split(a, (2, 4, 9), 1)

    @testing.numpy_cupy_array_list_equal()
    def test_split_by_sections2(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.split(a, (2, 4, 9), -1)

    @testing.numpy_cupy_array_list_equal()
    def test_split_by_sections3(self, xp):
        a = testing.shaped_arange((3, 11), xp)
        return xp.split(a, (-9, 4, -2), 1)

    @testing.numpy_cupy_array_list_equal()
    def test_vsplit(self, xp):
        a = testing.shaped_arange((12, 3), xp)
        return xp.vsplit(a, 4)
| 31.246377 | 50 | 0.648887 | 329 | 2,156 | 3.960486 | 0.142857 | 0.079816 | 0.147352 | 0.1934 | 0.838066 | 0.828089 | 0.811972 | 0.672295 | 0.672295 | 0.584037 | 0 | 0.039239 | 0.219852 | 2,156 | 68 | 51 | 31.705882 | 0.735434 | 0 | 0 | 0.377358 | 0 | 0 | 0.005102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.226415 | false | 0 | 0.037736 | 0 | 0.509434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
72ef376becb502d6c44fa1c765542dbb8a4bc052 | 46 | py | Python | pettingzoo/classic/mahjong_v4.py | RedTachyon/PettingZoo | 0c4be0ca0de5a11bf8eff3f7b87976edcacd093e | [
"Apache-2.0"
] | 846 | 2020-05-12T05:55:00.000Z | 2021-10-08T19:38:40.000Z | pettingzoo/classic/mahjong_v4.py | RedTachyon/PettingZoo | 0c4be0ca0de5a11bf8eff3f7b87976edcacd093e | [
"Apache-2.0"
] | 237 | 2020-04-27T06:01:39.000Z | 2021-10-13T02:55:54.000Z | pettingzoo/classic/mahjong_v4.py | RedTachyon/PettingZoo | 0c4be0ca0de5a11bf8eff3f7b87976edcacd093e | [
"Apache-2.0"
] | 126 | 2020-05-29T04:20:29.000Z | 2021-10-13T05:31:12.000Z | from .rlcard_envs.mahjong import env, raw_env
| 23 | 45 | 0.826087 | 8 | 46 | 4.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4652cc7988f565415b115f570a760a84bf40499 | 31 | py | Python | maps/spillway/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | maps/spillway/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | maps/spillway/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | from .spillway import Spillway
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f46f8ca35be8b0b25451d64befa8ecdef3e8e634 | 39 | py | Python | demo/models/__init__.py | jeanq1/sign-segmentation | cbf1203b06e82e75e06b96a430dab08da3a46f7b | [
"MIT"
] | 17 | 2021-06-08T07:53:36.000Z | 2022-03-27T02:57:50.000Z | demo/models/__init__.py | jeanq1/sign-segmentation | cbf1203b06e82e75e06b96a430dab08da3a46f7b | [
"MIT"
] | 5 | 2021-07-15T09:41:08.000Z | 2022-01-13T14:53:10.000Z | demo/models/__init__.py | jeanq1/sign-segmentation | cbf1203b06e82e75e06b96a430dab08da3a46f7b | [
"MIT"
] | 18 | 2021-06-08T15:22:09.000Z | 2022-02-21T19:06:52.000Z | from .i3d import *
from .mstcn import * | 19.5 | 20 | 0.717949 | 6 | 39 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.179487 | 39 | 2 | 20 | 19.5 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe20e9d30e48c5006695e29e265a3bd25fd657c3 | 3,250 | py | Python | tensorflow_constrained_optimization/__init__.py | neelguha/tensorflow_constrained_optimization | 46b34d1c2d6ec05ea1e46db3bcc481a81e041637 | [
"Apache-2.0"
] | null | null | null | tensorflow_constrained_optimization/__init__.py | neelguha/tensorflow_constrained_optimization | 46b34d1c2d6ec05ea1e46db3bcc481a81e041637 | [
"Apache-2.0"
] | null | null | null | tensorflow_constrained_optimization/__init__.py | neelguha/tensorflow_constrained_optimization | 46b34d1c2d6ec05ea1e46db3bcc481a81e041637 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 The TensorFlow Constrained Optimization Authors. All Rights
# Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
# ==============================================================================
"""A library for performing constrained optimization in TensorFlow."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_constrained_optimization.python.candidates import find_best_candidate_distribution
from tensorflow_constrained_optimization.python.candidates import find_best_candidate_index
from tensorflow_constrained_optimization.python.constrained_minimization_problem import ConstrainedMinimizationProblem
from tensorflow_constrained_optimization.python.rates.binary_rates import accuracy_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import error_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import false_negative_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import false_positive_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import negative_prediction_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import positive_prediction_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import recall
from tensorflow_constrained_optimization.python.rates.binary_rates import roc_auc_lower_bound
from tensorflow_constrained_optimization.python.rates.binary_rates import roc_auc_upper_bound
from tensorflow_constrained_optimization.python.rates.binary_rates import true_negative_rate
from tensorflow_constrained_optimization.python.rates.binary_rates import true_positive_rate
from tensorflow_constrained_optimization.python.rates.loss import HingeLoss
from tensorflow_constrained_optimization.python.rates.loss import ZeroOneLoss
from tensorflow_constrained_optimization.python.rates.operations import lower_bound
from tensorflow_constrained_optimization.python.rates.operations import upper_bound
from tensorflow_constrained_optimization.python.rates.operations import wrap_rate
from tensorflow_constrained_optimization.python.rates.rate_minimization_problem import RateMinimizationProblem
from tensorflow_constrained_optimization.python.rates.subsettable_context import rate_context
from tensorflow_constrained_optimization.python.rates.subsettable_context import split_rate_context
from tensorflow_constrained_optimization.python.train.constrained_optimizer import ConstrainedOptimizer
from tensorflow_constrained_optimization.python.train.lagrangian_optimizer import LagrangianOptimizer
from tensorflow_constrained_optimization.python.train.proxy_lagrangian_optimizer import ProxyLagrangianOptimizer
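# A minimal usage sketch of the exported API (illustrative only; `predictions`,
# `labels` and the 0.1 threshold are placeholders, and the training-loop
# details depend on the TensorFlow version in use):
#
#     context = rate_context(predictions, labels)
#     problem = RateMinimizationProblem(
#         error_rate(context),
#         [false_positive_rate(context) <= 0.1])
#     optimizer = LagrangianOptimizer(optimizer=tf.train.AdamOptimizer(0.01))
#     train_op = optimizer.minimize(problem)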
| 69.148936 | 118 | 0.868308 | 393 | 3,250 | 6.882952 | 0.302799 | 0.229575 | 0.31719 | 0.341959 | 0.641035 | 0.625139 | 0.5878 | 0.544547 | 0.463216 | 0.334935 | 0 | 0.002649 | 0.070769 | 3,250 | 46 | 119 | 70.652174 | 0.893046 | 0.231077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.035714 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe6c69c4ecfed9a4858e05832e8fb659a1f2775c | 25,256 | py | Python | src/nisemidevicecontrol.py | ni/ni-semi-msms | f92eb79fd4f6a5854e728de23620958da95925c7 | [
"MIT"
] | null | null | null | src/nisemidevicecontrol.py | ni/ni-semi-msms | f92eb79fd4f6a5854e728de23620958da95925c7 | [
"MIT"
] | 1 | 2020-12-14T22:51:36.000Z | 2020-12-14T22:51:36.000Z | src/nisemidevicecontrol.py | ni/ni-semi-msms | f92eb79fd4f6a5854e728de23620958da95925c7 | [
"MIT"
] | 2 | 2021-04-01T16:18:55.000Z | 2021-05-25T13:34:11.000Z | import sys
import os
import clr
device_control_path = ("C:\\Program Files\\National Instruments\\"
                       "Semi Device Control")
sys.path.append(os.path.dirname(os.getcwd()))
sys.path.append(device_control_path)
clr.AddReference("SemiconductorDeviceControl")
from SemiconductorDeviceControl import SemiDeviceControlMain # noqa:E402
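# Typical lifecycle of the wrapper below (a sketch; the export path and
# register UID are placeholders for a real Instrument Studio export):
#
#     dc = SemiconductorDeviceControl(r"C:\path\to\is_export")
#     dc.start()
#     try:
#         dc.write_register_by_name_device('Device-Group-Register', 0x1F)
#         value = dc.read_register_by_name_device('Device-Group-Register')
#     finally:
#         dc.stop()
#         dc.destroy()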
class SemiconductorDeviceControl:
    def __init__(self, ISconfigpath):
        '''
        Creates and returns a device control session using the Instrument
        Studio export configuration. The IS export configuration contains the
        register map and hardware configuration for Device Control.

        Arguments:
            ISconfigpath {path}
        '''
        self.semidevicecontrol_main = None
        self.semidevicecontrol_session = None
        try:
            self.semidevicecontrol_main = SemiDeviceControlMain()
            self.semidevicecontrol_session = (
                self.semidevicecontrol_main.CreateSemiDeviceControlSession(
                    ISconfigpath)
            )
        except Exception as e:
            print("Exception in accessing conf: {}".format(e))
            raise e

    def start(self):
        '''
        Starts the instrument/hardware sessions configured for the device
        control, through the IS export configuration.
        '''
        try:
            self.semidevicecontrol_session.Start()
        except Exception as e:
            print("Exception in start(): {}".format(e))
            raise e

    def stop(self):
        '''
        Stops the instrument/hardware sessions configured for the
        device control.
        '''
        try:
            self.semidevicecontrol_session.Stop()
        except Exception as e:
            print("Exception in stop(): {}".format(e))
            raise e

    def destroy(self):
        '''
        Destroys the device control session and deallocates the reserved
        reference and data in memory.
        '''
        try:
            self.semidevicecontrol_main.DestroySemiDeviceControlSession(
                self.semidevicecontrol_session)
        except Exception as e:
            print("Exception in destroying the session: {}".format(e))
            raise e

    # --------------------------- Register Device ---------------------------
    def write_register_by_name_device(self, register_uid, register_data):
        '''
        Writes the data to the device using the register unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>

        Arguments:
            register_uid {string}
            register_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteRegisterByName_Device(
                register_uid, register_data
            )
        except Exception as e:
            print("Exception in write_register_by_name_device(): "
                  "{}".format(e))
            raise e

    def write_multi_register_by_name_device(
            self, register_uid_list, register_data_list):
        '''
        Writes data to multiple registers on the device using the register
        unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>
        For each Register UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            register_uid_list {string[]}
            register_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleRegistersByName_Device(
                register_uid_list, register_data_list
            )
        except Exception as e:
            print("Exception in write_multi_register_by_name_device(): "
                  "{}".format(e))
            raise e

    def write_register_by_address_device(
            self, ip_block_name, register_address, register_data):
        '''
        Writes the data to the device using the register address and
        IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map

        Arguments:
            ip_block_name {string}
            register_address {long}
            register_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteRegisterByAddress_Device(
                ip_block_name, register_address, register_data
            )
        except Exception as e:
            print("Exception in write_register_by_address_device(): "
                  "{}".format(e))
            raise e

    def write_multi_register_by_address_device(
            self, ip_block_name_list,
            register_address_list,
            register_data_list):
        '''
        Writes data to multiple registers on the device, using the register
        address and IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map
        For each Register address & IP block element, the corresponding element
        from the data array will be applied.

        Arguments:
            ip_block_name_list {string[]}
            register_address_list {long[]}
            register_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleRegistersByAddress_Device(
                ip_block_name_list, register_address_list, register_data_list
            )
        except Exception as e:
            print("Exception in write_multi_register_by_address_device(): "
                  "{}".format(e))
            raise e

    def read_register_by_name_device(self, register_uid):
        '''
        Reads the data from the device using the register unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>

        Arguments:
            register_uid {string}
        Returns:
            register_data {long}
        '''
        try:
            register_data = (
                self.semidevicecontrol_session.ReadRegisterByName_Device(
                    register_uid)
            )
            return register_data
        except Exception as e:
            print("Exception in read_register_by_name_device(): "
                  "{}".format(e))
            raise e

    def read_multi_register_by_name_device(self, register_uid_list):
        '''
        Reads data from multiple registers on the device using the register
        unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>
        For each Register UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            register_uid_list {string[]}
        Returns:
            register_data_list {long[]}
        '''
        try:
            register_data_list = (
                self.semidevicecontrol_session.ReadMultipleRegistersByName_Device(
                    register_uid_list)
            )
            return register_data_list
        except Exception as e:
            print("Exception in read_multi_register_by_name_device(): "
                  "{}".format(e))
            raise e

    def read_register_by_address_device(self, ip_block_name, register_address):
        '''
        Reads the data from the device using the register address and
        IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map

        Arguments:
            ip_block_name {string}
            register_address {long}
        Returns:
            register_data {long}
        '''
        try:
            register_data = (
                self.semidevicecontrol_session.ReadRegisterByAddress_Device(
                    ip_block_name, register_address)
            )
            return register_data
        except Exception as e:
            print("Exception in read_register_by_address_device(): "
                  "{}".format(e))
            raise e

    def read_multi_register_by_address_device(
            self, ip_block_name_list, register_address_list):
        '''
        Reads data from multiple registers on the device using the register
        address and IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map
        For each Register address & IP block element, the corresponding
        element from the data array will be applied.

        Arguments:
            ip_block_name_list {string[]}
            register_address_list {long[]}
        Returns:
            register_data_list {long[]}
        '''
        try:
            register_data_list = (
                self.semidevicecontrol_session.ReadMultipleRegistersByAddress_Device(
                    ip_block_name_list, register_address_list)
            )
            return register_data_list
        except Exception as e:
            print("Exception in read_multi_register_by_address_device(): "
                  "{}".format(e))
            raise e
    # --------------------------- Register Device ---------------------------
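    # Multi-register access (a sketch; the UIDs are placeholders following
    # the <IP block/Device name>-<Register group>-<Register name> convention):
    #
    #     dc.write_multi_register_by_name_device(
    #         ['Dev-Grp-RegA', 'Dev-Grp-RegB'], [0x01, 0x02])
    #     values = dc.read_multi_register_by_name_device(
    #         ['Dev-Grp-RegA', 'Dev-Grp-RegB'])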
    # ---------------------------- Field Device -----------------------------
    def write_field_by_name_device(self, field_uid, field_data):
        '''
        Writes the data to the device using the field unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>

        Arguments:
            field_uid {string}
            field_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteFieldByName_Device(
                field_uid, field_data
            )
        except Exception as e:
            print("Exception in write_field_by_name_device(): {}".format(e))
            raise e

    def write_multi_field_by_name_device(
            self, field_uid_list, field_data_list):
        '''
        Writes data to multiple fields on the device using the field unique
        name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        For each Field UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            field_uid_list {string[]}
            field_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleFieldsByName_Device(
                field_uid_list, field_data_list
            )
        except Exception as e:
            print("Exception in write_multi_field_by_name_device(): "
                  "{}".format(e))
            raise e

    def write_field_by_value_definition_device(
            self, field_uid, value_definition):
        '''
        Writes the data to the device using the field unique name and
        field value definition.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        Value Definition: This is defined in the register map for each field.
        Each value of the field can contain a definition string
        that represents the value.

        Arguments:
            field_uid {string}
            value_definition {string}
        '''
        try:
            self.semidevicecontrol_session.WriteFieldByValueDefinition_Device(
                field_uid, value_definition
            )
        except Exception as e:
            print("Exception in write_field_by_value_definition_device(): "
                  "{}".format(e))
            raise e

    def read_field_by_name_device(self, field_uid):
        '''
        Reads the data from the device using the field unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>

        Arguments:
            field_uid {string}
        Returns:
            field_data {long}
        '''
        try:
            field_data = self.semidevicecontrol_session.ReadFieldByName_Device(
                field_uid
            )
            return field_data
        except Exception as e:
            print("Exception in read_field_by_name_device(): {}".format(e))
            raise e

    def read_multi_field_by_name_device(self, field_uid_list):
        '''
        Reads data from multiple fields on the device using the field unique
        name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        For each Field UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            field_uid_list {string[]}
        Returns:
            field_data_list {long[]}
        '''
        try:
            field_data_list = (
                self.semidevicecontrol_session.ReadMultipleFieldsByName_Device(
                    field_uid_list)
            )
            return field_data_list
        except Exception as e:
            print("Exception in read_multi_field_by_name_device(): "
                  "{}".format(e))
            raise e
    # ---------------------------- Field Device -----------------------------
    # --------------------------- Register Cache ----------------------------
    def write_register_by_name_cache(self, register_uid, register_data):
        '''
        Writes the data to the cache using the register unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>

        Arguments:
            register_uid {string}
            register_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteRegisterByName_Cache(
                register_uid, register_data
            )
        except Exception as e:
            print("Exception in write_register_by_name_cache(): "
                  "{}".format(e))
            raise e

    def write_multi_register_by_name_cache(
            self, register_uid_list, register_data_list):
        '''
        Writes data to multiple registers on the cache using the register
        unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>
        For each Register UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            register_uid_list {string[]}
            register_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleRegistersByName_Cache(
                register_uid_list, register_data_list
            )
        except Exception as e:
            print("Exception in write_multi_register_by_name_cache(): "
                  "{}".format(e))
            raise e

    def write_register_by_address_cache(
            self, ip_block_name, register_address, register_data):
        '''
        Writes the data to the cache using the register address and
        IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map

        Arguments:
            ip_block_name {string}
            register_address {long}
            register_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteRegisterByAddress_Cache(
                ip_block_name, register_address,
                register_data)
        except Exception as e:
            print("Exception in write_register_by_address_cache(): "
                  "{}".format(e))
            raise e

    def write_multi_register_by_address_cache(
            self, ip_block_name_list,
            register_address_list,
            register_data_list):
        '''
        Writes data to multiple registers on the cache using the register
        address and IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map
        For each Register address & IP block element, the corresponding
        element from the data array will be applied.

        Arguments:
            ip_block_name_list {string[]}
            register_address_list {long[]}
            register_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleRegistersByAddress_Cache(
                ip_block_name_list, register_address_list, register_data_list
            )
        except Exception as e:
            print("Exception in write_multi_register_by_address_cache(): "
                  "{}".format(e))
            raise e

    def read_register_by_name_cache(self, register_uid):
        '''
        Reads the data from the cache using the register unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>

        Arguments:
            register_uid {string}
        Returns:
            register_data {long}
        '''
        try:
            register_data = (
                self.semidevicecontrol_session.ReadRegisterByName_Cache(
                    register_uid)
            )
            return register_data
        except Exception as e:
            print("Exception in read_register_by_name_cache(): {}".format(e))
            raise e

    def read_multi_register_by_name_cache(self, register_uid_list):
        '''
        Reads data from multiple registers on the cache using the register
        unique name.
        Register UID: Unique name for the register in the format
        <IP block/Device name>-<Register group>-<Register name>
        For each Register UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            register_uid_list {string[]}
        Returns:
            register_data_list {long[]}
        '''
        try:
            register_data_list = (
                self.semidevicecontrol_session.ReadMultipleRegistersByName_Cache(
                    register_uid_list)
            )
            return register_data_list
        except Exception as e:
            print("Exception in read_multi_register_by_name_cache(): "
                  "{}".format(e))
            raise e

    def read_register_by_address_cache(self, ip_block_name, register_address):
        '''
        Reads the data from the cache using the register address and
        IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map

        Arguments:
            ip_block_name {string}
            register_address {long}
        Returns:
            register_data {long}
        '''
        try:
            register_data = (
                self.semidevicecontrol_session.ReadRegisterByAddress_Cache(
                    ip_block_name, register_address)
            )
            return register_data
        except Exception as e:
            print("Exception in read_register_by_address_cache(): "
                  "{}".format(e))
            raise e

    def read_multi_register_by_address_cache(
            self, ip_block_name_list, register_address_list):
        '''
        Reads data from multiple registers on the cache using the register
        address and IP block name.
        Register address: Address of the register from the register map
        IP block name: Name of the IP block or Device from the register map
        For each Register address & IP block element, the corresponding
        element from the data array will be applied.

        Arguments:
            ip_block_name_list {string[]}
            register_address_list {long[]}
        Returns:
            register_data_list {long[]}
        '''
        try:
            # NOTE: unlike the other cache methods, the underlying call has
            # no _Cache suffix in the original source; kept as-is.
            register_data_list = (
                self.semidevicecontrol_session.ReadMultipleRegistersByAddress(
                    ip_block_name_list, register_address_list)
            )
            return register_data_list
        except Exception as e:
            print("Exception in read_multi_register_by_address_cache(): "
                  "{}".format(e))
            raise e
    # --------------------------- Register Cache ----------------------------
    # ----------------------------- Field Cache -----------------------------
    def write_field_by_name_cache(self, field_uid, field_data):
        '''
        Writes the data to the cache using the field unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>

        Arguments:
            field_uid {string}
            field_data {long}
        '''
        try:
            self.semidevicecontrol_session.WriteFieldByName_Cache(
                field_uid, field_data
            )
        except Exception as e:
            print("Exception in write_field_by_name_cache(): {}".format(e))
            raise e

    def write_multi_field_by_name_cache(
            self, field_uid_list, field_data_list):
        '''
        Writes data to multiple fields on the cache using the field
        unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        For each Field UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            field_uid_list {string[]}
            field_data_list {long[]}
        '''
        try:
            self.semidevicecontrol_session.WriteMultipleFieldsByName_Cache(
                field_uid_list, field_data_list
            )
        except Exception as e:
            print("Exception in write_multi_field_by_name_cache(): "
                  "{}".format(e))
            raise e

    def write_field_by_value_definition_cache(
            self, field_uid, value_definition):
        '''
        Writes the data to the cache using the field unique name and
        field value definition.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        Value Definition: This is defined in the register map for each field.
        Each value of the field can contain a definition string
        that represents the value.

        Arguments:
            field_uid {string}
            value_definition {string}
        '''
        try:
            self.semidevicecontrol_session.WriteFieldByValueDefinition_Cache(
                field_uid, value_definition
            )
        except Exception as e:
            print("Exception in write_field_by_value_definition_cache(): "
                  "{}".format(e))
            raise e

    def read_field_by_name_cache(self, field_uid):
        '''
        Reads the data from the cache using the field unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>

        Arguments:
            field_uid {string}
        Returns:
            field_data {long}
        '''
        try:
            field_data = self.semidevicecontrol_session.ReadFieldByName_Cache(
                field_uid
            )
            return field_data
        except Exception as e:
            print("Exception in read_field_by_name_cache(): {}".format(e))
            raise e

    def read_multi_field_by_name_cache(self, field_uid_list):
        '''
        Reads data from multiple fields on the cache using the field
        unique name.
        Field UID: Unique name for the field in the format
        <IP block/Device name>-<Register group>-<Field name>
        For each Field UID element, the corresponding element from the data
        array will be applied.

        Arguments:
            field_uid_list {string[]}
        Returns:
            field_data_list {long[]}
        '''
        try:
            field_data_list = (
                self.semidevicecontrol_session.ReadMultipleFieldsByName_Cache(
                    field_uid_list)
            )
            return field_data_list
        except Exception as e:
            print("Exception in read_multi_field_by_name_cache(): "
                  "{}".format(e))
            raise e
    # ----------------------------- Field Cache -----------------------------
    # -------------------------------- Cache --------------------------------
    def write_from_cache_to_device(self):
        '''
        Writes all the cached register data to the device, in the order it is
        stored in the cache memory. The cache is cleared automatically after
        this operation.
        '''
        try:
            self.semidevicecontrol_session.WriteFromCacheToDevice()
        except Exception as e:
            print("Exception in writing software cache to hardware: "
                  "{}".format(e))
            raise e

    def clear_cache(self):
        '''
        Clears all the cached register data from the device control session.
        '''
        try:
            self.semidevicecontrol_session.ClearCache()
        except Exception as e:
            print("Exception in flushing the software cache: {}".format(e))
            raise e
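    # Batching writes through the cache (a sketch; the register UIDs are
    # placeholders). Writes are staged in software, then applied to the
    # device in one pass:
    #
    #     dc.write_register_by_name_cache('Dev-Grp-RegA', 0x01)
    #     dc.write_register_by_name_cache('Dev-Grp-RegB', 0x02)
    #     dc.write_from_cache_to_device()  # cache is auto-cleared afterwards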
    # --------------------------------- DIO ---------------------------------
    def read_pin_state(self, pin_name):
        '''
        Reads the pin state (High / Low / Terminate) using the pin name
        defined in the register map.

        Arguments:
            pin_name {string}
        Returns:
            pin_state {long}
                Pin state values:
                2 - Terminate
                1 - High
                0 - Low
        '''
        try:
            pin_state = self.semidevicecontrol_session.ReadPinState(
                pin_name
            )
            return pin_state
        except Exception as e:
            print("Exception occurred at read pin state: {}".format(e))
            raise e

    def write_pin_state(self, pin_name, pin_state):
        '''
        Puts the pin into the High / Low / Terminate state using the pin name
        defined in the register map.

        Arguments:
            pin_name {string}
            pin_state {long}
                Pin state values:
                2 - Terminate
                1 - High
                0 - Low
        '''
        try:
            self.semidevicecontrol_session.WritePinState(
                pin_name, pin_state
            )
        except Exception as e:
            print("Exception occurred at write pin state: {}".format(e))
            raise e
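    # Pin-state round trip (a sketch; 'RESET' is a placeholder pin name from
    # a hypothetical register map):
    #
    #     dc.write_pin_state('RESET', 1)           # drive the pin high
    #     assert dc.read_pin_state('RESET') == 1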
    # -------------------------------- UTILS --------------------------------
    def get_logs(self):
        '''
        Gets the log details.

        Returns:
            logs {2d array of strings}
        '''
        try:
            logs = self.semidevicecontrol_session.GetLogs()
            return logs
        except Exception as e:
            print("Exception occurred at log function: {}".format(e))
            raise e
| 29.747939 | 85 | 0.574715 | 2,678 | 25,256 | 5.23301 | 0.06348 | 0.034965 | 0.031397 | 0.044955 | 0.859712 | 0.848009 | 0.846439 | 0.821964 | 0.816255 | 0.800985 | 0 | 0.000605 | 0.345462 | 25,256 | 848 | 86 | 29.783019 | 0.847136 | 0.419069 | 0 | 0.597523 | 0 | 0 | 0.031502 | 0.002161 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108359 | false | 0 | 0.012384 | 0 | 0.167183 | 0.108359 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe8af943bcb55e3dd0d209c69944017c97ab5e0e | 8,281 | py | Python | hvac/tests/unit_tests/api/auth/test_github.py | tiny-dancer/hvac | 76247fbe0f3ae8e6e249f2d83ee6978513526c29 | [
"Apache-2.0"
] | null | null | null | hvac/tests/unit_tests/api/auth/test_github.py | tiny-dancer/hvac | 76247fbe0f3ae8e6e249f2d83ee6978513526c29 | [
"Apache-2.0"
] | null | null | null | hvac/tests/unit_tests/api/auth/test_github.py | tiny-dancer/hvac | 76247fbe0f3ae8e6e249f2d83ee6978513526c29 | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase

import requests_mock
from parameterized import parameterized

from hvac.adapters import Request
from hvac.api.auth import Github
from hvac.api.auth.github import DEFAULT_MOUNT_POINT


class TestGithub(TestCase):

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_configure(self, test_label, mount_point, requests_mocker):
        expected_status_code = 204
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/config'.format(
            mount_point=mount_point,
        )
        requests_mocker.register_uri(
            method='POST',
            url=mock_url,
            status_code=expected_status_code,
        )
        github = Github(adapter=Request())
        response = github.configure(
            organization='hvac',
            mount_point=mount_point,
        )
        self.assertEqual(
            first=expected_status_code,
            second=response.status_code,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_read_configuration(self, test_label, mount_point, requests_mocker):
        expected_status_code = 200
        mock_response = {
            'auth': None,
            'data': {
                'base_url': '',
                'max_ttl': 0,
                'organization': '',
                'ttl': 0
            },
            'lease_duration': 0,
            'lease_id': '',
            'renewable': False,
            'request_id': '860a11a8-b835-cbab-7fce-de4edc4cf533',
            'warnings': None,
            'wrap_info': None
        }
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/config'.format(
            mount_point=mount_point,
        )
        requests_mocker.register_uri(
            method='GET',
            url=mock_url,
            status_code=expected_status_code,
            json=mock_response,
        )
        github = Github(adapter=Request())
        response = github.read_configuration(
            mount_point=mount_point,
        )
        self.assertEqual(
            first=mock_response,
            second=response,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_map_team(self, test_label, mount_point, requests_mocker):
        expected_status_code = 204
        team_name = 'hvac'
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/map/teams/{team_name}'.format(
            mount_point=mount_point,
            team_name=team_name,
        )
        requests_mocker.register_uri(
            method='POST',
            url=mock_url,
            status_code=expected_status_code,
        )
        github = Github(adapter=Request())
        response = github.map_team(
            team_name=team_name,
            mount_point=mount_point,
        )
        self.assertEqual(
            first=expected_status_code,
            second=response.status_code,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_read_team_mapping(self, test_label, mount_point, requests_mocker):
        expected_status_code = 200
        team_name = 'hvac'
        mock_response = {
            'auth': None,
            'data': {
                'key': 'SOME_TEAM',
                'value': 'some-team-policy'
            },
            'lease_duration': 0,
            'lease_id': '',
            'renewable': False,
            'request_id': '50346cc8-34e7-f2ea-f36a-fcb9d45c1676',
            'warnings': None,
            'wrap_info': None
        }
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/map/teams/{team_name}'.format(
            mount_point=mount_point,
            team_name=team_name,
        )
        requests_mocker.register_uri(
            method='GET',
            url=mock_url,
            status_code=expected_status_code,
            json=mock_response,
        )
        github = Github(adapter=Request())
        response = github.read_team_mapping(
            team_name=team_name,
            mount_point=mount_point,
        )
        self.assertEqual(
            first=mock_response,
            second=response,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_map_user(self, test_label, mount_point, requests_mocker):
        expected_status_code = 204
        user_name = 'hvac'
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/map/users/{user_name}'.format(
            mount_point=mount_point,
            user_name=user_name,
        )
        requests_mocker.register_uri(
            method='POST',
            url=mock_url,
            status_code=expected_status_code,
        )
        github = Github(adapter=Request())
        response = github.map_user(
            user_name=user_name,
            mount_point=mount_point,
        )
        self.assertEqual(
            first=expected_status_code,
            second=response.status_code,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_read_user_mapping(self, test_label, mount_point, requests_mocker):
        expected_status_code = 200
        user_name = 'hvac'
        mock_response = {
            'auth': None,
            'data': None,
            'lease_duration': 0,
            'lease_id': '',
            'renewable': False,
            'request_id': '71ec6e1b-6d4e-6374-ddc2-ff1cdd860e60',
            'warnings': None,
            'wrap_info': None
        }
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/map/users/{user_name}'.format(
            mount_point=mount_point,
            user_name=user_name,
        )
        requests_mocker.register_uri(
            method='GET',
            url=mock_url,
            status_code=expected_status_code,
            json=mock_response,
        )
        github = Github(adapter=Request())
        response = github.read_user_mapping(
            user_name=user_name,
            mount_point=mount_point,
        )
        self.assertEqual(
            first=mock_response,
            second=response,
        )

    @parameterized.expand([
        ("default mount point", DEFAULT_MOUNT_POINT),
        ("custom mount point", 'cathub'),
    ])
    @requests_mock.Mocker()
    def test_login(self, test_label, mount_point, requests_mocker):
        mock_response = {
            'auth': {
                'accessor': 'f578d442-94ec-11e8-afe4-0af6a65f93f6',
                'client_token': 'edf5c2c0-94ec-11e8-afe4-0af6a65f93f6',
                'entity_id': 'f9268760-94ec-11e8-afe4-0af6a65f93f6',
                'lease_duration': 3600,
                'metadata': {'org': 'hvac', 'username': 'hvacbot'},
                'policies': ['default'],
                'renewable': True,
                'token_policies': ['default']
            },
            'data': None,
            'lease_duration': 0,
            'lease_id': '',
            'renewable': False,
            'request_id': '488cf309-2f81-cc04-51bf-c43063d309eb',
            'warnings': None,
            'wrap_info': None
        }
        mock_url = 'http://localhost:8200/v1/auth/{mount_point}/login'.format(
            mount_point=mount_point,
        )
        requests_mocker.register_uri(
            method='POST',
            url=mock_url,
            json=mock_response,
        )
        github = Github(adapter=Request())
        response = github.login(
            token='valid-token',
            mount_point=mount_point,
        )
        self.assertEqual(
            first=mock_response,
            second=response,
        )
        self.assertEqual(
            first=mock_response['auth']['client_token'],
            second=github._adapter.token,
        )
| 32.22179 | 94 | 0.555609 | 812 | 8,281 | 5.383005 | 0.14532 | 0.14642 | 0.058339 | 0.064059 | 0.813315 | 0.800503 | 0.800503 | 0.778312 | 0.778312 | 0.745367 | 0 | 0.034433 | 0.333655 | 8,281 | 256 | 95 | 32.347656 | 0.757702 | 0 | 0 | 0.669388 | 0 | 0 | 0.178602 | 0.030431 | 0 | 0 | 0 | 0 | 0.032653 | 1 | 0.028571 | false | 0 | 0.02449 | 0 | 0.057143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
feae1c48d40778bf2e8dc44fc6629d78752c4ea9 | 13 | py | Python | fython/test/importpec_py/c/a.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 41 | 2016-01-21T05:14:45.000Z | 2021-11-24T20:37:21.000Z | fython/test/importpec_py/c/a.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 5 | 2016-01-21T05:36:37.000Z | 2016-08-22T19:26:51.000Z | fython/test/importpec_py/c/a.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 3 | 2016-01-23T04:03:44.000Z | 2016-08-21T15:58:38.000Z | x = 10
y = 20 | 6.5 | 6 | 0.461538 | 4 | 13 | 1.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.384615 | 13 | 2 | 7 | 6.5 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
feb1895c4db1b515c73281330ca6be6458cfd88a | 7,752 | py | Python | wagtail/search/tests/test_postgres_backend.py | melisayu/wagtail | 614ec3fc2a656b25db710b80f9e60f1c8effd7a4 | [
"BSD-3-Clause"
] | 8,851 | 2016-12-09T19:01:45.000Z | 2022-03-31T04:45:06.000Z | wagtail/search/tests/test_postgres_backend.py | melisayu/wagtail | 614ec3fc2a656b25db710b80f9e60f1c8effd7a4 | [
"BSD-3-Clause"
] | 5,197 | 2016-12-09T19:24:37.000Z | 2022-03-31T22:17:55.000Z | wagtail/search/tests/test_postgres_backend.py | melisayu/wagtail | 614ec3fc2a656b25db710b80f9e60f1c8effd7a4 | [
"BSD-3-Clause"
] | 2,548 | 2016-12-09T18:16:55.000Z | 2022-03-31T21:34:38.000Z | import unittest
from django.db import connection
from django.test import TestCase
from django.test.utils import override_settings
from wagtail.search.tests.test_backends import BackendTests
from wagtail.tests.search import models
@unittest.skipUnless(connection.vendor == 'postgresql', "The current database is not PostgreSQL")
@override_settings(WAGTAILSEARCH_BACKENDS={
'default': {
'BACKEND': 'wagtail.search.backends.database.postgres.postgres',
}
})
class TestPostgresSearchBackend(BackendTests, TestCase):
backend_path = 'wagtail.search.backends.database.postgres.postgres'
def test_weights(self):
from ..backends.database.postgres.weights import (
BOOSTS_WEIGHTS, WEIGHTS_VALUES, determine_boosts_weights, get_weight)
self.assertListEqual(BOOSTS_WEIGHTS,
[(10, 'A'), (2, 'B'), (0.5, 'C'), (0.25, 'D')])
self.assertListEqual(WEIGHTS_VALUES, [0.025, 0.05, 0.2, 1.0])
self.assertEqual(get_weight(15), 'A')
self.assertEqual(get_weight(10), 'A')
self.assertEqual(get_weight(9.9), 'B')
self.assertEqual(get_weight(2), 'B')
self.assertEqual(get_weight(1.9), 'C')
self.assertEqual(get_weight(0), 'D')
self.assertEqual(get_weight(-1), 'D')
self.assertListEqual(determine_boosts_weights([1]),
[(1, 'A'), (0, 'B'), (0, 'C'), (0, 'D')])
self.assertListEqual(determine_boosts_weights([-1]),
[(-1, 'A'), (-1, 'B'), (-1, 'C'), (-1, 'D')])
self.assertListEqual(determine_boosts_weights([-1, 1, 2]),
[(2, 'A'), (1, 'B'), (-1, 'C'), (-1, 'D')])
self.assertListEqual(determine_boosts_weights([0, 1, 2, 3]),
[(3, 'A'), (2, 'B'), (1, 'C'), (0, 'D')])
self.assertListEqual(determine_boosts_weights([0, 0.25, 0.75, 1, 1.5]),
[(1.5, 'A'), (1, 'B'), (0.5, 'C'), (0, 'D')])
self.assertListEqual(determine_boosts_weights([0, 1, 2, 3, 4, 5, 6]),
[(6, 'A'), (4, 'B'), (2, 'C'), (0, 'D')])
self.assertListEqual(determine_boosts_weights([-2, -1, 0, 1, 2, 3, 4]),
[(4, 'A'), (2, 'B'), (0, 'C'), (-2, 'D')])
def test_search_tsquery_chars(self):
"""
Checks that tsquery characters are correctly escaped
and do not generate a PostgreSQL syntax error.
"""
# Simple quote should be escaped inside each tsquery term.
results = self.backend.search("L'amour piqué par une abeille",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("'starting quote",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("ending quote'",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("double quo''te",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("triple quo'''te",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now suffixes.
results = self.backend.search("Something:B", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("Something:*", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.search("Something:A*BCD", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the AND operator.
results = self.backend.search("first & second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the OR operator.
results = self.backend.search("first | second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the NOT operator.
results = self.backend.search("first & !second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the phrase operator.
results = self.backend.search("first <-> second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
def test_autocomplete_tsquery_chars(self):
"""
Checks that tsquery characters are correctly escaped
and do not generate a PostgreSQL syntax error.
"""
# Simple quote should be escaped inside each tsquery term.
results = self.backend.autocomplete("L'amour piqué par une abeille",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("'starting quote",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("ending quote'",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("double quo''te",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("triple quo'''te",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Backslashes should be escaped inside each tsquery term.
results = self.backend.autocomplete("backslash\\",
models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now suffixes.
results = self.backend.autocomplete("Something:B", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("Something:*", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
results = self.backend.autocomplete("Something:A*BCD", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the AND operator.
results = self.backend.autocomplete("first & second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the OR operator.
results = self.backend.autocomplete("first | second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the NOT operator.
results = self.backend.autocomplete("first & !second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
# Now the phrase operator.
results = self.backend.autocomplete("first <-> second", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [])
def test_index_without_upsert(self):
# Test the add_items code path for Postgres 9.4, where upsert is not available
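        # (Assumption about the backend internals, for context): "upsert" here
        # refers to INSERT ... ON CONFLICT ... DO UPDATE, which PostgreSQL only
        # supports from 9.5 onwards; with _enable_upsert disabled the index has
        # to fall back to separate update/insert statements.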
self.backend.reset_index()
index = self.backend.get_index_for_model(models.Book)
index._enable_upsert = False
index.add_items(models.Book, models.Book.objects.all())
results = self.backend.search("JavaScript", models.Book)
self.assertUnsortedListEqual([r.title for r in results], [
"JavaScript: The good parts",
"JavaScript: The Definitive Guide"
])
| 47.851852 | 97 | 0.589009 | 866 | 7,752 | 5.213626 | 0.150115 | 0.06423 | 0.103654 | 0.213068 | 0.772093 | 0.73732 | 0.717386 | 0.717386 | 0.707198 | 0.676412 | 0 | 0.016976 | 0.278122 | 7,752 | 161 | 98 | 48.149068 | 0.78985 | 0.084494 | 0 | 0.315789 | 0 | 0 | 0.091725 | 0.014243 | 0 | 0 | 0 | 0 | 0.368421 | 1 | 0.035088 | false | 0 | 0.061404 | 0 | 0.114035 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
229eb934150ecdee8b6f5e2e6411a4520ca98ead | 29 | py | Python | snp_companies/__init__.py | ofrik/snp_companies | e67e168fd24a3eddd2dc3195c31c45a0d53a86c8 | [
"MIT"
] | null | null | null | snp_companies/__init__.py | ofrik/snp_companies | e67e168fd24a3eddd2dc3195c31c45a0d53a86c8 | [
"MIT"
] | null | null | null | snp_companies/__init__.py | ofrik/snp_companies | e67e168fd24a3eddd2dc3195c31c45a0d53a86c8 | [
"MIT"
] | null | null | null | from .main import SNPListing
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22a1761828e6b4575844e6cca4535c0bcf30483d | 46 | py | Python | generator/__init__.py | forskolan-natet/natet-sos-generator | 0062aa49ed134af7de8bf04a1bb4b058b445e4d3 | [
"MIT"
] | null | null | null | generator/__init__.py | forskolan-natet/natet-sos-generator | 0062aa49ed134af7de8bf04a1bb4b058b445e4d3 | [
"MIT"
] | 1 | 2021-04-05T20:36:28.000Z | 2021-04-05T20:36:28.000Z | generator/__init__.py | forskolan-natet/natet-sos-generator | 0062aa49ed134af7de8bf04a1bb4b058b445e4d3 | [
"MIT"
] | null | null | null | from .model import *
from .generator import *
| 15.333333 | 24 | 0.73913 | 6 | 46 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 46 | 2 | 25 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22a5def788129a7cd0c1fe3b17187bf4a48bbdb6 | 69 | py | Python | src/python/pull_requests/__init__.py | bvdeenen/bitbar | 4bd0876dacecc55f2cb60027510ba47ff7d84d12 | [
"MIT"
] | 1 | 2022-02-08T09:19:40.000Z | 2022-02-08T09:19:40.000Z | src/python/pull_requests/__init__.py | bvdeenen/bitbar | 4bd0876dacecc55f2cb60027510ba47ff7d84d12 | [
"MIT"
] | 3 | 2021-12-24T11:34:24.000Z | 2022-01-18T09:17:32.000Z | src/python/pull_requests/__init__.py | bvdeenen/bitbar | 4bd0876dacecc55f2cb60027510ba47ff7d84d12 | [
"MIT"
] | null | null | null | from .domain import *
from .menu import print_xbar_pull_request_menu
| 23 | 46 | 0.84058 | 11 | 69 | 4.909091 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 69 | 2 | 47 | 34.5 | 0.885246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
22ba509d8a2a8b7550fdcc463f0fc90e144c6c80 | 276 | py | Python | backbones/__init__.py | edwardyehuang/iSeg | 256b0f7fdb6e854fe026fa8df41d9a4a55db34d5 | [
"MIT"
] | 4 | 2021-12-13T09:49:26.000Z | 2022-02-19T11:16:50.000Z | backbones/__init__.py | edwardyehuang/iSeg | 256b0f7fdb6e854fe026fa8df41d9a4a55db34d5 | [
"MIT"
] | 1 | 2021-07-28T10:40:56.000Z | 2021-08-09T07:14:06.000Z | backbones/__init__.py | edwardyehuang/iSeg | 256b0f7fdb6e854fe026fa8df41d9a4a55db34d5 | [
"MIT"
] | null | null | null | # ================================================================
# MIT License
# Copyright (c) 2021 edwardyehuang (https://github.com/edwardyehuang)
# ================================================================
from iseg.backbones.feature_extractor import get_backbone
| 46 | 69 | 0.42029 | 18 | 276 | 6.333333 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015444 | 0.061594 | 276 | 5 | 70 | 55.2 | 0.42471 | 0.757246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22bf5da776b2a274c1a70a3181a687581962bf0d | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/array_api/_data_type_functions.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/array_api/_data_type_functions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/array_api/_data_type_functions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/e3/00/f8/8f011a5b14da3801a732138b77a034e94af35a1de6fe1d24bbd1117839 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22f2dba70d6b5acb592bbfaf0171dab98b8c8bb7 | 14,986 | py | Python | geotrek/cirkwi/tests/test_views.py | GeotrekCE/Geotrek | c1393925c1940ac795ab7fc04819cd8c78bc79fb | [
"BSD-2-Clause"
] | null | null | null | geotrek/cirkwi/tests/test_views.py | GeotrekCE/Geotrek | c1393925c1940ac795ab7fc04819cd8c78bc79fb | [
"BSD-2-Clause"
] | null | null | null | geotrek/cirkwi/tests/test_views.py | GeotrekCE/Geotrek | c1393925c1940ac795ab7fc04819cd8c78bc79fb | [
"BSD-2-Clause"
] | null | null | null | import datetime
from django.test import TestCase
from django.test.utils import override_settings
from django.utils.timezone import utc, make_aware
from geotrek.common.tests.factories import AttachmentFactory, TargetPortalFactory
from geotrek.common.tests import TranslationResetMixin
from geotrek.authent.tests.factories import StructureFactory
from geotrek.common.utils.testdata import get_dummy_uploaded_image
from geotrek.core.tests.factories import PathFactory
from geotrek.trekking.tests.factories import POIFactory, TrekFactory
from geotrek.cirkwi.serializers import timestamp
from geotrek.trekking import urls # NOQA
class CirkwiTests(TranslationResetMixin, TestCase):
@classmethod
def setUpTestData(cls):
cls.path = PathFactory.create()
cls.creation = make_aware(datetime.datetime(2014, 1, 1), utc)
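        # 2014-01-01T00:00:00 UTC is unix timestamp 1388534400, which is why the
        # expected XML below hardcodes date_creation="1388534400".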
cls.trek = TrekFactory.create(published=True, paths=[cls.path])
cls.trek.date_insert = cls.creation
cls.trek.save()
TrekFactory.create(published=False, paths=[cls.path])
POIFactory.create(published=False, paths=[cls.path])
def setUp(self):
self.poi = POIFactory.create(published=True, paths=[self.path])
self.poi.date_insert = self.creation
self.poi.save()
def test_export_circuits(self):
response = self.client.get('/api/cirkwi/circuits.xml')
self.assertEqual(response.status_code, 200)
attrs = {
'pk': self.trek.pk,
'title': self.trek.name,
'date_update': timestamp(self.trek.date_update),
'n': self.trek.description.replace('<p>description ', '').replace('</p>', ''),
'poi_pk': self.poi.pk,
'poi_title': self.poi.name,
'poi_date_update': timestamp(self.poi.date_update),
'poi_description': self.poi.description.replace('<p>', '').replace('</p>', ''),
}
self.assertXMLEqual(
response.content.decode(),
'<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2">'
'<circuit date_creation="1388534400" date_modification="{date_update}" id_circuit="{pk}">'
'<informations>'
'<information langue="en">'
'<titre>{title}</titre>'
'<description>Description teaser\n\nDescription</description>'
'<informations_complementaires>'
'<information_complementaire><titre>Departure</titre><description>Departure</description></information_complementaire>'
'<information_complementaire><titre>Arrival</titre><description>Arrival</description></information_complementaire>'
'<information_complementaire><titre>Ambiance</titre><description>Ambiance</description></information_complementaire>'
'<information_complementaire><titre>Access</titre><description>Access</description></information_complementaire>'
'<information_complementaire><titre>Accessibility infrastructure</titre><description>Accessibility infrastructure</description></information_complementaire>'
'<information_complementaire><titre>Advised parking</titre><description>Advised parking</description></information_complementaire>'
'<information_complementaire><titre>Public transport</titre><description>Public transport</description></information_complementaire>'
'<information_complementaire><titre>Advice</titre><description>Advice</description></information_complementaire></informations_complementaires>'
'</information>'
'</informations>'
'<distance>141</distance>'
'<locomotions><locomotion duree="5400"></locomotion></locomotions>'
'<fichier_trace url="http://testserver/api/en/treks/{pk}/trek.kml"></fichier_trace>'
'<pois>'
'<poi date_creation="1388534400" date_modification="{poi_date_update}" id_poi="{poi_pk}">'
'<informations>'
'<information langue="en"><titre>POI</titre><description>Description</description></information>'
'</informations>'
'<adresse><position><lat>46.5</lat><lng>3.0</lng></position></adresse>'
'</poi>'
'</pois>'
'</circuit>'
'</circuits>'.format(**attrs))
def test_export_pois(self):
response = self.client.get('/api/cirkwi/pois.xml')
self.assertEqual(response.status_code, 200)
attrs = {
'pk': self.poi.pk,
'title': self.poi.name,
'description': self.poi.description.replace('<p>', '').replace('</p>', ''),
'date_update': timestamp(self.poi.date_update),
}
self.assertXMLEqual(
response.content.decode(),
'<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2">'
'<poi id_poi="{pk}" date_modification="{date_update}" date_creation="1388534400">'
'<informations>'
'<information langue="en"><titre>{title}</titre><description>{description}</description></information>'
'</informations>'
'<adresse><position><lat>46.5</lat><lng>3.0</lng></position></adresse>'
'</poi>'
'</pois>'.format(**attrs))
def test_export_pois_with_attachments(self):
attachment = AttachmentFactory.create(content_object=self.poi, attachment_file=get_dummy_uploaded_image())
response = self.client.get('/api/cirkwi/pois.xml')
self.assertEqual(response.status_code, 200)
attrs = {
'pk': self.poi.pk,
'title': self.poi.name,
'description': self.poi.description.replace('<p>', '').replace('</p>', ''),
'date_update': timestamp(self.poi.date_update),
'legend': attachment.legend,
'picture': f'http://testserver{self.poi.resized_pictures[0][1].url}'
}
self.assertXMLEqual(
response.content.decode(),
'<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2">'
'<poi id_poi="{pk}" date_modification="{date_update}" date_creation="1388534400">'
'<informations>'
'<information langue="en"><titre>{title}</titre><description>{description}</description>'
'<medias><images><image><legende>{legend}</legende><url>{picture}</url></image></images></medias></information>'
'</informations>'
'<adresse><position><lat>46.5</lat><lng>3.0</lng></position></adresse>'
'</poi>'
'</pois>'.format(**attrs)
)
def test_export_circuits_with_attachments(self):
attachment = AttachmentFactory.create(content_object=self.trek, attachment_file=get_dummy_uploaded_image())
self.poi.delete()
response = self.client.get('/api/cirkwi/circuits.xml')
self.assertEqual(response.status_code, 200)
attrs = {
'pk': self.trek.pk,
'title': self.trek.name,
'date_update': timestamp(self.trek.date_update),
'n': self.trek.description.replace('<p>description ', '').replace('</p>', ''),
'legend': attachment.legend,
'picture': f'http://testserver{self.trek.resized_pictures[0][1].url}'
}
self.assertXMLEqual(
response.content.decode(),
'<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2">'
'<circuit date_creation="1388534400" date_modification="{date_update}" id_circuit="{pk}">'
'<informations>'
'<information langue="en">'
'<titre>{title}</titre>'
'<description>Description teaser\n\nDescription</description>'
'<medias><images><image><legende>{legend}</legende><url>{picture}</url></image></images></medias>'
'<informations_complementaires>'
'<information_complementaire><titre>Departure</titre><description>Departure</description></information_complementaire>'
'<information_complementaire><titre>Arrival</titre><description>Arrival</description></information_complementaire>'
'<information_complementaire><titre>Ambiance</titre><description>Ambiance</description></information_complementaire>'
'<information_complementaire><titre>Access</titre><description>Access</description></information_complementaire>'
'<information_complementaire><titre>Accessibility infrastructure</titre><description>Accessibility infrastructure</description></information_complementaire>'
'<information_complementaire><titre>Advised parking</titre><description>Advised parking</description></information_complementaire>'
'<information_complementaire><titre>Public transport</titre><description>Public transport</description></information_complementaire>'
'<information_complementaire><titre>Advice</titre><description>Advice</description></information_complementaire></informations_complementaires>'
'</information>'
'</informations>'
'<distance>141</distance>'
'<locomotions><locomotion duree="5400"></locomotion></locomotions>'
'<fichier_trace url="http://testserver/api/en/treks/{pk}/trek.kml"></fichier_trace>'
'</circuit>'
'</circuits>'.format(**attrs))
@override_settings(PUBLISHED_BY_LANG=False)
def test_export_pois_without_langs(self):
response = self.client.get('/api/cirkwi/pois.xml')
self.assertEqual(response.status_code, 200)
attrs = {
'pk': self.poi.pk,
'title': self.poi.name,
'description': self.poi.description.replace('<p>', '').replace('</p>', ''),
'date_update': timestamp(self.poi.date_update),
}
self.assertXMLEqual(
response.content.decode(),
'<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2">'
'<poi id_poi="{pk}" date_modification="{date_update}" date_creation="1388534400">'
'<informations>'
'<information langue="en"><titre>{title}</titre><description>{description}</description></information>'
'<information langue="es"><titre>{title}</titre><description>{description}</description></information>'
'<information langue="fr"><titre>{title}</titre><description>{description}</description></information>'
'<information langue="it"><titre>{title}</titre><description>{description}</description></information>'
'</informations>'
'<adresse><position><lat>46.5</lat><lng>3.0</lng></position></adresse>'
'</poi>'
'</pois>'.format(**attrs))
def test_trek_filter_portals(self):
portal = TargetPortalFactory.create()
self.trek.portal.add(portal)
# We found one trek with the portal
response = self.client.get(f'/api/cirkwi/circuits.xml?portals={portal.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
other_portal = TargetPortalFactory.create()
# We found no treks with the other portal's id
response = self.client.get(f'/api/cirkwi/circuits.xml?portals={other_portal.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
# We found treks when we ask for the other portal's id and portal's id
response = self.client.get(f'/api/cirkwi/circuits.xml?portals={other_portal.pk},{portal.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
def test_trek_filter_structures(self):
structure = StructureFactory.create()
self.trek.structure = structure
self.trek.save()
# We found one trek with the structure
response = self.client.get(f'/api/cirkwi/circuits.xml?structures={structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
other_structure = StructureFactory.create()
# We found no treks with the other structure's id
response = self.client.get(f'/api/cirkwi/circuits.xml?structures={other_structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
response = self.client.get(f'/api/cirkwi/circuits.xml?structures={other_structure.pk},{structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<circuits version="2"/>')
def test_poi_filter_structures(self):
structure = StructureFactory.create()
self.poi.structure = structure
self.poi.save()
# We found one trek with the structure
response = self.client.get(f'/api/cirkwi/pois.xml?structures={structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2"/>')
other_structure = StructureFactory.create()
# We found no treks with the other structure's id
response = self.client.get(f'/api/cirkwi/pois.xml?structures={other_structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2"/>')
response = self.client.get(f'/api/cirkwi/pois.xml?structures={other_structure.pk},{structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2"/>')
response = self.client.get(f'/api/cirkwi/pois.xml?structures={other_structure.pk}&structures={structure.pk}')
self.assertEqual(response.status_code, 200)
self.assertXMLNotEqual(response.content.decode(), '<?xml version="1.0" encoding="utf8"?>\n'
'<pois version="2"/>')
| 56.338346 | 169 | 0.621647 | 1,509 | 14,986 | 6.070908 | 0.11332 | 0.087327 | 0.052396 | 0.034385 | 0.854929 | 0.848925 | 0.824364 | 0.80799 | 0.789106 | 0.755594 | 0 | 0.017977 | 0.224209 | 14,986 | 265 | 170 | 56.550943 | 0.769998 | 0.021487 | 0 | 0.675214 | 0 | 0.038462 | 0.448243 | 0.305357 | 0 | 0 | 0 | 0 | 0.128205 | 1 | 0.042735 | false | 0 | 0.051282 | 0 | 0.098291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22f7b098c826fe5fe8e397f117030913cf5fcdec | 48 | py | Python | coinmarketcap_tracker/__init__.py | hmallen/coinmarketcap_tracker | 9b81b2f568050d98546e7419aa62803591e20a8e | [
"MIT"
] | null | null | null | coinmarketcap_tracker/__init__.py | hmallen/coinmarketcap_tracker | 9b81b2f568050d98546e7419aa62803591e20a8e | [
"MIT"
] | null | null | null | coinmarketcap_tracker/__init__.py | hmallen/coinmarketcap_tracker | 9b81b2f568050d98546e7419aa62803591e20a8e | [
"MIT"
] | null | null | null | from .coinmarketcap_tracker import TrackProduct
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe1bff46a9ba1bd5b14e79aef1bc0e98ed6b56d6 | 661 | py | Python | main.py | Karthik-Venkatesh/ATOM | d369d8436b71b3af0f5810200c0927d0097f4330 | [
"Apache-2.0"
] | 1 | 2022-02-23T14:54:12.000Z | 2022-02-23T14:54:12.000Z | main.py | Karthik-Venkatesh/atom | d369d8436b71b3af0f5810200c0927d0097f4330 | [
"Apache-2.0"
] | 21 | 2018-12-27T04:47:17.000Z | 2019-01-16T06:00:53.000Z | main.py | Karthik-Venkatesh/ATOM | d369d8436b71b3af0f5810200c0927d0097f4330 | [
"Apache-2.0"
] | null | null | null | #
# main.py
# ATOM
#
# Created by Karthik V.
# Updated copyright on 16/1/19 5:56 PM.
#
# Copyright © 2019 Karthik Venkatesh. All rights reserved.
#
from sense.sensor import Sensor
print("")
print("#######################################")
print("")
print(" _______ ____ __ __ ")
print(" /\|__ __/ __ \| \/ | ")
print(" / \ | | | | | | \ / | ")
print(" / /\ \ | | | | | | |\/| | ")
print(" / ____ \| | | |__| | | | | ")
print(" /_/ \_\_| \____/|_| |_| ")
print("")
print("#######################################")
print("")
sensor = Sensor()
sensor.start_senses()
| 22.793103 | 59 | 0.384266 | 46 | 661 | 4.73913 | 0.630435 | 0.504587 | 0.688073 | 0.825688 | 0.275229 | 0.275229 | 0.275229 | 0.275229 | 0.275229 | 0.275229 | 0 | 0.025974 | 0.301059 | 661 | 28 | 60 | 23.607143 | 0.443723 | 0.20121 | 0 | 0.4 | 0 | 0 | 0.603482 | 0.15087 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0.8 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a3cece5e9b777913796b31923a699e174c3b8c3e | 40 | py | Python | brainframe_qt/ui/resources/alarms/alarm_bundle/alarm_card/alert_log/alert_log_entry/alert_preview/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 17 | 2021-02-11T18:19:22.000Z | 2022-02-08T06:12:50.000Z | brainframe_qt/ui/resources/alarms/alarm_bundle/alarm_card/alert_log/alert_log_entry/alert_preview/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 80 | 2021-02-11T08:27:31.000Z | 2021-10-13T21:33:22.000Z | brainframe_qt/ui/resources/alarms/alarm_bundle/alarm_card/alert_log/alert_log_entry/alert_preview/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 5 | 2021-02-12T09:51:34.000Z | 2022-02-08T09:25:15.000Z | from .alert_preview import AlertPreview
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a3db2255624ded5c88602ba356bf063a33195272 | 11,494 | py | Python | src/tests/presale/test_event.py | awg24/pretix | b1d67a48601838bac0d4e498cbe8bdcd16013d60 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-06-23T07:44:59.000Z | 2021-06-23T07:44:59.000Z | src/tests/presale/test_event.py | awg24/pretix | b1d67a48601838bac0d4e498cbe8bdcd16013d60 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | src/tests/presale/test_event.py | awg24/pretix | b1d67a48601838bac0d4e498cbe8bdcd16013d60 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import datetime
import time
from django.test import TestCase
from django.utils.timezone import now
from tests.base import BrowserTest
from pretix.base.models import (
Event, Item, ItemCategory, ItemVariation, Organizer, Property,
PropertyValue, Quota, User,
)
class EventTestMixin:
def setUp(self):
super().setUp()
self.orga = Organizer.objects.create(name='CCC', slug='ccc')
self.event = Event.objects.create(
organizer=self.orga, name='30C3', slug='30c3',
date_from=datetime.datetime(2013, 12, 26, tzinfo=datetime.timezone.utc),
)
class EventMiddlewareTest(EventTestMixin, BrowserTest):
def setUp(self):
super().setUp()
self.driver.implicitly_wait(10)
def test_event_header(self):
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertIn(str(self.event.name), self.driver.find_element_by_css_selector("h1").text)
def test_not_found(self):
resp = self.client.get('%s/%s/%s/' % (self.live_server_url, 'foo', 'bar'))
self.assertEqual(resp.status_code, 404)
class ItemDisplayTest(EventTestMixin, BrowserTest):
def setUp(self):
super().setUp()
self.driver.implicitly_wait(10)
def test_not_active(self):
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', default_price=0, active=False)
q.items.add(item)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertNotIn("Early-bird", self.driver.find_element_by_css_selector("body").text)
def test_without_category(self):
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', default_price=0, active=True)
q.items.add(item)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertIn("Early-bird", self.driver.find_element_by_css_selector("section .product-row:first-child").text)
def test_simple_with_category(self):
c = ItemCategory.objects.create(event=self.event, name="Entry tickets", position=0)
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', category=c, default_price=0)
q.items.add(item)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertIn("Entry tickets", self.driver.find_element_by_css_selector("section:nth-of-type(1) h3").text)
self.assertIn("Early-bird",
self.driver.find_element_by_css_selector("section:nth-of-type(1) div:nth-of-type(1)").text)
def test_simple_without_quota(self):
c = ItemCategory.objects.create(event=self.event, name="Entry tickets", position=0)
Item.objects.create(event=self.event, name='Early-bird ticket', category=c, default_price=0)
resp = self.client.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertNotIn("Early-bird", resp.rendered_content)
def test_no_variations_in_quota(self):
c = ItemCategory.objects.create(event=self.event, name="Entry tickets", position=0)
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', category=c, default_price=0)
prop1 = Property.objects.create(event=self.event, name="Color")
item.properties.add(prop1)
PropertyValue.objects.create(prop=prop1, value="Red")
PropertyValue.objects.create(prop=prop1, value="Black")
q.items.add(item)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
resp = self.client.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertNotIn("Early-bird", resp.rendered_content)
def test_one_variation_in_quota(self):
c = ItemCategory.objects.create(event=self.event, name="Entry tickets", position=0)
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', category=c, default_price=0)
prop1 = Property.objects.create(event=self.event, name="Color")
item.properties.add(prop1)
val1 = PropertyValue.objects.create(prop=prop1, value="Red")
PropertyValue.objects.create(prop=prop1, value="Black")
q.items.add(item)
var1 = ItemVariation.objects.create(item=item)
var1.values.add(val1)
q.variations.add(var1)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertIn("Early-bird",
self.driver.find_element_by_css_selector("section:nth-of-type(1) div:nth-of-type(1)").text)
for el in self.driver.find_elements_by_link_text('Show variants'):
self.scroll_and_click(el)
time.sleep(2)
self.assertIn("Red",
self.driver.find_element_by_css_selector("section:nth-of-type(1)").text)
self.assertNotIn("Black",
self.driver.find_element_by_css_selector("section:nth-of-type(1)").text)
def test_variation_prices_in_quota(self):
c = ItemCategory.objects.create(event=self.event, name="Entry tickets", position=0)
q = Quota.objects.create(event=self.event, name='Quota', size=2)
item = Item.objects.create(event=self.event, name='Early-bird ticket', category=c, default_price=12)
prop1 = Property.objects.create(event=self.event, name="Color")
item.properties.add(prop1)
val1 = PropertyValue.objects.create(prop=prop1, value="Red", position=0)
val2 = PropertyValue.objects.create(prop=prop1, value="Black", position=1)
q.items.add(item)
var1 = ItemVariation.objects.create(item=item, default_price=14)
var1.values.add(val1)
var2 = ItemVariation.objects.create(item=item)
var2.values.add(val2)
q.variations.add(var1)
q.variations.add(var2)
self.driver.get('%s/%s/%s/' % (self.live_server_url, self.orga.slug, self.event.slug))
self.assertIn("Early-bird",
self.driver.find_element_by_css_selector("section:nth-of-type(1) div:nth-of-type(1)").text)
for el in self.driver.find_elements_by_link_text('Show variants'):
self.scroll_and_click(el)
time.sleep(2)
self.assertIn("Red",
self.driver.find_elements_by_css_selector("section:nth-of-type(1) div.variation")[0].text)
self.assertIn("14.00",
self.driver.find_elements_by_css_selector("section:nth-of-type(1) div.variation")[0].text)
self.assertIn("Black",
self.driver.find_elements_by_css_selector("section:nth-of-type(1) div.variation")[1].text)
self.assertIn("12.00",
self.driver.find_elements_by_css_selector("section:nth-of-type(1) div.variation")[1].text)
class LoginTest(EventTestMixin, TestCase):
def setUp(self):
super().setUp()
self.local_user = User.objects.create_local_user(self.event, 'demo', 'foo')
self.global_user = User.objects.create_global_user('demo@demo.dummy', 'demo')
def test_login_invalid(self):
response = self.client.post(
'/%s/%s/login' % (self.orga.slug, self.event.slug),
{
'form': 'login',
'username': 'demo',
'password': 'bar'
}
)
self.assertEqual(response.status_code, 200)
self.assertIn('alert-danger', response.rendered_content)
def test_login_local(self):
response = self.client.post(
'/%s/%s/login' % (self.orga.slug, self.event.slug),
{
'form': 'login',
'username': 'demo',
'password': 'foo'
}
)
self.assertEqual(response.status_code, 302)
def test_login_global(self):
response = self.client.post(
'/%s/%s/login' % (self.orga.slug, self.event.slug),
{
'form': 'login',
'username': 'demo@demo.dummy',
'password': 'demo'
}
)
self.assertEqual(response.status_code, 302)
def test_login_already_logged_in(self):
self.assertTrue(self.client.login(username='demo@%s.event.pretix' % self.event.identity, password='foo'))
response = self.client.get(
'/%s/%s/login' % (self.orga.slug, self.event.slug),
)
self.assertEqual(response.status_code, 302)
def test_logout(self):
self.assertTrue(self.client.login(username='demo@%s.event.pretix' % self.event.identity, password='foo'))
response = self.client.get(
'/%s/%s/logout' % (self.orga.slug, self.event.slug),
)
self.assertEqual(response.status_code, 302)
response = self.client.get(
'/%s/%s/login' % (self.orga.slug, self.event.slug),
)
self.assertEqual(response.status_code, 200)
class DeadlineTest(EventTestMixin, TestCase):
def setUp(self):
super().setUp()
def test_not_yet_started(self):
self.event.presale_start = now() + datetime.timedelta(days=1)
self.event.save()
response = self.client.get(
'/%s/%s/' % (self.orga.slug, self.event.slug)
)
self.assertEqual(response.status_code, 200)
self.assertIn('alert-info', response.rendered_content)
self.assertNotIn('checkout-button-row', response.rendered_content)
response = self.client.post(
'/%s/%s/cart/add' % (self.orga.slug, self.event.slug),
follow=True
)
self.assertIn('alert-danger', response.rendered_content)
self.assertIn('not yet started', response.rendered_content)
def test_over(self):
self.event.presale_end = now() - datetime.timedelta(days=1)
self.event.save()
response = self.client.get(
'/%s/%s/' % (self.orga.slug, self.event.slug)
)
self.assertEqual(response.status_code, 200)
self.assertIn('alert-info', response.rendered_content)
self.assertNotIn('checkout-button-row', response.rendered_content)
response = self.client.post(
'/%s/%s/cart/add' % (self.orga.slug, self.event.slug),
follow=True
)
self.assertIn('alert-danger', response.rendered_content)
self.assertIn('is over', response.rendered_content)
def test_in_time(self):
self.event.presale_start = now() - datetime.timedelta(days=1)
self.event.presale_end = now() + datetime.timedelta(days=1)
self.event.save()
response = self.client.get(
'/%s/%s/' % (self.orga.slug, self.event.slug)
)
self.assertEqual(response.status_code, 200)
self.assertNotIn('alert-info', response.rendered_content)
self.assertIn('checkout-button-row', response.rendered_content)
response = self.client.post(
'/%s/%s/cart/add' % (self.orga.slug, self.event.slug)
)
self.assertNotEqual(response.status_code, 403)
| 45.251969 | 118 | 0.63581 | 1,485 | 11,494 | 4.805387 | 0.121212 | 0.068105 | 0.040078 | 0.047085 | 0.830437 | 0.815163 | 0.805914 | 0.775925 | 0.774944 | 0.749299 | 0 | 0.014872 | 0.216113 | 11,494 | 253 | 119 | 45.43083 | 0.777137 | 0 | 0 | 0.570776 | 0 | 0 | 0.116235 | 0.021228 | 0 | 0 | 0 | 0 | 0.178082 | 1 | 0.100457 | false | 0.022831 | 0.027397 | 0 | 0.150685 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3e5f2c4482e411123c45f3c359cd4b679cacf74 | 6,626 | py | Python | tests/infer_for_schema/test_patterns_on_self.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 5 | 2021-12-29T12:55:34.000Z | 2022-03-01T17:57:21.000Z | tests/infer_for_schema/test_patterns_on_self.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 10 | 2021-12-29T02:15:55.000Z | 2022-03-09T11:04:22.000Z | tests/infer_for_schema/test_patterns_on_self.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 2 | 2021-12-29T01:42:12.000Z | 2022-02-15T13:46:33.000Z | # pylint: disable=missing-docstring
import textwrap
import unittest
import tests.common
import tests.infer_for_schema.common
from aas_core_codegen import infer_for_schema
class Test_expected(unittest.TestCase):
def test_no_pattern(self) -> None:
source = textwrap.dedent(
"""\
class Some_constrained_primitive(str):
pass
class Something:
some_property: Some_constrained_primitive
def __init__(self, some_property: Some_constrained_primitive) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={},
patterns_by_property={})"""
),
text,
)
def test_single_pattern(self) -> None:
source = textwrap.dedent(
"""\
@verification
def is_something(text: str) -> bool:
prefix = "something"
return match(f"{prefix}-[a-zA-Z]+", text) is not None
@invariant(lambda self: is_something(self))
class Some_constrained_primitive(str):
pass
class Something:
some_property: Some_constrained_primitive
def __init__(self, some_property: Some_constrained_primitive) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={},
patterns_by_property={
'some_property': [
PatternConstraint(
pattern='something-[a-zA-Z]+')]})"""
),
text,
)
def test_two_patterns(self) -> None:
source = textwrap.dedent(
"""\
@verification
def is_something(text: str) -> bool:
return match("something-[a-zA-Z]+", text) is not None
@verification
def is_acme(text: str) -> bool:
return match(".*acme.*", text) is not None
@invariant(lambda self: is_acme(self))
@invariant(lambda self: is_something(self))
class Some_constrained_primitive(str):
pass
class Something:
some_property: Some_constrained_primitive
def __init__(self, some_property: Some_constrained_primitive) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={},
patterns_by_property={
'some_property': [
PatternConstraint(
pattern='something-[a-zA-Z]+'),
PatternConstraint(
pattern='.*acme.*')]})"""
),
text,
)
def test_inheritance_between_constrained_primitives_by_default(self) -> None:
source = textwrap.dedent(
"""\
@verification
def is_something(text: str) -> bool:
return match("something-[a-zA-Z]+", text) is not None
@verification
def is_acme(text: str) -> bool:
return match(".*acme.*", text) is not None
@invariant(lambda self: is_something(self))
class Parent_constrained_primitive(str):
pass
@invariant(lambda self: is_acme(self))
class Some_constrained_primitive(Parent_constrained_primitive):
pass
class Something:
some_property: Some_constrained_primitive
def __init__(self, some_property: Some_constrained_primitive) -> None:
self.some_property = some_property
__book_url__ = "dummy"
__book_version__ = "dummy"
"""
)
# NOTE (mristin, 2022-05-15):
# In contrast to classes, we do inherit the constraints among the constrained
# primitives as we in-line them later in the schema classes.
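        # Concretely, 'some_property' below is expected to carry both its own
        # '.*acme.*' pattern and the 'something-[a-zA-Z]+' pattern inherited from
        # Parent_constrained_primitive.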
(
_,
something_cls,
constraints_by_class,
) = tests.infer_for_schema.common.parse_to_symbol_table_and_something_cls_and_constraints_by_class(
source=source
)
constraints_by_props = constraints_by_class[something_cls]
text = infer_for_schema.dump(constraints_by_props)
self.assertEqual(
textwrap.dedent(
"""\
ConstraintsByProperty(
len_constraints_by_property={},
patterns_by_property={
'some_property': [
PatternConstraint(
pattern='something-[a-zA-Z]+'),
PatternConstraint(
pattern='.*acme.*')]})"""
),
text,
)
if __name__ == "__main__":
unittest.main()
| 30.675926 | 107 | 0.540145 | 593 | 6,626 | 5.596965 | 0.161889 | 0.094004 | 0.086773 | 0.06508 | 0.851461 | 0.833986 | 0.810184 | 0.808979 | 0.808979 | 0.808979 | 0 | 0.001943 | 0.378509 | 6,626 | 215 | 108 | 30.818605 | 0.804031 | 0.02958 | 0 | 0.631579 | 0 | 0 | 0.003095 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.052632 | false | 0 | 0.065789 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
432bc267a127da57dd8ff7a8befbd0dcaf694b15 | 38 | py | Python | transquest/algo/sentence_level/siamesetransquest/losses/__init__.py | TharinduDR/TransQuest | fd9714050ea73201119f1c70b060b3faee86c133 | [
"Apache-2.0"
] | 78 | 2020-05-24T12:13:16.000Z | 2022-03-19T23:16:02.000Z | transquest/algo/sentence_level/siamesetransquest/losses/__init__.py | TharinduDR/TransQuest | fd9714050ea73201119f1c70b060b3faee86c133 | [
"Apache-2.0"
] | 32 | 2020-07-05T03:27:19.000Z | 2022-03-31T19:11:26.000Z | transquest/algo/sentence_level/siamesetransquest/losses/__init__.py | TharinduDR/TransQuest | fd9714050ea73201119f1c70b060b3faee86c133 | [
"Apache-2.0"
] | 14 | 2020-05-14T09:52:07.000Z | 2022-03-13T10:55:47.000Z | from .cosine_similarity_loss import *
| 19 | 37 | 0.842105 | 5 | 38 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a9214c8d8254ab878b5849243bbbdeb11bc6a92 | 78,705 | py | Python | Packs/CiscoFirepower/Integrations/CiscoFirepower/CiscoFirepower.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/CiscoFirepower/Integrations/CiscoFirepower/CiscoFirepower.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/CiscoFirepower/Integrations/CiscoFirepower/CiscoFirepower.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z | import demistomock as demisto # noqa: F401
from CommonServerPython import * # noqa: F401
''' IMPORTS '''
from typing import Dict, List, Tuple, Union
import urllib3
# Disable insecure warnings
urllib3.disable_warnings()
''' GLOBALS/PARAMS '''
INTEGRATION_NAME = 'Cisco Firepower'
INTEGRATION_CONTEXT_NAME = 'CiscoFP'
OUTPUT_KEYS_DICTIONARY = {
'id': 'ID'
}
class Client(BaseClient):
def login(self):
"""update the X-auth-access-token in the client.
"""
new_headers = self._http_request(
'POST',
url_suffix='/api/fmc_platform/v1/auth/generatetoken',
resp_type='response'
).headers
self._headers = {'X-auth-access-token': new_headers.get('X-auth-access-token')}
self._base_url += f'/api/fmc_config/v1/domain/{new_headers.get("DOMAIN_UUID")}/'
        # .get() above may return None when the header is missing, so check for
        # any falsy value rather than only the empty string.
        if not self._headers['X-auth-access-token']:
            return_error('No valid access token')
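    # Example usage - a sketch only; the hostname is a placeholder and the
    # credential wiring may differ in the full integration:
    #   client = Client(base_url='https://fmc.example.com', auth=(username, password), verify=False)
    #   client.login()  # stores the X-auth-access-token header and scopes base_url to the domain UUID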
def get_list(self, limit: int, offset: int, object_path: str) -> Dict:
params = {'expanded': 'true', 'limit': limit, 'offset': offset}
suffix = f'object/{object_path}'
return self._http_request('GET', suffix, params=params)
def list_policy_assignments(self, limit: int, offset: int) -> Dict:
params = {'expanded': 'true', 'limit': limit, 'offset': offset}
suffix = 'assignment/policyassignments'
return self._http_request('GET', suffix, params=params)
def get_deployable_devices(self, limit: int, offset: int, container_uuid: str) -> Dict:
params = {'expanded': 'true', 'limit': limit, 'offset': offset}
end_suffix = '/' + container_uuid + '/deployments' if container_uuid else ''
suffix = f'deployment/deployabledevices{end_suffix}'
return self._http_request('GET', suffix, params=params)
def get_device_records(self, limit: int, offset: int) -> Dict:
params = {'expanded': 'true', 'limit': limit, 'offset': offset}
suffix = 'devices/devicerecords'
return self._http_request('GET', suffix, params=params)
def get_network_objects(self, limit: int, offset: int, object_id: str) -> Dict:
end_suffix = f'/{object_id}' if object_id else f'?expanded=true&limit={limit}&offset={offset}'
suffix = f'object/networks{end_suffix}'
return self._http_request('GET', suffix)
def get_hosts_objects(self, limit: int, offset: int, object_id: str) -> Dict:
end_suffix = f'/{object_id}' if object_id else f'?expanded=true&limit={limit}&offset={offset}'
suffix = f'object/hosts{end_suffix}'
return self._http_request('GET', suffix)
def create_network_objects(self, name: str, value: str, description: str, overridable: bool) -> Dict:
data = {'name': name, 'value': value, 'description': description, 'overridable': overridable}
suffix = 'object/networks'
return self._http_request('POST', suffix, json_data=data)
def create_host_objects(self, name: str, value: str, description: str, overridable: bool) -> Dict:
data = {'name': name, 'value': value, 'description': description, 'overridable': overridable}
suffix = 'object/hosts'
return self._http_request('POST', suffix, json_data=data)
def update_network_objects(
self, name: str, value: str, description: str, overridable: bool, object_id: str) -> Dict:
data = assign_params(id=object_id, name=name, value=value, description=description, overridable=overridable)
suffix = f'object/networks/{object_id}'
return self._http_request('PUT', suffix, json_data=data)
def update_host_objects(self, name: str, value: str, description: str, overridable: bool, object_id: str) -> Dict:
data = assign_params(id=object_id, name=name, value=value, description=description, overridable=overridable)
suffix = f'object/hosts/{object_id}'
return self._http_request('PUT', suffix, json_data=data)
def delete_network_objects(self, object_id: str) -> Dict:
suffix = f'object/networks/{object_id}'
return self._http_request('DELETE', suffix)
def delete_host_objects(self, object_id: str) -> Dict:
suffix = f'object/hosts/{object_id}'
return self._http_request('DELETE', suffix)
def get_network_groups_objects(self, limit: int, offset: int, object_id: str) -> Dict:
end_suffix = f'/{object_id}' if object_id else f'?expanded=true&limit={limit}&offset={offset}'
suffix = f'object/networkgroups{end_suffix}'
return self._http_request('GET', suffix)
def get_url_groups_objects(self, limit: int, offset: int, object_id: str) -> Dict:
end_suffix = f'/{object_id}' if object_id else f'?expanded=true&limit={limit}&offset={offset}'
suffix = f'object/urlgroups{end_suffix}'
return self._http_request('GET', suffix)
def create_network_groups_objects(
self, name: str, ids: str, values: str, description: str, overridable: bool) -> Dict:
objects = [{'id': curr_id} for curr_id in argToList(ids)]
values = [{'value': curr_value} for curr_value in argToList(values)]
data = assign_params(
name=name, objects=objects, literals=values, description=description, overridable=overridable)
suffix = 'object/networkgroups'
return self._http_request('POST', suffix, json_data=data)
def update_network_groups_objects(
self, name: str, ids: str, values: str, group_id: str, description: str, overridable: bool) -> Dict:
objects = [{'id': curr_id} for curr_id in argToList(ids)]
values = [{'value': curr_value} for curr_value in argToList(values)]
data = assign_params(name=name, id=group_id, objects=objects, literals=values,
description=description, overridable=overridable)
suffix = f'object/networkgroups/{group_id}'
return self._http_request('PUT', suffix, json_data=data)
def update_url_groups_objects(
self, name: str, ids: str, values: str, group_id: str, description: str, overridable: bool) -> Dict:
objects = [{'id': curr_id} for curr_id in argToList(ids)]
values = [{'url': curr_value} for curr_value in argToList(values)]
data = assign_params(name=name, id=group_id, objects=objects, literals=values,
description=description, overridable=overridable)
suffix = f'object/urlgroups/{group_id}'
return self._http_request('PUT', suffix, json_data=data)
def delete_network_groups_objects(self, object_id: str) -> Dict:
suffix = f'object/networkgroups/{object_id}'
return self._http_request('DELETE', suffix)
def get_access_policy(self, limit: int, offset: int, policy_id: str) -> Dict:
end_suffix = f'/{policy_id}' if policy_id else f'?expanded=true&limit={limit}&offset={offset}'
suffix = f'policy/accesspolicies{end_suffix}'
return self._http_request('GET', suffix)
def create_access_policy(self, name: str, action: str) -> Dict:
data = {'name': name, 'defaultAction': {'action': action}}
suffix = 'policy/accesspolicies'
return self._http_request('POST', suffix, json_data=data)
def update_access_policy(self, name: str, policy_id: str, action: str, action_id: str) -> Dict:
data = {
'name': name,
'id': policy_id,
'defaultAction': {
'action': action,
'id': action_id
}}
suffix = f'policy/accesspolicies/{policy_id}'
return self._http_request('PUT', suffix, json_data=data)
def delete_access_policy(self, policy_id: str) -> Dict:
suffix = f'policy/accesspolicies/{policy_id}'
return self._http_request('DELETE', suffix)
def get_task_status(self, task_id: str) -> Dict:
suffix = f'job/taskstatuses/{task_id}'
return self._http_request('GET', suffix)
def create_policy_assignments(self, policy_id: str, device_ids: str, device_group_ids: str) -> Dict:
targets = [{'id': curr_id, 'type': 'Device'} for curr_id in argToList(device_ids)]
targets.extend([{'id': curr_id, 'type': 'DeviceGroup'} for curr_id in argToList(device_group_ids)])
data_to_post = assign_params(policy={'id': policy_id}, type='PolicyAssignment', targets=targets)
suffix = 'assignment/policyassignments'
return self._http_request('POST', suffix, json_data=data_to_post)
def update_policy_assignments(self, policy_id: str, device_ids: str, device_group_ids: str) -> Dict:
targets = [{'id': curr_id, 'type': 'Device'} for curr_id in argToList(device_ids)]
targets.extend([{'id': curr_id, 'type': 'DeviceGroup'} for curr_id in argToList(device_group_ids)])
data_to_post = assign_params(policy={'id': policy_id}, type='PolicyAssignment', targets=targets)
        suffix = f'assignment/policyassignments/{policy_id}'
        # Updating an existing assignment goes through PUT (POST only creates).
        return self._http_request('PUT', suffix, json_data=data_to_post)
def get_access_rules(self, limit: int, offset: int, policy_id: str, rule_id: str) -> Dict:
end_suffix = f'?expanded=true&limit={limit}&offset={offset}' if rule_id == '' else '/' + rule_id
suffix = f'policy/accesspolicies/{policy_id}/accessrules{end_suffix}'
return self._http_request('GET', suffix)
def create_access_rules(
self,
source_zone_object_ids: str,
destination_zone_object_ids: str,
vlan_tag_object_ids: str,
source_network_object_ids: str,
source_network_addresses: str,
destination_network_object_ids: str,
destination_network_addresses: str,
source_port_object_ids: str,
destination_port_object_ids: str,
source_security_group_tag_object_ids: str,
application_object_ids: str,
url_object_ids: str,
url_addresses: str,
enabled: bool,
name: str,
policy_id: str,
action: str
) -> Dict:
        sourceZones = {'objects': [{'id': curr_id, 'type': 'SecurityZone'}
                                   for curr_id in argToList(source_zone_object_ids)]}
        destinationZones = {'objects': [{'id': curr_id, 'type': 'SecurityZone'}
                                        for curr_id in argToList(destination_zone_object_ids)]}
        vlanTags = {'objects': [{'id': curr_id, 'type': 'vlanTags'} for curr_id in argToList(vlan_tag_object_ids)]}
sourceNetworks = assign_params(
objects=[{'id': curr_id, 'type': 'NetworkGroup'} for curr_id in argToList(source_network_object_ids)],
literals=[{'value': curr_id, 'type': 'Host'} for curr_id in argToList(source_network_addresses)])
destinationNetworks = assign_params(
objects=[{'id': curr_id, 'type': 'NetworkGroup'} for curr_id in argToList(destination_network_object_ids)],
literals=[{'value': curr_id, 'type': 'Host'} for curr_id in argToList(destination_network_addresses)])
        sourcePorts = {'objects': [{'id': curr_id, 'type': 'ProtocolPortObject'}
                                   for curr_id in argToList(source_port_object_ids)]}
        destinationPorts = {'objects': [{'id': curr_id, 'type': 'ProtocolPortObject'}
                                        for curr_id in argToList(destination_port_object_ids)]}
        sourceSecurityGroupTags = {'objects': [{'id': curr_id, 'type': 'SecurityGroupTag'}
                                               for curr_id in argToList(source_security_group_tag_object_ids)]}
        applications = {'applications': [{'id': curr_id, 'type': 'Application'}
                                         for curr_id in argToList(application_object_ids)]}
urls = assign_params(
objects=[{'id': curr_id, 'type': 'Url'} for curr_id in argToList(url_object_ids)],
literals=[{'url': curr_id, 'type': 'Url'} for curr_id in argToList(url_addresses)])
data = assign_params(name=name, action=action, enabled=enabled, sourceZones=sourceZones,
destinationZones=destinationZones, vlanTags=vlanTags, sourceNetworks=sourceNetworks,
destinationNetworks=destinationNetworks, sourcePorts=sourcePorts,
destinationPorts=destinationPorts, sourceSecurityGroupTags=sourceSecurityGroupTags,
applications=applications, urls=urls)
suffix = f'policy/accesspolicies/{policy_id}/accessrules'
return self._http_request('POST', suffix, json_data=data)
def update_access_rules(
self,
update_strategy: str,
source_zone_object_ids: str,
destination_zone_object_ids: str,
vlan_tag_object_ids: str,
source_network_object_ids: str,
source_network_addresses: str,
destination_network_object_ids: str,
destination_network_addresses: str,
source_port_object_ids: str,
destination_port_object_ids: str,
source_security_group_tag_object_ids: str,
application_object_ids: str,
url_object_ids: str,
url_addresses: str,
enabled: bool,
name: str,
policy_id: str,
action: str,
rule_id: str
) -> Dict:
suffix = f'policy/accesspolicies/{policy_id}/accessrules/{rule_id}'
sourceZones = assign_params(
objects=[{'id': curr_id, 'type': 'SecurityZone'} for curr_id in argToList(source_zone_object_ids)])
destinationZones = assign_params(
objects=[{'id': curr_id, 'type': 'SecurityZone'} for curr_id in argToList(destination_zone_object_ids)])
vlanTags = assign_params(
objects=[{'id': curr_id, 'type': 'vlanTags'} for curr_id in argToList(vlan_tag_object_ids)])
sourceNetworks = assign_params(
objects=[{'id': curr_id, 'type': 'NetworkGroup'} for curr_id in argToList(source_network_object_ids)],
literals=[{'value': curr_id, 'type': 'Host'} for curr_id in argToList(source_network_addresses)])
destinationNetworks = assign_params(
objects=[{'id': curr_id, 'type': 'NetworkGroup'} for curr_id in argToList(destination_network_object_ids)],
literals=[{'value': curr_id, 'type': 'Host'} for curr_id in argToList(destination_network_addresses)])
sourcePorts = assign_params(
objects=[{'id': curr_id, 'type': 'ProtocolPortObject'} for curr_id in argToList(source_port_object_ids)])
destinationPorts = assign_params(
objects=[{'id': curr_id, 'type': 'ProtocolPortObject'} for curr_id in
argToList(destination_port_object_ids)])
sourceSecurityGroupTags = assign_params(objects=[{'id': curr_id, 'type': 'SecurityGroupTag'} for curr_id in
argToList(source_security_group_tag_object_ids)])
applications = assign_params(
applications=[{'id': curr_id, 'type': 'Application'} for curr_id in argToList(application_object_ids)])
urls = assign_params(
objects=[{'id': curr_id, 'type': 'Url'} for curr_id in argToList(url_object_ids)],
literals=[{'url': curr_id, 'type': 'Url'} for curr_id in argToList(url_addresses)])
data = assign_params(name=name, action=action, id=rule_id, enabled=enabled, sourceZones=sourceZones,
destinationZones=destinationZones, vlanTags=vlanTags, sourceNetworks=sourceNetworks,
destinationNetworks=destinationNetworks, sourcePorts=sourcePorts,
destinationPorts=destinationPorts, sourceSecurityGroupTags=sourceSecurityGroupTags,
applications=applications, urls=urls)
data_from_get = self.get_access_rules(0, 0, rule_id=rule_id, policy_id=policy_id)
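        # Two update strategies are handled below:
        #   'override' - PUT only the fields supplied in this call, back-filling
        #                'name' and 'action' from the existing rule when omitted.
        #   otherwise  - merge: extend each inner list of the existing rule
        #                (e.g. sourceNetworks.objects) with the new items, strip
        #                the read-only 'metadata' and 'links' fields, and PUT
        #                the merged body.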
if update_strategy == 'override':
if 'name' not in data:
data['name'] = data_from_get.get('name')
if 'action' not in data:
data['action'] = data_from_get.get('action')
return self._http_request('PUT', suffix, json_data=data)
else:
for key, value in data.items():
                if isinstance(value, dict):
for in_key in value:
if in_key in data_from_get[key]:
data_from_get[key][in_key].extend(value[in_key])
else:
data_from_get[key][in_key] = value[in_key]
else:
data_from_get[key] = value
del data_from_get['metadata']
del data_from_get['links']
return self._http_request('PUT', suffix, json_data=data_from_get)
def delete_access_rules(self, policy_id, rule_id) -> Dict:
suffix = f'policy/accesspolicies/{policy_id}/accessrules/{rule_id}'
return self._http_request('DELETE', suffix)
def deploy_to_devices(self, force_deploy, ignore_warning, version, device_ids) -> Dict:
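        # The posted body follows the FMC DeploymentRequest schema; an
        # illustrative payload (all values are placeholders):
        # {
        #     "type": "DeploymentRequest",
        #     "forceDeploy": true,
        #     "ignoreWarning": true,
        #     "version": "1589372714",
        #     "deviceList": ["<device-uuid-1>", "<device-uuid-2>"]
        # }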
data_to_post = assign_params(forceDeploy=force_deploy, ignoreWarning=ignore_warning, version=version,
deviceList=argToList(device_ids), type="DeploymentRequest")
suffix = 'deployment/deploymentrequests'
return self._http_request('POST', suffix, json_data=data_to_post)
''' HELPER FUNCTIONS '''
def switch_list_to_list_counter(data: Union[Dict, List]) -> Union[Dict, List]:
"""Receives a list of dictionaries or a dictionary,
and if one of the keys contains a list or dictionary with lists,
returns the size of the lists
Examples:
>>> switch_list_to_list_counter({'name': 'n', 'type': 't', 'devices': [1, 2, 3]})
{'name': 'name', 'type': 'type', 'devices': 3}
>>> switch_list_to_list_counter({'name': 'n', 'type': 't', 'devices': {'new': [1, 2, 3], 'old': [1, 2, 3]}}
{'name': 'name', 'type': 'type', 'devices': 6}
>>> switch_list_to_list_counter({'name': 'n', 'type': 't', 'devices': {'new': 'my new'}
{'name': 'name', 'type': 'type', 'devices': 1}
:type data: ``list`` or ``dict``
:param data: context entry
:return: ``list`` or ``dict``
:rtype: context entry for human readable`
"""
if isinstance(data, list):
return [switch_list_to_list_counter(dat) for dat in data]
new_data = {}
for item in data:
        if isinstance(data[item], list):
            new_data[item] = len(data[item])
        elif data[item] and isinstance(data[item], dict):
            counter = 0
            for in_item in data[item]:
                if isinstance(data[item][in_item], list):
counter += len(data[item][in_item])
elif data[item][in_item]:
counter = 1 if counter == 0 else counter
new_data[item] = counter
else:
new_data[item] = data[item]
return new_data
def raw_response_to_context_list(list_key: List, items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives a dictionary or list of dictionaries and returns only the keys that exist in the list_key
and changes the keys by Context Standards
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:type list_key: ``list``
:keyword list_key: Selected keys to copy on context_entry
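    Example (illustrative; assumes OUTPUT_KEYS_DICTIONARY maps 'id' to 'ID',
    as defined earlier in this integration):
        >>> raw_response_to_context_list(['id', 'name'], {'id': '1', 'name': 'ssh'})
        {'ID': '1', 'Name': 'ssh'}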
"""
if isinstance(items, list):
return [raw_response_to_context_list(list_key, item) for item in items]
list_to_output = {OUTPUT_KEYS_DICTIONARY.get(key, key.capitalize()): items.get(key, '') for key in list_key}
return list_to_output
def raw_response_to_context_network_groups(items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives raw response and returns Context entry to network groups command
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:return: ``list`` or ``dict``
:rtype: context entry`
"""
if isinstance(items, list):
return [raw_response_to_context_network_groups(item) for item in items]
return {
'Name': items.get('name'),
'ID': items.get('id'),
'Overridable': items.get('overridable'),
'Description': items.get('description'),
'Objects': [
{
'Name': obj.get('name'),
'ID': obj.get('id'),
'Type': obj.get('type')
} for obj in items.get('objects', [])
],
'Addresses': [
{
'Value': obj.get('value'),
'Type': obj.get('type')
} for obj in items.get('literals', [])
]
}
def raw_response_to_context_url_groups(items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives raw response and returns Context entry to url groups command
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:return: ``list`` or ``dict``
:rtype: context entry`
"""
if isinstance(items, list):
return [raw_response_to_context_url_groups(item) for item in items]
return {
'Name': items.get('name'),
'ID': items.get('id'),
'Overridable': items.get('overridable'),
'Description': items.get('description'),
'Objects': [
{
'Name': obj.get('name'),
'ID': obj.get('id'),
'Type': obj.get('type')
} for obj in items.get('objects', [])
],
'Addresses': [
{
'Url': obj.get('url'),
'Type': obj.get('type')
} for obj in items.get('literals', [])
]
}
def raw_response_to_context_policy_assignment(items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives raw response and returns Context entry to policy assignment command
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:return: ``list`` or ``dict``
:rtype: context entry`
"""
if isinstance(items, list):
return [raw_response_to_context_policy_assignment(item) for item in items]
return {
'Name': items.get('name'),
'ID': items.get('id'),
'PolicyName': items.get('policy', {}).get('name', ''),
'PolicyID': items.get('policy', {}).get('id', ''),
'PolicyDescription': items.get('policy', {}).get('description', ''),
'Targets': [
{
'Name': obj.get('name'),
'ID': obj.get('id'),
'Type': obj.get('type')
} for obj in items.get('targets', [])
]
}
def raw_response_to_context_access_policy(items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives raw response and returns Context entry to access policy command
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:return: ``list`` or ``dict``
:rtype: context entry`
"""
if isinstance(items, list):
return [raw_response_to_context_access_policy(item) for item in items]
return {
'Name': items.get('name'),
'ID': items.get('id'),
'DefaultActionID': items.get('defaultAction', {}).get('id', '')
}
def raw_response_to_context_rules(items: Union[Dict, List]) -> Union[Dict, List]:
"""Receives raw response and returns Context entry to rules command
:type items: ``list`` or ``dict``
:param items: list of dict or dict of data from http request
:return: ``list`` or ``dict``
:rtype: context entry`
"""
if isinstance(items, list):
return [raw_response_to_context_rules(item) for item in items]
return {
'ID': items.get('id'),
'Name': items.get('name'),
'Action': items.get('action'),
'Enabled': items.get('enabled'),
'SendEventsToFMC': items.get('sendEventsToFMC'),
'RuleIndex': items.get('metadata', {}).get('ruleIndex', ''),
'Section': items.get('metadata', {}).get('section', ''),
'Category': items.get('metadata', {}).get('category', ''),
'Urls': {
'Addresses': [{
'URL': obj.get('url', '')
} for obj in items.get('urls', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', '')
} for obj in items.get('urls', {}).get('objects', [])
]
},
'VlanTags': {
'Numbers': [{
'EndTag': obj.get('endTag', ''),
'StartTag': obj.get('startTag', '')
} for obj in items.get('vlanTags', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('vlanTags', {}).get('objects', [])
]
},
'SourceZones': {
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('sourceZones', {}).get('objects', [])
]
},
'Applications': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', '')
} for obj in items.get('applications', {}).get('applications', [])
],
'DestinationZones': {
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('destinationZones', {}).get('objects', [])
]
},
'SourceNetworks': {
'Addresses': [{
'Type': obj.get('type', ''),
'Value': obj.get('value', '')
} for obj in items.get('sourceNetworks', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('sourceNetworks', {}).get('objects', [])
]
},
'DestinationNetworks': {
'Addresses': [{
'Type': obj.get('type', ''),
'Value': obj.get('value', '')
} for obj in items.get('destinationNetworks', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('destinationNetworks', {}).get('objects', [])
]
},
'SourcePorts': {
'Addresses': [{
'Port': obj.get('port', ''),
'Protocol': obj.get('protocol', '')
} for obj in items.get('sourcePorts', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', ''),
'Protocol': obj.get('protocol', '')
} for obj in items.get('sourcePorts', {}).get('objects', [])
]
},
'DestinationPorts': {
'Addresses': [{
'Port': obj.get('port', ''),
'Protocol': obj.get('protocol', '')
} for obj in items.get('destinationPorts', {}).get('literals', [])
],
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', ''),
'Protocol': obj.get('protocol', '')
} for obj in items.get('destinationPorts', {}).get('objects', [])
]
},
'SourceSecurityGroupTags': {
'Objects': [{
'Name': obj.get('name', ''),
'ID': obj.get('id', ''),
'Type': obj.get('type', '')
} for obj in items.get('sourceSecurityGroupTags', {}).get('objects', [])
]
}
}
''' COMMANDS '''
def list_zones_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'securityzones')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List zones:'
context_entry = [{
'ID': item.get('id', ''),
'Name': item.get('name', ''),
'InterfaceMode': item.get('interfaceMode', ''),
'Interfaces': [{
                'Name': obj.get('name', ''),
                'ID': obj.get('id', '')
            } for obj in item.get('interfaces', [])
]
} for item in items
]
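        # The key below uses XSOAR's context-linking syntax: entries whose ID
        # matches an existing context object are merged instead of duplicated.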
context = {
f'{INTEGRATION_CONTEXT_NAME}.Zone(val.ID && val.ID === obj.ID)': context_entry
}
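        # Nested lists are replaced with their item counts so the markdown
        # table stays readable (see switch_list_to_list_counter above).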
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'InterfaceMode', 'Interfaces']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any zone.', {}, {}
def list_ports_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'ports')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List ports:'
list_to_output = ['id', 'name', 'protocol', 'port']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Port(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Protocol', 'Port']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any port.', {}, {}
def list_url_categories_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'urlcategories')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List url categories:'
list_to_output = ['id', 'name']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Category(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any category.', {}, {}
def get_network_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', '50')
offset = args.get('offset', '0')
object_id = args.get('object_id', '')
raw_response = client.get_network_objects(limit, offset, object_id)
items: Union[List, Dict] = raw_response.get('items') # type:ignore
if items or 'id' in raw_response:
title = f'{INTEGRATION_NAME} - List network objects:'
if 'id' in raw_response:
title = f'{INTEGRATION_NAME} - get network object {object_id}'
items = raw_response
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Network(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any network object.', {}, {}
def get_host_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', '50')
offset = args.get('offset', '0')
object_id = args.get('object_id', '')
raw_response = client.get_hosts_objects(limit, offset, object_id)
items: Union[List, Dict] = raw_response.get('items') # type:ignore
if items or 'id' in raw_response:
title = f'{INTEGRATION_NAME} - List host objects:'
if 'id' in raw_response:
title = f'{INTEGRATION_NAME} - get host object {object_id}'
items = raw_response
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Host(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any host object.', {}, {}
def create_network_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
name: str = args.get('name') # type:ignore
value: str = args.get('value') # type:ignore
description: str = args.get('description', '') # type:ignore
overridable = args.get('overridable', '')
raw_response = client.create_network_objects(name, value, description, overridable)
title = f'{INTEGRATION_NAME} - network object has been created.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Network(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def create_host_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
name: str = args.get('name') # type:ignore
value: str = args.get('value') # type:ignore
description: str = args.get('description', '') # type:ignore
overridable = args.get('overridable', '')
raw_response = client.create_host_objects(name, value, description, overridable)
title = f'{INTEGRATION_NAME} - host object has been created.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Host(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def update_network_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id: str = args.get('id') # type:ignore
name: str = args.get('name') # type:ignore
value: str = args.get('value') # type:ignore
description: str = args.get('description', '') # type:ignore
overridable = args.get('overridable', '')
raw_response = client.update_network_objects(name, value, description, overridable, object_id)
title = f'{INTEGRATION_NAME} - network object has been updated.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Network(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def update_host_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id: str = args.get('id') # type:ignore
name: str = args.get('name') # type:ignore
value: str = args.get('value') # type:ignore
description: str = args.get('description', '') # type:ignore
overridable = args.get('overridable', '')
raw_response = client.update_host_objects(name, value, description, overridable, object_id)
title = f'{INTEGRATION_NAME} - host object has been updated.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Host(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def delete_network_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id: str = args.get('id') # type:ignore
raw_response = client.delete_network_objects(object_id)
title = f'{INTEGRATION_NAME} - network object has been deleted.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Network(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def delete_host_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id: str = args.get('id') # type:ignore
raw_response = client.delete_host_objects(object_id)
title = f'{INTEGRATION_NAME} - host object has been deleted.'
list_to_output = ['id', 'name', 'value', 'overridable', 'description']
context_entry = raw_response_to_context_list(list_to_output, raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Host(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Value', 'Overridable', 'Description']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def get_network_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id = args.get('id', '')
limit = args.get('limit', '50')
offset = args.get('offset', '0')
raw_response = client.get_network_groups_objects(limit, offset, object_id)
items: Union[List, Dict] = raw_response.get('items') # type:ignore
if items or 'id' in raw_response:
        title = f'{INTEGRATION_NAME} - List of network group objects:'
if 'id' in raw_response:
title = f'{INTEGRATION_NAME} - network group object:'
items = raw_response
context_entry = raw_response_to_context_network_groups(items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.NetworkGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
raise DemistoException(f'{INTEGRATION_NAME} - Could not get the network groups.')
def get_url_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id = args.get('id', '')
limit = args.get('limit', '50')
offset = args.get('offset', '0')
raw_response = client.get_url_groups_objects(limit, offset, object_id)
items: Union[List, Dict] = raw_response.get('items') # type:ignore
if items or 'id' in raw_response:
        title = f'{INTEGRATION_NAME} - List of url group objects:'
if 'id' in raw_response:
title = f'{INTEGRATION_NAME} - url group object:'
items = raw_response
context_entry = raw_response_to_context_url_groups(items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.URLGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
raise DemistoException(f'{INTEGRATION_NAME} - Could not get the URL groups.')
def create_network_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
name: str = args.get('name') # type:ignore
ids = args.get('network_objects_id_list', '')
values = args.get('network_address_list', '')
description = args.get('description', '')
overridable = args.get('overridable', '')
if ids or values:
raw_response = client.create_network_groups_objects(name, ids, values, description, overridable)
title = f'{INTEGRATION_NAME} - network group has been created.'
context_entry = raw_response_to_context_network_groups(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.NetworkGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
        raise DemistoException(f'{INTEGRATION_NAME} - Could not create the new group; missing value or ID.')
def update_network_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
group_id: str = args.get('id') # type:ignore
name: str = args.get('name') # type:ignore
ids = args.get('network_objects_id_list', '')
values = args.get('network_address_list', '')
description = args.get('description', '')
overridable = args.get('overridable', '')
if ids or values:
raw_response = client.update_network_groups_objects(name, ids, values, group_id, description, overridable)
title = f'{INTEGRATION_NAME} - network group has been updated.'
context_entry = raw_response_to_context_network_groups(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.NetworkGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
        raise DemistoException(f'{INTEGRATION_NAME} - Could not update the group; missing value or ID.')
def update_url_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
group_id: str = args.get('id') # type:ignore
name: str = args.get('name') # type:ignore
ids = args.get('url_objects_id_list', '')
values = args.get('url_list', '')
description = args.get('description', '')
overridable = args.get('overridable', '')
if ids or values:
raw_response = client.update_url_groups_objects(name, ids, values, group_id, description, overridable)
title = f'{INTEGRATION_NAME} - url group has been updated.'
context_entry = raw_response_to_context_url_groups(raw_response)
context = {
            f'{INTEGRATION_CONTEXT_NAME}.URLGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
        raise DemistoException(f'{INTEGRATION_NAME} - Could not update the group; missing value or ID.')
def delete_network_groups_objects_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
object_id = args['id']
raw_response = client.delete_network_groups_objects(object_id)
    title = f'{INTEGRATION_NAME} - network group - {object_id} - has been deleted.'
context_entry = raw_response_to_context_network_groups(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.NetworkGroups(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Addresses', 'Objects']
entry_white_list_count = switch_list_to_list_counter(context_entry)
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def get_access_policy_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
policy_id = args.get('id', '')
limit = args.get('limit', '50')
offset = args.get('offset', '0')
raw_response = client.get_access_policy(limit, offset, policy_id)
items: Union[List, Dict] = raw_response.get('items') # type:ignore
if items or 'id' in raw_response:
title = f'{INTEGRATION_NAME} - List access policy:'
if 'id' in raw_response:
title = f'{INTEGRATION_NAME} - get access policy'
items = raw_response
context_entry = raw_response_to_context_access_policy(items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Policy(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'DefaultActionID']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any access policy.', {}, {}
def create_access_policy_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
name: str = args.get('name') # type:ignore
action: str = args.get('action') # type:ignore
raw_response = client.create_access_policy(name, action)
title = f'{INTEGRATION_NAME} - access policy has been created.'
context_entry = raw_response_to_context_access_policy(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Policy(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'DefaultActionID']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def update_access_policy_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
name: str = args.get('name') # type:ignore
policy_id: str = args.get('id') # type:ignore
action: str = args.get('action') # type:ignore
action_id: str = args.get('default_action_id') # type:ignore
raw_response = client.update_access_policy(name, policy_id, action, action_id)
title = f'{INTEGRATION_NAME} - access policy has been updated.'
context_entry = raw_response_to_context_access_policy(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Policy(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'DefaultActionID']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def delete_access_policy_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
policy_id: str = args.get('id') # type:ignore
raw_response = client.delete_access_policy(policy_id)
title = f'{INTEGRATION_NAME} - access policy deleted.'
context_entry = raw_response_to_context_access_policy(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Policy(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'DefaultActionID']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
def list_security_group_tags_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'securitygrouptags')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List security group tags:'
list_to_output = ['id', 'name', 'tag']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.SecurityGroupTags(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Tag']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any security group tags.', {}, {}
def list_ise_security_group_tags_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'isesecuritygrouptags')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List ise security group tags:'
list_to_output = ['id', 'name', 'tag']
context_entry = raw_response_to_context_list(list_to_output, items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.IseSecurityGroupTags(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Tag']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any ise security group tags.', {}, {}
def list_vlan_tags_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'vlantags')
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List vlan tags:'
context_entry = [
{
'Name': item.get('name'),
'ID': item.get('id'),
'Overridable': item.get('overridable'),
'Description': item.get('description'),
'StartTag': item.get('data', {}).get('startTag'),
'EndTag': item.get('data', {}).get('endTag')
} for item in items
]
context = {
f'{INTEGRATION_CONTEXT_NAME}.VlanTags(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'StartTag', 'EndTag']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any vlan tags.', {}, {}
def list_vlan_tags_group_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'vlangrouptags')
items = raw_response.get('items')
if items:
        title = f'{INTEGRATION_NAME} - List of vlan tag group objects:'
context_entry = [
{
'Name': item.get('name'),
'ID': item.get('id'),
'Overridable': item.get('overridable'),
'Description': item.get('description'),
'Objects': [
{
'Name': obj.get('name'),
'ID': obj.get('id'),
'Overridable': obj.get('overridable'),
'Description': obj.get('description'),
'StartTag': obj.get('data', {}).get('startTag'),
'EndTag': obj.get('data', {}).get('endTag')
} for obj in item.get('object', [])
]
} for item in items
]
context = {
f'{INTEGRATION_CONTEXT_NAME}.VlanTagsGroup(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'Overridable', 'Description', 'Objects']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
        return f'{INTEGRATION_NAME} - Could not find any vlan tag groups.', {}, {}
def list_applications_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_list(limit, offset, 'applications')
items = raw_response.get('items')
if items:
context_entry = [
{
'Name': item.get('name'),
'ID': item.get('id'),
'Risk': item.get('risk', {}).get('name', ''),
'AppProductivity': item.get('appProductivity', {}).get('name', ''),
'ApplicationTypes': [
{
'Name': obj.get('name')
} for obj in item.get('applicationTypes', [])
],
'AppCategories': [
{
'Name': obj.get('name'),
'ID': obj.get('id'),
'Count': obj.get('metadata', {}).get('count', '')
} for obj in item.get('appCategories', [])
]
} for item in items
]
title = f'{INTEGRATION_NAME} - List of applications objects:'
context = {
f'{INTEGRATION_CONTEXT_NAME}.Applications(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'Risk', 'AppProductivity', 'ApplicationTypes', 'AppCategories']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any applications.', {}, {}
def get_access_rules_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
policy_id: str = args.get('policy_id') # type:ignore
rule_id = args.get('rule_id', '')
raw_response = client.get_access_rules(limit, offset, policy_id, rule_id)
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List of access rules:'
elif 'id' in raw_response:
title = f'{INTEGRATION_NAME} - access rule:'
items = raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any access rule.', {}, {}
context_entry = raw_response_to_context_rules(items)
entry_white_list_count = switch_list_to_list_counter(context_entry)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Rule(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Action', 'Enabled', 'SendEventsToFMC', 'RuleIndex', 'Section', 'Category',
'Urls', 'VlanTags', 'SourceZones', 'Applications', 'DestinationZones', 'SourceNetworks',
'DestinationNetworks', 'SourcePorts', 'DestinationPorts', 'SourceSecurityGroupTags']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def create_access_rules_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
source_zone_object_ids = args.get('source_zone_object_ids', '')
destination_zone_object_ids = args.get('destination_zone_object_ids', '')
vlan_tag_object_ids = args.get('vlan_tag_object_ids', '')
source_network_object_ids = args.get('source_network_object_ids', '')
source_network_addresses = args.get('source_network_addresses', '')
destination_network_object_ids = args.get('destination_network_object_ids', '')
destination_network_addresses = args.get('destination_network_addresses', '')
source_port_object_ids = args.get('source_port_object_ids', '')
destination_port_object_ids = args.get('destination_port_object_ids', '')
source_security_group_tag_object_ids = args.get('source_security_group_tag_object_ids', '')
application_object_ids = args.get('application_object_ids', '')
url_object_ids = args.get('url_object_ids', '')
url_addresses = args.get('url_addresses', '')
enabled = args.get('enabled', '')
name = args.get('rule_name', '')
policy_id = args.get('policy_id', '')
action = args.get('action', '')
raw_response = client.create_access_rules(source_zone_object_ids,
destination_zone_object_ids,
vlan_tag_object_ids,
source_network_object_ids,
source_network_addresses,
destination_network_object_ids,
destination_network_addresses,
source_port_object_ids,
destination_port_object_ids,
source_security_group_tag_object_ids,
application_object_ids,
url_object_ids,
url_addresses,
enabled,
name,
policy_id,
action)
title = f'{INTEGRATION_NAME} - the new access rule:'
context_entry = raw_response_to_context_rules(raw_response)
entry_white_list_count = switch_list_to_list_counter(context_entry)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Rule(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Action', 'Enabled', 'SendEventsToFMC', 'RuleIndex', 'Section', 'Category',
'Urls', 'VlanTags', 'SourceZones', 'Applications', 'DestinationZones', 'SourceNetworks',
'DestinationNetworks', 'SourcePorts', 'DestinationPorts', 'SourceSecurityGroupTags']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def update_access_rules_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
update_strategy: str = args.get('update_strategy') # type:ignore
source_zone_object_ids = args.get('source_zone_object_ids', '')
destination_zone_object_ids = args.get('destination_zone_object_ids', '')
vlan_tag_object_ids = args.get('vlan_tag_object_ids', '')
source_network_object_ids = args.get('source_network_object_ids', '')
source_network_addresses = args.get('source_network_addresses', '')
destination_network_object_ids = args.get('destination_network_object_ids', '')
destination_network_addresses = args.get('destination_network_addresses', '')
source_port_object_ids = args.get('source_port_object_ids', '')
destination_port_object_ids = args.get('destination_port_object_ids', '')
source_security_group_tag_object_ids = args.get('source_security_group_tag_object_ids', '')
application_object_ids = args.get('application_object_ids', '')
url_object_ids = args.get('url_object_ids', '')
url_addresses = args.get('url_addresses', '')
enabled = args.get('enabled', '')
name = args.get('rule_name', '')
policy_id = args.get('policy_id', '')
action = args.get('action', '')
rule_id: str = args.get('rule_id') # type:ignore
raw_response = client.update_access_rules(update_strategy,
source_zone_object_ids,
destination_zone_object_ids,
vlan_tag_object_ids,
source_network_object_ids,
source_network_addresses,
destination_network_object_ids,
destination_network_addresses,
source_port_object_ids,
destination_port_object_ids,
source_security_group_tag_object_ids,
application_object_ids,
url_object_ids,
url_addresses,
enabled,
name,
policy_id,
action,
rule_id)
title = f'{INTEGRATION_NAME} - access rule:'
context_entry = raw_response_to_context_rules(raw_response)
entry_white_list_count = switch_list_to_list_counter(context_entry)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Rule(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Action', 'Enabled', 'SendEventsToFMC', 'RuleIndex', 'Section', 'Category',
'Urls', 'VlanTags', 'SourceZones', 'Applications', 'DestinationZones', 'SourceNetworks',
'DestinationNetworks', 'SourcePorts', 'DestinationPorts', 'SourceSecurityGroupTags']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def delete_access_rules_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
policy_id = args.get('policy_id')
rule_id = args.get('rule_id')
raw_response = client.delete_access_rules(policy_id, rule_id)
title = f'{INTEGRATION_NAME} - deleted access rule:'
context_entry = raw_response_to_context_rules(raw_response)
entry_white_list_count = switch_list_to_list_counter(context_entry)
context = {
f'{INTEGRATION_CONTEXT_NAME}.Rule(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'Action', 'Enabled', 'SendEventsToFMC', 'RuleIndex', 'Section', 'Category',
'Urls', 'VlanTags', 'SourceZones', 'Applications', 'DestinationZones', 'SourceNetworks',
'DestinationNetworks', 'SourcePorts', 'DestinationPorts', 'SourceSecurityGroupTags']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def list_policy_assignments_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.list_policy_assignments(limit, offset)
items = raw_response.get('items')
if items:
title = f'{INTEGRATION_NAME} - List of policy assignments:'
context_entry = raw_response_to_context_policy_assignment(items)
context = {
f'{INTEGRATION_CONTEXT_NAME}.PolicyAssignments(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'PolicyName', 'PolicyID', 'PolicyDescription', 'Targets']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any policy assignments.', {}, {}
def create_policy_assignments_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
device_ids: str = args.get('device_ids') # type:ignore
device_group_ids: str = args.get('device_group_ids') # type:ignore
policy_id: str = args.get('policy_id') # type:ignore
raw_response = client.create_policy_assignments(policy_id, device_ids, device_group_ids)
    title = f'{INTEGRATION_NAME} - Policy assignment has been created.'
context_entry = raw_response_to_context_policy_assignment(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.PolicyAssignments(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'PolicyName', 'PolicyID', 'PolicyDescription', 'Targets']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def update_policy_assignments_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
device_ids: str = args.get('device_ids') # type:ignore
device_group_ids: str = args.get('device_group_ids') # type:ignore
policy_id: str = args.get('policy_id') # type:ignore
raw_response = client.update_policy_assignments(policy_id, device_ids, device_group_ids)
    title = f'{INTEGRATION_NAME} - policy assignment has been updated.'
context_entry = raw_response_to_context_policy_assignment(raw_response)
context = {
f'{INTEGRATION_CONTEXT_NAME}.PolicyAssignments(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['ID', 'Name', 'PolicyName', 'PolicyID', 'PolicyDescription', 'Targets']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def get_deployable_devices_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
container_uuid = args.get('container_uuid', '')
raw_response = client.get_deployable_devices(limit, offset, container_uuid)
items = raw_response.get('items')
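    # When container_uuid is supplied, the endpoint returns per-device
    # deployment statuses; otherwise it returns the list of deployable devices.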
if container_uuid:
if items:
context_entry = [{
'EndTime': item.get('endTime', ''),
'ID': item.get('id', ''),
'Name': item.get('name', ''),
'StartTime': item.get('startTime', ''),
'Status': item.get('status', ''),
'Type': item.get('type', '')
} for item in items
]
title = f'{INTEGRATION_NAME} - List of devices status pending deployment:'
context = {
f'{INTEGRATION_CONTEXT_NAME}.PendingDeployment(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['EndTime', 'ID', 'Name', 'StartTime', 'Status', 'Type']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
if items:
context_entry = [{
'CanBeDeployed': item.get('canBeDeployed', ''),
'UpToDate': item.get('upToDate', ''),
'DeviceID': item.get('device', {}).get('id', ''),
'DeviceName': item.get('device', {}).get('name', ''),
'DeviceType': item.get('device', {}).get('type', ''),
'Version': item.get('version', '')
} for item in items
]
title = f'{INTEGRATION_NAME} - List of deployable devices:'
context = {
f'{INTEGRATION_CONTEXT_NAME}.DeployableDevices(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['CanBeDeployed', 'UpToDate', 'DeviceID', 'DeviceName', 'DeviceType', 'Version']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any deployable devices.', {}, {}
def get_device_records_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
limit = args.get('limit', 50)
offset = args.get('offset', 0)
raw_response = client.get_device_records(limit, offset)
items = raw_response.get('items')
if items:
context_entry = [{
'ID': item.get('id', ''),
'Name': item.get('name', ''),
'HostName': item.get('hostName', ''),
'Type': item.get('type', ''),
'DeviceGroupID': item.get('deviceGroup', {}).get('id', '')
} for item in items
]
title = f'{INTEGRATION_NAME} - List of device records:'
context = {
f'{INTEGRATION_CONTEXT_NAME}.DeviceRecords(val.ID && val.ID === obj.ID)': context_entry
}
presented_output = ['ID', 'Name', 'HostName', 'Type', 'DeviceGroupID']
human_readable = tableToMarkdown(title, context_entry, headers=presented_output)
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any device records.', {}, {}
def deploy_to_devices_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
force_deploy = args.get('force_deploy', '')
ignore_warning = args.get('ignore_warning', '')
version = args.get('version', '')
device_list = args.get('device_ids', '')
raw_response = client.deploy_to_devices(force_deploy, ignore_warning, version, device_list)
    title = f'{INTEGRATION_NAME} - deployment request has been sent to the devices.'
context_entry = {
'TaskID': raw_response.get('metadata', {}).get('task', {}).get('id', ''),
'ForceDeploy': raw_response.get('forceDeploy'),
'IgnoreWarning': raw_response.get('ignoreWarning'),
'Version': raw_response.get('version'),
'DeviceList': raw_response.get('deviceList')
}
context = {
f'{INTEGRATION_CONTEXT_NAME}.Deploy(val.ID && val.ID === obj.ID)': context_entry
}
entry_white_list_count = switch_list_to_list_counter(context_entry)
presented_output = ['TaskID', 'ForceDeploy', 'IgnoreWarning', 'Version', 'DeviceList']
human_readable = tableToMarkdown(title, entry_white_list_count, headers=presented_output)
return human_readable, context, raw_response
def get_task_status_command(client: Client, args: Dict) -> Tuple[str, Dict, Dict]:
task_id: str = args.get('task_id') # type:ignore
raw_response = client.get_task_status(task_id)
if 'status' in raw_response:
context_entry = {
'Status': raw_response.get('status')
}
title = f'{INTEGRATION_NAME} - {task_id} status:'
context = {
f'{INTEGRATION_CONTEXT_NAME}.TaskStatus(val.ID && val.ID === obj.ID)': context_entry
}
human_readable = tableToMarkdown(title, context_entry, headers=['Status'])
return human_readable, context, raw_response
else:
return f'{INTEGRATION_NAME} - Could not find any status.', {}, {}
''' COMMANDS MANAGER / SWITCH PANEL '''
def main(): # pragma: no cover
params = demisto.params()
base_url = params.get('url')
username = params.get('credentials').get('identifier')
password = params.get('credentials').get('password')
verify_ssl = not params.get('insecure', False)
proxy = params.get('proxy')
client = Client(base_url=base_url, verify=verify_ssl, proxy=proxy, auth=(username, password))
command = demisto.command()
client.login()
LOG(f'Command being called is {command}')
try:
if demisto.command() == 'test-module':
return_outputs('ok')
        # Login is performed at the beginning of each flow; if the login fails, an error is returned.
elif demisto.command() == 'ciscofp-list-zones':
return_outputs(*list_zones_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-ports':
return_outputs(*list_ports_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-url-categories':
return_outputs(*list_url_categories_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-network-object':
return_outputs(*get_network_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-network-object':
return_outputs(*create_network_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-network-object':
return_outputs(*update_network_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-delete-network-object':
return_outputs(*delete_network_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-host-object':
return_outputs(*get_host_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-host-object':
return_outputs(*create_host_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-host-object':
return_outputs(*update_host_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-delete-host-object':
return_outputs(*delete_host_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-network-groups-object':
return_outputs(*get_network_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-network-groups-objects':
return_outputs(*create_network_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-network-groups-objects':
return_outputs(*update_network_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-delete-network-groups-objects':
return_outputs(*delete_network_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-url-groups-object':
return_outputs(*get_url_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-url-groups-objects':
return_outputs(*update_url_groups_objects_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-access-policy':
return_outputs(*get_access_policy_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-access-policy':
return_outputs(*create_access_policy_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-access-policy':
return_outputs(*update_access_policy_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-delete-access-policy':
return_outputs(*delete_access_policy_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-security-group-tags':
return_outputs(*list_security_group_tags_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-ise-security-group-tag':
return_outputs(*list_ise_security_group_tags_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-vlan-tags':
return_outputs(*list_vlan_tags_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-vlan-tags-group':
return_outputs(*list_vlan_tags_group_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-applications':
return_outputs(*list_applications_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-access-rules':
return_outputs(*get_access_rules_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-access-rules':
return_outputs(*create_access_rules_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-access-rules':
return_outputs(*update_access_rules_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-delete-access-rules':
return_outputs(*delete_access_rules_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-list-policy-assignments':
return_outputs(*list_policy_assignments_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-create-policy-assignments':
return_outputs(*create_policy_assignments_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-update-policy-assignments':
return_outputs(*update_policy_assignments_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-deployable-devices':
return_outputs(*get_deployable_devices_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-device-records':
return_outputs(*get_device_records_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-deploy-to-devices':
return_outputs(*deploy_to_devices_command(client, demisto.args()))
elif demisto.command() == 'ciscofp-get-task-status':
return_outputs(*get_task_status_command(client, demisto.args()))
except Exception as e:
err_msg = f'Error in {INTEGRATION_NAME} Integration [{e}]'
return_error(err_msg, error=e)
if __name__ == 'builtins': # pragma: no cover
main()
| 49.531152 | 119 | 0.623849 | 8,979 | 78,705 | 5.221628 | 0.036307 | 0.042231 | 0.022182 | 0.019708 | 0.847691 | 0.818257 | 0.804394 | 0.789186 | 0.771142 | 0.743052 | 0 | 0.00131 | 0.243301 | 78,705 | 1,588 | 120 | 49.562343 | 0.785933 | 0.037012 | 0 | 0.537092 | 0 | 0 | 0.205605 | 0.060622 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05638 | false | 0.001484 | 0.002967 | 0 | 0.133531 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a98297af21ef9f92c5fb0d94f1185780bd44e6c | 89 | py | Python | src/pytorch_revgrad/__init__.py | adwardy/pytorch-revgrad | f39d24318df93106aa469c22a701b7b392f60030 | [
"MIT"
] | 1 | 2022-03-11T03:08:04.000Z | 2022-03-11T03:08:04.000Z | src/pytorch_revgrad/__init__.py | hangtingchen/pytorch-revgrad | f39d24318df93106aa469c22a701b7b392f60030 | [
"MIT"
] | 3 | 2022-03-07T03:04:34.000Z | 2022-03-25T12:28:09.000Z | lib/model/faster_rcnn/pytorch_revgrad/__init__.py | yangdb/RD-IOD | 64beb2e1efe823185adc0feb338a900f1a7df7a7 | [
"AFL-1.1"
] | 1 | 2021-05-06T05:35:06.000Z | 2021-05-06T05:35:06.000Z | from .module import RevGrad # noqa: F401
from .version import __version__ # noqa: F401
| 29.666667 | 46 | 0.752809 | 12 | 89 | 5.25 | 0.583333 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082192 | 0.179775 | 89 | 2 | 47 | 44.5 | 0.780822 | 0.235955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ac41b01a8ce1af189bcb25143d38564c6afc0e6 | 32,308 | py | Python | designate/tests/test_api/test_v1/test_records.py | ISCAS-VDI/designate-base | bd945607e3345fbef8645c3441e96b032b70b098 | [
"Apache-2.0"
] | null | null | null | designate/tests/test_api/test_v1/test_records.py | ISCAS-VDI/designate-base | bd945607e3345fbef8645c3441e96b032b70b098 | [
"Apache-2.0"
] | null | null | null | designate/tests/test_api/test_v1/test_records.py | ISCAS-VDI/designate-base | bd945607e3345fbef8645c3441e96b032b70b098 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2012 Managed I.T.
#
# Author: Kiall Mac Innes <kiall@managedit.ie>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mock import patch
import oslo_messaging as messaging
from oslo_log import log as logging
from designate.central import service as central_service
from designate.tests.test_api.test_v1 import ApiV1Test
LOG = logging.getLogger(__name__)
class ApiV1RecordsTest(ApiV1Test):
def setUp(self):
super(ApiV1RecordsTest, self).setUp()
self.zone = self.create_zone()
self.recordset = self.create_recordset(self.zone, 'A')
def test_get_record_schema(self):
response = self.get('schemas/record')
self.assertIn('description', response.json)
self.assertIn('links', response.json)
self.assertIn('title', response.json)
self.assertIn('id', response.json)
self.assertIn('additionalProperties', response.json)
self.assertIn('properties', response.json)
self.assertIn('id', response.json['properties'])
self.assertIn('domain_id', response.json['properties'])
self.assertIn('type', response.json['properties'])
self.assertIn('data', response.json['properties'])
self.assertIn('priority', response.json['properties'])
self.assertIn('description', response.json['properties'])
self.assertIn('created_at', response.json['properties'])
self.assertIn('updated_at', response.json['properties'])
self.assertIn('name', response.json['properties'])
self.assertIn('ttl', response.json['properties'])
self.assertIn('oneOf', response.json)
def test_get_records_schema(self):
response = self.get('schemas/records')
self.assertIn('description', response.json)
self.assertIn('additionalProperties', response.json)
self.assertIn('properties', response.json)
self.assertIn('title', response.json)
self.assertIn('id', response.json)
def test_create_record(self):
recordset_fixture = self.get_recordset_fixture(
self.zone['name'])
fixture = self.get_record_fixture(recordset_fixture['type'])
fixture.update({
'name': recordset_fixture['name'],
'type': recordset_fixture['type'],
})
# Create a record
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture)
self.assertIn('id', response.json)
self.assertIn('name', response.json)
self.assertEqual(response.json['name'], fixture['name'])
def test_create_record_existing_recordset(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Create a record
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture)
self.assertIn('id', response.json)
self.assertIn('name', response.json)
self.assertEqual(response.json['name'], fixture['name'])
def test_create_record_name_reuse(self):
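        # Two records share a single recordset (same name and type);
        # deleting one record must not remove the shared recordset.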
fixture_1 = self.get_record_fixture(self.recordset['type'])
fixture_1.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
fixture_2 = self.get_record_fixture(self.recordset['type'], fixture=1)
fixture_2.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Create 2 records
record_1 = self.post('domains/%s/records' % self.zone['id'],
data=fixture_1)
record_2 = self.post('domains/%s/records' % self.zone['id'],
data=fixture_2)
# Delete record 1, this should not have any side effects
self.delete('domains/%s/records/%s' % (self.zone['id'],
record_1.json['id']))
# Simulate the record 1 having been deleted on the backend
zone_serial = self.central_service.get_zone(
self.admin_context, self.zone['id']).serial
self.central_service.update_status(
self.admin_context, self.zone['id'], "SUCCESS", zone_serial)
        # Get record 2 to ensure the recordset did not get deleted
rec_2_get_response = self.get('domains/%s/records/%s' %
(self.zone['id'], record_2.json['id']))
self.assertIn('id', rec_2_get_response.json)
self.assertIn('name', rec_2_get_response.json)
self.assertEqual(rec_2_get_response.json['name'], fixture_1['name'])
# Delete record 2, this should delete the null recordset too
self.delete('domains/%s/records/%s' % (self.zone['id'],
record_2.json['id']))
# Simulate the record 2 having been deleted on the backend
zone_serial = self.central_service.get_zone(
self.admin_context, self.zone['id']).serial
self.central_service.update_status(
self.admin_context, self.zone['id'], "SUCCESS", zone_serial)
# Re-create as a different type, but use the same name
fixture = self.get_record_fixture('CNAME')
fixture.update({
'name': self.recordset['name'],
'type': 'CNAME'
})
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture)
self.assertIn('id', response.json)
self.assertIn('name', response.json)
self.assertEqual(response.json['name'], fixture['name'])
def test_create_record_junk(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Add a junk property
fixture['junk'] = 'Junk Field'
        # Create a record, ensuring it fails with a 400
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_wildcard_record_after_named(self):
        # We want to test that a wildcard recordset does not reuse the
        # previously created named recordset
# https://bugs.launchpad.net/designate/+bug/1391426
name = "foo.%s" % self.zone.name
fixture = {
"name": name,
"type": "A",
"data": "10.0.0.1"
}
self.post('domains/%s/records' % self.zone['id'],
data=fixture)
wildcard_name = '*.%s' % self.zone["name"]
fixture['name'] = wildcard_name
self.post('domains/%s/records' % self.zone['id'],
data=fixture)
named_rs = self.central_service.find_recordset(
self.admin_context, {"name": name})
wildcard_rs = self.central_service.find_recordset(
self.admin_context, {"name": wildcard_name})
self.assertNotEqual(named_rs.name, wildcard_rs.name)
self.assertNotEqual(named_rs.id, wildcard_rs.id)
def test_create_record_utf_description(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Add a UTF-8 riddled description
fixture['description'] = "utf-8:2H₂+O₂⇌2H₂O,R=4.7kΩ,⌀200mm∮E⋅da=Q,n" \
",∑f(i)=∏g(i),∀x∈ℝ:⌈x⌉"
        # Create a record, ensuring it succeeds
self.post('domains/%s/records' % self.zone['id'], data=fixture)
def test_create_record_description_too_long(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Add a description that is too long
fixture['description'] = "x" * 161
        # Create a record, ensuring it fails with a 400
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_name_too_long(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({'type': self.recordset['type']})
fixture['name'] = 'w' * 255 + ".%s" % self.zone.name
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_name_is_missing(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({'type': self.recordset['type']})
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_type_is_missing(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture['name'] = "www.%s" % self.zone.name
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_invalid_type(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({'type': "ABC", 'name': self.recordset['name']})
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_data_is_missing(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({'type': self.recordset['type'],
'name': self.recordset['name']})
del fixture['data']
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_ttl_greater_than_max(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
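        # A TTL above the signed 32-bit maximum (2147483647) must be rejected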
fixture['ttl'] = 2174483648
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_negative_ttl(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Set the TTL to a negative value
fixture['ttl'] = -1
        # Create a record, ensuring it fails with a 400
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_zero_ttl(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
        # Set the TTL to zero
fixture['ttl'] = 0
        # Create a record, ensuring it fails with a 400
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_invalid_ttl(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
        # Set the TTL to an invalid value
fixture['ttl'] = "$?!."
        # Create a record, ensuring it fails with a 400
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_invalid_priority(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
fixture['priority'] = "$?!."
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_negative_priority(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
fixture['priority'] = -1
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_record_priority_greater_than_max(self):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
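        # Priority is a 16-bit field; 65536 is one past its maximum (65535)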
fixture['priority'] = 65536
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
@patch.object(central_service.Service, 'create_record',
side_effect=messaging.MessagingTimeout())
def test_create_record_timeout(self, _):
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
# Create a record
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=504)
def test_create_wildcard_record(self):
# Prepare a record
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': '*.%s' % self.recordset['name'],
'type': self.recordset['type'],
})
# Create a record
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture)
self.assertIn('id', response.json)
self.assertIn('name', response.json)
self.assertEqual(response.json['name'], fixture['name'])
def test_create_srv_record(self):
recordset_fixture = self.get_recordset_fixture(
self.zone['name'], 'SRV')
fixture = self.get_record_fixture(recordset_fixture['type'])
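        # The SRV fixture data leads with the priority; split it off so it
        # can be submitted as a separate field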
priority, _, data = fixture['data'].partition(" ")
fixture.update({
'data': data,
'priority': int(priority),
'name': recordset_fixture['name'],
'type': recordset_fixture['type'],
})
# Create a record
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture)
self.assertIn('id', response.json)
self.assertEqual(fixture['type'], response.json['type'])
self.assertEqual(fixture['name'], response.json['name'])
self.assertEqual(fixture['priority'], response.json['priority'])
self.assertEqual(fixture['data'], response.json['data'])
def test_create_invalid_data_srv_record(self):
recordset_fixture = self.get_recordset_fixture(
self.zone['name'], 'SRV')
fixture = self.get_record_fixture(recordset_fixture['type'])
fixture.update({
'name': recordset_fixture['name'],
'type': recordset_fixture['type'],
})
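        # Malformed SRV data: non-numeric fields, missing fields, or an
        # unqualified target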
invalid_datas = [
'I 5060 sip.%s' % self.zone['name'],
'5060 sip.%s' % self.zone['name'],
'5060 I sip.%s' % self.zone['name'],
'0 5060 sip',
'sip',
'sip.%s' % self.zone['name'],
]
for invalid_data in invalid_datas:
fixture['data'] = invalid_data
# Attempt to create the record
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_invalid_name_srv_record(self):
recordset_fixture = self.get_recordset_fixture(
self.zone['name'], 'SRV')
fixture = self.get_record_fixture(recordset_fixture['type'])
fixture.update({
'name': recordset_fixture['name'],
'type': recordset_fixture['type'],
})
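        # Valid SRV names must follow the _service._proto.<zone> pattern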
invalid_names = [
'%s' % self.zone['name'],
'_udp.%s' % self.zone['name'],
'sip._udp.%s' % self.zone['name'],
'_sip.udp.%s' % self.zone['name'],
]
for invalid_name in invalid_names:
fixture['name'] = invalid_name
# Attempt to create the record
self.post('domains/%s/records' % self.zone['id'], data=fixture,
status_code=400)
def test_create_invalid_name(self):
# Prepare a record
fixture = self.get_record_fixture(self.recordset['type'])
fixture.update({
'name': self.recordset['name'],
'type': self.recordset['type'],
})
invalid_names = [
'org',
'example.org',
'$$.example.org',
'*example.org.',
'*.*.example.org.',
'abc.*.example.org.',
]
for invalid_name in invalid_names:
fixture['name'] = invalid_name
# Create a record
response = self.post('domains/%s/records' % self.zone['id'],
data=fixture, status_code=400)
self.assertNotIn('id', response.json)
def test_get_records(self):
response = self.get('domains/%s/records' % self.zone['id'])
# Verify that the SOA & NS records are already created
self.assertIn('records', response.json)
self.assertEqual(2, len(response.json['records']))
# Create a record
self.create_record(self.zone, self.recordset)
response = self.get('domains/%s/records' % self.zone['id'])
# Verify that one more record has been added
self.assertIn('records', response.json)
self.assertEqual(3, len(response.json['records']))
# Create a second record
self.create_record(self.zone, self.recordset, fixture=1)
response = self.get('domains/%s/records' % self.zone['id'])
        # Verify that all 4 records are there
self.assertIn('records', response.json)
self.assertEqual(4, len(response.json['records']))
@patch.object(central_service.Service, 'get_zone',
side_effect=messaging.MessagingTimeout())
def test_get_records_timeout(self, _):
self.get('domains/%s/records' % self.zone['id'],
status_code=504)
def test_get_records_missing_zone(self):
self.get('domains/2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980/records',
status_code=404)
def test_get_records_invalid_zone_id(self):
self.get('domains/2fdadfb1cf964259ac6bbb7b6d2ff980/records',
status_code=404)
def test_get_record_missing(self):
self.get('domains/%s/records/2fdadfb1-cf96-4259-ac6b-'
'bb7b6d2ff980' % self.zone['id'],
status_code=404)
def test_get_record_with_invalid_id(self):
self.get('domains/%s/records/2fdadfb1-cf96-4259-ac6b-'
'bb7b6d2ff980GH' % self.zone['id'],
status_code=404)
def test_get_record(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
response = self.get('domains/%s/records/%s' % (self.zone['id'],
record['id']))
self.assertIn('id', response.json)
self.assertEqual(response.json['id'], record['id'])
self.assertEqual(response.json['name'], self.recordset['name'])
self.assertEqual(response.json['type'], self.recordset['type'])
def test_update_record(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
# Fetch another fixture to use in the update
fixture = self.get_record_fixture(self.recordset['type'], fixture=1)
# Update the record
data = {'data': fixture['data']}
response = self.put('domains/%s/records/%s' % (self.zone['id'],
record['id']),
data=data)
self.assertIn('id', response.json)
self.assertEqual(response.json['id'], record['id'])
self.assertEqual(response.json['data'], fixture['data'])
self.assertEqual(response.json['type'], self.recordset['type'])
def test_update_record_ttl(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
# Update the record
data = {'ttl': 100}
response = self.put('domains/%s/records/%s' % (self.zone['id'],
record['id']),
data=data)
self.assertIn('id', response.json)
self.assertEqual(record['id'], response.json['id'])
self.assertEqual(record['data'], response.json['data'])
self.assertEqual(self.recordset['type'], response.json['type'])
self.assertEqual(100, response.json['ttl'])
def test_update_record_junk(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'ttl': 100, 'junk': 'Junk Field'}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_negative_ttl(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'ttl': -1}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_ttl_greater_than_max(self):
record = self.create_record(self.zone, self.recordset)
data = {'ttl': 2174483648}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_zero_ttl(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'ttl': 0}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_invalid_ttl(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'ttl': "$?>%"}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_description_too_long(self):
record = self.create_record(self.zone, self.recordset)
data = {'description': 'x' * 165}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_negative_priority(self):
record = self.create_record(self.zone, self.recordset)
data = {'priority': -1}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_invalid_priority(self):
record = self.create_record(self.zone, self.recordset)
data = {'priority': "?!:>"}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_priority_greater_than_max(self):
record = self.create_record(self.zone, self.recordset)
data = {'priority': 65536}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_name_too_long(self):
record = self.create_record(self.zone, self.recordset)
data = {'name': 'w' * 256 + ".%s" % self.zone.name}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_invalid_type(self):
record = self.create_record(self.zone, self.recordset)
data = {'type': 'ABC'}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_data_too_long(self):
record = self.create_record(self.zone, self.recordset)
data = {'data': '1' * 255 + '.2.3.4'}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
def test_update_record_outside_zone_fail(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'name': 'test.someotherzone.com.'}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=400)
@patch.object(central_service.Service, 'find_zone',
side_effect=messaging.MessagingTimeout())
def test_update_record_timeout(self, _):
# Create a record
record = self.create_record(self.zone, self.recordset)
data = {'name': 'test.example.org.'}
self.put('domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data, status_code=504)
def test_update_record_missing(self):
data = {'name': 'test.example.org.'}
self.put('domains/%s/records/2fdadfb1-cf96-4259-ac6b-'
'bb7b6d2ff980' % self.zone['id'],
data=data,
status_code=404)
def test_update_record_invalid_id(self):
data = {'name': 'test.example.org.'}
self.put('domains/%s/records/2fdadfb1cf964259ac6bbb7b6d2ff980' %
self.zone['id'],
data=data,
status_code=404)
def test_update_record_missing_zone(self):
data = {'name': 'test.example.org.'}
self.put('domains/2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980/records/'
'2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980',
data=data,
status_code=404)
def test_update_record_invalid_zone_id(self):
data = {'name': 'test.example.org.'}
self.put('domains/2fdadfb1cf964259ac6bbb7b6d2ff980/records/'
'2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980',
data=data,
status_code=404)
def test_delete_record(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
self.delete('domains/%s/records/%s' % (self.zone['id'],
record['id']))
# Simulate the record having been deleted on the backend
zone_serial = self.central_service.get_zone(
self.admin_context, self.zone['id']).serial
self.central_service.update_status(
self.admin_context, self.zone['id'], "SUCCESS", zone_serial)
# Ensure we can no longer fetch the record
self.get('domains/%s/records/%s' % (self.zone['id'],
record['id']),
status_code=404)
@patch.object(central_service.Service, 'find_zone',
side_effect=messaging.MessagingTimeout())
def test_delete_record_timeout(self, _):
# Create a record
record = self.create_record(self.zone, self.recordset)
self.delete('domains/%s/records/%s' % (self.zone['id'],
record['id']),
status_code=504)
def test_delete_record_missing(self):
self.delete('domains/%s/records/2fdadfb1-cf96-4259-ac6b-'
'bb7b6d2ff980' % self.zone['id'],
status_code=404)
def test_delete_record_missing_zone(self):
self.delete('domains/2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980/records/'
'2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980',
status_code=404)
def test_delete_record_invalid_zone_id(self):
self.delete('domains/2fdadfb1cf964259ac6bbb7b6d2ff980/records/'
'2fdadfb1-cf96-4259-ac6b-bb7b6d2ff980',
status_code=404)
def test_delete_record_invalid_id(self):
self.delete('domains/%s/records/2fdadfb1-cf96-4259-ac6b-'
'bb7b6d2ff980GH' % self.zone['id'],
status_code=404)
def test_get_record_in_secondary(self):
fixture = self.get_zone_fixture('SECONDARY', 1)
fixture['email'] = "root@example.com"
zone = self.create_zone(**fixture)
record = self.create_record(zone, self.recordset)
url = 'zones/%s/records/%s' % (zone.id, record.id)
self.get(url, status_code=404)
def test_create_record_in_secondary(self):
fixture = self.get_zone_fixture('SECONDARY', 1)
fixture['email'] = "root@example.com"
zone = self.create_zone(**fixture)
record = {
"name": "foo.%s" % zone.name,
"type": "A",
"data": "10.0.0.1"
}
url = 'zones/%s/records' % zone.id
self.post(url, record, status_code=404)
def test_update_record_in_secondary(self):
fixture = self.get_zone_fixture('SECONDARY', 1)
fixture['email'] = "root@example.com"
zone = self.create_zone(**fixture)
record = self.create_record(zone, self.recordset)
url = 'zones/%s/records/%s' % (zone.id, record.id)
self.put(url, {"data": "10.0.0.1"}, status_code=404)
def test_delete_record_in_secondary(self):
fixture = self.get_zone_fixture('SECONDARY', 1)
fixture['email'] = "root@example.com"
zone = self.create_zone(**fixture)
record = self.create_record(zone, self.recordset)
url = 'zones/%s/records/%s' % (zone.id, record.id)
self.delete(url, status_code=404)
def test_create_record_deleting_zone(self):
recordset_fixture = self.get_recordset_fixture(
self.zone['name'])
fixture = self.get_record_fixture(recordset_fixture['type'])
fixture.update({
'name': recordset_fixture['name'],
'type': recordset_fixture['type'],
})
self.delete('/domains/%s' % self.zone['id'])
self.post('domains/%s/records' % self.zone['id'],
data=fixture, status_code=404)
def test_update_record_deleting_zone(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
# Fetch another fixture to use in the update
fixture = self.get_record_fixture(self.recordset['type'], fixture=1)
# Update the record
data = {'data': fixture['data']}
self.delete('/domains/%s' % self.zone['id'])
self.put('domains/%s/records/%s' % (self.zone['id'],
record['id']),
data=data, status_code=404)
def test_delete_record_deleting_zone(self):
# Create a record
record = self.create_record(self.zone, self.recordset)
self.delete('/domains/%s' % self.zone['id'])
self.delete('domains/%s/records/%s' % (self.zone['id'],
record['id']),
status_code=404)
class ApiV1TxtRecordsTest(ApiV1Test):
def setUp(self):
super(ApiV1TxtRecordsTest, self).setUp()
self.zone = self.create_zone()
self.recordset = self.create_recordset(self.zone, 'TXT')
def test_create_txt_record(self):
# See bug #1474012
record = self.create_record(self.zone, self.recordset)
data = {'data': 'a' * 255}
self.put(
'domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data
)
def test_create_txt_record_too_long(self):
# See bug #1474012
record = self.create_record(self.zone, self.recordset)
data = {'data': 'a' * 256}
self.put(
'domains/%s/records/%s' % (self.zone['id'], record['id']),
data=data,
status_code=400
)
| 37.436848 | 78 | 0.583571 | 3,765 | 32,308 | 4.85923 | 0.077025 | 0.053348 | 0.040995 | 0.034272 | 0.846898 | 0.803006 | 0.768789 | 0.736595 | 0.717846 | 0.710194 | 0 | 0.025844 | 0.274235 | 32,308 | 862 | 79 | 37.480278 | 0.753966 | 0.073233 | 0 | 0.61755 | 0 | 0.001656 | 0.142044 | 0.045741 | 0 | 0 | 0 | 0 | 0.10596 | 1 | 0.11755 | false | 0 | 0.008278 | 0 | 0.129139 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
436a61cd6b840ca1d23491b11bd43c9652a560b2 | 156 | py | Python | ambra_sdk/service/entrypoints/appointment.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/appointment.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/appointment.py | dyens/sdk-python | 24bf05268af2832c70120b84fd53bf44862cffec | [
"Apache-2.0"
] | null | null | null | from ambra_sdk.service.entrypoints.generated.appointment import \
Appointment as GAppointment
class Appointment(GAppointment):
"""Appointment."""
| 22.285714 | 65 | 0.775641 | 15 | 156 | 8 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 156 | 6 | 66 | 26 | 0.882353 | 0.076923 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43882765961086035309aa7b25f163cb00e0bab8 | 35 | py | Python | test.py | gghattoraya/dbsample | 6f6a17a9b1d6abef5d84e95613b22a3f716d118a | [
"Apache-2.0"
] | null | null | null | test.py | gghattoraya/dbsample | 6f6a17a9b1d6abef5d84e95613b22a3f716d118a | [
"Apache-2.0"
] | null | null | null | test.py | gghattoraya/dbsample | 6f6a17a9b1d6abef5d84e95613b22a3f716d118a | [
"Apache-2.0"
] | null | null | null | print("Hello World from the G man") | 35 | 35 | 0.742857 | 7 | 35 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 1 | 35 | 35 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
78d84ef11d68c76db90997a723ee9593e9d9628c | 133,939 | py | Python | tests/test_forecast.py | sky-uk/anticipy | 43efb5c0d5eb9f0c3dbf440fd17abb72e2153e06 | [
"BSD-3-Clause"
] | 79 | 2018-10-11T21:55:25.000Z | 2022-03-20T17:39:53.000Z | tests/test_forecast.py | sky-uk/anticipy | 43efb5c0d5eb9f0c3dbf440fd17abb72e2153e06 | [
"BSD-3-Clause"
] | 153 | 2018-10-04T10:32:31.000Z | 2021-05-28T15:30:30.000Z | tests/test_forecast.py | sky-uk/anticipy | 43efb5c0d5eb9f0c3dbf440fd17abb72e2153e06 | [
"BSD-3-Clause"
] | 15 | 2018-10-21T18:32:41.000Z | 2020-12-18T12:48:57.000Z | """
Author: Pedro Capelastegui
Created on 04/12/2015
"""
import os
import platform
import unittest
from anticipy.forecast import *
from anticipy.utils_test import PandasTest
# Dask dependencies - not currently used
# from dask import delayed
# from dask import compute
# from dask.distributed import Client
# from dask.diagnostics import Profiler, ResourceProfiler, CacheProfiler
# from dask.diagnostics import visualize
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
def logger_info(msg, data):
logger.info(msg + '\n%s\n', data)
base_folder = os.path.join(os.path.dirname(__file__), 'data')
pd.set_option('display.max_columns', 40)
pd.set_option('display.max_rows', 200)
pd.set_option('display.width', 1000)
def list_to_str(l):
if isinstance(l, list):
return str([str(i) for i in l])
else:
return str(l)
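# Helpers: length-n float arrays with ones (or zeros) exactly at l_indices.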
def array_ones_in_indices(n, l_indices):
return np.isin(np.arange(0, n), l_indices).astype(float)
def array_zeros_in_indices(n, l_indices):
return (~np.isin(np.arange(0, n), l_indices)).astype(float)
def print_forecast_driver_output(fcast_driver_output, log_first_line=None):
if fcast_driver_output.empty:
logger.info('Error: empty output')
else:
if log_first_line is not None:
log_first_line = '\r\n' + log_first_line
else:
log_first_line = ''
logger.info(log_first_line + '\r\nAIC_C:' +
str(fcast_driver_output.dict_aic_c))
# logger_info('AIC_C:',fcast_driver_output[0])
# usage:
# compute_prof(l_dict_result2_d, scheduler = 'processes', num_workers=4, title='Test figure') # noqa
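# NB: relies on Profiler/ResourceProfiler/visualize from the dask.diagnostics
# imports commented out above.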
def compute_prof(*args, **kwargs):
with Profiler() as prof, ResourceProfiler(dt=0.25) as rprof:
out = compute(*args, **kwargs)
visualize([prof, rprof, # cprof
], show=True)
return out
class TestForecast(PandasTest):
def setUp(self):
pass
def test_normalize_df(self):
def run_test(df, df_expected, **kwargs):
df_out = normalize_df(df, **kwargs)
logger_info('df_out:', df_out.tail(10))
self.assert_frame_equal(df_out, df_expected)
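        # Fixture arrays: a_x2_in tiles 0-4 twice (two sources sharing x
        # values), while a_x2_out repeats each x twice (two rows per x).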
a_y = np.full(10, 0.0)
a_x_in = np.arange(0, 10).astype(np.int64)
a_x = a_x_in + 2
a_x2_in = np.tile(np.arange(0, 5), 2).astype(np.int64)
a_x2 = a_x2_in + 2
a_x2_out = np.repeat(np.arange(0, 5), 2).astype(np.int64)
a_x2_repeat = a_x2_out + 2
a_source = ['s1'] * 5 + ['s2'] * 5
a_weight = np.full(10, 1.0)
a_date = pd.date_range('2014-01-01', periods=10, freq='D')
a_date2 = np.tile(pd.date_range('2014-01-01', periods=5, freq='D'), 2)
a_date2_out = np.repeat(
pd.date_range(
'2014-01-01',
periods=5,
freq='D'),
2)
logger.info('Test 0: Empty input')
        self.assertIsNone(normalize_df(pd.DataFrame()))
logger.info('Test 1: Output with x,y columns')
df_expected = pd.DataFrame({'y': a_y, 'x': a_x_in, })[['x', 'y']]
l_input = [
[pd.DataFrame({'y': a_y}), {}],
[pd.DataFrame({'y': a_y, 'x': a_x_in}), {}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x_in}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 2: Output with x,y,weight columns')
df_expected = \
pd.DataFrame({'y': a_y, 'x': a_x_in, 'weight': a_weight})[
['x', 'y', 'weight']]
l_input = [
[pd.DataFrame({'y': a_y, 'weight': a_weight}), {}],
[pd.DataFrame({'y': a_y, 'x': a_x_in, 'weight': a_weight}), {}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x_in,
'weight_test': a_weight}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test',
'col_name_weight': 'weight_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 3: Output with x,y,weight,date columns')
logger.info('Test 3a: Input includes x')
# If x column is present, it is preserved - otherwise, we create it
# from date column
df_expected = pd.DataFrame({'y': a_y, 'x': a_x_in, 'weight': a_weight,
'date': a_date})[
['date', 'x', 'y', 'weight']]
l_input = [
[pd.DataFrame({'y': a_y, 'x': a_x_in, 'weight': a_weight,
'date': a_date}), {}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x_in,
'weight_test': a_weight, 'date_test': a_date}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test',
'col_name_weight': 'weight_test',
'col_name_date': 'date_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 3b: Input has no x')
df_expected = pd.DataFrame({'y': a_y, 'x': a_x, 'weight': a_weight,
'date': a_date})[
['date', 'x', 'y', 'weight']]
l_input = [
[pd.DataFrame({'y': a_y, 'weight': a_weight, 'date': a_date}),
{}],
[pd.DataFrame({'y': a_y, 'weight': a_weight}, index=a_date), {}],
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 4: Input series')
df_expected = pd.DataFrame({'y': a_y, 'x': a_x_in, })[['x', 'y']]
l_input = [
[pd.Series(a_y, name='y'), {}],
[pd.Series(a_y, name='y', index=a_x_in), {}],
[pd.Series(a_y, name='y_test'), {'col_name_y': 'y_test'}],
# [pd.DataFrame({'y_test': a_y, 'x_test': a_x}),
# {'col_name_y':'y_test','col_name_x':'x_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 5: Input series with datetimeindex')
# If input is series with datetimeindex, create x from date
df_expected = pd.DataFrame({'y': a_y, 'x': a_x, 'date': a_date})[
['date', 'x', 'y']]
l_input = [
[pd.Series(a_y, name='y', index=a_date), {}],
[pd.Series(a_y, name='y_test', index=a_date),
{'col_name_y': 'y_test'}],
# [pd.DataFrame({'y_test': a_y, 'x_test': a_x}),
# {'col_name_y':'y_test','col_name_x':'x_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 6: Input df, output with x, y,'
' weight, date, source columns')
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2, 'source': a_source,
'weight': a_weight, 'date': a_date2})
[['date', 'source', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame({'y': a_y, 'weight': a_weight,
'date': a_date2, 'source': a_source}), {}],
# Datetime index not supported with source - could be
# added back with multindex
# [pd.DataFrame({'y': a_y, 'weight': a_weight},index = a_date),
# {}],
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 6b - input includes x column')
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2_in, 'source': a_source,
'weight': a_weight, 'date': a_date2})
[['date', 'source', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame({'y': a_y, 'x': a_x2_in, 'weight': a_weight,
'source': a_source, 'date': a_date2}), {}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x2_in,
'weight_test': a_weight, 'date_test': a_date2,
'source_test': a_source}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test',
'col_name_weight': 'weight_test',
'col_name_date': 'date_test',
'col_name_source': 'source_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 7: Input df has multiple values per date per source')
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2_repeat, 'weight': a_weight,
'date': a_date2_out})
[['date', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame({'y': a_y, 'weight': a_weight, 'date': a_date2}),
{}],
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 7b - Input includes x column')
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2_out, 'weight': a_weight,
'date': a_date2_out})
[['date', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame(
{'y': a_y, 'x': a_x2_in, 'weight': a_weight, 'date': a_date2}),
{}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x2_in,
'weight_test': a_weight, 'date_test': a_date2,
}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test',
'col_name_weight': 'weight_test',
'col_name_date': 'date_test',
}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 8: input df has date column in string form')
a_date_str = a_date2.astype(str)
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2_in + 2, 'source': a_source,
'weight': a_weight, 'date': a_date2})
[['date', 'source', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame({'y': a_y, 'weight': a_weight, 'date': a_date_str,
'source': a_source}), {}],
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 8b - input has x column')
a_date_str = a_date2.astype(str)
df_expected = (
pd.DataFrame({'y': a_y, 'x': a_x2_in, 'source': a_source,
'weight': a_weight, 'date': a_date2})
[['date', 'source', 'x', 'y', 'weight']]
)
l_input = [
[pd.DataFrame({'y': a_y, 'x': a_x2_in, 'weight': a_weight,
'source': a_source, 'date': a_date_str}), {}],
[pd.DataFrame({'y_test': a_y, 'x_test': a_x2_in,
'weight_test': a_weight, 'date_test': a_date_str,
'source_test': a_source}),
{'col_name_y': 'y_test', 'col_name_x': 'x_test',
'col_name_weight': 'weight_test',
'col_name_date': 'date_test',
'col_name_source': 'source_test'}]
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
# Test 9: unordered input df
df_expected = pd.DataFrame({'y': a_y, 'x': a_x_in, })[['x', 'y']]
l_input = [
[pd.DataFrame({'y': a_y[::-1]}), {}],
[pd.DataFrame({'y': a_y[::-1], 'x': a_x_in[::-1]}), {}],
]
for df, kwargs in l_input:
run_test(df, df_expected, **kwargs)
logger.info('Test 10: candy production dataset')
path_candy = os.path.join(base_folder, 'candy_production.csv')
df_candy_raw = pd.read_csv(path_candy)
df_candy = df_candy_raw.pipe(
normalize_df,
col_name_y='IPG3113N',
col_name_date='observation_date')
logger_info('df_candy:', df_candy.tail())
logger.info('Test 11: test_normalize.csv')
path_file = os.path.join(base_folder, 'test_normalize.csv')
df_test_raw = pd.read_csv(path_file)
df_test = df_test_raw.pipe(normalize_df, )
logger_info('df_test:', df_test.x.diff().loc[df_test.x.diff() > 1.0])
self.assertFalse((df_test.x.diff() > 31.0).any())
logger.info('Test 11b: test_normalize.csv, with gaps')
path_file = os.path.join(base_folder, 'test_normalize.csv')
df_test_raw = pd.read_csv(path_file)
df_test_raw = pd.concat([df_test_raw.head(10), df_test_raw.tail(10)])
df_test = df_test_raw.pipe(normalize_df, )
logger_info('df_test:', df_test)
logger_info('df_test:', df_test.x.diff().loc[df_test.x.diff() > 1.0])
        self.assertEqual(df_test.x.max(), 1311)
def test_forecast_input(self):
y_values1 = pd.DataFrame(
{'a': np.full(100, 0.0),
'b': np.round(np.arange(-0.5, 0.5, 0.01), 2), },
index=pd.date_range('2014-01-01', periods=100, freq='D'))
# Too few samples
n = 4
y_values1b = pd.DataFrame({'a': np.full(n, 0.0)}, index=pd.date_range(
'2014-01-01', periods=n, freq='D'))
y_values2 = pd.DataFrame({'a': np.full(100, 0.0)},
index=pd.date_range(
'2014-01-01', periods=100, freq='D'))
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_linear],
l_model_season=None,
df_y=y_values1,
weights_y_values=1.0,
date_start_actuals=None)
logger_info('Solver config:', conf1)
def test_get_residuals(self):
# Linear model
model = forecast_models.model_linear
a_y = np.arange(10.0)
a_x = np.arange(10.0)
a_date = None
        # Using parameters (0, 0)
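        # With params (0, 0) the linear model predicts 0 everywhere, so the
        # residuals equal a_y itself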
residuals = get_residuals([0, 0], model, a_x, a_y, a_date)
l_expected1 = np.arange(10.0)
logger_info('residuals:', residuals)
self.assert_array_equal(residuals, l_expected1)
# Test - If input array is not 1-dimensional, throw Exception
model = forecast_models.model_linear
a_y = pd.DataFrame(
{'a': np.arange(10.0), 'b': -np.arange(10.0)}).values
a_x = np.arange(10.0)
with self.assertRaises(AssertionError):
residuals = get_residuals([0, 0], model, a_x, a_y, a_date)
# Test - multiple values per sample
a_y = np.concatenate([np.arange(10.0), -np.arange(10.0)])
a_x = np.tile(np.arange(10.0), 2)
residuals = get_residuals([0, 0], model, a_x, a_y, a_date)
logger_info('residuals:', residuals)
l_expected2 = np.concatenate([np.arange(10.0), np.arange(10.0)])
self.assert_array_equal(residuals, l_expected2)
# As above, but applying weights to input time series [1.0, 0]
residuals = get_residuals([0, 0], model, a_x, a_y, a_date,
a_weights=np.repeat([1.0, 0], 10))
l_expected2b = np.concatenate([np.arange(10.0), np.full(10, 0)])
logger_info('residuals:', residuals)
self.assert_array_equal(residuals, l_expected2b)
# TODO: MORE TESTS WITH WEIGHTS_Y_VALUES
# New test, different parameters
residuals = get_residuals([0, 5], model, a_x, a_y, a_date)
logger_info('residuals:', residuals)
self.assert_array_equal(
residuals, [5., 4., 3., 2., 1., 0., 1., 2., 3., 4., 5.,
6., 7., 8., 9., 10., 11., 12., 13., 14.])
# Test - Use a_weights to weight residuals based on time
        # Using parameters (0, 0)
a_y = np.arange(10.0)
a_x = np.arange(10.0)
a_weights = np.linspace(1., 2., 10)
logger_info('a_y: ', a_y)
logger_info('a_weights: ', a_weights)
residuals = get_residuals(
[0, 0], model, a_x, a_y, a_date, a_weights=a_weights)
self.assert_array_equal(residuals, np.arange(10.0) * a_weights)
logger_info('residuals:', residuals)
def test_optimize_least_squares(self):
# Setup
        a_x = np.arange(100.0)
a_y = np.arange(100.0)
a_x_long = np.tile(a_x, 2)
a_y_long = np.concatenate([np.full(100, 0.0),
np.round(np.arange(-0.5, 0.5, 0.01), 2)])
a_date = None
l_model = [
forecast_models.model_linear,
forecast_models.model_constant
]
def print_result(result):
logger.info(
'result cost: %s, shape: %s, x: %s, message: %s',
result.cost,
result.fun.shape,
result.x,
result.message)
for model in l_model:
logger.info('#### Model function: %s', model.name)
df_result = optimize_least_squares(model, a_x, a_y, a_date)
logger_info('result:', df_result)
self.assertTrue(df_result.success.any())
# logger_info('result.x:',res_trend.x)
df_result = optimize_least_squares(
model, a_x_long, a_y_long, a_date)
logger_info('result:', df_result)
self.assertTrue(df_result.success.any())
def test_optimize_least_squares_cache(self):
a_date = pd.date_range('2020-01-01', '2022-01-01')
# logger_info('a_date', a_date)
a_x = np.arange(0, a_date.size)
# logger_info('a_x', a_x)
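        # Synthetic target with both monthly and weekday structure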
a_y = a_date.month * 10 + a_date.weekday
logger_info('a_y', a_y)
l_models = [
forecast_models.model_season_month,
forecast_models.model_season_wday,
forecast_models.model_season_month +
forecast_models.model_season_wday,
forecast_models.model_season_fourier_yearly,
forecast_models.model_calendar_uk
]
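        # Fit each model twice, with and without the optimizer cache, and
        # record both timings for comparison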
dict_t_summary = dict()
l_df_result = []
for model in l_models:
time_start = datetime.now()
df_result_cache = optimize_least_squares(
model, a_x, a_y, a_date)
fit_time_cache = (datetime.now() - time_start).total_seconds()
time_start = datetime.now()
df_result_no_cache = optimize_least_squares(
model, a_x, a_y, a_date, use_cache=False)
fit_time_no_cache = (datetime.now() - time_start).total_seconds()
l_df_result += [
df_result_cache.assign(model=str(model), is_cache=True),
df_result_no_cache.assign(model=str(model), is_cache=False),
]
dict_t_summary[str(model)] = [fit_time_cache, fit_time_no_cache]
df_result = pd.concat(l_df_result, ignore_index=True)
logger_info('result summary: ', df_result)
df_t_summary = pd.DataFrame(dict_t_summary).T
df_t_summary.columns = ['t_cache', 't_no_cache']
logger_info('time summary: ', df_t_summary)
def test_fit_model(self):
        # Input dataframes must have a y column, and may have columns x, date,
# weight
# Setup
        # TODO: Use pre-normalized input dfs, rather than calling
# normalize_df()
dict_df_y = {
# Single ts
'df_1ts_nodate': pd.DataFrame({'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)}),
# 2 ts
'df_2ts_nodate': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(
-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'x': np.tile(np.arange(0, 100), 2), }),
# 1 ts with datetime index
'df_1ts_w': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100, freq='W')),
# 2 ts with datetime index
'df_2ts_w': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='W'), 2)),
# Single ts, freq=D
'df_1ts_d': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)},
index=pd.date_range('2014-01-01', periods=100, freq='D')),
# 2 ts with datetime index, freq=D
'df_2ts_d': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(
-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)])},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='D'), 2))
}
        l_source1 = [
            'df_1ts_nodate',
            'df_2ts_nodate',
            'df_1ts_w',
            'df_2ts_w',
            'df_1ts_d',
            'df_2ts_d']
l_source2 = ['df_1ts_d', 'df_2ts_d']
# Naive trend models - cannot add seasonality
l_model1a = [
# model_naive never actually goes to fit_model
forecast_models.model_naive,
# TODO: add assert check on fit model re: validity of input model
]
l_model1b = [
forecast_models.model_snaive_wday
# TODO: add assert check on fit model re: validity of input model
]
l_model1c = [
forecast_models.model_linear,
forecast_models.model_constant
]
# All trend models
l_model1 = l_model1a + l_model1b + l_model1c
l_model2 = [
forecast_models.model_season_wday,
forecast_models.model_season_wday_2,
forecast_models.model_season_month
]
l_model3 = get_list_model(l_model1c, l_model2)
l_results = []
l_optimize_info = []
l_add_weight = [False, True]
def run_test_logic(source, model, add_weight):
df_y = dict_df_y[source].copy()
if add_weight: # Enable weight column
df_y['weight'] = df_y['weight_test']
df_y = df_y.pipe(normalize_df)
logger.info(
'Fitting src: %s , mod: %s, add_weight: %s',
source,
model,
add_weight)
dict_fit_model = fit_model(
model, df_y, source=source, df_actuals=df_y
)
return dict_fit_model
# logger_info('Result: ',result)
# Test - single solver type, return best fit
for (source, model, add_weight) in itertools.product(
l_source1, l_model1a + l_model1c, l_add_weight):
dict_fit_model = run_test_logic(source, model, add_weight)
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
# Now for models that require datetimeindex
for (source, model, add_weight) in itertools.product(
l_source2, l_model1b + l_model2, l_add_weight):
dict_fit_model = run_test_logic(source, model, add_weight)
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
# Finally, we use trend+seasonality with all models
for (source, model, add_weight) in itertools.product(
l_source2, l_model3, l_add_weight):
dict_fit_model = run_test_logic(source, model, add_weight)
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
df_result = pd.concat(l_results, sort=False, ignore_index=True)
df_optimize_info = pd.concat(
l_optimize_info, sort=False, ignore_index=True)
self.assertFalse(df_result.cost.pipe(pd.isnull).any())
logger_info('Result summary:', df_result)
logger_info('Optimize info summary:', df_optimize_info)
def test_fit_model_metadata(self):
# Check that weight column in metadata is correct
df_in = pd.DataFrame(
dict(x=np.arange(20), y=10., weight=[0] + [1.] * 19))
dict_result = fit_model(
forecast_models.model_constant,
df_in,
)
df_metadata = dict_result.get('metadata')
logger_info('metadata:', df_metadata)
str_weights = df_metadata.weights.iloc[0]
self.assertEqual(str_weights, '0.0-1.0')
@unittest.skip('Dask not supported yet')
def test_fit_model_dask(self):
        # Input dataframes must have a y column, and may have columns x, date,
# weight
# Setup
        # TODO: Use pre-normalized input dfs, rather than calling
# normalize_df()
dict_df_y = {
# Single ts
'df_1ts_nodate': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)}),
# 2 ts
'df_2ts_nodate': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'x': np.tile(np.arange(0, 100), 2),
}),
# 1 ts with datetime index
'df_1ts_w': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100, freq='W')),
# 2 ts with datetime index
'df_2ts_w': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='W'), 2)),
# Single ts, freq=D
'df_1ts_d': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)},
index=pd.date_range('2014-01-01', periods=100, freq='D')),
# 2 ts with datetime index, freq=D
'df_2ts_d': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)])},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='D'), 2))
}
        l_source1 = [
            'df_1ts_nodate',
            'df_2ts_nodate',
            'df_1ts_w',
            'df_2ts_w',
            'df_1ts_d',
            'df_2ts_d']
l_source2 = ['df_1ts_d', 'df_2ts_d']
l_model1 = [
# model_naive never actually goes to fit_model
forecast_models.model_naive,
# TODO: add assert check on fit model re: validity of input model
forecast_models.model_linear,
forecast_models.model_constant
]
l_model2 = [
forecast_models.model_season_wday,
forecast_models.model_season_wday_2,
forecast_models.model_season_month
]
l_model3 = get_list_model(l_model1, l_model2)
l_add_weight = [False, True]
def run_test_logic(df_y, source, model, add_weight):
# df_y = dict_df_y[source].copy()
# if add_weight: # Enable weight column
# df_y['weight']=df_y['weight_test']
col_name_weight = 'weight' if add_weight else 'no-weight'
df_y = df_y.pipe(normalize_df, col_name_weight=col_name_weight)
# logger.info('Fitting src: %s , mod: %s, add_weight: %s', source, model, add_weight) # noqa
# dict_fit_model = delayed(fit_model)(model, df_y, source=source, df_actuals = df_y) # noqa
dict_fit_model = fit_model(
model, df_y, source=source, df_actuals=df_y)
return dict_fit_model
# logger_info('Result: ',result)
def aggregate_dict_fit_model(l_dict_fit_model):
l_results = []
l_optimize_info = []
for dict_fit_model in l_dict_fit_model:
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
df_metadata = pd.concat(l_results, sort=False, ignore_index=True)
df_optimize_info = pd.concat(
l_optimize_info, sort=False, ignore_index=True)
return df_metadata, df_optimize_info
l_dict_fit_model_d = []
# Test - single solver type, return best fit
for (source, model, add_weight) in itertools.product(
l_source1, l_model1, l_add_weight):
l_dict_fit_model_d += [
delayed(run_test_logic)(
dict_df_y[source].copy(),
source,
model,
add_weight)]
# Now for models that require datetimeindex
for (source, model, add_weight) in itertools.product(
l_source2, l_model2, l_add_weight):
l_dict_fit_model_d += [
delayed(run_test_logic)(
dict_df_y[source].copy(),
source,
model,
add_weight)]
# Finally, we use trend+seasonality with all models
for (source, model, add_weight) in itertools.product(
l_source2, l_model3, l_add_weight):
l_dict_fit_model_d += [
delayed(run_test_logic)(
dict_df_y[source].copy(),
source,
model,
add_weight)]
logger.info('generated delayed')
# client = Client()
# logger_info('client:',client)
# l_dict_fit_model, = compute(l_dict_fit_model_d)
l_dict_fit_model, = compute_prof(
l_dict_fit_model_d, scheduler='processes', num_workers=4)
# l_dict_fit_model, = compute(l_dict_fit_model_d, scheduler='processes', num_workers=4) # noqa
# l_dict_fit_model, = compute(l_dict_fit_model_d, scheduler='distributed', num_workers=4) # noqa
# l_dict_fit_model, = compute(l_dict_fit_model_d, scheduler='threads', num_workers=4) # noqa
# l_dict_fit_model = l_dict_fit_model_d
df_metadata, df_optimize_info = aggregate_dict_fit_model(
l_dict_fit_model)
# result_d = delayed(aggregate_dict_fit_model)(l_dict_fit_model_d)
# result_d = delayed(aggregate_dict_fit_model)(l_dict_fit_model_d)
# (df_metadata, df_optimize_info), = compute(result_d)
# result, = compute(result_d)
logger_info('Result summary:', df_metadata)
logger_info('Optimize info summary:', df_optimize_info)
# client.close()
@unittest.skip('Dask not supported yet')
def test_fit_model_dask2(self):
        # Input dataframes must have a y column, and may have columns x, date,
# weight
# Setup
        # TODO: Use pre-normalized input dfs, rather than calling
# normalize_df()
dict_df_y = {
# Single ts
'df_1ts_nodate': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)}),
# 2 ts
'df_2ts_nodate': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'x': np.tile(np.arange(0, 100), 2),
}),
# 1 ts with datetime index
'df_1ts_w': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100, freq='W')),
# 2 ts with datetime index
'df_2ts_w': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='W'), 2)),
# Single ts, freq=D
'df_1ts_d': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)},
index=pd.date_range('2014-01-01', periods=100,
freq='D')),
# 2 ts with datetime index, freq=D
'df_2ts_d': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)])},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='D'), 2))
}
        l_source1 = [
            'df_1ts_nodate',
            'df_2ts_nodate',
            'df_1ts_w',
            'df_2ts_w',
            'df_1ts_d',
            'df_2ts_d']
l_source2 = ['df_1ts_d', 'df_2ts_d']
l_model1 = [
# model_naive never actually goes to fit_model
forecast_models.model_naive,
# TODO: add assert check on fit model re: validity of input model
forecast_models.model_linear,
forecast_models.model_constant
]
l_model2 = [
forecast_models.model_season_wday,
forecast_models.model_season_wday_2,
forecast_models.model_season_month
]
l_model3 = get_list_model(l_model1, l_model2)
l_weight = ['no-weight', 'weight_test']
def run_test_logic(df_y, source, model, add_weight):
# df_y = dict_df_y[source].copy()
if add_weight: # Enable weight column
df_y['weight'] = df_y['weight_test']
df_y = df_y.pipe(normalize_df)
# logger.info('Fitting src: %s , mod: %s, add_weight: %s', source, model, add_weight) # noqa
# dict_fit_model = delayed(fit_model)(model, df_y, source=source, df_actuals = df_y) # noqa
dict_fit_model = fit_model(
model, df_y, source=source, df_actuals=df_y)
return dict_fit_model
# logger_info('Result: ',result)
def aggregate_dict_fit_model(l_dict_fit_model):
l_results = []
l_optimize_info = []
for dict_fit_model in l_dict_fit_model:
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
df_metadata = delayed(
pd.concat)(
l_results,
sort=False,
ignore_index=False)
df_optimize_info = delayed(
pd.concat)(
l_optimize_info,
sort=False,
ignore_index=False)
return df_metadata, df_optimize_info
l_dict_fit_model_d = []
# Test - single solver type, return best fit
l_dict_fit_model_d += [
delayed(fit_model)(
model,
dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight),
source=source,
df_actuals=dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight)) for (
source,
model,
weight) in itertools.product(
l_source1,
l_model1,
l_weight)]
# Now for models that require datetimeindex
l_dict_fit_model_d += [
delayed(fit_model)(
model,
dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight),
source=source,
df_actuals=dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight)) for (
source,
model,
weight) in itertools.product(
l_source2,
l_model2,
l_weight)]
# Finally, we use trend+seasonality with all models
l_dict_fit_model_d += [
delayed(fit_model)(
model,
dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight),
source=source,
df_actuals=dict_df_y[source].pipe(
delayed(normalize_df),
col_name_weight=weight)) for (
source,
model,
weight) in itertools.product(
l_source2,
l_model3,
l_weight)]
# # Finally, we use trend+seasonality with all models
# for (source, model, weight) in itertools.product(
# l_source2, l_model3, l_weight):
# df_y = dict_df_y[source].pipe(normalize_df, col_name_weight=weight) # noqa
# l_dict_fit_model_d += [delayed(fit_model)(model, df_y, source=source, df_actuals=df_y)] # noqa
logger.info('generated delayed')
# l_dict_fit_model, = compute(l_dict_fit_model_d)
l_dict_fit_model = l_dict_fit_model_d
# df_metadata, df_optimize_info = aggregate_dict_fit_model(l_dict_fit_model) # noqa
result_d = delayed(aggregate_dict_fit_model)(l_dict_fit_model_d)
(df_metadata, df_optimize_info), = compute(result_d)
# result, = compute(result_d)
logger_info('Result summary:', df_metadata)
logger_info('Optimize info summary:', df_optimize_info)
@unittest.skip('Dask not supported yet')
def test_dask(self):
def aggregate_result(l_dict_result):
l_metadata = []
l_opt = []
for dict_result in l_dict_result:
l_metadata += [dict_result['metadata']]
l_opt += [dict_result['optimize_info']]
return pd.concat(
l_metadata, sort=False, ignore_index=False), pd.concat(
l_opt, sort=False, ignore_index=False)
model = forecast_models.model_linear
df_y = pd.DataFrame(
{'y': np.full(100, 0.0),
'weight_test': np.full(100, 1.0)}).pipe(normalize_df)
l_dict_result2_d = [
delayed(fit_model)(
model,
df_y,
source=i,
df_actuals=df_y) for i in np.arange(
0,
20)]
result_d = delayed(aggregate_result)(l_dict_result2_d)
# result = compute(result_d,scheduler='processes',num_workers=2)
result = compute(result_d)
logger_info('result', result)
def test_fit_model_date_gaps(self):
# Setup
# 2 ts with datetime index, freq=D
df_2ts_d = pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)])},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='D'), 2))
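        # Keep only the first and last five rows, leaving a ~3-month gap in
        # the dates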
df_y = pd.concat([df_2ts_d.head(), df_2ts_d.tail()])
model = forecast_models.model_linear
l_col_name_weight = [None, 'weight']
l_results = []
l_optimize_info = []
def run_test_logic(col_name_weight):
logger.info('Fitting col_w: %s', col_name_weight)
df_y_tmp = df_y.pipe(normalize_df,
col_name_weight=col_name_weight)
dict_fit_model = fit_model(model, df_y_tmp, source='test')
return dict_fit_model
# logger_info('Result: ',result)
# Test - single solver type, return best fit
for col_name_weight in l_col_name_weight:
dict_fit_model = run_test_logic(col_name_weight)
result_tmp = dict_fit_model['metadata']
info_tmp = dict_fit_model['optimize_info']
l_results += [result_tmp]
l_optimize_info += [info_tmp]
df_result = pd.concat(l_results)
df_optimize_info = pd.concat(l_optimize_info)
logger_info('Result summary:', df_result)
logger_info('Optimize info summary:', df_optimize_info)
def test_get_list_model(self):
l1 = [
forecast_models.model_linear,
forecast_models.model_constant
]
l2 = [
forecast_models.model_season_wday_2,
forecast_models.model_null
]
l_result_add = get_list_model(l1, l2, 'add')
l_result_mult = get_list_model(l1, l2, 'mult')
l_result_both = get_list_model(l1, l2, 'both')
l_expected_add = [
l1[0] + l2[0],
l1[0] + l2[1],
l1[1] + l2[0],
l1[1] + l2[1],
]
l_expected_mult = [
l1[0] * l2[0],
l1[0] * l2[1],
l1[1] * l2[0],
l1[1] * l2[1],
]
l_expected_both = [
l1[0] + l2[0],
l1[0] + l2[1],
l1[1] + l2[0],
l1[1] + l2[1],
l1[0] * l2[0],
# l1[0] * l2[1], # This is a duplicate: linear*null=linear+null =
# linear
l1[1] * l2[0],
# l1[1] * l2[1], # This is a duplicate: constant*null =
# constant+null = constant
]
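# A minimal sketch of the dedup rule above, assuming the '+'/'*'
# operator overloads on forecast models simplify compositions with
# model_null (hypothetical session, not executed):
#   (l1[0] * l2[1]).name  # 'linear' - same as (l1[0] + l2[1]).name,
#   # so 'both' mode drops the multiplicative duplicates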
logger_info('Result add:', l_result_add)
logger_info('Expected add:', l_expected_add)
self.assertListEqual(l_result_add, l_expected_add)
logger_info('Result mult:', l_result_mult)
logger_info('Expected mult:', l_expected_mult)
self.assertListEqual(l_result_mult, l_expected_mult)
logger_info('Result both:', l_result_both)
logger_info('Expected both:', l_expected_both)
self.assertListEqual(l_result_both, l_expected_both)
def test_fit_model_trend_season_wday_mult(self):
# Test a specific model combination that doesn't fit
# Setup
n_iterations = 10
dict_df_y = {
# Single ts
'df_1ts_nodate': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)}),
# 2 ts
'df_2ts_nodate': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(
-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'x': np.tile(np.arange(0, 100), 2),
}),
# 1 ts with datetime index
'df_1ts_w': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100, freq='W')),
# 2 ts with datetime index
'df_2ts_w': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(
-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='W'), 2)),
# Single ts, freq=D
'df_1ts_d': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)},
index=pd.date_range('2014-01-01', periods=100, freq='D')),
# Single ts, freq=D, index named 'date'
'df_1ts_d2': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)},
index=pd.date_range('2014-01-01', periods=100, freq='D',
name='date')).reset_index(),
# 2 ts with datetime index, freq=D
'df_2ts_d': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)])},
index=np.tile(pd.date_range('2014-01-01', periods=100,
freq='D'), 2))
}
l_source_d = ['df_1ts_d', 'df_2ts_d', 'df_1ts_d2']
l_source_w = ['df_1ts_w', 'df_2ts_w']
l_model_trend = [
forecast_models.model_linear,
]
l_model_season = [
# forecast_models.model_season_wday,
# forecast_models.model_season_wday,
forecast_models.model_season_wday_2,
forecast_models.model_null
]
l_col_name_weight = [ # None,
'weight']
l_results = []
l_optimize_info = []
# Fit, run n iterations, freq='D'
for (
source,
col_name_weight,
model) in itertools.product(
l_source_d,
l_col_name_weight,
get_list_model(
l_model_trend,
l_model_season,
'both')):
df_y = dict_df_y[source].copy().pipe(
normalize_df, col_name_weight=col_name_weight)
logger.info(
'Fitting src: %s , mod: %s, col_w: %s',
source,
model,
col_name_weight)
for i in np.arange(0, n_iterations):
dict_fit_model = fit_model(
model, df_y, source=source, freq='D')
l_results += [dict_fit_model['metadata']]
l_optimize_info += [dict_fit_model['optimize_info']]
# Fit, run n iterations, freq='D' - test function composition in
# a different order
for (
source,
col_name_weight,
model) in itertools.product(
l_source_d,
l_col_name_weight,
get_list_model(
l_model_season,
l_model_trend,
'both')):
df_y = dict_df_y[source].copy().pipe(
normalize_df, col_name_weight=col_name_weight)
logger.info(
'Fitting src: %s , mod: %s, col_w: %s',
source,
model,
col_name_weight)
for i in np.arange(0, n_iterations):
dict_fit_model = fit_model(
model, df_y, source=source, freq='D')
l_results += [dict_fit_model['metadata']]
l_optimize_info += [dict_fit_model['optimize_info']]
df_result = pd.concat(l_results)
df_optimize_info = pd.concat(l_optimize_info)
logger_info('Result summary:', df_result)
logger_info('Optimize info summary:', df_optimize_info)
def test_extrapolate_model(self):
# with freq=None, defaults to W
df_y_forecast = extrapolate_model(
forecast_models.model_constant,
[1.0],
'2017-01-01',
'2017-01-01',
freq=None,
extrapolate_years=1.0)
logger_info('df_y_forecast', df_y_forecast.tail(1))
logger_info('Result length:', df_y_forecast.index.size)
self.assertEqual(df_y_forecast.index.size, 53)
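# 1 year of weekly samples from a single start date: the start sample
# plus 52 extrapolated weeks, i.e. 53 rows as asserted above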
df_y_forecast = extrapolate_model(
forecast_models.model_constant,
[1.0],
'2017-01-01',
'2017-12-31',
freq='D',
extrapolate_years=1.0)
logger_info('df_y_forecast', df_y_forecast.tail(1))
logger_info('Result length:', df_y_forecast.index.size)
self.assertEqual(df_y_forecast.index.size, 365 * 2)
df_y_forecast = extrapolate_model(
forecast_models.model_constant,
[1.0],
'2017-01-01',
'2017-12-31',
freq='MS',
extrapolate_years=1.0)
logger_info('df_y_forecast', df_y_forecast.tail(1))
logger_info('Result length:', df_y_forecast.index.size)
self.assertEqual(df_y_forecast.index.size, 12 * 2)
df_y_forecast = extrapolate_model(
forecast_models.model_constant,
[1.0],
'2000-01-01',
'2009-01-01',
freq='YS',
extrapolate_years=10.0)
logger_info('df_y_forecast', df_y_forecast.tail(20))
logger_info('Result length:', df_y_forecast.index.size)
self.assertEqual(df_y_forecast.index.size, 20)
# TODO: Test other time frequencies, e.g. Q, H, Y.
def test_get_df_actuals_clean(self):
dict_df_y = {
# Single ts
'df_1ts_nodate': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)}),
# 2 ts
'df_2ts_nodate': pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(
-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'x': np.tile(np.arange(0, 100), 2),
}),
# 1 ts with datetime index
'df_1ts_w': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100,
freq='W')),
# 1 ts with datetime index named 'date'
'df_1ts_w-2': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0)
},
index=pd.date_range('2014-01-01', periods=100,
freq='W', name='date')),
# 1 ts with datetime column
'df_1ts_w-3': pd.DataFrame(
{'y': np.full(100, 0.0),
'weight': np.full(100, 1.0),
'date': pd.date_range('2014-01-01', periods=100,
freq='W')
})
}
# Simple test - check for crashes
for k in dict_df_y.keys():
logger.info('Input: %s', k)
df_in = dict_df_y.get(k).pipe(normalize_df)
logger_info('DF_IN', df_in.tail(3))
df_result = get_df_actuals_clean(df_in, 'test', 'test')
logger_info('Result:', df_result.tail(3))
unique_models = df_result.model.drop_duplicates().reset_index(
drop=True)
self.assert_series_equal(unique_models, pd.Series(['actuals']))
logger_info('Models:', df_result.model.drop_duplicates())
def _test_run_forecast_basic_tests_new_api(self, n_sources=1, **kwargs):
# Both additive and multiplicative
dict_result = run_forecast(simplify_output=False, **kwargs)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
l_sources = df_metadata.source_long.unique()
include_all_fits = kwargs.get('include_all_fits')
if not include_all_fits:
# In this case, there should be only one model fitted
# per data source
self.assertTrue(df_metadata.is_best_fit.all())
self.assertTrue((df_data.is_best_fit | df_data.is_actuals).all())
# The following may not be true if a model doesn't converge
self.assertEqual(df_metadata.index.size, n_sources)
self.assertEqual(
df_data.loc[~df_data.is_actuals].drop_duplicates(
'source_long').index.size, n_sources)
# Check that actuals are included
self.assertTrue((df_data.is_actuals.any()))
# Check that dtype is not corrupted
self.assertTrue(np.issubdtype(df_data.y.astype(float), np.float64))
def _test_run_forecast_check_length_new_api(self, **kwargs):
# Frequency is detected from the input data, overriding any 'freq' kwarg
freq = detect_freq(kwargs.get('df_y').pipe(normalize_df))
# Changes e.g. W-MON to W
freq_short = freq[0:1] if freq is not None else None
# TODO: change to dict to support more frequencies
freq_units_per_year = 52.0 if freq_short == 'W' else 365.0
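# A hypothetical lookup table for the TODO above - a sketch, not active code:
# dict_freq_units = {'W': 52.0, 'D': 365.0, 'M': 12.0, 'Q': 4.0, 'Y': 1.0}
# freq_units_per_year = dict_freq_units.get(freq_short, 365.0)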
# pop() rather than get(): extrapolate_years is passed explicitly below,
# so it must not remain in **kwargs
extrapolate_years = kwargs.pop('extrapolate_years', 1.0)
# Both additive and multiplicative
dict_result = run_forecast(
simplify_output=False,
extrapolate_years=extrapolate_years,
**kwargs)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
logger_info('df_metadata:', df_metadata)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
df_data_size = df_data.groupby(
['source', 'model', 'is_actuals']).size().rename(
'group_size').reset_index()
df_data_size_unique = (
df_data.drop_duplicates(
['source', 'model', 'is_actuals', 'date']).groupby(
['source', 'model', 'is_actuals']).size().rename(
'group_size').reset_index()
)
logger_info('df_data_size:', df_data_size)
logger_info('df_data_size_unique:', df_data_size_unique)
df_y = kwargs.get('df_y')
assert df_y is not None
# Normalize df_y
df_y = normalize_df(df_y,
kwargs.get('col_name_y', 'y'),
kwargs.get('col_name_weight', 'weight'),
kwargs.get('col_name_x', 'x'),
kwargs.get('col_name_date', 'date'),
kwargs.get('col_name_source', 'source'))
if 'source' not in df_y.columns:
df_y['source'] = kwargs.get('source_id', 'source')
l_sources = df_y.source.drop_duplicates()
for source in l_sources:
logger.info('Testing source: %s', source)
df_y_tmp = df_y.loc[df_y.source == source]
size_actuals_unique_tmp = df_y_tmp.drop_duplicates('x').index.size
size_actuals_tmp = df_y_tmp.index.size
df_data_size_tmp = df_data_size.loc[
df_data_size.source == source]
df_data_size_actuals = df_data_size_tmp.loc[
df_data_size_tmp.is_actuals]
df_data_size_fcast = df_data_size_tmp.loc[
~df_data_size_tmp.is_actuals]
# logger.info('DEBUG: group size: %s',100 +
# extrapolate_years*freq_units_per_year)
# This assert doesn't work for all years - some have 365 days,
# some 366. Currently running with 365-day year
logger.info(
'DEBUG: df_data_size_fcast.group_size %s , size_actuals_tmp'
' %s, total %s',
df_data_size_fcast.group_size.values,
size_actuals_tmp,
size_actuals_tmp +
extrapolate_years *
freq_units_per_year)
self.assertTrue(
(df_data_size_actuals.group_size == size_actuals_tmp).all())
self.assert_array_equal(
df_data_size_fcast.group_size,
size_actuals_unique_tmp +
extrapolate_years *
freq_units_per_year)
self.assertFalse(df_data_size_fcast.empty)
def _test_run_forecast(self, freq='D'):
logger.info('Testing run_forecast - freq: %s', freq)
# freq_short = freq[0:1] # Changes e.g. W-MON to W
# freq_units_per_year = 52.0 if freq_short == 'W' else 365.0
# TODO: change to dict to support more frequencies
# Input dataframe without date column
df_y0 = pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1), np.full(100, 1.0)]),
},
)
df_y1 = pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
},
index=np.tile(pd.date_range('2014-01-01', periods=100, freq=freq),
2))
# Too few samples
n = 4
df_y1b = pd.DataFrame({'y': np.full(n, 0.0)}, index=pd.date_range(
'2017-01-01', periods=n, freq=freq))
df_y2 = pd.DataFrame({'y': np.full(100, 0.0)}, index=pd.date_range(
'2017-01-01', periods=100, freq=freq))
# Df with source column
df_y3 = pd.DataFrame(
{'y': np.concatenate([np.full(100, 0.0),
np.round(np.arange(-0.5, 0.5, 0.01), 2)]),
'weight': np.concatenate([np.full(100, 0.1), np.full(100, 1.0)]),
'source': ['src1'] * 100 + ['src2'] * 100
},
index=np.tile(pd.date_range('2014-01-01', periods=100, freq=freq),
2))
# As above, with renamed columns
df_y3b = pd.DataFrame(
{'y_test': np.concatenate([np.full(100, 0.0), np.round(
np.arange(-0.5, 0.5, 0.01), 2)]),
'weight_test': np.concatenate([np.full(100, 0.1),
np.full(100, 1.0)]),
'source_test': ['src1'] * 100 + ['src2'] * 100,
'date_test': np.tile(pd.date_range('2014-01-01',
periods=100, freq=freq), 2)
})
# # Model lists
l_model_trend1 = [forecast_models.model_linear]
l_model_trend1b = [
forecast_models.model_linear,
forecast_models.model_season_wday_2]
l_model_trend2 = [
forecast_models.model_linear,
forecast_models.model_exp]
l_model_season1 = [forecast_models.model_season_wday_2]
l_model_season2 = [
forecast_models.model_season_wday_2,
forecast_models.model_null]
# New test - forecast length
logger.info('Testing Output Length')
logger.info('Testing Output Length - df_y1')
self._test_run_forecast_check_length_new_api(
df_y=df_y1,
include_all_fits=False,
l_model_trend=l_model_trend1b,
source_id='source1',
l_model_naive=[]
)
logger.info('Testing Output Length - df_y2')
self._test_run_forecast_check_length_new_api(
df_y=df_y2,
include_all_fits=False,
l_model_trend=l_model_trend2,
l_model_season=l_model_season2,
source_id='source2',
l_model_naive=[]
)
def test_runforecast(self):
for freq in ['D',
'W']:
self._test_run_forecast(freq=freq)
def test_run_forecast_metadata(self):
df1 = pd.DataFrame({'y': np.arange(0, 10.)}, index=pd.date_range(
'2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df1, l_model_trend=[
forecast_models.model_linear])
df_data = dict_result['data']
df_metadata = dict_result['metadata']
logger_info('metadata.fit_time dtype', df_metadata.fit_time.dtype)
self.assertTrue(pd.api.types.is_numeric_dtype(df_metadata.fit_time))
def test_run_forecast_simple_linear_model(self):
df1 = pd.DataFrame({'y': np.arange(0, 10.)}, index=pd.date_range(
'2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df1, l_model_trend=[
forecast_models.model_linear])
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(30))
df2 = pd.DataFrame(
{'y': np.arange(0, 10.), 'source': ['src1'] * 5 + ['src2'] * 5},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df2, l_model_trend=[
forecast_models.model_linear])
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
def test_run_forecast_naive(self):
logger.info('Test 1 - linear series, 1 source')
df1 = pd.DataFrame(
{'y': np.arange(0, 10.)},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False,
df_y=df1,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=10. / 365)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(40))
logger.info('Test 2 - 2 sources')
df2 = pd.DataFrame(
{'y': np.arange(0, 10.), 'source': ['src1'] * 5 + ['src2'] * 5},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df2,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=10. / 365)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
logger.info('test 3: weight column')
df1 = pd.DataFrame(
{'y': np.arange(0, 10.),
'weight': array_zeros_in_indices(10, [5, 6])},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=10. / 365)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
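# model_naive repeats the previous valid sample: samples 5-6 have
# weight 0, so predictions hold at 4. until the next valid actual,
# and the out-of-sample forecast carries the last actual (9.) forward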
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.concatenate([np.array(
[0., 0., 1., 2., 3., 4., 4., 4., 7., 8., 9., ]),
np.full(9, 9.)]))
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
logger.info('Test 3b - initial sample is 0-weight, '
'extrapolate_years=0')
df1.loc[df1.index[0:2], 'weight'] = 0.
logger_info('df1:', df1)
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=0)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
logger.info('Test 3c: weight column, season_add_mult = \'both\'')
df1 = pd.DataFrame(
{'y': np.arange(0, 10.),
'weight': array_zeros_in_indices(10, [5, 6])}, # noqa
index=pd.date_range('2014-01-01', periods=10, freq='D')) # noqa
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[forecast_models.model_naive], # noqa
extrapolate_years=10. / 365,
season_add_mult='both')
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.concatenate([
np.array([0., 0., 1., 2., 3., 4., 4., 4., 7., 8., 9., ]),
np.full(9, 9.)
]))
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
logger.info('Test 4: find_outliers')
df1 = pd.DataFrame(
{'y': np.arange(0, 10.) + 10 * array_ones_in_indices(10, [5, 6])},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=10. / 365, find_outliers=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:',
df_data.groupby(['source', 'model']).tail(60)) # noqa
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.concatenate([
np.array([0., 0., 1., 2., 3., 4., 4., 4., 7., 8., 9., ]),
np.full(9, 9.)
]))
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
logger.info('Test 4b: find_outliers, season_add_mult = \'both\'')
df1 = pd.DataFrame(
{'y': np.arange(0, 10.) + 10 * array_ones_in_indices(10, [5, 6])},
index=pd.date_range('2014-01-01', periods=10, freq='D'))
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[forecast_models.model_naive],
extrapolate_years=10. / 365, find_outliers=True,
season_add_mult='both')
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.concatenate([
np.array([0., 0., 1., 2., 3., 4., 4., 4., 7., 8., 9., ]),
np.full(9, 9.)
]
))
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
logger.info('Test 5: Series with gap')
df1 = (
pd.DataFrame(
{'y': np.arange(0, 10.),
# 'weight': array_zeros_in_indices(10, [5, 6]),
'date': pd.date_range('2014-01-01', periods=10, freq='D')},
)
)
df1 = pd.concat(
[df1.head(5), df1.tail(3)],
sort=False, ignore_index=False
).pipe(normalize_df)
dict_result = run_forecast(
simplify_output=False, df_y=df1,
l_model_trend=[],
l_model_naive=[forecast_models.model_naive,
forecast_models.model_snaive_wday],
extrapolate_years=10. / 365,
season_add_mult='both')
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.concatenate([
np.array([0., 0., 1., 2., 3., 4., 4., 4., 7., 8., 9., ]),
np.full(9, 9.)
]))
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
logger.info('Test 6: Series with spike, '
'find_outliers=True, use model_snaive_wday')
df1 = (
pd.DataFrame(
{'y': np.arange(0, 21.) + 10 * array_ones_in_indices(21, 7),
# 'weight': array_zeros_in_indices(10, [5, 6]),
'date': pd.date_range('2014-01-01', periods=21, freq='D')},
))
# array_ones_in_indices(n, l_indices)
dict_result = run_forecast(
simplify_output=False,
df_y=df1,
l_model_trend=[],
l_model_season=[],
l_model_naive=[
forecast_models.model_snaive_wday],
extrapolate_years=20. / 365,
season_add_mult='both',
find_outliers=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
df_data['wday'] = df_data.date.dt.weekday
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
a_y_result = df_data.loc[df_data.model == 'snaive_wday'].y.values
logger_info('a_y_result:', a_y_result)
self.assert_array_equal(
a_y_result,
np.array([0., 1., 2., 3., 4., 5., 6., 0., 8., 9., 10., 11., 12.,
13., 14., 15., 16., 17., 18., 19., 20., 14., 15., 16.,
17., 18., 19., 20., 14., 15., 16., 17., 18., 19.])
)
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast)
def test_run_forecast_naive2(self):
logger.info('Test 1 - run forecast with naive model, find outliers')
# Test 1: run forecast with naive model, find_outliers,
# season_add_mult = 'add', weekly samples
path_df_naive = os.path.join(base_folder, 'df_test_naive.csv')
df_test_naive = pd.read_csv(path_df_naive)
l_season_yearly = [
forecast_models.model_season_month,
# model_season_fourier_yearly,
forecast_models.model_null]
l_season_weekly = [ # forecast_models.model_season_wday_2,
forecast_models.model_season_wday, forecast_models.model_null]
dict_result = run_forecast(
simplify_output=False, df_y=df_test_naive,
# l_model_trend=[forecast_models.model_naive],
l_model_naive=[forecast_models.model_naive],
l_season_yearly=l_season_yearly,
l_season_weekly=l_season_weekly,
extrapolate_years=75. / 365, find_outliers=True,
season_add_mult='add')
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.loc[
(df_data.date > '2017-12-01') & (df_data.date < '2018-02-01')])
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
# logger_info('a_y_result:', a_y_result)
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast.loc[
(df_forecast.date > '2017-12-01') & (
df_forecast.date < '2018-02-01')])
# After first spike, naive forecast and actuals start matching,
# only if season_add_mult='both'
self.assertNotEqual(
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'naive')].y.iloc[0],
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'actuals')].y.iloc[0])
logger.info('Test 2 - run forecast with naive model, find_outliers,'
'season_add_mult = \'both\', weekly samples')
# path_df_naive = os.path.join(base_folder, 'df_test_naive.csv')
# df_test_naive = pd.read_csv(path_df_naive)
l_season_yearly = [
forecast_models.model_season_month,
# model_season_fourier_yearly,
forecast_models.model_null]
l_season_weekly = [ # forecast_models.model_season_wday_2,
forecast_models.model_season_wday, forecast_models.model_null]
dict_result = run_forecast(
simplify_output=False, df_y=df_test_naive,
# l_model_trend=[forecast_models.model_naive],
l_model_naive=[forecast_models.model_naive],
l_season_yearly=l_season_yearly,
l_season_weekly=l_season_weekly,
extrapolate_years=75. / 365, find_outliers=True,
season_add_mult='both')
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.loc[
(df_data.date > '2017-12-01') & (df_data.date < '2018-02-01')])
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
# logger_info('a_y_result:', a_y_result)
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast.loc[
(df_forecast.date > '2017-12-01') & (
df_forecast.date < '2018-02-01')])
# After first spike, naive forecast and actuals start matching, only if
# season_add_mult='both'
self.assertNotEqual(
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'naive')].y.iloc[0],
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'actuals')].y.iloc[0])
logger.info('Test 3 - multiple model_naive runs')
path_df_naive = os.path.join(base_folder, 'df_test_naive.csv')
df_test_naive = pd.read_csv(path_df_naive)
model_naive2 = forecast_models.ForecastModel(
'naive2', 0, forecast_models._f_model_naive)
l_model_naive = [forecast_models.model_naive, model_naive2]
dict_result = run_forecast(
simplify_output=False,
df_y=df_test_naive,
l_model_trend=[],
l_season_yearly=l_season_yearly,
l_season_weekly=l_season_weekly,
l_model_naive=l_model_naive,
extrapolate_years=75. / 365,
find_outliers=True,
season_add_mult='add',
)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.loc[
(df_data.date > '2017-12-01') & (df_data.date < '2018-02-01')])
a_y_result = df_data.loc[df_data.model == 'naive'].y.values
# logger_info('a_y_result:', a_y_result)
df_forecast = dict_result['forecast']
logger_info('df_forecast', df_forecast.loc[
(df_forecast.date > '2017-12-01') & (
df_forecast.date < '2018-02-01')])
# After first spike, naive forecast and actuals start matching, only if
# season_add_mult='both'
self.assertNotEqual(
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'naive')].y.iloc[0],
df_data.loc[(df_data.date == '2018-01-07') &
(df_data.model == 'actuals')].y.iloc[0])
def test_run_forecast_sparse_with_gaps(self):
df_test = pd.DataFrame({'date': pd.to_datetime(
['2018-08-01', '2018-08-09']), 'y': [1., 2.]})
df_out = run_forecast(df_test, extrapolate_years=1.0)
logger_info('df_out', df_out)
def test_run_forecast_output_options(self):
freq = 'D'
freq_short = freq[0:1] # Changes e.g. W-MON to W
# TODO: change to dict to support more frequencies
freq_units_per_year = 52.0 if freq_short == 'W' else 365.0
df_y = pd.DataFrame({'y': np.full(100, 0.0)}, index=pd.date_range(
'2014-01-01', periods=100, freq=freq))
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_linear,
forecast_models.model_constant],
l_model_season=None,
df_y=df_y,
date_start_actuals=None)
logger.info('Testing run forecast - default settings')
dict_result = run_l_forecast([conf1])
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
for include_all_fits in [False, True]:
logger.info('Testing run forecast - include_all_fits=%s',
include_all_fits)
dict_result = run_l_forecast([conf1],
include_all_fits=include_all_fits)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(
['source', 'model']).tail(1))
# TODO: ADD ASSERTS
def test_run_forecast_step(self):
# Setup
freq = 'D'
df_y1 = pd.DataFrame({'y': 5 * [10.0] + 5 * [20.0]},
index=pd.date_range('2014-01-01', periods=10,
freq=freq))
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_constant +
forecast_models.model_step],
l_model_season=None,
df_y=df_y1,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf1], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
# Test 2 : 2 steps
# Setup
freq = 'D'
df_y1 = pd.DataFrame({'y': [1., 1., 1., 1., 1., 1., 5., 5., 6., 6.]},
index=pd.date_range('2014-01-01',
periods=10, freq=freq))
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant +
forecast_models.model_two_steps],
l_model_season=None,
df_y=df_y1,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf1], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
def test_run_forecast_sigmoid_step(self):
# Setup
freq = 'D'
df_y1 = pd.DataFrame({'y': [10., 10.1, 10.2, 10.3, 10.4, 20.0, 20.1,
20.2, 20.3, 20.4, 20.5, 20.6]},
index=pd.date_range('2014-01-01', periods=12,
freq=freq))
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_sigmoid_step,
forecast_models.model_constant +
forecast_models.model_sigmoid_step,
forecast_models.model_linear +
forecast_models.model_sigmoid_step,
forecast_models.model_linear *
forecast_models.model_sigmoid_step],
l_model_season=None,
df_y=df_y1,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf1], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
# Same with negative step
df_y1 = pd.DataFrame({'y': [20.0, 20.1, 20.2, 20.3, 20.4, 20.5,
20.6, 10., 10.1, 10.2, 10.3, 10.4]},
index=pd.date_range('2014-01-01', periods=12,
freq=freq))
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_sigmoid_step,
forecast_models.model_constant +
forecast_models.model_sigmoid_step,
forecast_models.model_linear +
forecast_models.model_sigmoid_step,
forecast_models.model_linear *
forecast_models.model_sigmoid_step],
l_model_season=None,
df_y=df_y1,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf1], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
def test_run_forecast_fourier_yearly(self):
# Yearly sinusoidal function
# With daily samples
length = 2 * 365
# values will be 10 +- 10, plus uniform noise
a_date = pd.date_range(start='2018-01-01', freq='D', periods=length)
a_y = (10 + np.random.uniform(low=0, high=1, size=length) +
10 * (np.sin(np.linspace(-4 * np.pi, 4 * np.pi, length))))
df_y = pd.DataFrame({'y': a_y}, index=a_date)
conf = ForecastInput(
source_id='source',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_season_fourier_yearly,
forecast_models.model_constant +
forecast_models.model_season_fourier_yearly],
l_model_season=[
forecast_models.model_null],
df_y=df_y,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
df = df_data.loc[(df_data.model == 'actuals') | df_data.is_best_fit,
['y', 'date', 'model']]
df = df.pivot(values='y', columns='model', index='date')
if platform.system() != 'Darwin':
# matplotlib tests don't work on mac
df.plot()
length = 1 * 365
# values: two yearly harmonics around 10, plus uniform noise
a_date = pd.date_range(start='2018-01-01', freq='D', periods=length)
a_y = (10 + np.random.uniform(low=0, high=1, size=length) +
10 * (np.sin(np.linspace(-4 * np.pi, 4 * np.pi, length))) +
5 * (np.cos(np.linspace(-6 * np.pi, 6 * np.pi, length))))
df_y = pd.DataFrame({'y': a_y}, index=a_date)
conf = ForecastInput(
source_id='source',
l_model_trend=[
forecast_models.model_constant,
forecast_models.model_season_fourier_yearly,
forecast_models.model_constant +
forecast_models.model_season_fourier_yearly],
l_model_season=[
forecast_models.model_null],
df_y=df_y,
weights_y_values=1.0,
date_start_actuals=None)
dict_result = run_l_forecast([conf], include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(1))
df = df_data.loc[(df_data.model == 'actuals') | df_data.is_best_fit,
['y', 'date', 'model']]
df = df.pivot(values='y', columns='model', index='date')
if platform.system() != 'Darwin':
# matplotlib tests don't work on mac
df.plot()
# TODO find a better assertion test
pass
def test_run_forecast_sigmoid(self):
# Input parameters
b_in = 100.
c_in = 40.
d_in = 1.
# linear params
a_lin = 0.01
b_lin = 0.05
is_mult_l = [False, True]
def sigmoid(x, a, b, c, d):
y = a + (b - a) / (1 + np.exp(- d * (x - c)))
return y
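# a: left asymptote, b: right asymptote, c: inflection point,
# d: steepness of the transition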
a_x = np.arange(1, 100)
# linear to find
for is_mult in is_mult_l:
if is_mult:
a_in = 1
model = forecast_models.model_linear * forecast_models.model_sigmoid # noqa
y_lin = a_lin * a_x + b_lin
y_in = sigmoid(a_x, a_in, b_in, c_in, d_in) * y_lin
input_params = [a_lin, b_lin]
y_rand = np.random.uniform(
low=0.001, high=0.1 * b_in, size=len(a_x)) * y_lin
else:
a_in = 30 # the constant
model = forecast_models.model_constant + forecast_models.model_sigmoid # noqa
y_in = sigmoid(a_x, a_in, b_in, c_in, d_in)
input_params = [a_in]
y_rand = np.random.uniform(
low=0.001, high=0.1 * b_in, size=len(a_x))
input_params = input_params + [b_in - a_in, c_in, d_in]
y_in = y_rand + y_in
df_y = pd.DataFrame({'y': y_in}, index=a_x)
# SolverConfig with trend
conf1 = ForecastInput(
source_id='source1',
l_model_trend=[
forecast_models.model_constant,
# forecast_models.model_sigmoid,
model,
# forecast_models.model_linear + forecast_models.model_sigmoid, # noqa
# forecast_models.model_linear * forecast_models.model_sigmoid # noqa
],
l_model_season=None, df_y=df_y, weights_y_values=1.0,
date_start_actuals=None
)
dict_result = run_l_forecast([conf1],
include_all_fits=True)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
# df_optimize_info = dict_result['optimize_info']
df = df_data.loc[:, ['y', 'date', 'model']]
df = df.pivot(values='y', columns='model', index='date')
if platform.system() != 'Darwin':
# matplotlib tests don't work on mac
df.plot()
output_params = df_metadata.loc[df_metadata.is_best_fit,
'params_str']
logger.info('Input parameters: %s, Output parameters: %s',
input_params, output_params.iloc[0])
pass # to see the plot
def test_run_forecast_get_outliers(self):
# Test 1 - no outliers
a_y = [20.0, 20.1, 20.2, 20.3, 20.4, 20.5]
a_date = pd.date_range(start='2018-01-01', periods=len(a_y), freq='D')
df = pd.DataFrame({'y': a_y})
dict_result = run_forecast(
df,
find_outliers=True,
simplify_output=False,
include_all_fits=True,
season_add_mult='add')
logger_info('Metadata', dict_result['metadata'])
logger_info('data', dict_result['data'].tail(3))
# Check that dtype of y is not corrupted by None values from weight
# mask - this happens when no spikes found
self.assertTrue(np.issubdtype(dict_result['data'].y, np.float64))
# Test 2 - Single step
a_y = [19.8, 19.9, 20.0, 20.1, 20.2, 20.3, 20.4, 20.5,
20.6, 10., 10.1, 10.2, 10.3, 10.4,
10.5, 10.6, 10.7, 10.8, 10.9]
a_date = pd.date_range(start='2018-01-01', periods=len(a_y), freq='D')
df = pd.DataFrame({'y': a_y})
dict_result = run_forecast(
df,
find_outliers=True,
simplify_output=False,
include_all_fits=True,
season_add_mult='add')
logger_info('Metadata', dict_result['metadata'])
logger_info('data', dict_result['data'].tail(3))
# Check that dtype of y is not corrupted by None values from weight
# mask - this happens when no spikes found
self.assertTrue(np.issubdtype(dict_result['data'].y, np.float64))
# Test 3 - Single spike
a_y = [19.8, 19.9, 20.0, 20.1, 20.2, 20.3, 20.4, 20.5,
20.6, 10., 20.7, 20.8, 20.9, 21.0,
21.1, 21.2, 21.3, 21.4, 21.5]
a_date = pd.date_range(start='2018-01-01', periods=len(a_y), freq='D')
df_spike = pd.DataFrame({'y': a_y})
dict_result = run_forecast(
df_spike,
find_outliers=True,
simplify_output=False,
include_all_fits=True,
season_add_mult='add')
df_data = dict_result['data']
mask = df_data.loc[df_data.model == 'actuals'].weight
self.assert_array_equal(
mask, [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, ])
# Test 5 - 2 spikes and 1 step
a_y = [19.8, 19.9, 30.0, 30.1, 20.2, 20.3, 20.4, 20.5,
20.6, 10., 10.1, 10.2, 10.3, 10.4,
10.5, 10.6, 30.7, 10.8, 10.9]
df = pd.DataFrame({'y': a_y}).pipe(normalize_df)
dict_result = run_forecast(
df,
find_outliers=True,
simplify_output=False,
include_all_fits=True,
season_add_mult='add')
logger_info('Metadata', dict_result['metadata'])
df_result = dict_result['data']
logger_info('data', df_result.tail(3))
mask = df_result.loc[df_result.model == 'actuals'].weight
self.assert_array_equal(
mask, [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1])
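# Spikes (indices 2-3 and 16) are zero-weighted by find_outliers;
# the level shift starting at index 9 keeps weight 1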
def test_run_forecast_auto_season(self):
"""Test run_forecast with automatic seasonality detection"""
# Yearly sinusoidal function
# With daily samples
length = 2 * 365
# values will be 10 +- 10, plus uniform noise
a_date = pd.date_range(start='2018-01-01', freq='D', periods=length)
a_y = (10 + np.random.uniform(low=0, high=1, size=length) +
10 * (np.sin(np.linspace(-4 * np.pi, 4 * np.pi, length))))
df_y = pd.DataFrame({'y': a_y}, index=a_date)
dict_result = run_forecast(
df_y,
season_add_mult='add',
simplify_output=False,
include_all_fits=True,
l_model_trend=[forecast_models.model_linear],
l_model_naive=[]
)
df_metadata = dict_result['metadata']
l_model_expected = sorted(
['linear',
'(linear+(season_wday+season_fourier_yearly))',
'(linear+season_wday)',
'(linear+season_fourier_yearly)'])
self.assert_array_equal(df_metadata.model, l_model_expected)
logger_info('df_metadata:', df_metadata)
# As above, with additive and multiplicative seasonality
dict_result = run_forecast(
df_y,
season_add_mult='both',
simplify_output=False,
include_all_fits=True,
l_model_trend=[forecast_models.model_linear],
l_model_naive=[]
)
df_metadata = dict_result['metadata']
l_model_expected = np.array([
'linear',
'(linear+season_fourier_yearly)',
'(linear+(season_wday+season_fourier_yearly))',
'(linear+season_wday)',
'(linear*season_fourier_yearly)',
'(linear*season_wday)',
'(linear*(season_wday*season_fourier_yearly))',
])
l_model_expected.sort()
self.assert_array_equal(df_metadata.model.values, l_model_expected)
logger_info('df_metadata:', df_metadata)
def test_run_forecast_auto_composition(self):
"""Test run_forecast with automatic model composition detection"""
np.random.seed(1) # Ensure predictable test results
logger.info('Test 1 - detect additive model')
# With daily samples
length = 2 * 365
# values will be 100 +- 10, plus uniform noise
a_date = pd.date_range(start='2018-01-01', freq='D', periods=length)
a_y = (100 + np.random.uniform(low=0, high=1, size=length) +
10 * (np.sin(np.linspace(-4 * np.pi, 4 * np.pi, length))))
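# Constant seasonal amplitude on a flat level: 'auto' should resolve
# to the additive compositions asserted below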
df_y = pd.DataFrame({'y': a_y}, index=a_date)
dict_result = run_forecast(
df_y,
season_add_mult='auto',
simplify_output=False,
include_all_fits=True,
l_model_trend=[forecast_models.model_linear],
l_model_naive=[]
)
df_metadata = dict_result['metadata']
l_model_expected = sorted(
['linear',
'(linear+(season_wday+season_fourier_yearly))',
'(linear+season_wday)',
'(linear+season_fourier_yearly)'])
self.assert_array_equal(df_metadata.model, l_model_expected)
logger_info('df_metadata:', df_metadata)
logger.info('Test 2 - detect multiplicative model')
# With daily samples
length = 2 * 365
# values: rising trend with seasonal amplitude proportional to the level,
# so 'auto' should resolve to the multiplicative compositions below
a_date = pd.date_range(start='2018-01-01', freq='D', periods=length)
a_y = (1000 +
0.1 * np.arange(length) *
(1 + (np.sin(np.linspace(-40 * np.pi, 40 * np.pi, length)))))
df_y = pd.DataFrame({'y': a_y}, index=a_date)
dict_result = run_forecast(
df_y,
season_add_mult='auto',
simplify_output=False,
include_all_fits=True,
l_model_trend=[forecast_models.model_linear],
l_model_naive=[]
)
df_metadata = dict_result['metadata']
l_model_expected = sorted(
['linear',
'(linear*(season_wday*season_fourier_yearly))',
'(linear*season_wday)',
'(linear*season_fourier_yearly)'])
self.assert_array_equal(df_metadata.model, l_model_expected)
logger_info('df_metadata:', df_metadata)
def test_run_forecast_with_weight(self):
df1 = pd.DataFrame({'y': np.arange(0, 10.), 'date': pd.date_range(
'2014-01-01', periods=10, freq='D'), 'weight': 1.})
dict_result = run_forecast(
simplify_output=False,
df_y=df1,
l_model_trend=[
forecast_models.model_linear],
extrapolate_years=10. / 365,
l_model_naive=[]
)
df_forecast = dict_result['forecast']
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_forecast:', df_forecast.groupby(
['source', 'model']).tail(30))
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(30))
df_forecast_filtered = df_forecast.loc[~df_forecast.is_actuals & (
df_forecast.date > '2014-01-10')]
self.assert_series_equal(
df_forecast_filtered.y,
df_forecast_filtered.q5)
df1b = df1.copy()
df1b.loc[0, 'weight'] = 0.
dict_result = run_forecast(
simplify_output=False,
df_y=df1b,
l_model_trend=[
forecast_models.model_linear],
extrapolate_years=10. / 365,
l_model_naive=[]
)
df_forecast = dict_result['forecast']
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_forecast:', df_forecast.groupby(
['source', 'model']).tail(30))
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(30))
len_forecast = df_data.loc[~df_data.is_actuals].index.size
# First sample shouldn't be included due to weight=0
self.assertEqual(len_forecast, 19)
# Since fit is perfect, prediction interval should be equal to point
# forecast
df_forecast_filtered = df_forecast.loc[~df_forecast.is_actuals & (
df_forecast.date > '2014-01-10')]
self.assert_series_equal(
df_forecast_filtered.y,
df_forecast_filtered.q5)
# Test with model_ramp
# Param A of model_ramp needs to be within the 15-85 percentile
# of valid x values
# Before a bugfix, we would get initial guesses of A=2,
# with boundaries (5.6, 8.4)
# Note: somehow validate bounds doesn't catch this!
df1c = df1.copy()
df1c.loc[0:4, 'weight'] = 0.
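# With the first 5 weights zeroed, valid x values are 5..9, so the
# 15-85 percentile window for param A is roughly (5.6, 8.4) - the
# initial guess must fall inside it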
dict_result = run_forecast(
simplify_output=False,
df_y=df1c,
l_model_trend=[
forecast_models.model_ramp],
extrapolate_years=10. / 365,
l_model_naive=[]
)
df_forecast = dict_result['forecast']
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_forecast:', df_forecast.groupby(
['source', 'model']).tail(30))
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(30))
len_forecast = df_data.loc[~df_data.is_actuals].index.size
# First 5 samples shouldn't be included due to weight=0
self.assertEqual(len_forecast, 15)
# # Since fit is perfect, prediction interval should be equal to
# point forecast
# df_forecast_filtered = df_forecast.loc[~df_forecast.is_actuals &
# (df_forecast.date>'2014-01-10')]
# self.assert_series_equal(df_forecast_filtered.y,
# df_forecast_filtered.q5)
def test_detect_freq(self):
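"""Test detect_freq with DatetimeIndex inputs, dataframes with a date
column, and sparse date series"""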
# Initial test - what happens with single sample input?
a_date = pd.date_range('2014-01-01', periods=1, freq='H')
result = detect_freq(a_date)
# self.assertEqual(result, 'H')
a_date = pd.date_range(
'2014-01-01', periods=24 * 7, freq='H')
result = detect_freq(a_date)
self.assertEqual(result, 'H')
a_date = pd.date_range(
'2014-01-01', periods=4 * 365, freq='D')
result = detect_freq(a_date)
self.assertEqual(result, 'D')
l_freq_wday = [
'W-MON',
'W-TUE',
'W-WED',
'W-THU',
'W-FRI',
'W-SAT',
'W-SUN']
for freq_wday in l_freq_wday:
a_date = pd.date_range(
'2014-01-01', periods=4 * 52, freq=freq_wday)
result = detect_freq(a_date)
self.assertEqual(result, freq_wday)
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='M')
result = detect_freq(a_date)
self.assertEqual(result, 'M')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='MS')
result = detect_freq(a_date)
self.assertEqual(result, 'MS')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='Q')
result = detect_freq(a_date)
self.assertEqual(result, 'Q')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='Y')
result = detect_freq(a_date)
self.assertEqual(result, 'Y')
# Test with input dataframe
a_date = pd.date_range(
'2014-01-01', periods=24 * 7, freq='H')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'H')
a_date = pd.date_range(
'2014-01-01', periods=4 * 365, freq='D')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'D')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='M')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'M')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='Q')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'Q')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='Y')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'Y')
a_date = pd.date_range(
'2014-01-01', periods=4 * 12, freq='YS')
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'YS')
# Test with sparse input series
a_date = pd.to_datetime(['2018-08-01', '2018-08-09'])
df_y = pd.DataFrame({'date': a_date})
result = detect_freq(df_y)
self.assertEqual(result, 'D')
# TODO: ADD TEST WITH NULL VALUES, E.G. MODEL_NAIVE_WDAY
def test_get_pi(self):
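"""Test get_pi: prediction-interval columns (e.g. q5) should be added
to forecast rows past the last actuals date"""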
def check_result(df_result):
self.assertTrue('q5' in df_result.columns)
df_result_actuals = df_result.loc[df_result.is_actuals]
if 'is_weight' in df_result_actuals.columns:
df_result_actuals = df_result_actuals.loc[
~df_result_actuals.is_weight]
date_max_actuals = df_result_actuals.date.max()
logger_info('debug: date max actuals', date_max_actuals)
df_result_forecast = df_result.loc[
~df_result.is_actuals & (df_result.date > date_max_actuals)]
self.assertFalse(df_result_forecast.q5.isnull().any())
# First test with single source
# then test applied function on df grouped by source
logger.info('Test 1a - Single source')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], 10) * np.arange(2, 0., -0.1))
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df1 = pd.concat([df_actuals, df_fcast], ignore_index=True, sort=False)
df_result = get_pi(df1, n_sims=100)
logger_info('df_result1:', df_result.groupby(
['source', 'model']).head(2))
logger_info('df_result1:', df_result.groupby(
['source', 'model']).tail(2))
# TODO: Add checks
check_result(df_result)
logger.info('Test 1b - n_cum>1')
df_result = get_pi(df1, n_sims=100, n_cum=7)
logger_info('df_result1:', df_result.groupby(
['source', 'model']).head(2))
logger_info('df_result1:', df_result.groupby(
['source', 'model']).tail(2))
# TODO: Add checks
check_result(df_result)
# Test 1c - input dataframe without is_best_fit column, source column
logger.info('Test 1c - Single source, no source column')
df1c = df1[['date', 'is_actuals', 'model', 'y']]
df_result = get_pi(df1c, n_sims=100)
# logger_info('df_result1:', df_result1)
logger_info('df_result1:', df_result.groupby(
['model']).head(2))
logger_info('df_result1:', df_result.groupby(
['model']).tail(2))
check_result(df_result)
logger.info('Test 2 - 2 sources')
df1b = df1.copy()
df1b.source = 's2'
df2 = pd.concat([df1, df1b], sort=False)
df_result = get_pi(df2, n_sims=100)
# logger_info('df_result2:', df_result2)
logger_info('df_result:', df_result.groupby(
['source', 'model']).head(2))
logger_info('df_result:', df_result.groupby(
['source', 'model']).tail(2))
# TODO: Add checks
check_result(df_result)
logger.info('Test 2b - n_cum>1')
df_result = get_pi(df2, n_sims=100, n_cum=5)
# logger_info('df_result2:', df_result2)
logger_info('df_result:', df_result.groupby(
['source', 'model']).head(2))
logger_info('df_result:', df_result.groupby(
['source', 'model']).tail(2))
# TODO: Add checks
check_result(df_result)
# Test 3 - Input has actuals but no forecast - can happen if fit not
# possible
logger.info('Test 3 - Input missing forecast')
df3 = df_actuals
df_result = get_pi(df3, n_sims=100)
self.assertIsNotNone(df_result)
self.assertFalse('q5' in df_result.columns)
# logger_info('df_result1:', df_result1)
logger_info('df_result3:', df_result.groupby(
['source', 'model']).head(1))
logger_info('df_result3:', df_result.groupby(
['source', 'model']).tail(1))
logger.info('Test 4 - Input with nulls at end')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], 10) * np.arange(2, 0., -0.1))
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df1 = pd.concat([df_actuals, df_fcast], ignore_index=True, sort=False)
df_result = get_pi(df1, n_sims=100)
a_date_actuals_withnull = pd.date_range(
'2014-01-01', periods=20, freq='W')
a_y_actuals_withnull = np.concatenate(
[np.arange(0, 10.), np.full(10, np.NaN)])
df_actuals_withnull = (
pd.DataFrame({'date': a_date_actuals_withnull,
'y': a_y_actuals_withnull,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date_withnull = pd.date_range('2014-01-01', periods=20, freq='W')
df1_withnull = pd.concat(
[df_actuals_withnull, df_fcast], ignore_index=True, sort=False)
df_result_withnull = get_pi(df1_withnull, n_sims=100)
logger_info('df_result:', df_result.groupby(
['source', 'model']).tail(3))
logger_info('df_result with null:', df_result_withnull.groupby(
['source', 'model']).tail(3))
# Prediction intervals are random, so we need to exclude them from
# comparison
self.assert_frame_equal(df_result[['date',
'source',
'is_actuals',
'model',
'y']],
df_result_withnull[['date',
'source',
'is_actuals',
'model',
'y']])
logger.info('Test 4b - Input with nulls at end, weight column')
df_weight = (pd.DataFrame({'date': a_date,
'y': 1,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear',
'weight': 1.0}))
df_weight_withnull = (
pd.DataFrame({'date': a_date_withnull, 'y': 1,
'source': 's1', 'is_actuals': False,
'is_best_fit': True, 'model': 'linear',
'weight': 1.0})
)
df1['weight'] = 1.
df1_withnull['weight'] = 1.
df1b = pd.concat([df1, df_weight], ignore_index=True, sort=False)
df1b_withnull = pd.concat(
[df1_withnull, df_weight_withnull], ignore_index=True, sort=False)
df_result_b = get_pi(df1b, n_sims=100)
df_result_b_withnull = get_pi(df1b_withnull, n_sims=100)
logger_info('df_result b :', df_result_b.groupby(
['source', 'model']).tail(3))
logger_info('df_result b with null:',
df_result_b_withnull.groupby(['source', 'model']).tail(3))
# Prediction intervals are random, so we need to exclude them from
# comparison
self.assert_frame_equal(
df_result_b[['date', 'source', 'is_actuals', 'model', 'y']],
df_result_b_withnull[['date', 'source', 'is_actuals',
'model', 'y']])
check_result(df_result_b)
check_result(df_result_b_withnull)
logger.info('Test 4c - Input with nulls at start')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], (10)) * np.arange(2, 0., -0.1))
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df1 = pd.concat([df_actuals, df_fcast], ignore_index=True, sort=False)
df_result = get_pi(df1, n_sims=100)
a_date_actuals_withnull = pd.date_range(
'2014-01-01', periods=10, freq='W')
a_y_actuals_withnull = np.concatenate(
[np.full(5, np.NaN), np.arange(0, 5.)])
df_actuals_withnull = (
pd.DataFrame({'date': a_date_actuals_withnull,
'y': a_y_actuals_withnull,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date_withnull = pd.date_range('2014-01-01', periods=10, freq='W')
df1_withnull = pd.concat(
[df_actuals_withnull, df_fcast], ignore_index=True, sort=False)
df_result_withnull = get_pi(df1_withnull, n_sims=100)
logger_info('df_actuals_withnull:', df_actuals_withnull.groupby(
['source', 'model']).head(20))
logger_info('df_result:', df_result.groupby(
['source', 'model']).tail(3))
logger_info('df_result with null:', df_result_withnull.groupby(
['source', 'model']).tail(100))
# TODO: add proper expected value, uncomment assert
# self.assert_frame_equal(df_result[['date', 'source', 'is_actuals',
# 'model', 'y']],
# df_result_withnull[['date', 'source', 'is_actuals', 'model', 'y']])
logger.info('Test 4d - Input with nulls at start of forecast')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], (10)) * np.arange(2, 0., -0.1))
a_y_withnull = np.concatenate(
[np.full(5, np.nan), np.arange(0, 15.)])
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df_fcast_withnull = (
pd.DataFrame({'date': a_date, 'y': a_y_withnull,
'source': 's1', 'is_actuals': False,
'is_best_fit': True, 'model': 'linear'})
)
df1 = pd.concat([df_actuals, df_fcast], ignore_index=True, sort=False)
df_result = get_pi(df1, n_sims=100)
df1_withnull = pd.concat(
[df_actuals, df_fcast_withnull], ignore_index=True, sort=False)
df_result_withnull = get_pi(df1_withnull, n_sims=100)
logger_info('df_fcast_withnull:', df_fcast_withnull.groupby(
['source', 'model']).head(20))
logger_info('df_result:', df_result.groupby(
['source', 'model']).tail(100))
logger_info('df_result with null:', df_result_withnull.groupby(
['source', 'model']).tail(100))
# Prediction intervals are random,
# so we need to exclude them from comparison
# self.assert_frame_equal(
# df_result[['date', 'source', 'is_actuals', 'model', 'y']],
# df_result_withnull[['date', 'source', 'is_actuals', 'model', 'y']])
# TODO: add a valid check
def test_get_pi_gap(self):
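# Checks that get_pi handles actuals with date gaps, nulls and
# zero-weight rows; only the weight-0 case (Test 3) is asserted -
# the other two scenarios are currently disabled below.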
def check_result(df_result):
self.assertTrue('q5' in df_result.columns)
logger.info('Test 1 - Input has gaps')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], (10)) * np.arange(2, 0., -0.1))
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df_actuals_gap = pd.concat([df_actuals.head(3), df_actuals.tail(3)])
df = pd.concat([df_actuals_gap, df_fcast],
ignore_index=True, sort=False)
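# NOTE: Tests 1 (gaps) and 2 (nulls) below are currently disabled;
# the gap input df is still built above so Test 1 can be re-enabled
# without further changes.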
# df_result = get_pi(df, n_sims=100)
# # logger_info('df_result1:', df_result1)
# logger_info('df_result1:', df_result.groupby(
# ['source', 'model']).head(2))
# logger_info('df_result1:', df_result.groupby(
# ['source', 'model']).tail(2))
#
# check_result(df_result)
#
# logger.info('Test 2 - Input has nulls')
# df_actuals_null = df_actuals.copy()
# df_actuals_null.loc[5, 'y'] = np.nan
#
# logger_info('df_actuals_null:', df_actuals_null)
#
# df = pd.concat([df_actuals_null, df_fcast],
# ignore_index=True, sort=False)
#
# df_result = get_pi(df, n_sims=100)
# # logger_info('df_result1:', df_result1)
# logger_info('df_result2:', df_result.groupby(
# ['source', 'model']).head(20))
# logger_info('df_result2:', df_result.groupby(
# ['source', 'model']).tail(20))
#
# self.assertFalse(
# df_result.loc[df_result.date >
# df_actuals.date.max()].q5.isnull().any())
#
# check_result(df_result)
logger.info('Test 3 - Input has weight 0')
df_actuals_weight0 = df_actuals.copy()
df_actuals_weight0['weight'] = 1.
df_actuals_weight0.loc[5, 'weight'] = 0.
df_actuals_weight0.loc[5, 'y'] = -5000.
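# Row 5 carries an extreme outlier but zero weight; a working weight
# filter must keep it out of the fit and the prediction intervals.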
logger_info('df_actuals_weight0:', df_actuals_weight0)
df = pd.concat([df_actuals_weight0, df_fcast],
ignore_index=True, sort=False)
df_result = get_pi(df, n_sims=100)
logger_info('df_result3 head:', df_result.groupby(
['source', 'model']).head(20))
logger_info('df_result3 tail:', df_result.groupby(
['source', 'model']).tail(20))
self.assertFalse(
df_result.loc[df_result.date >
df_actuals.date.max()].q5.isnull().any())
check_result(df_result)
def test_forecast_pi_weight0(self):
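# End-to-end check that run_forecast ignores zero-weight outliers
# when computing prediction intervals.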
logger.info('Test 1 - Input has weight 0')
a_date_actuals = pd.date_range('2014-01-01', periods=10, freq='W')
a_y_actuals = np.arange(0, 10.)
df_actuals = (
pd.DataFrame({'date': a_date_actuals, 'y': a_y_actuals,
'source': 's1', 'is_actuals': True,
'is_best_fit': False, 'model': 'actuals'})
)
a_date = pd.date_range('2014-01-01', periods=20, freq='W')
a_y = np.arange(0, 20.) + (
np.tile([-1, 1], (10)) * np.arange(2, 0., -0.1))
df_fcast = (pd.DataFrame({'date': a_date,
'y': a_y,
'source': 's1',
'is_actuals': False,
'is_best_fit': True,
'model': 'linear'}))
df_actuals_weight0 = df_actuals.copy()
df_actuals_weight0['weight'] = 1.
df_actuals_weight0.loc[5, 'weight'] = 0.
df_actuals_weight0.loc[5, 'y'] = -5000.
logger_info('df_actuals_weight0:', df_actuals_weight0)
df_result = run_forecast(
df_actuals_weight0,
extrapolate_years=0,
l_model_season=[],
simplify_output=True)
logger_info('df_result', df_result)
# If the weight filter failed, the -5000 outlier would drag q5 far lower
self.assertGreater(df_result.q5.min(), -4000)
def test_forecast_pi_missing(self):
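# Runs run_forecast on the candy production sample and checks that
# prediction-interval columns (q5) are added, with and without the
# n_cum parameter and with one or two sources.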
logger.info('Test1a - Generate forecast, check PI is added')
path_candy = os.path.join(base_folder, 'candy_production.csv')
df_monthly_candy = pd.read_csv(path_candy).head(100)
dict_result = run_forecast(
df_monthly_candy,
col_name_y='IPG3113N',
col_name_date='observation_date',
extrapolate_years=2,
l_model_season=[],
simplify_output=False)
df_fcast = dict_result.get('forecast')
logger_info('df_fcast: ', df_fcast.tail())
self.assertIn('q5', df_fcast.columns)
logger.info('Test1b - Generate forecast with n_cum=30')
dict_result = run_forecast(
df_monthly_candy,
col_name_y='IPG3113N',
col_name_date='observation_date',
extrapolate_years=2,
l_model_season=[],
simplify_output=False,
n_cum=30
)
df_fcast = dict_result.get('forecast')
logger_info('df_result2:', df_fcast.groupby(
['source', 'model']).tail(2))
self.assertIn('q5', df_fcast.columns)
logger.info('Test2a - Generate forecast, multiple models,'
' check PI is added')
df_monthly_candy2 = pd.concat(
[df_monthly_candy.assign(source='s1'),
df_monthly_candy.assign(source='s2')]
)
dict_result = run_forecast(
df_monthly_candy2,
col_name_y='IPG3113N',
col_name_date='observation_date',
extrapolate_years=2,
l_model_season=[],
simplify_output=False)
df_fcast = dict_result.get('forecast')
logger_info('df_result head:', df_fcast.groupby(
['source', 'model']).head(2))
logger_info('df_result tail :', df_fcast.groupby(
['source', 'model']).tail(2))
self.assertIn('q5', df_fcast.columns)
logger.info('Test2b - Generate forecast with n_cum=300')
dict_result = run_forecast(
df_monthly_candy2,
col_name_y='IPG3113N',
col_name_date='observation_date',
extrapolate_years=2,
l_model_season=[],
simplify_output=False,
n_cum=300
)
df_fcast = dict_result.get('forecast')
logger_info('df_result head:', df_fcast.groupby(
['source', 'model']).head(2))
logger_info('df_result tail :', df_fcast.groupby(
['source', 'model']).tail(2))
self.assertIn('q5', df_fcast.columns)
def test_run_forecast_yearly_model(self):
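# Yearly-frequency input ('YS') fitted with a linear trend model, run
# with one source, with two sources, and with simplified output.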
df1 = pd.DataFrame({'y': np.arange(0, 10.), 'date': pd.date_range(
'2000-01-01', periods=10, freq='YS')})
dict_result = run_forecast(
simplify_output=False,
df_y=df1,
l_model_trend=[
forecast_models.model_linear],
extrapolate_years=10.)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(30))
# Repeat test - 2 sources
df1a = df1.copy()
df1b = df1.copy()
df1a['source'] = 'src1'
df1b['source'] = 'src2'
df2 = pd.concat([df1a, df1b], sort=False, ignore_index=True)
logger_info('df input:', df2)
dict_result = run_forecast(
simplify_output=False,
df_y=df2,
l_model_trend=[
forecast_models.model_linear],
extrapolate_years=10.)
df_data = dict_result['data']
df_metadata = dict_result['metadata']
df_optimize_info = dict_result['optimize_info']
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize_info)
logger_info('df_data:', df_data.groupby(['source', 'model']).tail(60))
# Same, with simplify_output=True
df_result = run_forecast(
simplify_output=True,
df_y=df2,
l_model_trend=[
forecast_models.model_linear],
extrapolate_years=10.)
logger_info('df_result:', df_result)
def test_run_forecast_validate_input(self):
# -- TEST 1 - Validate model_season_wday
# Test that seasonality models applied to time series with
# period-aligned gaps (e.g. missing Fridays) are rejected with an
# INPUT_ERR status instead of producing a fit.
# Original time series, no gaps
df1 = pd.DataFrame({'y': np.full(14, 10.0), 'source': 'src1',
'date': pd.date_range('2018-01-01',
periods=14, freq='D')})
# Copy of df1 with gaps on same weekday
df2 = df1.copy()
df2['weight'] = array_zeros_in_indices(14, [5, 12])
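# Indices 5 and 12 are 7 days apart, so both zero weights fall on the
# same weekday and model_season_wday has no sample for that day.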
df2['source'] = 'src2'
dict_forecast1 = run_forecast(
df1,
extrapolate_years=0.1,
simplify_output=False,
l_model_trend=(forecast_models.model_linear +
forecast_models.model_season_wday),
l_model_season=[],
l_model_naive=[],
include_all_fits=True)
df_forecast1 = dict_forecast1['data'] # This works fine
# In this case, we don't get a fit
dict_forecast2 = run_forecast(
df2,
extrapolate_years=0.1,
simplify_output=False,
l_model_trend=(forecast_models.model_linear +
forecast_models.model_season_wday),
l_model_season=[],
l_model_naive=[],
include_all_fits=True)
df_metadata = dict_forecast2.get('metadata')
logger_info('df_metadata:', df_metadata)
self.assert_series_equal(df_metadata.status, pd.Series('INPUT_ERR'))
df_data = dict_forecast2.get('data')
logger_info('df_data:', df_data)
# Output only includes actuals due to no fit
self.assertEqual(df_data.index.size, 14)
def test_run_forecast_linalgerror(self):
# Testing a dataset that raises a linalgerror from run_forecast()
path_df_in = os.path.join(base_folder,
'df_test_forecast_linalgerror.csv')
df_in = pd.read_csv(path_df_in, parse_dates=['date'])
dict_result = run_forecast(df_in, simplify_output=False,
include_all_fits=True)
df_metadata = dict_result['metadata']
logger_info('df_metadata', df_metadata)
def test_run_forecast_cache(self):
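# Fits the same model list with and without the model cache and logs
# the metadata and optimizer info side by side so fit times can be
# compared.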
a_date = pd.date_range('2020-01-01', '2022-01-01')
a_x = np.arange(0, a_date.size)
a_y = a_date.month * 10 + a_date.weekday
df_in = pd.DataFrame(data=dict(date=a_date, x=a_x, y=a_y))
l_models = [
forecast_models.model_season_month,
forecast_models.model_season_wday,
forecast_models.model_season_month +
forecast_models.model_season_wday,
forecast_models.model_season_fourier_yearly,
forecast_models.model_calendar_uk,
forecast_models.model_season_month +
forecast_models.model_calendar_uk,
]
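# The list mixes single and composite models, so with use_cache=True
# the composite fits can presumably reuse cached component results.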
def run_fcast(use_cache=True):
dict_result = run_forecast(
simplify_output=False,
include_all_fits=True,
df_y=df_in,
l_model_trend=l_models,
l_model_season=[],
l_model_naive=[],
l_model_calendar=[],
extrapolate_years=2.,
use_cache=use_cache
)
return dict_result
dict_result_cache = run_fcast(True)
dict_result_no_cache = run_fcast(False)
df_metadata = (
pd.concat([
dict_result_cache.get('metadata').assign(use_cache=True),
dict_result_no_cache.get('metadata').assign(use_cache=False),
], ignore_index=True)
.sort_values(['model', 'use_cache'])
)
df_optimize = (
pd.concat([
dict_result_cache.get('optimize_info').assign(use_cache=True),
dict_result_no_cache.get('optimize_info').assign(
use_cache=False),
], ignore_index=True)
.sort_values(['model', 'use_cache'])
)
logger_info('df_metadata:', df_metadata)
logger_info('df_optimize_info:', df_optimize)
df_fit_time = df_metadata[['model', 'use_cache', 'fit_time']]
logger_info('df_fit_time:', df_fit_time)
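# No assertion on timings yet; a possible (hypothetical) check, if fit
# times prove stable enough in CI, could look like:
# mean_time = df_fit_time.groupby('use_cache').fit_time.mean()
# self.assertLessEqual(mean_time[True], mean_time[False] * 1.5)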
| 39.545025 | 109 | 0.540746 | 16,903 | 133,939 | 3.983849 | 0.039283 | 0.045442 | 0.029938 | 0.017152 | 0.823965 | 0.79125 | 0.768351 | 0.752536 | 0.732369 | 0.712796 | 0 | 0.043409 | 0.332666 | 133,939 | 3,386 | 110 | 39.556704 | 0.709975 | 0.093722 | 0 | 0.658613 | 0 | 0 | 0.107753 | 0.003562 | 0 | 0 | 0 | 0.000295 | 0.032346 | 1 | 0.024162 | false | 0.001169 | 0.001949 | 0.000779 | 0.031956 | 0.000779 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
603a50d479ed0e82b755d1f5c83218a6bfc6d518 | 76 | py | Python | tests/modules/core/test_time.py | alexsr/bumblebee-status | f8d035c0798621d8f6b33b262aecb42236658bd0 | [
"MIT"
] | null | null | null | tests/modules/core/test_time.py | alexsr/bumblebee-status | f8d035c0798621d8f6b33b262aecb42236658bd0 | [
"MIT"
] | null | null | null | tests/modules/core/test_time.py | alexsr/bumblebee-status | f8d035c0798621d8f6b33b262aecb42236658bd0 | [
"MIT"
] | null | null | null | import pytest
def test_load_module():
    __import__("modules.core.time")
| 12.666667 | 35 | 0.736842 | 10 | 76 | 5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144737 | 76 | 5 | 36 | 15.2 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0.226667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.666667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |