hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e1fc75f3bdb2aa7712e02b2d6a9ae2e2a59fd173 | 39 | py | Python | kite-go/lang/python/pythonparser/epytext/testdata/empty.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 17 | 2022-01-10T11:01:50.000Z | 2022-03-25T03:21:08.000Z | kite-go/lang/python/pythonparser/epytext/testdata/empty.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 1 | 2022-01-13T14:28:47.000Z | 2022-01-13T14:28:47.000Z | kite-go/lang/python/pythonparser/epytext/testdata/empty.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 7 | 2022-01-07T03:58:10.000Z | 2022-03-24T07:38:20.000Z | def example():
"""
"""
return 1
| 6.5 | 14 | 0.435897 | 4 | 39 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.333333 | 39 | 5 | 15 | 7.8 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
c0098afa8ff94b069a274289845e03eeb0106646 | 65 | py | Python | scraper/__init__.py | rodartha/AIbbeyRoad | 21d5c24731b7069374e873db5e9bb938a4cc314a | [
"MIT"
] | null | null | null | scraper/__init__.py | rodartha/AIbbeyRoad | 21d5c24731b7069374e873db5e9bb938a4cc314a | [
"MIT"
] | null | null | null | scraper/__init__.py | rodartha/AIbbeyRoad | 21d5c24731b7069374e873db5e9bb938a4cc314a | [
"MIT"
] | null | null | null | from secret import ACCESS_TOKEN
from scraper import scrape
| 21.666667 | 34 | 0.846154 | 11 | 65 | 4.909091 | 0.727273 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123077 | 65 | 2 | 35 | 32.5 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c043b330f7d7b4e7f8c9c9596ddabfa8d7dd5ac2 | 11,567 | py | Python | proxy_requests.py | rootVIII/proxy_requests | 742e0c4357b7a8c1eddd88c2a9bc3e4f2457469c | [
"MIT"
] | 386 | 2018-08-05T01:33:27.000Z | 2022-03-24T04:04:54.000Z | proxy_requests.py | rootVIII/proxy_requests | 742e0c4357b7a8c1eddd88c2a9bc3e4f2457469c | [
"MIT"
] | 20 | 2018-08-05T08:35:21.000Z | 2021-03-01T04:21:16.000Z | proxy_requests.py | rootVIII/proxy_requests | 742e0c4357b7a8c1eddd88c2a9bc3e4f2457469c | [
"MIT"
] | 57 | 2018-08-05T02:41:51.000Z | 2022-02-09T07:21:10.000Z | import requests
from random import randint
from re import findall
# rootVIII
# pycodestyle validated
# 2018-2020
class ProxyRequests:
def __init__(self, url):
self.url = url
self.sockets = []
self.rdata = {
'headers': {},
'json': {},
'status_code': 0,
'timeout': 3.0,
'errs': [
'ConnectTimeout',
'ProxyError',
'SSLError',
'ReadTimeout',
'ConnectionError',
'ConnectTimeoutError'
]
}
self.empty_warn = 'Proxy Pool has been emptied'
self._acquire_sockets()
def _acquire_sockets(self):
r = requests.get('https://www.sslproxies.org/')
matches = findall(r"<td>\d+\.\d+\.\d+\.\d+</td><td>\d+</td>", r.text)
revised = [m.replace('<td>', '') for m in matches]
self.sockets = [s[:-5].replace('</td>', ':') for s in revised]
def _set_request_data(self, req, socket):
self.rdata['request'] = req.text
self.rdata['headers'] = req.headers
self.rdata['status_code'] = req.status_code
self.rdata['url'] = req.url
self.rdata['raw'] = req.content
self.rdata['proxy'] = socket
try:
self.rdata['json'] = req.json()
except Exception as err:
self.rdata['json'] = {type(err).__name__: str(err)}
def _rand_sock(self):
return randint(0, len(self.sockets) - 1)
def _is_err(self, err):
if type(err).__name__ not in self.rdata['errs']:
raise err
def _limit_succeeded(self):
raise Exception(self.empty_warn)
def get(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.get(
self.url,
timeout=self.rdata['timeout'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.get()
else:
self._limit_succeeded()
def get_with_headers(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.get(
self.url,
timeout=self.rdata['timeout'],
proxies=proxies,
headers=self.rdata['headers'])
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.get_with_headers()
else:
self._limit_succeeded()
def post(self, data):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
json=data,
timeout=self.rdata['timeout'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post(data)
else:
self._limit_succeeded()
def post_with_headers(self, data):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
json=data,
timeout=self.rdata['timeout'],
headers=self.rdata['headers'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_with_headers(data)
else:
self._limit_succeeded()
def post_file(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
proxies=proxies,
timeout=self.rdata['timeout'],
files={'upload_file': open(self.rdata['file'], 'rb')})
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_file()
else:
self._limit_succeeded()
def post_file_with_headers(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
files={'upload_file': open(self.rdata['file'], 'rb')},
timeout=self.rdata['timeout'],
headers=self.rdata['headers'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_file_with_headers()
else:
self._limit_succeeded()
def get_headers(self):
return self.rdata['headers']
def set_headers(self, outgoing_headers):
self.rdata['headers'] = outgoing_headers
def set_file(self, outgoing_file):
self.rdata['file'] = outgoing_file
def get_status_code(self):
return self.rdata['status_code']
def get_proxy_used(self):
return self.rdata['proxy']
def get_raw(self):
return self.rdata['raw']
def get_json(self):
return self.rdata['json']
def get_url(self):
return self.rdata['url']
def __str__(self):
return str(self.rdata['request'])
class ProxyRequestsBasicAuth(ProxyRequests):
def __init__(self, url, username, password):
super().__init__(url)
self.username = username
self.password = password
def get(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.get(
self.url,
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.get()
else:
self._limit_succeeded()
def get_with_headers(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.get(
self.url,
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
proxies=proxies,
headers=self.rdata['headers'])
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.get_with_headers()
else:
self._limit_succeeded()
def post(self, data):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
json=data,
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post(data)
else:
self._limit_succeeded()
def post_with_headers(self, data):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
json=data,
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
headers=self.rdata['headers'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_with_headers(data)
else:
self._limit_succeeded()
def post_file(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
files={'upload_file': open(self.rdata['file'], 'rb')},
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_file()
else:
self._limit_succeeded()
def post_file_with_headers(self):
if len(self.sockets) > 0:
current_socket = self.sockets.pop(self._rand_sock())
proxies = {
'http': 'http://' + current_socket,
'https': 'https://' + current_socket
}
try:
request = requests.post(
self.url,
files={'upload_file': open(self.rdata['file'], 'rb')},
auth=(self.username, self.password),
timeout=self.rdata['timeout'],
headers=self.rdata['headers'],
proxies=proxies)
self._set_request_data(request, current_socket)
except Exception as e:
self._is_err(e)
self.post_file_with_headers()
else:
self._limit_succeeded()
| 34.120944 | 77 | 0.485692 | 1,118 | 11,567 | 4.802326 | 0.093918 | 0.116223 | 0.033898 | 0.035761 | 0.752654 | 0.742596 | 0.742038 | 0.741479 | 0.735146 | 0.735146 | 0 | 0.003744 | 0.399671 | 11,567 | 338 | 78 | 34.221893 | 0.769441 | 0.003458 | 0 | 0.734426 | 0 | 0 | 0.068906 | 0.003385 | 0 | 0 | 0 | 0 | 0 | 1 | 0.091803 | false | 0.02623 | 0.009836 | 0.02623 | 0.134426 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c048758b8674814cd0ae7c230f3e76e7b6977c33 | 21,049 | py | Python | koku/api/report/test/aws/openshift/test_ocp_aws_query_handler.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 2 | 2022-01-12T03:42:39.000Z | 2022-01-12T03:42:40.000Z | koku/api/report/test/aws/openshift/test_ocp_aws_query_handler.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | null | null | null | koku/api/report/test/aws/openshift/test_ocp_aws_query_handler.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 1 | 2021-07-21T09:33:59.000Z | 2021-07-21T09:33:59.000Z | #
# Copyright 2021 Red Hat Inc.
# SPDX-License-Identifier: Apache-2.0
#
"""Test the Report Queries."""
import copy
from tenant_schemas.utils import tenant_context
from api.iam.test.iam_test_case import IamTestCase
from api.report.aws.openshift.query_handler import OCPAWSReportQueryHandler
from api.report.aws.openshift.view import OCPAWSCostView
from api.report.aws.openshift.view import OCPAWSInstanceTypeView
from api.report.aws.openshift.view import OCPAWSStorageView
from api.report.queries import check_view_filter_and_group_by_criteria
from api.utils import DateHelper
from reporting.models import AWSCostEntryBill
from reporting.models import OCPAWSComputeSummary
from reporting.models import OCPAWSCostLineItemDailySummary
from reporting.models import OCPAWSCostSummary
from reporting.models import OCPAWSCostSummaryByAccount
from reporting.models import OCPAWSCostSummaryByRegion
from reporting.models import OCPAWSCostSummaryByService
from reporting.models import OCPAWSDatabaseSummary
from reporting.models import OCPAWSNetworkSummary
from reporting.models import OCPAWSStorageSummary
class OCPAWSQueryHandlerTestNoData(IamTestCase):
"""Tests for the OCP report query handler with no data."""
def setUp(self):
"""Set up the customer view tests."""
super().setUp()
self.dh = DateHelper()
self.this_month_filter = {"usage_start__gte": self.dh.this_month_start}
self.ten_day_filter = {"usage_start__gte": self.dh.n_days_ago(self.dh.today, 9)}
self.thirty_day_filter = {"usage_start__gte": self.dh.n_days_ago(self.dh.today, 29)}
self.last_month_filter = {
"usage_start__gte": self.dh.last_month_start,
"usage_end__lte": self.dh.last_month_end,
}
def test_execute_sum_query_instance_types(self):
"""Test that the sum query runs properly for instance-types."""
url = "?"
query_params = self.mocked_query_params(url, OCPAWSInstanceTypeView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
self.assertIsNotNone(query_output.get("data"))
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
self.assertIsInstance(total.get("cost"), dict)
self.assertNotEqual(total.get("cost").get("total", {}).get("value"), 0)
self.assertEqual(total.get("cost").get("total", {}).get("units"), "USD")
self.assertIsNotNone(total.get("usage"))
self.assertIsInstance(total.get("usage"), dict)
self.assertNotEqual(total.get("usage").get("value"), 0)
self.assertEqual(total.get("usage").get("units"), "Hrs")
self.assertIsNotNone(total.get("count"))
self.assertIsInstance(total.get("count"), dict)
self.assertNotEqual(total.get("count").get("value"), 0)
self.assertEqual(total.get("count").get("units"), "instances")
class OCPAWSQueryHandlerTest(IamTestCase):
"""Tests for the OCP report query handler."""
def setUp(self):
"""Set up the customer view tests."""
super().setUp()
self.dh = DateHelper()
self.this_month_filter = {"usage_start__gte": self.dh.this_month_start}
self.ten_day_filter = {"usage_start__gte": self.dh.n_days_ago(self.dh.today, 9)}
self.thirty_day_filter = {"usage_start__gte": self.dh.n_days_ago(self.dh.today, 29)}
self.last_month_filter = {
"usage_start__gte": self.dh.last_month_start,
"usage_end__lte": self.dh.last_month_end,
}
with tenant_context(self.tenant):
self.services = OCPAWSCostLineItemDailySummary.objects.values("product_code").distinct()
self.services = [entry.get("product_code") for entry in self.services]
def get_totals_by_time_scope(self, aggregates, filters=None):
"""Return the total aggregates for a time period."""
if filters is None:
filters = self.ten_day_filter
with tenant_context(self.tenant):
return OCPAWSCostLineItemDailySummary.objects.filter(**filters).aggregate(**aggregates)
def test_execute_sum_query_storage(self):
"""Test that the sum query runs properly."""
url = "?"
query_params = self.mocked_query_params(url, OCPAWSStorageView)
handler = OCPAWSReportQueryHandler(query_params)
filt = {"product_family__contains": "Storage"}
filt.update(self.ten_day_filter)
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, filt)
query_output = handler.execute_query()
self.assertIsNotNone(query_output.get("data"))
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
def test_execute_query_current_month_daily(self):
"""Test execute_query for current month on daily breakdown."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=daily"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
self.assertIsNotNone(query_output.get("data"))
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, self.this_month_filter)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
def test_execute_query_current_month_monthly(self):
"""Test execute_query for current month on monthly breakdown."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
self.assertIsNotNone(query_output.get("data"))
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, self.this_month_filter)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
def test_execute_query_current_month_by_service(self):
"""Test execute_query for current month on monthly breakdown by service."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly&group_by[service]=*" # noqa: E501
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
data = query_output.get("data")
self.assertIsNotNone(data)
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, self.this_month_filter)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
cmonth_str = DateHelper().this_month_start.strftime("%Y-%m")
for data_item in data:
month_val = data_item.get("date")
month_data = data_item.get("services")
self.assertEqual(month_val, cmonth_str)
self.assertIsInstance(month_data, list)
for month_item in month_data:
service = month_item.get("service")
self.assertIn(service, self.services)
self.assertIsInstance(month_item.get("values"), list)
def test_execute_query_by_filtered_service(self):
"""Test execute_query monthly breakdown by filtered service."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly&group_by[service]=AmazonEC2" # noqa: E501
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
data = query_output.get("data")
self.assertIsNotNone(data)
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
filt = copy.deepcopy(self.this_month_filter)
filt["product_code"] = "AmazonEC2"
current_totals = self.get_totals_by_time_scope(aggregates, filt)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
cmonth_str = DateHelper().this_month_start.strftime("%Y-%m")
for data_item in data:
month_val = data_item.get("date")
month_data = data_item.get("services")
self.assertEqual(month_val, cmonth_str)
self.assertIsInstance(month_data, list)
for month_item in month_data:
compute = month_item.get("service")
self.assertEqual(compute, "AmazonEC2")
self.assertIsInstance(month_item.get("values"), list)
def test_query_by_partial_filtered_service(self):
"""Test execute_query monthly breakdown by filtered service."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly&group_by[service]=eC2" # noqa: E501
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
data = query_output.get("data")
self.assertIsNotNone(data)
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
filt = copy.deepcopy(self.this_month_filter)
filt["product_code__icontains"] = "ec2"
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, filt)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
cmonth_str = DateHelper().this_month_start.strftime("%Y-%m")
for data_item in data:
month_val = data_item.get("date")
month_data = data_item.get("services")
self.assertEqual(month_val, cmonth_str)
self.assertIsInstance(month_data, list)
for month_item in month_data:
compute = month_item.get("service")
self.assertEqual(compute, "AmazonEC2")
self.assertIsInstance(month_item.get("values"), list)
def test_execute_query_current_month_by_account(self):
"""Test execute_query for current month on monthly breakdown by account."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly&group_by[account]=*" # noqa: E501
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
data = query_output.get("data")
self.assertIsNotNone(data)
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, self.this_month_filter)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
cmonth_str = DateHelper().this_month_start.strftime("%Y-%m")
for data_item in data:
month_val = data_item.get("date", "not-a-date")
month_data = data_item.get("accounts", "not-a-list")
self.assertEqual(month_val, cmonth_str)
self.assertIsInstance(month_data, list)
for month_item in month_data:
self.assertIsInstance(month_item.get("values"), list)
def test_execute_query_by_account_by_service(self):
"""Test execute_query for current month breakdown by account by service."""
url = "?filter[time_scope_units]=month&filter[time_scope_value]=-1&filter[resolution]=monthly&group_by[account]=*&group_by[service]=*" # noqa: E501
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
data = query_output.get("data")
self.assertIsNotNone(data)
self.assertIsNotNone(query_output.get("total"))
total = query_output.get("total")
self.assertIsNotNone(total.get("cost"))
aggregates = handler._mapper.report_type_map.get("aggregates")
current_totals = self.get_totals_by_time_scope(aggregates, self.this_month_filter)
self.assertEqual(total.get("cost", {}).get("total", {}).get("value", 0), current_totals.get("cost_total", 1))
cmonth_str = DateHelper().this_month_start.strftime("%Y-%m")
for data_item in data:
month_val = data_item.get("date", "not-a-date")
month_data = data_item.get("accounts", "not-a-string")
self.assertEqual(month_val, cmonth_str)
self.assertIsInstance(month_data, list)
for month_item in month_data:
self.assertIsInstance(month_item.get("services"), list)
def test_check_view_filter_and_group_by_criteria(self):
"""Test that all filter and group by checks return the correct result."""
good_group_by_options = ["account", "service", "region", "cluster", "product_family"]
bad_group_by_options = ["project", "node"]
for option in good_group_by_options:
filter_keys = {option}
group_by_keys = set()
self.assertTrue(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
filter_keys = set()
group_by_keys = {option}
self.assertTrue(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
# Different group by and filter
filter_keys = {"account"}
group_by_keys = {"cluster"}
self.assertTrue(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
# Multiple group bys
filter_keys = set()
group_by_keys = {"cluster", "account"}
self.assertTrue(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
# Multiple filters
filter_keys = {"cluster", "account"}
group_by_keys = set()
self.assertTrue(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
# Project and node unsupported
for option in bad_group_by_options:
filter_keys = {option}
group_by_keys = set()
self.assertFalse(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
filter_keys = set()
group_by_keys = {option}
self.assertFalse(check_view_filter_and_group_by_criteria(filter_keys, group_by_keys))
def test_query_table(self):
"""Test that the correct view is assigned by query table property."""
url = "?"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummary)
url = "?group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummaryByAccount)
url = "?group_by[region]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummaryByRegion)
url = "?group_by[region]=*&group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummaryByRegion)
url = "?group_by[service]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummaryByService)
url = "?group_by[service]=*&group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSCostSummaryByService)
url = "?"
query_params = self.mocked_query_params(url, OCPAWSInstanceTypeView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSComputeSummary)
url = "?group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSInstanceTypeView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSComputeSummary)
url = "?"
query_params = self.mocked_query_params(url, OCPAWSStorageView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSStorageSummary)
url = "?group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSStorageView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSStorageSummary)
url = "?filter[service]=AmazonVPC,AmazonCloudFront,AmazonRoute53,AmazonAPIGateway"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSNetworkSummary)
url = "?filter[service]=AmazonVPC,AmazonCloudFront,AmazonRoute53,AmazonAPIGateway&group_by[account]=*"
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSNetworkSummary)
url = (
"?filter[service]=AmazonRDS,AmazonDynamoDB,AmazonElastiCache,AmazonNeptune,AmazonRedshift,AmazonDocumentDB"
)
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSDatabaseSummary)
url = (
"?filter[service]=AmazonRDS,AmazonDynamoDB,AmazonElastiCache,AmazonNeptune,AmazonRedshift,AmazonDocumentDB"
"&group_by[account]=*"
)
query_params = self.mocked_query_params(url, OCPAWSCostView)
handler = OCPAWSReportQueryHandler(query_params)
self.assertEqual(handler.query_table, OCPAWSDatabaseSummary)
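The table-selection behavior exercised above can be sketched as a simple priority dispatch. This is an illustrative stand-in, not koku's actual implementation; the function name and string return values are hypothetical:

```python
def pick_query_table(group_by_keys):
    # More specific group-bys win: service beats region, region beats
    # account, and with no group-by the overall cost summary is used.
    if 'service' in group_by_keys:
        return 'OCPAWSCostSummaryByService'
    if 'region' in group_by_keys:
        return 'OCPAWSCostSummaryByRegion'
    if 'account' in group_by_keys:
        return 'OCPAWSCostSummaryByAccount'
    return 'OCPAWSCostSummary'

assert pick_query_table([]) == 'OCPAWSCostSummary'
assert pick_query_table(['region', 'account']) == 'OCPAWSCostSummaryByRegion'
assert pick_query_table(['service', 'account']) == 'OCPAWSCostSummaryByService'
```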
def test_source_uuid_mapping(self): # noqa: C901
"""Test source_uuid is mapped to the correct source."""
endpoints = [OCPAWSCostView, OCPAWSInstanceTypeView, OCPAWSStorageView]
with tenant_context(self.tenant):
expected_source_uuids = list(AWSCostEntryBill.objects.distinct().values_list("provider_id", flat=True))
source_uuid_list = []
for endpoint in endpoints:
urls = ["?"]
if endpoint == OCPAWSCostView:
urls.extend(["?group_by[account]=*", "?group_by[service]=*", "?group_by[region]=*"])
for url in urls:
query_params = self.mocked_query_params(url, endpoint)
handler = OCPAWSReportQueryHandler(query_params)
query_output = handler.execute_query()
for dictionary in query_output.get("data"):
for _, value in dictionary.items():
if isinstance(value, list):
for item in value:
if isinstance(item, dict):
if "values" in item.keys():
value = item["values"][0]
source_uuid_list.extend(value.get("source_uuid"))
self.assertNotEqual(source_uuid_list, [])
for source_uuid in source_uuid_list:
self.assertIn(source_uuid, expected_source_uuids)
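The nested walk in the test above can be sketched against a hypothetical payload; the field names below are chosen to mirror the loop, not copied from real API output:

```python
data = [
    {'date': '2020-01', 'accounts': [{'values': [{'source_uuid': ['abc-123']}]}]},
    {'date': '2020-02', 'total': 42},
]
source_uuid_list = []
for dictionary in data:
    for value in dictionary.values():
        if isinstance(value, list):
            for item in value:
                # Only dicts carrying a 'values' list contribute UUIDs
                if isinstance(item, dict) and 'values' in item:
                    source_uuid_list.extend(item['values'][0].get('source_uuid'))
assert source_uuid_list == ['abc-123']
```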
# pajbot/managers/__init__.py (gigglearrows/anniesbot, MIT license)
from pajbot.managers.redis import RedisManager
# nirvana/tests/test_axisym.py (kbwestfall/BarFit, BSD-3-Clause license)
from IPython import embed
import numpy
from scipy import stats, special
from nirvana.data import manga
from nirvana.data import util
from nirvana.data import scatter
from nirvana.tests.util import remote_data_file, requires_remote
from nirvana.models.oned import HyperbolicTangent, Exponential
from nirvana.models.axisym import AxisymmetricDisk
from nirvana.models.beam import gauss2d_kernel, ConvolveFFTW
def test_disk():
disk = AxisymmetricDisk()
disk.par[:2] = 0. # Ensure that the center is at 0,0
disk.par[-1] = 1. # Put in a quickly rising RC
n = 51
x = numpy.arange(n, dtype=float)[::-1] - n//2
y = numpy.arange(n, dtype=float) - n//2
x, y = numpy.meshgrid(x, y)
vel = disk.model(disk.par, x=x, y=y)
beam = gauss2d_kernel(n, 3.)
_vel = disk.model(disk.par, x=x, y=y, beam=beam)
assert numpy.isclose(vel[n//2,n//2], _vel[n//2,n//2]), 'Smearing moved the center.'
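The assertion above relies on a general property: convolving an antisymmetric field with a normalized, symmetric kernel leaves the central (zero) value unchanged. A minimal 1D sketch with a naive convolution, not the package's `gauss2d_kernel`:

```python
def convolve_same(signal, kernel):
    # Naive 'same'-size convolution with zero padding at the edges
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

kernel = [0.25, 0.5, 0.25]              # normalized and symmetric
signal = [-2.0, -1.0, 0.0, 1.0, 2.0]    # antisymmetric velocity-like field
smeared = convolve_same(signal, kernel)
assert smeared[2] == 0.0                # smearing did not move the center
```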
def test_disk_derivative_nosig():
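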
disk = AxisymmetricDisk()
# Ensure that center is offset from 0,0 because of derivative calculation when r==0.
disk.par[:2] = 0.1
# Use a slowly rising rotation curve. More quickly rising rotation curves
# show a greater difference between the finite-difference and direct
# derivative calculations after the convolution.
disk.par[-1] = 20.
# Finite difference test steps
# x0 y0 pa inc vsys vinf hv
dp = numpy.array([0.0001, 0.0001, 0.001, 0.001, 0.001, 0.001, 0.0001])
n = 101
x = numpy.arange(n, dtype=float)[::-1] - n//2
y = numpy.arange(n, dtype=float) - n//2
x, y = numpy.meshgrid(x, y)
v, dv = disk.deriv_model(disk.par, x=x, y=y)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x and y
vp[...,i] = disk.model(_p)
disk._set_par(p)
fd_dv = (vp - v[...,None])/dp[None,:]
for i in range(disk.par.size):
assert numpy.allclose(dv[...,i], fd_dv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different derivative for parameter {i+1}!'
# Now include the beam-smearing
beam = gauss2d_kernel(n, 3.)
try:
cnvfftw = ConvolveFFTW(beam.shape)
except Exception:  # fall back if the FFTW-based convolver is unavailable
cnvfftw = None
v, dv = disk.deriv_model(disk.par, x=x, y=y, beam=beam, cnvfftw=cnvfftw)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x, y, beam, and
# cnvfftw
vp[...,i] = disk.model(_p)
disk._set_par(p)
fd_dv = (vp - v[...,None])/dp[None,:]
for i in range(disk.par.size):
assert numpy.allclose(dv[...,i], fd_dv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different derivative for parameter {i+1}!'
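The pattern used throughout these tests — perturb one parameter, difference the model, and compare against the direct derivative — reduces to the following toy one-parameter sketch (this is not the disk model; the model and step size are illustrative):

```python
import math

def model(p, x):
    # Toy rotation curve: v = p * tanh(x), with p playing the role of vinf
    return p * math.tanh(x)

def deriv_model(p, x):
    # Analytic derivative of the model with respect to p
    return math.tanh(x)

p, x, dp = 20.0, 1.5, 1e-4
fd = (model(p + dp, x) - model(p, x)) / dp  # finite difference
assert math.isclose(fd, deriv_model(p, x), rel_tol=0.0, abs_tol=1e-4)
```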
def test_disk_derivative():
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
# Ensure that center is offset from 0,0 because of derivative calculation when r==0.
disk.par[:2] = 0.1
# Use a slowly rising rotation curve. More quickly rising rotation curves
# show a greater difference between the finite-difference and direct
# derivative calculations after the convolution.
disk.par[-3] = 20.
# Finite difference test steps
# x0 y0 pa inc vsys vinf hv sig0 hsig
dp = numpy.array([0.0001, 0.0001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.001, 0.0001])
n = 101
x = numpy.arange(n, dtype=float)[::-1] - n//2
y = numpy.arange(n, dtype=float) - n//2
x, y = numpy.meshgrid(x, y)
v, sig, dv, dsig = disk.deriv_model(disk.par, x=x, y=y)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
sigp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x and y
vp[...,i], sigp[...,i] = disk.model(_p)
disk._set_par(p)
fd_dv = (vp - v[...,None])/dp[None,:]
fd_dsig = (sigp - sig[...,None])/dp[None,:]
for i in range(disk.par.size):
assert numpy.allclose(dv[...,i], fd_dv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different velocity derivative for parameter {i+1}!'
# The precision is worse for dsig/dx0 and dsig/dy0 at x=y=0.0. Not sure
# why. The larger atol is to account for this.
assert numpy.allclose(dsig[...,i], fd_dsig[...,i], rtol=0., atol=3e-3), \
f'Finite difference produced different sigma derivative for parameter {i+1}!'
# Now include the beam-smearing
beam = gauss2d_kernel(n, 3.)
try:
cnvfftw = ConvolveFFTW(beam.shape)
except Exception:  # fall back if the FFTW-based convolver is unavailable
cnvfftw = None
v, sig, dv, dsig = disk.deriv_model(disk.par, x=x, y=y, beam=beam, cnvfftw=cnvfftw)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
sigp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x, y, beam, and
# cnvfftw
vp[...,i], sigp[...,i] = disk.model(_p)
disk._set_par(p)
fd_dv = (vp - v[...,None])/dp[None,:]
fd_dsig = (sigp - sig[...,None])/dp[None,:]
for i in range(disk.par.size):
assert numpy.allclose(dv[...,i], fd_dv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different derivative for parameter {i+1}!'
# Apparently the convolution smooths out the difference seen in the test above
assert numpy.allclose(dsig[...,i], fd_dsig[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different sigma derivative for parameter {i+1}!'
@requires_remote
def test_disk_derivative_bin():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
# Ensure that center is offset from 0,0 because of derivative calculation when r==0.
disk.par[:2] = 0.1
# Use a slowly rising rotation curve. More quickly rising rotation curves
# show a greater difference between the finite-difference and direct
# derivative calculations after the convolution.
disk.par[-3] = 20.
# Finite difference test steps
# x0 y0 pa inc vsys vinf hv sig0 hsig
dp = numpy.array([0.0001, 0.0001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.001, 0.0001])
# Include the beam-smearing
try:
cnvfftw = ConvolveFFTW(kin.spatial_shape)
except Exception:  # fall back if the FFTW-based convolver is unavailable
cnvfftw = None
v, sig, dv, dsig = disk.deriv_model(disk.par, x=kin.grid_x, y=kin.grid_y, #sb=kin.grid_sb,
beam=kin.beam_fft, is_fft=True, cnvfftw=cnvfftw)
# Now also include the binning
bv, dbv = kin.deriv_bin(v, dv)
bsig, dbsig = kin.deriv_bin(sig, dsig)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
sigp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
bvp = numpy.empty(bv.shape+(disk.par.size,), dtype=float)
bsigp = numpy.empty(bv.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x, y, sb, beam,
# and cnvfftw
vp[...,i], sigp[...,i] = disk.model(_p)
bvp[...,i] = kin.bin(vp[...,i])
bsigp[...,i] = kin.bin(sigp[...,i])
disk._set_par(p)
fd_dbv = (bvp - bv[...,None])/dp[None,:]
fd_dbsig = (bsigp - bsig[...,None])/dp[None,:]
for i in range(disk.par.size):
assert numpy.allclose(dbv[...,i], fd_dbv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different derivative for parameter {i+1}!'
# The difference is relatively large (again) for the dispersion data
assert numpy.allclose(dbsig[...,i], fd_dbsig[...,i], rtol=0., atol=1e-3), \
f'Finite difference produced different sigma derivative for parameter {i+1}!'
@requires_remote
def test_disk_derivative_bin_moments():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
# Ensure that center is offset from 0,0 because of derivative calculation when r==0.
disk.par[:2] = 0.1
# Use a slowly rising rotation curve. More quickly rising rotation curves
# show a greater difference between the finite-difference and direct
# derivative calculations after the convolution.
disk.par[-3] = 20.
# Finite difference test steps
# x0 y0 pa inc vsys vinf hv sig0 hsig
dp = numpy.array([0.0001, 0.0001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.001, 0.0001])
# Include the beam-smearing
try:
cnvfftw = ConvolveFFTW(kin.spatial_shape)
except Exception:  # fall back if the FFTW-based convolver is unavailable
cnvfftw = None
v, sig, dv, dsig = disk.deriv_model(disk.par, x=kin.grid_x, y=kin.grid_y, #sb=kin.grid_sb,
beam=kin.beam_fft, is_fft=True, cnvfftw=cnvfftw)
_, bv, bsig, _, dbv, dbsig = kin.deriv_bin_moments(None, v, sig, None, dv, dsig)
vp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
sigp = numpy.empty(v.shape+(disk.par.size,), dtype=float)
bvp = numpy.empty(bv.shape+(disk.par.size,), dtype=float)
bsigp = numpy.empty(bv.shape+(disk.par.size,), dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
# These calls to `model` reuse the previously provided x, y, sb, beam,
# and cnvfftw
vp[...,i], sigp[...,i] = disk.model(_p)
_, bvp[...,i], bsigp[...,i] = kin.bin_moments(None, vp[...,i], sigp[...,i])
disk._set_par(p)
fd_dbv = (bvp - bv[...,None])/dp[None,:]
fd_dbsig = (bsigp - bsig[...,None])/dp[None,:]
# TODO: Constrain the test to only test the bins that have multiple spaxels?
for i in range(disk.par.size):
# vdiff = numpy.absolute(dbv[...,i]-fd_dbv[...,i])
# sdiff = numpy.absolute(dbsig[...,i]-fd_dbsig[...,i])
# print(i, numpy.amax(vdiff), numpy.amin(vdiff), numpy.amax(sdiff), numpy.amin(sdiff))
# print(i, numpy.amax(vdiff[kin.nspax > 1]), numpy.amin(vdiff[kin.nspax > 1]),
# numpy.amax(sdiff[kin.nspax > 1]), numpy.amin(sdiff[kin.nspax > 1]))
# continue
assert numpy.allclose(dbv[...,i], fd_dbv[...,i], rtol=0., atol=1e-4), \
f'Finite difference produced different derivative for parameter {i+1}!'
# The difference is relatively large (again) for the dispersion data
assert numpy.allclose(dbsig[...,i], fd_dbsig[...,i], rtol=0., atol=1e-3), \
f'Finite difference produced different sigma derivative for parameter {i+1}!'
@requires_remote
def test_disk_fit_derivative():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
# Set the parameters close to the best-fitting parameters from a previous
# run
p0 = numpy.array([-0.2, -0.08, 166.3, 53.0, 25.6, 217.0, 2.82, 189.7, 16.2])
# Finite difference test steps
# x0 y0 pa inc vsys vinf hv sig0 hsig
dp = numpy.array([0.0001, 0.0001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.001, 0.0001])
# Run the fit preparation
disk._fit_prep(kin, p0, None, None, True, True, True, None)
# Get the method used to generate the figure-of-merit and the jacobian
fom = disk._get_fom()
jac = disk._get_jac()
# Get the fom and the jacobian
chi = fom(p0)
dchi = jac(p0)
# Brute force it
chip = numpy.empty(dchi.shape, dtype=float)
p = disk.par.copy()
for i in range(disk.par.size):
_p = p.copy()
_p[i] += dp[i]
chip[...,i] = fom(_p)
disk._set_par(p)
# Compare them
fd_dchi = (chip - chi[...,None])/dp[None,:]
for i in range(disk.par.size):
# diff = numpy.absolute(dchi[...,i]-fd_dchi[...,i])
# print(i, numpy.amax(diff), numpy.amin(diff))
# continue
assert numpy.allclose(dchi[...,i], fd_dchi[...,i], rtol=0., atol=1e-3), \
f'Finite difference produced different derivative for parameter {i+1}!'
@requires_remote
def test_lsq_nopsf():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAGasKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root, ignore_psf=True)
# Set the rotation curve
rc = HyperbolicTangent(lb=numpy.array([0., 1e-3]), ub=numpy.array([500., kin.max_radius()]))
# Set the disk velocity field
disk = AxisymmetricDisk(rc=rc)
# Fit it with a non-linear least-squares optimizer
disk.lsq_fit(kin) #, verbose=2)
assert numpy.all(numpy.absolute(disk.par[:2]) < 0.1), 'Center changed'
assert 165. < disk.par[2] < 167., 'PA changed'
assert 53. < disk.par[3] < 55., 'Inclination changed'
assert 243. < disk.par[5] < 245., 'Projected rotation changed'
@requires_remote
def test_lsq_psf():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAGasKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
# Set the rotation curve
rc = HyperbolicTangent(lb=numpy.array([0., 1e-3]), ub=numpy.array([500., kin.max_radius()]))
# Set the disk velocity field
disk = AxisymmetricDisk(rc=rc)
# Fit it with a non-linear least-squares optimizer
disk.lsq_fit(kin) #, verbose=2)
assert numpy.all(numpy.absolute(disk.par[:2]) < 0.1), 'Center changed'
assert 165. < disk.par[2] < 167., 'PA changed'
assert 55. < disk.par[3] < 59., 'Inclination changed'
assert 252. < disk.par[5] < 255., 'Projected rotation changed'
@requires_remote
def test_lsq_with_sig():
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAGasKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
# Set the rotation curve
rc = HyperbolicTangent(lb=numpy.array([0., 1e-3]), ub=numpy.array([500., kin.max_radius()]))
# Set the dispersion profile
dc = Exponential(lb=numpy.array([0., 1e-3]), ub=numpy.array([500., kin.max_radius()]))
# Set the disk velocity field
disk = AxisymmetricDisk(rc=rc, dc=dc)
# Fit it with a non-linear least-squares optimizer
disk.lsq_fit(kin, sb_wgt=True) #, verbose=2)
assert numpy.all(numpy.absolute(disk.par[:2]) < 0.1), 'Center changed'
assert 165. < disk.par[2] < 167., 'PA changed'
assert 56. < disk.par[3] < 60., 'Inclination changed'
assert 250. < disk.par[5] < 253., 'Projected rotation changed'
assert 27. < disk.par[7] < 37., 'Central velocity dispersion changed'
@requires_remote
def test_lsq_with_covar():
# NOTE: This only fits the velocity field....
# Read the data to fit
data_root = remote_data_file()
kin = manga.MaNGAGasKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root, covar=True)
print('Forcing covariance to be positive definite.')
kin.vel_covar = util.impose_positive_definite(kin.vel_covar)
# Set the rotation curve
rc = HyperbolicTangent(lb=numpy.array([0., 1e-3]), ub=numpy.array([500., kin.max_radius()]))
# Set the disk velocity field
disk = AxisymmetricDisk(rc=rc) #, dc=dc)
# Fit it with a non-linear least-squares optimizer
# import time
# t = time.perf_counter()
disk.lsq_fit(kin, sb_wgt=True)
# print(f'First fit (no covar): {time.perf_counter()-t} s')
# Rejected based on error-weighted residuals, accounting for intrinsic scatter
resid = kin.vel - kin.bin(disk.model())
err = 1/numpy.sqrt(kin.vel_ivar)
scat = scatter.IntrinsicScatter(resid, err=err, gpm=disk.vel_gpm)
sig, rej, gpm = scat.iter_fit(fititer=5) #, verbose=2)
# Check
assert sig > 8., 'Different intrinsic scatter'
assert numpy.sum(rej) == 21, 'Different number of pixels were rejected'
# Refit with new mask, include scatter and covariance
kin.vel_mask = numpy.logical_not(gpm)
p0 = disk.par
# t = time.perf_counter()
disk.lsq_fit(kin, scatter=sig, sb_wgt=True, p0=p0, ignore_covar=False,
assume_posdef_covar=True) #, verbose=2)
# print(f'Second fit (w/ covar): {time.perf_counter()-t} s')
# Reject
resid = kin.vel - kin.bin(disk.model())
scat = scatter.IntrinsicScatter(resid, covar=kin.vel_covar, gpm=disk.vel_gpm,
assume_posdef_covar=True)
sig, rej, gpm = scat.iter_fit(fititer=5) #, verbose=2)
# Check
assert sig > 5., 'Different intrinsic scatter'
assert numpy.sum(rej) == 7, 'Different number of pixels were rejected'
# Model parameters
assert numpy.all(numpy.absolute(disk.par[:2]) < 0.1), 'Center changed'
assert 165. < disk.par[2] < 167., 'PA changed'
assert 56. < disk.par[3] < 58., 'Inclination changed'
assert 249. < disk.par[5] < 252., 'Projected rotation changed'
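The rejection step above can be sketched as error-weighted sigma clipping with an intrinsic-scatter term added in quadrature. This is a simplified stand-in for `scatter.IntrinsicScatter.iter_fit`, with hypothetical names and thresholds:

```python
import math

def good_pixel_mask(resid, err, sig_int, nsig=3.0):
    # Keep points whose residual is within nsig of the combined
    # (measurement error + intrinsic scatter) error budget.
    return [abs(r) < nsig * math.sqrt(e ** 2 + sig_int ** 2)
            for r, e in zip(resid, err)]

gpm = good_pixel_mask([0.5, 10.0, -0.2], [1.0, 1.0, 1.0], sig_int=1.0)
assert gpm == [True, False, True]
```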
@requires_remote
def test_mock_noerr():
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
# Set the parameters close to the best-fitting parameters from a previous
# run
p0 = numpy.array([-0.2, -0.08, 166.3, 53.0, 25.6, 217.0, 2.82, 189.7, 16.2])
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
v, s = disk.model(p0, x=kin.grid_x, y=kin.grid_y, sb=kin.grid_sb, beam=kin.beam_fft,
is_fft=True)
_, bv, bs = kin.bin_moments(kin.grid_sb, v, s)
vremap = kin.remap(bv, mask=kin.vel_mask)
sremap = kin.remap(bs, mask=kin.sig_mask)
mock_kin = disk.mock_observation(p0, kin=kin)
mock_vremap = mock_kin.remap('vel')
mock_sremap = mock_kin.remap(numpy.sqrt(mock_kin.sig_phys2), mask=kin.sig_mask)
assert numpy.ma.allclose(mock_vremap, vremap), 'Bad mock velocity'
assert numpy.ma.allclose(mock_sremap, sremap), 'Bad mock dispersion'
@requires_remote
def test_mock_err():
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root)
# Set the parameters close to the best-fitting parameters from a previous
# run
p0 = numpy.array([-0.2, -0.08, 166.3, 53.0, 25.6, 217.0, 2.82, 189.7, 16.2])
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
v, s = disk.model(p0, x=kin.grid_x, y=kin.grid_y, sb=kin.grid_sb, beam=kin.beam_fft,
is_fft=True)
_, bv, bs = kin.bin_moments(kin.grid_sb, v, s)
vremap = kin.remap(bv, mask=kin.vel_mask)
sremap = kin.remap(bs, mask=kin.sig_mask)
rng = numpy.random.default_rng(seed=909)
mock_kin = disk.mock_observation(p0, kin=kin, add_err=True, rng=rng)
mock_vremap = mock_kin.remap('vel')
mock_sremap = mock_kin.remap(numpy.sqrt(mock_kin.sig_phys2), mask=kin.sig_mask)
assert numpy.ma.std(mock_vremap-vremap) > 5, 'Velocity error changed'
assert numpy.ma.std(mock_sremap-sremap) > 7, 'Dispersion error changed'
@requires_remote
def test_mock_covar():
data_root = remote_data_file()
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root, covar=True)
# Set the parameters close to the best-fitting parameters from a previous
# run
p0 = numpy.array([-0.2, -0.08, 166.3, 53.0, 25.6, 217.0, 2.82, 189.7, 16.2])
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
v, s = disk.model(p0, x=kin.grid_x, y=kin.grid_y, sb=kin.grid_sb, beam=kin.beam_fft,
is_fft=True)
vremap = kin.remap(kin.bin(v), mask=kin.vel_mask)
sremap = kin.remap(kin.bin(s), mask=kin.sig_mask)
# Fix the seed so that the result is deterministic
# WARNING: Without this, there were instances where the deviate for the
# dispersion would be entirely masked! Need to understand how/why that can
# happen.
rng = numpy.random.default_rng(seed=909)
mock_kin = disk.mock_observation(p0, kin=kin, add_err=True, rng=rng)
mock_vremap = mock_kin.remap('vel')
mock_sremap = mock_kin.remap(numpy.sqrt(mock_kin.sig_phys2), mask=kin.sig_mask)
assert numpy.ma.std(mock_vremap-vremap) > 5, 'Velocity error changed'
assert numpy.ma.std(mock_sremap-sremap) > 7, 'Dispersion error changed'
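Fixing the seed, as the warning comment above notes, is what makes the mock deviates reproducible. The same idea with the standard library's generator (`numpy.random.default_rng` behaves analogously when given a fixed seed):

```python
import random

rng_a = random.Random(909)
rng_b = random.Random(909)
# Two generators built from the same seed produce identical streams
assert [rng_a.random() for _ in range(3)] == [rng_b.random() for _ in range(3)]
```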
@requires_remote
def test_fisher():
data_root = remote_data_file()
for use_covar in [False, True]:
kin = manga.MaNGAStellarKinematics.from_plateifu(8138, 12704, cube_path=data_root,
maps_path=data_root, covar=use_covar)
# Set the parameters close to the best-fitting parameters from a previous
# run
p0 = numpy.array([-0.2, -0.08, 166.3, 53.0, 25.6, 217.0, 2.82, 189.7, 16.2])
# Get the Fisher Information Matrix
disk = AxisymmetricDisk(rc=HyperbolicTangent(), dc=Exponential())
fim = disk.fisher_matrix(p0, kin, sb_wgt=True)
# Use it to compute the correlation matrix
covar = util.cinv(fim)
var = numpy.diag(covar)
rho = covar / numpy.sqrt(var[:,None]*var[None,:])
# Get the upper triangle of the correlation matrix (without the main
# diagonal)
indx = numpy.triu_indices(rho.shape[0], k=1)
# Get the indices of the parameters with the 4 strongest correlation coefficients
srt = numpy.argsort(numpy.absolute(rho[indx]))[::-1][:4]
# Check the result. The strongest correlations should be between:
# (7,8) - The two sigma parameters
# (1,4) - The y coordinate and the systemic velocity
# (3,5) - The inclination and the asymptotic rotation speed
# (5,6) - The two rotation curve parameters
# (0,1) - The center coordinates
for correlated_pair in zip(indx[0][srt], indx[1][srt]):
assert correlated_pair in [(7,8), (1,4), (3,5), (5,6)], \
'Unexpected pair with strong correlation'
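The covariance-to-correlation step above, written out for a hypothetical 2x2 Fisher matrix; `util.cinv` is replaced by a hand-inverted 2x2 so the sketch stays self-contained:

```python
import math

fim = [[4.0, 1.0],
       [1.0, 2.0]]
# Invert the 2x2 Fisher matrix to get the parameter covariance
det = fim[0][0] * fim[1][1] - fim[0][1] * fim[1][0]
covar = [[ fim[1][1] / det, -fim[0][1] / det],
         [-fim[1][0] / det,  fim[0][0] / det]]
# Correlation coefficient: rho_ij = covar_ij / sqrt(covar_ii * covar_jj)
rho_01 = covar[0][1] / math.sqrt(covar[0][0] * covar[1][1])
assert math.isclose(rho_01, -1.0 / math.sqrt(8.0))
```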
# python programs/special_var.py (saddam-gif/Python-crushcourse, MIT license)
# __name__ holds the name of the current module.
# When a file is run directly, it is the entry point of execution and
# __name__ is set to "__main__"; when imported, it is the module's own name.
print(__name__)
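Because `__name__` is the module's own name on import and `"__main__"` when the file is executed directly, scripts usually guard their entry point. The following is an illustrative extension, not part of the original file:

```python
def main():
    # Returns the module's runtime name so the behavior can be inspected
    return __name__

if __name__ == '__main__':
    # Only reached when the file is executed directly, not when imported
    print(main())
```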
# codigos-aula/cod0.py (maumneto/exercicio-python, MIT license)
print('olá mundo cruel!')  # Portuguese: "hello, cruel world!"
# sdc/ysdc_dataset_api/dataset/__init__.py (sty61010/shifts, Apache-2.0 license)
from .dataset import MotionPredictionDataset
# site/thicc/apps/stats/migrations/0001_initial.py (aldenjenkins/ThiccGaming, BSD-3-Clause license)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.15 on 2018-11-24 17:13
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('social_django', '0008_partial_timestamp'),
]
operations = [
migrations.CreateModel(
name='GmodMapStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('gamemode', models.IntegerField(default=0)),
('playtime_nor', models.IntegerField(default=0)),
('playtime_adv', models.IntegerField(default=0)),
('playtime_exp', models.IntegerField(default=0)),
('restarts', models.IntegerField(blank=True, default=0)),
('custom', models.BooleanField(default=0)),
],
),
migrations.CreateModel(
name='L4d2MapStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('gamemode', models.IntegerField(default=0)),
('playtime', models.IntegerField(blank=True, default=0)),
('restarts', models.IntegerField(blank=True, default=0)),
('custom', models.BooleanField(default=0)),
('mutation', models.IntegerField(blank=True, default=0)),
('points', models.IntegerField(blank=True, default=0)),
('points_infected', models.IntegerField(blank=True, default=0)),
('points_survivor', models.IntegerField(blank=True, default=0)),
('charger_impacts', models.IntegerField(blank=True, default=0)),
('caralarm', models.IntegerField(blank=True, default=0)),
('jockey_rides', models.IntegerField(blank=True, default=0)),
('infected_spawn_1', models.IntegerField(blank=True, default=0)),
('infected_spawn_2', models.IntegerField(blank=True, default=0)),
('infected_spawn_3', models.IntegerField(blank=True, default=0)),
('infected_spawn_4', models.IntegerField(blank=True, default=0)),
('infected_spawn_5', models.IntegerField(blank=True, default=0)),
('infected_spawn_6', models.IntegerField(blank=True, default=0)),
('infected_spawn_8', models.IntegerField(blank=True, default=0)),
('infected_spitter_damage', models.IntegerField(blank=True, default=0)),
('infected_tank_damage', models.IntegerField(blank=True, default=0)),
('infected_charger_damage', models.IntegerField(blank=True, default=0)),
('infected_jocker_ridetime', models.IntegerField(blank=True, default=0)),
('infected_jocker_damage', models.IntegerField(blank=True, default=0)),
('infected_smoker_damage', models.IntegerField(blank=True, default=0)),
('infected_hunter_pounce_counter', models.IntegerField(blank=True, default=0)),
('infected_hunter_pounce_damage', models.IntegerField(blank=True, default=0)),
('infected_tanksniper', models.IntegerField(blank=True, default=0)),
('infected_boomer_vomits', models.IntegerField(blank=True, default=0)),
('infected_boomer_blinded', models.IntegerField(blank=True, default=0)),
('infected_win', models.IntegerField(blank=True, default=0)),
('survivors_win', models.IntegerField(blank=True, default=0)),
('survivor_kills', models.IntegerField(blank=True, default=0)),
('kills', models.IntegerField(blank=True, default=0)),
],
),
migrations.CreateModel(
name='UserSettings',
fields=[
('steam64', models.CharField(max_length=255, primary_key=True, serialize=False)),
('l4d2_mute', models.BooleanField(default=False)),
('gmodzs_mute', models.BooleanField(default=False)),
('gmodrp_mute', models.BooleanField(default=False)),
],
),
migrations.CreateModel(
name='UserStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('steam64', models.CharField(max_length=255)),
('ip', models.CharField(blank=True, default='0.0.0.0', max_length=16)),
('last_used_username', models.CharField(max_length=255)),
('last_online', models.CharField(max_length=255)),
('last_gamemode', models.IntegerField()),
('total_points', models.IntegerField(blank=True, default=0)),
('total_playtime', models.IntegerField(blank=True, default=0)),
('l4d2_points', models.IntegerField(blank=True, default=0)),
('l4d2_playtime', models.IntegerField(blank=True, default=0)),
('l4d2_points_infected', models.IntegerField(blank=True, default=0)),
('l4d2_points_survivor', models.IntegerField(blank=True, default=0)),
('l4d2_headshots', models.IntegerField(blank=True, default=0)),
('l4d2_kills', models.IntegerField(blank=True, default=0)),
('l4d2_melee_kills', models.IntegerField(blank=True, default=0)),
('l4d2_kills_survivor', models.IntegerField(blank=True, default=0)),
('l4d2_charger_impacts', models.IntegerField(blank=True, default=0)),
('l4d2_friendly_fire', models.IntegerField(blank=True, default=0)),
('l4d2_kill_infected', models.IntegerField(blank=True, default=0)),
('l4d2_kill_hunter', models.IntegerField(blank=True, default=0)),
('l4d2_kill_boomer', models.IntegerField(blank=True, default=0)),
('l4d2_kill_spitter', models.IntegerField(blank=True, default=0)),
('l4d2_kill_charger', models.IntegerField(blank=True, default=0)),
('l4d2_kill_jockey', models.IntegerField(blank=True, default=0)),
('l4d2_kill_smoker', models.IntegerField(blank=True, default=0)),
('l4d2_kill_tank', models.IntegerField(blank=True, default=0)),
('l4d2_infected_jockey_ridetime', models.FloatField(blank=True, default=0)),
('l4d2_infected_jockey_rides', models.IntegerField(blank=True, default=0)),
('l4d2_infected_boomer_vomits', models.IntegerField(blank=True, default=0)),
('l4d2_infected_boomer_blinded', models.IntegerField(blank=True, default=0)),
('l4d2_infected_hunter_pounce_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_hunter_pounce_counter', models.IntegerField(blank=True, default=0)),
('l4d2_infected_smoker_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_jockey_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_charger_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_tank_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_tanksniper', models.IntegerField(blank=True, default=0)),
('l4d2_infected_spitter_damage', models.IntegerField(blank=True, default=0)),
('l4d2_infected_spawn_1', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Smoker')),
('l4d2_infected_spawn_2', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Boomer')),
('l4d2_infected_spawn_3', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Hunter')),
('l4d2_infected_spawn_4', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Spitter')),
('l4d2_infected_spawn_5', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Jockey')),
('l4d2_infected_spawn_6', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Charger')),
('l4d2_infected_spawn_8', models.IntegerField(blank=True, default=0, verbose_name='Spawned as Tank')),
('l4d2_award_survivor_down', models.IntegerField(blank=True, default=0)),
('l4d2_award_bulldozer', models.IntegerField(blank=True, default=0)),
('l4d2_award_infected_win', models.IntegerField(blank=True, default=0)),
('l4d2_award_allinsafehouse', models.IntegerField(blank=True, default=0)),
('l4d2_award_witchdisturb', models.IntegerField(blank=True, default=0)),
('l4d2_award_rescue', models.IntegerField(blank=True, default=0)),
('l4d2_award_pounce_nice', models.IntegerField(blank=True, default=0)),
('l4d2_award_pounce_perfect', models.IntegerField(blank=True, default=0)),
('l4d2_award_perfect_blindness', models.IntegerField(blank=True, default=0)),
('l4d2_award_gascans_poured', models.IntegerField(blank=True, default=0)),
('l4d2_award_upgrades_added', models.IntegerField(blank=True, default=0)),
('l4d2_award_matador', models.IntegerField(blank=True, default=0)),
('l4d2_award_ledgegrab', models.IntegerField(blank=True, default=0)),
('l4d2_award_fincap', models.IntegerField(blank=True, default=0)),
('l4d2_award_campaigns', models.IntegerField(blank=True, default=0)),
('l4d2_award_medkit', models.IntegerField(blank=True, default=0)),
('l4d2_award_adrenaline', models.IntegerField(blank=True, default=0)),
('l4d2_award_pills', models.IntegerField(blank=True, default=0)),
('l4d2_award_defib', models.IntegerField(blank=True, default=0)),
('l4d2_award_protect', models.IntegerField(blank=True, default=0)),
('l4d2_award_revive', models.IntegerField(blank=True, default=0)),
('l4d2_award_teamkill', models.IntegerField(blank=True, default=0)),
('l4d2_award_scatteringram', models.IntegerField(blank=True, default=0)),
('gmodzs_playtime', models.IntegerField(blank=True, default=0)),
('gmodzs_points', models.IntegerField(blank=True, default=0)),
('gmodzs_kills', models.IntegerField(blank=True, default=0)),
('gmodzs_kills_as_human', models.IntegerField(blank=True, default=0)),
('gmodzs_kills_as_infected', models.IntegerField(blank=True, default=0)),
('gmodzs_headshots', models.IntegerField(blank=True, default=0)),
('gmodzs_redemptions', models.IntegerField(blank=True, default=0)),
('gmodzs_deaths', models.IntegerField(blank=True, default=0)),
('gmodrp_points', models.IntegerField(blank=True, default=0)),
('gmodrp_playtime', models.IntegerField(blank=True, default=0)),
('gmodrp_kills', models.IntegerField(blank=True, default=0)),
('gmodrp_deaths', models.IntegerField(blank=True, default=0)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('linked_steam', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='my_stats_object', to='social_django.UserSocialAuth')),
],
options={
'verbose_name': "Player's Stats",
'verbose_name_plural': 'Player In Game Stats',
'ordering': ['-total_points'],
},
),
]
from collections import defaultdict
import sys
import os
import time
from defaults import *
from run_sr_align import run_sr_align
from run_sr_align import run_sr_align
from run_reconstruct import run_reconstruct
from run_quantify import run_quantify
from run_diff import run_diff
from run_dnv_assemebly import run_dnv_assemebly
from run_lr_correct import run_lr_correct
from run_lr_align import run_lr_align
from run_lr_reconstruct import run_lr_reconstruct
from run_lr_fusion import run_lr_fusion
from run_variant import run_variant
from run_editing import run_editing
from run_fusion import run_fusion
from _version import __version__
from utils import *
import logging
logger = logging.getLogger(__name__)
def run_pipeline(args,parser):
mode = args.mode
create_dirs([args.workdir, args.outdir,os.path.join(args.workdir,"logs")])
log_file=os.path.join(args.workdir,"logs","run-%s-%s.log"%(
mode, time.strftime("%Y%m%d-%H%M%S")))
FORMAT = '%(levelname)s %(asctime)-15s %(name)-20s %(message)s'
logging.basicConfig(level=logging.INFO, format=FORMAT, filename=log_file, filemode="w")
logFormatter = logging.Formatter(FORMAT)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logger.addHandler(consoleHandler)
logger.info("Running RNASeqPipeline %s" % __version__)
logger.info("Command-line %s" % (" ".join(sys.argv)))
logger.info("Arguments are " + str(args))
logger.info("Run log will be saved in " + log_file)
logger.info("Run in mode: " + mode)
# Simple check for arguments
if mode=="align":
if (vars(args)["1"]=="" or vars(args)["2"]=="") and args.U=="" and args.sra=="":
parser.print_help()
logger.error("Input sequence file(s) are missing.")
return os.EX_USAGE
elif mode=="quantify":
if (vars(args)["1"]=="" or vars(args)["2"]=="") and args.U=="":
parser.print_help()
logger.error("Input sequence file(s) are missing.")
return os.EX_USAGE
elif mode=="diff":
if (not args.quant_files or not args.ref_gtf) and \
(not args.alignments or (not args.transcripts_gtfs and not args.ref_gtf)):
parser.print_help()
logger.error("\n\tYou should either provide {the quantification files and a reference GTF}, \n\
\tOR {the alignment files and (reference or assembled) GTF files}.")
return os.EX_USAGE
elif mode=="denovo":
if (vars(args)["1"]=="" or vars(args)["2"]=="") and args.U=="" and args.I=="":
parser.print_help()
logger.error("Input sequence file(s) are missing.")
return os.EX_USAGE
elif mode=="variant":
if args.no_BaseRecalibrator==False and args.knownsites=="":
parser.print_help()
logger.error("\n\tTo run the BaseRecalibrator step, knownsites should be provided. \n\
\tIf you don't have knownsites, please use the --no_BaseRecalibrator option.")
return os.EX_USAGE
if mode=="align":
if not args.sr_aligner.upper()=="HISAT2":
logger.error("%s is not supported. \
\nThe supported short read aligner(s) are: %s."%(args.sr_aligner,SR_ALIGNERS))
return os.EX_USAGE
logger.info("Assigned sample ID: %s"%args.sample)
logger.info("Running align step using %s"%args.sr_aligner)
run_sr_align(sr_aligner=args.sr_aligner, align_idx=args.align_idx,
seq_1=vars(args)["1"], seq_2=vars(args)["2"], seq_u=args.U,
seq_sra=args.sra, ref_gtf=args.ref_gtf,
hisat2_opts=args.hisat2_opts, hisat2=args.hisat2, hisat2_sps=args.hisat2_sps, samtools=args.samtools,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="reconstruct":
if not args.reconstructor.upper()=="STRINGTIE":
logger.error("%s is not supported. \
\nThe supported transcriptome reconstructor(s) are: %s."%(args.reconstructor,RECONSTRUCTORS))
return os.EX_USAGE
logger.info("Assigned sample ID: %s"%args.sample)
logger.info("Running reconstruct step using %s"%args.reconstructor)
run_reconstruct(reconstructor=args.reconstructor, alignment_bam=args.alignment_bam,
ref_gtf=args.ref_gtf,
stringtie_opts=args.stringtie_opts, stringtie=args.stringtie,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="quantify":
if not args.quantifier.upper()=="SALMON-SMEM":
logger.error("%s is not supported. \
\nThe supported quantifier(s) are: %s."%(args.quantifier, QUANTIFIERS))
return os.EX_USAGE
logger.info("Assigned sample ID: %s"%args.sample)
logger.info("Running quantification step using %s"%args.quantifier)
run_quantify(quantifier=args.quantifier, quantifier_idx=args.quantifier_idx,
seq_1=vars(args)["1"], seq_2=vars(args)["2"], seq_u=args.U,
salmon_k=args.salmon_k, libtype=args.libtype,
salmon_smem_opts=args.salmon_smem_opts, salmon=args.salmon,
start=args.start, sample= args.sample, nthreads=args.threads, unzip=args.unzip,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="diff":
if not args.difftool.upper()=="DESEQ2":
logger.error("%s is not supported. \
\nThe supported differential analysis tool(s) are: %s."%(args.difftool,DIFFS))
return os.EX_USAGE
logger.info("Running differential analysis step using %s"%args.difftool)
run_diff(difftool=args.difftool, quant_files=args.quant_files, alignments=args.alignments,
transcripts_gtfs=args.transcripts_gtfs,
ref_gtf=args.ref_gtf,
featureCounts_opts=args.featureCounts_opts, featureCounts=args.featureCounts,
stringtie=args.stringtie, stringtie_merge_opts=args.stringtie_merge_opts,
mincount=args.mincount, alpha=args.alpha,
R=args.R, start=args.start, samples=args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="denovo":
if not args.assembler.upper()=="OASES":
logger.error("%s is not supported. \
\nThe supported de novo assembler(s) are: %s."%(args.assembler,DNV_ASSEMBLERS))
return os.EX_USAGE
logger.info("Running de novo assembly step using %s"%args.assembler)
run_dnv_assemebly(assembler=args.assembler, assmebly_hash=args.assmebly_hash,
seq_1=vars(args)["1"], seq_2=vars(args)["2"], seq_u=args.U, seq_i=args.I,
file_format=args.file_format, read_type=args.read_type,
oases=args.oases, velvetg=args.velvetg, velveth=args.velveth,
oases_opts=args.oases_opts, velvetg_opts=args.velvetg_opts, velveth_opts=args.velveth_opts,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="long_correct":
if not args.long_corrector.upper()=="LORDEC":
logger.error("%s is not supported. \
\nThe supported long read error correction tool(s) are: %s."%(args.long_corrector,LR_CORRECTORS))
return os.EX_USAGE
logger.info("Running long read error correction step using %s"%args.long_corrector)
run_lr_correct(long_corrector=args.long_corrector, kmer=args.kmer,
solid=args.solid,long=args.long, short=args.short,
lordec=args.lordec, lordec_opts=args.lordec_opts,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="long_align":
if not args.long_aligner.upper()=="STARLONG":
logger.error("%s is not supported. \
\nThe supported long read aligner(s) are: %s."%(args.long_aligner,LR_ALIGNERS))
return os.EX_USAGE
logger.info("Running long read alignment step using %s"%args.long_aligner)
run_lr_align(long_aligner=args.long_aligner,long=args.long,
genome_dir=args.genome_dir, ref_gtf=args.ref_gtf,
starlong=args.starlong, starlong_opts=args.starlong_opts,
sam2psl=args.sam2psl, samtools=args.samtools,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="long_reconstruct":
if not args.long_reconstructor.upper()=="IDP":
logger.error("%s is not supported. \
\nThe supported long read transcriptome reconstructor(s) are: %s."%(args.long_reconstructor,
LR_RECONSTRUCTOR))
return os.EX_USAGE
logger.info("Running long read transcriptome reconstruction step using %s"%args.long_reconstructor)
run_lr_reconstruct(long_reconstructor=args.long_reconstructor,
alignment=args.alignment,
short_junction=args.short_junction, long_alignment=args.long_alignment,
mode_number=args.mode_number,
ref_genome=args.ref_genome, ref_all_gpd=args.ref_all_gpd, ref_gpd=args.ref_gpd,
read_length=args.read_length,
samtools=args.samtools, idp=args.idp, idp_cfg=args.idp_cfg,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="long_fusion":
if not args.long_fusion_caller.upper()=="IDP-FUSION":
logger.error("%s is not supported. \
\nThe supported long read fusion detection tool(s) are: %s."%(args.long_fusion_caller,
LR_FUSION))
return os.EX_USAGE
logger.info("Running long read fusion detection step using %s"%args.long_fusion_caller)
run_lr_fusion(long_fusion_caller=args.long_fusion_caller,
alignment=args.alignment,
short_junction=args.short_junction, long_alignment=args.long_alignment,
short_fasta=args.short_fasta, long_fasta=args.long_fasta,
mode_number=args.mode_number,
ref_genome=args.ref_genome, ref_all_gpd=args.ref_all_gpd, ref_gpd=args.ref_gpd,
uniqueness_bedgraph=args.uniqueness_bedgraph,
genome_bowtie2_idx=args.genome_bowtie2_idx, transcriptome_bowtie2_idx=args.transcriptome_bowtie2_idx,
read_length=args.read_length,
samtools=args.samtools, idpfusion=args.idpfusion, idpfusion_cfg=args.idpfusion_cfg,
gmap=args.gmap, gmap_idx=args.gmap_idx, star_dir=args.star_dir, bowtie2_dir=args.bowtie2_dir,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="variant":
if not args.variant_caller.upper()=="GATK":
logger.error("%s is not supported. \
\nThe supported variant caller(s) are: %s."%(args.variant_caller,
variant_caller))
return os.EX_USAGE
logger.info("Running variant calling step using %s"%args.variant_caller)
run_variant(variant_caller=args.variant_caller,
alignment=args.alignment, ref_genome=args.ref_genome, knownsites=args.knownsites,
picard=args.picard, gatk=args.gatk,
java=args.java, java_opts=args.java_opts,
CleanSam=args.CleanSam,
no_BaseRecalibrator=args.no_BaseRecalibrator,
AddOrReplaceReadGroups_opts=args.AddOrReplaceReadGroups_opts,
MarkDuplicates_opts=args.MarkDuplicates_opts,
SplitNCigarReads_opts=args.SplitNCigarReads_opts,
BaseRecalibrator_opts=args.BaseRecalibrator_opts,
ApplyBQSR_opts=args.ApplyBQSR_opts,
HaplotypeCaller_opts=args.HaplotypeCaller_opts,
VariantFiltration_opts=args.VariantFiltration_opts,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="editing":
if not args.editing_caller.upper()=="GIREMI":
logger.error("%s is not supported. \
\nThe supported RNA editing caller(s) are: %s."%(args.editing_caller,
editing_caller))
return os.EX_USAGE
logger.info("Running RNA editing calling step using %s"%args.editing_caller)
run_editing(editing_caller=args.editing_caller,
alignment=args.alignment, variant=args.variant,
strand_pos=args.strand_pos, genes_pos=args.genes_pos,
ref_genome=args.ref_genome, knownsites=args.knownsites,
giremi_dir=args.giremi_dir, htslib_dir=args.htslib_dir,
samtools=args.samtools, gatk=args.gatk,
java=args.java, giremi_opts=args.giremi_opts,java_opts=args.java_opts,
VariantAnnotator_opts=args.VariantAnnotator_opts,
start=args.start, sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="fusion":
if not args.fusion_caller.upper()=="FUSIONCATCHER":
logger.error("%s is not supported. \
\nThe supported fusion predictor(s) are: %s."%(args.fusion_caller,
fusion_caller))
return os.EX_USAGE
logger.info("Running Fusion prediction step using %s"%args.fusion_caller)
run_fusion(fusion_caller=args.fusion_caller,
data_dir=args.data_dir, input=args.input, start=args.start,
fusioncatcher=args.fusioncatcher, fusioncatcher_opts=args.fusioncatcher_opts,
sample= args.sample, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout)
elif mode=="all":
if not args.sr_aligner.upper()=="HISAT2":
logger.error("%s is not supported. \
\nThe supported short read aligner(s) are: %s."%(args.sr_aligner,SR_ALIGNERS))
return os.EX_USAGE
if not args.reconstructor.upper()=="STRINGTIE":
logger.error("%s is not supported. \
\nThe supported transcriptome reconstructor(s) are: %s."%(args.reconstructor,RECONSTRUCTORS))
return os.EX_USAGE
if not args.quantifier.upper()=="SALMON-SMEM":
logger.error("%s is not supported. \
\nThe supported quantifier(s) are: %s."%(args.quantifier, QUANTIFIERS))
return os.EX_USAGE
if not args.difftool.upper()=="DESEQ2":
logger.error("%s is not supported. \
\nThe supported differential analysis tool(s) are: %s."%(args.difftool,DIFFS))
return os.EX_USAGE
if not args.assembler.upper()=="OASES":
logger.error("%s is not supported. \
\nThe supported de novo assembler(s) are: %s."%(args.assembler,DNV_ASSEMBLERS))
return os.EX_USAGE
if not args.long_corrector.upper()=="LORDEC":
logger.error("%s is not supported. \
\nThe supported long read error correction tool(s) are: %s."%(args.long_corrector,LR_CORRECTORS))
return os.EX_USAGE
if not args.long_aligner.upper()=="STARLONG":
logger.error("%s is not supported. \
\nThe supported long read aligner(s) are: %s."%(args.long_aligner,LR_ALIGNERS))
return os.EX_USAGE
if not args.long_reconstructor.upper()=="IDP":
logger.error("%s is not supported. \
\nThe supported long read transcriptome reconstructor(s) are: %s."%(args.long_reconstructor,
LR_RECONSTRUCTOR))
return os.EX_USAGE
if not args.variant_caller.upper()=="GATK":
logger.error("%s is not supported. \
\nThe supported variant caller(s) are: %s."%(args.variant_caller,
variant_caller))
return os.EX_USAGE
if not args.editing_caller.upper()=="GIREMI":
logger.error("%s is not supported. \
\nThe supported RNA editing caller(s) are: %s."%(args.editing_caller,
editing_caller))
return os.EX_USAGE
if not args.fusion_caller.upper()=="FUSIONCATCHER":
logger.error("%s is not supported. \
\nThe supported fusion predictor(s) are: %s."%(args.fusion_caller,
fusion_caller))
return os.EX_USAGE
if not args.long_fusion_caller.upper()=="IDP-FUSION":
logger.error("%s is not supported. \
\nThe supported long read fusion detection tool(s) are: %s."%(args.long_fusion_caller,
LR_FUSION))
return os.EX_USAGE
do_short = True
if (vars(args)["1"]=="" or vars(args)["2"]=="") and args.U=="":
parser.print_help()
logger.info("Input short-read sequence file(s) are missing. Will skip short-read steps")
do_short = False
if (vars(args)["1"]!="" or vars(args)["2"]!="") and args.U!="":
parser.print_help()
logger.error("In pipeline mode, only one input short-read type is possible: paired-end (--1 and --2) or unpaired (--U)")
return os.EX_USAGE
do_long = args.long != ""
if not do_long:
logger.info("Input long-read sequence file(s) are missing. Will skip long-read steps")
samples=[[replicate for replicate in sample.split(",")] for sample in args.sample]
all_samples=[replicate for sample in samples for replicate in sample]
n_samples=sum(len(sample) for sample in samples)
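# A minimal standalone sketch of the sample/replicate flattening above
# (hypothetical demo_* names and IDs, not part of the pipeline): each --sample
# argument is a comma-separated list of replicate IDs.

```python
demo_args_sample = ["A1,A2", "B1"]  # hypothetical: two samples, first has two replicates
demo_samples = [[rep for rep in s.split(",")] for s in demo_args_sample]
demo_all = [rep for s in demo_samples for rep in s]  # flattened replicate IDs
demo_n = sum(len(s) for s in demo_samples)           # total replicate count
assert demo_samples == [["A1", "A2"], ["B1"]]
assert demo_all == ["A1", "A2", "B1"]
assert demo_n == 3
```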
input_sr={}
if (vars(args)["1"] and vars(args)["2"]):
logger.info("Inputs are paired-end reads.")
input_sr["1"] = [j for i in vars(args)["1"] for j in i.split(",")]
input_sr["2"] = [j for i in vars(args)["2"] for j in i.split(",")]
if len(input_sr["1"])!=n_samples or len(input_sr["2"])!=n_samples:
parser.print_help()
logger.error("Number of short paired-end input sequences does not match number of samples.")
return os.EX_USAGE
input_mode="paired"
input_sr["1"]={all_samples[i]:j for i,j in enumerate(input_sr["1"])}
input_sr["2"]={all_samples[i]:j for i,j in enumerate(input_sr["2"])}
input_sr["U"]={all_samples[i]:"" for i,j in enumerate(input_sr["1"])}
else:
logger.info("Inputs are unpaired reads.")
input_sr["U"] = [j for i in args.U for j in i.split(",")]
if len(input_sr["U"])!=n_samples:
parser.print_help()
logger.error("Number of short unpaired input sequences does not match number of samples.")
return os.EX_USAGE
input_mode="un-paired"
input_sr["U"]={all_samples[i]:j for i,j in enumerate(input_sr["U"])}
input_sr["1"]={all_samples[i]:"" for i,j in enumerate(input_sr["U"])}
input_sr["2"]={all_samples[i]:"" for i,j in enumerate(input_sr["U"])}
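# Sketch of how the comma-separated per-sample read files are flattened and
# keyed by replicate ID, mirroring the input_sr construction above
# (hypothetical demo_* names and file names):

```python
demo_all = ["A1", "A2", "B1"]  # hypothetical flattened replicate IDs
demo_seq = [f for arg in ["a1_R1.fq,a2_R1.fq", "b1_R1.fq"] for f in arg.split(",")]
demo_by_rep = {demo_all[i]: f for i, f in enumerate(demo_seq)}
assert demo_by_rep == {"A1": "a1_R1.fq", "A2": "a2_R1.fq", "B1": "b1_R1.fq"}
```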
input_lr={}
if do_long:
input_lr = [j for i in args.long for j in i.split(",")]
if len(input_lr)!=n_samples:
parser.print_help()
logger.error("Number of long input sequences does not match number of samples.")
return os.EX_USAGE
input_lr={all_samples[i]:j for i,j in enumerate(input_lr)}
alignments_bam={}
junctions_tab={}
junctions_bed={}
transcripts={}
abundances={}
quant={}
diff_af=""
diff_al=""
variants={}
transcripts_dnv={}
edits={}
fusions={}
corrected={}
alignments_lr={}
transcripts_lr={}
abundances_lr={}
fusions_lr={}
if do_short:
for si,sample in enumerate(samples):
alignments_bam[si]={}
junctions_tab[si]={}
junctions_bed[si]={}
transcripts[si]={}
abundances[si]={}
quant[si]={}
transcripts_dnv[si]={}
for ri,replicate in enumerate(sample):
logger.info("Assigned sample ID for replicate-%d in sample-%d: %s"%(ri+1,si+1,replicate))
if "align" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running align step using %s for %s"%(args.sr_aligner,replicate))
logger.info("******************************************************************************")
alignments_bam[si][replicate],junctions_tab[si][replicate],junctions_bed[si][replicate]=run_sr_align(sr_aligner=args.sr_aligner,
align_idx=args.align_idx,
seq_1=input_sr["1"][replicate], seq_2=input_sr["2"][replicate],
seq_u=input_sr["U"][replicate],
seq_sra="", ref_gtf=args.ref_gtf,
hisat2_opts=args.hisat2_opts, hisat2=args.hisat2,
hisat2_sps=args.hisat2_sps, samtools=args.samtools,
start=0, sample=replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout,ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding align step using %s for %s"%(args.sr_aligner,replicate))
logger.info("******************************************************************************")
alignments_bam[si][replicate],junctions_tab[si][replicate],junctions_bed[si][replicate]=["","",""]
if "reconstruct" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running reconstruct step using %s for %s"%(args.reconstructor,replicate))
logger.info("******************************************************************************")
transcripts[si][replicate],abundances[si][replicate]=run_reconstruct(reconstructor=args.reconstructor,
alignment_bam=alignments_bam[si][replicate],
ref_gtf=args.ref_gtf,
stringtie_opts=args.stringtie_opts, stringtie=args.stringtie,
start=0, sample=replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding reconstruct step using %s for %s"%(args.reconstructor,replicate))
logger.info("******************************************************************************")
transcripts[si][replicate],abundances[si][replicate]=["",""]
if "quantify" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running quantification step using %s for %s"%(args.quantifier,replicate))
logger.info("******************************************************************************")
quant[si][replicate]=run_quantify(quantifier=args.quantifier, quantifier_idx=args.quantifier_idx,
seq_1=input_sr["1"][replicate], seq_2=input_sr["2"][replicate],
seq_u=input_sr["U"][replicate],
salmon_k=args.salmon_k, libtype=args.libtype,
salmon_smem_opts=args.salmon_smem_opts, salmon=args.salmon,
start=0, sample=replicate, nthreads=args.threads, unzip=args.unzip,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding quantification step using %s for %s"%(args.quantifier,replicate))
logger.info("******************************************************************************")
quant[si][replicate]=""
if "denovo" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running de novo assembly step using %s for %s"%(args.assembler,replicate))
logger.info("******************************************************************************")
transcripts_dnv[si][replicate]=run_dnv_assemebly(assembler=args.assembler,
assmebly_hash=args.assmebly_hash,
seq_1=input_sr["1"][replicate], seq_2=input_sr["2"][replicate],
seq_u=input_sr["U"][replicate], seq_i="",
file_format=args.file_format, read_type=args.read_type,
oases=args.oases, velvetg=args.velvetg, velveth=args.velveth,
oases_opts=args.oases_opts, velvetg_opts=args.velvetg_opts,
velveth_opts=args.velveth_opts,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding de novo assembly step using %s for %s"%(args.assembler,replicate))
logger.info("******************************************************************************")
transcripts_dnv[si][replicate]=""
if "diff" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running differential analysis step (based on alignment-free quantification results) using %s for %s"%(args.difftool,samples))
logger.info("******************************************************************************")
diff_af=run_diff(difftool=args.difftool, quant_files=[",".join([quant[si][replicate] for replicate in sample]) for si,sample in enumerate(samples)],
alignments="",
transcripts_gtfs="",
ref_gtf=args.ref_gtf,
featureCounts_opts=args.featureCounts_opts, featureCounts=args.featureCounts,
stringtie=args.stringtie, stringtie_merge_opts=args.stringtie_merge_opts,
mincount=args.mincount, alpha=args.alpha,
R=args.R, start=0, samples=args.sample, nthreads=args.threads,
workdir=os.path.join(args.workdir, "diff-quant"),
outdir=os.path.join(args.outdir, "diff-quant"), timeout=args.timeout, ignore_exceptions=True)
logger.info("******************************************************************************")
logger.info("Running differential analysis step (based on alignment results) using %s for %s"%(args.difftool,samples))
logger.info("******************************************************************************")
# if use_tgtf
diff_al=run_diff(difftool=args.difftool, quant_files="",
alignments=[",".join([alignments_bam[si][replicate] for replicate in sample]) for si,sample in enumerate(samples)],
transcripts_gtfs=[",".join([transcripts[si][replicate] for replicate in sample]) for si,sample in enumerate(samples)],
ref_gtf=args.ref_gtf,
featureCounts_opts=args.featureCounts_opts, featureCounts=args.featureCounts,
stringtie=args.stringtie, stringtie_merge_opts=args.stringtie_merge_opts,
mincount=args.mincount, alpha=args.alpha,
R=args.R, start=0, samples=args.sample, nthreads=args.threads,
workdir=os.path.join(args.workdir, "diff-alignment"),
outdir=os.path.join(args.outdir, "diff-alignment"), timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding differential analysis step (based on alignment-free quantification results) using %s for %s"%(args.difftool,samples))
logger.info("******************************************************************************")
diff_af=""
logger.info("******************************************************************************")
logger.info("Excluding differential analysis step (based on alignment results) using %s for %s"%(args.difftool,samples))
logger.info("******************************************************************************")
diff_al=""
for si,sample in enumerate(samples):
variants[si]={}
edits[si]={}
fusions[si]={}
for ri,replicate in enumerate(sample):
if "variant" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running variant calling step using %s for %s"%(args.variant_caller,replicate))
logger.info("******************************************************************************")
variants[si][replicate]=run_variant(variant_caller=args.variant_caller,
alignment=alignments_bam[si][replicate], ref_genome=args.ref_genome,
knownsites=args.knownsites,
picard=args.picard, gatk=args.gatk,
java=args.java, java_opts=args.java_opts,
CleanSam=args.CleanSam,
no_BaseRecalibrator=args.no_BaseRecalibrator,
AddOrReplaceReadGroups_opts=args.AddOrReplaceReadGroups_opts,
MarkDuplicates_opts=args.MarkDuplicates_opts,
SplitNCigarReads_opts=args.SplitNCigarReads_opts,
BaseRecalibrator_opts=args.BaseRecalibrator_opts,
ApplyBQSR_opts=args.ApplyBQSR_opts,
HaplotypeCaller_opts=args.HaplotypeCaller_opts,
VariantFiltration_opts=args.VariantFiltration_opts,
start=0, sample=replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding variant calling step using %s for %s"%(args.variant_caller,replicate))
logger.info("******************************************************************************")
variants[si][replicate]=""
if "editing" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running RNA editing calling step using %s for %s"%(args.editing_caller,replicate))
logger.info("******************************************************************************")
edits[si][replicate]=run_editing(editing_caller=args.editing_caller,
alignment=alignments_bam[si][replicate], variant=variants[si][replicate],
strand_pos=args.strand_pos, genes_pos=args.genes_pos,
ref_genome=args.ref_genome, knownsites=args.knownsites,
giremi_dir=args.giremi_dir, htslib_dir=args.htslib_dir,
samtools=args.samtools, gatk=args.gatk,
java=args.java, giremi_opts=args.giremi_opts,java_opts=args.java_opts,
VariantAnnotator_opts=args.VariantAnnotator_opts,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding RNA editing calling step using %s for %s"%(args.editing_caller,replicate))
logger.info("******************************************************************************")
edits[si][replicate]=""
if "fusion" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running fusion prediction step using %s for %s"%(args.fusion_caller,replicate))
logger.info("******************************************************************************")
fusions[si][replicate]=run_fusion(fusion_caller=args.fusion_caller,
data_dir=args.data_dir, input="%s,%s"%(input_sr["1"][replicate],
input_sr["2"][replicate]) if input_mode=="paired" else input_sr["U"][replicate],
start=0,
fusioncatcher=args.fusioncatcher, fusioncatcher_opts=args.fusioncatcher_opts,
sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding fusion prediction step using %s for %s"%(args.fusion_caller,replicate))
logger.info("******************************************************************************")
fusions[si][replicate]=""
if do_long:
if do_short:
for si,sample in enumerate(samples):
corrected[si]={}
alignments_lr[si]={}
transcripts_lr[si]={}
abundances_lr[si]={}
fusions_lr[si]={}
for ri,replicate in enumerate(sample):
if "long_correct" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running long read error correction step using %s for %s"%(args.long_corrector,replicate))
logger.info("******************************************************************************")
corrected[si][replicate]=run_lr_correct(long_corrector=args.long_corrector, kmer=args.kmer,
solid=args.solid,long=input_lr[replicate], short="%s,%s"%(input_sr["1"][replicate],
input_sr["2"][replicate]) if input_mode=="paired" else input_sr["U"][replicate],
lordec=args.lordec, lordec_opts=args.lordec_opts,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding long read error correction step using %s for %s"%(args.long_corrector,replicate))
logger.info("******************************************************************************")
corrected[si][replicate]=""
if "long_align" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running long read alignment step on corrected long-reads using %s for %s"%(args.long_aligner,replicate))
logger.info("******************************************************************************")
alignments_lr[si][replicate]=run_lr_align(long_aligner=args.long_aligner,
long=corrected[si][replicate] if corrected[si][replicate] else input_lr[replicate],
genome_dir=args.star_genome_dir, ref_gtf=args.ref_gtf,
starlong=args.starlong, starlong_opts=args.starlong_opts,
sam2psl=args.sam2psl, samtools=args.samtools,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding long read alignment step on corrected long-reads using %s for %s"%(args.long_aligner,replicate))
logger.info("******************************************************************************")
alignments_lr[si][replicate]=""
if "long_reconstruct" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running long read transcriptome reconstruction step using %s for %s"%(args.long_reconstructor,replicate))
logger.info("******************************************************************************")
transcripts_lr[si][replicate],abundances_lr[si][replicate]=run_lr_reconstruct(long_reconstructor=args.long_reconstructor,
alignment=alignments_bam[si][replicate],
short_junction=junctions_bed[si][replicate],
long_alignment=alignments_lr[si][replicate],
mode_number=args.mode_number,
ref_genome=args.ref_genome, ref_all_gpd=args.ref_all_gpd, ref_gpd=args.ref_gpd,
read_length=args.read_length,
samtools=args.samtools, idp=args.idp, idp_cfg=args.idp_cfg,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding long read transcriptome reconstruction step using %s for %s"%(args.long_reconstructor,replicate))
logger.info("******************************************************************************")
transcripts_lr[si][replicate],abundances_lr[si][replicate]=["",""]
if "long_fusion" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running long read fusion detection step using %s for %s"%(args.long_fusion_caller,replicate))
logger.info("******************************************************************************")
fusions_lr[si][replicate]=run_lr_fusion(long_fusion_caller=args.long_fusion_caller,
alignment=alignments_bam[si][replicate],
short_junction=junctions_bed[si][replicate],
short_fasta=input_sr["U"][replicate],
long_fasta=corrected[si][replicate] if corrected[si][replicate] else input_lr[replicate],
mode_number=args.mode_number,
ref_genome=args.ref_genome, ref_all_gpd=args.ref_all_gpd, ref_gpd=args.ref_gpd,
uniqueness_bedgraph=args.uniqueness_bedgraph,
genome_bowtie2_idx=args.genome_bowtie2_idx, transcriptome_bowtie2_idx=args.transcriptome_bowtie2_idx,
read_length=args.read_length,
samtools=args.samtools, idpfusion=args.idpfusion, idpfusion_cfg=args.idpfusion_cfg,
gmap=args.gmap, gmap_idx=args.gmap_idx, star_dir=args.star_dir, bowtie2_dir=args.bowtie2_dir,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout,ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding long read fusion detection step using %s for %s"%(args.long_fusion_caller,replicate))
logger.info("******************************************************************************")
fusions_lr[si][replicate]=""
else:
for si,sample in enumerate(samples):
corrected[si]={}
for ri,replicate in enumerate(sample):
if "long_align" not in args.exclude:
logger.info("******************************************************************************")
logger.info("Running long read alignment step on original long-reads using %s for %s"%(args.long_aligner,replicate))
logger.info("******************************************************************************")
alignments_lr[si][replicate]=run_lr_align(long_aligner=args.long_aligner,long=input_lr[replicate],
genome_dir=args.star_genome_dir, ref_gtf=args.ref_gtf,
starlong=args.starlong, starlong_opts=args.starlong_opts,
sam2psl=args.sam2psl, samtools=args.samtools,
start=0, sample= replicate, nthreads=args.threads,
workdir=args.workdir, outdir=args.outdir, timeout=args.timeout, ignore_exceptions=True)
else:
logger.info("******************************************************************************")
logger.info("Excluding long read alignment step on original long-reads using %s for %s"%(args.long_aligner,replicate))
logger.info("******************************************************************************")
alignments_lr[si][replicate]=""
tasks={"Short-read alignment":[alignments_bam,junctions_tab,junctions_bed],
"Short-read transcriptome reconstruction":[transcripts,abundances],
"Short-read alignment-free quantification":[quant],
"Short-read alignment-free differential analysis":[diff_af],
"Short-read alignment-based differential analysis":[diff_al],
"Short-read de novo assembly":[transcripts_dnv],
"Short-read variant calling":[variants],
"Short-read RNA editing detection":[edits],
"Short-read fusion detection":[fusions],
"Long-read error correction":[corrected],
"Long-read alignment":[alignments_lr],
"Long-read transcriptome reconstruction":[transcripts_lr,abundances_lr],
"Long-read fusion detection":[fusions_lr],
}
ordered_tasks=["Short-read alignment",
"Short-read transcriptome reconstruction",
"Short-read alignment-free quantification",
"Short-read alignment-free differential analysis",
"Short-read alignment-based differential analysis",
"Short-read de novo assembly",
"Short-read variant calling",
"Short-read RNA editing detection",
"Short-read fusion detection",
"Long-read error correction",
"Long-read alignment",
"Long-read transcriptome reconstruction",
"Long-read fusion detection"]
success={task:[] for task in ordered_tasks}
failure={task:[] for task in ordered_tasks}
for t,vv in tasks.items():
v=vv[0]
if t=="Short-read alignment-free differential analysis" or t=="Short-read alignment-based differential analysis":
if v:
success[t].append("ALL")
else:
failure[t].append("ALL")
else:
if v:
for si,sample in enumerate(samples):
for replicate in sample:
if v[si][replicate]:
success[t].append(replicate)
else:
failure[t].append(replicate)
else:
failure[t].append("ALL")
logger.info("***********************************************")
logger.info("Successful Runs:")
logger.info("***********************************************")
for t in ordered_tasks:
if not set(success[t])^set(all_samples):
success[t]=["ALL"]
if success[t]:
logger.info("%s: %s"%(t,",".join(success[t])))
logger.info("")
logger.info("***********************************************")
logger.info("Failed Runs:")
logger.info("***********************************************")
for t in ordered_tasks:
if not set(failure[t])^set(all_samples):
failure[t]=["ALL"]
if failure[t]:
logger.info("%s: %s"%(t,",".join(failure[t])))
logger.info("")
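The reporting loops above use an empty symmetric difference (`^`) to test whether a task succeeded (or failed) for every sample before collapsing the list to `"ALL"`. A minimal standalone illustration of that idiom, with hypothetical sample names:

```python
success = ["A1", "A2", "B1"]
all_samples = ["A1", "B1", "A2"]

# an empty symmetric difference means the two sets hold exactly the same elements,
# regardless of order or duplicates
covers_all = not set(success) ^ set(all_samples)
# covers_all -> True
```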
else:
logger.error("wrong mode %s"%(mode))
return os.EX_USAGE
logger.info("Run log is saved in " + log_file)
logger.info("All Done!")
return os.EX_OK
from sklearn.preprocessing import StandardScaler
from abc import ABC, abstractmethod
import numpy as np
class DistanceMetric(ABC):
"""Computes distances and defines the optimization model for both
exploration and penalty.
Parameters
----------
-
Attributes
----------
-
"""
@abstractmethod
def get_distance(self, x_left, x_right):
"""Compute the distance between `x_left` and `x_right` per row of x_left.
Parameters
----------
x_left : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of `n_dim`
is the numerical value of a continuous variable.
x_right : np.array, shape (n_dims,)
Each dimension is the numerical value of a continuous variable.
Returns
-------
dist : np.array, shape(n_rows,)
Distance between `x_left` and `x_right`. If multiple rows are given
for `x_left`, a 1-dimensional array, i.e. `n_rows` > 1, is returned.
"""
pass
def get_max_space_scaled_dist(self, ref_points, x_means, x_stddev, model):
# computes maximum distance in search space
n_features = len(model._c_x)
lb = np.asarray(model._c_x_lb)
ub = np.asarray(model._c_x_ub)
lb_std = np.divide(lb - x_means, x_stddev)
ub_std = np.divide(ub - x_means, x_stddev)
max_dist = self.get_distance(
lb_std,
ub_std
)
return max_dist
class SquaredEuclidean(DistanceMetric):
"""Computes distances and defines the optimization model for both
exploration and penalty. The distance metric used is the squared euclidean
distance.
Parameters
----------
-
Attributes
----------
-
"""
@staticmethod
def get_distance(x_left, x_right):
"""Compute the distance between `x_left` and `x_right` per row of x_left.
Here the squared euclidean distance is used.
Parameters
----------
x_left : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of `n_dim`
is the numerical value of a continuous variable.
x_right : np.array, shape (n_dims,)
Each dimension is the numerical value of a continuous variable.
Returns
-------
dist : np.array, shape(n_rows,)
Distance between `x_left` and `x_right`. If multiple rows are given
for `x_left`, a 1-dimensional array, i.e. `n_rows` > 1, is returned.
"""
if x_left.ndim == 1:
dist = np.sum((x_left - x_right)**2)
else:
dist = np.sum((x_left - x_right)**2, axis=1)
return dist
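A standalone NumPy sketch (with hypothetical values) of the row-wise squared Euclidean distance this method computes, relying on broadcasting of `x_right` against each row of `x_left`:

```python
import numpy as np

# two reference points (rows) and one query point
x_left = np.array([[0.0, 0.0], [3.0, 4.0]])
x_right = np.array([3.0, 4.0])

# row-wise squared Euclidean distance, mirroring get_distance
dist = np.sum((x_left - x_right) ** 2, axis=1)
# dist -> array([25., 0.])
```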
def add_exploration_to_gurobi_model(self,
ref_points, x_means, x_stddev, distance_bound, model):
"""Adds exploration constraints to a gurobi optimization model.
Incentivizes solutions far away from reference points.
Parameters
----------
ref_points : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of `n_dim`
is the numerical value of a continuous variable.
x_means : np.array, shape (n_dims,)
Each dimension is the mean value of a continuous variable used to
scale the data set.
x_stddev : np.array, shape (n_dims,)
Each dimension is the std value of a continuous variable used to
scale the data set.
distance_bound : float
Defines the maximum value that the exploration term can take.
Returns
-------
-
"""
from gurobipy import GRB, quicksum
n_ref_points = len(ref_points)
# variable alpha captures distance measure
alpha_bound = distance_bound
model._alpha = \
model.addVar(
lb=0,
ub=alpha_bound,
name="alpha",
vtype='C'
)
def distance_ref_point_i(model, xi_ref, x_mean, x_stddev):
# function returns constraints capturing the standardized
# exploration distance
c_x = model._c_x
alpha = model._alpha
n_features = len(xi_ref)
diff_to_ref_point_i = quicksum(
( (xi_ref[j] - (c_x[j]-x_mean[j]) / x_stddev[j]) * \
(xi_ref[j] - (c_x[j]-x_mean[j]) / x_stddev[j]) )
for j in range(n_features)
)
return alpha <= diff_to_ref_point_i
# add exploration distances as quadratic constraints to the model
for i in range(n_ref_points):
model.addQConstr(
distance_ref_point_i(model, ref_points[i], x_means, x_stddev),
name=f"std_const_{i}"
)
model.update()
def add_penalty_to_gurobi_model(self,
ref_points, x_means, x_stddev, model):
"""Adds penalty constraints to a gurobi optimization model.
Incentivizes solutions close to reference points.
Parameters
----------
ref_points : np.array, shape (n_rows, n_dims)
Each row of n_rows is a reference point. Each dimension of n_dim is
the numerical value of a continuous variable.
x_means : np.array, shape (n_dims,)
Each dimension is the mean value of a continuous variable used to
scale the data set.
x_stddev : np.array, shape (n_dims,)
Each dimension is the std value of a continuous variable used to
scale the data set.
distance_bound : float
Defines the maximum value that the exploration term can take.
Returns
-------
-
"""
from gurobipy import GRB, quicksum
n_ref_points = len(ref_points)
# big m is required to formulate the constraints
model._big_m = \
self.get_max_space_scaled_dist(ref_points, x_means, x_stddev, model)
# binary variables b_ref correspond to active cluster centers
model._b_ref = \
model.addVars(
range(n_ref_points),
name="b_ref",
vtype=GRB.BINARY
)
# variable alpha captures distance measure
model._alpha = \
model.addVar(
ub=GRB.INFINITY,
lb=0.0,
name="alpha",
vtype='C'
)
def distance_ref_point_k(model, xk_ref, k_ref, x_mean, x_stddev):
# function returns constraints capturing the standardized
# penalty distance
c_x = model._c_x
b_ref = model._b_ref
alpha = model._alpha
n_features = len(xk_ref)
diff_to_ref_point_k = quicksum(
( (xk_ref[j] - (c_x[j]-x_mean[j]) / x_stddev[j]) * \
(xk_ref[j] - (c_x[j]-x_mean[j]) / x_stddev[j]) )
for j in range(n_features)
)
big_m_term = model._big_m*(1-b_ref[k_ref])
return diff_to_ref_point_k <= alpha + big_m_term
# add penalty distances as quadratic constraints to the model
for k in range(n_ref_points):
model.addQConstr(
distance_ref_point_k(
model,
ref_points[k],
k,
x_means,
x_stddev
),
name=f"std_const_{k}"
)
def sum_ref_point_vars(n_ref_points, model):
return quicksum(model._b_ref[k] \
for k in range(n_ref_points))== 1
# add additional sum constraints forcing only one ref_point to
# be active
model.addConstr(
sum_ref_point_vars(n_ref_points, model),
name="std_ref_sum"
)
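The big-M constraints above let the binary `b_ref` selector deactivate all but one reference point: `d_k <= alpha + M*(1 - b_k)` only binds where `b_k == 1`, so at the optimum `alpha` takes the distance to the selected (closest) reference point. A small NumPy sketch of that logic, with hypothetical distances:

```python
import numpy as np

# distances from a candidate point to each reference point (hypothetical)
d = np.array([4.0, 1.5, 6.0])
big_m = d.max()  # any value >= the largest possible distance works as big-M

# one-hot selector: the minimizer activates the closest reference point
b = np.zeros_like(d)
b[np.argmin(d)] = 1.0

# the smallest alpha satisfying d_k <= alpha + M*(1 - b_k) for all k
alpha = np.max(d - big_m * (1 - b))
# alpha -> 1.5, i.e. the minimum distance
```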
class Manhattan(DistanceMetric):
"""Computes distances and defines the optimization model for both
exploration and penalty. The distance metric used is the manhattan
distance.
Parameters
----------
-
Attributes
----------
-
"""
@staticmethod
def get_distance(x_left, x_right):
"""Compute the distance between `x_left` and `x_right` per row of
`x_left`. Here the manhattan distance is used.
Parameters
----------
x_left : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of n_dim is
the numerical value of a continuous variable.
x_right : np.array, shape (n_dims,)
Each dimension is the numerical value of a continuous variable.
Returns
-------
dist : np.array, shape(n_rows,)
Distance between `x_left` and `x_right`. If multiple rows are given
for `x_left`, a 1-dimensional array, i.e. `n_rows` > 1, is returned.
"""
if x_left.ndim == 1:
dist = np.sum( np.abs(x_left - x_right) )
else:
dist = np.sum( np.abs(x_left - x_right), axis=1)
return dist
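The same kind of standalone check for the row-wise Manhattan distance (hypothetical values):

```python
import numpy as np

x_left = np.array([[0.0, 0.0], [2.0, 2.0]])  # reference rows
x_right = np.array([2.0, 1.0])

# row-wise Manhattan (L1) distance, mirroring get_distance
dist = np.sum(np.abs(x_left - x_right), axis=1)
# dist -> array([3., 1.])
```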
def add_exploration_to_gurobi_model(self,
ref_points, x_means, x_stddev, distance_bound, model):
"""Adds exploration constraints to a gurobi optimization model.
Incentivizes solutions far away from reference points.
Parameters
----------
ref_points : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of `n_dim`
is the numerical value of a continuous variable.
x_means : np.array, shape (n_dims,)
Each dimension is the mean value of a continuous variable used to
scale the data set.
x_stddev : np.array, shape (n_dims,)
Each dimension is the std value of a continuous variable used to
scale the data set.
distance_bound : float
Defines the maximum value that the exploration term can take.
Returns
-------
-
"""
from gurobipy import GRB, quicksum
n_ref_points = len(ref_points)
# two sets of variables are used to capture positive and negative
# parts of manhattan distance
model._c_x_aux_pos = \
model.addVars(range(n_ref_points), range(model._n_feat),
name="c_x_aux_pos", vtype='C')
model._c_x_aux_neg = \
model.addVars(range(n_ref_points), range(model._n_feat),
name="c_x_aux_neg", vtype='C')
# variable alpha captures distance measure
alpha_bound = distance_bound
model._alpha = \
model.addVar(lb=0,
ub=alpha_bound,
name="alpha", vtype='C')
def distance_ref_point_i_for_feat_j(
model, xi_ref, i_ref, feat_j, x_mean, x_stddev):
# function returns constraints capturing the standardized
# exploration distance
c_x = model._c_x
diff_to_ref_point_i = \
( xi_ref[feat_j] - (c_x[feat_j]-x_mean[feat_j]) / \
x_stddev[feat_j] )
return diff_to_ref_point_i == model._c_x_aux_pos[i_ref, feat_j] - \
model._c_x_aux_neg[i_ref, feat_j]
for i_ref in range(n_ref_points):
for feat_j in range(model._n_feat):
# add constraints to capture distances in variables
# _c_x_aux_pos and _c_x_aux_neg
model.addConstr(
distance_ref_point_i_for_feat_j(model,
ref_points[i_ref], i_ref, feat_j, x_means, x_stddev),
name=f"std_const_feat_{feat_j}_{i_ref}"
)
# add sos constraints that allow only one of the +/- vars,
# i.e. _c_x_aux_pos / _c_x_aux_neg to be active
model.addSOS(GRB.SOS_TYPE1,
[
model._c_x_aux_pos[i_ref, feat_j],
model._c_x_aux_neg[i_ref, feat_j]
]
)
# add exploration distances as linear constraints to the model
model.addConstr(
model._alpha <= quicksum(
(model._c_x_aux_pos[i_ref, j] + model._c_x_aux_neg[i_ref, j])
for j in range(model._n_feat)
),
name=f"alpha_sum_{i_ref}"
)
model.update()
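The auxiliary variable pairs above linearize the absolute value: each signed difference `d` is split as `d = d_pos - d_neg` with `d_pos, d_neg >= 0`, and the SOS-1 constraint keeps at most one of the pair non-zero, so `d_pos + d_neg = |d|`. A NumPy sketch of that decomposition, with hypothetical per-feature differences:

```python
import numpy as np

# signed per-feature differences between a reference point and a candidate
d = np.array([1.5, -2.0, 0.0])

# split into non-negative parts; at most one of each pair is non-zero
d_pos = np.maximum(d, 0.0)
d_neg = np.maximum(-d, 0.0)

manhattan = np.sum(d_pos + d_neg)  # equals sum(|d|)
# manhattan -> 3.5
```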
def add_penalty_to_gurobi_model(self,
ref_points, x_means, x_stddev, model):
"""Adds penalty constraints to a gurobi optimization model.
Incentivizes solutions close to reference points.
Parameters
----------
ref_points : np.array, shape (n_rows, n_dims)
Each row of `n_rows` is a reference point. Each dimension of `n_dim`
is the numerical value of a continuous variable.
x_means : np.array, shape (n_dims,)
Each dimension is the mean value of a continuous variable used to
scale the data set.
x_stddev : np.array, shape (n_dims,)
Each dimension is the std value of a continuous variable used to
scale the data set.
distance_bound : float
Defines the maximum value that the exploration term can take.
Returns
-------
-
"""
from gurobipy import GRB, quicksum
n_ref_points = len(ref_points)
# big m is required to formulate the constraints
model._big_m = \
self.get_max_space_scaled_dist(ref_points, x_means, x_stddev, model)
# two sets of variables are used to capture positive and negative
# parts of manhattan distance
model._c_x_aux_pos = \
model.addVars(range(n_ref_points), range(model._n_feat),
name="c_x_aux_pos", vtype='C')
model._c_x_aux_neg = \
model.addVars(range(n_ref_points), range(model._n_feat),
name="c_x_aux_neg", vtype='C')
# binary variables b_ref correspond to active cluster centers
model._b_ref = \
model.addVars(n_ref_points,
name="b_ref", vtype=GRB.BINARY)
# variable alpha captures distance measure
model._alpha = \
model.addVar(ub=GRB.INFINITY,
lb=0.0,
name="alpha", vtype='C')
def distance_ref_point_i_for_feat_j(
model, xi_ref, i_ref, feat_j, x_mean, x_stddev):
# function returns constraints capturing the standardized
# exploration distance
c_x = model._c_x
diff_to_ref_point_i = \
( xi_ref[feat_j] - (c_x[feat_j]-x_mean[feat_j]) / \
x_stddev[feat_j] )
return diff_to_ref_point_i == model._c_x_aux_pos[i_ref, feat_j] - \
model._c_x_aux_neg[i_ref, feat_j]
for i_ref in range(n_ref_points):
for feat_j in range(model._n_feat):
# add constraints to capture distances in variables
# _c_x_aux_pos and _c_x_aux_neg
model.addConstr(
distance_ref_point_i_for_feat_j(
model, ref_points[i_ref], i_ref, feat_j, x_means, x_stddev),
name=f"std_const_feat_{feat_j}_{i_ref}"
)
# add sos constraints that allow only one of the +/- vars,
# i.e. _c_x_aux_pos / _c_x_aux_neg to be active
model.addSOS(GRB.SOS_TYPE1,
[model._c_x_aux_pos[i_ref, feat_j], model._c_x_aux_neg[i_ref, feat_j]]
)
# add penalty distances as linear constraints to the model
model.addConstr(
quicksum(
(model._c_x_aux_pos[i_ref, j] + model._c_x_aux_neg[i_ref, j])
for j in range(model._n_feat)
) <= model._alpha + model._big_m*(1-model._b_ref[i_ref]),
name=f"alpha_sum_{i_ref}"
)
model.update()
def sum_ref_point_vars(n_ref_points, model):
return quicksum(model._b_ref[k] for k in range(n_ref_points))== 1
# add additional sum constraints forcing only one ref_point to
# be active
model.addConstr(
sum_ref_point_vars(n_ref_points, model),
name="std_ref_sum"
)
class DistanceBasedStd(ABC):
"""Define a distance-based standard estimator.
A `DistanceBasedStd` object is used to quantify model uncertainty based
on distance to reference points, e.g. data points. The underlying assumption
is that base estimator predictions are good close to training data.
Use this class as a template if you want to develop your own distance-based
measure.
Parameters
----------
metric : string
Metric used to compute distances, e.g. squared euclidean, manhattan
Attributes
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
cont_dist_metric : DistanceMetric
Object used to compute distances between continuous variables.
x_means : list
Mean of attribute `Xi`.
x_scaler : list
Scalers of attribute `Xi`.
Xi_standard : list
Standardized `Xi` array.
ref_points : list
Points to which the distance is computed to estimate model uncertainty.
Is different for all child classes.
"""
def __init__(self, metric='sq_euclidean'):
# define the distance metric for continuous variables
if metric == 'sq_euclidean':
from entmoot.learning.distance_based_std import SquaredEuclidean
self.cont_dist_metric = SquaredEuclidean()
elif metric == 'manhattan':
from entmoot.learning.distance_based_std import Manhattan
self.cont_dist_metric = Manhattan()
def set_params(self,**kwargs):
"""Sets parameters related to distance-based standard estimator.
Parameters
----------
kwargs : dict
Additional arguments to be passed to the standard estimator
Returns
-------
-
"""
pass
def update(self, Xi, yi):
"""Update available data points which is usually done after every
iteration.
Parameters
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
Returns
-------
-
"""
# update data set attributes
self.Xi = Xi
self.yi = yi
self.n_features = self.Xi.shape[1]
# compute mean and scaler of data set
standard_scaler = StandardScaler()
projected_features = standard_scaler.fit_transform(self.Xi)
self.x_means = standard_scaler.mean_
self.x_scalers = standard_scaler.scale_
# compute scale coefficient
y_scaler = np.std(self.yi)
self.std_scale_coef = abs(y_scaler / self.n_features)
# standardize dataset
self.Xi_standard = self.standardize_with_Xi(self.Xi)
def standardize_with_Xi(self, X):
"""Standardize given input `X` based on attribute `Xi`.
Parameters
----------
X : numpy array, shape (n_rows, n_dims)
Each row of n_rows is a point in `X`.
Points which are standardized based on `Xi`
Returns
-------
x_standard : numpy array, shape (n_rows, n_dims)
Standardized array of `X`
"""
x_standard = np.divide(X - self.x_means, self.x_scalers)
return x_standard
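A standalone sketch of this standardization on a toy data set. Note that `StandardScaler` uses column means and population standard deviations, reproduced here directly with NumPy:

```python
import numpy as np

Xi = np.array([[0.0, 10.0], [2.0, 14.0], [4.0, 18.0]])  # toy training data

# StandardScaler's defaults: per-column mean and population standard deviation
x_means = Xi.mean(axis=0)
x_scalers = Xi.std(axis=0)

# manual standardization, as in standardize_with_Xi
X = np.array([[4.0, 10.0]])
x_standard = np.divide(X - x_means, x_scalers)
# x_standard -> roughly [[ 1.2247, -1.2247]]
```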
def get_closest_point_distance(self, X):
"""Get distance to point of attribute `ref_points` which is closest to
point given as parameter `X`.
Parameters
----------
X : numpy array, shape (n_dims,)
Point to which the distance of closest reference point is
computed
Returns
-------
dist : numpy array, shape (1, )
Returns distance to closest `ref_point`.
"""
ref_points = np.asarray(self.ref_points)
x_standard = self.standardize_with_Xi(X)
x_standard = np.asarray(x_standard)
dist_cont = \
self.cont_dist_metric.get_distance(ref_points,x_standard)
return np.min(dist_cont)
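The closest-reference-point lookup reduces to a row-wise distance followed by a minimum; a NumPy sketch with hypothetical, already-standardized reference points:

```python
import numpy as np

ref_points = np.array([[0.0, 0.0], [2.0, 2.0]])  # standardized references
x = np.array([1.9, 2.1])  # standardized query point

# squared Euclidean distance to each reference row, then take the closest
dist = np.min(np.sum((ref_points - x) ** 2, axis=1))
# dist -> roughly 0.02 (distance to the second reference point)
```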
def predict(self, X, scaled=True):
"""Predict standard estimate at location `X`.
Parameters
----------
X : numpy array, shape (n_rows, n_dims)
Points at which the standard estimator is evaluated.
Returns
-------
dist : numpy array, shape (n_rows,)
Returns distances to closest `ref_point` for every point per row
in `n_rows`.
"""
dist = np.empty([X.shape[0],])
for row_res,Xi in enumerate(X):
ref_distance = self.get_closest_point_distance(Xi)
dist[row_res] = ref_distance
if scaled:
dist *= self.std_scale_coef
return dist
@abstractmethod
def add_to_gurobi_model(self,model):
"""Add standard estimator to gurobi model. Model details are
implemented in child class.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
"""
pass
@abstractmethod
def get_gurobi_obj(self,model):
"""Get contribution of standard estimator to gurobi model objective
function.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
Returns
-------
alpha : gurobipy.Var,
Model variable that takes the value of the uncertainty measure.
"""
pass
class DistanceBasedExploration(DistanceBasedStd):
"""Defines a child class based on `DistanceBasedStd`. Exploration
refers to how the distance measure contributes to the acquisition
function. Exploration refers to incentivizing distance to reference points
leading to a negative contribution of the distance measure to the objective
function.
Parameters
----------
metric : string
Metric used to compute distances, e.g. squared euclidean, manhattan
zeta : scalar
Coefficient determining how the distance measure is bounded
Attributes
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
cont_dist_metric : DistanceMetric
Object used to compute distances between continuous variables.
x_means : list
Mean of attribute `Xi`.
x_scaler : list
Scalers of attribute `Xi`.
Xi_standard : list
Standardized `Xi` array.
ref_points : list
`ref_points` standardized to which the distance measure is computed.
ref_points_unscaled : list
Unscaled `ref_points` to which the distance measure is computed.
distance_bound : scalar
Bound of exploration measure to prohibit over-exploration
"""
def __init__(self,
metric="sq_euclidean",
zeta=0.5):
super().__init__(metric)
self.zeta = zeta
def set_params(self,**kwargs):
"""Sets parameters related to distance-based standard estimator.
Parameters
----------
kwargs : dict
Additional arguments to be passed to the standard estimator
Returns
-------
-
"""
zeta = kwargs.get("zeta", 0.5)
self.zeta = zeta
def update(self, Xi, yi):
"""Update available data points which is usually done after every
iteration.
Parameters
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
Returns
-------
-
"""
super().update(Xi, yi)
self.ref_points_unscaled = self.Xi
self.ref_points = self.Xi_standard
# compute upper bound of uncertainty
y_var = np.var(yi)
self.distance_bound = abs(self.zeta*y_var)
def predict(self, X, scaled=True):
"""Predict standard estimate at location `X`. By default `dist` is
bounded by attribute `distance_bound`.
Parameters
----------
X : numpy array, shape (n_rows, n_dims)
Points at which the standard estimator is evaluated.
Returns
-------
dist : numpy array, shape (n_rows,)
Returns distances to closest `ref_point` for every point per row
in `n_rows`.
"""
dist = super().predict(X, scaled=scaled)
        # prediction maxes out at `distance_bound`
dist[dist > self.distance_bound] = self.distance_bound
return dist
def get_gurobi_obj(self, model, scaled=True):
"""Get contribution of standard estimator to gurobi model objective
function.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
Returns
-------
alpha : gurobipy.Var,
Model variable that takes the value of the uncertainty measure.
"""
        # a negative contribution of alpha requires the non-convex flag in gurobi.
        model.Params.NonConvex = 2
if scaled:
return -self.std_scale_coef*model._alpha
else:
return -model._alpha
    def add_to_gurobi_model(self, model):
"""Add standard estimator to gurobi model. Model details are
implemented in child class.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
"""
self.cont_dist_metric.add_exploration_to_gurobi_model(
self.ref_points,
self.x_means,
self.x_scalers,
self.distance_bound,
model
)
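# The exploration bound used by `update` and `predict` above can be sketched
# in isolation. This is a minimal illustration, assuming squared-euclidean
# distances and NumPy arrays; `bounded_exploration` is a hypothetical helper,
# not part of the class API.

```python
import numpy as np

def bounded_exploration(X, ref_points, yi, zeta=0.5):
    # squared-euclidean distance from each row of X to its closest reference
    diffs = X[:, None, :] - ref_points[None, :, :]
    dist = np.min(np.sum(diffs ** 2, axis=-1), axis=1)
    # cap the measure at |zeta * var(y)| to prohibit over-exploration
    bound = abs(zeta * np.var(yi))
    return np.minimum(dist, bound)
```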
class DistanceBasedPenalty(DistanceBasedStd):
"""Defines a child class based on `DistanceBasedStd`. Penalty
refers to how the distance measure contributes to the acquisition
function. Penalty refers to penalizing distance to reference points
leading to a positive contribution of the distance measure to the objective
function.
Parameters
----------
metric : string
Metric used to compute distances, e.g. squared euclidean, manhattan
Attributes
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
cont_dist_metric : DistanceMetric
Object used to compute distances between continuous variables.
x_means : list
Mean of attribute `Xi`.
    x_scalers : list
        Scalers of attribute `Xi`.
Xi_standard : list
Standardized `Xi` array.
    ref_points : list
        Standardized reference points to which the distance measure is computed.
    n_ref_points : scalar
        Length of `ref_points`.
    """
def __init__(self,
metric="sq_euclidean"):
super().__init__(metric)
    def set_params(self, **kwargs):
"""Sets parameters related to distance-based standard estimator.
Parameters
----------
kwargs : dict
Additional arguments to be passed to the standard estimator
Returns
-------
-
"""
pass
def update(self, Xi, yi):
"""Update available data points which is usually done after every
iteration.
Parameters
----------
Xi : list
Points at which objective has been evaluated.
yi : scalar
Values of objective at corresponding points in `Xi`.
Returns
-------
-
"""
super().update(Xi, yi)
self.ref_points_unscaled = self.Xi
self.ref_points = self.Xi_standard
def predict(self, X, scaled=True):
"""Predict standard estimate at location `X`. Sign of `dist` is negative
because it contributes as a penalty.
Parameters
----------
X : numpy array, shape (n_rows, n_dims)
Points at which the standard estimator is evaluated.
Returns
-------
dist : numpy array, shape (n_rows,)
Returns distances to closest `ref_point` for every point per row
in `n_rows`.
"""
dist = super().predict(X, scaled=scaled)
return -dist
def get_gurobi_obj(self, model, scaled=True):
"""Get contribution of standard estimator to gurobi model objective
function.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
Returns
-------
alpha : gurobipy.Var,
Model variable that takes the value of the uncertainty measure.
"""
if scaled:
return self.std_scale_coef*model._alpha
else:
return model._alpha
    def add_to_gurobi_model(self, model):
"""Add standard estimator to gurobi model. Model details are
implemented in child class.
Parameters
----------
model : gurobipy.Model,
Model to which the standard estimator is added.
"""
self.cont_dist_metric.add_penalty_to_gurobi_model(
self.ref_points,
self.x_means,
self.x_scalers,
model
        )
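# The penalty variant of `predict` above can also be sketched in isolation:
# the distance to the closest reference point is negated, so straying far
# from the data lowers the acquisition value. Assumes squared-euclidean
# distances; `distance_penalty` is a hypothetical helper, not part of the
# class API.

```python
import numpy as np

def distance_penalty(X, ref_points):
    # squared-euclidean distance to the closest reference point, negated
    diffs = X[:, None, :] - ref_points[None, :, :]
    dist = np.min(np.sum(diffs ** 2, axis=-1), axis=1)
    return -dist
```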
# ---- tests/unit/db/cbor/test_log.py (torquem-ch/silksnake, Apache-2.0) ----
# -*- coding: utf-8 -*-
"""The unit test for hashing module."""
from typing import List
import cbor2
import pytest
from silksnake.db.cbor import log
# pylint: disable=line-too-long,no-self-use
class TestLog:
"""Test case for Log"""
@pytest.mark.parametrize("buffer,address,topics,data,should_pass", [
# Valid test list
('835412b731d23993eb97ba19e7c48ea6428edfd3e3e1845820ba5de06d22af2685c6c7765f60067f7d2b08c2d29f53cdf14d67f6d1c9bfb5275820000000000000000000000000485afa8808deb85c07c1dcbc896623f67e2e763658\
2000000000000000000000000000000000000000000000000000000000016f4770582000000000000000000000000000000000000000000000044664c7bf6451f0000058600000000000000000000000000000000000000000000000\
0000000000000965360000000000000000000000000000000000000000000000000000000000096635c00fdd12a308538d70ee5ab0afef1e99d2281829f4063e767db281a28e601c92',
'12b731d23993eb97ba19e7c48ea6428edfd3e3e1', ['BA5DE06D22AF2685C6C7765F60067F7D2B08C2D29F53CDF14D67F6D1C9BFB527', '000000000000000000000000485AFA8808DEB85C07C1DCBC896623F67E2E7636',
'00000000000000000000000000000000000000000000000000000000016F4770', '00000000000000000000000000000000000000000000044664C7BF6451F00000'],
'00000000000000000000000000000000000000000000000000000000000965360000000000000000000000000000000000000000000000000000000000096635C00FDD12A308538D70EE5AB0AFEF1E99D2281829F4063E767DB281A28E601C92',
True),
('835412b731d23993eb97ba19e7c48ea6428edfd3e3e1845820ba5de06d22af2685c6c7765f60067f7d2b08c2d29f53cdf14d67f6d1c9bfb5275820000000000000000000000000485afa8808deb85c07c1dcbc896623f67e2e763658\
2000000000000000000000000000000000000000000000000000000000016f4770582000000000000000000000000000000000000000000000044664c7bf6451f0000058600000000000000000000000000000000000000000000000\
0000000000000965360000000000000000000000000000000000000000000000000000000000096635c00fdd12a308538d70ee5ab0afef1e99d2281829f4063e767db281a28e601c92',
'12b731d23993eb97ba19e7c48ea6428edfd3e3e1', ['BA5DE06D22AF2685C6C7765F60067F7D2B08C2D29F53CDF14D67F6D1C9BFB527', '000000000000000000000000485AFA8808DEB85C07C1DCBC896623F67E2E7636',
'00000000000000000000000000000000000000000000000000000000016F4770', '00000000000000000000000000000000000000000000044664C7BF6451F00000'],
'00000000000000000000000000000000000000000000000000000000000965360000000000000000000000000000000000000000000000000000000000096635C00FDD12A308538D70EE5AB0AFEF1E99D2281829F4063E767DB281A28E601C92',
True),
# Invalid test list
(None, '', (), '', False),
('', '12b731d23993eb97ba19e7c48ea6428edfd3e3e1', [], '', False),
('80', '12b731d23993eb97ba19e7c48ea6428edfd3e3e1', [], '', False),
('9412b731d23993eb97ba19e7c48ea6428edfd3e3e1c080', '12b731d23993eb97ba19e7c48ea6428edfd3e3e1', [], '', False),
])
def test_from_bytes(self, buffer: str, address: str, topics: List[str], data: str, should_pass: bool):
"""Unit test for from_bytes."""
buffer_bytes = bytes.fromhex(buffer) if buffer is not None else None
topics_bytes = [bytes.fromhex(topic) for topic in topics]
data_bytes = bytes.fromhex(data) if data is not None else None
if should_pass:
log_instance = log.Log.from_bytes(buffer_bytes)
assert log_instance.address == address
assert log_instance.topics == topics_bytes
assert log_instance.data == data_bytes
assert len(str(log_instance)) > 0
assert len(repr(log_instance)) > 0
else:
with pytest.raises((cbor2.CBORDecodeError, ValueError)):
log.Log.from_bytes(buffer_bytes)
# ---- aoc2021/day9-amazingsolution.py (sagasu/python-algorithms, MIT) ----
data = [
"5456789349886456890123985435578996543213456789656899996467789234989765442345789778999989652349879899",
"4349891298765348789339875323456789665434568996545698874356679959879898321457893569998879931998765668",
"1298910989873234595498764312345678976746899989656987563234567899767987442578954678987968899897654457",
"2987939875432123489999953201234599698657979979997965432023479998959876553689965789876856789789543345",
"9896899984321012668899865313546789569798965469879876553135568987643988767997896898765945697698722256",
"8765789965442143456789996579658895434999876398767987864589679876542099898966789999833123598589810123",
"9954629876553234667899987988767932129899989219755398878678989987943989959355678998921057987678924345",
"6543212989654345788999898999998941098789998998543229989789797999899876543134989997632345698789545456",
"8654543498765476899398759126679953997678987987632101297997656798678965432015699876546559989898756567",
"8767654579879989943297642014567899889567896597543223456789348976578976543126921998758698979999768979",
"9988765678998999765965432123789998765456997498654345789893219865458897654434890139769987867896979989",
"9899876989987899899876843234891249654369889329765567899994325976346789887565789349898765756975395699",
"8767987899876999997987755346910198773235679939876688989875634987497893999679899959999654248954234799",
"9653299999875789986599896467899987652134567899999789879876786798998932398789999898998643135932129978",
"8632102398764578965434987578978698743028979989998998866987899899989510129999987797987659239891098769",
"6543223987643589874323498678965469764167899878987897654398999989876431236789876576898798998789989756",
"7654564597532599993214998789975379894356899865476789762129789879876543345699765455789987684569877546",
"8799796798543489874109899899876123989456989874345678995345698765987876596998954314597998593498766434",
"9989987987654678965929799999993239979569876543234589986797987653298989989897895423456987432987655325",
"9876898998779899899898589998754998768998765432107678999989298942139599879766889435668986543498943212",
"9785799659989956798767467899769878657889877543234789998979109893013498765455679546899598665569954323",
"9654588949893245697654356999898767345678987698645679876767998782129987654343568958912459988798766534",
"7543567998732196899793234789999543236789699798759798765656897643298799973212389979101967999899887646",
"6432457898653989932989345699987684545893568999868979874545989654349698764324567899999899767999998787",
"7421348999769878949878996789999876657932378999979657953435679876598539875634689979876798657789429898",
"3210128799898767956965989899987988867891234989989345942124789998987621986875678967965987545679945929",
"4323235678923456899854567978976799978910349878991249873675678989798710197987989459893296534567896912",
"5454348789636567989765878967965456989321398767890956954578789977679821298998994299794987321457999899",
"7689656896547678979896989549876567895432999656799897967789897855569932459019865987689876432346899768",
"9798798998658999867998996432987678999549899245698689879892996743498643567999989876530998546456796547",
"9899899139769643456999876545698789598998778967789542989954985432129754979789997987421987698578899656",
"3956921019898752147898987676789993497987668989899901299875976593939878998668946799532398799989998997",
"2345892345999864234987698787896432986543445899999892989989987989899999976557899987643469896799997989",
"1016795469899975695995429898999999995432036789987799767898999879789998764345998987654567965678986879",
"2525679579789986989876210989878878986544125679986678956967999765678999855235687899768789654379875668",
"3434567989678999879998329876756569876543234589765459548456898764567899932123456969899997543268994345",
"7566798998989239868965498995435457987764547678955347932345679876689999899015897954966989875346789656",
"9789899867992198957896597954321349999975656889543136793656789997799975768976788963245976996657898769",
"9899997656893987645789986896440497898896787897665945679778997598899854356898899995139895987767899878",
"8989876545679998435699875789559976786797898998779897899899995439998765456999999989298784598998996999",
"7879987934567954324987654668998765345689959689898799910999989323689878767892198778997653239569985321",
"6567899895678965439876523456987643286789244567999667899989878934569989978999987667998732123489876432",
"5434998799789877598965412355698321098992153456796543498868767957878999999398799456789541025678997643",
"3129899678995998997894301234899532136789012568986432987659756899989999893298654345897632125989898856",
"4598797587894359876894213568998754245989193467965421098643237923499989789139885457998784534898759878",
"5999675466789299875689428689549895345678989979876533498754346935999767679099976868959987656789943989",
"9876543245699987654578938795430976467989667899998754579865467899898654568989899979249898767899899993",
"9985432125789998743567899894321987598992556998879867678987568998789843499876788989198779898989798921",
"7898743014569879852456789965939798679321349876765978789998979987679921989795667891098655999876567893",
"6987653123698765920378898979896549789210199765654989899979989875568999876654456789297643498765456989",
"9998754534899994321234567898789329895321987654323499998765698784459679765443234678989931019874359878",
"8979986785678987435345679987679434976433499876212578987644597643274598754320123799767892398765699967",
"7768997896789876587486889898569555986545689974323459954323998932123459866541234598756789469886987856",
"6456798987893987697578998766478976797969789865464567893219876543235579877532346689347898579998986543",
"4347899999932398798678998754367898899898993976789679987623998995468678998546456789456987678979897421",
"5456987891291999899989659765456789989697894987898799876534569989579789979798767897567898989469653210",
"7567896790989899956796539878677899978576989998999893989545698878998996869899878998678999993298964523",
"8879935789876788932986424989789999867485678899987932395987987659497895456989989659799899989987895634",
"9989323498765567891093212399899998754334799789876421034599997545376994345679896543986789876766989745",
"6695214998854458989989923569979876543212345698765432123498898434265679266989689959875698765454878957",
"4594349876543234568979894698667987656101356789876743649987654320124589356789567898764329954343459998",
"2987656987435156679965789987546499797219459899998654997698965434245678967892459929873219543212368999",
"1099878994321067789874689876432345989498968978998789876569876554356789988921378910989398654563567897",
"2178989965442128999943599984321259878987899567889898765459989866457891299543467899898989765687679956",
"4569399879674236789732398743210198767466789335678999654378999878968954397664578998787779876998789632",
"6891239998787645897653987654323987654345679126799198796459998989879765989876789999676567987899996543",
"7920198769898657998764598765459876543234678938894349986567897692999979975987998987543456798998987655",
"8941987654998789729978689876599987532145678956976499987778976543978898943299987976532102679987698786",
"9659998543239897539989798987989976541013799767898989898989987999865767892109876598799993568976539987",
"9998987654357999998599897899879997663123478988999875679297599789754456793298997329987889979765423498",
"7767899765468998897432986798768989893234567899698764799198988678963345989987789419876776898976434579",
"6756789976979767796521095679946679964545678924569953898999876567892259869876678923995665987897545678",
"5345999898989745689432194599834569875657899535679831987894324456891098754324567899874354556789696989",
"3234998789497659996543986987612346989798999947799762986543212367952169765913478998765212345678987891",
"0199879695398798789674987965401656999899587899987653987655423478943459899894589998754303456789698910",
"1987654569219899678995699876213457899945456989999768998766545678986567998789678999865412567896599932",
"2398543458901987567989789984345569989631345678999899469898758789997679989698999989986543456789387893",
"3987654567899895456778999995668979876520234567896952399969969898898798778456789976597654568895456794",
"4598897678998754234567899876779989987431345778995431989459878976789989656345467894398767678998768989",
"7679998789329895345978953987889296796532459889889549879321989345899876543234349943209879789899979978",
"9796999899902976567889992198999345986545669994679698767990993276799989854101247895499989896789898765",
"9895789999893597678999989239998456797676778933468987657789894387989998768212356976989998965899765754",
"3934598998789698789999878956987568998987889012457896545698765498978999978326979899678987654987654323",
"2123567997688999899898767999998678979998993233468993234569976999869899989545998787567899543499843212",
"3234979783467899998789956678999989568989654354569689156789989897456789987659898645456789532398765301",
"4569898672456789987695434599789995479679765457689589235679998756345679998798786534345789643999876412",
"5698765421345698768459996789589954234569879878797678946899987643234568979897654323245689659876986433",
"8789876510124569654347789993467893199979989989998789757998998743123458954999843210156799998765498764",
"9998765421267898542125678975678999987898793296789899867987689854234567893598764321277899899654359875",
"9329878632358987651014567896989998976799532135678999978976534965446789932349875432348998798765667986",
"8912998793479199432123678998999876565987673234567998989986549876767899953456986546556795659976889997",
"7894989895589298753234899769235965434598765345688997899987656998898998764567898657867954345989998998",
"6789976986678999954745678952139876512349998656899876569898967989989689877698949768978985466996567899",
"5689895499899888895677789543025987829467899767998775466789989879976578999789429899989876877895456789",
"4578789357998767797799998765434598998998999878989654345678999967896469989993210967999987998954345678",
"3435689467997655689899899879545679997889997999678921234589997656789359879879921256789798999543257789",
"2324579569889834578965789998976899876679876544567892346899986545992198765767893345997569987654568999",
"1012459698779323569954678967997987764589997432378965467999987956893987654556789457896456999765678967",
"2123468987656313467893212356789996543678986543567899578998898768954998643544579968998345899976899456",
"3654567896543202348932104567899987654569999864568998679997649879869876542123567899765456789987893237",
]
import functools
import collections

# Parse the grid of digit heights.
l = []
for line in data:
    l.append([int(c) for c in line.strip()])
n, m = len(l), len(l[0])

@functools.lru_cache(None)
def ans(x, y):
    # Follow any strictly lower neighbour downhill; a cell with no lower
    # neighbour is the low point that this cell's basin drains into.
    for i, j in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]:
        if 0 <= i < n and 0 <= j < m and l[i][j] < l[x][y]:
            return ans(i, j)
    return (x, y)

# Every cell below height 9 drains into exactly one low point, so counting
# cells per low point yields the basin sizes.
d = collections.Counter(ans(i, j) for i in range(n) for j in range(m) if l[i][j] != 9)
z = sorted(d.values())
print(z[-1] * z[-2] * z[-3])
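# The memoised drain-to-low-point trick above can be sanity-checked on a toy
# grid (hypothetical example, not taken from any puzzle input):

```python
import functools, collections

grid = [
    [2, 1, 9],
    [3, 9, 0],
]
n2, m2 = len(grid), len(grid[0])

@functools.lru_cache(None)
def low_point(x, y):
    # follow any strictly lower neighbour; a cell with none is a low point
    for i, j in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]:
        if 0 <= i < n2 and 0 <= j < m2 and grid[i][j] < grid[x][y]:
            return low_point(i, j)
    return (x, y)

basins = collections.Counter(
    low_point(i, j) for i in range(n2) for j in range(m2) if grid[i][j] != 9
)
```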
# ---- tests/test_main.py (biosimulators/Biosimulators_utils, MIT) ----
from biosimulators_utils.combine.data_model import CombineArchive, CombineArchiveContent
from biosimulators_utils.viz.vega.utils import dict_to_vega_dataset
from biosimulators_utils.warnings import BioSimulatorsWarning
from unittest import mock
import biosimulators_utils
import biosimulators_utils.__main__
import capturer
import json
import os
import shutil
import tempfile
import unittest
class CliTestCase(unittest.TestCase):
def setUp(self):
self.tmp_dir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tmp_dir)
def test_help(self):
with biosimulators_utils.__main__.App(argv=[]) as app:
with capturer.CaptureOutput(merged=False, relay=False) as captured:
app.run()
stdout = captured.stdout.get_text()
self.assertTrue(stdout.startswith('usage: biosimulators-utils'))
self.assertEqual(captured.stderr.get_text(), '')
def test_version(self):
with biosimulators_utils.__main__.App(argv=['-v']) as app:
with capturer.CaptureOutput(merged=False, relay=False) as captured:
with self.assertRaises(SystemExit) as cm:
app.run()
self.assertEqual(cm.exception.code, 0)
stdout = captured.stdout.get_text()
self.assertEqual(stdout, biosimulators_utils.__version__)
self.assertEqual(captured.stderr.get_text(), '')
with biosimulators_utils.__main__.App(argv=['--version']) as app:
with capturer.CaptureOutput(merged=False, relay=False) as captured:
with self.assertRaises(SystemExit) as cm:
app.run()
self.assertEqual(cm.exception.code, 0)
stdout = captured.stdout.get_text()
self.assertEqual(stdout, biosimulators_utils.__version__)
self.assertEqual(captured.stderr.get_text(), '')
def test_raw_cli(self):
with mock.patch('sys.argv', ['', '--help']):
with self.assertRaises(SystemExit) as context:
biosimulators_utils.__main__.main()
            self.assertRegex(str(context.exception), 'usage: biosimulators-utils')
def test_build_modeling_project(self):
archive_filename = os.path.join(self.tmp_dir, 'archive.omex')
with biosimulators_utils.__main__.App(argv=[
'build-project',
'undefined',
os.path.join(os.path.dirname(__file__), 'fixtures', 'bngl', 'valid.bngl'),
'UniformTimeCourse',
archive_filename,
]) as app:
with self.assertRaisesRegex(SystemExit, 'Model language must be'):
app.run()
with biosimulators_utils.__main__.App(argv=[
'build-project',
'BNGL',
os.path.join(os.path.dirname(__file__), 'fixtures', 'bngl', 'valid.bngl'),
'undefined',
archive_filename,
]) as app:
with self.assertRaisesRegex(SystemExit, 'Simulation type must be'):
app.run()
with biosimulators_utils.__main__.App(argv=[
'build-project',
'BNGL',
os.path.join(os.path.dirname(__file__), 'fixtures', 'bngl', 'valid.bngl'),
'UniformTimeCourse',
archive_filename,
]) as app:
app.run()
self.assertTrue(os.path.isfile(archive_filename))
def test_validate_model(self):
with biosimulators_utils.__main__.App(argv=[
'validate-model',
'SBML',
os.path.join(os.path.dirname(__file__), 'fixtures', 'BIOMD0000000297.xml'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid'):
with biosimulators_utils.__main__.App(argv=[
'validate-model',
'SBML',
os.path.join(os.path.dirname(__file__), 'fixtures', 'does not exist'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'Model language must be'):
with biosimulators_utils.__main__.App(argv=[
'validate-model',
'invalid',
os.path.join(os.path.dirname(__file__), 'fixtures', 'BIOMD0000000297.xml'),
]) as app:
app.run()
def test_validate_simulation(self):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'BIOMD0000000673_sim.sedml'),
]) as app:
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint.sedml'),
]) as app:
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml',
'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-invalid-model.sedml'),
]) as app:
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml',
'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-invalid-target.sedml'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid.'):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml',
'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-invalid-xpath.sedml'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid.'):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'does not exist'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid.'):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'no-id.sedml'),
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid.'):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'duplicate-ids.sedml'),
]) as app:
app.run()
with self.assertRaisesRegex(ValueError, 'Big error'):
with biosimulators_utils.__main__.App(argv=[
'validate-simulation',
os.path.join(os.path.dirname(__file__), 'fixtures', 'sedml', 'duplicate-ids.sedml'),
]) as app:
with mock.patch.object(biosimulators_utils.sedml.io.SedmlSimulationReader, 'run', side_effect=ValueError('Big error')):
app.run()
def test_validate_metadata(self):
with biosimulators_utils.__main__.App(argv=[
'validate-metadata',
os.path.join(os.path.dirname(__file__), 'fixtures', 'omex-metadata', 'biosimulations-abbrev.rdf'),
]) as app:
with mock.patch.dict(os.environ, {'OMEX_METADATA_SCHEMA': 'BioSimulations'}):
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-metadata',
os.path.join(os.path.dirname(__file__), 'fixtures', 'omex-metadata', 'biosimulations-abbrev.rdf'),
]) as app:
with mock.patch.dict(os.environ, {'OMEX_METADATA_SCHEMA': 'rdf_triples'}):
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid'):
with biosimulators_utils.__main__.App(argv=[
'validate-metadata',
os.path.join(os.path.dirname(__file__), 'fixtures', 'omex-metadata', 'malformed.rdf'),
]) as app:
with mock.patch.dict(os.environ, {'OMEX_METADATA_SCHEMA': 'BioSimulations'}):
app.run()
with self.assertRaisesRegex(SystemExit, 'is invalid'):
with biosimulators_utils.__main__.App(argv=[
'validate-metadata',
os.path.join(os.path.dirname(__file__), 'fixtures', 'omex-metadata', 'missing-required.rdf'),
]) as app:
with mock.patch.dict(os.environ, {'OMEX_METADATA_SCHEMA': 'BioSimulations'}):
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-metadata',
os.path.join(os.path.dirname(__file__), 'fixtures', 'omex-metadata', 'missing-required.rdf'),
]) as app:
with mock.patch.dict(os.environ, {'OMEX_METADATA_SCHEMA': 'rdf_triples'}):
app.run()
def test_validate_modeling_project(self):
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'mock-file'),
]) as app:
archive = CombineArchive(contents=[])
with mock.patch('biosimulators_utils.combine.io.CombineArchiveReader.run', return_value=archive):
with mock.patch('biosimulators_utils.combine.validation.validate', return_value=([], [])):
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint.omex'),
]) as app:
with capturer.CaptureOutput(merged=False, relay=False) as captured:
app.run()
stdout = captured.stdout.get_text()
self.assertRegex(stdout, 'Archive contains 1 SED-ML documents with 1 models')
# warnings
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'mock-file'),
]) as app:
archive = CombineArchive(contents=[CombineArchiveContent(), CombineArchiveContent()])
with mock.patch('biosimulators_utils.combine.io.CombineArchiveReader.run', return_value=archive):
with mock.patch('biosimulators_utils.combine.validation.validate', return_value=([['Bigger error']], [['Big warning']])):
with self.assertWarnsRegex(BioSimulatorsWarning, '- Big warning'):
with self.assertRaisesRegex(SystemExit, '- Bigger error'):
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'mock-file'),
]) as app:
archive = CombineArchive(contents=[])
with mock.patch('biosimulators_utils.combine.io.CombineArchiveReader.run', return_value=archive):
with self.assertRaisesRegex(SystemExit, 'must have at least one content element'):
with self.assertWarnsRegex(BioSimulatorsWarning, 'does not contain any SED-ML files'):
app.run()
# error
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'not-a-file'),
]) as app:
with self.assertRaisesRegex(SystemExit, 'is not a file'):
app.run()
with biosimulators_utils.__main__.App(argv=[
'validate-project',
os.path.join(os.path.dirname(__file__), 'fixtures', 'mock-file'),
]) as app:
archive = CombineArchive(contents=[CombineArchiveContent(), CombineArchiveContent()])
with mock.patch('biosimulators_utils.combine.io.CombineArchiveReader.run', return_value=archive):
with self.assertRaisesRegex(SystemExit, '- Content element must'):
app.run()
def test_exec_modeling_project(self):
with biosimulators_utils.__main__.App(argv=[
'exec',
'ghcr.io/biosimulators/copasi:latest',
'-i', os.path.join(os.path.dirname(__file__), 'fixtures', 'Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint.omex'),
'-o', os.path.join(self.tmp_dir, 'results'),
'--env', 'KEY1=value1', 'KEY2=value2',
'--user', str(os.getuid()),
]) as app:
app.run()
outputs = os.listdir(os.path.join(self.tmp_dir, 'results'))
self.assertIn('reports.h5', outputs)
def test_exec_modeling_project_error_handling(self):
with self.assertRaisesRegex(SystemExit, 'must be pairs of keys and values'):
with biosimulators_utils.__main__.App(argv=[
'exec',
'ghcr.io/biosimulators/tellurium:latest',
'-i', os.path.join(os.path.dirname(__file__), 'fixtures', 'BIOMD0000000297.omex'),
'-o', os.path.join(self.tmp_dir, 'results'),
'--env', 'KEY1:value1', 'KEY2-value2',
'--user', str(os.getuid()),
]) as app:
app.run()
def test_convert_help(self):
with biosimulators_utils.__main__.App(argv=['convert']) as app:
app.run()
def test_convert_escher_to_vega(self):
escher_filename = os.path.join(os.path.dirname(__file__), 'fixtures', 'escher', 'e_coli_core.Core metabolism.json')
vega_filename = os.path.join(self.tmp_dir, 'viz.json')
# data from SED-ML report
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'--data-sedml', 'simulation.sedml/report_1',
escher_filename,
vega_filename,
]) as app:
app.run()
with open(vega_filename, 'rb') as file:
vega = json.load(file)
reaction_data_set = next(data for data in vega['data'] if data['name'] == 'reactionFluxes')
self.assertEqual(reaction_data_set, {'name': 'reactionFluxes', 'sedmlUri': ['simulation.sedml', 'report_1']})
# data from file
data_filename = os.path.join(self.tmp_dir, 'fluxes.json')
flux_values = dict_to_vega_dataset({
'GND': 2.,
'PGK': 10.,
})
with open(data_filename, 'w') as file:
json.dump(flux_values, file)
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'--data-file', data_filename,
escher_filename,
vega_filename,
]) as app:
app.run()
with open(vega_filename, 'rb') as file:
vega = json.load(file)
reaction_data_set = next(data for data in vega['data'] if data['name'] == 'reactionFluxes')
self.assertEqual(reaction_data_set, {'name': 'reactionFluxes', 'values': flux_values})
# data at URL
data_url = 'http://site.com/flux.json'
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'--data-url', data_url,
escher_filename,
vega_filename,
]) as app:
app.run()
with open(vega_filename, 'rb') as file:
vega = json.load(file)
reaction_data_set = next(data for data in vega['data'] if data['name'] == 'reactionFluxes')
self.assertEqual(reaction_data_set, {'name': 'reactionFluxes', 'url': data_url})
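The assertions above all use the same lookup idiom: find a named entry in a Vega `data` array, then inspect how its source is declared (inline `values`, a `url`, or the BioSimulators-specific `sedmlUri` pointer). A standalone sketch of just that idiom (the dictionary below is illustrative, not real converter output):

```python
# Illustrative Vega document fragment; a real one would come from
# `convert escher-to-vega`. Only the lookup logic is demonstrated.
vega = {
    "data": [
        {"name": "other", "values": []},
        {"name": "reactionFluxes", "url": "http://site.com/flux.json"},
    ]
}

# Same idiom as the tests: pick the data set by name, then check its source.
reaction_data_set = next(d for d in vega["data"] if d["name"] == "reactionFluxes")
print(reaction_data_set["url"])  # http://site.com/flux.json
```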
def test_convert_ginml_to_vega(self):
ginml_filename = os.path.join(os.path.dirname(__file__), 'fixtures', 'ginml', 'ginsim-35-regulatoryGraph.ginml')
vega_filename = os.path.join(self.tmp_dir, 'viz.json')
# data from SED-ML report
with biosimulators_utils.__main__.App(argv=[
'convert', 'ginml-to-vega',
'--data-sedml',
ginml_filename,
vega_filename,
]) as app:
app.run()
with open(vega_filename, 'rb') as file:
vega = json.load(file)
data_set = next(data for data in vega['data'] if data['name'] == 'nodesValues')
self.assertEqual(data_set, {'name': 'nodesValues', 'sedmlUri': []})
def test_convert_sbgn_to_vega(self):
sbgn_filename = os.path.join(os.path.dirname(__file__), 'fixtures', 'sbgn', 'Repressilator_PD_v6_color-modified.sbgn')
vega_filename = os.path.join(self.tmp_dir, 'viz.json')
# data from SED-ML report
with biosimulators_utils.__main__.App(argv=[
'convert', 'sbgn-to-vega',
'--data-sedml', 'simulation.sedml/report',
sbgn_filename,
vega_filename,
]) as app:
app.run()
with open(vega_filename, 'rb') as file:
vega = json.load(file)
data_set = next(data for data in vega['data'] if data['name'] == 'glyphsValues')
self.assertEqual(data_set, {'name': 'glyphsValues', 'sedmlUri': ['simulation.sedml', 'report']})
def test_convert_diagram_error_handling(self):
with self.assertRaisesRegex(SystemExit, 'must be used'):
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'path/to/escher.json',
'path/to/vg.json',
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'can be used'):
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'--data-file', 'path/to/flux.json',
'--data-url', 'http://site.com/flux.json',
'path/to/escher.json',
'path/to/vg.json',
]) as app:
app.run()
with self.assertRaisesRegex(SystemExit, 'No such file or directory'):
with biosimulators_utils.__main__.App(argv=[
'convert', 'escher-to-vega',
'--data-url', 'path/to/flux.json',
'path/to/escher.json',
'path/to/vg.json',
]) as app:
app.run()
| 43.861905 | 137 | 0.586038 | 1,959 | 18,422 | 5.255743 | 0.119959 | 0.041375 | 0.089744 | 0.10101 | 0.812937 | 0.794483 | 0.786713 | 0.762821 | 0.724165 | 0.695416 | 0 | 0.006424 | 0.281783 | 18,422 | 419 | 138 | 43.966587 | 0.771748 | 0.006134 | 0 | 0.698324 | 0 | 0 | 0.205289 | 0.05262 | 0 | 0 | 0 | 0 | 0.114525 | 1 | 0.047486 | false | 0 | 0.03352 | 0 | 0.083799 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4534d2a5c4a719f9955528ed81bada81b5c3724f | 79 | py | Python | tests/TopLevelPackage/packageB/packageBB/packageBBB/__init__.py | jsfehler/package-tree | e1416a55f077083f98db6805bef9aeef1ec62c29 | [
"MIT"
] | 1 | 2022-01-03T17:26:30.000Z | 2022-01-03T17:26:30.000Z | tests/TopLevelPackage/packageB/packageBB/packageBBB/__init__.py | jsfehler/package-tree | e1416a55f077083f98db6805bef9aeef1ec62c29 | [
"MIT"
] | 2 | 2018-06-26T03:01:01.000Z | 2018-09-04T22:03:25.000Z | tests/TopLevelPackage/packageB/packageBB/packageBBB/__init__.py | jsfehler/package-tree | e1416a55f077083f98db6805bef9aeef1ec62c29 | [
"MIT"
] | null | null | null | from .classBBB import ClassBBB # NOQA
from .classBBB import ClassBBB2 # NOQA
| 26.333333 | 39 | 0.772152 | 10 | 79 | 6.1 | 0.5 | 0.393443 | 0.590164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.177215 | 79 | 2 | 40 | 39.5 | 0.923077 | 0.113924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
18b3ce3511b70f72a1b5887b2be67f205d11c9df | 21,051 | py | Python | test/test_block_system.py | ComplexArts/pyctrl-core | a72bd53924410c2e7f1e71c8188a0391550febdd | [
"Apache-2.0"
] | null | null | null | test/test_block_system.py | ComplexArts/pyctrl-core | a72bd53924410c2e7f1e71c8188a0391550febdd | [
"Apache-2.0"
] | null | null | null | test/test_block_system.py | ComplexArts/pyctrl-core | a72bd53924410c2e7f1e71c8188a0391550febdd | [
"Apache-2.0"
] | null | null | null | import unittest
import numpy as np
import pyctrl.block as block
import pyctrl.block.system as system
import pyctrl.system.tf as tf
import pyctrl.system.ss as ss
test_ode = True
try:
import pyctrl.system.ode as ode
except ImportError:
test_ode = False
class TestUnittestAssertions(unittest.TestCase):
def test_System(self):
signals = {'clock': 1, 'encoder1': 2, 'test': 3}
labels = ['clock', 'encoder1']
# Transfer-function
num = np.array([1, 1, 3])
den = np.array([1, -1])
sys = tf.DTTF(num, den)
self.assertTrue(np.array_equal(sys.num, num))
den = np.array([1, -1, 0])
self.assertTrue(np.array_equal(sys.den, den))
self.assertTrue(np.array_equal(sys.state, np.zeros(2)))
blk = system.System(model=sys)
self.assertTrue(blk.model is sys)
with self.assertRaises(block.BlockException):
blk = system.System(modelo=sys)
with self.assertRaises(block.BlockException):
blk = system.System(model=1)
with self.assertRaises(block.BlockException):
blk = system.System(model=sys, mux=False)
blk.write([1])
(yk,) = blk.read()
state = np.array([1, 0])
self.assertTrue(np.array_equal(sys.state, state))
assert yk == 1
blk.write([-1])
(yk,) = blk.read()
state = np.array([0, 1])
self.assertTrue(np.array_equal(sys.state, state))
assert yk == 1
blk.write([2])
(yk,) = blk.read()
state = np.array([2, 0])
self.assertTrue(np.array_equal(sys.state, state))
assert yk == 5
blk.write([1])
(yk,) = blk.read()
state = np.array([3, 2])
self.assertTrue(np.array_equal(sys.state, state))
assert yk == 5
blk.reset()
yk = sys.update(0)
assert yk == 0
num = np.array([1, 1])
den = np.array([1, -1])
sys2 = tf.DTTF(num, den)
blk = system.System(model=sys)
blk.set(model=sys2)
assert blk.model is sys2
with self.assertRaises(block.BlockException):
blk.set(model=1)
# State space
A = np.array([[0, 1], [1, -2]])
B = np.array([[0], [1]])
C = np.array([[1, -2], [0, 1]])
D = np.array([[1], [0]])
sys = ss.DTSS(A, B, C, D)
self.assertTrue(np.array_equal(sys.A, A))
self.assertTrue(np.array_equal(sys.B, B))
self.assertTrue(np.array_equal(sys.C, C))
self.assertTrue(np.array_equal(sys.D, D))
self.assertTrue(np.array_equal(sys.state, np.zeros(2)))
blk = system.System(model=sys)
assert blk.model is sys
with self.assertRaises(block.BlockException):
blk = system.System(modelo=sys)
blk.write([1])
yk, = blk.read()
state = np.array([0, 1])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([1, 0])))
blk.write([-1])
yk, = blk.read()
state = np.array([1, -3])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([-3, 1])))
blk.write([3])
yk, = blk.read()
state = np.array([-3, 10])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([10, -3])))
blk.write([0])
yk, = blk.read()
state = np.array([10, -23])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([-23, 10])))
blk.reset()
self.assertTrue(np.array_equal(sys.state, np.array([0, 0])))
# SIMO
A = np.array([[0, 1], [1, -2]])
B = np.array([[0], [1]])
C = np.array([[1, -2], [0, 1]])
D = np.array([[1], [0]])
sys = ss.DTSS(A, B, C, D)
self.assertTrue(np.array_equal(sys.A, A))
self.assertTrue(np.array_equal(sys.B, B))
self.assertTrue(np.array_equal(sys.C, C))
self.assertTrue(np.array_equal(sys.D, D))
self.assertTrue(np.array_equal(sys.state, np.zeros(2)))
blk = system.System(model=sys)
assert blk.model is sys
with self.assertRaises(block.BlockException):
blk = system.System(modelo=sys)
blk.write(1)
yk, = blk.read()
state = np.array([0, 1])
# print(sys.state)
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([1, 0])))
blk.write([-1])
yk, = blk.read()
state = np.array([1, -3])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([-3, 1])))
blk.write(3)
yk, = blk.read()
state = np.array([-3, 10])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([10, -3])))
blk.write([0])
yk, = blk.read()
state = np.array([10, -23])
self.assertTrue(np.array_equal(sys.state, state))
self.assertTrue(np.array_equal(yk, np.array([-23, 10])))
blk.reset()
self.assertTrue(np.array_equal(sys.state, np.array([0, 0])))
# System
A = np.array([[0, 1], [1, -2]])
B = np.array([[1, -1], [1, 0]])
C = np.array([[1, -2], [0, 1]])
D = np.array([[1, 0], [-1, 1]])
sys = ss.DTSS(A, B, C, D)
self.assertTrue(np.array_equal(sys.A, A))
self.assertTrue(np.array_equal(sys.B, B))
self.assertTrue(np.array_equal(sys.C, C))
self.assertTrue(np.array_equal(sys.D, D))
self.assertTrue(np.array_equal(sys.state, np.zeros(2)))
blk = system.System(model=sys)
assert blk.model is sys
# u1 = [1, 1] => y1 = C x0 + D u1 = [1, 0]
blk.write([1, 1])
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([0, 1])))
self.assertTrue(np.array_equal(y2, np.array([1, 0])))
# u2 = [-1, 0] => y2 = C x1 + D u2 = [-3, 2]
blk.write([-1, 0])
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([0, -3])))
self.assertTrue(np.array_equal(y2, np.array([-3, 2])))
# u3 = [3, -1] => y3 = C x2 + D u3 = [9, -7]
blk.write([3, -1])
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([1, 9])))
self.assertTrue(np.array_equal(y2, np.array([9, -7])))
# u4 = [2, 1] => y4 = C x3 + D u4 = [-15, 8]
blk.write([2, 1])
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([10, -15])))
self.assertTrue(np.array_equal(y2, np.array([-15, 8])))
# Test to work with multiple signals
# reset state
blk.reset()
# u1 = [1, 1] => y1 = C x0 + D u1 = [1, 0]
blk.write(1, 1)
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([0, 1])))
self.assertTrue(np.array_equal(y2, np.array([1, 0])))
# u2 = [-1, 0] => y2 = C x1 + D u2 = [-3, 2]
blk.write(-1, 0)
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([0, -3])))
self.assertTrue(np.array_equal(y2, np.array([-3, 2])))
# u3 = [3, -1] => y3 = C x2 + D u3 = [9, -7]
blk.write(3, -1)
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([1, 9])))
self.assertTrue(np.array_equal(y2, np.array([9, -7])))
# u4 = [2, 1] => y4 = C x3 + D u4 = [-15, 8]
blk.write(2, 1)
y2, = blk.read()
self.assertTrue(np.array_equal(sys.state, np.array([10, -15])))
self.assertTrue(np.array_equal(y2, np.array([-15, 8])))
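The MIMO sequence asserted above follows mechanically from iterating the state-space recursion y[k] = C x[k] + D u[k], x[k+1] = A x[k] + B u[k] from x[0] = 0. A minimal standalone check (plain numpy, no pyctrl) for the same matrices and inputs:

```python
import numpy as np

# Same MIMO system and input sequence as in test_System above.
A = np.array([[0, 1], [1, -2]])
B = np.array([[1, -1], [1, 0]])
C = np.array([[1, -2], [0, 1]])
D = np.array([[1, 0], [-1, 1]])

x = np.zeros(2)
outputs = []
for u in ([1, 1], [-1, 0], [3, -1], [2, 1]):
    u = np.array(u)
    outputs.append((C @ x + D @ u).tolist())  # output uses the pre-update state
    x = A @ x + B @ u                         # then advance the state

print(outputs)  # [[1.0, 0.0], [-3.0, 2.0], [9.0, -7.0], [-15.0, 8.0]]
```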
def test_Gain(self):
# Gain
blk = system.Gain()
self.assertEqual(blk.gain , 1)
blk = system.Gain(gain=-1)
self.assertEqual(blk.gain , -1)
blk = system.Gain(gain=3)
self.assertEqual(blk.gain , 3)
blk = system.Gain(gain=-1.2)
self.assertEqual(blk.gain , -1.2)
with self.assertRaises(block.BlockException):
blk = system.Gain(gain='asd')
blk = system.Gain(gain=-5.2)
blk.write(np.array([2]))
(yk,) = blk.read()
self.assertEqual(yk[0], -10.4)
blk = system.Gain(gain=3)
blk.write(2, 4)
yk = blk.read()
self.assertEqual(yk, (6, 12))
blk.write(np.array([2, 4]))
(yk,) = blk.read()
assert np.all(yk == np.array([6, 12]))
blk.write(2, np.array([4, 2]))
yk = blk.read()
assert yk[0] == 6 and np.all(yk[1] == np.array([12, 6]))
blk.set(gain=8)
self.assertEqual(blk.gain , 8)
blk = system.Gain(gain=(-1, 2), demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (-1, 2))
blk = system.Gain(gain=np.array([-1, 2]), demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (-1, 2))
with self.assertRaises(block.BlockException):
blk = system.Gain(gain=np.array([[-1, 2], [3, 1]]),
mux=True, demux=True)
def test_Affine(self):
# Affine
blk = system.Affine()
self.assertEqual(blk.gain , 1)
self.assertEqual(blk.offset , 0)
blk = system.Affine(gain=-1, offset=2)
self.assertEqual(blk.gain , -1)
self.assertEqual(blk.offset , 2)
blk = system.Affine(offset=3)
self.assertEqual(blk.gain , 1)
self.assertEqual(blk.offset , 3)
blk = system.Affine(gain=-1.2, offset=2.2)
self.assertEqual(blk.gain , -1.2)
self.assertEqual(blk.offset , 2.2)
with self.assertRaises(block.BlockException):
blk = system.Affine(gain='asd')
with self.assertRaises(block.BlockException):
blk = system.Affine(offset='asd')
blk = system.Affine(gain=-5.2)
blk.write(np.array([2]))
(yk,) = blk.read()
self.assertEqual(yk[0], -10.4)
blk = system.Affine(gain=3)
blk.write(2, 4)
yk = blk.read()
self.assertEqual(yk , (6, 12))
blk.write(np.array([2, 4]))
(yk,) = blk.read()
self.assertTrue(np.all(yk == np.array([6, 12])))
blk.write(2, np.array([4, 2]))
yk = blk.read()
self.assertTrue(yk[0] == 6 and np.all(yk[1] == np.array([12, 6])))
blk.set(gain=8)
self.assertEqual(blk.gain , 8)
blk = system.Affine(gain=(-1, 2), offset=1, demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (0, 3))
blk = system.Affine(gain=np.array([-1, 2]), offset=1, demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (0, 3))
blk = system.Affine(gain=np.array([-1, 2]), offset=(3, 4), demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (2, 6))
blk = system.Affine(gain=np.array([-1, 2]), offset=np.array([3, 4]), demux=True)
blk.write(1)
yk = blk.read()
self.assertEqual(yk , (2, 6))
with self.assertRaises(block.BlockException):
blk = system.Affine(gain=np.array([[-1, 2], [3, 1]]),
mux=True, demux=True)
with self.assertRaises(block.BlockException):
blk = system.Affine(offset=np.array([[-1, 2], [3, 1]]),
mux=True, demux=True)
def test_ShortCircuit(self):
# Short-Circuit
blk = block.ShortCircuit()
blk.write(2)
(yk,) = blk.read()
self.assertEqual(yk , 2)
blk.write(2, 4)
yk = blk.read()
self.assertEqual(yk , (2, 4))
blk.write(np.array([2, 4]))
(yk,) = blk.read()
self.assertTrue(np.all(yk == np.array([2, 4])))
blk.write(np.array([2, 4]), -1)
yk = blk.read()
self.assertTrue(np.all(yk[0] == np.array([2, 4])) and yk[1] == -1)
def test_Differentiator(self):
# Differentiator
signals = {'clock': 1, 'encoder1': 5, 'test': 0}
labels = ['clock', 'test']
diff = system.Differentiator()
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([0]))
signals = {'clock': 2, 'encoder1': 5, 'test': 3}
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([3]))
signals = {'clock': 4, 'encoder1': 6, 'test': 0}
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([-1.5]))
signals = {'clock': 1, 'encoder1': 5, 'test': 0}
labels = ['clock', 'test', 'encoder1']
diff = system.Differentiator()
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([0, 0]))
signals = {'clock': 2, 'encoder1': 5, 'test': 3}
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([3, 0]))
signals = {'clock': 4, 'encoder1': 6, 'test': 0}
diff.write(*[signals[label] for label in labels])
result = diff.read()
self.assertEqual(result , ([-1.5, .5]))
signals = {'clock': 1, 'encoder1': 5, 'test': np.array([0, 1])}
labels = ['clock', 'test', 'encoder1']
diff = system.Differentiator()
diff.write(*[signals[label] for label in labels])
result = diff.read()
assert result[1] == 0 and np.all(result[0] == np.array([0, 0]))
signals = {'clock': 2, 'encoder1': 5, 'test': np.array([3, 2])}
diff.write(*[signals[label] for label in labels])
result = diff.read()
assert result[1] == 0 and np.all(result[0] == np.array([3, 1]))
signals = {'clock': 4, 'encoder1': 6, 'test': np.array([0, -1])}
diff.write(*[signals[label] for label in labels])
result = diff.read()
assert result[1] == .5 and np.all(result[0] == np.array([-1.5, -1.5]))
with self.assertRaises(block.BlockException):
diff.set(time=8)
with self.assertRaises(block.BlockException):
diff.set(last=8)
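The expected values in this test come from a plain first difference over irregular time stamps. A self-contained sketch of that rule (assuming, as the test implies, that the first sample yields 0):

```python
def first_difference(samples):
    """Finite difference d[k] = (x[k] - x[k-1]) / (t[k] - t[k-1]); 0 for k = 0."""
    out, last = [], None
    for t, x in samples:
        out.append(0.0 if last is None else (x - last[1]) / (t - last[0]))
        last = (t, x)
    return out

# Same (clock, test) pairs as written to the Differentiator block above.
print(first_difference([(1, 0), (2, 3), (4, 0)]))  # [0.0, 3.0, -1.5]
```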
def test_Feedback(self):
# Feedback
blk1 = system.Gain(gain=2)
blk = system.Feedback(block=blk1)
assert blk.block is blk1
assert blk.gamma == 1.0
blk = system.Feedback(block=blk1)
assert blk.block is blk1
assert blk.gamma == 1.0
blk = system.Feedback(block=blk1, gamma=4)
assert blk.block is blk1
assert blk.gamma == 4
blk.write(2, 3)
(yk,) = blk.read()
self.assertEqual(yk , 2 * (3 * 4 - 2))
gn = system.Gain(gain=150)
blk.set(block=gn)
self.assertTrue(blk.block is gn)
blk.set(gamma=10)
self.assertEqual(blk.gamma , 10)
# Feedback with transfer-function
#
# G(z) = -.5/(z - .5)
# TODO: CHECK DIFFERENT SIZES NUM/DEN
blk1 = system.System(model=tf.zDTTF([-.5, 0], [-.5, 1]))
blktf = system.Feedback(block=blk1)
self.assertTrue(blktf.block is blk1)
# A = .5, B = 1, C = -.5, D = 0
#
# u = C x + D (- y + r)
# x = A x + B (- y + r)
A = np.array([[.5]])
B = np.array([[-1, 1]])
C = np.array([[-.5]])
D = np.array([[0, 0]])
blkss = system.System(model=ss.DTSS(A, B, C, D))
blktf.write(1, 3)
yk1 = list(blktf.read())
blkss.write([1, 3])
yk2, = blkss.read()
self.assertTrue(np.all(np.array(yk1) == yk2))
blktf.write(-1, 3)
yk1 = list(blktf.read())
blkss.write([-1, 3])
yk2, = blkss.read()
self.assertTrue(np.all(np.array(yk1) == yk2))
blktf.write(-1, 3)
yk1 = list(blktf.read())
blkss.write([-1, 3])
yk2, = blkss.read()
self.assertTrue(np.all(np.array(yk1) == yk2))
# Reset feedback
self.assertTrue(np.array_equal(blktf.block.model.state, np.array((6.5,))))
blktf.reset()
self.assertTrue(np.array_equal(blktf.block.model.state, np.array((0,))))
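The transfer-function/state-space comparison above can be replayed without pyctrl. A minimal sketch, assuming gamma = 1 and the realization named in the comments (A = .5, B = 1, C = -.5, D = 0, driven by the error e = r - y):

```python
import numpy as np

def tf_feedback_step(x, y, r):
    """G(z) = -0.5/(z - 0.5) in feedback: u = -0.5 x, then x' = 0.5 x + (r - y)."""
    return 0.5 * x + (r - y), -0.5 * x

def ss_feedback_step(x, y, r):
    """Equivalent open-loop state-space form with stacked input [y, r]."""
    A, B = np.array([[0.5]]), np.array([[-1.0, 1.0]])
    C, D = np.array([[-0.5]]), np.array([[0.0, 0.0]])
    u = np.array([y, r])
    return A @ x + B @ u, float((C @ x + D @ u)[0])

x_tf, x_ss = 0.0, np.zeros(1)
for y, r in [(1, 3), (-1, 3), (-1, 3)]:  # same writes as the test above
    x_tf, u_tf = tf_feedback_step(x_tf, y, r)
    x_ss, u_ss = ss_feedback_step(x_ss, y, r)
    assert abs(u_tf - u_ss) < 1e-12

print(x_tf, float(x_ss[0]))  # both end at state 6.5, matching the test
```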
def test_Sum(self):
# Sum
blk = system.Sum()
blk.write(1)
(yk,) = blk.read()
self.assertEqual(yk, 1)
# TODO: is this case really important?
# blk.write()
# (yk,) = blk.read()
# self.assertEqual(yk , 0)
blk.write(1, 2)
(yk,) = blk.read()
self.assertEqual(yk , 3)
blk.write(1, .4)
(yk,) = blk.read()
self.assertEqual(yk , 1.4)
blk.write([1, .4])
(yk,) = blk.read()
self.assertTrue(np.array_equal(yk, np.array([1, .4])))
blk.write([1, .4], [2, 3])
(yk,) = blk.read()
self.assertTrue(np.array_equal(yk, np.array([3, 3.4])))
# TODO: micropython average
def _test_Average(self):
# Average
blk = system.Average()
blk.write(1)
(yk,) = blk.read()
self.assertEqual(yk , 1)
# TODO: is this case really important?
# blk.write()
# (yk,) = blk.read()
# self.assertEqual(yk , 0)
blk.write(1, 2)
(yk,) = blk.read()
self.assertEqual(yk, 1.5)
blk.write(1, .4)
(yk,) = blk.read()
self.assertEqual(yk, (1 + .4) / 2)
blk.write([1, .4])
(yk,) = blk.read()
self.assertTrue(np.all(yk == np.array([1, .4])))
blk.write([1, .4], [2, 3])
(yk,) = blk.read()
assert np.all(yk == np.array([1.5, 3.4 / 2]))
# Weighted
blk = system.Average(weights=np.array([1]))
blk.write(1)
(yk,) = blk.read()
self.assertEqual(yk , 1)
blk.write()
(yk,) = blk.read()
self.assertEqual(yk , 0)
blk.set(weights=np.array([2, 1]))
blk.write(1, 2)
(yk,) = blk.read()
self.assertEqual(yk , (2 + 2) / 3)
blk.write(1, .4)
(yk,) = blk.read()
self.assertEqual(yk , (2 + .4) / 3)
blk.set(weights=None)
blk.write([1, .4])
(yk,) = blk.read()
assert np.all(yk == [1, .4])
blk.set(weights=np.array([1, 2]))
blk.write([1, .4], [2, 3])
(yk,) = blk.read()
assert np.all(yk == [(1 + 2 * 2) / 3, (.4 + 2 * 3) / 3])
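The weighted cases in this (currently skipped) test reduce to sum(w_i * x_i) / sum(w_i); a pure-Python sketch of that rule:

```python
def weighted_average(values, weights=None):
    """Plain average of values; with weights, sum(w*v) / sum(w)."""
    if weights is None:
        weights = [1.0] * len(values)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(weighted_average([1, 2], [2, 1]))  # (2*1 + 1*2) / 3, as in the test comment
print(weighted_average([1, 2]))          # unweighted mean
```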
def test_Subtract(self):
# Subtract
blk = system.Subtract()
blk.write(1, 2)
(yk,) = blk.read()
self.assertEqual(yk , 1)
blk.write(2, 1)
(yk,) = blk.read()
self.assertEqual(yk , -1)
blk.write(0, 0)
(yk,) = blk.read()
self.assertEqual(yk , 0)
blk.write(2, 1, 1)
(yk,) = blk.read()
self.assertEqual(yk , 0)
blk.write(2)
(yk,) = blk.read()
self.assertEqual(yk , -2)
# TODO: is this case really important?
# blk.write()
# (yk,) = blk.read()
# self.assertEqual(yk , 0)
def test_TimeVaryingSystem(self):
with self.assertRaises(block.BlockException):
blk = system.TimeVaryingSystem(modelo=1)
with self.assertRaises(block.BlockException):
blk = system.TimeVaryingSystem(model=1)
if test_ode:
a = np.array([[-1, 1], [0, -2]])
b = np.array([[1], [1]])
def f(t, x, u, a, b):
return a.dot(x) + b.dot(u)
tk = 0
xk = np.array([1, -1])
sys = ode.ODE(shape=(1, 2, 2), f=f, t0=tk, x0=xk, pars=(a, b))
with self.assertRaises(block.BlockException):
blk = system.TimeVaryingSystem(model=sys, mux=False)
uk = [0]
tk += 1
yk1 = sys.update(tk, uk)
# print(yk1)
uk = [0]
tk += 10
yk2 = sys.update(tk, uk)
# print(yk2)
uk = [1]
tk += 3
yk3 = sys.update(tk, uk)
# print(yk3)
# Repeat with TimeVaryingSystem block
tk = 0
blk = system.TimeVaryingSystem(model=ode.ODE(shape=(1, 2, 2), f=f, t0=tk, x0=xk, pars=(a, b)))
uk = [0]
tk += 1
blk.write(tk, uk)
yk = blk.read()
assert np.all(np.abs(yk - yk1) < 1e-4)
uk = 0
tk += 10
blk.write(tk, uk)
yk = blk.read()
assert np.all(np.abs(yk - yk2) < 1e-4)
uk = [1]
tk += 3
blk.write(tk, uk)
yk = blk.read()
assert np.all(np.abs(yk - yk3) < 1e-4)
if __name__ == "__main__":
unittest.main()
| 28.836986 | 106 | 0.505202 | 2,864 | 21,051 | 3.684707 | 0.054818 | 0.107458 | 0.101582 | 0.119397 | 0.841277 | 0.811712 | 0.795508 | 0.767175 | 0.742443 | 0.666825 | 0 | 0.050873 | 0.31742 | 21,051 | 729 | 107 | 28.876543 | 0.683555 | 0.052112 | 0 | 0.649087 | 0 | 0 | 0.01221 | 0 | 0 | 0 | 0 | 0.001372 | 0.332657 | 1 | 0.022312 | false | 0 | 0.016227 | 0.002028 | 0.042596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18dfec892d32cbff1b50c90d16def63e0c15227a | 96 | py | Python | venv/lib/python3.8/site-packages/pipreqs/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pipreqs/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pipreqs/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/85/f5/09/8e88c40baa1ab3c07d856988b448f2d921cca24341684ff8eae493a54e | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1513866cf00432f3b722088ab2d446338166ac2 | 263 | py | Python | notification/utils/flask_app_utils.py | EhsanSaZ/send_message_api_bale_bot | 803e9b91d1eea477d3060b5dcc4e0099641876c9 | [
"MIT"
] | 1 | 2018-11-12T17:00:35.000Z | 2018-11-12T17:00:35.000Z | notification/utils/flask_app_utils.py | EhsanSaZ/send_message_api_bale_bot | 803e9b91d1eea477d3060b5dcc4e0099641876c9 | [
"MIT"
] | null | null | null | notification/utils/flask_app_utils.py | EhsanSaZ/send_message_api_bale_bot | 803e9b91d1eea477d3060b5dcc4e0099641876c9 | [
"MIT"
] | null | null | null | from notification.config.notification_config import NotificationConfig
ALLOWED_EXTENSIONS = NotificationConfig.allowed_extensions
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
| 26.3 | 70 | 0.768061 | 28 | 263 | 7.035714 | 0.571429 | 0.258883 | 0.35533 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008889 | 0.144487 | 263 | 9 | 71 | 29.222222 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e18528af053a732451d1f606501626364f9a3c16 | 5,633 | py | Python | spydrnet/composers/verilog/tests/test_composer.py | ganeshgore/spydrnet | 22672b8fc7d63461a71077bd20f29df6d38e96f4 | [
"BSD-3-Clause"
] | 34 | 2020-03-12T15:40:49.000Z | 2022-02-28T07:13:47.000Z | spydrnet/composers/verilog/tests/test_composer.py | ganeshgore/spydrnet | 22672b8fc7d63461a71077bd20f29df6d38e96f4 | [
"BSD-3-Clause"
] | 104 | 2020-01-06T20:32:19.000Z | 2022-01-02T00:20:14.000Z | spydrnet/composers/verilog/tests/test_composer.py | ganeshgore/spydrnet | 22672b8fc7d63461a71077bd20f29df6d38e96f4 | [
"BSD-3-Clause"
] | 10 | 2020-09-02T20:24:00.000Z | 2022-02-24T16:10:07.000Z | import unittest
import spydrnet as sdn
from spydrnet import composers
from spydrnet import parsers
import os
import tempfile
import glob
class TestVerilogComposer(unittest.TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.dir_of_verilog_netlists = os.path.join(sdn.base_dir, "support_files", "verilog_netlists")
cls.verilog_files = sorted(glob.glob(os.path.join(cls.dir_of_verilog_netlists, "*.v.zip")), key = os.path.getsize)
@unittest.skip("Test takes a long time right now.")
def test_large_verilog_compose(self):
i = 0
errors = 0
for ii, filename in enumerate(self.verilog_files):
with self.subTest(i=ii):
if os.path.getsize(filename) <= 1024 * 10:
continue
if filename.endswith(".zip"):
with tempfile.TemporaryDirectory() as tempdirectory:
# try:
print("*********************"+filename+"*********************")
# vp = sdn.parsers.verilog.parser.VerilogParser.from_filename(os.path.join(directory, filename))
# netlist = vp.parse()
netlist = parsers.parse(filename)
composers.compose(netlist, os.path.join(tempdirectory, os.path.basename(filename) + "-spydrnet.v"))
#comp.run(netlist,"temp2/"+filename[:len(filename)-6] + "-spydrnet.v")
# comp.run(netlist,os.path.join(tempdirectory, filename[:len(filename)-6] + "-spydrnet.v"))
i+=1
print("pass")
# except Exception as identifier:
# print("FAIL")
# print(identifier)
# errors += 1
else:
continue
print("processed",i,"errors", errors)
assert errors == 0, "there were errors while parsing and composing files. Please see the output."
def test_small_verilog_compose(self):
i = 0
errors = 0
for ii, filename in enumerate(self.verilog_files):
with self.subTest(i=ii):
if os.path.getsize(filename) > 1024 * 10:
continue
if filename.endswith(".zip"):
with tempfile.TemporaryDirectory() as tempdirectory:
# try:
print("*********************"+filename+"*********************")
# vp = sdn.parsers.verilog.parser.VerilogParser.from_filename(os.path.join(directory, filename))
# netlist = vp.parse()
netlist = parsers.parse(filename)
composers.compose(netlist, os.path.join(tempdirectory, os.path.basename(filename) + "-spydrnet.v"))
#comp.run(netlist,"temp2/"+filename[:len(filename)-6] + "-spydrnet.v")
# comp.run(netlist,os.path.join(tempdirectory, filename[:len(filename)-6] + "-spydrnet.v"))
i+=1
print("pass")
# except Exception as identifier:
# print("FAIL")
# print(identifier)
# errors += 1
else:
continue
print("processed",i,"errors", errors)
assert errors == 0, "there were errors while parsing and composing files. Please see the output."
def test_definition_list_option(self):
for filename in glob.glob(os.path.join(
self.dir_of_verilog_netlists, "*4bitadder.v.zip")):
with tempfile.TemporaryDirectory() as tempdirectory:
netlist = parsers.parse(filename)
out_file = os.path.join(
tempdirectory, os.path.basename(filename) + "-spydrnet.v")
composers.compose(netlist, out_file, definition_list=['adder'])
with open(out_file, "r") as fp:
lines = fp.readlines()
print(len(lines))
m = list(filter(lambda x: x.startswith('module'), lines))
self.assertGreater(len(m), 0, "Adder module not written")
self.assertLess(len(m), 2, "Failed to write only definition_list")
return
raise AssertionError("Adder design not found " +
"definition_list options not tested,")
def test_write_blackbox_option(self):
for filename in glob.glob(os.path.join(
self.dir_of_verilog_netlists, "*4bitadder.v.zip")):
with tempfile.TemporaryDirectory() as tempdirectory:
netlist = parsers.parse(filename)
out_file = os.path.join(
tempdirectory, os.path.basename(filename) + "-spydrnet.v")
composers.compose(netlist, out_file, write_blackbox=False)
with open(out_file, "r") as fp:
lines = fp.readlines()
print(len(lines))
m = list(filter(lambda x: x.startswith('module'), lines))
self.assertGreater(len(m), 0, "Adder module not written")
self.assertLess(len(m), 2, "Failed to write only definition_list" +
"%s" % m)
return
raise AssertionError("definition_list options not test," +
"Adder design not found")
| 48.982609 | 128 | 0.516599 | 555 | 5,633 | 5.163964 | 0.237838 | 0.039777 | 0.04187 | 0.048151 | 0.808095 | 0.785764 | 0.785764 | 0.785764 | 0.785764 | 0.785764 | 0 | 0.009508 | 0.36517 | 5,633 | 114 | 129 | 49.412281 | 0.791946 | 0.130481 | 0 | 0.666667 | 0 | 0 | 0.139168 | 0.017217 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.059524 | false | 0.02381 | 0.083333 | 0 | 0.178571 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e19a00c047df4a328cb56f005906eedb413c0174 | 36,667 | py | Python | manila/tests/share/drivers/qnap/test_api.py | inspur-storage/manila | 0f8cc58e9454643b492b18c6284f6b0bc4aa311b | [
"Apache-2.0"
] | 3 | 2016-06-06T13:05:00.000Z | 2021-05-05T04:29:24.000Z | manila/tests/share/drivers/qnap/test_api.py | ljzjohnson/manila | 7f990ffa16117769f7616779dd94f81c8d676511 | [
"Apache-2.0"
] | null | null | null | manila/tests/share/drivers/qnap/test_api.py | ljzjohnson/manila | 7f990ffa16117769f7616779dd94f81c8d676511 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2016 QNAP Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import ddt
import mock
import six
from six.moves import urllib
import time
from manila import exception
from manila.share.drivers.qnap import qnap
from manila import test
from manila.tests import fake_share
from manila.tests.share.drivers.qnap import fakes
def create_configuration(management_url, qnap_share_ip, qnap_nas_login,
qnap_nas_password, qnap_poolname):
"""Create configuration."""
configuration = mock.Mock()
configuration.qnap_management_url = management_url
configuration.qnap_share_ip = qnap_share_ip
configuration.qnap_nas_login = qnap_nas_login
configuration.qnap_nas_password = qnap_nas_password
configuration.qnap_poolname = qnap_poolname
configuration.safe_get.return_value = False
return configuration
class QnapShareDriverBaseTestCase(test.TestCase):
"""Base Class for the QnapShareDriver Tests."""
def setUp(self):
"""Setup the Qnap Driver Base TestCase."""
super(QnapShareDriverBaseTestCase, self).setUp()
self.driver = None
self.share_api = None
def _do_setup(self, management_url, share_ip, nas_login,
nas_password, poolname, **kwargs):
"""Config do setup configurations."""
self.driver = qnap.QnapShareDriver(
configuration=create_configuration(
management_url,
share_ip,
nas_login,
nas_password,
poolname),
private_storage=kwargs.get('private_storage'))
self.driver.do_setup('context')
@ddt.ddt
class QnapAPITestCase(QnapShareDriverBaseTestCase):
"""Tests QNAP api functions."""
login_url = ('/cgi-bin/authLogin.cgi?')
get_basic_info_url = ('/cgi-bin/authLogin.cgi')
fake_password = 'qnapadmin'
def setUp(self):
"""Setup the Qnap API TestCase."""
super(QnapAPITestCase, self).setUp()
fake_parms = {}
fake_parms['user'] = 'admin'
fake_parms['pwd'] = base64.b64encode(
self.fake_password.encode("utf-8"))
fake_parms['serviceKey'] = 1
sanitized_params = self._sanitize_params(fake_parms)
self.login_url = ('/cgi-bin/authLogin.cgi?%s' % sanitized_params)
self.mock_object(six.moves.http_client, 'HTTPConnection')
self.share = fake_share.fake_share(
share_proto='NFS',
id='shareId',
display_name='fakeDisplayName',
export_locations=[{'path': '1.2.3.4:/share/fakeShareName'}],
host='QnapShareDriver',
size=10)
def _sanitize_params(self, params, doseq=False):
sanitized_params = {}
for key in params:
value = params[key]
if value is not None:
if isinstance(value, list):
sanitized_params[key] = [six.text_type(v) for v in value]
else:
sanitized_params[key] = six.text_type(value)
sanitized_params = urllib.parse.urlencode(sanitized_params, doseq)
return sanitized_params
@ddt.data('fake_share_name', 'fakeLabel')
def test_create_share_api(self, fake_name):
"""Test create share api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeCreateShareResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.create_share(
self.share,
'Storage Pool 1',
fake_name,
'NFS',
qnap_deduplication=False,
qnap_compression=True,
qnap_thin_provision=True,
qnap_ssd_cache=False)
fake_params = {
'wiz_func': 'share_create',
'action': 'add_share',
'vol_name': fake_name,
'vol_size': '10' + 'GB',
'threshold': '80',
'dedup': 'off',
'compression': '1',
'thin_pro': '1',
'cache': '0',
'cifs_enable': '0',
'nfs_enable': '1',
'afp_enable': '0',
'ftp_enable': '0',
'encryption': '0',
'hidden': '0',
'oplocks': '1',
'sync': 'always',
'userrw0': 'admin',
'userrd_len': '0',
'userrw_len': '1',
'userno_len': '0',
'access_r': 'setup_users',
'path_type': 'auto',
'recycle_bin': '1',
'recycle_bin_administrators_only': '0',
'pool_name': 'Storage Pool 1',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = ('/cgi-bin/wizReq.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_api_delete_share(self):
"""Test delete share api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeDeleteShareResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.delete_share(
'fakeId')
fake_params = {
'func': 'volume_mgmt',
'vol_remove': '1',
'volumeID': 'fakeId',
'stop_service': 'no',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_get_specific_poolinfo(self):
"""Test get specific poolinfo api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeSpecificPoolInfoResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_specific_poolinfo(
'fakePoolId')
fake_params = {
'store': 'poolInfo',
'func': 'extra_get',
'poolID': 'fakePoolId',
'Pool_Info': '1',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
@ddt.data({'pool_id': "Storage Pool 1"},
{'pool_id': "Storage Pool 1", 'vol_no': 'fakeNo'},
{'pool_id': "Storage Pool 1", 'vol_label': 'fakeShareName'})
def test_get_share_info(self, dict_parm):
"""Test get share info api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeShareInfoResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_share_info(**dict_parm)
fake_params = {
'store': 'poolVolumeList',
'poolID': 'Storage Pool 1',
'func': 'extra_get',
'Pool_Vol_Info': '1',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_get_specific_volinfo(self):
"""Test get specific volume info api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeSpecificVolInfoResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_specific_volinfo(
'fakeNo')
fake_params = {
'store': 'volumeInfo',
'volumeID': 'fakeNo',
'func': 'extra_get',
'Volume_Info': '1',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_get_snapshot_info_es(self):
"""Test get snapsho info api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeSnapshotInfoResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_snapshot_info(
volID='volId', snapshot_name='fakeSnapshotName')
fake_params = {
'func': 'extra_get',
'volumeID': 'volId',
'snapshot_list': '1',
'snap_start': '0',
'snap_count': '100',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_create_snapshot_api(self):
"""Test create snapshot api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeCreateSnapshotResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.create_snapshot_api(
'fakeVolumeId',
'fakeSnapshotName')
fake_params = {
'func': 'create_snapshot',
'volumeID': 'fakeVolumeId',
'snapshot_name': 'fakeSnapshotName',
'expire_min': '0',
'vital': '1',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
@ddt.data(fakes.FakeDeleteSnapshotResponse(),
fakes.FakeDeleteSnapshotResponseSnapshotNotExist(),
fakes.FakeDeleteSnapshotResponseShareNotExist())
def test_delete_snapshot_api(self, fakeDeleteSnapshotResponse):
"""Test delete snapshot api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakeDeleteSnapshotResponse]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.delete_snapshot_api(
'fakeSnapshotId')
fake_params = {
'func': 'del_snapshots',
'snapshotID': 'fakeSnapshotId',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_clone_snapshot_api(self):
"""Test clone snapshot api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeDeleteSnapshotResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.clone_snapshot(
'fakeSnapshotId',
'fakeNewShareName')
fake_params = {
'func': 'clone_qsnapshot',
'by_vol': '1',
'snapshotID': 'fakeSnapshotId',
'new_name': 'fakeNewShareName',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_edit_share_api(self):
"""Test edit share api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseTs_4_3_0(),
fakes.FakeLoginResponse(),
fakes.FakeCreateSnapshotResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
expect_share_dict = {
"sharename": 'fakeVolId',
"old_sharename": 'fakeVolId',
"new_size": 100,
"deduplication": False,
"compression": True,
"thin_provision": True,
"ssd_cache": False,
"share_proto": "NFS"
}
self.driver.api_executor.edit_share(
expect_share_dict)
fake_params = {
'wiz_func': 'share_property',
'action': 'share_property',
'sharename': 'fakeVolId',
'old_sharename': 'fakeVolId',
'vol_size': '100GB',
'dedup': 'off',
'compression': '1',
'thin_pro': '1',
'cache': '0',
'cifs_enable': '0',
'nfs_enable': '1',
'afp_enable': '0',
'ftp_enable': '0',
'hidden': '0',
'oplocks': '1',
'sync': 'always',
'recycle_bin': '1',
'recycle_bin_administrators_only': '0',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
'/cgi-bin/priv/privWizard.cgi?%s' % sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
@ddt.data(fakes.FakeGetHostListResponse(),
fakes.FakeGetNoHostListResponse())
def test_get_host_list(self, fakeGetHostListResponse):
"""Test get host list api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakeGetHostListResponse]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_host_list()
fake_params = {
'module': 'hosts',
'func': 'get_hostlist',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_add_host(self):
"""Test add host api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeGetHostListResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.add_host(
'fakeHostName', 'fakeIpV4')
fake_params = {
'module': 'hosts',
'func': 'apply_addhost',
'name': 'fakeHostName',
'ipaddr_v4': 'fakeIpV4',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_edit_host(self):
"""Test edit host api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeGetHostListResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.edit_host(
'fakeHostName', ['fakeIpV4'])
fake_params = {
'module': 'hosts',
'func': 'apply_sethost',
'name': 'fakeHostName',
'ipaddr_v4': ['fakeIpV4'],
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params, doseq=True)
fake_url = (
('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_delete_host(self):
"""Test delete host api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakes.FakeGetHostListResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.delete_host('fakeHostName')
fake_params = {
'module': 'hosts',
'func': 'apply_delhost',
'host_name': 'fakeHostName',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
@ddt.data(fakes.FakeGetHostListResponse())
def test_set_nfs_access(self, fakeGetHostListResponse):
"""Test get host list api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fakeGetHostListResponse]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.set_nfs_access(
'fakeShareName', 'fakeAccess', 'fakeHostName')
fake_params = {
'wiz_func': 'share_nfs_control',
'action': 'share_nfs_control',
'sharename': 'fakeShareName',
'access': 'fakeAccess',
'host_name': 'fakeHostName',
'sid': 'fakeSid',
}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
('/cgi-bin/priv/privWizard.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
def test_get_snapshot_info_ts_api(self):
"""Test get snapshot info api."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseTs_4_3_0(),
fakes.FakeLoginResponse(),
fakes.FakeSnapshotInfoResponse()]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.driver.api_executor.get_snapshot_info(
snapshot_name='fakeSnapshotName',
lun_index='fakeLunIndex')
fake_params = {
'func': 'extra_get',
'LUNIndex': 'fakeLunIndex',
'smb_snapshot_list': '1',
'smb_snapshot': '1',
'snapshot_list': '1',
'sid': 'fakeSid'}
sanitized_params = self._sanitize_params(fake_params)
fake_url = (
('/cgi-bin/disk/snapshot.cgi?%s') %
sanitized_params)
expected_call_list = [
mock.call('GET', self.login_url),
mock.call('GET', self.get_basic_info_url),
mock.call('GET', self.login_url),
mock.call('GET', fake_url)]
self.assertEqual(
expected_call_list,
mock_http_connection.return_value.request.call_args_list)
@ddt.data(fakes.FakeAuthPassFailResponse(),
fakes.FakeEsResCodeNegativeResponse())
def test_api_create_share_with_fail_response(self, fake_fail_response):
"""Test create share api with fail response."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3(),
fakes.FakeLoginResponse(),
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response]
self.mock_object(time, 'sleep')
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.assertRaises(
exception.ShareBackendException,
self.driver.api_executor.create_share,
share=self.share,
pool_name='Storage Pool 1',
create_share_name='fake_share_name',
share_proto='NFS',
qnap_deduplication=False,
qnap_compression=True,
qnap_thin_provision=True,
qnap_ssd_cache=False)
@ddt.unpack
@ddt.data(['self.driver.api_executor.get_share_info',
{'pool_id': 'fakeId'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_specific_volinfo',
{'vol_id': 'fakeId'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.create_snapshot_api',
{'volumeID': 'fakeVolumeId',
'snapshot_name': 'fakeSnapshotName'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.create_snapshot_api',
{'volumeID': 'fakeVolumeId',
'snapshot_name': 'fakeSnapshotName'},
fakes.FakeEsResCodeNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_snapshot_info',
{'volID': 'volId'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_snapshot_info',
{'volID': 'volId'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_specific_poolinfo',
{'pool_id': 'Storage Pool 1'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_specific_poolinfo',
{'pool_id': 'Storage Pool 1'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_share',
{'vol_id': 'fakeId'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_share',
{'vol_id': 'fakeId'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_snapshot_api',
{'snapshot_id': 'fakeSnapshotId'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_snapshot_api',
{'snapshot_id': 'fakeSnapshotId'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.clone_snapshot',
{'snapshot_id': 'fakeSnapshotId',
'new_sharename': 'fakeNewShareName'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.clone_snapshot',
{'snapshot_id': 'fakeSnapshotId',
'new_sharename': 'fakeNewShareName'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.edit_share',
{'share_dict': {"sharename": 'fakeVolId',
"old_sharename": 'fakeVolId',
"new_size": 100,
"deduplication": False,
"compression": True,
"thin_provision": False,
"ssd_cache": False,
"share_proto": "NFS"}},
fakes.FakeEsResCodeNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.edit_share',
{'share_dict': {"sharename": 'fakeVolId',
"old_sharename": 'fakeVolId',
"new_size": 100,
"deduplication": False,
"compression": True,
"thin_provision": False,
"ssd_cache": False,
"share_proto": "NFS"}},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.add_host',
{'hostname': 'fakeHostName',
'ipv4': 'fakeIpV4'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.add_host',
{'hostname': 'fakeHostName',
'ipv4': 'fakeIpV4'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.edit_host',
{'hostname': 'fakeHostName',
'ipv4_list': 'fakeIpV4List'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.edit_host',
{'hostname': 'fakeHostName',
'ipv4_list': 'fakeIpV4List'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_host',
{'hostname': 'fakeHostName'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.delete_host',
{'hostname': 'fakeHostName'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_host_list',
{},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_host_list',
{},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.set_nfs_access',
{'sharename': 'fakeShareName',
'access': 'fakeAccess',
'host_name': 'fakeHostName'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.set_nfs_access',
{'sharename': 'fakeShareName',
'access': 'fakeAccess',
'host_name': 'fakeHostName'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseEs_1_1_3()],
['self.driver.api_executor.get_snapshot_info',
               {'snapshot_name': 'fakeSnapshotName',
'lun_index': 'fakeLunIndex'},
fakes.FakeAuthPassFailResponse(),
fakes.FakeGetBasicInfoResponseTs_4_3_0()],
['self.driver.api_executor.get_snapshot_info',
               {'snapshot_name': 'fakeSnapshotName',
'lun_index': 'fakeLunIndex'},
fakes.FakeResultNegativeResponse(),
fakes.FakeGetBasicInfoResponseTs_4_3_0()])
def test_get_snapshot_info_ts_with_fail_response(
self, api, dict_parm,
fake_fail_response, fake_basic_info):
"""Test get snapshot info api with fail response."""
mock_http_connection = six.moves.http_client.HTTPConnection
mock_http_connection.return_value.getresponse.side_effect = [
fakes.FakeLoginResponse(),
fake_basic_info,
fakes.FakeLoginResponse(),
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response,
fake_fail_response]
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin',
'qnapadmin', 'Storage Pool 1')
self.mock_object(time, 'sleep')
self.assertRaises(
exception.ShareBackendException,
eval(api),
**dict_parm)
| 39.812161 | 78 | 0.57894 | 3,641 | 36,667 | 5.525954 | 0.090634 | 0.025447 | 0.03499 | 0.035785 | 0.789215 | 0.765755 | 0.743042 | 0.73335 | 0.729374 | 0.718241 | 0 | 0.018495 | 0.303979 | 36,667 | 920 | 79 | 39.855435 | 0.769876 | 0.035072 | 0 | 0.70802 | 0 | 0 | 0.17471 | 0.05138 | 0 | 0 | 0 | 0 | 0.022556 | 1 | 0.028822 | false | 0.027569 | 0.013784 | 0 | 0.051378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1a6587e7e20b627ffe54c075037bb8d900d8e6e | 123 | py | Python | media/exception.py | Marusoftware/tkmedia3 | 8a49fb5fad3a9e0cf64e3dacba9a322430ef1ba6 | [
"MIT"
] | null | null | null | media/exception.py | Marusoftware/tkmedia3 | 8a49fb5fad3a9e0cf64e3dacba9a322430ef1ba6 | [
"MIT"
] | 6 | 2021-04-08T09:16:10.000Z | 2022-02-16T02:39:50.000Z | media/exception.py | Marusoftware/tkmedia3 | 8a49fb5fad3a9e0cf64e3dacba9a322430ef1ba6 | [
"MIT"
] | null | null | null | class MediaFileError(Exception):
pass
class WrongOrderError(Exception):
pass
class ModeError(Exception):
pass | 15.375 | 33 | 0.747967 | 12 | 123 | 7.666667 | 0.5 | 0.423913 | 0.391304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178862 | 123 | 8 | 34 | 15.375 | 0.910891 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e1ae502623fc19a39fefc4ec18722d1fee1cb645 | 201 | py | Python | src/apps/staples/api/admin.py | columbia/fairtest | 8696051c9276f127ab8b2f437850f845ff0ca786 | [
"Apache-2.0"
] | 42 | 2017-01-12T13:59:23.000Z | 2022-03-01T01:44:12.000Z | src/apps/staples/api/admin.py | columbia/fairtest | 8696051c9276f127ab8b2f437850f845ff0ca786 | [
"Apache-2.0"
] | 3 | 2019-05-24T21:02:51.000Z | 2019-11-15T15:36:17.000Z | src/apps/staples/api/admin.py | columbia/fairtest | 8696051c9276f127ab8b2f437850f845ff0ca786 | [
"Apache-2.0"
] | 20 | 2017-01-12T23:07:10.000Z | 2021-08-11T09:13:50.000Z | from django.contrib import admin
from .models import User, Store, Competitor, Zipcode
admin.site.register(User)
admin.site.register(Store)
admin.site.register(Competitor)
admin.site.register(Zipcode)
| 25.125 | 52 | 0.81592 | 28 | 201 | 5.857143 | 0.428571 | 0.219512 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079602 | 201 | 7 | 53 | 28.714286 | 0.886486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
beca2f6678b2d85b0a58334f862887e520d6b77a | 118 | py | Python | day1_9.py | kangsup/maybler0 | 0128054800c4afbe842e711a881378382ffa5c6f | [
"MIT"
] | null | null | null | day1_9.py | kangsup/maybler0 | 0128054800c4afbe842e711a881378382ffa5c6f | [
"MIT"
] | null | null | null | day1_9.py | kangsup/maybler0 | 0128054800c4afbe842e711a881378382ffa5c6f | [
"MIT"
] | null | null | null | #예제 4-2
str2 = "programming"
print (str2[1])
print (str2[5])
#Ex 4-4
str4="980123-1234567"
print(str4[:6])
| 11.8 | 22 | 0.59322 | 20 | 118 | 3.5 | 0.65 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.265957 | 0.20339 | 118 | 9 | 23 | 13.111111 | 0.478723 | 0.101695 | 0 | 0 | 0 | 0 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
bedec89a87e60f93ad70b1964b22aca1c286bbd7 | 3,558 | py | Python | torch_geometric/utils/metric.py | DL-85/pytorch_geometric | eb12a94a667e881c4a6bff26b0453428bcb72393 | [
"MIT"
] | 8 | 2020-06-03T00:55:09.000Z | 2022-01-23T16:06:56.000Z | torch_geometric/utils/metric.py | chentingpc/pytorch_geometric | 44c4c5069dbc4c8a96761a3b5a7e7b45c8352a53 | [
"MIT"
] | null | null | null | torch_geometric/utils/metric.py | chentingpc/pytorch_geometric | 44c4c5069dbc4c8a96761a3b5a7e7b45c8352a53 | [
"MIT"
] | 6 | 2020-06-03T00:55:11.000Z | 2022-03-16T01:14:36.000Z | from __future__ import division
import torch
def accuracy(pred, target):
r"""Computes the accuracy of correct predictions.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
:rtype: int
"""
return (pred == target).sum().item() / target.numel()
def true_positive(pred, target, num_classes):
r"""Computes the number of true positive predictions.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`LongTensor`
"""
out = []
for i in range(num_classes):
out.append(((pred == i) & (target == i)).sum())
return torch.tensor(out)
def true_negative(pred, target, num_classes):
r"""Computes the number of true negative predictions.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`LongTensor`
"""
out = []
for i in range(num_classes):
out.append(((pred != i) & (target != i)).sum())
return torch.tensor(out)
def false_positive(pred, target, num_classes):
r"""Computes the number of false positive predictions.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`LongTensor`
"""
out = []
for i in range(num_classes):
out.append(((pred == i) & (target != i)).sum())
return torch.tensor(out)
def false_negative(pred, target, num_classes):
r"""Computes the number of false negative predictions.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`LongTensor`
"""
out = []
for i in range(num_classes):
out.append(((pred != i) & (target == i)).sum())
return torch.tensor(out)
def precision(pred, target, num_classes):
r"""Computes the precision:
:math:`\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}`.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`Tensor`
"""
tp = true_positive(pred, target, num_classes).to(torch.float)
fp = false_positive(pred, target, num_classes).to(torch.float)
out = tp / (tp + fp)
out[torch.isnan(out)] = 0
return out
def recall(pred, target, num_classes):
r"""Computes the recall:
:math:`\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}`.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`Tensor`
"""
tp = true_positive(pred, target, num_classes).to(torch.float)
fn = false_negative(pred, target, num_classes).to(torch.float)
out = tp / (tp + fn)
out[torch.isnan(out)] = 0
return out
def f1_score(pred, target, num_classes):
r"""Computes the :math:`F_1` score:
:math:`2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}
{\mathrm{precision}+\mathrm{recall}}`.
Args:
pred (Tensor): The predictions.
target (Tensor): The targets.
num_classes (int): The number of classes.
:rtype: :class:`Tensor`
"""
prec = precision(pred, target, num_classes)
rec = recall(pred, target, num_classes)
score = 2 * (prec * rec) / (prec + rec)
score[torch.isnan(score)] = 0
return score
| 24.537931 | 66 | 0.60905 | 450 | 3,558 | 4.731111 | 0.131111 | 0.112729 | 0.07938 | 0.122123 | 0.854861 | 0.821982 | 0.811649 | 0.738375 | 0.707374 | 0.707374 | 0 | 0.002616 | 0.247892 | 3,558 | 144 | 67 | 24.708333 | 0.792975 | 0.470489 | 0 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.04 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
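The counters in `metric.py` above all follow one pattern: build two per-class boolean masks and combine them with `&`. A dependency-free sketch of the same confusion-matrix arithmetic, using plain Python lists instead of tensors (the helper names mirror the originals but this sketch is illustrative only, not part of `torch_geometric`):

```python
def true_positive(pred, target, num_classes):
    # predicted class i while the target is also class i
    return [sum(1 for p, t in zip(pred, target) if p == i and t == i)
            for i in range(num_classes)]

def false_positive(pred, target, num_classes):
    # predicted class i while the target is a different class
    return [sum(1 for p, t in zip(pred, target) if p == i and t != i)
            for i in range(num_classes)]

def precision(pred, target, num_classes):
    tp = true_positive(pred, target, num_classes)
    fp = false_positive(pred, target, num_classes)
    # guard the 0/0 case that the tensor version handles via torch.isnan
    return [t / (t + f) if (t + f) else 0.0 for t, f in zip(tp, fp)]

pred = [0, 1, 1, 2]
target = [0, 1, 2, 2]
print(precision(pred, target, 3))  # [1.0, 0.5, 1.0]
```

The `out[torch.isnan(out)] = 0` lines in the original serve the same purpose as the `if (t + f)` guard here: a class that is never predicted gets precision 0 rather than NaN.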
bef05970bba28627642579c3cdfc28d306ab9245 | 284 | py | Python | thefuck/rules/django_south_merge.py | Archstacker/thefuck | ebe53f0d181c28ec2f7a86f46d7d51a7d48bbd9e | [
"MIT"
] | 1 | 2021-05-08T23:24:17.000Z | 2021-05-08T23:24:17.000Z | thefuck/rules/django_south_merge.py | qrqiuren/thefuck | 710a72ee8c9133b05e19d41db75a523f5f1e0cb2 | [
"MIT"
] | null | null | null | thefuck/rules/django_south_merge.py | qrqiuren/thefuck | 710a72ee8c9133b05e19d41db75a523f5f1e0cb2 | [
"MIT"
] | 1 | 2021-06-21T09:01:08.000Z | 2021-06-21T09:01:08.000Z | def match(command, settings):
return 'manage.py' in command.script and \
'migrate' in command.script \
and '--merge: will just attempt the migration' in command.stderr
def get_new_command(command, settings):
return u'{} --merge'.format(command.script)
| 31.555556 | 75 | 0.672535 | 37 | 284 | 5.108108 | 0.594595 | 0.142857 | 0.222222 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211268 | 284 | 8 | 76 | 35.5 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0.232394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
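The rule above only touches two attributes of the `command` object, `.script` and `.stderr`, so it can be exercised outside `thefuck` with a hypothetical namedtuple stand-in for the real `Command` type:

```python
from collections import namedtuple

# minimal stand-in for thefuck's Command object (illustrative, not the real class)
Command = namedtuple('Command', ['script', 'stderr'])

def match(command, settings):
    return 'manage.py' in command.script and \
        'migrate' in command.script \
        and '--merge: will just attempt the migration' in command.stderr

def get_new_command(command, settings):
    return u'{} --merge'.format(command.script)

cmd = Command('./manage.py migrate myapp',
              '--merge: will just attempt the migration')
print(match(cmd, None))            # True
print(get_new_command(cmd, None))  # ./manage.py migrate myapp --merge
```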
55e807567d8541ec5acf6a944e80a4dac26c50ce | 144 | py | Python | electronic_station/acceptable_password_3.py | NigrumAquila/py_checkio | df437c2c3ad325d84714665000e3299a70e91f32 | [
"MIT"
] | null | null | null | electronic_station/acceptable_password_3.py | NigrumAquila/py_checkio | df437c2c3ad325d84714665000e3299a70e91f32 | [
"MIT"
] | null | null | null | electronic_station/acceptable_password_3.py | NigrumAquila/py_checkio | df437c2c3ad325d84714665000e3299a70e91f32 | [
"MIT"
] | null | null | null | def is_acceptable_password(password: str) -> bool:
return len(password) > 6 and any(map(str.isdigit, password)) and not password.isnumeric() | 72 | 93 | 0.75 | 21 | 144 | 5.047619 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007937 | 0.125 | 144 | 2 | 93 | 72 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 1 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
55fa35b38975dc74ed4a638b4d20e79961497c16 | 26,162 | py | Python | agent.py | mopisec/c4_agent | 407c86eb5d72048c116c2473c62ecbff9f677f44 | [
"MIT"
] | null | null | null | agent.py | mopisec/c4_agent | 407c86eb5d72048c116c2473c62ecbff9f677f44 | [
"MIT"
] | null | null | null | agent.py | mopisec/c4_agent | 407c86eb5d72048c116c2473c62ecbff9f677f44 | [
"MIT"
] | null | null | null | from pwn import *
import random
import copy
import math
WIDTH = 7
HEIGHT = 6
r = process('./game')
# r = remote('demo.local', 14989)
class human_player:
my_turn = 0
sflag = 1
board = [[],[],[],[],[],[]]
def __init__(self, turn):
print('[+] Welcome! Human Player Mode Loaded (as Player ' + str(turn) + ')')
self.my_turn = turn
def load_board(self, board):
self.board = board
def play(self, turn):
if self.my_turn == turn:
self.sflag = 1
output_board(self.board)
while self.sflag:
inst = int(input('[*] Input: Place your chip at x = '))
if check_space(inst, self.board) == 1:
self.sflag = 0
else:
print('[-] Error: You cannot place a chip at x = ' + str(inst))
return str(inst)
else:
return "NOT_MY_TURN"
class random_agent:
my_turn = 0
sflag = 1
board = [[],[],[],[],[],[]]
def __init__(self, turn):
print('[+] Random Agent Loaded (as Player ' + str(turn) + ')')
self.my_turn = turn
def load_board(self, board):
self.board = board
def play(self, turn):
if self.my_turn == turn:
self.sflag = 1
while self.sflag:
inst = random.randrange(WIDTH)
if check_space(inst, self.board) == 1:
self.sflag = 0
return str(inst)
else:
return "NOT_MY_TURN"
class smart_agent:
my_turn = 0
enemy_turn = 0
sflag = 1
board = [[],[],[],[],[],[]]
def __init__(self, turn):
print('[+] Smart Agent Loaded (as Player ' + str(turn) + ')')
self.my_turn = turn
if turn == 1:
self.enemy_turn = 2
else:
self.enemy_turn = 1
def load_board(self, board):
self.board = board
def get_randominst(self):
self.sflag = 1
while self.sflag:
inst = random.randrange(WIDTH)
if check_space(inst, self.board) == 1:
self.sflag = 0
return inst
def check_winroute(self):
# Vertical
for i in range(HEIGHT - 3):
for j in range(WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i+1][j] == str(self.my_turn) and self.board[i+2][j] == str(self.my_turn):
if check_space_xy(j, i+3, self.board) == 1:
return j
if self.board[i+1][j] == str(self.my_turn) and self.board[i+2][j] == str(self.my_turn) and self.board[i+3][j] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Horizontal
for i in range(HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i][j+1] == str(self.my_turn) and self.board[i][j+2] == str(self.my_turn):
if check_space_xy(j+3, i, self.board) == 1:
return j+3
if self.board[i][j+1] == str(self.my_turn) and self.board[i][j+2] == str(self.my_turn) and self.board[i][j+3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Right to Up-Left)
for i in range(3, HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j+1] == str(self.my_turn) and self.board[i-2][j+2] == str(self.my_turn):
if check_space_xy(j+3, i-3, self.board) == 1:
return j+3
if self.board[i-1][j+1] == str(self.my_turn) and self.board[i-2][j+2] == str(self.my_turn) and self.board[i-3][j+3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Left to Up-Right)
for i in range(3, HEIGHT):
for j in range(3, WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j-1] == str(self.my_turn) and self.board[i-2][j-2] == str(self.my_turn):
if check_space_xy(j-3, i-3, self.board) == 1:
return j-3
if self.board[i-1][j-1] == str(self.my_turn) and self.board[i-2][j-2] == str(self.my_turn) and self.board[i-3][j-3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
return self.get_randominst()
def check_enemywinroute(self):
# Vertical
for i in range(HEIGHT - 3):
for j in range(WIDTH):
if self.board[i][j] == str(self.enemy_turn) and self.board[i+1][j] == str(self.enemy_turn) and self.board[i+2][j] == str(self.enemy_turn):
if check_space_xy(j, i+3, self.board) == 1:
return j
if self.board[i+1][j] == str(self.enemy_turn) and self.board[i+2][j] == str(self.enemy_turn) and self.board[i+3][j] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Horizontal
for i in range(HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.enemy_turn) and self.board[i][j+1] == str(self.enemy_turn) and self.board[i][j+2] == str(self.enemy_turn):
if check_space_xy(j+3, i, self.board) == 1:
return j+3
if self.board[i][j+1] == str(self.enemy_turn) and self.board[i][j+2] == str(self.enemy_turn) and self.board[i][j+3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Right to Up-Left)
for i in range(3, HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j+1] == str(self.enemy_turn) and self.board[i-2][j+2] == str(self.enemy_turn):
if check_space_xy(j+3, i-3, self.board) == 1:
return j+3
if self.board[i-1][j+1] == str(self.enemy_turn) and self.board[i-2][j+2] == str(self.enemy_turn) and self.board[i-3][j+3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Left to Up-Right)
for i in range(3, HEIGHT):
for j in range(3, WIDTH):
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i-2][j-2] == str(self.enemy_turn):
if check_space_xy(j-3, i-3, self.board) == 1:
return j-3
if self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i-2][j-2] == str(self.enemy_turn) and self.board[i-3][j-3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
return self.check_winroute()
def play(self, turn):
if self.my_turn == turn:
inst = self.check_enemywinroute()
return str(inst)
else:
return "NOT_MY_TURN"
class lookahead_agent:
my_turn = 0
enemy_turn = 0
sflag = 1
board = [[],[],[],[],[],[]]
def __init__(self, turn):
print('[+] Lookahead Agent Loaded (as Player ' + str(turn) + ')')
self.my_turn = turn
if turn == 1:
self.enemy_turn = 2
else:
self.enemy_turn = 1
def load_board(self, board):
self.board = board
def get_randominst(self):
self.sflag = 1
while self.sflag:
inst = random.randrange(WIDTH)
if check_space(inst, self.board) == 1:
self.sflag = 0
return inst
def check_preprereach(self):
for i in range(HEIGHT):
for j in range(WIDTH):
if self.board[i][j] == str(self.my_turn):
if check_space(j, self.board) == 1:
return j
if j != 0:
if check_space_xy(j-1, i, self.board) == 1:
return j-1
if j != 6:
if check_space_xy(j+1, i, self.board) == 1:
return j+1
return self.get_randominst()
def check_prereach(self):
# Vertical
for i in range(HEIGHT - 3):
for j in range(WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i+1][j] == str(self.my_turn):
if check_space_xy(j, i+2, self.board) == 1:
return j
if self.board[i+1][j] == str(self.my_turn) and self.board[i+2][j] == str(self.my_turn):
if check_space_xy(j, i+3, self.board) == 1:
return j
# Horizontal
for i in range(HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i][j+1] == str(self.my_turn):
if check_space_xy(j+2, i, self.board) == 1:
return j+2
if self.board[i][j+1] == str(self.my_turn) and self.board[i][j+2] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if check_space_xy(j+3, i, self.board) == 1:
return j+3
if self.board[i][j+2] == str(self.my_turn) and self.board[i][j+3] == str(self.my_turn):
if check_space_xy(j+1, i, self.board) == 1:
return j+1
# Diagonal (Down-Right to Up-Left)
for i in range(3, HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j+1] == str(self.my_turn):
if check_space_xy(j+2, i-2, self.board) == 1:
return j+2
if self.board[i-1][j+1] == str(self.my_turn) and self.board[i-2][j+2] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if check_space_xy(j+3, i-3, self.board) == 1:
return j+3
if self.board[i-2][j+2] == str(self.my_turn) and self.board[i-3][j+3] == str(self.my_turn):
if check_space_xy(j+1, i-1, self.board) == 1:
return j+1
# Diagonal (Down-Left to Up-Right)
for i in range(3, HEIGHT):
for j in range(3, WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j-1] == str(self.my_turn):
if check_space_xy(j-2, i-2, self.board) == 1:
return j-2
if self.board[i-1][j-1] == str(self.my_turn) and self.board[i-2][j-2] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if check_space_xy(j-3, i-3, self.board) == 1:
return j-3
if self.board[i-2][j-2] == str(self.my_turn) and self.board[i-3][j-3] == str(self.my_turn):
if check_space_xy(j-1, i-1, self.board) == 1:
return j-1
return self.check_preprereach()
def check_winroute(self):
# Vertical
for i in range(HEIGHT - 3):
for j in range(WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i+1][j] == str(self.my_turn) and self.board[i+2][j] == str(self.my_turn):
if check_space_xy(j, i+3, self.board) == 1:
return j
if self.board[i+1][j] == str(self.my_turn) and self.board[i+2][j] == str(self.my_turn) and self.board[i+3][j] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Horizontal
for i in range(HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i][j+1] == str(self.my_turn) and self.board[i][j+2] == str(self.my_turn):
if check_space_xy(j+3, i, self.board) == 1:
return j+3
if self.board[i][j+1] == str(self.my_turn) and self.board[i][j+2] == str(self.my_turn) and self.board[i][j+3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Right to Up-Left)
for i in range(3, HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j+1] == str(self.my_turn) and self.board[i-2][j+2] == str(self.my_turn):
if check_space_xy(j+3, i-3, self.board) == 1:
return j+3
if self.board[i-1][j+1] == str(self.my_turn) and self.board[i-2][j+2] == str(self.my_turn) and self.board[i-3][j+3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Diagonal (Down-Left to Up-Right)
for i in range(3, HEIGHT):
for j in range(3, WIDTH):
if self.board[i][j] == str(self.my_turn) and self.board[i-1][j-1] == str(self.my_turn) and self.board[i-2][j-2] == str(self.my_turn):
if check_space_xy(j-3, i-3, self.board) == 1:
return j-3
if self.board[i-1][j-1] == str(self.my_turn) and self.board[i-2][j-2] == str(self.my_turn) and self.board[i-3][j-3] == str(self.my_turn):
if check_space_xy(j, i, self.board) == 1:
return j
return self.check_prereach()
def check_enemywinroute(self):
# Vertical
for i in range(HEIGHT - 3):
for j in range(WIDTH):
if self.board[i][j] == str(self.enemy_turn) and self.board[i+1][j] == str(self.enemy_turn) and self.board[i+2][j] == str(self.enemy_turn):
if check_space_xy(j, i+3, self.board) == 1:
return j
if self.board[i+1][j] == str(self.enemy_turn) and self.board[i+2][j] == str(self.enemy_turn) and self.board[i+3][j] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
# Horizontal
for i in range(HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.enemy_turn) and self.board[i][j+1] == str(self.enemy_turn) and self.board[i][j+2] == str(self.enemy_turn):
if check_space_xy(j+3, i, self.board) == 1:
return j+3
if self.board[i][j+1] == str(self.enemy_turn) and self.board[i][j+2] == str(self.enemy_turn) and self.board[i][j+3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if self.board[i][j] == str(self.enemy_turn) and self.board[i][j+2] == str(self.enemy_turn) and self.board[i][j+3] == str(self.enemy_turn):
if check_space_xy(j+1, i, self.board) == 1:
return j+1
if self.board[i][j] == str(self.enemy_turn) and self.board[i][j+1] == str(self.enemy_turn) and self.board[i][j+3] == str(self.enemy_turn):
if check_space_xy(j+2, i, self.board) == 1:
return j+2
# Diagonal (Down-Right to Up-Left)
for i in range(3, HEIGHT):
for j in range(WIDTH - 3):
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j+1] == str(self.enemy_turn) and self.board[i-2][j+2] == str(self.enemy_turn):
if check_space_xy(j+3, i-3, self.board) == 1:
return j+3
if self.board[i-1][j+1] == str(self.enemy_turn) and self.board[i-2][j+2] == str(self.enemy_turn) and self.board[i-3][j+3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if self.board[i][j] == str(self.enemy_turn) and self.board[i-2][j+2] == str(self.enemy_turn) and self.board[i-3][j+3] == str(self.enemy_turn):
if check_space_xy(j+1, i-1, self.board) == 1:
return j+1
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j+1] == str(self.enemy_turn) and self.board[i-3][j+3] == str(self.enemy_turn):
if check_space_xy(j+2, i-2, self.board) == 1:
return j+2
# Diagonal (Down-Left to Up-Right)
for i in range(3, HEIGHT):
for j in range(3, WIDTH):
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i-2][j-2] == str(self.enemy_turn):
if check_space_xy(j-3, i-3, self.board) == 1:
return j-3
if self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i-2][j-2] == str(self.enemy_turn) and self.board[i-3][j-3] == str(self.enemy_turn):
if check_space_xy(j, i, self.board) == 1:
return j
if self.board[i][j] == str(self.enemy_turn) and self.board[i-2][j-2] == str(self.enemy_turn) and self.board[i-3][j-3] == str(self.enemy_turn):
if check_space_xy(j-1, i-1, self.board) == 1:
return j-1
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i-3][j-3] == str(self.enemy_turn):
if check_space_xy(j-2, i-2, self.board) == 1:
return j-2
if i == 3:
if j == 3 or j == 4:
if self.board[i][j] == str(self.enemy_turn) and self.board[i-1][j-1] == str(self.enemy_turn) and self.board[i+2][j+2] == str(self.enemy_turn):
if check_space_xy(j+1, i+1, self.board) == 1:
return j+1
return self.check_winroute()
def play(self, turn):
if self.my_turn == turn:
inst = self.check_enemywinroute()
return str(inst)
else:
return "NOT_MY_TURN"
class minimax_agent:
my_turn = 0
enemy_turn = 0
sflag = 1
board = [[],[],[],[],[],[]]
def __init__(self, turn):
print('[+] Minimax Agent Loaded (as Player ' + str(turn) + ')')
self.my_turn = turn
if turn == 1:
self.enemy_turn = 2
else:
self.enemy_turn = 1
def is_valid_location(self, board, col):
# row 0 is the top of the parsed board; a column has space while its top cell is empty
return board[0][col] == '0'
def get_next_open_row(self, board, col):
# chips fall downward, so scan from the bottom row up for the first empty cell
for r in reversed(range(HEIGHT)):
if board[r][col] == '0':
return r
def winning_move(self, board, piece):
piece = str(piece)  # parse_board stores '0'/'1'/'2' characters
for c in range(WIDTH-3):
for r in range(6):
if board[r][c] == piece and board[r][c+1] == piece and board[r][c+2] == piece and board[r][c+3] == piece:
return True
for c in range(7):
for r in range(6-3):
if board[r][c] == piece and board[r+1][c] == piece and board[r+2][c] == piece and board[r+3][c] == piece:
return True
for c in range(7-3):
for r in range(HEIGHT-3):
if board[r][c] == piece and board[r+1][c+1] == piece and board[r+2][c+2] == piece and board[r+3][c+3] == piece:
return True
for c in range(7-3):
for r in range(3, HEIGHT):
if board[r][c] == piece and board[r-1][c+1] == piece and board[r-2][c+2] == piece and board[r-3][c+3] == piece:
return True
def evaluate_window(self, window, piece):
piece = str(piece)  # the board holds characters, so count characters
score = 0
opp_piece = '1'
if piece == '1':
opp_piece = '2'
if window.count(piece) == 4:
score += 100
elif window.count(piece) == 3 and window.count('0') == 1:
score += 5
elif window.count(piece) == 2 and window.count('0') == 2:
score += 2
if window.count(opp_piece) == 3 and window.count('0') == 1:
score -= 4
return score
def score_position(self, board, piece):
score = 0
for i in range(HEIGHT):
row_array = board[i]
for c in range(4):
window = row_array[c:c+4]
score += self.evaluate_window(window, piece)
for c in range(WIDTH):
col_array = [board[i][c] for i in range(6)]
for r in range(HEIGHT-3):
window = col_array[r:r+4]
score += self.evaluate_window(window, piece)
for r in range(HEIGHT-3):
for c in range(WIDTH-3):
window = [board[r+i][c+i] for i in range(4)]
score += self.evaluate_window(window, piece)
for r in range(HEIGHT-3):
for c in range(WIDTH-3):
window = [board[r+3-i][c+i] for i in range(4)]
score += self.evaluate_window(window, piece)
return score
def is_terminal_node(self, board):
return self.winning_move(board, self.my_turn) or self.winning_move(board, self.enemy_turn) or len(self.get_valid_locations(board)) == 0
def get_valid_locations(self, board):
valid_locations = []
for col in range(WIDTH):
if self.is_valid_location(board, col):
valid_locations.append(col)
return valid_locations
def minimax(self, board, depth, alpha, beta, maximizingPlayer):
valid_locations = self.get_valid_locations(board)
is_terminal = self.is_terminal_node(board)
if depth == 0 or is_terminal:
if is_terminal:
if self.winning_move(board, self.my_turn):
return (None, 100000000000000)
elif self.winning_move(board, self.enemy_turn):
return (None, -10000000000000)
else: # Game is over, no more valid moves
return (None, 0)
else: # Depth is zero
return (None, self.score_position(board, self.my_turn))
if maximizingPlayer:
value = -math.inf
column = random.choice(valid_locations)
for col in valid_locations:
row = self.get_next_open_row(board, col)
b_copy = copy.deepcopy(board)
b_copy[row][col] = str(self.my_turn)
new_score = self.minimax(b_copy, depth-1, alpha, beta, False)[1]
if new_score > value:
value = new_score
column = col
alpha = max(alpha, value)
if alpha >= beta:
break
return column, value
else: # Minimizing player
value = math.inf
column = random.choice(valid_locations)
for col in valid_locations:
row = self.get_next_open_row(board, col)
b_copy = copy.deepcopy(board)
b_copy[row][col] = str(self.enemy_turn)
new_score = self.minimax(b_copy, depth-1, alpha, beta, True)[1]
if new_score < value:
value = new_score
column = col
beta = min(beta, value)
if alpha >= beta:
break
return column, value
def drop_piece(self, board, row, col, piece):
board[row][col] = piece
def load_board(self, board):
self.board = board
def play(self, turn):
if self.my_turn == turn:
col, minimax_score = self.minimax(self.board, 5, -math.inf, math.inf, True)
if col is None:
col = random.randrange(WIDTH)
return str(col)
else:
return "NOT_MY_TURN"
def output_board(b):
for i in range(6):
print(' '.join(b[i]))
def parse_board():
b = [[],[],[],[],[],[]]
print('[*] Parsing the game board ...')
for i in range(6):
b[i] = []
data = r.recvuntil('\n').decode()[:-1]
for j in range(7):
b[i].append(data[j])
return b
def check_space_xy(x, y, b):
res = 0
if b[y][x] == '0':
if y == (HEIGHT - 1):
res = 1
return res
if b[y+1][x] != '0':
res = 1
return res
return res
def check_space(x, b):
res = 0
for i in range(6):
if b[i][x] == '0':
res = 1
return res
def main():
# Specify the agent (and something else)
player1 = lookahead_agent(1)
player2 = random_agent(2)
#player2 = human_player(2)
turn = 1
wflag = 1
while wflag:
# Load Game Board
b = parse_board()
player1.load_board(b)
player2.load_board(b)
# Place a Chip
if player1.play(turn) == 'NOT_MY_TURN':
placed = player2.play(turn)
r.sendline(placed)
elif player2.play(turn) == 'NOT_MY_TURN':
placed = player1.play(turn)
r.sendline(placed)
else:
print('Error: Unexpected value in turn variable')
quit(1)
# Log Message
print('[+] Player ' + str(turn) + ' placed chip on x = ' + str(placed))
# Result Validation
msg = r.recvuntil('\n').decode()[:-1]
if 'Win' in msg:
print(msg)
b = parse_board()
output_board(b)
wflag = 0
if turn == 1:
turn = 2
else:
turn = 1
if __name__ == '__main__':
main() | 43.458472 | 166 | 0.492929 | 3,847 | 26,162 | 3.239407 | 0.04393 | 0.163938 | 0.114749 | 0.094929 | 0.836302 | 0.811908 | 0.796662 | 0.777965 | 0.758706 | 0.748917 | 0 | 0.033237 | 0.36748 | 26,162 | 602 | 167 | 43.458472 | 0.719845 | 0.024845 | 0 | 0.627639 | 0 | 0 | 0.018562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071017 | false | 0 | 0.007678 | 0.003839 | 0.307102 | 0.021113 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
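The scoring core of `minimax_agent` above is `evaluate_window`: every 4-cell window earns points for the player's own chips plus empty space, and a penalty when the opponent is one chip from completing it. A standalone, lightly restructured sketch of that heuristic (integer cells for brevity; the agent itself works on the character board produced by `parse_board`):

```python
def evaluate_window(window, piece):
    score = 0
    opp_piece = 2 if piece == 1 else 1
    if window.count(piece) == 4:
        score += 100      # completed four-in-a-row
    elif window.count(piece) == 3 and window.count(0) == 1:
        score += 5        # open three
    elif window.count(piece) == 2 and window.count(0) == 2:
        score += 2        # open two
    if window.count(opp_piece) == 3 and window.count(0) == 1:
        score -= 4        # opponent threatens to complete this window
    return score

print(evaluate_window([1, 1, 1, 0], 1))  # 5
print(evaluate_window([2, 2, 2, 0], 1))  # -4
print(evaluate_window([1, 1, 1, 1], 1))  # 100
```

Summing this value over every horizontal, vertical, and diagonal window is what `score_position` does before the minimax search compares candidate columns.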
3604ed52fc271e16652050c3c3f5b1d6ee4cfcf9 | 6,492 | py | Python | storm_analysis/test/test_spliner.py | bintulab/storm-analysis | 71ae493cbd17ddb97938d0ae2032d97a0eaa76b2 | [
"CNRI-Python"
] | null | null | null | storm_analysis/test/test_spliner.py | bintulab/storm-analysis | 71ae493cbd17ddb97938d0ae2032d97a0eaa76b2 | [
"CNRI-Python"
] | null | null | null | storm_analysis/test/test_spliner.py | bintulab/storm-analysis | 71ae493cbd17ddb97938d0ae2032d97a0eaa76b2 | [
"CNRI-Python"
] | 1 | 2021-04-19T18:17:06.000Z | 2021-04-19T18:17:06.000Z | #!/usr/bin/env python
"""
Tests for Spliner analysis.
"""
import sys
import storm_analysis
import storm_analysis.test.verifications as veri
def test_measure_psf():
movie = storm_analysis.getData("test/data/test_spliner.dax")
mlist = storm_analysis.getData("test/data/test_spliner_ref.hdf5")
psf = storm_analysis.getPathOutputTest("test_spliner_psf.psf")
storm_analysis.removeFile(psf)
from storm_analysis.spliner.measure_psf import measurePSF
measurePSF(movie, "", mlist, psf)
def test_measure_psf_2D():
movie = storm_analysis.getData("test/data/test.dax")
mlist = storm_analysis.getData("test/data/test_ref.hdf5")
psf = storm_analysis.getPathOutputTest("test_spliner_psf_2d.psf")
storm_analysis.removeFile(psf)
from storm_analysis.spliner.measure_psf import measurePSF
measurePSF(movie, "", mlist, psf, want2d = True, aoi_size = 5)
def _test_psf_to_spline():
psf = storm_analysis.getPathOutputTest("test_spliner_psf.psf")
spline = storm_analysis.getPathOutputTest("test_spliner_psf.spline")
storm_analysis.removeFile(spline)
from storm_analysis.spliner.psf_to_spline import psfToSpline
psfToSpline(psf, spline, 10)
def _test_psf_to_spline_2D():
psf = storm_analysis.getPathOutputTest("test_spliner_psf_2d.psf")
spline = storm_analysis.getPathOutputTest("test_spliner_psf_2d.spline")
storm_analysis.removeFile(spline)
from storm_analysis.spliner.psf_to_spline import psfToSpline
psfToSpline(psf, spline, 7)
def test_spliner_std():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test_spliner.dax")
settings = storm_analysis.getData("test/data/test_spliner_dh.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_dh.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 720):
raise Exception("Spliner 3D did not find the expected number of localizations.")
def test_spliner_std_2D():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test.dax")
settings = storm_analysis.getData("test/data/test_spliner_2D.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_2D.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 2004):
raise Exception("Spliner 2D did not find the expected number of localizations.")
def test_spliner_std_non_square():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test_300x200_dh.dax")
settings = storm_analysis.getData("test/data/test_spliner_dh.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_dh.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 120):
raise Exception("Spliner 3D non square did not find the expected number of localizations.")
def _test_spliner_fista():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test_spliner.dax")
settings = storm_analysis.getData("test/data/test_spliner_dh_fista.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_dh_fista.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 36):
raise Exception("Spliner 3D FISTA did not find the expected number of localizations.")
def _test_spliner_fista_2D():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test.dax")
settings = storm_analysis.getData("test/data/test_spliner_2D_fista.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_2D_fista.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 587):
raise Exception("Spliner 2D FISTA did not find the expected number of localizations.")
def _test_spliner_fista_non_square():
# Only test for Python3 due to pickle incompatibility issues.
if (sys.version_info < (3, 0)):
return
movie_name = storm_analysis.getData("test/data/test_300x200_dh.dax")
settings = storm_analysis.getData("test/data/test_spliner_dh_fista.xml")
mlist = storm_analysis.getPathOutputTest("test_spliner_dh_fista.hdf5")
storm_analysis.removeFile(mlist)
from storm_analysis.spliner.spline_analysis import analyze
analyze(movie_name, mlist, settings)
# Verify number of localizations found.
num_locs = veri.verifyNumberLocalizations(mlist)
if not veri.verifyIsCloseEnough(num_locs, 24):
raise Exception("Spliner 3D FISTA non square did not find the expected number of localizations.")
if (__name__ == "__main__"):
test_measure_psf()
test_measure_psf_2D()
# _test_psf_to_spline()
# _test_psf_to_spline_2D()
test_spliner_std()
test_spliner_std_2D()
test_spliner_std_non_square()
_test_spliner_fista()
_test_spliner_fista_2D()
_test_spliner_fista_non_square()
| 35.091892 | 105 | 0.735213 | 824 | 6,492 | 5.518204 | 0.104369 | 0.142951 | 0.070376 | 0.084451 | 0.909831 | 0.875962 | 0.875962 | 0.864746 | 0.782054 | 0.746426 | 0 | 0.015197 | 0.17899 | 6,492 | 184 | 106 | 35.282609 | 0.837899 | 0.106131 | 0 | 0.563636 | 0 | 0 | 0.194257 | 0.096004 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.118182 | 0 | 0.263636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3613b51e848595932a9e6ebc25b0b3c91d6795f3 | 103 | py | Python | pyroaman/__init__.py | br-g/pyroaman | 86d9a4771e4e0657c96e1c45dacbbde579e527d9 | [
"MIT"
] | 2 | 2021-06-16T01:54:36.000Z | 2021-11-08T13:00:39.000Z | pyroaman/__init__.py | br-g/pyroaman | 86d9a4771e4e0657c96e1c45dacbbde579e527d9 | [
"MIT"
] | null | null | null | pyroaman/__init__.py | br-g/pyroaman | 86d9a4771e4e0657c96e1c45dacbbde579e527d9 | [
"MIT"
] | 1 | 2021-04-24T17:02:26.000Z | 2021-04-24T17:02:26.000Z | from pyroaman.main import load
from pyroaman.database import Database
from pyroaman.block import Block
| 25.75 | 38 | 0.854369 | 15 | 103 | 5.866667 | 0.466667 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116505 | 103 | 3 | 39 | 34.333333 | 0.967033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36bf4df12677f37780b5dcdad7ffa1ee90f29320 | 96 | py | Python | venv/lib/python3.8/site-packages/future/moves/html/entities.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/future/moves/html/entities.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/future/moves/html/entities.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/95/5b/dc/85d8cafd1cd18fbe7d7a0e1132f1961df8016e3d2d2863a867c75b4726 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36c837728607d0df00d1604ff3265a8b90251f67 | 4,089 | py | Python | modules/rsconv.py | luost26/Equivariant-OrientedMP | 597f9c4ace953929e5eefef84e4c840d6636b818 | [
"MIT"
] | 5 | 2022-03-26T07:08:21.000Z | 2022-03-31T12:23:40.000Z | modules/rsconv.py | luost26/Equivariant-OrientedMP | 597f9c4ace953929e5eefef84e4c840d6636b818 | [
"MIT"
] | null | null | null | modules/rsconv.py | luost26/Equivariant-OrientedMP | 597f9c4ace953929e5eefef84e4c840d6636b818 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from pytorch3d.ops.knn import knn_points, knn_gather
from .geometric import global_to_local
class RSConv(nn.Module):
def __init__(self, in_channels, out_channels, k):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.k = k
self.weight_network = nn.Sequential(
nn.Conv2d(10, in_channels//4, kernel_size=(1, 1)),
nn.BatchNorm2d(in_channels//4),
nn.ReLU(),
nn.Conv2d(in_channels//4, in_channels, kernel_size=(1, 1)),
)
self.conv_bn = nn.BatchNorm2d(in_channels)
self.conv_act = nn.ReLU()
self.out_network = nn.Sequential(
nn.Conv1d(in_channels, out_channels, kernel_size=1),
nn.BatchNorm1d(out_channels),
nn.ReLU(),
)
def forward(self, p_in, p_out, h_in):
"""
Args:
p_in: (B, N_in, 3)
p_out: (B, N_out, 3)
h_in: (B, N_in, in_ch)
Returns:
h_out: (B, N_out, out_ch)
"""
_, idx, p_j = knn_points(p_out, p_in, K=self.k, return_nn=True) # (B, N_out, K), (B, N_out, K), (B, N_out, K, 3)
p_i = p_out.unsqueeze(2).repeat(1, 1, self.k, 1) # (B, N_out, K, 3)
p_ij = (p_j - p_i) # (B, N_out, K, 3)
d_ij = torch.linalg.norm(p_ij, dim=-1, keepdim=True) # (B, N_out, K, 1)
w_ij = torch.cat([p_ij, d_ij, p_i, p_j], dim=-1)    # (B, N_out, K, 3+1+3+3)
w_ij = self.weight_network(w_ij.permute(0, 3, 1, 2).contiguous()) # (B, in_ch, N_out, K)
h_j = knn_gather(h_in, idx).permute(0, 3, 1, 2).contiguous() # (B, N_out, K, in_ch) -> (B, in_ch, N_out, K)
m_ij = self.conv_act(self.conv_bn(w_ij * h_j)) # (B, in_ch, N_out, K)
h_out = m_ij.max(dim=-1)[0] # (B, in_ch, N_out)
h_out = self.out_network(h_out).permute(0, 2, 1).contiguous() # (B, out_ch, N_out) -> (B, N_out, out_ch)
return h_out
class OrientedAnchoredRSConv(nn.Module):
def __init__(self, in_channels, out_channels, k, num_frames):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.k = k
self.num_frames = num_frames
self.weight_network = nn.Sequential(
nn.Conv2d(num_frames*4, in_channels//4, kernel_size=(1, 1)),
nn.BatchNorm2d(in_channels//4),
nn.ReLU(),
nn.Conv2d(in_channels//4, in_channels, kernel_size=(1, 1)),
)
self.conv_bn = nn.BatchNorm2d(in_channels)
self.conv_act = nn.ReLU()
self.out_network = nn.Sequential(
nn.Conv1d(in_channels, out_channels, kernel_size=1),
nn.BatchNorm1d(out_channels),
nn.ReLU(),
)
def forward(self, p_in, p_out, R_out, h_in):
"""
Args:
p_in: (B, N_in, 3)
p_out: (B, N_out, 3)
R_out: (B, N_out, F*3, 3)
h_in: (B, N_in, in_ch)
Returns:
h_out: (B, N_out, out_ch)
"""
B, N_in, N_out = p_in.size(0), p_in.size(1), p_out.size(1)
_, idx, p_j = knn_points(p_out, p_in, K=self.k, return_nn=True) # (B, N_out, K), (B, N_out, K), (B, N_out, K, 3)
p_ij = global_to_local(R_out, p_out, p_j) # (B, N_out, K, F*3)
d_ij = torch.linalg.norm(p_ij.reshape(B, N_out, self.k, self.num_frames, 3), dim=-1, keepdim=False) # (B, N_out, K, F)
w_ij = torch.cat([p_ij, d_ij], dim=-1)  # (B, N_out, K, F*3+F)
w_ij = self.weight_network(w_ij.permute(0, 3, 1, 2).contiguous()) # (B, in_ch, N_out, K)
h_j = knn_gather(h_in, idx).permute(0, 3, 1, 2).contiguous() # (B, N_out, K, in_ch) -> (B, in_ch, N_out, K)
m_ij = self.conv_act(self.conv_bn(w_ij * h_j)) # (B, in_ch, N_out, K)
h_out = m_ij.max(dim=-1)[0] # (B, in_ch, N_out)
h_out = self.out_network(h_out).permute(0, 2, 1).contiguous() # (B, out_ch, N_out) -> (B, N_out, out_ch)
return h_out
| 37.172727 | 129 | 0.549279 | 697 | 4,089 | 2.918221 | 0.113343 | 0.066863 | 0.056539 | 0.044248 | 0.830875 | 0.816126 | 0.815634 | 0.764995 | 0.726647 | 0.726647 | 0 | 0.030481 | 0.293959 | 4,089 | 109 | 130 | 37.513761 | 0.674056 | 0.195158 | 0 | 0.626866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.059701 | 0 | 0.179104 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36d3c6446c2cb8513862c067d95169edb0b1ffb6 | 167 | py | Python | unittest/python/test_version.py | seanyen/eigenpy | e164f03eb13b5fc531dd6b5e7e0f28560f405464 | [
"BSD-2-Clause"
] | 96 | 2015-12-25T10:05:13.000Z | 2022-03-16T01:14:25.000Z | unittest/python/test_version.py | seanyen/eigenpy | e164f03eb13b5fc531dd6b5e7e0f28560f405464 | [
"BSD-2-Clause"
] | 123 | 2015-04-29T09:48:05.000Z | 2022-03-27T02:26:33.000Z | unittest/python/test_version.py | seanyen/eigenpy | e164f03eb13b5fc531dd6b5e7e0f28560f405464 | [
"BSD-2-Clause"
] | 29 | 2015-02-20T00:45:41.000Z | 2022-01-28T11:25:43.000Z | from __future__ import print_function
import eigenpy
assert eigenpy.checkVersionAtLeast(0,0,0)
assert eigenpy.__version__ != ""
assert eigenpy.__raw_version__ != ""
| 20.875 | 41 | 0.802395 | 20 | 167 | 6 | 0.55 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020134 | 0.107784 | 167 | 7 | 42 | 23.857143 | 0.785235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
36e51767b1d84ab7988567c923debd02fa4a8ba6 | 2,296 | py | Python | epytope/Data/pssms/smmpmbec/mat/A_23_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/A_23_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/A_23_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | A_23_01_9 = {0: {'A': 0.225, 'C': 0.027, 'E': 0.353, 'D': 0.568, 'G': 0.123, 'F': -0.078, 'I': -0.161, 'H': 0.041, 'K': -0.095, 'M': -0.457, 'L': -0.047, 'N': 0.102, 'Q': 0.062, 'P': 0.145, 'S': 0.168, 'R': -0.139, 'T': -0.173, 'W': -0.038, 'V': -0.195, 'Y': -0.43}, 1: {'A': 0.323, 'C': 0.177, 'E': 0.396, 'D': 0.453, 'G': 0.063, 'F': -0.885, 'I': 0.15, 'H': 0.098, 'K': 0.49, 'M': -0.308, 'L': -0.045, 'N': 0.041, 'Q': 0.133, 'P': 0.484, 'S': 0.124, 'R': 0.289, 'T': 0.166, 'W': -0.791, 'V': 0.047, 'Y': -1.403}, 2: {'A': 0.056, 'C': 0.038, 'E': 0.164, 'D': 0.296, 'G': 0.062, 'F': -0.265, 'I': -0.343, 'H': 0.117, 'K': 0.142, 'M': -0.296, 'L': -0.275, 'N': 0.124, 'Q': -0.028, 'P': 0.265, 'S': 0.193, 'R': 0.14, 'T': 0.134, 'W': -0.157, 'V': -0.119, 'Y': -0.249}, 3: {'A': -0.098, 'C': -0.033, 'E': 0.061, 'D': 0.109, 'G': 0.031, 'F': -0.171, 'I': -0.006, 'H': -0.017, 'K': 0.033, 'M': 0.017, 'L': 0.055, 'N': -0.029, 'Q': 0.025, 'P': -0.039, 'S': -0.071, 'R': 0.0, 'T': 0.129, 'W': -0.044, 'V': 0.032, 'Y': 0.016}, 4: {'A': 0.004, 'C': -0.124, 'E': 0.14, 'D': 0.151, 'G': 0.065, 'F': 0.018, 'I': -0.142, 'H': 0.081, 'K': 0.226, 'M': -0.134, 'L': 0.002, 'N': -0.113, 'Q': 0.157, 'P': 0.194, 'S': -0.041, 'R': 0.202, 'T': -0.108, 'W': -0.299, 'V': -0.168, 'Y': -0.109}, 5: {'A': 0.207, 'C': -0.136, 'E': 0.135, 'D': 0.14, 'G': 0.173, 'F': -0.374, 'I': -0.125, 'H': 0.042, 'K': 0.169, 'M': 0.064, 'L': -0.14, 'N': 0.02, 'Q': 0.231, 'P': -0.16, 'S': 0.163, 'R': 0.141, 'T': -0.051, 'W': -0.252, 'V': 0.039, 'Y': -0.287}, 6: {'A': 0.084, 'C': 0.114, 'E': -0.035, 'D': 0.251, 'G': 0.36, 'F': -0.448, 'I': 0.227, 'H': -0.244, 'K': 0.258, 'M': -0.196, 'L': -0.252, 'N': 0.038, 'Q': 0.087, 'P': 0.016, 'S': 0.097, 'R': 0.475, 'T': 0.145, 'W': -0.467, 'V': -0.071, 'Y': -0.439}, 7: {'A': 0.112, 'C': 0.039, 'E': 0.006, 'D': 0.016, 'G': 0.093, 'F': -0.028, 'I': 0.03, 'H': -0.069, 'K': -0.032, 'M': -0.017, 'L': -0.082, 'N': -0.038, 'Q': 
-0.005, 'P': 0.03, 'S': 0.028, 'R': -0.037, 'T': -0.041, 'W': 0.001, 'V': 0.017, 'Y': -0.022}, 8: {'A': 0.054, 'C': 0.143, 'E': 0.455, 'D': 0.376, 'G': 0.37, 'F': -1.57, 'I': -0.932, 'H': 0.486, 'K': 0.043, 'M': -0.402, 'L': -0.64, 'N': 0.271, 'Q': 0.531, 'P': 0.574, 'S': 0.536, 'R': 0.484, 'T': 0.482, 'W': -0.973, 'V': -0.189, 'Y': -0.102}, -1: {'con': 4.65717}} | 2,296 | 2,296 | 0.396777 | 557 | 2,296 | 1.630162 | 0.310592 | 0.019824 | 0.011013 | 0.013216 | 0.030837 | 0 | 0 | 0 | 0 | 0 | 0 | 0.376495 | 0.162456 | 2,296 | 1 | 2,296 | 2,296 | 0.095684 | 0 | 0 | 0 | 0 | 0 | 0.079669 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7fe0cf91ecf9cc2966139b408d6debcabf3fc49d | 295 | py | Python | packages/girder_worker/girder_worker/_test_plugins/tasks.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | 37 | 2016-01-26T19:21:23.000Z | 2021-06-10T14:12:59.000Z | packages/girder_worker/girder_worker/_test_plugins/tasks.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | 290 | 2016-01-27T14:02:10.000Z | 2022-01-24T16:50:27.000Z | packages/girder_worker/girder_worker/_test_plugins/tasks.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | 29 | 2016-02-17T17:54:47.000Z | 2022-03-17T23:36:17.000Z | from girder_worker.app import app
from girder_worker_utils import types
from girder_worker_utils.decorators import argument
def not_a_task():
pass
@argument('n', types.Integer)
def function_task(n):
return n
@app.task
@argument('n', types.Integer)
def celery_task(n):
return n
| 15.526316 | 51 | 0.752542 | 46 | 295 | 4.630435 | 0.413043 | 0.140845 | 0.225352 | 0.197183 | 0.225352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155932 | 295 | 18 | 52 | 16.388889 | 0.855422 | 0 | 0 | 0.333333 | 0 | 0 | 0.00678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0.25 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
3d20602dc9b64d8b766d34944d5236a2a54f76af | 7,628 | py | Python | etl/parsers/etw/Microsoft_Windows_Kernel_Prefetch.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 104 | 2020-03-04T14:31:31.000Z | 2022-03-28T02:59:36.000Z | etl/parsers/etw/Microsoft_Windows_Kernel_Prefetch.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 7 | 2020-04-20T09:18:39.000Z | 2022-03-19T17:06:19.000Z | etl/parsers/etw/Microsoft_Windows_Kernel_Prefetch.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 16 | 2020-03-05T18:55:59.000Z | 2022-03-01T10:19:28.000Z | # -*- coding: utf-8 -*-
"""
Microsoft-Windows-Kernel-Prefetch
GUID : 5322d61a-9efa-4bc3-a3f9-14be95c144f8
"""
from construct import Int8sl, Int8ul, Int16ul, Int16sl, Int32sl, Int32ul, Int64sl, Int64ul, Bytes, Double, Float32l, Struct
from etl.utils import WString, CString, SystemTime, Guid
from etl.dtyp import Sid
from etl.parsers.etw.core import Etw, declare, guid
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=1, version=0)
class Microsoft_Windows_Kernel_Prefetch_1_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhase" / Int32ul,
"PrefetchType" / Int32ul,
"IsTricklePhase" / Int8ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=1, version=1)
class Microsoft_Windows_Kernel_Prefetch_1_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhaseMask" / Int32ul,
"PrefetchType" / Int32ul,
"IsTricklePhase" / Int8ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=2, version=0)
class Microsoft_Windows_Kernel_Prefetch_2_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhase" / Int32ul,
"PrefetchType" / Int32ul,
"IsTricklePhase" / Int8ul,
"NumPagesPrefetched" / Int64ul,
"NumReadLists" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=2, version=1)
class Microsoft_Windows_Kernel_Prefetch_2_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhaseMask" / Int32ul,
"PrefetchType" / Int32ul,
"IsTricklePhase" / Int8ul,
"NumPagesPrefetched" / Int64ul,
"NumReadLists" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=3, version=0)
class Microsoft_Windows_Kernel_Prefetch_3_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhase" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=3, version=1)
class Microsoft_Windows_Kernel_Prefetch_3_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhaseMask" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=4, version=0)
class Microsoft_Windows_Kernel_Prefetch_4_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhase" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=4, version=1)
class Microsoft_Windows_Kernel_Prefetch_4_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"PrefetchPhaseMask" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=5, version=0)
class Microsoft_Windows_Kernel_Prefetch_5_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=6, version=0)
class Microsoft_Windows_Kernel_Prefetch_6_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=7, version=0)
class Microsoft_Windows_Kernel_Prefetch_7_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"EndReason" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=8, version=0)
class Microsoft_Windows_Kernel_Prefetch_8_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"ActionFlags" / Int16ul,
"TraceReason" / Int8ul,
"PrefetchReason" / Int8ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=8, version=1)
class Microsoft_Windows_Kernel_Prefetch_8_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"ActionFlags" / Int16ul,
"TraceReason" / Int8ul,
"PrefetchReason" / Int8ul,
"NumLaunches" / Int32ul,
"TimeSinceLastLaunchInS" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=9, version=1)
class Microsoft_Windows_Kernel_Prefetch_9_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"WorkItemsCount" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=10, version=1)
class Microsoft_Windows_Kernel_Prefetch_10_1(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=11, version=0)
class Microsoft_Windows_Kernel_Prefetch_11_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul,
"NumPhases" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=12, version=0)
class Microsoft_Windows_Kernel_Prefetch_12_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul
)
@declare(guid=guid("5322d61a-9efa-4bc3-a3f9-14be95c144f8"), event_id=13, version=0)
class Microsoft_Windows_Kernel_Prefetch_13_0(Etw):
pattern = Struct(
"ScenarioNameLength" / Int16ul,
"ScenarioName" / Bytes(lambda this: this.ScenarioNameLength),
"ScenarioHashId" / Int32ul,
"ScenarioType" / Int32ul
)
| 34.36036 | 123 | 0.677373 | 742 | 7,628 | 6.818059 | 0.103774 | 0.060091 | 0.082625 | 0.11267 | 0.932595 | 0.932595 | 0.923503 | 0.784345 | 0.784345 | 0.784345 | 0 | 0.102091 | 0.203854 | 7,628 | 221 | 124 | 34.515837 | 0.73094 | 0.01311 | 0 | 0.636872 | 0 | 0 | 0.276234 | 0.089108 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022346 | 0 | 0.223464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d3310d5f693a5c3cb1ea6761b1dd8a9462a7e9f | 235 | py | Python | business_rules.py | StanHaakman/Business-rules-voor-RE | 461264a0a54e39537c56d5438ce045022834ae9b | [
"MIT"
] | null | null | null | business_rules.py | StanHaakman/Business-rules-voor-RE | 461264a0a54e39537c56d5438ce045022834ae9b | [
"MIT"
] | null | null | null | business_rules.py | StanHaakman/Business-rules-voor-RE | 461264a0a54e39537c56d5438ce045022834ae9b | [
"MIT"
] | 1 | 2021-04-02T15:57:43.000Z | 2021-04-02T15:57:43.000Z | from contentRules._popular_products import popular_products
from contentRules._target_tables import target_tables
from collabRules._preference_tables import preference_tables
# popular_products()
target_tables()
preference_tables()
| 23.5 | 60 | 0.876596 | 27 | 235 | 7.185185 | 0.333333 | 0.231959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080851 | 235 | 9 | 61 | 26.111111 | 0.898148 | 0.076596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9e86d7e567fd38811c20dd4e695ef5f697d928d | 26 | py | Python | deepdrive/utils/__init__.py | braceal/DeepDriveMD | 5d8ae5016a6bb172fa0188a78b8d2b14ebb754fd | [
"MIT"
] | 3 | 2020-02-07T21:35:48.000Z | 2020-12-23T01:44:49.000Z | deepdrive/utils/__init__.py | braceal/DeepDriveMD | 5d8ae5016a6bb172fa0188a78b8d2b14ebb754fd | [
"MIT"
] | 5 | 2019-11-02T05:29:55.000Z | 2020-05-06T04:20:24.000Z | deepdrive/utils/__init__.py | braceal/DeepDriveMD | 5d8ae5016a6bb172fa0188a78b8d2b14ebb754fd | [
"MIT"
] | 1 | 2020-12-07T12:26:01.000Z | 2020-12-07T12:26:01.000Z | from .utils import get_id
| 13 | 25 | 0.807692 | 5 | 26 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1866555f456eb0f65030695ca730e26e615240b8 | 28 | py | Python | model/__init__.py | ghostxsl/pytorch-Yolov3 | e951b81d583294944b6ff5d36a39aa28eb86bc64 | [
"Apache-2.0"
] | 3 | 2019-02-28T08:36:03.000Z | 2019-10-19T11:44:30.000Z | model/__init__.py | ghostxsl/pytorch-Yolov3 | e951b81d583294944b6ff5d36a39aa28eb86bc64 | [
"Apache-2.0"
] | null | null | null | model/__init__.py | ghostxsl/pytorch-Yolov3 | e951b81d583294944b6ff5d36a39aa28eb86bc64 | [
"Apache-2.0"
] | 1 | 2019-10-19T11:44:32.000Z | 2019-10-19T11:44:32.000Z | from .yolonet import YoLoNet | 28 | 28 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a12ec78bc0d2d7c0df2df67a69daf66067313a0a | 341 | py | Python | test_pyship/test_b_module_info.py | daobook/pyship | 31b8e0b4c1cfc7677d418024f27642183cb1966d | [
"MIT"
] | 16 | 2020-10-28T02:49:39.000Z | 2022-03-18T16:50:11.000Z | test_pyship/test_b_module_info.py | daobook/pyship | 31b8e0b4c1cfc7677d418024f27642183cb1966d | [
"MIT"
] | 4 | 2020-12-07T23:20:09.000Z | 2020-12-18T03:25:49.000Z | test_pyship/test_b_module_info.py | daobook/pyship | 31b8e0b4c1cfc7677d418024f27642183cb1966d | [
"MIT"
] | 1 | 2022-01-26T11:26:00.000Z | 2022-01-26T11:26:00.000Z | from semver import VersionInfo
from test_pyship import TST_APP_NAME, TstAppDirs
def test_module_info():
# todo: use TargetAppInfo's get_module_info()
# tst_app_dirs = TstAppDirs(TST_APP_NAME, VersionInfo.parse("0.0.1"))
# #module_info = ModuleInfo(TST_APP_NAME, tst_app_dirs.project_dir)
# #print(module_info)
pass
| 22.733333 | 73 | 0.744868 | 50 | 341 | 4.72 | 0.54 | 0.127119 | 0.127119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01049 | 0.16129 | 341 | 14 | 74 | 24.357143 | 0.814685 | 0.571848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 1 | 0.25 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a13a57a3242bd22e42f1b8ec50a2d02f835a97a4 | 1,609 | py | Python | train/gen/baseline/models/shallow/v4/setup.py | sammysiegel/SubtLeNet | 94d1507a8a7c60548b59400109b6c4086ad83141 | [
"MIT"
] | null | null | null | train/gen/baseline/models/shallow/v4/setup.py | sammysiegel/SubtLeNet | 94d1507a8a7c60548b59400109b6c4086ad83141 | [
"MIT"
] | null | null | null | train/gen/baseline/models/shallow/v4/setup.py | sammysiegel/SubtLeNet | 94d1507a8a7c60548b59400109b6c4086ad83141 | [
"MIT"
] | 2 | 2019-07-08T20:18:22.000Z | 2020-06-01T20:04:08.000Z |
from subtlenet import config
from subtlenet.generators import gen_singletons as generator
config.gen_singletons = {'2_3_1': 12, '2_3_2': 13, '2_4_2': 15, 'tau1': 33, '2_4_1': 14, '2_1_2': 9, '2_1_1': 8, '2_2_1': 10, '2_2_2': 11, 'partonm': 29, '1_2_2': 3, '1_2_1': 2, 'pt': 32, 'tau2': 35, 'tau3': 37, 'eta': 24, '1_4_1': 6, '1_3_1': 4, 'msd': 27, 'partonpt': 30, '3_3_1': 20, 'tau1sd': 34, 'phi': 31, '3_3_2': 21, '3_2_1': 18, '3_2_2': 19, '1_3_2': 5, '1_1_1': 0, '3_1_2': 17, '3_1_1': 16, '1_1_2': 1, 'nprongs': 28, '1_4_2': 7, 'tau3sd': 38, 'eventNumber': 25, 'm': 26, 'tau2sd': 36, '3_4_2': 23, '3_4_1': 22}
config.gen_default_variables = ['1_1_1', '1_1_2', '1_2_1', '1_2_2', '1_3_1', '1_3_2', '1_4_1', '1_4_2', '2_1_1', '2_1_2', '2_2_1', '2_2_2', '2_3_1', '2_3_2', '2_4_1', '2_4_2', '3_1_1', '3_1_2', '3_2_1', '3_2_2', '3_3_1', '3_3_2', '3_4_1', '3_4_2', 'eta', 'm', 'msd', 'phi', 'pt', 'tau1', 'tau1sd', 'tau2', 'tau2sd', 'tau3', 'tau3sd']
config.gen_default_mus = [1.0, 1.0, 0.065825, 0.039773, 0.002703, 0.001093, 6.5e-05, 1.3e-05, 1.0, 1.0, 0.065825, 0.039773, 0.0009, 0.00026, 9e-06, 1e-06, 1.0, 1.0, 0.065825, 0.039773, 0.000611, 0.000186, 0.0, 0.0, 0.004036, 164.187057, 149.535583, 3.140448, 570.259277, 0.265966, 0.254007, 0.112757, 0.100529, 0.060297, 0.051978]
config.gen_default_sigmas = [1.0, 1.0, 0.038151, 0.030055, 0.002839, 0.00147, 9.6e-05, 3.2e-05, 1.0, 1.0, 0.038151, 0.030055, 0.001247, 0.000523, 2.3e-05, 6e-06, 1.0, 1.0, 0.038151, 0.030055, 0.001072, 0.000533, 1.0, 1.0, 0.97017, 42.81216, 54.38921, 1.815403, 99.722885, 0.096302, 0.105487, 0.057523, 0.063567, 0.02873, 0.030201]
| 201.125 | 521 | 0.614046 | 381 | 1,609 | 2.32021 | 0.312336 | 0.033937 | 0.023756 | 0.031674 | 0.179864 | 0.138009 | 0.128959 | 0.128959 | 0.128959 | 0 | 0 | 0.432125 | 0.125544 | 1,609 | 7 | 522 | 229.857143 | 0.196162 | 0 | 0 | 0 | 0 | 0 | 0.222015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a14774839ebe52f2020c48400b7a058d7dd7ae34 | 2,296 | py | Python | tests/_api_client/api/test_api_commons.py | MLAide/python-client | f8b1ec1cb22b281088c0fab0b6808b59bc27ca87 | [
"Apache-2.0"
] | 1 | 2021-03-05T19:14:06.000Z | 2021-03-05T19:14:06.000Z | tests/_api_client/api/test_api_commons.py | MLAide/python-client | f8b1ec1cb22b281088c0fab0b6808b59bc27ca87 | [
"Apache-2.0"
] | 2 | 2021-04-18T11:17:43.000Z | 2021-05-02T13:22:24.000Z | tests/_api_client/api/test_api_commons.py | MLAide/python-client | f8b1ec1cb22b281088c0fab0b6808b59bc27ca87 | [
"Apache-2.0"
] | null | null | null | from pytest import raises
from pytest_mock import MockerFixture
from mlaide.error import *
from mlaide._api_client.api._api_commons import assert_response_status
def test_assert_response_status_with_404_response_and_404_is_allowed_should_not_raise(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 404
# act
assert_response_status(response, True)
def test_assert_response_status_with_404_response_should_raise_not_found_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 404
# act
with raises(NotFoundError):
assert_response_status(response)
def test_assert_response_status_with_400_response_should_raise_input_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 400
# act
with raises(InputError):
assert_response_status(response)
def test_assert_response_status_with_401_response_should_raise_invalid_authorization_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 401
# act
with raises(InvalidAuthorizationError):
assert_response_status(response)
def test_assert_response_status_with_403_response_should_raise_not_authorized_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 403
# act
with raises(NotAuthorizedError):
assert_response_status(response)
def test_assert_response_status_with_500_response_should_raise_server_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 500
# act
with raises(ServerError):
assert_response_status(response)
def test_assert_response_status_with_501_response_should_raise_server_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 501
# act
with raises(ServerError):
assert_response_status(response)
def test_assert_response_status_with_502_response_should_raise_server_error(mocker: MockerFixture):
# arrange
response = mocker.MagicMock()
response.status_code = 502
# act
with raises(ServerError):
assert_response_status(response)
| 27.011765 | 114 | 0.772213 | 268 | 2,296 | 6.179104 | 0.175373 | 0.211353 | 0.205314 | 0.101449 | 0.722222 | 0.722222 | 0.722222 | 0.722222 | 0.640097 | 0.640097 | 0 | 0.026674 | 0.167247 | 2,296 | 84 | 115 | 27.333333 | 0.839435 | 0.041376 | 0 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.395349 | 1 | 0.186047 | false | 0 | 0.093023 | 0 | 0.27907 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a163fd255415f004de613d49f606c04493e38591 | 48 | py | Python | pystickmover/__init__.py | nicholasrobinson/pystickmover | a53cca1e118030e9b6463134fa9f84c09c83ba26 | [
"MIT"
] | null | null | null | pystickmover/__init__.py | nicholasrobinson/pystickmover | a53cca1e118030e9b6463134fa9f84c09c83ba26 | [
"MIT"
] | null | null | null | pystickmover/__init__.py | nicholasrobinson/pystickmover | a53cca1e118030e9b6463134fa9f84c09c83ba26 | [
"MIT"
] | null | null | null | from pystickmover.pystickmover import StickMover | 48 | 48 | 0.916667 | 5 | 48 | 8.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.977778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a16aefc226fbee4f41c85e9521798c563422e0b4 | 139,219 | py | Python | plots/ref_v1/plots.py | pps-lab/rofl-project-code | eaa9f1aeca3a40ca939c0f723af0186af0f95f9b | [
"MIT"
] | 12 | 2021-07-08T13:27:54.000Z | 2021-12-25T14:53:26.000Z | plots/ref_v1/plots.py | pps-lab/rofl-project-code | eaa9f1aeca3a40ca939c0f723af0186af0f95f9b | [
"MIT"
] | 1 | 2021-10-15T09:48:18.000Z | 2022-03-31T12:41:15.000Z | plots/ref_v1/plots.py | pps-lab/rofl-project-code | eaa9f1aeca3a40ca939c0f723af0186af0f95f9b | [
"MIT"
] | 1 | 2021-11-24T19:21:38.000Z | 2021-11-24T19:21:38.000Z | #!/usr/bin/python
# coding=utf-8
import math
import sys
import os
import re
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import scipy
from matplotlib import ticker
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from matplotlib.font_manager import FontProperties
from matplotlib.legend import Legend
from matplotlib.lines import Line2D
import seaborn as sns
from matplotlib.patches import Patch
import matplotlib.patches as mpatches
#from plotting.report.extract_histogram import extract_histogram
from extract_histogram import extract_histogram
plot_data_save_path = "./data/"
plots = "./images/"
COLOR_GRAY = "#AAAAAA"
FONT_SIZE = 20
DATA_KEYS = {
"CLIP_DEFENSE": {
"L2": {
"BASELINE": 'e41_google_tasks_noconstrain_evaluation',
"XMAX": 50,
"XMIN": 0.01,
"ATTACK": {
'e41_clipl2_0_01_evaluation': 0.01,
'e41_clipl2_0_025_evaluation': 0.025,
'e41_clipl2_0_05_evaluation': 0.05,
'e41_clipl2_0_1_evaluation': 0.1,
'e41_clipl2_0_5_evaluation': 0.5,
'e41_clipl2_1_evaluation': 1,
'e41_clipl2_3_evaluation': 3,
'e41_clipl2_3_5_evaluation': 3.5,
'e41_clipl2_5_evaluation': 5,
'e41_clipl2_10_evaluation': 10,
'e41_clipl2_12_evaluation': 12,
'e41_clipl2_14_evaluation': 14,
'e41_clipl2_16_evaluation': 16,
'e41_clipl2_18_evaluation': 18,
'e41_clipl2_20_evaluation': 20,
'e41_clipl2_25_evaluation': 25,
'e41_clipl2_30_evaluation': 30,
'e41_clipl2_35_evaluation': 35,
},
"PGD_ATTACK": {
'e41_clipl2_20_pgd_evaluation': 20,
'e41_clipl2_10_pgd_evaluation': 10
},
"NO_ATTACK": {
"e41_clipl2_0_01_noattack_evaluation": 0.01,
"e41_clipl2_0_025_noattack_evaluation": 0.025,
# "e41_clipl2_0_05_noattack_evaluation": 0.05,
"e41_clipl2_0_1_noattack_evaluation": 0.1,
"e41_clipl2_3_5_noattack_evaluation": 3.5,
"e41_clipl2_35_noattack_evaluation": 35
}
# 'e41_clipl2_100_evaluation': 100
},
"LINF": {
"BASELINE": 'e41_google_tasks_noconstrain_evaluation',
"XMAX": 0.2,
"XMIN": 0.00004,
"ATTACK": {
'e41_clipinf_0_00005_2_evaluation': 0.00005,
'e41_clipinf_0_0001_evaluation': 0.0001,
'e41_clipinf_0.00015_evaluation': 0.00015,
'e41_clipinf_0_00100_evaluation': 0.0010,
'e41_clipinf_0.0015_evaluation': 0.0015,
'e41_clipinf_0.005_evaluation': 0.005,
'e41_clipinf_0.015_evaluation': 0.015,
'e41_clipinf_0.010_evaluation': 0.01,
'e41_clipinf_0.020_evaluation': 0.02,
'e41_clipinf_0.025_evaluation': 0.025,
'e41_clipinf_0_03_evaluation': 0.03,
'e41_clipinf_0.15_evaluation': 0.15
},
"PGD_ATTACK": {
},
"NO_ATTACK": {
}
}
}
}
# Theming !
#output_dir = "."
def setup_plt(square=False):
fig_width_pt = 240.0 # Get this from LaTeX using \showthe
inches_per_pt = 1.0 / 72.27 * 2 # Convert pt to inches
golden_mean = ((math.sqrt(5) - 1.0) / 2.0) * .8 # Aesthetic ratio; math.sqrt, since the np.math alias was removed in NumPy 1.25
fig_width = fig_width_pt * inches_per_pt # width in inches
fig_height = (fig_width * golden_mean) # height in inches
fig_size = [fig_width, fig_height]
if square:
fig_size = [fig_height, fig_height]
plt_params = {
'backend': 'ps',
'axes.labelsize': 20,
'legend.fontsize': 16,
'xtick.labelsize': 18,
'ytick.labelsize': 18,
'font.size': 18,
'figure.figsize': fig_size,
'font.family': 'Times New Roman'
}
plt.rcParams.update(plt_params)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
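`setup_plt` derives the figure size from a LaTeX column width in points: the width is converted to inches (1 pt = 1/72.27 in, doubled here for readability) and the height follows a golden ratio scaled by 0.8. A minimal sketch of that arithmetic in isolation, using the same constants as above:

```python
import math

fig_width_pt = 240.0                      # from LaTeX via \showthe\columnwidth
inches_per_pt = 1.0 / 72.27 * 2           # pt -> inch, scaled 2x
golden_mean = ((math.sqrt(5) - 1.0) / 2.0) * 0.8  # ~0.494

fig_width = fig_width_pt * inches_per_pt  # width in inches
fig_height = fig_width * golden_mean      # height in inches

assert abs(fig_width - 6.6417) < 1e-3
assert abs(fig_height - 3.2839) < 1e-3
```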
def get_task_styling():
task = {
# Attacks
"a2" : {
"label": "A2-WALL",
"color": "0.1"
},
"a3" : {
"label": "A3-GREEN",
"color": "0.3"
},
"a4": {
"label": "A4-STRIPES",
"color": "0.6"
},
# Metrics
"main": { # accuracy
"label": "Main Task",
"linestyle": "dashdot"
},
"bdoor": { # accuracy
"label": "Backdoor Task",
"linestyle": "solid"
},
"norm": {
"label": "Norm",
"linestyle": "dashdot"
},
# clients
"benign_client": {
"color": "black",
"label": "Benign clients"
}
}
return task
def get_task_styling_colorful():
cmap = matplotlib.cm.get_cmap('Set1')
colors = [cmap(i) for i in range(8)]
task = {
# Attacks
"a2" : {
"label": "A2-WALL",
"color": colors[0]
},
"a3" : {
"label": "A3-GREEN",
"color": colors[1]
},
"a4": {
"label": "A4-STRIPES",
"color": colors[2]
},
# Metrics
"main": { # accuracy
"label": "Main Task",
"linestyle": "dashdot"
},
"bdoor": { # accuracy
"label": "Backdoor Task",
"linestyle": "solid"
},
"norm": {
"label": "Norm",
"linestyle": "dashdot"
},
# clients
"benign_client": {
"color": "black",
"label": "Benign clients"
}
}
return task
def get_grayscale_styles():
colors = ['0.1', '0.3', '0.6']
linestyles = ['-', '--', '-']
return colors, linestyles
COLOR_BENIGN = "#c3ddec"
def get_colorful_styles():
cmap_1 = matplotlib.cm.get_cmap('Set1')
cmap_2 = matplotlib.cm.get_cmap('Set2')
# colors = [cmap_1(i) for i in range(8)]
colors = []
colors.extend([cmap_2(i) for i in range(30)])
# colors = ['#CD4631', '#8B1E3F', '#3C153B', '#89BD9E', '#F9C80E']
linestyles = ['-'] * 30
return colors, linestyles
def get_large_figsize(fig_width_pt=300.0, golden_mean=None):
# fig_width_pt = 300.0 # Get this from LaTeX using \showthe
inches_per_pt = 1.0 / 72.27 * 2 # Convert pt to inches
if golden_mean is None:
golden_mean = ((math.sqrt(5) - 1.0) / 2.0) * .8 # Aesthetic ratio; math.sqrt, since the np.math alias was removed in NumPy 1.25
fig_width = fig_width_pt * inches_per_pt # width in inches
fig_height = (fig_width * golden_mean) # height in inches
fig_size = [fig_width, fig_height / 1.22]
return fig_height, fig_size, fig_width
def get_progressive_colors(totals=10.0):
cmap_1 = matplotlib.cm.get_cmap('summer')
# totals = 10.0
colors = [cmap_1(i) for i in np.arange(0, 1, 1.0 / totals)]
# colors = ['#CD4631', '#8B1E3F', '#3C153B', '#89BD9E', '#F9C80E']
# linestyles = ['-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-']
return colors
colors, linestyles = get_colorful_styles()
def backup_array(arr, name):
np.save(os.path.join(plot_data_save_path, name), arr)
def load_backup_array(name):
return np.load(os.path.join(plot_data_save_path, name + ".npy"))
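`backup_array` and `load_backup_array` are thin wrappers around `np.save`/`np.load` for caching intermediate plot data under `plot_data_save_path`. A minimal round-trip sketch; it uses a temporary directory and local stand-in helpers (hypothetical names) so it does not depend on `./data/` existing:

```python
import os
import tempfile

import numpy as np

# Hypothetical stand-ins for backup_array/load_backup_array that take an
# explicit directory instead of the module-level plot_data_save_path.
def save_arr(arr, name, directory):
    np.save(os.path.join(directory, name), arr)  # np.save appends ".npy"

def load_arr(name, directory):
    return np.load(os.path.join(directory, name + ".npy"))

with tempfile.TemporaryDirectory() as tmp:
    original = np.array([0.1, 0.5, 0.9])
    save_arr(original, "acc", tmp)
    restored = load_arr("acc", tmp)
    assert np.array_equal(original, restored)
```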
def cifar_lenet_wr_plot(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'cifar_lenet_wr_varying.csv'))
# print(df)
adv = 'adv_success'
suc = 'test_accuracy'
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
pdf_pages = PdfPages('./plots_output/%s' % plotname)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
f, ax1 = plt.subplots()
wrs = [0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
results_adv = [(wrs[i], df[f"run-{i}_evaluation/{adv}"][4]) for i in range(0, 11)]
results_adv_x, results_adv_y = zip(*results_adv)
results_ben = [(wrs[i], df[f"run-{i}_evaluation/{suc}"][4]) for i in range(0, 11)]
results_ben_x, results_ben_y = zip(*results_ben)
plt.plot(results_adv_x, results_adv_y, '-o', label="Adversarial objective", color=colors[1], linewidth=2)
plt.plot(results_ben_x, results_ben_y, '-o', label="Benign objective", color=colors[0], linewidth=2)
# plt.
# plt.scatter(pgd_compare.values(), [compare_pgd_mean], label="PGD", color=colors[3])
# print(df[f"e41_clipl2_0_05_noattack_evaluation/{suc}"].last_valid_index())
# for id, (key, norm) in enumerate(evaluate.items()):
# # df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
# plt.plot(norm, df[type], label=key, color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.xlabel('Weight regularization factor')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Accuracy")
plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=2, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def norm_accuracy_tradeoff_plot(plotname, norm, output_dir, xtickspacing=None, xmax=None, add_legend=True, model="mnist"):
df = pd.read_csv(os.path.join(plot_data_save_path, 'femnist_bounds_4.csv'))
#df = pd.read_csv(os.path.join(plot_data_save_path, 'cifar_bounds.csv'))
def build_df(df, norm, window_size, selected_round, pattern, col_baseline="e41_google_tasks_noconstrain_evaluation/test_accuracy", ignored_cols = ["e41_clipinf_0_03_evaluation/adv_success","e41_clipinf_0_03_evaluation/test_accuracy"]):
lst = []
used = []
notused = []
df["baseline_mean"] = df[col_baseline].rolling(window_size).mean()
df["baseline_std"] = df[col_baseline].rolling(window_size).std()
df_baseline = df[df["Round"]==selected_round]
df_baseline = df_baseline[["Round", "baseline_mean", "baseline_std"]]
df_baseline = df_baseline.rename(columns={"Round": "round"})
bounds = {}
for col in df.columns:
match = re.search(pattern, col, re.IGNORECASE)
if match:
if col in ignored_cols:
print(f"Skipped (ignored): {col}")
notused.append(col)
continue
try:
bound = float(match.group(2).replace("_", "."))
except ValueError:
print(f"Skipped: {col}")
notused.append(col)
continue
col_type = match.group(3)
if f"{bound}_{col_type}" in bounds:
print(f"Skipped (Duplicate Bound): {col}")
notused.append(col)
continue
else:
bounds[f"{bound}_{col_type}"] = True
if col_type not in ["adv_success", "test_accuracy"]:
raise ValueError(f"Unknown col type: {col_type}")
df[col + "_rmean"] = df[col].rolling(window_size).mean()
df[col + "_rstd"] = df[col].rolling(window_size).std()
row = df[df["Round"]==selected_round]
d = {
"round": row["Round"].values[0],
"norm": norm,
"bound": bound,
col_type + "_mean": row[col + "_rmean"].values[0],
col_type + "_std": row[col + "_rstd"].values[0],
}
lst.append(d)
used.append(col)
else:
notused.append(col)
#print(f"Norm={norm} - Ignored Columns: {notused}")
df1 = pd.DataFrame(lst)
# group together test accuracy and adv success
df1 = df1.fillna(0)
df1 = df1.groupby(["round", "norm", "bound"]).agg({"test_accuracy_mean":"sum", "test_accuracy_std":"sum", "adv_success_mean": "sum", "adv_success_std": "sum"})
# remove hierarchical index
df1 = pd.DataFrame(df1.to_records())
df1 = df1.merge(df_baseline)
return df1
setup_plt(square=False)
name = plotname
if norm == "l2" and model == "mnist":
norm_label = "$L_2$"
df = build_df(df, norm="l2", window_size=20, selected_round=670, pattern=r"e41_(emnist_)?clipl2_([0-9_\.]+)_evaluation/(.*)", col_baseline="e41_google_tasks_noconstrain_evaluation/test_accuracy", ignored_cols = ["e41_clipinf_0_03_evaluation/adv_success","e41_clipinf_0_03_evaluation/test_accuracy"])
df = df[df["bound"]<100]
elif norm == "l8" and model == "mnist":
norm_label = r"$L_{\infty}$"
df = build_df(df, norm="l8", window_size=20, selected_round=670, pattern=r"e41_(emnist_)?clipinf_([0-9_\.]+)_evaluation/(.*)", col_baseline="e41_google_tasks_noconstrain_evaluation/test_accuracy", ignored_cols = ["e41_clipinf_0_03_evaluation/adv_success","e41_clipinf_0_03_evaluation/test_accuracy"])
df = df[df["bound"]<=0.075]
else: raise ValueError(f"unknown norm/model combination: {norm}/{model}")
colors = ["0.1", "0.3", "0.6"]
ecolor=None #"0.6"
linestyles = ["solid", "dotted"] #dashdot
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
baseline= ax.plot(df["bound"], df["baseline_mean"], label="Baseline (no bound)", color=colors[0],
linestyle='dashdot', linewidth=2, alpha=0.5)
testacc = ax.errorbar(df["bound"], df["test_accuracy_mean"], yerr=df["test_accuracy_std"], label="Main Task", color=colors[0], linewidth=2, capsize=5, ecolor=ecolor, marker="o")
advsucc = ax.errorbar(df["bound"], df["adv_success_mean"], yerr=df["adv_success_std"], label="Backdoor Task", color=colors[1], linestyle="dashed", linewidth=2, capsize=5, ecolor=ecolor, marker="o")
##########################
# General Format
##########################
#ax.set_title("Hello World")
# 'best', 'upper right', 'upper left', 'lower left',
# 'lower right', 'right', 'center left', 'center right',
# 'lower center', 'upper center', 'center'
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
if add_legend:
ax.legend(title_fontsize=20, bbox_to_anchor=(0., 1.02, 2/3, .102), mode="expand", loc="lower left", title="Tasks", labelspacing=.05)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=1.02)
ax.set_ylabel("Accuracy")
ax.set_yticks([0, 0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=xmax)
ax.set_xlabel(f"{norm_label} norm bound")
ax.xaxis.set_major_locator(ticker.MultipleLocator(xtickspacing)) # ticker is already imported at module level
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
if add_legend:
ax.axis('off')
baseline[0].set_visible(False)
testacc[0].set_visible(False)
testacc[1][0].set_visible(False)
testacc[1][1].set_visible(False)
testacc[2][0].set_visible(False)
advsucc[0].set_visible(False)
advsucc[1][0].set_visible(False)
advsucc[1][1].set_visible(False)
advsucc[2][0].set_visible(False)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
def get_plt_params():
fig_height, fig_size, fig_width = get_large_figsize()
params = {'backend': 'ps',
'axes.labelsize': FONT_SIZE,
'legend.fontsize': FONT_SIZE,
'xtick.labelsize': FONT_SIZE,
'ytick.labelsize': FONT_SIZE,
'font.size': FONT_SIZE,
'figure.figsize': fig_size,
'font.family': 'times new roman'}
return params, [fig_width, fig_height]
def norm_accuracy_compare_plot(plotname, norm, output_dir, legend_type=None, use_error=True, model="mnist", xmax=600, ignore_error=[], markevery=50):
if legend_type not in [None, "tootight", "ideal", "tooloose"]:
raise ValueError(f"legend type not supported: {legend_type}")
window_size = 20
if model == "mnist":
df = pd.read_csv(os.path.join(plot_data_save_path, 'femnist_bounds_4.csv'))
l2_bound_tootight = "e41_clipl2_0_01_evaluation"
l2_bound_ideal = "e41_clipl2_1_evaluation"
l2_bound_tooloose = "e41_clipl2_35_evaluation" #e41_clipl2_100_evaluation
l8_bound_tootight = "e41_clipinf_0_0001_evaluation"
l8_bound_ideal = "e41_clipinf_0_00100_evaluation"
l8_bound_tooloose = "e41_emnist_clipinf_0_075_evaluation"
tootight_bound = (r"10^{-2}", r"10^{-4}") #(L2, L8)
ideal_bound = ("1", r"10^{-3}") #(L2, L8)
tooloose_bound = ("35", "0.075") #(L2, L8)
elif model == "cifar":
df = pd.read_csv(os.path.join(plot_data_save_path, 'cifar_bounds.csv'))
l2_bound_tootight = "e58_lr1_cifar_clipl2_0.5_evaluation"
l2_bound_ideal = "e58_lr1_cifar_clipl2_10_evaluation"
l2_bound_tooloose = "e58_lr1_cifar_baseline_evaluation"
l8_bound_tootight = "e58_lr1_cifar_clip_0.004_evaluation"
l8_bound_ideal = "e58_lr1_cifar_clip_0.0055_evaluation"
l8_bound_tooloose = "e58_lr1_cifar_baseline_evaluation"
tootight_bound = ("0.5", "0.004") #(L2, L8)
ideal_bound = ("10", "0.0055") #(L2, L8)
tooloose_bound = (r"\infty", r"\infty") #(L2, L8)
else:
raise ValueError(f"unknown model: {model}")
def build_df(df, norm, bound_tootight_key, bound_ideal_key, bound_tooloose_key, window_size):
if bound_tootight_key is not None:
df[f"{norm}_bound_tootight_advsuccess"] = df[f"{bound_tootight_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_tootight_testaccuracy"] = df[f"{bound_tootight_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_tootight_advsuccess_std"] = df[f"{bound_tootight_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_tootight_testaccuracy_std"] = df[f"{bound_tootight_key}/test_accuracy"].rolling(window_size).std()
if bound_ideal_key is not None:
df[f"{norm}_bound_ideal_advsuccess"] = df[f"{bound_ideal_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_ideal_testaccuracy"] = df[f"{bound_ideal_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_ideal_advsuccess_std"] = df[f"{bound_ideal_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_ideal_testaccuracy_std"] = df[f"{bound_ideal_key}/test_accuracy"].rolling(window_size).std()
if bound_tooloose_key is not None:
df[f"{norm}_bound_tooloose_advsuccess"] = df[f"{bound_tooloose_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_tooloose_testaccuracy"] = df[f"{bound_tooloose_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_tooloose_advsuccess_std"] = df[f"{bound_tooloose_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_tooloose_testaccuracy_std"] = df[f"{bound_tooloose_key}/test_accuracy"].rolling(window_size).std()
return df
df = build_df(df, norm="l8", bound_tootight_key=l8_bound_tootight, bound_ideal_key=l8_bound_ideal, bound_tooloose_key=l8_bound_tooloose, window_size=window_size)
df = build_df(df, norm="l2", bound_tootight_key=l2_bound_tootight, bound_ideal_key=l2_bound_ideal, bound_tooloose_key=l2_bound_tooloose, window_size=window_size)
name = plotname
setup_plt(square=False)
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
error_color = "0.85"
colors = ["0.1", "0.3", "0.6"]
linestyles = ["solid", "dotted"] #dashdot
line_d = {}
plines = []
if f"{norm}_bound_tootight_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tootight_testaccuracy"], color=colors[0], linestyle=linestyles[0], linewidth=2, marker="s", markevery=markevery)
line_d["tootight_tacc"] = len(plines)-1
if f"{norm}_bound_ideal_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_ideal_testaccuracy"], color=colors[1], linestyle=linestyles[0], linewidth=2, marker="o", markevery=markevery)
line_d["ideal_tacc"] = len(plines)-1
if f"{norm}_bound_tooloose_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tooloose_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2, marker="v", markevery=markevery)
line_d["tooloose_tacc"] = len(plines)-1
if f"{norm}_bound_tootight_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tootight_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2, marker="s", markevery=markevery)
line_d["tootight_advs"] = len(plines)-1
if f"{norm}_bound_ideal_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_ideal_advsuccess"], color=colors[1], linestyle=linestyles[1], linewidth=2, marker="o", markevery=markevery)
line_d["ideal_advs"] = len(plines)-1
if f"{norm}_bound_tooloose_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tooloose_advsuccess"], color=colors[2], linestyle=linestyles[1], linewidth=2, marker="v", markevery=markevery)
line_d["tooloose_advs"] = len(plines)-1
lines = ax.get_lines()
labels = ["Main Task", "Backdoor Task"]
empty_patch = mpatches.Patch(color='none')
handles=None
if legend_type == "tootight" and "tootight_tacc" in line_d:
title = "Bound too tight"
labels = [rf"($L_2 \leq {tootight_bound[0]}$, $L_{{\infty}} \leq {tootight_bound[1]}$)"] + labels
handles = [empty_patch, lines[line_d["tootight_tacc"]], lines[line_d["tootight_advs"]]]
elif legend_type == "ideal" and "ideal_tacc" in line_d:
title = "Bound ideal"
labels = [rf"($L_2 \leq {ideal_bound[0]}$, $L_{{\infty}} \leq {ideal_bound[1]}$)"] + labels
handles = [empty_patch, lines[line_d["ideal_tacc"]], lines[line_d["ideal_advs"]]]
elif legend_type == "tooloose" and "tooloose_tacc" in line_d:
title = "Bound too loose"
labels = [rf"($L_2 \leq {tooloose_bound[0]}$, $L_{{\infty}} \leq {tooloose_bound[1]}$)"] + labels
handles = [empty_patch, lines[line_d["tooloose_tacc"]], lines[line_d["tooloose_advs"]]]
if legend_type is not None and handles is not None:
ax.legend(handles, labels, title_fontsize=20, bbox_to_anchor=(0., 1.02, 2/3, .102), mode="expand", loc="lower left", title=title, labelspacing=.05)
if use_error:
if f"{norm}_bound_tootight_advsuccess" in df.columns:
ax.fill_between(df["Round"],
df[f"{norm}_bound_tootight_advsuccess"]-df[f"{norm}_bound_tootight_advsuccess_std"],
df[f"{norm}_bound_tootight_advsuccess"]+df[f"{norm}_bound_tootight_advsuccess_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
if f"{norm}_bound_tooloose_advsuccess" in df.columns and f"{norm}_bound_tooloose_advsuccess" not in ignore_error:
ax.fill_between(df["Round"],
df[f"{norm}_bound_tooloose_advsuccess"]-df[f"{norm}_bound_tooloose_advsuccess_std"],
df[f"{norm}_bound_tooloose_advsuccess"]+df[f"{norm}_bound_tooloose_advsuccess_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
elif f"{norm}_bound_tooloose_advsuccess" in ignore_error:
ax.annotate('* std large', xy=(500, 0.32), color=colors[2], xycoords='data', xytext=(0, 0), textcoords='offset points', horizontalalignment='right', verticalalignment='bottom')
if f"{norm}_bound_ideal_advsuccess" in df.columns:
ax.fill_between(df["Round"],
df[f"{norm}_bound_ideal_advsuccess"]-df[f"{norm}_bound_ideal_advsuccess_std"],
df[f"{norm}_bound_ideal_advsuccess"]+df[f"{norm}_bound_ideal_advsuccess_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
if f"{norm}_bound_tootight_testaccuracy" in df.columns:
ax.fill_between(df["Round"],
df[f"{norm}_bound_tootight_testaccuracy"]-df[f"{norm}_bound_tootight_testaccuracy_std"],
df[f"{norm}_bound_tootight_testaccuracy"]+df[f"{norm}_bound_tootight_testaccuracy_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
if f"{norm}_bound_tooloose_testaccuracy" in df.columns:
ax.fill_between(df["Round"],
df[f"{norm}_bound_tooloose_testaccuracy"]-df[f"{norm}_bound_tooloose_testaccuracy_std"],
df[f"{norm}_bound_tooloose_testaccuracy"]+df[f"{norm}_bound_tooloose_testaccuracy_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
if f"{norm}_bound_ideal_testaccuracy" in df.columns:
ax.fill_between(df["Round"],
df[f"{norm}_bound_ideal_testaccuracy"]-df[f"{norm}_bound_ideal_testaccuracy_std"],
df[f"{norm}_bound_ideal_testaccuracy"]+df[f"{norm}_bound_ideal_testaccuracy_std"],
alpha=1, edgecolor='#3F7F4C', facecolor=error_color, linewidth=0)
##########################
# General Format
##########################
#ax.set_title("Hello World")
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=1.01)
ax.set_ylabel("Accuracy")
ax.set_yticks([0,0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=xmax)
ax.set_xlabel("Rounds")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
if legend_type is not None:
ax.axis('off')
for line in plines:
line.set_visible(False)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
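Both `build_df` helpers smooth each per-round metric with a trailing rolling window (`.rolling(window_size).mean()`/`.std()`), which leaves the first `window_size - 1` rows as NaN. A small sketch of that smoothing with an illustrative column name (the real CSVs use keys like `e41_clipl2_1_evaluation/test_accuracy`):

```python
import math

import pandas as pd

df = pd.DataFrame({"Round": range(1, 6),
                   "acc": [0.2, 0.4, 0.6, 0.8, 1.0]})
window_size = 3
df["acc_rmean"] = df["acc"].rolling(window_size).mean()
df["acc_rstd"] = df["acc"].rolling(window_size).std()

# The first window_size - 1 rows are NaN; after that, each row averages
# the current round and the two preceding ones.
assert math.isnan(df["acc_rmean"].iloc[1])
assert math.isclose(df["acc_rmean"].iloc[2], 0.4)
```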
def norm_accuracy_compare_presentation_plot(plotname, norm, output_dir, legend_type=None, use_error=True, model="mnist", xmax=600, ignore_error=[], markevery=50, selector=None):
if legend_type not in [None, "tootight", "ideal", "tooloose"]:
raise ValueError(f"legend type not supported: {legend_type}")
window_size = 20
if model == "mnist":
df = pd.read_csv(os.path.join(plot_data_save_path, 'femnist_bounds_4.csv'))
l2_bound_tootight = "e41_clipl2_0_01_evaluation"
l2_bound_ideal = "e41_clipl2_1_evaluation"
l2_bound_tooloose = "e41_clipl2_35_evaluation" #e41_clipl2_100_evaluation
l8_bound_tootight = "e41_clipinf_0_0001_evaluation"
l8_bound_ideal = "e41_clipinf_0_00100_evaluation"
l8_bound_tooloose = "e41_emnist_clipinf_0_075_evaluation"
tootight_bound = (r"10^{-2}", r"10^{-4}") #(L2, L8)
ideal_bound = ("1", r"10^{-3}") #(L2, L8)
tooloose_bound = ("35", "0.075") #(L2, L8)
bounds = { # mirror the cifar branch so the title below can render the bound
"tootight": {"l2": r"10^{-2}", "l8": r"10^{-4}"},
"ideal": {"l2": "1", "l8": r"10^{-3}"},
"tooloose": {"l2": "35", "l8": "0.075"}
}
elif model == "cifar":
df = pd.read_csv(os.path.join(plot_data_save_path, 'cifar_bounds.csv'))
l2_bound_tootight = "e58_lr1_cifar_clipl2_0.5_evaluation"
l2_bound_ideal = "e58_lr1_cifar_clipl2_10_evaluation"
l2_bound_tooloose = "e58_lr1_cifar_clipl2_20_evaluation"
l8_bound_tootight = "e58_lr1_cifar_clip_0.004_evaluation"
l8_bound_ideal = "e58_lr1_cifar_clip_0.0055_evaluation"
l8_bound_tooloose = "e58_lr1_cifar_baseline_evaluation"
bounds = {
"tootight": {
"l2": 0.5,
"l8": 0.004
},
"ideal": {
"l2": 10,
"l8": 0.0055
},
"tooloose": {
"l2": 20,
"l8": r"\infty"
}
}
else:
raise ValueError(f"unknown model: {model}")
def build_df(df, norm, bound_tootight_key, bound_ideal_key, bound_tooloose_key, window_size):
if bound_tootight_key is not None:
df[f"{norm}_bound_tootight_advsuccess"] = df[f"{bound_tootight_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_tootight_testaccuracy"] = df[f"{bound_tootight_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_tootight_advsuccess_std"] = df[f"{bound_tootight_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_tootight_testaccuracy_std"] = df[f"{bound_tootight_key}/test_accuracy"].rolling(window_size).std()
if bound_ideal_key is not None:
df[f"{norm}_bound_ideal_advsuccess"] = df[f"{bound_ideal_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_ideal_testaccuracy"] = df[f"{bound_ideal_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_ideal_advsuccess_std"] = df[f"{bound_ideal_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_ideal_testaccuracy_std"] = df[f"{bound_ideal_key}/test_accuracy"].rolling(window_size).std()
if bound_tooloose_key is not None:
df[f"{norm}_bound_tooloose_advsuccess"] = df[f"{bound_tooloose_key}/adv_success"].rolling(window_size).mean()
df[f"{norm}_bound_tooloose_testaccuracy"] = df[f"{bound_tooloose_key}/test_accuracy"].rolling(window_size).mean()
df[f"{norm}_bound_tooloose_advsuccess_std"] = df[f"{bound_tooloose_key}/adv_success"].rolling(window_size).std()
df[f"{norm}_bound_tooloose_testaccuracy_std"] = df[f"{bound_tooloose_key}/test_accuracy"].rolling(window_size).std()
if "e58_lr1_cifar_baseline_evaluation/test_accuracy" in df.columns: # only present in the cifar CSV
df["baseline_testaccuracy"] = df["e58_lr1_cifar_baseline_evaluation/test_accuracy"].rolling(window_size).mean()
return df
if selector is not None:
if selector == 'tootight': # string comparison with "is" relies on interning and is unreliable
l8_bound_ideal = None
l8_bound_tooloose = None
l2_bound_ideal = None
l2_bound_tooloose = None
elif selector == 'ideal':
l8_bound_tootight, l2_bound_tootight, l8_bound_tooloose, l2_bound_tooloose = None, None, None, None
elif selector == 'tooloose':
l8_bound_tootight, l2_bound_tootight, l8_bound_ideal, l2_bound_ideal = None, None, None, None
df = build_df(df, norm="l8", bound_tootight_key=l8_bound_tootight, bound_ideal_key=l8_bound_ideal, bound_tooloose_key=l8_bound_tooloose, window_size=window_size)
df = build_df(df, norm="l2", bound_tootight_key=l2_bound_tootight, bound_ideal_key=l2_bound_ideal, bound_tooloose_key=l2_bound_tooloose, window_size=window_size)
if "Unnamed: 0" in df.columns:
df["Round"] = df["Unnamed: 0"]
name = plotname
setup_plt(square=False)
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
error_color = "0.85"
cmap = matplotlib.cm.get_cmap('Set1')
colors = [cmap(i) for i in range(8)]
linestyles = ["solid", "dotted"] #dashdot
if "baseline_testaccuracy" in df.columns:
ax.plot(df["Round"], df[f"baseline_testaccuracy"], color=colors[1], linestyle=linestyles[1], linewidth=2, marker="v", markevery=markevery, label='Main Task (baseline)')
line_d = {}
plines = []
if f"{norm}_bound_tootight_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tootight_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2, marker="s", markevery=markevery, label='Main Task')
line_d["tootight_tacc"] = len(plines)-1
if f"{norm}_bound_ideal_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_ideal_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2, marker="o", markevery=markevery, label='Main Task')
line_d["ideal_tacc"] = len(plines)-1
if f"{norm}_bound_tooloose_testaccuracy" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tooloose_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2, marker="v", markevery=markevery, label='Main Task')
line_d["tooloose_tacc"] = len(plines)-1
if f"{norm}_bound_tootight_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tootight_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2, marker="s", markevery=markevery, label='Backdoor Task')
line_d["tootight_advs"] = len(plines)-1
if f"{norm}_bound_ideal_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_ideal_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2, marker="o", markevery=markevery, label='Backdoor Task')
line_d["ideal_advs"] = len(plines)-1
if f"{norm}_bound_tooloose_advsuccess" in df.columns:
plines += ax.plot(df["Round"], df[f"{norm}_bound_tooloose_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2, marker="v", markevery=markevery, label='Backdoor Task')
line_d["tooloose_advs"] = len(plines)-1
lines = ax.get_lines()
labels = ["Main Task", "Backdoor Task"]
empty_patch = mpatches.Patch(color='none')
handles=None
ax.legend(mode="expand", loc="lower left", labelspacing=.05, bbox_to_anchor=(1.01, 0, .6, 0))
norm_map = {"l2": "L_2", "l8": r"L_\infty"}
norm_title = norm_map[norm]
norm_title_bound = rf"${norm_title} \leq {bounds[selector][norm]}$"
selector_map = {"tooloose": f"Bound too loose ({norm_title_bound})", "ideal": f"Bound ideal ({norm_title_bound})", "tootight": f"Bound too tight ({norm_title_bound})"}
plt.title(selector_map[selector])
##########################
# General Format
##########################
#ax.set_title("Hello World")
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=1.01)
ax.set_ylabel("Accuracy")
ax.set_yticks([0,0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=xmax)
ax.set_xlabel("Rounds")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
if legend_type is not None:
ax.axis('off')
for line in plines:
line.set_visible(False)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
def norm_per_round(plotname):
fig_height, fig_size, fig_width = get_large_figsize()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
benign = []
mal = []
benign_avg = [] # debug
for i in range(1, 1000, 1):
# for i in range(1, 4821, 1):
file = np.load(f'../../experiments_set/norm/normround/round_{i}.npy', allow_pickle=True)
benign_norms_l2, benign_norms_l1, mal_norms_l2, mal_norms_l1 = file[0], file[1], file[2], file[3]
benign.append(benign_norms_l2)
mal.append(mal_norms_l2[0])
benign_avg.append(np.average(benign_norms_l2))
# print(f"Reading {i}")
# plt.boxplot(benign)
plt.plot(benign_avg, label="Benign (avg)", color=colors[0], linestyle=linestyles[1], linewidth=2)
plt.plot(mal, label="Malicious", color=colors[1], linestyle=linestyles[1], linewidth=2)
plt.xlabel('Round')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("L2 Norm")
# plt.yscale("log")
plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def hypergeometric_distribution(plotname):
fig_height, fig_size, fig_width = get_large_figsize()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'size': 14, 'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
def hypergeom_calc(x, frac, total):
top = scipy.special.comb(total - int(total * frac), x)
bottom = scipy.special.comb(total, x)
return 1.0 - (top / bottom)
total_number_of_weights = 20000
fractions = [0.1, 0.25, 0.5, 0.75]
x_values = list(range(1, 101))
perc = '%'
for i, f in enumerate(fractions):
y_values = [hypergeom_calc(x, f, total_number_of_weights) for x in x_values]
x_values_perc = [float(x) / float(total_number_of_weights) for x in x_values]
label = f"{(f * 100.0):.0f}\\%"
plt.plot(x_values_perc, y_values, label=label, color=colors[i], linewidth=2)
# plt.boxplot(benign)
# plt.plot(benign_avg, label="Benign (avg)", color=colors[0], linestyle=linestyles[1], linewidth=2)
# plt.plot(mal, label="Malicious", color=colors[1], linestyle=linestyles[1], linewidth=2)
plt.xlabel(f'Parameters outside range (total weights = {total_number_of_weights})')
ax1.xaxis.set_major_formatter(ticker.PercentFormatter(xmax=1.0)) # x values are fractions of the total weights
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Detection probability")
# plt.yscale("log")
leg = plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75,
title="Percentage of parameters verified")
leg._legend_box.align = "left"
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
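The detection probability computed in `hypergeom_calc` is the survival function of a hypergeometric distribution at zero, so it can be cross-checked against `scipy.stats.hypergeom`. A minimal sketch (the standalone `detection_probability` helper is introduced here just for illustration):

```python
import scipy.special
from scipy.stats import hypergeom

def detection_probability(n_outside, frac_checked, total):
    # Same quantity as hypergeom_calc: the chance that at least one of
    # `n_outside` out-of-range parameters lands in the verified subset.
    missed = scipy.special.comb(total - int(total * frac_checked), n_outside)
    return 1.0 - missed / scipy.special.comb(total, n_outside)

total = 20000
for n_outside in (1, 10, 50):
    for frac in (0.1, 0.25, 0.5):
        # hypergeom(M=population, n=#out-of-range, N=#checked): sf(0) = P(X >= 1)
        expected = hypergeom.sf(0, total, n_outside, int(total * frac))
        assert abs(detection_probability(n_outside, frac, total) - expected) < 1e-9
```

With a single out-of-range parameter, checking 10% of 20,000 weights detects it with probability exactly 0.1, which matches the leftmost points of the plotted curves.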
def build_df_scaling_norm_advsuccess(prefix):
SCALING_FACTORS = {
10: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], # 10 clients selected -> have 10 scaling factors
20: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19], # 20 clients selected -> have 10 different scaling factors
40: [1, 5, 9, 13, 17, 23, 27, 31, 35, 40] # 40 clients -> have 10 different scaling factors
# 40: [1, 10, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 90, 100]
}
folder = "./data/l2_comparison_attack"
df_10 = pd.DataFrame(SCALING_FACTORS[10], columns=["scaling_factor"])
df_10["n_clients"] = 10
df_20 = pd.DataFrame(SCALING_FACTORS[20], columns=["scaling_factor"])
df_20["n_clients"] = 20
df_40 = pd.DataFrame(SCALING_FACTORS[40], columns=["scaling_factor"])
df_40["n_clients"] = 40
task_translation = {
"bgwall": "a2-wall",
"greencar": "a3-green",
"racingstripes": "a4-stripes"
}
for filename in os.listdir(folder):
pattern = rf"{prefix}_([a-z]+)_([0-9]+)\.csv" # escape the dot so it only matches a literal ".csv"
match = re.search(pattern, filename, re.IGNORECASE)
if match:
attack_task = match.group(1)
n_clients = int(match.group(2))
df1 = pd.read_csv(f"{folder}/{filename}")
df1 = df1.tail(n=1) # attack happens only in last round (round 5)
# select and sort all backdoor columns and all norm columns
advsucc_cols = [col for col in df1.columns if "/adv_success" in col]
l2norm_cols = [col for col in df1.columns if "_l2_total/mal" in col]
advsucc_cols.sort()
l2norm_cols.sort()
# extract two columns and merge them into df
df_advsucc = pd.DataFrame(df1[advsucc_cols].transpose().values, columns=[f"{task_translation[attack_task]}_bdoor"])
df_l2norm = pd.DataFrame(df1[l2norm_cols].transpose().values, columns=[f"{task_translation[attack_task]}_l2norm"])
df_cc = pd.concat([df_advsucc, df_l2norm], axis=1)
df_sorted = df_cc.sort_values(f"{task_translation[attack_task]}_l2norm").reset_index(drop=True)
if n_clients == 10:
df_10 = pd.concat([df_10, df_sorted], axis=1)
elif n_clients == 20:
df_20 = pd.concat([df_20, df_sorted], axis=1)
elif n_clients == 40:
df_40 = pd.concat([df_40, df_sorted], axis=1)
else:
print(f"Ignore file: {filename} with n_clients={n_clients}")
else:
print(f"No match: {filename}")
df = pd.concat([df_10, df_20, df_40])
df["alpha_fracadv"] = 1 / df["n_clients"]
return df
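The filename pattern used above can be exercised in isolation; the filenames below are made up for illustration:

```python
import re

prefix = "cifar_lenet_minloss_wr"              # example prefix (hypothetical)
pattern = rf"{prefix}_([a-z]+)_([0-9]+)\.csv"  # attack task, then client count

match = re.search(pattern, f"{prefix}_greencar_40.csv", re.IGNORECASE)
assert match is not None
assert match.group(1) == "greencar"            # attack task
assert int(match.group(2)) == 40               # number of clients

# Filenames that do not fit the pattern are skipped, mirroring the else-branch
assert re.search(pattern, "unrelated_file.csv", re.IGNORECASE) is None
```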
def scaling_factor_adv_success(plotname, output_dir, prefix=None, df=None):
if prefix is not None:
df = build_df_scaling_norm_advsuccess(prefix)
df = df[df["n_clients"]==40]
setup_plt()
task = get_task_styling()
name = plotname
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
ax2 = ax.twinx()
##########################
# Draw all the lines
##########################
linewidth = 1.5
ax2.plot(df["scaling_factor"], df["a2-wall_bdoor"], marker="o", color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax2.plot(df["scaling_factor"], df["a3-green_bdoor"], marker="o", color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax2.plot(df["scaling_factor"], df["a4-stripes_bdoor"], marker="o", color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax.plot(df["scaling_factor"], df["a2-wall_l2norm"], color=task["a2"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
ax.plot(df["scaling_factor"], df["a3-green_l2norm"], color=task["a3"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
ax.plot(df["scaling_factor"], df["a4-stripes_l2norm"], color=task["a4"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
##########################
# General Format
##########################
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
## Additional, custom legend
patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
custom_lines_styles = [Line2D([0], [0], linestyle=task["norm"]["linestyle"], lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=task["bdoor"]["linestyle"], lw=2, color=COLOR_GRAY)]
height = 0
width = 0.48
leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
mode="expand", title="Attack Tasks", bbox_to_anchor=(1.15, 1, width, height), loc="upper left", labelspacing=0.2)
leg2 = ax.legend(custom_lines_styles, [task["norm"]["label"], task["bdoor"]["label"]],
mode="expand", title="Metrics", bbox_to_anchor=(1.15, 0, width, height), loc="lower left", labelspacing=0.2)
ax.add_artist(leg1)
ax.add_artist(leg2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=None)
ax.set_ylabel("$L_2$ Norm of Update")
ax2.set_ylim(ymin=0, ymax=1.02)
ax2.set_ylabel("Task Accuracy")
ax2.set_yticks([0, 0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=None)
ax.set_xlabel("Scaling factor")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
def scaling_factor_adv_success_presentation(plotname, output_dir, prefix=None, df=None, df_stats=None, show_norm_adv=True, show_norm_benign=True):
if prefix is not None:
df = build_df_scaling_norm_advsuccess(prefix)
df = df[df["n_clients"]==40]
df = df[df["scaling_factor"] <= 60]
setup_plt()
task = get_task_styling_colorful()
name = plotname
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
ax2 = ax.twinx()
##########################
# Draw all the lines
##########################
linewidth = 1.5
ax2.plot(df["scaling_factor"], df["a2-wall_bdoor"], marker="o", color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth, label="Backdoor Task")
# ax2.plot(df["scaling_factor"], df["a3-green_bdoor"], marker="o", color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
# ax2.plot(df["scaling_factor"], df["a4-stripes_bdoor"], marker="o", color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
if show_norm_adv:
ax.plot(df["scaling_factor"], df["a2-wall_l2norm"], color=task["a3"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth, label="Backdoor Task Norm")
if show_norm_benign:
num_points = len(df["scaling_factor"])
ax.plot(df["scaling_factor"], np.repeat(df_stats[df_stats["Round"] == '1']["max"], [num_points]), color=task["a4"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth, label="Max. Benign Norm")
##########################
# General Format
##########################
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax.legend(lines2 + lines, labels2 + labels, bbox_to_anchor=(1.15, 0, .48, 0), loc="lower left", labelspacing=0.2)
##########################
# Y - Axis Format
##########################
if show_norm_adv:
ax.set_ylim(ymin=0, ymax=None)
ax.set_ylabel("$L_2$ Norm of Update")
else:
ax.set_yticklabels([])
ax2.set_ylim(ymin=0, ymax=1.02)
ax2.set_ylabel("Task Accuracy")
ax2.set_yticks([0, 0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=None)
ax.set_xlabel("Scaling factor")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
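The presentation variant above merges legend handles from the primary axis and its `twinx()` counterpart into one legend. A minimal self-contained sketch of that pattern (labels and data are placeholders):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax2 = ax.twinx()
ax.plot([1, 2, 3], [10, 20, 30], label="norm")
ax2.plot([1, 2, 3], [0.2, 0.5, 0.9], label="accuracy")

# Each axis only knows its own artists, so collect handles from both
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
leg = ax.legend(lines + lines2, labels + labels2)
assert [t.get_text() for t in leg.get_texts()] == ["norm", "accuracy"]
plt.close(fig)
```

Without the merge, `ax.legend()` alone would silently omit every series drawn on `ax2`.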
def scaling_factor_adv_success_benign_norms(plotname, output_dir, df_adv, df_stats):
setup_plt()
task = get_task_styling()
name = plotname
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
ax2 = ax.twinx()
##########################
# Draw all the lines
##########################
linewidth = 1.5
ax2.plot(df_adv["scaling_factor"], df_adv["a2-wall_bdoor"], marker="o", color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax2.plot(df_adv["scaling_factor"], df_adv["a3-green_bdoor"], marker="o", color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax2.plot(df_adv["scaling_factor"], df_adv["a4-stripes_bdoor"], marker="o", color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
ax.plot(df_adv["scaling_factor"], df_adv["a2-wall_l2norm"], color=task["a2"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
ax.plot(df_adv["scaling_factor"], df_adv["a3-green_l2norm"], color=task["a3"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
ax.plot(df_adv["scaling_factor"], df_adv["a4-stripes_l2norm"], color=task["a4"]["color"], linestyle=task["norm"]["linestyle"], linewidth=linewidth)
num_points = len(df_adv["scaling_factor"])
ax.plot(df_adv["scaling_factor"], np.repeat(df_stats[df_stats["Round"] == '1']["max"], [num_points]), color="red", linestyle=task["norm"]["linestyle"], linewidth=linewidth)
##########################
# General Format
##########################
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
## Additional, custom legend
patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
custom_lines_styles = [Line2D([0], [0], linestyle=task["norm"]["linestyle"], lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=task["bdoor"]["linestyle"], lw=2, color=COLOR_GRAY)]
height = 0
width = 0.48
leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
mode="expand", title="Attack Tasks", bbox_to_anchor=(1.15, 1, width, height), loc="upper left", labelspacing=0.2)
leg2 = ax.legend(custom_lines_styles, [task["norm"]["label"], task["bdoor"]["label"]],
mode="expand", title="Metrics", bbox_to_anchor=(1.15, 0, width, height), loc="lower left", labelspacing=0.2)
ax.add_artist(leg1)
ax.add_artist(leg2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=None)
ax.set_ylabel("$L_2$ Norm of Update")
ax2.set_ylim(ymin=0, ymax=1.02)
ax2.set_ylabel("Task Accuracy")
ax2.set_yticks([0, 0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=None)
ax.set_xlabel("Scaling factor")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df_adv
def accuracy_pgd(plotname):
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
df_pgd = pd.read_csv(os.path.join(plot_data_save_path, f'cifar_lenet_pgd.csv'))
df_baseline = pd.read_csv(os.path.join(plot_data_save_path, 'constant_attack_lenet_bound_plot.csv'))
baseline_noclip = "cifar_lenet_train_noattack_clip_100_evaluation/test_accuracy"
baseline_clip = baseline_noclip # NOTE: same column as the unclipped baseline; the clipped plot below is commented out
runs = {
"run-0": r"PGD Attack ($\gamma = 5$)",
"run-1": r"PGD Attack ($\gamma = 25$)",
"run-2": r"PGD Attack ($\gamma = 40$)"
}
linestyles = ["-", ":"]
plt.plot(df_baseline["Round"][:500], df_baseline[baseline_noclip][:500], color=colors[0], linestyle=linestyles[0],
linewidth=2)
# plt.plot(df_baseline["Round"][:500], df_baseline[baseline_clip][:500], color=colors[1], linestyle=linestyles[0],
# linewidth=2)
for i, (run, scale) in enumerate(runs.items()):
plt.plot(df_pgd["Round"], df_pgd[f"{run}_evaluation/test_accuracy"], color=colors[i+1], linestyle=linestyles[0], linewidth=2)
plt.plot(df_pgd["Round"], df_pgd[f"{run}_evaluation/adv_success"], color=colors[i+1], linestyle=linestyles[1], linewidth=2)
ax1.set_ylabel("Accuracy")
ax1.set_ylim(0, 0.6)
ax1.set_xlabel("Round")
# ax1.set_xlim(left=1)
# plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
run_type_labels = ["Baseline"]
run_type_labels.extend(list(runs.values()))
# one colored legend line per entry (baseline + the three PGD runs)
custom_lines_colors = [Line2D([0], [0], linestyle="-", lw=2, color=colors[i])
for i in range(len(run_type_labels))]
custom_lines_styles = [Line2D([0], [0], linestyle=ls, lw=2, color=COLOR_GRAY) for ls in linestyles]
leg1 = plt.legend(custom_lines_colors, run_type_labels,
bbox_to_anchor=(1., 0.43, 1., .102), loc=3, ncol=1, columnspacing=0.75)
leg2 = plt.legend(custom_lines_styles, ["Benign objective", "Malicious objective"],
bbox_to_anchor=(1., 0.13, 1., .102), loc=3, ncol=1, columnspacing=0.75,
)
# leg3 = plt.legend(handles=custom_benign, bbox_to_anchor=(1.12, -0.26, 1., .102), loc=3, ncol=1, columnspacing=0.75,
# )
leg1._legend_box.align = "left"
leg2._legend_box.align = "left"
# leg3._legend_box.align = "left"
ax1.add_artist(leg1)
ax1.add_artist(leg2)
# ax2.add_artist(leg3)
# plt.title("Comparison of $L_2$-norm of attacks under different participation rates", y=1.04, fontsize=FONT_SIZE)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def endtoend_timing_bar(plotname, bound):
timings = {
"MNIST_CONV": {
"label": "MNIST ConvNN \n (19166 param.)",
"plain": 5.604241216,
"range": {
"naive": 278.45,
"optim": 86.17
},
"l2": {
"naive": 335.77, # 339.669 seconds, 336
"optim": 38.50
}
},
"CIFAR_LENET": {
"label": "CIFAR10 LeNet \n (62006 param.)",
"plain": 7.31,
"range": {
"naive": 660.3487, # 660.3487005233765 per round
"optim": 293.35 # 323.424 per round
},
"l2": {
"naive": 801.80, # TIMING: preliminary -- measurement still running
"optim": 120.4801153 # TIMING: preliminary (subspace)
}
}
}
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
labels = [t["label"] for t in timings.values()]
x = np.arange(len(labels)) # the label locations
width = 0.15 # the width of the bars
plt.bar(x - width, [t["plain"] for t in timings.values()], width, color=colors[0], label="Plain")
plt.bar(x, [t[bound]["optim"] for t in timings.values()], width, color=colors[2], label="Optimized")
plt.bar(x + width, [t[bound]["naive"] for t in timings.values()], width, color=colors[1], label="Na\\\"{i}ve")
plt.title("Time per round")
plt.ylabel("Time (seconds)")
ax1.set_xticks(x)
ax1.set_xticklabels(labels)
plt.legend()
# plt.title("Comparison of $L_2$-norm of attacks under different participation rates", y=1.04, fontsize=FONT_SIZE)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
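The grouped bars in `endtoend_timing_bar` are positioned by offsetting a shared `np.arange` of label locations by multiples of the bar width. Just the offset arithmetic, as a sketch:

```python
import numpy as np

labels = ["MNIST ConvNN", "CIFAR10 LeNet"]
x = np.arange(len(labels))  # label locations: [0, 1]
width = 0.15

# Three side-by-side bars per label: plain, optimized, naive
left, centre, right = x - width, x, x + width
assert np.allclose(left, [-0.15, 0.85])
assert np.allclose(right, [0.15, 1.15])
```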
def norm_distribution_benign(plotname, output_dir):
df = build_df_scaling_norm_advsuccess("cifar_lenet_minloss_wr")
name = plotname
setup_plt()
task = get_task_styling()
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
for i in [6]:
file = np.load(f'./data/cifar_lenet/noniid_norms/round_{i}.npy',
allow_pickle=True)
# file = np.load(f'../../experiments_set/norm/normround/round_{i}.npy', allow_pickle=True)
benign_norms_l2, benign_norms_l1, mal_norms_l2, mal_norms_l1 = file[0], file[1], file[2], file[3]
sns.distplot(benign_norms_l2, hist=False, kde=True, color="black", norm_hist=True,
kde_kws={'shade': True, 'linewidth': 2, "alpha":0, "hatch": "///"}, ax=ax)
ax2 = ax.twinx()
alphas = {
0.025: {
"label": "2.5 %",
"linestyle": "dashed"
},
0.05: {
"label": "5 %",
"linestyle": "dashdot"
},
0.1:{
"label": "10 %",
"linestyle": "solid"
}
}
for alpha in df["alpha_fracadv"].unique():
df1 = df[df["alpha_fracadv"] == alpha]
# sort by each norm column before plotting so the lines run left to right;
# reassign rather than sorting inplace on a slice (avoids SettingWithCopyWarning)
df1 = df1.sort_values("a2-wall_l2norm")
ax2.plot(df1["a2-wall_l2norm"], df1["a2-wall_bdoor"], linestyle=alphas[alpha]["linestyle"], marker="o", color=task["a2"]["color"])
df1 = df1.sort_values("a3-green_l2norm")
ax2.plot(df1["a3-green_l2norm"], df1["a3-green_bdoor"], linestyle=alphas[alpha]["linestyle"], marker="o", color=task["a3"]["color"])
df1 = df1.sort_values("a4-stripes_l2norm")
ax2.plot(df1["a4-stripes_l2norm"], df1["a4-stripes_bdoor"], linestyle=alphas[alpha]["linestyle"], marker="o", color=task["a4"]["color"])
##########################
# General Format
##########################
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
## Additional, custom legend
patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
matplotlib.rcParams['hatch.linewidth'] = 2
custom_lines_styles = [Line2D([0], [0], linestyle=alphas[0.025]["linestyle"], lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=alphas[0.05]["linestyle"], lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=alphas[0.1]["linestyle"], lw=2, color=COLOR_GRAY)]
height = 0
width = 0.48
leg0 = ax.legend([mpatches.Patch(facecolor="white" , edgecolor="black", hatch="///", linewidth=2)], [task["benign_client"]["label"]], loc="lower right")
leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
mode="expand", title="Attack Tasks", bbox_to_anchor=(1.15, 1.05, width, height), loc="upper left", labelspacing=0.2)
leg2 = ax.legend(custom_lines_styles, [alphas[0.025]["label"], alphas[0.05]["label"], alphas[0.1]["label"]],
mode="expand", title=r"$\alpha$ (attackers)", bbox_to_anchor=(1.15, -0.05, width, height), loc="lower left", labelspacing=0.2)
ax.add_artist(leg0)
ax.add_artist(leg1)
ax.add_artist(leg2)
##########################
# Y - Axis Format
##########################
ax.set_ylabel("Density (KDE)")
ax2.set_ylim(ymin=0, ymax=1.02)
ax2.set_ylabel("Backdoor Accuracy")
ax2.set_yticks([0, 0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=None)
ax.set_xlabel("$L_2$ Norm of Updates")
#ax.set_xticks(yticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
def norm_distribution_iid_noniid(plotname):
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
# round_num = 1
# benign_norms_l2, benign_norms_l1 = [], []
#
# for round_num in range(1, 11):
# file = np.load(f'../../experiments_set/cifar_lenet/dist_iid/norms/round_{round_num}.npy',
# allow_pickle=True)
# benign_norms_l2.extend(file[0])
# benign_norms_l1.extend(file[1])
# # file = np.load(f'../../experiments_set/norm/normround/round_{i}.npy', allow_pickle=True)
# # benign_norms_l2, benign_norms_l1 = file[0], file[1]
#
# print("IID", benign_norms_l2)
# sns.distplot(benign_norms_l2, hist=True, kde=True,
# kde_kws={'shade': True, 'linewidth': 0}, ax=ax1, label=f"IID")
benign_norms_l2, benign_norms_l1 = [], []
for round_num in range(1, 11):
file = np.load(f'../../experiments_set/cifar_lenet/dist_noniid/norms/round_{round_num}.npy',
allow_pickle=True)
benign_norms_l2.extend(file[0])
benign_norms_l1.extend(file[1])
# file = np.load(f'../../experiments_set/norm/normround/round_{i}.npy', allow_pickle=True)
# benign_norms_l2, benign_norms_l1 = file[0], file[1]
print("NonIID", benign_norms_l2)
sns.distplot(benign_norms_l2, hist=False, kde=True,
kde_kws={'shade': True, 'linewidth': 0}, ax=ax1)
plt.axvline(x=1.9, ymin=0, ymax=1, label="Norm bound (1.9)", linestyle="--", color=colors[1])
ax1.set_xlabel('$L_2$-norm')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
ax1.set_ylabel("Density (benign users)")
# plt.xscale("log")
# plt.yscale("log")
plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def norm_distribution_benign_overtime(plotname):
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
f, ax1 = plt.subplots()
benign = []
mal = []
benign_avg = [] # debug
round_step = 1000
p_colors = get_progressive_colors()
for i in range(1, 100, 10):
# for i in range(1, 4821, round_step):
# for i in range(1, 4821, 1):
file = np.load(f'../../experiments_set/norm/normround/round_{i}.npy', allow_pickle=True)
benign_norms_l2, benign_norms_l1, mal_norms_l2, mal_norms_l1 = file[0], file[1], file[2], file[3]
benign.append(benign_norms_l2)
mal.append(mal_norms_l2[0])
benign_avg.append(np.average(benign_norms_l2))
# print(f"Reading {i}")
# plt.boxplot(benign)
# plt.plot(benign_avg, label="Benign (avg)", color=colors[0], linestyle=linestyles[1], linewidth=2)
# plt.plot(mal, label="Malicious", color=colors[1], linestyle=linestyles[1], linewidth=2)
# print(benign)
for i, b in enumerate(benign):
sns.distplot(b, hist=False, # KDE only, no histogram
kde=True,
kde_kws={'shade': True, 'linewidth': 0, 'clip': (0.0, 7.0)},
label=f"Round {i * 10 + 1}", color=p_colors[i]) # rounds were loaded above with stride 10
plt.xlabel('$L_2$-norm')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Density (benign users)")
# plt.yscale("log")
# plt.ylim(0, 7)
plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def squarerandproof_log_plot(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'microbench_squarerandproof32bit.csv'))
# print(df)
plot_types = ['baseline_create',
'square_create']
plot_legend = {'baseline_create': 'Randomness Proof',
'square_create': 'Squared Randomness Proof'}
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.rcParams['axes.titlepad'] = 50
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
plt.subplots()
for idx, col in enumerate(plot_types): # avoid shadowing the builtins `id` and `type`
# df.plot(x='Round', y=plot_legend[col], style='o', label=plot_legend[col], color=colors[idx], linestyle=linestyles[idx], linewidth=2)
plt.semilogx(df.parameters, df[col] / 1000.0, '-o', basex=2, label=plot_legend[col], color=colors[idx],
linestyle=linestyles[idx], linewidth=2)
plt.xlabel('Parameters')
plt.title("Create Randomness Proof (32-bit precision)")
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Time (seconds)")
# plt.yscale("log")
plt.legend(bbox_to_anchor=(-0.016, .98, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def squarerandproof_verify_log_plot(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'microbench_squarerandproof32bit.csv'))
# print(df)
plot_types = ['baseline_verify',
'square_verify']
plot_legend = {'baseline_verify': 'Randomness Proof',
'square_verify': 'Squared Randomness Proof'}
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
plt.rcParams['axes.titlepad'] = 50
colors, linestyles = get_colorful_styles()
plt.subplots()
for idx, col in enumerate(plot_types): # avoid shadowing the builtins `id` and `type`
# df.plot(x='Round', y=plot_legend[col], style='o', label=plot_legend[col], color=colors[idx], linestyle=linestyles[idx], linewidth=2)
plt.semilogx(df.parameters, df[col] / 1000.0, '-o', basex=2, label=plot_legend[col], color=colors[idx],
linestyle=linestyles[idx], linewidth=2)
plt.xlabel('Parameters')
plt.title("Verify Randomness Proof (32-bit Precision)")
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Time (seconds)")
# plt.yscale("log")
plt.legend(bbox_to_anchor=(-0.016, 0.98, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def l2proof_plots():
# print(df)
lengths = [32]
ranges = [32, 16, 8]
actions = ['create', 'verify']
for l in lengths:
df = pd.read_csv(os.path.join(plot_data_save_path, f'microbench_l2proof{l}bit.csv'))
for action in actions:
plt.figure()
plotname = f"microbenchmark_l2_{action}_{l}bit.pdf"
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
f, ax1 = plt.subplots()
for idx, r in enumerate(ranges): # avoid shadowing the builtins `id` and `range`
type_baseline = f"baseline_r{r}_{action}"
type_l2 = f"l2_r{r}_{action}"
# print(df[type_baseline])
plt.semilogx(df.parameters, df[type_baseline] / 1000.0, '-o', basex=2, color=colors[idx],
linestyle="--", linewidth=2)
plt.semilogx(df.parameters, df[type_l2] / 1000.0, '-o', basex=2,
color=colors[idx],
linestyle="-", linewidth=2)
plt.xlabel('Parameters')
plt.title(f"{action.capitalize()} Range Proof ({l}-bit precision)")
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Time (seconds)")
# plt.yscale("log")
# plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
# Additional, custom legend
custom_lines_colors = [Line2D([0], [0], linestyle="-", lw=2, color=colors[0]),
Line2D([0], [0], linestyle="-", lw=2, color=colors[1]),
Line2D([0], [0], linestyle="-", lw=2, color=colors[2])]
custom_lines_styles = [Line2D([0], [0], linestyle="-", lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle="--", lw=2, color=COLOR_GRAY)]
leg1 = plt.legend(custom_lines_colors, ["32-bit", "16-bit", "8-bit"],
bbox_to_anchor=(1., 0.50, 1., .102), loc=3, ncol=1, columnspacing=0.75, title="Range")
leg1._legend_box.align = "left"
leg2 = plt.legend(custom_lines_styles, ["$L_2$", "$L_\\infty$"], bbox_to_anchor=(1., 0.11, 1., .102), loc=3,
title="Norm",
ncol=1, columnspacing=0.75)
leg2._legend_box.align = "left"
ax1.add_artist(leg1)
ax1.add_artist(leg2)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
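Note that the `basex` keyword used in the `semilogx` calls above is the pre-3.3 matplotlib spelling; matplotlib >= 3.3 renamed it to `base`. A minimal sketch of the newer form:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# `base=2` replaces the deprecated `basex=2` from matplotlib 3.3 onwards
ax.semilogx([2, 4, 8, 16], [1, 2, 3, 4], base=2)
assert ax.get_xscale() == "log"
plt.close(fig)
```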
def l2proof_flexible_case():
df = pd.read_csv(os.path.join(plot_data_save_path, 'microbench_l2proof32bit.csv'))
# print(df)
actions = ['create', 'verify']
for action in actions:
plotname = f"microbenchmark_l2_{action}_flexible.pdf"
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
plt.subplots()
plt.semilogx(df.parameters, df[f"baseline_r32_{action}"] / 1000.0, '-o', basex=2, label="$L_\\infty$",
color=colors[1],
linestyle=linestyles[0], linewidth=2)
plt.semilogx(df.parameters, df[f"l2_r32_p8_{action}"] / 1000.0, '-o', basex=2, label="$L_2$", color=colors[0],
linestyle=linestyles[0], linewidth=2)
plt.xlabel('Parameters')
plt.title(f"{action.capitalize()} Norm Bound Proof (32-bit range, 8-bit parameters)")
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Time (seconds)")
# plt.yscale("log")
leg = plt.legend(bbox_to_anchor=(1., 0.61, 1., .102), loc=3, ncol=1, columnspacing=0.75, title="Norm")
leg._legend_box.align = "left"
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def microbench_proof_arbitrary_ranges():
df = pd.read_csv(os.path.join(plot_data_save_path, 'microbenchmark_arbitraryrange.csv'))
# print(df)
actions = ['create', 'verify']
for action in actions:
plotname = f"microbenchmark_arbitraryrange_{action}.pdf"
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
plt.subplots()
plt.semilogx(df.parameters, df[f"linf_{action}"] / 1000.0, '-o', basex=2, label="$L_\\infty$", color=colors[1],
linestyle=linestyles[0], linewidth=2)
plt.semilogx(df.parameters, df[f"l2_{action}"] / 1000.0, '-o', basex=2, label="$L_2$", color=colors[0],
linestyle=linestyles[0], linewidth=2)
plt.xlabel('Parameters')
plt.title(f"{action.capitalize()} Arbitrary Range (32-bit range, 32-bit parameters)")
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Time (seconds)")
# plt.yscale("log")
leg = plt.legend(loc=2, ncol=1, columnspacing=0.75, title="Norm")
leg._legend_box.align = "left"
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def inspect_norm_plot_lm_scale(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'femnist_norm_inspect-output.csv'))
# print(df)
# print("HEY")
plot_types = ['femnist_norm_inspect_l2_total/benign',
# 'femnist_norm_inspect_l2_total/mal',
# 'femnist_norm_inspect_data_poison_l2_total/mal',
'femnist_norm_inspect_scaled_l2_total/mal']
plot_legend = {'femnist_norm_inspect_l2_total/benign': 'Benign',
'femnist_norm_inspect_l2_total/mal': 'Mal. (LM)',
'femnist_norm_inspect_data_poison_l2_total/mal': 'Mal. (DP)',
'femnist_norm_inspect_scaled_l2_total/mal': r'Mal. (SP, scaled by $\gamma=30$)'}
pdf_pages = PdfPages('./plots_output/%s' % plotname)
params, fig_size = get_plt_params()
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
for id, type in enumerate(plot_types):
# df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.plot(df.Round, df[type], label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.xlabel('Round')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Update L2-norm")
plt.yscale("log")
plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def inspect_norm_plot(plotname):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'femnist_norm_inspect-output.csv'))
    plot_types = ['femnist_norm_inspect_l2_total/benign',
                  'femnist_norm_inspect_l2_total/mal',
                  'femnist_norm_inspect_data_poison_l2_total/mal',
                  # 'femnist_norm_inspect_scaled_l2_total/mal'
                  ]
    plot_legend = {'femnist_norm_inspect_l2_total/benign': 'Benign',
                   'femnist_norm_inspect_l2_total/mal': 'Mal. (LM)',
                   'femnist_norm_inspect_data_poison_l2_total/mal': 'Mal. (DP)',
                   'femnist_norm_inspect_scaled_l2_total/mal': 'Mal. (Segment poisoning, scaled by $\\gamma=30$)'}
    fig_height, fig_size, fig_width = get_large_figsize()
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    for id, type in enumerate(plot_types):
        plt.plot(df.Round, df[type], label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
    plt.xlabel('Round')
    plt.ylabel("Update L2-norm")
    plt.legend(bbox_to_anchor=(-0.016, 1.00, 1., .102), loc=3, ncol=4, columnspacing=0.75)
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

# TODO [nku] adjust color scheme
def modelreplacement_cifar_resnet56_plot(plotname, output_dir):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'e44_cifar_resnet.csv'))

    # rename cols
    df1 = df[["Round"]]
    df1 = df1.rename(columns={"Round": "round"})
    for suffix, short in [("test_accuracy", "testacc"), ("adv_success", "advsucc")]:
        df1[f"a2-wall_{short}"] = df[f"e44_cifar_attack_400_0.0001_full_evaluation/{suffix}"]
        df1[f"a3-green_{short}"] = df[f"e44_cifar_attack_400_0.0001_full_greencars_evaluation/{suffix}"]
        df1[f"a4-stripes_{short}"] = df[f"e44_cifar_resnet_racing_stripes_evaluation/{suffix}"]
    df = df1

    task = get_task_styling()
    name = plotname
    setup_plt()

    with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
        fig, ax = plt.subplots()

        ##########################
        # Draw all the lines
        ##########################
        linewidth = 1.5
        ax.plot(df["round"], df["a2-wall_testacc"], color=task["a2"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_testacc"], color=task["a3"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_testacc"], color=task["a4"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a2-wall_advsucc"], color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_advsucc"], color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_advsucc"], color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)

        ##########################
        # General Format
        ##########################
        # Additional, custom legend
        patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
        custom_lines_styles = [Line2D([0], [0], linestyle=task["main"]["linestyle"], lw=2, color=COLOR_GRAY),
                               Line2D([0], [0], linestyle=task["bdoor"]["linestyle"], lw=2, color=COLOR_GRAY)]
        height = 0
        width = 0.48
        leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
                         mode="expand", title="Attack Tasks", bbox_to_anchor=(1, 1, width, height), loc="upper left", labelspacing=0.2)
        leg2 = ax.legend(custom_lines_styles, [task["main"]["label"], task["bdoor"]["label"]],
                         mode="expand", title="Metrics", bbox_to_anchor=(1, 0, width, height), loc="lower left", labelspacing=0.2)
        ax.add_artist(leg1)
        ax.add_artist(leg2)
        ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)

        ##########################
        # Y-Axis Format
        ##########################
        ax.set_ylim(ymin=0, ymax=1.02)
        ax.set_ylabel("Task Accuracy")
        ax.set_yticks([0, 0.25, 0.5, 0.75, 1])

        ##########################
        # X-Axis Format
        ##########################
        ax.set_xlim(xmin=0, xmax=300)
        ax.set_xlabel("Rounds")

        pdf.savefig(bbox_inches='tight', pad_inches=0)
        plt.close()
    return fig, df

def modelreplacement_cifar_resnet18_plot(plotname, output_dir):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'e44_cifar_resnet18.csv'))

    # rename cols
    df1 = df[["Round"]]
    df1 = df1.rename(columns={"Round": "round"})
    for suffix, short in [("test_accuracy", "testacc"), ("adv_success", "advsucc")]:
        df1[f"a2-wall_{short}"] = df[f"e3_cifar_resnet18_long_WALL_lrlow10_evaluation/{suffix}"]
        df1[f"a3-green_{short}"] = df[f"e3_cifar_resnet18_long_GREEN_lrlow10_evaluation/{suffix}"]
        df1[f"a4-stripes_{short}"] = df[f"e3_cifar_resnet18_long_STRIPES_lrlow10_evaluation/{suffix}"]
    df = df1

    task = get_task_styling()
    name = plotname
    setup_plt()

    with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
        fig, ax = plt.subplots()

        ##########################
        # Draw all the lines
        ##########################
        linewidth = 1.5
        ax.plot(df["round"], df["a2-wall_testacc"], color=task["a2"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_testacc"], color=task["a3"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_testacc"], color=task["a4"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a2-wall_advsucc"], color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_advsucc"], color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_advsucc"], color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)

        ##########################
        # General Format
        ##########################
        # Additional, custom legend
        patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
        custom_lines_styles = [Line2D([0], [0], linestyle=task["main"]["linestyle"], lw=2, color=COLOR_GRAY),
                               Line2D([0], [0], linestyle=task["bdoor"]["linestyle"], lw=2, color=COLOR_GRAY)]
        height = 0
        width = 0.48
        leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
                         mode="expand", title="Attack Tasks", bbox_to_anchor=(1, 1, width, height), loc="upper left", labelspacing=0.2)
        leg2 = ax.legend(custom_lines_styles, [task["main"]["label"], task["bdoor"]["label"]],
                         mode="expand", title="Metrics", bbox_to_anchor=(1, 0, width, height), loc="lower left", labelspacing=0.2)
        ax.add_artist(leg1)
        ax.add_artist(leg2)
        ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)

        ##########################
        # Y-Axis Format
        ##########################
        ax.set_ylim(ymin=0, ymax=1.02)
        ax.set_ylabel("Task Accuracy")
        ax.set_yticks([0, 0.25, 0.5, 0.75, 1])

        ##########################
        # X-Axis Format
        ##########################
        ax.set_xlim(xmin=0, xmax=20)
        ax.set_xlabel("Rounds")

        pdf.savefig(bbox_inches='tight', pad_inches=0)
        plt.close()
    return fig, df

def modelreplacement_cifar_resnet18_lowerlr_plot(plotname, output_dir):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'e44_cifar_resnet18.csv'))

    # rename cols
    df1 = df[["Round"]]
    df1 = df1.rename(columns={"Round": "round"})
    for suffix, short in [("test_accuracy", "testacc"), ("adv_success", "advsucc")]:
        df1[f"a2-wall_{short}"] = df[f"e3_cifar_resnet18_long_WALL_lrlow_evaluation/{suffix}"]
        df1[f"a3-green_{short}"] = df[f"e3_cifar_resnet18_long_GREEN_lrlow100_evaluation/{suffix}"]
        df1[f"a4-stripes_{short}"] = df[f"e3_cifar_resnet18_long_STRIPES_lrlow100_evaluation/{suffix}"]
    df = df1

    task = get_task_styling()
    name = plotname
    setup_plt()

    with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
        fig, ax = plt.subplots()

        ##########################
        # Draw all the lines
        ##########################
        linewidth = 1.5
        ax.plot(df["round"], df["a2-wall_testacc"], color=task["a2"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_testacc"], color=task["a3"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_testacc"], color=task["a4"]["color"], linestyle=task["main"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a2-wall_advsucc"], color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a3-green_advsucc"], color=task["a3"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)
        ax.plot(df["round"], df["a4-stripes_advsucc"], color=task["a4"]["color"], linestyle=task["bdoor"]["linestyle"], linewidth=linewidth)

        ##########################
        # General Format
        ##########################
        # Additional, custom legend
        patches = [mpatches.Patch(color=task["a2"]["color"]), mpatches.Patch(color=task["a3"]["color"]), mpatches.Patch(color=task["a4"]["color"])]
        custom_lines_styles = [Line2D([0], [0], linestyle=task["main"]["linestyle"], lw=2, color=COLOR_GRAY),
                               Line2D([0], [0], linestyle=task["bdoor"]["linestyle"], lw=2, color=COLOR_GRAY)]
        height = 0
        width = 0.48
        leg1 = ax.legend(patches, [task["a2"]["label"], task["a3"]["label"], task["a4"]["label"]],
                         mode="expand", title="Attack Tasks", bbox_to_anchor=(1, 1, width, height), loc="upper left", labelspacing=0.2)
        leg2 = ax.legend(custom_lines_styles, [task["main"]["label"], task["bdoor"]["label"]],
                         mode="expand", title="Metrics", bbox_to_anchor=(1, 0, width, height), loc="lower left", labelspacing=0.2)
        ax.add_artist(leg1)
        ax.add_artist(leg2)
        ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)

        ##########################
        # Y-Axis Format
        ##########################
        ax.set_ylim(ymin=0, ymax=1.02)
        ax.set_ylabel("Task Accuracy")
        ax.set_yticks([0, 0.25, 0.5, 0.75, 1])

        ##########################
        # X-Axis Format
        ##########################
        ax.set_xlim(xmin=0, xmax=300)
        ax.set_xlabel("Rounds")

        pdf.savefig(bbox_inches='tight', pad_inches=0)
        plt.close()
    return fig, df

def modelreplacement_cifar_resnet18_presentation_plot(plotname, output_dir):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'e44_cifar_resnet18.csv'))

    # rename cols
    df1 = df[["Round"]]
    df1 = df1.rename(columns={"Round": "round"})
    for suffix, short in [("test_accuracy", "testacc"), ("adv_success", "advsucc")]:
        df1[f"a2-wall_{short}"] = df[f"e3_cifar_resnet18_long_WALL_lrlow_evaluation/{suffix}"]
        df1[f"a3-green_{short}"] = df[f"e3_cifar_resnet18_long_GREEN_lrlow100_evaluation/{suffix}"]
        df1[f"a4-stripes_{short}"] = df[f"e3_cifar_resnet18_long_STRIPES_lrlow100_evaluation/{suffix}"]
    df = df1

    task = get_task_styling_colorful()
    name = plotname
    setup_plt()

    with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
        fig, ax = plt.subplots()

        ##########################
        # Draw all the lines
        ##########################
        # Presentation version: only the wall backdoor task is shown.
        linewidth = 1.5
        ax.plot(df["round"], df["a2-wall_testacc"], color=task["a4"]["color"], linestyle=task["main"]["linestyle"],
                linewidth=linewidth, label="Main Task")
        ax.plot(df["round"], df["a2-wall_advsucc"], color=task["a2"]["color"], linestyle=task["bdoor"]["linestyle"],
                linewidth=linewidth, label="Backdoor Task")

        ##########################
        # General Format
        ##########################
        ax.legend(bbox_to_anchor=(1, 1, .48, 0), loc="upper left", labelspacing=0.2)
        ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)

        ##########################
        # Y-Axis Format
        ##########################
        ax.set_ylim(ymin=0, ymax=1.02)
        ax.set_ylabel("Task Accuracy")
        ax.set_yticks([0, 0.25, 0.5, 0.75, 1])

        ##########################
        # X-Axis Format
        ##########################
        ax.set_xlim(xmin=0, xmax=300)
        ax.set_xlabel("Rounds")

        pdf.savefig(bbox_inches='tight', pad_inches=0)
        plt.close()
    return fig, df

def modelreplacement_subspacepoisoning_attack_compare(plotname):
    df = pd.read_csv(os.path.join(plot_data_save_path, 'e44_cifar_resnet.csv'))
    df["Round"] = df["Round"].apply(lambda x: x - 5)
    plot_types = [
        'e44_cifar_resnet_racing_stripes_evaluation',
        'resnet_cifar_greencars_lm_cmp_evaluation'  # It says green cars but it is actually racing stripes!
    ]
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    legend_custom = {
        'adv_success': 'Malicious objective',
        'test_accuracy': 'Benign objective'
    }
    linestyles_custom = {
        'adv_success': ':',
        'test_accuracy': '-'
    }
    colors_custom = {
        'resnet_cifar_greencars_lm_cmp_evaluation': colors[0],
        'e44_cifar_resnet_racing_stripes_evaluation': colors[1],
    }
    for id, type in enumerate(plot_types):
        for suffix in ['adv_success', 'test_accuracy']:
            plt.plot(df.Round, df[f"{type}/{suffix}"], label=legend_custom[suffix], color=colors_custom[type],
                     linestyle=linestyles_custom[suffix], linewidth=2)
    plt.xlabel('Round')
    plt.ylabel("Accuracy")
    plt.ylim(ymin=0, ymax=1.0)
    plt.xlim(xmin=-5, xmax=430)
    start, end = ax1.get_xlim()
    xticks = np.arange(0, end + 1, 100)
    ax1.xaxis.set_ticks(xticks)
    # Additional, custom legend
    custom_lines_colors = [Line2D([0], [0], linestyle="-", lw=2, color=colors[1]),
                           Line2D([0], [0], linestyle="-", lw=2, color=colors[0])]
    custom_lines_styles = [Line2D([0], [0], linestyle="-", lw=2, color=COLOR_GRAY),
                           Line2D([0], [0], linestyle=":", lw=2, color=COLOR_GRAY)]
    leg1 = plt.legend(custom_lines_colors, ["Model replacement", "Subspace poisoning"],
                      bbox_to_anchor=(1., 0.69, 1., .102), loc=3, ncol=1, columnspacing=0.75)
    leg2 = plt.legend(custom_lines_styles, ["Benign objective", "Malicious objective"],
                      bbox_to_anchor=(1., 0.39, 1., .102), loc=3, ncol=1, columnspacing=0.75)
    ax1.add_artist(leg1)
    ax1.add_artist(leg2)
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

def prio_accuracy_plot(plotname):
    # Proof generation time in seconds, keyed by parameter count.
    data_prio = {
        5: 0.002185,
        25: 0.106695,
        250: 0.308112,
        2500: 2.59,
        32768: 34.884178,
        50000: 44.497297,
        100000: 89.693458,
        200000: 187.233027
    }
    data_me = {
        1024: 445.9 / 1000.0,
        2048: 897.2 / 1000.0,
        4096: 1798.075 / 1000.0,
        8192: 3601.8 / 1000.0,
        16384: 7221.125 / 1000.0,
        32768: 14621.25 / 1000.0
    }
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    p_x, p_y = zip(*data_prio.items())
    m_x, m_y = zip(*data_me.items())
    plt.plot(p_x, p_y, '-o', color=colors[0], label="Prio", linewidth=2)
    plt.plot(m_x, m_y, '-o', color=colors[1], label="Bulletproofs", linewidth=2)
    plt.xlabel('Parameters')
    plt.ylabel("Time (seconds)")
    plt.title("Range proof generation time per client")
    plt.legend()
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

def endtoend_accuracy_plot(plotname, dataset, title, ymin, ymax):
    plot_types = {
        f'{dataset}_plain_baseline.csv': "Plain",
        f'{dataset}_range_old_slow.csv': "Na\\\"{i}ve",
        f'{dataset}_range_optim_slow.csv': "Optimized"
    }
    eval_save_path = os.path.join(plot_data_save_path, "endtoend")
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    for id, (type, name) in enumerate(plot_types.items()):
        df = pd.read_csv(os.path.join(eval_save_path, type), header=None)
        plt.plot(df[0][:40], df[2][:40], '-o', color=colors[id], label=name, linewidth=2)
    plt.xlabel('Round')
    plt.ylim(ymin, ymax)
    plt.ylabel("Accuracy")
    plt.title(title)
    plt.legend()
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

def endtoend_accuracy_four_plot(plotname):
    plot_types = {
        "mnist": {
            "plain": "mnist_plain_baseline.csv",
            "range": {
                'mnist_range_old_slow.csv': "Na\\\"{i}ve",
                'mnist_range_optim_randproof.csv': "Optimized"
            },
            "l2": {
                'mnist_range_old_slow.csv': "Na\\\"{i}ve",
                'mnist_l2_optim.csv': "Optimized"  # TODO!
            }
        },
        "cifar": {
            "plain": "cifar_lenet_plain_baseline.csv",
            "range": {
                'cifar_lenet_range_old_slow.csv': "Na\\\"{i}ve",
                'cifar_lenet_range_optim_slow.csv': "Optimized"
            },
            "l2": {
                'cifar_lenet_range_old_slow.csv': "Na\\\"{i}ve",
                'cifar_lenet_l2_optim.csv': "Optimized"
            }
        }
    }
    eval_save_path = os.path.join(plot_data_save_path, "endtoend")
    params, fig_size = get_plt_params()
    _, fig_size, _ = get_large_figsize(450.0, 0.7)
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, axs = plt.subplots(2, 2)
    labels = {
        "range": "$L_\\infty$",
        "l2": "$L_2$",
        "mnist": "MNIST",
        "cifar": "CIFAR-10"
    }
    for id, (dataset, bounds) in enumerate(plot_types.items()):
        dfplain = pd.read_csv(os.path.join(eval_save_path, bounds["plain"]), header=None)
        for x in [0, 1]:
            axs[id, x].plot(dfplain[0][:40], dfplain[2][:40], '-o', color=colors[0], label="Plain", linewidth=2)
        axs[id, 0].set(ylabel=labels[dataset])
        for index, bound in enumerate(["range", "l2"]):
            axs[1, index].set(xlabel=labels[bound])
            for optimizedIndex, (filename, label) in enumerate(bounds[bound].items()):
                df = pd.read_csv(os.path.join(eval_save_path, filename), header=None)
                axs[id, index].plot(df[0][:40], df[2][:40], '-o', color=colors[optimizedIndex + 1], label=label,
                                    linewidth=2)
    for i in [0, 1]:
        axs[0, i].set_ylim(0.8, 1.0)
        axs[1, i].set_ylim(0, 0.6)
    for ax in axs.flat:
        ax.grid(True, linestyle=':', color='0.8', zorder=0)
    plt.legend()
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

def bandwidth_bounds_four_plot(plotname):
    params, fig_size = get_plt_params()
    params['legend.fontsize'] = FONT_SIZE - 4
    _, fig_size, _ = get_large_figsize(450.0, 0.5)
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.rcParams.update(params)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, axs = plt.subplots(1, 2)
    labels = {
        "range": "$L_\\infty$",
        "l2": "$L_2$",
        "mnist": "MNIST",
        "cifar": "CIFAR-10"
    }

    def next_pow(x):
        # Smallest power of two >= x.
        return pow(2, math.ceil(math.log(x, 2)))

    def linf_baseline(D, n, p):
        # Per-client message-size components in bytes:
        # commitments, range proofs, randomness proofs.
        return (32 * 2 * D,
                32 * p * (math.log(n, 2) + math.log(next_pow(D / p), 2) + 9),
                32 * 4 * D)

    def l2_baseline(D, n, p):
        # As above, plus squared commitments and the L2-norm range proof.
        return (32 * 2 * D,
                32 * p * (math.log(n, 2) + math.log(next_pow(D / p), 2) + 9),
                32 * 6 * D,
                32 * D,
                math.log(n, 2) + 9)

    def plaintext(D):
        return 4 * D

    n = 32
    p = 64
    print(linf_baseline(pow(2, 15), n, p))
    x = list(range(1, int(math.pow(2, 15)), 1000))
    axs[0].stackplot(x, *zip(*[linf_baseline(y, n, p) for y in x]), linewidth=2, colors=colors)
    axs[1].stackplot(x, *zip(*[l2_baseline(y, n, p) for y in x]), linewidth=2,
                     labels=["Commitments", "Range proofs", "Randomness proofs",
                             "Squared commitments", "$L_2$-norm range proof"], colors=colors)
    mkfunc = lambda x, pos: '%1.1f' % (x * 1e-6) if x >= 1e6 else '%1.1fK' % (x * 1e-3) if x >= 1e3 else '%1.1f' % x
    mkformatter = matplotlib.ticker.FuncFormatter(mkfunc)
    axs[0].set(ylabel="Message size (Mbytes)")
    axs[0].set(xlabel="Parameters ($L_\\infty$)")
    axs[1].set(xlabel="Parameters ($L_2$)")
    for id in [0, 1]:
        axs[id].set_ylim(0, 10000000)
    axs[0].plot(x, [plaintext(y) for y in x], linewidth=2, color='#000000', linestyle='--')
    axs[1].plot(x, [plaintext(y) for y in x], linewidth=2, color='#000000', label="Plaintext", linestyle='--')
    axs[1].legend(bbox_to_anchor=(-.49, 1.), loc="upper right", ncol=1, columnspacing=0.75)
    for ax in axs.flat:
        ax.yaxis.set_major_formatter(mkformatter)
        ax.grid(True, linestyle=':', color='0.8', zorder=0)
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

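As a standalone sanity check of the message-size model above, the following sketch (not part of the original script) recomputes the $L_\infty$ components assuming 32-byte group elements and base-2 logarithms throughout; the names `next_pow2` and `linf_message_size` are hypothetical helpers, with `D` the parameter count, `n` the range bit-width, and `p` the number of batched range proofs, matching the meanings assumed in `linf_baseline`.

```python
import math


def next_pow2(x):
    # Smallest power of two >= x.
    return 2 ** math.ceil(math.log2(x))


def linf_message_size(D, n, p):
    # Per-client message size in bytes for the L-inf check:
    # value commitments, batched range proofs, and randomness proofs,
    # each encoded as 32-byte group elements.
    commitments = 32 * 2 * D
    range_proofs = 32 * p * (math.log2(n) + math.log2(next_pow2(D / p)) + 9)
    randomness_proofs = 32 * 4 * D
    return commitments + range_proofs + randomness_proofs
```

For example, at the script's operating point (`D = 2**15`, `n = 32`, `p = 64`) this model gives roughly 6.3 MB per client, which matches the y-axis scale of the stackplot.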
def weight_distribution_plot_plus_l2(plotname):
    tags = [
        'histogram_ben/l6_Dense',
        'histogram_ben/l0_Conv2D',
        'histogram_ben/l2_Conv2D',
        'histogram_ben/l4_Dense',
        'histogram_ben/l8_Dense'
    ]
    tags_mal = [
        'histogram_mal/l6_Dense',
        'histogram_mal/l0_Conv2D',
        'histogram_mal/l2_Conv2D',
        'histogram_mal/l4_Dense',
        'histogram_mal/l8_Dense'
    ]
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    plt.rcParams.update(params)
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    _, dist_ben_nineteen = extract_histogram(
        '../../experiments_set/cifar_lenet/cifar_lenet_bgwall_40_dist/events/events.out.tfevents.1592158373.ip-172-31-1-86.eu-central-1.compute.internal',
        tags,
        [5])
    # shuffle randomly, then select
    bins = np.arange(-0.03263, 0.02697, 0.0009)
    dist_ben = dist_ben_nineteen
    print(dist_ben.shape)
    display_hist = True
    display_kde = False
    sns.distplot(dist_ben, bins=bins, hist=display_hist, kde=display_kde, norm_hist=True,
                 kde_kws={'shade': True, 'linewidth': 0}, hist_kws={'weights': np.repeat(1. / 19., dist_ben.shape[0])},
                 color=colors[0], label="Benign", ax=ax1)
    del dist_ben
    print('Done with ben')
    _, dist_mal = extract_histogram(
        '../../experiments_set/cifar_lenet/cifar_lenet_bgwall_40_dist/events/events.out.tfevents.1592158373.ip-172-31-1-86.eu-central-1.compute.internal',
        tags_mal,
        [5])  # For now
    print(dist_mal.shape)
    bins = np.arange(-0.1125, 0.1125, 0.005)
    sns.distplot(dist_mal, bins=bins, hist=display_hist, kde=display_kde, norm_hist=True,
                 kde_kws={'shade': True, 'linewidth': 0}, color=colors[1], label="Malicious", ax=ax1)
    del dist_mal
    ax1.set_xlabel("Weight")
    ax1.set_ylabel("Density")
    custom_benign = [Patch(facecolor=colors[0], label="Benign (0.99)"),
                     Patch(facecolor=colors[1], label="Malicious (8.38)")]
    leg1 = plt.legend(handles=custom_benign, title="Client type ($L_2$)")
    leg1._legend_box.align = "left"
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

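The benign histogram above weights every sample by 1/19 because the extracted array pools the updates of nineteen clients, so the density reflects a per-client average rather than the pooled total. A minimal numpy sketch of that weighting idea in isolation (the values and the three-client setup are made up, not taken from the experiments):

```python
import numpy as np

# Pooled samples from 3 hypothetical clients, 4 samples each.
samples = np.array([0.1, 0.2, 0.2, 0.3,
                    0.1, 0.1, 0.4, 0.2,
                    0.3, 0.2, 0.1, 0.2])
n_clients = 3

# Weight each sample by 1/n_clients so the histogram mass corresponds to
# one average client instead of the pooled total of all clients.
weights = np.repeat(1.0 / n_clients, samples.shape[0])
counts, edges = np.histogram(samples, bins=4, weights=weights)

# Total weighted mass equals the samples-per-client count (12 / 3 = 4).
print(counts.sum())
```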
def weight_distribution_plot_plus_l2_attack(plotname):
    tags_mal = [
        'histogram_mal/l6_Dense',
        'histogram_mal/l0_Conv2D',
        'histogram_mal/l2_Conv2D',
        'histogram_mal/l4_Dense',
        'histogram_mal/l8_Dense'
    ]
    params, fig_size = get_plt_params()
    pdf_pages = PdfPages('./plots_output/%s' % plotname)
    plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
    plt.rc('pdf', fonttype=42)  # IMPORTANT to get rid of Type 3
    plt.rcParams.update(params)
    matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    matplotlib.rc('text', usetex=True)
    colors, linestyles = get_colorful_styles()
    f, ax1 = plt.subplots()
    display_hist = True
    display_kde = False
    _, dist_mal = extract_histogram(
        '../../experiments_set/cifar_lenet/cifar_lenet_bgwall_40_dist/events/events.out.tfevents.1592158373.ip-172-31-1-86.eu-central-1.compute.internal',
        tags_mal,
        [5])  # For now
    print(dist_mal.shape)
    bins = np.arange(-0.1125, 0.1125, 0.005)
    sns.distplot(dist_mal, bins=bins, hist=display_hist, kde=display_kde, norm_hist=True,
                 kde_kws={'shade': True, 'linewidth': 0}, color=colors[0], label="Malicious", ax=ax1)
    del dist_mal
    print("Second attack")
    scale_factor = 40. / 100.
    _, dist_mal_modelreplacement = extract_histogram(
        '../../experiments_set/cifar_lenet/cifar_lenet5_bgwall/run-3/events/events.out.tfevents.1591807122.ip-172-31-1-86.eu-central-1.compute.internal',
        tags_mal,
        [5])  # For now
    print(dist_mal_modelreplacement.shape)
    dist_mal_modelreplacement = dist_mal_modelreplacement * scale_factor
    sns.distplot(dist_mal_modelreplacement, hist=display_hist, kde=display_kde, norm_hist=True,
                 kde_kws={'shade': True, 'linewidth': 0}, color=colors[1], label="Malicious (model replacement)", ax=ax1)
    ax1.set_xlabel("Weight")
    ax1.set_ylabel("Density")
    custom_benign = [Patch(facecolor=colors[0], label="Subspace p. (8.38)"),
                     Patch(facecolor=colors[1], label="Model repl. (8.38)")]
    leg1 = plt.legend(handles=custom_benign, title="Client type ($L_2$)")
    leg1._legend_box.align = "left"
    plt.grid(True, linestyle=':', color='0.8', zorder=0)
    F = plt.gcf()
    F.set_size_inches(fig_size)
    pdf_pages.savefig(F, bbox_inches='tight')
    plt.clf()
    pdf_pages.close()

def quantization_mnist(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'quantization_emnist.csv'))
# print(df)
plot_types = [
# 'quantization_baseline_evaluation/test_accuracy',
# 'quantization_prob_evaluation/test_accuracy',
# 'quantization_deterministic_evaluation/test_accuracy',
# 'quantization_prob_higher_loss_evaluation/test_accuracy'
'quantization_emnist_baseline_evaluation/test_accuracy',
'quantization_emnist_p_8_7_evaluation/test_accuracy',
'quantization_emnist_p_4_3_evaluation/test_accuracy',
'quantization_emnist_d_8_7_evaluation/test_accuracy',
# 'quantization_mnist5_prob_1_1_evaluation/test_accuracy'
]
plot_legend = {
# 'quantization_baseline_evaluation/test_accuracy': "No quantization",
# 'quantization_prob_evaluation/test_accuracy': "(16-7)-p)",
# 'quantization_deterministic_evaluation/test_accuracy': "(16-7)-d",
# "quantization_prob_higher_loss_evaluation/test_accuracy": "(8-4)-p"
'quantization_emnist_baseline_evaluation/test_accuracy': '32-bit float',
'quantization_emnist_p_8_7_evaluation/test_accuracy': '(8,7)-prob.',
'quantization_emnist_p_4_3_evaluation/test_accuracy': '(4,3)-prob.',
'quantization_emnist_d_8_7_evaluation/test_accuracy': '(8,7)-det.',
# 'quantization_mnist5_prob_1_1_evaluation/test_accuracy': "(1-1)-p"
}
fig_height, fig_size, fig_width = get_large_figsize()
params, fig_size = get_plt_params()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
plt.rcParams.update(params)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
linestyles = ["-", "--", ":"]
for id, type in enumerate(plot_types):
# df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.plot(df.Round[0:1100], df[type][0:1100], label=plot_legend[type], color=colors[id], linestyle="-",
linewidth=2)
plt.xlabel('Round')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Accuracy")
plt.ylim(0.9, 1.0)
# plt.xlim(0, 1000)
plt.legend()
# plt.legend(bbox_to_anchor=(-0.016, 1., 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def cifar_client_comparison_unbounded(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'cifar_lenet_client_comparison.csv'))
# print(df)
runs = {
"run-0": 0.02, "run-1": 0.01, "run-2": 1. / 150., "run-3": 0.005
}
fig_height, fig_size, fig_width = get_large_figsize()
params, fig_size = get_plt_params()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
plt.rcParams.update(params)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
values_tuples = [(alpha, df[f"{type}_evaluation/adv_success"][4]) for (type, alpha) in runs.items()]
values = list(zip(*values_tuples))
# for id, (type, alpha) in enumerate(runs.items()):
# # df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
# key = f"{type}_evaluation/adv_success"
plt.plot(values[0], values[1], '-o', color=colors[1], label="Green cars", linestyle="-", linewidth=2)
plt.xlabel('Adversarial fraction $\\alpha$')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
ax1.xaxis.set_major_formatter(ticker.PercentFormatter())
plt.ylabel("Adversarial accuracy")
plt.ylim(0.5, 1.0)
# plt.xlim(0, 1000)
plt.legend(loc='lower right')
# plt.legend(bbox_to_anchor=(-0.016, 1., 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def modelreplacement_cifar_clip_plot(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'modelreplacement.csv'))
df["Round"] = df["Round"].apply(lambda x: x - 5)
# print(df)
def cust(plt, ax):
plt.xlim(xmin=-5, xmax=300)
plt.ylim(ymin=0, ymax=1.0)
start, end = ax.get_xlim()
xticks = np.arange(0, end + 1, 20)
# np.insert(xticks, 0, -5, axis=0)
ax.xaxis.set_ticks(xticks)
plot_types = ['Adversarial objective (clipped)', 'Benign objective (clipped)']
plot_legend = {'Benign objective (clipped)': 'Benign objective',
'Adversarial objective (clipped)': 'Adversarial objective'}
plot_accuracy_round(plotname, df, plot_types, plot_legend, cust)
def constant_attack_lenet_bound_plot(plotname):
df = pd.read_csv(os.path.join(plot_data_save_path, 'constant_attack_lenet_bound_plot.csv'))
df["Round"] = df["Round"].apply(lambda x: x - 5)
# print(df)
plot_types = [
'cifar_lenet_noniid_evaluation',
'cifar_lenet_train_noattack_clip_100_evaluation',
'cifar_lenet_train_repeated_greencar_100_evaluation'
]
params, fig_size = get_plt_params()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
plt.rcParams.update(params)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
plt.rcParams.update(params)
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
legend_custom = {
'adv_success': 'Malicious objective',
'test_accuracy': 'Benign objective'
}
linestyles_custom = {
'adv_success': ':',
'test_accuracy': '-'
}
colors_custom = {
'cifar_lenet_noniid_evaluation': colors[0],
'cifar_lenet_train_noattack_clip_100_evaluation': colors[1],
'cifar_lenet_train_repeated_greencar_100_evaluation': colors[2] # add colors 1
}
for id, type in enumerate(plot_types[::-1]):
for suffix in ['adv_success', 'test_accuracy']:
# print(f"{type}/{suffix}")
# print(df[f"{type}/{suffix}"])
# df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.plot(df.Round, df[f"{type}/{suffix}"], label=legend_custom[suffix], color=colors_custom[type],
linestyle=linestyles_custom[suffix], linewidth=2)
plt.xlabel('Round')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Accuracy")
plt.xlim(xmin=0, xmax=3927)
start, end = ax1.get_xlim()
# xticks = np.arange(0, end + 1, 100)
# np.insert(xticks, 0, -5, axis=0)
# ax1.xaxis.set_ticks(xticks)
# Additional, custom legend
custom_lines_colors = [Line2D([0], [0], linestyle="-", lw=2, color=colors[0]),
Line2D([0], [0], linestyle="-", lw=2, color=colors[1]),
Line2D([0], [0], linestyle="-", lw=2, color=colors[2])]
custom_lines_styles = [Line2D([0], [0], linestyle="-", lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=":", lw=2, color=COLOR_GRAY)]
leg1 = plt.legend(custom_lines_colors, ["Baseline", "Clipped ($L_2$)", "Attack, Clipped ($L_2$)"],
bbox_to_anchor=(1., 0.55, 1., .102), loc=3, ncol=1, columnspacing=0.75)
leg2 = plt.legend(custom_lines_styles, ["Benign objective", "Malicious objective"],
bbox_to_anchor=(1., 0.26, 1., .102), loc=3, ncol=1,
columnspacing=0.75)
ax1.add_artist(leg1)
ax1.add_artist(leg2)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def plot_accuracy_round(plotname, df, plot_types, plot_legend, customize=None):
fig_height, fig_size, fig_width = get_large_figsize()
params, fig_size = get_plt_params()
pdf_pages = PdfPages('./plots_output/%s' % plotname)
plt.rcParams.update(params)
plt.axes([0.12, 0.32, 0.85, 0.63], frameon=True)
plt.rc('pdf', fonttype=42) # IMPORTANT to get rid of Type 3
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
matplotlib.rc('text', usetex=True)
colors, linestyles = get_colorful_styles()
f, ax1 = plt.subplots()
linestyles = ["-", "--", ":"]
for id, type in enumerate(plot_types):
# df.plot(x='Round', y=plot_legend[type], style='o', label=plot_legend[type], color=colors[id], linestyle=linestyles[id], linewidth=2)
plt.plot(df.Round, df[type], label=plot_legend[type], color=colors[id], linestyle="-", linewidth=2)
plt.xlabel('Round')
# ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '%.0fk' % (y * 1e-3)))
plt.ylabel("Accuracy")
if customize is not None:
customize(plt, ax1)
plt.legend(bbox_to_anchor=(-0.016, 1., 1., .102), loc=3, ncol=4, columnspacing=0.75)
plt.grid(True, linestyle=':', color='0.8', zorder=0)
F = plt.gcf()
F.set_size_inches(fig_size)
pdf_pages.savefig(F, bbox_inches='tight')
plt.clf()
pdf_pages.close()
def edgecases_norm_bound_plot(plotname, output_dir, df, show_blackbox=False):
window_size = 20
markevery = 50
xmax = 900
print(df.columns)
df["pgd_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_pgd_evaluation/test_accuracy"].rolling(window_size).mean()
df["pgd_advsuccess"] = df["e63_edgecase_attack_clipl2_2_pgd_evaluation/adv_success"].rolling(window_size).mean()
df["blackbox_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_blackbox_evaluation/test_accuracy"].rolling(window_size).mean()
df["blackbox_advsuccess"] = df["e63_edgecase_attack_clipl2_2_blackbox_evaluation/adv_success"].rolling(window_size).mean()
df["noedge_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_blackbox_backdoor_tasks_evaluation/test_accuracy"].rolling(window_size).mean()
df["noedge_advsuccess"] = df["e63_edgecase_attack_clipl2_2_blackbox_backdoor_tasks_evaluation/adv_success"].rolling(window_size).mean()
name = plotname
setup_plt(square=False)
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
error_color = "0.85"
cmap = matplotlib.cm.get_cmap('Set1')
colors = [cmap(i) for i in range(8)]
linestyles = ["solid", "dotted", "dashdot"] #dashdot
ax.plot(df["Round"], df["pgd_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2, label='Main Task')
ax.plot(df["Round"], df["pgd_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2, label='Backdoor Task (PGD)')
if show_blackbox:
ax.plot(df["Round"], df["blackbox_advsuccess"], color=colors[0], linestyle=linestyles[2], linewidth=2, label='Backdoor Task (Blackbox)')
lines = ax.get_lines()
labels = ["Main Task", "Backdoor Task"]
empty_patch = mpatches.Patch(color='none')
handles=None
ax.legend(mode="expand", loc="lower left", labelspacing=.05, bbox_to_anchor=(1.01, 0, .66 if show_blackbox else .6, 0))
##########################
# General Format
##########################
#ax.set_title("Hello World")
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=1.01)
ax.set_ylabel("Accuracy")
ax.set_yticks([0,0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=xmax)
ax.set_xlabel("Rounds")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
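The curves in this plot are smoothed with pandas `Series.rolling(window_size).mean()` before drawing. A minimal dependency-free sketch of those rolling-mean semantics (the first `window - 1` positions have no full trailing window, which pandas reports as NaN; `None` stands in for NaN here):

```python
def rolling_mean(values, window):
    # Mirrors pandas Series.rolling(window).mean(): the mean of each
    # trailing window of length `window`, undefined until the window fills.
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out
```

For example, `rolling_mean([1, 2, 3, 4], 2)` yields `[None, 1.5, 2.5, 3.5]`, matching what the `window_size = 20` smoothing above does to each accuracy series.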
def edgecases_norm_bound_noise_plot(plotname, output_dir, df):
window_size = 20
xmax = 900
print(df.columns)
df["pgd_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_pgd_evaluation/test_accuracy"].rolling(window_size).mean()
df["pgd_advsuccess"] = df["e63_edgecase_attack_clipl2_2_pgd_evaluation/adv_success"].rolling(window_size).mean()
df["blackbox_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_blackbox_evaluation/test_accuracy"].rolling(window_size).mean()
df["blackbox_advsuccess"] = df["e63_edgecase_attack_clipl2_2_blackbox_evaluation/adv_success"].rolling(window_size).mean()
df["noise_testaccuracy"] = df["e63_edgecase_attack_clipl2_2_blackbox_noise_0_025_evaluation/test_accuracy"].rolling(window_size).mean()
df["noise_advsuccess"] = df["e63_edgecase_attack_clipl2_2_blackbox_noise_0_025_evaluation/adv_success"].rolling(window_size).mean()
name = plotname
setup_plt(square=False)
with PdfPages(f"{output_dir}/{name}.pdf") as pdf:
fig, ax = plt.subplots()
##########################
# Draw all the lines
##########################
error_color = "0.85"
cmap = matplotlib.cm.get_cmap('Set1')
colors = [cmap(i) for i in range(8)]
linestyles = ["solid", "dotted", "dashdot"] #dashdot
ax.plot(df["Round"], df["pgd_testaccuracy"], color=colors[2], linestyle=linestyles[0], linewidth=2)
ax.plot(df["Round"], df["pgd_advsuccess"], color=colors[0], linestyle=linestyles[0], linewidth=2)
ax.plot(df["Round"], df["noise_testaccuracy"], color=colors[2], linestyle=linestyles[1], linewidth=2)
ax.plot(df["Round"], df["noise_advsuccess"], color=colors[0], linestyle=linestyles[1], linewidth=2)
##########################
# Custom legend
##########################
patches = [mpatches.Patch(color=colors[2]), mpatches.Patch(color=colors[0])]
custom_lines_styles = [Line2D([0], [0], linestyle=linestyles[0], lw=2, color=COLOR_GRAY),
Line2D([0], [0], linestyle=linestyles[1], lw=2, color=COLOR_GRAY)]
height = 0
width = 0.6
leg1 = ax.legend(patches, ["Main Task", "Backdoor Task"],
mode="expand", title="Tasks", bbox_to_anchor=(1.01, 1, width, height), loc="upper left", labelspacing=0.2)
leg2 = ax.legend(custom_lines_styles, ["Baseline", "Noise ($\\sigma = 0.025$)"],
mode="expand", title="Metrics", bbox_to_anchor=(1.01, 0, width, height), loc="lower left", labelspacing=0.2)
leg1._legend_box.align = "left"
leg2._legend_box.align = "left"
ax.add_artist(leg1)
ax.add_artist(leg2)
##########################
# General Format
##########################
#ax.set_title("Hello World")
ax.grid(True, axis="y", linestyle=':', color='0.6', zorder=0, linewidth=1.2)
##########################
# Y - Axis Format
##########################
ax.set_ylim(ymin=0, ymax=1.01)
ax.set_ylabel("Accuracy")
ax.set_yticks([0,0.25, 0.5, 0.75, 1])
#ax.set_yticklabels(labels, fontsize=16, rotation=345)
##########################
# X - Axis Format
##########################
ax.set_xlim(xmin=0, xmax=xmax)
ax.set_xlabel("Rounds")
#ax.set_xticks(xticks)
#ax.set_xticklabels(labels, fontsize=16, rotation=345)
pdf.savefig(bbox_inches='tight', pad_inches=0)
plt.close()
return fig, df
def main():
return
# e2e Plots
selection = 'all'
if len(sys.argv) > 1:
selection = sys.argv[1]
if selection == 'modelreplacement_cifar' or selection == 'all':
modelreplacement_cifar_resnet56_plot("modelreplacement_cifar.pdf")
if selection == 'modelreplacement_subspacepoisoning_attack_compare' or selection == 'all':
modelreplacement_subspacepoisoning_attack_compare("modelreplacement_subspacepoisoning_attack_compare.pdf")
if selection == 'modelreplacement_cifar_clip' or selection == 'all':
modelreplacement_cifar_clip_plot("modelreplacement_cifar_clip.pdf")
if selection == 'inspectnorm_fmnist' or selection == 'all':
inspect_norm_plot("inspectnorm_fmnist.pdf")
if selection == 'inspectnorm_fmnist_lm_scale' or selection == 'all':
inspect_norm_plot_lm_scale("inspectnorm_fmnist_lm_scale.pdf")
if selection == 'microbenchmark_randproof' or selection == 'all':
squarerandproof_log_plot("microbenchmark_create_randproof.pdf")
if selection == 'microbenchmark_randproof' or selection == 'all':
squarerandproof_verify_log_plot("microbenchmark_verify_randproof.pdf")
if selection == 'norm_per_round' or selection == 'all':
norm_per_round("norm_per_round.pdf")
if selection == 'norm_distribution_benign' or selection == 'all':
norm_distribution_benign("norm_distribution_benign.pdf")
if selection == 'norm_distribution_iid_noniid' or selection == 'all':
norm_distribution_iid_noniid("norm_distribution_iid_noniid.pdf")
if selection == 'norm_distribution_benign_overtime' or selection == 'all':
norm_distribution_benign_overtime("norm_distribution_benign_overtime.pdf")
if selection == 'constant_attack_lenet_bound_plot' or selection == 'all':
constant_attack_lenet_bound_plot("constant_attack_lenet_bound_plot.pdf")
# TODO [nku] adjust to new style
if selection == 'l2_norm_accuracy_compare_plot' or selection == 'all':
norm_accuracy_compare_plot("l2_norm_accuracy_compare_plot.pdf", "L2")
if selection == 'linf_norm_accuracy_compare_plot' or selection == 'all':
norm_accuracy_compare_plot("linf_norm_accuracy_compare_plot.pdf", "LINF")
if selection == 'l2_norm_accuracy_tradeoff_plot' or selection == 'all':
norm_accuracy_tradeoff_plot("l2_norm_accuracy_tradeoff_plot.pdf", "L2")
if selection == 'linf_norm_accuracy_tradeoff_plot' or selection == 'all':
norm_accuracy_tradeoff_plot("linf_norm_accuracy_tradeoff_plot.pdf", "LINF")
if selection == 'hypergeometric_distribution' or selection == 'all':
hypergeometric_distribution("hypergeometric_distribution.pdf")
if selection == 'quantization_mnist' or selection == 'all':
quantization_mnist("quantization_mnist.pdf")
if selection == 'l2proof_plots' or selection == 'all':
l2proof_plots()
l2proof_flexible_case()
if selection == 'weight_distribution_plot_plus_l2' or selection == 'all':
weight_distribution_plot_plus_l2("weight_distribution_plot_plus_l2.pdf")
if selection == 'weight_distribution_plot_plus_l2_attack' or selection == 'all':
weight_distribution_plot_plus_l2_attack("weight_distribution_plot_plus_l2_attack.pdf")
if selection == 'cifar_client_comparison_unbounded' or selection == 'all':
cifar_client_comparison_unbounded("cifar_client_comparison_unbounded.pdf")
if selection == 'scaling_factor_adv_success_lenet' or selection == 'all':
scaling_factor_adv_success("scaling_factor_adv_success_lenet.pdf")
if selection == 'endtoend_mnist_cnn_range' or selection == 'all':
endtoend_accuracy_plot("endtoend_mnist_cnn_range.pdf", "mnist", "$L_\\infty$-norm bound for the MNIST task.",
0.9, 1.0)
if selection == 'endtoend_cifar_lenet_range' or selection == 'all':
endtoend_accuracy_plot("endtoend_cifar_lenet_range.pdf", "cifar_lenet",
"$L_\\infty$-norm bound for the CIFAR10 task.", 0.0, 0.6)
if selection == 'endtoend_accuracy_four_plot' or selection == 'all':
endtoend_accuracy_four_plot("endtoend_accuracy_four_plot.pdf")
if selection == 'endtoend_timing_bar_range' or selection == 'all':
endtoend_timing_bar("endtoend_timing_bar_range.pdf", "range")
if selection == 'endtoend_timing_bar_l2' or selection == 'all':
endtoend_timing_bar("endtoend_timing_bar_l2.pdf", "l2")
if selection == 'microbench_proof_arbitrary_ranges' or selection == 'all':
microbench_proof_arbitrary_ranges()
if selection == 'bandwidth_bounds_four_plot' or selection == 'all':
bandwidth_bounds_four_plot("bandwidth_bounds_four_plot.pdf")
if selection == 'accuracy_pgd' or selection == 'all':
accuracy_pgd("accuracy_pgd.pdf")
if selection == 'prio_accuracy_plot' or selection == 'all':
prio_accuracy_plot("prio_accuracy_plot.pdf")
if selection == 'cifar_lenet_wr_plot' or selection == 'all':
cifar_lenet_wr_plot("cifar_lenet_wr_plot.pdf")
if __name__ == "__main__":
main()
# polyline/admin.py (UjalaJha/DMBIProject, MIT license)
from django.contrib import admin
from .models import *
admin.site.register(PolylineData)
admin.site.register(Polyline)
# cloudlift/config/stack.py (sannithibalaji/cloudlift, MIT license)
def get_cluster_name(environment):
return "cluster-" + environment
def get_service_stack_name(environment, name):
return '-'.join([name, environment])
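A quick usage sketch of the two naming helpers (the helpers are restated so the snippet is self-contained; the environment and service names are made-up examples):

```python
def get_cluster_name(environment):
    # "staging" -> "cluster-staging"
    return "cluster-" + environment

def get_service_stack_name(environment, name):
    # ("staging", "orders") -> "orders-staging"
    return '-'.join([name, environment])

cluster = get_cluster_name("staging")
stack = get_service_stack_name("staging", "orders")
```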
# stac_api/utils/dependencies.py (c-core-labs/arturo-stac-api, MIT license)
"""FastAPI dependencies."""
from contextvars import ContextVar
# TODO: Find a new home
READER: ContextVar = ContextVar("reader")
WRITER: ContextVar = ContextVar("writer")
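A minimal sketch of how such `ContextVar` dependencies are typically used: startup code `set()`s a value (e.g. a database reader), downstream handlers `get()` it without explicit plumbing, and `reset()` restores the previous state. The connection string below is a made-up placeholder:

```python
from contextvars import ContextVar

READER: ContextVar = ContextVar("reader")

# Startup code binds a value for the current context...
token = READER.set("postgres://reader")  # placeholder connection string
# ...and downstream code retrieves it later in the same context.
current_reader = READER.get()
# reset() restores the previous (here: unset) state.
READER.reset(token)
```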
# leaguepybotv2.0/_backup2/league_client/__init__.py (TierynnB/LeaguePyBot, MIT license)
from .league_client import LeagueClient
from .league_connector import LeagueConnector
from .league_summoner import LeagueSummoner
from .league_lockfile import Lockfile
# __init__.py (KeinShin/YoutubeCommentPoster, MIT license)
from .comment import Comment
# lib/model/scnet.py (jfzhuang/DAVSS, MIT license)
import os
import sys
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from lib.model.flownet import FlowNets
from lib.model.deeplabv3plus import deeplabv3plus
from lib.model.dmnet import DMNet
from lib.model.cfnet import CFNet
from lib.model.warpnet import warp
class SCNet(nn.Module):
def __init__(self, n_classes=19):
super(SCNet, self).__init__()
self.deeplab = deeplabv3plus(n_classes=n_classes)
self.flownet = FlowNets()
self.cfnet = CFNet(n_classes=n_classes)
self.dmnet = DMNet()
self.warpnet = warp()
self.semantic_loss = nn.CrossEntropyLoss(ignore_index=255)
self.cfnet_loss = nn.CrossEntropyLoss(ignore_index=255, reduction='none') # per-pixel loss, weighted by the distortion map below
self.dmnet_loss = nn.BCELoss()
self.set_fix_deeplab()
self.set_fix_dmnet()
def forward(self, img_list, label=None):
n, c, h, w = img_list[0].shape
img_1_feat = self.deeplab(img_list[0])
warp_img = F.upsample(img_list[0], scale_factor=0.25, mode='bilinear', align_corners=True)
img_2_mask = self.deeplab(img_list[1])
img_2_mask = F.upsample(img_2_mask, scale_factor=4, mode='bilinear', align_corners=True)
img_2_mask = torch.argmax(img_2_mask, dim=1)
loss_semantic = 0.0
loss_cfnet = 0.0
flow = self.flownet(torch.cat([img_list[1], img_list[0]], dim=1))
img_2_feat = self.warpnet(img_1_feat, flow)
warp_img = self.warpnet(warp_img, flow)
# semantic loss
img_2_out_propagate = F.upsample(img_2_feat, scale_factor=4, mode='bilinear', align_corners=True)
loss_semantic += self.semantic_loss(img_2_out_propagate, img_2_mask)
# smooth loss
img_2_down = F.upsample(img_list[1], scale_factor=0.25, mode='bilinear', align_corners=True)
dm_2 = self.dmnet(warp_img, img_2_down)
dm_2 = F.interpolate(dm_2, scale_factor=4, mode='bilinear', align_corners=True)
# cfnet loss
img_2_feat_cc = self.cfnet(img_list[1])
img_2_out_cc = F.upsample(img_2_feat_cc, scale_factor=4, mode='bilinear', align_corners=True)
loss = self.cfnet_loss(img_2_out_cc, img_2_mask)
loss_cfnet += torch.mean(loss * dm_2.squeeze(1)) # loss is (N,H,W), dm_2 is (N,1,H,W); squeeze to avoid mis-broadcasting
img_2_out_merge = img_2_out_propagate * (1 - dm_2) + img_2_out_cc * dm_2
loss_semantic += self.semantic_loss(img_2_out_merge, img_2_mask)
flow = self.flownet(torch.cat([img_list[2], img_list[1]], dim=1))
img_3_feat = self.warpnet(img_2_feat, flow)
warp_img = self.warpnet(warp_img, flow)
# semantic loss
img_3_out_propagate = F.upsample(img_3_feat, scale_factor=4, mode='bilinear', align_corners=True)
loss_semantic += self.semantic_loss(img_3_out_propagate, label)
# smooth loss
img_3_down = F.upsample(img_list[2], scale_factor=0.25, mode='bilinear', align_corners=True)
dm_3 = self.dmnet(warp_img, img_3_down)
dm_3 = F.interpolate(dm_3, scale_factor=4, mode='bilinear', align_corners=True)
# cfnet loss
img_3_feat_cc = self.cfnet(img_list[2])
img_3_out_cc = F.upsample(img_3_feat_cc, scale_factor=4, mode='bilinear', align_corners=True)
loss = self.cfnet_loss(img_3_out_cc, label)
loss_cfnet += torch.mean(loss * dm_3.squeeze(1)) # loss is (N,H,W), dm_3 is (N,1,H,W); squeeze to avoid mis-broadcasting
img_3_out_merge = img_3_out_propagate * (1 - dm_3) + img_3_out_cc * dm_3
loss_semantic += self.semantic_loss(img_3_out_merge, label)
loss_semantic /= 4
loss_semantic = torch.unsqueeze(loss_semantic, 0)
loss_cfnet /= 2
loss_cfnet = torch.unsqueeze(loss_cfnet, 0)
return loss_semantic, loss_cfnet
def set_fix_deeplab(self):
for param in self.deeplab.parameters():
param.requires_grad = False
def set_fix_dmnet(self):
for param in self.dmnet.parameters():
param.requires_grad = False
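The `set_fix_*` helpers above freeze submodules by switching off gradient tracking on their parameters. A minimal sketch of that pattern on a toy network (the two-layer model is only an illustration, not this repo's architecture):

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze the first submodule, the same way set_fix_deeplab freezes self.deeplab:
for param in net[0].parameters():
    param.requires_grad = False

# Only the second layer's weight and bias remain trainable.
trainable = [p for p in net.parameters() if p.requires_grad]
```

An optimizer built from `filter(lambda p: p.requires_grad, net.parameters())` will then update only the unfrozen part.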
class SCNet_dmnet(nn.Module):
# For training DMNet
def __init__(self, n_classes=19):
super(SCNet_dmnet, self).__init__()
self.deeplab = deeplabv3plus(n_classes=n_classes)
self.flownet = FlowNets()
self.dmnet = DMNet()
self.warpnet = warp()
self.dmnet_loss = nn.BCELoss()
self.set_fix_deeplab()
self.set_fix_flownet()
def forward(self, img_list):
n, c, h, w = img_list[0].shape
img_1_feat = self.deeplab(img_list[0])
warp_im = F.upsample(img_list[0], scale_factor=0.25, mode='bilinear', align_corners=True)
img_2_mask = self.deeplab(img_list[1])
img_2_mask = F.upsample(img_2_mask, scale_factor=4, mode='bilinear', align_corners=True)
img_2_mask = torch.argmax(img_2_mask, dim=1)
img_3_mask = self.deeplab(img_list[2])
img_3_mask = F.upsample(img_3_mask, scale_factor=4, mode='bilinear', align_corners=True)
img_3_mask = torch.argmax(img_3_mask, dim=1)
loss_dmnet = 0.0
flow = self.flownet(torch.cat([img_list[1], img_list[0]], dim=1))
img_2_feat = self.warpnet(img_1_feat, flow)
warp_im = self.warpnet(warp_im, flow)
img_2_out_propagate = F.upsample(img_2_feat, scale_factor=4, mode='bilinear', align_corners=True)
img_2_out_propagate = torch.argmax(img_2_out_propagate, dim=1, keepdim=True)
img_2_down = F.upsample(img_list[1], scale_factor=0.25, mode='bilinear', align_corners=True)
dm_2 = self.dmnet(warp_im, img_2_down)
dm_2 = F.interpolate(dm_2, scale_factor=4, mode='bilinear', align_corners=True)
label_2 = (img_2_out_propagate != img_2_mask.unsqueeze(1)).float().detach()
loss_dmnet += self.dmnet_loss(dm_2, label_2)
flow = self.flownet(torch.cat([img_list[2], img_list[1]], dim=1))
img_3_feat = self.warpnet(img_2_feat, flow)
warp_im = self.warpnet(warp_im, flow)
img_3_out_propagate = F.upsample(img_3_feat, scale_factor=4, mode='bilinear', align_corners=True)
img_3_out_propagate = torch.argmax(img_3_out_propagate, dim=1, keepdim=True)
img_3_down = F.upsample(img_list[2], scale_factor=0.25, mode='bilinear', align_corners=True)
dm_3 = self.dmnet(warp_im, img_3_down)
dm_3 = F.interpolate(dm_3, scale_factor=4, mode='bilinear', align_corners=True)
label_3 = (img_3_out_propagate != img_3_mask.unsqueeze(1)).float().detach()
loss_dmnet += self.dmnet_loss(dm_3, label_3)
loss_dmnet /= 2
loss_dmnet = torch.unsqueeze(loss_dmnet, 0)
return loss_dmnet
def set_fix_deeplab(self):
for param in self.deeplab.parameters():
param.requires_grad = False
def set_fix_flownet(self):
for param in self.flownet.parameters():
param.requires_grad = False
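In `forward` above, the BCE targets for DMNet are built by flagging pixels where the warped (propagated) prediction disagrees with the per-frame prediction (`label = (out_propagate != mask).float()`). A dependency-free sketch of that labeling step, with nested lists standing in for tensors:

```python
def discrepancy_labels(propagated, fresh):
    """Return 1.0 where the propagated class id differs from the fresh one,
    mirroring `label = (out_propagate != mask).float()` above, with nested
    lists standing in for tensors."""
    return [
        [1.0 if p != f else 0.0 for p, f in zip(p_row, f_row)]
        for p_row, f_row in zip(propagated, fresh)
    ]

propagated = [[0, 1], [2, 2]]   # class ids warped from the previous frame
fresh = [[0, 3], [2, 0]]        # class ids predicted on the current frame
assert discrepancy_labels(propagated, fresh) == [[0.0, 1.0], [0.0, 1.0]]
```

The resulting 0/1 map is exactly what `nn.BCELoss` expects as a float target for the predicted discrepancy map.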
# class SCNet_Camvid(nn.Module):
# def __init__(self, n_classes=19):
# super(SCNet_Camvid, self).__init__()
# self.deeplab = deeplabv3plus(n_classes=n_classes)
# self.flownet = FlowNets()
# self.cfnet = CFNet(n_classes=n_classes)
# self.dmnet = DMNet()
# self.warpnet = warp()
# self.semantic_loss = nn.CrossEntropyLoss(ignore_index=255)
# self.cfnet_loss = nn.CrossEntropyLoss(ignore_index=255, reduce=False)
# self.dmnet_loss = nn.BCELoss()
# self.set_fix_deeplab()
# self.set_fix_dmnet()
# def forward(self, img_1, img_2, img_3, label):
# n, c, h, w = img_1.shape
# img_1_feat = self.deeplab(img_1)
# warp_img = F.upsample(img_1, scale_factor=0.25, mode='bilinear', align_corners=True)
# img_2_mask = self.deeplab(img_2)
# img_2_mask = F.upsample(img_2_mask, scale_factor=4, mode='bilinear', align_corners=True)
# img_2_mask = torch.argmax(img_2_mask, dim=1)
# loss_semantic = 0.0
# loss_cfnet = 0.0
# flow = self.flownet(torch.cat([img_2, img_1], dim=1))
# img_2_feat = self.warpnet(img_1_feat, flow)
# warp_img = self.warpnet(warp_img, flow)
# # semantic loss
# img_2_out_propagate = F.upsample(img_2_feat, scale_factor=4, mode='bilinear', align_corners=True)
# loss_semantic += self.semantic_loss(img_2_out_propagate, img_2_mask)
# # smooth loss
# img_2_down = F.upsample(img_2, scale_factor=0.25, mode='bilinear', align_corners=True)
# dm_2 = self.dmnet(warp_img, img_2_down)
# dm_2 = F.interpolate(dm_2, scale_factor=4, mode='bilinear', align_corners=True)
# # cfnet loss
# img_2_feat_cc = self.cfnet(img_2)
# img_2_out_cc = F.upsample(img_2_feat_cc, scale_factor=4, mode='bilinear', align_corners=True)
# loss = self.cfnet_loss(img_2_out_cc, img_2_mask)
# loss_cfnet += torch.mean(loss * dm_2)
# img_2_out_merge = img_2_out_propagate * (1-dm_2) + img_2_out_cc*dm_2
# loss_semantic += self.semantic_loss(img_2_out_merge, img_2_mask)
# flow = self.flownet(torch.cat([img_3, img_2], dim=1))
# img_3_feat = self.warpnet(img_2_feat, flow)
# warp_img = self.warpnet(warp_img, flow)
# # semantic loss
# img_3_out_propagate = F.upsample(img_3_feat, scale_factor=4, mode='bilinear', align_corners=True)
# loss_semantic += self.semantic_loss(img_3_out_propagate, label)
# # smooth loss
# img_3_down = F.upsample(img_3, scale_factor=0.25, mode='bilinear', align_corners=True)
# dm_3 = self.dmnet(warp_img, img_3_down)
# dm_3 = F.interpolate(dm_3, scale_factor=4, mode='bilinear', align_corners=True)
# # cfnet loss
# img_3_feat_cc = self.cfnet(image=img_3)
# img_3_out_cc = F.upsample(img_3_feat_cc, scale_factor=4, mode='bilinear', align_corners=True)
# loss = self.cfnet_loss(img_3_out_cc, label)
# loss_cfnet += torch.mean(loss * dm_3)
# img_3_out_merge = img_3_out_propagate * (1-dm_3) + img_3_out_cc*dm_3
# loss_semantic += self.semantic_loss(img_3_out_merge, label)
# loss_semantic /= 4
# loss_semantic = torch.unsqueeze(loss_semantic, 0)
# loss_cfnet /= 2
# loss_cfnet = torch.unsqueeze(loss_cfnet, 0)
# return loss_semantic, loss_cfnet
# def set_fix_deeplab(self):
# for param in self.deeplab.parameters():
# param.requires_grad = False
# def set_fix_dmnet(self):
# for param in self.dmnet.parameters():
# param.requires_grad = False
if __name__ == '__main__':
net = SCNet()
net.cuda().eval()
img_1 = torch.rand([2, 3, 512, 1024]).cuda()
img_1_mask = torch.zeros([2, 512, 1024]).long().cuda()
img_2 = torch.rand([2, 3, 512, 1024]).cuda()
img_2_mask = torch.zeros([2, 512, 1024]).long().cuda()
img_3 = torch.rand([2, 3, 512, 1024]).cuda()
img_3_mask = torch.zeros([2, 512, 1024]).long().cuda()
label = torch.zeros([2, 512, 1024]).long().cuda()
with torch.no_grad():
loss_semantic, loss_cfnet = net(img_1, img_1_mask, img_2, img_2_mask, img_3, img_3_mask, label)
print(loss_semantic.item(), loss_cfnet.item())
# File: analyses/quantifications/scripts/2019_07_0608quantifications.py
# Repo: brendano257/Zugspitze-Schneefernerhaus (MIT license)
"""
A sequence of standards were run over three days to quantify and compare EMPA SX3555 vs CC416168.
The sequence was (CC416168, SX3555, Blank2500), which was run after normal runs for three days (2019-07-04 --> 06).
"""
__package__ = 'Z'
import datetime as dt
from datetime import datetime
from settings import CORE_DIR, DB_NAME
from IO.db import connect_to_db, GcRun, Integration, Standard, SampleQuant
from reporting import compile_quant_report
engine, session = connect_to_db(DB_NAME, CORE_DIR)
standard_to_quantify_with = session.query(Standard).filter(Standard.name == 'cc416168').one_or_none()
# get standard cert values for the quantifier
certified_values_of_sample = session.query(Standard).filter(Standard.name == 'sx3555').one().quantifications
# get standard cert values for the sample being quantified
days_with_standards = [datetime(2019, 7, 6), datetime(2019, 7, 7), datetime(2019, 7, 8)]
quant_runs = []
for day in days_with_standards:
day_end = day + dt.timedelta(days=1)
sample = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%SX3555.D'))
.order_by(GcRun.date)
.one_or_none())
quantifier = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%CC416168.D'))
.order_by(GcRun.date)
.one_or_none())
blank = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%Blank2500.D'))
.order_by(GcRun.date)
.one_or_none())
if not sample or not quantifier or not blank:
print(f'Sample, standard or blank not found for {day}.')
continue
quant = SampleQuant(sample, quantifier, blank, standard_to_quantify_with)
quant.quantify()
quant_runs.append(quant)
compile_quant_report(quant_runs, 'SX3555', 'CC416168', certified_values_of_sample, date=datetime(2019, 7, 6))
# report for SX3555 Qx CC416168 finished, values to be re-assigned for vice versa
standard_to_quantify_with = session.query(Standard).filter(Standard.name == 'sx3555').one_or_none()
# get standard cert values for the quantifier
certified_values_of_sample = session.query(Standard).filter(Standard.name == 'cc416168').one().quantifications
# get standard cert values for the sample being quantified
quant_runs = []  # re-assign to quantify the other way around (CC416168 Qx SX3555)
for day in days_with_standards:
day_end = day + dt.timedelta(days=1)
sample = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%CC416168.D'))
.order_by(GcRun.date)
.one_or_none())
quantifier = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%SX3555.D'))
.order_by(GcRun.date)
.one_or_none())
blank = (session.query(GcRun).join(Integration, Integration.run_id == GcRun.id)
.filter(GcRun.date > day, GcRun.date < day_end)
.filter(Integration.filename.like('%Blank2500.D'))
.order_by(GcRun.date)
.one_or_none())
if not sample or not quantifier or not blank:
        print(f'Sample, standard or blank not found for {day}.')
continue
quant = SampleQuant(sample, quantifier, blank, standard_to_quantify_with)
quant.quantify()
quant_runs.append(quant)
compile_quant_report(quant_runs, 'CC416168', 'SX3555', certified_values_of_sample, date=datetime(2019, 7, 6))
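Both loops above pick each day's runs with a one-day window on `GcRun.date` (strictly between `day` and `day + 1 day`). The same windowing logic, sketched without the database (sample timestamps are made up):

```python
import datetime as dt
from datetime import datetime

def runs_in_day(dates, day):
    # Strictly between day and day_end, mirroring the GcRun.date > day,
    # GcRun.date < day_end filters above (a run stamped exactly at
    # midnight would be excluded by the strict comparisons).
    day_end = day + dt.timedelta(days=1)
    return [d for d in dates if day < d < day_end]

dates = [
    datetime(2019, 7, 6, 8, 30),   # in window for 2019-07-06
    datetime(2019, 7, 7, 9, 0),    # next day
    datetime(2019, 7, 6, 23, 59),  # still in window
]
assert runs_in_day(dates, datetime(2019, 7, 6)) == [dates[0], dates[2]]
```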
# File: modpy/random/_uniform.py
# Repo: FrederikLehn/modpy (MIT license)
import numpy as np
from modpy.special import sqrt
from modpy.random._random_util import _chk_dist_inp, _chk_invdist_inp, _chk_mmm_inp, _chk_log_mmm_inp,\
_chk_root_mmm_inp, _chk_prob_inp
def uniform_pdf(x, a, b, bounds=()):
"""
Calculates the probability density function of the uniform distribution, i.e.::
f(x; a, b) =
\begin{cases}
1 / (b - a), for x\in[a, b]
0, otherwise
\end{cases}
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
p = np.zeros_like(x)
return np.where((x >= a) & (x <= b), 1. / (b - a), p)
def uniform_cdf(x, a, b, bounds=()):
"""
Calculates the cumulative density function of the uniform distribution, i.e.::
F(x; a, b) =
\begin{cases}
0, for x < a
1 / (b - a), for x\in[a, b]
1, for x > b
\end{cases}
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
p = np.zeros_like(x)
p = np.where((x >= a) & (x <= b), (x - a) / (b - a), p)
return np.where(x > b, 1., p)
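For reference, the piecewise branches above reduce to this scalar form (illustrative, no numpy):

```python
def ucdf(x, a, b):
    """Scalar restatement of the piecewise uniform CDF from uniform_cdf above."""
    if x < a:
        return 0.0          # below the support
    if x > b:
        return 1.0          # above the support
    return (x - a) / (b - a)  # linear ramp on [a, b]

assert ucdf(-1.0, 0.0, 10.0) == 0.0
assert ucdf(2.5, 0.0, 10.0) == 0.25
assert ucdf(11.0, 0.0, 10.0) == 1.0
```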
def uniform_ppf(p, a, b):
"""
Calculates the inverse of the cumulative density function of the uniform distribution, i.e.::
        x = F^{-1}(p; a, b) = a + p * (b - a)
Parameters
----------
p : float or array_like, shape (n,)
Cumulative probability.
a : float
Minimum.
b : float
Maximum.
Returns
-------
x : float or array_like, shape (n,)
Realization.
"""
_chk_mmm_inp(a, b)
_chk_invdist_inp(p)
return a + p * (b - a)
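`uniform_ppf` is the inverse-transform sampling recipe: feeding it p ~ U(0, 1) yields draws from U(a, b). A stdlib-only sketch of the same formula:

```python
import random

def sample_uniform(a, b, n, seed=0):
    """Draw n samples from U(a, b) via x = a + p * (b - a), the
    uniform_ppf formula above applied to p ~ U(0, 1)."""
    rng = random.Random(seed)
    return [a + rng.random() * (b - a) for _ in range(n)]

samples = sample_uniform(2.0, 5.0, 1000)
assert all(2.0 <= x <= 5.0 for x in samples)
```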
def loguniform_pdf(x, a, b, bounds=()):
"""
Calculates the probability density function of the log-uniform distribution (reciprocal distribution), i.e.::
f(x; a, b) =
\begin{cases}
1 / (x\ln(b/a)), for x\in[a, b]
0, otherwise
\end{cases}
The log-uniform distribution is unaffected by choice of logarithmic base, so the natural logarithm
is used in order to simplify expression and reduce computational cost.
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_log_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
return uniform_pdf(np.log(x), np.log(a), np.log(b)) / x
def loguniform_cdf(x, a, b, bounds=()):
"""
Calculates the cumulative density function of the log-uniform distribution (reciprocal distribution), i.e.::
F(x; a, b) =
\begin{cases}
0, for x < a
log_{b/a}(x / a), for x\in[a, b]
1, for x > b
\end{cases}
The log-uniform distribution is unaffected by choice of logarithmic base, so the natural logarithm
is used in order to simplify expression and reduce computational cost.
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_log_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
return uniform_cdf(np.log(x), np.log(a), np.log(b))
def loguniform_ppf(p, a, b):
"""
Calculates the inverse of the cumulative density function of the log-uniform distribution
(reciprocal distribution), i.e.::
        x = F^{-1}(p; a, b) = e^{ln(b / a) * p + ln(a)}
The log-uniform distribution is unaffected by choice of logarithmic base, so the natural logarithm
is used in order to simplify expression and reduce computational cost.
Parameters
----------
p : float or array_like, shape (n,)
Cumulative probability.
a : float
Minimum.
b : float
Maximum.
Returns
-------
x : float or array_like, shape (n,)
Realization.
"""
_chk_log_mmm_inp(a, b)
_chk_invdist_inp(p)
return np.exp(uniform_ppf(p, np.log(a), np.log(b)))
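Per the ppf above, a log-uniform draw is the exponential of a uniform draw in log space. A stdlib sketch under that formula:

```python
import math
import random

def sample_loguniform(a, b, n, seed=0):
    """x = exp(ln(a) + p * (ln(b) - ln(a))) for p ~ U(0, 1), matching
    loguniform_ppf above."""
    rng = random.Random(seed)
    la, lb = math.log(a), math.log(b)
    return [math.exp(la + rng.random() * (lb - la)) for _ in range(n)]

samples = sample_loguniform(1e-3, 1e3, 500)
assert all(1e-3 <= x <= 1e3 for x in samples)
```

The draws are uniform per decade, so roughly as many samples land in [0.001, 0.01] as in [100, 1000].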
def rootuniform_pdf(x, a, b, bounds=(), root=2.):
"""
Calculates the probability density function of the root-uniform distribution, i.e.::
f(x; a, b) =
\begin{cases}
1 / (n (b^{1/n} - a^{1/n}) * x^{1/n-1}, for x\in[a, b]
0, otherwise
\end{cases}
where `n` is the root of the function.
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations.
root : float
Root.
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_root_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
return uniform_pdf(sqrt(x, root), sqrt(a, root), sqrt(b, root)) * x ** (1. / root - 1.) / root
def rootuniform_cdf(x, a, b, bounds=(), root=2.):
"""
Calculates the cumulative density function of the root-uniform distribution, i.e.::
F(x; a, b) =
\begin{cases}
0, for x < a
(x^{1/n}-a^{1/n}) / (b^{1/n} - a^{1/n}), for x\in[a, b]
1, for x > b
\end{cases}
where `n` is the root of the function.
Parameters
----------
x : float or array_like, shape (n,)
Realization.
a : float
Minimum.
b : float
Maximum.
bounds : tuple
Tuple of minimum and maximum attainable realizations
root : float
Root.
Returns
-------
p : float or array_like, shape (n,)
Probability.
"""
_chk_root_mmm_inp(a, b)
if not bounds:
bounds = (a, b)
_chk_dist_inp(x, bounds)
return uniform_cdf(sqrt(x, root), sqrt(a, root), sqrt(b, root))
def rootuniform_ppf(p, a, b, root=2.):
"""
Calculates the inverse of the cumulative density function of the root-uniform distribution, i.e.::
        x = F^{-1}(p; a, b) = (p (b^{1/n} - a^{1/n}) + a^{1/n})^n
where `n` is the root of the function.
Parameters
----------
p : float or array_like, shape (n,)
Cumulative probability.
a : float
Minimum.
b : float
Maximum.
root : float
Root.
Returns
-------
x : float or array_like, shape (n,)
Realization.
"""
_chk_root_mmm_inp(a, b)
_chk_invdist_inp(p)
return uniform_ppf(p, sqrt(a, root), sqrt(b, root)) ** root
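A quick consistency check on the root-uniform formulas above: the CDF applied to the PPF output must return the original probability. Scalar restatement with stdlib math only:

```python
import math

def root_cdf(x, a, b, root=2.0):
    # (x^{1/n} - a^{1/n}) / (b^{1/n} - a^{1/n}), as in rootuniform_cdf above.
    ra, rb = a ** (1.0 / root), b ** (1.0 / root)
    return (x ** (1.0 / root) - ra) / (rb - ra)

def root_ppf(p, a, b, root=2.0):
    # (p * (b^{1/n} - a^{1/n}) + a^{1/n})^n, as in rootuniform_ppf above.
    ra, rb = a ** (1.0 / root), b ** (1.0 / root)
    return (p * (rb - ra) + ra) ** root

# Round trip: cdf(ppf(p)) == p for every probability.
for p in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert math.isclose(root_cdf(root_ppf(p, 1.0, 16.0), 1.0, 16.0), p, abs_tol=1e-12)
```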
def pv2par_uniform(p1, v1, p2, v2):
"""
Calculates the minimum and the maximum value of a uniform distribution given the probability/value sets
(p1, v1) and (p2, v2).
Parameters
----------
p1 : float
Cumulative probability of `v1`.
v1 : float
Value at probability `p1`.
p2 : float
Cumulative probability of `v2`.
v2 : float
Value at probability `p2`.
Returns
-------
a : float
Minimum.
b : float
Maximum.
"""
    _chk_prob_inp(p1, v1, p2, v2)
    # Fit the linear CDF p = slope * v + intercept through both points,
    # then solve for the values where p reaches 0 (minimum) and 1 (maximum).
    slope = (p2 - p1) / (v2 - v1)
    intercept = p1 - (slope * v1)
    return -intercept / slope, (1. - intercept) / slope
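Worked example for `pv2par_uniform`: if v = 10 sits at cumulative probability 0.25 and v = 30 at 0.75, the fitted line is p = 0.025 v, which crosses 0 at v = 0 and 1 at v = 40, so the distribution is U(0, 40). The same arithmetic, stdlib only:

```python
def pv2par(p1, v1, p2, v2):
    # Same linear solve as pv2par_uniform above: fit p = slope * v + intercept
    # through both (probability, value) points, then read off where the line
    # crosses p = 0 (the minimum) and p = 1 (the maximum).
    slope = (p2 - p1) / (v2 - v1)
    intercept = p1 - slope * v1
    return -intercept / slope, (1.0 - intercept) / slope

a, b = pv2par(0.25, 10.0, 0.75, 30.0)
assert abs(a - 0.0) < 1e-9 and abs(b - 40.0) < 1e-9   # U(0, 40)
```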
# File: can_decoder/iterator/__init__.py
# Repo: justinwald99/can_decoder (MIT license)
from can_decoder.iterator.IteratorDecoder import IteratorDecoder
from can_decoder.iterator.IteratorGenericDecoder import IteratorGenericDecoder
from can_decoder.iterator.IteratorJ1939Decoder import IteratorJ1939Decoder
from can_decoder.iterator.can_record import can_record
from can_decoder.iterator.DecodedSignal import DecodedSignal
# File: examples/underscored/print.py
# Repo: doboy/Underscore (MIT license)
# import sys
#
# print('uh oh hot dog')
(__,) = ('uh oh hot dog',)
import sys as _
print(__)
(sys,) = (_,)
# File: mypy_drf_plugin/lib/helpers.py
# Repo: danielroseman/djangorestframework-stubs (MIT license)
from typing import Any, Dict
from mypy.nodes import TypeInfo
def get_drf_metadata(info: TypeInfo) -> Dict[str, Any]:
return info.metadata.setdefault("drf", {})
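`setdefault` here guarantees every call hands back the same mutable dict, so callers can write into the `"drf"` entry without an existence check. The idiom in isolation, with a plain dict standing in for `TypeInfo.metadata` (keys below are hypothetical, for illustration):

```python
def drf_metadata(metadata):
    # Same idiom as get_drf_metadata above: the first call inserts {} under
    # 'drf', and every later call returns that same mutable dict.
    return metadata.setdefault("drf", {})

info_metadata = {}
drf = drf_metadata(info_metadata)
drf["fields"] = ["id", "name"]           # hypothetical key, for illustration
assert drf_metadata(info_metadata) is drf
assert info_metadata["drf"]["fields"] == ["id", "name"]
```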
# File: 14/00/02/0.py
# Repo: pylangstudy/201707 (CC0-1.0 license)
import datetime
#print(int('100').__setattr__('abcdefg', 0)) # AttributeError: 'int' object has no attribute 'abcdefg'
#print(str('abc').__setattr__('abcdefg', 'value')) # AttributeError: 'str' object has no attribute 'abcdefg'
#print(range(3).__setattr__('abcdefg', 'value')) # AttributeError: 'range' object has no attribute 'abcdefg'
#print(datetime.datetime.now().__setattr__('abcdefg', 'value')) # AttributeError: 'datetime.datetime' object has no attribute 'abcdefg'
#print(datetime.datetime.now().__setattr__('now', 'value')) # AttributeError: 'datetime.datetime' object attribute 'now' is read-only
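The commented calls above document the exact errors they raise. A runnable restatement with try/except that captures the message instead of crashing:

```python
import datetime

def setattr_error(obj, name, value):
    """Return the AttributeError message raised by obj.__setattr__, else None."""
    try:
        obj.__setattr__(name, value)
    except AttributeError as e:
        return str(e)
    return None

# Built-in immutable types reject new attributes, as the comments above note.
assert 'abcdefg' in setattr_error(int('100'), 'abcdefg', 0)
assert 'abcdefg' in setattr_error(str('abc'), 'abcdefg', 'value')
assert setattr_error(datetime.datetime.now(), 'abcdefg', 'value') is not None
```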
# File: tests/helpers_tests/test_condition.py
# Repo: shujat333/python-sdk (Apache-2.0 license)
# Copyright 2016-2020, Optimizely
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import mock
from six import PY2
from optimizely.helpers import condition as condition_helper
from tests import base
browserConditionSafari = ['browser_type', 'safari', 'custom_attribute', 'exact']
booleanCondition = ['is_firefox', True, 'custom_attribute', 'exact']
integerCondition = ['num_users', 10, 'custom_attribute', 'exact']
doubleCondition = ['pi_value', 3.14, 'custom_attribute', 'exact']
exists_condition_list = [['input_value', None, 'custom_attribute', 'exists']]
exact_string_condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']]
exact_int_condition_list = [['lasers_count', 9000, 'custom_attribute', 'exact']]
exact_float_condition_list = [['lasers_count', 9000.0, 'custom_attribute', 'exact']]
exact_bool_condition_list = [['did_register_user', False, 'custom_attribute', 'exact']]
substring_condition_list = [['headline_text', 'buy now', 'custom_attribute', 'substring']]
gt_int_condition_list = [['meters_travelled', 48, 'custom_attribute', 'gt']]
gt_float_condition_list = [['meters_travelled', 48.2, 'custom_attribute', 'gt']]
ge_int_condition_list = [['meters_travelled', 48, 'custom_attribute', 'ge']]
ge_float_condition_list = [['meters_travelled', 48.2, 'custom_attribute', 'ge']]
lt_int_condition_list = [['meters_travelled', 48, 'custom_attribute', 'lt']]
lt_float_condition_list = [['meters_travelled', 48.2, 'custom_attribute', 'lt']]
le_int_condition_list = [['meters_travelled', 48, 'custom_attribute', 'le']]
le_float_condition_list = [['meters_travelled', 48.2, 'custom_attribute', 'le']]
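Each fixture above is a flat condition `[name, value, type, match]`. A minimal illustration of how an 'exact' condition could be evaluated against a user-attributes dict (tri-state result, mirroring the True/False/None outcomes the tests below assert on; this is not the SDK's evaluator):

```python
def evaluate_exact(condition, attributes):
    # condition is [name, value, type, match], as in the fixtures above.
    # Returns True/False when the attribute is present and None when it is
    # missing -- the tri-state the evaluator tests below rely on.
    name, value, _type, _match = condition
    if name not in attributes:
        return None
    return attributes[name] == value

cond = ['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']
assert evaluate_exact(cond, {'favorite_constellation': 'Lacerta'}) is True
assert evaluate_exact(cond, {'favorite_constellation': 'Orion'}) is False
assert evaluate_exact(cond, {}) is None
```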
class CustomAttributeConditionEvaluatorTest(base.BaseTest):
def setUp(self):
base.BaseTest.setUp(self)
self.condition_list = [
browserConditionSafari,
booleanCondition,
integerCondition,
doubleCondition,
]
self.mock_client_logger = mock.MagicMock()
def test_evaluate__returns_true__when_attributes_pass_audience_condition(self):
evaluator = condition_helper.CustomAttributeConditionEvaluator(
self.condition_list, {'browser_type': 'safari'}, self.mock_client_logger
)
self.assertStrictTrue(evaluator.evaluate(0))
def test_evaluate__returns_false__when_attributes_fail_audience_condition(self):
evaluator = condition_helper.CustomAttributeConditionEvaluator(
self.condition_list, {'browser_type': 'chrome'}, self.mock_client_logger
)
self.assertStrictFalse(evaluator.evaluate(0))
def test_evaluate__evaluates__different_typed_attributes(self):
userAttributes = {
'browser_type': 'safari',
'is_firefox': True,
'num_users': 10,
'pi_value': 3.14,
}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
self.condition_list, userAttributes, self.mock_client_logger
)
self.assertStrictTrue(evaluator.evaluate(0))
self.assertStrictTrue(evaluator.evaluate(1))
self.assertStrictTrue(evaluator.evaluate(2))
self.assertStrictTrue(evaluator.evaluate(3))
def test_evaluate__returns_null__when_condition_has_an_invalid_match_property(self):
condition_list = [['weird_condition', 'hi', 'custom_attribute', 'weird_match']]
evaluator = condition_helper.CustomAttributeConditionEvaluator(
condition_list, {'weird_condition': 'hi'}, self.mock_client_logger
)
self.assertIsNone(evaluator.evaluate(0))
def test_evaluate__assumes_exact__when_condition_match_property_is_none(self):
condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', None]]
evaluator = condition_helper.CustomAttributeConditionEvaluator(
condition_list, {'favorite_constellation': 'Lacerta'}, self.mock_client_logger,
)
self.assertStrictTrue(evaluator.evaluate(0))
def test_evaluate__returns_null__when_condition_has_an_invalid_type_property(self):
condition_list = [['weird_condition', 'hi', 'weird_type', 'exact']]
evaluator = condition_helper.CustomAttributeConditionEvaluator(
condition_list, {'weird_condition': 'hi'}, self.mock_client_logger
)
self.assertIsNone(evaluator.evaluate(0))
def test_semver_eq__returns_true(self):
semver_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_eq']]
user_versions = ['2.0.0', '2.0']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertTrue(result, custom_err_msg)
def test_semver_eq__returns_false(self):
semver_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_eq']]
user_versions = ['2.9', '1.9']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertFalse(result, custom_err_msg)
def test_semver_le__returns_true(self):
semver_less_than_or_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_le']]
user_versions = ['2.0.0', '1.9']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_less_than_or_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertTrue(result, custom_err_msg)
def test_semver_le__returns_false(self):
semver_less_than_or_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_le']]
user_versions = ['2.5.1']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_less_than_or_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertFalse(result, custom_err_msg)
def test_semver_ge__returns_true(self):
semver_greater_than_or_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_ge']]
user_versions = ['2.0.0', '2.9']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_greater_than_or_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertTrue(result, custom_err_msg)
def test_semver_ge__returns_false(self):
semver_greater_than_or_equal_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_ge']]
user_versions = ['1.9']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_greater_than_or_equal_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertFalse(result, custom_err_msg)
def test_semver_lt__returns_true(self):
semver_less_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_lt']]
user_versions = ['1.9']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_less_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertTrue(result, custom_err_msg)
def test_semver_lt__returns_false(self):
semver_less_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_lt']]
user_versions = ['2.0.0', '2.5.1']
for user_version in user_versions:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_less_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertFalse(result, custom_err_msg)
    def test_semver_gt__returns_true(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        user_versions = ['2.9']
        for user_version in user_versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.evaluate(0)
            custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
            self.assertTrue(result, custom_err_msg)

    def test_semver_gt__returns_false(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        user_versions = ['2.0.0', '1.9']
        for user_version in user_versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.evaluate(0)
            custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
            self.assertFalse(result, custom_err_msg)

    def test_evaluate__returns_None__when_user_version_is_not_string(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        user_versions = [True, 37]
        for user_version in user_versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.evaluate(0)
            custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
            self.assertIsNone(result, custom_err_msg)

    def test_evaluate__returns_None__when_user_version_with_invalid_semantic(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        user_versions = ['3.7.2.2', '+']
        for user_version in user_versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.evaluate(0)
            custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
            self.assertIsNone(result, custom_err_msg)

    def test_compare_user_version_with_target_version_equal_to_0(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        versions = [
            ('2.0.1', '2.0.1'),
            ('2.9.9-beta', '2.9.9-beta'),
            ('2.1', '2.1.0'),
            ('2', '2.12'),
            ('2.9', '2.9.1'),
            ('2.9.1', '2.9.1+beta')
        ]
        for target_version, user_version in versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.compare_user_version_with_target_version(target_version, user_version)
            custom_err_msg = "Got {} in result. Failed for user version: {} " \
                             "and target version: {}".format(result, user_version, target_version)
            self.assertEqual(result, 0, custom_err_msg)

    def test_compare_user_version_with_target_version_greater_than_0(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        versions = [
            ('2.0.0', '2.0.1'),
            ('2.0', '3.0.1'),
            ('2.1.2-beta', '2.1.2-release'),
            ('2.1.3-beta1', '2.1.3-beta2'),
            ('2.9.9-beta', '2.9.9'),
            ('2.9.9+beta', '2.9.9'),
            ('3.7.0-prerelease+build', '3.7.0-prerelease+rc'),
            ('2.2.3-beta-beta1', '2.2.3-beta-beta2'),
            ('2.2.3-beta+beta1', '2.2.3-beta+beta2'),
            ('2.2.3+beta2-beta1', '2.2.3+beta3-beta2')
        ]
        for target_version, user_version in versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.compare_user_version_with_target_version(target_version, user_version)
            custom_err_msg = "Got {} in result. Failed for user version: {} " \
                             "and target version: {}".format(result, user_version, target_version)
            self.assertEqual(result, 1, custom_err_msg)

    def test_compare_user_version_with_target_version_less_than_0(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        versions = [
            ('2.0.1', '2.0.0'),
            ('3.0', '2.0.1'),
            ('2.3', '2.0.1'),
            ('2.3.5', '2.3.1'),
            ('2.9.8', '2.9'),
            ('2.1.2-release', '2.1.2-beta'),
            ('2.9.9+beta', '2.9.9-beta'),
            ('3.7.0+build3.7.0-prerelease+build', '3.7.0-prerelease'),
            ('2.1.3-beta-beta2', '2.1.3-beta'),
            ('2.1.3-beta1+beta3', '2.1.3-beta1+beta2')
        ]
        for target_version, user_version in versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.compare_user_version_with_target_version(target_version, user_version)
            custom_err_msg = "Got {} in result. Failed for user version: {} " \
                             "and target version: {}".format(result, user_version, target_version)
            self.assertEqual(result, -1, custom_err_msg)

    def test_compare_invalid_user_version_with(self):
        semver_greater_than_2_0_condition_list = [['Android', "2.0", 'custom_attribute', 'semver_gt']]
        versions = ['-', '.', '..', '+', '+test', ' ', '2 .3. 0', '2.', '.2.2', '3.7.2.2', '3.x', ',',
                    '+build-prerelease', '2..2']
        target_version = '2.1.0'
        for user_version in versions:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                semver_greater_than_2_0_condition_list, {'Android': user_version}, self.mock_client_logger)
            result = evaluator.compare_user_version_with_target_version(user_version, target_version)
            custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
            self.assertIsNone(result, custom_err_msg)

    def test_exists__returns_false__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exists__returns_false__when_user_provided_value_is_null(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {'input_value': None}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exists__returns_true__when_user_provided_value_is_string(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {'input_value': 'hi'}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exists__returns_true__when_user_provided_value_is_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {'input_value': 10}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {'input_value': 10.0}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exists__returns_true__when_user_provided_value_is_boolean(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exists_condition_list, {'input_value': False}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exact_string__returns_true__when_user_provided_value_is_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_string_condition_list, {'favorite_constellation': 'Lacerta'}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exact_string__returns_false__when_user_provided_value_is_not_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_string_condition_list, {'favorite_constellation': 'The Big Dipper'}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exact_string__returns_null__when_user_provided_value_is_different_type_from_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_string_condition_list, {'favorite_constellation': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_string__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_string_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_int__returns_true__when_user_provided_value_is_equal_to_condition_value(self):
        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                exact_int_condition_list, {'lasers_count': long(9000)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': 9000}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': 9000.0}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exact_float__returns_true__when_user_provided_value_is_equal_to_condition_value(self):
        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                exact_float_condition_list, {'lasers_count': long(9000)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {'lasers_count': 9000}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {'lasers_count': 9000.0}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exact_int__returns_false__when_user_provided_value_is_not_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': 8000}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exact_float__returns_false__when_user_provided_value_is_not_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {'lasers_count': 8000.0}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exact_int__returns_null__when_user_provided_value_is_different_type_from_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': 'hi'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': True}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_float__returns_null__when_user_provided_value_is_different_type_from_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {'lasers_count': 'hi'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {'lasers_count': True}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_int__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_float__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_float_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact__given_number_values__calls_is_finite_number(self):
        """ Test that CustomAttributeConditionEvaluator.evaluate returns True
        if is_finite_number returns True. Returns None if is_finite_number returns False. """
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_int_condition_list, {'lasers_count': 9000}, self.mock_client_logger
        )

        # is_finite_number only needs to reject the condition value to stop evaluation.
        with mock.patch('optimizely.helpers.validator.is_finite_number', side_effect=[False, True]) as mock_is_finite:
            self.assertIsNone(evaluator.evaluate(0))

        mock_is_finite.assert_called_once_with(9000)

        # is_finite_number checks the user value only after it has accepted the condition value.
        with mock.patch('optimizely.helpers.validator.is_finite_number', side_effect=[True, False]) as mock_is_finite:
            self.assertIsNone(evaluator.evaluate(0))

        mock_is_finite.assert_has_calls([mock.call(9000), mock.call(9000)])

        # evaluate returns True only when is_finite_number returns True for both the
        # condition and user values.
        with mock.patch('optimizely.helpers.validator.is_finite_number', side_effect=[True, True]) as mock_is_finite:
            self.assertTrue(evaluator.evaluate(0))

        mock_is_finite.assert_has_calls([mock.call(9000), mock.call(9000)])

    def test_exact_bool__returns_true__when_user_provided_value_is_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_bool_condition_list, {'did_register_user': False}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_exact_bool__returns_false__when_user_provided_value_is_not_equal_to_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_bool_condition_list, {'did_register_user': True}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_exact_bool__returns_null__when_user_provided_value_is_different_type_from_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_bool_condition_list, {'did_register_user': 0}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_exact_bool__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            exact_bool_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_substring__returns_true__when_condition_value_is_substring_of_user_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            substring_condition_list, {'headline_text': 'Limited time, buy now!'}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

    def test_substring__returns_false__when_condition_value_is_not_a_substring_of_user_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            substring_condition_list, {'headline_text': 'Breaking news!'}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

    def test_substring__returns_null__when_user_provided_value_not_a_string(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            substring_condition_list, {'headline_text': 10}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_substring__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            substring_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_int__returns_true__when_user_value_greater_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                gt_int_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_greater_than_float__returns_true__when_user_value_greater_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': 48.3}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                gt_float_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_greater_than_int__returns_false__when_user_value_not_greater_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 47.9}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                gt_int_condition_list, {'meters_travelled': long(47)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_greater_than_float__returns_false__when_user_value_not_greater_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': 48.2}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                gt_float_condition_list, {'meters_travelled': long(48)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_greater_than_int__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 'a long way'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_float__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': 'a long way'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_int__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_float__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_float_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_or_equal_int__returns_true__when_user_value_greater_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                ge_int_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_greater_than_or_equal_float__returns_true__when_user_value_greater_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 48.3}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 48.2}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                ge_float_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_greater_than_or_equal_int__returns_false__when_user_value_not_greater_than_or_equal_condition_value(
            self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 47.9}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                ge_int_condition_list, {'meters_travelled': long(47)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_greater_than_or_equal_float__returns_false__when_user_value_not_greater_than_or_equal_condition_value(
            self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                ge_float_condition_list, {'meters_travelled': long(48)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_greater_than_or_equal_int__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': 'a long way'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_or_equal_float__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': 'a long way'}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_or_equal_int__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_int_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than_or_equal_float__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            ge_float_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_int__returns_true__when_user_value_less_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {'meters_travelled': 47.9}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                lt_int_condition_list, {'meters_travelled': long(47)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_less_than_float__returns_true__when_user_value_less_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                lt_float_condition_list, {'meters_travelled': long(48)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_less_than_int__returns_false__when_user_value_not_less_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                lt_int_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_less_than_float__returns_false__when_user_value_not_less_than_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {'meters_travelled': 48.2}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                lt_float_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_less_than_int__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_float__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_int__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_int_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_float__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            lt_float_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_or_equal_int__returns_true__when_user_value_less_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': 47.9}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                le_int_condition_list, {'meters_travelled': long(47)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                le_int_condition_list, {'meters_travelled': long(48)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_less_than_or_equal_float__returns_true__when_user_value_less_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': 48.2}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': 48}, self.mock_client_logger
        )
        self.assertStrictTrue(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                le_float_condition_list, {'meters_travelled': long(48)}, self.mock_client_logger
            )
            self.assertStrictTrue(evaluator.evaluate(0))

    def test_less_than_or_equal_int__returns_false__when_user_value_not_less_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                le_int_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_less_than_or_equal_float__returns_false__when_user_value_not_less_than_or_equal_condition_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': 48.3}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': 49}, self.mock_client_logger
        )
        self.assertStrictFalse(evaluator.evaluate(0))

        if PY2:
            evaluator = condition_helper.CustomAttributeConditionEvaluator(
                le_float_condition_list, {'meters_travelled': long(49)}, self.mock_client_logger
            )
            self.assertStrictFalse(evaluator.evaluate(0))

    def test_less_than_or_equal_int__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_or_equal_float__returns_null__when_user_value_is_not_a_number(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {'meters_travelled': False}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_or_equal_int__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_int_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_less_than_or_equal_float__returns_null__when_no_user_provided_value(self):
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            le_float_condition_list, {}, self.mock_client_logger
        )
        self.assertIsNone(evaluator.evaluate(0))

    def test_greater_than__calls_is_finite_number(self):
        """ Test that CustomAttributeConditionEvaluator.evaluate returns True
        if is_finite_number returns True. Returns None if is_finite_number returns False. """
        evaluator = condition_helper.CustomAttributeConditionEvaluator(
            gt_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
        )

        def is_finite_number__rejecting_condition_value(value):
            if value == 48:
                return False
            return True

        with mock.patch(
            'optimizely.helpers.validator.is_finite_number',
            side_effect=is_finite_number__rejecting_condition_value,
        ) as mock_is_finite:
            self.assertIsNone(evaluator.evaluate(0))

        # is_finite_number only needs to reject the condition value to stop evaluation.
        mock_is_finite.assert_called_once_with(48)

        def is_finite_number__rejecting_user_attribute_value(value):
            if value == 48.1:
                return False
            return True

        with mock.patch(
            'optimizely.helpers.validator.is_finite_number',
            side_effect=is_finite_number__rejecting_user_attribute_value,
        ) as mock_is_finite:
            self.assertIsNone(evaluator.evaluate(0))

        # is_finite_number checks the user value only after it has accepted the condition value.
        mock_is_finite.assert_has_calls([mock.call(48), mock.call(48.1)])

        def is_finite_number__accepting_both_values(value):
            return True

        with mock.patch(
            'optimizely.helpers.validator.is_finite_number', side_effect=is_finite_number__accepting_both_values,
        ):
            self.assertTrue(evaluator.evaluate(0))

def test_less_than__calls_is_finite_number(self):
""" Test that CustomAttributeConditionEvaluator.evaluate returns True
if is_finite_number returns True. Returns None if is_finite_number returns False. """
evaluator = condition_helper.CustomAttributeConditionEvaluator(
lt_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
)
def is_finite_number__rejecting_condition_value(value):
if value == 48:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_condition_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber only needs to reject condition value to stop evaluation.
mock_is_finite.assert_called_once_with(48)
def is_finite_number__rejecting_user_attribute_value(value):
if value == 47:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_user_attribute_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber evaluates user value only if it has accepted condition value.
mock_is_finite.assert_has_calls([mock.call(48), mock.call(47)])
def is_finite_number__accepting_both_values(value):
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number', side_effect=is_finite_number__accepting_both_values,
):
self.assertTrue(evaluator.evaluate(0))
def test_greater_than_or_equal__calls_is_finite_number(self):
""" Test that CustomAttributeConditionEvaluator.evaluate returns True
if is_finite_number returns True. Returns None if is_finite_number returns False. """
evaluator = condition_helper.CustomAttributeConditionEvaluator(
ge_int_condition_list, {'meters_travelled': 48.1}, self.mock_client_logger
)
def is_finite_number__rejecting_condition_value(value):
if value == 48:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_condition_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber only needs to reject condition value to stop evaluation.
mock_is_finite.assert_called_once_with(48)
def is_finite_number__rejecting_user_attribute_value(value):
if value == 48.1:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_user_attribute_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber evaluates user value only if it has accepted condition value.
mock_is_finite.assert_has_calls([mock.call(48), mock.call(48.1)])
def is_finite_number__accepting_both_values(value):
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number', side_effect=is_finite_number__accepting_both_values,
):
self.assertTrue(evaluator.evaluate(0))
def test_less_than_or_equal__calls_is_finite_number(self):
""" Test that CustomAttributeConditionEvaluator.evaluate returns True
if is_finite_number returns True. Returns None if is_finite_number returns False. """
evaluator = condition_helper.CustomAttributeConditionEvaluator(
le_int_condition_list, {'meters_travelled': 47}, self.mock_client_logger
)
def is_finite_number__rejecting_condition_value(value):
if value == 48:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_condition_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber only needs to reject condition value to stop evaluation.
mock_is_finite.assert_called_once_with(48)
def is_finite_number__rejecting_user_attribute_value(value):
if value == 47:
return False
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number',
side_effect=is_finite_number__rejecting_user_attribute_value,
) as mock_is_finite:
self.assertIsNone(evaluator.evaluate(0))
# assert that isFiniteNumber evaluates user value only if it has accepted condition value.
mock_is_finite.assert_has_calls([mock.call(48), mock.call(47)])
def is_finite_number__accepting_both_values(value):
return True
with mock.patch(
'optimizely.helpers.validator.is_finite_number', side_effect=is_finite_number__accepting_both_values,
):
self.assertTrue(evaluator.evaluate(0))
def test_invalid_semver__returns_None__when_semver_is_invalid(self):
semver_less_than_or_equal_2_0_1_condition_list = [['Android', "2.0.1", 'custom_attribute', 'semver_le']]
invalid_test_cases = ["-", ".", "..", "+", "+test", " ", "2 .0. 0",
"2.", ".0.0", "1.2.2.2", "2.x", ",",
"+build-prerelease", "2..0"]
for user_version in invalid_test_cases:
evaluator = condition_helper.CustomAttributeConditionEvaluator(
semver_less_than_or_equal_2_0_1_condition_list, {'Android': user_version}, self.mock_client_logger)
result = evaluator.evaluate(0)
custom_err_msg = "Got {} in result. Failed for user version: {}".format(result, user_version)
self.assertIsNone(result, custom_err_msg)
class ConditionDecoderTests(base.BaseTest):
def test_loads(self):
""" Test that loads correctly sets condition structure and list. """
condition_structure, condition_list = condition_helper.loads(self.config_dict['audiences'][0]['conditions'])
self.assertEqual(['and', ['or', ['or', 0]]], condition_structure)
self.assertEqual(
[['test_attribute', 'test_value_1', 'custom_attribute', None]], condition_list,
)
def test_audience_condition_deserializer_defaults(self):
""" Test that audience_condition_deserializer defaults to None."""
browserConditionSafari = {}
items = condition_helper._audience_condition_deserializer(browserConditionSafari)
self.assertIsNone(items[0])
self.assertIsNone(items[1])
self.assertIsNone(items[2])
self.assertIsNone(items[3])
class CustomAttributeConditionEvaluatorLogging(base.BaseTest):
def setUp(self):
base.BaseTest.setUp(self)
self.mock_client_logger = mock.MagicMock()
def test_evaluate__match_type__invalid(self):
log_level = 'warning'
condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'regex']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'custom_attribute',
"match": 'regex',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" uses an unknown match '
'type. You may need to upgrade to a newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_evaluate__condition_type__invalid(self):
log_level = 'warning'
condition_list = [['favorite_constellation', 'Lacerta', 'sdk_version', 'exact']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'sdk_version',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" uses an unknown condition type. '
'You may need to upgrade to a newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_exact__user_value__missing(self):
log_level = 'debug'
exact_condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition {} evaluated to UNKNOWN because '
'no value was passed for user attribute "favorite_constellation".'
).format(json.dumps(expected_condition_log))
)
def test_greater_than__user_value__missing(self):
log_level = 'debug'
gt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'gt']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'gt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition {} evaluated to UNKNOWN because no value was passed for user '
'attribute "meters_travelled".'
).format(json.dumps(expected_condition_log))
)
def test_less_than__user_value__missing(self):
log_level = 'debug'
lt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'lt']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
lt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'lt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition {} evaluated to UNKNOWN because no value was passed for user attribute '
'"meters_travelled".'
).format(json.dumps(expected_condition_log))
)
def test_substring__user_value__missing(self):
log_level = 'debug'
substring_condition_list = [['headline_text', 'buy now', 'custom_attribute', 'substring']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
substring_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'headline_text',
"value": 'buy now',
"type": 'custom_attribute',
"match": 'substring',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition {} evaluated to UNKNOWN because no value was passed for '
'user attribute "headline_text".'
).format(json.dumps(expected_condition_log))
)
def test_exists__user_value__missing(self):
exists_condition_list = [['input_value', None, 'custom_attribute', 'exists']]
user_attributes = {}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exists_condition_list, user_attributes, self.mock_client_logger
)
self.assertStrictFalse(evaluator.evaluate(0))
self.mock_client_logger.debug.assert_not_called()
self.mock_client_logger.info.assert_not_called()
self.mock_client_logger.warning.assert_not_called()
def test_exact__user_value__None(self):
log_level = 'debug'
exact_condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']]
user_attributes = {'favorite_constellation': None}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a null value was passed for user attribute '
'"favorite_constellation".'
).format(json.dumps(expected_condition_log))
)
def test_greater_than__user_value__None(self):
log_level = 'debug'
gt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'gt']]
user_attributes = {'meters_travelled': None}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'gt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a null value was passed for '
'user attribute "meters_travelled".'
).format(json.dumps(expected_condition_log))
)
def test_less_than__user_value__None(self):
log_level = 'debug'
lt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'lt']]
user_attributes = {'meters_travelled': None}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
lt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'lt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a null value was passed '
'for user attribute "meters_travelled".'
).format(json.dumps(expected_condition_log))
)
def test_substring__user_value__None(self):
log_level = 'debug'
substring_condition_list = [['headline_text', '12', 'custom_attribute', 'substring']]
user_attributes = {'headline_text': None}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
substring_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'headline_text',
"value": '12',
"type": 'custom_attribute',
"match": 'substring',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a null value was '
'passed for user attribute "headline_text".'
).format(json.dumps(expected_condition_log))
)
def test_exists__user_value__None(self):
exists_condition_list = [['input_value', None, 'custom_attribute', 'exists']]
user_attributes = {'input_value': None}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exists_condition_list, user_attributes, self.mock_client_logger
)
self.assertStrictFalse(evaluator.evaluate(0))
self.mock_client_logger.debug.assert_not_called()
self.mock_client_logger.info.assert_not_called()
self.mock_client_logger.warning.assert_not_called()
def test_exact__user_value__unexpected_type(self):
log_level = 'warning'
exact_condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']]
user_attributes = {'favorite_constellation': {}}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a value of type "{}" was passed for '
'user attribute "favorite_constellation".'
).format(json.dumps(expected_condition_log), type({}))
)
def test_greater_than__user_value__unexpected_type(self):
log_level = 'warning'
gt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'gt']]
user_attributes = {'meters_travelled': '48'}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'gt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}"'
' evaluated to UNKNOWN because a value of type "{}" was passed for user attribute '
'"meters_travelled".'
).format(json.dumps(expected_condition_log), type('48'))
)
def test_less_than__user_value__unexpected_type(self):
log_level = 'warning'
lt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'lt']]
user_attributes = {'meters_travelled': True}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
lt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'lt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}"'
' evaluated to UNKNOWN because a value of type "{}" was passed for user attribute '
'"meters_travelled".'
).format(json.dumps(expected_condition_log), type(True))
)
def test_substring__user_value__unexpected_type(self):
log_level = 'warning'
substring_condition_list = [['headline_text', '12', 'custom_attribute', 'substring']]
user_attributes = {'headline_text': 1234}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
substring_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'headline_text',
"value": '12',
"type": 'custom_attribute',
"match": 'substring',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a value of type "{}" was passed for '
'user attribute "headline_text".'
).format(json.dumps(expected_condition_log), type(1234))
)
def test_exact__user_value__infinite(self):
log_level = 'warning'
exact_condition_list = [['meters_travelled', 48, 'custom_attribute', 'exact']]
user_attributes = {'meters_travelled': float("inf")}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
self.assertIsNone(evaluator.evaluate(0))
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'exact',
}
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because the number value for '
'user attribute "meters_travelled" is not in the range [-2^53, +2^53].'
).format(json.dumps(expected_condition_log))
)
def test_greater_than__user_value__infinite(self):
log_level = 'warning'
gt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'gt']]
user_attributes = {'meters_travelled': float("nan")}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'gt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" '
'evaluated to UNKNOWN because the number value for user attribute "meters_travelled" is not'
' in the range [-2^53, +2^53].'
).format(json.dumps(expected_condition_log))
)
def test_less_than__user_value__infinite(self):
log_level = 'warning'
lt_condition_list = [['meters_travelled', 48, 'custom_attribute', 'lt']]
user_attributes = {'meters_travelled': float('-inf')}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
lt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": 48,
"type": 'custom_attribute',
"match": 'lt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" '
'evaluated to UNKNOWN because the number value for user attribute "meters_travelled" is not in '
'the range [-2^53, +2^53].'
).format(json.dumps(expected_condition_log))
)
def test_exact__user_value_type_mismatch(self):
log_level = 'warning'
exact_condition_list = [['favorite_constellation', 'Lacerta', 'custom_attribute', 'exact']]
user_attributes = {'favorite_constellation': 5}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": 'Lacerta',
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" evaluated to UNKNOWN because a value of type "{}" was passed for '
'user attribute "favorite_constellation".'
).format(json.dumps(expected_condition_log), type(5))
)
def test_exact__condition_value_invalid(self):
log_level = 'warning'
exact_condition_list = [['favorite_constellation', {}, 'custom_attribute', 'exact']]
user_attributes = {'favorite_constellation': 'Lacerta'}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": {},
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" has an unsupported condition value. You may need to upgrade to a '
'newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_exact__condition_value_infinite(self):
log_level = 'warning'
exact_condition_list = [['favorite_constellation', float('inf'), 'custom_attribute', 'exact']]
user_attributes = {'favorite_constellation': 'Lacerta'}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
exact_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'favorite_constellation',
"value": float('inf'),
"type": 'custom_attribute',
"match": 'exact',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" has an unsupported condition value. You may need to upgrade to a '
'newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_greater_than__condition_value_invalid(self):
log_level = 'warning'
gt_condition_list = [['meters_travelled', True, 'custom_attribute', 'gt']]
user_attributes = {'meters_travelled': 48}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": True,
"type": 'custom_attribute',
"match": 'gt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" has an unsupported condition value. You may need to upgrade to a '
'newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_less_than__condition_value_invalid(self):
log_level = 'warning'
gt_condition_list = [['meters_travelled', float('nan'), 'custom_attribute', 'lt']]
user_attributes = {'meters_travelled': 48}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
gt_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'meters_travelled',
"value": float('nan'),
"type": 'custom_attribute',
"match": 'lt',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" has an unsupported condition value. You may need to upgrade to a '
'newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
def test_substring__condition_value_invalid(self):
log_level = 'warning'
substring_condition_list = [['headline_text', False, 'custom_attribute', 'substring']]
user_attributes = {'headline_text': 'breaking news'}
evaluator = condition_helper.CustomAttributeConditionEvaluator(
substring_condition_list, user_attributes, self.mock_client_logger
)
expected_condition_log = {
"name": 'headline_text',
"value": False,
"type": 'custom_attribute',
"match": 'substring',
}
self.assertIsNone(evaluator.evaluate(0))
mock_log = getattr(self.mock_client_logger, log_level)
mock_log.assert_called_once_with(
(
'Audience condition "{}" has an unsupported condition value. You may need to upgrade to a '
'newer release of the Optimizely SDK.'
).format(json.dumps(expected_condition_log))
)
| 41.136503 | 118 | 0.673912 | 8,560 | 80,463 | 5.92021 | 0.032593 | 0.056436 | 0.052213 | 0.07459 | 0.944649 | 0.941275 | 0.935158 | 0.924581 | 0.913748 | 0.895278 | 0 | 0.014963 | 0.23586 | 80,463 | 1,955 | 119 | 41.157545 | 0.809254 | 0.030064 | 0 | 0.584151 | 0 | 0 | 0.142267 | 0.018271 | 0 | 0 | 0 | 0 | 0.152174 | 1 | 0.088359 | false | 0.009818 | 0.003506 | 0.002805 | 0.107994 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c52b5b7d7319a26bcaadc3ae4ecc03f5a83f10f9 | 191 | py | Python | faker/providers/lorem/de_AT/__init__.py | tristanHdez18/faker | 14cb25712e6efcb7bf8d9f30f404a7304722af6d | [
"MIT"
] | 1 | 2022-02-23T08:21:01.000Z | 2022-02-23T08:21:01.000Z | faker/providers/lorem/de_AT/__init__.py | tristanHdez18/faker | 14cb25712e6efcb7bf8d9f30f404a7304722af6d | [
"MIT"
] | 4 | 2022-02-04T17:24:59.000Z | 2022-03-29T20:02:57.000Z | faker/providers/lorem/de_AT/__init__.py | tristanHdez18/faker | 14cb25712e6efcb7bf8d9f30f404a7304722af6d | [
"MIT"
] | null | null | null | from ..de_DE import Provider as GermanProvider
class Provider(GermanProvider):
"""Implement lorem provider for ``de_DE`` locale.
Using the same as in ```de_DE```.
"""
pass
| 19.1 | 53 | 0.664921 | 25 | 191 | 4.96 | 0.64 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21466 | 191 | 9 | 54 | 21.222222 | 0.826667 | 0.418848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c547b5b022b3ee6fe4d62276194f9df2c51bf8c7 | 34 | py | Python | main.py | BuildPC/Backend | c549cfd5f86796d4e51eca51a0ca9e618044c707 | [
"MIT"
] | 1 | 2020-02-26T07:16:43.000Z | 2020-02-26T07:16:43.000Z | main.py | BuildPC/Backend | c549cfd5f86796d4e51eca51a0ca9e618044c707 | [
"MIT"
] | 11 | 2019-09-28T11:09:58.000Z | 2019-12-22T14:35:08.000Z | main.py | BuildPC/Backend | c549cfd5f86796d4e51eca51a0ca9e618044c707 | [
"MIT"
] | 1 | 2019-10-09T18:11:40.000Z | 2019-10-09T18:11:40.000Z | import sys
print(sys.executeable)
| 11.333333 | 22 | 0.823529 | 5 | 34 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 2 | 23 | 17 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
c5543c022f266c29b7a905404b8c924e6bf2f136 | 238 | py | Python | incasem/torch/loss/__init__.py | kirchhausenlab/incasem | ee9e007c5c04571e547e2fb5af5e800bd2d2b435 | [
"BSD-3-Clause"
] | null | null | null | incasem/torch/loss/__init__.py | kirchhausenlab/incasem | ee9e007c5c04571e547e2fb5af5e800bd2d2b435 | [
"BSD-3-Clause"
] | null | null | null | incasem/torch/loss/__init__.py | kirchhausenlab/incasem | ee9e007c5c04571e547e2fb5af5e800bd2d2b435 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import absolute_import
from .cross_entropy_loss_with_scaling_and_mean_reduction import CrossEntropyLossWithScalingAndMeanReduction
from .cross_entropy_loss_debug import CrossEntropyLossDebug
from .lsd_loss import LsdLoss
| 39.666667 | 107 | 0.915966 | 28 | 238 | 7.214286 | 0.607143 | 0.089109 | 0.158416 | 0.19802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 238 | 5 | 108 | 47.6 | 0.914027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d8aa6b30bb9d3564f0f8d72b79cbf283e95aace | 21,564 | py | Python | python/federatedml/protobuf/generated/boosting_tree_model_meta_pb2.py | rubenlozanoaht3m/DataDogm | cd605e8072cca31e8418830c3300657ae2fa5b16 | [
"Apache-2.0"
] | 715 | 2019-01-24T10:52:03.000Z | 2019-10-31T12:19:22.000Z | python/federatedml/protobuf/generated/boosting_tree_model_meta_pb2.py | rubenlozanoaht3m/DataDogm | cd605e8072cca31e8418830c3300657ae2fa5b16 | [
"Apache-2.0"
] | 270 | 2019-02-11T02:57:36.000Z | 2019-08-29T11:22:33.000Z | python/federatedml/protobuf/generated/boosting_tree_model_meta_pb2.py | rubenlozanoaht3m/DataDogm | cd605e8072cca31e8418830c3300657ae2fa5b16 | [
"Apache-2.0"
] | 200 | 2019-01-26T14:21:35.000Z | 2019-11-01T01:14:36.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: boosting-tree-model-meta.proto
import sys
_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='boosting-tree-model-meta.proto',
package='com.webank.ai.fate.core.mlmodel.buffer',
syntax='proto3',
serialized_options=_b('B\027BoostTreeModelMetaProto'),
serialized_pb=_b('\n\x1e\x62oosting-tree-model-meta.proto\x12&com.webank.ai.fate.core.mlmodel.buffer\"1\n\rObjectiveMeta\x12\x11\n\tobjective\x18\x01 \x01(\t\x12\r\n\x05param\x18\x02 \x03(\x01\"B\n\rCriterionMeta\x12\x18\n\x10\x63riterion_method\x18\x01 \x01(\t\x12\x17\n\x0f\x63riterion_param\x18\x02 \x03(\x01\"\xf4\x01\n\x15\x44\x65\x63isionTreeModelMeta\x12M\n\x0e\x63riterion_meta\x18\x01 \x01(\x0b\x32\x35.com.webank.ai.fate.core.mlmodel.buffer.CriterionMeta\x12\x11\n\tmax_depth\x18\x02 \x01(\x05\x12\x18\n\x10min_sample_split\x18\x03 \x01(\x05\x12\x1a\n\x12min_impurity_split\x18\x04 \x01(\x01\x12\x15\n\rmin_leaf_node\x18\x05 \x01(\x05\x12\x13\n\x0buse_missing\x18\x06 \x01(\x08\x12\x17\n\x0fzero_as_missing\x18\x07 \x01(\x08\"8\n\x0cQuantileMeta\x12\x17\n\x0fquantile_method\x18\x01 \x01(\t\x12\x0f\n\x07\x62in_num\x18\x02 \x01(\x05\"\xd5\x03\n\x15\x42oostingTreeModelMeta\x12P\n\ttree_meta\x18\x01 \x01(\x0b\x32=.com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta\x12\x15\n\rlearning_rate\x18\x02 \x01(\x01\x12\x11\n\tnum_trees\x18\x03 \x01(\x05\x12K\n\rquantile_meta\x18\x04 \x01(\x0b\x32\x34.com.webank.ai.fate.core.mlmodel.buffer.QuantileMeta\x12M\n\x0eobjective_meta\x18\x05 \x01(\x0b\x32\x35.com.webank.ai.fate.core.mlmodel.buffer.ObjectiveMeta\x12\x11\n\ttask_type\x18\x06 \x01(\t\x12\x18\n\x10n_iter_no_change\x18\x07 \x01(\x08\x12\x0b\n\x03tol\x18\x08 \x01(\x01\x12\x13\n\x0buse_missing\x18\t \x01(\x08\x12\x17\n\x0fzero_as_missing\x18\n \x01(\x08\x12\x11\n\twork_mode\x18\x0b \x01(\t\x12\x0e\n\x06module\x18\x0c \x01(\t\x12\x19\n\x11\x62oosting_strategy\x18\r \x01(\t\"w\n\x0fTransformerMeta\x12P\n\ttree_meta\x18\x01 \x01(\x0b\x32=.com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta\x12\x12\n\nmodel_name\x18\x02 \x01(\tB\x19\x42\x17\x42oostTreeModelMetaProtob\x06proto3')
)
_OBJECTIVEMETA = _descriptor.Descriptor(
name='ObjectiveMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.ObjectiveMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='objective', full_name='com.webank.ai.fate.core.mlmodel.buffer.ObjectiveMeta.objective', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='param', full_name='com.webank.ai.fate.core.mlmodel.buffer.ObjectiveMeta.param', index=1,
number=2, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=74,
serialized_end=123,
)
_CRITERIONMETA = _descriptor.Descriptor(
name='CriterionMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.CriterionMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='criterion_method',
full_name='com.webank.ai.fate.core.mlmodel.buffer.CriterionMeta.criterion_method',
index=0,
number=1,
type=9,
cpp_type=9,
label=1,
has_default_value=False,
default_value=_b("").decode('utf-8'),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
serialized_options=None,
file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='criterion_param',
full_name='com.webank.ai.fate.core.mlmodel.buffer.CriterionMeta.criterion_param',
index=1,
number=2,
type=1,
cpp_type=5,
label=3,
has_default_value=False,
default_value=[],
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
serialized_options=None,
file=DESCRIPTOR),
],
extensions=[],
nested_types=[],
enum_types=[],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[],
serialized_start=125,
serialized_end=191,
)
_DECISIONTREEMODELMETA = _descriptor.Descriptor(
name='DecisionTreeModelMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='criterion_meta', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.criterion_meta', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_depth', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.max_depth', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_sample_split', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.min_sample_split', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_impurity_split', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.min_impurity_split', index=3,
number=4, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_leaf_node', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.min_leaf_node', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='use_missing', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.use_missing', index=5,
number=6, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='zero_as_missing', full_name='com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta.zero_as_missing', index=6,
number=7, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=194,
serialized_end=438,
)
_QUANTILEMETA = _descriptor.Descriptor(
name='QuantileMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.QuantileMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='quantile_method',
full_name='com.webank.ai.fate.core.mlmodel.buffer.QuantileMeta.quantile_method',
index=0,
number=1,
type=9,
cpp_type=9,
label=1,
has_default_value=False,
default_value=_b("").decode('utf-8'),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
serialized_options=None,
file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='bin_num',
full_name='com.webank.ai.fate.core.mlmodel.buffer.QuantileMeta.bin_num',
index=1,
number=2,
type=5,
cpp_type=1,
label=1,
has_default_value=False,
default_value=0,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
serialized_options=None,
file=DESCRIPTOR),
],
extensions=[],
nested_types=[],
enum_types=[],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[],
serialized_start=440,
serialized_end=496,
)
_BOOSTINGTREEMODELMETA = _descriptor.Descriptor(
name='BoostingTreeModelMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='tree_meta', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.tree_meta', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='learning_rate', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.learning_rate', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='num_trees', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.num_trees', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='quantile_meta', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.quantile_meta', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='objective_meta', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.objective_meta', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='task_type', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.task_type', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='n_iter_no_change', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.n_iter_no_change', index=6,
number=7, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tol', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.tol', index=7,
number=8, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='use_missing', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.use_missing', index=8,
number=9, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='zero_as_missing', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.zero_as_missing', index=9,
number=10, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='work_mode', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.work_mode', index=10,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='module', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.module', index=11,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='boosting_strategy', full_name='com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta.boosting_strategy', index=12,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=499,
serialized_end=968,
)
_TRANSFORMERMETA = _descriptor.Descriptor(
name='TransformerMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.TransformerMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='tree_meta', full_name='com.webank.ai.fate.core.mlmodel.buffer.TransformerMeta.tree_meta', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='model_name', full_name='com.webank.ai.fate.core.mlmodel.buffer.TransformerMeta.model_name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=970,
serialized_end=1089,
)
_DECISIONTREEMODELMETA.fields_by_name['criterion_meta'].message_type = _CRITERIONMETA
_BOOSTINGTREEMODELMETA.fields_by_name['tree_meta'].message_type = _DECISIONTREEMODELMETA
_BOOSTINGTREEMODELMETA.fields_by_name['quantile_meta'].message_type = _QUANTILEMETA
_BOOSTINGTREEMODELMETA.fields_by_name['objective_meta'].message_type = _OBJECTIVEMETA
_TRANSFORMERMETA.fields_by_name['tree_meta'].message_type = _BOOSTINGTREEMODELMETA
DESCRIPTOR.message_types_by_name['ObjectiveMeta'] = _OBJECTIVEMETA
DESCRIPTOR.message_types_by_name['CriterionMeta'] = _CRITERIONMETA
DESCRIPTOR.message_types_by_name['DecisionTreeModelMeta'] = _DECISIONTREEMODELMETA
DESCRIPTOR.message_types_by_name['QuantileMeta'] = _QUANTILEMETA
DESCRIPTOR.message_types_by_name['BoostingTreeModelMeta'] = _BOOSTINGTREEMODELMETA
DESCRIPTOR.message_types_by_name['TransformerMeta'] = _TRANSFORMERMETA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
ObjectiveMeta = _reflection.GeneratedProtocolMessageType('ObjectiveMeta', (_message.Message,), {
'DESCRIPTOR': _OBJECTIVEMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.ObjectiveMeta)
})
_sym_db.RegisterMessage(ObjectiveMeta)
CriterionMeta = _reflection.GeneratedProtocolMessageType('CriterionMeta', (_message.Message,), {
'DESCRIPTOR': _CRITERIONMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.CriterionMeta)
})
_sym_db.RegisterMessage(CriterionMeta)
DecisionTreeModelMeta = _reflection.GeneratedProtocolMessageType('DecisionTreeModelMeta', (_message.Message,), {
'DESCRIPTOR': _DECISIONTREEMODELMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.DecisionTreeModelMeta)
})
_sym_db.RegisterMessage(DecisionTreeModelMeta)
QuantileMeta = _reflection.GeneratedProtocolMessageType('QuantileMeta', (_message.Message,), {
'DESCRIPTOR': _QUANTILEMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.QuantileMeta)
})
_sym_db.RegisterMessage(QuantileMeta)
BoostingTreeModelMeta = _reflection.GeneratedProtocolMessageType('BoostingTreeModelMeta', (_message.Message,), {
'DESCRIPTOR': _BOOSTINGTREEMODELMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.BoostingTreeModelMeta)
})
_sym_db.RegisterMessage(BoostingTreeModelMeta)
TransformerMeta = _reflection.GeneratedProtocolMessageType('TransformerMeta', (_message.Message,), {
'DESCRIPTOR': _TRANSFORMERMETA,
'__module__': 'boosting_tree_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.TransformerMeta)
})
_sym_db.RegisterMessage(TransformerMeta)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 47.289474 | 1,817 | 0.691616 | 2,575 | 21,564 | 5.528544 | 0.08699 | 0.050576 | 0.036316 | 0.049522 | 0.743959 | 0.725133 | 0.711647 | 0.702796 | 0.688255 | 0.683759 | 0 | 0.034977 | 0.191245 | 21,564 | 455 | 1,818 | 47.393407 | 0.781307 | 0.034919 | 0 | 0.673031 | 1 | 0.002387 | 0.251743 | 0.214241 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.011933 | 0 | 0.011933 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3da9afae15bfb27d779d753004ea35b41f7370ee | 33 | py | Python | source/cognidron/gui/controllers/borrame.py | dregmli/cognidron | f5e3a1e2299699e25b9c38b9ef2056e1b59302c6 | [
"Apache-2.0"
] | 1 | 2019-07-21T03:59:20.000Z | 2019-07-21T03:59:20.000Z | source/cognidron/gui/controllers/borrame.py | dregmli/cognidron | f5e3a1e2299699e25b9c38b9ef2056e1b59302c6 | [
"Apache-2.0"
] | null | null | null | source/cognidron/gui/controllers/borrame.py | dregmli/cognidron | f5e3a1e2299699e25b9c38b9ef2056e1b59302c6 | [
"Apache-2.0"
] | null | null | null |
print("Hello world from PyCharm")
3ddc971bc6f716db7624a6db688ba76a72361cbe | 2,573 | py | Python | tests/test_filters.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 36 | 2019-06-15T15:54:45.000Z | 2022-03-23T06:33:41.000Z | tests/test_filters.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 5 | 2020-05-07T13:25:08.000Z | 2021-04-18T19:41:22.000Z | tests/test_filters.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 2 | 2020-06-09T06:47:25.000Z | 2021-03-13T22:45:31.000Z | import pytest
from afancontrol.filters import MovingMedianFilter, MovingQuantileFilter, NullFilter
from afancontrol.temp import TempCelsius, TempStatus
def make_temp_status(temp):
return TempStatus(
min=TempCelsius(30),
max=TempCelsius(50),
temp=TempCelsius(temp),
panic=None,
threshold=None,
is_panic=False,
is_threshold=False,
)
@pytest.mark.parametrize(
"filter",
[
NullFilter(),
MovingMedianFilter(window_size=3),
MovingQuantileFilter(0.5, window_size=3),
],
)
def test_none(filter):
with filter:
assert filter.apply(None) is None
@pytest.mark.parametrize(
"filter",
[
NullFilter(),
MovingMedianFilter(window_size=3),
MovingQuantileFilter(0.5, window_size=3),
],
)
def test_single_point(filter):
with filter:
assert filter.apply(make_temp_status(42.0)) == make_temp_status(42.0)
def test_moving_quantile():
f = MovingQuantileFilter(0.8, window_size=10)
with f:
assert f.apply(make_temp_status(42.0)) == make_temp_status(42.0)
assert f.apply(make_temp_status(45.0)) == make_temp_status(45.0)
assert f.apply(make_temp_status(47.0)) == make_temp_status(47.0)
assert f.apply(make_temp_status(123.0)) == make_temp_status(123.0)
assert f.apply(make_temp_status(46.0)) == make_temp_status(123.0)
assert f.apply(make_temp_status(49.0)) == make_temp_status(49.0)
assert f.apply(make_temp_status(51.0)) == make_temp_status(51.0)
assert f.apply(None) == make_temp_status(123.0)
assert f.apply(None) is None
assert f.apply(make_temp_status(51.0)) is None
assert f.apply(make_temp_status(53.0)) is None
def test_moving_median():
f = MovingMedianFilter(window_size=3)
with f:
assert f.apply(make_temp_status(42.0)) == make_temp_status(42.0)
assert f.apply(make_temp_status(45.0)) == make_temp_status(45.0)
assert f.apply(make_temp_status(47.0)) == make_temp_status(45.0)
assert f.apply(make_temp_status(123.0)) == make_temp_status(47.0)
assert f.apply(make_temp_status(46.0)) == make_temp_status(47.0)
assert f.apply(make_temp_status(49.0)) == make_temp_status(49.0)
assert f.apply(make_temp_status(51.0)) == make_temp_status(49.0)
assert f.apply(None) == make_temp_status(51.0)
assert f.apply(None) is None
assert f.apply(make_temp_status(51.0)) is None
assert f.apply(make_temp_status(53.0)) == make_temp_status(53.0)
| 34.306667 | 84 | 0.670424 | 382 | 2,573 | 4.277487 | 0.136126 | 0.186047 | 0.325581 | 0.22093 | 0.750306 | 0.739902 | 0.70257 | 0.70257 | 0.689718 | 0.659119 | 0 | 0.064941 | 0.204042 | 2,573 | 74 | 85 | 34.77027 | 0.73291 | 0 | 0 | 0.412698 | 0 | 0 | 0.004664 | 0 | 0 | 0 | 0 | 0 | 0.380952 | 1 | 0.079365 | false | 0 | 0.047619 | 0.015873 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9a8fc0ed84b5ce73d1e75f9044dd98c7a02dcae5 | 411 | py | Python | python/phonenumbers/data/alt_format_52.py | rodgar-nvkz/python-phonenumbers | 4c7c4892211dbc9bc328bc3356b03853eaf993dc | [
"Apache-2.0"
] | 2,424 | 2015-01-05T05:34:45.000Z | 2022-03-28T22:37:53.000Z | python/phonenumbers/data/alt_format_52.py | rodgar-nvkz/python-phonenumbers | 4c7c4892211dbc9bc328bc3356b03853eaf993dc | [
"Apache-2.0"
] | 166 | 2015-01-30T23:59:18.000Z | 2022-03-14T21:08:42.000Z | Lib/site-packages/phonenumbers/data/alt_format_52.py | PsychedVic/Portafolio | 4bd59d19de41fbea5317d4f2b9e6219ea0359945 | [
"bzip2-1.0.6"
] | 345 | 2015-01-02T00:33:27.000Z | 2022-03-26T13:06:57.000Z | """Auto-generated file, do not edit by hand. 52 metadata"""
from ..phonemetadata import NumberFormat
PHONE_ALT_FORMAT_52 = [NumberFormat(pattern='(\\d{2})(\\d{2})(\\d{2})(\\d{2})(\\d{2})', format='\\1 \\2 \\3 \\4 \\5', leading_digits_pattern=['33|5[56]|81']), NumberFormat(pattern='(\\d{3})(\\d{3})(\\d{2})(\\d{2})', format='\\1 \\2 \\3 \\4', leading_digits_pattern=['[24679]|3[0-2457-9]|5[089]|8[02-46-9]'])]
| 82.2 | 308 | 0.59854 | 72 | 411 | 3.319444 | 0.513889 | 0.058577 | 0.062762 | 0.083682 | 0.142259 | 0.142259 | 0.142259 | 0.142259 | 0.117155 | 0 | 0 | 0.133508 | 0.07056 | 411 | 4 | 309 | 102.75 | 0.492147 | 0.128954 | 0 | 0 | 1 | 1 | 0.4375 | 0.309659 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9ac83c03ac9c90bf072834026198c2f0c837f3f0 | 3,177 | py | Python | huaweicloud-sdk-moderation/huaweicloudsdkmoderation/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 64 | 2020-06-12T07:05:07.000Z | 2022-03-30T03:32:50.000Z | huaweicloud-sdk-moderation/huaweicloudsdkmoderation/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 11 | 2020-07-06T07:56:54.000Z | 2022-01-11T11:14:40.000Z | huaweicloud-sdk-moderation/huaweicloudsdkmoderation/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 24 | 2020-06-08T11:42:13.000Z | 2022-03-04T06:44:08.000Z | # coding: utf-8
from __future__ import absolute_import
# import ModerationClient
from huaweicloudsdkmoderation.v2.moderation_client import ModerationClient
from huaweicloudsdkmoderation.v2.moderation_async_client import ModerationAsyncClient
# import models into sdk package
from huaweicloudsdkmoderation.v2.model.check_result_items_body import CheckResultItemsBody
from huaweicloudsdkmoderation.v2.model.check_result_result_body import CheckResultResultBody
from huaweicloudsdkmoderation.v2.model.check_task_jobs_items_body import CheckTaskJobsItemsBody
from huaweicloudsdkmoderation.v2.model.image_batch_moderation_req import ImageBatchModerationReq
from huaweicloudsdkmoderation.v2.model.image_batch_moderation_result_body import ImageBatchModerationResultBody
from huaweicloudsdkmoderation.v2.model.image_detection_req import ImageDetectionReq
from huaweicloudsdkmoderation.v2.model.image_detection_result_ad_detail import ImageDetectionResultAdDetail
from huaweicloudsdkmoderation.v2.model.image_detection_result_body import ImageDetectionResultBody
from huaweicloudsdkmoderation.v2.model.image_detection_result_detail import ImageDetectionResultDetail
from huaweicloudsdkmoderation.v2.model.image_detection_result_detail_face_detail import ImageDetectionResultDetailFaceDetail
from huaweicloudsdkmoderation.v2.model.image_detection_result_detail_politics import ImageDetectionResultDetailPolitics
from huaweicloudsdkmoderation.v2.model.image_detection_result_simple_detail import ImageDetectionResultSimpleDetail
from huaweicloudsdkmoderation.v2.model.run_check_result_request import RunCheckResultRequest
from huaweicloudsdkmoderation.v2.model.run_check_result_response import RunCheckResultResponse
from huaweicloudsdkmoderation.v2.model.run_check_task_jobs_request import RunCheckTaskJobsRequest
from huaweicloudsdkmoderation.v2.model.run_check_task_jobs_response import RunCheckTaskJobsResponse
from huaweicloudsdkmoderation.v2.model.run_image_batch_moderation_request import RunImageBatchModerationRequest
from huaweicloudsdkmoderation.v2.model.run_image_batch_moderation_response import RunImageBatchModerationResponse
from huaweicloudsdkmoderation.v2.model.run_image_moderation_request import RunImageModerationRequest
from huaweicloudsdkmoderation.v2.model.run_image_moderation_response import RunImageModerationResponse
from huaweicloudsdkmoderation.v2.model.run_task_sumbit_request import RunTaskSumbitRequest
from huaweicloudsdkmoderation.v2.model.run_task_sumbit_response import RunTaskSumbitResponse
from huaweicloudsdkmoderation.v2.model.run_text_moderation_request import RunTextModerationRequest
from huaweicloudsdkmoderation.v2.model.run_text_moderation_response import RunTextModerationResponse
from huaweicloudsdkmoderation.v2.model.task_sumbit_req import TaskSumbitReq
from huaweicloudsdkmoderation.v2.model.task_sumbit_response_result import TaskSumbitResponseResult
from huaweicloudsdkmoderation.v2.model.text_detection_items_req import TextDetectionItemsReq
from huaweicloudsdkmoderation.v2.model.text_detection_req import TextDetectionReq
from huaweicloudsdkmoderation.v2.model.text_detection_response_result import TextDetectionResponseResult
| 81.461538 | 124 | 0.924772 | 328 | 3,177 | 8.643293 | 0.207317 | 0.306173 | 0.328042 | 0.358025 | 0.571076 | 0.556966 | 0.380952 | 0.141446 | 0 | 0 | 0 | 0.010547 | 0.045011 | 3,177 | 38 | 125 | 83.605263 | 0.923863 | 0.021404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9afc6a1c5c210ca14994a46902f4239791f2f3cf | 335 | py | Python | simtbx/diffBragg/refiners/crystal_systems/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 155 | 2016-11-23T12:52:16.000Z | 2022-03-31T15:35:44.000Z | simtbx/diffBragg/refiners/crystal_systems/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 590 | 2016-12-10T11:31:18.000Z | 2022-03-30T23:10:09.000Z | simtbx/diffBragg/refiners/crystal_systems/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 115 | 2016-11-15T08:17:28.000Z | 2022-02-09T15:30:14.000Z | from __future__ import division
from .manager import CrystalSystemManager # special import
from .tetragonal import TetragonalManager # special import
from .monoclinic import MonoclinicManager # special import
from .hexagonal import HexagonalManager # special import
from .orthorhombic import OrthorhombicManager # special import
| 41.875 | 63 | 0.835821 | 34 | 335 | 8.117647 | 0.441176 | 0.235507 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134328 | 335 | 7 | 64 | 47.857143 | 0.951724 | 0.220896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b168401167180d3dcec1ce28c5b1a169da9352c8 | 21,193 | py | Python | pycelsiusnetwork/celsius.py | eitchtee/pyCelsiusNetwork | 7aa36687334c43989ff3318bde336d0ec663eb9c | [
"MIT"
] | 4 | 2020-09-17T18:30:08.000Z | 2021-03-15T19:28:13.000Z | pycelsiusnetwork/celsius.py | eitchtee/pyCelsiusNetwork | 7aa36687334c43989ff3318bde336d0ec663eb9c | [
"MIT"
] | null | null | null | pycelsiusnetwork/celsius.py | eitchtee/pyCelsiusNetwork | 7aa36687334c43989ff3318bde336d0ec663eb9c | [
"MIT"
] | 1 | 2020-09-17T18:30:12.000Z | 2020-09-17T18:30:12.000Z | from typing import Optional, Any
import requests
from .env import Env
from .exceptions import AbstractionFailure, CelsiusNetworkHTTPError
from .utils import get_key, filter_transactions
class CelsiusNetwork:
    def __init__(self,
                 partner_token: str,
                 api_key: str,
                 environment: Env = Env.PRODUCTION,
                 silent: bool = False):
        """Initializes pyCelsiusNetwork
        Args:
            partner_token (str): A partner token provided by Celsius Network
            api_key (str): An API key generated by the user on the app
            environment (Env): Optional. Can be either PRODUCTION or STAGING.
                Changes API calls' base URL to match the provided
                environment. Defaults to PRODUCTION
            silent (bool): Global. If True, silently returns None instead of
                raising custom Exceptions. Can be overridden on a
                per-function basis.
        """
        self._token = partner_token
        self._key = api_key
        if environment == Env.PRODUCTION:
            self._base_url = "https://wallet-api.celsius.network"
        elif environment == Env.STAGING:
            self._base_url = "https://wallet-api.staging.celsius.network"
        else:
            self._base_url = "https://wallet-api.celsius.network"
        self.headers = {
            'X-Cel-Partner-Token': self._token,
            'X-Cel-Api-Key': self._key
        }
        self.silent = silent
def get_interest_rate(self,
coin: str = None,
raw: bool = False,
silent: bool = None):
"""Fetch interest rates
Args:
coin (str): Optional. A 3-letter code representing a cryptocoin
raw (bool): If True returns the raw JSON response given by the server
silent (bool): If True silently returns None instead of raising custom Exceptions
Returns:
A dict with interest rates for each coin
i.e. {'ETH': '0.0445', 'BTC': '0.0441'}
If coin is given, will return a float with the interest rate for that coin
Raises:
CelsiusNetworkHTTPError
AbstractionFailure
"""
silent = silent if silent is not None else self.silent
coin = coin.upper() if coin else None
url = f"{self._base_url}" \
"/util" \
"/interest" \
"/rates"
response = requests.request("GET", url)
if silent and not response.ok:
return None
elif not silent and not response.ok:
raise CelsiusNetworkHTTPError(response)
json = response.json()
if raw:
return json
else:
rates = get_key('interestRates', json=json, silent=silent)
rates_list = [{'coin': x['coin'], 'rate': x['rate']} for x in rates]
rates_dict = {item.pop("coin"): item['rate'] for item in rates_list}
if coin:
return float(rates_dict[coin])
else:
return rates_dict
def get_wallet_balance(self,
raw: bool = False,
silent: bool = None):
"""Fetch account balance
Args:
raw (bool): If True returns the raw JSON response given by the server
silent (bool): If True silently returns None instead of raising custom Exceptions
Returns:
A dict with a balance for each coin, even empty ones.
i.e. {'eth': '0', 'btc': '0.00315111', 'dash': '0'}
Raises:
CelsiusNetworkHTTPError
AbstractionFailure
"""
silent = silent if silent is not None else self.silent
url = f"{self._base_url}" \
"/wallet" \
"/balance"
response = requests.request("GET", url, headers=self.headers)
if silent and not response.ok:
return None
elif not silent and not response.ok:
raise CelsiusNetworkHTTPError(response)
json = response.json()
if raw:
return json
else:
return get_key('balance', json=json, silent=silent)
def get_coin_balance(self,
coin: str,
return_type: str = 'both',
raw: bool = False,
silent: bool = None):
        """Fetch account balance for a specific coin
Args:
coin (str): A 3-letter code representing a cryptocoin
return_type (str): Specify what you want to get. Can be: 'in_coin' for amount in coin, 'in_usd', for amount in usd and 'both', for a dict containing both values
raw (bool): If True returns the raw JSON response given by the server
silent (bool): If True silently returns None instead of raising custom Exceptions
Returns:
            Either a number with the balance for the specified coin (in USD
            or in the coin itself), or a dict with both values.
Raises:
CelsiusNetworkHTTPError
AbstractionFailure
"""
coin = coin.upper()
silent = silent if silent is not None else self.silent
return_type = return_type.lower()
url = f"{self._base_url}" \
f"/wallet" \
f"/{coin}" \
f"/balance"
response = requests.request("GET", url, headers=self.headers)
if silent and not response.ok:
return None
elif not silent and not response.ok:
raise CelsiusNetworkHTTPError(response)
json = response.json()
if raw:
return json
else:
if return_type == 'in_coin':
in_coin = get_key('amount', json=json, silent=silent)
return in_coin
elif return_type == 'in_usd':
in_usd = get_key('amount_in_usd', json=json, silent=silent)
return in_usd
elif return_type == 'both':
in_coin = get_key('amount', json=json, silent=silent)
in_usd = get_key('amount_in_usd', json=json, silent=silent)
return {'in_coin': in_coin,
'in_usd': in_usd}
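
# Each wrapper above repeats the same silent-vs-raise branch after the HTTP call.
# A minimal standalone sketch of that pattern; `HTTPError` and `FakeResponse`
# are illustrative stand-ins, not part of the real client:

```python
class HTTPError(Exception):
    """Illustrative stand-in for CelsiusNetworkHTTPError."""


class FakeResponse:
    """Illustrative stand-in for a `requests.Response` with only `.ok`."""

    def __init__(self, ok):
        self.ok = ok


def check_response(response, silent):
    """Return None on failure when silent, raise otherwise, pass through on success."""
    if not response.ok:
        if silent:
            return None
        raise HTTPError(response)
    return response
```

# Centralising this check would remove the duplicated four-line branch from every method.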

    def get_transactions(self,
                         raw: bool = False,
                         depaginate: bool = True,
                         reverse: bool = False,
                         silent: bool = None,
                         **kwargs):
        """Fetch all transactions on an account

        Args:
            depaginate (bool): Automatically fetch all results in the next pages of the response. Defaults to True
            reverse (bool): Reverse the results, from newest first to oldest first. Defaults to False
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Keyword Args:
            page (int): The page you want to fetch or start depagination from. Defaults to 1
            per_page (int): The number of results per page. Only used if depaginate is False or raw is True. Defaults to 100
            dt_from (str/datetime): Optional. Inclusive. ISO-compliant date string or datetime object. Only return results after or equal to this date
            dt_to (str/datetime): Optional. Inclusive. ISO-compliant date string or datetime object. Only return results before or equal to this date
            amount_bigger_than (float/int): Optional. Inclusive. Only return results with amounts bigger than or equal to this
            amount_lower_than (float/int): Optional. Inclusive. Only return results with amounts lower than or equal to this
            state (str): Optional. Only return results with a 'state' value equal to this
            nature (str): Optional. Only return results with a 'nature' value equal to this

        Returns:
            A list of dicts containing transaction information

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        page = kwargs.get('page') or 1
        per_page = kwargs.get('per_page') or 100
        silent = silent if silent is not None else self.silent
        # Filter options
        dt_from = kwargs.get('dt_from')
        dt_to = kwargs.get('dt_to')
        amount_bigger_than = kwargs.get('amount_bigger_than')
        amount_lower_than = kwargs.get('amount_lower_than')
        state = kwargs.get('state')
        nature = kwargs.get('nature')
        url = f"{self._base_url}" \
              f"/wallet" \
              f"/transactions?page={page}&per_page={per_page}"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        elif depaginate:
            # Depaginate results and return them as one list
            result = []
            try:
                result += json['record']
                pagination = json['pagination']
                if pagination['pages'] > page:
                    for next_page in range(
                            pagination['current'] + 1, pagination['pages'] + 1):
                        url = f"{self._base_url}" \
                              f"/wallet" \
                              f"/transactions?page={next_page}&per_page={per_page}"
                        response = requests.request("GET", url,
                                                    headers=self.headers)
                        json = response.json()
                        result += json['record']
            except KeyError:
                if silent:
                    return None
                else:
                    raise AbstractionFailure(json=json)
            if reverse:
                result.reverse()
            return filter_transactions(result,
                                       dt_from,
                                       dt_to,
                                       amount_bigger_than,
                                       amount_lower_than,
                                       state,
                                       nature)
        else:
            return filter_transactions(get_key('record', json=json, silent=silent),
                                       dt_from,
                                       dt_to,
                                       amount_bigger_than,
                                       amount_lower_than,
                                       state,
                                       nature)
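
# The depagination loop above can be isolated into a small helper. A sketch of the
# same collect-all-pages logic; `fetch_page(page)` is a hypothetical stand-in for
# the HTTP call and must return a dict shaped like the API response:

```python
def depaginate(fetch_page):
    """Collect 'record' entries across all pages.

    `fetch_page(page)` must return a payload of the form
    {'record': [...], 'pagination': {'current': n, 'pages': total}}.
    """
    result = []
    page = 1
    while True:
        payload = fetch_page(page)
        result += payload['record']
        pagination = payload['pagination']
        if pagination['current'] >= pagination['pages']:
            return result
        # Trust the server's reported current page rather than a local counter
        page = pagination['current'] + 1
```

# With such a helper, `get_transactions` and `get_transactions_for_coin` would only
# differ in the URL they build.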

    def get_transactions_for_coin(self,
                                  coin: str,
                                  raw: bool = False,
                                  depaginate: bool = True,
                                  reverse: bool = False,
                                  silent: bool = None,
                                  **kwargs):
        """Fetch all transactions for a specific coin

        Args:
            coin (str): A 3-letter code representing a cryptocoin
            depaginate (bool): Automatically fetch all results in the next pages of the response. Defaults to True
            reverse (bool): Reverse the results, from newest first to oldest first. Defaults to False
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Keyword Args:
            page (int): The page you want to fetch or start depagination from. Defaults to 1
            per_page (int): The number of results per page. Only used if depaginate is False or raw is True. Defaults to 100
            dt_from (str/datetime): Optional. Inclusive. ISO-compliant date string or datetime object. Only return results after or equal to this date
            dt_to (str/datetime): Optional. Inclusive. ISO-compliant date string or datetime object. Only return results before or equal to this date
            amount_bigger_than (float/int): Optional. Inclusive. Only return results with amounts bigger than or equal to this
            amount_lower_than (float/int): Optional. Inclusive. Only return results with amounts lower than or equal to this
            state (str): Optional. Only return results with a 'state' value equal to this
            nature (str): Optional. Only return results with a 'nature' value equal to this

        Returns:
            A list of dicts containing transaction information

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        coin = coin.upper()
        page = kwargs.get('page') or 1
        per_page = kwargs.get('per_page') or 100
        silent = silent if silent is not None else self.silent
        # Filter options
        dt_from = kwargs.get('dt_from')
        dt_to = kwargs.get('dt_to')
        amount_bigger_than = kwargs.get('amount_bigger_than')
        amount_lower_than = kwargs.get('amount_lower_than')
        state = kwargs.get('state')
        nature = kwargs.get('nature')
        url = f"{self._base_url}" \
              f"/wallet" \
              f"/{coin}" \
              f"/transactions?page={page}&per_page={per_page}"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        elif depaginate:
            # Depaginate results and return them as one list
            result = []
            try:
                result += json['record']
                pagination = json['pagination']
                if pagination['pages'] > page:
                    for next_page in range(
                            pagination['current'] + 1, pagination['pages'] + 1):
                        url = f"{self._base_url}" \
                              f"/wallet" \
                              f"/{coin}" \
                              f"/transactions?page={next_page}&per_page={per_page}"
                        response = requests.request("GET", url,
                                                    headers=self.headers)
                        json = response.json()
                        result += json['record']
            except KeyError:
                if silent:
                    return None
                else:
                    raise AbstractionFailure(json=json)
            if reverse:
                result.reverse()
            return filter_transactions(result,
                                       dt_from,
                                       dt_to,
                                       amount_bigger_than,
                                       amount_lower_than,
                                       state,
                                       nature)
        else:
            return filter_transactions(get_key('record', json=json, silent=silent),
                                       dt_from,
                                       dt_to,
                                       amount_bigger_than,
                                       amount_lower_than,
                                       state,
                                       nature)

    def get_deposit_adress_for_coin(self,
                                    coin: str,
                                    raw: bool = False,
                                    silent: bool = None):
        """Fetch the deposit address for a specific coin

        Args:
            coin (str): A 3-letter code representing a cryptocoin
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Returns:
            A string representing the deposit address for adding funds in the
            specified coin to the Celsius wallet

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        coin = coin.upper()
        silent = silent if silent is not None else self.silent
        url = f"{self._base_url}" \
              "/wallet" \
              f"/{coin}" \
              "/deposit"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        else:
            return get_key('address', json=json, silent=silent)

    def get_interest_summary(self,
                             coin: str = None,
                             raw: bool = False,
                             silent: bool = None):
        """Fetch a summary of all interest gained on Celsius Network, by coin

        Args:
            coin (str): Optional. A 3-letter code representing a cryptocoin
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Returns:
            A dict of dicts with all interest gained, divided by coin.
            Includes 0 interest.
            i.e. {'BTC': {'amount': '0.00002348', 'amount_usd': 0.27939308701579496, 'amount_cel': 0},
                  'ETH': {'amount': 0, 'amount_usd': 0, 'amount_cel': 0, 'coin': 'ETH'}}
            If a coin argument is given, only the dictionary for that coin is returned.
            i.e. >> get_interest_summary('ETH')
                 {'amount': 0, 'amount_usd': 0, 'amount_cel': 0, 'coin': 'ETH'}

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        coin = coin.upper() if coin else None
        silent = silent if silent is not None else self.silent
        url = f"{self._base_url}" \
              "/wallet" \
              "/interest"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        elif coin:
            return get_key('interest', coin, json=json, silent=silent)
        else:
            return get_key('interest', json=json, silent=silent)

    def get_kyc_status(self,
                       raw: bool = False,
                       silent: bool = None):
        """Fetch KYC status for the API key owner (a.k.a. the user)

        Args:
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Returns:
            An upper-case string with the status. Can be:
                PENDING   | Waiting on user to provide documents for verification
                COMPLETED | User has provided documents and is waiting to be verified
                PASSED    | User was successfully verified
                REJECTED  | User has failed verification

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        silent = silent if silent is not None else self.silent
        url = f"{self._base_url}" \
              "/kyc"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        else:
            return get_key('status', json=json, silent=silent)

    def get_supported_coins(self,
                            raw: bool = False,
                            silent: bool = None):
        """Fetch a list of coins supported by Celsius Network

        Args:
            raw (bool): If True returns the raw JSON response given by the server
            silent (bool): If True silently returns None instead of raising custom Exceptions

        Returns:
            A list containing 3-letter codes for all cryptocoins supported by Celsius Network
            i.e. ['ETH', 'BTC', 'USDC']

        Raises:
            CelsiusNetworkHTTPError
            AbstractionFailure
        """
        silent = silent if silent is not None else self.silent
        url = f"{self._base_url}" \
              "/util" \
              "/supported_currencies"
        response = requests.request("GET", url, headers=self.headers)
        if silent and not response.ok:
            return None
        elif not silent and not response.ok:
            raise CelsiusNetworkHTTPError(response)
        json = response.json()
        if raw:
            return json
        else:
            return get_key('currencies', json=json, silent=silent)
# --- tests/conftest.py — shlomihod/smartnoise-sdk-synth (MIT) ---
from .setup.dataloader import download_data_files
download_data_files()

# --- tests/test_dataflow/test_dataset/test_path.py — alexandreMayerowitz/playground-plums (MIT) ---
import pytest
from plums.commons.path import Path
from plums.dataflow.utils.path import PathResolver

def test_resolver_init():
    resolver = PathResolver('data/images/{dataset}/{aoi}/{source}/{tile}.jpg')
    assert resolver._regex.pattern \
        == r'data/images/(?P<dataset>[^/]+)/(?P<aoi>[^/]+)/(?P<source>[^/]+)/(?P<tile>[^/]+)\.jpg'
    assert resolver._prefix == Path('data/images/')

    resolver = PathResolver('/home/user/{dataset}/{aoi}/{source}/{tile}.jpg')
    assert resolver._regex.pattern \
        == r'/home/user/(?P<dataset>[^/]+)/(?P<aoi>[^/]+)/(?P<source>[^/]+)/(?P<tile>[^/]+)\.jpg'
    assert resolver._prefix == Path('/home/user')


def test_degenerate(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/dataset_0/labeled/tile_83.jpg')
    with pytest.raises(OSError, match='Degenerate path pattern points to a non-existing file'):
        _ = list(resolver.find(root))

    resolver = PathResolver('data/images/dataset_0/labeled/tile_23.jpg')
    resolved = list(resolver.find(root))
    assert len(resolved) == 1
    assert resolved[0] == root / 'data/images/dataset_0/labeled/tile_23.jpg'


def test_absolute_degenerate(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver(str(root / 'data/images/dataset_0/labeled/tile_83.jpg'))
    with pytest.raises(OSError, match='Degenerate path pattern points to a non-existing file'):
        _ = list(resolver.find())

    resolver = PathResolver(str(root / 'data/images/dataset_0/labeled/tile_23.jpg'))
    resolved = list(resolver.find())
    assert len(resolved) == 1
    assert resolved[0] == root / 'data/images/dataset_0/labeled/tile_23.jpg'


def test_absolute_group_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver(str(root / 'data/images/{dataset}/{aoi}/{source}/{tile}.jpg'))

    # Test raise on absolute + root find
    with pytest.raises(ValueError, match='The dataset pattern to search for is '
                                         'absolute but a search path was provided'):
        _ = list(resolver.find(root))

    ground_truth = [root / path for path in path_list if 'dataset_1' in path]
    resolved = list(resolver.find())

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)

    # Test capture
    for path in resolved:
        assert hasattr(path, 'match')
        assert path.match['dataset'] == 'dataset_1'
        assert path.match['aoi'] in ('aoi_0', 'aoi_3')
        assert path.match['source'] in ('labeled', 'simulated')
        assert 'tile_' in path.match['tile']


def test_group_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{dataset}/{aoi}/{source}/{tile}.jpg')

    # Test raise on relative - root find
    with pytest.raises(ValueError, match='The dataset pattern to search for is '
                                         'relative but no search path was provided'):
        _ = list(resolver.find())

    ground_truth = [path for path in path_list if 'dataset_1' in path]
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)

    # Test capture
    for path in resolved:
        assert hasattr(path, 'match')
        assert path.match['dataset'] == 'dataset_1'
        assert path.match['aoi'] in ('aoi_0', 'aoi_3')
        assert path.match['source'] in ('labeled', 'simulated')
        assert 'tile_' in path.match['tile']


def test_composed_group_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{dataset}/aoi_0/{source}/{tile}.jpg')
    ground_truth = [path for path in path_list if 'dataset_1' in path and 'aoi_0' in path]
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)

    # Test capture
    for path in resolved:
        assert hasattr(path, 'match')
        assert path.match['dataset'] == 'dataset_1'
        assert path.match['source'] in ('labeled', 'simulated')
        assert 'tile_' in path.match['tile']


def test_loose_regex_recursive_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{path/:(?!.*added.*).*}/{tile}.jpg')
    ground_truth = [path for path in path_list if 'added' not in path and path.ext == '.jpg']
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)


def test_strict_regex_recursive_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{path/:[a-z]+_[0-9]+}/{tile}.jpg')
    ground_truth = [path for path in path_list if 'dataset_3' in path and 'added' not in path]
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)


def test_composed_strict_regex_recursive_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{path/:[a-z]+_[0-9]+}/added/{tile}.jpg')
    ground_truth = [path for path in path_list if 'dataset_3' in path and 'added' in path]
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)


def test_loose_recursive_walk(complex_tree):
    root, path_list = complex_tree
    resolver = PathResolver('data/images/{path/}/{tile}.jpg')
    ground_truth = [path for path in path_list if path.ext == '.jpg']
    resolved = list(resolver.find(root))

    # Test unordered equality
    assert len(resolved) == len(ground_truth)
    assert all(path in ground_truth for path in resolved)
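
# The tests above show that PathResolver turns '{name}' placeholders into named
# regex groups matching one path segment (see the expected patterns in
# test_resolver_init). A minimal, independent sketch of that conversion,
# assuming plain '{name}' fields only (no '{path/}' or '{name:regex}' forms):

```python
import re


def template_to_regex(template):
    """Convert '{name}' fields to named groups matching a single path segment."""
    parts = re.split(r"(\{\w+\})", template)
    out = []
    for part in parts:
        if part.startswith("{") and part.endswith("}"):
            # A placeholder becomes a named group that cannot cross '/'
            out.append("(?P<%s>[^/]+)" % part[1:-1])
        else:
            # Literal text is escaped verbatim
            out.append(re.escape(part))
    return re.compile("".join(out))
```

# Matching a concrete path then exposes the captured segments by name, much like
# the `path.match[...]` lookups exercised in the tests.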

# --- tests/bio_based/test_SBO.py — rishavpramanik/mealpy (MIT) ---
#!/usr/bin/env python
# Created by "Thieu" at 21:05, 16/03/2022 ----------%
# Email: nguyenthieu2102@gmail.com %
# Github: https://github.com/thieu1995 %
# --------------------------------------------------%
from mealpy.bio_based import SBO
from mealpy.optimizer import Optimizer
import numpy as np
import pytest

@pytest.fixture(scope="module")  # scope: Call only 1 time at the beginning
def problem():
    def fitness_function(solution):
        return np.sum(solution ** 2)

    problem = {
        "fit_func": fitness_function,
        "lb": [-10, -10, -10, -10, -10],
        "ub": [10, 10, 10, 10, 10],
        "minmax": "min",
    }
    return problem


def test_OriginalSBO_results(problem):
    epoch = 10
    pop_size = 50
    alpha = 0.94
    p_m = 0.05
    psw = 0.02
    model = SBO.OriginalSBO(problem, epoch, pop_size, alpha, p_m, psw)
    best_position, best_fitness = model.solve()
    assert isinstance(model, Optimizer)
    assert isinstance(best_position, np.ndarray)
    assert len(best_position) == len(problem["lb"])


def test_BaseSBO_results(problem):
    epoch = 10
    pop_size = 50
    alpha = 0.94
    p_m = 0.05
    psw = 0.02
    model = SBO.BaseSBO(problem, epoch, pop_size, alpha, p_m, psw)
    best_position, best_fitness = model.solve()
    assert isinstance(model, Optimizer)
    assert isinstance(best_position, np.ndarray)
    assert len(best_position) == len(problem["lb"])


@pytest.mark.parametrize("problem, epoch, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -10, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, float("inf"), 0),
                         ])
def test_epoch_SBO(problem, epoch, system_code):
    pop_size = 50
    algorithms = [SBO.OriginalSBO, SBO.BaseSBO]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, pop_size, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -10, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, float("inf"), 0),
                         ])
def test_pop_size_SBO(problem, pop_size, system_code):
    epoch = 10
    algorithms = [SBO.OriginalSBO, SBO.BaseSBO]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, alpha, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -1.0, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, 1, 0),
                             (problem, 1.1, 0),
                             (problem, -0.01, 0),
                         ])
def test_alpha_SBO(problem, alpha, system_code):
    epoch = 10
    pop_size = 50
    algorithms = [SBO.OriginalSBO, SBO.BaseSBO]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size, alpha=alpha)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, p_m, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -1.0, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, 1, 0),
                             (problem, 1.1, 0),
                             (problem, -0.01, 0),
                         ])
def test_p_m_SBO(problem, p_m, system_code):
    epoch = 10
    pop_size = 50
    algorithms = [SBO.OriginalSBO, SBO.BaseSBO]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size, p_m=p_m)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, psw, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -1.0, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, 1, 0),
                             (problem, 1.1, 0),
                             (problem, -0.01, 0),
                         ])
def test_psw_SBO(problem, psw, system_code):
    # Renamed from a duplicate `test_p_m_SBO` so this test no longer shadows the p_m test
    epoch = 10
    pop_size = 50
    algorithms = [SBO.OriginalSBO, SBO.BaseSBO]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size, psw=psw)
        assert e.type == SystemExit
        assert e.value.code == system_code
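
# The parametrized tests above all follow one pattern: an invalid hyperparameter
# must trigger SystemExit with code 0. A dependency-free sketch of the same check;
# `validate_epoch` is a hypothetical stand-in for mealpy's internal validation,
# not its real implementation:

```python
def validate_epoch(epoch):
    """Hypothetical validator: exit with code 0 unless epoch is an int >= 1."""
    if not isinstance(epoch, int) or isinstance(epoch, bool) or epoch < 1:
        raise SystemExit(0)
    return epoch


def exit_code_of(func, *args):
    """Run func(*args) and return the SystemExit code, or None if it did not exit."""
    try:
        func(*args)
    except SystemExit as e:
        return e.code
    return None
```

# This is the same contract pytest.raises(SystemExit) asserts, without the pytest dependency.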

# --- radical_translations/utils/context_processors.py — kingsdigitallab/radical_translations (MIT) ---
from django.conf import settings
def settings_context(_request):
    return {"ds": settings}

# --- server/app/settings/contrib/__init__.py — LowerDeez/movies_finder (MIT) ---
from .constance import *
from .rosetta import *

# --- python/chap_0/0.5.2.py — RyodoTanaka/Cording_Matrix (BSD-3-Clause) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
ret = 2304811 - (2304811 // 47) * 47
print(ret)
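
# The expression above computes a remainder by explicit subtraction. For a
# positive divisor, this is exactly what Python's `%` operator returns:

```python
# Remainder of 2304811 divided by 47, by hand and via the modulo operator
manual = 2304811 - (2304811 // 47) * 47
assert manual == 2304811 % 47
```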

# --- npstreams/stats.py — LaurentRDC/npstreams (BSD-3-Clause) ---
# -*- coding: utf-8 -*-
"""
Statistical functions
---------------------
"""
from functools import partial
from itertools import count, repeat, starmap
from operator import truediv
from warnings import catch_warnings, simplefilter
import numpy as np
from .array_stream import array_stream
from .array_utils import nan_to_num
from .iter_utils import itercopy, last, peek
from .numerics import isum

@array_stream
def _iaverage(arrays, axis=-1, weights=None, ignore_nan=False):
    """
    Primitive version of weighted averaging that yields the running sum and running weights sum,
    but avoids the costly division at every step.
    """
    # Special case: in the easiest case, no need to calculate
    # weights and ignore nans.
    # This case is pretty common
    if (weights is None) and (not ignore_nan) and (axis == -1):
        yield from zip(isum(arrays, axis=axis, dtype=float, ignore_nan=False), count(1))
        return

    first, arrays = peek(arrays)

    # We make sure that weights is always an array
    # This simplifies the handling of NaNs.
    if weights is None:
        weights = repeat(1)
    weights = map(partial(np.broadcast_to, shape=first.shape), weights)

    # Need to know which array has NaNs, and modify the weights stream accordingly
    if ignore_nan:
        arrays, arrays2 = itercopy(arrays)
        weights = map(
            lambda arr, wgt: np.logical_not(np.isnan(arr)) * wgt, arrays2, weights
        )

    weights1, weights2 = itercopy(weights)
    sum_of_weights = isum(weights1, axis=axis, dtype=float)
    weighted_arrays = map(lambda arr, wgt: arr * wgt, arrays, weights2)
    weighted_sum = isum(weighted_arrays, axis=axis, ignore_nan=ignore_nan, dtype=float)
    yield from zip(weighted_sum, sum_of_weights)
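
# `_iaverage` defers the division by yielding (running weighted sum, running
# weight sum) pairs. A pure-Python, scalar-only sketch of that idea (no NumPy,
# no NaN handling) to make the deferred-division trick concrete:

```python
from itertools import repeat


def running_average_pairs(values, weights=None):
    """Yield (weighted_sum, weight_sum) after each element of a scalar stream."""
    if weights is None:
        weights = repeat(1.0)
    total, wtotal = 0.0, 0.0
    for v, w in zip(values, weights):
        total += v * w
        wtotal += w
        yield total, wtotal
```

# The caller divides the two components only when an average is actually needed,
# which is what `iaverage` does with `truediv` on each yielded pair.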

@array_stream
def average(arrays, axis=-1, weights=None, ignore_nan=False):
    """
    Average (weighted) of a stream of arrays. This function consumes the
    entire stream.

    Parameters
    ----------
    arrays : iterable of ndarrays
        Arrays to be averaged. This iterable can also be a generator.
    axis : int, optional
        Reduction axis. Default is to average the arrays in the stream as if
        they had been stacked along a new axis, then average along this new axis.
        If None, arrays are flattened before averaging. If `axis` is an int larger than
        the number of dimensions in the arrays of the stream, arrays are averaged
        along the new axis.
    weights : iterable of ndarray, iterable of floats, or None, optional
        Iterable of weights associated with the values in each item of `arrays`.
        Each value in an element of `arrays` contributes to the average
        according to its associated weight. The weights array can either be a float
        or an array of the same shape as any element of `arrays`. If ``weights=None``,
        then all data in each element of `arrays` are assumed to have a weight equal to one.
    ignore_nan : bool, optional
        If True, NaNs are set to zero weight. Default is propagation of NaNs.

    Returns
    -------
    avg: `~numpy.ndarray`, dtype float
        Weighted average.

    See Also
    --------
    iaverage : streaming (weighted) average.
    numpy.average : (weighted) average of dense arrays
    mean : non-weighted average of a stream.
    """
    total_sum, total_weight = last(_iaverage(arrays, axis, weights, ignore_nan))
    with catch_warnings():
        simplefilter("ignore", category=RuntimeWarning)
        return np.true_divide(total_sum, total_weight)

@array_stream
def iaverage(arrays, axis=-1, weights=None, ignore_nan=False):
    """
    Streaming (weighted) average of arrays.

    Parameters
    ----------
    arrays : iterable of ndarrays
        Arrays to be averaged. This iterable can also be a generator.
    axis : int, optional
        Reduction axis. Default is to average the arrays in the stream as if
        they had been stacked along a new axis, then average along this new axis.
        If None, arrays are flattened before averaging. If `axis` is an int larger than
        the number of dimensions in the arrays of the stream, arrays are averaged
        along the new axis.
    weights : iterable of ndarray, iterable of floats, or None, optional
        Iterable of weights associated with the values in each item of `arrays`.
        Each value in an element of `arrays` contributes to the average
        according to its associated weight. The weights array can either be a float
        or an array of the same shape as any element of `arrays`. If ``weights=None``,
        then all data in each element of `arrays` are assumed to have a weight equal to one.
    ignore_nan : bool, optional
        If True, NaNs are set to zero weight. Default is propagation of NaNs.

    Yields
    ------
    avg: `~numpy.ndarray`, dtype float
        Weighted average.

    See Also
    --------
    imean : streaming array mean (non-weighted average).
    """
    # Primitive stream is composed of tuples (running_sum, running_weights)
    primitive = _iaverage(arrays, axis, weights, ignore_nan)
    yield from map(lambda element: truediv(*element), primitive)
@array_stream
def mean(arrays, axis=-1, ignore_nan=False):
"""
Mean of a stream of arrays. This function consumes the
entire stream.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be averaged. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to average the arrays in the stream as if
they had been stacked along a new axis, then average along this new axis.
If None, arrays are flattened before averaging. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, arrays are averaged
along the new axis.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Returns
-------
mean: `~numpy.ndarray`, dtype float
Total mean array.
"""
total_sum, total_count = last(
_iaverage(arrays, axis, weights=None, ignore_nan=ignore_nan)
)
return total_sum / total_count
@array_stream
def imean(arrays, axis=-1, ignore_nan=False):
"""
Streaming mean of arrays. Equivalent to `iaverage(arrays, weights = None)`.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be averaged. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to average the arrays in the stream as if
they had been stacked along a new axis, then average along this new axis.
If None, arrays are flattened before averaging. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, arrays are averaged
along the new axis.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Yields
------
mean: `~numpy.ndarray`, dtype float
Online mean array.
"""
# Primitive stream is composed of tuples (running_sum, running_count)
primitive = _iaverage(arrays, axis, weights=None, ignore_nan=ignore_nan)
yield from map(lambda element: truediv(*element), primitive)
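What a streaming mean yields step by step is the ratio of a running sum to a running count; `running_means` is an illustrative scalar sketch of what `imean` does per array:

```python
# Yield the running mean after each new value: total / count so far.
def running_means(values):
    total, count = 0.0, 0
    for value in values:
        total += value
        count += 1
        yield total / count

print(list(running_means([1.0, 2.0, 3.0, 4.0])))  # [1.0, 1.5, 2.0, 2.5]
```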
@array_stream
def _ivar(arrays, axis=-1, weights=None, ignore_nan=False):
"""
Primitive version of weighted variance that yields the running average, running average of squares and running weights sum,
but avoids the costly division and squaring at every step.
"""
first, arrays = peek(arrays)
# We make sure that weights is always an array
# This simplifies the handling of NaNs.
if weights is None:
weights = repeat(1)
weights = map(partial(np.broadcast_to, shape=first.shape), weights)
# Need to know which array has NaNs, and modify the weights stream accordingly
if ignore_nan:
arrays, arrays2 = itercopy(arrays)
weights = map(
lambda arr, wgt: np.logical_not(np.isnan(arr)) * wgt, arrays2, weights
)
arrays, arrays2 = itercopy(arrays)
weights, weights2, weights3 = itercopy(weights, 3)
avgs = iaverage(arrays, axis=axis, weights=weights, ignore_nan=ignore_nan)
avg_of_squares = iaverage(
map(np.square, arrays2), axis=axis, weights=weights2, ignore_nan=ignore_nan
)
sum_of_weights = isum(weights3, axis=axis, ignore_nan=ignore_nan)
yield from zip(avgs, avg_of_squares, sum_of_weights)
@array_stream
def average_and_var(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Calculate the simultaneous average and variance of a stream of arrays. This is done in
single iteration for maximum performance.
.. versionadded:: 1.6.1
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the variance along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, variance is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the variance
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Returns
-------
average : `~numpy.ndarray`
Average, possibly weighted.
var: `~numpy.ndarray`
Variance, possibly weighted.
Notes
-----
Since the calculation of the variance requires knowledge of the average, this function
runs in the same time as a variance-only pass; `var` is a very thin wrapper around this function.
References
----------
.. [#] D. H. D. West, Updating the mean and variance estimates: an improved method.
Communications of the ACM Vol. 22, Issue 9, pp. 532 - 535 (1979)
"""
# Since the variance calculation requires knowing the average,
# `average_and_var` runs in the exact same time as `var`
avg, sq_avg, swgt = last(
_ivar(arrays=arrays, axis=axis, weights=weights, ignore_nan=ignore_nan)
)
variance = (sq_avg - avg ** 2) * (swgt / (swgt - ddof))
return avg, variance
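The `(sq_avg - avg ** 2) * (swgt / (swgt - ddof))` line above is the one-pass variance identity Var = E[x²] − E[x]², rescaled for the delta degrees of freedom; with unit weights, `swgt` is simply the count `n`. A scalar sketch (the function name is illustrative):

```python
import math
import statistics

# One-pass variance: E[x^2] - E[x]^2, rescaled by n / (n - ddof).
def one_pass_var(values, ddof=0):
    n = len(values)
    avg = sum(values) / n
    sq_avg = sum(v * v for v in values) / n
    return (sq_avg - avg ** 2) * (n / (n - ddof))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(one_pass_var(data))  # 4.0 (population variance)
# With ddof=1 this matches the textbook sample variance:
assert math.isclose(one_pass_var(data, ddof=1), statistics.variance(data))
```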
@array_stream
def var(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Total variance of a stream of arrays. Weights are also supported. This function
consumes the input stream.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the variance along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, variance is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the variance
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Returns
-------
var: `~numpy.ndarray`
Variance.
See Also
--------
ivar : streaming variance
numpy.var : variance calculation for dense arrays. Weights are not supported.
References
----------
.. [#] D. H. D. West, Updating the mean and variance estimates: an improved method.
Communications of the ACM Vol. 22, Issue 9, pp. 532 - 535 (1979)
"""
_, variance = average_and_var(
arrays=arrays, axis=axis, ddof=ddof, weights=weights, ignore_nan=ignore_nan
)
return variance
@array_stream
def ivar(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Streaming variance of arrays. Weights are also supported.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the variance along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, variance is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the variance
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Yields
------
var: `~numpy.ndarray`
Variance.
See Also
--------
numpy.var : variance calculation for dense arrays. Weights are not supported.
References
----------
.. [#] D. H. D. West, Updating the mean and variance estimates: an improved method.
Communications of the ACM Vol. 22, Issue 9, pp. 532 - 535 (1979)
"""
primitive = _ivar(arrays=arrays, axis=axis, weights=weights, ignore_nan=ignore_nan)
for avg, sq_avg, swgt in primitive:
yield (sq_avg - avg ** 2) * (swgt / (swgt - ddof))
@array_stream
def std(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Total standard deviation of arrays. Weights are also supported. This function
consumes the input stream.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the standard deviation along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, standard deviation is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the standard deviation
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Returns
-------
std: `~numpy.ndarray`
Standard deviation
See Also
--------
istd : streaming standard deviation.
numpy.std : standard deviation calculation of dense arrays. Weights are not supported.
"""
return np.sqrt(
var(arrays=arrays, axis=axis, ddof=ddof, weights=weights, ignore_nan=ignore_nan)
)
@array_stream
def istd(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Streaming standard deviation of arrays. Weights are also supported.
This is equivalent to calling `numpy.std(axis = 2)` on a stack of images.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the standard deviation along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, standard deviation is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the standard deviation
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Yields
------
std: `~numpy.ndarray`
Standard deviation
See Also
--------
std : total standard deviation.
numpy.std : standard deviation calculation of dense arrays. Weights are not supported.
"""
yield from map(
np.sqrt,
ivar(
arrays=arrays, axis=axis, ddof=ddof, weights=weights, ignore_nan=ignore_nan
),
)
@array_stream
def sem(arrays, axis=-1, ddof=0, weights=None, ignore_nan=False):
"""
Standard error in the mean (SEM) of a stream of arrays. This function consumes
the entire stream.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the standard error along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, standard error is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the standard error
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Returns
-------
sem: `~numpy.ndarray`, dtype float
Standard error in the mean.
See Also
--------
scipy.stats.sem : standard error in the mean of dense arrays.
"""
avg, sq_avg, swgt = last(
_ivar(arrays=arrays, axis=axis, weights=weights, ignore_nan=ignore_nan)
)
return np.sqrt((sq_avg - avg ** 2) * (1 / (swgt - ddof)))
@array_stream
def isem(arrays, axis=-1, ddof=1, weights=None, ignore_nan=False):
"""
Streaming standard error in the mean (SEM) of arrays. This is equivalent to
calling `scipy.stats.sem(axis = 2)` on a stack of images.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator.
axis : int, optional
Reduction axis. Default is to combine the arrays in the stream as if
they had been stacked along a new axis, then compute the standard error along this new axis.
If None, arrays are flattened. If `axis` is an int larger than
the number of dimensions in the arrays of the stream, standard error is computed
along the new axis.
ddof : int, optional
Delta Degrees of Freedom: the divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value in an element of `arrays` contributes to the standard error
according to its associated weight. The weights array can either be a float
or an array of the same shape as any element of `arrays`. If weights=None,
then all data in each element of `arrays` are assumed to have a weight equal to one.
ignore_nan : bool, optional
If True, NaNs are set to zero weight. Default is propagation of NaNs.
Yields
------
sem: `~numpy.ndarray`, dtype float
Standard error in the mean.
See Also
--------
scipy.stats.sem : standard error in the mean of dense arrays.
"""
primitive = _ivar(arrays=arrays, axis=axis, weights=weights, ignore_nan=ignore_nan)
for avg, sq_avg, swgt in primitive:
yield np.sqrt((sq_avg - avg ** 2) * (1 / (swgt - ddof)))
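The yielded quantity above is sqrt((E[x²] − E[x]²) / (n − ddof)); with `ddof=1` this equals the sample standard deviation divided by sqrt(n), i.e. what `scipy.stats.sem` reports. A scalar sketch (the function name is illustrative):

```python
import math

# One-pass standard error in the mean for unweighted scalar data.
def one_pass_sem(values, ddof=1):
    n = len(values)
    avg = sum(values) / n
    sq_avg = sum(v * v for v in values) / n
    return math.sqrt((sq_avg - avg ** 2) / (n - ddof))

# For this data the population variance is 4.0, so the SEM is sqrt(4 / 7).
print(one_pass_sem([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```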
@array_stream
def ihistogram(arrays, bins, range=None, weights=None):
"""
Streaming histogram calculation.
Parameters
----------
arrays : iterable of ndarrays
Arrays to be combined. This iterable can also be a generator. Arrays in this stream
can be of any shape; the histogram is computed over the flattened array.
bins : iterable
Bin edges, including the rightmost edge, allowing for non-uniform bin widths.
To determine the appropriate bins automatically, see ``numpy.histogram_bin_edges``.
weights : iterable of ndarray, iterable of floats, or None, optional
Iterable of weights associated with the values in each item of `arrays`.
Each value only contributes its associated weight towards the
bin count (instead of 1). The weights array can either be a float
or an array of the same shape as any element of `arrays`. If ``weights=None``,
then all data in each element of `arrays` are assumed to have a weight equal to one.
.. versionadded:: 1.6.1
Yields
------
hist : `~numpy.ndarray`
Streamed histogram.
See Also
--------
numpy.histogram : 1D histogram of dense arrays.
numpy.histogram_bin_edges : automatic selection of bins
"""
bins = np.asarray(bins)
first, arrays = peek(arrays)
if weights is None:
weights = repeat(None)
else:
weights = map(partial(np.broadcast_to, shape=first.shape), weights)
# np.histogram also returns the bin edges, which we ignore
hist_func = lambda arr, wgt: np.histogram(arr, bins=bins, weights=wgt)[0]
yield from isum(starmap(hist_func, zip(arrays, weights)))
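`ihistogram` works because histograms over a fixed set of bin edges are additive: summing per-chunk counts gives the histogram of the whole stream, which is what the `isum` call above exploits. A dependency-free sketch (the `histogram` helper is illustrative, mimicking `numpy.histogram`'s closed rightmost bin):

```python
# Count values into bins defined by `edges`; the last bin includes its
# right edge, matching numpy.histogram. Out-of-range values are ignored.
def histogram(values, edges):
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if edges[i] <= v < edges[i + 1] or (last and v == edges[-1]):
                counts[i] += 1
                break
    return counts

edges = [0, 1, 2, 3]
chunks = [[0.5, 1.5], [2.5, 1.2], [0.1, 3.0]]
# Element-wise sum of per-chunk histograms == histogram of all values.
totals = [0, 0, 0]
for chunk in chunks:
    totals = [t + c for t, c in zip(totals, histogram(chunk, edges))]
print(totals)  # [2, 2, 2]
```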
655f514a4fa922d56b77f697e6d2bad18e588974 | 43,371 | py | Python | dataworkspace/dataworkspace/tests/applications/test_utils.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | 1 | 2019-06-10T08:22:56.000Z | 2019-06-10T08:22:56.000Z | dataworkspace/dataworkspace/tests/applications/test_utils.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | 2 | 2019-05-17T13:10:42.000Z | 2019-06-17T10:48:46.000Z | dataworkspace/dataworkspace/tests/applications/test_utils.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | null | null | null | # pylint: disable=unspecified-encoding
import datetime
import json
import os
import random
import string
import botocore
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.test import override_settings
from freezegun import freeze_time
from waffle.testutils import override_switch
import mock
import pytest
import redis
from dataworkspace.apps.applications.models import ApplicationInstance
from dataworkspace.apps.applications.utils import (
_do_sync_tool_query_logs,
delete_unused_datasets_users,
_do_create_tools_access_iam_role,
_do_sync_activity_stream_sso_users,
long_running_query_alert,
sync_quicksight_permissions,
)
from dataworkspace.apps.datasets.constants import UserAccessType
from dataworkspace.apps.datasets.models import ToolQueryAuditLog, ToolQueryAuditLogTable
from dataworkspace.tests import factories
from dataworkspace.tests.factories import (
UserFactory,
MasterDataSetFactory,
SourceTableFactory,
)
class TestDeleteUnusedDatasetsUsers:
def setup_method(self):
self.lock = cache.lock( # pylint: disable=attribute-defined-outside-init
"delete_unused_datasets_users", blocking_timeout=0
)
def teardown_method(self):
try:
self.lock.release()
except redis.exceptions.LockError:
pass
@pytest.mark.timeout(2)
@mock.patch("dataworkspace.apps.applications.utils._do_delete_unused_datasets_users")
def test_dies_immediately_if_already_locked(self, do_delete_mock):
do_delete_mock.side_effect = Exception("I will be raised if the lock is available")
# Make sure we actually acquire the lock, else the test is flawed
assert self.lock.acquire() is True
delete_unused_datasets_users()
self.lock.release()
with pytest.raises(Exception) as e:
delete_unused_datasets_users()
assert e.value is do_delete_mock.side_effect
class TestSyncQuickSightPermissions:
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.core.utils.new_private_database_credentials")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.applications.utils.cache")
def test_create_new_data_source(self, mock_cache, mock_boto3_client, mock_creds):
# Arrange
UserFactory.create(username="fake@email.com")
SourceTableFactory(
dataset=MasterDataSetFactory.create(
user_access_type=UserAccessType.REQUIRES_AUTHENTICATION
)
)
mock_user_client = mock.Mock()
mock_user_client.list_users.return_value = {
"UserList": [
{
"Arn": "Arn",
"Email": "fake@email.com",
"Role": "AUTHOR",
"UserName": "user/fake@email.com",
}
]
}
mock_data_client = mock.Mock()
mock_sts_client = mock.Mock()
mock_boto3_client.side_effect = [
mock_user_client,
mock_data_client,
mock_sts_client,
]
mock_creds.return_value = [mock.Mock()]
# Act
sync_quicksight_permissions()
# Assert
assert mock_user_client.update_user.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
Role="AUTHOR",
CustomPermissionsName="author-custom-permissions",
UserName="user/fake@email.com",
Email="fake@email.com",
)
]
assert mock_data_client.create_data_source.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
DataSourceId=mock.ANY,
Name=mock.ANY,
DataSourceParameters={
"AuroraPostgreSqlParameters": {
"Host": mock.ANY,
"Port": mock.ANY,
"Database": mock.ANY,
}
},
Credentials={"CredentialPair": {"Username": mock.ANY, "Password": mock.ANY}},
VpcConnectionProperties={"VpcConnectionArn": mock.ANY},
Type="AURORA_POSTGRESQL",
Permissions=[
{
"Principal": "Arn",
"Actions": [
"quicksight:DescribeDataSource",
"quicksight:DescribeDataSourcePermissions",
"quicksight:PassDataSource",
],
}
],
)
]
assert mock_data_client.update_data_source.call_args_list == []
assert sorted(
mock_data_client.delete_data_source.call_args_list,
key=lambda x: x.kwargs["DataSourceId"],
) == [
mock.call(
AwsAccountId=mock.ANY,
DataSourceId="data-workspace-dev-my_database-88f3887d",
),
mock.call(
AwsAccountId=mock.ANY,
DataSourceId="data-workspace-dev-test_external_db2-88f3887d",
),
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.core.utils.new_private_database_credentials")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.applications.utils.cache")
def test_list_user_pagination(self, mock_cache, mock_boto3_client, mock_creds):
# Arrange
UserFactory.create(username="fake@email.com")
UserFactory.create(username="fake2@email.com")
SourceTableFactory(
dataset=MasterDataSetFactory.create(
user_access_type=UserAccessType.REQUIRES_AUTHENTICATION
)
)
mock_user_client = mock.Mock()
mock_user_client.list_users.side_effect = [
{
"UserList": [
{
"Arn": "Arn",
"Email": "fake@email.com",
"Role": "AUTHOR",
"UserName": "user/fake@email.com",
}
],
"NextToken": "foo",
},
{
"UserList": [
{
"Arn": "Arn2",
"Email": "fake2@email.com",
"Role": "AUTHOR",
"UserName": "user/fake2@email.com",
}
]
},
]
mock_data_client = mock.Mock()
mock_sts_client = mock.Mock()
mock_boto3_client.side_effect = [
mock_user_client,
mock_data_client,
mock_sts_client,
]
mock_creds.return_value = [mock.Mock()]
# Act
sync_quicksight_permissions()
# Assert
assert mock_user_client.update_user.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
Role="AUTHOR",
CustomPermissionsName="author-custom-permissions",
UserName="user/fake@email.com",
Email="fake@email.com",
),
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
Role="AUTHOR",
CustomPermissionsName="author-custom-permissions",
UserName="user/fake2@email.com",
Email="fake2@email.com",
),
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.core.utils.new_private_database_credentials")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.applications.utils.cache")
def test_update_existing_data_source(self, mock_cache, mock_boto3_client, mock_creds):
# Arrange
UserFactory.create(username="fake@email.com")
SourceTableFactory(
dataset=MasterDataSetFactory.create(
user_access_type=UserAccessType.REQUIRES_AUTHENTICATION
)
)
mock_user_client = mock.Mock()
mock_user_client.list_users.return_value = {
"UserList": [
{
"Arn": "Arn",
"Email": "fake@email.com",
"Role": "AUTHOR",
"UserName": "user/fake@email.com",
}
]
}
mock_data_client = mock.Mock()
mock_data_client.create_data_source.side_effect = [
botocore.exceptions.ClientError(
{
"Error": {
"Code": "ResourceExistsException",
"Message": "Data source already exists",
}
},
"CreateDataSource",
)
]
mock_sts_client = mock.Mock()
mock_boto3_client.side_effect = [
mock_user_client,
mock_data_client,
mock_sts_client,
]
# Act
sync_quicksight_permissions()
# Assert
assert mock_user_client.update_user.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
Role="AUTHOR",
CustomPermissionsName="author-custom-permissions",
UserName="user/fake@email.com",
Email="fake@email.com",
)
]
assert mock_data_client.create_data_source.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
DataSourceId=mock.ANY,
Name=mock.ANY,
DataSourceParameters={
"AuroraPostgreSqlParameters": {
"Host": mock.ANY,
"Port": mock.ANY,
"Database": mock.ANY,
}
},
Credentials={"CredentialPair": {"Username": mock.ANY, "Password": mock.ANY}},
VpcConnectionProperties={"VpcConnectionArn": mock.ANY},
Type="AURORA_POSTGRESQL",
Permissions=[
{
"Principal": "Arn",
"Actions": [
"quicksight:DescribeDataSource",
"quicksight:DescribeDataSourcePermissions",
"quicksight:PassDataSource",
],
}
],
)
]
assert mock_data_client.update_data_source.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
DataSourceId=mock.ANY,
Name=mock.ANY,
DataSourceParameters={
"AuroraPostgreSqlParameters": {
"Host": mock.ANY,
"Port": mock.ANY,
"Database": mock.ANY,
}
},
Credentials={"CredentialPair": {"Username": mock.ANY, "Password": mock.ANY}},
VpcConnectionProperties={"VpcConnectionArn": mock.ANY},
)
]
assert sorted(
mock_data_client.delete_data_source.call_args_list,
key=lambda x: x.kwargs["DataSourceId"],
) == [
mock.call(
AwsAccountId=mock.ANY,
DataSourceId="data-workspace-dev-my_database-88f3887d",
),
mock.call(
AwsAccountId=mock.ANY,
DataSourceId="data-workspace-dev-test_external_db2-88f3887d",
),
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.core.utils.new_private_database_credentials")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.applications.utils.cache")
def test_missing_user_handled_gracefully(self, mock_cache, mock_boto3_client, mock_creds):
# Arrange
user = UserFactory.create(username="fake@email.com")
user2 = UserFactory.create(username="fake2@email.com")
SourceTableFactory(
dataset=MasterDataSetFactory.create(
user_access_type=UserAccessType.REQUIRES_AUTHENTICATION
)
)
mock_user_client = mock.Mock()
mock_user_client.describe_user.side_effect = [
botocore.exceptions.ClientError(
{
"Error": {
"Code": "ResourceNotFoundException",
"Message": "User not found",
}
},
"DescribeUser",
),
{
"User": {
"Arn": "Arn",
"Email": "fake2@email.com",
"Role": "ADMIN",
"UserName": "user/fake2@email.com",
}
},
botocore.exceptions.ClientError(
{"Error": {"Code": "ThrottlingException", "Message": "Hold up"}},
"DescribeUser",
),
]
mock_data_client = mock.Mock()
mock_sts_client = mock.Mock()
mock_boto3_client.side_effect = [
mock_user_client,
mock_data_client,
mock_sts_client,
]
# Act
sync_quicksight_permissions(
user_sso_ids_to_update=[str(user.profile.sso_id), str(user2.profile.sso_id)]
)
# Assert
assert mock_user_client.update_user.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
Role="ADMIN",
UnapplyCustomPermissions=True,
UserName="user/fake2@email.com",
Email="fake2@email.com",
)
]
assert mock_user_client.describe_user.call_args_list == [
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
UserName=f"quicksight_federation/{user.profile.sso_id}",
),
mock.call(
AwsAccountId=mock.ANY,
Namespace="default",
UserName=f"quicksight_federation/{user2.profile.sso_id}",
),
]
assert len(mock_data_client.create_data_source.call_args_list) == 1
assert len(mock_data_client.update_data_source.call_args_list) == 0
class TestSyncActivityStreamSSOUsers:
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(ACTIVITY_STREAM_BASE_URL="http://activity.stream")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_calls_activity_stream(self, mock_hawk_request):
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.return_value = empty_result
_do_sync_activity_stream_sso_users()
assert mock_hawk_request.call_args_list == [
mock.call(
"GET",
"http://activity.stream/v3/activities/_search",
mock.ANY,
)
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(ACTIVITY_STREAM_BASE_URL="http://activity.stream")
def test_sync_first_time(self, mock_hawk_request):
cache.delete("activity_stream_sync_last_published")
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [empty_result]
_do_sync_activity_stream_sso_users()
assert mock_hawk_request.call_args_list == [
mock.call(
"GET",
"http://activity.stream/v3/activities/_search",
json.dumps(
{
"size": 1000,
"query": {
"bool": {
"filter": [
{"term": {"object.type": "dit:StaffSSO:User"}},
{"range": {"published": {"gte": "1969-12-31T23:59:50"}}},
]
}
},
"sort": [{"published": "asc"}, {"id": "asc"}],
}
),
)
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(ACTIVITY_STREAM_BASE_URL="http://activity.stream")
def test_sync_with_cache_set(self, mock_hawk_request):
cache.set("activity_stream_sync_last_published", datetime.datetime(2020, 1, 1, 12))
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.return_value = empty_result
_do_sync_activity_stream_sso_users()
assert mock_hawk_request.call_args_list == [
mock.call(
"GET",
"http://activity.stream/v3/activities/_search",
json.dumps(
{
"size": 1000,
"query": {
"bool": {
"filter": [
{"term": {"object.type": "dit:StaffSSO:User"}},
{"range": {"published": {"gte": "2020-01-01T11:59:50"}}},
]
}
},
"sort": [{"published": "asc"}, {"id": "asc"}],
}
),
)
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(ACTIVITY_STREAM_BASE_URL="http://activity.stream")
def test_sync_pagination(self, mock_hawk_request):
cache.delete("activity_stream_sync_last_published")
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
assert mock_hawk_request.call_args_list == [
mock.call(
"GET",
"http://activity.stream/v3/activities/_search",
json.dumps(
{
"size": 1000,
"query": {
"bool": {
"filter": [
{"term": {"object.type": "dit:StaffSSO:User"}},
{"range": {"published": {"gte": "1969-12-31T23:59:50"}}},
]
}
},
"sort": [{"published": "asc"}, {"id": "asc"}],
}
),
),
mock.call(
"GET",
"http://activity.stream/v3/activities/_search",
json.dumps(
{
"size": 1000,
"query": {
"bool": {
"filter": [
{"term": {"object.type": "dit:StaffSSO:User"}},
{"range": {"published": {"gte": "1969-12-31T23:59:50"}}},
]
}
},
"sort": [{"published": "asc"}, {"id": "asc"}],
"search_after": [
1000000000000,
"dit:StaffSSO:User:00000000-0000-0000-0000-000000000000:Update",
],
}
),
),
]
assert cache.get("activity_stream_sync_last_published") == datetime.datetime(
2020, 1, 1, 12
)
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_creates_user(self, mock_hawk_request):
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert str(all_users[0].profile.sso_id) == "00000000-0000-0000-0000-000000000000"
assert all_users[0].email == "john.smith@trade.gov.uk"
assert all_users[0].username == "john.smith@trade.gov.uk"
assert all_users[0].first_name == "John"
assert all_users[0].last_name == "Smith"
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_updates_existing_users_sso_id(self, mock_hawk_request):
user = UserFactory.create(email="john.smith@trade.gov.uk")
# set the sso id to something different to what the activity stream
# will return to test that it gets updated
user.profile.sso_id = "00000000-0000-0000-0000-111111111111"
user.save()
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert str(all_users[0].profile.sso_id) == "00000000-0000-0000-0000-000000000000"
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_updates_existing_users_email(self, mock_hawk_request):
# set the email to something different to what the activity stream
# will return to test that it gets updated
user = UserFactory.create(email="john.smith@gmail.com")
user.profile.sso_id = "00000000-0000-0000-0000-000000000000"
user.save()
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert str(all_users[0].email) == "john.smith@trade.gov.uk"
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_updates_existing_users_sso_id_and_email(self, mock_hawk_request):
# set the sso id to something different to what the activity stream
# will return, and set the email to the third email in the list that
# the activity stream returns, to test that the user can still be
# looked up and have both their email and sso id updated
user = UserFactory.create(email="john@trade.gov.uk")
user.profile.sso_id = "00000000-0000-0000-0000-111111111111"
user.save()
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith_multiple_emails.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert str(all_users[0].profile.sso_id) == "00000000-0000-0000-0000-000000000000"
assert str(all_users[0].email) == "john.smith@digital.trade.gov.uk"
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.create_tools_access_iam_role_task")
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_creates_role_if_user_can_access_tools(
self, mock_hawk_request, create_tools_access_iam_role_task
):
can_access_tools_permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = UserFactory.create(email="john.smith@trade.gov.uk")
user.profile.sso_id = "00000000-0000-0000-0000-000000000000"
user.save()
user.user_permissions.add(can_access_tools_permission)
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert create_tools_access_iam_role_task.delay.call_args_list == [
mock.call(
user.id,
)
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.create_tools_access_iam_role_task")
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_doesnt_create_role_if_user_cant_access_tools(
self, mock_hawk_request, create_tools_access_iam_role_task
):
user = UserFactory.create(email="john.smith@trade.gov.uk")
user.profile.sso_id = "00000000-0000-0000-0000-000000000000"
user.save()
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert not create_tools_access_iam_role_task.delay.called
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.create_tools_access_iam_role_task")
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_doesnt_create_role_if_user_already_has_role(
self, mock_hawk_request, create_tools_access_iam_role_task
):
can_access_tools_permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = UserFactory.create(email="john.smith@trade.gov.uk")
user.user_permissions.add(can_access_tools_permission)
user.profile.sso_id = "00000000-0000-0000-0000-000000000000"
user.profile.tools_access_role_arn = "some-arn"
user.save()
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_john_smith.json",
),
"r",
) as file:
user_john_smith = (200, file.read())
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_empty.json",
),
"r",
) as file:
empty_result = (200, file.read())
mock_hawk_request.side_effect = [user_john_smith, empty_result]
_do_sync_activity_stream_sso_users()
User = get_user_model()
all_users = User.objects.all()
assert len(all_users) == 1
assert not create_tools_access_iam_role_task.delay.called
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_hawk_request_fails(self, mock_hawk_request):
mock_hawk_request.return_value = 500, "Unable to reach shard"
with pytest.raises(Exception) as e:
_do_sync_activity_stream_sso_users()
assert str(e.value) == "Failed to fetch SSO users: Unable to reach shard"
User = get_user_model()
assert not User.objects.all()
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.hawk_request")
@override_settings(
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}}
)
def test_sync_failures_in_response(self, mock_hawk_request):
with open(
os.path.join(
os.path.dirname(__file__),
"test_fixture_activity_stream_sso_failures.json",
),
"r",
) as file:
failure_response = (200, file.read())
mock_hawk_request.return_value = failure_response
with pytest.raises(Exception) as e:
_do_sync_activity_stream_sso_users()
assert str(e.value) == "Failed to fetch SSO users: An error occured"
User = get_user_model()
assert not User.objects.all()
class TestCreateToolsAccessIAMRoleTask:
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.create_tools_access_iam_role")
def test_task_creates_iam_role(self, mock_create_tools_access_iam_role):
user = UserFactory.create(username="john.smith@trade.gov.uk")
user.profile.sso_id = "00000000-0000-0000-0000-000000000001"
user.profile.home_directory_efs_access_point_id = "some-access-point-id"
user.save()
_do_create_tools_access_iam_role(user.id)
assert mock_create_tools_access_iam_role.call_args_list == [
mock.call(
"john.smith@trade.gov.uk",
"00000000-0000-0000-0000-000000000001",
"some-access-point-id",
)
]
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.utils.create_tools_access_iam_role")
@mock.patch("logging.Logger.exception")
def test_task_fails_non_existent_user(self, mock_logger, mock_create_tools_access_iam_role):
_do_create_tools_access_iam_role(2)
assert mock_logger.call_args_list == [mock.call("User id %d does not exist", 2)]
class TestSyncToolQueryLogs:
log_data = [
# Valid user and db select statement
'2020-12-08 18:00:00.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""SELECT * FROM dataset_test"",<not logged>",,,,,,,,,""\n',
# Non-pgaudit log
'2020-12-08 18:00:10.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"A random message",,,,,,,,,""\n',
# Unrecognised user
'2020-12-08 18:00:20.395 UTC,"unknownuser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19041,"SELECT",2020-12-08 18:18:19 UTC,9/19034,0,LOG,00000,'
'"AUDIT: SESSION,19041,1,READ,SELECT,,,""SELECT a FROM b"",<not logged>",,,,,,,,,""\n',
# Unrecognised db
'2020-12-08 18:00:30.395 UTC,"auser","unknowndb",114,"172.19.0.4:53462",'
'5fcfc36b.72,19041,"SELECT",2020-12-08 18:18:19 UTC,9/19034,0,LOG,00000,'
'"AUDIT: SESSION,19041,1,READ,SELECT,,,""SELECT c FROM d"",<not logged>",,,,,,,,,""\n',
# Valid user and db insert statement
'2020-12-08 18:00:40.400 UTC,"auser","test_datasets",114,"172.19.0.5:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""INSERT INTO dataset_test VALUES(1);"",<not logged>"'
',,,,,,,,,""\n',
# Timestamp out of range
'2020-12-08 17:00:00.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""INSERT INTO dataset_test VALUES(2);"",<not logged>"'
',,,,,,,,,""\n',
# No timestamp
"An exception occurred...\n",
# Duplicate record
'2020-12-08 18:00:00.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""SELECT * FROM dataset_test"",<not logged>",,,,,,,,,""\n',
# Ignored statement
'2020-12-08 19:00:00.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""select CAST(id as VARCHAR(50)) as col1 from a"",'
'<not logged>",,,,,,,,,""\n',
# > 1 million characters
'2020-12-08 20:00:00.400 UTC,"auser","test_datasets",114,"172.19.0.4:53462",'
'5fcfc36b.72,19047,"SELECT",2020-12-08 18:18:19 UTC,9/19040,0,LOG,00000,'
'"AUDIT: SESSION,19047,1,READ,SELECT,,,""'
f'{"".join(random.choices(string.ascii_letters, k=1500000))}"",<not logged>",,,,,,,,,""\n',
]
@pytest.mark.django_db(transaction=True)
@freeze_time("2020-12-08 18:04:00")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@override_settings(
PGAUDIT_LOG_TYPE="rds",
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}},
)
def test_rds_sync(self, mock_client, dataset_db):
cache.delete("query_tool_logs_last_run")
log_count = ToolQueryAuditLog.objects.count()
table_count = ToolQueryAuditLogTable.objects.count()
factories.DatabaseFactory(memorable_name="my_database")
factories.DatabaseUserFactory.create(username="auser")
factories.SourceTableFactory.create(schema="public", table="test_dataset")
mock_client.return_value.describe_db_log_files.return_value = {
"DescribeDBLogFiles": [
{"LogFileName": "/file/1.csv"},
{"LogFileName": "/file/2.csv"},
]
}
mock_client.return_value.download_db_log_file_portion.side_effect = [
{
"Marker": "1",
"AdditionalDataPending": True,
"LogFileData": (
# Valid user and db select statement
self.log_data[0]
# Non-pgaudit log
+ self.log_data[1]
),
},
{
"Marker": None,
"AdditionalDataPending": False,
"LogFileData": (
# Unrecognised user
self.log_data[2]
# Unrecognised database
+ self.log_data[3]
),
},
{
"Marker": None,
"AdditionalDataPending": False,
"LogFileData": (
# Valid username and db insert statement
self.log_data[4]
# Timestamp out of range
+ self.log_data[5]
# No timestamp
+ self.log_data[6]
# Duplicate log entry
+ self.log_data[7]
),
},
]
_do_sync_tool_query_logs()
queries = ToolQueryAuditLog.objects.all()
tables = ToolQueryAuditLogTable.objects.all()
assert queries.count() == log_count + 2
assert tables.count() == table_count + 1
assert list(queries)[-2].query_sql == "SELECT * FROM dataset_test"
assert list(queries)[-2].connection_from == "172.19.0.4"
assert list(queries)[-1].query_sql == "INSERT INTO dataset_test VALUES(1);"
assert list(queries)[-1].connection_from == "172.19.0.5"
@pytest.mark.django_db(transaction=True)
@freeze_time("2020-12-08 18:04:00")
@mock.patch("dataworkspace.apps.applications.utils.os")
@mock.patch("builtins.open", mock.mock_open(read_data="".join(log_data)))
@override_settings(
PGAUDIT_LOG_TYPE="docker",
CACHES={"default": {"BACKEND": "django.core.cache.backends.dummy.DummyCache"}},
)
def test_docker_sync(self, mock_os, dataset_db):
cache.delete("query_tool_logs_last_run")
table_count = ToolQueryAuditLogTable.objects.count()
log_count = ToolQueryAuditLog.objects.count()
factories.DatabaseFactory(memorable_name="my_database")
factories.DatabaseUserFactory.create(username="auser")
factories.SourceTableFactory.create(schema="public", table="test_dataset")
mock_os.listdir.return_value = [
"file1.csv",
"file2.log",
]
mock_os.path.getmtime.return_value = datetime.datetime.now().timestamp()
_do_sync_tool_query_logs()
queries = ToolQueryAuditLog.objects.all()
tables = ToolQueryAuditLogTable.objects.all()
assert queries.count() == log_count + 2
assert tables.count() == table_count + 1
assert list(queries)[-2].query_sql == "SELECT * FROM dataset_test"
assert list(queries)[-1].query_sql == "INSERT INTO dataset_test VALUES(1);"
class TestLongRunningQueryAlerts:
@pytest.mark.django_db
@override_switch("enable_long_running_query_alerts", active=True)
@mock.patch("dataworkspace.apps.applications.utils.connections")
@mock.patch("dataworkspace.apps.applications.utils._send_slack_message")
def test_no_long_running_queries(self, mock_send_slack_message, mock_connections):
mock_cursor = mock.Mock()
mock_cursor.fetchone.return_value = [0]
mock_connection = mock.Mock()
mock_cursor_ctx_manager = mock.MagicMock()
mock_cursor_ctx_manager.__enter__.return_value = mock_cursor
mock_connection.cursor.return_value = mock_cursor_ctx_manager
mock_connections.__getitem__.return_value = mock_connection
long_running_query_alert()
mock_send_slack_message.assert_not_called()
@pytest.mark.django_db
@override_switch("enable_long_running_query_alerts", active=True)
@override_settings(SLACK_SENTRY_CHANNEL_WEBHOOK="http://test.com")
@mock.patch("dataworkspace.apps.applications.utils.connections")
@mock.patch("dataworkspace.apps.applications.utils._send_slack_message")
def test_long_running_queries(self, mock_send_slack_message, mock_connections):
mock_cursor = mock.Mock()
mock_cursor.fetchone.return_value = [1]
mock_connection = mock.Mock()
mock_cursor_ctx_manager = mock.MagicMock()
mock_cursor_ctx_manager.__enter__.return_value = mock_cursor
mock_connection.cursor.return_value = mock_cursor_ctx_manager
mock_connections.__getitem__.return_value = mock_connection
long_running_query_alert()
mock_send_slack_message.assert_called_once_with(
":rotating_light: Found 1 SQL query running for longer than 15 minutes "
"on the datasets db."
)
b85e2d80e97dcc6e8076ed98c86a921a42e3d9d7 | 1,597 | py | Python | execode/entry_points_console_scripts.py | Cologler/execode-python | 71e172ee5875a161c0daec61266069982c845b83 | [
"MIT"
] | null | null | null | execode/entry_points_console_scripts.py | Cologler/execode-python | 71e172ee5875a161c0daec61266069982c845b83 | [
"MIT"
] | null | null | null | execode/entry_points_console_scripts.py | Cologler/execode-python | 71e172ee5875a161c0daec61266069982c845b83 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (c) 2019~2999 - Cologler <skyoflw@gmail.com>
# ----------
#
# ----------
def run_py():
    import sys
    sys.argv.pop(0) # current script name
    if not sys.argv:
        raise RuntimeError('run-py requires at least a python script path as argument.')
target_path = sys.argv[0] # target script path
from execode import run_py as rp
rp(target_path)
def run_pym():
import sys
import os
sys.argv.pop(0) # current script name
if not sys.argv:
        raise RuntimeError('run-pym requires at least a python package path as argument.')
target_path = sys.argv[0] # target script path
if not target_path.endswith('__main__.py'):
target_path = os.path.join(target_path, '__main__.py')
sys.argv[0] = target_path
from execode import run_py_m as rpm
rpm(target_path)
def pipenv_run_py():
import sys
if len(sys.argv) < 2:
        raise RuntimeError('pipenv-run-py requires at least a python script path as argument.')
target_path = sys.argv[1] # target script path
from execode.utils import find_pipfile
from execode import pipenv_context
with pipenv_context(find_pipfile(target_path)):
run_py()
def pipenv_run_pym():
import sys
if len(sys.argv) < 2:
        raise RuntimeError('pipenv-run-pym requires at least a python package path as argument.')
target_path = sys.argv[1] # target script path
from execode.utils import find_pipfile
from execode import pipenv_context
with pipenv_context(find_pipfile(target_path)):
run_pym()
| 27.534483 | 95 | 0.671885 | 233 | 1,597 | 4.437768 | 0.23176 | 0.116054 | 0.054159 | 0.077369 | 0.791103 | 0.791103 | 0.744681 | 0.744681 | 0.744681 | 0.744681 | 0 | 0.014551 | 0.225423 | 1,597 | 57 | 96 | 28.017544 | 0.821342 | 0.134001 | 0 | 0.564103 | 0 | 0 | 0.192701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.307692 | 0 | 0.410256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b86c9cd6707bf6a1da0bf6216b3a6648fa72cf57 | 104 | py | Python | weak_supervision/semparse/worlds/__init__.py | pdasigi/allennlp-weak-supervision-research | 900d064d5a29a905be2288004315678247c4d84b | [
"Apache-2.0"
] | null | null | null | weak_supervision/semparse/worlds/__init__.py | pdasigi/allennlp-weak-supervision-research | 900d064d5a29a905be2288004315678247c4d84b | [
"Apache-2.0"
] | null | null | null | weak_supervision/semparse/worlds/__init__.py | pdasigi/allennlp-weak-supervision-research | 900d064d5a29a905be2288004315678247c4d84b | [
"Apache-2.0"
] | null | null | null | from weak_supervision.semparse.worlds.wikitables_variable_free_world import WikiTablesVariableFreeWorld
| 52 | 103 | 0.932692 | 11 | 104 | 8.454545 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 104 | 1 | 104 | 104 | 0.93 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b8b0100af557588c30bca25f95d47470428d139e | 14,383 | py | Python | pug/dj/miner/migrations/0001_initial.py | hobson/pug-dj | 55678b08755a55366ce18e7d3b8ea8fa4491ab04 | [
"MIT"
] | null | null | null | pug/dj/miner/migrations/0001_initial.py | hobson/pug-dj | 55678b08755a55366ce18e7d3b8ea8fa4491ab04 | [
"MIT"
] | 5 | 2021-09-07T23:53:24.000Z | 2022-03-11T23:22:04.000Z | pug/dj/miner/migrations/0001_initial.py | hobson/pug-dj | 55678b08755a55366ce18e7d3b8ea8fa4491ab04 | [
"MIT"
] | 1 | 2015-04-23T14:45:04.000Z | 2015-04-23T14:45:04.000Z | # -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Connection'
db.create_table(u'miner_connection', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('ip', self.gf('django.db.models.fields.CharField')(max_length=15, null=True)),
('uri', self.gf('django.db.models.fields.CharField')(max_length=256, null=True)),
('fqdn', self.gf('django.db.models.fields.CharField')(max_length=128, null=True)),
('user', self.gf('django.db.models.fields.CharField')(max_length=128, null=True)),
('password', self.gf('django.db.models.fields.CharField')(max_length=128, null=True)),
('port', self.gf('django.db.models.fields.IntegerField')()),
))
db.send_create_signal(u'miner', ['Connection'])
# Adding model 'Database'
db.create_table(u'miner_database', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(default='', max_length=128)),
('date', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, auto_now_add=True, blank=True)),
('connection', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['miner.Connection'], null=True)),
))
db.send_create_signal(u'miner', ['Database'])
# Adding model 'Table'
db.create_table(u'miner_table', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('app', self.gf('django.db.models.fields.CharField')(default='', max_length=256, blank=True)),
('database', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['miner.Database'])),
('db_table', self.gf('django.db.models.fields.CharField')(max_length=256, null=True)),
('django_model', self.gf('django.db.models.fields.CharField')(default=None, max_length=256, null=True)),
('primary_key', self.gf('django.db.models.fields.related.OneToOneField')(default=None, to=orm['miner.Field'], unique=True, null=True)),
('count', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
))
db.send_create_signal(u'miner', ['Table'])
# Adding model 'ChangeLog'
db.create_table(u'miner_changelog', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('model', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('app', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('primary_key', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
('values_hash', self.gf('django.db.models.fields.IntegerField')(default=None, null=True, db_index=True, blank=True)),
))
db.send_create_signal(u'miner', ['ChangeLog'])
# Adding model 'Type'
db.create_table(u'miner_type', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('django_type', self.gf('django.db.models.fields.CharField')(default=None, max_length=20, null=True)),
('ansi_type', self.gf('django.db.models.fields.CharField')(max_length=20, null=True)),
))
db.send_create_signal(u'miner', ['Type'])
# Adding model 'Field'
db.create_table(u'miner_field', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('table_stats', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['miner.Table'])),
('max_length', self.gf('django.db.models.fields.IntegerField')(null=True)),
('blank', self.gf('django.db.models.fields.BooleanField')(default=False)),
('choices', self.gf('django.db.models.fields.TextField')(null=True)),
('django_type', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['miner.Type'], null=True)),
('type', self.gf('django.db.models.fields.CharField')(default='', max_length=32, blank=True)),
('scale', self.gf('django.db.models.fields.IntegerField')(null=True)),
('db_column', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('display_size', self.gf('django.db.models.fields.IntegerField')(null=True)),
('min', self.gf('django.db.models.fields.TextField')(null=True)),
('max', self.gf('django.db.models.fields.TextField')(null=True)),
('num_distinct', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
('num_null', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
('precision', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
('fraction_distinct', self.gf('django.db.models.fields.FloatField')(default=None, null=True)),
('internal_size', self.gf('django.db.models.fields.IntegerField')(default=None, null=True)),
('null_ok', self.gf('django.db.models.fields.NullBooleanField')(default=None, null=True, blank=True)),
('primary_key', self.gf('django.db.models.fields.NullBooleanField')(default=None, null=True, blank=True)),
('relative', self.gf('django.db.models.fields.related.ForeignKey')(related_name='relative_source', null=True, to=orm['miner.Field'])),
('relative_type', self.gf('django.db.models.fields.CharField')(max_length=20)),
))
db.send_create_signal(u'miner', ['Field'])
# Adding model 'Correlation'
db.create_table(u'miner_correlation', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('source', self.gf('django.db.models.fields.related.ForeignKey')(related_name='source_correlation', to=orm['miner.Field'])),
('target', self.gf('django.db.models.fields.related.ForeignKey')(related_name='target_correlation', to=orm['miner.Field'])),
('correlation', self.gf('django.db.models.fields.FloatField')(null=True)),
('mutual_information', self.gf('django.db.models.fields.FloatField')(null=True)),
('shared_distinct_values', self.gf('django.db.models.fields.IntegerField')()),
('shared_values', self.gf('django.db.models.fields.IntegerField')()),
('shared_distinct_words', self.gf('django.db.models.fields.IntegerField')()),
('shared_tokens', self.gf('django.db.models.fields.IntegerField')()),
))
db.send_create_signal(u'miner', ['Correlation'])
def backwards(self, orm):
# Deleting model 'Connection'
db.delete_table(u'miner_connection')
# Deleting model 'Database'
db.delete_table(u'miner_database')
# Deleting model 'Table'
db.delete_table(u'miner_table')
# Deleting model 'ChangeLog'
db.delete_table(u'miner_changelog')
# Deleting model 'Type'
db.delete_table(u'miner_type')
# Deleting model 'Field'
db.delete_table(u'miner_field')
# Deleting model 'Correlation'
db.delete_table(u'miner_correlation')
models = {
u'miner.changelog': {
'Meta': {'object_name': 'ChangeLog'},
'app': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'primary_key': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'values_hash': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True', 'db_index': 'True', 'blank': 'True'})
},
u'miner.connection': {
'Meta': {'object_name': 'Connection'},
'fqdn': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip': ('django.db.models.fields.CharField', [], {'max_length': '15', 'null': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True'}),
'port': ('django.db.models.fields.IntegerField', [], {}),
'uri': ('django.db.models.fields.CharField', [], {'max_length': '256', 'null': 'True'}),
'user': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True'})
},
u'miner.correlation': {
'Meta': {'object_name': 'Correlation'},
'correlation': ('django.db.models.fields.FloatField', [], {'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mutual_information': ('django.db.models.fields.FloatField', [], {'null': 'True'}),
'shared_distinct_values': ('django.db.models.fields.IntegerField', [], {}),
'shared_distinct_words': ('django.db.models.fields.IntegerField', [], {}),
'shared_tokens': ('django.db.models.fields.IntegerField', [], {}),
'shared_values': ('django.db.models.fields.IntegerField', [], {}),
'source': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'source_correlation'", 'to': u"orm['miner.Field']"}),
'target': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'target_correlation'", 'to': u"orm['miner.Field']"})
},
u'miner.database': {
'Meta': {'object_name': 'Database'},
'connection': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['miner.Connection']", 'null': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '128'})
},
u'miner.field': {
'Meta': {'object_name': 'Field'},
'blank': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'choices': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'db_column': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'display_size': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'django_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['miner.Type']", 'null': 'True'}),
'fraction_distinct': ('django.db.models.fields.FloatField', [], {'default': 'None', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'internal_size': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'max': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'max_length': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'min': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'null_ok': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'num_distinct': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'num_null': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'peer': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['miner.Field']", 'through': u"orm['miner.Correlation']", 'symmetrical': 'False'}),
'precision': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'primary_key': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'relative': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relative_source'", 'null': 'True', 'to': u"orm['miner.Field']"}),
'relative_type': ('django.db.models.fields.CharField', [], {'max_length': '20'}),
'scale': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'table_stats': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['miner.Table']"}),
'type': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '32', 'blank': 'True'})
},
u'miner.table': {
'Meta': {'object_name': 'Table'},
'app': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '256', 'blank': 'True'}),
'count': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'database': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['miner.Database']"}),
'db_table': ('django.db.models.fields.CharField', [], {'max_length': '256', 'null': 'True'}),
'django_model': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '256', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'primary_key': ('django.db.models.fields.related.OneToOneField', [], {'default': 'None', 'to': u"orm['miner.Field']", 'unique': 'True', 'null': 'True'})
},
u'miner.type': {
'Meta': {'object_name': 'Type'},
'ansi_type': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True'}),
'django_type': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '20', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
}
}
complete_apps = ['miner'] | 69.483092 | 171 | 0.593131 | 1,650 | 14,383 | 5.069091 | 0.069697 | 0.109039 | 0.189144 | 0.270206 | 0.844692 | 0.783955 | 0.771879 | 0.718914 | 0.664873 | 0.547226 | 0 | 0.007501 | 0.184315 | 14,383 | 207 | 172 | 69.483092 | 0.705421 | 0.025238 | 0 | 0.116279 | 0 | 0 | 0.490396 | 0.300321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011628 | false | 0.011628 | 0.023256 | 0 | 0.052326 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b246b63a42a5f95de77f592b4a5d1c04f222772d | 243 | py | Python | nssrc/com/citrix/netscaler/nitro/resource/config/cr/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 1 | 2015-04-05T21:21:26.000Z | 2015-04-05T21:21:26.000Z | nssrc/com/citrix/netscaler/nitro/resource/config/cr/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 1 | 2017-01-20T22:56:58.000Z | 2017-01-20T22:56:58.000Z | nssrc/com/citrix/netscaler/nitro/resource/config/cr/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 6 | 2015-04-21T13:14:08.000Z | 2020-12-03T07:27:52.000Z | __all__ = ['crpolicy', 'crvserver', 'crvserver_binding', 'crvserver_cmppolicy_binding', 'crvserver_crpolicy_binding', 'crvserver_cspolicy_binding', 'crvserver_filterpolicy_binding', 'crvserver_lbvserver_binding', 'crvserver_policymap_binding'] | 243 | 243 | 0.839506 | 23 | 243 | 8.130435 | 0.391304 | 0.513369 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041152 | 243 | 1 | 243 | 243 | 0.802575 | 0 | 0 | 0 | 0 | 0 | 0.807377 | 0.668033 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b24bc1df3799a8ab62ed336742e5bce0fabae98e | 31 | py | Python | morphelia/external/__init__.py | marx-alex/Morphelia | 809278b07f1a535789455d54df3cbddc850d609c | [
"MIT"
] | null | null | null | morphelia/external/__init__.py | marx-alex/Morphelia | 809278b07f1a535789455d54df3cbddc850d609c | [
"MIT"
] | null | null | null | morphelia/external/__init__.py | marx-alex/Morphelia | 809278b07f1a535789455d54df3cbddc850d609c | [
"MIT"
] | null | null | null | from .palantir import Palantir
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b250041f7a6201e976c57915354eb7d4cc633d73 | 199 | py | Python | synapse/servers/jsonstor.py | ackroute/synapse | 51197f89ab372d2e357bcd054358352ecca66840 | [
"Apache-2.0"
] | 216 | 2017-01-17T18:52:50.000Z | 2022-03-31T18:44:49.000Z | synapse/servers/jsonstor.py | ackroute/synapse | 51197f89ab372d2e357bcd054358352ecca66840 | [
"Apache-2.0"
] | 2,189 | 2017-01-17T22:31:48.000Z | 2022-03-31T20:41:45.000Z | synapse/servers/jsonstor.py | ackroute/synapse | 51197f89ab372d2e357bcd054358352ecca66840 | [
"Apache-2.0"
] | 44 | 2017-01-17T16:50:57.000Z | 2022-03-16T18:35:52.000Z | # pragma: no cover
import sys
import asyncio
import synapse.lib.jsonstor as s_jsonstor
if __name__ == '__main__': # pragma: no cover
asyncio.run(s_jsonstor.JsonStorCell.execmain(sys.argv[1:]))
| 22.111111 | 63 | 0.753769 | 29 | 199 | 4.827586 | 0.655172 | 0.114286 | 0.185714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005814 | 0.135678 | 199 | 8 | 64 | 24.875 | 0.80814 | 0.165829 | 0 | 0 | 0 | 0 | 0.04908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b251257d8b050e94a80e31dfe50d3a202b0aea89 | 126 | py | Python | pbsmmapi/franchise/models.py | WGBH/django-pbsmmapi-light | d33bbea8c4ede1905d74336df351a81e1d1c9d5c | [
"MIT"
] | null | null | null | pbsmmapi/franchise/models.py | WGBH/django-pbsmmapi-light | d33bbea8c4ede1905d74336df351a81e1d1c9d5c | [
"MIT"
] | null | null | null | pbsmmapi/franchise/models.py | WGBH/django-pbsmmapi-light | d33bbea8c4ede1905d74336df351a81e1d1c9d5c | [
"MIT"
] | null | null | null | from django.db import models
from ..abstract.models import PBSMMLightObject
class PBSMMFranchise(PBSMMLightObject):
pass
| 21 | 46 | 0.81746 | 14 | 126 | 7.357143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 126 | 5 | 47 | 25.2 | 0.936364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b253a1addfff6c60ff784a009f3c1c004331e627 | 4,632 | py | Python | tests/test_diagnostic.py | Jiaming1999/ChainConsumer | 5606696525d91f11d8093085934fa352b98ce97c | [
"MIT"
] | 55 | 2016-08-31T01:02:41.000Z | 2022-03-15T15:23:29.000Z | tests/test_diagnostic.py | Jiaming1999/ChainConsumer | 5606696525d91f11d8093085934fa352b98ce97c | [
"MIT"
] | 86 | 2016-10-09T23:20:00.000Z | 2022-03-23T09:55:57.000Z | tests/test_diagnostic.py | Jiaming1999/ChainConsumer | 5606696525d91f11d8093085934fa352b98ce97c | [
"MIT"
] | 17 | 2016-08-31T08:35:37.000Z | 2021-07-24T16:39:26.000Z | import numpy as np
import pytest
from chainconsumer import ChainConsumer
def test_gelman_rubin_index():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4)
assert consumer.diagnostic.gelman_rubin(chain=0)
def test_gelman_rubin_index_not_converged():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
data[80000:, :] *= 2
data[80000:, :] += 1
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4)
assert not consumer.diagnostic.gelman_rubin(chain=0)
def test_gelman_rubin_index_not_converged_linear_drift():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
data[:, 0] += np.linspace(0, 10, 100000)
consumer = ChainConsumer()
consumer.add_chain(data, walkers=8)
assert not consumer.diagnostic.gelman_rubin(chain=0)
def test_gelman_rubin_index_fails():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4)
with pytest.raises(AssertionError):
consumer.diagnostic.gelman_rubin(chain=10)
def test_gelman_rubin_name():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4, name="testchain")
assert consumer.diagnostic.gelman_rubin(chain="testchain")
def test_gelman_rubin_name_fails():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4, name="testchain")
with pytest.raises(AssertionError):
consumer.diagnostic.gelman_rubin(chain="testchain2")
def test_gelman_rubin_unknown_fails():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4, name="testchain")
with pytest.raises(ValueError):
consumer.diagnostic.gelman_rubin(chain=np.pi)
def test_gelman_rubin_default():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4, name="c1")
consumer.add_chain(data, walkers=4, name="c2")
consumer.add_chain(data, walkers=4, name="c3")
assert consumer.diagnostic.gelman_rubin()
def test_gelman_rubin_default_not_converge():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=4, name="c1")
consumer.add_chain(data, walkers=4, name="c2")
data2 = data.copy()
data2[:, 0] += np.linspace(-5, 5, 100000)
consumer.add_chain(data2, walkers=4, name="c3")
assert not consumer.diagnostic.gelman_rubin()
def test_geweke_index():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=20, name="c1")
assert consumer.diagnostic.geweke(chain=0)
def test_geweke_index_failed():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
data[98000:, :] += 0.5
consumer.add_chain(data, walkers=20, name="c1")
assert not consumer.diagnostic.geweke(chain=0)
def test_geweke_default():
np.random.seed(0)
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=20, name="c1")
consumer.add_chain(data, walkers=20, name="c2")
assert consumer.diagnostic.geweke(chain=0)
def test_geweke_default_failed():
data = np.vstack((np.random.normal(loc=0.0, size=100000),
np.random.normal(loc=1.0, size=100000))).T
consumer = ChainConsumer()
consumer.add_chain(data, walkers=20, name="c1")
data2 = data.copy()
data2[98000:, :] += 0.3
consumer.add_chain(data2, walkers=20, name="c2")
assert not consumer.diagnostic.geweke() | 36.1875 | 64 | 0.661054 | 646 | 4,632 | 4.625387 | 0.092879 | 0.072289 | 0.121821 | 0.147925 | 0.910308 | 0.824632 | 0.784137 | 0.746319 | 0.746319 | 0.624833 | 0 | 0.081578 | 0.190199 | 4,632 | 128 | 65 | 36.1875 | 0.715009 | 0 | 0 | 0.636364 | 0 | 0 | 0.015109 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 1 | 0.131313 | false | 0 | 0.030303 | 0 | 0.161616 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b26eab19122820e40998f016d5ef03a6e51dfccf | 484 | py | Python | octicons16px/sign_out.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/sign_out.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/sign_out.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_SIGN_OUT = """
<svg class="octicon octicon-sign-out" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M2 2.75C2 1.784 2.784 1 3.75 1h2.5a.75.75 0 010 1.5h-2.5a.25.25 0 00-.25.25v10.5c0 .138.112.25.25.25h2.5a.75.75 0 010 1.5h-2.5A1.75 1.75 0 012 13.25V2.75zm10.44 4.5H6.75a.75.75 0 000 1.5h5.69l-1.97 1.97a.75.75 0 101.06 1.06l3.25-3.25a.75.75 0 000-1.06l-3.25-3.25a.75.75 0 10-1.06 1.06l1.97 1.97z"></path></svg>
"""
| 96.8 | 455 | 0.667355 | 127 | 484 | 2.527559 | 0.480315 | 0.065421 | 0.093458 | 0.043614 | 0.196262 | 0.155763 | 0.087227 | 0.087227 | 0 | 0 | 0 | 0.43318 | 0.103306 | 484 | 4 | 456 | 121 | 0.306452 | 0 | 0 | 0 | 0 | 0.333333 | 0.94617 | 0.15528 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2772c3d1c48161a4376aa884c2738ebdf6d2094 | 26 | py | Python | hello_world.py | ajlif/profiles-api | b3233dca232a53818dbcd9caaba3fe477ea8284e | [
"MIT"
] | null | null | null | hello_world.py | ajlif/profiles-api | b3233dca232a53818dbcd9caaba3fe477ea8284e | [
"MIT"
] | 3 | 2021-03-18T22:35:45.000Z | 2021-06-10T18:10:50.000Z | hello_world.py | ajlif/profiles-api | b3233dca232a53818dbcd9caaba3fe477ea8284e | [
"MIT"
] | null | null | null | print("hello from local")
| 13 | 25 | 0.730769 | 4 | 26 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a24ba9f1b79cba8e840e25ead8654dfa9c5da454 | 49 | py | Python | functions/python/perper/utils/__init__.py | obecto/perper | ce25abde413bdb4c054a06d810939e98fac04d62 | [
"MIT"
] | 24 | 2019-11-11T13:26:12.000Z | 2022-03-18T23:38:07.000Z | functions/python/perper/utils/__init__.py | obecto/perper | ce25abde413bdb4c054a06d810939e98fac04d62 | [
"MIT"
] | 76 | 2020-01-25T16:48:37.000Z | 2022-01-03T09:26:11.000Z | functions/python/perper/utils/__init__.py | obecto/perper | ce25abde413bdb4c054a06d810939e98fac04d62 | [
"MIT"
] | 4 | 2020-06-25T13:21:37.000Z | 2021-11-03T09:05:11.000Z | from .perper_thin_client import PerperThinClient
| 24.5 | 48 | 0.897959 | 6 | 49 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a25a4a388edce240edc9807b3c15e26076b684e5 | 95 | py | Python | calc/calc.py | zoch22/gethub | 3564b96e29a28bdc65e0e709d6e1b414bd708914 | [
"MIT"
] | null | null | null | calc/calc.py | zoch22/gethub | 3564b96e29a28bdc65e0e709d6e1b414bd708914 | [
"MIT"
] | null | null | null | calc/calc.py | zoch22/gethub | 3564b96e29a28bdc65e0e709d6e1b414bd708914 | [
"MIT"
] | null | null | null | print("hello")
x = int(input("enter number 1 "))
y = int(input("enter number 2 "))
print (x+y) | 23.75 | 34 | 0.621053 | 17 | 95 | 3.470588 | 0.588235 | 0.271186 | 0.440678 | 0.644068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.157895 | 95 | 4 | 35 | 23.75 | 0.7125 | 0 | 0 | 0 | 0 | 0 | 0.364583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a2d5890a7bbc8425497aeafbdde56301f8e03447 | 283 | py | Python | snmpagent_unity/unity_impl/VolumeWriteBandwidth.py | factioninc/snmp-unity-agent | 3525dc0fac60d1c784dcdd7c41693544bcbef843 | [
"Apache-2.0"
] | 2 | 2019-03-01T11:14:59.000Z | 2019-10-02T17:47:59.000Z | snmpagent_unity/unity_impl/VolumeWriteBandwidth.py | factioninc/snmp-unity-agent | 3525dc0fac60d1c784dcdd7c41693544bcbef843 | [
"Apache-2.0"
] | 2 | 2019-03-01T11:26:29.000Z | 2019-10-11T18:56:54.000Z | snmpagent_unity/unity_impl/VolumeWriteBandwidth.py | factioninc/snmp-unity-agent | 3525dc0fac60d1c784dcdd7c41693544bcbef843 | [
"Apache-2.0"
] | 1 | 2019-10-03T21:09:17.000Z | 2019-10-03T21:09:17.000Z | class VolumeWriteBandwidth(object):
def read_get(self, name, idx_name, unity_client):
return unity_client.get_lun_write_byte_rate(idx_name)
class VolumeWriteBandwidthColumn(object):
def get_idx(self, name, idx, unity_client):
return unity_client.get_luns()
| 31.444444 | 61 | 0.759717 | 38 | 283 | 5.315789 | 0.473684 | 0.217822 | 0.108911 | 0.217822 | 0.306931 | 0.306931 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155477 | 283 | 8 | 62 | 35.375 | 0.845188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |