hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f8ef348d72a2c59ebee4e58d2d54391c314c7c4c | 49 | py | Python | tasks/__init__.py | narphorium/mesh-transformer-jax | 76cda3c2440e5993d697d04650ea6429f8574c83 | [
"Apache-2.0"
] | 4,045 | 2021-03-14T06:09:01.000Z | 2022-03-31T16:24:44.000Z | tasks/__init__.py | narphorium/mesh-transformer-jax | 76cda3c2440e5993d697d04650ea6429f8574c83 | [
"Apache-2.0"
] | 201 | 2021-03-19T16:08:36.000Z | 2022-03-28T01:55:55.000Z | tasks/__init__.py | narphorium/mesh-transformer-jax | 76cda3c2440e5993d697d04650ea6429f8574c83 | [
"Apache-2.0"
] | 520 | 2021-04-25T23:53:31.000Z | 2022-03-31T14:35:09.000Z | from tasks.eval_harness import EvalHarnessAdaptor | 49 | 49 | 0.918367 | 6 | 49 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061224 | 49 | 1 | 49 | 49 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d043b00f5a0354f5f8a7b201a82c099454d5f5d | 6,576 | py | Python | tests/test_ops/test_roiaware_pool3d.py | BIGWangYuDong/mmcv | c46deb0576edaff5cd5a7d384c617478c7a73a70 | [
"Apache-2.0"
] | null | null | null | tests/test_ops/test_roiaware_pool3d.py | BIGWangYuDong/mmcv | c46deb0576edaff5cd5a7d384c617478c7a73a70 | [
"Apache-2.0"
] | null | null | null | tests/test_ops/test_roiaware_pool3d.py | BIGWangYuDong/mmcv | c46deb0576edaff5cd5a7d384c617478c7a73a70 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) OpenMMLab. All rights reserved.
import numpy as np
import pytest
import torch
from mmcv.ops import (RoIAwarePool3d, points_in_boxes_all, points_in_boxes_cpu,
points_in_boxes_part)
@pytest.mark.skipif(
not torch.cuda.is_available(), reason='requires CUDA support')
def test_RoIAwarePool3d():
roiaware_pool3d_max = RoIAwarePool3d(
out_size=4, max_pts_per_voxel=128, mode='max')
roiaware_pool3d_avg = RoIAwarePool3d(
out_size=4, max_pts_per_voxel=128, mode='avg')
rois = torch.tensor(
[[1.0, 2.0, 3.0, 5.0, 4.0, 6.0, -0.3 - np.pi / 2],
[-10.0, 23.0, 16.0, 20.0, 10.0, 20.0, -0.5 - np.pi / 2]],
dtype=torch.float32).cuda(
) # boxes (m, 7) with bottom center in lidar coordinate
pts = torch.tensor(
[[1, 2, 3.3], [1.2, 2.5, 3.0], [0.8, 2.1, 3.5], [1.6, 2.6, 3.6],
[0.8, 1.2, 3.9], [-9.2, 21.0, 18.2], [3.8, 7.9, 6.3],
[4.7, 3.5, -12.2], [3.8, 7.6, -2], [-10.6, -12.9, -20], [-16, -18, 9],
[-21.3, -52, -5], [0, 0, 0], [6, 7, 8], [-2, -3, -4]],
dtype=torch.float32).cuda() # points (n, 3) in lidar coordinate
pts_feature = pts.clone()
pooled_features_max = roiaware_pool3d_max(
rois=rois, pts=pts, pts_feature=pts_feature)
assert pooled_features_max.shape == torch.Size([2, 4, 4, 4, 3])
assert torch.allclose(pooled_features_max.sum(),
torch.tensor(51.100).cuda(), 1e-3)
pooled_features_avg = roiaware_pool3d_avg(
rois=rois, pts=pts, pts_feature=pts_feature)
assert pooled_features_avg.shape == torch.Size([2, 4, 4, 4, 3])
assert torch.allclose(pooled_features_avg.sum(),
torch.tensor(49.750).cuda(), 1e-3)
@pytest.mark.skipif(
not torch.cuda.is_available(), reason='requires CUDA support')
def test_points_in_boxes_part():
boxes = torch.tensor(
[[[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 0.3]],
[[-10.0, 23.0, 16.0, 10, 20, 20, 0.5]]],
dtype=torch.float32).cuda(
) # boxes (b, t, 7) with bottom center in lidar coordinate
pts = torch.tensor(
[[[1, 2, 3.3], [1.2, 2.5, 3.0], [0.8, 2.1, 3.5], [1.6, 2.6, 3.6],
[0.8, 1.2, 3.9], [-9.2, 21.0, 18.2], [3.8, 7.9, 6.3],
[4.7, 3.5, -12.2]],
[[3.8, 7.6, -2], [-10.6, -12.9, -20], [-16, -18, 9], [-21.3, -52, -5],
[0, 0, 0], [6, 7, 8], [-2, -3, -4], [6, 4, 9]]],
dtype=torch.float32).cuda() # points (b, m, 3) in lidar coordinate
point_indices = points_in_boxes_part(points=pts, boxes=boxes)
expected_point_indices = torch.tensor(
[[0, 0, 0, 0, 0, -1, -1, -1], [-1, -1, -1, -1, -1, -1, -1, -1]],
dtype=torch.int32).cuda()
assert point_indices.shape == torch.Size([2, 8])
assert (point_indices == expected_point_indices).all()
boxes = torch.tensor([[[0.0, 0.0, 0.0, 1.0, 20.0, 1.0, 0.523598]]],
dtype=torch.float32).cuda() # 30 degrees
pts = torch.tensor(
[[[4, 6.928, 0], [6.928, 4, 0], [4, -6.928, 0], [6.928, -4, 0],
[-4, 6.928, 0], [-6.928, 4, 0], [-4, -6.928, 0], [-6.928, -4, 0]]],
dtype=torch.float32).cuda()
point_indices = points_in_boxes_part(points=pts, boxes=boxes)
expected_point_indices = torch.tensor([[-1, -1, 0, -1, 0, -1, -1, -1]],
dtype=torch.int32).cuda()
assert (point_indices == expected_point_indices).all()
def test_points_in_boxes_cpu():
boxes = torch.tensor(
[[[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 0.3],
[-10.0, 23.0, 16.0, 10, 20, 20, 0.5]]],
dtype=torch.float32
) # boxes (m, 7) with bottom center in lidar coordinate
pts = torch.tensor(
[[[1, 2, 3.3], [1.2, 2.5, 3.0], [0.8, 2.1, 3.5], [1.6, 2.6, 3.6],
[0.8, 1.2, 3.9], [-9.2, 21.0, 18.2], [3.8, 7.9, 6.3],
[4.7, 3.5, -12.2], [3.8, 7.6, -2], [-10.6, -12.9, -20], [
-16, -18, 9
], [-21.3, -52, -5], [0, 0, 0], [6, 7, 8], [-2, -3, -4]]],
dtype=torch.float32) # points (n, 3) in lidar coordinate
point_indices = points_in_boxes_cpu(points=pts, boxes=boxes)
expected_point_indices = torch.tensor(
[[[1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [0, 1], [0, 0], [0, 0],
[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]],
dtype=torch.int32)
assert point_indices.shape == torch.Size([1, 15, 2])
assert (point_indices == expected_point_indices).all()
boxes = torch.tensor([[[0.0, 0.0, 0.0, 1.0, 20.0, 1.0, 0.523598]]],
dtype=torch.float32) # 30 degrees
pts = torch.tensor(
[[[4, 6.928, 0], [6.928, 4, 0], [4, -6.928, 0], [6.928, -4, 0],
[-4, 6.928, 0], [-6.928, 4, 0], [-4, -6.928, 0], [-6.928, -4, 0]]],
dtype=torch.float32)
point_indices = points_in_boxes_cpu(points=pts, boxes=boxes)
expected_point_indices = torch.tensor(
[[[0], [0], [1], [0], [1], [0], [0], [0]]], dtype=torch.int32)
assert (point_indices == expected_point_indices).all()
@pytest.mark.skipif(
not torch.cuda.is_available(), reason='requires CUDA support')
def test_points_in_boxes_all():
boxes = torch.tensor(
[[[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 0.3],
[-10.0, 23.0, 16.0, 10, 20, 20, 0.5]]],
dtype=torch.float32).cuda(
) # boxes (m, 7) with bottom center in lidar coordinate
pts = torch.tensor(
[[[1, 2, 3.3], [1.2, 2.5, 3.0], [0.8, 2.1, 3.5], [1.6, 2.6, 3.6],
[0.8, 1.2, 3.9], [-9.2, 21.0, 18.2], [3.8, 7.9, 6.3],
[4.7, 3.5, -12.2], [3.8, 7.6, -2], [-10.6, -12.9, -20], [
-16, -18, 9
], [-21.3, -52, -5], [0, 0, 0], [6, 7, 8], [-2, -3, -4]]],
dtype=torch.float32).cuda() # points (n, 3) in lidar coordinate
point_indices = points_in_boxes_all(points=pts, boxes=boxes)
expected_point_indices = torch.tensor(
[[[1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [0, 1], [0, 0], [0, 0],
[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]],
dtype=torch.int32).cuda()
assert point_indices.shape == torch.Size([1, 15, 2])
assert (point_indices == expected_point_indices).all()
if torch.cuda.device_count() > 1:
pts = pts.to('cuda:1')
boxes = boxes.to('cuda:1')
expected_point_indices = expected_point_indices.to('cuda:1')
point_indices = points_in_boxes_all(points=pts, boxes=boxes)
assert point_indices.shape == torch.Size([1, 15, 2])
assert (point_indices == expected_point_indices).all()
| 45.666667 | 79 | 0.524483 | 1,135 | 6,576 | 2.933921 | 0.088987 | 0.043243 | 0.043243 | 0.045646 | 0.866967 | 0.829129 | 0.825526 | 0.810811 | 0.805706 | 0.789489 | 0 | 0.160943 | 0.251673 | 6,576 | 143 | 80 | 45.986014 | 0.515749 | 0.063412 | 0 | 0.620968 | 0 | 0 | 0.014153 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 1 | 0.032258 | false | 0 | 0.032258 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d0bc92595b7d95b7cd9d6493f7911bc96c4ffd9 | 91 | py | Python | tests/__init__.py | SabaunT/bot-motivator | 6b80b8d47f9ed0d071195c1f312a419093665994 | [
"MIT"
] | null | null | null | tests/__init__.py | SabaunT/bot-motivator | 6b80b8d47f9ed0d071195c1f312a419093665994 | [
"MIT"
] | 4 | 2019-12-15T13:39:22.000Z | 2020-02-20T14:07:55.000Z | tests/__init__.py | SabaunT/bot-motivator | 6b80b8d47f9ed0d071195c1f312a419093665994 | [
"MIT"
] | 2 | 2019-12-12T22:13:35.000Z | 2020-03-08T18:37:06.000Z | import sys
sys.path.append('/media/sabaun/4C71BE7650587C7D/documents/bot-motivator/app')
| 18.2 | 77 | 0.802198 | 12 | 91 | 6.083333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127907 | 0.054945 | 91 | 4 | 78 | 22.75 | 0.72093 | 0 | 0 | 0 | 0 | 0 | 0.637363 | 0.637363 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5d5fa3c7ea19826d2fb8fef75fd71f779425c931 | 3,928 | py | Python | tests/test_user_remove_association.py | ScilifelabDataCentre/DDS_WEB | 8799cf51bef456fb843758b2e94462766e2b3319 | [
"BSD-3-Clause"
] | 3 | 2021-06-18T09:38:28.000Z | 2022-02-28T19:37:54.000Z | tests/test_user_remove_association.py | ScilifelabDataCentre/DDS_WEB | 8799cf51bef456fb843758b2e94462766e2b3319 | [
"BSD-3-Clause"
] | 610 | 2021-05-12T08:33:31.000Z | 2022-03-31T14:55:05.000Z | tests/test_user_remove_association.py | ScilifelabDataCentre/DDS_WEB | 8799cf51bef456fb843758b2e94462766e2b3319 | [
"BSD-3-Clause"
] | 12 | 2021-05-19T10:33:45.000Z | 2022-03-16T10:23:27.000Z | # Installed
import json
import http
import copy
# Own
import tests
from tests.test_project_creation import proj_data_with_existing_users
def test_remove_user_from_project(client, boto3_session):
"""Remove an associated user from a project"""
response = client.post(
tests.DDSEndpoint.PROJECT_CREATE,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(proj_data_with_existing_users),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.OK
project_id = response.json.get("project_id")
email = proj_data_with_existing_users["users_to_add"][0]["email"]
rem_user = {"email": email, "project": project_id}
response = client.post(
tests.DDSEndpoint.REMOVE_USER_FROM_PROJ,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(rem_user),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.OK
assert (
f"User with email {email} no longer associated with {project_id}."
in response.json["message"]
)
def test_remove_not_associated_user_from_project(client, boto3_session):
"""Try to remove a user that exists in db but is not associated to a project"""
proj_data = copy.deepcopy(proj_data_with_existing_users)
proj_data["users_to_add"].pop(1)
response = client.post(
tests.DDSEndpoint.PROJECT_CREATE,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(proj_data),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.OK
project_id = response.json.get("project_id")
email = proj_data_with_existing_users["users_to_add"][1]["email"]
rem_user = {"email": email, "project": project_id}
response = client.post(
tests.DDSEndpoint.REMOVE_USER_FROM_PROJ,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(rem_user),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.OK
assert "User already not associated with this project" in response.json["message"]
def test_remove_nonexistent_user_from_project(client, boto3_session):
"""Try to remove an nonexistent user from a project"""
response = client.post(
tests.DDSEndpoint.PROJECT_CREATE,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(proj_data_with_existing_users),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.OK
project_id = response.json.get("project_id")
email = "nonexistent@testmail.com"
rem_user = {"email": email, "project": project_id}
response = client.post(
tests.DDSEndpoint.REMOVE_USER_FROM_PROJ,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(rem_user),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.BAD_REQUEST
assert f"{email} already not associated with this project" in response.json["message"]
def test_remove_existing_user_from_nonexistent_proj(client, boto3_session):
"""Try to an existing user from a nonexistent project"""
project_id = "nonexistent001"
email = proj_data_with_existing_users["users_to_add"][0]["email"]
rem_user = {"email": email, "project": project_id}
response = client.post(
tests.DDSEndpoint.REMOVE_USER_FROM_PROJ,
headers=tests.UserAuth(tests.USER_CREDENTIALS["unituser"]).token(client),
data=json.dumps(rem_user),
content_type="application/json",
)
assert response.status_code == http.HTTPStatus.BAD_REQUEST
assert "The specified project does not exist" in response.json["message"]
| 37.056604 | 90 | 0.714358 | 500 | 3,928 | 5.364 | 0.154 | 0.040268 | 0.03132 | 0.0522 | 0.824758 | 0.797539 | 0.785235 | 0.772558 | 0.772558 | 0.739746 | 0 | 0.003391 | 0.174134 | 3,928 | 105 | 91 | 37.409524 | 0.823366 | 0.058299 | 0 | 0.632911 | 0 | 0 | 0.154202 | 0.006527 | 0 | 0 | 0 | 0 | 0.139241 | 1 | 0.050633 | false | 0 | 0.063291 | 0 | 0.113924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d687722fb221687e6fbfc314fc00c6f862c1a8d | 12,559 | py | Python | project/proj2_baseball/src/RobotControl.py | robot-tutorial/robotics_tutorial | 12affebfe6cb3810cc1e8fde4c674ed077b926a5 | [
"MIT"
] | null | null | null | project/proj2_baseball/src/RobotControl.py | robot-tutorial/robotics_tutorial | 12affebfe6cb3810cc1e8fde4c674ed077b926a5 | [
"MIT"
] | null | null | null | project/proj2_baseball/src/RobotControl.py | robot-tutorial/robotics_tutorial | 12affebfe6cb3810cc1e8fde4c674ed077b926a5 | [
"MIT"
] | 1 | 2020-04-23T14:11:00.000Z | 2020-04-23T14:11:00.000Z | import pybullet as p
import numpy as np
import Jacobian
def load():
# work in the following section to load your robot
robotName = 'HextechCatcher.urdf'
# robotPath = os.path.join('project', 'proj2_baseball', 'rsc', robotName)
robotInitPos = [0.0, 0.0, 1.1]
robotInitOrn = p.getQuaternionFromEuler([0, 0, 0])
robotId = p.loadURDF("../rsc/robotarm/urdf/robotarm.urdf", robotInitPos, robotInitOrn, useFixedBase=1)
basePos = [-0.3, -0.3, 0.3, 0]
return robotId, basePos
def generateTraj(robotId, ballPos, targetPos):
# work in this section: generate your trajectory as a second-order (nested) list
# e.g. traj = [[j_1(t1), j_2(t1), j_3(t1)], [j_1(t2), j_2(t2), j_3(t2)], [j_1(t3), j_2(t3), j_3(t3)], ...]
# robotId is the Unique body index for your robot
# ballPos is a list for the baseball position, like [x, y, z]
# targetPos is a list for the target position, like [x, y, z]
# do not use the inverse kinematics function of pybullet!!!!!!
pi = 3.14159265358979323846264338327950288419716939937510
traj = []
# numJoints = p.getNumJoints(robotId)
Ball_x = ballPos[0]
Ball_y = ballPos[1]
# calculate the polar angle of the ball
if True:
if Ball_y >= 0:
if Ball_x == 0:
Ball_theta = pi / 2
elif Ball_x < 0:
Ball_theta = np.arctan(Ball_y / Ball_x) + pi
else:
Ball_theta = np.arctan(Ball_y / Ball_x)
else:
if Ball_x == 0:
Ball_theta = 3 * pi / 2
elif Ball_x < 0:
Ball_theta = np.arctan(Ball_y / Ball_x) + pi
else:
Ball_theta = np.arctan(Ball_y / Ball_x) + 2 * pi
# set an initial set of joint angles to avoid singularity
theta = [Ball_theta + pi / 2, 0.0, 3 * pi / 4, 3 * pi / 4, pi / 2, 0]
for i in range(480):
traj.append([theta[0] * i / 480, theta[1] * i / 480, theta[2] * i / 480, 0, theta[3] * i / 480,
theta[4] * i / 480, theta[5] * i / 480, 0.0, 0.0])
# number of interpolation steps for each Cartesian move
step = 240
# move the catcher in the x-y plane until it is directly above the ball
if True:
theta1 = theta[0]
theta2 = theta[1]
theta3 = theta[2]
theta5 = theta[3]
theta6 = theta[4]
theta7 = theta[5]
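# forward kinematics: build a homogeneous transform (rotation R, translation P in mm)
# for each joint and chain them below to obtain the end-effector position pOF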
WRA = np.array([[np.cos(theta1), -np.sin(theta1), 0], [np.sin(theta1), np.cos(theta1), 0], [0, 0, 1]])
WPA = np.array([[0], [0], [215.2]])
WtA = np.concatenate((WRA, WPA), axis=1)
WTA = np.concatenate((WtA, [[0, 0, 0, 1]]), axis=0)
ARB = np.array([[1, 0, 0], [0, np.cos(theta2), -np.sin(theta2)], [0, np.sin(theta2), np.cos(theta2)]])
APB = np.array([[162.4], [0], [0]])
AtB = np.concatenate((ARB, APB), axis=1)
ATB = np.concatenate((AtB, [[0, 0, 0, 1]]), axis=0)
BRC = np.array([[1, 0, 0], [0, np.cos(theta3), -np.sin(theta3)], [0, np.sin(theta3), np.cos(theta3)]])
BPC = np.array([[-162.4], [0], [351]])
BtC = np.concatenate((BRC, BPC), axis=1)
BTC = np.concatenate((BtC, [[0, 0, 0, 1]]), axis=0)
CRD = np.array([[1, 0, 0], [0, np.cos(theta5), -np.sin(theta5)], [0, np.sin(theta5), np.cos(theta5)]])
CPD = np.array([[0], [0], [351.2]])
CtD = np.concatenate((CRD, CPD), axis=1)
CTD = np.concatenate((CtD, [[0, 0, 0, 1]]), axis=0)
DRE = np.array([[np.cos(theta6), -np.sin(theta6), 0], [np.sin(theta6), np.cos(theta6), 0], [0, 0, 1]])
DPE = np.array([[162.4], [0], [0]])
DtE = np.concatenate((DRE, DPE), axis=1)
DTE = np.concatenate((DtE, [[0, 0, 0, 1]]), axis=0)
ERF = np.array([[1, 0, 0], [0, np.cos(theta7), -np.sin(theta7)], [0, np.sin(theta7), np.cos(theta7)]])
EPF = np.array([[0], [0], [162.4]])
EtF = np.concatenate((ERF, EPF), axis=1)
ETF = np.concatenate((EtF, [[0, 0, 0, 1]]), axis=0)
PO = np.array([[0], [0], [0], [1]])
pOF = np.dot(WTA, np.dot(ATB, np.dot(BTC, np.dot(CTD, np.dot(DTE, np.dot(ETF, PO)))))) + [[0.0], [0.0], [1100.0], [0.0]]
CurrentX = pOF[0][0] / 1000
print('CurrentX: ')
print(CurrentX)
print('ballPos: ')
print(ballPos[0])
CurrentY = pOF[1][0] / 1000
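# resolved-rate style correction: split the remaining Cartesian offset into small
# per-step displacements and map each one to joint increments via the inverse Jacobian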
for i in range(step):
delta_p = np.array([[(ballPos[0] - CurrentX) * 1000 / step], [(ballPos[1] - CurrentY) * 1000 / step], [0.0 * 1000 / step], [0.0 / 240.], [0.0 / 240.], [0.0 / 240.]])
Ja = Jacobian.jacobian(theta[0], theta[1], theta[2], theta[3], theta[4], theta[5])
Ja = np.array(Ja, dtype='float')
Jainv = np.linalg.inv(Ja)
delta_theta = np.dot(Jainv, delta_p)
theta = [theta[0] + delta_theta[0][0],
theta[1] + delta_theta[1][0],
theta[2] + delta_theta[2][0],
theta[3] + delta_theta[3][0],
theta[4] + delta_theta[4][0],
theta[5] + delta_theta[5][0]
]
traj.append([theta[0], theta[1], theta[2], 0, theta[3], theta[4], theta[5], 0.0, 0.0])
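# the same forward-kinematics / Jacobian correction is repeated once more to refine
# the horizontal alignment with the ball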
if True:
theta1 = theta[0]
theta2 = theta[1]
theta3 = theta[2]
theta5 = theta[3]
theta6 = theta[4]
theta7 = theta[5]
WRA = np.array([[np.cos(theta1), -np.sin(theta1), 0], [np.sin(theta1), np.cos(theta1), 0], [0, 0, 1]])
WPA = np.array([[0], [0], [215.2]])
WtA = np.concatenate((WRA, WPA), axis=1)
WTA = np.concatenate((WtA, [[0, 0, 0, 1]]), axis=0)
ARB = np.array([[1, 0, 0], [0, np.cos(theta2), -np.sin(theta2)], [0, np.sin(theta2), np.cos(theta2)]])
APB = np.array([[162.4], [0], [0]])
AtB = np.concatenate((ARB, APB), axis=1)
ATB = np.concatenate((AtB, [[0, 0, 0, 1]]), axis=0)
BRC = np.array([[1, 0, 0], [0, np.cos(theta3), -np.sin(theta3)], [0, np.sin(theta3), np.cos(theta3)]])
BPC = np.array([[-162.4], [0], [351]])
BtC = np.concatenate((BRC, BPC), axis=1)
BTC = np.concatenate((BtC, [[0, 0, 0, 1]]), axis=0)
CRD = np.array([[1, 0, 0], [0, np.cos(theta5), -np.sin(theta5)], [0, np.sin(theta5), np.cos(theta5)]])
CPD = np.array([[0], [0], [351.2]])
CtD = np.concatenate((CRD, CPD), axis=1)
CTD = np.concatenate((CtD, [[0, 0, 0, 1]]), axis=0)
DRE = np.array([[np.cos(theta6), -np.sin(theta6), 0], [np.sin(theta6), np.cos(theta6), 0], [0, 0, 1]])
DPE = np.array([[162.4], [0], [0]])
DtE = np.concatenate((DRE, DPE), axis=1)
DTE = np.concatenate((DtE, [[0, 0, 0, 1]]), axis=0)
ERF = np.array([[1, 0, 0], [0, np.cos(theta7), -np.sin(theta7)], [0, np.sin(theta7), np.cos(theta7)]])
EPF = np.array([[0], [0], [162.4]])
EtF = np.concatenate((ERF, EPF), axis=1)
ETF = np.concatenate((EtF, [[0, 0, 0, 1]]), axis=0)
PO = np.array([[0], [0], [0], [1]])
pOF = np.dot(WTA, np.dot(ATB, np.dot(BTC, np.dot(CTD, np.dot(DTE, np.dot(ETF, PO)))))) + [[0.0], [0.0],
[1100.0], [0.0]]
CurrentX = pOF[0][0] / 1000
print('CurrentX: ')
print(CurrentX)
print('ballPos: ')
print(ballPos[0])
CurrentY = pOF[1][0] / 1000
for i in range(step):
delta_p = np.array([[(ballPos[0] - CurrentX) * 1000 / step], [(ballPos[1] - CurrentY) * 1000 / step], [0.0 * 1000 / step], [0.0 / 240.], [0.0 / 240.], [0.0 / 240.]])
Ja = Jacobian.jacobian(theta[0], theta[1], theta[2], theta[3], theta[4], theta[5])
Ja = np.array(Ja, dtype='float')
Jainv = np.linalg.inv(Ja)
delta_theta = np.dot(Jainv, delta_p)
theta = [theta[0] + delta_theta[0][0],
theta[1] + delta_theta[1][0],
theta[2] + delta_theta[2][0],
theta[3] + delta_theta[3][0],
theta[4] + delta_theta[4][0],
theta[5] + delta_theta[5][0]
]
traj.append([theta[0], theta[1], theta[2], 0, theta[3], theta[4], theta[5], 0.0, 0.0])
# move down along the z axis towards the ball
if True:
theta1 = theta[0]
theta2 = theta[1]
theta3 = theta[2]
theta5 = theta[3]
theta6 = theta[4]
theta7 = theta[5]
WRA = np.array([[np.cos(theta1), -np.sin(theta1), 0], [np.sin(theta1), np.cos(theta1), 0], [0, 0, 1]])
WPA = np.array([[0], [0], [215.2]])
WtA = np.concatenate((WRA, WPA), axis=1)
WTA = np.concatenate((WtA, [[0, 0, 0, 1]]), axis=0)
ARB = np.array([[1, 0, 0], [0, np.cos(theta2), -np.sin(theta2)], [0, np.sin(theta2), np.cos(theta2)]])
APB = np.array([[162.4], [0], [0]])
AtB = np.concatenate((ARB, APB), axis=1)
ATB = np.concatenate((AtB, [[0, 0, 0, 1]]), axis=0)
BRC = np.array([[1, 0, 0], [0, np.cos(theta3), -np.sin(theta3)], [0, np.sin(theta3), np.cos(theta3)]])
BPC = np.array([[-162.4], [0], [351]])
BtC = np.concatenate((BRC, BPC), axis=1)
BTC = np.concatenate((BtC, [[0, 0, 0, 1]]), axis=0)
CRD = np.array([[1, 0, 0], [0, np.cos(theta5), -np.sin(theta5)], [0, np.sin(theta5), np.cos(theta5)]])
CPD = np.array([[0], [0], [351.2]])
CtD = np.concatenate((CRD, CPD), axis=1)
CTD = np.concatenate((CtD, [[0, 0, 0, 1]]), axis=0)
DRE = np.array([[np.cos(theta6), -np.sin(theta6), 0], [np.sin(theta6), np.cos(theta6), 0], [0, 0, 1]])
DPE = np.array([[162.4], [0], [0]])
DtE = np.concatenate((DRE, DPE), axis=1)
DTE = np.concatenate((DtE, [[0, 0, 0, 1]]), axis=0)
ERF = np.array([[1, 0, 0], [0, np.cos(theta7), -np.sin(theta7)], [0, np.sin(theta7), np.cos(theta7)]])
EPF = np.array([[0], [0], [162.4]])
EtF = np.concatenate((ERF, EPF), axis=1)
ETF = np.concatenate((EtF, [[0, 0, 0, 1]]), axis=0)
PO = np.array([[0], [0], [0], [1]])
pOF = np.dot(WTA, np.dot(ATB, np.dot(BTC, np.dot(CTD, np.dot(DTE, np.dot(ETF, PO)))))) + [[0.0], [0.0],
[1100.0], [0.0]]
CurrentX = pOF[0][0] / 1000
print('CurrentX: ')
print(CurrentX)
print('ballPos: ')
print(ballPos[0])
CurrentY = pOF[1][0] / 1000
CurrentZ = pOF[2][0] / 1000
for i in range(step):
delta_p = np.array([[(ballPos[0] - CurrentX) * 1000 / step], [(ballPos[1] - CurrentY) * 1000 / step], [(1.24 - CurrentZ) * 1000 / step], [0.0 / 240.], [0.0 / 240.], [0.0 / 240.]])
Ja = Jacobian.jacobian(theta[0], theta[1], theta[2], theta[3], theta[4], theta[5])
Ja = np.array(Ja, dtype='float')
Jainv = np.linalg.inv(Ja)
delta_theta = np.dot(Jainv, delta_p)
theta = [theta[0] + delta_theta[0][0],
theta[1] + delta_theta[1][0],
theta[2] + delta_theta[2][0],
theta[3] + delta_theta[3][0],
theta[4] + delta_theta[4][0],
theta[5] + delta_theta[5][0]
]
traj.append([theta[0], theta[1], theta[2], 0, theta[3], theta[4], theta[5], 0.0, 0.0])
# grasp the ball
AnglePara = 0.65
for i in range(100):
traj.append([theta[0], theta[1], theta[2], 0, theta[3], theta[4], theta[5], pi / 2 * AnglePara, pi / 2 * AnglePara])
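# lift the ball: command a constant upward Cartesian velocity while keeping the
# gripper joints closed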
for i in range(1000):
delta_p = np.array([[0], [0], [100 / 240], [0.0 / 240.], [0.0 / 240.], [0.0 / 240.]])
Ja = Jacobian.jacobian(theta[0], theta[1], theta[2], theta[3], theta[4], theta[5])
Ja = np.array(Ja, dtype='float')
Jainv = np.linalg.inv(Ja)
delta_theta = np.dot(Jainv, delta_p)
theta = [theta[0] + delta_theta[0][0],
theta[1] + delta_theta[1][0],
theta[2] + delta_theta[2][0],
theta[3] + delta_theta[3][0],
theta[4] + delta_theta[4][0],
theta[5] + delta_theta[5][0]
]
traj.append([theta[0], theta[1], theta[2], 0, theta[3], theta[4], theta[5], pi / 2 * AnglePara, pi / 2 * AnglePara])
return traj
def addDebugItems(robotId):
# add any debug items you like
p.addUserDebugLine([0, 0, 0], [1, 0, 0], lineColorRGB=[
0.5, 0.5, 0.5], parentObjectUniqueId=robotId, parentLinkIndex=6)
pass
| 45.503623 | 191 | 0.490963 | 1,893 | 12,559 | 3.221342 | 0.092446 | 0.049524 | 0.029518 | 0.019023 | 0.798623 | 0.788291 | 0.782716 | 0.782224 | 0.782224 | 0.782224 | 0 | 0.106729 | 0.299467 | 12,559 | 275 | 192 | 45.669091 | 0.586383 | 0.06346 | 0 | 0.812785 | 0 | 0 | 0.011069 | 0.002895 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0.004566 | 0.013699 | 0 | 0.03653 | 0.054795 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d754a6d621ae31faaddae4c3d9bcea8f1fa9210 | 98 | py | Python | tests/test_initialize.py | bos-lab/piret | 23ecdf80331d72612bb2f48ab3207117673c373d | [
"BSD-3-Clause"
] | 2 | 2017-05-04T05:25:43.000Z | 2017-08-02T19:20:53.000Z | tests/test_initialize.py | bos-lab/piret | 23ecdf80331d72612bb2f48ab3207117673c373d | [
"BSD-3-Clause"
] | 7 | 2018-01-24T16:39:47.000Z | 2018-04-11T16:49:42.000Z | tests/test_initialize.py | bos-lab/piret | 23ecdf80331d72612bb2f48ab3207117673c373d | [
"BSD-3-Clause"
] | 4 | 2017-08-02T19:20:57.000Z | 2018-01-10T00:31:31.000Z | #! /usr/bin/env python
from piret import initialize
def test_initialize():
assert initialize | 16.333333 | 28 | 0.755102 | 13 | 98 | 5.615385 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 98 | 6 | 29 | 16.333333 | 0.890244 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
537e7aaa2404beb498874165d39ce1998b7e2c9f | 174 | py | Python | gallery/email_info.py | glpinto10/multimedia-gallery | 95e33ac85f872442c6de193fdf8bf58e57833085 | [
"MIT"
] | 1 | 2020-12-15T02:39:41.000Z | 2020-12-15T02:39:41.000Z | gallery/email_info.py | cinemafactory2/multimedia-gallery | af5a05a1bdecb7d5c7a20c06f371a3e5c542f076 | [
"MIT"
] | 1 | 2019-02-09T23:37:56.000Z | 2019-02-09T23:37:56.000Z | gallery/email_info.py | cinemafactory2/multimedia-gallery | af5a05a1bdecb7d5c7a20c06f371a3e5c542f076 | [
"MIT"
] | 2 | 2019-02-02T21:51:00.000Z | 2019-02-10T00:15:39.000Z | #Info email
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'info.multimedia.gallery@gmail.com'
EMAIL_HOST_PASSWORD = 'tVxT/_yk4&3fYG..'
EMAIL_PORT = 587 | 29 | 53 | 0.781609 | 28 | 174 | 4.535714 | 0.642857 | 0.212598 | 0.204724 | 0.267717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031646 | 0.091954 | 174 | 6 | 54 | 29 | 0.772152 | 0.057471 | 0 | 0 | 0 | 0 | 0.384146 | 0.20122 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
539e44c3fe26e6378a9099ddcc68a69bb61154e4 | 41 | py | Python | test/run/t521.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/run/t521.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/run/t521.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | x = (1,2,3)
print hash(x), type(hash(x))
| 13.666667 | 28 | 0.560976 | 10 | 41 | 2.3 | 0.7 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.146341 | 41 | 2 | 29 | 20.5 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
53b2f3a444d1fdda965256906160179199bd07c9 | 242 | py | Python | torchinfo/__init__.py | kgarg8/torchinfo | 2d472402906b2da1a95fff249f2a2d89e5175fa1 | [
"MIT"
] | null | null | null | torchinfo/__init__.py | kgarg8/torchinfo | 2d472402906b2da1a95fff249f2a2d89e5175fa1 | [
"MIT"
] | null | null | null | torchinfo/__init__.py | kgarg8/torchinfo | 2d472402906b2da1a95fff249f2a2d89e5175fa1 | [
"MIT"
] | null | null | null | """ torchinfo """
from .formatting import ALL_COLUMN_SETTINGS, ALL_ROW_SETTINGS
from .model_statistics import ModelStatistics
from .torchinfo import summary
__all__ = ("ModelStatistics", "summary", "ALL_COLUMN_SETTINGS", "ALL_ROW_SETTINGS")
| 34.571429 | 83 | 0.805785 | 28 | 242 | 6.5 | 0.428571 | 0.098901 | 0.186813 | 0.21978 | 0.340659 | 0.340659 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095041 | 242 | 6 | 84 | 40.333333 | 0.83105 | 0.03719 | 0 | 0 | 0 | 0 | 0.253333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07029a1251dbe0b229b46aa6ab66a36fbc3baeba | 4,190 | py | Python | tests/cupy_tests/padding_tests/test_pad.py | wkentaro/cupy | 1d072d0b3cb2780c0874201c0222d46fa8e7797d | [
"BSD-3-Clause"
] | 1 | 2020-11-24T03:44:35.000Z | 2020-11-24T03:44:35.000Z | tests/cupy_tests/padding_tests/test_pad.py | wkentaro/cupy | 1d072d0b3cb2780c0874201c0222d46fa8e7797d | [
"BSD-3-Clause"
] | null | null | null | tests/cupy_tests/padding_tests/test_pad.py | wkentaro/cupy | 1d072d0b3cb2780c0874201c0222d46fa8e7797d | [
"BSD-3-Clause"
] | 1 | 2020-11-24T03:44:35.000Z | 2020-11-24T03:44:35.000Z | import unittest
import warnings
import numpy
from cupy import testing
@testing.parameterize(
{'array': numpy.arange(6).reshape([2, 3]), 'pad_width': 1,
'mode': 'constant'},
{'array': numpy.arange(6).reshape([2, 3]),
'pad_width': [1, 2], 'mode': 'constant'},
{'array': numpy.arange(6).reshape([2, 3]),
'pad_width': [[1, 2], [3, 4]], 'mode': 'constant'},
)
@testing.gpu
class TestPadDefault(unittest.TestCase):
_multiprocess_can_split_ = True
@testing.for_all_dtypes(no_bool=True)
@testing.numpy_cupy_array_equal()
def test_pad_default(self, xp, dtype):
array = xp.array(self.array, dtype=dtype)
# Older versions of NumPy (<1.12) can emit a ComplexWarning
def f():
return xp.pad(array, self.pad_width, mode=self.mode)
if xp is numpy:
with warnings.catch_warnings():
warnings.simplefilter('ignore', numpy.ComplexWarning)
return f()
else:
return f()
@testing.parameterize(
{'array': numpy.arange(6).reshape([2, 3]), 'pad_width': 1,
'mode': 'constant', 'constant_values': 3},
{'array': numpy.arange(6).reshape([2, 3]),
'pad_width': [1, 2], 'mode': 'constant',
'constant_values': [3, 4]},
{'array': numpy.arange(6).reshape([2, 3]),
'pad_width': [[1, 2], [3, 4]], 'mode': 'constant',
'constant_values': [[3, 4], [5, 6]]},
)
@testing.gpu
# Old numpy does not work with multi-dimensional constant_values
@testing.with_requires('numpy>=1.11.1')
class TestPad(unittest.TestCase):
_multiprocess_can_split_ = True
@testing.for_all_dtypes(no_bool=True)
@testing.numpy_cupy_array_equal()
def test_pad(self, xp, dtype):
array = xp.array(self.array, dtype=dtype)
# Older versions of NumPy (<1.12) can emit a ComplexWarning
def f():
return xp.pad(array, self.pad_width, mode=self.mode,
constant_values=self.constant_values)
if xp is numpy:
with warnings.catch_warnings():
warnings.simplefilter('ignore', numpy.ComplexWarning)
return f()
else:
return f()
@testing.gpu
class TestPadNumpybug(unittest.TestCase):
_multiprocess_can_split_ = True
@testing.with_requires('numpy>=1.11.2')
@testing.for_all_dtypes(no_bool=True, no_complex=True)
@testing.numpy_cupy_array_equal()
def test_pad_highdim_default(self, xp, dtype):
array = xp.arange(6, dtype=dtype).reshape([2, 3])
pad_width = [[1, 2], [3, 4]]
constant_values = [[1, 2], [3, 4]]
a = xp.pad(array, pad_width, mode='constant',
constant_values=constant_values)
return a
@testing.parameterize(
{'array': [], 'pad_width': 1, 'mode': 'constant', 'constant_values': 3},
{'array': 1, 'pad_width': 1, 'mode': 'constant', 'constant_values': 3},
{'array': [0, 1, 2, 3], 'pad_width': 1, 'mode': 'constant',
'constant_values': 3},
{'array': [0, 1, 2, 3], 'pad_width': [1, 2], 'mode': 'constant',
'constant_values': 3},
)
@testing.gpu
class TestPadSpecial(unittest.TestCase):
_multiprocess_can_split_ = True
@testing.numpy_cupy_array_equal()
def test_pad_special(self, xp):
a = xp.pad(self.array, self.pad_width, mode=self.mode,
constant_values=self.constant_values)
return a
@testing.parameterize(
{'array': [0, 1, 2, 3], 'pad_width': [-1, 1], 'mode': 'constant',
'constant_values': 3},
{'array': [0, 1, 2, 3], 'pad_width': [], 'mode': 'constant',
'constant_values': 3},
{'array': [0, 1, 2, 3], 'pad_width': [[3, 4], [5, 6]], 'mode': 'constant',
'constant_values': 3},
{'array': [0, 1, 2, 3], 'pad_width': [1], 'mode': 'constant',
'notallowedkeyword': 3},
)
@testing.gpu
@testing.with_requires('numpy>=1.11.1') # Old numpy fails differently
class TestPadFailure(unittest.TestCase):
_multiprocess_can_split_ = True
@testing.numpy_cupy_raises()
def test_pad_failure(self, xp):
a = xp.pad(self.array, self.pad_width, mode=self.mode,
constant_values=self.constant_values)
return a
| 31.742424 | 78 | 0.606444 | 545 | 4,190 | 4.480734 | 0.144954 | 0.06552 | 0.026618 | 0.053235 | 0.841523 | 0.841523 | 0.799345 | 0.719083 | 0.711712 | 0.615889 | 0 | 0.034801 | 0.22506 | 4,190 | 131 | 79 | 31.984733 | 0.717277 | 0.047255 | 0 | 0.558824 | 0 | 0 | 0.147981 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068627 | false | 0 | 0.039216 | 0.019608 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab2046fe69a24cd2c8cc223b956b951c8b80b773 | 39 | py | Python | evaml/logging/__init__.py | hillolkallol/evaML | dec7b2b97e25fa0c7c2df8356952417cf8f7051b | [
"MIT"
] | null | null | null | evaml/logging/__init__.py | hillolkallol/evaML | dec7b2b97e25fa0c7c2df8356952417cf8f7051b | [
"MIT"
] | 3 | 2021-03-29T20:46:58.000Z | 2021-03-29T21:00:07.000Z | evaml/logging/__init__.py | hillolkallol/evaML | dec7b2b97e25fa0c7c2df8356952417cf8f7051b | [
"MIT"
] | null | null | null | from evaml.logging.logger import logger | 39 | 39 | 0.871795 | 6 | 39 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab26858bdcd04218b32fd4fecbc7e5a2407c871b | 463 | py | Python | orchestra/contrib/orchestration/signals.py | RubenPX/django-orchestra | 5ab4779e1ae12ec99569d682601b7810587ed381 | [
"Unlicense"
] | 68 | 2015-02-09T10:28:44.000Z | 2022-03-12T11:08:36.000Z | orchestra/contrib/orchestration/signals.py | RubenPX/django-orchestra | 5ab4779e1ae12ec99569d682601b7810587ed381 | [
"Unlicense"
] | 17 | 2015-05-01T18:10:03.000Z | 2021-03-19T21:52:55.000Z | orchestra/contrib/orchestration/signals.py | RubenPX/django-orchestra | 5ab4779e1ae12ec99569d682601b7810587ed381 | [
"Unlicense"
] | 29 | 2015-03-31T04:51:03.000Z | 2022-02-17T02:58:50.000Z | import django.dispatch
pre_action = django.dispatch.Signal(providing_args=['backend', 'instance', 'action'])
post_action = django.dispatch.Signal(providing_args=['backend', 'instance', 'action'])
pre_prepare = django.dispatch.Signal(providing_args=['backend'])
post_prepare = django.dispatch.Signal(providing_args=['backend'])
pre_commit = django.dispatch.Signal(providing_args=['backend'])
post_commit = django.dispatch.Signal(providing_args=['backend'])
| 30.866667 | 86 | 0.775378 | 55 | 463 | 6.309091 | 0.236364 | 0.282421 | 0.345821 | 0.501441 | 0.904899 | 0.904899 | 0.904899 | 0.345821 | 0.345821 | 0 | 0 | 0 | 0.066955 | 463 | 14 | 87 | 33.071429 | 0.803241 | 0 | 0 | 0 | 0 | 0 | 0.151188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab4924b7f24f20882e6ea4056cba01142573c5f1 | 17,949 | py | Python | pyscf/pbc/dft/test/test_multigrid.py | willwheelera/pyscf | 1de7f6fb8403bb0769a05eade2c2e7aa4f8a160e | [
"Apache-2.0"
] | null | null | null | pyscf/pbc/dft/test/test_multigrid.py | willwheelera/pyscf | 1de7f6fb8403bb0769a05eade2c2e7aa4f8a160e | [
"Apache-2.0"
] | null | null | null | pyscf/pbc/dft/test/test_multigrid.py | willwheelera/pyscf | 1de7f6fb8403bb0769a05eade2c2e7aa4f8a160e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright 2014-2018 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Qiming Sun <osirpt.sun@gmail.com>
#
import unittest
import numpy
from pyscf import lib
from pyscf.pbc import gto, scf, dft, df
from pyscf.pbc import tools
from pyscf.pbc.dft import gen_grid
from pyscf.pbc.dft import multigrid
multigrid.R_RATIO_SUBLOOP = 0.6
def setUpModule():
global cell_orth, cell_nonorth, cell_he, mydf
global kpts, nao, dm, dm1, vj_uks_orth, he_nao, dm_he
numpy.random.seed(2)
cell_orth = gto.M(
verbose = 7,
output = '/dev/null',
a = numpy.eye(3)*3.5668,
atom = '''C 0. 0. 0.
C 1.8 1.8 1.8 ''',
basis = 'gth-dzv',
pseudo = 'gth-pade',
precision = 1e-9,
mesh = [48] * 3,
)
cell_nonorth = gto.M(
a = numpy.eye(3)*3.5668 + numpy.random.random((3,3)),
atom = '''C 0. 0. 0.
C 0.8917 0.8917 0.8917''',
basis = 'gth-dzv',
pseudo = 'gth-pade',
precision = 1e-9,
mesh = [44,43,42],
)
cell_he = gto.M(atom='He 0 0 0',
basis=[[0, ( 1, 1, .1), (.5, .1, 1)],
[1, (.8, 1)]],
unit='B',
precision = 1e-9,
mesh=[18]*3,
a=numpy.eye(3)*5)
kptsa = numpy.random.random((2,3))
kpts = kptsa.copy()
kpts[1] = -kpts[0]
nao = cell_orth.nao_nr()
dm = numpy.random.random((len(kpts),nao,nao)) * .2
dm1 = dm + numpy.eye(nao)
dm = dm1 + dm1.transpose(0,2,1)
mydf = df.FFTDF(cell_orth)
vj_uks_orth = mydf.get_jk(dm1, with_k=False)[0]
he_nao = cell_he.nao
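# symmetrize the random matrices and add the identity so dm_he behaves like a
# Hermitian, density-matrix-style test input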
dm_he = numpy.random.random((len(kpts), he_nao, he_nao))
dm_he = dm_he + dm_he.transpose(0,2,1)
dm_he = dm_he * .2 + numpy.eye(he_nao)
def tearDownModule():
global cell_orth, cell_nonorth, cell_he, mydf
del cell_orth, cell_nonorth, cell_he, mydf
class KnownValues(unittest.TestCase):
def test_orth_get_pp(self):
ref = df.FFTDF(cell_orth).get_pp()
out = multigrid.MultiGridFFTDF(cell_orth).get_pp()
self.assertAlmostEqual(abs(ref-out).max(), 0, 9)
def test_nonorth_get_pp(self):
ref = df.FFTDF(cell_nonorth).get_pp()
out = multigrid.MultiGridFFTDF(cell_nonorth).get_pp()
self.assertAlmostEqual(abs(ref-out).max(), 0, 9)
def test_orth_get_nuc_kpts(self):
ref = df.FFTDF(cell_orth).get_nuc(kpts)
out = multigrid.MultiGridFFTDF(cell_orth).get_nuc(kpts)
self.assertAlmostEqual(abs(ref-out).max(), 0, 8)
def test_orth_get_j_kpts(self):
ref = df.FFTDF(cell_orth).get_jk(dm, kpts=kpts, with_k=False)[0]
out = multigrid.MultiGridFFTDF(cell_orth).get_jk(dm, kpts=kpts)[0]
self.assertAlmostEqual(abs(ref-out).max(), 0, 9)
# mydf = multigrid.MultiGridFFTDF(cell_orth)
# self.assertRaises(ValueError, mydf.get_jk, dm1, hermi=0, kpts=kpts, with_k=False)
def test_nonorth_get_j_kpts(self):
ref = df.FFTDF(cell_nonorth).get_jk(dm, kpts=kpts, with_k=False)[0]
out = multigrid.MultiGridFFTDF(cell_nonorth, kpts=kpts).get_jk(dm)[0]
self.assertAlmostEqual(abs(ref-out).max(), 0, 9)
def test_nonorth_get_j(self):
ref = df.FFTDF(cell_nonorth).get_jk(dm[0], with_k=False)[0]
out = multigrid.MultiGridFFTDF(cell_nonorth).get_jk(dm)[0]
self.assertAlmostEqual(abs(ref-out).max(), 0, 9)
def test_orth_rks_lda_kpts(self):
xc = 'lda,'
mydf = df.FFTDF(cell_orth)
ni = dft.numint.KNumInt()
n, exc0, ref = ni.nr_rks(cell_orth, mydf.grids, xc, dm, 0, kpts=kpts)
mydf = multigrid.MultiGridFFTDF(cell_orth)
n, exc1, vxc = multigrid.nr_rks(mydf, xc, dm, kpts=kpts)
self.assertAlmostEqual(float(abs(ref-vxc).max()), 0, 9)
self.assertAlmostEqual(abs(exc0-exc1).max(), 0, 8)
def test_multigrid_kuks(self):
mf = dft.KUKS(cell_he)
mf.xc = 'lda,'
ref = mf.get_veff(cell_he, numpy.array((dm_he,dm_he)), kpts=kpts)
out = multigrid.multigrid(mf).get_veff(cell_he, (dm_he,dm_he), kpts=kpts)
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 9)
def test_multigrid_krks(self):
mf = dft.KRKS(cell_he)
mf.xc = 'lda,'
ref = mf.get_veff(cell_he, dm_he, kpts=kpts)
out = multigrid.multigrid(mf).get_veff(cell_he, dm_he, kpts=kpts)
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 9)
def test_multigrid_kroks(self):
mf = dft.KROKS(cell_he)
mf.xc = 'lda,'
nao = cell_he.nao
mo = dm_he
mo_occ = numpy.ones((2,nao))
dm1 = numpy.einsum('kpi,ki,kqi->kpq', mo, mo_occ, mo)
dm1 = lib.tag_array(numpy.array([dm1,dm1]), mo_coeff=mo,
mo_occ=mo_occ*2)
ref = mf.get_veff(cell_he, dm1, kpts=kpts)
out = multigrid.multigrid(mf).get_veff(cell_he, dm1, kpts=kpts)
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 9)
def test_multigrid_uks(self):
mf = dft.UKS(cell_he)
mf.xc = 'lda,'
ref = mf.get_veff(cell_he, numpy.array((dm_he[0],dm_he[0])))
out = multigrid.multigrid(mf).get_veff(cell_he, (dm_he[0], dm_he[0]))
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 9)
def test_multigrid_rks(self):
mf = dft.RKS(cell_he)
mf.xc = 'lda,'
ref = mf.get_veff(cell_he, dm_he[0])
out = multigrid.multigrid(mf).get_veff(cell_he, dm_he[0])
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 9)
def test_multigrid_roks(self):
mf = dft.ROKS(cell_he)
mf.xc = 'lda,'
mo = dm_he[0]
nao = cell_he.nao
mo_occ = numpy.ones(nao)
dm1 = numpy.einsum('pi,i,qi->pq', mo, mo_occ, mo)
dm1 = lib.tag_array(numpy.array([dm1,dm1]), mo_coeff=mo,
mo_occ=mo_occ*2)
ref = mf.get_veff(cell_he, dm1)
out = multigrid.multigrid(mf).get_veff(cell_he, dm1)
self.assertAlmostEqual(float(abs(ref-out).max()), 0, 9)
self.assertAlmostEqual(abs(ref.exc-out.exc).max(), 0, 9)
self.assertAlmostEqual(abs(ref.ecoul-out.ecoul).max(), 0, 8)
def test_orth_rks_gga_kpts(self):
xc = 'b88,'
mydf = df.FFTDF(cell_orth)
ni = dft.numint.KNumInt()
n, exc0, ref = ni.nr_rks(cell_orth, mydf.grids, xc, dm, hermi=1, kpts=kpts)
ref += mydf.get_jk(dm, hermi=1, with_k=False, kpts=kpts)[0]
mydf = multigrid.MultiGridFFTDF(cell_orth)
n, exc1, vxc = multigrid.nr_rks(mydf, xc, dm, hermi=1, kpts=kpts, with_j=True)
self.assertAlmostEqual(float(abs(ref-vxc).max()), 0, 9)
self.assertAlmostEqual(abs(exc0-exc1).max(), 0, 8)
def test_orth_uks_lda_hermi0(self):
xc = 'lda,'
mydf = df.FFTDF(cell_orth)
ni = dft.numint.NumInt()
n, exc0, ref = ni.nr_uks(cell_orth, mydf.grids, xc, dm1, 0)
ref += vj_uks_orth[0] + vj_uks_orth[1]
mydf = multigrid.MultiGridFFTDF(cell_orth)
n, exc1, vxc = multigrid.nr_uks(mydf, xc, dm1, hermi=0, with_j=True)
self.assertAlmostEqual(float(abs(ref-vxc).max()), 0, 8)
self.assertAlmostEqual(abs(exc0-exc1).max(), 0, 7)
def test_orth_uks_gga_hermi0(self):
xc = 'b88,'
mydf = df.FFTDF(cell_orth)
ni = dft.numint.NumInt()
n, exc0, ref = ni.nr_uks(cell_orth, mydf.grids, xc, dm1, 0)
ref += vj_uks_orth[0] + vj_uks_orth[1]
mydf = multigrid.MultiGridFFTDF(cell_orth)
n, exc1, vxc = multigrid.nr_uks(mydf, xc, dm1, hermi=0, with_j=True)
self.assertAlmostEqual(float(abs(ref-vxc).max()), 0, 8)
self.assertAlmostEqual(abs(exc0-exc1).max(), 0, 8)
def test_eval_rhoG_orth_kpts(self):
numpy.random.seed(9)
dm = numpy.random.random(dm1.shape) + numpy.random.random(dm1.shape) * 1j
mydf = multigrid.MultiGridFFTDF(cell_orth)
rhoG = multigrid._eval_rhoG(mydf, dm, hermi=0, kpts=kpts, deriv=0,
rhog_high_order=True)
self.assertTrue(rhoG.dtype == numpy.complex128)
mydf = df.FFTDF(cell_orth)
ni = dft.numint.KNumInt()
ao_kpts = ni.eval_ao(cell_orth, mydf.grids.coords, kpts, deriv=0)
ref = ni.eval_rho(cell_orth, ao_kpts, dm, hermi=0, xctype='LDA')
rhoR = tools.ifft(rhoG[0], cell_orth.mesh).real
rhoR *= numpy.prod(cell_orth.mesh)/cell_orth.vol
self.assertAlmostEqual(abs(rhoR-ref).max(), 0, 8)
def test_eval_rhoG_orth_gga(self):
mydf = multigrid.MultiGridFFTDF(cell_orth)
rhoG = multigrid._eval_rhoG(mydf, dm, hermi=1, kpts=kpts, deriv=1,
rhog_high_order=True)
mydf = df.FFTDF(cell_orth)
ni = dft.numint.KNumInt()
ao_kpts = ni.eval_ao(cell_orth, mydf.grids.coords, kpts, deriv=1)
ref = ni.eval_rho(cell_orth, ao_kpts, dm, xctype='GGA')
rhoR = tools.ifft(rhoG[0], cell_orth.mesh).real
rhoR *= numpy.prod(cell_orth.mesh)/cell_orth.vol
self.assertAlmostEqual(abs(rhoR-ref).max(), 0, 8)
def test_eval_rhoG_nonorth_gga(self):
mydf = multigrid.MultiGridFFTDF(cell_nonorth)
rhoG = multigrid._eval_rhoG(mydf, dm, hermi=1, kpts=kpts, deriv=1,
rhog_high_order=True)
mydf = df.FFTDF(cell_nonorth)
ni = dft.numint.KNumInt()
ao_kpts = ni.eval_ao(cell_nonorth, mydf.grids.coords, kpts, deriv=1)
ref = ni.eval_rho(cell_nonorth, ao_kpts, dm, xctype='GGA')
rhoR = tools.ifft(rhoG[0], cell_nonorth.mesh).real
rhoR *= numpy.prod(cell_nonorth.mesh)/cell_nonorth.vol
self.assertAlmostEqual(abs(rhoR-ref).max(), 0, 7)
def test_gen_rhf_response(self):
numpy.random.seed(9)
dm1 = numpy.random.random(dm_he.shape)
dm1 = dm1 + dm1.transpose(0,2,1)
dm1[1] = dm1[0]
mydf = df.FFTDF(cell_he)
ni = dft.numint.KNumInt()
mf = dft.KRKS(cell_he)
mf.with_df = multigrid.MultiGridFFTDF(cell_he)
mf.kpts = kpts
mf.xc = 'lda,'
ref = dft.numint.nr_rks_fxc(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1,
kpts=kpts)
vj = mydf.get_jk(dm1, with_k=False, kpts=kpts)[0]
ref += vj
v = multigrid._gen_rhf_response(mf, dm_he)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
mf.xc = 'b88,'
ref = dft.numint.nr_rks_fxc(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1,
kpts=kpts)
ref += vj
v = multigrid._gen_rhf_response(mf, dm_he, hermi=1)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 6)
def test_nr_rks_fxc(self):
numpy.random.seed(9)
dm1 = numpy.random.random(dm_he.shape)
dm1 = dm1 + dm1.transpose(0,2,1)
mydf = df.FFTDF(cell_he)
ni = dft.numint.NumInt()
mg_df = multigrid.MultiGridFFTDF(cell_he)
xc = 'lda,'
ref = dft.numint.nr_rks_fxc(ni, cell_he, mydf.grids, xc, dm_he[0], dm1)
v = multigrid.nr_rks_fxc(mg_df, xc, dm_he[0], dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
xc = 'b88,'
ref = dft.numint.nr_rks_fxc(ni, cell_he, mydf.grids, xc, dm_he, dm1)
v = multigrid.nr_rks_fxc(mg_df, xc, dm_he, dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 6)
def test_nr_rks_fxc_st(self):
numpy.random.seed(9)
dm1 = numpy.random.random(dm_he.shape)
dm1 = dm1 + dm1.transpose(0,2,1)
dm1[1] = dm1[0]
mydf = df.FFTDF(cell_he)
ni = dft.numint.KNumInt()
mg_df = multigrid.MultiGridFFTDF(cell_he)
mf = dft.KRKS(cell_he)
mf.with_df = mg_df
mf.kpts = kpts
xc = 'lda,'
ref = dft.numint.nr_rks_fxc_st(ni, cell_he, mydf.grids, xc, dm_he, dm1,
singlet=True, kpts=kpts)
v = multigrid.nr_rks_fxc_st(mg_df, xc, dm_he, dm1, singlet=True, kpts=kpts)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
mf.xc = 'b88,'
ref = dft.numint.nr_rks_fxc_st(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1,
singlet=True, kpts=kpts)
ref += mydf.get_jk(dm1, with_k=False, kpts=kpts)[0]
v = multigrid._gen_rhf_response(mf, dm_he, singlet=True, hermi=1)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 6)
mf.xc = 'lda,'
ref = dft.numint.nr_rks_fxc_st(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1,
singlet=False, kpts=kpts)
v = multigrid._gen_rhf_response(mf, dm_he, singlet=False, hermi=1)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
xc = 'b88,'
ref = dft.numint.nr_rks_fxc_st(ni, cell_he, mydf.grids, xc, dm_he, dm1,
singlet=False, kpts=kpts)
v = multigrid.nr_rks_fxc_st(mg_df, xc, dm_he, dm1, singlet=False, kpts=kpts)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 6)
def test_gen_uhf_response(self):
numpy.random.seed(9)
dm1 = numpy.random.random(dm_he.shape)
dm1 = dm1 + dm1.transpose(0,2,1)
mydf = df.FFTDF(cell_he)
ni = dft.numint.NumInt()
mf = dft.UKS(cell_he)
mf.with_df = multigrid.MultiGridFFTDF(cell_he)
mf.xc = 'lda,'
ref = dft.numint.nr_uks_fxc(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1)
vj = mydf.get_jk(dm1, with_k=False)[0]
ref += vj[0] + vj[1]
v = multigrid._gen_uhf_response(mf, dm_he, with_j=True)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
mf.xc = 'b88,'
ref = dft.numint.nr_uks_fxc(ni, cell_he, mydf.grids, mf.xc, dm_he, dm1)
ref += vj[0] + vj[1]
v = multigrid._gen_uhf_response(mf, dm_he, with_j=True)(dm1)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 7)
def test_nr_uks_fxc(self):
numpy.random.seed(9)
dm1 = numpy.random.random(dm_he.shape)
dm1 = dm1 + dm1.transpose(0,2,1)
mydf = df.FFTDF(cell_he)
ni = dft.numint.KNumInt()
mg_df = multigrid.MultiGridFFTDF(cell_he)
xc = 'lda,'
ref = dft.numint.nr_uks_fxc(ni, cell_he, mydf.grids, xc,
(dm_he, dm_he), (dm1, dm1), kpts=kpts)
v = multigrid.nr_uks_fxc(mg_df, xc, (dm_he, dm_he), (dm1, dm1), kpts=kpts)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 9)
xc = 'b88,'
ref = dft.numint.nr_uks_fxc(ni, cell_he, mydf.grids, xc,
(dm_he, dm_he), (dm1, dm1), kpts=kpts)
v = multigrid.nr_uks_fxc(mg_df, xc, (dm_he, dm_he), (dm1, dm1), kpts=kpts)
self.assertEqual(ref.dtype, v.dtype)
self.assertEqual(ref.shape, v.shape)
self.assertAlmostEqual(abs(v-ref).max(), 0, 8)
def test_rcut_vs_ke_cut(self):
xc = 'lda,'
with lib.temporary_env(multigrid, TASKS_TYPE='rcut'):
mg_df = multigrid.MultiGridFFTDF(cell_orth)
n1, exc1, v1 = multigrid.nr_rks(mg_df, xc, dm1, kpts=kpts)
self.assertEqual(len(mg_df.tasks), 3)
with lib.temporary_env(multigrid, TASKS_TYPE='ke_cut'):
mg_df = multigrid.MultiGridFFTDF(cell_orth)
n2, exc2, v2 = multigrid.nr_rks(mg_df, xc, dm1, kpts=kpts)
self.assertEqual(len(mg_df.tasks), 6)
self.assertAlmostEqual(n1, n2, 8)
self.assertAlmostEqual(exc1, exc2, 8)
self.assertAlmostEqual(abs(v1-v2).max(), 0, 8)
if __name__ == '__main__':
print("Full Tests for multigrid")
unittest.main()
| 41.357143 | 90 | 0.599421 | 2,701 | 17,949 | 3.818215 | 0.094039 | 0.020944 | 0.088432 | 0.047125 | 0.828372 | 0.797828 | 0.751188 | 0.729468 | 0.700572 | 0.672452 | 0 | 0.031116 | 0.25695 | 17,949 | 433 | 91 | 41.452656 | 0.742146 | 0.043791 | 0 | 0.576177 | 0 | 0 | 0.02094 | 0 | 0 | 0 | 0 | 0 | 0.213296 | 1 | 0.074792 | false | 0 | 0.019391 | 0 | 0.096953 | 0.00277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab63864ff2de6af656ead09d70797f9b27c5569c | 33 | py | Python | tnp/__init__.py | daico007/TNP | 1877bc86f5a40f1503022491f5dd9495d23332ad | [
"MIT"
] | null | null | null | tnp/__init__.py | daico007/TNP | 1877bc86f5a40f1503022491f5dd9495d23332ad | [
"MIT"
] | null | null | null | tnp/__init__.py | daico007/TNP | 1877bc86f5a40f1503022491f5dd9495d23332ad | [
"MIT"
] | null | null | null | from .silicatnp import SilicaTNP
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db5170b93a7516bfe1e27c019fa0a5bb53c9bcac | 5,907 | py | Python | var/spack/repos/builtin/packages/nano/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | var/spack/repos/builtin/packages/nano/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 8 | 2021-11-09T20:28:40.000Z | 2022-03-15T03:26:33.000Z | var/spack/repos/builtin/packages/nano/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2 | 2019-02-08T20:37:20.000Z | 2019-03-31T15:19:26.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Nano(AutotoolsPackage):
"""Tiny little text editor"""
homepage = "https://www.nano-editor.org"
url = "https://www.nano-editor.org/dist/v6/nano-6.3.tar.xz"
list_url = "https://www.nano-editor.org/dist/"
list_depth = 1
# 6.x
version('6.3', sha256='eb532da4985672730b500f685dbaab885a466d08fbbf7415832b95805e6f8687')
version('6.2', sha256='2bca1804bead6aaf4ad791f756e4749bb55ed860eec105a97fba864bc6a77cb3')
version('6.1', sha256='3d57ec893fbfded12665b7f0d563d74431fc43abeaccacedea23b66af704db40')
version('6.0', sha256='93ac8cb68b4ad10e0aaeb80a2dd15c5bb89eb665a4844f7ad01c67efcb169ea2')
# 5.x
version('5.9', sha256='757db8cda4bb2873599e47783af463e3b547a627b0cabb30ea7bf71fb4c24937')
version('5.8', sha256='e43b63db2f78336e2aa123e8d015dbabc1720a15361714bfd4b1bb4e5e87768c')
version('5.7', sha256='d4b181cc2ec11def3711b4649e34f2be7a668e70ab506860514031d069cccafa')
version('5.6', sha256='fce183e4a7034d07d219c79aa2f579005d1fd49f156db6e50f53543a87637a32')
version('5.5', sha256='390b81bf9b41ff736db997aede4d1f60b4453fbd75a519a4ddb645f6fd687e4a')
version('5.4', sha256='fe993408b22286355809ce48ebecc4444d19af8203ed4959d269969112ed86e9')
version('5.3', sha256='c5c1cbcf622d9a96b6030d66409ed12b204e8bc01ef5e6554ebbe6fb1d734352')
version('5.2', sha256='32c2da43e1ae9a5e43437d8c6e1ec0388af870c7762c479e5bffb5f292bda7e1')
version('5.1', sha256='9efc46f341404d60095d16fc4f0419fc84b6e4eaeaf6ebce605d0465d92a6ee6')
version('5.0', sha256='7c0d94be69cd066f20df2868a2da02f7b1d416ce8d47c0850a8bd270897caa36')
# 4.x
version('4.9', sha256='0e399729d105cb1a587b4140db5cf1b23215a0886a42b215efa98137164233a6')
version('4.8', sha256='c348f61c68ab1d573b308398212a09cd68c60fbee20f01a5bd4b50071a258e63')
version('4.7', sha256='58c0e197de5339ca3cad3ef42b65626d612ddb0b270e730f02e6ab3785c736f5')
version('4.6', sha256='9bac3a4153774fd921dd3eb291986d43985466b081165b5ac5262b37b79628e9')
version('4.5', sha256='ded5c38f5ecd9de2b624e0db8013a375c169d3fbbd49575967b868847df8f533')
version('4.4', sha256='2af222e0354848ffaa3af31b5cd0a77917e9cb7742cd073d762f3c32f0f582c7')
version('4.3', sha256='00d3ad1a287a85b4bf83e5f06cedd0a9f880413682bebd52b4b1e2af8cfc0d81')
version('4.2', sha256='1143defce62e391b241252ffdb6e5c1ded56cfe26d46ee81b796abe0ccc45df9')
version('4.1', sha256='86bde596a038d6fde619b49d785c0ebf0b3eaa7001a39dbe9316bd5392d221d0')
version('4.0', sha256='1e2fcfea35784624a7d86785768b772d58bb3995d1aec9176a27a113b1e9bac3')
# 3.x
version('3.2', sha256='d12773af3589994b2e4982c5792b07c6240da5b86c5aef2103ab13b401fe6349')
version('3.1', sha256='14c02ca40a5bc61c580ce2f9cb7f9fc72d5ccc9da17ad044f78f6fb3fdb7719e')
version('3.0', sha256='e0a5bca354514e64762c987c200a8758b05e7bcced3b00b3e48ea0a2d383c8a0')
# 2.9.x
version('2.9.8', sha256='c2deac31ba4d3fd27a42fafcc47ccf499296cc69a422bbecab63f2933ea85488')
version('2.9.7', sha256='b64ab017305b1777e97b5b9b07b31db8aeebfc3e8719f61e8da1cf3866d344bd')
version('2.9.6', sha256='a373507ebb4e9307a8202fbc19b5d29718025c8ec773669349211c362545d4c6')
version('2.9.5', sha256='7b8d181cb57f42fa86a380bb9ad46abab859b60383607f731b65a9077f4b4e19')
version('2.9.4', sha256='2cf9726e735f5c962af63d27c2faaead5936e45adec983659fb9e4af88ffa35a')
version('2.9.3', sha256='7783bcfd4b2d5dc0bf64d4bd07b1a19e7ba3c91da881a4249772a36b972d4012')
version('2.9.2', sha256='4eccb7451b5729ce8abae8f9a5679f32e41ae58df73ea86b850ec45b10a83d55')
version('2.9.1', sha256='6316d52d0d26af3e79a13dcb4db1c7a4aeac61b37fd9381e801a4189a2ecba7c')
version('2.9.0', sha256='d2d30c39caef53aba1ec1b4baff4186d4496f35d2411b0848242a5f2e27e129e')
# 2.8.x
version('2.8.7', sha256='fbe31746958698d73c6726ee48ad8b0612697157961a2e9aaa83b4aa53d1165a')
version('2.8.6', sha256='9a46962a3ae59db922467a095217ed23280b42d80640f932f3a59ba2a6a85776')
version('2.8.5', sha256='cb43bf11990b2839446229b0c21ed7abef67c2df861f250cc874553ca27d89c2')
version('2.8.4', sha256='c7cf264f0f3e4af43ecdbc4ec72c3b1e831c69a1a5f6512d5b0c109e6bac7b11')
version('2.8.3', sha256='62b8e55b934091edbb41e948eac3d6769cc13d18b837c38faf7226c0820b0f55')
version('2.8.2', sha256='023e8a7b38b2420d5476d7b2b4d8524d7de55c0853b4dc0b02e4a4adf7ecb9e0')
version('2.8.1', sha256='e935a8bb373345c833dff3a304c6d392775d206b94c802d9285ae80ac6b66d0b')
version('2.8.0', sha256='15c1bcf4d8888d3b56f68a0b75871cc349b81a9094f8a42d73010ffc26c85dc2')
# 2.7.x
version('2.7.5', sha256='a64d24e6bc4fc448376d038f9a755af77f8e748c9051b6f45bf85e783a7e67e4')
version('2.7.4', sha256='752170643039e2c95a433de357f0c70a8c4c4c561a90a7e7259a63e225b659b9')
version('2.7.3', sha256='d926ef5060d23bafec75dab9328bb9b9df9a08e58c56b9061d686f5698770bfc')
version('2.7.2', sha256='77016f73c686857ce8a3ec217832febb6e636122762d47ce3c6cbef6f7e390c2')
version('2.7.1', sha256='df5cbe69831d7394c0a32fb27373ab313335ea4dc586d6f4be4081eb1de857cd')
version('2.7.0', sha256='f86af39514ae74e20bef3c29cd01d1090a9aca772a70e9c9f9e70c3d14b39521')
# 2.6.x
version('2.6.3', sha256='69ecbfbaa845800f43c27d6190ca87d277f3278f81e9c55ee569181b572b7519')
version('2.6.2', sha256='22f79cc635458e0c0d110d211576f1edc03b112a62d73b914826a46547a6ac27')
version('2.6.1', sha256='45721fa6d6128068895ad71a6967ff7398d11b064b3f888e5073c97a2b6e9a81')
depends_on('ncurses')
def url_for_version(self, version):
url = "https://www.nano-editor.org/dist/v{0}/nano-{1}.tar.xz"
subdir = version.up_to(2)
major = version.up_to(1)
if int(str(major)) > 2:
subdir = major
return url.format(subdir, version)
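    # Usage sketch (illustrative, not part of the original recipe): `spack install nano@6.3`
    # resolves its download URL through url_for_version above, giving
    # https://www.nano-editor.org/dist/v6/nano-6.3.tar.xz (major > 2, so only the major
    # number names the directory), whereas 2.9.3 maps to
    # https://www.nano-editor.org/dist/v2.9/nano-2.9.3.tar.xz.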
| 67.125 | 95 | 0.803115 | 425 | 5,907 | 11.145882 | 0.301176 | 0.04391 | 0.017099 | 0.015199 | 0.022166 | 0.017733 | 0.017733 | 0 | 0 | 0 | 0 | 0.449879 | 0.089724 | 5,907 | 87 | 96 | 67.896552 | 0.431095 | 0.042831 | 0 | 0 | 0 | 0.029851 | 0.669505 | 0.601739 | 0 | 1 | 0 | 0 | 0 | 1 | 0.014925 | false | 0 | 0.014925 | 0 | 0.119403 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db6e003633a91ae4f78a2d376693bdd0a4fcfe63 | 19,070 | py | Python | layers/equivariant_linear.py | JiaHe-yogurt/GNN | 6b6dbc362591b4521e0b437d17ab09c1c879aa75 | [
"Apache-2.0"
] | null | null | null | layers/equivariant_linear.py | JiaHe-yogurt/GNN | 6b6dbc362591b4521e0b437d17ab09c1c879aa75 | [
"Apache-2.0"
] | null | null | null | layers/equivariant_linear.py | JiaHe-yogurt/GNN | 6b6dbc362591b4521e0b437d17ab09c1c879aa75 | [
"Apache-2.0"
] | null | null | null | import tensorflow.compat.v1 as tf
import numpy as np
def equi_2_to_2(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m x m tensor
:return: output: N x S x m x m tensor
'''
basis_dimension = 15
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32), tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
#coeffs_values = tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32)
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[3]) # extract dimension
ops_out = ops_2_to_2(inputs, m, normalization=normalization)
ops_out = tf.stack(ops_out, axis=2)
output = tf.einsum('dsb,ndbij->nsij', coeffs, ops_out) # N x S x m x m
# bias
diag_bias = tf.get_variable('diag_bias', initializer=tf.zeros([1, output_depth, 1, 1], dtype=tf.float32))
all_bias = tf.get_variable('all_bias', initializer=tf.zeros([1, output_depth, 1, 1], dtype=tf.float32))
mat_diag_bias = tf.multiply(tf.expand_dims(tf.expand_dims(tf.eye(tf.to_int32(tf.shape(inputs)[3])), 0), 0), diag_bias)
output = output + all_bias + mat_diag_bias
return output
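# Usage sketch (illustrative; D, S, m are placeholder ints and x a TF1-style placeholder):
#   x = tf.placeholder(tf.float32, [None, D, m, m])   # N x D x m x m edge features
#   h = equi_2_to_2('equi_2_2', D, S, x)               # -> N x S x m x m, permutation equivariant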
def equi_2_to_1(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m x m tensor
:return: output: N x S x m tensor
'''
basis_dimension = 5
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32), tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
#coeffs_values = tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32)
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[3]) # extract dimension
ops_out = ops_2_to_1(inputs, m, normalization=normalization)
ops_out = tf.stack(ops_out, axis=2) # N x D x B x m
output = tf.einsum('dsb,ndbi->nsi', coeffs, ops_out) # N x S x m
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth, 1], dtype=tf.float32))
output = output + bias
return output
def equi_1_to_2(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m tensor
:return: output: N x S x m x m tensor
'''
basis_dimension = 5
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32), tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
#coeffs_values = tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32)
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[2]) # extract dimension
ops_out = ops_1_to_2(inputs, m, normalization=normalization)
ops_out = tf.stack(ops_out, axis=2) # N x D x B x m x m
output = tf.einsum('dsb,ndbij->nsij', coeffs, ops_out) # N x S x m x m
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth, 1, 1], dtype=tf.float32))
output = output + bias
return output
def equi_1_to_1(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m tensor
:return: output: N x S x m tensor
'''
basis_dimension = 2
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32), tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
#coeffs_values = tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32)
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[2]) # extract dimension
ops_out = ops_1_to_1(inputs, m, normalization=normalization)
ops_out = tf.stack(ops_out, axis=2) # N x D x B x m
output = tf.einsum('dsb,ndbi->nsi', coeffs, ops_out) # N x S x m
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth, 1], dtype=tf.float32))
output = output + bias
return output
def equi_basic(name, input_depth, output_depth, inputs):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m x m tensor
:return: output: N x S x m x m tensor
'''
basis_dimension = 4
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32), tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
#coeffs_values = tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32)
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[3]) # extract dimension
float_dim = tf.to_float(m)
# apply ops
ops_out = []
# w1 - identity
ops_out.append(inputs)
# w2 - sum cols
sum_of_cols = tf.divide(tf.reduce_sum(inputs, axis=2), float_dim) # N x D x m
ops_out.append(tf.tile(tf.expand_dims(sum_of_cols, axis=2), [1, 1, m, 1])) # N x D x m x m
# w3 - sum rows
sum_of_rows = tf.divide(tf.reduce_sum(inputs, axis=3), float_dim) # N x D x m
ops_out.append(tf.tile(tf.expand_dims(sum_of_rows, axis=3), [1, 1, 1, m])) # N x D x m x m
# w4 - sum all
sum_all = tf.divide(tf.reduce_sum(sum_of_rows, axis=2), tf.square(float_dim)) # N x D
ops_out.append(tf.tile(tf.expand_dims(tf.expand_dims(sum_all, axis=2), axis=3), [1, 1, m, m])) # N x D x m x m
ops_out = tf.stack(ops_out, axis=2)
output = tf.einsum('dsb,ndbij->nsij', coeffs, ops_out) # N x S x m x m
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth, 1, 1], dtype=tf.float32))
output = output + bias
return output
def ops_2_to_2(inputs, dim, normalization='inf', normalization_val=1.0): # N x D x m x m
diag_part = tf.matrix_diag_part(inputs) # N x D x m
sum_diag_part = tf.reduce_sum(diag_part, axis=2, keepdims=True) # N x D x 1
sum_of_rows = tf.reduce_sum(inputs, axis=3) # N x D x m
sum_of_cols = tf.reduce_sum(inputs, axis=2) # N x D x m
sum_all = tf.reduce_sum(sum_of_rows, axis=2) # N x D
# op1 - (1234) - extract diag
op1 = tf.matrix_diag(diag_part) # N x D x m x m
# op2 - (1234) + (12)(34) - place sum of diag on diag
op2 = tf.matrix_diag(tf.tile(sum_diag_part, [1, 1, dim])) # N x D x m x m
# op3 - (1234) + (123)(4) - place sum of row i on diag ii
op3 = tf.matrix_diag(sum_of_rows) # N x D x m x m
# op4 - (1234) + (124)(3) - place sum of col i on diag ii
op4 = tf.matrix_diag(sum_of_cols) # N x D x m x m
# op5 - (1234) + (124)(3) + (123)(4) + (12)(34) + (12)(3)(4) - place sum of all entries on diag
op5 = tf.matrix_diag(tf.tile(tf.expand_dims(sum_all, axis=2), [1, 1, dim])) # N x D x m x m
# op6 - (14)(23) + (13)(24) + (24)(1)(3) + (124)(3) + (1234) - place sum of col i on row i
op6 = tf.tile(tf.expand_dims(sum_of_cols, axis=3), [1, 1, 1, dim]) # N x D x m x m
# op7 - (14)(23) + (23)(1)(4) + (234)(1) + (123)(4) + (1234) - place sum of row i on row i
op7 = tf.tile(tf.expand_dims(sum_of_rows, axis=3), [1, 1, 1, dim]) # N x D x m x m
# op8 - (14)(2)(3) + (134)(2) + (14)(23) + (124)(3) + (1234) - place sum of col i on col i
op8 = tf.tile(tf.expand_dims(sum_of_cols, axis=2), [1, 1, dim, 1]) # N x D x m x m
# op9 - (13)(24) + (13)(2)(4) + (134)(2) + (123)(4) + (1234) - place sum of row i on col i
op9 = tf.tile(tf.expand_dims(sum_of_rows, axis=2), [1, 1, dim, 1]) # N x D x m x m
# op10 - (1234) + (14)(23) - identity
op10 = inputs # N x D x m x m
# op11 - (1234) + (13)(24) - transpose
op11 = tf.transpose(inputs, [0, 1, 3, 2]) # N x D x m x m
# op12 - (1234) + (234)(1) - place ii element in row i
op12 = tf.tile(tf.expand_dims(diag_part, axis=3), [1, 1, 1, dim]) # N x D x m x m
# op13 - (1234) + (134)(2) - place ii element in col i
op13 = tf.tile(tf.expand_dims(diag_part, axis=2), [1, 1, dim, 1]) # N x D x m x m
# op14 - (34)(1)(2) + (234)(1) + (134)(2) + (1234) + (12)(34) - place sum of diag in all entries
op14 = tf.tile(tf.expand_dims(sum_diag_part, axis=3), [1, 1, dim, dim]) # N x D x m x m
# op15 - sum of all ops - place sum of all entries in all entries
op15 = tf.tile(tf.expand_dims(tf.expand_dims(sum_all, axis=2), axis=3), [1, 1, dim, dim]) # N x D x m x m
if normalization is not None:
float_dim = tf.to_float(dim)
        if normalization == 'inf':
op2 = tf.divide(op2, float_dim)
op3 = tf.divide(op3, float_dim)
op4 = tf.divide(op4, float_dim)
op5 = tf.divide(op5, float_dim**2)
op6 = tf.divide(op6, float_dim)
op7 = tf.divide(op7, float_dim)
op8 = tf.divide(op8, float_dim)
op9 = tf.divide(op9, float_dim)
op14 = tf.divide(op14, float_dim)
op15 = tf.divide(op15, float_dim**2)
return [op1, op2, op3, op4, op5, op6, op7, op8, op9, op10, op11, op12, op13, op14, op15]
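# Note: the basis sizes used throughout (15 for 2->2, 5 for 2->1 and 1->2, 2 for 1->1,
# 5 for 3->0) are the counts of linearly independent permutation-equivariant maps between
# the corresponding tensor orders (the Bell numbers B(4), B(3), B(2), B(3)), which is why
# the equi_* layers fix basis_dimension to these values.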
def ops_2_to_1(inputs, dim, normalization='inf', normalization_val=1.0): # N x D x m x m
diag_part = tf.matrix_diag_part(inputs) # N x D x m
sum_diag_part = tf.reduce_sum(diag_part, axis=2, keepdims=True) # N x D x 1
sum_of_rows = tf.reduce_sum(inputs, axis=3) # N x D x m
sum_of_cols = tf.reduce_sum(inputs, axis=2) # N x D x m
sum_all = tf.reduce_sum(inputs, axis=(2, 3)) # N x D
# op1 - (123) - extract diag
op1 = diag_part # N x D x m
# op2 - (123) + (12)(3) - tile sum of diag part
op2 = tf.tile(sum_diag_part, [1, 1, dim]) # N x D x m
# op3 - (123) + (13)(2) - place sum of row i in element i
op3 = sum_of_rows # N x D x m
# op4 - (123) + (23)(1) - place sum of col i in element i
op4 = sum_of_cols # N x D x m
# op5 - (1)(2)(3) + (123) + (12)(3) + (13)(2) + (23)(1) - tile sum of all entries
op5 = tf.tile(tf.expand_dims(sum_all, axis=2), [1, 1, dim]) # N x D x m
if normalization is not None:
float_dim = tf.to_float(dim)
        if normalization == 'inf':
op2 = tf.divide(op2, float_dim)
op3 = tf.divide(op3, float_dim)
op4 = tf.divide(op4, float_dim)
op5 = tf.divide(op5, float_dim ** 2)
return [op1, op2, op3, op4, op5]
def ops_1_to_2(inputs, dim, normalization='inf', normalization_val=1.0): # N x D x m
sum_all = tf.reduce_sum(inputs, axis=2, keepdims=True) # N x D x 1
# op1 - (123) - place on diag
op1 = tf.matrix_diag(inputs) # N x D x m x m
# op2 - (123) + (12)(3) - tile sum on diag
op2 = tf.matrix_diag(tf.tile(sum_all, [1, 1, dim])) # N x D x m x m
# op3 - (123) + (13)(2) - tile element i in row i
op3 = tf.tile(tf.expand_dims(inputs, axis=2), [1, 1, dim, 1]) # N x D x m x m
# op4 - (123) + (23)(1) - tile element i in col i
op4 = tf.tile(tf.expand_dims(inputs, axis=3), [1, 1, 1, dim]) # N x D x m x m
# op5 - (1)(2)(3) + (123) + (12)(3) + (13)(2) + (23)(1) - tile sum of all entries
op5 = tf.tile(tf.expand_dims(sum_all, axis=3), [1, 1, dim, dim]) # N x D x m x m
if normalization is not None:
float_dim = tf.to_float(dim)
        if normalization == 'inf':
op2 = tf.divide(op2, float_dim)
op5 = tf.divide(op5, float_dim)
return [op1, op2, op3, op4, op5]
def ops_1_to_1(inputs, dim, normalization='inf', normalization_val=1.0): # N x D x m
sum_all = tf.reduce_sum(inputs, axis=2, keepdims=True) # N x D x 1
# op1 - (12) - identity
op1 = inputs # N x D x m
# op2 - (1)(2) - tile sum of all
op2 = tf.tile(sum_all, [1, 1, dim]) # N x D x m
if normalization is not None:
float_dim = tf.to_float(dim)
        if normalization == 'inf':
op2 = tf.divide(op2, float_dim)
return [op1, op2]
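# Equivariance sanity check (illustrative sketch, not part of the original file):
#   out = sess.run(tf.stack(ops_1_to_1(x, m), axis=2), {x: data})              # N x D x B x m
#   out_p = sess.run(tf.stack(ops_1_to_1(x, m), axis=2), {x: data[:, :, perm]})
#   numpy.allclose(out_p, out[:, :, :, perm])   # expected True for any permutation perm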
def ops_3_to_1(inputs, dim, normalization='inf', normalization_val=1.0): # N x D x m x m x m
diag = tf.matrix_diag_part(inputs)
sum_of_cols = tf.reduce_sum(inputs, axis=3)
sum_of_rows = tf.reduce_sum(inputs, axis=4)
op1 = tf.reduce_sum(tf.matrix_diag_part(diag), axis=2)
op2 = tf.reduce_sum(tf.reduce_sum(diag, axis=2), axis=2)
op3 = tf.reduce_sum(tf.matrix_diag_part(sum_of_cols), axis=2)
op4 = tf.reduce_sum(tf.matrix_diag_part(sum_of_rows), axis=2)
op5 = tf.reduce_sum(tf.reduce_sum(sum_of_cols, axis=2), axis=2)
if normalization is not None:
float_dim = tf.to_float(dim)
        if normalization == 'inf':
op1 = tf.divide(op1, float_dim)
op2 = tf.divide(op2, float_dim)
op3 = tf.divide(op3, float_dim)
op4 = tf.divide(op4, float_dim)
op5 = tf.divide(op5, float_dim)
return [op1, op2, op3, op4, op5]
# return [ op5]
def equi_3_to_1(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m x m tensor
:return: output: N x S tensor
'''
basis_dimension = 5
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32),
tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[2]) # extract dimension
ops_out = ops_3_to_1(inputs, m, normalization=normalization)
# ops_out = tf.concat(ops_out, axis=1)
# output = tf.einsum('sdb,nb->nd', coeffs, ops_out) # N x D
        ops_out = tf.stack(ops_out, axis=2)  # N x D x B
        output = tf.einsum('sdb,nsb->nd', coeffs, ops_out)  # N x S
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth], dtype=tf.float32))
output = output + bias
return output
def ops_4_to_1(inputs, dim, normalization='inf', normalization_val=1.0):  # N x D x m x m x m x m
diag = tf.matrix_diag_part(inputs)
slice_pointwise_sum = tf.reduce_sum(inputs, axis=3)
sum_vertical_slice = tf.reduce_sum(slice_pointwise_sum, axis=2)
sum_of_col = tf.reduce_sum(inputs, axis=4)
sum_of_row = tf.reduce_sum(inputs, axis=5)
sum_horizotal_slice = tf.reduce_sum(sum_of_row, axis=4)
trans1 = tf.transpose(inputs, [0, 1, 2, 4, 3, 5])
trans2 = tf.transpose(inputs, [0, 1, 2, 5, 4, 3])
# sum all elements
op1 = tf.reduce_sum(tf.reduce_sum(sum_horizotal_slice, axis=2), axis=2)
# sum column slice
# op2 = tf.reduce_sum(tf.matrix_diag_part(sum_vertical_slice), axis=2)
# sum ith column of ith slice for all tensor
# op3 = tf.reduce_sum(tf.reduce_sum( tf.matrix_diag_part(sum_of_col), axis=2), axis=2)
# sum ith row of ith slice for all tensor
# op4 = tf.reduce_sum(tf.reduce_sum( tf.matrix_diag_part(sum_of_row),axis=2),axis=2)
# sum ith slice for ith tensor
# op5 = tf.reduce_sum(tf.matrix_diag_part(sum_horizotal_slice),axis=2)
# sum of ith row of all slice for every tensor
# op6 = tf.reduce_sum(tf.matrix_diag_part( tf.reduce_sum(slice_pointwise_sum , axis=4)),axis=2)
# sum of all diagonal
# op7 = tf.reduce_sum( tf.reduce_sum( tf.reduce_sum(diag, axis=2), axis=2), axis=2)
# (iiii)
# op8 = tf.reduce_sum(tf.matrix_diag_part(tf.matrix_diag_part(diag)), axis=2)
# (iiii) + (iijj)
# op9 = tf.reduce_sum(tf.matrix_diag_part(tf.reduce_sum(diag, axis=4)),axis=2)
# (iiii) + (ijij)
# op10 = tf.reduce_sum(tf.matrix_diag_part(tf.matrix_diag_part(sum_of_col)), axis=2)
# (iiii) + (iiij)
# op11 = tf.reduce_sum(tf.matrix_diag_part(tf.matrix_diag_part(sum_of_row)), axis=2)
# (iiii) + (ijii)
# op12 = tf.reduce_sum(tf.matrix_diag_part((tf.matrix_diag_part(slice_pointwise_sum))), axis=2)
# (iiii) + (ijjj)
## op13 = tf.reduce_sum(tf.reduce_sum(tf.matrix_diag_part(diag),axis=2),axis=2)
# (iiii) + (ijij)
op14 = tf.reduce_sum(tf.matrix_diag_part(tf.reduce_sum(tf.matrix_diag_part(trans1), axis=4)), axis=2)
# (iiii) + (ijji)
# op15 = tf.reduce_sum(tf.matrix_diag_part(tf.reduce_sum(tf.matrix_diag_part(trans2), axis=4)),axis=2)
# return [op1, op2, op3, op4, op5, op6, op7, op8, op9, op10, op11, op12, op13, op14, op15]
return [op1]
def equi_4_to_1(name, input_depth, output_depth, inputs, normalization='inf', normalization_val=1.0):
'''
:param name: name of layer
:param input_depth: D
:param output_depth: S
:param inputs: N x D x m x m x m tensor
:return: output: N x S tensor
'''
basis_dimension = 1
with tf.variable_scope(name, reuse=tf.AUTO_REUSE) as scope:
# initialization values for variables
coeffs_values = tf.multiply(tf.random_normal([input_depth, output_depth, basis_dimension], dtype=tf.float32),
tf.sqrt(2. / tf.to_float(input_depth + output_depth)))
# define variables
coeffs = tf.get_variable('coeffs', initializer=coeffs_values)
m = tf.to_int32(tf.shape(inputs)[2]) # extract dimension
ops_out = ops_4_to_1(inputs, m, normalization=normalization)
ops_out = tf.concat(ops_out, axis=1)
output = tf.einsum('sdb,nb->nd', coeffs, ops_out) # N x D
# bias
bias = tf.get_variable('bias', initializer=tf.zeros([1, output_depth], dtype=tf.float32))
output = output + bias
return output
| 40.317125 | 172 | 0.625223 | 3,259 | 19,070 | 3.48788 | 0.050015 | 0.019002 | 0.016891 | 0.02041 | 0.886162 | 0.84587 | 0.819565 | 0.78895 | 0.755168 | 0.729128 | 0 | 0.053634 | 0.240325 | 19,070 | 472 | 173 | 40.402542 | 0.731 | 0.313214 | 0 | 0.529412 | 0 | 0 | 0.017774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063725 | false | 0 | 0.009804 | 0 | 0.137255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db76c034e675ea1b2db91f7069f2c7c7df97b830 | 93 | py | Python | utils/models.py | HackTJ/live | 2e29176da78e2a2834e1004c7390d4a74c324142 | [
"MIT"
] | 2 | 2021-03-11T22:50:23.000Z | 2021-05-13T14:52:25.000Z | utils/models.py | HackTJ/live | 2e29176da78e2a2834e1004c7390d4a74c324142 | [
"MIT"
] | 78 | 2020-08-01T20:06:38.000Z | 2022-03-30T23:34:02.000Z | utils/models.py | HackTJ/live | 2e29176da78e2a2834e1004c7390d4a74c324142 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
class LiveUser(AbstractUser):
pass
| 15.5 | 51 | 0.795699 | 11 | 93 | 6.727273 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139785 | 93 | 5 | 52 | 18.6 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
db9b4df2728a38085dc57875eaaeee6decb75ba3 | 27 | py | Python | experiments/data_structure/__init__.py | shruti-bt/data-structure-python | 0729f486f516ce05acdd92b28b108f43b67f656f | [
"MIT"
] | 1 | 2022-01-10T17:17:35.000Z | 2022-01-10T17:17:35.000Z | experiments/data_structure/__init__.py | shruti-bt/data-structure-python | 0729f486f516ce05acdd92b28b108f43b67f656f | [
"MIT"
] | null | null | null | experiments/data_structure/__init__.py | shruti-bt/data-structure-python | 0729f486f516ce05acdd92b28b108f43b67f656f | [
"MIT"
] | null | null | null | from ._stack import Stack
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 1 | 27 | 27 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbbd9c3da5af5b70f9f7582aab184e41efa34db1 | 14,353 | py | Python | pybind/slxos/v17r_2_00/interface/ethernet/qos/rx_queue/multicast/queue_size/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17r_2_00/interface/ethernet/qos/rx_queue/multicast/queue_size/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17r_2_00/interface/ethernet/qos/rx_queue/multicast/queue_size/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class queue_size(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-interface - based on the path /interface/ethernet/qos/rx-queue/multicast/queue-size. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__traffic_class','__min_queue_size','__max_queue_size',)
_yang_name = 'queue-size'
_rest_name = 'queue-size'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__traffic_class = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': [u'0 .. 3']}), is_leaf=True, yang_name="traffic-class", rest_name="traffic-class", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Traffic class to configure multicast queue size', u'cli-full-no': None, u'cli-expose-key-name': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='uint8', is_config=True)
self.__max_queue_size = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 2048']}), is_leaf=True, yang_name="max-queue-size", rest_name="max-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure maximum queue size'}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='max-queue-size-type', is_config=True)
self.__min_queue_size = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 1024']}), is_leaf=True, yang_name="min-queue-size", rest_name="min-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure minimum queue size', u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='min-queue-size-type', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'interface', u'ethernet', u'qos', u'rx-queue', u'multicast', u'queue-size']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'interface', u'Ethernet', u'qos', u'rx-queue', u'multicast', u'queue-size']
def _get_traffic_class(self):
"""
Getter method for traffic_class, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/traffic_class (uint8)
"""
return self.__traffic_class
def _set_traffic_class(self, v, load=False):
"""
Setter method for traffic_class, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/traffic_class (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_traffic_class is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_traffic_class() directly.
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': [u'0 .. 3']}), is_leaf=True, yang_name="traffic-class", rest_name="traffic-class", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Traffic class to configure multicast queue size', u'cli-full-no': None, u'cli-expose-key-name': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """traffic_class must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': [u'0 .. 3']}), is_leaf=True, yang_name="traffic-class", rest_name="traffic-class", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Traffic class to configure multicast queue size', u'cli-full-no': None, u'cli-expose-key-name': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='uint8', is_config=True)""",
})
self.__traffic_class = t
if hasattr(self, '_set'):
self._set()
def _unset_traffic_class(self):
self.__traffic_class = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': [u'0 .. 3']}), is_leaf=True, yang_name="traffic-class", rest_name="traffic-class", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Traffic class to configure multicast queue size', u'cli-full-no': None, u'cli-expose-key-name': None}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='uint8', is_config=True)
def _get_min_queue_size(self):
"""
Getter method for min_queue_size, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/min_queue_size (min-queue-size-type)
"""
return self.__min_queue_size
def _set_min_queue_size(self, v, load=False):
"""
Setter method for min_queue_size, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/min_queue_size (min-queue-size-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_min_queue_size is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_min_queue_size() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 1024']}), is_leaf=True, yang_name="min-queue-size", rest_name="min-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure minimum queue size', u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='min-queue-size-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """min_queue_size must be of a type compatible with min-queue-size-type""",
'defined-type': "brocade-qos-mls:min-queue-size-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 1024']}), is_leaf=True, yang_name="min-queue-size", rest_name="min-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure minimum queue size', u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='min-queue-size-type', is_config=True)""",
})
self.__min_queue_size = t
if hasattr(self, '_set'):
self._set()
def _unset_min_queue_size(self):
self.__min_queue_size = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 1024']}), is_leaf=True, yang_name="min-queue-size", rest_name="min-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure minimum queue size', u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='min-queue-size-type', is_config=True)
def _get_max_queue_size(self):
"""
Getter method for max_queue_size, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/max_queue_size (max-queue-size-type)
"""
return self.__max_queue_size
def _set_max_queue_size(self, v, load=False):
"""
Setter method for max_queue_size, mapped from YANG variable /interface/ethernet/qos/rx_queue/multicast/queue_size/max_queue_size (max-queue-size-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_max_queue_size is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_max_queue_size() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 2048']}), is_leaf=True, yang_name="max-queue-size", rest_name="max-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure maximum queue size'}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='max-queue-size-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """max_queue_size must be of a type compatible with max-queue-size-type""",
'defined-type': "brocade-qos-mls:max-queue-size-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 2048']}), is_leaf=True, yang_name="max-queue-size", rest_name="max-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure maximum queue size'}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='max-queue-size-type', is_config=True)""",
})
self.__max_queue_size = t
if hasattr(self, '_set'):
self._set()
def _unset_max_queue_size(self):
self.__max_queue_size = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0 .. 2048']}), is_leaf=True, yang_name="max-queue-size", rest_name="max-queue-size", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure maximum queue size'}}, namespace='urn:brocade.com:mgmt:brocade-qos-mls', defining_module='brocade-qos-mls', yang_type='max-queue-size-type', is_config=True)
traffic_class = __builtin__.property(_get_traffic_class, _set_traffic_class)
min_queue_size = __builtin__.property(_get_min_queue_size, _set_min_queue_size)
max_queue_size = __builtin__.property(_get_max_queue_size, _set_max_queue_size)
_pyangbind_elements = {'traffic_class': traffic_class, 'min_queue_size': min_queue_size, 'max_queue_size': max_queue_size, }
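# Usage sketch (illustrative; attribute names mirror the generated properties above,
# Python 2 as implied by the __builtin__ import):
#   qs = queue_size()
#   qs.traffic_class = 1       # key leaf, restricted to 0..3
#   qs.min_queue_size = 512    # restricted to 0..1024
#   qs.max_queue_size = 1024   # restricted to 0..2048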
| 72.489899 | 655 | 0.730788 | 2,029 | 14,353 | 4.923115 | 0.096599 | 0.086495 | 0.043248 | 0.037241 | 0.806087 | 0.769847 | 0.748924 | 0.743918 | 0.728702 | 0.712484 | 0 | 0.01487 | 0.128545 | 14,353 | 197 | 656 | 72.857868 | 0.783738 | 0.129938 | 0 | 0.378788 | 0 | 0.022727 | 0.357125 | 0.133344 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.280303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbd801795470214342b240e3b88738a800ef6744 | 196 | py | Python | ADVANCED_MODULE/05_Functions_Advanced/LAB/02_Person_Info.py | sleepychild/SoftUni_SE | ae94488befb6de8b74ffdcb14ed6470739a67786 | [
"MIT"
] | null | null | null | ADVANCED_MODULE/05_Functions_Advanced/LAB/02_Person_Info.py | sleepychild/SoftUni_SE | ae94488befb6de8b74ffdcb14ed6470739a67786 | [
"MIT"
] | 1 | 2022-01-15T10:33:56.000Z | 2022-01-15T10:33:56.000Z | ADVANCED_MODULE/05_Functions_Advanced/LAB/02_Person_Info.py | sleepychild/SoftUni_SE | ae94488befb6de8b74ffdcb14ed6470739a67786 | [
"MIT"
] | null | null | null | def get_info(**kwargs) -> str:
return f"This is {kwargs['name']} from {kwargs['town']} and he is {kwargs['age']} years old"
print(get_info(**{"name": "George", "town": "Sofia", "age": 20}))
| 32.666667 | 96 | 0.602041 | 30 | 196 | 3.866667 | 0.7 | 0.12069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0.142857 | 196 | 5 | 97 | 39.2 | 0.678571 | 0 | 0 | 0 | 0 | 0.333333 | 0.530612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9167966ac67f118af122eae97b67691eb0e749ba | 28,531 | py | Python | src/bayes_tec/likelihoods.py | Joshuaalbert/bayes_tec | 655c4ec29427c7bb0616d5752c34207714a0151c | [
"Apache-2.0"
] | null | null | null | src/bayes_tec/likelihoods.py | Joshuaalbert/bayes_tec | 655c4ec29427c7bb0616d5752c34207714a0151c | [
"Apache-2.0"
] | null | null | null | src/bayes_tec/likelihoods.py | Joshuaalbert/bayes_tec | 655c4ec29427c7bb0616d5752c34207714a0151c | [
"Apache-2.0"
] | null | null | null | import gpflow as gp
import numpy as np
import tensorflow as tf
from gpflow import params_as_tensors
from gpflow import transforms
from gpflow.params import Parameter
from gpflow.likelihoods import Likelihood
from gpflow import settings
from gpflow.quadrature import ndiagquad, ndiag_mc, mvnquad
from gpflow import logdensities
float_type = settings.float_type
try:
@tf.RegisterGradient('WrapGrad')
def _wrap_grad(op,grad):
phi = op.inputs[0]
return tf.ones_like(phi)*grad
def wrap(phi):
out = tf.atan2(tf.sin(phi),tf.cos(phi))
with tf.get_default_graph().gradient_override_map({'Identity': 'WrapGrad'}):
return tf.identity(out)
except:
    pass  # already defined
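# The gradient override above makes wrap() act as the identity in the backward pass:
# wrap(phi) = atan2(sin(phi), cos(phi)) equals phi + 2*pi*k piecewise, so d wrap/d phi = 1
# almost everywhere, and the straight-through gradient avoids the jump discontinuities.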
class WrappedPhaseGaussianEncoded(Likelihood):
    def __init__(self, tec_scale=0.01, num_gauss_hermite_points=20, variance=1.0, freq=None, name=None):
super().__init__(name=name)
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.num_gauss_hermite_points = num_gauss_hermite_points
self.freq = tf.convert_to_tensor(freq,dtype=settings.float_type,name='test_freq') # frequency the phase is calculated at for the predictive distribution
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type,name='tec_conversion') # rad Hz/ tecu
self.tec2phase = tf.convert_to_tensor(self.tec_conversion / self.freq,dtype=settings.float_type,name='tec2phase')
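        # With tec_conversion = tec_scale * (-8.4480e9 rad*Hz/TECU), the model phase is
        # phase = tec2phase * F, i.e. the latent F is TEC measured in units of tec_scale TECU.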
@params_as_tensors
def logp(self, F, Y, freqs, **kwargs):
"""The log-likelihood function."""
assert freqs is not None
#freqs = Y[:,-1:]
#Y = Y[:,:self.num_latent]
# N,1
tec2phase = self.tec_conversion/freqs
phase = F*tec2phase
dphase = wrap(phase) - wrap(Y) # Ito theorem
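        # Approximate wrapped-normal log-density: score the residual against five
        # 2*pi-shifted Gaussian branches (k = -2..2) and combine them with logsumexp,
        # which keeps the likelihood smooth across the +/-pi boundary.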
arg = tf.stack([-0.5*tf.square(dphase + 2*np.pi*k)/self.variance - 0.5 * tf.log((2*np.pi) * self.variance) \
for k in range(-2,3,1)],axis=-1)
return tf.reduce_logsumexp(arg,axis=-1)
@params_as_tensors
def conditional_mean(self, F, eval_freq=None): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
eval_freq = self.freq if eval_freq is None else eval_freq
tec2phase = self.tec_conversion/eval_freq
phase = F*tec2phase
return phase
@params_as_tensors
def conditional_variance(self, F):
return tf.fill(tf.shape(F),tf.cast(self.variance,gp.settings.float_type))
def predict_mean_and_var(self, Fmu, Fvar, **kwargs):
r"""
Given a Normal distribution for the latent function,
return the mean of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
\int\int y^2 p(y|f)q(f) df dy - [ \int\int y^2 p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, **kwargs)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, **kwargs):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, Y=Y, **kwargs)
def variational_expectations(self, Fmu, Fvar, Y, **kwargs):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, Y=Y, **kwargs)
class WrappedPhaseGaussianMulti(Likelihood):
"""This is an efficient version of the encoded likelihood."""
def __init__(self, tec_scale=0.001, freqs=None, num_gauss_hermite_points=20, variance=1.0, name=None):
super().__init__(name=name)
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.num_gauss_hermite_points = num_gauss_hermite_points
self.Nf = len(freqs)
self.freqs = tf.convert_to_tensor(freqs,dtype=settings.float_type,name='freqs') # freqs of data
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type,name='tec_conversion') # rad Hz/ tecu
# Nf
self.tec2phase = self.tec_conversion / self.freqs
@params_as_tensors
def logp(self, F, **kwargs):
"""The log-likelihood function."""
#..., Nf
Y = tf.stack([kwargs["Y_{}".format(i)] for i in range(self.Nf)],axis=2)
phase = F[..., None]*self.tec2phase
dphase = wrap(phase) - wrap(Y) # Ito theorem
arg = tf.stack([-0.5*(tf.square(dphase + 2*np.pi*k)/self.variance + tf.cast(tf.log(2*np.pi), settings.float_type) + tf.log(self.variance)) \
for k in range(-2,3,1)],axis=0)
if kwargs.get("W_0") is not None:
W = tf.stack([kwargs["W_{}".format(i)] for i in range(self.Nf)],axis=2)
return tf.reduce_mean(W*tf.reduce_logsumexp(arg,axis=0), axis=-1)
else:
return tf.reduce_mean(tf.reduce_logsumexp(arg,axis=0),axis=-1)
@params_as_tensors
def conditional_mean(self, F): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
# ..., Nf
phase = F[..., None]*self.tec2phase
return phase
@params_as_tensors
def conditional_variance(self, F):
return tf.fill(tf.concat([tf.shape(F),tf.shape(self.freqs)],axis=0),
tf.cast(self.variance,gp.settings.float_type))
def predict_mean_and_var(self, Fmu, Fvar, **kwargs):
r"""
Given a Normal distribution for the latent function,
return the mean of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
\int\int y^2 p(y|f)q(f) df dy - [ \int\int y^2 p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, **kwargs)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, **kwargs):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
Y_burst = {"Y_{}".format(i): Y[:,:,i] for i in range(self.Nf)}
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, **Y_burst, **kwargs)
def variational_expectations(self, Fmu, Fvar, Y, weights, **kwargs):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
Y_burst = {"Y_{}".format(i): Y[:,:,i] for i in range(self.Nf)}
weights_burst = {"W_{}".format(i): weights[:,:,i] for i in range(self.Nf)}
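        # Y and the weights are unpacked into one keyword per frequency (Y_0..Y_{Nf-1},
        # W_0..W_{Nf-1}) so each extra argument matches the shape of Fmu; logp re-stacks
        # them along a trailing frequency axis.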
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, **Y_burst, **weights_burst, **kwargs)
class ItohGaussianEncodedHetero(Likelihood):
"""This is an efficient version of the encoded likelihood."""
def __init__(self, tec_scale=0.005, num_gauss_hermite_points=20, num_mc_samples=1, variance=1.0, name=None):
super().__init__(name=name)
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.num_gauss_hermite_points = num_gauss_hermite_points
self.num_mc_samples = num_mc_samples
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type, name='tec_conversion') # rad Hz/ tecu
@params_as_tensors
def logp(self, F, Y, Y_var, freq, **kwargs):
"""
The log-likelihood function.
F is ..., P
Y is ..., P
Y_var ..., P
freq ..., P
Returns:
tensor ..., P
"""
#..., Nf
phase = self.tec_conversion * (F / freq)
dphase = wrap(wrap(phase) - wrap(Y)) # Ito theorem
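        # Single-branch approximation: the wrapped residual is scored under a zero-mean
        # Gaussian with the reported per-point variance Y_var, with no 2*pi alias terms
        # as in the WrappedPhaseGaussian* likelihoods.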
log_prob = tf.distributions.Normal(dphase, tf.sqrt(Y_var)).log_prob(tf.zeros_like(F))#..., P
return log_prob
@params_as_tensors
def conditional_mean(self, F, freq, **kwargs): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
# ..., Nf
phase = self.tec_conversion * (F/freq)
return phase
@params_as_tensors
def conditional_variance(self, Y_var, **kwargs):
return Y_var + self.variance
def predict_mean_and_var(self, Fmu, Fvar, Y_var, freq):
r"""
Given a Normal distribution for the latent function,
return the mean of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
\int\int y^2 p(y|f)q(f) df dy - [ \int\int y^2 p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, Y_var=Y_var, freq=freq)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, Y=Y, Y_var=Y_var, freq=freq)
def variational_expectations(self, Fmu, Fvar, Y, Y_var, freq, mc=False, mvn=False):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
if mvn:
assert len(Fvar.shape) == 3
if not mvn:
if not mc:
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
else:
return ndiag_mc(self.logp, self.num_mc_samples , Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
else:
if not mc:
raise ValueError("Too slow to do this")
else:
                return mvn_mc(self.logp, self.num_mc_samples, Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)  # mvn_mc (full-covariance MC) is not imported above and is assumed to be provided elsewhere
class WrappedPhaseGaussianEncodedHetero(Likelihood):
"""This is an efficient version of the encoded likelihood."""
def __init__(self, tec_scale=0.005, num_gauss_hermite_points=20, num_mc_samples=1, variance=1.0, K=2, use_mc=False, name=None):
super().__init__(name=name)
self.K = K
self.use_mc = use_mc
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.num_gauss_hermite_points = num_gauss_hermite_points
self.num_mc_samples = num_mc_samples
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type, name='tec_conversion') # rad Hz/ tecu
@params_as_tensors
def logp(self, F, Y, Y_var, freq, **kwargs):
"""
The log-likelihood function.
F is ..., P
Y is ..., P
Y_var ..., P
freq ..., P
Returns:
tensor ..., P
"""
#..., Nf
phase = self.tec_conversion * (F / freq)
# dphase = wrap(phase) - wrap(Y) # Ito theorem
log_prob = tf.stack([tf.distributions.Normal(phase + tf.convert_to_tensor(k*2*np.pi,float_type),
tf.sqrt(Y_var)).log_prob(Y) for k in range(-self.K,self.K+1,1)], axis=0)
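        # logsumexp over the 2*K + 1 phase-alias branches; K is configurable here, unlike
        # the fixed range(-2, 3) used in the non-heteroscedastic wrapped likelihoods above.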
log_prob = tf.reduce_logsumexp(log_prob, axis=0) #..., P
return log_prob
@params_as_tensors
def conditional_mean(self, F, freq, **kwargs): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
# ..., Nf
phase = self.tec_conversion * (F/freq)
return phase
@params_as_tensors
def conditional_variance(self, Y_var, **kwargs):
return Y_var + self.variance
def predict_mean_and_var(self, Fmu, Fvar, Y_var, freq):
r"""
Given a Normal distribution for the latent function,
return the mean of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
        \int\int y^2 p(y|f)q(f) df dy - [ \int\int y p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
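        # The two integrands give E[y] = E_q[conditional_mean(f)] and
        # E[y^2] = E_q[conditional_variance(f) + conditional_mean(f)^2]; both
        # expectations over q(f) are computed with Gauss-Hermite quadrature, and the
        # predictive variance follows as V[y] = E[y^2] - E[y]^2.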
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, Y_var=Y_var, freq=freq)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, Y=Y, Y_var=Y_var, freq=freq)
def variational_expectations(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
        if self.use_mc:
            return ndiag_mc(self.logp, self.num_mc_samples, Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
        return ndiagquad(self.logp,
                         self.num_gauss_hermite_points,
                         Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
class GaussianTecHetero(Likelihood):
def __init__(self, tec_scale=0.005, **kwargs):
super().__init__(**kwargs)
self.tec_scale = tf.convert_to_tensor(tec_scale, dtype=float_type)
@params_as_tensors
def logp(self, F, Y, Y_var):
tec = F*self.tec_scale
return logdensities.gaussian(Y, tec, Y_var)
@params_as_tensors
def predict_mean_and_var(self, Fmu, Fvar, Y_var):
return tf.identity(Fmu)*self.tec_scale, Fvar*self.tec_scale**2 + Y_var
@params_as_tensors
def predict_density(self, Fmu, Fvar, Y, Y_var):
return logdensities.gaussian(Y, Fmu*self.tec_scale, Fvar*self.tec_scale**2 + Y_var)
@params_as_tensors
def variational_expectations(self, Fmu, Fvar, Y, Y_var):
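        # Closed form of E_{q(f)=N(Fmu,Fvar)}[log N(Y; tec_scale*f, Y_var)]:
        # -0.5*log(2*pi*Y_var) - 0.5*((Y - tec_scale*Fmu)^2 + tec_scale^2*Fvar)/Y_var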
return -0.5 * np.log(2 * np.pi) - 0.5 * tf.log(Y_var) \
- 0.5 * (tf.square(Y - Fmu*self.tec_scale) + Fvar*self.tec_scale**2) / Y_var
class WrappedPhaseGaussianEncodedHeteroDirectionalOutliers(Likelihood):
"""This is an efficient version of the encoded likelihood."""
def __init__(self, tec_scale=0.005, num_gauss_hermite_points=20, num_mc_samples=1, variance=1.0, K=2, directional_var_matrix=None, name=None):
super().__init__(name=name)
self.K = K
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
        assert directional_var_matrix is not None
self.directional_var_matrix = Parameter(directional_var_matrix, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.num_gauss_hermite_points = num_gauss_hermite_points
self.num_mc_samples = num_mc_samples
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type, name='tec_conversion') # rad Hz/ tecu
@params_as_tensors
def logp(self, F, Y, Y_var, freq, dir_idx, **kwargs):
"""
The log-likelihood function.
F is ..., P
Y is ..., P
Y_var ..., P
freq ..., P
Returns:
tensor ..., P
"""
#..., Nf
phase = wrap(self.tec_conversion * (F / freq))
# dphase = wrap(phase) - wrap(Y) # Ito theorem
        dir_var = tf.gather(self.directional_var_matrix, dir_idx)
        # NOTE: assumed outlier model - the gathered per-direction variance widens the
        # observation noise, so the Normal scale uses sqrt(Y_var + dir_var).
        log_prob = tf.stack([tf.distributions.Normal(phase + tf.convert_to_tensor(k*2*np.pi,float_type),
                                                     tf.sqrt(Y_var + dir_var)).log_prob(wrap(Y)) for k in range(-self.K,self.K+1,1)], axis=0)
log_prob = tf.reduce_logsumexp(log_prob, axis=0) #..., P
return log_prob
@params_as_tensors
def conditional_mean(self, F, freq, **kwargs): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
# ..., Nf
phase = self.tec_conversion * (F/freq)
return phase
@params_as_tensors
def conditional_variance(self, Y_var, **kwargs):
return Y_var + self.variance
def predict_mean_and_var(self, Fmu, Fvar, Y_var, freq):
r"""
Given a Normal distribution for the latent function,
        return the mean and the variance of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
        \int\int y^2 p(y|f)q(f) df dy - [ \int\int y p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, Y_var=Y_var, freq=freq)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, Y=Y, Y_var=Y_var, freq=freq)
def variational_expectations(self, Fmu, Fvar, Y, Y_var, freq, mc=False, mvn=False):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
if mvn:
assert len(Fvar.shape) == 3
if not mvn:
if not mc:
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
else:
                return ndiag_mc(self.logp, self.num_mc_samples, Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
else:
if not mc:
raise ValueError("Too slow to do this")
else:
                return mvn_mc(self.logp, self.num_mc_samples, Fmu, Fvar, Y=Y, Y_var=Y_var, freq=freq)
class ComplexHarmonicPhaseOnlyGaussianEncodedHetero(Likelihood):
"""This is an efficient version of the encoded likelihood."""
def __init__(self, tec_scale=0.005, variance=1.0, analytic = False, name=None):
super().__init__(name=name)
self.variance = Parameter(
variance, transform=transforms.positive, dtype=settings.float_type)
self.tec_scale = tec_scale
self.tec_conversion = tf.convert_to_tensor(tec_scale * -8.4480e9,dtype=settings.float_type, name='tec_conversion') # rad Hz/ tecu
self.analytic = analytic
@params_as_tensors
def logp(self, F, Y, Y_var, freq, **kwargs):
"""
The log-likelihood function.
F is ..., P
Y is ..., P
Y_var ..., P
freq ..., P
Returns:
tensor ..., P
"""
#..., Nf
phase = self.tec_conversion * (F / freq)
### might need to use with I(1./Y_var)
kappa = 1./Y_var
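        # von Mises log-density: kappa*cos(phase - Y) - log(2*pi*I0(kappa)). Writing it
        # with the exponentially scaled Bessel function, log I0(kappa) = kappa + log(i0e(kappa)),
        # keeps the normaliser numerically stable for large kappa.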
log_prob = kappa * tf.cos(phase - Y) - np.log(2*np.pi) - kappa - tf.log(tf.math.bessel_i0e(kappa))
return log_prob
@params_as_tensors
def conditional_mean(self, F, freq, **kwargs): # pylint: disable=R0201
"""The mean of the likelihood conditioned on latent."""
# ..., Nf
phase = self.tec_conversion * (F/freq)
return phase
@params_as_tensors
def conditional_variance(self, Y_var, **kwargs):
return Y_var + self.variance
def predict_mean_and_var(self, Fmu, Fvar, Y_var, freq):
r"""
Given a Normal distribution for the latent function,
        return the mean and the variance of Y
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive mean
\int\int y p(y|f)q(f) df dy
and the predictive variance
        \int\int y^2 p(y|f)q(f) df dy - [ \int\int y p(y|f)q(f) df dy ]^2
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (e.g. Gaussian) will implement specific cases.
"""
integrand2 = lambda *X, **kwargs: self.conditional_variance(*X, **kwargs) + tf.square(self.conditional_mean(*X, **kwargs))
E_y, E_y2 = ndiagquad([self.conditional_mean, integrand2],
self.num_gauss_hermite_points,
Fmu, Fvar, Y_var=Y_var, freq=freq)
V_y = E_y2 - tf.square(E_y)
return E_y, V_y
def predict_density(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Given a Normal distribution for the latent function, and a datum Y,
compute the log predictive density of Y.
i.e. if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes the predictive density
\log \int p(y=Y|f)q(f) df
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
return ndiagquad(self.logp,
self.num_gauss_hermite_points,
Fmu, Fvar, logspace=True, Y=Y, Y_var=Y_var, freq=freq)
def variational_expectations(self, Fmu, Fvar, Y, Y_var, freq):
r"""
Compute the expected log density of the data, given a Gaussian
distribution for the function values.
if
q(f) = N(Fmu, Fvar)
and this object represents
p(y|f)
then this method computes
\int (\log p(y|f)) q(f) df.
Here, we implement a default Gauss-Hermite quadrature routine, but some
likelihoods (Gaussian, Poisson) will implement specific cases.
"""
kappa = 1./Y_var
A = self.tec_conversion / freq
norm = np.log(2*np.pi) + kappa + tf.log(tf.math.bessel_i0e(kappa))
#..., Nf
# phase = self.tec_conversion * (Fmu / freq)
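        # Closed form: E_{f~N(Fmu,Fvar)}[cos(A*f - Y)] = cos(A*Fmu - Y) * exp(-A^2 * Fvar / 2),
        # so the variational expectation of the von Mises log-density needs no quadrature here.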
return kappa * tf.cos(A*Fmu - Y) * tf.exp(-A**2 * Fvar / 2.) - norm
| 40.875358 | 160 | 0.595212 | 3,933 | 28,531 | 4.167811 | 0.058988 | 0.018302 | 0.007687 | 0.040996 | 0.878294 | 0.86713 | 0.860908 | 0.855539 | 0.842484 | 0.836262 | 0 | 0.011348 | 0.298868 | 28,531 | 697 | 161 | 40.934003 | 0.808088 | 0.313203 | 0 | 0.651515 | 0 | 0 | 0.011037 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 1 | 0.148485 | false | 0.00303 | 0.030303 | 0.027273 | 0.345455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
918ffcfbae4018f8e031be6bd00c023c64d5092a | 6,592 | py | Python | fireant/tests/dataset/operations/test_share.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 122 | 2016-08-05T13:34:52.000Z | 2022-03-15T13:21:13.000Z | fireant/tests/dataset/operations/test_share.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 321 | 2016-08-10T08:48:15.000Z | 2021-07-28T13:08:18.000Z | fireant/tests/dataset/operations/test_share.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 27 | 2016-08-10T08:11:08.000Z | 2021-08-23T08:14:37.000Z | from unittest import TestCase
from unittest.mock import MagicMock
import numpy as np
import pandas as pd
import pandas.testing
from fireant import Field, Share
from fireant.dataset.references import Reference, WeekOverWeek
from fireant.tests.dataset.mocks import (
dimx0_metricx1_df,
dimx1_str_df,
dimx1_str_totals_df,
dimx2_date_str_df,
dimx2_date_str_totals_df,
dimx2_date_str_totalsx2_df,
mock_dataset,
)
from fireant.utils import alias_selector
class ShareTests(TestCase):
def test_apply_to_zero_dims(self):
share = Share(mock_dataset.fields.votes)
result = share.apply(dimx0_metricx1_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series([100.0], name=f_metric_key)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_one_dim_over_first(self):
share = Share(mock_dataset.fields.votes, over=mock_dataset.fields.political_party)
result = share.apply(dimx1_str_totals_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series([48.8487, 0.9638, 50.1873, 100.0], name=f_metric_key, index=dimx1_str_totals_df.index)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_one_dim_over_none(self):
share = Share(mock_dataset.fields.votes)
result = share.apply(dimx1_str_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series([100.0] * 3, name=f_metric_key, index=dimx1_str_df.index)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_two_dims_over_first(self):
share = Share(mock_dataset.fields.votes, over=mock_dataset.fields.timestamp)
result = share.apply(dimx2_date_str_totalsx2_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
metric_series = dimx2_date_str_totalsx2_df[f_metric_key]
expected = 100 * metric_series / metric_series.iloc[-1]
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_two_dims_over_second(self):
share = Share(mock_dataset.fields.votes, over=mock_dataset.fields.political_party)
result = share.apply(dimx2_date_str_totals_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series(
[
49.79,
7.07,
43.12,
100.0,
49.78,
50.21,
100.0,
48.83,
51.16,
100.0,
55.42,
44.57,
100.0,
60.39,
39.60,
100.0,
26.60,
73.39,
100.0,
],
name=f_metric_key,
index=dimx2_date_str_totals_df.index,
)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_two_dims_over_second_all_totals(self):
share = Share(mock_dataset.fields.votes, over=mock_dataset.fields.political_party)
result = share.apply(dimx2_date_str_totalsx2_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series(
[
49.79,
7.07,
43.12,
100.0,
49.78,
50.21,
100.0,
48.83,
51.16,
100.0,
55.42,
44.57,
100.0,
60.39,
39.60,
100.0,
26.60,
73.39,
100.0,
100.0,
],
name=f_metric_key,
index=dimx2_date_str_totalsx2_df.index,
)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_two_dims_over_second_with_one_row_per_group(self):
raw_df = dimx2_date_str_totals_df.iloc[[0, 3, 4, 6]]
share = Share(mock_dataset.fields.votes, over=mock_dataset.fields.political_party)
result = share.apply(raw_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series([49.79, 100.0, 49.78, 100.0], name=f_metric_key, index=raw_df.index)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_apply_to_two_dims_over_none(self):
share = Share(mock_dataset.fields.votes)
result = share.apply(dimx2_date_str_df, None)
f_metric_key = alias_selector(mock_dataset.fields.votes.alias)
expected = pd.Series([100.0] * 13, name=f_metric_key, index=dimx2_date_str_df.index)
pandas.testing.assert_series_equal(expected, result, rtol=0.5e-3)
def test_share_for_references_with_delta_percent(self):
dataset = MagicMock()
dataset.table._table_name = "table"
value_field = Field("value", None)
over_field = Field("dim-over", None)
share = Share(value_field, over_field)
reference = Reference(value_field, WeekOverWeek, delta=True, delta_percent=True)
df = pd.DataFrame.from_dict(
{
"$value_wow": [10, 15, 20, 5, 50],
"$value": [12, 16, 14, 8, 50],
"$share(value,dim-over)": [24, 32, 28, 16, 100],
"$dim-over": ["A", "B", "C", "D", "~~totals"],
}
).set_index('$dim-over')
result = share.apply(df, reference)
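        # Shares are 100 * value / total: current shares [24, 32, 28, 16, 100],
        # week-over-week shares [20, 30, 40, 10, 100]. The delta-percent reference is
        # 100 * (share - share_wow) / share_wow, e.g. 100 * (24 - 20) / 20 = 20.0.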
np.testing.assert_array_equal(([20.0, 6 + (2 / 3), -30.0, 60, 0]), result.values)
def test_share_for_references_with_delta(self):
dataset = MagicMock()
dataset.table._table_name = "table"
value_field = Field("value", None)
over_field = Field("dim-over", None)
share = Share(value_field, over_field)
reference = Reference(value_field, WeekOverWeek, delta=True, delta_percent=False)
df = pd.DataFrame.from_dict(
{
"$value_wow": [10, 15, 20, 5, 50],
"$value": [12, 16, 14, 8, 50],
"$share(value,dim-over)": [24, 32, 28, 16, 100],
"$dim-over": ["A", "B", "C", "D", "~~totals"],
}
).set_index('$dim-over')
result = share.apply(df, reference)
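        # With delta (not percent) the reference is share - share_wow:
        # [24, 32, 28, 16, 100] - [20, 30, 40, 10, 100] = [4, 2, -12, 6, 0].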
np.testing.assert_array_equal(([4, 2, -12, 6, 0]), result.values)
| 34.513089 | 115 | 0.598301 | 869 | 6,592 | 4.258918 | 0.151899 | 0.065388 | 0.09646 | 0.095109 | 0.848149 | 0.834639 | 0.807079 | 0.751419 | 0.743853 | 0.743853 | 0 | 0.072439 | 0.292172 | 6,592 | 190 | 116 | 34.694737 | 0.720746 | 0 | 0 | 0.615894 | 0 | 0 | 0.026092 | 0.006675 | 0 | 0 | 0 | 0 | 0.066225 | 1 | 0.066225 | false | 0 | 0.059603 | 0 | 0.13245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91b0cdf6068a00244732f06fa195518867211586 | 41 | py | Python | call_seq_browser/TextEdit/__init__.py | ya790206/call_seq_browser | 3f52fe3cc340af8e454f57d04e3dec17168a29bd | [
"Apache-2.0"
] | 3 | 2016-02-26T10:46:47.000Z | 2016-06-02T03:14:30.000Z | call_seq/TextEdit/__init__.py | ya790206/call_seq | ee6e0022e1731ce0c72e4101100b6f2a94812b15 | [
"Apache-2.0"
] | null | null | null | call_seq/TextEdit/__init__.py | ya790206/call_seq | ee6e0022e1731ce0c72e4101100b6f2a94812b15 | [
"Apache-2.0"
] | 1 | 2018-12-09T04:35:34.000Z | 2018-12-09T04:35:34.000Z | from .rich import RichTextEdit as Editor
| 20.5 | 40 | 0.829268 | 6 | 41 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 1 | 41 | 41 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91d1e25e88673b49acfb9880bb5b056b3503b465 | 35,070 | py | Python | tf_verify/deeppoly_nodes.py | yugeshk/eran | 30c6c92e4f2de1fa9da4dd445701385cbfe0fe57 | [
"Apache-2.0"
] | 4 | 2021-05-19T17:35:30.000Z | 2021-08-17T04:03:21.000Z | tf_verify/deeppoly_nodes.py | verivital/eran | f4cbc73422cda19316b6312bb288b0e46848510c | [
"Apache-2.0"
] | null | null | null | tf_verify/deeppoly_nodes.py | verivital/eran | f4cbc73422cda19316b6312bb288b0e46848510c | [
"Apache-2.0"
] | 1 | 2021-01-20T14:35:53.000Z | 2021-01-20T14:35:53.000Z | '''
@author: Adrian Hoffmann
'''
import numpy as np
from config import config, Device
if config.device == Device.CPU:
from fppoly import *
else:
from fppoly_gpu import *
from elina_interval import *
from elina_abstract0 import *
from elina_manager import *
from krelu import encode_krelu_cons
from ai_milp import *
from functools import reduce
def calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer = False, destroy=True, use_krelu = False):
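    """
    Computes concrete (box) lower/upper bounds for the given layer of the abstract element.
    When is_refine_layer is set the bounds are appended to nlb/nub, and when use_krelu is set
    k-ReLU constraints are encoded for the refinepoly domain. Returns (lbi, ubi) if destroy is
    True, otherwise (layerno, bounds, num_neurons, lbi, ubi) so the caller can free the ELINA
    interval array itself.
    """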
layerno = nn.calc_layerno()
bounds = box_for_layer(man, element, layerno)
num_neurons = get_num_neurons_in_layer(man, element, layerno)
itv = [bounds[i] for i in range(num_neurons)]
lbi = [x.contents.inf.contents.val.dbl for x in itv]
ubi = [x.contents.sup.contents.val.dbl for x in itv]
if is_refine_layer:
nlb.append(lbi)
nub.append(ubi)
if use_krelu:
encode_krelu_cons(nn, man, element, 0, layerno, num_neurons, lbi, ubi, relu_groups, False, 'refinepoly')
if destroy:
elina_interval_array_free(bounds,num_neurons)
return lbi, ubi
return layerno, bounds, num_neurons, lbi, ubi
def add_input_output_information_deeppoly(self, input_names, output_name, output_shape):
"""
sets for an object the three fields:
- self.output_length
- self.input_names
- self.output_name
which will mainly be used by the Optimizer, but can also be used by the Nodes itself
Arguments
---------
self : Object
will be a DeepzonoNode, but could be any object
input_names : iterable
iterable of strings, each one being the name of another Deepzono-Node
output_name : str
name of self
output_shape : iterable
iterable of ints with the shape of the output of this node
Return
------
None
"""
self.output_length = reduce((lambda x, y: x*y), output_shape)
self.input_names = input_names
self.output_name = output_name
class DeeppolyInput:
def __init__(self, specLB, specUB, input_names, output_name, output_shape,
lexpr_weights=None, lexpr_cst=None, lexpr_dim=None,
uexpr_weights=None, uexpr_cst=None, uexpr_dim=None,
expr_size=0):
"""
Arguments
---------
specLB : numpy.ndarray
1D array with the lower bound of the input spec
specUB : numpy.ndarray
1D array with the upper bound of the input spec
lexpr_weights: numpy.ndarray
ndarray of doubles with coefficients of lower polyhedral expressions
lexpr_cst: numpy.ndarray
ndarray of doubles with the constants of lower polyhedral expressions
lexpr_dim: numpy.ndarray
ndarray of unsigned int with the indexes of pixels from the original image for the lower polyhedral expressions
uexpr_weights: numpy.ndarray
ndarray of doubles with coefficients of upper polyhedral expressions
uexpr_cst: numpy.ndarray
ndarray of doubles with the constants of upper polyhedral expressions
uexpr_dim: numpy.ndarray
ndarray of unsigned int with the indexes of pixels from the original image for the upper polyhedral expressions
expr_size: numpy.ndarray
unsigned int with the sizes of polyhedral expressions
"""
self.specLB = np.ascontiguousarray(specLB, dtype=np.double)
self.specUB = np.ascontiguousarray(specUB, dtype=np.double)
if lexpr_weights is not None:
self.lexpr_weights = np.ascontiguousarray(lexpr_weights, dtype=np.double)
else:
self.lexpr_weights = None
if lexpr_cst is not None:
self.lexpr_cst = np.ascontiguousarray(lexpr_cst, dtype=np.double)
else:
self.lexpr_cst = None
if lexpr_dim is not None:
self.lexpr_dim = np.ascontiguousarray(lexpr_dim, dtype=np.uintp)
else:
self.lexpr_dim = None
if uexpr_weights is not None:
self.uexpr_weights = np.ascontiguousarray(uexpr_weights, dtype=np.double)
else:
self.uexpr_weights = None
if uexpr_cst is not None:
self.uexpr_cst = np.ascontiguousarray(uexpr_cst, dtype=np.double)
else:
self.uexpr_cst = None
if uexpr_dim is not None:
self.uexpr_dim = np.ascontiguousarray(lexpr_dim, dtype=np.uintp)
else:
self.uexpr_dim = None
self.expr_size = expr_size
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, man):
"""
creates an abstract element from the input spec
Arguments
---------
man : ElinaManagerPtr
inside this manager the abstract element will be created
Return
------
output : ElinaAbstract0Ptr
new abstract element representing the element specified by self.specLB and self.specUB
"""
if self.expr_size == 0:
return fppoly_from_network_input(man, 0, len(self.specLB), self.specLB, self.specUB)
else:
return fppoly_from_network_input_poly(man, 0, len(self.specLB), self.specLB, self.specUB,
self.lexpr_weights, self.lexpr_cst, self.lexpr_dim,
self.uexpr_weights, self.uexpr_cst, self.uexpr_dim, self.expr_size)
class DeeppolyNode:
"""
Parent class for all the classes that implement fully connected layers
"""
def __init__(self, weights, bias, input_names, output_name, output_shape):
"""
Arguments
---------
weights : numpy.ndarray
matrix of the fully connected layer (must be 2D)
bias : numpy.ndarray
bias of the fully connected layer
"""
self.weights = np.ascontiguousarray(weights, dtype=np.double)
self.bias = np.ascontiguousarray(bias, dtype=np.double)
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def get_arguments(self):
"""
facilitates putting together all the arguments for the transformers in the child classes
Return
------
output : tuple
the four entries are pointers to the rows of the matrix, the bias, the length of the output, and the length of the input
"""
xpp = self.get_xpp()
return xpp, self.bias, self.weights.shape[0], self.weights.shape[1], self.predecessors
def get_xpp(self):
"""
helper function to get pointers to the rows of self.weights.
Return
------
output : numpy.ndarray
pointers to the rows of the matrix
"""
return (self.weights.__array_interface__['data'][0]+ np.arange(self.weights.shape[0])*self.weights.strides[0]).astype(np.uintp)
class DeeppolyReluNodeFirst(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for the first layer of a neural network, if that first layer is fully connected with relu
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_first_relu_layer(man, element, *self.get_arguments())
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True, use_krelu=refine)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolySigmoidNodeFirst(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for the first layer of a neural network, if that first layer is fully connected with sigmoid
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_first_sigmoid_layer(man, element, *self.get_arguments())
if testing:
lb, ub = calc_bounds(man, element, nn, nlb, nub, relu_groups)
nn.ffn_counter+=1
if testing:
return element, lb, ub
return element
class DeeppolyTanhNodeFirst(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for the first layer of a neural network, if that first layer is fully connected with tanh
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_first_tanh_layer(man, element, *self.get_arguments())
if testing:
lb, ub = calc_bounds(man, element, nn, nlb, nub, relu_groups)
nn.ffn_counter+=1
if testing:
return element, lb, ub
return element
class DeeppolyReluNodeIntermediate(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for any intermediate fully connected layer with relu
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_intermediate_relu_layer(man, element, *self.get_arguments(), use_default_heuristic)
layerno, bounds, num_neurons, lbi, ubi = calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True, destroy = False)
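        # Neurons whose pre-activation bounds straddle zero are the unstable ReLUs;
        # only these are candidates for the LP/MILP bound refinement below.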
candidate_vars = [i for i, (l, u) in enumerate(zip(lbi, ubi)) if l<0 and u>0]
#print("lbi ", timeout_milp, "ubi ", timeout_lp)
if refine:
if layerno <= 1:
use_milp = config.use_milp
else:
use_milp = 0
if use_milp:
timeout = timeout_milp
else:
timeout = timeout_lp
if nn.is_ffn():
resl, resu, indices = get_bounds_for_layer_with_milp(nn, nn.specLB, nn.specUB, layerno, layerno, num_neurons, nlb, nub, relu_groups, use_milp, candidate_vars, timeout)
for j in indices:
update_bounds_for_neuron(man,element,layerno,j,resl[j],resu[j])
nlb[-1] = resl
nub[-1] = resu
encode_krelu_cons(nn, man, element, 0, layerno, num_neurons, lbi, ubi, relu_groups, False, 'refinepoly')
elina_interval_array_free(bounds,num_neurons)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolySigmoidNodeIntermediate(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for any intermediate fully connected layer with sigmoid
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_intermediate_sigmoid_layer(man, element, *self.get_arguments(), use_default_heuristic)
if testing or refine:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyTanhNodeIntermediate(DeeppolyNode):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for any intermediate fully connected layer with tanh
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_intermediate_tanh_layer(man, element, *self.get_arguments(), use_default_heuristic)
if testing or refine:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyReluNodeLast(DeeppolyNode):
def __init__(self, weights, bias, relu_present, input_names, output_name, output_shape):
"""
Arguments
---------
weights : numpy.ndarray
matrix of the fully connected layer (must be 2D)
bias : numpy.ndarray
bias of the fully connected layer
relu_present : bool
whether this layer has relu or not
"""
DeeppolyNode.__init__(self, weights, bias, input_names, output_name, output_shape)
self.relu_present = relu_present
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a fully connected layer if it's the last layer in the network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_last_relu_layer(man, element, *self.get_arguments(), self.relu_present, use_default_heuristic)
layerno, bounds, num_neurons, lbi, ubi = calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True, destroy=False)
candidate_vars = [i for i, (l, u) in enumerate(zip(lbi, ubi)) if l<0 and u>0]
if(refine):
if layerno<=1:
use_milp = 1
else:
use_milp = 0
if use_milp:
timeout = timeout_milp
else:
timeout = timeout_lp
if nn.is_ffn():
resl, resu, indices = get_bounds_for_layer_with_milp(nn, nn.specLB, nn.specUB, layerno, layerno, num_neurons, nlb, nub, relu_groups, use_milp, candidate_vars, timeout)
for j in indices:
update_bounds_for_neuron(man,element,layerno,j,resl[j],resu[j])
#print("resl ", resl, "resu ", resu)
nlb[-1] = resl
nub[-1] = resu
encode_krelu_cons(nn, man, element, 0, layerno, num_neurons, lbi, ubi, relu_groups, False, 'refinepoly')
elina_interval_array_free(bounds,num_neurons)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolySigmoidNodeLast(DeeppolyNode):
def __init__(self, weights, bias, sigmoid_present, input_names, output_name, output_shape):
"""
Arguments
---------
weights : numpy.ndarray
matrix of the fully connected layer (must be 2D)
bias : numpy.ndarray
bias of the fully connected layer
relu_present : bool
whether this layer has sigmoid or not
"""
        DeeppolyNode.__init__(self, weights, bias, input_names, output_name, output_shape)
self.sigmoid_present = sigmoid_present
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a fully connected layer if it's the last layer in the network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_last_sigmoid_layer(man, element, *self.get_arguments(), self.sigmoid_present, use_default_heuristic)
if testing:
            lb, ub = calc_bounds(man, element, nn, nlb, nub, relu_groups)
nn.ffn_counter+=1
if testing:
return element, lb, ub
return element
class DeeppolyTanhNodeLast(DeeppolyNode):
def __init__(self, weights, bias, tanh_present, input_names, output_name, output_shape):
"""
Arguments
---------
weights : numpy.ndarray
matrix of the fully connected layer (must be 2D)
bias : numpy.ndarray
bias of the fully connected layer
relu_present : bool
whether this layer has relu or not
"""
        DeeppolyNode.__init__(self, weights, bias, input_names, output_name, output_shape)
self.tanh_present = tanh_present
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a fully connected layer if it's the last layer in the network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
ffn_handle_last_tanh_layer(man, element, *self.get_arguments(), self.tanh_present, use_default_heuristic)
if testing:
            lb, ub = calc_bounds(man, element, nn, nlb, nub, relu_groups)
nn.ffn_counter+=1
if testing:
return element, lb, ub
return element
class DeeppolyConv2dNodeIntermediate:
def __init__(self, filters, strides, pad_top, pad_left, bias, image_shape, input_names, output_name, output_shape, has_relu):
"""
collects the information needed for the conv_handle_intermediate_relu_layer transformer and brings it into the required shape
Arguments
---------
filters : numpy.ndarray
the actual 4D filter of the convolutional layer
strides : numpy.ndarray
1D with to elements, stride in height and width direction
bias : numpy.ndarray
the bias of the layer
image_shape : numpy.ndarray
1D array of ints with 3 entries [height, width, channels] representing the shape of the of the image that is passed to the conv-layer
"""
self.image_shape = np.ascontiguousarray(image_shape, dtype=np.uintp)
self.filters = np.ascontiguousarray(filters, dtype=np.double)
self.strides = np.ascontiguousarray(strides, dtype=np.uintp)
self.bias = np.ascontiguousarray(bias, dtype=np.double)
self.out_size = (c_size_t * 3)(output_shape[1], output_shape[2], output_shape[3])
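        # output size as a C array for the ELINA bindings; output_shape is assumed to be
        # NHWC, so entries 1..3 are (height, width, channels).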
self.pad_top = pad_top
self.pad_left = pad_left
self.has_relu = has_relu
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def get_arguments(self):
"""
facilitates putting together all the arguments for the transformers in the child classes
Return
------
output : tuple
the 5 entries are:
1. the filter (numpy.ndarray)
2. the bias (numpy.ndarray)
3. the image_shape (numpy.ndarray)
4. length of a side of the square kernel (int)
5. number of filters (int)
"""
filter_size = (c_size_t * 2) (self.filters.shape[0], self.filters.shape[1])
numfilters = self.filters.shape[3]
strides = (c_size_t * 2)(self.strides[0], self.strides[1])
return self.filters, self.bias, self.image_shape, filter_size, numfilters, strides, self.out_size, self.pad_top, self.pad_left, True, self.predecessors
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a convolutional layer, if that layer is an intermediate of the network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
if(self.has_relu):
conv_handle_intermediate_relu_layer(man, element, *self.get_arguments(), use_default_heuristic)
else:
conv_handle_intermediate_affine_layer(man, element, *self.get_arguments(), use_default_heuristic)
layerno, bounds, num_neurons, lbi, ubi = calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True, destroy=False)
candidate_vars = [i for i, (l, u) in enumerate(zip(lbi, ubi)) if l<0 and u>0]
if(refine):
use_milp = config.use_milp
if use_milp:
timeout = timeout_milp
else:
timeout = timeout_lp
#numconvslayers = sum('Conv2D' in l for l in nn.layertypes)
#if numconvslayers-nn.conv_counter <= 1:
if nn.is_ffn():
                resl, resu, indices = get_bounds_for_layer_with_milp(nn, nn.specLB, nn.specUB, layerno, layerno, num_neurons, nlb, nub, relu_groups, use_milp, candidate_vars, timeout)
nlb[-1] = resl
nub[-1] = resu
for j in indices:
update_bounds_for_neuron(man,element,layerno,j,resl[j],resu[j])
encode_krelu_cons(nn, man, element, 0, layerno, num_neurons, lbi, ubi, relu_groups, False, 'refinepoly')
elina_interval_array_free(bounds,num_neurons)
nn.conv_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyConv2dNodeFirst(DeeppolyConv2dNodeIntermediate):
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a convolutional layer, if that layer is the first of the network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
conv_handle_first_layer(man, element, *self.get_arguments())
calc_bounds(man, element, nn, nlb, nub, relu_groups, use_krelu=refine, is_refine_layer=True)
nn.conv_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyMaxpool:
def __init__(self, image_shape, window_size, strides, input_names, output_name, output_shape):
"""
collects the information needed for the handle_maxpool_layer transformer and brings it into the required shape
Arguments
---------
        image_shape : numpy.ndarray
1D array of ints with 3 entries [height, width, channels] representing the shape of the of the image that is passed to the conv-layer
window_size : numpy.ndarray
1D array of ints with 2 entries [height, width] representing the window's size in these directions
strides : numpy.ndarray
1D array of ints with 2 entries [height, width] representing the stride in these directions
"""
self.image_shape = np.ascontiguousarray(image_shape, dtype=np.uintp)
self.window_size = np.ascontiguousarray([window_size[0], window_size[1], 1], dtype=np.uintp)
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
"""
transformer for a maxpool layer, this can't be the first layer of a network
Arguments
---------
man : ElinaManagerPtr
man to which element belongs
element : ElinaAbstract0Ptr
abstract element onto which the transformer gets applied
Return
------
output : ElinaAbstract0Ptr
abstract element after the transformer
"""
handle_maxpool_layer(man, element, self.window_size, self.image_shape, self.predecessors)
if refine or testing:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.maxpool_counter += 1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyResadd:
def __init__(self, input_names, output_name, output_shape, has_relu):
"""
Arguments
---------
input_names : iterable
iterable with the names of the two nodes you want to add
output_name : str
name of this node's output
output_shape : iterable
iterable of ints with the shape of the output of this node
"""
self.has_relu = has_relu
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
if(self.has_relu):
handle_residual_relu_layer(man,element,self.output_length,self.predecessors,use_default_heuristic)
else:
handle_residual_affine_layer(man,element,self.output_length,self.predecessors,use_default_heuristic)
calc_bounds(man, element, nn, nlb, nub, relu_groups, use_krelu=refine, is_refine_layer=True)
# print("Residual ", nn.layertypes[layerno],layerno)
nn.residual_counter += 1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyGather:
def __init__(self, indexes, input_names, output_name, output_shape):
"""
collects the information needed for the handle_gather_layer transformer and brings it into the required shape
Arguments
---------
indexes : numpy.ndarray
            1D array of unsigned ints with the indexes of the input elements that are gathered into the output of this node
"""
self.indexes = np.ascontiguousarray(indexes, dtype=np.uintp)
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
handle_gather_layer(man, element, self.indexes)
return element
class DeeppolySubNodeFirst:
def __init__(self, bias, is_minuend, input_names, output_name, output_shape):
"""
        collects the information needed for the ffn_handle_first_sub_layer transformer and brings it into the required shape
        Arguments
        ---------
        bias : numpy.ndarray
            the constant operand of the elementwise subtraction
        is_minuend : bool
            determines which operand of the elementwise subtraction is the minuend
"""
self.bias = np.ascontiguousarray(bias, dtype=np.float64)
self.is_minuend = is_minuend
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
ffn_handle_first_sub_layer(man, element, self.bias, self.is_minuend, len(self.bias.reshape(-1)), self.predecessors)
if refine or testing:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolySubNodeIntermediate:
def __init__(self, bias, is_minuend, input_names, output_name, output_shape):
"""
        collects the information needed for the ffn_handle_intermediate_sub_layer transformer and brings it into the required shape
        Arguments
        ---------
        bias : numpy.ndarray
            the constant operand of the elementwise subtraction (flattened to 1D)
        is_minuend : bool
            determines which operand of the elementwise subtraction is the minuend
"""
self.bias = np.ascontiguousarray(bias.reshape(-1), dtype=np.float64)
self.is_minuend = is_minuend
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
layerno = nn.calc_layerno()
num_neurons = get_num_neurons_in_layer(man, element, layerno)
ffn_handle_intermediate_sub_layer(man, element, self.bias, self.is_minuend, num_neurons, self.predecessors, use_default_heuristic)
if refine or testing:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyMulNodeFirst:
def __init__(self, bias, input_names, output_name, output_shape):
"""
        collects the information needed for the ffn_handle_first_mul_layer transformer and brings it into the required shape
        Arguments
        ---------
        bias : numpy.ndarray
            the constant factor that the input of this node is multiplied with elementwise
"""
self.bias = np.ascontiguousarray(bias, dtype=np.float64)
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
ffn_handle_first_mul_layer(man, element, self.bias, len(self.bias.reshape(-1)), self.predecessors)
if refine or testing:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
class DeeppolyMulNodeIntermediate:
def __init__(self, bias, input_names, output_name, output_shape):
"""
        collects the information needed for the ffn_handle_intermediate_mul_layer transformer and brings it into the required shape
        Arguments
        ---------
        bias : numpy.ndarray
            the constant factor that the input of this node is multiplied with elementwise (flattened to 1D)
"""
self.bias = np.ascontiguousarray(bias.reshape(-1), dtype=np.float64)
add_input_output_information_deeppoly(self, input_names, output_name, output_shape)
def transformer(self, nn, man, element, nlb, nub, relu_groups, refine, timeout_lp, timeout_milp, use_default_heuristic, testing):
ffn_handle_intermediate_mul_layer(man, element, self.bias, len(self.bias.reshape(-1)), self.predecessors, use_default_heuristic)
if refine or testing:
calc_bounds(man, element, nn, nlb, nub, relu_groups, is_refine_layer=True)
nn.ffn_counter+=1
if testing:
return element, nlb[-1], nub[-1]
return element
| 41.065574 | 184 | 0.629512 | 4,216 | 35,070 | 5.055028 | 0.074241 | 0.030968 | 0.017361 | 0.027778 | 0.803726 | 0.774399 | 0.753566 | 0.737988 | 0.730293 | 0.718187 | 0 | 0.007392 | 0.294069 | 35,070 | 853 | 185 | 41.113716 | 0.853456 | 0.328914 | 0 | 0.605882 | 0 | 0 | 0.002216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108824 | false | 0 | 0.029412 | 0 | 0.320588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37e5545312950baed8e45d929ea6e8a99b2a4b2d | 27 | py | Python | raymarching/__init__.py | NamorNiradnug/raymarching | 49cbe47b0a0616e6f3cb0c0eb1d3bcf50eb22fff | [
"MIT"
] | null | null | null | raymarching/__init__.py | NamorNiradnug/raymarching | 49cbe47b0a0616e6f3cb0c0eb1d3bcf50eb22fff | [
"MIT"
] | null | null | null | raymarching/__init__.py | NamorNiradnug/raymarching | 49cbe47b0a0616e6f3cb0c0eb1d3bcf50eb22fff | [
"MIT"
] | null | null | null | from .raymarching import *
| 13.5 | 26 | 0.777778 | 3 | 27 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37f50bfff26a9d21699607a2917c38e52f099bf6 | 33 | py | Python | vrpc/__init__.py | pablitovicente/vrpc_260_debug | 6ffffcaefae261772fd71c642d5d6ef349440b2d | [
"MIT"
] | null | null | null | vrpc/__init__.py | pablitovicente/vrpc_260_debug | 6ffffcaefae261772fd71c642d5d6ef349440b2d | [
"MIT"
] | null | null | null | vrpc/__init__.py | pablitovicente/vrpc_260_debug | 6ffffcaefae261772fd71c642d5d6ef349440b2d | [
"MIT"
] | null | null | null | from .VrpcLocal import VrpcLocal
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
727a8db4db591fb6ca953bdbdadf6777fd3c6e4f | 1,776 | py | Python | quadpy/hexahedron/helpers.py | gdmcbain/quadpy | c083d500027d7c1b2187ae06ff2b7fbdd360ccc7 | [
"MIT"
] | 1 | 2019-01-02T19:04:42.000Z | 2019-01-02T19:04:42.000Z | quadpy/hexahedron/helpers.py | gdmcbain/quadpy | c083d500027d7c1b2187ae06ff2b7fbdd360ccc7 | [
"MIT"
] | null | null | null | quadpy/hexahedron/helpers.py | gdmcbain/quadpy | c083d500027d7c1b2187ae06ff2b7fbdd360ccc7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
import numpy
def z():
return numpy.array([[0, 0, 0]])
def fs_r00(a):
return numpy.array(
[[+a, 0, 0], [0, +a, 0], [0, 0, +a], [-a, 0, 0], [0, -a, 0], [0, 0, -a]]
)
def fs_rr0(a):
return numpy.array(
[
[+a, +a, 0],
[+a, 0, +a],
[0, +a, +a],
[+a, -a, 0],
[+a, 0, -a],
[0, +a, -a],
[-a, +a, 0],
[-a, 0, +a],
[0, -a, +a],
[-a, -a, 0],
[-a, 0, -a],
[0, -a, -a],
]
)
def fs_rrs(a, b):
return numpy.array(
[
[+a, +a, +b],
[+a, +b, +a],
[+b, +a, +a],
[+a, -a, +b],
[+a, +b, -a],
[+b, +a, -a],
[-a, +a, +b],
[-a, +b, +a],
[+b, -a, +a],
[-a, -a, +b],
[-a, +b, -a],
[+b, -a, -a],
[+a, +a, -b],
[+a, -b, +a],
[-b, +a, +a],
[+a, -a, -b],
[+a, -b, -a],
[-b, +a, -a],
[-a, +a, -b],
[-a, -b, +a],
[-b, -a, +a],
[-a, -a, -b],
[-a, -b, -a],
[-b, -a, -a],
]
)
def rss_pm(r, s):
return numpy.array(
[
[+r, +s, +s],
[+s, +r, +s],
[+s, +s, +r],
[-r, -s, -s],
[-s, -r, -s],
[-s, -s, -r],
]
)
def pm_rrr(a):
return numpy.array(
[
[+a, +a, +a],
[-a, +a, +a],
[+a, -a, +a],
[-a, -a, +a],
[+a, +a, -a],
[-a, +a, -a],
[+a, -a, -a],
[-a, -a, -a],
]
)
| 19.304348 | 80 | 0.193694 | 218 | 1,776 | 1.555046 | 0.09633 | 0.342183 | 0.371681 | 0.365782 | 0.693215 | 0.59587 | 0.495575 | 0.495575 | 0.495575 | 0.389381 | 0 | 0.037304 | 0.532095 | 1,776 | 91 | 81 | 19.516484 | 0.370638 | 0.011824 | 0 | 0.064935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077922 | false | 0 | 0.012987 | 0.077922 | 0.168831 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7287be5aae461d6a67ddf9b30b8b3cc1d7b986c0 | 24 | py | Python | foil/data/datasets/__init__.py | nbrgr/Foil-VLBert | f6a1b54a87affa91a7362216e8c7598e30d45ae5 | [
"MIT"
] | null | null | null | foil/data/datasets/__init__.py | nbrgr/Foil-VLBert | f6a1b54a87affa91a7362216e8c7598e30d45ae5 | [
"MIT"
] | null | null | null | foil/data/datasets/__init__.py | nbrgr/Foil-VLBert | f6a1b54a87affa91a7362216e8c7598e30d45ae5 | [
"MIT"
] | null | null | null | from .foil import Foil
| 8 | 22 | 0.75 | 4 | 24 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 2 | 23 | 12 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
72e85908a10f2b7bee28f8f0e12051d7e5b8f76b | 19 | py | Python | frameworks/Python/api_hour/yocto_http/hello/endpoints/__init__.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 5,300 | 2015-01-02T08:04:20.000Z | 2022-03-31T10:08:33.000Z | frameworks/Python/api_hour/yocto_http/hello/endpoints/__init__.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 3,075 | 2015-01-01T05:11:45.000Z | 2022-03-31T23:56:33.000Z | frameworks/Python/api_hour/yocto_http/hello/endpoints/__init__.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 2,151 | 2015-01-02T14:16:09.000Z | 2022-03-30T00:15:26.000Z | from . import world | 19 | 19 | 0.789474 | 3 | 19 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
72fbbf5336a69e038b317edbf5d3b989493f3ca6 | 28 | py | Python | lcm/__init__.py | bashirk/lcmfinda | da57e9127367cdd2b24fbb351dc478b2318a2882 | [
"MIT"
] | null | null | null | lcm/__init__.py | bashirk/lcmfinda | da57e9127367cdd2b24fbb351dc478b2318a2882 | [
"MIT"
] | null | null | null | lcm/__init__.py | bashirk/lcmfinda | da57e9127367cdd2b24fbb351dc478b2318a2882 | [
"MIT"
] | null | null | null | from lcm.lcm import cal_lcm
| 14 | 27 | 0.821429 | 6 | 28 | 3.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4214cfb5642facc7a9c6ae222a0074549016e89 | 28 | py | Python | conduit/data/datasets/audio/__init__.py | DavidHurst/palbolts | 72f9ca3f82499b532f14d0e797426e1b425d3efe | [
"MIT"
] | 2 | 2021-07-15T20:36:25.000Z | 2021-08-04T15:53:50.000Z | conduit/data/datasets/audio/__init__.py | DavidHurst/palbolts | 72f9ca3f82499b532f14d0e797426e1b425d3efe | [
"MIT"
] | 18 | 2021-09-07T13:50:10.000Z | 2021-12-06T19:02:23.000Z | conduit/data/datasets/audio/__init__.py | predictive-analytics-lab/pal-bolts | 5f1932f351f2e551276b47dfeda7888772d99895 | [
"MIT"
] | 1 | 2022-03-24T03:52:44.000Z | 2022-03-24T03:52:44.000Z | from .ecoacoustics import *
| 14 | 27 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f473190f9141668ad046402b61fb1a081fb9f63e | 27 | py | Python | examples/verlet_chain/__init__.py | abbasegbeyemi/pyqtgraph | 6aeafce477d1d7eebb9d2fe824d4c5573ef9ceed | [
"MIT"
] | 150 | 2018-03-27T16:45:37.000Z | 2022-03-30T03:47:56.000Z | examples/verlet_chain/__init__.py | abbasegbeyemi/pyqtgraph | 6aeafce477d1d7eebb9d2fe824d4c5573ef9ceed | [
"MIT"
] | 67 | 2019-11-30T14:45:05.000Z | 2022-03-14T20:26:06.000Z | examples/verlet_chain/__init__.py | abbasegbeyemi/pyqtgraph | 6aeafce477d1d7eebb9d2fe824d4c5573ef9ceed | [
"MIT"
] | 40 | 2018-04-06T19:42:21.000Z | 2022-01-11T00:34:17.000Z | from .chain import ChainSim | 27 | 27 | 0.851852 | 4 | 27 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
be420617d1f6a9d4325f318fcad5cbb304689b60 | 136 | py | Python | ex9.py | nguyennam9696/Learn_Python_The_Hard_Way | 402ffad8d8dc80f0c1f541d8e3d69980268bb559 | [
"MIT"
] | null | null | null | ex9.py | nguyennam9696/Learn_Python_The_Hard_Way | 402ffad8d8dc80f0c1f541d8e3d69980268bb559 | [
"MIT"
] | null | null | null | ex9.py | nguyennam9696/Learn_Python_The_Hard_Way | 402ffad8d8dc80f0c1f541d8e3d69980268bb559 | [
"MIT"
] | null | null | null | print "I am 6\'2\""
print "I am \\ a \\ cat."
print "\tTabbed"
while True:
for i in ["/", "-", "|", "\\", "|"]:
print "%s\r" % i, | 19.428571 | 38 | 0.419118 | 21 | 136 | 2.714286 | 0.666667 | 0.210526 | 0.280702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0.242647 | 136 | 7 | 39 | 19.428571 | 0.533981 | 0 | 0 | 0 | 0 | 0 | 0.328467 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
be6bcaa5e7bf3d656bbeef0d053d9e75c2dd3e67 | 6,391 | py | Python | projects/vdk-core/tests/vdk/internal/builtin_plugins/connection/test_managed_cursor.py | vmware/versatile-data-kit | c4e10324a4f3203c58079cb18203880f68053f15 | [
"Apache-2.0"
] | 100 | 2021-10-04T09:32:04.000Z | 2022-03-30T11:23:53.000Z | projects/vdk-core/tests/vdk/internal/builtin_plugins/connection/test_managed_cursor.py | vmware/versatile-data-kit | c4e10324a4f3203c58079cb18203880f68053f15 | [
"Apache-2.0"
] | 208 | 2021-10-04T16:56:40.000Z | 2022-03-31T10:41:44.000Z | projects/vdk-core/tests/vdk/internal/builtin_plugins/connection/test_managed_cursor.py | vmware/versatile-data-kit | c4e10324a4f3203c58079cb18203880f68053f15 | [
"Apache-2.0"
] | 14 | 2021-10-11T14:15:13.000Z | 2022-03-11T13:39:17.000Z | # Copyright 2021 VMware, Inc.
# SPDX-License-Identifier: Apache-2.0
import logging
from unittest.mock import call
import pytest
from vdk.internal.builtin_plugins.connection.decoration_cursor import DecorationCursor
from vdk.internal.builtin_plugins.connection.recovery_cursor import RecoveryCursor
from vdk.plugin.test_utils.util_funcs import populate_mock_managed_cursor
_query = "select 1"
def test_validation__query_valid__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
mock_managed_cursor.execute(_query)
mock_connection_hook_spec.db_connection_validate_operation.assert_called_once_with(
operation=_query, parameters=None
)
mock_native_cursor.execute.assert_called_once()
def test_validation__query_nonvalid__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
mock_connection_hook_spec.db_connection_validate_operation.side_effect = Exception(
"Validation exception"
)
with pytest.raises(Exception) as e:
mock_managed_cursor.execute(_query)
assert "Validation exception" == e.value.args[0]
mock_native_cursor.execute.assert_not_called()
def test_decoration__success__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
def mock_decorate(decoration_cursor: DecorationCursor):
managed_operation = decoration_cursor.get_managed_operation()
managed_operation.set_operation(
f"decorated {managed_operation.get_operation()}"
)
mock_connection_hook_spec.db_connection_decorate_operation.side_effect = (
mock_decorate
)
mock_managed_cursor.execute(_query)
mock_connection_hook_spec.db_connection_decorate_operation.assert_called_once()
calls = [call(f"decorated {_query}")]
mock_native_cursor.execute.assert_has_calls(calls)
def test_decoration__failure__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
mock_connection_hook_spec.db_connection_decorate_operation.side_effect = Exception(
"Decoration exception"
)
with pytest.raises(Exception) as e:
mock_managed_cursor.execute(_query)
assert True == mock_connection_hook_spec.db_connection_decorate_operation.called
assert "Decoration exception" == e.value.args[0]
mock_native_cursor.execute.assert_not_called()
def test_recovery__success__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
def mock_decorate(decoration_cursor: DecorationCursor):
managed_operation = decoration_cursor.get_managed_operation()
managed_operation.set_operation(
f"decorated {managed_operation.get_operation()}"
)
def mock_recover(recovery_cursor: RecoveryCursor):
recovery_cursor.execute("recovery")
recovery_cursor.retry_operation()
assert recovery_cursor.get_retries() == 1
mock_connection_hook_spec.db_connection_decorate_operation.side_effect = (
mock_decorate
)
mock_connection_hook_spec.db_connection_recover_operation.side_effect = mock_recover
exception = Exception()
mock_native_cursor.execute.side_effect = [exception, None, None]
mock_managed_cursor.execute(_query)
mock_connection_hook_spec.db_connection_recover_operation.assert_called_once()
calls = [
call(f"decorated {_query}"),
call(f"decorated recovery"),
call(f"decorated {_query}"),
]
mock_native_cursor.execute.assert_has_calls(calls)
def test_recovery__failure__execute():
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
def mock_decorate(decoration_cursor: DecorationCursor):
managed_operation = decoration_cursor.get_managed_operation()
managed_operation.set_operation(
f"decorated {managed_operation.get_operation()}"
)
def mock_recover(recovery_cursor: RecoveryCursor):
raise Exception("Could not handle execution exception")
mock_connection_hook_spec.db_connection_decorate_operation.side_effect = (
mock_decorate
)
mock_connection_hook_spec.db_connection_recover_operation.side_effect = mock_recover
exception = Exception()
mock_native_cursor.execute.side_effect = exception
with pytest.raises(Exception) as e:
mock_managed_cursor.execute(_query)
assert "Could not handle execution exception" == e.value.args[0]
mock_connection_hook_spec.db_connection_recover_operation.assert_called_once()
mock_native_cursor.execute.assert_called_once()
def test_query_timing_successful_query(caplog):
caplog.set_level(logging.INFO)
(
_,
mock_managed_cursor,
_,
_,
_,
) = populate_mock_managed_cursor()
mock_managed_cursor.execute(_query)
assert "Query duration 00h:00m:" in str(caplog.records)
def test_query_timing_recovered_query(caplog):
caplog.set_level(logging.INFO)
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
_,
) = populate_mock_managed_cursor()
mock_native_cursor.execute.side_effect = [Exception("Mock exception")]
mock_managed_cursor.execute(_query)
assert "Recovered query duration 00h:00m:" in str(caplog.records)
def test_query_timing_failed_query(caplog):
caplog.set_level(logging.INFO)
(
mock_native_cursor,
mock_managed_cursor,
_,
_,
mock_connection_hook_spec,
) = populate_mock_managed_cursor()
exception = Exception("Mock exception")
mock_native_cursor.execute.side_effect = [exception]
mock_connection_hook_spec.db_connection_recover_operation.side_effect = [exception]
with pytest.raises(Exception):
mock_managed_cursor.execute(_query)
assert "Failed query duration 00h:00m:" in str(caplog.records)
| 30.004695 | 88 | 0.724613 | 718 | 6,391 | 5.940111 | 0.129526 | 0.072216 | 0.111606 | 0.103165 | 0.842673 | 0.831887 | 0.807503 | 0.778898 | 0.707386 | 0.660492 | 0 | 0.004506 | 0.201377 | 6,391 | 212 | 89 | 30.146226 | 0.831113 | 0.009858 | 0 | 0.620482 | 0 | 0 | 0.077312 | 0.016601 | 0 | 0 | 0 | 0 | 0.108434 | 1 | 0.084337 | false | 0 | 0.036145 | 0 | 0.120482 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be7f0d131224cf60a95fdd0a0b5d6aaf6b19a1a0 | 793 | py | Python | seapy/model/__init__.py | hadfieldnz/seapy | 88c2747dbcf85fa9d1da3509e4c510c53016680b | [
"MIT"
] | null | null | null | seapy/model/__init__.py | hadfieldnz/seapy | 88c2747dbcf85fa9d1da3509e4c510c53016680b | [
"MIT"
] | null | null | null | seapy/model/__init__.py | hadfieldnz/seapy | 88c2747dbcf85fa9d1da3509e4c510c53016680b | [
"MIT"
] | null | null | null | """
State Estimation and Analysis for PYthon
Module for working with oceanographic data and models
Copyright (c)2020 University of Hawaii under the MIT-License.
Import classes include:
- :class:`~seapy.model.grid`
Imported functions include:
- :func:`~seapy.model.lib.bvf`
- :func:`~seapy.model.lib.density`
- :func:`~seapy.model.hycom.load_history`
- :func:`~seapy.model.soda.load_history`
- :func:`~seapy.model.lib.pressure`
- :func:`~seapy.model.lib.rho2u`
- :func:`~seapy.model.lib.rho2v`
- :func:`~seapy.model.lib.sound`
- :func:`~seapy.model.lib.u2rho`
- :func:`~seapy.model.lib.v2rho`
- :func:`~seapy.model.lib.v2rho`
- :func:`~seapy.model.lib.w`
"""
from .grid import grid, asgrid
from .lib import *
from .hycom import *
from .soda import *
| 25.580645 | 63 | 0.680958 | 110 | 793 | 4.890909 | 0.436364 | 0.241636 | 0.312268 | 0.315985 | 0.20632 | 0.113383 | 0.113383 | 0.113383 | 0.113383 | 0 | 0 | 0.013314 | 0.147541 | 793 | 30 | 64 | 26.433333 | 0.782544 | 0.828499 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bea7b96e159cfc97dc6d86b145318257ca06802d | 190 | py | Python | src/graph_transpiler/webdnn/backend/fallback/kernels/broadcast.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | 1 | 2018-07-26T13:52:21.000Z | 2018-07-26T13:52:21.000Z | src/graph_transpiler/webdnn/backend/fallback/kernels/broadcast.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | src/graph_transpiler/webdnn/backend/fallback/kernels/broadcast.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | from webdnn.backend.fallback.kernels.elementwise import register_elementwise_kernel
from webdnn.graph.operators.broadcast import Broadcast
register_elementwise_kernel(Broadcast, "y = x0;")
| 38 | 83 | 0.857895 | 23 | 190 | 6.913043 | 0.608696 | 0.125786 | 0.314465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.068421 | 190 | 4 | 84 | 47.5 | 0.892655 | 0 | 0 | 0 | 0 | 0 | 0.036842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bead51cd0b282b2bbcd757acb45eec77e088b316 | 41 | py | Python | onto/query/__init__.py | billyrrr/onto | 72733d36a2583ae4758f7cf33a5229b79773702b | [
"MIT"
] | 1 | 2020-10-04T10:01:45.000Z | 2020-10-04T10:01:45.000Z | onto/query/__init__.py | billyrrr/onto | 72733d36a2583ae4758f7cf33a5229b79773702b | [
"MIT"
] | null | null | null | onto/query/__init__.py | billyrrr/onto | 72733d36a2583ae4758f7cf33a5229b79773702b | [
"MIT"
] | null | null | null | from .transaction import run_transaction
| 20.5 | 40 | 0.878049 | 5 | 41 | 7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe3ab7746a46a0f797f9c0d364948d9c7fdc855f | 144 | py | Python | magical_sqlserver/__init__.py | brennoflavio/magical-sqlserver | 6dc7cb3df8341f8234f18d36fd13b637a4ffc948 | [
"MIT"
] | 3 | 2018-12-27T14:15:47.000Z | 2021-05-02T10:23:07.000Z | magical_sqlserver/__init__.py | brennoflavio/magical-sqlserver | 6dc7cb3df8341f8234f18d36fd13b637a4ffc948 | [
"MIT"
] | null | null | null | magical_sqlserver/__init__.py | brennoflavio/magical-sqlserver | 6dc7cb3df8341f8234f18d36fd13b637a4ffc948 | [
"MIT"
] | null | null | null | # flake8: noqa
from magical_sqlserver.api import SQLServer
from magical_sqlserver.decorators import provide_session
name = "magical_sqlserver"
| 24 | 56 | 0.847222 | 18 | 144 | 6.555556 | 0.611111 | 0.40678 | 0.338983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.104167 | 144 | 5 | 57 | 28.8 | 0.906977 | 0.083333 | 0 | 0 | 0 | 0 | 0.130769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe3dcbfc658822e6f3be554c63956da9069f93f2 | 114 | py | Python | poll/reddit.py | nath/rcfbpoll | 364cb734f97b33b42fa72efb797d9783d391d79a | [
"0BSD"
] | null | null | null | poll/reddit.py | nath/rcfbpoll | 364cb734f97b33b42fa72efb797d9783d391d79a | [
"0BSD"
] | null | null | null | poll/reddit.py | nath/rcfbpoll | 364cb734f97b33b42fa72efb797d9783d391d79a | [
"0BSD"
] | null | null | null | import praw
from .models import User, UserRole
def message_voters(username, access_token, title, body):
pass | 19 | 56 | 0.77193 | 16 | 114 | 5.375 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 114 | 6 | 57 | 19 | 0.895833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fe444035e2f464ae94435151e7a0a349481c1be4 | 96 | py | Python | venv/lib/python3.8/site-packages/rope/base/history.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/rope/base/history.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/rope/base/history.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/fe/13/d5/dcf4b396fe8c506578795049b7eca94b89333bd904e26977206a8902f2 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe49eeedf618e37861c472469d1004711cf6b2c8 | 2,969 | py | Python | tests/test_logistic_regression.py | shotahorii/ml-from-scratch | 10fe8c9d5811bfcb9ee303aba2087524574681e6 | [
"MIT"
] | 3 | 2021-03-21T21:16:42.000Z | 2021-06-27T03:20:04.000Z | tests/test_logistic_regression.py | shotahorii/ml-from-scratch | 10fe8c9d5811bfcb9ee303aba2087524574681e6 | [
"MIT"
] | null | null | null | tests/test_logistic_regression.py | shotahorii/ml-from-scratch | 10fe8c9d5811bfcb9ee303aba2087524574681e6 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from sklearn.datasets import load_iris, load_breast_cancer
from sklearn.linear_model import LogisticRegression as LogisticRegression_skl
import sys
sys.path.append('../')
from bareml.machinelearning.supervised import LogisticRegression
from bareml.machinelearning.utils.model_selection import KFold, StratifiedKFold
def test_binary_classification():
data = load_breast_cancer()
X = data.data
y = data.target
clf_skl = LogisticRegression_skl()
clf_bareml = LogisticRegression()
skl_scores = []
bareml_scores = []
kf = KFold()
for train_idx, test_idx in kf.split(X,y):
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
clf_skl.fit(X_train, y_train)
clf_bareml.fit(X_train, y_train)
skl_scores.append(clf_skl.score(X_test, y_test))
bareml_scores.append(clf_bareml.score(X_test, y_test)['accuracy'])
skl_score = np.array(skl_scores).mean()
bareml_score = np.array(bareml_scores).mean()
# accuracy difference from sklearn's LogisticRegression is less than 5%
assert skl_score - bareml_score < 0.05
def test_multi_classification():
data = load_iris()
X = data.data
y = data.target
clf_skl = LogisticRegression_skl()
clf_bareml = LogisticRegression()
skl_scores = []
bareml_scores = []
kf = StratifiedKFold()
for train_idx, test_idx in kf.split(X,y):
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
clf_skl.fit(X_train, y_train)
clf_bareml.fit(X_train, y_train)
skl_scores.append(clf_skl.score(X_test, y_test))
bareml_scores.append(clf_bareml.score(X_test, y_test)['accuracy'])
skl_score = np.array(skl_scores).mean()
bareml_score = np.array(bareml_scores).mean()
# accuracy difference from sklearn's LogisticRegression is less than 5%
assert skl_score - bareml_score < 0.05
def test_multi_classification_onehot():
data = load_iris()
X = data.data
y = data.target
clf_skl = LogisticRegression_skl()
clf_bareml = LogisticRegression()
skl_scores = []
bareml_scores = []
kf = StratifiedKFold()
for train_idx, test_idx in kf.split(X,y):
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
y_train_onehot = pd.get_dummies(y_train).values
y_test_onehot = pd.get_dummies(y_test).values
clf_skl.fit(X_train, y_train)
clf_bareml.fit(X_train, y_train_onehot)
skl_scores.append(clf_skl.score(X_test, y_test))
bareml_scores.append(clf_bareml.score(X_test, y_test_onehot)['accuracy'])
skl_score = np.array(skl_scores).mean()
bareml_score = np.array(bareml_scores).mean()
# accuracy difference from sklearn's LogisticRegression is less than 5%
assert skl_score - bareml_score < 0.05
| 29.39604 | 81 | 0.695184 | 434 | 2,969 | 4.451613 | 0.142857 | 0.043478 | 0.02795 | 0.031056 | 0.791925 | 0.772257 | 0.772257 | 0.772257 | 0.772257 | 0.772257 | 0 | 0.005074 | 0.203436 | 2,969 | 100 | 82 | 29.69 | 0.811839 | 0.070394 | 0 | 0.746269 | 0 | 0 | 0.009797 | 0 | 0 | 0 | 0 | 0 | 0.044776 | 1 | 0.044776 | false | 0 | 0.104478 | 0 | 0.149254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe91b2aa940491c23131e880511f2d458bec70d7 | 46 | py | Python | panoptes/analysis/admin.py | oberlin/panoptes | 67d451ea4ffc58c23b5f347bfa5609fa7f853b45 | [
"BSD-3-Clause"
] | 2 | 2017-07-24T05:11:59.000Z | 2017-08-27T19:17:42.000Z | panoptes/analysis/admin.py | oberlin/panoptes | 67d451ea4ffc58c23b5f347bfa5609fa7f853b45 | [
"BSD-3-Clause"
] | null | null | null | panoptes/analysis/admin.py | oberlin/panoptes | 67d451ea4ffc58c23b5f347bfa5609fa7f853b45 | [
"BSD-3-Clause"
] | null | null | null |
import panoptes.analysis.panels.events.admin
| 15.333333 | 44 | 0.847826 | 6 | 46 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 2 | 45 | 23 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feabcece44b356ca3c8fa42d9d51eb38f916ba3c | 2,943 | py | Python | adapters/actuators/overhead_display/hc595/slow_cycling_through.py | andycavatorta/thirtybirds3.0 | d2987c29af48f879bddb8e12fc42549fefb084cf | [
"MIT"
] | 2 | 2020-05-13T02:53:02.000Z | 2021-03-21T05:54:53.000Z | adapters/gpio/hc595/slow_cycling_through.py | andycavatorta/thirtybirds3.0 | d2987c29af48f879bddb8e12fc42549fefb084cf | [
"MIT"
] | null | null | null | adapters/gpio/hc595/slow_cycling_through.py | andycavatorta/thirtybirds3.0 | d2987c29af48f879bddb8e12fc42549fefb084cf | [
"MIT"
] | 1 | 2021-05-06T18:42:41.000Z | 2021-05-06T18:42:41.000Z | #!/usr/bin/env python
import time
import math
import HC595_shift_reg as shifter
reg = shifter.HC595()
# attraction mode
seq = [ [ 1, 0, 0, 0, 0, ],
[ 0, 1, 0, 0, 0, ],
[ 0, 0, 1, 0, 0, ],
[ 0, 0, 0, 1, 0, ],
[ 0, 0, 0, 0, 1 ] ]
"""
# score
seq = [ [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] ]
seq = [ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] ]
seq = [ [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0 ],
[ 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1 ],
[ 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 ],
[ 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0 ] ]
seq = [ [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0 ],
[ 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1 ],
[ 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1 ] ]
seq = [ [ 1, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 1, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 1, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 1, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 1, 0, 0, 0 ] ]
"""
#seq = [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1 ]
seq_step = 0
val = [ 0 ]
lfo = 3.141596
period = 0.1
period = 1.1
# this is put inside a try block so it can clean up
# the output enable. very important to protect relays from
# being left on!!!!
try:
while True:
print()
ontime =2.09
offtime = 2.0
val[ 0 ] = 0;
for trk in range( 0, 5 ):
if seq[ trk ][ seq_step ] == 1:
val[ 0 ] = val[ 0 ] + ( 1 << trk )
#print( ontime, offtime )
reg.write( val )
print( val )
time.sleep( ontime )
val[ 0 ] = 0x00
reg.write( val )
print( val )
time.sleep( offtime )
seq_step = seq_step + 1
if seq_step >= 5:
seq_step = 0
except KeyboardInterrupt:
print( "You've exited the program." )
finally:
print( "cleaning up GPIO now." )
reg.disable_Output_Enable()
| 28.028571 | 109 | 0.349983 | 605 | 2,943 | 1.68595 | 0.123967 | 0.592157 | 0.711765 | 0.756863 | 0.526471 | 0.521569 | 0.521569 | 0.460784 | 0.460784 | 0.457843 | 0 | 0.295414 | 0.429494 | 2,943 | 104 | 110 | 28.298077 | 0.312091 | 0.100238 | 0 | 0.157895 | 0 | 0 | 0.046169 | 0 | 0 | 0 | 0.003929 | 0 | 0 | 1 | 0 | false | 0 | 0.078947 | 0 | 0.078947 | 0.131579 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
feb2f8aa617eeae9bfa99eb3bb351a7a8bd60965 | 308 | py | Python | allennlp_semparse/predictors/__init__.py | pdasigi/allennlp-semparse | 843c9e5a4d15f449c8f11e6c08940d3de3e2a8c7 | [
"Apache-2.0"
] | null | null | null | allennlp_semparse/predictors/__init__.py | pdasigi/allennlp-semparse | 843c9e5a4d15f449c8f11e6c08940d3de3e2a8c7 | [
"Apache-2.0"
] | null | null | null | allennlp_semparse/predictors/__init__.py | pdasigi/allennlp-semparse | 843c9e5a4d15f449c8f11e6c08940d3de3e2a8c7 | [
"Apache-2.0"
] | null | null | null | from allennlp_semparse.predictors.atis_parser import AtisParserPredictor
from allennlp_semparse.predictors.nlvr_parser import NlvrParserPredictor
from allennlp_semparse.predictors.quarel_parser import QuarelParserPredictor
from allennlp_semparse.predictors.wikitables_parser import WikiTablesParserPredictor
| 61.6 | 84 | 0.922078 | 32 | 308 | 8.625 | 0.4375 | 0.173913 | 0.289855 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 308 | 4 | 85 | 77 | 0.945205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22870aca78e0ec5d773d4b6f4ead3a2952c7f983 | 126,032 | py | Python | koku/masu/database/ocp_report_db_accessor.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | koku/masu/database/ocp_report_db_accessor.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | koku/masu/database/ocp_report_db_accessor.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | #
# Copyright 2021 Red Hat Inc.
# SPDX-License-Identifier: Apache-2.0
#
"""Database accessor for OCP report data."""
import copy
import datetime
import json
import logging
import os
import pkgutil
import uuid
from decimal import Decimal
import pytz
from dateutil.parser import parse
from dateutil.rrule import MONTHLY
from dateutil.rrule import rrule
from django.conf import settings
from django.db import connection
from django.db.models import DecimalField
from django.db.models import ExpressionWrapper
from django.db.models import F
from django.db.models import Sum
from django.db.models import Value
from django.db.models.functions import Coalesce
from jinjasql import JinjaSql
from tenant_schemas.utils import schema_context
from trino.exceptions import TrinoExternalError
import koku.presto_database as kpdb
from api.metrics import constants as metric_constants
from api.utils import DateHelper
from koku.database import JSONBBuildObject
from koku.database import SQLScriptAtomicExecutorMixin
from masu.config import Config
from masu.database import AWS_CUR_TABLE_MAP
from masu.database import OCP_REPORT_TABLE_MAP
from masu.database.report_db_accessor_base import ReportDBAccessorBase
from masu.util.common import month_date_range_tuple
from reporting.models import OCP_ON_ALL_PERSPECTIVES
from reporting.provider.aws.models import PRESTO_LINE_ITEM_DAILY_TABLE as AWS_PRESTO_LINE_ITEM_DAILY_TABLE
from reporting.provider.azure.models import PRESTO_LINE_ITEM_DAILY_TABLE as AZURE_PRESTO_LINE_ITEM_DAILY_TABLE
from reporting.provider.ocp.models import OCPCluster
from reporting.provider.ocp.models import OCPNode
from reporting.provider.ocp.models import OCPProject
from reporting.provider.ocp.models import OCPPVC
from reporting.provider.ocp.models import OCPUsageLineItemDailySummary
from reporting.provider.ocp.models import OCPUsageReport
from reporting.provider.ocp.models import OCPUsageReportPeriod
from reporting.provider.ocp.models import PRESTO_LINE_ITEM_TABLE_DAILY_MAP
from reporting.provider.ocp.models import UI_SUMMARY_TABLES
LOG = logging.getLogger(__name__)
def create_filter(data_source, start_date, end_date, cluster_id):
"""Create filter with data source, start and end dates."""
filters = {"data_source": data_source}
if start_date:
filters["usage_start__gte"] = start_date if isinstance(start_date, datetime.date) else start_date.date()
if end_date:
filters["usage_start__lte"] = end_date if isinstance(end_date, datetime.date) else end_date.date()
if cluster_id:
filters["cluster_id"] = cluster_id
return filters
class OCPReportDBAccessor(SQLScriptAtomicExecutorMixin, ReportDBAccessorBase):
"""Class to interact with customer reporting tables."""
# Empty string will put a path seperator on the end
OCP_ON_ALL_SQL_PATH = os.path.join("sql", "openshift", "all", "")
def __init__(self, schema):
"""Establish the database connection.
Args:
schema (str): The customer schema to associate with
"""
super().__init__(schema)
self._datetime_format = Config.OCP_DATETIME_STR_FORMAT
self.jinja_sql = JinjaSql()
self.date_helper = DateHelper()
self._table_map = OCP_REPORT_TABLE_MAP
self._aws_table_map = AWS_CUR_TABLE_MAP
@property
def line_item_daily_summary_table(self):
return OCPUsageLineItemDailySummary
def get_current_usage_report(self):
"""Get the most recent usage report object."""
table_name = self._table_map["report"]
with schema_context(self.schema):
return self._get_db_obj_query(table_name).order_by("-interval_start").first()
def get_current_usage_period(self):
"""Get the most recent usage report period object."""
table_name = self._table_map["report_period"]
with schema_context(self.schema):
return self._get_db_obj_query(table_name).order_by("-report_period_start").first()
def get_usage_periods_by_date(self, start_date):
"""Return all report period entries for the specified start date."""
table_name = self._table_map["report_period"]
with schema_context(self.schema):
return self._get_db_obj_query(table_name).filter(report_period_start=start_date).all()
def get_usage_period_by_dates_and_cluster(self, start_date, end_date, cluster_id):
"""Return all report period entries for the specified start date."""
table_name = self._table_map["report_period"]
with schema_context(self.schema):
return (
self._get_db_obj_query(table_name)
.filter(report_period_start=start_date, report_period_end=end_date, cluster_id=cluster_id)
.first()
)
def get_usage_period_on_or_before_date(self, date, provider_uuid=None):
"""Get the usage report period objects before provided date."""
table_name = self._table_map["report_period"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
if provider_uuid:
usage_period_query = base_query.filter(report_period_start__lte=date, provider_id=provider_uuid)
else:
usage_period_query = base_query.filter(report_period_start__lte=date)
return usage_period_query
def get_usage_period_query_by_provider(self, provider_uuid):
"""Return all report periods for the specified provider."""
table_name = self._table_map["report_period"]
with schema_context(self.schema):
return self._get_db_obj_query(table_name).filter(provider_id=provider_uuid)
def report_periods_for_provider_uuid(self, provider_uuid, start_date=None):
"""Return all report periods for provider_uuid on date."""
report_periods = self.get_usage_period_query_by_provider(provider_uuid)
with schema_context(self.schema):
if start_date:
if isinstance(start_date, str):
start_date = parse(start_date)
report_date = start_date.replace(day=1)
report_periods = report_periods.filter(report_period_start=report_date).first()
return report_periods
def get_lineitem_query_for_reportid(self, query_report_id):
"""Get the usage report line item for a report id query."""
table_name = self._table_map["line_item"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
line_item_query = base_query.filter(report_id=query_report_id)
return line_item_query
def get_daily_usage_query_for_clusterid(self, cluster_identifier):
"""Get the usage report daily item for a cluster id query."""
table_name = self._table_map["line_item_daily"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
daily_usage_query = base_query.filter(cluster_id=cluster_identifier)
return daily_usage_query
def get_summary_usage_query_for_clusterid(self, cluster_identifier):
"""Get the usage report summary for a cluster id query."""
table_name = self._table_map["line_item_daily_summary"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
summary_usage_query = base_query.filter(cluster_id=cluster_identifier)
return summary_usage_query
def get_item_query_report_period_id(self, report_period_id):
"""Get the usage report line item for a report id query."""
table_name = self._table_map["line_item"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
line_item_query = base_query.filter(report_period_id=report_period_id)
return line_item_query
def get_storage_item_query_report_period_id(self, report_period_id):
"""Get the storage report line item for a report id query."""
table_name = self._table_map["storage_line_item"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
line_item_query = base_query.filter(report_period_id=report_period_id)
return line_item_query
def get_daily_storage_item_query_cluster_id(self, cluster_identifier):
"""Get the daily storage report line item for a cluster id query."""
table_name = self._table_map["storage_line_item_daily"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
daily_item_query = base_query.filter(cluster_id=cluster_identifier)
return daily_item_query
def get_storage_summary_query_cluster_id(self, cluster_identifier):
"""Get the storage report summary for a cluster id query."""
table_name = self._table_map["line_item_daily_summary"]
filters = {"cluster_id": cluster_identifier, "data_source": "Storage"}
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
daily_item_query = base_query.filter(**filters)
return daily_item_query
def get_node_label_item_query_report_period_id(self, report_period_id):
"""Get the node label report line item for a report id query."""
table_name = self._table_map["node_label_line_item"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
line_item_query = base_query.filter(report_period_id=report_period_id)
return line_item_query
def get_ocp_aws_summary_query_for_cluster_id(self, cluster_identifier):
"""Get the OCP-on-AWS report summary item for a given cluster id query."""
table_name = self._aws_table_map["ocp_on_aws_daily_summary"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
summary_item_query = base_query.filter(cluster_id=cluster_identifier)
return summary_item_query
def get_ocp_aws_project_summary_query_for_cluster_id(self, cluster_identifier):
"""Get the OCP-on-AWS report project summary item for a given cluster id query."""
table_name = self._aws_table_map["ocp_on_aws_project_daily_summary"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
summary_item_query = base_query.filter(cluster_id=cluster_identifier)
return summary_item_query
def get_report_query_report_period_id(self, report_period_id):
"""Get the usage report line item for a report id query."""
table_name = self._table_map["report"]
with schema_context(self.schema):
base_query = self._get_db_obj_query(table_name)
usage_report_query = base_query.filter(report_period_id=report_period_id)
return usage_report_query
def get_report_periods(self):
"""Get all usage period objects."""
periods = []
with schema_context(self.schema):
periods = OCPUsageReportPeriod.objects.values("id", "cluster_id", "report_period_start", "provider_id")
return_value = {(p["cluster_id"], p["report_period_start"], p["provider_id"]): p["id"] for p in periods}
return return_value
def get_reports(self):
"""Make a mapping of reports by time."""
with schema_context(self.schema):
reports = OCPUsageReport.objects.all()
return {
(entry.report_period_id, entry.interval_start.strftime(self._datetime_format)): entry.id
for entry in reports
}
def get_pod_usage_cpu_core_hours(self, start_date, end_date, cluster_id=None):
"""Make a mapping of cpu pod usage hours."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Pod", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.pod_usage_cpu_core_hours for entry in reports}
def _get_reports(self, table, filters=None):
"""Return requested reports from given table.
Args:
table (Django models.Model object): The table to query against
filters (dict): Columns to filter the query on
Returns:
(QuerySet): Django queryset of objects queried on
"""
with schema_context(self.schema):
if filters:
reports = self._get_db_obj_query(table).filter(**filters).all()
else:
reports = self._get_db_obj_query(table).all()
return reports
def get_pod_request_cpu_core_hours(self, start_date, end_date, cluster_id=None):
"""Make a mapping of cpu pod request hours."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Pod", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.pod_request_cpu_core_hours for entry in reports}
def get_pod_usage_memory_gigabyte_hours(self, start_date, end_date, cluster_id=None):
"""Make a mapping of memory_usage hours."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Pod", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.pod_usage_memory_gigabyte_hours for entry in reports}
def get_pod_request_memory_gigabyte_hours(self, start_date, end_date, cluster_id=None):
"""Make a mapping of memory_request_hours."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Pod", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.pod_request_memory_gigabyte_hours for entry in reports}
def get_persistentvolumeclaim_usage_gigabyte_months(self, start_date, end_date, cluster_id=None):
"""Make a mapping of persistentvolumeclaim_usage_gigabyte_months."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Storage", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.persistentvolumeclaim_usage_gigabyte_months for entry in reports}
def get_volume_request_storage_gigabyte_months(self, start_date, end_date, cluster_id=None):
"""Make a mapping of volume_request_storage_gigabyte_months."""
table = OCPUsageLineItemDailySummary
filters = create_filter("Storage", start_date, end_date, cluster_id)
with schema_context(self.schema):
reports = self._get_reports(table, filters)
return {entry.uuid: entry.volume_request_storage_gigabyte_months for entry in reports}
def populate_line_item_daily_table(self, start_date, end_date, cluster_id):
"""Populate the daily aggregate of line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
cluster_id (String) Cluster Identifier
Returns
(None)
"""
# Cast start_date and end_date into date object instead of string
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
table_name = self._table_map["line_item_daily"]
daily_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpusagelineitem_daily.sql")
daily_sql = daily_sql.decode("utf-8")
daily_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"cluster_id": cluster_id,
"schema": self.schema,
}
daily_sql, daily_sql_params = self.jinja_sql.prepare_query(daily_sql, daily_sql_params)
self._execute_raw_sql_query(table_name, daily_sql, start_date, end_date, bind_params=list(daily_sql_params))
def populate_ui_summary_tables(self, start_date, end_date, source_uuid, tables=UI_SUMMARY_TABLES):
"""Populate our UI summary tables (formerly materialized views)."""
for table_name in tables:
summary_sql = pkgutil.get_data("masu.database", f"sql/openshift/{table_name}.sql")
summary_sql = summary_sql.decode("utf-8")
summary_sql_params = {
"start_date": start_date,
"end_date": end_date,
"schema": self.schema,
"source_uuid": source_uuid,
}
summary_sql, summary_sql_params = self.jinja_sql.prepare_query(summary_sql, summary_sql_params)
self._execute_raw_sql_query(
table_name, summary_sql, start_date, end_date, bind_params=list(summary_sql_params)
)
def update_line_item_daily_summary_with_enabled_tags(self, start_date, end_date, report_period_ids):
"""Populate the enabled tag key table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
bill_ids (list) A list of bill IDs.
Returns
(None)
"""
table_name = self._table_map["line_item_daily_summary"]
summary_sql = pkgutil.get_data(
"masu.database", "sql/reporting_ocpusagelineitem_daily_summary_update_enabled_tags.sql"
)
summary_sql = summary_sql.decode("utf-8")
summary_sql_params = {
"start_date": start_date,
"end_date": end_date,
"report_period_ids": report_period_ids,
"schema": self.schema,
}
summary_sql, summary_sql_params = self.jinja_sql.prepare_query(summary_sql, summary_sql_params)
self._execute_raw_sql_query(
table_name, summary_sql, start_date, end_date, bind_params=list(summary_sql_params)
)
def get_ocp_infrastructure_map(self, start_date, end_date, **kwargs):
"""Get the OCP on infrastructure map.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
Returns
(None)
"""
# kwargs here allows us to optionally pass in a provider UUID based on
# the provider type this is run for
ocp_provider_uuid = kwargs.get("ocp_provider_uuid")
aws_provider_uuid = kwargs.get("aws_provider_uuid")
azure_provider_uuid = kwargs.get("azure_provider_uuid")
# In case someone passes this function a string instead of the date object like we asked...
# Cast the string into a date object, end_date into date object instead of string
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
infra_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpinfrastructure_provider_map.sql")
infra_sql = infra_sql.decode("utf-8")
infra_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"schema": self.schema,
"aws_provider_uuid": aws_provider_uuid,
"ocp_provider_uuid": ocp_provider_uuid,
"azure_provider_uuid": azure_provider_uuid,
}
infra_sql, infra_sql_params = self.jinja_sql.prepare_query(infra_sql, infra_sql_params)
with connection.cursor() as cursor:
cursor.db.set_schema(self.schema)
cursor.execute(infra_sql, list(infra_sql_params))
results = cursor.fetchall()
db_results = {}
for entry in results:
# This dictionary is keyed on an OpenShift provider UUID
# and the tuple contains
# (Infrastructure Provider UUID, Infrastructure Provider Type)
db_results[entry[0]] = (entry[1], entry[2])
return db_results
def get_ocp_infrastructure_map_trino(self, start_date, end_date, **kwargs):
"""Get the OCP on infrastructure map.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
Returns
(None)
"""
# kwargs here allows us to optionally pass in a provider UUID based on
# the provider type this is run for
ocp_provider_uuid = kwargs.get("ocp_provider_uuid")
aws_provider_uuid = kwargs.get("aws_provider_uuid")
azure_provider_uuid = kwargs.get("azure_provider_uuid")
if not self.table_exists_trino(PRESTO_LINE_ITEM_TABLE_DAILY_MAP.get("pod_usage")):
return {}
if aws_provider_uuid and not self.table_exists_trino(AWS_PRESTO_LINE_ITEM_DAILY_TABLE):
return {}
if azure_provider_uuid and not self.table_exists_trino(AZURE_PRESTO_LINE_ITEM_DAILY_TABLE):
return {}
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
infra_sql = pkgutil.get_data("masu.database", "presto_sql/reporting_ocpinfrastructure_provider_map.sql")
infra_sql = infra_sql.decode("utf-8")
infra_sql_params = {
"start_date": start_date,
"end_date": end_date,
"year": start_date.strftime("%Y"),
"month": start_date.strftime("%m"),
"schema": self.schema,
"aws_provider_uuid": aws_provider_uuid,
"ocp_provider_uuid": ocp_provider_uuid,
"azure_provider_uuid": azure_provider_uuid,
}
infra_sql, infra_sql_params = self.jinja_sql.prepare_query(infra_sql, infra_sql_params)
results = self._execute_presto_raw_sql_query(self.schema, infra_sql, bind_params=infra_sql_params)
db_results = {}
for entry in results:
# This dictionary is keyed on an OpenShift provider UUID
# and the tuple contains
# (Infrastructure Provider UUID, Infrastructure Provider Type)
db_results[entry[0]] = (entry[1], entry[2])
return db_results
def populate_storage_line_item_daily_table(self, start_date, end_date, cluster_id):
"""Populate the daily storage aggregate of line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
cluster_id (String) Cluster Identifier
Returns
(None)
"""
# Cast string to date object
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
table_name = self._table_map["storage_line_item_daily"]
daily_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpstoragelineitem_daily.sql")
daily_sql = daily_sql.decode("utf-8")
daily_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"cluster_id": cluster_id,
"schema": self.schema,
}
daily_sql, daily_sql_params = self.jinja_sql.prepare_query(daily_sql, daily_sql_params)
self._execute_raw_sql_query(table_name, daily_sql, start_date, end_date, bind_params=list(daily_sql_params))
def populate_pod_charge(self, cpu_temp_table, mem_temp_table):
"""Populate the memory and cpu charge on daily summary table.
Args:
cpu_temp_table (String) Name of cpu charge temp table
mem_temp_table (String) Name of mem charge temp table
Returns
(None)
"""
table_name = self._table_map["line_item_daily_summary"]
daily_charge_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpusagelineitem_daily_pod_charge.sql")
charge_line_sql = daily_charge_sql.decode("utf-8")
charge_line_sql_params = {"cpu_temp": cpu_temp_table, "mem_temp": mem_temp_table, "schema": self.schema}
charge_line_sql, charge_line_sql_params = self.jinja_sql.prepare_query(charge_line_sql, charge_line_sql_params)
self._execute_raw_sql_query(table_name, charge_line_sql, bind_params=list(charge_line_sql_params))
def populate_storage_charge(self, temp_table_name):
"""Populate the storage charge into the daily summary table.
Args:
storage_charge (Float) Storage charge.
Returns
(None)
"""
table_name = self._table_map["line_item_daily_summary"]
daily_charge_sql = pkgutil.get_data("masu.database", "sql/reporting_ocp_storage_charge.sql")
charge_line_sql = daily_charge_sql.decode("utf-8")
charge_line_sql_params = {"temp_table": temp_table_name, "schema": self.schema}
charge_line_sql, charge_line_sql_params = self.jinja_sql.prepare_query(charge_line_sql, charge_line_sql_params)
self._execute_raw_sql_query(table_name, charge_line_sql, bind_params=list(charge_line_sql_params))
def populate_line_item_daily_summary_table(self, start_date, end_date, cluster_id, source):
"""Populate the daily aggregate of line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
cluster_id (String) Cluster Identifier
source (String) Source UUID
Returns
(None)
"""
# Cast start_date to date
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
table_name = self._table_map["line_item_daily_summary"]
summary_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpusagelineitem_daily_summary.sql")
summary_sql = summary_sql.decode("utf-8")
summary_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"cluster_id": cluster_id,
"schema": self.schema,
"source_uuid": source,
}
summary_sql, summary_sql_params = self.jinja_sql.prepare_query(summary_sql, summary_sql_params)
self._execute_raw_sql_query(
table_name, summary_sql, start_date, end_date, bind_params=list(summary_sql_params)
)
def populate_storage_line_item_daily_summary_table(self, start_date, end_date, cluster_id, source):
"""Populate the daily aggregate of storage line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
cluster_id (String) Cluster Identifier
source (String) Source UUID
Returns
(None)
"""
# Cast start_date and end_date to date object, if they aren't already
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
table_name = self._table_map["line_item_daily_summary"]
summary_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpstoragelineitem_daily_summary.sql")
summary_sql = summary_sql.decode("utf-8")
summary_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"cluster_id": cluster_id,
"schema": self.schema,
"source_uuid": source,
}
summary_sql, summary_sql_params = self.jinja_sql.prepare_query(summary_sql, summary_sql_params)
self._execute_raw_sql_query(table_name, summary_sql, start_date, end_date, list(summary_sql_params))
def delete_ocp_hive_partition_by_day(self, days, source, year, month):
"""Deletes partitions individually for each day in days list."""
table = self._table_map["line_item_daily_summary"]
retries = settings.HIVE_PARTITION_DELETE_RETRIES
if self.table_exists_trino(table):
LOG.info(
"Deleting partitions for the following: \n\tSchema: %s "
"\n\tOCP Source: %s \n\tTable: %s \n\tYear-Month: %s-%s \n\tDays: %s",
self.schema,
source,
table,
year,
month,
days,
)
for day in days:
for i in range(retries):
try:
sql = f"""
DELETE FROM hive.{self.schema}.{table}
WHERE source = '{source}'
AND year = '{year}'
AND (month = replace(ltrim(replace('{month}', '0', ' ')),' ', '0') OR month = '{month}')
AND day = '{day}'
"""
self._execute_presto_raw_sql_query(self.schema, sql)
break
except TrinoExternalError as err:
if err.error_name == "HIVE_METASTORE_ERROR" and i < (retries - 1):
continue
else:
raise err
def populate_line_item_daily_summary_table_presto(
self, start_date, end_date, report_period_id, cluster_id, cluster_alias, source
):
"""Populate the daily aggregate of line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
report_period_id (int) : report period for which we are processing
cluster_id (str) : Cluster Identifier
cluster_alias (str) : Cluster alias
source (UUID) : provider uuid
Returns
(None)
"""
# Cast start_date to date
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
days = DateHelper().list_days(start_date, end_date)
days_str = "','".join([str(day.day) for day in days])
days_list = [str(day.day) for day in days]
year = start_date.strftime("%Y")
month = start_date.strftime("%m")
self.delete_ocp_hive_partition_by_day(days_list, source, year, month)
tmpl_summary_sql = pkgutil.get_data("masu.database", "presto_sql/reporting_ocpusagelineitem_daily_summary.sql")
tmpl_summary_sql = tmpl_summary_sql.decode("utf-8")
summary_sql_params = {
"uuid": str(source).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"report_period_id": report_period_id,
"cluster_id": cluster_id,
"cluster_alias": cluster_alias,
"schema": self.schema,
"source": str(source),
"year": year,
"month": month,
"days": days_str,
}
LOG.info("PRESTO OCP: Connect")
presto_conn = kpdb.connect(schema=self.schema)
try:
LOG.info("PRESTO OCP: executing SQL buffer for OCP usage processing")
kpdb.executescript(
presto_conn, tmpl_summary_sql, params=summary_sql_params, preprocessor=self.jinja_sql.prepare_query
)
except Exception as e:
LOG.error(f"PRESTO OCP ERROR : {e}")
try:
presto_conn.rollback()
except RuntimeError:
# If presto has not started a transaction, it will throw
# a RuntimeError that we just want to ignore.
pass
raise e
else:
LOG.info("PRESTO OCP: Commit actions")
presto_conn.commit()
finally:
LOG.info("PRESTO OCP: Close connection")
presto_conn.close()
def populate_pod_label_summary_table_presto(self, report_period_ids, start_date, end_date, source):
"""
Populate label usage summary tables
Args:
report_period_ids (list(int)) : List of report_period_ids for processing
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
source (UUID) : provider uuid
Returns
(None)
"""
# Cast start_date to date
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
agg_sql = pkgutil.get_data("masu.database", "presto_sql/reporting_ocp_usage_label_summary.sql")
agg_sql = agg_sql.decode("utf-8")
agg_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"schema": self.schema,
"report_period_ids": tuple(report_period_ids),
"start_date": start_date,
"end_date": end_date,
"source": str(source),
"year": start_date.strftime("%Y"),
"month": start_date.strftime("%m"),
}
LOG.info("PRESTO OCP: Connect")
presto_conn = kpdb.connect(schema=self.schema)
try:
LOG.info("PRESTO OCP: executing SQL buffer for OCP tag/label processing")
kpdb.executescript(presto_conn, agg_sql, params=agg_sql_params, preprocessor=self.jinja_sql.prepare_query)
except Exception as e:
LOG.error(f"PRESTO OCP ERROR : {e}")
try:
presto_conn.rollback()
except RuntimeError:
# If presto has not started a transaction, it will throw
# a RuntimeError that we just want to ignore.
pass
raise e
else:
LOG.info("PRESTO OCP: Commit actions")
presto_conn.commit()
finally:
LOG.info("PRESTO OCP: Close connection")
presto_conn.close()
def get_cost_summary_for_clusterid(self, cluster_identifier):
"""Get the cost summary for a cluster id query."""
table_name = self._table_map["cost_summary"]
base_query = self._get_db_obj_query(table_name)
cost_summary_query = base_query.filter(cluster_id=cluster_identifier)
return cost_summary_query
def populate_pod_label_summary_table(self, report_period_ids, start_date, end_date):
"""Populate the line item aggregated totals data table."""
table_name = self._table_map["pod_label_summary"]
agg_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpusagepodlabel_summary.sql")
agg_sql = agg_sql.decode("utf-8")
agg_sql_params = {
"schema": self.schema,
"report_period_ids": report_period_ids,
"start_date": start_date,
"end_date": end_date,
}
agg_sql, agg_sql_params = self.jinja_sql.prepare_query(agg_sql, agg_sql_params)
self._execute_raw_sql_query(table_name, agg_sql, bind_params=list(agg_sql_params))
def populate_volume_label_summary_table(self, report_period_ids, start_date, end_date):
"""Populate the OCP volume label summary table."""
table_name = self._table_map["volume_label_summary"]
agg_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpstoragevolumelabel_summary.sql")
agg_sql = agg_sql.decode("utf-8")
agg_sql_params = {
"schema": self.schema,
"report_period_ids": report_period_ids,
"start_date": start_date,
"end_date": end_date,
}
agg_sql, agg_sql_params = self.jinja_sql.prepare_query(agg_sql, agg_sql_params)
self._execute_raw_sql_query(table_name, agg_sql, bind_params=list(agg_sql_params))
def populate_markup_cost(self, markup, start_date, end_date, cluster_id):
"""Set markup cost for OCP including infrastructure cost markup."""
with schema_context(self.schema):
OCPUsageLineItemDailySummary.objects.filter(
cluster_id=cluster_id, usage_start__gte=start_date, usage_start__lte=end_date
).update(
infrastructure_markup_cost=(
(Coalesce(F("infrastructure_raw_cost"), Value(0, output_field=DecimalField()))) * markup
),
infrastructure_project_markup_cost=(
(Coalesce(F("infrastructure_project_raw_cost"), Value(0, output_field=DecimalField()))) * markup
),
)
def get_distinct_nodes(self, start_date, end_date, cluster_id):
"""Return a list of nodes for a cluster between given dates."""
with schema_context(self.schema):
unique_nodes = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date, usage_start__lt=end_date, cluster_id=cluster_id, node__isnull=False
)
.values_list("node")
.distinct()
)
return [node[0] for node in unique_nodes]
def get_distinct_pvcs(self, start_date, end_date, cluster_id):
"""Return a list of tuples of (PVC, node) for a cluster between given dates."""
with schema_context(self.schema):
unique_pvcs = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lt=end_date,
cluster_id=cluster_id,
persistentvolumeclaim__isnull=False,
namespace__isnull=False,
)
.values_list("persistentvolumeclaim", "node", "namespace")
.distinct()
)
return [(pvc[0], pvc[1], pvc[2]) for pvc in unique_pvcs]
def generate_monthly_cost_json_object(self, distribution, distributed_cost):
"""Generates the default monthly cost dict."""
default_cost = Decimal(0)
if not isinstance(distributed_cost, Decimal) and distributed_cost:
distributed_cost = Decimal(distributed_cost)
if not distributed_cost:
distributed_cost = default_cost
cost_mapping = {distribution: distributed_cost}
return JSONBBuildObject(
Value(metric_constants.CPU_DISTRIBUTION),
cost_mapping.get(metric_constants.CPU_DISTRIBUTION, default_cost),
Value(metric_constants.MEMORY_DISTRIBUTION),
cost_mapping.get(metric_constants.MEMORY_DISTRIBUTION, default_cost),
Value(metric_constants.PVC_DISTRIBUTION),
cost_mapping.get(metric_constants.PVC_DISTRIBUTION, default_cost),
)
def populate_monthly_cost(
self, cost_type, rate_type, rate, start_date, end_date, cluster_id, cluster_alias, distribution, provider_uuid
):
"""
Populate the monthly cost of a customer.
There are three types of monthly rates Node, Cluster & PVC.
args:
cost_type (str): Contains the type of monthly cost. ex: "Node"
rate_type(str): Contains the metric name. ex: "node_cost_per_month"
rate (decimal): Contains the rate amount ex: 100.0
node_cost (Decimal): The node cost per month
start_date (datetime, str): The start_date to calculate monthly_cost.
end_date (datetime, str): The end_date to calculate monthly_cost.
cluster_id (str): The id of the cluster
cluster_alias: The name of the cluster
distribution: Choice of monthly distribution ex. memory
"""
if isinstance(start_date, str):
start_date = parse(start_date).date()
if isinstance(end_date, str):
end_date = parse(end_date).date()
# usage_start, usage_end are date types
first_month = datetime.datetime(*start_date.replace(day=1).timetuple()[:3]).replace(tzinfo=pytz.UTC)
end_date = datetime.datetime(*end_date.timetuple()[:3]).replace(hour=23, minute=59, second=59, tzinfo=pytz.UTC)
# Calculate monthly cost for each month from start date to end date
for curr_month in rrule(freq=MONTHLY, until=end_date, dtstart=first_month):
first_curr_month, first_next_month = month_date_range_tuple(curr_month)
LOG.info("Populating monthly cost from %s to %s.", first_curr_month, first_next_month)
if cost_type == "Node":
if rate is None:
self.remove_monthly_cost(first_curr_month, first_next_month, cluster_id, cost_type)
else:
self.upsert_monthly_node_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate,
distribution,
provider_uuid,
)
elif cost_type == "Cluster":
if rate is None:
self.remove_monthly_cost(first_curr_month, first_next_month, cluster_id, cost_type)
else:
# start_date, end_date, cluster_id, cluster_alias, rate_type, cluster_cost
self.upsert_monthly_cluster_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate,
distribution,
provider_uuid,
)
elif cost_type == "PVC":
if rate is None:
self.remove_monthly_cost(first_curr_month, first_next_month, cluster_id, cost_type)
else:
self.upsert_monthly_pvc_cost_line_item(
first_curr_month, first_next_month, cluster_id, cluster_alias, rate_type, rate, provider_uuid
)
def populate_monthly_tag_cost(
self,
cost_type,
rate_type,
rate_dict,
start_date,
end_date,
cluster_id,
cluster_alias,
distribution,
provider_uuid,
):
"""
Populate the monthly cost of a customer based on tag rates.
Dispatches to the Node, Cluster, or PVC tag-rate upsert based on cost_type.
For node costs this works out to the tag value rate * number of unique nodes
for each tag key:value pair found on a line item for that node.
"""
if isinstance(start_date, str):
start_date = parse(start_date).date()
if isinstance(end_date, str):
end_date = parse(end_date).date()
# usage_start, usage_end are date types
first_month = datetime.datetime(*start_date.replace(day=1).timetuple()[:3]).replace(tzinfo=pytz.UTC)
end_date = datetime.datetime(*end_date.timetuple()[:3]).replace(hour=23, minute=59, second=59, tzinfo=pytz.UTC)
# Calculate monthly cost for each month from start date to end date for each tag key:value pair in the rate
for curr_month in rrule(freq=MONTHLY, until=end_date, dtstart=first_month):
first_curr_month, first_next_month = month_date_range_tuple(curr_month)
LOG.info("Populating monthly tag based cost from %s to %s.", first_curr_month, first_next_month)
if cost_type == "Node":
self.tag_upsert_monthly_node_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate_dict,
distribution,
provider_uuid,
)
elif cost_type == "Cluster":
self.tag_upsert_monthly_cluster_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate_dict,
distribution,
provider_uuid,
)
elif cost_type == "PVC":
self.tag_upsert_monthly_pvc_cost_line_item(
first_curr_month, first_next_month, cluster_id, cluster_alias, rate_type, rate_dict, provider_uuid
)
def populate_monthly_tag_default_cost(
self,
cost_type,
rate_type,
rate_dict,
start_date,
end_date,
cluster_id,
cluster_alias,
distribution,
provider_uuid,
):
"""
Populate the monthly default cost of a customer based on tag rates.
Dispatches to the Node, Cluster, or PVC default tag-rate upsert based on cost_type.
The default rate applies to tag values that do not have an explicitly defined rate,
calculated per unique node, cluster, or PVC that carries the tag key.
"""
if isinstance(start_date, str):
start_date = parse(start_date).date()
if isinstance(end_date, str):
end_date = parse(end_date).date()
# usage_start, usage_end are date types
first_month = datetime.datetime(*start_date.replace(day=1).timetuple()[:3]).replace(tzinfo=pytz.UTC)
end_date = datetime.datetime(*end_date.timetuple()[:3]).replace(hour=23, minute=59, second=59, tzinfo=pytz.UTC)
# Calculate monthly cost for each month from start date to end date for each tag key:value pair in the rate
for curr_month in rrule(freq=MONTHLY, until=end_date, dtstart=first_month):
first_curr_month, first_next_month = month_date_range_tuple(curr_month)
LOG.info("Populating monthly tag based default cost from %s to %s.", first_curr_month, first_next_month)
if cost_type == "Node":
self.tag_upsert_monthly_default_node_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate_dict,
distribution,
provider_uuid,
)
elif cost_type == "Cluster":
self.tag_upsert_monthly_default_cluster_cost_line_item(
first_curr_month,
first_next_month,
cluster_id,
cluster_alias,
rate_type,
rate_dict,
distribution,
provider_uuid,
)
elif cost_type == "PVC":
self.tag_upsert_monthly_default_pvc_cost_line_item(
first_curr_month, first_next_month, cluster_id, cluster_alias, rate_type, rate_dict, provider_uuid
)
def get_node_to_project_distribution(self, start_date, end_date, cluster_id, node_cost):
"""Returns a list of dictionaries containing the distributed cost.
args:
start_date (datetime, str): The start_date to calculate monthly_cost.
end_date (datetime, str): The end_date to calculate monthly_cost.
cluster_id (str): The id of the cluster
cluster_cost (dec): The flat cost of the cluster
Node to Project Distribution:
- Node to project distribution is based on a per node scenario
- (node_cost) / (number of projects)
Return nested dictionaries:
- ex {'master_3': {'namespaces': ['openshift', 'kube-system'], 'distributed_cost': Decimal('500.0000000000')}
"""
with schema_context(self.schema):
distributed_project_list = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date, usage_start__lt=end_date, cluster_id=cluster_id
)
.filter(namespace__isnull=False)
.filter(node__isnull=False)
.values("namespace", "node")
.distinct()
)
node_mappings = {}
for project in distributed_project_list:
node_value = project.get("node")
namespace_value = project.get("namespace")
node_map = node_mappings.get(node_value)
if node_map:
namespaces = copy.deepcopy(node_map.get("namespaces", []))
namespaces.append(namespace_value)
node_map["namespaces"] = namespaces
node_map["distributed_cost"] = Decimal(node_cost) / Decimal(len(namespaces))
node_mappings[node_value] = node_map
else:
initial_map = {"namespaces": [namespace_value], "distributed_cost": Decimal(node_cost)}
node_mappings[node_value] = initial_map
return node_mappings
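# Illustrative sketch (node name, namespaces, and cost invented): the per-node
# split described in the docstring above, where node_cost is divided evenly
# across the projects found on that node.
def _example_node_to_project_split():
    from decimal import Decimal
    node_cost = Decimal("500")
    namespaces = ["openshift", "kube-system"]
    per_project = node_cost / Decimal(len(namespaces))  # 250 per project
    return {"master_3": {"namespaces": namespaces, "distributed_cost": per_project}}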
def upsert_monthly_node_cost_line_item(
self, start_date, end_date, cluster_id, cluster_alias, rate_type, node_cost, distribution, provider_uuid
):
"""Update or insert daily summary line item for node cost."""
unique_nodes = self.get_distinct_nodes(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
project_distrib_map = self.get_node_to_project_distribution(start_date, end_date, cluster_id, node_cost)
with schema_context(self.schema):
for node in unique_nodes:
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
namespace__isnull=True,
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, node_cost)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug("Node (%s) has a monthly infrastructure cost of %s.", node, node_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug("Node (%s) has a monthly supplemenarty cost of %s.", node, node_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
# Distribute the node cost to the projects running on each node.
project_nodes = project_distrib_map.keys()
for project_node in project_nodes:
for namespace in project_distrib_map[project_node]["namespaces"]:
distributed_cost = project_distrib_map[project_node]["distributed_cost"]
project_line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=project_node,
namespace=namespace,
data_source="Pod",
).first()
if not project_line_item:
project_line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=project_node,
namespace=namespace,
data_source="Pod",
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, distributed_cost)
log_statement = (
f"Distributing Node Cost to Project:\n"
f" node ({project_node}) cost: {node_cost} \n"
f" project ({namespace}) distributed cost: {distributed_cost}\n"
f" distribution type: {distribution}\n"
)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
project_line_item.infrastructure_project_monthly_cost = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
project_line_item.supplementary_project_monthly_cost = monthly_cost
project_line_item.save()
LOG.debug(log_statement)
def tag_upsert_monthly_node_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, distribution, provider_uuid
):
"""
Update or insert daily summary line item for node cost.
It checks to see if a line item exists for each node
that contains the tag key:value pair,
if it does then the price is added to the monthly cost.
"""
unique_nodes = self.get_distinct_nodes(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
for node in unique_nodes:
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
for value_name, rate_value in tag_values.items():
# this makes sure that there is an entry for that node
# that contains the specified key_value pair
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
node=node,
pod_labels__contains={tag_key: value_name},
).first()
if item_check:
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
source_uuid=provider_uuid,
)
node_cost = rate_value
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug("Node (%s) has a monthly infrastructure cost of %s.", node, rate_value)
if line_item.infrastructure_monthly_cost_json:
node_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0)
+ rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, node_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug("Node (%s) has a monthly supplemenarty cost of %s.", node, rate_value)
if line_item.supplementary_monthly_cost_json:
node_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, node_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def tag_upsert_monthly_default_node_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, distribution, provider_uuid
):
"""
Update or insert a daily summary line item for default node cost.
For each node, it checks for line items that carry the tag key but whose
value does not have an explicitly defined rate; for each such value the
default rate is added to the monthly cost.
"""
unique_nodes = self.get_distinct_nodes(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
for node in unique_nodes:
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
tag_default = tag_values.get("default_value")
values_to_skip = tag_values.get("defined_keys")
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
node=node,
pod_labels__has_key=tag_key,
)
for value in values_to_skip:
item_check = item_check.exclude(pod_labels__contains={tag_key: value})
# this won't run if there are no matching items and item_check will continue to be
# filtered until there are no items left
while item_check:
# get the first value for our tag key and exclude it from the queryset for the next check
# will remove values until there are none left
tag_key_value = item_check.first().pod_labels.get(tag_key)
item_check = item_check.exclude(pod_labels__contains={tag_key: tag_key_value})
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Node",
node=node,
data_source="Pod",
source_uuid=provider_uuid,
)
node_cost = tag_default
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.info(
"Node (%s) has a default monthly infrastructure cost of %s.", node, tag_default
)
if line_item.infrastructure_monthly_cost_json:
node_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, node_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.info(
"Node (%s) has a default monthly supplemenarty cost of %s.", node, tag_default
)
if line_item.supplementary_monthly_cost_json:
node_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, node_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def tag_upsert_monthly_default_pvc_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, provider_uuid
):
"""
Update or insert a daily summary line item for default PVC cost.
For each PVC, it checks for line items that carry the tag key but whose
value does not have an explicitly defined rate; for each such value the
default rate is added to the monthly cost.
"""
distribution = metric_constants.PVC_DISTRIBUTION
unique_pvcs = self.get_distinct_pvcs(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
for pvc, node, namespace in unique_pvcs:
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
tag_default = tag_values.get("default_value")
values_to_skip = tag_values.get("defined_keys")
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
persistentvolumeclaim=pvc,
node=node,
volume_labels__has_key=tag_key,
namespace=namespace,
)
for value in values_to_skip:
item_check = item_check.exclude(volume_labels__contains={tag_key: value})
# this won't run if there are no matching items and item_check will continue to be
# filtered until there are no items left
while item_check:
# get the first value for our tag key and exclude it from the queryset for the next check
# will remove values until there are none left
tag_key_value = item_check.first().volume_labels.get(tag_key)
item_check = item_check.exclude(volume_labels__contains={tag_key: tag_key_value})
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
source_uuid=provider_uuid,
)
pvc_cost = tag_default
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.info("PVC (%s) has a default monthly infrastructure cost of %s.", pvc, tag_default)
if line_item.infrastructure_monthly_cost_json:
pvc_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, pvc_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.info("PVC (%s) has a default monthly supplemenarty cost of %s.", pvc, tag_default)
if line_item.supplementary_monthly_cost_json:
pvc_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, pvc_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def get_cluster_to_node_distribution(self, start_date, end_date, cluster_id, distribution, cluster_cost):
"""Returns a list of dictionaries containing the distributed cost.
args:
start_date (datetime, str): The start_date to calculate monthly_cost.
end_date (datetime, str): The end_date to calculate monthly_cost.
cluster_id (str): The id of the cluster
cluster_cost (dec): The flat cost of the cluster
distribution: Choice of monthly distribution ex. (memory or cpu)
Memory Distribution:
- (node memory capacity/cluster memory capacity) x cluster_cost
CPU Distribution:
- (node cpu capacity/cluster cpu capacity) x cluster_cost
Return list of dictionaries: ex [{"node":"aws_compute2", "distributed_cost": 285.71}]
"""
node_column = "node_capacity_cpu_core_hours"
cluster_column = "cluster_capacity_cpu_core_hours"
if "memory" in distribution:
node_column = "node_capacity_memory_gigabyte_hours"
cluster_column = "cluster_capacity_memory_gigabyte_hours"
with schema_context(self.schema):
distributed_node_list = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date, usage_start__lt=end_date, cluster_id=cluster_id
)
.values("node")
.annotate(
distributed_cost=ExpressionWrapper(
Sum(node_column) / Sum(cluster_column) * cluster_cost, output_field=DecimalField()
)
)
)
# TIP: For debugging add these to the annotation
# node_hours=Sum(node_column),
# cluster_hours=Sum(cluster_column),
# node_to_cluster_ratio=Sum(node_column)/Sum(cluster_column)
return distributed_node_list
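# Illustrative sketch (capacities invented): the capacity-ratio formula from the
# docstring above, (node capacity / cluster capacity) * cluster_cost, shown for
# a single node using CPU core hours.
def _example_cluster_to_node_share():
    from decimal import Decimal
    cluster_cost = Decimal("1000")
    node_capacity_cpu_core_hours = Decimal("16")
    cluster_capacity_cpu_core_hours = Decimal("56")
    return (node_capacity_cpu_core_hours / cluster_capacity_cpu_core_hours) * cluster_cost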
def get_cluster_to_project_distribution(self, start_date, end_date, cluster_id, distribution, cluster_cost):
"""Returns a list of dictionaries containing the distributed cost.
args:
start_date (datetime, str): The start_date to calculate monthly_cost.
end_date (datetime, str): The end_date to calculate monthly_cost.
cluster_id (str): The id of the cluster
cluster_cost (dec): The flat cost of the cluster
distribution: Choice of monthly distribution ex. (memory or cpu)
Project Distribution:
- Project distribution is a rolling window estimate of month to date.
- (project_usage / cluster_usage) x cluster_cost
Return list of dictionaries:
- ex [{'namespace': 'openshift', 'distributed_cost': Decimal('71.84')}]
"""
usage_column = "pod_usage_cpu_core_hours"
if "memory" in distribution:
usage_column = "pod_usage_memory_gigabyte_hours"
with schema_context(self.schema):
cluster_hours = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date, usage_start__lt=end_date, cluster_id=cluster_id
).aggregate(cluster_hours=Sum(usage_column))
).get("cluster_hours")
distributed_project_list = (
OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date, usage_start__lt=end_date, cluster_id=cluster_id
)
.filter(namespace__isnull=False)
.values("namespace")
.annotate(
distributed_cost=ExpressionWrapper(
Sum(usage_column) / cluster_hours * cluster_cost, output_field=DecimalField()
)
)
)
return distributed_project_list
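# Illustrative sketch (usage numbers invented): the usage-ratio formula from the
# docstring above, (project usage / cluster usage) * cluster_cost, shown for a
# single namespace using pod CPU core hours.
def _example_cluster_to_project_share():
    from decimal import Decimal
    cluster_cost = Decimal("1000")
    project_pod_usage_cpu_core_hours = Decimal("71.84")
    cluster_pod_usage_cpu_core_hours = Decimal("1000")
    return (project_pod_usage_cpu_core_hours / cluster_pod_usage_cpu_core_hours) * cluster_cost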
def upsert_monthly_cluster_cost_line_item(
self, start_date, end_date, cluster_id, cluster_alias, rate_type, cluster_cost, distribution, provider_uuid
):
"""
Update or insert a daily summary line item for cluster cost.
args:
start_date (datetime, str): The start_date to calculate monthly_cost.
end_date (datetime, str): The end_date to calculate monthly_cost.
cluster_id (str): The id of the cluster
cluster_alias: The name of the cluster
rate_type (str): The cost type the rate applies to (infrastructure or supplementary).
cluster_cost (Decimal): The flat monthly cost of the cluster.
distribution (str): Choice of monthly distribution. ex: memory or cpu
provider_uuid (str): The UUID of the provider (source).
"""
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
distribution_list = self.get_cluster_to_node_distribution(
start_date, end_date, cluster_id, distribution, cluster_cost
)
if report_period:
with schema_context(self.schema):
# NOTE: Logic change: instead of a single cluster cost entry per cluster,
# there is now one cluster cost entry per node.
LOG.debug("Cluster (%s) has a monthly cost of %s.", cluster_id, cluster_cost)
LOG.debug("Distributing the cluster cost to nodes using %s distribution.", distribution)
for node_dikt in distribution_list:
node = node_dikt.get("node")
distributed_cost = node_dikt.get("distributed_cost", Decimal(0))
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
node=node,
data_source="Pod",
namespace__isnull=True,
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
node=node,
data_source="Pod",
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, distributed_cost)
log_statement = (
f"Distributing Cluster Cost to Nodes:\n"
f" cluster ({cluster_id}) cost: {cluster_cost} \n"
f" node ({node}) distributed cost: {distributed_cost}\n"
f" distribution type: {distribution}\n"
)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
line_item.supplementary_monthly_cost_json = monthly_cost
LOG.debug(log_statement)
line_item.save()
# Project Distribution
project_distribution_list = self.get_cluster_to_project_distribution(
start_date, end_date, cluster_id, distribution, cluster_cost
)
with schema_context(self.schema):
for project_dikt in project_distribution_list:
namespace = project_dikt.get("namespace")
distributed_cost = project_dikt.get("distributed_cost", Decimal(0))
project_line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
namespace=namespace,
data_source="Pod",
).first()
if not project_line_item:
project_line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
namespace=namespace,
data_source="Pod",
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, distributed_cost)
log_statement = (
f"Distributing Cluster Cost to Project:\n"
f" cluster ({cluster_id}) cost: {cluster_cost} \n"
f" project ({namespace}) distributed cost: {distributed_cost}\n"
f" distribution type: {distribution}\n"
)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
project_line_item.infrastructure_project_monthly_cost = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
project_line_item.supplementary_project_monthly_cost = monthly_cost
project_line_item.save()
LOG.debug(log_statement)
def tag_upsert_monthly_pvc_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, provider_uuid
):
"""
Update or insert daily summary line item for PVC cost.
It checks to see if a line item exists for each PVC
that contains the tag key:value pair,
if it does then the price is added to the monthly cost.
"""
distribution = metric_constants.PVC_DISTRIBUTION
unique_pvcs = self.get_distinct_pvcs(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
for pvc, node, namespace in unique_pvcs:
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
for value_name, rate_value in tag_values.items():
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
persistentvolumeclaim=pvc,
node=node,
volume_labels__contains={tag_key: value_name},
namespace=namespace,
).first()
if item_check:
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
source_uuid=provider_uuid,
)
pvc_cost = rate_value
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug("PVC (%s) has a monthly infrastructure cost of %s.", pvc, rate_value)
if line_item.infrastructure_monthly_cost_json:
pvc_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0)
+ rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, pvc_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug("PVC (%s) has a monthly supplemenarty cost of %s.", pvc, rate_value)
if line_item.supplementary_monthly_cost_json:
pvc_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, pvc_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def upsert_monthly_pvc_cost_line_item(
self, start_date, end_date, cluster_id, cluster_alias, rate_type, pvc_cost, provider_uuid
):
"""Update or insert daily summary line item for pvc cost."""
unique_pvcs = self.get_distinct_pvcs(start_date, end_date, cluster_id)
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
for pvc, node, namespace in unique_pvcs:
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
infrastructure_project_monthly_cost__isnull=True,
supplementary_project_monthly_cost__isnull=True,
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
data_source="Storage",
namespace=namespace,
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(metric_constants.PVC_DISTRIBUTION, pvc_cost)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug("PVC (%s) has a monthly infrastructure cost of %s.", pvc, pvc_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug("PVC (%s) has a monthly supplemenarty cost of %s.", pvc, pvc_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
# PVC to project Distribution
project_line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
namespace=namespace,
data_source="Storage",
infrastructure_monthly_cost_json__isnull=True,
supplementary_monthly_cost_json__isnull=True,
).first()
if not project_line_item:
project_line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="PVC",
persistentvolumeclaim=pvc,
node=node,
namespace=namespace,
data_source="Storage",
source_uuid=provider_uuid,
)
monthly_cost = self.generate_monthly_cost_json_object(metric_constants.PVC_DISTRIBUTION, pvc_cost)
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug("PVC (%s) has a monthly project infrastructure cost of %s.", pvc, pvc_cost)
project_line_item.infrastructure_project_monthly_cost = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug("PVC (%s) has a monthly project supplemenarty cost of %s.", pvc, pvc_cost)
project_line_item.supplementary_project_monthly_cost = monthly_cost
project_line_item.save()
def tag_upsert_monthly_cluster_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, distribution, provider_uuid
):
"""
Update or insert a daily summary line item for cluster cost based on tag rates.
It checks to see if a line item exists for the cluster
that contains the tag key:value pair,
if it does then the price is added to the monthly cost.
"""
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
if report_period:
with schema_context(self.schema):
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
for value_name, rate_value in tag_values.items():
# this makes sure that there is an entry for that node
# that contains the specified key_value pair
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
pod_labels__contains={tag_key: value_name},
).first()
if item_check:
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
data_source="Pod",
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
data_source="Pod",
source_uuid=provider_uuid,
)
cluster_cost = rate_value
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug(
"Cluster (%s) has a monthly infrastructure cost of %s from tag rates.",
cluster_id,
rate_value,
)
if line_item.infrastructure_monthly_cost_json:
cluster_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0)
+ rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, cluster_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug(
"Cluster (%s) has a monthly supplemenarty cost of %s from tag rates.",
cluster_id,
rate_value,
)
if line_item.supplementary_monthly_cost_json:
cluster_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + rate_value
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, cluster_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def tag_upsert_monthly_default_cluster_cost_line_item( # noqa: C901
self, start_date, end_date, cluster_id, cluster_alias, rate_type, rate_dict, distribution, provider_uuid
):
"""
Update or insert a daily summary line item for default cluster cost.
It checks for line items in the cluster that carry the tag key but whose
value does not have an explicitly defined rate; for each such value the
default rate is added to the monthly cost.
"""
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
with schema_context(self.schema):
if rate_dict is not None:
for tag_key in rate_dict:
tag_values = rate_dict.get(tag_key)
tag_default = tag_values.get("default_value")
values_to_skip = tag_values.get("defined_keys")
item_check = OCPUsageLineItemDailySummary.objects.filter(
usage_start__gte=start_date,
usage_start__lte=end_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
pod_labels__has_key=tag_key,
)
for value in values_to_skip:
item_check = item_check.exclude(pod_labels__contains={tag_key: value})
# this won't run if there are no matching items and item_check will continue to be
# filtered until there are no items left
while item_check:
# get the first value for our tag key and exclude it from the queryset for the next check
# will remove values until there are none left
tag_key_value = item_check.first().pod_labels.get(tag_key)
item_check = item_check.exclude(pod_labels__contains={tag_key: tag_key_value})
line_item = OCPUsageLineItemDailySummary.objects.filter(
usage_start=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
data_source="Pod",
).first()
if not line_item:
line_item = OCPUsageLineItemDailySummary(
uuid=uuid.uuid4(),
usage_start=start_date,
usage_end=start_date,
report_period=report_period,
cluster_id=cluster_id,
cluster_alias=cluster_alias,
monthly_cost_type="Cluster",
data_source="Pod",
source_uuid=provider_uuid,
)
cluster_cost = tag_default
if rate_type == metric_constants.INFRASTRUCTURE_COST_TYPE:
LOG.debug(
"Cluster (%s) has a default monthly infrastructure cost of %s.",
cluster_id,
tag_default,
)
if line_item.infrastructure_monthly_cost_json:
cluster_cost = (
line_item.infrastructure_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, cluster_cost)
line_item.infrastructure_monthly_cost_json = monthly_cost
elif rate_type == metric_constants.SUPPLEMENTARY_COST_TYPE:
LOG.debug(
"Cluster (%s) has a default monthly supplemenarty cost of %s.", cluster_id, tag_default
)
if line_item.supplementary_monthly_cost_json:
cluster_cost = (
line_item.supplementary_monthly_cost_json.get(distribution, 0) + tag_default
)
monthly_cost = self.generate_monthly_cost_json_object(distribution, cluster_cost)
line_item.supplementary_monthly_cost_json = monthly_cost
line_item.save()
def remove_monthly_cost(self, start_date, end_date, cluster_id, cost_type):
"""Delete all monthly costs of a specific type over a date range."""
report_period = self.get_usage_period_by_dates_and_cluster(start_date, end_date, cluster_id)
filters = {
"usage_start": start_date.date(),
"report_period": report_period,
"cluster_id": cluster_id,
"monthly_cost_type": cost_type,
}
for rate_type, __ in metric_constants.COST_TYPE_CHOICES:
cost_filters = [
f"{rate_type.lower()}_monthly_cost__isnull",
f"{rate_type.lower()}_monthly_cost_json__isnull",
f"{rate_type.lower()}_project_monthly_cost__isnull",
]
for cost_filter in cost_filters:
filters.update({cost_filter: False})
LOG.info(
"Removing %s %s monthly costs \n\tfor %s \n\tfrom %s - %s.",
cost_type,
rate_type,
cluster_id,
start_date,
end_date,
)
with schema_context(self.schema):
OCPUsageLineItemDailySummary.objects.filter(**filters).all().delete()
filters.pop(cost_filter)
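# Illustrative sketch: the nullability filter names generated above for a single
# rate type. The rate type string ("Infrastructure") is an assumption standing in
# for the values in metric_constants.COST_TYPE_CHOICES.
def _example_monthly_cost_filter_names(rate_type="Infrastructure"):
    prefix = rate_type.lower()
    return [
        f"{prefix}_monthly_cost__isnull",
        f"{prefix}_monthly_cost_json__isnull",
        f"{prefix}_project_monthly_cost__isnull",
    ]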
def populate_node_label_line_item_daily_table(self, start_date, end_date, cluster_id):
"""Populate the daily node label aggregate of line items table.
Args:
start_date (datetime.date) The date to start populating the table.
end_date (datetime.date) The date to end on.
cluster_id (String) Cluster Identifier
Returns
(None)
"""
# Cast string to date object
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
table_name = self._table_map["node_label_line_item_daily"]
daily_sql = pkgutil.get_data("masu.database", "sql/reporting_ocpnodelabellineitem_daily.sql")
daily_sql = daily_sql.decode("utf-8")
daily_sql_params = {
"uuid": str(uuid.uuid4()).replace("-", "_"),
"start_date": start_date,
"end_date": end_date,
"cluster_id": cluster_id,
"schema": self.schema,
}
daily_sql, daily_sql_params = self.jinja_sql.prepare_query(daily_sql, daily_sql_params)
self._execute_raw_sql_query(table_name, daily_sql, start_date, end_date, bind_params=list(daily_sql_params))
def populate_usage_costs(self, infrastructure_rates, supplementary_rates, start_date, end_date, cluster_id):
"""Update the reporting_ocpusagelineitem_daily_summary table with usage costs."""
# Cast start_date and end_date to date object, if they aren't already
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
OCPUsageLineItemDailySummary.objects.filter(
cluster_id=cluster_id, usage_start__gte=start_date, usage_start__lte=end_date
).update(
infrastructure_usage_cost=JSONBBuildObject(
Value("cpu"),
Coalesce(
Value(infrastructure_rates.get("cpu_core_usage_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_usage_cpu_core_hours"), Value(0), output_field=DecimalField())
+ Value(infrastructure_rates.get("cpu_core_request_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_request_cpu_core_hours"), Value(0), output_field=DecimalField())
+ Value(
infrastructure_rates.get("cpu_core_effective_usage_per_hour", 0), output_field=DecimalField()
)
* Coalesce(F("pod_effective_usage_cpu_core_hours"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
Value("memory"),
Coalesce(
Value(infrastructure_rates.get("memory_gb_usage_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_usage_memory_gigabyte_hours"), Value(0), output_field=DecimalField())
+ Value(infrastructure_rates.get("memory_gb_request_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_request_memory_gigabyte_hours"), Value(0), output_field=DecimalField())
+ Value(
infrastructure_rates.get("memory_gb_effective_usage_per_hour", 0), output_field=DecimalField()
)
* Coalesce(F("pod_effective_usage_memory_gigabyte_hours"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
Value("storage"),
Coalesce(
Value(infrastructure_rates.get("storage_gb_usage_per_month", 0), output_field=DecimalField())
* Coalesce(F("persistentvolumeclaim_usage_gigabyte_months"), Value(0), output_field=DecimalField())
+ Value(infrastructure_rates.get("storage_gb_request_per_month", 0), output_field=DecimalField())
* Coalesce(F("volume_request_storage_gigabyte_months"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
),
supplementary_usage_cost=JSONBBuildObject(
Value("cpu"),
Coalesce(
Value(supplementary_rates.get("cpu_core_usage_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_usage_cpu_core_hours"), Value(0), output_field=DecimalField())
+ Value(supplementary_rates.get("cpu_core_request_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_request_cpu_core_hours"), Value(0), output_field=DecimalField())
+ Value(
supplementary_rates.get("cpu_core_effective_usage_per_hour", 0), output_field=DecimalField()
)
* Coalesce(F("pod_effective_usage_cpu_core_hours"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
Value("memory"),
Coalesce(
Value(supplementary_rates.get("memory_gb_usage_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_usage_memory_gigabyte_hours"), Value(0), output_field=DecimalField())
+ Value(supplementary_rates.get("memory_gb_request_per_hour", 0), output_field=DecimalField())
* Coalesce(F("pod_request_memory_gigabyte_hours"), Value(0), output_field=DecimalField())
+ Value(
supplementary_rates.get("memory_gb_effective_usage_per_hour", 0), output_field=DecimalField()
)
* Coalesce(F("pod_effective_usage_memory_gigabyte_hours"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
Value("storage"),
Coalesce(
Value(supplementary_rates.get("storage_gb_usage_per_month", 0), output_field=DecimalField())
* Coalesce(F("persistentvolumeclaim_usage_gigabyte_months"), Value(0), output_field=DecimalField())
+ Value(supplementary_rates.get("storage_gb_request_per_month", 0), output_field=DecimalField())
* Coalesce(F("volume_request_storage_gigabyte_months"), Value(0), output_field=DecimalField()),
0,
output_field=DecimalField(),
),
),
)
def populate_tag_usage_costs( # noqa: C901
self, infrastructure_rates, supplementary_rates, start_date, end_date, cluster_id
):
"""
Update the reporting_ocpusagelineitem_daily_summary table with
usage costs based on tag rates.
Due to the way the tag_keys are stored it loops through all of
the tag keys to filter and update costs.
The data structure for the infrastructure and supplementary rates is
a dictionary that includes the metric name, the tag key,
the tag value names, and the tag values, for example:
{'cpu_core_usage_per_hour': {
'app': {
'far': '0.2000000000', 'manager': '100.0000000000', 'walk': '5.0000000000'
}
}
}
"""
# defines the usage type for each metric
metric_usage_type_map = {
"cpu_core_usage_per_hour": "cpu",
"cpu_core_request_per_hour": "cpu",
"cpu_core_effective_usage_per_hour": "cpu",
"memory_gb_usage_per_hour": "memory",
"memory_gb_request_per_hour": "memory",
"memory_gb_effective_usage_per_hour": "memory",
"storage_gb_usage_per_month": "storage",
"storage_gb_request_per_month": "storage",
}
# define the rates so the loop can operate on both rate types
rate_types = [
{"rates": infrastructure_rates, "sql_file": "sql/infrastructure_tag_rates.sql"},
{"rates": supplementary_rates, "sql_file": "sql/supplementary_tag_rates.sql"},
]
# Cast start_date and end_date to date object, if they aren't already
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
# updates costs from tags
for rate_type in rate_types:
rate = rate_type.get("rates")
sql_file = rate_type.get("sql_file")
for metric in rate:
tags = rate.get(metric, {})
usage_type = metric_usage_type_map.get(metric)
if usage_type == "storage":
labels_field = "volume_labels"
else:
labels_field = "pod_labels"
table_name = self._table_map["line_item_daily_summary"]
for tag_key in tags:
tag_vals = tags.get(tag_key, {})
value_names = list(tag_vals.keys())
for val_name in value_names:
rate_value = tag_vals[val_name]
key_value_pair = json.dumps({tag_key: val_name})
tag_rates_sql = pkgutil.get_data("masu.database", sql_file)
tag_rates_sql = tag_rates_sql.decode("utf-8")
tag_rates_sql_params = {
"start_date": start_date,
"end_date": end_date,
"rate": rate_value,
"cluster_id": cluster_id,
"schema": self.schema,
"usage_type": usage_type,
"metric": metric,
"k_v_pair": key_value_pair,
"labels_field": labels_field,
}
tag_rates_sql, tag_rates_sql_params = self.jinja_sql.prepare_query(
tag_rates_sql, tag_rates_sql_params
)
msg = f"Running populate_tag_usage_costs SQL with params: {tag_rates_sql_params}"
LOG.info(msg)
self._execute_raw_sql_query(
table_name, tag_rates_sql, start_date, end_date, bind_params=list(tag_rates_sql_params)
)
def populate_tag_usage_default_costs( # noqa: C901
self, infrastructure_rates, supplementary_rates, start_date, end_date, cluster_id
):
"""
Update the reporting_ocpusagelineitem_daily_summary table
with usage costs based on tag rates.
The data structure for the infrastructure and supplementary rates
is a dictionary that includes the metric, the tag key,
the default value, and the values for that key that already have
rates defined and do not need the default applied,
for example:
{
'cpu_core_usage_per_hour': {
'app': {
'default_value': '100.0000000000', 'defined_keys': ['far', 'manager', 'walk']
}
}
}
"""
# defines the usage type for each metric
metric_usage_type_map = {
"cpu_core_usage_per_hour": "cpu",
"cpu_core_request_per_hour": "cpu",
"cpu_core_effective_usage_per_hour": "cpu",
"memory_gb_usage_per_hour": "memory",
"memory_gb_request_per_hour": "memory",
"memory_gb_effective_usage_per_hour": "memory",
"storage_gb_usage_per_month": "storage",
"storage_gb_request_per_month": "storage",
}
# define the rates so the loop can operate on both rate types
rate_types = [
{"rates": infrastructure_rates, "sql_file": "sql/default_infrastructure_tag_rates.sql"},
{"rates": supplementary_rates, "sql_file": "sql/default_supplementary_tag_rates.sql"},
]
# Cast start_date and end_date to date object, if they aren't already
if isinstance(start_date, str):
start_date = datetime.datetime.strptime(start_date, "%Y-%m-%d").date()
end_date = datetime.datetime.strptime(end_date, "%Y-%m-%d").date()
if isinstance(start_date, datetime.datetime):
start_date = start_date.date()
end_date = end_date.date()
# updates costs from tags
for rate_type in rate_types:
rate = rate_type.get("rates")
sql_file = rate_type.get("sql_file")
for metric in rate:
tags = rate.get(metric, {})
usage_type = metric_usage_type_map.get(metric)
if usage_type == "storage":
labels_field = "volume_labels"
else:
labels_field = "pod_labels"
table_name = self._table_map["line_item_daily_summary"]
for tag_key in tags:
key_value_pair = []
tag_vals = tags.get(tag_key)
rate_value = tag_vals.get("default_value", 0)
if rate_value == 0:
continue
value_names = tag_vals.get("defined_keys", [])
for value_to_skip in value_names:
key_value_pair.append(json.dumps({tag_key: value_to_skip}))
tag_rates_sql = pkgutil.get_data("masu.database", sql_file)
tag_rates_sql = tag_rates_sql.decode("utf-8")
tag_rates_sql_params = {
"start_date": start_date,
"end_date": end_date,
"rate": rate_value,
"cluster_id": cluster_id,
"schema": self.schema,
"usage_type": usage_type,
"metric": metric,
"tag_key": tag_key,
"k_v_pair": key_value_pair,
"labels_field": labels_field,
}
tag_rates_sql, tag_rates_sql_params = self.jinja_sql.prepare_query(
tag_rates_sql, tag_rates_sql_params
)
msg = f"Running populate_tag_usage_default_costs SQL with params: {tag_rates_sql_params}"
LOG.info(msg)
self._execute_raw_sql_query(
table_name, tag_rates_sql, start_date, end_date, bind_params=list(tag_rates_sql_params)
)
def populate_openshift_cluster_information_tables(self, provider, cluster_id, cluster_alias, start_date, end_date):
"""Populate the cluster, node, PVC, and project tables for the cluster."""
cluster = self.populate_cluster_table(provider, cluster_id, cluster_alias)
nodes = self.get_nodes_presto(str(provider.uuid), start_date, end_date)
pvcs = self.get_pvcs_presto(str(provider.uuid), start_date, end_date)
projects = self.get_projects_presto(str(provider.uuid), start_date, end_date)
# pvcs = self.match_node_to_pvc(pvcs, projects)
self.populate_node_table(cluster, nodes)
self.populate_pvc_table(cluster, pvcs)
self.populate_project_table(cluster, projects)
def populate_cluster_table(self, provider, cluster_id, cluster_alias):
"""Get or create an entry in the OCP cluster table."""
with schema_context(self.schema):
cluster, created = OCPCluster.objects.get_or_create(
cluster_id=cluster_id, cluster_alias=cluster_alias, provider=provider
)
if created:
msg = f"Add entry in reporting_ocp_clusters for {cluster_id}/{cluster_alias}"
LOG.info(msg)
return cluster
def populate_node_table(self, cluster, nodes):
"""Get or create an entry in the OCP cluster table."""
LOG.info("Populating reporting_ocp_nodes table.")
with schema_context(self.schema):
for node in nodes:
OCPNode.objects.get_or_create(
node=node[0], resource_id=node[1], node_capacity_cpu_cores=node[2], cluster=cluster
)
def populate_pvc_table(self, cluster, pvcs):
"""Get or create an entry in the OCP cluster table."""
LOG.info("Populating reporting_ocp_pvcs table.")
with schema_context(self.schema):
for pvc in pvcs:
OCPPVC.objects.get_or_create(persistent_volume=pvc[0], persistent_volume_claim=pvc[1], cluster=cluster)
def populate_project_table(self, cluster, projects):
"""Get or create an entry in the OCP cluster table."""
LOG.info("Populating reporting_ocp_projects table.")
with schema_context(self.schema):
for project in projects:
OCPProject.objects.get_or_create(project=project, cluster=cluster)
def get_nodes_presto(self, source_uuid, start_date, end_date):
"""Get the nodes from an OpenShift cluster."""
sql = f"""
SELECT node,
resource_id,
max(node_capacity_cpu_cores) as node_capacity_cpu_cores
FROM hive.{self.schema}.openshift_pod_usage_line_items_daily as ocp
WHERE ocp.source = '{source_uuid}'
AND ocp.year = '{start_date.strftime("%Y")}'
AND ocp.month = '{start_date.strftime("%m")}'
AND ocp.interval_start >= TIMESTAMP '{start_date}'
AND ocp.interval_start < date_add('day', 1, TIMESTAMP '{end_date}')
GROUP BY node,
resource_id
"""
nodes = self._execute_presto_raw_sql_query(self.schema, sql)
return nodes
def get_pvcs_presto(self, source_uuid, start_date, end_date):
"""Get the nodes from an OpenShift cluster."""
sql = f"""
SELECT distinct persistentvolume,
persistentvolumeclaim
FROM hive.{self.schema}.openshift_storage_usage_line_items_daily as ocp
WHERE ocp.source = '{source_uuid}'
AND ocp.year = '{start_date.strftime("%Y")}'
AND ocp.month = '{start_date.strftime("%m")}'
AND ocp.interval_start >= TIMESTAMP '{start_date}'
AND ocp.interval_start < date_add('day', 1, TIMESTAMP '{end_date}')
"""
pvcs = self._execute_presto_raw_sql_query(self.schema, sql)
return pvcs
def get_projects_presto(self, source_uuid, start_date, end_date):
"""Get the nodes from an OpenShift cluster."""
sql = f"""
SELECT distinct namespace
FROM hive.{self.schema}.openshift_pod_usage_line_items_daily as ocp
WHERE ocp.source = '{source_uuid}'
AND ocp.year = '{start_date.strftime("%Y")}'
AND ocp.month = '{start_date.strftime("%m")}'
AND ocp.interval_start >= TIMESTAMP '{start_date}'
AND ocp.interval_start < date_add('day', 1, TIMESTAMP '{end_date}')
"""
projects = self._execute_presto_raw_sql_query(self.schema, sql)
return [project[0] for project in projects]
def get_cluster_for_provider(self, provider_uuid):
"""Return the cluster entry for a provider UUID."""
with schema_context(self.schema):
cluster = OCPCluster.objects.filter(provider_id=provider_uuid).first()
return cluster
def get_nodes_for_cluster(self, cluster_id):
"""Get all nodes for an OCP cluster."""
with schema_context(self.schema):
nodes = OCPNode.objects.filter(cluster_id=cluster_id).values_list("node", "resource_id")
nodes = [(node[0], node[1]) for node in nodes]
return nodes
def get_pvcs_for_cluster(self, cluster_id):
"""Get all nodes for an OCP cluster."""
with schema_context(self.schema):
pvcs = OCPPVC.objects.filter(cluster_id=cluster_id).values_list(
"persistent_volume", "persistent_volume_claim"
)
pvcs = [(pvc[0], pvc[1]) for pvc in pvcs]
return pvcs
def get_projects_for_cluster(self, cluster_id):
"""Get all nodes for an OCP cluster."""
with schema_context(self.schema):
projects = OCPProject.objects.filter(cluster_id=cluster_id).values_list("project")
projects = [project[0] for project in projects]
return projects
def get_openshift_topology_for_provider(self, provider_uuid):
"""Return a dictionary with Cluster topology."""
cluster = self.get_cluster_for_provider(provider_uuid)
topology = {"cluster_id": cluster.cluster_id, "cluster_alias": cluster.cluster_alias}
node_tuples = self.get_nodes_for_cluster(cluster.uuid)
pvc_tuples = self.get_pvcs_for_cluster(cluster.uuid)
topology["nodes"] = [node[0] for node in node_tuples]
topology["resource_ids"] = [node[1] for node in node_tuples]
topology["persistent_volumes"] = [pvc[0] for pvc in pvc_tuples]
topology["persistent_volume_claims"] = [pvc[1] for pvc in pvc_tuples]
topology["projects"] = self.get_projects_for_cluster(cluster.uuid)
return topology
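# Illustrative sketch (identifiers invented): the shape of the topology
# dictionary returned above.
EXAMPLE_TOPOLOGY = {
    "cluster_id": "my-ocp-cluster",
    "cluster_alias": "My OCP Cluster",
    "nodes": ["master_0", "compute_1"],
    "resource_ids": ["i-0001", "i-0002"],
    "persistent_volumes": ["pv-volume-1"],
    "persistent_volume_claims": ["pvc-claim-1"],
    "projects": ["openshift", "kube-system"],
}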
def delete_infrastructure_raw_cost_from_daily_summary(self, provider_uuid, report_period_id, start_date, end_date):
table_name = OCP_REPORT_TABLE_MAP["line_item_daily_summary"]
msg = f"Removing infrastructure_raw_cost for {provider_uuid} from {start_date} to {end_date}."
LOG.info(msg)
sql = f"""
DELETE FROM {self.schema}.reporting_ocpusagelineitem_daily_summary
WHERE usage_start >= '{start_date}'::date
AND usage_start <= '{end_date}'::date
AND report_period_id = {report_period_id}
AND infrastructure_raw_cost IS NOT NULL
AND infrastructure_raw_cost != 0
"""
self._execute_raw_sql_query(table_name, sql, start_date, end_date)
def populate_ocp_on_all_project_daily_summary(self, platform, sql_params):
LOG.info(f"Populating {platform.upper()} records for ocpallcostlineitem_project_daily_summary")
script_file_name = f"reporting_ocpallcostlineitem_project_daily_summary_{platform.lower()}.sql"
script_file_path = f"{self.OCP_ON_ALL_SQL_PATH}{script_file_name}"
self._execute_processing_script("masu.database", script_file_path, sql_params)
def populate_ocp_on_all_daily_summary(self, platform, sql_params):
LOG.info(f"Populating {platform.upper()} records for ocpallcostlineitem_daily_summary")
script_file_name = f"reporting_ocpallcostlineitem_daily_summary_{platform.lower()}.sql"
script_file_path = f"{self.OCP_ON_ALL_SQL_PATH}{script_file_name}"
self._execute_processing_script("masu.database", script_file_path, sql_params)
def populate_ocp_on_all_ui_summary_tables(self, sql_params):
for perspective in OCP_ON_ALL_PERSPECTIVES:
LOG.info(f"Populating {perspective._meta.db_table} data using {sql_params}")
script_file_path = f"{self.OCP_ON_ALL_SQL_PATH}{perspective._meta.db_table}.sql"
self._execute_processing_script("masu.database", script_file_path, sql_params)
def get_max_min_timestamp_from_parquet(self, source_uuid, start_date, end_date):
"""Get the max and min timestamps for parquet data given a date range"""
sql = f"""
SELECT min(interval_start) as min_timestamp,
max(interval_start) as max_timestamp
FROM hive.{self.schema}.openshift_pod_usage_line_items_daily as ocp
WHERE ocp.source = '{source_uuid}'
AND ocp.year = '{start_date.strftime("%Y")}'
AND ocp.month = '{start_date.strftime("%m")}'
AND ocp.interval_start >= TIMESTAMP '{start_date}'
AND ocp.interval_start < date_add('day', 1, TIMESTAMP '{end_date}')
"""
timestamps = self._execute_presto_raw_sql_query(self.schema, sql)
min_timestamp, max_timestamp = timestamps[0]
return parse(min_timestamp), parse(max_timestamp)
| 51.107867 | 119 | 0.579377 | 13,800 | 126,032 | 4.943696 | 0.040072 | 0.042083 | 0.024508 | 0.024625 | 0.83378 | 0.804978 | 0.772848 | 0.755757 | 0.73465 | 0.721619 | 0 | 0.003168 | 0.346404 | 126,032 | 2,465 | 120 | 51.1286 | 0.825042 | 0.124 | 0 | 0.660657 | 0 | 0.001591 | 0.111215 | 0.040255 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04666 | false | 0.00106 | 0.02386 | 0.00053 | 0.098621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22c7cab92448f7fbdb5216234b5cbd6796657787 | 222 | py | Python | tests/conftest.py | JLSteenwyk/ClipKIT | b2d6033e638a78acc36942f9f420d5d3bc0e09ad | [
"MIT"
] | 28 | 2020-06-11T14:06:15.000Z | 2022-03-14T04:32:12.000Z | tests/conftest.py | JLSteenwyk/ClipKIT | b2d6033e638a78acc36942f9f420d5d3bc0e09ad | [
"MIT"
] | 10 | 2020-09-14T13:59:13.000Z | 2022-02-25T17:17:01.000Z | tests/conftest.py | JLSteenwyk/ClipKIT | b2d6033e638a78acc36942f9f420d5d3bc0e09ad | [
"MIT"
] | 1 | 2020-12-15T07:25:09.000Z | 2020-12-15T07:25:09.000Z | # global fixtures can go here
import pytest


def pytest_configure(config):
    config.addinivalue_line("markers", "integration: mark as integration test")
    config.addinivalue_line("markers", "slow: mark as slow test")
| 27.75 | 79 | 0.756757 | 29 | 222 | 5.689655 | 0.62069 | 0.206061 | 0.254545 | 0.339394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144144 | 222 | 7 | 80 | 31.714286 | 0.868421 | 0.121622 | 0 | 0 | 0 | 0 | 0.38342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22efb227a59c5d0b08ab4ff7dbf6fb6755a8fa17 | 214 | py | Python | battleforcastile/utils/get_match_by_id.py | battleforcastile/battleforcastile | 65223fcb56ecc550f1a7c7b70beadff22c866d85 | [
"MIT"
] | null | null | null | battleforcastile/utils/get_match_by_id.py | battleforcastile/battleforcastile | 65223fcb56ecc550f1a7c7b70beadff22c866d85 | [
"MIT"
] | 1 | 2021-08-21T10:16:03.000Z | 2021-08-21T10:16:03.000Z | battleforcastile/utils/get_match_by_id.py | battleforcastile/battleforcastile | 65223fcb56ecc550f1a7c7b70beadff22c866d85 | [
"MIT"
] | null | null | null | import requests
from battleforcastile.constants import BATTLEFORCASTILE_BACKEND_URL
def get_match_by_id(match_id: int):
url = f'{BATTLEFORCASTILE_BACKEND_URL}/matches/{match_id}/'
return requests.get(url) | 30.571429 | 67 | 0.808411 | 29 | 214 | 5.655172 | 0.551724 | 0.280488 | 0.317073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107477 | 214 | 7 | 68 | 30.571429 | 0.858639 | 0 | 0 | 0 | 0 | 0 | 0.232558 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a3dae3df14116aa1a32d8b1fa1fd7a42ee14a9e8 | 23,935 | py | Python | src/python/nimbusml/tests/test_syntax.py | montehoover/NimbusML | f6be39ce9359786976429bab0ccd837e849b4ba5 | [
"MIT"
] | 134 | 2018-11-01T22:15:24.000Z | 2019-05-04T11:30:08.000Z | src/python/nimbusml/tests/test_syntax.py | montehoover/NimbusML | f6be39ce9359786976429bab0ccd837e849b4ba5 | [
"MIT"
] | 226 | 2019-05-07T19:00:44.000Z | 2021-01-06T07:59:48.000Z | src/python/nimbusml/tests/test_syntax.py | montehoover/NimbusML | f6be39ce9359786976429bab0ccd837e849b4ba5 | [
"MIT"
] | 43 | 2019-05-15T20:19:42.000Z | 2022-03-30T10:26:07.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------------------------
import unittest
import pandas
import six
from nimbusml import Pipeline
from nimbusml.feature_extraction.categorical import OneHotVectorizer, \
    OneHotHashVectorizer
from nimbusml.feature_extraction.text import NGramFeaturizer
from nimbusml.feature_selection import MutualInformationSelector
from nimbusml.internal.entrypoints._ngramextractor_ngram import n_gram
from nimbusml.internal.utils.data_roles import Role
from nimbusml.linear_model import FastLinearBinaryClassifier
from nimbusml.preprocessing.normalization import LogMeanVarianceScaler
from nimbusml.preprocessing.schema import ColumnConcatenator as Concat, \
    ColumnDropper as Drop

# from sklearn.pipeline import Pipeline

if six.PY2:
    pass
else:
    pass
class TestSyntax(unittest.TestCase):
def test_syntax1(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer(),
FastLinearBinaryClassifier(maximum_number_of_iterations=1)
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax2(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << 'education',
OneHotVectorizer(max_num_terms=2) << 'workclass',
FastLinearBinaryClassifier(maximum_number_of_iterations=1)
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax2_lt(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << 'education',
OneHotVectorizer(max_num_terms=2) << 'workclass',
FastLinearBinaryClassifier(maximum_number_of_iterations=1)
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax3(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << 'education',
OneHotVectorizer(max_num_terms=2) << 'workclass',
# Currently the learner does not use edu1
# unless it is specified explicitely so nimbusml
# does not do what the syntax implicetely tells.
# We need to modify either the bridge to look into
# every available column at one step.
FastLinearBinaryClassifier(maximum_number_of_iterations=1)
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax4(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << {'edu2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'wki': 'workclass'},
Concat() << {'Inputs': ['edu1', 'edu2', 'wki']},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Inputs'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
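# A note on the `<<` column syntax used throughout these tests (illustrative; the
# variable name below is hypothetical): `transform << {'out': 'in'}` maps input
# column 'in' to a new output column 'out', while `transform << 'col'` transforms
# 'col' in place. The same mapping can also be passed via the `columns` keyword,
# which is the form exercised by test_syntax4_columns below:
#
#     ohe = OneHotVectorizer(columns={'edu1': 'education'})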
def test_syntax4_2(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << {'edu2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'wki': 'workclass'},
Concat() << {'Inputs': ['edu1', 'edu2', 'wki']},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Inputs'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax4_dict(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << {'edu2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'wki': 'workclass'},
Concat() << {'Inputs': ['edu1', 'edu2', 'wki']},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Inputs'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax4_columns(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer(columns={'edu1': 'education'}),
OneHotHashVectorizer(columns={'edu2': 'education'}),
OneHotVectorizer(max_num_terms=2, columns={'wki': 'workclass'}),
Concat(columns={'Inputs': ['edu1', 'edu2', 'wki']}),
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Inputs'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
@unittest.skip(
"skip until we have a proper way to catch exception raised by nimbusml")
def test_syntax4_fail(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << {'edu2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'wki': 'workclass'},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << ['edu1', 'edu2',
'wki']
])
try:
exp.fit(X, y, verbose=0)
assert False
except RuntimeError as e:
assert "ConcatTransform() << {'Input': ['edu1', 'edu2', 'wki']}" \
in str(e)
@unittest.skip(
"skip until we have a proper way to catch exception raised by nimbusml")
def test_syntax4_fail2(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'edu1': 'education'},
OneHotHashVectorizer() << {'edu2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'wki': 'workclass'},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << ['edu1', 'edu4',
'wki']
])
try:
exp.fit(X, y, verbose=0)
raise AssertionError("The test should not reach this line.")
except Exception as e:
assert "Feature column 'edu4' not found" in str(e)
def test_syntax5(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'Features': ['f%d' % i for i in range(1, 4)]},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Features'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
@unittest.skip("regular expression not yet implemented")
def test_syntax5_regular_expression(self):
# REVIEW: not implemented yet
# The best would be to handle regular expression inside nimbusml.
# It could be handled in entrypoint.py just before calling nimbusml.
# It can be handled inside Pipeline if it is aware of
# the input schema.
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'Features': 'f[0-9]+'},
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'Features'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax6(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'Features': ['f%d' % i for i in range(1, 4)]},
Drop() << ['education', 'workclass', 'f1', 'f2', 'f3'],
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << ['Features']
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax6_not_features(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'FeaturesCustom': ['f%d' % i for i in range(1, 4)]},
Drop() << ['education', 'workclass', 'f1', 'f2', 'f3'],
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << 'FeaturesCustom'
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
@unittest.skip(reason="what should be the default behavior")
def test_syntax6_change_role(self):
# REVIEW: the pipeline drops all columns but one -->
# nimbusml still thinks the Features are eduction, workclass
# and does not automatically detects that the only remaining
# columns should play that role
# (maybe because the label column is here too even though
# the only remaining column without a role is Features).
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'Features': ['f%d' % i for i in range(1, 4)]},
Drop() << ['education', 'workclass', 'f1', 'f2', 'f3'],
FastLinearBinaryClassifier(maximum_number_of_iterations=1) << ['Features']
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
@unittest.skip("regular expression not yet implemented")
def test_syntax6_regular_expression(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
y = df['y']
exp = Pipeline([
OneHotVectorizer() << {'f1': 'education'},
OneHotHashVectorizer() << {'f2': 'education'},
OneHotVectorizer(max_num_terms=2) << {'f3': 'workclass'},
Concat() << {'Features': ['f%d' % i for i in range(1, 4)]},
Drop() << '~Features',
FastLinearBinaryClassifier(maximum_number_of_iterations=1)
])
exp.fit(X, y, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
def test_syntax9_slots_label(self):
train_reviews = pandas.DataFrame(
data=dict(
review=[
"This is great",
"I hate it",
"Love it",
"Do not like it",
"Really like it",
"I hate it",
"I like it a lot",
"I kind of hate it",
"I do like it",
"I really hate it",
"It is very good",
"I hate it a bunch",
"I love it a bunch",
"I hate it",
"I like it very much",
"I hate it very much.",
"I really do love it",
"I really do hate it",
"Love it!",
"Hate it!",
"I love it",
"I hate it",
"I love it",
"I hate it",
"I love it"],
like=[
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True,
False,
True]))
X = train_reviews.loc[:, train_reviews.columns != 'like']
y = train_reviews[['like']]
transform_1 = NGramFeaturizer(word_feature_extractor=n_gram())
transform_2 = MutualInformationSelector()
exp = Pipeline([transform_1, transform_2])
res = exp.fit_transform(X, y, verbose=0)
assert res is not None
# Scikit compatibility (Compose transforms inside Scikit Pipeline).
# In this scenario, we do not provide {input, output} arguments
transform_1 = NGramFeaturizer(word_feature_extractor=n_gram())
transform_2 = MutualInformationSelector(slots_in_output=2)
pipe = Pipeline([transform_1, transform_2])
res = pipe.fit_transform(X, y, verbose=0)
assert res is not None
def test_syntax10_multi_output1(self):
in_df = pandas.DataFrame(
data=dict(
Sepal_Length=[
2.5, 1, 2.1, 1.0], Sepal_Width=[
.75, .9, .8, .76], Petal_Length=[
0, 2.5, 2.6, 2.4], Species=[
"setosa", "viginica", "setosa", 'versicolor']))
# generate two new Columns - Petal_Normed and Sepal_Normed
normed = LogMeanVarianceScaler() << {
'Petal_Normed': 'Petal_Length',
'Sepal_Normed': 'Sepal_Width'}
out_df = normed.fit_transform(in_df, verbose=0)
self.assertEqual(sorted(list(out_df.columns)),
['Petal_Length', 'Petal_Normed', 'Sepal_Length',
'Sepal_Normed', 'Sepal_Width', 'Species'])
def test_syntax10_multi_output2(self):
in_df = pandas.DataFrame(
data=dict(
Sepal_Length=[
2.5, 1, 2.1, 1.0], Sepal_Width=[
.75, .9, .8, .76], Petal_Length=[
0, 2.5, 2.6, 2.4], Species=[
"setosa", "viginica", "setosa", 'versicolor']))
# generate two new Columns - Petal_Normed and Sepal_Normed
newcols = {
'Petal_Normed': 'Petal_Length',
'Sepal_Normed': 'Sepal_Width'}
normed = LogMeanVarianceScaler() << newcols
out_df = normed.fit_transform(in_df, verbose=0)
self.assertEqual(sorted(list(out_df.columns)),
['Petal_Length', 'Petal_Normed', 'Sepal_Length',
'Sepal_Normed', 'Sepal_Width', 'Species'])
def test_syntax11_learner(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
exp = Pipeline(
[
OneHotVectorizer() << {
'edu1': 'education'}, OneHotHashVectorizer() << {
'edu2': 'education'}, FastLinearBinaryClassifier(
maximum_number_of_iterations=1) << {
'Features': ['edu1', 'edu2'], Role.Label: 'y'}])
exp.fit(df, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
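# Illustrative note: as the test above shows, nimbusml roles such as Role.Label can
# be assigned through the same `<<` dictionary as feature columns, so the pipeline
# is fitted on the full DataFrame (exp.fit(df)) without passing the label
# separately:
#
#     FastLinearBinaryClassifier() << {'Features': ['edu1', 'edu2'], Role.Label: 'y'}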
def test_syntax11_append_insert(self):
df = pandas.DataFrame(dict(education=['A', 'B', 'A', 'B', 'A'],
workclass=['X', 'X', 'Y', 'Y', 'Y'],
y=[1, 0, 1, 0, 0]))
X = df.drop('y', axis=1)
exp = Pipeline()
exp.append(
("OneHotHashVectorizer",
OneHotHashVectorizer() << {
'edu2': 'education'}))
exp.insert(0, OneHotVectorizer() << {'edu1': 'education'})
exp.append(
FastLinearBinaryClassifier(
maximum_number_of_iterations=1) << {
'Features': [
'edu1',
'edu2'],
Role.Label: 'y'})
exp.append(OneHotHashVectorizer() << {'edu2': 'education'})
del exp[-1]
assert len(exp) == 3
exp.fit(df, verbose=0)
prediction = exp.predict(X)
assert isinstance(prediction, pandas.DataFrame)
assert sorted(list(prediction.columns)) == [
'PredictedLabel', 'Probability', 'Score']
assert prediction.shape == (5, 3)
try:
exp.append(OneHotHashVectorizer() << {'edu2': 'education'})
except RuntimeError as e:
assert "Model is fitted and cannot be modified" in str(e)
try:
exp.insert(0, OneHotHashVectorizer() << {'edu2': 'education'})
except RuntimeError as e:
assert "Model is fitted and cannot be modified" in str(e)
try:
del exp[0]
except RuntimeError as e:
assert "Model is fitted and cannot be modified" in str(e)
obj = exp[1][1]
assert obj.__class__.__name__ == "OneHotHashVectorizer"
obj = exp[1][1]
assert obj.__class__.__name__ == "OneHotHashVectorizer"
res = exp['OneHotHashVectorizer']
assert len(res) == 1
graph = exp.graph_
assert len(graph.nodes) >= len(exp)
if __name__ == "__main__":
    unittest.main()
| 40.84471 | 94 | 0.499144 | 2,407 | 23,935 | 4.871624 | 0.120066 | 0.00921 | 0.00921 | 0.032236 | 0.776053 | 0.768975 | 0.7613 | 0.75388 | 0.737506 | 0.729661 | 0 | 0.023229 | 0.347107 | 23,935 | 585 | 95 | 40.91453 | 0.727139 | 0.054857 | 0 | 0.763265 | 0 | 0 | 0.124712 | 0 | 0 | 0 | 0 | 0 | 0.130612 | 1 | 0.042857 | false | 0.004082 | 0.02449 | 0 | 0.069388 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3f3fd516c4c0f4c175e6506a9bdc8cdbcfb2670 | 114,906 | py | Python | google/cloud/contact_center_insights_v1/services/contact_center_insights/async_client.py | googleapis/python-contact-center-insights | 3eb4794a0c25a090f0f3a0e0e5f7fd74eb7c356e | [
"Apache-2.0"
] | 4 | 2021-08-15T04:55:44.000Z | 2022-02-01T09:19:57.000Z | google/cloud/contact_center_insights_v1/services/contact_center_insights/async_client.py | googleapis/python-contact-center-insights | 3eb4794a0c25a090f0f3a0e0e5f7fd74eb7c356e | [
"Apache-2.0"
] | 53 | 2021-07-16T11:02:44.000Z | 2022-03-07T16:39:20.000Z | google/cloud/contact_center_insights_v1/services/contact_center_insights/async_client.py | googleapis/python-contact-center-insights | 3eb4794a0c25a090f0f3a0e0e5f7fd74eb7c356e | [
"Apache-2.0"
] | 5 | 2021-07-15T18:17:53.000Z | 2022-01-29T08:09:16.000Z | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import OrderedDict
import functools
import re
from typing import Dict, Sequence, Tuple, Type, Union
import pkg_resources
from google.api_core.client_options import ClientOptions
from google.api_core import exceptions as core_exceptions
from google.api_core import gapic_v1
from google.api_core import retry as retries
from google.auth import credentials as ga_credentials # type: ignore
from google.oauth2 import service_account # type: ignore
try:
OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault]
except AttributeError: # pragma: NO COVER
OptionalRetry = Union[retries.Retry, object] # type: ignore
from google.api_core import operation # type: ignore
from google.api_core import operation_async # type: ignore
from google.cloud.contact_center_insights_v1.services.contact_center_insights import (
pagers,
)
from google.cloud.contact_center_insights_v1.types import contact_center_insights
from google.cloud.contact_center_insights_v1.types import resources
from google.protobuf import duration_pb2 # type: ignore
from google.protobuf import empty_pb2 # type: ignore
from google.protobuf import field_mask_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
from .transports.base import ContactCenterInsightsTransport, DEFAULT_CLIENT_INFO
from .transports.grpc_asyncio import ContactCenterInsightsGrpcAsyncIOTransport
from .client import ContactCenterInsightsClient
class ContactCenterInsightsAsyncClient:
"""An API that lets users analyze and explore their business
conversation data.
"""
_client: ContactCenterInsightsClient
DEFAULT_ENDPOINT = ContactCenterInsightsClient.DEFAULT_ENDPOINT
DEFAULT_MTLS_ENDPOINT = ContactCenterInsightsClient.DEFAULT_MTLS_ENDPOINT
analysis_path = staticmethod(ContactCenterInsightsClient.analysis_path)
parse_analysis_path = staticmethod(ContactCenterInsightsClient.parse_analysis_path)
conversation_path = staticmethod(ContactCenterInsightsClient.conversation_path)
parse_conversation_path = staticmethod(
ContactCenterInsightsClient.parse_conversation_path
)
issue_path = staticmethod(ContactCenterInsightsClient.issue_path)
parse_issue_path = staticmethod(ContactCenterInsightsClient.parse_issue_path)
issue_model_path = staticmethod(ContactCenterInsightsClient.issue_model_path)
parse_issue_model_path = staticmethod(
ContactCenterInsightsClient.parse_issue_model_path
)
participant_path = staticmethod(ContactCenterInsightsClient.participant_path)
parse_participant_path = staticmethod(
ContactCenterInsightsClient.parse_participant_path
)
phrase_matcher_path = staticmethod(ContactCenterInsightsClient.phrase_matcher_path)
parse_phrase_matcher_path = staticmethod(
ContactCenterInsightsClient.parse_phrase_matcher_path
)
settings_path = staticmethod(ContactCenterInsightsClient.settings_path)
parse_settings_path = staticmethod(ContactCenterInsightsClient.parse_settings_path)
view_path = staticmethod(ContactCenterInsightsClient.view_path)
parse_view_path = staticmethod(ContactCenterInsightsClient.parse_view_path)
common_billing_account_path = staticmethod(
ContactCenterInsightsClient.common_billing_account_path
)
parse_common_billing_account_path = staticmethod(
ContactCenterInsightsClient.parse_common_billing_account_path
)
common_folder_path = staticmethod(ContactCenterInsightsClient.common_folder_path)
parse_common_folder_path = staticmethod(
ContactCenterInsightsClient.parse_common_folder_path
)
common_organization_path = staticmethod(
ContactCenterInsightsClient.common_organization_path
)
parse_common_organization_path = staticmethod(
ContactCenterInsightsClient.parse_common_organization_path
)
common_project_path = staticmethod(ContactCenterInsightsClient.common_project_path)
parse_common_project_path = staticmethod(
ContactCenterInsightsClient.parse_common_project_path
)
common_location_path = staticmethod(
ContactCenterInsightsClient.common_location_path
)
parse_common_location_path = staticmethod(
ContactCenterInsightsClient.parse_common_location_path
)
@classmethod
def from_service_account_info(cls, info: dict, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
info.
Args:
info (dict): The service account private key info.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
ContactCenterInsightsAsyncClient: The constructed client.
"""
return ContactCenterInsightsClient.from_service_account_info.__func__(ContactCenterInsightsAsyncClient, info, *args, **kwargs) # type: ignore
@classmethod
def from_service_account_file(cls, filename: str, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
file.
Args:
filename (str): The path to the service account private key json
file.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
ContactCenterInsightsAsyncClient: The constructed client.
"""
return ContactCenterInsightsClient.from_service_account_file.__func__(ContactCenterInsightsAsyncClient, filename, *args, **kwargs) # type: ignore
from_service_account_json = from_service_account_file
@property
def transport(self) -> ContactCenterInsightsTransport:
"""Returns the transport used by the client instance.
Returns:
ContactCenterInsightsTransport: The transport used by the client instance.
"""
return self._client.transport
get_transport_class = functools.partial(
type(ContactCenterInsightsClient).get_transport_class,
type(ContactCenterInsightsClient),
)
def __init__(
self,
*,
credentials: ga_credentials.Credentials = None,
transport: Union[str, ContactCenterInsightsTransport] = "grpc_asyncio",
client_options: ClientOptions = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
) -> None:
"""Instantiates the contact center insights client.
Args:
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
transport (Union[str, ~.ContactCenterInsightsTransport]): The
transport to use. If set to None, a transport is chosen
automatically.
client_options (ClientOptions): Custom options for the client. It
won't take effect if a ``transport`` instance is provided.
(1) The ``api_endpoint`` property can be used to override the
default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT
environment variable can also be used to override the endpoint:
"always" (always use the default mTLS endpoint), "never" (always
use the default regular endpoint) and "auto" (auto switch to the
default mTLS endpoint if client certificate is present, this is
the default value). However, the ``api_endpoint`` property takes
precedence if provided.
(2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
is "true", then the ``client_cert_source`` property can be used
to provide client certificate for mutual TLS transport. If
not provided, the default SSL client certificate will be used if
present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
set, no client certificate will be used.
Raises:
google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport
creation failed for any reason.
"""
self._client = ContactCenterInsightsClient(
credentials=credentials,
transport=transport,
client_options=client_options,
client_info=client_info,
)
async def create_conversation(
self,
request: Union[contact_center_insights.CreateConversationRequest, dict] = None,
*,
parent: str = None,
conversation: resources.Conversation = None,
conversation_id: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Conversation:
r"""Creates a conversation.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CreateConversationRequest, dict]):
The request object. Request to create a conversation.
parent (:class:`str`):
Required. The parent resource of the
conversation.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
conversation (:class:`google.cloud.contact_center_insights_v1.types.Conversation`):
Required. The conversation resource
to create.
This corresponds to the ``conversation`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
conversation_id (:class:`str`):
A unique ID for the new conversation. This ID will
become the final component of the conversation's
resource name. If no ID is specified, a server-generated
ID will be used.
This value should be 4-64 characters and must match the
regular expression ``^[a-z0-9-]{4,64}$``. Valid
characters are ``[a-z][0-9]-``
This corresponds to the ``conversation_id`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Conversation:
The conversation resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, conversation, conversation_id])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CreateConversationRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if conversation is not None:
request.conversation = conversation
if conversation_id is not None:
request.conversation_id = conversation_id
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_conversation,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
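# Illustrative usage sketch (project and location values are placeholders):
#
#     from google.cloud import contact_center_insights_v1
#
#     async def sample_create_conversation():
#         client = contact_center_insights_v1.ContactCenterInsightsAsyncClient()
#         conversation = await client.create_conversation(
#             parent="projects/my-project/locations/us-central1",
#             conversation=contact_center_insights_v1.Conversation(),
#         )
#         print(conversation.name)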
async def update_conversation(
self,
request: Union[contact_center_insights.UpdateConversationRequest, dict] = None,
*,
conversation: resources.Conversation = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Conversation:
r"""Updates a conversation.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdateConversationRequest, dict]):
The request object. The request to update a
conversation.
conversation (:class:`google.cloud.contact_center_insights_v1.types.Conversation`):
Required. The new values for the
conversation.
This corresponds to the ``conversation`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
The list of fields to be updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Conversation:
The conversation resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([conversation, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdateConversationRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if conversation is not None:
request.conversation = conversation
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_conversation,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("conversation.name", request.conversation.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def get_conversation(
self,
request: Union[contact_center_insights.GetConversationRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Conversation:
r"""Gets a conversation.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetConversationRequest, dict]):
The request object. The request to get a conversation.
name (:class:`str`):
Required. The name of the
conversation to get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Conversation:
The conversation resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetConversationRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_conversation,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def list_conversations(
self,
request: Union[contact_center_insights.ListConversationsRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListConversationsAsyncPager:
r"""Lists conversations.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListConversationsRequest, dict]):
The request object. Request to list conversations.
parent (:class:`str`):
Required. The parent resource of the
conversation.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.services.contact_center_insights.pagers.ListConversationsAsyncPager:
The response of listing
conversations.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListConversationsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_conversations,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListConversationsAsyncPager(
method=rpc, request=request, response=response, metadata=metadata,
)
# Done; return the response.
return response
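# Illustrative usage sketch (parent value is a placeholder):
#
#     from google.cloud import contact_center_insights_v1
#
#     async def sample_list_conversations():
#         client = contact_center_insights_v1.ContactCenterInsightsAsyncClient()
#         page_result = await client.list_conversations(
#             parent="projects/my-project/locations/us-central1",
#         )
#         # The async pager resolves additional pages transparently.
#         async for conversation in page_result:
#             print(conversation.name)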
async def delete_conversation(
self,
request: Union[contact_center_insights.DeleteConversationRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes a conversation.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeleteConversationRequest, dict]):
The request object. The request to delete a
conversation.
name (:class:`str`):
Required. The name of the
conversation to delete.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeleteConversationRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_conversation,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
await rpc(
request, retry=retry, timeout=timeout, metadata=metadata,
)
async def create_analysis(
self,
request: Union[contact_center_insights.CreateAnalysisRequest, dict] = None,
*,
parent: str = None,
analysis: resources.Analysis = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Creates an analysis. The long running operation is
done when the analysis has completed.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CreateAnalysisRequest, dict]):
The request object. The request to create an analysis.
parent (:class:`str`):
Required. The parent resource of the
analysis.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
analysis (:class:`google.cloud.contact_center_insights_v1.types.Analysis`):
Required. The analysis to create.
This corresponds to the ``analysis`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.contact_center_insights_v1.types.Analysis`
The analysis resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, analysis])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CreateAnalysisRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if analysis is not None:
request.analysis = analysis
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_analysis,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
resources.Analysis,
metadata_type=contact_center_insights.CreateAnalysisOperationMetadata,
)
# Done; return the response.
return response
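# Illustrative usage sketch for the long-running analysis creation (resource names
# are placeholders; awaiting `operation.result()` assumes the AsyncOperation
# returned above):
#
#     from google.cloud import contact_center_insights_v1
#
#     async def sample_create_analysis():
#         client = contact_center_insights_v1.ContactCenterInsightsAsyncClient()
#         operation = await client.create_analysis(
#             parent="projects/my-project/locations/us-central1/conversations/my-conversation",
#         )
#         analysis = await operation.result()
#         print(analysis.name)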
async def get_analysis(
self,
request: Union[contact_center_insights.GetAnalysisRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Analysis:
r"""Gets an analysis.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetAnalysisRequest, dict]):
The request object. The request to get an analysis.
name (:class:`str`):
Required. The name of the analysis to
get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Analysis:
The analysis resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetAnalysisRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_analysis,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def list_analyses(
self,
request: Union[contact_center_insights.ListAnalysesRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListAnalysesAsyncPager:
r"""Lists analyses.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListAnalysesRequest, dict]):
The request object. The request to list analyses.
parent (:class:`str`):
Required. The parent resource of the
analyses.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.services.contact_center_insights.pagers.ListAnalysesAsyncPager:
The response to list analyses.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListAnalysesRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_analyses,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListAnalysesAsyncPager(
method=rpc, request=request, response=response, metadata=metadata,
)
# Done; return the response.
return response
async def delete_analysis(
self,
request: Union[contact_center_insights.DeleteAnalysisRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes an analysis.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeleteAnalysisRequest, dict]):
The request object. The request to delete an analysis.
name (:class:`str`):
Required. The name of the analysis to
delete.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeleteAnalysisRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_analysis,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
await rpc(
request, retry=retry, timeout=timeout, metadata=metadata,
)
async def export_insights_data(
self,
request: Union[contact_center_insights.ExportInsightsDataRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Export insights data to a destination defined in the
request body.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ExportInsightsDataRequest, dict]):
The request object. The request to export insights.
parent (:class:`str`):
Required. The parent resource to
export data from.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.contact_center_insights_v1.types.ExportInsightsDataResponse`
Response for an export insights operation.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ExportInsightsDataRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.export_insights_data,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
contact_center_insights.ExportInsightsDataResponse,
metadata_type=contact_center_insights.ExportInsightsDataMetadata,
)
# Done; return the response.
return response
async def create_issue_model(
self,
request: Union[contact_center_insights.CreateIssueModelRequest, dict] = None,
*,
parent: str = None,
issue_model: resources.IssueModel = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Creates an issue model.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CreateIssueModelRequest, dict]):
The request object. The request to create an issue
model.
parent (:class:`str`):
Required. The parent resource of the
issue model.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
issue_model (:class:`google.cloud.contact_center_insights_v1.types.IssueModel`):
Required. The issue model to create.
This corresponds to the ``issue_model`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.contact_center_insights_v1.types.IssueModel`
The issue model resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, issue_model])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CreateIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if issue_model is not None:
request.issue_model = issue_model
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
resources.IssueModel,
metadata_type=contact_center_insights.CreateIssueModelMetadata,
)
# Done; return the response.
return response
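
    # Minimal sketch (assumed names) of calling create_issue_model with the
    # flattened arguments documented above and waiting for the operation.
    #
    #     operation = await client.create_issue_model(
    #         parent="projects/my-project/locations/us-central1",
    #         issue_model=resources.IssueModel(display_name="my-model"),
    #     )
    #     issue_model = await operation.result()  # resources.IssueModel
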
async def update_issue_model(
self,
request: Union[contact_center_insights.UpdateIssueModelRequest, dict] = None,
*,
issue_model: resources.IssueModel = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.IssueModel:
r"""Updates an issue model.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdateIssueModelRequest, dict]):
The request object. The request to update an issue
model.
issue_model (:class:`google.cloud.contact_center_insights_v1.types.IssueModel`):
Required. The new values for the
issue model.
This corresponds to the ``issue_model`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
The list of fields to be updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.IssueModel:
The issue model resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([issue_model, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdateIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if issue_model is not None:
request.issue_model = issue_model
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("issue_model.name", request.issue_model.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def get_issue_model(
self,
request: Union[contact_center_insights.GetIssueModelRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.IssueModel:
r"""Gets an issue model.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetIssueModelRequest, dict]):
The request object. The request to get an issue model.
name (:class:`str`):
Required. The name of the issue model
to get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.IssueModel:
The issue model resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
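
    # Sketch only: the unary call above accepts either a request object or the
    # flattened `name` argument (the resource name below is hypothetical).
    #
    #     issue_model = await client.get_issue_model(
    #         name="projects/my-project/locations/us-central1/issueModels/123"
    #     )
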
async def list_issue_models(
self,
request: Union[contact_center_insights.ListIssueModelsRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> contact_center_insights.ListIssueModelsResponse:
r"""Lists issue models.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListIssueModelsRequest, dict]):
The request object. Request to list issue models.
parent (:class:`str`):
Required. The parent resource of the
issue model.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.ListIssueModelsResponse:
The response of listing issue models.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListIssueModelsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_issue_models,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def delete_issue_model(
self,
request: Union[contact_center_insights.DeleteIssueModelRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Deletes an issue model.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeleteIssueModelRequest, dict]):
The request object. The request to delete an issue
model.
name (:class:`str`):
Required. The name of the issue model
to delete.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated
empty messages in your APIs. A typical example is to
use it as the request or the response type of an API
method. For instance:
service Foo {
rpc Bar(google.protobuf.Empty) returns
(google.protobuf.Empty);
}
The JSON representation for Empty is empty JSON
object {}.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeleteIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
empty_pb2.Empty,
metadata_type=contact_center_insights.DeleteIssueModelMetadata,
)
# Done; return the response.
return response
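
    # Sketch, assuming an initialized async client: the delete operation above
    # resolves to google.protobuf.empty_pb2.Empty, so callers usually just
    # await completion.
    #
    #     operation = await client.delete_issue_model(name=issue_model_name)
    #     await operation.result()
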
async def deploy_issue_model(
self,
request: Union[contact_center_insights.DeployIssueModelRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Deploys an issue model. Returns an error if a model
is already deployed. An issue model can only be used in
analysis after it has been deployed.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeployIssueModelRequest, dict]):
The request object. The request to deploy an issue
model.
name (:class:`str`):
Required. The issue model to deploy.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.contact_center_insights_v1.types.DeployIssueModelResponse`
The response to deploy an issue model.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeployIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.deploy_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
contact_center_insights.DeployIssueModelResponse,
metadata_type=contact_center_insights.DeployIssueModelMetadata,
)
# Done; return the response.
return response
async def undeploy_issue_model(
self,
request: Union[contact_center_insights.UndeployIssueModelRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Undeploys an issue model.
        An issue model cannot be used in analysis after it has
        been undeployed.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UndeployIssueModelRequest, dict]):
The request object. The request to undeploy an issue
model.
name (:class:`str`):
Required. The issue model to
undeploy.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.contact_center_insights_v1.types.UndeployIssueModelResponse`
The response to undeploy an issue model.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UndeployIssueModelRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.undeploy_issue_model,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
contact_center_insights.UndeployIssueModelResponse,
metadata_type=contact_center_insights.UndeployIssueModelMetadata,
)
# Done; return the response.
return response
async def get_issue(
self,
request: Union[contact_center_insights.GetIssueRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Issue:
r"""Gets an issue.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetIssueRequest, dict]):
The request object. The request to get an issue.
name (:class:`str`):
Required. The name of the issue to
get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Issue:
The issue resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetIssueRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_issue,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def list_issues(
self,
request: Union[contact_center_insights.ListIssuesRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> contact_center_insights.ListIssuesResponse:
r"""Lists issues.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListIssuesRequest, dict]):
The request object. Request to list issues.
parent (:class:`str`):
Required. The parent resource of the
issue.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.ListIssuesResponse:
The response of listing issues.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListIssuesRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_issues,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def update_issue(
self,
request: Union[contact_center_insights.UpdateIssueRequest, dict] = None,
*,
issue: resources.Issue = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Issue:
r"""Updates an issue.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdateIssueRequest, dict]):
The request object. The request to update an issue.
issue (:class:`google.cloud.contact_center_insights_v1.types.Issue`):
Required. The new values for the
issue.
This corresponds to the ``issue`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
The list of fields to be updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Issue:
The issue resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([issue, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdateIssueRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if issue is not None:
request.issue = issue
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_issue,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("issue.name", request.issue.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
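
    # Illustrative sketch of the update-with-field-mask pattern used above;
    # the mask restricts which fields are written (names are assumptions).
    #
    #     updated = await client.update_issue(
    #         issue=resources.Issue(name=issue_name, display_name="Billing"),
    #         update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
    #     )
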
async def calculate_issue_model_stats(
self,
request: Union[
contact_center_insights.CalculateIssueModelStatsRequest, dict
] = None,
*,
issue_model: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> contact_center_insights.CalculateIssueModelStatsResponse:
r"""Gets an issue model's statistics.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CalculateIssueModelStatsRequest, dict]):
The request object. Request to get statistics of an
issue model.
issue_model (:class:`str`):
Required. The resource name of the
issue model to query against.
This corresponds to the ``issue_model`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.CalculateIssueModelStatsResponse:
Response of querying an issue model's
statistics.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([issue_model])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CalculateIssueModelStatsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if issue_model is not None:
request.issue_model = issue_model
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.calculate_issue_model_stats,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("issue_model", request.issue_model),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def create_phrase_matcher(
self,
request: Union[contact_center_insights.CreatePhraseMatcherRequest, dict] = None,
*,
parent: str = None,
phrase_matcher: resources.PhraseMatcher = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.PhraseMatcher:
r"""Creates a phrase matcher.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CreatePhraseMatcherRequest, dict]):
The request object. Request to create a phrase matcher.
parent (:class:`str`):
                Required. The parent resource of the phrase matcher,
                i.e. the location to create a phrase matcher for.
Format:
``projects/<Project ID>/locations/<Location ID>`` or
``projects/<Project Number>/locations/<Location ID>``
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
phrase_matcher (:class:`google.cloud.contact_center_insights_v1.types.PhraseMatcher`):
Required. The phrase matcher resource
to create.
This corresponds to the ``phrase_matcher`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.PhraseMatcher:
The phrase matcher resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, phrase_matcher])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CreatePhraseMatcherRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if phrase_matcher is not None:
request.phrase_matcher = phrase_matcher
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_phrase_matcher,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def get_phrase_matcher(
self,
request: Union[contact_center_insights.GetPhraseMatcherRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.PhraseMatcher:
r"""Gets a phrase matcher.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetPhraseMatcherRequest, dict]):
                The request object. The request to get a phrase
                matcher.
name (:class:`str`):
Required. The name of the phrase
matcher to get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.PhraseMatcher:
The phrase matcher resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetPhraseMatcherRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_phrase_matcher,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def list_phrase_matchers(
self,
request: Union[contact_center_insights.ListPhraseMatchersRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListPhraseMatchersAsyncPager:
r"""Lists phrase matchers.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListPhraseMatchersRequest, dict]):
The request object. Request to list phrase matchers.
parent (:class:`str`):
Required. The parent resource of the
phrase matcher.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.services.contact_center_insights.pagers.ListPhraseMatchersAsyncPager:
The response of listing phrase
matchers.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListPhraseMatchersRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_phrase_matchers,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListPhraseMatchersAsyncPager(
method=rpc, request=request, response=response, metadata=metadata,
)
# Done; return the response.
return response
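
    # Sketch of consuming the pager returned above: async iteration resolves
    # additional pages transparently (`client` and `parent` are assumed names).
    #
    #     pager = await client.list_phrase_matchers(parent=parent)
    #     async for phrase_matcher in pager:
    #         print(phrase_matcher.name)
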
async def delete_phrase_matcher(
self,
request: Union[contact_center_insights.DeletePhraseMatcherRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes a phrase matcher.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeletePhraseMatcherRequest, dict]):
The request object. The request to delete a phrase
matcher.
name (:class:`str`):
Required. The name of the phrase
matcher to delete.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeletePhraseMatcherRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_phrase_matcher,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
await rpc(
request, retry=retry, timeout=timeout, metadata=metadata,
)
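
    # Sketch (assumed names): this RPC returns no payload, so callers simply
    # await it and rely on google.api_core exceptions to signal failures.
    #
    #     await client.delete_phrase_matcher(name=phrase_matcher_name)
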
async def update_phrase_matcher(
self,
request: Union[contact_center_insights.UpdatePhraseMatcherRequest, dict] = None,
*,
phrase_matcher: resources.PhraseMatcher = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.PhraseMatcher:
r"""Updates a phrase matcher.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdatePhraseMatcherRequest, dict]):
The request object. The request to update a phrase
matcher.
phrase_matcher (:class:`google.cloud.contact_center_insights_v1.types.PhraseMatcher`):
Required. The new values for the
phrase matcher.
This corresponds to the ``phrase_matcher`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
The list of fields to be updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.PhraseMatcher:
The phrase matcher resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([phrase_matcher, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdatePhraseMatcherRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if phrase_matcher is not None:
request.phrase_matcher = phrase_matcher
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_phrase_matcher,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("phrase_matcher.name", request.phrase_matcher.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def calculate_stats(
self,
request: Union[contact_center_insights.CalculateStatsRequest, dict] = None,
*,
location: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> contact_center_insights.CalculateStatsResponse:
r"""Gets conversation statistics.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CalculateStatsRequest, dict]):
The request object. The request for calculating
conversation statistics.
location (:class:`str`):
Required. The location of the
conversations.
This corresponds to the ``location`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.CalculateStatsResponse:
The response for calculating
conversation statistics.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([location])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CalculateStatsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if location is not None:
request.location = location
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.calculate_stats,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("location", request.location),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def get_settings(
self,
request: Union[contact_center_insights.GetSettingsRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Settings:
r"""Gets project-level settings.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetSettingsRequest, dict]):
The request object. The request to get project-level
settings.
name (:class:`str`):
Required. The name of the settings
resource to get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Settings:
The settings resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetSettingsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_settings,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def update_settings(
self,
request: Union[contact_center_insights.UpdateSettingsRequest, dict] = None,
*,
settings: resources.Settings = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.Settings:
r"""Updates project-level settings.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdateSettingsRequest, dict]):
The request object. The request to update project-level
settings.
settings (:class:`google.cloud.contact_center_insights_v1.types.Settings`):
Required. The new settings values.
This corresponds to the ``settings`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
Required. The list of fields to be
updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.Settings:
The settings resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([settings, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdateSettingsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if settings is not None:
request.settings = settings
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_settings,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("settings.name", request.settings.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def create_view(
self,
request: Union[contact_center_insights.CreateViewRequest, dict] = None,
*,
parent: str = None,
view: resources.View = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.View:
r"""Creates a view.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.CreateViewRequest, dict]):
The request object. The request to create a view.
parent (:class:`str`):
                Required. The parent resource of the view, i.e. the
                location to create a view for. Format:
``projects/<Project ID>/locations/<Location ID>`` or
``projects/<Project Number>/locations/<Location ID>``
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
view (:class:`google.cloud.contact_center_insights_v1.types.View`):
Required. The view resource to
create.
This corresponds to the ``view`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.View:
The View resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, view])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.CreateViewRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if view is not None:
request.view = view
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_view,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def get_view(
self,
request: Union[contact_center_insights.GetViewRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.View:
r"""Gets a view.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.GetViewRequest, dict]):
The request object. The request to get a view.
name (:class:`str`):
Required. The name of the view to
get.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.View:
The View resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.GetViewRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_view,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def list_views(
self,
request: Union[contact_center_insights.ListViewsRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListViewsAsyncPager:
r"""Lists views.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.ListViewsRequest, dict]):
The request object. The request to list views.
parent (:class:`str`):
Required. The parent resource of the
views.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.services.contact_center_insights.pagers.ListViewsAsyncPager:
The response of listing views.
Iterating over this object will yield
results and resolve additional pages
automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.ListViewsRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_views,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListViewsAsyncPager(
method=rpc, request=request, response=response, metadata=metadata,
)
# Done; return the response.
return response
async def update_view(
self,
request: Union[contact_center_insights.UpdateViewRequest, dict] = None,
*,
view: resources.View = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> resources.View:
r"""Updates a view.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.UpdateViewRequest, dict]):
The request object. The request to update a view.
view (:class:`google.cloud.contact_center_insights_v1.types.View`):
Required. The new view.
This corresponds to the ``view`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
The list of fields to be updated.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.contact_center_insights_v1.types.View:
The View resource.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([view, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.UpdateViewRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if view is not None:
request.view = view
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_view,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("view.name", request.view.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
async def delete_view(
self,
request: Union[contact_center_insights.DeleteViewRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> None:
r"""Deletes a view.
Args:
request (Union[google.cloud.contact_center_insights_v1.types.DeleteViewRequest, dict]):
The request object. The request to delete a view.
name (:class:`str`):
Required. The name of the view to
delete.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = contact_center_insights.DeleteViewRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.delete_view,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
await rpc(
request, retry=retry, timeout=timeout, metadata=metadata,
)
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc, tb):
await self.transport.close()
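
# Hedged usage sketch: because of __aenter__/__aexit__ above, the async client
# can be used as an async context manager so the transport is closed on exit
# (construction defaults and `settings_name` are assumptions).
#
#     async with ContactCenterInsightsAsyncClient() as client:
#         settings = await client.get_settings(name=settings_name)
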
try:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
gapic_version=pkg_resources.get_distribution(
"google-cloud-contact-center-insights",
).version,
)
except pkg_resources.DistributionNotFound:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()
__all__ = ("ContactCenterInsightsAsyncClient",)
| 41.096567 | 171 | 0.617888 | 12,698 | 114,906 | 5.468105 | 0.036069 | 0.040182 | 0.050206 | 0.026961 | 0.853789 | 0.828528 | 0.816228 | 0.796757 | 0.778984 | 0.763257 | 0 | 0.002829 | 0.310906 | 114,906 | 2,795 | 172 | 41.11127 | 0.874073 | 0.169243 | 0 | 0.615321 | 0 | 0 | 0.062783 | 0.001241 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003295 | false | 0 | 0.018946 | 0 | 0.076606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4334de44baf022ae30ecaa322a2da3016fc21820 | 70 | py | Python | code/optimizers/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | code/optimizers/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | code/optimizers/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | # utils init file
import optimizers.RealOGD
import optimizers.RealONS | 17.5 | 25 | 0.842857 | 9 | 70 | 6.555556 | 0.777778 | 0.542373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 70 | 4 | 26 | 17.5 | 0.951613 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a4b1e4a510b3df83a4c1daf938eb73ae8003137 | 66,446 | py | Python | scripts/calendar_view_gui/utils/batch_utils.py | voicagi/cbm | c076e272f34a93a2e7dcc7de3c9685bd5c86c27a | [
"BSD-3-Clause"
] | null | null | null | scripts/calendar_view_gui/utils/batch_utils.py | voicagi/cbm | c076e272f34a93a2e7dcc7de3c9685bd5c86c27a | [
"BSD-3-Clause"
] | null | null | null | scripts/calendar_view_gui/utils/batch_utils.py | voicagi/cbm | c076e272f34a93a2e7dcc7de3c9685bd5c86c27a | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# This file is part of CbM (https://github.com/ec-jrc/cbm).
# Author : Csaba Wirnhardt
# Credits : GTCAP Team
# Copyright : 2021 European Commission, Joint Research Centre
# License : 3-Clause BSD
import time
import geopandas
import download_utils, extract_utils, plot_utils
from glob import glob
import os
import lut
from osgeo import ogr
import datetime
import collections
import warnings
import calendar
import numpy
import pandas as pd
import rasterio
from rasterio.enums import Resampling
def select_parcel(vector_file_name, parcel_id_column, parcel_id, logfile):
fout = open(logfile, 'a')
start = time.time()
parcels = geopandas.read_file(vector_file_name)
parcel = parcels[parcels[parcel_id_column]==parcel_id]
print(f"Parcel selected in: {time.time() - start} seconds")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.select_parcel:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return parcel
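# Illustrative call (file name, column name and id are hypothetical):
#   parcel = select_parcel("parcels.shp", "parcel_id", 12345, "batch.log")
# returns the row(s) of the vector file whose parcel_id_column equals the requested id,
# as a GeoDataFrame ready to be passed to the extraction and plotting helpers below.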
def run_get_scl_imagettes(parcel, parcel_id, crop, out_tif_folder_base,
search_window_start_date, search_window_end_date, search_split_days,
raw_chips_by_location_url, username, password, chipsize,
url_base, lon, lat, logfile
):
fout = open(logfile, 'a')
start = time.time()
# get the list of SCL imagettes for the parcel in a given date range
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# lon, lat = download_utils.get_centroid_of_parcel(parcel)
date_ranges = download_utils.split_date_range(search_window_start_date, search_window_end_date, search_split_days)
for date_range in date_ranges:
start_date = date_range[0]
end_date = date_range[1]
print("Getting SCL imagettes from" , start_date, "to", end_date)
was_error_1 = True
was_error_2 = True
while was_error_1:
locurl, list_of_scl_imagettes, was_error_1 = download_utils.get_scl_imagettes(raw_chips_by_location_url, lon, lat,
start_date, end_date,
username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_scl_imagettes(url_base, list_of_scl_imagettes, out_tif_folder, username, password)
print(f"Got list of SCL imagettes and downloaded in: {time.time() - start} seconds")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_get_scl_imagettes:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
def run_get_scl_imagettes_l1c(parcel, parcel_id, crop, out_tif_folder_base,
search_window_start_date, search_window_end_date, search_split_days,
raw_chips_by_location_url, username, password, chipsize,
url_base, lon, lat, logfile
):
fout = open(logfile, 'a')
start = time.time()
# get the list of SCL imagettes for the parcel in a given date range
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# lon, lat = download_utils.get_centroid_of_parcel(parcel)
date_ranges = download_utils.split_date_range(search_window_start_date, search_window_end_date, search_split_days)
for date_range in date_ranges:
start_date = date_range[0]
end_date = date_range[1]
print("Getting SCL imagettes from" , start_date, "to", end_date)
was_error_1 = True
was_error_2 = True
while was_error_1:
locurl, list_of_scl_imagettes, was_error_1 = download_utils.get_scl_imagettes_l1c(raw_chips_by_location_url, lon, lat,
start_date, end_date,
username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_scl_imagettes(url_base, list_of_scl_imagettes, out_tif_folder, username, password)
print(f"Got list of SCL imagettes and downloaded in: {time.time() - start} seconds")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_get_scl_imagettes:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
def create_list_of_tiles_to_be_downloaded(parcel, parcel_id, crop, out_tif_folder_base, cloud_categories, logfile):
# create the list of tiles to be downloaded
warnings.simplefilter(action='ignore', category=FutureWarning)
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# get downloaded SCL tile tifs and see if they are cloudfree
downloaded_scl_files_pattern = out_tif_folder + "/*/*.SCL.tif"
downloaded_scl_files = glob(downloaded_scl_files_pattern)
tiles_to_download = []
for downloaded_scl_file in downloaded_scl_files:
is_tile_cloudy = download_utils.is_tile_cloudy_geopandas(downloaded_scl_file, parcel, cloud_categories)
if not is_tile_cloudy:
tile_scl_name = os.path.basename(downloaded_scl_file)
tile_name = tile_scl_name.split(".")[0]
tiles_to_download.append(tile_name)
print(f"List of tiles to be downloaded created in {time.time() - start} seconds")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.create_list_of_tiles_to_be_downloaded:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return tiles_to_download
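# In short: the SCL chips fetched in the previous step act as a cloud screen here; a tile
# name is queued for full band download only when download_utils.is_tile_cloudy_geopandas
# reports it as cloud-free over the parcel for the configured cloud_categories.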
def create_list_of_tiles_to_be_downloaded_l1c(parcel, parcel_id, crop, out_tif_folder_base, cloud_categories, logfile):
# create the list of tiles to be downloaded
warnings.simplefilter(action='ignore', category=FutureWarning)
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# get downloaded SCL tile tifs and see if they are cloudfree
downloaded_scl_files_pattern = out_tif_folder + "/*/*.B08.tif"
downloaded_scl_files = glob(downloaded_scl_files_pattern)
print(downloaded_scl_files_pattern)
tiles_to_download = []
for downloaded_scl_file in downloaded_scl_files:
# is_tile_cloudy = download_utils.is_tile_cloudy_geopandas(downloaded_scl_file, parcel, cloud_categories)
# if not is_tile_cloudy:
tile_scl_name = os.path.basename(downloaded_scl_file)
tile_name = tile_scl_name.split(".")[0]
tiles_to_download.append(tile_name)
print(f"List of tiles to be downloaded created in {time.time() - start} seconds")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.create_list_of_tiles_to_be_downloaded_l1c:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return tiles_to_download
def run_get_and_download_band_imagettes(max_number_of_tiles_per_request, tiles_to_download, raw_chips_batch_url,
lon, lat, bands, username, password, chipsize, url_base,
parcel_id, crop, out_tif_folder_base, logfile):
# run the batch chip extract query with the JSON input as POST
# and get the response which contains the download folder of the extracted chips
# and download the cloudfree band imagettes
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# max_number_of_tiles_per_request = 12
number_of_full_requests = len(tiles_to_download)//max_number_of_tiles_per_request
if number_of_full_requests == 0:
number_of_full_requests = 1
for request in range(0,number_of_full_requests):
list_of_band_imagettes = {}
request_end_index = max_number_of_tiles_per_request*(request+1)
request_start_index = request_end_index - max_number_of_tiles_per_request
print("request number:", request)
tiles_to_download_subset = tiles_to_download[request_start_index:request_end_index]
was_error_1 = True
was_error_2 = True
while was_error_1:
# print("Requesting band imagettes for tiles: ")
# print(tiles_to_download_subset)
list_of_band_imagettes, was_error_1 = download_utils.get_band_imagettes(raw_chips_batch_url, lon, lat,
tiles_to_download_subset,
bands, username, password, chipsize )
while was_error_2:
was_error_2 = download_utils.download_band_imagettes(url_base, list_of_band_imagettes, out_tif_folder, username, password)
# print("*******************************************")
# print(list_of_band_imagettes)
# print("*******************************************")
last_request_end_index = len(tiles_to_download) + 1
last_request_start_index = request_end_index
print("last bunch")
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_band_imagettes, was_error_1 = download_utils.get_band_imagettes(raw_chips_batch_url, lon, lat,
tiles_to_download[last_request_start_index:last_request_end_index],
bands, username, password, chipsize )
while was_error_2:
was_error_2 = download_utils.download_band_imagettes(url_base, list_of_band_imagettes, out_tif_folder, username, password)
# print("*******************************************")
# print(list_of_band_imagettes)
# print("*******************************************")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_get_and_download_band_imagettes:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Got list of cloudfree bands and downloaded images: {time.time() - start} seconds")
def run_get_and_download_band_imagettes_l1c(max_number_of_tiles_per_request, tiles_to_download, raw_chips_batch_url,
lon, lat, bands, username, password, chipsize, url_base,
parcel_id, crop, out_tif_folder_base, logfile):
# run the batch chip extract query with the JSON input as POST
# and get the response which contains the download folder of the extracted chips
# and download the cloudfree band imagettes
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# max_number_of_tiles_per_request = 12
number_of_full_requests = len(tiles_to_download)//max_number_of_tiles_per_request
if number_of_full_requests == 0:
number_of_full_requests = 1
for request in range(0,number_of_full_requests):
list_of_band_imagettes = {}
request_end_index = max_number_of_tiles_per_request*(request+1)
request_start_index = request_end_index - max_number_of_tiles_per_request
print("request number:", request)
tiles_to_download_subset = tiles_to_download[request_start_index:request_end_index]
was_error_1 = True
was_error_2 = True
while was_error_1:
# print("Requesting band imagettes for tiles: ")
# print(tiles_to_download_subset)
list_of_band_imagettes, was_error_1 = download_utils.get_band_imagettes_l1c(raw_chips_batch_url, lon, lat,
tiles_to_download_subset,
bands, username, password, chipsize )
while was_error_2:
was_error_2 = download_utils.download_band_imagettes(url_base, list_of_band_imagettes, out_tif_folder, username, password)
# print("*******************************************")
# print(list_of_band_imagettes)
# print("*******************************************")
last_request_end_index = len(tiles_to_download) + 1
last_request_start_index = request_end_index
print("last bunch")
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_band_imagettes, was_error_1 = download_utils.get_band_imagettes_l1c(raw_chips_batch_url, lon, lat,
tiles_to_download[last_request_start_index:last_request_end_index],
bands, username, password, chipsize )
while was_error_2:
was_error_2 = download_utils.download_band_imagettes(url_base, list_of_band_imagettes, out_tif_folder, username, password)
# print("*******************************************")
# print(list_of_band_imagettes)
# print("*******************************************")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_get_and_download_band_imagettes_l1c:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Got list of cloudfree bands and downloaded images: {time.time() - start} seconds")
def run_merge_bands(parcel_id, crop, out_tif_folder_base, logfile):
# look around in the date folders where the bands were downloaded and merge bands
# B08, B11, B04 for each tile where these bands were downloaded and the bands were
# not yet merged
fout = open(logfile, 'a')
start = time.time()
download_utils.merge_bands(parcel_id, crop, out_tif_folder_base)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_merge_bands:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Merging cloudfree bands images in {time.time() - start} seconds")
def run_merge_4_bands(parcel_id, crop, out_tif_folder_base):
# look around in the date folders where the bands were downloaded and merge bands
# B08, B11, B04 for each tile where these bands were downloaded and the bands were
# not yet merged
start = time.time()
download_utils.merge_4_bands(parcel_id, crop, out_tif_folder_base)
print(f"Merging 4 bands images in {time.time() - start} seconds")
def run_lut_stretch(parcel_id, crop, out_tif_folder_base, left_percent, right_percent, lut_txt_file, logfile):
# lut stretch
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
merge_lut_folder = out_tif_folder + "_merged_lut_magic"
# merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
if not os.path.exists(merge_lut_folder):
os.makedirs(merge_lut_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_lut_folder + "/" + tile_name + ".tif"
# here again: if the lut stretched image is already created we do not create it again
if os.path.isfile(output):
# we already created the lut stretched image for this date for this parcel so we skip it
print(tile_name + " already created")
else:
print("LUT stretching tile: ", tile_name, end="")
lut.writeMinMaxToFile(merged_file, acq_date, lut_bands, left_percent, right_percent, lut_txt_file, tile_name)
lut.lutStretchMagicLut(merged_file, output, lut_bands )
# lut.lutStretch(merged_file, output, left_percent, right_percent, lut_bands )
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_lut_stretch:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"LUT stretch: {time.time() - start} seconds")
def run_lut_stretch_dynamic(parcel_id, crop, out_tif_folder_base, left_percent, right_percent, lut_txt_file, logfile):
# lut stretch
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
# merge_lut_folder = out_tif_folder + "_merged_lut_magic"
merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
if not os.path.exists(merge_lut_folder):
os.makedirs(merge_lut_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_lut_folder + "/" + tile_name + ".tif"
# here again: if the lut stretched image is already created we do not create it again
if os.path.isfile(output):
# we already created the lut stretched image for this date for this parcel so we skip it
print(tile_name + " already created")
else:
print("LUT stretching tile: ", tile_name, end="")
lut.writeMinMaxToFile(merged_file, acq_date, lut_bands, left_percent, right_percent, lut_txt_file, tile_name)
lut.lutStretchMagicLut(merged_file, output, lut_bands )
# lut.lutStretch(merged_file, output, left_percent, right_percent, lut_bands )
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_lut_stretch_dynamic:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"LUT stretch: {time.time() - start} seconds")
def get_merged_lutstretched_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
merge_lut_folder = out_tif_folder + "_merged_lut_magic"
# merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.get_merged_lutstretched_files_and_acquisition_dates:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return acq_dates, merged_lut_files
def get_merged_lutstretched_files_and_acquisition_dates_dynamic(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# merge_lut_folder = out_tif_folder + "_merged_lut_magic"
merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.get_merged_lutstretched_files_and_acquisition_dates_dynamic:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return acq_dates, merged_lut_files
def get_merged_ndvi_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
merge_lut_folder = out_tif_folder + "_merged_ndvi"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
return acq_dates, merged_lut_files
def get_merged_ndwi_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
merge_lut_folder = out_tif_folder + "_merged_ndwi"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
return acq_dates, merged_lut_files
def get_merged_tif_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
merge_lut_folder = out_tif_folder + "_merged"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
return acq_dates, merged_lut_files
def get_merged_tif_files_and_acquisition_dates_in_dict(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
merge_lut_folder = out_tif_folder + "_merged"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates_tif_files_dict = {}
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates_tif_files_dict[acq_date]=merged_lut_file
return collections.OrderedDict(sorted(acq_dates_tif_files_dict.items()))
def get_index_files_and_acquisition_dates_in_dict(parcel_id, crop, out_tif_folder_base, index_name):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
index_folder = out_tif_folder + "_merged_" + index_name
index_files_pattern = index_folder + "/*.tif"
index_files = glob(index_files_pattern)
acq_dates_tif_files_dict = {}
for index_file in index_files:
index_file_base = os.path.basename(index_file)
index_file_path = os.path.dirname(index_file)
tile_name = index_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates_tif_files_dict[acq_date]=index_file
return collections.OrderedDict(sorted(acq_dates_tif_files_dict.items()))
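# Illustrative return value (hypothetical paths and tile names; the keys are the acquisition
# date strings produced by download_utils.get_acquisition_date_from_tile_name), sorted by date:
#   OrderedDict([("2018-03-26", ".../12345_wheat_merged_ndvi/S2A_MSIL2A_20180326T103021_N0207_R108_T32TMR_20180326T155240.tif"),
#                ("2018-04-05", ".../12345_wheat_merged_ndvi/S2B_MSIL2A_20180405T101031_N0207_R022_T32TMR_20180405T150722.tif")])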
def get_acq_dates_band_names_tif_files_list(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
band_tif_files_pattern = out_tif_folder + "/????-??-??/*.B??.tif"
# S2A_MSIL2A_20180326T103021_N0207_R108_T32TMR_20180326T155240.B11.tif
band_tif_files = glob(band_tif_files_pattern)
# print(band_tif_files)
acq_dates_band_names_tif_files_list = []
for band_tif_file in band_tif_files:
acq_dates_band_names_tif_files = []
band_tif_file_base = os.path.basename(band_tif_file)
band_tif_file_path = os.path.dirname(band_tif_file)
tile_name = band_tif_file_base.split(".")[0]
band_name = band_tif_file_base.split(".")[1]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates_band_names_tif_files.append(band_tif_file_path)
acq_dates_band_names_tif_files.append(tile_name)
acq_dates_band_names_tif_files.append(band_name)
acq_dates_band_names_tif_files.append(acq_date)
acq_dates_band_names_tif_files_list.append(acq_dates_band_names_tif_files)
return acq_dates_band_names_tif_files_list
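# Each element of the returned list has the form [band_file_path, tile_name, band_name, acq_date]
# (date format as returned by download_utils.get_acquisition_date_from_tile_name), e.g. with a
# hypothetical folder and the sample file name above:
#   ["<out_tif_folder>/2018-03-26", "S2A_MSIL2A_20180326T103021_N0207_R108_T32TMR_20180326T155240",
#    "B11", "2018-03-26"]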
def run_ndvi_creation(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
# create ndvi image
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
merge_ndvi_folder = out_tif_folder + "_merged_ndvi"
if not os.path.exists(merge_ndvi_folder):
os.makedirs(merge_ndvi_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_ndvi_folder + "/" + tile_name + ".tif"
# here again: if the ndvi image is already created we do not create it again
if os.path.isfile(output):
# we already created the ndvi image for this date for this parcel so we skip it
print(tile_name + " ndvi already created")
else:
print("Creating NDVI for tile: ", tile_name, end="")
extract_utils.calculate_ndvi(merged_file, output)
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_ndvi_creation:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"NDVI created in: {time.time() - start} seconds")
def run_ndwi_creation(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
# create ndwi image
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
merge_ndwi_folder = out_tif_folder + "_merged_ndwi"
if not os.path.exists(merge_ndwi_folder):
os.makedirs(merge_ndwi_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_ndwi_folder + "/" + tile_name + ".tif"
# here again: if the ndwi image is already created we do not create it again
if os.path.isfile(output):
# we already created the ndwi image for this date for this parcel so we skip it
print(tile_name + " ndwi already created")
else:
print("Creating NDWI for tile: ", tile_name, end="")
extract_utils.calculate_ndwi(merged_file, output)
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_ndwi_creation:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"NDWI created in: {time.time() - start} seconds")
def run_bare_soil_index_creation(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
# create bare soil index image (note: it is written into the _merged_ndvi output folder below)
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
merge_ndvi_folder = out_tif_folder + "_merged_ndvi"
if not os.path.exists(merge_ndvi_folder):
os.makedirs(merge_ndvi_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_ndvi_folder + "/" + tile_name + ".tif"
# here again: if the bare soil index image is already created we do not create it again
if os.path.isfile(output):
# we already created the bare soil index image for this date for this parcel so we skip it
print(tile_name + " bare soil index already created")
else:
print("Creating bare soil index for tile: ", tile_name, end="")
extract_utils.calculate_baresoil_index_image(merged_file, output)
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_bare_soil_index_creation:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Bare soil index created in: {time.time() - start} seconds")
def calculate_ndvi_statistics(parcel_id, crop, out_tif_folder_base, tiles_to_download, parcel, vector_file_name, parcel_id_column, logfile):
fout = open(logfile, 'a')
start = time.time()
acq_dates, merged_ndvi_files = get_merged_ndvi_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base)
chip_folder = str(parcel_id) + '_' + crop
output_ndvi_folder = out_tif_folder_base + "/ndvi"
output_ndvi_csv_file = output_ndvi_folder + "/" + chip_folder + "_ndvi.csv"
if not os.path.exists(output_ndvi_folder):
os.makedirs(output_ndvi_folder)
first_line ="Field_ID,acq_date,ndvi_mean,ndvi_count,ndvi_std"
print(first_line, file=open(output_ndvi_csv_file, "w"))
for merged_ndvi_file in merged_ndvi_files:
merged_ndvi_file_base = os.path.basename(merged_ndvi_file)
merged_ndvi_file_path = os.path.dirname(merged_ndvi_file)
tile_name = merged_ndvi_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(merged_ndvi_file)
ndvi_mean, ndvi_count, ndvi_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(merged_ndvi_file, parcel)
# print(parcel_id, acq_date, ndvi_mean, ndvi_count, ndvi_std, sep=',')
print(parcel_id, acq_date, ndvi_mean, ndvi_count, ndvi_std, sep=',',
file=open(output_ndvi_csv_file, "a"))
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.calculate_ndvi_statistics:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"NDVI stats read in: {time.time() - start} seconds")
def calculate_ndwi_statistics(parcel_id, crop, out_tif_folder_base, tiles_to_download, parcel, vector_file_name, parcel_id_column, logfile):
fout = open(logfile, 'a')
start = time.time()
acq_dates, merged_ndwi_files = get_merged_ndwi_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base)
chip_folder = str(parcel_id) + '_' + crop
output_ndwi_folder = out_tif_folder_base + "/ndwi"
output_ndwi_csv_file = output_ndwi_folder + "/" + chip_folder + "_ndwi.csv"
if not os.path.exists(output_ndwi_folder):
os.makedirs(output_ndwi_folder)
first_line ="Field_ID,acq_date,ndwi_mean,ndwi_count,ndwi_std"
print(first_line, file=open(output_ndwi_csv_file, "w"))
for merged_ndwi_file in merged_ndwi_files:
merged_ndwi_file_base = os.path.basename(merged_ndwi_file)
merged_ndwi_file_path = os.path.dirname(merged_ndwi_file)
tile_name = merged_ndwi_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(merged_ndwi_file)
ndwi_mean, ndwi_count, ndwi_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(merged_ndwi_file, parcel)
# print(parcel_id, acq_date, ndwi_mean, ndwi_count, ndwi_std, sep=',')
print(parcel_id, acq_date, ndwi_mean, ndwi_count, ndwi_std, sep=',',
file=open(output_ndwi_csv_file, "a"))
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.calculate_ndwi_statistics:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"ndwi stats read in: {time.time() - start} seconds")
def calculate_bs_statistics(parcel_id, crop, out_tif_folder_base, parcel, logfile, polarisation, orbit_orientation):
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
output_s1_bs_folder = out_tif_folder_base + "/s1_bs"
output_s1_bs_csv_file = output_s1_bs_folder + "/" + chip_folder + "_s1bs_" + polarisation + "_" + orbit_orientation + ".csv"
acquisition_dates_and_s1_bs_files_dict = plot_utils.get_acquisition_dates_and_s1_bs_files_dict(out_tif_folder_base + "/" + chip_folder + "_s1_bs", polarisation, orbit_orientation)
if not os.path.exists(output_s1_bs_folder):
os.makedirs(output_s1_bs_folder)
first_line ="Field_ID,acq_date,bs_mean,bs_count,bs_std"
print(first_line, file=open(output_s1_bs_csv_file, "w"))
for acq_date, s1_bs_file in acquisition_dates_and_s1_bs_files_dict.items():
bs_mean, bs_count, bs_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel_bs(s1_bs_file, parcel)
if bs_mean != None:
# print(parcel_id, acq_date, bs_mean, bs_count, bs_std, sep=',')
print(parcel_id, acq_date, bs_mean, bs_count, bs_std, sep=',',
file=open(output_s1_bs_csv_file, "a"))
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.calculate_bs_statistics:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print("S1 BS_" + polarisation + "_" + orbit_orientation + f" stats read in: {time.time() - start} seconds")
def calculate_coh6_statistics(parcel_id, crop, out_tif_folder_base, parcel, logfile, polarisation, orbit_orientation):
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
output_s1_coh6_folder = out_tif_folder_base + "/s1_coh6"
output_s1_coh6_csv_file = output_s1_coh6_folder + "/" + chip_folder + "_s1coh6_" + polarisation + "_" + orbit_orientation + ".csv"
acquisition_dates_and_s1_coh6_files_dict = plot_utils.get_acquisition_dates_and_s1_bs_files_dict(out_tif_folder_base + "/" + chip_folder + "_s1_coh6", polarisation, orbit_orientation)
if not os.path.exists(output_s1_coh6_folder):
os.makedirs(output_s1_coh6_folder)
first_line ="Field_ID,acq_date,coh6_mean,coh6_count,coh6_std"
print(first_line, file=open(output_s1_coh6_csv_file, "w"))
for acq_date, s1_coh6_file in acquisition_dates_and_s1_coh6_files_dict.items():
coh6_mean, coh6_count, coh6_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel_bs(s1_coh6_file, parcel)
if coh6_mean != None:
# print(parcel_id, acq_date, coh6_mean, coh6_count, coh6_std, sep=',')
print(parcel_id, acq_date, coh6_mean, coh6_count, coh6_std, sep=',',
file=open(output_s1_coh6_csv_file, "a"))
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.calculate_coh6_statistics:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print("S1 COH6 " + polarisation + " " + orbit_orientation + f" stats read in: {time.time() - start} seconds")
def calculate_index_statistics(parcel_id, crop, out_tif_folder_base, parcel, logfile, index_name):
fout = open(logfile, 'a')
start = time.time()
acquisition_dates_and_index_files_dict = get_index_files_and_acquisition_dates_in_dict(parcel_id, crop, out_tif_folder_base, index_name)
chip_folder = str(parcel_id) + '_' + crop
output_index_folder = out_tif_folder_base + "/" + index_name
output_index_csv_file = output_index_folder + "/" + chip_folder + "_" + index_name + ".csv"
if not os.path.exists(output_index_folder):
os.makedirs(output_index_folder)
first_line ="Field_ID,acq_date,mean,count,std"
print(first_line, file=open(output_index_csv_file, "w"))
for acq_date, index_file in acquisition_dates_and_index_files_dict.items():
mean, count, std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(index_file, parcel)
print(parcel_id, acq_date, mean, count, std, sep=',',
file=open(output_index_csv_file, "a"))
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.calculate_index_statistics:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Index stats read in: {time.time() - start} seconds")
def get_all_parcel_ids_from_parcel_shape(parcel_shp, parcel_id_column, crop_name_column):
ds=ogr.Open(parcel_shp)
lyr=ds.GetLayer()
parcel_id_crop_list = []
for feat in lyr:
parcel_id = feat.GetField(parcel_id_column)
crop_name = feat.GetField(crop_name_column)
if crop_name is None:
crop_name = ""
parcel_id_crop_list.append((parcel_id,crop_name.replace(" ", "_")))
parcel_id_crop_list = sorted(parcel_id_crop_list, key=getKey)
return parcel_id_crop_list
def getKey(item):
return item[0]
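# getKey simply sorts the (parcel_id, crop_name) tuples by parcel id; it is equivalent to
# passing key=operator.itemgetter(0) or key=lambda item: item[0] to sorted() above.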
def does_ndvi_csv_exist(parcel_id, crop, out_tif_folder_base):
chip_folder = str(parcel_id) + '_' + crop
output_ndvi_folder = out_tif_folder_base + "/ndvi"
output_ndvi_csv_file = output_ndvi_folder + "/" + chip_folder + "_ndvi.csv"
if os.path.isfile(output_ndvi_csv_file):
return True
else:
return False
def does_ndvi_graph_exist(parcel_id, out_tif_folder_base):
output_ndvi_graph_folder = out_tif_folder_base + "/ndvi_graphs"
output_ndvi_graph_file = output_ndvi_graph_folder + "/parcel_id_" + str(parcel_id) + "_NDVI.jpg"
if os.path.isfile(output_ndvi_graph_file):
return True
else:
return False
def run_get_and_download_s1_bs_imagettes(raw_chips_s1_batch_url, out_s1_bs_folder,
search_window_start_date, search_window_end_date,
lon, lat, username, password, chipsize, url_base, logfile):
# list_of_s1_bs_imagettes, was_error_1 = download_utils.get_s1_bs_imagettes(raw_chips_s1_batch_url, lon, lat, start_date, end_date, username, password, chipsize)
# download_utils.download_s1_bs_imagettes(url_base, list_of_s1_bs_imagettes, out_s1_bs_folder, username, password)
# run the batch chip extract query with the JSON input as POST
# and get the response which contains the download folder of the extracted chips
# and download the s1 backscatter imagettes
fout = open(logfile, 'a')
start = time.time()
# we get and download the s1 bs images by month
# search_window_start_date, search_window_end_date
# search_window_start_date = "2019-11-15"
# search_window_end_date = "2020-09-15"
dt_search_window_start_date = plot_utils.get_date_from_string(search_window_start_date)
dt_search_window_end_date = plot_utils.get_date_from_string(search_window_end_date)
# print(last_day_of_month(dt_search_window_start_date))
# print(add_one_month(dt_search_window_start_date))
act_start_date = dt_search_window_start_date
while act_start_date < dt_search_window_end_date:
act_end_date = last_day_of_month(act_start_date)
if act_start_date == dt_search_window_start_date:
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_bs_imagettes, was_error_1 = download_utils.get_s1_bs_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_bs_imagettes(url_base, list_of_s1_bs_imagettes, out_s1_bs_folder, username, password)
elif act_end_date > dt_search_window_end_date:
act_end_date = dt_search_window_end_date
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_bs_imagettes, was_error_1 = download_utils.get_s1_bs_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_bs_imagettes(url_base, list_of_s1_bs_imagettes, out_s1_bs_folder, username, password)
else:
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_bs_imagettes, was_error_1 = download_utils.get_s1_bs_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_bs_imagettes(url_base, list_of_s1_bs_imagettes, out_s1_bs_folder, username, password)
act_start_date = add_one_month(act_start_date)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t\tbatch_utils.run_get_and_download_s1_bs_imagettes:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Got list of S1 BS downloaded images: {time.time() - start} seconds")
def run_get_and_download_s1_coh6_imagettes(raw_chips_s1_batch_url, out_s1_coh6_folder,
search_window_start_date, search_window_end_date,
lon, lat, username, password, chipsize, url_base, logfile):
# list_of_s1_bs_imagettes, was_error_1 = download_utils.get_s1_bs_imagettes(raw_chips_s1_batch_url, lon, lat, start_date, end_date, username, password, chipsize)
# download_utils.download_s1_bs_imagettes(url_base, list_of_s1_bs_imagettes, out_s1_bs_folder, username, password)
# run the batch chip extract query with the JSON input as POST
# and get the response which contains the download folder of the extracted chips
# and download the s1 backscatter imagettes
fout = open(logfile, 'a')
start = time.time()
# we get and download the s1 bs images by month
# search_window_start_date, search_window_end_date
# search_window_start_date = "2019-11-15"
# search_window_end_date = "2020-09-15"
dt_search_window_start_date = plot_utils.get_date_from_string(search_window_start_date)
dt_search_window_end_date = plot_utils.get_date_from_string(search_window_end_date)
# print(last_day_of_month(dt_search_window_start_date))
# print(add_one_month(dt_search_window_start_date))
act_start_date = dt_search_window_start_date
while act_start_date < dt_search_window_end_date:
act_end_date = last_day_of_month(act_start_date)
if act_start_date == dt_search_window_start_date:
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_coh6_imagettes, was_error_1 = download_utils.get_s1_coh6_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_coh6_imagettes(url_base, list_of_s1_coh6_imagettes, out_s1_coh6_folder, username, password)
elif act_end_date > dt_search_window_end_date:
act_end_date = dt_search_window_end_date
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_coh6_imagettes, was_error_1 = download_utils.get_s1_coh6_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_coh6_imagettes(url_base, list_of_s1_coh6_imagettes, out_s1_coh6_folder, username, password)
else:
was_error_1 = True
was_error_2 = True
while was_error_1:
list_of_s1_coh6_imagettes, was_error_1 = download_utils.get_s1_coh6_imagettes(raw_chips_s1_batch_url, lon, lat, str(act_start_date), str(act_end_date), username, password, chipsize)
while was_error_2:
was_error_2 = download_utils.download_s1_coh6_imagettes(url_base, list_of_s1_coh6_imagettes, out_s1_coh6_folder, username, password)
act_start_date = add_one_month(act_start_date)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t\tbatch_utils.run_get_and_download_s1_coh6_imagettes:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"Got list of S1 COH6 and downloaded images: {time.time() - start} seconds")
def run_rescale_s1_bs_images(out_s1_bs_folder, out_s1_bs_folder_rescale):
# we take all the downloaded s1 bs images for the given parcel and rescale them to uint16
if not os.path.exists(out_s1_bs_folder_rescale):
os.makedirs(out_s1_bs_folder_rescale)
raw_files_pattern = out_s1_bs_folder + "/*.tif"
raw_files = glob(raw_files_pattern)
for raw_file in raw_files:
raw_file_base = os.path.basename(raw_file)
actdate = raw_file_base.split(".")[0]
# print(tile_name)
output = out_s1_bs_folder_rescale + "/" + actdate + ".tif"
download_utils.rescale_s1_bs_image(raw_file, output)
def run_lut_stretch_one_band_s1_bs(out_s1_bs_folder_rescale, out_s1_bs_folder_rescale_lut, s1_bs_left_percent, s1_bs_right_percent):
# we take all the downloaded s1 bs images for the given parcel and rescale them to uint16
if not os.path.exists(out_s1_bs_folder_rescale_lut):
os.makedirs(out_s1_bs_folder_rescale_lut)
rescaled_files_pattern = out_s1_bs_folder_rescale + "/*.tif"
rescaled_files = glob(rescaled_files_pattern)
for rescaled_file in rescaled_files:
rescaled_file_base = os.path.basename(rescaled_file)
actdate = rescaled_file_base.split(".")[0]
print(actdate)
output = out_s1_bs_folder_rescale_lut + "/" + actdate + ".tif"
lut.lut_stretch_one_band_s1_bs(rescaled_file, output, s1_bs_left_percent, s1_bs_right_percent)
def add_one_month(orig_date):
# advance year and month by one month
new_year = orig_date.year
new_month = orig_date.month + 1
# note: in datetime.date, months go from 1 to 12
if new_month > 12:
new_year += 1
new_month -= 12
last_day_of_month = calendar.monthrange(new_year, new_month)[1]
new_day = min(orig_date.day, last_day_of_month)
return orig_date.replace(year=new_year, month=new_month, day=new_day)
def last_day_of_month(any_day):
next_month = any_day.replace(day=28) + datetime.timedelta(days=4) # this will never fail
return next_month - datetime.timedelta(days=next_month.day)
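# Worked examples (assuming datetime.date inputs):
#   add_one_month(datetime.date(2020, 1, 31))  -> datetime.date(2020, 2, 29)  (day clamped to month length)
#   add_one_month(datetime.date(2021, 12, 15)) -> datetime.date(2022, 1, 15)  (year rolls over)
#   last_day_of_month(datetime.date(2021, 2, 10)) -> datetime.date(2021, 2, 28)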
def run_lut_stretch_dynamic(parcel_id, crop, out_tif_folder_base, left_percent, right_percent, lut_txt_file, logfile):
# lut stretch
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
lut_bands=[1,2,3]
merge_folder = out_tif_folder + "_merged"
# merge_lut_folder = out_tif_folder + "_merged_lut_magic"
merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
if not os.path.exists(merge_lut_folder):
os.makedirs(merge_lut_folder)
merged_files_pattern = merge_folder + "/*.tif"
merged_files = glob(merged_files_pattern)
for merged_file in merged_files:
# print(merged_file)
merged_file_base = os.path.basename(merged_file)
merged_file_path = os.path.dirname(merged_file)
tile_name = merged_file_base.split(".")[0]
#get acquisition date from tile name
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
# print(tile_name)
output = merge_lut_folder + "/" + tile_name + ".tif"
# here again: if the lut stretched image is already created we do not create it again
if os.path.isfile(output):
# we already created the lut stretched image for this date for this parcel so we skip it
print(tile_name + " already created")
else:
print("LUT stretching tile: ", tile_name, end="")
lut.writeMinMaxToFile(merged_file, acq_date, lut_bands, left_percent, right_percent, lut_txt_file, tile_name)
# lut.lutStretchMagicLut(merged_file, output, lut_bands )
lut.lutStretch(merged_file, output, left_percent, right_percent, lut_bands )
print("...done")
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.run_lut_stretch_dynamic:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
print(f"LUT stretch dynamic: {time.time() - start} seconds")
def get_merged_dynamically_lutstretched_files_and_acquisition_dates(parcel_id, crop, out_tif_folder_base, logfile):
fout = open(logfile, 'a')
start = time.time()
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
# merge_lut_folder = out_tif_folder + "_merged_lut_magic"
merge_lut_folder = out_tif_folder + "_merged_lut_dynamic"
merged_lut_files_pattern = merge_lut_folder + "/*.tif"
merged_lut_files = glob(merged_lut_files_pattern)
acq_dates = []
for merged_lut_file in merged_lut_files:
merged_lut_file_base = os.path.basename(merged_lut_file)
merged_lut_file_path = os.path.dirname(merged_lut_file)
tile_name = merged_lut_file_base.split(".")[0]
acq_date = download_utils.get_acquisition_date_from_tile_name(tile_name)
acq_dates.append(acq_date)
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "\t", parcel_id, "\tbatch_utils.get_merged_dynamically_lutstretched_files_and_acquisition_dates:\t", "{0:.3f}".format(time.time() - start), file=fout)
fout.close()
return acq_dates, merged_lut_files
def calculate_band_statistics_orig(parcel_id, crop, out_tif_folder_base, parcel):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
downloaded_band04_files_pattern = out_tif_folder + "/*/*.B04.tif"
downloaded_band04_files = glob(downloaded_band04_files_pattern)
band_stats_folder = out_tif_folder_base + "/band_stats"
if not os.path.exists(band_stats_folder):
os.makedirs(band_stats_folder)
band_stats_file_b04 = band_stats_folder + "/" + str(parcel_id) + "_b04.csv"
band_stats_file_b08 = band_stats_folder + "/" + str(parcel_id) + "_b08.csv"
band_stats_file_b11 = band_stats_folder + "/" + str(parcel_id) + "_b11.csv"
first_line ="Field_ID,acq_date,band_mean,band_count,band_std"
print(first_line, file=open(band_stats_file_b04, "w"))
print(first_line, file=open(band_stats_file_b08, "w"))
print(first_line, file=open(band_stats_file_b11, "w"))
for downloaded_band04_file in downloaded_band04_files:
band04_file_base = os.path.basename(downloaded_band04_file)
band_file_path = os.path.dirname(downloaded_band04_file)
tile_name = band04_file_base.split(".")[0]
#get acquisition date from tile name
acq_date_full = tile_name.split("_")[2]
acq_date = acq_date_full[0:4] + "-" + acq_date_full[4:6] + "-" + acq_date_full[6:8]
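# e.g. for tile_name "S2A_MSIL2A_20180326T103021_N0207_R108_T32TMR_20180326T155240" the third
# underscore-separated token is "20180326T103021", giving acq_date "2018-03-26".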
# check if the other bands are also available for this tile
if os.path.isfile(band_file_path + "/" + tile_name + ".B08.tif") and \
os.path.isfile(band_file_path + "/" + tile_name + ".B11.tif"):
band04 = band_file_path + "/" + tile_name + ".B04.tif"
band08 = band_file_path + "/" + tile_name + ".B08.tif"
band11 = band_file_path + "/" + tile_name + ".B11.tif"
band04_mean, band04_count, band04_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band04, parcel)
if band04_mean != None:
print(parcel_id, acq_date, band04_mean, band04_count, band04_std, sep=',',
file=open(band_stats_file_b04, "a"))
band08_mean, band08_count, band08_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band08, parcel)
if band08_mean != None:
print(parcel_id, acq_date, band08_mean, band08_count, band08_std, sep=',',
file=open(band_stats_file_b08, "a"))
band11_mean, band11_count, band11_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band11, parcel)
if band11_mean != None:
print(parcel_id, acq_date, band11_mean, band11_count, band11_std, sep=',',
file=open(band_stats_file_b11, "a"))
def calculate_band_statistics(parcel_id, crop, out_tif_folder_base, parcel):
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
downloaded_band04_files_pattern = out_tif_folder + "/*/*.B04.tif"
downloaded_band04_files = glob(downloaded_band04_files_pattern)
if len(downloaded_band04_files)>0:
print("Calculating band statistics for parcel:" + str(parcel_id))
else:
return
band_stats_folder = out_tif_folder_base + "/band_stats"
if not os.path.exists(band_stats_folder):
os.makedirs(band_stats_folder)
band_stats_file = band_stats_folder + "/" + str(parcel_id) + ".csv"
first_date = True
for downloaded_band04_file in downloaded_band04_files:
band04_file_base = os.path.basename(downloaded_band04_file)
band_file_path = os.path.dirname(downloaded_band04_file)
tile_name = band04_file_base.split(".")[0]
#get acquisition date from tile name
acq_date_full = tile_name.split("_")[2]
acq_date = acq_date_full[0:4] + "-" + acq_date_full[4:6] + "-" + acq_date_full[6:8]
if first_date:
print(acq_date , "is the first date")
first_date = False
# check what other bands are available for this tile
band_files_available = glob(band_file_path + "/" + tile_name + ".B??.tif")
band_names=[]
for band_file_available in band_files_available:
band_name = os.path.basename(band_file_available).split(".")[1]
band_names.append(band_name)
# now we can get the band statistics for the first date and write it to csv
# with a header row
first_line ="Field_ID,acq_date,"
for band_name in band_names[0:-1]:
first_line+=band_name.lower() + "_mean,"
first_line+=band_name.lower() + "_count,"
first_line+=band_name.lower() + "_std,"
first_line+=band_names[-1].lower() + "_mean,"
first_line+=band_names[-1].lower() + "_count,"
first_line+=band_names[-1].lower() + "_std"
print(first_line, file=open(band_stats_file, "w"))
stats_line = str(parcel_id) + "," + str(acq_date) + ","
for band_name in band_names[0:-1]:
band_tif = band_file_path + "/" + tile_name + "." + band_name + ".tif"
band_mean, band_count, band_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band_tif, parcel)
stats_line += "{:.3f}".format(band_mean) + ","
stats_line += "{:.0f}".format(band_count) + ","
stats_line += "{:.3f}".format(band_std) + ","
band_tif = band_file_path + "/" + tile_name + "." + band_names[-1] + ".tif"
band_mean, band_count, band_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band_tif, parcel)
stats_line += "{:.3f}".format(band_mean) + ","
stats_line += "{:.0f}".format(band_count) + ","
stats_line += "{:.3f}".format(band_std)
print(stats_line, file=open(band_stats_file, "a"))
else:
#we assume the same bands are available for all the dates as for the first one
print(acq_date, "is NOT the first date")
stats_line = str(parcel_id) + "," + str(acq_date) + ","
for band_name in band_names[0:-1]:
band_tif = band_file_path + "/" + tile_name + "." + band_name + ".tif"
band_mean, band_count, band_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band_tif, parcel)
stats_line += "{:.3f}".format(band_mean) + ","
stats_line += "{:.0f}".format(band_count) + ","
stats_line += "{:.3f}".format(band_std) + ","
band_tif = band_file_path + "/" + tile_name + "." + band_names[-1] + ".tif"
band_mean, band_count, band_std = extract_utils.extract_stats_for_one_parcel_geopandas_presel(band_tif, parcel)
stats_line += "{:.3f}".format(band_mean) + ","
stats_line += "{:.0f}".format(band_count) + ","
stats_line += "{:.3f}".format(band_std)
print(stats_line, file=open(band_stats_file, "a"))
def create_index_images(parcel_id, crop, out_tif_folder_base, acq_dates_band_names_tif_files_list, index_name):
# create bare soil index image
# https://giscrack.com/list-of-spectral-indices-for-sentinel-and-landsat/
chip_folder = str(parcel_id) + '_' + crop
out_tif_folder = out_tif_folder_base + "/" + chip_folder
if index_name == "bare_soil_index":
create_bare_soil_index_images(acq_dates_band_names_tif_files_list, index_name, out_tif_folder)
else:
print(index_name + " index calculation is not defined yet")
def create_bare_soil_index_images(acq_dates_band_names_tif_files_list, index_name, out_tif_folder):
df = pd.DataFrame(acq_dates_band_names_tif_files_list, columns = ['band_file_path', 'tile_name', 'band','acq_date'])
bands_needed_for_this_index = ['B02', 'B04', 'B08', 'B11']
# create index image
bare_soil_index_folder = out_tif_folder + "_merged_" + index_name
if not os.path.exists(bare_soil_index_folder):
os.makedirs(bare_soil_index_folder)
# first create a list of unique dates where there is at least one band image
acq_dates = df.acq_date.unique()
for acq_date in acq_dates:
# select corresponding band files
current_bands = df[df['acq_date'] == acq_date]
bands_present_for_this_date = current_bands['band'].tolist()
all_bands_available = all(elem in bands_present_for_this_date for elem in bands_needed_for_this_index)
if all_bands_available:
# print("All bands available, we can calculate the given index")
# we put together given band filenames for rasterio processing
full_path = current_bands['band_file_path'].iloc[0]
tile_name = current_bands['tile_name'].iloc[0]
band02_tif = full_path + "/" + tile_name + ".B02.tif"
band04_tif = full_path + "/" + tile_name + ".B04.tif"
band08_tif = full_path + "/" + tile_name + ".B08.tif"
band11_tif = full_path + "/" + tile_name + ".B11.tif"
with rasterio.open(band02_tif) as src02:
band02 = src02.read(1)
with rasterio.open(band04_tif) as src04:
band04 = src04.read(1)
with rasterio.open(band08_tif) as src08:
band08 = src08.read(1)
with rasterio.open(band11_tif) as src11:
band11 = src11.read(1,
out_shape=(
src11.count,
src08.height,
src08.width
),
resampling=Resampling.bilinear
)
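# B11 (SWIR, natively 20 m in Sentinel-2 L2A) is read here resampled bilinearly to the
# 10 m pixel grid of B08 so that all four band arrays share the same shape for the
# index arithmetic below.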
# scale image transform
transform = src11.transform * src11.transform.scale(
(src11.width / band11.shape[-1]),
(src11.height / band11.shape[-2])
)
# Allow division by zero
numpy.seterr(divide='ignore', invalid='ignore')
bsi_file = bare_soil_index_folder + "/" + tile_name + ".tif"
# here again: if the bsi image is already created we do not create it again
if os.path.isfile(bsi_file):
# we already created the bare soil index image for this date for this parcel so we skip it
print(tile_name + " bare soil index image already created")
else:
print("Creating bare soil index image for tile: ", tile_name, end="")
# Formula of BSI = ((Red+SWIR) - (NIR+Blue)) / ((Red+SWIR) + (NIR+Blue))
# BSI (Sentinel 2) = ((B11 + B4) - (B8 + B2)) / ((B11 + B4) + (B8 + B2))
# Calculate BSI
bsi = ((band11.astype(float) + band04.astype(float)) - (band08.astype(float) + band02.astype(float))) / \
((band11 + band04) + (band08 + band02))
# bsi = (band11.astype(float) / band08)
# Set spatial characteristics of the output object to mirror the input
kwargs = src08.meta
kwargs.update(
dtype=rasterio.float32,
count = 1)
# Create the file
with rasterio.open(bsi_file, 'w', **kwargs) as dst:
dst.write_band(1, bsi.astype(rasterio.float32))
print("...done")
else:
print("We do not have all bands for this index")
| 49.328879 | 216 | 0.655705 | 9,051 | 66,446 | 4.399845 | 0.046735 | 0.020641 | 0.041283 | 0.02933 | 0.869924 | 0.84632 | 0.81134 | 0.781458 | 0.757929 | 0.738669 | 0 | 0.015078 | 0.241429 | 66,446 | 1,346 | 217 | 49.365527 | 0.774958 | 0.120534 | 0 | 0.624309 | 0 | 0 | 0.087111 | 0.024004 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047514 | false | 0.033149 | 0.016575 | 0.001105 | 0.087293 | 0.110497 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a5c79724f4734739c573dc503e5b42e30ff7fcc | 118 | py | Python | src/main/python/twitter/thermos/config/bin/config_repl.py | isomer/incubator-aurora | 5f54d4de25413bb18acec16120eb18f3e08c6bf0 | [
"Apache-2.0"
] | null | null | null | src/main/python/twitter/thermos/config/bin/config_repl.py | isomer/incubator-aurora | 5f54d4de25413bb18acec16120eb18f3e08c6bf0 | [
"Apache-2.0"
] | null | null | null | src/main/python/twitter/thermos/config/bin/config_repl.py | isomer/incubator-aurora | 5f54d4de25413bb18acec16120eb18f3e08c6bf0 | [
"Apache-2.0"
] | null | null | null | from twitter.thermos.config.schema import *
from code import interact
interact('Thermos Config REPL', local=locals())
| 29.5 | 47 | 0.79661 | 16 | 118 | 5.875 | 0.6875 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 118 | 3 | 48 | 39.333333 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0.161017 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a8116df307fa5a86aaf035f0108479b462ef5b2 | 147 | py | Python | src/kraetos/v1/helpers/google.py | sinhadotabhinav/kraetos-v1 | adc4f37c2648968a9981331135ff366351c97bac | [
"MIT"
] | 1 | 2021-05-08T10:11:15.000Z | 2021-05-08T10:11:15.000Z | src/kraetos/v1/helpers/google.py | sinhadotabhinav/kraetos-v1 | adc4f37c2648968a9981331135ff366351c97bac | [
"MIT"
] | null | null | null | src/kraetos/v1/helpers/google.py | sinhadotabhinav/kraetos-v1 | adc4f37c2648968a9981331135ff366351c97bac | [
"MIT"
] | null | null | null | from googlesearch import search
# Make a Google search query
def googleSearch(query):
return search(query, num_results=3, lang="en", proxy=None) | 29.4 | 62 | 0.768707 | 21 | 147 | 5.333333 | 0.761905 | 0.196429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.136054 | 147 | 5 | 62 | 29.4 | 0.874016 | 0.163265 | 0 | 0 | 0 | 0 | 0.016393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
4ab09df92507adca7347bc083fc49366215ba098 | 39 | py | Python | examples/math.sin/ex1.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/math.sin/ex1.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/math.sin/ex1.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | import math
print(math.sin(math.pi/2))
| 13 | 26 | 0.74359 | 8 | 39 | 3.625 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.076923 | 39 | 2 | 27 | 19.5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
436109567fd3b0c93424d845f5cf31e29db79275 | 219 | py | Python | src/gpt2_output.py | yulonglin/gpt2 | c86239940bfb9fc34a3ea5df3c6aa16e9c57887e | [
"MIT"
] | null | null | null | src/gpt2_output.py | yulonglin/gpt2 | c86239940bfb9fc34a3ea5df3c6aa16e9c57887e | [
"MIT"
] | null | null | null | src/gpt2_output.py | yulonglin/gpt2 | c86239940bfb9fc34a3ea5df3c6aa16e9c57887e | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from torchtyping import TensorType
@dataclass
class GPT2Output:
logits: TensorType["batch_size", "vocab_size"]
final_encoding: TensorType["batch_size", "hidden_size"]
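# The TensorType annotations above document the expected tensor shapes; torchtyping
# only enforces them at runtime if its typeguard integration is enabled.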
| 24.333333 | 60 | 0.757991 | 24 | 219 | 6.708333 | 0.625 | 0.186335 | 0.236025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005435 | 0.159817 | 219 | 8 | 61 | 27.375 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0.194313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
437cfe29ec099e567cf181e0ddd700c491ad3877 | 141 | py | Python | mysite/polls/views.py | smurve/cbrokerage | 9c0ada2981d60ab04a4a2120f40f9ebc4a38befc | [
"Apache-2.0"
] | null | null | null | mysite/polls/views.py | smurve/cbrokerage | 9c0ada2981d60ab04a4a2120f40f9ebc4a38befc | [
"Apache-2.0"
] | 3 | 2021-03-19T03:06:40.000Z | 2022-02-10T13:35:19.000Z | mysite/polls/views.py | smurve/cbrokerage | 9c0ada2981d60ab04a4a2120f40f9ebc4a38befc | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse
# Create your views here.
def index(request):
return HttpResponse("Hello Wolfie. You're at polls.")
| 20.142857 | 55 | 0.751773 | 19 | 141 | 5.578947 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163121 | 141 | 6 | 56 | 23.5 | 0.898305 | 0.163121 | 0 | 0 | 0 | 0 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
438d54eb3fa1824dcf1f422f8efdd923564112c3 | 34,394 | py | Python | posthog/api/test/test_feature_flag.py | ld-rale/posthog | 0fa5b18b2e940cf5cdbe8afc733eb7e3cd4ae810 | [
"MIT"
] | null | null | null | posthog/api/test/test_feature_flag.py | ld-rale/posthog | 0fa5b18b2e940cf5cdbe8afc733eb7e3cd4ae810 | [
"MIT"
] | null | null | null | posthog/api/test/test_feature_flag.py | ld-rale/posthog | 0fa5b18b2e940cf5cdbe8afc733eb7e3cd4ae810 | [
"MIT"
] | null | null | null | import json
from unittest.mock import patch
from rest_framework import status
from posthog.models import FeatureFlag, GroupTypeMapping, User
from posthog.models.cohort import Cohort
from posthog.models.feature_flag import FeatureFlagOverride
from posthog.test.base import APIBaseTest
class TestFeatureFlag(APIBaseTest):
feature_flag: FeatureFlag = None # type: ignore
@classmethod
def setUpTestData(cls):
super().setUpTestData()
cls.feature_flag = FeatureFlag.objects.create(team=cls.team, created_by=cls.user, key="red_button")
def test_cant_create_flag_with_duplicate_key(self):
count = FeatureFlag.objects.count()
# Make sure the endpoint works with and without the trailing slash
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags", {"name": "Beta feature", "key": "red_button"}
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
response.json(),
{
"type": "validation_error",
"code": "unique",
"detail": "There is already a feature flag with this key.",
"attr": "key",
},
)
self.assertEqual(FeatureFlag.objects.count(), count)
def test_cant_update_flag_with_duplicate_key(self):
another_feature_flag = FeatureFlag.objects.create(
team=self.team, rollout_percentage=50, name="some feature", key="some-feature", created_by=self.user,
)
response = self.client.patch(
f"/api/projects/{self.team.id}/feature_flags/{another_feature_flag.pk}",
{"name": "Beta feature", "key": "red_button"},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
response.json(),
{
"type": "validation_error",
"code": "unique",
"detail": "There is already a feature flag with this key.",
"attr": "key",
},
)
another_feature_flag.refresh_from_db()
self.assertEqual(another_feature_flag.key, "some-feature")
# Try updating the existing one
response = self.client.patch(
f"/api/projects/{self.team.id}/feature_flags/{self.feature_flag.id}/",
{"name": "Beta feature 3", "key": "red_button"},
)
self.assertEqual(response.status_code, 200)
self.feature_flag.refresh_from_db()
self.assertEqual(self.feature_flag.name, "Beta feature 3")
def test_is_simple_flag(self):
feature_flag = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
data={"name": "Beta feature", "key": "beta-feature", "filters": {"groups": [{"rollout_percentage": 65,}]},},
format="json",
).json()
self.assertTrue(feature_flag["is_simple_flag"])
self.assertEqual(feature_flag["rollout_percentage"], 65)
def test_is_not_simple_flag(self):
feature_flag = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
data={
"name": "Beta feature",
"key": "beta-feature",
"filters": {
"groups": [
{
"rollout_percentage": 65,
"properties": [
{"key": "email", "type": "person", "value": "@posthog.com", "operator": "icontains",},
],
}
]
},
},
format="json",
).json()
self.assertFalse(feature_flag["is_simple_flag"])
@patch("posthog.api.feature_flag.report_user_action")
def test_is_simple_flag_groups(self, mock_capture):
feature_flag = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
data={
"name": "Beta feature",
"key": "beta-feature",
"filters": {"aggregation_group_type_index": 0, "groups": [{"rollout_percentage": 65,}]},
},
format="json",
).json()
self.assertFalse(feature_flag["is_simple_flag"])
# Assert analytics are sent
instance = FeatureFlag.objects.get(id=feature_flag["id"])
mock_capture.assert_called_once_with(
self.user,
"feature flag created",
{
"groups_count": 1,
"has_variants": False,
"variants_count": 0,
"has_rollout_percentage": True,
"has_filters": False,
"filter_count": 0,
"created_at": instance.created_at,
"aggregating_by_groups": True,
},
)
@patch("posthog.api.feature_flag.report_user_action")
def test_create_feature_flag(self, mock_capture):
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{"name": "Alpha feature", "key": "alpha-feature", "filters": {"groups": [{"rollout_percentage": 50}]}},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
instance = FeatureFlag.objects.get(id=response.json()["id"])
self.assertEqual(instance.key, "alpha-feature")
# Assert analytics are sent
mock_capture.assert_called_once_with(
self.user,
"feature flag created",
{
"groups_count": 1,
"has_variants": False,
"variants_count": 0,
"has_rollout_percentage": True,
"has_filters": False,
"filter_count": 0,
"created_at": instance.created_at,
"aggregating_by_groups": False,
},
)
@patch("posthog.api.feature_flag.report_user_action")
def test_create_minimal_feature_flag(self, mock_capture):
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/", {"key": "omega-feature"}, format="json"
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(response.json()["key"], "omega-feature")
self.assertEqual(response.json()["name"], "")
instance = FeatureFlag.objects.get(id=response.json()["id"])
self.assertEqual(instance.key, "omega-feature")
self.assertEqual(instance.name, "")
# Assert analytics are sent
mock_capture.assert_called_once_with(
self.user,
"feature flag created",
{
"groups_count": 1, # 1 is always created by default
"has_variants": False,
"variants_count": 0,
"has_rollout_percentage": False,
"has_filters": False,
"filter_count": 0,
"created_at": instance.created_at,
"aggregating_by_groups": False,
},
)
@patch("posthog.api.feature_flag.report_user_action")
def test_create_multivariate_feature_flag(self, mock_capture):
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{
"name": "Multivariate feature",
"key": "multivariate-feature",
"filters": {
"groups": [{"properties": [], "rollout_percentage": None}],
"multivariate": {
"variants": [
{"key": "first-variant", "name": "First Variant", "rollout_percentage": 50},
{"key": "second-variant", "name": "Second Variant", "rollout_percentage": 25},
{"key": "third-variant", "name": "Third Variant", "rollout_percentage": 25},
],
},
},
},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
instance = FeatureFlag.objects.get(id=response.json()["id"])
self.assertEqual(instance.key, "multivariate-feature")
# Assert analytics are sent
mock_capture.assert_called_once_with(
self.user,
"feature flag created",
{
"groups_count": 1,
"has_variants": True,
"variants_count": 3,
"has_filters": False,
"has_rollout_percentage": False,
"filter_count": 0,
"created_at": instance.created_at,
"aggregating_by_groups": False,
},
)
def test_cant_create_multivariate_feature_flag_with_variant_rollout_lt_100(self):
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{
"name": "Multivariate feature",
"key": "multivariate-feature",
"filters": {
"groups": [{"properties": [], "rollout_percentage": None}],
"multivariate": {
"variants": [
{"key": "first-variant", "name": "First Variant", "rollout_percentage": 50},
{"key": "second-variant", "name": "Second Variant", "rollout_percentage": 25},
{"key": "third-variant", "name": "Third Variant", "rollout_percentage": 0},
],
},
},
},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.json().get("type"), "validation_error")
self.assertEqual(
response.json().get("detail"), "Invalid variant definitions: Variant rollout percentages must sum to 100."
)
def test_cant_create_multivariate_feature_flag_with_variant_rollout_gt_100(self):
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{
"name": "Multivariate feature",
"key": "multivariate-feature",
"filters": {
"groups": [{"properties": [], "rollout_percentage": None}],
"multivariate": {
"variants": [
{"key": "first-variant", "name": "First Variant", "rollout_percentage": 50},
{"key": "second-variant", "name": "Second Variant", "rollout_percentage": 25},
{"key": "third-variant", "name": "Third Variant", "rollout_percentage": 50},
],
},
},
},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.json().get("type"), "validation_error")
self.assertEqual(
response.json().get("detail"), "Invalid variant definitions: Variant rollout percentages must sum to 100."
)
def test_cant_create_feature_flag_without_key(self):
count = FeatureFlag.objects.count()
response = self.client.post(f"/api/projects/{self.team.id}/feature_flags/", format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
response.json(),
{"type": "validation_error", "code": "required", "detail": "This field is required.", "attr": "key"},
)
self.assertEqual(FeatureFlag.objects.count(), count)
@patch("posthog.api.feature_flag.report_user_action")
def test_updating_feature_flag(self, mock_capture):
instance = self.feature_flag
response = self.client.patch(
f"/api/projects/{self.team.id}/feature_flags/{instance.pk}",
{
"name": "Updated name",
"filters": {
"groups": [
{
"rollout_percentage": 65,
"properties": [
{"key": "email", "type": "person", "value": "@posthog.com", "operator": "icontains",},
],
}
]
},
},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
instance.refresh_from_db()
self.assertEqual(instance.name, "Updated name")
self.assertEqual(instance.conditions[0]["rollout_percentage"], 65)
# Assert analytics are sent
mock_capture.assert_called_once_with(
self.user,
"feature flag updated",
{
"groups_count": 1,
"has_variants": False,
"variants_count": 0,
"has_rollout_percentage": True,
"has_filters": True,
"filter_count": 1,
"created_at": instance.created_at,
"aggregating_by_groups": False,
},
)
def test_deleting_feature_flag(self):
new_user = User.objects.create_and_join(self.organization, "new_annotations@posthog.com", None)
instance = FeatureFlag.objects.create(team=self.team, created_by=self.user)
self.client.force_login(new_user)
with patch("posthog.mixins.report_user_action") as mock_capture:
response = self.client.delete(f"/api/projects/{self.team.id}/feature_flags/{instance.pk}/")
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertFalse(FeatureFlag.objects.filter(pk=instance.pk).exists())
# Assert analytics are sent (notice the event is sent on the user that executed the deletion, not the creator)
mock_capture.assert_called_once_with(
new_user,
"feature flag deleted",
{
"groups_count": 1,
"has_variants": False,
"variants_count": 0,
"has_rollout_percentage": False,
"has_filters": False,
"filter_count": 0,
"created_at": instance.created_at,
"aggregating_by_groups": False,
},
)
@patch("posthog.api.feature_flag.report_user_action")
def test_cannot_delete_feature_flag_on_another_team(self, mock_capture):
_, other_team, other_user = User.objects.bootstrap("Test", "team2@posthog.com", None)
self.client.force_login(other_user)
response = self.client.delete(f"/api/projects/{other_team.id}/feature_flags/{self.feature_flag.pk}/")
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.assertTrue(FeatureFlag.objects.filter(pk=self.feature_flag.pk).exists())
mock_capture.assert_not_called()
def test_get_flags_with_specified_token(self):
_, _, user = User.objects.bootstrap("Test", "team2@posthog.com", None)
self.client.force_login(user)
assert user.team is not None
assert self.team is not None
self.assertNotEqual(user.team.id, self.team.id)
response_team_1 = self.client.get(f"/api/projects/@current/feature_flags")
response_team_1_token = self.client.get(f"/api/projects/@current/feature_flags?token={user.team.api_token}")
response_team_2 = self.client.get(f"/api/projects/@current/feature_flags?token={self.team.api_token}")
self.assertEqual(response_team_1.json(), response_team_1_token.json())
self.assertNotEqual(response_team_1.json(), response_team_2.json())
response_invalid_token = self.client.get(f"/api/projects/@current/feature_flags?token=invalid")
self.assertEqual(response_invalid_token.status_code, 401)
def test_creating_a_feature_flag_with_same_team_and_key_after_deleting(self):
FeatureFlag.objects.create(team=self.team, created_by=self.user, key="alpha-feature", deleted=True)
response = self.client.post(
f"/api/projects/{self.team.id}/feature_flags/", {"name": "Alpha feature", "key": "alpha-feature"}
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
instance = FeatureFlag.objects.get(id=response.json()["id"])
self.assertEqual(instance.key, "alpha-feature")
def test_updating_a_feature_flag_with_same_team_and_key_of_a_deleted_one(self):
FeatureFlag.objects.create(team=self.team, created_by=self.user, key="alpha-feature", deleted=True)
instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
response = self.client.patch(
f"/api/projects/{self.team.id}/feature_flags/{instance.pk}", {"key": "alpha-feature",}, format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
instance.refresh_from_db()
self.assertEqual(instance.key, "alpha-feature")
@patch("posthog.api.feature_flag.report_user_action")
def test_my_flags(self, mock_capture):
self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{
"name": "Alpha feature",
"key": "alpha-feature",
"filters": {
"groups": [{"rollout_percentage": 20}],
"multivariate": {
"variants": [
{"key": "first-variant", "name": "First Variant", "rollout_percentage": 50},
{"key": "second-variant", "name": "Second Variant", "rollout_percentage": 25},
{"key": "third-variant", "name": "Third Variant", "rollout_percentage": 25},
],
},
},
},
format="json",
)
# alpha-feature is set for "distinct_id"
distinct_id_user = User.objects.create_and_join(self.organization, "distinct_id_user@posthog.com", None)
distinct_id_user.distinct_id = "distinct_id"
distinct_id_user.save()
self.client.force_login(distinct_id_user)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
self.assertEqual(len(response_data), 2)
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "alpha-feature")
self.assertEqual(first_flag["value_for_user_without_override"], "third-variant")
self.assertEqual(first_flag["override"], None)
second_flag = response_data[1]
self.assertEqual(second_flag["feature_flag"]["key"], "red_button")
self.assertEqual(second_flag["value_for_user_without_override"], True)
self.assertEqual(second_flag["override"], None)
# alpha-feature is not set for "distinct_id_0"
distinct_id_0_user = User.objects.create_and_join(self.organization, "distinct_id_0_user@posthog.com", None)
distinct_id_0_user.distinct_id = "distinct_id_0"
distinct_id_0_user.save()
self.client.force_login(distinct_id_0_user)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
self.assertEqual(len(response_data), 2)
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "alpha-feature")
self.assertEqual(first_flag["value_for_user_without_override"], False)
self.assertEqual(first_flag["override"], None)
@patch("posthoganalytics.capture")
def test_my_flags_groups(self, mock_capture):
self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
{
"name": "groups flag",
"key": "groups-flag",
"filters": {"aggregation_group_type_index": 0, "groups": [{"rollout_percentage": 100,}]},
},
format="json",
)
GroupTypeMapping.objects.create(team=self.team, group_type="organization", group_type_index=0)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
groups_flag = response.json()[0]
self.assertEqual(groups_flag["feature_flag"]["key"], "groups-flag")
self.assertEqual(groups_flag["value_for_user_without_override"], False)
response = self.client.get(
f"/api/projects/{self.team.id}/feature_flags/my_flags", data={"groups": json.dumps({"organization": "7"})}
)
groups_flag = response.json()[0]
self.assertEqual(groups_flag["feature_flag"]["key"], "groups-flag")
self.assertEqual(groups_flag["value_for_user_without_override"], True)
def test_create_override(self):
# Boolean override value
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": True},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertIsNotNone(
FeatureFlagOverride.objects.get(
team=self.team, user=self.user, feature_flag=feature_flag_instance, override_value=True
)
)
# String override value
feature_flag_instance_2 = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature-2")
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance_2.id, "override_value": "hey-hey"},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertIsNotNone(
FeatureFlagOverride.objects.get(
team=self.team, user=self.user, feature_flag=feature_flag_instance_2, override_value="hey-hey"
)
)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "beta-feature-2")
self.assertEqual(first_flag["override"]["override_value"], "hey-hey")
second_flag = response_data[1]
self.assertEqual(second_flag["feature_flag"]["key"], "beta-feature")
self.assertEqual(second_flag["override"]["override_value"], True)
third_flag = response_data[2]
self.assertEqual(third_flag["feature_flag"]["key"], "red_button")
self.assertEqual(third_flag["override"], None)
def test_update_override(self):
# Create an override and make sure the my_flags response shows it
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": "hey-hey"},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "beta-feature")
self.assertEqual(first_flag["override"]["override_value"], "hey-hey")
# Update the override, and make sure the my_flags response reflects the update
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": "new-override"},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "beta-feature")
self.assertEqual(first_flag["override"]["override_value"], "new-override")
# Ensure only 1 override exists in the DB for the feature_flag/user combo
self.assertEqual(
FeatureFlagOverride.objects.filter(user=self.user, feature_flag=feature_flag_instance).count(), 1
)
def test_delete_override(self):
# Create an override and make sure the my_flags response shows it
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": "hey-hey"},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "beta-feature")
self.assertEqual(first_flag["override"]["override_value"], "hey-hey")
# Delete the override, and make sure the my_flags response reflects the update
existing_override_id = first_flag["override"]["id"]
response = self.client.delete(f"/api/projects/@current/feature_flag_overrides/{existing_override_id}",)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
response = self.client.get(f"/api/projects/{self.team.id}/feature_flags/my_flags")
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = response.json()
first_flag = response_data[0]
self.assertEqual(first_flag["feature_flag"]["key"], "beta-feature")
self.assertEqual(first_flag["override"], None)
def test_create_override_with_invalid_override(self):
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": {"key": "a dict"}},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_override_for_feature_flag_in_another_team(self):
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
_, _, team_2_user = User.objects.bootstrap("Test", "team2@posthog.com", None)
self.client.force_login(team_2_user)
response = self.client.post(
"/api/projects/@current/feature_flag_overrides/my_overrides",
{"feature_flag": feature_flag_instance.id, "override_value": True},
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_delete_another_users_override(self):
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
feature_flag_override = FeatureFlagOverride.objects.create(
team=self.team, user=self.user, feature_flag=feature_flag_instance, override_value=True
)
feature_flag_override_id = feature_flag_override.id
_, _, user_2 = User.objects.bootstrap(self.organization.name, "user2@posthog.com", None)
self.client.force_login(user_2)
response = self.client.delete(f"/api/projects/@current/feature_flag_overrides/{feature_flag_override_id}",)
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_standard_viewset_endpoints_are_not_available(self):
feature_flag_instance = FeatureFlag.objects.create(team=self.team, created_by=self.user, key="beta-feature")
feature_flag_override = FeatureFlagOverride.objects.create(
team=self.team, user=self.user, feature_flag=feature_flag_instance, override_value=True
)
feature_flag_override_id = feature_flag_override.id
response = self.client.put(
f"/api/projects/@current/feature_flag_overrides/{feature_flag_override_id}",
{"feature_flag": feature_flag_instance.id, "override_value": True},
)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
response = self.client.patch(
f"/api/projects/@current/feature_flag_overrides/{feature_flag_override_id}",
{"feature_flag": feature_flag_instance.id, "override_value": True},
)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
response = self.client.get(f"/api/projects/@current/feature_flag_overrides/{feature_flag_override_id}")
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
response = self.client.get(f"/api/projects/@current/feature_flag_overrides/")
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
response = self.client.post(
f"/api/projects/@current/feature_flag_overrides/",
{"feature_flag": feature_flag_instance.id, "override_value": True},
)
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_validation_person_properties(self):
person_request = self._create_flag_with_properties(
"person-flag", [{"key": "email", "type": "person", "value": "@posthog.com", "operator": "icontains",},]
)
self.assertEqual(person_request.status_code, status.HTTP_201_CREATED)
cohort_request = self._create_flag_with_properties(
"cohort-flag", [{"key": "id", "type": "cohort", "value": 5},]
)
self.assertEqual(cohort_request.status_code, status.HTTP_201_CREATED)
event_request = self._create_flag_with_properties("illegal-event-flag", [{"key": "id", "value": 5},])
self.assertEqual(event_request.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
event_request.json(),
{
"type": "validation_error",
"code": "invalid_input",
"detail": "Filters are not valid (can only use person and cohort properties)",
"attr": "filters",
},
)
groups_request = self._create_flag_with_properties(
"illegal-groups-flag", [{"key": "industry", "value": "finance", "type": "group", "group_type_index": 0}]
)
self.assertEqual(groups_request.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
groups_request.json(),
{
"type": "validation_error",
"code": "invalid_input",
"detail": "Filters are not valid (can only use person and cohort properties)",
"attr": "filters",
},
)
@patch("posthog.tasks.calculate_cohort.calculate_cohort_ch.delay")
def test_cohort_is_calculated(self, calculate_cohort_ch):
cohort = Cohort.objects.create(
team=self.team,
groups=[{"properties": {"$some_prop": "something", "$another_prop": "something"}}],
name="cohort1",
)
cohort_request = self._create_flag_with_properties(
"cohort-flag", [{"key": "id", "type": "cohort", "value": cohort.pk},]
)
self.assertEqual(cohort_request.status_code, status.HTTP_201_CREATED)
self.assertEqual(calculate_cohort_ch.call_count, 1)
def test_validation_group_properties(self):
groups_request = self._create_flag_with_properties(
"groups-flag",
[{"key": "industry", "value": "finance", "type": "group", "group_type_index": 0}],
aggregation_group_type_index=0,
)
self.assertEqual(groups_request.status_code, status.HTTP_201_CREATED)
illegal_groups_request = self._create_flag_with_properties(
"illegal-groups-flag",
[{"key": "industry", "value": "finance", "type": "group", "group_type_index": 0}],
aggregation_group_type_index=3,
)
self.assertEqual(illegal_groups_request.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
illegal_groups_request.json(),
{
"type": "validation_error",
"code": "invalid_input",
"detail": "Filters are not valid (can only use group properties)",
"attr": "filters",
},
)
person_request = self._create_flag_with_properties(
"person-flag",
[{"key": "email", "type": "person", "value": "@posthog.com", "operator": "icontains",},],
aggregation_group_type_index=0,
)
self.assertEqual(person_request.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
person_request.json(),
{
"type": "validation_error",
"code": "invalid_input",
"detail": "Filters are not valid (can only use group properties)",
"attr": "filters",
},
)
def _create_flag_with_properties(self, name, properties, **kwargs):
return self.client.post(
f"/api/projects/{self.team.id}/feature_flags/",
data={"name": name, "key": name, "filters": {**kwargs, "groups": [{"properties": properties,}],},},
format="json",
)
| 46.042838 | 120 | 0.610368 | 3,755 | 34,394 | 5.327297 | 0.067377 | 0.067636 | 0.054039 | 0.042991 | 0.837333 | 0.813187 | 0.791892 | 0.770146 | 0.739602 | 0.727604 | 0 | 0.010659 | 0.263534 | 34,394 | 746 | 121 | 46.104558 | 0.779076 | 0.025324 | 0 | 0.537152 | 0 | 0 | 0.253239 | 0.103457 | 0 | 0 | 0 | 0 | 0.187307 | 1 | 0.047988 | false | 0 | 0.010836 | 0.001548 | 0.063467 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43adeb0b315f3844b15bd4d49be46779cb1ebb32 | 94 | py | Python | interfaces/python/test/trainers_test.py | awf/ELL | 25c94a1422efc41d5560db11b136f9d8f957ad41 | [
"MIT"
] | 2,094 | 2016-09-28T05:55:24.000Z | 2019-05-04T19:06:36.000Z | interfaces/python/test/trainers_test.py | awesomemachinelearning/ELL | cb897e3aec148a1e9bd648012b5f53ab9d0dd20c | [
"MIT"
] | 213 | 2017-06-30T12:53:40.000Z | 2019-05-03T06:35:38.000Z | interfaces/python/test/trainers_test.py | awesomemachinelearning/ELL | cb897e3aec148a1e9bd648012b5f53ab9d0dd20c | [
"MIT"
] | 301 | 2017-03-24T08:40:00.000Z | 2019-05-02T21:22:28.000Z | import ell_helper
import ell
def test():
print("trainers_test.test -- TBD")
return 0
| 13.428571 | 38 | 0.680851 | 14 | 94 | 4.428571 | 0.714286 | 0.290323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.212766 | 94 | 6 | 39 | 15.666667 | 0.824324 | 0 | 0 | 0 | 0 | 0 | 0.265957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.8 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43af954e8ed556cd8c31fb9944438d1bc9067fbb | 556 | py | Python | templates/fastApiService/zarubaServiceName/helpers/transport/__init__.py | state-alchemists/zaruba | 2c689c920df3589168ec81664b92110021892464 | [
"Apache-2.0"
] | 39 | 2020-03-13T19:41:11.000Z | 2022-02-14T02:01:00.000Z | templates/fastApiService/zarubaServiceName/helpers/transport/__init__.py | state-alchemists/zaruba | 2c689c920df3589168ec81664b92110021892464 | [
"Apache-2.0"
] | 5 | 2020-08-01T08:55:48.000Z | 2022-02-10T00:55:39.000Z | templates/fastApiService/zarubaServiceName/helpers/transport/__init__.py | state-alchemists/zaruba | 2c689c920df3589168ec81664b92110021892464 | [
"Apache-2.0"
] | 4 | 2020-11-10T20:45:12.000Z | 2021-03-18T06:18:55.000Z | from typing import Mapping
from helpers.transport.interface import MessageBus, RPC
from helpers.transport.rmq_connection import get_rmq_connection_parameters
from helpers.transport.rmq_mb import RMQMessageBus
from helpers.transport.rmq_rpc import RMQRPC
from helpers.transport.rmq_config import RMQEventMap
from helpers.transport.kafka_mb import KafkaMessageBus, get_kafka_connection_parameters
from helpers.transport.kafka_config import KafkaEventMap
from helpers.transport.local_mb import LocalMessageBus
from helpers.transport.local_rpc import LocalRPC
| 50.545455 | 87 | 0.888489 | 74 | 556 | 6.486486 | 0.324324 | 0.20625 | 0.375 | 0.191667 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07554 | 556 | 10 | 88 | 55.6 | 0.933852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43c5dd76e089ab74fecdf1df9f255393f4550f8e | 167 | py | Python | tools/Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/utils/__init__.py | hito0512/Vitis-AI | 996459fb96cb077ed2f7e789d515893b1cccbc95 | [
"Apache-2.0"
] | 1 | 2022-02-17T22:13:23.000Z | 2022-02-17T22:13:23.000Z | tools/Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/utils/__init__.py | hito0512/Vitis-AI | 996459fb96cb077ed2f7e789d515893b1cccbc95 | [
"Apache-2.0"
] | null | null | null | tools/Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/utils/__init__.py | hito0512/Vitis-AI | 996459fb96cb077ed2f7e789d515893b1cccbc95 | [
"Apache-2.0"
] | null | null | null | from .torch_op_attr import *
from .nndct2torch_op_map import *
from .op_register import *
from .torch_const import *
from .tensor_util import *
from .schema import *
| 20.875 | 33 | 0.778443 | 25 | 167 | 4.92 | 0.48 | 0.406504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.149701 | 167 | 7 | 34 | 23.857143 | 0.859155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
78dad85a74c50279420c37bee23b73f7afcccdae | 67 | py | Python | library_api_sematics/controllers/__init__.py | sematicshood/addons_sematics | a9e1871938d12b595730122b55f538d300a6255f | [
"MIT"
] | null | null | null | library_api_sematics/controllers/__init__.py | sematicshood/addons_sematics | a9e1871938d12b595730122b55f538d300a6255f | [
"MIT"
] | null | null | null | library_api_sematics/controllers/__init__.py | sematicshood/addons_sematics | a9e1871938d12b595730122b55f538d300a6255f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from . import bitly
from . import midtrans | 16.75 | 23 | 0.641791 | 9 | 67 | 4.777778 | 0.777778 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.19403 | 67 | 4 | 24 | 16.75 | 0.777778 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
78f0c5ac88aa898cce0abb13733698822d935083 | 43 | py | Python | byol_pytorch/__init__.py | TariqAHassan/byol-pytorch | 7be5b87b7dfd41eec8a1b1c2d44b0211a30673da | [
"MIT"
] | 1,230 | 2020-06-17T01:05:21.000Z | 2022-03-30T10:21:04.000Z | byol_pytorch/__init__.py | TariqAHassan/byol-pytorch | 7be5b87b7dfd41eec8a1b1c2d44b0211a30673da | [
"MIT"
] | 74 | 2020-06-17T10:12:14.000Z | 2022-03-30T06:19:15.000Z | byol_pytorch/__init__.py | TariqAHassan/byol-pytorch | 7be5b87b7dfd41eec8a1b1c2d44b0211a30673da | [
"MIT"
] | 193 | 2020-06-17T08:11:52.000Z | 2022-03-31T21:10:49.000Z | from byol_pytorch.byol_pytorch import BYOL
| 21.5 | 42 | 0.883721 | 7 | 43 | 5.142857 | 0.571429 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
78fe2ff8c32f6631cebc50746f0a9700e79cfe7e | 17,699 | py | Python | tests/test_rules.py | DrackThor/artifactory-cleanup | 7fe154e1822fd6449c7dc896c0d9904f61adbc86 | [
"MIT"
] | 1 | 2022-03-22T06:54:36.000Z | 2022-03-22T06:54:36.000Z | tests/test_rules.py | DrackThor/artifactory-cleanup | 7fe154e1822fd6449c7dc896c0d9904f61adbc86 | [
"MIT"
] | null | null | null | tests/test_rules.py | DrackThor/artifactory-cleanup | 7fe154e1822fd6449c7dc896c0d9904f61adbc86 | [
"MIT"
] | null | null | null | from artifactory_cleanup import rules
import custom_rules
from policy import RULES
def test_repo_rules():
for repo_rules in RULES:
assert isinstance(repo_rules.name, str)
def test_keep_latest_n_version():
rule = rules.keep_latest_nupkg_n_version(2)
result = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.108",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.113",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.109-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.111-Feature",
},
},
]
result_expexted = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.108",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.109-Feature",
},
},
]
result_after_filter = rule.filter_result(result)
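# filter_result is expected to return the artifacts slated for deletion: everything
# except the latest 2 versions per package, tracked separately for release and
# pre-release (-Feature) versions.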
assert result_after_filter == result_expexted
def test_keep_latest_n_version_with_tar_gz():
rule = rules.keep_latest_nupkg_n_version(1)
result = [
{
"name": ".tar.gz",
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.113",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.111-Feature",
},
},
]
result_expexted = [
{
"name": ".tar.gz",
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_version_one():
rule = rules.keep_latest_nupkg_n_version(1)
result = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.113",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.111-Feature",
},
},
]
result_expexted = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_version_empty():
rule = rules.keep_latest_nupkg_n_version(2)
result = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.113",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.110-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.111-Feature",
},
},
]
result_expexted = []
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_version_patch():
rule = rules.keep_latest_nupkg_n_version(2)
result = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.2.109-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.1.111-Feature",
},
},
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.1.110-Feature",
},
},
]
result_to_delete = [
{
"name": ".nupkg",
"properties": {
"nuget.id": "Package",
"nuget.version": "16.0.1.110-Feature",
},
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_to_delete
def test_keep_latest_n_file():
rule = rules.keep_latest_n_file(2)
result = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
{"path": 1, "name": 4},
{"path": 1, "name": 5},
]
result_to_delete = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_to_delete
def test_keep_latest_n_file_empty():
rule = rules.keep_latest_n_file(10)
result = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
{"path": 1, "name": 4},
{"path": 1, "name": 5},
]
result_to_delete = []
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_to_delete
def test_keep_latest_n_file_in_folder():
rule = rules.keep_latest_n_file_in_folder(2)
result = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
{"path": 1, "name": 4},
{"path": 1, "name": 5},
{"path": 2, "name": 1},
{"path": 2, "name": 2},
{"path": 2, "name": 3},
{"path": 2, "name": 4},
{"path": 2, "name": 5},
{"path": 3, "name": 1},
{"path": 3, "name": 2},
{"path": 3, "name": 3},
]
result_to_delete = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
{"path": 2, "name": 1},
{"path": 2, "name": 2},
{"path": 2, "name": 3},
{"path": 3, "name": 1},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_to_delete
def test_keep_latest_n_file_in_folder_empty():
rule = rules.keep_latest_n_file_in_folder(100)
result = [
{"path": 1, "name": 1},
{"path": 1, "name": 2},
{"path": 1, "name": 3},
{"path": 1, "name": 4},
{"path": 1, "name": 5},
{"path": 2, "name": 1},
{"path": 2, "name": 2},
{"path": 2, "name": 3},
{"path": 2, "name": 4},
{"path": 2, "name": 5},
{"path": 3, "name": 1},
{"path": 3, "name": 2},
{"path": 3, "name": 3},
]
result_to_delete = []
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_to_delete
def test_keep_latest_version_n_file_in_folder():
rule = rules.keep_latest_version_n_file_in_folder(1)
result = [
{
"name": "name.1.2.100.tar.gz",
"path": "repo/folder",
},
{
"name": "name.1.2.200.tar.gz",
"path": "repo/folder",
},
{
"name": "new_name_1.2.3.101.tar.gz",
"path": "repo/folder",
},
{
"name": "new_name_1.2.4.100.tar.gz",
"path": "repo/folder",
},
]
result_expexted = [
{
"name": "name.1.2.100.tar.gz",
"path": "repo/folder",
},
{
"name": "new_name_1.2.3.101.tar.gz",
"path": "repo/folder",
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_delete_if_image_not_contained_in_properties():
rule = rules.delete_docker_image_if_not_contained_in_properties(
"docker-repo", "test_docker."
)
result = [
{"properties": {"test_docker.test1": "tag1"}},
{"properties": {"test_docker.test2": "tag2"}},
]
result_expexted = {
"test1": {"tag1": True},
"test2": {"tag2": True},
}
assert rule.get_properties_dict(result) == result_expexted
def test_delete_images_older_than_n_days():
rule = rules.delete_docker_images_older_than(days=10)
rule._collect_docker_size = lambda x: x
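# _collect_docker_size is stubbed out with an identity lambda, presumably so the
# test does not need to query a real Artifactory instance for image sizes.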
result = [
{"path": "repo/image/tag", "name": "manifest.json"},
{"path": "repo/image/tag1", "name": "manifest.json"},
{"path": "repo/image/tag2", "name": "manifest.json"},
]
result_expexted = [
{"path": "repo/image", "name": "tag"},
{"path": "repo/image", "name": "tag1"},
{"path": "repo/image", "name": "tag2"},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_file_in_folder_by_version():
rule = custom_rules.keep_latest_cross_package_n_version(2)
result = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
},
{
"name": "package-name.0.50.90.tar.gz",
"path": "package-name/develop/0.50.90/other/folder/inside",
},
{
"name": "package-name.0.50.201.tar.gz",
"path": "package-name/master/0.50.201/other/folder/inside",
},
{
"name": "package-name.0.50.94.tar.gz",
"path": "package-name/master/0.50.94/other/folder/inside",
},
{
"name": "package-name.0.51.104.tar.gz",
"path": "package-name/develop/0.51.104/other/folder/inside",
},
{
"name": "package-name.0.51.105.tar.gz",
"path": "package-name/release/0.51.105/other/folder/inside",
},
]
result_expexted = [
{
"name": "package-name.0.50.94.tar.gz",
"path": "package-name/master/0.50.94/other/folder/inside",
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_file_in_folder_by_version_does_not_suit_check_for_major_minor():
rule = custom_rules.keep_latest_cross_package_n_version(2)
# An artifact version that does not have the expected number of digits is not deleted. In result: 0.50.1.02
result = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
},
{
"name": "package-name.0.50.101.tar.gz",
"path": "package-name/develop/0.50.101/other/folder/inside",
},
{
"name": "package-name.0.50.1.02.tar.gz",
"path": "package-name/master/0.50.1.02/other/folder/inside",
},
{
"name": "package-name.0.50.103.tar.gz",
"path": "package-name/master/0.50.103/other/folder/inside",
},
{
"name": "package-name.0.50.104.tar.gz",
"path": "package-name/develop/0.50.104/other/folder/inside",
},
{
"name": "package-name.0.50.105.tar.gz",
"path": "package-name/master/0.50.105/other/folder/inside",
},
]
result_expexted = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_keep_latest_n_file_in_folder_by_version_multiple_versions_in_path():
rule = custom_rules.keep_latest_cross_package_n_version(1)
# If the path contains more than one version, the artifact is not deleted.
# Most likely the branch was named that way, or the version ended up in the path twice by mistake. In result: /0.50/0.50.103/
result = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
},
{
"name": "package-name.0.50.101.tar.gz",
"path": "package-name/develop/0.50.101/other/folder/inside",
},
{
"name": "package-name.0.50.102.tar.gz",
"path": "package-name/0.50/0.50.102/other/folder/inside",
},
{
"name": "package-name.0.50.103.tar.gz",
"path": "package-name/0.50/0.50.103/other/folder/inside",
},
{
"name": "package-name.0.50.104.tar.gz",
"path": "package-name/master/0.50.104/other/folder/inside",
},
]
result_expexted = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
},
{
"name": "package-name.0.50.102.tar.gz",
"path": "package-name/0.50/0.50.102/other/folder/inside",
},
]
result_after_filter = rule.filter_result(result)
assert result_after_filter == result_expexted
def test_delete_files_that_do_not_exist_in_other_repository():
rule = custom_rules.delete_files_that_do_not_exist_in_other_repository(
"other_repository", "property"
)
result = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
"properties": {"property": "95117"},
},
{
"name": "package-name.0.50.101.tar.gz",
"path": "package-name/master/0.50.101/other/folder/inside",
"properties": {"property": "95118"},
},
{
"name": "package-name.0.50.102.tar.gz",
"path": "package-name/master/0.50.102/other/folder/inside",
"properties": {"property": "95119"},
},
{
"name": "package-name.0.50.103.tar.gz",
"path": "package-name/master/0.50.103/other/folder/inside",
},
]
artifacts_in_other_repo = [
{
"name": "package-name.0.50.100.tar.gz",
"path": "package-name/master/0.50.100/other/folder/inside",
"properties": {"property": "95117"},
},
{
"name": "package-name.0.50.101.tar.gz",
"path": "package-name/master/0.50.101/other/folder/inside",
"properties": {"property": "95118"},
},
{
"name": "package-name.0.50.102.tar.gz",
"path": "package-name/master/0.50.102/other/folder/inside",
},
]
result_expexted = [
{
"name": "package-name.0.50.102.tar.gz",
"path": "package-name/master/0.50.102/other/folder/inside",
"properties": {"property": "95119"},
},
]
result_after_filter = rule.remove_artifacts_from_result_artifact_if_property_exists_in_other_repository(
result, artifacts_in_other_repo
)
assert result_after_filter == result_expexted
def test_docker_values():
rule = rules.delete_docker_image_if_not_contained_in_properties_value(
"docker-repo", "test_docker."
)
result = [
{"properties": {"test_docker.test1": "value1"}},
{"properties": {"test_docker.test2": "value2"}},
{"properties": {"no_test_docker.test3": "value3"}},
{"no_properties": {"test_key4": "value4"}},
]
expected_set = {"value1", "value2"}
test_set = rule.get_properties_values(result)
assert test_set == expected_set
| 27.784929 | 108 | 0.481214 | 1,889 | 17,699 | 4.313923 | 0.083113 | 0.022089 | 0.038655 | 0.05154 | 0.854093 | 0.849061 | 0.828936 | 0.82145 | 0.765861 | 0.737882 | 0 | 0.063568 | 0.348494 | 17,699 | 636 | 109 | 27.828616 | 0.643136 | 0.014182 | 0 | 0.514388 | 0 | 0 | 0.308072 | 0.130417 | 0 | 0 | 0 | 0 | 0.032374 | 1 | 0.032374 | false | 0 | 0.005396 | 0 | 0.03777 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
601613098414f5c27d5074dbd4516beeb1a75504 | 42 | py | Python | library_simulator/__init__.py | harmsm/library_simulator | 26092cde0f9f89659b210f6829e01799ac77e555 | [
"Unlicense"
] | null | null | null | library_simulator/__init__.py | harmsm/library_simulator | 26092cde0f9f89659b210f6829e01799ac77e555 | [
"Unlicense"
] | null | null | null | library_simulator/__init__.py | harmsm/library_simulator | 26092cde0f9f89659b210f6829e01799ac77e555 | [
"Unlicense"
] | 1 | 2019-06-03T21:28:05.000Z | 2019-06-03T21:28:05.000Z |
from .simulator import LibrarySimulator
| 10.5 | 39 | 0.833333 | 4 | 42 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 42 | 3 | 40 | 14 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
602114acc4b7b28ce0edb065b03bd341f5578251 | 8,529 | py | Python | tests/unit/amlb/utils/serialization/test_serializers.py | PGijsbers/automlbenchmark | 6e0296b097455caf18d754e79a2bd85e85d01548 | [
"MIT"
] | 282 | 2018-09-19T09:45:46.000Z | 2022-03-30T04:05:51.000Z | tests/unit/amlb/utils/serialization/test_serializers.py | PGijsbers/automlbenchmark | 6e0296b097455caf18d754e79a2bd85e85d01548 | [
"MIT"
] | 267 | 2018-11-02T11:43:11.000Z | 2022-03-31T08:58:16.000Z | tests/unit/amlb/utils/serialization/test_serializers.py | PGijsbers/automlbenchmark | 6e0296b097455caf18d754e79a2bd85e85d01548 | [
"MIT"
] | 104 | 2018-10-17T19:32:36.000Z | 2022-03-19T22:47:59.000Z | import os
import pytest
from amlb.utils.core import Namespace as ns
from amlb.utils.serialization import is_sparse, serialize_data, deserialize_data
@pytest.mark.use_disk
def test_serialize_list_json(tmpdir):
li = [[1, 2.2, None, 3, 4.4, 'foo', True], ['bar', False, 2/3]]
dest = os.path.join(tmpdir, "my_list")
path = serialize_data(li, dest, config=ns(fallback_serializer='json'))
assert path == f"{dest}.json"
reloaded = deserialize_data(path)
assert isinstance(reloaded, list)
assert li == reloaded
@pytest.mark.use_disk
def test_serialize_list_pickle(tmpdir):
li = [[1, 2.2, None, 3, 4.4, 'foo', True], ['bar', False, 2/3]]
dest = os.path.join(tmpdir, "my_list")
path = serialize_data(li, dest, config=ns(fallback_serializer='pickle'))
assert path == f"{dest}.pkl"
reloaded = deserialize_data(path)
assert isinstance(reloaded, list)
assert li == reloaded
@pytest.mark.use_disk
def test_serialize_dict_json(tmpdir):
di = dict(
first=[1, 2.2, None, 3, 4.4, 'foo', True],
second=['bar', False, 2/3]
)
dest = os.path.join(tmpdir, "my_dict")
path = serialize_data(di, dest, config=ns(fallback_serializer='json'))
assert path == f"{dest}.json"
reloaded = deserialize_data(path)
assert isinstance(reloaded, dict)
assert di == reloaded
@pytest.mark.use_disk
def test_serialize_dict_pickle(tmpdir):
di = dict(
first=[1, 2.2, None, 3, 4.4, 'foo', True],
second=['bar', False, 2/3]
)
dest = os.path.join(tmpdir, "my_dict")
path = serialize_data(di, dest, config=ns(fallback_serializer='pickle'))
assert path == f"{dest}.pkl"
reloaded = deserialize_data(path)
assert isinstance(reloaded, dict)
assert di == reloaded
@pytest.mark.use_disk
def test_serialize_numpy_array(tmpdir):
import numpy as np
arr = np.array([1, 2.2, np.nan, 3, 4.4])
dest = os.path.join(tmpdir, "my_np_arr")
path = serialize_data(arr, dest)
assert path == f"{dest}.npy"
reloaded = deserialize_data(path)
assert isinstance(reloaded, np.ndarray)
assert np.array_equal(arr, reloaded, equal_nan=True)
@pytest.mark.use_disk
def test_serialize_pandas_series(tmpdir):
import pandas as pd
ser = pd.Series([1, 2.2, pd.NA, 3, 4.4])
dest = os.path.join(tmpdir, "my_pd_ser")
path = serialize_data(ser, dest)
assert path == f"{dest}.pd"
reloaded = deserialize_data(path)
assert isinstance(reloaded, pd.Series)
assert ser.compare(reloaded).empty
@pytest.mark.use_disk
def test_serialize_pandas_dataframes(tmpdir):
import pandas as pd
df = pd.DataFrame(dict(
first=[1, 2.2, pd.NA, 3, 4.4],
second=['a', 'b', 'c', 'a', 'b']
))
dest = os.path.join(tmpdir, "my_pd_df")
path = serialize_data(df, dest)
assert path == f"{dest}.pd"
reloaded = deserialize_data(path)
assert isinstance(reloaded, pd.DataFrame)
assert df.compare(reloaded).empty
@pytest.mark.use_disk
def test_serialize_sparse_matrix(tmpdir):
import scipy.sparse as sp
import numpy as np
arr = np.array([[0, 0, 0, 3.3], [4.4, 0, 0, 0], [0, np.nan, 0, 0]])
nans = np.count_nonzero(np.isnan(arr))
mat = sp.csc_matrix(arr)
assert sp.issparse(mat)
dest = os.path.join(tmpdir, "my_sparse_mat")
path = serialize_data(mat, dest)
assert path == f"{dest}.spy.npz"
reloaded = deserialize_data(path, config=ns(sparse_matrix_deserialized_format=None))
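# With sparse_matrix_deserialized_format=None the data is expected to come back
# in its native scipy sparse form rather than being densified.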
assert isinstance(reloaded, sp.spmatrix)
assert (mat != reloaded).nnz == nans
assert np.array_equal(mat.toarray(), reloaded.toarray(), equal_nan=True)
@pytest.mark.use_disk
def test_serialize_sparse_matrix_reload_as_dense(tmpdir):
import scipy.sparse as sp
import numpy as np
arr = np.array([[0, 0, 0, 3.3], [4.4, 0, 0, 0], [0, np.nan, 0, 0]])
mat = sp.csc_matrix(arr)
assert sp.issparse(mat)
dest = os.path.join(tmpdir, "my_sparse_mat")
path = serialize_data(mat, dest)
assert path == f"{dest}.spy.npz"
reloaded = deserialize_data(path, config=ns(sparse_matrix_deserialized_format='dense'))
assert not sp.issparse(reloaded)
assert isinstance(reloaded, np.matrix)
assert np.array_equal(mat.toarray(), np.asarray(reloaded), equal_nan=True)
@pytest.mark.use_disk
def test_serialize_sparse_matrix_reload_as_array(tmpdir):
import scipy.sparse as sp
import numpy as np
arr = np.array([[0, 0, 0, 3.3], [4.4, 0, 0, 0], [0, np.nan, 0, 0]])
mat = sp.csc_matrix(arr)
assert sp.issparse(mat)
dest = os.path.join(tmpdir, "my_sparse_mat")
path = serialize_data(mat, dest)
assert path == f"{dest}.spy.npz"
reloaded = deserialize_data(path, config=ns(sparse_matrix_deserialized_format='array'))
assert isinstance(reloaded, np.ndarray)
assert np.array_equal(mat.toarray(), reloaded, equal_nan=True)
@pytest.mark.use_disk
def test_serialize_sparse_dataframe(tmpdir):
import pandas as pd
ser_config = ns(pandas_serializer='pickle', sparse_dataframe_deserialized_format=None)
dfs = pd.DataFrame(dict(
first=[0, 0, 0, 3.3],
second=[4.4, 0, 0, 0],
third=[0, pd.NA, 0, 0],
)).astype('Sparse')
assert is_sparse(dfs)
dest = os.path.join(tmpdir, "my_sparse_df")
path = serialize_data(dfs, dest, config=ser_config)
assert path == f"{dest}.pd"
reloaded = deserialize_data(path, config=ser_config)
assert isinstance(reloaded, pd.DataFrame)
assert is_sparse(reloaded)
assert dfs.compare(reloaded).empty
@pytest.mark.use_disk
def test_serialize_pandas_dataframe_reload_as_dense(tmpdir):
import pandas as pd
ser_config = ns(pandas_serializer='pickle', sparse_dataframe_deserialized_format='dense')
dfs = pd.DataFrame(dict(
first=[0, 0, 0, 3.3],
second=[4.4, 0, 0, 0],
third=[0, pd.NA, 0, 0],
# fourth=[None, None, 'a', None]
)).astype('Sparse')
assert is_sparse(dfs)
dest = os.path.join(tmpdir, "my_sparse_df")
path = serialize_data(dfs, dest, config=ser_config)
assert path == f"{dest}.pd"
reloaded = deserialize_data(path, config=ser_config)
assert isinstance(reloaded, pd.DataFrame)
assert not is_sparse(reloaded)
assert dfs.compare(reloaded).empty
@pytest.mark.use_disk
def test_serialize_pandas_dataframe_reload_as_array(tmpdir):
import numpy as np
import pandas as pd
ser_config = ns(pandas_serializer='pickle', sparse_dataframe_deserialized_format='array')
dfs = pd.DataFrame(dict(
first=[0, 0, 0, 3.3],
second=[4.4, 0, 0, 0],
third=[0, pd.NA, 0, 0],
# fourth=[None, None, 'a', None]
)).astype('Sparse')
assert is_sparse(dfs)
dest = os.path.join(tmpdir, "my_sparse_df")
path = serialize_data(dfs, dest, config=ser_config)
assert path == f"{dest}.pd"
reloaded = deserialize_data(path, config=ser_config)
assert isinstance(reloaded, np.ndarray)
assert np.array_equal(dfs.to_numpy(), np.asarray(reloaded), equal_nan=True)
@pytest.mark.use_disk
def test_serialize_sparse_numerical_dataframe_to_parquet(tmpdir):
import pandas as pd
ser_config = ns(pandas_serializer='parquet', sparse_dataframe_deserialized_format=None)
dfs = pd.DataFrame(dict(
first=[0, 0, 0, 3.3],
second=[4.4, 0, 0, 0],
third=[0, pd.NA, 0, 0],
)).astype('Sparse')
assert is_sparse(dfs)
dest = os.path.join(tmpdir, "my_sparse_df")
path = serialize_data(dfs, dest, config=ser_config)
assert path == f"{dest}.sparse.pd"
reloaded = deserialize_data(path, config=ser_config)
assert isinstance(reloaded, pd.DataFrame)
assert is_sparse(reloaded)
assert dfs.compare(reloaded).empty
@pytest.mark.use_disk
def test_serialize_mixed_dataframe_to_parquet(tmpdir):
import pandas as pd
ser_config = ns(pandas_serializer='parquet', sparse_dataframe_deserialized_format=None)
dfm = pd.DataFrame(dict(
first=pd.arrays.SparseArray([0, 0, 0, 3.3]),
second=pd.arrays.SparseArray([4.4, 0, 0, 0], dtype=pd.SparseDtype(float, 0)),
third=pd.arrays.SparseArray([0, pd.NA, 0, 0]),
fourth=[None, None, 'a', None]
))
assert is_sparse(dfm)
dest = os.path.join(tmpdir, "my_mixed_df")
path = serialize_data(dfm, dest, config=ser_config)
assert path == f"{dest}.sparse.pd"
reloaded = deserialize_data(path, config=ser_config)
assert isinstance(reloaded, pd.DataFrame)
assert is_sparse(reloaded)
assert dfm.compare(reloaded).empty
| 32.930502 | 93 | 0.674874 | 1,263 | 8,529 | 4.392716 | 0.085511 | 0.015501 | 0.010274 | 0.045963 | 0.876532 | 0.863915 | 0.839041 | 0.816691 | 0.805155 | 0.769286 | 0 | 0.022321 | 0.185837 | 8,529 | 258 | 94 | 33.05814 | 0.776642 | 0.007152 | 0 | 0.681159 | 0 | 0 | 0.053054 | 0 | 0 | 0 | 0 | 0 | 0.285024 | 1 | 0.072464 | false | 0 | 0.091787 | 0 | 0.164251 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6048ded71fda64ba64a69d786b38ec992eb0ab85 | 7,292 | py | Python | pretrained_mol_sim/molecule_optimization/get_final_results.py | allisontam/graph_coattention | 5edc98615d5526269452b519bb13512ec5e952da | [
"MIT"
] | null | null | null | pretrained_mol_sim/molecule_optimization/get_final_results.py | allisontam/graph_coattention | 5edc98615d5526269452b519bb13512ec5e952da | [
"MIT"
] | null | null | null | pretrained_mol_sim/molecule_optimization/get_final_results.py | allisontam/graph_coattention | 5edc98615d5526269452b519bb13512ec5e952da | [
"MIT"
] | null | null | null |
import pickle
import gzip
# We define the functions used to load and save objects
def save_object(obj, filename):
"""
Function that saves an object to a file using pickle
"""
result = pickle.dumps(obj)
    with gzip.GzipFile(filename, 'wb') as dest:
        dest.write(result)
def load_object(filename):
"""
Function that loads an object from a file using pickle
"""
    with gzip.GzipFile(filename, 'rb') as source:
        result = source.read()
    ret = pickle.loads(result)
return ret
# We compute the average statistics for the grammar autoencoder
import numpy as np
n_simulations = 10
iteration = 5
results_grammar = np.zeros((n_simulations, 3))
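# Per simulation we record three statistics: column 0 = fraction of valid
# SMILES, column 1 = best (minimum) score, column 2 = average score over all
# results strictly below the worst observed value.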
for j in range(1, n_simulations + 1):
best_value = 1e10
n_valid = 0
max_value = 0
for i in range(iteration):
smiles = load_object('simulation{}/grammar/results/valid_smiles{}.dat'.format(j, i))
scores = load_object('simulation{}/grammar/results/scores{}.dat'.format(j, i))
n_valid += len([ x for x in smiles if x is not None ])
if min(scores) < best_value:
best_value = min(scores)
if max(scores) > max_value:
max_value = max(scores)
    sum_values = 0
    count_values = 0
for i in range(iteration):
scores = np.array(load_object('simulation{}/grammar/results/scores{}.dat'.format(j, i)))
sum_values += np.sum(scores[ scores < max_value ])
count_values += len(scores[ scores < max_value ])
# fraction of valid smiles
results_grammar[ j - 1, 0 ] = 1.0 * n_valid / (iteration * 50)
# Best value
results_grammar[ j - 1, 1 ] = best_value
# Average value =
results_grammar[ j - 1, 2 ] = 1.0 * sum_values / count_values
print("Results Grammar VAE (fraction valid, best, average)):")
print("Mean:", np.mean(results_grammar, 0)[ 0 ], -np.mean(results_grammar, 0)[ 1 ], -np.mean(results_grammar, 0)[ 2 ])
print("Std:", np.std(results_grammar, 0) / np.sqrt(iteration))
print("First:", -np.min(results_grammar[ : , 1 ]))
best_score = np.min(results_grammar[ : , 1 ])
results_grammar[ results_grammar[ : , 1 ] == best_score , 1 ] = 1e10
print("Second:", -np.min(results_grammar[ : , 1 ]))
second_best_score = np.min(results_grammar[ : , 1 ])
results_grammar[ results_grammar[ : , 1 ] == second_best_score, 1 ] = 1e10
print("Third:", -np.min(results_grammar[ : , 1 ]))
third_best_score = np.min(results_grammar[ : , 1 ])
from rdkit.Chem import MolFromSmiles, MolToSmiles
from rdkit.Chem import Draw
from rdkit.Chem import Descriptors
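# Collect the molecules behind the three best scores and render them side by
# side with RDKit's grid-image drawing.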
mols = []
for j in range(1, n_simulations + 1):
for i in range(iteration):
smiles = np.array(load_object('simulation{}/grammar/results/valid_smiles{}.dat'.format(j, i)))
scores = np.array(load_object('simulation{}/grammar/results/scores{}.dat'.format(j, i)))
if np.any(scores == best_score):
smile = smiles[ scores == best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("First:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
best_score = 1e10
if np.any(scores == second_best_score):
smile = smiles[ scores == second_best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("Second:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
second_best_score = 1e10
if np.any(scores == third_best_score):
smile = smiles[ scores == third_best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("Third:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
third_best_score = 1e10
img = Draw.MolsToGridImage(mols, molsPerRow = len(mols), subImgSize=(300, 300), useSVG=True)
with open("molecule_images/best_grammar_molecule.svg", "w") as text_file:
text_file.write(img)
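# The same aggregation is repeated below for the character VAE results.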
results_character = np.zeros((n_simulations, 3))
for j in range(1, n_simulations + 1):
best_value = 1e10
n_valid = 0
max_value = 0
for i in range(iteration):
smiles = load_object('simulation{}/character/results/valid_smiles{}.dat'.format(j, i))
scores = load_object('simulation{}/character/results/scores{}.dat'.format(j, i))
n_valid += len([ x for x in smiles if x is not None ])
if min(scores) < best_value:
best_value = min(scores)
if max(scores) > max_value:
max_value = max(scores)
    sum_values = 0
    count_values = 0
for i in range(iteration):
scores = np.array(load_object('simulation{}/character/results/scores{}.dat'.format(j, i)))
sum_values += np.sum(scores[ scores < max_value ])
count_values += len(scores[ scores < max_value ])
# fraction of valid smiles
results_character[ j - 1, 0 ] = 1.0 * n_valid / (iteration * 50)
# Best value
results_character[ j - 1, 1 ] = best_value
# Average value =
results_character[ j - 1, 2 ] = 1.0 * sum_values / count_values
print("Results Character VAE (fraction valid, best, average)):")
print("Mean:", np.mean(results_character, 0)[ 0 ], -np.mean(results_character, 0)[ 1 ], -np.mean(results_character, 0)[ 2 ])
print("Std:", np.std(results_character, 0) / np.sqrt(iteration))
print("First:", -np.min(results_character[ : , 1 ]))
best_score = np.min(results_character[ : , 1 ])
results_character[ results_character[ : , 1 ] == best_score , 1 ] = 1e10
print("Second:", -np.min(results_character[ : , 1 ]))
second_best_score = np.min(results_character[ : , 1 ])
results_character[ results_character[ : , 1 ] == second_best_score, 1 ] = 1e10
print("Third:", -np.min(results_character[ : , 1 ]))
third_best_score = np.min(results_character[ : , 1 ])
# We print the best SMILES found by the character autoencoder
mols = []
for j in range(1, n_simulations + 1):
for i in range(iteration):
smiles = np.array(load_object('simulation{}/character/results/valid_smiles{}.dat'.format(j, i)))
scores = np.array(load_object('simulation{}/character/results/scores{}.dat'.format(j, i)))
if np.any(scores == best_score):
smile = smiles[ scores == best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("First:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
best_score = 1e10
if np.any(scores == second_best_score):
smile = smiles[ scores == second_best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("Second:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
second_best_score = 1e10
if np.any(scores == third_best_score):
smile = smiles[ scores == third_best_score ]
smile = np.array(smile).astype('str')[ 0 ]
print("Third:", smile)
mol = MolFromSmiles(smile)
mols.append(mol)
third_best_score = 1e10
img = Draw.MolsToGridImage(mols, molsPerRow = len(mols), subImgSize=(300, 300), useSVG=True)
with open("molecule_images/best_character_molecule.svg", "w") as text_file:
text_file.write(img)
| 34.72381 | 124 | 0.625206 | 978 | 7,292 | 4.507157 | 0.126789 | 0.057169 | 0.032668 | 0.024955 | 0.842559 | 0.803539 | 0.799456 | 0.769964 | 0.755898 | 0.738657 | 0 | 0.023981 | 0.23944 | 7,292 | 209 | 125 | 34.889952 | 0.770826 | 0.052935 | 0 | 0.622378 | 0 | 0 | 0.109816 | 0.076901 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013986 | false | 0 | 0.055944 | 0 | 0.076923 | 0.125874 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
608c48677c96388db33737cc0a430016c00fbb8d | 1,000 | py | Python | Phase_3/ds-k-nearest_neighbors-main/src/euclid.py | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
] | 1 | 2021-08-12T21:48:21.000Z | 2021-08-12T21:48:21.000Z | Phase_3/ds-k-nearest_neighbors-main/src/euclid.py | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
] | null | null | null | Phase_3/ds-k-nearest_neighbors-main/src/euclid.py | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
] | 20 | 2021-04-27T19:27:58.000Z | 2021-06-16T15:08:50.000Z | import pandas as pd
import numpy as np
def euclid(train_X, val_X):
"""
:param train_X: one record from the training set
(type series or dataframe including target (survived))
:param val_X: one record from the validation set
series or dataframe include target (survived)
    :return: the Euclidean distance between train_X and val_X
"""
diff = train_X - val_X
# Remove survived column
diff = diff.iloc[:, :-1]
dist = np.sqrt((diff ** 2).sum(axis=1))
return dist
def manhattan(train_X, val_X):
"""
:param train_X: one record from the training set
(type series or dataframe including target (survived))
:param val_X: one record from the validation set
series or dataframe include target (survived)
:return: the Manhattan distance between train_X and val_X
"""
diff = train_X - val_X
# Remove survived column
diff = diff.iloc[:, :-1]
    dist = np.abs(diff).sum(axis=1)  # Manhattan distance: sum of absolute differences, no square root
return dist
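# Usage sketch (illustrative names): given records whose last column is the
# "survived" target,
#   dists = euclid(train_X, val_X)
# yields the distance per row after that target column is dropped.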
| 27.027027 | 74 | 0.618 | 138 | 1,000 | 4.376812 | 0.311594 | 0.069536 | 0.059603 | 0.066225 | 0.824503 | 0.764901 | 0.764901 | 0.764901 | 0.764901 | 0.764901 | 0 | 0.007052 | 0.291 | 1,000 | 36 | 75 | 27.777778 | 0.844852 | 0.575 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60b8a20d80d44a1cb041da6833d2028e2e163f6b | 5,038 | py | Python | posthog/test/test_entity_model.py | alx-a/posthog | a76959bb2a7640ca8cf367a4d3a0e4ca67f65a5e | [
"MIT"
] | null | null | null | posthog/test/test_entity_model.py | alx-a/posthog | a76959bb2a7640ca8cf367a4d3a0e4ca67f65a5e | [
"MIT"
] | null | null | null | posthog/test/test_entity_model.py | alx-a/posthog | a76959bb2a7640ca8cf367a4d3a0e4ca67f65a5e | [
"MIT"
] | null | null | null | from django.test import TestCase
from posthog.models.entity import TREND_FILTER_TYPE_ACTIONS, TREND_FILTER_TYPE_EVENTS, Entity
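# These tests pin down Entity equality semantics: id and type must match,
# property order is irrelevant, and numeric property values compare by value
# (1.2 == 1.20) rather than by textual representation (1.2 != 1.2001).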
class TestEntity(TestCase):
def test_equality_with_ids(self):
entity1 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_ACTIONS})
entity2 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_ACTIONS})
self.assertTrue(entity1.equals(entity2))
entity2 = Entity({"id": "e2", "type": TREND_FILTER_TYPE_ACTIONS})
self.assertFalse(entity1.equals(entity2))
def test_equality_with_type(self):
entity1 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_EVENTS})
entity2 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_EVENTS})
self.assertTrue(entity1.equals(entity2))
entity1 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_EVENTS})
entity2 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_ACTIONS})
self.assertFalse(entity1.equals(entity2))
def test_equality_with_simple_properties(self):
entity1 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "email", "value": "test@posthog.com", "type": "person"},
{"key": "current_url", "value": "test@posthog.com", "type": "element"},
],
}
)
entity2 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "current_url", "value": "test@posthog.com", "type": "element"},
{"key": "email", "value": "test@posthog.com", "type": "person"},
],
}
)
self.assertTrue(entity1.equals(entity2))
entity2 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "current$url", "value": "test@posthog.com", "type": "element"},
{"key": "email", "value": "test@posthog.com", "type": "person"},
],
}
)
self.assertFalse(entity1.equals(entity2))
def test_equality_with_complex_operator_properties(self):
entity1 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "count", "operator": "lt", "value": 12, "type": "element"},
{"key": "email", "operator": "in", "value": ["a, b"], "type": "person"},
{"key": "selector", "value": [".btn"], "operator": "exact", "type": "element"},
{"key": "test_prop", "value": 1.2, "operator": "gt"},
],
}
)
entity2 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "test_prop", "value": 1.20, "operator": "gt"},
{"key": "count", "operator": "lt", "value": 12, "type": "element"},
{"key": "selector", "value": [".btn"], "operator": "exact", "type": "element"},
{"key": "email", "operator": "in", "value": ["a, b"], "type": "person"},
],
}
)
self.assertTrue(entity1.equals(entity2))
# playing with decimals
entity2 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "test_prop", "value": 1.200, "operator": "gt"},
{"key": "count", "operator": "lt", "value": 12, "type": "element"},
{"key": "selector", "value": [".btn"], "operator": "exact", "type": "element"},
{"key": "email", "operator": "in", "value": ["a, b"], "type": "person"},
],
}
)
self.assertTrue(entity1.equals(entity2))
entity2 = Entity(
{
"id": "e1",
"type": TREND_FILTER_TYPE_EVENTS,
"properties": [
{"key": "test_prop", "value": 1.2001, "operator": "gt"},
{"key": "count", "operator": "lt", "value": 12, "type": "element"},
{"key": "selector", "value": [".btn"], "operator": "exact", "type": "element"},
{"key": "email", "operator": "in", "value": ["a, b"], "type": "person"},
],
}
)
self.assertFalse(entity1.equals(entity2))
def test_equality_with_old_style_and_new_style_properties(self):
entity1 = Entity({"id": "e1", "type": TREND_FILTER_TYPE_EVENTS, "properties": {"key": "value"}})
entity2 = Entity(
{"id": "e1", "type": TREND_FILTER_TYPE_EVENTS, "properties": [{"key": "key", "value": "value"},]}
)
self.assertTrue(entity1.equals(entity2))
| 38.166667 | 109 | 0.470623 | 455 | 5,038 | 5.028571 | 0.136264 | 0.086538 | 0.118007 | 0.132867 | 0.888986 | 0.853147 | 0.853147 | 0.853147 | 0.815559 | 0.746941 | 0 | 0.022451 | 0.345772 | 5,038 | 131 | 110 | 38.458015 | 0.671723 | 0.004168 | 0 | 0.626168 | 0 | 0 | 0.212363 | 0 | 0 | 0 | 0 | 0 | 0.093458 | 1 | 0.046729 | false | 0 | 0.018692 | 0 | 0.074766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60c7aa21b3becb1e4a441128f6d43ba3e4e7eaa3 | 49,109 | py | Python | tests/test_services.py | gustavofonseca/multiverse | 1d98f9374a92cdce3c198518d6f70a010f5abc67 | [
"BSD-2-Clause"
] | 6 | 2018-12-05T15:52:13.000Z | 2019-04-18T14:14:32.000Z | tests/test_services.py | gustavofonseca/multiverse | 1d98f9374a92cdce3c198518d6f70a010f5abc67 | [
"BSD-2-Clause"
] | 117 | 2018-09-03T21:13:30.000Z | 2019-09-26T19:16:24.000Z | tests/test_services.py | gustavofonseca/multiverse | 1d98f9374a92cdce3c198518d6f70a010f5abc67 | [
"BSD-2-Clause"
] | 9 | 2018-12-05T14:01:30.000Z | 2019-07-04T17:34:08.000Z | import os
import unittest
from unittest import mock
import datetime
import random
from bson.objectid import ObjectId
from documentstore import services, exceptions, domain
from . import apptesting
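# Helper: build the service handlers against a fresh apptesting.Session with no
# event subscribers, so each test case starts from isolated state.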
def make_services():
session = apptesting.Session()
return services.get_handlers(lambda: session, subscribers=[]), session
class CommandTestMixin:
SUBSCRIBERS_EVENTS = [subscriber[0] for subscriber in services.DEFAULT_SUBSCRIBERS]
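    # Shared expectations: every command must be callable, and commands that
    # emit events must use an event wired into the default subscribers list.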
def test_command_interface(self):
self.assertIsNotNone(self.command)
self.assertTrue(callable(self.command))
class CreateDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("create_documents_bundle")
self.event = services.Events.DOCUMENTSBUNDLE_CREATED
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_success(self):
self.assertIsNone(self.command(id="xpto"))
def test_command_with_documents_success(self):
self.assertIsNone(
self.command(id="xpto", docs=[{"id": "/document/1"}, {"id": "/document/2"}])
)
def test_command_with_metadata_success(self):
self.assertIsNone(
self.command(
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
)
def test_command_raises_exception_if_already_exists(self):
self.command(id="xpto")
self.assertRaises(exceptions.AlreadyExists, self.command, id="xpto")
def test_command_notify_event(self):
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="xpto", docs=[{"id": "/document/1"}])
mock_notify.assert_called_once_with(
self.event,
{
"id": "xpto",
"docs": [{"id": "/document/1"}],
"metadata": None,
"instance": mock.ANY,
},
)
class FetchDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("fetch_documents_bundle")
datetime_patcher = mock.patch.object(
domain, "datetime", mock.Mock(wraps=datetime.datetime)
)
mocked_datetime = datetime_patcher.start()
mocked_datetime.utcnow.return_value = datetime.datetime(
2018, 8, 5, 22, 33, 49, 795151
)
self.addCleanup(datetime_patcher.stop)
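        # Freezing utcnow keeps the domain timestamps deterministic for this test.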
def test_command_raises_exception_if_does_not_exist(self):
self.assertRaises(exceptions.DoesNotExist, self.command, id="xpto")
def test_command_success(self):
self.services["create_documents_bundle"](id="xpto")
result = self.command(id="xpto")
self.assertEqual(result["id"], "xpto")
def test_command_with_documents_success(self):
self.services["create_documents_bundle"](
id="xpto", docs=[{"id": "/document/1"}, {"id": "/document/2"}]
)
result = self.command(id="xpto")
self.assertEqual(
result["items"], [{"id": "/document/1"}, {"id": "/document/2"}]
)
def test_command_with_metadata_success(self):
self.services["create_documents_bundle"](
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
result = self.command(id="xpto")
self.assertEqual(
result["metadata"], {"publication_year": "2018", "volume": "2"}
)
def test_command_with_unexpected_metadata(self):
self.services["create_documents_bundle"](
id="xpto",
metadata={"publication_year": "2018", "volume": "2", "unknown": "0"},
)
result = self.command(id="xpto")
self.assertEqual(
result["metadata"], {"publication_year": "2018", "volume": "2"}
)
class UpdateDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("update_documents_bundle_metadata")
self.event = services.Events.DOCUMENTSBUNDLE_METATADA_UPDATED
datetime_patcher = mock.patch.object(
domain, "datetime", mock.Mock(wraps=datetime.datetime)
)
mocked_datetime = datetime_patcher.start()
mocked_datetime.utcnow.side_effect = lambda: (
datetime.datetime(2018, 8, 5, 22, 33, 49, random.randint(1, 1000000))
)
self.addCleanup(datetime_patcher.stop)
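        # Random microseconds make successive utcnow() calls return (almost
        # certainly) distinct timestamps, so repeated metadata updates do not collide.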
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_does_not_exist(self):
self.assertRaises(exceptions.DoesNotExist, self.command, id="xpto", metadata={})
def test_command_success(self):
self.services["create_documents_bundle"](
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
self.command(id="xpto", metadata={"publication_year": "2019"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(
result["metadata"], {"publication_year": "2019", "volume": "2"}
)
def test_command_with_unexpected_metadata(self):
self.services["create_documents_bundle"](
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
self.command(id="xpto", metadata={"unknown": "0"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(
result["metadata"], {"publication_year": "2018", "volume": "2"}
)
def test_command_remove_metadata(self):
"""
        For now, the way to remove a metadata field is to assign an empty string
        to it. Note that this procedure will not remove the field from the
        manifest.
"""
self.services["create_documents_bundle"](
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
self.command(id="xpto", metadata={"volume": ""})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(result["metadata"], {"publication_year": "2018", "volume": ""})
def test_command_notify_event(self):
self.services["create_documents_bundle"](
id="xpto", metadata={"publication_year": "2018", "volume": "2"}
)
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="xpto", metadata={"publication_year": "2019"})
mock_notify.assert_called_once_with(
self.event,
{
"id": "xpto",
"metadata": {"publication_year": "2019"},
"instance": mock.ANY,
},
)
class AddDocumentToDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("add_document_to_documents_bundle")
self.event = services.Events.DOCUMENT_ADDED_TO_DOCUMENTSBUNDLE
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist, self.command, id="xpto", doc="/document/1"
)
def test_command_success(self):
self.services["create_documents_bundle"](id="xpto")
self.command(id="xpto", doc={"id": "/document/1"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(result["items"], [{"id": "/document/1"}])
self.command(id="xpto", doc={"id": "/document/2"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(
result["items"], [{"id": "/document/1"}, {"id": "/document/2"}]
)
def test_command_raises_exception_if_already_exists(self):
self.services["create_documents_bundle"](
id="xpto", docs=[{"id": "/document/1"}]
)
self.assertRaises(
exceptions.AlreadyExists, self.command, id="xpto", doc={"id": "/document/1"}
)
def test_command_notify_event(self):
self.services["create_documents_bundle"](id="xpto")
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="xpto", doc={"id": "/document/1"})
mock_notify.assert_called_once_with(
self.event,
{"id": "xpto", "doc": {"id": "/document/1"}, "instance": mock.ANY},
)
class InsertDocumentToDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("insert_document_to_documents_bundle")
self.event = services.Events.DOCUMENT_INSERTED_TO_DOCUMENTSBUNDLE
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="xpto",
index=0,
doc={"id": "/document/1"},
)
def test_command_success(self):
self.services["create_documents_bundle"](id="xpto")
self.command(id="xpto", index=1, doc={"id": "/document/1"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(result["items"], [{"id": "/document/1"}])
self.command(id="xpto", index=0, doc={"id": "/document/2"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(
result["items"], [{"id": "/document/2"}, {"id": "/document/1"}]
)
self.command(id="xpto", index=10, doc={"id": "/document/3"})
result = self.services["fetch_documents_bundle"](id="xpto")
self.assertEqual(
result["items"],
[{"id": "/document/2"}, {"id": "/document/1"}, {"id": "/document/3"}],
)
def test_command_raises_exception_if_already_exists(self):
self.services["create_documents_bundle"](
id="xpto", docs=[{"id": "/document/1"}, {"id": "/document/2"}]
)
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="xpto",
index=0,
doc={"id": "/document/1"},
)
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="xpto",
index=1,
doc={"id": "/document/1"},
)
def test_command_notify_event(self):
self.services["create_documents_bundle"](id="xpto")
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="xpto", index=10, doc={"id": "/document/3"})
mock_notify.assert_called_once_with(
self.event,
{
"id": "xpto",
"doc": {"id": "/document/3"},
"index": 10,
"instance": mock.ANY,
},
)
class UpdateDocumentInDocumentsBundleTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("update_documents_in_documents_bundle")
self.event = services.Events.ISSUE_DOCUMENTS_UPDATED
create_documents_bundle_command = self.services.get("create_documents_bundle")
create_documents_bundle_command(id="issue-example-id")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_raises_does_not_exists_if_journal_not_found(self):
self.assertRaises(
exceptions.DoesNotExist, self.command, id="not-found-issue", docs=[]
)
def test_issues_list_should_be_updated(self):
with mock.patch.object(self.session.documents_bundles, "fetch") as mock_fetch:
DocumentsBundleStub = mock.Mock(spec=domain.DocumentsBundle)
DocumentsBundleStub.documents = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
DocumentsBundleStub.add_document = mock.Mock()
DocumentsBundleStub.remove_document = mock.Mock()
mock_fetch.return_value = DocumentsBundleStub
self.command(id="issue-example-id", docs=["d"])
DocumentsBundleStub.remove_document.assert_has_calls(
[mock.call("a"), mock.call("b"), mock.call("c")]
)
DocumentsBundleStub.add_document.assert_called_once_with("d")
def test_raises_already_exists_if_duplicated_are_in_list(self):
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="issue-example-id",
docs=[{"id": "a"}, {"id": "a"}, {"id": "b"}, {"id": "a"}, {"id": "b"}],
)
def test_should_call_update_issue(self):
with mock.patch.object(self.session.documents_bundles, "update") as mock_update:
self.command(id="issue-example-id", docs=[{"id": "a"}])
mock_update.assert_called_once()
def test_should_empty_bundle_document(self):
with mock.patch.object(self.session.documents_bundles, "fetch") as mock_fetch:
DocumentsBundleStub = mock.Mock(spec=domain.DocumentsBundle)
DocumentsBundleStub.documents = [{"id": "a"}]
DocumentsBundleStub.add_document = mock.Mock()
DocumentsBundleStub.remove_document = mock.Mock()
mock_fetch.return_value = DocumentsBundleStub
self.command(id="issue-example-id", docs=[])
DocumentsBundleStub.remove_document.assert_has_calls([mock.call("a")])
DocumentsBundleStub.add_document.assert_not_called()
def test_command_notify_event(self):
with mock.patch.object(self.session.documents_bundles, "fetch") as mock_fetch:
DocumentsBundleStub = mock.Mock(spec=domain.DocumentsBundle)
DocumentsBundleStub.documents = []
mock_fetch.return_value = DocumentsBundleStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="issue-example-id", docs=[{"id": "a"}])
mock_notify.assert_called_once_with(
self.event,
{
"instance": DocumentsBundleStub,
"id": "issue-example-id",
"docs": [{"id": "a"}],
},
)
class CreateJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("create_journal")
self.event = services.Events.JOURNAL_CREATED
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_success(self):
self.assertIsNone(self.command(id="xpto"))
def test_command_with_metadata_success(self):
self.assertIsNone(
self.command(
id="xpto",
metadata={
"title": "Journal Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
],
},
)
)
def test_command_raises_exception_if_already_exists(self):
self.command(id="xpto")
self.assertRaises(exceptions.AlreadyExists, self.command, id="xpto")
def test_command_notify_event(self):
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="jxpto")
mock_notify.assert_called_once_with(
self.event, {"id": "jxpto", "instance": mock.ANY, "metadata": None}
)
class AddIssueToJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("add_issue_to_journal")
self.event = services.Events.ISSUE_ADDED_TO_JOURNAL
create_journal_command = self.services.get("create_journal")
create_journal_command(id="0034-8910-rsp")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_calls_add_issue(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.add_issue = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"})
JournalStub.add_issue.assert_called_once_with({"id": "0034-8910-rsp-48-2"})
def test_command_update_journals(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.add_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"})
mock_update.assert_called_once_with(JournalStub)
def test_command_success(self):
self.assertIsNone(
self.command(id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"})
)
def test_command_raises_exception_if_journal_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="0101-8910-csp",
issue="0101-8910-csp-48-2",
)
def test_command_raises_exception_if_issue_already_exists(self):
self.command(id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"})
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="0034-8910-rsp",
issue={"id": "0034-8910-rsp-48-2"},
)
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.insert_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"})
mock_notify.assert_called_once_with(
self.event,
{
"instance": JournalStub,
"id": "0034-8910-rsp",
"issue": {"id": "0034-8910-rsp-48-2"},
},
)
class InsertIssueToJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("insert_issue_to_journal")
self.event = services.Events.ISSUE_INSERTED_TO_JOURNAL
create_journal_command = self.services.get("create_journal")
create_journal_command(id="0034-8910-rsp")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_journal_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="0101-8910-csp",
index=0,
issue={"id": "0101-8910-csp-48-2"},
)
def test_command_calls_insert_issue(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.insert_issue = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(
id="0034-8910-rsp", index=0, issue={"id": "0034-8910-rsp-48-2"}
)
JournalStub.insert_issue.assert_called_once_with(
0, {"id": "0034-8910-rsp-48-2"}
)
def test_command_update_journals(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.insert_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(
id="0034-8910-rsp", index=0, issue={"id": "0034-8910-rsp-48-2"}
)
mock_update.assert_called_once_with(JournalStub)
def test_command_success(self):
self.assertIsNone(
self.command(
id="0034-8910-rsp", index=0, issue={"id": "0034-8910-rsp-48-2"}
)
)
self.assertIsNone(
self.command(
id="0034-8910-rsp", index=10, issue={"id": "0034-8910-rsp-48-3"}
)
)
self.assertIsNone(
self.command(
id="0034-8910-rsp", index=-1, issue={"id": "0034-8910-rsp-48-4"}
)
)
def test_command_raises_exception_if_issue_already_exists(self):
self.command(id="0034-8910-rsp", index=0, issue={"id": "0034-8910-rsp-48-2"})
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="0034-8910-rsp",
index=0,
issue={"id": "0034-8910-rsp-48-2"},
)
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="0034-8910-rsp",
index=5,
issue={"id": "0034-8910-rsp-48-2"},
)
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.insert_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(
id="0034-8910-rsp", index=0, issue={"id": "0034-8910-rsp-48-2"}
)
mock_notify.assert_called_once_with(
self.event,
{
"instance": JournalStub,
"id": "0034-8910-rsp",
"index": 0,
"issue": {"id": "0034-8910-rsp-48-2"},
},
)
class RemoveIssueFromJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("remove_issue_from_journal")
self.event = services.Events.ISSUE_REMOVED_FROM_JOURNAL
create_journal_command = self.services.get("create_journal")
create_journal_command(id="0034-8910-rsp")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_journal_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="0101-8910-csp",
issue="0101-8910-csp-48-2",
)
def test_command_calls_remove_issue(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_issue = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="0034-8910-rsp", issue="0034-8910-rsp-48-2")
JournalStub.remove_issue.assert_called_once_with("0034-8910-rsp-48-2")
def test_command_update_journals(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(id="0034-8910-rsp", issue="0034-8910-rsp-48-2")
mock_update.assert_called_once_with(JournalStub)
def test_command_success(self):
self.services.get("add_issue_to_journal")(
id="0034-8910-rsp", issue={"id": "0034-8910-rsp-48-2"}
)
self.assertIsNone(self.command(id="0034-8910-rsp", issue="0034-8910-rsp-48-2"))
def test_command_raises_exception_if_issue_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="0034-8910-rsp",
issue="0034-8910-rsp-48-2",
)
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_issue = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="0034-8910-rsp", issue="0034-8910-rsp-48-2")
mock_notify.assert_called_once_with(
self.event,
{
"instance": JournalStub,
"id": "0034-8910-rsp",
"issue": "0034-8910-rsp-48-2",
},
)
class UpdateIssuesInJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("update_issues_in_journal")
self.event = services.Events.JOURNAL_ISSUES_UPDATED
create_journal_command = self.services.get("create_journal")
create_journal_command(id="journal-example-id")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_raises_does_not_exists_if_journal_not_found(self):
self.assertRaises(
exceptions.DoesNotExist, self.command, id="not-found-journal", issues=[]
)
def test_issues_list_should_be_updated(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.issues = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
JournalStub.add_issue = mock.Mock()
JournalStub.remove_issue = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="journal-example-id", issues=["d"])
JournalStub.remove_issue.assert_has_calls(
[mock.call("a"), mock.call("b"), mock.call("c")]
)
JournalStub.add_issue.assert_called_once_with("d")
def test_raises_already_exists_if_duplicated_are_in_list(self):
self.assertRaises(
exceptions.AlreadyExists,
self.command,
id="journal-example-id",
issues=[{"id": "a"}, {"id": "a"}, {"id": "b"}, {"id": "a"}, {"id": "b"}],
)
def test_should_call_update_journal(self):
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(id="journal-example-id", issues=[{"id": "a"}])
mock_update.assert_called_once()
def test_should_empty_journal_issues(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.issues = [{"id": "a"}]
JournalStub.add_issue = mock.Mock()
JournalStub.remove_issue = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="journal-example-id", issues=[])
JournalStub.remove_issue.assert_has_calls([mock.call("a")])
JournalStub.add_issue.assert_not_called()
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.issues = []
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="journal-example-id", issues=[{"id": "a"}])
mock_notify.assert_called_once_with(
self.event,
{
"instance": JournalStub,
"id": "journal-example-id",
"issues": [{"id": "a"}],
},
)
class SetAheadOfPrintBundleToJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("set_ahead_of_print_bundle_to_journal")
self.event = services.Events.AHEAD_OF_PRINT_BUNDLE_SET_TO_JOURNAL
create_journal_command = self.services.get("create_journal")
create_journal_command(id="0034-8910-rsp")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_journal_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist,
self.command,
id="0101-8910-csp",
aop="0101-8910-csp-aop",
)
def test_command_calls_ahead_of_print_bundle(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="0034-8910-rsp", aop="0034-8910-rsp-aop")
self.assertEqual(JournalStub.ahead_of_print_bundle, "0034-8910-rsp-aop")
def test_command_update_journals(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(id="0034-8910-rsp", aop="0034-8910-rsp-aop")
mock_update.assert_called_once_with(JournalStub)
def test_command_success(self):
self.assertIsNone(self.command(id="0034-8910-rsp", aop="0034-8910-rsp-aop"))
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="0034-8910-rsp", aop="0034-8910-rsp-aop")
mock_notify.assert_called_once_with(
self.event,
{
"instance": JournalStub,
"id": "0034-8910-rsp",
"aop": "0034-8910-rsp-aop",
},
)
class RemoveAheadOfPrintBundleFromJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("remove_ahead_of_print_bundle_from_journal")
self.event = services.Events.AHEAD_OF_PRINT_BUNDLE_REMOVED_FROM_JOURNAL
create_journal_command = self.services.get("create_journal")
create_journal_command(id="0034-8910-rsp")
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_journal_does_not_exist(self):
self.assertRaises(exceptions.DoesNotExist, self.command, id="0101-8910-csp")
def test_command_calls_remove_ahead_of_print(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
self.command(id="0034-8910-rsp")
JournalStub.remove_ahead_of_print_bundle.assert_called_once_with()
def test_command_update_journals(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session.journals, "update") as mock_update:
self.command(id="0034-8910-rsp")
mock_update.assert_called_once_with(JournalStub)
def test_command_raises_exception_if_ahead_of_print_does_not_exist(self):
self.assertRaises(exceptions.DoesNotExist, self.command, id="0034-8910-rsp")
def test_command_notify_event(self):
with mock.patch.object(self.session.journals, "fetch") as mock_fetch:
JournalStub = mock.Mock(spec=domain.Journal)
JournalStub.remove_ahead_of_print_bundle = mock.Mock()
mock_fetch.return_value = JournalStub
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="0034-8910-rsp")
mock_notify.assert_called_once_with(
self.event, {"instance": JournalStub, "id": "0034-8910-rsp"}
)
class FetchJournalTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("fetch_journal")
create_journal_command = self.services.get("create_journal")
create_journal_command(id="1678-4596-cr-49-02")
def test_should_raise_does_not_exists_exception(self):
self.assertRaises(
exceptions.DoesNotExist, self.command, id="1678-4596-cr-49-03"
)
def test_should_return_a_journal(self):
self.assertIsNotNone(self.command(id="1678-4596-cr-49-02"))
def test_should_require_an_id(self):
self.assertRaises(TypeError, self.command)
class UpdateJournalMetadataTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services.get("update_journal_metadata")
self.event = services.Events.JOURNAL_METATADA_UPDATED
self.services["create_journal"](
id="1678-4596-cr",
metadata={
"title": "Journal Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
],
},
)
def test_event(self):
self.assertIn(self.event, self.SUBSCRIBERS_EVENTS)
def test_command_raises_exception_if_does_not_exist(self):
self.session.journals.fetch = mock.Mock(side_effect=exceptions.DoesNotExist)
self.assertRaises(
exceptions.DoesNotExist, self.command, id="1678-4596-cr", metadata={}
)
def test_command_success(self):
self.command(
id="1678-4596-cr",
metadata={
"title": "Journal New Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
{"language": "es", "value": "Misión de la Revista"},
],
},
)
result = self.services["fetch_journal"](id="1678-4596-cr")
self.assertEqual(
result["metadata"],
{
"title": "Journal New Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
{"language": "es", "value": "Misión de la Revista"},
],
},
)
def test_command_with_unexpected_metadata(self):
self.command(
id="1678-4596-cr",
metadata={
"unknown": "0",
"title": "Journal New Title",
"title_iso": "Title ISO",
},
)
result = self.services["fetch_journal"](id="1678-4596-cr")
self.assertEqual(
result["metadata"],
{
"title": "Journal New Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
],
"title_iso": "Title ISO",
},
)
def test_command_remove_metadata(self):
"""
        For now, the way to remove a metadata field is to assign an empty string
        to it. Note that this procedure will not remove the field from the
        manifest.
"""
self.command(id="1678-4596-cr", metadata={"title": ""})
result = self.services["fetch_journal"](id="1678-4596-cr")
self.assertEqual(
result["metadata"],
{
"title": "",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
],
},
)
def test_command_notify_event(self):
metadata = {
"title": "Journal New Title",
"mission": [
{"language": "pt", "value": "Missão do Periódico"},
{"language": "en", "value": "Journal Mission"},
{"language": "es", "value": "Misión de la Revista"},
],
}
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(id="1678-4596-cr", metadata=metadata)
mock_notify.assert_called_once_with(
self.event,
{"id": "1678-4596-cr", "metadata": metadata, "instance": mock.ANY},
)
class RegisterRenditionVersionTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services["register_rendition_version"]
self.event = services.Events.RENDITION_VERSION_REGISTERED
self.document = domain.Document(manifest=apptesting.manifest_data_fixture())
self.session.documents.add(self.document)
def test_register_rendition_version_returns_none(self):
self.assertIsNone(
self.command(
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
)
def test_register_duplicated_rendition_version_raises_error(self):
self.command(
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
self.assertRaises(
exceptions.VersionAlreadySet,
self.command,
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
def test_register_new_rendition_version(self):
"""Qualquer diferença em qualquer campo é suficiente para que seja
considerada uma nova versão válida.
"""
self.command(
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
self.assertIsNone(
self.command(
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt-v2.pdf",
"application/pdf",
"pt",
23456,
)
)
def test_command_notify_event(self):
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
mock_notify.assert_called_once_with(
self.event,
{
"instance": mock.ANY,
"id": self.document.id(),
"filename": "0034-8910-rsp-48-2-0275-pt.pdf",
"data_url": "/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"mimetype": "application/pdf",
"lang": "pt",
"size_bytes": 23456,
},
)
class FetchDocumentRenditionsTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services["fetch_document_renditions"]
self.document = domain.Document(manifest=apptesting.manifest_data_fixture())
self.session.documents.add(self.document)
def test_fetch_rendition(self):
self.services["register_rendition_version"](
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
renditions = self.command(self.document.id())
self.assertEqual(len(renditions), 1)
self.assertEqual(
renditions[0]["url"],
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
)
def test_fetch_latest_version(self):
self.services["register_rendition_version"](
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
self.services["register_rendition_version"](
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/8ca9f9c1397cc/0035-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
234567,
)
renditions = self.command(self.document.id())
self.assertEqual(len(renditions), 1)
self.assertEqual(
renditions[0]["url"],
"/rawfiles/8ca9f9c1397cc/0035-8910-rsp-48-2-0275-pt.pdf",
)
def test_fetch_version_at(self):
self.services["register_rendition_version"](
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
23456,
)
        now = services.utcnow()[:-8] + "Z"  # truncated to seconds
datetime_patcher = mock.patch.object(
domain, "datetime", mock.Mock(wraps=datetime.datetime)
)
mocked_datetime = datetime_patcher.start()
        # make the next registered version's timestamp fall in the following year
mocked_datetime.utcnow.return_value = datetime.datetime(
datetime.date.today().year + 1, 8, 5, 22, 34, 49, 795151
)
self.addCleanup(datetime_patcher.stop)
self.services["register_rendition_version"](
self.document.id(),
"0034-8910-rsp-48-2-0275-pt.pdf",
"/rawfiles/8ca9f9c1397cc/0035-8910-rsp-48-2-0275-pt.pdf",
"application/pdf",
"pt",
234567,
)
renditions = self.command(self.document.id(), version_at=now)
self.assertEqual(len(renditions), 1)
self.assertEqual(
renditions[0]["url"],
"/rawfiles/7ca9f9b2687cb/0034-8910-rsp-48-2-0275-pt.pdf",
)
class DeleteDocumentTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services["delete_document"]
self.document = domain.Document(manifest=apptesting.manifest_data_fixture())
self.session.documents.add(self.document)
self.event = services.Events.DOCUMENT_DELETED
def test_delete_document_returns_none(self):
self.assertIsNone(self.command(self.document.id()))
def test_raises_when_document_does_not_exist(self):
self.assertRaises(
exceptions.DoesNotExist, self.command, "inexistent-document-id"
)
def test_command_notify_event(self):
with mock.patch.object(self.session, "notify") as mock_notify:
self.command(self.document.id())
mock_notify.assert_called_once_with(
self.event, {"instance": mock.ANY, "id": self.document.id()}
)
class FetchChangeTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.change_id = str(ObjectId())
self.session.changes.add(
{
"_id": self.change_id,
"timestamp": "2018-08-05T23:08:50.331687Z",
"entity": "Document",
"id": "S0034-89102014000200347",
"content_gz": '{"hello": "world"}',
"content_type": "application/json",
}
)
self.command = self.services.get("fetch_change")
def test_should_raise_does_not_exists_exception(self):
self.assertRaises(exceptions.DoesNotExist, self.command, id="missing-change")
def test_should_return_a_change(self):
self.assertIsNotNone(self.command(id=self.change_id))
def test_should_require_an_id(self):
self.assertRaises(TypeError, self.command)
class RegisterDocumentVersionTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.manifest = {
"id": "0034-8910-rsp-48-2-0347",
"versions": [
{
"data": "https://url.to/0034-8910-rsp-48-2-0347.xml",
"assets": {
"0034-8910-rsp-48-2-0347-gf01": [
[
"2018-08-05T23:03:44.971230Z",
"http://www.scielo.br/img/revistas/rsp/v48n2/0034-8910-rsp-48-2-0347-gf01.jpg",
],
],
},
"renditions": [],
},
],
}
self.doc = domain.Document(manifest=self.manifest)
self.session.documents.add(self.doc)
self.command = self.services["register_document_version"]
def test_swollows_VersionAlreadySet_exception_for_assets(self):
with mock.patch("documentstore.domain.requests.get") as mock_request:
with open(
os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"0034-8910-rsp-48-2-0347.xml",
)
) as fixture:
mock_request.return_value.content = fixture.read().encode("utf-8")
assets = self.doc.version()["assets"]
self.assertIsNone(
self.command(
id=self.doc.id(),
data_url="https://url.to.new/0034-8910-rsp-48-2-0347.xml",
assets=assets,
)
)
class FetchDocumentFrontTest(CommandTestMixin, unittest.TestCase):
def setUp(self):
self.services, self.session = make_services()
self.command = self.services["sanitize_document_front"]
with open(
os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"0034-8910-rsp-48-2-0347.xml",
),
"rb"
) as fixture:
self.data = fixture.read()
def test_call_returns_display_format(self):
expected = {
'article_title': {
"en": """Proposal for a telehealth concept in the translational research model""",
"pt": """Proposta conceitual de telessaúde no modelo da pesquisa translacional""",
}
}
result = self.command(self.data)
self.assertEqual(expected, result['display_format'])
| 39.796596 | 111 | 0.58867 | 5,311 | 49,109 | 5.258143 | 0.065148 | 0.049631 | 0.040572 | 0.033052 | 0.880148 | 0.8621 | 0.843121 | 0.824321 | 0.80613 | 0.779524 | 0 | 0.050963 | 0.283186 | 49,109 | 1,233 | 112 | 39.828873 | 0.742344 | 0.010263 | 0 | 0.631879 | 0 | 0.002846 | 0.160073 | 0.051791 | 0 | 0 | 0 | 0 | 0.112903 | 1 | 0.119545 | false | 0 | 0.00759 | 0 | 0.149905 | 0.012334 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60d27bc7cc5914cc9c25adb19669b05a9e543193 | 7,879 | py | Python | tests/test_fill.py | mgesteiro/pyubx2 | 02fd8fa2863b88ed2d746b5800717a1b6b213181 | [
"BSD-3-Clause"
] | null | null | null | tests/test_fill.py | mgesteiro/pyubx2 | 02fd8fa2863b88ed2d746b5800717a1b6b213181 | [
"BSD-3-Clause"
] | null | null | null | tests/test_fill.py | mgesteiro/pyubx2 | 02fd8fa2863b88ed2d746b5800717a1b6b213181 | [
"BSD-3-Clause"
] | null | null | null | '''
Created on 21 Oct 2020
Fill method tests for pyubx2.UBXMessage
@author: semuadmin
'''
# pylint: disable=line-too-long, invalid-name, missing-docstring, no-member
import unittest
from pyubx2 import UBXMessage, SET, POLL
class FillTest(unittest.TestCase):
def setUp(self):
self.maxDiff = None
def tearDown(self):
pass
def testFill_CFGMSG(self): # test POLL constructor fill, format 1
EXPECTED_RESULT = "<UBX(CFG-MSG, msgClass=NMEA-Standard, msgID=VTG)>"
res = UBXMessage('CFG', 'CFG-MSG', POLL, msgClass=240, msgID=5)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGMSG2(self): # test POLL constructor fill, format 2
EXPECTED_RESULT = "<UBX(CFG-MSG, msgClass=NMEA-Standard, msgID=VTG)>"
res = UBXMessage(b'\x06', b'\x01', POLL, msgClass=240, msgID=5)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGMSG3(self): # test POLL constructor fill, format 3
EXPECTED_RESULT = "<UBX(CFG-MSG, msgClass=NMEA-Standard, msgID=VTG)>"
res = UBXMessage(6, 1, POLL, msgClass=240, msgID=5)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGMSG4(self): # test SET constructor fill
EXPECTED_RESULT = "<UBX(CFG-MSG, msgClass=NMEA-Standard, msgID=GLL, rateDDC=0, rateUART1=1, rateUART2=0, rateUSB=1, rateSPI=0, reserved=0)>"
res = UBXMessage('CFG', 'CFG-MSG', SET, msgClass=240, msgID=1, rateUART1=1, rateUSB=1)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGNMEA(self): # test SET constructor fill, set all values
EXPECTED_RESULT = "<UBX(CFG-NMEA, filter=b'E', nmeaVersion=4.0, numSV=4, flags=b'\\x14', gnssToFilter=b'\\x00\\x00\\x00\\x00', svNumbering=0, mainTalkerId=0, gsvTalkerId=0, version=0, bdsTalkerId=b'\\x00\\x00', reserved1=0)>"
res = UBXMessage('CFG', 'CFG-NMEA', SET, filter=b'\x45', nmeaVersion=64, numSV=4, flags=b'\x14')
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGNMEA2(self): # test SET constructor fill, set some values, default others
EXPECTED_RESULT = "<UBX(CFG-NMEA, filter=b'\\x00', nmeaVersion=2.3, numSV=1, flags=b'\\x00', gnssToFilter=b'\\x00\\x00\\x00\\x00', svNumbering=0, mainTalkerId=0, gsvTalkerId=0, version=0, bdsTalkerId=b'\\x00\\x00', reserved1=0)>"
res = UBXMessage('CFG', 'CFG-NMEA', SET, nmeaVersion=35, numSV=1)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGNMEAPARSE(self): # check that raw payload is correctly populated and parses back to original message
EXPECTED_RESULT = "<UBX(CFG-NMEA, filter=b'\\x00', nmeaVersion=2.3, numSV=1, flags=b'\\x00', gnssToFilter=b'\\x00\\x00\\x00\\x00', svNumbering=0, mainTalkerId=0, gsvTalkerId=0, version=0, bdsTalkerId=b'\\x00\\x00', reserved1=0)>"
res = UBXMessage('CFG', 'CFG-NMEA', SET, nmeaVersion=35, numSV=1)
res2 = UBXMessage.parse(res.serialize())
self.assertEqual(str(res2), EXPECTED_RESULT)
def testFill_CFGNMEAPOLL(self): # test POLL constructor, no payload
EXPECTED_RESULT = "<UBX(CFG-NMEA)>"
res = UBXMessage('CFG', 'CFG-NMEA', POLL)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGNMEAPOLL2(self): # test POLL constructor, no payload
EXPECTED_RESULT = "<UBX(CFG-NMEA)>"
res = UBXMessage('CFG', 'CFG-NMEA', POLL)
res2 = UBXMessage.parse(res.serialize())
self.assertEqual(str(res2), EXPECTED_RESULT)
def testFill_CFGDOSC(self): # multiple repeats in group
EXPECTED_RESULT = "<UBX(CFG-DOSC, version=23, numOsc=2, reserved1=0, oscId_01=4, reserved2_01=0, flags_01=b'\\x00\\x00', freq_01=22, phaseOffset_01=0, withTemp_01=0, withAge_01=0, timeToTemp_01=0, reserved3_01=0, gainVco_01=0, gainUncertainty_01=0, reserved4_01=0, oscId_02=7, reserved2_02=0, flags_02=b'\\x00\\x00', freq_02=44, phaseOffset_02=0, withTemp_02=0, withAge_02=0, timeToTemp_02=0, reserved3_02=0, gainVco_02=0, gainUncertainty_02=0, reserved4_02=0)>"
res = UBXMessage('CFG', 'CFG-DOSC', SET, version=23, numOsc=2, oscId_01=4, freq_01=22, oscId_02=7, freq_02=44)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGDOSC1(self): # single repeat in group
EXPECTED_RESULT = "<UBX(CFG-DOSC, version=37, numOsc=1, reserved1=0, oscId_01=8, reserved2_01=0, flags_01=b'\\x00\\x00', freq_01=53, phaseOffset_01=26, withTemp_01=0, withAge_01=0, timeToTemp_01=0, reserved3_01=0, gainVco_01=4, gainUncertainty_01=123, reserved4_01=0)>"
res = UBXMessage('CFG', 'CFG-DOSC', SET, version=37, numOsc=1, oscId_01=8, freq_01=53, phaseOffset_01=26, gainVco_01=4, gainUncertainty_01=123)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGDOSCPARSE(self): # check that raw payload is correctly populated and parses back to original message
EXPECTED_RESULT = "<UBX(CFG-DOSC, version=37, numOsc=1, reserved1=0, oscId_01=8, reserved2_01=0, flags_01=b'\\x00\\x00', freq_01=53, phaseOffset_01=26, withTemp_01=0, withAge_01=0, timeToTemp_01=0, reserved3_01=0, gainVco_01=4, gainUncertainty_01=123, reserved4_01=0)>"
res = UBXMessage('CFG', 'CFG-DOSC', SET, version=37, numOsc=1, oscId_01=8, freq_01=53, phaseOffset_01=26, gainVco_01=4, gainUncertainty_01=123)
res2 = UBXMessage.parse(res.serialize())
self.assertEqual(str(res2), EXPECTED_RESULT)
def testFill_CFGDOSC2(self): # empty group
EXPECTED_RESULT = "<UBX(CFG-DOSC, version=37, numOsc=0, reserved1=0)>"
res = UBXMessage('CFG', 'CFG-DOSC', SET, version=37, numOsc=0)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGDAT(self): # floating point attribute, single and double precision
EXPECTED_RESULT = "<UBX(CFG-DAT, datumNum=4, datumName=b'WGS-84', majA=4321.123456789128, flat=-2964.00469836, dX=-1.2345678, dY=27.40654, dZ=0.0, rotX=0.0, rotY=0.0, rotZ=0.0, scale=0.0)>"
res = UBXMessage('CFG', 'CFG-DAT', SET, datumNum=4, datumName=b'WGS-84', majA=4321.123456789128, flat=-2964.00469836, dX=-1.2345678, dY=27.40654)
self.assertEqual(str(res), EXPECTED_RESULT)
def testFill_CFGDATPARSE(self): # check that raw payload is correctly populated and parses back to original message
EXPECTED_RESULT = "<UBX(CFG-DAT, datumNum=4, datumName=b'WGS-84', majA=4321.123456789128, flat=-2964.00469836, dX=-1.2345677614212036, dY=27.406539916992188, dZ=0.0, rotX=0.0, rotY=0.0, rotZ=0.0, scale=0.0)>"
res = UBXMessage('CFG', 'CFG-DAT', SET, datumNum=4, datumName=b'WGS-84', majA=4321.123456789128, flat=-2964.00469836, dX=-1.2345678, dY=27.40654)
res2 = UBXMessage.parse(res.serialize())
self.assertEqual(str(res2), EXPECTED_RESULT)
def testFill_CFGDATPARSE2(self): # check that raw payload is correctly populated and parses back to original message
EXPECTED_RESULT = "<UBX(CFG-DAT, datumNum=4, datumName=b'WGS-84', majA=0.0, flat=0.0, dX=-1.2345677614212036, dY=27.406539916992188, dZ=0.0, rotX=0.0, rotY=0.0, rotZ=0.0, scale=0.0)>"
res = UBXMessage('CFG', 'CFG-DAT', SET, datumNum=4, datumName=b'WGS-84', dX=-1.2345678, dY=27.40654)
res2 = UBXMessage.parse(res.serialize())
self.assertEqual(str(res2), EXPECTED_RESULT)
def testEVAL(self): # test eval of repr
res = UBXMessage('CFG', 'CFG-MSG', POLL, msgClass=240, msgID=5)
reseval = eval(repr(res))
assert type(reseval) is UBXMessage
def testEVAL2(self): # test eval of repr
res = UBXMessage('CFG', 'CFG-MSG', SET, msgClass=240, msgID=5, rateUART1=1, rateUSB=1)
reseval = eval(repr(res))
assert type(reseval) is UBXMessage
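# Illustrative sketch only (not part of the original suite): a plain helper showing the
# constructor/serialize/parse round trip that the FillTest cases above assert. It reuses
# only calls already present in this file; the helper name is made up for this example.
def _example_cfg_nmea_roundtrip():
    msg = UBXMessage('CFG', 'CFG-NMEA', POLL)  # build a CFG-NMEA poll request
    return UBXMessage.parse(msg.serialize())  # serialize to raw UBX bytes and parse back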
if __name__ == "__main__":
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
| 64.581967 | 471 | 0.678639 | 1,135 | 7,879 | 4.607048 | 0.176211 | 0.085676 | 0.052018 | 0.061197 | 0.807994 | 0.797093 | 0.765347 | 0.759419 | 0.692484 | 0.626697 | 0 | 0.109728 | 0.174134 | 7,879 | 121 | 472 | 65.115702 | 0.693868 | 0.12527 | 0 | 0.511628 | 0 | 0.116279 | 0.391607 | 0.088523 | 0 | 0 | 0 | 0 | 0.209302 | 1 | 0.232558 | false | 0.011628 | 0.023256 | 0 | 0.267442 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60e29c5b15948891a63018e89f81ccd1d0f84b90 | 88 | py | Python | saopy/owlssc/__init__.py | CityPulse/CP_Resourcemanagement | aa670fa89d5e086a98ade3ccc152518be55abf2e | [
"MIT"
] | 2 | 2016-11-03T14:57:45.000Z | 2019-05-13T13:21:08.000Z | saopy/owlssc/__init__.py | CityPulse/CP_Resourcemanagement | aa670fa89d5e086a98ade3ccc152518be55abf2e | [
"MIT"
] | null | null | null | saopy/owlssc/__init__.py | CityPulse/CP_Resourcemanagement | aa670fa89d5e086a98ade3ccc152518be55abf2e | [
"MIT"
] | 1 | 2020-07-23T11:27:15.000Z | 2020-07-23T11:27:15.000Z | import saopy.model
from saopy.model import owlssc___ServiceCategory as ServiceCategory
| 22 | 67 | 0.875 | 11 | 88 | 6.727273 | 0.636364 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102273 | 88 | 3 | 68 | 29.333333 | 0.936709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
880bad88a3f34b8e4e3ccb80279c4c4c188539a5 | 141 | py | Python | commondata/__init__.py | softwaresaved/policy_common_data | 2c20a5929a9509a323269af5fb815a8273e64516 | [
"BSD-3-Clause"
] | null | null | null | commondata/__init__.py | softwaresaved/policy_common_data | 2c20a5929a9509a323269af5fb815a8273e64516 | [
"BSD-3-Clause"
] | null | null | null | commondata/__init__.py | softwaresaved/policy_common_data | 2c20a5929a9509a323269af5fb815a8273e64516 | [
"BSD-3-Clause"
] | null | null | null | import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_DIR = os.path.abspath(os.path.join(BASE_DIR, 'data'))
| 28.2 | 70 | 0.751773 | 25 | 141 | 3.96 | 0.4 | 0.30303 | 0.181818 | 0.30303 | 0.323232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070922 | 141 | 4 | 71 | 35.25 | 0.755725 | 0 | 0 | 0 | 0 | 0 | 0.028369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
71664c0de31b805d12b7f53141d6708136fe4d5d | 32 | py | Python | app/views/api/users/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 65 | 2017-08-18T15:12:03.000Z | 2021-08-14T16:50:07.000Z | app/views/api/users/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 83 | 2017-11-28T21:45:20.000Z | 2021-11-02T18:52:52.000Z | app/views/api/users/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 71 | 2017-08-17T14:58:01.000Z | 2022-02-02T17:09:49.000Z | import app.views.api.users.urls
| 16 | 31 | 0.8125 | 6 | 32 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 32 | 1 | 32 | 32 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
71b36a5752301a4fab1f956f44b1cf0805f7207e | 20,533 | py | Python | tests/test_Models.py | xin-huang/dadi-cli | d403e9dced19c3a71dc134a8993ad0ceba592c51 | [
"Apache-2.0"
] | 5 | 2021-12-07T23:27:40.000Z | 2022-03-15T08:59:33.000Z | tests/test_Models.py | xin-huang/dadi-cli | d403e9dced19c3a71dc134a8993ad0ceba592c51 | [
"Apache-2.0"
] | 2 | 2022-01-15T09:27:12.000Z | 2022-03-25T16:08:52.000Z | tests/test_Models.py | xin-huang/dadi-cli | d403e9dced19c3a71dc134a8993ad0ceba592c51 | [
"Apache-2.0"
] | 5 | 2021-03-31T19:22:23.000Z | 2021-12-07T18:24:59.000Z | import dadi
import dadi.DFE as DFE
import pytest
from src.Models import get_dadi_model_func, get_dadi_model_params, print_available_models, print_model_details
def test_get_dadi_model_func():
#Selection with a gamma shared between populations
assert get_dadi_model_func('IM', withSelection=True, single_gamma=True) == DFE.DemogSelModels.IM_single_gamma
assert get_dadi_model_func('IM_pre', withSelection=True, single_gamma=True) == DFE.DemogSelModels.IM_pre_single_gamma
assert get_dadi_model_func('split_mig', withSelection=True, single_gamma=True) == DFE.DemogSelModels.split_mig_single_gamma
assert get_dadi_model_func('split_asym_mig', withSelection=True, single_gamma=True) == DFE.DemogSelModels.split_asym_mig_single_gamma
    #Selection with independent gammas
assert get_dadi_model_func('IM', withSelection=True, single_gamma=False) == DFE.DemogSelModels.IM
assert get_dadi_model_func('IM_pre', withSelection=True, single_gamma=False) == DFE.DemogSelModels.IM_pre
assert get_dadi_model_func('split_mig', withSelection=True, single_gamma=False) == DFE.DemogSelModels.split_mig
assert get_dadi_model_func('split_asym_mig', withSelection=True, single_gamma=False) == DFE.DemogSelModels.split_asym_mig
assert get_dadi_model_func('equil', withSelection=True, single_gamma=False) == DFE.DemogSelModels.equil
assert get_dadi_model_func('two_epoch', withSelection=True, single_gamma=False) == DFE.DemogSelModels.two_epoch
assert get_dadi_model_func('three_epoch', withSelection=True, single_gamma=False) == DFE.DemogSelModels.three_epoch
#1D demographic models
assert get_dadi_model_func('bottlegrowth_1d', withSelection=False, single_gamma=False) == dadi.Demographics1D.bottlegrowth
assert get_dadi_model_func('growth', withSelection=False, single_gamma=False) == dadi.Demographics1D.growth
assert get_dadi_model_func('snm_1d', withSelection=False, single_gamma=False) == dadi.Demographics1D.snm
assert get_dadi_model_func('three_epoch', withSelection=False, single_gamma=False) == dadi.Demographics1D.three_epoch
assert get_dadi_model_func('two_epoch', withSelection=False, single_gamma=False) == dadi.Demographics1D.two_epoch
#2D demographic models
assert get_dadi_model_func('bottlegrowth_2d', withSelection=False, single_gamma=False) == dadi.Demographics2D.bottlegrowth
assert get_dadi_model_func('bottlegrowth_split', withSelection=False, single_gamma=False) == dadi.Demographics2D.bottlegrowth_split
assert get_dadi_model_func('bottlegrowth_split_mig', withSelection=False, single_gamma=False) == dadi.Demographics2D.bottlegrowth_split_mig
assert get_dadi_model_func('IM', withSelection=False, single_gamma=False) == dadi.Demographics2D.IM
assert get_dadi_model_func('IM_pre', withSelection=False, single_gamma=False) == dadi.Demographics2D.IM_pre
assert get_dadi_model_func('split_mig', withSelection=False, single_gamma=False) == dadi.Demographics2D.split_mig
assert get_dadi_model_func('split_asym_mig', withSelection=False, single_gamma=False) == dadi.Demographics2D.split_asym_mig
assert get_dadi_model_func('snm_2d', withSelection=False, single_gamma=False) == dadi.Demographics2D.snm
#Cover error message
with pytest.raises(Exception) as e_info:
get_dadi_model_func('haha', withSelection=False, single_gamma=False)
with pytest.raises(Exception) as e_info:
get_dadi_model_func('haha', withSelection=True, single_gamma=False)
with pytest.raises(Exception) as e_info:
get_dadi_model_func('haha', withSelection=True, single_gamma=True)
def test_get_dadi_model_params():
#1D demographic models
assert get_dadi_model_params('bottlegrowth_1d') == ['nuB', 'nuF', 'T']
assert get_dadi_model_params('growth') == ['nu', 'T']
assert get_dadi_model_params('snm_1d') == []
assert get_dadi_model_params('three_epoch') == ['nuB', 'nuF', 'TB', 'TF']
assert get_dadi_model_params('two_epoch') == ['nu', 'T']
#2D demographic models
assert get_dadi_model_params('bottlegrowth_2d') == ['nuB', 'nuF', 'T']
assert get_dadi_model_params('bottlegrowth_split') == ['nuB', 'nuF', 'T', 'Ts']
assert get_dadi_model_params('bottlegrowth_split_mig') == ['nuB', 'nuF', 'm', 'T', 'Ts']
assert get_dadi_model_params('IM') == ['s', 'nu1', 'nu2', 'T', 'm12', 'm21']
assert get_dadi_model_params('IM_pre') == ['nuPre', 'TPre', 's', 'nu1', 'nu2', 'T', 'm12', 'm21']
assert get_dadi_model_params('split_mig') == ['nu1', 'nu2', 'T', 'm']
assert get_dadi_model_params('split_asym_mig') == ['nu1', 'nu2', 'T', 'm12', 'm21']
assert get_dadi_model_params('snm_2d') == []
#Cover error message
with pytest.raises(Exception) as e_info:
get_dadi_model_params('haha')
def test_print_available_models(capfd):
print_available_models()
out, err = capfd.readouterr()
assert out == 'Available 1D demographic models:\n' + '- bottlegrowth_1d\n' + '- growth\n' + '- snm_1d\n' + '- three_epoch\n' + '- two_epoch\n\n' + 'Available 2D demographic models:\n' + '- bottlegrowth_2d\n' + '- bottlegrowth_split\n' + '- bottlegrowth_split_mig\n' + '- IM\n' + '- IM_pre\n' + '- split_mig\n' + '- split_asym_mig\n' + '- split_delay_mig\n' + '- snm_2d\n\n' + 'Available demographic models with selection:\n' + '- equil\n' + '- equil_X\n' + '- IM_sel\n' + '- IM_sel_single_gamma\n' + '- IM_pre_sel\n' + '- IM_pre_sel_single_gamma\n' + '- split_mig_sel\n' + '- split_mig_sel_single_gamma\n' + '- split_asym_mig_sel\n' + '- split_asym_mig_sel_single_gamma\n' + '- two_epoch_sel\n' + '- three_epoch_sel\n'
def test_print_model_details(capfd):
print_model_details('bottlegrowth_1d')
out, err = capfd.readouterr()
exp_out = '- bottlegrowth_1d:\n' + '''
Instantanous size change followed by exponential growth.
Only one population in this model.
params = [nuB,nuF,T]
nuB: Ratio of population size after instantanous change to ancient
population size (in units of Na)
nuF: Ratio of contemporary to ancient population size (in units of Na)
T: Time in the past at which instantaneous change happened and growth began
(in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('growth')
out, err = capfd.readouterr()
exp_out = '- growth:\n' + '''
Exponential growth beginning some time ago.
Only one population in this model.
params = [nu,T]
nu: Ratio of contemporary to ancient population size (in units of Na)
T: Time in the past at which growth began (in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('snm_1d')
out, err = capfd.readouterr()
exp_out = '- snm_1d:\n' + '''
Standard neutral model.
Only one population in this model.
''' + '\n'
assert out == exp_out
print_model_details('three_epoch')
out, err = capfd.readouterr()
exp_out = '- three_epoch:\n' + '''
Two instantaneous size changes some time ago.
Only one population in this model.
params = [nuB,nuF,TB,TF]
nuB: Ratio of bottleneck population size to ancient pop size (in units of Na)
nuF: Ratio of contemporary to ancient pop size (in units of Na)
TB: Length of bottleneck (in units of 2*Na generations)
TF: Time since bottleneck recovery (in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('two_epoch')
out, err = capfd.readouterr()
exp_out = '- two_epoch:\n' + '''
One instantaneous size change some time ago.
Only one population in this model.
params = [nu,T]
nu: Ratio of contemporary to ancient population size (in units of Na)
T: Time in the past at which size change happened (in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('bottlegrowth_2d')
out, err = capfd.readouterr()
exp_out = '- bottlegrowth_2d:\n' + '''
Instantanous size change followed by exponential growth with no population
split.
Two populations in this model.
params = [nuB,nuF,T]
nuB: Ratio of population size after instantanous change to ancient
population size (in units of Na)
nuF: Ratio of contempoary to ancient population size (in units of Na)
T: Time in the past at which instantaneous change happened and growth began
(in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('bottlegrowth_split')
out, err = capfd.readouterr()
exp_out = '- bottlegrowth_split:\n' + '''
Instantanous size change followed by exponential growth then split without
migration.
Two populations in this model.
params = [nuB,nuF,T,Ts]
nuB: Ratio of population size after instantanous change to ancient
population size (in units of Na)
nuF: Ratio of contempoary to ancient population size (in units of Na)
T: Time in the past at which instantaneous change happened and growth began
(in units of 2*Na generations)
Ts: Time in the past at which the two populations split (in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('bottlegrowth_split_mig')
out, err = capfd.readouterr()
exp_out = '- bottlegrowth_split_mig:\n' + '''
Instantanous size change followed by exponential growth then split with
symmetric migration.
Two populations in this model.
params = [nuB,nuF,m,T,Ts]
nuB: Ratio of population size after instantanous change to ancient
population size (in units of Na)
nuF: Ratio of contempoary to ancient population size (in units of Na)
m: Migration rate between the two populations (2*Na*m)
T: Time in the past at which instantaneous change happened and growth began
(in units of 2*Na generations)
Ts: Time in the past at which the two populations split (in units of 2*Na generations)
''' + '\n'
assert out == exp_out
print_model_details('IM')
out, err = capfd.readouterr()
exp_out = '- IM:\n' + '''
Isolation-with-migration model with exponential pop growth.
Two populations in this model.
params = [s,nu1,nu2,T,m12,m21]
s: Size of pop 1 after split (Pop 2 has size 1-s)
nu1: Final size of pop 1 (in units of Na)
nu2: Final size of pop 2 (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration from pop 2 to pop 1 (2*Na*m12)
m21: Migration from pop 1 to pop 2 (2*Na*m21)
''' + '\n'
assert out == exp_out
print_model_details('IM_pre')
out, err = capfd.readouterr()
exp_out = '- IM_pre:\n' + '''
Isolation-with-migration model with exponential pop growth and a size change
prior to split.
Two populations in this model.
params = [nuPre,TPre,s,nu1,nu2,T,m12,m21]
nuPre: Size after first size change (in units of Na)
TPre: Time before split of first size change (in units of 2*Na generations)
s: Fraction of nuPre that goes to pop1 (Pop 2 has size nuPre*(1-s))
nu1: Final size of pop 1 (in units of Na)
nu2: Final size of pop 2 (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration from pop 2 to pop 1 (2*Na*m12)
m21: Migration from pop 1 to pop 2 (2*Na*m21)
''' + '\n'
assert out == exp_out
print_model_details('split_mig')
out, err = capfd.readouterr()
exp_out = '- split_mig:\n' + '''
Split into two populations of specifed size, with symmetric migration.
Two populations in this model.
params = [nu1,nu2,T,m]
nu1: Size of population 1 after split (in units of Na)
nu2: Size of population 2 after split (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m: Migration rate between populations (2*Na*m)
''' + '\n'
assert out == exp_out
print_model_details('split_asym_mig')
out, err = capfd.readouterr()
exp_out = '- split_asym_mig:\n' + '''
Split into two populations of specifed size, with asymmetric migration .
Two populations in this model.
params = [nu1,nu2,T,m12,m21]
nu1: Size of population 1 after split (in units of Na)
nu2: Size of population 2 after split (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration from pop 2 to pop 1 (2*Na*m12)
m21: Migration from pop 1 to pop 2 (2*Na*m21)
''' + '\n'
assert out == exp_out
print_model_details('snm_2d')
out, err = capfd.readouterr()
exp_out = '- snm_2d:\n' + '''
Standard neutral model, populations never diverge.
Two populations in this model.
''' + '\n'
assert out == exp_out
print_model_details('equil')
out, err = capfd.readouterr()
exp_out = '- equil:\n' + '''
Equilibrium demography, plus selection.
Only one population in this model.
params: [gamma]
gamma: Population-scaled selection coefficient
''' + '\n'
assert out == exp_out
print_model_details('equil_X')
out, err = capfd.readouterr()
exp_out = '- equil_X:\n' + '''
Equilibrium demography in chromosome X, plus selection.
Only one population in this model.
params: [gamma]
gamma: Population-scaled selection coefficient
''' + '\n'
assert out == exp_out
print_model_details('IM_sel')
out, err = capfd.readouterr()
exp_out = '- IM_sel:\n' + '''
Isolation-with-migration model with exponential pop growth and selection.
Two populations in this model.
params: [s,nu1,nu2,T,m12,m21,gamma1,gamma2]
s: Fraction of nuPre that goes to pop1 (Pop 2 has size Na*(1-s))
nu1: Final size of pop 1 (in units of Na)
nu2: Final size of pop 2 (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration from pop 2 to pop 1 (2*Na*m12)
m21: Migration from pop 1 to pop 2 (2*Na*m21)
gamma1: Population-scaled selection coefficient in pop 1 *and* the ancestral population
gamma2: Population-scaled selection coefficient in pop 2
''' + '\n'
assert out == exp_out
print_model_details('IM_sel_single_gamma')
out, err = capfd.readouterr()
exp_out = '- IM_sel_single_gamma:\n' + '''
IM model with selection assumed to be equal in all populations.
Two populations in this model.
See IM_sel for argument definitions, but only a single gamma in params.
''' + '\n'
assert out == exp_out
print_model_details('IM_pre_sel')
out, err = capfd.readouterr()
exp_out = '- IM_pre_sel:\n' + '''
Isolation-with-migration model with exponential pop growth, a size change
prior to split, and selection.
Two populations in this model.
params: [nuPre,TPre,s,nu1,nu2,T,m12,m21,gamma1,gamma2]
nuPre: Size after first size change (in units of Na)
TPre: Time before split of first size change (in units of 2*Na generations)
s: Fraction of nuPre that goes to pop1 (Pop 2 has size nuPre*(1-s))
nu1: Final size of pop 1 (in units of Na)
nu2: Final size of pop 2 (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration from pop 2 to pop 1 (2*Na*m12)
m21: Migration from pop 1 to pop 2 (2*Na*m21)
gamma1: Population-scaled selection coefficient in pop 1 *and* the ancestral population
gamma2: Population-scaled selection coefficient in pop 2
''' + '\n'
assert out == exp_out
print_model_details('IM_pre_sel_single_gamma')
out, err = capfd.readouterr()
exp_out = '- IM_pre_sel_single_gamma:\n' + '''
IM_pre model with selection assumed to be equal in all populations.
Two populations in this model.
See IM_pre_sel for argument definitions, but only a single gamma in params.
''' + '\n'
assert out == exp_out
print_model_details('split_mig_sel')
out, err = capfd.readouterr()
exp_out = '- split_mig_sel:\n' + '''
Instantaneous split into two populations of specified size, with symmetric migration.
Two populations in this model.
params = [nu1,nu2,T,m]
nu1: Size of population 1 after split (in units of Na)
nu2: Size of population 2 after split (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m: Migration rate between populations (2*Na*m)
gamma1: Population-scaled selection coefficient in pop 1 *and* the ancestral population
gamma2: Population-scaled selection coefficient in pop 2
''' + '\n'
assert out == exp_out
print_model_details('split_mig_sel_single_gamma')
out, err = capfd.readouterr()
exp_out = '- split_mig_sel_single_gamma:\n' + '''
split_mig model with selection assumed to be equal in all populations.
Two populations in this model.
See split_mig_sel for argument definitions, but only a single gamma in params.
''' + '\n'
assert out == exp_out
print_model_details('split_asym_mig_sel')
out, err = capfd.readouterr()
exp_out = '- split_asym_mig_sel:\n' + '''
Instantaneous split into two populations of specified size, with asymmetric migration.
Two populations in this model.
params = [nu1,nu2,T,m12,m21]
nu1: Size of population 1 after split (in units of Na)
nu2: Size of population 2 after split (in units of Na)
T: Time in the past of split (in units of 2*Na generations)
m12: Migration rate from population 2 to population 1 (2*Na*m12)
m21: Migration rate from population 1 to population 2 (2*Na*m21)
gamma1: Population-scaled selection coefficient in pop 1 *and* the ancestral population
gamma2: Population-scaled selection coefficient in pop 2
''' + '\n'
assert out == exp_out
print_model_details('split_asym_mig_sel_single_gamma')
out, err = capfd.readouterr()
exp_out = '- split_asym_mig_sel_single_gamma:\n' + '''
split_asym_mig model with selection assumed to be equal in all populations.
Two populations in this model.
See split_asym_mig_sel for argument definitions, but only a single gamma in params.
''' + '\n'
assert out == exp_out
print_model_details('two_epoch_sel')
out, err = capfd.readouterr()
exp_out = '- two_epoch_sel:\n' + '''
One instantaneous population size change, plus selection.
Only one population in this model.
params: [nu,T,gamma]
nu: Final population size (in units of Na)
T: Time of size changei (in units of 2*Na generations)
gamma: Population-scaled selection coefficient
''' + '\n'
assert out == exp_out
print_model_details('three_epoch_sel')
out, err = capfd.readouterr()
exp_out = '- three_epoch_sel:\n' + '''
Two instantaneous size changes some time ago, plus selection.
Only one population in this model.
params = [nuB,nuF,TB,TF,gamma]
nuB: Ratio of bottleneck population size to ancient pop size (in units of Na)
nuF: Ratio of contemporary to ancient pop size (in units of Na)
TB: Length of bottleneck (in units of 2*Na generations)
TF: Time since bottleneck recovery (in units of 2*Na generations)
gamma: Population-scaled selection coefficient
''' + '\n'
assert out == exp_out
with pytest.raises(Exception) as e_info:
print_model_details('mixture')
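# Illustrative sketch only (not part of the original suite): how the lookup helpers
# exercised above can be combined; the helper name is made up for this example.
def _example_model_lookup():
    # Resolve the demographic model function and its parameter names for 'split_mig'.
    func = get_dadi_model_func('split_mig', withSelection=False, single_gamma=False)
    params = get_dadi_model_params('split_mig')  # ['nu1', 'nu2', 'T', 'm']
    return func, params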
| 47.094037 | 722 | 0.652754 | 2,865 | 20,533 | 4.508551 | 0.057941 | 0.030348 | 0.039018 | 0.05156 | 0.900674 | 0.889371 | 0.864133 | 0.814663 | 0.68104 | 0.62259 | 0 | 0.017756 | 0.251205 | 20,533 | 435 | 723 | 47.202299 | 0.822374 | 0.009935 | 0 | 0.564607 | 0 | 0.008427 | 0.630579 | 0.034742 | 0.073034 | 0 | 0 | 0 | 0.176966 | 1 | 0.011236 | false | 0 | 0.011236 | 0 | 0.022472 | 0.08427 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71c710af7bd067012d5a03d00b4c1e65d2abcfc2 | 6,709 | py | Python | tests/integration/commands/test_set.py | real-digital/esque | 0b779fc308ce8bce45c1903f36c33664b2e832e7 | [
"MIT"
] | 29 | 2019-05-10T21:12:38.000Z | 2021-08-24T08:09:49.000Z | tests/integration/commands/test_set.py | real-digital/esque | 0b779fc308ce8bce45c1903f36c33664b2e832e7 | [
"MIT"
] | 103 | 2019-05-17T07:21:41.000Z | 2021-12-02T08:29:00.000Z | tests/integration/commands/test_set.py | real-digital/esque | 0b779fc308ce8bce45c1903f36c33664b2e832e7 | [
"MIT"
] | 2 | 2019-05-28T06:45:14.000Z | 2019-11-21T00:33:15.000Z | import pendulum
import pytest
from confluent_kafka.cimpl import Producer as ConfluenceProducer
from confluent_kafka.cimpl import TopicPartition
from esque.cli.commands import esque
from esque.controller.consumergroup_controller import ConsumerGroupController
from tests.utils import produce_text_test_messages
@pytest.mark.integration
def test_set_offsets_offset_to_absolute_value(
topic: str,
interactive_cli_runner,
producer: ConfluenceProducer,
consumer_group: str,
consumergroup_controller: ConsumerGroupController,
):
produce_text_test_messages(producer=producer, topic_name=topic, amount=10)
consumergroup_controller.commit_offsets(consumer_group, [TopicPartition(topic=topic, partition=0, offset=10)])
consumergroup_desc_before = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
interactive_cli_runner.invoke(
esque,
args=["set", "offsets", consumer_group, "--topic-name", topic, "--offset-to-value", "1"],
input="y\n",
catch_exceptions=False,
)
# Check assertions:
consumergroup_desc_after = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
assert consumergroup_desc_before["offsets"][topic][0]["consumer_offset"] == 10
assert consumergroup_desc_after["offsets"][topic][0]["consumer_offset"] == 1
@pytest.mark.integration
def test_set_offsets_offset_to_delta(
topic: str,
interactive_cli_runner,
producer: ConfluenceProducer,
consumer_group: str,
consumergroup_controller: ConsumerGroupController,
):
produce_text_test_messages(producer=producer, topic_name=topic, amount=10)
consumergroup_controller.commit_offsets(consumer_group, [TopicPartition(topic=topic, partition=0, offset=10)])
consumergroup_desc_before = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
interactive_cli_runner.invoke(
esque,
args=["set", "offsets", consumer_group, "--topic-name", topic, "--offset-by-delta", "-2"],
input="y\n",
catch_exceptions=False,
)
# Check assertions:
consumergroup_desc_after = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
assert consumergroup_desc_before["offsets"][topic][0]["consumer_offset"] == 10
assert consumergroup_desc_after["offsets"][topic][0]["consumer_offset"] == 8
@pytest.mark.integration
def test_set_offsets_offset_to_delta_all_topics(
topic: str,
interactive_cli_runner,
producer: ConfluenceProducer,
consumer_group: str,
consumergroup_controller: ConsumerGroupController,
):
produce_text_test_messages(producer=producer, topic_name=topic, amount=10)
consumergroup_controller.commit_offsets(consumer_group, [TopicPartition(topic=topic, partition=0, offset=10)])
consumergroup_desc_before = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
interactive_cli_runner.invoke(
esque, args=["set", "offsets", consumer_group, "--offset-by-delta", "-2"], input="y\n", catch_exceptions=False
)
# Check assertions:
consumergroup_desc_after = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
assert consumergroup_desc_before["offsets"][topic][0]["consumer_offset"] == 10
assert consumergroup_desc_after["offsets"][topic][0]["consumer_offset"] == 8
@pytest.mark.integration
def test_set_offsets_offset_from_group(
topic: str,
interactive_cli_runner,
producer: ConfluenceProducer,
consumer_group: str,
target_consumer_group: str,
consumergroup_controller: ConsumerGroupController,
):
produce_text_test_messages(producer=producer, topic_name=topic, amount=10)
consumergroup_controller.commit_offsets(consumer_group, [TopicPartition(topic=topic, partition=0, offset=10)])
consumergroup_desc_before = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
interactive_cli_runner.invoke(
esque, args=["set", "offsets", consumer_group, "--offset-by-delta", "-2"], input="y\n", catch_exceptions=False
)
consumergroup_desc_after = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
# create a new consumer in a separate group and consume just one message
consumergroup_controller.commit_offsets(
target_consumer_group, [TopicPartition(topic=topic, partition=0, offset=1)]
)
interactive_cli_runner.invoke(
esque,
args=["set", "offsets", target_consumer_group, "--offset-from-group", consumer_group],
input="y\n",
catch_exceptions=False,
)
consumergroup_desc_target = consumergroup_controller.get_consumer_group(
consumer_id=target_consumer_group
).describe(partitions=True)
assert consumergroup_desc_before["offsets"][topic][0]["consumer_offset"] == 10
assert consumergroup_desc_after["offsets"][topic][0]["consumer_offset"] == 8
assert consumergroup_desc_target["offsets"][topic][0]["consumer_offset"] == 8
@pytest.mark.integration
def test_set_offsets_offset_to_timestamp_value(
topic: str,
interactive_cli_runner,
producer: ConfluenceProducer,
consumer_group: str,
consumergroup_controller: ConsumerGroupController,
):
messages = produce_text_test_messages(producer=producer, topic_name=topic, amount=10)
consumergroup_controller.commit_offsets(consumer_group, [TopicPartition(topic=topic, partition=0, offset=10)])
consumergroup_desc_before = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
fifth_message = messages[4]
timestamp = fifth_message.timestamp
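    # message timestamps are in milliseconds; convert to seconds and step back one second
    # so the target timestamp falls just before the fifth message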
dt = pendulum.from_timestamp(round(timestamp / 1000) - 1)
interactive_cli_runner.invoke(
esque,
args=[
"set",
"offsets",
consumer_group,
"--topic-name",
topic,
"--offset-to-timestamp",
dt.format("YYYY-MM-DDTHH:mm:ss"),
],
input="y\n",
catch_exceptions=False,
)
# Check assertions:
consumergroup_desc_after = consumergroup_controller.get_consumer_group(consumer_id=consumer_group).describe(
partitions=True
)
assert consumergroup_desc_before["offsets"][topic][0]["consumer_offset"] == 10
assert consumergroup_desc_after["offsets"][topic][0]["consumer_offset"] == 4
| 36.862637 | 118 | 0.738411 | 747 | 6,709 | 6.31593 | 0.123159 | 0.112972 | 0.04663 | 0.079271 | 0.876643 | 0.864349 | 0.864349 | 0.853964 | 0.833404 | 0.813692 | 0 | 0.011346 | 0.159189 | 6,709 | 181 | 119 | 37.066298 | 0.825031 | 0.021166 | 0 | 0.630137 | 0 | 0 | 0.074684 | 0.003201 | 0 | 0 | 0 | 0 | 0.075342 | 1 | 0.034247 | false | 0 | 0.047945 | 0 | 0.082192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71d262f35c243d34bf28b4d9faec2f9d4b3dc6f3 | 247 | py | Python | libqtile/core/base.py | daroczig/qtile | 75528dd304f48e6a3a945b9029c40131129d2e7e | [
"MIT"
] | 1 | 2019-06-18T07:44:04.000Z | 2019-06-18T07:44:04.000Z | libqtile/core/base.py | daroczig/qtile | 75528dd304f48e6a3a945b9029c40131129d2e7e | [
"MIT"
] | 22 | 2019-02-23T23:56:05.000Z | 2019-09-04T21:35:24.000Z | libqtile/core/base.py | daroczig/qtile | 75528dd304f48e6a3a945b9029c40131129d2e7e | [
"MIT"
] | 4 | 2019-02-22T23:26:00.000Z | 2022-01-03T17:46:54.000Z | from abc import ABCMeta, abstractmethod
import typing
class Core(metaclass=ABCMeta):
@abstractmethod
def get_keys(self) -> typing.List[str]:
pass
@abstractmethod
def get_modifiers(self) -> typing.List[str]:
pass
| 19 | 48 | 0.680162 | 29 | 247 | 5.724138 | 0.586207 | 0.253012 | 0.240964 | 0.204819 | 0.253012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226721 | 247 | 12 | 49 | 20.583333 | 0.86911 | 0 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.222222 | 0.222222 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e08ec27c8e7a010be777e7976f1a133c60e92aa6 | 141 | py | Python | src/www/api/query.py | SuLab/bioreel | 1c701316a338b16d12c11903e40866e945abb8d1 | [
"Apache-2.0"
] | 32 | 2015-10-23T19:47:09.000Z | 2019-11-16T01:28:26.000Z | src/www/api/query.py | SuLab/bioreel | 1c701316a338b16d12c11903e40866e945abb8d1 | [
"Apache-2.0"
] | 12 | 2015-10-27T20:20:41.000Z | 2017-04-04T21:35:46.000Z | src/www/api/query.py | SuLab/bioreel | 1c701316a338b16d12c11903e40866e945abb8d1 | [
"Apache-2.0"
] | 15 | 2015-10-15T20:46:50.000Z | 2021-07-12T19:17:49.000Z | # -*- coding: utf-8 -*-
from biothings.www.api.es.query import ESQuery
class ESQuery(ESQuery):
    # Add app-specific queries here
pass
| 20.142857 | 46 | 0.687943 | 20 | 141 | 4.85 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008772 | 0.191489 | 141 | 6 | 47 | 23.5 | 0.842105 | 0.361702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e099930567f761c4be80e5e0de1694f146ac73ef | 550 | py | Python | __init__.py | NWChemEx-Project/PythonProjectorEmbedding | a1c3346f65382de3493d690b4075930b34c96862 | [
"Apache-2.0"
] | null | null | null | __init__.py | NWChemEx-Project/PythonProjectorEmbedding | a1c3346f65382de3493d690b4075930b34c96862 | [
"Apache-2.0"
] | null | null | null | __init__.py | NWChemEx-Project/PythonProjectorEmbedding | a1c3346f65382de3493d690b4075930b34c96862 | [
"Apache-2.0"
] | null | null | null | """ __init__.py """
from projectorEmbedding.embed_utils import make_dm
from projectorEmbedding.embed_utils import flatten_basis
from projectorEmbedding.embed_utils import purify
from projectorEmbedding.embed_utils import screen_aos
from projectorEmbedding.embed_utils import truncate_basis
from projectorEmbedding.embed_partition import mulliken_partition
from projectorEmbedding.embed_partition import occupancy_partition
from projectorEmbedding.embed_partition import spade_partition
from projectorEmbedding.embed_proc import embedding_procedure
| 42.307692 | 66 | 0.896364 | 64 | 550 | 7.375 | 0.34375 | 0.419492 | 0.514831 | 0.338983 | 0.707627 | 0.216102 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074545 | 550 | 12 | 67 | 45.833333 | 0.927308 | 0.02 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0a7639e28fef5a8793742c26a1a9e2498e954d1 | 11,841 | py | Python | onnx_chainer/functions/math.py | ir5/onnx-chainer | c4e4a900c612b3528df9ef7535b7f94c7eda2f8a | [
"MIT"
] | null | null | null | onnx_chainer/functions/math.py | ir5/onnx-chainer | c4e4a900c612b3528df9ef7535b7f94c7eda2f8a | [
"MIT"
] | null | null | null | onnx_chainer/functions/math.py | ir5/onnx-chainer | c4e4a900c612b3528df9ef7535b7f94c7eda2f8a | [
"MIT"
] | null | null | null | import numpy as np
from onnx.mapping import NP_TYPE_TO_TENSOR_TYPE
from onnx_chainer.functions.opset_version import support
from onnx_chainer import onnx_helper
@support((1, 6, 7))
def convert_Add(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Add', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Add', input_names, output_names),
@support((1, 6, 7))
def convert_AddConstant(
func, opset_version, input_names, output_names, context):
value_name = context.add_const(
np.array(func.value, dtype=func.inputs[0].dtype), 'value')
input_names.append(value_name)
if opset_version == 1:
return onnx_helper.make_node(
'Add', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Add', input_names, output_names),
@support((1, 6, 7))
def convert_Sub(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Sub', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Sub', input_names, output_names),
@support((1, 6, 7))
def convert_SubFromConstant(
func, opset_version, input_names, output_names, context):
value_name = context.add_const(
np.array(func.value, dtype=func.inputs[0].dtype), 'value')
input_names[:0] = [value_name]
if opset_version == 1:
return onnx_helper.make_node(
'Sub', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Sub', input_names, output_names),
@support((1, 6, 7))
def convert_Mul(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Mul', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Mul', input_names, output_names),
@support((1, 6, 7))
def convert_MulConstant(
func, opset_version, input_names, output_names, context):
value_name = context.add_const(
np.array(func.value, dtype=func.inputs[0].dtype), 'value')
input_names.append(value_name)
if opset_version == 1:
return onnx_helper.make_node(
'Mul', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Mul', input_names, output_names),
@support((1, 6))
def convert_Neg(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Neg', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6:
return onnx_helper.make_node('Neg', input_names, output_names),
@support((1, 6, 7))
def convert_Div(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Div', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Div', input_names, output_names),
@support((1, 6, 7))
def convert_DivFromConstant(
func, opset_version, input_names, output_names, context):
value_name = context.add_const(
np.array(func.value, dtype=func.inputs[0].dtype), 'value')
input_names[:0] = [value_name]
if opset_version == 1:
return onnx_helper.make_node(
'Div', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node('Div', input_names, output_names),
@support((1, 6))
def convert_Absolute(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Abs', input_names, output_names, consumed_inputs=[1]),
elif opset_version == 6:
return onnx_helper.make_node('Abs', input_names, output_names),
@support((1, 7))
def convert_PowVarConst(
func, opset_version, input_names, output_names, context):
value_name = context.add_const(
np.array(func.value, dtype=func.inputs[0].dtype), 'value')
input_names.append(value_name)
if opset_version == 1 or opset_version == 7:
return onnx_helper.make_node('Pow', input_names, output_names),
@support((1, 6))
def convert_Clip(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Clip', input_names, output_names,
max=func.x_max,
min=func.x_min,
consumed_inputs=[1]
),
elif opset_version == 6:
return onnx_helper.make_node(
'Clip', input_names, output_names,
max=func.x_max,
min=func.x_min,
),
@support((1, 6))
def convert_Exp(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Exp', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6:
return onnx_helper.make_node('Exp', input_names, output_names),
def convert_Identity(func, opset_version, input_names, output_names, context):
return onnx_helper.make_node('Identity', input_names, output_names),
def convert_MatMul(func, opset_version, input_names, output_names, context):
ndim_a = len(func.inputs[0].shape)
ndim_b = len(func.inputs[1].shape)
gb = onnx_helper.GraphBuilder()
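    # If an operand is flagged as transposed, swap its last two axes with an explicit
    # Transpose node before the MatMul.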
if ndim_a > 1 and func.transa:
perm = list(range(ndim_a))
perm[-1], perm[-2] = perm[-2], perm[-1]
input_names[0] = gb.op('Transpose', [input_names[0]], perm=perm)
if ndim_b > 1 and func.transb:
perm = list(range(ndim_b))
perm[-1], perm[-2] = perm[-2], perm[-1]
input_names[1] = gb.op('Transpose', [input_names[1]], perm=perm)
gb.op('MatMul', input_names)
return gb.nodes(output_names)
@support((1, 6, 8))
def convert_Maximum(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Max', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 8:
return onnx_helper.make_node('Max', input_names, output_names),
@support((1, 6, 8))
def convert_Minimum(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Min', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 8:
return onnx_helper.make_node('Min', input_names, output_names),
@support((1, 6))
def convert_Sqrt(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Sqrt', input_names, output_names, consumed_inputs=[1, 1]),
elif opset_version == 6:
return onnx_helper.make_node('Sqrt', input_names, output_names),
def convert_RsqrtGPU(func, opset_version, input_names, output_names, context):
gb = onnx_helper.GraphBuilder()
sqrt_out = gb.op('Sqrt', input_names)
gb.op('Reciprocal', [sqrt_out])
return gb.nodes(output_names)
def convert_LogSumExp(func, opset_version, input_names, output_names, context):
    # Use keepdims=False by default
    # since Chainer's function does not expose a keepdims option
kwargs = {'keepdims': False}
if hasattr(func, 'keepdims'):
kwargs['keepdims'] = func.keepdims
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceLogSumExp', input_names, output_names, **kwargs),
def convert_Max(func, opset_version, input_names, output_names, context):
kwargs = {'keepdims': func.keepdims}
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceMax', input_names, output_names, **kwargs),
def convert_Mean(func, opset_version, input_names, output_names, context):
kwargs = {'keepdims': func.keepdims}
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceMean', input_names, output_names, **kwargs),
def convert_Min(func, opset_version, input_names, output_names, context):
kwargs = {'keepdims': func.keepdims}
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceMin', input_names, output_names, **kwargs),
def convert_Prod(func, opset_version, input_names, output_names, context):
kwargs = {'keepdims': func.keepdims}
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceProd', input_names, output_names, **kwargs),
def convert_Sum(func, opset_version, input_names, output_names, context):
kwargs = {'keepdims': func.keepdims}
if func.axis is not None:
kwargs['axes'] = func.axis
return onnx_helper.make_node(
'ReduceSum', input_names, output_names, **kwargs),
@support((1, 6, 7))
def convert_LinearInterpolate(
func, opset_version, input_names, output_names, context):
typ = func.inputs[0].dtype if isinstance(
func.inputs[0].dtype, np.dtype) else np.dtype(func.inputs[0].dtype)
one_name = context.add_const(np.array(1, dtype=typ), 'one')
kwargs = {'consumed_inputs': [1, 1]} if opset_version == 1 else {}
kwargs2 = {} if opset_version >= 7 else {'broadcast': 1}
gb = onnx_helper.GraphBuilder()
p, x, y = input_names
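    # compute p * x + (1 - p) * y from primitive ONNX ops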
n1 = gb.op('Sub', [one_name, p], **kwargs, **kwargs2)
n2 = gb.op('Mul', [p, x], **kwargs)
n3 = gb.op('Mul', [n1, y], **kwargs)
gb.op_output_named('Add', [n2, n3], output_names, **kwargs)
return gb.nodes()
@support((1, 6, 7))
def convert_Square(func, opset_version, input_names, output_names, context):
if opset_version == 1:
return onnx_helper.make_node(
'Mul', [input_names[0], input_names[0]], output_names,
consumed_inputs=[1, 1]),
elif opset_version == 6 or opset_version == 7:
return onnx_helper.make_node(
'Mul', [input_names[0], input_names[0]], output_names),
@support((8,))
def convert_BroadcastTo(
func, opset_version, input_names, output_names, context):
shape_name = context.add_const(np.array(func._shape), 'shape')
input_names.append(shape_name)
return onnx_helper.make_node('Expand', input_names, output_names),
def _argminmax_nodes(op_name, func, input_names, output_names, context):
gb = onnx_helper.GraphBuilder()
target_input_names = input_names
axis = func.axis
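    # With axis=None Chainer reduces over the flattened array, so reshape to one
    # dimension and take the arg extremum along axis 0.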
if axis is None:
shape_name = context.add_const(np.array([-1]), 'shape')
input_names.append(shape_name)
target_input_names = [gb.op('Reshape', input_names)]
axis = 0
out = gb.op(op_name, target_input_names, axis=axis, keepdims=0)
    # Chainer's ArgMax always returns values as int32.
    # The Cast spec changed in opset 6, so this logic does not support opset 5 and earlier.
gb.op('Cast', [out], to=NP_TYPE_TO_TENSOR_TYPE[np.dtype('int32')])
return gb.nodes(output_names)
@support((6,))
def convert_ArgMax(func, opset_version, input_names, output_names, context):
return _argminmax_nodes('ArgMax', func, input_names, output_names, context)
@support((6,))
def convert_ArgMin(func, opset_version, input_names, output_names, context):
return _argminmax_nodes('ArgMin', func, input_names, output_names, context)
| 36.322086 | 79 | 0.676885 | 1,656 | 11,841 | 4.571256 | 0.085749 | 0.125495 | 0.15218 | 0.199736 | 0.813606 | 0.782034 | 0.743593 | 0.706737 | 0.674637 | 0.638705 | 0 | 0.018842 | 0.197703 | 11,841 | 325 | 80 | 36.433846 | 0.778 | 0.016553 | 0 | 0.600806 | 0 | 0 | 0.034966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.016129 | 0.012097 | 0.330645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e0eab40ce8550a4b66719f827842d29b129472c0 | 38 | py | Python | test_calc_fail.py | piersto/pilot_project | 1d3aad28f1d9f77ba95de9adc8a0702701ab6c12 | [
"Apache-2.0"
] | null | null | null | test_calc_fail.py | piersto/pilot_project | 1d3aad28f1d9f77ba95de9adc8a0702701ab6c12 | [
"Apache-2.0"
] | null | null | null | test_calc_fail.py | piersto/pilot_project | 1d3aad28f1d9f77ba95de9adc8a0702701ab6c12 | [
"Apache-2.0"
] | null | null | null | def test_add():
assert (1+2 == 4)
| 12.666667 | 21 | 0.526316 | 7 | 38 | 2.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0.263158 | 38 | 2 | 22 | 19 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e0f520d4f53ec84746fea23a298cc88e355e4dcf | 25,429 | py | Python | tests/wallet/cat_wallet/test_trades.py | bolshoytoster/chia-blockchain | 1086fb255601de931dc771caa7327f4df5c87ace | [
"Apache-2.0"
] | 2 | 2022-02-03T01:22:32.000Z | 2022-02-03T01:44:43.000Z | tests/wallet/cat_wallet/test_trades.py | jacobdmn/chia-blockchain | 29fad42e0c42654ec123e414966b1aa29789f384 | [
"Apache-2.0"
] | null | null | null | tests/wallet/cat_wallet/test_trades.py | jacobdmn/chia-blockchain | 29fad42e0c42654ec123e414966b1aa29789f384 | [
"Apache-2.0"
] | null | null | null | import asyncio
from secrets import token_bytes
from typing import List
import pytest
from chia.full_node.mempool_manager import MempoolManager
from chia.simulator.simulator_protocol import FarmNewBlockProtocol
from chia.types.peer_info import PeerInfo
from chia.util.ints import uint16, uint64
from chia.wallet.cat_wallet.cat_wallet import CATWallet
from chia.wallet.trading.offer import Offer
from chia.wallet.trading.trade_status import TradeStatus
from chia.wallet.transaction_record import TransactionRecord
from tests.setup_nodes import setup_simulators_and_wallets
from tests.time_out_assert import time_out_assert
async def tx_in_pool(mempool: MempoolManager, tx_id):
tx = mempool.get_spendbundle(tx_id)
if tx is None:
return False
return True
@pytest.fixture(scope="module")
def event_loop():
loop = asyncio.get_event_loop()
yield loop
@pytest.fixture(scope="function")
async def two_wallet_nodes():
async for _ in setup_simulators_and_wallets(1, 2, {}):
yield _
buffer_blocks = 4
@pytest.fixture(scope="function")
async def wallets_prefarm(two_wallet_nodes, trusted):
"""
Sets up the node with 10 blocks, and returns a payer and payee wallet.
"""
farm_blocks = 10
buffer = 4
full_nodes, wallets = two_wallet_nodes
full_node_api = full_nodes[0]
full_node_server = full_node_api.server
wallet_node_0, wallet_server_0 = wallets[0]
wallet_node_1, wallet_server_1 = wallets[1]
wallet_0 = wallet_node_0.wallet_state_manager.main_wallet
wallet_1 = wallet_node_1.wallet_state_manager.main_wallet
ph0 = await wallet_0.get_new_puzzlehash()
ph1 = await wallet_1.get_new_puzzlehash()
if trusted:
wallet_node_0.config["trusted_peers"] = {full_node_server.node_id.hex(): full_node_server.node_id.hex()}
wallet_node_1.config["trusted_peers"] = {full_node_server.node_id.hex(): full_node_server.node_id.hex()}
else:
wallet_node_0.config["trusted_peers"] = {}
wallet_node_1.config["trusted_peers"] = {}
await wallet_server_0.start_client(PeerInfo("localhost", uint16(full_node_server._port)), None)
await wallet_server_1.start_client(PeerInfo("localhost", uint16(full_node_server._port)), None)
for i in range(0, farm_blocks):
await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph0))
for i in range(0, farm_blocks):
await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(ph1))
for i in range(0, buffer):
await full_node_api.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
return wallet_node_0, wallet_node_1, full_node_api
@pytest.mark.parametrize(
"trusted",
[True, False],
)
class TestCATTrades:
@pytest.mark.asyncio
async def test_cat_trades(self, wallets_prefarm):
wallet_node_maker, wallet_node_taker, full_node = wallets_prefarm
wallet_maker = wallet_node_maker.wallet_state_manager.main_wallet
wallet_taker = wallet_node_taker.wallet_state_manager.main_wallet
# Create two new CATs, one in each wallet
async with wallet_node_maker.wallet_state_manager.lock:
cat_wallet_maker: CATWallet = await CATWallet.create_new_cat_wallet(
wallet_node_maker.wallet_state_manager, wallet_maker, {"identifier": "genesis_by_id"}, uint64(100)
)
await asyncio.sleep(1)
async with wallet_node_taker.wallet_state_manager.lock:
new_cat_wallet_taker: CATWallet = await CATWallet.create_new_cat_wallet(
wallet_node_taker.wallet_state_manager, wallet_taker, {"identifier": "genesis_by_id"}, uint64(100)
)
await asyncio.sleep(1)
for i in range(1, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, 100)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, 100)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, 100)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, 100)
# Add the taker's CAT to the maker's wallet
assert cat_wallet_maker.cat_info.my_tail is not None
assert new_cat_wallet_taker.cat_info.my_tail is not None
new_cat_wallet_maker: CATWallet = await CATWallet.create_wallet_for_cat(
wallet_node_maker.wallet_state_manager, wallet_maker, new_cat_wallet_taker.get_asset_id()
)
# Create the trade parameters
MAKER_CHIA_BALANCE = 20 * 1000000000000 - 100
TAKER_CHIA_BALANCE = 20 * 1000000000000 - 100
await time_out_assert(25, wallet_maker.get_confirmed_balance, MAKER_CHIA_BALANCE)
await time_out_assert(25, wallet_taker.get_unconfirmed_balance, TAKER_CHIA_BALANCE)
MAKER_CAT_BALANCE = 100
MAKER_NEW_CAT_BALANCE = 0
TAKER_CAT_BALANCE = 0
TAKER_NEW_CAT_BALANCE = 100
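        # In each offer dict below, negative amounts are given up by the maker and positive amounts are requested from the taker.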
chia_for_cat = {
wallet_maker.id(): -1,
new_cat_wallet_maker.id(): 2, # This is the CAT that the taker made
}
cat_for_chia = {
wallet_maker.id(): 3,
cat_wallet_maker.id(): -4, # The taker has no knowledge of this CAT yet
}
cat_for_cat = {
cat_wallet_maker.id(): -5,
new_cat_wallet_maker.id(): 6,
}
chia_for_multiple_cat = {
wallet_maker.id(): -7,
cat_wallet_maker.id(): 8,
new_cat_wallet_maker.id(): 9,
}
multiple_cat_for_chia = {
wallet_maker.id(): 10,
cat_wallet_maker.id(): -11,
new_cat_wallet_maker.id(): -12,
}
chia_and_cat_for_cat = {
wallet_maker.id(): -13,
cat_wallet_maker.id(): -14,
new_cat_wallet_maker.id(): 15,
}
trade_manager_maker = wallet_node_maker.wallet_state_manager.trade_manager
trade_manager_taker = wallet_node_taker.wallet_state_manager.trade_manager
# Execute all of the trades
# chia_for_cat
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(chia_for_cat, fee=uint64(1))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(
Offer.from_bytes(trade_make.offer), fee=uint64(1)
)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CHIA_BALANCE -= 2 # -1 and -1 for fee
MAKER_NEW_CAT_BALANCE += 2
TAKER_CHIA_BALANCE += 0 # +1 and -1 for fee
TAKER_NEW_CAT_BALANCE -= 2
await time_out_assert(15, wallet_taker.get_unconfirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, wallet_maker.get_confirmed_balance, MAKER_CHIA_BALANCE)
await time_out_assert(15, wallet_maker.get_unconfirmed_balance, MAKER_CHIA_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_confirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_unconfirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, wallet_taker.get_confirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, wallet_taker.get_unconfirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
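        # Helper polled by time_out_assert to read the recorded status of a trade.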
async def get_trade_and_status(trade_manager, trade) -> TradeStatus:
trade_rec = await trade_manager.get_trade_by_id(trade.trade_id)
return TradeStatus(trade_rec.status)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
maker_txs = await wallet_node_maker.wallet_state_manager.tx_store.get_transactions_by_trade_id(
trade_make.trade_id
)
taker_txs = await wallet_node_taker.wallet_state_manager.tx_store.get_transactions_by_trade_id(
trade_take.trade_id
)
assert len(maker_txs) == 1 # The other side will show up as a regular incoming transaction
assert len(taker_txs) == 3 # One for each: the outgoing CAT, the incoming chia, and the outgoing chia fee
# cat_for_chia
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(cat_for_chia)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CAT_BALANCE -= 4
MAKER_CHIA_BALANCE += 3
TAKER_CAT_BALANCE += 4
TAKER_CHIA_BALANCE -= 3
cat_wallet_taker: CATWallet = await wallet_node_taker.wallet_state_manager.get_wallet_for_asset_id(
cat_wallet_maker.get_asset_id()
)
await time_out_assert(15, wallet_taker.get_unconfirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, wallet_maker.get_confirmed_balance, MAKER_CHIA_BALANCE)
await time_out_assert(15, wallet_maker.get_unconfirmed_balance, MAKER_CHIA_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, wallet_taker.get_confirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, wallet_taker.get_unconfirmed_balance, TAKER_CHIA_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_confirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
maker_txs = await wallet_node_maker.wallet_state_manager.tx_store.get_transactions_by_trade_id(
trade_make.trade_id
)
taker_txs = await wallet_node_taker.wallet_state_manager.tx_store.get_transactions_by_trade_id(
trade_take.trade_id
)
assert len(maker_txs) == 1 # The other side will show up as a regular incoming transaction
assert len(taker_txs) == 2 # One for each: the outgoing chia, the incoming CAT
# cat_for_cat
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(cat_for_cat)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CAT_BALANCE -= 5
MAKER_NEW_CAT_BALANCE += 6
TAKER_CAT_BALANCE += 5
TAKER_NEW_CAT_BALANCE -= 6
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, new_cat_wallet_maker.get_confirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_unconfirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_confirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
# chia_for_multiple_cat
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(chia_for_multiple_cat)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CHIA_BALANCE -= 7
MAKER_CAT_BALANCE += 8
MAKER_NEW_CAT_BALANCE += 9
TAKER_CHIA_BALANCE += 7
TAKER_CAT_BALANCE -= 8
TAKER_NEW_CAT_BALANCE -= 9
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, new_cat_wallet_maker.get_confirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_unconfirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_confirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
# multiple_cat_for_chia
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(multiple_cat_for_chia)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CAT_BALANCE -= 11
MAKER_NEW_CAT_BALANCE -= 12
MAKER_CHIA_BALANCE += 10
TAKER_CAT_BALANCE += 11
TAKER_NEW_CAT_BALANCE += 12
TAKER_CHIA_BALANCE -= 10
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, new_cat_wallet_maker.get_confirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_unconfirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_confirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
# chia_and_cat_for_cat
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(chia_and_cat_for_cat)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_take is not None
MAKER_CHIA_BALANCE -= 13
MAKER_CAT_BALANCE -= 14
MAKER_NEW_CAT_BALANCE += 15
TAKER_CHIA_BALANCE += 13
TAKER_CAT_BALANCE += 14
TAKER_NEW_CAT_BALANCE -= 15
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
for i in range(0, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, new_cat_wallet_maker.get_confirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_maker.get_unconfirmed_balance, MAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_confirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, new_cat_wallet_taker.get_unconfirmed_balance, TAKER_NEW_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_confirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, cat_wallet_taker.get_unconfirmed_balance, TAKER_CAT_BALANCE)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_maker, trade_make)
await time_out_assert(15, get_trade_and_status, TradeStatus.CONFIRMED, trade_manager_taker, trade_take)
@pytest.mark.asyncio
async def test_trade_cancellation(self, wallets_prefarm):
wallet_node_maker, wallet_node_taker, full_node = wallets_prefarm
wallet_maker = wallet_node_maker.wallet_state_manager.main_wallet
wallet_taker = wallet_node_taker.wallet_state_manager.main_wallet
async with wallet_node_maker.wallet_state_manager.lock:
cat_wallet_maker: CATWallet = await CATWallet.create_new_cat_wallet(
wallet_node_maker.wallet_state_manager, wallet_maker, {"identifier": "genesis_by_id"}, uint64(100)
)
tx_queue: List[TransactionRecord] = await wallet_node_maker.wallet_state_manager.tx_store.get_not_sent()
await time_out_assert(
15, tx_in_pool, True, full_node.full_node.mempool_manager, tx_queue[0].spend_bundle.name()
)
for i in range(1, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, 100)
await time_out_assert(15, cat_wallet_maker.get_unconfirmed_balance, 100)
MAKER_CHIA_BALANCE = 20 * 1000000000000 - 100
MAKER_CAT_BALANCE = 100
TAKER_CHIA_BALANCE = 20 * 1000000000000
await time_out_assert(15, wallet_maker.get_confirmed_balance, MAKER_CHIA_BALANCE)
cat_for_chia = {
wallet_maker.id(): 1,
cat_wallet_maker.id(): -2,
}
chia_for_cat = {
wallet_maker.id(): -3,
cat_wallet_maker.id(): 4,
}
trade_manager_maker = wallet_node_maker.wallet_state_manager.trade_manager
trade_manager_taker = wallet_node_taker.wallet_state_manager.trade_manager
async def get_trade_and_status(trade_manager, trade) -> TradeStatus:
trade_rec = await trade_manager.get_trade_by_id(trade.trade_id)
return TradeStatus(trade_rec.status)
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(cat_for_chia)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
await trade_manager_maker.cancel_pending_offer(trade_make.trade_id)
await time_out_assert(15, get_trade_and_status, TradeStatus.CANCELLED, trade_manager_maker, trade_make)
# Due to current mempool rules, trying to force a take out of the mempool with a cancel will not work.
# Uncomment this when/if it does
# success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
# await asyncio.sleep(1)
# assert error is None
# assert success is True
# assert trade_take is not None
# await time_out_assert(15, get_trade_and_status, TradeStatus.PENDING_CONFIRM, trade_manager_taker, trade_take)
# await time_out_assert(
# 15,
# tx_in_pool,
# True,
# full_node.full_node.mempool_manager,
# Offer.from_bytes(trade_take.offer).to_valid_spend().name(),
# )
FEE = uint64(2000000000000)
txs = await trade_manager_maker.cancel_pending_offer_safely(trade_make.trade_id, fee=FEE)
await time_out_assert(15, get_trade_and_status, TradeStatus.PENDING_CANCEL, trade_manager_maker, trade_make)
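        # Make sure each cancellation transaction reaches the mempool before farming the blocks that confirm it.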
for tx in txs:
if tx.spend_bundle is not None:
await time_out_assert(15, tx_in_pool, True, full_node.full_node.mempool_manager, tx.spend_bundle.name())
for i in range(1, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, get_trade_and_status, TradeStatus.CANCELLED, trade_manager_maker, trade_make)
# await time_out_assert(15, get_trade_and_status, TradeStatus.FAILED, trade_manager_taker, trade_take)
await time_out_assert(15, wallet_maker.get_pending_change_balance, 0)
await time_out_assert(15, wallet_maker.get_confirmed_balance, MAKER_CHIA_BALANCE - FEE)
await time_out_assert(15, cat_wallet_maker.get_confirmed_balance, MAKER_CAT_BALANCE)
await time_out_assert(15, wallet_taker.get_confirmed_balance, TAKER_CHIA_BALANCE)
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is not None
assert success is False
assert trade_take is None
# Now we're going to create the other way around for test coverage sake
success, trade_make, error = await trade_manager_maker.create_offer_for_ids(chia_for_cat)
await asyncio.sleep(1)
assert error is None
assert success is True
assert trade_make is not None
# This take should fail since we have no CATs to fulfill it with
success, trade_take, error = await trade_manager_taker.respond_to_offer(Offer.from_bytes(trade_make.offer))
await asyncio.sleep(1)
assert error is not None
assert success is False
assert trade_take is None
txs = await trade_manager_maker.cancel_pending_offer_safely(trade_make.trade_id, fee=uint64(0))
await time_out_assert(15, get_trade_and_status, TradeStatus.PENDING_CANCEL, trade_manager_maker, trade_make)
for tx in txs:
if tx.spend_bundle is not None:
await time_out_assert(15, tx_in_pool, True, full_node.full_node.mempool_manager, tx.spend_bundle.name())
for i in range(1, buffer_blocks):
await full_node.farm_new_transaction_block(FarmNewBlockProtocol(token_bytes()))
await time_out_assert(15, get_trade_and_status, TradeStatus.CANCELLED, trade_manager_maker, trade_make)
| 48.996146 | 120 | 0.733926 | 3,606 | 25,429 | 4.742097 | 0.061009 | 0.040117 | 0.074503 | 0.101053 | 0.841287 | 0.823743 | 0.790409 | 0.780175 | 0.773977 | 0.763918 | 0 | 0.023638 | 0.203115 | 25,429 | 518 | 121 | 49.090734 | 0.820223 | 0.056982 | 0 | 0.578947 | 0 | 0 | 0.007042 | 0 | 0 | 0 | 0 | 0 | 0.370927 | 1 | 0.002506 | false | 0 | 0.035088 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cb73bffffb01af2d50f20ce7c8461f1b52b3315 | 30 | py | Python | src/telemetryserver/__init__.py | arcan1s/telemetry-server | 39eca89db557b21ab1315ef4db50d33a7947535b | [
"MIT"
] | null | null | null | src/telemetryserver/__init__.py | arcan1s/telemetry-server | 39eca89db557b21ab1315ef4db50d33a7947535b | [
"MIT"
] | null | null | null | src/telemetryserver/__init__.py | arcan1s/telemetry-server | 39eca89db557b21ab1315ef4db50d33a7947535b | [
"MIT"
] | null | null | null | from telemetryserver import *
| 15 | 29 | 0.833333 | 3 | 30 | 8.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1cf8a95eb5eedeaf0e26f6af3c5c60d2b93c583e | 2,639 | py | Python | marvel/modules/events.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 28 | 2018-10-27T08:36:29.000Z | 2021-11-08T12:55:58.000Z | marvel/modules/events.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 2 | 2020-08-31T17:01:35.000Z | 2021-07-29T13:46:39.000Z | marvel/modules/events.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 4 | 2019-04-08T00:59:13.000Z | 2021-12-17T21:55:10.000Z | from marvel.modules.base_module import BaseModule
class Events(BaseModule):
def __init__(self, requester):
"""
Events Module.
:param requester: Requester
"""
super().__init__(requester)
def all(self, **kwargs):
"""
This returns data containing all events
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', payload=kwargs)
return data
def get(self, identifier, **kwargs):
"""
This returns data containing a single event using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs)
return data
def characters(self, identifier, **kwargs):
"""
This returns data containing a single event's characters using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs, sub_endpoint="characters")
return data
def comics(self, identifier, **kwargs):
"""
This returns data containing a single event's comics using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs, sub_endpoint="comics")
return data
def creators(self, identifier, **kwargs):
"""
This returns data containing a single event's creators using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs, sub_endpoint="creators")
return data
def series(self, identifier, **kwargs):
"""
This returns data containing a single event's series using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs, sub_endpoint="series")
return data
def stories(self, identifier, **kwargs):
"""
This returns data containing a single event's stories using identifier (id)
:param identifier: int
:param kwargs: dict
:return: dict
"""
data, headers = self.r.request('events', identifier=identifier, payload=kwargs, sub_endpoint="stories")
return data
| 32.9875 | 114 | 0.607806 | 282 | 2,639 | 5.638298 | 0.152482 | 0.044025 | 0.074843 | 0.092453 | 0.791195 | 0.74717 | 0.74717 | 0.74717 | 0.74717 | 0.74717 | 0 | 0 | 0.289125 | 2,639 | 79 | 115 | 33.405063 | 0.847548 | 0.343312 | 0 | 0.28 | 0 | 0 | 0.057246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.32 | false | 0 | 0.04 | 0 | 0.68 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c26b1b4cd6d1882c2123dabf2bf1409399320bb | 43 | py | Python | pynet_ansible/subdir/my.py | joeyb182/pynet_ansible | b8221ebf23937838e0ecf5c71277cb042f408698 | [
"Apache-2.0"
] | null | null | null | pynet_ansible/subdir/my.py | joeyb182/pynet_ansible | b8221ebf23937838e0ecf5c71277cb042f408698 | [
"Apache-2.0"
] | null | null | null | pynet_ansible/subdir/my.py | joeyb182/pynet_ansible | b8221ebf23937838e0ecf5c71277cb042f408698 | [
"Apache-2.0"
] | null | null | null |
def print_hello():
    print('hello, world!')
| 10.75 | 22 | 0.674419 | 6 | 43 | 4.666667 | 0.666667 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 3 | 23 | 14.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.309524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1c3714f8b97190de54387c05067462c712019aa2 | 7,071 | py | Python | pyxform/tests_v1/test_support_external_instances.py | PMA-2020/pmaxform3 | 9d36f97f25cb09f0fb8aafb69370454731ecbbd5 | [
"BSD-2-Clause"
] | null | null | null | pyxform/tests_v1/test_support_external_instances.py | PMA-2020/pmaxform3 | 9d36f97f25cb09f0fb8aafb69370454731ecbbd5 | [
"BSD-2-Clause"
] | null | null | null | pyxform/tests_v1/test_support_external_instances.py | PMA-2020/pmaxform3 | 9d36f97f25cb09f0fb8aafb69370454731ecbbd5 | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Test external instance syntax
"""
from pyxform.tests_v1.pyxform_test_case import PyxformTestCase
class ExternalCSVInstancesTest(PyxformTestCase):
def test_external_csv_instances(self):
# re: https://github.com/XLSForm/pyxform/issues/30
self.assertPyxformXform(
name="ecsv",
md="""
| survey | | | |
| | type | name | label |
| | select_one_from_file cities.csv | city | City |
| | select_multiple_from_file neighbourhoods.csv | neighbourhoods | Neighbourhoods |
""", # noqa
xml__contains=[
"""<instance id="cities" src="jr://file-csv/cities.csv">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select1 ref="/ecsv/city">',
"<itemset nodeset=\"instance('cities')/root/item\">",
"""<instance id="neighbourhoods" src="jr://file-csv/neighbourhoods.csv">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select ref="/ecsv/neighbourhoods">',
"<itemset nodeset=\"instance('neighbourhoods')/root/item\">",
],
run_odk_validate=True,
)
def test_external_csv_instances_w_choice_filter(self):
# re: https://github.com/XLSForm/pyxform/issues/30
self.assertPyxformXform(
name="ecsv",
md="""
| survey | | | |
| | type | name | label | choice_filter |
| | select_one_from_file cities.csv | city | City | |
| | select_multiple_from_file neighbourhoods.csv | neighbourhoods | Neighbourhoods | city=${city} |
""", # noqa
xml__contains=[
"""<instance id="cities" src="jr://file-csv/cities.csv">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select1 ref="/ecsv/city">',
"""<instance id="neighbourhoods" src="jr://file-csv/neighbourhoods.csv">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select ref="/ecsv/neighbourhoods">',
"<itemset nodeset=\"instance('neighbourhoods')/root/item[city= /ecsv/city ]\">", # noqa
],
run_odk_validate=True,
)
class ExternalXMLInstancesTest(PyxformTestCase):
def test_external_xml_instances(self):
# re: https://github.com/XLSForm/pyxform/issues/30
self.assertPyxformXform(
name="exml",
md="""
| survey | | | |
| | type | name | label |
| | select_one_from_file cities.xml | city | City |
| | select_multiple_from_file neighbourhoods.xml | neighbourhoods | Neighbourhoods |
""", # noqa
xml__contains=[
"""<instance id="cities" src="jr://file/cities.xml">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select1 ref="/exml/city">',
"<itemset nodeset=\"instance('cities')/root/item\">",
"""<instance id="neighbourhoods" src="jr://file/neighbourhoods.xml">
<root>
<item>
<name>_</name>
<label>_</label>
</item>
</root>
</instance>""", # noqa
'<select ref="/exml/neighbourhoods">',
"<itemset nodeset=\"instance('neighbourhoods')/root/item\">",
],
run_odk_validate=True,
)
class InvalidExternalFileInstancesTest(PyxformTestCase):
def test_external_other_extension_instances(self):
# re: https://github.com/XLSForm/pyxform/issues/30
self.assertPyxformXform(
name="epdf",
md="""
| survey | | | |
| | type | name | label |
| | select_one_from_file cities.pdf | city | City |
| | select_multiple_from_file neighbourhoods.pdf | neighbourhoods | Neighbourhoods |
""", # noqa
errored=True,
error_contains=["should be a choices sheet in this xlsform"],
)
def test_external_choices_sheet_included_instances(self):
# re: https://github.com/XLSForm/pyxform/issues/30
self.assertPyxformXform(
name="epdf",
md="""
| survey | | | |
| | type | name | label |
| | select_one_from_file cities.pdf | city | City |
| | select_multiple_from_file neighbourhoods.pdf | neighbourhoods | Neighbourhoods |
| choices |
| | list name | name | label |
| | fruits | apple | Apple |
""", # noqa
errored=True,
error__contains=["List name not in choices sheet: cities.pdf"],
)
class ExternalCSVInstancesBugsTest(PyxformTestCase):
def test_non_existent_itext_reference(self):
# re: https://github.com/XLSForm/pyxform/issues/80
self.assertPyxformXform(
name="ecsv",
md="""
| survey | | | |
| | type | name | label |
| | select_one_from_file cities.csv | city | City |
| | select_multiple_from_file neighbourhoods.csv | neighbourhoods | Neighbourhoods |
""", # noqa
xml__contains=[
"""<itemset nodeset="instance('cities')/root/item">
<value ref="name"/>
<label ref="label"/>
</itemset>"""
],
)
| 42.089286 | 119 | 0.424975 | 514 | 7,071 | 5.663424 | 0.171206 | 0.043284 | 0.031261 | 0.03504 | 0.790106 | 0.745792 | 0.733425 | 0.71831 | 0.704569 | 0.704569 | 0 | 0.004421 | 0.45623 | 7,071 | 167 | 120 | 42.341317 | 0.752666 | 0.058125 | 0 | 0.63 | 0 | 0 | 0.605198 | 0.043825 | 0 | 0 | 0 | 0 | 0.06 | 1 | 0.06 | false | 0 | 0.01 | 0 | 0.11 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c6b7ce6628ed4c2c6ba2a9fb3019e845e0efb7a | 84 | py | Python | shiftevent/__init__.py | projectshift/shift-event | ce4b7cf5398dd8108de304e1fb64016b511bacc5 | [
"MIT"
] | null | null | null | shiftevent/__init__.py | projectshift/shift-event | ce4b7cf5398dd8108de304e1fb64016b511bacc5 | [
"MIT"
] | 5 | 2018-07-30T09:46:41.000Z | 2018-09-10T10:43:43.000Z | shiftevent/__init__.py | projectshift/shift-event | ce4b7cf5398dd8108de304e1fb64016b511bacc5 | [
"MIT"
] | null | null | null | from .event_service import EventService
from .event import Event
from .db import Db
| 21 | 39 | 0.821429 | 13 | 84 | 5.230769 | 0.461538 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 84 | 3 | 40 | 28 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98f9ba4c752a7f9c1271d58b61768db7efcba440 | 254 | py | Python | strings/tests/test_implement_str_str.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | strings/tests/test_implement_str_str.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | strings/tests/test_implement_str_str.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | 3 | 2020-10-07T20:24:45.000Z | 2020-12-16T04:53:19.000Z | from strings.implement_str_str import str_str
def test_str_str():
assert str_str('hello', 'll') == 2
assert str_str('test', '') == 0
assert str_str('foo', 'bar') == -1
assert str_str('a', 'aa') == -1
assert str_str('aaa', 'a') == 0
| 25.4 | 45 | 0.590551 | 41 | 254 | 3.414634 | 0.439024 | 0.342857 | 0.428571 | 0.185714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024876 | 0.208661 | 254 | 9 | 46 | 28.222222 | 0.671642 | 0 | 0 | 0 | 0 | 0 | 0.094488 | 0 | 0 | 0 | 0 | 0 | 0.714286 | 1 | 0.142857 | true | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c709246d79078a23fde624f1ae90f9aad90b2cf4 | 1,089 | py | Python | test/authentication/test_logout.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | 2 | 2020-10-08T21:42:56.000Z | 2021-03-21T08:17:52.000Z | test/authentication/test_logout.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | null | null | null | test/authentication/test_logout.py | rubelw/auth0_client | 51e68239babcf7c40e40491d1aaa3f8547a67f63 | [
"MIT"
] | null | null | null | import unittest
import mock
from auth0_client.v3.authentication.logout import Logout
class TestLogout(unittest.TestCase):
@mock.patch('auth0_client.v3.authentication.logout.Logout.get')
def test_logout(self, mock_get):
g = Logout('my.domain.com')
g.logout(client_id='cid',
return_to='rto')
args, kwargs = mock_get.call_args
self.assertEqual(args[0], 'https://my.domain.com/v2/logout?client_id=cid&returnTo=rto')
self.assertEqual(kwargs['headers'], {
'Content-Type': 'application/json'
})
@mock.patch('auth0_client.v3.authentication.logout.Logout.get')
def test_federated_logout(self, mock_get):
g = Logout('my.domain.com')
g.logout(client_id='cid',
return_to='rto',
federated=True)
args, kwargs = mock_get.call_args
self.assertEqual(args[0], 'https://my.domain.com/v2/logout?federated&client_id=cid&returnTo=rto')
self.assertEqual(kwargs['headers'], {
'Content-Type': 'application/json'
})
| 28.657895 | 105 | 0.630854 | 134 | 1,089 | 4.992537 | 0.30597 | 0.041854 | 0.06577 | 0.121076 | 0.847534 | 0.798206 | 0.798206 | 0.798206 | 0.798206 | 0.798206 | 0 | 0.011919 | 0.229568 | 1,089 | 37 | 106 | 29.432432 | 0.785459 | 0 | 0 | 0.56 | 0 | 0.04 | 0.30303 | 0.088154 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c725e05e913a7cdcaff62a2f96ebb30f80f29197 | 840 | py | Python | temboo/core/Library/eBay/Finding/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/eBay/Finding/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/eBay/Finding/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.eBay.Finding.FindCompletedItems import FindCompletedItems, FindCompletedItemsInputSet, FindCompletedItemsResultSet, FindCompletedItemsChoreographyExecution
from temboo.Library.eBay.Finding.FindItemsAdvanced import FindItemsAdvanced, FindItemsAdvancedInputSet, FindItemsAdvancedResultSet, FindItemsAdvancedChoreographyExecution
from temboo.Library.eBay.Finding.FindItemsByImage import FindItemsByImage, FindItemsByImageInputSet, FindItemsByImageResultSet, FindItemsByImageChoreographyExecution
from temboo.Library.eBay.Finding.FindItemsByProduct import FindItemsByProduct, FindItemsByProductInputSet, FindItemsByProductResultSet, FindItemsByProductChoreographyExecution
from temboo.Library.eBay.Finding.GetHistograms import GetHistograms, GetHistogramsInputSet, GetHistogramsResultSet, GetHistogramsChoreographyExecution
| 140 | 175 | 0.916667 | 55 | 840 | 14 | 0.472727 | 0.064935 | 0.11039 | 0.136364 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 840 | 5 | 176 | 168 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c74e5adfddaee9185e9b60592ade7fa035c1905b | 41 | py | Python | kivymd/uix/dropdownitem/__init__.py | AnEx07/KivyMD | e4004a570ad3f1874b3540cc1b0c243b3037bba8 | [
"MIT"
] | null | null | null | kivymd/uix/dropdownitem/__init__.py | AnEx07/KivyMD | e4004a570ad3f1874b3540cc1b0c243b3037bba8 | [
"MIT"
] | null | null | null | kivymd/uix/dropdownitem/__init__.py | AnEx07/KivyMD | e4004a570ad3f1874b3540cc1b0c243b3037bba8 | [
"MIT"
] | null | null | null | from .dropdownitem import MDDropDownItem
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c752bc69f299b69e947bc51fc332212eb3810b95 | 20 | py | Python | models/__init__.py | kex5n/Vehicles-Dispatch-Simulator | d0cca03fbf56e4b0ceeef8dafc59de105c1d4507 | [
"MIT"
] | 2 | 2020-02-08T06:09:37.000Z | 2020-02-09T04:11:20.000Z | models/__init__.py | kex5n/Vehicles-Dispatch-Simulator | d0cca03fbf56e4b0ceeef8dafc59de105c1d4507 | [
"MIT"
] | null | null | null | models/__init__.py | kex5n/Vehicles-Dispatch-Simulator | d0cca03fbf56e4b0ceeef8dafc59de105c1d4507 | [
"MIT"
] | null | null | null | from .dqn import DQN | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c75d24fb1ca74abb1d21c56cb9dab5046f360f39 | 99 | py | Python | tests/data/nested_raise_without_from.py | jdufresne/flake8-raise | 22415a4ae85a9dbb859cc92252ad5f7252b8fc98 | [
"MIT"
] | 21 | 2020-01-19T17:33:07.000Z | 2021-10-02T16:53:40.000Z | tests/data/nested_raise_without_from.py | jdufresne/flake8-raise | 22415a4ae85a9dbb859cc92252ad5f7252b8fc98 | [
"MIT"
] | 3 | 2020-01-20T08:47:49.000Z | 2020-01-30T16:39:50.000Z | tests/data/nested_raise_without_from.py | jdufresne/flake8-raise | 22415a4ae85a9dbb859cc92252ad5f7252b8fc98 | [
"MIT"
] | null | null | null | try:
pass
except ValueError:
try:
pass
except OSError:
raise TypeError
| 12.375 | 23 | 0.575758 | 10 | 99 | 5.7 | 0.7 | 0.245614 | 0.45614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.383838 | 99 | 7 | 24 | 14.142857 | 0.934426 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.285714 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c7664fff21f573f38379359f7b3aa4285e079324 | 32,548 | py | Python | tests/views/test_reporting_units.py | ONSdigital/response-operations-ui | 1ec70c89e443fdfba620af328a4a13ce67459aa8 | [
"MIT"
] | 3 | 2018-03-06T12:33:11.000Z | 2021-03-09T09:20:55.000Z | tests/views/test_reporting_units.py | ONSdigital/response-operations-ui | 1ec70c89e443fdfba620af328a4a13ce67459aa8 | [
"MIT"
] | 519 | 2017-11-30T16:32:24.000Z | 2022-03-28T13:37:57.000Z | tests/views/test_reporting_units.py | ONSdigital/response-operations-ui | 1ec70c89e443fdfba620af328a4a13ce67459aa8 | [
"MIT"
] | 2 | 2020-01-21T20:27:32.000Z | 2021-04-11T07:45:16.000Z | import json
import os
import re
from random import randint
from unittest import TestCase
import requests_mock
from config import TestingConfig
from response_operations_ui import create_app
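# Shared identifiers used by the stubbed URLs and JSON fixtures below.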
respondent_party_id = "cd592e0f-8d07-407b-b75d-e01fbdae8233"
business_party_id = "b3ba864b-7cbc-4f44-84fe-88dc018a1a4c"
ru_ref = "50012345678"
collection_exercise_id_1 = "14fb3e68-4dca-46db-bf49-04b84e07e77c"
collection_exercise_id_2 = "9af403f8-5fc5-43b1-9fca-afbd9c65da5c"
iac_1 = "jkbvyklkwj88"
iac_2 = "ljbgg3kgstr4"
survey_id = "cb0711c3-0ac8-41d3-ae0e-567e5ea1ef87"
case_id = "10b04906-f478-47f9-a985-783400dd8482"
CONNECTION_ERROR = "Connection error"
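# Stubbed service endpoints (party, case, collection exercise, survey, IAC) called by the views under test.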
url_search_reporting_units = f"{TestingConfig.PARTY_URL}/party-api/v1/businesses/search"
get_respondent_by_id_url = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/id/{respondent_party_id}"
url_edit_contact_details = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/id/{respondent_party_id}"
url_post_case_event = f"{TestingConfig.CASE_URL}/cases/{case_id}/events"
url_change_enrolment_status = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/change_enrolment_status"
url_change_respondent_status = (
f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/edit-account-status/" f"{respondent_party_id}"
)
url_get_business_by_ru_ref = f"{TestingConfig.PARTY_URL}/party-api/v1/businesses/ref/{ru_ref}"
url_get_cases_by_business_party_id = f"{TestingConfig.CASE_URL}/cases/partyid/{business_party_id}"
url_get_collection_exercise_by_id = f"{TestingConfig.COLLECTION_EXERCISE_URL}/collectionexercises"
url_get_business_attributes = f"{TestingConfig.PARTY_URL}/party-api/v1/businesses/id/{business_party_id}/attributes"
url_get_survey_by_id = f"{TestingConfig.SURVEY_URL}/surveys/{survey_id}"
url_get_respondent_party_by_party_id = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/id/{respondent_party_id}"
url_get_respondent_party_by_list = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents?id={respondent_party_id}"
url_get_iac = f"{TestingConfig.IAC_URL}/iacs"
url_get_case = f"{TestingConfig.CASE_URL}/cases/{case_id}?iac=true"
project_root = os.path.dirname(os.path.dirname(__file__))
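# Load shared JSON fixtures used as mocked service responses throughout these tests.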
with open(f"{project_root}/test_data/reporting_units/respondent.json") as fp:
respondent = json.load(fp)
with open(f"{project_root}/test_data/reporting_units/respondent_with_pending_email.json") as fp:
respondent_with_pending_email = json.load(fp)
with open(f"{project_root}/test_data/case/case.json") as fp:
case = json.load(fp)
with open(f"{project_root}/test_data/party/business_reporting_unit.json") as fp:
business_reporting_unit = json.load(fp)
with open(f"{project_root}/test_data/case/cases_list.json") as fp:
cases_list = json.load(fp)
with open(f"{project_root}/test_data/case/cases_list_completed.json") as fp:
cases_list_completed = json.load(fp)
with open(f"{project_root}/test_data/case/case_groups_list.json") as fp:
case_groups = json.load(fp)
with open(f"{project_root}/test_data/collection_exercise/collection_exercise.json") as fp:
collection_exercise = json.load(fp)
with open(f"{project_root}/test_data/collection_exercise/collection_exercise_2.json") as fp:
collection_exercise_2 = json.load(fp)
with open(f"{project_root}/test_data/party/business_party.json") as fp:
business_party = json.load(fp)
with open(f"{project_root}/test_data/party/business_attributes.json") as fp:
business_attributes = json.load(fp)
with open(f"{project_root}/test_data/case/case_group_statuses.json") as fp:
case_group_statuses = json.load(fp)
with open(f"{project_root}/test_data/survey/single_survey.json") as fp:
survey = json.load(fp)
with open(f"{project_root}/test_data/party/respondent_party.json") as fp:
respondent_party = json.load(fp)
with open(f"{project_root}/test_data/party/respondent_party_list.json") as fp:
respondent_party_list = json.load(fp)
with open(f"{project_root}/test_data/iac/iac.json") as fp:
iac = json.load(fp)
class TestReportingUnits(TestCase):
def setUp(self):
self.app = create_app("TestingConfig")
self.client = self.app.test_client()
@requests_mock.mock()
def test_get_reporting_unit(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
self.assertEqual(response.status_code, 200)
self.assertIn("Bolts and Ratchets Ltd".encode(), response.data)
self.assertIn("50012345678".encode(), response.data)
self.assertIn("221 BLOCKS".encode(), response.data)
self.assertIn("Not started".encode(), response.data)
@requests_mock.mock()
def test_get_reporting_unit_party_ru_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, status_code=500)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_cases_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, status_code=500)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 2)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_cases_404(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, status_code=404)
mock_request.get(url_get_business_attributes, json={})
mock_request.get(url_get_respondent_party_by_list, json=[])
response = self.client.get("/reporting-units/50012345678")
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_get_reporting_unit_collection_exercise_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", status_code=500)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", status_code=500)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 3)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_party_id_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, status_code=500)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 5)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_survey_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_survey_by_id, status_code=500)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 5)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_respondent_party_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_respondent_party_by_party_id, status_code=500)
response = self.client.get("/reporting-units/50012345678/surveys/BLOCKS", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 5)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_iac_fail(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", status_code=500)
response = self.client.get("/reporting-units/50012345678/surveys/BLOCKS", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 7)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_get_reporting_unit_iac_404(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", status_code=404)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
self.assertEqual(response.status_code, 200)
self.assertIn("Bolts and Ratchets Ltd".encode(), response.data)
self.assertIn("50012345678".encode(), response.data)
@requests_mock.mock()
def test_get_reporting_unit_hides_change_link_when_no_available_statuses(self, mock_request):
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list_completed)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.get("/reporting-units/50012345678", follow_redirects=True)
self.assertEqual(response.status_code, 200)
self.assertNotIn("ChaFnge</a>".encode(), response.data)
@requests_mock.mock()
def test_search_reporting_units_for_1_business_redirects_and_holds_correct_data(self, mock_request):
mock_business_search_response = {"businesses": [{"name": "test", "ruref": "123456"}], "total_business_count": 2}
mock_request.get(url_search_reporting_units, json=mock_business_search_response)
response = self.client.post("/reporting-units", follow_redirects=True)
self.assertEqual(response.status_code, 200)
self.assertIn("test".encode(), response.data)
self.assertIn("123456".encode(), response.data)
@requests_mock.mock()
def test_search_reporting_units_fail(self, mock_request):
mock_request.get(url_search_reporting_units, status_code=500)
response = self.client.post("/reporting-units", follow_redirects=True)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_search_reporting_units_show_correct_pagination_data(self, mock_request):
mock_business_search_response = TestReportingUnits._build_test_ru_search_response_data(75)
mock_request.get(url_search_reporting_units, json=mock_business_search_response)
form_data = {"query": ""}
response = self.client.post("/reporting-units", data=form_data, follow_redirects=True)
self.assertEqual(response.status_code, 200)
data = re.sub("<[^<]+?>", "", response.data.decode()) # Strip out html tags from the response data
self.assertIn("75 Results found", data)
self.assertIn("Displaying 1 - 25 of 75", data)
self.assertIn("Page 1 of 3", data) # Validates the page count is correct
self.assertIn("Previous 123Next", data) # Validates Pagination controls displayed
@requests_mock.mock()
def test_search_reporting_units_no_results_displays_correctly(self, mock_request):
mock_business_search_response = {"businesses": [], "total_business_count": 0}
mock_request.get(url_search_reporting_units, json=mock_business_search_response)
form_data = {"query": ""}
response = self.client.post("/reporting-units", data=form_data, follow_redirects=True)
self.assertEqual(response.status_code, 200)
data = re.sub("<[^<]+?>", "", response.data.decode()) # Strip out html tags from the response data
self.assertIn("No results found", data)
@requests_mock.mock()
def test_search_reporting_units_for_specific_name_displays_correctly(self, mock_request):
ru_ref_num = "12345678901" # named so as to not clash with outer definition of ru_ref
mock_response = {"businesses": [{"name": "SomeName", "ruref": ru_ref_num}], "total_business_count": 1}
mock_request.get(url_search_reporting_units, json=mock_response)
form_data = {"query": "SomeName"}
response = self.client.post("/reporting-units", data=form_data, follow_redirects=True)
self.assertEqual(response.status_code, 200)
data = response.data.decode()
self.assertIn("1 Result found", data)
self.assertIn('value="SomeName"', data) # Validates that search term is displayed in text entry box
# now validate that the ru is displayed as an href
self.assertIn(f'href="/reporting-units/{ru_ref_num}" name="details-link-{ru_ref_num}">{ru_ref_num}', data)
@requests_mock.mock()
def test_resend_verification_email(self, mock_request):
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"reporting-units/resend_verification/50012345678/{respondent_party_id}")
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_resend_verification_email_to_pending_email_address(self, mock_request):
mock_request.get(get_respondent_by_id_url, json=respondent_with_pending_email)
response = self.client.get(f"reporting-units/resend_verification/50012345678/{respondent_party_id}")
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_change_respondent_status(self, mock_request):
mock_request.put(url_change_respondent_status)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.post(
f"reporting-units/50012345678/change-respondent-status"
f"?respondent_id={respondent_party_id}&change_flag=ACTIVE",
follow_redirects=True,
)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_change_respondent_status_fail(self, mock_request):
mock_request.put(url_change_respondent_status, status_code=500)
response = self.client.post(
f"reporting-units/50012345678/change-respondent-status"
f"?respondent_id={respondent_party_id}&change_flag=ACTIVE",
follow_redirects=True,
)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_confirm_change_respondent_status(self, mock_request):
mock_request.get(get_respondent_by_id_url)
mock_request.put(url_change_respondent_status)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_party_id, json=respondent_party)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.get(
f"reporting-units/50012345678/change-respondent-status"
f"?party_id={respondent_party_id}&change_flag=ACTIVE&tab=reporting_units",
follow_redirects=True,
)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_get_contact_details(self, mock_request):
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}")
self.assertEqual(response.status_code, 200)
self.assertIn("Jacky".encode(), response.data)
self.assertIn("Turner".encode(), response.data)
self.assertIn("0987654321".encode(), response.data)
@requests_mock.mock()
def test_get_contact_details_fail(self, mock_request):
mock_request.get(get_respondent_by_id_url, status_code=500)
response = self.client.get(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}", follow_redirects=True
)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_edit_contact_details(self, mock_request):
changed_details = {
"first_name": "Tom",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161867",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_edit_contact_details_email_already_exists(self, mock_request):
changed_details = {
"first_name": "Tom",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161859",
}
mock_request.get(get_respondent_by_id_url, json=respondent)
mock_request.put(url_edit_contact_details, status_code=409)
response = self.client.post(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}",
data=changed_details,
follow_redirects=True,
)
self.assertIn("Error - email address already exists".encode(), response.data)
@requests_mock.mock()
def test_edit_contact_details_404_response(self, mock_request):
changed_details = {
"first_name": "Tom",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161859",
}
mock_request.get(get_respondent_by_id_url, json=respondent)
mock_request.put(url_edit_contact_details, status_code=404)
response = self.client.post(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}",
data=changed_details,
follow_redirects=True,
)
self.assertIn(CONNECTION_ERROR.encode(), response.data)
@requests_mock.mock()
def test_edit_contact_details_500_response(self, mock_request):
changed_details = {
"first_name": "Tom",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161867",
}
mock_request.get(get_respondent_by_id_url, json=respondent)
mock_request.put(url_edit_contact_details, status_code=500)
response = self.client.post(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}",
data=changed_details,
follow_redirects=True,
)
self.assertIn(CONNECTION_ERROR.encode(), response.data)
@requests_mock.mock()
def test_edit_contact_details_error_response(self, mock_request):
changed_details = {
"first_name": "Tom",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161867",
}
mock_request.get(get_respondent_by_id_url, json=respondent)
mock_request.put(url_edit_contact_details, status_code=405)
response = self.client.post(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}",
data=changed_details,
follow_redirects=True,
)
self.assertIn(CONNECTION_ERROR.encode(), response.data)
@requests_mock.mock()
def test_edit_contact_details_last_name_change(self, mock_request):
changed_details = {
"first_name": "Jacky",
"last_name": "Smith",
"email": "Jacky.Turner@email.com",
"telephone": "7971161859",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
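# Shared fixture for the edit-contact-details happy-path tests: it stubs the
# respondent lookup, the contact-details PUT and every downstream service call
# needed to re-render the reporting unit page after the redirect, then posts
# the changed details and returns the response.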
def mock_for_change_details(self, changed_details, mock_request):
mock_request.get(get_respondent_by_id_url, json=respondent)
mock_request.put(url_edit_contact_details)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.post(
f"/reporting-units/50012345678/edit-contact-details/{respondent_party_id}",
data=changed_details,
follow_redirects=True,
)
return response
@requests_mock.mock()
def test_edit_contact_details_telephone_change(self, mock_request):
changed_details = {
"first_name": "Jacky",
"last_name": "Turner",
"email": "Jacky.Turner@email.com",
"telephone": "7971161867",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_edit_contact_details_email_change(self, mock_request):
changed_details = {
"first_name": "Jacky",
"last_name": "Turner",
"email": "Jacky.Turner@thisemail.com",
"telephone": "7971161859",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_edit_contact_details_email_change_with_trailing_space(self, mock_request):
changed_details = {
"first_name": "Jacky",
"last_name": "Turner",
"email": r"Jacky.Turner@thisemail.com ",
"telephone": "7971161859",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
self.assertIsNot(r"Jacky.Turner@thisemail.com ".encode(), response.data)
@requests_mock.mock()
def test_edit_contact_details_and_email_change(self, mock_request):
changed_details = {
"first_name": "Jacky",
"last_name": "Turner",
"email": "Jacky.Turner@thisemail.com",
"telephone": "7971161867",
}
response = self.mock_for_change_details(changed_details, mock_request)
self.assertEqual(response.status_code, 200)
@requests_mock.mock()
def test_reporting_unit_generate_new_code(self, mock_request):
mock_request.post(url_post_case_event)
mock_request.get(url_get_case, json=case)
response = self.client.get(
f"/reporting-units/{ru_ref}/new_enrolment_code?case_id={case['id']}&"
"survey_name=test_survey_name&trading_as=trading_name&ru_name=test_ru_name",
follow_redirects=True,
)
self.assertEqual(response.status_code, 200)
self.assertIn("jkbvyklkwj88".encode(), response.data)
self.assertIn("test_ru_name".encode(), response.data)
self.assertIn("trading_name".encode(), response.data)
self.assertIn("test_survey_name".encode(), response.data)
@requests_mock.mock()
def test_reporting_unit_generate_new_code_event_fail(self, mock_request):
mock_request.post(url_post_case_event, status_code=500)
response = self.client.get(
f"/reporting-units/{ru_ref}/new_enrolment_code?case_id={case['id']}", follow_redirects=True
)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@requests_mock.mock()
def test_reporting_unit_generate_new_code_case_fail(self, mock_request):
mock_request.post(url_post_case_event)
mock_request.get(url_get_case, status_code=500)
response = self.client.get(
f"/reporting-units/{ru_ref}/new_enrolment_code?case_id={case['id']}&"
"survey_name=test_survey_name&trading_as=trading_name&ru_name=test_ru_name",
follow_redirects=True,
)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 2)
self.assertEqual(response.status_code, 500)
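# Two entries in request_history here: the case-event POST (mocked to succeed)
# and the case GET (mocked to return 500), unlike the event-fail test above
# where only the first call is made.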
def test_disable_enrolment_view(self):
response = self.client.get(
"/reporting-units/ru_ref/change-enrolment-status"
"?survey_id=test_id&survey_name=test_survey_name&respondent_id=test_id"
"&respondent_first_name=first_name&respondent_last_name=last_name"
"&business_id=test_id"
"&trading_as=test_name&change_flag=DISABLED"
"&ru_name=test_ru_name&tab=reporting_units"
)
self.assertEqual(response.status_code, 200)
self.assertIn("test_ru_name".encode(), response.data)
self.assertIn("test_name".encode(), response.data)
self.assertIn("test_survey_name".encode(), response.data)
self.assertIn("first_name".encode(), response.data)
self.assertIn("Disable enrolment".encode(), response.data)
@requests_mock.mock()
def test_disable_enrolment_post(self, mock_request):
mock_request.put(url_change_enrolment_status)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_cases_by_business_party_id, json=cases_list)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_1}", json=collection_exercise)
mock_request.get(f"{url_get_collection_exercise_by_id}/{collection_exercise_id_2}", json=collection_exercise_2)
mock_request.get(url_get_business_attributes, json=business_attributes)
mock_request.get(url_get_survey_by_id, json=survey)
mock_request.get(url_get_respondent_party_by_list, json=respondent_party_list)
mock_request.get(f"{url_get_iac}/{iac_1}", json=iac)
mock_request.get(f"{url_get_iac}/{iac_2}", json=iac)
response = self.client.post(
"/reporting-units/50012345678/change-enrolment-status"
"?survey_id=test_id&respondent_id=test_id&business_id=test_id&change_flag=DISABLED",
follow_redirects=True,
)
self.assertEqual(response.status_code, 200)
self.assertIn("Bolts and Ratchets Ltd".encode(), response.data)
@requests_mock.mock()
def test_disable_enrolment_post_fail(self, mock_request):
mock_request.put(url_change_enrolment_status, status_code=500)
response = self.client.post(
"/reporting-units/50012345678/change-enrolment-status"
"?survey_id=test_id&respondent_id=test_id&business_id=test_id&change_flag=DISABLED",
follow_redirects=True,
)
request_history = mock_request.request_history
self.assertEqual(len(request_history), 1)
self.assertEqual(response.status_code, 500)
@staticmethod
def _build_test_ru_search_response_data(count):
businesses = [{"name": f"{i}_name", "ruref": f"{randint(0, 100000000000)}"} for i in range(count)]
return {"businesses": businesses, "total_business_count": count}
| 48.579104 | 120 | 0.728801 | 4,312 | 32,548 | 5.108534 | 0.05821 | 0.091883 | 0.071818 | 0.050163 | 0.873025 | 0.851643 | 0.839704 | 0.827447 | 0.794897 | 0.778328 | 0 | 0.031207 | 0.165141 | 32,548 | 669 | 121 | 48.651719 | 0.77945 | 0.009985 | 0 | 0.605119 | 0 | 0 | 0.239081 | 0.19702 | 0 | 0 | 0 | 0 | 0.151737 | 1 | 0.076782 | false | 0 | 0.014625 | 0 | 0.096892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4001ae10566099b627db4979f72b2da91561b25b | 7,361 | py | Python | stackprinter/colorschemes.py | cknd/talkative_tracebacks | 02516555ea4f070d15bc39d4b26b448ba686ff17 | [
"MIT"
] | 1,233 | 2019-04-23T10:51:05.000Z | 2022-03-25T23:50:20.000Z | stackprinter/colorschemes.py | cknd/talkative_tracebacks | 02516555ea4f070d15bc39d4b26b448ba686ff17 | [
"MIT"
] | 40 | 2019-04-28T21:29:41.000Z | 2022-02-18T03:47:32.000Z | stackprinter/colorschemes.py | cknd/talkative_tracebacks | 02516555ea4f070d15bc39d4b26b448ba686ff17 | [
"MIT"
] | 48 | 2019-04-28T23:25:54.000Z | 2022-02-22T20:12:52.000Z | import random
__all__ = ['color', 'darkbg', 'darkbg2', 'darkbg3',
'lightbg', 'lightbg2', 'lightbg3']
class ColorScheme():
def __getitem__(self, name):
raise NotImplementedError
def get_random(self):
raise NotImplementedError
class darkbg(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0.0, 0.9, 0.6, False),
'exception_msg': (0.0, 0.9, 0.6, True),
'highlight': (0.0, 0., 0.8, True),
'header': (0., 0., 0.3, False),
'lineno': (0., 0.0, 0.1, False),
'arrow_lineno': (0., 0.0, 0.2, True),
'dots': (0., 0.0, 0.6, False),
'source_bold': (0.,0., 0.6, True),
'source_default': (0.,0., 0.7, False),
'source_comment': (0.,0.,0.2, False),
'var_invisible': (0.6, 0.4, 0.4, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05,0.7)
# if hue < 0:
# hue = hue + 1
sat = 1. #1. if highlight else 0.5
val = 0.5 #1. if highlight else 0.3
bold = highlight
return hue, sat, val, bold
class darkbg2(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0., 1., 0.8, True),
'exception_msg': (0., 1., 0.8, True),
'highlight': (0., 0., 1., True),
'header': (0, 0, 0.6, False),
'lineno': (0, 0, 0.2, True),
'arrow_lineno': (0, 0, 0.8, True),
'dots': (0, 0, 0.4, False),
'source_bold': (0.,0.,0.8, True),
'source_default': (0.,0.,0.8, False),
'source_comment': (0.,0.,0.2, False),
'var_invisible': (0.6, 0.4, 0.4, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05,0.7)
# if hue < 0:
# hue = hue + 1
sat = 1. if highlight else 1.
val = 0.8 #if highlight else 0.5
bold = highlight
return hue, sat, val, bold
class darkbg3(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0., 1., 0.8, True),
'exception_msg': (0., 1., 0.8, True),
'highlight': (0., 1., 0.8, True),
'header': (0, 0, 0.8, True),
'lineno': (0, 0, 0.2, True),
'arrow_lineno': (0, 0, 0.8, True),
'dots': (0, 0, 0.4, False),
'source_bold': (0.,0.,0.8, True),
'source_default': (0.,0.,0.8, False),
'source_comment': (0.,0.,0.2, False),
'var_invisible': (0.6, 0.4, 0.4, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05,0.7)
# if hue < 0:
# hue = hue + 1
sat = 1. if highlight else 1.
val = 0.8 if highlight else 0.5
bold = highlight
return hue, sat, val, bold
class lightbg(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0.0, 1., 0.6, False),
'exception_msg': (0.0, 1., 0.6, True),
'highlight': (0.0, 0, 0., True),
'header': (0, 0, 0.2, False),
'lineno': (0, 0, 0.8, True),
'arrow_lineno': (0, 0, 0.3, True),
'dots': (0, 0, 0.4, False),
'source_bold': (0.,0.,0.2, True),
'source_default': (0.,0.,0.1, False),
'source_comment': (0.,0.,0.6, False),
'var_invisible': (0.6, 0.4, 0.2, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05, 0.7)
# if hue < 0:
# hue = hue + 1
sat = 1.
val = 0.5 #0.5 #0.6 if highlight else 0.2
bold = highlight
return hue, sat, val, bold
class lightbg2(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0.0, 1., 0.6, False),
'exception_msg': (0.0, 1., 0.6, True),
'highlight': (0.0, 0, 0., True),
'header': (0, 0, 0.1, False),
'lineno': (0, 0, 0.5, True),
'arrow_lineno': (0, 0, 0.1, True),
'dots': (0, 0, 0.4, False),
'source_bold': (0.,0.,0.1, True),
'source_default': (0.,0.,0., False),
'source_comment': (0.,0.,0.6, False),
'var_invisible': (0.6, 0.4, 0.2, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05, 0.7)
# if hue < 0:
# hue = hue + 1
sat = 1.
val = 0.5
bold = True
return hue, sat, val, bold
class lightbg3(ColorScheme):
# Hue, Sat, Val, Bold
colors = {'exception_type': (0.0, 1., 0.7, False),
'exception_msg': (0.0, 1., 0.7, True),
'highlight': (0.0, 1., 0.6, True),
'header': (0, 0, 0.1, True),
'lineno': (0, 0, 0.5, True),
'arrow_lineno': (0, 0, 0.1, True),
'dots': (0, 0, 0.4, False),
'source_bold': (0.,0.,0., True),
'source_default': (0.,0.,0., False),
'source_comment': (0.,0.,0.6, False),
'var_invisible': (0.6, 0.4, 0.2, False)
}
def __init__(self):
self.rng = random.Random()
def __getitem__(self, name):
return self.colors[name]
def get_random(self, seed, highlight):
self.rng.seed(seed)
hue = self.rng.uniform(0.05, 0.7)
# if hue < 0:
# hue = hue + 1
sat = 1.
val = 0.5
bold = True
return hue, sat, val, bold
color = darkbg2
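# Minimal usage sketch (hedged; get_ansi_tpl lives elsewhere in stackprinter,
# see the __main__ block below):
#
#   scheme = color()                        # i.e. darkbg2()
#   hue, sat, val, bold = scheme['header']  # fixed colour for a semantic role
#   hue, sat, val, bold = scheme.get_random(seed='variable_name', highlight=True)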
if __name__ == '__main__':
import numpy as np
from utils import get_ansi_tpl
for hue in np.arange(0,1.05,0.05):
print('\n\nhue %.2f\nsat' % hue)
for sat in np.arange(0,1.05,0.05):
print('%.2f ' % sat, end='')
for val in np.arange(0,1.05,0.05):
tpl = get_ansi_tpl(hue, sat, val)
# number = " (%.1f %.1f %.1f)" % (hue, sat, val)
number = ' %.2f' % val
print(tpl % number, end='')
print(' %.2f' % sat)
| 29.210317 | 64 | 0.440701 | 957 | 7,361 | 3.267503 | 0.079415 | 0.069076 | 0.050847 | 0.049888 | 0.867925 | 0.821554 | 0.784778 | 0.762072 | 0.726895 | 0.712824 | 0 | 0.090013 | 0.393289 | 7,361 | 251 | 65 | 29.326693 | 0.610166 | 0.060318 | 0 | 0.654545 | 0 | 0 | 0.114045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.018182 | 0.036364 | 0.290909 | 0.024242 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
403b80c13dd122ac54e744c585147065f5c58cfa | 4,678 | py | Python | seq2seq/layers.py | anna1995d/signature_verification-1 | eb5c9e8486f7e0135e71080f26649b0e438a1a3d | [
"MIT"
] | 14 | 2018-03-01T08:51:39.000Z | 2021-03-27T17:41:33.000Z | seq2seq/layers.py | anna1995d/signature_verification-1 | eb5c9e8486f7e0135e71080f26649b0e438a1a3d | [
"MIT"
] | 3 | 2019-02-15T06:39:18.000Z | 2020-08-10T08:42:07.000Z | seq2seq/layers.py | kahrabian/signature_verification | 2a35bb2c7c934bd94104cf9e1fd83e18bd4846ee | [
"MIT"
] | 10 | 2017-10-30T16:59:26.000Z | 2021-04-23T01:26:16.000Z | import keras.backend as K
from keras import initializers, regularizers, constraints
from keras.layers import Layer
class AttentionWithContext(Layer):
def __init__(self,
kernel_regularizer=None, align_regularizer=None, bias_regularizer=None,
kernel_constraint=None, align_constraint=None, bias_constraint=None,
use_bias=True, **kwargs):
self.kernel = None
self.bias = None
self.align = None
self.supports_masking = True
self.kernel_initializer = initializers.get('glorot_uniform')
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.align_regularizer = regularizers.get(align_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.align_constraint = constraints.get(align_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.use_bias = use_bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.kernel = self.add_weight(
name='kernel',
shape=(input_shape[-1], input_shape[-1],),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint
)
if self.use_bias:
self.bias = self.add_weight(
name='bias',
shape=(input_shape[-1],),
initializer='zero',
regularizer=self.bias_regularizer,
constraint=self.bias_constraint
)
else:
self.bias = None
self.align = self.add_weight(
name='align',
shape=(input_shape[-1],),
initializer=self.kernel_initializer,
regularizer=self.align_regularizer,
constraint=self.align_constraint
)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, inputs, mask=None):
return None
def call(self, inputs, mask=None):
uit = K.tanh(K.dot(inputs, self.kernel) + (self.bias if self.use_bias else 0))
ait = K.sum(uit * self.align, axis=2) if K.backend() == 'tensorflow' else K.dot(uit, self.align)
a = K.exp(ait) * (K.cast(mask, K.floatx()) if mask is not None else 1)
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
return K.sum(inputs * a, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
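# Usage sketch (hedged; the encoder below is illustrative, not part of this
# module): the layer expects a 3-D (batch, timesteps, features) tensor, e.g.
# the output of a recurrent encoder with return_sequences=True, and returns a
# 2-D (batch, features) summary weighted over the timesteps.
#
#   from keras.layers import GRU, Input
#   from keras.models import Model
#
#   inputs = Input(shape=(timesteps, n_features))
#   encoded = GRU(64, return_sequences=True)(inputs)
#   summary = AttentionWithContext()(encoded)
#   model = Model(inputs, summary)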
class Attention(Layer):
def __init__(self,
kernel_regularizer=None, bias_regularizer=None,
kernel_constraint=None, bias_constraint=None,
use_bias=True, **kwargs):
self.kernel = None
self.bias = None
self.supports_masking = True
self.kernel_initializer = initializers.get('glorot_uniform')
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.use_bias = use_bias
super(Attention, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.kernel = self.add_weight(
name='kernel',
shape=(input_shape[-1], input_shape[-1],),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint
)
if self.use_bias:
self.bias = self.add_weight(
name='bias',
shape=(input_shape[-1],),
initializer='zero',
regularizer=self.bias_regularizer,
constraint=self.bias_constraint
)
else:
self.bias = None
super(Attention, self).build(input_shape)
def compute_mask(self, inputs, mask=None):
return None
def call(self, inputs, mask=None):
eij = K.tanh(K.dot(inputs, self.kernel) + (self.bias if self.use_bias else 0))
ai = K.exp(eij) * K.expand_dims(K.cast(mask, K.floatx()) if mask is not None else 1)
a = ai / K.cast(K.sum(ai, axis=1, keepdims=True) + K.epsilon(), K.floatx())
return K.sum(inputs * a, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
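# Note on the two layers above: AttentionWithContext scores each timestep with
# a learned context vector (ait = sum_f tanh(x W + b)_f * align_f), producing
# one scalar weight per timestep, whereas Attention drops the context vector
# and normalises tanh(x W + b) over timesteps per feature before the weighted
# sum. Both reduce (batch, timesteps, features) to (batch, features).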
| 35.172932 | 104 | 0.617999 | 549 | 4,678 | 5.071038 | 0.132969 | 0.075431 | 0.03556 | 0.030532 | 0.82579 | 0.818966 | 0.818966 | 0.797773 | 0.74102 | 0.720187 | 0 | 0.006499 | 0.2764 | 4,678 | 132 | 105 | 35.439394 | 0.815953 | 0 | 0 | 0.679612 | 0 | 0 | 0.015177 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 1 | 0.097087 | false | 0 | 0.029126 | 0.038835 | 0.203884 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40898d3ddd8d94cacf42fb634042ca56e7a2d9b5 | 41,242 | py | Python | chainlibpy/generated/tendermint/types/types_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null | chainlibpy/generated/tendermint/types/types_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null | chainlibpy/generated/tendermint/types/types_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null |
'Generated protocol buffer code.'
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default()
from ...gogoproto import gogo_pb2 as gogoproto_dot_gogo__pb2
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
from ...tendermint.crypto import proof_pb2 as tendermint_dot_crypto_dot_proof__pb2
from ...tendermint.version import types_pb2 as tendermint_dot_version_dot_types__pb2
from ...tendermint.types import validator_pb2 as tendermint_dot_types_dot_validator__pb2
DESCRIPTOR = _descriptor.FileDescriptor(name='tendermint/types/types.proto', package='tendermint.types', syntax='proto3', serialized_options=b'Z7github.com/tendermint/tendermint/proto/tendermint/types', create_key=_descriptor._internal_create_key, serialized_pb=b'\n\x1ctendermint/types/types.proto\x12\x10tendermint.types\x1a\x14gogoproto/gogo.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1dtendermint/crypto/proof.proto\x1a\x1etendermint/version/types.proto\x1a tendermint/types/validator.proto",\n\rPartSetHeader\x12\r\n\x05total\x18\x01 \x01(\r\x12\x0c\n\x04hash\x18\x02 \x01(\x0c"S\n\x04Part\x12\r\n\x05index\x18\x01 \x01(\r\x12\r\n\x05bytes\x18\x02 \x01(\x0c\x12-\n\x05proof\x18\x03 \x01(\x0b2\x18.tendermint.crypto.ProofB\x04\xc8\xde\x1f\x00"W\n\x07BlockID\x12\x0c\n\x04hash\x18\x01 \x01(\x0c\x12>\n\x0fpart_set_header\x18\x02 \x01(\x0b2\x1f.tendermint.types.PartSetHeaderB\x04\xc8\xde\x1f\x00"\xb3\x03\n\x06Header\x124\n\x07version\x18\x01 \x01(\x0b2\x1d.tendermint.version.ConsensusB\x04\xc8\xde\x1f\x00\x12\x1d\n\x08chain_id\x18\x02 \x01(\tB\x0b\xe2\xde\x1f\x07ChainID\x12\x0e\n\x06height\x18\x03 \x01(\x03\x122\n\x04time\x18\x04 \x01(\x0b2\x1a.google.protobuf.TimestampB\x08\xc8\xde\x1f\x00\x90\xdf\x1f\x01\x126\n\rlast_block_id\x18\x05 \x01(\x0b2\x19.tendermint.types.BlockIDB\x04\xc8\xde\x1f\x00\x12\x18\n\x10last_commit_hash\x18\x06 \x01(\x0c\x12\x11\n\tdata_hash\x18\x07 \x01(\x0c\x12\x17\n\x0fvalidators_hash\x18\x08 \x01(\x0c\x12\x1c\n\x14next_validators_hash\x18\t \x01(\x0c\x12\x16\n\x0econsensus_hash\x18\n \x01(\x0c\x12\x10\n\x08app_hash\x18\x0b \x01(\x0c\x12\x19\n\x11last_results_hash\x18\x0c \x01(\x0c\x12\x15\n\revidence_hash\x18\r \x01(\x0c\x12\x18\n\x10proposer_address\x18\x0e \x01(\x0c"\x13\n\x04Data\x12\x0b\n\x03txs\x18\x01 \x03(\x0c"\x92\x02\n\x04Vote\x12-\n\x04type\x18\x01 \x01(\x0e2\x1f.tendermint.types.SignedMsgType\x12\x0e\n\x06height\x18\x02 \x01(\x03\x12\r\n\x05round\x18\x03 \x01(\x05\x12<\n\x08block_id\x18\x04 \x01(\x0b2\x19.tendermint.types.BlockIDB\x0f\xc8\xde\x1f\x00\xe2\xde\x1f\x07BlockID\x127\n\ttimestamp\x18\x05 \x01(\x0b2\x1a.google.protobuf.TimestampB\x08\xc8\xde\x1f\x00\x90\xdf\x1f\x01\x12\x19\n\x11validator_address\x18\x06 \x01(\x0c\x12\x17\n\x0fvalidator_index\x18\x07 \x01(\x05\x12\x11\n\tsignature\x18\x08 \x01(\x0c"\x9c\x01\n\x06Commit\x12\x0e\n\x06height\x18\x01 \x01(\x03\x12\r\n\x05round\x18\x02 \x01(\x05\x12<\n\x08block_id\x18\x03 \x01(\x0b2\x19.tendermint.types.BlockIDB\x0f\xc8\xde\x1f\x00\xe2\xde\x1f\x07BlockID\x125\n\nsignatures\x18\x04 \x03(\x0b2\x1b.tendermint.types.CommitSigB\x04\xc8\xde\x1f\x00"\xa8\x01\n\tCommitSig\x124\n\rblock_id_flag\x18\x01 \x01(\x0e2\x1d.tendermint.types.BlockIDFlag\x12\x19\n\x11validator_address\x18\x02 \x01(\x0c\x127\n\ttimestamp\x18\x03 \x01(\x0b2\x1a.google.protobuf.TimestampB\x08\xc8\xde\x1f\x00\x90\xdf\x1f\x01\x12\x11\n\tsignature\x18\x04 \x01(\x0c"\xf5\x01\n\x08Proposal\x12-\n\x04type\x18\x01 \x01(\x0e2\x1f.tendermint.types.SignedMsgType\x12\x0e\n\x06height\x18\x02 \x01(\x03\x12\r\n\x05round\x18\x03 \x01(\x05\x12\x11\n\tpol_round\x18\x04 \x01(\x05\x12<\n\x08block_id\x18\x05 \x01(\x0b2\x19.tendermint.types.BlockIDB\x0f\xe2\xde\x1f\x07BlockID\xc8\xde\x1f\x00\x127\n\ttimestamp\x18\x06 \x01(\x0b2\x1a.google.protobuf.TimestampB\x08\xc8\xde\x1f\x00\x90\xdf\x1f\x01\x12\x11\n\tsignature\x18\x07 \x01(\x0c"b\n\x0cSignedHeader\x12(\n\x06header\x18\x01 \x01(\x0b2\x18.tendermint.types.Header\x12(\n\x06commit\x18\x02 \x01(\x0b2\x18.tendermint.types.Commit"z\n\nLightBlock\x125\n\rsigned_header\x18\x01 
\x01(\x0b2\x1e.tendermint.types.SignedHeader\x125\n\rvalidator_set\x18\x02 \x01(\x0b2\x1e.tendermint.types.ValidatorSet"\x9e\x01\n\tBlockMeta\x12<\n\x08block_id\x18\x01 \x01(\x0b2\x19.tendermint.types.BlockIDB\x0f\xe2\xde\x1f\x07BlockID\xc8\xde\x1f\x00\x12\x12\n\nblock_size\x18\x02 \x01(\x03\x12.\n\x06header\x18\x03 \x01(\x0b2\x18.tendermint.types.HeaderB\x04\xc8\xde\x1f\x00\x12\x0f\n\x07num_txs\x18\x04 \x01(\x03"S\n\x07TxProof\x12\x11\n\troot_hash\x18\x01 \x01(\x0c\x12\x0c\n\x04data\x18\x02 \x01(\x0c\x12\'\n\x05proof\x18\x03 \x01(\x0b2\x18.tendermint.crypto.Proof*\xd7\x01\n\x0bBlockIDFlag\x121\n\x15BLOCK_ID_FLAG_UNKNOWN\x10\x00\x1a\x16\x8a\x9d \x12BlockIDFlagUnknown\x12/\n\x14BLOCK_ID_FLAG_ABSENT\x10\x01\x1a\x15\x8a\x9d \x11BlockIDFlagAbsent\x12/\n\x14BLOCK_ID_FLAG_COMMIT\x10\x02\x1a\x15\x8a\x9d \x11BlockIDFlagCommit\x12)\n\x11BLOCK_ID_FLAG_NIL\x10\x03\x1a\x12\x8a\x9d \x0eBlockIDFlagNil\x1a\x08\xa8\xa4\x1e\x01\x88\xa3\x1e\x00*\xd7\x01\n\rSignedMsgType\x12,\n\x17SIGNED_MSG_TYPE_UNKNOWN\x10\x00\x1a\x0f\x8a\x9d \x0bUnknownType\x12,\n\x17SIGNED_MSG_TYPE_PREVOTE\x10\x01\x1a\x0f\x8a\x9d \x0bPrevoteType\x120\n\x19SIGNED_MSG_TYPE_PRECOMMIT\x10\x02\x1a\x11\x8a\x9d \rPrecommitType\x12.\n\x18SIGNED_MSG_TYPE_PROPOSAL\x10 \x1a\x10\x8a\x9d \x0cProposalType\x1a\x08\xa8\xa4\x1e\x01\x88\xa3\x1e\x00B9Z7github.com/tendermint/tendermint/proto/tendermint/typesb\x06proto3', dependencies=[gogoproto_dot_gogo__pb2.DESCRIPTOR, google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, tendermint_dot_crypto_dot_proof__pb2.DESCRIPTOR, tendermint_dot_version_dot_types__pb2.DESCRIPTOR, tendermint_dot_types_dot_validator__pb2.DESCRIPTOR])
_BLOCKIDFLAG = _descriptor.EnumDescriptor(name='BlockIDFlag', full_name='tendermint.types.BlockIDFlag', filename=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key, values=[_descriptor.EnumValueDescriptor(name='BLOCK_ID_FLAG_UNKNOWN', index=0, number=0, serialized_options=b'\x8a\x9d \x12BlockIDFlagUnknown', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='BLOCK_ID_FLAG_ABSENT', index=1, number=1, serialized_options=b'\x8a\x9d \x11BlockIDFlagAbsent', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='BLOCK_ID_FLAG_COMMIT', index=2, number=2, serialized_options=b'\x8a\x9d \x11BlockIDFlagCommit', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='BLOCK_ID_FLAG_NIL', index=3, number=3, serialized_options=b'\x8a\x9d \x0eBlockIDFlagNil', type=None, create_key=_descriptor._internal_create_key)], containing_type=None, serialized_options=b'\xa8\xa4\x1e\x01\x88\xa3\x1e\x00', serialized_start=2207, serialized_end=2422)
_sym_db.RegisterEnumDescriptor(_BLOCKIDFLAG)
BlockIDFlag = enum_type_wrapper.EnumTypeWrapper(_BLOCKIDFLAG)
_SIGNEDMSGTYPE = _descriptor.EnumDescriptor(name='SignedMsgType', full_name='tendermint.types.SignedMsgType', filename=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key, values=[_descriptor.EnumValueDescriptor(name='SIGNED_MSG_TYPE_UNKNOWN', index=0, number=0, serialized_options=b'\x8a\x9d \x0bUnknownType', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='SIGNED_MSG_TYPE_PREVOTE', index=1, number=1, serialized_options=b'\x8a\x9d \x0bPrevoteType', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='SIGNED_MSG_TYPE_PRECOMMIT', index=2, number=2, serialized_options=b'\x8a\x9d \rPrecommitType', type=None, create_key=_descriptor._internal_create_key), _descriptor.EnumValueDescriptor(name='SIGNED_MSG_TYPE_PROPOSAL', index=3, number=32, serialized_options=b'\x8a\x9d \x0cProposalType', type=None, create_key=_descriptor._internal_create_key)], containing_type=None, serialized_options=b'\xa8\xa4\x1e\x01\x88\xa3\x1e\x00', serialized_start=2425, serialized_end=2640)
_sym_db.RegisterEnumDescriptor(_SIGNEDMSGTYPE)
SignedMsgType = enum_type_wrapper.EnumTypeWrapper(_SIGNEDMSGTYPE)
BLOCK_ID_FLAG_UNKNOWN = 0
BLOCK_ID_FLAG_ABSENT = 1
BLOCK_ID_FLAG_COMMIT = 2
BLOCK_ID_FLAG_NIL = 3
SIGNED_MSG_TYPE_UNKNOWN = 0
SIGNED_MSG_TYPE_PREVOTE = 1
SIGNED_MSG_TYPE_PRECOMMIT = 2
SIGNED_MSG_TYPE_PROPOSAL = 32
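# Usage sketch (hedged): once the message classes are registered further down
# in this generated module via _reflection.GeneratedProtocolMessageType, the
# descriptors above back ordinary protobuf messages, e.g.:
#
#   header = PartSetHeader(total=1, hash=b'\x00' * 32)
#   block_id = BlockID(hash=b'\x01' * 32, part_set_header=header)
#   raw = block_id.SerializeToString()
#   assert BlockID.FromString(raw) == block_id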
_PARTSETHEADER = _descriptor.Descriptor(name='PartSetHeader', full_name='tendermint.types.PartSetHeader', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='total', full_name='tendermint.types.PartSetHeader.total', index=0, number=1, type=13, cpp_type=3, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='hash', full_name='tendermint.types.PartSetHeader.hash', index=1, number=2, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=202, serialized_end=246)
_PART = _descriptor.Descriptor(name='Part', full_name='tendermint.types.Part', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='index', full_name='tendermint.types.Part.index', index=0, number=1, type=13, cpp_type=3, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='bytes', full_name='tendermint.types.Part.bytes', index=1, number=2, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='proof', full_name='tendermint.types.Part.proof', index=2, number=3, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=248, serialized_end=331)
_BLOCKID = _descriptor.Descriptor(name='BlockID', full_name='tendermint.types.BlockID', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='hash', full_name='tendermint.types.BlockID.hash', index=0, number=1, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='part_set_header', full_name='tendermint.types.BlockID.part_set_header', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=333, serialized_end=420)
_HEADER = _descriptor.Descriptor(name='Header', full_name='tendermint.types.Header', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='version', full_name='tendermint.types.Header.version', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='chain_id', full_name='tendermint.types.Header.chain_id', index=1, number=2, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xe2\xde\x1f\x07ChainID', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='height', full_name='tendermint.types.Header.height', index=2, number=3, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='time', full_name='tendermint.types.Header.time', index=3, number=4, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\x90\xdf\x1f\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='last_block_id', full_name='tendermint.types.Header.last_block_id', index=4, number=5, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='last_commit_hash', full_name='tendermint.types.Header.last_commit_hash', index=5, number=6, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='data_hash', full_name='tendermint.types.Header.data_hash', index=6, number=7, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='validators_hash', full_name='tendermint.types.Header.validators_hash', index=7, number=8, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='next_validators_hash', full_name='tendermint.types.Header.next_validators_hash', index=8, number=9, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, 
is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='consensus_hash', full_name='tendermint.types.Header.consensus_hash', index=9, number=10, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='app_hash', full_name='tendermint.types.Header.app_hash', index=10, number=11, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='last_results_hash', full_name='tendermint.types.Header.last_results_hash', index=11, number=12, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='evidence_hash', full_name='tendermint.types.Header.evidence_hash', index=12, number=13, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='proposer_address', full_name='tendermint.types.Header.proposer_address', index=13, number=14, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=423, serialized_end=858)
_DATA = _descriptor.Descriptor(name='Data', full_name='tendermint.types.Data', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='txs', full_name='tendermint.types.Data.txs', index=0, number=1, type=12, cpp_type=9, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=860, serialized_end=879)
_VOTE = _descriptor.Descriptor(name='Vote', full_name='tendermint.types.Vote', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='type', full_name='tendermint.types.Vote.type', index=0, number=1, type=14, cpp_type=8, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='height', full_name='tendermint.types.Vote.height', index=1, number=2, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='round', full_name='tendermint.types.Vote.round', index=2, number=3, type=5, cpp_type=1, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='block_id', full_name='tendermint.types.Vote.block_id', index=3, number=4, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\xe2\xde\x1f\x07BlockID', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='timestamp', full_name='tendermint.types.Vote.timestamp', index=4, number=5, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\x90\xdf\x1f\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='validator_address', full_name='tendermint.types.Vote.validator_address', index=5, number=6, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='validator_index', full_name='tendermint.types.Vote.validator_index', index=6, number=7, type=5, cpp_type=1, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='signature', full_name='tendermint.types.Vote.signature', index=7, number=8, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=882, serialized_end=1156)
_COMMIT = _descriptor.Descriptor(name='Commit', full_name='tendermint.types.Commit', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='height', full_name='tendermint.types.Commit.height', index=0, number=1, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='round', full_name='tendermint.types.Commit.round', index=1, number=2, type=5, cpp_type=1, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='block_id', full_name='tendermint.types.Commit.block_id', index=2, number=3, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\xe2\xde\x1f\x07BlockID', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='signatures', full_name='tendermint.types.Commit.signatures', index=3, number=4, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1159, serialized_end=1315)
_COMMITSIG = _descriptor.Descriptor(name='CommitSig', full_name='tendermint.types.CommitSig', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='block_id_flag', full_name='tendermint.types.CommitSig.block_id_flag', index=0, number=1, type=14, cpp_type=8, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='validator_address', full_name='tendermint.types.CommitSig.validator_address', index=1, number=2, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='timestamp', full_name='tendermint.types.CommitSig.timestamp', index=2, number=3, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\x90\xdf\x1f\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='signature', full_name='tendermint.types.CommitSig.signature', index=3, number=4, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1318, serialized_end=1486)
_PROPOSAL = _descriptor.Descriptor(name='Proposal', full_name='tendermint.types.Proposal', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='type', full_name='tendermint.types.Proposal.type', index=0, number=1, type=14, cpp_type=8, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='height', full_name='tendermint.types.Proposal.height', index=1, number=2, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='round', full_name='tendermint.types.Proposal.round', index=2, number=3, type=5, cpp_type=1, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='pol_round', full_name='tendermint.types.Proposal.pol_round', index=3, number=4, type=5, cpp_type=1, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='block_id', full_name='tendermint.types.Proposal.block_id', index=4, number=5, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xe2\xde\x1f\x07BlockID\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='timestamp', full_name='tendermint.types.Proposal.timestamp', index=5, number=6, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\x90\xdf\x1f\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='signature', full_name='tendermint.types.Proposal.signature', index=6, number=7, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1489, serialized_end=1734)
_SIGNEDHEADER = _descriptor.Descriptor(name='SignedHeader', full_name='tendermint.types.SignedHeader', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='header', full_name='tendermint.types.SignedHeader.header', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='commit', full_name='tendermint.types.SignedHeader.commit', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1736, serialized_end=1834)
_LIGHTBLOCK = _descriptor.Descriptor(name='LightBlock', full_name='tendermint.types.LightBlock', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='signed_header', full_name='tendermint.types.LightBlock.signed_header', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='validator_set', full_name='tendermint.types.LightBlock.validator_set', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1836, serialized_end=1958)
_BLOCKMETA = _descriptor.Descriptor(name='BlockMeta', full_name='tendermint.types.BlockMeta', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='block_id', full_name='tendermint.types.BlockMeta.block_id', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xe2\xde\x1f\x07BlockID\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='block_size', full_name='tendermint.types.BlockMeta.block_size', index=1, number=2, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='header', full_name='tendermint.types.BlockMeta.header', index=2, number=3, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='num_txs', full_name='tendermint.types.BlockMeta.num_txs', index=3, number=4, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=1961, serialized_end=2119)
_TXPROOF = _descriptor.Descriptor(name='TxProof', full_name='tendermint.types.TxProof', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='root_hash', full_name='tendermint.types.TxProof.root_hash', index=0, number=1, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='data', full_name='tendermint.types.TxProof.data', index=1, number=2, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='proof', full_name='tendermint.types.TxProof.proof', index=2, number=3, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=2121, serialized_end=2204)
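# Link message- and enum-typed fields to their target descriptors now that every descriptor has been constructed.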
_PART.fields_by_name['proof'].message_type = tendermint_dot_crypto_dot_proof__pb2._PROOF
_BLOCKID.fields_by_name['part_set_header'].message_type = _PARTSETHEADER
_HEADER.fields_by_name['version'].message_type = tendermint_dot_version_dot_types__pb2._CONSENSUS
_HEADER.fields_by_name['time'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_HEADER.fields_by_name['last_block_id'].message_type = _BLOCKID
_VOTE.fields_by_name['type'].enum_type = _SIGNEDMSGTYPE
_VOTE.fields_by_name['block_id'].message_type = _BLOCKID
_VOTE.fields_by_name['timestamp'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_COMMIT.fields_by_name['block_id'].message_type = _BLOCKID
_COMMIT.fields_by_name['signatures'].message_type = _COMMITSIG
_COMMITSIG.fields_by_name['block_id_flag'].enum_type = _BLOCKIDFLAG
_COMMITSIG.fields_by_name['timestamp'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_PROPOSAL.fields_by_name['type'].enum_type = _SIGNEDMSGTYPE
_PROPOSAL.fields_by_name['block_id'].message_type = _BLOCKID
_PROPOSAL.fields_by_name['timestamp'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_SIGNEDHEADER.fields_by_name['header'].message_type = _HEADER
_SIGNEDHEADER.fields_by_name['commit'].message_type = _COMMIT
_LIGHTBLOCK.fields_by_name['signed_header'].message_type = _SIGNEDHEADER
_LIGHTBLOCK.fields_by_name['validator_set'].message_type = tendermint_dot_types_dot_validator__pb2._VALIDATORSET
_BLOCKMETA.fields_by_name['block_id'].message_type = _BLOCKID
_BLOCKMETA.fields_by_name['header'].message_type = _HEADER
_TXPROOF.fields_by_name['proof'].message_type = tendermint_dot_crypto_dot_proof__pb2._PROOF
DESCRIPTOR.message_types_by_name['PartSetHeader'] = _PARTSETHEADER
DESCRIPTOR.message_types_by_name['Part'] = _PART
DESCRIPTOR.message_types_by_name['BlockID'] = _BLOCKID
DESCRIPTOR.message_types_by_name['Header'] = _HEADER
DESCRIPTOR.message_types_by_name['Data'] = _DATA
DESCRIPTOR.message_types_by_name['Vote'] = _VOTE
DESCRIPTOR.message_types_by_name['Commit'] = _COMMIT
DESCRIPTOR.message_types_by_name['CommitSig'] = _COMMITSIG
DESCRIPTOR.message_types_by_name['Proposal'] = _PROPOSAL
DESCRIPTOR.message_types_by_name['SignedHeader'] = _SIGNEDHEADER
DESCRIPTOR.message_types_by_name['LightBlock'] = _LIGHTBLOCK
DESCRIPTOR.message_types_by_name['BlockMeta'] = _BLOCKMETA
DESCRIPTOR.message_types_by_name['TxProof'] = _TXPROOF
DESCRIPTOR.enum_types_by_name['BlockIDFlag'] = _BLOCKIDFLAG
DESCRIPTOR.enum_types_by_name['SignedMsgType'] = _SIGNEDMSGTYPE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
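# Build the concrete Python message classes from the descriptors above and register each with the default symbol database.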
PartSetHeader = _reflection.GeneratedProtocolMessageType('PartSetHeader', (_message.Message,), {'DESCRIPTOR': _PARTSETHEADER, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(PartSetHeader)
Part = _reflection.GeneratedProtocolMessageType('Part', (_message.Message,), {'DESCRIPTOR': _PART, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Part)
BlockID = _reflection.GeneratedProtocolMessageType('BlockID', (_message.Message,), {'DESCRIPTOR': _BLOCKID, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(BlockID)
Header = _reflection.GeneratedProtocolMessageType('Header', (_message.Message,), {'DESCRIPTOR': _HEADER, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Header)
Data = _reflection.GeneratedProtocolMessageType('Data', (_message.Message,), {'DESCRIPTOR': _DATA, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Data)
Vote = _reflection.GeneratedProtocolMessageType('Vote', (_message.Message,), {'DESCRIPTOR': _VOTE, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Vote)
Commit = _reflection.GeneratedProtocolMessageType('Commit', (_message.Message,), {'DESCRIPTOR': _COMMIT, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Commit)
CommitSig = _reflection.GeneratedProtocolMessageType('CommitSig', (_message.Message,), {'DESCRIPTOR': _COMMITSIG, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(CommitSig)
Proposal = _reflection.GeneratedProtocolMessageType('Proposal', (_message.Message,), {'DESCRIPTOR': _PROPOSAL, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(Proposal)
SignedHeader = _reflection.GeneratedProtocolMessageType('SignedHeader', (_message.Message,), {'DESCRIPTOR': _SIGNEDHEADER, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(SignedHeader)
LightBlock = _reflection.GeneratedProtocolMessageType('LightBlock', (_message.Message,), {'DESCRIPTOR': _LIGHTBLOCK, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(LightBlock)
BlockMeta = _reflection.GeneratedProtocolMessageType('BlockMeta', (_message.Message,), {'DESCRIPTOR': _BLOCKMETA, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(BlockMeta)
TxProof = _reflection.GeneratedProtocolMessageType('TxProof', (_message.Message,), {'DESCRIPTOR': _TXPROOF, '__module__': 'tendermint.types.types_pb2'})
_sym_db.RegisterMessage(TxProof)
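# Reset the cached options objects; they are rebuilt lazily from the serialized options (including the gogoproto annotations) on first access.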
DESCRIPTOR._options = None
_BLOCKIDFLAG._options = None
_BLOCKIDFLAG.values_by_name['BLOCK_ID_FLAG_UNKNOWN']._options = None
_BLOCKIDFLAG.values_by_name['BLOCK_ID_FLAG_ABSENT']._options = None
_BLOCKIDFLAG.values_by_name['BLOCK_ID_FLAG_COMMIT']._options = None
_BLOCKIDFLAG.values_by_name['BLOCK_ID_FLAG_NIL']._options = None
_SIGNEDMSGTYPE._options = None
_SIGNEDMSGTYPE.values_by_name['SIGNED_MSG_TYPE_UNKNOWN']._options = None
_SIGNEDMSGTYPE.values_by_name['SIGNED_MSG_TYPE_PREVOTE']._options = None
_SIGNEDMSGTYPE.values_by_name['SIGNED_MSG_TYPE_PRECOMMIT']._options = None
_SIGNEDMSGTYPE.values_by_name['SIGNED_MSG_TYPE_PROPOSAL']._options = None
_PART.fields_by_name['proof']._options = None
_BLOCKID.fields_by_name['part_set_header']._options = None
_HEADER.fields_by_name['version']._options = None
_HEADER.fields_by_name['chain_id']._options = None
_HEADER.fields_by_name['time']._options = None
_HEADER.fields_by_name['last_block_id']._options = None
_VOTE.fields_by_name['block_id']._options = None
_VOTE.fields_by_name['timestamp']._options = None
_COMMIT.fields_by_name['block_id']._options = None
_COMMIT.fields_by_name['signatures']._options = None
_COMMITSIG.fields_by_name['timestamp']._options = None
_PROPOSAL.fields_by_name['block_id']._options = None
_PROPOSAL.fields_by_name['timestamp']._options = None
_BLOCKMETA.fields_by_name['block_id']._options = None
_BLOCKMETA.fields_by_name['header']._options = None
| 312.439394 | 5,788 | 0.824718 | 5,932 | 41,242 | 5.397842 | 0.054282 | 0.04772 | 0.076546 | 0.067458 | 0.796877 | 0.722861 | 0.68476 | 0.664366 | 0.63401 | 0.6099 | 0 | 0.043374 | 0.042384 | 41,242 | 131 | 5,789 | 314.824427 | 0.767382 | 0.000752 | 0 | 0 | 1 | 0.738462 | 0.219151 | 0.171116 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
408adaf01422058a2623a04ce6cfc948ad877247 | 10,001 | py | Python | bindings/python/tests/magnet_uri_test.py | evsh/libtorrent | 6be7c5fde24ff3b3942933a18d02592a63f22cc0 | ["BSL-1.0", "BSD-3-Clause"] | 9 | 2019-11-05T16:47:12.000Z | 2022-03-05T15:21:25.000Z | bindings/python/tests/magnet_uri_test.py | zeule/libtorrent | 6be7c5fde24ff3b3942933a18d02592a63f22cc0 | ["BSL-1.0", "BSD-3-Clause"] | null | null | null | bindings/python/tests/magnet_uri_test.py | zeule/libtorrent | 6be7c5fde24ff3b3942933a18d02592a63f22cc0 | ["BSL-1.0", "BSD-3-Clause"] | null | null | null | import tempfile
import unittest
import libtorrent as lt
from . import lib
class ParseMagnetTest(unittest.TestCase):
def setUp(self) -> None:
self.info_hash_sha1 = lib.get_random_bytes(20).hex()
self.info_hash_sha256 = lib.get_random_bytes(32).hex()
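    # Randomly generated digests are enough here: magnet parsing is pure string handling and never needs a real torrent.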
def test_parse_to_atp(self) -> None:
uri = f"magnet:?xt=urn:btih:{self.info_hash_sha1}"
atp = lt.parse_magnet_uri(uri)
self.assertEqual(str(atp.info_hash).lower(), self.info_hash_sha1)
def test_parse_to_atp_error(self) -> None:
with self.assertRaises(RuntimeError):
lt.parse_magnet_uri("magnet:?")
@unittest.skip("https://github.com/arvidn/libtorrent/issues/5992")
def test_parse_dict_deprecated(self) -> None:
uri = f"magnet:?xt=urn:btih:{self.info_hash_sha1}"
with self.assertWarns(DeprecationWarning):
lt.parse_magnet_uri_dict(uri)
def test_parse_dict_sha1(self) -> None:
uri = (
f"magnet:?xt=urn:btih:{self.info_hash_sha1}&"
"dn=test.txt&"
"tr=http://example.com/tr&"
"ws=http://example.com/ws&"
"so=0-2,4&"
"x.pe=0.1.2.3:4567&"
"dht=1.2.3.4:5678"
)
params = lt.parse_magnet_uri_dict(uri)
self.assertEqual(
params,
{
"dht_nodes": [("1.2.3.4", 5678)],
"flags": lt.add_torrent_params_flags_t.default_flags,
"info_hash": bytes.fromhex(self.info_hash_sha1),
"info_hashes": bytes.fromhex(self.info_hash_sha1),
"name": "test.txt",
"save_path": "",
"storage_mode": lt.storage_mode_t.storage_mode_sparse,
"trackers": ["http://example.com/tr"],
"url": "",
},
)
# The dict is intended to be usable as argument to session.add_torrent()
session = lt.session(lib.get_isolated_settings())
with tempfile.TemporaryDirectory() as path:
params["save_path"] = path
with self.assertWarns(DeprecationWarning):
handle = session.add_torrent(params)
self.assertEqual(str(handle.info_hashes().v1), self.info_hash_sha1)
self.assertEqual(handle.status().name, "test.txt")
self.assertEqual(
[t["url"] for t in handle.trackers()], ["http://example.com/tr"]
)
# self.assertEqual(handle.url_seeds(), ["http://example.com/ws"])
# self.assertEqual(handle.file_priorities(), [4, 4, 4, 0, 4])
# Can't test peers or dht
@unittest.skip("need to parse more params")
def test_parse_dict_sha1_broken(self) -> None:
uri = (
f"magnet:?xt=urn:btih:{self.info_hash_sha1}&"
"dn=test.txt&"
"tr=http://example.com/tr&"
"ws=http://example.com/ws&"
"so=0-2,4&"
"x.pe=0.1.2.3:4567&"
"dht=1.2.3.4:5678"
)
params = lt.parse_magnet_uri_dict(uri)
self.assertEqual(
params,
{
"dht_nodes": [("1.2.3.4", 5678)],
"file_priorities": [4, 4, 4, 0, 4],
"flags": lt.add_torrent_params_flags_t.default_flags,
"info_hash": bytes.fromhex(self.info_hash_sha1),
"info_hashes": bytes.fromhex(self.info_hash_sha1),
"name": "test.txt",
"save_path": "",
"storage_mode": lt.storage_mode_t.storage_mode_sparse,
"trackers": ["http://example.com/tr"],
"url": "",
"url_seeds": ["http://example.com/ws"],
},
)
# The dict is intended to be usable as argument to session.add_torrent()
session = lt.session(lib.get_isolated_settings())
with tempfile.TemporaryDirectory() as path:
params["save_path"] = path
handle = session.add_torrent(params)
self.assertEqual(str(handle.info_hashes().v1), self.info_hash_sha1)
self.assertEqual(handle.name(), "test.txt")
self.assertEqual(
[t["url"] for t in handle.trackers()], ["http://example.com/tr"]
)
self.assertEqual(handle.url_seeds(), ["http://example.com/ws"])
self.assertEqual(handle.file_priorities(), [4, 4, 4, 0, 4])
# Can't test peers or dht
def test_parse_dict_sha256(self) -> None:
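        # "1220" is the multihash prefix for SHA2-256 (code 0x12, 32-byte digest 0x20) used by v2 "btmh" magnet links.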
uri = (
f"magnet:?xt=urn:btmh:1220{self.info_hash_sha256}&"
"dn=test.txt&"
"tr=http://example.com/tr&"
"ws=http://example.com/ws&"
"so=0-2,4&"
"x.pe=0.1.2.3:4567&"
"dht=1.2.3.4:5678"
)
params = lt.parse_magnet_uri_dict(uri)
self.assertEqual(
params,
{
"dht_nodes": [("1.2.3.4", 5678)],
"flags": lt.add_torrent_params_flags_t.default_flags,
"info_hash": bytes.fromhex(self.info_hash_sha256)[:20],
"info_hashes": bytes.fromhex(self.info_hash_sha256),
"name": "test.txt",
"save_path": "",
"storage_mode": lt.storage_mode_t.storage_mode_sparse,
"trackers": ["http://example.com/tr"],
"url": "",
},
)
# The dict is intended to be usable as argument to session.add_torrent()
session = lt.session(lib.get_isolated_settings())
with tempfile.TemporaryDirectory() as path:
params["save_path"] = path
with self.assertWarns(DeprecationWarning):
handle = session.add_torrent(params)
# self.assertEqual(str(handle.info_hashes().v2), self.info_hash_sha256)
self.assertEqual(handle.status().name, "test.txt")
self.assertEqual(
[t["url"] for t in handle.trackers()], ["http://example.com/tr"]
)
# self.assertEqual(handle.url_seeds(), ["http://example.com/ws"])
# self.assertEqual(handle.file_priorities(), [4, 4, 4, 0, 4])
# Can't test peers or dht
@unittest.skip("need to parse more params")
def test_parse_dict_sha256_broken(self) -> None:
uri = (
f"magnet:?xt=urn:btmh:1220{self.info_hash_sha256}&"
"dn=test.txt&"
"tr=http://example.com/tr&"
"ws=http://example.com/ws&"
"so=0-2,4&"
"x.pe=0.1.2.3:4567&"
"dht=1.2.3.4:5678"
)
params = lt.parse_magnet_uri_dict(uri)
self.assertEqual(
params,
{
"dht_nodes": [("1.2.3.4", 5678)],
"file_priorities": [4, 4, 4, 0, 4],
"flags": lt.add_torrent_params_flags_t.default_flags,
"info_hash": bytes.fromhex(self.info_hash_sha256)[:20],
"info_hashes": bytes.fromhex(self.info_hash_sha256),
"name": "test.txt",
"peers": [("0.1.2.3", 4567)],
"save_path": "",
"storage_mode": lt.storage_mode_t.storage_mode_sparse,
"trackers": ["http://example.com/tr"],
"url": "",
"url_seeds": "http://example.com/ws",
},
)
# The dict is intended to be usable as argument to session.add_torrent()
session = lt.session(lib.get_isolated_settings())
with tempfile.TemporaryDirectory() as path:
params["save_path"] = path
handle = session.add_torrent(params)
self.assertEqual(
str(handle.info_hashes().v2), # type: ignore
self.info_hash_sha256,
)
self.assertEqual(handle.name(), "test.txt")
self.assertEqual(
[t["url"] for t in handle.trackers()], ["http://example.com/tr"]
)
self.assertEqual(handle.url_seeds(), ["http://example.com/ws"])
self.assertEqual(handle.file_priorities(), [4, 4, 4, 0, 4])
# Can't test peers or dht
def test_parse_dict_error(self) -> None:
with self.assertRaises(RuntimeError):
lt.parse_magnet_uri_dict("magnet:?")
class AddMagnetUriTest(unittest.TestCase):
def setUp(self) -> None:
self.session = lt.session(lib.get_isolated_settings())
self.dir = tempfile.TemporaryDirectory()
self.info_hash_sha1 = lib.get_random_bytes(20).hex()
def tearDown(self) -> None:
lib.cleanup_with_windows_fix(self.dir, timeout=5)
def test_error(self) -> None:
with self.assertWarns(DeprecationWarning):
with self.assertRaises(RuntimeError):
lt.add_magnet_uri(self.session, "magnet:?", {})
def test_add(self) -> None:
uri = f"magnet:?xt=urn:btih:{self.info_hash_sha1}"
with self.assertWarns(DeprecationWarning):
handle = lt.add_magnet_uri(self.session, uri, {"save_path": self.dir.name})
self.assertEqual(str(handle.info_hashes().v1), self.info_hash_sha1)
class MakeMagnetUriTest(unittest.TestCase):
def setUp(self) -> None:
self.info_hash_sha1 = lib.get_random_bytes(20).hex()
def test_torrent_info(self) -> None:
ti = lt.torrent_info(lt.sha1_hash(bytes.fromhex(self.info_hash_sha1)))
uri = lt.make_magnet_uri(ti)
self.assertEqual(uri, f"magnet:?xt=urn:btih:{self.info_hash_sha1}")
def test_torrent_handle(self) -> None:
atp = lt.add_torrent_params()
atp.info_hashes = lt.info_hash_t(
lt.sha1_hash(bytes.fromhex(self.info_hash_sha1))
)
session = lt.session(lib.get_isolated_settings())
with tempfile.TemporaryDirectory() as path:
atp.save_path = path
handle = session.add_torrent(atp)
uri = lt.make_magnet_uri(handle)
self.assertEqual(uri, f"magnet:?xt=urn:btih:{self.info_hash_sha1}")
| 40.489879 | 87 | 0.559244 | 1,226 | 10,001 | 4.365416 | 0.103589 | 0.052317 | 0.065022 | 0.059791 | 0.886958 | 0.855194 | 0.835949 | 0.808857 | 0.806614 | 0.792414 | 0 | 0.033196 | 0.29817 | 10,001 | 246 | 88 | 40.654472 | 0.729306 | 0.070993 | 0 | 0.677885 | 0 | 0 | 0.179819 | 0.041505 | 0 | 0 | 0 | 0 | 0.149038 | 1 | 0.076923 | false | 0 | 0.019231 | 0 | 0.110577 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40d47771cf4fedd8ab38f717dbc5e79b4675e572 | 185 | py | Python | python/ray/job_submission/__init__.py | mgelbart/ray | 4cec2286572e368a4bd64aae467751a384eff62d | ["Apache-2.0"] | 22 | 2018-05-08T05:52:34.000Z | 2020-04-01T10:09:55.000Z | python/ray/job_submission/__init__.py | mgelbart/ray | 4cec2286572e368a4bd64aae467751a384eff62d | ["Apache-2.0"] | 73 | 2021-09-25T07:11:39.000Z | 2022-03-26T07:10:59.000Z | python/ray/job_submission/__init__.py | mgelbart/ray | 4cec2286572e368a4bd64aae467751a384eff62d | ["Apache-2.0"] | 10 | 2018-04-27T10:50:59.000Z | 2020-02-24T02:41:43.000Z | from ray.dashboard.modules.job.sdk import JobSubmissionClient
from ray.dashboard.modules.job.common import JobStatus, JobInfo
__all__ = ["JobSubmissionClient", "JobStatus", "JobInfo"]
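# Re-exported at package level so callers can use ray.job_submission.JobSubmissionClient without importing dashboard internals directly.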
| 37 | 63 | 0.810811 | 21 | 185 | 6.952381 | 0.571429 | 0.09589 | 0.219178 | 0.315068 | 0.356164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 185 | 4 | 64 | 46.25 | 0.858824 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9068a9c175055a5f5321007d2df7665ba414cff7 | 5,534 | py | Python | tests/test.py | Mambix/ChaosDuino | b4b95b2f0d47a60d165145ab02af1fa0ac3239aa | ["MIT"] | null | null | null | tests/test.py | Mambix/ChaosDuino | b4b95b2f0d47a60d165145ab02af1fa0ac3239aa | ["MIT"] | null | null | null | tests/test.py | Mambix/ChaosDuino | b4b95b2f0d47a60d165145ab02af1fa0ac3239aa | ["MIT"] | null | null | null | try:
from StringIO import StringIO
except ImportError:
from io import StringIO
import unittest
import sys
import time
from serial import Serial
PORT = '/dev/ttyACM0'
class TestSettings(unittest.TestCase):
def setUp(self):
self.held, sys.stdout = sys.stdout, StringIO()
def test_00_echo(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('AT\r')
time.sleep(0.1)
response = ser.readline()
ser.close()
self.assertEqual(response, 'AT\r\n')
def test_01_version(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATV\r')
time.sleep(0.1)
response = ser.readline()
ser.close()
self.assertEqual(response, 'ChaosDuino v0.1.3 for PCB rev3 running...\r\n')
def test_02_LED(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATLR0\r')
ser.write('ATLG0\r')
ser.write('ATLB0\r')
j = 0
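        # Blink the RGB status LED via ATL<colour><0|1> commands; each colour toggles at a different rate across the 50 iterations.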
for i in range(50):
if j & 1 == 1:
ser.write('ATLB1\r')
if j & 1 == 0:
ser.write('ATLB0\r')
if j & 2 == 0:
ser.write('ATLG1\r')
if j % 4 != 0:
ser.write('ATLG0\r')
if j % 8 == 0:
ser.write('ATLR1\r')
if j % 8 != 0:
ser.write('ATLR0\r')
j+=1
time.sleep(0.1)
ser.write('ATLR0\r')
ser.write('ATLG0\r')
ser.write('ATLB0\r')
ser.close()
def test_03_ATE(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATE?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATE0\r\n')
ser.write('ATE1\r')
time.sleep(0.1)
ser.write('ATE?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATE1\r\n')
ser.close()
def test_04_ATP(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATP?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATP0\r\n')
for i in range(3):
ser.write('ATP{}\r'.format(i))
time.sleep(0.1)
ser.write('ATP?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATP{}\r\n'.format(i))
ser.write('ATP0\r')
time.sleep(0.1)
ser.write('ATP?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATP0\r\n')
ser.close()
def test_05_ATM(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATM?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATM1\r\n')
for i in range(8):
ser.write('ATM{}\r'.format(i))
time.sleep(0.1)
ser.write('ATM?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATM{}\r\n'.format(i))
ser.write('ATM1\r')
time.sleep(0.1)
ser.write('ATM?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATM1\r\n')
ser.close()
def test_06_OK(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATOK?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'ATOK0\r\n')
ser.close()
def test_07_POOL(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('ATPOOL?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, '10000\r\n')
ser.close()
def test_08_BIP39(self):
ser=Serial(PORT, 2000000, timeout=3)
time.sleep(0.1)
ser.write('BIP39W?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'BIP39W24\r\n')
for i in [15, 18, 21, 24]:
ser.write('BIP39W{}\r'.format(i))
time.sleep(0.1)
ser.write('BIP39W?\r')
time.sleep(0.1)
response = ser.readline()
self.assertEqual(response, 'BIP39W{}\r\n'.format(i))
ser.close()
# class TestData(unittest.TestCase):
# def setUp(self):
# self.held, sys.stdout = sys.stdout, StringIO()
# def test_dual_classes(self):
# jam1 = Jam({'issue': {'rmamba': 0.5}})
# jam2 = Jam({'issue': {'ledi_mambix': 1.5}})
# self.assertEqual(jam1.jam, {'issue': {'rmamba': 0.5}})
# self.assertFalse(jam1.modified, 'Should not be modified!!!')
# self.assertEqual(jam2.jam, {'issue': {'ledi_mambix': 1.5}})
# self.assertFalse(jam2.modified, 'Should NOT be modified!!!')
# class TestEntropy(unittest.TestCase):
# def setUp(self):
# self.held, sys.stdout = sys.stdout, StringIO()
# def test_dual_classes(self):
# jam1 = Jam({'issue': {'rmamba': 0.5}})
# jam2 = Jam({'issue': {'ledi_mambix': 1.5}})
# self.assertEqual(jam1.jam, {'issue': {'rmamba': 0.5}})
# self.assertFalse(jam1.modified, 'Should not be modified!!!')
# self.assertEqual(jam2.jam, {'issue': {'ledi_mambix': 1.5}})
# self.assertFalse(jam2.modified, 'Should NOT be modified!!!')
| 29.280423 | 83 | 0.526744 | 728 | 5,534 | 3.968407 | 0.151099 | 0.088612 | 0.103842 | 0.114226 | 0.817238 | 0.810315 | 0.762894 | 0.752163 | 0.752163 | 0.743856 | 0 | 0.065274 | 0.307915 | 5,534 | 188 | 84 | 29.43617 | 0.689034 | 0.185399 | 0 | 0.602837 | 0 | 0 | 0.086472 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 1 | 0.070922 | false | 0 | 0.049645 | 0 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
907ecfcfb4010949c3cdd2644fb88bf4f0cba483 | 103 | py | Python | tableschema_to_template/__init__.py | mccalluc/tableschema-to-template | 6a206fcaf29227491259502f7a7743ca8d2a710a | ["MIT"] | 5 | 2020-12-05T18:53:54.000Z | 2021-06-07T15:54:44.000Z | tableschema_to_template/__init__.py | mccalluc/tableschema-to-template | 6a206fcaf29227491259502f7a7743ca8d2a710a | ["MIT"] | 13 | 2020-12-01T19:20:01.000Z | 2021-03-07T03:03:04.000Z | tableschema_to_template/__init__.py | mccalluc/tableschema-to-excel-template | 6a206fcaf29227491259502f7a7743ca8d2a710a | ["MIT"] | 2 | 2021-02-08T15:15:33.000Z | 2021-06-07T15:55:49.000Z | # Export from the top level:
from tableschema_to_template.create_xlsx import create_xlsx # noqa: F401
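# The "noqa: F401" marker silences flake8's unused-import warning; the import exists only to re-export create_xlsx at package level.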
| 34.333333 | 73 | 0.815534 | 16 | 103 | 5 | 0.8125 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033708 | 0.135922 | 103 | 2 | 74 | 51.5 | 0.865169 | 0.359223 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |